EPA/600/2-89/045
August 1989
United States Environmental Protection Agency
Robert S. Kerr Environmental Research Laboratory
Ada, OK 74820
Research and Development
Predicting Subsurface Contaminant Transport and Transformation: Considerations for Model Selection and Field Validation
EPA/600/2-89/045
August 1989
PREDICTING SUBSURFACE CONTAMINANT
TRANSPORT AND TRANSFORMATION:
CONSIDERATIONS FOR
MODEL SELECTION AND FIELD VALIDATION
by
James Weaver and C. G. Enfield
Robert S. Kerr Environmental Research Laboratory
Ada, OK 74820
Scott Yates
USDA, ARS, Univ of California at Riverside
Riverside, CA 92521
David Kreamer
Arizona State University
Tempe, AZ 85287
David White
University of Tennessee
Knoxville, TN 37932-2567
Robert S. Kerr Environmental Research Laboratory
Office of Research and Development
U. S. Environmental Protection Agency
Ada, OK 74820
DISCLAIMER
The information in this document has been funded wholly by the United
States Environmental Protection Agency through in-house research efforts conducted at
the Robert S. Kerr Environmental Research Laboratory. It has been subjected to the
Agency's peer and administrative review, and it has been approved for publication as
an EPA document. Mention of trade names or commercial products does not constitute
endorsement or recommendation for use.
FOREWORD
EPA is charged by Congress to protect the Nation's land, air and water
systems. Under a mandate of national environmental laws focused on air and water
quality, solid waste management and the control of toxic substances, pesticides, noise
and radiation, the Agency strives to formulate and implement actions which lead to a
compatible balance between human activities and the ability of natural systems to
support and nurture life.
The Robert S. Kerr Environmental Research Laboratory is the Agency's
center of expertise for investigation of the soil and subsurface environment. Personnel
at the Laboratory are responsible for management of research programs to: (a)
determine the fate, transport, and transformation rates of pollutants in the soil, the
unsaturated, and the saturated zones of the subsurface environment; (b) define the
processes to be used in characterizing the soil and subsurface environment as a
receptor of pollutants; (c) develop techniques for predicting the effect of pollutants on
ground water, soil, and indigenous organisms; and (d) define and demonstrate the
applicability and limitations of using natural processes, indigenous to the soil and
subsurface environment, for the protection of this resource.
The Environmental Protection Agency uses numerous different mathematical
models to describe the transport and transformation of contaminants in subsurface
environments. It is recognized that these models need to be valid for the assessment or
decision under consideration. An obvious concern is the appropriateness of the model
being used and the confidence that should be placed in the model projections. The
report briefly discusses several of the processes which might be incorporated in a
transport and transformation model and presents considerations which should be
evaluated when implementing a model code at a particular site. The report presents
examples of model validation attempts at a variety of sites but does not attempt to
suggest that any particular model code is "valid".
Clinton W. Hall
Director
Robert S. Kerr Environmental
Research Laboratory
ABSTRACT
Predicting subsurface contaminant transport and transformation requires
mathematical models based on a variety of physical, chemical, and biological
processes. The mathematical model is an attempt to quantitatively describe observed
processes in order to permit systematic forecasting of cause and response
relationships. The mathematical models and the computer codes which solve the
differential equations are approximations of the systems being described. The validity
of these approximations depends on the purposes of the calculation. This report
describes several mass transport processes but does not attempt to describe
transformation processes. The body of the report focuses on considerations for model
implementation and the validity of the implementation.
CONTENTS
DISCLAIMER ii
FOREWORD iii
ABSTRACT iv
INTRODUCTION 1
TRANSPORT PROCESSES 8
Mechanisms of Mass Transport 8
Diffusion 8
Advection 9
Effusion 11
Combinations of transport processes 11
Interphase transfer and transformation 11
Fundamental pore and REV scale phenomena 12
Field scale phenomena 16
Modeling considerations 17
MODEL IMPLEMENTATION 19
Initial Considerations in Model Application 19
Selection of a Model 20
Simple vs. complex models 20
Dimensionality 21
Geologic materials 23
Ground-water flow systems 24
Multiple fluid phases 25
Unsaturated zone 25
Pollutant effects on fluid properties 26
Chemical processes 26
Model Selection 28
Application of the Model to a Specific Site 31
Initial and boundary conditions 31
Field measurement of parameters 32
Hydraulic conductivity 33
Dispersion 35
Unsaturated systems 37
Usage of field data in numerical models 38
Calibration 38
Geostatistics 41
Using geo-statistics for site characterization 43
Range of influence 43
Determining sample numbers 44
Validation 45
Examples 47
1) Alternatives comparison with unvalidated model 48
2) Qualitative validity 48
3) Ground-water flow model evaluation 49
4) Solute transport model evaluation 49
5) Modeling aquifer remediation 50
6) Solute transport under uncertainty 52
Use of Models for Ground-water Contamination Problems 53
LITERATURE CITED 55
INTRODUCTION
The Environmental Protection Agency uses mathematical models describing
the transport and transformation of contaminants in subsurface systems to
systematically make regulatory assessments and environmental decisions. Examples
include the development of regulatory policies, the remediation of waste sites, the
design and evaluation of monitoring and data collection activities, the detection of
pollutant sources, the delineation of aquifer or well-head protection zones, and the
assessment of exposure, hazard, damage, and health risk. These models need to be valid for the
assessment or decision under consideration. An obvious concern is the
appropriateness of the models being used and the confidence that should be placed in
model projections. This report does not attempt to endorse any specific model by
calling it "valid." The report is divided into three parts: a description of selected
processes required in model development; a description of how one might apply
statistical techniques for determining the "validity" of a site characterization; and a
description of how site characteristics are implemented in a mathematical model. It is
hoped that the report will suggest to the reader that a "validated" model will only be
valid for the site where the modeling exercise was performed and only for the purpose
of the modeling exercise.
The choice or creation of a model begins with a conceptualization of system
behavior. Conceptual models are descriptive and based on the modeler's experience
and technical judgment. The conceptual model represents the modeler's understanding
of the system or field site to be modeled. A mathematical model does not substitute for
a solid conceptual model nor does a mathematical model substitute for site
characterization data in a site specific model application. Mathematical model results
should be viewed only as extrapolations of a basic understanding of the system.
Generally, conceptual models require little or no mathematical expertise; they are very
subjective and are required precursors to better-defined mathematical models.
Mathematical models are based on mathematical equations, and may be
very abstract. Mathematical models attempt to quantify specific environmental
processes based on a relatively small set of parameters which represent the system.
They can be further classified as analytical and numerical. Analytical models provide
exact solutions to the mathematically stated problem. Such models can often be solved
using hand-held calculators because data requirements are small due to simplifying
assumptions or idealized conditions. Numerical models, on the other hand, are
approximations of the exact mathematical solution. Numerical models are capable of
addressing more complicated problems, rely less on simplifying assumptions, have
large data requirements and therefore generally require digital computers for solutions.
Various numerical methods commonly in use include finite difference, finite element,
boundary element, and the method of characteristics.
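As a purely illustrative sketch, not taken from this report, the following short program (in Python, with hypothetical parameter values) applies the simplest of these techniques, an explicit finite-difference approximation, to a one-dimensional advection-dispersion problem:

    # Explicit finite-difference sketch of 1-D advection-dispersion,
    # dC/dt = D*d2C/dx2 - v*dC/dx, with hypothetical parameter values.
    import numpy as np

    v, D = 0.1, 0.01          # seepage velocity (m/d) and dispersion coefficient (m2/d)
    dx, dt = 0.1, 0.05        # grid spacing (m) and time step (d), chosen for stability
    x = np.arange(0.0, 10.0, dx)
    c = np.zeros_like(x)
    c[0] = 1.0                # constant-concentration boundary at the source

    for _ in range(int(100.0 / dt)):                       # simulate 100 days
        diff = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
        adv = -v * (c - np.roll(c, 1)) / dx                # upwind difference for advection
        c[1:-1] += dt * (diff[1:-1] + adv[1:-1])
        c[0] = 1.0                                         # re-impose the boundary condition

    print(c[:20])             # concentration profile near the source

Even this small example requires the grid spacing and time step to satisfy stability constraints; production codes based on finite-element or method-of-characteristics formulations handle such numerical issues in more sophisticated ways.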
Physical models are actual scaled-down representations of systems being
studied. Examples are laboratory flow-through columns or sand tanks used to model
soils or aquifer systems. Such models are used to gain insight into contaminant
transport behavior, but a major limitation is that they seldom duplicate actual in situ
behavior.
Physical models are often utilized to test mathematical models by removing as much
spatial variability as possible. Thus, a portion of the model assumptions are tested.
Physical models may be used as research tools to aid in the understanding
of fundamental processes. As such, physical models need to be constructed which
adequately simulate processes under study. Mathematical models which attempt to
describe fundamental processes may then be used to test hypotheses about these
processes as components of natural systems. Both the physical and mathematical
models become more complex as more processes are modelled and interrelationships
of important components within the systems are considered. Common to all models are
assumptions concerning how representative they are of the system and/or processes
they attempt to simulate. Some of the simplifying assumptions commonly employed
are:
1. Addressing only one primary transport or transformation process such
as adsorption, radioactive decay or biological transformation which is
assumed to follow first order kinetics.
2. There exists a direct scaling between the model and the real situation.
3. Linear reversible sorption processes.
4. Reactions are fast enough to neglect kinetics, making local equilibrium
assumptions valid.
5. Isotropic homogeneous medium.
6. Consistency of physical, chemical and biological conditions in the flow
field.
7. One mobile phase.
As more and more of these 'simplifying' assumptions prove significant, model
complexity increases to accommodate additional processes important to contaminant
transport predictions. Model development therefore is dynamic, relying on new
discoveries of important processes previously neglected in the theoretical model or
innovative mathematical techniques which better solve the controlling equations without
introducing anomalies due to the mathematical technique.
Ground-water contaminant transport models based on physical models as
described above are limited not only by mathematical assumptions or simplifications but
also by the possible inadequacy of the physical model to simulate the natural system.
Data generated by the physical laboratory model may validate the mathematical model,
but the processes incorporated into the mathematical construct may not address all the
environmentally significant processes operating in the 'field'. Field evaluations of
models often lead to a better understanding of the processes taking place and point to
additional research needs. As a result of field evaluation of models, new processes are
frequently incorporated in more complex mathematical models. For this reason, both
laboratory validation and field evaluation of contaminant transport models are essential
if the ultimate goal is prediction of transport and fate of environmental pollutants.
Although at a given site there is only one distribution of geologic features, our
ability to characterize those features and the relevant transport parameters associated
with them is severely limited. Also, physical, chemical and biological processes may
alter transport parameters over time; and most modeling techniques do not consider
temporal changes in the physical system. Differences between predicted and observed
concentration distributions are common in well sampled field sites. This observation
leads to the question as to what is predictable about pollutant transport in the
subsurface. Do typical site investigations produce sufficient information to predict
detailed pollutant distributions or arrival and cleanup times? In most cases, the data
available simply do not justify definitive predictions. In many cases, model usage
should be limited to comparison of various alternatives based on reasonable estimates
of the range of parameter values. Using "average" values for parameters frequently
gives poor projections, since many parameters are neither normally distributed nor
represented linearly in the model. Pollutant behavior often depends on extreme values.
In light of the usual and unavoidable uncertainty in characterizing the
subsurface, statistical modeling techniques have been developed. The results of these
models are probabilistic distributions of pollutant concentrations, based on the known or
assumed uncertainty in model input parameters. A statistical approach gives the
analyst the opportunity to judge the likelihood of an outcome. Users of statistical or
deterministic model information should resist the tendency to view computer model
output as "the" answer unless the "answer" is given in terms of a probability, since a
probability conveys only a likelihood.
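As an illustration of this point (a minimal sketch with hypothetical parameter values, not an EPA code), a Monte Carlo treatment of plug-flow travel time might sample hydraulic conductivity from a lognormal distribution rather than use a single average value:

    # Monte Carlo sketch: travel time when hydraulic conductivity (K) is
    # lognormally distributed; all values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    K = rng.lognormal(mean=np.log(1e-5), sigma=1.0, size=10000)   # m/s
    gradient, porosity, distance = 0.005, 0.30, 100.0             # m/m, -, m

    years = distance * porosity / (K * gradient) / 3.15e7         # plug-flow travel times
    print("travel time computed from mean K:",
          round(distance * porosity / (K.mean() * gradient) / 3.15e7), "yr")
    print("median of simulated travel times:", round(np.median(years)), "yr")
    print("5th percentile (fast pathways):  ", round(np.percentile(years, 5)), "yr")

The single "average" estimate conveys nothing about the early arrivals along high-conductivity pathways; the distribution of outcomes, rather than a single number, is the appropriate product of such an analysis.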
Model validation is defined as the comparison of model results with
numerical data independently derived from laboratory experiments or observations of
the environment (ASTM 1984). The objective of model validation is to determine how
well the model's theoretical foundation describes the actual system behavior (van der
Heijde and Park 1986). Validation should not be confused with verification or
calibration. Model verification involves checking computer code for mathematical
accuracy, while calibration of a model is adjusting model coefficients to obtain a 'best fit'
to selected observed data for subsequent application and validation (testing).
There are certain components of the model validation process which are
common to all such efforts, whether the validation is performed using laboratory or field
data. These are:
1. Conceptual Model - understanding of the conceptual model, especially
its basic assumptions, fundamental processes and limitations.
2. Mathematical Model - understanding of mathematical model
development, implementation of the boundary and initial conditions,
especially the significance of simplifications or computational approach
used to obtain the results.
3. Physical Model and Model Parameters - understanding the limitations
of the physical model used and identification of important experimental
parameters.
4. Performance Criteria - identification of data needs, data quality and
establishment of validation performance criteria.
An evaluation of data needs for model input as well as comparison data for
validation efforts is necessary. Data evaluation involves determinations of necessary
data quantity and quality. Choice of statistical treatment for data analysis will depend
on the availability of data and accuracy and precision (or data quality) deemed
necessary for adequate model performance.
The American Society for Testing and Materials (ASTM 1984) established three
hierarchical procedures for the validation of environmental chemical fate models as
follows:
1. Statistical - a statistical validation of time dependent results requires
various "realizations" of the model. The statistical validity could be
demonstrated from many applications of a model to many sites.
2. Deviation - When data are insufficient to statistically determine model
output adequacy, a test of deviation validity can be applied. The
deviation of a predicted value (x) from a measured value (y) can be
determined from a deviation coefficient (d) where
d = (x - y)/x
The model possesses deviation validity if d < E, where E is the
subjectively established validity criterion (a brief numerical example
is given below).
3. Qualitative validity is the criterion often used to judge the validity of a
model. This typically involves a qualitative judgement of the
differences between model predictions and measured values.
Graphical representations of measured vs predicted values are often
used for this purpose.
Not all three would be used at a given site; the choice depends on data availability. These
criteria are not being recommended nor are they the only criteria that might be used, but
are included as a reference set of criteria which have been established.
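As a purely numerical illustration with hypothetical values, if a model predicts x = 125 µg/L at a monitoring point where y = 100 µg/L is measured, then d = (125 - 100)/125 = 0.20, and the prediction would possess deviation validity only if the pre-established criterion E exceeds 0.20.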
As already indicated, mathematical models are simplifications of physical,
chemical and biological systems. The actual validity of a mathematical model depends
on:
1. An understanding of all processes at a site under investigation which
impact the transport and fate of a contaminant. A valid model must
incorporate those processes which are significant and present.
2. An accurate description of the temporal and spatial variability of the
system being modeled.
3. A computer code which accurately describes the solution of a set of
equations written to describe physical, chemical and biological
processes with their appropriate boundary and initial conditions.
4. An implementation of the computer code to a characterized site.
Not all processes are fully understood. Methods do not exist for accurately
characterizing the variability of transport and transformation parameters. Therefore, if
the four parts of model validation listed above are considered essential in the validation
process, it will probably not be possible to obtain a "valid" model in the foreseeable
future.
It is possible to verify a model's code and demonstrate that the algorithms
utilized are rigorous and robust (valid). This type of validation can be accomplished in
an office environment while simultaneously generating data on parameter sensitivity.
But, this does not assure that the processes described by the model are appropriate to
any 'field' or, for that matter, laboratory situation. Even if appropriate processes are
incorporated into a "valid" code, application without adequate knowledge of the spatial
and temporal variability at a site will most likely yield results for which the user will have
little confidence. The user community will need numerous "valid" codes along with
trained and experienced modelers. To select the most appropriate code for the site in
question will require a great deal of judgment along with a modeler's knowledge of
parameter sensitivity and field conditions being modeled. The required precision in the
modeling exercise needs to be based on the availability of input data (knowledge of the
processes and variability at a given site).
Field evaluation presents interesting challenges beyond those found under
laboratory conditions. In the laboratory, physical boundaries are reasonably well
defined. Initial conditions are reasonably well known. Spatial variability is usually
minimized. Scale of the problem is generally small; and statistically, one samples a
reasonably large portion of the total population. In the field, the converse is generally
true. The physical boundary conditions are generally not well defined. The initial
conditions are generally unknown (i.e. the time record of pollutant discharges). The
physical, chemical and biological properties of the system are generally systematically
chaotic. Only a small fraction of the total system is sampled at very limited times. The
sampling procedure often modifies the actual sample, yielding biased data. In addition
to sampling only a small portion of the total environment, interpretation of the data collected
frequently relies on circular logic. For example, a model is used to interpret a pump test to
yield the hydraulic conductivity of the aquifer. This parameter is then used as input data to
the same model to project how the aquifer will behave; the assumptions of the model are
thus used to determine the parameter value, and the derived value is not independent.
Freyberg (1988) presented an interesting exercise addressing the inverse
problem that is faced in field evaluation. He created a relatively simple two-dimensional
hydrogeology and calculated how the system would respond to field testing of this
hypothetical aquifer using a numerically valid code. The test results were given to
several groups of graduate hydrology students who were asked to define the original
hydrogeologic system using "calibration" techniques with the same code. There was a
wide range in the generated hydrogeologies and there were significant differences in
the precision of the fit to the given data. None of the student-calculated hydrogeologies
matched the instructor's original description. This suggests that when performing a field
"validation," a "good" match between model projections and field observations does not
necessarily mean the model describes the field nor does it mean the model accurately
describes the processes taking place at the particular field site. Actual field situations
are infinitely more complex than the system defined by Freyberg. This example
considers only the hydrologic response of the aquifer. Even more difficulties would be
encountered in calibration for solute transport. The field "validation" problem in effect
becomes two linked problems, one being the validity of the site characterization and the
other the ability of a modeler to implement this site characterization into a numerical
code which will yield results with sufficient accuracy and precision for the assessment or
decision at hand.
We too often compliment ourselves on the complexity of our mathematical
algorithms and their ability to consider system variability, with little concern for the
availability or cost of obtaining the input data required to implement these codes. A model
must be chosen which is appropriate to the level of data that is available and the
purpose of the modeling exercise. Analytic and semi-analytic models are often valid for
making regulatory decisions when little or no site-specific data are available. These screening
decisions often are used to rank one chemical against another. These same
models may not be valid when attempting to forecast the actual environmental fate of a
chemical at a specific site. Care must be exercised not to adopt "validated" models, to the
exclusion of alternative methods, without considering the purpose of the modeling.
TRANSPORT PROCESSES
Mechanisms of Mass Transport
In the 19th century, Thomas Graham described the major mechanisms of
transport (Mason and Evans, 1969). These basic categories still can be used to
describe migration. These categories are: diffusion (also called Brownian movement),
advection (also referred to as forced diffusion, laminar viscous flow, bulk flow,
transpiration, and convection), and effusion (also called free molecule or Knudsen flow).
These mechanisms are quite different both physically and in their mathematical description;
therefore, modeling of contaminant transport must make a clear distinction
between the processes and identify which are dominant in any given site-specific
application.
Diffusion
Diffusion is a process whereby elements in a single phase spatially
equilibrate. This process arises from random molecular motions; each molecule is
constantly undergoing collision with other molecules, and the result is constant motion
with many changes in direction and no preferred direction of motion. Although this
motion is random, there is a net transfer of molecules of one compound from regions of
high concentration to low concentration. The net transfer of molecules is predictable
whereas the direction of any individual molecule at any moment is not. Diffusion is a
spreading out or scattering of molecules and may be divided into the categories of
ordinary diffusion, thermal diffusion, and particle diffusion.
Ordinary diffusion is a process in which the components of any phase will
eventually become thoroughly mixed. Taylor and Ashcroft (1972) outline the rate
potential of ordinary gaseous diffusion in the soil atmosphere, stating that no ordinary
diffusion will take place if the density (concentration) of the diffusing substance is the
same throughout a given region. Further, if the vapor density of the diffusing substance
is different at different points, diffusion will take place from points of greater to lesser
density, and will not cease until the density at all places is the same. Therefore, it
follows, for example, that ordinary gaseous diffusion of each gas component in the
subsurface will occur whenever there is a difference in its concentration (a) between the
soil and outer atmosphere, or (b) at points within the soil because of irregularities in the
consumption or release of gases.
Ordinary diffusion is normally described by an equation written by Fick in
1855 after the principles described by Graham and in analogy with Fourier's Law for
heat conduction. Fick's Law, as the equation has become known, when applied to
gaseous diffusion in a porous medium, relates the diffusive flux to a concentration
gradient over distance multiplied by an empirically derived coefficient called the
effective diffusion coefficient. The magnitude of the effective diffusion coefficient is
dependent on the properties of the porous medium (tortuosity, porosity, moisture content)
and properties of the diffusing gas.
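In one dimension, Fick's law may be written J = -De (dC/dx), where J is the diffusive flux, C the gas concentration, and De the effective diffusion coefficient. As an illustration (a standard empirical relation, not one given in this report), the Millington-Quirk form estimates De = D0 θa^(10/3) / n², where D0 is the free-air diffusion coefficient, θa the air-filled porosity, and n the total porosity.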
Thermal diffusion occurs when a temperature gradient is established in a
gaseous mixture. The non-uniformity of temperature results in a concentration gradient
as more dense molecules move down the temperature gradient while less dense
molecules move in the direction of increasing temperature (Grew and Ibbs, 1952).
Thermal gradients in the vadose zone may be caused by heating of indoor facilities,
radiant energy absorbed by surface pavement or structures, and geothermal sources.
The greater the thermal gradient, the more effectively a mixture may be separated by
this phenomenon. Thermal diffusion is a slow process, however, and is not usually
considered to contribute significantly to transport. Most modeling efforts ignore
thermally induced diffusion unless a heat-generating waste (such as some forms of
nuclear waste) is involved.
Particle diffusion refers to diffusion of ions to or from exchange sites on soil
particles. Helfferich (1962) states that the chemical potential for interactions between
the molecules and a sorptive matrix depends on: (a) the generation of diffusion induced
electrical forces, (b) the absorbent selectivity or preference for a particular contaminant,
and (c) specific interactions such as Coulombic forces, van der Waals forces, and
hydrogen bonding. Particle diffusion is significant when strong interactions between the
exchange medium and diffusing contaminant exist.
Advection
Advection involves transport in which movement is in response to a pressure
gradient. Unlike diffusion, advection occurs as bulk flow; any tendency the
mixture has to migrate according to the concentration gradients of its separate
components is overwhelmed by its pressure response, and the mixture behaves as one phase.
Whereas diffusion produces varied movement of the individual components of a
mixture, in advection the entire mixture moves, retaining the same percentages of
individual components as it is forced to regions of lower pressure.
Movement in the unsaturated zone through advection is governed partly by
the permeability of the porous medium and partly by the pressure created by the
sources or sinks. For example, a popular method for the cleanup of volatile compounds
in the unsaturated zone utilizes extraction of gases by inducing gaseous advection and
enhanced volatilization.
Laminar viscous flow of a fluid through a porous medium is usually described
by Darcy's Law (analogous to Fick's and Fourier's Laws), which is an empirical equation
relating fluid flux to a pressure gradient over distance times a constant of proportionality
called permeability or, in the case of water flow, hydraulic conductivity. Permeability, in
turn, is dependent on properties of the flowing fluid and the porous medium through
which it flows.
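In its simplest one-dimensional form, Darcy's Law may be written q = -K (dh/dl), where q is the volumetric flux per unit area, K the hydraulic conductivity, and dh/dl the hydraulic gradient. With hypothetical values of K = 1 x 10^-5 m/s and a gradient of 0.005, the Darcy flux is 5 x 10^-8 m/s; dividing by an effective porosity of 0.25 gives an average linear (seepage) velocity of 2 x 10^-7 m/s, or roughly 6 m per year.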
Klinkenberg (1941) pointed out that gases do not stick to the pore walls as
required in Darcy's Law and that a phenomenon known as "slip" occurs. This gives rise
to an apparent dependence of permeability on pressure, commonly referred to as the
Klinkenberg effect, which often must be considered in mathematical descriptions of
transient-state advective flow.
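The Klinkenberg correction is commonly written kg = k∞ (1 + b/p), where kg is the gas permeability measured at mean pore pressure p, k∞ the permeability that would be measured with a nonreactive liquid (or with gas at very high pressure), and b an empirical slip constant that depends on the gas and the medium; this numerical form is a standard textbook expression rather than one taken from this report.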
Thermal advection is another type of flow and is caused by a thermally
induced pressure gradient. As temperature increases in unsaturated porous media, the
resultant increase in molecular energies results in increased pressure exerted by the
constituents. These heated phases then flow in an effort to reestablish a pressure
equilibrium.
The effectiveness of thermal advection depends somewhat on the thermal
regime of the soil. A soil medium with low thermal conductivity can create a greater
thermal gradient over a given distance because heat is contained. Air is a poor thermal
conductor; therefore, thermal advection is enhanced (because of high temperature
gradient development) in soils with low bulk density (high air content). But the overall
effect of thermal advection as a transport mechanism is considered to be limited.
However, if a heat generating source below the ground surface exists, thermal
advection could become significant. Kreamer (1988) observed an increase in rates of
gaseous tracer movement when alluvial soils were heated at depth in simulation of
buried, heat generating waste at the Nevada Test Site. It has also been noted in
regions of fractured hard rock with very deep water tables that holes drilled to great
depths will seasonally advect gases and seemingly "breathe", with expulsion of gases
from the ground normally in the winter months. This convection is likely related to the
magnitude, frequency, continuity, and configuration of fractures, and the annual
temperature variation. This factor obviously has important implications to waste
repositories containing volatile compounds which are placed in deep, highly fractured
media or Karst systems.
Effusion
In porous media where pore spaces are extremely small, collisions between
gas molecules can be ignored compared to collisions of molecules with the surrounding
walls. If molecule-wall collisions dominate, the flux of molecules through any shape
pore space is equal to the number of molecules entering the pore space times the
probability that any one molecule will pass through and not be deflected back the way it
entered. This forward flux is called effusion, free-molecule, or Knudsen flow and its
mathematical description is different from diffusion or advection processes. Typically,
effusion can be a factor in fine-grained material, yet is mostly ignored in models of
contaminant migration.
Combinations of transport processes
The previously described vapor transport processes can and often do occur
in combination. Effusion and advection in combination leads to the phenomenon,
discovered experimentally in 1875 by Kundt and Warburg, known as "viscous slip."
Advection and diffusion processes occurring in concert were called "diffusive slip" when
first discussed by Kramers and Kistemaker in 1943. A third combination, that of
effusion and diffusion, was researched extensively in the early to mid-1940's in support
of work on isotope separation. And, all three processes of diffusion, advection and
effusion could be significant in a given field situation and could be described by yet
another combination. This variability in types of transport processes points out the need
to clearly understand gaseous migration in any field situation before accepting a
mathematical representation of its migration in that situation.
Interphase transfer and transformation
The discussion thus far has been limited to mass transfer in a given phase
within the system. Transfer between phases also needs to be considered when
modeling transport of contaminants. The mass of a contaminant is divided between all
of the existing phases making the system dynamic with transfer between the phases.
The phase transfer modifies the rate of movement of contaminants but does not change
the total mass within the system. A very hydrophobic compound, for example, will have
very low concentrations in an aqueous phase and much higher concentrations in
phases that are composed of other hydrocarbons. When the original source of
contamination is associated with the water, the movement of a hydrophobic compound
will be much slower than the movement of the aqueous phase originally carrying the
contaminant. Many different models have been proposed to describe the transfer of
contaminants from phase to phase. The models attempt to describe the processes
such as:
precipitation/dissolution
sorption/desorption
ion exchange
evaporation/condensation
phase partitioning
Some of the models are linear and others are nonlinear. Most of the models are based
on kinetic theories of mass transfer. However, most current modeling approaches
consider linear partitioning between the phases and local equilibrium between the
phases. These assumptions remove the need to model the kinetics of phase transfer
and make it possible to view the apparent contaminant velocity independent of
contaminant concentration. Although these assumptions reduce the data requirements
and simplify the mathematical coding of a model, they may not be satisfactory in all
instances.
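Under the linear-partitioning and local-equilibrium assumptions, for example, sorption is usually folded into a retardation factor of the form R = 1 + (ρb Kd)/θ, where ρb is the bulk density, θ the volumetric water content, and Kd the linear sorption coefficient. With hypothetical values of ρb = 1.6 g/cm3, θ = 0.4, and Kd = 1.0 cm3/g, R = 5, and the sorbing contaminant front moves at one-fifth the velocity of the water; this standard relation is offered here only as an illustration of the simplification.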
Chemicals are also transformed, thereby removing the parent compound
from the system. Transformation takes place both biologically and abiotically. Most
modeling approaches consider the transformation process as following first order
kinetics. This type of equation adequately describes certain abiotic processes such as
radioactive decay. However, biological oxidation of an organic substrate is more
frequently limited by mass transfer of oxygen than the ability of the organism to
assimilate the biodegradable organic material. Biotransformations based solely on first
order kinetics are therefore generally invalid. Few models consider the formation of
daughter products such as the formation of dichloroethylene (DCE) from
trichloroethylene (TCE). The results from such model applications may correctly assess
the question being asked without yielding insight into the problems remaining.
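For reference, first-order kinetics imply C(t) = C0 exp(-kt), with a half-life of t1/2 = ln(2)/k; a hypothetical rate constant of 0.01 per day, for example, corresponds to a half-life of about 69 days. The discussion above indicates why a single constant of this kind is often a poor description of oxygen-limited biodegradation.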
Fundamental pore and REV scale phenomena
Since contamination of ground water by soluble pollutants is a typical aspect
of non-aqueous phase liquid (NAPL) pollution, most of the information needed to model
dissolved pollutants (i.e., sorption coefficients, dispersivities, ground-water flow
velocities and directions, geologic structure, etc.) is also required. The existence of multiple fluid phases
requires additional information for modeling contaminated sites. This information arises
from the fundamental physical phenomena that control multiphase flows. Of importance
are the interfacial tensions between each pair of fluids and between each fluid and the
solid. Direct results of these phenomena are preferential wetting of the solids, contact
angles between the solid and fluid interfaces, pressure differences between fluids
(capillary pressures) and capillary trapping. These effects occur in individual pores, so
that modeling at this fundamental scale requires determination of the details of the pore
geometry. In addition, such an approach is computationally very intensive and not well
suited for most applications.
The majority of models, including multiphase flow models, however, are
formulated over larger scale "representative elementary volumes" (REVs) (Bear, 1979),
which are defined to be of such a size that a certain property is invariant over the REV.
Practically, the size of the REV can be viewed as the smallest sample size of the device
used to measure the property. By formulating the model over the REV, knowledge of
the pore geometry is no longer necessary, as that information is contained in an
integrated or average sense in REV-measured properties.
The connection between the pore and REV scales is made for both single
phase and multiphase flow by Darcy's law. It is used to model the flux of each fluid
phase present in the multiphase flow system. In a three-phase air/oil/water system,
there are then three Darcy's law equations. Implicit in the usage of Darcy's law is that
the flux of each phase depends on its own gradient only, in the way that a single fluid
phase flows in response to the gradient imposed upon it (Dullien, 1979). All
assumptions that are built into Darcy's Law for single phase flow are also implicit in its
multiphase flow generalization. To account for the presence of multiple fluids (i.e. the
pore scale phenomena discussed above), Darcy's law is modified in two places. The
first modification arises because there is a capillary pressure (defined below) between
each pair of fluids. Second, the conductivity of the medium to each fluid is less than the
total possible because the other fluids take up part of the pore space; and the remaining
pore space has increased tortuosity since some paths are not available anymore. Both
of these properties vary with fluid saturation (percent of the pore space filled by a fluid),
and are briefly discussed below. Functional forms of the relationships are of singular
importance for multiphase flow modeling since they incorporate much of the
fundamental physics into the model equations.
Capillary pressure is defined, in general, as the pressure difference between
the wetting and nonwetting fluids (Bear, 1979). In a uniform tube, the capillary pressure
is a constant, depending on the fluid interactions with the solid and the geometry. In a
porous medium, however, the pore sizes vary so that with invariant fluid interactions the
capillary pressure must vary with the amount of the pore space filled by each fluid.
Each individual pore corresponds in some way to a tube of a certain diameter, and has
a capillary pressure associated with it. Averaging pressures over the REV gives the
capillary pressure for the REV at that particular amount of pore space filled by the given
fluids. There is no simple direct quantitative connection between the pore level
phenomena and the macroscopic capillary pressure function. There is, however, a
qualitative connection that can be inferred from the shape of the curves (Batycky et al.,
1981). Various methods exist for measuring these curves in the laboratory.
An important aspect of some capillary pressure curves is that there often is a
somewhat distinct capillary pressure that must be exceeded before a non-wetting fluid
can enter a porous medium saturated with a wetting fluid. After that capillary pressure
is exceeded, the non-wetting fluid can displace the wetting fluid. As the wetting fluid is
expelled, the capillary pressure increases, tending toward infinity. The residual wetting
fluid saturation is defined as the saturation when an increase in capillary pressure no
longer changes the volume fraction occupied by the wetting fluid. The significance of
the residual saturation is that at this saturation the fluid is present only as discontinuous
parcels of fluid, held in the pore space by (pore-scale) capillary forces. Since the
parcels are not in contact with each other, there is no flow of the fluid. In some soils the
capillary pressure may increase but not tend to infinity so that the concept of residual
saturation must be viewed as an idealization.
The magnitude of the residual saturation is usually needed for models of
relative permeability and for determining the amount of oil trapped in the pore space.
For the water phase, the determination of the residual saturation is relatively straight-
forward (e.g., Brooks and Corey, 1964). For the oil phase in a strongly water-wetted
system, however, the residual saturation depends on the amount of water present in the
pore space. When there is no water present, the oil acts like a wetting phase and the
residual is found in the small pores and grain contacts. When water is present, the oil
residual is found in large pores. The magnitude of the residual is likely to be different in
each of these cases, since the oil resides in different-sized pores. One proposed way of
handling this effect is for the relative permeability model to interpolate between the
measured two-phase oil/water and air/oil residual saturations, based upon the amount
of water present (Stone, 1973). Models which include full hysteretic behavior, such as that
proposed by Parker and Lenhard (1987), can determine the residual dynamically.
The reduction in conductivity of the medium is quantified by the second
function of fluid saturation, the relative permeability (sometimes referred to as the
relative hydraulic conductivity), which ranges in value from zero to one. When none of
a certain fluid is present or it is all trapped at residual, its relative permeability is zero
and there is no flow of that fluid. When the pore space is filled by that fluid, its relative
permeability is one, and the conductivity is the same as the single phase conductivity.
Between the endpoints, the relative permeability depends on all of the underlying
physical phenomena. Again, the shapes of the curves are somewhat indicative of the
pore level interactions (Owens and Archer, 1971, and Mohanty and Salter, 1983).
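As an illustration of how such saturation-dependent functions are commonly parameterized (a sketch using Brooks-Corey/Burdine forms with hypothetical parameters, not functions taken from this report):

    # Brooks-Corey type capillary pressure and wetting-phase relative
    # permeability as functions of saturation; parameters are hypothetical.
    import numpy as np

    pd, lam, s_r = 2.0, 2.0, 0.15    # entry pressure (kPa), pore-size index, residual saturation

    s = np.linspace(0.2, 1.0, 9)                    # wetting-phase saturation
    se = (s - s_r) / (1.0 - s_r)                    # effective saturation
    pc = pd * se ** (-1.0 / lam)                    # capillary pressure curve
    krw = se ** ((2.0 + 3.0 * lam) / lam)           # Burdine-type relative permeability

    for si, pci, ki in zip(s, pc, krw):
        print("S = %.2f   Pc = %5.2f kPa   krw = %6.4f" % (si, pci, ki))

The steepness of the relative permeability curve near residual saturation illustrates the point made above: small reductions in saturation produce large reductions in a phase's ability to flow.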
Three-phase relative permeability is extremely difficult to measure in the
laboratory and so its measurement is rarely undertaken (Stone, 1973). Most often, the
relative permeability is modeled on the basis of more easily measured parameters of
the porous medium. Typically, these are derived from measured two-phase capillary
pressure curves. (This follows because both the relative permeability and capillary
pressure are dependent upon the same underlying pore level phenomena.) In two
phases, the parameters are used in unsaturated flow modeling (Brooks and Corey,
1964), while in three phases, they are used for multiphase flow of NAPL pollutants.
Several difficulties are inherent in modeling three-phase relative permeability, however.
First, all capillary pressure and relative permeability functions are subject to hysteresis.
The functions are not single valued and the function values depend on the history of the
system and direction of displacement of the system (e.g., water displacing oil). Second,
most of the models of three-phase relative permeability are based on an assumed
distribution of fluids in the REV that would result from a strongly wetted medium (e.g.,
Stone, 1973). This is a typical condition for porous materials near the surface, but may
be altered by the presence of the pollutants in some cases (changes in wetting have
been noted by petroleum engineers; Treiber et al., 1972).
In addition to the pore scale phenomena discussed above, fluid density and
viscosity play important roles in the description of these processes. First, the
conductivity depends on the density and viscosity. The flux is directly proportional to
the density and inversely proportional to the dynamic viscosity. Thus various fluids will
flow at different rates through the pore space. A second impact of the fluid properties is
on the stability of displacements. In simple terms, when the displacing fluid is of higher
conductivity than the displaced fluid, the displacement is unstable. Phenomena such as
viscous fingering result, as the displaced fluid cannot move as fast as the displacing
fluid (Kueper and Frind, 1988). Some of the displaced fluid is left behind. Eventually
the displacing fluid bypasses the displaced fluid.
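The dependence noted above is usually expressed as K = k ρ g / μ, where K is the conductivity to a given fluid, k the intrinsic permeability of the medium, ρ the fluid density, g the gravitational acceleration, and μ the dynamic viscosity. As a hypothetical example, a light oil having about 0.8 times the density and twice the viscosity of water would have a saturated conductivity roughly 0.4 times that of water in the same medium.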
Field scale phenomena
The fundamental interactions between immiscible fluids were described
above. These lead to some unique phenomena which occur on the field scale. If a
NAPL pollutant is released near the surface, the pollutant is driven downward by gravity
and pressure (both imposed and capillary) gradients. Upon reaching the water table,
NAPLs that are more dense than water (D-NAPL) can penetrate the water table and
begin displacing aquifer water. They must have sufficient driving force, however, to
overcome the entry capillary pressure and enter the water-filled pore space. Cary et al.
(1989) illustrate in the laboratory a situation where TCE cannot enter a water-saturated
sand. When present in sufficient quantity to penetrate the water table, the NAPL can
displace water. Since the density and viscosity of such fluids are such that their
saturated conductivities are greater than that of water, the displacement is inherently
unstable. Fingering of the NAPL results where the fluid is found in localized areas. The
direction which the NAPL fingers take is highly dependent on minute variations in the
aquifer properties. These situations are essentially unpredictable on a detailed level by
currently existing theory and models.
When the NAPL is less dense than water (L-NAPL), as in the case of fuel
hydrocarbons, the NAPL tends to float on the water table. Some aquifer water can be
displaced by the weight of the mound of oil. In this case, however, once the mound
height is reduced, the water table rebounds to its original position. Since the NAPL has
penetrated the aquifer, some of the aquifer material below the water table is now
contaminated with a trapped residual NAPL saturation. Fluctuation of the water table,
either naturally occurring or caused by recovery wells, causes further smearing of the
NAPL lens and entrapment at residual saturation. NAPL not so entrapped may move
down gradient along the water table, as long as its saturation is above residual. In the
case of the denser-than-water NAPL, a similar type of mound may form at the bottom
boundary with a layer that prevents further downward motion. In this case the direction
of motion of NAPL may not coincide with the direction of water flow, but may be
primarily determined by the geologic formations.
Geologic layers, in general, can serve as boundaries in multiphase systems.
An abrupt reduction in saturated conductivity can reduce the flux of a liquid that can be
transported downward. This in turn may lead to lateral spreading and mounding at the
boundary. Somewhat paradoxically, a NAPL can be stopped even by a layer of higher
saturated conductivity. This occurs because there cannot be a capillary pressure
discontinuity across a boundary between two layers. The two layers normally have
different capillary pressure curves; so that at a given pressure (capillary pressure), the
saturation in the two layers (determined by the capillary pressure curve) is different. A
high saturation may be required in the upper layer to maintain continuity in capillary
pressure. This phenomenon depends on the relative shapes of the capillary pressure
curves and not the saturated conductivity. This effect is responsible for the well-known
end effect in laboratory testing, where the second "layer" is actually the atmosphere.
Flow out of the column occurs only after the capillary pressure near the end of the
column is zero, causing an increase in wetting fluid saturation (Collins, 1961).
A closely related phenomenon occurs with wells, since the well is of large
enough diameter so that capillary pressures in the wells are essentially zero. Fluids
then can only flow into wells when capillary pressures in the formation are zero, which is
essentially a maximum saturation condition. A hydrostatic balance between an L-NAPL
floating on the water in a well and the L-NAPL in the aquifer indicates that the thickness
of the L-NAPL in the well is not indicative of the L-NAPL thickness in the formation
(Zilliox and Muntzer, 1974). It is likely that there also are transient nonhydrostatic
effects (Kemblowski and Chang, 1988). For these reasons the mass or volume of L-
NAPL in a formation should not be estimated from uncorrected well thicknesses.
Several correction methods were reviewed and tested against a laboratory model by
Hampton and Miller (1988).
Modeling considerations
As discussed in the introduction to this section, many of the NAPLs of
interest are composed of complex mixtures. Some of the constituents of the NAPL are
present at high enough concentrations that they make up a large fraction of the NAPL
by mass. Therefore, their loss from the NAPL to the aqueous phase may have a
significant effect on the flow of the NAPL. In such cases, the most accurate modeling
would be accomplished by using a multiphase, multicomponent model where many of
the NAPL transport properties are dependent upon the chemical composition. The
model proposed by Abriola and Pinder (1985) is of this type. Such models have
extreme computation time requirements. This type of model may also prove necessary
for modeling a spill of pure TCE, which owing to its considerable water solubility may be
rapidly lost as a separate phase.
At the other end of the spectrum is a constituent that represents only a small
fraction of the NAPL mass. For such pollutants, the NAPL can be treated as a carrier
from which the constituent partitions into the water and air and onto the soil. The
model formulation for these types of problems is simpler than the fully multiphase,
multicomponent models. In this simpler formulation, most of the multiphase flow phenomena can be
considered to be independent of the chemical constituent(s).
Although three fluid phases may be present at a contaminated site, they may
not all be flowing simultaneously. Single- or two-phase models that include the
presence of the third phase could be used for some situations. Justification for this
approach lies with measured three-phase relative permeability curves which show
relatively small regions where significant amounts of all phases flow. This occurs
because when the saturations of the fluid phases drop, their permeabilities drop very
rapidly. Thus if one fluid is occupying enough of the pore space to flow, then the others
are occupying small amounts which only are associated with very low relative
permeabilities.
Typically water or a NAPL can easily displace much of the air in the
unsaturated zone since air has a low density and viscosity. At the displacing liquid
front, the flow may be two- or three-phase; but behind the front, most of the flow will be
in the liquid phase. Infiltrating rainwater can have an important effect on a NAPL phase.
Rainfall is often not explicitly included, since once the water saturation is reduced, its
relative permeability is dramatically reduced. Because of this phenomenon, the typical
moisture content of soils below one or two meters remains relatively constant. Few
rainfalls are of the necessary intensity and/or duration to reach most water tables.
Several simplified NAPL models have been constructed upon the premises of an easily
displaced air phase and constant water saturation (Mull, 1970; Dracos, 1978; Reible,
1988; Weaver and Charbeneau, 1989).
MODEL IMPLEMENTATION
There are a variety of factors which determine how well a model describes
the behavior of a real system to which it is applied. Some of these will be discussed
below. First, the conceptualization of the flow system is critical for modeling success.
Information from the preliminary site investigation is used to determine the important
chemical and physical processes occurring at the site. From such information a model
can be selected for the site. Application of the model to the site involves selection of the
appropriate initial and boundary conditions, determination of the regional hydrogeologic
characteristics, and identification of correct parameter values. The model then
represents and incorporates the best possible understanding of the site; it never
substitutes for knowledge of the site. Proper usage of the model enhances
understanding of the transport behavior occurring at the site. At this stage, various
predictions of the site behavior can be made to assess alternative operation or
remediation strategies. By comparing actual versus predicted behavior, the validity of
the model's representation of the site can be assessed. In some cases, the comparison
may reveal gaps in the conceptual understanding of the site that require reevaluation
and adjustment of the model. Through iteration between data collection and model
usage, a better understanding of the site behavior is obtained. This section discusses
the application of a model to a field site and presents example field cases from the
literature.
Initial Considerations in Model Application
Before modeling begins, there should be a clearly defined problem and
specific questions for the modeling work to address. The selection of the model
depends on the questions to be answered, in addition to the processes occurring at the
site. For example, if one only wishes to approximate the flux of chemicals from a land
treatment site over a long time period, simple steady-state models may be appropriate.
If for the same situation, the response to a certain rainfall pattern or the precise
distribution of chemicals in the soil column is desired, then a more complex model may
be needed. Models for the alternative modeling situations may differ significantly in
character and data requirements. The selected model should be able to answer the
questions, without introducing unneeded complications. That is, it is inappropriate to
apply a highly complex model to a situation for which a simple model can answer the
questions posed. One reason for this is that as the complexity of the modeling
increases, so do the data requirements and expense. In the example above, for steady-
state infiltration, average annual recharge rates are used, while storm intensities,
durations and arrival times are used to simulate a series of rainfall events. Even for
only one aspect of this example, the type of data required is more extensive and difficult
to obtain for the more complicated situation.
A similar issue is related to the implication of error in the calculation. As the
required margin of error for the answer decreases, the required data and modeling
accuracy for the site must increase. For example, a different level of approximation
is appropriate if the model is being used to help in siting a high-level
nuclear waste repository, as compared to that for a municipal water supply well. An
abbreviated site investigation may be appropriate for the latter, but not the former. Both
the data collection and modeling effort must be recognized as being strongly dependent
on the purpose of the modeling activity.
Selection of a Model
A crucial aspect of applying a model to a field site is the selection of an
appropriate model. The process of model selection begins with site characterization
data, which is used to form a conceptual model of the site. The analyst's qualitative
understanding of the geology, hydrology and transport processes occurring at the site
follows. From this understanding, a specific model can be selected that will simulate
the important processes occurring at the site. Since some simplification will be
necessary for modeling, various decisions must be made in selecting an appropriate
model. The following discussion can be viewed as a partial guide for the site
investigation. By conducting the investigation with these decisions in mind, an
appropriate understanding of the site can be obtained for the application of a model.
The section ends with a comparison between a typical site investigation effort and a
level deemed necessary for full understanding of a site.
Simple vs. complex models
As discussed above, there are situations where simple models may be
adequate and most appropriate. Simple models have a variety of usages which will be
discussed briefly before proceeding to the more complex situations. Usually these are
analytic solutions of the governing equations, which are exact mathematical
representations of the solution for simplified situations. Generally analytic solutions
apply to homogeneous porous media with certain boundary conditions. Solutions exist
for radial flow toward single wells (e.g., Freeze and Cherry, 1979), various one-
dimensional solute transport situations with constant ground-water velocity (van
Genuchten and Alves, 1982) and flow through single fractures (Tang et al., 1981)
among others. One common usage of analytic solutions, which will be discussed
below, is for parameter identification at the field scale. Strictly, they only apply to very
limited situations. In practice, they are applied successfully over small areas or in cases
where the solution is relatively insensitive to variation in the conditions of the solution.
A second usage, when data concerning the site is limited and uncertain, is in the early
stages of an investigation. In this instance, an analytical model can be used to gain
understanding of some of the transport processes at the site. By varying the input
parameters, the response of the aquifer under varying conditions can be easily seen.
At sites that are undergoing further investigation, the analytic models can be used in
initial stages to help guide and interpret data collection. Conditions at the actual field
site will probably not be as idealized as strictly required by the simple model, and in
many cases the qualitative behavior can be quite different than in the idealized case.
As more data becomes available, especially concerning geologic variability, numerical
model usage becomes more appropriate.
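As an illustration of how compact such analytic solutions can be, the following short
Python sketch evaluates a common one-dimensional advection-dispersion solution for a
constant-concentration inlet boundary, of the type tabulated by van Genuchten and Alves
(1982). The velocity, dispersion coefficient, distance, and times are hypothetical values
chosen only for demonstration.

    # Minimal sketch: 1-D advection-dispersion with a constant-concentration
    # inlet boundary (Ogata-Banks type solution). All parameter values are
    # hypothetical and for illustration only.
    from math import erfc, exp, sqrt

    def relative_concentration(x, t, v, D):
        """C/C0 at distance x (m) and time t (d) for velocity v (m/d)
        and dispersion coefficient D (m2/d)."""
        a = (x - v * t) / (2.0 * sqrt(D * t))
        b = (x + v * t) / (2.0 * sqrt(D * t))
        return 0.5 * (erfc(a) + exp(v * x / D) * erfc(b))

    # Example: 0.1 m/d velocity, 0.05 m2/d dispersion, 50 m from the source.
    for t in (100.0, 500.0, 1000.0):
        print(t, relative_concentration(50.0, t, 0.1, 0.05))

A few lines of this kind allow the response of the idealized system to be examined
quickly as the input parameters are varied.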
Note that even the usage of a numerical model implies simulation of a
simplified system, because the real systems are far too complex to model in every
detail, only certain aspects of the problem may be important for the case at hand, and
there are processes that are not understood completely. Certain aspects of multiphase
flows, dispersion, and chemical and biological transformations are examples falling into
the latter category. A common situation is that geologic variability is not completely
characterized.
Beginning then with the most general situation, simplifications are sought to
make the modeling effort practical, but still representative of the site and able to answer
the questions posed. Much of the art of modeling lies in the ability to properly
conceptualize the site processes and apply a model in an appropriate fashion to gain
further understanding of the site and to achieve valid model predictions.
Dimensionality
Ground-water flow systems are three-dimensional in nature, but under
appropriate circumstances may be represented by one- or two-dimensional models.
When appropriate, such simplification can reduce input data requirements and
computation time. Many existing ground-water models are two-dimensional models
where any variation in the third dimension is neglected; all inputs and outputs are
assumed constant over this dimension. Two-dimensional models can be used when
the pollutants are evenly distributed over the thickness of a uniform aquifer. There are
many situations where the two-dimensional approximation is appropriate and serious
consideration needs to be given to this option. A limitation of fully three-dimensional
modeling is the greater and more detailed site characterization data needed.
A common three-dimensional situation arises in layered systems where the
water flow may be more or less planar, but variations in hydraulic conductivity among
the layers may result in vertical variations in contaminant concentration that are largely
due to the layering. As a result, the contaminant is not well mixed over the thickness of
the aquifer, as implicitly assumed in a two-dimensional solute transport code. For such
situations, three-dimensional effects may be important and three-dimensional modeling
should be considered.
The ability of the analyst to determine if a three-dimensional model is needed
is closely related to the sampling program. If, for example, a fully-penetrating well was
used to sample pollutants that are not uniformly mixed over the thickness of the aquifer,
then dilution in the borehole would give an apparent uniform concentration regardless of
the actual distribution with depth. By mixing various concentrations, not only is the
vertical distribution lost, but the apparent concentration is lower than the actual
maximum. Wells screened or packed-off over certain intervals can be used to give the
actual distribution of the pollutants. Another sampling concern is that ground-water flow
data is often available only in an implicitly two-dimensional form such as water levels in
wells. Such data does not indicate flow in the third (vertical) dimension. Vertical flow is
indicated by data from well clusters, that is, wells drilled to different depths in close proximity to
each other. Screened or packed-off wells and well clusters, then, are primary tools for
determining three-dimensional flow.
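The dilution effect of sampling with a fully-penetrating well can be illustrated with a short
calculation; the layer thicknesses and concentrations below are hypothetical.

    # Minimal sketch: apparent concentration in a fully-penetrating well
    # versus the true maximum over the aquifer thickness.
    # Layer thicknesses (ft) and concentrations (mg/L) are hypothetical.
    thicknesses = [5.0, 10.0, 3.0, 12.0]
    concentrations = [0.02, 0.15, 2.40, 0.01]

    # The open borehole mixes water from all layers roughly in proportion
    # to thickness (ignoring conductivity differences for simplicity).
    apparent = sum(b * c for b, c in zip(thicknesses, concentrations)) / sum(thicknesses)
    print("apparent (mixed) concentration:", round(apparent, 3))
    print("true maximum concentration   :", max(concentrations))

In this hypothetical case, the mixed sample would report roughly 0.3 mg/L while the most
contaminated layer contains 2.4 mg/L, illustrating both the loss of the vertical distribution
and the underestimate of the maximum.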
It can be seen that three-dimensional modeling requires more data and thus
expense for the site characterization. Justification of using a three-dimensional model
lies in the importance of concentration variations and gradients in the vertical (or third)
dimension. If such effects are fundamentally important and different than the results
from two-dimensional simulations, then three-dimensional modeling may be necessary.
Two-dimensional models should not be ruled out as they may be able to adequately
answer the questions posed at a site. For example, a vertical concentration distribution
might exist at a site and be defined during site characterization. Horizontal migration of
the pollutant could be represented in a two-dimensional model by using the maximum
concentration of the pollutant as the initial condition. The model results would represent
a worst case assumption for the solution. By assigning the highest concentration found
in the profile to the entire aquifer, much more pollutant mass is in the modeled system
than in the real system. Averaging the concentration over the vertical to approximate
the mass actually present could severely underpredict the maximum concentration in
the aquifer. It would be valuable to apply the two-dimensional model to a vertical cross
section of the site. This application would show how much vertical transport to expect in
the system, and attenuation of the maximum concentration.
In an example where vertical gradients were important, Keely (1987)
reported on a Superfund site where drums containing hazardous materials had been
improperly stored. Initial investigation determined that the ground water flow was
generally to the west, indicating that the pollutants would eventually discharge to a
nearby river. More detailed investigation using clusters of wells that sampled different
depths showed steep downward gradients in the vicinity of the river. In fact, some large
wells were located across the river, presumably producing water from under the river.
This case illustrates that although the water levels generally indicated flow toward the
river, when the true three-dimensional flow was considered, the contaminants were
seen to be moving toward the large wells and heading under the river.
Geologic materials
The nature of the geologic materials that impact fluid flow must be
determined. Classical mathematical descriptions of flow through porous media (i.e.,
Darcy's Law) assume a so-called porous medium, where flow is through the pores of
the material. Many geologic materials can be treated as a porous medium in this
manner. In contrast, however, there are materials where flow is mainly through larger
scale features, such as fractures, macropores, or solution channels. (These will all be
referred to as fractures.) Although in principle the fractures are like large pores, the flow
is directed along the fractures since their larger size gives them a greater ability to
conduct liquids. In crystalline rocks, for example, the hydraulic conductivity of the rock
may be so much lower than that of the fractures that only the fracture conductivity is
significant. Fluid then flows only in the fractures which may have a certain orientation
leading to preferential flow directions. In other cases, due to the possibility of pollutants
diffusing in and out of the fractures, the conductivity of the geologic material and the
fracture must be considered. A further complication is that as in the case of some clays,
the material may swell in the presence of water. This would cause the size and
conductivity of the fractures to change temporally with the amount of water present.
Obviously, very different models are needed for the different types of materials.
Other information relevant for model selection that must come from site
characterization is information concerning the anisotropy and the general degree of
heterogeneity of the geologic formations. First, many geologic deposits are anisotropic
(having properties which vary when direction of measurement is changed) because
they were laid down under water. Typically, there is a preferred direction of fluid
movement; in many cases the horizontal hydraulic conductivity is greater than the
vertical. Second, if the system must be treated as being composed of a series of layers
or if there are leaky confining beds, the model will have to account for this. For model
selection, the general nature of these items needs to be elucidated for inclusion into the
model.
Ground-water flow systems
For several reasons, it is important to determine if the aquifer is under
confined or water table conditions. Some aquifers change from confined to unconfined
conditions depending on water levels. It is important to understand such behavior for both
modeling and understanding the aquifer. Some models of solute transport apply to only
one type of aquifer. For example, the widely used USGS MOC solute transport code
(Konikow and Bredehoeft, 1978) was developed for confined aquifers.
Climate affects ground-water systems and may be included in certain models
as recharge, evapotranspiration, temperature, relative humidity, etc. In studying the
site, the important climatic processes must be selected and approximated for modeling.
For example, in some aquifers such as the Ogallala of the high plains of Texas, there is
only a very small recharge to the aquifer, which for many situations could be neglected.
A general consideration is that the climatic processes occur over discrete periods of
time: a rainfall event, the diurnal temperature variation, or seasonal rainfall trends. The
modeler must decide to what level of detail and thus approximation this type of variation
needs to be included. As an example, many aquifers exhibit seasonal water level
fluctuations that may or may not be important for solute transport. This has implications
for data collection since water levels measured at only one point in time can not indicate
changes in gradient that might occur throughout the year. When important, the model
must be able to simulate transient conditions in the aquifer.
As noted by Reilly et al. (1987), ground-water velocities are usually low, so
that contaminant plumes, on the scale of thousands of feet, are embedded in much
larger regional ground-water flow systems. Knowledge of the regional flow system is
necessary for complete understanding of the local system at the contaminated site.
This may be important for selecting boundary conditions for the model used in the
immediate vicinity of the site. Konikow and Mercer (1988) discussed a site where
modeling was done on three scales: regional, local and site. Modeling at the larger
scales provided input for the lower scales. In this manner, the best usage of all data
can be made and appropriate boundary conditions can be placed on the site model.
Multiple fluid phases
Since many ground-water pollutants are immiscible with water, the model
may need to include multiple fluid phases which flow separately. The site investigation
should reveal if multiple fluid phases are present and if they are present in significant
amounts. If the NAPL, which always has a small water solubility, is present in small
amounts, it may be totally dissolved in the water phase. Even when present in larger
amounts, its migration as a separate phase may still be negligible. Due to capillary
phenomena, NAPLs can become trapped within the pore space, although perhaps only
temporarily. So in some cases even when NAPLs are present, they may migrate only a
small distance and have little impact on the motion of a contaminant. Their role as
reservoirs of hydrophobic chemicals is probably always important and should be
included in some fashion when they are present.
Inclusion of separate-phase flow depends on the purpose of the modeling
and the site conditions. In some cases the presence of the NAPL may be treated as an
unlimited source of a dissolved pollutant, while in others the motion of the NAPL itself is
important. Generally, NAPL motion becomes more important as its quantity in the
subsurface increases. Much more complicated models are required for multiphase flow
than if the NAPL presence can be treated as a source term only.
Another concern with NAPLs is their density. NAPLs less dense than water,
or L-NAPLs, tend to float on the water table and displace only a small amount of
water. Specialized models may be used for L-NAPLs which simulate the motion of the
fluid on the water table. Fluids more dense than water, or D-NAPLs, displace the water
and settle downward toward the lowest confining layer.
Unsaturated zone
An important special case of multiphase flow is unsaturated flow where the
two fluid phases are air and water. If at a particular site the contaminant's flow through
the unsaturated zone is important, unsaturated flow must be included. If both saturated
and unsaturated conditions exist, then a coupled saturated-unsaturated model may be
needed. In some cases, it may be acceptable to neglect the unsaturated zone and use
the aqueous concentrations immediately below the source as a source term. In other
cases, significant transformations and/or attenuation may occur in the unsaturated zone.
These may need to be modeled to obtain accurate modeling of transport in an
underlying aquifer.
Pollutant effects on fluid properties
If fluid density and viscosity are unaffected by the solute, then the hydraulic
equations can be solved first and their results used for the constituent motion. If the
fluid properties depend on the solute concentration, a much more complicated
numerical model is required. When the solute increases the density of the water, the
pollutant will move downward. This behavior can impact the dimensionality of the
modeling effort; if the pollutant tends to sink or float as it moves, it will not be uniformly
distributed over the vertical thickness of the aquifer. Two-dimensional planar models
cannot address this behavior. Three-dimensional modeling may be appropriate if the
vertical distribution is important for the situations addressed. Note again that the
questions to be answered for a site also play a role in determining if certain phenomena
are included in the model or not. The modeler must decide if these effects warrant
inclusion in the model.
Chemical processes
Many solute transport models include sorption of the chemical onto the
porous matrix. Such a transfer of solute from one phase of the system to another is in
general referred to as partitioning. Other partitionings are between liquid phases in a
multiphase system, and partitioning to the air phase of volatiles. An example of a
transformation is biodegradation, which has often been treated as a first order loss (i.e.,
described completely by a half life). One problem with this approach is that often
oxygen must be present in the system for the degradation to occur. In this situation,
degradation of the chemical is strongly dependent on oxygen transport.
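The effect of linear sorption and a simple first order loss can be sketched with a few lines
of arithmetic; the bulk density, porosity, partition coefficient, half life, and velocity below
are hypothetical values used only to show the calculation.

    # Minimal sketch: linear-sorption retardation and first-order decay.
    # All parameter values are hypothetical.
    from math import log, exp

    bulk_density = 1.7   # g/cm3
    porosity = 0.35      # dimensionless
    kd = 0.5             # cm3/g, linear partition coefficient
    half_life = 200.0    # days

    retardation = 1.0 + bulk_density * kd / porosity   # R = 1 + rho_b*Kd/theta
    decay_rate = log(2.0) / half_life                  # k = ln 2 / half life

    groundwater_velocity = 0.2                         # m/d, hypothetical
    solute_velocity = groundwater_velocity / retardation

    print("retardation factor:", round(retardation, 2))
    print("retarded velocity (m/d):", round(solute_velocity, 3))
    print("fraction remaining after 1 year:", round(exp(-decay_rate * 365.0), 3))

Such a first order treatment implicitly assumes that electron acceptors such as oxygen are
not limiting; where they are, the simple half life description breaks down as noted above.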
The following brief example illustrates some of the difficulties related to
chemical transformations. Often, chemicals found in contaminated ground water do not
match the chemicals found at the source of contamination. This is the case in an
example presented by Keely (1987), where one possible explanation for the
discrepancy was that the original chemicals were undergoing biotransformations.
Tetrachloroethylene (PCE) and trichloroethylene (TCE) possibly were being converted
into dichloroethylene (DCE) which was being detected in wells. Although there may be
other explanations for the observed conditions at the site, the transformation will be
assumed to be the correct explanation for the sake of illustration. The transformation
itself is important for modeling the site, as some chemicals disappear from the system
and others appear. Also important is that the migration rates of the chemicals differ due
to differing tendencies for sorption. Literature values of partitioning coefficients indicate
that sorption decreases in the order PCE, TCE, DCE; so that as the biotransformations
occur, these pollutants become more mobile in addition to changing chemically. In
general, simulation of such a situation would require knowledge of the mechanisms and
rates of transformation and a model capable of simulating reactive transport.
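If, purely for illustration, the parent-to-daughter step is treated as a single first order
transformation, the growth of the daughter product can be written out directly; the rate
constants below are hypothetical and do not represent measured site values.

    # Minimal sketch: two-member first-order chain (parent -> daughter),
    # for example PCE transforming to TCE. Rate constants are hypothetical.
    from math import exp

    def chain(c_parent0, k_parent, k_daughter, t):
        """Return (parent, daughter) concentrations at time t (days)."""
        parent = c_parent0 * exp(-k_parent * t)
        daughter = (c_parent0 * k_parent / (k_daughter - k_parent)
                    * (exp(-k_parent * t) - exp(-k_daughter * t)))
        return parent, daughter

    for t in (0.0, 100.0, 500.0, 1000.0):
        print(t, chain(1.0, 0.005, 0.002, t))

A field simulation would also have to assign each species its own retardation factor, since,
as noted above, the daughter products tend to sorb less and therefore move faster than the
parent compounds.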
The immediately preceding discussion has focused on some important
considerations for applying a model to a contaminated site. The foundation of any
model application must rest on adequate characterization of the flow and transport
domain. As stated earlier, modeling cannot substitute for an understanding of the site.
For determining the occurrence of the processes above, much site characterization
work is needed. This is in contrast to the typical site characterization (Keely, 1987)
-installation of a few shallow monitoring wells
-sampling for the 129 priority pollutants
-defining geologic variability by driller's logs and cuttings
-evaluating geohydrology by water level maps only
Such a level of effort may be appropriate for initial screening of the site. But to gain
further understanding more work may be necessary, including
-installation of well clusters
-using extensive coring and/or split spoon sampling for defining geologic
variability and the presence of trapped oil phases
-conducting geophysical surveys
-conducting tracer tests
-determining partitioning behavior on core material
-measuring chemical parameters of the fluids
-assessing the potential for biological and
abiotic transformation processes
Obviously, the expense of obtaining information increases as more of these tests are
performed. There may be no alternatives to investing more money in site
characterization, however, if a thorough understanding of the site is needed. The
success of the modeling effort depends on how well the site is characterized. Any
model predictions that are based on limited information and understanding of the site
are inherently uncertain. Such work can easily be challenged by reviewers, when the
actual conditions and behavior of the site are unknown by the analysts. Increasing the
knowledge concerning the site serves to decrease, but not eliminate, uncertainty in the
model results. Modeling should not be expected to provide exact predictions of
pollutant migration but rather to show possible ranges of results based on input
variability and comparisons between different management alternatives.
Model Selection
Once the important processes occurring at the site have been determined
from the site characterization, selection of the appropriate model may proceed. From
the analyst's understanding of the site and the questions to be answered about it, the
appropriate model is selected for the site. Lists of available models such as
Groundwater Management: the Use of Numerical Models (van der Heijde et al., 1985)
and Selection Criteria for Mathematical Models Used in Exposure Assessments:
Ground-Water Models (EPA, 1988) are a beginning point in selecting the model.
Further guidance can be obtained directly from the International Ground Water
Modeling Center (IGWMC) at Holcomb Research Institute in Indianapolis. From such
lists, models can be found that might apply to the site at hand. Models that survive a
preliminary screening can be investigated further by checking the references for more
information.
The following discusses some aspects of model selection and generally
follows the terminology and some of the items discussed in ASTM (1984) titled
Evaluating Environmental Fate Models of Chemicals. Models selected for usage at
contaminated sites should have had sufficient previous application to field sites so that
the user can have confidence in the model itself. Documentation should be available
concerning such usage and the model's development. Specifically, from this
information the user must be able to ascertain what processes the model incorporates
and what assumptions have been made in building the model. Then the user can
determine how well the theoretical basis of the model fits the situation to be modeled.
For a model to be selected, the processes identified as being important at the site must
correspond to those included in the model. Model development reports or other
publications should list this information in a clear and concise manner so that the user
can screen available models. Presentation of case studies in reports aids the user in
determining the intended applications of the model.
Sensitivity analyses are usually a part of reports describing numerical
models, and are important for field application of the models. These analyses determine
which of the model's input parameters have the most influence on the output. In the
field, this information should be used to direct the data collection effort; the most effort
should be directed toward the parameters that most affect the solution. Another area
where sensitivity analysis is important in field studies is the dispersivity. A major
theoretical deficiency exists in the usage of dispersivity, as dispersivity cannot currently
be related to fundamental properties of aquifers. Dispersivity is well-known to exhibit a
scale dependence for even relatively homogeneous aquifers. Predictions from models
that use dispersivity should contain estimates of solute transport for a range of
dispersivity values (Reilly et al., 1987).
Algorithm examination or verification includes evaluation of the numerical
techniques selected for the problem and their limitations, evaluation of the mass
balance achieved by the code, and evaluation of any numeric problems such as
numerical dispersion or instability. Although these activities are largely the province of
the model designer, some are important for field implementation, because they may
affect the performance and ability of the model to simulate field conditions. For
example, the finite element method is commonly regarded as being more able to
represent nonuniform geometry than the finite difference method. A finite element
model might be preferable for modeling flow in an irregularly shaped aquifer. Therefore
in some cases, the selection of the model may depend on the numeric method used.
Also if a code is subject to numerical dispersion or other effects dependent on
parameters used to control program execution, comparison with field conditions may be
adversely affected. Numerical dispersion, for example, is a common problem with
solute transport modeling. Many of the methods used to solve the equations introduce
mathematical smearing of the solution which appears as "extra" dispersion. Solutions
obtained from such models show the solutes dispersing more than they would in reality.
In all cases, verification should include independent mass balance checks.
Since most modeling is based upon differential mass conservation equations, a
necessary condition for model correctness is conservation of mass. For
nonconservative solutes, the amount of mass in the aquifer or unsaturated zone and the
cumulative mass lost from the system should balance. When mass balance is not used
to determine the solution, mass balance provides an independent check of the solution.
If mass is not conserved, the model cannot be correct. Correct mass balance does not
guarantee a correct model, but it is a good indicator that the model equations are being
solved correctly. Model users should check model results for mass balance errors and
assess the level of accuracy obtained.
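A mass balance check of the kind described above can be performed directly on model
output; the quantities below are hypothetical placeholders for numbers a code would
normally report or that could be integrated from its results.

    # Minimal sketch: independent mass balance check on model output.
    # The reported quantities are hypothetical placeholders.
    mass_initially_in_domain = 1200.0   # kg
    cumulative_mass_in = 350.0          # kg, injected or recharged
    cumulative_mass_out = 180.0         # kg, discharged across boundaries
    cumulative_mass_degraded = 95.0     # kg, lost to transformation
    mass_now_in_domain = 1271.0         # kg, integrated from concentrations

    expected = (mass_initially_in_domain + cumulative_mass_in
                - cumulative_mass_out - cumulative_mass_degraded)
    error = mass_now_in_domain - expected
    percent_error = 100.0 * error / expected
    print("mass balance error: %.1f kg (%.2f%%)" % (error, percent_error))

A small percentage error of this kind indicates, but does not prove, that the equations are
being solved correctly; a large error indicates a problem that must be resolved.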
Since direct calculation of the model result is impossible, various strategies
are employed by model developers and others for determining if numerical models are
correct relative to the underlying differential equations. Where available, analytic
solutions are compared with model output. Presumably, a model is mathematically
correct if it can duplicate numerically an exact result. Unfortunately, exact solutions are
available only for simplified cases, and the full capability of a numerical model is not
tested by these comparisons. Also, even though an analytical solution exists, many
times the methods used to evaluate the solution may introduce error. The comparison
may not be against an unimpeachable standard. At the next level are comparisons
with other codes developed for similar problems (code comparison). Favorable
comparison of results gives an indication that the model is producing correct output, and
adds weight to the credibility of the model. Once again, the full capability of a code
might not be tested, if the other existing codes have lesser capabilities than the code in
question.
Several other aspects related to model selection can be identified. The cost
and availability of the model are important considerations. Obviously, there are direct
costs in obtaining the model and its documentation. Indirect costs include the level of
experience needed to run the model and its technical complexity: does the potential user
have the necessary training to use the model? Also, what are the costs associated with
staff members learning to use a new model? Are pre- and post-processors available for
the model so that input data are easily prepared and output is easily made into a form
suitable for interpretation and presentation? Existence of both of these enhances the
usability and reduces the cost of using the model. Concerning availability, there are
several reasons why the use of proprietary models where the computer code itself is
unavailable presents a problem. One of these is court proceedings, where model
results may be contested on the basis of the theoretical foundations of the code (van
der Heijde and Park, 1986).
Application of the Model to a Specific Site
Once site characterization is completed and the model selected, the model
must be fit to the specific site under consideration. In general, the purpose of any
model is to transform a given set of input parameters (i.e., hydraulic conductivity,
dispersivities, partition coefficients, etc.) to a set of outputs (i.e., water levels,
concentrations, fluxes, etc.). The transformation is accomplished by solving a system of
partial differential equations by numerical methods. The results of the model are made
applicable to the specific site of interest by choosing input parameters and boundary
conditions which represent the site. Numerical solution techniques often allow these
parameters to vary over time and the model domain. Flexibility in modeling
heterogeneous systems is achieved at the expense of needing a computer to solve the
equations and having to supply proper values of the input parameters for each location
in the model. Many different values of the parameters may be needed to capture the
behavior of the real system. In addition to the parameter values themselves, initial and
boundary conditions are required for solution of the model equations. These must be
selected to correspond to hydrogeologic conditions and pollutant loading conditions at
the site. Selection of these is discussed below, followed by a discussion of some field
techniques for parameter identification.
Initial and boundary conditions
The solution of differential equations requires a certain number of imposed
conditions, called initial and boundary conditions. These make the general solution of
the differential equation apply to the specific situation modeled. The initial condition for
transient problems defines the initial data in the domain from which the solution begins.
Boundary conditions are applied to define suitable conditions at the edges of the
domain modeled. Numerical models utilize several types of boundary conditions.
Some of the most common are fixed potential, fixed flux or mixed. For ground-water
flow, a fixed potential boundary condition might be applied where a stream can be
treated as an unlimited source of water to an aquifer. Since it is unlimited, no aquifer
stress changes the head at the river. A special type of fixed flux condition is the no-flow
boundary condition. As its name implies, at this boundary there is no flow, and it could
represent a nearly impermeable formation that bounds an aquifer.
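The way fixed-head and no-flow conditions enter a numerical solution can be sketched
for one-dimensional steady confined flow with uniform recharge; the transmissivity,
recharge, geometry, and stream head below are hypothetical.

    # Minimal sketch: 1-D steady confined flow, fixed head at the left
    # boundary (e.g., a fully penetrating stream) and no flow at the right.
    # Transmissivity, recharge, and geometry are hypothetical.
    import numpy as np

    n, length = 21, 1000.0          # nodes, domain length (m)
    dx = length / (n - 1)
    T = 0.01                        # transmissivity (m2/s)
    W = 1.0e-8                      # recharge rate (m/s)
    h_stream = 50.0                 # fixed head at x = 0 (m)

    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0                   # fixed-head (Dirichlet) boundary
    b[0] = h_stream
    for i in range(1, n - 1):       # interior nodes: T*h'' + W = 0
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
        b[i] = -W * dx**2 / T
    A[n - 1, n - 2], A[n - 1, n - 1] = 2.0, -2.0   # no-flow via reflected node
    b[n - 1] = -W * dx**2 / T

    heads = np.linalg.solve(A, b)
    print("head at the no-flow boundary:", round(heads[-1], 2), "m")

The fixed head represents the unlimited stream, and the reflected node enforces the
no-flow condition at the far boundary; the same devices appear, in more elaborate form,
in two- and three-dimensional codes.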
As emphasized by Franke et al. (1987), in an ideal situation all of the
boundary conditions applied to the model would correspond to a hydrogeologic feature
of the aquifer. In many cases, however, the model boundary must be selected based
on other considerations. A ground-water divide is a typical application of a no-flow
boundary. Ground-water divides, however, often can move in response to pumping or
other stresses. If used as a boundary condition, these should be selected such that
their possible motion has no significant effect on the area of interest to the model when
the stresses on the model change. Since this boundary condition is selected in a
somewhat arbitrary fashion, it may not be suitable for all stresses applied to the model.
When this is the case, then the model domain should be extended until an appropriate
boundary is reached. Another example is a stream that is treated as a constant head
boundary. If the stream flow is limited, then large pumping rates may dry it up and it
may not act as an unlimited source. As noted by Franke et al., sensitivity analysis
should be applied to each boundary condition that is not based on a fixed hydrologic
feature of the aquifer. From these analyses, stress limits are determined that can be
suitably modeled.
The origin of the pollutants often represents a difficult aspect of modeling. In
many cases there is little or no information available on loading rates, durations or
concentrations of the pollutants that are leaked into an aquifer. This occurs because
contamination is often detected at some point downgradient from the source. The
models require very specific types of boundary conditions that must be known. One
approach to this problem is to treat the source parameters as calibration parameters.
Then the calibration of the model includes fitting of source parameters that give the
overall best fit of the model.
Field measurement of parameters
A major problem in applying models to specific sites is determining the
appropriate input parameter values. Some important parameters are the hydraulic
conductivity or transmissivity, dispersion coefficients, and unsaturated flow parameters.
Field measurement of these parameters will be discussed below. Laboratory
measurements are often considered inferior to field measured values and will not be
discussed here. The immediately following discussion will focus on analytic solutions to
the ground-water flow equations that have an important role in aquifer test analysis. For
many measurement techniques, field data are compared with an analytical solution that
depends on one or two parameters. Procedures developed to invert the analytical
solution then give the values of the parameters. Brief discussions follow on field
techniques for parameter identification for hydraulic conductivity, dispersion, and
unsaturated flow.
Hydraulic conductivity
Aquifer tests are often performed for the purpose of determining field values
of aquifer hydraulic conductivities and storativities. Analyses of the field test data are
based upon analytical solutions for radial flow toward wells under a variety of
circumstances. The analytic solutions are exact mathematical representations of
drawdowns produced by pumping at a given rate that can usually be manipulated
without recourse to computer usage. These solutions include homogeneous confined
aquifers, leaky aquifers, leaky two-aquifer systems, and homogeneous unconfined
aquifers (see Freeze and Cherry, 1979, pp. 315-327), and fractured systems (Streltsova-
Adams, 1978). From these, inverse solutions for parameter identification, using
graphical methods, have been developed and are widely used. Aquifer properties are
determined by matching the drawdown data to the exact solution, which is represented
by a type curve. In principle, because only one or two parameter values represent the
simplified geometry used for the analytic solution, a water level record in an observation
well can be used to determine uniquely the model properties.
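As an example of such an exact solution, the Theis drawdown for a confined aquifer can
be evaluated in a few lines by using the exponential integral for the well function; the
pumping rate, aquifer properties, and observation distance below are hypothetical.

    # Minimal sketch: Theis drawdown for a fully penetrating well in a
    # homogeneous confined aquifer. Parameter values are hypothetical.
    from math import pi
    from scipy.special import exp1   # well function W(u) = exp1(u)

    Q = 0.01      # pumping rate (m3/s)
    T = 5.0e-3    # transmissivity (m2/s)
    S = 2.0e-4    # storativity (-)
    r = 30.0      # distance to observation well (m)

    def theis_drawdown(t_seconds):
        u = r**2 * S / (4.0 * T * t_seconds)
        return Q / (4.0 * pi * T) * exp1(u)

    for hours in (1.0, 10.0, 100.0):
        print(hours, "h:", round(theis_drawdown(hours * 3600.0), 3), "m")

In a type curve analysis, this relationship is run in reverse: observed drawdowns are
matched against the curve to recover T and S.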
Several problems can occur in the application of these methods. First, real
systems may not match exactly the conditions required by the exact solution. For
example, the Theis method for confined aquifers is derived for homogeneous aquifers
of constant thickness and infinite areal extent. Aquifer thicknesses, however, commonly
vary and are not as uniform as required by the exact solution. No aquifer is of unlimited
extent but practically, the size of the aquifer need only be large enough so that the
effects of the pumpage do not extend beyond the boundary of the aquifer.
Second, when a pump test is performed in a realistic aquifer, type curve
analysis applied to different observation wells may produce different parameter values.
This result indicates that the aquifer is heterogeneous and the conditions for the exact
solution are not met. Moving the pumping well may lead to a different set of parameter
values. In such a case, it may be appropriate to use a numerical model to determine
aquifer parameters, subject to the discussion below.
One of the reasons, however, that these methods can be successfully
applied to field data is that the solutions are relatively insensitive to spatial variations in the transmissivity.
As a result, an apparent uniform transmissivity can be determined for a non-uniform
aquifer. Predicted water levels or heads for the equivalent uniform aquifer will be close
to the observed values. Using such solutions to determine aquifer properties, then
presents the problem that although observed heads are matched fairly well, the
distribution of the transmissivity variation is not determined. For safe ground-water yield
problems, such variation may be unimportant for many situations. On the other hand,
the migration of a pollutant may be very sensitive to the undetected variations in
transmissivity, since the pollutant can move most readily through high transmissivity
zones. The distribution of these zones may control the pollutant's distribution to a
greater degree than they impact the ground-water motion.
A related problem is that type curves for various situations resemble one
another. Due to insensitivity to parameter values, different sets of parameter values
could be obtained from plotting the observed data on different type curves. This
problem is compounded by the fact that there may be uncertainties in the data, which
may result from observation errors and water level fluctuations in the observation wells.
In such cases, the selection of the appropriate type curve may be difficult.
As an alternative to pump tests, tracer tests can be used to determine
ground-water flow rates directly. Since the hydraulic conductivity and gradient are
related by Darcy's law to the flow rate, the tracer tests can provide information
equivalent to that obtained from the pump tests. The tests are performed by injecting
artificial tracers (ones that do not occur naturally in the flow system) and measuring their
concentrations in observation wells. Inorganic salts, radioisotopes, organic complexes
and fluorescent dyes have been used for tracers (Freeze and Cherry, 1979).
Dispersion of the tracer causes the concentration in the observation well to increase
gradually; and after correcting for this effect, the ground-water velocity can be
calculated. Some disadvantages of the tracer tests are that injection wells and
observation wells must be close together, since ground-water flow rates are relatively
low. The resulting velocities are valid only near the wells. Heterogeneities in the
formations may cause fast migration, or may prevent the tracers from appearing in
observation wells because of preferential flow paths. Adding more wells increases the chances of
detecting the tracer, but also increases the cost of the test and chances of distorting the
flow field (Todd, 1980). Freeze and Cherry (1979) note that due to the time and effort
needed, these tests are not often performed.
A similar method that finds wide usage in Europe is the borehole- or point-
dilution method for determining the average horizontal velocity. A tracer is introduced
into a packed-off interval of the test well. The tracer concentration is measured with
time as the flowing ground water causes the tracer in the borehole to be diluted. A
simple analytic solution for the concentration is used to determine the ground-water flow
velocity. Other methods such as borehole flow meters also exist to determine the
vertical distribution of the hydraulic conductivity of a single well.
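The analytic solution used in the point-dilution method reduces to a simple exponential,
so the Darcy velocity can be back-calculated from two concentration readings; the
borehole dimensions, readings, and flow-distortion factor below are hypothetical.

    # Minimal sketch: apparent Darcy velocity from a borehole (point)
    # dilution test. Dimensions, readings, and the flow-distortion
    # factor are hypothetical.
    from math import pi, log

    radius = 0.05          # borehole radius (m)
    alpha = 2.0            # flow distortion factor (commonly taken near 2)
    c0, c1 = 100.0, 40.0   # tracer concentration at t = 0 and t = t1
    t1 = 6.0 * 3600.0      # elapsed time (s)

    # For a packed-off interval, C(t) = C0*exp(-2*alpha*v*t/(pi*r)),
    # so the Darcy velocity follows from two readings:
    darcy_velocity = pi * radius / (2.0 * alpha * t1) * log(c0 / c1)
    print("apparent Darcy velocity (m/s):", darcy_velocity)

As with the tracer tests discussed above, the value obtained applies only to the immediate
vicinity of the test well.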
Dispersion
The equations of solute transport that are solved in ground-water codes are
derived assuming that the solute migration is due to advection and hydrodynamic
dispersion. Advective transport results from the displacement of the solute caused by
the average movement of the water. The second mechanism, hydrodynamic
dispersion, acts to transport mass from regions of high concentration to regions of low
concentration in a manner analogous to molecular diffusion. Molecular diffusion is in
fact one part of hydrodynamic dispersion, but is often of relatively low magnitude
compared with the total apparent dispersion. Hydrodynamic dispersion results from the
spreading of the solute caused both by varying velocities within individual pores and by
varying velocities due to larger scale geologic heterogeneities. If every detail of the
geologic structure of the medium could be determined, then small dispersivities, on the
order of centimeters, resulting from the pore scale velocity variations would apply to
each geologic unit present. An extremely well characterized system could accurately
predict dispersive transport with small dispersivities. Since there will never be enough
information for total characterization of the site heterogeneities, some of the effects of
varying transport through the heterogeneities become incorporated into measured
values. As the scale of measurement increases (distance between tracer test wells,
etc.), more heterogeneities are sampled and there is more apparent dispersion, leading
to higher measured values. The dispersion coefficient values "contain" error due to the
incomplete characterization of the subsurface. Because of this, values measured in the
laboratory typically will be much too small for field applications.
An important example is that of the layered system. Pollutants move fastest
in the high transmissivity layers. In wells that sample over many layers, there is
apparent dispersion that results from contributions of pollutants to the well that originate
from different layers. The pollutants in the high transmissivity layers reach the well first and
the concentration increases slowly as more layers contribute to the mass in the well. To
an observer at the well, it may appear to be the same as one layer of uniform aquifer
material with a high dispersivity. As before, exact characterization of the layers combined
with small dispersivities would model the system accurately, as opposed to representing
an incompletely characterized system with a large dispersion coefficient. The problem with the latter
approach is that if the observation well were moved, the single value of the dispersion
coefficient would likely no longer fit the observations.
A second problem arises because it is known from studies of transport in
rivers that there is an initial period where the dispersion cannot be treated as
mathematically similar to diffusion. In ground water, this is also recognized, even
though methods have not been developed to treat it. As a result of these two
phenomena, measured dispersivities are known to be scale dependent and proportional
to the length of the flow system. Since they are not constant, dispersion coefficients are
not a fundamental parameter of the ground-water flow system. Dispersion is an issue
that is far from being resolved and must currently be viewed as a limitation of solute
transport modeling.
Field dispersivities (the porous medium characteristic component of the
dispersion coefficient) can be determined from tracer tests. The dispersivity is
determined by fitting an analytic or numerical model to concentration data in
observation wells. This procedure is analogous to conductivity and storativity
determinations via a pump test. Four types of tracer tests exist for measuring
dispersivity:
-single well injection-withdrawal tests
-natural gradient tests
-two well recirculating injection-withdrawal tests
-two well pulse tests
In each, a nonreactive tracer is injected and its concentration in the observation well is
recorded. The concentration vs. time data is analyzed to estimate, among other things,
the aquifer dispersivity. Guven et al. (1986) outlined several methods. As noted by
several authors (Hoopes and Harleman, 1967; Mercado, 1967), much of the full aquifer
dispersion noted in the tests can be attributed to differential advection between the
wells and/or variable hydraulic conductivities in the aquifer. Guven et al. (1986) show
that the concentration history in the observation well is very sensitive to the vertical
distribution of hydraulic conductivity.
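Fitting a dispersivity to breakthrough data amounts to a small nonlinear regression
against an analytic solution; in the sketch below the travel distance, velocity, observation
times, and concentrations are all hypothetical, and a one-dimensional constant-source
solution is used as the fitting model.

    # Minimal sketch: fitting a longitudinal dispersivity to breakthrough
    # data from an observation well. The data and velocity are hypothetical.
    import numpy as np
    from scipy.special import erfc
    from scipy.optimize import curve_fit

    x = 25.0      # distance from injection to observation well (m)
    v = 0.5       # ground-water velocity (m/d), assumed known

    def breakthrough(t, dispersivity):
        D = dispersivity * v
        a = (x - v * t) / (2.0 * np.sqrt(D * t))
        b = (x + v * t) / (2.0 * np.sqrt(D * t))
        return 0.5 * (erfc(a) + np.exp(v * x / D) * erfc(b))

    # Hypothetical observed relative concentrations C/C0 at these times (d).
    t_obs = np.array([30.0, 40.0, 50.0, 60.0, 80.0, 100.0])
    c_obs = np.array([0.02, 0.15, 0.50, 0.80, 0.97, 1.00])

    alpha_fit, _ = curve_fit(breakthrough, t_obs, c_obs, p0=[1.0],
                             bounds=(0.05, 50.0))
    print("fitted dispersivity (m):", round(float(alpha_fit[0]), 2))

As the discussion above emphasizes, the value obtained is tied to the scale of the test and
to the (unresolved) vertical variation in hydraulic conductivity between the wells.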
Unsaturated systems
In unsaturated flow, the capillary pressure-saturation and hydraulic
conductivity-saturation (or hydraulic conductivity-capillary pressure) relationships are
used to extend Darcy's Law to multiphase flow. Methods exist for fitting these
relationships to field data for soil moisture. In the unsaturated zone, however, there is
no analog of the pumping test to measure response to stresses over a large area.
Here, the available methods are small-scale in situ measurement techniques that avoid some of the
problems inherent in using laboratory derived parameters. Klute (1972) reviewed
various methods for measuring unsaturated flow parameters, both in the laboratory and
the field.
Tensiometers are used to determine the capillary pressure-saturation
relationship. The tensiometer consists of a porous cup connected to a manometer, all
of which is filled with water. Because the pressure in the soil is less than atmospheric,
water is drawn out of the tensiometer, causing a pressure drop to be indicated by the manometer
(Hillel, 1980). The corresponding water saturations are typically nondestructively
measured by neutron probes. Schuh et al. (1984) describe a typical field installation to
measure the capillary pressure to depths of 1.3 meters. In the field, tensiometers were
buried at 15 cm intervals, a neutron probe access tube was installed nearby, and the
site was flooded with water. During subsequent drainage, measurements were made,
yielding the capillary pressure for various saturations at each depth. For such methods,
the installation of the tensiometers and neutron probe access tubes may disturb the
natural soil and so introduce error into the determination. Also the methods are limited
to near surface soils due to the need to install the tensiometers.
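One way to use the paired tensiometer and neutron-probe readings is to fit a parametric
retention curve to them; the sketch below uses a commonly applied van Genuchten type
function purely as an example, and all data values and initial guesses are hypothetical.

    # Minimal sketch: fitting a van Genuchten type retention curve to
    # paired capillary pressure / water content measurements.
    # All data values and the functional form choice are illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    def retention(h, theta_r, theta_s, alpha, n):
        """Water content as a function of capillary pressure head h (cm)."""
        m = 1.0 - 1.0 / n
        se = (1.0 + (alpha * h) ** n) ** (-m)
        return theta_r + (theta_s - theta_r) * se

    # Hypothetical tensiometer (cm of suction) and neutron-probe data.
    h_obs = np.array([10.0, 30.0, 60.0, 100.0, 200.0, 400.0, 800.0])
    theta_obs = np.array([0.42, 0.40, 0.36, 0.31, 0.25, 0.20, 0.17])

    p0 = [0.10, 0.43, 0.02, 1.5]   # initial guesses: theta_r, theta_s, alpha, n
    params, _ = curve_fit(retention, h_obs, theta_obs, p0=p0,
                          bounds=([0.0, 0.3, 1e-4, 1.01], [0.3, 0.6, 1.0, 5.0]))
    print("theta_r, theta_s, alpha, n =", np.round(params, 3))

The fitted curve is only as representative as the limited range of suctions and saturations
actually observed in the field during the test.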
The hydraulic conductivity is obtained by the so-called instantaneous profile
method originally developed for laboratory columns by Watson (1966). In this method,
the instantaneous hydraulic conductivity is determined by a solution of the unsteady
mass conservation equation. The tensiometer data is used in this determination (e.g.,
Hillel et al., 1972). As noted by several authors, this method is expensive and time
consuming, as a determination may take several months to complete. The resulting
curves often do not cover the entire range of water contents, but only the limited range
seen in the field during the time of the experiment. Note that in the laboratory the
curves can be measured over their full range of possible moisture contents. Simplified
methods have been developed based on an assumed capillary pressure-saturation
relationship (Libardi et al., 1980; Ahuja et al., 1980, 1988), and were reviewed by
Schuh et al. (1984). No matter what method is used for determining the unsaturated
hydraulic conductivity, it should be recognized that there is a great deal of variability
both areally and with respect to the depth. Field measurement methods for NAPL curves
have not been developed to date. These curves can be measured in the laboratory
and/or some of their character inferred from the moisture content curves (e.g., Parker
and Lenhard 1987).
Usage of field data in numerical models
One of the reasons that numerical models are used instead of analytic models (i.e.,
models for simplified situations where the solution may be determined exactly) is that
numerical models allow parameters to vary over the domain of interest, while analytic
models often do not. Accurate application of the numerical model, then, may require
that input parameter values be known at many locations or mesh points in the domain,
because of the input parameter variation. Also, fine meshes are often needed to
maintain the internal accuracy of the program. Since subsurface flows are largely
undetectable at the surface, at best there can be only limited observations made.
Typically, wells are used for water level observation and water sampling. Due to
practical limitations in ground-water investigations, wells (and thus
information) are available at only a few locations compared with the total number of mesh
points in the model. Also, the data obtained represent the subsurface conditions at
discrete points in space and time. Knowledge of the geology of the site, inferred from
surface outcrops, well logs and possibly geophysical techniques, is likewise limited. So,
even with the best available technologies for measuring model parameters in the field,
the modeler is faced with the task of assigning input parameters to a large number of
locations in the model. The challenge is to use the available data to determine the
parameter values at each point of the mesh. Intuitively, it can be sensed that there is a
limit to the extrapolation that can be made from a limited amount of data. Note also that
many of the methods for determining parameter values are based on an analytic
solution of the governing equations. Thus, those parameter values, when applied to a
real situation where an analytic solution does not apply, may not be strictly correct.
These may only apply to a limited area around the test well.
Calibration
The way that models are fit to sites is by using the parameters that are
known for certain nodes, guessing parameter values for other nodes, and then
comparing the model output to observed conditions. The parameters are varied until an
acceptable fit of the output is achieved. In practice, this is usually accomplished by a
trial and error procedure. This process is known as calibration, which is defined
formally as a test of a model with known input and output information that is used to
adjust or estimate factors for which data are not available (ASTM, 1984). After
calibration, the model is presumed to represent the real system. A typical example is
the calibration of ground-water flow models to reproduce historical water level records
by varying transmissivities and storativities at various points in the grid. Trials are made
until the modeled heads match observed heads sufficiently well.
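The trial and error procedure can be sketched with a deliberately small example: a
one-dimensional confined aquifer with uniform recharge, a fixed head at one end, no flow
at the other, and a single unknown transmissivity adjusted until the head residuals at a
few observation wells are acceptable. The observed heads, recharge, and geometry below
are hypothetical.

    # Minimal sketch: trial-and-error calibration of a single transmissivity
    # for 1-D steady confined flow with uniform recharge, fixed head at x = 0
    # and no flow at x = L. Analytic heads: h(x) = h0 + (W/T)*(L*x - x*x/2).
    # Observed heads, recharge, and geometry are hypothetical.
    h0, L, W = 50.0, 1000.0, 1.0e-8          # m, m, m/s

    x_obs = [250.0, 500.0, 750.0]            # observation well locations (m)
    h_obs = [50.24, 50.41, 50.52]            # "observed" heads (m)

    def modeled_head(x, T):
        return h0 + (W / T) * (L * x - 0.5 * x * x)

    best_T, best_rms = None, float("inf")
    for trial in range(1, 41):               # candidate transmissivities
        T = trial * 5.0e-4                   # m2/s
        residuals = [modeled_head(x, T) - h for x, h in zip(x_obs, h_obs)]
        rms = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
        if rms < best_rms:
            best_T, best_rms = T, rms
    print("best-fit T (m2/s):", best_T, " rms error (m):", round(best_rms, 3))

In a real application there are many unknown values rather than one, and the fit is
judged against far fewer observations than there are parameters, which is precisely the
difficulty illustrated by the examples that follow.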
A hypothetical example of aquifer calibration was prepared for this report. A
distribution of transmissivities (in ft2/s) was made throughout a
uniform 20 ft thick aquifer. This distribution was taken as the correct transmissivity
distribution. In addition to a small natural flow gradient, a pumping well was
placed in the aquifer. The steady state response of the aquifer was used for calibration.
The model was calibrated by varying transmissivities until the modeled heads matched
the "observed" heads at eight nodes that were taken as observation wells. When the
errors were less than 0.25 ft in each observation well and the range of errors for the
observation wells was from 0.07 to 0.25 ft, then the model was assumed calibrated. For
the calibrated distribution of transmissivities, the average error over every node in the
model was 0.63 ft. This information obviously would not be available in a real situation,
but is useful for checking the calibration process.
Some of the large scale distribution of the transmissivity was detected during
the calibration process, so that some of the high and low permeability zones were
correctly located. Approximating the transmissivity in a given zone depended on there
being an observation well in that zone. At many nodes the model was made to fit the
observed heads, without determining the true values of transmissivity that produced
them. This accounts for the error of 0.63 ft over all the nodes being higher than the
error in the observation wells.
To test the calibration, the stress on the model was changed by changing the
location of the pumping well. If the calibrated model contained the correct transmissivity
values or close approximations to them, then the errors in the observation wells should
remain approximately the same before and after the change. In this case, however, the
average error in the observation well heads increased to 0.47 ft with a range of 0.01 to
3.06 ft. In one observation well, the error increased at least 1000%. The drastic
increase in error in observation well heads indicates that the calibration for the first
situation was not universal. The average error over all the nodes increased somewhat
to 0.84 ft. The relatively smaller percentage increase over all the nodes probably is due
to their higher error under the original conditions.
A more extensive example of this nature is presented by Freyberg (1988)
who had nine groups of graduate students calibrate a ground-water flow model as a
class project. The groups had to determine the aquifer bottom elevation, transmissivity,
and recharge rate from observed water levels in 22 wells. The groups were given error-
free data concerning the water levels in the observation wells, areal geometry of the
aquifer, all boundary conditions, pumping rates and interactions with a river crossing the
aquifer. The information given to the students is much more extensive than would
typically be available in a field application.
Some of the conclusions drawn from the exercise were that the solution of
the inverse problem is not unique even with all the above information supplied. In this
example, 22 pieces of measured information were used to determine over 1300
parameters. Obviously, the data is stretched too thin for a determination of the
parameter values. Different calibration strategies were used by the groups: zoning the
model with relatively large blocks of constant transmissivity vs. fitting the
observed data by adjusting each individual model cell. The best results (i.e., best
prediction of future aquifer response) were found when the zoning technique was used,
even though by adjusting each block a better fit to the observed data was possible. The
best fit obtained in the latter fashion happened to give the worst prediction of future
response.
To eliminate the guesswork inherent in trial and error calibration, various
researchers have applied automated methods to the parameter identification or inverse
problem. (This may be viewed as a mathematical approach to calibration, which would
give an efficient procedure to fit the model to the site.) A recent in-depth review of such
techniques can be found in Yeh (1986). Most of the effort along these lines has been
directed toward determining ground-water flow parameters (transmissivity and
storativity). Only a very limited number of attempts have been made for solute transport
parameters such as dispersion coefficients (e.g., Strecker and Chu, 1986). Some of the
problems discussed above are related to mathematical concepts of uniqueness,
identifiability, and uncertainty. Rigorous definitions of these concepts are given by
Carrera and Neuman (1986). From an intuitive standpoint, it is easy to appreciate that
there is a limit to the extrapolation that is possible from a small set of data. Depending
on the amount of data and the distribution of heterogeneities, it is possible that various
spatial patterns of parameter values could result in the same or nearly the same model
outputs. Calibration of the model does not guarantee that the parameter values
selected represent the true values. Further, the nature of determining the solution of a
differential equation (integration) makes the output smoother than the input. As a result,
variations in input parameters may be difficult to detect from the output. Also, data
taken from the field contain measurement errors, and so the process of determining the
solution to the inverse problem is subject to using data that is not "true". This has been
found to result in instability in some automated methods of solving the inverse problem.
Geostatistics
Geostatistical techniques offer a powerful set of tools to interpret field data
and statistically characterize a field site. Geostatistics provide an alternative to
calibration and permit using the data normally reserved for calibration to evaluate the
implementation of the model at the site. However, implementing a geostatistical
approach for site characterization frequently requires more field-obtained data than is
commonly collected at sites where models are implemented. Some of the types of
estimation problems (ASCE Task Force Report, 1989) which are ideally suited for
geostatistics include:
1) Estimating the value of a property in space and time. This is important for
developing contour maps which provide a pictorial representation of the spatial
distribution of the property of interest, and estimating the value of the model parameters
at locations required by a model (i.e., at the nodes of a finite-element program or at the
abscissa points of a numerical integration solution). For example, a quantitative
definition of the biomass and community structure of the microbiota in the subsurface
utilizing microbial estimation techniques (White, 1983) could be mapped and used as an
initial condition for an aquifer restoration modeling exercise.
2) Estimating the area or volume of a substance over a known region. This is
important in determining the total mass of contaminant or magnitude of a source term in
the polluted area. Where data have been obtained over various sampling areas or
volumes, then geostatistical techniques can be used to "reduce" the data obtained over
different areas or volumes to a consistent level. Thus, the estimated values for all the
soil and hydraulic properties can be based on the same areal dimension (say for
example equivalent to the size of the elements of a finite element grid system) which is
more meaningful than using parameters, some of which are averaged over several
elements, while others correspond to point measurements which may or may not be
appropriate over the entire element.
3) Estimating the slope of some property at a specified point. For example,
determining the hydraulic gradient which is needed in a calculation of the water velocity
which is in turn used in the solute transport equation.
Geostatistics offer a means whereby the detailed site characterization can be
undertaken in a systematic manner and enable the hydrogeologist to describe the
geometry and spatial distribution of the properties of interest over a given area. Due to
the variability exhibited in the field, it is impossible to obtain a "deterministic" or error
free prediction of the required properties throughout the area of interest. Therefore,
some method for estimating the spatial pattern of a property is needed. This need
is recognized in the geostatistical methods, which provide a rational means for
determining the most likely value of a property at a specified location. The estimation
error associated with each likely value can be used to guide further site characterization,
optimizing the use of available resources.
The geostatistical techniques are based on the observation that samples
located close together generally have similar values, and this similarity can be used to
advantage when estimating a property at an unsampled location. The observation is
expressed mathematically by the spatial correlation function, which may take any of
several forms, including a semivariogram function, a covariance function, or an
autocorrelation function. These functions describe the underlying behavior of the
spatially dependent random function. Estimates at an unsampled location are obtained
by evaluating a sum of functions of a subset of the sample values. In the general case,
this provides a nonlinear estimator for the parameter and is termed the disjunctive
kriging estimator. If, however, the functions are taken to be linear, the estimator is the
same as the linear ordinary kriging estimator. A minimal sketch of an ordinary kriging
estimate, using an assumed semivariogram model, is given below.
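The following minimal sketch illustrates the linear (ordinary kriging) estimator described
above for a single unsampled location. The spherical semivariogram model and the
sample coordinates and values are assumed for illustration; in practice the model
parameters (nugget, sill, range) would be fitted to an experimental semivariogram
computed from the site data.

```python
# Minimal sketch of ordinary kriging with an assumed spherical semivariogram
# model; sample coordinates and values are hypothetical.
import numpy as np

def spherical_semivariogram(h, nugget=0.0, sill=1.0, rng=50.0):
    """Spherical model: rises from the nugget to the sill at the range."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0.0, 0.0, g))

def ordinary_kriging(xy, z, x0):
    """Estimate z at x0 from samples (xy, z); returns estimate and variance."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_semivariogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_semivariogram(np.linalg.norm(xy - x0, axis=1))
    sol = np.linalg.solve(A, b)          # weights lambda_1..n and multiplier mu
    weights, mu = sol[:n], sol[n]
    estimate = weights @ z
    variance = weights @ b[:n] + mu      # ordinary kriging variance
    return estimate, variance

# Hypothetical hydraulic-conductivity samples (x, y in meters; ln K values).
xy = np.array([[0.0, 0.0], [40.0, 10.0], [15.0, 35.0], [60.0, 50.0]])
z = np.array([2.1, 2.6, 2.3, 3.0])
est, var = ordinary_kriging(xy, z, np.array([30.0, 25.0]))
print(f"estimate = {est:.2f}, kriging variance = {var:.2f}")
```

Repeating the solution at each node or element of a model grid is the basis for supplying
spatially distributed input parameters, as discussed further below.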
The disjunctive kriging estimator has several advantages over the ordinary
kriging estimator. First, since it is a nonlinear estimator, it can produce more accurate
estimates than ordinary kriging. Also, and more importantly, the disjunctive kriging
estimator can be used to obtain the conditional probability that the value at a point
exceeds a specified level. This can be used to estimate the probability of exceeding
critical values such as a maximum contaminant level (MCL), which in turn supports
locating areas of potential concern and risk analysis.
Ordinary kriging has an advantage over disjunctive kriging in its simplicity
and reduced computational requirements. In general, the two techniques provide
approximately the same accuracy for making estimates; the additional complexity and
computational requirements of disjunctive kriging are therefore not justified for
estimation alone.
However, if the conditional probability that the property of interest is greater than a
specified level is needed, then disjunctive kriging becomes an attractive method for
obtaining this information.
Using geostatistics for site characterization
Part of the site characterization problem is the determination of the spatial
distribution of the soil hydraulic properties and other model parameters. Ideally, this
would include some measure of the estimation error, since it is unlikely that exact
values can be determined at all locations. The spatial distribution is characterized using
a spatial correlation function such as the semivariogram, covariance function or an
autocorrelation function. Using this function and the collected data, estimates can be
obtained using kriging. The most common forms, simple and ordinary kriging, are Best
Linear Unbiased Estimators (BLUE), both of which give an indication of the accuracy of
the estimate, called the kriging variance. It should be noted that the kriging variance
cannot be used to determine the probability that the estimated value exceeds a given
level: the kriging estimator, and hence the estimation variance, is smoothed with respect
to the original random function and would therefore give erroneous results. However,
other kriging techniques, such as disjunctive kriging, can be used to determine the
probability of exceedance.
Contour maps can be generated which give a pictorial representation of each
property's distribution in space and time. Using block kriging, the input parameters to a
finite element or finite difference model can be obtained directly from the spatial
correlation structure and the sample data. The values obtained correspond to the
average value of the property over each element, which is more appropriate than using
point values or values averaged over larger blocks.
Range of influence
The correlation structure can be used to determine the average distance for
which pairs of samples are correlated. This is termed the range of influence and can be
expressed by various measures, described below. To investigate the distances for
which a random function is spatially correlated, three spatial correlation measures can
be used. The first two are integral scales which were used by Bakr et al. (1978) and
Russo and Bresler (1981), respectively. The third measure is the zone of influence
described by Gajem et al. (1981), which provides a conservative test of the hypothesis
that the autocorrelation is zero (Davis, 1973). These three measures indicate the
distances at which (and beyond which) samples are independent. This information can
be used to space samples so that spatial dependence is either captured or avoided,
depending on the intent of the investigation. If the samples are independent, the number
of samples needed to estimate the mean of the random function within a specified
deviation can be determined using classical statistical theory, as described below. A
minimal sketch of estimating a zone of influence from sample data is given below.
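As one simple way to approximate a zone of influence, the sketch below computes the
sample autocorrelation of an evenly spaced transect at increasing lags and reports the
first lag at which the autocorrelation is not significantly different from zero, using the
approximate two-standard-error bound. This is an illustrative approximation, not the
specific procedures of Bakr et al. (1978), Russo and Bresler (1981), or Gajem et al.
(1981); the measurements are hypothetical.

```python
# Minimal sketch: estimate a zone of influence as the first lag at which the
# sample autocorrelation of an evenly spaced transect is not significantly
# different from zero, using the approximate +/- 2/sqrt(N) bound.
import numpy as np

def zone_of_influence(values, spacing):
    """Return the lag distance where autocorrelation first drops to 'zero'."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    n = len(z)
    denom = np.sum(z * z)
    bound = 2.0 / np.sqrt(n)                       # approximate zero test
    for lag in range(1, n):
        r = np.sum(z[:-lag] * z[lag:]) / denom     # sample autocorrelation
        if abs(r) < bound:
            return lag * spacing
    return n * spacing

# Hypothetical water-content measurements every 2 m along a transect.
obs = [0.18, 0.19, 0.21, 0.23, 0.24, 0.24, 0.23, 0.21, 0.19, 0.18,
       0.18, 0.20, 0.22, 0.23, 0.24]
print(f"approximate zone of influence: {zone_of_influence(obs, 2.0)} m")
```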
Determining sample numbers
Geostatistics provide a means of determining the number of samples necessary
to describe the spatial distribution of a property with a specified level of accuracy, which
can be used to manage sampling costs. For a normally distributed random function, the
number of samples necessary to estimate the mean value with a given level of certainty
can be determined using classical statistical theory, as sketched below.
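The classical calculation referred to above can be sketched as follows; the standard
deviation, tolerance, and confidence level are assumed values, and the samples are
assumed independent and normally distributed.

```python
# Minimal sketch of the classical sample-size calculation for estimating a
# mean within a tolerance d at a given confidence level, assuming independent,
# normally distributed samples. The inputs below are hypothetical.
import math

def samples_needed(std_dev, tolerance, z=1.96):
    """n such that the mean is estimated within +/- tolerance (z=1.96 ~ 95%)."""
    return math.ceil((z * std_dev / tolerance) ** 2)

# Example: field standard deviation of 0.6 (ln K units), tolerance 0.2.
print(samples_needed(0.6, 0.2))   # -> 35 samples
```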
Classical theory may also be used for a spatially dependent and normally
distributed random variable if the samples are taken from locations which are separated
from each other by a distance sufficient to make the samples independent. This spacing
can be chosen using either the integral (i.e., correlation) scales, if the minimum distance
between samples is desired, or the range of the semivariogram, if a somewhat larger,
more conservative spacing is acceptable. In general, the correlation scales suggest that
independence between samples occurs at distances less than the range of the
semivariogram. Therefore, knowledge of the semivariogram or the correlation scales is
important to assure that the samples used are spatially independent. If the
semivariogram is unknown and samples are collected randomly, it may be incorrectly
assumed that a set of independent values has been obtained.
Geostatistics can also be used to identify areas where insufficient data have
been collected, using the information provided by the estimation variance. If a maximum
acceptable level for the estimation variance is prescribed, areas whose estimation
variances exceed that level can be identified and additional samples taken there. In this
manner, geostatistics can be used to find the best location for an additional sample, as
sketched below.
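A minimal sketch of this use of the estimation variance is given below; the
kriging-variance grid and the acceptable level are hypothetical and would in practice
come from a kriging analysis such as the one sketched earlier.

```python
# Minimal sketch of using a (previously computed) kriging-variance map to flag
# areas needing additional samples and to pick the next sample location.
import numpy as np

kriging_variance = np.array([[0.12, 0.25, 0.41],
                             [0.18, 0.33, 0.52],
                             [0.15, 0.22, 0.37]])   # one value per grid block
acceptable_level = 0.35                             # assumed maximum variance

needs_sampling = kriging_variance > acceptable_level
next_row, next_col = np.unravel_index(np.argmax(kriging_variance),
                                      kriging_variance.shape)
print("blocks exceeding the acceptable variance:", int(needs_sampling.sum()))
print("next sample location (row, col):", (next_row, next_col))
```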
It is also important to know how many samples are required and their best
placement so that an optimal representation of the spatial correlation structure can be
obtained. There is no hard and fast rule to determine this information and it is usually
best to undertake an initial sampling to determine the general behavior of the spatial
correlation for a given property. Once this has been accomplished, a trial spatial
correlation function can be determined and geostatistical principles used either to 1)
determine the best locations for the remaining samples so as to reduce the estimation
error, or 2) determine where samples should be placed to optimize the construction of an
improved correlation structure. It should be pointed out that these may be conflicting
objectives: placing samples to reduce the estimation error may not yield the best
locations for determining the "optimal" correlation structure. This is an area of
geostatistics which requires further research.
requires further research. In general, the type of investigation dictates which approach
would be used. For practical applications, reducing the estimation error would have
priority over optimizing the spatial correlation structure, and as the number of samples
increases, the sample correlation structure would approach the true value for the
population.
Validation
Model validation is defined as the comparison of model results with data
independently derived from laboratory experiments or observations of the environment
(ASTM, 1984). The goal of validation is to determine how well the model describes the
actual behavior of the system. The emphasis in the ASTM standard is on validation of
the models themselves. An ongoing effort in this regard is the international hydrologic
code intercomparison (HYDROCOIN) project organized by the Swedish Nuclear Power
Inspectorate. In this study, codes for ground-water flow are being utilized to make
safety assessments for proposed nuclear waste repositories. The project includes
verification, validation of the codes using field experiments, and sensitivity and
uncertainty analyses (reported in IGWMC Newsletter June, 1988).
Here a different perspective is taken: model validation for a field site is as
dependent on the way the model is applied to the site as it is upon the model's internal
workings. The emphasis here is not upon trying to determine overall validity of certain
models, but rather upon the validity of applying a certain model to a certain site. When
considering the application of a model to a field situation, validation must include
evaluation of the chosen boundary conditions, recharge rates, leakage rates, and
pumpages, as well as the parameters of the model, all of which are highly site specific
and will have an impact on the validity of model results. Appropriateness of model
assumptions is assumed to have been addressed in the selection of the model for the
site in question. If, in evaluating model validity, it is determined that the appropriate
processes are not included in the model, then another model must be substituted. The
original model may still be valid for other situations to which it is properly applied.
A simple example of the latter is the usage of vertically-averaged
concentrations and aquifer properties in a two-dimensional solute transport model. This
model could be applied to an aquifer consisting of a single geologic material whose
properties vary only in the horizontal plane. When the pollutants are uniform over the
thickness of the aquifer, it may be possible to show the model to be valid by comparing
measured and predicted concentrations and heads. At another site, where the aquifer
consists of coarse-grained materials interspersed with clay lenses, that same model
could be applied by averaging the hydraulic conductivities of the layers and using a
value that reproduces measured heads in the aquifer. Thus the model may be judged
to be valid for predicting the heads. It may be, however, that the model cannot be
validated with respect to concentration predictions, because significant inhomogeneities
were neglected. In this situation the model is not valid, but this is a determination that
does not affect its validity for other situations. An overall judgement of the model's
validity cannot be made from either application, because of the importance of conditions
specific to each problem.
A determination of model validity can be made by three methods listed
below, based on ASTM (1984):
1. Statistical Validity. Statistical measures are used to determine if a given
set of model outputs belongs to the same statistical distribution as the
measured data. A statistical measure of validity is desirable because of
instrumental and sampling errors and various other uncertainties in
obtaining field data. An amount of measured data sufficient for statistical
analysis is required for application of this method. The amount of data
required may be on a par with that obtained from intensive field studies (e.g.,
Mackay et al., 1986).
2. Deviate validity. In cases where sufficient data are not available for
statistical determinations, a quantitative determination of validity can still be
made. In this case, deviate validity is determined as a measure of the
difference between measured and calculated outputs. If the measure is less
than a level determined for acceptance, the model is judged valid. This
measure will probably be the most commonly applicable for contaminated
sites; a minimal sketch of such a check follows this list.
3. Qualitative validity. Purely subjective judgments of validity can be made
from a comparison of measured and calculated outputs. Here qualitative
agreement between model results and the observed data is expected. This
measure may be applicable to poorly characterized sites, or for situations
where only crude results are needed.
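As an illustration of the deviate criterion (item 2), the sketch below compares measured
and simulated heads using a root-mean-square error. The report does not prescribe a
particular measure or acceptance level, so both are assumed here for illustration, as are
the head values.

```python
# Minimal sketch of a deviate-validity check using root-mean-square error
# against a pre-selected acceptance level. All values are hypothetical.
import math

def rms_error(measured, simulated):
    """Root-mean-square difference between paired measured and simulated values."""
    pairs = list(zip(measured, simulated))
    return math.sqrt(sum((m - s) ** 2 for m, s in pairs) / len(pairs))

measured_heads = [10.2, 9.8, 9.5, 9.1, 8.7]    # m, observation wells
simulated_heads = [10.4, 9.9, 9.3, 9.2, 8.5]   # m, model output at same wells
acceptance_level = 0.3                         # m, chosen before the comparison

error = rms_error(measured_heads, simulated_heads)
print(f"RMSE = {error:.2f} m ->",
      "valid by the deviate criterion" if error <= acceptance_level
      else "not valid by the deviate criterion")
```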
Ideally, some of the data from the site investigation would be reserved for
validation and not used for parameter identification. The reserved data would then be
compared with model predictions and appropriate modifications to the model could be
made to reflect the enhanced understanding of the site. Because of limited availability
of data, however, all site data are often used for calibration. Validation data then come
only from comparing model predictions with later observed conditions. In this situation
less confidence can be placed in the first model predictions than if the ideal validation
procedure had been followed.
The validity should also be consistent in that the model should be valid under
varying input data and comparison data. This obviously applies to situations where the
model is applied to different sites, but also to cases where certain inputs may be
changed at a given site. For example, the transmissivity calibration presented above
did not achieve consistent validity, since when the location of the pumping well was
changed the model response became invalid by the deviate validity criterion selected.
Here, varying the input data (the pumping well location) caused the model to become
invalid. The weakness of the calibration process was revealed, because changing the
stress on the model changed the error between predicted and measured heads.
Another aspect of validity is global validity, in which each output variable is
shown to be valid for the modeling. In the hypothetical example above, where a two-
dimensional model was applied to a layered site, the heads could be judged valid while
the concentration distribution turned out to be invalid. This may be a fairly common
situation, as the heads are relatively insensitive to variations in transmissivity.
Examples
This section contains six examples which illustrate aspects of field
application of a model that are discussed above. Most of the examples are presented
briefly with emphasis on field validation.
1) Alternatives comparison with unvalidated model
Chapelle (1986) simulated brackish water intrusion in a 15-square-mile
portion of the Patuxent aquifer near Baltimore, Maryland. Chapelle used the USGS
model (Konikow and Bredehoeft, 1978), calibrated to a 130-year pumping record and
extensive water level record. Historical information was used for the transmissivity
distribution and storage coefficient. Porosity was estimated from geophysical logs.
Calibration included varying the leakances of the confining beds and the dispersivities of
the aquifer material. Changes in leakance in certain parts of the modeled area were
checked against geological evidence, providing justification for the calibration values. A
geologic investigation for a tunnel excavation showed that a filled river channel
breached the upper confining layer. The breach allowed communication between the
ground water in the Patuxent and the ocean in the area where the calibration process
indicated high leakage. Such information is desirable because it provides a physical
reason for parameters that are often varied arbitrarily to fit measured water levels.
After calibration, the model was used to predict the extent of brackish water
contamination of the aquifer. As Chapelle states, there was uncertainty in the model
predictions because of the assumptions and simplifications and the limited amount of
site data available. Parameter information from the literature was used in this modeling,
so close agreement between observed and modeled concentrations should not be
expected. The model results were not validated, since field data were not used to check
the predictions. The model was used to investigate the effects of various alternative
management strategies for the water supply. Given that the model represents the
aquifer to some degree of approximation, the usefulness of the unvalidated model lies in
comparisons between the effects of various management strategies on the aquifer.
Such comparisons can indicate the probable response relative to a baseline
case that is also generated by the model.
2) Qualitative validity
Hillman and Verschuren (1988) developed a model for two-dimensional
saturated-unsaturated flow to study the effects of forestry practices on soil water.
Validation of the model was attempted, using data collected earlier from a field
experiment conducted by the U.S. Forest Service. The effects of a single rainstorm
were simulated. The actual system produced some runoff from the site, while the
simulations produced none. The model can be judged invalid by the qualitative criteria
because the observed behavior was not duplicated by the model. Hillman and
Verschuren give some possible reasons for the discrepancy: the soil layers were treated
as uniform and isotropic in the model, and flow through macropores in the real system
was not modeled. Also, the authors treated a multilayer system as if it contained only
two layers.
The model then may not include all phenomena that are important in simulating this
experiment. Also, since the field experiment was not designed to validate this model,
not all of the data that were needed for the model were measured. In this particular
experiment, not enough measurements were made for validating a detailed model. The
last two items represent common problems in using data from experiments that were
designed for other purposes, and illustrate that data collection for modeling field sites
needs to be based at least partially on the requirements of the model.
3) Ground-water flow model reevaluation
Konikow (1986) reviewed the modeling of an aquifer in central Arizona
(Anderson, 1968). The model was an electric analog, calibrated with 40 years of head
data. Predicted heads, for the period 1965 to 1974, were considerably lower than the
actual observed heads for this time period. Some of the error is attributable to declines
in net withdrawals over the time period. Understandably, future actions cannot be
predicted with certainty. Konikow found low correlation, however, between pumpage
error in the model and error in measured heads. Other reasons cited for the errors are
individual water-level measurements that did not reflect the local static water table,
water produced by land-surface subsidence, and the inherently three-dimensional
nature of the aquifer, which was not represented in the two-dimensional model. The
original analog model no longer existed in 1986. Even if it had, other techniques would
need to be used to incorporate the new information into the modeling, because of the
limitations of electric analog models.
4) Solute transport model reevaluation
Konikow and Person (1985) investigated the earlier application of a
numerical solute transport model (Konikow and Bredehoeft, 1974) to a stream-aquifer
system in an 11-mile reach of the Arkansas River Valley in southeastern Colorado.
Originally, the model was calibrated with dissolved solids data for 1971 and 1972. A
continuous increase in the average annual dissolved solids concentration was
predicted. For 1982 the model predicted a mean concentration of about 2700 mg/l.
The observed basin-wide concentration in 1982 was, however, more nearly 2200 mg/l,
which indicates that the predicted increase in dissolved solids concentration did not
occur. The observed 1982 value was only slightly higher than the 1971 and 1972
values. A judgement was made, based implicitly on deviate validity criteria, that the
prediction was not correct and thus the model invalid. The calibration comparison was
made with average basin-wide concentrations, which were determined by processes
not without uncertainty themselves.
The analysis performed by Konikow and Person (1985) indicates that the
dissolved solids concentration was experiencing a short-term increasing trend during
the model calibration period. Reasons proposed for this were related to the increased
dissolved solids concentration in the river as the average annual flow decreased over
the period 1971 to 1973, and the usage of higher dissolved solids ground water for
irrigation during this period of low flow in the river. The original model failed to predict
the 1982 concentrations, because the increase in concentration from 1971 to 1972 was
taken to represent the long term trend, instead of phenomena associated with the flow
in the river. The model, although calibrated to match the 1971 to 1972 data, was not
representative of the real system as it did not include the aquifer-river interaction.
A common problem in both of the preceding cases is that calibrating the
model to historic data produced models that could duplicate the data but did not truly
represent the system being modeled. The initial calibrations were not adequate to
assure that the model selected for the site and the parameters derived from calibration
could actually represent the site. In both cases, the re-analysis revealed physical
interactions that were not apparent initially or not considered. Although selecting
appropriate simplifications is important in modeling, oversimplification can lead to
significant prediction error. In the analog model case, the state of the art at the time
resulted in a limitation of the phenomena that could be included in the model.
5) Modeling aquifer remediation
A contaminated site where several organic chemicals were found in the
ground water was simulated by Freeberg et al. (1987). At this site the chemicals had
leaked intermittently from an underground storage tank during some unknown period of
its 15-year usage. The USGS MOC model was calibrated using 1.5 years of observed
data and used to predict the behavior of trichloroethylene (TCE), the chemical present
at the highest concentrations, in an actual recovery system. Over 100 runs were
required to calibrate the model to the 210-square-meter site.
Since in this case the rates and durations of TCE releases were unknown,
the source of the contaminant was treated as an injection well. The mass flux and
duration were selected to result in the same mass of TCE in the aquifer as estimated in
the field. These two unknowns were treated as calibration parameters. Calibration of
the model resulted in a constant injection rate and injection concentration for a duration
of one half year. Freeberg et al. report that longer durations of injection resulted in
plumes that were longer than the observed plume. They note that this representation of
the source is probably a major source of error in the simulations. Also treated as
calibration parameters were the location of a constant head boundary condition and the
head associated with it. This constant head boundary was chosen to drive flow across
the site in the direction observed in the site investigation.
Comparison of observed concentrations in 13 wells, six of which had
observed concentrations greater than 1 µg/l, with simulated concentrations for the
monitoring period resulted in qualitative agreement between the concentrations. Errors
in concentration ranged from essentially zero outside the boundary of the plume to
18% to 60% where the measured concentration was on the order of 100-1000
µg/l. Three wells had errors large enough that they were not considered to have a reasonable
match. The authors gave the following possible reasons for the error: oxidative or
hydrolysis reactions of TCE which were not included in the model, inexact location of
the observation wells in the model due to grid placement, geologic heterogeneities not
simulated, and the aforementioned treatment of the source. Note that apparently all of
the site data were used for calibration and that none were reserved for validation. In
this case the independent check of the calibration comes from the remediation activity,
in which different stresses were applied to the system.
Comparison of simulated and actual remedial measures was performed for
data at 0.16 and 0.47 years after the beginning of the remediation. Four of the
observation wells were used as pumping wells so a smaller number of observations
were available for comparison. Freeberg et al. used the simulated concentrations as
the initial condition for the remediation simulation. At 0.16 years, five wells were used,
four of which were on the periphery of the predicted plume; none were located where
the highest concentrations were located. Error in the concentration ranged from 8% to
100%. Errors at two of the wells were attributed possibly to failures in the pumping
system. Only at one of the wells (the one with 8% error) was the simulated
concentration judged to match the observed concentration fairly well. At 0.47 years,
nine wells were used for comparison. Two of these had observed concentrations above
1 µg/l, but both were located away from the areas of highest concentration. The
authors judged that the agreement of the model and the observed concentration was
good. But for both of these situations the areas of highest concentration in the
predicted plume were not in the vicinity of observation wells. Judgement on the
quantitative fit of the model in these areas must be reserved.
Spatial variability of the geologic formation and model parameters is not well
known. The amount of data for calibration and verification is limited. As a result, the
best that might be expected from a deterministic prediction is a qualitative agreement
with observed contaminant migration.
6) Solute transport under uncertainty
Mercer et al. (1983) used the two-dimensional USGS aquifer flow model
presented by Trescott et al. (1976) to model ground-water flow in Niagara Falls, New
York. The intent of the modeling was to assist in the interpretation and prediction of
contaminant behavior at Love Canal. The authors modeled the regional flow in this
area to gain an understanding of the flow at the contaminated site. They identified
-the geologic formations
-recharge and discharge areas which determine model boundary conditions
-long term water table behavior
-the existence of man-made factors complicating ground-water flow
-a pumped-storage reservoir and its piping system
The model of the site was constructed from this information. Mercer et al. (1983)
presented a detailed table comparing the actual site conditions with the model
assumptions. The model was calibrated to match the regional steady flow conditions.
A degree of validation of the model was achieved in that measured heads in the canal
area were underpredicted by at most one foot. Some aspects of the real system
remained unknown, because of data limitations.
Many of the input parameters for the model were estimated from a
combination of observed data, an aquifer test analysis and hydrologic judgement, so
sensitivity analyses were performed showing the influence of model parameters.
Transmissivities, hydraulic conductivities in confining layers, and constant head
boundary conditions were varied to determine the effect of the chosen values on the
model output.
The calibrated flow model was used to predict the effects of interceptor wells
between the canal and the Niagara River, and the non-dispersive travel times of
pollutants from the canal to the river. Monte Carlo analysis was used to generate
confidence limits on the travel times from assumed parameter distributions.
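The general Monte Carlo approach, though not the specific model used by Mercer et al.
(1983), can be sketched as follows for a non-dispersive travel time; the path length and
parameter distributions are hypothetical.

```python
# Minimal sketch of Monte Carlo confidence limits on a non-dispersive travel
# time t = L*n/(K*i), sampling hypothetical distributions for conductivity,
# effective porosity, and hydraulic gradient.
import numpy as np

rng = np.random.default_rng(seed=1)
n_trials = 10_000
L = 500.0                                            # m, assumed path length

K = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n_trials)   # m/d
porosity = rng.uniform(0.20, 0.35, size=n_trials)               # effective
gradient = rng.normal(0.004, 0.001, size=n_trials).clip(min=1e-4)

travel_time = L * porosity / (K * gradient)          # days
low, median, high = np.percentile(travel_time, [5, 50, 95])
print(f"travel time (days): 5% = {low:.0f}, median = {median:.0f}, 95% = {high:.0f}")
```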
Mercer et al. (1983) present a good example of model application where an
extensive effort was made to understand the site, both at the canal and throughout the
region. Where parameters were uncertain, sensitivity analysis was used to assess their
impact. A further step was taken by using statistical techniques to estimate the
uncertainty.
Use of Models for Ground-water Contamination Problems
Because of the difficulties involved in modeling ground-water
contamination problems, there are distinct limitations on the uses to which models may
be put. As in the examples discussed above, limited field data mean that most of the
data will be used to calibrate the model. Dispersion and the physical phenomena it
represents remain major theoretical unknowns; calibration is often the way that this
parameter, which is not a true constant of porous media, is determined. In practice,
geologic complexity is often not fully characterized at sites. Models can currently
include only the simplest chemical and biological reactions, and many of the chemical
and biological processes are unknown at this time. Therefore, any predictions from
models must be considered to contain uncertainty. The tendency to run a computer
program and view its output as "the answer" should be avoided.
Two approaches are suggested at this point. First, sensitivity analysis can
give an indication as to the possible range of outcomes in a certain situation. For
parameters that are not well known, such as dispersion, or those that have a large
impact on the model, varying the input values over a range will give an expected range
of output values (a minimal sketch of such a sweep is given at the end of this section).
A more sophisticated approach is to use a statistical technique (as did Mercer et al.,
1983), which can also assign probabilities to the outcomes. From the model results, the
likely limiting cases can be determined. Decisions can then be made on a basis
appropriate to the situation. In some cases, that basis may be the worst-case scenario
as determined by a limiting case. This
strategy is appropriate for planned facilities where no actual contamination is present,
as well as where contamination currently exists. Second, where contamination exists,
it is only with time that the accuracy of the model predictions can be judged. By
comparing measured and modeled outputs at various times, the model can be
evaluated. Recognizing that knowledge of conditions at any field site and the
processes occurring is imperfect, periodic re-adjustment of model parameters and
reassessment of the site may be needed. When time and resources allow, there
should be periodic data collection for these purposes, including post
decision/implementation monitoring, since this may be the best way to demonstrate
model validity.
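A minimal sketch of such a sensitivity sweep is given below, using a familiar
one-dimensional analytical solution for continuous injection (solutions of this type are
compiled by van Genuchten and Alves, 1982). The velocity, distance, time, and range of
dispersivities are assumed values.

```python
# Minimal sketch of a sensitivity sweep over longitudinal dispersivity using a
# one-dimensional analytical solution for continuous injection into a
# semi-infinite column. All parameter values are hypothetical.
import math

def relative_concentration(x, t, v, alpha_L):
    """C/C0 at distance x and time t for seepage velocity v, dispersivity alpha_L."""
    D = alpha_L * v                                  # dispersion coefficient
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

x, t, v = 100.0, 1000.0, 0.1                          # m, days, m/d (assumed)
for alpha_L in [0.5, 1.0, 2.0, 5.0, 10.0]:            # m, range to be swept
    c = relative_concentration(x, t, v, alpha_L)
    print(f"alpha_L = {alpha_L:5.1f} m  ->  C/C0 = {c:.3f}")
```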
LITERATURE CITED
Abriola, L. M., and G. F. Pinder, 1985. A multiphase approach to the modeling of
organic compounds, 1. Equation development, Water Resources Research. Vol.
21, 11-18.
Ahuja, L. R., J. D. Ross, R. R. Bruce, and D. K. Cassel, 1988. Determining unsaturated
hydraulic conductivity from tensiometric data alone, Soil Science Society of America
Journal. Vol. 52, 27-34.
Ahuja, L. R., R. E. Green, S. K. Chong, and D. R. Nielsen, 1980. A simplified functions
approach for determining soil hydraulic conductivities and water characteristics in
situ, Water Resources Research. Vol. 16, 947-953.
Anderson, T. W., 1968. Electric Analog Analysis of Groundwater Depletion in Central
Arizona. U.S. Geological Survey Water-Supply Paper No. 1860, 21 pp.
ASCE, 1989. Review of statistics in hydrogeology, 1. Basic concepts, Task Committee
on Geostatistical Techniques in Geohydrology-American Society of Civil Engineers,
submitted to ASCE.
ASCE, 1989. Review of statistics in hydrogeology, 2. Applications, Task Committee on
Geostatistical Techniques in Geohydrology-American Society of Civil Engineers,
submitted to ASCE.
ASTM, 1984. Standard practice for evaluating environmental fate models of chemicals,
Annual Book of ASTM Standards. E 978-84, American Society for Testing and
Materials, Philadelphia, 558-565.
Bakr, A. A., L. W. Gelhar, A. L. Gutjahr, and J. R. McMillan, 1978. Stochastic analysis
of variability in subsurface flows: I. Comparison of one- and three-dimensional
flows, Water Resources Research. Vol. 14, 263-271.
Batycky, J. P., F. G. McCaffery, P. K. Hodgins, and D. B. Fischer, 1981. Interpreting
relative permeability and wettability from unsteady-state displacement methods,
Transactions. American Institute of Mining Engineers. Vol. 271, 11296-11308.
Bear, J., 1979. Hydraulics of Groundwater. McGraw-Hill, NY.
Brooks, R. H., and A. T. Corey, 1964. Hydraulic Properties of Porous Media. Colorado
State University Hydrology Paper 3, Fort Collins, Colorado.
Carrera, J., and S. P. Neuman, 1986. Estimation of aquifer parameters under transient
and steady state conditions, 2, Uniqueness, stability, and solution algorithms, Water
Resources Research. Vol. 22, 211-227.
Cary, J. W., J. F. McBride, and C. S. Simmons, 1989. Trichloroethylene residuals in the
capillary fringe as affected by air-entry pressures, Journal of Environmental Quality.
Vol. 18, 72-77.
Chapelle, F. H., 1986. A solute-transport simulation of brackish-water intrusion near
Baltimore, Maryland, Ground Water. Vol. 24, 304-311.
Collins, R. E., 1961. Flow of Fluids Through Porous Materials. Reinhold Publishing
Corporation, London.
Davis, J. C., 1973. Statistics and Data Analysis in Geology. John Wiley and Sons, NY.
Dracos, T., 1978. Theoretical considerations and practical implications on the infiltration
of hydrocarbons in aquifers, Proceedings of the International Symposium on
Groundwater Pollution by Oil Hydrocarbons. International Association of
Hydrologists, 127-137.
Dullien, F. A. L., 1979. Porous Media: Fluid Transport and Pore Structure. Academic
Press, NY.
Environmental Protection Agency, 1988. Selection Criteria for Mathematical Models
Used in Exposure Assessments: Ground-Water Models. EPA/600/8-88/075.
Fick, A., 1855. Annln. Phys.. Vol. 170, No. 59.
Franke, O. L., T. E. Reilly, and G. D. Bennett, 1987. Definition of boundary and initial
conditions in the analysis of saturated ground-water flow systems--an introduction,
Techniques of Water-Resources Investigations of the United States Geological
Survey. Book 3, Chapter B5.
Freeberg, K. M., P. B. Bedient, and J. A. Connor, 1987. Modeling of TCE contamination
and recovery in a shallow sand aquifer, Ground Water. Vol. 25, 70-80.
Freeze, R. A., and J.A. Cherry, 1979. Groundwater. Prentice-Hall Inc., Englewood, NJ.
Freyberg, D. L., 1988. An exercise in ground-water model calibration and prediction,
Ground Water. Vol. 26, 350-360.
Gajem, Y. M., A. W. Warrick, and D. E. Myers, 1981. Spatial dependence of physical
properties of a Typic Torrifluvent soil, Soil Science Society of America Journal. Vol. 46,
709-715.
Grew, K. E., and T. L. Ibbs, 1952. Thermal Diffusion in Gases. Cambridge University
Press.
Guven, O., R. W. Falta, F. J. Molz, and J. G. Melville, 1986. A simplified analysis of two-
well tracer tests in stratified aquifers, Ground Water. Vol. 24, 63-71.
Hampton, D. R., and P. D. G. Miller, 1988. Laboratory investigation of the relationship
between actual and apparent product thickness in sands, In Petroleum
Hydrocarbons and Organic Chemicals in Ground Water: Prevention. Detection and
Restoration. Nov. 9-11, Houston, TX, 157-181.
Helfferich, F., 1962. Ion Exchange. McGraw-Hill, NY.
Hillel, D., 1980. Fundamentals of Soil Physics. Academic Press.
Hillel, D., V. D. Krentos, and Y. Stylianou, 1972. Procedure and test of an internal
drainage method for measuring soil hydraulic characteristics in situ, Soil Science.
Vol. 114, 395-400.
Hillman, G. R., and J. P. Verschuren, 1988. Simulation of the effects of forest cover, and
its removal, on subsurface water, Water Resources Research. Vol. 24, 305-314.
Hoopes, J. A., and D. R. Harleman, 1967. Wastewater recharge and dispersion in
porous media, American Society of Civil Engineers. Journal of the Hydraulics
Division. Vol. 93, 51-71.
IGWMC, 1988. International Ground Water Modeling Center Newsletter. Vol. VII, No. 2,
June.
Keely, J. F., 1987. The Use of Models in Managing Ground-Water Protection Programs.
U. S Environmental Protection Agency, EPA/600/8-87/003.
Kemblowski, M. W., and C. Y. Chang, 1988. Analysis of the measured free product
thickness in dynamic aquifers, In Petroleum Hydrocarbons and Organic Chemicals
in Ground Water: Prevention. Detection and Restoration. Nov. 9-11, Houston, TX,
183-205.
Klinkenberg, L. J., 1941. The permeability of porous media to liquids and gases,
American Petroleum Institute Drilling Products Practices. 200-213.
Klute, A., 1972. The determination of the hydraulic conductivity and diffusivity of
unsaturated soils, Soil Science. 113, 264-276.
Konikow, L. F., 1986. Predictive accuracy of a ground-water model-lessons from a
postaudit, Ground Water. Vol. 24, 173-184.
Konikow, L. F., and J. W. Mercer, 1988. Groundwater flow and transport modeling,
Journal of Hydrology. Vol. 100, pp 379-410.
Konikow, L. F., and M. Person, 1985. Assessment of long-term salinity changes in an
irrigated stream-aquifer system, Water Resources Research. Vol. 21, No. 11, 1611-1624.
Konikow, L. F., and J. D. Bredehoeft, 1978. Computer model of two-dimensional solute
transport and dispersion in ground water, Techniques of Water-Resources
Investigations of the United States Geological Survey. Chapter C2, U. S.
Government Printing Office.
Konikow, L. F., and J. D. Bredehoeft, 1974. Modeling flow and chemical quality changes
in an irrigated stream-aquifer system, Water Resources Research. Vol. 10, 546-
562.
Kreamer, D. K., 1988. Monitoring Technology for Augered Shaft Waste Disposal - The
Shallow Test Plot Experiments. Interim Report for Reynolds Electrical and
Engineering Company, Las Vegas, Nevada.
Kueper, B. H., and E. O. Frind, 1988. An overview of immiscible fingering in porous
media, Journal of Contaminant Hydrology. Vol. 2, 95-110.
Libardi, P. L., K. Reichardt, D. R. Nielsen, and J. W. Biggar, 1980. Simple field methods
of estimating soil hydraulic conductivity, Soil Science Society of America Journal.
Vol. 44:3-7.
Mackay, D. M., D. L. Freyberg, P. V. Roberts, and J. A. Cherry, 1986. A natural
gradient experiment in a sand aquifer: 1. Approach and overview of plume
movement, Water Resources Research. Vol. 22, 2017-2030.
Mason, E. A., and R. B. Evans, 1969. Graham's laws: simple demonstrations of gases
in motion, Part I, Theory, J. Chem. Ed.. Vol. 46, No. 6, 358-364.
Mercado, A., 1967. The spreading pattern of injected waters in a permeable stratified
aquifer, Proceedings of International Association of Scientific Hydrology
Symposium. Haifa. Publ. 72, 23-26.
Mercer, J. W., L. R. Silka, and C. R. Faust, 1983. Modeling ground-water flow at Love
Canal, New York, American Society of Civil Engineers Journal. Environmental
Engineering Division. Vol. 109, 924-942.
Mohanty, K. K., and S. L. Salter, 1983. Multiphase flow in porous media: III. Oil
mobilization, transverse dispersion, and wettability, Society of Petroleum Engineers.
Preprint No. 12127.
Mull, R., 1971. The migration of oil-products in the subsoil with regard to ground-water
pollution by oil, Proceedings of the Fifth International Conference on Advances in
Water Pollution Research. S.H. Jenkins (ed.), Pergamon Press, Vol. 2, HA 7(A)/1-
8.
Owens, W. W., and D. L. Archer, 1971. The effect of rock wettability on oil-water relative
permeability relationships, Journal of Petroleum Technology. Vol. 23, No. 7, 873-
878.
Parker, J. C. and R. J. Lenhard, 1987. A model for hysteretic constitutive relations
governing multiphase flow: 1. Saturation-pressure relations, Water Resources
Research. Vol. 23, 2187-2196.
Reible, D. D., T. H. Illangasekare, D. V. Doshi, and M. E. Malhiet, 1988. Infiltration of
immiscible contaminants in the unsaturated zone, submitted to Ground Water.
Reilly, T. E., O. L. Franke, H. T. Buxton, and G. D. Bennett, 1987. A Conceptual
Framework for Ground-water Solute-Transport Studies with Emphasis on Physical
Mechanisms of Solute Movement. U. S. Geological Survey, Water-Resources
Investigation Report 87-4191, 44 pp.
Russo, D., and E. Bresler, 1981. Soil hydraulic properties as stochastic processes: I. An
analysis of field spatial variability, Soil Science Society of America Journal. Vol. 45, 682-
687.
Schuh, W. M., J. W. Bauder, and S. C. Gupta, 1984. Evaluation of simplified methods
for determining unsaturated hydraulic conductivity of layered soils, Soil Science
Society of America Journal. 48:730-736.
Stone, H. L., 1973. Estimation of three-phase relative permeability and residual oil data,
The Journal of Canadian Petroleum Technology. Vol. 20, No. 2, 53-61.
Strecker, E. W., and W. S. Chu, 1986. Parameter identification of a ground-water
contaminant transport model, Ground Water. Vol. 24, 56-62.
Streltsova-Adams, T. D., 1978. Well hydraulics in heterogeneous aquifer formations,
Advances in Hydroscience. V. T. Chow (ed.), Vol. 11, 357-423.
Tang, D. H., E. O. Frind, and E. A. Sudicky, 1981. Contaminant transport in fractured
porous media: Analytical solution for a single fracture, Water Resources Research.
Vol. 17, 555-564.
Taylor, A. T., and G. L. Ashcroft, 1972. Physical Edaphology. W.H. Freeman and Co.,
San Francisco.
Todd, D. K., 1980. Groundwater Hydrology. 2ed, Wiley, 535 pp.
Treiber, L. E., D. L. Archer, and W. W. Owens, 1972. A laboratory evaluation of the
wettability of fifty oil-producing reservoirs, Society of Petroleum Engineers Journal.
Vol. 12, No. 6, 531-540.
Trescott, P. C., G. F. Pinder, and S. P. Larson, 1976. Finite-difference model for aquifer
simulation in two-dimensions with results of numerical experiments, Techniques of
Water Resources Investigations of the U.S. Geological Survey. Book 7, Chapter C1,
116pp.
van der Heijde, P., and R. A. Park, 1986. U.S. EPA Ground-Water Modeling Policy
Study Group. Report of Findings and Discussion of Selected Ground-Water
Modeling Issues. International Ground Water Modeling Center, Butler University,
Indianapolis, Indiana, 64 pp.
van der Heijde, P., Y. Bachmat, J. Bredehoeft, B. Andrews, D. Holtz, and S. Sebastian,
1985. Groundwater Management: The Use of Numerical Models. 2ed., American
Geophysical Union, Monograph 5, 180 pp.
van Genuchten, M. T., and W. J. Alves, 1982. Analytical Solutions of the One-
Dimensional Convective-Dispersive Solute Transport Equation. United States
Department of Agriculture, Agricultural Research Service, Technical Bulletin 1661.
Watson, K. K., 1966. An instantaneous profile method for determining the hydraulic
conductivity of unsaturated porous materials, Water Resources Research. Vol. 2,
709-715.
Weaver, J. W., and R. J. Charbeneau, 1989. A kinematic model of oil drainage in soils,
submitted to Water Resources Research.
White, D. C., 1983. Analysis of microorganisms in terms of quantity and activity in
natural environments, In Microbes in Their Natural Environments. J. H. Slater, R.
Whittenbury, and J. W. T. Wimpenny, Eds., Society for General Microbiol. Symp.,
Vol. 34, 37-66.
Yeh, W. W-G., 1986. Review of parameter identification procedures in groundwater
hydrology: the inverse problem, Water Resources Research. Vol. 22, 95-108.
Zilliox, L. P., and P. Muntzer, 1974. Effects of Hydrodynamic Processes on the
Development of Ground-Water Pollution, Progress in Water Technology. Vol. 7, No 3/4,
pp. 561-568.