United States Environmental Protection Agency
Office of Research and Development
Washington DC 20460
EPA/600/R-99/028
June 1999
Reliability-Based
Uncertainty Analysis of
Groundwater Contaminant
Transport and Remediation
-------
EPA/600/R-99/028
June 1999
Reliability-Based Uncertainty Analysis of
Groundwater Contaminant Transport and
Remediation
by
Maged M. Hamed
Philip B. Bedient
Rice University
Houston, Texas 77005-1892
Cooperative Agreement CR-821906
Project Officer
Mary E. Gonsoulin
Subsurface Protection and Remediation Division
National Risk Management Research Laboratory
Ada, Oklahoma 74820
National Risk Management Research Laboratory
Office of Research and Development
U.S. Environmental Protection Agency
Cincinnati, OH 45268
-------
Notice
The U.S. Environmental Protection Agency through its Office of Research and Development partially
funded and collaborated in the research described here under Cooperative Agreement No. CR-821906 to
Rice University. It has been subjected to the Agency's peer and administrative review and has been approved
for publication as an EPA document. The document contains copyrighted material on pages: 22, 23, 29, 30,
31, 32, 33, 34, 35, 36, 40, 42, 43, 44, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, and
65. Mention of trade names or commercial products does not constitute endorsement or recommendation for
use.
All research projects making conclusions or recommendations based on environmentally related mea-
surements and funded by the Environmental Protection Agency are required to participate in the Agency
Quality Assurance Program. This project was conducted under an approved Quality Assurance Project Plan.
The procedures specified in this plan were used without exception. Information on the plan and documenta-
tion of the quality assurance activities and results are available from the Principal Investigator.
-------
Foreword
The U.S. Environmental Protection Agency is charged by Congress to protect the Nation's land, air, and
water resources. Under a mandate of national environmental laws, the Agency strives to formulate and
implement actions leading to a compatible balance between human activities and the ability of natural
systems to support and nurture life. To meet these mandates, EPA's research program is providing data and
technical support for solving environmental problems of today and building a science knowledge base
necessary to manage our ecological resources wisely, understand how pollutants affect our health, and
prevent or reduce environmental risks in the future.
The National Risk Management Research Laboratory is the Agency's center for investigation of techno-
logical and management approaches for reducing risks from threats to human health and the environment.
The focus of the laboratory's research program is on methods for the prevention and control of pollution to
air, land, water, and subsurface resources; protection of water quality in public water systems; remediation of
contaminated sites and ground water; and prevention and control of indoor air pollution. The goal of this
research effort is to catalyze development and implementation of innovative, cost-effective environmental
technologies; develop scientific and engineering information needed by EPA to support regulatory and policy
decisions; and provide technical support and information transfer to ensure effective implementation of
environmental regulations and strategies.
This report presents a discussion of the application of the first- and second-order reliability methods
(FORM and SORM, respectively) to groundwater transport and remediation, and to public health risk
assessment. Using FORM and SORM allows the formal incorporation of parameter uncertainty in the analysis
of such applications. The report shows the advantages of the reliability methods over currently available
simulation methods, such as the Monte Carlo Simulation method. Special attention is also given to the
stochastic sensitivity results obtained as an integral part of the reliability analysis. Numerous examples and
case studies are given to illustrate the method's application. The report concludes with an outlook of possible
future efforts needed in this research area. It is published and made available by EPA's Office of Research
and Development to assist the user community.
Clinton W. Hall, Director
Subsurface Protection and Remediation Division
National Risk Management Research Laboratory
-------
Abstract
Failure to rigorously accommodate physical parameter uncertainty in groundwater transport models casts
serious doubts on our ability to accurately delineate the contaminant plume at a given site. This failure could
also considerably reduce the possibility of success of the remediation scheme intended to clean up a plume
within the specified time.
In this research, the probability that a contaminant leaking from a waste source will exceed some
predetermined target level at a downgradient well is estimated, along with the sensitivity of this probability to
the basic uncertainty in input parameters. The relevant parameters are assumed random with prescribed
probability distributions.
We present a probabilistic modeling tool based on first- and second-order reliability methods (FORM and
SORM) to account for parameter uncertainty in groundwater contaminant transport and remediation. The
methodology is applied to analytical groundwater models to provide a simple screening tool for the
assessment of contamination and remediation. In addition, numerical-based reliability models are developed
to account for aquifer spatial heterogeneity and correlation structure in more complex systems.
In the analytical phase, the program PROBAN was used for the probabilistic analysis. Results indicate that
the greatest impact on the probabilistic event is due to basic uncertainty in seepage velocity. However,
chemical-related and source-related parameter uncertainties were also found to be very important factors to
consider. In the numerical phase, the finite-element code FLOTRAN was linked to the reliability shell
CALREL. Hydraulic conductivity was treated as a spatial random field. Considerable savings in computational
time were achieved by using a coarse random-variable mesh with a finer numerical mesh. Series system
reliability was used to analyze failure at several wells. Probabilistic assessment of plume containment was
also provided.
FORM and SORM are powerful tools for the probabilistic analysis of groundwater contaminant transport
and remediation. Examples given in this work are only samples of the variety of applications that FORM and
SORM can address. Their use in areas such as probabilistic risk assessment holds great potential and should be
of interest to regulatory agencies and groundwater professionals alike.
This report was submitted in fulfillment of Cooperative Agreement No. CR-821906 by Rice University
under the sponsorship of the United States Environmental Protection Agency. This report covers a period
from 10/01/93 to 03/31/97, and work was completed as of 11/03/98.
-------
Contents
Foreword iii
Abstract iv
Figures vii
Tables ix
Acknowledgments / Copyright Statement x
Section 1 Introduction 1
1.1 Uncertainty in Contaminant Transport Modeling 1
1.2 Motivation 1
1.3 Research Objectives 2
1.4 Report Outline 3
Section 2 Theoretical Background 5
2.1 Introduction 5
2.2 Basics of Component Reliability Concepts 5
2.3 Component Versus System Reliability 6
2.4 The Probability of Failure 6
2.5 FORM and SORM as Approximation Methods 7
2.5.1 Transformation to the Standard Normal Space 7
2.5.1.1 The Hasofer-Lind Transformation 7
2.5.1.2 The Diagonal Transformation 7
2.5.1.3 The Nataf Transformation 8
2.5.1.4 The Rosenblatt Transformation 8
2.5.2 Design Point Determination 9
2.5.3 Limit-state Surface Approximation 10
2.5.4 Computation of the Failure Probability 11
2.6 Sensitivity Measures 12
2.7 Uncertainty Importance Factors 14
2.8 System Reliability 15
2.9 Monte Carlo Simulation Methods 16
Section 3 Analytical Probabilistic Groundwater Modeling 19
3.1 Assessment of Groundwater Contamination 19
3.2 Models Utilized 19
3.2.1 Horizontal Plane Source Model 19
3.2.2 PROBAN 21
3.2.3 HPS-PROBAN Model 21
3.3 Non-reactive Solute Transport 23
3.3.1 Limit-state Function Formulation 23
3.3.2 Inputs to the Case of Non-reactive Solute (Case 1) 24
3.3.3 Results of the Case of Non-reactive Solute (Case 1) 24
3.3.4 Inputs to the Case of Non-reactive Solute (Case 2) 28
3.3.5 Results of the Case of Non-reactive Solute (Case 2) 30
3.4 Reactive Solute Transport 32
3.4.1 Limit-state Function Formulation 33
3.4.2 Inputs to the Case of Reactive Solute 33
3.4.3 Results of the Case of Reactive Solute 34
-------
Section 4 Numerical Probabilistic Groundwater Modeling 37
4.1 General 37
4.2 Selection of Numerical Transport Model 37
4.3 Selection of Reliability Shell 38
4.4 FLOTRAN-CALREL Interface 39
4.5 Spatial Random Fields 39
4.6 Applications 41
4.6.1 Symmetric and Asymmetric Target Wells 41
4.6.2 Effect of Material Heterogeneity 46
4.6.2.1 Case MH-1 46
4.6.2.2 Case MH-2 49
4.6.3 Effect of Mesh Overlay and Discretization 49
4.6.4 System Reliability Analysis 56
4.6.5 Remediation/Containment under Uncertainty 58
Section 5 Summary and Conclusions 67
5.1 Probabilistic Analytical Transport Model 67
5.2 Probabilistic Numerical Transport Model 67
5.3 Recommendations for Future Research 68
Bibliography 69
-------
Figures
Figure 2.1 Mapping of the physical space into the standard normal space 8
Figure 2.2 Region of major contribution to the failure probability (after Madsen et al., 1986) 11
Figure 2.3 FORM and SORM approximations to the failure surface in the standard normal space 12
Figure 3.1 Flow chart of the combined HPS-PROBAN analytical probabilistic screening model 22
Figure 3.2 Schematic of the case study setup 23
Figure 3.3 Probability of failure at a location 100 m downgradient from the leaking source 25
Figure 3.4 Failure probability surface at various well locations for different normalized target
concentration levels 25
Figure 3.5 Contours of equal failure probabilities for case 1 26
Figure 3.6 The effect of the mean value of the longitudinal dispersivity αx on the
probability of failure at different downgradient locations 27
Figure 3.7 The effect of the mean value of the longitudinal dispersivity αx on the
importance factors (upper surface represents seepage velocity, lower one
represents dispersivity) 27
Figure 3.8 Importance of source strength, seepage velocity, and dispersivity on the
reliability estimate (upper surface: source strength, middle: seepage velocity,
lower: dispersivity) 28
Figure 3.9 Importance factors at a location 60.0 m downgradient from the leaking source 29
Figure 3.10 Effect of normalized target concentration levels on the probability of
failure for the non-reactive case 2 30
Figure 3.11 Effect of normalized target concentration levels on the reliability index for
the non-reactive case 2 31
Figure 3.12 Percent difference between FORM and SORM failure probabilities results for
the non-reactive case 2 32
Figure 3.13 Effect of normalized target concentration on importance factors in
non-reactive case 2 32
Figure 3.14 Effect of o-xylene target concentration levels on the probability of failure 34
Figure 3.15 Effect of o-xylene target concentration levels on the reliability index 35
Figure 3.16 Effect of o-xylene target concentration on importance factors 35
Figure 3.17 Comparison of FORM, SORM, and MCS estimate of the probability of
failure for the o-xylene case study 36
Figure 4.1 Flow chart of the numerical probabilistic transport model 40
Figure 4.2 Setup of the numerical case study 42
Figure 4.3 Results for the symmetric well: (a) design point, (b) gamma sensitivity,
(c) delta sensitivity 43
Figure 4.4 Results for the asymmetric well: (a) design point, (b) gamma sensitivity,
(c) delta sensitivity 44
-------
Figure 4.5 Effect of correlation scale on the first-order probability of failure (C=2.0 mg/L) 46
Figure 4.6 Setup of case MH-1 47
Figure 4.7 Design point for case MH-1 (cm/sec) 47
Figure 4.8 Gamma sensitivities for case MH-1 48
Figure 4.9 Delta sensitivities for case MH-1 48
Figure 4.10 Contaminant plume for case MH-1 (mg/L) 49
Figure 4.11 Setup of case MH-2 50
Figure 4.12 Design point for case MH-2 (cm/sec) 50
Figure 4.13 Gamma sensitivities for case MH-2 51
Figure 4.14 Delta sensitivities for case MH-2 51
Figure 4.15 Setup of the mesh refinement case study 52
Figure 4.16 Design point hydraulic conductivity field for 144 random variables case study (cm/sec).
Source area is shown by X's and monitoring well is shown with a circle 53
Figure 4.17 Gamma sensitivities for 144 random variables case study 53
Figure 4.18 Delta sensitivities for 144 variables case study 54
Figure 4.19 Effect of number of random variables on the probabilistic event 55
Figure 4.20 Design point hydraulic conductivity field for 144 random variables
case study (cm/sec) for a target concentration = 1.0 mg/L 55
Figure 4.21 Setup of the system reliability case study 57
Figure 4.22 Sensitivity of the upper bi-modal bound on the system failure probability
to the local mean hydraulic conductivity: (a) for the 3-well case, (b) for the 2-well case,
(c) for 2-well with lower conductivity lens 59
Figure 4.23 Setup of the system reliability case with a lower conductivity lens 60
Figure 4.24 Setup of the plume containment case study 61
Figure 4.25 Initial contaminant plume (mg/L) 62
Figure 4.26 Results for the plume containment case study for pumping rate= 2.52 L/s (40.0 gpm):
(a) Design point, (b) Gamma sensitivity, (c) Delta sensitivity 63
Figure 4.27 Setup of the plume containment case study with a lower conductivity lens 64
Figure 4.28 Results for the plume containment with a lower conductivity lens case study:
(a) Design point, (b) Gamma sensitivity, (c) Delta sensitivity 65
-------
Tables
Table 3.1 Deterministic Input Parameters for the Non-reactive Case (1) 24
Table 3.2 Random Input Variables for the Non-reactive Case (1) 24
Table 3.3 Deterministic Input Parameters for the Non-reactive Case (2) 29
Table 3.4 Random Input Variables for the Non-reactive Case (2) 29
Table 3.5 Deterministic Input Parameters for the Case of o-xylene Transport 33
Table 3.6 Random Input Variables to the Case of o-xylene Transport 33
Table 4.1 Input Parameters for the Numerical Case 42
Table 4.2 Comparison between the Analysis Methods 45
Table 4.3 Input Parameters for the Mesh Discretization Case Study 52
Table 4.4 Comparison between the Analysis Methods 56
Table 4.5 Input Parameters for the System Reliability Case Study 57
Table 4.6 Results of the System Reliability Analysis for the 3-Well Case Study 58
Table 4.7 Results of the System Reliability Analysis for the 2-Well Case Study 60
Table 4.8 Input parameters for the Plume Containment Case Study 61
Table 4.9 Failure Probabilities for the Remediation Case Study for
Pumping Rate = 2.52 L/s (40.0 gpm) 62
Table 4.10 FORM Failure Probabilities for Different Pumping Rates for the
Remediation Case Study 64
-------
Acknowledgments / Copyright Statement
Section 3 contains material reprinted from Journal of Environmental Engineering, Vol. 121, No. 11,
Hamed, Conte, and Bedient, Probabilistic screening tool for groundwater contamination assessment,
pp 767-775, 1995, with kind permission from the American Society of Civil Engineers.
Section 4 contains material reprinted from Journal of Contaminant Hydrology, Vol 24, No. 1, Hamed,
Bedient, and Conte, Numerical stochastic analysis of groundwater contaminant transport and plume contain-
ment, pp 1-24, 1996, with kind permission from Elsevier Science-NL, Sara Burgerhartstraat 25, 1055 KV,
Amsterdam, The Netherlands.
Section 4 contains material reprinted from Advances in Water Resources, Vol. 19, No. 5, Hamed, Bedient,
and Dawson, Probabilistic modeling of aquifer heterogeneity using reliability methods, pp 277-295, 1996,
with kind permission from Elsevier Science Ltd, The Boulevard, Langford Lane, Kidlington OX5 1GB, UK.
-------
Section 1
Introduction
1.1 Uncertainty in Contaminant Transport Modeling
The uncertainty of the physical parameters in subsurface contaminant transport problems is ubiquitous. This is
manifested in the basic heterogeneity of the aquifer formation and the uncertainty related to the chemical, physical
and biological properties of the contaminant being released and transported. Furthermore, there is a great deal of
uncertainty regarding the leaking source dimensions, concentration, leaking rate and duration. These parameters
vary greatly from one site to another and also exhibit great spatial variability within the same site.
In many applications, consideration of the uncertainty constitutes an integral part of the modeling process. The most
evident example is the regulatory requirements established by the U.S. Nuclear Regulatory Commission (NRC), the
U.S. Environmental Protection Agency (EPA), and the U.S. Department of Energy (DOE) to conduct performance
assessment analyses in order to consider the uncertainty associated with the predictions of the performance of a
geologic repository for high-level nuclear waste (Eisenberg et al., 1987). In exposure risk assessment problems,
the probabilistic approach is equally appealing, because it makes little sense to give a deterministic value for the
risk associated with a given receptor due to the large uncertainty in exposure point concentration.
There are different sources of uncertainty that the hydrogeologist has to account for. Some of these are summarized
in the following. Modeling uncertainty arises due to using a simplistic relationship to describe the actual behavior
of a physical system and the idealization down to operational mathematical expressions. This can be quantified
either by comparisons with other, more involved models that provide a more accurate representation of reality, or
by comparisons with collected data from the field. Modeling uncertainty can be formally treated in a reliability
context by introducing a random variable that describes the ratio between the actual and predicted model response
or output.
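The bias-factor treatment just described can be sketched numerically. In the hedged illustration below, a multiplicative random variable B = actual / predicted scales the model output; the lognormal distribution choice, the unit mean, and the 15% coefficient of variation are assumptions invented for illustration, not values from this report.

```python
import math
import random

def corrected_response(model_output, rng, bias_mean=1.0, bias_cov=0.15):
    """Scale a deterministic model prediction by a random bias factor
    B = actual / predicted.  B is taken lognormal so it stays positive;
    the distribution choice and the 15% coefficient of variation are
    illustrative assumptions."""
    sigma = math.sqrt(math.log(1.0 + bias_cov ** 2))
    mu = math.log(bias_mean) - 0.5 * sigma ** 2   # makes E[B] = bias_mean
    bias = rng.lognormvariate(mu, sigma)
    return bias * model_output

rng = random.Random(7)
sample = corrected_response(5.0, rng)   # one realization around 5.0
```

Averaged over many realizations the corrected response recovers the prediction, while individual realizations carry the modeling uncertainty.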
Prediction uncertainty means that the reliability estimate depends on the state of knowledge that is available to the
engineer at the time of analysis. Various factors that are not included in the analysis, simply due to lack of
knowledge, could affect the model response. As the state of knowledge increases, our assessment of the reliability is
refined, usually combined with an increased reliability index (Melchers, 1987). Human factors are the collection of
errors that arise during data collection, recording, and analysis.
At field sites, data are collected, statistical estimators (mean and higher-order moments) are obtained, and a
probability density function (PDF) is chosen to represent the distribution of each input random variable. Since
collected data are usually inadequate and noisy, those PDFs are bound to be biased. This is often termed statistical
or information uncertainty (Dettinger and Wilson, 1981). One way of alleviating this problem is to consider the
parameters such as the mean and variance themselves as random variables to estimate how uncertainty of the
statistical parameters propagates to the model response. Another solution is to collect more data and use a Bayesian
approach to update the inferred statistics (Melchers, 1987).
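The Bayesian updating idea cited above can be illustrated with the simplest conjugate case — a normal mean with known data variance. This is a minimal sketch of the principle, not a procedure from this report; the prior and data values are invented for illustration.

```python
def bayes_update_normal_mean(prior_mean, prior_var, data, data_var):
    """Conjugate update of a normal mean with known data variance:
    posterior precision = prior precision + n / data_var, and the
    posterior mean is the precision-weighted combination."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / data_var)
    return post_mean, post_var

# Hypothetical prior on a log-conductivity mean, updated with 3 samples:
mean, var = bayes_update_normal_mean(-5.0, 1.0, [-4.2, -4.8, -5.5], 0.5)
# The posterior variance (1/7 here) is smaller than the prior variance,
# reflecting the reduced statistical uncertainty after collecting data.
```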
The last type of uncertainty is that resulting from the inherent randomness of the medium variables under
consideration. This is quite evident in the soil formations, for which properties such as hydraulic conductivity can
span many orders of magnitude at the same site (Freeze and Cherry, 1979; Bakr et al., 1978). This type of
uncertainty is irreducible, and is often referred to as the inherent, intrinsic, or physical uncertainty (Melchers, 1987;
Dettinger and Wilson, 1981). Although the current research focuses on addressing the physical uncertainty, the
approach is equally applicable to other types of uncertainty with the necessary modifications of the formulation.
1.2 Motivation
The aforementioned sources of uncertainty greatly affect the predictive ability of groundwater flow and contami-
nant transport models. Failure to rigorously accommodate physical parameter uncertainty in transport models casts
-------
serious doubts on our ability to accurately delineate the contaminant plume at a given site. It can also
considerably reduce the likelihood that a remediation scheme will clean up the plume within the specified time,
simply because of the uncertainty in the plume delineation.
Although a fair amount of research has been done in the area of uncertainty analysis of subsurface contaminant
transport, many of the proposed methods are either of low applicability to actual field conditions, or computationally
very intensive. For example, the classic Monte Carlo simulation method (MCS) is considered to be a very popular
tool for stochastic risk assessment. MCS is general; its accuracy improves as the sample size increases, and the
results converge to the exact solution. However, in cases where events are characterized by very low
probability of occurrence, or when applied to a complex groundwater flow and transport model with large number
of input random variables, MCS becomes computationally expensive. The number of samples required to estimate
an event probability p is on the order of 100/p to achieve a coefficient of variation of the estimate of 0.10 (Bjerager,
1990). This means that a good estimate of a probability of 1 × 10^-3 using MCS requires roughly 1 × 10^5
Monte Carlo simulations, a computational burden that may hinder the use of the methodology in day-to-day
tasks, especially for large-scale field problems characterized by a large number of
input random variables with complex correlation structure. Note that very low probability events are not uncommon
in contaminant transport problems, especially if the chemical is highly reactive, or if the soil is very tight or if it
contains tight lenses in an otherwise permeable soil.
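The sampling-cost argument can be made concrete. In the sketch below, a placeholder Bernoulli "model" stands in for an expensive flow-and-transport simulation; it shows the roughly 100/p runs needed for a coefficient of variation of about 0.10.

```python
import math
import random

def mc_failure_probability(p_true, n_samples, seed=0):
    """Crude Monte Carlo estimate of a rare-event probability.
    Each trial 'fails' with probability p_true; in practice each
    trial would be a full flow-and-transport model run."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_samples) if rng.random() < p_true)
    return failures / n_samples

p = 1.0e-3
n = int(100 / p)                      # ~100,000 model runs
estimate = mc_failure_probability(p, n)
# Coefficient of variation of the estimator: sqrt((1 - p) / (n * p))
cov = math.sqrt((1.0 - p) / (n * p))  # ~0.10, as cited in the text
```

For p = 10^-3 this is 100,000 model evaluations; halving the coefficient of variation would quadruple the run count, which is the cost FORM and SORM are designed to avoid.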
Moreover, other methods such as perturbation and spectral methods usually involve making restrictive assumptions
about the problem domain (e.g., assuming infinite extent, or setting an upper limit on parameter variability) that
may not be applicable when the random variables have large variances, as is usually the case for aquifer
parameters (e.g., hydraulic conductivity). More discussion of the different stochastic analysis methods is given
in Section 2. It is thus evident that there is a need for developing a computationally efficient, yet simple,
methodology for the probabilistic analysis of contaminant transport in groundwater.
Furthermore, little work has been done in the area of aquifer remediation under uncertainty. The work that is most
relevant is that of Wagner and Gorelick (1987) which addresses the optimal management policy for ground water
under parameter uncertainty. With the very large spending on pump-and-treat remediation schemes in Superfund
sites, and with frequent failure to meet cleanup levels, the issue of assessing the performance of a given remediation
scheme under parameter uncertainty stands out as a topic that deserves utmost attention, since this will be of great
help in designing aquifer restoration schemes, and analyzing their performance in light of the relevant parameters
uncertainty.
1.3 Research Objectives
The objectives of this work can be summarized in the following:
1. Develop a computationally efficient, yet simple, probabilistic screening tool for the analysis of
groundwater contamination that can take into account aquifer-related, source-related, and chemical-
related parameter uncertainty.
2. Analyze the impact of physical parameter uncertainty in a framework that would not force us to make
unrealistic and restrictive assumptions about the problem geometry and boundary conditions, or about
the variability of the relevant parameters.
3. Identify the parameters and the associated variability that have the greatest effect on the probabilistic
outcome. Some of the methodologies available to date can provide such information but only after an
extensive amount of additional calculations. It is our objective to obtain such sensitivity measures at no
additional computational cost.
4. Develop a rigorous numerical probabilistic model for the assessment of groundwater contamination.
This would enable us to take into account more realistic case studies, where spatial variability of the
aquifer parameters, material heterogeneity, and stresses on the aquifer in the form of pumping and/or
injection can be considered.
5. Assess the impact of some numerical issues on the resulting probabilistic outcome. Issues such as the
effect of correlation scale of the random field, the relationship between the correlation scale and the
discretization level, the impact of material heterogeneity such as the presence of lower permeability
regions, are a few of many important issues that one has to understand in order to fully, and properly,
account for aquifer parameter uncertainty.
-------
6. Analyze the probability of exceeding pre-determined target concentration levels at more than one
location in the aquifer. This is needed in many case studies, where the plume should not escape beyond
a site boundary, or when assessing the regulatory compliance at more than one well in the aquifer. In this
instance, compliance at each point along the site boundary becomes necessary.
7. Develop a probabilistic tool to enable the rigorous assessment of the efficiency of containment/
remediation schemes under parameter uncertainty, using a numerical transport model.
1.4 Report Outline
The report is composed of five sections, including this one. Section 2 presents the mathematical basics of the first-
and second-order reliability methods (FORM and SORM). Since the methods were originally developed in the field
of structural engineering, the section attempts to present the theory in a groundwater contamination framework.
Section 3 illustrates the application of reliability methods in the development of an analytical probabilistic screening
tool for groundwater contamination. Examples of the probabilistic analysis of conservative, as well as reactive, solutes
are given. The performance of FORM and SORM is compared with that of the Monte Carlo simulation
method.
Section 4, on the other hand, addresses more complex case studies. A numerical transport model based on the finite
element method is utilized in developing the probabilistic model. The probabilistic model allows us to take into
account spatial variability of the hydraulic conductivity, and test the effect of correlation scale, discretization level,
and the presence of tight lenses on the probabilistic outcome. System reliability was also used to show the joint
failure event when looking at more than one receptor well in the aquifer. An example of the analysis of plume
containment efficiency under the effect of spatially variable hydraulic conductivity is also given.
Section 5 concludes this report with a summary of the results obtained in previous sections, along with
recommendations for future research.
-------
Section 2
Theoretical Background
2.1 Introduction
The first- and second-order reliability methods (FORM and SORM, respectively) were originally developed in the
past 10-15 years to assess the safety of structural components and structural systems and are now widely used in the
study of structural reliability problems. This section presents a brief review of the mathematical basis of the
reliability analysis methods to the extent necessary to understand the subsequent formulation of the methodology.
A full account of the reliability methods' development and evolution can be found in Madsen et al. (1986), Der
Kiureghian and Liu (1986), Melchers (1987), and Ditlevsen and Madsen (1996).
2.2 Basics of Component Reliability Concepts
In the component reliability formulation, the uncertain parameters involved in the problem describing the component of
interest are represented by a set of n random variables, X = (X_1, X_2, ..., X_n). These are termed the basic random
variables or uncertain variables. The limit-state function (also termed the performance function) is a scalar function of
the input random variables, g(X): R^n → R. When the vector of random variables X has the realization x = (x_1, x_2, ..., x_n),
the value g(x_1, x_2, ..., x_n) determines the state of the component for that particular realization.
The g-function is formulated with the convention that if g(x_1, x_2, ..., x_n) > 0, the component has survived, whereas if
g(x_1, x_2, ..., x_n) < 0, the component has failed. Thus,
S = {x; g(x) > 0} denotes the safe domain, and
F = {x; g(x) < 0} denotes the failure domain    (2.1)
The n-dimensional hypersurface {x; g(x) = 0} is the limiting condition between failure and survival, and is termed the
limit-state surface.
Note that the term component in this context means that there is a single mode of failure. To state it simply, the
component of interest would either fail or survive. This is different from the system reliability formulation,
however, where the system has more than one failure mode. This concept will be explained shortly.
The classic example of limit-state function definition in structural reliability is the "load-resistance" problem. In
such a problem, the resistance of a given structural component, R, is assumed random, for example due to material
imperfections. The load applied on the structural component, L, is also assumed random. Wind and earthquake
loads are examples of such random loading. Now the limit-state function is typically formulated as follows:
g(R,L)=R-L (2.2)
Therefore, realizations that cause the g-function to be negative indicate that the structural component has failed to
withstand the applied load. On the other hand, realizations resulting in positive values of the g-function indicate a
condition in which the structure has survived the applied random load.
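For the load-resistance limit state g(R, L) = R − L, the failure probability has a closed form when R and L are independent normal variables, since the margin M = R − L is itself normal. The sketch below computes P_F = Φ(−β), where β is the reliability index; the numerical values are illustrative, not from this report.

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def failure_probability_normal(mu_r, sigma_r, mu_l, sigma_l):
    """P_F = P[g(R, L) = R - L < 0] for independent normal R and L.
    The margin M = R - L is normal, so P_F = Phi(-beta), where
    beta = mu_M / sigma_M is the reliability index."""
    beta = (mu_r - mu_l) / math.hypot(sigma_r, sigma_l)
    return std_normal_cdf(-beta)

# Illustrative numbers: resistance ~ N(50, 5^2), load ~ N(30, 8^2)
p_f = failure_probability_normal(50.0, 5.0, 30.0, 8.0)
# beta = 20 / sqrt(25 + 64) ~ 2.12, so p_f is roughly 0.017
```

This normal-linear case is the one situation FORM solves exactly; for non-normal variables or nonlinear g-functions, the transformations and approximations of Section 2.5 are needed.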
In groundwater contamination problems, on the other hand, the uncertain variables of interest can be categorized
into aquifer-, source-, and chemical-related parameters. Aquifer-related parameters can include seepage velocity (or
hydraulic conductivity), dispersivities in the x-, y-, and z-directions, soil porosity, soil bulk density, and fraction of
organic carbon. Source-related parameters include source dimensions and concentration. Chemical-related param-
eters may include the characteristics of the chemical's sorption and biodegradation. For example, the distribution
coefficient, and the parameters describing the kinetics of biodegradation can be considered uncertain.
If the interest is in assessing the potential of groundwater contamination risk due to a leaking source, the limit-state
function is formulated as follows:
-------
g(X) = C_target - C(X)    (2.3)
where C_target is the pre-specified threshold target concentration level at the receptor well, and C(X) is the simulated
value of the contaminant concentration at the well, taking into account parameter variability.
It is obvious that the events described by {g(X) < 0} and {C(X) > C_target} are equivalent. In other words, the failure
state in this case means failure to meet regulatory standards regarding the contaminant of interest at the well within
the required simulation time. Note that for a continuously leaking source, the contaminant breakthrough at the
receptor well increases monotonically with time. Therefore once the target concentration at the receptor well is
exceeded (and failure occurs), the concentration at the well will always be greater than the target value as time
progresses and failure conditions will persist.
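The contamination limit state of Equation (2.3) can be sketched with a deliberately simple stand-in for the transport model. The 1-D steady-state decay model below is an invented placeholder (this report uses the Horizontal Plane Source model instead), chosen only to show how one realization of the uncertain inputs maps to a g-value.

```python
import math
import random

def concentration(x_well, velocity, decay, c_source):
    """Toy 1-D steady-state concentration with first-order decay:
    C(x) = C0 * exp(-decay * x / v).  A placeholder for the real
    transport model, not the HPS model used in this report."""
    return c_source * math.exp(-decay * x_well / velocity)

def limit_state(c_target, **params):
    """g(X) = C_target - C(X); g < 0 means failure to meet the target."""
    return c_target - concentration(**params)

# One realization with uncertain seepage velocity (illustrative values):
rng = random.Random(1)
v = rng.lognormvariate(math.log(0.5), 0.3)   # m/day
g = limit_state(c_target=1.0, x_well=100.0, velocity=v,
                decay=0.01, c_source=10.0)
# A negative g means the simulated concentration exceeds the target.
```

Repeating this mapping over the input distribution is exactly what MCS does by brute force, and what FORM/SORM approximate from the design point.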
On the other hand, if the interest is in assessing the efficiency of a certain remediation alternative, C_target
is taken to be the pre-determined target cleanup goal required at a given water supply well, and C(X) is the
simulated value of the contaminant concentration at the well after the remediation operation. In other words, the
failure state in this case means failure to meet the required target cleanup goals at the given well location, using the
proposed remediation technology (e.g., pump-and-treat) within the operation time available for site restoration.
2.3 Component Versus System Reliability
In component reliability problems, situations with a single failure mode are analyzed. As indicated earlier, examples
include failure to meet regulatory concentration levels at a single receptor well in the vicinity of a hazardous waste
site, or failure to meet the target remediation cleanup levels for a specific well at a contaminated site.
In system reliability problems, however, we consider the problem of evaluating the reliability of a system where the
state is described by more than one limit-state function. This is important in exposure assessment situations where
the interest is on more than one water supply well, or when assessing the performance of a remediation scheme
based on the success to meet the target cleanup levels at a few points in the aquifer. In this case, the state of the
system is described by the states of its components,
g_i(X) = C_target − C_i(X),  i = 1,...,n    (2.4)

where each limit-state function g_i(X), i = 1,...,n, defines whether the target concentration C_target has been exceeded by the
simulated concentration at point i, i = 1,...,n, in the solution domain. The problem physics imply that failure of
the system occurs if at least one of its components fails, that is, the system can be modeled as a series system. More
discussion on the system reliability estimation is presented in Section 2.8.
2.4 The Probability of Failure
In our "load-resistance" example, the probability of failure is defined as follows:

P_F = P[g(R,L) ≤ 0] = P[R − L ≤ 0] = P[R ≤ L]    (2.5)

More generally, the probability of failure is expressed as the n-fold integral of the joint probability density function of the basic random variables over the failure domain:

P_F = ∫···∫_{g(x) ≤ 0} f_X(x) dx    (2.6)

Direct evaluation of this integral is difficult in practice: the number of basic random variables is often large, and each evaluation of the limit-state function can be computationally expensive, making it costly to trace the
integration domain boundaries given by the g-function. This is the case when numerical solutions of the transport
equation are obtained using the finite element or finite difference methods, such that a sequence of solutions of a
large numerical transport problem is required to find a single point on the limit-state surface. In addition, problems
arise due to the lack of information concerning the multivariate joint probability density function in many practical
situations.
In addition to FORM and SORM, other methods exist for the approximation of the above integral. These include
simulation methods (Bjerager, 1988), and hybrid methods combining simulation with FORM/SORM (Lin and Der
Kiureghian, 1987).
2.5 FORM and SORM as Approximation Methods
The primary objective of the reliability methods is to overcome the aforementioned difficulties and to evaluate the
multidimensional integral in (2.6). FORM and SORM are analytical schemes used to approximate the probability
integral when the basic variables have strictly increasing continuous joint cumulative distribution functions.
FORM and SORM consist of a number of steps (Bjerager, 1990): (1) transformation of the basic variables, X, into
the standardized and uncorrelated normal variates, U; (2) determination of the most likely failure point in the
standard space, (3) approximation of the limit-state surface in the standard space at the design point, and (4)
computation of the probability of failure in accordance with the approximation surface selected in step (3).
2.5.1 Transformation to the Standard Normal Space
The first step in the reliability approach is to transform the random vector, X, into the standard normal vector U with
zero mean, unit variance, and zero correlation using a non-linear one-to-one mapping, U = T(X). This means that the
original joint probability density function f_X(x) is mapped into the standard normal density function

f_U(u) = φ_n(u) = (2π)^(−n/2) exp(−½ uᵀu)    (2.7)

Such a transformation always exists for random variables having strictly increasing continuous joint cumulative
distribution functions. The space of the basic random variables X is often termed the x-space or physical space. The
transformed space is often termed the standard normal space or the u-space.
Jang et al. (1990) list the advantages of the transformation into the standard normal space:
1. Probability density in the standard space is rotationally symmetric, that is, for all hyperplanes of equal
distance from the origin, the probability content p of the half space away from the origin, is constant.
2. Since the probability density decays exponentially with square of the distance from the origin,
integration at a linearization point in a standard space can approximate the probability of failure with
good accuracy.
3. Closed form solutions for estimating the probability contents for simple sets in the standard normal
space are readily available.
Figure 2.1 illustrates the transformation from the physical to the standard normal space for a simple two-dimensional
case. The choice of transformation depends on: (1) the available level of statistical information, (2) whether the
random variables X are normal, and (3) whether X are statistically independent. This is explained in the following.
2.5.1.1 The Hasofer-Lind Transformation
The simplest case is when the basic random variables X are jointly normally distributed. Here, the
nonlinear transformation reduces to the affine Hasofer-Lind transformation

U = L⁻¹ D⁻¹ (X − μ_X)    (2.8)

where D = diag[σ_i], with σ_i the standard deviation of X_i; L is a lower triangular matrix resulting from
the Cholesky decomposition of the correlation matrix R = [ρ_ij], such that R = L Lᵀ; and μ_X is the mean vector of X.
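The transformation of Eq. (2.8) can be sketched directly with a Cholesky factorization; the means, standard deviations, and correlation matrix below are assumed illustrative values.

```python
# Sketch of the Hasofer-Lind transformation, Eq. (2.8): U = L^-1 D^-1 (X - mu),
# for jointly normal basic variables. All numerical values are assumed.
import numpy as np

mu = np.array([2.0, 0.35])          # assumed means of X
sigma = np.array([0.5, 0.05])       # assumed standard deviations
R = np.array([[1.0, 0.3],
              [0.3, 1.0]])          # assumed correlation matrix

L = np.linalg.cholesky(R)           # R = L L^T, with L lower triangular
D_inv = np.diag(1.0 / sigma)

def to_standard_normal(x):
    """Map a physical-space point x to the uncorrelated standard normal u."""
    return np.linalg.solve(L, D_inv @ (x - mu))

u = to_standard_normal(mu)          # the mean vector maps to the origin
```

By construction, the mean of X maps to the origin of the u-space, which is why the design-point search of Section 2.5.2 conventionally starts there.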
2.5.1.2 The Diagonal Transformation
In case of statistically independent non-normal random variables, the non-linear mapping is reduced to the
following diagonal transformation:
U_i = Φ⁻¹[F_Xi(X_i)],  i = 1,...,n    (2.9)

where Φ[·] is the standard normal cumulative distribution function and F_Xi(x_i) is the cumulative distribution function of X_i. This
transformation is termed "diagonal" because each random variable X_i is transformed separately from the other
variables.

Figure 2.1 Mapping of the physical space into the standard normal space. (The joint PDF in the physical space, with safe domain g(x) > 0 and limit-state surface g(x) = 0, is mapped by the non-linear one-to-one transformation U = T(X) into the u-space, with failure domain G(u) < 0, limit-state surface G(u) = 0, and safe domain G(u) > 0.)
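The marginal transformation of Eq. (2.9) can be sketched as follows; the lognormal and uniform marginals are assumed choices for two independent basic variables, not distributions taken from this report.

```python
# Sketch of the diagonal transformation, Eq. (2.9): u_i = Phi^-1[F_Xi(x_i)],
# for independent non-normal variables. The marginals below are assumptions.
from scipy.stats import norm, lognorm, uniform

marginals = [
    lognorm(s=0.5, scale=1.0),      # e.g., a hydraulic-conductivity-like variable
    uniform(loc=0.0, scale=0.02),   # e.g., a first-order decay coefficient
]

def to_u_space(x):
    """Transform each x_i separately: u_i = Phi^-1[F_Xi(x_i)]."""
    return [norm.ppf(m.cdf(xi)) for m, xi in zip(marginals, x)]
```

Note that the median of each marginal maps to u_i = 0, consistent with the omission-sensitivity interpretation in Section 2.7, where fixing a variable at its median corresponds to setting its u-coordinate to zero.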
2.5.1.3 The Nataf Transformation
In the special case of incomplete statistical information, where the marginal distributions and the covariances are
known, the transformation into the standard normal space proceeds in two stages, following the Nataf model (Der
Kiureghian and Liu, 1986). First, the basic random variables, X, are transformed into a space of correlated standard
normal variates, Z, such that Z_i = Φ⁻¹[F_Xi(X_i)]. The variates Z have a correlation matrix R_0. The second step
consists of transforming the vector Z into the space of uncorrelated standard normal variates as follows:

U = Γ_0 Z    (2.10)

where Γ_0 = L_0⁻¹ is the inverse of the lower triangular matrix resulting from the Cholesky decomposition of the correlation matrix of Z, i.e.,
R_0 = L_0 L_0ᵀ. Elements of the matrix R_0 are the correlation coefficients ρ_ZiZj. These, in turn, are
related to the correlation coefficients ρ_XiXj of the basic random variables, X, through the following implicit integral
relationship (Der Kiureghian and Liu, 1986):

ρ_XiXj = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} ((x_i − μ_i)/σ_i) ((x_j − μ_j)/σ_j) φ₂(z_i, z_j, ρ_ZiZj) dz_i dz_j    (2.11)

where φ₂(z_i, z_j, ρ_ZiZj) is the bivariate normal density function of normal variates with zero means, unit variances, and
correlation coefficient ρ_ZiZj; μ_i and σ_i denote the mean and standard deviation of X_i, respectively.
2.5.1.4 The Rosenblatt Transformation
When complete statistical information, in the form of the full joint probability distribution of X, is known, the Rosenblatt
transformation (Rosenblatt, 1952) is used. This makes use of successive conditioning (Melchers, 1987):

U_1 = Φ⁻¹[F_X1(x_1)]
U_2 = Φ⁻¹[F_X2|X1(x_2|x_1)]
...
U_n = Φ⁻¹[F_Xn|X1...Xn−1(x_n|x_1,...,x_n−1)]    (2.12)

where F_Xn|X1...Xn−1(x_n|x_1,...,x_n−1) is the conditional cumulative distribution of X_n given X_1,...,X_n−1, and all conditional
distribution functions are assumed continuous. The transformation first transforms X_1 into the u-space. Then all
conditional variables of X_2|X_1 are transformed into the u-space, and so on (Madsen et al., 1986). It should be noted
that the Rosenblatt transformation reduces to that in (2.9) when the basic random variables are mutually
statistically independent. When the basic random variables are also jointly normal, the transformation reduces to
the linear Hasofer-Lind transformation given in (2.8).
2.5.2 Design Point Determination
The second step in the reliability approach (and a major task) is to determine the design point, which is the point on
the limit-state surface in the u-space closest to the origin. The design point, u*, is of practical significance. In
standard normal space, the contours of probability density are concentric spheres and the probability density decays
with distance from the origin. Therefore, u*, which is the nearest point to the origin in the failure region, has the
highest likelihood among all failure points. Hence, the design point is considered the most likely failure point in the
standard normal space (Cawlfield and Sitar, 1987).
The design point, however, may not necessarily be the most likely failure point if the underlying probability density
functions of the basic random variables are unknown and only incomplete statistical information is available. It
should be noted that the design point in the standard normal space, u*, can be inversely mapped into its counterpart
in the physical space, x*, to give a more physically meaningful interpretation. The design point, u*, is the solution
of the following non-linear constrained optimization problem:

Minimize ‖u‖
subject to G(u) = 0    (2.13)
The above optimization problem consists of finding the point that lies on the limit-state surface and has a minimum
distance from the origin in the standard normal space. Several algorithms have been proposed for the solution of this
problem. These include the HL-RF method, which was originally developed by Hasofer and Lind (1974) in second
moment reliability analysis, and later extended by Rackwitz and Fiessler (1978) to include distribution information;
the gradient projection method, the Sequential Quadratic Programming (SQP) method, and the modified HL-RF
method (Liu and Der Kiureghian, 1986b).
For example, the HL-RF method is based on constructing a sequence of points u_1, u_2, ..., u_n, where u_1 is an initial
guess. The HL-RF method follows the procedure (Madsen et al., 1986):

α_k = −∇_u G(u_k) / ‖∇_u G(u_k)‖    (2.14)

such that

u_{k+1} = [ u_k · α_k + G(u_k)/‖∇_u G(u_k)‖ ] α_k    (2.15)

where ∇_u G(u_k) is the gradient vector of the limit-state function at u_k, and α_k is the corresponding unit normal vector.
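The HL-RF iteration of Eqs. (2.14)-(2.15) can be sketched for a linear limit-state function, for which the exact design point and reliability index are known; the particular G(u) below is an assumed test case, not a groundwater model.

```python
# Sketch of the HL-RF iteration, Eqs. (2.14)-(2.15), applied to an assumed
# linear limit-state function G(u) = 3 - a.u with ||a|| = 1 (exact beta = 3).
import numpy as np

def hlrf(G, gradG, n, tol=1e-8, max_iter=100):
    """Return the design point u* minimizing ||u|| subject to G(u) = 0."""
    u = np.zeros(n)                          # initial guess at the origin
    for _ in range(max_iter):
        g = G(u)
        grad = gradG(u)
        norm = np.linalg.norm(grad)
        alpha = -grad / norm                 # unit normal, Eq. (2.14)
        u_new = (u @ alpha + g / norm) * alpha   # update, Eq. (2.15)
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

a = np.array([1.0, 1.0]) / np.sqrt(2.0)
u_star = hlrf(lambda u: 3.0 - a @ u, lambda u: -a, 2)
beta = np.linalg.norm(u_star)                # first-order reliability index
```

For a linear G the iteration converges in two steps; for the highly non-linear g-functions typical of transport models, the gradient would come from repeated runs of the transport code, which is where the computational cost discussed above arises.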
Figure 2.2 Region of major contribution to the failure probability (after Madsen et al., 1986). (The limit-state surface is approximated by the tangent hyperplane at the design point; the area near the design point provides the major contribution to the failure probability.)

The first-order approximation can become inaccurate when several significant design points exist,
or when the limit-state function is highly non-linear (categories in which most subsurface contamination problems
fall). In such instances, the second-order reliability method (SORM) can be more effective, since it can account for
the limit-state surface curvature in the standard normal space.
Bjerager (1990) reports that the first thorough study of SORM was done by Fiessler et al. (1979). They obtained the
failure probability by approximating the failure surface at the design point in the standard normal space by a
second-order surface. Both second-order Taylor expansion as well as curvature-fitted paraboloid approximation
surfaces were analyzed.
Bjerager (1990) identifies the second-order Taylor expansion as more general than the paraboloid approximation
method. However, due to the fact that it is not based on the geometrical properties of the failure surface, it lacks
invariance with respect to the choice of the limit-state function. This is not the case for the geometrically-based
paraboloid approximation method, which is invariant to the choice of the limit-state function. Consequently, the
parabolic SORM is usually the preferred analysis method. There exist two paraboloid approximations in the
literature, namely the curvature-fitted (Breitung, 1984; Tvedt, 1988), and the point-fitted (Der Kiureghian and Lin,
1987).
2.5.4 Computation of the Failure Probability
The final step in the FORM/SORM estimation of the «-fold probability integral given in (2.6) is to compute the
failure probability corresponding to the approximating failure surface. It should be emphasized that FORM and
SORM are full distribution methods, meaning that the full marginal and joint probability density functions can be
incorporated, and the transformation from the x-space to the u-space is exact. Other methods often cited in the
literature include first-order second-moment (FOSM) methods, which can only incorporate the first two
moments. Nevertheless, FORM and SORM can still be defined in a second-moment context (Bjerager, 1990).
The first-order reliability index is given by the inner product

β = α*ᵀ u*    (2.18)
where a* is the unit normal at the design point directed toward the failure region (Figure 2.3). The first-order
approximation of the probability of failure is given by
P_F ≈ Φ(−β)    (2.19)

where Φ(·) is the standard normal cumulative distribution function.

Figure 2.3 FORM and SORM approximations to the failure surface in the standard normal space. (At the design point, FORM approximates the failure surface G(u) = 0 by a tangent hyperplane, while SORM fits a second-order surface; the failure domain is G(u) < 0 and the safe domain G(u) > 0.)
The estimation of the probability content of the parabolic limit-state surface is an equally important step in the SORM
analysis. The asymptotically exact result developed by Breitung (1984) had an appreciable impact on making SORM
a practical method (Bjerager, 1990). The formula for the second-order failure probability is given by:
P_F^SORM ≈ Φ(−β) ∏_{i=1}^{q−1} (1 + β κ_i)^(−1/2)    (2.20)

where κ_i are the q−1 principal curvatures at the most likely failure point, u*, in u-space, with the sign convention that
curvatures are negative when the surface curves towards the origin. A numerical procedure to evaluate the integral
was suggested by Tvedt (1988). This formula is asymptotically exact as β approaches infinity while the terms β κ_i
remain fixed.
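Breitung's correction of Eq. (2.20) is a simple product over the principal curvatures; the reliability index and curvature values below are assumed for illustration.

```python
# Sketch of Breitung's asymptotic SORM formula, Eq. (2.20):
# P_F ~ Phi(-beta) * prod_i (1 + beta*kappa_i)^(-1/2).
# The beta and curvature values are illustrative assumptions.
from scipy.stats import norm

def breitung_pf(beta, curvatures):
    """Second-order failure probability from the FORM index and the
    principal curvatures of the limit-state surface at the design point."""
    correction = 1.0
    for kappa in curvatures:
        correction *= (1.0 + beta * kappa) ** -0.5
    return norm.cdf(-beta) * correction

p_form = norm.cdf(-2.5)                  # first-order estimate, Eq. (2.19)
p_sorm = breitung_pf(2.5, [0.1, -0.05])  # second-order corrected estimate
```

With zero curvatures the correction vanishes and SORM reduces to FORM; a surface that curves away from the origin (positive κ here) reduces the estimated failure probability relative to the tangent-hyperplane result.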
Tvedt (1988) provided an exact and numerically feasible result for SORM, thus making SORM a powerful tool for
reliability estimates. His result is given for a parabolic approximation by:

P_F = φ(β) Re{ i √(2/π) ∫_0^∞ (1/t) exp[(t + β)²/2] ∏_{j=1}^{q−1} (1 + i t κ_j)^(−1/2) dt }    (2.21)

where φ(·) denotes the standard normal probability density, κ_j are defined as before, and i is the imaginary unit.
2.6 Sensitivity Measures
As an integral part of the first-order reliability method, the user is provided with measures of sensitivity of the
reliability index and the first-order estimate of the failure probability with respect to the basic random variables, as
well as to the parameters defining the probability distribution and the limit-state function. These sensitivity measures
identify the random variables or the deterministic parameters which have the greatest impact on the failure
probability.
The simplest measure of sensitivity is the partial derivative of the reliability index, β, with respect to the coordinates of
the design point in the standard normal space. This is given by:

∇_u* β = α*    (2.22)

where

α* = −∇_u G(u*) / ‖∇_u G(u*)‖    (2.23)

and, equivalently, since u* = β α*,

∇_u* β = u*/β    (2.24)
The α* vector, hence, gives a measure of the change in β when a given basic random variable is perturbed. Since the
partial derivatives in (2.22) are estimated at the design point, they reflect only the sensitivity with respect to small
changes in the random variables at that particular point. In other words, this vector gives an indication of the
relative importance of the standard variates u_i. Since α* is necessary for obtaining the design point, the aforemen-
tioned sensitivity measure is obtained with no additional computational effort.
Although α* is useful, the sensitivity of β with respect to the coordinates of the design point in the physical space
would be more physically meaningful. Sensitivities with respect to changes in the coordinates of the design point in
the physical space are obtained by the chain rule:

∇_x* β = (∇_u* β) J_u*,x* = α* J_u*,x*    (2.25)

where J_u*,x* is the Jacobian of the transformation U = T(X) evaluated at the design point. A similar measure is available for the deterministic parameters, η, that enter the definition of the limit-state function g(x, η):

∇_η β = ∇_η g(x*, η) / ‖∇_u G(u*)‖    (2.27)

In this work, the target concentration level at the well, whether in a risk-assessment scenario or during the
assessment of remediation efficiency, would be a deterministic parameter. Hence the η sensitivity in that context
would provide the sensitivity of the reliability index with respect to variations in the chosen target concentration.
Hence, if the sensitivity with respect to the target concentration is sought, and with the limit-state function formulation
given in (2.3), it can be easily shown that:

∇_Ctarget β = 1 / ‖∇_u G(u*)‖    (2.28)

since the term ∇_η g(x*, η), which in this case equals ∇_Ctarget g(x*), reduces to 1.0.
The choice of distribution parameters of each basic random variable can have a profound impact on the reliability
estimate of the component of interest. For example, when considering the first-order decay coefficient describing
the biodegradation of an organic chemical to be uniformly distributed, the choice of the upper and lower limits
would have a significant impact on the probability that such a chemical exceeds the regulatory standards at a
receptor well downgradient from a leaking source.
Such sensitivities can also be obtained during a reliability analysis. Let the joint probability density function of the
basic random variables be f_X(x) = f_X(x, θ), where θ is a vector of the distribution parameters describing the basic
random variables. The vector θ could, for example, include the means, standard deviations, and correlation
coefficients of the random variables. Madsen et al. (1986) have shown that:

∇_θ β = α* J_T,θ    (2.29)

where J_T,θ is the matrix containing the partial derivatives of the transformation T with respect to the parameters
θ, evaluated at the design point.
Cawlfield and Sitar (1987) indicate that the evaluation of (2.29) is cumbersome in general. This is because the
partial derivatives of the transformation T with respect to the distribution parameters, θ, are sometimes difficult to
obtain.
The sensitivities of the first-order probability of failure are obtained by applying the chain rule to (2.19):

∇ P_F = −φ(β) ∇β    (2.30)

so that, for example, the sensitivity with respect to the distribution parameters is

∇_θ P_F = −φ(β) ∇_θ β    (2.31)
2.7 Uncertainty Importance Factors
Additional valuable information to the parametric sensitivity factors is provided by the uncertainty importance
factors of the reliability index. This indicates the importance of modeling the random variable Xi as a distributed
variable rather than as a fixed valued variable, the median of the distribution being the fixed value.
For statistically independent variables, it has been shown (Madsen, 1988) that the omission sensitivity factors (defined
as the relative error in the first-order reliability index when a basic variable X_i is replaced by a deterministic number
equal to its median) are given by:

(β_i − β)/β = (1 − α_i²)^(−1/2) − 1    (2.32)

It is therefore observed that the uncertainty importance factors, 100 × α_i², as printed by PROBAN (Veritas Research,
1992b), give a measure of the relative importance of modeling the uncertainty of a basic random variable, X_i, with
respect to the final probability outcome.
This concept naturally extends to higher dimensions. The relative error in the first-order reliability index of
representing a group of m mutually dependent variables, X_i, i = 1,...,m, by their respective medians is given by
(Veritas Research, 1992a):

(β_{1,...,m} − β)/β = (1 − Σ_{i=1}^m α_i²)^(−1/2) − 1    (2.33)

Therefore, the uncertainty importance factors associated with a group of mutually dependent variables can be
expressed by the quantity 100 Σ_{i=1}^m α_i². In the case m = 1, the result of (2.33) reduces to that of (2.32).
Importance factors allow for the identification of the random variables which have the least impact on the final
reliability outcome. Each of these variables can then be replaced, for all practical purposes, by a deterministic value,
its median for example. Therefore, the importance factors are very useful in reducing the number of basic random
variables in large size reliability models. Importance factors are further explored in Section 3.
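The importance factors and the omission sensitivity of Eq. (2.32) are trivial to compute once the α* vector is available from a FORM run; the α values below are an assumed FORM result, not output from this study.

```python
# Sketch of the uncertainty importance factors 100*alpha_i^2 and the omission
# sensitivity of Eq. (2.32). The alpha vector is an assumed FORM result.
import numpy as np

alpha = np.array([0.80, 0.55, 0.23, 0.05])   # assumed unit normal at u*

importance = 100.0 * alpha**2                 # percent importance factors

def omission_relative_error(a_i):
    """Relative error in beta when X_i is fixed at its median, Eq. (2.32)."""
    return (1.0 - a_i**2) ** -0.5 - 1.0
```

Here the last variable carries only 0.25% of the importance, and fixing it at its median changes β by roughly 0.1%, illustrating how the factors justify pruning basic random variables from large reliability models.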
2.8 System Reliability
In the preceding section, we have looked at cases where the state of the system is described by a single limit-state
function, i.e., having a single "mode of failure". However, a situation can arise that necessitates the simultaneous
consideration of several limit-state functions; for example, if we consider the probability that the contaminant
concentrations at any of several points exceed a predetermined critical value. This is important in exposure
assessment situations where the interest is on more than one point of human exposure in the aquifer, or when
assessing the performance of a remediation scheme based on the likelihood of success to meet the target cleanup
levels at every point in the aquifer. In this case, the state of the system is described by the states of its components:
g_i(X) = C_t − C_i(X),  i = 1,...,m    (2.34)

where each limit-state function g_i(X), i = 1,...,m, defines whether the target concentration C_t has been exceeded by
the simulated concentration at point i, i = 1,...,m, in the solution domain.
By definition of the problem, failure of the system occurs if at least one of its components fails, that is, the system
under consideration is a series system. Hence, the system failure probability can be expressed as

P_F^system = P[ ∪_{i=1}^m {g_i(X) ≤ 0} ]    (2.35)
The calculation of P_F^system is complicated by the fact that the components g_i(x) are usually statistically dependent,
since they share some of the same basic random variables. Upper and lower bounds on the probability of failure of
a series system can be obtained from the individual component failure probabilities, P_Fi, and the joint failure
probabilities in any two modes, P_FiFj. The uni-modal bounds make use of the P_Fi terms only, and are given by
(Madsen et al., 1986):

max_{i=1,...,m} P_Fi ≤ P_F^system ≤ Σ_{i=1}^m P_Fi    (2.36)
The bi-modal Ditlevsen bounds (Ditlevsen, 1979) make use of the individual modal failure probabilities and the
joint failure probabilities in any two modes:

P_F1 + Σ_{i=2}^m max{ P_Fi − Σ_{j=1}^{i−1} P_FiFj , 0 } ≤ P_F^system ≤ Σ_{i=1}^m P_Fi − Σ_{i=2}^m max_{j<i} P_FiFj    (2.37)

The Ditlevsen bounds depend on the ordering of the failure modes (or components), F_1, F_2, ..., F_m, and different
orderings may correspond to the largest lower bound and the smallest upper bound. Practical experience suggests
that the failure modes (i.e., g-functions) be numbered in decreasing order of P_Fi (Madsen et al., 1986).
The modal and joint modal failure probabilities, P_Fi and P_FiFj, respectively, used in formulating the above uni- and
bi-modal bounds, were assumed to be exact failure probabilities. First-order (FORM) approximations of these
modal and joint modal failure probabilities are given by (Madsen et al., 1986):

P_Fi^FORM = Φ(−β_i)
P_FiFj^FORM = Φ(−β_i) Φ(−β_j) + ∫_0^{ρij} φ₂(−β_i, −β_j; z) dz    (2.38)

in which ρ_ij denotes the correlation coefficient between the failure modes i and j, linearized at their design points. This
modal correlation coefficient is obtained as the inner product of the two unit normal vectors at the modal design points:

ρ_ij = α_i* α_j*ᵀ    (2.39)
The integral in (2.38) must be evaluated numerically. To avoid this numerical integration, Ditlevsen (1979) has
proposed the following simple bounds on P_FiFj:

max(P_A, P_B) ≤ P_FiFj ≤ P_A + P_B   if ρ_ij > 0
0 ≤ P_FiFj ≤ min(P_A, P_B)   if ρ_ij < 0    (2.40)

where

P_A = Φ(−β_i) Φ( −(β_j − ρ_ij β_i) / √(1 − ρ_ij²) )
P_B = Φ(−β_j) Φ( −(β_i − ρ_ij β_j) / √(1 − ρ_ij²) )    (2.41)

By substituting the above bounds on P_FiFj in (2.37), the relaxed bi-modal bounds on P_F^system are obtained:

P_F1 + Σ_{i=2}^m max{ P_Fi − Σ_{j=1}^{i−1} P_FiFj^U , 0 } ≤ P_F^system ≤ Σ_{i=1}^m P_Fi − Σ_{i=2}^m max_{j<i} P_FiFj^L    (2.42)

in which P_FiFj^U and P_FiFj^L denote upper and lower bounds on P_FiFj, respectively.
Finally, a measure of the "system" reliability can be presented by reporting the system reliability index, given by

β_system = −Φ⁻¹(P_F^system)    (2.43)
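The uni-modal bounds of Eq. (2.36) and the system index of Eq. (2.43) can be sketched directly from component reliability indices; the β values below are assumed illustrative components, not results from this study.

```python
# Sketch of the uni-modal series-system bounds, Eq. (2.36), and the system
# reliability index, Eq. (2.43). Component betas are assumed values.
import numpy as np
from scipy.stats import norm

betas = np.array([2.0, 2.5, 3.0])        # assumed component reliability indices
p_comp = norm.cdf(-betas)                 # component failure probs, Eq. (2.19)

lower = p_comp.max()                      # uni-modal lower bound on P_F^system
upper = min(p_comp.sum(), 1.0)            # uni-modal upper bound on P_F^system

# Eq. (2.43) applied to the two bounds brackets the system reliability index:
beta_sys_range = (-norm.ppf(upper), -norm.ppf(lower))
```

The lower bound corresponds to full dependence among the modes and the upper bound to mutual exclusivity; the weakest component (β = 2.0 here) dominates, which is characteristic of series systems.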
2.9 Monte Carlo Simulation Methods
In this work, verification of the first- and second-order reliability results is accomplished by comparing the
reliability results to those obtained by the Monte Carlo simulation method. The zero-one indicator-based Monte
Carlo simulation method is used for that purpose. The method works as follows:
First, we define an indicator function, I(x), such that:

I(x) = 1 if g(x) ≤ 0
I(x) = 0 if g(x) > 0    (2.44)

Next, the Monte Carlo simulation estimates the probability of failure according to the following expression:

P_F = ∫ I(x) f_X(x) dx = E[I(X)]    (2.45)

where E[·] is the expectation operator. By performing N Monte Carlo simulations, drawing samples from the
prescribed joint probability density of the vector X, the probability of failure is estimated as follows:

P̂_F = (1/N) Σ_{i=1}^N I(x_i)    (2.46)

where x_i denotes the i-th simulation. The stopping criterion for most Monte Carlo simulation procedures
is when a threshold for the coefficient of variation of the estimate of P_F is reached. The coefficient of variation is
given by:

c.o.v. = √(Var[P̂_F]) / E[P̂_F]    (2.47)

It can be easily shown that:

Var[P̂_F] = P_F (1 − P_F) / N    (2.48)

so that

c.o.v. = √( (1 − P_F) / (N P_F) ) ≈ 1/√(N P_F)  if P_F is small    (2.49)

which means that, for small P_F and for a stopping criterion of a coefficient of variation of 0.1, the number of Monte
Carlo simulations needed is about 100/P_F.
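The indicator estimator of Eqs. (2.44)-(2.49) can be sketched on an assumed linear limit-state function with standard normal variables, for which the exact failure probability is known in closed form; the g-function and sample size are illustrative choices.

```python
# Sketch of the zero-one indicator Monte Carlo estimator, Eqs. (2.44)-(2.49),
# for an assumed linear limit state g(X) = 3 - (X1 + X2) with X1, X2 standard
# normal, so the exact answer is P_F = Phi(-3/sqrt(2)).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mc_failure_probability(g, sampler, n):
    """Estimate P_F = E[I(g(X) <= 0)] and the c.o.v. of the estimate."""
    x = sampler(n)
    indicator = (g(x) <= 0.0)                 # zero-one indicator, Eq. (2.44)
    p_hat = indicator.mean()                  # estimator, Eq. (2.46)
    cov = np.sqrt((1.0 - p_hat) / (n * p_hat)) if p_hat > 0 else np.inf
    return p_hat, cov

g = lambda x: 3.0 - x.sum(axis=1)
sampler = lambda n: rng.standard_normal((n, 2))
p_hat, cov = mc_failure_probability(g, sampler, 200_000)
p_exact = norm.cdf(-3.0 / np.sqrt(2.0))       # closed-form reference value
```

Note how the sample size drives the accuracy: per Eq. (2.49), reaching a c.o.v. of 0.1 for P_F near 0.017 requires roughly 100/0.017, i.e., about 6,000 samples, and far more for the small failure probabilities typical of well-designed remediation schemes.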
Section 3
Analytical Probabilistic Groundwater Modeling
In this section, the first- and second-order reliability methods are applied to an analytical contaminant transport code.
This phase of the work is intended as a first step in which the use of the reliability methods is tested on some simple
case studies, such that the different aspects of the methodology are investigated, and the limitations are assessed. In
this section, we are interested in the assessment of groundwater contamination risk due to the leaking of contaminants
from a waste source, where the Horizontal Plane Source model of Galya (1987) is utilized to solve cases of non-
reactive and reactive contaminant migration.
The use of the analytical transport code in this stage precludes the consideration of the spatial variability and
correlation structure of the aquifer properties (e.g., hydraulic conductivity). This is because analytical models are
usually simplistic models that cannot accommodate spatial variability in input parameters. Nevertheless, the
intention in this part is twofold: (1) to perform a proof-of-concept type of study for illustrating the potential power of
the reliability methods when applied to groundwater contamination problems; and (2) to develop a probabilistic tool
that can provide general indication of contamination risk, along with sensitivity of the results with respect to the
various basic sources of uncertainty, in a framework that can explicitly account for aquifer-related, source-related,
and, in the case of reactive solutes, chemical-related parameter uncertainty. Spatial variability of aquifer properties
is considered in Section 4.
3.1 Assessment of Groundwater Contamination
In order to assess the potential of groundwater contamination, the probability that a contaminant leaking from a waste
source will exceed a threshold level at a well downgradient from the source is estimated, along with the sensitivity of
this probability to the input random variables. The relevant parameters are assumed uncertain with prescribed
probability distributions. These parameters can be grouped into the following three categories:
1. Aquifer-related parameters, such as:
• hydraulic conductivity,
• soil porosity,
• soil bulk density,
• fraction of organic carbon, and
• longitudinal and transverse dispersivities.
2. Source-related parameters, which include:
• source dimensions, and
• source concentration.
3. Chemical-related parameters (in case of a reactive solute). For example, the organic carbon partitioning
coefficient can be assumed uncertain. Furthermore, uncertainty in the reaction kinetics of the chemical can also
be considered. For example, if the biodegradation of the chemical under consideration is assumed to follow first-
order kinetics, then the first-order decay coefficient may be assumed uncertain.
3.2 Models Utilized
The uncertainty analysis of groundwater contaminant transport is developed by extending a deterministic semi-
analytical contaminant transport model (Galya, 1987) to include the effect of parameter uncertainty, through linking
it with the general purpose probabilistic analysis program PROBAN (Veritas Research, 1992b). A brief description
of each model follows.
3.2.1 Horizontal Plane Source Model
The proposed methodology is tested using the Horizontal Plane Source model developed by Galya (1987), which is
a three-dimensional, semi-analytical model that uses Green's function solutions, along with numerical integration to
simulate uniform one-dimensional advective transport in the x-direction with three-dimensional dispersion in the x-,
y-, and z-directions. It can incorporate retardation and first-order decay. The advection-dispersion equation of
transport assuming a homogeneous and isotropic aquifer is given by
∂C/∂t + (u/R) ∂C/∂x − (D_x/R) ∂²C/∂x² − (D_y/R) ∂²C/∂y² − (D_z/R) ∂²C/∂z² + λC − M_s/θ = 0    (3.1)

where
C = concentration of contaminant [M/L³]
u = seepage velocity in the x-direction [L/T]
t = time [T]
D_x = dispersion coefficient in x-direction [L²/T] = α_x u
α_x = dispersivity in x-direction [L]
D_y = dispersion coefficient in y-direction [L²/T] = α_y u
α_y = dispersivity in y-direction [L]
D_z = dispersion coefficient in z-direction [L²/T] = α_z u
α_z = dispersivity in z-direction [L]
M_s = mass source flux [M/T]
θ = porosity of soil matrix in aquifer
λ = first-order decay coefficient [T⁻¹]
R = retardation factor = 1 + ρ_b K_d / θ
ρ_b = bulk density [M/L³]
K_d = distribution coefficient [L³/M]
The initial condition applied to solve the transport equation is given by

C = 0 at t = 0    (3.2)

and the boundary conditions are given by

C = 0 at y = ±∞    (3.3)

D_z ∂C/∂z = 0 at z = 0    (3.4)

D_z ∂C/∂z = 0 at z = H    (3.5)

C = 0 at z = ∞    (3.6)

where H is the aquifer thickness. The three-dimensional analytical solution which gives the contaminant concentra-
tion at any point in space for an instantaneous release of a unit mass of contaminant is given by (Galya, 1987)

C = (1/θ) X_0(x,t) Y_0(y,t) Z_0(z,t) T(t)    (3.7)
where
T = degradation function
 = exp(−λt) for first-order degradation, where λ includes the effects of hydrolysis, biodegradation, and
chemical reactions
X_0, Y_0, and Z_0 = Green's functions for the transport in the x-, y-, and z-directions, respectively.
Green's functions for a source extending from −L/2 to L/2 in the x-direction and from −B/2 to B/2 in the y-direction
are given by Galya (1987) as:

X_0 = (1/(2L)) { erf[ (L/2 + (x − x_s − ut/R)) / (2√(D_x t/R)) ] + erf[ (L/2 − (x − x_s − ut/R)) / (2√(D_x t/R)) ] }    (3.8)
where
L = width of source in x-direction
x = distance in x-direction to calculation point
xs = distance in x-direction to center of source
erf = error function
and
and

Y_0 = (1/(2B)) { erf[ (B/2 + (y − y_s)) / (2√(D_y t/R)) ] + erf[ (B/2 − (y − y_s)) / (2√(D_y t/R)) ] }    (3.9)

where
B = length of source in y-direction
y = distance in y-direction to calculation point
y_s = distance in y-direction to center of source
For a source located at the top of an aquifer of infinite thickness:

Z_0 = (1/√(π D_z t/R)) exp( −z² / (4 D_z t/R) )    (3.10)

whereas for a source located at the top of an aquifer having a thickness H:

Z_0 = (1/H) [ 1 + 2 Σ_{m=1}^∞ exp( −m²π² D_z t / (R H²) ) cos(mπz/H) ]    (3.11)
Galya (1987) indicates that applying a convolution integral with respect to time provides the concentration due to
a continuous release at a rate M(τ) for any time, T, following the beginning of release:

C(T) = ∫_0^T M(τ) C_u(T − τ) dτ    (3.12)

where M = mass released per unit time and C_u is the unit instantaneous-release solution given by (3.7).
The aforementioned integration is carried out numerically, thus obtaining the concentration at any point downgradient
from the source. Input to the model includes the seepage velocity, dispersivities in the x-, y-, and z-directions, soil
porosity, aquifer thickness, first-order decay coefficient, organic carbon partitioning coefficient, soil bulk density,
fraction of organic carbon, receptor location, along with the source location, dimensions, concentration and
infiltration rate. The model can accommodate any number of sources with varying concentrations, and any number
of receptor locations.
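The error-function structure of Eq. (3.8) can be sketched numerically. The form below, including the advective shift ut/R inside the arguments, is an assumed reading of the garbled equation, and all parameter values are illustrative defaults rather than site data.

```python
# Sketch of the x-direction plane-source Green's function of Eq. (3.8).
# The advective shift and all parameter values are illustrative assumptions.
import math

def x0_green(x, t, xs=0.0, L=10.0, u=0.1, alpha_x=1.0, R=1.0):
    """Assumed form of X_0: a boxcar source of width L smoothed by
    dispersion and advected downgradient at velocity u/R."""
    d_x = alpha_x * u                       # dispersion coefficient D_x
    spread = 2.0 * math.sqrt(d_x * t / R)   # dispersive spread scale
    xi = x - xs - u * t / R                 # distance from the advected center
    return (math.erf((L / 2 + xi) / spread) +
            math.erf((L / 2 - xi) / spread)) / (2.0 * L)
```

As a sanity check, this Green's function integrates to unity over x (unit-mass conservation) and decays to zero far from the advected plume center.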
3.2.2 PROBAN
Physical parameter uncertainty is considered by linking the transport model to the general purpose probability
analysis program PROBAN (Veritas Research, 1992b). PROBAN is a software package that is designed for
sophisticated probabilistic analysis. It has a flexible input module, allowing for the definition of simple models as
well as sophisticated models with complicated dependencies. It provides for a variety of methods aimed at different
types of probability and distribution analyses, along with sensitivity measures. PROBAN comes equipped with a
distribution library that contains more than twenty probability distributions. Thus the user can create any marginal
probability distribution or joint density of the input random variables, by assigning the required distribution to the
respective variables.
3.2.3 HPS-PROBAN Model
Figure 3.1 is a schematic presentation of the various elements involved in the combined HPS-PROBAN model. The probabilistic transport model can be applied as a "black box" without requiring a great deal of knowledge of reliability theory on the user's part. Inputs to the model include uncertain physical parameters as well as deterministic parameters.
[Figure 3.1 shows input uncertainty information (aquifer-related, source-related, and chemical-related parameters, specified either by a joint multivariate probability density function or by means, variances, and correlations) feeding the probabilistic screening transport model, in which the analytical transport model (HPS) is coupled with the probabilistic analysis program (PROBAN) through a limit-state function formulation. Its outputs (the probability of exceeding a target concentration level and the relative "importance" of the basic random variables) support assessing groundwater contamination risk, designing optimal site sampling, and assessing data worth.]

Figure 3.1 Flow chart of the combined HPS-PROBAN analytical probabilistic screening model. (Reprinted from Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M. Hamed, J. P. Conte, and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)
Various levels of statistical information can be accommodated by the model. When only incomplete statistical
information is available, the first two moments of each random variable are specified, without providing a specific
probability distribution. If some, or all, of the random variables are correlated, this information can be readily
incorporated into the model through correlation coefficients. When there is incomplete statistical information, only an
"ad-hoc" estimate of the probability of failure can be obtained. On the other hand, if complete probability information
is available, the input random variables are defined by their full joint density function.
The user is also prompted to formulate the limit-state function for the site-specific conditions, which identifies the
performance criteria of the component being considered. It should be noted that the formulation is different in the case
of continuous and instantaneous leaking sources, and differs depending on whether groundwater flow or contaminant
transport is the primary focus of the study.
Once the limit-state function is formulated and the input variables are provided, the model applies the first- and
second-order reliability methods to provide the user with the probability of failure at the receptor well, the reliability
index characterizing the transport scenario, and the sensitivity of the reliability index (or the failure probability) to
uncertainties in the basic random variables. The decision regarding the assessment of groundwater contamination risk
can be made with the probabilistic outcome available. The sensitivity information can help the user assess the worth
of available data and guide in future data collection protocols. Monte Carlo simulations of the failure probability, or
of the concentration at the receptor well, can be readily provided by the HPS-PROBAN model with minor
modifications to the input file.
The interface between PROBAN and HPS is implemented using FORTRAN 77 user-defined subroutines, and HPS-PROBAN is run on a SUN SPARCstation 2. In the FORM/SORM analysis, HPS is called at each iteration of the constrained optimization routine used to obtain the design point (as discussed in Section 2), providing an estimate of the limit-state function for the current realization, x, of the vector of random variables, X. The search algorithm usually converges to a minimum in fewer than 20 iterations. To ensure that a global minimum is obtained, the user may choose to run the probabilistic model with different starting points and check whether the algorithm converges to the same solution every time.
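The design-point search can be illustrated with the Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration commonly used in FORM. The linear limit-state below is a hypothetical stand-in for the HPS-based limit-state function, so the numbers are purely illustrative.

```python
# Hypothetical illustration of the design-point search used by FORM: the
# Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration in standard normal space.
# The linear limit-state g() below is a stand-in for the HPS-based one.
import math

def hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
    """Iterate toward the design point, the point on g(u) = 0 closest to the origin."""
    u = list(u0)
    for _ in range(max_iter):
        gu = g(u)
        dg = grad_g(u)
        norm2 = sum(d * d for d in dg)
        # HL-RF update: project onto the linearized limit-state surface
        c = (sum(d * ui for d, ui in zip(dg, u)) - gu) / norm2
        u_new = [c * d for d in dg]
        if max(abs(a - b) for a, b in zip(u, u_new)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(ui * ui for ui in u))        # reliability index
    return u, beta

# Stand-in limit state: g(u) = 3 - u1 - u2, so the exact beta is 3/sqrt(2)
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: [-1.0, -1.0]
u_star, beta = hlrf(g, grad_g, [0.0, 0.0])
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))           # FORM estimate: Pf = Phi(-beta)
```

Restarting hlrf from several different starting points and checking that beta agrees is one way to guard against converging to a local minimum, as suggested above.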
3.3 Non-reactive Solute Transport
First, the methodology is demonstrated on two cases of transport of a conservative (non-reactive) solute in groundwater. This means that the only mechanisms involved in the solute transport are advection and dispersion; the solute is assumed to undergo no chemical transformation, biological degradation, or sorption to the soil matrix. The source is assumed continuous. The first of the two cases considers a smaller number of random variables, which allows a number of parametric sensitivity analyses to be presented in a way that would be difficult for cases involving a larger number of random variables.
3.3.1 Limit-state Function Formulation
The problem of interest in this phase is to study the probability that the concentration of a given contaminant leaking
continuously from a source exceeds a pre-determined target level at a downgradient water supply well during the
simulation time of interest. Figure 3.2 shows a schematic of the case study setup. The formulation follows that of
Cawlfield and Sitar (1988).
In the case of a non-reactive solute, we look at the normalized target concentration at the receptor well, (C(X)/C0), where C0 is the source concentration and C(X) is the simulated concentration when parameter uncertainty is taken into account. Hence the limit-state function is formulated as follows:

g(X) = (C/C0)target - C(X)/C0     (3.13)

where (C/C0)target is the pre-specified normalized target concentration at the well.
Figure 3.2 Schematic of the case study setup. (Reprinted from Probabilistic screening tool for groundwater
contamination assessment, 1995, by M. M. Hamed, J. P. Conte, and P.B. Bedient with permission of
Journal of Environmental Engineering, ASCE.)
Component failure occurs when g(x) (for a given realization x, of the random variables) is less than zero; that is,
when the normalized target concentration at the receptor well is exceeded. Note that this formulation is intended for
a continuous source transport problem, in which case the concentration at the receptor well increases with time in a
monotonic fashion. This implies that once the target concentration at the receptor well is exceeded (and failure
occurs), the concentration at the well will always be greater than the target value as time continues to increase, and
failure conditions will continue to prevail.
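The sign convention of the limit-state function can be stated in a few lines; a minimal sketch with made-up numbers:

```python
# Minimal sketch of the limit-state sign convention: g < 0 means the
# normalized target concentration is exceeded (component failure).
def limit_state(target_ratio, simulated_ratio):
    """g(x) = (C/C0)target - C(x)/C0."""
    return target_ratio - simulated_ratio

def failed(g_value):
    return g_value < 0.0

g_safe = limit_state(0.3, 0.25)      # simulated below target: safe
g_fail = limit_state(0.3, 0.40)      # simulated above target: failure
```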
3.3.2 Inputs to the Case of Non-reactive Solute (Case 1)
The probability of failure at a receptor well that is 100 m downgradient from the source was studied, considering the
seepage velocity, along with the dispersion characteristics of the aquifer as random variables. Input to the model
includes deterministic parameters (Table 3.1) as well as random variables (Table 3.2). For the case of shifted-
lognormally distributed variables, the mean, standard deviation, and the minimum values are given. Mean and
standard deviation are given for the normally distributed variables.
Table 3.1 Deterministic Input Parameters for the Non-reactive Case (1)

Variable                          Units     Value
source length, B                  m         80.0
source width, L                   m         20.0
aquifer thickness, H              m         60.0
aquifer porosity, θ               —         0.35
simulation time, t                year      20.0
source concentration, C0          mg/L      1.0
infiltration rate, Q              m/year    1.0
retardation factor, R             —         1.0
1st-order decay coefficient, λ    year⁻¹    0
Table 3.2 Random Input Variables for the Non-reactive Case (1)

Variable                        Units     Distribution
seepage velocity, u             m/year    SLN(10.0, 3.0, 1.0)
longitudinal dispersivity, αx   m         N(5.0, 1.5)
horizontal dispersivity, αy     m         0.2 of αx
vertical dispersivity, αz       m         0.05 of αx

SLN(mean, std. dev., lower limit): Shifted Lognormal   N(mean, std. dev.): Normal
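The SLN(mean, std. dev., lower limit) inputs of Table 3.2 can be sampled by moment-matching the parameters of the underlying normal. The sketch below mirrors the standard lognormal algebra and is not necessarily PROBAN's internal parameterization.

```python
# Sketch: sampling SLN(mean, std. dev., lower limit) by moment matching.
# This mirrors standard lognormal algebra, not necessarily PROBAN's internals.
import math
import random

def sln_params(mean, std, lower):
    """Normal parameters (mu, sigma) of ln(X - lower) matching the given moments."""
    m = mean - lower                       # mean of the unshifted lognormal part
    sigma2 = math.log(1.0 + (std / m) ** 2)
    return math.log(m) - 0.5 * sigma2, math.sqrt(sigma2)

def sample_sln(mean, std, lower, n, seed=0):
    mu, sigma = sln_params(mean, std, lower)
    rng = random.Random(seed)
    return [lower + math.exp(rng.gauss(mu, sigma)) for _ in range(n)]

# Seepage velocity u ~ SLN(10.0, 3.0, 1.0) from Table 3.2
u_samples = sample_sln(10.0, 3.0, 1.0, 50000)
```

Every sample stays above the lower limit of 1.0 m/year, and the sample mean approaches the specified mean of 10.0 m/year as the sample size grows.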
3.3.3 Results of the Case of Non-reactive Solute (Case 1)
Instead of performing the analysis for a single normalized target concentration, a set of such target concentrations was considered. In other words, the reliability problem is solved a number of times, varying the term (C/C0)target that appears in Equation 3.13 each time. Note that the probability distribution of the failure event under consideration can be obtained by varying the target concentration values and using the parametric sensitivity results with respect to limit-state function parameters. This gives a more flexible way of assessing the groundwater contamination risk at the receptor well for any selected target concentration value.
The analysis was done using both FORM and parabolic SORM as the probability analysis options. The results are shown in Figure 3.3. The decrease in failure probability as the target (C/C0) increases is expected, since the probability of exceeding a high concentration value downgradient is usually less than that of exceeding a smaller value for a continuous source.

The deviation between the FORM and SORM failure probabilities indicates the nonlinear nature of the limit-state function, and thus a second-order surface is expected to provide a better approximation of the failure surface at the design point in the standard normal space.
To generalize the analysis even more, the location of the receptor well was varied, in order to study the effect of both distance from the leaking source and the specified target concentration level (C/C0) on the failure probability.
Figure 3.3 Probability of failure at a location 100 m downgradient from the leaking source.
Figure 3.4 illustrates the failure probability "surface" as a function of both the distance from the source and the normalized target concentration. Figure 3.5 shows horizontal "slices" through that surface, producing contours of equal failure probability over the ranges of distance from the source and target concentration of interest. The figures indicate that for a given target (C/C0), the probability of failure decreases as the well distance increases, and that for a given well location, the probability of failure decreases as the normalized target concentration increases. Although the results are qualitatively intuitive, the quantitative aspects could not have been obtained without the formal probabilistic computations described in the above formulation.
Various parametric studies can also be performed using the probabilistic program, to study the effect that the various
distribution parameters of the input random variables have on the probabilistic outcome. An example is given below,
Figure 3.4 Failure probability surface at various well locations for different normalized target concentration levels.
Figure 3.5 Contours of equal failure probabilities for case 1.
where the mean value of the longitudinal dispersivity αx is allowed to take on values in the range of 1.0 to 5.0 meters. The effect on the probability of failure at different downgradient locations is shown in Figure 3.6. The decrease in the probability of failure with the increase in the mean value of αx is intuitive, since the higher the dispersivity in the x-direction, the higher the dispersivities in the y- and z-directions will become (because we considered αy and αz to be fully correlated with αx). Therefore, the mass of contaminant will be dispersed over a larger volume, and the concentration at the well will decrease, thus reducing the probability of failure. In addition, the figure shows that the probability of failure decreases with distance from the source for the same value of mean longitudinal dispersivity. This is because, at larger distances from the leaking source, the spread of contaminant is greater than at closer distances, and the probability of exceeding the designated normalized target concentration will be smaller.
In addition to the failure probability and the parametric sensitivity factors, valuable information is provided by the importance factors. These were explained in Section 2; they indicate the relative importance of the uncertainty in each basic random variable.
Figure 3.7 shows the change in importance factors with the mean value of αx and with downgradient distance from the source. In this case the probabilistic outcome is, for the most part, more sensitive to the uncertainty in the seepage velocity than to the uncertainty in the dispersivity; only when the mean value of the longitudinal dispersivity exceeds about 4.0 m, and for distances greater than about 90.0 m from the source, does the sensitivity to the uncertainty in the longitudinal dispersivity become significant. This result shows that for receptor locations far from the source the effect of dispersion becomes noticeable, since at large distances larger dispersivities cause the contaminant to spread over a larger volume, reducing the concentration at the receptor location and hence decreasing the probability of failure. The results of the sensitivity analysis in this case are not intended to be general and will vary with the choice of problem configuration, prescribed probability distributions, and other pertinent factors. However, the methodology is applicable to all cases.
To study the effect of source strength, further analysis was performed considering the source initial concentration (C0) to be a random variable having a normal distribution with a mean of 1.0 mg/L and a coefficient of variation of 0.3, while still assuming the seepage velocity along with the longitudinal, transverse horizontal, and vertical dispersivities to be random variables with the distributions given in Table 3.2. The analysis was carried out by estimating the importance factors of these random variables, for a range of normalized target concentrations, and for
Figure 3.6 The effect of the mean value of the longitudinal dispersivity αx on the probability of failure at different downgradient locations.
Figure 3.7 The effect of the mean value of the longitudinal dispersivity αx on the importance factors (upper surface represents seepage velocity, lower one represents dispersivity).
varying receptor well locations downgradient from the source. In this case, by target (C/C0) we mean the
concentration normalized with respect to the mean source concentration (which is equal to 1.0 mg/L).
Results are shown as a three-dimensional plot in Figure 3.8, indicating the paramount importance of the source strength uncertainty to the reliability measure. It is clear from this plot that, for the range of distances studied, the source strength has a significant impact on the reliability estimates regardless of the distance of the receptor well from the source. This is not the case for the importance of the seepage velocity and dispersivity, which are affected by the distance from the source. A vertical section of this figure at a distance of 60 m downgradient from the source is shown in Figure 3.9, which shows the greater sensitivity of the reliability index to the source strength uncertainty than to the uncertainty in the dispersivity or seepage velocity.
3.3.4 Inputs to the Case of Non-reactive Solute (Case 2)
The deterministic parameters for this case study are listed in Table 3.3. The probability of failure at a receptor well downgradient from the waste source is studied. The well is assumed to be screened from the water table to a depth of 2.0 m below the water table; hence the observation point is taken to be 1.0 m below the water table.
Input random variables to the case study are categorized into aquifer-related and source-related parameters and are
listed in Table 3.4. Aquifer-related parameters include seepage velocity, dispersivities in the x-, y-, and z-directions,
and soil porosity. Source-related parameters include the source dimensions parallel to the x- and y-axes. Probability
density functions and relevant parameters for the aquifer-related parameters were obtained from the nationwide
survey conducted by the EPA in 1988 (U.S. EPA, 1988), along with the 400-site survey conducted by Newell et al.
(1990). The selection of the uniform probability distribution for the source dimensions is arbitrary. Since there is
usually limited information on the extent of the source area, the uniform distribution, which implies equal likelihood
of occurrence, is believed to conveniently describe the level of uncertainty related to the source dimensions. The basic
random variables are assumed to be mutually statistically independent.
Figure 3.8 Importance of source strength, seepage velocity, and dispersivity on the reliability estimate (upper surface:
source strength, middle: seepage velocity, lower: dispersivity).
Figure 3.9 Importance factors at a location 60.0 m downgradient from the leaking source.
Table 3.3 Deterministic Input Parameters for the Non-reactive Case (2) (Reprinted from Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M. Hamed, J. P. Conte, and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)

Variable                          Units     Value
aquifer thickness, H              m         30.0
simulation time, t                year      20.0
x-distance to the well            m         variable
y-distance to the well            m         10.0
z-distance to the well            m         1.0
infiltration rate, Q              m/year    1.0
retardation factor, R             —         1.0
1st-order decay coefficient, λ    year⁻¹    0.0

Note: z-distance is measured from the water table

Table 3.4 Random Input Variables for the Non-reactive Case (2) (Reprinted from Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M. Hamed, J. P. Conte, and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)

Variable                    Units     Distribution

Aquifer-related Parameters
seepage velocity, u         m/year    LN(126.7, 227.37)
x-dispersivity, αx          m         SLN(10, 4, 0.01)
y-dispersivity, αy          m         SLN(1, 0.4, 0.001)
z-dispersivity, αz          m         SLN(0.1, 0.04, 0.0001)
soil porosity, θ            —         U(0.3, 0.5)

Source-related Parameters
source length, Lx           m         U(50, 100)
source width, Ly            m         U(50, 100)

LN(mean, std. dev.): Lognormal   SLN(mean, std. dev., lower limit): Shifted Lognormal   U(lower limit, upper limit): Uniform
For the case of lognormally distributed variables, the mean and standard deviation are given, whereas for shifted
lognormal variables, the mean, standard deviation, and the minimum values are given. Lower and upper limits are
provided for the case of uniformly distributed variables.
3.3.5 Results of the Case of Non-reactive Solute (Case 2)
The probability of failure at receptor wells located 200 m and 400 m downgradient, for a range of normalized target concentration levels, is shown in Figure 3.10. First-order and curvature-fitted second-order reliability methods were used. The decrease in failure probability with increasing target concentration is intuitive, since it is less probable for the solute concentration to exceed a high value at the downgradient well than a smaller value for a continuous source.
FORM and SORM failure probabilities were found to be in good agreement for low target concentration values (and
hence for higher failure probabilities). However, FORM and SORM results depart from each other for large target
concentration values (and lower failure probabilities), which indicates the appreciable non-linearity of the limit-state
surface at the design point. In this case, a second-order method is expected to provide a better approximation of the
failure surface at the design point since it accounts for the principal curvature of the limit-state surface in the standard
normal space.
Figure 3.11 illustrates the effect of changing the normalized target concentration levels at the receptor well on the
FORM and SORM reliability index for the 200 m and 400 m cases. Since there is a monotonic one-to-one
relationship between the probability of failure and reliability index, the same trend of agreement at low normalized
target concentration levels and discrepancy at large normalized target concentration levels is observed for the FORM
and SORM results. The reliability index is a measure of the component reliability, such that it increases for
decreasing probability of failure.
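The monotonic one-to-one relationship mentioned here is Pf = Φ(-β); a minimal sketch of converting between the two:

```python
# Pf = Phi(-beta): converting between failure probability and reliability index.
import math

def pf_from_beta(beta):
    """Standard normal CDF at -beta, via the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def beta_from_pf(pf, lo=-10.0, hi=10.0):
    """Invert by bisection; beta increases as the failure probability decreases."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pf_from_beta(mid) > pf:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, beta_from_pf(0.1) is about 1.28, and pf_from_beta(0.0) is exactly 0.5.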
The discrepancy between the FORM and SORM results are further investigated by estimating the percent difference
between the results for varying normalized target concentrations and distances from the source. The absolute value of
the percent difference was estimated as follows:
Figure 3.10 Effect of normalized target concentration levels on the probability of failure for the non-reactive case 2.
(Reprinted from Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M.
Hamed, J. P. Conte, and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)
Figure 3.11 Effect of normalized target concentration levels on the reliability index for the non-reactive case 2. (Reprinted
from Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M. Hamed, J. P.
Conte, and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)
absolute percent difference = | (Pf,FORM - Pf,SORM) / Pf,FORM | x 100     (3.14)

where Pf,FORM and Pf,SORM denote the FORM and SORM failure probabilities, respectively.
This is shown in Figure 3.12. FORM and SORM failure probabilities were found to be in good agreement for low target concentration values, or for high target concentration values at closer distances to the source. In other words, the agreement between the FORM and SORM results is good for cases with a high probability of failure (above 10⁻¹ for this particular case study and for the prescribed probability distributions). FORM and SORM results depart significantly from each other, however, for large target concentration values, or for small target concentrations at larger distances from the source, indicating appreciable non-linearity of the limit-state surface at the design point, in which case the use of SORM is warranted.
The previous discussion draws attention to an important issue regarding the choice of SORM versus FORM as the
reliability analysis option. In many practical situations, FORM and SORM results would be in good agreement,
provided that the limit-state surface at the design point in the standard normal space is nearly flat. On the other hand,
when the limit-state function contains highly non-linear terms, or when the input random variables have an
accentuated non-normal character, SORM can produce more accurate results than FORM. However, it should be
recognized that SORM requires more computational effort than FORM. As discussed, in component reliability, the run time required for a FORM analysis grows proportionally with the problem dimensionality, n, whereas the additional computational effort needed for a SORM analysis grows with n²/2. Consequently, the selection of the
reliability method should be based on problem dimensionality, available computer resources, and the required level of
accuracy. In other words, one should always conduct a careful trade-off analysis between computational and
accuracy requirements. In this case study, the CPU times (user + system times) for FORM and SORM were on the
order of one minute and two and a half minutes, respectively.
The change of the importance factors with changing normalized target concentration levels is shown in Figure 3.13.
It is evident that over the range of target concentrations selected, and for the probability distributions prescribed for
this case study, the probability of failure at the receptor well is most sensitive to the basic variabilities in the seepage
velocity. Sensitivities with respect to the z-dispersivity and source length, Lx, come next. The importance factors for the
remaining variables were less than 1.0% and were not plotted. Hence, although the impact of seepage velocity on the
probabilistic outcome is evident, the significance of the source-related uncertainty should also be recognized. When
Figure 3.12 Percent difference between FORM and SORM failure probability results for the non-reactive case 2. (Reprinted from Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M. Hamed, J. P. Conte, and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)
Figure 3.13 Effect of normalized target concentration on importance factors in non-reactive case 2. (Reprinted from
Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M. Hamed, J. P. Conte,
and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)
correlation between some of the input random variables was included, the resulting probability of failure was within
about 3% of the uncorrelated case.
3.4 Reactive Solute Transport
As an example of transport of a reactive solute in the subsurface, the proposed methodology is applied to the case of transport of o-xylene. A continuous leaking source is also used in this case. The concentration at a well 100.0 m downgradient, and 1.0 m off the centerline (x-axis), is studied.
3.4.1 Limit-state Function Formulation
In this case study, we look at actual (as opposed to normalized) o-xylene concentration at the receptor well, resulting
from the continuous leaking source. Note that o-xylene undergoes adsorption to the soil matrix and biodegradation
in addition to advection and dispersion, hence the actual concentration of o-xylene at the receptor well will be much
smaller than that of a similar leak of a non-reactive solute. The limit-state function for this case is formulated as
follows:
g(X) = Ctarget - C(X)     (3.15)

where Ctarget is the non-normalized o-xylene target concentration at the well.
3.4.2 Inputs to the Case of Reactive Solute
The deterministic parameters used in the case study are listed in Table 3.5 and the input random variables to the
model are listed in Table 3.6.
The input random variables are classified into three categories: aquifer-related, source-related, and chemical-related
parameters. Basic random variables are assumed mutually statistically independent.
Table 3.5 Deterministic Input Parameters for the Case of o-xylene Transport (Reprinted from Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M. Hamed, J. P. Conte, and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)

Variable                          Units     Value
simulation time, t                year      20.0
x-distance to the well location   m         100.0
y-distance to the well location   m         1.0
z-distance to the well location   m         1.0
infiltration rate, Q              m/year    1.0
source concentration, C0          mg/L      200.0
contaminant type                  —         o-xylene
Note: z-distance is measured from the water table

Table 3.6 Random Input Variables to the Case of o-xylene Transport (Reprinted from Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M. Hamed, J. P. Conte, and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)

Variable                                     Units      Distribution

Aquifer-related Parameters
seepage velocity, u                          m/year     LN(126.7, 227.37)
x-dispersivity, αx                           m          SLN(10, 4, 0.01)
y-dispersivity, αy                           m          SLN(1, 0.4, 0.001)
z-dispersivity, αz                           m          SLN(0.1, 0.04, 0.0001)
soil porosity, θ                             —          U(0.3, 0.5)
soil bulk density, ρb                        g/cm³      U(1.2, 1.8)
fraction of organic carbon, foc              % weight   SLN(0.0031, 0.0003, 0.001)

Source-related Parameters
source length, Lx                            m          U(50, 100)
source width, Ly                             m          U(50, 100)

Chemical-related Parameters
organic carbon partition coefficient, Koc    cm³/g      U(200.0, 900.0)
1st-order decay coefficient, λ               year⁻¹     U(1.456, 5.72)

LN(mean, std. dev.): Lognormal   SLN(mean, std. dev., lower limit): Shifted Lognormal   U(lower limit, upper limit): Uniform
First-order kinetics have been extensively used to describe processes like natural biodegradation, chemical reactions,
and radioactive decay. For the purpose of this work, it is assumed that o-xylene undergoes natural anaerobic
biodegradation by indigenous microorganisms following first-order kinetics. Due to changing depths of groundwater
elevation and fluctuation in levels of nutrients and electron acceptors, there is uncertainty in the value of the first-
order decay coefficient for the contaminant. This is taken into account by assuming the decay coefficient to be
random. The choice of the range of equally likely values of the first-order decay coefficient used in this work takes
into account actual rates for natural biodegradation reported by Wilson et al. (1993) for o-xylene. As for the organic carbon partition coefficient, Koc, the range of values was obtained from the values listed by Fetter (1993).
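Although the report does not spell out the sorption relation, the inputs in Table 3.6 are those of the conventional linear-sorption retardation factor, R = 1 + (ρb/θ)·Koc·foc. The sketch below assumes that relation, with illustrative mid-range values rather than numbers from the study.

```python
# Conventional linear-sorption retardation factor R = 1 + (rho_b / theta) * Kd,
# with Kd = Koc * foc. This relation is assumed, not quoted from the report.
def retardation(rho_b, theta, koc, foc):
    """rho_b in g/cm3, theta dimensionless, Koc in cm3/g, foc as a mass fraction."""
    kd = koc * foc                     # distribution coefficient, cm3/g
    return 1.0 + rho_b * kd / theta

# Illustrative mid-range values from Table 3.6: rho_b=1.5, theta=0.4, Koc=550, foc=0.0031
r = retardation(1.5, 0.4, 550.0, 0.0031)
```

With Koc = 0 (no sorption) the function reduces to R = 1, the non-reactive value used in the earlier cases.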
3.4.3 Results of the Case of Reactive Solute
The effect of changing the target concentration levels at the receptor well location on the probability of failure is illustrated in Figure 3.14. Both FORM and SORM (curvature-fitted) were used for the component reliability analysis. As observed and explained in the non-reactive case, the failure probability shows a similar decreasing trend with increasing target concentration. For this application, FORM and SORM results agree reasonably well over the whole range. Good agreement was similarly noted between the FORM and SORM reliability indices over the whole range of target concentration levels, as shown in Figure 3.15. The effect of correlation between some of the input random variables in this case was similar to that of the non-reactive case, and the resulting difference in the probability of failure was within about 5% of the uncorrelated case.
The importance factors for a range of o-xylene target concentration levels are shown in Figure 3.16. It is clear that over the range of target concentrations selected, and for the probability distributions prescribed for this case study, the probability of failure at the receptor well is most sensitive to the basic uncertainty in the seepage velocity, the first-order decay coefficient, the organic carbon partition coefficient, and, to a lesser extent, the source length Lx. Therefore, although the impact of seepage velocity on the probabilistic outcome is pronounced, the significance of the chemical-related and source-related uncertainty should also be recognized; failure to account for these uncertainties would cast doubt on the risk assessment results. The importance factors for the other variables were negligible (below 1.0%), so they were not plotted. The apparent oscillation in the results is a spurious numerical artifact: the accuracy of the reliability algorithm varies slightly with the target concentration level because of the non-linear nature of the limit-state surface at the design point in the standard normal space. The trend of the true behavior of the solution, however, remains clear.
Figure 3.14 Effect of o-xylene target concentration levels on the probability of failure. (Reprinted from Probabilistic screening tool for groundwater contamination assessment, 1995, by M. M. Hamed, J. P. Conte, and P.B. Bedient with permission of Journal of Environmental Engineering, ASCE.)
[Figure: FORM and SORM reliability index vs. target concentration, 0.1-0.6 mg/L.]
Figure 3.15 Effect of o-xylene target concentration levels on the reliability index.
[Figure: importance factors (%) vs. target concentration, 0.15-0.6 mg/L, for seepage velocity, 1st-order decay
coefficient, organic carbon partition coefficient, and source length.]
Figure 3.16 Effect of o-xylene target concentration on importance factors. (Reprinted from Probabilistic screening tool for
groundwater contamination assessment, 1995, by M. M. Hamed, J. P. Conte, and P. B. Bedient with
permission of Journal of Environmental Engineering, ASCE.)
In Figure 3.17, the failure probabilities obtained by FORM, curvature-fitted SORM, and the classic Monte Carlo
simulation (MCS) are plotted for the well location described before, and for an o-xylene target concentration of 0.75
mg/L. The MCS and SORM results seem to be in good agreement. FORM results, however, depart from the "true"
solution predicted by the ensemble mean of the large number of Monte Carlo simulations. It should be emphasized
that 10,000 Monte Carlo simulations were required to get a reliable estimate of the probability of failure (i.e., a
coefficient of variation of the estimate within the permissible limits, 0.02 in this case). FORM and SORM results
were obtained in less than 20 iterations, and the required CPU time on a SUN SPARCstation 2 was on the order of
Section 4
Numerical Probabilistic Groundwater Modeling
4.1 General
In this Section, we develop a probabilistic transport model based on a numerical groundwater contaminant transport
code. The impetus for using the numerical transport code in this phase can be summarized in the following:
1. Numerical models allow for the consideration of more realistic problems, with complex boundary
conditions and geometry.
2. They allow for the consideration of the spatial variability of the aquifer properties. In this case, the
formulation is done in terms of spatial random fields as opposed to random variables. Spatial random
fields describe variability in the parameters in addition to correlation structure between any two points in
space.
3. It is possible, using numerical models, to examine a number of critical issues in modeling groundwater
flow and transport problems. Examples include:
(a) the effect of the discretization level of the solution domain on the accuracy of the results.
(b) the effect of the ratio of the discretization scale to the integral scale of the aquifer on the solution
accuracy.
(c) the effect of material heterogeneity on the reliability of the groundwater system considered.
4. Numerical models allow for the analysis of problems where the aquifer is stressed by pumping and/or
injection.
4.2 Selection of Numerical Transport Model
The numerical contaminant transport model and the reliability model must be selected carefully in order to ensure
the robustness and efficiency of the resulting probabilistic transport algorithm. The transport model
chosen for this phase of the work is FLOTRAN developed by Clint Dawson at the Computational and Applied
Mathematics Department (CAAM) at Rice University. The code is written in FORTRAN 77. The underlying theory
of the solution is explained in Bell et al. (1988), and Dawson (1990, 1991, and 1993).
FLOTRAN is a recently developed finite element simulation tool for modeling unsaturated/saturated flow and
contaminant transport in porous media. The code solves Richards' equation for flow combined with a system of
advection-diffusion-reaction equations describing contaminant transport. It assumes a three-dimensional, logically
rectangular domain, but allows for fairly general geometries by using smooth mappings between the logically
rectangular domain and a rectangular computational domain. Therefore, we describe the algorithms used in the
method in the context of brick-shaped elements.
FLOTRAN solves the governing flow equation using the mixed finite element method for spatial discretization and
fully implicit time discretization. The mixed finite element method applied to flow equations has been described in
numerous papers. The version employed herein is defined in the context of linear elliptic equations in Arbogast et al.
(1998). In this method, hydraulic head and Darcy velocity are simultaneously approximated, with hydraulic head
approximated using constants in each element. Numerical integration is used to reduce the finite element equations
to a cell-centered finite difference scheme in head variables only. The implicit time discretization leads to a system
of nonlinear finite difference equations. These equations are solved using a damped Newton method with linesearch
backtracking to guarantee convergence from bad starting guesses (Dennis and Schnabel, 1983). The Jacobian
equations which arise at each Newton step are solved using a preconditioned orthomin iterative procedure.
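The damped Newton idea described above can be sketched in a few lines. This is a minimal illustration, not FLOTRAN's FORTRAN implementation: the linear Jacobian system is solved directly with NumPy rather than with a preconditioned orthomin iteration, and the backtracking rule is the simplest possible one.

```python
import numpy as np

def damped_newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Damped Newton iteration with a simple backtracking linesearch:
    halve the step until the residual norm actually decreases."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        step = np.linalg.solve(jac(x), -r)  # full Newton step
        lam = 1.0
        while np.linalg.norm(f(x + lam * step)) >= np.linalg.norm(r) and lam > 1e-8:
            lam *= 0.5  # damp the step from a bad starting guess
        x = x + lam * step
    return x

# Usage: solve x^2 - 2 = 0 from a deliberately poor starting guess.
root = damped_newton(lambda x: np.array([x[0] ** 2 - 2.0]),
                     lambda x: np.array([[2.0 * x[0]]]),
                     [10.0])
```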
The Darcy velocities and moisture content computed from the flow are used in the system of transport equations for
each contaminant species. These equations model the advection, diffusion, and chemical reactions of species in the
system. Each equation is of the form
∂(θc)/∂t + ∇·(vc − D∇c) = g    (4.1)
where θ is moisture content, c is the contaminant concentration, v is the Darcy velocity, D is the diffusion/dispersion
tensor, and g incorporates sources and sinks through wells and chemical reactions; g may be a function of several
species in the system.
Equation (4.1) is solved numerically using a time-splitting approach. Given an approximation C(t) to c at some time
t, we advance to time t + δt by advecting and diffusing C(t), which gives an intermediate solution C̃(t + δt), and then
incorporating reactions using C̃(t + δt) as initial conditions. This ultimately gives C(t + δt) ≈ c(t + δt). This type
of splitting has been analyzed in Wheeler and Dawson (1988), and has been adopted and modified by a number of
other researchers; see, for example, Valocchi and Malmstead (1992). The advantages of this approach are that the
advection and diffusion of each component can be performed separately (and in parallel if desired), advection-
advection and diffusion of each component can be performed separately (and in parallel if desired), advection-
diffusion and reactions can be modeled using appropriately-sized time steps, and different reaction models can be
easily incorporated.
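The splitting can be illustrated on a one-dimensional analogue of Equation (4.1) with θ = 1 and first-order decay as the "reaction". This is a sketch of the splitting idea only; the advection-diffusion substep here is a simple explicit upwind scheme, not the Godunov-Mixed Method used by FLOTRAN.

```python
import numpy as np

def split_step(c, v, D, k, dx, dt):
    """One time-split step for dc/dt + d(vc)/dx = d(D dc/dx)/dx - k c:
    advect and diffuse to get an intermediate solution, then apply the
    reaction with that intermediate as initial data."""
    c_up = np.roll(c, 1)             # c[i-1]; upwind for v > 0
    c_up[0] = c[0]
    c_dn = np.roll(c, -1)            # c[i+1]
    c_dn[-1] = c[-1]
    c_tilde = (c - v * dt / dx * (c - c_up)
               + D * dt / dx ** 2 * (c_dn - 2.0 * c + c_up))
    return c_tilde * np.exp(-k * dt)  # exact for linear decay

# Usage: a decaying pulse is carried downgradient while spreading.
x = np.linspace(0.0, 10.0, 101)
c = np.exp(-(x - 2.0) ** 2)
for _ in range(100):                  # advance to t = 1
    c = split_step(c, v=1.0, D=0.01, k=0.1, dx=0.1, dt=0.01)
```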
Advection and diffusion/dispersion are modeled using the Godunov-Mixed Method, developed and analyzed by
Dawson and described in Dawson (1991, 1993). In this scheme a second-order accurate Godunov procedure (Bell
et al., 1988) is used to approximate the advective flux vc, and a mixed finite element method, similar to that used for the
flow equation, is used to incorporate diffusion/dispersion. The resulting scheme is a cell-centered finite difference
method for contaminant concentrations. The advective flux is incorporated explicitly in time, while the diffusion/
dispersion step is implicit. By modeling advection explicitly and diffusion implicitly, a symmetric, diagonally
dominant and positive definite system of equations is obtained at each time-step. This system is easily solved using
Jacobi preconditioned conjugate gradient. Although contaminant concentrations are approximated by constants in
each cell, the Godunov-Mixed Method is second-order accurate in space and first-order accurate in time at
the center of each element. This has been shown both theoretically and computationally.
When nonlinear reactions are present, with reaction terms depending on a number of species, a system of ordinary
differential equations must be solved after each component has been transported. This system is approximated using
a second-order Runge-Kutta procedure; however, more sophisticated techniques may also easily be incorporated.
4.3 Selection of Reliability Shell
The general purpose reliability shell CALREL (Liu et al., 1989) is linked to the finite element program FLOTRAN.
Experience with the analytical phase of the research showed that although PROBAN is a sophisticated and versatile
probabilistic analysis program, it is better suited for analytical transport problems. This is mainly due to the extensive
amount of preparation and interface programs that the user has to provide. Experience with CALREL, on the other
hand, has established the advantages of using it with a numerical scheme, since it lends itself more easily to interface
with numerical models with large numbers of input random variables.
CALREL is a general purpose reliability analysis shell that is designed to compute probability integrals as well as the
sensitivities of estimates of the probability of failure with respect to deterministic parameters defining the probability
distribution or the limit-state function. CALREL is modularized into a set of routines to perform:
1. First-order reliability analysis (FORM),
2. Second-order reliability analysis (SORM),
3. First-order sensitivity analysis,
4. First-order bounds for series systems,
5. Monte Carlo simulation, and
6. Directional simulation.
Input to the CALREL shell includes:
1. An analysis flag to determine whether component or system reliability is used.
2. The choice of the optimization method used in determining the design point, including:
(a) HL-RF method,
(b) modified HL-RF method,
(c) gradient projection method, and
(d) sequential quadratic programming method.
3. The statistical data for the random variables, which in turn includes:
(a) Whether the variables are correlated or independent,
(b) Whether the variables are described by their marginal distributions and correlation matrix, or by
conditional and marginal distributions and correlation matrix,
(c) The probability distribution of each random variable, and
(d) Deterministic parameters used for defining the limit-state function.
4. Type of reliability or simulation analysis to be performed.
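As an illustration of the HL-RF method listed above, the textbook iteration in standard normal space can be sketched as follows. This is not CALREL's code, and the limit-state function in the example is a made-up linear one for which the reliability index is known in closed form.

```python
import numpy as np

def hlrf(g, grad_g, n, tol=1e-8, max_iter=100):
    """HL-RF iteration in standard normal space: repeatedly project
    onto the linearized limit-state surface g(u) = 0."""
    u = np.zeros(n)
    for _ in range(max_iter):
        grad = grad_g(u)
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)  # design point and beta = ||u*||

# Usage: linear limit state g(u) = 3 - u1 - u2, for which
# beta = 3 / sqrt(2) exactly.
u_star, beta = hlrf(lambda u: 3.0 - u[0] - u[1],
                    lambda u: np.array([-1.0, -1.0]), n=2)
```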
4.4 FLOTRAN-CALREL Interface
Linking CALREL and FLOTRAN is done through the use of three user-provided routines. The first routine, UGFUN,
defines the limit-state function(s). In some instances, the user may choose to provide analytical derivatives of the
limit-state function(s) with respect to the basic random variables. In that case, these analytical derivatives are coded
in the second routine, UDGX. The third routine, UDD, is used to provide user-specified probability distributions
that are not available in the CALREL distribution library. However, since the random variables considered here
follow either normal, lognormal, or uniform distributions, which are already available in the CALREL library, the
UDD routine is not used in this work.
The interface between CALREL and FLOTRAN is achieved by treating the numerical transport code as a subroutine
that is called from the UGFUN subroutine of CALREL. The combined model is used to calculate the probability of
exceeding a certain target concentration, at the location and time specified by the user. By having a range of target
concentration values, a cumulative distribution function (CDF) of the failure probability can be plotted, which shows
the probability of having a concentration that is less than or equal to the corresponding target concentration. Using the
finite difference method, one can derive the probability density function (PDF) by differentiating the CDF for the
same range of target concentration values. An alternative way of deriving the PDF is by plotting the sensitivity values
of the probability of failure with respect to the deterministic parameter (which is the target concentration in this case)
with an opposite sign, since this is equivalent to the slope of the CDF at that value of the target concentration.
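The CDF-differencing step described above amounts to ordinary numerical differentiation. A sketch, using synthetic CDF values from a lognormal distribution (not FLOTRAN output) so the recovered PDF can be checked against the analytic density:

```python
import numpy as np
from math import erf, log, sqrt

def pdf_from_cdf(c_targets, cdf_vals):
    """Central-difference the CDF over the range of target
    concentrations to recover the PDF at the interior points."""
    c = np.asarray(c_targets, float)
    F = np.asarray(cdf_vals, float)
    return (F[2:] - F[:-2]) / (c[2:] - c[:-2])

# Usage: synthetic lognormal CDF (median 1, log-standard-deviation 0.5).
sigma = 0.5
c = np.linspace(0.5, 3.0, 251)
F = np.array([0.5 * (1.0 + erf(log(ci) / (sigma * sqrt(2.0)))) for ci in c])
pdf = pdf_from_cdf(c, F)   # PDF at the interior points c[1:-1]
```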
At each iteration in the process of determining the design point, UGFUN calls FLOTRAN to evaluate the limit-state
function g(x) and its gradient for a given realization of the discretized spatial random field x. The gradient which is
required by the nonlinear optimization scheme is approximated using the central finite difference method. Thus, the
i-th element of the gradient vector is approximated by

∂g/∂x_i ≈ [g(x_1, …, x_i + Δx_i, …, x_n) − g(x_1, …, x_i − Δx_i, …, x_n)] / (2 Δx_i)    (4.2)

in which the step size Δx_i is chosen as a small fraction of the standard deviation of each random variable x_i. The
combined FLOTRAN-CALREL code provides the probability of failure, reliability index, and sensitivity information.
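Equation (4.2) is a standard central-difference approximation and can be sketched directly; the limit-state function below is a made-up algebraic stand-in for a FLOTRAN evaluation.

```python
import numpy as np

def grad_central(g, x, sigma, frac=1e-3):
    """Central-difference gradient of g, with each step Delta x_i a
    small fraction of that variable's standard deviation (Eq. 4.2)."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        dx = frac * sigma[i]
        xp, xm = x.copy(), x.copy()
        xp[i] += dx
        xm[i] -= dx
        grad[i] = (g(xp) - g(xm)) / (2.0 * dx)
    return grad

# Usage on an algebraic stand-in with a known gradient.
g = lambda x: 2.0 - x[0] * x[1]            # gradient is (-x2, -x1)
grad = grad_central(g, [1.0, 3.0], sigma=[0.5, 0.5])
```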
Figure 4.1 illustrates the steps involved in this phase. The general layout of the flow chart is essentially of the same
structure as the HPS-PROBAN flow chart, except for the step where spatial random fields are discretized. This is
explained in the following sections.
4.5 Spatial Random Fields
In the following numerical probabilistic analyses, the hydraulic conductivity is modeled as a spatial random field,
w(s). It is assumed that the statistical information available on the aquifer property consists of the pointwise (or
marginal) probability distribution, F_w(w), and the spatial correlation coefficient function, ρ_ww(s_1, s_2) (i.e., second-
order joint moments). Thus, it is a state of incomplete probability information which is made complete by assuming
that the transformed random field, V(s) = Φ⁻¹[F_w(w(s))], is Gaussian with zero mean, unit variance, and spatial
correlation.
This probabilistic model is the random field version of the Nataf model defined in Section 2 for multiple random
variables. The spatial correlation structure of the hydraulic conductivity is considered to be of the exponential type
ρ_ww(h) = exp(−|h| / λ)    (4.3)

where |h| = ||s_1 − s_2|| is the lag (separation) distance, and λ denotes the correlation length. This formulation
assumes statistical isotropy, i.e., the correlation coefficient function depends only on the distance between spatial
[Flow chart: input uncertainty information (marginal probability density functions; correlation structure and
correlation scale) defines a vector of random variables, X; the spatial random field is discretized; the numerical
transport code (FLOTRAN) and the reliability shell (CALREL) evaluate the limit-state function formulation; the
outputs, used to assess groundwater contamination risk, are the probability of exceeding the target concentration
level and the relative "importance" of the basic random variables.]
Figure 4.1 Flow chart of the numerical probabilistic transport model. (Reprinted from Probabilistic modeling of aquifer
heterogeneity using reliability methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson, with
permission of Advances in Water Resources.)
locations and is independent of direction. The correlation length is a measure of the rate of random fluctuations of a
random field. It corresponds to the distance over which the correlation coefficient drops from 1 to e⁻¹ ≈ 0.368. The
exponential correlation function is used here since it has been shown to adequately describe the spatial correlation of
the log-conductivity data (Bakr et al., 1978; Jang et al., 1994).
To perform a finite element reliability analysis of a system, it is required to discretize the spatial random field
considered. This means that the spatial random field will be represented by an equivalent set of random variables.
The random field discretization method employed in this study is the midpoint discretization method. Here, the
random field values are defined at a finite set of discrete points, which corresponds to the midpoint (or centroid) of
each finite element. Then the random field is represented in terms of a vector of random variables, X, the elements of
which are correlated (Der Kiureghian and Ke, 1988). The correlation coefficient matrix is obtained directly from the
correlation coefficient function ρ_ww(h) and the separation distances between the centroids of the finite elements.
Other types of correlation functions used in groundwater flow and transport problems are described by Cawlfield and
Sitar (1987). Although only one correlation function is considered in this work, other types of correlation structure
are provided for the sake of completeness. The forms are given for statistically isotropic functions:
• The linear correlation function is given by
ρ_ww(h) = 1 − |h| / λ  for |h| ≤ λ,  and  ρ_ww(h) = 0  for |h| > λ

The squared exponential autocorrelation function is of the form

ρ_ww(h) = exp[−(|h| / λ)²]    (4.7)
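Building the correlation coefficient matrix for the midpoint method from the exponential model of Eq. (4.3) is mechanical. A sketch follows; the grid, block size, and correlation length are illustrative, and note that the full Nataf model would also adjust the correlation in the underlying Gaussian space, a correction omitted here.

```python
import numpy as np

def exp_corr_matrix(centroids, corr_len):
    """Entry (i, j) is exp(-|h|/lambda), with |h| the distance between
    the centroids of elements i and j (Eq. 4.3)."""
    c = np.asarray(centroids, float)
    h = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    return np.exp(-h / corr_len)

# Usage: centroids of a 3 x 2 grid of 6 m blocks, 18 m correlation length.
xs, ys = np.meshgrid(np.arange(3) * 6.0 + 3.0, np.arange(2) * 6.0 + 3.0)
cent = np.column_stack([xs.ravel(), ys.ravel()])
R = exp_corr_matrix(cent, 18.0)
# One correlated standard-normal realization via Cholesky; pushing it
# through the lognormal marginal would give a conductivity realization.
z = np.linalg.cholesky(R) @ np.random.default_rng(0).standard_normal(6)
```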
4.6 Applications
In Section 3, we presented the application of FORM and SORM to simple analytical probabilistic groundwater
models. The developed models allowed for the assessment of the extent of groundwater contamination from a
continuous leaking source at the screening level, accounting for aquifer-, chemical-, and source-related parameter
uncertainty. Reliability methods were found to be very efficient in these types of analyses.
Jang et al. (1994) applied FORM and SORM methods to probabilistically model contaminant transport in one- and
two-dimensions. They applied their code to estimate the probability of exceeding target concentrations at specific
points, taking into account the spatial random variability of the input parameters, along with their spatial correlation.
They concluded that the probability of failure increases as the correlation scale increases and approaches that
obtained using the analytical transport model, which represents the transport in a perfectly correlated medium. The
sensitivity analysis showed that the solution is most sensitive to hydraulic conductivity near both the source and target
regions. They also found that the second-order reliability method, SORM, can handle large variance of the input
parameters; something that is not possible with many of the currently available techniques.
The work presented here extends the work in the sense that the formulation is similar, but more emphasis is given to
issues like the analysis of failure at several wells in the aquifer, the use of reliability methods in plume containment
and remediation, and the effect of the presence of a lower conductivity lens on the estimated reliability.
4.6.1 Symmetric and Asymmetric Target Wells
The limit-state function is given by
g(X) = C_target − C(X)    (4.8)

where C_target is the target concentration at the well and C(X) is the simulated concentration taking into account the
effect of parameter uncertainty. Figure 4.2 illustrates the case study setup. A non-reactive chemical leaking from the
waste source is carried downgradient due to advection and dispersion to downgradient observation points, located
symmetrically and asymmetrically as shown. The study area is 66.0 m by 42.0 m. The domain is divided into 11 × 7
grid blocks. Input parameters used in this case study are listed in Table 4.1.
The hydraulic conductivity is assumed to follow a lognormal distribution, with a mean of 5.0 × 10⁻³ cm/sec and a
coefficient of variation of 1.0. The spatial correlation structure is assumed to follow an exponential pattern, with a
correlation length of 18 m. Left and right constant head boundaries are set so that the resulting hydraulic gradient is
0.01.
The midpoint method is used to discretize the spatial random field into random variables. The latter are essentially the
grid block hydraulic conductivities. Therefore, the number of random variables to be considered is 11 × 7 = 77. The
target concentration level is taken to be 2.0 mg/L.
The design point, which is the realization of hydraulic conductivity corresponding to the most likely failure scenario
at the observation well, is shown for the symmetric observation well in Figure 4.3(a). It is clear that the grid block hydraulic
[Figure: plan view with no-flow boundaries at top and bottom, constant head boundaries at left and right, the
contaminant source on the left, and symmetric and asymmetric target nodes downgradient.]
Figure 4.2 Setup of the numerical case study.
Table 4.1 Input Parameters for the Numerical Case (Reprinted from Numerical stochastic analysis of groundwater
contaminant transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient, and J. P. Conte with
permission of Journal of Contaminant Hydrology.)

Variable                                 Units     Value

Deterministic Data
grid dimension, Δx = Δy                  m         6.0
aquifer thickness, H                     m         6.0
aquifer porosity, θ                      m³/m³     0.4
longitudinal dispersivity, αx            m         0.2
transverse dispersivity, αy              m         0.02
simulation time, t                       day       350.0
source concentration, C0                 mg/L      10.0

Random Field Data
hydraulic conductivity, K                cm/sec    LN(5.0×10⁻³, 5.0×10⁻³)
correlation scale, λ                     m         18.0

LN(mean, standard deviation): Lognormal
conductivities at the design point attain their maximum along the path lines connecting the source to the observation
point.
Gamma sensitivities are shown in Figure 4.3(b) for the symmetric observation well. These show the scaled and
normalized sensitivities of the reliability index with respect to equally likely changes in grid block hydraulic
conductivities at the design point. Sensitivities are higher with respect to the hydraulic conductivity of the grid blocks
at the source, the observation well, and along the streamlines connecting the two. Symmetry of the design point and
sensitivities for the symmetric case are intuitive.
The design point and gamma sensitivities for the asymmetric observation well are shown in Figure 4.4(a) and Figure
4.4(b), respectively. In this case, the hydraulic conductivity at the design point is also at its maximum along the
contaminant travel path. As for the gamma sensitivities, these are higher at the source, observation well, and along the
Figure 4.3 Results for the symmetric well: (a) design point, (b) gamma sensitivity, (c) delta sensitivity. (Reprinted from
Numerical stochastic analysis of groundwater contaminant transport and plume containment, 1996, by M. M.
Hamed, P. B. Bedient, and J. P. Conte with permission of Journal of Contaminant Hydrology.)
Figure 4.4 Results for the asymmetric well: (a) design point, (b) gamma sensitivity, (c) delta sensitivity. (Reprinted
from Numerical stochastic analysis of groundwater contaminant transport and plume containment, 1996, by
M. M. Hamed, P. B. Bedient, and J. P. Conte with permission of Journal of Contaminant Hydrology.)
connecting region. Since the observation well is asymmetric with the source, the design point and gamma sensitivities
are also asymmetric.
Sensitivity information is very useful in designing future sampling at the site, in the sense that samples should be
collected from spatial locations where the sensitivity of the probabilistic event is the highest. This will help reduce the
uncertainty regarding the probability of exceeding the pre-determined concentration levels at the target observation
well.
Note that hydraulic conductivities at the design point in the asymmetric case are higher than that for the symmetric
case. This is because in the symmetric case, the target concentration at the observation well is closer to the mean
concentration. The mean contaminant concentration at the asymmetric well is 0.17 mg/L, as opposed to 1.93 mg/L
for the symmetric well. Therefore, in order for the contaminant concentration at the well to exceed the target level of
2.0 mg/L, hydraulic conductivities have to be considerably higher than the mean. This is shown in Figure 4.4(a). The
sign of the gamma sensitivities indicates whether the relationship between the reliability index and an increase in the
design point is of direct or inverse proportionality. The previous findings are consistent with that of Jang et al. (1994).
Other valuable information that CALREL provides includes the delta parametric sensitivity information. Recall that
the delta sensitivities are the scaled and normalized change in the reliability index due to change in the mean values
of the random variables:

δ_i = σ_i (∂β / ∂μ_i)    (4.9)

where μ_i and σ_i are the mean and standard deviation of the basic random variable X_i, respectively.
Figure 4.3(c) shows the delta sensitivities for the symmetric case. Negative sensitivities along the contaminant travel
path indicate a decrease in the reliability index as the mean value of the hydraulic conductivity in this region
increases. This result is intuitive. An increase in the hydraulic conductivity in the grid blocks that exhibit negative
delta sensitivity will cause more contaminant to reach the target observation well within the specified simulation time.
Therefore, the probability of exceeding the target concentration level will increase, and the reliability index will
decrease. The reverse is also true. Delta sensitivities forthe asymmetric observation well are shown in Figure 4.4(c).
The same discussion applies to this case.
Table 4.2 presents a comparison between FORM, SORM, and Monte Carlo simulation results for the symmetric
observation well for target concentration values of 2.0 and 5.0 mg/L. The stopping criterion for the Monte Carlo
simulation method was a coefficient of variation of 5% of the simulated probability of failure. The failure probability
at 2.0 mg/L is greater than that at 5.0 mg/L, as expected, since at a given point there is a higher probability of
exceeding a lower target concentration than a higher one. Note also the inverse relationship between P_F and β:
because the reliability index is a measure of the reliability of the system, the higher the failure probability, the lower
the reliability index, and vice versa. This is shown in the Table.
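The inverse relationship follows from P_F = Φ(−β), which is easy to evaluate with the error function; a sketch, checked against the Table 4.2 values:

```python
from math import erf, sqrt

def pf_from_beta(beta):
    """P_F = Phi(-beta), with Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))

# Usage: the FORM row for Ct = 5.0 mg/L in Table 4.2.
pf = pf_from_beta(0.978)   # approximately 0.164
```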
The comparison shows that SORM results better match those of Monte Carlo simulation method than FORM results.
FORM results, however, were obtained with much less computational effort. The Monte Carlo simulation method
Table 4.2 Comparison between the Analysis Methods

Ct = 2.0 mg/L
Method    P_F          β        CPU Time§ (min)
FORM      0.274        0.560    7.04
SORM†     0.084        1.380    16.23
MCS       0.168        0.962    30.58

Ct = 5.0 mg/L
Method    P_F          β        CPU Time§ (min)
FORM      0.164        0.978    9.54
SORM†     1.10×10⁻²    2.289    21.22
MCS       4.38×10⁻²    1.708    977.10

§ on a SUN SPARCstation 2
† Improved Breitung
required a greater number of simulations as the failure probability decreased, as expected. This trend continues as the
failure event departs from the bulk of the distribution, where an increasingly large number of simulations is
required. This illustrates the computational burden one might expect to encounter when using the Monte Carlo
simulation method for simulating very small probability events.
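The burden can be quantified: for a simple Monte Carlo estimator of a failure probability p, the coefficient of variation of the estimate is sqrt((1 − p)/(Np)), so the required number of simulations grows roughly as 1/p. A sketch, using the 5% coefficient-of-variation target from the stopping criterion above:

```python
from math import ceil

def mc_samples_needed(p, cov):
    """N such that the Monte Carlo estimate of a failure probability p
    has coefficient of variation cov: cov = sqrt((1 - p) / (N p))."""
    return ceil((1.0 - p) / (p * cov * cov))

# Usage: at a 5% coefficient of variation the cost explodes as the
# failure probability shrinks.
n_moderate = mc_samples_needed(0.168, 0.05)
n_small = mc_samples_needed(4.38e-2, 0.05)
n_rare = mc_samples_needed(1e-4, 0.05)
```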
The effect of correlation scale, λ, on the probability of failure is shown in Figure 4.5 for the case where the target
concentration is 2.0 mg/L. Correlation scales ranging from 5 to 30 meters were studied, and the effect on the FORM
probability of failure is reported. The figure shows that the probability of exceeding the target concentration increases
as the correlation scale increases. In the limit, as the correlation scale approaches the problem domain length, the
probability of failure would approach that obtained using the analytical model.
4.6.2 Effect of Material Heterogeneity
The presence of regions with hydraulic conductivities that are orders of magnitude lower than the prevailing
formation is of prime importance in the analysis of transport through natural porous formations. Many sites exhibit
such localized lenses with permeability characteristics much different from those of the predominant aquifer material.
In the following case studies, hydraulic conductivity is assumed to follow a lognormal distribution. The mean value
changes from one region to the other in the same case study, as well as from one case study to the next. Nevertheless,
the coefficient of variation of the distribution is maintained at 1.0. The cases are numbered MH-1 and MH-2, where
the notation MH stands for "Material Heterogeneity." It should be emphasized that in these case studies the location
and extent of the macro-heterogeneity are assumed to be known. Although this information may not be available with
certainty at many field sites, the modeling exercise is useful in illustrating the importance of gamma sensitivities for
the probabilistic modeling of contaminant transport in the subsurface.
4.6.2.1 Case MH-1
First, we look at the effect of the presence of a block of lower hydraulic conductivity. Table 4.1 lists input data for
the case study. Figure 4.6 shows this case study, where a region of 15 grid blocks is assumed to exist with a mean
hydraulic conductivity of 5 × 10⁻⁴ cm/sec, which is an order of magnitude less than that of the remaining grid blocks
(with a mean hydraulic conductivity of 5 × 10⁻³ cm/sec).
The probability of exceeding a target concentration of 5 × 10⁻⁴ mg/L at the target node is studied. The reason for
choosing such a low target concentration is that the average concentration when the low conductivity region
is present is much less than the average concentration without it. In fact, the mean concentration at the target node
is 2.45 × 10⁻⁴ mg/L. Hence, it is more convenient to assign a target concentration in this case that is of the same order
of magnitude as the mean. The FORM probability of failure was calculated to be 0.2237, with a reliability index of
0.76. The design point, gamma, and delta sensitivities are shown in Figures 4.7, 4.8, and 4.9, respectively.
Figure 4.5 Effect of correlation scale on the first-order probability of failure (Ct=2.0 mg/L).
[Figure: domain with no-flow boundaries at top and bottom, constant head boundaries at left and right, the
contaminant source, a low-conductivity block (K = 0.0005 cm/sec) within the K = 0.005 cm/sec formation, and the
observation well.]
Figure 4.6 Setup of case MH-1. (Reprinted from Probabilistic modeling of aquifer heterogeneity using reliability
methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances in Water
Resources.)
Figure 4.7 Design point for case MH-1 (cm/sec). (Reprinted from Probabilistic modeling of aquifer heterogeneity using
reliability methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances in
Water Resources.)
Figure 4.8 Gamma sensitivities for case MH-1. (Reprinted from Probabilistic modeling of aquifer heterogeneity using
reliability methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances in
Water Resources.)
Figure 4.9 Delta sensitivities for case MH-1. (Reprinted from Probabilistic modeling of aquifer heterogeneity using
reliability methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances in
Water Resources.)
Figure 4.10, which illustrates the contaminant concentration for this case study, emphasizes the preferential
pathways that the contaminant takes in order to circumvent the tight region. In other words, due to the reduction in
the hydraulic conductivity in the indicated region, groundwater selects the path of least resistance, thus avoiding the
high resistance from the low conductivity region.
Therefore, it is intuitive for gamma sensitivities to attain their maximum value above the lower conductivity region,
as shown in Figure 4.8. The ease with which the contaminant would preferentially flow around the tight zone is
dependent upon the hydraulic conductivity of the region above it. The negative delta sensitivities in the region
above the tight zone indicate that the higher the hydraulic conductivity, the lower the reliability index, due to the
increased flow.
4.6.2.2 Case MH-2
Next we consider the presence of (elongated) lower conductivity "lenses" in an otherwise highly permeable
medium. Figure 4.11 shows the configuration considered. Lenses of tight formation (mean hydraulic conductivity
of 5.0 × 10⁻⁵ cm/sec) are considered to exist parallel to the mean flow direction as indicated in the figure. Each lens
is one block wide and three blocks long. Hydraulic conductivity for the remainder of the domain has a mean value
of 5.0 × 10⁻³ cm/sec. The coefficient of variation of each random variable is maintained at 1.0. The probability of
exceeding a target concentration of 2.0 mg/L at the observation well is obtained. FORM estimates the failure
probability to be 0.118, with a reliability index of 1.19.
Figure 4.12 illustrates the design point, while Figures 4.13 and 4.14 show the gamma and delta sensitivities,
respectively. In this case too, the contaminant bypasses the lower conductivity regions in an effort to find a more
conductive and less "resistive" pathway. The sensitivities, consequently, illustrate the relative importance of the
conductive regions in this case.
4.6.3 Effect of Mesh Overlay and Discretization
First, we look at the effect of the mesh discretization level on the accuracy of the estimated probability of failure.
There are two meshes that we consider in any numerical reliability problem. The first is the mesh required for the
solution of the transport equation by the finite element model, which we refer to as the numerical mesh. In addition,
we must assign realizations of the random variables (hydraulic conductivity) to the individual grid blocks in the
numerical mesh. This step is achieved by discretizing the spatial random field on a number of grid blocks, which
is done by superimposing another mesh, called the random variables mesh. This concept is illustrated by a
numerical example.
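The overlay of a coarse random variables mesh on a finer numerical mesh can be sketched in Python. This is an illustrative sketch, not code from the report: the function name and the example conductivity values are hypothetical, and it assumes square meshes whose sizes divide evenly.

```python
import numpy as np

def rv_index_map(n_num, n_rv):
    """Map each block of an n_num x n_num numerical mesh to the cell of a
    coarser n_rv x n_rv random-variables mesh that contains it. Assumes
    both meshes cover the same square domain and n_num is a multiple
    of n_rv."""
    factor = n_num // n_rv
    idx = np.zeros((n_num, n_num), dtype=int)
    for i in range(n_num):
        for j in range(n_num):
            idx[i, j] = (i // factor) * n_rv + (j // factor)
    return idx

# Example: a 12 x 12 numerical mesh overlaid by a 2 x 2 random-variables
# mesh gives 4 random variables; every 6 x 6 patch of numerical grid
# blocks then shares one hydraulic-conductivity value.
idx = rv_index_map(12, 2)
k_values = np.array([5.0e-3, 4.2e-3, 6.1e-3, 5.5e-3])  # one K per RV cell
k_field = k_values[idx]                                # K on the fine mesh
```

The mapping shows why a coarser random variables mesh reduces the dimension of the reliability problem while the transport solve still runs on the fine mesh.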
Figure 4.10 Contaminant plume for case MH-1 (mg/L). (Reprinted from Probabilistic modeling of aquifer heterogeneity
using reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances
in Water Resources.)
Figure 4.11 Setup of case MH-2. (Reprinted from Probabilistic modeling of aquifer heterogeneity using reliable methods,
1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances in Water Resources.)
Figure 4.12 Design point for case MH-2 (cm/sec). (Reprinted from Probabilistic modeling of aquifer heterogeneity using
reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances in
Water Resources.)
Figure 4.13 Gamma sensitivities for case MH-2. (Reprinted from Probabilistic modeling of aquifer heterogeneity using
reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances in
Water Resources.)
Figure 4.14 Delta sensitivities for case MH-2. (Reprinted from Probabilistic modeling of aquifer heterogeneity using
reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances in
Water Resources.)
Figure 4.15 illustrates the setup for the case study, while Table 4.3 lists the problem configuration. In this case, we
have a domain of 36.0 m on a side with the indicated boundary conditions. Here, the contaminant behaves
conservatively, although chemical reactions could also be considered. In order to solve the contaminant transport
equations using FLOTRAN, we discretize the domain into 12 x 12 grid blocks. This numerical grid is shown in
Figure 4.15(a). Now, in order to take into account the uncertainty in the aquifer material, we describe the hydraulic
[Figure 4.15, panels (a) through (f): the observation well, contaminant source, constant head and no-flow
boundaries, and the borders of the numerical and random variables meshes are indicated.]
Figure 4.15 Setup of the mesh refinement case study. (Reprinted from Probabilistic modeling of aquifer heterogeneity
using reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances
in Water Resources.)
Table 4.3 Input Parameters for the Mesh Discretization Case Study (Reprinted from Probabilistic modeling of aquifer
heterogeneity using reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission
of Advances in Water Resources.)
Variable                          Units     Value

Deterministic Data
grid block dimension, Δx = Δy     m         3.0
aquifer thickness, H              m         6.0
aquifer porosity, θ               m3/m3     0.35
longitudinal dispersivity, αx     m         3.0
transverse dispersivity, αy       m         0.6
hydraulic gradient, i             m/m       0.02
simulation time, t                day       10.0
source concentration, C0          mg/L      10.0
target concentration, Ct          mg/L      3.0

Random Field Data
hydraulic conductivity, K         cm/sec    LN(5.0 x 10^-2, ...)
correlation scale, λ              m         18.0

LN(mean, standard deviation): Lognormal
conductivity by means of a spatial random field, using the exponential correlation function. We first look at the case
where the same mesh is used for finite element and random field discretization (see Figure 4.15(f)).
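The discretization of the spatial random field can be sketched in Python. This is a hedged illustration, not the report's code: the report maps the correlated non-normal variables through the transformation used by FORM, whereas the sketch below simply draws one correlated lognormal realization via a Cholesky factor, applying the exponential correlation to the underlying normal variables (a common modeling choice, assumed here). Block size and correlation scale follow Table 4.3; taking the standard deviation equal to the mean (CoV = 1.0) is an assumption.

```python
import numpy as np

# Centers of the 12 x 12 numerical grid blocks (3.0 m blocks, Table 4.3).
dx, n = 3.0, 12
xc = (np.arange(n) + 0.5) * dx
X, Y = np.meshgrid(xc, xc)
pts = np.column_stack([X.ravel(), Y.ravel()])     # 144 block centers

# Exponential correlation function rho(h) = exp(-h / lam), lam = 18.0 m.
lam = 18.0
h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
rho = np.exp(-h / lam)

# Draw one correlated lognormal conductivity realization: correlate
# standard normals with a Cholesky factor, then map to a lognormal with
# the target mean and standard deviation (assumed CoV = 1.0).
mean_K = sd_K = 5.0e-2                            # cm/sec
s2 = np.log(1.0 + (sd_K / mean_K) ** 2)           # log-space variance
mu = np.log(mean_K) - 0.5 * s2
L = np.linalg.cholesky(rho + 1e-10 * np.eye(n * n))
z = L @ np.random.default_rng(0).standard_normal(n * n)
K = np.exp(mu + np.sqrt(s2) * z)                  # one K value per grid block
```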
The design point, gamma sensitivity, and delta sensitivity are shown in Figures 4.16, 4.17, and 4.18, respectively.
The same patterns observed in the case study of Section 4.6.1 were also obtained in this case, where the design point
and the sensitivities are highest along the contaminant travel path.
We next experiment with using two different levels of discretization for the numerical and random variables
meshes. In other words, we use a coarse random variables mesh for the assignment of random variables, and a fine
mesh for the solution of the transport equations using the finite element method. Numerous mesh discretization
levels are studied. In all of these cases, the numerical mesh is maintained at 12 x 12.
Figure 4.16 Design point hydraulic conductivity field for 144 random variables case study (cm/sec). Source area is shown by
X's and monitoring well is shown with a circle. (Reprinted from Probabilistic modeling of aquifer heterogeneity
using reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances
in Water Resources.)
Figure 4.17 Gamma sensitivities for 144 random variables case study. (Reprinted from Probabilistic modeling of aquifer
heterogeneity using reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission
of Advances in Water Resources.)
Figure 4.18 Delta sensitivities for 144 variables case study. (Reprinted from Probabilistic modeling of aquifer heterogeneity
using reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances
in Water Resources.)
Figure 4.15(b) illustrates the case of 4 random variables. This means that each of the 36 grid blocks contained in a
single element of the random variables mesh is assigned the same value of hydraulic conductivity. Figures 4.15(c)
through 4.15(f) show the cases of 9, 16, 36, and 144 random variables, respectively. For the 144 variables case, each
grid block in the numerical mesh is assigned a different, yet correlated, value.
Figures 4.19(a), 4.19(b), and 4.19(c) illustrate the effect of the number of random variables on the failure
probability, the reliability index, and the required CPU time on a SUN SPARCstation 2, respectively. It should be noted
that the greater the number of random variables considered, the more accurate the estimates of the failure probability
and reliability index. Nevertheless, the resulting computational burden increases considerably, as indicated by the
roughly 35-fold increase in CPU time when the number of random variables is increased from 4 to 144. The
trade-off between computational time and accuracy thus becomes a crucial issue.
Figure 4.19(d) shows the percent difference between the failure probability estimated in each case and the "true"
solution, taken to be the solution obtained with 144 random variables. For the case of 16 random variables, the
error in failure probability is only 5%, while the saving in CPU time is 88%. Therefore, it is clear that using a
random variables mesh that is coarser than the numerical mesh can result in considerable savings in computational
effort without jeopardizing numerical accuracy. In this specific case study, fairly accurate results were obtained
with 16 random variables, where each grid block of the random variables mesh is 3 cells on a side; i.e.,
Δx = Δy = one-half the correlation scale. This finding is not general; it depends on the choice of correlation type
and scale, along with the problem setup. However, the outlined approach can be used to identify the level of
discretization that combines accuracy with computational efficiency.
Next, we explore the effect that the target concentration has on the probabilistic event. The design point for a target
concentration of 3.0 mg/L was shown in Figure 4.16. The mean concentration at the well is 2.3 mg/L, obtained
when the grid block hydraulic conductivities are at their mean value of 5.0 x 10^-2 cm/sec. Hence, that figure shows
the design point for a case where the target concentration is greater than the mean concentration. The design point
for a target concentration of 1.0 mg/L, which is less than the mean value, is shown in Figure 4.20.
It is interesting to see the difference in the design point for the two target concentration levels. When the target
concentration is 1.0 mg/L, which is less than the mean concentration, the realization of hydraulic conductivity that
produces the most likely failure is lower than the mean hydraulic conductivity along the stream tubes connecting
the source to the well. On the other hand, when the target concentration at the observation well is 3.0 mg/L, which
is greater than the mean concentration, the realization producing the most likely failure is greater than the mean
hydraulic conductivity along those stream tubes.
Table 4.4 shows a comparison of the results for the two target concentration levels. The Monte Carlo results were
obtained by the following procedure. The prescribed probability density function of the hydraulic conductivity is
randomly sampled, so that the correlation structure is satisfied. With this realization, the transport model is run and
Figure 4.19 Effect of number of random variables on the probabilistic event. (Reprinted from Probabilistic modeling of
aquifer heterogeneity using reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with
permission of Advances in Water Resources.)
Figure 4.20 Design point hydraulic conductivity field for 144 random variables case study (cm/sec) for a target
concentration =1.0 mg/L. (Reprinted from Probabilistic modeling of aquifer heterogeneity using reliable
methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances in Water
Resources.)
Table 4.4 Comparison between the Analysis Methods (Reprinted from Probabilistic modeling of aquifer heterogeneity
using reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances
in Water Resources.)
Ct = 1.0 mg/L
Method    PF       β        CPU Time§ (min)
FORM      0.473    0.068    19.2
SORM†     0.421    0.203    25.5
MCS       0.412    0.222    28.6

Ct = 3.0 mg/L
Method    PF       β        CPU Time§ (min)
FORM      0.205    0.826    24.3
SORM†     0.151    1.031    35.0
MCS       0.152    1.029    72.1

§ on a SUN SPARCstation 2
† Improved Breitung
the g-function is obtained. At this point, another random sample is drawn, the transport model is run again,
and another result is obtained. Monte Carlo simulations are performed until the coefficient of variation of the
failure probability estimate drops below 5%. The numbers of Monte Carlo simulations needed for the 1.0 and
3.0 mg/L cases were 1000 and 2500, respectively.
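The stopping rule above can be sketched in Python. This is an illustrative sketch, not the report's code: `run_model` stands in for sampling a correlated conductivity field and running the transport model, and below it is replaced by a toy lognormal well concentration just to exercise the loop. The coefficient of variation of the failure probability estimator is taken as sqrt((1 - pf) / (N * pf)).

```python
import numpy as np

def mcs_failure_probability(run_model, target, cov_tol=0.05, batch=100,
                            max_sims=100_000, seed=0):
    """Monte Carlo estimate of the failure probability, stopping once the
    coefficient of variation of the estimate drops below cov_tol.
    run_model(rng) returns the concentration at the observation well for
    one random realization."""
    rng = np.random.default_rng(seed)
    failures = sims = 0
    while sims < max_sims:
        for _ in range(batch):
            failures += run_model(rng) > target   # failure event
        sims += batch
        pf = failures / sims
        if pf > 0:
            cov = np.sqrt((1.0 - pf) / (sims * pf))  # CoV of the estimator
            if cov < cov_tol:
                break
    return pf, sims

# Toy stand-in model: lognormal well concentration (illustrative only).
pf, n_sims = mcs_failure_probability(
    lambda rng: rng.lognormal(mean=0.5, sigma=0.6), target=3.0)
```

Because the CoV of the estimator grows as the failure probability shrinks, rarer events require many more simulations to reach the 5% tolerance, which is the motivation for FORM/SORM in the low-probability regime.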
The comparison indicates that the SORM results matched those of the Monte Carlo simulation method very
closely, more closely than the FORM results did. Nevertheless, the FORM results were obtained at less
computational effort than those of SORM. The selection of the approximation method, therefore, becomes a
trade-off between computational efficiency and accuracy. The time saving is not very dramatic in this case, since
the failure threshold lies in the bulk of the distribution; the saving in computational time increases with decreasing
failure probability. Moreover, the reliability methods provide sensitivity information at no additional
computational burden, which the Monte Carlo simulation method cannot do.
4.6.4 System Reliability Analysis
In many groundwater contamination applications, we are interested in studying the reliability of more than one
component in the solution domain. For example, we may be interested in the probability that the contaminant will
exceed a predetermined level at any point along a property boundary. The compliance with the regulatory standards
at more than one receptor well in the aquifer of interest is another example. In these situations, the problem is
formulated in a system reliability framework, in which several limit-state functions are considered, one for every
component of interest.
As an example, consider the case where a contaminant source leaks chemicals into an underlying groundwater
aquifer. Assume that there exists a number of downgradient points of human exposure (i.e., wells). In this case, we
have several limit-state functions, one at each well. The formulation is similar to (2.34).
In general, a system can be idealized as a series system, a parallel system, or a combination of the previous two. The
problem of groundwater contamination is posed in a series system format since failure of any of the components
constitutes failure of the system. In other words, in a multiple well compliance example, if the concentration at any
of the wells exceeds the predetermined target value, the system has failed.
As an application example, consider the probability of exceeding 2.0 mg/L at any of the three observation wells
given in Figure 4.21. That is, we require the probability of failure to meet the 2.0 mg/L threshold after the 350 day
simulation time. Table 4.5 lists the input data for this case study. In Table 4.6, the system reliability results are
shown for first-order uni-modal, bi-modal, and relaxed bi-modal bounds. As expected for a series system, the
system failure probability obtained is higher than the largest component failure probability (from FORM,
PF,1 = 0.385; PF,2 = PF,3 = 0.224). The bi-modal bounds are narrower than the relaxed bi-modal bounds, which
are, in turn, tighter than the first-order uni-modal bounds.
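For a series system, one common form of the first-order uni-modal bounds can be sketched as follows; the bi-modal (and relaxed bi-modal) bounds additionally require the pairwise joint failure probabilities and are omitted here. This is an illustrative Python sketch, not the report's code; the component probabilities are the FORM values quoted above, and the computed upper bound differs slightly from the tabulated value because the components are rounded.

```python
import numpy as np

def unimodal_series_bounds(pf_components):
    """First-order uni-modal bounds on the failure probability of a
    series system: max(pf_i) <= PF(system) <= 1 - prod(1 - pf_i)."""
    pf = np.asarray(pf_components, dtype=float)
    return pf.max(), 1.0 - np.prod(1.0 - pf)

# Component FORM failure probabilities for the 3-well case study.
lower, upper = unimodal_series_bounds([0.385, 0.224, 0.224])
# lower = 0.385; upper is about 0.630, close to the tabulated 0.623.
```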
The correlation coefficient between the failure event at wells 1 and 2 is the same as that between wells 1 and 3 and
equals 0.726. This means that failure to meet the target concentration of 2.0 mg/L at well 1 is closely related to the
failure at well 2. A careful analysis of the physics of the problem explains this result. In order for the contaminant
to exceed the target concentration at well 1, the grid block hydraulic conductivities at the design point should be
high enough to allow easier and less resistive paths for the contaminant to reach the target well at the designated
threshold value. The high conductivity realizations also allow for more contaminant to reach well 2, hence resulting
in a large correlation coefficient between the probability of exceeding the target level at wells 1 and 2. The
correlation coefficient between the failure events at wells 2 and 3 is 0.217, which is less than the correlation
between wells 1 and 2. This is explained by the fact that wells 2 and 3 are separated by a greater distance than wells
1 and 2; hence their failure events are less strongly correlated.
Additional information gained from the system reliability analysis is the sensitivity of the system failure probability
with respect to changes in the distribution parameters of the grid block conductivities. Figure 4.22(a) displays the
sensitivities of the upper bi-modal bound on PF(system) with respect to changes in the local mean value of the hydraulic
conductivities. The results indicate that the greater the mean hydraulic conductivity of the region along the flow
paths, the greater the system failure probability, a behavior easily explained by the earlier discussion.
[Figure 4.21: the contaminant source, no-flow boundaries, three observation wells, and constant head boundaries
at H = 9.0 m and H = 8.34 m are indicated.]
Figure 4.21 Setup of the system reliability case study. (Reprinted from Probabilistic modeling of aquifer heterogeneity
using reliable methods, 1996, by M. M. Hamed, P. B. Bedient, and C. N. Dawson with permission of Advances
in Water Resources.)
Table 4.5 Input Parameters for the System Reliability Case Study (Reprinted from Numerical stochastic analysis of
groundwater contaminant transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient, and J. P.
Conte with permission of Journal of Contaminant Hydrology.)
Variable                          Units     Value

Deterministic Data
grid dimension, Δx = Δy           m         6.0
aquifer thickness, H              m         6.0
longitudinal dispersivity, αx     m         0.2
transverse dispersivity, αy       m         0.02
simulation time, t                day       350.0
source concentration, C0          mg/L      10.0
target concentration, Ct          mg/L      2.0

Random Field Data
hydraulic conductivity, K         cm/sec    LN(5.0 x 10^-3, 5.0 x 10^-3)
correlation scale, λ              m         18.0
Table 4.6 Results of the System Reliability Analysis for the 3-well Case Study (Reprinted from Numerical stochastic
analysis ofgroundwater contaminant transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient,
and J. P. Conte with permission of Journal of Contaminant Hydrology.)
Method                                 Result                  Value

First-order Uni-modal Bounds           Bounds on PF(system)    0.385 < PF(system) < 0.623
                                       Bounds on β(system)     0.291 > β(system) > -0.331

First-order Relaxed Bi-modal Bounds    Bounds on PF(system)    0.385 < PF(system) < 0.543
                                       Bounds on β(system)     0.291 > β(system) > -0.109

First-order Bi-modal Bounds            Bounds on PF(system)    0.428 < PF(system)
                                       Bounds on β(system)     β(system) > -0.075
Next, we look at the series system reliability in which only wells 2 and 3 are included. In this case, the system failure
probability bounds are given in Table 4.7. The system failure probability in this case is less than in the case with
three wells, as expected. Figure 4.22(b) depicts the sensitivities of the upper bi-modal bounds of system failure
probability with respect to changing the mean value of grid block conductivities for the 2-well case. The pattern in
which the sensitivities behave is interesting. Mean hydraulic conductivities along the branching path leading to both
wells 2 and 3 have the highest impact on the system failure probability. The negative sign indicates an inverse
relationship between local mean conductivity and system failure probability.
When a lower conductivity lens is considered to exist between the two wells, oriented parallel to the mean
groundwater flow direction (Figure 4.23), the first-order bi-modal bound on system failure probability changes
from 0.377 to 0.369. In other words, the presence of the lower conductivity lens affects the system reliability
only slightly. However, the sensitivity of the system failure probability with respect to the local mean hydraulic
conductivity in this case differs considerably from the case without the lens, as shown in Figure 4.22(c).
4.6.5 Remediation/Containment under Uncertainty
The impact of parameter uncertainty on achieving remediation/containment goals is important. Failure to account
for such uncertainty can dramatically hinder the efficiency of the remediation/containment scheme, with
significant economic ramifications. In this section, we present a reliability formulation to study the effect of the
natural variability of the hydraulic conductivity on achieving a plume containment goal. The limit-state function is
formulated as in (4.8); however, Ct in this context is the remediation/containment threshold, or target concentration,
at a specific well. Failure in this case means failure to prevent the plume from reaching the observation well, or
failure to remediate the plume to the predetermined threshold level.
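The containment limit-state can be sketched as a small Python function. This is an illustrative sketch of the formulation (failure when g < 0), not the report's code; the concentration value is hypothetical and would in practice come from the transport model run under a given conductivity realization and pumping scheme.

```python
def limit_state(c_well, c_target):
    """Limit-state function in the spirit of (4.8): g = Ct - C(well, t).
    g < 0 means failure: the scheme did not keep the well concentration
    below the remediation/containment target."""
    return c_target - c_well

# Hypothetical model output of 1.4 mg/L against the 1.0 mg/L target used
# in the containment case study: g < 0, so the scheme fails.
g = limit_state(c_well=1.4, c_target=1.0)
contained = g >= 0
```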
Figure 4.24 illustrates the problem setup. The aquifer's extent is 66.0 m on a side. A numerical grid of 11 x 11 is
used to discretize the solution domain. The random field mesh used to discretize the spatial random field of
hydraulic conductivity is assumed to coincide with the numerical mesh. That is, the number of random variables is
11 x 11 = 121, comprising the hydraulic conductivity at the center point of each grid block. Table 4.8 lists the input
parameters for this case study. It should be noted that the hydraulic gradient is assumed constant throughout the case
studies. The initial contaminant plume is shown in Figure 4.25.
It is assumed that a property boundary is located as shown in Figure 4.24. A pumping scheme is installed so as to
prevent the plume from escaping into the neighboring property beyond the site boundary within the 30 day
pumping period. The target concentration at the observation well is chosen, arbitrarily, to be 1.0 mg/L. In
realistic applications, the target concentration and well location are chosen according to: (1) the type of
contaminant, (2) land use at the neighboring property, and (3) risk estimation at the receptor well.
The numerical reliability model is used to estimate the probability of failure of the remediation/containment
scheme. The FORM failure probability and reliability index were found to be 0.743 and -0.655, respectively. This
means that if a single, fully penetrating well at the middle of the domain is pumped at a rate of 1.26 L/s (20 gpm)
for 30 days, there is a 74% probability of failing to prevent the plume from reaching the downgradient
observation well at a concentration exceeding the target of 1.0 mg/L. The negative reliability index simply means
that the probability of failure exceeds 0.5, as is clear from the relationship between PF and β given in (2.19).
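The relationship in (2.19) can be checked directly. This Python sketch uses only the standard normal CDF, written with math.erf; the β values are those reported in the text and in case MH-2.

```python
from math import erf, sqrt

def pf_from_beta(beta):
    """Failure probability from the reliability index, PF = Phi(-beta),
    where Phi is the standard normal CDF (relationship (2.19))."""
    return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))

# beta = 0 sits at the median, PF = 0.5; a negative beta gives PF > 0.5.
# The containment case's beta = -0.655 reproduces the reported PF of
# about 0.743, and case MH-2's beta = 1.19 gives PF close to 0.118.
pf = pf_from_beta(-0.655)
```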
Figure 4.22 Sensitivity of the upper bi-modal bound on the system failure probability to the local mean hydraulic conductiv-
ity: (a) for the 3-well case, (b) for the 2-well case, (c) for the 2-well case with a lower conductivity lens. (Reprinted
from Numerical stochastic analysis of groundwater contaminant transport and plume containment, 1996, by M. M.
Hamed, P. B. Bedient, and J. P. Conte with permission of Journal of Contaminant Hydrology.)
Next, the pumping rate is increased to 1.89 and 2.52 L/s (30.0 and 40.0 gpm), and the reliability analysis is repeated
to study the effect of increasing the pumping rate on the failure probability. For the 1.89 L/s (30 gpm) case, the FORM
failure probability and reliability index are 0.41 and 0.227, respectively. Table 4.9 lists FORM, SORM, and Monte
Carlo results for the 2.52 L/s (40.0 gpm) scenario. FORM and SORM results were in good agreement with those of
the Monte Carlo simulation method, and this good agreement was observed for all of the case studies conducted.
Thus, an increase in the pumping rate from 1.26 to 2.52 L/s (20.0 to 40.0 gpm) caused the failure probability at the
observation well to drop from 74% to about 33%. The design point, gamma sensitivities, and delta sensitivities for
this case study are shown in Figures 4.26(a), 4.26(b), and 4.26(c), respectively. It is interesting to see that the
probabilistic event is most sensitive to the hydraulic conductivities in the region downgradient from the pumping
well. This is because the more conductive this region is, the more "clean" water the pump is able to flush towards
the observation well, and the better the containment becomes. This is indicated by the positive delta sensitivities in
that region, which indicate a direct proportionality between the local mean value of the hydraulic conductivity
there and the reliability index.
Next, we look at the impact of the presence of a lower conductivity lens downgradient from the pumping well and
upgradient from the observation well, as shown in Figure 4.27. The lens has a conductivity of 2.0 x 10^-5 cm/sec, which
Table 4.7 Results of the System Reliability Analysis for the 2-well Case Study (Reprinted from Numerical stochastic
analysis of groundwater contaminant transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient,
and J. P. Conte with permission of Journal of Contaminant Hydrology.)

Method                          Result                  Value

First-order Uni-modal Bounds    Bounds on PF(system)    0.224 < PF(system) < 0.287

First-order Bi-modal Bounds     Bounds on PF(system)    0.377 < PF(system) < 0.377
                                Bounds on β(system)     0.313 > β(system) > 0.313
[Figure 4.23: the contaminant source, no-flow boundaries, the lower conductivity lens (K = 5.0 x 10^-5 cm/sec),
observation wells, and constant head boundaries at H = 9.0 m and H = 8.34 m are indicated.]
Figure 4.23 Setup of the system reliability case with a lower conductivity lens. (Reprinted from Numerical stochastic analysis
of groundwater contaminant transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient, and J. P.
Conte with permission of Journal of Contaminant Hydrology.)
is two orders of magnitude less than the prevailing hydraulic conductivity. When the pumping rate of 1.89 L/s (30.0
gpm) was applied to this case, the probability of failure to meet the target concentration of 1.0 mg/L increased
dramatically to 0.985, with a reliability index of -2.12. This illustrates the significant impact that material
heterogeneity and the presence of lenses have on the success of cleanup schemes. Table 4.10 lists the FORM failure
probability and reliability index for different pumping rates. For each pumping rate, the original case study, as well
as that with the lower conductivity lens, is analyzed. The table indicates that the probability of failing to meet the
target cleanup level increases significantly for the case with the lens. This emphasizes the importance of accounting
for material variability and heterogeneity when designing aquifer remediation systems. Failure to account for this
[Figure 4.24: the site boundary, contaminant plume, pumping well, observation well, no-flow boundaries, and
constant head boundaries at H = 9.0 m and H = 8.93 m are indicated.]
Figure 4.24 Setup of the plume containment case study. (Reprinted from Numerical stochastic analysis of groundwater
contaminant transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient, and J. P. Conte with
permission of Journal of Contaminant Hydrology.)
Table 4.8 Input Parameters for the Plume Containment Case Study (Reprinted from Numerical stochastic analysis of
groundwater contaminant transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient, and J. P.
Conte with permission of Journal of Contaminant Hydrology.)

Variable                          Units     Value

Deterministic Data
grid block dimension, Δx = Δy     m         6.0
aquifer thickness, H              m         6.0
aquifer porosity, θ               m3/m3     0.35
longitudinal dispersivity, αx     m         3.0
transverse dispersivity, αy       m         0.3
hydraulic gradient, i             m/m       0.001
simulation time, t                day       30.0
pumping rate, q                   L/s       1.26

Random Field Data
hydraulic conductivity, K         cm/sec    LN(2.0 x 10^-3, 2.0 x 10^-3)
correlation scale, λ              m         12.0

LN(mean, standard deviation): Lognormal
Figure 4.25 Initial contaminant plume (mg/L). (Reprinted from Numerical stochastic analysis of groundwater contaminant
transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient, and J. P. Conte with permission of
Journal of Contaminant Hydrology.)
factor will reduce the chances of meeting the predetermined target cleanup standards within the specified time.
Figures 4.28(a), 4.28(b), and 4.28(c) illustrate the design point, gamma sensitivities, and delta sensitivities,
respectively, for the case with the lens and a pumping rate of 5.68 L/s (90.0 gpm). Comparison of the trends in these
figures with their counterparts for the original case (Figure 4.26) indicates that when the lower conductivity lens is
present, the failure probability is most sensitive to the region downgradient from the pumping well, in addition to
the region around the lens. This is another indication of the importance of careful site investigation to delineate the
extent of heterogeneity present at the site before designing a remediation system.
Table 4.9 Failure Probabilities for the Remediation Case Study for Pumping Rate = 2.52 L/s (40.0 gpm) (Reprinted from
Numerical stochastic analysis of groundwater contaminant transport and plume containment, 1996, by M. M.
Hamed, P. B. Bedient, and J. P. Conte with permission of Journal of Contaminant Hydrology.)

Method    Failure Probability    Reliability Index
FORM      0.292                  0.547
SORM†     0.343                  0.405
SORM‡     0.337                  0.421
MCS       0.334                  0.429

† Improved Breitung
‡ Tvedt's Exact Integral
Figure 4.26 Results for the plume containment case study for pumping rate = 2.52 L/s (40.0 gpm): (a) Design point, (b)
Gamma sensitivity, (c) Delta sensitivity. (Reprinted from Numerical stochastic analysis of groundwater contami-
nant transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient, and J. P. Conte with permission
of Journal of Contaminant Hydrology.)
[Figure 4.27: the contaminant plume, pumping well, site boundary, lower conductivity lens, observation well,
no-flow boundaries, and constant head boundaries at H = 9.0 m and H = 8.93 m are indicated.]
Figure 4.27 Setup of the plume containment case study with a lower conductivity lens. (Reprinted from Numerical stochastic
analysis of groundwater contaminant transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient,
and J. P. Conte with permission of Journal of Contaminant Hydrology.)
Table 4.10 FORM Failure Probabilities for Different Pumping Rates for the Remediation Case Study (Reprinted from
Numerical stochastic analysis of groundwater contaminant transport and plume containment, 1996, by M. M.
Hamed, P. B. Bedient, and J. P. Conte with permission of Journal of Contaminant Hydrology.)

                              Without a lower           With a lower
                              conductivity lens         conductivity lens
Pumping Rate in L/s (gpm)     PF        β               PF        β
1.89 (30.0)                   0.410     0.227           0.985     -2.121
3.15 (50.0)                   0.137     1.095           0.794     -0.821
4.42 (70.0)                   0.061     1.556           0.526     -0.066
5.68 (90.0)                   0.014     2.189           0.321     0.464
Figure 4.28 Results for the plume containment with a lower conductivity lens case study: (a) Design point, (b) Gamma
sensitivity, (c) Delta sensitivity. (Reprinted from Numerical stochastic analysis of groundwater contaminant
transport and plume containment, 1996, by M. M. Hamed, P. B. Bedient, and J. P. Conte with permission of
Journal of Contaminant Hydrology.)
Section 5
Summary and Conclusions
The uncertainty of the parameters describing the aquifer material, the chemical, and the leaking source has been the
focus of many research efforts, as indicated by the wealth of literature on stochastic hydrology in the past
decade. This work was motivated by the recognition of the impact that such uncertainty has on the predictive ability of
groundwater fate and transport models. Most of the work presented so far has been structured around one of three
methods: (1) the classic Monte Carlo simulation method (MCS), (2) spectral analysis, and (3) perturbation theory.
Furthermore, Sitar, Cawlfield, Der Kiureghian, and co-workers have pioneered the application of the first- and
second-order reliability methods (FORM and SORM, respectively) to groundwater contamination problems (Sitar
et al., 1987; Cawlfield and Sitar, 1988; Mok et al., 1993). A brief survey of some of the stochastic hydrogeology
work was presented in Section 2.
In this work, a probabilistic approach for modeling groundwater contaminant transport and remediation has been
presented based on FORM and SORM. The theory of the reliability methods has been reviewed to the extent
necessary to understand the formulation. We looked at the use of analytical and numerical transport and
remediation models for the probabilistic analysis of uncertainty in groundwater. The development of probabilistic
models based on analytical transport codes has proved to be much easier and less computationally intensive than
that of numerical-based probabilistic models. Numerical models, however, allowed us to examine the effect of spatial
heterogeneity and the correlation structure of the hydraulic conductivity, which is not possible with analytical models.
The trade-off between the intensive computational effort required for more complex cases and lower-cost
analytical methods that give screening decisions within a few minutes thus becomes of great significance. In this section,
we list the general conclusions of the work presented in preceding sections.
5.1 Probabilistic Analytical Transport Model
We have shown that FORM and SORM are a viable alternative to the classical Monte Carlo simulation method
when dealing with groundwater contamination events that have a very small probability of occurrence and would
therefore require thousands of Monte Carlo simulations to provide reliable results. FORM and SORM were used to assess
the probability that a given contaminant exceeds a certain target concentration level at a selected point in space and
time in the solution domain and to provide the sensitivity of such a probabilistic event to the basic uncertainty in the
input variables. Contamination scenarios with both reactive and non-reactive solutes were presented for demonstra-
tion purposes.
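The contrast with crude Monte Carlo can be made concrete with a minimal sketch. The distribution, target, and sample size below are hypothetical and chosen so that FORM is exact (a linear limit state in one normal variable); for the non-linear limit states in this report FORM is only an approximation.

```python
import math
import random

def phi(z):
    """Standard normal CDF, written with the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical scalar problem: failure when a concentration X exceeds a
# target c, with X ~ N(mu, sigma). The limit state g = c - X is linear,
# so the reliability index is beta = (c - mu)/sigma and Pf = Phi(-beta).
mu, sigma, c = 1.0, 0.5, 2.5
beta = (c - mu) / sigma            # 3.0
pf_form = phi(-beta)               # about 1.35e-3

# Crude Monte Carlo needs on the order of 100/Pf samples just to observe
# ~100 failures; FORM needs only a handful of limit-state evaluations.
random.seed(0)
n = 200_000
hits = sum(1 for _ in range(n) if random.gauss(mu, sigma) > c)
pf_mc = hits / n
print(pf_form, pf_mc)
```

For a failure probability of order 10^-3, two hundred thousand samples are needed before the Monte Carlo estimate stabilizes, which is the regime where the reliability methods pay off.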
FORM and SORM results were compared with each other and tested against those obtained using the classical Monte Carlo
simulation method. In selecting the analysis method, it should be emphasized that, in this work, SORM was more
accurate than FORM but computationally more expensive. The problems analyzed were fairly simple; hence the
computer time required for SORM was still very low (on the order of a few minutes on a SUN SPARCstation 2) and
the use of SORM was warranted. However, for larger problems, and when using more complex numerical models
(based on finite difference or finite element methods) this becomes a matter of considerable significance, and a
careful trade-off analysis between computational effort and accuracy should be conducted to determine which
approximation method to use. We also showed that FORM results are sometimes very different from SORM results;
in such cases, SORM analysis is warranted in spite of the extra computational effort it requires.
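The FORM-to-SORM step can be sketched with Breitung's asymptotic formula. The reliability index and principal curvatures below are hypothetical, and curvature sign conventions differ between references, so this is an illustration rather than a reproduction of the report's computations.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sorm_breitung(beta, curvatures):
    """Breitung's asymptotic SORM estimate,
        Pf ~ Phi(-beta) * prod_i (1 + beta * kappa_i)**-0.5,
    where kappa_i are the principal curvatures of the limit-state surface
    at the design point (convention assumed here: positive kappa curves
    the surface away from the origin)."""
    pf = phi(-beta)
    for k in curvatures:
        pf /= math.sqrt(1.0 + beta * k)
    return pf

# Hypothetical design point with two mild principal curvatures.
beta, kappas = 2.5, [0.1, -0.05]
pf_form = phi(-beta)            # FORM ignores the curvatures entirely
pf_sorm = sorm_breitung(beta, kappas)
print(pf_form, pf_sorm)
```

When the curvatures are small the two estimates nearly coincide; strongly curved limit-state surfaces are exactly the cases where the extra SORM effort matters.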
The impact of the basic uncertainty in seepage velocity was identified. However, chemical-related and source-
related parameter uncertainty are also important factors to consider in the probabilistic analysis of groundwater
transport problems, and their importance should not be overshadowed by the aquifer-related parameter uncertainty.
5.2 Probabilistic Numerical Transport Model
A numerical solution to the transport equations was obtained using a finite-element model. This allows consideration
of the spatial variability of the aquifer material, as well as more complex geometries and boundary conditions.
Hydraulic conductivity was assumed to be spatially correlated, following an exponential autocorrelation function
and a lognormal distribution.
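A realization of such a field can be sketched with standard-library Python only. The transect geometry, mean, variance, and correlation length below are hypothetical; the report's two-dimensional fields would use the same ingredients, an exponential covariance of ln K and a Cholesky factor.

```python
import math
import random

def exp_corr_cov(xs, sigma_lnK, corr_len):
    """Covariance of ln K under an exponential autocorrelation function."""
    n = len(xs)
    return [[sigma_lnK ** 2 * math.exp(-abs(xs[i] - xs[j]) / corr_len)
             for j in range(n)] for i in range(n)]

def cholesky(a):
    """Plain Cholesky factorization A = L L^T (A symmetric positive definite)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / L[j][j]
    return L

# Hypothetical 1-D transect of element midpoints (m) and ln K statistics.
xs = [0.0, 5.0, 10.0, 15.0, 20.0]
mean_lnK, sigma_lnK, corr_len = math.log(1.0e-4), 1.0, 10.0

L = cholesky(exp_corr_cov(xs, sigma_lnK, corr_len))
random.seed(1)
z = [random.gauss(0.0, 1.0) for _ in xs]
lnK = [mean_lnK + sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(xs))]
K = [math.exp(v) for v in lnK]   # one correlated lognormal realization
print(K)
```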
Failure to meet the target concentration level at a downgradient well was analyzed for a number of cases. The reliability
analysis provides the design point, as well as the gamma and delta sensitivities. The effect of lower conductivity
regions, whether large blocks, a few elongated lenses, or many small scattered lenses, was considered. The
sensitivities, as well as the design point obtained, indicate that the flow lines circumvent the lower conductivity
regions in order to produce the target concentration at the well. The sensitivity of the probabilistic event was highest
with respect to the hydraulic conductivities along the tortuous flow path that bypasses the tight lenses.
There are two types of meshes that the numerical probabilistic model utilizes: (1) a random variables mesh, and (2)
a numerical mesh. The former is used by the mid-point method to discretize the spatial random field (the hydraulic
conductivity). The numerical mesh, on the other hand, is required in the finite-element method to discretize the
transport equation. We used a coarse random variables mesh superimposed on a fine numerical mesh, and found
that considerable savings in computational effort can be obtained in this manner. The choice of the appropriate
stochastic discretization level, however, requires some engineering judgment: a careful trade-off analysis between
computational effort and numerical accuracy must be conducted to determine the spatial discretization level that
captures the stochastic features of the random field without excessive computational effort.
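For the mid-point method, the two-mesh idea reduces to a lookup: each fine numerical element inherits the random value of the coarse stochastic cell containing its midpoint. A one-dimensional sketch with hypothetical geometry and conductivities:

```python
def assign_midpoint(fine_midpoints, coarse_edges, coarse_values):
    """Map coarse-cell random values onto fine elements (mid-point method):
    each fine element takes the value of the coarse cell holding its midpoint."""
    out = []
    for x in fine_midpoints:
        # index of the last coarse edge at or below x, i.e. the containing cell
        i = max(j for j in range(len(coarse_edges) - 1) if coarse_edges[j] <= x)
        out.append(coarse_values[i])
    return out

# Hypothetical 1-D example: 2 coarse stochastic cells over 8 fine elements.
coarse_edges = [0.0, 10.0, 20.0]
coarse_K = [3.2e-5, 8.7e-5]                    # one random variable per cell
fine_mid = [1.25 + 2.5 * i for i in range(8)]  # midpoints of 8 fine elements
print(assign_midpoint(fine_mid, coarse_edges, coarse_K))
```

Only two random variables enter the reliability analysis, while the transport solve still runs on all eight elements; this is the source of the computational savings noted above.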
When analyzing the failure to meet the target levels at more than one location, system reliability was used. A series
system formulation was adopted, since failure to meet the pre-determined target concentration at any well results in
failure of the system. The joint probability of failure, accounting for the correlation between the individual failure
events, was obtained using reliability bounds.
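The cheapest such bounds, the first-order "simple" bounds for a series system, can be sketched as follows. The per-well failure probabilities are hypothetical, and the narrower Ditlevsen (1979) bounds would additionally use pairwise joint failure probabilities.

```python
def series_simple_bounds(pf):
    """First-order bounds on the failure probability of a series system
    (failure of any one component fails the system).
    Lower bound: fully dependent components -> max of the component Pf.
    Upper bound: independent or worse -> sum of the component Pf, capped at 1."""
    return max(pf), min(1.0, sum(pf))

# Hypothetical component failure probabilities at three monitoring wells.
pf_wells = [0.012, 0.034, 0.005]
lo, hi = series_simple_bounds(pf_wells)
print(lo, hi)   # the system Pf lies between these two values
```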
The effect of spatial heterogeneity of the aquifer on containment/remediation was studied. Hydraulic conductivity
was assumed to be represented by a spatial random field, and the probability of the plume escaping the site
boundary was estimated. Furthermore, the relative importance of the grid-block conductivities was obtained. The use of
analytical gradients is expected to alleviate some of the convergence problems we faced when looking at stressed aquifer
conditions. Pumping and/or injection apparently causes the limit-state surface to become extremely non-linear and noisy,
and the use of finite differences to estimate gradients failed in many cases to converge to the design
point.
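The gradient-noise problem is easy to quantify in a sketch: with a forward difference of step h, evaluation noise of magnitude up to eps contaminates the derivative estimate by as much as 2*eps/h. The step and tolerance below are hypothetical, not taken from the report's runs.

```python
def fd_grad(f, x, h):
    """Forward finite-difference derivative estimate."""
    return (f(x + h) - f(x)) / h

# Smooth response: the derivative of x**2 at x = 1 is 2, and the forward
# difference recovers it to O(h).
g_smooth = fd_grad(lambda x: x * x, 1.0, 1e-6)

# If each evaluation (e.g., an iterative finite-element solve) carries noise
# of magnitude up to eps, the FD estimate can be off by up to 2*eps/h.
eps, h = 1e-4, 1e-6          # hypothetical solver tolerance and FD step
worst_case = 2.0 * eps / h   # = 200, dwarfing an O(1) true gradient
print(g_smooth, worst_case)
```

This is why small finite-difference steps, normally desirable for accuracy, make noisy limit-state functions worse, and why analytical gradients are recommended below.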
5.3 Recommendations for Future Research
Most of the reliability analysis software available to date uses gradient-based optimization techniques to determine
the design point. Examples include the HL-RF, SQP, and gradient projection methods. The reason for using
these methods is their superior convergence in the neighborhood of the optimum point, because they use
information on the gradients of the limit-state function with respect to the basic random variables in the search.
In the case of an algorithmic formulation of the limit-state function, such as that presented in Section 4, a finite-
element routine is required to obtain any point on the limit-state surface. The calculation of the required
gradients can therefore be very time consuming. Furthermore, numerical noise can render these gradients considerably
inaccurate. If these factors are combined with a starting point that may not be in the neighborhood of the
solution, the optimization routine can fail to find the design point.
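For reference, the HL-RF iteration named above can be sketched in a few lines. This bare version omits the step-size control and other safeguards that production codes add, and the linear limit state used to exercise it is hypothetical.

```python
import math

def hlrf(g, grad_g, u0, tol=1e-8, max_iter=50):
    """Hasofer-Lind / Rackwitz-Fiessler search for the design point u*
    in standard normal space (bare sketch, no line search)."""
    u = list(u0)
    for _ in range(max_iter):
        gu, dg = g(u), grad_g(u)
        norm2 = sum(d * d for d in dg)
        dot = sum(d * ui for d, ui in zip(dg, u))
        u_new = [((dot - gu) / norm2) * d for d in dg]   # HL-RF update
        if max(abs(a - b) for a, b in zip(u, u_new)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(ui * ui for ui in u))           # reliability index
    return u, beta

# Hypothetical linear limit state g(u) = 3 - u1 - u2: the design point is
# the closest point of the line u1 + u2 = 3 to the origin, beta = 3/sqrt(2).
g = lambda u: 3.0 - u[0] - u[1]
grad = lambda u: [-1.0, -1.0]
u_star, beta = hlrf(g, grad, [0.0, 0.0])
print(u_star, beta)
```

For a linear limit state the update lands on the design point in one step; it is the noisy, highly non-linear surfaces discussed above that defeat this scheme and motivate the non-gradient alternatives below.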
Non-gradient search methods may be advantageous in cases where the limit-state function is highly noisy, since
these methods do not use gradient information but rely solely on function evaluations. Examples include the
Nelder-Mead method (Press et al., 1989) and many non-classical methods, such as genetic algorithms (GA)
(Goldberg, 1989; Davis, 1991), simulated annealing (SA) (Kirkpatrick et al., 1983; Goffe et al., 1994), and
evolution strategies and evolutionary programming (ES and EP) (Baeck et al., 1991; Baeck et al., 1993;
Baeck and Schwefel, 1993).
Hybrid optimization methods that rely on a non-gradient search method to locate the neighborhood of the design
point, combined with a gradient-based method to determine the design point accurately and efficiently, may
also prove very useful in cases of a noisy limit-state function.
The use of reliability methods in analyzing the effect of pumping on containing a plume within a site boundary was
presented. Because convergence to the design point was not always achieved owing to noisy gradients, the
estimation of analytical gradients for reliability-based remediation models is highly recommended.
A Bayesian reliability updating scheme would also be an important addition to the work presented here, since
it allows the reliability estimates to be updated as more data become available from the site.
Finally, although FORM and SORM required a fraction of the computational effort needed by the Monte Carlo
simulation method, the CPU time necessary for solving large problems could still be extensive. Parallel
computer architectures could be utilized to reduce the computational time considerably, allowing
large-dimensional problems to be considered without overwhelming CPU time.
Bibliography
Arbogast, T., Dawson, C., Keenan, P., Wheeler, M. and Yotov, I. (1998). Enhanced cell-centered finite differences
for elliptic equations on general geometry, J. Sci. Comput., 19(2):404.
Baeck, T. and Schwefel, H.-P. (1993). An overview of evolutionary algorithms for parameter optimization, Evol.
Comput. 1(1): 1-23.
Baeck, T., Hoffmeister, F. and Schwefel, H.-P. (1991). A survey of evolution strategies, in R. K. Belew and L. B.
Booker (eds), Proceedings of the 4th International Conference on Genetic Algorithms, pp. 2-9.
Bakr, A. A., Gelhar, L. W., Gutjahr, A. L. and MacMillan, J. R. (1978). Stochastic analysis of spatial variability in
subsurface flows. 1. comparison of one- and three-dimensional flows, Water Resour. Res., 14(2): 263-271.
Baeck, T., Rudolph, G. and Schwefel, H.-P. (1993). Evolutionary programming and evolution strategies:
Similarities and differences, in D. B. Fogel and W. Atmar (eds), Proceedings of the 2nd Annual Conference on
Evolutionary Programming, pp. 11-22.
Bell, J. B., Dawson, C. N. and Shubin, G. R. (1988). An unsplit, higher order Godunov method for scalar
conservation laws in multiple dimensions, J. Comput. Phys., 74:1-24.
Bjerager, P. (1988). Probability integration by directional simulation, J. Eng. Mech., ASCE, 114(8): 1285-1302.
Bjerager, P. (1990). On computation methods for structural reliability analysis, Struct. Saf., 9:79-96.
Breitung, K. (1984). Asymptotic approximations for multinormal integrals, J. Eng. Mech., ASCE, 110(3):357-366.
Breitung, K. (1991). Probability approximations by log likelihood maximization, J. Eng. Mech., ASCE, 117(3):457-477.
Cawlfield, J.D. and Sitar, N. (1987). Application of first-order reliability to stochastic finite element analysis of
groundwater flow, Technical Report UCB/GT/87-01, Department of Civil Engineering, Division of Geotechnical
Engineering, University of California, Berkeley.
Cawlfield, J.D. and Sitar, N. (1988). Stochastic finite element analysis of groundwater flow using the first-order
reliability method, in A. Peck, S. Gorelick, G. de Marsily, S. Foster and V. Kovalevsky (eds), Consequences of
Spatial Variability in Aquifer Properties and Data Limitations for Groundwater Modelling Practice, IAHS
Publication No. 175, International Association of Hydrological Sciences, Gentbrugge, Belgium, pp. 191-216.
Davis, L. (ed.) (1991). Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York.
Dawson, C.N. (1990). Godunov-mixed methods for immiscible displacement, Int. J. Numer. Methods Fluids,
11:835-847.
Dawson, C.N. (1991). Godunov-mixed methods for advective flow problems in one space dimension, SIAM J.
Numer. Anal., 28(5):1282-1309.
Dawson, C.N. (1993). Godunov-mixed methods for advection-diffusion equations in multidimensions, SIAM J.
Numer. Anal., 30(5):1315-1332.
Dennis, J. and Schnabel, R. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear
Equations, Prentice-Hall, New Jersey.
Der Kiureghian, A. and Ke, J.-B. (1985). Finite element based reliability analysis of frame structures, Proc. Fourth
International Conference on Structural Safety and Reliability, Vol. I, Kobe, Japan, pp. 395-404.
Der Kiureghian, A. and Ke, J.-B. (1988). The stochastic finite element method in structural reliability, Prob. Eng.
Mech. 3(2):83-91.
Der Kiureghian, A., Lin, H.-Z. and Hwang, S.-J. (1987). Second order reliability approximations, J. Eng. Mech.,
ASCE, 113(8):1208-1225.
Der Kiureghian, A. and Liu, P.-L. (1986). Structural reliability under incomplete probability information, J. Eng.
Mech., ASCE, 112(1):85-104.
Dettinger, M.D. and Wilson, J.L. (1981). First order analysis of uncertainty in numerical models of groundwater
flow: Part 1. mathematical development, Water Resour. Res., 17(1): 149-161.
Ditlevsen, O. (1979). Narrow reliability bounds for structural systems, J. Struct. Mech., 7(4):453-472.
Ditlevsen, O. and Madsen, H.O. (1996). Structural Reliability Methods, John Wiley & Sons.
Eisenberg, N.A., Rickertsen, L.D. and Voss, C. (1987). Performance assessment, site characterization, and
sensitivity and uncertainty methods: Their necessary association for licensing, in B.B. Buxton (ed.), Proc. of
the Conference on Geostatistical, Sensitivity, and Uncertainty Methods for Ground-Water Flow and Radionuclide
Transport Modeling, DOE/AECL, Battelle Press, San Francisco, California, pp. 9-38.
Fetter, C.W. (1993). Contaminant Hydrogeology, Macmillan Publishing Company, New York.
Fiessler, B., Neumann, H.-J. and Rackwitz, R. (1979). Quadratic limit states in structural reliability, J. Eng. Mech.,
ASCE, 105(4): 661-676.
Freeze, R.A. and Cherry, J.A. (1979). Groundwater, Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
Galya, D.P. (1987). A horizontal plane source model for groundwater transport, Ground Water, 25(6):733-739.
Goffe, W.L., Ferrier, G.D. and Rogers, J. (1994). Global optimization of statistical functions with simulated
annealing, J. Econometrics, 60:65-99.
Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley,
Reading, MA.
Hasofer, A.M. and Lind, N.C. (1974). Exact and invariant second-moment code format, J. Eng. Mech., ASCE,
100:111-121.
Hohenbichler, M., Gollwitzer, S., Kruse, W. and Rackwitz, R. (1987). New light on first- and second-order
reliability methods, Struct. Saf., 4:267-284.
Jang, Y.-S., Sitar, N. and Der Kiureghian, A. (1990). Reliability approach to probabilistic modeling of contaminant
transport, Technical Report UCB/GT-90/3, Department of Civil Engineering, Division of Geotechnical
Engineering, University of California, Berkeley.
Jang, Y. S., Sitar, N. and Der Kiureghian, A. (1994). Reliability approach to probabilistic modeling of contaminant
transport, Water Resour. Res., 30(8):2435-2448.
Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P. (1983). Optimization by simulated annealing, Science,
220(4598):671-680.
Lin, H. Z. and Der Kiureghian, A. (1987). Second order system reliability using directional simulation, in N.C.
Lind (ed.), Reliability and Risk Analysis in Civil Engineering, ICASP5, Vol. 2, University of Waterloo,
Ontario, Canada, pp. 930-936.
Liu, P.L. and Der Kiureghian, A. (1986a). Multivariate distribution models with prescribed marginals and
covariances, Prob. Eng. Mech. 1(2): 105-112.
Liu, P. L. and Der Kiureghian, A. (1986b). Optimization algorithms for structural reliability analysis, Technical
Report UCB/SESM-86/09, Department of Civil Engineering, Division of Structural Engineering and Structural
Mechanics, University of California, Berkeley.
Liu, P.L. and Der Kiureghian, A. (1991). Optimization algorithms for structural reliability, Struct. Saf., 9:161-177.
Liu, P.-L., Lin, H.-Z. and Der Kiureghian, A. (1989). CALREL User Manual, University of California, Berkeley,
Department of Civil Engineering.
Madsen, H.O. (1988). Omission sensitivity factors, Struct. Saf., 5:35-45.
Madsen, H.O., Krenk, S. and Lind, N.C. (1986). Methods of Structural Safety, Prentice-Hall, Inc., Englewood
Cliffs, New Jersey.
Melchers, R.E. (1987). Structural Reliability, Analysis and Prediction, Ellis Horwood Series in Civil Engineering-
Structural Engineering Section, Ellis Horwood.
Mok, C.M., Sitar, N. and Der Kiureghian, A. (1993). Bayesian reliability analysis of groundwater flow and
subsurface contaminant transport, EOS, American Geophysical Union, 74(43):251.
Newell, C.J., Hopkins, L.P. and Bedient, P.B. (1990). A hydrogeologic database for ground-water modeling,
Ground Water, 28(5):703-714.
Press, W.H., Flannery, B.P., Teukolsky, S.A. and Vetterling, W.T. (1989). Numerical Recipes, Cambridge
University Press.
Rackwitz, R. and Fiessler, B. (1978). Structural reliability under combined load sequences, Comput. Struct., 9:489-
494.
Rosenblatt, M. (1952). Remarks on a multivariate transformation, Ann. Math. Stat. 23:470-472.
Sitar, N., Cawlfield, J.D. and Der Kiureghian, A. (1987). First order reliability approach to stochastic analysis of
subsurface flow and contaminant transport, Water Resour. Res., 23(5):794-804.
Tvedt, L. (1988). Second order probability by an exact integral, in P. Thoft-Christensen (ed.), Proc. of 2nd IFIP
Working Conference on Reliability and Optimization of Structural Systems, number 48 in Lecture Notes in
Engineering, Springer-Verlag, Berlin, Germany, pp. 377-384.
U. S. EPA (1988). Background Document for EPA's Composite Landfill Model (EPACML), Washington, D.C.
Valocchi, A. and Malmstead, M. (1992). Accuracy of operator-splitting for advection-diffusion-reaction problems,
Water Resour. Res., 28:1471-1476.
Veritas Research (1992a). PROBAN: General Purpose Probabilistic Analysis Program, Theory Manual, Det
norske veritas, Hovik, Norway.
Veritas Research (1992b). PROBAN: General Purpose Probabilistic Analysis Program, User's Manual, Det
norske veritas, Hovik, Norway.
Wagner, B.J. and Gorelick, S.M. (1987). Optimal groundwater management under parameter uncertainty, Water
Resour. Res., 23:1162-1174.
Wheeler, M.F. and Dawson, C.N. (1988). An operator-splitting method for advection-diffusion-reaction problems,
in J.A. Whiteman (ed.), MAFELAP Proceedings VI, Academic Press, pp. 463-482.
Wilson, J.T., Kampbell, D.H. and Armstrong, J.H. (1994). Natural bioreclamation of alkylbenzenes (BTEX) from
a gasoline spill in methanogenic ground water, in Hydrocarbon Bioremediation, Lewis Publishers, CRC Press,
pp. 201-218.