Chapter 5 Assessment of Environmental Data For Useability in Baseline Risk Assessments
Qualified data can usually be used for
quantitative risk assessments.
The assessment of data quality indicators for either sampling or
analysis involves the evaluation of five indicators: completeness,
comparability, representativeness, precision, and accuracy.
The effects of uncertainty in completeness, comparability, and
representativeness influence the certainty of chemical
identification by increasing the probability of false negatives
and positives. Variation in completeness, comparability,
representativeness, precision, and accuracy affects the
uncertainty of estimates of average concentration and reasonable
maximum exposure. Once the indicator is examined or a
numerical value is determined, the results can be compared to
the performance objectives established during RI planning.
This comparison determines the useability of the data and any
required corrective actions.
A summary of the minimum requirements for data quality
indicators is presented in Exhibit 5-2, and the evaluation
process is illustrated in Exhibit 5-5. Specific requirements for
each indicator are presented in the sampling and analytical data
quality indicator assessment sections below.
5.6.1. ASSESSMENT OF SAMPLING
DATA QUALITY INDICATORS
The major activity in determining the useability of data based
on sampling is assessing the effectiveness of the sampling
operations performed. Samples provided for analysis must
support the four basic decisions to be made with RI data in
the risk assessment, cited at the beginning of this chapter.
Completeness
Minimum requirements:
100% for critical samples. Background samples and broad
spectrum analyses are usually critical.
Sufficient number of samples to meet specified perform-
ance measures.
Impact:
A reduction in the number of samples reduces site coverage
and may affect representativeness.
Ability to differentiate site levels from background.
False negatives.
Reduction in confidence levels and power.
Upper confidence limit estimates may be inflated.
Corrective action:
Resampling.
Additional analysis of samples already at laboratory.
Completeness is calculated by the following formula:

Percent Completeness = (Number of accepted data points) x 100 / (Total number of samples collected)
This measure of completeness is useful for data collection and
analysis management but misses the key risk assessment issue,
which is the total number of data points available and accepted
for each chemical of potential concern. All occurrences of
incompleteness should be assessed to determine if an acceptable
level of data useability can still be obtained or whether the
level of completeness must be increased, either by further
sampling or by other corrective action. Any decrease in the
number of samples from that specified in the sample design will
affect the final results. In this case, the option of obtaining more
samples should be reviewed.
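The distinction between overall and per-chemical completeness can be sketched in a few lines of code. This is a minimal sketch; the function and field names are illustrative, not from the guidance.

```python
# A minimal sketch (names are illustrative) of completeness computed overall
# and per chemical of potential concern, as the text recommends.
from collections import defaultdict

def percent_completeness(accepted, collected):
    """(Number of accepted data points) x 100 / (total number of samples collected)."""
    return 100.0 * accepted / collected

# Hypothetical review results: (sample id, chemical, accepted after data review?)
results = [("S1", "benzene", True), ("S2", "benzene", False),
           ("S1", "lead", True), ("S2", "lead", True)]

per_chemical = defaultdict(lambda: [0, 0])  # chemical -> [accepted, total]
for _sample, chemical, accepted in results:
    per_chemical[chemical][0] += int(accepted)
    per_chemical[chemical][1] += 1

for chemical, (accepted, total) in sorted(per_chemical.items()):
    print(chemical, percent_completeness(accepted, total))  # benzene 50.0, lead 100.0
```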
Typical causes for sample attrition include site conditions
preventing sample collection (e.g., a well runs dry), sample
breakage, and invalid or unusable analytical results. Completeness
can affect the uncertainty involved in risk assessments by
reducing the available number of samples on which identification
of chemicals at the site and estimates of concentration
levels are based. The reduction in the number of samples from
the original design further affects representativeness by reducing
site coverage and increases the variability in concentration
estimates. Only the collection of additional samples will
resolve the problem, unless the samples involved were duplicates
or splits. In this case, or if the cause was laboratory
performance, the extracts may be considered for reanalysis.
Comparability
Minimum requirement:
Unbiased sample design or documented reasons for select-
ing another sample design
Impact:
Non-additivity of sample results.
Reduced confidence, power, and ability to detect differ-
ences, given the number of samples available.
Corrective action:
Statistical analysis of effects of bias.
Comparability issues have little impact on performance meas-
ures associated with sampling provided that the sample design
is unbiased, and the sample design or analytical methods have
not changed over time.
EXHIBIT 5-5
USING DATA QUALITY INDICATORS
TO DETERMINE DATA USEABILITY

[Flowchart. Environmental data are grouped by medium/stratum and statistical assumptions are confirmed, consulting a statistician as needed. Statistical performance is estimated and compared to required performance; if objectives are not met, performance objectives are modified or the acceptable probability of missing a hot spot is judged, possibly leading to non-statistical (judgmental) treatment. Sampling and analytical measurement errors are then estimated and combined into total error estimates, and data are accepted as quantitative, accepted with qualification, or rejected, with corrective action determined where effects are significant.]
If any of these factors change, the risk
assessor may experience difficulties in combining data sets to
estimate the reasonable maximum exposure. The determination
of RME is based on the principle of estimating risk over
time for the exposure area. The ideal situation occurs when
samples can be added within the basic design and the only effect
is to decrease the level of uncertainty.
Representativeness
Minimum requirements:
Sample data representative of exposure area.
Sample preparation procedures (i.e., filtering, compositing,
and sample preservation) do not affect representativeness.
Impact:
Bias high or low in estimate of RME.
False negatives.
Corrective action:
Additional sampling.
Examination of effects of sample preparation procedures.
Representativeness of data is critical to risk assessments. The
results of the risk assessment will be biased to the degree that
the data do not reflect the chemicals and concentrations present
in the exposure area of interest. Non-representative chemical
identification will result in false negatives. Non-representative
estimates of concentration levels may be high or low. Only
additional sampling will resolve the problems associated with
unrepresentative sampling, unless the risk assessment is ac-
cepted with explicit discussion of its potential limitations.
It is important to determine whether any changes have occurred
in the actual sample collection that convert an originally
unbiased sampling plan into a biased sampling episode. Bias
in unbiased designs is difficult to assess because no measure of
the true value is known. Bias in non-statistical designs is
assumed.
Representativeness is primarily a planning concern. The solution
is in the design of a sampling plan that is representative.
Once the design is implemented, only the sampling variability
is evaluated during the assessment process, unless contamination
occurs in the quality control samples or blanks. Completeness
problems decrease representativeness and increase the
potential for false negatives and the bias in estimations of
concentration, but only the increase in variability can be estimated.
Precision
Minimum requirements:
Confidence level of 80%.
Power of 90%.
Minimum detectable relative differences specified in SAP
and modified after analysis of background samples if nec-
essary.
One set of duplicates or more, as specified in the SAP.
Measurement error specified.
Impact:
Errors in decisions to act or not act based on analytical data.
Unacceptable level of uncertainty.
Corrective action:
Add samples.
Adjust performance objectives.
The two basic activities performed in the assessment of preci-
sion are estimating sampling variability from the observed
spatial variation and estimating the measurement error attribut-
able to the data collection process. Assumptions concerning
the sample design and data distributions must be examined
prior to interpreting the results. This examination will provide
the basis for selecting calculation formulas and knowing when
statistical consultation is required.
The type of sample design selected is critical to the estimation
of sampling variability, as discussed in Sections 3.2 and 4.1. If
the sample design is purposive, the nature of the sampling error
cannot be determined and estimates of the average concentrations
of analytes may not be representative of the site.
The distribution of the data must
always be determined before applying
statistical measures.
The nature of the observed chemical data distribution determines
the choice of estimation procedures. The estimation of variability and
confidence intervals will become complex if the distribution cannot
be assumed normal, or to approximate normal when transformed
using standard procedures such as the transformation to
log normal. Estimates of the 95% upper confidence limit
(UCL) of the average concentration for the RME should be
based on an analysis of the frequency distribution of the data
whenever the database is sufficient to support such analysis.
The use of statistical tests comparing the distribution of the
observed data with the normal or some other distribution is
preferred. Graphs of the data without statistical test results may
also be acceptable for some data sets. Statistical computer
software can assist in the analysis of data distribution.
If the analysis shows that the data are not normally distributed,
the risk assessor should transform the data to a normal distribution,
if possible, to facilitate statistical analyses. After data are
normalized, the UCL of the arithmetic mean can be calculated
for data that are logarithmically distributed (RAGS Section
6.4.1). If log normal is not appropriate, other parametric
models, such as Pareto, gamma, or beta, might be used, or
non-parametric approaches utilized. A statistician should be
consulted, since the wrong choice of the distributional model can
result in low power and even invalid tests.
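As a rough illustration of this workflow, the sketch below checks normality of the raw and log-transformed data and computes a t-based one-sided 95% UCL when normality is plausible. The test choice, the 0.05 threshold, and the data are assumptions for illustration; RAGS Section 6.4.1 prescribes its own procedure (e.g., the H-statistic for lognormal data), which is not reproduced here.

```python
# A minimal sketch (not the guidance's prescribed procedure): test whether
# analyte concentrations are closer to normal or lognormal, then compute a
# one-sided 95% UCL of the arithmetic mean where a t-interval is defensible.
import numpy as np
from scipy import stats

def ucl95_arithmetic_mean(conc):
    """One-sided 95% upper confidence limit of the mean (t-based)."""
    conc = np.asarray(conc, dtype=float)
    n = len(conc)
    return conc.mean() + stats.t.ppf(0.95, n - 1) * conc.std(ddof=1) / np.sqrt(n)

conc = np.array([1.2, 0.8, 3.5, 2.1, 0.6, 5.9, 1.7, 2.4])  # hypothetical mg/kg results

# Shapiro-Wilk p-values for the raw and log-transformed data.
p_raw = stats.shapiro(conc).pvalue
p_log = stats.shapiro(np.log(conc)).pvalue
print(f"normality p-value: raw={p_raw:.3f}, log={p_log:.3f}")

if p_raw >= 0.05:                 # plausibly normal: t-based UCL directly
    print("95% UCL:", ucl95_arithmetic_mean(conc))
elif p_log >= 0.05:               # plausibly lognormal: consult a statistician;
    print("data look lognormal")  # RAGS 6.4.1 uses a different UCL method here
else:
    print("neither fits; consider nonparametric methods")
```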
Sampling Variability
Exhibit 5-6 summarizes the assessment procedures for the
evaluation of variability from different sampling procedures.
The estimation of confidence levels, power, and minimum
detectable relative differences requires assumptions about the
coefficients of variation from sampling variability for each
chemical of potential concern. The RPM or risk assessor
should discuss the implications of these assumptions with a
statistician to determine their potential impacts on data useabil-
ity.
The statistical measures of performance
most applicable to site conditions
should be determined before assessing
sampling performance.
EXHIBIT 5-6
STEPS TO ASSESS SAMPLING PERFORMANCE

1. Confirm statistical assumptions.
2. Summarize analyte detection data by strata within media.
3. Transform analyte concentration data so the distribution is approximately normal.
4. Calculate the coefficient of variation for each analyte detected.
5. Using Exhibit 4-4, "Relationships Between Measures of Statistical Performance and Number of Samples Required," look up the range of power, confidence level, and minimum detectable differences for the calculated coefficient of variation.
6. Compare the statistical performance measures required to those achievable given the coefficient of variation and sample size.
7. If the performance objectives are achieved, go to step 9. If the required statistical performance levels are not met, then additional samples must be taken or one or more of the performance parameters must be changed.
8. If samples are to be added, Exhibit 4-4 and the calculation formulas in Appendix IV can be used to determine the number needed. If the performance parameters are to be changed, the parameter to be changed should be the one which will increase the probability of taking unnecessary action as opposed to unnecessary risk.
9. Examine the results of the quality control samples. If none exist, the sample results must be considered to be qualitative.
10. If the quality control sample results indicate possible bias through contamination, take appropriate corrective action.
Once the statistical assumptions and observed analyte variability
are known, selected statistical performance measures can be
assessed to determine the data quality achieved. Additional
samples may be needed or modified data quality objectives
required as a result of evaluating sampling variability. Three
issues are involved in the assessment of required statistical
performance:

Level of certainty or confidence.
Power.
Minimum detectable difference.

The required level for each of these three critical statistical
performance measures should be included in the SAP as data
quality objectives (DQOs). The user's data quality requirements
defined by these statistical measures determine the
number of samples that are taken during data collection.
Recommended minimum statistical performance parameters
for data useability in risk assessment are provided in Exhibit 5-7.

To determine whether the performance objectives have been
met, first summarize the sample results at the analyte level by
stratum, including media within a site or site subgroup and
strata within media. If a particular combination of stratum
and analyte yields only a single data point, the issue of sampling
error is not considered relevant, and the assessment proceeds to
the assessment of analytical error for that stratum and analyte
combination. Situations involving a single data point are
treated as instances of risk assessment based on a single observed
concentration.
EXHIBIT 5-7
RECOMMENDED MINIMUM STATISTICAL
PERFORMANCE PARAMETERS
FOR RISK ASSESSMENT

NULL HYPOTHESIS: ON-SITE CONTAMINANT
CONCENTRATIONS ARE NOT HIGHER THAN
THE BACKGROUND

Confidence level (Type I error): 80% minimum; rejecting the null hypothesis when true means taking unnecessary action.
Power (Type II error): 90% minimum; accepting the null hypothesis when false means failing to take action when risk exists.
Minimum detectable difference: 10% - 20%; the level selected depends on the concentration of concern.

Source: EPA 1989c
In the majority of cases, for stratum/analyte combinations with
multiple data points, the distribution should be examined for
normality and transformed to log normal. The coefficient of
variation is calculated for each stratum/analyte combination. If
the distribution resulting from this transformation is not normal,
a new distributional model will need to be identified and
validated in consultation with a statistician. Non-parametric
procedures, which require no distributional assumptions, may
also be used.
Once the coefficient of variation is calculated, the number of
samples required to achieve any specific statistical perform-
ance measure can be determined from tables or statistical
formulas. Conversely, the statistical performance achieved,
given the coefficient of variation, can also be determined. The
statistical performance achieved should be compared to the
requirements stated in planning. If the performance objectives
are achieved, the risk assessor can proceed to the assessment of
measurement error.
If the required statistical performance objectives
are not met, additional samples must be taken or one (or more)
of the performance parameters must be changed. If samples are
added, the tables or formulas can be used to calculate the
number of samples required. If a performance parameter is
changed, the one to be changed should be the one which will
increase the probability of taking unnecessary action as opposed
to unnecessary risk. As a result, the confidence level
would be reduced first, the minimum detectable relative differences
would be increased second, and the level of power would
be reduced last. Minimum recommended levels of reduction
for use in risk assessment are 80% confidence levels, 90%
power, and 10-20% minimum detectable relative differences.
Exhibit 5-7 summarizes the recommended data quality objectives
for statistical performance parameters.
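The arithmetic behind steps 4 through 6 of Exhibit 5-6 can be sketched with the common normal-approximation sample-size formula n = ((z_alpha + z_beta) * CV / d)^2. That formula is a standard textbook form and an assumption here; the exact formulas appear in Appendix IV and Exhibit 4-4.

```python
# A sketch of the sample-size arithmetic behind Exhibit 5-6, steps 4-6. The
# normal-approximation formula assumed here may differ from Appendix IV.
import math
from scipy.stats import norm

def samples_required(cv, min_detectable_rel_diff, confidence=0.80, power=0.90):
    """Approximate number of samples to detect a relative difference, given a
    coefficient of variation (both expressed as fractions, e.g., 0.25)."""
    z_alpha = norm.ppf(confidence)  # one-sided Type I error quantile
    z_beta = norm.ppf(power)        # Type II error quantile
    return math.ceil(((z_alpha + z_beta) * cv / min_detectable_rel_diff) ** 2)

# Hypothetical stratum/analyte: CV of 25%, 20% minimum detectable relative difference.
print(samples_required(cv=0.25, min_detectable_rel_diff=0.20))  # -> 8
```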
Measurement Error
Measurement error is estimated using the results of duplicate
samples. The estimate represents the difference between the
reported values. This type of variation has four basic sources:
sample collection procedures, sample handling and storage
procedures, analytical procedures, and data processing procedures.
The use of duplicate samples to determine measurement
error is discussed in Section 5.5 under data review procedures.
Variability due to sampling can be estimated from field duplicates
or collocated samples if the sample is also analyzed as the
laboratory duplicate. Otherwise, the field duplicates determine
total within-batch measurement error, including analytical error.
The formula for computing the relative percent difference
(RPD) between duplicates is:

RPD = |R1 - R2| / ((R1 + R2)/2) x 100

where R1 and R2 are the results from the first and second
duplicate samples, respectively.
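A direct transcription of this formula, with a hypothetical duplicate pair:

```python
# A minimal sketch of the RPD computation for field or laboratory duplicates.
def relative_percent_difference(r1, r2):
    """RPD = |R1 - R2| / mean(R1, R2) * 100, per the formula above."""
    return abs(r1 - r2) / ((r1 + r2) / 2) * 100

# Hypothetical duplicate pair: 12.0 and 9.0 ug/L -> RPD of about 28.6%.
print(round(relative_percent_difference(12.0, 9.0), 1))  # 28.6
```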
Accuracy
Minimum requirements:
Spikes to assess accuracy of non-detects and positive sample
results if specified in the SAP.
Blanks to determine contamination.
Impact:
False negatives.
False positives.
Corrective action:
Consider re-sampling at affected locations.
Accuracy is controlled primarily by the analytical process and
is reported as bias. The bias of the sample design cannot be
determined since the true value of the chemicals of concern in
the exposure area can never be known. However, certain
sample designs described in Chapter 4 produce unbiased results
if followed.
The bias associated with the measurement process can be
estimated using field spikes, or field evaluation or audit samples,
to assess the accuracy and comparability of results. These
estimates will reflect the effects of sample collection and handling,
holding times, and the analytical process on the value of the
sample collected.
Bias is estimated for the measurement process by computing
the percent recovery for the spiked or reference compound as
follows:

Percent Recovery = (Measured amount - Amount in unspiked sample) x 100 / Amount spiked
Because of the inherent problems associated with the spiking
procedure and the interpretation of recovery, spikes are consid-
ered minimum requirements only if specified in the SAP.
Field matrix spikes are currently not recommended for use in
soils (EPA 1989c).
Field blanks are evaluated to estimate the potential bias caused
by contamination from sample collection, preparation, shipping,
and/or storage. Results for the analysis of field blanks
indicate whether contamination resulted in bias but are not
estimates of accuracy. Bias is computed as follows:

Percent Bias = (Measured amount) x 100 / Required detection limit
Blanks are of primary concern for the analysis of bias involved
in sampling because of the difficulty in performing field spikes
and the availability of appropriate reference standards and
matrix for evaluation samples.
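Both bias formulas transcribe directly; the values below are hypothetical:

```python
# Minimal sketches of the two bias formulas above; both assume the results
# and spike amounts are reported in the same units.
def percent_recovery(measured, unspiked, spiked):
    """Recovery of a spiked or reference compound, in percent."""
    return (measured - unspiked) * 100 / spiked

def percent_bias_from_blank(blank_measured, required_detection_limit):
    """Bias indicated by a contaminated field blank, in percent."""
    return blank_measured * 100 / required_detection_limit

# Hypothetical values: 8.5 measured, 0.5 native, 10 spiked -> 80% recovery;
# 2.0 found in the blank against a detection limit of 5.0 -> 40% bias.
print(percent_recovery(8.5, 0.5, 10.0), percent_bias_from_blank(2.0, 5.0))
```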
5.6.2 ASSESSMENT OF ANALYTICAL
DATA QUALITY INDICATORS
Determining the useability of analytical results for the risk
assessor begins with the review of quality control samples and
qualifiers. The review is used to determine an overall assess-
ment of analytical performance as determined by laboratory
and method performance. Note that it is more important to
evaluate the effect on the data than to determine the source of
the error. The data package is reviewed as a whole for some
criteria. Data are reviewed at the sample level for certain
criteria, such as holding time. Factors affecting the accuracy of
identification and the precision and accuracy of quantitation of
individual chemicals, such as calibration and recoveries, must
be examined analyte-by-analyte.
EXHIBIT 5-8
USE OF QUALITY CONTROL DATA FOR RISK ASSESSMENT

[Table relating quality control sample results (blanks; spikes with high or low recovery; calibration; internal standards with poor reproducibility, high recovery, or low recovery) to their effect on bias and the corresponding data use: accepting data as estimates, using data as upper or lower limits, qualifying data relative to a blank-based confidence level, or rejecting data. Footnotes: (1) a false negative is only likely if recovery is near zero; (2) the effect on bias is determined by examination of results for the individual analyte.]
The qualifiers used in the
review of CLP data are presented and the effect on data quality
is discussed in this section. Exhibit 5-8 presents a summary of
the quality control samples and the data use implications of
qualified data. Corrective action options are shown in Exhibit
5-1.
In environmental analysis, sample media can be more complex
than expected, as in the case of sludge or oily wastes, or can
contain interfering chemicals whose presence cannot be predicted,
affecting both precision and accuracy measurements. The risk
assessor must examine the reported precision and accuracy
data to determine useability. Ranges used for rejection and
qualification of CLP data have been determined based on the
analysis of target compounds in environmental media (soil and
water). These ranges, documented in "Laboratory Data Validation:
Functional Guidelines for Evaluating Organics/Inorganics
Analyses," can be used in the absence of specifications
in the planning documents (EPA 1988d, 1988e).
Completeness
Minimum requirements:
Percentage of sample completeness determined during
planning.
100% for critical samples (one sample per medium per
exposure pathway).
All data from critical samples considered crucial.
Impact:
Consequences generally decrease as the number of samples
increases.
Data for critical samples have significantly more impact
than incomplete data for non-critical samples.
For critical samples, decreased useability of data.
For non-critical samples, potential decrease in useability of
data.
Corrective action:
Determine whether the missing data are crucial to the risk
assessment (i.e., data from critical samples).
Resampling or sample re-analysis to fill data gaps.
The completeness for analytical data required for risk assess-
ment is defined as the number of chemical-specific data results
for an exposure area that are determined acceptable after data
review, expressed as a percent of the total:
Percent Completeness = (acceptable samples) x 100 / (total samples)
An analysis is considered complete if all data generated are
determined to be acceptable measurements as defined in the
SAP. Data for each analyte should be present for each sample.
In addition, data from quality control samples necessary to
determine precision and accuracy should be present. Quality
control samples and the effects of problems associated with
these samples are discussed in section 5.6.2.
Comparability
Minimum requirements:
The analytical methods used must have common analytical
parameters.
Same units of measure used in reporting.
Similar detection limits.
Equivalent sample preparation techniques.
Impact:
Increase in overall error.
Corrective action:
Preferentially use those data that provide the most defini-
tive identification and quantitation of the chemicals of
potential concern. For organic chemical identification,
GC-MS data are preferred over GC data generated with
other detectors. For quantitation, examine the precision
and accuracy data along with the reported detection limits.
Sample re-analysis using comparable methods.
The need to combine data from
different sampling events and/or
different analytical methods should be
anticipated.
Comparability is a very important qualitative data indicator for
analytical assessment and is a critical parameter when considering
the combination of data sets from different analyses for
the same chemicals of potential concern. The assessment
determines if the analytical results being reported are equivalent to
data obtained from similar analyses. Only comparable data sets
can readily be combined for the purpose of generating a single
risk assessment calculation.
The use of routine methods simplifies the determination of
comparability because all laboratories use the same standardized
procedures and reporting parameters. In other cases, the
risk assessor may have to consult with an analytical chemist to
evaluate whether different methods are sufficiently comparable
to combine data sets. The risk assessor should request
complete descriptions of non-routine methods. A preliminary
assessment can be made by comparing the analytes, useful
range, and detection limit of the methods. If different units of
measure have been reported, all measurements must be converted
to a common set of units before comparison.
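A minimal sketch of such a unit conversion follows; the conversion table is an assumption covering only the units in this example:

```python
# A minimal sketch of putting results in a common set of units before
# comparing data sets; the conversion table is an illustrative assumption.
TO_UG_PER_KG = {"ug/kg": 1.0, "mg/kg": 1000.0}  # soil concentration units

def to_common_units(value, unit):
    """Convert a soil concentration to ug/kg."""
    return value * TO_UG_PER_KG[unit]

# Hypothetical results reported by two laboratories in different units.
results = [(2.0, "mg/kg"), (410.0, "ug/kg")]
print([to_common_units(v, u) for v, u in results])  # [2000.0, 410.0]
```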
Representativeness
Minimum requirements:
As specified in the SAP.
Impact:
Inaccurate identification or estimate of concentration that
leads to inaccurate calculation of risk.
If a large portion of the data are rejected or if all data from
analyses of samples at a specific location are rejected, the
remaining data may no longer sufficiently represent the
site.
Corrective action:
For critical samples, re-analyses of samples or resampling
of the affected site areas. For non-critical samples, re-
analyses or re-sampling should be decided by the RPM in
consultation with the technical team.
If the re-sampling or re-analyses cannot be performed,
document in the site assessment report what areas of the site
are not represented due to the poor quality of analytical data.
Representativeness is determined by examining the sampling
plan, as discussed in Section 3.2. In determining the representativeness
of the data, the evaluator examines the degree to
which the data meet the performance standards of the method
and to which the analysis represents the sample submitted to the
laboratory. Analytical data quality affects representativeness
since data of low quality may be rejected for use in risk
assessment.
If there is too much variability in the analyses, the risk
assessor can use the larger sample result to set an upper
bound on the risk.
Precision is a measure of the repeatability of a single measure-
ment and is evaluated from the results of duplicate samples and
splits. The relative percent difference (RPD) between duplicates
is calculated using the following formula:

RPD = |R1 - R2| / ((R1 + R2)/2) x 100
Low precision can be caused by poor instrument performance,
inconsistent application of method protocols, or by a difficult,
heterogeneous sample matrix. The last effect can be distin-
guished from the others because it is unique to a single sample
or set of samples from the same location.
If split samples have been analyzed by different methods or
different laboratories, then data users have a measure of the
quality of individual techniques. Splits are particularly effec-
tive in cases when one laboratory is a reference laboratory. If
both sets of data exhibit the same problems, then laboratory
performance can usually be ruled out as a source of error. Splits
are also useful when using non-routine methods or comparing
results from different analytical methods.
Accuracy
Minimum requirements:
Use of methods (routine methods whenever possible) that
specify expected or required recovery ranges using spike or
other quality control measures.
As specified in the SAP.
No chemicals of potential concern detected in the blanks.
Impact:
Potential for false negatives. If spike recovery is low, it is
probable that the method or analysis is biased low for that
analyte and values in all related samples may underestimate
the actual concentration.
Potential for false positives. If spike recovery exceeds
100%, interferences may be present, and it is probable that
the method or analysis is biased high. Analytical results
overestimate the actual concentration of the spiked analyte.
Corrective action:
In many validated methods the percent recovery is used as
a correction factor in calculating the analyte concentration.
However, no correction factor is applied for CLP data.
If recoveries are extremely low or extremely high, the risk
assessor should consult with an analytical chemist to iden-
tify a more appropriate method for re-analysis of the samples.
Accuracy is a measure of overestimation or underestimation of
reported concentrations and is evaluated from the results of
spiked samples. Recoveries from spiked or performance evaluation
samples can be calculated using the following formula:

Percent Recovery = (measured amount - amount in unspiked sample) x 100 / amount spiked
The procedures will vary according to differences in the number
of measurements and the precision of the estimates. Data
that are not reported with confidence limits cannot be assigned
weights based on precision and should not be combined for use
(Taylor 1987).
Spiked samples are particularly useful in the analysis of com-
plex sample types because they help the reviewer determine the
extent of bias on the sample measurement. A set of standards
at known concentrations is mixed into a portion of the sample
or into distilled water prior to sample preparation and analysis.
The analytical results are compared to the amount spiked to
determine the level of recovery. It is important to note that
unless every sample is spiked, spike recoveries present only a
trend rather than a specific quantitative measure.
Results from blanks can be used to estimate the extent of high
bias in the event of contamination. The following procedures
should be implemented to prevent the assignment of false
positive values due to blank contamination:
If the field blanks are contaminated and the laboratory
blanks are not, the risk assessor can conclude that contami-
nation occurred prior to receipt of the samples by the
laboratory. If the contamination is significant (i.e., it will
interfere with the determination of risk), consider resam-
pling at affected locations.
If it is not possible to resample, the risk assessor must assess
the effect of the contamination on the potential for false
positives. Often, this determination can be made by exam-
ining data from samples located nearby. If all samples and
blanks show the same level of a particular chemical, the
presence of the chemical in the samples is most likely due
to contamination.
If the laboratory blanks are contaminated, the laboratory
should be required to rerun the associated analyses. This is
especially important in the case of critical analytes or
samples. Before reanalyses, the laboratory must demon-
strate freedom from contamination by providing results of
a clean laboratory blank. Note: if laboratory blanks are
contaminated, field blanks will generally also be contami-
nated.
If reanalysis is not possible, then the sample data must be
qualified. "Laboratory Data Validation: Functional Guide-
lines for Evaluation of Organics/Inorganics Analyses"
provides examples of blank qualification (EPA 1988d,
1988e). Chemicals detected in the associated samples
below the action level defined in the Functional Guidelines
are considered undetected.
Data qualifiers
All of the data generated by the Routine Analytical Services
(RAS) of the CLP are reviewed and qualified by regional
representatives according to the guidelines found in "Labora-
tory Data Validation: Functional Guidelines for Evaluation of
Organics/Inorganics Analyses" (EPA 1988d, 1988e) as modi-
fied to fit the requirements of the individual regions.
Data qualified "U" or "f are useable
| for risk assessment purposes.
Analytes qualified with a U are considered not detected. If data
precision and accuracy are good (as determined by the quality
control samples), data are entered in the data summary tables in
the data validation report as the SQL or corrected quantitation
limit (method detection limit corrected for sample factors such
as dilution and percent moisture), and qualified with a U. Note
that the same chemical can be reported undetected in a series of
samples at different concentrations because of sample differ-
ences. The use of zero, half the detection limit, or the corrected
method detection limit as reported in the data summary tables
will determine minimum, average, or maximum risk, respec-
tively.
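The effect of the three substitution conventions can be sketched as follows, using hypothetical detects and quantitation limits:

```python
# A minimal sketch of the substitution conventions described above for
# U-qualified (undetected) results: 0, half the detection limit, or the
# corrected detection limit yield minimum, average, or maximum risk estimates.
import numpy as np

detects = np.array([5.0, 8.0])           # detected concentrations, ug/L
nondetect_limits = np.array([2.0, 4.0])  # corrected quantitation limits for U results

for label, factor in [("minimum", 0.0), ("average", 0.5), ("maximum", 1.0)]:
    values = np.concatenate([detects, factor * nondetect_limits])
    print(f"{label}: mean = {values.mean():.2f}")
```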
Data qualified with an R are rejected because performance
requirements in the sample or in associated quality control
analyses were not met. For example, if a mass spectrometer
tune is not within specifications, neither the identification nor
quantitation of chemicals can be accepted with confidence.
Extremely low recoveries of a chemical in a spiked sample
might also warrant an R designation for that chemical in
associated samples because of the risk of false negatives (see
Appendix VI).
Data qualified with a J present a more complex issue. J-qualified
data are considered estimated because quantitation in
the sample or in associated quality control samples did not meet
specifications. The justification for qualifying the data is
explained in the validation report and, under draft revisions of
the functional guidelines, is proposed to be included on a
qualifier summary table submitted with the validation report.
Data can be biased high or low for qualification as estimated. The bias
can often be determined by examining the results of the quality
control samples used to qualify the data. For example, if
interfering levels of aluminum are found in inorganic analysis
of the interference check sample (ICS), the sample results are
probably biased high because the interference signal overlap is added
to the signal being reported. When volatile organic compounds
are qualified J for a holding time violation, the results are usually
biased low because some of the volatile compounds may have
evaporated during storage.
Data associated with contaminated blanks are not considered
estimated and are not flagged J. The presence of the blank
contaminant chemical in the analytical samples is questionable
at levels up to five to ten times those found in the blank,
depending on the nature of the analyte. An action level is
determined for each chemical based on the quantity found in the
blank, and data above the action level are accepted without
qualification. Data between the Contract Required Quantitation
(Detection) Limit (CRQL, CRDL) and the action level are
qualified U (undetected) because the confidence in the detection
is low due to blank contamination.
Estimated organics and inorganics data that are below the
CRQL or CRDL are qualified as UJ. This qualifier signifies
that the chemical is not detected but the precision of the
measurement is not good enough for confidence in the quanti-
tation limit. UJ is used for the same reasons as J but the
appropriate quantitation limit is reported rather than the amount
found.
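A sketch of this action-level rule follows. The five- and ten-times multipliers come from the text above; the particular list of common laboratory contaminants is an illustrative assumption, not a quotation of the Functional Guidelines.

```python
# A minimal sketch of the blank action-level rule described above: results at
# or below 5x the blank level (10x for common laboratory contaminants) are
# qualified U; results above the action level are accepted unqualified.
COMMON_LAB_CONTAMINANTS = {"acetone", "methylene chloride", "toluene",
                           "2-butanone", "phthalates"}  # assumed list

def qualify_against_blank(chemical, result, blank_level):
    multiplier = 10 if chemical in COMMON_LAB_CONTAMINANTS else 5
    action_level = multiplier * blank_level
    if result > action_level:
        return result  # accepted without qualification
    return f"{result} U (below action level {action_level})"

print(qualify_against_blank("benzene", 12.0, 2.0))  # 12 > 10 -> accepted
print(qualify_against_blank("acetone", 12.0, 2.0))  # 12 <= 20 -> qualified U
```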
Other qualifiers may be added to the analytical data by the
laboratory. A set of qualifiers (or flags) has been defined by the
CLP for use by the laboratories to denote problems with the
analytical data. These qualifiers and their potential use in risk
assessment are discussed in RAGS (EPA 1989a).
5.6.3 COMBINING THE ASSESSMENT OF
SAMPLING AND ANALYSIS
Once the quality of the sampling and analysis effort has been
assessed using the five data quality indicators, the problem
becomes one of combining the results to determine the overall
assessment of a particular indicator across sampling and analysis.
Combining the assessment for completeness, comparability,
and representativeness is discussed in this section as a
qualitative procedure. Statistical models are available for
combining data sets with different variability and bias. The risk
assessor should consult a chemist or statistician if the magnitude
of the sampling and analysis effort warrants the use of a
formal statistical treatment of comparability.
The basic model for estimating total variability across sam-
pling and analysis components is presented in Exhibit 5-9. A
non-statistical approach to combining the assessment results is
suggested in Exhibit 5-10. Using this approach, each data
quality indicator is assessed to determine whether a problem
exists in either sampling or analysis. This assessment leads to
different combinations of problem determination. For example,
completeness may have been a problem in sampling
[yes] but not a problem in analysis [no]; the combination is
[yes/no].
EXHIBIT 5-9
BASIC MODEL FOR ESTIMATING TOTAL VARIABILITY
ACROSS SAMPLING AND ANALYSIS COMPONENTS

σ²(t) = σ²(m) + σ²(p)

where σ(t) = total variability
σ(m) = measurement variability
σ(p) = population variability

σ²(m) = σ²(s) + σ²(h) + σ²(prep) + σ²(a) + σ²(b)

where σ(s) = sampling variability (standard deviation)
σ(h) = handling, transportation and storage variability
σ(prep) = preparation variability (subsampling variability)
σ(a) = laboratory analytical variability
σ(b) = between-batch variability

NOTE: It is assumed that the data are normally distributed or that a
normalizing data transformation has been performed.

Source: EPA 1990b
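Because the components in Exhibit 5-9 add as variances, a combined standard deviation can be sketched as below; the component values are hypothetical:

```python
# A minimal sketch of the Exhibit 5-9 model: variance components add, so the
# total standard deviation is the root of the summed squared components.
import math

def total_std(population_sd, sampling_sd, handling_sd, prep_sd,
              analytical_sd, batch_sd):
    measurement_var = sum(sd ** 2 for sd in
                          (sampling_sd, handling_sd, prep_sd, analytical_sd, batch_sd))
    return math.sqrt(population_sd ** 2 + measurement_var)

# Hypothetical standard deviations (same concentration units throughout).
print(round(total_std(4.0, 2.0, 0.5, 0.5, 1.0, 0.5), 2))  # 4.66
```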
EXHIBIT 5-10
COMBINING DATA QUALITY INDICATORS FROM
SAMPLING AND ANALYSIS INTO A SINGLE
ASSESSMENT OF UNCERTAINTY

For each data quality indicator (completeness, comparability, representativeness, precision, and accuracy), the assessment determines whether a problem exists in sampling (YES or NO) and in analysis (YES or NO). The combined sampling and analytical determination for each indicator is therefore YES/YES, YES/NO, or NO/YES.

The combination NO/NO indicates that the data quality indicator will
not affect the level of uncertainty in data useability.
Once assessment patterns based on the determination of a problem
have been established, some basic guidance is given on the
combinations. This guidance is qualitative in nature and is presented
only to assist in organizing the data assessment problem
for the application of professional judgment. If the assessment
pattern is [no/no], the issue of combining results is not a
problem. Conversely, if the pattern is [yes/yes], the issue of
combining results is an issue of the effects of the combined
magnitudes. Instances of combined sampling and analysis
problems for a single indicator will have significant effects on
the risk assessment uncertainty. The most complicated assessment
pattern to interpret is encountered when a problem occurs
in one area but not in another (e.g., in sampling but not in
analysis). This situation is briefly discussed for each indicator
in the following sections.
Completeness
An incomplete sample can be considered incomplete for all
analytes, while analytical incompleteness is usually related to
particular analytes. In the instance of a completeness problem
[yes/yes], the effect on the risk assessment will vary according
to chemical. For some chemicals, the data points will be lost in
both sampling and analysis.
The effects of a loss in the number of sample points for a
particular chemical could be as much as 50%. For example, if
collection of 10 samples was planned and one sample could not
be collected because of site access problems, one was broken
in transport, and the laboratory experienced analysis problems
with three samples for the chemical of potential concern
causing the data to be rejected, then only five data points
remain.
If the assessment pattern is [yes/no], the effects are distributed
across all chemicals involved in the risk assessment. If the
pattern is [no/yes], the effects are localized to the particular
chemical affected.
Comparability
Comparability problems in sampling are primarily due to
different sample designs and time periods. Seasonal \,aruiiior,s
are treated like spatial variations since the risk assessment is
calculated as risk over time. Data can be as eraged and consid-
ered as a single data set. For analytical data, comparability
problems arc related primarily to the use of different methods
and laboratories. A pattern of [yes/yes] w ill indicate that the
risk assessor will have considerable difficulty mcombmmg the
various data SCLS into a single assessment of risk. In situations
of [yes/no] and (no/yes], the problem of sampling comparabil-
ity is more difficult to resolve. Models exist lor determining
comparability between methods and integrating results across
laboratories. These models involve the general statistic a!
approach to confirming data sets with dilferent but kno'.ui
variability and bias (Ta\ lor 19S7)
Representativeness
Representativeness is critical to the risk assessment in the
sampling area. Non-representativeness affects both false
negatives (chemicals not identified) and estimates of concen-
tration magnitudes and, therefore, affects estimates of reason-
able maximum exposure (RME). Analytical representative-
ness involves the question of whether the analysis results
represent the sample collected. For example, holding times and
sample preservation can cause the analytical results not to be
representative of the sample collected. These questions should
be treated separately in the discussion of effects.
Precision
Sampling variability typically overshadows variability intro-
duced by the measurement process, which includes analytical
variability. If precision is a problem in both sampling and
analysis, the risk assessor should focus on the impact of
sampling variability on the estimate of RME and the resulting
confidence limits. Unless the sampling variability is low and
the analytical variability very high, the effects of analytical
variability will be minimal in comparison to the effects of
sampling variability.
Accuracy
The assessment of accuracy with regard to sampling is focused
primarily on recoveries from spiked or performance evaluation
samples. Blank contamination can indicate the likelihood of
false positives. For analysis, both blank contamination and
analytical performance are reflected by spike recoveries. If the
pattern is [yes/yes] for accuracy, this may require identifying
blank contaminants and integrating the identification of con-
tamination across field and laboratory blanks.
If the accuracy pattern is [no/yes], then the issue is the analyti-
cal performance. Low variability in sampling as measured by
low coefficients of variation for chemicals of potential concern
should increase the risk assessor's concern over an analytical
accuracy problem. High sampling variability (CV>25) will
greatly reduce the real effects of analytical bias on the level of
certainty of the risk assessment.
Chapter 6
Application of Data to Risk Assessments

Provides procedures to determine the uncertainty of the analytical data.

Explains how to distinguish site from background levels of contamination and determine presence (absence) of contamination.

Discusses how to characterize exposure pathways.
ACRONYMS FOR CHAPTER SIX

RAGS  Risk Assessment Guidance for Superfund
SAP   Sampling and Analysis Plan
SOP   Standard Operating Procedure
6.0 APPLICATION OF DATA TO
RISK ASSESSMENT
This chapter provides guidance for integrating the assessment
of data useability to determine the overall level of confidence
of the completed risk assessment. This guidance builds on each
of the previous chapters in this manual.
Chapter 2 explained the risk assessment process and the
roles and responsibilities of key participants. Exhibit 2-3
defined a continuum of level of certainty in the baseline risk
assessment result based on the ability of the risk assessor to
quantify or qualify the level of confidence associated with
the analytical data.
Chapter 3 defined six data useability criteria and examined
preliminary issues that must be considered during sampling
and analysis planning to increase the certainty of the ana-
lytical data collected for the risk assessment.
Chapter 4 presented strategies for planning sampling and
analysis activities based on the six data useability criteria.
Chapter 5 described how to use each data useability crite-
rion in a separate assessment phase to determine the effect
of sampling and analysis problems on data quality and on
the useability of the data in the baseline risk assessment.
The Data Useability Worksheet (Exhibit 5-4) was designed to
assist the risk assessor in summarizing the determination of
data quality across the various assessment phases. The work-
sheet forms the basis for this chapter's discussion of the impact
of the quality of the analytical data on the level of confidence
of the risk assessment.
6.1 Assessment of Level of
Certainty Associated with the
Analytical Data
This section explains how to assess the level of confidence in
sampling and analytical procedures in the context of the four
major decisions to be made by the risk assessor with environmental
analytical data. Exhibits in this section apply the data
useability criteria, defined in Chapter 3 and appearing on the
Data Useability Worksheet, to the four decisions. The data
useability criteria affect the level of confidence involved in
each decision. Accordingly, the level of certainty in the data
collection and evaluation component of the risk assessment
will affect the overall certainty of the risk estimate.
6.1.1 WHAT CONTAMINATION IS
PRESENT AND AT WHAT LEVELS?
The risk assessor's first task is to use the analytical data to
determine what contamination is present at the site and at what
levels (i.e., what potential exists for increased risk from this
contamination). Exhibit 6-1 lists the criteria from the Data
Useability Worksheet that affect this decision. The most
critical question to be answered about the analytical data before
calculating the risk is the probability of false positives or false
negatives in the data. Risk assessors are concerned primarily
with false negatives because their occurrence causes the assess-
ment of risk to be biased low. False positives cause the
calculated risk to be biased high.
The major concern with false negatives
is that the decision based on the risk
assessment may not be protective of
human health.
Probability of false negatives
False negatives occur when chemicals of potential concern are
present but are not detected by the analytical method or the
EXHIBIT 6-1
DATA USEABILITY CRITERIA AFFECTING CONTAMINATION PRESENCE

Worksheet Reference   Data Useability Criterion
1    Reports to risk assessor
2B   Documentation (SOPs)
2C   Documentation (analytical records)
3A   Data sources (analytical)
4    Analytical methods
5    Data review
6A   Completeness (analytical)
6C   Representativeness (sampling)
6D   Precision (analytical)
6E   Accuracy (sampling and analytical)
The following parameters from the Data
Useability Worksheet can be used to determine the probability
of false negatives: analytical methods, data review, sampling
completeness, sampling representativeness, analytical completeness,
analytical precision and accuracy, and combined
error.
False negatives can occur if sampling
is not representative, if detection limits
are above concentrations of concern,
or if spike recoveries are very low.
Sampling contributes to the probability of false negatives if too
few samples were taken or if sections of the site were not
sampled. If sampling of any exposure pathway was not
representative, the probability of false negatives increases.
Knowing the analyte-specific detection limits is critical to the
determination of the probability of false negatives. Recovery
values from spikes, internal standards, surrogates, and system
monitoring compounds are used to assess the level of accuracy
and precision in the laboratory data and determine whether the
detection limits stated in the analytical methods have been met
by the laboratory.
If the concentration of concern is at or below the detection
limit for any analyte, the probability of false negatives for
that analyte is high. This probability should have been
documented during planning if no analytical methods were
found with detection limits below the concentration of
concern.
If the spike recoveries are acceptable or biased high as
documented during the data review process and the detec-
tion limits are below the concentration of concern for each
analyte, the probability of false negatives is low.
If the spike recoveries are biased low and the detection
limits are below the concentration of concern for each
analyte, the probability of false negatives is directly related
to the amount of bias. The effect is more pronounced the
closer the concentration of concern is to the detection limits.
The possibility of false negatives should be carefully evalu-
ated whenever samples have been highly diluted (i.e.,
diluted beyond normal method specifications).
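These bullets reduce to a rough rule of thumb, sketched below; the category labels and branching logic are illustrative assumptions distilled from the text, not guidance language:

```python
# A rough rule-of-thumb classifier distilled from the bullets above; the
# thresholds and category names are illustrative assumptions.
def false_negative_risk(detection_limit, concern_level, spike_bias_low):
    """Qualitative probability of false negatives for one analyte."""
    if concern_level <= detection_limit:
        return "high"                       # concern level at or below detection limit
    if spike_bias_low:
        return "related to amount of bias"  # worse as concern level nears the limit
    return "low"

print(false_negative_risk(detection_limit=5.0, concern_level=4.0,
                          spike_bias_low=False))  # high
print(false_negative_risk(detection_limit=1.0, concern_level=10.0,
                          spike_bias_low=False))  # low
```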
Probability of false positives
False positives occur when a chemical of concern is not present
at the site but is detected by the analytical method. Assessment
of the following parameters from the Data Useability Worksheet
can be used to determine the probability of false positives:
analytical methods, data review, sampling accuracy, analytical
completeness, analytical precision and accuracy, and com-
bined error.
False positives can occur when blanks
are contaminated or spike recoveries
are very high.
Sampling and analysis uncertainties connected with false posi-
tives can be assessed by examining the results of quality control
samples. Blank contamination is the most important indicator
for determining the probability of false positives, particularly
when accompanied by high spike recoveries. As described in
Chapter 5, samples can be contaminated during sampling,
storage, or analysis, resulting in false positives. Field and
laboratory blanks identify this problem by determining the
level and point of contamination. Sample matrix interferences
also cause false positives. High spike recoveries indicate that
matrix interference has occurred.
If a chemical of potential concern has been detected in any
of the blanks, the probability of false positives associated
with that analyte is high. False positives should be sus-
pected for any sample value less than five times the blank
concentration (ten times for common laboratory contami-
nants). High spike recoveries compound problems with
blank contamination and increase the likelihood of false
positives.
If chemicals of potential concern are detected in the blanks
and spike recoveries for any analyte are biased high, the
probability of false positives for that analyte is directly
related to the amount of bias. The probability of false
positives is highest when the reported concentration is near
the detection limit for an analyte.
If chemicals of potential concern have not been detected in
any of the blanks and spike recoveries are not biased high,
the probability of false positives is low.
6.1.2 ARE SITE CONCENTRATIONS
SUFFICIENTLY DIFFERENT FROM
BACKGROUND?
Background samples serve as a baseline measurement to determine
the degree of contamination. Background samples are
collected and analyzed for each medium of concern in the same
manner as other site samples. They require the same degree of
quality control and data review. Background samples differ
from other samples because the sampling location, as defined
in the sampling and analysis plan (SAP), is intended to be in an
area that has not been exposed to the source of contamination.
Exhibit 6-2 lists the criteria from the Data Useability Worksheet
that affect this decision.

As part of the risk assessment process, the risk assessor must
determine if the samples collected as background samples are
actually uncontaminated.
EXHIBIT 6-2
DATA USEABILITY CRITERIA AFFECTING
BACKGROUND LEVEL COMPARISON

Worksheet Reference   Data Useability Criterion
1    Reports to risk assessor
2A   Documentation (SAP)
3A   Data sources (analytical)
6A   Completeness (sampling)
6B   Comparability (analytical)
6D   Precision (analytical)
6E   Accuracy (sampling and analytical)
If chemicals of potential concern are
not found in the background samples, the entire data collection
process will be simplified. If chemicals of potential concern are
found in the background samples, the risk assessor must determine
whether they are at naturally occurring levels, are of
anthropogenic origin, are due to contamination during the
sampling process, or are site contaminants.
Both naturally occurring chemicals and anthropogenic chemi-
cals have significance for risk assessment. Naturally occurring
chemicals are those expected at a site in the absence of human
influence. Metals are naturally occurring chemicals that are
often included in risk analysis. They are generally present in
varying concentrations depending on the medium. For ex-
ample, soils of high organic content, such as humus, would
have a low concentration of metals by weight, while soils with
a high clay content would contain higher metal levels. Anthro-
pogenic chemicals are defined in RAGS (EPA 1989a) as
concentrations of chemicals that are present in the environment
due to man-made, non-site sources (e.g., industry, automo-
biles). Chemicals of anthropogenic origin include organic
compounds such as phthalates (plasticizers), DDT, or polycy-
clic aromatic hydrocarbons and inorganic chemicals such as
lead (from automobile exhaust).
If chemicals of potential concern are found in background
samples, they should not be considered naturally occurring.
They can be present because they are either site contaminants
or are of anthropogenic origin. They also could be a
result of contamination during sampling.
If chemical concentrations in the background samples fall
within naturally-occurring levels and there is no risk asso-
ciated with those levels, the risk assessor may eliminate the
chemicals from the risk assessment calculations.
If chemical concentrations in the background samples are
higher than naturally occurring levels, it is an indication of
contamination, either site-specific or anthropogenic in
nature. The risk assessor may include the analytical data
with other site data or perform a separate risk assessment
based on best professional wdgment.
Anthropogenic chemicals should not be eliminated from
the risk assessment.
Statistical analysis may be necessary to determine if site
levels are distinctly different from those found in the
background samples.
Statistical analysis may also be necessary in those cases
where chemicals of potential concern are detected in site
samples at very low concentrations. It is difficult to
distinguish a difference between background and site sample
concentrations at levels close to the detection limit.
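One possible form of such a statistical comparison is sketched below; the one-sided Mann-Whitney test is a common nonparametric choice and an assumption here, not a method prescribed by this guidance:

```python
# A minimal sketch of a statistical site-versus-background comparison using
# a one-sided nonparametric test (an assumed choice, not prescribed here).
import numpy as np
from scipy.stats import mannwhitneyu

site = np.array([12.0, 15.0, 9.0, 22.0, 14.0, 18.0])  # hypothetical mg/kg
background = np.array([8.0, 6.0, 11.0, 7.0, 9.0, 10.0])

# One-sided test: are site concentrations higher than background?
result = mannwhitneyu(site, background, alternative="greater")
print(f"p-value = {result.pvalue:.3f}")  # small p -> site exceeds background
```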
Statistical analysis may be able to
determine if site concentrations are
significantly above background
concentrations when the differences
are not obvious.

6.1.3 ARE ALL EXPOSURE PATHWAYS
IDENTIFIED AND EXAMINED?
The identification and examination of exposure pathways is
discussed in detail in RAGS (EPA 1989a). Exhibit 6-3 summa-
rizes the criteria that the risk assessor must assess to determine
the probable level of certainty that all pathways have been
identified and examined.
EXHIBIT 6-3
DATA USEABILITY CRITERIA AFFECTING
EXPOSURE PATHWAY EXAMINATION

Worksheet Reference   Data Useability Criterion
1    Reports to risk assessor
2A   Documentation (SAP)
3B   Data sources (non-analytical)
6A   Completeness (sampling)
6B   Comparability (sampling)
The nature of the pathways to be examined is critical to the
selection of a sample design and analytical methods. If the
exposure pathways are not identified properly, the resulting
characterization will be inappropriate. The risk assessor should
determine which pathways are not adequately represented and
determine the effect on the risk assessment if those pathways
are excluded from study.
from study.
If additional samples can be collected to include the inade-
quately represented exposure pathway in the risk assess-
ment, the risk assessor should recommend their acquisition.
(Sampling considerations presented in Chapter 3 of this
manual should be re-examined.)
If additional samples cannot be collected from an inade-
quately represented pathway, the risk assessor should in-
vestigate whether computer simulation modeling is fea-
sible. For example, if the contamination of the soil and
water at the site is fully characterized but no air samples
were obtained, air flow models could be used to estimate
transport of volatile contaminants.
If additional samples cannot be collected from an inadequately
represented pathway and no simulation models are appropriate,
the risk assessor should note in the report that the risk could
not be determined for that pathway or use simple chemical/physical
relationships to estimate the exposure. For example, equilibrium
partition coefficients can be used to estimate movement in the
vadose zone of soil if insufficient data exist to calibrate a
groundwater transport model, as illustrated in the sketch below.
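As a minimal illustration of such a partitioning estimate, the following computes a retardation factor from an assumed organic-carbon partition coefficient. All property values are hypothetical; site-specific measurements would be required in practice.

# Hypothetical screening estimate of contaminant retardation in the
# vadose zone using an equilibrium partition coefficient.
K_oc = 360.0    # organic carbon-water partition coefficient (L/kg), assumed
f_oc = 0.01     # fraction organic carbon in soil, assumed
rho_b = 1.6     # soil bulk density (kg/L), assumed
theta = 0.30    # volumetric water content, assumed

K_d = K_oc * f_oc                  # soil-water partition coefficient (L/kg)
R = 1.0 + (rho_b / theta) * K_d    # retardation factor (dimensionless)

pore_water_velocity = 50.0         # cm/yr, assumed infiltration-driven flow
contaminant_velocity = pore_water_velocity / R

print(f"Kd = {K_d:.2f} L/kg, retardation factor R = {R:.1f}")
print(f"Estimated contaminant velocity: {contaminant_velocity:.1f} cm/yr")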
6.1.4 ARE ALL EXPOSURE PATHWAYS
FULLY CHARACTERIZED?
Assessing how well exposure pathways have been character-
ized involves evaluation of completeness, comparability, and
representativeness across analytical and sampling data quality
indicators. Exhibit 6-4 lists the criteria from the worksheet that
affect this decision. To be fully characterized, the exposure
pathway must have been appropriately sampled. Broad-spec-
trum analyses also must have been conducted for the media of
concern and analyte-specific methods used where appropriate.
The uncertainty in the data collection and analysis depends on
the evaluation of completeness, comparability and representa-
tiveness as discussed in Section 5.6.

EXHIBIT 6-4
DATA USEABILITY CRITERIA AFFECTING
EXPOSURE PATHWAY CHARACTERIZATION

Data Collection and Evaluation Decision

Worksheet Reference    Data Useability Criterion
1                      Reports to risk assessor
2A                     Documentation (SAP)
2B                     Documentation (SOPs)
2C                     Documentation (field records)
3A                     Data sources (analytical)
3B                     Data sources (non-analytical)
6A                     Completeness (sampling and analytical)
6B                     Comparability (sampling and analytical)
6C                     Representativeness (sampling and analytical)
6D                     Precision (sampling)

Based on these indicators,
the risk assessor should determine the magnitude of the effect
of analytical data uncertainty on the risk assessment.
If the uncertainty associated with the data for an exposure
pathway is not significant, the risk assessor should use the
data and note in the report the high level of certainty
associated with assessment of the affected exposure path-
way.
If the uncertainty associated with the data for any exposure
pathway is significant but does not warrant resampling and
reanalysis, statistical procedures may be necessary to inter-
pret the data.
If the uncertainty associated with the data is high, the risk
assessor may determine that an exposure pathway is not
fully characterized.
6.2 Assessment of Uncertainty
Associated With the Baseline
Risk Assessment for Human
Health
The level of certainty in making each of the four decisions
discussed in Section 6.1 contributes to the overall uncertainty
in the data collection and analysis component of the risk
assessment. The critical factor in assessing the effect of
uncertainty on the environmental analytical data component of
the risk assessment is not that uncertainty exists, but rather that
the risk assessor be able to qualify and/or quantify the uncer-
tainty. The certainty levels for risk assessment represented in
Exhibit 6-5 are based on the ability to quantify the uncertainty
in analytical data collection and evaluation. Data collection
and evaluation, however, comprise only one source of uncer-
tainty in the risk assessment. Other components of the risk
assessment process, such as toxicity of chemicals and exposure
assumptions, influence the four decisions to be made and
contribute significantly to the uncertainty of the baseline risk
assessment.
The most quantitative level of risk assessment occurs when the
uncertainty in the data can be determined quantitatively. The
next level occurs when the uncertainty can be determined
qualitatively, or the impact of the uncertainty is assessed using
sensitivity analysis. The least desirable situation occurs when
the uncertainty in the data is unknown. This situation can occur
if the minimum requirements given in Chapter 5 for the data
useability criteria have not been achieved.
The primary planning objective is that
uncertainty levels are acceptable,
known and quantifiable, not that
uncertainty be eliminated.
EXHIBIT 6-5
UNCERTAINTY IN DATA COLLECTION AND EVALUATION DECISIONS
AFFECTS THE CERTAINTY OF THE RISK ASSESSMENT

[This exhibit is a matrix relating the four data collection and evaluation decisions (What contamination is present and at what levels; Are site concentrations sufficiently different from background; Are all exposure pathways identified and examined; Are all exposure pathways fully characterized) to the steps of the risk assessment process (Data Collection and Evaluation; Exposure Assessment; Toxicity Assessment; Risk Characterization) and to the resulting nature of the risk assessment: Quantitative (uncertainty explicitly stated); Quantitative (uncertainty not known); or Qualitative (no uncertainty estimate).]
-------
GLOSSARY
Accuracy. A measure of the closeness of an observed concen-
tration to the true value.
Aliquot. A measured portion of a sample or extract taken for
analysis.
Analyte. One of the chemicals or chemical species for which
a sample is analyzed.
Anthropogenic Background Levels. Concentrations of chemi-
cals that are present in the environment due to human-made,
non-site sources (i.e., industry, automobiles).
Background Sample. A sample taken from a location where
chemicals present in the ambient medium are assumed due to
natural sources.
Bias. A measure of overestimation or underestimation of
reported values.
Biased Sampling. A sampling plan in which the data obtained
may be systematically different from the true mean. Bias in
sampling is caused by systematic error in data location, such as
clustering data points.
Blank. A clean sample that has not been exposed to the
analyzed sample stream in order to monitor contamination
during sampling, transport, storage, or analysis.
Broad Spectrum Analysis. An analytical procedure capable of
providing identification and quantitation of a wide variety of
potential chemicals.
Calibration. The comparison of a measurement standard or
instrument with another standard or instrument to report or
eliminate, by adjustment, any variation (deviation) in accuracy
of the item being compared. The levels of calibration standards
should bracket the range of levels for which actual measure-
ments are to be made.
Chemical of Potential Concern. Chemical initially identified or
suspected to be present at a site that may be hazardous to human
health.
Chromatogram. The plot of a detector response as a function
of time that represents the separation of compounds during
analysis.
Classical Statistics. The theory of statistics that assumes that
data points are mutually independent. Classical methods
consider that one point is not related to another.
Co-elution. When the time of release of two or more analytes
from the column of a gas chromatograph cannot be distin-
guished.
Co-extractable. When two or more analytes are released from
the matrix under sample preparation conditions.
Coefficient of Variation. A measure of relative dispersion
(precision) used in parametric statistics. It is equal to the
standard deviation divided by the mean, multiplied by 100 and
is expressed as a percentage.
Collocated Sample. Independent samples that are equally
representative of the parameters of interest at a given point in
space and time.
Comparability. A measure of the equivalence of data.
Completeness. A measure of the amount of useable data
resulting from a data collection activity, given the sample
design and analysis.
Composite Sample. A sample that is combined from several
sampling locations in order to reduce cost and provide an
estimate of the mean of the population from which the samples
are drawn. No estimate of the variance of the mean, and hence,
the precision with which the mean is estimated can be obtained
from a composite sample.
Compound Class. A group of organic compounds that are
structurally related.
Confidence. A measure of the probability of taking action
when action is required.
Concentration of Concern. A site-specific level of concentra-
tion that the risk assessor determines to be of concern; may be
health-based, required by statute or of environmental signifi-
cance.
Contract Laboratory Program (CLP). Analytical program
developed for analysis of Superfund site samples to provide
analytical results of known quality, supported by a high level of
quality assurance and documentation.
Contract Required Quantitation Limit (CRQL). The chemical-
specific quantitation levels that the CLP requires to be routinely
and reliably quantitated in specified sample matrices.
Data Assessment. The determination of the quantity and
quality of data.
Data Quality Indicator (DQI). A performance measure for
sampling and analytical procedures.
Data Quality Objectives (DQOs). Qualitative and quantitative
statements that specify the quality of the data required to
support decisions. DQOs are determined based on the end use
of the data to be collected.
Data Review. The evaluation process that determines the
quality of reported analytical results. It involves examination
of raw data (i.e., instrument output) and quality control and
method parameters by a professional with knowledge of the
tests performed.
Data Validation. CLP specific evaluation process that exam-
ines adherence to performance based acceptance criteria as
outlined in the Functional Guidelines for Evaluating Organics
or Inorganics Analyses.
Data Useability. The process of assuring or determining
whether the quality of the data generated meets the intended
use.
Detection Limit. The minimum concentration or weight of
analyte that can be detected by a single measurement with a
known confidence level.
Digestion. The application of acid and heat to a solution or
suspension to bring metals into solution for elemental (inorgan-
ics) analysis.
Dilution. Adding solvent to a sample, with analyte concentra-
tion higher than the standard calibration curve, to bring the
analyte concentration into a quantifiably measurable range.
Dissolved Metals. Metals present in solution rather than sorbed
on suspended particles.
Dose-Response Evaluation. The process of quantitatively
evaluating toxicity information and characterizing the relation-
ship between the dose of a contaminant administered or re-
ceived and the incidence of adverse health effects in the
exposed populations.
Duplicate. Two samples taken from the same source at the
same time and analyzed under identical conditions.
Exposure Pathway. The course of a chemical or physical agent
from a source to an exposed organism. Each exposure pathway
includes a release from a source, an exposure point, and an
exposure route.
Extractable Organics. Compounds that can be partitioned into
an organic solvent from the sample matrix and that are ame-
nable to gas chromatography (CLP designation).
Extraction. The process of releasing compounds from a sample
matrix.
False Negative (Type II or beta error). A statement that a
substance is not present when the substance is present.
False Positive (Type I or alpha error). A statement that a
substance is present when it is not.
Geostatistics. A theory of statistics that recognizes observed
concentrations as dependent on one another and governed by
physical processes. Geostatistical methods consider the loca-
tion of data and the size of the site for calculations.
Heterogeneous Distribution. A sample property that is unevenly
distributed in the population.
Historical Data. Data collected before the remedial investiga-
tion.
Holding Time. The length of time from the date of sampling to
the date of analysis. CLP designates the holding time as the date
from laboratory receipt of sample until date of analysis.
Homogeneous Distribution. A sample property that is evenly
distributed over the population.
Hot Spot. Location of substantially higher concentration of a
chemical of concern than in surrounding areas of a site.
Hydrocarbon. An organic compound composed of carbon and
hydrogen.
Identification. Confirmation of the presence of a specific
compound or analyte in a sample.
Instrument Detection Limit (IDL). The lowest amount of a
substance that can be detected by an instrument without correc-
tion for the effects of sample matrix, handling and preparation.
Intake Estimate. A measure of exposure expressed as the mass
of a substance in contact with the exchange boundary per unit
body weight per unit time.
Integrated Risk Information System (IRIS). An EPA database
containing verified RfDs, slope factors, up-to-date health risks
and EPA regulatory information for numerous chemicals. IRIS
is EPA's preferred source for toxicity information for Super-
fund.
Internal Standard. A compound added to organic samples and
blanks at a known concentration prior to analysis. It is used as
the basis for quantitation of target compounds.
Judgmental Sampling. The process of locating sampling points
based on the investigator's judgment of where the sample
should be taken.
Kriging. A procedure utilizing the covariance function that
determines an acceptable spacing for sampling locations on a
square grid.
Limit of Detection (LOD). The concentration of a chemical
that has a 99 percent probability of producing an analytical
result above zero using a specific method.
Limit of Quantitation (LOQ). The concentration of a chemical
that has 99 percent probability of producing an analytical result
above the LOD. Results below LOQ are not quantitative.
Linearity. The agreement between an actual instrument read-
ing and the reading predicted by a straight line drawn between
calibration points that bracket the reading.
Lowest-Observable-Adverse-Effect-Level (LOAEL). In dose-
response experiments, the lowest exposure level at which there are
statistically or biologically significant increases in frequency
or severity of adverse effects between the exposed population
and its appropriate control group.
Mass Spectrum. A characteristic pattern of ion fragments of
different masses resulting from analysis that can be compared
with a mass spectral library for analyte identification.
Matrix/Medium. The predominant material comprising the
sample to be analyzed (e.g., drinking water, sludge, air).
Measurement Error. The difference between the true sample
value and the observed value.
Method Detection Limit (MDL). The detection limit that takes
into account the reagents, sample matrix, and preparation steps
applied to a sample in specific analytical methods.
Minimum Detectable Difference. Percent difference between
two concentration levels that can be detected in analyses.
Naturally Occurring Background Levels. Ambient concentra-
tions of chemicals that are present in the environment in the
absence of human intervention (e.g., aluminum, manganese).
No-Observed-Adverse-Effect-Level (NOAEL). In dose re-
sponse experiments, an exposure level at which there are no
statistically or biologically significant increases in the fre-
quency or severity of adverse effects between the exposed
population and its appropriate control.
Noise. The random errors of observation and other uncontrol-
lable effects that are not related to the presence of the analyte
being measured.
Nonparametric. Statistical equations that assume the data set is
not normally distributed.
Normal Distribution. A probability density function that ap-
proximates the distribution of many random variables and has
the form generally called the "bell-shaped curve."
Null Hypothesis. For risk assessment, statistical hypothesis
that states on-site contaminant concentrations are not higher
than background.
Parameter. A specified component of a procedure or method.
Parametric. Statistical equations that assume the data set is
normally distributed.
Particulate. Solid material suspended in a fluid (air or water)
medium.
Performance Evaluation Sample. A sample of known compo-
sition provided for laboratory analysis to monitor laboratory
and method performance.
Power. A measure of the probability of taking no action when
no action is required.
Precision. A measure of the reproducibility or variability of a
measurement under a given set of conditions.
Preservation. The sample treatment for maintaining represen-
tative sample properties.
Qualifier. Acode appended to an analytical result that indicates
possible qualitative or quantitative uncertainty in the result.
Qualitative. An analysis that identifies an analyte in a sample
without numerical certainty.
Quantitative. An analysis that gives a numerical level of
certainty to the concentration of an analyte in a sample.
Random Sampling. The process of locating sample points
randomly within a sampling area.
Reasonable Maximum Exposure (RME). The highest expo-
sure that is reasonably expected to occur at a site.
Recovery. A determination of the accuracy of the analytical
procedure made by comparing measured values for a spiked
sample against the known spike values.
Reference Dose (RfD). EPA's preferred toxicity value for
evaluating noncarcinogenic effects resulting from exposures at
Superfund sites.
Representativeness. The extent to which data measure the
objectives of the data collection.
Resolution. The degree of difference between two measure-
ments.
Retention Index. Retention time data specific to an analytical
gas chromatography column compared with retention times of
standards.
Retention Time. The length of time that a compound is retained
on an analytical column (GC, HPLC, IC).
Routine Method. A method issued by an organization with
appropriate responsibility. A routine method has been vali-
dated and published and contains information on minimum
performance characteristics.
Sample Integrity. The maintenance of the sample in the same
condition as when sampled.
Sample Quantitation Limit (SQL). The detection limit that
accounts for sample characteristics, sample preparation and
analytical adjustments such as dilution.
Sensitivity. The capability of methodology or instrumentation
to discriminate between measurement responses for quantita-
tive differences in a parameter of interest.
Slope Factor. A plausible upper-bound estimate of the proba-
bility of a response per unit intake of a chemical over a lifetime.
The slope factor is used to estimate an upper-bound probability
of an individual developing cancer as a result of a lifetime
exposure to a particular level of a potential carcinogen.
Solvent. A liquid used to dissolve and separate analytes from
the matrix of origin.
Spatial Variation. Term describing the manner in which
contaminants vary as a function of space. The magnitude of
difference in contaminant concentrations in samples separated
by a known distance is a measure of spatial variability.
Spike. A known amount of a chemical added to a sample for the
purpose of determining efficiency of recovery; a type of quality
control sample.
Split. A single sample divided for the same measurement by
two processes for the purpose of monitoring precision, accu-
racy or comparability of two analyses.
Standard Deviation. The most common measure of the disper-
sion of observed values or results expressed as the magnitude
of the square root of the variance.
Stratified Random Sampling. The process of locating samples
randomly within distinct populations, unit areas or strata.
Stratify. To divide into strata.
Surrogate. A substance added to environmental samples for
quality control purposes that is not likely to be found in an
environmental sample but that mimics the analyte of interest.
Systematic Random (Grid) Sampling. A non-biased sampling
plan using a grid comprised of equidistant parallel lines at right
angles to each other.
Target Compound. The compound of interest in a specific
method. The term also has been used in the Federal Register to
denote compounds of regulatory significance.
Temporal Variation. Variation observed in chemical concen-
trations that is dependent on time.
Tentatively Identified Compound (TIC). Organic compounds
detected in a sample that are not target compounds, internal
standards or surrogates.
Toxicological Threshold. The concentration at which a com-
pound exhibits toxic effects.
Turnaround Time. The time from laboratory receipt of samples
to receipt of a data package by the client.
95% Upper Confidence Limit (UCL). The upper limit on a
normal distribution curve below which the observed mean of a
data set will occur 95% of the time.
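As a minimal sketch of the calculation, assuming normally distributed data (the concentration values below are hypothetical, and skewed environmental data often require other estimators):

import math
from scipy.stats import t

data = [4.1, 5.6, 3.8, 7.2, 5.0, 6.3, 4.7, 5.9]   # hypothetical results
n = len(data)
mean = sum(data) / n
std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# One-sided 95% UCL on the arithmetic mean using the t-distribution.
ucl_95 = mean + t.ppf(0.95, n - 1) * std / math.sqrt(n)
print(f"mean = {mean:.2f}, 95% UCL = {ucl_95:.2f}")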
Useful Range. That portion of the calibration curve that will
produce the most accurate and precise results.
Variance. A measure of dispersion. It is the sum of the squares
of the difference between the individual values and the arithme-
tic mean of the set, divided by one less than the number of
values.
Viscosity. The physical property of a fluid that offers a
continued resistance to flow.
Volatile Organics. The solid or liquid compounds that may
undergo spontaneous phase change to a gaseous state at stan-
dard temperature and pressure.
Wavelength. The linear distance between successive maxima
or minima of a wave form.
Weight-of-Evidence Classification. An EPA classification
system for characterizing the extent to which available data
indicate that an agent is a human carcinogen. Recently, EPA
has developed weight-of-evidence systems for other kinds of
toxic effects, such as developmental effects.
-------
REFERENCES
American Chemical Society (ACS) Committee on Environ-
mental Improvement. 1983. Principles of Environmental
Analysis. Analytical Chemistry. 55:2210-2218.
Annual Book of ASTM Standards. American Society for
Testing and Materials. Philadelphia, PA.
American Society for Testing and Materials (ASTM). 1979.
Sampling and Analysis of Toxic Organics in the Atmos-
phere. ASTM Symposium. American Society for Testing
and Materials. Philadelphia, PA.
Baudo, R., Giesy, J., and Muntau, H. (Eds). 1990. Sediments:
Chemistry and Toxicity of In-Place Pollutants. Lewis
Publishers, Inc. Ann Arbor, MI.
Borgman, L.E., and Quimby, W.F. 1988. Sampling for Tests
of Hypothesis When Data are Correlated in Time
and Space. In: Principles of Environmental Sampling. L.
H. Keith, Ed. American Chemical Society. Washington,
D.C.
CARD. CLP Results Database. 1988. U.S. Environmental
Protection Agency. Office of Emergency and Remedial
Response.
Clesceri et al. (Eds). 1989. Standard Methods for the Exami-
nation of Water and Wastewater. 17th Edition. American
Public Health Association. Washington, D.C.
Dragun,J. 1988. The Soil Chemistry of Hazardous Materials.
Hazardous Materials Control Research Institute. Silver
Spring, MD.
Environmental Protection Agency (EPA). 1983. Methods for
Chemical Analysis of Water and Wastes (EPA 200 and
300 Methods). Environmental Monitoring Services Labo-
ratory. EPA/600/4-79/020.
Environmental Protection Agency (EPA). 1984. Methods for
Organic Chemical Analysis of Municipal and Industrial
Wastewater (EPA 600 Methods) as presented in 40 CFR
Part 136, Guidelines Establishing Test Procedures for the
Analysis of Pollutants under the Clean Water Act.
Environmental Protection Agency (EPA). 1985. Methodology
for Characterization of Uncertainty in Exposure Assess-
ment. Office of Research and Development. EPA/600/8-
85/009.
Environmental Protection Agency (EPA). 1986a. Test Meth-
ods for Evaluating Solid Waste (SW-846): Physical/
Chemical Methods. Third Edition. Office of Solid Waste.
Environmental Protection Agency (EPA). 1986b. Guidelines
for Carcinogenic Risk Assessment. 51 Federal Register
33992 (September 24, 1986).
Environmental Protection Agency (EPA). 1986c. Guidelines
for the Health Risk Assessment of Chemical Mixtures. 51
Federal Register 34014 (September 24, 1986).
Environmental Protection Agency (EPA). 1986d. CLP Statis-
tical System Database. Office of Emergency and Reme-
dial Response.
Environmental Protection Agency (EPA). 1987a. Data Qual-
ity Objectives for Remedial Response Activities: Devel-
opment Process. EPA/540/G-87/003 (NTIS PB88-131370).
Environmental Protection Agency (EPA). 1987b. Data Qual-
ity Objectives for Remedial Response Activities. Example
Scenario: RI/FS Activities at a Site with Contaminated
Soil and Ground Water. Office of Emergency and Reme-
dial Response. EPA/540/G-87/004.
Environmental Protection Agency (EPA). 1987c. Field Screen-
ing Methods Catalog. Office of Emergency and Remedial
Response.
Environmental Protection Agency (EPA). 1987d. A Compen-
dium of Superfund Field Operations Methods. Office of
Emergency and Remedial Response. EPA/540/P-87/001
(OSWER Directive 9355.0-14).
Environmental Protection Agency (EPA). 1988. Field Screen-
ing Methods for Hazardous Waste Site Investigations.
Presented at the U.S. Environmental Protection Agency
First International Symposium. Las Vegas, NV.
Environmental Protection Agency (EPA). 1988a. Guidance
for Conducting Remedial Investigations and Feasibility
Studies under CERCLA. Office of Solid Waste and
Emergency and Remedial Response. EPA/540/G-89/004.
Environmental Protection Agency (EPA). 1988b. National Oil
and Hazardous Substances Pollution Contingency Plan
(NCP). 53 Federal Register 51394 (December 23, 1988).
Environmental Protection Agency (EPA). 1988c. Superfund
Exposure Assessment Manual. Office of Emergency and
Remedial Response. EPA/540/1-88/001. (OSWER Di-
rective 9285.5-1).
Environmental Protection Agency (EPA). 1988d. Laboratory
Data Validation Functional Guidelines for Evaluating
Inorganics Analysis. Office of Emergency and Remedial
Response.
Environmental Protection Agency (EPA). 1988e. Laboratory
Data Validation Functional Guidelines for Evaluating
Organics Analysis. Office of Emergency and Remedial
Response.
Environmental Protection Agency (EPA). 1988f. Review of
Ecological Risk Assessment Methods. Office of Policy
Analysis. EPA/230/10-88/041.
Environmental Protection Agency (EPA). 1988g. Contract
Laboratory Program Statement of Work for Inorganic
Analysis: Multi-media. Multi-concentration. Office of
Emergency and Remedial Response. SOW No. 788.
Environmental Protection Agency (EPA). 1988h. Contract
Laboratory Program Statement of Work for Organic
Analysis: Multi-media. Multi-concentration. Office of
Emergency and Remedial Response. SOW No. 288.
Environmental Protection Agency (EPA). 1988i. Guidance
Document for the Assessment of RCRA Environmental
Data. In Draft. Office of Solid Waste.
Environmental Protection Agency (EPA). 1988j. Methods for
the Determination of Organic Compounds in Drinking
Water (EPA 500 Methods). Environmental Monitoring
Services Laboratory. Las Vegas, NV. EPA/600/4-88/039.
Environmental Protection Agency ("EPA). 1988k. Toxic
Release Inventory System (data base). Office of Solid
Waste and Emergency and Remedial Response.
Environmental Protection Agency (EPA). 1989. Integrated
Risk Information System (data base). Office of Research
and Development.
Environmental Protection Agency (EPA). 1989a. Risk As-
sessment Guidance for Superfund: Human Health Evalu-
ation Manual. Part A. Office of Solid Waste and Emer-
gency and Remedial Response. EPA/540/1-89/002.
(OSWER Directive 9285.7-01A).
Environmental Protection Agency (EPA). 1989b. Risk As-
sessment Guidance for Superfund: Volume II. Environ-
mental Evaluation Manual. Office of Solid Waste and
Emergency and Remedial Response. EPA/540/1-89/001.
Environmental Protection Agency (EPA). 1989c. Soil Sam-
pling Quality Assurance User's Guide. Environmental
Monitoring Systems Laboratory. Las Vegas, NV. EPA/
600/8-89/046.
Environmental Protection Agency (EPA). 1989d. Health
Effects Assessment Summary Tables. Fourth Quarter FY
1989. Office of Research and Development.
(OERR 9200.6-303).
Environmental Protection Agency (EPA). 1989e. Proposed
Amendments to the Guidelines for the Health Assessment
of Suspect Developmental Toxicants. 54 Federal Register
9386 (March 14, 1989).
Environmental Protection Agency (EPA). 1989f. Ecological
Assessment of Hazardous Waste Sites: A Field and
Laboratory Reference. Environmental Research Labora-
tory. EPA/600/3-89/013.
Environmental Protection Agency (EPA). 1989g. Data Use
Categories for the Field Analytical Support Project. In
Draft. Hazardous Site Evaluation Division, Office of Solid
Waste and Emergency and Remedial Response.
Environmental Protection Agency (EPA). 1989h. Office of
Water Regulations and Standards/Industrial Technology
Division (ITD) Methods (EPA 1600 Methods). Office of
Water.
Environmental Protection Agency (EPA). 1989i. Methods for
Evaluating the Attainment of Cleanup Standards. Volume
I: Soils and Solid Media. Office of Policy, Planning and
Evaluation. EPA/230/2-89/042.
Environmental Protection Agency (EPA). 1990b. A Rationale
for the Assessment of Errors in the Sampling of Soils.
Office of Research and Development. EPA/600/4-90/
013.
Environmental Protection Agency (EPA). 1990c. Health
Effects Assessment Summary Tables. First and Second
Quarters FY 1990. Office of Research and Development
(OERR 9200.6-303).
Finkel, A.M. 1990. Confronting Uncertainty in Risk Manage-
ment: A Guide for Decision-Makers. Center for Risk
Management. Washington, D.C.
Gilbert, R.O. 1987. Statistical Methods for Environmental
Pollution Monitoring. Van Nostrand. New York, NY.
Helrich, Kenneth (Ed). 1990. Official Methods of Analysis of
the Association of Official Analytical Chemists. 15th
Edition. Association of Official Analytical Chemists.
Washington, D.C.
IRIS. Integrated Risk Information System (data base). 1989.
U.S. Environmental Protection Agency, Office of Re-
search and Development.
Keith, L.H. 1987. Principles of Environmental Sampling.
American Chemical Society. Washington, D.C.
Keith, L.H. 1990a. Environmental Sampling and Analysis. In
Print. American Chemical Society. Washington, D.C.
Keith, L.H. 1990b. Environmental Sampling: A Summary.
Environmental Science and Technology. 24:610-615.
Manahan, S.E. 1975. Environmental Chemistry. Willard
Grant Press. Boston, MA.
Neptune, D.E., Brantly, E.P., Messner, M., and Michael, D.I.
1990. Quantitative Decision Making in Superfund.
Hazardous Materials Control. 18-27.
National Research Council (NRC). 1983. Risk Assessment in
the Federal Government: Managing the Process. National
Academy Press. Washington, D.C.
Oak Ridge National Laboratory (ORNL). 1982. Methodology
for Environmental Risk Assessment. Environmental Sci-
ences Division. ORNL/TM-8167.
Oak Ridge National Laboratory (ORNL). 1986. User's Manual
for Ecological Risk Assessment. Environmental Sciences
Division. ORNL/TM-8167.
Pohlmann, K.F., and Hess, J.W. 1988. Generalized Ground-Water
Sampling Device Matrix. Desert Research Institute. Las
Vegas, NV.
Seiler, F.A. 1987. Error Propagation for Large Errors. Risk
Analysis. 7:509-518.
Taylor, J.K. 1987. Quality Assurance of Chemical Measure-
ments. Lewis Publishers, Inc. Ann Arbor, MI.
Whitmore, R.W. 1985. Methodology for Characterization of
Uncertainty in Exposure Assessments. EPA/600/8-85/
009.
-------
Chapter 1
Introduction and Background
Chapter 2
The Risk Assessment Process
Chapter 3
Criteria for Evaluating Data Useability in Baseline
Risk Assessments
Chapter 4
Steps for Planning for the Acquisition of Useable
Environmental Data in Baseline Risk Assessments
Chapter 5
Assessment of Environmental Data for Useability in
Baseline Risk Assessments
Chapter 6
Application of Data to Baseline Risk Assessments
APPENDICES
Provide technical
reference tables for
sampling and analysis.
Describe data review
packages and meanings
of selected data
qualifiers.
-------
APPENDIX I
DESCRIPTION OF ORGANICS AND INORGANICS DATA REVIEW
PACKAGES
The purpose of Appendix I is to familiarize the reader with a model for data review deliverables. This
appendix consists of the following items:
1. A description of the five major components of a typical data review package.
2. An example of a data review summary.
3. Example data review forms.
Please note that the example forms are designed for the validation of Contract Laboratory Program
(CLP) type data packages. An example form is included for each analytical fraction (volatiles, semivolatiles,
pesticide/aroclors and metals) and for samples from soil/sediment and aqueous matrices. These forms
nevertheless include the necessary information for the review of most types of data (analytical results,
sample quantitation/detection limits, data qualifiers, etc.) not associated with the CLP.
-------
APPENDIX I
1. DESCRIPTION OF DATA REVIEW PACKAGES
A typical data review package consists of the following components:
o Narrative
o Glossary of Data Qualifiers
o Data Summary Forms
o Analysis Data Sheets for Each Sample
o Support Documentation
Narrative
The Narrative provides sample identification information and describes the problems
found that affect data quality.
Glossary of Data Qualifiers
This glossary is a list of Data Review Qualifiers and their meanings for use in data
evaluation.
Data Summary Forms
A Data Summary Form consists of a grid of sample numbers and analytes where all
positive results are reported for each analyte in each sample along with the applicable data
validation qualifier if present. The Data Summary Forms provide a reference which indicates
both positive sample results and sample-specific quantitation limits.
Analysis Data Sheets for Each Sample
The results of all analytes analyzed from each sample are reported by the laboratory on
Analysis Data Sheets. This includes the results of the laboratory search compounds or
Tentatively Identified Compounds (TICs) and undetected analytes, as applicable.
Support Documentation
Support Documentation included in data review packages are summaries of the
quality control results and raw data that have caused the data to be qualified. Documents which
discuss data analysis reporting issues may also be included.
-------
APPENDIX I (Continued)
2. DATA REVIEW SUMMARY
ORGANIC DATA SUMMARY FORMS UTILIZED
BY REGION III IN THE CLP
DATE:
SUBJECT:
FROM:
TO:
THRU:
OVERVIEW
Case consisted of four (4) low level water and two (2) low
level soil samples, submitted for full organic analyses. Included
in this data set was one (1) equipment blank and one (1) trip
blank. The trip blank was analyzed for volatiles only. The
samples were analyzed as a Contract Laboratory Program (CLP)
Routine Analytical Service (RAS).
SUMMARY
All samples were successfully analyzed for all target compounds
with the exception of 2-Butanone and 2-Hexanone in the volatile
fraction. All remaining instrument? and method sensitivities were
according to the Contract Laboratory Program (CLP) Routine
Analytical Service (RAS) protocol.
MAJOR PROBLEM
The response factors (RF) for 2-Butanone and 2-Hexanone were less
than 0.05 in one of the continuing volatile calibrations. The
quantitation limits for these compounds in the affected samples
were qualified unreliable, "R". (See Table I in Appendix F for
the affected samples.)
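(For reference, a response factor of this kind is computed from the compound and internal standard peak areas and concentrations. The sketch below is illustrative only and is not part of the example memorandum; the peak areas and concentrations are assumed values.)

# Hypothetical response factor (RF) check of the kind described above.
def response_factor(area_compound, conc_internal_std,
                    area_internal_std, conc_compound):
    # Relative response factor: (Ax * Cis) / (Ais * Cx)
    return (area_compound * conc_internal_std) / (
        area_internal_std * conc_compound)

rf = response_factor(area_compound=1200, conc_internal_std=50.0,
                     area_internal_std=480000, conc_compound=50.0)
print(f"RF = {rf:.4f} -> {'qualify R' if rf < 0.05 else 'acceptable'}")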
MINOR PROBLEMS
Several compounds failed precision criteria for initial and/or
continuing calibrations. Quantitation limits and the reported
results for these compounds may be biased and, therefore, have
been qualified estimated, "UJ" and "J", respectively. (See Table
I in Appendix F for the affected samples.)
-------
APPENDIX I (Continued)
2. DATA REVIEW SUMMARY
NOTES
Page 2 of 3
o The soil semivolatile MS/MSD analyses were originally
extracted within the technical and contractual holding
times. Re-extractions were required because of surrogate
recoveries, and these re-extractions were performed outside
of holding times. Surrogate recoveries were again outside
of the QC limits; therefore, the original sample results are
being reported.
o The maximum concentrations of compounds found in the trip
blanks, field blanks, or method blanks are listed below.
All samples with concentrations of common laboratory
contaminants less than ten times (<10X) the blank
concentration, and uncommon laboratory contaminants less
than five times (<5X) the blank concentration have been
qualified "B" in the data summary table. (See Appendix F) .
Compound                        Concentration (ug/L)
Methylene chloride *            7 J
Acetone *                       9 J
Bis(2-ethylhexyl)phthalate *    10 J

* Common Laboratory Contaminant
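(For reference, the 5X/10X qualification rule described in this note can be expressed directly in code. The sketch below is illustrative only and is not part of the example memorandum; the compound list and result values are assumptions.)

# Hypothetical application of the 5X/10X blank-qualification rule.
COMMON_LAB_CONTAMINANTS = {
    "methylene chloride", "acetone", "2-butanone",
    "toluene", "bis(2-ethylhexyl)phthalate",
}

def blank_qualifier(compound, sample_result, blank_result):
    # 10X factor for common laboratory contaminants, 5X otherwise.
    factor = 10 if compound.lower() in COMMON_LAB_CONTAMINANTS else 5
    if blank_result > 0 and sample_result < factor * blank_result:
        return "B"
    return None

print(blank_qualifier("Acetone", 60, 9))        # B    (60 < 10 x 9)
print(blank_qualifier("Acetone", 95, 9))        # None (95 >= 90)
print(blank_qualifier("Phenanthrene", 30, 7))   # B    (30 < 5 x 7)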
o The semivolatile MS/MSD analyses had compounds other than
the spiking compounds present. The following is a table of
results and precision estimates for the non-spiked
compounds:
MS/MSD Non-Spiked Compounds
Concentration (ug/L)

Compound                        Sample     MS        MSD
Phenanthrene                    150 J      190 U     140 J
Fluoranthene                    340 J      470 J     440 J
Benzo(a)anthracene              290 J      310 J     320 J
Chrysene                        290 J      330 J     300 J
Bis(2-ethylhexyl)phthalate      160 J      200 J     240 J
Benzo(b)fluoranthene            190 J      240 J     240 J
Benzo(k)fluoranthene            730 J      200 J     220 J
Benzo(a)pyrene                  740 J      190 J     240 J

RSD = Relative Standard Deviation
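(For reference, the RSD footnoted above measures precision across replicate results. A minimal sketch follows, with hypothetical values, since the column assignments in this reproduction are uncertain.)

# Illustrative relative standard deviation across three results (ug/L).
import statistics

replicates = [150.0, 190.0, 140.0]
rsd = statistics.stdev(replicates) / statistics.mean(replicates) * 100
print(f"RSD = {rsd:.0f}%")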
-------
APPENDIX I (Continued)
2. DATA REVIEW SUMMARY
Page 3 of 3
o The pesticide/PCB analyses of all soil samples and associated
QC samples had surrogate recoveries in excess of the QC limit.
Since no positive results were reported for any pesticide or
PCB compounds for any of the samples in this case, no data were
affected. (See Appendix F.)
o The reported Tentatively Identified Compounds (TICs) in
Appendix D have been reviewed and accepted or corrected.
o All data for Case were reviewed in accordance with the
Functional Guidelines for Evaluating Organic Analyses with
modifications for use within Region III. The text of this
report addresses only those problems affecting usability.
ATTACHMENTS
APPENDIX A - Glossary of Data Qualifiers
APPENDIX B - Data Summary. These include:
(a) All positive results for target compounds with
qualifier codes where applicable.
(b) All unusable detection limits (qualified "R").
APPENDIX C - Results as Reported by the Laboratory for All
Target Compounds
APPENDIX D - Reviewed and Corrected Tentatively Identified
Compounds
APPENDIX E - Organic Regional Data Assessment Summary
APPENDIX F - Support Documentation
-------
APPENDIX I (Continued)
3. DATA REVIEW FORM
[Example data review form: grid of sample locations and numbers versus analytes, with reported results and data review qualifiers.]