Multi-Agency Radiological
Laboratory Analytical Protocols
Manual
Part I Training
"The MAPLAP Process"
United States Environmental Protection Agency, Region 9
and
California Department of Health Services
Sacramento, California
May 31-June 1, 2006

-------
Multi-Agency Radiological Laboratory
Analytical Protocols (MARLAP)
Part I — "The MARLAP Process"
May 31, 2006
8:00	1 — Introduction	Bob Litman
8:30	2 — Directed Planning Process	Carl Gogolak
9:15	3 — Data Quality Objectives and the Gray Region*	Carl Gogolak
Noon	Lunch
1:15	4 — Key Analytical Planning Issues*	Bob Litman
3:15	Break
3:30	5 — Project Planning Documents	Dave McCurdy
3:50	6 — Measurement Uncertainty	Keith McCroan
4:45	Adjourn for the Day (Questions Welcome)
June 1, 2006
8:00	7 — Evaluating Measurement Uncertainty*	Keith McCroan
9:30	8 — Obtaining Laboratory Services	Dave McCurdy
10:00	Break
10:15	9 — Method Validation*	Dave McCurdy
12:15	Lunch
1:30	10 — Evaluating Methods and Laboratories	Dave McCurdy
2:45	Break
3:00	11 — Data Verification and Validation*	Bob Litman
4:45	Wrap Up and Adjourn (Questions Welcome)
* With group exercise

-------
Contents
AGENDA
ACRONYMS & SYMBOLS
MODULES	TAB
1	Introduction and Overview	1
2	Project Planning	2
3	Data Quality Objectives and the Development of Measurement Quality Objectives	3
4	Key Analytical Planning Issues: MQOs and Analytical Protocol Specifications	4
5	Project Plan Documents: Important Recommendations	5
6	Measurement Uncertainty			6
7	Evaluating Measurement Uncertainty	7
8	Obtaining Laboratory Services	8
9	Method Validation: Performance-Based Approach	9
10	Evaluating Methods and Laboratories	10
11	Data Verification and Validation	11
BACKGROUND MATERIALS
The MARLAP Process (Graphic)	12
Required Method Uncertainty: Key to the MARLAP Process (Graphic)	12
Extract from MARLAP Appendix B (Development of the Gray Region)	13
MARLAP Appendix C (Measurement Quality Objectives for Method Uncertainty and
Detection and Quantification Capability)	13
MARLAP Attachments 3 A (Measurement Uncertainty) and 3B (Analyte Detection)	13
Exercise: Analytical Protocol Specifications for 90Sr in Milk	14
Exercise: QC Charts for 90Sr in Milk	14
Exercise: Data Report for 90Sr in Milk	14
MARLAP Table 4.2: Crosswalk Between Project Plan Document Elements and Directed
Planning Process	15
MARLAP Table E.6: Example of a Proposal Evaluation Plan	15
Consolidated MARLAP Recommendations from Part I	16
EXERCISE MATERIALS
Plutonium Fabricators, Ltd.—Delineating the gray region and determining the required
method uncertainty (Module 3)	17
Analytical protocol specifications (APS) for americium-241 (Module 4)	18
Measurement uncertainty definitions and notation (Module 7)		19
Measurement uncertainty and statistical tests with associated data (Module 7)	19
Method validation exercise notes (Module 9)	20
Gamma and alpha spectrometry procedures for Am-241 (Module 9)	20
Method validation data templates for alpha and gamma spectrometry (Module 9)	20
Data validation and verification (Module 11)	21
April 18, 2006

-------
Acronyms Used in the MARLAP Process
AL	action level
APS	Analytical Protocol Specification
COC	chain of custody
CSU	combined standard uncertainty
DL	discrimination level
DQO	data quality objective
GPC	gas proportional counting
GUM	Guide to the Expression of Uncertainty in Measurement
HSA	historical site assessment
ISO	International Organization for Standardization
LBGR	lower bound of the gray region
LCS	laboratory control sample
LSC	liquid scintillation counting
MARLAP	Multi-Agency Radiological Laboratory Analytical Protocols Manual
MDA	minimum detectable amount or minimum detectable activity
MDC	minimum detectable concentration
MQC	minimum quantifiable concentration
MQO	measurement quality objective
MVRM	method validation reference material
PE	performance evaluation
PM	project manager
PT	performance/proficiency testing [materials]
QAPP	quality assurance project plan
RHT	radiological holding time
ROI	region of interest
RPD	relative percent difference
SA	spike concentration added
SOW	statement of work
SR	unspiked sample result
SSR	spiked sample result
TAT	turnaround time
TEC	technical evaluation committee
UBGR	upper bound of the gray region
V&V	verification and validation

-------
Symbols Used in the MARLAP Process
σ	The total standard deviation of the data. It is represented by:
	σ = [σS² + σM²]^½
σS	The standard deviation of the contaminant concentration in the sampled
	population (i.e., the sampling contribution to uncertainty)
σM	The "true" standard deviation of the measurement process (i.e., the
	laboratory contribution to uncertainty)
Δ	The width of the gray region:
	Δ = (Action Level − Discrimination Level) = (AL − DL)
	It also can be expressed as
	Δ = (upper bound of the gray region − lower bound of the gray region)
φMR	Required relative method uncertainty above the action level (AL),
	expressed as a fraction: φMR = uMR / AL
uMR	Required absolute method uncertainty at and below the AL; an upper
	bound for σM:
	uMR = Δ/10 for the mean of a sampled population
	uMR = Δ/3 for an individual sample
α	The statistical factor for assessing the probability of an analyte being
	detected when none is present. Also referred to as the "Type I error rate."
	Commonly assigned a value of 0.05.
β	The statistical factor for assessing the probability of an analyte not being
	detected when it is present. Also referred to as the "Type II error rate."
	Commonly assigned a value of 0.05.
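These definitions can be exercised numerically. The sketch below is illustrative rather than part of MARLAP; the function name is invented, and the numbers are the 90Sr-in-milk example from Module 3 (AL = 8 pCi/L, DL = 3 pCi/L).

```python
# Minimal sketch: gray-region width and required method uncertainty from the
# symbol definitions above. Function name and sample numbers are illustrative.

def required_method_uncertainty(AL, DL, mean_of_population=True):
    """u_MR = Delta/10 for a population mean, Delta/3 for an individual sample."""
    delta = AL - DL                        # width of the gray region
    u_MR = delta / 10 if mean_of_population else delta / 3
    phi_MR = u_MR / AL                     # required relative method uncertainty
    return u_MR, phi_MR

u, phi = required_method_uncertainty(AL=8.0, DL=3.0)
print(f"u_MR = {u} pCi/L (absolute), phi_MR = {phi:.1%} (relative)")  # 0.5, 6.2%
```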

-------
Multi-Agency Radiological
Laboratory Analytical Protocols Manual
(MARLAP)
1. Introduction
Carl Gogolak — carl@gogolak.org
Bob Litman — drbob20@comcast.net
Keith McCroan — mccroan.keith@epa.gov
Dave McCurdy — demccurdy@aol.com
Carl Gogolak, Ph.D., is a physicist with more than 35 years of experience. He has conducted
experimental and theoretical studies of low-level environmental radiation fields to assess the
radiological impacts of energy production and to assure adequate environmental
surveillance of nuclear facilities. He was a major contributor to both MARSSIM and
MARLAP, for which he authored or co-authored several chapters and appendices dealing
with uncertainty, the gray region, and data quality. He was an original developer and
instructor on previous versions of the MARLAP Part I training course prior to his retirement
from the Environmental Measurements Laboratory of the U.S. Department of Energy and
later the Department of Homeland Security.
Robert Litman, Ph.D., has been a researcher and practitioner of nuclear and radiochemical
analysis for the past 33 years. He is well respected in the nuclear power industry as a
specialist in radiochemistry and instrumentation. Dr. Litman co-authored two chapters of
MARLAP. His particular areas of expertise are gamma spectroscopy and radiochemical
separations.
Keith McCroan, Ph.D., is an information technology specialist with the National Air and
Radiation Environmental Laboratory of the U.S. Environmental Protection Agency, where he
has worked since 1991. Although his formal education was in mathematics and computer
science, he has become better known among radiochemists as a statistician and
metrologist. Dr. McCroan was the principal author of five chapters and appendices of
MARLAP, including the chapters on measurement uncertainty and detection and
quantification limits, and was a contributor to four other chapters.
David E. McCurdy, Ph.D., is a nationally recognized expert in radioanalytical method
development, and he has 39 years of experience in the areas of radiometrology,
radiochemical method development, radiobioassay, radiological laboratory operations,
environmental monitoring and pathway analysis. He was the principal author or co-author of
seven chapters and appendices of MARLAP.

-------
Contents
MODULES
1	Introduction and Overview
2	Project Planning
3	DQOs and the Development of MQOs (Exercise)
4	Key Analytical Planning Issues: MQOs and APSs (Exercise)
5	Project Plan Documents: Important Recommendations
6	Measurement Uncertainty
7	Evaluating Measurement Uncertainty (Exercise)
8	Obtaining Laboratory Services
9	Method Validation: Performance-Based Approach (Exercise)
10	Evaluating Methods and Laboratories
11	Data Verification and Validation (Exercise)
HANDOUTS
Extract from MARLAP Appendix B (Development of the Gray Region)
MARLAP Appendix C (MQOs for Method Uncertainty and Detection and Quantification Capability)
MARLAP Attachments 3A (Measurement Uncertainty) and 3B (Analyte Detection)
Exercise: Analytical Protocol Specifications for 90Sr in Milk
Exercise: QC Charts for 90Sr in Milk
Exercise: Data Report for 90Sr in Milk
Table 4.2: Crosswalk Between Project Plan Document Elements and Directed Planning Process
Table E.6: Example of a Proposal Evaluation Plan
Consolidated MARLAP Recommendations from Part I
During the group exercises, you will follow the various steps needed to
select and validate an analytical method for determining americium-241 in
groundwater at a former nuclear extraction facility. Participants will form
teams and apply MARLAP principles to:
•	Determine the required method uncertainty at the action level
•	Write an Analytical Protocol Specification for a selected nuclide/matrix
combination
•	Evaluate and approve a method based on laboratory validation
documentation
•	Apply data validation and verification qualifiers to a data set
•	Perform representative uncertainty calculations

-------
Enabling Goals
After this course, you will be able to —
1.	Navigate through MARLAP and understand its
organization
2.	Recognize that the required method uncertainty is the key
to the MARLAP process
3.	Describe, using specific equations, how the required
method uncertainty is used in the MARLAP process.
4.	Apply the MARLAP process during the project planning,
implementation, and assessment phases

-------
What is MARLAP?
•	A multi-agency guidance manual for project planners and
managers and radioanalytical laboratories
•	Participants include EPA, DOD, DOE, DHS, NRC,
NIST, USGS, FDA, Kentucky, and California
•	Companion to MARSSIM
Eight federal agencies (EPA, Defense, Energy, Homeland Security, Nuclear
Regulatory Commission, Food and Drug Administration, US Geological
Survey, National Institute of Standards and Technology) plus two states
(Kentucky and California)
MARSSIM = Multi-Agency Radiation Survey and Site Investigation Manual.
MARSSIM provides guidance on how to design and implement a study to
demonstrate that a site meets appropriate release criteria.
MARLAP provides guidance and a framework for project planners and
laboratory personnel to ensure that radioanalytical data will meet the needs
of decisionmakers.
Websites:
MARLAP: http://www.epa.gov/radiation/marlap/
MARSSIM: http://www.epa.gov/radiation/marssim/

-------
MARLAP's Goal
Provide guidance and a framework to assure that
laboratory radioanalytical data meet a program's or
project's specific needs and requirements

-------
MARLAP Objectives
•	Providing a framework and an information resource for
using a performance-based approach for radioanalytical
work
•	Promoting a directed planning process involving
radioanalytical laboratory expertise
•	Providing guidance on how to link project planning,
implementation and assessment from an analytical
perspective
•	Making collective knowledge and experience in
radioanalytical laboratory work widely available
•	Providing guidance on obtaining and evaluating laboratory
services
MARLAP is not a methods manual.

-------
Data Collection Activities
Examples of MARLAP's applicability —
•	Cleanup of contaminated sites
•	Environmental monitoring
•	Waste management
•	Effluent monitoring of licensed facilities
•	Site characterization
•	Emergency response
•	Background studies
•	Decommissioning of nuclear facilities

-------
Manual Outline
MARLAP Part I (Volume 1)
•	Chapter 1 — Introduction
•	Chapter 2 — Project Planning Process
•	Chapter 3 — Key Analytical Planning Issues and Developing APSs
•	Chapter 4 — Project Plan Documents
•	Chapter 5 — Obtaining Laboratory Services
•	Chapter 6 — Selection and Application of an Analytical Method
•	Chapter 7 — Evaluating Methods and Laboratories
•	Chapter 8 — Radiochemical Data Verification and Validation
•	Chapter 9 — Data Quality Assessment
•	Five Appendices
Part I is principally directed towards the project planning, implementation, and
assessment phases and emphasizes:
•	Preparation of project plan documents
•	Establishing a Statement of Work (SOW)
•	Identifying and obtaining proper laboratory services
•	Performance-based method selection and approval
•	Method validation guidance
•	Initial and ongoing laboratory performance evaluation
•	Data validation and assessment processes
Associated appendices cover:
A.	Directed Planning Approaches
B.	The DQO Process
C.	MQOs for Method Uncertainty and Detection and Quantification Capability
D.	Content of Project Plan Documents
E.	Contracting Laboratory Services
While Part I is of greatest significance to project planners and managers, lab
personnel need to understand what Part I contains in order to provide
necessary input during the planning process.
This course concentrates on Part I.

-------
Manual Outline (Continued)
MARLAP Part II (Volume 2)

• Chapter 10 — Field and Sampling Issues

• Chapter 11 — Sample Receipt, Inspection, and Tracking

• Chapter 12 — Laboratory Sample Preparation

• Chapter 13 — Sample Dissolution

• Chapter 14 — Separation Techniques

• Chapter 15 — Quantification of Radionuclides

• Chapter 16 — Data Acquisition, Reduction, and Reporting

• Chapter 17 — Waste Management

• Appendix F — Laboratory Subsampling

Part II is directed towards laboratory personnel and the analysis process.
Part II spans two printed volumes because of its size:
•	Sample handling and preparation for analysis
•	Techniques for sample dissolution
•	Techniques for analyte separation
•	Techniques for radiological counting of samples
•	Data reduction
•	Waste management in radioanalytical laboratories
•	Quality control
•	Statistical methods of data evaluation

-------
Manual Outline (Continued)
MARLAP Part II (Volume 3)
•	Chapter 18 — Laboratory Quality Control
•	Chapter 19 — Measurement Uncertainty
•	Chapter 20 — Detection and Quantification Capabilities
•	Appendix G — Statistical Tables

-------
Data Life Cycle
Planning phase
Implementation phase
Assessment phase
Decision making
DATA LIFE CYCLE (Process → Process Outputs)
Directed Planning Process → Development of Data Quality Objectives and
Measurement Quality Objectives (including optimized sampling and analytical design)
Plan Documents → Project Plan Documents, including Quality Assurance Project
Plan (QAPP); Work Plan or Sampling and Analysis Plan (SAP); Data Validation
Plan; Data Quality Assessment Plan
Contracting Services → Statement of Work (SOW) and other contractual documents
Sampling → Laboratory Samples
Analysis → Laboratory Analysis (including Quality Control [QC] samples);
Complete Data Package
Verification → Verified Data; Data Verification Report
Validation → Validated Data; Data Validation Report
Data Quality Assessment → Assessment Report
→ Data of Known Quality Appropriate for the Intended Use
•	Data life cycle provides structure for considering major project phases
involving data collection
•	Ensure that data will be of known quality and adequate to meet intended
use
•	Planning phase:
-	Directed planning process
-	Plan documents
-	Contracting services
•	Implementation phase
-	Sampling
-	Analysis
•	Assessment phase:
-	Verification
-	Validation
-	Data quality assessment

-------
The MARLAP Process
(Flowchart) Directed Planning Process (Chapter 2)
•	Key analytical planning issues (Chapter 3)
•	Development of Analytical Protocol Specifications, including MQOs (Chapter 3)
Develop Plan Documents That Incorporate
Analytical Protocol Specifications (Chapter 4)
(e.g., QAPP, SAP, Data Validation Plan)
Development of SOW (Chapter 5)
Implementation Phase:
•	Analytical Protocol Specifications and QC and PE sample results
•	Laboratory audits
•	Evaluation of sample-specific parameters (e.g., yield)
Analyses Completed
Data Evaluation:
•	Data Verification (Chapter 8)
•	Data Validation (Chapter 8)
•	Data Quality Assessment (Chapter 9)
Data of Known Quality for Decision Making
MARLAP is an iterative process, with feedback.
MARLAP establishes proper linkages among the three phases of the
data life cycle.
Integration of the phases ensures that the analytical data requirements
(defined during planning) can serve as measurement performance
criteria during implementation phase, and subsequently as data
evaluation criteria during assessment phase.

-------
Class Exercises
•	Example: 90Sr in milk (Instructors)
-	A MARLAP project example running through all the course
modules
•	Exercise: 241Am in ground water (Participants)
-	Participants group into project teams and apply MARLAP
principles applicable to each module
-	Information developed during each module will be used in
subsequent exercises
•	These exercises will demonstrate the MARLAP process
and enhance your ability to use it on your own projects
Participant groups will work on exercises during:
•	Module 3 — Data Quality Objectives and the Gray Region (Day 1)
•	Module 4 — Key Analytical Planning Issues (Day 1)
•	Module 7 — Evaluating Measurement Uncertainty (Day 2)
•	Module 9 — Method Validation (Day 2)
•	Module 11 — Data Validation and Verification (Day 2)

-------
Emphasis of The Training Exercises
•	Focus on the planning phase of the project
•	Provide a template for getting started
•	Require teamwork in implementing the MARLAP
process
•	Fit the time available, while remaining realistic
The exercise will require participants to apply the MARLAP process by
referring to materials located following Tabs 17 to 21 in the course book.
These materials will be introduced by instructors at the appropriate time.
Solutions will be distributed and discussed after each exercise.
•	Delineating the gray region and determining the required method
uncertainty (Module 3)
•	Developing Analytical Protocol Specifications (Module 4)
•	Determining measurement uncertainty (Module 7)
•	Validating laboratory methods (Module 9)
•	Data validation and verification (Module 11)

-------

-------
PROJECT PLANNING PROCESS
Module 2
Carl V. Gogolak

-------
Planning Questions
•	How much data do we need?
•	What will we measure?
•	Where?
•	How?
•	How will we know when to stop collecting data and
make a decision?

-------
No Planning
•	We will measure everything everywhere with the
highest possible precision and accuracy
•	We will stop when the money runs out

-------
Directed Planning Process
•	Involves all stakeholders, decisionmakers, and technical experts
•	Involves technical experts as principals
•	Each participant plays a constructive role in clearly defining:
-	The problem
-	Data the decisionmaker needs to resolve that problem
-	Why the decisionmaker needs that type and quality of data
-	The tolerable decision error rates
-	How the decisionmaker will use the data to make a defensible decision
•	Encourage efficient planning by framing and organizing complex
issues
•	Promotes communication among the stakeholders
•	Documentation provides project management with a more efficient
and consistent transfer of knowledge to new project members
•	Brings together the stakeholders, decisionmakers, and technical experts at the
beginning of the project to obtain commitment for the project and a consensus
on the nature of the problem and the desired decision.
•	Involves radioanalytical and other technical experts as principals to ensure the
decisionmakers, data requirements and the results from the field and
radioanalytical laboratory are linked effectively.
•	Enables each participant to play a constructive role in clearly defining:
-	The problem that requires resolution;
-	What type, quantity, and quality of data the decisionmaker needs to
resolve that problem;
-	Why the decisionmaker needs that type and quality of data;
-	What are the tolerable decision error rates; and
-	How the decisionmaker will use the data to make a defensible decision.
•	Encourages efficient planning by framing and organizing complex issues.
•	Promotes timely, open, and effective communication among the stakeholders,
resulting in well-conceived and documented plans.
•	Documentation provides project management with a more efficient and
consistent transfer of knowledge to new project members.

-------
Planning
1.	State the problem
2.	Identify the decision
3.	Specify the decision rule and the tolerable decision
error rates
4.	Optimize the strategy for obtaining data
1.	State the problem:
•	Describe clearly the problem(s) facing the stakeholder or customer.
2.	Identify the decision:
•	Define the decision(s) or the alternative actions that will address the
problem(s)
•	Define the inputs and boundaries to the decision.
3.	Specify the decision rule and the tolerable decision error rates:
•	Develop a decision rule to get from the data to the desired decision
•	Define the limits on the decision error rates that will determine the type
and amount of data needed.
4.	Optimize the strategy for obtaining data:
•	Determine the optimum, cost-effective way to reach the decision while
satisfying the desired quality of the decision.

-------
DQO Process Crosswalk (MARLAP and QA/G-4)
QA/G-4 Step 1. State the Problem.
Define the problem that necessitates the study;
identify the planning team; examine budget, schedule.
	→ MARLAP 1. State the problem
QA/G-4 Step 2. Identify the Goal of the Study.
State how environmental data will be used in meeting objectives and
solving the problem; identify study questions; define alternative outcomes.
QA/G-4 Step 3. Identify Information Inputs.
QA/G-4 Step 4. Define the Boundaries of the Study.
	→ MARLAP 2. (a), (b) & (c) Identify the decision
QA/G-4 Step 5. Develop the Analytic Approach.
Define the parameter of interest, specify the type of inference, and
develop the logic for drawing conclusions from findings (decision
making/hypothesis testing; estimation and other analytic approaches).
QA/G-4 Step 6. Specify Performance or Acceptance Criteria.
Develop performance criteria for new data being collected or
acceptance criteria for existing data being considered for use.
	→ MARLAP 3. (a) & (b) Specify the decision rule and the
	tolerable decision error rates
QA/G-4 Step 7. Develop the Plan for Obtaining Data.
Select the most resource-effective sampling and analysis plan
that satisfies the performance criteria.
	→ MARLAP 4. Optimize the strategy for obtaining data

-------
DQO Process Steps 1-4
Step 1. State the Problem.
Define the problem that necessitates the study;
identify the planning team, examine budget, schedule
Step 2. Identify the Goal of the Study.
State how environmental data will be used in meeting objectives and
solving the problem, identify study questions, define alternative outcomes
Step 3. Identify Information Inputs.
Identify data & information needed to answer study questions.
Step 4. Define the Boundaries of the Study.
Specify the target population & characteristics of interest,
define spatial & temporal limits, scale of inference

-------
DQO Process Steps 5-7
(Figure: QA/G-4 Steps 5-7 — Develop the Analytic Approach; Specify
Performance or Acceptance Criteria; Develop the Plan for Obtaining Data.)

-------
k pu"S uJhJ*>A *r
Example Scenario
Does the milk from
downwind cows have
higher 90Sr concentrations
than that from upwind
cows?
-------
1. State the Problem
(Section 2.5.1)
Information Needed by the
Project Planning Team
Facts relevant to current
situation (e.g., site history,
ongoing studies).
Analytes of concern or analytes
driving risk.
Matrix of concern.
Regulatory requirements and
related issues.
Existing data and its reliability.
Known sampling constraints.
Resources and relevant
deadlines.
Radioanalytical Specialists
Participation / Input
Evaluate existing radiological data
for use in defining the issues (e.g.,
analytes of concern).
Assure that the perceived problem is
really a concern by reviewing the
underlying data that are the basis for
the problem definition.
Consider how resource limitations
and deadlines will impact
measurement choices.
Use existing data to begin to define
the analyte of concern and the
potential range of concentrations.
Output I Product
Problem defined
with specificity.
Identification of the
primary decision-
maker, the available
resources, and
constraints.
From MARLAP Table 2.1

-------
2a. Identify the Decision(s)
(Section 2.5.2)
Information Needed by the
Project Planning Team
Analytical aspects related to
the decision.
Possible alternative actions.
Sequence and priority for
addressing the problem.
Radioanalytical Specialists
Participation / Input
Available protocols for sampling and
analysis.
Provide focus on what analytes need
to be measured, considering analyte
relationships and background.
Begin to address the feasibility of
different analytical protocols.
Begin to identify the items of the
APSs.
Begin to determine how sample
collection and handling will affect
MQOs.
Output / Product
Statements that link
the defined problem to
the associated
decisions and
alternative actions.

From MARLAP Table 2.1

-------
2b. Identify Inputs to the Decisions
(Section 2.5.2.2)
Information Needed by the
Project Planning Team
Radioanalytical Specialists
Participation / Input
Output / Product
•	All useful existing data.
•	The general basis for
establishing an action level.
•	Acquisition strategy options
(if new data are needed).
•	Review the quality and
sufficiency of the existing
radiological data.
•	Identify alternate analytes.
•	Defined list of needed new
data.
•	Define the characteristic or
parameter of interest
(analyte/matrix).
•	Define the action level.
•	Identify estimated
concentration range for
analytes of interest.
From MARLAP Table 2.1

-------
2c. Define the Decision Boundaries
(Section 2.5.2.3)



Information Needed by the
Project Planning Team
Radioanalytical Specialists
Participation / Input
Output / Product


•	Sampling or measurement
timeframe.
•	Sampling areas and
boundaries.
•	Subpopulations.
•	Practical constraints on data
collection (season,
equipment, turnaround time,
etc.).
•	Available protocols.
•	Identify temporal trends and
spatial heterogeneity using
existing data.
•	With the sampling
specialists, identify practical
constraints that impact
sampling and analysis.
•	Determine feasibility of
obtaining new data with
current methodology.
•	Identify limitations of
available protocols.
•	Temporal and spatial
boundaries.
•	The scale of decision.

From MARLAP Table 2.1



-------
3a. Develop a Decision Rule
(Section 2.5.3)
Information Needed by the
Project Planning Team
Radioanalytical Specialists
Participation / Input
Output / Product
•	Statistical parameter to
describe the parameter of
interest and to be compared
to the action level.
•	The action level
(quantitative).
•	The scale of decisionmaking.
•	Available protocols for
sampling and analysis.
•	Identify potentially useful
methods.
•	Estimate measurement
uncertainty and detection
limits of available analytical
protocols.
• A logical, sequential series
of steps ("if...then") to
resolve the problem.
From MARLAP Table 2.1

-------
3b. Specify Limits on Decision Error Rates
(Section 2.5.3)
Information Needed by the
Project Planning Team
Radioanalytical Specialists
Participation / Input
Output / Product
•	Potential consequences of
making wrong decisions.
•	Possible range of the
parameter of interest.
•	Allowable differences
between the action level and
the actual value.
•	Tolerable level of decision
errors or confidence.
•	Assess variability in existing
data for decisions on
hypothesis testing or statistical
decision theory.
•	Evaluate whether the tolerable
decision error rates can be met
with available laboratory
protocols, or if the error
tolerance needs to be relaxed
or new methods developed.
•	Defined baseline condition
(null hypothesis) and
quantitative estimates of
acceptable decision error
rates.
•	Defined range of possible
parameter values where the
consequence of a Type II
decision error is relatively
minor (gray region).
From MARLAP Table 2.1

-------
4. Optimize the Strategy for Obtaining Data
(Section 2.5.4)
Information Needed by
the Project Planning
Team
All outputs from all
previous elements
including parameters
(analytes and matrix) of
concern, action levels,
anticipated range of
concentration, tolerable
decision error rates,
boundaries, resources, and
practical constraints.
Available protocols for
sampling and analysis.
From MARLAP Table 2.1
Radioanalytical Specialists
Participation I Input
Sample preparation, compositing,
subsampling.
Available protocols.
Methods required by regulations (if any).
Detection and quantitation capability.
Achievable MQOs by method, matrix,
and analyte.
QC sample types, frequencies, and
evaluation criteria.
Sample volume, field processing,
preservatives, and container
requirements.
Realistic MQOs for sample analysis.
Complete parameters for the APSs.
Resources and timeframe to develop and
validate new methods, if required.
Output I Product
Available protocols for
sampling and analysis.
The most resource-effective
sampling and analysis
design that meets the
established constraints (i.e.,
number of samples needed
to satisfy the DQOs and the
tolerable decision error
rates).
A method for testing the
hypothesis.
MQOs and the statement(s)
of the APSs.
The process and criteria for
data assessment.

-------
MARLAP Recommends...
(Section 2.8)
•	Using a directed project planning process
•	Radioanalytical specialists be a part of the integrated
effort of the project planning team
•	The planning process rationale be documented and
the documentation integrated with the project plan
documents
•	A graded approach in which the sophistication, level
of QC and oversight, and resources applied are
appropriate to the project

-------

-------
DQOs and the Development
of MQOs
Module 3
Carl V. Gogolak

-------
Data Quality Objectives
DQOs define the performance
criteria that limit the
probabilities of making
decision errors by:
•	Considering the purpose of
collecting the data
•	Defining the appropriate type
of data needed
•	Specifying tolerable
probabilities of making
decision errors
Guidance on Systematic
Planning Using the Data
Quality Objectives Process
EPA QA/G-4


-------
Measurement Quality Objectives
DQOs apply to both sampling
and analysis activities
MQOs can be viewed as the
analytical portion of the
overall project DQOs
MQOs are the part of the
project DQOs that apply to
the measured activity result
and its associated uncertainty
Multi-Agency Radiological
Laboratory Analytical Protocols Manual
Volume I: Chapters 1-9 and Appendices A-E
MARLAP Section 3.3.7

-------
Measurement Quality Objectives
MQOs are statements of performance objectives or requirements
for a particular analytical method performance characteristic. For
example:
•	Method uncertainty
•	Detection capability
•	Ruggedness
•	Specificity
•	Range
In a performance-based approach:
•	MQOs are used initially for the selection and evaluation of
analytical protocols
•	MQOs are subsequently used for the ongoing and final evaluation
of the analytical data
The most important MQO is the analytical uncertainty at a
specified concentration (the action level)
MARLAP Section 3.3.7.1

-------
•	Refer to Attachment 3A in the course book behind Tab 13. Refer to NIST TN1297.
•	International Organization for Standardization (ISO). 1995. Guide to the Expression of Uncertainty
in Measurement. ISO, Geneva, Switzerland.
•	The ISO Guide to the Expression of Uncertainty in Measurement, or GUM, 1995, is available in
U.S. ($25) and international ($92) editions. The editions contain the same material, differing only
in decimal marker, spelling, and size. The ISO International Vocabulary of Basic and General
Terms in Metrology (VIM), 1993, a companion document to the GUM, is available only in an
international edition ($71). The U.S. edition of the GUM is: American National Standard for
Expressing Uncertainty—U.S. Guide to the Expression of Uncertainty in Measurement,
ANSI/NCSL Z540-2-1997.
•	NIST Technical Note 1297. Taylor, B.N. and C.E. Kuyatt (2003). Guidelines for Evaluating and
Expressing the Uncertainty of NIST Measurement Results. National Institute of Standards and
Technology (NIST), Gaithersburg, MD 20899-0001. Technical Note 1297. Available at:
http://physics.nist.gov/Pubs/pdf.html (pdf) and http://physics.nist.gov/Pubs/guidelines/
contents.html (html). Based on the comprehensive International Organization for Standardization
(ISO) publication, Guide to the Expression of Uncertainty in Measurement.
American National Standards Institute
105-111 South State Street
Hackensack, NJ 07601
(phone) 212-642-4900
(fax) 212-302-1286
ISO Central Secretariat
1 rue de Varembe
Case postale 56
CH-1211 Geneve 20
SWITZERLAND

-------
DQOs and Uncertainty
No measurement program or sampling plan can be
adequately designed without some estimate of the
uncertainty in the data relative to the action level.
If there were no measurement uncertainty and no
spatial variability, how many measurements would be
needed to find the average concentration of a
radionuclide in an area?
Total uncertainty combines measurement uncertainty and spatial variability:
σ² = σM² + σS²
The answer to this question reveals why several samples are usually averaged: to
reduce the decision maker's uncertainty by reducing the effects of both measurement
uncertainty and spatial variability.
Planning is required to know how many samples are actually necessary to make
good decisions.
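A small numeric illustration of that point; the standard deviations are assumed values for demonstration, not MARLAP figures. Averaging N independent samples shrinks the standard deviation of the mean by a factor of 1/√N, reducing both contributions at once.

```python
# Illustrative sketch: SD of the mean of N samples, with assumed values for
# the measurement and spatial (sampling) standard deviations.
import math

sigma_meas, sigma_spatial = 0.3, 0.9               # assumed SDs, data units
total_sd = math.hypot(sigma_meas, sigma_spatial)   # sqrt(0.3^2 + 0.9^2) ~ 0.95

for n in (1, 4, 9, 16):
    print(f"N = {n:2d}: SD of the mean = {total_sd / math.sqrt(n):.3f}")
# With no measurement uncertainty and no spatial variability, a single
# measurement would suffice, which answers the slide's question.
```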


-------
Uncertainty and the Action Level
The closer the mean of the
distribution of analytical
results is to the action
level, the smaller the
uncertainty needs to be to
distinguish the mean from
the action level.
(a) Δ/σ > 4: relatively large uncertainty can be tolerated.
(b) Δ/σ ≈ 1: either more accuracy or more samples are needed.
(Figure: distributions of results relative to the action level, showing
the mean, the action level, and the separation Δ.)
-------
Critical Value
(Figure: "signal absent" and "signal present" normal distributions, with
the critical value marking the detection decision threshold.)
MARLAP Attachment B2, Decision Error Rates and the Gray Region for Detection
Decisions.
A critical value (or critical level) is used in hypothesis testing. In this example it is
the value that the test statistic must exceed in order for the null hypothesis to be
rejected.
A critical value is a value determined in advance to decide whether a hypothesis
will be accepted or rejected. If an observed value is at or beyond the critical value
(in the rejection region), the hypothesis is rejected; otherwise (if the observed
value is in the acceptance region), the hypothesis is accepted. Note that the
critical value divides all possible values of the test statistic into these two regions,
the rejection region and the acceptance region.
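A minimal sketch of the computation behind this picture, assuming a Gaussian model and illustrative values for σ and α (not values taken from MARLAP):

```python
# Critical value for a detection decision under an assumed Gaussian model.
from scipy.stats import norm

alpha = 0.05      # Type I error rate (false detection on a blank)
sigma0 = 1.0      # assumed SD of the net signal when the analyte is absent

# Smallest observed net signal that leads to rejecting H0 ("not present"):
y_crit = norm.ppf(1 - alpha) * sigma0      # 1.645 * sigma0 for alpha = 0.05
print(f"critical value = {y_crit:.3f}")    # values above this are "detected"
```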

-------
Connecting the MQOs to the DQOs
•	Decision errors are possible because there is uncertainty
in the data
•	One part of the uncertainty is analytical measurement
uncertainty
•	Variation among samples with space or time also adds
uncertainty
•	To limit decision errors, the analytical measurement
uncertainty should be limited to a level appropriate to the
DQOs
How can you do that before you have any data?
MARLAP Appendix C, provided at Tab 13.

-------
MARLAP Emphasizes Developing an MQO for Method Uncertainty
Developing a process to specify a required
maximum method uncertainty
Method Uncertainty refers to the predicted uncertainty of a
measured value that would be calculated if the method were
applied to a hypothetical laboratory sample with a specified
analyte concentration
Method uncertainty is a characteristic of the analytical
method and the measurement process
Measurement uncertainty is a characteristic of an individual
measurement
See Section 3.3.7 and page 8 of chapter 1.

-------
Developing MQOs for Method Uncertainty
Data are collected so that decisions can be
made about...
•	... individual samples.. .as for bioassays
•	... the mean of a sampled population ...as
for MARSSIM final status surveys
MARSSIM = Multi-Agency Radiation Survey and Site Investigation Manual.
MARSSIM provides guidance on how to design and implement a study to
demonstrate that a site meets appropriate release criteria.
Website: http://www.epa.gov/radiation/marssim/

-------
Decision Rules Specify How the Parameter of Interest and Action Level
Will Be Used To Make a Decision
A decision rule has three parts:
• Parameter of Interest
•	Action Level
•	Alternative Actions
Examples
•	If the activity of a sample exceeds a certain level, conclude
the sample contains the radionuclide(s) of interest;
otherwise conclude it does not.
•	If the mean concentration in an area is less than the action
level, conclude the area meets release criteria; otherwise
conclude that corrective action must be taken.
MARLAP Appendix B.

-------
Decision Rules
The decision rule will be applied by:
•	Collecting data
•	Computing test statistic related to the parameter of interest
•	Conducting a statistical hypothesis test
Examples
•	If the counts from a sample exceed a certain level, conclude the
sample truly contains the radionuclide(s) of interest; otherwise
conclude it does not
•	If the mean concentration from a set of samples is less than the
action level, conclude the true concentration in the area from
which the samples were taken meets release criteria; otherwise
conclude that corrective action must be taken
MARLAP Appendix B.3.6.

-------
Decision Rules



The decision maker and planning team
must be completely comfortable with the
decision rule regarding the criteria for
taking action



MARLAP Appendix B.3.6.

-------
Statistical Hypothesis Tests
•	Statistical hypothesis-testing provides a mechanism for
deciding between two mutually exclusive statements
based on the value of a test statistic calculated from the
data.
•	These statements are called the null hypothesis, H0, and
the alternative hypothesis, H1.
•	The null hypothesis is assumed to be true unless the value
of the test statistic obtained is very improbable under that
assumption. In that case the data are deemed inconsistent
with the null hypothesis. Therefore, it is rejected and the
alternative hypothesis is chosen instead.
MARLAP Appendix B.3.7 and Appendix C.2.

-------
Possible Decision Errors
DECISION	TRUE STATE	CONSEQUENCES
Reject H0...	...when it is actually true	Type I error (probability α)
Deciding not to reject H0...	...when it is actually false	Type II error (probability β)
MARLAP Appendix B, Table B-1.

-------
Decisions Made About Individual Samples
H0: Sample contains no radioactivity
H1: Sample contains radioactivity
Type I error: Decide there is radioactivity when there isn't
Type II error: Decide there is no radioactivity when there is
This is the familiar framework for MDC calculations
Refer to MARLAP Attachment 3B, MARLAP Section B.3.7, and MARLAP
Attachment B2 (Tab 13 of this course book).
•	Decisions made on individual samples.
•	The MDC problem.

-------
(Figure: distribution of net counts under the null hypothesis and
distribution of net counts under the alternative hypothesis.)
Refer to MARLAP Attachment 3B and MARLAP Attachment B2 (Tab 13 of this
course book).
Decisions made on individual samples.
The MDC problem.


-------
Action Level and Range of Concentrations
Specify the action level and a range of concentrations of interest around it.
(Figure: probability of detection versus concentration, from 0 to 1.5 AL.)
For example purposes, the range has been used as 0-1.5 times the action level.
Decisions made on individual samples. The MDC problem.
Refer to MARLAP Section B.3.7 and MARLAP Attachment B2 (Tab 13 of this
course book).
•	Decisions made on individual samples.
•	The MDC problem.

-------
Discrimination Level and Gray Region
Specify the Discrimination Level (DL).
The DL is a level that is important to distinguish from the AL:
concentrations ≥ DL but < AL form the gray region.
(Figure: probability of detection versus concentration, with the gray
region between the DL and the AL.)
Refer to MARLAP Section B.3.7 and MARLAP Attachment B2 (Tab 13 of this
course book).
•	Decisions made on individual samples.
•	The MDC problem.

-------
Discrimination Level and Gray Region for MDC Example
For the MDC problem, the DL = 0.
(Figure: probability of detection versus concentration; the gray region
runs from 0 to the AL.)
Refer to MARLAP Section B.3.7 and MARLAP Attachment B2 (Tab 13 of this
course book).
•	Decisions made on individual samples.
•	The MDC problem.

-------
Specify Desired Limit on the Probability of Type I Decision Errors
Limit the probability of flagging a blank sample by specifying α at the DL.
(Figure: probability of detection versus concentration, with α marked at
the DL and the gray region shaded.)
Refer to MARLAP Section B.3.7 and MARLAP Attachment B2 (Tab 13 of this
course book).
•	Decisions made on individual samples.
•	The MDC problem.

-------
Specify Desired Limit on the Probability of Type II Decision Errors
Limit the probability of missing a sample above the action level by
specifying 1 − β at the AL.
(Figure: probability of detection versus concentration, with 1 − β marked
at the AL and the gray region shaded.)
Refer to MARLAP Section B.3.7 and MARLAP Attachment B2 (Tab 13 of this
course book).
•	Decisions made on individual samples.
•	The MDC problem.

-------
Probability of Decision Errors in the Gray Region Not Controlled
(Figure: probability of detection versus concentration; within the gray
region the decision error probabilities are not controlled.)
-------
Probability of Detection Increases with Concentration
-------
Power Curve for the Detection Decision
(Figure: probability of exceeding the critical value versus true
concentration; the gray region spans 0 to the AL.)
Refer to MARLAP Section B.3.7 and MARLAP Attachment B2 (Tab 13 of this
course book).
•	Decisions made on individual samples.
•	The MDC problem.
•	We can map the probabilities of exceeding the critical level as a function of true
concentration from the previous slide onto a power curve.
•	Here, the action level is 4 and σ = 1.
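A sketch of that mapping under the stated assumptions (Gaussian model, σ = 1, action level 4, α = 0.05):

```python
# Power curve for the detection decision: probability that the net signal
# exceeds the critical value as a function of the true concentration.
from scipy.stats import norm

alpha, sigma, AL = 0.05, 1.0, 4.0
y_crit = norm.ppf(1 - alpha) * sigma             # ~1.645

for c in (0.0, 1.0, 2.0, 3.29, 4.0):
    power = norm.sf(y_crit - c, scale=sigma)     # P(detect | true conc = c)
    print(f"true concentration = {c}: P(detect) = {power:.3f}")
# At c = 0 the detection probability equals alpha (0.05); at c = 3.29 it
# reaches 0.95, which is why Delta/sigma = 3.29 limits both error rates to 0.05.
```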

-------
Receiver Operating Characteristic (ROC) Curve
(Figure: "signal absent" and "signal present" normal distributions with an
adjustable criterion; the resulting hit rate 0.950 and false-alarm rate 0.048
are plotted on the ROC curve.)
This graph shows the Signal Absent and Signal Present distributions that are
the basis of the signal detection theory model of decision making.
Healy, M. R., Berger, D. E., Romero, V. L., Aberson, C. L., & Saw, A. 2002.
Detection Theory Applet and Tutorial. Available online at http://wise.cgu.edu/.
Refer to MARLAP Section B.3.7 and MARLAP Attachment B2 (Tab 13 of this
course book).

-------
Calculating the Required Method Uncertainty
To limit the probability of decision errors, α and β, to 0.05, the standard
deviation of the analytical method must be small enough that
Δ/σ ≥ 1.645 + 1.645 = 3.29
For the detection decision the gray region runs from 0 to the AL (Δ = AL), so
σ ≤ AL / 3.29
(Figure: overlapping H0 and H1 distributions separated by Δ = AL.)
-------
The MQO for Method Uncertainty is the
Required Method Uncertainty
The performance requirement is that the upper bound for
the measurement standard deviation at the action level is
uMR = Δ / 3.29 ≈ 0.3 × AL
uMR is the required method uncertainty.
This is essentially the same as requiring that the MDC not
exceed the action level because:
a)	The minimum detectable concentration (MDC) is often
found to be about 3 or 4 times the measurement uncertainty
b)	MDC ≈ 3 σMR implies σMR ≈ MDC / 3
c)	If MDC ≤ AL, then σMR ≤ AL / 3 ≈ 0.3 × AL
Refer to MARLAP Appendix C.3, Scenario II and MARLAP Attachment B2 (Tab 13
of this course book).
•	Decisions made on individual samples.
•	The MDC problem.
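The same arithmetic as a short sketch; the z-quantiles come from scipy and the function name is invented for illustration. It reproduces the 0.3 × AL rule and the 1 pCi/L example on the next slide.

```python
# Sketch: largest method standard deviation that limits both decision error
# rates, from Delta/sigma >= z_{1-alpha} + z_{1-beta}.
from scipy.stats import norm

def max_method_sd(AL, DL=0.0, alpha=0.05, beta=0.05):
    z_sum = norm.ppf(1 - alpha) + norm.ppf(1 - beta)   # 1.645 + 1.645 = 3.29
    return (AL - DL) / z_sum

# MDC-style case (DL = 0): u_MR = AL/3.29, i.e., about 0.3 x AL.
print(f"AL = 1 pCi/L -> u_MR = {max_method_sd(1.0):.2f} pCi/L")   # ~0.30
```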

-------
The MQO for Method Uncertainty
Example
If the action level was 1 pCi/L, then the required method
uncertainty would be
uMR = 0.3 × AL = 0.3 pCi/L
The laboratory's estimate of its measurement uncertainty
at the action level is called the method uncertainty. This
must not exceed the required method uncertainty.
Refer to MARLAP Appendix C.3, Scenario II and MARLAP Attachment B2 (Tab 13
of this course book).
•	Decisions made on individual samples.
•	The MDC problem.

-------
Developing MQOs for Method Uncertainty
Data are collected so that decisions can be
made about...
•	... individual samples ... as for bioassays
•	... the mean of a sampled population ... as
for MARSSIM final status surveys
Refer to MARLAP Appendix C.3 Scenario I and MARLAP Attachment B1.
The following series of slides consider the mean of a sampled population.

-------
Decisions Made About the Mean of a Sampled Population
Decision Rule: If the true mean concentration in the
survey unit is less than the action level it may be
released for unrestricted use. Otherwise further
remediation may be required.
H0: The true mean exceeds the action level
H1: The true mean is below the action level
•	Type I error: Decide the true mean does not exceed AL
when it does
•	Type II error: Decide the true mean exceeds AL when it
doesn't
•	Sampled Population.
•	Refer to MARLAP Appendix C.3 Scenario I and MARLAP Attachment B1.

-------
Specify Gray Region and Desired Limits on the Probability of Decision Errors
for True Concentrations at its Upper and Lower Boundaries
(Figure: probability of deciding against the null hypothesis versus the true
mean concentration, with 1 − β marked at the DL and α at the AL.)
Limit the probability of missing a mean below the DL.
Limit the probability of missing a mean above the action level.
The width of the gray region, Δ, represents the smallest concentration
difference that it is important to detect.
Sampled Population.
Refer to MARLAP Appendix C.3 Scenario I and MARLAP Attachment B1.

-------
Power Curve
(Figure: probability of deciding the mean exceeds the action level versus
the mean concentration.)
Sampled Population.
Refer to MARLAP Appendix C.3 Scenario I and MARLAP Attachment B1.
Healy, M. R., Berger, D. E., Romero, V. L., Aberson, C. L, & Saw, A. 2002.
Claremont Colleges' "Web Interface for Statistics Education" (WISE) Power
Applet. Available online at http://wise.cgu.edu/power/powerapplet1.html.

-------
Criteria for Setting the MQO for Method Uncertainty
•	The width of the gray region is Δ = AL − DL
•	The number of samples needed to conduct the
hypothesis test with specified limits on α and β
depends on the relative shift, Δ/σ
•	To keep the number of samples reasonable, σ should
be such that 1 ≤ Δ/σ ≤ 3. Ideally, Δ/σ ≈ 3
•	The cost in samples required rises rapidly when Δ/σ
< 1, but there is little benefit from increasing Δ/σ
above 3
Sampled Population.
Refer to MARLAP Appendix C.3 Scenario I and MARLAP Attachment B1.
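A rough sketch of why those limits are chosen, using a textbook large-sample approximation rather than MARLAP's exact sample-size formulas:

```python
# Approximate number of samples for a one-sample test versus relative shift.
from scipy.stats import norm

def approx_n(delta_over_sigma, alpha=0.05, beta=0.05):
    """n ~ ((z_{1-alpha} + z_{1-beta}) / (Delta/sigma))^2, large-sample form."""
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)       # 3.29 for 0.05/0.05
    return (z / delta_over_sigma) ** 2

for shift in (0.5, 1.0, 2.0, 3.0):
    print(f"Delta/sigma = {shift}: n = {approx_n(shift):.1f}")
# Delta/sigma = 0.5 needs ~43 samples; 3.0 needs ~1.2: costs rise rapidly
# below 1, and there is little gain above 3.
```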

-------
Method Uncertainty as a Component of the Total Uncertainty
•	The total variance of the data is σ² = σM² + σS²
•	The sampling standard deviation, σS, depends on the
variability in the spatial distribution of the analyte
concentrations and other factors having to do with how the
sampling is performed
•	The analytical standard deviation, σM, is affected by
laboratory sample preparation, subsampling and analysis
procedures
Sampled Population.
Refer to MARLAP Appendix C.3 Scenario I and MARLAP Attachment B1.

-------
The MQO for Method Uncertainty
Generally it is easier to control σM than σS.
If σS is large, then the best one can do is make σM small relative to σS.
How small is small enough?
If σM ≈ σS/3, then the analytical method variance is contributing less
than 10% to the total variance σ². Reducing it further will not reduce
σ very much.
This implies that the upper bound for σM should be
uMR = σ/√10 = (Δ/3)/√10 = Δ/(3√10) ≈ Δ/10
Sampled Population.
Refer to MARLAP Appendix C.3 Scenario I and MARLAP Attachment B1.
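A numeric check of the σM ≤ σS/3 rule of thumb; the σS value is assumed for illustration:

```python
# How much the method SD contributes to the total variance for several ratios.
import math

sigma_S = 3.0                        # assumed sampling (spatial) SD
for ratio in (1.0, 0.5, 1/3, 0.1):   # sigma_M as a fraction of sigma_S
    sigma_M = ratio * sigma_S
    total = math.hypot(sigma_S, sigma_M)      # sqrt(sigma_S^2 + sigma_M^2)
    share = sigma_M**2 / total**2             # method share of total variance
    print(f"sigma_M/sigma_S = {ratio:.2f}: variance share = {share:.1%}")
# At sigma_M = sigma_S/3 the method contributes only ~10% of the variance;
# pushing it lower barely reduces the total standard deviation.
```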

-------
Required Method Uncertainty and the Minimum Quantifiable
Concentration (MQC)
•	If LBGR = 0, then uMR = AL/10
•	This is the same as requiring that the relative standard
deviation of the measurements be 10% at the action level; that is,
that the MQC (the concentration at which the relative standard
deviation is 10%) not exceed the action level
-------
Method Uncertainty: MARLAP's Common Thread
Definition:
•	Predicted uncertainty of a measured value that would likely result from the analysis of a sample at a
specified analyte concentration.
•	Combines imprecision and bias into a single parameter whose interpretation does not depend on context.
MARLAP recommends:
•	Identify the method uncertainty at a specified concentration (typically the action level) as an important
method performance characteristic.
•	Establish a measurement quality objective for method uncertainty for each analyte/matrix combination.
MQO for the method uncertainty (at a specified concentration):
•	Links the three phases of the data life cycle: planning, implementation, and assessment.
•	Related to the width of the gray region. The gray region has an upper bound and a lower bound. The upper
bound typically is the action level, and the lower bound is termed the "discrimination limit."
Examples of MQOs for method uncertainty at a specified concentration:
•	A method uncertainty of 0.01 Bq/g or less is required at the action level of 0.1 Bq/g.
•	The method must be able to quantify the amount of 226Ra present, given elevated levels of 235U in the
samples.
Terminology:
•	uMR	Required method uncertainty (absolute)
•	φMR = uMR / AL	Required method uncertainty (relative)
•	Δ = AL − DL	Width of the gray region (range of values where the consequences of a
decision error are relatively minor)
•	Action level	Concentration that will cause a decisionmaker to choose one of the alternative
actions
•	Discrimination limit	Synonymous with the lower bound of the gray region

-------
Example Scenario

-------
On Average, Do Downwind Cows Have More 90Sr in their Milk?
•	Action Level = 8 pCi/L limit for total 90Sr in milk
•	Average background level of 2-3 pCi/L for 90Sr in milk
•	Choose Discrimination Level at 3 pCi/L
•	Δ = (AL − DL) = 8 − 3 = 5 pCi/L
uMR = σ/√10 = Δ/(3√10) ≈ Δ/10
uMR = 5/10 = 0.5 pCi/L	φMR = 0.5/8 ≈ 6%
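A quick check of these numbers, showing both the exact Δ/(3√10) value and the rounded Δ/10 rule used on the slide:

```python
# Worked check of the 90Sr-in-milk example: AL = 8 pCi/L, DL = 3 pCi/L.
import math

AL, DL = 8.0, 3.0
delta = AL - DL                           # 5 pCi/L
u_exact = delta / (3 * math.sqrt(10))     # Delta/(3*sqrt(10)) ~ 0.53 pCi/L
u_MR = delta / 10                         # rounded rule of thumb: 0.5 pCi/L
phi_MR = u_MR / AL                        # 0.0625, quoted as ~6% on the slide
print(f"u_MR = {u_MR} pCi/L (exact {u_exact:.2f}), phi_MR = {phi_MR:.1%}")
```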

-------
Power Curve
(Figure: probability of deciding the mean exceeds the action level versus
mean concentration; the gray region spans 3 to 8 pCi/L, with 1 − β marked
at the DL of 3 pCi/L and α at the AL of 8 pCi/L.)

-------
Required Method Uncertainty
The required method uncertainty, uMR, is specified at the action level:
uMR = 0.5 pCi/L
Below the action level, the bound on the standard deviation is constant
and equal to uMR.
Above the action level, the bound on the relative standard deviation is
constant and equal to φMR = uMR / AL.
(Figure: the uncertainty bound versus true concentration in pCi/L, with
the action level at 8 pCi/L.)
Refer to MARLAP Appendix C.4.1 (Tab 13).
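The piecewise bound can be written as a small helper function (a sketch using the milk-example numbers; the function name is invented):

```python
# MQO bound on the measurement SD versus true concentration
# (constant u_MR at and below the AL, constant relative bound phi_MR above it).
def required_u(conc, AL=8.0, u_MR=0.5):
    phi_MR = u_MR / AL                    # required relative method uncertainty
    return u_MR if conc <= AL else phi_MR * conc

for c in (2.0, 8.0, 16.0):
    print(f"c = {c:4.1f} pCi/L: required u <= {required_u(c):.2f} pCi/L")
# 0.50 at and below the AL; 1.00 at 16 pCi/L (6.25% of the concentration).
```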

-------
Review of Symbols
σ = [σS² + σM²]^½ is the total standard deviation of the data, where σS is
the sampling standard deviation and σM is the analytical (method)
standard deviation
-------
Review of Symbols
(Continued)
A is the width of the gray region
A = (Action Level - Discrimination Level) = (AL-DL) =
(UBGR-LBGR)
α is the probability of a Type I decision error
β is the probability of a Type II decision error
α and β are often taken to be 0.05.

-------

Required Method Uncertainty
The required method uncertainty, uMR, and the required
method relative uncertainty, φMR = uMR/AL, together specify the MQO for
method uncertainty: uMR bounds the measurement standard deviation at and
below the action level, and φMR bounds the relative standard deviation above it.
Refer to MARLAP Appendix C.4.2.
46

-------

-------
Key Analytical Planning Issues:
MQOs and APSs
Module 4
Bob Litman

-------
A zip-lock bag with mud, stones, and wire was left on my desk at a nuclear plant.
The attached note from "J.R." was all the information received.
A system engineer had pulled this out of one of the feedwater heaters (it was
labeled potentially contaminated but the heater was outside the radiologically
controlled area).
He wanted to know, "What is the material made of and where did it come from?"
After a visual exam, I told him it was rust from inside the condenser and the
pre-heaters forward of this one.
"What about the chunks?" he asked.
I took out a microscope and prepared a slide. "It appears to be weld wire and weld
slag", I said after the visual with the microscope.
"What about scaffolding material?" he asked.
I told him, "I'll need a couple of days to analyze for aluminum. How much
scaffolding are we looking for?"
"A lot," he replied.
When I brought the sample to the lab (inside the RCA) I first did a gamma
spectrometry analysis to see if the sample was contaminated. Sure enough, it
contained 58Co and 60Co. When I told this to the engineer he said, "Oh, that's
interesting. But that's not important to what I need to know".
The engineer's PROJECT was to find out what the material was made of to
determine if it came from staging. What SPECIFICATION should he have made
about the sample so that the ANALYTICAL process would have proceeded more
smoothly?

-------
Overview of the Analytical Process
The Project Manager must ensure that technically
knowledgeable personnel write the Analytical Protocol
Specification (APS), before sampling occurs.
When the APS is completed...
Project Manager is responsible for identifying processes
that comprise the entire sample life:
•	Sampling...
•	.. .through analysis...
•	...to data trending...
•	.. .and everything in between
The Project Manager needs to ask the right questions and ensure that the
questions that relate to the sample processing are answered correctly.
All of this seems very obvious, but in reality it is seldom done!

-------
Key Analytical Planning Issues
A key analytical planning issue has a significant effect on
the selection and development of analytical protocols or
has the potential to be a significant contributor of
uncertainty to the analytical process and, ultimately, the
resulting data.
Project Manager is responsible for having the "vision" and entrusting his
team specialists to find ways to implement the mission.
Need to know what we see as the end result so that the steps, from
sampling to final data analysis, can support the end result.
What we are going to do is discuss the key planning issues discussed in
Chapter 3.

-------
Analytical Protocol Specifications
Output of a directed planning process that
contains the project's analytical data needs
and requirements in an organized, concise
form
The APSs represent the resolution and documentation of key analytical
planning issues, many of which will be covered in this session.

-------
The APS Will Contain....
The Analytical Protocol Specifications (APS) is the
central planning document that contains
important information:
•	Specifics about the range of concentrations to be determined
•	Potential interferences (radiological and non-radiological) that
may eliminate several potential methods
•	Amount of sample required, preservation, pretreatment, etc.
•	The required method uncertainty, uMR
•	Many other factors affecting the analytical results
This information is used to construct the MQOs that
will appear in the APS

-------
The Premier Document in the Process
The APS must be completed before samples are
collected or analyzed
"If a ship captain does not know to which port he is steering, no wind is
favorable"
The next segment of the presentation deals with the specific items which
need to be considered, detailed or eliminated when writing an APS.

-------
Types of Key Analytical Planning Issues
General
•	Development of an analyte list
•	Concentration ranges
•	Chain of custody
Matrix Specific
•	Filtration of liquid samples
•	Solid samples sieved to a particular mesh range
8

-------
Analyte List (General)
(3.3.1)
Which radionuclides are we looking for?
Based on:
•	Process knowledge
•	Historical site assessment (HSA)
•	Previous studies of this or similar sites
•	Preliminary project studies
There are 2,540 radionuclides and 480 isomeric states of nuclides. The
selection of what could possibly be at the site being assessed would be
based on each radionuclide's half-life, how long before the site assessment
the radionuclide was produced or used, and how much was present at any one
time.
9

-------
Concentration Ranges (General)
(3.3.2)
What is the expected concentration range for each
radionuclide in the samples we will be analyzing?
Based on:
•	Historical site assessment (HSA)
•	Existing data on expected "background" values
What might be one reason we need to know how much is
present?
•	Safety to samplers
•	Safety to analysts
•	Field versus laboratory analysis
•	Potential laboratory methods
•	Costs
•	Sample size

-------
Matrices of Concern (General)
(3.3.3)
What are the matrices in which the radionuclides can
potentially be contained?
What needs to be evaluated about each matrix?
•	Homogeneity/heterogeneity (commonly ignored)
•	Potential hazards to sampler or analyst
•	Chemical composition of the matrix
•	Pretreatments prior to analysis
•	What are examples of matrices: surface water, groundwater, soil,
concrete, asphalt, gypsum, linoleum, etc.
•	Will or should the samples be homogenized? In the field or by the lab?
•	What are the contaminants in this matrix, besides what you're looking for,
that may prove hazardous? For example, PCBs, explosives (H2 in tank
water), VOCs, asbestos, carbon monoxide, etc.
•	"Solid" is not a matrix! Humic, sand, loam are different descriptors for soil.
Linoleum? What is it made out of? Lab needs to know so that they can
analyze it successfully.
•	Does the groundwater sample need to be filtered, does the sandy soil
sample need to be sieved, must the whole sample be used, must the
whole sample be dissolved,...?
11

-------
Relationships Among the Radionuclides of Concern (General)
(3.3.4)
•	Parent-progeny
•	Easy-to-detect as markers for hard-to-detect
•	Process knowledge
•	If medical waste is present and Tc-99m generators are part of the waste,
what might be present that most likely will be undetected by gross alpha-
beta analysis? Tc-99: a parent-daughter relationship plus process
knowledge.
•	Site produced 205Pb by neutron bombardment of 204Pb. What else might
you expect? 204Tl (half-life 3.8 years) from (n,p) reactions.

-------
Project Resources and Deadlines (General)
(3.3.5)
All projects have schedules. Some considerations are:
-	Existing method?
-	Turn-around-time?
-	Available funds?
These considerations may change with the project phase...
•	Does a validated method exist for the analyte matrix or does one need to
be developed, and can it be done within the scope of work?
•	Can an offsite laboratory get the results to the decision makers in a timely
manner, or will an on-site lab need to be established?
•	Will the funds available support the deadline based on the types of
samples and analyses?
•	In initial phases of the project, the time frame to get results may be
shorter and require less precise measurement. As the analyte and matrix
list is revised, the time to obtaining results may change and
measurements may become more precise.
13

-------
Refine Analyte and Matrix List (General)
(3.3.6)
Based on updated project information
Should be a routine part of the project development to
review these lists to ascertain if something (either an
analyte or matrix) is:
-	Omitted
-	Unnecessary
Initially, you may know there is radioactive contamination. This could be one
of 3,020 possible radioactive substances (this is a small list compared to the
2+ million known organic compounds!). As more information about the site
becomes available, this list is pared down and the number of potential
radionuclides and possible matrices decreases. This should be done at
routine intervals in the early stages of the project and less frequently as time
progresses.
14

-------
Method Performance Characteristics and MQOs
(3.3.7)
•	Examples of Method Performance Characteristics
-	Method uncertainty
-	Method detection capability
-	Method range
-	Method specificity
-	Method ruggedness
•	An MQO is a quantitative or qualitative statement of a
performance objective or requirement for a particular method
performance characteristic
•	MARLAP recommends that an MQO for method uncertainty
(uMR) be established for each radionuclide/matrix combination
This is where the Project Manager must hone the details of the project
goals. The above information is critical to developing project MQOs based
on the methods selected.
15

-------
Example MQOs for Select Method Performance Characteristics
(Matrix specific) (3.3.7.1)
•	Example MQO for minimum detectable concentration
(MDC)
-	An MDC for 60Co in water samples of 0.5 pCi/L for each
sample
•	Example MQO for required method uncertainty
-	A method uncertainty for 137Cs in soil of 2 pCi/g at the
action level of 20 pCi/g
Refer to the APS in the handout (Tab 14). What do we know about the
historical data for the analysis of 90Sr in milk?
What methods of analysis will we be using?
16

-------
Limitations on Analytical Options (General and Matrix Specific)
(3.3.8)
•	Determined during the project planning phase
•	Limiting analytical options based on
-	Historical data
-	Known interferences
-	Known method limitations
•	May be determined by presence of other radionuclides
Significant quantities of 232Th can be determined directly by gamma
spectrometry using the 228Ac 911 keV gamma ray as long as there is
confidence that the actinium is always supported in the matrices to be
analyzed. However, the presence of 60Co would cause significant
background (from Compton) problems in that region of the gamma
spectrum. This limits the analytical options to methods other than gamma
spec if the MQOs cannot be achieved due to the presence of 60Co.
17

-------
Method Availability (Matrix Specific)
(3.3.9)
•	Does the method exist?
- May require research and development
•	Is the method validated for the project's matrices?
•	Is it performed routinely enough to support the project
activities?
Method validation is a concept that deals with the laboratory's experience
with the method for the particular analyte in the particular matrix of your
project. (More about method validation in Module 9 about Chapter 6.)
18

-------
QC Samples: Types and Frequency (Matrix specific)
(3.3.10)
• Laboratory blank
• Matrix spike
• Laboratory control sample (LCS)
• Duplicate sample
• Matrix spike duplicate
19
Any or all of these may be chosen for the project. The radiochemical
specialist, in coordination with the project team, needs to decide which ones
should be used.
The first four will be discussed in more detail in Module 10 (on MARLAP
Chapter 7), where we will discuss acceptance criteria for each of these. A
Performance Evaluation/Testing sample (external program) is not
considered a "lab QC Sample," but is an important part of the laboratory QA
program and should be part of a laboratory's analytical load.
19

-------
Sample Tracking and COC
(3.3.11)
Samples are obtained and sometimes stored prior to shipment. Locked up?
Refrigerated?
Who handled the sample? Must each person verify sample in their custody?
What about shipping? Are all containers sealed with tamper-proof tape?
Original chain of custody returned to site? What about chain of custody at
the laboratory?
20

-------
Data Reporting Requirements (General)
(3.3.12)
•	What format should the data report have?
-	Results ± 1, 2, or 3 times the combined standard uncertainty?
-	Units?
-	MDC, critical level, MQC, ...?
•	Two ID numbers—both should be on the data report
•	Particulars for each sample?
•	Enough data to reproduce calculations?
There is no standard report format. The Project Manager must let the lab
know the project data requirements.
What are the two ID numbers? The Project ID and the LAB ID.
21

-------
Matrix-Specific Issues
(3.4)
Sample container to be used?
Whole sample to be analyzed, or how can
representative sub-sampling be assured?
Sample to be homogenized by laboratory?
Spurious detritus in sample to be discounted/
discarded/analyzed separately?
How could the sample container affect results? Filter papers stored in
plastic containers: static charge could dislodge particulates...
Issue of subsampling in the laboratory is usually not addressed adequately.
See MARLAP Appendix F (Volume II) for guidance.
What should the laboratory do with twig parts or plant roots in soil samples?
22

-------
Example of an APS
•	Handout provides an APS for detection of 90Sr in milk
•	We will discuss each area of the APS and the
significance of each specification
Refer to APS example behind Tab 14.
23

-------
What Do We Know About Strontium in Milk?
•	Assume that the selection of the analyte list is
completed based on historical assessment of the project
•	Section 3.2 directs us to identify the matrices,
concentration range and any chemical or radiological
interferences
4. Key Analytical Planning Issues
Describe the potential interferences (can be radioactive or non-radioactive):
•	Calcium, milk fats, 40K, fission products, magnesium
The anticipated concentration ranges:
•	Is it expected that zero pCi/L will be found in milk? What is the
historical background for 90Sr in milk?
•	Are there any other facts they know about strontium in milk that may be
included in the APS description?
Collection procedure:
•	How is the sample preserved until analysis? Does the type of animal
(cow, sheep, or goat) matter? Where did the animals graze?
24

-------
The basics
Analytical Protocol Specifications
Analyte List: 90Sr
Analysis Limitations: Perform direct measurement of analyte. Analysis of
progeny allowed if radioactive equilibrium is established at laboratory from
freshly isolated parent.
Matrix: Raw milk
Possible Interferences: Fresh beta-emitting, fission-product nuclides if
purification steps are inadequate or non-existent.
Concentration Range: 1 to 50 pCi/L	Action Level: 8 pCi/L
Method Validation Level: MARLAP Levels A, C, or D as applicable. See Attachment C for details.
MQOs: A required method uncertainty (uMR) of 0.5 pCi/L or less at 8 pCi/L
Refer to the APS in the handout (Tab 14).
The questions that this segment of the APS answers are:
1.	What? Radionuclide. Note: Only one analyte is listed. An exception
would be gamma spectrometry.
2.	Where? Matrix. Note: Only one matrix is listed.
3.	How much? Upper/lower concentration range expected
4.	Important level for decision making? Action level and required method
uncertainty at the action level
5.	How to prove it can be done? Method validation level (to be discussed
during Module 9 on MARLAP Chapter 6)
6.	Potential problems? Limits on the analysis and interferences. Heads up
to the laboratory on what needs to be accounted for in the chemical
separations and analyses.
25

-------
What Methods Meet the MQO?
MQO: A required method uncertainty (uMR) of 0.5
pCi/L or less at 8 pCi/L

Routine method uncertainty (pCi/L):
	LSC	0.2
	Beta detector	1.0
	GPC	0.3
	Required for project	0.5 (the required method uncertainty)
The TEC and radiochemical specialist need to assess the three methods and
their method validation documentation, and determine which methods meet
the APS specifications.
It should be emphasized that uMR is one sigma (1σ).
LSC is liquid scintillation counting; GPC is gas proportional counting.
26

-------
Stipulation of Quality Control
Type	Frequency	Evaluation Criteria
Method blank	1 per batch	See Attachment B
Duplicate	1 per batch	See Attachment B
Matrix spike*	1 per batch	See Attachment B
*Spiking range provided in Attachment B of the APS (Analyte Detection)
Refer to the APS in the handout (Tab 14, page 3).
These will be discussed in detail when we discuss Chapter 7. The important
points to note here are that the Project Manager decides the type of QC
samples to be performed and the frequency. This is an example of batch
requirements for quality control samples!
27

-------
APS—Analytical Process Requirements
ACTIVITIES (for the SPECIAL REQUIREMENTS, see the Example APS at Tab 14):
1.	Field Sample Preparation/Preservation
2.	Sample Receipt/Inspection
3.	Lab Sample Preparation
Continued...
This is an example APS. Refer to the APS in the handout (Tab 14, page 2).
28

-------
Analytical Process Requirements (Continued)
ACTIVITIES (for the SPECIAL REQUIREMENTS, see the Example APS at Tab 14):
4.	Sample Dissolution
5.	Chemical Separations
6.	Preparing Sources for Counting
7.	Nuclear Counting
8.	Data Reduction and Reporting
9.	Sample Tracking Requirements
10.	Other: Chemical Yielding
This is an example APS. Refer to the APS in the handout (Tab 14, page 2).
•	Sample dissolution: NONE is listed. Does that mean that digestion can't
be done? Remember, performance-based requirements.
•	What is the significance of the chemical yield requirements being distinct
for 85Sr tracer vs Sr carrier? Minimize uncertainty due to yield mass,
which ultimately minimizes uncertainty in the yield and the counting data.
29

-------
Attachment A-Data Reduction
1.	Calculation methodology for 90Sr
2.	Combined standard uncertainty for 90Sr calculation
3.	Sample-specific MDC based on analytical parameters measured
4.	Sample-specific critical level
5.	Specific intermediate calculations required (ingrowth factors)
6.	Data reduction process reviewed as on-site audit or desk audit, by client
7.	No changes in data reduction process without approval
Continued...
-------
Attachment A (Continued)-Data Reporting
1.	Sample-specific parameters to be reported
2.	Sample processing factors or parameters
3.	Required calculated information
4.	Batch QC results to be reported with each batch of samples
5.	Laboratory to provide a narrative for each batch of samples
6.	Reports: electronic and hard copy
See Attachment A of APS at Tab 14, page 2.
These are examples of what could be requested by the client. There are
additional factors that could be requested, such as:
•	Certificates for standards used
•	Trend graphs for all QC results
•	Trend graphs for specific analyte recoveries
•	Copies of "condition reports" or laboratory incident reports that may affect
the sample processing
•	Etc.
31

-------
APS Documents the Key Analytical Planning
Issues:
✓ Developed by the project team
✓ Created before sampling and analysis begin
✓ Tells the laboratory what is required of them in
specific detail
✓ Identifies MQOs
✓ Used as the roadmap to validate/assess results
Because the APS documents the resolution of the key analytical planning
issues:
•	It is critical that the whole project team review the APS to ensure that the
projected results will meet their specific needs.
•	The APS must be done before anything is analyzed.
•	There should be "give and take" with the laboratory so that the
requirements in the APS are not overly restrictive.
•	The MQOs should have already been selected and approved by all
stakeholders, so that when results begin to accumulate there is no
question as to what the measurements really mean.
•	The data validators/verifiers and assessors should use the APS to ensure
that the results have met the projected needs of the project.
32

-------
MARLAP Recommends...
•	Assumptions made during resolution of key analytical
planning issues be documented
•	An action level and gray region be established for each radionuclide
•	MQOs be established for select method performance
characteristics
•	An MQO for method uncertainty always be established for
each analyte/matrix combination
•	That all measurement results be reported directly as
obtained, including negative values, along with the
measurement uncertainty
See handout of consolidated MARLAP recommendations for Part I (Tab 16).

-------
Class Activity on APS
•	Each group will write their own APSs
•	Each group will designate
-	Project Manager
-	Radiochemical specialist
-	Field sampling coordinator
-	Certified Health Physicist
•	Blank APS form provided
•	APSs will be based on "The Plutonium Fabricators,
Ltd." scenario
•	Solution will be distributed following exercise
The Project Planning Team usually consists of a project manager, one or
more radioanalytical specialists, a certified health physicist, and a field
sampling coordinator. MARLAP recommends that the composition and
size of the team reflect the size and complexity of the project. The
following are examples of project roles and responsibilities; they may
change depending on the individual project:
•	Certified Health Physicist: Responsible for dose assessment, field
sampling locations, and modeling.
•	Radiochemical specialist: Responsible for establishing the correct
procedures for the analysis desired, the method uncertainty, and data
review and validation.
•	Field sampling coordinator: Responsible for identifying the proper
sampling techniques and validity of the samples.
•	Turn to Tab 18.
34

-------
5

-------
Project Plan Documents:
Important Recommendations
Module 5
David McCurdy

-------
Importance of Project Plan Documents (4.2)
•	Support data defensibility for environmental
compliance
•	Define project objectives
•	Tool for communication with stakeholders
•	State the references used to support the data to be gathered (why we're
doing 60Co and not 55Fe)
•	Define and uphold the plan objectives — keep the project team focused.
•	The stakeholders know what you're going to do
2

-------
MARLAP Recommends a Graded Approach
(4.3)
•	Diversity of environmental data collection
activities
-	Affects detail and content of plan
•	Flexibility in applying guidance
-	According to the nature of the work being
performed and the intended use of the data
3

-------
Link Project Plan Documents to Project Planning Process
(4.6)
MARLAP recommends that the project plan
documents integrate all technical and
quality aspects for the life cycle of the
project
•	Planning
•	Implementation
•	Assessment
4

-------
MARLAP Recommends a Primary Project Plan Document That Includes Other
Documents by Citation or As Appendices (4.4.2)
•	Primary project plan document integrates the
multi-disciplinary sections, other management
plans, and stand-alone documents
•	Appropriate management plans
-	Health and safety plan
-	Waste management plan
-	Risk analysis plan
Appropriate management plans may include these others as well:
•	Community Relations Plan
•	Records Management Plan
•	If available, the data validation plan and DQA plan
•	Detailed discussion of the project and a brief description of site history
5

-------
MARLAP Does Not Recommend a Particular Project Plan Document Approach,
Title, or Arrangement (4.4.2)
Reasons Why:
•	Federal and state agencies have different
requirements for the various environmental data
collection activities
•	May be regulatory requirements
•	Project plan document should reflect (and be
consistent with) organization's QA policies and
procedures
For example:
•	Radiological Environmental Monitoring Program
•	License Termination Plan
•	Decontamination and Decommissioning, etc.
6

-------
National Standards Guidance on Project Plan Documents (4.4.1)
•	ASTM D5283, Standard Practice for Generation of
Environmental Data Related to Waste Management
Activities: Quality Assurance and Quality Control
Planning and Implementation
•	ASTM D5612, Standard Guide for Quality Planning and
Field Implementation of a Water Quality Measurement
Program
•	ASTM PS85, Standard Provisional Guidance for
Expedited Site Characterization of Hazardous Waste
Contaminated Sites
7

-------
Elements of Project Plan Documents (4.5)
•	Project DQOs, APSs including the MQOs [Chapter 3]
•	Sampling and analytical protocols that will achieve the
project objectives [Chapters 3 and 10]
•	Assessment procedures and documentation sufficient to
confirm that the data are of the type and quality needed
[Chapter 8]
8

-------
Content of Project Plan Documents (4.5.1)
•	Project description and objectives
•	Identification of those involved in the data collection and
their responsibilities and authorities
•	Enumeration of the QC procedures to be followed
•	Reference to specific SOPs that will be followed for all
aspects of the project
•	Health and safety protocols
9

-------
Integrated Project Plan Documents
MARLAP strongly discourages using stand-alone plan
components of equivalent status without integrating
information and without a document being identified as a
primary document [4.5.2]
MARLAP recommends using a formal process to control
and document changes if updates of the original project
plan document are needed. [4.6]
10

-------
TABLE 4.2. Crosswalk Between Project Plan Document Elements
and Directed Planning Process
A. Project Management (9 elements)
B. Measurement/Data Acquisition (10 elements)
C. Assessment/Oversight (2 elements)
D. Data Validation and Usability (3 elements)
See Table 4.2 in handouts
Refer to Table 4.2 behind Tab 15
11

-------
TABLE 4.2. Crosswalk Between Project Plan Document Elements
and Directed Planning Process
Project Plan Document Element (QAPP-EPA, 2001): B4, Analytical Methods
Requirements (Measurement/Data Acquisition)
	Content: Identify analytical methods and procedures, including needed
	materials, waste disposal, and corrective actions.
	Directed Planning Process Input: The project plan team identifies inputs
	to the decision (analyte, matrices, etc.); establishes the required
	method uncertainty; and specifies the optimum sampling and analytical
	design.
Project Plan Document Element (QAPP-EPA, 2001): B5, Quality Control
Requirements (Measurement/Data Acquisition)
	Content: 1) Describe QC procedures and associated acceptance criteria
	and corrective actions for each sampling and analytical technique.
	2) Define the type and frequency of QC samples, along with the
	equations for calculating QC statistics.
	Directed Planning Process Input: The project plan team establishes the
	required method uncertainty, which will drive QC acceptance criteria,
	and establishes the optimized analytical protocols and desired MQOs.
See Table 4.2 in handouts (Tab 15)
12

-------
MARLAP Recommends...
•	Using a graded approach to project plan writing because
of the diversity of environmental data collection activities
•	Developing a primary integrating project plan that
includes other documents by citation or as appendices
•	Developing project plan documents that integrate all
technical and quality aspects for the life-cycle of the
project, including planning, implementation, and
assessment
•	Including the report on the directed planning process in
the project plan documents (by citation or in an appendix)
Continued...
13

-------
MARLAP Recommends...
(Continued)
•	Including a summary of the planning process if the
planning process was not documented in a report
- Assumptions and decisions, action levels, DQO statement,
and APSs (which include the established MQOs and any
specific analytical process requirements)
•	Using a formal process to control and document
changes if updates of the original project plan
document are needed
14

-------
6

-------
Measurement Uncertainty
Module 6
Keith McCroan

-------
Overview
•	Basic concepts (e.g., what is "uncertainty")
•	Why uncertainty is important
•	The role that uncertainty plays in MARLAP
•	Traditional practices
•	The GUM
•	Causes of uncertainty
•	MARLAP's recommendations
2

-------
What Is Uncertainty?
•	In general, "uncertainty" means a lack of complete
knowledge about something of interest
•	In metrology (the science of measurement)
uncertainty usually means uncertainty of
measurement, which has a more precise definition
3

-------
Definition of Uncertainty
•	"Parameter, associated with the result of a
measurement, that characterizes the dispersion of the
values that could reasonably be attributed to the
measurand" - International Vocabulary of Basic and General Terms in
Metrology (VIM)
•	Examples might include:
-	Standard deviation
-	Multiple of a standard deviation
-	Half-width of interval with stated level of confidence
4

-------
Comments on the Definition
•	Associated with result of a measurement
(Not with a measurement process or procedure)
•	Measurement result and the uncertainty together allow
one to place reasonable bounds on what the "true"
value might be
Recall that MARLAP defines the "method uncertainty" as a performance
characteristic of a measurement process.
5

-------
Question for the Class
•	If a lab reports that a sample of soil from a frequently
used playground contains 110 pCi/g of 239Pu, what
actions, if any, would you recommend?
- Insist that the lab report uncertainty of result
•	If the uncertainty is 10 pCi/g, one might conclude the
playground should be closed while more tests are
performed
•	If the uncertainty is 300 pCi/g, the result doesn't mean
much
6

-------
Importance of Uncertainty
If the result of a measurement is reported without
some indication of its uncertainty, the result is
useless for decision making
7

-------
Traceability
Are your results supposed to be "traceable"? If so,
note that the concept of traceability is defined as —
"Property of the result of a measurement or the value of
a standard whereby it can be related to stated
references, usually national or international standards,
through an unbroken chain of comparisons all having
stated uncertainties" - VIM
See notes to slide 12 for reference to VIM.
8

-------
Role of Uncertainty In MARLAP
•	MARLAP's approach to method evaluation and
selection uses criteria based on measurement
uncertainty (and the derived concept of method
uncertainty)
•	Criteria for evaluating a lab's performance based on
required method uncertainty
•	Criteria for evaluating internal laboratory QC based on
measurement uncertainty
•	Criteria for making decisions about the contents of
an individual sample based on measurement
uncertainty

-------
Traditional Practices
•	Radiochemists have known about uncertainty for
many years, but for most of that time there was no
standard terminology or notation
•	Often use the term "sigma" to mean an uncertainty
expressed as a standard deviation
•	Some use one sigma (1σ), 2σ, or even 1.96σ
•	Uncertainty often stated without any explanation,
leaving data users to make their own assumptions
10

-------
Traditional Practices
•	Incomplete uncertainty evaluations common
•	Reported uncertainty might be only the "counting
error"
- It is one component of the total uncertainty
•	Sometimes result might be reported with a relative
uncertainty of only a fraction of 1 % (usually
unrealistic)
•	Sometimes you might even see 0 ± 0 pCi/L (bad!)
11

-------
The GUM
•	Guide to the Expression of Uncertainty in
Measurement (GUM)
-	Published in 1993 by ISO in the name of 7 international
organizations
-	Presents terminology, notation, and methods for
evaluating and expressing measurement uncertainty
•	Promotes more-complete uncertainty evaluations and
comparability of uncertainty statements
International Organization for Standardization (ISO). 1995. Guide to the
Expression of Uncertainty in Measurement. ISO, Geneva, Switzerland.
The ISO Guide to the Expression of Uncertainty in Measurement, or GUM, is
available in U.S. ($25) and international ($92) editions. The editions contain the
same material, differing only in decimal marker, spelling, and size. The ISO
International Vocabulary of Basic and General Terms in Metrology (VIM), 1993,
companion document to the GUM, is available only in an international edition
($71). The U.S. edition of the GUM is: American National Standard for
Expressing Uncertainty—U.S. Guide to the Expression of Uncertainty in
Measurement, ANSI/NCSL Z540-2-1997.
American National Standards Institute
105-111 South State Street
Hackensack, NJ 07601
(phone) 212-642-4900
(fax) 212-302-1286
ISO Central Secretariat
1 rue de Varembe
Case postale 56
CH-1211 Geneve 20
SWITZERLAND

-------
The GUM - Continued
• MARLAP's primary recommendation regarding
measurement uncertainty is to
Follow the GUM
-	So we speak and write the same language about
uncertainty
-	So we can interpret each other's results and uncertainty
statements
13

-------
MARLAP and the GUM
•	If you follow the GUM, you're following the most
important part of MARLAP's guidance for evaluating
and expressing uncertainty
•	MARLAP goes further and applies the GUM to
radiochemical measurements
•	Most of additional guidance is intended to be helpful,
not prescriptive
14

-------
Question for the Class
How can you comply substantially with MARLAP's
guidance for evaluating and expressing uncertainty?
15

-------
Metrology and Statistics
•	What we're doing is called metrology, defined as the
science of measurement
•	Metrology ≠ statistics, although metrology uses
statistical methods and terminology
•	Metrology uses lots of approximations (with no
apologies) and defines new terms and symbols that a
statistician wouldn't recognize

-------
Results as Random Variables
•	We consider the result of a measurement to be a
random variable
•	The result can vary if the measurement is repeated, but
it should vary in a manner that can be described
probabilistically
•	Can discuss its probability distribution, mean,
standard deviation, etc.
17

-------
Standard Uncertainty
When we talk about the uncertainty of a result, we'll
usually mean the uncertainty expressed as a
standard deviation
GUM calls this a standard uncertainty
Traditionally standard uncertainty often called a
"one sigma" uncertainty
18

-------
What Causes Uncertainty?
•	One of the best-known sources of uncertainty is
"counting statistics"
•	A radiation counting measurement is based on the
detection of radioactive emissions produced by atoms
of radionuclides as they decay
•	Radioactive decay is inherently random
•	We can describe the probability that an atom will
decay during a specified time interval, but we can't be
100 % certain
19

-------
Counting Uncertainty
•	Radiation detection can also be random
•	If you could repeat the same radiation counting
measurement over and over with the same initial
conditions, you'd get a different result each time
•	Uncertainty of a result due to the randomness of
radioactive decay and radiation detection is what
MARLAP calls the counting uncertainty

-------
Causes of Uncertainty: Subsampling
•	Often the lab analyzes only a small portion of a much
larger sample
•	A typical sample has some heterogeneity, so one
portion differs in composition from another
•	Uncertainty due to subsampling is potentially very
large, but may be hard to quantify
21

-------
Causes of Uncertainty: Instruments
•	Measuring instruments and their operators aren't
perfect
•	Radiation detectors usually aren't capable of detecting
every particle or ray emitted from the sample
•	Even volumes obtained using volumetric glassware
and masses measured using precise analytical balances
have uncertainty

-------
Causes of Uncertainty: Standards
Standards have uncertainties in their stated values
- Including standard solutions used for instrument
calibration
Typical (standard) uncertainty for standard solution is
~ 0.5 % to 2 %
These uncertainties may exceed the uncertainty due to
counting statistics for measurements of samples with
very high levels of activity
23

-------
Causes of Uncertainty: Other
Many other causes of uncertainty
-	Variable background radiation levels (e.g., cosmic)
-	Errors in mathematical models used to describe
measurement process (e.g., calibration curves)
-	Errors in published values for constants (e.g., half-lives
and radiation-emission probabilities)
-	Impurities in reagents
-	Contamination of glassware or instruments
-	Changing environmental conditions in the lab
(temperature and humidity)
24

-------
Uncertainty Propagation
•	Final result typically not measured directly but
calculated from other measured values
•	Measured values might include volumes, masses,
times, and numbers of counts
•	Uncertainties of the input values combine to produce
uncertainty in output value
•	Mathematical operation of combining individual
uncertainties to obtain the total uncertainty of final
result is called propagation of uncertainty
25

-------
Combined Standard Uncertainty
•	Standard uncertainty of a result obtained by
propagating the standard uncertainties of all the input
values is called the combined standard uncertainty
•	"Total propagated uncertainty" (TPU) previously used
to denote same concept
26

-------
Notation
Standard uncertainty denoted by lower-case u
If x is a measured value, its standard uncertainty is u(x)
Exception: If standard uncertainty is combined
standard uncertainty, it may be denoted by uc(x)
Expanded uncertainty denoted by upper-case U
27

-------
Uncertainty Propagation
•	Propagating uncertainty not the simple addition of
uncertainty components
•	If you multiply a result x by a constant c, the
standard uncertainty of the product is |c| × u(x)
•	If you add two values x and y, the standard
uncertainty of their sum is the square root of the sum
of the squares of u(x) and u(y)
u(x + y) = √(u²(x) + u²(y))
- Think of the Pythagorean Theorem (next slide)
28

-------
The Uncertainty of a Sum
u_c(x + y) = √(u²(x) + u²(y))
[Figure: u(x) and u(y) drawn as the legs of a right triangle; the hypotenuse
is u_c(x + y)]
29

-------
Large and Small Components
•	A consequence of rules for uncertainty propagation:
- Small uncertainty components tend to contribute even
less to the total uncertainty than one might think
•	Combine two uncertainty components 10 and 3 - the
total uncertainty is only 10.4, not 13
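A quick numeric check, as a minimal Python sketch (the helper name is ours,
not MARLAP's):

    import math

    def quadrature(*components):
        # Combine independent standard-uncertainty components in quadrature.
        return math.sqrt(sum(u**2 for u in components))

    print(quadrature(10, 3))   # 10.44, not 13: the small component barely matters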

-------
Expanded Uncertainty
•	The lab might report the combined standard
uncertainty for each result...
•	Or multiply CSU by k to obtain a larger uncertainty,
obtaining a wider interval about the result with greater
probability of containing the true value
•	Product of k × CSU = expanded uncertainty
•	Factor k called coverage factor

-------
Questions for the Class
•	What is standard uncertainty?
•	What is combined standard uncertainty?
•	How do you denote the combined standard uncertainty
of y?
• What is expanded uncertainty?
32

-------
Rounding Results
•	Consider a result reported as 15.381 pCi/g with CSU
4.076 pCi/g
•	Final digits in the result don't mean much, because of
the uncertainty
•	More sensible to report the result as 15 with
uncertainty of 4, or 15.4 with uncertainty of 4.1
33

-------
Rounding Rules
•	There is a widely accepted method for rounding
results with uncertainty
•	Regardless of whether you report the CSU or an
expanded uncertainty, round the uncertainty to either 1
or 2 figures
- MARLAP prefers 2 in all cases - Others may differ
•	Then round the result to the same number of decimal
places
34

-------
Example: Rounding
•	Suppose a measurement result is 17.93602 Bq/L, and
lab reports the result with a CSU of 0.37301 Bq/L.
•	How would you round the result and the CSU
according to MARLAP?
35
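One way to apply the rule, as a small Python sketch (the helper name and the
two-figure default are ours):

    import math

    def marlap_round(value, uncertainty, sig_figs=2):
        # Round the uncertainty to sig_figs significant figures, then round
        # the value to the same number of decimal places.
        decimals = -(math.floor(math.log10(abs(uncertainty))) - (sig_figs - 1))
        return round(value, decimals), round(uncertainty, decimals)

    print(marlap_round(17.93602, 0.37301))   # (17.94, 0.37)
    print(marlap_round(15.381, 4.076))       # (15.4, 4.1)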

-------
Shorthand Notations
•	There are common shorthand notations for reporting
uncertainty
•	If reporting CSU, place the digits of the rounded
uncertainty in parentheses just after the digits of the
rounded result:
17.94(37) Bq/L
•	This format is not commonly used by radiochemists
•	May be encountered in published documents
36

-------
Other Shorthand Notations
•	For expanded uncertainty, report the numerical values
of the result and uncertainty in parentheses followed
by the unit of measurement, with the result and
uncertainty separated by ± (or +/−):
(17.94 ± 0.75) Bq/L
•	This format is more familiar to radiochemists
37

-------
38

-------
Summary of MARLAP's Recommendations
•	Use the terminology, notation, and methodology of
GUM
•	Report all results - even if zero or negative - unless
you believe they are invalid
•	Report either combined standard uncertainty or an
expanded uncertainty for each result
•	Explain the uncertainty - in particular, state coverage
factor for an expanded uncertainty
(continued)
39

-------
Summary of MARLAP's Recommendations
(Continued)
•	Consider all sources of uncertainty, and evaluate and
propagate all that are believed to be potentially
significant in final result
•	Do not ignore subsampling uncertainty (for solid
samples) just because it is hard to evaluate
•	Round reported uncertainty to 1 or 2 figures (we
suggest 2) and round the result to match
40

-------
Final Recommendation
•	All preceding recommendations are severable
•	Do as much as you can
•	At least use GUM's terminology and notation so that
we all speak and write the same language
•	Make further progress as time and resources permit
41

-------
Question for the Class
Does MARLAP prefer that a lab report the combined
standard uncertainty of each result, or an expanded
uncertainty?
42

-------
Question for the Class
When a lab reports an expanded uncertainty, what
coverage factor does MARLAP prefer?
43

-------
Question for the Class
44

-------
Question for the Class
45

-------
Your Questions?
46

-------
7

-------
Evaluating Measurement
Uncertainty
Module 7
Keith McCroan

-------
•	Brief review of Module 6
•	Uncertainty evaluation
-	How does one calculate and propagate uncertainty?
-	What are some pitfalls?
-	What tools are available to make it easier?
-	Some examples and one exercise

-------
Review of Module 6
•	What is MARLAP's primary recommendation
regarding measurement uncertainty?
•	What is a standard uncertainty?
•	What is a combined standard uncertainty?
3

-------
Review of Module 6
• What is expanded uncertainty?
• What is a coverage factor?
4

-------
Mathematical Model
•	Typically one does not measure the final result
directly
•	The value is calculated from other measured values
using a mathematical model of the measurement
•	The model relates values of the directly measured
quantities (input quantities) to the final result (output
quantity, which is the measurand)
5

-------
The Model
•	Model might be a single equation or set of equations
•	We follow the GUM here and represent it abstractly as
a single equation
Y = f(X_1, X_2, ..., X_N)
where Y denotes the output quantity and X_1, X_2, ..., X_N denote the input
quantities
-------
Input Estimates
•	Given a mathematical model of the measurement,
making a measurement requires estimating values of
the input quantities and using them to calculate the
value of the output quantity
•	Estimated values of the input quantities are called
input estimates
•	We denote the input estimates as x_1, x_2, ..., x_N.

-------
Output Estimate
•	Given the model and the input estimates, the value of
the output quantity is calculated
•	The calculated value is the output estimate:
y = f(x_1, x_2, ..., x_N)
We use lower-case variables for the input estimates and output estimate.
When you actually apply this theory, you'll use the same variable symbols whether
you're talking about the quantity or the estimated value of the quantity.
8

-------
Evaluating Uncertainty
•	We want the combined standard uncertainty of the
output estimate, y
•	First we need the standard uncertainty of each input
estimate, x_i
•	Then determine how much each of these uncertainties
contributes to the total uncertainty of y
•	Many ways to do the first step
9

-------
Methods of Uncertainty Evaluation
GUM describes two general types of uncertainty
evaluation (for input estimates):
-	Type A evaluation of standard uncertainty: by
statistical analysis of a series of observations
-	Type B evaluation of standard uncertainty: by any
other means
10

-------
Type A Evaluations
•	Canonical example:
-	Series of replicate measurements of input quantity X_i
-	Estimate the value by the average of the results
-	Estimate the standard uncertainty by the experimental
standard deviation of the mean
•	Least-squares regression is also a Type A method
•	If you have "degrees of freedom," it's probably a
Type A method
11

-------
Example: Type A evaluation
•	Make 6 measurements of an input quantity, X_i:
x_i,1 = 12, x_i,2 = 9, x_i,3 = 12, x_i,4 = 10, x_i,5 = 11, x_i,6 = 9
•	Use the average as the input estimate:
x̄_i = (12 + 9 + 12 + 10 + 11 + 9)/6 = 63/6 = 10.5
•	Experimental standard deviation* of these 6 values:
s(x_i,k) = 1.378
•	Let u(x_i) be the experimental standard deviation of
the mean, which equals 1.378/√6 = 0.5627
*See next slide
12

-------
Example: Type A evaluation
x̄_i = (1/6) Σ_k x_i,k = (12 + 9 + 12 + 10 + 11 + 9)/6 = 63/6 = 10.5

s(x_i,k) = √[(1/5)((1.5)² + (−1.5)² + (1.5)² + (−0.5)² + (0.5)² + (−1.5)²)]
         = √1.9 = 1.378

u(x_i) = s(x̄_i) = s(x_i,k)/√6 = 1.378/√6 = 0.5627
13
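The same Type A evaluation as a short Python sketch (variable names ours):

    import math
    import statistics

    data = [12, 9, 12, 10, 11, 9]      # replicate measurements of X_i
    x_i = statistics.mean(data)        # input estimate: 10.5
    s = statistics.stdev(data)         # experimental standard deviation: 1.378
    u_xi = s / math.sqrt(len(data))    # standard deviation of the mean: 0.5627
    print(x_i, round(s, 3), round(u_xi, 4))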

-------

Type B Evaluations
•	Any method of uncertainty evaluation that isn't
Type A is Type B
•	Many Type B examples, including
-	Poisson counting uncertainty
-	Using tolerances
-	Importing values with uncertainties from other sources
•	Sometimes a Type B evaluation is based on
professional judgment
In particular, if the uncertainty component is small, don't be afraid to make
an educated guess. E.g., how far might the meniscus deviate from the capacity
mark in a pipet?
14

-------
1st Example: Poisson counting
•	Estimate standard uncertainty of the number of counts,
N, observed in a typical radiation counting
measurement by square root of N
•	Assuming distribution of N is Poisson:
- Standard deviation = square root of the mean
15

-------
Example: Poisson counting
•	Radiation-counting measurement where distribution
assumed Poisson
- Observe N = 169 counts
•	Standard uncertainty evaluated to be √169 = 13
16

-------
Low-level Poisson Counting
•	One may see results reported as 0 ± 0
- When blank (or background) count and sample
count are zero and counting uncertainty is
estimated by taking square root of the count
•	Reporting 0 ± 0 or anything ± 0 is a bad idea
•	MARLAP recommends that when very low numbers
of counts are possible, evaluate the uncertainty of N as
u(N) = √(N + 1)
17

-------
Example: Low-level Poisson
•	Suppose the sample count, N_S, happens to be 1
•	Evaluate the uncertainty of N_S by
u(N_S) = √(N_S + 1) = √(1 + 1) = √2 = 1.414
18
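A minimal sketch in Python (helper name ours):

    import math

    def u_counts(n):
        # MARLAP-recommended counting uncertainty when very low numbers
        # of counts are possible: sqrt(N + 1).
        return math.sqrt(n + 1)

    print(u_counts(0), u_counts(1))   # 1.0 1.414... -- never report "0 +/- 0"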

-------

2nd Example: Rounding
•	Suppose an input estimate is rounded to 0.8
•	The original value might have been between 0.75 and
0.85
•	How do we account for this uncertainty?
•	Assume a rectangular distribution*, centered on 0.8,
with half-width a = 0.05
•	Divide the half-width a by √3 to get the standard
uncertainty of the input estimate
*See next slide

When you round a measured result, you lose information and increase uncertainty.
But when done properly, rounding should add negligible uncertainty.
19

-------
Rectangular Distribution
[Figure: rectangular (uniform) probability density of half-width a, centered
on x_i and spanning x_i − a to x_i + a; the standard uncertainty is
u(x_i) = a/√3]
The rectangular distribution assumes values between the lower and upper bound
are equally likely, and no other values are possible.
20

-------
3rd Example: Tolerances
•	There is uncertainty in the capacity of a volumetric pipet
•	ASTM class A volumetric pipet has a stated tolerance a for
the nominal capacity V
•	Any value between V- a and V+ a is possible, but assume
values near V are most likely
•	Assume a triangular distribution*, centered on V, with half-
width a
•	Divide the half-width a by √6 to get the standard uncertainty of V
*See next slide
21

-------
Triangular Distribution
[Figure: triangular probability density of half-width a, centered on x_i and
spanning x_i − a to x_i + a; the standard uncertainty is u(x_i) = a/√6]
The triangular distribution assumes that only values between the lower and upper
bound are possible, but values near the center are most likely.
22

-------
Example: Triangular
•	Suppose the nominal capacity, V, of an ASTM class A
volumetric pipet is 1 mL with a specified tolerance of
a = 0.006 mL
•	Assume a triangular distribution for the error and
calculate the standard uncertainty of V
u(V) = a/√6 = 0.006 mL / 2.4495 = 0.00245 mL
23
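Both conversions in one small Python sketch (helper names ours):

    import math

    def u_rectangular(a):
        # Standard uncertainty for a rectangular distribution of half-width a.
        return a / math.sqrt(3)

    def u_triangular(a):
        # Standard uncertainty for a triangular distribution of half-width a.
        return a / math.sqrt(6)

    print(u_rectangular(0.05))    # rounding to 0.8: 0.0289
    print(u_triangular(0.006))    # 1 mL class A pipet: 0.00245 mL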

-------
Note
Uncertainty of pipet's capacity is not the total
uncertainty of volume delivered
Total uncertainty depends on operator's skill (among
other things)
However, the uncertainty does tend to be relatively
small
In many rad-chem measurements, the uncertainty of pipetting is one of the least of
your concerns.
Balance measurements also tend to have very small uncertainty when performed
properly.
24

-------
Rectangular & Triangular: Other Uses
•	Rectangular and triangular distributions are often used
when you can estimate a bound a for the largest
possible error in your estimate
•	If you know nothing else about the distribution of the
error, assume a rectangular distribution
•	If you think values near your estimate are more likely
than values near the bounds, assume a triangular
distribution
25

-------
4th Example: Imported values
•	If you import a value measured by someone else (e.g.,
a half-life measured by NNDC) and use the reported
uncertainty, that's a Type B evaluation
•	If you buy a standard with a stated value and a
confidence interval (say 95 %), divide the half-width
of the confidence interval by an appropriate percentile
of the standard normal distribution to get the standard
uncertainty
26

-------
Example
Suppose the stated massic activity for a standard
solution is 204.1 Bq/g with 95 % confidence limits at
±3.2 Bq/g.
What is the standard uncertainty of the massic
activity?
Divide 3.2 Bq/g by 1.960 (97.5th percentile of the
standard normal distribution) to get the standard
uncertainty, 1.6 Bq/g

-------
Uncertainty Propagation
For a typical measurement process, one calculates the
final result using a mathematical model of the
measurement
Measure or import estimates x_1, x_2, ..., x_N of the input
quantities and calculate an estimate y for the output
quantity:
y = f(x_1, x_2, ..., x_N)
28

-------
Uncertainty Propagation
Use the same model to determine how the standard
uncertainties of the input estimates, x_1, x_2, ..., x_N, produce
the combined standard uncertainty of the output
estimate, y
Mathematical operation of combining uncertainties of
the input estimates to get the uncertainty of the output
estimate is called propagation of uncertainty
29

-------
How Uncertainties Combine
•	Suppose u(x_1) = 1.5 and u(x_2) = 2
•	What is the uncertainty of the sum x_1 + x_2?
•	Uncertainties generally add "in quadrature"
-	Square each uncertainty, add the squares, and take the
square root of the sum
•	Answer in this case is not 1.5 + 2 = 3.5, but
√(2.25 + 4) = 2.5
30

-------
Components of Uncertainty
•	If model is more complicated than a simple sum, think
in terms of uncertainty "components"
•	An uncertainty component is the part of the total
uncertainty of y that is generated by just one input
estimate x_i
•	Uncertainty propagation usually consists of
calculating all the uncertainty components, squaring
them, adding up their squares, and then taking the
square root of the sum
31

-------
Sensitivity
•	The uncertainty component due to x_i depends on the
uncertainty u(x_i) and also on how sensitive y is to
changes in x_i
•	If large errors in x_i don't generate large errors in y,
then y is not very "sensitive" to x_i
•	Might be a large uncertainty in the time of sample
collection, but the sensitivity of the decay-correction
factor depends on the half-life (e.g., 131I vs 238U)
32

-------
Component of Uncertainty
The component of the combined standard uncertainty
u_c(y) generated by u(x_i) is:
u_i(y) = |∂f/∂x_i| × u(x_i)
∂f/∂x_i is a sensitivity coefficient because it indicates
how "sensitive" y is to changes (or errors) in the value
of x_i
33

-------
Estimating Uncertainty Components
•	Generally preferred that sensitivity coefficients be
calculated using rules of calculus
•	But it is also OK to estimate the sensitivity
coefficients or the uncertainty components (e.g.,
spreadsheet methods)
•	One way to estimate an uncertainty component
without calculus is:
u_i(y) = (1/2) |f(x_1, ..., x_i + u(x_i), ..., x_N) − f(x_1, ..., x_i − u(x_i), ..., x_N)|
34
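A sketch of that calculus-free estimate in Python (function name and demo
values are ours):

    def u_component(f, x, u, i):
        # Estimate the uncertainty component of y = f(*x) generated by x[i]:
        # half the spread of f when x[i] moves by +/- its standard uncertainty.
        hi = list(x); hi[i] += u[i]
        lo = list(x); lo[i] -= u[i]
        return abs(f(*hi) - f(*lo)) / 2

    # For y = 2*x1 - 4*x2 with u(x1) = 1.5 and u(x2) = 1.0:
    f = lambda x1, x2: 2*x1 - 4*x2
    print(u_component(f, [10.0, 3.0], [1.5, 1.0], 0))   # 3.0
    print(u_component(f, [10.0, 3.0], [1.5, 1.0], 1))   # 4.0

For a linear model like this one the estimate is exact, as the example two
slides ahead confirms.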

-------
Uncertainty Propagation Formula
The equation for propagating uncertainty can be written in two
ways:
u_c(y) = √[ Σ_i (∂f/∂x_i)² u²(x_i) ]  =  √[ Σ_i u_i²(y) ]
GUM calls this equation the law of propagation of uncertainty
-------
Derivatives
•	∂f/∂x_i indicates how much y changes when x_i changes
by a small amount
•	E.g., if y = 2x_1 − 4x_2, then ∂f/∂x_1 = 2 and ∂f/∂x_2 = −4
•	For more complicated functions, you need calculus (or
estimation procedures)
36

-------
Example
•	Simple model y = 2x_1 − 4x_2,
where u(x_1) = 1.5 and u(x_2) = 1.0
•	u_1(y) = |2| × u(x_1) = 2 × 1.5 = 3.0
•	u_2(y) = |−4| × u(x_2) = 4 × 1.0 = 4.0
•	u_c(y) = √(3.0² + 4.0²) = 5.0
37

-------
Graphical Uncertainty Propagation
[Figure: the components u_1(y) = 2 × 1.5 = 3.0 and u_2(y) = 4 × 1.0 = 4.0
drawn as the legs of a right triangle; the hypotenuse is
u_c(y) = √(u_1²(y) + u_2²(y)) = 5.0]
-------
Small Components
•	A small uncertainty component has even less impact on
the total uncertainty than one might think
•	In the earlier example (components 10 and 3), u_c(y) = 10.4 is
almost identical to the larger component, u_1(y)

-------
Correlated Input Estimates
•	The form of the uncertainty propagation formula shown earlier
assumes that all the input estimates are determined
independently of each other
•	If some pairs of input estimates are correlated with each other,
you need a different version of the formula:
u_c(y) = √[ Σ_i (∂f/∂x_i)² u²(x_i)
	+ 2 Σ_i Σ_{j>i} (∂f/∂x_i)(∂f/∂x_j) u(x_i, x_j) ]
where u(x_i, x_j) denotes the covariance of x_i and x_j
Earlier we didn't tell you the whole truth about uncertainty propagation.
Sometimes there are complications.

-------
What Causes Correlations?
•	Two or more inputs in the model might be calculated from the
same data (e.g., parameters for a calibration curve estimated by
least squares)
•	Physics (e.g., the areas of two photopeaks in a gamma spectrum
might be correlated)
•	Environmental influences (e.g., temperature and humidity) but
often too small to care about
41

-------
Correlations
[Figure: the two uncertainty components drawn as vectors at an angle; here
the correlation increases u_c]
Depending on the signs of the correlation coefficient r(x_1, x_2) and the
sensitivity coefficients ∂f/∂x_1 and ∂f/∂x_2, a correlation might either
increase or decrease the combined standard uncertainty. On this slide we see
an increase.
The value of the correlation coefficient r(x_1, x_2) is related to the cosine
of the angle you see marked on the slide. (It has the same magnitude but
perhaps a different sign.)
The length of the blue line segment shows the magnitude of uc.
For comparison, the red line segment shows the magnitude of uc if the correlation
coefficient were zero.
42

-------
Correlations
[Figure: the two uncertainty components drawn as vectors; here the
correlation decreases u_c]
On this slide we see a situation where the effect of the correlation is to
make the combined standard uncertainty smaller than it otherwise would have
been.
Again, the value of the correlation coefficient for x_1 and x_2 is related to
the cosine of the angle you see marked.
The length of the blue line segment shows the magnitude of uc.
For comparison, the red line segment shows the magnitude of uc if the correlation
coefficient were zero.
43

-------
How to Estimate Correlations
•	Experimentally
- Type A evaluation of covariance based on a series of
paired measurements of two quantities
•	Calculate covariance using a formula similar to the
uncertainty propagation formula
•	See MARLAP Chapter 19 for more details
44

-------
Shortcuts
•	It helps to remember shortcut formulas for propagating
uncertainty
•	For example, if all the input estimates x_1, ..., x_K and
z_1, ..., z_L are nonzero and uncorrelated, and if
y = (x_1 × ⋯ × x_K) / (z_1 × ⋯ × z_L)
then
u_c²(y) = y² × [ u²(x_1)/x_1² + ⋯ + u²(x_K)/x_K²
	+ u²(z_1)/z_1² + ⋯ + u²(z_L)/z_L² ]
What happens if x_i is zero?
What happens if z_j is zero?
45
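The same shortcut as a Python sketch (helper name ours; every factor must be
nonzero):

    import math

    def u_product_quotient(y, factors):
        # factors: (value, standard uncertainty) pairs, one per factor,
        # whether the factor is in the numerator or the denominator.
        return abs(y) * math.sqrt(sum((u / v)**2 for v, u in factors))

    # Hypothetical y = (x1 * x2)/z1 with 1 % relative uncertainty per factor:
    y = (2.0 * 5.0) / 4.0
    print(u_product_quotient(y, [(2.0, 0.02), (5.0, 0.05), (4.0, 0.04)]))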

-------
Shortcuts (Continued)
In radiochemistry, the following type of model is common:
y = R_net / (ε × Y × V)
The uncertainty equation for y can be written as follows (if there are no
correlations):
u_c(y) = √[ u_c²(R_net)/(ε² × Y² × V²)
	+ y² × ( u²(ε)/ε² + u²(Y)/Y² + u²(V)/V² ) ]
The numerator is a net count rate. The denominator is a product of factors:
detection efficiency, chemical yield, sample volume analyzed.
This paradigm works for any number of factors in the denominator. So, you could
include decay/ingrowth factor (D) and emission probability (P) too.
46
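A minimal Python sketch of this model (names and demo values ours). It uses
the form above, so it stays valid even when R_net is zero:

    import math

    def activity(R_net, u_Rnet, eps, u_eps, Y, u_Y, V, u_V):
        # y = R_net/(eps*Y*V) and its combined standard uncertainty,
        # assuming no correlations among the inputs.
        y = R_net / (eps * Y * V)
        u_c = math.sqrt(u_Rnet**2 / (eps * Y * V)**2
                        + y**2 * ((u_eps/eps)**2 + (u_Y/Y)**2 + (u_V/V)**2))
        return y, u_c

    print(activity(12.0, 0.5, 0.35, 0.007, 0.9, 0.018, 1.0, 0.0025))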

-------
Pitfalls of Uncertainty Evaluation
Pitfall #1
•	Sometimes one input quantity appears in the mathematical
model more than once
•	What's the problem?
•	You might be tempted to treat each occurrence of the variable
as if it were a distinct variable
•	Think about it. What is the uncertainty of x − x?
It isn't √(u²(x) + u²(x))
•	It's zero
Note that x - x is an artificial example, which you probably will never encounter in
practice.
But obviously, the value of x - x is exactly zero, with no uncertainty, even if the
value of x is uncertain.
47

-------
Example: Repeated Variables
•	Here's a real example encountered not long ago
•	How do you calculate the uncertainty of
y = x_1 / (x_1 + x_2)?
•	Does the uncertainty of x_1 contribute more uncertainty to y
because x_1 appears twice?
•	If you say YES, then what is the uncertainty of the following?
z = x_2 / (x_1 + x_2)
In the real world, the x's were numbers of counts, and y was a kind of
"spillover" factor.
48

-------
Example (Continued)
Both y and z must have exactly the same uncertainty:
∂y/∂x_1 = x_2/(x_1 + x_2)²	∂y/∂x_2 = −x_1/(x_1 + x_2)²
∂z/∂x_1 = −x_2/(x_1 + x_2)²	∂z/∂x_2 = x_1/(x_1 + x_2)²

u_c²(y) = [x_2/(x_1 + x_2)²]² u²(x_1) + [−x_1/(x_1 + x_2)²]² u²(x_2)
        = [x_2² u²(x_1) + x_1² u²(x_2)] / (x_1 + x_2)⁴
Since z = 1 − y, and the 1 has no uncertainty, it follows that u(z) = u(y).
49
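A numeric illustration in Python (the counts are hypothetical), comparing the
correct propagation with the naive one that treats each occurrence of x_1 as
an independent variable:

    import math

    x1, x2 = 50.0, 150.0                  # hypothetical counts
    u1, u2 = math.sqrt(x1), math.sqrt(x2)

    # Correct: u_c^2(y) = (x2^2 u^2(x1) + x1^2 u^2(x2)) / (x1 + x2)^4
    u_correct = math.sqrt(x2**2 * u1**2 + x1**2 * u2**2) / (x1 + x2)**2

    # Naive: propagate x1 and (x1 + x2) as if they were independent
    s = x1 + x2
    u_s = math.sqrt(u1**2 + u2**2)
    u_naive = (x1 / s) * math.sqrt((u1 / x1)**2 + (u_s / s)**2)

    print(u_correct, u_naive)             # the naive answer is too large here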

-------
Pitfalls of Uncertainty Evaluation (Continued)
Pitfall #1 (continued)
•	Sometimes variables that appear explicitly in the
model might be calculated from other variables
•	Could tend to obscure fact that some variables (in
effect) appear twice in the model
•	Good example of this in alpha spectrometry using a
tracer, where detection efficiency is used in the
calculation of the yield
•	Efficiency actually has no effect on the final result
(cancels out)
50

-------
Pitfalls of Uncertainty Evaluation (Continued)
Pitfall #2
•	Beware of some shortcut formulas when the output estimate is
zero
•	Remember
	y = R_net / (ε × Y × V)

	uc(y) = √[ uc²(R_net) / (ε² × Y² × V²) + y² × ( u²(ε)/ε² + u²(Y)/Y² + u²(V)/V² ) ]

•	Not

	uc(y) = |y| × √[ u²(ε)/ε² + u²(Y)/Y² + u²(V)/V² + uc²(R_net)/R_net² ]
If the net count rate, R_net, is zero, the latter uncertainty equation causes a divide-by-
zero error, or, if you avoid that error, you calculate zero for the uncertainty of y.
Either way, you make a mistake.
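A quick numerical illustration of the pitfall, with hypothetical values and R_net = 0:

    import math

    R_net, u_Rnet = 0.0, 0.004   # zero net count rate, nonzero uncertainty
    eps, u_eps = 0.25, 0.005     # efficiency
    Y, u_Y = 0.85, 0.02          # yield
    V, u_V = 1.00, 0.005         # volume

    y = R_net / (eps * Y * V)    # y = 0

    # Correct form: finite, dominated by the count-rate term
    uc = math.sqrt(u_Rnet ** 2 / (eps * Y * V) ** 2
                   + y ** 2 * ((u_eps / eps) ** 2 + (u_Y / Y) ** 2
                               + (u_V / V) ** 2))
    print(f"y = {y}, uc(y) = {uc:.4g}")
    # The shortcut form divides u_Rnet**2 by R_net**2 (a ZeroDivisionError
    # here) or, if that term is dropped, returns |y| * (...) = 0: both wrong.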


51

-------
Pitfalls of Uncertainty Evaluation (Continued)
Pitfall #3
•	Everyone likes to assume Poisson counting statistics
and estimate the counting uncertainty by √n
•	What's wrong with that?
•	Often nothing, but when counting combined emissions
from more than one nuclide in a short-lived decay
chain, Poisson model isn't valid
- Generally underestimates the uncertainty
•	Deviation from Poisson is greater when detection
efficiency is high
52

-------
Example: 222Rn in a Scintillation Cell
•	Classic example is counting 222Rn and progeny in
alpha scintillation cell (Lucas cell)
•	Get alpha counts from 222Rn, 218Po, and 214Po
•	Detection efficiency is usually high
•	Counts tend to occur in clusters as one atom decays
through several states, not as independent events
•	Described by H.F. Lucas in the early 1960s but still
widely unknown
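A toy Monte Carlo of this effect (a deliberately simplified cluster model that puts all three alpha emissions from a decaying chain into the same counting window; it is not the full Lucas-cell timing treatment):

    import numpy as np

    rng = np.random.default_rng(1)
    mean_chains = 50.0     # hypothetical mean number of decaying chains per window
    alphas_per_chain = 3   # 222Rn, 218Po, and 214Po each emit one alpha
    eff = 0.9              # high detection efficiency, as in a Lucas cell

    # Counts per window: Poisson number of chains, each alpha detected
    # independently with probability eff
    n_chains = rng.poisson(mean_chains, size=100_000)
    counts = rng.binomial(alphas_per_chain * n_chains, eff)

    m, v = counts.mean(), counts.var()
    print(f"variance/mean = {v / m:.2f}  (pure Poisson counting gives 1.00)")
    # For this model the ratio is about 1 + 2*eff, so the sqrt(n) rule
    # understates the standard deviation by roughly sqrt(1 + 2*eff).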
53

-------
Other Examples of Non-Poisson Counting
•	Beta-counting 234Th, which has the short-lived decay
product, 234mPa, another beta-emitter
•	Any gross counting measurement where the nuclides
are unknown
- Gross alpha, gross beta, gross gamma
Gross beta in particular often has a high detection efficiency.
Gamma-ray detection efficiency is often low, so Poisson model may be OK.
54

-------
Pitfalls of Uncertainty Evaluation (Continued)
Pitfall #4
•	Some sources of uncertainty not shown explicitly in
the model
•	E.g., variability in the instrument background, or
varying levels of contamination in the blank
-	Include explicit extra term in the model or increase the
uncertainty of the blank count
•	Error due to subsampling heterogeneous solid
material, such as soil or sediment
-	See MARLAP Chapter 19 and Appendix F
55

-------
Pitfalls of Uncertainty Evaluation (Continued)
Pitfall #5
•	If model is nonlinear and some input estimates have large
uncertainties, uncertainty propagation formula may not
work
•	Uncertainty propagation formula based on an
approximation, which may not always be adequate
•	As a rule, keep relative uncertainties of count times,
aliquant sizes, decay-correction factors, detection
efficiencies, and yields small
•	Uncertainties of the raw counts can usually be large
(except for the tracer count)

-------
Software Tools
•	Software tools make uncertainty evaluations easier
•	Kragten spreadsheet method can be used by anyone
with a spreadsheet program
•	Standalone software systems (some free) and software
component libraries that do uncertainty propagation
automatically
References:
•	Kragten, J. 1994. "Calculating standard deviations and confidence intervals with a
universally applicable spreadsheet technique," Analyst, 119(10), pp. 2161-2166.
•	Vetter, Thomas W. 2001. "Quantifying Measurement Uncertainty In Analytical
Chemistry - A Simplified Practical Approach." National Institute of Standards and
Technology (NIST), Gaithersburg, MD 20899-8393. Available at
http://www.cstl.nist.gov/div839/839.03/Uncertainty.pdf.
Vetter's online article describes the Kragten technique and applies it to analytical
chemistry.
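A minimal Python rendering of the Kragten technique (the model and numbers are hypothetical): each input is perturbed by its standard uncertainty, and the resulting changes in the output are combined in quadrature, so no analytical partial derivatives are needed.

    import math

    def kragten_uc(f, values, uncertainties):
        """Combined standard uncertainty of y = f(*values), Kragten-style.

        Shifting input i by u(x_i) changes f by approximately
        (df/dx_i) * u(x_i); the shifts are combined in quadrature
        (inputs assumed uncorrelated).
        """
        y0 = f(*values)
        total = 0.0
        for i, u in enumerate(uncertainties):
            shifted = list(values)
            shifted[i] += u
            total += (f(*shifted) - y0) ** 2
        return math.sqrt(total)

    # Hypothetical activity model: y = R_net / (efficiency * yield * volume)
    model = lambda r, e, yld, v: r / (e * yld * v)
    uc = kragten_uc(model, [0.0123, 0.251, 0.85, 1.00],
                    [0.0011, 0.005, 0.02, 0.005])
    print(f"uc(y) = {uc:.2g}")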
57

-------

GumCalc
•	GumCalc: free standalone software system developed
to support MARLAP
•	Allows you to define a mathematical model of a
measurement, specify uncertainties of the input
estimates, and propagate uncertainty automatically
•	Propagates dimensions of quantities and does unit
conversions
•	Imports raw data and exports results to CSV files
•	Available at www.mccroan.com/GumCalc.htm
Another source of free software for uncertainty propagation is
http://metrologyforum.tm.agilent.com/download3.shtml
The GUM Workbench is available (for purchase) at: http://www.gum.dk/e-wb-
home/gw_home.html
Mention of trade names or specific applications does not imply endorsement or
acceptance by the U.S. Environmental Protection Agency or any MARLAP agency.

-------
Parting Thoughts
•	Many people focus on uncertainty propagation as the
difficult problem that prevents better uncertainty
evaluations
•	The uncertainty propagation formula looks
complicated because of summation symbols and
partial derivatives
•	Actually straightforward - can be implemented
automatically by reusable software components
Continued...
59

-------
Parting Thoughts (Continued)
•	The harder problem is not propagation of uncertainty
but understanding the measurement process well
enough to know what uncertainties need to be
propagated
•	Software won't solve this problem for you anytime
soon
7. Evaluating Measurement Uncertainty
60

-------
Class Exercise
Questions?
61

-------
8

-------
Obtaining Laboratory Services
Module 8
Dave McCurdy

-------
Obtaining Laboratory Services
Chapter 5
Appendix E: Contracting Laboratory Services
•	More extensive than Chapter 5
•	Provides detailed information
•	Covers multi-agency contracting vehicles
Appendix E:
•	Request for Proposals (the solicitation)
•	Proposal requirements
•	Proposal evaluation and scoring procedures
•	The award
•	Duration of contract (period of performance and milestones)
•	Contract completion
2

-------
Importance of Technical and Contractual Specifications
Contract Specifications...
•	Capture analytical requirements in a concise
format
•	Verify that project planning documents contain all
the information required
•	Identify laboratory's responsibility for
documentation
•	Capture analytical requirements in a concise format: facilitates selection
of appropriate analytical protocols by laboratory
•	Verify that project planning documents contain all the information
required: selection and implementation of the appropriate analytical
protocols
•	Identify laboratory's responsibility for documentation: data verification,
validation, and quality assessment
3

-------
Procurement Cycle
[Flowchart: the procurement cycle. The solicitation includes the SOW (Section E.2);
proposals reflect the SOW (Section E.3); proposal evaluation (Section E.5); award
(Section E.6); contract (Section E.7); periodic evaluations of progress according to
the SOW (Section E.8).]
MARLAP's procurement discussions generally conform to the Federal
Acquisition Regulations (FAR). FAR may not apply to states, universities, or
private-sector purchasers, but most large organizations have comparable
requirements and regulations.
4

-------
Procurement Options (E.2.2)


•	Purchase Order
•	Noncompetitive Procurements ("Sole Source")
•	Invitation for Bid (IFB)
•	Request for Quotation (RFQ)
Continued...

Purchase Order
•	In-house process handled through purchasing staff; usually has a not-to-exceed limit for purchasing
•	Limited to small needs without a formal request; may be used to purchase a limited number of sample analyses
•	Commonly used to purchase supplies and less costly instruments or equipment
•	The maximum size of a purchase order is set by the organization. In the federal government, purchase orders are limited
to $100,000 (also called the "simplified acquisition threshold"). A "micropurchase order" is authorized under FAR for
purchases below $2,500. No justification or competition is required. Purchase orders between $2,501 and $100,000 are
automatically set aside for "small businesses" unless justification can be made.
Noncompetitive procurement:
•	Unusual or compelling urgency; Unique capabilities (such as a patent holder); National emergency
•	Federal Acquisition Regulations specify procedures and justifications for limited competition at 6.302. A lack of funding or
inadequate advance planning are insufficient justifications.
Invitation for Bid (IFB)
•	Solicitation for proposals/offers issued under "sealed bid" procedures.
•	Uncommon for laboratory services.
•	A competitive bid process based solely on cost. Resulting contract is fixed-price.
Request for Quotation (RFQ)
•	Solicitation for a task order under an existing federal supply schedule; all costs, terms, and conditions are already
established
•	May not change established terms and conditions, or exceed or enhance established scope and size
•	A competitive bid process based mainly on cost, but may be on "best value"
•	RFQ usually results in a fixed-price contract; best suited where requirements are readily defined in advance
5

-------
Procurement Options (E.2.2)
(Continued)
•	Request for Proposal (RFP)
•	Basic Ordering Agreement (BOA)
•	Modification (to Existing Contract)
Request for Proposal (RFP)
•	Solicitation for proposals to establish contracts under FAR's negotiated procurement process
•	Suitable for procurements where approach must be flexible or tailored to circumstances that can't be defined in advance
•	Generally addresses a major long-term need for contractor support (several years)
•	A competitive bid process based mainly on technical capability ("best value") rather than on price alone
Basic Ordering Agreement (BOA)
•	Agreements established with one or more qualified laboratories to process samples
•	BOA defines the analytes, costs, methods, APSs, MQOs, and any other parameters required
•	Agency can send samples to one or several vendors depending on analyte, matrix, sample turnaround time, or the laboratory's ability to
handle throughput
•	Fixed-price or time-and-materials task orders
•	Competitive procurement used to establish qualifications, capabilities, costs, capacities, and other requirements
•	BOAs may be established with vendors under the Federal Supply Schedule (GSA) or by individual agencies. BOAs permit faster ordering
because many of the prices, terms, and conditions are already established.
•	BOAs establish the terms and conditions for indefinite delivery requirements (unknown quantities of task orders or analyses; unknown
delivery dates for analyses or task orders; or both)
Modification to an Existing Contract
•	Formal change to terms and conditions of a contract implementing options already built into the contract
•	May be unilaterally imposed by government or bilaterally agreed upon
•	Approach meets a need that is consistent with the type of contract that is in place
•	Agency expands or extends contract to cover additional authorized work
•	Agency amends contract to add a method for sample processing that is similar to work already covered
•	Modifications must be authorized within the contract. For example, a contract with Acme Consultants to evaluate SOPs for tritium may
not be modified to purchase tritium analyses without justifying sole source. Scope or quantity cannot be changed by a modification. But...
•	If the original contract was for 100 tritium analyses, and contained a provision that the government could optionally order 100 strontium
analyses, a modification may exercise that option.
•	The government always reserves the right to modify contract unilaterally. If the modifications change the price, the contractor may seek
"fair and equitable adjustments" in price.
•	Many modifications are purely administrative and implement changes to terms and conditions. The name of the technical contact, for
example.
6

-------
Statement of Work (SOW) Technical Specifications
•	MARLAP recommends preparing written technical
specifications ("statement of work"), regardless of
whether the services are to be contracted out or performed
by an organization's laboratory
•	The SOW should contain the Analytical Protocol
Specifications with MQOs
•	Single most important parameter for SOW is the
required method uncertainty at a specified
concentration
An MQO is a quantitative or qualitative statement of a performance
objective or requirement for a particular method performance
characteristic
MARLAP recommends that an MQO for method uncertainty (uMR) be
established for each radionuclide/matrix combination
The development of APSs, which includes the measurement quality
objectives (MQOs), is described in detail in Chapter 3.
The incorporation of these protocols into the relevant project plan
documents is covered in Chapter 4.
APSs should include such items as the MQOs, the type and frequency of
quality control (QC) samples, the level of performance demonstration
needed, number and type of samples, turnaround times, and type of data
package.
Other MQOs may include minimum detection capability, range, specificity,
and ruggedness.

-------
SOW Technical Specifications
Section 5.3
•	Project plan documents
-	Obtain technical requirements needed to develop a
SOW
-	MQOs and unique analytical process requirements
contained in the APSs
•	Level of specificity in the APSs limited to
requirements that are essential to the project's
analytical data requirements

-------
SOW Technical Specifications
Section 5.3
• Laboratory specifications to demonstrate ability to meet
the technical specifications in the RFP
-	Method validation* and documentation requirements
-	Information from previous contracts for similar analytical
work
-	Performance evaluation* programs
-	Sample delivery requirements
-	Quality system requirements
See notes

•	See example APS in handouts (Tab 14). The example APS has outlined
the level of method validation required for a hypothetical project
involving 90Sr in milk. The level of method validation required depends
on whether a laboratory has an existing 90Sr method for milk or whether
a method must be modified or developed for the milk matrix.
•	PE programs may include:
-	Environmental Resources Associates (ERA) for EPA drinking
water requirements
-	Department of Energy's Mixed Analyte Performance Evaluation
Program (MAPEP) for environmental samples
-	Department of Energy's Quality Assurance Program (QAP)
-	... various commercial vendors.
9

-------
SOW Technical Specifications
Section 5.3
• Inclusion of Analytical Protocol Specifications
-	Analytes (5.3.1)
•	State radionuclides of interest, including expected
concentration range when available
•	List possible interfering chemicals and radionuclides,
including expected concentration range when
available
•	MARLAP example: 90Sr plus possible 89Sr/fission
products; interference from Ca, Ba, fat molecules
-	Matrix (5.3.2)
•	Descriptive not general form of matrix (e.g., solids);
for MARLAP example, raw milk (fat content to
vary)
Review APS handout (Tab 14)
Analytes: The analyte list is compiled from information obtained from process
history and/or investigative studies.
Matrix: A matrix description should be provided for each radionuclide: a
general description of the matrix and (if possible) the chemical and physical
properties of the matrix.
10

-------
SOW Technical Specifications
Section 5.3
• Inclusion of Analytical Protocol Specifications
- MQOs (5.3.3)
•	Detection or quantification; analyte concentration range;
method specificity; method ruggedness
•	Required method uncertainty at a concentration (Action
Level)
"... method uncertainty = 0.5 pCi/L at 8 pCi/L ..."
•	Specificity (isolate and detect only the analyte of interest)
"... ^Sr plus possibled9Sr/fission products
interference..."
•	Ruggedness (matrix variations, change in analyst, slight
changes in method steps while maintaining quality and
performance)
"... raw milk (fat content to vary)..."
Review APS handout (Tab 14).
•	MQOs: MQOs may include required method uncertainty at the action
level, the MDC (including Critical Level) or MQC, and the analyte
concentration range, method specificity, and ruggedness. Method
specificity is defined as "The ability of the method to measure the analyte
of concern in the presence of interferences." Method ruggedness is
defined as "The relative stability of method performance for small
variations in method parameter values."
•	Unique analytical process requirements: Unique analytical process
requirements may be stated for sample preparation, chemical processing,
or radiation/atom detection.
•	Action level concentration may be incorporated in required method
uncertainty statement.
•	Method Specificity means the method's ability to isolate and detect only
the analyte of interest.
•	Method Ruggedness is the method's ability to handle matrix variations,
change in analyst or slight changes in method steps while maintaining
quality/performance specifications
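Note: assuming the action level (8 pCi/L) is the reference concentration for the required method uncertainty, the relative requirement works out to φMR = uMR / AL = 0.5 pCi/L ÷ 8 pCi/L = 6.25%, which is the relative criterion used in the Module 9 validation examples.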
11

-------

SOW Technical Specifications
Section 5.3
Unique analytical process requirements (5.3.4)
Other critical technical and quality specifications:
-	QC samples and PE Program requirements (5.3.5)
•	Schedule of batch QC and PE program
•	Criteria for acceptable performance
-	Radiological holding and TAT (5.3.6)
•	Nuclide-specific; reporting of data
-	# Samples expected and schedule (5.3.7)
-	Quality System requirements (5.3.8)
-	Method selection and approval process (5.3.9)
QC samples and PE Program requirements: Specifications on the type of batch QC
samples and schedule of use (one set per batch of samples) should be provided. Also, the
requirement to successfully participate in an external government or commercial
performance evaluation program for the analytes and matrices of interest should be stated.
Criteria for acceptable performance for batch QCs and successful participation in PE
programs should be based on the MQOs.
Radiological holding and Turnaround Time (TAT): Specifications for radiological holding
time will be related to the half-life of the radionuclide, radioanalytical method used, detection
capabilities, and interfering nuclides and decay products. Turnaround time specifications will
vary according to project needs for the receipt of the analytical data. TAT specifications can
never be shorter than the radiological holding time. TAT is typically stated in routine,
expedited, and emergency sample-processing time frames.
# Samples expected and schedule: The SOW should state the estimated sample load, by
schedule if possible. This information is important to evaluate the radioanalytical method
proposed and the allocation of staff and equipment resources. Also, a commercial laboratory
must ensure sufficient sample processing capacity for multiple clients sending samples at
the same time.
Quality System requirements: If the organization or project requires the lab to use a
certain Quality System process, details should be included in the SOW.
Method selection and approval process: The method selection and approval process
should be stated in the SOW. This includes the method validation requirements (Chapter 6)
for each combination of radionuclide and matrix and the acceptable criteria. Documentation
requirements for method selection and validation should be included. Also, the TEC's
evaluation process for method selection and validation should be included.
12

-------
Request for Proposal (RFP) (5.4)
General Contractual Requirements
•	Includes Statement of Work
•	Additional specifications not included in the
SOW
-	Quality, administrative, statutory, and regulatory
requirements
-	Proposal instructions (technical and cost/business)
•	RFP specifications usually included in resulting
contract

-------
Request for Proposal
General Contractual Requirements
Sample management plan (5.4.1)
Licenses, permits, and environmental regulations
(5.4.2)
Laboratory accreditation (5.5.1)
5.4.1 Sample Management: The RFP should require the laboratory to have an appropriate
sample management program that includes those administrative and quality assurance
aspects covering sample receipt, control, storage, and disposition.
5.4.2 Licenses, Permits, and Environmental Regulations: Various federal, state, and
local permits, licenses, and certificates (accreditation) may be necessary for the operation of
a radioanalytical laboratory. The RFP should require the laboratory to have the necessary
government permits, licenses, and certificates in place before the commencement of any
laboratory work for an awarded contract. All federal contracts contain "boiler-plate"
requirements for compliance with statutory mandates, such as recycling, insurance, liability,
etc.
5.5.1 Accreditation: If accreditation is required in the RFP, the TEC should confirm the
laboratory's accreditation for radioanalytical services. NELAC establishes and promotes
performance standards for the inspection and operation of environmental laboratories in
support of the National Environmental Laboratory Program (NELAP). If state-accredited, a
laboratory typically is accredited by the state in which it resides, and if the state is a NELAP-
recognized accrediting authority, the accreditation is recognized by other states and federal
agencies approved under NELAP.
14

-------
Request for Proposal
General Contractual Requirements
•	Data reporting and communications (5.4.3)
•	Sample re-analysis requirements (5.4.4)
•	Subcontracted analyses (5.4.5)
5.4.3	Data Reporting and Communications: The type of information, schedules,
and data reports required of the laboratory, as well as the expected
communications between the appropriate staff or organizations, should be specified
in the RFP. The SOW should specify what data are required for data verification,
validation, and quality assessment.
•	5.4.3.1 Data Deliverables: A data package (sequentially page-numbered) may
include a project narrative (in a specified format including units), a data review
checklist, any non-conformance memos resulting from the work, sample-receipt
acknowledgment or chain of custody form (if required), sample and quality
control sample data, calibration verification data, and standard and tracer
information.
•	5.4.3.2 Software Verification and Control
•	5.4.3.3 Problem Notification and Communication
•	5.4.3.4 Status Reports
5.4.4	Sample Re-Analysis Requirements: Specific instructions and contractual
language should be included in the RFP that address such circumstances and the
resultant fiscal responsibilities (Appendix E).
5.4.5	Subcontracted Analyses: MARLAP recommends that the RFP state that
subcontracting of analyses will be permitted only with the contracting organization's
approval. In addition, contract language should be included giving the contracting
organization the authority to approve proposed subcontract laboratories.
15

-------
Laboratory Selection and Qualification Criteria
Technical Proposal Evaluation (5.5)
•	Do not consider cost in technical evaluation process
•	Scoring and evaluation scheme established prior to RFP
distribution
-	Distributed to all prospective laboratories
•	Technical Evaluation Committee
-	Each member evaluates the prospective laboratory's
technical proposal
-	May not deviate from established scoring scheme
Continued...
Also refer to Appendix E, Section E.5. Proposal Evaluation and Scoring
Procedures.
Agency personnel initially involved in establishing a new contract and
initiating the laboratory selection process may consist of: the Contracting
Officer (administrative, non-technical) and the Contracting Officer's
Representative (a technical staff person advising the Contracting Officer).
The Technical Evaluation Committee (TEC), a team of technical staff
members, reviews the proposals sent by the laboratories. A chairperson
is designated to provide oversight of the evaluation process. The TEC
scores the technical portion of each proposal according to the established
evaluation scheme.

-------
Laboratory Selection and Qualification Criteria
Technical Proposal Evaluation (5.5)
(...Continued)
• Scoring and evaluation scheme
-	Scoring elements
•	Technical merit
•	Adequacy and suitability of lab resources and equipment
•	Staff qualifications
•	Related experience and record of past performance
•	Other RFP requirements
-	Weighting of evaluation elements
•	Established before the RFP is distributed
•	If no weighting established, all are equal
Continued...
5.5.1.1	Scoring and Evaluation Scheme: The RFP should include information
concerning scoring of proposals or weighting factors for areas of evaluation. This helps a
laboratory to understand the relative importance of specific sections in a proposal and how
a proposal will be evaluated or scored.
5.5.1.2	Scoring Elements
Technical Merit: The lab's proposal (in response to RFP) should include details of
the laboratory's quality system and all the analytical methods to be employed by the
laboratory as well as the method validation documentation. The methods should be
evaluated against the APSs and MQOs provided in the SOW. Previous performance
should be reviewed and scored.
Adequacy and Suitability of Laboratory Resources and Equipment: If requested
in the RFP, the laboratory will provide a listing of the available instrumentation or
equipment by analytical method category. In addition, the RFP may have requested
information on the available sample processing capacity and the workload for other
clients during the proposed contract period.
Staff Qualifications: The RFP should require the identification of the technical staff
and their duties, along with their educational background and experience in
radiochemistry, radiometrology, or laboratory operations. The laboratory staff that will
perform the radiochemical analyses should be employed and trained prior to the
award of the contract.
Related Experience and Record of Past Performance: The RFP should require the
laboratory to furnish references in relation to its past or present work.
Other RFP Requirements: The laboratory's proposal should outline the various
programs and commitments (QA, safety, waste management, etc.) as well as
documentation of various certifications, licenses, and permits to ensure the
requirements of the RFP will be met.
Also refer to Appendix E, Section E.5. Proposal Evaluation and Scoring
Procedures.

-------
Laboratory Selection and Qualification Criteria
Technical Proposal Evaluation (5.5)
(...Continued)
Technical Merit —
•	Review of lab's proposed methods to satisfy APS/MQOs
for each nuclide/matrix combination
•	Review of the method validation documentation to
determine if method uncertainty specifications are met
•	Review of past performance in PE programs and internal
QA program
18

-------
Laboratory Selection and Qualification Criteria
Section E5.3: Weighting of evaluation elements (example)
Element | Description                                                   | Weight (%)
I       | Technical Merit                                               | 25
II      | Past Performance                                              | 25
III     | Understanding of the Requirements                             | 15
IV      | Adequacy and Suitability of Proposed Equipment and Resources | 15
V       | Academic Qualifications and Experience of Personnel          | 10
VI      | Related Experience                                            | 10
• Weighting of each element (See Section E5.3 in Appendix E)
Example weighting. Anticipated weights should be listed in RFP. (If divulged in RFP,
weights may not be changed without notifying proposers.)

-------
Laboratory Selection and Qualification Criteria
•	Pre-award proficiency evaluation (5.5.2)
-	PT samples sent to most qualified labs to assess each lab's
capability to meet MQOs and RFQ requirements
-	Scoring of each lab's performance
-	Ranking or weighting of each lab's performance as a
separate scoring element
•	Pre-award assessments and audits (5.5.3)
-	Emphasizes availability of instruments, facilities, staff,
quality system manual, methods, calibrations, etc.
-	Potential to handle the anticipated volume of work
5.5.2 Pre-Award Proficiency Evaluation: Some organizations may elect to
send proficiency or PT samples (sometimes referred to as "performance
evaluation" or "PE" samples) to the laboratories that meet a certain scoring
criterion in order to demonstrate the laboratory's analytical capability. The
composition and number of samples should be determined by the nature of
the proposed project.
5.5.3 Pre-Award Assessments and Audits: The RFP should indicate that
the laboratories with the highest combined scores for technical proposals
and proficiency samples may be given an on-site audit. A pre-award
assessment or audit may be performed to provide assurance that a selected
laboratory is capable of fulfilling the contract in accordance with the RFP.
20

-------
MARLAP Recommends...
•	Technical specifications contained in a single
document ("SOW") for all radioanalytical laboratory
services, regardless of whether the services are to be
contracted out or performed by an affiliated laboratory
•	MQOs and analytical process requirements contained
in the SOW are provided to the laboratory
•	SOW includes the specifications for the action level
and the required method uncertainty for the analyte
concentration at the action level for each
analyte/matrix combination
Continued...

-------
MARLAP Recommends...
(Continued)
•	Laboratory submits proposed methods and
required method validation documentation with
the formal response
•	RFP permits subcontracting only with the
contracting organization's approval
•	All members of the TEC have a technical
understanding of the subject matter related to the
proposed work

-------
9

-------
Method Validation:
Performance-Based Approach
Module 9
Dave McCurdy

-------
Method Selection and Validation
•	Part I
-	Concepts and information prepared for Project Managers
•	Chapter 6 is different from the rest of Part I
-	Concepts and information prepared for
•	Radioanalytical Specialists, Technical Evaluation Committee,
Project Managers
•	Laboratory managers and staff
-	Both audiences need to understand the material to
successfully implement
•	Performance-based method selection
•	Method validation
9. Method Validation
2

-------
Method Definition (6.2)
A laboratory "method" includes all physical, chemical, and
radiometric processes conducted at a laboratory in order to
provide an analytical result
9. Method Validation
3

-------
MARLAP Analytical Process
4

-------
Performance-Based Approach to Method Selection
A process to select a validated method based on a demonstrated
capability to meet defined quality and performance criteria
(MQOs) and that (together with a properly implemented QA
program) will produce appropriate and technically defensible
results under the applicable conditions
9. Method Validation
Objective: To facilitate the evaluation of all relevant and applicable methods
with the selection, modification, or development of the method that will
reliably produce the data as defined by the criteria of the directed planning
process (measurement quality objectives).
Intent: To allow the selection of the method that meets the MQOs to the
discretion of the laboratory performing the work or, in some cases, to the
discretion of a client organization. In most project plan documents, the
project manager has the authority and responsibility for approving and/or
selecting the methods proposed by the laboratory.
5

-------
Prescribed-Method Approach
•	Option of specifying a particular method in:
-	quality assurance project plan
-	statement of work
•" Recognized "prescribed methods"
•	In most cases, these methods have undergone some type
of validation process for their intended use
Recognized "prescribed" methods include:
•	Regulatory - e.g., EPA
•	National industry standards (International Organization for Standardization
(ISO), American Society for Testing and Materials (ASTM), American
National Standards Institute (ANSI), Official Methods of Analysis of AOAC
International (Association of Official Analytical Chemists International),
Standard Methods for the Examination of Water and Wastewater)
•	Industry-specific (historically developed for internal use within a specific
organization/company).

-------
Method Selection
• MARLAP Key Parameters - MQOs
- Most important parameter is required method uncertainty
(uMR) at a specified concentration
7

-------
MARLAP Recommends...
Performance-based approach to method selection (6.3):
•	Laboratory selects and proposes a method(s)
•	Project Manager (or TEC) approves use of proposed
method

Project Manager (or technical evaluation committee) approves use of
proposed method: Evaluates submitted method validation documentation
or evaluates performance of lab's analysis of method-validation PT samples
Upon contract award: APSs/MQOs should be incorporated into a specific
project work plan for the laboratory.
8

-------
Project-Specific Considerations for
Method Selection (6.5)
•	Matrix and analyte (radionuclide) identification (6.5.1)
•	Process knowledge (6.5.2)
- Potential chemical and radionuclide interferences
•	Radiological holding and turnaround times (6.5.3)
•	Unique process specifications (6.5.4)
•	MQOs (6.5.5)
•	Bias considerations (6.5.6)
•	Operational aspects
MQOs may include:
•	Method uncertainty [uMR] at the action level
•	Quantification capability (MQC) or minimum detection capability (MDC)
•	Expected/applicable analyte concentration range
•	Method specificity
•	Ruggedness
Operational aspects may include:
•	Available methods validated for analyte/matrix combinations
•	Qualified staff availability
•	Equipment calibration and availability
•	Production schedule and proposed number of samples
9

-------
Performance-Based Approach
To Method Selection
Laboratory must consider:
-	APSs & MQOs
-	Methods available for nuclide/matrix
-	Method validation status
-	Availability of qualified staff
-	Production schedule & number of samples
-	Radiological holding and sample turnaround times
-	Equipment calibration and availability, etc.
10

-------
Performance-Based Approach
To Method Selection
Project Manager:
•	Reviews documentation and PE program performance
•	Evaluates response to other performance/production
requirements
•	If possible, compares submitted methods to other existing or
known methods
Continued...
11

-------
Performance-Based Approach
To Method Selection
Project Manager (Continued):
•	Makes decision to send pre-award, site-specific performance
testing matrix samples
•	Makes decision to perform pre-award, onsite laboratory, or
desk audit
•	From additional information, makes list of capable laboratories
(technical basis only)
•	Laboratory selection (Contracting Officer)
MARLAP provides guidance only on project-specific method validation, not
general method validation.

-------
Method Application Life Cycle
[Flowchart: Method Application Life Cycle. Analyte/matrix and process knowledge
feed project management; existing methods, method development, or method
modification; documentation of method validation and performance during the
project; project samples and external QC.]

-------
Method Validation
Project Method Validation
•	Process demonstrating that the radioanalytical method selected for the
analysis of a particular radionuclide in a given matrix is capable of
providing analytical results to meet the project's measurement quality
objectives and any other requirements in the analytical protocol
specifications
General Method Validation
•	The laboratory's internal method validation process that demonstrates a
method's performance to meet established quality performance
requirements for detection and quantification, especially precision and
bias requirements
•	Not specific to project
Two types of method validation are considered in Section 6.6:
•	General
•	Project-specific
MARLAP provides guidance on project-specific method validation, not
general method validation
General method validation process (Section 6.6.1): Should be a basic
element in a laboratory's quality system. General guidance on single-
laboratory method validation can be found in IUPAC (2002) and
EURACHEM (1998). For most applications, the method should be evaluated
for precision and relative bias for several analyte concentration levels. In
addition, the absolute bias, critical level and the a priori minimum detectable
concentration of the method, as determined from appropriate blanks, should
be estimated. (See Section 6.6.4 for a discussion on testing for absolute and
relative bias.)
EURACHEM. 1998. The Fitness for Purpose of Analytical Methods, A
Laboratory Guide to Method Validation and Related Topics. ISBN 0-948926-
12-0. Available at: www.eurachem.ul.pt/index.htm.
International Union of Pure and Applied Chemistry (IUPAC). 2002.
"Harmonized Guidelines for Single-Laboratory Validation of Methods of
Analysis." Pure Appl. Chem., 74:5, pp. 835-855.
14

-------
Project Method Validation
Laboratory Initiation*
•	Accomplished by the laboratory by processing internal,
external PT, or Method Validation Reference Material
(MVRM) samples according to the validation level
specified by the Project Manager or Technical Evaluation
Committee (TEC)
Project Manager Initiation (Optional)**
•	Accomplished by the Project Manager sending PT samples
to the laboratory
"Review notes
* Chapter 6 provides details for this approach.
** Not specifically covered in Chapter 6. However, the same approach would
be used by the project manager.
Prior to submitting PT samples to the laboratory, the method validation level must
be selected, and the radioanalytical results of the PT samples evaluated
according to the method validation acceptance criteria.

-------
Project Method Validation Protocol Parameters (6.6.2)
Parameters specified or ascertained (including
interferents) from the analytical results generated
from DQOs & process history research:
•	APSs including MQOs for each analyte/matrix
•	Defined method validation level (Slide 19)
•	Analytes and testing range
•	Defined matrix for testing, including chemical and physical
characteristics that approximate project samples or...
Continued...
APSs including MQOs for each analyte/matrix (see handout): Plus bias
restrictions (if applicable) and other qualitative parameters to measure the
degree of method ruggedness or specificity.
Analytes: Chemical or physical characteristics of analyte when appropriate.
Applicable analyte concentration range: Includes zero analyte (blanks).
16

-------
Project Method Validation Protocol Parameters
(Continued)
•	Selected project-specific or appropriate alternative matrix
PT samples, including known chemical or radionuclide
interferences at appropriate levels
•	Defined sample preservation
•	Stated additional data testing criteria
•	Establish acceptable chemical/radiotracer yield values
•	Bias (if applicable)
17

-------
Tiered Approach to Method Validation (6.6.3)
•	Level of method validation necessary is established during
project planning - Project Manager responsible to ensure
level of method validation is included in SOW
•	Level of validation depends on the degree of confidence in
the method's performance to produce results consistent
with the required method uncertainty
•	Level of validation depends on the extent of method
development, specificity, and ruggedness
Level of validation depends on the extent of method development,
specificity and ruggedness:
•	New radionuclide or set of interferences
•	Matrices consistent with previous applications
•	Radionuclide concentration range consistent with other projects
•	Modification of existing method
18

-------
Tiered Project Method Validation Approach
Validation Level | Application | Sample Type | Acceptance Criterion* | Levels (Concen.) | Replicates per Level
A (without additional validation) | Existing validated method | - | Method previously validated (by one of Validation Levels B through E) | - | -
B | Same or similar matrix | Internal PT | Measured value within ±2.8 uMR or ±2.8 φMR of known value | 3 | 3
C | Similar matrix or new application | Internal or external PT | Measured value within ±2.9 uMR or ±2.9 φMR of known value | 3 | 5
D | Newly developed or adapted method (routine matrix) | Internal or external PT | Measured value within ±3.0 uMR or ±3.0 φMR of known value | 3 | 7
E | Newly developed or adapted method (unique matrix) | MVRM | Measured value within ±3.0 uMR or ±3.0 φMR of known value | 3 | 7
-------
Tiered Project Method Validation Approach
Important Notes for Table
•	Acceptance Criterion
-	Established so that every validation sample must meet the
stated limit
-	Varies according to the number of validation samples
(degrees of freedom) to be consistent with a false rejection
rate of 5% when the measurement process is unbiased
-	Incorporates the required method uncertainty at the action
level
•	Concentration Range
-	Should cover the expected analyte concentration range
including the action level concentration
-	5 appropriate blanks included (but not as a test level) to
estimate the absolute bias of the method
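As a sketch of where multipliers like 2.8, 2.9, and 3.0 come from under these assumptions (unbiased, approximately normal results, every one of the n validation samples required to pass, overall false rejection rate 5%):

    from scipy.stats import norm

    # If all n results must fall within +/- k standard uncertainties with an
    # overall 5% false rejection rate, each sample may fail with probability
    # 1 - 0.95**(1/n); k is the corresponding two-sided normal quantile.
    for levels, reps in [(3, 3), (3, 5), (3, 7)]:
        n = levels * reps
        alpha_each = 1 - 0.95 ** (1 / n)
        k = norm.ppf(1 - alpha_each / 2)
        print(f"n = {n:2d} validation samples -> k = {k:.2f}")
    # Prints k of about 2.77, 2.93, and 3.03 -- consistent with the
    # 2.8 / 2.9 / 3.0 criteria in the table.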
9. Method Validation
20

-------
Method Validation Project Situations
•	Existing Methods Requiring No Additional Validation
(6.6.3.1)
•	Routine Methods with No Previous Project Validation (6.6.3.2)
•	Use of a Validated Method for Similar Matrices (6.6.3.3)
•	New Application of a Validated Method (6.6.3.4)
•	Newly Developed or Adapted Methods (6.6.3.5)

-------
Existing Methods Requiring
No Additional Validation (6.6.3.1)
• Level A Validation
-	Method previously has been validated (Levels B - E)
-	Matrix and analytes of new project sufficiently similar to
past samples analyzed by a lab's SOP
-	Project Manager assumes additional validation is
unwarranted
Caution
Without some degree of validation for a new project, there
is no assurance that the lab will perform to the same
quality and standards as an extension of earlier work
-------
Existing Methods Requiring
No Additional Validation (6.6.3.1) - Example
Level A Project Method Validation
1)	New Client Project: Evaluation of Drinking Water
•	Use EPA approved method
•	Method validated previously under Level C (External PT)
•	Previous and ongoing acceptable performance in EPA Performance
Evaluation Program
•	Method use: continuously
2)	New Client Project: Evaluation of 90Sr in Raw Milk
•	Modified an EPA approved method for 90Sr in water to be used for raw milk
•	Method validated previously under Level C (Internal PT)
•	Previous and ongoing acceptable performance in internal performance
testing program and other client PE programs
•	Method use: continually for other clients
Additional New Client Project Example: Radium-226 in soil by alpha
spectrometry
•	Expanded contract to include processing soil samples from new area in
the same uranium mining and milling remediation site
•	Method validated under Level C (Internal PT) samples for similar project
for adjacent area during the previous year
23

-------
Routine Methods - No Previous Project Validation (6.6.3.2)
Level B validation
•	Lab has a routine method for a specific
radionuclide/matrix combination that has had no previous
project method validation
•	Requires evaluating method with internal PT samples at 3
concentration levels, with 3 replicates per level
Lab should have sufficient information on the performance of the method
using its internal quality control (QC) program and external Performance
Evaluation (PE) programs.
Lab has a routine method for a specific radionuclide/matrix combination
that has had no previous project method validation (e.g., the lab method
was derived "in-house" or does not match the American Society for
Testing and Materials (ASTM) or EPA method that may have been
referenced in the SOW).

-------
Routine Methods - No Previous Project Validation (6.6.3.2)
Example
Level B Project Method Validation - Same Matrix
New Client Project: Surveillance of 90Sr in raw cow milk
•	Laboratory has routine method under general validation
but not used for five years
•	Expected sample matrix similar to previous milk samples
•	Records of past performance in a PE program or internal
QA not available
25

-------
Use Validated Methods for Similar Matrices (6.6.3.3)
•	Analysis of samples that are similar to the matrix and
analyte for which a method was previously developed
- Validation of the method according to Level B or C
•	Validation levels will provide a reasonable assurance that
the method will meet the required MQOs
26

-------
Use Validated Methods for Similar Matrices
(Continued)
Level B validation requires evaluating method with internal
PT samples at 3 concentration levels, with 3 replicates per
level
•	Requires that each result be within ±2.8 uMR or ±2.8 φMR of
known value
Level C validation requires evaluating the method with
internal or external PT samples at 3 concentration levels,
with 5 replicates per level
•	Each result within ±2.9 uMR or ±2.9 φMR of known value
•	For 90Sr example: ±1.45 pCi/L or ±18% of known value
Example for 90Sr in milk: Requires a method validation level of A, C, or D
(as specified in the APS/SOW), depending on whether the laboratory has
an existing method for 90Sr in milk or whether a method must be modified
or developed for 90Sr in milk.
Level B validation: Requires the least amount of effort for the laboratory but
may not satisfy the level of method validation required by the project.
•	When the laboratory does not have the capability to produce internal QC
samples, the Level C validation protocol should be used.
•	Requires that each result be within ±2.8 uMR or ±2.8 φMR of known value
depending on the test level concentration.
Level C validation: A change in the method to address the increased
heterogeneity of the analyte distribution within a sample may require
another method validation depending on the ruggedness of the method
and the degree of analyte heterogeneity.
•	Requires a greater effort for the laboratory compared to Level B
validation.
•	Level C validation requires that each result be within ±2.9 uMR or ±2.9 φMR
of known value depending on the test level concentration.
•	For the APS 90Sr example, Level C Validation requires each result to be
within ± 1.45 pCi/L of the known value below the action level or ± 18% of
the known value at or above the action level.
27

-------
Use Validated Methods for Similar Matrices (6.6.3.3)
Example
Level B or C Project Method Validation - Similar Matrix
New Client Project: Surveillance of 90Sr in raw goat milk
•	Laboratory has a validated method for 90Sr in cow's milk that has been
used routinely for the past eight years
•	Expected sample matrix similar to cow's milk but analyte concentration
expected to be higher than milk from cows in the same area
•	Expected sample size is less but is only a concern for reprocessing a
backup sample
•	Use of client goat milk with spike is option for method validation
(samples from a batch composite)
-	One portion of the composite used to make the spiked test samples.
-	Another portion of the composite used as blank samples to determine the
inherent 90Sr in the samples
For slight changes in matrices, Validation Level B is typically required.
Additional New Client Project Example: Analysis of 90Sr in soil from new site
•	Laboratory has a validated method for 90Sr in soil from the northeast
United States that has been used routinely for the past five years
•	Expected sample matrix from different region will contain high levels of
iron compared to the soils of the northeast
•	To meet detection and quantification requirements, sample size
for analysis must be the same as the northeast samples
•	Existing method must be slightly modified to address the increased iron
content
•	Project Manager requires Validation Level C

-------
New Application of a Validated Method
(6.6.3.4)
New applications include:
-	Dissimilar matrices
-	Chemical speciation of the analyte or possible other
chemical interference
-	Analyte, chemical or radiometric interferences
-	Complete solubilization of the analyte and sample matrix
-	Degree of analyte or sample-matrix heterogeneity
Methods that have been validated for one application normally require
another validation for a different application, such as a different sample
matrix.
The validation process for an existing validated method should be reviewed
to ensure applicability of the new (which can be more or less restrictive)
MQOs because MQOs may change from one project to another or from one
sample matrix to another.
Applying an existing method to another matrix is not recommended
without further method validation.
29

-------
New Application of a Validated Method
(Continued)
Level C validation requires evaluating the method with
internal or external PT samples at 3 concentration levels,
with 5 replicates per level
•	Each result within ±2.9 uMR or ±2.9 φMR of known value
-------
New Application of a Validated Method (6.6.3.4)
Example 1
Level C Method Validation - New Application
New Client Project: Surveillance of 90Sr in raw cow's milk
•	Laboratory has a method for 90Sr in drinking water that was
modified to be similar to U.S. Public Health Service method for
90Sr in milk by ion exchange
•	New method has undergone general method validation
•	Method has been used in the analysis of PT samples from a
commercial PE program with success
Project Manager requests Method Validation Level C with
external PT samples from a selected commercial source
supplier
•	Methods that have been validated for one application normally require another validation
for a different application, such as a different sample matrix.
•	The validation process for an existing validated method should be reviewed to ensure
applicability of the new (which can be more or less restrictive) MQOs because MQOs
may change from one project to another or from one sample matrix to another.
Additional examples:
•	New Client Project: Analysis of 90Sr in samples from low-level contamination of reactor
components (drained water stored in composite tank sample)
-	Existing validated method for 90Sr in drinking water that has been modified to
include additional cleanup and purification steps for other interfering radionuclides in
the sample
-	Expected sample matrix similar to water but is acidified and may have some
detergent
-	Other radionuclide analyses are to be performed on the sample
-	89Sr will interfere
-	Method Validation Level C may be appropriate for this application
•	New Client Project: Change in sample preparation for 239Pu in Soil Analysis
-	Existing validated method for 239Pu in soil that uses acid digestion/ leaching
preparation for 10 g of dried, blended soil
-	New client requests total dissolution by pyrosulfate fusion of 5 g of dried, blended
soil
-	Beginning of method must be modified to include the pyrosulfate fusion and then to
handle the chemical interference from the fusion process
-	Project Manager requests Method Validation Level C with external PT samples
prepared (spiked) by a commercial source supplier
31

-------
Method Validation Level C
for 90Sr Example
•	Lab modified its 90Sr method for water to be applicable for milk*
•	Lab uses internal PT samples prepared from fresh milk:
-	5 milk samples spiked with 90Sr at 3 pCi/L; 5 spiked at 9 pCi/L;
5 spiked at 25 pCi/L
-	For lowest spike level (3 pCi/L), each result must be within ±2.9 uMR of
known value:
± 2.9 x 0.5 pCi/L = ±1.45 pCi/L of known; between 1.55 and 4.45
pCi/L
-	For two highest spike levels, each result must be within ±2.9 φMR of
known value:
±2.9 × 6.25% = ±18% of known; for the mid level spike (9 pCi/L) this
is ± 1.6 pCi/L of known or between 7.4 and 10.6 pCi/L; for the upper
level spike (25 pCi/L) this is ±4.5 pCi/L of known or between 20.5 and
29.5 pCi/L
*Review notes
* Because the needed chemical purification steps are unique to a milk
matrix, some may consider Level D validation more appropriate.
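The slide's arithmetic as a small script (assuming, per the APS example, an action level of 8 pCi/L, uMR = 0.5 pCi/L below it, and φMR = 6.25% at or above it):

    K = 2.9                  # Level C multiplier
    u_MR, phi_MR = 0.5, 0.0625
    action_level = 8.0       # pCi/L

    for spike in (3.0, 9.0, 25.0):       # pCi/L
        if spike < action_level:
            half = K * u_MR              # absolute criterion below the action level
        else:
            half = K * phi_MR * spike    # relative criterion at/above it
        print(f"{spike:4.1f} pCi/L: accept {spike - half:.2f} to {spike + half:.2f}")
    # 3 pCi/L -> 1.55 to 4.45; 9 -> 7.37 to 10.63 (slide rounds to 7.4-10.6);
    # 25 -> 20.47 to 29.53 (slide rounds to 20.5-29.5)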
32

-------
Newly Developed or Adapted Methods
(6.6.3.5)
•	New method developed by laboratory not previously validated
by laboratory
•	Use of a published method (literature or nationally recognized
standard) not previously validated by laboratory
•	Adaptation of a published method (literature or nationally
recognized standard) not previously validated by laboratory
•	For routine or common matrices, Method Validation Level D is
required
•	For special project matrices, Method Validation Level E using
Method Validation Reference Material (MVRM) test samples
is required
- Project Manager supplies MVRM test samples

-------
Newly Developed or Adapted Methods
(6.6.3.5)
Level D validation: Internal or external PT samples at 3
concentration levels, with 7 replicates per level
•	Requires that each result be within ±3.0 uMR or ±3.0 φMR of
known value
•	For 90Sr example: ±1.5 pCi/L or ±19% of known value
Level E validation: MVRM samples at 3 concentration levels,
with 7 replicates per level
•	Requires that each result be within ±3.0 uMR or ±3.0 φMR of
known value
When the matrix under consideration is unique, the method should be
validated using the same matrix (e.g., MVRM) under Level E Validation. For
example, process/effluent waters versus laboratory deionized water and for
various heavy metal radionuclides in soils or sediments when compared to
spiked sand or commercial topsoil. For site-specific materials containing
severe chemical and radionuclides interferences (for example, sludge from a
tank that has been dewatered), many methods have been unable to properly
address the magnitude of interferences.
The MARLAP example for 90Sr in milk requires a Method Validation
Level of A, C, or D (as specified in the APS/SOW), depending on
whether the laboratory has an existing method for 90Sr in milk or
whether a method must be modified or developed for 90Sr in milk.
For the APS 90Sr example, Level D validation requires that each result must
be within ± 1.5 pCi/L of the known value below the action level or ± 19% of
the known value at and above the action level.
34

-------
Newly Developed or Adapted Methods (6.6.3.5)
Example
Level D Method Validation: Newly Developed Method
New Client Project: Analysis of 129I in groundwater
• Senior radiochemist and radiation spectrometrist at
laboratory develop new 129I radiochemical method based
on radiochemistry fundamentals and available nuclear
instrumentation
-	Method formulation incorporated the sample size,
sample preparation, chemical separations, final test
sample mount and 129I detection efficiency to meet
APSs
-	No short-lived iodine isotopes expected
-	Low-energy photon detector used

Additional examples:
•	New Client Project: 90Sr in drinking water samples
-	Standard operating procedure was prepared that incorporated all
steps of EPA Published Method 905.0 or EML Procedure Manual
Method SR-02
•	New Client Project: Thorium-230 and 232Th in soil samples having unique
characteristics
-	Soil samples from a contaminated U/Th site
-	APSs incorporate historical site knowledge of U and Th
concentrations
-	New laboratory-prepared SOP for Th in soil that incorporated all steps
of a recognized method
o Th-234 tracer used to determine the chemical yield of each sample
o Th-234 is a decay product of 238U
-	Project Manager requires Method Validation Level E and provides
MVRM samples to determine whether the proposed method will meet the
necessary Th and U specificity requirements and that the Th chemical
yield determinations are not biased by the highest levels of U in the
anticipated sample population
35

-------
Testing for Method Bias (6.6.4)
Method Bias Should Be Evaluated...
•	Initially — Method validation process
•	Continuously — Quality assurance program via batch QC

-------
Testing for Method Bias (6.6.4)
Two types of bias
• Absolute:
- Mean response at zero concentration

• Relative:
- Ratio of the change in the mean response to a change in
sample analyte concentration
Absolute Bias: evaluates the mean response at zero concentration.
•	Testing for absolute bias involves repeated analyses of method blank samples
•	Method validation should include blank samples to assess absolute bias
Considerations:
•	Absolute bias in the measurement process can lead to incorrect detection decisions. Causes
include inadequate corrections made by the laboratory for instrument background, laboratory
reagent contamination, and other interferences.
•	The laboratory should eliminate any absolute bias in the measurement process by blank- or
background-correcting all measured results.
•	Test whether the corrections are adequate by analyzing a series of method blank samples,
applying all appropriate corrections exactly as for ordinary samples, and performing a t-test on
the results.
•	To avoid the appearance of a false bias, the determinations of the correction terms (e.g.,
background or reagent blank) should be repeated for each method blank sample analyzed.
Relative Bias: Ratio of the change in the mean response to a change in sample analyte
concentration.
•	Testing for relative bias requires repeated testing of spiked samples
•	Use either standard reference materials (SRMs) or certified reference materials (CRMs)
•	Replicate samples at each testing concentration level
Considerations:
•	Testing the method for relative bias is most important when one of the purposes of the analysis is
to determine whether the analyte concentration is above or below some positive action level.
•	To test for relative bias, the laboratory may analyze an appropriate certified reference material (or
spiked sample) a number of times.
•	To avoid the appearance of a false bias, the laboratory should replicate as many steps in the
measurement process as possible for each analysis.

-------
Testing for Method Bias (6.6.4)
Depending on Project...
• Absolute bias at a certain analyte concentration may be the
most important consideration
-	Action Level
•	May want no statistically significant or major bias at the action level:
the premise of the required method validation application
-	Blank Samples
•	For certain research or survey projects, no absolute bias near the
detection limit
38

-------
Testing for Method Bias (6A.2)
Bias test when analyte concentration ≠ 0.0

T = (Xavg − K) / √(s²/N + u²(K))

Xavg = average measured value
s = experimental standard deviation
N = number of measurements
K = reference value
u(K) = standard uncertainty of the reference value
T = experimental t-statistic
t(1−α/2) = t-statistic with significance level α (typically 0.05)
veff = effective degrees of freedom
The number of effective degrees of freedom (veff) is calculated as follows:
veff = (N − 1) × [1 + u²(K)/(s²/N)]²
Bias is indicated when |T| > t(1−α/2; veff)
39
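This test is easy to script; the sketch below (Python with NumPy/SciPy; the function name and interface are illustrative, not from MARLAP) computes T, the effective degrees of freedom, and the bias decision exactly as defined on the slide.

import numpy as np
from scipy import stats

def bias_test(results, K, u_K, alpha=0.05):
    # t-test for bias against a reference value K with standard uncertainty u(K)
    x = np.asarray(results, dtype=float)
    N = len(x)
    s2_over_N = x.var(ddof=1) / N                     # s^2 / N
    T = (x.mean() - K) / np.sqrt(s2_over_N + u_K**2)
    v_eff = (N - 1) * (1 + u_K**2 / s2_over_N) ** 2   # effective degrees of freedom
    t_crit = stats.t.ppf(1 - alpha / 2, v_eff)
    return T, v_eff, abs(T) > t_crit                  # True in the last slot means bias detected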

-------
Testing for Method Bias (6A.2)
Absolute Bias
Absolute bias test when analyte concentration = 0.0

T = |Xavg| / (s/√N)

Xavg = average measured value
s = experimental standard deviation
N = number of measurements

-------
Testing for Method Bias (6A.2)
Absolute Bias - Example 6.1
•	Analyte concentration = 0.0
•	Data from 9 batch QC samples

T = |Xavg| / (s/√N) = 0.4991 / (1.0745/√9) = 0.4991 / 0.3582 = 1.3935
veff = 9 − 1 = 8
t(1−α/2; veff) = 2.306 (Table G.2 in Appendix G)
T < t: 1.3935 < 2.306
∴ No bias is detected

Data: 0.714, 0.993, 2.453, 0.472, −1.159, −0.994, 0.845, 0.673, 0.495
Xavg = 0.4991
s = 1.0745
41
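Example 6.1 can be reproduced directly from the nine QC results; this short check (Python) recovers T = 1.3935 and the 2.306 critical value.

import numpy as np
from scipy import stats

blanks = [0.714, 0.993, 2.453, 0.472, -1.159, -0.994, 0.845, 0.673, 0.495]
x = np.asarray(blanks)
T = abs(x.mean()) / (x.std(ddof=1) / np.sqrt(len(x)))  # 0.4991 / 0.3582 = 1.3935
t_crit = stats.t.ppf(0.975, len(x) - 1)                # 2.306 at 8 degrees of freedom
print(T < t_crit)                                      # True: no bias detected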

-------
Testing for Method Bias (6.6.4)
Bias Tests for Multiple Test Levels
Option 1: Weighted Least Squares (6A.3)
• Perform a weighted linear regression to fit a straight line
to the data and perform hypothesis tests to determine
whether the intercept = 0 and the slope = 1
Weighted Least Squares (6A.3): fit a weighted linear regression to the data
and perform hypothesis tests to determine whether the intercept = 0 and the
slope = 1 (a sketch of such a test appears after this list)
•	5 blank sample measurements
-	Separate test level if known analyte concentration = 0
-	Mean value to be subtracted from each test result
•	Requires multiple measurements at three analyte test levels
-	Number of test samples per level either 3, 5, or 7
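One way to code the Option 1 test is sketched below (plain NumPy/SciPy; the inverse-variance weighting and the reporting format are our assumptions, not a transcription of Section 6A.3).

import numpy as np
from scipy import stats

def wls_bias_test(known, measured, u_measured, alpha=0.05):
    # Weighted least-squares line through (known, measured), weights = 1/u^2
    x = np.asarray(known, dtype=float)
    y = np.asarray(measured, dtype=float)
    w = 1.0 / np.asarray(u_measured, dtype=float) ** 2
    W = w.sum()
    xbar, ybar = (w * x).sum() / W, (w * y).sum() / W
    Sxx = (w * (x - xbar) ** 2).sum()
    slope = (w * (x - xbar) * (y - ybar)).sum() / Sxx
    intercept = ybar - slope * xbar
    resid = y - (intercept + slope * x)
    s2 = (w * resid ** 2).sum() / (len(x) - 2)         # weighted residual variance
    se_slope = np.sqrt(s2 / Sxx)
    se_intercept = np.sqrt(s2 * (1.0 / W + xbar ** 2 / Sxx))
    t_crit = stats.t.ppf(1 - alpha / 2, len(x) - 2)
    return {"slope": slope, "intercept": intercept,
            "slope_biased": abs(slope - 1.0) / se_slope > t_crit,       # H0: slope = 1
            "intercept_biased": abs(intercept) / se_intercept > t_crit} # H0: intercept = 0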

-------
Testing for Method Bias (6.6.4 )
Bias Tests for Multiple Test Levels
Option 2: Overall Method Bias (6A.3)
•	Evaluate overall method bias for all test levels
•	An overall α′ is used instead of the significance level α
(typically 0.05)
•	Evaluate each test level for bias using the overall α′ value
43

-------
Testing for Method Bias (6A.3)
Bias Tests for Multiple Test Levels
•	Test for overall method bias for all concentration
levels
•	Requires testing each concentration level, but using a t value
based on α′ instead of α:
t(1−α/2; veff) is replaced by t(1−α′/2; veff)
α′ = 1 − (1 − α)^(1/m)
m = number of test levels
If α = 0.05 and m = 3, then α′ = 0.01695
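A two-line check of the adjustment (Python; veff = 8 is assumed purely for illustration):

from scipy import stats

alpha, m = 0.05, 3
alpha_prime = 1 - (1 - alpha) ** (1 / m)       # 0.01695
t_crit = stats.t.ppf(1 - alpha_prime / 2, 8)   # stricter critical value than t(0.975; 8) = 2.306
print(alpha_prime, t_crit)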

-------
Testing for Method Bias (6A.3)
Bias Tests for Multiple Test Levels
Test for overall method bias across all concentration levels
If the bias tests using the α′ value indicate no bias at every test
level, then
- the method is considered free of bias at an overall false
rejection rate of α over the concentration range evaluated
Note: For method validation, there would be 5 blank test samples and three
test levels with, depending on the validation level, either 3, 5, or 7 test
samples per test level
45

-------
Method Validation Documentation
(6.6.5)
When laboratory conducts method validation
•	All records, laboratory workbooks, and matrix spike data
used to validate an analytical method should be retained on
file
When Project Manager conducts method validation
(PT samples sent to laboratory)
•	Appropriate technical representative should retain all
records dealing with applicable method validation protocols,
PT sample preparation certification, level of validation,
results, and evaluations
Laboratory conducts Method Validation - Covered in Section 6.6.5:
•	The records and MV documentation should be retrievable for a specified length
of time after the method has been discontinued (reports to the Project Manager
containing these method validation data should be retained in the project
records or QAPP).
•	Data evaluations such as comparison of individual results to the validation
acceptance criterion and absolute bias in blanks and, when available, method
precision and bias, should be part of the data validation package sent to the
project manager.
•	All method validation documentation should be retained as part of the
documentation related to the laboratory's quality system.
Project Manager Validates Method (PT Samples to Lab) - Expected
documentation:
•	Evaluations include comparison of individual results to the validation
acceptance criterion, absolute bias in blanks and, if available, statistical
analyses of the data for method precision and bias.
•	Laboratory should provide the necessary documentation to the project manager
for these PT samples as required by the SOW.
•	Laboratory should request feedback from the project manager as to the method
performance.
•	Information, along with the sample analytical results documentation, should be
retained by the laboratory for future method validation documentation.
46

-------
Method Selection Life Cycle Documentation
(6.10)
Information gathered during the use of the method
•	Method validation protocol and results
•	Analyst training and proficiency tests
•	Method manual control program
•	Instrument calibration and QC results
•	Internal QC and external PT sample results
•	Internal and external assessments
•	Corrective actions
Should be part of the quality system documentation

-------
MARLAP Recommends...
•	Performance-based approach for method selection
•	Only methods validated for a project's application are
used
•	SOW containing the MQOs and analytical process
requirements provided to the laboratory
•	SOW includes specifications for the action level and
required method uncertainty (uMR) for the analyte
concentration at the action level for each combination of
analyte and matrix
Continued...
48

-------
MARLAP Recommends...
(Continued)
•	Method undergoes some basic general validation prior to
project method validation
•	Method applied to a specific project should undergo
validation for that specific application
•	As each new project is implemented, the methods used in
the analysis of the samples undergo some level of
validation (Project Manager's responsibility to assess the
level of method validation necessary)
•	Tiered approach for project method validation
49

-------
Method Validation Discussion Period
•	Does the audience have questions about the validation
levels for methods used in their projects?
•	Take 15 minutes to:
-	Develop a method validation plan
-	Evaluate an example validation data set for validation
requirements based on required method uncertainty
50

-------
10

-------
Evaluating Methods and
Laboratories
Module 10
Dave McCurdy
and
Bob Litman

-------
Overview
This section of MARLAP examines:
•	Proposed method evaluation
•	Laboratory selection
2

-------
Proposed Analytical Method
Needs to satisfy:
•	Measurement Quality Objectives (MQOs)
•	Method validation requirements
•	Regulatory requirements
•	Data deadlines
•	Project costs
The selected method must:
•	Be able to achieve the MQOs for the analyte
•	Be specific for the analyte or analytes
•	Be suitable for the matrix
•	Be applicable to the amount of sample that will be available
•	Be able to be performed in a timely manner
•	Have a reasonable cost based on project requirements
3

-------
Proposed Method Evaluation
Proposed method should not be based on:
•	Previously identified methods for the same analyses
•	Capricious request for the "best" method
•	The only method that a particular laboratory has for the
analysis
4

-------
How Many Methods Are Needed?

        Soil    Milk    Water   Grass
90Sr            [X]
137Cs
14C
3H
We are focused in this example on strontium in milk, but for a project with
all of the indicated matrices and analytes we might need to assess a method
for each of the 16 analyte/matrix combinations.
5

-------
Method Evaluation
(7.2.2)
•	Technical evaluation committee (TEC) or radioanalytical
specialist considers whether proposed method is
appropriate based on project requirements
•	What considerations affect method evaluation?
-	MQOs
-	Radiological holding time (during transport and in the
laboratory)
-	Preservation or storage techniques
-	Sample digestion
-	Interferences, both radiological and non-radiological (more
or less significant)
-	Turnaround time for results
-	Method bias (see MARLAP Attachment 6A)
Does the matrix present any difficulties with regard to the holding times?
Preservation and storage will need to be identified for each radionuclide
in each matrix.
Does digestion need to occur? (Sometimes not, as for tritium in water or
soil.)
The potential interferents need to be identified, like calcium in milk
interfering with the strontium analysis (in particular in the gravimetric
recovery).
Once the lab gets the samples, what is their TAT, and does it match your
project's needs?
Has this lab performed this analysis in this matrix before?

-------
Deciding on a Method
TEC & Project Manager decide that the methods
proposed by the laboratory are:
•	Appropriate
-	Can achieve the MQOs and other APS requirements
•	Not appropriate
-	Cannot achieve the MQOs or other APSs
7

-------
What Methods Meet the 90Sr MQOs?
90Sr Example MQO:
A method uncertainty (uMR) of 0.5 pCi/L or less at 8 pCi/L


[Graphic: method uncertainty (pCi/L) achievable by routine beta-counting methods (LSC and GPC detectors); the routine methods shown span roughly 0.2 to 1.0 pCi/L against the required method uncertainty of 0.5 pCi/L for this project.]
Note: This example is focused on the method uncertainty. Other
requirements of the APS should be evaluated.
8

-------
Laboratory Evaluation Process
Laboratory evaluation process follows the evaluation
and approval of the method by the TEC:
•	Initial
•	Continuing
9

-------
Laboratory Evaluation Process (7.3)
Consider:
•	Quality manual
•	Staff, instrumentation, and facilities
•	Prior contract work
•	Performance of internal QC program
•	Performance in external performance evaluation
programs
Continued...
Does the lab have a quality manual? Does it identify the key elements of a
QA program: organizational structure, training requirements,
documentation requirements, program goals, personnel qualifications,
traceability issues, etc.
Does the laboratory's size and staff size meet the project's needs? You
will be taking 50 samples a week to be analyzed for multiple analytes; the
lab has 3 bench personnel. Is this a good fit?
Has the laboratory performed this type of work before especially with
regard to TAT, matrix, analyte and volume? Can they produce an
analytical report that meets the data needs of the project?
Ask to review their internal QC program documentation as well as any
external QC programs in which they participate.

-------
Laboratory Evaluation Process (7.3)
Continued...
•	Is an onsite audit or assessment necessary?
•	Can audit reports from other entities be used?
•	Performance test/evaluation samples as a pre-award
requirement?
•	Is the laboratory accredited? By whom?
•	It may be possible to obtain audit reports from other agencies.
•	Should preference be given to laboratories that have been already
audited?
11

-------
Which Laboratory to Select?
•	The method is accepted by the TEC
•	The laboratory is approved based on the laboratory's
quality program, external audits, staffing, etc.
•	Several laboratories may meet the requirements
•	The scoring and evaluation scheme* developed will
allow the PM to decide which laboratory to select
*See Chapter 5 and Appendix E
An example of a proposal evaluation scheme, taken from MARLAP Table
E.6, is in the handouts at Tab 15.
12

-------
Ongoing Evaluation of Laboratory Performance (7.4)
•	Project plan should identify the method of ongoing
evaluation, using the Statement of Work (SOW) and APS
as a quantitative measure:
-	"Desk" audit (using data packages from laboratory)
and if necessary
-	Onsite audit
•	Evaluation of QC samples for all matrices is a major part
of either type of audit.
Part of the evaluation process must consider how often the laboratory is not
producing the results required by the SOW. A single non-compliance for
not achieving an MQO is not grounds for an immediate change of laboratory.
•	Assess performance such as frequency of MQOs not achieved. Is once
per batch OK or twice per quarter?
•	Does the lab meet the quantitative contract requirements, but not the
TAT?
•	Are the reports complete or missing laboratory dialogue information?
•	Does the lab recognize ongoing bias or routine blank contamination
(whether or not significant)?
13

-------
Key Laboratory Quality Control Samples
In the MARLAP process, the criteria for evaluating
the batch QC samples are based on the required
project specific method uncertainty
•	Matrix spike
•	Laboratory control sample (LCS)
•	Duplicate sample
•	Laboratory blank
•	Matrix spike duplicate
This is not a complete list of all QC samples that could be used. Not all
QC samples are appropriate for every investigation; which ones are
selected is part of the APS process.
Another way of evaluating lab performance is by performance
evaluation/testing sample (external program). These samples may be
provided by the project or by an independent contractor.

-------
Why Do All These QC samples?
•	To help ensure data is of proper quality to support the
decision.
•	The purpose of trending method uncertainties, LCS, and
spike results is to help decide if methods or laboratories
need to be changed.
•	This is part of the feedback loop for confirmation of
performance/improvement in the MARLAP process.
•	.. .and because the regulators tell you to.
15

-------
Matrix Spike
•	Acceptable spiking range
•	Method of spiking
•	Acceptance criteria (Z score)
The SOW will specify the acceptable range of spike activity. It is the
responsibility of the project team, TEC, or PM to verify that the correct
activities have been used.
The method of spiking need not be specified, but it should be approved by
the TEC.
The SOW should specify the acceptance criteria so that the laboratory is
aware of the quantitative requirements for the project. The acceptable
range is based on the Z score.

-------
Matrix Spike Requirements for 90Sr in Milk

Z = (SSR − SR − SA) / √(u²(SSR) + u²(SR))

where u(x) = uMR for a result below the action level and φMR × x at or above it.
Spiked sample result (SSR) is 57.8 pCi/L for a spike addition (SA) of 50.0 pCi/L. Unspiked
sample result is 4 pCi/L. Does this meet the APS requirements?
SSR = spiked sample result
SA = spike activity added
SR = unspiked sample result
Z = (57.8 − 4 − 50.0) / (0.0625 × √(57.8² + 8²))
  = 3.8 / (0.0625 × √(3.34×10³ + 64))
  = 3.8 / (0.0625 × 58.3) = 1.04
The Z value is acceptable
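The same arithmetic in code form (Python; the helper is ours): u(x) is taken as uMR below the action level and φMR × x at or above it, which reproduces the 0.0625 × √(57.8² + 8²) denominator above.

import math

def spike_z(ssr, sr, sa, al=8.0, u_mr=0.5, phi_mr=0.0625):
    # u(x): absolute required uncertainty below the AL, relative at or above it
    u = lambda x: u_mr if x < al else phi_mr * x
    return (ssr - sr - sa) / math.sqrt(u(ssr) ** 2 + u(sr) ** 2)

print(spike_z(ssr=57.8, sr=4.0, sa=50.0))   # ~1.04: acceptable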

-------
Laboratory Control Sample
•	Usually made in a demineralized-water matrix for
liquids (this would be the case for milk, unless a
surrogate, synthetic matrix is specified in the SOW)
•	Activity concentration should be near the AL
•	The uncertainty of the spike activity used is normally
negligible
•	Performed for each batch, but it does not have to have the same activity
value each time it is prepared; monitor the percent difference (%D).
•	Activity needs to be measured near the decision level, but the number of
accumulated counts should be large enough so that the Poisson counting
uncertainty is small. This provides the continued confidence that the
method uncertainty is being reproduced at the action level.
•	The activity of the spike added should be from a primary standard and
thus its uncertainty is negligible when compared to the uncertainty of the
measured spiked sample activity.
18

-------
LCS QC Requirements for 90Sr in Milk

%D = (SSR − SA)/SA × 100

SSR = spiked sample result
SA = spike activity (or concentration) added
Control limits: ±3 φMR × 100 (±18.75% for this example)
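As a quick sketch (Python; the 8.6 pCi/L batch result is hypothetical, and the ±3 φMR × 100 limit follows the reconstruction above):

phi_mr = 0.0625
limit = 3 * phi_mr * 100                   # 18.75 %
ssr, sa = 8.6, 8.0                         # hypothetical LCS result and spike added
percent_d = (ssr - sa) / sa * 100          # 7.5 %
print(abs(percent_d) <= limit)             # True: within control limits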
-------
Duplicate Sample
•	A second aliquant taken from the original sample
container
•	Agreement based on a statistical test; the control limit depends on
whether the average of the two results is below or above the action level
aliquant: A representative portion of a homogeneous sample removed for
the purpose of analysis or other chemical treatment. The quantity removed is
not an evenly divisible part of the whole sample. An "aliquot" (a term not
used in MARLAP) by contrast, is an evenly divisible part of the whole.
20

-------
Duplicates QC Requirements for 90Sr in Milk
Xavg = (x1 + x2)/2

When Xavg < 8, the control limit for the absolute difference |x1 − x2| is
CL = 4.24 uMR = 4.24 × 0.5 = 2.1
When Xavg > 8, the control limit for the relative percent difference (RPD) is
RPD = 100 × |x1 − x2| / Xavg
and the value for the limit is
CL = 4.24 φMR × 100 = 4.24 × 0.0625 × 100 = 27%
Duplicate results are obtained on an unknown sample: 14.6 and 17.2 pCi/L.
Are they acceptable per our example APS?
uMR = Δ/10 = (8 − 3)/10 = 0.5
φMR = uMR/UBGR = 0.5/8 = 0.0625
For Xavg < 8 the control limit is:
4.24 uMR = 4.24 × 0.5 = 2.1
For Xavg > 8 the control limit is:
RPD = 100 × 4.24 φMR = 100 × 4.24 × 0.0625 = 26.5%
For the example in this slide:
(x1 + x2)/2 = (14.6 + 17.2)/2 = 15.9 > 8
RPD = [|x1 − x2| / 15.9] × 100 = 16.4% < 27%  OK
21
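The two-branch duplicate test, written out (Python; the function name and defaults are ours, and the call reproduces the worked example above):

def duplicate_ok(x1, x2, al=8.0, u_mr=0.5, phi_mr=0.0625):
    mean = (x1 + x2) / 2.0
    if mean <= al:                         # absolute-difference criterion below the AL
        return abs(x1 - x2) <= 4.24 * u_mr
    rpd = abs(x1 - x2) / mean * 100.0      # relative percent difference above the AL
    return rpd <= 4.24 * phi_mr * 100.0

print(duplicate_ok(14.6, 17.2))            # True: RPD = 16.4% < 26.5%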

-------
Blanks
How are they made?
•	Field blank
•	Trip blank
•	Method blank
Actions if blanks are "positive" for activity?
•	Repeat batch analysis?
•	Subtract blank value from all results?
•	Type of blank must be specified in SOW.
•	The "true" blank value is assumed to be "zero." It is expected that there
will be a distribution of values around "zero."
•	Positive values are ones that exceed the critical level.
•	One "positive" blank does not require stopping the process or even
repeating an analysis. Evaluate the trend.
22

-------
Batch Blank Sample QC Requirements for 90Sr in Milk
Ideally the "true" value is zero. The control chart should
have the central line at zero with:
Control limits: ±3 uMR
Values plotted on the control chart for trending
No action based on a single measurement
The control limit for the Sr APS is 1.5 pCi/L (3 × 0.5 pCi/L, where 0.5 pCi/L is the uMR)
uMR is the required method uncertainty on an absolute basis
What does it mean if the value is outside the control limit at the low end
(e.g., the −3 uMR value is −0.12 and the blank for the batch is −0.16)?

-------
Stipulation of Quality Control
APS for 90Sr in Milk
•	What is the significance of the Attachment B
"preamble" of the APS*?
•	Note the specificity of agreement criteria for each of
the QC samples.
*See handout at Tab 14
The batch limit is defined; note that the number of samples per batch
includes the QC samples. Thus the number of unknowns is limited to 20 − 4,
or 16. Notice that a single QC, matrix spike, or blank "failure" does not stop
the analytical process. However, it is incumbent on the laboratory QC
manager to stop work when trending indicates a problem.
Each QC sample has its own equation and acceptance criterion.

-------
EXAMPLE:
QC Requirements - 90Sr in Milk
The current value is clearly outside the lower control limit. This will require
notation during the validation and verification process.
This will be addressed later during the workshop.

-------
MARLAP Recommends...
•	Radioanalytical specialist reviews the methods for
technical adequacy
•	TEC performs an independent calculation of the method's
MDC and required method uncertainty (uMR) using
laboratory's typical or sample-specific parameters
•	PM or TEC evaluates available lab data for bias based on
PE testing or samples
•	"Z-score" is used for matrix spike evaluation
•	An audit team should include a radioanalytical specialist
26

-------
Group Activity
Handouts identify the results received from the laboratory
for the 5 milk samples recently sent by your project (Batch
#31) with trend graphs of the QC samples performed by
the laboratory for the 90Sr analysis
Conduct an ongoing evaluation of the laboratory's
performance based on this data set
Refer to APS handouts at Tab 14 pages 7-9 for example control charts.
27

-------
11

-------
Radiochemical Data Verification
and Validation
Module 11
Dave McCurdy
and
Bob Litman

-------
What is Data Verification?
•	Laboratory conditions and operations are compliant with:
-	SOW
-	Sample analysis plan
-	Quality assurance project plan (QAPP)
•	Identifies problems that should be investigated during data
validation
•	Material delivered by the laboratory in compliance with
SOW
•	Checks for consistency and comparability of the data
throughout the data package
•	Checks for completeness of the results to ensure all
necessary documentation is available
Analytical data verification assures laboratory conditions and operations were
compliant with the SOW based on project plan documents. The updated project
plan documents specify the analytical protocols the laboratory should use to
produce data of acceptable quality and the content of the analytical data
package.
Verification compares the analytical data package delivered by the laboratory to
these requirements (compliance), and checks for consistency and comparability
of the data throughout the data package, correctness of basic calculations, data
for basic calculations, and completeness of the results to ensure all necessary
documentation is available.
Compliance verification may include a review of laboratory staff signatures
(written or electronic), data and report dates, case narrative reports, sample
identifiers, radionuclides and matrices for analyses, methods employed for
analyses, preservation of samples, reference/sampling and analysis dates,
spectral data, chemical yields, detector-efficiency factors, decay and ingrowth
factors, radiological holding times, analytical results, measurement uncertainties,
minimum detectable concentrations, daily instrument and batch QC results, etc.
All these actions are performed after the analysis has been done. How do you
ensure, a large percentage of the time, that this process establishes usable
data? By telling the lab what your requirements are in the SOW!

-------
Data Verification
Focuses on the individual data generated by the
laboratory for each sample and laboratory process:
•	Are the data calculation processes and analytical methods
compliant with the SOW?
•	Based on measurable factors
•	Verification report presents a summary of the process,
including a single data qualifier (E) if needed
The data package that the data verifier receives from the laboratory must
have all the data necessary to perform this function. This means that you
first need to know what characteristics of verification your project requires
to be performed. The data verification requirements are written down in the
QAPP and incorporated in the SOW.
The "E" qualifier stands for "exception." This indicates that the verifier has
found something in the data package which is an exception to the
requirements.
An example of this would be if the sample size or aliquant process is
missing from the report documentation.
Qualifiers (data flags) are discussed later in this module.
3

-------
Data Verification
(Continued)
Verification will determine whether:
-	Correct procedures were used
-	All required documentation was included in the laboratory
report
-	The report conforms to what was required in the SOW (e.g.,
analytes, MDCs to be achieved, required method uncertainty
(uMR), reporting units, calculational process, sample
preservation, holding times)
•	Note any exceptions
•	All points are described in a Verification Report

-------
What is Data Validation?
•	Evaluates the data to determine the presence or absence of
an analyte
•	Establishes the uncertainty of the measurement process
•	Qualifies the usability of each datum
- Compares data produced with the measurement quality
objectives and any other analytical process requirements
contained in the analytical protocol specifications developed
in the planning process.

5
Data assessment = data verification + data validation + data quality
assessment (not covered here)
•	Validation addresses the reliability of the data. Validation process begins
with a review of the verification report and laboratory data package to
identify its areas of strength and weakness.
•	This process involves the application of qualifiers that reflect the impact of
not meeting the MQOs and any other analytical process requirements.
Validation then evaluates the data to determine the presence or absence
of an analyte, and the uncertainty of the measurement process.
•	During validation, the technical reliability and the degree of confidence in
reported analytical data are considered.
•	The data validator should be a scientist with radiochemistry experience.
5

-------
Data Validation
Quantitative tests and qualitative inspection for
analytical detection and method uncertainty, and
review of any exceptions noted from verification
report
• Focus moves from individual data compliance with the
SOW requirements to overall project MQOs
II. Paw Verification and Validation
Qualitative inspection of alpha, gamma, or LSC spectra for proper energy
selection, interference corrections, etc.
In some cases, from the spectral data provided, the reviewer can quantify
or estimate the magnitude of the interferences and determine if the lab
corrected the results appropriately.

-------
The quantitative measures that will be used are:
•	Have the MQOs been achieved for the methods used?
•	Have the QC samples met the requirements (for uncertainty and method
detection) of the QAPP?

-------
Responsibility for Verification and Validation
Project Manager assigns data verifier and validator
-	Generally performed by different people (for cross-
checking)
Project validation plan developed and in place prior to
data verification and validation
Validation plan incorporates input from all stakeholders.
-	Should be part of initial planning process
-	Integral part of project plan documents
Validation plan should be part of the initial planning process and an integral
part of the project plan documents. May be stand-alone document or part of
another QC document.
The data validation plan should contain the following information:
•	A summary of the project's technical and quality objectives in terms of
sample and analyte lists, required measurement uncertainty, and
required detection limit and action level on a sample/analyte-specific
basis. It should specify the scope of validation, e.g., whether all the raw
data will be reviewed and in what detail (Section 8.3.1).
•	The necessary validation criteria (derived from the MQOs) and
performance objectives deemed appropriate for achieving project
objectives (Section 8.3.2).
•	Direction to the validator on what qualifiers are to be used and how final
qualifiers are assigned (Section 8.3.3).
•	Direction to the validator on the content of the validation report
(Section 8.3.4).
8

-------
Data Validation Plan
(8.3)
Plan (developed from the SOW and APS) includes
the specific tests and limits to be used by the
validator:
•	Tables indicating the MDC, critical level, MQC, required
method uncertainty (uMR), and how they are to be calculated
•	Acceptance criteria for duplicates, spikes, QC and blanks,
and how they are to be calculated
•	Which data qualifiers (8.3.3) are to be used and under what
circumstances
•	The percent of the raw data required to be reviewed
9

-------
Data Verification and Validation Process
(8.5)
Four Stages:
1.	Sample handling and analysis system
2.	QC sample requirements meet specified MQOs
3.	Tests of detection and unusual uncertainty
4.	Final data qualifiers are affixed to the individual datum
10

-------
Sample Handling and Analysis
Analytical Items for Verification (8.5.1)
Direct evidence of the sampled material being
properly analyzed is necessary:
1.	Identification
2.	Analysis and method
3.	Complete reporting
4.	Chain of custody
5.	Sample size
6.	Preservation
7.	Validity of QC samples and results
8.	Analysis requirements
•	Sample name and lab ID number
•	Analyte analyzed and identification of method used.
•	Complete reporting of required analytical parameters such as
concentration, combined standard uncertainty (CSU), critical level (CL),
minimum detectable concentration (MDC), proper reporting units,
radiological holding time (RHT), turnaround time (TAT), decay and
ingrowth times, etc.
•	Unbroken COC indicating correct sample date: dates of collection,
receipt, analysis, reporting.
•	Appropriate sample aliquant is used.
•	Preservation of sample is properly performed and maintained while at
laboratory awaiting analysis.
•	All QC samples required by the SOW/APS were used. The standards
were in the proper concentration range and were not expired.
•	Specific analytical requirements for accuracy and precision were
achieved (FWHM, self shielding, yield, dilution factors, count time, etc.).
These apply both to verification and validation.
11

-------
QC Samples
(8.5.2)
Evidence of all QC results (indexed to the samples in
a batch) should accompany the laboratory report:
•	Were the types of QC samples specified in the SOW
used?
•	Were the correct number of QC samples per batch size
used?
•	Do any of the QC sample results require a data qualifier
to be added to the sample results?
The word require is very important. This is a qualitative decision to be made
by the validator. A single QC result beyond the project control limit for one
radionuclide does not necessarily cause the data set for that radionuclide to
receive an "R" data qualifier. What does the history (i.e., trend graph) for
this parameter reveal?
12

-------
Elements Of Data Validation
(8.4)
Effective data validation must include:
•	Use of an approved, pre-established data validation plan
and
•	A data package that has been verified to contain the
essential elements required for validation
The laboratory needs to know how the data will be evaluated, so they can
attempt compliance.
If the data package is not compliant, the validator is stuck!

-------
Data Qualifiers
Verification
E - Indicates that an exception or non-compliance has
occurred. Examples of when this qualifier would be added
include:
•	Documentation absent from the data package
•	Sample analysis radiological holding time not met
•	Different procedure or unqualified analyst was used
•	Calculation of concentration is not in accordance with SOW
•	Several other non-compliances are possible
The "E" qualifier may be changed or eliminated during the
validation process

-------
Data Qualifiers
Validation
U: Analytical result is < critical value; a non-detect
Q: A reported measurement uncertainty that exceeds the
required method uncertainty (uMR) or relative required
method uncertainty (φMR)
-------
Data Qualifiers
Validation (Continued)
S{+/-): A LCS, MS or MSD which is above (+) or below (-)
the upper or lower control limit
A sample result with its duplicate (replicate) that
exceeds a control limit
B(+/-): A blank result that is outside the upper (+) or lower
(-) control limit
These are qualifiers that are assigned to the samples based on the results of
QC samples in the data set.

-------
Data Qualifiers
Important Notes!
Convention used for data validation qualifiers:
•	If a sample result is above the project reporting
concentration (usually the critical level)
NO QUALIFIER IS USED FOR DETECTION
•	If all parameters associated with the sample measurement,
and its associated QC samples are satisfactory
NO QUALIFIER IS USED
If there are no symbols in the data qualifier column, there is detectable
activity that is validated and verified.
17

-------
Required Method Uncertainty
Used two ways in verification and validation:
•	For individual data points, if the reported measurement
uncertainty is greater than the required method uncertainty
(uMR or φMR), append data qualifier "Q" to the data
•	In equations for QC, blanks, duplicates, and spikes to set
up acceptance criteria

-------
Detection and Unusual Uncertainty
(8.5.3)
Data validator should determine whether:
•	The critical level has been exceeded; if it has not, the "U"
qualifier should be assigned
•	The reported measurement uncertainty is greater than the
required method uncertainty; if it is, the "Q" qualifier
should be used
Is it required for the analyte in each sample, or is aggregate agreement
with the required minimum detectable concentration (RMDC) satisfactory?
This would be a project-specific requirement, BUT note that MARLAP
recommends that the MDC be sample-specific.
19
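In code, the two checks on this slide reduce to a few lines. This is a simplified sketch (Python; our function, covering only the U and Q qualifiers, not the full qualifier logic):

def detection_qualifiers(result, critical_level, u_reported, u_required):
    flags = []
    if result < critical_level:
        flags.append("U")                  # non-detect
    if u_reported > u_required:
        flags.append("Q")                  # unusual measurement uncertainty
    return flags

print(detection_qualifiers(0.4, 0.6, 0.3, 0.5))   # ['U']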

-------
Data Rejection
(8.5.4)
Data rejection ("R") should be a rare occurrence
Three possible reasons to reject data are:
1.	Insufficient or incorrect data supporting results/
documentation are available
2.	Assumptions made in the planning process regarding the
applicability of the method to the analysis are not true
3.	High level of uncertainty ascribed to the datum
11. Data Verification and Validation
1.	The laboratory cannot supply, or cannot supply in a timely manner, all the
information needed to verify that the data are correctly calculated or that the
proper procedures were followed during the life of the sample at the lab.
2.	The planning process assumed that all samples would be completely
dissolved by the lab method used. The lab reports that there was an
insoluble residue, so the planning assumption does not match the sample
analytical results. The data thus produced are not valid and would be
qualified with an "R" qualifier. This is an example of where the MARLAP
process feedback loop is essential: the insoluble residue was
unexpected and thus requires investigation into the sample dissolution
process and potentially the method being used. This would likely involve
a change to the APS.
3.	This is known at all other levels as "Other (fill in the reason)."
20

-------
Validation Report
(8.6)
Summarizes the validation process and its
conclusions. Includes:
•	Either a narrative or table summarizing exceptional
circumstances regarding the validation tests
•	List of samples whose results have been validated with the
laboratory and client identifiers
•	Summary of all validated results with associated
uncertainty and final data qualifiers
- Actual values are to be reported, not an LLD or "<" value
•	Summary of the QC sample performance and any potential
effect on the validated data
A summary of exceptional circumstances during the sampling or
analysis.
A list of validated samples with both the project and laboratory
identifications.
A summary of the validated results with the associated uncertainty for
each sample.
A summary of QC sample performance and any effect this may have on
the qualifiers ascribed to any datum.

-------
Equations Used for Validation
For Matrix Spikes
Calculate the Z statistic for each spike as follows:

Z = (SSR − SR − SA) / √(u²(SSR) + u²(SR))

For Duplicates
When Xavg ≤ AL, the control limit for the absolute difference |x1 − x2| is
CL = 4.24 uMR
When Xavg > AL (8 in this example), the control limit for the relative percent difference is
RPD = 100 × |x1 − x2| / Xavg
with
CL = 100 × 4.24 φMR
-------
Equations Used for Validation
(Continued)
For Blanks
Plot the values for all blanks on a control chart with:
Control Limits = ±3 uMR
For LCS
Calculate the %D from the data as follows:

%D = (SSR − SA)/SA × 100

And plot the %D for all LCS on a control chart with:
Control Limits = ±3 φMR × 100
-------
90Sr in Milk — Data Qualifiers
Turn to Tab 14...
•	Review quality control graphs
•	Review data validation process
24

-------

MARLAP Recommends...
•	Project objectives, implementation activities, and QA/QC
data be well-documented in the project plans
•	Calibration be addressed in a quality system and through an
audit (demonstration of calibration may be required as part
of the project deliverables)
•	Assessment criteria be established in the directed planning
process and stated in the project plan documents
•	Results of each measurement, expanded measurement
uncertainty, critical level for each sample, and the
analyte/sample-specific MDC be reported for each sample
•	Any analyte for which the final measurement is less than the
critical level be qualified with a U for "undetected"

25

-------
Final Exercise: Plutonium Fabricators
Turn to Tab 21 for the laboratory report from Lab
XYZ concerning 241Am by alpha spectrometry in the
groundwater samples
26

-------
SUMMARY: The Key to the MARLAP Process
The principal MQOs in any project will be defined by:
•	The required method uncertainty, uMR, below the action level
AND
•	The relative method uncertainty, φMR, above the action level
φMR = uMR / AL
When making decisions about individual samples	uMR ~ Δ/3
When making decisions about the mean of several samples	uMR ~ Δ/10
Where Δ is the width of the gray region	Δ = AL − DL
Method Uncertainty: MARLAP's Common Thread
Definition:
•	Predicted uncertainty of a measured value that would likely result from the analysis of a sample at a
specified analyte concentration.
•	Combines imprecision and bias into a single parameter whose interpretation does not depend on context.
MARLAP recommends:
•	Identify the method uncertainty at a specified concentration (typically the action level) as an important
method performance characteristic.
•	Establish a measurement quality objective for method uncertainty for each analyte/matrix combination.
MQO for the method uncertainty (at a specified concentration):
•	Links the three phases of the data life cycle: planning, implementation, and assessment.
•	Related to the width of the gray region. The gray region has an upper bound and a lower bound. The upper
bound typically is the action level, and the lower bound is termed the "discrimination limit."
Examples of MQOs for method uncertainty at a specified concentration:
•	A method uncertainty of 0.01 Bq/g or less is required at the action level of 0.1 Bq/g.
•	The method must be able to quantify the amount of 226Ra present, given elevated levels of 235U in the
samples.
Terminology:
•	"MR
' * 
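The derivation of uMR and φMR from the gray region is mechanical; the helper below (Python, ours) reproduces the 90Sr-in-milk numbers (AL = 8, DL = 3, decisions about a mean of samples).

def required_method_uncertainty(al, dl, decisions_on_mean=False):
    delta = al - dl                        # width of the gray region
    u_mr = delta / 10.0 if decisions_on_mean else delta / 3.0
    return u_mr, u_mr / al                 # absolute uMR, relative phi_MR at the AL

print(required_method_uncertainty(8.0, 3.0, decisions_on_mean=True))   # (0.5, 0.0625)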
-------
12

-------

-------
The Key to the MARLAP Process
The principal MQOs in any project will be defined by:
•	The required method uncertainty, uMR, below the action level
AND
•	The relative method uncertainty, φMR, above the action level
φMR = uMR / AL
When making decisions about individual samples	uMR ~ Δ/3
When making decisions about the mean of several samples	uMR ~ Δ/10
Where Δ is the width of the gray region	Δ = AL − DL
Method Uncertainty: MARLAP's Common Thread
Definition:
•	Predicted uncertainty of a measured value that would likely result from the analysis of a sample at a
specified analyte concentration.
•	Combines imprecision and bias into a single parameter whose interpretation does not depend on context.
MARLAP recommends:
•	Identify the method uncertainty at a specified concentration (typically the action level) as an important
method performance characteristic.
•	Establish a measurement quality objective for method uncertainty for each analyte/matrix combination.
MQO for the method uncertainty (at a specified concentration):
•	Links the three phases of the data life cycle: planning, implementation, and assessment.
•	Related to the width of the gray region. The gray region has an upper bound and a lower bound. The upper
bound typically is the action level, and the lower bound is termed the "discrimination limit."
Examples of MQOs for method uncertainty at a specified concentration:
•	A method uncertainty of 0.01 Bq/g or less is required at the action level of 0.1 Bq/g.
•	The method must be able to quantify the amount of 226Ra present, given elevated levels of 235U in the
samples.
Terminology:
•	uMR	Required method uncertainty (absolute)
•	φMR = uMR / AL	Required method uncertainty (relative)
•	Δ = AL − DL	Width of the gray region (range of values where the consequences of a
decision error are relatively minor)
•	Action level	Concentration that will cause a decisionmaker to choose one of the alternative
actions
•	Discrimination limit Synonymous with the lower bound of the gray region

-------
13

-------
The Data Quality Objectives Process
HI. Specify a Range of Concentrations Where the Consequences of Decision Errors
Are Relatively Minor
The gray region, or region of uncertainty, indicates an area where the consequences of a Type II
decision error are relatively minor. It may not be reasonable to attempt to control decision errors
within the gray area. The resources expended to distinguish small differences in concentration
could well exceed the costs associated with making the decision error.
In this example, the question is whether it would really make a major difference in the action
taken if the concentration is called 30 pCi/g when the true value is 26 or even 22 pCi/g. If not,
the gray region might extend from 20 to 30 pCi/g. This is shown in Figure B.5.
The width of the gray region reflects the decisionmaker's concern for Type II decision errors near
the action level. The decisionmaker should establish the gray region by balancing the resources
needed to "make a close call" versus the consequences of making a Type II decision error. The
cost of collecting data sufficient to distinguish small differences in concentration could exceed
the cost of making a decision error. This is especially true if the consequences of the error are

-------
judged to be minor.
There is one instance where the
consequences of a Type II
decision error might be considered
major. That is when expensive
remediation actions could be
required that are not necessary to
protect public health. It could be
argued that this is always the case
when the true concentration is less
than the action level. On the other
hand, it can also be argued that
remediation of concentrations
near, even though not above, the
action level will still carry some
benefit. To resolve the issue,
however, the project planning team knows that not all values of the average concentration below
the action level are equally likely to exist in the survey unit. Usually, there is some knowledge, if
only approximate, of what the average value of the concentration in the survey unit is. This
information can be used to set the width of the gray region. If the planning team is fairly
confident that the concentration is less than 20 pCi/g but probably more than 10 pCi/g, they
would be concerned about making Type II errors when the true concentration is between 10 and
20 pCi/g. However, they will be much less concerned about making Type II errors when the true
concentration is between 20 and
30 pCi/g. This is simply because
they do not believe that the true
concentration is likely to be in that
range. Figure B.6 shows three
possible ways that the project
planning team might decide to set
the gray region. In "A" the project
planning team believes the true
concentration remaining in the
survey unit is about 15 pCi/g, in
"B" they believe it to be about 20
pCi/g, and in "C" about 25 pCi/g.
In each case, they are less
concerned about a decision error
involving a true concentration
greater than what is estimated to
actually remain. They have used
JULY 2004	B-17	MARLAP
Figure B.5 — The gray region is a specified range of values of the true concentration where the consequences of a decision error are considered to be relatively minor
Figure B.6 — Three possible ways of setting the gray region. In (A) the project planning team believes the true concentration remaining in the survey unit is about 15 pCi/g, in (B) about 20 pCi/g, and in (C) about 25 pCi/g

-------
The Data Quality Objectives Process
their knowledge of the survey unit to choose the range of concentration where it is appropriate to
expend resources to control the Type II decision error rate. The action level, where further
remediation would be necessary, defines the upper bound of the gray region where the probability
of a Type I error should be limited. The lower bound of the gray region defines the concentration
below which remediation should not be necessary. Therefore, it defines where the probability of
a Type II error that would require such an action should be limited.2
IV. Assign Tolerable Probability Values for the Occurrence of Decision Errors
Outside of the Range Specified in EI
As part of the DQO process, the decisionmaker and planning team must work together to identify
possible consequences for each type of decision error. Based on this evaluation, desired limits on
the probabilities for making decision errors are set over specific concentration ranges. The risk
associated with a decision error will generally be more severe as the value of the concentration
moves further from the gray region. The tolerance for Type I errors will decrease as the
concentration increases. Conversely, the tolerance for Type II errors will decrease as the
concentration decreases.
In the example, the decisionmaker has identified 20-30 pCi/g as the area where the
consequences of a Type II decision error would be relatively minor. This is the gray region. The tolerable
limits on Type I decision errors should be smallest for cases where the decisionmaker has the
greatest concern for making an incorrect decision. This will generally be at relatively high values
of the true concentration, well above the action level. Suppose, in the example, that the
decisionmaker is determined to be nearly 99 percent sure that the correct decision is made,
namely, not to reject the null hypothesis, not to release the survey unit, if the true concentration
of radionuclide X is 40 pCi/g or more. That means the decisionmaker is only willing to accept a
Type I error rate of roughly 1 percent, or making an incorrect decision 1 out of 100 times at this
concentration level. This is shown in Figure B.7(a).
If the true concentration of X is closer to the action level, but still above it, the decisionmaker
wants to make the right decision, but the consequences of an incorrect decision are not
considered as severe at concentrations between 30 and 40 pCi/g as they are when the
concentration is over 40 pCi/g. The project planning team wants the correct action to be taken at least 90
percent of the time. They will accept an error rate not worse than about 10 percent. They will
only accept a data collection plan that limits the potential to incorrectly decide not to take action
when it is actually needed to about 1 in 10 times. This is shown in Figure B.7(b).
2 Had the null hypothesis been chosen differently, the ranges of true concentration where Type I and Type II errors
occur would have been reversed.

-------
The decisionmaker and project
planning team are also concerned
about wasting resources by
cleaning up sites that do not
represent any substantial risk.
Limits of tolerable probability are
set low for extreme Type II errors,
i.e. failing to release a survey unit
when the true concentration is far
below the gray region and the
action level. They want to limit
the chances of deciding to take
action when it really is not needed
to about 1 in 20 if the true concentration is less than 10 pCi/g.
This is shown in Figure B.7(c).
They are more willing to accept higher decision error rates for concentrations nearer to the gray
region. After all, there is some residual risk that will be avoided even though the concentration is
below the action level. A Type II error probability limit of 20 percent in the 10 -20 pCi/g range is
agreed upon. They consider this to be an appropriate transition between a range of concentrations
where Type II errors are of great concern (<10 pCi/g) to a range where Type II errors are of little
concern. The latter is, by definition, the gray region, which is 20-30 pCi/g in this case. The
chance of taking action when it is not needed within the range 10-20 pCi/g is set at roughly 1 in
5. This is shown in Figure B.7(d).
Once the limits on both types of decision error rates have been specified, the information can be
displayed on a decision performance goal diagram, as shown in Figure B.7, or made into a
decision error limits table, as shown in Table B.3. Both are valuable tools for visualizing and
evaluating proposed limits for decision errors.
Table B.3 — Example decision error limits table
True Concentration	Correct Decision	Tolerable Probability of Making a Decision Error
0-10 pCi/g	Does not exceed	5%
10-20 pCi/g	Does not exceed	20%
20-30 pCi/g	Does not exceed	gray region: decision error probabilities not controlled
30-40 pCi/g	Does exceed	10%
40-50 pCi/g	Does exceed	1%
Figure B.7 — Example decision performance goal diagram

-------
There are no fixed rules for identifying at what level the decisionmaker and project planning
team should be willing to tolerate the probability of decision errors. As a guideline, as the
possible true values of the parameter of interest move closer to the action level, the tolerance for
decision errors usually increases. As the severity of the consequences of a decision error
increases, the tolerance decreases.
The ultimate goal of the DQO process is to identify the most resource-effective study design that
provides the type, quantity, and quality of data needed to support defensible decisionmaking. The
decisionmaker and planning team must evaluate design options and select the one that provides
the best balance between cost and the ability to meet the stated DQOs.
A statistical tool known as an estimated power curve can be extremely useful when investigating
the performance of alternative survey designs. The probability that the null hypothesis is rejected
when it should be rejected is called the statistical power of a hypothesis test. It is equal to one
minus the probability of a Type II error (1−β). In the example, the null hypothesis is false
whenever the true concentration is less than the action level. Figure B.8 shows the power diagram
constructed from Figure B.7 by replacing the desired limits on Type II error probabilities, β, with
the power, 1−β. The desired limits on Type I error probabilities, α, are carried over without
modification, as is the gray region. Drawing a smooth decreasing function through the desired
limits results in the desired power curve. A decision performance goal diagram with an estimated
power curve can help the project planning team visually identify information about a proposed
study design.

Figure B.8 — A power curve constructed from the decision performance goal diagram in Figure B.7

Statisticians can determine the number of measurements needed for a proposed survey design
from four values identified on the decision performance goal diagram:
(1)	The tolerable limit for the probability of making Type I decision errors, α, at the action
level (AL).
(2)	The tolerable limit for the probability of making Type II decision errors, β, along the

-------
lower bound of the gray region (LBGR).
(3)	The width of the gray region, Δ = AL − LBGR, where the consequences of Type II
decision errors are relatively minor.
(4)	The statistical expression for the total expected variability of the measurement data in the
survey unit, σ.
The actual power curve for the statistical hypothesis test can be calculated using these values, and
can be compared to the desired limits on the probability of decision errors.
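For a rough feel of how these four values combine, the normal-approximation sample-size formula below can be used (Python; a generic planning-stage sketch, not the exact MARLAP/MARSSIM survey-design calculation, and the example σ is assumed).

import math
from scipy import stats

def samples_needed(alpha, beta, sigma, delta):
    # Normal-approximation estimate for a one-sample test of the mean
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(1 - beta)
    return math.ceil((z * sigma / delta) ** 2)

# Example-like inputs: alpha = 0.05 at the AL, beta = 0.20 at the LBGR,
# gray-region width delta = 10 pCi/g, assumed sigma = 5 pCi/g (relative shift of 2)
print(samples_needed(0.05, 0.20, sigma=5.0, delta=10.0))   # 2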
The estimated number of measurements required for a proposed survey design depends heavily
on the expected variability of the measurement data in the survey unit, σ. This may not always be
easy to estimate from the information available. However, the impact of varying this parameter
on the study design is fairly easy to determine during the planning process. Examining a range of
reasonable values for σ may not result in great differences in survey design. If so, then a crude
estimate for σ is sufficient. If not, the estimate for σ may need to be refined, perhaps by a pilot
study of 20 to 30 samples. If the change in the number of samples (due to refining the estimate of
σ) is also about 20 to 30 in a single survey unit, it may be better to simply use a conservative
estimate of σ that leads to the larger number of samples rather than conduct a pilot study to
obtain a more accurate estimate of σ. On the other hand, if several or many similar survey units
will be subject to the same design, a pilot study may be worthwhile.
The example in Figure B.9 shows that the probability of making a decision error for any value of
the true concentration can be
determined at any point on the
power curve. At 25 pCi/g, the
probability of a Type II error is
roughly 45-50 percent. At 35
pCi/g, the probability of a Type I
error is roughly 3 percent.
The larger the number of samples
required to meet the stated DQOs,
the greater the costs of sampling
and analysis for a proposed plan.
Specifying a narrow gray region and/or very small limits on decision error probabilities indicates that a high level of certainty is needed and a larger number of samples will be required.
Figure B.9 — Example power curve showing the key parameters used to determine the appropriate number of samples to take in the survey unit
Specifying a wide gray region and/or larger limits on decision error probabilities indicates a lower level of certainty is required. A smaller number of samples will be necessary. The required level of certainty should be consistent with the consequences of making decision errors balanced against the cost in numbers of samples to achieve that level of certainty.
If a proposed survey design fails to meet the DQOs within constraints, the decisionmaker and
planning team may need to consider:
•	Adjusting the Acceptable Decision Error Rates. For example, the decisionmaker may
be unsure what probabilities of decision error are acceptable. Beginning with extremely
stringent decision error limits with low risk of making a decision error may require an
extremely large number of samples at a prohibitive cost. After reconsidering the potential
consequences of each type of decision error, the decisionmaker and planning team may be
able to relax the tolerable rates.
•	Adjusting the Width of the Gray Region. Generally, an efficient design will result when the relative shift, Δ/σ, lies between the values of 1 and 3. A narrow gray region usually means that the proposed survey design will require a large number of samples to meet the specified DQOs. By increasing the number of samples, the chance of making a Type II decision error is reduced, but the potential costs are increased. The wider the gray region, the less stringent the DQOs: fewer samples will be required and costs will be reduced, but the chance of making a Type II decision error is increased. The relative shift, Δ/σ, depends on the width of the gray region, Δ, and also on the estimated data variability, σ. Better estimates of either or both may lead to a more efficient survey design. In some cases it may be advantageous to try to reduce σ by using more precise measurement methods or by forming more spatially homogeneous survey units, i.e., adjusting the physical boundaries of the survey units so that the anticipated concentrations are more homogeneous within them.
APPENDIX C
MEASUREMENT QUALITY OBJECTIVES
FOR METHOD UNCERTAINTY AND
DETECTION AND QUANTIFICATION CAPABILITY
C.l Introduction
This appendix expands on issues related to measurement quality objectives (MQOs) for several
method performance characteristics which are introduced in Chapter 3, Key Analytical Planning
Issues and Developing Analytical Protocol Specifications. Specifically, this appendix provides
the rationale and guidance for establishing project-specific MQOs for the following method per-
formance characteristics: method uncertainty, detection capability and quantification capability.
In addition, it provides guidance in the development of these MQOs for use in the method selec-
tion process and guidance in the evaluation of laboratory data based on the MQOs. Section C.2 is
a brief overview of statistical hypothesis testing as it is commonly used in a directed planning
process, such as the Data Quality Objectives (DQO) Process (EPA, 2000). More information on
this subject is provided in Chapter 2, Project Planning Process and Appendix B, The Data
Quality Objectives Process. Section C.3 derives MARLAP's recommended criteria for establish-
ing project-specific MQOs for method uncertainty, detection capability, and quantification capa-
bility. These criteria for method selection will meet the requirements of a statistically based
decision-making process. Section C.4 derives MARLAP's recommended criteria for evaluation
of the results of quality control analyses by project managers and data reviewers (see also Chap-
ter 8, Radiochemical Data Verification and Validation).
It is assumed that the reader is familiar with the concepts of measurement uncertainty, detection
capability, and quantification capability, and with terms such as "standard uncertainty," "mini-
mum detectable concentration," and "minimum quantifiable concentration," which are intro-
duced in Chapter 1, Introduction to MARLAP, and discussed in more detail in Chapter 20,
Detection and Quantification Capabilities. MARLAP also uses the term "method uncertainty" to
refer to the predicted uncertainty of the result that would be measured if the method were applied
to a hypothetical laboratory sample with a specified analyte concentration. The method uncer-
tainty is a characteristic of the analytical
method and the measurement process.
C.2 Hypothesis Testing
Within the framework of a directed planning
process, one considers an "action level," which
is the contaminant concentration in either a
population (e.g., a survey unit) or an individual
item (e.g., a laboratory sample) that should not be exceeded. Statistical hypothesis testing is used
to decide whether the actual contaminant concentration, denoted by X, is greater than the action
level, denoted by AL. For more information on this topic, see EPA (2000), MARSSIM (2000),
NRC (1998), or Appendix B of this manual.
In hypothesis testing, one formulates two hypotheses about the value of X, and evaluates the measurement data to choose which hypothesis to accept and which to reject.1 The two hypotheses are called the null hypothesis H0 and the alternative hypothesis H1. They are mutually exclusive and together describe all possible values of X under consideration. The null hypothesis is presumed true unless the data provide evidence to the contrary. Thus the choice of the null hypothesis determines the burden of proof in the test.
Most often, if the action level is not zero, one assumes it has been exceeded unless the measurement results provide evidence to the contrary. In this case, the null hypothesis is H0: X ≥ AL and the alternative hypothesis is H1: X < AL. If one instead chooses to assume the action level has not been exceeded unless there is evidence to the contrary, then the null hypothesis is H0: X ≤ AL and the alternative hypothesis is H1: X > AL. The latter approach is the only reasonable one if AL = 0, because it is virtually impossible to obtain statistical evidence that an analyte concentration is exactly zero.
For purposes of illustration, only the two forms of the null hypothesis described above will be
considered. However, when AL > 0, it is also possible to select a null hypothesis that states that X
does not exceed a specified value less than the action level (NRC, 1998). Although this third
scenario is not explicitly addressed below, the guidance provided here can be adapted for it with
few changes.
In any hypothesis test, there are two possible types of decision errors. A Type I error occurs if the
null hypothesis is rejected when it is, in fact, true. A Type II error occurs if the null hypothesis is
not rejected when it is false.2 Since there is always measurement uncertainty, one cannot elimi-
nate the possibility of decision errors. So instead, one specifies the maximum Type I decision error rate α that is allowable when the null hypothesis is true. This maximum usually occurs when X = AL. The most commonly used value of α is 0.05, or 5 %. One also chooses another concentration, denoted here by DL (the "discrimination limit"), that one wishes to be able to distinguish reliably from the action level. One specifies the maximum Type II decision error rate
1	In hypothesis testing, to "accept" the null hypothesis only means not to reject it, and for this reason many
statisticians avoid the word "accept" in this context. A decision not to reject the null hypothesis does not imply the
null hypothesis has been shown to be true.
2	The terms "false positive" and "false negative" are synonyms for "Type I error" and "Type II error," respectively. However, MARLAP deliberately avoids these terms here, because they may be confusing when the null hypothesis is an apparently "positive" statement, such as X ≥ AL.
β that is allowable when X = DL, or, alternatively, the "power" 1 − β of the statistical test when X = DL. The gray region is then defined as the interval between the two concentrations AL and DL.
The gray region is a set of concentrations close to the action level, where one is willing to tolerate a Type II decision error rate that is higher than β. For concentrations above the upper bound of the gray region or below the lower bound, the decision error rate is no greater than the specified value (either α or β as appropriate). Ideally, the gray region should be narrow, but in practice, its width is determined by balancing the costs involved, including the cost of measurements and the estimated cost of a Type II error, possibly using prior information about the project and the parameter being measured.
If H0 is X ≥ AL (presumed contaminated), then the upper bound of the gray region is AL and the lower bound is DL. If H0 is X ≤ AL (presumed uncontaminated), then the lower bound of the gray region is AL and the upper bound is DL. Since no assumption is made here about which form of the null hypothesis is being used, the lower and upper bounds of the gray region will be denoted by LBGR and UBGR, respectively, and not by AL and DL. The width of the gray region (UBGR − LBGR) is denoted by Δ and called the shift or the required minimum detectable difference in concentration (EPA, 2000; MARSSIM, 2000; NRC, 1998). See Appendix B, The Data Quality Objectives Process, for graphical illustrations of these concepts.
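The bookkeeping implied by the two forms of the null hypothesis can be captured in a short helper. This is a minimal sketch; the function and argument names are illustrative, not MARLAP notation.

def gray_region(al: float, dl: float, presumed_contaminated: bool):
    """Return (LBGR, UBGR, delta) for the two forms of the null hypothesis.

    presumed_contaminated=True  -> H0: X >= AL, so UBGR = AL and LBGR = DL
    presumed_contaminated=False -> H0: X <= AL, so LBGR = AL and UBGR = DL
    """
    lbgr, ubgr = (dl, al) if presumed_contaminated else (al, dl)
    if not lbgr < ubgr:
        raise ValueError("DL must lie on the correct side of AL")
    return lbgr, ubgr, ubgr - lbgr

# H0: X >= 1.0 Bq/L (presumed contaminated) with DL = 0.5 Bq/L:
print(gray_region(1.0, 0.5, presumed_contaminated=True))   # (0.5, 1.0, 0.5)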
Chapter 3 of MARLAP recommends that for each radionuclide of concern, an action level, gray
region, and limits on decision error rates be established during a directed planning process.
Section C.3 presents guidance on the development of MQOs for the selection and development
of analytical protocols. Two possible scenarios are considered. In the first scenario, the parameter
of interest is the mean analyte concentration for a sampled population. The question to be
answered is whether the population mean is above or below the action level. In the second
scenario a decision is to be made about individual items or specimens, and not about population
parameters. This is the typical scenario in bioassay, for example. Some projects may involve both
scenarios. For example, project planners may want to know whether the mean analyte concentra-
tion in a survey unit is above an action level, but they may also be concerned about individual
samples with high analyte concentrations.
C.3 Development of MQOs for Analytical Protocol Selection
This section derives MARLAP's recommendations for establishing MQOs for the analytical
protocol selection and development process. Guidance is provided for establishing project-
specific MQOs for method uncertainty, detection capability, and quantification capability. Once
selected, these MQOs are used in the initial, ongoing, and final evaluations of the protocols.
MARLAP considers two scenarios and develops MQOs for each.
SCENARIO I: A Decision Is to Be Made about the Mean of a Sampled Population
In this scenario the total variance of the data, σ², is the sum of two components:

σ² = σM² + σS²

where σM² is the average analytical method variance (M = "method" or "measurement") and σS² is the variance of the analyte concentration in the sampled population (S = "sampling"). If σS > Δ/3, the requirement that the total σ be less than Δ/3 cannot be met regardless of σM. In that case, it is sufficient to make σM negligible in comparison to σS. Generally, σM can be considered negligible if it is no greater than about σS/3.
Often one needs a method for choosing σM in the absence of specific information about σS. In this situation, MARLAP recommends the requirement σM ≤ Δ/10 by default. The recommendation is justified below.
Since it is desirable to have σ ≤ Δ/3, this condition is adopted as a primary requirement. Assume for the moment that σS is large. Then σM should be made negligible by comparison. As mentioned above, σM can be considered negligible if it is no greater than σS/3. When this condition is met, further reduction of σM has little effect on σ and therefore is usually not cost-effective. So, the inequality σM ≤ σS/3 is adopted as a second requirement.

Algebraic manipulation of the equation σ² = σM² + σS² and the inequality σM ≤ σS/3 shows that σM ≤ σ/√10, because σ² ≥ σM² + (3σM)² = 10σM². The inequalities σ ≤ Δ/3 and σM ≤ σ/√10 together imply the requirement

σM ≤ Δ/(3√10)

or approximately

σM ≤ Δ/10
The required upper bound for the standard deviation σM will be denoted by σMR; the corresponding bound for the combined standard uncertainty, called the required method uncertainty and denoted by uMR, applies to the measured
result for a laboratory sample whose concentration equals UBGR. Note that the term "method
uncertainty" and the symbol uM actually apply not only to the method but to the entire measure-
ment process.
In theory, the value of uMR follows from the gray region and the specified limits on decision error rates; the example referenced below (Example C.5) uses AL = UBGR = 1 Bq/L, LBGR = 0.5 Bq/L, α = 0.05, β = 0.10, and uMR = 0.17 Bq/L.
In Scenario I, where decisions are made about the mean of a population based on multiple physical samples (e.g., from a survey unit), if the default value uMR = Δ/10 is assumed for the required method uncertainty, then the required bound for the analytical standard deviation as a function of concentration is as shown in Figure C.1. The figure shows that the bound, uReq, is constant at all concentrations, x, below UBGR, and uReq increases with x when x is above UBGR. So, uReq = uMR when x ≤ UBGR and uReq = x · uMR / UBGR when x > UBGR.
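The bound shown in Figure C.1 is simple enough to express directly in code. The following is a minimal sketch (names illustrative), using the default uMR = Δ/10:

def u_req(x: float, u_mr: float, ubgr: float) -> float:
    """Required bound on the analytical standard deviation (Scenario I):
    constant at u_MR below UBGR, and x * u_MR / UBGR above it (Figure C.1)."""
    return u_mr if x <= ubgr else x * u_mr / ubgr

# With UBGR = 5 Bq/g and u_MR = 0.35 Bq/g (the example used later in C.4.2):
print(u_req(2.0, 0.35, 5.0))    # 0.35 Bq/g (flat region)
print(u_req(10.0, 0.35, 5.0))   # 0.70 Bq/g (7 % of x above UBGR)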
Figure C.1 — Required analytical standard deviation (uReq) as a function of true concentration (X)
These requirements can be relaxed somewhat for samples with very high analyte concentrations
as long as the project's requirements for decision uncertainty are met. However, MARLAP does
not provide specific guidance to address this issue for Scenario I.
In Scenario II, where decisions are made about individual physical samples, it is possible to widen the required bounds for the standard deviation at any concentration outside the gray region. For example, suppose UBGR = AL, LBGR is set at some concentration below UBGR, and the decision error probabilities α and β are specified. Then the project planners require the probability of a Type I error not to exceed α when the true concentration is at or above UBGR, and they require the probability of a Type II error not to exceed β when the true concentration is at or below LBGR. The decision rule is based on the combined standard uncertainty of the measurement result: any sample whose measured concentration, x, exceeds AL minus z1−α times the combined standard uncertainty, uc(x), is assumed to exceed the action level. So, assuming uc(x) is an adequate estimate of the analytical standard deviation, the planners' objectives are met if

uc(x) ≤ (UBGR − x) / (z1−α + z1−β),	if x ≤ LBGR
uc(x) ≤ (x − LBGR) / (z1−α + z1−β),	if x ≥ UBGR
uc(x) ≤ Δ / (z1−α + z1−β) = uMR,	if LBGR < x < UBGR
Example C.5 Consider the earlier example in which AL = UBGR = 1.0 Bq/L, LBGR = 0.5 Bq/L, α = 0.05, β = 0.10, and uMR = 0.17 Bq/L. The less restrictive uncertainty requirement can be expressed as

uc(x) ≤ (1.0 Bq/L − x) / 2.927,	if x ≤ 0.5 Bq/L
uc(x) ≤ (x − 0.5 Bq/L) / 2.927,	if x ≥ 1.0 Bq/L
uc(x) ≤ 0.17 Bq/L,	if 0.5 Bq/L ≤ x ≤ 1.0 Bq/L

So, if x = 0, the requirement is uc(x) ≤ (1 Bq/L) / 2.927 = 0.34 Bq/L, and, if x = 2 Bq/L, the requirement is uc(x) ≤ (2 Bq/L − 0.5 Bq/L) / 2.927 = 0.51 Bq/L, which is approximately 26 % in relative terms.
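The piecewise requirement, and the numbers in Example C.5, can be verified with a short function. This is a minimal sketch; the function name is illustrative:

from statistics import NormalDist

def u_req_individual(x, lbgr, ubgr, alpha, beta):
    """Less restrictive Scenario II bound on the combined standard
    uncertainty u_c(x) of a single-sample result."""
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(1 - beta)
    if x <= lbgr:
        return (ubgr - x) / z
    if x >= ubgr:
        return (x - lbgr) / z
    return (ubgr - lbgr) / z          # inside the gray region: equals u_MR

# Example C.5 (alpha = 0.05, beta = 0.10, so z = 1.645 + 1.282 = 2.927):
print(round(u_req_individual(0.0, 0.5, 1.0, 0.05, 0.10), 2))   # 0.34 Bq/L
print(round(u_req_individual(2.0, 0.5, 1.0, 0.05, 0.10), 2))   # 0.51 Bq/L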
C.4.2 Acceptance Criteria for Quality Control Samples
The next issue to be addressed is how to set warning and control limits for quality control (QC)
sample results. These limits will be used by project data assessors to determine whether the lab-
oratory appears to be meeting MQOs. Presumably the lab has stricter internal QC requirements
(see Chapter 18, Laboratory Quality Control).
The development of acceptance criteria for QC samples will be illustrated with an example.
Assume UBGR = 5 Bq/g (soil) and LBGR = 1.5 Bq/g. The width of the gray region is Δ = 5 − 1.5 = 3.5 Bq/g. Project planners, following MARLAP's guidance, choose the required method uncertainty at 5 Bq/g (UBGR) to be

uMR = Δ/10 = 0.35 Bq/g

or 7 %. So, the maximum standard uncertainty at analyte concentrations less than 5 Bq/g should be uMR = 0.35 Bq/g, and the maximum relative standard uncertainty at concentrations greater than 5 Bq/g should be φMR = 0.07, or 7 %.
Although it is possible to relax these uncertainty criteria for samples with very high analyte con-
centrations, MARLAP recommends that the original criteria be used to develop acceptance limits
for the results of QC sample analyses.
C.4.2.1 Laboratory Control Samples
It is assumed that the concentration of a laboratory control sample (LCS) is high enough that the
relative uncertainty limit φMR = 0.07 is appropriate. The percent deviation for the LCS analysis is defined as

%D = (SSR − SA) / SA × 100 %

where
SSR is the measured result (spiked sample result) and
SA is the spike activity (or concentration) added.
It is assumed that the uncertainty of SA is negligible; so, the maximum allowable relative standard deviation of %D is the same as that of the measured result itself, or φMR × 100 %. Then the 2-sigma warning limits for %D are ± 2φMR × 100 % and the 3-sigma control limits are ± 3φMR × 100 %.
The warning limits in this case are

± 2φMR × 100 % = ± 14 %

and the control limits are

± 3φMR × 100 % = ± 21 %

So, the calculated value of %D is above the upper warning limit but below the control limit.
C.4.2.2 Duplicate Analyses
Acceptance criteria for duplicate analysis results depend on the sample concentration, which is estimated by the average x̄ of the two measured results x1 and x2:

x̄ = (x1 + x2) / 2

When x̄ < UBGR, the warning limit for the absolute difference |x1 − x2| is

2uMR√2 ≈ 2.83 uMR

and the control limit is

3uMR√2 ≈ 4.24 uMR

Only upper limits are used, because the absolute value |x1 − x2| is being tested.
When x̄ ≥ UBGR, the acceptance criteria may be expressed in terms of the relative percent difference (RPD), which is defined as

RPD = |x1 − x2| / x̄ × 100 %

The warning limit for RPD is

2φMR√2 × 100 % ≈ 2.83 φMR × 100 %

and the control limit is

3φMR√2 × 100 % ≈ 4.24 φMR × 100 %
Duplicate Analyses

If x̄ < UBGR:
Statistic:	Absolute difference |x1 − x2|
Warning limit:	2.83 uMR
Control limit:	4.24 uMR

If x̄ ≥ UBGR:
Statistic:	Relative percent difference (RPD)
Warning limit:	2.83 φMR × 100 %
Control limit:	4.24 φMR × 100 %
C.4.2.3 Method Blanks
Case 1. If an aliquant of blank material is analyzed, or if a nominal aliquant size is used in the
data reduction, the measured blank result is an activity concentration. The target value is zero,
but the measured value may be either positive or negative. So, the 2-sigma warning limits are
± 2uMR and the 3-sigma control limits are ± 3uMR.
Case 2. If no blank material is involved (only reagents, tracers, etc., are used), the measured result may be a total activity, not a concentration. In this case the method uncertainty limit uMR should be multiplied by the nominal or typical aliquant size, ms. Then the 2-sigma warning limits are ± 2uMR·ms and the 3-sigma control limits are ± 3uMR·ms.
The requirements for method blanks are summarized below.
Method Blanks

Concentration:
Statistic:	Measured concentration
Warning limits:	± 2uMR
Control limits:	± 3uMR

Total Activity:
Statistic:	Measured total activity
Warning limits:	± 2uMR·ms
Control limits:	± 3uMR·ms
Example C.8
(UBGR = 5 Bq/g, uMR = 0.35 Bq/g, φMR = 0.07)
Suppose a method blank is analyzed and the result of the measurement is

x = 0.00020 Bq with combined standard uncertainty uc(x) = 0.00010 Bq

Assuming the nominal aliquant mass is 1.0 mg, or ms = 0.001 g, the result is evaluated by comparing x to the warning and control limits:

± 2uMR·ms = ± 0.00070 Bq
± 3uMR·ms = ± 0.00105 Bq

In this case x is within the warning limits.
C.4.2.4 Matrix Spikes
The acceptance criteria for matrix spikes are more complicated than those described above for
laboratory control samples because of pre-existing activity in the unspiked sample, which must
be measured and subtracted from the activity measured after spiking. The percent deviation for a
matrix spike is defined as
%D = (SSR − SR − SA) / SA × 100 %

where
SSR is the spiked sample result
SR is the unspiked sample result
SA is the spike concentration added (total activity divided by aliquant size).
However, warning and control limits for %D depend on the measured values; so, %D is not a good statistic to use for matrix spikes. A better statistic is the "Z score":

Z = (SSR − SR − SA) / (φMR √(SSR² + max(SR, UBGR)²))
Example C.9 (UBGR = 5 Bq/g, φMR = 0.07) Suppose the result for the unspiked sample is

SR = 3.5 Bq/g with combined standard uncertainty uc(SR) = 0.29 Bq/g

the spike concentration added is

SA = 10.1 Bq/g with combined standard uncertainty uc(SA) = 0.31 Bq/g

and the result of the analysis of the spiked sample is

SSR = 11.2 Bq/g with combined standard uncertainty uc(SSR) = 0.55 Bq/g

Since SR is less than UBGR (5 Bq/g), max(SR, UBGR) = UBGR = 5 Bq/g. So,

Z = (11.2 Bq/g − 3.5 Bq/g − 10.1 Bq/g) / (0.07 √(11.2² + 5²) Bq/g) = −2.4 / 0.86 = −2.8

which is within the control limit of ± 3.
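As a check on the arithmetic above, here is a minimal sketch of the Z-score computation (names illustrative):

from math import sqrt

def matrix_spike_z(ssr, sr, sa, phi_mr, ubgr):
    """Z score for a matrix spike; acceptable results satisfy |Z| <= 3."""
    return (ssr - sr - sa) / (phi_mr * sqrt(ssr**2 + max(sr, ubgr)**2))

# Values from the example above (UBGR = 5 Bq/g, phi_MR = 0.07):
print(round(matrix_spike_z(11.2, 3.5, 10.1, 0.07, 5.0), 2))   # -2.8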
ATTACHMENT 3A
Measurement Uncertainty
3A.1 Introduction
No measurement is perfect. If one measures the same quantity more than once, the result gener-
ally varies with each repetition of the measurement. Not all the results can be exactly correct. In
fact it is generally the case that no result is exactly correct. Each result has an "error," which is
the difference between the result and the true value of the measurand (the quantity being meas-
ured). Ideally, the error of a measurement should be small, but it is always present and its value is
always unknown. (Given the result of a measurement, it is impossible to know the error of the
result without knowing the true value of the measurand.)
Since there is an unknown error in the result of any measurement, the measurement always
leaves one with some uncertainty about the value of the measurand. What is needed then is an
estimate of the range of values that could reasonably be attributed to the measurand on the basis
of the measurement. Determining such a range of reasonable values is the purpose of evaluating
the numerical "uncertainty" of the measurement (ISO, 1993).
This attachment gives only a brief overview of the subject of measurement uncertainty. Chapter 19 (Measurement Uncertainty) of this manual describes the evaluation and expression of measurement uncertainty in more detail.
3A.2 Analogy: Political Polling
The uncertainty of a laboratory measurement is similar to the "margin of error" reported with the
results of polls and other surveys. Note that a political poll is a form of measurement, the measur-
and in this case being the fraction of likely voters who support a specified candidate. (The frac-
tion is usually reported as a percentage.) The margin of error for the poll result is a kind of
measurement uncertainty.
Suppose a poll of 1200 people indicates that 43 percent of the population supports a particular
candidate in an election, and the margin of error is reported to be 3 percent. Then if the polling
procedure is unbiased, one can be reasonably confident (but not certain) that the actual percent-
age of people who support that candidate is really between 40 percent and 46 percent.
Political polling results can be wildly inaccurate, and the predicted winner sometimes loses. One
reason for this problem is the difficulty of obtaining an unbiased sample of likely voters for the
poll. A famous example of this difficulty occurred in the presidential election of 1936, when a
polling organization chose its sample from a list of people who owned telephones and automo-
biles and predicted on the basis of the poll that Alf Landon would defeat Franklin Roosevelt. A
significant source of inaccuracy in the result was the fact that many voters during the Great
Depression were not affluent enough to own telephones and automobiles, and those voters tended
to support FDR, who won the election in a landslide. Another famous example of inaccurate
polling occurred in the 1948 presidential election, when polls erroneously predicted that Thomas
Dewey would defeat Harry Truman. It seems that the polls in this case were simply taken too
early in the campaign. They estimated the fraction of people who supported Dewey at the time
the polls were taken, but the fraction who supported him on election day was lower. So, the
margin of error in each of these cases was not a good estimate of the total uncertainty of the
polling result, because it did not take into account significant sources of inaccuracy. A more
complete estimate of the uncertainty would have combined the margin of error with other
uncertainty components associated with possible sampling bias or shifts in public opinion.
Similar issues may arise when laboratories evaluate measurement uncertainties.
3A.3 Measurement Uncertainty
To obtain a single numerical parameter that describes the uncertainty of a measured result in the
laboratory requires one to consider all the significant sources of inaccuracy. An internationally
accepted approach to the expression of measurement uncertainty involves evaluating the
uncertainty first in the form of an estimated standard deviation, called a standard uncertainty
(ISO, 1995). A standard uncertainty is sometimes informally called a "one-sigma" uncertainty.
In the political polling example above, the measurand is the fraction, p, of likely voters who support candidate X. The poll is conducted by asking 1,200 likely voters whether they support candidate X, and counting the number of those who say they do. If m is the number who support X, then the pollster estimates p by the quotient m / 1200. Pollsters commonly evaluate the standard uncertainty of p as u(p) = 1 / (2√1200).
After the standard uncertainty of a result is calculated, finding a range of likely values for the
measurand consists of constructing an interval about the result by adding and subtracting a mul-
tiple of the standard uncertainty from the measured result. Such a multiple of the uncertainty is
called an expanded uncertainty. The factor, k, by which the standard uncertainty is multiplied is
called a coverage factor. Typically the value of £ is a small number, such as 2 or 3. If k = 2 or 3,
the expanded uncertainty is sometimes informally called a "two-sigma" or "three-sigma" uncer-
tainty. An expanded uncertainty based on a coverage factor of 2 provides an interval about the
measured result that has a reasonably high probability of containing the true value of the measur-
and (often assumed to be about 95 percent), and an expanded uncertainty based on a coverage
factor of 3 typically provides an interval with a very high probability of containing the true value
(often assumed to be more than 99 percent).
In the polling example, the definition of the margin of error is equivalent to that of an expanded uncertainty based on a coverage factor of k = 2. Thus, the margin of error equals 2 times u(p), or 1 / √1200, which is approximately 3 percent.
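The polling arithmetic above is easy to reproduce; in the sketch below the vote count m is an illustrative value chosen to give the 43 percent result:

from math import sqrt

n, m = 1200, 516              # poll size and (illustrative) number supporting X
p = m / n                     # the measured result
u_p = 1 / (2 * sqrt(n))       # standard uncertainty evaluated as above
for k in (2, 3):              # coverage factors
    print(f"k = {k}: p = {p:.3f} +/- {k * u_p:.3f}")
# k = 2 reproduces the familiar margin of error, about 0.029 (~3 percent)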
3A.4 Sources of Measurement Uncertainty
In radiochemistry the most familiar source of measurement uncertainty is counting statistics.
Mathematically, the uncertainty of a radiation measurement due to counting statistics is closely
related to the uncertainty represented by the margin of error for a political poll. If one prepares a
source from a measured amount of radioactive material, places the source in a radiation counter,
and makes several 10-minute measurements, the number of counts observed will not always be
the same. A typical set of five results might be as follows:
101, 115, 88, 111, 103
Similarly, if the political poll described above were repeated five times with different groups of
likely voters, the number of respondents in each poll who indicate they support the specified can-
didate might be as follows:
523, 506, 520, 516, 508
In either case, whether the numbers come from radiation counting or political polling, there is
some inherent variability in the results due to random sampling and counting. In radiation count-
ing, the variability exists partly because of the inherently random nature of radioactive decay and
partly because the radiation counter is not perfectly efficient at detecting the radiation emitted
from the source. In political polling, the variability exists because only a fraction of voters sup-
port the candidate and only a limited number of voters are surveyed.
As noted above, there are other potential sources of uncertainty in a political poll. The difficulty
in polling is in obtaining a representative sample of likely voters to be surveyed. A similar diffi-
culty is generally present in radiochemical analysis, since many analytical methods require that
only a small fraction of the entire laboratory sample be analyzed. The result obtained for that
small fraction is used to estimate the concentration of analyte in the entire sample, which may be
different if the fraction analyzed is not representative of the rest of the material.
There are many other potential sources of uncertainty in a radiochemical measurement, such as
instrument calibration standards, variable background radiation (e.g., cosmic radiation),
contaminants in chemical reagents, and even imperfect mathematical models. Some of these
errors will vary randomly each time the measurement is performed, and are considered to be
"random errors." Others will be fixed or may vary in a nonrandom manner, and are considered to
be "systematic errors." However, the distinction between a random error and a systematic error is
relatively unimportant when one wants to know the quality of the result of a single measurement.
Generally, the data user wants to know how close the result is to the true value and seldom cares
whether the (unknown) error of the result would vary or remain fixed if the measurement were
repeated. So, the accepted methods for evaluating and expressing the uncertainty of a measure-
ment make no distinction between random and systematic errors. Components of the total uncer-
tainty due to random effects and systematic effects are mathematically combined in a single
uncertainty parameter.
3A.5 Uncertainty Propagation
In a radiochemical measurement one typically calculates the final result, y, called the "output estimate," from the observed values of a number of other variables, x1, x2, ..., xN, called "input estimates," using a mathematical model of the measurement. The input estimates might include quantities such as the gross sample count, blank count, count times, calibration factor, decay factors, aliquant size, chemical yield, and other variables. The standard uncertainty of y is calculated by combining the standard uncertainties of all these input estimates using a mathematical technique called "uncertainty propagation." The standard uncertainty of y calculated in this manner is called a "combined standard uncertainty" and is denoted by uc(y).
Radiochemists, like pollsters, have traditionally provided only partial estimates of their measure-
ment uncertainties, because it is easy to evaluate and propagate radiation counting uncertainty —
just as it is easy to calculate the margin of error for a political poll. In many cases the counting
uncertainty is the largest contributor to the overall uncertainty of the final result, but in some
cases other uncertainty components may dominate the counting uncertainty —just as the polling
uncertainty due to nonrepresentative sampling may dominate the uncertainty calculated from the
simple margin-of-error formula. MARLAP recommends (in Chapter 19) that all of the potentially
significant components of uncertainty be evaluated and propagated to obtain the combined
standard uncertainty of the final result.
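As an illustration of how propagation works for a simple measurement model, consider the common gross-minus-blank count-rate model. The sketch below propagates only two components, the Poisson counting uncertainty and the calibration (efficiency) uncertainty; all names and numbers are illustrative, and a real evaluation would include the other components listed above:

from math import sqrt

def concentration_with_csu(n_g, t_g, n_b, t_b, eff, u_eff, vol):
    """Output estimate y = (n_g/t_g - n_b/t_b) / (eff * vol) and its combined
    standard uncertainty u_c(y), propagating the Poisson uncertainty of the
    two counts and the standard uncertainty of the efficiency."""
    rate = n_g / t_g - n_b / t_b                   # net count rate (1/s)
    y = rate / (eff * vol)
    u_rate = sqrt(n_g / t_g**2 + n_b / t_b**2)     # counting statistics
    u_y = sqrt(u_rate**2 + (rate * u_eff / eff)**2) / (eff * vol)
    return y, u_y

# 500 gross and 100 blank counts in 600 s each; eff = 0.30 +/- 0.01; 1 L sample
y, u = concentration_with_csu(500, 600, 100, 600, 0.30, 0.01, 1.0)
print(f"{y:.3f} +/- {u:.3f} Bq/L")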
3A.6 References
International Organization for Standardization (ISO). 1993. International Vocabulary of Basic
and General Terms in Metrology. ISO, Geneva, Switzerland.
International Organization for Standardization (ISO). 1995. Guide to the Expression of Uncer-
tainty in Measurement. ISO, Geneva, Switzerland.
ATTACHMENT 3B
Analyte Detection
3B.1 Introduction
In many cases one of the purposes of analyzing a laboratory sample is to determine whether the
analyte is present in the sample.1 If the data provide evidence that the analyte is present, the ana-
lyte is detected; otherwise, it is not detected. The purpose of this attachment is to explain the
issues involved in analyte detection decisions, which are often misunderstood. More details are
presented in Chapter 20 (Detection and Quantification Capabilities).
The result of a laboratory analysis is seldom if ever exactly equal to the true value of the meas-
urand (the quantity being measured), because the result is affected by measurement error (see
Attachment 3A). It is also rare for two or more analyses to produce exactly the same result,
because some components of the measurement error vary randomly when a measurement is
repeated. Typically some sources of error are well understood (e.g., radiation counting statistics)
while others (e.g., reagent contamination and interferences) may or may not be. For these
reasons, deciding whether an analyte is present in a sample is not always easy.
Acceptable methods for making detection decisions are based on statistical hypothesis testing. In
any statistical hypothesis test there are two hypotheses, which are called the null hypothesis and
the alternative hypothesis. Each hypothesis is a statement whose truth is unknown. Only one of
the two hypotheses in a hypothesis test can be true in any given situation. The purpose of the test
is to choose between the two statements. The null hypothesis is the statement that is presumed to
be true unless there is adequate statistical evidence (e.g., analytical data) to the contrary. When
the evidence for the alternative hypothesis is strong, the null hypothesis is rejected and the alter-
native hypothesis is accepted. When the evidence is weak, the null hypothesis is retained and
thus must still be assumed to be true, or at least possibly true. In the context of analyte detection,
the null hypothesis states that there is no analyte in the sample, while the alternative hypothesis
states that there is some analyte in the sample.
The concept of a null hypothesis is similar to that of a presumption of innocence in a criminal
trial, where the defendant is presumed to be innocent (the null hypothesis) unless there is strong
legal evidence to the contrary. If the evidence is strong enough to meet the burden of proof, the
defendant is found guilty (the alternative hypothesis). The important point here is that an acquit-
1 In other cases, the analyte's presence in a sample may be known or assumed before the analysis. For example,
project planners may want to know whether the concentration of a naturally occurring radionuclide, such as 238U, in
soil is above or below an action level, although there is little doubt that the analyte is present. In these cases it is
usually not necessary to make a detection decision.
tal does not require proof of innocence—only a lack of proof of the defendant's guilt. Analogous
rules apply in statistical hypothesis testing.
In the context of analyte detection, the null hypothesis states that there is no analyte in the sample; so, one must presume that no analyte is present unless there is sufficient analytical evidence to the contrary. Therefore, failing to detect an analyte is not the same thing as proving that no analyte is present. Generally, proving that there is no analyte in a sample is impossible because of measurement error. No matter how small the result of the measurement is, even if the result is zero or negative, one cannot be certain that there is not at least one atom or molecule of the analyte in the sample.
3B.2 The Critical Value
When a laboratory analyzes a sample, the measuring instrument produces a response, or gross
signal, that is related to the quantity of analyte present in the sample, but random measurement
errors cause this signal to vary somewhat if the measurement is repeated. A nonzero signal may
be (and usually is) produced even when no analyte is present. For this reason the laboratory
analyzes a blank (or an instrument background) to determine the signal observed when no analyte
is present in the sample, and subtracts this blank signal from the gross signal to obtain the net
signal. In fact, since the signal varies if the blank measurement is repeated, there is a blank signal
distribution, whose parameters must be estimated. To determine how large the instrument signal
for a sample must be to provide strong evidence for the presence of the analyte, one calculates a
threshold value for the net signal, called the critical value, which is sometimes denoted by Sc. If
the observed net signal for a sample exceeds the critical value, the analyte is considered
"detected"; otherwise, it is "not detected."
Since the measurement process is statistical in nature, even when one analyzes an analyte-free sample, it is possible for the net signal to exceed the critical value, leading one to conclude incorrectly that the sample contains a positive amount of the analyte. Such an error is sometimes called a "false positive," although the term "Type I error" is favored by MARLAP. The probability of a Type I error is often denoted by α. Before calculating the critical value one must choose a value for α. The most commonly used value is 0.05, or 5 percent. If α = 0.05, then one expects the net instrument signal to exceed the critical value in only about 5 percent of cases (one in twenty) when analyte-free samples are analyzed.
Figure 3B.1 depicts the theoretical distribution of the net instrument signal obtained when analyzing an analyte-free sample and shows how this distribution and the chosen Type I error probability, α, together determine the critical value of the net signal, Sc. The probability α is depicted as the area under the curve to the right of the dashed line. Note that decreasing the value of α requires increasing the critical value (shifting the dashed line to the right), and increasing the value of α requires decreasing the critical value (shifting the dashed line to the left).
Figure 3B.1 — The critical value of the net signal
3B.3 The Minimum Detectable Value
As explained above, the critical value is chosen to limit the probability of a Type I decision error,
which means incorrectly concluding that the analyte has been detected when it actually is not
present. When the analyte actually is present in the sample being analyzed, another kind of
decision error is possible: incorrectly failing to detect the analyte. The latter type of error is called
a Type II error.
The detection capability of an analytical measurement process, or its ability to distinguish small positive amounts of analyte from zero, is defined in terms of the probability of a Type II error. The common measure of detection capability is the minimum detectable value, which equals the smallest true value (amount, activity, or concentration) of the analyte at which the probability of a Type II error does not exceed a specified value, β.2 The definition of the minimum detectable value presumes that an appropriate detection criterion (i.e., the critical value) has already been chosen. So, the minimum detectable value is the smallest true value of the analyte that has a specified probability, 1 − β, of generating an instrument signal greater than the critical value. The value of β, like that of α, is often chosen to be 0.05, or 5 percent. (See Figure 20.1 in Chapter 20 for a graphical illustration of the relationship between the critical value and the minimum detectable value.)
In radiochemistry, the minimum detectable value may be called the minimum detectable concen-
tration (MDC), minimum detectable amount (MDA), or minimum detectable activity (also
abbreviated as MDA). MARLAP generally uses the term "minimum detectable concentration,"
or MDC.
2 Although the minimum detectable value is defined theoretically as a "true" value of the analyte, this value, like
almost any true value in the laboratory, is not known exactly and can only be estimated. The important point to be
made here is that the minimum detectable value should not be used as a detection threshold for the measured value
of the analyte.
It is common in radiochemistry to report the MDC (or MDA) for the measurement process.
Unfortunately, it is also common to use the MDC incorrectly as a critical value, which it is not. It
is difficult to imagine a scenario in which any useful purpose is served by comparing a measured
result to the MDC. Nevertheless such comparisons are used frequently by many laboratories and
data validators to make analyte detection decisions, often at the specific request of project
planners.
This common but incorrect practice of comparing the measured result to the MDC to make a
detection decision produces the undesirable effect of making detection much harder than it
should be, because the MDC is typically at least twice as large as the concentration that corres-
ponds to the critical value of the instrument signal. In principle, a sample that contains an analyte
concentration equal to the MDC should have a high probability (usually 95 percent) of
producing a detectable result. However, when the MDC is used for the detection decision, the
probability of detection is only about 50 percent, because the measured concentration is as likely
to be below the MDC as above it. When an analyte-free sample is analyzed, the probability of a
Type I error is expected to be low (usually 5 percent), but when the MDC is used for the
detection decision, the probability of a Type I error is actually much smaller—perhaps 0.1
percent or less.
Sometimes it may be desirable to have a Type I error rate much less than 5 percent; however,
this goal does not justify using the MDC for the detection decision. In this case, the correct
approach is to specify the critical value based on a smaller value of a, such as 0.01 instead
of 0.05.
MARLAP recommends that when a detection decision is required, the decision should be
made by comparing the measured value (e.g., of the net instrument signal) to its critical
value—not to the minimum detectable value.
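To make the distinction concrete, the following minimal sketch uses the classical Currie-style expressions for a paired blank with equal count times and pure Poisson counting statistics. It illustrates the concepts only; it is not the set of expressions given in Chapter 20, which also addresses low counts and extra variability:

from math import sqrt
from statistics import NormalDist

def critical_net_counts(blank_counts: float, alpha: float = 0.05) -> float:
    """Critical value S_C of the net count: for a paired blank of equal count
    time, the net count of an analyte-free sample has standard deviation
    sqrt(2 * B), so S_C = z_{1-alpha} * sqrt(2 * B)."""
    return NormalDist().inv_cdf(1 - alpha) * sqrt(2 * blank_counts)

def minimum_detectable_net_counts(blank_counts: float, k: float = 1.645) -> float:
    """Minimum detectable net count for alpha = beta (Currie):
    L_D = k**2 + 2 * k * sqrt(2 * B)."""
    return k**2 + 2 * k * sqrt(2 * blank_counts)

b = 100.0                                   # blank counts (illustrative)
s_c = critical_net_counts(b)                # ~23.3 counts: the detection threshold
l_d = minimum_detectable_net_counts(b)      # ~49.2 counts: the detection capability
print(s_c, l_d)
# A detection decision compares the observed net count to s_c, never to l_d.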
3B.4 Sources of Confusion
There are several potential sources of confusion whenever one deals with the subject of analyte
detection in radiochemistry. One source is the lack of standardization of terminology. For exam-
ple, the term "detection limit" is used with different meanings by different people. In radiochem-
istry, the detection limit for a measurement process generally means the minimum detectable
value. However, in other fields the term may correspond more closely to the critical value. In
particular, in the context of hazardous chemical analysis, the term "method detection limit,"
which is abbreviated as MDL, is defined and correctly used as a critical value (i.e., detection
threshold); so, the MDL is not a "detection limit" at all in the sense in which the latter term is
commonly used in radiochemistry. Another potential source of confusion is the similarity be-
tween the abbreviations MDL and MDC, which represent very different concepts. Anyone who is
familiar with only one of these terms is likely to be confused upon first encountering the other.
Another cause of confusion may be the practice of reporting undetectable results as "< MDC." If
the measured result is less than the critical value, the practice of reporting "< MDC" may not be
ideal, but at least it can be defended on the basis that when the measured value is less than the
critical value, the true value is almost certainly less than the MDC. However, if this shorthand
reporting format is not explained clearly, a reader may interpret "< MDC" to mean that the meas-
ured value was less than the MDC and for that reason was considered undetectable. The latter
interpretation would be incorrect and might cause the reader to misunderstand the MDC concept.
(MARLAP recommends in Chapter 19 that the laboratory always report the measured value and
its uncertainty even if the result is considered undetectable.)
3B.5 Implementation Difficulties
Conceptually, the theory of detection decisions and detection limits is straightforward, but the
implementation of the theory often presents difficulties. Such difficulties may include:
•	Difficulty in preparing and measuring appropriate blanks,
•	Variable instrument background,
•	Sample-specific interferences, and
•	Statistics of low-background radiation counting.
The concept of the "appropriate blank" is that of an artificial sample that is as much like a real
sample as practical in all important respects, but which contains none of the analyte being meas-
ured. The most appropriate type of blank depends on the analyte and the measurement procedure.
Too often the critical value is based on the distribution of the instrument background, even when
it is known that the presence of analyte in reagents and interferences from various sources cause
the observed signal for an analyte-free sample to be somewhat elevated and more variable than
the instrument background. This practice may produce a high percentage of Type I errors when
the critical value is used as a detection threshold. In other cases, the instrument background
measurement may overestimate the signal produced by an analyte-free sample and lead to higher
Type II error rates. Note that the problem in either of these cases is not the use of the critical
value but its incorrect calculation. There is still no justification for using the MDC as a detection
threshold. Instead, the critical value should be based on a better evaluation of the distribution of
the signal that is observed when analyte-free samples are analyzed.
Even when there are no interferences or reagent contamination, if the instrument background is
variable, some of the commonly used expressions for the critical value (which are based on
counting statistics only) may be inadequate. Again, the consequence of ignoring such variability
when calculating the critical value may be a high percentage of Type I errors. In this case too, the
mistake is not in how the critical value is used (as a detection threshold), but in how it is calcu-
lated.
A final issue to be discussed is how to calculate an appropriate critical value when the observed
blank count is extremely low (e.g., less than 20 counts). Chapter 20 presents expressions for the
critical value that should give good results (Type I error rates close to those expected) in these
situations when the only variability is that due to counting statistics. However, when the blank
count is low and there is additional variability, the usefulness of these expressions cannot be
guaranteed, even when they are modified to account for the extra variability.
3.5 L; Preserve on ice or with 5 mL of 37% formaldehyde / L sample
Sample Receipt and Inspection
Return sample receipt acknowledgment letter with date of receipt
at Lab. Cross index list for Sample ID and assigned Lab ID.
Visually inspect containers upon receipt to ensure integrity and
normal sample appearance. Rad survey samples upon receipt.
COC documentation applies.
Laboratory Sample Preparation
Take sufficient aliquant of sample after gamma-ray spectrometry
analysis (see separate requirements in the gamma spectroscopy
APS). Keep 1 liter as backup until analytical results have been
approved by project manager.
Sample Dissolution
None
Chemical Separations
Isolation of Sr from the milk by either cation resin or
precipitation of Sr from soured or dry-ashed milk. Separation
from Ca is essential. Rare earth and Ba scavenging steps are
necessary to eliminate possible interferences from fresh fission
products.
Preparing Sources for Counting
Final source mount to accommodate nuclear instrumentation.
Nuclear Counting
Acceptable counting instrumentation includes: Liquid
Scintillation Counter, Gas Proportional Counter or Solid State
Beta Detector. Detection method to discriminate to the extent
possible for potential 89Sr contamination by physical or calculational means.
Data Reduction and Reporting
See Attachment A
Sample Tracking Requirements
Chain-of-Custody
Other - Chemical Yielding
Gravimetric (must have 99% Ca removal) or 85Sr tracer with >
90% Ca removal.
* Spiking range provided in Attachment B
Attachment A
Data Reduction and Reporting Requirements
Data Reduction
1.	Calculation of Sr-90 activity or concentration (pCi/L) can be based on the quantification of Sr-90 and/or Y-
90, with proper addressing of decay and ingrowth of Y-90.
2.	Calculation of the associated combined standard uncertainty (pCi/L) of the 90Sr concentration.
3.	Calculation of the MDC, in terms of pCi/L, shall be sample specific using the detector efficiency and
background, counting time, decay and ingrowth factors, Sr yield and sample volume used for the analysis.
4.	Calculation of critical level, in terms of pCi/L, shall be sample specific.
5.	Calculation of gross, net and background count rate, detector efficiency, chemical yield, decay and ingrowth
factors for each sample.
6.	Initial review and approval of data reduction equations shall be established during a desk or onsite audit as
part of the lab approval/contracting process.
7.	No changes in the equations used in data reduction shall be initiated without prior approval of the project
manager.
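The approved data reduction equations are laboratory-specific (items 6 and 7), but the general shape of the calculation in items 1 through 5 can be sketched as follows. Every name and numeric factor below is illustrative, and only the counting-statistics component of the combined standard uncertainty is propagated; a full CSU would also include efficiency, yield, and volume components:

from math import sqrt

PCI_PER_BQ = 27.027   # 1 Bq = 27.027 pCi

def sr90_concentration(gross_cts, t_s, bkg_cts, t_b,
                       eff, chem_yield, decay_ingrowth, vol_l):
    """90Sr concentration (pCi/L) and its counting-statistics uncertainty.
    decay_ingrowth stands in for the 90Sr decay and 90Y ingrowth factors of
    item 5; it is a hypothetical combined factor for this sketch."""
    net_rate = gross_cts / t_s - bkg_cts / t_b            # counts/s
    denom = eff * chem_yield * decay_ingrowth * vol_l     # counts/s per Bq/L
    u_rate = sqrt(gross_cts / t_s**2 + bkg_cts / t_b**2)
    return (net_rate / denom * PCI_PER_BQ,
            u_rate / denom * PCI_PER_BQ)

# Illustrative: 900 gross / 3600 s, 200 background / 3600 s, eff 0.40,
# chemical yield 0.85, decay-ingrowth factor 0.95, 1.0 L aliquant
print(sr90_concentration(900, 3600, 200, 3600, 0.40, 0.85, 0.95, 1.0))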
Data Reporting
1.	For each sample, the following sample specific parameters shall be reported:
Batch #, Sample ID, Lab ID, sample collection (reference) date, sample receipt date, estimated (or actual)
sample volume received, separation date, counting date, cross reference to batch QC samples, SOP used,
analyst, data reviewer and report date.
2.	For each sample, the following sample processing parameters or factors shall be reported:
Gross, net and background count rates, detector efficiency, sample volume processed, 90Sr decay factor, 90Y decay and ingrowth factors (and times), and chemical yield factor.
3.	For each sample the following calculated information will be reported:
critical level, MDC, 90Sr concentration and associated combined standard uncertainty (CSU).
4.	Batch quality control results for the laboratory control sample (LCS), method blank, duplicate sample and
matrix spike sample shall be reported with each batch of samples:
Reporting data to include:
LCS - calculated sample and prepared spike concentration with associated CSUs, and percent difference
between sample result and known value
Duplicate samples - calculated concentrations with associated CSU for both samples
Matrix spike - calculated sample and known spike concentration with associated CSUs, and percent
difference between sample results
5.	A "Narrative" shall be provided with each batch of samples that describes problems encountered or noted
discrepancies for any sample, possible effect on the quality of a result and actions taken to remedy the
problem if recurrent.
6.	Reports shall be provided electronically and as a hard copy. An electronic data format will be provided.
Attachment B
Batch Quality Control Sample Evaluation Criteria
A "batch" of samples is defined as 20 samples or less including the QC samples. The results of the batch QC
samples shall be evaluated according to the equations provided below. It should be noted that no action is to be
taken when a "not to exceed" limit stated below is exceeded for an individual sample. However, if trending of
the results indicate many results or a trend of results exceeds a limit, actions must be taken to stop processing
samples, identify the root cause of the problem and take corrective actions. Sample processing can resume when
the corrective actions have been shown to be effective in eliminating the cause of the problem. It is expected
that the Laboratory's QA officer and project manager shall provide oversight on the sample processing and shall
track the batch QC results.
Laboratory Control Sample
The 90Sr spike concentration of an LCS shall be between 10 and 20 pCi/L and the spiking uncertainty should be ≤ 5%. The percent deviation (%D) for the LCS analysis is defined as

%D = (SSR − SA) / SA × 100%	(1)

where
SSR is the measured result (spiked sample result) and
SA is the spike activity (or concentration) added.
The %D control limit is ± 3φMR × 100%, or ± 19%. For long-term trending, the %D results should be plotted graphically in terms of a quality control chart with the expected mean %D value of zero.
Duplicate Samples
The acceptance criterion for duplicate analysis results depends on the analyte concentration of the sample, which is determined by the average x̄ of the two measured results x1 and x2.
When x̄ < 8 pCi/L, the control limit for the absolute difference |x1 − x2| is 4.24 uMR, or 2.1 pCi/L.
When x̄ ≥ 8 pCi/L, the control limit for the relative percent difference (RPD), defined as

RPD = |x1 − x2| / x̄ × 100%

is 4.24 φMR × 100%, or 27%. For long-term trending, the absolute difference and RPD results should be plotted graphically in terms of a quality control chart with expected absolute difference and RPD mean values of zero.
Attachment B (Continued)
Batch Quality Control Sample Evaluation Criteria
Matrix Spikes
The acceptance criteria for matrix spikes use the "Z score," defined below. The pre-existing activity (or concentration) must be measured and subtracted from the activity measured after spiking. The 90Sr spike concentration of a matrix spike shall be between 10 and 20 pCi/L and the spiking uncertainty should be ≤ 5%.

Z = (SSR − SR − SA) / (φMR √(SSR² + max(SR, UBGR)²))
  = (SSR − SR − SA) / (0.0625 √(SSR² + max(SR, 8)²))	(5)

where:
SSR is the spiked sample result,
SR is the unspiked sample result,
SA is the spike concentration added (total activity divided by aliquant mass), and max(SR, 8) denotes the maximum of SR and 8 pCi/L.
The control limit for Z is set at ± 3. It is assumed that the uncertainty of SA is negligible with respect to the uncertainty of SSR. For long-term trending, the Z results should be plotted graphically in terms of a quality control chart with a Z value of zero as the expected mean value.
Method Blanks

When an aliquant of a blank material is analyzed, the target value is zero. However, the measured value may be either positive or negative. The applicable control limit for blank samples shall be within ± 3uMR, or ± 1.5 pCi/L. For long-term trending, the blank results should be plotted graphically in terms of a quality control chart with an expected mean value of zero.
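The four control tests above can be gathered into one batch-level check. The sketch below hard-codes the Attachment B constants (uMR = 0.5 pCi/L, φMR = 0.0625, UBGR = 8 pCi/L); the function and argument names are illustrative, and, per the note above, a single exceedance feeds the trending review rather than rejecting the sample:

def evaluate_batch_qc(lcs_ssr, lcs_sa, dup_x1, dup_x2, blank,
                      ms_ssr, ms_sr, ms_sa,
                      u_mr=0.5, phi_mr=0.0625, ubgr=8.0):
    """Apply the Attachment B control limits to one batch (all in pCi/L).
    Returns a dict of pass/fail flags; failures feed the trending review."""
    ok = {}
    ok["LCS %D"] = abs((lcs_ssr - lcs_sa) / lcs_sa) * 100 <= 3 * phi_mr * 100   # ~19 %
    x_bar = (dup_x1 + dup_x2) / 2
    if x_bar < ubgr:
        ok["duplicates"] = abs(dup_x1 - dup_x2) <= 4.24 * u_mr                  # ~2.1 pCi/L
    else:
        ok["duplicates"] = abs(dup_x1 - dup_x2) / x_bar * 100 <= 4.24 * phi_mr * 100  # ~27 %
    ok["blank"] = abs(blank) <= 3 * u_mr                                        # 1.5 pCi/L
    z = (ms_ssr - ms_sr - ms_sa) / (phi_mr * (ms_ssr**2 + max(ms_sr, ubgr)**2) ** 0.5)
    ok["matrix spike Z"] = abs(z) <= 3
    return ok

# Illustrative batch: LCS 13.8 vs 15.0 spiked; duplicates 4.1 and 5.0;
# blank -0.3; matrix spike 16.0 on a 4.2 sample with 12.0 added
print(evaluate_batch_qc(13.8, 15.0, 4.1, 5.0, -0.3, 16.0, 4.2, 12.0))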
Attachment C
Method Validation Requirements
Prior to processing any milk samples, the laboratory is required to validate its 90Sr in milk radioanalytical method according to the specifications stated in MARLAP Chapter 6. The level of method validation will depend on whether the laboratory has a previously validated method for 90Sr in milk (Level A), will modify a previously validated 90Sr method for a milk matrix (Level C) or must newly develop or adapt a method for 90Sr in milk (Level D). The laboratory shall submit the method validation documentation to the project manager for review and approval prior to the acquisition of a laboratory contract. A summary of the method validation criteria is presented below for the three validation levels.

Level A method validation pertains to a previously validated method for 90Sr in milk. No additional testing is required if the method previously has been successfully validated and the available method validation documentation has been reviewed and approved by the project manager. Documentation of method validation should conform to the specifications provided below.
Level C method validation is to be conducted when a validated ^Sr method for a non-milk matrix is modified
for applicability for the milk matrix, e.g., when the EPA 905 ^Sr in water method is modified for use with a
milk matrix. A method validation plan should be developed and documented. Validation Level C requires the
preparation and analysis of five replicate milk samples (internal performance testing samples) spiked at three
different concentrations. For this project the three levels of 1, 10,20 pCi/L (or within ± 15% of the values)
should be used in the validation process. Each sample result for the lowest level (below the action level) must be
within ± 2.9 wMR or ± 1.45 pCi/L of the spiked concentration value. Each sample result from the two higher
spiked levels (above the action level) must be within ± 2.9 cpMR x 100% or ± 18% of the spiked concentration
value. Documentation of method validation should conform to the specifications provided below.
Level D method validation is to be conducted when a new method is specifically developed or adapted from the
literature for the project's ^Sr in milk application. Validation Level D requires the preparation and analysis of
seven replicate milk samples (internal performance testing samples) spiked at three different concentrations. For
this project the three levels of 1, 10, 20 pCi/L (or within ± 15% of the values) should be used in the validation
process. Each sample result for the lowest level (below the action level) must be within ± 3.0 wMR or ± 1.5 pCi/L
of the spiked concentration value. Each sample result from the two higher spiked levels (above the action level)
must be within ± 3.0 
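Expressed in code, the per-result acceptance test for Levels C and D is a minimal sketch like the following;
uMR = 0.5 pCi/L, φMR = 0.0625, and an action level of 8 pCi/L are assumptions back-calculated from the
±1.45 pCi/L and ±18% limits stated above:

    # Minimal sketch of the Level C/D acceptance test; k = 2.9 for Level C, 3.0 for Level D.
    def validation_result_ok(measured: float, known: float, k: float,
                             action_level: float = 8.0,      # assumed, consistent with Attachment B
                             u_mr: float = 0.5, phi_mr: float = 0.0625) -> bool:
        """Check one spiked validation result against the acceptance criterion."""
        if known < action_level:
            # Below the action level: absolute criterion, +/- k * u_MR.
            return abs(measured - known) <= k * u_mr
        # At or above the action level: relative criterion, +/- k * phi_MR.
        return abs(measured - known) / known <= k * phi_mr

    print(validation_result_ok(measured=1.3, known=1.0, k=2.9))    # True:  |0.3| <= 1.45 pCi/L
    print(validation_result_ok(measured=12.4, known=10.0, k=2.9))  # False: 24% > 18%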
-------
LWL	= Lower warning limit
UWL	= Upper warning limit
UCL	= Upper control limit
LCL	= Lower control limit
PLUS RADIOACTIVITY SOLUTIONS
Data Report for:  XYZ Nuclear Handlers, Incorporated
Sample Matrix:    Whole Milk

QUALITY CONTROL GRAPHS

[Figure: LCS QC chart for Sr-90: LCS results (pCi/L) versus batch number (0 to 35), with LWL, LCL, UWL, and UCL lines]

[Figure: Blank results for Sr-90 in milk: blank results (pCi/L, -3 to +3) versus batch number (0 to 35), with LWL, LCL, UWL, and UCL lines]

-------
[Figure: Control chart of the absolute value of duplicate-analysis differences: difference (0 to 3.5) versus batch number (0 to 35), with UWL and UCL lines]

-------
PLUS RADIOACTIVITY SOLUTIONS
Data Report for:        XYZ Nuclear Handlers, Incorporated
Sample Matrix:          Whole Milk
Date Samples Received:  April 18, 2006

Sample Name - Lab ID   | Sample Date | Analysis Start | Analysis Completed | Analyte | Activity ± 1σ, pCi/L | MDC, pCi/L
Guernsey 1 - 051002    | 3/24/05     | 4/4/05         | 4/11/05            | 90Sr    | 1.61 ± 0.38          | 0.80
Jersey 5 - 051003      | 3/24/05     | 4/4/05         | 4/07/05            | 90Sr    | 0.52 ± 0.36          | 1.2
Holstein 3 - 051004    | 3/24/05     | 4/4/05         | 4/11/05            | 90Sr    | 1.10 ± 0.37          | 0.68
Guernsey 6 - 051005    | 3/24/05     | 4/4/05         | 4/11/05            | 90Sr    | -0.55 ± 0.93         | 0.50
Jersey 8 - 051006      | 3/25/05     | 4/4/05         | 4/11/05            | 90Sr    | 1.55 ± 0.37          | 0.61
Guernsey 1 DU - 051008 | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | 1.95 ± 0.38          | 0.85
Batch Blank - 051009   | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | -0.43 ± 0.66         | 1.3
LCS - 051007           | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | 12.81 ± 0.49         | 1.5
Jersey 8 MS - 051010   | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | 15.50 ± 0.51         | 1.6

Matrix Spike: 20.0 pCi/L added.
LCS Target: 10.0 pCi/L
Analysis by Liquid Scintillation Counting
Critical Level ~ 0.6 pCi/L
Approved by: I. M. Wright, QA Officer

-------
PLUS RADIOACTIVITY SOLUTIONS
Data Report for:        XYZ Nuclear Handlers, Incorporated
Sample Matrix:          Whole Milk
Date Samples Received:  April 18, 2006

Sample Name - Lab ID   | Sample Date | Analysis Start | Analysis Completed | Analyte | Activity ± 1σ, pCi/L | MDC, pCi/L | Initial Data Qualifiers Based on Sample Results
Guernsey 1 - 051002    | 3/24/05     | 4/4/05         | 4/11/05            | 90Sr    | 1.61 ± 0.38          | 0.80       |
Jersey 5 - 051003      | 3/24/05     | 4/4/05         | 4/07/05            | 90Sr    | 0.52 ± 0.36          | 1.2        | U, E
Holstein 3 - 051004    | 3/24/05     | 4/4/05         | 4/11/05            | 90Sr    | 1.10 ± 0.37          | 0.68       |
Guernsey 6 - 051005    | 3/24/05     | 4/4/05         | 4/11/05            | 90Sr    | -0.55 ± 0.93         | 0.50       | U
Jersey 8 - 051006      | 3/25/05     | 4/4/05         | 4/11/05            | 90Sr    | 1.55 ± 0.37          | 0.61       |
                       |             |                |                    |         |                      |            | QC Test Qualifiers
Guernsey 1 DU - 051008 | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | 1.95 ± 0.38          | 0.85       |
Batch Blank - 051009   | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | -0.43 ± 0.66         | 1.3        | E, Q*
LCS - 051007           | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | 12.81 ± 0.49         | 1.5        | S(+)
Jersey 8 MS - 051010   | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | 15.50 ± 0.51         | 1.6        | S(-)

Matrix Spike: 20.0 pCi/L added.
LCS Target: 10.0 pCi/L
Analysis by Liquid Scintillation Counting
Critical Level ~ 0.6 pCi/L
Approved by: I. M. Wright, QA Officer

*The grayed-out qualifiers in the final column (E, Q) are present only as part of this exercise. These qualifiers
generally would NOT be applied to the QC samples. This is particularly true for the matrix spike and the LCS,
where the MDC is relatively unimportant when the measured concentration is obviously real. The E that was added
in the sample section indicates that the MDC required in the APS of 1.0 pCi/L was not met. In this case, the E
may or may not be retained by the data validator.

-------
PLUS RADIOACTIVITY SOLUTIONS
Data Report for:        XYZ Nuclear Handlers, Incorporated
Sample Matrix:          Whole Milk
Date Samples Received:  April 18, 2006

Sample Name - Lab ID   | Sample Date | Analysis Start | Analysis Completed | Analyte | Activity ± 1σ, pCi/L | MDC, pCi/L | All Qualifiers
Guernsey 1 - 051002    | 3/24/05     | 4/4/05         | 4/11/05            | 90Sr    | 1.61 ± 0.38          | 0.80       | S(+,-)
Jersey 5 - 051003      | 3/24/05     | 4/4/05         | 4/07/05            | 90Sr    | 0.52 ± 0.36          | 1.2        | S(+,-), U, E
Holstein 3 - 051004    | 3/24/05     | 4/4/05         | 4/11/05            | 90Sr    | 1.10 ± 0.37          | 0.68       | S(+,-)
Guernsey 6 - 051005    | 3/24/05     | 4/4/05         | 4/11/05            | 90Sr    | -0.55 ± 0.93         | 0.50       | S(+,-), U, Q
Jersey 8 - 051006      | 3/25/05     | 4/4/05         | 4/11/05            | 90Sr    | 1.55 ± 0.37          | 0.61       | S(+,-)
                       |             |                |                    |         |                      |            | QC Test Qualifiers
Guernsey 1 DU - 051008 | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | 1.95 ± 0.38          | 0.85       |
Batch Blank - 051009   | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | -0.43 ± 0.66         | 1.3        | E, Q*
LCS - 051007           | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | 12.81 ± 0.49         | 1.5        | S(+)
Jersey 8 MS - 051010   | 4/4/05      | 4/4/05         | 4/11/05            | 90Sr    | 15.50 ± 0.51         | 1.6        | S(-)

Matrix Spike: 20.0 pCi/L added.
LCS Target: 10.0 pCi/L
Analysis by Liquid Scintillation Counting
Critical Level ~ 0.6 pCi/L
Approved by: I. M. Wright, QA Officer

*The grayed-out qualifiers in the final column (E, Q) are present only as part of this exercise. These qualifiers
generally would NOT be applied to the QC samples. This is particularly true for the matrix spike and the LCS,
where the MDC is relatively unimportant when the measured concentration is obviously real. The E that was added
in the sample section indicates that the MDC required in the APS of 1.0 pCi/L was not met. In this case, the E
may or may not be retained by the data validator.

-------
15

-------
TABLE 4.2 — Crosswalk between project plan document elements and directed planning process
Columns: ID; Project Plan Document Element (QAPP, EPA 2001*); Content; Directed Planning Process Input.

A - Project Management

A1. Title and Approval Sheet
    Content: Title and approval sheet.

A2. Table of Contents
    Content: Document control format.

A3. Distribution List
    Content: Distribution list for the plan document revisions and final guidance.
    Directed planning process input: Include the members of the project planning team and stakeholders.

A4. Project/Task Organization
    Content: (1) Identify individuals or organizations participating in the project and discuss their roles and
    responsibilities. (2) Provide an organizational chart showing relationships and communication lines.
    Directed planning process input: The directed planning process:
    • Identifies the stakeholders, data users, decisionmakers.
    • Identifies the core planning team and the technical planning team members responsible for technical oversight.
    • Identifies the specific people/organizations responsible for project implementation (sampling and analysis).

A5. Problem Definition/Background
    Content: (1) State the specific problem to be solved and decision to be made. (2) Include enough background
    to provide a historical perspective.
    Directed planning process input: Project planning team:
    • Documents the problem, site history, existing data, regulatory concerns, background levels and thresholds.
    • Develops a decision statement.

A6. Project/Task Description
    Content: Identify measurements, special requirements, sampling and analytical methods, action levels,
    regulatory standards, required data and reports, quality assessment techniques, and schedules.
    Directed planning process input: Project planning team identifies:
    • Deadlines and other constraints that can impact scheduling.
    • Existing and needed data inputs.
    Project planning team establishes:
    • Action levels and tolerable decision error rates that will be the basis for the decision rule.
    • The optimized sampling and analytical design as well as quality criteria.

A7. Quality Objectives and Criteria for Measurement Data
    Content: (1) Identify DQOs, data use, type of data needed, domain, matrices, constraints, action levels,
    statistical parameters, and acceptable decision errors. (2) Establish MQOs that link analysis to the user's
    quality objectives. (3) APSs. (4) Method validation requirements.
    Directed planning process input: Project planning team:
    • Identifies the regulatory standards and the action level(s).
    • Establishes the decision rule.
    • Describes the existing and needed data inputs.
    • Describes practical constraints and the domain.
    • Establishes the statistical parameter that is compared to the action level.
    • Establishes tolerable decision error rates used to choose quality criteria.
    • Establishes quality criteria linked to the optimized design.
    • Establishes data verification, validation and assessment criteria and procedures.
    • Establishes APSs and MQOs.

A8. Special Training Requirements/Certification
    Content: Identify and discuss special training/certificates required to perform work.
    Directed planning process input: Project planning team:
    • Identifies training, certification, accreditation requirements for field and laboratory.
    • Identifies federal and state requirements for certification for laboratories.
    • Identifies federal and state requirements for activities, such as disposal of field-generated residuals.

A9. Documentation and Records
    Content: Itemize the information and records that must be included in a data report package, including
    report format and requirements for storage, etc.
    Directed planning process input: Project planning team:
    • Indicates whether documents will be controlled and specifies the distribution list.
    • Identifies documents that must be archived.
    • Specifies period of time that documents must be archived.
    • Specifies procedures for error corrections (for hard copy and electronic files).

B - Measurement/Data Acquisition

B1. Sampling Process Designs (Experimental Designs)
    Content: (1) Outline the experimental design, including sampling design and rationale, sampling frequencies,
    matrices, and measurement parameter of interest. (2) Identify non-standard methods and validation process.
    Directed planning process input: Project planning team establishes the rationale for and details of the
    sampling design.

B2. Sampling Methods Requirements
    Content: Describe sampling procedures, needed materials and facilities, decontamination procedures, waste
    handling and disposal procedures, and include a tabular description of sample containers, sample volumes,
    preservation and holding time requirements.
    Directed planning process input: Project planning team specifies the preliminary details of the optimized
    sampling method.

B3. Sample Handling and Custody Requirements
    Content: Describe the provisions for sample labeling, shipment, sample tracking forms, procedures for
    transferring and maintaining custody of samples.
    Directed planning process input: Project planning team describes the regulatory situation and site history,
    which can be used to identify the appropriate sample tracking level.

B4. Analytical Methods Requirements
    Content: Identify analytical methods and procedures, including needed materials, waste disposal and
    corrective action procedures.
    Directed planning process input: Project planning team:
    • Identifies inputs to the decision (nuclide of interest, matrix, etc.).
    • Establishes the allowable measurement uncertainty that will drive choice of the analytical protocols.
    • Specifies the optimized sampling and analytical design.

B5. Quality Control Requirements
    Content: (1) Describe QC procedures and associated acceptance criteria and corrective actions for each
    sampling and analytical technique. (2) Define the types and frequency of QC samples, along with the
    equations for calculating QC statistics.
    Directed planning process input: Project planning team:
    • Establishes the allowable measurement uncertainty, which will drive QC acceptance criteria.
    • Establishes the optimized analytical protocols and desired MQOs.

B6. Instrument/Equipment Testing, Inspection and Maintenance Requirements
    Content: (1) Discuss determination of acceptable instrumentation performance. (2) Discuss the procedures for
    periodic, preventive and corrective maintenance.

B7. Instrument Calibration and Frequency
    Content: (1) Identify tools, gauges and instruments, and other sampling or measurement devices that need
    calibration. (2) Describe how the calibration should be done.
    Directed planning process input: Project planning team establishes the desired MQOs, which drive acceptance
    criteria for instrumentation performance.

B8. Inspection/Acceptance Requirements for Supplies and Consumables
    Content: Define how and by whom the sampling supplies and other consumables will be accepted for use in
    the project.

B9. Data Acquisition Requirements (Non-direct Measurements)
    Content: Define criteria for the use of non-direct measurement data, such as data that come from databases
    or literature.
    Directed planning process input: Project planning team:
    • Identifies the types of existing data that are needed or would be useful.
    • Establishes the desired MQOs that would also be applicable to archived data.

B10. Data Management
    Content: (1) Outline the data management scheme, including path of data and use of storage and
    record-keeping systems. (2) Identify all data handling equipment and procedures that will be used to
    process, compile, analyze the data, and correct errors.

C - Assessment/Oversight

C1. Assessments and Response Actions
    Content: (1) Describe the number, frequency and type of assessments needed for the project. (2) For each
    assessment: list participants and their authority, the schedule, expected information, criteria for success
    and unsatisfactory conditions, and those who will receive reports and procedures for corrective actions.
    Directed planning process input: Project planning team establishes the MQOs and develops statements of the
    APSs, which are used in the selection of the analytical protocols and in the ongoing evaluation of the
    protocols.

C2. Reports to Management
    Content: Identify the frequency, content and distribution of reports issued to keep management informed.

D - Data Validation and Usability

D1. Data Review, Verification and Validation Requirements
    Content: State the criteria, including specific statistics and equations, which will be used to accept or
    reject data based on quality.
    Directed planning process input: Project planning team:
    • Establishes the MQOs for the sample analysis, and may also discuss completeness and representativeness
      requirements that will be the basis of validation.
    • Establishes the action level(s) relevant to the project DQOs.
    • Establishes the data validation criteria.

D2. Verification and Validation Methods
    Content: Describe the process to be used for validating and verifying data, including COC for data
    throughout the lifetime of the project.
    Directed planning process input: Project planning team:
    • Determines appropriate level of custody.
    • May develop a validation plan.

D3. Reconciliation With Data Quality Objectives
    Content: Describe how results will be evaluated to determine if DQOs are satisfied.
    Directed planning process input: Project planning team:
    • Defines the necessary data input needs.
    • Defines the constraints and boundaries with which the project has to comply.
    • Defines the decision rule.
    • Identifies the hypothesis and tolerable decision error rates.
    • Defines MQOs for achieving the project DQOs.

Adapted from EPA, 2002.
* EPA QAPP elements are discussed in MARLAP Appendix D, Content of Project Plan Documents.
U.S. Environmental Protection Agency (EPA). 2002. Guidance on Developing Quality Assurance Project Plans
(EPA QA/G-5). EPA/240/R-02/009. Office of Environmental Information, Washington, DC. Available at
www.epa.gov/quality/qa_docs.html.

-------
MARLAP Table E.6 — Example of a proposal evaluation plan
Proposal Evaluation
Objective: To ensure impartial, equitable, and comprehensive evaluation of proposals from
contractors desiring to accomplish the work as outlined in the Request for Proposals and to assure
selection of the contractor whose proposal, as submitted, offers optimum satisfaction of the
government's objective with the best composite blend of performance, schedules, and cost.
Basic Philosophy: To obtain the best possible technical effort which satisfies all the requirements of
the procurement at the lowest overall cost to the government.
Evaluation Procedures
1.	Distribute proposals and evaluation instructions to Evaluation Committee.
2.	Each TEC member evaluates the proposals individually. Numerical values are recorded with a concise
narrative justification for each rating.
3.	The entire committee by group discussion prepares a consensus score for each proposal.
Unanimity is attempted, but if not achieved, the Chairperson shall decide the score to be given.
4.	A Contract Evaluation Sheet listing the individual score of each TEC member for each proposal
and the consensus score for the proposal is prepared by the Chairperson. The proposals are then
ranked in descending order.
5.	The Chairperson next prepares an Evaluation Report which includes a Contract Evaluation
Sheet, the rating sheets of each evaluator, a narrative discussion of the strong and weak points of
each proposal, and a list of questions which must be clarified at negotiation. This summary shall
be forwarded to the Contracting Officer.
6.	If required, technical clarification sessions are held with acceptable proposers.
7.	Analysis and evaluation of the cost proposal will be made by the Contracting Officer for all
proposals deemed technically acceptable. The Chairperson of the TEC will perform a quantitative and
qualitative analysis of the cost proposals for those firms with whom cost negotiations will be conducted.
Evaluation Criteria
The criteria to be used in the evaluation of this proposal are selected before the RFP is issued. In
accordance with the established agency policy, TEC members prepare an average or consensus
score for each proposal on the basis of these criteria and only on these criteria.
A guideline for your numerical rating, and rating sheets with assigned weights for each criterion, are
outlined next under Technical Evaluation Guidelines for Numerical Rating.

-------
MARLAP Table E.6 (Continued) — Example of a proposal evaluation plan
Technical Evaluation Guidelines for Numerical Rating
1.	Each item of the evaluation criteria will be based on a rating of 0 to 10 points. Therefore, each
evaluator will score each item using the following guidelines:
a.	Above normal: 9 to 10 points (a quote element which has a high probability of exceeding the
expressed RFP requirements).
b.	Normal: 6 to 8 points (a quote element which, in all probability, will meet the minimum
requirements established in the RFP and Scope of Work).
c.	Below normal: 3 to 5 points (a quote element which may fail to meet the stated minimum
requirements, but which is of such a nature that it has correction potential).
d.	Unacceptable: 0 to 2 points (a quote element which cannot be expected to meet the stated
minimum requirements and is of such a nature that drastic revision is necessary for
correction).
2.	Points will be awarded to each element based on the evaluation of the quote in terms of the
questions asked.
3.	The evaluator shall make no determination on his or her own as to the relative importance of
various items of the criteria. The evaluator must apply a 0 to 10 point concept to each item
without regard to his or her own opinion concerning one item being of greater significance than
another. Each item is given a predetermined weight factor in the Evaluation Plan when the RFP
is issued and these weight factors must be used in the evaluation.	
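For concreteness, here is a minimal sketch of the weighted scoring this implies. The criterion names and weight
factors are illustrative assumptions only; the actual Evaluation Plan fixes its own items and weights before
the RFP is issued:

    # Minimal sketch: combine each evaluator's 0-10 item ratings using predetermined weights.
    weights = {"technical approach": 4, "key personnel": 3, "laboratory capacity and QA": 3}  # hypothetical
    member_ratings = [                       # one dict per TEC member, 0-10 per item
        {"technical approach": 8, "key personnel": 6, "laboratory capacity and QA": 9},
        {"technical approach": 7, "key personnel": 7, "laboratory capacity and QA": 8},
    ]

    def weighted_score(rating: dict) -> float:
        """Weight-averaged score on the same 0-10 scale as the individual items."""
        return sum(weights[k] * rating[k] for k in weights) / sum(weights.values())

    scores = [weighted_score(r) for r in member_ratings]
    print(f"Individual weighted scores: {scores}; average = {sum(scores) / len(scores):.2f}")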

-------
16

-------
Consolidated Recommendations from MARLAP, Part I
2.8 Project Planning Process
1.	MARLAP recommends the use of a directed project planning process.
2.	MARLAP recommends that the radioanalytical specialists be a part of the integrated
effort of the project planning team.
3.	MARLAP recommends that the planning process rationale be documented and the
documentation integrated with the project plan documents.
4.	MARLAP recommends using a graded approach in which the sophistication, level of QC
and oversight, and resources applied are appropriate to the project.
3.8 Key Analytical Planning Issues and Developing Analytical Protocol
Specifications
5.	MARLAP recommends that any assumptions made during the resolution of key analytical
planning issues are documented, and that these assumptions are incorporated into the
appropriate narrative sections of project plan documents.
6.	MARLAP recommends that an action level and gray region be established for each
analyte during the directed planning process.
7.	MARLAP recommends that the method uncertainty at a specified concentration (typically
the action level) always be identified as an important method performance characteristic,
and that an MQO be established for it for each analyte.
8.	MARLAP recommends that the MQO for the detection capability be expressed as a
required minimum detectable concentration.
9.	MARLAP recommends that the MQO for the quantification capability be expressed as a
required minimum quantifiable concentration.
10.	MARLAP recommends that if the lower bound of the gray region is zero, and decisions
are to be made about individual items or specimens, an analytical method should be
chosen whose MDC is no greater than the action level.
11.	MARLAP recommends that if the lower bound of the gray region is zero, and decisions
are to be made about a sampled population, an analytical method should be chosen whose
MQC is no greater than the action level.
12.	MARLAP recommends that units of the International System of Units (SI) be used
whenever possible.

-------
	Consolidated Recommendations from MARLAP, Part I (Continued)	
13.	MARLAP recommends that all measurement results be reported directly as obtained,
including negative values, along with the measurement uncertainty.
4.7 Project Plan Documents
14.	MARLAP recommends using a graded approach to project plan writing because of the
diversity of environmental data collection activities.
15.	MARLAP recommends developing a primary integrating project plan that includes other
documents by citation or as appendices.
16.	MARLAP recommends developing project plan documents that integrate all technical
and quality aspects for the life-cycle of the project, including planning, implementation,
and assessment.
17.	MARLAP recommends including, by citation or as an appendix, the report on the
directed planning process in the project plan documents.
18.	If the planning process was not documented in a report, MARLAP recommends that a
summary of the planning process addressing assumptions and decisions, established
action levels, the DQO statement, and APSs (which include the established MQOs and
any specific analytical process requirements) be included in the project plan documents.
19.	MARLAP recommends using a formal process to control and document changes if
updates of the original project plan document are needed.
5.6 Obtaining Laboratory Services
20.	MARLAP recommends that technical specifications be prepared in writing in a single
document, designated a SOW, for all radioanalytical laboratory services, regardless of
whether the services are to be contracted out or performed by an Agency's laboratory.
21.	MARLAP recommends that the MQOs and analytical process requirements contained in
the SOW be provided to the laboratory.
22.	MARLAP recommends that the SOW include the specifications for the action level and
the required method uncertainty for the analyte concentration at the action level for each
analyte/matrix.
23.	MARLAP recommends that the laboratory submit the proposed methods and required
method validation documentation with the formal response.
24.	MARLAP recommends that the RFP state that subcontracting will be permitted only with
the contracting organization's approval.

-------
Consolidated Recommendations from MARLAP, Part I (Continued)
25.	MARLAP recommends that all members of the TEC have a technical understanding of
the subject matter related to the proposed work.
6.11 Selection and Application of an Analytical Method
26.	MARLAP recommends the performance-based approach for method selection.
27.	MARLAP recommends that only methods validated for a project's application be used.
28.	MARLAP recommends that a SOW containing the MQOs and analytical process
requirements be provided to the laboratory.
29.	MARLAP recommends that the SOW include the specifications for the action level and
the required method uncertainty for the analyte concentration at the action level for each
combination of analyte and matrix.
30.	MARLAP recommends that a method undergo some basic general validation prior to
project method validation.
31.	MARLAP recommends that when a method is applied to a specific project, the method
should then undergo validation for that specific application.
32.	MARLAP recommends that as each new project is implemented, the methods used in the
analysis of the associated samples undergo some level of validation. However, it is the
project manager's responsibility to assess the level of method validation necessary.
33.	MARLAP recommends a tiered approach for project method validation.
7.5 Evaluating Methods and Laboratories
34.	MARLAP recommends that a radioanalytical specialist review the methods for technical
adequacy.
35.	MARLAP recommends that the TEC perform an independent calculation of the method's
MDC using laboratory-stated typical or sample-specific parameters.
36.	MARLAP recommends that the project manager or TEC evaluate the available data
provided by the laboratory or from performance evaluations for bias, based on multiple
analyses covering the applicable analyte concentration range.
37.	MARLAP recommends that project-specific MQOs be established and incorporated into
the SOW for laboratory radioanalytical services.
38.	MARLAP recommends that an MQO for method uncertainty be established for each
analyte/matrix combination.
39.	MARLAP recommends the "Z score" as the test for matrix spikes.
40.	MARLAP recommends that an audit team include a radioanalytical specialist familiar
with the project's or program's technical aspects and requirements.
8.7	Radiochemical Data Verification and Validation
41.	MARLAP recommends that project objectives, implementation activities and QA/QC
data be well documented in project plans, reports, and records, since the success of the
assessment phase is highly dependent upon the availability of such information.
42.	MARLAP recommends that calibration be addressed in a quality system and through an
audit, although demonstration of calibration may be required as part of a project's
deliverables.
43.	MARLAP recommends that the assessment criteria of a project be established during the
directed planning process and documented in the respective plans as part of the project
plan documents.
44.	MARLAP recommends that the result of each measurement, its expanded measurement
uncertainty, and the estimated sample- or analyte-specific MDC be reported for each
sample in the appropriate units.
9.8	Data Quality Assessment
45.	MARLAP recommends that the assessment phase of a project (verification, validation,
and DQA processes) be designed during the directed planning process and documented in
the respective plans as part of the project plan documents.
46.	MARLAP recommends that project objectives, implementation activities, and QA/QC
data be well documented in project plans, reports, and records, since the success of the
assessment phase is highly dependent upon the availability of such information.
47.	MARLAP recommends the involvement of the data assessment specialist(s) on the
project planning team during the directed planning process.
48.	MARLAP recommends that the DQA process be designed during the directed
planning process and documented in a DQA plan.
49.	MARLAP recommends that all sampling design and statistical assumptions be clearly
identified in project plan documents along with the rationale for their use.

-------
17

-------
Plutonium Fabricators, Ltd.

Site Description

Plutonium Fabricators was a company whose principal product was 241Pu made from depleted uranium.
Their research group had discovered that bombardment of depleted uranium (DU) oxide with a
select alpha energy range could produce 241Pu via an (α,n) reaction.

The DU targets were dissolved in acid and the 241Pu was extracted, transformed to the oxide, and
sold as a source. The remainder of the target was reconstituted for further production of the
plutonium. The reconstitution process involved solvent extraction (using xylene and tri-n-octyl
phosphine oxide), uranium oxide precipitation, and high-temperature firing of the precipitate.

Two processing storage facilities (Buildings "U" and "P"; see Figure 1) were built below grade
(at different depths) to perform the reprocessing and contain the waste solutions from the
extraction processes. The company used tanks and barrels for storage of solutions within the "U"
and "P" buildings. After 15 years of operation, the tanks in the lowest level of the "P" building
were found to be leaking. It was also noted at that time that the building foundation was cracked.
Company engineering personnel were unable to determine the age of the cracks or for how long
the tanks had been leaking.

The reprocessed uranium material was fired in the "U" building. It was also discovered that an
oven exhaust duct had developed a significant leak. This caused some particulate uranium oxide
to be distributed throughout the building. Ground water leaking into the "U" building compounded
the problem of concrete contamination. The ground water spread out the contamination and
allowed seepage into the concrete.

The company temporarily ceased production of 241Pu, focusing its efforts on minimizing the
spread of contamination. During this cleanup phase the company went bankrupt. The major
source term materials have been removed from the site (i.e., the leaking tanks and barrels).

You have been assigned as the project manager for the assessment of the ground-water
contamination. Your task is to write an analytical protocol specification (APS) for each of
the radionuclides potentially present in the ground water.

The site has had 10 sampling wells placed, which are equidistant from the midpoint of the two
buildings (Figure 1). These wells are sampled (4 liters each) monthly. The flow of ground water
radiates from directly below the building equally in all directions. All stakeholders have agreed
that the average of the 10 monthly samples around the site will be used to assess the 241Am
concentration with respect to the action level.

These wells have already been sampled and shown to contain 241Pu, the parent of 241Am, as well
as naturally occurring uranium and thorium (plus decay products) in concentrations less than 5
pCi/L. There will be continued monitoring for the plutonium. The current goal is to determine
if the concentration of 241Am in the ground water (on average) is greater than 15 pCi/L.

Since the closure of the plant four years ago, ground-water measurements of all wells gave 241Pu
values ranging from 10 to 100 pCi/L. If a monthly average concentration exceeds the action
level, stakeholders have agreed that the sampling frequency will be increased to weekly.

-------
The ground-water chemistry is identified in the table below:

pH  | Specific Conductivity | Na (ppm) | Ca (ppm) | Fe (ppm) | Dissolved Oxygen (ppm) | Turbidity (NTU)
8.5 | 520                   | 120      | 35       | 3.4      | 0.1                    | 50
[Figure 1. Schematic of Plutonium Fabrication Facility: ten monitoring wells (circles) arranged on a
750-foot-diameter circle around the below-grade "U" and "P" buildings, with the Admin building adjacent;
scale marker: 120 feet.]

-------
Delineating the Gray Region and Determining the Required Method Uncertainty

[The worksheet answers are handwritten in the original; only the printed questions and graph are reproduced here.]

1)  What is the population parameter to be estimated from the data to be obtained?
2)  What is the population from which the samples will be taken?

Use this information to fill in the x-axis title and show the following on the graph:

3)  What is the action level?
4)  What is the discrimination level (i.e., the lower bound of the gray region)?
5)  What are the alternative decisions?
6)  Decision Rule:

[Graph: "Delineating the Gray Region": y-axis, probability of deciding that the true concentration exceeds
the action level (0 to 1); x-axis, concentration of Am-241, 0 to 30 (units to be filled in)]

7)  Null Hypothesis:
8)  Alternative Hypothesis:
9)  What is the desired limit on the probability of making a decision error if the true
    concentration is at the DL?
10) What is the desired limit on the probability of making a decision error if the true
    concentration is at the AL?
11) Using this information, what is the required method uncertainty for your project to meet
    these goals?
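For question 11, here is a minimal sketch of the calculation, assuming the relation
uMR = (AL - DL)/(z(1-α) + z(1-β)) developed in the MARLAP gray-region material for decisions based on
individual measurements. The DL and error-rate values used below are illustrative assumptions, not the
worksheet's answers:

    # Minimal sketch of a required-method-uncertainty calculation.
    from statistics import NormalDist

    def required_method_uncertainty(al: float, dl: float,
                                    alpha: float = 0.05, beta: float = 0.05) -> float:
        """u_MR = (AL - DL) / (z_{1-alpha} + z_{1-beta}); alpha/beta are tolerable error rates."""
        z = NormalDist()
        return (al - dl) / (z.inv_cdf(1 - alpha) + z.inv_cdf(1 - beta))

    # Action level from the exercise is 15 pCi/L; DL = 5 pCi/L is assumed for illustration.
    print(f"u_MR = {required_method_uncertainty(15.0, 5.0):.2f} pCi/L")  # ~3.04 pCi/L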

-------
18

-------
Analytical Protocol Specifications
(Template from MARLAP Chapter 3, Figure 3.2)

[Handwritten entries on the original worksheet are shown in brackets where legible.]

Analyte List: [241Am]                      Analysis Limitations: ______
Matrix: [ground water]                     Possible Interferences: ______
Concentration Range: [0-100 pCi/L]         Action Level: [15 pCi/L]
Method Validation Requirements: Level B or D that cover two applications: method for same
matrix and newly developed or adapted method.
MQOs: [handwritten; not legible]

QC Samples
Type    | Frequency | Evaluation Criteria
______  | ______    | (To be completed later)
______  | ______    | (To be completed later)
______  | ______    | (To be completed later)
______  | ______    | (To be completed later)

Analytical Process Requirements*
Activity                                  | Special Requirements
Field Sample Preparation and Preservation | (To be completed later)
Sample Receipt and Inspection             | (To be completed later)
Laboratory Sample Preparation             | (To be completed later)
Sample Dissolution                        | (To be completed later)
Chemical Separations                      | (To be completed later)
Preparing Sources for Counting            | (To be completed later)
Nuclear Counting                          | (To be completed later)
Data Reduction and Reporting              | (To be completed later)
Sample Tracking Requirements              | (To be completed later)
Other                                     | (To be completed later)

*Consistent with a performance-based approach, analytical process requirements should be kept to a minimum;
therefore, "none" or "N/A" may be appropriate for many of the activities.

-------
Remediation of Plutonium Fabricators Ltd.

•	Analytes of Interest: 241Pu, 235U, 234U, 238U, Total U, 241Am
	For this exercise: 241Am
•	Other possible radionuclide interferences:
	ambient levels of 226Ra (2 pCi/L) and 228Ra (3 pCi/L) plus decay products, including 210Pb
	238U, 235U, and 234U plus short-lived decay products
	241Pu with some 237Np and 237U
•	Matrix: ground water with some solids (50 NTU) from monitoring wells located around
	the site and background locations
•	Action level: 15 pCi/L (EPA regulations); use for evaluating the mean of the sample
	population
•	Required detection level: 1.5 pCi/L from EPA regulations; use for detectability for
	individual samples
•	Estimated sample load: 20 samples per week
•	Sample size: ~4 liters
•	Required turnaround time: 30 days
•	Contract specification: 10% pricing penalty for late results
•	241Am Characteristics
	t½ = 432.2 years
	Radiation emission: α; 5.443 and 5.485 MeV
	Decay product: 237Np (α, t½ = 2.1 × 10⁶ years)
	Oxidation state: +3 (Am+3)
	Other elemental considerations, most probable oxidation states:
	Ra and Pb as +2, Pu as +3 and possibly VI, and U as VI
•	241Pu Characteristics
	t½ = 14.29 years
	Radiation emission: beta with maximum energy of 20.81 keV (99+%); alpha with energy of
	4.89 MeV (2.4 × 10⁻³ %)
	Decay products: 241Am by β, t½ = 432.2 years; 237U by α, t½ = 6.8 days
	Anticipated oxidation state(s): +4 (Pu+4), +3 (Pu+3), and possibly VI (PuO2+2) from
	nitric acid used for dissolving U targets and then heated
	Other elemental considerations, most probable oxidation states:
	Ra, Ca, and Pb as +2, Am as +3, U as VI
Important Considerations for the Project Manager:
•	Has this radionuclide been analyzed by this laboratory?
•	Has it been analyzed at the detection levels needed for the project?
•	Has the laboratory ever encountered this specific matrix and do they have a specific
procedure for treating this matrix?
•	Has the laboratory performed this radionuclide analysis with the type of interferents (stable
or radioactive) known to exist in your sample?
•	Does the laboratory procedure specifically identify the interferents?
•	Has the laboratory analyzed this radionuclide in this matrix?
•	Does the laboratory participate in a PE program for this radionuclide in this matrix?
•	Does laboratory performance meet the project expectations?

-------
19

-------
MARLAP Training Module 7 — Measurement Uncertainty Exercise

Measurement Uncertainty
EXERCISE

Introduction: Your lab analyzes water samples for 241Am by alpha-particle spectrometry, using
chromatography to separate and purify the americium, and using microprecipitation to prepare a
filter source on a planchet for counting. Americium-243 is used as a tracer.

The full mathematical model for this measurement might be given by

    ca = (Nas/ts - Nab/tb)/(Nts/ts - Ntb/tb) × (ct × Vt × Dt × Pt)/(V × Da × Pa)          (1a)

where
    ca  = activity concentration of 241Am in the sample (the measurand)
    Nas = sample count in the 241Am region of interest (ROI)
    Nab = blank count in the 241Am ROI
    Nts = sample count in the 243Am ROI
    Ntb = blank count in the 243Am ROI
    ts  = sample count time
    tb  = blank count time
    ct  = 243Am activity concentration of the tracer solution
    Vt  = volume of tracer solution added to the sample aliquant
    Dt  = correction factor for decay of 243Am from the tracer reference date through counting
    Pt  = alpha emission probability for the 243Am ROI
    V   = volume of the sample aliquant analyzed
    Da  = correction factor for decay of 241Am from sample collection through counting
    Pa  = alpha emission probability for the 241Am ROI

For simplicity in this example, since the decay factors tend to be very close to 1, we will omit
them. We will also assume that the alpha emission probabilities are exactly 1 (with no spillover
outside each ROI). So we'll use Equation 1b as our model:

    ca = (Nas/ts - Nab/tb)/(Nts/ts - Ntb/tb) × (ct × Vt)/V                                 (1b)

We will assume the count times ts and tb have negligible uncertainty. We'll consider only the
uncertainty components due to Nas, Nab, Nts, Ntb, ct, Vt, and V.

-------
Problem: (1) Using the information presented below, calculate each of the seven aforementioned
uncertainty components. (2) Use the results from step 1 to calculate the combined standard
uncertainty of the output estimate, ca. (3) Use the coverage factor k = 2 to calculate the expanded
uncertainty, U. (4) Format the result and its expanded uncertainty as they might be presented to a
client using one of the common shorthand notations discussed earlier.
Input | Value            | Uncertainty information
Nas   | 21               | Poisson (low level), u(Nas) = √(Nas + 1)
Nab   | 1                | Poisson (low level), u(Nab) = √(Nab + 1)
Nts   | 892              | Poisson (low level), u(Nts) = √(Nts + 1)
Ntb   | 2                | Poisson (low level), u(Ntb) = √(Ntb + 1)
ts    | 36 000 s         | Negligible uncertainty
tb    | 60 000 s         | Negligible uncertainty
ct    | 3346 pCi/L       | U = 12 pCi/L (k = 2)
Vt    | 1 mL, or 0.001 L | u(Vt) = 0.004 mL, or 4 × 10⁻⁶ L
V     | 0.15000 L        | u(V) = 0.00075 L
Assumptions:
•	None of the input estimates are correlated with each other.
•	Dead time is negligible.
•	Peaks in the alpha spectrum are cleanly separated, and there is no spillover from either ROI.
•	Subsampling uncertainty is negligible for this water sample.
•	Historical QC data indicate no significant amount of 241Am contamination in method blank samples.
•	We choose to ignore the decay-correction factors.

The output estimate (the activity concentration of 241Am) is calculated below:

    ca = (Nas/ts - Nab/tb)/(Nts/ts - Ntb/tb) × (ct × Vt)/V
       = (21/(36000 s) - 1/(60000 s))/(892/(36000 s) - 2/(60000 s)) × (3346 pCi/L × 0.001 L)/(0.15000 L)
       = 0.510 839 695 pCi/L

Notice that we haven't tried to round the result yet. That will happen later.

-------
(1) The sensitivity coefficients have been calculated for you in the table below. Use the
information on the previous page to fill in the standard uncertainty of each of the seven input
estimates and calculate the associated component of the combined standard uncertainty, ui(y), in
the fourth column of the table.

Input estimate xi | Sensitivity coefficient ci = ∂f/∂xi | Standard uncertainty u(xi) | Component of uc(y) generated by u(xi): ui(y) = |∂f/∂xi| × u(xi), pCi/L | Square of ui(y): ui²(y), pCi²/L²
Nas | 0.025041161503 pCi/L       | ______ | ______ | ______
Nab | -0.015024696902 pCi/L      | ______ | ______ | ______
Nts | -5.734617138 × 10⁻⁴ pCi/L  | ______ | ______ | ______
Ntb | 3.440770283 × 10⁻⁴ pCi/L   | ______ | ______ | ______
ct  | 1.526717557 × 10⁻⁴         | ______ | ______ | ______
Vt  | 510.8396947 pCi/L²         | ______ | ______ | ______
V   | -3.405597964 pCi/L²        | ______ | ______ | ______
    |                            |        | Combined variance uc²(ca): | ______

(2) Use the uncertainty propagation formula to calculate the combined variance of ca. For this
exercise, just square each of the seven uncertainty components calculated in the fourth column of
the table above, write the results in the last column, add them up, and write the sum in the lower
right corner. (This sum is the combined variance of ca.) Then calculate the combined standard
uncertainty by taking the square root of the combined variance:

    uc(ca) = √(uc²(ca)) = ______ pCi/L

(3) Calculate U by multiplying uc(ca) by the coverage factor k = 2:

    U = 2 × uc(ca) = ______ pCi/L

(4) Finally, round the result (0.510 839 695 pCi/L) and its expanded uncertainty, and write them
in the appropriate shorthand format.
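The whole calculation can be checked end to end. Here is a minimal sketch in Python, using numerical
differentiation in place of the analytic sensitivity coefficients tabulated above:

    # Minimal sketch of the GUM first-order uncertainty propagation for Equation 1b.
    from math import sqrt

    ts, tb = 36000.0, 60000.0   # count times, s (negligible uncertainty)

    def model(Nas, Nab, Nts, Ntb, ct, Vt, V):
        """Equation 1b: ca = (Nas/ts - Nab/tb)/(Nts/ts - Ntb/tb) * ct*Vt/V."""
        return (Nas/ts - Nab/tb) / (Nts/ts - Ntb/tb) * ct * Vt / V

    x = [21.0, 1.0, 892.0, 2.0, 3346.0, 0.001, 0.15000]      # Nas, Nab, Nts, Ntb, ct, Vt, V
    u = [sqrt(21 + 1), sqrt(1 + 1), sqrt(892 + 1), sqrt(2 + 1),
         12.0 / 2,     # ct: expanded uncertainty 12 pCi/L quoted with k = 2
         4e-6, 7.5e-4]

    ca = model(*x)
    var = 0.0
    for i in range(len(x)):
        h = 1e-6 * x[i]                      # small relative step for the derivative
        xp = list(x); xp[i] += h
        ci = (model(*xp) - ca) / h           # numerical sensitivity coefficient df/dx_i
        var += (ci * u[i]) ** 2              # add this input's variance contribution

    uc = sqrt(var)
    print(f"ca = {ca:.4f} pCi/L, u_c = {uc:.4f} pCi/L, U(k=2) = {2 * uc:.2f} pCi/L")

With these inputs the sketch reproduces ca ≈ 0.5108 pCi/L and uc ≈ 0.120 pCi/L, so the result would be
reported in shorthand as 0.51 ± 0.24 pCi/L (k = 2).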

-------
20

-------
Procedure XYZ 15-10: Analysis of Liquid Samples for 241Am by Gamma Spectrometry

Introduction
Analysis for 241Am in groundwater samples can be performed by utilizing its gamma-ray emission
line at 59 keV. Sample count time is 25,000 seconds for a 4-L Marinelli beaker, to achieve a
minimum detectable concentration (MDC) of 1.5 pCi/L. Water samples shall have been preserved by
adding sufficient concentrated nitric acid to a 4-L Marinelli beaker so that the pH of the sample is
less than 2.0. Sample acidification is important so that americium does not precipitate out during the
long count times required to achieve the required MDC.
References
1.	Procedure XYZ 1-1 QA Program for Gamma Spectrometry Analysis.
2.	USNRC Regulatory Guide 4.15.
3.	MARLAP. 2004. Multi-Agency Radiological Laboratory Analytical Protocols Manual.
Volumes 1-3. Washington, DC: EPA 402-B-04-001A-C, NUREG 1576, NTIS PB2004-
105421.
Precautions
1.	New Marinelli beakers shall be used for each new sample.
2.	Only Detectors 1 and 2 can achieve the stated MDC for 241Am in 25,000 seconds. Detectors
4, 5, and 6 require at least 35,000 seconds. All count times must be adjusted to accommodate
these detectors.
3.	A daily background count shall be performed prior to the start of each batch of samples. Daily
background counts are 15,000 seconds per detector.
4.	Acidification of groundwater samples generally requires 15 mL of concentrated nitric acid.
The solution pH must be verified to be < 2.0 before commencing the gamma spectrometric
analysis.
5.	A matrix spike sample shall be run with each batch. The spike added to the unknown shall be
sufficient to bring the final concentration of the solution to > 30 pCi/L.
Procedure
1.	Transfer approximately 2 L of sample to the 4-L Marinelli beaker.
2.	Add 15 mL of concentrated nitric acid.
3.	Transfer enough sample to bring the Marinelli beaker to the mark designated "4 L."
4.	Using a stirring rod and pH paper, verify that the pH of the solution is less than 2.0. If pH is >
2.0 add an additional 10 mL of concentrated nitric acid and repeat the pH measurement.
5.	Place the lid snugly on the Marinelli beaker, "burp" the container, and seal the lid interface
with electrical tape.
6.	Wipe the outside of the Marinelli beaker with a dry cloth.
7.	Place the Marinelli beaker on the detector can, close the cave, and put up the "in use" flag.
8.	Enter the preset count time according to the detector selected (see Precautions).
9.	When the count is finished, verify the following parameters on the gamma-ray printout sheets:
a.	Detector
b.	Count time for the detector used to achieve an MDC of 1.5 pCi/L
c.	Sample size
d.	The sample date
e.	The count date
f.	MDC for 241Am is < 1.5 pCi/L if no gamma-ray peak is identified.
10.	Log the values for each sample in the client folder on the LABDATA system. Values that are
below the MDC should be logged as "zero."

-------
Final Conditions
1.	All samples shall be disposed of in the containers marked "Acid Waste."
2.	Each detector shall be inspected for cleanliness following each sample counting period.

-------

Laboratory XYZ Method W04
Radiochemical Analysis of 241Am in Water by Alpha Spectrometry
Abbreviated Method with Major Detail
1.	Scope
1.1	This procedure describes a method for separation and quantification of americium in
water.
2.	Summary of Method
2.1 A calcium phosphate precipitation technique is used to concentrate and remove actinides
from water samples. Americium is separated by extraction chromatography from other
actinides prior to measurement by alpha spectrometry. Sequential extraction chromato-
graphy uses a CMPO-TBP resin column to remove actinides (Ac, Th, Pa, U, Np, Pu) and
lanthanides (La, Ce, etc.) from the sample. Americium and the lanthanides are eluted from
the column. If excessive lanthanides are in the sample, an Aliphatic Quaternary Amine
resin column is used to separate americium from the lanthanides. A 243Am radiotracer is
used to monitor chemical yield and correct results to improve accuracy.
3.	Interferences
3.1	Actinides with unresolvable alpha energies, such as 241Am and 238Pu, must be chemically
separated to enable 241Am quantification. This method separates these radionuclides
effectively.
3.2	Very high levels of phosphate in the sample may cause a chemical interference. Adjusting
the amount of phosphate added to coprecipitate the actinides may be necessary in these
cases.
4.	Apparatus - see detailed procedure W04
5.	Reagents - see detailed procedure W04
6.	Procedure
6.1. Water Sample Preparation:
6.1. Water Sample Preparation:
6.1.1.	If required, filter a 1-L sample through a 0.45-micron filter.
6.1.2.	Add 5 mL of concentrated HCl (sp gr 1.19) per liter of sample (0.5 mL per 100 mL) to
acidify each sample.
6.1.3.	Add appropriate tracers and/or analyze standards per lab protocol.
6.1.4.	Calcium phosphate precipitation option:
6.1.4.1	Add 0.5 mL of 1.25M Ca(NO3)2 to each beaker.
6.1.4.2	Allow the samples to heat until boiling.
6.1.4.3	Once the samples boil, add 2-3 drops of phenolphthalein indicator and 200 µL of 3.2M
(NH4)2HPO4 solution.
6.1.4.4	Add enough concentrated NH4OH with a squeeze bottle to reach the phenolphthalein end
point and form Ca3(PO4)2 precipitate.
6.1.4.5	Separate the precipitate from solution by decanting the supernatant.
6.1.4.6	Transfer the precipitate to a centrifuge tube and centrifuge the precipitate for
approximately 10 minutes at 2000 rpm.
6.1.4.7	Decant supernatant and discard to waste.
6.1.4.8	Wash the precipitate with an amount of water approximately twice the volume of the
precipitate. Mix well on a vortex mixer. Centrifuge for 5-10 minutes. Discard the
supernatant.

-------
Method XYZ W04: Analysis of Liquid Samples for 241Am by Alpha Spectrometry (Cont'd)
6.1.4.9	Dissolve precipitate in approximately 5 mL concentrated nitric acid. Transfer solution to a
100-mL beaker. Rinse centrifuge tube with 2-3 mL of concentrated nitric acid and transfer
to beaker. Evaporate solution to dryness.
6.2. Am/La Separations using extraction chromatographic resins:
6.2.1	Redissolve calcium phosphate precipitate:
6.2.1.1	Dissolve each precipitate with 10 mL of 3M HNO3-1M Al(NO3)3.
6.2.1.2	Add approximately 200 mg of ascorbic acid to each solution, swirling to mix.
6.2.2	Am Separation Using a CMPO-TBP Extraction Resin:
6.2.2.1	For each sample dissolved, place a CMPO-TBP Resin column in the column rack.
6.2.2.2	Pipet 5 mL of 2M HNO3 into each column to condition resin and allow to drain.
6.2.2.3	Transfer each solution from step 6.2.1.2 into the appropriate CMPO-TBP Resin column by
pouring and/or using a plastic transfer pipet.
6.2.2.4	Allow the load solution to drain through the column.
6.2.2.5	Pipet 5 mL of 2M HNO3 into the sample beaker and transfer this rinse to the appropriate
column using the same plastic transfer pipet. Allow to drain.
6.2.2.6	Pipet 5 mL of 2M HNO3-0.1M NaNO2 directly into each column, rinsing each column
reservoir while adding the 2M HNO3-0.1M NaNO2.
Note: Sodium nitrite is used to oxidize Pu+3 to Pu+4 and enable the Pu/Am separation.
6.2.2.7	Add 5 mL of 0.5M HNO3 to each column and allow to drain.
Note: 0.5M HNO3 is used to lower the nitrate concentration prior to conversion to the
chloride system.
6.2.2.8	Discard the load and rinse solutions to waste.
6.2.2.9	Ensure that clean, labeled beakers or vials are below each column.
6.2.2.10	Add 3 mL of 9M HCl to each column to convert to the chloride system. Collect eluate.
6.2.2.11	Add 20 mL of 4M HCl to elute americium. Collect eluate in the same beaker. Set beakers
aside for Am/La separation option 6.2.3.
6.2.3	Option: Separation of americium from lanthanides using Aliphatic Quaternary Amine
Resin, as required when significant lanthanides cause americium alpha spectral
degradation:
6.2.3.1	For each sample dissolved, place an Aliphatic Quaternary Amine column in the column
rack.
6.2.3.2-6.2.3.14	Steps: see detailed procedure.
6.2.3.15	Dissolve sample in 10 mL of 4M HCl.
6.3	Sample preparation for counting:
6.3.1	Add 0.2 mL of cerium carrier to each beaker from step 6.2.3.15.
6.3.2	Add 1.0 mL of concentrated HF to each beaker. Swirl to mix. Let the solutions sit for at
least 30 minutes before filtering.
6.3.3	Set up a 0.1-micron, 25-mm filter, glassy side down, on a Gelman filter apparatus with a
50-mL polysulfone funnel and a 100-mL polypropylene Erlenmeyer flask.
6.3.4	Add 3-5 mL of 80% ethanol to each filter, applying vacuum and ensuring there are no
leaks along the sides. Add 2-3 mL of water to each filter.
6.3.5	Filter the sample and rinse the 50-mL centrifuge tube with 5 mL of water, transferring this
rinse to the filter apparatus.
6.3.6	Wash each filter with 3-5 mL of ethanol.
6.3.7	Remove filters, place in plastic Petri dishes, and dry under UV lamps for a few minutes.
6.3.8	Mount filters on stainless steel planchets, using double-sided tape or glue stick, and count
by alpha spectrometry.
7.	Alpha Spectrometry Counting
7.1	Setup and perform an energy and efficiency calibration of the alpha spectrometry system
according to the detailed Procedure W04.

-------
Method XYZ W04: Analysis of Liquid Samples for 241Am by Alpha Spectrometry (Cont'd)
7.2	Place the mounted sample on the appropriate calibrated shelf of the alpha spectrometer
vacuum chamber.
7.3	Close the vacuum chamber door and initiate vacuum pump to slowly evacuate the
chamber according to the detailed procedure W04.
7.4	Apply bias between the sample planchet and detector.
7.5	Apply detector bias and begin counting for a time period to meet MQO requirements.
8.	Calculations
8.1 Calculate 241Am sample concentration and associated uncertainty, critical level, and MDC
according to the equations in the detailed procedure W04.
9.	Notes
9.1 Bias - A mean chemical yield of 95% has been reported for americium. Since results are
corrected based on spike recovery, no significant bias should exist for the method.
References - See detailed procedure.

-------
Laboratory XYZ Method Validation Data for Radiochemistry
Gamma Spectrometric Analysis of 241Am in Ground Water

Introduction
Laboratory XYZ has performed gamma spectrometric analysis of groundwater samples previously,
but had not calibrated its gamma detector at 59 keV, where the 241Am gamma ray is located.
Detector calibration was completed using a 241Am source (NIST traceable), and a count time of 150
minutes for a 4-L sample.

The software method for gamma-ray analysis uses the region of interest (ROI) routine rather than a
peak search algorithm. Each sample is counted for a period of 100 minutes, as are the blanks.

Aliquants from the NIST-traceable source were taken and appropriately diluted to 20, 10, and 5
pCi/L. Three of each of these solutions were made using laboratory demineralized water and nitric
acid and placed in separate, new Marinelli beakers. These samples were analyzed according to
Procedure XYZ 15-10, "Americium Analysis by Gamma Spectrometry" (this procedure and the
detector calibration were newly created for this analysis).

A sample of ground water was spiked with the 241Am standard and analyzed along with a set of blank
samples. The blank samples were made from demineralized water and nitric acid to the same acid
concentration as the samples and were also placed in new Marinelli beakers.

Analysis results for the gamma spectrometric measurements of the samples, matrix spikes, and blanks
are shown below.
Data:

Method Validation Study (measured concentration, pCi/L ± 1σ)

Trial Number | Nominal 20 pCi/L | Nominal 10 pCi/L | Nominal 5 pCi/L
1            | 23.4 ± 1.1       | 14.8 ± 1.6       | 9.5 ± 1.6
2            | 22.5 ± 1.5       | 13.9 ± 1.3       | 8.6 ± 1.8
3            | 21.8 ± 1.3       | 14.3 ± 1.4       | 9.3 ± 1.6

Matrix Spike and Blanks

Sample         | Result, pCi/L | ± 1σ
Blank 1        | 0.55          | 1.4
Blank 2        | 0.83          | 1.6
Blank 3        | 0.77          | 1.1
Spike Result   | 45.7          | 0.95
Spike added    | 40.0          | 0.10
Unspiked Value | -1.62         | 1.8

-------
Laboratory XYZ Method Validation Data for Radiochemistry
Gamma Spectrometric Analysis of 241Am in Ground Water (Cont'd)

Method Validation Data Review Form

Nuclide: Am-241    Matrix: Water    Action Level: ______
Laboratory Name: ______
Proposed Method: XYZ 15-10 Americium Analysis by Gamma Spectrometry
Required Method Validation Level: ______    Required Method Uncertainty: ______
Acceptance Criteria: Measured value within ± ______ × uMR or ± ______ × φMR (%) of known value

Data Evaluation:

[Worksheet table: trial numbers 1-7 for Test Level 1 (20 pCi/L), Test Level 2 (10 pCi/L), and Test Level 3
(5 pCi/L), each with columns Measured, |Δ|, and Accepted Y/N; entries are handwritten in the original.]

|Δ| = absolute value of difference between measured and known values

-------
Laboratory XYZ Method Validation Data for Radiochemistry
Alpha Spectrometric Analysis of 241Am in Ground Water

Introduction:
Laboratory XYZ had never analyzed 241Am in water by radiochemistry on a routine basis. For this project, the
laboratory downloaded, from a commercial web site, a widely used radiochemical method for 241Am in water.
The method was reviewed for project applicability and for laboratory instrumentation and equipment availability.
The radiochemist at the laboratory decided to use cerium fluoride microprecipitation rather than electrodeposition
for the last purification step and final sample mounting for counting by the alpha spectrometer. Because this
was a new method obtained from a nationally known method source, and had not previously been used by the
laboratory, a method validation plan was established to test the method to meet Method Validation Level D.

Internal test samples were prepared by adding sufficient amounts of a NIST-traceable 241Am aqueous solution to
separate eight-liter deionized water solutions (in high-density polyethylene containers) to obtain 20, 15, and 5
pCi/L concentrations. Prior to spiking, the solutions were made acidic by adding concentrated HCl (5 mL/L of
sample). Seven 1-L samples were taken from each container and analyzed according to Procedure W04
(attached). A 243Am radiotracer was used with each sample to determine the 241Am chemical yield for the sample
processed. Seven 1-L analytical blanks were prepared from acidified demineralized water (HCl) and were
included as a fourth concentration level.

Analytical results for the alpha spectrometric measurements of the test samples and blanks are shown below.
Data:

Method Validation Study

                         Test Concentrations
Trial     20 pCi/L         15 pCi/L           5 pCi/L           0 pCi/L
Number                     (Action Level)                       (Blank)
                           Measured pCi/L ± 1σ
1         20.6 ± 1.2       15.83 ± 0.97       5.22 ± 0.44        0.021 ± 0.013
2         19.1 ± 1.1       15.77 ± 0.97       4.57 ± 0.38       -0.015 ± 0.015
3         20.4 ± 1.2       14.25 ± 0.87       5.82 ± 0.49        0.031 ± 0.022
4         20.9 ± 1.2       13.73 ± 0.84       4.46 ± 0.38        0.010 ± 0.013
5         19.8 ± 1.1       14.78 ± 0.90       4.77 ± 0.40        0.013 ± 0.013
6         19.5 ± 1.1       15.31 ± 0.94       5.38 ± 0.45       -0.024 ± 0.013
7         20.6 ± 1.2       16.4  ± 1.0        6.32 ± 0.53        0.006 ± 0.013
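
Before the review form is completed, it helps to summarize each test level. A minimal
Python sketch (standard library only) that computes the mean, the bias from the known
value, and the experimental standard deviation of the seven trials at each level:

    import statistics

    # Alpha spectrometry validation results (pCi/L) keyed by known concentration.
    levels = {
        20.0: [20.6, 19.1, 20.4, 20.9, 19.8, 19.5, 20.6],
        15.0: [15.83, 15.77, 14.25, 13.73, 14.78, 15.31, 16.4],
         5.0: [5.22, 4.57, 5.82, 4.46, 4.77, 5.38, 6.32],
         0.0: [0.021, -0.015, 0.031, 0.010, 0.013, -0.024, 0.006],
    }

    for known, results in levels.items():
        mean = statistics.mean(results)
        s = statistics.stdev(results)  # experimental standard deviation
        print(f"known {known:5.1f}: mean = {mean:6.3f}, bias = {mean - known:+6.3f}, s = {s:5.3f}")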

-------
Laboratory XYZ Method Validation Data for Radiochemistry
Alpha Spectrometric Analysis of 241Am in Ground Water (Continued)

Method Validation Data Review Form

Nuclide: Am-241    Matrix: Water    Action Level: _______
Laboratory Name: _______
Proposed Method: W04, Radiochemistry with Alpha Spectrometry
Required Method Validation Level: D    Required Method Uncertainty: _______
Acceptance Criteria: Measured value within ± ___ × u_MR or ± ___ × φ_MR × known value
Data Evaluation:

Trial     Test Level 1 (20 pCi/L)     Test Level 2 (15 pCi/L)     Test Level 3 (5 pCi/L)
Number    Measured  |Δ|  Accepted     Measured  |Δ|  Accepted     Measured  |Δ|  Accepted
                         Y/N                         Y/N                         Y/N
1
2
3
4
5
6
7

|Δ| = absolute value of the difference between measured and known values

-------
Laboratory XYZ
"We are the Wizards"

Project Name:      Plutonium Fabricators, Ltd.
Sample Date:       September 1, 2005
Analysis Date:     November 1, 2005
Analysis Method:   Alpha Spectrometry, Method W04

Client ID       Laboratory ID        Sample Result    1σ Uncertainty    Qualifier
                                     (pCi/L)          (pCi/L)
090105W1        1885P001             -0.002           0.067
090105W2        1885P002              4.97            0.33
090105W3        1885P003              1.18            0.17
090105W4        1885P004             12.61            0.52
090105W5        1885P005             -0.011           0.065
090105W6        1885P006             22.7             2.6
090105W7        1885P007             -0.007           0.066
090105W8        1885P008              6.66            0.38
090105W9        1885P009              1.58            0.19
090105W10       1885P010              0.90            0.15
Matrix spike    1885PMS1-P002 (1)    36.1             2.0
LCS             1885PQC1 (2)         26.1             1.7
Blank           1885PB1               4.01            0.29
Duplicate       1885PDP1-P008        11.66            0.50

(1) Spike added to sample 1885P002 = 24.0 pCi/L
(2) QC nominal value = 20.0 pCi/L
-------
Calculate the critical value (L_C) for the samples in this data set according to the Project Plan
Document as follows:

    Critical Value (L_C) = 1.645 × 1σ

where 1σ is the laboratory-reported uncertainty and L_C is based on the average value of the
historical blanks.
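
A minimal Python sketch of this check, applied to several results from the data report
using each sample's reported 1σ uncertainty per the formula above; results below L_C
would receive the U (nondetect) qualifier defined at the end of this package:

    # Sketch: flag nondetects using the project's rule L_C = 1.645 * (1 sigma).
    samples = {                      # client ID: (result, 1-sigma uncertainty), pCi/L
        "090105W1": (-0.002, 0.067),
        "090105W3": (1.18, 0.17),
        "090105W5": (-0.011, 0.065),
        "090105W7": (-0.007, 0.066),
    }

    for client_id, (result, sigma) in samples.items():
        l_c = 1.645 * sigma
        qualifier = "U" if result < l_c else ""
        print(f"{client_id}: result = {result:6.3f}, L_C = {l_c:5.3f}  {qualifier}")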
I. For the Matrix Spike Result

Calculate the "Z statistic" using the following equation:

    Z = (SSR − SR − SA) / u_c(SSR − SR − SA)

where SSR is the spiked sample result, SR is the unspiked sample result, SA is the spike
concentration added, and u_c(SSR − SR − SA) is the combined standard uncertainty of the
numerator.
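
A minimal Python sketch using the matrix spike from the data report (SSR = 36.1 ± 2.0
pCi/L; SR = 4.97 ± 0.33 pCi/L for sample 1885P002; SA = 24.0 pCi/L). Treating u(SA) as
negligible is an assumption made here for illustration, not a value from the report:

    import math

    # Matrix spike Z statistic: Z = (SSR - SR - SA) / u_c, with u_c the combined
    # standard uncertainty of the numerator (u(SA) assumed negligible).
    ssr, u_ssr = 36.1, 2.0    # spiked sample result (pCi/L)
    sr, u_sr = 4.97, 0.33     # unspiked sample result (pCi/L)
    sa = 24.0                 # spike added (pCi/L)

    u_c = math.sqrt(u_ssr**2 + u_sr**2)
    z = (ssr - sr - sa) / u_c
    print(f"Z = {z:.2f}")     # |Z| above the project's limit would be flagged S+/S-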

-------
III. For the Duplicate Result

Calculate the agreement based on the average of the two results as compared with the AL:

    X1 = 6.66 pCi/L (sample 1885P008)
    X2 = 11.66 pCi/L (duplicate 1885PDP1-P008)
    AL = _______

    Xavg = (X1 + X2)/2 = (6.66 + 11.66)/2 = 9.16 pCi/L

If Xavg > AL, then use

    Control Limit = 4.24 × φ_MR × 100 = 4.24 × ( ___ ) × 100 = ___

and compare the relative percent difference to the CL:

    RPD = 100 × |X1 − X2| / Xavg

If Xavg < AL, then use

    Control Limit = 4.24 × u_MR = 4.24 × ( ___ ) = ___

and compare the absolute difference to the CL:

    Absolute difference = |X1 − X2|
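
A minimal Python sketch of this duplicate check; the AL, u_MR, and φ_MR values are
placeholders, since the figures written on the original worksheet are not legible:

    # Sketch: duplicate agreement check per the worksheet logic above.
    # AL, u_mr, and phi_mr are placeholders, not the project's actual values.
    x1, x2 = 6.66, 11.66     # sample and duplicate results (pCi/L)
    al = 15.0                # placeholder action level (pCi/L)
    u_mr = 1.2               # placeholder required method uncertainty (pCi/L)
    phi_mr = 0.08            # placeholder required relative method uncertainty

    x_avg = (x1 + x2) / 2
    if x_avg > al:
        control_limit = 4.24 * phi_mr * 100        # limit on RPD, in percent
        statistic = 100 * abs(x1 - x2) / x_avg     # relative percent difference
    else:
        control_limit = 4.24 * u_mr                # limit on absolute difference
        statistic = abs(x1 - x2)

    print(f"statistic = {statistic:.2f}, control limit = {control_limit:.2f}, "
          f"{'exceeds CL' if statistic > control_limit else 'within CL'}")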
IV. For the Laboratory Blank Sample:

The control limit for the blank distribution is given by:

    Control Limit = 3 × u_MR = 3 × ( ___ ) = ___

The value for the blank is compared to this limit.
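
A minimal Python sketch; u_MR is again a placeholder for the value on the worksheet:

    # Sketch: compare the reported blank result to the 3 * u_MR control limit.
    u_mr = 1.2               # placeholder required method uncertainty (pCi/L)
    blank_result = 4.01      # blank 1885PB1 from the data report (pCi/L)

    control_limit = 3 * u_mr
    flag = "B+" if blank_result > control_limit else ""
    print(f"blank = {blank_result}, control limit = {control_limit:.2f} {flag}")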
-------
Data Qualifiers
(MARLAP Chapter 8, Section 8.3.3)
Qualifiers Applied During Verification
E Indicates that an exception or noncompliance has occurred. (This qualifier may be
removed during the validation if evidence shows that this exception does not affect the
sample results.)
Qualifiers Applied to Samples During Validation Based on Sample Results
U Analytical result is less than the critical value; a nondetect.
Q A reported measurement uncertainty that exceeds the required method
uncertainty or required relative method uncertainty (u_MR or φ_MR).
J A result that is unusually uncertain or estimated.
R A result that is rejected due to severe data problems.
Qualifiers Applied to Samples During Validation Based on QC Sample Results
S(+/-) An LCS, MS, or MSD result that is above (+) or below (-) the upper or lower
control limit.
P A sample result whose duplicate (replicate) agreement exceeds a control limit.
B(+/-) A blank result that is outside the upper (+) or lower (-) control limit.
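
As an illustration of how two of these codes follow mechanically from the checks in the
preceding worksheet, a minimal Python sketch with a placeholder u_MR:

    # Sketch: assign two of the validation qualifiers above. u_MR is a placeholder;
    # L_C follows the project's rule, L_C = 1.645 * (1 sigma).
    def qualify(result, sigma, u_mr=1.2):
        qualifiers = []
        if result < 1.645 * sigma:    # U: below the critical value (nondetect)
            qualifiers.append("U")
        if sigma > u_mr:              # Q: uncertainty exceeds the required method uncertainty
            qualifiers.append("Q")
        return " ".join(qualifiers)

    print(qualify(-0.002, 0.067))  # sample 090105W1 -> "U"
    print(qualify(22.7, 2.6))      # sample 090105W6 -> "Q"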
-------