Disclaimer - For assistance accessing this document or additional
information, please contact radiation.questions@epa.gov.
NUREG-1575, Revision 2
EPA-402-P-20-001
DOE/AU-0002
MULTI-AGENCY
RADIATION SURVEY AND
SITE INVESTIGATION
MANUAL (MARSSIM)
DRAFT FOR PUBLIC
COMMENT
Revision 2	May 2020

ABSTRACT
The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) provides
information on planning, conducting, evaluating, and documenting building surface and surface
soil radiological surveys for demonstrating compliance with requirements, often as part of a
dose- or risk-based regulation or standard.1 MARSSIM is a multi-agency consensus document
that was developed collaboratively by four Federal agencies having authority and control over
radioactive materials: Department of Defense (DOD), Department of Energy (DOE),
Environmental Protection Agency (EPA), and Nuclear Regulatory Commission (NRC).
MARSSIM's objective is to describe a consistent approach for planning, performing, and
assessing building surface and surface soil radiological surveys to meet established dose or
risk-based release criteria, while concurrently encouraging an effective use of resources.
1 MARSSIM uses the word "should" to indicate a recommendation, not a requirement. Not every recommendation in this
manual is intended to be applied literally at every site; MARSSIM's survey planning documentation will
address how to apply the process on a site-specific basis.
DISCLAIMER
This manual was prepared by four agencies of the United States Government. Neither the
United States Government nor any agency or branch thereof, nor any of their employees, makes
any warranty, expressed or implied, or assumes any legal liability or responsibility for any third
party's use, or the results of such use, of any information, apparatus, product, or process
disclosed in this manual, or represents that its use by such third party would not infringe on
privately owned rights.
References within this manual to any specific commercial product, process, or service by trade
name, trademark, or manufacturer do not constitute an endorsement or recommendation by
the United States Government.
ACKNOWLEDGMENTS
The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) came about as a
result of individuals—at the management level—within the Environmental Protection Agency
(EPA), Nuclear Regulatory Commission (NRC), Department of Energy (DOE), and Department
of Defense (DOD) who recognized the necessity for a standardized guidance document for
investigating radioactively contaminated sites. The creation of the MARSSIM was facilitated by
the cooperation of subject matter specialists from these agencies with management's support
and a willingness to work smoothly together toward reaching the common goal of creating a
workable and user-friendly guidance manual. Special appreciation is extended to Robert A.
Meek of the NRC and Anthony Wolbarst of EPA for developing the concept of a multi-agency
work group and bringing together representatives from the participating agencies.
The MARSSIM would not have been possible without the technical work group members who
contributed their time, talent, and efforts to develop this consensus guidance document:
CDR Colleen F. Petullo, U.S. Public Health Service, EPA, Chair
EPA: Mark Doehnert
Anthony Wolbarst, Ph.D.
H. Benjamin Hull
Sam Keith, CHP*
Jon Richards
NRC: Robert A. Meek, Ph.D.
Anthony Huffert
George E. Powers, Ph.D.
David Fauver, CHP
Cheryl Trottier
DOE: Hal Peterson, CHP
Kenneth Duvall
Andrew Wallo III
DOD: David Alberth (Army)
LCDR Lino Fragoso, Ph.D. (Navy)
Lt. Col. Donald Jordan (Air Force)
Capt. Kevin Martilla (Air Force)
Capt. Julie Coleman (Air Force)
Special mention is extended to the Federal agency contractors for their assistance in developing
the MARSSIM:
EPA: Scott Hay (S. Cohen & Associates, Inc.)
Todd Peterson, Ph.D. (S. Cohen & Associates, Inc.)
Harry Chmelynski, Ph.D. (S. Cohen & Associates, Inc.)
Ralph Kenning, CHP (S. Cohen & Associates, Inc.)
NRC: Eric Abelquist, CHP (Oak Ridge Institute of Science and Education)
James Berger (Auxier & Associates)
Carl Gogolak, Ph.D. (DOE/EML, under contract with NRC)
* Formerly with EPA National Air and Radiation Environmental Laboratory (NAREL). Currently with the
Agency for Toxic Substances and Disease Registry (ATSDR).
DOE: Robert Coleman, CHP (Oak Ridge National Laboratory)
John Kirk Williams (Oak Ridge National Laboratory)
Romance Carrier (Oak Ridge National Laboratory)
A special thank you is extended to Emilio Braganza (EPA), Gregory Budd (EPA), Mary Clark,
Ph.D. (EPA), Brian Littleton (EPA), John Karhnak (EPA), Sarah Seeley (EPA), Rett Sutton
(EPA/SEE), Juanita Beeson (NRC), Stephen A. McGuire, Ph.D. (NRC), Walter Oliu (NRC),
LT James Coleman (Navy), CDR David E. Farrand (Navy), CAPT David George (Navy),
CDR Garry Higgins (Navy), CAPT James Malinoski (Navy), Harlan Keaton (State of Florida),
J. Michael Beck, J.D. (EMS), Tom McLaughlin, Ph.D. (SC&A), Kevin Miller, Ph.D. (DOE/EML),
and the members of the EPA's Science Advisory Board (SAB) for their assistance in developing
the manual.
The membership of the SAB Radiation Advisory Committee's Review Subcommittee that
conducted an extensive peer review of the MARSSIM includes:
Chair
James E. Watson, Jr., Ph.D., University of North Carolina at Chapel Hill
Members
William Bair, Ph.D., (Retired), Battelle Pacific Northwest Laboratory
Stephen L. Brown, Ph.D., R2C2 (Risks of Radiation and Chemical Compounds)
June Fabryka-Martin, Ph.D., Los Alamos National Laboratory
Thomas F. Gesell, Ph.D., Idaho State University
F. Owen Hoffman, Ph.D., SENES Oak Ridge, Inc.
Janet Johnson, Ph.D., Shepherd Miller, Inc.
Bernd Kahn, Ph.D., Georgia Institute of Technology
Ellen Mangione, M.D., Colorado Department of Health
Paul J. Merges, Ph.D., New York State Department of Environmental Conservation
SAB Consultants
Michael E. Ginevan, Ph.D., M.E. Ginevan & Associates
David G. Hoel, Ph.D., University of South Carolina
David E. McCurdy, Ph.D., Yankee Atomic Electric Company
Frank L. Parker, Ph.D., Vanderbilt University [Liaison from Environmental
Management Advisory Board, U.S. Department of Energy]
Science Advisory Board Staff
K. Jack Kooyoomjian, Ph.D., Designated Federal Official, EPA
Mrs. Diana L. Pozun, Staff Secretary, EPA
The work group meetings were open to the public, and the following people attended meetings
as technical experts at the request of the work group or as observers:
K. Allison	A.T. Kearney
L. Abramson	NRC
R. Abu-Eid	NRC
W. Beck	Oak Ridge Institute of Science and Education
A. Boerner	Oak Ridge Institute of Science and Education
Lt. E. Bonano	Air Force
M. Boyd	EPA
J. Buckley	NRC
B. Burns	Army
W. Cottrell	Oak Ridge National Laboratory
D. Culberson	Nuclear Fuel Services, Inc.
M.C. Daily	NRC
M. Eagle	EPA
M. Frank	Booz, Allen & Hamilton
F. Galpin	RAE Corp.
R. Gilbert	Pacific Northwest Laboratory
J.E. Glenn	NRC
J. Hacala	Booz, Allen & Hamilton
L. Hendricks	Nuclear Environmental Services
K. Hogan	EPA
R. Hutchinson	National Institute of Standards and Technology
G. Jablonowski	EPA
N. Lailas	EPA
H. Larson	NRC
G. Lindsey	International Atomic Energy Agency
J. Lux	Kerr-McGee Corporation
M. Mahoney	Army
J. Malaro	NRC
H. Morton	Morton Associates
H. Mukhoty	EPA
A.J. Nardi	Westinghouse
D. Ottlieg	Westinghouse Hanford Company
V. Patania	Oak Ridge National Laboratory
C.L. Pittiglio	NRC
C. Raddatz	NRC
L. Ralston	SC&A, Inc.
P. Reed	NRC
R. Rodriguez	Oak Ridge National Laboratory
N. Rohnig
R. Schroeder	Army
C. Simmons	Kilpatrick & Cody
E. Stamataky	EPA
R. Story	Foster Wheeler
E. Temple	EPA
D. Thomas	Air Force
S. Walker	EPA
P. White	EPA
R. Wilhelm	EPA
ACKNOWLEDGEMENTS FOR REVISION 2
The four federal agencies that originally published MARSSIM came to an agreement in 2010
that MARSSIM needed a comprehensive revision due to changes in the science of radiation
measurement and lessons learned over the 14-year time-span that had passed since its original
issuance. The Workgroup began by soliciting comments from the Government agencies and
members of the public on revisions to MARSSIM. In addition, the Workgroup held a consultation
with the EPA Science Advisory Board's Radiation Advisory Committee to solicit additional
comments on the document. Subject matter experts from all four Federal agencies contributed
their time and expertise to updating portions of the original document based on the comments
and suggestions received, in some cases writing new sections and appendices in order to
provide the most comprehensive update possible.
Revision 2 to MARSSIM would not have been possible without the technical workgroup
members and their colleagues who contributed their time, talent, and efforts to develop this
consensus guidance document:
Kathryn Snead, EPA, Chair
EPA: Nidal Azzam
Eugene Jablonowski
Larainne Koehler
Anthony Nesky
Lyndsey Nguyen
David Pawel, Ph.D.
Colleen Petullo
Oleg Povetko, Ph.D.
DOE: Amanda Anderson, CHP
Carlos Corredor
Derek Favret, CHP
Wayne Glines, CHP
Tim Vitkus, CHP
Alexander Williams, Ph.D.
NRC: Boby Abu-Eid, Ph.D.
Cynthia Barr
Luis Benevides
John Clements
Mark Fuhrmann, Ph.D.
Anthony Huffert, Ph.D.
Tanya Oxenberg, Ph.D.
Duane Schmidt, CHP
DoD: Erik Abkemeier, CHP (Navy)
David P. Alberth (Army)
Craig Bias, Ph.D., CHP (Air Force)
Ramachandra Bhat, Ph.D., CHP (Air Force)
David Bytwerk, Ph.D., CHP (Army)
Col. (Ret.) Robert Cherry, Ph.D., CHP
(Army)
Steven Doremus, Ph.D. (Navy)
Gerald Falo, Ph.D., CHP (Army)
Lt. Col. Alan C. Hale, CHP (Air Force)
G. Neil Keeney, CHP (Air Force)
Capt. Nathan Krzyaniak (Air Force)
Bret Rodgers (Air Force)
1st Lt. Trey Slauter (Air Force)
Special mention is extended to the Federal agency contractors for their assistance in developing
MARSSIM, Revision 2:
Harry Chmelynski, Ph.D. (S. Cohen & Associates, Inc.)
Melody Geer (Trinity Engineering Associates, Inc.)
Jana Dawson (Trinity Engineering Associates, Inc.)
Carl Gogolak, Ph.D. (S. Cohen & Associates, Inc.)
Scott Hay (S. Cohen & Associates, Inc.)
Aris Papadopoulos (S. Cohen & Associates, Inc.)
Karene Riley (S. Cohen & Associates, Inc.)
David Stuenkel, Ph.D., CHP (Trinity Engineering Associates, Inc.)
Robert Thielke (Trinity Engineering Associates, Inc.)
A special thank you is extended to the following for their contributions to the document:
Lynn Albin (State of Washington), Mary Clark (EPA), E.S. Chandrasekaran (TVA), Phil Egidi
(EPA), Paul Giardina (EPA), Sandra Gogol (DHS), Jenny Goodman (State of New Jersey),
Robert Meek (SATS, LLC), Richard Vojtech (DHS), and Andrew Wallo (DOE).
[Add acknowledgement for SAB Review when completed.]
DEDICATION
The MARSSIM Workgroup notes, with much sadness, the sudden loss of Dr. George Edward
Powers of the U.S. Nuclear Regulatory Commission staff. Dr. Powers passed away in late 2011,
as the effort to revise the MARSSIM manual was getting underway. As a member of the
MARSSIM Workgroup, he made significant contributions to MARSSIM: to the original
document, to its revision in 2001, and to this latest revision. Dr. Powers also contributed in an
important way to the development of the MARSAME and MARLAP manuals; his participation in
the development of all three documents (MARSSIM, MARSAME, and MARLAP) is well
recognized. The MARSSIM Workgroup members also are aware of the many contributions he
made as an NRC employee, including his involvement in several USNRC NUREG publications
over the years. Dr. Powers was truly a visionary in encouraging development of multi-agency
documents in this collaborative and collegial manner for use by many people. Dr. Powers' sense
of humor enhanced his ability to work so well in a positive manner with many of his colleagues
in multi-agency settings. He was a technically accomplished, kind, and courteous man who
contributed much to the Workgroup's efforts.
CONTENTS
Abstract	i
Disclaimer	ii
Acknowledgements	iii
Acknowledgements for Revision 2	vi
Dedication	vi
Contents	vii
Tables	xvi
Figures	xx
Abbreviations	xxiii
Symbols, Nomenclature, and Notations	xxviii
Conversion Factors	xxxv
1	Introduction	1-1
1.1	Purpose and Scope of MARSSIM	1-1
1.2	Structure of the Manual	1-6
1.3	Use of the Manual	1-7
1.4	Missions of the Federal Agencies Producing MARSSIM	1-8
1.4.1	Environmental Protection Agency	1-8
1.4.2	Nuclear Regulatory Commission	1-8
1.4.3	Department of Energy	1-8
1.4.4	Department of Defense	1-9
2	Overview of the Radiation Survey and Site Investigation Process	2-1
2.1	Introduction	2-1
2.2	Understanding Key MARSSIM Terminology	2-2
2.2.1	Key MARSSIM Terminology	2-2
2.2.2	Classification Assessment	2-6
2.3	Making Decisions Based on Survey Results	2-7
2.3.1	Planning Effective Surveys—Planning Phase	2-9
2.3.2	Evaluating Sources of Variability in Survey Results—Implementation Phase... 2-14
2.3.3	Evaluating Survey Results—Assessment Phase	2-14
2.3.4	Uncertainty in Survey Results	2-15
2.3.5	Reporting Survey Results	2-17
2.4	Radiation Survey and Site Investigation Process	2-18
2.4.1	Site Identification	2-19
2.4.2	Historical Site Assessment	2-19
2.4.3	Scoping Survey	2-25
2.4.4	Characterization Survey	2-26
2.4.5	Remedial Action Support Survey	2-27
2.4.6	Final Status Survey	2-27
2.4.7 Regulatory Agency Confirmation and Verification Survey	2-28
2.5	Demonstrating Compliance with Dose- or Risk-Based Criteria	2-28
2.5.1	The Decision to Use Statistical Tests	2-29
2.5.2	Categorization and Classification	2-32
2.5.3	Design Considerations for Small Areas of Elevated Activity	2-33
2.5.4	Design Considerations for Relatively Uniform Distributions of Residual Radioactive
Material	2-34
2.5.5	Developing an Integrated Survey Design	2-35
2.6	Flexibility in Applying MARSSIM Approach	2-37
2.6.1	Alternate Statistical Methods	2-37
2.6.2	Integrating MARSSIM with Other Survey Designs	2-38
3	Historical Site Assessment	3-1
3.1	Introduction	3-1
3.2	Data Quality Objectives	3-3
3.3	Site Identification	3-5
3.4	Preliminary Investigation	3-5
3.4.1	Existing Radiation Data	3-8
3.4.2	Contacts and Interviews	3-10
3.5	Site Reconnaissance	3-10
3.6	Evaluation of Historical Site Assessment Information	3-11
3.6.1	Identify Potential Sources of Residual Radioactive Material	3-12
3.6.2	Identify Potential Areas with Residual Radioactive Material	3-13
3.6.3	Identify Potential Media with Residual Radioactive Material	3-13
3.6.4	Develop a Conceptual Model of the Site	3-20
3.6.5	Professional Judgment	3-21
3.7	Determining the Next Step in the Site Investigation Process	3-23
3.8	Historical Site Assessment Report	3-24
3.9	Review of the HSA	3-24
4	Considerations for Planning Surveys	4-1
4.1	Introduction	4-1
4.1.1	Purpose	4-1
4.1.2	Scope	4-1
4.1.3	Overview of Survey Planning	4-1
4.2	Data Quality Objectives Process	4-2
4.2.1	Planning Phase	4-3
4.2.2	Quality System	4-4
4.3	Survey Types	4-5
4.3.1	Scoping	4-5
4.3.2	Characterization	4-6
4.3.3	Remedial Action Support	4-6
4.3.4	Final Status	4-6
4.3.5	Simplified Procedures	4-8
4.3.6	A Note on Subsurface Assessments	4-8
4.3.7 Uranium Mill Tailings Radiation Control Act of 1978 (UMTRCA) Sites	4-8
4.4	The Unity Rule	4-9
4.5	Radionuclides	4-10
4.5.1	Radionuclides of Concern	4-10
4.5.2	Release Criteria and Derived Concentration Guideline Levels	4-10
4.5.3	Applying DCGLs	4-12
4.5.4	Investigation Levels	4-19
4.5.5	Conclusions	4-19
4.6	Area and Site Considerations	4-20
4.6.1	Area Classification	4-20
4.6.2	Identification of Survey Units	4-22
4.6.3	Selection of Background Reference Areas	4-23
4.7	Statistical Considerations	4-24
4.7.1	Basic Terms	4-25
4.7.2	Recommended Statistical Tests	4-25
4.7.3	Considerations on the Choice of a Statistical Test	4-26
4.7.4	Deviations from MARSSIM Statistical Test Recommendations	4-26
4.7.5	An Important Statistical Note	4-26
4.8	Measurements	4-26
4.8.1	Quality Control and Quality Assurance	4-26
4.8.2	Measurement Quality Objectives	4-27
4.8.3	Selecting a Measurement Technique	4-28
4.8.4	Selection of Instruments for Field Measurements	4-30
4.8.5	Selection of Sample Collection Methods	4-33
4.8.6	Selection of Measurement Techniques	4-36
4.8.7	Data Conversion	4-37
4.8.8	Additional Planning Considerations Related to Measurements	4-37
4.9	Site Preparation	4-39
4.9.1	Consent for Survey	4-39
4.9.2	Property Boundaries	4-39
4.9.3	Physical Characteristics of Site	4-40
4.9.4	Clearing to Provide Access	4-42
4.9.5	Reference Coordinate System	4-43
4.10	Health and Safety	4-49
4.11	Documentation	4-50
4.12	Examples	4-26
4.12.1	Scenario A or Scenario B?	4-26
4.12.2	DCGL Calculations	4-27
4.12.3	Required number of Samples for a Single Radionuclide	4-28
4.12.4	Required Number of Samples for the Multiple Radionuclides	4-30
4.12.5	Instrument Efficiencies	4-33
4.12.6	Data Conversion	4-36
4.12.7	Example of a Deviation from a Recommended Statistical Test	4-37
4.12.8	Release Criteria for Discrete Radioactive Particles	4-37
4.12.9	Uranium Mill Tailings Radiation Control Act of 1978 (UMTRCA) Sites	4-30
5 Survey Planning and Design	5-1
5.1	Introduction	5-1
5.2	Preliminary Surveys	5-5
5.2.1	Scoping Surveys	5-5
5.2.2	Characterization Surveys	5-9
5.2.3	Remedial Action Support Surveys	5-19
5.3	Final Status Surveys	5-21
5.3.1	Selecting the Appropriate Scenario	5-26
5.3.2	Application of Release Criteria	5-27
5.3.3	Determining Numbers of Data Points for Statistical Tests for Residual Radioactive
Material Present in Background	5-27
5.3.4	Determining Numbers of Data Points for Statistical Tests for Residual Radioactive
Material Not Present in Background	5-33
5.3.5	Determining the Number of Discrete Data Points for Small Areas of Elevated
Activity	5-36
5.3.6	Determining the Scan Area	5-42
5.3.7	Determining Survey Locations	5-44
5.3.8	Determining Investigation Levels	5-48
5.3.9	Developing an Integrated Survey Strategy	5-50
5.3.10	Evaluating Survey Results	5-55
5.3.11	Documentation	5-55
6 Field Measurement Methods and Instrumentation	6-1
6.1	Introduction	6-1
6.2	Data Quality Objectives	6-2
6.2.1	Identifying Data Needs for Field Measurement Methods	6-2
6.2.2	Measurement Performance Indicators	6-3
6.2.3	Instrument Performance Indicators	6-4
6.3	Detection Capability	6-6
6.3.1	Detection Capability for Direct Measurements	6-6
6.3.2	Detection Capability for Scans	6-11
6.4	Measurement Uncertainty	6-26
6.4.1	Systematic and Random Uncertainties	6-27
6.4.2	Statistical Counting Uncertainty	6-28
6.4.3	Uncertainty Propagation	6-29
6.4.4	Reporting Confidence Intervals	6-30
6.5	Select a Service Provider to Perform Field Data Collection Activities	6-31
6.6	Select a Measurement Method	6-32
6.6.1	Select a Measurement Technique	6-33
6.6.2	Select Instrumentation	6-35
6.6.3	Display and Recording Equipment	6-37
6.6.4	Instrument Calibration	6-38
6.6.5	Select Instrumentation	6-44
6.6.6	Select a Measurement Method	6-45
6.7	Data Conversion	6-46
6.7.1	Surface Activity	6-46
6.7.2	Soil Radionuclide Concentration and Exposure Rates	6-52
6.8	Radon Measurements	6-52
6.8.1	Direct Radon Measurements	6-55
6.8.2	Radon Progeny Measurements	6-59
6.8.3	Radon Flux Measurements	6-60
6.9	Special Equipment	6-60
6.9.1	Local Microwave and Sonar Positioning Systems	6-61
6.9.2	Laser Positioning Systems	6-61
6.9.3	Mobile Systems with Integrated Positioning Systems	6-61
6.9.4	Radar, Magnetometer, and Electromagnetic Sensors	6-62
6.9.5	Aerial Radiological Surveys	6-64
7	Sampling and Preparation for Laboratory Measurements	7-1
7.1	Introduction	7-1
7.2	Data Quality Objectives	7-2
7.2.1	Identifying Data Needs	7-2
7.2.2	Data Quality Indicators	7-3
7.3	Communications with the Laboratory	7-8
7.3.1	Communications During Survey Planning	7-8
7.3.2	Communications Before and During Sample Collection	7-9
7.3.3	Communications During Sample Analysis	7-9
7.3.4	Communications Following Sample Analysis	7-10
7.4	Selecting a Radioanalytical Laboratory	7-10
7.5	Sampling	7-11
7.5.1	Surface Soil	7-12
7.5.2	Building Surfaces	7-15
7.5.3	Other Media	7-16
7.6	Field Sample Preparation and Preservation	7-16
7.6.1	Surface Soil	7-16
7.6.2	Building Surfaces	7-17
7.6.3	Other Media	7-17
7.7	Analytical Procedures	7-17
7.7.1	Photon Emitting Radionuclides	7-19
7.7.2	Beta Emitting Radionuclides	7-21
7.7.3	Alpha Emitting Radionuclides	7-21
7.8	Sample Tracking	7-22
7.8.1	Field Tracking Considerations	7-22
7.8.2	Transfer of Custody	7-23
7.8.3	Radiochemical Holding Times	7-24
7.8.4	Laboratory Tracking	7-24
7.9	Packaging and Transporting Samples	7-25
7.9.1	U.S. Nuclear Regulatory Commission Regulations	7-26
7.9.2	U.S. Department of Transportation Regulations	7-26
7.9.3	U.S. Postal Service Regulations	7-27
7.9.4	International Atomic Energy Agency Regulations	7-27
8	Interpretation of Survey Results	8-1
8.1	Introduction	8-1
8.2	Data Quality Assessment	8-1
8.2.1	Review the Data Quality Objectives, Measurement Quality Objectives, and Survey
Design	8-2
8.2.2	Conduct a Preliminary Data Review	8-3
8.2.3	Select the Statistical Test	8-11
8.2.4	Verify the Assumptions of the Statistical Tests	8-13
8.2.5	Draw Conclusions from the Data	8-15
8.3	Radionuclide Not Present in Background	8-18
8.3.1	Sign Test	8-19
8.3.2	Applying the Sign Test	8-22
8.4	Radionuclide Present in Background	8-27
8.4.1	Wilcoxon Rank Sum Test and Quantile Test	8-27
8.4.2	Applying the Wilcoxon Rank Sum Test	8-29
8.4.3	Applying the Quantile Test - Only used in Scenario B	8-32
8.4.4	Multiple Radionuclides	8-37
8.5	Scan-Only Surveys	8-44
8.6	Evaluate the Results: The Decision	8-45
8.6.1	Elevated Measurement Comparison	8-46
8.6.2	Interpretation of Statistical Test Results	8-47
8.6.3	If the Survey Unit Fails	8-48
8.6.4	Removable Radioactive Material	8-50
8.7	Documentation	8-50
References	Ref-1
A Example of MARSSIM Applied to a Final Status Survey	A-1
A.1 Introduction	A-1
A.2 Survey Preparations	A-1
A.3 Survey Design	A-9
A.4 Conducting Surveys	A-16
A.5 Evaluating Survey Results	A-16
B Simplified Procedure for Certain Users of Sealed Sources, Short Half-life Materials, and
Small Quantities	B-1
C Regulations and Requirements Associated with Radiation Surveys and
Site Investigations	C-1
C.1 EPA Statutory Authorities	C-1
C.2 DOE Regulations and Requirements	C-3
C.3 NRC Regulations and Requirements	C-9
C.4 DOD Regulations and Requirements	C-12
C.5	State and Local Regulations and Requirements	C-16
D MARSSIM Project-Level Quality System Components	D-1
D.1	The Planning Phase	D-2
D.2 The Implementation Phase	D-38
D.3 The Assessment Phase	D-42
D.4	Data Verification and Validation	D-47
E Ranked Set Sampling	E-1
E.1	Introduction	E-1
E.2 Integration of RSS into MARSSIM	E-6
E.3 Summary Example of an FSS Survey Using RSS	E-18
F The Relationship Between the Radiation Survey and Site Investigation Process, the
CERCLA Remedial or Removal Process, and the RCRA Corrective Action Process	F-1
G Historical Site Assessment Information Sources	G-1
H Description of Field Survey and Laboratory Analysis Equipment	H-1
H.1 Introduction	H-1
H.2 Field Survey Equipment	H-3
H.3	Laboratory Instruments	H-35
I Statistical Tables and Procedures	I-1
I.1	Normal Distribution	I-1
I.2	Sample Sizes for Statistical Tests	I-2
I.3	Critical Values for the Sign Test	I-4
I.4	Critical Values for the WRS Test	I-6
I.5	Probability of Detecting an Elevated Area	I-11
I.6	Test Statistics for the Quantile Test	I-16
I.7	Random Numbers	I-22
J Derivation of Alpha Scanning Equations Presented in Section 6.6.2.2	J-1
K Comparison Tables Between Quality Assurance Documents	K-1
L Stem and Leaf Displays and Quantile Plots	L-1
L.1 Stem and Leaf Display	L-1
L.2 Quantile Plots	L-2
M Calculation of Power Curves	M-1
M.1 Power Calculations for the Statistical Tests	M-1
N Effect of Precision on Planning and Performing Surveys	N-1
N.1 Introduction	N-1
N.2 Summary	N-5
O Detailed Calculations for Statistical Tests and Illustrative Examples for the Determination of
DCGLs	O-1
O.1 Introduction	O-1
O.2 The WRS Test	O-1
O.3 The Sign Test	O-3
O.4 Calculating Area Factors and the DCGL for the EMC	O-4
O.5 Release Criteria for Discrete Radioactive Particles	O-8
O.6 Uranium Mill Tailings Radiation Control Act of 1978 (UMTRCA) Sites	O-8
Glossary	GL-1
TABLES
1.1: Scope of MARSSIM	1-4
2.1: The Data Life Cycle used to Support the Radiation Survey and Site Investigation
Process	2-25
2.2: Recommended Conditions for Demonstrating Compliance Based on Survey Unit
Classification for a Final Status Survey	2-36
2.3: Examples of Alternate Statistical Tests	2-39
3.1: Questions Useful for the Preliminary Investigation	3-9
4.1: Suggested Area for Surveys	4-22
4.2: Examples of Direct Measurement Instruments	4-34
4.3: Sample Results for Unity Rule Example	4-58
5.1: Null and Alternative Hypothesis for Scenarios A and B	5-26
5.2: Values of N/2 for Use with the Wilcoxon Rank Sum Test	5-32
5.3: Values of N for Use with the Sign Test	5-35
5.4: Example FSS Investigation Levels	5-49
5.5: Recommended Survey Coverage for Structures and Land Areas	5-51
6.1: Examples of Estimated Detection Capabilities for Alpha and Beta Survey Equipment (Static
one minute counts for 238U calculated using Equations 6.3, 6.4, and 6.5)	6-11
6.2: Values of d' for Selected True Positive and False Positive Proportion	6-14
6.3: NaI(Tl) Scintillation Detector Scan MDCs for Common Radionuclides and Radioactive
Materials	6-23
6.4: Probability of Detecting 300 dpm/100 cm2 of Alpha Activity While Scanning with Alpha
Detectors Using an Audible Output (calculated using Equation 6-16)	6-26
6.5: Common Uncertainty Propagation Equations	6-30
6.6: Areas Under Various Intervals About the Mean of a Normal Distribution	6-30
6.7: Potential Applications for Instrumentation and Measurement Technique Combinations. 6-44
6.8: Advantages and Disadvantages of Instrumentation and Measurement Technique
Combinations	6-45
6.9: Radiation Detectors with Applications to Radon Surveys	6-56
7.1: Soil Sampling Equipment	7-13
7.3: Typical Measurement Sensitivities for Laboratory Radiometric Procedures	7-20
8.1: Summary of Statistical Tests and Evaluation Methods	8-13
8.2: Methods for Checking the Assumptions of Statistical Tests	8-15
8.3: Summary of Statistical Tests for Radionuclide not in Background and Radionuclide-Specific
Measurement	8-15
8.4: Summary of Statistical Tests for Radionuclide in Background or Radionuclide Non-Specific
(Gross) Measurements	8-16
8.5: Summary of Results for Scan-Only Surveys	8-17
8.6: Final Status Survey Parameters for Example Survey Units for Scenario A	8-18
8.7: Final Status Survey Parameters for Example Survey Units for Scenario B	8-19
8.8: Example Sign Analysis: Class 2 Exterior Soil Survey Unit	8-24
8.9: Sign Test Example Data for Class 3 Exterior Survey Unit	8-26
8.10: WRS Test for Class 2 Interior Drywall Survey Unit in Example 7	8-31
8.11: WRS and Quantile Test Under Scenario B for Class 2 Interior Drywall Survey Unit in
Example	8-34
8.12: Calculation of oo2 for Example 9	8-36
8.13: Analysis of Variance for Example 9 Data	8-36
8.14: Example 10 WRS Test for Two Radionuclides	8-40
8.15: Example 11 WRS Test for Two Radionuclides	8-41
A.1: Class 1 Interior Concrete Survey Unit and Reference Area Data	A-17
A.2: Stem and Leaf Displays for Class 1 Interior Concrete Survey Units	A-17
A.3: WRS Test for Class 1 Interior Concrete Survey Unit	A-20
C.1 DOE Authorities, Orders, and Regulations Related to Radiation Protection	C-4
C.2 Agreement States	C-17
C.3	States That Regulate Diffuse NORM	C-17
D.1:	Representation of Decision Errors for a Final Status Survey (FSS) Using Scenario A for the
True Condition of the Survey Unit	D-19
D.2: Representation of Decision Errors for a Final Status Survey Using Scenario B for the True
Condition of the Survey Unit	D-19
D.3: Suggested Content or Consideration, Impact if Not Met, and Considerations for Data
Descriptors	D-50
D.4: Use of Quality Control Data	D-56
D.5: Minimum Considerations for Precision, Impact if Not Met, and Corrective Actions	D-57
D.6: Minimum Considerations for Bias, Impact if Not Met, and Corrective Actions	D-59
D.7: Minimum Considerations for Representativeness, Impact if Not Met, and Corrective Actions
	D-62
D.8: Minimum Considerations for Comparability, Impact if Not Met, and Corrective Actions.. D-64
D.9:	Minimum Considerations for Completeness, Impact if Not Met, and Corrective Actions. D-66
E.1:	Required Number of Laboratory Samples for RSS Sign Test for 5 Sets Per Cycle	E-8
E.2: Required Number of Laboratory Samples for RSS Sign Test for 4 Sets Per Cycle	E-9
E.3: Required Number of Laboratory Samples for RSS Sign Test for 3 Sets Per Cycle	E-10
E.4: Critical Values for the RSS Sign Test	E-16
E.5:	Critical Values for the SRS Sign Test	E-17
F.1:	Program Comparison	F-5
F.2: Data Elements for Site Assessments	F-11
F.3:	Comparison of Sampling Emphasis between Remedial Site Assessment and Removal Site
Assessment	F-11
G.1:	Site Assessment Information Sources (Organized by Information Source)	G-2
G.2:	Site Assessment Information Sources (Organized by Information Needed)	G-11
H.1:	AMS Detection Limits	H-51
H.2: Radiation Detectors with Applications to Alpha Surveys	H-59
H.3: Radiation Detectors with Applications to Beta Surveys	H-61
H.4: Radiation Detectors with Applications to Gamma and X-Ray Surveys	H-63
H.5: Radiation Detectors with Applications to Large Area Mobile Detector Arrays	H-67
H.6: Radiation Detectors with Applications to Radon Surveys	H-68
H.7: Systems that Measure Atomic Mass or Emissions	H-70
H.8:	Special Techniques and Equipment	H-72
I.1:	Cumulative Normal Distribution Function Φ(z)	I-1
I.2: Sample Sizes for Sign Test	I-2
I.3: Sample Sizes for Wilcoxon Rank Sum Test	I-3
I.4: Critical Values for the Sign Test Statistic S+	I-4
I.5: Critical Values for the WRS Test	I-6
I.6: Risk that an Elevated Area with Length L/G and Shape S will not be Detected and the Area
(%) of the Elevated Area Relative to a Triangular Sample Grid Area of 0.866 G²	I-11
I.7: Values of r and k for the Quantile Test When α Is Approximately 0.01	I-16
I.8: Values of r and k for the Quantile Test When α Is Approximately 0.025	I-17
I.9: Values of r and k for the Quantile Test When α Is Approximately 0.05	I-19
I.10: Values of r and k for the Quantile Test When α Is Approximately 0.10	I-20
I.11: 1,000 Random Numbers Uniformly Distributed between Zero and One	I-22
K.1 Comparison of EPA QA/R-5 and EPA QAMS-005/80	K-2
K.2: Comparison of EPA QA/R-5 and ASME NQA-1	K-3
K.3: Comparison of EPA QA/R-5 and DOE Order 414.1 D	K-4
K.4: Comparison of EPA QA/R-5 and ISO 9000	K-5
K.5: Comparison of EPA QA/R-5 and UFP-QAPP	K-6
L.1: Data for Quantile Plot	L-3
L.2: Ranked Reference Area Concentrations	L-5
L.3: Interpolated Ranks for Survey Unit Concentrations	L-6
M.1: Values of Pr and p2 for Computing the Mean and Variance of WMW	M-4
O.1: Values of Pr for Given Values of the Relative Shift, Δ/σ, when the Radionuclide
is Present in Background	O-1
O.2: Percentiles Represented by Selected Values of α and β	O-2
O.3: Values of Ps for Given Values of the Relative Shift, Δ/σ, when the Radionuclide
is Not Present in Background	O-3
O.4: Illustrative Examples of Outdoor Area Factors	O-6
O.5: Illustrative Examples of Indoor Area Factors	O-6
FIGURES
1.1: Compliance Demonstration	1-2
2.1: The Data Life Cycle	2-9
2.2: The Data Quality Objectives Process	2-11
2.3: The Assessment Phase of the Data Life Cycle (EPA 2006a)	2-16
2.4: The Radiation Survey and Site Investigation Process in Terms of Area Classification ... 2-20
2.5: The Historical Site Assessment Portion of the Radiation Survey and Site Investigation
Process	2-21
2.6: The Scoping Survey Portion of the Radiation Survey and Site Investigation Process	2-22
2.7: The Characterization and Remedial Action Support Survey Portion of the Radiation Survey
and Site Investigation Process	2-23
2.8: The Final Status Survey Portion of the Radiation Survey and Site Investigation Process2-24
3.1: Historical Site Assessment Process Flowchart	3-2
3.2: Example Showing How a Site Might be Categorized Prior to Cleanup Based on the
Historical Site Assessment	3-22
4.1: Sequence of Preliminary Activities Leading to Survey Design	4-7
4.2: Flow Diagram for Selection of Field Survey Instrumentation for Direct Measurements and
Analysis of Samples	4-38
4.3: Indoor Grid Layout with Alphanumeric Grid Block Designation: Walls and Floors are
Diagramed as Though They Lay Along the Same Horizontal Plane	4-46
4.4: Example of a Grid System for Survey of Site Grounds Using Compass Directions	4-47
4.5: Example of a Grid System for Survey of Site Grounds Using Distances Left or Right of the
Baseline	4-48
5.1: The Scoping Survey Portion of the Radiation Survey and Site Investigation Process	5-2
5.2: The Characterization and Remedial Action Support Survey Portions of the Radiation Survey
and Site Investigation Process	5-3
5.3: The Final Status Survey Portion of the Radiation Survey and Site Investigation Process. 5-4
5.4: Process for Designing an Integrated Survey Plan for a Final Status Survey	5-23
5.5: Process for Identifying Discrete Measurement Locations	5-24
5.6: Identifying Data Needs for Assessment of Potential Areas of Elevated Activity in Class 1
Survey Units	5-25
5.7: Gray Region for Scenario A	5-28
5.8: Gray Region for Scenario B	5-29
6.1: Graphically Represented Probabilities for Type I and Type II Errors in Detection Capability
for Instrumentation with a Background Response	6-8
6.2: The Physical Probe Area of a Detector: Gas Flow Proportional Detector with Physical Probe
Area of 126 cm2	6-50
8.1: Examples of Posting Plots	8-7
8.2: Sample GIS Visualization, Modified from Figures 3.4 and 7.8 in NUREG/CR-7021 (NRC
2012)	8-9
8.3: Example of a Frequency Plot (a) and Other Statistical Information Output from Visual
Sample Plan v. 7 (b)	8-10
8.4: ProUCL Worksheet for Example 12	8-42
8.5: ProUCL Select Variables Window for Hypothesis Testing for Example 12	8-42
8.6: ProUCL Hypothesis Testing Options Window for Example 12	8-43
8.7: ProUCL Output for Example 12	8-44
A.1: Plot Plan of the Specialty Source Manufacturing Company	A-3
A.2: Building Floor Plan	A-4
A.3: Examples of Scanning Patterns for Each Survey Unit Classification	A-7
A.4: Reference Coordinate System for the Class 1 Interior Concrete Survey Unit	A-8
A.5: Decision Performance Goal Diagram for the Class 1 Interior Concrete Survey Unit	A-10
A.6 Prospective Power Curve for the Class 1 Interior Concrete Survey Unit	A-13
A.7: Measurement Grid for the Class 1 Interior Concrete Survey Unit	A-14
A.8: Quantile-Quantile Plot for the Class 1 Interior Concrete Survey Unit	A-18
A.9: Retrospective and Prospective Power Curves for the Class 1 Interior Concrete Survey Unit
	A-21
D.1: The Data Quality Objectives Process	D-4
D.2: Repeated Application of the DQO Process throughout the Radiation Survey and Site
Investigation Process	D-5
D.3: Example of the Parameter of Interest for the Case wherein the Radionuclide does not
appear in Background	D-12
D.4: Example of the Parameter of Interest for the Case wherein the Radionuclide appears in
Background	D-14
D.5: Statement of the Null Hypothesis for the Final Status Survey Addressing the Issue of
Compliance	D-22
D.6: Statement of the Null Hypothesis for the Final Status Survey Addressing the Issue of
Indistinguishability from Background Using Scenario B	D-24
D.7: Geometric Probability of Sampling at Least One Point of an Area of Elevated Activity as a
Function of Sample Density with Either a Square or Triangular Sampling Pattern	D-30
D.8: Example of a Scenario A Power Chart Illustrating the Decision Rule for the Final Status
Survey	D-31
D.9: Example of a Scenario A Error Chart Illustrating the Decision Rule for the Final Status
Survey	D-32
D.10: The Assessment Phase	D-43
D.11:	Graphical Representation of Accuracy	D-60
E.1:	Power Curve for 10 Cycles of 3 Sets per Cycle for a Critical Value of 19 (α = 0.018) and
SRS Power Curve for the Same Critical Value (α = 0.005)	E-12
E.2:	Power Curve for 6 Cycles of 5 Sets per Cycle for a Critical Value of 18 (α = 0.032) and
SRS Power Curve for the Same Critical Value (α = 0.100)	E-12
F.1	Comparison of the Radiation Survey and Site Investigation Process with the CERCLA
Superfund Process and the RCRA Corrective Action Process	F-2
J.1: Probability (P) of getting one or more counts when passing over a 100 cm2 area containing
residual radioactive material at 500 dpm/100 cm2 alpha	J-4
J.2: Probability (P) of getting one or more counts when passing over a 100 cm2 area containing
residual radioactive material at 1,000 dpm/100 cm2 alpha	J-5
J.3: Probability (P) of getting one or more counts when passing over a 100 cm2 area containing
residual radioactive material at 5,000 dpm/100 cm2 alpha	J-6
J.4: Probability (P) of getting two or more counts when passing over a 100 cm2 area containing
residual radioactive material at 500 dpm/100 cm2 alpha	J-7
J.5: Probability (P) of getting two or more counts when passing over a 100 cm2 area containing
residual radioactive material at 1,000 dpm/100 cm2 alpha	J-8
J.6: Probability (P) of getting two or more counts when passing over a 100 cm2 area containing
residual radioactive material at 5,000 dpm/100 cm2 alpha	J-9
L.1: Example of a Stem and Leaf Display	L-2
L.2: Example of a Quantile Plot	L-4
L.3: Quantile Plot for Example Class 2 Exterior Survey Unit of Section 8.3.2	L-4
L.4: Example Quantile-Quantile Plot	L-7
M.1: Retrospective Power Curve for Class 3 Exterior Survey Unit	M-2
M.2: Retrospective Power Curve for Class 2 Interior Drywall Survey Unit	M-5
ABBREVIATIONS
1st Lt.
First Lieutenant
AARST
American Association of Radon Scientists and Technologists
AEA
Atomic Energy Act
AEC
Atomic Energy Commission
AFI
Air Force Instructions
AGL
above ground level
AL
action level
ALARA
as low as reasonably achievable
AMC
Army Materiel Command
AMS
accelerator mass spectrometry
ANSI
American National Standards Institute
AR
Army Regulations
ARA
Army Radiation Authorization
ASQ
American Society for Quality
ASTM
American Society of Testing and Materials
ATSDR
Agency for Toxic Substances and Disease Registry
CAA
Clean Air Act
Capt.
Captain (Air Force)
CAPT
Captain (Navy)
CDR
Commander
CED
committed effective dose
CEDE
committed effective dose equivalent
CERCLA
Comprehensive Environmental Response, Compensation, and Liability Act
CERCLIS
Comprehensive Environmental Response, Compensation, and Liability

Information System
CFR
Code of Federal Regulations
CHP
Certified Health Physicist
COC
chain of custody
Col.
Colonel
CV
coefficient of variation
DCF
dose conversion factor
DCGL
derived concentration guideline level
DCGLemc
DCGL for small areas of elevated activity, used with the EMC
DCGLw
DCGL for average concentrations over a wide area, used with statistical tests
DEFT
Decision Error Feasibility Trials
DGPS
differential global positioning system
DHS
Department of Homeland Security
DL
discrimination limit
DLC
Data Life Cycle
DOD
U.S. Department of Defense
DOE
U.S. Department of Energy
DOT
Department of Transportation
DQA
Data Quality Assessment
DQIs
Data Quality Indicators
DQO
Data Quality Objectives
ED
electronic dosimeter
EERF
Eastern Environmental Radiation Facility
Ehf
human factors efficiency
EIC
electret ion chamber
EMC
elevated measurement comparison
EML
Environmental Measurements Laboratory
EMMI
Environmental Monitoring Methods Index
EPA
U.S. Environmental Protection Agency
EPIC
Environmental Photographic Interpretation Center
ERAMS
Environmental Radiation Ambient Monitoring System
FA-MS
flowing afterglow mass spectrometer
FEMA
Federal Emergency Management Agency
FIRM
Flood Insurance Rate Maps
FRDS
Federal Reporting Data System
FSP
Field Sampling Plan
FSS
Final Status Survey
FUSRAP
Formerly Utilized Sites Remedial Action Program
FWPCA
Federal Water Pollution Control Act
GCS
geographic coordinate system
GEMS
Geographical Exposure Modeling System
GIS
geographic information system
GM
Geiger-Mueller
GPR
ground-penetrating radar
GPS
global positioning system
GRIDS
Geographic Resources Information Data System
GWSI
Ground Water Site Inventory
HASP
Health and Safety Plan
HPS
Health Physics Society
HSA
Historical Site Assessment
HSWA
Hazardous and Solid Waste Amendments
HRS
Hazard Ranking System
HTD
hard-to-detect
HWP
hazard work permit
IAEA
International Atomic Energy Agency
ICP
inductively coupled plasma
ICP-AES/MS
inductively coupled plasma-atomic emission spectrometry/mass spectrometry
ICP-MS
inductively coupled plasma mass spectrometer
IEEE
Institute of Electrical and Electronics Engineers
IR-MS
isotope ratio mass spectrometer
ISGS
in situ gamma spectroscopy
ISI
Information System Inventory
ISO
International Organization for Standardization
IV
independent verification
JSA
job safety analysis
KPA
kinetic phosphorescence analysis
LA-ICP-AES
laser ablation-inductively coupled plasma-atomic emission spectrometry
LA-ICP-MS
laser ablation-inductively coupled plasma-mass spectrometry
LANL
Los Alamos National Laboratory
LBGR
lower bound of the gray region
LCD
liquid crystal display
LCDR
Lieutenant Commander
LLD
lower limit of detection
LLNL
Lawrence Livermore National Laboratory
LLRWPA
Low-Level Radioactive Waste Policy Act, as Amended
LSC
liquid scintillation counter
Lt.
Lieutenant (Air Force)
LT
Lieutenant (Navy)
Lt. Col.
Lieutenant Colonel
MARLAP
Multi-Agency Radiation Laboratory Analytical Protocols (Manual)
MARSAME
Multi-Agency Radiation Survey and Assessment of Materials and Equipment

(Manual)
MARSSIM
Multi-Agency Radiation Survey and Site Investigation Manual
MCA
multichannel analyzer
MDA
minimum detectable activity
MDC
minimum detectable concentration
MDCR
minimum detectable count rate
MDER
minimum detectable exposure rate
MDLEST
Mobile Demonstration Laboratory for Environmental Screening Technologies
MED
Manhattan Engineering District
MeV
megaelectron volt
MQC
minimal quantifiable concentration
MQO
Measurement Quality Objectives
MS
mass spectrometry
MS/MD
matrix spike/matrix duplicate
NAREL
National Air and Radiation Environmental Laboratory
NARM
naturally occurring and accelerator produced radioactive material
NCAPS
National Corrective Action Prioritization System
NCP
National Contingency Plan
NCRP
National Council on Radiation Protection and Measurements
NIST
National Institute of Standards and Technology
NORM
naturally occurring radioactive material
NPDC
National Planning Data Corporation
NPDES
National Pollutant Discharge Elimination System
NPL
National Priorities List
NRC
U.S. Nuclear Regulatory Commission
NWPA
Nuclear Waste Policy Act
NWWA
National Water Well Association
ODES
Ocean Data Evaluation System
ORISE
Oak Ridge Institute for Science and Education
ORNL
Oak Ridge National Laboratory
OSHA
U.S. Occupational Safety and Health Administration
OSL
optically stimulated luminescence
OSLNs
optically stimulated luminescence devices sensitive to neutrons
PAEC
potential alpha energy concentration
pCi
picocurie
PE
performance evaluation
PERALS
photon electron rejecting alpha liquid scintillator
PIC
pressurized ionization chamber
PMT
photomultiplier tube
PPE
personal protective equipment
QA
quality assurance
QAM
Quality Assurance Manual
QAPP
Quality Assurance Project Plan
QC
quality control
QMP
Quality Management Plan
RAGS/HHEM
Risk Assessment Guidance for Superfund/Human Health Evaluation Manual
RAS
Remedial Action Support
RASP
Radiological Affairs Support Program
RCRA
Resource Conservation and Recovery Act
RCRIS
Resource Conservation and Recovery Information System
RFI/CMS
RCRA Facility Investigation/Corrective Measures Study
RFP
Request for Proposal
RI/FS
Remedial Investigation/Feasibility Study
ROD
Record of Decision
RODS
Records of Decision System
RSS
Ranked Set Sampling
RSSI
Radiation Survey and Site Investigation
RWP
radiation work permit
SADA
Visual Sample Plan and Spatial Analysis and Decision Assistance
SAP
Sampling and Analysis Plan
SARA
Superfund Amendments and Reauthorization Act
SDWA
Safe Drinking Water Act
SFMP
Surplus Facilities Management Program
SOPs
Standard Operating Procedures
SOR
sum of the ratios
SOW
statement of work
SPP
systematic planning process
SRS
simple random sampling
STORET
Storage and Retrieval for Water Quality Data
TED
total effective dose
TEDE
total effective dose equivalent
TENORM
technologically enhanced naturally occurring radioactive material
TIMS
thermal ionizing mass spectrometry
TLD
thermoluminescent dosimeter
TOF-MS
time-of-flight mass spectrometry
TRU
transuranic
TSCA
Toxic Substances Control Act
TVA
Tennessee Valley Authority
UBGR
upper boundary of the gray region
UCL
upper confidence limit
UFP
Uniform Federal Policy
UFP-QAPP
Uniform Federal Policy for Quality Assurance Project Plans
UFP-QS
Uniform Federal Policy for Implementing Environmental Quality Systems
UMTRCA
Uranium Mill Tailings Radiation Control Act
USGS
United States Geological Survey
USPHS
United States Public Health Service
USRADS
Ultrasonic Ranging and Data System
UXO
unexploded ordnance
VOCs
volatile organic compounds
WATSTORE
National Water Data Storage and Retrieval System
WL
working level
WQX
Water Quality Exchange
WRS
Wilcoxon Rank Sum
WSR
Wilcoxon signed rank
WT
Wilcoxon test
Symbols, Nomenclature, and Notations
<	less than
>	greater than
≤	less than or equal to
≥	greater than or equal to
°	degrees (angle or temperature)
%	percent
1-β	statistical power of a hypothesis test
α	Type I decision-error rate
α_Q	alpha used for the quantile test
αS	alpha scintillation survey meter
a	half-width of a rectangular or triangular probability distribution
A	area
A	overall sensitivity of a measurement
Ac	actinium (isotope listed: 228Ac)
A_EA	area of elevated activity
AL_i	action level value for an individual radionuclide (i = 1, 2, ..., n)
AL_meas,mod	modified action level for the radionuclide being measured when it is used as a surrogate for other radionuclide(s)
AL_meas	action level for the radionuclide being measured
AL_infer	action level for the inferred radionuclide (in surrogate measurements)
A_m	area factor
Am	americium (isotope listed: 241Am)
A_s	surface activity
β	Type II decision-error rate
b	background count rate
b_i	the average number of counts in the background interval (scanning)
B	mean background counts
Be	beryllium (isotope listed: 7Be)
Bi	bismuth (isotopes listed: 210Bi, 212Bi, 214Bi)
Bq	becquerel
γS	gamma scintillation (gross)
C	carbon (isotope listed: 14C)
C	radionuclide concentration or activity
c	constant
C_b	number of background counts
C_s+b	number of gross counts
Ci	curie
C_i	concentration value for an individual radionuclide (i = 1, 2, ..., n)
c_i	sensitivity coefficient
c_i·u(x_i)	component of the uncertainty in y due to x_i
C_infer/C_meas	ratio of amount of the inferred radionuclide to that of the measured surrogate radionuclide
C_s	concentration for the surrogate radionuclide
°C	degrees Celsius
cm	centimeter
cm²	square centimeter
cm³	cubic centimeter
Cd	cadmium (isotope listed: 109Cd)
Co	cobalt (isotopes listed: 57Co, 60Co)
cpm	counts per minute
Cr	chromium (isotope listed: 51Cr)
Cs	cesium (isotope listed: 137Cs)
CsI(Tl)	cesium iodide (thallium activated)
CZT	cadmium-zinc telluride
δ	estimate of the mean concentration of residual radioactive material in the survey unit
Δ	shift (width of the gray region, UBGR-LBGR)
Δ/σ	relative shift
Δt_i	the observation interval
d	parameter in the Stapleton Equation for the critical net signal
d	width of the detector in the direction of the scan
d'	detectability index (scanning)
DCGL_gross	derived concentration guideline level for a gross measurement
DCGL_i	derived concentration guideline level of the ith component leading to dose or risk
DCGL_min	lowest of the derived concentration guideline levels
DCGL_s-mod	modified derived concentration guideline level of the surrogate radionuclide
DCGL_s-unmod	derived concentration guideline level of the surrogate radionuclide before modification
dpm	disintegrations per minute
ε_i	instrument efficiency
ε_s	surface (or source) efficiency
ε_t	total efficiency of the instrument
eV	electron-volt
E_γ	energy of a gamma photon of concern in kiloelectron-volts (keV)
E_i	energy of a photon of interest
°F	degrees Fahrenheit
f_i	relative fraction of activity contributed by radionuclide i to the total
ft	foot (feet)
ft³	cubic foot (feet)
Fe	iron (isotopes listed: 55Fe, 59Fe)
g	gram
G	activity
GBq	gigabecquerel (1×10⁹ becquerels)
GGAL	gross gamma action level
GM	Geiger-Mueller survey meter
GPα	gas-flow proportional counter (α mode)
GPβ	gas-flow proportional counter (β mode)
h	hour
H	hydrogen (isotope listed: 3H [tritium])
H_0	null hypothesis
H_1	alternative hypothesis
Hz	hertz
i	ith sample or measurement in a set
i	observation time interval length (scanning)
I	iodine (isotopes listed: 123I, 125I, 131I)
in.	inch
Ir	iridium (isotope listed: 192Ir)
ISγ	in situ gamma spectrometry
k	k-statistic for the quantile test
k	coverage factor for the expanded uncertainty, U
k	Poisson probability sum for α and β (assuming α and β are equal)
k	critical value of the sign test
K	potassium (isotope listed: 40K)
K_d	distribution coefficient
kBq	kilobecquerel (1×10³ becquerels)
keV	kiloelectron-volt (1×10³ electron-volts)
kg	kilogram
km	kilometer
k_Q	multiple of the standard deviation defining x_Q, usually chosen to be 10
L	length
L	liter
L	grid size spacing
L_C	critical level
L_D	detection limit
L_EA	revised spacing of the systematic pattern
LaBr	lanthanum bromide
lb	pound
μ	micro (10⁻⁶)
μ	true mean
μ	theoretical mean of a population distribution
(μ_en/ρ)_air	mass energy absorption coefficient in air, centimeters squared per gram (cm²/g)
μBq	microbecquerel
μCi	microcuries
μR	microroentgen (1×10⁻⁶ roentgen)
μSv	microsievert
m	number of reference measurements (WRS test or Quantile test)
m	number of ranking categories
m	adjusted reference sample measurements
m	meter
m²	square meter
M_i	total amount of [dose, counts, activity, etc.]
mBq	millibecquerels
MDCR_surveyor	required number of net source counts
MeV	megaelectron-volt (1×10⁶ electron-volts)
mg	milligram(s)
mGy	milligray
mm	millimeter(s)
Mn	manganese (isotope listed: 54Mn)
M/R	mass-to-charge ratio
mR	milliroentgen
mrad	millirad
mrem	millirem (1×10⁻³ rem)
mSv	millisievert (1×10⁻³ Sv)
n	number of survey unit measurements (WRS test or Quantile test)
n	nth sample or measurement in a set
n	number of laboratory samples (for the Ranked Set Sampling test)
N	sample size (i.e., number of data points [or samples]) for the Sign test
N	number of field screening measurements (for the Ranked Set Sampling test)
N_EA	survey unit area divided by the maximum area corresponding to the area factor, which yields the number of measurements needed so the scan MDC is adequate
Na	sodium (isotope listed: 22Na)
NaI	sodium iodide
NaI(Tl)	sodium iodide (thallium activated)
nBq	nanobecquerels
n_EA	required number of data points for assessing small areas of elevated activity
ng	nanogram
Ni	nickel (isotopes listed: 57Ni, 63Ni)
Np	neptunium (isotope listed: 237Np)
non-Poisson variance component of the background count rate correction
p	coverage probability for expanded uncertainty
p	efficiency of a less than ideal surveyor (scanning)
P	probability
Pa	protactinium (isotopes listed: 234Pa, 234mPa)
PA	probe area
Pb	lead (isotopes listed: 212Pb, 214Pb)
PC	personal computer
pCi	picocurie (1×10⁻¹² curies)
PIC	pressurized ionization chamber
Pm	promethium (isotope listed: 147Pm)
Po	polonium (isotopes listed: 210Po, 212Po, 214Po, 216Po)
ppt	parts per trillion
Pu	plutonium (isotopes listed: 238Pu, 239Pu, 240Pu, 241Pu)
q	critical value for statistical tests
ρ	density
ρ(X_i, X_j)	correlation coefficient for two input quantities, X_i and X_j
r	number of cycles
r	random number from a data set
r	r-statistic for the quantile test
R	ratio
R	roentgen (exposure rate)
Ra	radium (isotopes listed: 224Ra, 226Ra, 228Ra)
R_b	mean background count rate
R_i	established ratio of the concentration of the ith radionuclide to the concentration of the surrogate radionuclide for i = 2, ..., n
R_I	mean interference count rate
Rh	rhodium
Rn	radon (isotopes listed: 220Rn, 222Rn)
R_net	net counting rate
Ru	ruthenium (isotope listed: 106Ru)
r(x_i, x_j)	correlation coefficient for two input estimates, x_i and x_j
σ	theoretical total standard deviation of the population distribution being sampled
σ²	theoretical total variance of the population distribution being sampled
σ_m	theoretical measurement standard deviation of the population distribution being sampled, estimated by the combined standard uncertainty of the measurement
σ_m²	theoretical measurement variance of the population distribution being sampled

σ_s	estimate of the measurement variability in the survey unit
σ_s²	theoretical sampling variance of the population distribution being sampled
ν(X_i, X_j)	covariance for two input quantities, X_i and X_j
total uncertainty
s	standard deviation of the survey unit
S+	Sign test statistic
s(x_i)	sample standard deviation of the input estimate, x_i
S_b²	mean square between reference areas
S_c	critical value of the net instrument signal
S_d	mean value of the net signal that gives a specified probability, 1-β, of yielding an observed signal greater than its critical value S_c
s_i	minimum detectable number of net source counts in the observation interval (scanning)
s_i,surveyor	minimum detectable number of net source counts in the observation interval by a less than ideal surveyor (scanning)
Sr	strontium (isotope listed: 90Sr)
Sv	sievert
S_w²	mean square within reference areas
t	t-test statistic
t	number of "less than" values
T	weighted sum
t_1/2	half-life
Tc	technetium (isotopes listed: 99Tc, 99mTc)
Th	thorium (isotopes listed: 228Th, 230Th, 232Th, 234Th)
Th nat	natural thorium
Tl	thallium (isotopes listed: 201Tl, 204Tl, 208Tl)
t_b	count time for the background
t_i	time interval
t_s	count time for the source
t_s+b	gross count time
U	expanded uncertainty
U	uranium (isotopes listed: 234U, 235U, 238U)
U nat	natural uranium
u(x_i)	standard uncertainty of the input estimate, x_i
u(x_i)/|x_i|	relative standard uncertainty of x_i
u(x_i, x_j)	covariance of two input estimates, x_i and x_j
u_c(y)	combined standard uncertainty of y
u_c(y)/y	relative combined standard uncertainty of the output quantity for a particular measurement
u_c²(y)	combined variance of y
u_i(y)	component of the combined standard uncertainty, u_c(y), generated by the standard uncertainty of the input estimate x_i, u(x_i), multiplied by the sensitivity coefficient, c_i
uM	measurement method uncertainty
uMR	required measurement method uncertainty
φMR	required relative measurement method uncertainty
φ(xi)	relative standard uncertainty of a nonzero input estimate, xi, for a particular measurement; φ(xi) = u(xi)/xi
Φ(z)	cumulative normal distribution function
V	volt(s)
v	scan speed
σ²	variance
W	physical probe area
Wr	sum of the ranks of the (adjusted) reference measurements (WRS test)
Ws	sum of the ranks of the (adjusted) sample measurements (WRS test)
H/S	weighted instrument sensitivity
x	estimate of the input quantity, X
x	reference area measurement
x̄	sample mean
X	maximum length
X[k]i	survey unit measurements
xi	results of the individual samples
Xi	an input quantity
xc	the critical value of the response variable, x
xq	minimum quantifiable value of the response variable, x
y	year
y	estimate of the output quantity for a particular measurement, Y
Y	maximum width
Y	yttrium
Y	output quantity, measurand
yc	critical value of the concentration
Yd	minimum detectable concentration (MDC)
CONVERSION FACTORS
To Convert From	To	Multiply By
acre	hectare	0.405
acre	m2	4,050
acre	ft2	43,600
Bq	Ci	2.7 x 10^-11
Bq	dps	1
Bq	pCi	27
Bq/kg	pCi/g	0.027
Bq/m2	dpm/100 cm2	0.60
Bq/m3	Bq/L	0.001
Bq/m3	pCi/L	0.027
centimeter (cm)	inch	0.394
Ci	Bq	3.70 x 10^10
Ci	pCi	1 x 10^12
dps	dpm	60
dps	pCi	27
dpm	dps	0.0167
dpm	pCi	0.451
gray (Gy)	rad	100
hectare	acre	2.47
liter (L)	cm3	1,000
liter (L)	m3	0.001
liter (L)	ounce (fluid)	33.8
meter (m)	inch	39.4
meter (m)	mile	0.000621
m2	acre	0.000247
m2	hectare	0.0001
m2	ft2	10.8
m2	square mile	3.86 x 10^-7
m3	liter	1,000
mrem	mSv	0.01
mrem/y	mSv/y	0.01
mSv	mrem	100
mSv/y	mrem/y	100
ounce (oz)	L	0.0296
pCi	Bq	0.037
pCi	dpm	2.22
pCi/g	Bq/kg	37
pCi/L	Bq/m3	37
rad	Gy	0.01
rem	mrem	1,000
rem	mSv	10
rem	Sv	0.01
sievert (Sv)	mrem	100,000
sievert (Sv)	mSv	1,000
sievert (Sv)	rem	100
Abbreviations: m = meter; ft = foot; Bq = becquerel; Ci = curie; dps = decays per second; pCi = picocurie; kg =
kilogram; g = gram; L = liter; cm = centimeter; in. = inch; dpm = decays per minute; oz = ounce; mrem = millirem; mSv
= millisievert; y = year; Gy = gray; Sv = sievert.
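The factors above can be applied directly during survey planning and data review. The following minimal Python sketch (not part of MARSSIM; the dictionary, function name, and selected factors are illustrative only) shows one way a few of the tabulated factors might be encoded and used:

# Illustrative unit-conversion helper using a few factors from the table above.
CONVERSION_FACTORS = {
    ("pCi/g", "Bq/kg"): 37.0,
    ("Bq/kg", "pCi/g"): 0.027,
    ("mrem/y", "mSv/y"): 0.01,
    ("mSv/y", "mrem/y"): 100.0,
    ("dpm", "pCi"): 0.451,
}

def convert(value, from_unit, to_unit):
    """Multiply a value by the tabulated factor for (from_unit, to_unit)."""
    return value * CONVERSION_FACTORS[(from_unit, to_unit)]

# Example: a soil concentration of 5 pCi/g corresponds to 5 x 37 = 185 Bq/kg.
print(convert(5.0, "pCi/g", "Bq/kg"))  # 185.0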
1 INTRODUCTION
1.1 Purpose and Scope of MARSSIM
Radioactive materials have been produced, processed, used, and stored at thousands of sites
throughout the United States. Many of them at one time had or now have residual radioactive
material in excess of natural background. The sites range in size from Federal weapons-
production facilities covering hundreds of square kilometers to the nuclear medicine
departments of small hospitals. Owners and managers would like to find and remove any
excess residual radioactive material and release these sites for restricted use or for unrestricted
public use.
The U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission
(NRC), the U.S. Department of Energy (DOE), and the U.S. Department of Defense (DoD)
are responsible for the release of federally controlled sites after cleanup. Such sites include
DOE and DoD sites, sites licensed by the NRC and its Agreement States, and former
unlicensed industrial facilities that handled ores containing radioactive materials that are
addressed under Federal or State regulatory programs.
The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) provides a
nationally consistent consensus approach to conducting radiation surveys and investigations at
sites with the potential for residual radioactive material. This approach is both scientifically
rigorous and flexible enough to be applied to a diversity of site cleanup conditions.
To release a site after remediation, it is normally necessary to demonstrate to the responsible
Federal or State agency that the cleanup effort was successful and that the release criteria
(specific regulatory limits) were met. In MARSSIM, the "Final Status Survey" (FSS) provides this
demonstration. This manual assists site personnel or others in performing or assessing such a
demonstration. (MARSSIM may also serve to guide or monitor other types of remediation
efforts.)
As illustrated in Figure 1.1, the demonstration of compliance with respect to conducting surveys
comprises three interrelated parts:
I.	Translate: Translating the cleanup/release criteria (e.g., mSv/y, mrem/y, specific risk) into
corresponding derived concentration guideline levels (e.g., Bq/kg or pCi/g in soil) through
the use of environmental pathway modeling.
II.	Measure: Acquiring scientifically sound and defensible site-specific data on the levels and
distribution of residual radioactive material, as well as levels and distribution of radionuclides
1	present in the background, by employing suitable field and/or laboratory measurement
2	techniques.1
3	III. Decide: Determining that the data obtained from sampling support the conclusion that the
4	site meets the release criteria, within an acceptable degree of uncertainty, through
5	application of a statistically based decision rule.
1 Measurements include field and laboratory analyses; however, MARSSIM leaves detailed discussions of laboratory
sample analyses to another manual (i.e., a companion document, the Multi-Agency Radiation Laboratory Analytical
Protocols [MARLAP] manual).
[Figure 1.1: Compliance Demonstration. Translate: release criteria are converted to DCGLs through default or site-specific modeling. Measure: survey and sample. Decide: interpret results and apply the statistical test.]
MARSSIM provides standardized and consistent approaches for planning, conducting,
evaluating, and documenting environmental radiological surveys, with a specific focus on the
FSSs that are carried out to demonstrate compliance with cleanup regulations. The MARSSIM
1	process gathers comprehensive technical information—specifically for II and III above—on
2	residual radioactive material in surface soils and on building surfaces. This information is used
3	in a performance-based approach for demonstrating compliance with a dose- or risk-based
4	regulation. This approach includes processes that identify data quality needs and may reveal
5	limitations on the data that can be collected from a survey. MARSSIM's approach supports
6	decision-making at sites with residual radioactive material in surface soil and on building
7	surfaces. In particular, MARSSIM describes generally acceptable approaches for the following:
8	• planning and designing scoping, characterization, remediation-support, and FSSs for sites
9	with residual radioactive material in surface soil and on building surfaces
10	• Historical Site Assessment (HSA)
11	• quality assurance/quality control (QA/QC) in data acquisition and analysis
12	• conducting surveys
13	• field and laboratory methods and instrumentation, and interfacing with radiation laboratories
14	• statistical hypothesis testing, and the interpretation of statistical data
15	• documentation
16	Table 1.1 summarizes the scope of MARSSIM. Several issues related to releasing sites are
17	beyond the scope of MARSSIM. These include the translation of dose or risk standards into
18	radionuclide-specific concentrations or the demonstration of compliance with ground water or surface
19	water regulations. MARSSIM can be applied to surveys performed at vicinity properties—those
20	not under Government or licensee control—but the decision to apply MARSSIM at vicinity
21	properties is a regulatory decision outside the scope of MARSSIM. Information on designing,
22	implementing, and assessing radiological surveys of materials and equipment is presented in
23	the Multi-Agency Radiation Survey and Assessment of Materials and Equipment (MARSAME)
24	supplement to MARSSIM. The potential presence of residual radioactive material in other media
25	(e.g., subsurface soil, ground water) is not addressed by MARSSIM. MARSSIM's main focus is
26	on FSSs, so the processes in this manual may follow remediation activities that remove below-
27	surface residual radioactive material. Therefore, some of the reasons for limiting the scope of
28	the document to surface soils and building surfaces include—
29	• Residual radioactive material is limited to these media for many sites following remediation.
30	• Because many sites have surface soil and building surface residual radioactive material as
31	the leading source of exposure to radiation, existing computer models used for calculating
32	the concentrations based on dose or risk generally consider only surface soils or building
33	surfaces as a source term.
1 Table 1.1: Scope of MARSSIM
Technical Information (within scope): MARSSIM provides technical, performance-based guidance on conducting radiation surveys and site investigations, remediation and restoration activities, and demonstration of compliance with dose- or risk-based regulations. MARSSIM includes a framework for developing a phased approach to site investigations that includes stakeholder involvement and emphasizes the development of the final status survey (FSS) for site release.
Regulation (beyond scope): MARSSIM does not set new regulations or requirements, or address non-technical issues (e.g., legal or policy) for site cleanup. Release criteria will be provided rather than calculated using MARSSIM.

Tool Box (within scope): MARSSIM can be thought of as an extensive tool box with many components—some within the text of MARSSIM, others by reference.
Tool Box (beyond scope): Many topics are beyond the scope of MARSSIM. For example—
• a public participation program
• staging, classification, packaging, and transportation of wastes for disposal
• remediation and stabilization techniques
• training

Stakeholder Involvement (within scope): MARSSIM encourages stakeholder involvement but does not provide specific guidance.
Stakeholder Involvement (beyond scope): Specific guidance is determined by the individual Federal and State agencies.

Measurement (within scope): The information given in MARSSIM is performance-based and directed toward acquiring site-specific data and goals.
Procedure (beyond scope): The approaches suggested in MARSSIM vary depending on the various site data needs—there are no set procedures for sample collection, measurement techniques, storage, or disposal established in MARSSIM.

Modeling (within scope): The interface between environmental pathway modeling and MARSSIM is an important survey design consideration addressed in MARSSIM.
Modeling (beyond scope): Environmental pathway modeling and ecological endpoints in modeling are beyond the scope of MARSSIM.

Soil and Buildings (within scope): The two main media of interest in MARSSIM are surface soil and building surfaces affected by residual radioactive material.
Other Media (beyond scope): MARSSIM does not cover other media, including construction materials, equipment, subsurface soil, surface or subsurface water, biota, air, sewers, or sediments.

Final Status Survey (within scope): The focus of MARSSIM is on the FSS, as this is the deciding factor in judging whether the site meets the restricted or unrestricted release criteria.
Instruments and Radiation Detection Equipment (beyond scope): MARSSIM does not recommend the use of any specific radiation detection equipment—there is too much variability in the types of radiation sites.

Radiation (within scope): MARSSIM considers only radiation-derived hazards.
Chemicals (beyond scope): MARSSIM does not consider any hazards posed by chemicals.

Remediation Method (within scope): MARSSIM assists users in determining when sites are ready for an FSS and provides information on how to determine if remediation was successful.
Remediation Method (beyond scope): MARSSIM does not discuss selection and evaluation of remediation alternatives, public involvement, legal considerations, and policy decisions related to planning.

Data Quality Objectives (DQO) Process (within scope): MARSSIM presents a systematic approach for designing surveys to collect data needed for making decisions, such as whether to release a site.
DQO Process (beyond scope): MARSSIM does not provide prescriptive or default DQOs.

Data Quality Assessment (DQA) (within scope): MARSSIM provides a set of statistical tests for evaluating data and lists alternate tests that may be applicable at specific sites.
DQA (beyond scope): MARSSIM does not prescribe a statistical test for use at all sites.

Radon Assessment (within scope): MARSSIM does address measurements of radon (concentration or flux) at sites with the immediate radon parents present because of previous site operations.
Radon Assessment (beyond scope): MARSSIM does not include measurements of radon in ambient air, air emissions, effluents, water, or indoor air at sites with none of the immediate radon parents present because of previous site operations.
1	MARSSIM also recognizes that there may be other factors that have an impact on designing
2	surveys, such as cost or stakeholder concerns. Guidance on how to address these specific
3	concerns is outside the scope of MARSSIM. Unique site-specific cases may arise that require a
4	modified approach beyond what is presently described in MARSSIM. This includes examples
5	such as—
1	• sites affected by naturally occurring radioactive material (NORM) or technologically enhanced
2	naturally occurring radioactive material (TENORM) in which the concentrations
3	corresponding to the release criteria are close to the variability of the background
4	• sites where a reference background cannot be established
5	However, the process of planning, implementing, assessing, and making decisions about a site
6	described in MARSSIM is applicable to all sites, even if the examples in this manual do not
7	meet a site's specific objectives.
8	Of MARSSIM's many topics, the Data Quality Objective (DQO) approach to data acquisition and
9	analysis and the Data Quality Assessment (DQA) for determining that data meet stated
10	objectives are a consistent theme throughout the manual. The DQO process and DQA
11	approach, described in Chapter 2, present a scientific, common-sense method for designing
12	and conducting surveys and making best use of the obtainable information. A formal framework
13	for systematizing the planning of data acquisition surveys can ensure that the information can
14	support important decisions, such as whether to release a particular site following remediation.
15	DQOs must be developed on a site-specific basis. The approaches presented in MARSSIM may
16	not meet the DQOs at every site, so other methods may be used to meet site-specific DQOs, as
17	long as an equivalent level of performance can be demonstrated.
18	1.2 Structure of the Manual
19	Chapter 2 provides an overview of the Radiation Survey and Site Investigation (RSSI) process.
20	Figures 2.4 through 2.8 are flowcharts that summarize the steps taken and decisions made in
21	the process. Chapter 3 provides instructions for performing an HSA—a detailed investigation to
22	collect existing information on the site or facility and to develop a conceptual site model. The
23	results of the HSA are used to plan surveys, perform measurements, and collect additional
24	information at the site. Chapter 4 covers issues that arise in all types of surveys. Detailed
25	information on performing specific types of surveys is included in Chapter 5. Information on
26	selecting the appropriate measurement method (a combination of instrument and measurement
27	technique) is included in Chapters 6 and 7.
28	scanning surveys, and Chapter 7 discusses sampling and sample preparation for laboratory
29	measurements. The interpretation of survey results is described in Chapter 8.
30	MARSSIM also contains several appendices to provide additional information on specific topics.
31	Appendix A presents an example of how to apply the MARSSIM process to a specific site
32	through an FSS. Appendix B describes a simplified procedure for compliance demonstration
33	that may be applicable at certain types of sites. Appendix C summarizes the regulations and
34	requirements associated with radiation surveys and site investigations for each of the agencies
35	involved in the development of MARSSIM. Detailed information on the EPA Quality System is in
36	Appendix D. The ranked set sampling approach, a form of double sampling that can be useful
37	for hard-to-detect radionuclides, is in Appendix E. Appendix F describes the relationships
38	among MARSSIM; the Comprehensive Environmental Response, Compensation, and Liability
39	Act (CERCLA); and the Resource Conservation and Recovery Act (RCRA). Sources of
40	information used during site assessment are listed in Appendix G. Appendix H describes field
1	survey and laboratory analysis equipment that may be used for radiation surveys and site
2	investigations. Appendix I offers tables of statistical data and supporting information for
3	interpreting survey results described in Chapter 8. The derivation of the alpha scanning
4	detection limit calculations used in Chapter 6 is described in Appendix J. Comparison tables
5	for QA documents are in Appendix K. Appendix L includes guidance for the use of stem and
6	leaf displays and quantile plots. Instructions for the calculation of power curves are included in
7	Appendix M. Appendix N includes three illustrative examples demonstrating the potential
8	consequences of using methods with different levels of precision for planning and designing an
9	FSS and for actually performing the FSS. Appendix O provides additional information about the
10	Wilcoxon Rank Sum test (WRS) and Sign test and illustrates examples of the derived
11	concentration guideline level (DCGL) determinations.
12	MARSSIM is presented in a modular format, with each module containing information on
13	conducting specific aspects of, or activities related to, the survey process. Followed in order,
14	each module leads to the generation and implementation of a complete survey plan. Although
15	this approach may involve some overlap and redundancy in information, it also allows many
16	users to concentrate only on those portions of the manual that apply to their own particular
17	needs or responsibilities. The procedures within each module are listed in order, and options
18	are provided to let the user skip portions of the manual that may not be applicable to a specific
19	site. Where appropriate, checklists condense and summarize major points in the process. The
20	checklists may be used to verify that every suggested step is followed or explain why a step was
21	not needed.
22	MARSSIM contains a simplified procedure (see Appendix B) that many users of radioactive
23	materials may be able to employ to demonstrate compliance with the release criteria—with the
24	approval of the responsible regulatory agency. Sites that may qualify for simplified release
25	procedures are those in which the radioactive materials used were—
26	• of relatively short half-life (e.g., < 120 days) and have since decayed to insignificant
27	quantities
28	• kept only in small enough quantities so as to be exempted or not requiring a specific license
29	from a regulatory authority
30	• used or stored only in the form of non-leaking sealed sources
31	• combinations of the above
32	1.3 Use of the Manual
33	Potential users of this manual are Federal, State, and local government agencies with
34	regulatory authority and control of residual radioactive material in the environment; their
35	contractors; and other parties, such as organizations with licensed authority to possess and use
36	radioactive materials. The manual is intended for a technical audience having knowledge of
37	radiation health physics and statistics, as well as experience with the practical applications of
38	radiation protection. An understanding of instrumentation and methodologies and expertise in
39	planning, approving, and implementing surveys of environmental levels of radioactive material is
1	assumed. This manual has been written so that individuals responsible for planning, approving,
2	and implementing radiological surveys will be able to understand and apply the information
3	provided here. Certain situations and sites may require consultation with personnel with specific
4	types of expertise and experience.
5	MARSSIM uses the word "should" as a recommendation, not as a requirement. Each
6	recommendation in this manual is not intended to be taken literally and applied at every site.
7	MARSSIM's survey planning documentation will address how to apply the process on a site-
8	specific basis.
9	As previously stated, MARSSIM supports compliance with dose- or risk-based regulations. The
10	translation of the regulatory dose limit to a corresponding concentration level is not addressed in
11	MARSSIM, so the information in this manual is applicable to a broad range of regulations,
12	including concentration-based regulations. The terms dose, risk, and dose-based and risk-
13	based regulation are used throughout the manual, but these terms are not intended to limit the
14	use of the manual.
Note that Federal or State agencies that can approve a demonstration of compliance may
have requirements that differ from what is presented in this version of MARSSIM. It is
essential, therefore, that the persons carrying out the surveys remain in close communication
with the proper Federal or State regulatory authorities throughout the compliance
demonstration process.
15	1.4 Missions of the Federal Agencies Producing MARSSIM
16	MARSSIM is the product of a multi-agency workgroup with representatives from EPA, NRC,
17	DOE, and DoD. This section briefly describes the missions of the participating agencies.
18	Regulations and requirements governing site investigations for each of the agencies associated
19	with radiation surveys and site investigations are presented in Appendix C.
20	1.4.1 U.S. Environmental Protection Agency
21	The mission of the EPA is to improve and preserve the quality of the environment, on both
22	national and global levels. The EPA's scope of responsibility includes implementing and
23	enforcing environmental laws, setting guidelines, monitoring pollution, performing research, and
24	promoting pollution prevention. EPA Headquarters maintains overall planning, coordination, and
25	control of EPA programs, and EPA's 10 regional offices are responsible for executing EPA's
26	programs within the boundaries of each region. EPA also coordinates with State and local
27	governments on pollution control activities and supports further research and development.
28	1.4.2 U.S. Nuclear Regulatory Commission
29	The mission of the NRC is to ensure adequate protection of public health and safety, the
30	common defense and security, and the environment in the use of certain radioactive materials in
31	the United States. The NRC's scope of responsibility includes regulation of commercial nuclear
32	power reactors; nonpower research, test, and training reactors; fuel cycle facilities; medical,
1	academic, and industrial uses of nuclear materials; and the transport, storage, and disposal of
2	nuclear materials and waste. The Energy Reorganization Act of 1974 and the Atomic Energy
3	Act of 1954, as amended, provide the foundation for regulation of the Nation's commercial use
4	of radioactive materials.
5	1.4.3 U.S. Department of Energy
6	The mission of the DOE is to develop and implement a coordinated national energy policy to
7	ensure the availability of adequate energy supplies and to develop new energy sources for
8	domestic and commercial use. In addition, DOE is responsible for the development,
9	construction, and testing of nuclear weapons for the U.S. Military. DOE is also responsible for
10	managing the low- and high-level radioactive wastes generated by past nuclear weapons and
11	research programs and for constructing and maintaining a repository for civilian radioactive
12	wastes generated by commercial nuclear reactors. DOE has the lead in remediating facilities
13	and sites previously used in atomic energy programs.
14	1.4.4 U.S. Department of Defense
15	The global mission of the DoD is to provide for the defense of the United States. In doing this,
16	DoD is committed to protecting the environment. Each military service has specific regulations
17	addressing the use of radioactive sources and the development of occupational health
18	programs and radiation protection programs. The documents describing these regulations are
19	used as guidance in developing environmental radiological surveys within DoD and are
20	discussed in Appendix C.
21	In accordance with section 91b of the Atomic Energy Act of 1954, as amended, DoD (including
22	separate military services) has authority to acquire nuclear reactor systems and special nuclear
23	materials. Additionally, DoD (including separate military services) is the lead federal agency for
24	environmental remediation under several federal regulatory programs.
2 OVERVIEW OF THE RADIATION SURVEY AND SITE
INVESTIGATION PROCESS
3	2.1 Introduction
4	The purpose of the Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) is
5	to provide a standardized approach to demonstrating compliance with release criteria.1 This
6	chapter provides a brief overview of the Radiation Survey and Site Investigation (RSSI) process,
7	several important aspects of this process, and its underlying principles. The purpose of this
8	chapter is to provide the overview information required to understand the rest of this manual.
9	The concepts introduced here are discussed in detail throughout the manual.
10	• Section 2.2 introduces and defines key terms used throughout the manual. Some of these
11	terms may be familiar to the MARSSIM user, while others are new terms developed
12	specifically for this manual.
13	• Section 2.3 describes the flow of information used to decide whether a site or facility
14	complies with release criteria. The section describes the framework that is used to
15	demonstrate compliance with the release criteria and is the basis for all information
16	presented in this manual. The decision-making process is broken down into four phases:
17	(1) planning, (2) implementation, (3) assessment, and (4) decision-making.
18	• Section 2.4 introduces the RSSI process, which can be used for compliance demonstration
19	at many sites. The section describes a series of surveys that form the core of this process.
20	Each survey has specified goals and objectives to support a final decision on whether a site
21	or facility complies with the appropriate criteria. Flow diagrams are provided showing how
22	the different surveys support the overall process, along with descriptions of the information
23	obtained through each type of survey.
24	• Section 2.5 presents major considerations that relate to the decision-making and survey-
25	design processes. This section, in addition to the examples discussed in detail throughout
26	the manual, focuses on residual radioactive material in surface soils and on building
27	surfaces. Recommended survey designs for demonstrating compliance are presented,
28	along with the rationale for selecting these designs.
29	• Section 2.6 recognizes that the methods presented in MARSSIM may not represent the
30	most appropriate survey design at all sites. Some alternate methods for applying the RSSI
31	process are discussed. Different methods for demonstrating compliance that are technically
32	defensible may be developed with the approval of the responsible regulatory agency.
33	MARSSIM provides an approach that is technically defensible and flexible enough to be applied
34	to a variety of site-specific conditions. Applying this approach to dose- or risk-based criteria
1 MARSSIM uses the word "should" as a recommendation, not as a requirement. Each recommendation in this
manual is not intended to be taken literally and applied at every site. MARSSIM's survey planning documentation will
address how to apply the process on a site-specific basis.
provides a consistent approach to protecting human health and the environment. The manual's
performance-based approach to decision-making provides the flexibility needed to address
compliance demonstration at individual sites.
2.2 Understanding Key MARSSIM Terminology and Survey Unit Classification
2.2.1 Key MARSSIM Terminology
The first step in understanding the RSSI process is to understand this manual's scope,
terminology, and concepts. Some terms were developed specifically for MARSSIM, while other
commonly used terms were adopted.
This section explains some of the terms roughly in the order of their presentation in the manual.
The italicized terms in this section are all defined in the Glossary of this document.
The process described in MARSSIM begins with the premise that release criteria have already
been provided in terms of a measurement quantity. The methods presented in MARSSIM are
generally applicable and are not dependent on the value of the release criteria.
Release criteria are regulatory limits expressed in terms of dose (millisieverts/year or
millirem/year) or risk (cancer morbidity or cancer mortality) or concentrations of radioactive
material specified in regulations or standards. The terms "release limit" and "cleanup standard"
are also used for this concept. Release criteria that are typically based on dose (e.g., total
effective dose [TED], committed effective dose [CED], total effective dose equivalent [TEDE], or
committed effective dose equivalent [CEDE]) or risk (e.g., risk of cancer incidence [morbidity] or
risk of cancer death [mortality]) generally cannot be measured directly.
Exposure pathway modeling is an analysis of various exposure pathways and scenarios used to
convert dose or risk into concentration. Exposure pathway modeling is used to calculate a
radionuclide-specific predicted concentration of radioactive material or surface area
concentration of radioactive material of specific nuclides that could result in a dose or risk equal
to the release criteria within the required performance period. In this manual, such a
concentration is termed the derived concentration guideline level (DCGL). In many cases,
DCGLs can be derived from applicable requirements or regulatory agency guidance based on
default modeling input parameters (e.g., screening-level analyses) if site conditions are
consistent with the underlying assumptions in the default modeling or screening analyses; in
other cases, it may be necessary to develop site-specific parameters. In general, the units for
the DCGL are the same as the units for measurements performed to demonstrate compliance
(e.g., becquerel/kilogram [Bq/kg] or picocurie/gram [pCi/g], becquerel/square meter [Bq/m2] or
decays per minute [dpm]/100 cm2). This allows direct comparisons between the survey results
and the DCGL. A discussion of the uncertainty associated with using DCGLs to demonstrate
compliance is included in Appendix D, Section D.1.6.
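The translation itself is performed with exposure pathway modeling and is outside MARSSIM's scope, but the underlying arithmetic can be illustrated with a deliberately simplified sketch. In the Python fragment below, both the dose-based release criterion and the dose-per-unit-concentration factor are hypothetical values chosen only to show how a DCGL carries the same units as the survey measurements:

# Simplified illustration only; the dose factor below is hypothetical and would
# normally come from site-specific or default pathway modeling.
release_criterion = 25.0        # mrem/y, example dose-based release criterion
dose_per_unit_conc = 0.1        # mrem/y per pCi/g, hypothetical modeled dose factor

dcgl = release_criterion / dose_per_unit_conc   # pCi/g
print(dcgl)                     # 250.0 pCi/g corresponds to 25 mrem/y in this example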
An investigation level is a derived media-specific, radionuclide-specific concentration that, if
exceeded, triggers some response, such as further investigation or remediation. An
investigation level may be used early in the process to identify areas requiring further
investigation; it may also be used as a screening tool during compliance demonstration to
identify potential problem areas. A DCGL is an example of a specific investigation level that is
based on the release criteria.
While the derivation of DCGLs is outside the scope of MARSSIM, it is important to understand
the assumptions that underlie this derivation of DCGLs to ensure consistency with the statistical
approach used to demonstrate compliance with regulatory criteria. For example, the estimated
dose, and consequently the cleanup level or DCGL, may be sensitive to assumptions regarding
the lateral extent (i.e., area) of residual radioactive material for relatively small exposure areas
(e.g., areas that do not approximate an infinite source for external radiation exposure, or areas
that would not support crop cultivation in quantities consistent with the assumed annual
consumption rates of contaminated produce for the resident farmer scenario). Other important
factors may include depth of residual radioactive material, chemical and physical form of the
source, hydrogeological considerations, and potential exposure scenarios. For more information
on environmental pathway modeling, consult the responsible regulatory agency's guidance.
MARSSIM defines two potential DCGLs based on the area of residual radioactive material:
•	Evenly distributed activity—If the residual radioactive material is evenly distributed over a
large area, MARSSIM looks at the average or median concentration of radioactive material
over the entire area. The DCGLw2 (the DCGL used when applying the Wilcoxon Rank Sum
[WRS] or Sign tests; see Section 2.5.1.2) is derived based on assuming an average
concentration over a wide area in the exposure pathway modeling.
•	Small areas of elevated concentrations of radioactive material—If the residual radioactive
material appears as small areas of elevated concentrations of radioactive material3 within a
larger area, MARSSIM also considers the results of individual measurements. The DCGLemc
(the DCGL used for the elevated measurement comparison [EMC], see Section 2.5.1.1) is
derived separately for these small areas and generally from different exposure assumptions
than those used for larger areas. A simple numerical sketch of both comparisons follows this list.
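The sketch below is a deliberately simplified, non-MARSSIM illustration of the two comparisons named above: the survey unit average against the DCGLw and each individual measurement against a DCGLemc. The data and both DCGL values are hypothetical, and the simple pass/fail logic stands in for the statistical tests and elevated measurement comparison described in Chapter 8:

# Hypothetical survey unit data (pCi/g) and hypothetical DCGLs; illustration only.
measurements = [1.2, 0.8, 2.5, 1.1, 0.9, 3.0]
dcgl_w = 2.0      # wide-area DCGL used with the nonparametric tests
dcgl_emc = 6.0    # elevated-area DCGL used for the elevated measurement comparison

average_ok = sum(measurements) / len(measurements) <= dcgl_w
emc_ok = all(m <= dcgl_emc for m in measurements)
print(average_ok, emc_ok)   # True True for these example data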
Surface soil is the top layer of soil on a site that is available for direct exposure, growing plants,
resuspension of particles for inhalation, and mixing from human disturbances. Surface soil may
also be defined as the thickness of soil that can be measured using direct measurement or
scanning techniques. Historically, this layer has often been represented as the top 15 cm
(6 inches) of soil (40 CFR 192), but it will vary depending on radionuclide, surface
characteristics, measurement method, and pathway modeling assumptions. For the purposes of
MARSSIM, surface soil may be considered to include gravel fill, waste piles, concrete, or
asphalt paving. Similarly, a building surface is defined as the thickness of building surface
material that can be measured using direct measurement or scanning techniques and will also
2	The "W" in DCGLw historically stood for Wilcoxon Rank Sum test, which is the statistical test recommended in
MARSSIM for demonstrating compliance when the radionuclide is present in background. However, as the Sign test
is also a recommended test in MARSSIM for demonstrating compliance when the radionuclide is not present in
background, the term now colloquially refers to "wide-area" or "average."
3	A small area of elevated concentration of radioactive material, or maximum point estimate of residual radioactive
material, might also be referred to as a "hot spot." This term has been purposefully omitted from MARSSIM because
the term often has different meanings based on operational or local program concerns. As a result, there may be
problems associated with defining the term and reeducating MARSSIM users in the proper use of the term.
1	vary depending on radionuclide, surface characteristics, measurement technique, and pathway
2	modeling assumptions.
3	A site is any installation, facility, or discrete, physically separate parcel of land, or any building or
4	structure or portion thereof that is being considered for survey and investigation. Area is a very
5	general term that refers to any portion of a site, up to and including the entire site.
6	Remediation includes those actions that are consistent with a permanent remedy taken instead
7	of, or in addition to, removal action in the event of a release or threatened release of a
8	hazardous substance into the environment, to prevent or minimize the release of hazardous
9	substances so that they do not migrate to cause substantial danger to present or future public
10	health or welfare or the environment.
11	Decommissioning is a term for the process of safely removing a site from service, reducing the
12	concentration of residual radioactive material through remediation to a level that permits release
13	of the property, and termination of the license or other authorization for site operation.
14	A survey unit is a physical area consisting of structures or land areas of specified size and
15	shape at a site for which a separate decision will be made as to whether the unit meets the
16	release criteria. (This decision is made as a result of the final status survey [FSS]—the survey in
17	the RSSI process used to demonstrate compliance with release criteria.) Survey units are
18	established to facilitate the survey process and the statistical analysis of survey data. The size
19	and shape of the survey unit are based on such factors as the potential for residual radioactive
20	material, the expected distribution of residual radioactive material, and any physical boundaries
21	(e.g., buildings, fences, roads, soil type, and surface water body) at the site. Survey units are
22	generally formed by grouping contiguous site areas with a similar use history and the same
23	classification of potential for residual radioactive material.
24	Measurement in MARSSIM is used interchangeably to mean (1) the act of using a detector to
25	determine the level or quantity of radioactive material on a surface or in a sample of material
26	removed from a media being evaluated, or (2) the quantity obtained by the act of measuring.
27	Direct measurements are obtained by placing a detector near the surface or media being
28	surveyed for a prescribed amount of time. An indication of the resulting concentration of
29	radioactive material is read out directly.
30	Scanning is a measurement technique performed by moving a portable radiation detector at a
31	specified speed and distance next to a surface to detect radiation.
Sampling is the process of collecting a portion of an environmental medium as
33	representative of the locally remaining medium. The collected portion, or aliquot, of the medium
34	is then analyzed to identify the radionuclide and determine the concentration. The word sample
35	may also refer to a set of individual measurements drawn from a population whose properties
36	are studied to gain information about the entire population. The latter is primarily used for
37	statistical discussions.
38	The graded approach is defined as the process where the level of application of managerial
39	controls for an item or work is determined according to the intended use of the results and the
degree of confidence needed in the quality of the results. To make the best use of resources for
decommissioning, MARSSIM places greater survey efforts on areas that have, or had, the
highest potential for residual radioactive material. The FSS uses statistical tests to support
decision-making. These statistical tests are performed using survey data from areas with
common characteristics, such as potential for residual radioactive material, which are
distinguishable from other areas with different characteristics.
Categorization is the act or result of assigning an area or survey unit to one of two
categories: impacted or non-impacted. Areas that have no reasonable potential for residual
radioactive material are categorized as non-impacted areas. These areas have no radiological
impact from site operations and are typically identified early in the cleanup process. Areas with
some reasonable potential for residual radioactive material are categorized as impacted areas.
Classification is the process by which impacted areas or survey units are separated into
Class 1, Class 2, or Class 3 areas according to radiological characteristics. Survey unit
classification determines the FSS design and the procedures used to develop this design.
Preliminary area classifications, made earlier in the MARSSIM process, are useful for planning
subsequent surveys.
The background reference area is a geographical area from which representative reference
measurements are performed for comparison with measurements performed in specific survey
units. If the radionuclide of concern is present in the background, or if the measurement system
used to determine concentration in the survey unit is not radionuclide-specific, background
measurements are compared to the survey unit measurements to determine the concentration
of residual radioactive material. The site radiological reference area is defined as an area that
has similar physical, chemical, radiological, and biological characteristics as the survey unit(s)
being investigated but has not been affected by site activities (i.e., non-impacted).
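When the radionuclide of concern appears in background, the comparison between survey unit and reference area data is ultimately made with the WRS test described in Chapter 8. The fragment below is only a rough sketch of that comparison using a generic two-sample rank test from SciPy; it does not reproduce MARSSIM's tabulated critical values or sample-size requirements, and all of the data and the DCGLw are hypothetical:

# Rough, non-MARSSIM illustration of a reference area vs. survey unit comparison.
from scipy.stats import mannwhitneyu

reference = [1.0, 1.2, 0.9, 1.1, 1.3, 0.8]     # pCi/g, background reference area
survey_unit = [1.4, 1.1, 1.6, 1.2, 1.0, 1.5]   # pCi/g, survey unit measurements
dcgl_w = 2.0                                   # pCi/g, hypothetical wide-area DCGL

# Shift the reference data up by the DCGLw, then ask whether the adjusted reference
# measurements are statistically greater than the survey unit measurements.
adjusted_reference = [r + dcgl_w for r in reference]
stat, p_value = mannwhitneyu(adjusted_reference, survey_unit, alternative="greater")
print(p_value)   # a small p-value supports concluding the survey unit meets the criterion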
The Data Life Cycle is the process of planning the survey, implementing the survey plan, and
assessing the survey results before making a decision. Survey planning uses the Data Quality
Objectives (DQO) Process, which is a series of logical steps to create a plan for the resource-
effective acquisition of environmental data, to ensure that the survey results are of sufficient
quality and quantity to support the final decision. Measurement Quality Objectives (MQOs) are
the specific analytical data requirements of the DQOs. Quality assurance (QA) is an integrated
system of management activities involving planning, implementation, assessment, reporting,
and quality improvement to ensure that a process, item, or service is of the type and quality
needed and expected by the customer. Quality control (QC) is the overall system of technical
activities that measure the attributes and performance of a process, item, or service against
defined standards to verify that they meet the stated requirements established by the customer,
operational techniques, and activities that are used to fulfill requirements for quality. QA/QC
procedures are performed during implementation of the survey plan to collect information
necessary to evaluate the survey results. Data Quality Assessment (DQA) is the scientific and
statistical evaluation of data to determine if the data are of the right type, quality, and quantity to
support their intended use.
A systematic process and structure for quality should be established to provide confidence in
the quality and quantity of data collected to support decision-making. The data used in decision-
making should be supported by a planning document that records how quality assurance and
quality control are applied to obtain the types and quality of results that are needed and
expected. There are several terms used to describe a variety of planning documents, some of
which document only a small part of the survey design process. MARSSIM uses the term
Quality Assurance Project Plan (QAPP) to describe a written document outlining the procedures
a monitoring project will use to ensure the data it collects and analyzes meets project
requirements. This term conforms to consensus guidance ANSI/ASQC E4-1994 (ASQC 1995)
and U.S. Environmental Protection Agency (EPA) guidance (EPA 2001b; EPA 2002a), and its
use is recommended to promote consistency. The use of the term QAPP in MARSSIM does not
exclude the use of other terms (e.g., Decommissioning Plan, Sampling and Analysis Plan, Field
Sampling Plan) to describe survey documentation, provided that the information included in the
documentation supports the objectives of the survey. The QAPP is a plan for obtaining data of
sufficient quality and quantity to satisfy data needs; it describes policy, organization, and
functional activities and includes DQOs and MQOs.
2.2.2 Classification Assessment
Impacted areas are divided into three classifications:
•	Class 1 Areas: Areas that have, or had before remediation, a potential for residual
radioactive material (based on site operating history) or known residual radioactive material
(based on previous radiation surveys) above the DCGLw. Examples of Class 1 areas
include—
o site areas previously subjected to remedial actions4
o locations where leaks or spills are known to have occurred
o former burial or disposal sites
o waste storage sites
o areas with residual radioactive material in discrete solid pieces of material and high
specific activity
•	Class 2 Areas: Areas that have, or had before remediation, a potential for residual
radioactive material or known residual radioactive material but are not expected to exceed
the DCGLw. To justify changing an area's classification from Class 1 to Class 2, the existing
data (from the Historical Site Assessment [HSA], scoping surveys, or characterization
surveys) should provide a high degree of confidence that no individual measurement would
exceed the DCGLw. Other justifications for this change in an area's classification may be
4 Remediated areas are identified as Class 1 areas because the remediation process often results in less than
100 percent removal of the radioactive material. The residual radioactive material that remains on the site after
remediation is often associated with relatively small areas with elevated levels of radioactive material. This results in
a non-uniform distribution of the radionuclide and a Class 1 classification. If an area is expected to have no potential
to exceed the DCGLw and was remediated to demonstrate the residual radioactive material is as low as reasonably
achievable, the remediated area might be classified as Class 2 for the final status survey.
appropriate based on the outcome of the DQO process. Examples of areas that might be
classified as Class 2 for the FSS include—
o locations where radioactive materials were present in an unsealed form (e.g., process
facilities)
o residual radioactive material potentially along transport routes
o areas downwind from stack release points
o upper walls, roof support frameworks, and ceilings of some buildings or rooms subjected
to airborne radioactive material
o areas where low concentrations of radioactive materials were handled
o areas on the perimeter of former buffer or radiological control areas
• Class 3 Areas: Any impacted areas that are not expected to contain any residual radioactive
material or are expected to contain levels of residual radioactive material at a small fraction
of the DCGLw, based on site operating history and previous radiation surveys. To justify
changing an area's classification from Class 1 or Class 2 to Class 3, the existing data (from
the HSA, scoping surveys, or characterization surveys) should provide a high degree of
confidence that there is either no residual radioactive material, or that any levels of residual
radioactive material are a small fraction of the DCGLw. Other justifications for this change in
an area's classification may be appropriate based on the outcome of the DQO process.
Examples of areas that might be classified as Class 3 include buffer zones around Class 1
or Class 2 areas, and areas with very low potential for residual radioactive material but
insufficient information to justify a non-impacted classification.
Class 1 areas have the greatest potential for residual radioactive material and, therefore,
receive the highest degree of survey effort for the FSS using a graded approach, followed by
Class 2, and then by Class 3.
Survey units should be classified as Class 1 unless there is sufficient justification for classifying
the survey unit as Class 2 or Class 3. Likewise, the classification of a survey unit should not be
reduced without sufficient justification.
Non-impacted areas do not receive any level of survey coverage, because they have no
reasonable potential for residual radioactive material. Non-impacted areas are determined on a
site-specific basis from information collected during site identification, the HSA, and scoping and
characterization surveys. Examples of areas that would be non-impacted rather than impacted
usually include administrative, residential, or other buildings that have not contained radioactive
materials except such devices as smoke detectors or exit signs with sealed radioactive sources.
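The categorization and classification logic described above can be summarized, at the cost of considerable simplification, in the short sketch below. The function, its inputs, and the "small fraction" threshold are illustrative assumptions only; actual decisions rest on the HSA, survey data, and the DQO process rather than a fixed rule:

# Illustrative, non-MARSSIM sketch of initial area categorization and classification.
def classify_area(impacted, may_exceed_dcglw, expected_fraction_of_dcglw):
    if not impacted:
        return "non-impacted"   # no reasonable potential for residual radioactive material
    if may_exceed_dcglw:
        return "Class 1"        # potential (or known) residual material above the DCGLw
    if expected_fraction_of_dcglw >= 0.1:   # "small fraction" cutoff is an assumption
        return "Class 2"        # impacted, but not expected to exceed the DCGLw
    return "Class 3"            # at most a small fraction of the DCGLw expected

print(classify_area(True, False, 0.5))    # Class 2
print(classify_area(True, False, 0.02))   # Class 3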
2.3 Making Decisions Based on Survey Results
Compliance demonstration is simply a decision as to whether a survey unit meets the release
criteria. For most sites, this decision is based on the results of one or more surveys. When
survey results are used to support a decision, the decision maker5 needs to ensure that the data
will support that decision with satisfactory confidence. Uncertainty in the survey results is
unavoidable, so the possibility of errors in decisions supported by the survey results is
unavoidable. For this reason, actions must be taken to manage the uncertainty in the survey
results so that sound and defensible decisions can be made. These actions include proper
survey planning to control known causes of uncertainty, proper application of QC procedures
during implementation of the survey plan to detect and control significant sources of error, and
careful analysis of uncertainty before the data are used to support decision-making. These
actions describe the flow of data throughout each type of survey and are combined in the Data
Life Cycle, as shown in Figure 2.1.
There are four phases of the Data Life Cycle:
•	Planning Phase: The survey design is developed and documented using the DQO process.
QA/QC procedures are developed and documented in the QAPP. The QAPP is the principal
product of the planning process, which incorporates the DQOs as it integrates all technical
and quality aspects for the life cycle of the project, including planning, implementation, and
assessment. The QAPP contains plans for survey operations and provides a specific format
for obtaining the type and quality of data needed for decision-making. The QAPP elements
are presented in the order of the Data Life Cycle and are grouped into two types of
elements: (1) project management and (2) collection and evaluation of environmental data
(ASQC 1995). The DQO process is described in Appendix D and applied in Chapters 3, 4,
and 5 of this manual. Development of the QAPP is described in Appendix D and applied
throughout the RSSI process.
•	Implementation Phase: The survey is carried out in accordance with the standard operating
procedures (SOPs) and QAPP, and it generates raw data. Chapters 6-7 and Appendix H
provide information on the selection of data collection techniques. The QA and QC
measurements, discussed in Chapters 6-7, also generate data and other important
information that will be used during the Assessment Phase.
•	Assessment Phase: The data generated during the Implementation Phase first are verified
to ensure that the SOPs specified in the QAPP were followed and that the measurement
systems performed in accordance with the criteria specified in the QAPP. Then the data are
validated to ensure that the results of data collection activities support the objectives of the
survey as documented in the QAPP or permit a determination that these objectives should
be modified. The DQA process is then applied using the validated data to determine if the
quantity and quality of the data satisfy their intended use. The DQA process is described in
Appendix D and is applied in Chapter 8.
•	Decision-making Phase: A decision is made, in coordination with the regulatory agency,
based on the conclusions drawn from the assessment process. The ultimate objective is to
make technically defensible decisions with a specified level of confidence (Chapter 8).
5 The term decision maker is used throughout this section to describe the person, team, board, or committee
responsible for the final decision regarding release of the survey unit.
[Figure 2.1: The Data Life Cycle. Planning Phase: plan for data collection using the Data Quality Objectives Process and develop a Quality Assurance Project Plan. Implementation Phase: collect data using documented measurement techniques and associated quality assurance and quality control activities. Assessment Phase: evaluate the collected data against the survey objectives using data verification, data validation, and Data Quality Assessment. Decision-Making Phase: make technically defensible decisions with a specified level of confidence in coordination with regulatory agencies.]
3	2.3.1 Planning Effective Surveys—Planning Phase
4	The first step in designing effective surveys is planning. The DQO process is a series of
5	planning steps based on the scientific method for establishing criteria for data quality and
6	developing survey designs (ASQC 1995, EPA 2006a, EPA 1987a, EPA 1987b). Planning
7	radiation surveys using the DQO process improves the survey effectiveness and efficiency, and
8	thereby the defensibility of decisions. Proper data collection planning minimizes expenditures by
9	eliminating unnecessary, duplicative, or overly precise data. Using the DQO process ensures
10	that the type, quantity, and quality of environmental data used in decision making will be
1	appropriate for the intended application. MARSSIM supports the use of the DQO process to
2	design surveys for input to both evaluation techniques (elevated measurement comparison and
3	the statistical test). The DQO process provides systematic procedures for defining the criteria
4	that the survey design should satisfy, including whether to perform scan-only surveys or scan
5	surveys in conjunction with direct measurements/sampling, what type of measurements to
6	perform, when and where to perform measurements, the level of decision errors for the survey,
7	and how many measurements to perform.
8	The level of effort associated with planning a survey is based on the complexity of the site.
9	Large and complicated sites generally receive a significant amount of effort during the planning
10	phase, while smaller sites may not require as much planning. In addition, the complexity of the
11	survey depends not only on the size of the site or survey unit, but also on the physical and
12	chemical characteristics of the site and the radioactive materials on the site. This graded
13	approach defines data quality requirements according to the type of survey being designed, the
14	risk of making a decision error based on the data collected, and the consequences of making
15	such an error. This approach provides a more effective survey design combined with a basis for
16	judging the usability of the data collected.
17	DQOs are qualitative and quantitative statements derived from the outputs of the DQO process
18	that—
19	• clarify the study objective
20	• define the most appropriate type of data to collect
21	• determine the most appropriate conditions (e.g., environmental, legal, safety) for collecting
22	the data
23	• specify limits on decision errors, which will be used as the basis for establishing the quantity
24	and quality of data needed to support the decision
25	The DQO process consists of seven steps, as shown in Figure 2.2. Each step is discussed in
26	detail in Appendix D. Although all of the outputs of the DQO process are important for
27	designing efficient surveys, there are some that are referred to throughout the manual. These
28	DQOs are mentioned briefly here and are discussed in detail throughout MARSSIM and in
29	Appendix D.
30	The minimum information (outputs) required from the DQO process to proceed with the
31	methods described in MARSSIM are—
32	• Classify and specify boundaries of survey units. This can be accomplished at any time but
33	must be finalized during FSS planning (Section 4.6, Section 4.9).
[Figure 2.2 summary: the seven steps of the DQO process.
Step 1. State the problem: define the problem that motivates the study; identify the planning team; examine budget and schedule.
Step 2. Identify the goal of the study: state how environmental data will be used in solving the problem; identify study questions and define alternative outcomes.
Step 3. Identify information inputs: identify data and information needed to answer study questions.
Step 4. Define the boundaries of the study: specify the target population and characteristics of interest; define spatial and temporal limits and scale of inference.
Step 5. Develop the analytic approach: define the parameter of interest, specify the type of inference (statistical hypothesis testing, estimation, or other analytical approaches), and develop logic for drawing conclusions from the findings.
Step 6. Specify performance or acceptance criteria: develop performance criteria for new data being collected; develop acceptance criteria for data already collected.
Step 7. Develop the detailed plan for obtaining data: select the most resource-effective sampling and analysis plan that satisfies the performance or acceptance criteria.]
Figure 2.2: The Data Quality Objectives Process
•	Determine if Scenario A or Scenario B will be used to evaluate the survey unit. Scenario A
uses a null hypothesis that assumes the concentration of radioactive material in the survey
unit exceeds the DCGLw. Scenario A is sometimes referred to as "presumed not to comply"
or "presumed not clean." Scenario B uses a null hypothesis that assumes the level of
concentration of radioactive material in the survey unit is less than or equal to the
discrimination level. Scenario B is sometimes referred to as "indistinguishable from
background" or "presumed clean" (Section 5.3.1).
•	State the null hypothesis (Ho). For Scenario A, the concentration of residual radioactive
material in the survey unit exceeds the release criteria (Section 2.5, Appendix D,
Section D.1.6). For Scenario B, the residual radioactive material in the survey unit does not
exceed the release criteria (Section 2.5, Appendix D, Section D.1.6).
•	Specify a gray region where the consequences of decision errors are considered relatively
minor. For Scenario A the upper bound of the gray region is defined as the DCGLw, and the
lower bound of the gray region (LBGR) is a site-specific variable generally chosen to be a
conservative (slightly higher) estimate of the concentration of residual radioactive material
remaining in the survey unit and adjusted to provide an acceptable value for the relative
shift. For Scenario B the LBGR is the action level (AL), and the upper bound is defined by a
discrimination limit (DL) that can be reliably distinguished from the AL (Section 5.3.3.1,
Section 5.3.4.1, Appendix D, Section D.1.7.3).
•	Define decision errors and assign their probability limits for the chosen Scenario (A or B). The probability of making a Type I decision error (α) or a Type II decision error (β) is a site-specific variable (Section 5.3.2, Appendix D, Section D.1.6).
•	Estimate the standard deviation of the measurements in the survey unit. The standard deviation (σ) is a site-specific variable, typically estimated from preliminary survey data.
•	Specify the relative shift (Δ/σ). The relative shift is equal to the width of the gray region (Δ)—which in Scenario A is equal to (DCGLw - LBGR) and for Scenario B is equal to (DL - AL)—divided by an estimate of the standard deviation (σ). The relative shift is generally designed to have a value greater than one (Section 5.3.3.2, Section 5.3.4.2). A computational sketch illustrating the relative shift and the resulting number of measurements follows this list.
•	Select a survey strategy based on the measurement requirements and site classifications to
include one of the following:
o a combination of scanning and direct measurements or sample collection and analysis
o scanning only, provided that the scanning measurement system meets the detection
capability and uncertainty requirements of a scan-only survey design (Section 5.3.6.1,
Section 5.3.9)
•	For surveys utilizing the Sign or WRS test, calculate the estimated number of measurements
(N) and specify the measurement locations required to demonstrate compliance. The
number of measurements depends on the relative shift, Type I and Type II decision error
rates, and the potential for small areas of elevated activity (Sections 5.3.3 and 5.3.4).
•	Determine the percentage of scanning coverage for survey units based on the assigned
classification of the survey unit and relative shift. Class 1 areas and survey units will have
scan coverage of 100 percent, while the scan coverage of Class 2 and Class 3 areas and
survey units will vary between 10 percent and 100 percent as a function of the relative shift
(Section 5.3.6).
•	Specify the documentation requirements for the survey, including survey planning
documentation. Documentation supporting the decision on whether or not the site complies
with the release criteria is determined on a site-specific basis (Appendix D, Section D.2).
•	Specify the required MQOs for all measurement techniques (scanning, direct measurement,
and sample analysis) specified in the QAPP. The MQOs are unique for each measurement
system (Sections 6.2-6.4).
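The relative shift and the number of measurements interact as described in the bullets above; the computational sketch referenced there follows. It is a minimal, hedged illustration in Python: the input values are hypothetical, and the normal-approximation sample-size formula shown is a commonly cited form of the Sign-test calculation, not a substitute for the tabulated guidance in Section 5.3.3.

```python
import math
from scipy.stats import norm

# Hypothetical Scenario A planning inputs (illustrative values only)
dcgl_w = 1.0       # upper bound of the gray region (DCGLw), in DCGL units
lbgr = 0.5         # lower bound of the gray region (LBGR)
sigma = 0.3        # estimated standard deviation in the survey unit
alpha = 0.05       # Type I decision error limit
beta = 0.05        # Type II decision error limit

# Relative shift: width of the gray region divided by the estimated sigma;
# values greater than about 1 keep the required number of measurements manageable.
delta = dcgl_w - lbgr
relative_shift = delta / sigma

# Sign test sample size, using a commonly cited normal-approximation form
# (MARSSIM tabulates the recommended values in Section 5.3.3; this is a sketch).
sign_p = norm.cdf(relative_shift)
n_raw = (norm.ppf(1 - alpha) + norm.ppf(1 - beta)) ** 2 / (4 * (sign_p - 0.5) ** 2)
n = math.ceil(1.2 * n_raw)  # pad the count (about 20%) for lost or unusable data

print(f"relative shift = {relative_shift:.2f}, estimated N = {n}")
```

An analogous calculation, with different constants, applies to the WRS test (Section 5.3.4).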
MQOs are an important subset of inputs into the DQO process that define performance
requirements and objectives for the measurement system. MQOs that should be considered
include the following:
•	Method uncertainty: Method uncertainty is the sum of the random and systematic
uncertainties in the measurement system. MARSSIM uses the term "measurement method
uncertainty" to refer to the predicted uncertainty of a measured value that would be
calculated if the method were applied to a hypothetical sample with a specified
concentration, typically the release limit. Reasonable values for measurement method
uncertainty can be predicted for a particular measurement technique based on typical
values for specific parameters (e.g., count time, efficiency) and previous surveys of the
areas being investigated. The MQO for the required measurement method uncertainty is
calculated based on the width of the gray region and is related to the minimum detectable
concentration (MDC).
•	Detection capability: The MDC is recommended as the MQO for defining the detection capability of the measurement system, which is the net response level that can be expected to be detected with a fixed level of confidence. To account for cases where decisions are being made based on multiple measurements, the MDC should be less than 50 percent of the DCGL in Scenario A and the DL in Scenario B (a numerical screening sketch follows this list).
•	Range: The method range is the lowest and highest concentration of an analyte that a
method can accurately detect. The expected concentration range for a radionuclide of
concern may be an important MQO. Most radiation measurement techniques are capable of
measuring over a wide range of radionuclide concentrations. However, if the expected
concentration range is large, the range should be identified as an important measurement
method performance characteristic, and an MQO should be developed. The MQO for the
acceptable range should be a conservative estimate. This will help prevent the selection of
measurement techniques that cannot accommodate the actual concentration range.
•	Specificity: Specificity is the ability of the measurement method to measure the radionuclide
of concern in the presence of interferences. To determine if specificity is an important MQO,
the planning team needs information on expected concentration ranges for the radionuclides
1	of concern and other chemical and radionuclide constituents, along with chemical and
2	physical attributes of the residual radioactive material being investigated.
3	• Ruggedness: For a project that involves field measurements that are performed in difficult or
4	variable environments, or laboratory measurements that are complex in terms of chemical
5	and physical characteristics, the measurement method's ruggedness may be an important
6	MQO. Ruggedness refers to the relative stability of the measurement technique's
7	performance when small variations in method parameter values are made. For field
8	measurements, the changes may include temperature, humidity, or atmospheric pressure.
9	For laboratory measurements, variability in sample conditions (e.g., pH) or laboratory
10	conditions may be important. To determine if ruggedness is an important measurement
11	method performance characteristic, the planning team needs detailed information on the
12	chemical and physical characteristics of the soil or surfaces being investigated and
13	operating parameters for the radiation instruments used by the measurement technique.
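Two of the MQO criteria described above lend themselves to a quick numerical screen: the detection-capability requirement that the MDC be less than 50 percent of the DCGL (Scenario A), and the combined method uncertainty. The sketch below uses hypothetical instrument values; note that it combines the random and systematic components in quadrature, which is a common metrology convention and an assumption here, since the text above simply refers to their sum.

```python
import math

# Hypothetical measurement-system characteristics (illustrative values only)
dcgl_w = 1.0          # DCGLw in the same units as the measurements
mdc = 0.4             # a priori minimum detectable concentration
u_random = 0.05       # estimated random (counting) uncertainty component
u_systematic = 0.03   # estimated systematic uncertainty component

# Detection capability MQO stated in this section: MDC less than 50% of the DCGLw
mdc_ok = mdc < 0.5 * dcgl_w

# Combined method uncertainty; quadrature combination is a common convention,
# offered here only as an illustration rather than a MARSSIM prescription.
u_combined = math.sqrt(u_random ** 2 + u_systematic ** 2)

print(f"MDC criterion met: {mdc_ok}")
print(f"combined method uncertainty: {u_combined:.3f}")
```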
Precision, bias, representativeness, sensitivity, comparability, and completeness are the Data Quality Indicators (DQIs) recommended for quantifying the amount of error for survey data
16	(EPA 2002a). These DQIs are discussed in detail in Appendix D, Section D.1.6.
17	2.3.2 Evaluating Sources of Variability in Survey Results—Implementation Phase
18	To encourage flexibility and the use of appropriate measurement techniques for a specific site,
19	MARSSIM does not provide detailed recommendations on specific techniques to be used.
20	Instead, MARSSIM encourages the decision maker to evaluate available techniques based on
21	the survey DQOs and MQOs. Information on evaluating whether these objectives have been
22	met, such as the required measurement method uncertainty and minimum detectable
23	concentration, is provided.
24	QC programs can both lower the chances of making an incorrect decision and help the data
25	user understand the level of uncertainty that surrounds the decision (EPA 2002a). As discussed
26	previously, QC data are collected and analyzed during implementation to provide an estimate of
27	the uncertainty associated with the survey results. QC measurements (scans, direct
28	measurements, and samples) are technical activities performed to measure the attributes and
29	performance of the survey. During any survey, a certain number of measurements should be
30	taken for QC purposes.
31	2.3.3 Evaluating Survey Results—Assessment Phase
32	Assessments of environmental data are used to evaluate whether the data meet the objectives
33	of the survey and are sufficient to determine compliance with the DCGL (EPA 1992a, EPA
1992b, EPA 2006a). The assessment phase of the Data Life Cycle consists of three activities:
35	data verification, data validation, and DQA.
36	• Data verification is used to ensure that the requirements stated in the planning documents
37	are implemented as prescribed (see Appendix D.4.1).
1	• Data validation is used to ensure that the results of the data collection activities support the
2	objectives of the survey as documented in the QAPP or to permit a determination that these
3	objectives should be modified (see Appendix D.4.2).
4	• DQA is the scientific and statistical evaluation of data to determine if the data are of the right
5	type, quality, and quantity to support their intended use (EPA 2006a). DQA helps complete
6	the Data Life Cycle by providing the assessment needed to determine that the planning
7	objectives are achieved (see Section 8.2). Figure 2.3 illustrates where data verification,
8	data validation, and DQA fit into the Assessment Phase of the Data Life Cycle.
9	There are five steps in the DQA process:
10	• review the DQOs and survey design
11	• conduct a preliminary data review
12	• select the statistical test(s)
13	• verify the assumptions of the statistical test(s)
14	• draw conclusions from the data
15	The strength of DQA is its design that progresses in a logical and efficient manner to promote
16	an understanding of how well the data meet the intended use. The Assessment Phase is
17	described in more detail in Appendix D. Section 2.6 discusses the flexibility of the Data Life
18	Cycle and describes the use of survey designs other than those described later in MARSSIM.
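The preliminary data review step of DQA typically begins with simple summary statistics before any statistical test is selected or its assumptions are checked. A minimal sketch, using hypothetical survey unit data:

```python
import numpy as np

# Hypothetical survey unit measurements (same units as the DCGLw)
data = np.array([0.31, 0.42, 0.28, 0.55, 0.47, 0.39, 0.61, 0.35, 0.44, 0.50])

# Basic statistics examined during the preliminary data review
summary = {
    "n": data.size,
    "mean": data.mean(),
    "median": np.median(data),
    "std dev": data.std(ddof=1),
    "min": data.min(),
    "max": data.max(),
}
for name, value in summary.items():
    print(f"{name:>8}: {value:.3g}")
```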
19	2.3.4 Uncertainty in Survey Results
20	Uncertainty in survey results arises primarily from two sources—survey design errors and
21	measurement errors:
22	• Survey design errors occur when the survey design is unable to capture the complete extent
23	of variability that exists for the radionuclide distribution in a survey unit. Because it is
24	impossible in every situation to measure the concentration of residual radioactive material at
25	every point in space and time, the survey results will be incomplete to some degree. It is
26	also impossible to know with complete certainty the concentration of residual radioactive
27	material at locations that were not measured, so the incomplete survey results give rise to
28	uncertainty. The greater the natural or inherent variation in residual radioactive material, the
29	greater the uncertainty associated with a decision based on the survey results. The
30	unanswered question is, "How well do the survey results represent the true level of residual
31	radioactive material in the survey unit?"
32	• Measurement errors create uncertainty by masking the true level of residual radioactive
33	material and may be classified as random or systematic errors. Random errors affect the
34	precision of the measurement system and show up as variations among repeated
35	measurements. Systematic errors show up as measurements that are biased to give results
that are consistently higher or lower than the true value. Measurement uncertainty is
discussed in Section 6.4.
[Figure 2.3 summary: the Assessment Phase follows the Planning Phase (Data Quality Objectives Process and Quality Assurance Project Plan development) and the Implementation Phase (field data collection and associated quality assurance/quality control activities). Routine QC/performance evaluation data from the quality assurance assessment feed data verification/validation, which verifies measurement performance and measurement procedure and reporting specifications and outputs verified/validated data. That output is the input to Data Quality Assessment, which reviews project objectives and sampling design, conducts a preliminary data review, selects the statistical method, verifies the assumptions of the method, and draws conclusions from the data, producing the project conclusions.]
Figure 2.3: The Assessment Phase of the Data Life Cycle (EPA 2006a)
MARSSIM uses the Data Life Cycle to control and estimate the uncertainty in the survey results
on which decisions are made. Adequate planning should minimize known sources of
uncertainty. QC data collected during implementation of the survey plan provide an estimate of
the uncertainty. Statistical hypothesis testing or comparison to an upper confidence limit during
the assessment phase provides a level of confidence for the final decision. There are several
levels of decisions included within each survey type. Some decisions are quantitative, based on
the numerical results of measurements performed during the survey. Other decisions are
qualitative, based on the available evidence and best professional judgment. The Data Life
Cycle can and should be applied consistently to both types of decisions.
2.3.5 Reporting Survey Results
The process of reporting survey results is an important consideration in planning the survey.
Again, the level of effort for reporting should be based on the complexity of the survey. A simple
survey with relatively few results may require a single report, while a more complicated survey
may require several reports to meet the objectives of the survey. Reporting requirements for
individual surveys should be developed during planning and clearly documented in the QAPP.
These requirements should be developed with cooperation from the people performing the
analyses (e.g., the analytical laboratory should be consulted on reporting results for samples).
The Health Physics Society and Multi-Agency Radiological Laboratory Analytical Protocols
(MARLAP) have provided several suggestions for reporting survey results (EPA 1980a, NRC
2004):
•	Report the actual result of the analysis. Do not report data as "less than the detection limit."
Even negative results and results with large uncertainties can be used in the statistical tests
to demonstrate compliance. Results reported only as "< MDC" cannot be fully used and, for
example, complicate even such simple analyses as an average. Although the nonparametric
tests described in Sections 8.3-8.4 and the upper confidence limit comparison described in
Section 8.5 can accommodate situations where up to 40 percent of the results are non-detects, it is better to report the actual results.
•	Report results using the correct units and the correct number of significant digits. The choice
of reporting results using International System units (e.g., Bq/kg, Bq/m2) or conventional
units (e.g., pCi/g, dpm/100 cm2) is made on a site-specific basis. Generally, MARSSIM
recommends that all results be reported in the same units as the DCGLs. Sometimes the
results may be more convenient to work with as counts directly from the detector. In these
cases, the user should decide what the appropriate units are for a specific survey based on
the survey objectives. MARLAP suggests that the uncertainty and MDC should be reported
to two significant figures, while environmental radiation measurements seldom warrant more
than two or three significant figures.
•	Report the measurement uncertainty for every analytical result or series of results, such as
for a measurement system. This uncertainty, while not directly used for demonstrating
compliance with the release criteria, is used for survey planning and data assessment
throughout the RSSI process. In addition, the uncertainty is used for evaluating the
performance of measurement systems using QC measurement results (as described in
Section 6.2 for scans and direct measurements, and in Section 7.2 for laboratory analysis
of samples). The uncertainty is also used for comparing individual measurements to the
action level, which is especially important in the early stages of the RSSI process (scoping,
characterization, and remedial action support surveys described in Section 2.4) when
decisions are made based on a limited number of measurements. Section 6.4 discusses
methods for calculating the measurement uncertainty.
1	• Report the MDC for the measurement system as well as the method used to calculate the
2	MDC. The MDC is an a priori estimate of the capability for detecting an activity concentration
3	with a specific measurement system (EPA 1980a). As such, this estimate is valuable for
4	planning and designing radiation surveys. Optimistic estimates of the MDC (calculated using
5	ideal conditions that may not apply to actual measurements) overestimate the ability of a
6	technique to detect residual radioactive material, especially when scanning for alpha or low-
7	energy beta radiations. This can invalidate survey results, especially for scanning surveys.
8	Using a more realistic MDC during scoping and characterization surveys, as described in
9	Section 6.3, helps in the proper classification of survey units for FSSs and minimizes the
10	possibility of designing and performing subsequent surveys because of errors in
11	classification. Estimates of the MDC that minimize potential decision errors should be used
12	for planning surveys.
13	Reporting requirements for individual surveys should be developed during planning and clearly
14	documented in the QAPP.
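The reporting suggestions above, reporting the actual value together with its uncertainty, limiting the uncertainty to two significant figures, and using units consistent with the DCGLs, can be illustrated with a short sketch. The rounding rule shown (matching the result's decimal place to that of the rounded uncertainty) is a common laboratory convention rather than a MARSSIM requirement, the values are hypothetical, and the conversion 1 pCi/g = 37 Bq/kg follows from 1 pCi = 0.037 Bq.

```python
import math

def round_result(value, uncertainty, sig_figs=2):
    """Round the uncertainty to sig_figs significant figures and the value
    to the same decimal place (a common reporting convention)."""
    if uncertainty <= 0:
        return value, uncertainty
    exponent = math.floor(math.log10(abs(uncertainty)))
    decimals = sig_figs - 1 - exponent
    return round(value, decimals), round(uncertainty, decimals)

# Hypothetical laboratory result in conventional units
value_pci_g, u_pci_g = 0.82437, 0.06121   # pCi/g

# Convert to SI if the DCGLs are expressed in Bq/kg (1 pCi/g = 37 Bq/kg)
value_bq_kg = value_pci_g * 37.0
u_bq_kg = u_pci_g * 37.0

v, u = round_result(value_bq_kg, u_bq_kg)
print(f"Result: {v} +/- {u} Bq/kg")
```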
15	2.4 Radiation Survey and Site Investigation Process
16	The Data Life Cycle discussed in Section 2.3 is the basis for the performance-based approach
17	in MARSSIM. The RSSI process is a series of surveys designed to demonstrate compliance
18	with dose- or risk-based criteria for sites with residual radioactive material. The size, complexity,
19	and amount of existing information on the site will determine how many of the surveys in the
20	series will be necessary.
21	There are six principal steps in the RSSI process:
22	• site identification
23	• HSA
24	• scoping survey
25	• characterization survey
26	• remedial action support survey
27	• FSS
28	Table 2.1 provides a simplified overview of the principal steps in the RSSI process and how the
29	Data Life Cycle can be used in an iterative fashion within the process. Each of these steps is
briefly described in Sections 2.4.1-2.4.6 and described in more detail in Chapter 5. In
31	addition, there is a brief description of regulatory agency confirmation and verification
32	(Section 2.4.7). Because MARSSIM focuses on demonstrating compliance with release criteria,
33	specifically by using an FSS, some of these surveys have additional objectives that are not fully
34	discussed in MARSSIM (e.g., health and safety of workers, supporting selection of values for
35	exposure pathway model parameters).
Figure 2.4 illustrates the RSSI process in terms of area classification and lists the major
decision to be made for each type of survey. The flowchart demonstrates one method for
quickly estimating the survey unit classification early in the MARSSIM process based on limited
information. This figure is a useful tool for visualizing the classification process, but there are
site-specific characteristics that may cause variation from this scheme. This illustration is not
designed to comprehensively consider every possibility that may occur at individual survey
units.
The flowcharts in Figures 2.5-2.8 present the principal steps and decisions in the site
investigation process and show the relationship of the survey types to the overall assessment
process. As shown in these figures, there are several sequential steps in the site investigation
process and each step builds on information provided by its predecessor. Properly applying
each sequential step in the RSSI process should provide a high degree of assurance that the
release criteria have not been exceeded.
2.4.1	Site Identification
Often, sites where radioactive material is known or suspected to have been used or stored are
readily identified before decommissioning or cleanup. Any facility preparing to terminate an NRC
or agreement state license would be identified as a site. Formerly terminated NRC licenses may
also become sites for the EPA Superfund Program. Portions of military bases or
U.S. Department of Energy facilities may be identified as sites based on records of authorization
to possess or handle radioactive materials. Where records are incomplete, site identification can
be more difficult. In addition, information obtained during the performance of survey activities
may identify additional potential radiation sites related to the site being investigated. More
detailed information on site identification is provided in Section 3.4.
2.4.2	Historical Site Assessment
The primary purpose of the HSA is to collect existing information concerning the site and its
surroundings.
The primary objectives of the HSA are to—
•	Identify potential sources of residual radioactive material.
•	Determine whether sites pose an imminent threat to human health and the environment.
•	Differentiate impacted from non-impacted areas.
•	Provide input to scoping and characterization survey designs.
•	Provide an assessment of the likelihood of migration of radioactive material.
•	Identify additional potential sites containing radioactive material related to the site being investigated.
[Figure 2.4 summary: a flowchart relating the Historical Site Assessment, scoping survey, characterization survey, and remedial action support survey to the projected final status survey classification, which initially assumes Class 1. An area may be shown to be non-impacted (no survey required) or to warrant a Class 3, Class 2, or Class 1 final status survey, depending on whether the area potentially or actually contains residual radioactive material, whether the probability of exceeding the DCGLw or the DCGLemc is small, and whether sufficient information exists to support classification as Class 2.]
Figure 2.4: Radiation Survey and Site Investigation Process in Terms of Area Classification
[Figure 2.5 summary: a flowchart of the HSA portion of the process: identify the site (Section 3.3), design the HSA using the DQO process (Section 3.2), perform the HSA (Sections 3.4, 3.5, and 3.6), then validate data and assess data quality, reassessing the DQOs if they are not satisfied. The six survey objectives shown match those listed in Section 2.4.2. Decision points address whether the site poses an immediate risk to human health and the environment (which may lead to referral to the appropriate regulatory authority) and whether the site possibly contains residual radioactive material in excess of natural background or fallout levels. Findings supporting a non-impacted classification are documented and may support a decision to release the area (Section 3.8); otherwise the HSA findings are documented and the process continues in Figure 2.6.]
Figure 2.5: The Historical Site Assessment Portion of the Radiation Survey and Site Investigation Process
[Figure 2.6 summary: a flowchart of the scoping survey portion of the process: design the scoping survey plan using the DQO process, perform the scoping survey, validate data and assess data quality, and reassess the DQOs if they are not satisfied. The survey objectives shown are to perform a preliminary hazard assessment, support classification of all or part of the site as a Class 3 area, evaluate whether the survey plan can be optimized for use in the characterization or final status survey, and provide input to the characterization survey design. A decision point asks whether there is sufficient information to support classification as Class 3; findings supporting a Class 3 classification are documented, and the process continues in the figures that follow.]
Figure 2.6: The Scoping Survey Portion of the Radiation Survey and Site Investigation Process
[Figure 2.7 summary: a flowchart of the characterization and remedial action support survey portion of the process: classify areas as Class 1, Class 2, or Class 3; design the characterization survey plan using the DQO process; perform the characterization survey; validate data and assess data quality; and reassess the DQOs if they are not satisfied. The survey objectives shown are to determine the nature and extent of the residual radioactive material, evaluate remedial alternatives and technologies, evaluate whether the survey plan can be used as the final status survey, and provide input to the final status survey design. If remediation is required, the remedial alternative and site-specific DCGLs are determined, the area is remediated, and a remedial action support survey is performed; if that survey indicates remediation is not complete, the remedial alternative and site-specific DCGLs may be reassessed. Survey units that fail to demonstrate compliance in the final status survey (Figure 2.8) re-enter the process at the marked point.]
Figure 2.7: The Characterization and Remedial Action Support Survey Portion of the Radiation Survey and Site Investigation Process
[Figure 2.8 summary: a flowchart of the final status survey portion of the process: design the FSS plan using the DQO process; perform the FSS for Class 1, Class 2, and Class 3 survey units; validate data and assess data quality; and perform additional surveys if the DQOs are not satisfied. The survey objectives shown are to select or verify the survey unit classification, demonstrate that the potential dose or risk from residual radioactive material is below the release criteria for each survey unit, and demonstrate that the potential dose from residual elevated areas is below the release criteria for each survey unit. Decision points ask whether the FSS results show residual radioactive material less than the DCGLs and whether additional remediation is required (connecting with the remedial action support survey portion of the process in Figure 2.7); the results are documented in the final status survey report.]
Figure 2.8: The Final Status Survey Portion of the Radiation Survey and Site Investigation Process
1	The HSA typically consists of three phases: identification of a candidate site, preliminary
2	investigation of the facility or site, and site visits or inspections. Information collected during the
3	HSA is then used to evaluate the site.
4	2.4.3 Scoping Survey
5	If the data collected during the HSA indicate that an area is impacted, a scoping survey may be
6	performed. Scoping surveys provide site-specific information based on limited measurements.
7	The primary objectives of a scoping survey are to—
8	• Perform a preliminary hazard assessment.
9	• Support classification of all or part of the site as a Class 3 area, if appropriate.
10	• Evaluate whether the survey plan can be optimized for use in the characterization survey or
11	FSS.
12	• Provide data to complete the site prioritization scoring process (Comprehensive
13	Environmental Response, Compensation, and Liability Act [CERCLA] and Resource
14	Conservation and Recovery Act [RCRA] sites only).
15	• Provide input to the characterization survey design, if necessary.
16	Table 2.1 provides an overview of how the Data Life Cycle (Plan, Implement, Assess, and
17	Decide) can be used to support each of the steps in the RSSI process up through the FSS.
18	Table 2.1: The Data Life Cycle6 used to Support the Radiation Survey and Site
19	Investigation Process
RSSI Process | Data Life Cycle | MARSSIM Methodology
Site Identification | — | Provides information on identifying potential radiation sites (Section 3.3)
Historical Site Assessment | Historical Site Assessment Data Life Cycle | Provides information on collecting and assessing existing site data (Sections 3.4-3.9) and potential sources of information (Appendix F)
Scoping Survey | Scoping Data Life Cycle | Discusses the purpose and general approach for performing scoping surveys, especially as sources of information when planning final status surveys (Section 5.2.1)
6 The steps of the Data Life Cycle can be found in Figure 2.1. The DQO process for each of the steps can be found
in Figure 2.2.
Table 2.1 (continued): RSSI Process | Data Life Cycle | MARSSIM Methodology
Characterization Survey | Characterization Data Life Cycle | Discusses the purpose and general approach for performing characterization surveys, especially as sources of information when planning final status surveys (Section 5.2.2)
Remedial Action Support Survey | Remedial Action Data Life Cycle | Discusses the purpose and general approach for performing remedial action support surveys, especially as sources of information when planning final status surveys (Section 5.2.3)
Final Status Survey | Final Status Data Life Cycle | Provides detailed information for planning final status surveys (Chapter 4, Section 5.3), selecting measurement techniques (Chapter 6, Chapter 7, Appendix H), and assessing the data collected during final status surveys (Chapter 8, Appendix D)
1	Scoping surveys can be conducted after the HSA is completed and typically consist of judgment
2	measurements based on the HSA data. If the results of the HSA indicate that an area is Class 3
3	and no residual radioactive material is found during a scoping survey, the area may be
4	classified as Class 3, and a Class 3 FSS is performed. If the scoping survey locates residual
5	radioactive material, the area may be considered as Class 1 (or Class 2) for the FSS, and a
6	characterization survey is typically performed. Sufficient information should be collected to
7	identify situations that require immediate radiological attention. For sites where the CERCLA
8	requirements are applicable, the scoping survey should collect sufficient data to complete the
9	Hazard Ranking System (HRS) scoring process. For sites where the RCRA requirements are
10	applicable, the scoping survey should collect sufficient data to complete the National Corrective
11	Action Prioritization System (NCAPS) scoring process. Sites that meet the National Contingency
12	Plan (NCP) criteria for a removal should be referred to the Superfund removal program (EPA
13	1996c). A comparison of the MARSSIM approach to CERCLA and RCRA requirements is
14	provided in Appendix F.
15	2.4.4 Characterization Survey
16	If the results of the HSA and scoping survey indicate that an area could be classified as Class
17	1 or Class 2 for the FSS, a characterization survey may be warranted. The characterization
18	survey is planned based on the HSA and scoping survey results. This type of survey typically is
19	a detailed radiological environmental characterization of the area.
The primary objectives of a characterization survey are as follows:
21	• Determine the nature and extent of the residual radioactive material.
22	• Collect data to support evaluation of remedial alternatives and technologies.
1	• Support a hazard assessment of the potential dose and risk to workers or the public during
2	remediation.
3	• Evaluate whether the survey plan can be optimized for use in the FSS.
4	• Support remedial investigation/feasibility study requirements (CERCLA sites only) or facility
5	investigation/corrective measures study requirements (RCRA sites only).
6	• Provide input to the FSS design.
7	The characterization survey can be the most comprehensive of all the survey types and typically
8	generates the most data. This can include preparing a reference grid, taking systematic or
9	judgment measurements, and performing surveys of different media (e.g., surface soils, interior
10	and exterior surfaces of buildings). The decision as to which media will be surveyed is site-
11	specific and will be addressed throughout the RSSI process.
12	2.4.5 Remedial Action Support Survey
13	If an area is adequately characterized and has concentrations of residual radioactive material
14	above the DCGLs, a remediation plan should be prepared. A remedial action support survey is
15	performed while remediation is being conducted and guides the remediation in a real-time
16	mode.
17	Remedial action support surveys are conducted to—
18	• support remediation activities
19	• determine when a site or survey unit is ready for the FSS
20	• provide updated estimates of site-specific parameters used for planning the FSS
21	This manual does not provide information on the routine operational surveys used to support
22	remediation activities. The determination that a survey unit is ready for an FSS following
23	remediation is an important step in the RSSI process. In addition, remedial activities result in
24	changes to the distribution of residual radioactive material within the survey unit. For most
25	survey units, the site-specific parameters used during FSS planning (e.g., variability in the
26	radionuclide concentration, probability of small areas of elevated activity) will need to be re-
27	established following remediation. Obtaining updated values for these critical parameters should
28	be considered when planning a remedial action support survey.
29	2.4.6 Final Status Survey
30	The FSS is used to demonstrate compliance with release criteria. This type of survey is the
31	major focus of this manual.
The primary objectives of the FSS are as follows:
33	• Verify that survey unit classification is correct.
1	• Demonstrate that the total potential dose or risk from all residual radioactive material in each
2	survey unit is below the release criteria.
3	• Demonstrate that the potential dose or risk from any small areas of elevated concentration
4	of radioactive material is below the release criteria for each survey unit, if necessary.
5	The FSS provides data to demonstrate that all radiological parameters satisfy the established
6	guideline values and conditions. Data from other surveys conducted during the RSSI process—
7	such as scoping, characterization, and remedial action support surveys—can provide valuable
8	information for planning an FSS, provided they are of sufficient quality.
9	Professional judgment in sampling is often used for locating and characterizing the extent of
10	residual radioactive material at a site. However, the MARSSIM focus is on planning the FSS,
11	which utilizes a more systematic approach to sampling. Systematic sampling is based on rules
12	that endeavor to achieve the representativeness in sampling consistent with the application of
13	statistical tests.
14	2.4.7 Regulatory Agency Confirmation and Verification Survey
15	The regulatory agency responsible for the site often confirms whether the site may be released.
16	Terms for this process can include confirmatory surveys or independent verification. This
17	confirmation may be accomplished by the agency or an impartial party either as an ongoing
18	activity during site remediation or after remediation and the FSS has been completed. Although
19	some actual measurements may be performed, much of the work required for confirmation and
20	verification will involve evaluation and review of documentation and data from survey activities,
21	though the evaluation may include site visits to observe survey and measurement procedures or
22	split-sample analyses by the regulatory agency's laboratory. Therefore, accounting for
23	confirmation and verification activities during the planning stages is important to each type of
24	survey. In some cases, post-remedial sampling and analysis may be performed by an impartial
25	party. The review of survey results should include verifying that the DQOs and MQOs are met,
26	reviewing the analytical data used to demonstrate compliance, and verifying that the statistical
27	test results support the decision to release the site.
28	2.5 Demonstrating Compliance with Dose- or Risk-Based Criteria
29	MARSSIM presents a process for demonstrating compliance with dose- or risk-based criteria.
30	The RSSI process provides flexibility in planning and performing surveys based on site-specific
31	considerations. Dose- or risk-based criteria usually allow one to account for radionuclide and
32	site-specific differences.
33	The FSS is designed to demonstrate compliance with the release criteria. The earlier surveys in
34	the RSSI process are performed to support decisions and assumptions used in the design of the
35	FSS. These preliminary surveys (e.g., scoping, characterization) may have other objectives in
36	addition to compliance demonstration that need to be considered during survey planning that
37	are not fully discussed in this manual. For this reason, MARSSIM focuses on FSS design. To
38	allow maximum flexibility in the survey design, MARSSIM provides information on designing a
39	survey using the RSSI process. This allows users with few resources available for planning to
40	develop an acceptable survey design. The rationale for the development of the information in
MARSSIM is presented in the following sections. Users with available planning resources are
encouraged to investigate alternate survey designs for site-specific applications using the
information provided in Section 2.6.
2.5.1 The Decision to Use Statistical Tests
The objective of compliance demonstration is to provide an acceptable level of confidence that
the release criteria are not exceeded. As previously stated, 100 percent confidence in a decision
cannot be proven because the data always contain some uncertainty. The use of statistical
methods is necessary to provide a quantitative estimate of the probability that average
concentration of radioactive material at a particular site results in a dose or risk above the
release criteria. Statistical methods provide for specifying (controlling) the probability of making
decision errors and for extrapolating from a set of measurements to the entire site in a
scientifically valid fashion (EPA 1994a).
Clearly stating the null hypothesis is necessary before statistical hypothesis testing can be
performed. MARSSIM provides the option to establish the null hypothesis under either
Scenario A or Scenario B. The Scenario A null hypothesis in MARSSIM is the concentration of
residual radioactive material in the survey unit exceeds the release criteria. This statement
directly addresses the issue of compliance demonstration for the regulator and places the
burden of proof for demonstrating compliance on the site owner or responsible party. The
Scenario B null hypothesis in MARSSIM is the concentration of residual radioactive material in
the survey unit does not exceed the release criteria. This statement also addresses the issue of
compliance demonstration for the regulator; however, it places the burden of proof for
demonstrating a lack of compliance on the regulator.
In Scenario B, the burden of proof is no longer on the individuals designing the survey and thus
should be used with caution and only in those situations where Scenario A is not an effective
alternative and regulators have agreed on the use of Scenario B. Regardless of the scenario
selected, the probability of rejecting the null hypothesis (i.e., the statistical power) will depend on
the variability in the survey unit and the tolerable Type II error probability (i.e., β). Under Scenario A, this type of decision error can result in deciding that a survey unit does not meet the release criteria when it actually does. However, under Scenario B, this type of decision error can result in deciding that a survey unit does meet the release criteria when it actually does not. For this reason, the value of β under Scenario B should be chosen carefully and in consultation with regulatory authorities.
Because inadequate statistical power under Scenario B can result in a decision error that a survey unit meets the release criteria when it does not, individuals designing a MARSSIM Survey using Scenario B should make conservative assumptions for σ so that, even if the variability in the survey unit is higher than expected, the power of the resulting survey (1 − β) will still be sufficient to ensure that survey units with residual radioactive material in excess of the DCGL will be discovered at least 100(1 − β) percent of the time. To ensure adequate statistical power, a retrospective power analysis demonstrating that the regulatory agency requirements on β were met should be completed after Scenario B MARSSIM Surveys are finished. See
Chapter 8 and Appendix I for more information on performing retrospective power analyses.
The information needed to perform a statistical test is determined by the assumptions used to
develop the test. MARSSIM recommends the use of nonparametric statistical tests because
these tests use fewer assumptions and, consequently, require less information to verify these
assumptions. If a large number of measurements will be made (scan-only surveys), then
MARSSIM recommends comparison to an upper confidence limit. If the radionuclide is not part
of the natural background and radionuclide-specific measurements will be made, MARSSIM
recommends the Sign test. If the radionuclide is part of the natural background, or radionuclide-
specific measurements will not be made, MARSSIM recommends the WRS test. For
Scenario B, MARSSIM also recommends the quantile test and a retrospective power analysis.
These additional tests provide assurance that when the null hypothesis is not rejected, it is not
because there is insufficient power in the statistical tests. The retrospective power analysis can
also be useful for Scenario A in identifying the reasons why the null hypothesis was not
rejected. The tests described in MARSSIM (see Chapter 8) are relatively easy to understand
and implement. Ranked set sampling (see Appendix E) is another method for performing
statistical testing of samples and can be useful for hard-to-detect radionuclides. For the reasons
described above, Scenario A is preferred to Scenario B. Scenario B should be used instead of
Scenario A only when there is sufficient justification for its use.
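The recommendations in the preceding paragraph amount to a small decision rule, restated below as a sketch. The function and its inputs are illustrative only; the actual choice of evaluation approach is made during DQO planning and in consultation with the regulator.

```python
def recommended_evaluation(scan_only, in_background, radionuclide_specific, scenario="A"):
    """Return the evaluation approach this section recommends (a sketch only)."""
    if scan_only:
        approach = "upper confidence limit comparison (Section 8.5)"
    elif not in_background and radionuclide_specific:
        approach = "Sign test (Section 8.3)"
    else:
        approach = "WRS test (Section 8.4)"

    extras = []
    if scenario == "B":
        extras.append("retrospective power analysis (Chapter 8, Appendix I)")
        if approach.startswith("WRS"):
            extras.append("quantile test (if the WRS null hypothesis is not rejected)")
    return approach, extras

print(recommended_evaluation(scan_only=False, in_background=True,
                             radionuclide_specific=True, scenario="B"))
```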
Site conditions can potentially affect the validity of statistical tests. The distribution of residual
radioactive material is particularly of concern. Is the residual radioactive material distributed
uniformly, or is it located in small areas? Is the residual radioactive material present in the
surface soil or on building surfaces, or does it extend into the subsurface? MARSSIM addresses
only surface soil and building surfaces for the FSS to demonstrate compliance. This represents
a situation that is expected to commonly occur at sites with residual radioactive material, and it
allows the survey design to account for the ability to directly measure surface radioactivity using
scanning techniques. Radioactive material in other media may be identified during the HSA or
preliminary surveys (i.e., scoping, characterization, remedial action support). If radioactive
material in other media (e.g., subsurface soils or building materials) is identified, methodologies
for demonstrating compliance other than those described in this manual may need to be
developed or evaluated. Situations where scanning techniques may not be effective
(e.g., volumetric or subsurface radioactive material) are discussed in existing guidance (EPA
1989a, EPA 1994a, EPA 2001a).
2.5.1.1 Small Areas of Elevated Activity
While the development of DCGLs is outside the scope of MARSSIM, this manual assumes that
DCGLs will be developed using exposure pathway models that assume a relatively uniform
distribution of radioactive material. While this represents an ideal situation, small areas of
elevated activity are a concern at many sites.
MARSSIM addresses the concern for small areas of elevated activity by using a simple
comparison to an investigation level as an alternative to statistical methods. Using the EMC is a
conservative approach, because additional investigation is required unless every measurement
is below the investigation level. For Class 1 survey units, the investigation level for this
comparison is called the DCGLemc. The DCGLemc can be higher than the DCGLw due to the
lower dose or risk resulting from a smaller area of radioactive material. In the case of multiple
areas of elevated activity in a survey unit, a posting plot (discussed in Section 8.2.2.2) or similar
representation of the distribution of activity in the survey unit can be used to determine any
pattern in the location of these areas.
If elevated levels of residual radioactive material are found in an isolated area in addition to
residual radioactive material distributed relatively uniformly across the survey unit, the unity rule
(Section 4.4) can be used to ensure that the total dose or risk meets the release criteria. If there
is more than one of these areas, a separate term should be included in the calculation for each
area of elevated activity. As an alternative to the unity rule, the dose or risk from the actual
distribution of residual radioactive material can be calculated if there is an appropriate exposure
pathway model available. Note that these considerations generally only apply to Class 1 survey
units, since areas of elevated activity should not be present in Class 2 or Class 3 survey units.
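A sketch of the unity-rule check for this situation follows. The arrangement shown, the survey unit average compared against the DCGLw plus one term for each elevated area compared against its DCGLemc, is a commonly used form and is offered only as an illustration; Section 4.4 and Chapter 8 give the governing formulation, and all values are hypothetical.

```python
# Hypothetical concentrations, all in the same units as the DCGLs
survey_unit_avg = 0.6          # average over the whole survey unit
dcgl_w = 1.0                   # DCGLw for uniformly distributed material
elevated_areas = [             # (average concentration in the area, its DCGLemc)
    (2.5, 8.0),
    (1.8, 12.0),
]

# Unity-rule style sum: the uniform component plus one term per elevated area
unity_sum = survey_unit_avg / dcgl_w
for area_avg, dcgl_emc in elevated_areas:
    unity_sum += (area_avg - survey_unit_avg) / dcgl_emc

meets_release_criteria = unity_sum <= 1.0
print(f"unity sum = {unity_sum:.2f}, meets criteria: {meets_release_criteria}")
```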
2.5.1.2 Relatively Uniform Distribution of Residual Radioactive Material
As discussed previously, the development of a DCGL starts with the assumption of a relatively
uniform distribution of residual radioactive material. Some variability in the measurements is
expected. This is primarily due to a random spatial distribution of residual radioactive material
and uncertainties in the measurement process.
With a scan-only survey, the upper confidence limit (UCL) for the mean derived from the
arithmetic mean, the variance, and the number of the measurements would represent the
parameter of interest for demonstrating compliance. Survey units where a large number of
measurements are taken (scan-only surveys) can utilize this technique. Instructions on
generating a UCL from scan-only survey data are provided in Section 8.5.
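As an illustration of the UCL idea for a scan-only data set, the sketch below computes a one-sided Student's t upper confidence limit on the mean and compares it to the DCGLw. The data and the 95 percent confidence level are hypothetical assumptions; Section 8.5 describes the procedure MARSSIM actually recommends.

```python
import numpy as np
from scipy.stats import t

# Hypothetical scan-only measurements (same units as the DCGLw)
data = np.array([0.41, 0.38, 0.52, 0.47, 0.35, 0.44, 0.40, 0.49, 0.36, 0.43,
                 0.51, 0.39, 0.45, 0.42, 0.37])
dcgl_w = 0.60
confidence = 0.95

n = data.size
mean = data.mean()
std_err = data.std(ddof=1) / np.sqrt(n)

# One-sided upper confidence limit on the mean (t-based; assumes the mean of a
# large number of measurements is approximately normally distributed)
ucl = mean + t.ppf(confidence, df=n - 1) * std_err

print(f"mean = {mean:.3f}, UCL = {ucl:.3f}, below DCGLw: {ucl < dcgl_w}")
```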
When statistical sampling is performed, whether the radionuclide of concern is present in
background helps determine the form of the statistical test. The WRS test is recommended for
comparisons of survey unit radionuclide concentrations with background. When the radionuclide
of concern is not present in background, the Sign test is recommended. Instructions on
performing these tests are provided in Section 8.3 and Section 8.4.
The WRS and Sign tests are designed to determine whether the level of residual activity
uniformly distributed throughout the survey unit exceeds the DCGLw. Because these methods
are based on ranks or number of measurements below the DCGL, the statistical tests are tests
of the median. When the underlying measurement distribution is symmetric, the mean is equal
to the median. When the underlying distribution is asymmetric, these tests are still true tests of
the median but only approximate tests of the mean. However, numerous studies show that this
is a fair approximation (Hardin and Gilbert 1993). The assumption of symmetry is less
restrictive than that of normality, because the normal distribution is itself symmetric. If, however,
the measurement distribution is skewed to the right, the mean will generally be greater than the
median. In severe cases, the mean may exceed the DCGLw while the median does not. For this
reason, MARSSIM recommends comparing the arithmetic mean of the survey unit data to the
DCGLw as a first step in the interpretation of the data (see Section 8.2.2.1). A mean survey unit
concentration less than the DCGLw is a necessary, but not sufficient, condition for a survey unit
to meet the release criteria.
The WRS test compares the distribution of a set of measurements in a survey unit to that of a
set of measurements in a reference area. In Scenario A, the test is performed by first adding the
May 2020
DRAFT FOR PUBLIC COMMENT
2-31
NUREG-1575, Revision 2
DO NOT CITE OR QUOTE

-------
Overview of the Radiation Survey and Site Investigation Process
MARSSIM
1	value of the DCGLw to each measurement in the reference area. The combined set of survey
2	unit data and adjusted reference area data are listed, or ranked, in increasing numerical order. If
3	the ranks of the adjusted reference site measurements are significantly higher than the ranks of
4	the survey unit measurements, the survey unit demonstrates compliance with the release
5	criteria.
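The WRS comparison just described can be sketched with a standard rank-sum implementation (the Mann-Whitney U test is equivalent to the WRS test). The DCGLw adjustment of the reference area data follows the description above, the one-sided alternative corresponds to Scenario A, and all data are hypothetical; Section 8.4 gives the procedure and critical values MARSSIM actually uses.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical data (same units as the DCGLw); the radionuclide is present in
# background, so a reference area is measured alongside the survey unit.
reference = np.array([0.21, 0.18, 0.25, 0.20, 0.23, 0.19, 0.22, 0.24, 0.17, 0.26])
survey_unit = np.array([0.35, 0.41, 0.29, 0.38, 0.33, 0.44, 0.31, 0.36, 0.40, 0.30])
dcgl_w = 0.50
alpha = 0.05

# Scenario A: add the DCGLw to each reference measurement, then ask whether the
# adjusted reference data rank significantly higher than the survey unit data.
adjusted_reference = reference + dcgl_w
result = mannwhitneyu(adjusted_reference, survey_unit, alternative="greater")

# Rejecting the null hypothesis (p < alpha) supports a decision that the survey
# unit complies with the release criteria.
print(f"p-value = {result.pvalue:.4f}, compliance demonstrated: {result.pvalue < alpha}")
```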
6	The quantile test is a statistical test to account for non-uniform distributions of radioactive
7	material. The quantile test was developed to detect differences between the survey unit and the
8	reference area that consist of a shift to higher values in only a fraction of the survey unit. The
9	quantile test is only performed when Scenario B is used, and only if the null hypothesis is not
10	rejected for the WRS test. Using the quantile test and the WRS test in tandem results in higher
11	statistical power to identify survey units that do not meet the release criteria than either test by
12	itself.
13	The Sign test compares the distribution of a set of measurements in a survey unit to a fixed
14	value, namely the DCGLw. First, the value for each measurement in the survey unit is
15	subtracted from the DCGLw. The resulting distribution is tested to determine if the center of the
16	distribution is greater than zero. If the adjusted distribution is significantly greater than zero, the
17	survey unit demonstrates compliance with the release criteria.
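A minimal Python sketch of this logic, using a binomial (sign) test on the count of measurements below the DCGLw, is shown below; the measurements, DCGLw, and significance level are hypothetical, and Chapter 8 describes the procedure MARSSIM actually specifies.

    import numpy as np
    from scipy import stats

    dcgl_w = 1.0
    survey = np.array([0.4, 0.7, 0.2, 0.9, 0.5, 0.3, 0.6, 0.8, 0.4, 0.5])  # hypothetical data

    diffs = dcgl_w - survey          # subtract each measurement from the DCGLw
    k = int(np.sum(diffs > 0))       # measurements below the DCGLw
    n = int(np.sum(diffs != 0))      # exact ties with the DCGLw are discarded
    # One-sided test of whether the center of the differences is above zero
    p = stats.binomtest(k, n, p=0.5, alternative="greater").pvalue
    if p < 0.05:
        print("Center of the differences exceeds zero; the survey unit passes the Sign test.")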
18	Information on performing the statistical tests and presenting graphical representations of the
19	data are provided in Chapter 8 and Appendix I.
20	2.5.2 Categorization and Classification
21	Categorizing and classifying a survey unit determine the level of survey effort based on the
22	potential for residual radioactive material. Areas are initially categorized as impacted or non-
23	impacted based on the results of the HSA. Non-impacted areas have no reasonable potential
24	for residual radioactive material and require no further evidence to demonstrate compliance with
25	the release criteria, although documentation of the decision to categorize an area as non-
26	impacted would still be needed. When planning the FSS, impacted areas may be further
27	classified into survey units. If a survey unit is given a less restrictive classification than is
28	warranted, the potential for making decision errors increases. For this reason, all impacted
29	areas are initially assumed to be Class 1. Class 1 areas require the highest level of survey effort
30	because they are known to have concentrations of residual radioactive material above the
31	DCGLw, or the residual radioactive material concentrations are unknown.
32	Information indicating the potential or known residual radioactive material concentration is less
33	than the DCGLw can be used to support re-classification of an area or survey unit as Class 2 or
34	Class 3.
35	There is a certain amount of information necessary to demonstrate compliance with the release
36	criteria. The amount of this information that is available and the level of confidence in this
information are reflected in the area classification. The initial assumption for impacted areas is
38	that none of the necessary information is available. This results in a default Class 1
39	classification.
Not all of the information available for an area will have been collected for purposes of
compliance demonstration. For example, data are collected during characterization surveys to
determine the extent, and not necessarily the amount, of residual radioactive material. This
does not mean that the data do not meet the objectives of compliance demonstration, but it may
mean that statistical tests would be of little or no value because the data have not been
collected using appropriate protocols or design. Rather than discard potentially valuable
information, MARSSIM allows for a qualitative assessment of existing data (Chapter 3). Non-
impacted areas represent areas where all of the information necessary to demonstrate
compliance is available from existing sources. For these areas, no statistical tests are
considered necessary. A classification as Class 2 or Class 3 indicates that some information describing the potential for residual radioactive material is available for that survey unit. The
data collection recommendations are modified to account for the information already available,
and the statistical tests are performed on the data collected during the FSS. The HSA
(described in Chapter 3) is used to provide an initial categorization for the area of impacted or
non-impacted based on existing data and professional judgment.
2.5.3 Design Considerations for Small Areas of Elevated Activity
Scanning surveys are typically used to identify small areas of elevated activity. The size of the
area of elevated activity that the survey is designed to detect affects the DCGLemc, which in turn
determines the ability of a scanning technique to detect these areas. Larger areas have a lower
DCGLemc and are more difficult to detect than smaller areas. Ranked set sampling (RSS), as described in Appendix E, provides an alternative approach for identifying small areas of hard-to-detect radionuclides through a combination of field measurements and samples.
The percentage of the survey unit to be covered by scans is also an important consideration.
One-hundred percent coverage means that the entire surface area of the survey unit has been
covered by the field of view of the scanning instrument. One-hundred percent scanning
coverage provides a high level of confidence that all areas with elevated concentrations of
radioactive material have been identified. One-hundred percent coverage is recommended for
all Class 1 survey units. If the available information concerning the survey unit demonstrates that areas of elevated concentrations of radioactive material are unlikely to be present, the survey unit may be classified as Class 2 or Class 3. Because there is already
some level of confidence that areas of elevated activity are not present, 100 percent coverage
may not be necessary to demonstrate compliance. Section 5.3.6 provides information on
determining the scan area for Class 2 and 3 areas. For Class 2 areas, the scan area will be
based on the width of the gray region and the uncertainty, typically somewhere between 10-
100 percent of the area, with a combination of systematic scanning and scanning in areas
judged to have the highest potential for residual radioactive material. For Class 3 areas, the
scan area is the same as Class 2 survey units, except for surveys where samples and/or direct
measurements are collected, in which case the scan area can be less than 10 percent and is
typically only in areas judged to have the highest potential for residual radioactive material. A
general recommendation when deciding which areas to scan is to always err in the direction that
minimizes the decision error. In general, scanning the entire survey unit is less expensive than
finding areas of elevated concentrations of radioactive material later in the survey process.
1	Finding such areas will lead to performing additional surveys due to survey unit
2	misclassification.
3	Another consideration for scanning surveys is the selection of scanning locations. This is not an
4	issue when 100 percent of the survey unit is scanned. Whenever less than 100 percent of the
5	survey unit is scanned, a decision must be made on what areas should be scanned. The
general recommendation is that when larger portions of the survey unit are scanned (e.g., more than 50 percent), the scans should be performed systematically along transects of the survey unit.
8	When smaller amounts of the survey unit are scanned, selecting areas based on professional
9	judgment may be more appropriate and efficient for locating areas of elevated activity
10	(e.g., drains, ducts, piping, ditches, floor joints, sumps). A combination of 100 percent scanning
11	in portions of the survey unit based on professional judgment and less coverage (e.g., 20-
12	50 percent) for all remaining areas may result in an efficient scanning survey design for some
13	survey units.
14	2.5.4 Design Considerations for Relatively Uniform Distributions of Residual
15	Radioactive Material
16	The survey design for areas with relatively uniform distributions of residual radioactive material
17	is primarily controlled by classification and the requirements of the statistical test. Again, the
18	recommendations provided for Class 1 survey units are designed to minimize the decision error.
19	Recommendations for Class 2 or Class 3 surveys may be based on existing information if the
20	level of confidence associated with this information is sufficient.
21	The first consideration is the identification of survey units. The identification of survey units may
22	be accomplished early (e.g., scoping) or late (e.g., final status) in the survey process but must
23	be accomplished before performing an FSS. Early identification of survey units can help in
24	planning and performing surveys throughout the RSSI process. Late identification of survey
25	units can prevent misconceptions and problems associated with reclassification of areas based
26	on results of subsequent surveys. The area of an individual survey unit is determined based on
27	the area classification and modeling assumptions used to develop the DCGLw. Identification of
28	survey units is discussed in Section 4.6.
29	When performing surveys for which the Sign or WRS test is used, another consideration is the
30	estimated number of measurements to demonstrate compliance using the statistical tests.
31	Sections 5.3.3-5.3.4 describe the calculations used to estimate the number of measurements.
32	These calculations use information that is usually available from planning or from preliminary
33	surveys (i.e., scoping, characterization, remedial action support).
The information needed to perform these calculations is—
• acceptable values for the probabilities of making Type I or Type II decision errors
• an estimate of the measurement variability in the survey unit (and in the reference area, when the WRS test is used)
• the width of the gray region (the difference between its upper and lower bounds)
MARSSIM recommends that site-specific values be determined for each of these parameters.
When selecting site-specific values for decision error rates and Δ, MARSSIM recommends that an initial value be selected and adjusted to develop a survey design that is appropriate for a specific site. For Scenario A, the DCGLw is chosen as the upper bound of the gray region, and
the lower bound of the gray region is typically chosen to represent a conservative (slightly
higher) estimate of the concentration of residual radioactive material remaining in the survey
unit at the beginning of the FSS. For Scenario B, the AL is chosen as the lower bound of the
gray region and the upper bound is the DL, a value that represents how much effort will be
taken to determine there is no residual radioactive material. For decision error rates, a value
that minimizes the risk of making a decision error is recommended for the initial calculations.
The number of measurements can be recalculated using different values for the LBGR, DL, or
decision error rates until an appropriate survey design is obtained.7 A prospective power curve
(see Appendix M) that considers the effects of these parameters can be very helpful in
designing a survey and considering alternative values for these parameters and is highly
recommended.
To ensure that the desired power is achieved with the statistical test and to account for
underestimated values of the measurement variability, MARSSIM recommends that the
estimated number of measurements calculated using the formulas in Sections 5.3.3-5.3.4 be increased by 20 percent to account for a reasonable amount of uncertainty in the parameters used in the calculation and to allow flexibility for some lost or unusable data. Insufficient
numbers of measurements may result in failure to achieve the DQO for power and result in
increased Type II decision errors, where survey units below the release criteria fail to
demonstrate compliance in Scenario A. Of more concern to the regulator, Type II decision
errors for Scenario B lead to the incorrect release of survey units with average or median
concentrations above the release criteria.
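A sketch of the general shape of such a calculation for the Sign test case is shown below. It uses a common normal-approximation form of the sample-size estimate and hypothetical planning values; it is not the manual's formula, and the equations and supporting tables in Sections 5.3.3-5.3.4 govern the actual design.

    import math
    from scipy.stats import norm

    # Hypothetical planning inputs chosen during DQO development
    alpha, beta = 0.05, 0.05      # Type I and Type II decision error rates
    dcgl_w, lbgr = 1.0, 0.5       # upper and lower bounds of the gray region (Scenario A)
    sigma = 0.3                   # estimated measurement variability in the survey unit

    # Probability that a measurement falls below the DCGLw when the true level is at the LBGR
    sign_p = norm.cdf((dcgl_w - lbgr) / sigma)
    n = (norm.ppf(1 - alpha) + norm.ppf(1 - beta)) ** 2 / (4 * (sign_p - 0.5) ** 2)
    n_planned = math.ceil(1.20 * n)   # apply the recommended 20 percent increase
    print(math.ceil(n), n_planned)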
Once survey units are identified and the number of measurements is determined, measurement
locations should be selected. The statistical tests assume that the measurements are taken
from random locations within the survey unit. A systematic grid with a random starting point is
used for Class 1 and Class 2 survey units. A systematic grid with a random starting point or a
random survey design is used for Class 3 survey units.
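The sketch below generates measurement locations on a random-start square grid for a hypothetical rectangular survey unit; the dimensions, number of measurements, and the simple square-grid spacing relation L = sqrt(area / n) are assumptions made for illustration only.

    import numpy as np

    rng = np.random.default_rng()
    x_len, y_len = 30.0, 20.0                  # hypothetical survey unit dimensions (m)
    n = 16                                     # number of measurements from the statistical design

    spacing = np.sqrt(x_len * y_len / n)       # square-grid spacing, L = sqrt(area / n)
    x0, y0 = rng.uniform(0, spacing, size=2)   # random starting point within the first grid cell

    xs = np.arange(x0, x_len, spacing)
    ys = np.arange(y0, y_len, spacing)
    locations = [(round(float(x), 2), round(float(y), 2)) for x in xs for y in ys]
    print(len(locations), "locations:", locations)

The number of grid points generated this way may differ slightly from the design number of measurements, so in practice the spacing is adjusted, or points are added, until the required number is met.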
2.5.5 Developing an Integrated Survey Design
To account for assumptions used to develop the DCGLw and the realistic possibility of small
areas of elevated activity, if required, an integrated survey design should be developed to
include all the design considerations. An integrated survey design combines a scanning survey
for areas of elevated activity with random measurements for relatively uniform distributions of
radioactive material. Table 2.2 presents the recommended conditions for demonstrating
compliance for an FSS based on classification.
7 Note that for some areas, an appropriate survey design may not be possible within initial survey design constraints,
such as the requirements on α and survey power (1−β), available funds, and estimated values for the average and
variability of the concentration of residual radioactive material remaining at the site. In these cases, the planning team
will have to reconsider the survey design constraints or their decision to conduct a final status survey, in consultation
with their regulator.
Table 2.2: Recommended Conditions for Demonstrating Compliance Based on Survey Unit Classification for a Final Status Survey

Survey Unit Classification | Statistical Test(s)a | Elevated Measurement Comparison | Sampling and Direct Measurementsb | Scanning
Impacted, Class 1 | Yes | Yes | Systematic | 100% Coverage
Impacted, Class 2 | Yes | Yes | Systematic | 10-100% Systematic/Judgmental
Impacted, Class 3 | Yes | Yes | Random or Systematic | 10-100%c Systematic/Judgmental
Non-Impacted | No | No | No | None

a Statistical tests may consist of the Sign test, Wilcoxon Rank Sum test, quantile test, or comparison to an upper confidence limit, depending on the survey design chosen.
b For scan-only surveys, omit the sampling and direct measurements.
c For surveys utilizing sampling and/or direct measurements, this percentage can be lower than 10% judgmental.
7	Random-start systematic grids are used for Class 1 and Class 2 survey units because there is
8	an increased probability of small areas of elevated activity. The use of a systematic grid allows
9	the decision maker to draw conclusions about the size of any potential areas of elevated activity
10	based on the area between measurement locations, while the random starting point of the grid
11	provides an unbiased method for determining measurement locations for the statistical tests.
12	The random start numbers should be furnished by an unbiased source to ensure that the survey
13	results are similarly unbiased.
14	Random measurement patterns are used for Class 3 survey units to ensure that the
15	measurements are independent and meet the requirements of the statistical tests.
16	Scan-only surveys can be used in place of direct measurements and/or sampling and analysis if
17	the scanning measurement system has an MDC that is less than 50 percent of the DCGLw and
18	meets requirements for measurement method uncertainty. When scanning is used alongside
19	sampling and/or direct measurements, it is used to identify locations within the survey unit that
20	exceed the investigation level. These locations are marked and receive additional investigations
21	to determine the concentration, area, and extent of the residual radioactive material. For Class 1
22	areas, scanning surveys are designed to detect small areas of elevated activity that are not
23	detected by the measurements using the systematic grids. For this reason, the measurement
24	locations and the number of measurements may need to be adjusted based on the sensitivity of
25	the scanning technique (see Section 5.3.5). This is also the reason for recommending
26	100 percent coverage for the scanning survey.
27	Scanning surveys in Class 2 areas are also performed primarily to find areas of elevated activity
28	not detected by the measurements using the systematic pattern. However, the measurement
29	locations are not adjusted based on sensitivity of the scanning technique, and scanning is only
performed in portions of the survey unit. The level of scanning effort should be proportional to
the potential for finding areas of elevated activity. In Class 2 survey units that have
concentrations of residual radioactive material closer to the release criteria or a higher variability
across the survey unit, a larger portion of the survey unit would be scanned; for survey units that
are closer to background or have a lower variability, scanning a smaller portion of the survey
unit may be appropriate. Class 2 survey units have a lower probability for areas of elevated
activity than Class 1 survey units, but some portions of the survey unit may have a higher
potential than others. Judgmental scanning surveys would focus on the portions of the survey
unit with the highest probability for areas of elevated activity. If the entire survey unit has an
equal probability for areas of elevated activity, or the judgmental scans do not cover at least
10 percent of the area, systematic scans along transects of the survey unit or scanning surveys
of randomly selected grid blocks are performed.
Class 3 areas have the lowest potential for areas of elevated activity. For scan-only surveys, the
scan area and methodology are the same as for Class 2, but for surveys that contain both
statistical sampling and scanning, scanning surveys should be performed in areas of highest
potential (e.g., corners, ditches, and drains) based on professional judgment. This provides a
qualitative level of confidence that no areas of elevated activity were missed by the random
measurements or that there were no errors made in the classification of the area.
2.6 Flexibility in Applying MARSSIM Approach
Section 2.5 describes an example that applies the performance-based approach presented in
Sections 2.3-2.4 to design a survey for a site with residual radioactive material in surface soils
and/or building surfaces. Obviously, this design cannot be uniformly applied at every site with
residual radioactive material, so flexibility has been provided in the form of a performance-based
approach. This approach encourages the user to develop a site-specific survey design to
account for site-specific characteristics. It is expected that most users will adopt the portions of
the MARSSIM methodology that apply to their site. In addition, changes to the overall survey
design that account for site-specific differences would be presented as part of the survey plan.
The plan should also demonstrate that the extrapolation from measurements performed at
specific locations to the entire site or survey unit is performed in a technically defensible
manner.
Where Section 2.5 describes the development of a generic survey design that will be applicable
at most radiation sites, this section describes the flexibility available within MARSSIM for developing a site-specific survey design. Alternate methods for accomplishing the demonstration
of compliance are briefly described, and references for obtaining additional information on these
alternate methods are provided.
2.6.1 Alternate Statistical Methods
MARSSIM encourages the use of statistics to provide a quantitative estimate of the probability
that the release criteria are not exceeded at a site. While it is unlikely that any site will be able to
demonstrate compliance with dose- or risk-based criteria without at least considering the use of
statistics, MARSSIM recognizes that the use of statistical tests may not always provide the most
effective method for demonstrating compliance. For example, MARSSIM recommends a simple
comparison to an investigation level to evaluate the presence of small areas of elevated activity
in place of complicated statistical tests.
MARSSIM recommends the use of nonparametric statistical tests for evaluating environmental
data. There are two reasons for this recommendation: (1) Environmental data are usually not
normally distributed, and (2) there are often a significant number of qualitative survey results
(e.g., less than MDC). Either one of these conditions means that parametric statistical tests may
not be appropriate. If one can demonstrate that the data are normally distributed and that there
are sufficient results to support a decision concerning the survey unit, parametric tests will
generally provide higher power (or require fewer measurements to support a decision
concerning the survey unit). The tests to demonstrate that the data are normally distributed
generally require more measurements than the nonparametric tests. EPA provides guidance on
selecting and performing statistical tests to demonstrate that data are normally distributed (EPA
2006a). Guidance is also available for performing parametric statistical tests (NRC 1992a, EPA
1989a, EPA 1994a, EPA 2006a).
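As one illustration of such a check (the Shapiro-Wilk test is one commonly used normality test, and the data here are hypothetical), a planner might screen the data before deciding whether a parametric test is defensible:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    data = rng.lognormal(mean=0.0, sigma=0.8, size=25)   # hypothetical survey unit data

    stat, p = stats.shapiro(data)                        # Shapiro-Wilk test of normality
    if p < 0.05:
        print("Normality rejected; a nonparametric test (Sign or WRS) is the safer choice.")
    else:
        print("No strong evidence against normality; a parametric test could be considered.")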
There are a wide variety of statistical tests designed for use in specific situations. These tests
may be preferable to the generic statistical tests recommended in MARSSIM when the
underlying assumptions for these tests can be verified. Table 2.3 lists several examples of
statistical tests that may be considered for use at individual sites or survey units. A brief
description of the tests and references for obtaining additional information on these tests are
also listed in the table. Applying these tests may require consultation with a statistician.
2.6.2 Integrating MARSSIM with Other Survey Designs
2.6.2.1 Accelerated Cleanup Models
There are a number of approaches designed to expedite site cleanups. These approaches can
save time and resources by reducing sampling, preventing duplication of effort, and reducing
inactive time periods between steps in a cleanup process. Although Section 2.4 describes the
RSSI process recommended in MARSSIM as one with seven principal steps, MARSSIM is not
intended to be a serial process that would slow site cleanups. Rather, MARSSIM supports
existing programs and encourages approaches to expedite site cleanups. Planning in
MARSSIM promotes saving time and resources.
Table 2.3: Examples of Alternate Statistical Tests

Alternate 1-Sample Tests (No Reference Area Measurements)

Student's t Test
  Probability model assumed: Normal
  Type of test: Parametric test for H0: Mean < t
  Reference: Guidance for Data Quality Assessment, EPA QA/G-9, p. 3.2-2
  Advantages: Appropriate if data appear to be normally distributed and symmetric.
  Disadvantages: Relies on a non-robust estimator for μ and σ. Sensitive to outliers and departures from normality.

t Test Applied to Logarithms
  Probability model assumed: Lognormal
  Type of test: Parametric test for H0: Median < t
  Reference: Guidance for Data Quality Assessment, EPA QA/G-9, p. 3.2-2
  Advantages: A well-known and easy-to-apply test. Useful for a quick summary of the situation if the data are skewed to the right.
  Disadvantages: Relies on a non-robust estimator for σ. Sensitive to outliers and departures from lognormality.

Minimum Variance Unbiased Estimator for Lognormal Mean
  Probability model assumed: Lognormal
  Type of test: Parametric estimates for mean and variance of the lognormal distribution
  Reference: Gilbert, Statistical Methods for Environmental Pollution Monitoring, p. 164, 1987
  Advantages: A good parametric test to use if the data are lognormal.
  Disadvantages: Inappropriate if the data are not lognormal.

Chen Test
  Probability model assumed: Skewed to the right, including lognormal
  Type of test: Parametric test for H0: Mean > 0
  Reference: Journal of the American Statistical Association (90), p. 767, 1995
  Advantages: A good parametric test to use if the data are lognormal.
  Disadvantages: Applicable only for testing H0: "survey unit is clean." Survey unit must be significantly greater than 0 to fail. Inappropriate if the data are not skewed to higher values.

Bayesian Approaches
  Probability model assumed: Varies, but a family of probability distributions must be selected
  Type of test: Parametric test for H0: Mean < L
  Reference: DeGroot, Optimal Statistical Decisions, 2005
  Advantages: Permits use of subjective "expert judgment" in the interpretation of data.
  Disadvantages: Decisions based on expert judgment may be difficult to explain and defend.

Bootstrap
  Probability model assumed: No restrictions
  Type of test: Nonparametric; uses resampling methods to estimate sampling variance
  Reference: Hall, Annals of Statistics (22), p. 2011-2030, 1994
  Advantages: Avoids assumptions concerning the type of distribution.
  Disadvantages: Computer-intensive analysis required. Accuracy of the results can be difficult to assess.

Lognormal Confidence Intervals Using Bootstrap
  Probability model assumed: Lognormal
  Type of test: Uses resampling methods to estimate a one-sided confidence interval for the lognormal mean
  Reference: Angus, The Statistician (43), p. 395, 1994
  Advantages: Nonparametric method applied within a parametric lognormal model.
  Disadvantages: Computer-intensive analysis required. Accuracy of the results can be difficult to assess.

Alternate 2-Sample Tests (Reference Area Measurements Are Required)

Student's t Test
  Probability model assumed: Symmetric, normal
  Type of test: Parametric test for difference in means, H0: μx < μy
  Reference: Guidance for Data Quality Assessment, EPA QA/G-9, p. 3.3-2
  Advantages: Easy to apply. Performance for non-normal data is acceptable.
  Disadvantages: Relies on a non-robust estimator for σ; therefore, test results are sensitive to outliers.

Mann-Whitney Test
  Probability model assumed: No restrictions
  Type of test: Nonparametric test for difference in location, H0: μx < μy
  Reference: Hollander, Nonparametric Statistical Methods, 2014
  Advantages: Equivalent to the WRS test, but used less often. Similar to resampling, because the test is based on the set of all possible differences between the two data sets.
  Disadvantages: Assumes that the only difference between the test and reference areas is a shift in location.

Kolmogorov-Smirnov
  Probability model assumed: No restrictions
  Type of test: Nonparametric test for any difference between the two distributions
  Reference: Hollander, Nonparametric Statistical Methods, 2014
  Advantages: A robust test for equality of two sample distributions against all alternatives.
  Disadvantages: May reject because variance is high, although the mean is in compliance.

Bayesian Approaches
  Probability model assumed: Varies, but a family of probability distributions must be selected
  Type of test: Parametric tests for difference in means or difference in variance
  Reference: Box and Tiao, Bayesian Inference in Statistical Analysis, 2011
  Advantages: Permits use of "expert judgment" in the interpretation of data.
  Disadvantages: Decisions based on expert judgment may be difficult to explain and defend.

2-Sample Quantile Test
  Probability model assumed: No restrictions
  Type of test: Nonparametric test for difference in shape and location
  Reference: EPA, Methods for Evaluating the Attainment of Cleanup Standards, Vol. 3, p. 7.1, 1994
  Advantages: Will detect if the survey unit distribution exceeds the reference distribution in the upper quantiles.
  Disadvantages: Applicable only for testing H0: "survey unit is clean." Survey unit must be significantly greater than 0 to fail.

Sign Test when Background is Present
  Probability model assumed: No restrictions
  Type of test: Nonparametric test for difference in location assuming uniform background
  Reference: Abelquist, Decommissioning Health Physics: A Handbook for MARSSIM Users, 2nd Edition, 2014
  Advantages: Less computationally intensive. Consistent with pre-MARSSIM survey designs.
  Disadvantages: Less powerful than the Wilcoxon Rank Sum test because of assumptions concerning background distributions.

Bootstrap and Other Resampling Methods
  Probability model assumed: No restrictions
  Type of test: Nonparametric; uses resampling methods to estimate sampling variance
  Reference: Hall, Annals of Statistics (22), p. 2011, 1994
  Advantages: Avoids assumptions concerning the type of distribution. Generates informative resampling distributions for graphing.
  Disadvantages: Computer-intensive analysis required.

Alternate to Statistical Tests

Decision Theory
  Probability model assumed: No restrictions
  Type of test: Incorporates a loss function in the decision theory approach
  Reference: DOE, Statistical and Cost-Benefit Enhancements to the DQO Process for Characterization Decisions, 1996
  Advantages: Combines elements of cost-benefit analysis and risk assessment into the planning process.
  Disadvantages: Limited experience in applying the method to compliance demonstration and decommissioning. Computer-intensive analysis required.

Abbreviations: H0 = null hypothesis; t = t-test statistic; L = Bayesian test statistic; μ = mean; σ = standard deviation.
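As one concrete illustration of the resampling entries in Table 2.3, the following sketch computes a percentile-bootstrap 95 percent upper confidence limit on the mean; the data are hypothetical, and the references cited in the table describe the methods in full.

    import numpy as np

    rng = np.random.default_rng(11)
    data = rng.lognormal(mean=-0.2, sigma=0.9, size=30)   # hypothetical skewed survey unit data

    # Resample the data with replacement and collect the bootstrap means
    boot_means = [rng.choice(data, size=data.size, replace=True).mean() for _ in range(5000)]
    ucl95 = np.percentile(boot_means, 95)                 # 95th percentile of the bootstrap means
    print("Bootstrap 95% UCL on the mean:", round(float(ucl95), 3))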
Sandia National Laboratories in New Mexico used a combination of the observational approach,
process knowledge, judgmental soil sampling, and Global Positioning System (GPS)/gamma
survey techniques to identify and remediate potential residual radioactive material during
execution of the Environmental Restoration Project there. Depleted uranium was almost
exclusively the radionuclide of concern. There were 268 individual designated test locations on
the site ranging from tens of square meters to hundreds of acres, necessitating the application
of a flexible, graded approach, as appropriate. GPS/gamma in situ surveys were particularly
valuable in cost-effectively screening large areas and identifying sub-areas that warranted more
rigorous investigation than other, non-affected areas. As-completed survey maps and data files
consisted of ArcGIS figures generated from the GPS/gamma surveys (before and after)
supplemented by analytical laboratory results which were correlated to the GPS/gamma
surveys, when both were used. Use of in situ GPS/gamma surveys complemented by ArcGIS
analytical tools enabled convenient statistical treatment of thousands of data points, making
demonstration of successful remediation much easier to present, as well as easier to
understand by the stakeholders.
At the U.S. Department of Energy's Hanford Site, the parties to the Tri-Party Agreement
negotiated a method to implement the CERCLA process in order to (1) accelerate the
assessment phase, and (2) coordinate RCRA and CERCLA requirements whenever possible,
thereby resulting in cost savings. The Hanford Past Practice Strategy was developed in 1991 to
accelerate decision-making and initiation of remediation through activities that include
maximizing the use of existing data consistent with DQOs (DOE 1991).
The Adaptive Sampling and Analysis Programs at the Environmental Science Division of
Argonne National Laboratory quantitatively fuse soft data (e.g., historical records, aerial photos,
nonintrusive geophysical data) with hard sampling results to estimate residual radioactive
material extent, measure the uncertainty associated with these estimates, determine the
benefits from collecting additional samples, and assist in siting new sample locations to
maximize the information gained (DOE 2001).
2.6.2.2 Superfund Soil Screening Guidance
The Soil Screening Guidance for Radionuclides (EPA 1996a, EPA 1996b) is a tool developed
by EPA to help standardize and accelerate the evaluation and cleanup of radioactively
contaminated soils at sites on the National Priorities List (NPL) where future residential land use
is anticipated. The guidance provides a methodology for calculating risk-based, site-specific soil
screening levels for radionuclides in soil that may be used to identify areas needing further
investigation at NPL sites. The Soil Screening Guidance assumes that there is a low probability
of residual radioactive material and does not account for small areas of elevated activity. These
assumptions correlate to a Class 3 area in MARSSIM. Because the Soil Screening Guidance is
designed as a screening tool instead of a final demonstration of compliance, the specific values
for decision error levels, the bounds of the gray region, and the number and location of
measurements are developed to support these objectives. However, the MARSSIM approach can be integrated with the Soil Screening Guidance by treating its survey design as an alternate MARSSIM survey design.
1	The Soil Screening Guidance survey design is based on collecting samples, so scan surveys
2	and direct measurements are not considered. To reduce analytical costs, the survey design
3	recommends compositing samples and provides a statistical test for demonstrating compliance.
4	If utilizing the Soil Screening Guidance in conjunction with MARSSIM, factor in the effects of the
5	compositing technique when calculating measurement method uncertainty and detection
6	capability and in the determination of areas of elevated radioactive material.
3 HISTORICAL SITE ASSESSMENT
2	3.1 Introduction
3	The Radiation Survey and Site Investigation (RSSI) process uses a graded approach that starts
4	with the Historical Site Assessment (HSA) and is later followed by other surveys that lead to the
5	final status survey (FSS). The HSA is an investigation to collect existing information describing a
6	site's complete history from the start of site activities to the present time. During the HSA
7	process, additional information is collected to categorize the site or areas within the site as
8	impacted or non-impacted and to make preliminary site classification assessments. In this
9	chapter1—
10	• Section 3.1 provides an overview of the HSA.
11	• Section 3.2 describes the Data Quality Objectives (DQO) process, utilized to establish
12	criteria for planning HSA data collection activities.
13	• Section 3.3 describes how site identification is used to establish a site or an area within a
14	site as having the potential to have residual radioactive material based on prior activities at
15	the site.
16	• Section 3.4 describes the preliminary investigation, utilized to obtain sufficient information to
17	determine an initial categorization of a site or survey unit.
18	• Section 3.5 explains how site reconnaissance is utilized to gather sufficient information to
19	support decisions on further action.
• Section 3.6 covers the evaluation of HSA data to differentiate sites that need further action from those that pose little or no threat to human health or the environment.
22	• Section 3.7 describes how to utilize the data gathered from the HSA to determine the next
23	step in the RSSI process.
24	• Section 3.8 covers the preparation of an HSA report to summarize what is known about a
25	site, assumptions and inferences made about the site, activities conducted during the HSA,
26	and all researched information.
27	• Section 3.9 provides a review of the HSA process.
28	• Figure 3.1 presents a flowchart of HSA activities and Figure 3.2 provides initial
29	categorization of the site or survey unit2 as impacted or non-impacted.
1	MARSSIM uses the word "should" as a recommendation, not as a requirement. Each recommendation in this
manual is not intended to be taken literally and applied at every site. MARSSIM's survey planning documentation will
address how to apply the process on a site-specific basis.
2	Refer to Section 4.6.2 for a discussion of survey units.
[Figure 3.1: Historical Site Assessment Process Flowchart. The flowchart proceeds from site identification (Section 3.3) through HSA design using the DQO process (Section 3.2), performance of the HSA (Sections 3.4, 3.5, and 3.6), data validation and quality assessment, and documentation of findings (Section 3.8), with decision points on whether the DQOs are satisfied, whether the site poses an immediate risk to human health and the environment (if so, refer to the appropriate regulatory authority), and whether the site possibly contains residual radioactive material in excess of natural background or fallout levels. Survey objectives listed in the figure: (1) identify potential sources of residual radioactive material; (2) determine whether the site poses a threat to human health and the environment; (3) differentiate impacted from non-impacted areas; (4) provide input to scoping and characterization survey designs; (5) provide an assessment of the likelihood of radionuclide migration; and (6) identify additional potential radiation sites related to the site being investigated.]
1	The HSA may provide information needed to calculate derived concentration guideline levels
2	(DCGLs, initially described in Section 2.2), as well as information that reveals the magnitude of
3	a site's DCGLs. This information is used for comparing historical data to potential DCGLs and
4	determining the suitability of the existing data for assessment of the site. The HSA also supports
emergency response and removal activities within the context of the U.S. Environmental
6	Protection Agency's (EPA) Superfund program, fulfills public information needs, and furnishes
7	appropriate information about the site early in the RSSI process. For a large number of sites,
8	such as currently licensed facilities, site identification and reconnaissance may not be needed.
9	For certain response activities, such as reports concerning the possible presence of radioactive
10	material, preliminary investigations may consist more of site reconnaissance and a scoping
11	survey in conjunction with collection of historical information.
12	This chapter describes three sections of an HSA: (1) identification of a candidate site
13	(Section 3.3), (2) preliminary investigation of the facility or site (Section 3.4), and (3) site
14	reconnaissance (Section 3.5). The site reconnaissance is not a scoping survey, however,
15	because the intent is to find physical conditions that may affect the investigative process and not
16	to collect measurements. The HSA is followed by an evaluation of the site based on information
17	collected during the HSA.
18	The amount of detailed information and effort needed to conduct an HSA depends on the type
19	of site, associated historical events, regulatory framework, and availability of documented
20	information. For example, information for an HSA is readily available at some facilities that
21	routinely maintain records throughout their operations, such as licensees of the U.S. Nuclear
22	Regulatory Commission (NRC) or Agreement States. At other facilities, such as Comprehensive
23	Environmental Response, Compensation, and Liability Act (CERCLA) sites, a comprehensive
24	search may be necessary to gather information for an HSA (see Appendix F). In the former
25	case, the HSA is essentially complete, and a review of the following sections will serve to
26	ensure that the information justifies the recommendation. In the latter case, the HSA process
27	has identified data gaps that will be addressed in subsequent scoping or characterization
28	surveys. In still other cases, where sealed sources or small amounts of radionuclides are
29	described by the HSA, the site may qualify for a simplified decommissioning procedure (see
30	Appendix B).
31	3.2 Data Quality Objectives
32	The Data Quality Objectives (DQO) process assists in directing the planning of data collection
33	activities performed during the HSA. Information gathered during the HSA can also support the
34	DQOs of subsequent surveys.
35	Three inputs to the HSA/DQO process are expected:
36	• identifying an individual or a list of planning team members, including the decision maker
37	(DQO Step 1, Appendix D, Section D.1)
38	• concisely describing the problem (DQO Step 1, Appendix D, Section D.1)
39	• initially classifying site and survey unit as impacted or non-impacted (DQO Step 4,
40	Appendix D, Section D.2.2)
1	Other inputs may accompany these three, and this added information may be useful in
2	supporting subsequent applications of the DQO process.
3	The planning team clarifies and defines the DQOs for a site-specific survey. This
4	multidisciplinary team of technical experts offers the greatest potential to solve the problems
5	encountered in designing a survey. Including a stakeholder group representative is an important
6	consideration when assembling this team. Once formed, the team can also consider the role of
7	public participation in this assessment and the possible surveys to follow. The number of team
8	members is directly related to the scope and complexity of the problem. For a small site or
9	simplified situations, planning may be performed by the site owner. For a large, complex facility,
10	the team may include project managers, site managers, scientists, engineers, community and
11	local government representatives, health physicists, statisticians, and regulatory agency
12	representatives. A reasonable effort should be made to include other individuals—that is,
13	specific decision makers or data users—who may use the study findings sometime in the future.
14	The role of the regulatory agency representatives is to facilitate survey planning—without direct
15	participation in survey plan development—by offering comments and information based on past
16	precedent, current guidance, and potential pitfalls. A regulatory agency representative may also
17	be included at specific sites when needed (e.g., CERCLA).
18	The planning team is generally led by a member who is referred to as the decision maker. This
19	individual is often the person with the most authority over the study and may be responsible for
20	assigning the roles and responsibilities to planning team members. Overall, the decisionmaking
21	process arrives at final decisions based on the planning team's recommendations. The problem
22	or situation description provides background information on the fundamental issue to be
23	addressed by the assessment (EPA 2006b). The following steps may be helpful during DQO
24	development:
25	• Describe the conditions or circumstances surrounding the problem or situation and the
26	reason for undertaking the survey.
27	• Describe the problem or situation as it is currently understood by briefly summarizing
28	existing information.
29	• Conduct literature searches and interviews.
30	• Examine past or ongoing studies to ensure that the problem is correctly defined. Consider
31	breaking complex problems into more manageable pieces.
32	Section 3.5 provides information on gathering existing site data and determining the usability of
33	this data.
34	The initial classification of the site involves developing a conceptual model based on the existing
35	information collected during the preliminary investigation. Conceptual models describe a site or
36	facility and its environs and present hypotheses regarding the radionuclides for known and
37	potential residual radioactive material (EPA 1987a, 1987b). The classification of the site is
38	discussed in Section 3.7.
Several steps in the DQO process may be addressed initially during the HSA. This information
or decision may be based on limited or incomplete data. As the site assessment progresses and
as decisions become more difficult, the iterative nature of the DQO process allows for re-
evaluation of preliminary decisions. This is especially important for classification of sites and
survey units where the final classification is not made until the FSS is planned.
3.3	Site Identification
A site may already be known for its prior use and presence of radioactive materials. Otherwise, potential radioactive materials sites may be identified through situations and information such as the following:
•	records of authorization to possess or handle radioactive materials, including—
o NRC or NRC Agreement State License
o U.S. Department of Energy facility records
o Naval Radioactive Materials Permit
o U.S. Air Force Master Materials License
o Army Radiation Authorization
o State Authorization for Naturally Occurring and Accelerator Produced Radioactive
Material (NARM)
•	notification to government agencies of possible releases of radioactive substances
•	citizens filing a petition under section 105(d) of the Superfund Amendments and
Reauthorization Act of 1986 (EPA 1986)
•	ground and aerial radiological surveys
•	contacts with knowledge of the site
Once identified, the name, location, and current legal owner or custodian (where available) of
the site should be recorded.
3.4	Preliminary Investigation
The preliminary investigation serves to collect readily available information concerning the
facility or site and its surroundings. The investigation is designed to obtain sufficient information
to provide initial categorization of the site or survey unit as impacted or non-impacted.
Information on the potential distribution of residual radioactive material may be used for
classifying each site or survey unit as Class 1, Class 2, or Class 3 and is useful for planning
scoping and characterization surveys.
1	Table 3.1 provides a set of questions that can be used to assist in the preliminary investigation.
2	Apart from obvious cases (e.g., NRC licensees), this table focuses on characteristics that
3	identify a previously unrecognized or known but undeclared source of potential residual
4	radioactive material. Furthermore, these questions may identify confounding factors for
5	selecting reference sites.
6	Table 3.1: Questions Useful for the Preliminary Investigation
Question
Purpose of Question
1. Was the site ever licensed for the
manufacture, use, or distribution of
radioactive materials under Agreement State
Regulations, U.S. Nuclear Regulatory
Commission licenses, or Armed Services
permits, or for the use of 91B material?
Indicates a higher probability that the area is
impacted.
2. Did the site ever have permits to dispose of
or incinerate radioactive material onsite? Is
there evidence of such activities?
Evidence of radioactive material disposal indicates
a higher probability that the area is impacted.
3. Has the site ever had deep wells for injection
or permits for such?
Indicates a higher probability that the area is
impacted.
4. Did the site ever have permits to perform
research with radiation-generating devices or
radioactive materials except medical or
dental X-ray machines?
Research that may have resulted in the release of
radioactive material indicates a higher probability
that the area is impacted.
5. As a part of the site's radioactive materials
license, were there ever any soil moisture
density gauges (americium-beryllium or
plutonium-beryllium sources) or radioactive
thickness monitoring gauges stored or
disposed of onsite?
Leak-test records of sealed sources may indicate
whether a storage area is impacted. Evidence of
radioactive material disposal indicates a higher
probability that the area is impacted.
6. Was the site used to create radioactive
material by activation?
Indicates a higher probability that the area is
impacted.
7. Were radioactive sources stored at the site?
Leak-test records of sealed sources may indicate
whether or not a storage area is impacted.
8. Is there evidence that the site was involved
in the Manhattan Project or any Manhattan
Engineering District activities (1942-1946)?
Indicates a higher probability that the area is
impacted.
9. Was the site ever involved in the support of
nuclear weapons testing (1945-1962)?
Indicates a higher probability that the area is
impacted.
10. Were any facilities on the site used as a
weapons storage area? Was weapons
maintenance ever performed at the site?
Indicates a higher probability that the area is
impacted.
11. Was there ever any decontamination,
maintenance, or onsite storage of ships,
vehicles, or planes with residual radioactive
material?
Indicates a higher probability that the area is
impacted.
12. Is there a record of any aircraft accident at or
near the site (e.g., depleted uranium
counterbalances, thorium alloys, radium
dials)?
May include other considerations, such as evidence
of radioactive material that was not recovered.
13. Are there records indicating use or storage of
radium dials and other radioactive luminous
devices as a source?
Indicates a higher probability that the area is
impacted.
14. Was there ever any radiopharmaceutical
manufacturing, storage, transfer, or disposal
onsite?
Indicates a higher probability that the area is
impacted.
15. Was animal research ever performed at the
site?
Evidence that radioactive material was used for
animal research indicates a higher probability that
the area is impacted.
16. Were naturally occurring radioactive material
(NORM) or technologically enhanced
naturally occurring radioactive material
(TENORM)—such as uranium, thorium, or
radium compounds—used in manufacturing,
research, or testing at the site, or were these
compounds stored at the site?
Indicates a higher probability that the area is
impacted or results in a potential increase in
background variability.
17. Has the site ever been involved in the
processing or production of NORM or
TENORM (e.g., radium, fertilizers,
phosphorus compounds, vanadium
compounds, refractory materials, rare earth
elements, or precious metals) or mining,
milling, processing, or production of uranium
or thorium?
Indicates a higher probability that the area is
impacted or results in a potential increase in
background variability.
18. Were coal or coal products used onsite? If
yes, did combustion of these substances
leave ash or ash residues onsite? If yes, are
runoff or production ponds onsite?
May indicate other considerations, such as a
potential increase in background variability.
19. Was there ever any onsite disposal of
material known to be high in naturally
occurring radioactive material (e.g., monazite
sands used in sandblasting)?
May indicate other considerations, such as a
potential increase in background variability.
20. Did the site contain or use pipe from the oil
and gas industries?
Indicates a higher probability that the area is
impacted or results in a potential increase in
background variability.
21. Is there any reason to expect that the site
may contain radioactive material (other than
previously listed)?
See Section 3.7.
1	Definition: 91B = "highly classified radioactive material covered under Section 91(b) of the Atomic Energy Act (AEA)
2	of 1954 associated with current nuclear weapons material, legacy nuclear weapons maintenance wastes, residuals
3	from nuclear weapons accident/incidents, some residuals from atmospheric testing of nuclear weapons, and
4	residuals from nuclear reactor operations." (George Air Force Base, n.d.)
Appendix G of this document provides a general listing and cross-reference of information sources, with a brief description of the information contained in each.
7	3.4.1 Existing Radiation Data
8	Sources of useful information for an HSA include site files; monitoring data; former site
9	evaluation data; and Federal, State, and local investigations or emergency actions. Existing site
10	data may provide specific details about the identity, concentration, and areal distribution of
11	residual radioactive material. However, these data should be examined carefully because—
12	• Previous survey and sampling efforts may not be compatible with HSA objectives or may not
13	be extensive enough to characterize the facility or site fully.
14	• Measurement protocols and standards may not be known or compatible with HSA objectives
15	(e.g., quality assurance/quality control [QA/QC] procedures, limited analysis rather than full-
16	spectrum analysis) or may not be extensive enough to characterize the facility or site fully.
17	• Conditions may have changed since the site was last sampled (i.e., substances may have
18	been released, migration may have spread the residual radioactive material, additional
19	waste disposal may have occurred, or decontamination may have been performed).
20	Existing data can be evaluated using the Data Quality Assessment (DQA) process described in
21	Appendix D. (Also see DOE 1987 and EPA 1980a, 1992a, 1992b, 2006a for additional
22	guidance on evaluating data.)
23	3.4.1.1 Licenses, Site Permits, and Authorizations
24	The facility or site radioactive materials license and supporting or associated documents are
25	potential sources of information for licensed facilities. If a license does not exist, there may be a
26	permit or other document that authorized site operations involving radioactive material. These
27	documents may specify the quantities of radioactive material authorized for use at the site, the
28	chemical and physical form of the materials, operations for which the materials are (or were)
29	used, locations of these operations at the facility or site, and total quantities of material used at
30	the site during its operating lifetime.
1	EPA and State agencies maintain files on a variety of environmental programs. These files may
2	contain permit applications and monitoring results with information on specific waste types and
3	quantities, sources, type of site operations, and operating status of the facility or site. Some of
4	these information sources are listed in Appendix G.
5	3.4.1.2 Operating Records
6	Records and other information sources useful for site evaluations include those describing
7	onsite activities, current and past radiation control procedures, and past operations involving—
8	• demolition
9	• effluent releases
10	• discharge to sewers or onsite septic systems
11	• production of residues
12	• land filling
13	• waste and material storage
14	• pipe and tank leaks
15	• spills and accidental releases
16	• release of facilities or equipment from radiological controls
17	• onsite or offsite radioactive and hazardous waste disposal
18	Some records may be or may have been classified for national security purposes, and means
19	should be established to review all pertinent records. Past operations should be summarized in
20	chronological order, along with information about permits and approvals. Estimates of the total
21	amount of radioactive material disposed of or released at the site and the physical and chemical
22	form of the radioactive material should also be included. Records on waste disposal,
23	environmental monitoring, site inspection reports, license applications, operational permits,
24	waste disposal material balance and inventory sheets, and purchase orders for radioactive
25	materials are useful for estimating total activity. Information on accidents—such as fires,
26	flooding, spills, unintentional releases, or leakage—should be collected, because they indicate
27	potential sources of residual radioactive material. Possible areas of localized radioactive
28	material should be identified.
29	Site plats or plots, blueprints, drawings, and sketches of structures are especially useful to
30	illustrate the location and layout of buildings on the site. Site photographs, aerial surveys, and
31	maps can help verify the accuracy of these drawings or indicate changes after the drawings
32	were prepared. Processing locations, waste streams to and from the site, and the presence of
33	stockpiles of raw materials and finished product should be noted on these photographs and
34	maps. Buildings or outdoor processing areas may have been modified or converted to other
1	uses or configurations. The locations of sewers, pipelines, electric lines, water lines, etc., should
2	also be identified. This information facilitates planning the site reconnaissance and subsequent
3	surveys, developing a site conceptual model, and increasing the efficiency of the survey
4	program.
5	Corporate contract files may also provide useful information during subsequent stages of the
6	RSSI process. Older facilities may not have complete operational records, especially for
obsolete or discontinued processes. Financial records may also provide information on purchasing and shipping that, in turn, helps to reconstruct a site's operational history.
9	While operating records can be useful tools during the HSA, the investigator should be careful
10	not to place too much emphasis on this type of data. These records are often incomplete and
11	lack information on substances previously not considered hazardous. Out-of-date blueprints and
12	drawings may not show modifications made during the lifetime of a facility, but they may be
13	useful to identify additional areas that should be investigated.
14	3.4.2 Contacts and Interviews
15	Conduct interviews with current or previous employees to collect first-hand information about
16	the site or facility and to verify or clarify information gathered from records. Interviews cover
17	general topics, such as radioactive waste handling procedures. Results from interviews
18	conducted early in the process are useful in guiding subsequent data collection activities.
19	Interviews scheduled late in the data gathering process can also be very useful. Questions can
20	be directed to specific areas of the investigation that need additional information or clarification.
21	Photographs and sketches can be used to assist the interviewer and allow the interviewees to
22	recall information of interest. Conducting interviews onsite where the employees performed their
23	tasks often stimulates memories and facilitates information gathering. In addition to interviewing
24	managers, engineers, and facility workers, interviews may be conducted with laborers and truck
25	drivers to obtain information from their perspective.
26	The investigator should be cautious in the use of interview information. Whenever possible,
27	anecdotal evidence should be assessed for accuracy, and results of interviews should be
28	backed up with supporting data. To ensure that specific information is confirmed and properly
29	recorded, it may be advisable to hire trained investigators and take affidavits.
30	3.5 Site Reconnaissance
31	The objective of the site reconnaissance or site visit is to gather sufficient information to support
32	a decision regarding further action. Reconnaissance activity is not a risk assessment, a scoping
33	survey, or a study of the full extent of residual radioactive material at a facility or site. The
34	reconnaissance offers an opportunity to record information concerning hazardous site
35	conditions as they apply to conducting future survey work. In this regard, information describing
36	physical hazards, structural integrity of buildings, or other conditions defines potential problems
37	that may impede future work. Site reconnaissance is most applicable to sites with less available
38	information and may not be necessary at other sites having greater amounts of records, such as
39	NRC licensed facilities.
To prepare for the site reconnaissance, begin by reviewing what is known about the facility or
site and identify data gaps. Given the site-specific conditions, consider whether a site
reconnaissance is necessary and practical. This type of effort may be deemed necessary if a
site is abandoned or not easily observed from areas of public access, or if file searches disclose
little information. These same circumstances may also make a site reconnaissance risky for
health and safety reasons—in view of the many unknowns—and may make entry difficult. This
investigative step may be less critical for active facilities whose operators grant access and
provide requested information. Remember to arrange for proper site access and prepare an
appropriate health and safety plan, if required, before initiating the site visit.
Investigators should acquire signed consent forms from the site or equipment owner to gain
access to the property to conduct the reconnaissance. Investigators are to determine if State
and Federal officials, and local individuals, should be notified of the reconnaissance schedule. If
needed, local officials should arrange for public notification. Guidance on obtaining access to
sites can be found in Entry and Continued Access Under CERCLA (EPA 1987c).
A study plan should be prepared before the site reconnaissance to anticipate every
reconnaissance activity and identify specific information to be gathered. This plan should
incorporate a survey of the site's surroundings and provide details for activities that verify or
identify the location of nearby residents, worker populations, drinking water or irrigation wells,
and foods, as well as other site environs information.
Materials and equipment for a site reconnaissance should be prepared in advance, including a camera to document site conditions; health and safety monitoring instruments, including a radiation detection meter; and a GPS receiver or extra copies of topographic maps to mark target locations, water distribution areas, and other important site features.
A logbook is critical to keeping a record of field activities and observations as they occur. MARSSIM recommends that the logbook be completed in waterproof ink, preferably by one individual. Furthermore, each page of the logbook should be signed and dated, including the time of day, after the last entry on the page. Corrections should be documented and approved. Alternatively, a computerized logbook may be used, provided adequate controls are in place to ensure appropriate quality assurance and version control. For example, logbook entries should be signed and closed out daily, so that future revisions require a date/time stamp and the signature of the individual making changes.
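Where a computerized logbook is used, the version-control expectation can be illustrated with a simple record structure in which entries are never edited in place and corrections are added as new, signed entries that reference the entry they supersede. The following Python sketch is purely illustrative; the class and field names are hypothetical and are not a MARSSIM requirement.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass(frozen=True)  # frozen: an entry cannot be modified after it is recorded
    class LogbookEntry:
        author: str            # person signing the entry
        recorded_at: datetime  # date/time stamp applied when the entry is made
        text: str
        supersedes: Optional[int] = None  # index of the entry this one corrects, if any

    log = []
    log.append(LogbookEntry("J. Smith", datetime.now(timezone.utc),
                            "Walkover of Area C completed; no physical hazards noted."))
    # A correction is recorded as a new, signed entry rather than by editing the original.
    log.append(LogbookEntry("J. Smith", datetime.now(timezone.utc),
                            "Correction to entry 0: walkover covered Area C, not Area B.",
                            supersedes=0))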
3.6 Evaluation of Historical Site Assessment Information
The main purpose of the HSA is to determine the current status of the site or facility, but the
data collected may also be used to differentiate sites that need further action from those that
pose little or no threat to human health and the environment. The information gathered during
this screening process can show the need for additional surveys or may be sufficient to
recommend a site release. Because much of the information collected during HSA activities is
qualitative, and analytical data may be of unknown quality, many decisions regarding a site are
the result of professional judgment.
Historical analytical data indicating the presence of residual radioactive material in
environmental media (surface soil, subsurface soil, surface water, ground water, air, or
buildings) can be used to support the hypothesis that radioactive material was released at the
facility or site. A decision that the site does not meet release criteria can be made regardless of
the quality of the data, its attribution to site operations, or its relationship to background levels.
In such cases, elevated results are sufficient to support the hypothesis—it is not necessary to
definitively demonstrate that a problem exists. Conversely, historical analytical data can also be
used to support the hypothesis that no release has occurred. However, these data should not
be the sole basis for this hypothesis. If historical analytical data constitute the principal evidence
for ruling out the presence of residual radioactive material, the data must be of sufficient quality
to clearly demonstrate that a problem does not exist.
In most cases, it is assumed there will be some level of process knowledge available in addition
to historical analytical data. If process knowledge suggests that no residual radioactive material
should be present, and the historical analytical data also suggests that no residual radioactive
material is present, the process knowledge provides an additional level of confidence and
supports categorizing the area as non-impacted. However, if process knowledge suggests no
residual radioactive material should be present, but the historical analytical data indicate the
presence of residual radioactive material, the area will probably be categorized as impacted.
The following sections describe the recommended information to accurately and completely
support a site release recommendation. If some of the information is not available, it should be
identified as a data need for future surveys. Data needs are determined during Step 3 of the DQO
process (Identify Inputs to the Decision) as described in Appendix D, Section D.1.3.
Section 3.6.5 provides information on professional judgment and how it may be applied to the
decisionmaking process.
3.6.1 Identify Potential Sources of Residual Radioactive Material
An efficient HSA gathers information sufficient to identify the radionuclides used at the site,
including their chemical and physical form. The first step in evaluating HSA data is to estimate
the potential for residual radioactive material from these radionuclides.
Site operations are a strong indicator of the potential for residual radioactive material
(NRC 1992a). An operation that handled only encapsulated sources is expected to have a low
potential for residual radioactive material—assuming that the integrity of the sources was not
compromised. A review of leak-test records for such sources may be adequate to demonstrate
the low probability of residual radioactive material. A chemical manufacturing process facility
would likely have residual radioactive material in piping, ductwork, and process areas, with a
potential for residual radioactive material in soil where spills, discharges, or leaks occurred.
Sites using large quantities of radioactive ores—especially those with outside waste collection
and treatment systems—are likely to have residual radioactive material on the premises. If loose
dispersible materials were stored outside or process ventilation systems were poorly controlled,
then windblown surface deposition of residual radioactive material may be possible.
Consider how long the site was operational. If enough time has elapsed since the site discontinued
operations, radionuclides with short half-lives may no longer be present in significant quantities.
1	In this case, calculations demonstrating that residual activity could not exceed the DCGL may
2	be sufficient to evaluate the potential residual radioactive material at the site. A similar
3	evaluation can be made based on knowledge of a radionuclide's chemical and physical form.
4	Such a determination relies on records of radionuclide inventories, chemical and physical forms,
5	total amounts of material and activity in waste shipments, and purchasing records to document
6	and support this decision. However, a number of radionuclides experience significant decay
7	product ingrowth, which should be considered when evaluating existing site information.
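As a simple illustration of such a calculation, the following Python sketch decays an assumed historical soil concentration over the time since operations ceased and compares the result to an assumed DCGL. All numerical values (the initial concentration, the elapsed time, and the DCGL) are hypothetical, and the calculation deliberately ignores decay product ingrowth, which would need to be addressed separately where it is significant.

    import math

    def decayed_activity(a0, half_life_y, elapsed_y):
        # Simple exponential decay; ingrowth of decay products is not modeled.
        return a0 * math.exp(-math.log(2) * elapsed_y / half_life_y)

    initial_conc = 50.0   # pCi/g, assumed historical Co-60 concentration
    half_life_y = 5.27    # y, Co-60 half-life
    elapsed_y = 30.0      # y, assumed time since operations ceased
    dcgl = 3.8            # pCi/g, assumed DCGL from site-specific dose modeling

    current = decayed_activity(initial_conc, half_life_y, elapsed_y)
    print(f"Estimated current concentration: {current:.2f} pCi/g")
    print("Could not exceed DCGL" if current < dcgl else "May exceed DCGL")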
8	3.6.2 Identify Potential Areas with Residual Radioactive Material
9	Information gathered during the HSA should be used to provide an initial categorization of the
10	site areas as impacted or non-impacted.
11	Impacted areas are either known to contain residual radioactive material based on radiological
12	surveillance or are suspected of containing it based on historical information. This includes
13	areas where—
14	• Radioactive material was used and stored.
15	• Records indicate spills, discharges, or other unusual occurrences that could result in the
16	spread of radioactive material.
• Radioactive material was buried or disposed of.
18	Areas immediately surrounding or adjacent to these locations are also considered impacted
19	because of the potential for inadvertent spread of radionuclides.
20	Non-impacted areas are those areas where there is no reasonable possibility for residual
21	radioactive material based on site history or previous survey information. The criteria used for
22	this distinction need not be as strict as those used to demonstrate final compliance. However,
23	the reasoning for categorizing an area as non-impacted should be maintained as a written
24	record.
25	All potential sources of radioactive material in impacted areas should be identified and their
26	dimensions recorded (in two or three dimensions, to the extent they can be measured or
27	estimated). Sources can be delineated and characterized through visual inspection during the
28	site reconnaissance; interviews with knowledgeable personnel; and historical information
29	concerning disposal records, waste manifests, and waste sampling data. The HSA should
30	address potential residual radioactive material from the site whether it is physically within or
31	outside of site boundaries. This approach describes the site in a larger context, but as noted in
32	Chapter 1, MARSSIM's scope concerns releasing a site and does not include areas outside a
33	site's boundaries.
34	3.6.3 Identify Potential Media with Residual Radioactive Material
35	The next step in evaluating the data gathered during the HSA is to identify media at the site with
36	a potential for containing residual radioactive material. Identification of those media that do not
1	contain residual radioactive material and those that may contain it is necessary for both
2	preliminary area classification (Section 4.4) and planning subsequent survey activities.
3	This section provides information on evaluating the likelihood for release of radioactive material
4	into the following environmental media: surface soil, subsurface soil, sediment, surface water,
5	ground water, air, and buildings. Although MARSSIM's scope is focused on surface soils and
6	building surfaces, other media will still need to be considered.
7	The evaluation will result in a finding either of suspected residual radioactive material or of no
8	suspected residual radioactive material. The finding may be based on analytical data,
9	professional judgment, or a combination of the two.
10	Subsequent sections describe the environmental media and pose questions pertinent to each
11	type. Each question is accompanied by a commentary. Carefully consider the questions within
12	the context of the site and the available data. Avoid spending excessive amounts of time on
13	particular questions, because answers to every question are unlikely to be available at each
14	site. Questions that cannot be answered based on existing data can be used to direct future
15	surveys of the site. Also, keep in mind the numerous differences in site-specific circumstances
16	and that the questions do not identify every characteristic that might apply to a specific site.
17	Additional questions or characteristics identified during a specific site assessment should be
included in the HSA report (Section 3.8; EPA 1991e).
19	3.6.3.1 Surface Soil
20	Surface soil is the top layer of soil on a site that is available for direct exposure, growing plants,
21	resuspension of particles for inhalation, and mixing from human disturbances. Surface soil may
22	also be defined as the thickness of soil that can be measured using direct measurement or
23	scanning techniques. Historically, this layer has often been represented as the top
15 centimeters (cm; 6 inches [in.]) of soil (40 CFR 192), but the thickness will vary depending on the radionuclide,
25	surface characteristics, measurement technique, and pathway modeling assumptions. For the
26	purposes of MARSSIM, surface soil may be considered to include gravel fill, waste piles,
27	concrete, or asphalt paving. For many sites where radioactive material was used, one first
28	assumes that radioactive material on the site exists on surfaces, and the evaluation is used to
29	identify areas of high and low probability of residual radioactive material (Class 1, Class 2, or
30	Class 3 areas).
31	• Were all radiation sources used at the site encapsulated sources? A site where only
32	encapsulated sources were used would be expected to have a low potential for residual
33	radioactive material. A review of the leak-test records and documentation of encapsulated
34	source location may be adequate to make a finding of no suspected residual radioactive
35	material.
36	• Were radiation sources used only in specific areas of the site? Evidence that
37	radioactive material was confined to certain areas of the site may be helpful in determining
38	which areas are impacted and which are non-impacted.
39	• Was surface soil regraded or moved elsewhere for fill or construction purposes? This
40	helps identify additional potential sites of radioactive material.
3.6.3.2 Subsurface Soil and Other Subsurface Media
Subsurface soil and other subsurface media are defined as any solid materials beneath the
surface soil layer. The purpose of these subsurface investigations is to locate and define the
lateral and vertical extent of the potential residual radioactive material in the subsurface.
Subsurface measurements can be expensive, especially for beta- or alpha-emitting
radionuclides. To effectively use project resources, subsurface investigations should be biased
(e.g., limited to known or potential areas containing subsurface radioactive material). After
identifying areas of subsurface concern, further subsurface investigations would be necessary
to delineate the lateral and vertical extent of radioactive material during the remedial
investigation and design phase. The latter would aid in planning the necessary resources (e.g., budgets, contractors, obtaining access) and in setting the schedule for the remedial action phase.
•	Is there evidence of changes in surface features? Understanding the development
history of an area can aid the investigation in identifying subsurface areas of potential
concern. Historically, industrial wastes potentially containing radioactive material were used
as fill material (e.g., to fill in old streams, wetlands, low-lying areas) or as subgrade material
(e.g., beneath buildings, basement floors). Changes in surface features over time can affect the distribution of radioactive material. Reviewing historical records can be of great benefit in identifying subsurface areas of potential concern and in supporting subsequent cost-effective and defensible graded approaches to better characterize the site. Examples of helpful
records include aerial photographs, topography maps, railroad/road maps, navigation maps,
Sanborn Fire Insurance Maps, construction photographs, postcards, and correspondence.
•	Are there areas of known or suspected residual radioactive material in surface soil?
Residual radioactive material in surface soil can migrate deeper into the soil. Surface soil
sources should be evaluated based on radionuclide mobility, soil permeability, and
infiltration rate to determine the potential for residual radioactive material in the subsurface.
Computer modeling may be helpful for evaluating these types of situations.
•	Is there a ground water plume without an identifiable source? Radioactive material in
ground water indicates that a source of residual radioactive material is present. If no source
is identified during the HSA, residual radioactive material in the subsurface is a probable
source.
•	Is there potential for enhanced mobility of radionuclides in soils? Radionuclide mobility
can be enhanced by the presence of solvents or other chemicals that affect the sorption
capacity of soil.
•	Is there evidence that the surface has been disturbed? Recent or previous excavation
activities are obvious sources of surface disturbance. Areas with developed plant life
(forested or old growth areas) may indicate that the area remained undisturbed during the
operating life of the facility. Areas where vegetation was removed during previous excavation
activity may be distinct from mature plant growth in adjacent areas. If a site is not purposely
replanted, vegetation may appear in a sequence starting with grasses that are later replaced
1	by shrubs and trees. Typically, grasslands recover within a few years, sagebrush or low
2	ground cover appears over decades, and mature forests may take centuries to develop.
3	• Is there evidence of subsurface disturbance? Non-intrusive, non-radiological
4	measurement techniques may provide evidence of subsurface disturbance. Magnetometer
5	surveys can identify buried metallic objects, and ground-penetrating radar can identify
6	subsurface anomalies such as trenches or dump sites. Techniques involving special
7	equipment are discussed in Section 6.9.
8	• Are surface structures present? Structures constructed during a site's operational history
9	may cover residual radioactive material below ground. Some consideration for residual
10	radioactive material that may exist beneath parking lots, buildings, or other onsite structures
11	may be warranted as part of the investigation. There may be underground piping, drains,
12	sewers, or tanks that caused the spread of residual radioactive material.
13	3.6.3.3 Surface Water
14	Surface waters include streams and rivers, lakes, coastal tidal waters, and oceans. Note that
15	certain ditches and intermittently flowing streams also qualify as surface water. The evaluation
16	determines whether radionuclides are likely to migrate to surface waters or their sediments.
17	Where a previous release is not suspected, the potential for future release depends on the
18	distance to surface water and the flood potential at the site. One can also consider the
19	interaction between soil and water in relation to seasonal factors, including soil cracking
20	because of freezing, thawing, and desiccation that influence the dispersal or infiltration of
21	radionuclides.
• Is surface water nearby? The proximity of residual radioactive material to local surface water is essentially determined by runoff and radionuclide migration through the soil. The definition of nearby depends on site-specific conditions and the performance period. If
25	the terrain is flat, precipitation is low, and soils are sandy, nearby may be within several
26	meters. If annual precipitation is high or occasional precipitation events are high, within
27	1,200 meters (3/4 mile) might be considered nearby.
28	• Is the waste quantity particularly large? Depending on the physical and chemical form of
29	the waste and its location, large is a relative term. A small quantity of liquid waste may be of
30	more importance (i.e., a greater risk or hazard) than a large quantity of solid waste stored in
31	watertight containers.
32	• Is the drainage area large? The drainage area includes the area of the site itself plus the
33	upgradient area that produces runoff flowing over the site. Larger drainage areas generally
34	produce more runoff and increase the potential for residual radioactive material in surface
35	water.
36	• Is precipitation heavy? If the site and surrounding area are flat, a combination of heavy
37	precipitation and low infiltration rate may cause precipitation to pool on the site. Otherwise,
38	these characteristics may contribute to high runoff rates that carry radionuclides overland to
surface water. Total annual precipitation exceeding one meter (40 in.), or a 2-year,
24-hour precipitation event exceeding 5 cm (2 in.), might be considered "heavy."
3	The amount of precipitation varies for locations across the continental United States from
4	high (e.g., approximately 200 cm/year [y; 89 in./y], Mt. Washington, New Hampshire) to low
5	values (e.g., approximately 10.7 cm/y [4.2 in./y], Las Vegas, Nevada). Certified data on
6	precipitation rates for locations throughout the United States can be obtained from the
7	National Centers for Environmental Information (https://www.ncdc.noaa.gov).
8	• Is the infiltration rate low? Infiltration rates range from very high in gravelly and sandy
9	soils to very low in fine silt and clay soils. Paved sites prevent infiltration and generate
10	runoff.
11	• Are sources of residual radioactive material poorly contained or prone to runoff?
12	Proper containment that prevents radioactive material from migrating to surface water
13	generally uses engineered structures such as dikes, berms, run-on and runoff control
14	systems, and spill collection and removal systems. Sources prone to releases via runoff
15	include leaks, spills, exposed storage piles, or intentional disposal on the ground surface.
16	Sources not prone to runoff include underground tanks, aboveground tanks, and containers
17	stored in a building.
18	• Is a runoff route well defined? A well-defined runoff route—along a gully, trench, berm,
19	wall, etc.—will more likely contribute to migration to surface water than a poorly defined
20	route. However, a poorly defined route may contribute to dispersion of radioactive material
21	to a larger area of surface soil.
22	• Has deposition of waste into surface water been observed? Indications of this type of
23	activity will appear in records from past practice at a site or from information gathered during
24	personal interviews.
25	• Is ground water discharge to surface water probable? The hydrogeology and
26	geographical information of the area around and inside the site may be sufficiently
27	documented to indicate discharge locations.
28	• Does analytical or circumstantial evidence suggest residual radioactive material in
surface water? Any suspicious condition can be considered circumstantial evidence.
31	• Is the site prone to flooding? The Federal Emergency Management Agency publishes
32	flood insurance rate maps that delineate 100-year and 500-year flood plains. Ten-year
33	floodplain maps may also be available. Generally, a site on a 500-year floodplain is not
34	considered prone to flooding.
3.6.3.4 Ground Water
36	Proper evaluation of ground water includes a general understanding of the local geology and
37	subsurface conditions. Of particular interest is descriptive information relating to subsurface
38	stratigraphy, aquifers, and ground water use.
1	• Are sources poorly contained? Proper containment—which prevents radioactive material
2	from migrating to ground water—generally uses engineered structures, such as liners, layers
3	of low permeability soil (e.g., clay), and leachate collection systems.
4	• Is the source likely to affect ground water? Underground tanks, landfills,3 surface
5	impoundments, and lagoons are examples of sources that are likely to release residual
6	radioactive material that migrates to ground water. Aboveground tanks, drummed solid
7	wastes, or sources inside buildings are less likely to contribute to residual radioactive
8	material in ground water.
9	• Is waste quantity particularly large? Depending on the physical and chemical form of the
10	waste and its location, large is a relative term. A small quantity of liquid waste may be of
11	more importance (i.e., greater risk or hazard) than a large quantity of solid waste stored in
12	watertight containers.
13	• Is precipitation heavy? If the site and surrounding area are flat, a combination of heavy
14	precipitation and low infiltration rate may cause precipitation to pool on the site. Otherwise,
15	these characteristics may contribute to high runoff rates that carry radionuclides overland to
surface water. Total annual precipitation exceeding one meter (40 in.), or a 2-year, 24-hour precipitation event exceeding 5 cm (2 in.), might be considered "heavy."
19	The amount of precipitation varies for locations across the continental United States from
20	high (e.g., approximately 200 cm/y [89 in/y] in Mt. Washington, New Hampshire) to low
21	values (e.g., approximately 10.7 cm/y [4.2 in/y] in Las Vegas, Nevada). Certified data on
22	precipitation rates for locations throughout the United States can be obtained from the
23	National Centers for Environmental Information (https://www.ncdc.noaa.gov).
24	• Is the infiltration rate high? Infiltration rates range from very high in gravelly and sandy
25	soils to very low in fine silt and clay soils. Unobstructed surface areas are potential
26	candidates for further examination to determine infiltration rates.
27	• Is the site located in an area of karst terrain? In karst terrain, ground water moves rapidly
28	through channels caused by dissolution of the rock material (usually limestone) that
29	facilitates migration of radioactive material and chemicals.
30	• Is the subsurface highly permeable? Highly permeable soils favor downward movement
31	of water that may transport radioactive materials. Well logs, local geologic literature, or
32	interviews with knowledgeable individuals may help answer this question.
33	• What is the distance from the surface to an aquifer? The shallower the source of ground
34	water, the higher the threat of residual radioactive material. It is difficult to determine
35	whether an aquifer may be a potential source of drinking water in the future (e.g., next
3 Landfills can affect the geology and hydrogeology of a site and produce heterogeneous conditions. It may be
necessary to consult an expert on landfills and the conditions they generate.
1	1,000 years). Use the shallowest aquifer below the site when determining the distance to the
2	surface.
• Are suspected radionuclides highly mobile in ground water? Mobility in ground water can be estimated from the distribution coefficient (Kd) of the radionuclide. Elements with a high Kd, like thorium (e.g., Kd = 3,200 cm3/gram [g]), are not mobile, while elements with a low Kd, like hydrogen (e.g., Kd = 0 cm3/g), are very mobile. EPA provides a compilation of Kd values. Because these values can be strongly influenced by site-specific conditions, site-specific Kd values may need to be evaluated or determined (see the illustrative sketch following this list). Also, the mobility of a radionuclide can be enhanced by the presence of solvents or other chemicals.
10	• Does analytical or circumstantial evidence suggest residual radioactive material in
11	ground water? Evidence for residual radioactive material may appear in current site data;
12	historical, hydrogeological, and geographical information systems records; or as a result of
13	personal interviews.
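As a rough illustration of how Kd translates into relative mobility, the sketch below uses the standard linear-sorption retardation relation, R = 1 + (bulk density / porosity) x Kd, under which a radionuclide moves at roughly the ground water velocity divided by R. The bulk density and porosity values are assumptions chosen only for illustration; site-specific values, including site-specific Kd values, should be used in any real evaluation.

    def retardation_factor(kd_cm3_g, bulk_density_g_cm3=1.6, porosity=0.3):
        # Linear-sorption retardation: R = 1 + (rho_b / n) * Kd (dimensionless)
        return 1.0 + (bulk_density_g_cm3 / porosity) * kd_cm3_g

    # Illustrative Kd values taken from the text (cm3/g)
    for element, kd in [("hydrogen (as tritiated water)", 0.0), ("thorium", 3200.0)]:
        r = retardation_factor(kd)
        print(f"{element}: Kd = {kd} cm3/g, R = {r:,.0f}")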
14	3.6.3.5 Air
Evaluation of air differs from the evaluation of other media with a potential for residual radioactive material: air is evaluated as a pathway for resuspending and dispersing radioactive material.
18	• Were there observations of releases of radioactive material into the air? Direct
19	observation of a release to the air might occur where radioactive materials are suspected to
20	be present in particulate form (e.g., mine tailings, waste pile) or adsorbed to particulates
21	(e.g., radioactive material in soil), and where site conditions favor air transport (e.g., dry,
22	dusty, windy).
23	• Does analytical or circumstantial evidence suggest a release to the air? Other
24	evidence for releases to the air might include areas of residual radioactive material in
25	surface soil that do not appear to be caused by direct deposition or overland migration of
26	radioactive material.
27	• For radon exposure only, are there elevated amounts of radium (226Ra) in the soil or
28	water that could act as a source of radon in the air? The source 226Ra decays to 222Rn,
29	which is radon gas. Once radon is produced, the gas needs a pathway to escape from its
30	point of origin into the air. Radon is readily released from water sources that are open to air.
Soil, however, can retain radon gas until it has decayed (see Section 6.8). The rate at which radon is emitted from a solid (i.e., the radon flux) can be measured directly to evaluate potential sources of radon.
34	• Is there a prevailing wind direction and a propensity for windblown transport of
35	radioactive material? Information pertaining to geography, ground cover (e.g., amount and
36	types of local vegetation), meteorology (e.g., wind speed at 7 meters [23 feet] above ground
37	level) for and around the site, and site-specific parameters related to surface soil
38	characteristics enter into calculations used to describe particulate transport. Mean annual
1	wind speed can be obtained from the National Weather Service surface station nearest to
2	the site.
3	3.6.3.6 Structures
4	Structures used for storage, maintenance, or processing of radioactive materials are potential
5	sources of residual radioactive material. The questions presented in Table 3.1 help determine
6	whether a building might be affected by residual radioactive material. The questions listed in this
7	section are for identifying structures, or portions of structures, that might not be identified using
8	Table 3.1 but have a potential for residual radioactive material. Section 4.8.3.1 also presents
9	useful information on identifying structures with residual radioactive material.
10	• Were adjacent structures used for the storage, maintenance, or processing of
11	radioactive material? Adjacent is a relative term for this question. A processing facility with
12	a potential for venting radioactive material to the air could deposit residual radioactive
13	material on buildings downwind. A facility with little potential for release outside of the
14	structures handling the material would be less likely to deposit radioactive material on
15	nearby structures.
16	• Is a building, its addition, or a new structure located on a former radioactive waste
17	burial site or on land with residual radioactive material? Comparing past and present
18	photographs or site maps and retrieving building permits or other structural drawings and
19	records in relation to historical operations information will reveal site locations where
20	structures may have been built over buried waste or land with residual radioactive material.
21	• Was the building constructed using materials containing residual radioactive
22	material? Building materials (e.g., concrete, brick, plaster, cement, wood, metal, cinder
23	block) may contain residual radioactive material.
24	• Does the potentially non-impacted portion of the building share a drainage system or
25	ventilation system with areas with potential residual radioactive material? Technical
26	and architectural drawings for site structures, along with visual inspections, are required to
27	determine if this is a concern in terms of current or past operations.
28	• Is there evidence that previously identified areas of residual radioactive material were
29	remediated by painting or similar methods of immobilization? Removable sources of
30	residual radioactive material were sometimes immobilized by painting, partition, or the
31	addition of floor layers (e.g., tiles, carpet). These sources may be more difficult to locate and
32	may need special consideration when planning subsequent surveys.
33	3.6.4 Develop a Conceptual Model of the Site
34	Starting with project planning activities, gather and analyze available information to develop a
35	conceptual site model. The model is essentially a site diagram showing locations of known
36	radioactive material, areas of suspected residual radioactive material, types and concentrations
37	of radionuclides in impacted areas, media with potential residual radioactive material, and
38	locations of potential reference (background) areas. The diagram should include the general
39	layout of the site, including buildings and property boundaries. When possible, one should
produce three-dimensional diagrams. The conceptual site model will be updated and modified
as information becomes available throughout the RSSI process. The process of developing this
model is also briefly described in Attachment A of EPA 1996a.
The model is used to assess the nature and the extent of residual radioactive material; to
identify potential sources of residual radioactive material, release mechanisms, exposure
pathways, and human and environmental receptors; and to develop exposure scenarios.
Further, this model helps identify data gaps and determine media to be sampled, and it assists
staff in developing strategies for data collection. Site history and preliminary survey data
generally are extremely useful sources of information for developing this model. The conceptual
site model should include known and suspected sources of residual radioactive material, the
types of radioactive material, and affected media. Such a model can also illustrate known or
potential routes of migration and known or potential human and environmental receptors.
The site should be classified or initially divided into similar areas. Classification may be based
on the operational history of the site or observations made during the site reconnaissance (see
Section 3.5). After the site is classified using current and past site characteristics, further divide
the site or facility based on anticipated future use. This classification can help (1) assign limited
resources to areas that are anticipated to be released without restrictions, and (2) identify areas
with little or no possibility of unrestricted release. Figure 3.2 shows an example of how a site
might be classified in this manner. Further classification of a site may be possible based on site
disposition recommendations (unrestricted vs. release with passive controls).
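One convenient way to keep the conceptual site model consistent as it is updated throughout the RSSI process is to maintain its elements in a simple structured form alongside the site diagram. The Python sketch below is hypothetical; the field names and example entries are illustrative only and are not prescribed by MARSSIM.

    from dataclasses import dataclass, field

    @dataclass
    class SurveyArea:
        name: str
        classification: str            # e.g., "impacted" or "non-impacted"
        suspected_radionuclides: list  # e.g., ["Cs-137", "Co-60"]
        media: list                    # e.g., ["surface soil", "building surfaces"]
        basis: str                     # record or observation supporting the call

    @dataclass
    class ConceptualSiteModel:
        site_name: str
        areas: list = field(default_factory=list)
        reference_areas: list = field(default_factory=list)

    csm = ConceptualSiteModel(site_name="Hypothetical Site")
    csm.areas.append(SurveyArea("Area B: Processing", "impacted",
                                ["Cs-137"], ["surface soil"],
                                "spill noted in historical operating records"))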
3.6.5 Professional Judgment
In some cases, traditional sources of information, data, models, or scientific principles are
unavailable, unreliable, conflicting, or too costly or time consuming to obtain. In these instances,
professional expert judgment may be the only practical tool available to the investigator or
regulator. Expert judgment, or "expert elicitation," refers to judgments obtained from experts in their fields of expertise that are explicitly stated and documented for review and appraisal by others. It is a formal, highly structured, and well-documented process for obtaining the judgments of multiple experts regarding a scientific inquiry or decision (NRC 1990). In most instances, the issue is not whether to use judgment, but whether to use it in an explicit
and disciplined fashion or in an ad hoc manner. An important interrelated question is when and
whose judgment should be used. For this guidance, it is often useful to formalize the elicitation
and use of judgment for significant technical, environmental, and socioeconomic problems. For
general applications, this type of judgment is a routine part of scientific investigation where
knowledge is incomplete. For MARSSIM guidance, professional judgment can be used as an
independent review of historical data to support decision-making during the HSA or the use of
statistical tools or methodology. Professional or expert judgment should be used as necessary, particularly in situations where data cannot reasonably be obtained by collection, experimentation, or field measurement, or where the cost of data collection is prohibitive.
Typically, the process of recruiting professionals for expert judgment should be documented and as unbiased as possible. The credentials of the selected individual or individuals should enhance the credibility of the elicitation, and their ability to communicate their reasoning is a primary determinant of the quality of the results. Qualified expert professionals can be identified by different sources, including the planning team, professional organizations, government agencies, universities, consulting firms, and public interest groups. The selection criteria for the professionals should include potential conflict of interest (economic or personal), evidence of expertise in a required topic, objectiveness, and availability.
[Figure 3.2: Area classification of a hypothetical site. Initial classification is based on site use (Area A: Production; Area B: Processing; Area C: Storage and Disposal; Area D: Administration, including office and laboratory buildings). Further classification based on the Historical Site Assessment: Area A is impacted, with areas exceeding the DCGL not likely; Area B is impacted, with areas exceeding the DCGL likely; Area C is impacted, with potentially restricted access as a radioactive waste management unit.]
3.7 Determining the Next Step in the Site Investigation Process
Upon completion, the HSA will support one of three possible recommendations:
1.	An emergency action may be necessary to reduce the risk to human health and the
environment, such as a Superfund removal action, which is discussed in detail by EPA
(EPA 1988a).
2.	The site or area is categorized as impacted, and further investigation is needed before a
decision regarding final release can be made. The area may be classified as Class 1,
Class 2, or Class 3, and a scoping survey or a characterization survey may be performed as
necessary. Information collected during the HSA can be very useful in planning these
subsequent survey activities.
3.	The site or area is categorized as non-impacted, a term applied where there is no reasonable potential for residual radionuclide concentrations or radioactive material above background (10 CFR 50). The site or area can be released.
As stated in Section 1.1, the purpose of this manual is to describe a process-oriented approach
for demonstrating that the concentration of residual radioactive material does not exceed the
release criteria. The highest probability of demonstrating this can be obtained by sequentially
following each step in the RSSI process. In some cases, however, performing each step in the
process is not practical or necessary. This section provides information on how the results of the
HSA can be used to determine the next step in the process.
The best method for determining the next step is to review the purpose for each type of survey
described in Chapter 5. For example, a scoping survey is performed to provide sufficient information for determining (1) whether the residual radioactive material present warrants further evaluation, and (2) the initial level of effort for decontamination and for preparing a plan for a more detailed survey. If the HSA demonstrates that this information is already
available, do not perform a scoping survey. On the other hand, if the information obtained during
the HSA is limited, a scoping survey may be necessary to narrow the scope of the
characterization survey.
The exception to conducting additional surveys before an FSS is the use of HSA results to
release a site. Generally, the analytical data collected during the HSA are not adequate to
statistically demonstrate compliance for impacted areas as described in Chapter 8. This means
that the decision to release the site will be based on professional judgment. This determination
will ultimately be made by the responsible regulatory agency.
3.8	Historical Site Assessment Report
A narrative report is generally the best format to summarize what is known about the site, what
is assumed or inferred, activities conducted during the HSA, and all researched information.
Cite a supporting reference for each factual statement given in the report. Attach copies of
references (i.e., those not generally available to the public) to the report. The narrative portion of
the report should be written in plain English and avoid the use of technical terminology.
A sample HSA report format is provided in Example 1. Additional information not identified in
the outline may be requested by the regulatory agency at its discretion. The level of effort to
produce the report should reflect the amount of information gathered during the HSA.
3.9	Review of the HSA
The planning team should ensure that someone (a first reviewer) conducts a detailed review of
the HSA report for internal consistency and as a QC mechanism. A second reviewer with
considerable site assessment experience should then examine the entire information package
to ensure consistency and to provide an independent evaluation of the HSA conclusions. The
second reviewer also evaluates the package to determine if special circumstances exist where
radioactive material may be present but not identified in the HSA. Both the first reviewer and the
second independent reviewer should examine the HSA written products to ensure internal
consistency in the report's information, summarized data, and conclusions. The site review
ensures that the HSA's recommendations are appropriate.
An important QA objective is to find and correct errors. A significant inconsistency indicating
either an error or a flawed conclusion, if undetected, could contribute to an inappropriate
recommendation. Identifying such a discrepancy directs the HSA investigator and site reviewers
to re-examine and resolve the apparent conflict.
Under some circumstances, experienced investigators may have differing interpretations of site
conditions and draw differing conclusions or hypotheses regarding the likelihood of residual
radioactive material. Any such differences should be resolved during the review. If a reviewer's
interpretations contradict those of the HSA investigator, the two should discuss the situation and
reach a consensus. This aspect of the review identifies significant points about the site
evaluation that may need detailed explanation in the HSA narrative report to fully support the
conclusions. Throughout the review, the HSA investigator and site reviewers should keep in
mind the need for conservative judgments in the absence of definitive proof to avoid
underestimating the presence of residual radioactive material, which could lead to an
inappropriate HSA recommendation.
Example 1: HSA Report Format
1. Glossary of Terms, Acronyms, and Abbreviations
2. Executive Summary
3. Purpose of the Historical Site Assessment
4. Property Identification
   1. Physical Characteristics
      1. Name—CERCLIS ID# (if applicable), owner/operator name, address
      2. Location—street address, city, county, state, geographical coordinates
      3. Topography—USGS 7.5-minute quadrangle or equivalent
      4. Stratigraphy
   2. Environmental Setting
      1. Geology
      2. Hydrogeology
      3. Hydrology
      4. Meteorology
5. Historical Site Assessment Methodology
   1. Approach and Rationale
   2. Boundaries of Site
   3. Documents Reviewed
   4. Property Inspections
   5. Personal Interviews
6. History and Current Usage
   1. History—years of operation, type of facility, description of operations, regulatory involvement, permits and licenses, waste handling procedures
   2. Current Usage—type of facility, description of operations, probable source types and sizes, description of spills or releases, waste manifests, radioactive inventories, emergency or removal actions
   3. Adjacent Land Usage—sensitive areas, such as wetlands or preschools
7. Findings
   1. Potential Sources of Residual Radioactive Material
   2. Potential Areas with Residual Radioactive Material
      1. Impacted Areas—Known and Potential
      2. Non-Impacted Areas
   3. Potential Media with Residual Radioactive Material
   4. Related Environmental Concerns
8. Conclusions
9. References
10. Appendices
   A. Conceptual Model and Site Diagram Showing Classifications
   B. List of Documents
   C. Photo Documentation Log
      a. Original Photographs of the Site and Pertinent Site Features
1	4 CONSIDERATIONS FOR PLANNING SURVEYS
2	4.1 Introduction
3	4.1.1 Purpose
4	This chapter is intended to introduce the Multi-Agency Radiation Survey and Site Investigation
5	Manual (MARSSIM) user to general considerations for planning MARSSIM-based surveys by
6	presenting areas of consideration common to Radiation Surveys and Site Investigations (RSSIs)
7	with an emphasis on final status surveys (FSSs).1 Detailed technical information about planning
8	surveys will follow in the subsequent chapters. For the purposes of this chapter, it is assumed
9	that a Historical Site Assessment (HSA) has been performed, and the results are available to
10	the survey design team.
11	4.1.2 Scope
12	The emphasis in MARSSIM is on FSSs of surface soil and surfaces of buildings and outdoor
13	areas to demonstrate compliance with cleanup regulations. However, MARSSIM discusses four
14	types of surveys:
15	• Scoping
16	• Characterization
17	• Remedial Action Support (RAS)
18	• Final status
19	These survey types are discussed in more detail in Chapter 5. The emphasis on FSSs should
20	be kept in mind during the design phase of all surveys. The topics discussed in this chapter
21	focus on planning the FSS.
22	4.1.3 Overview of Survey Planning
23	In the following sections of this chapter, you will be introduced to many potentially unfamiliar
24	concepts, terms, definitions, etc., specifically related to planning surveys. Informal definitions will
25	be given in this chapter; however, the reader should refer to the Glossary for complete
26	definitions. The following topics related to survey planning are discussed in this chapter:
27	• Data Quality Objectives (DQO) process: The DQO process is used to develop performance
28	and acceptance criteria that clarify study objectives, define the appropriate type of data, and
29	specify tolerable levels of potential decision errors that will be used as the basis for
30	establishing the quality and quantity of data needed to support decisions.
31	• Survey types: There are four MARSSIM survey types: scoping, characterization, RAS, and
32	final status. The emphasis of this chapter will be on FSSs.
1 MARSSIM uses the word "should" as a recommendation, not as a requirement. Each recommendation in this
manual is not intended to be taken literally and applied at every site. MARSSIM's survey planning documentation will
address how to apply the process on a site-specific basis.
•	Unity rule: The unity rule is used when more than one radionuclide is present at a concentration that is distinguishable from background and a single concentration comparison does not apply. In this case, the mixture of radionuclides is compared against default concentrations by applying the unity rule: for each radionuclide, determine the ratio of (1) its concentration in the mixture to (2) the concentration for that radionuclide in an appropriate listing of default values. The sum of the ratios for all radionuclides in the mixture should not exceed 1 (a worked example follows this list).
•	Radionuclides and derived concentration guideline levels (DCGLs): The design team needs
to determine the radionuclides of concern and final form of the DCGLs. The DCGLs are
typically based on dose (or risk) pathway modeling, which is used to determine release
criteria expressed in measurable radiological quantities. These can be a soil concentration,
surface (areal) concentration, or external dose (or exposure) rate.
•	Area and site considerations: Properly classifying areas as impacted or non-impacted, identifying survey units, and selecting background reference areas are critical for the successful execution of an FSS.
•	Statistical considerations: MARSSIM recommends the use of statistical hypothesis testing in all but the simplest of surveys (see Appendix B). The MARSSIM user must be conversant with the statistical concepts discussed in the manual and should consider including a statistician on the design team.
•	Measurements: The detection capabilities of all the measurement (sampling included)
techniques must be evaluated to ensure that the data and measurement quality objectives
are met.
•	Site preparation: Site preparation includes gaining permission to access the site, ensuring that the survey team and equipment can operate safely, and establishing the logistical means to perform the survey; these efforts should be started early.
•	Health and safety: The health and safety of the workers is a high priority, so each site must
have a documented health and safety plan.
•	Survey design examples: A few simplified examples are presented in Section 4.12 to
illustrate the process for planning surveys.
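The following short Python sketch works through the unity rule referenced above. The radionuclide concentrations and the DCGL values are assumed purely for illustration; actual DCGLs come from the applicable dose- or risk-based modeling for the site.

    # Assumed survey-unit concentrations and DCGLs (pCi/g); illustrative values only.
    measured = {"Cs-137": 2.0, "Sr-90": 1.5, "Co-60": 0.5}
    dcgl     = {"Cs-137": 11.0, "Sr-90": 8.0, "Co-60": 3.8}

    # Sum of the concentration-to-DCGL ratios across all radionuclides in the mixture.
    sum_of_fractions = sum(measured[n] / dcgl[n] for n in measured)
    print(f"Sum of fractions = {sum_of_fractions:.2f}")
    print("Unity rule satisfied" if sum_of_fractions <= 1.0 else "Unity rule exceeded")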
4.2 Data Quality Objectives Process
DQOs were introduced in Chapter 1, expanded upon in Chapter 2, and discussed in detail in
Appendix D. The survey design team must be familiar with the DQO process to properly design
a survey. The DQO process can be summarized in seven steps:
1.	State the problem.
2.	Identify the decisions to be made.
3.	Identify inputs to the decision.
4.	Define the study boundaries.
5.	Develop a decision rule.
6.	Specify limits on decision errors.
7.	Optimize the survey design.
4.2.1 Planning Phase
The DQO process allows the survey design team to apply a graded approach, ensuring that the level of effort meets the design goals in a technically defensible and cost-effective manner.
The intent of any survey is to ensure that the decision makers have the appropriate type,
quantity, and quality of environmental data needed to make the correct decision. DQOs are
qualitative and quantitative statements derived from the DQO process that do the following:
•	Clarify the study objective: The first step in any survey is to determine the objectives of the
survey (DQO Steps 1 and 2). Depending on the data available from the previous RSSI
activities (e.g., HSA review), the objectives of a survey can range from augmenting HSA
information to be used as input in designing a characterization survey to releasing a survey
unit. An important part of this clarification is the determination of the initial condition of each
survey unit: The survey unit is assumed to be "not clean" (Scenario A) or "clean"
(Scenario B). See Section 2.5.1 and Sections D.1.1 and D.1.2 of Appendix D for more
details.
•	Define the most appropriate type of data to collect: Once the objectives of the survey have
been agreed upon, the design team needs to determine what data need to be collected
(DQO Step 3). Implicit in this step is the determination of the radionuclides of concern,
choice of equipment, detection limits, analytical methods, statistical tests, etc., that will be
used to meet the objectives of the survey.
•	Determine the most appropriate conditions for collecting the data: In general, the "study
boundaries" of a survey (DQO Step 4) are spatial; for example, the selection of the survey
units. However, if the objective of the survey is to collect data for classification, then the
spatial boundary of the survey might be the entire site under consideration. Throughout the
RSSI process, decisions need to be made; for example, based on a review of the HSA's
conclusions and the results of scoping/characterization surveys, the decision maker will
determine survey unit boundaries and classifications.
•	Specify limits on decision errors that will be used as the basis for establishing the quantity
and quality of data needed to support the decision: To make decisions, the outputs of the
survey must be in a form amenable to decision-making (DQO Step 5). MARSSIM
recommends the use of statistical hypothesis testing to make decisions to release a survey
unit. Because of the overall uncertainty in the surveying process, there is always a chance
of a decision error (e.g., incorrectly concluding that a survey unit meets the release criteria,
when in fact it does not). The chance of a decision error cannot be eliminated; however, it
can be controlled (DQO Step 6). The survey design team must be aware of the chance of
decision errors and take measures to control them (e.g., collecting more samples, using
more precise measurement techniques, and using better surveying and sampling designs).
When the previous steps are completed, the survey design team can optimize the survey plan
(DQO Step 7); the team might need to work through this step several times to arrive at the best
design. Appendix D discusses the planning phase of the data life cycle in detail. The MARSSIM
user should read and be conversant with Appendix D.
Regardless of the survey type under consideration, the DQOs remain the overarching guide for planning; the design team needs to explain the "who, what, where, when, why, and how" for a survey.
4.2.2 Quality System
MARSSIM requires that all environmental data collection and use take place in accordance with a site-specific systematic planning process that incorporates industry-established quality assurance and quality control (QA/QC). The goal of a QA/QC program is to identify and implement sampling and analytical methodologies that limit the introduction of error into analytical data. For MARSSIM data collection and evaluation, a quality system is needed to ensure that radiation surveys produce results that are of the type and quality needed and expected for their intended use. A quality system is a management system that describes the elements necessary to plan, implement, and assess the effectiveness of QA/QC activities. This system establishes many functions, including—
•	quality management policies and guidelines for the development of organization- and project-specific quality plans
•	criteria and guidelines for assessing data quality
•	assessments to ascertain the effectiveness of QA/QC implementation
•	training programs related to QA/QC implementation.
A quality system ensures that MARSSIM decisions will be supported by sufficient data of adequate quality and usability for their intended purpose, and it further ensures that such data are authentic, appropriately documented, and technically defensible. MARSSIM uses the project-level components of a quality system as a framework for planning, implementing, and assessing environmental data collection activities.
In accordance with the environmental data quality system described in Appendix D, all environmental data collection and use are to take place in accordance with a site-specific systematic planning process (SPP) that consists of planning, implementation, and assessment phases. The results of the SPP are usually documented in a Quality Assurance Project Plan (QAPP). A QAPP integrates all technical and quality aspects and defines in detail how specific QA/QC activities will be implemented during the survey project. The Uniform Federal Policy (UFP) for QAPPs (EPA 2005a, 2005b, 2005c) was developed to provide procedures and guidance for consistently implementing the national consensus standard ANSI/ASQ E-4, Quality Systems for Environmental Data and Technology Programs, for the collection and use of environmental data. The UFP for QAPPs is presented in three volumes:
•	Part 1, UFP-QAPP Manual (EPA-505-B-04-900A, DTIC ADA 427785) (EPA 2005a)
•	Part 2A, UFP-QAPP Workbook (EPA-505-B-04-900C, DTIC ADA 427486) (EPA 2005c)
•	Part 2B, Quality Assurance/Quality Control Compendium: Minimum QA/QC Activities (EPA-505-B-04-900B, DTIC ADA 426957) (EPA 2005b)
Using this scientific, logical approach to planning for data collection and assessment at a site helps ensure that the amounts and types of data collected are appropriate for decision-making and that the physical, environmental, chemical, and radiological characteristics of the site are adequately defined.
The development of a QAPP is one of the first team-based QA/QC activities performed in the project planning stage.
The objective of the UFP-QAPP is to provide a single national consensus document for consistently and systematically implementing the project-specific requirements of ANSI/ASQ E-4 (ASQC 1995) and help ensure the quality, objectivity, utility, and integrity of environmental data. Information on selecting the number and type of QC measurements for a specific project is provided in Section 3.4; Tables 4, 5, and 6 of the UFP-QAPP Part 1; and Worksheet 28 of the UFP-QAPP Part 2A.
Minimum QA/QC activities are specified for all environmental data collection and use in the UFP-QAPP Part 2B. However, this matrix of minimum requirements is not meant to be a replacement for a site-specific QAPP. A wide range of site-specific guidelines that relate to the ultimate use of the data should be determined for the data collection activities specified in the survey plan. These guidelines include, but are not limited to—
•	types of decisions that will be supported by the data
•	project quality objectives
•	acceptance criteria for data quality indicators (also known as measurement performance criteria)
•	survey plan, including location of environmental and QC samples and measurements
•	types of radionuclides and analyses that require laboratory analysis (on-site, field, or fixed lab)
The QA/QC activities specified in the QA matrix represent a minimum list of activities. Other QA/QC activities may be added, depending on the decisions to be made and on site-specific conditions. The matrix of minimum QA/QC activities is organized by—
•	survey type (i.e., scoping or characterization) for surveys prior to the FSS
•	data uses (e.g., confirmatory measurements) for RAS surveys
•	data type (i.e., screening versus definitive data)
•	project stage (i.e., plan, implement, assess, decide)
4.3 Survey Types
4.3.1 Scoping
MARSSIM defines a scoping survey as "a type of survey that is conducted to identify (1) radionuclides present, (2) relative radionuclide ratios, and (3) general concentrations and extent of residual radioactive material." In conjunction with an HSA, the results of a scoping survey can help determine (1) preliminary radionuclides of concern, (2) interim site and survey unit boundaries, (3) initial area classifications, (4) data gaps, (5) initial estimates of the level of effort for remediation, and (6) information for planning a more detailed survey, such as a characterization survey. Methods for planning, conducting, and documenting scoping surveys are described in Section 5.2.1.
4.3.2 Characterization
MARSSIM defines a characterization survey as "a type of survey that includes facility or site sampling, monitoring, and analysis activities to determine the extent and nature of residual radioactive material. Characterization surveys provide the basis for acquiring necessary technical information to develop, analyze, and select appropriate cleanup techniques." Characterization surveys can be developed to meet a very broad range of objectives, many of which are outside the scope of MARSSIM. The guidance in Section 5.2.2 concentrates on providing characterization survey planning information with an emphasis on the FSS design.
4.3.3 Remedial Action Support
MARSSIM defines remedial action as "Those actions that are consistent with a permanent remedy taken instead of, or in addition to, removal action in the event of a release or threatened release of a hazardous substance into the environment, to prevent or minimize the release of hazardous substances so that they do not migrate to cause substantial danger to present or future public health or welfare or the environment." An RAS survey supports remediation activities and is used to monitor the effectiveness of remediation efforts intended to reduce residual radioactive material to acceptable levels. The general objectives of an RAS are to (1) support remediation activities, (2) determine when a site or survey unit is ready for the FSS, and (3) provide updated estimates of site-specific parameters to use for planning the FSS. Methods for planning, conducting, and documenting an RAS are described in Section 5.2.3.
4.3.4 Final Status
4.3.4.1 Survey
MARSSIM defines an FSS as "measurements and sampling to describe the radiological conditions of a site, following completion of remediation activities (if any) in preparation for release." An FSS is performed to demonstrate that a survey unit meets the agreed-upon release criteria. In other words, the FSS is designed to answer the question, "Does the concentration of residual radioactive material in each survey unit satisfy the predetermined criteria for release for unrestricted use or, where appropriate, for use with designated limitations (restricted release)?" The primary objective of MARSSIM is the FSS. The design of FSSs is discussed in detail in Section 5.3, with the remainder of MARSSIM expanding on the design, execution, and assessment of FSSs.
Figure 4.1 illustrates the sequence of activities described in this chapter and their relationship to the survey design process.
Figure 4.1: Sequence of Preliminary Activities Leading to an FSS Design
[Flowchart summary: identify radionuclides (Section 4.3) with the survey objectives of identifying radionuclides of concern, identifying associated radionuclides, and identifying conditions to be evaluated or measured; determine whether each radionuclide is present in background and select background reference areas (Section 4.5); establish DCGLs (Section 4.4); classify areas by potential for residual radioactive material; group/separate areas into survey units; prepare site for survey access (Section 4.9); establish survey location reference system (Section 4.9.5); and design survey (Chapter 5).]
4.3.4.2 Verification Process
Historically, regulators commissioned verification surveys after the completion of an FSS.
However, the application of the DQO process to the verification process has led to the
development of more effective processes, such as in-process decommissioning inspections
(Abelquist, 2014). For example, NRC (2008) and DOE (2011c) require verification inspections of
some sort; these documents can be used as guides for including verification processes in the
FSS design project. The personnel who plan and execute an FSS should be familiar with
the independent verification (IV) process and be prepared to work with regulators and their
contractors to support the verification process during all phases of the FSS. Abelquist (2014)
provides an example of a decommissioning inspection plan that might be useful when designing
an FSS.
Bailey (2008) summarized the experiences from IV activities of Oak Ridge Institute for Science
and Education (ORISE) in support of U.S. Department of Energy decommissioning projects. In
conclusion, Bailey (2008) states that—
Independent verification should be integrated into the planning stages rather than
after the cleanup contractor has completed the remediation work and
demobilized from the site. The IV of onsite remediation and FSS activities should
be coordinated and if possible implemented in parallel with the contractor to
minimize schedule impacts. A well-implemented and thorough IV program for a
site requires IV involvement throughout the D&D [Decontamination and
Decommissioning] process. Independent verification is not a substitute for routine
contractor quality assurance; however, IV activities often improve the contractor's
performance. IV recommendations often improve the contractor's FSS
procedures and results, while increasing the probability of complete remediation
and documentation.
4.3.5	Simplified Procedures
The design team should be aware that, under certain conditions (e.g., sites involving only small quantities of radioactive material exempt from, or not requiring, a specific license), a simplified procedure might be used to demonstrate regulatory compliance. The design team should refer to Appendix B and seek regulatory approval before using this simplified procedure.
4.3.6	A Note on Subsurface Assessments
Many users might need to assess subsurface residual radioactive materials. Strictly speaking,
this is beyond the scope of MARSSIM; however, the general concepts contained in MARSSIM
(e.g., the DQO process, statistical survey and sampling design, etc.) may be appropriate to
address subsurface contamination. As always, any approach to site decommissioning needs to
be discussed with the appropriate regulatory authorities.
4.3.7	Uranium Mill Tailings Radiation Control Act of 1978 Sites
At Uranium Mill Tailings Radiation Control Act of 1978 (UMTRCA) sites, EPA's Health and
Environmental Protection Standards for Uranium and Thorium Mill Tailings (see 40 CFR 192)
are applicable. However, the technical requirements in these standards are not always
consistent with some of the recommendations in MARSSIM. Specifically, the soil cleanup
standards for 226Ra and 228Ra are specified as averages over an area of 100 square meters
(m2). Additional details for planning at UMTRCA sites are provided in Section 4.12.9.
4.4 The Unity Rule
The unity rule is used to ensure that the total dose (risk) from all sources (or media) and all
radionuclides associated with each source does not exceed the release criteria. It is to be used
when more than one radionuclide is present and distinguishable from background and a single
concentration does not apply. Essentially, this means that if measurements of different
quantities are made at a location, then the unity rule must be used. For example, the unity rule
would be used if two radionuclides are measured in each soil sample or if gross alpha and gross
beta measurements are made at each location and the results are being compared to specific
DCGLs.
The total amount of anything, whether dose, counts, or activity, is simply the sum of its parts (M_i):

Total = M_1 + M_2 + ... + M_i + ... + M_n = Σ_{i=1}^{n} M_i     (4-1)
Dividing both sides of Equation (4-1) by the total yields the following fundamental equation (Equation (4-2)), where f_i = M_i / Total:

1 = f_1 + f_2 + ... + f_i + ... + f_n = Σ_{i=1}^{n} f_i     (4-2)

The basic statement of this equation is that the sum of all the fractions must add to unity (1).
When using the sum of fractions to demonstrate compliance in MARSSIM, each fraction is
determined by dividing each "part" (e.g., the concentration of residual radioactive material due to
a specific radionuclide/source) by the respective release criterion (e.g., a derived concentration
guideline level [DCGL]). In an FSS for a survey unit to be released, the dose or risk from all
radionuclides and all sources in a survey unit must be less than or equal to the applicable
release criterion, and the sum of fractions for multiple radionuclides/sources must be less than
or equal to unity:
Total Dose or Risk = Σ_{i=1}^{n} (Dose or Risk Component)_i ≤ Release Criterion     (4-3)
Dividing the terms in Equation (4-3) by the applicable release criterion:
(Total Dose or Risk)/(Release Criterion) = Σ_{i=1}^{n} (Dose or Risk Component)_i / (Release Criterion) ≤ 1     (4-4)
If the dose/risk components and release criteria are expressed as concentrations (e.g., express
the dose/risk as concentration and release criterion as DCGLs as discussed above),
Equation (4-4) can be written in terms of concentrations (see Equation (4-5)):
Σ_{i=1}^{n} (Dose or Risk Component)_i / (Release Criterion) = Σ_{i=1}^{n} C_i / DCGL_i ≤ 1     (4-5)
where
•	C_i is the concentration of the ith component (e.g., radionuclide or source) leading to dose or risk.
•	DCGL_i is the derived concentration guideline level of the ith component (e.g., radionuclide or source) leading to dose or risk.
This is the traditional form of the unity rule as used and defined in MARSSIM.
Other applications of Equation (4-1) or derivatives, such as deriving a gross activity DCGL, will
be covered in the corresponding section of this and other chapters as needed.
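To make the unity-rule arithmetic concrete, the following short Python sketch (not part of MARSSIM) evaluates the sum of fractions in Equation (4-5) for a hypothetical two-radionuclide survey unit; the radionuclides, concentrations, and DCGL values are assumed for illustration only.

    # Minimal sketch of the unity rule (sum of fractions), Equation (4-5).
    # All radionuclides, concentrations, and DCGLs below are hypothetical examples.

    def sum_of_fractions(concentrations, dcgls):
        """Return the sum of C_i / DCGL_i over the radionuclides in 'concentrations'."""
        return sum(concentrations[nuclide] / dcgls[nuclide] for nuclide in concentrations)

    concentrations = {"Cs-137": 2.0, "Sr-90": 1.5}   # pCi/g, hypothetical sample results
    dcgls = {"Cs-137": 6.0, "Sr-90": 8.0}            # pCi/g, hypothetical DCGLs

    sof = sum_of_fractions(concentrations, dcgls)
    print(f"Sum of fractions = {sof:.2f}")           # 2.0/6.0 + 1.5/8.0 = 0.52
    print("Meets the unity rule" if sof <= 1.0 else "Exceeds the unity rule")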
4.5 Radionuclides
4.5.1	Radionuclides of Concern
During the design of an FSS, the survey team should thoroughly review all the remediation
activities conducted before the FSS to determine the radionuclides of concern and their
expected concentrations in each survey unit. The team should also determine if the
concentrations of the radionuclides of concern in the background need to be accounted for in
the FSS design; for example, if a radionuclide is not present in the background, the FSS can be
designed based on the one-sample Sign test.
If neither remedial action nor HSA data exist, then the survey design team should make the
identification of the radionuclides of concern a primary objective of the team's actions. Whether
through an HSA, a scoping survey, a characterization survey, or some combination of them, the
team must make an initial characterization of the types, concentrations, and distribution of the
residual radioactive material at the site.
4.5.2	Release Criteria and Derived Concentration Guideline Levels
The decommissioning process ensures that residual radioactive material will not result in
individuals being exposed to unacceptable levels of radiation dose or risk. Regulatory agencies
establish radiation dose standards based on risk considerations and scientific data relating dose
to risk. These radiation dose standards are the fundamental release criteria; however, they are
not measurable. To translate these release criteria into measurable quantities, residual levels of
radioactive material corresponding to the release criteria are derived (calculated) by analysis of
various pathways (e.g., direct radiation, inhalation, and ingestion) and scenarios (e.g., resident
farmer, industrial, recreational) through which exposures could occur.
These DCGLs are usually presented in terms of surface or mass activity concentrations of radioactive material (typically becquerels per square meter [Bq/m2] or disintegrations per minute per 100 square centimeters [dpm/100 cm2] for surface activity, and becquerels per kilogram [Bq/kg] or picocuries per gram [pCi/g] for mass activity). The details of the derivation of DCGLs are beyond the scope of MARSSIM. However, the survey
design team should understand how DCGLs were derived, because the models and
assumptions used to derive DCGLs can drive how measurements are made. For example, if
DCGLs for soil were derived based on an assumption that the residual radioactive material was
restricted to the top 15 cm of soil, a condition verified in a characterization survey, then for the
FSS it would not be appropriate to collect soil samples from the top 30 cm of soil. In many
cases, generally applicable DCGLs can be obtained from the relevant regulatory agency. In
other cases, DCGLs derived for site-specific conditions can be used with permission of the
relevant regulatory agency.
There are two types of DCGLs (DCGLw and DCGLemc)2 applicable to satisfying
decommissioning objectives:
•	The DCGLw is the mean3 concentration of residual radioactive material within a survey unit
that corresponds to release criteria (e.g., regulatory limit in terms of dose or risk).
•	The DCGLemc accounts for the smaller area of elevated residual radioactive material and is
typically derived based on dose (or risk) pathway modeling. The DCGLemc is always greater
than or equal to the DCGLw.
The contributions to dose or risk from both the uniform area and areas of elevated residual
radioactive material, if applicable, must meet the condition expressed in Equation 8-4 in
Section 8.6.2. The development of regulatory requirements leading to the establishment of a
DCGLemc is beyond the scope of MARSSIM and is determined strictly through the requirements
of regulatory agencies. Therefore, it is important to work with the applicable regulatory agency
to determine whether requirements for a DCGLemc should be consistent with the approach
presented in MARSSIM or those in regulatory documents. When properly justified to and
accepted by the regulatory agency, no DCGLemc requirement may be needed at all. DCGLemcS
and associated requirements for areas of elevated radioactive material should be clearly stated
and properly approved, and surveys should demonstrate compliance with those requirements.
More discussion about elevated areas of radioactive material and their consideration during
radiological survey activities can be found in Section 5.3.5.
To prove compliance with requirements for discrete radioactive particles, some surveys have
used the MARSSIM Elevated Measurement Comparison (EMC) process (see Section 8.6.1).
As discussed in Section 4.12.8, the MARSSIM EMC process might not apply to discrete
radioactive particles; the survey design team should use the DQO process to address such
particles in surface soils or building surfaces. More discussion about discrete radioactive
particles and their consideration during radiological survey activities can be found in Section 4.12.8.
The MARSSIM user should remember five things about release criteria and DCGLs:
•	The fundamental release criteria are dose or risk based and cannot be measured.
•	The fundamental release criteria are translated into measurable DCGLs.
•	The determination of acceptable DCGLs must be coordinated with the regulator.
•	The derivation of radionuclide-specific DCGLs is beyond the scope of MARSSIM.
•	The application of radionuclide-specific DCGLs to derive operational DCGLs for FSSs is the responsibility of the MARSSIM user.
2	The "W" in DCGLw colloquially refers to "wide-area" or "average." The "EMC" in DCGLemc refers to the Elevated Measurement Comparison.
3	The mean is the sum of the values divided by the number of measurements and is commonly called the average.
4.5.3 Applying DCGLs
This section focuses on introducing the application and modifications of DCGLs to derive operational DCGLs for various situations commonly encountered while planning FSSs. An operational DCGL is any modification or combination of radionuclide-specific DCGLs used to derive a measurable quantity (e.g., a gross beta activity DCGL). The simplest application of a DCGL is when a single radionuclide is distributed uniformly throughout a survey unit. When multiple radionuclides are present in a survey unit, either (1) the ratios of the concentrations of the radionuclides are roughly constant (correlated), or (2) the concentrations are unrelated. There are statistical tests that can be performed to calculate the degree of correlation among the concentrations. Ultimately, sound judgment must be used when interpreting the results of the calculations. If there is no physical reason for the concentrations to be correlated, then they are likely not. However, if there is sound evidence of correlation, then that evidence should be used. The survey design team should consult closely with the appropriate regulatory agency during the design phase.
Fundamentally, the measurement of residual radioactive material involves one or more of the following:
•	radionuclide-specific analyses
•	gross activity measurements
•	external radiation measurements
The choice of the operational DCGL depends on the types of measurements being made. If multiple radionuclides are considered using gross activity measurements, then it might be acceptable to use the smallest DCGL of the radionuclides present. To use surrogate measurements to demonstrate compliance, all significant radionuclides should be identified, the contributions of the various radionuclides should be known, and DCGLs should be developed for each of the radionuclides of concern. If there is a well-established correlation between radionuclide concentrations, then a weighted gross activity DCGL or surrogate-based DCGL might be acceptable. If no correlation exists or a combination of the above options is proposed, then the unity rule must be used.
4.5.3.1	DCGLs for a Single Radionuclide
For a single radionuclide, compliance can be easily demonstrated if all the measurements in the
survey unit are below the DCGL. Otherwise, an appropriate statistical test must be used. This is
straightforward when there is one radionuclide (e.g., Sr/Y-90) where radioactive decay products
are included in the DCGL. If the radioactive decay products are not included in the DCGL, and
the radioactive decay products are present, the survey team must use one of the methods
outlined below. Additionally, the methods described below can be used for radionuclides that
are not part of the same decay chain (e.g., presence of a mix of fission products, such as Sr-90
and Cs-137).
If a DCGL for the parent of a serial decay chain includes contributions from the progeny, then
the direct application of that DCGL is possible. It is incumbent on the design team to determine
if the radionuclides of concern are parents of a decay series and if progeny are accounted for in
all DCGLs. For example, values for natural thorium (Th-nat) and natural uranium (U-nat)
typically include progeny; however, the design team must confirm this for each case. For
information on serial radioactive decay, see Section 4.5.3.8.
4.5.3.2	Most Conservative DCGL Approach for Multiple Radionuclides
If there are multiple radionuclides in a survey unit, then it might be possible to use the lowest
(most restrictive) DCGL. Note that if DCGLmin is the lowest of the DCGLs, then Equation (4-6)
applies, and DCGLmin may be applied to the total activity concentration rather than using the
unity rule.
C_1/DCGL_1 + C_2/DCGL_2 + ... + C_n/DCGL_n ≤ (C_1 + C_2 + ... + C_n)/DCGL_min ≤ 1     (4-6)
The goal is then to demonstrate that the ratio of the total concentration of all radionuclides to
DCGLmin is less than 1, or alternatively that the total concentration of all radionuclides is less
than DCGLmin. Although this option may be considered, in many cases it will be too conservative
to be useful. Furthermore, the ease of detection must be taken into account during the DQO
process if use of the DCGLmin is being considered.
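The following Python sketch illustrates the conservative simplification in Equation (4-6) under assumed values; the radionuclides, concentrations, and DCGLs are hypothetical examples, not MARSSIM defaults.

    # Minimal sketch of the most conservative DCGL approach, Equation (4-6).
    # Hypothetical values; not a MARSSIM-prescribed implementation.

    dcgls = {"Co-60": 3.8, "Cs-137": 6.0, "Sr-90": 8.0}            # pCi/g, illustrative
    concentrations = {"Co-60": 0.5, "Cs-137": 1.2, "Sr-90": 0.8}   # pCi/g, illustrative

    dcgl_min = min(dcgls.values())               # most restrictive DCGL
    total_conc = sum(concentrations.values())    # total activity concentration

    # If total/DCGL_min <= 1, the sum of fractions (which cannot exceed it) is also <= 1.
    print(f"Total = {total_conc:.2f} pCi/g, DCGL_min = {dcgl_min:.2f} pCi/g")
    print("Passes conservative check" if total_conc <= dcgl_min
          else "Fails conservative check; evaluate with the full unity rule")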
4.5.3.3	DCGLs for Multiple Radionuclides in Known Ratios (Surrogate Measurements)
For sites with multiple radionuclides, it may be possible to measure just one of the radionuclides
and still demonstrate compliance for all radionuclides present by using surrogate
measurements. If there is an established ratio among the concentrations of the radionuclides in
a survey unit, then the concentration of every radionuclide can be expressed in terms of any
one of them. The measured radionuclide is often called a surrogate radionuclide for the others.
In this case, the unity rule can be used to derive a new, modified DCGL for the surrogate
radionuclide, which accounts for the dose or risk contributions of the other radionuclides that are
not measured.
The fundamental aspect of the unity rule is that the sum of the ratios of the concentrations to the
DCGLs for each radionuclide should be less than or equal to one, as shown in Equation (4-7):
Σ_{i=1}^{n} C_i / DCGL_i ≤ 1     (4-7)
where
•	C_i is the concentration of the ith radionuclide.
•	DCGL_i is the DCGL of the ith radionuclide.
The terms in the denominator are the original, unmodified DCGLs for all the radionuclides in the
survey unit. However, when using a surrogate radionuclide, the design team needs to ensure
that the DCGL for the surrogate radionuclide is modified (DCGLs-mod) to account for the
presence of all the radionuclides. This is done by applying the unity rule as shown in
Equation (4-8):
C_S / DCGL_S-mod ≤ 1     (4-8)
where
•	C_S is the concentration of the surrogate radionuclide.
•	DCGL_S-mod is the modified DCGL for the surrogate radionuclide.
Here, DCGLs-mod is the DCGL for the surrogate radionuclide modified so that it represents all
radionuclides that are present in the survey unit. The DCGLs-mod is a variation of the unity rule
that uses established ratios as shown below in Equation (4-9):
DCGL_S-mod = 1 / (1/DCGL_S-unmod + R_2/DCGL_2 + ... + R_i/DCGL_i + ... + R_n/DCGL_n)     (4-9)
where
•	DCGL_S-unmod is the DCGL of the surrogate radionuclide before modification.
•	DCGL_i is the DCGL of the ith radionuclide for i = 2, ..., n.
•	R_i is the established ratio of the concentration of the ith radionuclide to the concentration of the surrogate radionuclide for i = 2, ..., n.
DCGLs-mod is then used for survey design purposes described in Chapter 5. An example
calculation of a surrogate DCGL and additional discussion are shown in Section 4.12.2.1.
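As an illustration of Equation (4-9), the short Python sketch below computes a modified surrogate DCGL for a hypothetical case in which one radionuclide is measured as a surrogate for another; the nuclides, ratio, and DCGL values are assumptions for demonstration only.

    # Minimal sketch of a modified surrogate DCGL, Equation (4-9).
    # Hypothetical inputs; site-specific values and regulatory concurrence are required.

    def modified_surrogate_dcgl(dcgl_surrogate_unmod, others):
        """others: list of (R_i, DCGL_i) pairs for the unmeasured radionuclides."""
        denominator = 1.0 / dcgl_surrogate_unmod + sum(r / d for r, d in others)
        return 1.0 / denominator

    # Assume Cs-137 is measured as a surrogate for Sr-90 at an established ratio.
    dcgl_cs137 = 6.0         # pCi/g, hypothetical unmodified surrogate DCGL
    others = [(0.5, 8.0)]    # Sr-90: ratio to Cs-137 of 0.5, DCGL of 8.0 pCi/g (hypothetical)

    print(f"DCGL_S-mod = {modified_surrogate_dcgl(dcgl_cs137, others):.2f} pCi/g")
    # 1 / (1/6.0 + 0.5/8.0) is approximately 4.36 pCi/g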
This scheme is applicable only when radionuclide-specific measurements of the surrogate
radionuclide are made. It is unlikely to apply in situations where the surrogate radionuclide
appears in background, as background variations would increase the uncertainty in the
calculation of the surrogate measurements to unacceptable levels.
When using surrogates, it is often difficult to establish a consistent ratio between two or more
radionuclides. Rather than follow prescriptive guidance on acceptable levels of variability for the
surrogate ratio, a more reasonable approach may be to review the data collected to establish
the ratio and to use the DQO process to select an appropriate ratio from that data. The DQO
process should be used to assess the feasibility of use of surrogates. The benefit of using the
surrogate approach is avoiding the need to perform costly wet chemistry analyses on each
sample. This benefit should be considered relative to the difficulty in establishing the surrogate
ratio, as well as the potential consequence of unnecessary investigations that result from
decision errors, which may arise from using a "conservative" surrogate ratio (i.e., determining
that the site is dirty when the site is clean). Selecting a conservative surrogate ratio ensures that
potential exposures from individual radionuclides are not underestimated. The surrogate method
can only be used with confidence when dealing with the same media in the same
surroundings—for example, soil samples with similar physical and geological characteristics.
The planning team will need to consult with the regulatory agency for concurrence on the
approach used to determine the surrogate ratio.
The potential for shifts or variations in the radionuclide ratios means that the surrogate method
should be used with caution. Physical or chemical differences between the radionuclides may
produce different migration rates, causing the radionuclides to separate and changing the
radionuclide ratios. Remediation activities have a reasonable potential to alter the surrogate
ratio established prior to remediation. MARSSIM recommends that when the ratio is established
prior to remediation, additional post-remediation samples should be collected to ensure that the
data used to establish the ratio are still appropriate and representative of the existing site
condition. If these additional post-remediation samples are not consistent with the pre-
remediation data, surrogate ratios should be re-established.
4.5.3.4 Gross Activity DCGLs for Multiple Radionuclides in Known Ratios
For situations where multiple radionuclides with their own DCGLs are present, a gross activity
DCGL can be developed. This approach enables field measurement of gross activity
(e.g., Bq/m2), rather than determination of individual radionuclide activity, for comparison to the
DCGL. The gross activity DCGL for surfaces with multiple radionuclides is calculated as follows:
1.	Determine the relative fraction, f_i, of the total activity contributed by each of the n radionuclides present for i = 1, ..., n.
2.	Obtain the DCGL_i for each ith radionuclide present for i = 1, ..., n.
3.	Substitute the values f_i and DCGL_i in the following equation (Equation (4-10)) for i = 1, ..., n.
DCGL_gross = 1 / (f_1/DCGL_1 + f_2/DCGL_2 + ... + f_i/DCGL_i + ... + f_n/DCGL_n)     (4-10)
This process can be used to calculate a gross activity DCGL to be used as a DCGLw or a
DCGLemc. See Appendix 0.4 for the derivation. The example in Section 4.12.2.3 illustrates the
calculation of a gross activity DCGL.
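A brief Python sketch of Equation (4-10) follows; the relative fractions and DCGL values are hypothetical, and site-specific values would come from the DQO process and the applicable regulator.

    # Minimal sketch of a gross activity DCGL, Equation (4-10).
    # Hypothetical relative fractions and DCGLs; illustration only.

    def gross_activity_dcgl(fractions, dcgls):
        """fractions[i] is the relative fraction of total activity for nuclide i (sums to 1)."""
        return 1.0 / sum(f / d for f, d in zip(fractions, dcgls))

    fractions = [0.6, 0.4]   # e.g., 60% Cs-137 and 40% Co-60 of total activity (hypothetical)
    dcgls = [6.0, 3.8]       # corresponding DCGLs in pCi/g (hypothetical)

    print(f"DCGL_gross = {gross_activity_dcgl(fractions, dcgls):.2f} pCi/g")
    # 1 / (0.6/6.0 + 0.4/3.8) is approximately 4.87 pCi/g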
Just as in the case of surrogate radionuclides, note that Equation (4-10) might not work for sites having unknown or highly variable relative fractions of radionuclides throughout the site. In these situations, the best approach may be to select the most conservative surface DCGL from the mixture of radionuclides present (Section 4.5.3.2) or to use the unity rule (Section 4.5.3.5). If the radionuclide with the most restrictive DCGL cannot be measured or is hard to detect with field instruments, the DQOs must be revisited to determine the best approach. If the mixture contains radionuclides that cannot be measured using field survey equipment, laboratory analyses of surface materials may be necessary. Check with the regulator on whether the use of a gross activity DCGL is appropriate for the site.
4.5.3.5 DCGLs for Multiple Radionuclides with Unrelated Concentrations
If the concentrations of the different radionuclides appear to be unrelated in the survey unit, the surrogate approach cannot be used. There is little choice but to measure the concentration of each radionuclide and use the unity rule. The alternative would involve performing gross measurements (e.g., alpha or beta) and applying the most restrictive DCGLw to all radionuclides.
Recall from Section 4.4 that the fundamental release criterion is that the sum of the radiation doses from all the residual radionuclides in a survey unit must be less than or equal to the dose or risk criteria. In terms of DCGLs, the unity rule states that for a survey unit to meet the release criteria, the sum of the ratios of the concentrations of each radionuclide to their respective DCGLs must be less than or equal to one, as shown in Equation (4-11):

Σ_{i=1}^{n} C_i / DCGL_i ≤ 1     (4-11)

where
•	C_i is the concentration of the ith radionuclide for i = 1, ..., n.
•	DCGL_i is the DCGL of the ith radionuclide for i = 1, ..., n.
By using the unity rule in this manner, the design team creates an effective DCGL of 1. Note that the DCGL is no longer expressed as a concentration; it is a unitless sum of fractions. To apply the unity rule, the design team must calculate the sum of the ratios (SOR) or weighted sum (T) of the ratios in the survey unit for each quantity measured at a given location, as illustrated in Equation (4-12) and Example 4.12.4.

T = C_1/DCGL_1 + C_2/DCGL_2 + ... + C_i/DCGL_i + ... + C_n/DCGL_n     (4-12)

where
•	C_i is the concentration in the sample of the ith radionuclide for i = 1, ..., n.
•	DCGL_i is the DCGL of the ith radionuclide for i = 1, ..., n.
In a given sample, the concentration of each radionuclide is divided by its DCGL
(normalization). This weighted sum, T, and its standard deviation, σ(T), will be used in the statistical tests to determine whether a survey unit can be released. The standard deviation in the weighted sum is calculated as shown in Equation (4-13):

σ(T) = √[ (σ(C_1)/DCGL_1)² + (σ(C_2)/DCGL_2)² + ... + (σ(C_i)/DCGL_i)² + ... + (σ(C_n)/DCGL_n)² ]     (4-13)

where
•	σ(C_i) is the estimate of uncertainty in the concentration in the sample of the ith radionuclide for i = 1, ..., n.
•	DCGL_i is the DCGL of the ith radionuclide for i = 1, ..., n.
Note that if there is a fixed ratio between the concentrations of some radionuclides but not
others, a combination of the methods in Sections 4.5.3.4 and 4.5.3.5 may be used. The
appropriate value of the DCGL with the concentration of the measured surrogate radionuclide
should replace the corresponding terms in Equations (4-12) and (4-13). Example 4.12.4
illustrates the calculation of the weighted sum and its associated uncertainty for two
radionuclides.
During the planning stage, data from characterization, scoping, or other surveys can be used to
estimate the values of T and σ(T) in the survey unit to determine the number of samples
needed for the statistical tests.
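For illustration, the following Python sketch computes the weighted sum T (Equation (4-12)) and its standard deviation σ(T) (Equation (4-13)) for a single measurement location; all concentrations, uncertainties, and DCGLs are assumed example values.

    import math

    # Minimal sketch of the weighted sum T (Equation (4-12)) and its uncertainty
    # sigma(T) (Equation (4-13)) for one sample. All inputs are hypothetical.

    def weighted_sum(concs, sigmas, dcgls):
        """Return (T, sigma_T); inputs are parallel lists, one entry per radionuclide."""
        t = sum(c / d for c, d in zip(concs, dcgls))
        sigma_t = math.sqrt(sum((s / d) ** 2 for s, d in zip(sigmas, dcgls)))
        return t, sigma_t

    concs = [2.0, 1.5]    # pCi/g, e.g., Cs-137 and Sr-90 (hypothetical)
    sigmas = [0.3, 0.4]   # one-sigma measurement uncertainties (hypothetical)
    dcgls = [6.0, 8.0]    # corresponding DCGLs (hypothetical)

    t, sigma_t = weighted_sum(concs, sigmas, dcgls)
    print(f"T = {t:.2f}, sigma(T) = {sigma_t:.3f}")   # T ≈ 0.52, sigma(T) ≈ 0.071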
Although this chapter does not discuss interpreting the data from an FSS (Chapter 8), a note on
how T is used can be helpful at this point. If the sum of the normalized concentrations is below
1.0 for every sample in a survey unit, compliance has been demonstrated. If a survey unit has
several individual locations where Tj exceeds 1, this does not mean that the survey unit fails to
meet the release criteria. If any individual Tj exceeds 1, then, as for the case for a single
radionuclide, an appropriate statistical test and the elevated measurement comparison test must
be performed.
4.5.3.6 The Use of External Radiation Measurements as a Surrogate
In lieu of using measurements of radionuclide concentrations to determine compliance with
release criteria, the DQO process can be used to determine if in situ measurements of external
radiation levels (e.g., exposure rates) can be used, particularly for radionuclides that deliver the
majority of their dose through the direct radiation pathway. This approach can be desirable
because external radiation measurements are generally easier to make and less expensive than
measuring radionuclide concentrations.
This method requires that a consistent ratio between the surrogate and unmeasured radionuclides be
established. The appropriate exposure rate DCGL could also account for radionuclides that do
not deliver the majority of their dose through the direct radiation pathway. This is accomplished
by determining the fraction of the total activity represented by radionuclides that do deliver the
majority of their dose through the direct radiation pathway and weighting the exposure rate limit
by this fraction (see surrogate discussion above). Note that the previously mentioned
considerations for establishing consistent ratios also apply to this surrogate approach. The
regulatory agency should be consulted before using this surrogate approach.
4.5.3.7 Small Areas of Elevated Activity
The concept of the elevated measurement comparison and the DCGLemc for small areas of elevated
activity was introduced in Section 4.5.2. The DCGLemc accounts for the smaller area of
elevated residual radioactive material and is equal to or greater than the DCGLw. Recall that the
development of regulatory requirements leading to the establishment of a DCGLemc is beyond
the scope of MARSSIM and is determined strictly through the requirements of regulatory
agencies. Therefore, it is important to work with the applicable regulatory agency to determine
whether requirements for establishing a DCGLemc should be consistent with the approach
presented in MARSSIM or those in regulatory documents.
All the methods used to modify individual DCGLs to account for multiple radionuclides can be
used to modify the DCGLemc. When the ratios between the radionuclides are unknown, the unity
rule inequality for the EMC is as shown below in Equation (4-14):
C_1/DCGL_EMC,1 + C_2/DCGL_EMC,2 + ... + C_i/DCGL_EMC,i + ... + C_n/DCGL_EMC,n ≤ 1     (4-14)
In Equation (4-14), C_i is the concentration of the ith radionuclide for i = 1, ..., n, and DCGL_EMC,i is the DCGLemc for the ith radionuclide for i = 1, ..., n.
The use of Equation (4-14) may not be appropriate for scanning. For scanning, minimum
detectable concentration (MDC) considerations are a little more nuanced than for discrete
sampling. For example, when scanning for areas with potentially elevated concentrations of
residual radioactive material, the scan MDC should be below the DCGL—preferably at a fraction
(approximately 50 percent) of the DCGL. In a Class 1 survey unit, the scan MDC should be less
than the DCGLemc. Additional information is provided in Sections 5.3.5.1 and 5.3.5.2. The
radionuclide yielding the lowest detector response may or may not have the most restrictive
DCGL.
As illustrated in Equation (4-15), to use the surrogate (known ratios) method for the elevated
measurement comparison, the DCGLemc for the surrogate radionuclide is replaced by—
DCGL_EMC,S-mod = 1 / (1/DCGL_EMC,S-unmod + R_2/DCGL_EMC,2 + ... + R_i/DCGL_EMC,i + ... + R_n/DCGL_EMC,n)     (4-15)
where
•	DCGL_EMC,S-unmod is the unmodified DCGLemc for the surrogate (first) radionuclide.
•	R_i is the concentration ratio of the ith radionuclide to the surrogate (first) radionuclide for i = 2, ..., n.
•	DCGL_EMC,i is the DCGLemc for the ith radionuclide for i = 2, ..., n.
When dealing with discrete radioactive particles (hot particles), the MARSSIM EMC process is
not valid when the instrumentation dose-to-rate conversion factor modeling assumes a "point
source" as opposed to an "area source" or "plane source." This violates the assumption inherent in the dose or risk model of an activity concentration averaged over some definable area. The FSS planning team should use the DQO process to address discrete radioactive particles, if there is a reasonable potential for them to be present. See Section 4.12.8 for more information on release criteria for discrete radioactive particles.
4.5.3.8 A Note on Serial Radioactive Decay
For decay series (e.g., thorium and uranium) whose radionuclides emit alpha, beta, and gamma radiation, compliance with building surface activity DCGLs may be demonstrated by assessing alpha, beta, or gamma radiations. However, relying on the use of alpha surface measurements often proves problematic because of the highly variable level of alpha attenuation by rough, porous, and dusty surfaces. Beta measurements typically provide a more accurate assessment of thorium and uranium on most building surfaces because surface conditions cause significantly less attenuation of beta particles than of alpha particles; beta measurements, therefore, may provide a more accurate determination of surface activity than alpha measurements. The presence of gamma-emitting radionuclides can introduce uncertainty into the beta measurements, and field measurement techniques need to be used to account for the gamma interference at each beta measurement location.
The relationship of beta and alpha emissions from decay chains or various enrichments of uranium should be considered when determining the surface activity for comparison with the DCGL values. When the initial member of a decay chain has a long half-life, the concentration of radioactive material associated with the subsequent members of the series will increase at a rate determined by the individual half-lives until all members of the decay chain are present at activity levels equal to the activity of the parent. This condition is known as secular equilibrium. Section 4.12.2.1 provides an example of the calculation of a beta activity DCGLw for thorium-232 in equilibrium with its decay products.
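As a rough, hedged illustration of ingrowth toward secular equilibrium, the following Python sketch uses a simplified single-progeny approximation (a full chain would require the Bateman equations); the parent-progeny pair and times are assumed examples.

    import math

    # Simplified sketch of progeny ingrowth toward secular equilibrium beneath a
    # long-lived parent (single-progeny approximation; a full decay chain requires
    # the Bateman equations). The parent-progeny pair and times are assumed examples.

    def ingrowth_fraction(elapsed_days, progeny_half_life_days):
        """Fraction of the parent's activity reached by the progeny after elapsed_days."""
        decay_constant = math.log(2.0) / progeny_half_life_days
        return 1.0 - math.exp(-decay_constant * elapsed_days)

    # Example: Ra-228 (half-life about 5.75 y) growing in under long-lived Th-232.
    half_life_days = 5.75 * 365.25
    for years in (1, 5, 10, 30):
        fraction = ingrowth_fraction(years * 365.25, half_life_days)
        print(f"After {years:>2} y: progeny at {fraction:.0%} of parent activity")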
4.5.4 Investigation Levels
The survey design should include the development of investigation levels during the DQO process for the FSS. A measurement result is compared to an investigation level to indicate when additional action might be necessary (e.g., a measurement that exceeds the DCGLw in a Class 2 area). Investigation levels can be radionuclide-specific levels of radioactive material or an instrument response (e.g., counts per minute). Additional discussions of investigation levels are in Section 5.3.8.
4.5.5 Conclusions
The foregoing discussion of DCGLs highlights the following:
•	Measurements can be made for specific radionuclides, typically through gamma spectrometry or radiochemical analyses.
•	The FSS design team should be familiar with the operational DCGLs and their applications.
•	Gross activity measurements—typically gross alpha or beta concentrations—can be made, especially on surfaces.
•	Measurement of surrogate quantities can be used based on known relationships among the various radionuclides.
•	The use of surrogates or ratios determined from data collected before the FSS must be
approved by the appropriate regulatory authorities, and if remediation or other activities
occur which could affect the ratios, additional support for the assumed ratios or revisions to
the ratios is needed.
•	The unity rule can be used to determine the DCGLs for use in the design of the FSS when
multiple radionuclides/sources are present.
•	If DCGLs are modified for use of surrogates, that modification will affect instrument
selection.
The choice of how the DCGLs are applied and measured affects the statistical methods,
background reference unit selection, and other design features of a survey. For example,
MARSSIM recommends using the Wilcoxon Rank Sum (WRS) test if the radionuclides are
present in the background or if gross activity measurements are made (see Section 8.2.3).
4.6 Area and Site Considerations
4.6.1 Area Classification
Not all areas of a site will have the same potential for residual radioactive material and, accordingly, not all will need the same level of survey coverage to demonstrate compliance with
the established release criteria. The process will be more efficient if the survey is designed so
that areas with higher potential for residual radioactive material (based in part on results of the
HSA in Chapter 3) will receive a higher degree of survey effort. The following is a discussion of
site area classifications.
Non-impacted areas: Areas that have no reasonable potential for residual radioactive material
and do not need any level of survey coverage. Those areas have no radiological impact from
site operations and are typically identified during the HSA (Chapter 3). Background reference
areas are normally selected from non-impacted areas (Section 4.6.3).
Impacted areas: Areas that have some potential for containing residual radioactive material.
They can be classified into three classes:
•	Class 1 areas: Areas that have, or had prior to remediation, a potential for residual
radioactive material above the DCGLw (based on site operating history) or known residual
radioactive material (based on previous radiological surveys). Examples of Class 1 areas
include—
o site areas previously subjected to remedial actions
o locations where leaks or spills are known to have occurred
o former burial or disposal sites
o waste storage sites
o areas with residual radioactive material in discrete solid pieces of material having high
specific activity
Note that areas containing residual radioactive material in excess of the DCGLw prior to
remediation should be classified as Class 1 areas. Justification is not required for a Class 1
designation, unlike a Class 2 or Class 3 designation. The less restrictive the classification,
the greater the justification required.
•	Class 2 areas: Areas that have, or had prior to remediation, a potential for residual
radioactive material or known residual radioactive material but are not expected to exceed
the DCGLw. To justify a Class 2 designation, the existing data (from the HSA, scoping
surveys, or characterization surveys) should provide a high degree of confidence that no
individual measurement would exceed the DCGLw. Other justifications may be appropriate
based on the outcome of the DQO process. Examples of areas that might be classified as
Class 2 for the FSS include—
o locations where radioactive materials were present in an unsealed form
o residual radioactive material potentially along transport routes
o areas downwind from stack release points
o upper walls, roof support frameworks, and ceilings of some buildings or rooms subjected
to airborne radioactive material
o areas where low concentrations of radioactive materials were handled
o areas on the perimeter of former buffer or radiological control areas
•	Class 3 areas: Any impacted areas that are not expected to contain any residual radioactive
material or are expected to contain levels of residual radioactive material at a small fraction
of the DCGLw, based on site operating history and previous radiological surveys. To justify a
Class 3 designation, the existing data (from the HSA, scoping surveys, or characterization
surveys) should provide a high degree of confidence either that there is no residual
radioactive material or that any levels of residual radioactive material are a small fraction of
the DCGLw. Other justifications for an area's classification may be appropriate based on the
outcome of the DQO process. Examples of areas that might be classified as Class 3
include—
o buffer zones around Class 1 or Class 2 areas
o areas with very low potential for residual radioactive material but insufficient information
to justify a non-impacted classification
Classification is a critical step in the survey design process, as well as for the FSS (see
Table 2.2). Class 1 areas have the greatest potential for residual radioactive material and,
therefore, receive the highest degree of survey effort, followed by Class 2 and then Class 3
areas. All areas should be considered Class 1 areas unless some basis for classification as
Class 2 or Class 3 is provided.
The criteria used for designating areas as Class 1, 2, or 3 should be described in the FSS plan.
Compliance with the classification criteria should be demonstrated in the FSS report. A thorough
analysis of HSA findings (Chapter 3) and the results of scoping and characterization surveys
provide the basis for an area's classification. As a survey progresses, reevaluation of this
classification may be necessary based on newly acquired survey data. For example, if residual
radioactive material at concentrations that are a substantial fraction of the DCGLw is identified in
a Class 3 area, an investigation and reevaluation of that area should be performed to determine
if the Class 3 area classification is appropriate. Typically, the investigation will result in part or all
1	of the area being reclassified as Class 1 or Class 2. If survey results identify residual radioactive
2	material in a Class 2 area exceeding the DCGL or suggest that there may be a reasonable
3	potential that residual radioactive material is present in excess of the DCGL, an investigation
4	should be initiated to determine whether all or part of the area should be reclassified as Class 1.
5	More information on investigations and reclassifications is provided in Section 5.3.8.
6	4.6.2 Identification of Survey Units
7	A survey unit is a physical area consisting of structures or land areas of specified size and
8	shape for which a separate decision will be made whether that survey unit exceeds the release
9	criteria. This decision is made as a result of the FSS. Therefore, the survey unit is the primary
10	entity for demonstrating compliance with the release criteria.
11	To facilitate survey design and ensure that the number of survey data points for a specific site
12	are relatively uniformly distributed among areas of similar potential for residual radioactive
13	material, the site is divided into survey units that share a common history or other
14	characteristics or are naturally distinguishable from other portions of the site. A site may be
15	divided into survey units at any time before the FSS. Areas that have been classified can be one
16	survey unit or multiple survey units. For example, HSA or scoping survey results may provide
17	sufficient justification for partitioning the site into Class 1, 2, or 3 areas. Note, however, that
18	dividing the site into survey units is critical only for the FSS; scoping, characterization, and RAS
19	surveys may be performed without dividing the site into survey units.
20	A survey unit cannot include areas that have different classifications. A survey unit's
21	characteristics should be consistent with exposure pathway modeling that is used to convert
22	dose or risk into radionuclide concentrations. For indoor areas classified as Class 1, each room
23	may be designated as a survey unit. Indoor areas may also be subdivided into several survey
24	units of different classification, such as separating floors and lower walls from upper walls and
25	ceilings (and other upper horizontal surfaces) or subdividing a large warehouse based on floor
26	area.
27	Survey units should be limited in size based on classification, exposure pathway modeling
28	assumptions, and site-specific conditions. The suggested areas for survey units are provided in
29	Table 4.1.
30	Table 4.1: Suggested Area for Survey Units

	Classification    Structures (Floors, Walls, and Ceilings)    Land Areas
	Class 1           Up to 100 m2                                Up to 2,000 m2
	Class 2           Up to 1,000 m2                              Up to 10,000 m2
	Class 3           No Limit                                    No Limit

31	Abbreviation: m2 = square meters
32	The limitation on survey unit size ensures that the density of the measurements/samples is
33	commensurate with the potential for residual radioactive material in excess of the DCGLw. The
rationale for selecting a larger survey unit area should be developed using the DQO process
(Section 2.3) and fully documented.
Special considerations may be necessary for survey units with structure surface areas up to
10 m2 or land areas up to 100 m2. In this case, the number of data points obtained from the
statistical tests is unnecessarily large and not appropriate for smaller survey unit areas. Instead,
some specified level of survey effort should be determined based on the DQO process and with
the concurrence of the regulatory agency. For such small survey units, scan-only surveys or in
situ measurement may be more appropriate. The data generated from these smaller survey
units should be obtained based on judgment, rather than on systematic or random design, and
compared individually to the DCGLs.
One example of a special case for FSSs occurs at UMTRCA sites, where the radioactive materials
are from the processing of uranium or thorium ores for their source material content. See
Section 4.12.9 and Appendix O.6 for more details on UMTRCA sites.
4.6.3 Selection of Background Reference Areas
Certain radionuclides may also occur at significant levels as part of background in the media of
interest (e.g., soil, building material). Examples include members of the naturally occurring
uranium, thorium, and actinium series; potassium-40 (40K); carbon-14 (14C); and tritium (3H).
137Cs and other radionuclides are also present in background as a result of fallout (Wallo et al.,
1994). Establishing a distribution of background concentrations is necessary to identify and
evaluate contributions attributable to site operations. Determining background levels for
comparison with the conditions determined in specific survey units entails conducting surveys in
one or more reference areas to define the background radiological conditions of the site.
NUREG-1505 (NRC 1998a) provides additional information on background reference areas.
The recommended site background reference area should have similar physical, chemical,
geological, radiological, and biological characteristics as the survey unit being evaluated.
Background reference areas should be selected from non-impacted areas, but they are not
limited to natural areas undisturbed by human activities. In some situations, a reference area
may be contiguous to the survey unit being evaluated, as long as the reference area does not
have any residual radioactive material resulting from site activities. For example, background
measurements may be taken from core samples of a building or structure surface or pavement.
This option should be discussed with the regulatory agency during survey planning. Reference
areas should not be part of the survey unit being evaluated.
Reference areas provide a location for background measurements that are used for
comparisons with survey unit data. The radioactive material present in a reference area would
ideally be the same as in the survey unit, had the survey unit never been affected by site
operations. If a site includes physical, chemical, geological, radiological, or biological variability
that is not represented by a single reference background area, selecting more than one
reference area may be necessary. Additionally, the concentration of some radionuclides may
vary over short (hours to days), medium (months or years), or long (centuries) time frames.
NUREG-1501 (NRC 1994a) provides more detailed information about sources of temporal
variability and methods to account for this variability.
It may be difficult to find a reference area within a residential or industrial complex for
comparison to a survey unit if the radionuclides of potential concern are naturally occurring.
Background may vary greatly due to different construction activities that have occurred at the
site. Examples of construction activities that change background include—
1	• leveling
2	• excavating
3	• adding fill dirt
4	• importing rocks or gravel to stabilize soil or underlay asphalt
5	• manufacturing asphalt with different matrix rock
6	• using different pours of asphalt or concrete in a single survey unit; layering asphalt over
7	concrete
8	• layering different thicknesses of asphalt, concrete, rock, or gravel
9	• covering or burying old features, such as railroad beds or building footings
10	Background variability may also increase due to the concentration of fallout in low areas of
11	parking lots or under downspouts, where runoff water collects and evaporates. Variations in
12	background of a factor of five or more can occur in the space of a few meters.
13	There are a number of possible actions to address these concerns. NUREG-1505 (NRC 1998a)
14	provides a methodology for considering variability in reference area concentrations. Reviewing
15	and reassessing the selection of reference areas may also be necessary. Selecting different
16	reference areas to represent individual survey units is another possibility. More attention may
17	also be needed in selecting survey units and their boundaries with respect to different areas of
18	potential or actual background variability. More detailed scoping or characterization surveys
19	may be needed to better understand background variability. Using radionuclide-specific
20	measurement techniques instead of gross radioactive material measurement techniques may
21	also be necessary. If a background reference area that satisfies the above recommendations is
22	not available, consultation with the regulatory agency is recommended. Alternate approaches
23	may include using published studies of radionuclide distributions. However, published reports
24	may not truly reflect the conditions at the site.
25	Verifying that a background reference area is appropriate for a survey can be accomplished
26	using the techniques described or referenced in Chapter 8. Verification provides assurance that
27	assumptions used to design the survey are appropriate and defensible. This approach can also
28	prevent decision errors that may result from selecting an inappropriate background reference
29	area.
30	If the radionuclides of interest do not occur in background, or the background levels are known
31	to be a small fraction of the DCGLw (e.g., <10 percent), the survey unit radiological conditions
32	may be compared directly to the specified DCGLw, and reference area background surveys are
33	not necessary. If the background is not well defined at a site and the decision maker is willing to
34	accept the increased probability of incorrectly failing to release a survey unit (Type II error), the
35	reference area measurements can be eliminated and the Sign test performed as described in
36	Section 8.3.
37	4.7 Statistical Considerations
38	The primary practical objective of an FSS is to answer the question, "Can this survey unit
39	be released to the satisfaction of the regulator?" In other words, the design team needs to be
40	able to confidently demonstrate compliance (or non-compliance) with the dose- or risk-based
41	release criteria expressed as a measurable quantity (DCGL). This need for a demonstrable,
42	quantitative confidence necessitates planning for statistical hypothesis testing.
1	The statistical concepts used in MARSSIM were introduced in Section 2.5. This chapter
2	reinforces and builds on those concepts with respect to the design of an FSS. The MARSSIM
3	user should be familiar with the statistical discussions throughout Chapters 2, 5, and 8 and
4	Appendices D and I. Consultations with statisticians can be very valuable for surveys that rely
5	on statistical methods for their design and assessment of the collected data.
6	4.7.1 Basic Terms
7	Before designing an FSS, the planning team should be familiar with the following statistical
8	terms:
9	• sample4 mean
10	• sample standard deviation
11	• sample median
12	• parametric and nonparametric tests
13	o Sign test
14	o Wilcoxon Rank Sum5 (WRS) test
15	o Student's t test
16	• Type I and Type II errors
17	• statistical power
18	• lower boundary of the gray region (LBGR)
19	• upper boundary of the gray region (UBGR)
20	• relative shift (Δ/σ)
21	• null and alternative hypotheses
22	The MARSSIM user will encounter these and other statistical terms many times in this
23	document and while designing, performing, and assessing the results of an FSS. The use of
24	these terms will be kept to a minimum in this chapter, but this in no way diminishes their
25	importance. For this chapter, statistical terms will be defined when they are introduced.
26	4.7.2 Recommended Statistical Tests
27	How well a statistical test meets its objective depends on the difference between the
28	assumptions used to develop the test and the actual conditions being measured. Parametric
29	tests, such as the Student's t test, rely upon the results fitting some known distribution, like a
4	The term "sample" here is a statistical term and should not be confused with laboratory samples. For the calculation
of basic statistical quantities above, data may consist of scan data, direct measurement data, or laboratory sample
data. See also the glossary definition of sample.
5	This test is also called the Mann-Whitney U test, Mann-Whitney-Wilcoxon test, or Wilcoxon-Mann-Whitney test.
1	normal distribution. Nonparametric statistics are recommended in MARSSIM because they are
2	based on less restrictive assumptions than parametric tests. MARSSIM recommends the use of
3	the WRS test if the radionuclides of concern are present in the background or if gross activity
4	measurements are made. If the radionuclides of concern are not present in the background or
5	present only to a slight degree, then the Sign test is recommended.
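
To make these two tests concrete, the following sketch (Python, using the widely available numpy and scipy packages) applies a Sign test and a WRS test to hypothetical survey unit and reference area data. The DCGLw, the data, and the significance level are illustrative assumptions only; the formal MARSSIM test procedures, including how the DCGLw is added to the reference area measurements, are given in Chapter 8 and govern an actual FSS.

    # Illustrative sketch only: hypothetical data and DCGLw; see Chapter 8 for the
    # formal MARSSIM test procedures.
    import numpy as np
    from scipy import stats

    dcgl_w = 100.0                       # hypothetical DCGLw (concentration units)
    survey = np.array([72.0, 81.0, 65.0, 90.0, 78.0, 84.0, 69.0, 75.0, 88.0, 71.0])
    reference = np.array([10.0, 14.0, 9.0, 12.0, 11.0, 13.0, 8.0, 15.0, 10.0, 12.0])

    # Sign test (radionuclide not present in background): count survey unit
    # measurements below the DCGLw and test whether that count is larger than
    # expected by chance alone.
    below = int(np.sum(survey < dcgl_w))
    sign_p = stats.binomtest(below, n=survey.size, p=0.5, alternative="greater").pvalue
    print(f"Sign test: {below}/{survey.size} below DCGLw, p = {sign_p:.4f}")

    # WRS test (radionuclide present in background): the survey unit data are
    # compared against reference area data with the DCGLw added to each reference value.
    adjusted_reference = reference + dcgl_w
    wrs_stat, wrs_p = stats.mannwhitneyu(adjusted_reference, survey, alternative="greater")
    print(f"WRS test: U = {wrs_stat:.1f}, p = {wrs_p:.4f}")

A small p-value in either case supports rejecting the null hypothesis that the survey unit exceeds the release criterion (see Section 2.5 for the hypothesis framework used in MARSSIM).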
6	4.7.3 Considerations on the Choice of a Statistical Test
7	The choice of a statistical test should be part of the DQO process during the design phase of
8	the FSS. The choice of the statistical test is influenced by how the DCGLs are expressed (gross
9	activity, radionuclide-specific, sum or ratios [unity rule]), the distribution of residual radioactive
10	material in the survey unit (relatively uniform vs. small areas of elevated activity), number of
11	reference units needed, etc. Concurrently, the number of samples needed depends on the
12	DCGL, the standard deviation of the residual radioactive material in both the survey and
13	reference units, the desired confidence in the conclusions (Type I and Type II errors), and the
14	statistical test under consideration (see Sections 5.3.3 and 5.3.4 for details, and see example
15	calculations in Sections 4.12.3 and 4.12.4). Ensuring a reasonable level of confidence that any
16	areas of elevated activity are detected might require additional samples.
17	Once the FSS is completed, the assumptions used to select the statistical test are examined to
18	determine whether the conditions are met for the test. As part of the DQO process, the design
19	team should plan for the possibility that the initial assumptions were not correct.
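
As a rough illustration of how these planning quantities interact, the sketch below computes the relative shift and an approximate Sign test sample size from an assumed DCGLw, LBGR, measurement standard deviation, and Type I/II error rates. The normal-approximation relation shown is only a planning sketch; the number of measurements actually used should come from the procedures in Sections 5.3.3 and 5.3.4, including the additional measurements MARSSIM recommends to allow for lost or unusable data.

    # Illustrative sample-size sketch with hypothetical inputs; use Sections 5.3.3-5.3.4 for planning.
    from scipy.stats import norm

    dcgl_w = 100.0            # hypothetical DCGLw (the UBGR)
    lbgr = 50.0               # hypothetical lower boundary of the gray region
    sigma = 30.0              # estimated standard deviation of measurements in the survey unit
    alpha, beta = 0.05, 0.05  # Type I and Type II error rates chosen through the DQO process

    relative_shift = (dcgl_w - lbgr) / sigma
    print(f"Relative shift (Delta/sigma) = {relative_shift:.2f}")

    # Sign p: probability that a single measurement falls below the DCGLw when the
    # true mean equals the LBGR (normal approximation).
    sign_p = norm.cdf(relative_shift)

    # Normal-approximation sample size for the Sign test (before any planning margin).
    n = (norm.ppf(1 - alpha) + norm.ppf(1 - beta)) ** 2 / (4 * (sign_p - 0.5) ** 2)
    print(f"Approximate Sign test sample size: {n:.1f} measurements")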
20	4.7.4 Deviations from MARSSIM Statistical Test Recommendations
21	The guidance and recommendations in MARSSIM are meant to be a set of practices generally
22	acceptable for use in designing an FSS. However, the flexibility of the DQO process allows for
23	the use of more cost-effective methods, if they are acceptable to the regulator. An example is
24	presented in Section 4.12.7.
25	4.7.5 An Important Statistical Note
26	For FSSs, the parameter of interest is the mean concentration of residual radioactive material in
27	a survey unit. The nonparametric statistical tests recommended in MARSSIM are tests of the
28	median value. For data from a right-skewed distribution, the mean can significantly
29	exceed the median. Therefore, the team planning the FSS should include a comparison step in
30	the survey to ensure that the mean is less than the DCGLw. See Section 8.2.2.
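
A quick check of this condition can be scripted as shown below. The data are hypothetical and deliberately right-skewed to show how a single elevated result can pull the mean well above the median even when most individual values are far below the DCGLw.

    # Hypothetical right-skewed survey unit data; the DCGLw is assumed for illustration.
    import numpy as np

    dcgl_w = 100.0
    data = np.array([20.0, 25.0, 22.0, 30.0, 28.0, 24.0, 26.0, 23.0, 27.0, 350.0])

    mean, median = data.mean(), np.median(data)
    print(f"median = {median:.1f}, mean = {mean:.1f}")

    # The nonparametric tests address the median; MARSSIM also expects a direct
    # check that the mean does not exceed the DCGLw (see Section 8.2.2).
    print("mean below DCGLw:", mean < dcgl_w)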
31	4.8 Measurements
32	Based on the potential radionuclides of interest, their associated radiations, how the DCGLs are
33	expressed, the types of media (e.g., soil, structure surfaces), and number of measurements to
34	be evaluated, the detection capabilities of various measurement methods (which consist of a
35	combination of a measurement technique and instrument) must be determined and
36	documented. Note that "measurements" includes both direct (field) measurements and
37	laboratory analyses.
38	4.8.1 Quality Control and Quality Assurance
39	For both field measurements (Chapter 6) and laboratory analyses (Chapter 7), the FSS design
40	team must plan to collect data to evaluate the performance of measurement and analytical
41	methods (including data collection). These data are called measurement and instrument
42	performance indicators. The DQO and MQO processes are used to determine which indicators
1	are important and included in the QAPP. Examples of measurement and instrument
2	performance indicators are shown below:
3	• Instrument background readings: Background readings before and after a series of
4	measurements are used as part of the process to ensure that an instrument was functioning
5	properly.
6	• Instrument response checks: Checking the instrument response with the same source in the
7	same geometry over the course of surveying can help ensure that the instrument was
8	working properly during the survey (a simple acceptance check is sketched after this list).
9	• Field blanks: These are samples prepared in the field using certified clean sand, soil, or
10	other media and sent to the laboratory for analysis. Field blanks are used to assess
11	contamination associated with sampling and laboratory procedures.
12	• Performance evaluation samples: These are used to assess the overall bias and errors in
13	the laboratory's analytical processes.
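
As an indication of how response check data might be screened in practice, the sketch below flags daily check-source readings that fall outside an acceptance band around the response established after calibration. The plus-or-minus 20 percent band and the readings are illustrative assumptions only; the acceptance criteria actually applied should come from the project QAPP and the instrument SOP.

    # Illustrative response-check screening; acceptance band and readings are assumed.
    baseline_cpm = 5200.0      # mean check-source response established after calibration
    tolerance = 0.20           # hypothetical +/- 20 percent acceptance band from the QAPP

    daily_checks = {           # date -> observed check-source response (counts per minute)
        "2024-05-01": 5150.0,
        "2024-05-02": 5330.0,
        "2024-05-03": 4050.0,  # out-of-band reading that should trigger an investigation
    }

    low, high = baseline_cpm * (1 - tolerance), baseline_cpm * (1 + tolerance)
    for day, cpm in daily_checks.items():
        status = "OK" if low <= cpm <= high else "INVESTIGATE"
        print(f"{day}: {cpm:.0f} cpm ({status}; acceptance band {low:.0f}-{high:.0f} cpm)")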
14	4.8.2 Measurement Quality Objectives
15	Although specifically for laboratory analyses, the performance characteristics discussed in the
16	Multi-Agency Radiological Laboratory Analytical Protocols Manual (MARLAP) (NRC 2004)
17	should be considered when establishing Measurement Quality Objectives (MQOs). This list is
18	not intended to be exhaustive:
19	• the method's uncertainty at a specified concentration, usually at the UBGR (expressed as a
20	standard deviation)
21	• the method's detection capability (expressed as the minimum detectable concentration, or
22	MDC)
23	• the method's quantification capability (expressed as the minimum quantifiable concentration,
24	or MQC)
25	• the method's range, which defines the method's ability to measure the radionuclide of
26	concern over some specified range of concentration
27	• the method's specificity, which refers to the ability of the method to measure the
28	radionuclide of concern in the presence of interferences
29	• the method's ruggedness, which refers to the relative stability of method performance for
30	small variations in method parameter values
31	Project-specific method performance characteristics should be developed as necessary and
32	may or may not include the characteristics listed here. When lists of performance characteristics
33	that affect measurability have been identified, the planning team should develop MQOs
34	describing the project-specific objectives for potential measurement techniques. Potential
35	measurement techniques should then be evaluated against the MQOs to determine whether
36	they are capable of meeting the objectives for measurability.
37	The International Organization for Standardization Guide to the Expression of Uncertainty in
38	Measurement (ISO 1993), National Institute of Standards and Technology Technical Note 1297
39	(NIST 1994), MARLAP (NRC 2004), Multi-Agency Radiation Survey and Assessment of
1	Materials and Equipment Manual (MARSAME) (NRC 2009), and Chapter 6 of this manual
2	provide information on determining measurement uncertainty. Chapter 6 of this manual and
3	NRC report NUREG-1507 (NRC 1997a) discuss the concept of detection capabilities and
4	provide guidance on determining detection capabilities and selecting appropriate measurement
5	methods. Although MARSAME and MARLAP include the concept of quantification capability,
6	MARSSIM takes a different approach by incorporating requirements for quantification capability
7	into detection capability with the requirement that the MDC be less than the UBGR and by
8	recommending that the MDC be less than 50 percent of the UBGR (See Chapter 6). Chapter 6
9	also discusses instruments and survey techniques for scans and direct measurements, and
10	Chapter 7 provides information on sampling and laboratory analysis. Appendix H describes
11	typical field and laboratory equipment, plus associated cost and instrument capabilities.
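
The comparison of candidate measurement methods against the MQOs can be organized as a simple screen, as sketched below. The candidate methods, their MDCs and standard uncertainties, the UBGR, and the required measurement method uncertainty are all hypothetical; the screen applies the recommendation noted above that the MDC be less than 50 percent of the UBGR.

    # Hypothetical MQO screen; methods, MDCs, uncertainties, and the UBGR are illustrative only.
    ubgr = 500.0                  # upper boundary of the gray region (Bq/kg), assumed
    required_uncertainty = 150.0  # required method uncertainty at the UBGR (Bq/kg), assumed

    candidate_methods = [
        # (name, MDC in Bq/kg, standard uncertainty at the UBGR in Bq/kg)
        ("In situ gamma spectrometry", 120.0, 90.0),
        ("Gross gamma scan with sampling", 400.0, 200.0),
        ("Laboratory gamma spectrometry of soil samples", 40.0, 60.0),
    ]

    for name, mdc, uncertainty in candidate_methods:
        meets_mdc = mdc < 0.5 * ubgr              # recommended: MDC below 50 percent of the UBGR
        meets_uncert = uncertainty <= required_uncertainty
        verdict = "meets MQOs" if (meets_mdc and meets_uncert) else "does not meet MQOs"
        print(f"{name}: MDC ok={meets_mdc}, uncertainty ok={meets_uncert} -> {verdict}")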
12	4.8.3 Selecting a Measurement Technique
13	Instruments should be identified for each of the three types of measurement techniques planned
14	for the FSS: (1) scanning, (2) direct, and (3) laboratory measurements. Scanning and direct
15	measurements are referred to as field measurements. In some cases, the same instrument or
16	type of instrument may be used for performing several measurement techniques. For example,
17	a gas proportional counter can be used for surface scanning measurements and laboratory
18	measurements of smear samples. Once the instruments are selected, appropriate
19	measurement techniques and standard operating procedures (SOPs) should be developed and
20	documented. The measurement techniques describe how the instrument will be used to perform
21	the required measurements.
22	4.8.3.1 Scanning Measurements
23	Scanning is an in situ measurement technique performed by moving a portable radiation
24	detector at a specified speed and distance next to a surface to detect radiation. Scanning
25	measurements are generally used to locate areas that exceed investigation levels and areas of
26	elevated activity that might otherwise be missed (e.g., by measurements made on a systematic grid).
27	general, MARSSIM does not recommend "scan-only" FSSs. However, through the DQO
28	process and consultation with the regulator, an FSS based on scanning measurements alone
29	might be allowed. Items that should be kept in mind while investigating a scan-only survey are—
30	• data and location logging
31	• reproducibility of the measurements (e.g., fixing a detector at a constant distance from a
32	surface)
33	• MDCs
34	• scanning speed and operator training
35	• data integrity and security
36	• selecting an appropriately sized area for elevated measurement comparison calculations
37	Additional information can be found in Chapter 6.
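
One scanning parameter that usually needs to be pinned down during planning is the observation interval, that is, the time the detector spends over any given spot, which follows directly from the detector dimension and the scan speed. The sketch below computes that interval and the background counts expected within it for assumed values; the scan MDC calculations that build on these quantities, including surveyor efficiency, are described in Chapter 6 and NUREG-1507 (NRC 1997a).

    # Illustrative scan-interval calculation; detector width, scan speed, and background are assumed.
    detector_width_cm = 10.0   # probe dimension in the direction of travel
    scan_speed_cm_s = 5.0      # hypothetical scan speed
    background_cpm = 3000.0    # hypothetical detector background count rate

    observation_interval_s = detector_width_cm / scan_speed_cm_s
    background_counts = background_cpm / 60.0 * observation_interval_s

    print(f"Observation interval: {observation_interval_s:.1f} s")
    print(f"Expected background counts per interval: {background_counts:.1f}")
    # Slower scan speeds lengthen the interval, which generally improves the ability
    # to notice a small elevation during scanning (see Section 6.7 for scan MDC methods).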
38	4.8.3.2 Direct Measurements
39	A direct measurement is an in situ measurement of radioactive material obtained by placing the
40	detector near the surface or media being surveyed for a prescribed amount of time. An
1	indication of the resulting concentration of radioactive material is read out directly. Making direct
2	measurements is analogous to collecting samples, and the results are often treated in a similar
3	manner. Direct measurement of alpha, beta, and gamma radiation for an FSS requires that
4	instruments and techniques be used that meet the DQOs and MQOs (e.g., MDC). When
5	selecting instruments and techniques, the design team needs to consider—
6	• type and amounts of radionuclides potentially present
7	• required detection limits
8	• distance from the surface being monitored and field of view
9	• type of measurement—rate or scaler (integrated) (e.g., counts in 5 minutes)
10	• duration of integrated counts
11	• radiation background (including interferences from nearby radiation sources)
12	All direct measurements and their locations should be recorded. Additional information can be
13	found in Chapter 6.
14	4.8.3.3 Laboratory Measurements
15	When planning for collecting samples as part of an FSS, the design team should use the DQO
16	process to determine the need for sample collection and laboratory analyses. All laboratories
17	under consideration to analyze samples should have written procedures that document their
18	analytical capabilities for the radionuclides of concern and a QA/QC program that documents
19	adherence to established criteria. The survey design team should also consider any appropriate
20	laboratory accreditation. Accreditation, QA/QC, and other appropriate documentation should be
21	available for review by the survey design team (with appropriate restrictions for proprietary or
22	other controlled information). Once a qualified laboratory has been chosen, the design team
23	should involve the laboratory early in the design process and maintain communication
24	throughout execution and data evaluation and interpretation. Chapter 7 contains more
25	information on the sampling and preparation for laboratory measurements.
26	Additionally, MARLAP (NRC 2004) contains extensive information "for the planning,
27	implementation, and assessment of projects that require laboratory analysis of radionuclides."
28	Like MARSSIM, MARLAP aims to provide a flexible approach to ensure that the radioanalytical
29	data are of the right quality and appropriate for the needs of the user.
30	Some items that should be considered when planning for laboratory analyses include the
31	following:
32	• sample media
33	• number of samples
34	• type and number of QC samples
35	• amount of material needed by the laboratory
36	• analytical bias and precision
37	• detection limits
1	• costs
2	• required turnaround time
3	• sample preservation and shipping requirements
4	• measurement documentation requirements
5	• sample tracking needs (e.g., chain of custody requirements)
6	4.8.3.4 Selecting a Radioanalytical Laboratory
7	It is advisable to select a radiochemical laboratory as early as possible in the survey planning
8	process so that it may be consulted on the analytical methodology and the sampling activities.
9	Federal procurement procedures may require additional considerations beyond the method
10	described here. The procurement of laboratory services usually starts with the development of a
11	request for proposal that includes a statement of work describing the analytical services to be
12	procured. The careful preparation of the statement of work is essential to the selection of a
13	laboratory capable of performing the required services in a technically competent and timely
14	manner.
15	Six criteria that should be considered are:
16	• well-documented procedures, instrumentation, and trained personnel to perform the
17	necessary analyses
18	• analysts who are experienced in performing the same or similar analyses
19	• satisfactory performance evaluation results from formal monitoring or accreditation programs
20	• adequate capacity to perform all analyses within the desired timeframe
21	• internal QC program
22	• protocols for method performance documentation, sample tracking and security, and
23	documentation of results
24	The design team should review that laboratory's documentation concerning MDC calculations,
25	reporting procedures, calibrations, QA/QC, and accreditation to ensure that the FSS DQOs will
26	be met. More details can be found in Section 7.4.
27	When samples are collected for laboratory analyses, communications between the project
28	manager, field personnel, and laboratory personnel are vital to a successfully executed FSS.
29	The survey design team should strive to establish communications with the laboratory early in
30	the design process; when this is not possible, a radiochemist or health physicist with
31	radiochemical training should be consulted. Additional information on laboratory
32	communications is in Section 7.3.
33	4.8.4 Selection of Instruments for Field Measurements
34	4.8.4.1 Reliability and Robustness
35	Choose reliable instruments that are suited to the physical and environmental conditions at the
36	site and capable of meeting the MQOs. The MQOs should include the measurement method
uncertainty, which is typically established at the UBGR (usually the DCGLw). The required
measurement method uncertainty is perhaps the most important MQO to be established during
the planning process. Determining a realistic value for the measurement method uncertainty for
field measurements is a challenging calculation, typically requiring the use of specialized
software. However, ensuring that the measurement method uncertainty meets the requirement
set for it at the UBGR will ensure that the measurement method can reliably perform
measurements at the most critical concentration level for the survey.
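
For counting measurements, a first-order estimate of the measurement uncertainty follows from Poisson counting statistics, as sketched below for a hypothetical gross count and background count. A full measurement method uncertainty budget, including efficiency, geometry, and other components, is developed as described in Chapter 6 and MARLAP.

    # Counting-statistics sketch; counts and counting times are hypothetical.
    import math

    gross_counts, gross_time_s = 1500, 60.0   # measurement on the surface or sample
    bkg_counts, bkg_time_s = 900, 300.0       # background measurement

    net_rate = gross_counts / gross_time_s - bkg_counts / bkg_time_s  # counts per second
    # Poisson counting uncertainty propagated to the net count rate:
    u_net_rate = math.sqrt(gross_counts / gross_time_s**2 + bkg_counts / bkg_time_s**2)

    print(f"Net count rate: {net_rate:.2f} +/- {u_net_rate:.2f} cps (1 sigma, counting statistics only)")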
4.8.4.2	Detection Capability
The detection capability (sensitivity) or the ability to detect radiation or radioactive material with
some quantifiable level of confidence is a critical factor in the design of an FSS. This capability
is most often referred to as the MDC for direct measurements, or the scan MDC for scanning
measurements. The formal MARSSIM definition of the MDC is "the a priori activity concentration
for which a specific instrument and technique has a specified probability (typically 95 percent) of
producing a net count (or count rate) above the critical level." Informally, the MDC is the
concentration of radioactive material that can be reliably detected; if radioactive material is
present at the MDC, then the measurement process will detect its presence with the specified
probability (typically 95 percent of the time). The MDC is a function of both the
instrumentation and the technique or procedure being used. For scanning, human factors also
need to be taken into account. Details on how to calculate MDCs are given in Section 6.3. The
design team should be aware that there are other methods to calculate MDCs (especially for
direct and laboratory measurements) discussed in the scientific literature. The DQO process
should be used to determine which method best suits the needs of the FSS. This method and
results should be approved by the appropriate regulatory agency.
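
As a point of reference, the sketch below evaluates the widely used Currie-based approximation for the MDC of a static (direct) measurement from an assumed background and an assumed calibration factor that converts count rate to concentration. MARSSIM's own formulation, including scan MDCs and the treatment of background measurements, is given in Section 6.3 and should govern actual survey design.

    # Currie-based MDC sketch; background, counting time, and calibration factor are assumed.
    import math

    bkg_counts = 400.0       # background counts observed in the counting time below
    count_time_min = 1.0     # counting time (minutes)
    cal_factor = 0.15        # assumed (counts/min) per (Bq/m2): efficiency, probe area, etc.

    # Detection limit in counts (Currie, with the constant 2.71 rounded up to 3):
    detection_limit_counts = 3.0 + 4.65 * math.sqrt(bkg_counts)

    mdc = detection_limit_counts / (count_time_min * cal_factor)  # Bq/m2
    print(f"Approximate MDC: {mdc:.0f} Bq/m2")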
Having low MDCs is valuable when designing the FSS. If measured values are less than the
MDC, then the values can be quite variable and lead to high values for the standard deviation
(σ) of the measured values in the survey unit or reference area. High values for σ can be
accommodated in the statistical tests described in Chapter 8 for the FSS, but a large number of
measurements are needed to account for the variability.
Early in the project, low MDCs help in the identification of areas that can be classified as non-
impacted or Class 3 areas. These decisions are usually based on fewer samples,
and each measurement is evaluated individually. Using an optimistically low estimation of the
MDC (see Section 2.3.5) for these surveys may result in the misclassification of a survey unit
and cleaning up an area with no residual radioactive material or, alternatively, performing an
FSS in an area with residual radioactive material. Selecting a measurement technique with a
well-defined MDC or a conservative estimate of the MDC ensures the usefulness of the data for
making decisions for planning the FSS. For these reasons, MARSSIM recommends that a
realistic or conservative estimate of the MDC be used instead of an optimistic estimate.
4.8.4.3	Dynamic Range
The expected concentration range for a radionuclide of concern can be an important factor in
the overall measurement method performance. Most radiation measurement techniques are
capable of measuring over a wide range of radionuclide concentrations. However, if the
expected concentration range is large, the range should be identified as an important
measurement method performance characteristic, and an MQO should be developed. The MQO
for the acceptable range should be a conservative estimate. This will help prevent the selection
of measurement techniques that cannot accommodate the actual concentration range.
1	4.8.4.4 Calibration
2	Calibration refers to the determination and adjustment of the instrument response in a particular
3	radiation field of known intensity. Proper calibration procedures are essential to providing
4	confidence in measurements made to demonstrate compliance with release criteria. The FSS
5	design team should review and understand Section 6.6.4.
6	The instrument should be calibrated for the radiations and energies of interest at the site
7	(Section 6.6.4). Instrument calibrations should be traceable to an accepted standards
8	organization, such as the National Institute of Standards and Technology (NIST).6 Operational
9	checks of instrument performance should be conducted routinely and frequently to ensure that
10	the instrument response is maintained within acceptable ranges and that any changes in
11	instrument background are not attributable to radioactive contamination of the detector.
12	Considerations for the use and calibration of instruments include—
13	• the radiation type for which the instrument is designed
14	• the radiation energies within the range of energies for which the instrument is designed
15	• the environmental conditions for which the instrument is designed
16	• the influencing factors, such as magnetic and electrostatic fields, for which the instrument is
17	designed
18	• the orientation of the instrument, such that geotropic (gravity) effects are not a concern
19	• the manner in which the instrument is used, such that it will not be subject to mechanical or
20	thermal stress beyond that for which it is designed
21	As a minimum, each measurement system (detector/readout combination) should be calibrated
22	annually, and the response of the detector to a check source should be established following
23	calibration (ANSI 2013). Instruments may require more frequent calibration if recommended by
24	the manufacturer. Recalibration of field instruments is also required if an instrument fails a
25	performance check or if it has undergone repair or any modification that could affect its
26	response. The system should be calibrated to minimize potential errors during data transmission
27	and retransmission. The user may decide to perform calibrations following industry-recognized
28	procedures (ANSI 1997, NCRP 1978, NCRP 1985, NCRP 1991, ISO 1988, HPS 1994a, HPS
29	1994b), or the user can choose to obtain calibration by an outside service, such as a major
30	instrument manufacturer or a health physics services organization. Calibrations should include
31	devices used to determine the position or location of samples or measurements, as
32	recommended by the manufacturer.
33	Additional technical details about instrument efficiencies and example calculations are
34	contained in Section 4.12.5.
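
A routine piece of the calibration record is the instrument efficiency derived from a source of known activity, which is later used to convert field count rates into activity. The sketch below shows that bookkeeping for assumed values; the formal treatment of instrument and surface efficiencies is in Sections 4.12.5 and 6.7.

    # Efficiency bookkeeping sketch; source activity and observed count rates are assumed.
    source_activity_bq = 1000.0     # certified calibration source activity
    emission_fraction = 1.0         # emissions per decay toward the detector geometry (assumed)

    gross_cpm_on_source = 9600.0    # observed with the source in the calibration geometry
    background_cpm = 600.0

    net_cps = (gross_cpm_on_source - background_cpm) / 60.0
    instrument_efficiency = net_cps / (source_activity_bq * emission_fraction)
    print(f"Instrument efficiency: {instrument_efficiency:.3f} counts per emission")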
35	4.8.4.5 Specificity
36	Specificity is the ability of the measurement method to measure the radionuclide of concern in
37	the presence of interferences. To determine whether specificity is an important measurement
6 The NIST policy on traceability can be found here: https://www.nist.gov/calibrations/traceability.
1	method performance characteristic, the planning team will need information about expected
2	concentration ranges for the radionuclides of concern and other chemical and radionuclide
3	constituents, along with chemical and physical attributes of the soil or surface being measured.
4	The importance of specificity depends on—
5	• the chemical and physical characteristics of the soil or surface
6	• the chemical and physical characteristics of the residual radioactive material
7	• the expected concentration range for the radionuclides of concern
8	If potential interferences are identified (e.g., inherent radioactive material, similar radiations), an
9	MQO should be established for specificity.
10	4.8.4.6 Instrumentation Examples
11	Table 4.2 presents a list of common radionuclides along with recommended instruments for
12	field measurement methods that have proven effective based on past survey experience in the
13	decommissioning industry. This table provides a general indication of the detection capability of
14	commercially available instruments for field measurements. As such, Table 4.2 may be used to
15	provide an initial evaluation of instrument capabilities for some common radionuclides at the
16	example DCGLs listed in the table. For example, consider a surface with 241Am. Table 4.2
17	indicates that 241Am is detectable at the example DCGLs and that viable direct measurement
18	instruments include gas-flow proportional (alpha mode) and alpha scintillation detectors.
19	Many radiation detection instruments can be used for both direct and scanning measurements.
20	The example DCGLs in Table 4.2 are given for direct measurements only. Issues of
21	detectability (MDC) for scanning are more complicated than for direct measurements and
22	depend on factors such as human response, height above the surface, and scanning speed.
23	Table 4.2 should not be interpreted as providing specific values for an instrument's detection
24	capability, which is discussed in Section 6.7. In addition, NRC draft report NUREG-1506 (NRC
25	1995) and NUREG-1507 (NRC 1997a) provide further information on factors that may affect
26	survey instrumentation selection.
27	4.8.5 Selection of Sample Collection Methods
28	Sample characteristics—such as sample depth, volume, area, moisture level, and composition,
29	as well as sample preparation techniques that may alter the sample—are important planning
30	considerations for DQOs. Sample preparation may include, but is not limited to, removing
31	extraneous material, homogenizing, splitting, drying, compositing, and doing final preparations
32	of samples. Dose or risk pathway modeling should be representative of actual survey
33	conditions, to the extent practical, and modeling limitations should be well documented and
34	assessed. The sampling method should then consider assumptions made in the dose or risk
35	pathway modeling used to determine radionuclide DCGLs. For example, the actual depth and
36	area of residual radioactivity in the survey unit should be reflected in the modeling, and
37	sampling should be compatible with how the source was represented in the modeling. If a direct
38	measurement or scanning technique is used, it should also consider the compatibility of the
39	technique with the assumptions made in the dose or risk pathway modeling.
Table 4.2: Examples of Field Measurement Instruments

Nuclide       Structure Surfaces:            Land Areas:                    Example Instruments
              Example DCGLa    Detectable    Example DCGLa    Detectable    Surface Activity      Soil Activity     Exposure Rate
              (Bq/m2)                        (Bq/kg)
3H            2.0 x 10^8       No            4.1 x 10^3       No            NDb                   ND                ND
14C           6.2 x 10^6       Yes           4.4 x 10^2       No            GPβ                   ND                ND
54Mn          5.4 x 10^4       Yes           5.6 x 10^2       Yes           GPβ, GM               γS, ISγ           PIC, γS, ISγ
55Fe          7.5 x 10^6       No            3.7 x 10^5       Noc           ND                    ND (ISγ)          ND (ISγ)
60Co          1.2 x 10^4       Yes           1.4 x 10^2       Yes           GPβ, GM               γS, ISγ           PIC, γS, ISγ
63Ni          3.0 x 10^6       Yes           7.8 x 10^4       No            GPβ                   ND                ND
90Sr          1.5 x 10^4       Yes           6.3 x 10^1       Noc           GPβ, GM               ND (GM, GPβ)      ND
99Tc          2.2 x 10^6       Yes           7.0 x 10^2       No            GPβ, GM               ND                ND
137Cs         4.7 x 10^4       Yes           4.1 x 10^2       Yes           GPβ, GM               γS, ISγ           PIC, γS, ISγ
152Eu         —                Yes           3.2 x 10^2       Yes           GPβ, GM               γS, ISγ           PIC, γS, ISγ
226Ra (C)d    —                Yes           2.6 x 10^1       Yes           GPα, αS               γS, ISγ           PIC, γS, ISγ
232Th (C)d    —                Yes           4.1 x 10^1       Yes           GPα, αS, GPβ          γS, ISγ           PIC, γS, ISγ
238U (C)      —                Yes           1.9 x 10^1       Yes           GPα, αS, GPβ, ISγ     γS, ISγ, GPβ      PIC, γS, ISγ
239Pu         —                Yes           8.5 x 10^1       Noc           GPα, αS               ND (ISγ)          ND
241Am         —                Yes           7.8 x 10^1       Yes           GPα, αS               γS, ISγ           PIC, γS, ISγ

Abbreviations: Bq = becquerels; m2 = square meters; kg = kilograms; GPα = gas-flow proportional counter (α mode); GM = Geiger-Mueller survey meter; GPβ = gas-flow proportional counter (β mode); PIC = pressurized ionization chamber; αS = alpha scintillation survey meter; γS = gamma scintillation (gross); ISγ = in situ gamma spectrometry.
a Example DCGLs are provided only for discussion and are based on values given in NRC Report NUREG-1757 (Rev. 2), Volume 1, Tables B.1 and B.2 (NRC 2006). Example DCGLs should not be used in place of approved DCGLs.
b ND = Not detectable.
c Possibly detectable at limits for areas of elevated activity.
d For decay chains having two or more radionuclides of significant half-life that reach secular equilibrium, the notation "(C)" indicates the direct measurement techniques assume the presence of decay products in the chain.
1	Conceptual models reflected in the codes commonly used to derive DCGLs should be
2	understood. For example, if surficial residual radioactive material exists at a thickness less than
3	15 cm (6 inches), commonly used codes either assume or allow the residual radioactive
4	material to be uniformly mixed throughout a larger thickness to simulate such processes as soil
5	mixing due to plowing. Yu et al. (1993) allows both the thickness of contamination and the
6	mixing depth to be specified. NRC (1992b) assumes the residual radioactivity is located in the
7	top 15 cm of soil. Similarly, models may be based on dry weight, which may necessitate either
8	drying samples or data transformation to account for dry weight. The DQOs and subsequent
9	direction to the laboratory for analysis might include removal of material not relevant for
10	characterizing the sample, such as pieces of glass, twigs, rocks, pebbles, or leaves. In all
11	cases, it is important to understand the modeling assumptions and how the data collected will
12	be compared to DCGLs derived from the modeling to ensure the fidelity of the statistical survey
13	results.
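
Where the dose model is expressed on a dry-weight basis, a simple moisture correction of the kind sketched below is often all that is needed. The measured concentration and moisture fraction are hypothetical, and the laboratory's reporting basis (wet versus dry weight) should be confirmed before any correction is applied.

    # Dry-weight correction sketch; measured concentration and moisture content are assumed.
    conc_wet_bq_per_kg = 85.0   # concentration reported on an as-received (wet) basis
    moisture_fraction = 0.18    # mass fraction of water in the as-received sample

    # Removing the water mass increases the concentration expressed per kilogram of dry soil.
    conc_dry_bq_per_kg = conc_wet_bq_per_kg / (1.0 - moisture_fraction)
    print(f"Dry-weight concentration: {conc_dry_bq_per_kg:.1f} Bq/kg")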
14	Both sample depth and area are considerations in determining appropriate sample volume, and
15	sample volume is a key consideration for determining the laboratory MDC. The depth should
16	also correlate with the conceptual model developed in Chapter 3 and upgraded throughout the
17	RSSI process. For example, if data collected during the HSA indicate that residual radioactive
18	material may exist to a certain depth, then samples should be deep enough to support the
19	survey objectives, such as for the scoping or characterization survey. Taking samples as a
20	function of depth might also be a survey design objective, such as for scoping, characterization,
21	or remediation support. Although some models and codes may allow for the input of (or can be
22	manipulated to consider) heterogeneous radionuclide distributions, other models and codes
23	may assume uniform residual radioactivity. In cases where the models are incapable of
24	representing the complexity of the sources, data may need to be processed for use in the
25	model. Impacts associated with the modeling simplifications should be well understood and
26	documented.
27	Additionally, the design team needs to consider both sampling data needs and data quality
28	indicators as determined from the DQO process. The design team should review the information
29	in Section 7.2 when designing the sampling portion of the FSS plan. The decision maker and
30	the survey planning team need to identify the data needs for the survey being performed,
31	including—
32	• type of samples to be collected or measurements to be performed (Chapter 5)
33	• radionuclide(s) of interest (Section 4.3)
34	• number of samples to be collected (Sections 5.3.3-5.3.5)
35	• type and frequency of field QC samples to be collected (Section 4.9)
36	• amount of material to be collected for each sample (Sections 4.7.3 and 7.5)
37	• sampling locations and frequencies (Section 5.3.7)
38	• SOPs to be followed or developed
39	• measurement method uncertainty (Section 6.4)
40	• target detection capabilities for each radionuclide of interest (Section 6.3)
41	• cost of the methods being evaluated (cost per analysis and total cost) (Appendix H)
1	• necessary turnaround time
2	• sample preservation and shipping requirements (Section 7.6)
3	• specific background for each radionuclide of interest (Section 4.5)
4	• DCGL for each radionuclide of interest (Section 4.3)
5	• measurement documentation requirements (Section 5.3.11)
6	• sample tracking requirements (Section 7.8)
7	In addition to the above items, the design team needs to consider the following data quality
8	indicators:
9	• precision
10	• bias
11	• representativeness
12	• comparability
13	• completeness
14	• others as discussed in Section 7.2.2.6
15	See Section 7.2 for a detailed discussion of the DQOs and MQOs for sampling.
16	Under some circumstances, it might be useful to assess the radionuclide concentrations on
17	different size fractions to better assess transport processes assumed in some dose models.
18	Chapters 6 and 7 present more detail regarding the application of these survey planning
19	considerations.
20	4.8.6 Selection of Measurement Techniques
21	In practice, the DQO process is used to obtain a proper balance among the use of various
22	measurement techniques (scanning, direct, and laboratory). In general, there is an inverse
23	correlation between the cost of a specific measurement technique and the detection levels
24	being sought. Depending on the survey objectives, important considerations include survey
25	costs and choosing an appropriate measurement method.
26	A certain minimum number of direct measurements or samples may be needed to demonstrate
27	compliance with the release criteria based on certain statistical tests (see Section 5.3.2).
28	Alternatively, if there is sufficient detection capability and an acceptable level of measurement
29	method uncertainty, a scan-only survey technique can (with the proper application of the DQO
30	process and regulatory approval) be used to demonstrate compliance with the DCGLw. The
31	potential for areas of elevated residual radioactive material may also have to be considered for
32	designing scanning surveys, as the need to identify areas of elevated activity may affect the
33	number of measurements. Some measurements may provide information of a qualitative nature
34	to supplement other measurements. An example of such an application is in situ gamma
35	spectrometry to demonstrate the absence (or presence) of specific radionuclides.
1	Assuming the residual radioactive material can be detected, either directly or by measuring a
2	surrogate radionuclide in the mixture, the next decision point depends on whether the
3	radionuclide being measured is present in background. Gross measurement methods will likely
4	be more appropriate for measuring concentrations of radioactive materials on surfaces in
5	structures, scanning for locations of elevated activity, and determining exposure rates.
6	Radionuclide-specific measurement techniques, such as gamma spectrometry, provide a
7	marked increase in detection capability over gross measurements because of their ability to
8	screen out contributions from other sources. Figure 4.2 illustrates the sequence of steps in
9	determining the type of survey design needed—that is, whether field measurement techniques
10	can be applied at a particular site along with sampling or if a scan-only design is more
11	appropriate. The selection of appropriate instruments for scanning, direct measurement, and
12	sampling and analysis should be survey specific.
13	4.8.7 Data Conversion
14	Radiation survey data are usually obtained in units, such as the number of counts per unit time,
15	which have no intrinsic meaning relative to DCGLs. For comparison of survey data to DCGLs,
16	the survey data from field and laboratory measurements should be converted to DCGL units.
17	Alternatively, the DCGL can be converted into the same units used to record survey results.
18	Either method relies on understanding the instrument response (efficiency). The FSS design
19	team should use the DQO process to determine and document the proper methods used to
20	compare instrument results and DCGLs. Additional details are provided in Sections 4.12.6 and
21	6.7.
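
The conversion itself is usually a short calculation once the efficiency terms are in hand. The sketch below converts a net count rate from a surface activity measurement into Bq/m2, using assumed instrument and surface efficiencies and an assumed probe area, so the result can be compared to a DCGLw expressed in the same units; Sections 4.12.6 and 6.7 give the full treatment.

    # Counts-to-concentration conversion sketch; efficiencies, probe area, and count rates are assumed.
    gross_cpm = 450.0
    background_cpm = 350.0
    instrument_efficiency = 0.20   # counts per particle incident on the detector (assumed)
    surface_efficiency = 0.5       # fraction of emissions leaving the surface toward the detector (assumed)
    probe_area_m2 = 126.0e-4       # 126 cm2 physical probe area expressed in m2

    net_cps = (gross_cpm - background_cpm) / 60.0
    surface_activity = net_cps / (instrument_efficiency * surface_efficiency * probe_area_m2)

    dcgl_w = 2000.0                # hypothetical DCGLw in Bq/m2
    print(f"Surface activity: {surface_activity:.0f} Bq/m2 (DCGLw = {dcgl_w:.0f} Bq/m2)")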
22	4.8.8 Additional Planning Considerations Related to Measurements
23	4.8.8.1 Selecting a Field Service Provider
24	The survey design team should start the process of selecting a service provider to perform field
25	data collection early in the planning process. Six criteria that should be considered are—
26	• validated SOPs
27	• experience with similar data collection activities
28	• satisfactory performance evaluations or technical review results
29	• adequate capacity to perform all the data collection activities
30	• internal QC program
31	• protocols for method performance documentation, sample tracking and security, and
32	documentation of results
33	More details can be found in Section 6.5.
34	4.8.8.2 Radon Measurements
35	In some cases, radon may be detected within structures that do not contain residual radioactive
36	material; conversely, some structures that contain residual radioactive material may not yield
37	detectable radon or thoron. Consult with your regulator for the applicability of radon or thoron
38	measurements as part of a site survey.
[Figure 4.2 is a flow diagram. Recoverable content: calculate the required MQOs (i.e., detection capabilities and required measurement method uncertainties); evaluate measurement methods relative to the required MQOs; determine whether direct measurements and/or scanning measurements can achieve the required MQOs; design the survey plan for direct measurement, scan survey, and/or sampling accordingly; then select and obtain instruments and calibrate instruments.]
Figure 4.2: Flow Diagram for Field Survey Design
1	If radon is a concern for the FSS, the design team should work with the appropriate regulatory
2	agency to determine the applicability of radon or thoron measurements. Because of the
3	widespread nature of indoor air radon, many states have developed requirements for
4	certification/qualification of people who perform radon services. Therefore, as part of the
5	qualifications for the service provider, determine whether the measurement provider or the
6	laboratory analyzing the measurements is required to be certified by the state or locality where
7	the work is being performed. State radon contacts can be found at
8	https://www.epa.gov/radon/find-information-about-local-radon-zones-and-state-contact-
9	information.
10	More details can be found in Section 6.8.
11	4.8.8.3 Specialized Equipment
12	The survey team must plan for using any specialized equipment other than radiation detectors
13	(e.g., global positioning systems [GPS], local microwave or sonar beacons and receivers, laser
14	positioning systems, etc.). Because these specialized systems are continuously being modified
15	and developed for site-specific applications, it is not possible to provide detailed descriptions of
16	every system. Section 6.9 provides examples of specialized equipment that have been applied
17	to radiation surveys and site investigations.
18	4.9 Site Preparation
19	Site preparation involves obtaining consent for performing the survey, establishing the property
20	boundaries, evaluating the physical characteristics of the site, accessing surfaces and land
21	areas of interest, and establishing a reference coordinate system. Site preparation may also
22	include removing equipment and materials that restrict access to surfaces. The presence of
23	furnishings or equipment will restrict access to building surfaces and add additional items that
24	the survey should address.
25	4.9.1 Consent for Survey
26	When facilities or sites are not owned by the organization performing the surveys, consent from
27	the site or equipment owner should be obtained before conducting the surveys. All appropriate
28	Federal, State, Tribal, and local officials, as well as the site owner and other affected parties,
29	should be notified of the survey schedule. Section 3.6 discusses consent for access, and
30	additional information based on the Comprehensive Environmental Response, Compensation,
31	and Liability Act is available from EPA (EPA 1987c).
32	4.9.2 Property Boundaries
33	Property boundaries may be determined from property survey maps furnished by the owners or
34	from plat maps obtained from city or county tax maps. Large-area properties and properties with
35	obscure boundaries or missing survey markers may require the services of a professional land
36	surveyor. A professional land surveyor can also tie a site radiological survey grid into the
37	existing land survey of the site or to an official State or municipal survey grid. Such a tie-in has
38	the advantage of making the radiological survey grid reproducible in the future.
39	If the radiological survey is only performed inside buildings, a tax map with the buildings
40	accurately located will usually suffice for site/building location designation.
4.9.3 Physical Characteristics of the Site
The physical characteristics of the site will have a significant impact on the complexity,
schedule, and cost of a survey. These characteristics include the number and size of structures,
type of building construction, wall and floor penetrations, pipes, building condition, total area,
topography, soil type, and ground cover. In particular, the accessibility of structures and land
areas (Section 4.9.4) has a significant impact on the survey effort. In some cases, survey
techniques (e.g., in situ gamma spectrometry or scanning surveys discussed in Chapter 6) can
preclude or reduce the need to gain physical access or use intrusive techniques. This should be
considered during survey planning.
4.9.3.1 Structures
Building design and condition will have a marked influence on the survey efforts. The time
involved in conducting a survey of building interior surfaces is essentially directly proportional to
the total surface area, recognizing that upper wall and ceiling areas require more time than floor
and lower wall surveys. For this reason, the degree of survey coverage decreases as the
potential for residual radioactive material decreases. Judgment measurements and sampling,
which are performed in addition to the measurements performed for certain survey designs, are
recommended in areas likely to have accumulated deposits of residual radioactive material. As
discussed in Section 8.5, judgment measurements and samples are compared directly to the
appropriate DCGL.
The condition of surfaces after remedial action may affect the survey process. Removing
radioactive material that has penetrated a surface usually involves removing the surface
material. As a result, the floors and walls of remediated facilities are frequently badly scarred or
broken up and are often very uneven. Such surfaces are more difficult to survey, because it is
not possible to maintain a fixed distance between the detector and the surface. In addition,
scabbled or porous surfaces may significantly attenuate radiations—particularly alpha and low-
energy beta particles. Use of monitoring equipment on wheels is precluded by rough surfaces,
and such surfaces also pose an increased risk of damage to fragile detector probe faces. These
factors should be considered during the calibration of survey instruments; NRC report NUREG-
1507 (NRC 1997a) provides additional information on how to address these surface conditions.
The condition of the building should also be considered from a safety and health standpoint
before a survey is conducted. A structural assessment may be needed to determine whether the
structure is safe to enter.
Expansion joints, stress cracks, drains, and penetrations into floors and walls for piping, conduit,
anchor bolts, etc., are potential sites for accumulation of residual radioactive material and
pathways for migration into subfloor soil and hollow wall spaces. Drains, sewers, and septic
systems can contain residual radioactive material, and wall/floor interfaces are also likely
locations for residual radioactive material. Coring, drilling, or other such methods may be
necessary to gain access for surveying. Intrusive surveying may require permitting by local
regulatory authorities. Additionally, suspended ceilings may cover areas of potential residual
radioactive material, such as ventilation ducts and fixtures. There may be other materials
introduced that were not part of the original construction—such as floor tiles, partitions,
insulation, additional concrete slabs, and paint—that may cover residual radioactive material.
Exterior building surfaces will typically have a low potential for residual radioactive material;
however, there are several locations that should be considered during survey planning. If there
are roof exhausts or roof accesses that allow for radioactive material movement, or if the facility
is proximal to the air effluent discharge points, the possibility for residual radioactive material on
1	the roof should be considered. Because roofs are periodically resurfaced, radioactive material
2	may be trapped in roofing material, and sampling this material may be necessary. Such roof
3	drainage points as driplines along overhangs, downspouts, and gutters are also important
4	survey locations. Roofs may also accumulate radioactive material from fallout, or roof materials
5	may contain elevated levels of naturally occurring radioactive material (e.g., elevated uranium in
6	roof tar). Wall penetrations for process equipment, piping, and exhaust ventilation are potential
7	locations for exterior residual radioactive material. Window ledges and outside exits (doors,
8	doorways, landings, stairways, etc.) are also building exterior surfaces that should be
9	addressed.
10	4.9.3.2 Building Materials
11	In addition to radiological surveys of the building surfaces described in Section 4.9.3.1, it may
12	also be necessary to survey any building materials removed from the building as part of its
13	demolition, remediation, or renovation. Guidance for the design and implementation of
14	radiological surveys of these materials is provided in MARSAME (NRC 2009).
15	4.9.3.3 Land Areas
16	Depending on site processes and operating history, the radiological survey may include varying
17	portions of the land areas. Open land or paved areas with a potential for residual radioactive
18	material should include storage areas (e.g., equipment, product, waste, and raw material), liquid
19	waste collection lagoons and sumps, areas downwind (based on predominant wind directions
20	on an average annual basis, if possible) of stack release points, and surface drainage
21	pathways. Additionally, roadways and railways that may have been used for transport of
22	improperly contained radioactive materials could also have an accumulation of residual
23	radioactive material.
24	Building modifications should be reviewed to assess any expansions that might cover former
25	land disposal areas. Other land areas—such as wetlands, marshlands, or low-lying surface
26	areas—where waste material was used as fill material need to be assessed for potential
27	residual radioactive material. In some instances, the waste material is covered with clean
28	backfill material to grade to ground surface. Archived aerial photos, historical maps, and
29	interviews can be used to assess the potential presence of such areas.
30	Buried piping, underground tanks, fill areas, sewers, spill areas, and septic leach fields that may
31	have received radioactive liquids are locations of possible residual radioactive material that may
32	necessitate sampling of subsurface soil (Section 7.5.3). Information regarding soil type
33	(e.g., clay, sand) may provide insight into the retention or migration characteristics of specific
34	radionuclides. The need for special sampling by coring or split-spoon equipment should be
35	anticipated for characterization surveys.
36	If radioactive waste has been removed, surveys of excavated areas will be necessary before
37	backfilling with clean fill. If the waste is to be left in place, subsurface sampling around the burial
38	site perimeter to assess the potential for future migration may be necessary.
39	Additionally, rivers, harbors, shorelines, and other outdoor areas with a potential for residual
40	radioactive material may require survey activities including environmental media (e.g., sediment
41	and biota) associated with these areas.
4.9.4 Clearing to Provide Access
In addition to the physical characteristics of the site, a major consideration is how to address
difficult-to-access areas that have a potential for residual radioactive material. Difficult-to-access
areas may need significant effort and resources to perform adequate surveys. This section
provides a description of common difficult-to-access areas that may have to be considered. The
level of effort expended to access such areas should be commensurate with the potential for
residual radioactive material. For example, the potential for the presence of residual radioactive
material behind walls should be established before significant effort is expended to remove
drywall.
4.9.4.1 Structures
When necessary, structures and indoor areas should be sufficiently cleared to permit
completion of the survey. Clearing includes providing access to interior surfaces (e.g., drains,
ducting, tanks, pits, ceiling areas, and equipment) by removing covers, disassembly, or other
means of producing adequate openings.
Such building features as ceiling height, construction materials, ducts, pipes, etc., will determine
the ease of accessibility of various surfaces. Scaffolding, cranes, lifts, or ladders may be
necessary to reach some surfaces, and dismantling portions of the building may be required.
The presence of furnishings and equipment will restrict access to building surfaces and add
items that the survey should address. Any remaining equipment indirectly involved in
the process may need to be dismantled to evaluate the radiological status, particularly of
difficult-to-access parts of the equipment. Removing or relocating certain furnishings, such as
laboratory benches and hoods, to obtain access to floors and walls may also be necessary. The
amount of effort and resources dedicated to such removal or relocation activities should be
commensurate with the potential for residual radioactive material. Where the potential is low, a
few spot-checks may be sufficient to provide confidence that covered areas are free of residual
radioactive material. In other cases, complete removal may be warranted. Guidance for the
survey and assessment of materials and equipment is included in the MARSAME Manual.
Piping, drains, sewers, sumps, tanks, and other components of liquid-handling systems present
special difficulties because their interior surfaces are hard to access. Process information,
operating history, and preliminary monitoring at available access points will assist in evaluating
the extent of sampling and measurements included in the survey. Some specialized survey
techniques for drains and sewers have been developed and are effective for the measurement
of some radionuclides.
If the building is constructed of porous materials (e.g., wood, concrete, masonry, etc.) and the
surfaces were not sealed, residual radioactive material may be found in the walls, floors, and
other surfaces. It may be necessary to obtain cores of these surfaces for laboratory analysis.
Another accessibility problem is the presence of residual radioactive material beneath tile or
other floor coverings. This often occurs because the covering was placed over surfaces
containing residual radioactive material, or the joints in tile were not sealed to prevent
penetration. The practice in some facilities has been to "fix" radioactive material (particularly
alpha emitters) by painting over the surface of the affected area. Thus, actions to obtain access
to surfaces, such as removing wall and floor coverings (including paint, wax, or other sealer)
and opening drains and ducts, may be necessary to enable representative measurements of the
residual radioactive material. This material may also require a radiation survey to ensure no
1	radioactive material was transferred during the removal process. If alpha radiation or very low
2	energy beta radiation is to be measured, the surface should be free of overlying material, such
3	as dust and water, which may significantly attenuate the radiations.
4	4.9.4.2 Land Areas
5	If ground cover needs to be removed or if other obstacles limit access by survey personnel or
6	necessary equipment, the time and expense of making land areas accessible should be
7	considered. In addition, contamination control procedures need to be developed to prevent the
8	spreading of radioactive material during ground cover removal or the use of heavy equipment.
9	Whenever possible, the property owner should perform the removal or relocation of equipment
10	and materials that require special precautions to prevent damage or maintain inventory
11	accountability. Clearing open land of brush and weeds will usually be performed by a
12	professional land-clearing organization under subcontract arrangements. However, survey
13	personnel may perform minor land-clearing activities as needed.
14	An important consideration prior to clearing is the possibility of bio-uptake of radionuclides in the
15	plant material to be cleared. Special precautions to avoid exposure of personnel involved in
16	clearing activities may be necessary. Radiological screening surveys should be performed to
17	ensure that cleared material or equipment does not contain residual radioactive material.
The extent of site clearing in specific areas depends primarily on the potential for residual
radioactive material in those areas:
•	Where the radiological history or results of previous surveys indicate a low potential for
residual radioactive material, it may be sufficient to perform only minimum clearing to
establish a reference coordinate system.
•	Where residual radioactive material is known to exist, or a high potential for it exists, the
area should be cleared completely to provide access to all surfaces.
•	Additional clearing may be needed as new findings emerge while the survey progresses.
26	Open land areas may be cleared by heavy machinery (e.g., bulldozers, bushhogs, and
27	hydroaxes). However, care should be exercised to prevent relocation of surface radioactive
28	material or damage to such site features as drainage ditches, utilities, fences, and buildings.
29	Minor land clearing may be performed using manually operated equipment, such as brush
30	hooks, power saws, knives, and string trimmers. Brush and weeds should be cut to the
31	minimum practical height necessary to facilitate measurement and sampling activities
32	(approximately 15 cm). Care should be exercised to prevent unnecessary damage to or removal
33	of mature trees, shrubs, or historical or cultural resources.
34	Potential ecological or cultural damage that might result from an extensive survey should be
35	considered. If a survey is likely to result in significant or permanent damage to environmental or
36	cultural resources, appropriate environmental and cultural analyses should be conducted prior
37	to initiating the survey.
38	4.9.5 Reference Coordinate System
39	4.9.5.1 Establishment
40	Reference coordinate systems are established at the site to—
•	Facilitate the selection of measurement and sampling locations.
•	Provide a mechanism for referencing a measurement to a specific location so that the same
survey point can be located again.
A survey reference coordinate system consists of a grid of intersecting lines referenced to a
fixed site location or benchmark. Typically, the lines are arranged in a perpendicular pattern,
dividing the survey location into squares or blocks of equal area; however, other types of
patterns (e.g., three-dimensional, polar) have been used.
The reference coordinate system used for a particular survey should provide a level of
reproducibility consistent with the objectives of the survey. For example, commercially available
single-frequency GPS devices can typically locate a position to within approximately 5 m, while
dual-frequency receivers and augmentation systems can provide real-time precision on the
order of a few centimeters. On the other hand, a metal bar can be driven into the ground to
provide a long-term reference point for establishing a local reference coordinate system. Some
States have official grid systems, and if such a system exists in a particular State, consideration
should be given to tying a site grid into the official State grid.
Reference coordinate system patterns on horizontal surfaces are usually identified numerically
on one axis and alphabetically on the other axis, or in distances in different compass directions
from the grid origin. Examples of structure interior and land area grids are shown in
Figures 4.3-4.5. Grids on vertical surfaces may include a third designator, indicating position
relative to floor or ground level. Overhead measurement and sampling locations (e.g., ceiling
and overhead beams) are referenced to corresponding floor grids.
For surveys of Class 1 and Class 2 areas, basic coordinate system patterns at 1-2 m intervals
on structure surfaces and at 10-20 m intervals on land areas may be sufficient for the purpose of
identifying FSS locations with a reasonable level of effort. Gridding of Class 3 areas may also
be necessary to facilitate referencing of survey locations to a common system or origin but, for
practical purposes, may typically be at larger intervals (e.g., 5-10 m for large structural surfaces
and 20-50 m for land areas). For the FSS, the required scanning percentages, number of
discrete survey locations for direct measurements, and number of sample locations will depend
on the classification of the survey unit (see Chapter 5).
Reference coordinate systems on structure surfaces are usually marked by chalk lines or paint
along the entire grid line or at line intersections. Land area reference coordinate systems are
usually marked by wooden or metal stakes driven into the surface at reference line
intersections. The selection of an appropriate marker depends on the characteristics and routine
uses of the surface. Where surfaces prevent installation of stakes, the reference line
intersection can be marked by painting.
Three basic coordinate systems are used for identifying points on a reference coordinate
system. The reference system shown in Figure 4.3 references grid locations using numbers on
the vertical axis and letters on the horizontal axis. The reference system shown in Figure 4.4
references distances from the (0,0) point using the compass directions N (north), S (south),
E (east), and W (west). The reference system shown in Figure 4.5 references distances along
and to the R (right) or L (left) of the baseline.
In addition, a less frequently used reference system is the polar coordinate system, which
measures distances along transects from a central point. Polar coordinate systems are
particularly useful for survey designs to evaluate effects of stack emissions, where it may be
1	desirable to have a higher density of samples collected near the stack and fewer samples with
2	increasing distance from the stack.
Figure 4.5 shows an example grid system for an outdoor land area. The first set of digits gives
the perpendicular distance (in meters) from the baseline, with an L or R indicating whether the
point lies to the left or right of the baseline; the second set of digits (separated from the first by a
comma) gives the distance along the baseline from the (0,0) point, expressed in hundreds of
units. Point A in the example of a reference coordinate system for survey of site grounds,
Figure 4.5, is identified as (100R, 2+00) (i.e., 200 m along the baseline and 100 m to the right of
the baseline). Fractional distances between reference points are identified by adding the
distance beyond the reference point, expressed in the same units used for the reference
coordinate system dimensions. Point B on Figure 4.5 is identified as (25R, 1+30).
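
Survey teams that record these designations electronically may find it convenient to convert them to plain x/y offsets. The short sketch below is illustrative only; the function name, the assumption that offsets are in meters, and the label format it accepts are ours rather than anything specified by MARSSIM.

import re

def baseline_label_to_xy(label: str) -> tuple[float, float]:
    """Convert a baseline-style grid label such as '100R, 2+00' or '25R, 1+30'
    into (x, y) offsets in meters: x is signed distance right (+) or left (-)
    of the baseline, y is distance along the baseline from the 0+00 point."""
    offset_part, station_part = [p.strip() for p in label.split(",")]
    m = re.fullmatch(r"(\d+(?:\.\d+)?)([LR])", offset_part)
    s = re.fullmatch(r"(\d+)\+(\d+)", station_part)
    if not m or not s:
        raise ValueError(f"unrecognized grid label: {label!r}")
    x = float(m.group(1)) * (1.0 if m.group(2) == "R" else -1.0)
    y = 100.0 * int(s.group(1)) + int(s.group(2))  # stations are in hundreds of units
    return x, y

# Point A and Point B from Figure 4.5
print(baseline_label_to_xy("100R, 2+00"))  # (100.0, 200.0)
print(baseline_label_to_xy("25R, 1+30"))   # (25.0, 130.0)
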
12	Open land reference coordinate systems should be referenced to a location on an existing State
13	or local reference system or to a U.S. Geological Survey (USGS) benchmark. (This may require
14	the services of a professional land surveyor.) GPS is capable of locating reference points in
15	terms of latitude and longitude (Section 6.9.2 provides descriptions of positioning systems.)
16	Following the establishment of the reference coordinate system, a drawing is prepared by the
17	survey team or the land surveyor. This drawing indicates the reference lines, site boundaries,
18	and other pertinent site features and provides a legend showing the scale and a reference
19	compass direction.
20	4.9.5.2 Quality System Considerations
21	The concept of the quality system was introduced in Section 4.2.2. The process used to
22	develop the reference coordinate system should be recorded in the survey planning
23	documentation (e.g., the QAPP). Any deviations from the requirements developed during
24	planning should be documented when the reference coordinate system is established.
25	When the survey reference coordinate system is referenced to a fixed site location or
26	benchmark on a known geographic coordinate system (GCS) or projected coordinate system
27	(projection) (e.g., a State system), or the survey reference coordinate system itself uses a
28	known GCS or projection rather than one of the three basic coordinate systems described in
29	Section 4.9.5.1, then the following information should be provided about the actual GCS or
30	projection used:
31	• name
32	• units used (e.g., feet, meters, etc.)
33	• zone
34	• datum
35	• spheroid
36	• method used to determine/obtain the coordinates (e.g., GPS)
37	• estimates of the accuracy and precision of the coordinates
38	• transformations used to convert coordinates from one system to another
1	Ideally, this information should be provided in the spatial reference section of the metadata for
2	the GIS layer containing the data. If a GPS was used to obtain the coordinates, then information
3	should be included about the differential corrections made to the original GPS measurements.
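
One way to keep the items above together in machine-readable form is a simple record such as the sketch below; the field names and example values are illustrative assumptions, not a required format.

from dataclasses import dataclass, field

@dataclass
class SpatialReferenceMetadata:
    """Minimal record of the GCS/projection information recommended in Section 4.9.5.2."""
    name: str                  # e.g., a State Plane zone or UTM
    units: str                 # e.g., "meters" or "feet"
    zone: str
    datum: str
    spheroid: str
    coordinate_method: str     # e.g., "GPS with differential correction"
    accuracy_m: float          # estimated accuracy of the coordinates
    precision_m: float         # estimated precision of the coordinates
    transformations: list[str] = field(default_factory=list)

# Illustrative example only; values are not from MARSSIM.
example = SpatialReferenceMetadata(
    name="UTM", units="meters", zone="17N", datum="NAD83", spheroid="GRS80",
    coordinate_method="GPS with differential correction",
    accuracy_m=0.5, precision_m=0.1,
    transformations=["site grid -> UTM 17N"],
)
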
4	It should be noted that the reference coordinate systems described in this section are intended
5	primarily for reference purposes and do not necessarily dictate the spacing or location of survey
6	measurements or samples. Establishment of a measurement grid to demonstrate compliance
7	with the DCGLs is discussed in Section 5.3.7 and Chapter 8.



[Figure 4.3 is a diagram (not reproduced here): an indoor reference grid in which floor/ceiling and wall surfaces are laid out on a single plane, with grid blocks designated by letters (A-L) along one axis and numbers (1-14) along the other; scale bars are given in feet and meters.]
Figure 4.3: Indoor Grid Layout with Alphanumeric Grid Block Designation: Walls and Floors Are Diagrammed as Though They Lay Along the Same Horizontal Plane
[Figure 4.4 is a diagram (not reproduced here): an open land survey grid around a building, with gridlines labeled by distances north (N) and east (E) of the origin; scale bars are given in feet and meters. Point A has grid coordinates 30E, 30N; Point B has grid coordinates 23E, 24N; the shaded block has grid coordinates 10E, 30N. The survey unit boundary and onsite fence are indicated.]
Figure 4.4: Example of a Grid System for Survey of Site Grounds Using Compass Directions
[Figure 4.5 is a diagram (not reproduced here): an open land survey grid within a property boundary, laid out relative to a baseline, with stations 0+00 through 4+00 along the baseline and offsets 200L through 300R perpendicular to it; scale bars are given in feet and meters. Point A has grid coordinates 100R, 2+00; Point B has grid coordinates 25R, 1+30; the shaded block has grid coordinates 200L, 2+00.]
Figure 4.5: Example of a Grid System for Survey of Site Grounds Using Distances Left or Right of the Baseline
4.10 Health and Safety
Health and safety are emphasized as issues potentially affecting the implementation of
MARSSIM surveys. The focus of the health and safety program is minimizing environmental and
physical hazards (e.g., confined spaces, unstable surfaces, heat and cold stress) where these
issues may affect how a survey is designed and performed. Work areas and procedures that
present potential safety hazards must be identified and evaluated to warn personnel of potential
hazards. Personnel must be trained about potential physical and chemical safety hazards
(e.g., inhalation, absorption, ingestion, injection/puncturing) and the potential for injury
(e.g., slips, trips, falls, burns). In addition, the presence or possibility of such environmental
hazards as poison ivy; ticks carrying Lyme disease; and poisonous snakes, spiders, rodents, or
insects should be noted. These hazards can affect the safety and health of the workers, as well
as the schedule for performing the survey. Some physical hazards require special procedures or
precautions. Steep slopes might require special gear for surveyors and instruments or might call
for dispensations from the regulatory agency to reduce or eliminate survey efforts in such areas.
The potential presence of unexploded ordnance (UXO) requires qualified explosive ordnance
disposal personnel to clear the survey unit of UXO and accompany survey personnel during the
survey.
A job safety analysis (JSA) should be performed prior to implementing a survey. The JSA offers
an organized approach to the task of locating problem areas for material handling safety (OSHA
2002). The JSA should be used to identify hazards and provide inputs for drafting a health and
safety plan (HASP). The HASP will address the potential hazards associated with survey
activities and should be prepared concurrently with the survey design. The HASP identifies
methods to minimize the threats posed by the potential hazards. The information in the HASP
may influence the selection of a measurement technique and disposition survey procedures.
Radiation work permits (RWPs) may be established to control access to radiologically controlled
areas. RWPs contain requirements from the JSA, such as dosimetry and personal protective
equipment (PPE), as well as survey maps illustrating predicted dose rates and related
radiological concerns (e.g., removable or airborne radioactive material). Hazard work permits
(HWPs) may be used in place of RWPs at sites with primarily physical or chemical hazards.
The JSA systematically carries out the basic strategy of accident prevention through the
recognition, evaluation, and control of hazards associated with a given job, as well as the
determination of the safest, most efficient method of performing that job. This process creates a
framework for deciding among engineering controls, administrative controls, and PPE for the
purpose of controlling or correcting unsafe conditions. Examples of these controls include—
•	engineering controls, which are physical changes in processes, machinery (e.g., installing
guards to restrict access to moving parts during operation), or storage configuration
(e.g., using shelves in place of piles or stacks)
•	administrative controls, which are changes in work practices and organization
(e.g., restricted areas where it is not safe to eat, drink, smoke, etc.), including the placement
of signs to warn personnel of hazards
•	PPE, which are clothing or devices worn by employees to protect against hazards
(e.g., gloves, respirator, full-body suits)
Corrective measures may incorporate principles of all of the controls listed above. The preferred
method of control is through engineering controls, followed by administrative controls, and then
PPE.
Proper handling procedures for hazardous substances are documented in site-specific health
and safety plans. Compliance with all control requirements is mandatory to maintain a safe
working environment. Personnel must regard control requirements as a framework to facilitate
health and safety, while still taking responsibility for their own well-being. Being wary of safety
hazards remains an individual responsibility, and personnel must be aware of their surroundings
at all times in work areas.
4.11 Documentation
Concurrently with the FSS design, the design team should begin to draft the FSS report. In
many cases, before the FSS is started, the regulator will require a report documenting the
proposed sampling and surveying plan, including ancillary documentation such as the QAPP.
The FSS report should present a complete and unambiguous record of the radiological status of
the survey unit, relative to the established DCGLs. To the extent possible, the report should be
self-contained and include a minimum of information incorporated by reference. Reporting
requirements for the FSS should be developed during planning and clearly documented in the
QAPP. The text below describes some of the information needed for review of an FSS:
Example 1: Information Needed for an FSS Review
A review by the U.S. Nuclear Regulatory Commission of the final status survey (FSS)
documentation is undertaken to "verify that the results of the FSS demonstrate that the site,
area, or building meet the radiological criteria for license termination" (NRC, 2006). The
information needed by the NRC for a review is summarized below. For more details, see
NRC (2006).
•	an overview of the results of the FSS
•	a summary of the derived concentration guideline levels
•	a discussion of any differences from prior submissions
•	a description of the method by which the number of samples was determined for each
survey unit
•	a summary of the values used to determine the number of samples and a justification for
these values
•	the results for each survey unit
•	analytical methods used
•	detection limits
•	estimates of uncertainties or sample standard deviations
•	a description of any changes in initial survey unit assumptions relative to the extent of
residual radioactive material
•	a description of how "as low as reasonably achievable" practices were employed to
achieve final activity levels
In addition to the items above, the design team should have the following information
available (NRC, 2006):
•	the results of previously conducted in-process inspections and confirmatory surveys
•	the licensee's quality assurance/quality control program
•	confirmation that the changes to prior submissions are not significant and are technically
correct
•	issues identified by intervenors and stakeholders, and issues raised in allegations, to
assure such issues have been satisfactorily resolved
•	descriptions of the survey units to determine if any special survey situations are present
•	results of elevated measurement comparisons
•	results of the appropriate statistical tests (e.g., Wilcoxon Rank Sum and Sign tests) to
confirm that results indicate compliance
•	specific parts of the FSS and supporting data that affect the FSS but that were not
available when the decommissioning or license termination plan was approved
4.12 Application of Survey Planning Concepts with Example Calculations
This section is intended to expand on the content presented in this chapter, provide a general
overview, and familiarize the MARSSIM user with the application of the concepts in
Sections 4.4 through 4.7 to planning FSSs.7 Greater detail appears in the chapters that follow.
7 Appendix A contains a detailed example of MARSSIM applied to executing FSS for a single radionuclide. This
example builds on examples in Chapters 5 and 8.
4.12.1 Scenario A or Scenario B?
Occasionally, the design team will need to determine the appropriate scenario for use in
statistical hypothesis testing as the basis for the FSS. Under Scenario A, it is assumed that the
concentration of residual radioactive material equals or exceeds the release criteria. For
Scenario B, it is assumed that the concentration of residual radioactive material meets the
release criteria (i.e., less than the action level [AL]). Historically, MARSSIM recommended the
use of Scenario A, which put the burden of proof that the survey unit met the release criteria on
the individuals designing the survey. In Scenario B, the burden of proof is no longer on the
individuals designing the survey; Scenario B therefore should be used with caution and only in
those situations where Scenario A is not an effective alternative.
The basic problem is one of being able to distinguish residual radioactive material from
background. If a radionuclide has a relatively small DCGLw and is present in the background
with a relatively large variation, then it requires a large number of measurements to determine if
the residual concentration exceeds the DCGLw. The choice of Scenario A or Scenario B should
be based on which null hypothesis is easier to live with if false (NRC 1998a). If the DCGLw is
large relative to the measurement or background variation, then Scenario A should be chosen
(NRC 1998a). This is likely the more common situation. Conversely, if the DCGLw is small
relative to the measurement or background variation, then Scenario B should be chosen
(NRC 1998a).
The MARSSIM user should review the information in Section 5.3.1 and NRC (1998a) for more
information on selecting the appropriate scenario. The remainder of the examples in this section
are based on Scenario A.
4.12.2 DCGL Calculations
4.12.2.1	Decay Series
In this example, the surface activity DCGLw for natural thorium (Th-nat) is 1,000 Bq/m2
(600 dpm/100 cm2), and all of its decay products are in secular equilibrium—that is, for each
disintegration of thorium-232 (232Th), a total of six alpha and four beta particles are emitted in
the thorium decay series. Note that in this example, the surface activity DCGLw of 1,000 Bq/m2
is assumed to apply to the total activity from all members of the decay chain. In this situation,
the corresponding alpha activity DCGLw should be adjusted to 600 Bq/m2 (360 dpm/100 cm2),
and the corresponding beta activity DCGLw to 400 Bq/m2 (240 dpm/100 cm2), in order to be
equivalent to 1,000 Bq/m2 of natural thorium surface activity. For a surface activity DCGLw of
1,000 Bq/m2, the beta activity DCGLw is calculated as shown in Equation (4-16):
DCGLW,β = DCGLW,Total × (fraction of decays that emit βs)
        = (1,000 Bq of chain/m2) × (4 β/s per Bq of 232Th) / (10 Bq of chain per Bq of 232Th)
        = 400 β/s/m2    (4-16)
For this example, the beta activity DCGLw corresponding to the DCGLw for natural thorium is
400 beta particles/second/square meter.
To demonstrate compliance with the beta activity DCGLw for this example, measurements of
beta count rates must be converted to activity using a weighted beta efficiency that accounts for
the energy and yield of each beta particle. For decay chains that have not achieved secular
equilibrium, the relative activities between the different members of the decay chain can be
determined as previously discussed for surrogate ratios.
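
The split described above is simple enough to script. The sketch below assumes the secular-equilibrium emission counts used in this example (10 chain members, 6 alpha and 4 beta emissions per 232Th decay); the function name is ours.

def emission_dcgls(total_chain_dcgl_bq_m2: float, chain_members: int,
                   alphas_per_parent_decay: float, betas_per_parent_decay: float):
    """Split a total-chain surface activity DCGLw into alpha- and beta-emission DCGLs
    (emissions per second per m2), assuming secular equilibrium."""
    parent_bq = total_chain_dcgl_bq_m2 / chain_members   # Bq of 232Th per m2
    return (parent_bq * alphas_per_parent_decay,          # alpha DCGL
            parent_bq * betas_per_parent_decay)           # beta DCGL

alpha_dcgl, beta_dcgl = emission_dcgls(1000.0, chain_members=10,
                                       alphas_per_parent_decay=6, betas_per_parent_decay=4)
print(alpha_dcgl, beta_dcgl)  # 600.0 400.0, matching Equation (4-16)
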
4.12.2.2	Surrogate DCGL
This example illustrates and discusses the application of the surrogate method.
Determining the Surrogate Ratio
Ten soil samples within the survey unit were collected and analyzed for 137Cs and 90Sr to
establish a surrogate ratio. The ratios of 90Sr to 137Cs were as follows: 6.6, 5.7, 4.2, 7.9, 3.0, 3.8,
4.1, 4.6, 2.4, and 3.3. An assessment of this example data set results in a mean 90Sr to 137Cs
surrogate ratio of 4.6, with a standard deviation of 1.7, as shown below using Equations 8.1
and 8.2:
Mean = (6.6 + 5.7 + ... + 2.4 + 3.3) / 10 = 4.6
σ = √{ [(6.6 - 4.6)² + (5.7 - 4.6)² + ... + (2.4 - 4.6)² + (3.3 - 4.6)²] / (10 - 1) } = 1.7
There are various approaches that may be used to develop a surrogate ratio from these data, but
each must consider the variability and level of uncertainty in the data. One may consider the
variability in the surrogate ratio by selecting the 95 percent upper confidence level (UCL) of the
surrogate ratio (to yield a conservative value of 90Sr from the measured 137Cs), which is 8.0 in
this case, as shown below using Equation 8.3:
UCL = 4.6 + 1.96 × 1.7 = 8.0
Similarly, one may select the most conservative value from the data set (in this case, 7.9).
At sites where surrogates are used, a correlation coefficient should be calculated to validate the
relationship between the radionuclides. In addition, the radioactive ingrowth and decay of
radionuclides should be evaluated. Surrogates are most appropriate for sites where the
radionuclides are contained in insoluble particulates. The sources of insoluble particulates
include the following:
•	Sites processing minerals, such as monazite, thorite, thorianite, and zircon: The gamma
radiation from the decay products here is a useful surrogate for the decay chain, and the
insolubility of the minerals precludes changes in the ratios of the parent and progeny.
•	Sites with residual radioactive material from corrosion products from nuclear reactors: In this
case, the gamma radiation from 60Co is typically used as a surrogate for other radionuclides,
including iron-59 (59Fe), iron-55 (55Fe), cobalt-57 (57Co), chromium-51 (51Cr), manganese
(54Mn), nickel-57 (57Ni), and nickel-63 (63Ni). Note that at this kind of site, the shorter-lived
radionuclides will decay more rapidly than the 60Co, and appropriate decay corrections will
need to be evaluated.
•	Sites with plutonium isotopes and americium-241 (241Am): At these sites, the gamma
radiation from 241Am is a useful surrogate for plutonium-239 (239Pu) and plutonium-240
(240Pu). Note that at this kind of site, plutonium-241 (241Pu) will continue to decay to 241Am;
therefore, the appropriate decay corrections will need to be made.
•	Sites with thoriated metal (e.g., nickel, tungsten, or magnesium): The gamma radiation from
thorium progeny here can be used as a surrogate, but a thorough evaluation is required to
verify that sufficient time has passed to permit the thorium and its progeny to be near a state
of secular equilibrium.
Once an appropriate surrogate ratio is determined and approved by the appropriate regulatory
agency, the planning team needs to consider how compliance will be demonstrated using
surrogate measurements. That is, the planning team must modify the DCGL of the measured
radionuclide to account for the inferred radionuclide. This calculation is shown below.
Surrogate DCGL Calculation
The modified DCGL for 137Cs must be reduced using Equation (4-9):
DCGLCs,mod = 1 / (1/DCGLCs + RSr/Cs/DCGLSr)    (4-9)
where DCGLCs is the DCGL of 137Cs, DCGLSr is the DCGL of 90Sr, and RSr/Cs is the ratio of the
concentrations of 90Sr to 137Cs. Assuming that the DCGLSr is 150 Bq/kg, the DCGLCs is
100 Bq/kg, and the ratio of 90Sr to 137Cs is 8 (e.g., from a post-remediation characterization
survey), the modified DCGL for 137Cs (DCGLCs,mod) can be calculated using Equation (4-9):
DCGLCs,mod = 1 / (1/(100 Bq/kg) + 8/(150 Bq/kg)) = 16 Bq/kg
The modified DCGL for 137Cs (DCGLCs,mod) is 16 Bq/kg.
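
The surrogate arithmetic above can be reproduced with a few lines of code. The sketch below recomputes the mean, standard deviation, 95 percent UCL (mean plus 1.96σ, as in Equation 8.3), and the modified 137Cs DCGL from Equation (4-9); the function name is ours.

import statistics

ratios = [6.6, 5.7, 4.2, 7.9, 3.0, 3.8, 4.1, 4.6, 2.4, 3.3]  # 90Sr/137Cs per sample

mean_ratio = statistics.mean(ratios)          # ~4.6 (Equation 8.1)
sigma_ratio = statistics.stdev(ratios)        # ~1.7 (Equation 8.2, n-1 denominator)
ucl_ratio = mean_ratio + 1.96 * sigma_ratio   # ~8.0 (Equation 8.3)

def modified_dcgl(dcgl_measured: float, dcgl_inferred: float, ratio_inferred_to_measured: float) -> float:
    """Equation (4-9): DCGL for the measured radionuclide, reduced to account for
    the inferred radionuclide present at a fixed ratio."""
    return 1.0 / (1.0 / dcgl_measured + ratio_inferred_to_measured / dcgl_inferred)

# 137Cs measured, 90Sr inferred at a ratio of 8; DCGLs of 100 and 150 Bq/kg
print(round(mean_ratio, 1), round(sigma_ratio, 1), round(ucl_ratio, 1))
print(round(modified_dcgl(100.0, 150.0, 8.0)))  # ~16 Bq/kg
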
4.12.2.3	Gross Activity DCGL for Radionuclides in Known Ratios
Determining the Radionuclide Ratios
As with the surrogate ratio method, the relative ratios should be determined through the DQO
process and with regulatory approval.
ratios are applicable to the FSS conditions (e.g., ratio measurements are made just before
starting the FSS).
General Gross Activity DCGL Calculation
For this example, assume that 40 percent of the total surface activity was contributed by a
radionuclide with a DCGL of 8,300 Bq/m2 (5000 dpm/100 cm2), 40 percent by a radionuclide
with a DCGL of 1,700 Bq/m2 (1000 dpm/100 cm2), and 20 percent by a radionuclide with a
DCGL of 830 Bq/m2 (500 dpm/100 cm2). Using Equation (4-10),
DCGLgross = 1 / (f1/DCGL1 + f2/DCGL2 + f3/DCGL3)
          = 1 / (0.40/(8,300 Bq/m2) + 0.40/(1,700 Bq/m2) + 0.20/(830 Bq/m2))
          = 1,900 Bq/m2
the gross activity DCGL is 1,900 Bq/m2 (1,100 dpm/100 cm2).
Note: If the relative amounts (ratios) were derived from data collected before
remediation, then the relative amounts need to be confirmed or verified after
remediation but before the FSS.
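
Because Equation (4-10) is a single expression, it is easily scripted once the fractions and DCGLs are known. A minimal sketch, with a function name of our choosing:

def gross_activity_dcgl(fractions, dcgls):
    """Equation (4-10): gross activity DCGL for radionuclides present in known
    fractions, each with its own DCGL (same units for all DCGLs)."""
    if abs(sum(fractions) - 1.0) > 1e-6:
        raise ValueError("activity fractions must sum to 1")
    return 1.0 / sum(f / d for f, d in zip(fractions, dcgls))

# Example from Section 4.12.2.3 (Bq/m2)
print(round(gross_activity_dcgl([0.40, 0.40, 0.20], [8300.0, 1700.0, 830.0])))  # ~1,900
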
4.12.2.4	Unity Rule DCGL for Radionuclides with Unrelated Concentrations
For a given survey unit, data from previous surveys yield the following radionuclide concentrations:
•	Mean 60Co concentration = 41 ± 32 (1σ) Bq/kg.
•	Mean 137Cs concentration = 188 ± 153 (1σ) Bq/kg.
The DCGLw values for 60Co and 137Cs are 130 Bq/kg (3.5 pCi/g) and 410 Bq/kg (11 pCi/g),
respectively. Since the concentrations of the two radionuclides appear to be unrelated in the
survey unit, the surrogate approach cannot be employed.
The weighted sum (calculated using Equation (4-12)) is
T = CCo-60/DCGLCo-60 + CCs-137/DCGLCs-137 = (41 Bq/kg)/(130 Bq/kg) + (188 Bq/kg)/(410 Bq/kg) = 0.77
The standard deviation of the weighted sum (calculated using Equation 4-13) is
σ(T) = √[ (σ(CCo-60)/DCGLCo-60)² + (σ(CCs-137)/DCGLCs-137)² ]
     = √[ (32 Bq/kg / 130 Bq/kg)² + (153 Bq/kg / 410 Bq/kg)² ]
     = 0.45
The weighted sum would be reported as 0.77 ± 0.45 (1σ). This weighted sum and standard
deviation can be used to determine the number of samples required for the FSS based on the
statistical tests chosen by the design team. See the example in Section 4.12.4.
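
A minimal sketch of the weighted sum and its standard deviation (Equations (4-12) and (4-13)) for the two-radionuclide case above; function names are ours, and the uncertainty propagation assumes the normalized concentrations are uncorrelated:

from math import sqrt

def weighted_sum(concentrations, dcgls):
    """Equation (4-12): sum of each radionuclide concentration divided by its DCGLw."""
    return sum(c / d for c, d in zip(concentrations, dcgls))

def weighted_sum_sigma(sigmas, dcgls):
    """Equation (4-13): standard deviation of the weighted sum."""
    return sqrt(sum((s / d) ** 2 for s, d in zip(sigmas, dcgls)))

# 60Co and 137Cs example: 41 +/- 32 Bq/kg and 188 +/- 153 Bq/kg
T = weighted_sum([41.0, 188.0], [130.0, 410.0])               # ~0.77
sigma_T = weighted_sum_sigma([32.0, 153.0], [130.0, 410.0])   # ~0.45
print(round(T, 2), round(sigma_T, 2))
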
4.12.3 Required Number of Samples for a Single Radionuclide
See Sections 5.3.3 and 5.3.4 for detailed discussions on determining the required number of
data points for the WRS and Sign tests.
4.12.3.1 WRS Test
In this example the following data8 were collected for the survey unit and reference area. Under
consideration are a single radionuclide that is present in the background and a single survey
unit. This process would be repeated for each survey unit and reference area combination.
Because the actual activity units are irrelevant to the example, they will be omitted.
Data from a post-remediation survey are shown in Table 4.3 below:
Table 4.3: Sample Data from a Post-Remediation Survey

            Reference Area    Survey Unit    Difference
Mean =            38.8           189.8          151.1
Median =          38.0           188.0          150.0
σ =                6.6             8.1             NA
The DCGL of concern is 160. The design team settled on alpha of 0.05 and beta of 0.10.
To determine the appropriate number of measurements, the relative shift must be calculated
using Equation (4-17):
Relative Shift = (DCGL - LBGR) / σ    (4-17)
The LBGR is often set at the expected median concentration of the radionuclide. However, in
our example the mean is higher than the median. Because it is conservative to set the LBGR at
the higher value9 (i.e., the expected mean) and to choose the larger of the σ values for the
reference area and survey unit, that is what the design team does using Equation (4-17):
Relative Shift = (160 - 151.1) / 8.1 = 1.1
Referring to Table 5.2, we see that the recommended number of measurements (N/2) is 22
(given an alpha of 0.05 and beta of 0.10).10 This is the number of samples that must be
collected in both the reference area and survey unit, for a total of 44 measurements. Also, this
number accounts for missing or unusable data. The simplest approach is to assign half of those
points to the survey unit and half to the reference area.
4.12.3.2 Sign Test
In this example, the following data were collected for the survey unit. Under consideration are a
single radionuclide that is not present in background (or is present at an insignificant fraction of
the DCGLw) and a single survey unit. The activity levels are compared
directly to the DCGL. This process would be repeated for each survey unit. Because the actual
activity units are irrelevant to the example, they will be omitted.
Data from a post-remediation survey are shown in Table 4.4 below:
Table 4.4: Sample Data from a Post-Remediation Survey

            Survey Unit
Mean =          10.9
Median =        11.5
σ =              3.3
The DCGL of concern is 16. The design team settled on alpha of 0.05 and beta of 0.05.
To determine the appropriate number of measurements, the relative shift must be calculated
using Equation (4-17):
Relative Shift = (DCGL - LBGR) / σ
The LBGR is often set at the expected median concentration of the radionuclide. Because it is
conservative to set the LBGR at the higher of the mean or median, the median is used (see
Equation (4-17)):
Relative Shift = (16 - 11.5) / 3.3 = 1.4
Referring to Table 5.2, we see that the required number of measurements (N) is 20 (given the
alpha of 0.05 and beta of 0.05). This number accounts for missing or unusable data.
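
Values such as those read from Table 5.2 can also be approximated directly from the standard normal-approximation (Noether) sample-size formulas for the WRS and Sign tests. The sketch below is an approximation only: it assumes a 20 percent allowance for missing or unusable data, consistent with the statement above that the tabulated numbers account for such data, and its rounding may differ from the published table by a measurement or two.

from math import ceil, sqrt
from statistics import NormalDist

def wrs_samples_per_area(relative_shift, alpha=0.05, beta=0.10, overage=1.2):
    """Approximate N/2 (measurements in each of the survey unit and reference area)
    for the WRS test, using the normal-approximation formula behind such tables."""
    z = NormalDist().inv_cdf
    p_r = NormalDist().cdf(relative_shift / sqrt(2.0))
    n_total = (z(1 - alpha) + z(1 - beta)) ** 2 / (3.0 * (p_r - 0.5) ** 2)
    return ceil(overage * n_total / 2.0)

def sign_test_samples(relative_shift, alpha=0.05, beta=0.10, overage=1.2):
    """Approximate N for the Sign test, with the same caveats as above."""
    z = NormalDist().inv_cdf
    sign_p = NormalDist().cdf(relative_shift)
    n = (z(1 - alpha) + z(1 - beta)) ** 2 / (4.0 * (sign_p - 0.5) ** 2)
    return ceil(overage * n)

print(wrs_samples_per_area(1.1, alpha=0.05, beta=0.10))  # ~22 per area (44 total)
print(sign_test_samples(1.4, alpha=0.05, beta=0.05))     # ~19-20; the text reads 20 from Table 5.2
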
8	This example is based on data presented in NRC (1998a).
9	Larger values for the LBGR and σ lead to a smaller relative shift that, in turn, leads to a larger number of required
measurements.
10	Using the median value results in a relative shift of 1.2 and an N/2 value of 19.
1	4.12.4 Required Number of Samples for Multiple Radionuclides
2	See Sections 5.3.3-5.3.4 for detailed discussions on determining the required number of data
3	points for the WRS and Sign tests.
4	4.12.4.1 Applying the Unity Rule
5	A design team is tasked with classifying a survey unit according to the potential for
6	contamination and determining the appropriate number of soil samples to take during the FSS.
7	The contaminants are cobalt-60 (60Co) and cesium-137 (137Cs). The DCGLw values are—
8	• DCGLw,Co-60: 130 Bq/kg (3.5 pCi/g)
9	• DCGLw,Cs-137: 410 Bq/kg (11 pCi/g)
10	During the DQO process and with approval of the regulator, the acceptable probability of a
11	Type I error11 (α) is set to 0.05, and the acceptable probability of a Type II error12 (β) is set to 0.10.
12	Because compliance must be demonstrated for more than one radionuclide, and each
13	radionuclide will be measured separately, the unity rule will be used, wherein the concentration
14	of each contaminant is normalized to (divided by) its DCGLw. When this is done, the collective
15	"concentration" of the multiple radionuclides is expressed as the weighted sum of ratios (T) or
16	sum of the ratios (SOR). See Section 4.5.3.5 for details on the SOR.
17	The following data, obtained earlier during the characterization survey, are assumed to be
18	representative of the existing conditions in the survey unit. The data from the characterization
19	survey are shown in Table 4.5.
20	Sign test
21	Although 137Cs is in the background, it is present at such a low concentration13 relative to the
22	DCGL that the planning team decides to "swallow" background and use the Sign test.14
23	Classification and General Observations
24	Both the mean (0.78) and median15 (0.66) of the SOR are less than 1. This indicates that the
25	survey unit might comply with the release criteria without further remediation. That the mean
26	and median differ indicates that the measurements might not be normally distributed. This
27	supports our decision to analyze the FSS data with a nonparametric test (Sign test).
28	That the values of the SOR for several samples (1, 4, 5, and 9) exceed 1 indicates that this
29	should be considered a Class 1 survey unit. Recall from the discussion of a Class 1 area that it is an area with a potential for residual radioactive material above the DCGLw.
11	This is the probability that the statistical test will indicate that the survey unit meets the release criteria when, in
fact, it does not.
12	This is the probability that the statistical test will indicate that the survey unit does not meet the release criteria
when, in fact, it does.
13	Cesium-137 appears in background soil at a concentration of about 37 Bq/kg (1 pCi/g).
14	The Sign test is a statistical test to demonstrate compliance with the release criteria when the radionuclide of
concern is not present in background.
15	The median is the middle value of the data set when the number of data points is odd, and it is the average of the
two middle values when the number of data points is even. Thus, 50 percent of the data points are above the median,
and 50 percent are below the median.
Table 4.5: Sample Results for Unity Rule Example

Sample    CCo-60 (Bq/kg)    CCs-137 (Bq/kg)    (C/DCGL)Co-60    (C/DCGL)Cs-137       T
1              104               308               0.80             0.74           1.54
2               33                78               0.25             0.19           0.45
3               30               185               0.23             0.45           0.69
4               41               322               0.32             0.79           1.11
5               78               525               0.60             1.28           1.89
6               26                70               0.20             0.17           0.38
7                4                44               0.03             0.11           0.14
8                0               -11               0.00            -0.03          -0.03
9               59               229               0.45             0.56           1.02
10              37               137               0.28             0.33           0.62
Mean            41               188               0.32             0.46           0.78
Median          35               161               0.27             0.39           0.66

[The second block of Table 4.5, listing the sigma (Bq/kg) and normalized sigma values, is not reproduced here; the normalized sigmas used in the calculations below are 0.25 for 60Co and 0.39 for 137Cs.]
For this example, the LBGR is set at the expected mean of the weighted sum (SOR):
LBGR = (Expected Mean Concentration/DCGL)Co-60 + (Expected Mean Concentration/DCGL)Cs-137
     = (41 Bq/kg)/(130 Bq/kg) + (188 Bq/kg)/(410 Bq/kg)
     = 0.32 + 0.46
     = 0.78
Using Equation (4-18), the relative shift calculation then becomes
Relative Shift = (1 - 0.78) / σ
Using Equation (4-13), the combined sigma (σ) for 60Co and 137Cs, using data from Table 4.5, is
σ = √[ (σCo-60/DCGLCo-60)² + (σCs-137/DCGLCs-137)² ]
  = √[ (0.25)² + (0.39)² ]
  = 0.47
The relative shift is then determined as follows, using Equation (4-18):
Relative Shift = (1 - 0.78) / 0.47 = 0.47
Note that the relative shift and sigma have the same value (0.47), which is a coincidence.
Referring to Table 5.3, we see that the recommended number of samples is somewhere
between 71 and 107 (given the alpha of 0.05 and beta of 0.10). When the relative shift falls
between the values on two lines on Table 5.3, the number of samples can be conservatively
estimated by using the number of samples corresponding to the smaller value of the relative
shift. For this example, the number of required samples would be 107.
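
The multiple-radionuclide planning steps above (normalize each sample by its DCGLw, use the mean SOR as the LBGR, combine the normalized sigmas, and form the relative shift against a DCGL of 1) can be scripted directly from the Table 4.5 data. A minimal sketch, with variable names of our choosing; small differences from the rounded values in the text are expected:

from math import sqrt
from statistics import mean, stdev

dcgl = {"Co-60": 130.0, "Cs-137": 410.0}                # Bq/kg
co60 = [104, 33, 30, 41, 78, 26, 4, 0, 59, 37]           # Bq/kg, Table 4.5
cs137 = [308, 78, 185, 322, 525, 70, 44, -11, 229, 137]

# Sum of ratios (SOR) for each sample; the DCGL for the SOR is 1 by construction.
sor = [a / dcgl["Co-60"] + b / dcgl["Cs-137"] for a, b in zip(co60, cs137)]

lbgr = mean(sor)                                                   # ~0.78
sigma = sqrt(stdev(c / dcgl["Co-60"] for c in co60) ** 2 +
             stdev(c / dcgl["Cs-137"] for c in cs137) ** 2)        # ~0.46-0.47
relative_shift = (1.0 - lbgr) / sigma                              # ~0.47-0.48
print(round(lbgr, 2), round(sigma, 2), round(relative_shift, 2))
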
4.12.5 Instrument Efficiencies
The instrument efficiency (εi) is defined as the ratio of the net count rate of the instrument to the
surface emission rate of a source for a specified geometry. The surface emission rate is defined
as the number of particles of a given type above a given energy emerging from the front face of
the source per unit time. The surface emission rate is the 2π particle fluence that embodies both
the absorption and scattering processes that affect the radiation emitted from the source. Thus,
the instrument efficiency is determined by the ratio of the net count rate and the surface
emission rate.
The source efficiency (εs) is defined as the ratio of the number of particles of a given type
emerging from the front face of a source to the number of particles of the same type created or
released within the source per unit time. The source efficiency takes into account the increased
particle emission due to backscatter effects, as well as the decreased particle emission due to
self-absorption losses. For an ideal source (i.e., no backscatter or self-absorption), the value of
the source efficiency is 0.5. Many real sources will exhibit values less than 0.5, although values
greater than 0.5 are possible, depending on the relative importance of the absorption and
backscatter processes.
1	For surface activity measurements, the product of the instrument and source efficiencies is the
2	total efficiency of the instrument (εt). The total efficiency is the net count rate of the instrument
3	divided by the total (4π) emission rate in a specified geometry. It is usually the efficiency of
4	ultimate interest when planning FSSs.
5	4.12.5.1 Multiple Radionuclides
6	Whatever approach is used to assess multiple radionuclides, the FSS design team needs to
7	account for different instrument responses to the radionuclides of concern. It is important to use
8	an appropriately weighted total efficiency to convert from instrument counts to activity units. This
9	most frequently arises when measuring surface activity. When multiple radionuclides are being
10	measured with the same instrument, a weighted efficiency must be used. Starting with the unity
11	rule in its most general form (Equation (4-1)), the total number of counts is simply the sum of the
12	counts from each radionuclide j present:
Total Number of Counts = ATotal × εt = Σj (Aj × εs,j × εi,j)    (4-19)
13	where
14	• ATotal is the total activity of the n radionuclides.
15	• Aj is the activity of the jth radionuclide.
16	• εs,j is the source efficiency of the jth radionuclide.
17	• εi,j is the instrument efficiency of the jth radionuclide.
18	If the fraction fj of each radionuclide in the mix is known, then, as shown in Equation (4-20),
ATotal × εt = Σj (ATotal × fj) × εs,j × εi,j    (4-20)
19	Dividing both sides by ATotal yields the total efficiency for the mixture, as shown in
20	Equation (4-21):
εt = Σj (fj × εs,j × εi,j)    (4-21)
21	The example below illustrates the calculation of a weighted total efficiency for two radionuclides
22	with different instrument efficiencies.
Consider a site contaminated with cesium-137 (137Cs) and strontium/yttrium-90 (90Sr/Y), with
137Cs representing 60 percent of the total activity. Therefore, the relative fractions are 0.6 for
137Cs and 0.4 for 90Sr/Y. The source efficiency for both 137Cs and 90Sr/Y is 0.5. The
corresponding instrument efficiencies for 137Cs and 90Sr/Y are determined to be 0.38 and 0.45,
respectively.
The total efficiency can be calculated using Equation (4-21):
εt = fCs × εs,Cs × εi,Cs + fSr/Y × εs,Sr/Y × εi,Sr/Y
   = (0.6)(0.5)(0.38) + (0.4)(0.5)(0.45)
   = 0.20
Alternatively, the total efficiencies for each radionuclide can be calculated by multiplying the
source efficiency by the instrument efficiency, as shown in the equations below, modified from
Equation (4-21):
εt,Cs = εs,Cs × εi,Cs = (0.5)(0.38) = 0.19
εt,Sr/Y = εs,Sr/Y × εi,Sr/Y = (0.5)(0.45) = 0.22
The weighted total efficiency can then be calculated as follows using equations modified from
Equation (4-21):
εt = fCs × εt,Cs + fSr/Y × εt,Sr/Y
   = (0.6)(0.19) + (0.4)(0.22)
   = 0.20
The weighted total efficiency is 0.20, or 20 percent.
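
Equation (4-21) generalizes to any number of radionuclides; a minimal sketch using this example's numbers (the function name is ours):

def weighted_total_efficiency(fractions, source_effs, instrument_effs):
    """Equation (4-21): activity-weighted total efficiency for a radionuclide mixture."""
    return sum(f * es * ei for f, es, ei in zip(fractions, source_effs, instrument_effs))

# 137Cs (60%) and 90Sr/Y (40%) example
eff = weighted_total_efficiency([0.6, 0.4], [0.5, 0.5], [0.38, 0.45])
print(round(eff, 2))  # ~0.20
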
When calculating the weighted total efficiency, one must account for the assumptions underlying
the corresponding DCGL, particularly the relative ratios of the radionuclides present. In this
case, the relative ratio of 137Cs and 90Sr/Y is needed to calculate the weighted total efficiency.
This can be important when dealing with the naturally occurring radionuclide chains (e.g., 226Ra
and progeny). The state of secular equilibrium (disequilibrium) would need to be accounted for
in determining the weighted total efficiency. An example of calculating the total weighted
efficiency for a mixture of radionuclides is shown in Section 4.12.6.
This weighted efficiency discussion addresses considerations for fractional activity. However,
more complex situations may be encountered, which must consider things such as radiation
emission intensities and branching ratios of decay chains. MARSAME and NUREG-1507
provide some additional examples of these more complex situations.
4.12.6 Data Conversion
This example illustrates the data conversion process along with another weighted total efficiency
calculation. Additional details are provided in Section 6.7.
A radionuclide laboratory is being decommissioned. Options are being considered for the FSS
of surfaces in the laboratory. The following radionuclide information is given in Table 4.6:
Table 4.6: Sample Radionuclide Information

Radionuclide    DCGLw (Bq/m2)    DCGLw (dpm/100 cm2)    Relative Fraction
14C             5.77 × 10^6      3.46 × 10^6            0.12
63Ni            2.72 × 10^6      1.63 × 10^6            0.18
99Tc            1.95 × 10^6      1.17 × 10^6            0.40
204Tl           1.33 × 10^4      8.00 × 10^3            0.02
90Sr/Y          1.30 × 10^4      7.78 × 10^3            0.13
106Ru/Rh        3.98 × 10^4      2.39 × 10^4            0.15

Abbreviations: DCGLw = derived concentration guideline level determined for a wide area; Bq = becquerels; m = meters; dpm = decays per minute; cm = centimeters.
Because the relative ratios are well known, were determined through the DQO process, and
were approved by the regulatory agency, gross beta activity measurements will be used to
determine compliance with the release criterion for this survey. The gross activity DCGLw can
be determined from Equation (4-10).
DCGLgross = 1 / (0.18/(2.72 × 10^6) + 0.12/(5.77 × 10^6) + 0.40/(1.95 × 10^6) + 0.02/(1.33 × 10^4) + 0.13/(1.30 × 10^4) + 0.15/(3.98 × 10^4))
          = 6.42 × 10^4 Bq/m2 (3.85 × 10^4 dpm/100 cm2).
It has been decided to use a gas-flow proportional counter with a physical probe area of
126 cm2 (0.0126 m2) in β-particle-only mode for the survey. The design team has determined
the following total efficiencies for the detector (Table 4.7):
Table 4.7: Sample Efficiencies for a Detector

Radionuclide    Source Efficiency    Instrument Efficiency    Total Efficiency    Relative Fraction    Weighted Efficiency
14C                  0.25                 0.16                     0.04                0.12                 0.0048
63Ni                 0.25                 0.00                     0.00                0.18                 0.000
99Tc                 0.25                 0.64                     0.16                0.40                 0.064
204Tl                0.50                 0.58                     0.29                0.02                 0.0058
90Sr/Y               0.50                 0.72                     0.36                0.13                 0.047
106Ru/Rh             0.50                 1.10                     0.55                0.15                 0.082
Total                                                                                  1.00                 0.20
In general, the output of the counter is a gross counting rate or integrated counts in a set
counting interval. The relationship between the counter's output and surface activity (As)
concentration is given by Equation 6-19:
As = (Cs/ts) / (εt × W)    (6-19)
where Cs is the integrated net counts recorded by the instrument; ts is the time period over
which the counts were recorded; εt is the total efficiency of the instrument in counts per
disintegration, effectively the product of the instrument efficiency (εi) and the source efficiency
(εs); and W is the physical probe area. To account for background, Equation (4-22) (a slightly
modified form of Equation 6-20) is used:
As = (Cs+b/ts - Cb/tb) / (εt × W) = Rnet / (εt × W)    (4-22)
where Cb is the background counts16 recorded by the instrument, tb is the time period over
which the background counts were recorded,17 and Rnet is the net counting rate. The units for ts,
tb, and W depend on the desired units for the FSS.
For this example, surface activity measurements are being made on drywall. Consider the
following data for one measurement location:
•	Cb = 1,626 counts
•	tb = 5 minutes
•	Cs = 1,210 counts
•	ts = 1 minute
The net counting rate is calculated using Equation (4-23):
Rnet = Cs+b/ts - Cb/tb    (4-23)
     = 1,210 counts/1 minute - 1,626 counts/5 minutes
     = (1,210 - 325.2) cpm
     = 884.8 cpm
The net counting rate can be converted to the surface activity concentration, as shown below,
using Equation (4-22):
As = Rnet / (εt × W)
   = 884.8 cpm / (0.20 (c/d) × 0.0126 m2)
   = 3.51 × 10^5 dpm/m2
16	Background measurements should be made on uncontaminated material similar in composition to the material at
the measurement location.
17	The sample and background counting intervals can be different, depending on the desired MDC. See Section 6.3
and the professional literature for more discussion.
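A minimal Python sketch of the net counting rate and surface activity calculation above (Equations (4-22) and (4-23)); the function name and structure are illustrative only and not part of MARSSIM:

# Sketch: net counting rate and surface activity (Equations (4-22) and (4-23)).
def surface_activity(c_gross, t_gross, c_bkg, t_bkg, eff_total, probe_area_m2):
    """Return (net counting rate in cpm, surface activity in dpm/m2).

    c_gross, c_bkg -- integrated gross and background counts
    t_gross, t_bkg -- counting times in minutes
    eff_total      -- total (weighted) efficiency, counts per disintegration
    probe_area_m2  -- physical probe area in square meters
    """
    r_net = c_gross / t_gross - c_bkg / t_bkg    # cpm
    a_s = r_net / (eff_total * probe_area_m2)    # dpm/m2
    return r_net, a_s

r_net, a_s = surface_activity(1210, 1.0, 1626, 5.0, 0.20, 0.0126)
print(f"R_net = {r_net:.1f} cpm, A_s = {a_s:.3g} dpm/m2")  # ~884.8 cpm, ~3.5e5 dpm/m2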
In general, either SI (Bq/m2) or conventional (dpm/100 cm2) units are desired. For SI units, the
conversion is straightforward, because 60 dpm is equivalent to 1 Bq (see Equation (4-22)):
As = (3.51 × 10⁵ dpm/m²) × (1 Bq/60 dpm)
   = 5.83 × 10³ Bq/m²
The conversion to conventional units is shown below, using Equation (4-22):
As = (3.51 × 10⁵ dpm/m²) × (0.01 m²/100 cm²)
   = 3.51 × 10³ dpm/100 cm²
The FSS design team should determine the desired units during the planning phase. After the
desired units are chosen, the design team can create spreadsheets or other methods of
analyzing the raw counting data to streamline the process. Similarly, any action or investigation
levels chosen by the design team should be converted into the proper units. This process needs
to be performed for each field measurement instrument and technique.
For example, if it is determined that units of dpm/100 cm2 will be used, then the physical probe
area should be measured in cm2, and the following equation (Equation (4-24)) can be used for
each instrument used:
As = (Cs/ts − Cb/tb) / (εt × W/100) = Rnet / (εt × W/100)        (4-24)
where ts and tb are recorded in minutes (Rnet is expressed in cpm), and W is recorded in square
centimeters instead of square meters.
For this example, the equation is shown below (see Equation (4-24)):

As = 884.8 cpm / (0.20 (c/d) × (126/100))
   = 3.51 × 10³ dpm/100 cm²
The result is the same as in the earlier example, as expected.
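Because 1 Bq corresponds to 60 dpm and 100 cm² corresponds to 0.01 m², these unit conversions reduce to constant factors that can be applied in the same spreadsheet or script. A minimal sketch, continuing the example value (illustrative only):

# Sketch: converting the surface activity result between unit conventions.
a_s_dpm_per_m2 = 3.51e5                        # result from the example above

a_s_bq_per_m2 = a_s_dpm_per_m2 / 60.0          # 60 dpm = 1 Bq
a_s_dpm_per_100cm2 = a_s_dpm_per_m2 * 0.01     # 100 cm2 = 0.01 m2

print(f"{a_s_bq_per_m2:.3g} Bq/m2")            # compare with the hand calculation above
print(f"{a_s_dpm_per_100cm2:.3g} dpm/100 cm2")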
Additionally, action and investigation levels and the DCGL can be converted to detector outputs
to facilitate timely actions (i.e., expressing action levels in terms of net counting rate might allow
the field survey team to alert supervisors about measurements exceeding action levels in near-
real time). In this case, the net counting rate corresponding to an action level can be expressed
by the following equation (Equation (4-25)):
Rnet = εt × W × As        (4-25)
2	Suppose the design team set an action level at 10 percent of the DCGLgross. The net counting
3	rate corresponding to this value is found as shown below, keeping in mind that 1 Bq is
4	equivalent to 1 disintegration per second (dps), using Equation (4-25):
5	Rnet = 0.20 × 0.0126 m² × (6.42 × 10³ Bq/m²) = 16.2 cps = 971 cpm
6	Thus, any net counting rate exceeding about 970 counts per minute (cpm) would be flagged for
7	investigation. If background rates are relatively constant, an action level can be expressed as a
8	gross counting rate, as well.
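Equation (4-25) is simply Equation (4-22) inverted, so expressing an action level as a net counting rate takes one line of arithmetic. A minimal Python sketch using the example values (names are illustrative only):

# Sketch: net counting rate corresponding to an action level (Equation (4-25)).
EFF_TOTAL = 0.20          # weighted total efficiency, counts per disintegration
PROBE_AREA_M2 = 0.0126    # physical probe area, m2
DCGL_GROSS_BQ_M2 = 6.42e4

action_level_bq_m2 = 0.10 * DCGL_GROSS_BQ_M2                 # 10% of DCGL_gross
r_net_cps = EFF_TOTAL * PROBE_AREA_M2 * action_level_bq_m2   # 1 Bq = 1 dps
r_net_cpm = 60.0 * r_net_cps

print(f"Action level = {r_net_cps:.1f} cps ({r_net_cpm:.0f} cpm)")  # ~16.2 cps (~971 cpm)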
9	4.12.7 Example of a Deviation from a Recommended Statistical Test
10	Consider the case of a survey unit that contains many different surfaces with potentially different
11	backgrounds (e.g., drywall panels, concrete floor, glass windows, metal doors, wood trim, and
12	plastic fixtures) and gross activity measurements are being considered. In this case, MARSSIM
13	recommends the use of the WRS test when gross activity measurements are used; however,
14	the use of the WRS test might require several survey and associated reference units, resulting
15	in an inordinately large number of measurements. Furthermore, "it is not appropriate to make
16	each material a separate survey unit because the dose modeling is based on the dose from the
17	room as a whole and because a large number of survey units in a room would require an
18	inappropriate number of samples" (NRC 2006). In this situation, the design team should use the
19	DQO process and determine the best approach.
20	Instead of attempting to use the WRS test and multiple reference units, the design team might
21	investigate the materials to determine if material-specific backgrounds are needed or whether
22	materials with similar background could be grouped and considered as a unit. If this is done, it
23	might be "acceptable to perform a one-sample test (Sign test) on the difference between the
24	paired measurements from the survey unit and from the appropriate reference material"
25	(NRC 2006). Chapter 2 of NUREG-1505 (NRC 1998a) contains details on this method.
26	In addition to the alternative statistical approach discussed above, the NRC (2006) discusses
27	two additional approaches to resolve this issue. First, if the materials in the survey unit have
28	substantially different backgrounds, then a reference unit containing a similar mix of material
29	might be used, and the WRS test can then be applied. Second, if the materials in the survey unit
30	have similar backgrounds, or if one material predominates, then a reference background from a
31	single material might suffice. See NRC (2006) for more details.
32	The design team should use the DQO process to investigate any deviations from usual methods
33	and get approval from the regulatory agency before executing the FSS.
34	4.12.8 Release Criteria for Discrete Radioactive Particles
35	With the installation in the mid- and late-1980s of very sensitive portal monitors, many nuclear
36	power plants detected residual radioactive material on individuals and their clothing, present as
37	small—usually microscopic—highly radioactive particles having relatively high specific activity.
38	These particles became known as "discrete radioactive particles" and sometimes "hot particles."
39	Discrete radioactive particles are small (usually on the order of millimeters or micrometers),
40	distinct, highly radioactive particles capable of delivering extremely high doses to a localized
41	area in a short period of time.
To prove compliance with requirements for discrete radioactive particles, some surveys have
used the MARSSIM EMC process (see Section 8.6.1); however, that process is not valid when
instrumentation dose-to-rate conversion factor modeling assumes a "point source" as opposed
to an "area source" or "plane source." This violates the assumption inherent in the dose or risk
model of an activity concentration averaged over some definable area. Therefore, it is not
acceptable to use the MARSSIM EMC process when the distance to the detector is greater than
three times the longest dimension of the area of elevated activity, as represented by
Equation (4-26):

d > 3L        (4-26)

where L is the estimated longest dimension of the area of elevated activity, and d is the distance to the detector.
To address discrete radioactive particles in surface soils or building surfaces—
•	Include discrete radioactive particles as a consideration during the DQO process for
MARSSIM surveys.
•	When a regulatory agency sets requirements on the concentration of discrete radioactive
particles in a survey unit, use the DQO process to develop a survey to assess whether
requirements are met.
•	When appropriate, apply ALARA by addressing discrete radioactive particles during the
RAS survey.
•	If discrete radioactive particles do not contribute significantly to dose or risk at a site, it is a
reasonable assumption that they will not affect the outcome of a wide-area FSS. If an FSS
fails due to discrete radioactive particles, investigate the reasons for survey failure (see
Section 8.6.3).
4.12.9 Uranium Mill Tailings Radiation Control Act of 1978 (UMTRCA) Sites
At UMTRCA sites, the U.S. Environmental Protection Agency's Health and Environmental
Protection Standards for Uranium and Thorium Mill Tailings (in 40 CFR 192) are applicable.
However, the technical requirements in these standards are not always consistent with some of
the recommendations in MARSSIM. Specifically, the soil cleanup standards for 226Ra and 228Ra
are specified as averages over an area of 100 m2. (In the 40 CFR 192 rulemaking, an averaging
area of 100 m2 was used as a reasonable footprint for a home. One goal of the 40 CFR 192
standards was to protect future homes from indoor radon, and the specified averaging area was
a component implemented for the protection of health.) The rules at 40 CFR 192 do not
establish specific requirements for small areas of elevated radioactive material. At sites where
the uranium or thorium mill tailings standards are applicable, the following approach for FSSs is
acceptable:
•	Survey units consisting of sections of land no greater than 100 m2 should be used, consistent with the
regulatory standards.
•	The systematic sampling for performance of statistical tests, normally required under the
MARSSIM approach, is not required for each survey unit. Instead, compliance with the
standard can be demonstrated through the analysis of soil samples or composite soil
samples from each survey unit in conjunction with gamma radiation scanning or in situ
1	gamma radiation measurements of each survey unit. When appropriate, gamma radiation
2	scanning or in situ measurements correlated to soil sampling may be used in place of soil
3	sampling.
4	• Survey units may be classified, as appropriate, and the percentage of the survey unit that is
5	scanned may be adjusted accordingly for Class 1, Class 2, or Class 3 survey units.
6	• EMC criteria for small elevated areas of activity may be developed but are not required for
7	the purposes of MARSSIM.
8	These minor modifications to the standard MARSSIM radiological survey approach are
9	acceptable for those sites to which the UMTRCA standards are applicable.
5 SURVEY PLANNING AND DESIGN
5.1 Introduction
This chapter is intended to assist the user in planning radiological surveys with a particular
emphasis on conducting a final status survey (FSS), with the ultimate objective being to
demonstrate compliance with the derived concentration guideline levels (DCGLs).1 The survey
types that make up the Radiation Survey and Site Investigation (RSSI) process include scoping,
characterization, remedial action support (RAS), and FSSs; depending on the regulatory
framework, the process may also include confirmatory or independent verification surveys.
Although the scoping, characterization, and RAS surveys have multiple objectives, this manual
focuses on those aspects related to supporting the FSS and demonstrating compliance with
DCGLs. In general, each of these survey types expands upon the data collected during the
previous survey (e.g., the characterization survey is planned with information collected during
the scoping survey) up through the FSS. The conduct and extent of scoping and
characterization surveys will depend on the available information from the Historical Site
Assessment (HSA) and site-specific conditions. The purpose of the FSS is to demonstrate that
the release criteria established by the regulatory agency have not been exceeded. This final
release objective should be kept in mind throughout the design and planning phases for each of
the other survey types. For example, scoping surveys may be designed to meet the objectives
of the FSS such that the scoping survey report is also the FSS report. The survey and analytical
procedures referenced in this chapter are described in Chapters 6-7 and Appendix H. An
example of an FSS, as described in Section 5.3, appears in Appendix A. In addition, example
checklists are provided for each type of survey to assist the user in obtaining the necessary
information for planning an FSS.
Scoping surveys—used to augment the HSA and provide input to future survey designs—and
survey unit characterization and classification are described in Section 5.2.1. Section 5.2.2
describes characterization surveys performed to determine the following: nature and extent of
residual radioactive material; potential remediation alternatives and technologies; the inputs to
pathway analysis and dose or risk assessment models; occupational and public health and
safety impacts; and inputs to the FSS design. RAS surveys, performed to support remedial
activities, update estimates of site-specific parameters used in FSS planning, and determine
when a site or survey unit is ready for an FSS are described in Section 5.2.3. Section 5.3
covers FSSs, which are performed to demonstrate that a site or survey unit meets the residual
radioactive material release criteria.
A flowchart diagram illustrating the Survey Planning and Design Process, broken up by survey
type, is provided in Figures 5.1-5.3.
1 MARSSIM uses the word "should" as a recommendation, not as a requirement. Each recommendation in this
manual is not intended to be taken literally and applied at every site. MARSSIM's survey planning documentation will
address how to apply the process on a site-specific basis.
Figure 5.1: The Scoping Survey Portion of the Radiation Survey and Site Investigation Process

[Flowchart: starting from Figure 3.1, the scoping survey plan is designed using the DQO process, the scoping survey is performed, and the data are validated and data quality assessed. Survey objectives: (1) perform a preliminary hazard assessment, (2) support classification of all or part of the site as a Class 3 area, (3) evaluate whether the survey plan can be optimized for use in characterization or final status surveys, and (4) provide input to the characterization survey design. If there is sufficient information to support classification as Class 3, the findings supporting Class 3 classification are documented.]
Figure 5.2: The Characterization and Remedial Action Support Survey Portions of the Radiation Survey and Site Investigation Process

[Flowchart: the characterization survey plan is designed using the DQO process, the survey is performed, and the data are validated and data quality assessed, with the DQOs reassessed if they are not satisfied. Survey objectives: (1) determine the nature and extent of the residual radioactive material, (2) evaluate remedial alternatives and technologies, (3) evaluate whether the survey plan can be used as the final status survey, and (4) provide input to the final status survey design. Areas are classified as Class 1, Class 2, or Class 3; the remedial alternative and site-specific DCGLs are determined; if remediation is required, the area is remediated and a remedial action support survey is performed until remediation is complete, with the remedial alternative and site-specific DCGLs reassessed as necessary. The asterisk marks the point where survey units that fail to demonstrate compliance in the final status survey in Figure 5.3 re-enter the process.]
Figure 5.3: The Final Status Survey Portion of the Radiation Survey and Site Investigation Process

[Flowchart: continuing from Figures 5.1 and 5.2, the final status survey plan is designed using the DQO process and the survey is performed for Class 1, Class 2, and Class 3 survey units; the data are validated and data quality assessed, with the DQOs reassessed and additional surveys performed if the DQOs are not satisfied. Survey objectives: (1) select/verify survey unit classification, (2) demonstrate that the potential dose or risk from residual radioactive material is below the release criteria for each survey unit, and (3) demonstrate that the potential dose from residual elevated areas is below the release criteria for each survey unit. If the final status survey results show residual radioactive material less than the DCGLs, the results are documented in the final status survey report; otherwise, if additional remediation is required, the process returns to the remedial action support survey portion of the process in Figure 5.2 (marked by the asterisk).]
1	5.2 Preliminary Surveys
2	5.2.1 Scoping Surveys
3	If the data collected during the HSA indicate that a site or area is impacted, a scoping survey
4	may be performed. The objective of this survey is to augment the HSA for sites with potential
5	residual radioactive material. Specific objectives may include—
6	• performing a preliminary risk assessment and providing data to complete the site
7	prioritization scoring process for Comprehensive Environmental Response, Compensation,
8	and Liability Act (CERCLA) and Resource Conservation and Recovery Act (RCRA) sites
9	only (EPA 1992c)
10	• providing input to the characterization survey design, if necessary
11	• supporting the classification of all or part of the site as a Class 3 area for planning the FSS
12	• obtaining an estimate of the variability in the residual radioactive material concentration for
13	the site
14	• identifying non-impacted areas that may be appropriate for reference areas and estimating
15	the variability in radionuclide concentrations when the radionuclide of interest is present in
16	background
17	A scoping survey is not a requirement if HSA information meets the needs for designing
18	subsequent surveys, including the FSS. Alternatively, scoping surveys and characterization
19	surveys may be combined, if one survey can be designed to meet the requirements of both
20	survey types. See Section 5.2.2 for a description of characterization surveys.
21	Scoping survey information about the general radiation levels at the site, including gross levels
22	of residual radioactive material on building surfaces and in environmental media, is needed
23	when conducting a preliminary risk assessment (as noted above for CERCLA and RCRA sites).
24	If unexpected conditions are identified that prevent the completion of the survey, the Multi-
25	Agency Radiation Survey and Site Investigation Manual (MARSSIM) user should contact the
26	regulatory agency for further guidance. Sites that meet the National Contingency Plan (NCP)
27	criteria for a removal should be referred to the Superfund Removal program (EPA 1988b).
28	If the HSA indicates that residual radioactive material above release levels is likely, a scoping
29	survey could be performed to provide initial estimates of the level of effort for remediation and
30	information for planning a more detailed survey, such as a characterization survey. Not all
31	radiological parameters need to be assessed when planning for additional characterization,
32	because total surface activity or limited sample collection may be sufficient to meet the
33	objectives of the scoping survey.
34	Once a review of pertinent site history indicates that an area is impacted, the minimum survey
35	coverage at the site will include a Class 3 area FSS before the site is released. For
36	scoping surveys with this objective, identifying radiological decision levels is necessary for
37	selecting instruments and procedures with the necessary instrument capabilities to demonstrate
38	compliance with the release criteria. A methodology for planning, conducting, and documenting
39	scoping surveys is described in the following sections.
5.2.1.1	Survey Design
Planning a scoping survey involves reviewing the HSA for a site (Chapter 3). This process
considers available information concerning locations of spills or other releases of radioactive
material. Reviewing the radioactive materials license or similar documentation provides
information on the identity, locations, and general quantities of radioactive material used at the
site. This information helps determine which areas are likely to contain residual radioactive
material and, thus, areas where scoping survey activities will be concentrated. The information
may also identify one or more non-impacted areas as potential reference areas when
radionuclides of concern are present in background (Section 4.5). Following the review of the
HSA, appropriate DCGLs for the site are selected. The DCGLs may be adjusted later if a
determination is made to use site-specific information to support the development of DCGLs.
If residual radioactive material is identified during the scoping survey, the area may be classified
as Class 1 or Class 2 for FSS planning (refer to Section 4.6.1 for information on initial
classification), and a characterization survey is subsequently performed. For scoping surveys
that are designed to provide input for characterization surveys, measurements and sampling
may not be as comprehensive or performed to the same level of sensitivity necessary for FSSs.
The design of the scoping survey should be based on specific Data Quality Objectives (DQOs).
See Section 2.3.1 and Appendix D for the information to be collected.
For scoping surveys that potentially serve to release the site or portions of the site from further
consideration, the scoping survey design for the Class 3 area should consist of sampling based
on the HSA data and professional judgment, and must be consistent with the requirements for
an FSS. If residual radioactive material is not identified, it may be appropriate to characterize
the area as non-impacted. Refer to Section 5.3 for a description of FSSs. However, collecting
additional information during subsequent surveys (e.g., characterization surveys) may be
necessary to make a final determination as to area classification.
5.2.1.2	Conducting Surveys
Scoping survey activities performed for preliminary risk assessment or to provide input for
additional characterization include a limited amount of surface scanning, surface activity
measurements, and sample collection (smears, soil, water, vegetation, paint, building materials,
subsurface materials). In this case, scans, direct measurements, and samples are used to
examine areas likely to contain residual radioactive material. These activities are conducted
based on HSA data, preliminary investigation surveys, and professional judgment.
Background activity and radiation levels for the area should be determined, including direct
radiation levels on building surfaces and radionuclide concentrations in media. Survey locations
should be referenced to grid coordinates, if appropriate, or fixed site features. This may be
accomplished by establishing a reference coordinate system in the event that residual
radioactive material is detected above the DCGLs (Section 4.8.5). Samples collected as part of
a scoping survey should be maintained under custody from collection through analysis and
reporting to ensure the integrity of the results. Sample tracking may include use of a chain of
custody, which is the unbroken trail of accountability that ensures the physical security of
samples, data, and records (Section 7.8).
Scoping surveys that are expected to be used as Class 3 area FSSs should be designed
following the procedure in Section 5.3. Scoping surveys should also include judgment
measurements and sampling in areas likely to have accumulated residual radioactive material
(Section 5.3.9). However, when performing a scoping survey as a Class 3 FSS, judgment
1	samples should not be used as part of the statistical sampling population utilized to make a
2	release decision on a site or survey unit.
3	5.2.1.3 Evaluating Survey Results
4	Survey data are converted to the same units as the DCGLs (Section 6.6). Identification of
5	potential radionuclides of concern at the site is performed using direct measurements or
6	laboratory analysis of samples. The data are compared to the appropriate regulatory DCGLs.
7	For scoping survey activities that provide an initial assessment of the radiological hazards at the
8	site or input for additional characterization, the survey data are used to identify locations and the
9	general extent of residual radioactive material. Scoping surveys that are expected to be used as
10	Class 3 area FSSs should follow the methodology presented in Chapter 8 to determine whether
11	the release criteria have been exceeded.
12	5.2.1.4 Documentation
13	How the results of the scoping survey are documented depends on the specific objectives of the
14	survey. For scoping surveys that provide additional information for characterization surveys, the
15	documentation should provide general information on the radiological status of the site. Survey
16	results should include identification of the potential radionuclides of concern (including the
17	methods used for radionuclide identification), general extent of residual radioactive material
18	(e.g., activity levels, area, and depth), and possibly even relative ratios of radionuclides to
19	facilitate DCGL application. A narrative report or a report in the form of a letter may suffice for
20	scoping survey data that is used to provide input for characterization surveys. Sites being
21	released from further consideration should provide a level of documentation consistent with FSS
22	reports (Section 5.3.11). Example 1 includes an illustration of a scoping survey checklist,
23	including survey design, conduct of the survey, and evaluation of survey results.
Example 1: Example Scoping Survey Checklist
Survey Design
	 Enumerate Data Quality Objectives (DQOs) and Measurement Quality Objectives
(MQOs). State the objectives of the survey; survey instrumentation capabilities should
be appropriate for the specified survey objectives. Document survey requirements in a
project-specific Quality Assurance Project Plan (QAPP).
	 Review the Historical Site Assessment (HSA) for the following:
	 Operational history (e.g., problems, spills, releases, or notices of violation) and
available documentation (e.g., radioactive materials license)
	 Other available resources—site personnel, former workers, residents, etc.
	Types and quantities of materials that were handled and where radioactive
materials were stored, handled, moved, relocated, and disposed
	 Release and migration pathways
	Areas that are potentially affected and likely to contain residual radioactive
material (Note: Survey activities will be concentrated in these areas.)
	 Types and quantities of materials likely to remain onsite—consider radioactive
decay
	 Select derived concentration guideline levels (DCGLs) for the site based on the HSA
review. (It may be necessary to assume appropriate regulatory DCGLs in order to
permit selection of survey methods and instrumentation for the expected radioactive
material and quantities.)
Conducting Surveys
	 Follow the survey design documented in the QAPP. Record deviations from the stated
objectives or documented standard operating procedures, and document additional
observations made when conducting the survey.
	 Select instrumentation based on the specific DQOs and MQOs of the survey.
Consider instrumentation capabilities for the expected residual radioactive material
and quantities.
	 Determine background activity and radiation levels for the area; include direct
radiation levels on building surfaces, radionuclide concentrations in media, and
exposure rates.
	 Record measurement and sample locations referenced to grid coordinates or fixed
site features.
	 For scoping surveys that are conducted as Class 3 area final status surveys (FSSs),
follow FSS procedure.
	 Conduct scoping survey, which involves judgment measurements and sampling
based on HSA results:
	 Perform investigatory surface scanning.
	 Conduct limited surface activity measurements.
	 Perform limited sample collection (smears, soil, water, vegetation, paint,
building materials, subsurface materials).
	 Maintain sample tracking.
Evaluating Survey Results
	 Compare survey results with the DQOs and MQOs.
	 Identify radionuclides of concern.
	 Identify impacted areas and the general extent of residual radioactive material.
	 Estimate the variability in the residual radioactive material levels for the site.
Adjust DCGLs based on survey findings (the DCGLs initially selected may not be
appropriate for the site).
Determine the need for additional action (e.g., none, remediate, more surveys).
Prepare report for regulatory agency (determine if letter report is sufficient).
1	5.2.2 Characterization Surveys
2	Characterization surveys may be performed to satisfy a number of specific objectives. Examples
3	of characterization survey objectives include—
4	• determining the nature and extent of residual radioactive material
5	• evaluating remediation alternatives (e.g., unrestricted use, restricted use, onsite disposal,
6	off-site disposal, etc.)
7	• providing input to pathway analysis/dose or risk assessment models for determining site-specific
8	DCGLs (becquerel [Bq]/kilogram [kg], Bq/square meter [m2])
9	• estimating the occupational and public health and safety impacts during decommissioning
10	• evaluating remediation technologies
11	• providing input to FSS design
12	• meeting Remedial Investigation/Feasibility Study (RI/FS) requirements (under a CERCLA
13	program) or RCRA Facility Investigation/Corrective Measures Study (RFI/CMS)
14	requirements (under an RCRA program).
15	A characterization survey is not a requirement if HSA and scoping survey information meets the
16	needs for designing subsequent surveys, including FSSs. Alternatively, scoping surveys and
17	characterization surveys may be combined if one survey can be designed to meet the
18	requirements of both survey types. The scope of this manual precludes detailed discussions of
19	characterization survey design for each of these objectives; therefore, the user should consult
20	other references for specific characterization survey objectives not covered. For example, the
21	Decommissioning Handbook (DOE 1994a) is a good reference for characterization objectives
22	that are concerned with evaluating remediation technologies or unrestricted/restricted use
23	alternatives. Other references (e.g., Abelquist 2014; EPA 1988b, 2006c; NRC 1994a) should be
24	consulted for planning decommissioning actions, including remediation techniques, projected
25	schedules, costs, waste volumes, and health and safety considerations during remedial action.
26	Also, the types of characterization data needed to support risk or dose modeling should be
27	determined from the specific modeling code documentation. This manual concentrates on
28	providing information for the FSS design, with limited coverage on determining the specific
29	nature and extent of residual radioactive material. The specific objectives for providing
30	information to the FSS design include—
31	• estimating the projected radiological status at the time of the FSS, in terms of radionuclides
32	present, concentration ranges and variances, spatial distribution, etc.
1	• evaluating potential reference areas to be used for background measurements, if necessary
2	• reevaluating the initial classification of survey units
3	• selecting instrumentation based on the necessary Measurement Quality Objectives (MQOs)
4	• establishing acceptable Type I and Type II errors with the regulatory agency (Appendix D
5	provides information on establishing acceptable decision error rates.)
6	Many of these objectives are satisfied by determining the specific nature and extent of residual
7	radioactive material in structures, residues, and environmental media. Additional detail on the
8	performance of characterization surveys designed to determine the general extent of residual
9	radioactive material can be found in the U.S. Nuclear Regulatory Commission's (NRC's)
10	Consolidated Decommissioning Guidance (NUREG-1757) (NRC 2006), Performance and
11	Documentation of Radiological Surveys (HPS/ANSI 13.49-2001) (HPS 2001), Characterization
12	in Support of Decommissioning Using the Data Quality Objectives Process (HPS/ANSI N13.59)
13	(HPS 2008), and the U.S. Environmental Protection Agency's (EPA's) RI/FS guidance (EPA
14	1988b; EPA 1993b).
15	Results of the characterization survey should include—
16	• the identification and distribution of residual radioactive material in buildings, structures, and
17	other site facilities
18	• the concentration and distribution of radionuclides of concern in surface and subsurface
19	soils
20	• the distribution and concentration of residual radioactive material in surface water, ground
21	water, and sediments
22	• the distribution and concentration of radionuclides of concern in other impacted media, such
23	as vegetation or paint
24	The characterization should include sufficient information on the physical characteristics of the
25	site, including surface features, meteorology and climatology, surface water hydrology, geology,
26	demography and land use, and hydrogeology. This survey should also address environmental
27	conditions that could affect the rate and direction of radionuclide transport in the environment,
28	depending on the extent of residual radioactive material identified above.
29	The following sections describe a method for planning, conducting, and documenting
30	characterization surveys. Alternative methodologies may also be acceptable to the regulatory
31	agencies.
32	5.2.2.1 Survey Design
33	The design of the site characterization survey is based on the specific DQOs for the information
34	to be collected, and it is planned using the HSA and scoping survey results. The DQO process
35	ensures that adequate data with sufficient quality are collected for the purpose of
36	characterization. The site characterization process typically begins with a review of the HSA,
37	which includes available information on site description, operational history, and the type and
38	extent of residual radioactive material (from the scoping survey, if performed). The site
39	description, or conceptual site model as first developed in Section 3.6.4, consists of the general
40	area, dimensions, and locations of affected areas on the site. A site map should show site
boundaries, roads, hydrogeological features, major structures, and other features that could
affect decommissioning activities. When available, Global Positioning System (GPS)
coordinates should be recorded for major features.
The operational history includes records of site conditions before operational activities,
operational activities of the facility, effluents and on-site disposal, and significant incidents—
including spills or other unusual occurrences—involving the spread of residual radioactive
material around the site and on areas previously released from radiological controls. This review
should include other available resources, such as site personnel, former workers, residents, etc.
Historic aerial photographs and site location maps may be particularly useful in identifying
potential areas of residual radioactive material.
The types and quantities of materials that were handled and the locations and disposition of
radioactive materials should be reviewed using available documentation (e.g., the radioactive
materials license). Release and migration pathways of radionuclides should be identified, as
well as areas that are potentially affected and are likely to contain residual radioactive material.
The types and quantities of materials likely to remain onsite, considering radioactive decay,
should be determined.
The characterization survey should clearly identify those portions of the site (e.g., soil,
structures, and water) that have been affected by site activities and potentially contain residual
radioactive material. The survey should also identify the portions of the site that have not been
affected by these activities. In some cases where no remediation is anticipated, results of the
characterization survey may indicate compliance with DCGLs established by the regulatory
agency. When planning for the potential use of characterization survey data as part of the FSS,
the characterization data must be of sufficient quality and quantity for that use (see
Section 5.3).
Several processes are likely to occur in conjunction with characterization. These include
considering and evaluating remediation alternatives and calculating site-specific DCGLs.
The survey should also provide information on variability in the radionuclide distribution in the
survey area. The radionuclide variability in each survey unit contributes to determining the
number of data points based on the statistical tests used during the FSS (Sections 5.3.3-5.3.4)
and the required scan coverage for Class 2 areas. Additionally, characterization data may be
used to justify reclassification for some survey units (e.g., from Class 1 to Class 2).
In some cases, judgment sampling is the most appropriate for meeting the data needs.
Judgment sampling includes measurements performed at locations selected using professional
judgment based on unusual appearance, location relative to known contaminated areas, high
potential for residual radioactive material, general supplemental information, etc. Examples of
situations in which judgment sampling may be the most appropriate include those where
residual radioactive material is isolated to locations that can be defined by individual
measurements, or in which biased results will provide the data from the areas of highest
suspected concentration of residual radioactive material. It should be understood, however, that
use of a judgment characterization survey will produce data that is considered biased and which
can generally only be used to draw conclusions about individual samples or specific locations
rather than provide quantifiable estimates about a larger aggregate population. As a result,
1	averages of judgment samples (i.e., biased) should not be used to determine the mean
2	concentration of the residual radioactive material in the survey unit.
3	For those characterization survey objectives that require statistical evaluation of the data, the
4	sampling plan should be designed to produce unbiased data. Unbiased sampling makes use of
5	random sample selection, whereby each sample has an equal probability of being selected.
6	Unbiased data can be used to provide a measure of the population characteristics, such as the
7	average or mean radioactive material concentration and variance on that mean. One example of
8	when unbiased sample data may be required is the use of a data set in assessing compliance
9	with dose- or risk-based criteria. Most human health risk assessment protocols (e.g., EPA
10	1989d) require the computation of a mean and upper confidence limit (UCL) as the best
11	measure of residual radioactive material in identifying excess lifetime cancer risk or radiation
12	dose to a target population (EPA 2002b; 2006b).
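As one illustration of how unbiased sample data might be summarized, the sketch below computes the sample mean and a one-sided 95 percent upper confidence limit (UCL) on the mean using the Student's t statistic. This is only one of several UCL methods discussed in the EPA guidance cited above; the data values are hypothetical, and SciPy is assumed to be available:

# Sketch: mean and one-sided 95% UCL on the mean for unbiased sample data
# (Student's t approach; hypothetical concentration data in Bq/kg).
import math
from statistics import mean, stdev

from scipy.stats import t  # SciPy is assumed to be available

data = [120.0, 95.0, 143.0, 110.0, 88.0, 131.0, 102.0, 117.0]  # hypothetical values

n = len(data)
xbar = mean(data)
std_err = stdev(data) / math.sqrt(n)             # standard error of the mean
ucl_95 = xbar + t.ppf(0.95, df=n - 1) * std_err  # one-sided 95% UCL

print(f"mean = {xbar:.1f} Bq/kg, 95% UCL = {ucl_95:.1f} Bq/kg")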
13	The characterization survey may be used as an FSS for Class 3 areas under the following
14	conditions:
15	• The characterization survey was planned as an FSS.
16	• Site or survey unit conditions warrant the use of the characterization survey as the FSS.
17	• Only randomly selected samples are utilized as part of the statistical evaluation of the site or
18	survey unit. Any judgment samples collected may not be used as part of the statistical
19	sample count or as part of the statistical evaluation of results.
20	It may also be appropriate to combine the principles of judgment and unbiased sampling
21	strategies in designing a sample plan that provides the most representative data but also meets
22	all of the DQOs.
23	Note that because of site-specific characteristics of residual radioactive material, performing all
24	types of measurements described here may not be relevant at every site. For example, detailed
25	characterization data may not be needed for areas with residual radioactive material well above
26	the DCGLs that clearly require remediation. Judgment should be used in determining the types
27	of characterization information needed to provide an appropriate basis for remediation
28	decisions.
29	A number of software programs have been developed over the years to facilitate the design of
30	surveys, an example of which is Visual Sample Plan, developed by the Pacific Northwest
31	National Laboratory. These software programs can perform calculations to determine the
32	number of locations where measurements should be made or where samples should be
33	collected.
34	5.2.2.2 Conducting Surveys
35	Characterization survey activities often involve the detailed assessment of various types of
36	building and environmental media, including building surfaces, surface and subsurface soil,
37	surface water, and ground water. The HSA data should be used to identify the media onsite with
38	a potential for residual radioactive material (see Section 3.6.3). Identifying the media that may
39	contain residual radioactive material is useful for preliminary survey unit classification and for
40	planning subsequent survey activities. Selection of survey instrumentation and analytical
41	techniques are typically based on knowledge of the appropriate DCGLs, because remediation
42	decisions are made based on the level of the residual radioactive material as compared to the
1	DCGL. Exposure rate measurements may be needed to assess occupational and public health
2	and safety.
3	Structure Surveys
4	Surveys of building surfaces and structures can include surface scanning, surface activity
5	measurements, exposure rate measurements, and sample collection (e.g., smears, subfloor
6	soil, water, paint, and building materials). Both field survey instrumentation (Chapter 6) and
7	analytical laboratory equipment and procedures (Chapter 7) are selected based on their
8	instrumentation capabilities for the expected residual radioactive material and their quantities.
9	Field and laboratory instruments are described in Appendix H.
10	Background activity and radiation levels for the area should be determined from appropriate
11	background reference areas. Background assessments include surface activity measurements
12	on building surfaces, exposure rates, and radionuclide concentrations in various media (refer to
13	Section 4.5). Building reference area measurements should be collected in non-impacted areas
14	within the same building, or in another similar building, provided that the reference area has
15	been constructed with materials of the same type and age. This ensures that the reference area
16	has the same inherent radioactive material and decay time, and therefore background
17	radioactivity, as the impacted area.
18	Measurement locations should be documented using reference system coordinates, if
19	appropriate, or fixed site features. A typical reference system spacing for building surfaces is
20	1 m. This is chosen to facilitate identifying survey locations and small areas of elevated activity.
21	Scans should be conducted in areas likely to contain residual radioactive material, based on the
22	results of the HSA and scoping survey.
23	Both systematic and judgment surface activity measurements are performed. Judgment direct
24	measurements are performed at locations of elevated direct radiation, as identified by surface
25	scans, to provide data on the upper ranges of residual radioactive material levels. Judgment
26	measurements may also be performed in sewers, air ducts, storage tanks, and septic systems
27	and on roofs of buildings, if necessary. Each surface activity measurement location should be
28	carefully recorded on the appropriate survey form.
29	Exposure rate measurements and media sampling are performed as necessary. For example,
30	subfloor soil samples may provide information on the horizontal and vertical extent of residual
31	radioactive material. Similarly, concrete core samples are necessary to evaluate the depth of
32	activated concrete in a reactor facility. Note that one type of radiological measurement may be
33	sufficient to determine the extent of residual radioactive material. For example, surface activity
34	measurements alone may be all that is needed to demonstrate that remediation of an area is
35	necessary; exposure rate measurements would add little to this determination.
36	Lastly, the measuring and sampling techniques should be commensurate with the intended use
37	of the data, as characterization survey data may be used to guide the FSS survey design or
38	supplement FSS data, provided that the data meet the selected DQOs and the FSS design
39	requirements.
1	Land Area Surveys
2	Characterization surveys for surface and subsurface soils and media involve employing
3	techniques to determine the lateral and vertical extent and radionuclide concentrations in the
4	soil. This may be performed using either sampling and laboratory analyses or in situ gamma
5	spectrometry analyses, depending on the instrumentation capabilities of each methodology for
6	the expected radionuclides and concentrations. Note that in situ gamma spectrometry analyses
7	or any direct surface measurement cannot easily be used to determine distributions of
8	radionuclides as a function of depth. Sample collection followed by laboratory analysis
9	introduces several additional sources of uncertainty that need to be considered during survey
10	design. In many cases, a combination of direct measurements and samples is required to meet
11	the objectives of the survey.
12	Radionuclide concentrations in background soil samples should be determined for a sufficient
13	number of soil samples that are representative of the soil in terms of soil type, soil depth, etc. It
14	is important that the background samples be collected in non-impacted areas. Consideration
15	should be given to spatial variations in the background radionuclide concentrations as
16	discussed in Section 4.6 and NRC draft report NUREG-1501 (NRC 1994a).
17	Sample locations should be documented using GPS; reference system coordinates (see
18	Section 4.9.5), if appropriate; or fixed site features. A typical reference system spacing for open
19	land areas is 10 m (NRC 1992a). This spacing is somewhat arbitrary and is chosen to facilitate
20	determining survey unit locations and identifying areas of elevated concentrations of radioactive
21	material.
22	Surface scans for gamma activity should be conducted in areas likely to contain residual
23	radioactive material. Selection of instrumentation should be appropriate to detect the
24	radionuclide(s) of interest. Beta scans may be appropriate if the residual radioactive material is
25	near the surface and beta is the dominant radiation emitted from the residual radioactive
26	material. The detection capability and measurement uncertainty of the scanning technique
27	should be appropriate to meet the DQOs and MQOs.
28	Both systematic and judgment surface activity measurements are performed. Judgment direct
29	measurements are performed at locations of elevated direct radiation, as identified by surface
30	scans, to provide data on upper ranges of residual radioactive material levels. Judgment
31	measurements may also be performed in areas where radioactive materials might have
32	accrued, such as in swales, under downspouts, near access roads, etc., if necessary. Each
33	surface activity measurement location should be carefully recorded on the appropriate survey
34	form.
35	Both surface and subsurface soil and media samples may be necessary. Subsurface soil
36	samples should be collected where residual radioactive material is present on the surface and
37	where residual radioactive material is known or suspected in the subsurface. Boreholes should
38	be constructed to provide samples representing subsurface deposits.
39	Exposure rate measurements at 1 m (~3 feet) above the sampling location may also be
40	appropriate. Each surface and subsurface soil sampling and measurement location should be
41	carefully recorded.
42	Surface Water and Sediments
43	Surface water and sediment sampling may be necessary, depending on the potential for these
44	media to contain residual radioactive material, which depends on several factors, including the
proximity of surface water bodies to the site, size of the drainage area, total annual rainfall, and
spatial and temporal variability in surface water flow rate and volume. Refer to Section 3.6.3.3
for further criteria to determine the necessity for surface water and sediment sampling.
Characterizing surface water involves techniques that determine the extent and distribution of
residual radioactive material. This may be performed by collecting grab samples of the surface
water in a well-mixed zone. At certain sites, it may be necessary to collect stratified water
samples to provide information on the vertical distribution of residual radioactive material.
Sediment sampling should also be performed to assess the relationship between the
composition of the suspended sediment and the bedload sediment fractions (i.e., suspended
sediments compared to deposited sediments). When judgment sampling is used to find
radionuclides in sediments, note that radionuclides are more likely to accumulate in fine-
grained deposits found in low-energy environments (e.g., deposited silt on inner curves of
streams).
Radionuclide concentrations in background water samples should be determined for a sufficient
number of water samples that are upstream of the site or in areas unaffected by site operations.
Consideration should be given to any spatial or temporal variations in the background
radionuclide concentrations.
Sampling locations should be documented using reference system coordinates, if appropriate,
or scale drawings of the surface water bodies. Effects of variability of surface water flow rate
should be considered. Surface scans for gamma activity may be conducted in areas likely to
contain residual radioactive material (e.g., along the banks) based on the results of the
document review or preliminary investigation surveys.
Surface water sampling should be performed in areas of runoff from active operations, at plant
outfall locations, upstream and downstream of the outfall, and any other areas likely to contain
residual radioactive material (see Section 3.6.3.3). Measurements of radionuclide
concentrations in water should include gross alpha and gross beta radioactivity concentration
assessments, as well as any necessary radionuclide-specific analyses. Non-radiological
parameters—such as specific conductance, pH, and total organic carbon—may be used as
surrogate indicators of potential radioactive material, if a specific relationship exists between the
radionuclide concentration and the level of the indicator (e.g., if a linear relationship between pH
and the radionuclide concentration in water is found to exist, then the pH may be measured
such that the radionuclide concentration can be calculated based on the known relationship
rather than performing an expensive nuclide-specific analysis). The use of surrogate
measurements is discussed in Section 4.5.3.
Each surface water and sediment sampling location should be carefully recorded on the
appropriate survey form. Additionally, surface water flow models may be used to illustrate
radionuclide concentrations and migration rates.
Ground Water
Ground water sampling may be necessary, depending on the local geology, potential for
residual radioactive material in the subsurface, and the regulatory framework. Because different
agencies handle ground water compliance in different ways (e.g., EPA's Superfund program
and some States require compliance with maximum contaminant levels specified in the Safe
Drinking Water Act), the regulatory agency should be contacted if residual radioactive material
1	in ground water is expected. The need for ground water sampling is described in
2	Section 3.6.3.4.
3	If residual radioactive material in ground water is identified, the regulatory agency should be
4	contacted at once, because (1) ground water release criteria and DCGLs should be established
5	by the appropriate agency (Section 4.5.2), and (2) the default DCGLs for soil may be
6	inappropriate, because they are usually based on ground water without any residual radioactive
7	material.
8	Characterization of residual radioactive material in ground water should determine the extent
9	and distribution of residual radioactive material, rates and direction of ground water migration,
10	and the assessment of potential effects of ground water withdrawal on the migration of residual
11	radioactive material in ground water. This may be performed by designing a suitable monitoring
12	well network. The actual number and location of monitoring wells depends on the size of the
13	affected area, the type and extent of the residual radioactive material, the hydrogeological
14	system, and the objectives of the monitoring program.
15	When ground water samples are taken, background radiation levels should be determined by
16	collecting samples from the same aquifer upgradient of the site and then analyzing them. Any
17	tidal effects and effects of additional wells in the upgradient zone on ground water flow should
18	be evaluated to aid in selecting the proper location of upgradient background samples. The
19	background samples should not be affected by site operations and should be representative of
20	the quality of the ground water that would exist if the site had not been affected by the residual
21	radioactive material. Consideration should be given to any spatial or temporal variations in the
22	background radionuclide concentrations.
23	Sampling locations should be referenced to grid coordinates, if appropriate, or to scale drawings
24	of the ground water monitoring wells. Construction specifications on the monitoring wells should
25	also be provided, including elevation, internal and external dimensions, types of casings, type of
26	screen and its location, borehole diameter, and other necessary information about the wells.
27	In addition to organic and inorganic constituents, ground water sampling and analyses should
28	include all significant radiological constituents. Measurements in potential sources of drinking
29	water should include gross alpha and gross beta assessments, as well as any other
30	radionuclide-specific analyses deemed appropriate based on the HSA. Non-radiological
31	parameters—such as specific conductance, pH, and total organic carbon—may be used as
32	surrogate indicators of the potential presence of certain radionuclides, provided that a specific
33	relationship exists between the radionuclide concentration and the level of the indicator.
34	Each ground water monitoring well location should be carefully recorded on the appropriate
35	survey form. Additionally, radionuclide concentrations and sources should be plotted on a map
36	to illustrate the relationship among radionuclides, sources, hydrogeological features and
37	boundary conditions, and property boundaries (EPA 1993f).
38	Other Media
39	Air sampling may be necessary at some sites, depending on the local geology and the
40	radionuclides of potential concern. This may include collecting air samples or filtering the air to
41	collect resuspended particulates. Air sampling is often restricted to monitoring activities for
42	occupational and public health and safety, and it is not required to demonstrate compliance with
43	risk- or dose-based regulations. Section 3.6.3.5 describes examples of sites where air sampling
44	may provide information useful to designing an FSS. At some sites, radon measurements may
1	be used to indicate the presence of radium, thorium, or uranium in the soil. Section 6.8 and
2	Appendix H provide information on this type of sampling.
3	In rare cases, vegetation samples may be collected as part of a characterization survey to
4	provide information in preparation for an FSS. Because most risk- and dose-based regulations
5	are concerned with potential future land use that may differ from the current land use,
6	vegetation samples are unsuitable for demonstrating compliance with regulations. Although there is a
7	relationship between radionuclide concentrations in plants and those in soil (the soil-to-plant
8	transfer factor is used in many models to develop DCGLs) and the plant concentration could in
9	principle be used as a surrogate measurement of the soil concentration, in most cases a measurement of
10	the soil itself as the parameter of interest is more appropriate and introduces less uncertainty in
11	the result.
12	5.2.2.3 Evaluating Survey Results
13	Survey data are converted to the same units as those in which DCGLs are expressed
14	(Section 6.7). Laboratory and in situ analyses are performed to identify potential residual
15	radioactive material at the site. Appropriate regulatory DCGLs for the site are selected, and the
16	data are then compared to the DCGLs. For characterization data that are used to supplement
17	FSS data, the statistical methodology in Chapter 8 should be followed to determine if a survey
18	unit satisfies the release criteria.
19	For characterization data that are used to help guide remediation efforts, the survey data are
20	used to identify locations and the general extent of residual radioactivity. The survey results are
21	first compared with DCGLs. Surfaces and environmental media are then differentiated as
22	exceeding DCGLs, not exceeding DCGLs, or not affected, depending on the measurement
23	results relative to the DCGL value. Direct measurements indicating areas of elevated activity are
24	further evaluated, and the need for additional measurements is determined.
25	5.2.2.4 Documentation
26	Documentation of the site characterization survey should provide a complete and unambiguous
27	record of the radiological status of the site. In addition, sufficient information to characterize the
28	extent of residual radioactive material, including all possible affected environmental media,
29	should be provided in the report. This report should also provide sufficient information to support
30	reasonable approaches or alternatives to site remediation. Example 2 includes an example of a
31	characterization survey checklist.
Example 2: Example Characterization Survey Checklist
Survey Design
	 Enumerate Data Quality Objectives (DQOs) and Measurement Quality Objectives
(MQOs): State the objective of the survey; survey instrumentation capabilities should
be appropriate for the specific survey objective.
	 Review the Historical Site Assessment (HSA) and scoping survey results, if
performed, for—
	 Operational history (e.g., any problems, spills, or releases) and available
documentation (e.g., radioactive materials license).
	 Other available resources—site personnel, former workers, residents, etc.
	 Types and quantities of materials that were handled and where radioactive
materials were stored, handled, and disposed of.
	 Release and migration pathways.
	 Information on the potential for residual radioactive material that may be useful
during area classification for final status survey (FSS) design. Note: Survey
activities will be concentrated in Class 1 and Class 2 areas.
	Types and quantities of materials likely to remain onsite—consider radioactive
decay and ingrowth of decay products.
	 Document the survey plan (e.g., Quality Assurance Project Plan, standard
operating procedures, etc.)
Conducting Surveys
	 Select instrumentation based on its capabilities for the expected residual radioactive
material and quantities and knowledge of the appropriate derived concentration
guideline levels (DCGLs).
	 Define background locations and determine background activity and radiation levels
for the area; include surface activity levels on building surfaces, radionuclide
concentrations in environmental media, and exposure rates.
	 Establish a reference coordinate system. Prepare scale drawings for surface water
and ground water monitoring well locations.
	 Perform thorough surface scans of all areas potentially containing residual radioactive
material. Examples of indoor areas include expansion joints, stress cracks,
penetrations into floors and walls for piping, conduits, anchor bolts, and wall/floor
interfaces. Examples of outdoor areas include radioactive material storage areas,
areas downwind of stack release points, dripline and downspout areas, surface
drainage pathways, and roadways that may have been used for transport of
radioactive materials.
	 Perform systematic surface activity measurements.
	 Perform systematic smear, surface and subsurface soil and media, sediment, surface
water, and ground water sampling, if appropriate for the site.
	 Perform judgment direct measurements and sampling of elevated areas to provide
data on the upper ranges of levels of the concentration of radioactive material.
	 Document survey and sampling locations.
	 Maintain chain of custody of samples when necessary.
Note: One category of radiological data (e.g., radionuclide concentration, direct radiation
level, or surface radioactivity) may be sufficient to determine the extent of residual
radioactive material; other measurements may not be necessary (e.g., removable
surface radioactive material or exposure rate measurements).
Note: Measuring and sampling techniques should be commensurate with the intended use
of the data, because characterization survey data may be used to supplement FSS
data.
Evaluating Survey Results
	 Compare survey results with DCGLs. Differentiate surfaces or areas as exceeding
DCGLs, not exceeding DCGLs, or not affected.
	 Evaluate all locations of elevated direct measurements, and determine the need for
additional measurements or samples.
	 Prepare site characterization survey report.
5.2.3 Remedial Action Support Surveys
RAS surveys are conducted to (1) support remediation activities, (2) determine when a site or
survey unit is ready for the FSS, and (3) provide updated estimates of site-specific parameters
to use for planning the FSS. This manual does not discuss the routine operational surveys
(e.g., air sampling, dose rate measurements, environmental sampling) conducted for health and
safety purposes to support remediation activities.
A RAS survey serves to monitor the effectiveness of remediation efforts to reduce residual
radioactive material to acceptable levels. The RAS survey also ensures that remediation is
targeted to only those areas requiring remediation, which in turn ensures a cost-effective
remediation. This type of survey guides the cleanup in a real-time mode. The RAS survey
typically relies on a simple radiological parameter, such as direct radiation near the surface, as
an indicator of effectiveness. The investigation level for the RAS survey (established as the
DCGL) is determined and used for immediate, in-field decisions (Section 5.3.8).
Such a survey is intended for expediency and cost-effectiveness and does not provide thorough
or accurate data describing the radiological status of the site. Note that this survey typically
does not provide information that can be used to demonstrate compliance with the DCGLs;
rather, it is an interim step in the compliance demonstration process. Areas that are determined
to likely satisfy the DCGLs on the basis of the RAS survey will then be surveyed in detail by the
FSS. Alternatively, the RAS survey can be designed to meet the objectives of an FSS as
described in Section 5.3.
Remedial activities result in changes to the distribution of residual radioactive material within a
survey unit. The site-specific parameters used during FSS planning (e.g., variability in the
radionuclide concentration within a survey unit or the probability of small areas of elevated
activity) will change during remediation. For most survey units, values for these parameters will
need to be re-established following remediation. Obtaining updated values for these critical
planning parameters should be considered when designing a RAS survey.
1	5.2.3.1 Survey Design
2	The objective of the RAS survey is to determine whether remediation was adequate to remove
3	radioactive material to levels at or below the DCGL criteria. Although the presence of small
4	areas of elevated concentrations of radioactive material may satisfy the elevated measurement
5	criteria, it may be more efficient to design the RAS survey to identify residual radioactive
6	material at the radionuclide-specific release limit based on the spatial distribution of the
7	radionuclide within a survey unit (DCGLw) and to remediate small areas of elevated activity that
8	may potentially satisfy the release criteria. Survey instrumentation and techniques are therefore
9	selected based on the instrumentation capabilities for the known or suspected radionuclides and
10	DCGLs to be achieved.
11	There will be radionuclides and media that cannot be evaluated at the DCGLw using field
12	monitoring techniques. For these cases, it may be feasible to collect and analyze samples by
13	methods that are quicker and less costly than radionuclide-specific laboratory procedures. Field
14	laboratories and screening techniques may be acceptable alternatives to more expensive
15	analyses. Reviewing remediation plans may be required to get an indication of the location and
16	amount of residual radioactive material remaining following remediation.
17	5.2.3.2 Conducting Surveys
18	Field survey instruments and procedures are selected based on their ability to detect and
19	quantify the expected radionuclides. Survey methods typically include scans of surfaces
20	followed by direct measurements to identify residual radioactive material. The surface activity
21	levels are compared to the investigation levels, and a determination is made on the need for
22	further remediation efforts.
23	Survey activities for soil excavations include surface scans using field instrumentation sensitive
24	to beta and gamma activity. Because it can be difficult to correlate scanning results to
25	radionuclide concentrations in soil, judgment should be exercised carefully when using scan
26	results to guide the cleanup efforts. Field laboratories and screening techniques may provide a
27	better approach for determining whether further soil remediation is necessary.
28	5.2.3.3 Evaluating Survey Results
29	Survey data (e.g., surface activity levels and radionuclide concentrations in various media) are
30	converted to standard units and compared to the DCGLs (Section 6.7). If results of these
31	survey activities indicate that remediation has been successful in meeting the DCGLs, remedial
32	actions are ceased, and FSS activities are initiated. Alternatively, further remediation may be
33	needed if results indicate the presence of residual radioactive material in excess of the DCGLs.
34	DCGLs may be recalculated based on the results of the remediation process as the regulatory
35	program allows or permits.
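The in-field decision described above reduces to converting a measured count rate into the same units as the DCGL and comparing the result to the investigation level. The short sketch below illustrates that comparison; the instrument efficiency, probe area, and DCGL values are hypothetical, and the actual conversion factors are developed as described in Section 6.7 and Chapter 6.

```python
# Hypothetical sketch of a remedial action support (RAS) in-field decision:
# convert a gross count rate to surface activity and compare it to the DCGL
# used as the investigation level. Instrument factors are illustrative only;
# Section 6.7 and Chapter 6 govern the actual conversions.

def surface_activity_dpm_per_100cm2(gross_cpm, background_cpm,
                                    total_efficiency, probe_area_cm2):
    """Net count rate divided by total efficiency, normalized to 100 cm2."""
    net_cpm = gross_cpm - background_cpm
    return net_cpm / (total_efficiency * (probe_area_cm2 / 100.0))

dcgl = 5000.0                        # dpm/100 cm2, hypothetical investigation level
activity = surface_activity_dpm_per_100cm2(
    gross_cpm=850.0, background_cpm=300.0,
    total_efficiency=0.12, probe_area_cm2=126.0)

if activity > dcgl:
    print(f"{activity:.0f} dpm/100 cm2 exceeds the DCGL; continue remediation.")
else:
    print(f"{activity:.0f} dpm/100 cm2 is at or below the DCGL; consider FSS readiness.")
```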
36	5.2.3.4 Documentation
37	The RAS survey should guide the cleanup and alert those performing remedial activities that
38	(1) additional remediation is needed or (2) the site may be ready to initiate an FSS. Data that
39	indicate an area has been successfully remediated could be used to estimate the variance for
40	the survey units in that area. Information identifying areas of elevated activity that existed before
41	remediation may be useful for planning FSSs. Example 3 includes an example of a RAS survey
42	checklist, including survey design, conduct of surveys, and evaluation of survey results.
Example 3: Example Remedial Action Support Survey Checklist
Survey Design
	 Enumerate Data Quality Objectives and Measurement Quality Objectives: State the
objectives of the survey; survey instrumentation capabilities should be appropriate for
the specific survey objective.
	 Document the survey plan (e.g., Quality Assurance Project Plan, standard operating
procedures, etc.)
	 Review the remediation plans.
	 Determine applicability of monitoring surfaces/soils for the radionuclides of concern.
Note: RAS surveys may not be feasible for residual radioactive materials with very low-
energy beta emitters or for soils or media containing pure alpha emitters.
	 Select simple radiological parameters (e.g., surface activity) that can be used to make
immediate in-field decisions on the effectiveness of the remedial action.
Conducting Surveys
	 Select instrumentation based on its capabilities for measuring the expected
radionuclides.
	 Perform scanning and surface activity measurements near the surface being
remediated.
	 Survey soil excavations and perform field evaluation of samples (e.g., gamma
spectrometry of undried/non-homogenized soil) as remedial actions progress.
Evaluating Survey Results
	 Compare survey results with DCGLs using survey data as a field decision tool to
guide the remedial actions in a real-time mode.
	 Document survey results.
1	5.3 Final Status Surveys
2	An FSS is performed to demonstrate that residual radioactive material in each survey unit
3	satisfies the predetermined criteria for release for unrestricted use or, where appropriate, for use
4	with designated limitations. The survey provides data to demonstrate that all radiological
5	parameters do not exceed the established DCGLs. For these reasons, more detailed
6	information is provided for this category of survey. For the FSS, survey units represent the
7	fundamental elements for demonstrating that the property is less than the DCGLw by using a
8	combination of direct measurements or sampling and scanning (see Sections 5.3.3-5.3.5) or
9	by a scan-only survey of the property (if warranted by the scanning instrumentation detection
10	capability and measurement uncertainty; see Section 5.3.6). The percentage of the area
1	scanned is dependent upon the classification of the property or survey units within the property
2	as well as the relative shift for the site. The documentation specified in the following sections
3	helps ensure a consistent approach among different organizations and regulatory agencies.
4	This allows for comparisons of survey results between sites or facilities.
5	The MARSSIM approach recognizes that alternative methods may be acceptable to different
6	regulatory agencies. Flow diagrams and a checklist to assist the user in planning a survey are
7	included in this section.
8	Figures 5.4-5.6 illustrate the process of designing an FSS. This process begins with
9	development of DQOs and MQOs. The first decision after developing the DQOs and MQOs is to
10	establish whether a scan-only or traditional MARSSIM approach (scanning with direct
11	measurements and/or samples) will be used. Based on these objectives and the known or
12	anticipated radiological conditions at the site, the numbers and locations of measurement and
13	sampling points used and amount of scanning to demonstrate compliance with the release
14	criteria are then determined. Finally, survey techniques appropriate to develop adequate data
15	(see Chapters 6 and 7) are selected and implemented.
16	The elements of an FSS discussed in Section 5.3 consist of the following subsections:
17	• selecting either Scenario A or Scenario B as a basis of the survey design (Section 5.3.1)
18	• determining the appropriate release criteria based on whether radionuclides of concern are
19	present in the background (Section 5.3.2)
20	• determining the appropriate number of data points for statistical tests when residual
21	radioactive materials are present in the background (Section 5.3.3)
22	• determining the appropriate number of data points for statistical tests when residual
23	radioactive materials are not present in the background (Section 5.3.4)
24	• establishing procedures for determining data points for small areas of elevated activity
25	(Section 5.3.5)
26	• determining the scan area (Section 5.3.6)
27	• determining the survey locations (Section 5.3.7)
28	• establishing the appropriate investigation level for a survey (Section 5.3.8)
29	• developing an integrated survey design (Section 5.3.9)
30	• evaluating the survey results (Section 5.3.10)
31	• documenting the results (Section 5.3.11)
32	Another important consideration during planning for an FSS is the performance of confirmatory
33	surveys. Planning for the FSS should include early discussions with the regulatory agency
34	concerning logistics for confirmatory or verification surveys. A confirmatory survey (also known
35	as an independent verification survey) may be performed by the regulatory agency or by an
36	independent third party (e.g., a party contracted by the regulatory agency) to provide data to
37	substantiate results of the FSS. Actual field measurements and sampling may be performed.
[Flow diagram: select the appropriate scenario (Section 5.3.1); enumerate DQOs and MQOs and select the measurement method (sampling/direct surveys with scanning, Chapters 4, 6, and 7, or scan-only surveys, Section 5.3.6); determine σ, Δ, and Δ/σ (Sections 5.3.3 and 5.3.4); if the radionuclide(s) are present in background (Section 4.5), obtain the number of data points for the WRS test, N/2, from Table 5.2 for each survey unit and reference area (Section 5.3.3); otherwise obtain the number of data points for the Sign test, N, from Table 5.3 (Section 5.3.4); identify measurement locations for discrete measurements (Section 5.3.7, Figure 5.5) and determine the scan area and survey locations for scans (Sections 5.3.6 and 5.3.7); identify data needs for assessment of potential areas of elevated activity (Section 5.3.5, Figure 5.6); determine investigation levels (Section 5.3.8); and develop an integrated survey design (Section 5.3.9).]
Figure 5.4: Process for Designing an Integrated Survey Plan for a Final Status Survey
[Flow diagram: based on the area classification (Section 4.6.1), determine the number of data points needed (Sections 5.3.3 and 5.3.4); for Class 1 and Class 2 areas, determine the spacing for the survey unit, generate a random starting point, and identify data point grid locations (Section 5.3.7); for Class 3 areas, generate sets of random values (Appendix I), multiply the survey unit dimensions by the random numbers to determine coordinates, and identify data point locations (Section 5.3.7); where conditions prevent survey of identified locations, supplement with additional randomly selected locations and continue until the necessary number of data points is identified.]
Figure 5.5: Process for Identifying Discrete Measurement Locations
[Flow diagram: establish DQOs for areas with the potential for exceeding DCGLs and the acceptable risk of missing such areas; identify the number of data points, n, needed based on the statistical tests (Sections 5.3.3 and 5.3.4); calculate the area, A, bounded by the data points (Section 5.3.5.1); determine acceptable concentrations in various individual smaller areas within the survey unit and the acceptable concentration corresponding to the calculated area, A (Section 5.3.5); determine the required scan MDC to identify the acceptable concentration in an area, A; evaluate the MDCs of measurement techniques for available instrumentation; if the scan MDC for available instrumentation is less than the required scan MDC, no additional sampling points are necessary for potential elevated areas (Examples 7 and 8); otherwise, calculate the required grid spacing and number of data points (Section 5.3.5.2).]
Figure 5.6: Identifying Data Needs for Assessment of Potential Areas of Elevated Activity in Class 1 Survey Units
Independent confirmatory survey activities are usually limited in scope to spot-checking
conditions at selected locations, comparing findings with those of the FSS and performing
independent statistical evaluations of the data developed from the confirmatory survey and the
FSS. Another purpose of the confirmatory activities may be to identify any deficiencies in the
FSS documentation based on a thorough review of survey procedures and results. Finally,
reviewing the results of confirmatory surveys performed on other sites may provide insight into
possible survey deficiencies, which can then be corrected before the FSS is performed (Roberts
2008).
5.3.1 Selecting the Appropriate Scenario
The DQO process, as it is applied to FSSs, is described in more detail in Appendix D of this
manual and in EPA and NRC guidance documents (EPA 1987b, 1987c, 2006c; NRC 1998a). As
part of this process, the objective of the survey and the null and alternative hypotheses should
be clearly stated. The objective of FSSs is typically to demonstrate that residual radioactive
material levels meet the release criteria. One of two approaches is used to demonstrate that this
objective is met; the two approaches differ in the selection of the null hypothesis (i.e., what is
assumed to be the true state of nature), as summarized in Table 5.1.
Table 5.1: Null and Alternative Hypothesis for Scenarios A and B

Scenario | Null Hypothesis (H0) | Alternative Hypothesis (H1)
A | The concentration of residual radioactive material is equal to or exceeds the release criteria. | The concentration of residual radioactive material is less than the release criteria.
B | The concentration of residual radioactive material is equal to or less than the release criteria. | The concentration of residual radioactive material exceeds the release criteria.
Historically, MARSSIM recommended the use of Scenario A, which put the burden of proof that
the survey unit met the release criteria on the individuals designing the survey. However,
Scenario A requires that survey designers choose a discrimination limit (DL), the lower bound of
the gray region (LBGR), at some radioactive material concentration less than the DCGL. This is
effectively impossible when the AL corresponding to the release criteria is "zero residual
radioactive material" or "zero residual radioactive material above background." The only way to
design a survey for these kinds of release criteria is to establish a DL at some radioactive
material concentration greater than the AL.
In Scenario B, the burden of proof is no longer on the individuals designing the survey;
consequently, Scenario B should be used with caution and only in those situations where
Scenario A is not an effective alternative. The consequence of inadequate power is an
increased Type II decision error (β) rate. For Scenario A, this means that a survey unit that
does meet the release criteria has a higher probability of being incorrectly determined not to
meet the release criteria. For Scenario B, this means that a survey unit that does not meet the
release criteria has a higher probability of being incorrectly determined to meet the release
criteria. For this reason, individuals designing a MARSSIM survey using Scenario B should
make conservative assumptions for the estimate of the standard deviation (σ) (see
Section 5.3.3.2) so that, even if the variability in the survey unit is higher than expected, the
power of the resulting survey (1 - β) (see Section 5.3.2) will still be sufficient to ensure that
survey units with residual radioactive material in excess of the AL will be discovered
(1 - β) × 100 percent of the time. As a result, a retrospective power analysis needs to be
performed following the completion of Scenario B
1	MARSSIM surveys indicating that regulatory agency requirements on β at the DL were met. See
2	Chapter 8 and Appendix M for more information on performing a retrospective power analysis.
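The following sketch illustrates what such a power check can look like. It uses the normal approximation that underlies the Table 5.2 sample sizes and is an assumption for illustration only; the retrospective power analysis actually required for Scenario B surveys is described in Chapter 8 and Appendix M.

```python
# Approximate power check for the WRS test using the normal approximation that
# underlies the Table 5.2 sample sizes (an illustrative assumption only).
from math import sqrt, erf

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def wrs_power(n_total, relative_shift, z_alpha=1.645):
    """Approximate power (1 - beta) of the WRS test for N total measurements
    split evenly between the survey unit and the reference area.
    z_alpha = 1.645 corresponds to alpha = 0.05."""
    p_r = phi(relative_shift / sqrt(2.0))   # Pr(survey measurement > reference measurement)
    return phi(sqrt(3.0 * n_total) * (p_r - 0.5) - z_alpha)

# Hypothetical check: 9 measurements each in the survey unit and reference area,
# relative shift of 4.0, alpha = 0.05.
print(f"Estimated power: {wrs_power(18, 4.0):.3f}")   # roughly 0.98
```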
3	5.3.2 Application of Release Criteria
4	The statistical tests used to evaluate data for FSSs where direct measurements or sampling and
5	analysis are performed depend on the scenario selected. For radionuclides that are present in
6	background, the Wilcoxon Rank Sum (WRS) test is typically used in Scenario A. In Scenario B
7	two nonparametric statistical tests are performed: the WRS test and the Quantile test. The WRS
8	and Quantile tests are both used because each test detects different residual radioactive
9	material patterns in the survey unit. When radionuclides of concern are not present in
10	background, the Sign test is used for both scenarios. For scan-only surveys, a comparison to an
11	upper confidence level (UCL) is performed. The Sign, WRS, UCL, and Quantile tests are
12	discussed in Chapter 8.
13	To determine data needs for these tests, the acceptable probability of making Type I decision
14	errors (α) and Type II decision errors (β) should be established (see Appendix D, Section
15	D.1.6). The acceptable decision error rates are defined at the LBGR and the DCGLw for
16	Scenario A and the action level (AL) and the DL for Scenario B. Acceptable decision error rates
17	are determined during survey planning using the DQO process.
18	The final step of the DQO process includes selecting a survey design that satisfies the DQOs.
19	For some sites or survey units, the information provided in this section may result in a survey
20	design that cannot be accomplished with the available resources. For these situations, the
21	planning team may be able to relax one or more of the constraints used to develop the survey
22	design as described in Appendix D. For example—
23	• increasing the decision error rates, considering the risks associated with making an incorrect
24	decision
25	• increasing the width of the gray region, as long as the LBGR is not set lower than the
26	estimate of the residual radioactive material remaining in the survey unit in Scenario A
27	• changing the boundaries—it may be possible to reduce measurement costs by changing or
28	eliminating survey units that may require different decisions
29	5.3.3 Determining Numbers of Data Points for Statistical Tests for Residual Radioactive
30	Material Present in Background
31	The comparison of measurements from the reference area and survey unit is made using the
32	WRS test, which is usually conducted for each survey unit. In addition, the Elevated
33	Measurement Comparison (EMC) may need to be performed against each measurement to
34	ensure that the measurement result does not exceed a specified investigation level. If any
35	measurement in the remediated survey unit exceeds the specified investigation level, then
36	additional investigation is recommended, at least locally, regardless of the outcome of the WRS
37	test.
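A minimal sketch of the EMC screen is shown below; the measurement values and investigation level are hypothetical.

```python
# Minimal sketch of the Elevated Measurement Comparison (EMC): flag any
# individual measurement that exceeds the investigation level established
# during survey design (Section 5.3.8), regardless of the WRS test outcome.
measurements = [112.0, 98.5, 143.2, 401.7, 120.9]   # Bq/kg at systematic locations
investigation_level = 300.0                          # Bq/kg, hypothetical

for location, value in enumerate(measurements):
    if value > investigation_level:
        print(f"Location {location}: {value} Bq/kg exceeds the investigation "
              f"level of {investigation_level} Bq/kg; investigate further.")
```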
38	The WRS test is most effective when residual radioactive material is uniformly present
39	throughout a survey unit. The test is designed to detect whether the median concentration
40	exceeds the DCGLw. The advantage of this nonparametric test is that it does not assume the
data are normally or log-normally distributed. The WRS test also allows for "less than"
measurements to be present in the reference area and the survey units. This test can generally
be used with up to 40 percent "less than" measurements in either the reference area or the
survey unit. However, the use of "less than" values in data reporting is not recommended (NRC
2004). Wherever possible, the actual result of a measurement, together with its uncertainty,
should be reported.
This section introduces several terms and statistical parameters that will be used to determine
the number of data points needed to apply the nonparametric tests. An example is provided
below to better illustrate the application of these statistical concepts.
5.3.3.1 Define the Gray Region
In Scenario A, the upper bound of the gray region (UBGR) is equal to the DCGLw. The gray
region is defined as the interval between the LBGR and the DCGLw (Figure 5.7). For
Scenario A, the LBGR is typically chosen to represent a conservative (slightly higher) estimate
of the mean concentration of residual radioactive material remaining in the survey unit at the
beginning of the FSS. If there is no information with which to estimate the residual radioactive
material concentration remaining, the LBGR may be initially set to equal one-half of the DCGLw.


[Diagram: the gray region spans the concentration-of-radioactive-material axis from the LBGR to the DCGLw, with width Δ.]
Figure 5.7: Gray Region for Scenario A
In Scenario B, the UBGR is equal to the DL, and the LBGR is equal to the AL. The gray region
is defined as the interval between the AL and the DL (Figure 5.8).2 The AL is the concentration
of radioactive material that causes a decision maker to choose one of the alternative actions,
such as releasing a survey unit or requiring additional investigation. The planning team also
chooses the DL. The DL is the concentration of radioactive material or level of radioactivity that
can be reliably distinguished from the action level by performing measurements with the devices
selected for the survey (i.e., direct measurements, scans, in situ measurements, samples with
laboratory analyses) and defines the rigor of the survey. It is determined through negotiations
with the regulator, and, in some cases, the DL will be set equal to a regulatory limit (e.g.,
10 CFR 36.57 and DOE 2011c). The DL and the AL should be reported in the same units. The
selection of the appropriate null hypothesis is further discussed in Chapter 8 and Appendix D.
2 This description of Scenario B is based on information contained in the Multi-Agency Radiation Survey and
Assessment of Materials and Equipment (MARSAME) Manual and the Multi-Agency Radiological Laboratory Analytical
Protocols (MARLAP) Manual and is fundamentally different from the description of Scenario B found in NUREG-1505 (NRC 1998a).
[Diagram: the gray region spans the concentration-of-radioactive-material axis from the AL to the DL, with width Δ.]
Figure 5.8: Gray Region for Scenario B
When Scenario B is being used, the variability in the data may be such that a decision may be
"too close to call" when the true but unknown value of the residual radioactivity concentration is
very near the DL. In this situation, consultation with the appropriate regulator may be required to
determine the AL and DL. As an example, the U.S. NRC discusses methods in NUREG-1505
(NRC 1998a), Chapter 13, to establish the gray region and a concentration level that is
considered indistinguishable from background when the WRS test is used3.
5.3.3.2 Calculate the Relative Shift
The width of the gray region is a parameter that is essential for planning all statistical tests; it is
also referred to as the shift, Δ. In Scenario A, the shift is the difference between the LBGR and
the DCGLw (Δ = DCGLw - LBGR). In Scenario B, the shift is the difference between the AL
and the DL (Δ = DL - AL). The absolute size of the shift is less important than the relative shift,
Δ/σ, where σ is an estimate of the standard deviation of the measured values in the survey unit.
This estimate of σ includes both the real spatial variability in the quantity being measured and
the uncertainty of the chosen measurement method. The relative shift is an expression of the
resolution of the measurements in units of measurement uncertainty.
The shift and the estimated standard deviation in the measurements of the radioactive material
in the survey unit (σs) and reference area (σr) are used to calculate the relative shift (Δ/σ) (see
Appendix D, Section D.1.7.3). The standard deviations in the radionuclide level will likely be
available from previous systematic and non-judgment survey data (e.g., scoping or
characterization survey data for un-remediated survey units or RAS surveys for remediated
survey units). If they are not available, it may be necessary to (1) perform some limited
preliminary measurements (about 5-20) to estimate the distributions or (2) to make a
reasonable estimate based on available site knowledge. If the first approach above is used, the
scoping or characterization survey data or preliminary measurements used to estimate the
standard deviation should use the same technique as the FSS will. When preliminary data are
not obtained, it may be reasonable to assume a coefficient of variation (CV) on the order of
3 Chapter 8 in NUREG-1505 [NRC 1998] provides additional information on the WRS test.
30 percent, based on experience. The CV is a measure of the dispersion of the data and is
defined by the ratio of the standard deviation to the mean.
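The sketch below illustrates one way these planning quantities can be assembled: σ is estimated from limited preliminary measurements when they exist, or from an assumed coefficient of variation of about 30 percent when they do not, and the relative shift is then computed from the width of the gray region. All numerical values are hypothetical.

```python
# Sketch of assembling the planning quantities for the relative shift.
# Preliminary measurements and limits below are hypothetical; in practice the
# data come from scoping, characterization, or RAS surveys collected with the
# same measurement technique planned for the FSS.
from statistics import mean, stdev

preliminary = [212.0, 198.0, 247.0, 231.0, 205.0, 224.0, 189.0, 240.0]  # Bq/kg

dcgl_w = 400.0   # Bq/kg, hypothetical DCGLw (upper bound of the gray region, Scenario A)
lbgr = 240.0     # Bq/kg, conservative estimate of residual activity in the survey unit

if len(preliminary) >= 5:
    sigma = stdev(preliminary)        # estimate sigma from limited preliminary data
else:
    sigma = 0.30 * mean(preliminary)  # fall back on a coefficient of variation of ~30 percent

delta = dcgl_w - lbgr                 # width of the gray region (the shift)
relative_shift = delta / sigma
print(f"sigma = {sigma:.1f} Bq/kg, Delta = {delta:.0f} Bq/kg, Delta/sigma = {relative_shift:.2f}")
```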
The value selected as an estimate of σ for a survey unit may be based on data collected only
from within that survey unit or from data collected from a much larger area of the site. Note that
survey units are not finalized until the planning stage of the FSS. This means that there may be
some difficulty in determining which individual measurements from a preliminary survey may
later represent a particular survey unit. For many sites, the most practical solution is to estimate
σ for each area classification (i.e., Class 1, Class 2, and Class 3) for both interior and exterior
survey units. This will result in all exterior Class 3 survey units using the same estimate of σ, all
exterior Class 2 survey units using a second estimate for σ, and all exterior Class 1 survey units
using a third estimate for σ. If there are multiple types of surfaces within an area classification,
additional estimates of σ may be required. For example, a Class 2 concrete floor may require a
different estimate of σ than a Class 2 cinder block wall, or a Class 3 unpaved parking area may
require a different estimate of σ than a Class 3 lawn. In addition, a separate estimate of σ
should be obtained for every reference area.
The importance of choosing appropriate values for σr and σs must be emphasized. If the value
is grossly underestimated, the number of data points will be too few to obtain the desired power
level for the test, and a resurvey may be recommended (see Chapter 8). If, on the other hand,
the value is overestimated, the number of data points determined will be unnecessarily large.
Values for the relative shift that are less than 1 will result in a large number of measurements
needed to demonstrate compliance. The number of data points will also increase as Δ becomes
smaller. Because the DCGL is fixed, this means that the LBGR also has a significant effect on
the estimated number of measurements needed to demonstrate compliance in Scenario A. The
DL selected during the DQO process will have a similar effect in Scenario B. When the
estimated standard deviations in the reference area and survey units are different, the larger
value should be used to calculate the relative shift (Δ/σ). There is little benefit, in terms of
reduced number of measurements, for relative shift values greater than 3. Because of this and
the large number of measurements resulting from relative shift values less than 1, in
Scenario A, the LBGR may be adjusted to ensure the relative shift is greater than 1, as long as
the LBGR is not set lower than the estimate of the residual radioactive material remaining in the
survey unit. For Scenario B, the planning team may wish to adjust the DL to achieve a similar
effect with approval from the regulator. However, it is extremely important that such adjustments
be supported by data. Additional considerations related to adjusting the relative shift are
provided in Appendix D, Section D.1.7.3.
In practice, the DQO process is used to obtain a proper balance among the use of various
measurement techniques. In general, there is an inverse correlation between the cost of a
specific measurement method and the detection levels being sought. Depending on the survey
objectives, there are many important considerations when selecting a measurement method
that will ultimately affect both the survey costs and the statistical power of the sampling design.
Statistical power is defined as the probability that a statistical test will correctly reject the null
hypothesis (i.e., under Scenario A, accepting that a site that meets the release criteria truly
does, and under Scenario B, accepting that a site that does not meet the release criteria truly
does not). A general example approach that might be undertaken for a Scenario A planning
session is discussed below.
N is the total number of data points for each survey unit/reference area combination. The N data
points are divided between the survey unit, n, and the reference area, m. The simplest method
1	for distributing the N data points is to assign half the data points to the survey unit and half to
2	the reference area, so n = m = N/2. This means that N/2 measurements are performed in
3	each survey unit, and N/2 measurements are performed in each reference area. If more than
4	one survey unit is associated with a particular reference area, N/2 measurements should be
5	performed in each survey unit, and N/2 measurements should be performed in the reference
6	area.
7	Table 5.2 provides a list of the number of data points needed to demonstrate compliance using
8	the WRS test for selected values of α, β, and Δ/σ. The values listed in Table 5.2 represent the
9	number of measurements to be performed in each survey unit and in the corresponding
10	reference area. Example 4 illustrates the use of the WRS Test under Scenario A.
Example 4: Use of WRS Test under Scenario A
A site has 14 survey units and 1 reference area in a building, and the same measurement
method is used to perform measurements in each survey unit and the reference area. The
radionuclide has a wide-area derived concentration guideline level of 400 becquerels/square
meter (Bq/m2) (240 decays per minute [dpm]/100 square centimeters [cm2]). The radionuclide
is present in background at a level of 100 ± 15 Bq/m2 (1σ). The standard deviation of the
radionuclide in the survey area is ± 40 Bq/m2 (24 dpm/100 cm2), based on previous survey
results for the same or similar radionuclide distribution. When the estimated standard
deviation in the reference area and the survey units are different, the larger value, 40 Bq/m2
in this example, is used to calculate the relative shift. During the Data Quality Objective
process, Scenario A is selected. The LBGR is selected to be 240 Bq/m2. This is based on a
conservative estimate of the concentration of residual radioactive material in the survey unit.
Type I and Type II error values (α and β) of 0.05 are selected. Determine the number of data
points to be obtained from the reference area and from each of the survey units for the
statistical tests.
The value of the relative shift for the survey unit, Δ/σ, is (400 - 240)/40, or 4.0. The number
of data points can be obtained directly from Table 5.2. For α = 0.05, β = 0.05, and Δ/σ =
4.0, a value of 9 is obtained for N/2. The table value has already been increased by
20 percent to account for missing or unusable data.
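A closed-form sketch of the Example 4 determination is shown below. The sample-size expression used here is a normal approximation assumed to correspond to Equation O-1 in Appendix O; where rounding conventions cause small differences, the published Table 5.2 values govern.

```python
# Sketch reproducing the Example 4 result with a normal-approximation sample
# size expression for the WRS test (assumed form; Table 5.2 governs).
from math import sqrt, erf, ceil

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

Z = {0.01: 2.326, 0.025: 1.960, 0.05: 1.645, 0.10: 1.282, 0.25: 0.674}  # z values for 1 - p

def wrs_n_per_area(alpha, beta, relative_shift):
    """N/2 measurements in each survey unit and reference area, including the
    20 percent increase applied to the tabulated values."""
    p_r = phi(relative_shift / sqrt(2.0))  # Pr(survey unit measurement > reference measurement)
    n_total = (Z[alpha] + Z[beta]) ** 2 / (3.0 * (p_r - 0.5) ** 2)
    return ceil(1.2 * n_total / 2.0)

# Example 4 inputs: DCGLw = 400 Bq/m2, LBGR = 240 Bq/m2, sigma = 40 Bq/m2.
print(wrs_n_per_area(alpha=0.05, beta=0.05, relative_shift=(400 - 240) / 40))  # prints 9
```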
Table 5.2: Values of N/2 for Use with the Wilcoxon Rank Sum Test4,5

Within each α block, the five columns give N/2 for β = 0.01, 0.025, 0.05, 0.10, and 0.25.

Δ/σ  | α = 0.01                      | α = 0.025                     | α = 0.05                      | α = 0.10                      | α = 0.25
0.1  | 5,452 4,627 3,972 3,278 2,268 | 4,627 3,870 3,273 2,646 1,748 | 3,972 3,273 2,726 2,157 1,355 | 3,278 2,646 2,157 1,655 964   | 2,268 1,748 1,355 964 459
0.2  | 1,370 1,163 998 824 570       | 1,163 973 823 665 440         | 998 823 685 542 341           | 824 665 542 416 243           | 570 440 341 243 116
0.3  | 614 521 448 370 256           | 521 436 369 298 197           | 448 369 307 243 153           | 370 298 243 187 109           | 256 197 153 109 52
0.4  | 350 297 255 211 146           | 297 248 210 170 112           | 255 210 175 139 87            | 211 170 139 106 62            | 146 112 87 62 30
0.5  | 227 193 166 137 95            | 193 162 137 111 73            | 166 137 114 90 57             | 137 111 90 69 41              | 95 73 57 41 20
0.6  | 161 137 117 97 67             | 137 114 97 78 52              | 117 97 81 64 40               | 97 78 64 49 29                | 67 52 40 29 14
0.7  | 121 103 88 73 51              | 103 86 73 59 39               | 88 73 61 48 30                | 73 59 48 37 22                | 51 39 30 22 11
0.8  | 95 81 69 57 40                | 81 68 57 46 31                | 69 57 48 38 24                | 57 46 38 29 17                | 40 31 24 17 8
0.9  | 77 66 56 47 32                | 66 55 46 38 25                | 56 46 39 31 20                | 47 38 31 24 14                | 32 25 20 14 7
1.0  | 64 55 47 39 27                | 55 46 39 32 21                | 47 39 32 26 16                | 39 32 26 20 12                | 27 21 16 12 6
1.1  | 55 47 40 33 23                | 47 39 33 27 18                | 40 33 28 22 14                | 33 27 22 17 10                | 23 18 14 10 5
1.2  | 48 41 35 29 20                | 41 34 29 24 16                | 35 29 24 19 12                | 29 24 19 15 9                 | 20 16 12 9 4
1.3  | 43 36 31 26 18                | 36 30 26 21 14                | 31 26 22 17 11                | 26 21 17 13 8                 | 18 14 11 8 4
1.4  | 38 32 28 23 16                | 32 27 23 19 13                | 28 23 19 15 10                | 23 19 15 12 7                 | 16 13 10 7 4
1.5  | 35 30 25 21 15                | 30 25 21 17 11                | 25 21 18 14 9                 | 21 17 14 11 7                 | 15 11 9 7 3
1.6  | 32 27 23 19 14                | 27 23 19 16 11                | 23 19 16 13 8                 | 19 16 13 10 6                 | 14 11 8 6 3
1.7  | 30 25 22 18 13                | 25 21 18 15 10                | 22 18 15 12 8                 | 18 15 12 9 6                  | 13 10 8 6 3
1.8  | 28 24 20 17 12                | 24 20 17 14 9                 | 20 17 14 11 7                 | 17 14 11 9 5                  | 12 9 7 5 3
1.9  | 26 22 19 16 11                | 22 19 16 13 9                 | 19 16 13 11 7                 | 16 13 11 8 5                  | 11 9 7 5 3
2.0  | 25 21 18 15 11                | 21 18 15 12 8                 | 18 15 13 10 7                 | 15 12 10 8 5                  | 11 8 7 5 3
2.25 | 22 19 16 14 10                | 19 16 14 11 8                 | 16 14 11 9 6                  | 14 11 9 7 4                   | 10 8 6 4 2
2.5  | 21 18 15 13 9                 | 18 15 13 10 7                 | 15 13 11 9 6                  | 13 10 9 7 4                   | 9 7 6 4 2
2.75 | 20 17 15 12 9                 | 17 14 12 10 7                 | 15 12 10 8 5                  | 12 10 8 6 4                   | 9 7 5 4 2
3.0  | 19 16 14 12 8                 | 16 14 12 10 6                 | 14 12 10 8 5                  | 12 10 8 6 4                   | 8 6 5 4 2
3.5  | 18 16 13 11 8                 | 16 13 11 9 6                  | 13 11 9 8 5                   | 11 9 8 6 4                    | 8 6 5 4 2
4.0  | 18 15 13 11 8                 | 15 13 11 9 6                  | 13 11 9 7 5                   | 11 9 7 6 4                    | 8 6 5 4 2

4 In Scenario B the sample size for the WRS test is also used for the Quantile test.
5 The values were calculated using Equation O-1 and increased by 20 percent for the reasons discussed in Appendix O.

Example 5 illustrates the use of the WRS Test under Scenario B.
Example 5: Use of WRS Test under Scenario B
A site has 14 survey units and 1 reference area in a building, and the same measurement
method is used to perform measurements in each survey unit and the reference area. The
radionuclide is present in background at a level of 100 ± 15 becquerels/square meter
(Bq/m2) (1σ). The standard deviation of the radionuclide in the survey area is 40 Bq/m2,
based on previous survey results for the same or similar radionuclide distribution. When the
estimated standard deviation in the reference area and the survey units are different, the
larger value, 40 Bq/m2 in this example, should be used to calculate the relative shift. During
the Data Quality Objective process, Scenario B is selected because the release criterion for
the site is no residual radioactive material above background. The discrimination limit is
selected to be 220 Bq/m2 as a stakeholder agreed-upon starting point for developing an
acceptable survey design, and Type I and Type II error values (α and β) of 0.05 are selected.
Determine the number of data points to be obtained from the reference area and from each of
the survey units for the statistical tests.
The value of the relative shift for the reference area, Δ/σ, is (220 - 100)/40, or 3.0. The
number of data points can be obtained directly from Table 5.2. For α = 0.05, β = 0.05, and
Δ/σ = 3.0, a value of 10 is obtained for N/2. The table value has already been increased by
20 percent to account for missing or unusable data.
5.3.4 Determining Numbers of Data Points for Statistical Tests for Residual Radioactive
Material Not Present in Background
For the situation where the residual radioactive material is not present in background or is
present at such a small fraction of the DCGLw as to be considered insignificant, a background
reference area is not necessary. Instead, the radionuclide levels are compared directly with the
DCGL value. The general approach closely parallels that used for the situation when the
radionuclide is present in background as described in Section 5.3.3. However, the statistical
tests differ slightly. The Sign test replaces the WRS test described above.
5.3.4.1 Define the Gray Region
In Scenario A, the UBGR is equal to the DCGLw (Figure 5.7). The LBGR is typically chosen to
represent a conservative (slightly higher) estimate of the residual radioactive material
concentration remaining in the survey unit at the beginning of the FSS. If there is no information
with which to estimate the residual radioactive material concentration remaining, the LBGR may
be initially set to equal one-half of the DCGLw. In Scenario B, the LBGR is equal to zero or the
DCGLw, and the UBGR is defined as the DL (Figure 5.8). The DL is a concentration or level of
radioactive material that can be reliably distinguished from the LBGR by performing
measurements with the devices selected for the survey. The DL defines the rigor of the survey
and is determined through negotiations with the regulator. The selection of the appropriate null
hypothesis is further discussed in Chapter 8 and Appendix D.
5.3.4.2 Calculate the Relative Shift
In Scenario A, the shift is the distance between the LBGR and the DCGLw (Δ = DCGLw -
LBGR). In Scenario B, the shift is the distance between the AL and the DL (Δ = DL - AL).
The absolute size of the shift is less important than the relative shift, Δ/σ, where σ is an
estimate of the variability in the survey unit. The value of σ may be obtained from earlier
surveys, limited preliminary measurements, or a reasonable estimate. This estimate of
σ includes both the real spatial variability in the quantity being measured and the uncertainty of
the measurement method. The relative shift, Δ/σ, is an expression of the resolution of the
measurements in units of measurement uncertainty. Values of the relative shift that are less
than 1 will result in a large number of measurements needed to demonstrate compliance.
Section 5.3.3.2 provides more detail on the relative shift.
Table 5.3 provides a list of the number of data points used to demonstrate compliance using the
Sign test for selected values of α, β, and Δ/σ. The values listed in Table 5.3 represent the
number of measurements to be performed in each survey unit. These values were calculated
using Equation O-2 in Appendix O and increased by 20 percent to account for missing or
unusable data and uncertainty in the calculated value of N. Example 6 illustrates the use of the
Sign Test under Scenario A.
Example 6: Use of Sign Test Under Scenario A
A site has one survey unit. The wide-area derived concentration guideline level for the
radionuclide of interest is 140 becquerels/kilogram (Bq/kg) (3.9 picocuries/gram [pCi/g]) in soil.
The radionuclide is not present in background; data from previous investigations indicate
average residual radioactive material at the survey unit of 110 ± 3.7 (1σ) Bq/kg (3.7 ±0.1
pCi/g). Using Scenario A, the lower bound of the gray region was selected to be 110 Bq/kg. A
value of 0.05 is next selected for the probability of Type I decision errors (α), and a value of
0.01 is selected for the probability of Type II decision errors (β) based on the survey
objectives. Determine the number of data points to be obtained from the survey unit for the
statistical tests.
The value of the relative shift, Δ/σ, is (140 - 110)/3.7, or 8.1. The number of data points can
be obtained directly from Table 5.3. For α = 0.05, β = 0.01, and Δ/σ > 3.0, a value of 20 is
obtained for N. The table value has already been increased by 20 percent to account for
missing or unusable data and uncertainty in the calculated value of N.
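A similar sketch for Example 6 is shown below. The Sign test sample-size expression is a normal approximation assumed to correspond to Equation O-2 in Appendix O; rounding conventions can shift the result by a point or two, so the published Table 5.3 values govern.

```python
# Sketch reproducing the Example 6 result with a normal-approximation sample
# size expression for the Sign test (assumed form; Table 5.3 governs).
from math import sqrt, erf, ceil

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

Z = {0.01: 2.326, 0.025: 1.960, 0.05: 1.645, 0.10: 1.282, 0.25: 0.674}  # z values for 1 - p

def sign_test_n(alpha, beta, relative_shift):
    """Measurements per survey unit for the Sign test, including the 20 percent
    increase; relative shifts above 3.0 use the last (3.0) row, as in Table 5.3."""
    shift = min(relative_shift, 3.0)
    sign_p = phi(shift)                       # probability a measurement falls below the DCGLw
    n = (Z[alpha] + Z[beta]) ** 2 / (4.0 * (sign_p - 0.5) ** 2)
    return ceil(1.2 * n)

# Example 6 inputs: DCGLw = 140 Bq/kg, LBGR = 110 Bq/kg, sigma = 3.7 Bq/kg.
print(sign_test_n(alpha=0.05, beta=0.01, relative_shift=(140 - 110) / 3.7))  # prints 20
```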
Table 5.3: Values of N for Use with the Sign Test6

Within each α block, the five columns give N for β = 0.01, 0.025, 0.05, 0.10, and 0.25.

Δ/σ  | α = 0.01                      | α = 0.025                     | α = 0.05                      | α = 0.10                      | α = 0.25
0.1  | 4,095 3,476 2,984 2,463 1,704 | 3,476 2,907 2,459 1,989 1,313 | 2,984 2,459 2,048 1,620 1,018 | 2,463 1,989 1,620 1,244 725   | 1,704 1,313 1,018 725 345
0.2  | 1,035 879 754 623 431         | 879 735 622 503 333           | 754 622 518 410 258           | 623 503 410 315 184           | 431 333 258 184 88
0.3  | 468 398 341 282 195           | 398 333 281 227 150           | 341 281 234 185 117           | 282 227 185 143 83            | 195 150 117 83 40
0.4  | 270 230 197 162 113           | 230 192 162 131 87            | 197 162 136 107 68            | 162 131 107 82 48             | 113 87 68 48 23
0.5  | 178 152 130 107 75            | 152 126 107 87 58             | 130 107 89 71 45              | 107 87 71 54 33               | 75 58 45 33 16
0.6  | 129 110 94 77 54              | 110 92 77 63 42               | 94 77 65 52 33                | 77 63 52 40 23                | 54 42 33 23 11
0.7  | 99 83 72 59 41                | 83 70 59 48 33                | 72 59 50 40 26                | 59 48 40 30 18                | 41 33 26 18 9
0.8  | 80 68 58 48 34                | 68 57 48 39 26                | 58 48 40 32 21                | 48 39 32 24 15                | 34 26 21 15 8
0.9  | 66 57 48 40 28                | 57 47 40 33 22                | 48 40 34 27 17                | 40 33 27 21 12                | 28 22 17 12 6
1.0  | 57 48 41 34 24                | 48 40 34 28 18                | 41 34 29 23 15                | 34 28 23 18 11                | 24 18 15 11 5
1.1  | 50 42 36 30 21                | 42 35 30 24 17                | 36 30 26 21 14                | 30 24 21 16 10                | 21 17 14 10 5
1.2  | 45 38 33 27 20                | 38 32 27 22 15                | 33 27 23 18 12                | 27 22 18 15 9                 | 20 15 12 9 5
1.3  | 41 35 30 26 17                | 35 29 24 21 14                | 30 24 21 17 11                | 26 21 17 14 8                 | 17 14 11 8 4
1.4  | 38 33 28 23 16                | 33 27 23 18 12                | 28 23 20 16 10                | 23 18 16 12 8                 | 16 12 10 8 4
1.5  | 35 30 27 22 15                | 30 26 22 17 12                | 27 22 18 15 10                | 22 17 15 11 8                 | 15 12 10 8 4
1.6  | 34 29 24 21 15                | 29 24 21 17 11                | 24 21 17 14 9                 | 21 17 14 11 6                 | 15 11 9 6 4
1.7  | 33 28 24 20 14                | 28 23 20 16 11                | 24 20 17 14 9                 | 20 16 14 10 6                 | 14 11 9 6 4
1.8  | 32 27 23 20 14                | 27 22 20 16 11                | 23 20 16 12 9                 | 20 16 12 10 6                 | 14 11 9 6 4
1.9  | 30 26 22 18 14                | 26 22 18 15 10                | 22 18 16 12 9                 | 18 15 12 10 6                 | 14 10 9 6 4
2.0  | 29 26 22 18 12                | 26 21 18 15 10                | 22 18 15 12 8                 | 18 15 12 10 6                 | 12 10 8 6 3
2.5  | 28 23 21 17 12                | 23 20 17 14 10                | 21 17 15 11 8                 | 17 14 11 9 5                  | 12 10 8 5 3
3.0  | 27 23 20 17 12                | 23 20 17 14 9                 | 20 17 14 11 8                 | 17 14 11 9 5                  | 12 9 8 5 3

6 The values were calculated using Equation O-2 and increased by 20 percent for the reasons discussed in Appendix O.
5.3.5 Determining the Number of Discrete Data Points for Small Areas of Elevated Activity
As described in Section 4.2.5, the treatment of elevated areas of radioactive material is
determined strictly through requirements of regulatory agencies and is beyond the scope of
MARSSIM. A technically sound approach should be used for the derivation of the DCGL for the
Elevated Measurement Comparison (DCGLemc) values. The methodology presented in
MARSSIM of using the unity rule to consider the combined impact of each elevated area is one
conservative approach to assess areas of elevated radioactive materials. See Figure 5.6 for a
summary of data needs for areas of elevated activity.
The statistical tests described throughout Sections 5.3.3 and 5.3.4 (see also Chapter 8)
evaluate whether the residual radioactive material in an area exceeds the DCGLw for
radionuclide concentrations that are approximately uniform across the survey unit. In addition,
there should be a reasonable level of assurance that any small areas of elevated concentrations
of residual radioactive material that could be significant relative to the DCGLemc are not missed
during the FSS. The statistical tests introduced in the previous sections may not successfully
detect small areas of elevated concentrations of radioactive material. Instead, systematic
measurements or samples are made at locations defined by a systematic grid, in conjunction
with surface scanning. These results are used to obtain adequate assurance that small areas of
elevated concentrations of radioactive material are below the DCGLemc and the release criteria
are met. The procedure is applicable for all radionuclides, regardless of whether they are
present in background and is implemented for Class 1 survey units.
5.3.5.1 Determine if Additional Data Points are Needed
Identify the number of survey data points needed for the statistical tests discussed in
Sections 5.3.3 or 5.3.4 (the appropriate section is determined by whether the radionuclide is
present in background). These data points are then positioned throughout the survey unit by
randomly selecting a start point and establishing a systematic pattern. This systematic sampling
grid may be either triangular or rectangular. The triangular grid is generally more efficient for
locating small areas of elevated activity. Appendix D includes a brief discussion on the
efficiency of triangular and rectangular grids for locating areas of elevated activity. A more
detailed discussion is provided by EPA (EPA 1994b).
The number of calculated survey locations, n, and the total area of the survey unit, A, are used
to determine the grid spacing, L, of the systematic sampling pattern, using Equations 5-1
and 5-2.
L = [A (survey unit) / (0.866 n)]^(1/2)	for a triangular grid	(5-1)

L = [A (survey unit) / n]^(1/2)	for a rectangular grid	(5-2)
The grid area that is bounded by these survey locations is given by Equations 5-3 and 5-4. The
risk of not sampling a circular area—equal to A (grid area)—of elevated activity by use of a
random-start grid pattern is illustrated in Figure D.7 in Appendix D.
A (grid area) = 0.866 L²	for a triangular grid	(5-3)
A (grid area) = L²	for a rectangular grid	(5-4)
The DCGLemc that corresponds to this size of the area of elevated activity (Aea) is obtained from
specific regulatory agency guidance. After using the grid area calculated in Equation 5-3 or 5-4
to determine the DCGLemc for a specific radionuclide, the required minimum detectable
concentration (MDC) of the scan procedure needed to detect an area of elevated activity is
given by Equation 5-5.
Scan MDC (required) = DCGLemc	(5-5)
The actual scan MDCs of scanning techniques are then determined for the available
instrumentation (see Sections 6.3.2 and 6.6). The actual scan MDC of the selected scanning
technique is compared to the required scan MDC. If the actual scan MDC is less than the
required scan MDC, no additional sampling points are necessary for assessment of small areas
of elevated activity. In other words, the scanning technique exhibits adequate detection
capability to detect small areas of elevated activity.
Revisions 0 and 1 of MARSSIM (published in 1998 and 2000, respectively) included the
calculation of an area factor7 as an intermediate step in the determination of the required scan
MDC. The use of an area factor is not necessary if DCGLemc is tabulated directly as a function
of the area of radioactive material. To simplify the determination of the required scan MDC, the
use of the area factor as an intermediate calculation is not included in this revision of
MARSSIM. The area factor can still be used if the ratio of the DCGLemc to the DCGLw is known
and will produce the same results as the approach described in the current revision of
MARSSIM.
5.3.5.2 Calculate the Required Grid Spacing and Number of Data Points
If the actual scan MDC is greater than the required scan MDC (i.e., the available scan detection
capability is not sufficient to detect small areas of elevated activity), then it is necessary to
calculate the DCGLemc that corresponds to the actual scan MDC using Equation 5-6.
DCGLemc = Scan MDC (actual)	(5-6)
The size of the area of elevated activity (Aea) that corresponds to this DCGLemc is then obtained
from specific regulatory agency guidance. The required number of data points for assessing
7 The area factor, Am, is defined as the ratio of the DCGLemc to the DCGLw as a function of the grid area and relates the required scan MDC to the DCGLw using the equation: Scan MDC (required) = Am × DCGLw.
small areas of elevated activity (nEA) can then be determined by dividing the area of elevated
activity (Aea) into the survey unit area using Equation 5-7.
nEA = A (survey unit) / Aea (grid area)   (5-7)
The calculated number of measurement or sampling locations, nEA, is used to determine a
revised spacing, LEA, of the systematic pattern, using Equations 5-8 and 5-9.
LEA = √[A (survey unit) / (0.866 nEA)]   for a triangular grid   (5-8)

LEA = √[A (survey unit) / nEA]   for a rectangular grid   (5-9)
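Equations 5-7 through 5-9 can be evaluated the same way. In the Python sketch below, rounding the number of measurements up to a whole number is an assumption made for the illustration, since a fractional number of measurements is not meaningful; the names are not part of MARSSIM.

    import math

    def n_elevated(area_m2, grid_area_m2):
        """Number of data points needed to account for areas of elevated activity
        (Equation 5-7), rounded up to a whole number of measurements."""
        return math.ceil(area_m2 / grid_area_m2)

    def revised_spacing(area_m2, n_ea, pattern="triangular"):
        """Revised grid spacing L_EA (Equation 5-8 triangular, Equation 5-9 rectangular)."""
        if pattern == "triangular":
            return math.sqrt(area_m2 / (0.866 * n_ea))
        return math.sqrt(area_m2 / n_ea)

    # Illustrative values: 1,500 m2 survey unit and a 27 m2 elevated-area grid size
    n_ea = n_elevated(1500, 27)         # 56 measurements
    l_ea = revised_spacing(1500, n_ea)  # about 5.6 m; rounded down for field use (e.g., 5.5 m)
    print(n_ea, round(l_ea, 1))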
The distance between measurement/sampling locations should generally be rounded down to
the nearest distance that can be conveniently measured in the field. This value of LEA is then
used to determine the measurement locations as described in Section 5.3.7. The Sign, WRS,
and quantile tests are performed using the larger number of data points, nEA. Figure 5.6
provides a concise overview of the procedure used to identify data needs for the assessment of
small areas of elevated activity.
If residual radioactive material is found in an isolated area of elevated activity in addition to
residual radioactive material distributed relatively uniformly across the survey unit, the
information in Section 8.6.2 can be used to ensure that the total dose or risk does not exceed
the release criteria. If there is more than one area of elevated activity, a conservative method is
to include a separate term in the formula for each; however, this method may violate
assumptions used in the pathway modeling process if adjustments are not made to the
modeling. Specifically, if a receptor is assumed to be located directly above one area of
elevated activity for the full occupancy period, that same receptor cannot realistically also be
assumed to be located directly above a separate area of elevated activity for the full occupancy
period associated with the exposure scenario8. As an alternative, the dose or risk due to the
actual residual radioactive material can be modeled if there is an appropriate exposure pathway
model available. Note that these considerations generally apply only to Class 1 survey units,
since areas of elevated activity should not exist in Class 2 or Class 3 survey units.
When the detection limit of the scanning technique is very large relative to the DCGLemc, the
number of measurements estimated to demonstrate compliance using the statistical tests may
become unreasonably large. In this situation, evaluate the survey objectives and related considerations.
These considerations may include the survey design and measurement methodology, exposure
8 By default, RESRAD assumes that the receptor spends 50 percent of the time indoors and 25 percent of the time outdoors; without further adjustment, and if two areas are considered, the receptor would spend 100 percent of the time indoors and another 50 percent of the time outdoors, for a total of 150 percent of what would typically be assumed for the exposure scenario.
pathway modeling assumptions and parameter values used to determine the DCGLs, HSA conclusions concerning source terms and radionuclide distributions, and the results of scoping and characterization surveys. In most cases, the result of this evaluation is not expected to justify an unreasonably large number of measurements. Example 7 provides an example of how to determine whether additional data points are required to ensure the actual scan MDC is less than or equal to the required scan MDC.
Example 7: Example Determination Whether Additional Data Points Are Required
A Class 1 land area survey unit of 1,500 square meters (m²) is potentially affected by residual radioactive material consisting of cobalt-60 (60Co). The wide-area derived concentration guideline level value for 60Co is 110 becquerels/kilogram (Bq/kg; 3 picocuries/gram [pCi/g]),
and the scan detection capability for this radionuclide has been determined to be 150 Bq/kg
(4 pCi/g). The table below provides the derived concentration guideline level obtained using
the Elevated Measurement Comparison for different grid areas:
Grid Area (m²)     DCGLemc (Bq/kg)
1                  1,070
3                  480
10                 230
30                 160
100                130
300                120
1,000              120
3,000              110
10,000             110
Abbreviations: m = meter; DCGLemc = derived concentration guideline level obtained using the Elevated Measurement Comparison; Bq = becquerel; kg = kilogram.
Calculations indicate the number of data points needed for statistical testing is 27. The
distance between measurement locations for this number of data points and the given land
area is 8 m, as illustrated in the application of Equation 5-1:
L = √[A (survey unit) / (0.866 n)] = √[1,500 m² / (0.866 × 27)] = 8.0 m for a triangular grid
The grid area encompassed by a triangular sampling pattern of 8 m is approximately 55.4 m², as calculated using Equation 5-3:
A (grid area) = 0.866 L² = 0.866 (8.0 m)² = 55.4 m²
The DCGLemc for a grid area of 55.4 m² is determined by interpolation to be 150 Bq/kg:
160 Bq/kg + [(55.4 m² - 30 m²) / (100 m² - 30 m²)] × (130 Bq/kg - 160 Bq/kg) = 150 Bq/kg
The acceptable minimum detectable concentration (MDC) of the scan procedure needed to detect an area of elevated activity in a 55.4 m² area is therefore given by Equation 5-5:
Scan MDC (required) = DCGLemc = 150 Bq/kg
Because the detection capability of the procedure to be used (150 Bq/kg) is equal to or less
than the required Scan MDC, no additional data points are needed to demonstrate
compliance with the elevated measurement comparison criteria.
Example 8 provides another example of how to determine if additional data points are required
to ensure the actual scan MDC is less than or equal to the required scan MDC, including how to
calculate the number of required data points when the actual scan MDC is greater than the
required scan MDC.
Example 8: Example Determination Whether Additional Data Points Are Required
A Class 1 land area survey unit of 1,500 square meters (m²) is potentially affected by residual radioactive material consisting of cobalt-60 (60Co). The wide-area derived concentration guideline level for 60Co is 110 becquerels/kilogram (Bq/kg; 3 picocuries/gram [pCi/g]). The table below provides the derived concentration guideline level obtained using the Elevated Measurement Comparison for different grid areas:
Grid Area (m²)     DCGLemc (Bq/kg)
1                  1,070
3                  480
10                 230
30                 160
100                130
300                120
1,000              120
3,000              110
10,000             110
Abbreviations: m = meter; DCGLemc = derived concentration guideline level obtained using the Elevated Measurement Comparison; Bq = becquerel; kg = kilogram.
In contrast to Example 7, the scan detection capability for this radionuclide has been
determined to be 170 Bq/kg (4.6 pCi/g). Calculations indicate the number of data points
needed for statistical testing is 15. The distance between measurement locations for this
number of data points and the given land area is 10.7 m, as illustrated in the application of
Equation 5-1:
L = √[A (survey unit) / (0.866 n)] = √[1,500 m² / (0.866 × 15)] = 10.7 m for a triangular grid
The grid area encompassed by a triangular sampling pattern of 10.7 m is approximately 99.1 m², as calculated using Equation 5-3:
A (grid area) = 0.866 L² = 0.866 (10.7 m)² = 99.1 m²
The DCGLemc for a grid area of 99.1 m² is determined by interpolation to be 130 Bq/kg:
160 Bq/kg + [(99.1 m² - 30 m²) / (100 m² - 30 m²)] × (130 Bq/kg - 160 Bq/kg) = 130 Bq/kg
The required scan minimum detectable concentration (MDC) for that grid area is therefore
also 130 Bq/kg:
Scan MDC (required) = DCGLemc = 130 Bq/kg
Because the actual scan MDC of the procedure to be used (170 Bq/kg) is greater than the
required scan MDC, the data points obtained for the statistical testing may not be sufficient to
demonstrate compliance using the elevated measurement comparison. The grid area
corresponding to a DCGLemc of 170 Bq/kg is determined by interpolation to be 27 m²:
30 m² + [(170 Bq/kg - 160 Bq/kg) / (230 Bq/kg - 160 Bq/kg)] × (10 m² - 30 m²) = 27 m²
The number of samples required to account for areas of elevated activity (nEA) is calculated
using Equation 5-7:
nEA = A (survey unit) / Aea (grid area) = 1,500 m² / 27 m² = 56 measurements
The triangular grid spacing required to account for areas of elevated activity (LEA) is
calculated using Equation 5-8:
LEA = √[A (survey unit) / (0.866 nEA)] = √[1,500 m² / (0.866 × 56)] = 5.5 m for a triangular grid
The number of data points required increased from 15 to 56, and the grid spacing decreased
from 10.7 m to 5.5 m.
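The arithmetic in Example 8 can be strung together and checked with a short script. The sketch below uses linear interpolation on the tabulated DCGLemc values (numpy.interp) and reproduces the numbers above; the variable names, the rounding of L to 0.1 m before computing the grid area (matching the example), and the rounding up of nEA are assumptions made for this illustration.

    import math
    import numpy as np

    # DCGLemc table from the example: grid area (m2) versus DCGLemc (Bq/kg)
    grid_areas = np.array([1, 3, 10, 30, 100, 300, 1000, 3000, 10000], dtype=float)
    dcgl_emc = np.array([1070, 480, 230, 160, 130, 120, 120, 110, 110], dtype=float)

    area, n, scan_mdc = 1500.0, 15, 170.0   # survey unit area (m2), data points, actual scan MDC (Bq/kg)

    L = round(math.sqrt(area / (0.866 * n)), 1)                  # Equation 5-1: 10.7 m
    grid_area = 0.866 * L**2                                     # Equation 5-3: ~99.1 m2
    required_mdc = np.interp(grid_area, grid_areas, dcgl_emc)    # Equation 5-5: ~130 Bq/kg

    if scan_mdc > required_mdc:
        # Actual scan MDC is too high: find the grid area whose DCGLemc equals the scan MDC.
        # numpy.interp needs an increasing x-axis, so the table is reversed for this lookup.
        a_ea = np.interp(scan_mdc, dcgl_emc[::-1], grid_areas[::-1])   # ~27 m2
        n_ea = math.ceil(area / a_ea)                                  # Equation 5-7: 56
        l_ea = math.sqrt(area / (0.866 * n_ea))                        # Equation 5-8: ~5.6 m (5.5 m rounded down)
        print(L, round(grid_area, 1), round(float(required_mdc)), n_ea, round(l_ea, 1))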
5.3.6 Determining the Scan Area
The use of direct measurements or sampling in combination with separate scans of the area is
necessary when the scanning instrument and technique have sufficient detection capability to
identify areas of elevated concentrations of radioactive material, but insufficient detection
capability to quantify the average concentration of radioactive material in the survey unit. In
instances where the measurement method has sufficient detection capability to meet the MQOs
to both quantify the average concentration of radioactive material in the survey unit and identify
areas of elevated concentrations of radioactive material, a scan-only survey can be considered.
Similar in principle to a scan-only survey is a series of direct measurements that have the
detection capability to meet the MQOs to both quantify the average concentration of radioactive
material in the survey unit and identify areas of elevated concentrations of radioactive material.
5.3.6.1 Scan-Only Surveys
During scan-only surveys, a large number of discrete scan measurements are taken and
analyzed; this approach is greatly facilitated by the use of scan systems that automatically
record scan measurements and location. These systems typically utilize GPS or other position
determinations in conjunction with radiological measurements, with both the radiological and
locational data being automatically recorded. These techniques permit the convenient
accumulation, storage, and display of hundreds or thousands of scan data points for a survey
unit.
Scan-only surveys will likely require site-specific validation samples to ensure that the method
can reliably detect concentrations at the DCGLw under the conditions expected at the site. This
validation can be accomplished at any point in the RSSI process (post-remediation, if
remediation is performed). Consult with the regulatory agency for guidance on the level of effort needed
to validate scan-only surveys.
Scan-only surveys generally cover a much larger portion of the survey unit than traditional
discrete sampling or measurement. A similar concept is found in a series of direct
measurements, where the field of view of the direct measurements covers a statistically
significant portion of the survey unit (i.e., 10 percent or more).9
However, a scan-only approach should be used only for circumstances where the measurement
method has sufficient detection capability to meet the MQOs to both quantify the average
concentration of radioactive material in the survey unit and identify areas of elevated
concentrations of radioactive material. To ensure that this is the case, the scan MDC (for the
scan system) should be less than 50 percent of the DCGLw. The scan-only methodology will
require validation, which likely requires collecting some percentage of samples for laboratory
analysis to compare with results from the same location. Other MQOs should be met as well,
including the MQO for measurement method uncertainty at the DCGLw.
In general, when utilizing a scan-only survey approach, the anticipated measurement method
uncertainty is expected to be higher than traditional scan and sampling procedures. Therefore, a
maximum scan coverage (e.g., 100 percent) should always be achieved in Class 1 areas when
utilizing this approach. The percentage of Class 2 or Class 3 areas that should be scanned is
10 percent or the result using Equation 5-10, whichever is larger:

Scan Area = [(10 - Δ/σ) / 10] × 100%   (5-10)
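The relationship in Equation 5-10 is easy to apply in survey planning software. The sketch below is illustrative only; the lower bound of 10 percent follows the sentence above, and capping the result at 100 percent for very small relative shifts is an assumption added here.

    def scan_area_percent(relative_shift):
        """Percentage of a Class 2 or Class 3 survey unit to scan (Equation 5-10),
        floored at 10 percent and capped at 100 percent."""
        pct = (10.0 - relative_shift) / 10.0 * 100.0
        return min(100.0, max(10.0, pct))

    print(scan_area_percent(3.0))   # 70.0 percent
    print(scan_area_percent(9.5))   # 10.0 percent (the 10 percent floor applies)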
Scanning a greater percentage than that calculated above is always acceptable. When
performing scan-only surveys, the following must be considered and addressed in survey plans:
•	Perform quality control procedures, such as evaluating measurement method uncertainty by
performing replicate scans over a prescribed portion of the site and performing reference
standard checks at a prescribed frequency.
•	Evaluate the extent to which alpha and beta radiation in the surface may impact scan-only
survey results.
•	Determine the number and type of validation samples to establish a correlation between
scan-only and laboratory results.
5.3.6.2 Scanning and Sampling
When scanning is done in combination with direct measurements or sampling, the scanning
instrument and technique must have detection capabilities to meet the MQOs to identify areas
of elevated concentrations of radioactive material. This differs from the requirements for scan-
only surveys that must have detection capabilities to both identify areas of elevated
concentrations of radioactive material and quantify the average concentration of radioactive
material in the survey unit.
9 In the Multi-Agency Radiation Survey and Assessment of Materials and Equipment (MARSAME) Manual, a direct
measurement survey covering a statistically significant portion of the survey unit was referred to as an "in situ"
survey type; in MARSSIM, this survey type is incorporated into scan-only surveys.
The percentage of the area that needs to be scanned depends on the classification of the survey unit. For Class 1 survey units, 100 percent of the area should be scanned. The percentages of Class 2 or Class 3 areas are scanned according to Equation 5-10. Scanning a greater percentage than that calculated above for Class 2 or Class 3 areas is always acceptable.

The detection capability for scanning techniques used in Class 2 and Class 3 areas is not tied to the area between measurement locations as it is in a Class 1 area (see Section 5.3.5). The scanning techniques selected should represent the best reasonable effort based on the survey objectives. Structure surfaces are generally scanned for alpha-, beta-, and gamma-emitting radionuclides. In contrast, scanning for alpha or beta emitters for land area survey units is generally not considered effective because of problems with attenuation and media interferences. If one can reasonably expect to find any residual radioactive material, it is prudent to perform a judgment scanning survey.
5.3.7 Determining Survey Locations

Like the required scanning percentages, the determination of discrete survey locations for the direct measurements or the collection of samples depends on the classification of the survey unit. The method for determining survey locations for land areas and structure surfaces is described below.

5.3.7.1 Survey Locations for Discrete Measurements and Samples

A scale drawing of the survey unit is prepared, along with the overlying planar reference coordinate system or grid system. Any location within the survey area is thus identifiable by a unique set of coordinates. The maximum length, X, and width, Y, dimensions of the survey unit are then determined. Identifying and documenting a specific location for each measurement performed is an important part of an FSS to ensure that measurements can be reproduced if necessary. The reference coordinate system described in Section 4.9.5 provides a method for relating measurements to a specific location within a survey unit. Systems utilizing GPS technology and data logging software are widely available to identify and track survey dimensions, sampling locations, and locations associated with specific scan results.

Land Areas

Measurements and samples in Class 3 survey units and reference areas are usually taken at random locations. These locations are determined by generating sets of random numbers (two values, representing the X-axis and Y-axis distances). Random numbers can be obtained from mathematical tables, including Table I.11 in Appendix I, or generated by calculator or computer. Sufficient sets of numbers will be needed to identify the total number of survey locations established for the survey unit. Each set of random numbers is multiplied by the appropriate survey unit dimension to provide coordinates, relative to the origin of the survey unit reference grid pattern. Coordinates identified in this manner that do not fall within the survey unit area or that cannot be surveyed because of site conditions are replaced with other survey points determined in the same manner. Example 9 provides an example of a random sampling pattern.
Example 9: Random Sampling Pattern
In this example, eight data points were identified using the appropriate table (Table 5.2 or Table 5.3). The locations of these points were determined using the table of random numbers found in Appendix I, Table I.11.
[Figure: plan view of the survey unit showing the eight randomly located surface soil measurement/sampling locations, the survey unit boundary, an onsite fence, and a building; scale in feet (0-30). Sample coordinates: #1: 52E, 24N; #2: 28E, 2N; #3: 45E, 53N; #4: 47E, 5N; #5: 41E, 22N; #6: 0E, 44N; #7: 21E, 56N; #8: 35E, 63N.]
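The random coordinate pairs used in a pattern such as Example 9 can also be generated by computer, as noted in Section 5.3.7.1. The sketch below is a minimal illustration; the survey unit dimensions are assumed values, and in practice any point falling outside the survey unit or in an inaccessible location would be rejected and replaced in the same way.

    import random

    def random_locations(n, x_max, y_max, seed=None):
        """Generate n random (X, Y) coordinates by multiplying uniform random numbers
        by the maximum survey unit dimensions (Section 5.3.7.1)."""
        rng = random.Random(seed)
        return [(round(rng.random() * x_max, 1), round(rng.random() * y_max, 1))
                for _ in range(n)]

    # Illustrative only: eight locations in a survey unit roughly 65 ft (E) by 85 ft (N)
    for i, (x, y) in enumerate(random_locations(8, 65, 85, seed=1), start=1):
        print(f"#{i}: {x}E, {y}N")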

Class 2 areas are surveyed on a random-start systematic pattern. The number of calculated
survey locations, n, based on the statistical tests, is used to determine the spacing, L, of a
systematic pattern as specified in Equations 5-1 and 5-2.
After L is determined, a random start location for the survey pattern is identified, as described previously. Beginning at the random start location, a row of points is identified parallel to the X-axis at intervals of L.
For a triangular grid, a second row of points is then developed, parallel to the first row, at a
distance of 0.866 x L from the first row. Survey points along that second row are midway (on the
X-axis) between the points on the first row. This process is repeated to identify a pattern of
survey locations throughout the affected survey unit. If identified points fall outside the survey
unit or at locations that cannot be surveyed, additional points are determined using the random
process described above until the desired total number of points is identified.
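The random-start triangular pattern described above can be generated programmatically. The following Python sketch treats the survey unit as a simple rectangle for illustration; the function name is not part of MARSSIM, boundary handling is simplified, and points falling outside the actual survey unit would be discarded and replaced as described above.

    import random

    def triangular_grid(x_max, y_max, spacing, seed=None):
        """Random-start triangular grid: rows are 0.866 * L apart, and every other
        row is offset by half the spacing along the X-axis (Section 5.3.7.1)."""
        rng = random.Random(seed)
        x0 = rng.random() * spacing          # random-start X
        y0 = rng.random() * spacing          # random-start Y
        row_height = 0.866 * spacing
        points, row, y = [], 0, y0
        while y <= y_max:
            x = x0 + (spacing / 2.0 if row % 2 else 0.0)
            while x <= x_max:
                points.append((round(x, 1), round(y, 1)))
                x += spacing
            y += row_height
            row += 1
        return points

    # Illustrative values: 17 m spacing over a 60 m by 85 m rectangle (compare Example 10)
    print(len(triangular_grid(60, 85, 17, seed=2)))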
For Class 1 areas, a systematic pattern having dimensions determined in Section 5.3.5 is installed on the survey unit. The starting point for this pattern is selected at random, as
described above for Class 2 areas. The same process as described above for Class 2 areas
applies to Class 1. Example 10 provides an illustration of a triangular systematic pattern in an
outdoor Class 2 survey unit.
Example 10: Illustration of a Triangular Systematic Pattern in an Outdoor Class 2
Survey Unit
An example of a triangular survey pattern is shown below. In this example, the statistical test calculations estimate 20 samples (Table 5.3, α = 0.01, β = 0.05, Δ/σ > 3.0). The random-start coordinates were 27E, 53N. The grid spacing (L) was calculated using Equation 5-1.
L = √[5,100 m² / (0.866 × 20)] = 17 m
Two points were identified on a row parallel to the X-axis, each 17 meters (m) from the starting point. The subsequent rows were positioned 0.866 × L, or 15 m, from the initial row. This random-start triangular sampling process resulted in 21 sampling locations, one of which could not be sampled because of the building location, yielding the desired number of data points.
[Figure: plan view of the survey unit showing the random-start triangular sampling grid, with the starting point for the triangular sampling grid at 27E, 53N, the surface soil measurement locations, one measurement location that is not sampled (at the building), the survey unit boundary, and an onsite fence; scale in feet (0-30).]
Structure Surfaces

All structure surfaces for a specific survey unit are included on a single reference grid system for purposes of identifying survey locations. The same methods as described above for land areas are then used to locate survey points for all classifications of areas.

In addition to the survey locations identified for statistical evaluations and elevated measurement comparisons, data may be obtained from judgment locations that are selected because of unusual appearance, location relative to areas affected by residual radioactive material, high potential for residual radioactive material, general supplemental information, etc.
Data points selected based on professional judgment are not included with the data points from the random-start triangular grid for statistical evaluations; instead they are compared individually with the established DCGLs and conditions. Measurement locations selected on the basis of professional judgment cannot be considered representative of the survey unit, a necessary condition if the statistical tests described in Chapter 8 are used.
5.3.7.2 Survey Locations for Scans

Like the determination of the location of discrete measurements or samples, the determination of survey locations for scans depends on the classification of the survey unit.

Class 1 Areas

For Class 1 areas, scans are intended to detect small areas of elevated activity that are not detected by the measurements using the systematic pattern (Section 5.3.5). This is the reason for recommending 100 percent coverage for the scanning survey. One-hundred percent coverage means that the entire accessible surface area of the survey unit is covered by the field of view of the scanning instrument. If the field of view is 2 m wide, the survey instrument can be moved along parallel paths 2 m apart to provide 100 percent coverage. If the field of view of the detector is 5 centimeters (cm), the parallel paths should be 5 cm apart.
Class 2 Areas

Class 2 survey units have a lower probability for areas of elevated activity than Class 1 survey units, but some portions of the survey unit may have a higher potential than others. Judgment scanning surveys focus on the portions of the survey unit with the highest probability for areas of elevated activity. If the entire survey unit has an equal probability for areas of elevated activity, or the judgment scans do not cover the required scanning percentage of the area, systematic scans along transects of the survey unit or scanning surveys of randomly selected grid blocks are performed.

Class 3 Areas

Class 3 areas may be uniformly scanned for radiation emitted from the radionuclides of interest, or the scanning may be performed in areas with the greatest potential for residual radioactive material (e.g., corners, ditches, and drains) based on professional judgment and the objectives of the survey. Such recommendations are typically provided by a health physics professional with radiation survey experience. This provides a qualitative level of confidence that no areas of elevated activity were missed by the random measurements or that there were no errors made in the classification of the area. In some cases, a combination of these approaches may be the most appropriate.
5.3.8 Determining Investigation Levels

An important aspect of the FSS is the design and implementation of investigation levels. Investigation levels are radionuclide-specific levels of radioactive material used to indicate when additional investigations may be necessary. Investigation levels also serve as a quality control check to determine when a measurement process begins to get out of control. For example, a measurement that exceeds the investigation level may indicate that the survey unit has been
improperly classified (see Section 4.6), or it may indicate a failing instrument. Typically,
investigation levels are set as part of the DQO process.
When an investigation level is exceeded, the first step is to confirm that the initial measurement
or sample actually exceeds the particular investigation level. This may involve taking further
measurements to determine that the area and level of the elevated residual radioactive material
are such that the resulting dose or risk meets the release criteria. Rather than—or in addition
to—taking further measurements, the investigation may involve assessing the adequacy of the
exposure pathway model used to obtain the DCGLs and area factors, as well as the consistency
of the results obtained with the HSA and the scoping, characterization, and RAS surveys.
Depending on the results of the investigation actions, the survey unit may require
reclassification, remediation, or resurvey. Table 5.4 illustrates an example of how investigation
levels can be developed.
Table 5.4: Example FSS Investigation Levels

Class 1
  Flag direct measurement or sample result when: > DCGLemc, or > DCGLw and > a statistical parameter-based value
  Flag scanning measurement result when: > DCGLemc for the area bounded by four adjacent systematic grid measurement points used to determine the DCGLemc (when a traditional MARSSIM approach is utilized), or for the area bounded by an acceptable elevated area size (when a scan-only approach is utilized)

Class 2
  Flag direct measurement or sample result when: > DCGLw
  Flag scanning measurement result when: > DCGLw or > scan MDC

Class 3
  Flag direct measurement or sample result when: > a fraction of the DCGLw
  Flag scanning measurement result when: > DCGLw or > scan MDC

Abbreviations: DCGLemc is the derived concentration guideline level (DCGL) determined with the Elevated Measurement Comparison; DCGLw is the wide-area DCGL.
When determining an investigation level using a statistical-based parameter (e.g., standard deviation), one should consider survey objectives, underlying radionuclide distributions, and an
understanding of corresponding types (e.g., normal, lognormal, non-parametric), descriptors
(e.g., standard deviation, mean, median), population stratifications (i.e., subgroups), and other
prior survey and historical information. For example, a level might be arbitrarily established at
the mean + 3s, where s is the standard deviation of the survey unit, assuming a normal
distribution. A higher value might be used if locating discrete sources of higher activity was a
primary survey objective. By the time the FSS is conducted, survey units should be defined.
Estimates of the mean, variance, and standard deviation of the radionuclide activity levels within
the survey units should also be available.
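As an illustration of a statistical-parameter-based investigation level, the sketch below computes mean + 3s from a set of survey unit measurements and applies the Class 1 direct-measurement flagging logic of Table 5.4. The data values and the use of the sample standard deviation are assumptions for the example only.

    import statistics

    def class1_flags(measurements, dcgl_w, dcgl_emc):
        """Flag Class 1 direct measurements per Table 5.4: above the DCGLemc, or above
        the DCGLw and above a statistical-parameter-based value (here, mean + 3s)."""
        mean = statistics.mean(measurements)
        s = statistics.stdev(measurements)      # sample standard deviation (assumption)
        investigation_level = mean + 3.0 * s
        return [m for m in measurements
                if m > dcgl_emc or (m > dcgl_w and m > investigation_level)]

    # Illustrative data (Bq/kg); most results are well below a DCGLw of 110 Bq/kg
    data = [35, 42, 51, 38, 47, 44, 40, 55, 49, 36, 41, 118, 43, 39, 46]
    print(class1_flags(data, dcgl_w=110, dcgl_emc=480))   # [118]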
For a Class 1 survey unit, measurements above the DCGLw are not necessarily unexpected.
However, a measurement above the DCGLw at one of the discrete measurement locations
might be considered unusual if it were much higher than all of the other discrete measurements.
Thus, any discrete measurement that is both above the DCGLw and above the statistical-based
parameter for the measurements should be investigated further. Any measurement, either at a
discrete location or from a scan that is above the DCGLemc should also be flagged for further
investigation. When a traditional MARSSIM approach (scanning with direct measurements
and/or samples) is utilized, the DCGLemc should be established for the largest (worst case)
potential elevated measurement area (the area bounded by four sampling grid measurement
points). This largest potential elevated area is also the survey unit area divided by the number of
measurements or samples (for the systematic sampling grid). When a scan-only approach is
utilized, it is important that an appropriate size for a potential elevated area and associated
DCGLemc be established as a part of the DQO process and in agreement with the regulator.
In Class 2 or Class 3 areas, neither measurements above the DCGLw nor areas of elevated
activity are expected. Any measurement at a discrete location exceeding the DCGLw in these
areas should be flagged for further investigation. Because the survey design for Class 2 and
Class 3 survey units is not driven by the EMC, the scan MDC might exceed the DCGLw. In this
case, any indication of residual radioactive material during the scan would warrant further
investigation.
When it is not feasible to obtain a scan MDC below the DCGLw, it may be necessary to use the DCGLemc or an investigation level above the DCGLw in Table 5.4 for Class 2 and Class 3 areas, but the basis for doing so should be justified in survey planning documents. For example, where there is high uncertainty in the reported scan MDC, more conservative criteria would be warranted.
Similarly, data quality assessment (DQA) for scanning may warrant a more conservative flag, as would greater uncertainty from the HSA or other surveys on the size of potential areas of elevated activity. In some cases, it may even be necessary to agree in advance with the regulatory agency on which site-specific investigation levels will be used, if other than those presented in Table 5.4.
Because there is a low expectation for residual radioactive material in a Class 3 area, it may be
prudent to investigate any measurement exceeding even a fraction of the DCGLw. The level
selected in these situations should be commensurate with the potential exposures at the site,
the radionuclides of concern, and the measurement and scanning methods chosen. This level
should be set using the DQO Process during the survey design phase of the Data Life Cycle. In
some cases, the user may also wish to follow this procedure for Class 2 and even Class 1
survey units.
5.3.9 Developing an Integrated Survey Strategy

The final step in survey design is to integrate the survey techniques (Chapter 6) with the number of measurements, the measurement spacing, and the amount of scanning determined earlier in this chapter. This integration, along with the information provided in other portions of this manual, produces an overall strategy for performing the survey. The survey design may consist of scan-only, or a combination of scans with sampling or direct measurements. Table 5.5 provides a summary of the recommended survey coverage for structures and land areas. This survey coverage for different areas is the subject of this section.
Table 5.5: Recommended Survey Coverage for Structures and Land Areas

Class 1
  Scanning and direct measurements and/or sampling survey:
    Scanning: 100%
    Direct measurements or samples: number of data points from statistical tests (Sections 5.3.3 and 5.3.4); additional measurements may be necessary for small areas of elevated activity (Section 5.3.5)
  Scan-only survey:
    Scanning: 100%

Class 2
  Scanning and direct measurements and/or sampling survey:
    Scanning: 10-100%, systematic and judgment; "Scan Area" = [(10 - Δ/σ) / 10] × 100%
    Direct measurements or samples: number of data points from statistical tests (Sections 5.3.3 and 5.3.4)
  Scan-only survey:
    Scanning: 10-100%, systematic and judgment; "Scan Area" = [(10 - Δ/σ) / 10] × 100%

Class 3
  Scanning and direct measurements and/or sampling survey:
    Scanning: judgment
    Direct measurements or samples: number of data points from statistical tests (Sections 5.3.3 and 5.3.4)
  Scan-only survey:
    Scanning: "Scan Area" = [(10 - Δ/σ) / 10] × 100%; judgment

Abbreviation: Δ/σ represents the relative shift.
For surveys in which discrete measurements or samples are taken, random measurement
patterns are generally used for Class 3 survey units to ensure that the measurements are
independent and support the assumptions of the statistical tests. Systematic grids are used for
Class 2 survey units because there is an increased probability of small areas of elevated
activity. The use of a systematic grid allows the decision maker to draw conclusions about the
size of the potential areas of elevated activity based on the area between measurement
locations. The random starting point of the grid provides an unbiased method for obtaining
measurement locations to be used in the statistical tests. Class 1 survey units have the highest
potential for small areas of elevated activity, so the areas between measurement locations
might need to be adjusted to ensure that these areas can be detected by scanning techniques.
MARSSIM allows the use of both sampling (where a sample is collected and sent to an analytical laboratory, on-site or off-site) and direct measurements (fixed measurements taken in the field by in situ gamma spectroscopy or a beta scintillation meter, for example). It is important to consider the required MQOs for the survey and ensure that the measurement method chosen meets those criteria. Some direct measurement methods may not be appropriate for some radionuclides in land areas.
The objectives of the scanning surveys are different. Scanning is used to identify locations
within the survey unit that exceed the investigation level. These locations are marked and/or
receive additional investigations to determine the concentration, area, and extent of the residual
radioactive material.
Scanning measurements can also be used in place of the sampling or direct measurements
when the detection capability is sufficient and a large number of discrete scan measurements
are taken and analyzed; this approach is greatly facilitated by the use of scan systems that
automatically record scan measurements and location. These systems typically utilize GPS or
other position determinations in conjunction with radiological measurements, with both the
radiological and locational data being automatically recorded. These techniques permit the
convenient accumulation, storage, and display of hundreds or thousands of scan data points for
a survey unit. However, a scan-only approach should only be used for circumstances where the
scan MDC (for the scan system) is less than 50 percent of the DCGLw and other MQOs, such
as requirements for measurement method uncertainty, can be met. For scan-only surveys of
Class 2 or Class 3 survey units where the percentage of the area scanned is less than
100 percent, the survey must be designed so that the average concentration of radioactive material calculated from the survey data is an unbiased, representative estimate of the true mean
concentration in the survey unit. In the event the scan-only survey option is feasible for a site or
survey unit, the sampling function of the FSS would not be applicable.
In addition to the building and land surface areas described above, there are numerous other
locations where measurements and/or sampling may be necessary independent from the FSS.
Examples include items of equipment and furnishings, building fixtures, drains, ducts, and
piping. Many of these items or locations have both internal and external surfaces with potential
residual radioactive material. An approach to conducting or evaluating these types of surveys is
contained in the Multi-Agency Radiation Survey and Assessment of Materials and Equipment
(MARSAME) Manual (NRC 2009), which is a supplement to MARSSIM. Subsurface
measurements or sampling may also be necessary.
Special situations may be evaluated by judgment sampling and measurements. Data from such
surveys should be compared directly with a limit developed for the specific situation and
approved by the regulator.
Quality control measurements are recommended for all surveys, as described in Sections 4.8,
6.2, and 7.2. Also, some regulatory programs require removable activity measurements
(e.g., DOE requirements in DOE Order 458.1 [DOE 2011c], 10 CFR 835). These additional
measurements should be considered during survey planning.
5.3.9.1 Class 1 Areas
For Class 1 areas, scanning surveys are designed to detect small areas of elevated activity
above the DCGLw that are not detected by the measurements using the systematic pattern
(Section 5.3.7). For this reason, the measurement locations and the number of measurements
may need to be adjusted based on the sensitivity of the scanning technique (Section 5.3.5.1).
This is also the reason for recommending 100 percent coverage for the scanning survey.
As discussed in Section 5.3.6.1, scanning techniques can be used in lieu of discrete samples or
direct measurements when the scan MDC is less than 50 percent of the DCGLw, and the scan
coverage is 100 percent. Note that, in a statistical sense, a scan of 100 percent of a survey unit
constitutes a sample of 100 percent of the survey unit. Other MQOs need to be met, as well,
including the MQO for required measurement method uncertainty.
Locations of direct radiation above an investigation level are identified and evaluated. Results of
initial and followup direct measurements and sampling at these locations are recorded and
documented in the FSS report. For structure surfaces, measurements of total and (when
applicable) removable radioactive material are performed at locations identified by scans and at
previously determined locations (Section 5.3.7). Soil sampling or direct measurements are
performed at locations identified by scans and at previously determined locations
(Section 5.3.7).
The development of direct measurement or sample investigation levels for Class 1 areas should
establish a course of action for individual measurements that exceed the investigation level.
Because measurements above the DCGLw are not necessarily unexpected in a Class 1 survey
unit, additional investigation levels may be established to identify discrete measurements that
are much higher than the other measurements. Any discrete measurement that both is above
the DCGLw and exceeds a statistical-based parameter (e.g., three standard deviations above
the mean) should be investigated further (Section 5.3.8). Any measurement (direct
measurement, sample, or scan) that exceeds the DCGLemc should be flagged for further
investigation.
The results of the investigation and any additional remediation that was performed should be
included in the FSS report. Data are reviewed as described in Section 8.2.2, additional data are
collected as necessary, and the final complete data set evaluated as described in Section 8.3
and Section 8.4.
5.3.9.2 Class 2 Areas
Scanning surveys in Class 2 areas are also primarily performed to find areas of elevated activity
not detected by the measurements using the systematic pattern. However, the number and
location of measurements are not adjusted based on sensitivity of the scanning technique, and
scanning is performed in portions of the survey unit. The level of scanning effort should be
proportional to the potential for finding areas of elevated activity based on the conceptual site
model developed and refined from Section 3.6.4. In other words, the farther the expected
residual radioactive material in the survey unit is from the DCGLw in units of uncertainty (the
larger the A/a), the less scanning is needed. Surface scans are performed over 10-100 percent
of structure surfaces or open land surfaces, as calculated in Equation 5-10. A larger portion of
the survey unit would be scanned in Class 2 survey units that have residual radioactive material
close to the release criteria, but for survey units that are closer to background, scanning a smaller portion of the survey unit may be appropriate.
As discussed in Section 5.3.6.1, scanning techniques for Class 2 survey units might be used in
lieu of discrete samples or direct measurements when the scan MDC is less than 50 percent of
the DCGLw and the scan coverage is between 10 and 100 percent. Note that, in a statistical
sense, a scan of 10-100 percent of a survey unit constitutes a sample of 10-100 percent of the
survey unit. Other MQOs need to be met, as well, including the MQO for required measurement method uncertainty. The area scanned should be selected in an unbiased manner.

Locations of scanning survey results greater than the investigation level are identified and investigated. If small areas of elevated activity are confirmed by this investigation, all or part of the survey unit should be reclassified as Class 1 and the survey strategy for that survey unit redesigned accordingly. Investigation levels for Class 2 areas should establish levels for investigation of individual measurements close to but less than the DCGLw. Investigation levels for Class 2 areas should also establish a course of action for individual measurements that exceed or approach the DCGLw. The results of the investigation of the positive measurements and basis for reclassifying all or part of the survey unit as Class 1 should be included in the FSS report.

The results of the investigation should be included in the FSS report. Data are reviewed as described in Section 8.2.2, additional data are collected as necessary, and the final complete data set evaluated as described in Section 8.3 and Section 8.4.
5.3.9.3 Class 3 Areas

Class 3 areas have the lowest potential for areas of elevated activity. Locations exceeding the scanning survey investigation level should be flagged for further investigation. If the presence of residual radioactive material occurring at concentrations greater than a small fraction of the DCGLw is identified, reevaluation of the classification of the survey unit should be performed.

As discussed in Section 5.3.6.1, scanning techniques for Class 3 survey units can be used in lieu of sampling and statistical testing when the scan MDC is less than 50 percent of the DCGLw. Other MQOs need to be met, as well, including the MQO for required measurement method uncertainty.

Sampling or direct measurements are performed at randomly selected locations (Section 5.3.7). Survey results are tested for compliance with DCGLs, and additional data are collected and tested as necessary. For structure surfaces, measurements of total and (when applicable) removable radioactive material are performed at the locations identified by the scans and at the randomly selected locations that are chosen in accordance with Section 5.3.7.

Investigation levels for Class 3 areas should be established to identify areas of elevated activity that may indicate the presence of residual radioactive material. Because there is a low expectation for residual radioactive material in a Class 3 area, it may be prudent to investigate any measurement exceeding even a fraction of the DCGLw. The investigation level selected will depend on the site, the radionuclides of concern, and the measurement and scanning methods chosen. This level should be commensurate with the potential exposures and should be determined using the DQO Process during survey planning. In some cases, the user may wish to follow this procedure for Class 2 survey units.

The data are tested relative to the preestablished criteria. If additional data are needed, they should be collected and evaluated as part of the entire data set. Identification of residual radioactive material suggests that the area may be incorrectly classified. If so, a reevaluation of
the Class 3 area classification should be performed and, if appropriate, all or part of the survey
unit should be resurveyed as a Class 1 or Class 2 area.
The results of the investigation of the measurements that exceed the investigation level and the
basis for reclassifying all or part of the survey unit as Class 1 or Class 2 should be included in
the FSS report.
As discussed in Section 5.3.8, investigation levels are determined and used to indicate when
additional investigations may be necessary or when a measurement process begins to get out
of control. The results of all investigations should be documented in the FSS report, including
the results of scan surveys that may have potentially identified areas of elevated direct radiation.
5.3.10	Evaluating Survey Results
Chapter 8 describes detailed procedures for evaluating survey results. After data are converted
to the same units as the DCGL, the process of comparing the results to the DCGLs and
objectives begins. Individual measurements and sample concentrations are first compared to
DCGL levels for evidence of small areas of elevated activity and not to determine if
reclassification is necessary. Additional data or additional remediation and resurveying may be
necessary. Data are then evaluated using statistical methods to determine if they exceed the
release criteria. If the release criteria have been exceeded or if results indicate the need for
additional data points, appropriate further actions will be determined by the site management
and the regulatory agency. The scope of further actions should be agreed upon and developed
as part of the DQO Process before the survey begins (Appendix D). Finally, the results of the
survey are compared with the DQOs established during the planning phase of the project. Note
that DQOs may identify a need for a report of the evaluation of removable radioactive material
resulting from the analysis of smears. These results may be used to satisfy regulatory
requirements or to evaluate the need for additional ALARA procedures.
5.3.11	Documentation
Documentation of the FSS should provide a complete and unambiguous record of the
radiological status of the survey unit relative to the established DCGLs. In addition, sufficient
data and information should be provided to enable an independent re-creation and evaluation at
some future time. Much of the information in the FSS report will be available from other site
remediation documents; however, to the extent practicable, this report should be a stand-alone
document with minimal information incorporated by reference. The report should be independently reviewed (see Section 8.7) and should be approved by a designated person (or persons) capable of evaluating all aspects of the report before release, publication, or
distribution. Example 11 includes an example of a final status survey checklist, including survey
preparations, survey design, conduct of surveys, and evaluation of survey results.
Example 11: Example Final Status Survey Checklist
Survey Preparations
	 Ensure that residual radioactive material limits have been determined for the
radionuclides present at the site, typically performed during earlier surveys associated
with the release process.
	 Identify the radionuclides of concern. Determine whether the radionuclides of concern
exist in background.
	 Segregate the site into Class 1, Class 2, and Class 3 areas, based on the presence of
potential residual radioactive material.
	 Identify the survey units.
	 Select representative reference (background) areas for both indoor and outdoor
survey areas. Reference areas are selected from non-impacted areas and—
	 are free of residual radioactive material from site operations
	 exhibit similar physical, chemical, and biological characteristics of the survey
area
	 have similar construction, but have no history of radioactive operations
	 Select measurement method, based on the required Measurement Quality Objectives
(MQOs).
	 Determine minimum detectable concentrations (MDCs; select instrumentation
based on the radionuclides present) and match between instrumentation and
derived concentration guideline levels (DCGLs)—the selected instruments
should be capable of detecting the radionuclides of concern at less than
50 percent of the DCGLs.
	 Determine measurement method uncertainty and compare it to the required measurement method uncertainty.
	 Determine ruggedness, specificity, and range and compare to requirements.
	 Prepare the area if necessary—clear and provide access to areas to be surveyed.
	 Establish reference coordinate systems (as appropriate).
Survey Design
	 Enumerate Data Quality Objectives (DQOs) and MQOs: State the objective of the
	survey, state the null and alternative hypotheses, specify the acceptable decision
error rates (Type I [α] and Type II [β]) and requirements for MDC, measurement
method uncertainty, ruggedness, specificity, and range.
	 Specify sample collection and analysis procedures.
	 Determine numbers of data points for statistical tests, depending on whether the
radionuclide is present in background. Alternatively, design a scan-only survey using
automated equipment recording both data and location.
	 Specify the number of samples/measurements to be obtained, if applicable.
	 Evaluate the power of the statistical tests to determine whether the number of
samples is appropriate.
	 Ensure that the sample size is sufficient for detecting areas of elevated
activity.
	Add additional samples/measurements for quality control and to allow for
possible loss.
	 Establish the percentage of the survey unit to be surveyed by scanning.
	 Specify sampling locations, if appropriate.
	 Specify areas and percentage of areas subject to scanning survey.
	 Provide information on the survey measurement method.
	 Specify methods of data reduction and comparison of survey units to reference areas.
	 Provide quality control procedures and Quality Assurance Project Plan (QAPP) for
ensuring validity of survey data:
	 properly calibrated instrumentation
	 necessary replicate, reference, and blank measurements
	 comparison of field measurement results to laboratory sample analyses
	 Document the survey plan (e.g., QAPP, standard operating procedures [SOPs], etc.)
Conducting Surveys
	 Perform reference (background) area measurements and sampling.
	 Conduct survey activities:
	 Perform surface scans of the Class 1, Class 2, and Class 3 areas.
	 Conduct surface activity measurements and sampling at previously selected
sampling locations, if applicable.
	 Conduct additional direct measurements and sampling at locations based on
professional judgment.
	 Perform and document any necessary investigation activities, including survey unit
reclassification, remediation, and resurvey.
	 Document measurement and sample locations; provide information on measurement
system MDC and measurement method uncertainty.
	 Document any observations, abnormalities, and deviations from the QAPP or SOPs.
Evaluating Survey Results
	 Review DQOs and MQOs.
	 Perform data reduction on the survey results.
	 Conduct a preliminary data review.
	 Select the statistical test(s).
	Verify the assumptions of statistical tests.
	 Compare survey results with regulatory DCGLs:
	 Conduct an elevated measurement comparison, if appropriate.
	 Determine the area-weighted average, if appropriate.
	 Conduct Wilcoxon Rank Sum or Sign tests, if appropriate.
	 Conduct quantile test or retrospective power analysis, if appropriate.
	 Conduct Upper Level Comparison, if appropriate.
	 Prepare FSS report.
	 Obtain an independent review of the report.
6 FIELD MEASUREMENT METHODS AND INSTRUMENTATION
6.1 Introduction
"Measurement" is used in the Multi-Agency Radiation Survey and Investigation Manual
(MARSSIM) to mean (1) the act of using a detector to determine the level or quantity of
radioactive material on a surface or in a sample of material removed from a medium being
evaluated, or (2) the quantity obtained by the act of measuring.1 Three methods are available
for collecting radiation data while performing a survey: direct measurements, scanning, and
sampling. This chapter discusses direct measurement methods, scanning, and instrumentation.
The collection and analysis of media samples are presented in Chapter 7. Information on the
operation and use of individual field and laboratory instruments is provided in Appendix H.
Total surface activities, removable surface activities, and radionuclide concentrations in various
environmental media are the radiological parameters typically determined using field
measurements and laboratory analyses. Certain radionuclides or radionuclide mixtures may
necessitate the measurement of alpha, beta, and gamma radiations. In addition to assessing
each survey unit as a whole, any small areas of elevated activity should be identified to the
extent practicable and their extent and activities determined. Due to numerous detector
requirements, multiple measurement methods (survey technique and instrument combination)
may be needed to adequately measure all of the parameters required to satisfy the release
criteria or meet all the objectives of a survey.
Selecting an appropriate measurement method requires evaluation of both Data Quality
Objectives (DQOs) and Measurement Quality Objectives (MQOs). Instruments should be stable
and reliable under the environmental and physical conditions where they are used, and their
physical characteristics (size and weight) should be compatible with the intended application.
Numerous commercial firms offer a wide variety of instruments appropriate for the radiation
measurements described in this manual. These firms can provide thorough information
regarding capabilities, operating characteristics, limitations, etc., of specific equipment.
If the available field measurement methods do not achieve the MQOs, laboratory methods
discussed in Chapter 7 are typically used. There are certain radionuclides that are difficult to
measure at some derived concentration guideline levels (DCGLs) typically encountered in situ
using current state-of-the-art instrumentation and techniques because of the types, energies,
and abundances of their radiations. Examples of such radionuclides include very low-energy,
pure beta emitters such as tritium (3H) and nickel-63 (63Ni) and low-energy photon emitters such
as iron-55 (55Fe) and iodine-125 (125I). Pure alpha emitters dispersed in soil or covered with some
absorbing layer may not be measurable, because alpha radiation will not penetrate through the
media or covering to reach the detector. A common example of such a condition would be
thorium-230 (230Th) surface contamination covered by paint, dust, oil, or moisture. The
U.S. Nuclear Regulatory Commission (NRC) report NUREG-1507 (NRC 1997a) provides
information on the extent to which these surface conditions may affect detection capability. In
1 MARSSIM uses the word "should" as a recommendation, not as a requirement. Each recommendation in this
manual is not intended to be taken literally and applied at every site. MARSSIM's survey planning documentation will
address how to apply the process on a site-specific basis.
1	such circumstances, the survey design will usually rely on sampling and laboratory analysis to
2	measure residual activity levels. Appendix E provides information on using a ranked set
3	sampling procedure to reduce sampling requirements for hard-to-detect radionuclides.
4	Section 6.2 includes a discussion of DQOs and MQOs. Two important MQOs, detection
5	capability and measurement uncertainty, are covered in more detail in Sections 6.3 and 6.4,
6	respectively. Section 6.5 discusses the selection of a service provider to perform field data
7	collection activities. The selection of a measurement method is discussed in Section 6.6.
8	Section 6.7 includes information on the data conversion needed to make comparisons with the
9	applicable DCGLs. Radon measurements are covered in Section 6.8. Section 6.9 includes
10	information about special equipment.
11	6.2 Data Quality Objectives
12	The third step of the DQO Process (EPA 2006c) involves identifying the data needs for a
13	survey. One decision that can be made at this step is the selection of field measurement
14	methods that meet the MQOs or determining that sample collection and subsequent laboratory
15	analysis is required.
16	6.2.1 Identifying Data Needs for Field Measurement Methods
17	The decision maker and the survey planning team need to identify the data needs for the survey
18	being performed, including the following:
19	• type of measurements to be performed (Chapter 5)
20	• radionuclide(s) of interest (Section 4.5)
21	• number of direct measurements to be performed (Sections 5.3.3-5.3.4)
22	• area of survey coverage for surface scans based on survey unit classification
23	(Section 5.3.6)
24	• type and frequency of field QC measurements to be performed (Section 4.8)
25	• standard operating procedures (SOPs) to be followed or developed (Chapter 6)
26	• measurement method uncertainties (Section 6.4)
27	• detection capabilities for each radionuclide of interest (Section 6.3)
28	• cost of the measurement methods being evaluated (both cost per measurement and total
29	cost) (Appendix H)
30	• necessary turnaround time (a potential health and safety concern for situations involving
31	excavations)
32	• specific background for the radionuclide(s) of interest (Section 4.5)
1	• DCGL for each radionuclide of interest (Section 4.10)
2	• measurement documentation requirements
3	• measurement tracking requirements
4	Some of this information will be supplied by subsequent steps in the DQO process, and several
5	iterations of the process may be needed to identify all of the data needs. Consulting with a
6	health physicist or radiochemist may be necessary to properly evaluate the information before
7	deciding between field measurement methods or sampling followed by laboratory analytical
8	methods to perform the survey. Many surveys will involve a combination of field measurements
9	and sampling methods to demonstrate compliance with the release criteria.
10	6.2.2 Measurement Performance Indicators
11	Measurement performance indicators are used to evaluate the performance of the
12	measurement method. These indicators describe how the measurement method is performing
13	to ensure the survey results are of sufficient quality to meet the survey objectives.
14	6.2.2.1 Background Measurements/Blanks
15	Background measurements are direct measurements or scans of materials with little or no
16	radioactive material, other than that present in the natural background of the material; or the
17	response of the instrument to ambient radiation when the instrument is moved away from the
18	surface being surveyed. These measurements are performed to determine whether the
19	measurement process introduces any increase in instrument signal rate that could impact the
20	measurement method detection capability. Background measurements should be representative
21	of all measurements performed using a specific measurement method (i.e., combination of
22	instrumentation and measurement technique). When practical, the background measurements
23	should consist of the same or equivalent material(s) as the area being surveyed.
24	Background measurements typically are performed before and after a series of measurements
25	to demonstrate the measurement method was performing adequately throughout the survey. At
26	a minimum, background measurements should be performed at the beginning and end of each
27	shift. When large quantities of data are collected (e.g., scanning measurements) or there is an
28	increased potential for radionuclide contamination of the instrument (e.g., removable or airborne
29	radionuclides), background measurements may be performed more frequently. In general,
30	background measurements should be performed often enough that, if a problem is identified, it
31	remains practical to repeat the measurements made since the previous background check.
32	A sudden change in the measured background indicates a condition requiring immediate
33	attention. Sudden changes can be caused by the introduction of a radionuclide, a change in
34	ambient background, instrument instability, or contamination of the detector. Gradual changes in
35	the measured background indicate a need to inspect all survey areas for sources of radioactive
36	material. Gradual buildup of removable radionuclides over time or instrument drift and
37	deterioration can result in slowly increasing background measurements. High variability in
38	background measurements can result from instrument instability or improper classification
1	(i.e., high-activity and low-activity areas combined into a single survey unit). It is important to
2	correct any problems with blanks to ensure that the detection capability (see Section 6.3) is not
3	compromised.
4	If smears or swipes, described in more detail in Section 6.6.1.4, are used to estimate the
5	amount of removable radioactive material on the surface, measurement of an unused smear, or
6	blank, provides a background measurement of the instrument used to test the smears.
7	6.2.2.2 Replicate Measurements
8	Replicate measurements are two or more measurements performed at the same location or on
9	the same sample that are performed primarily to provide an estimate of the random uncertainty
10	for the measurement method. The reproducibility of measurement results should be evaluated
11	by replicates to establish this component of measurement uncertainty (see Section 6.4).
12	Replicates typically are performed at specified intervals during a survey (e.g., 5 percent of all
13	measurements or once per day) and should be employed to evaluate each batch of data used
14	to support a decision (e.g., one replicate per survey unit). For scan-only surveys, where
15	decisions are made based on logged and geolocated measurements, typically 5 percent of all
16	measurements are replicated (e.g., 5 percent of the scanned area is scanned twice).
17	Estimates of random uncertainty exhibit a range of values and depend in part on the surface
18	being measured and the activity level. Small changes in the random uncertainty are expected,
19	and the acceptable range of variability should be established before initiating data collection
20	activities. The main causes for high random uncertainty include problems with repeating
21	measurements on irregular surfaces, the surface being measured, counting statistics when the
22	activity levels are low, and instrument contamination.
23	6.2.2.3 Spikes and Standards
24	Spikes and standards are materials with known composition and amounts of radioactive
25	material; they are used to evaluate bias in the measurement method and typically performed
26	periodically during a survey (e.g., 5 percent of all measurements or once per day). When spikes
27	and standards are available, they should be used to evaluate each batch of data used to
28	support a release decision (i.e., at least one spike or standard per survey unit).
29	Tracking results of measurements with known activity can provide an indication of the
30	magnitude of the systematic uncertainty or drift of the measurement system. In general, activity
31	levels near the DCGLs (or discrimination limits in Scenario B) will provide adequate information
32	on the performance of the measurement system.
33	6.2.3 Instrument Performance Indicators
34	Evaluating instrument performance indicators provides information on the operation of the
35	instruments and how they are performing.
1	6.2.3.1 Performance Tests
2	Performance tests should be carried out periodically and after any maintenance to ensure that
3	the instruments continue to meet performance requirements for measurements. An example of
4	a performance test is a test for response time. Performance requirements should be met as
5	specified in the applicable sections of the American National Standards Institute (ANSI)
6	publications ANSI N323AB (ANSI 2013), ANSI N42.17A (ANSI 2004), and ANSI N42.17C (ANSI
7	1990). These tests may be conducted as part of the calibration procedure.
8	6.2.3.2 Functional Tests
9	Functional tests should be performed before initial use of an instrument and after periods when
10	the instrument was stored for a relatively long time or transported over a long distance. These
11	functional tests should include—
12	• general condition
13	• battery condition
14	• verification of current calibration (i.e., check to see that the date due for calibration has not
15	passed)
16	• source and background response checks (and other tests as applicable to the instrument)
17	• constancy check
18	The effects of environmental conditions (temperature, humidity, etc.) and interfering radiation on
19	an instrument should be established before use. The performance of functional tests should be
20	appropriately documented. This may be as simple as a checklist on a survey sheet, or it may
21	include more detailed statistical evaluation, such as a chi-square test (Gilbert 1987).
22	6.2.3.3 Instrument Background
23	All radiation detection instruments have a background response, even in the absence of a
24	sample or radiation source. Inappropriate background correction will result in measurement
25	error and increase the uncertainty of data interpretation.
26	6.2.3.4 Efficiency Calibrations
27	Knowing the detector efficiency is critical for converting the instrument response to activity (see
28	MARSSIM Section 6.7, Multi-Agency Radiation Survey and Assessment of Materials and
29	Equipment [MARSAME] Section 7.8.2.2, and Multi-Agency Radiological Laboratory Analytical
30	Protocols [MARLAP] Chapter 16). Routine performance checks may be used to demonstrate
31	that the system's operational parameters are within acceptable limits, and these measurements
32	typically are included in the assessment of systematic uncertainty. The system's operational
33	parameters may be tracked using control charts.
1	6.2.3.5 Energy Calibrations (Spectrometry Systems)
2	Spectrometry systems identify radionuclides based on the energy of the detected radiations. A
3	correct energy calibration is critical to accurately identify radionuclides. An incorrect energy
4	calibration may result in misidentification of peaks or failure to identify radionuclides present.
5	6.2.3.6 Peak Resolution and Tailing (Spectrometry Systems)
6	The shape of the full energy peak is important for identifying radionuclides and quantifying their
7	activity with spectrometry systems. Poor peak resolution and peak tailing may result in larger
8	measurement uncertainty or in failure to identify the presence of peaks based on shape.
9	Consistent problems with peak resolution indicate the presence of an analytical bias.
10	6.2.3.7 Voltage Plateaus (Proportional Counters, Geiger-Mueller Detectors)
11	The accuracy of results using a proportional counter or Geiger-Mueller (GM) detector can be
12	affected if the system is not operated with its detector's high voltage adjusted such that it is on a
13	stable portion of the operating plateau.
14	6.2.3.8 Self-Absorption, Backscatter, and Crosstalk
15	Alpha and beta measurement results can be affected through self-absorption and backscatter.
16	Measurement systems using an electronic discriminator (e.g., gas flow proportional detectors)
17	that simultaneously detect alpha and beta particles can be affected by crosstalk
18	(i.e., identification of alpha particles as beta particles and vice versa). Accurate differentiation
19	between alpha and beta activity depends on the assessment and maintenance of information on
20	self-absorption and crosstalk.
21	6.3 Detection Capability
22	The detection capability (sometimes referred to as sensitivity) of a measurement system refers
23	to a radiation level or quantity of radioactive material that can be measured or detected with
24	some known or estimated level of confidence. This quantity is a function of both the
25	instrumentation and the technique or procedure being used.
26	The primary parameters that affect a measurement system's detection capability are the
27	background count rate, the instrument's detection efficiency, and the counting time interval.
28	When making field measurements, the detection capability will usually be poorer than what can be
29	achieved in a laboratory due to increased background and, often, significantly lower detection
30	efficiency. It is often impossible to guarantee that pure alpha emitters can be detected in situ,
31	because the weathering of aged surfaces will often completely absorb the alpha emissions.
32	NUREG-1507 (NRC 1997a) contains data on many of the parameters that affect detection
33	efficiencies in situ, such as absorption, surface smoothness, and particulate radiation energy.
34	6.3.1 Detection Capability for Direct Measurements
35	Prior to performing field measurements using scalers, an investigator must evaluate the
36	detection capability of the equipment proposed for use to ensure that levels below the DCGL
37	can be detected. After a direct measurement has been made, it is then necessary to determine
whether the result can be distinguished from the instrument background response of the
measurement system. The terms that are used in this manual to define detection capability for
fixed point counts and sample analyses are—
•	Critical level: The critical level (Lc) is the level at which there is a statistical probability (with a
predetermined confidence) of correctly identifying a measurement as greater than
background.
•	Detection limit: The detection limit (LD) is the net response level that can be expected to be
seen with a detector with a fixed level of confidence.
•	Minimum detectable concentration: The minimum detectable concentration (MDC) is the a
priori activity concentration that a specific instrument and technique has a specified
probability (typically 95 percent) of producing a net count (or count rate) above the critical
level. When stating the detection limit of an instrument, this value should be used. The MDC
is the detection limit multiplied by an appropriate conversion factor to give units of activity.
The following discussion provides an overview of the derivation contained in the well-known
publication by Currie (1968) followed by a description of how the resulting formulas should be
used. Publications by Currie (1968) and Altshuler and Pasternack (1963) provide details of the
derivations involved. The two parameters of interest for a detector system with a background
response greater than zero are—
1.	The critical level is the lower bound on the 95 percent detection interval defined for LD and is
the level at which there is a 5 percent chance of calling a background value "greater than
background." This value should be used when counting samples or making direct radiation
measurements. Any response above this level should be considered as above background
(i.e., a net positive result). This will ensure 95 percent detection capability for LD.
2.	The detection limit is the net response level, in counts, that can be expected to be seen with
a detector with a fixed level of confidence, which is assumed to be 95 percent.
Assuming that a system has a background response, and that random uncertainties and
systematic uncertainties are accounted for separately, these parameters can be calculated
using Poisson statistics. For these calculations, two types of decision errors should be
considered. A Type I error occurs when a detector response is considered to be above
background when, in fact, only background radiation is present. A Type II error occurs when a
detector response is considered to be background when, in fact, radiation is present at levels
above background. The probability of a Type I error is referred to as α (alpha) and is associated
with Lc; the probability of a Type II error is referred to as β (beta) and is associated with LD.
Figure 6.1 graphically illustrates the relationship of these terms with respect to each other and
to a normal background distribution.2
2 Note that the values of α and β chosen here are for the detection hypothesis test and are always chosen to be 5%
for comparability purposes. These α and β values are separate and distinct from the values of α and β chosen by
the planning team for use in designing site surveys.
[Figure 6.1 (graphic not reproduced): probability plotted against net counts, showing the background distribution and the distribution centered at the detection limit, with variance σ² = B + LD.]
B = Background Counts (mean)
Lc = Critical Level (net counts above background)
LD = Detection Limit (net counts above background)
α = Probability of Type I Error
β = Probability of Type II Error
Figure 6.1: Graphically Represented Probabilities for Type I and Type II Errors in
Detection Capability for Instrumentation with a Background Response
If α and β are assumed to be equal, the variance (σ²) of all measurement values is assumed to
be equal to the values themselves. If the background of the detection system is not well known,
then the critical level and the detection limit can be calculated by using the following formulas:

Lc = k√(2B)     (6-1)

LD = k² + 2k√(2B)     (6-2)

where Lc is the critical level (counts), LD is the detection limit (counts), k is the Poisson
probability sum for α and β (assuming α and β are equal), and B is the number of background
counts that are expected to occur while performing an actual measurement.

The curve to the left in Figure 6.1 is the background distribution. The result is a Poisson
distribution with a mean equal to the number of background counts, B, and a variance, σ², equal
to B. Note that the distribution accounts only for the expected statistical variation due to the
stochastic nature of radioactive decay. Currie assumed "paired blanks" when deriving the above
stated relationships (Currie 1968), which is interpreted to mean that the sample and background
count times are the same.
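As an illustrative aid (not part of the manual's formal guidance), the following minimal Python sketch evaluates Equations 6-1 and 6-2 for an assumed number of background counts B; the value k = 1.645 corresponds to α = β = 0.05.

import math

def critical_level(B, k=1.645):
    # Equation 6-1: Lc = k * sqrt(2B), for paired blanks (equal count times)
    return k * math.sqrt(2.0 * B)

def detection_limit(B, k=1.645):
    # Equation 6-2: LD = k**2 + 2k * sqrt(2B); Equations 6-3 and 6-4 round k**2 up to 3
    return k ** 2 + 2.0 * k * math.sqrt(2.0 * B)

B = 40  # assumed background counts during the measurement interval
print(f"Lc = {critical_level(B):.1f} counts, LD = {detection_limit(B):.1f} counts")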
If values of 0.05 for both α and β are selected as acceptable, then k = 1.645 (from Appendix I,
Table I.1), and Equations 6-1 and 6-2 can be written as—

Lc = 2.33√B     (6-3)

LD = 3 + 4.65√B     (6-4)

Note: In Currie's derivation, the constant factor of 3 in the LD formula was stated
as being 2.71, but since that time it has been shown (Brodsky 1992) and
generally accepted that a constant of 3 is more appropriate. If the sample count
times and background count times are different, a slightly different formulation is
used.

The MDC value should be used when stating the detection capability of an instrument. Again,
this value is used before any measurements are made and is used to estimate the level of
activity that can be detected using a given measurement method.

For an integrated measurement over a preset time, the MDC can be obtained from
Equation 6-4 by multiplying by the factor C. This factor is used to convert from counts to
concentration, as shown in Equation 6-5:

MDC = C × (3 + 4.65√B)     (6-5)
The total detection efficiency and other constants or factors represented by the variable C are
usually not truly constants, as shown in Equation 6-5. It is likely that at least one of these
factors will have a certain amount of variability associated with it, which may or may not be
significant. These varying factors are gathered together into the single constant, C, by which the
net count result will be multiplied when converting the final data. If C varies significantly between
measurements, then it might be best to select a value, C', from the observed distribution of C
values that represents a conservative estimate. For example, a value of C might be selected to
ensure that at least 95 percent of the possible values of C are less than the chosen value, C'.
The MDC calculated in this way helps assure that the survey results will meet the DQOs. This
approach for including uncertainties into the MDC calculation is recommended in both
NUREG/CR-4007 (NRC 1984) and Appendix A to ANSI N13.30 (ANSI 1996). Underestimating
an MDC can have adverse consequences, especially if activity is later detected at a level above
the stated MDC.
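A minimal Python sketch of this selection follows; the observed C values are hypothetical, and the conservative value C' is taken as the value at or above the 95th-percentile rank of the sorted list.

import math

# Hypothetical observed conversion factors C (Bq/m2 per net count) from repeated measurements
observed_C = [52.1, 54.7, 55.6, 56.8, 57.2, 58.3, 59.9, 60.4, 61.0, 63.5]

# Choose C' so that at least 95 percent of the observed C values fall at or below it
rank = max(0, math.ceil(0.95 * len(observed_C)) - 1)
C_prime = sorted(observed_C)[rank]

B = 40  # assumed background counts for the measurement method
conservative_MDC = C_prime * (3 + 4.65 * math.sqrt(B))  # Equation 6-5 with the conservative C'
print(f"C' = {C_prime} Bq/m2 per count, conservative MDC = {conservative_MDC:.0f} Bq/m2")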
From a conservative point of view, it is better to overestimate the MDC for a measurement
method. Therefore, when calculating MDC and Lc values, a measurement system background
value should be selected that represents the high end of what is expected for a particular
measurement method. For direct measurements, probes will be moved from point to point; as a
result, it is expected that the background will most likely vary significantly because of variations
in background, source materials, and changes in geometry and shielding. Ideally, the MDC
values should be calculated for each type of area, but it may be more economical to simply
select a background value from the highest distribution expected and use this for all
calculations. For the same reasons, realistic values of detection efficiencies and other process
parameters should be used when possible and should be reflective of the actual conditions, as
adopting an overly conservative MDC may sometimes lead to difficulties in implementation. To a
great degree, the selection of these parameters will be based on judgment and will require
evaluation of site-specific conditions. Example 1 illustrates the calculation of an MDC in
becquerels per square meter (Bq/m2) for an instrument with a 15 cm2 probe area when the
measurement and background counting times are each 1 minute.
Example 1: Calculation of a Minimum Detectable Concentration (MDC)
This example illustrates the calculation of an MDC in becquerels per square meter (Bq/m2) for
an instrument with a 15 square centimeter (cm2) probe area when the measurement and
background counting times are each 1 minute. Note that the count rate is reported in units of
disintegrations per minute (dpm). The number of background counts, B, is 40.
If the total efficiency of the probe is 20 percent, then 1 count will be recorded during a
1-minute timeframe for every 5 dpm. The concentration conversion factor C is

C = (5 dpm/count) × (1 Bq / 60 dpm) × (1 / 15 cm2) × (10,000 cm2/m2) = 55.6 Bq/m2 per count

The MDC is calculated using Equation 6-5:

MDC = (55.6 Bq/m2) × (3 + 4.65√40) = 1,800 Bq/m2 (1,100 dpm/100 cm2)

The critical level, Lc, for this example is calculated from Equation 6-3:

Lc = 2.33√B = 2.33√40 = 15 counts
Given the above scenario, if a person asked what level of residual radioactive material could
be detected 95 percent of the time using this method, the answer would be 1,800 Bq/m2
(1,100 dpm/100 cm2). When performing measurements using this method, any count yielding
greater than 55 total counts, or greater than 15 net counts (55 - 40 = 15) during a period of
1 minute, would be regarded as greater than background.
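A short Python sketch reproducing the Example 1 arithmetic is shown below (illustrative only); the probe area, efficiency, and background are the Example 1 values.

import math

def conversion_factor(total_efficiency, probe_area_cm2):
    # Converts net counts from a 1-minute count into Bq/m2 (the factor C of Example 1)
    dpm_per_count = 1.0 / total_efficiency        # 5 dpm per count at 20 percent efficiency
    return dpm_per_count * (1.0 / 60.0) * (10_000.0 / probe_area_cm2)

B = 40                                            # background counts in 1 minute
C = conversion_factor(0.20, 15.0)                 # ~55.6 Bq/m2 per count
mdc = C * (3 + 4.65 * math.sqrt(B))               # Equation 6-5: ~1,800 Bq/m2
lc = 2.33 * math.sqrt(B)                          # Equation 6-3: ~15 counts
print(f"C = {C:.1f} Bq/m2 per count, MDC = {mdc:.0f} Bq/m2, Lc = {lc:.0f} counts")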
MDC values for other counting conditions may be derived from Equation 6-5, depending on the
detector and radionuclides of concern. For example, it may be required to determine what level
of residual radioactive material, distributed over 100 square centimeters (cm2), can be detected
with a 500 cm2 probe or what level of residual radioactive material can be detected with any
probe when the area is smaller than the probe active area. Table 6.1 lists several common field
survey detectors with estimates of MDC values for uranium-238 (238U) on a smooth, flat plane.
As such, these represent minimum MDC values and may not be applicable at all sites.
Appropriate site-specific MDC values should be determined using the DQO Process.
1	6.3.2 Detection Capability for Scans
2	Unless data logging is employed, the ability to identify a small area of elevated levels of
3	radioactive material during surface scanning is dependent upon the surveyor's skill in
4	recognizing an increase in the output of an instrument. For notation purposes, the term
5	detection capability (sometimes referred to as scanning sensitivity) is used throughout this
6	section to describe the ability of a surveyor to detect a pre-determined level of residual
7	radioactive material with a detector. The greater the detection capability, the lower the level of
8	residual radioactive material that can be detected. Table 6.1 provides examples of a set of
9	detection capabilities.
10	Table 6.1: Examples of Estimated Detection Capabilities for Alpha and Beta Survey
11	Equipment (Static 1-Minute Counts for 238U Calculated Using Equations 6-3, 6-4, and 6-5)




                                                                Approximate Detection Capability
Detector              Probe Area   Background   Efficiency    Lc         Ld         MDC
                      (cm2)        (cpm)        (cpm/dpm)     (counts)   (counts)   (Bq/m2)a
Alpha proportional    50           1            0.15          2          7          150
Alpha proportional    100          1            0.15          2          7          83
Alpha proportional    600          5            0.15          5          13         25
Alpha scintillation   50           1            0.15          2          7          150
Beta proportional     100          300          0.20          40         83         700
Beta proportional     600          1,500        0.20          90         183        250
Beta GM pancake       15           40           0.20          15         32         1,800
12	Abbreviations: cm = centimeter; cpm = counts per minute; dpm = decays per minute; Lc = critical level; LD = detection
13	limit; Bq = becquerels; m = meters; GM = Geiger-Mueller.
14	a Assumes that the size of the area of radioactive material is at least as large as the probe area.
15	Many of the radiological instruments and monitoring techniques typically used for occupational
16	health physics activities may not provide the detection capabilities necessary to demonstrate
17	compliance with the DCGLs. The detection capability for a given application can be improved
18	(i.e., lower the MDC) by (1) selecting an instrument with a higher detection efficiency or a lower
19	background, (2) decreasing the scanning speed, or (3) increasing the size of the effective probe
20	area without significantly increasing the background response.
Scanning is usually performed during radiological surveys to identify the presence of any areas
of elevated activity. The probability of detecting residual radioactive material in the field not only
depends on the detection capability of the survey instrumentation when used in the scanning
mode of operation, but also is affected by the surveyor's ability (i.e., human factors). The
surveyor must make a decision whether the signals represent only the background activity or
residual radioactive material in excess of background. Lower levels of residual radioactive
material can be detected by increasing the detection capability (i.e., the level of residual
radioactive material that can be detected decreases as the detection capability increases). Accounting for these
human factors represents a significant change from the methods of estimating detection
capabilities for scans used in the past.
An empirical method for evaluating the detection capability for scans is actual
experimentation or, where feasible, simulation of an experimental setup using
computer software. The following steps provide a simple example of how one can perform this
empirical evaluation:
1.	A desired radionuclide activity level is selected.
2.	The response of the detector to be used is determined for the selected radionuclide activity
level.
3.	A test source is constructed that will give a detector count rate equivalent to the detector
response, determined in Step 2. The count rate is equivalent to what would be expected
from the detector when placed on an actual area with residual radioactive material equal in
value to that selected in Step 1.
4.	The detector of choice is then moved over the source at different scan rates until an
acceptable speed is determined.
The most useful aspect of this approach is that the source can then be used to show surveyors
what level of residual radioactive material is expected to be targeted with the scan. They, in
turn, can gain experience with what the expected response of the detector will be and how fast
they can survey and still feel confident about detecting the target residual radioactive material
level. The person responsible for the survey can then use this information when developing a
fixed point measurement and sampling plan.
The remainder of this section provides the reader with information regarding the underlying
processes involved when performing scanning surveys for alpha-, beta-, and gamma-emitting
radionuclides. The purpose is to provide relevant information that can be used for estimating
realistic detection capabilities for scans.
6.3.2.1 Scanning for Beta and Gamma Emitters
The scan MDC depends on several factors:
• the intrinsic characteristics of the detector (efficiency, physical probe area, etc.)
1	• the nature (type and energy of emissions) and relative distribution of residual radioactive
2	material (point versus distributed source and depth of residual radioactive material)
3	• scan rate
4	• other characteristics of the surveyor
5	Some factors may affect the surveyor's performance (e.g., fatigue, noise, level of training,
6	experience); the surveyor's a priori expectation of the likelihood that residual radioactive
7	material is present also affects performance.
8	material is very low, as in a Class 3 area, a relatively large signal may be required for the
9	surveyor to conclude that residual radioactive material is present.
10	Signal Detection Theory
11	Personnel conducting radiological surveys for residual radioactive material at sites must
12	interpret the audible output of a portable survey instrument to determine when the signal
13	("clicks") exceeds the background level by a margin sufficient to conclude that residual
14	radioactive material is present. It is difficult to detect low levels of residual radioactive material,
15	because both the signal and the background vary widely. Signal detection theory provides a
16	framework for the task of deciding whether the audible output of the survey meter during
17	scanning is due to background or signal plus background levels. An index of sensitivity (d') that
18	represents the distance between the means of the background and background plus signal
19	(refer to Figure 6.1 for determining LD), in units of their common standard deviation can be
20	calculated for various decision errors (correct detection and false positive rate).
21	As an example, for a correct detection rate of 95 percent (complement of a false negative rate of
22	5 percent) and a false positive rate of 5 percent, d' is 3.28 (similar to the static MDC for the
23	same decision error rates). The index of sensitivity is independent of human factors; therefore,
24	the ability of an ideal observer (theoretical construct) may be used to determine the minimum d'
25	that can be achieved for particular decision errors. The ideal observer makes optimal use of the
26	available information to maximize the percent correct responses, providing an effective upper
27	bound against which to compare actual surveyors. Table 6.2 lists selected values of d'.
28	Two Stages of Scanning
29	The framework for determining the scan MDC is based on the premise that there are two stages
30	of scanning. That is, surveyors do not make decisions based on a single indication; rather, upon
31	noting an increased number of counts, they pause briefly and then decide whether to move on
32	or take further measurements. Thus, scanning consists of two components: continuous
33	monitoring and stationary sampling. In the first component, characterized by continuous
34	movement of the probe, the surveyor has only a brief "look" at potential sources, determined by
35	the scan speed. The surveyor's willingness to decide that a signal is present at this stage is
36	likely to be liberal, in that the surveyor should respond positively on scant evidence, because
37	the only "cost" of a false positive is a little time. The second component occurs only after a
38	positive response was made at the first stage. This response is marked by the surveyor
39	interrupting his or her scanning and holding the probe stationary for a period of time while
1	comparing the instrument output signal during that time to the background counting rate. Owing
2	to the longer observation interval, detection capability is relatively high. For this decision, the
3	criterion should be stricter, as the cost of a "yes" decision is to spend considerably more time
4	Table 6.2: Index of Sensitivity (d') Values for Selected True Positive and False Positive
5	Proportions
False Positive     True Positive Proportion
Proportion         0.60   0.65   0.70   0.75   0.80   0.85   0.90   0.95
0.05               1.90   2.02   2.16   2.32   2.48   2.68   2.92   3.28
0.10               1.54   1.66   1.80   1.96   2.12   2.32   2.56   2.92
0.15               1.30   1.42   1.56   1.72   1.88   2.08   2.32   2.68
0.20               1.10   1.22   1.36   1.52   1.68   1.88   2.12   2.48
0.25               0.93   1.06   1.20   1.35   1.52   1.72   1.96   2.32
0.30               0.78   0.91   1.05   1.20   1.36   1.56   1.80   2.16
0.35               0.64   0.77   0.91   1.06   1.22   1.42   1.66   2.02
0.40               0.51   0.64   0.78   0.93   1.10   1.30   1.54   1.90
0.45               0.38   0.52   0.66   0.80   0.97   1.17   1.41   1.77
0.50               0.26   0.38   0.52   0.68   0.84   1.04   1.28   1.64
0.55               0.12   0.26   0.40   0.54   0.71   0.91   1.15   1.51
0.60               0.00   0.13   0.27   0.42   0.58   0.82   1.02   1.38
6	taking a static measurement or a sample. Because scanning can be divided into two stages, it is
7	necessary to consider the survey's scan detection capability for each stage. Typically, the
8	minimum detectable count rate (MDCR) associated with the first scanning stage will be greater
9	due to the brief observation intervals of continuous monitoring—provided that the length of the
10	pause during the second stage is significantly longer. Typically, observation intervals during the
11	first stage are on the order of 1 or 2 seconds, while the second stage pause may be several
12	seconds longer. The greater value of MDCR from each of the scan stages is used to determine
13	the detection capability for the surveyor.
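Under the equal-variance normal model implied by this framework, d' can be computed as the difference between the standard normal quantiles of the true positive and false positive proportions. The short Python sketch below (an illustration, not a required method) reproduces selected Table 6.2 entries to within rounding.

from statistics import NormalDist

def index_of_sensitivity(true_positive, false_positive):
    # d' = z(true positive proportion) - z(false positive proportion)
    z = NormalDist().inv_cdf
    return z(true_positive) - z(false_positive)

# Compare with Table 6.2 (tabulated values: 3.28, 1.38, 2.48)
for tp, fp in [(0.95, 0.05), (0.95, 0.60), (0.95, 0.20)]:
    print(f"TP={tp}, FP={fp}: d' = {index_of_sensitivity(tp, fp):.2f}")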
14	Determination of MDCR and Use of Surveyor Efficiency
15	The minimum detectable number of net source counts in the time interval is given by si.
16	Therefore, for an ideal observer, the number of source counts required for a specified level of
17	performance can be arrived at by multiplying the square root of the number of background
18	counts by the detectability value associated with the desired performance (as reflected in d') as
19	shown in Equation 6-6:
si = d'√bi     (6-6)

where the value of d' is selected from Table 6.2 based on the required true positive and false
positive rates, and bi is the number of background counts in the observation interval. The MDCR
can be calculated using Equation 6-7:

MDCR = si / Δti     (6-7)

where Δti is the observation interval (in minutes, so that the MDCR is in counts per minute).
Example 2 illustrates the calculation of the MDCR for a
probe with a background count rate of 1,500 counts per minute (cpm) for a 1-second interval.
Example 2: Calculation of the Minimum Detectable Count Rate for the First Stage of
Scanning
Estimate the minimum detectable count rate (MDCR) by scanning in an area with a
background of 1,500 counts per minute (cpm). Note that the MDCR must be considered for
both scan stages, and the more conservative value is selected as the minimum count rate
that is detectable. It will be assumed that a typical source remains under the probe for
1 second (s) during the first stage; therefore, the average number of background counts in
the observation interval is—

bi = (1,500 cpm) × (1 s) × (1 minute/60 s) = 25 counts

where bi is the average number of background counts in an observation interval.
Furthermore, it can be assumed that at the first scanning stage, a high rate (e.g., 95 percent)
of correct detections is required and that a correspondingly high rate of false positives (e.g.,
60 percent) will be tolerated. From Table 6.2, the value of d' representing this performance
goal is 1.38. The net source counts needed to support the specified level of performance
(assuming an ideal observer) will be estimated by multiplying 5 (the square root of 25) by
1.38. Thus, the net source counts, si, in interval Δti (in seconds), needed to yield better than
95 percent detections with about 60 percent false positives is given by Equation 6-6:

si = 1.38 × √25 = 6.9

The MDCR, in cpm, may be calculated using Equation 6-7:

MDCR = (6.9 counts / 1 s) × (60 s/minute) = 414 cpm

where si is the minimum detectable number of net source counts over the time interval
specified by Δti. For this example, the MDCR is equivalent to 414 cpm above a background of
1,500 cpm (1,914 cpm gross).
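The Example 2 arithmetic can be sketched in a few lines of Python (illustrative only); the background, interval, and d' values are those of Example 2.

import math

def mdcr_cpm(background_cpm, interval_s, d_prime):
    # Equations 6-6 and 6-7: net minimum detectable count rate for one scan stage
    b_i = background_cpm * interval_s / 60.0     # background counts in the observation interval
    s_i = d_prime * math.sqrt(b_i)               # minimum detectable net source counts
    return s_i * 60.0 / interval_s               # convert counts per interval to cpm

net = mdcr_cpm(1500, 1.0, 1.38)                  # first scan stage of Example 2
print(f"MDCR = {net:.0f} net cpm ({1500 + net:.0f} cpm gross)")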
1	Example 3 illustrates the determination of the detection limit for the ideal observer (MDCR) at
2	the first scanning stage for various background levels for a true positive proportion of 95 percent
3	and false positive proportion of 60 percent.
Example 3: Determination of the Detection Limit for the Ideal Observer
The table below provides the minimum detection count rate and detection limit for the ideal
observer at the first scanning stage for various background levels, based on an index of
sensitivity (d') of 1.38 for a true positive proportion of 95 percent, a false positive proportion of
60 percent, and a 2-second observation interval.
Detection Capability of the Ideal Observer for Various Background Levels
Background    MDCR         Scan Sensitivity
(cpm)         (net cpm)    (gross cpm)a
45            50           95
60            60           120
260           120          380
300           130          430
350           140          490
400           150          550
1,000         240          1,240
3,000         410          3,410
4,000         480          4,480
Abbreviations: cpm = counts per minute; MDCR = minimum detectable count rate.
a The detection capability of the ideal observer during the first scanning stage is based on an index of sensitivity
(d') of 1.38 and a 2-second observational interval.
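The tabulated values can be regenerated (to within the rounding used above) with a short Python loop over background levels, assuming d' = 1.38 and a 2-second observation interval.

import math

def mdcr_cpm(background_cpm, interval_s=2.0, d_prime=1.38):
    # Equations 6-6 and 6-7 for the ideal observer at the first scan stage
    b_i = background_cpm * interval_s / 60.0
    return d_prime * math.sqrt(b_i) * 60.0 / interval_s

for bkg in (45, 60, 260, 300, 350, 400, 1000, 3000, 4000):
    net = mdcr_cpm(bkg)
    print(f"{bkg:>5} cpm background: MDCR = {net:.0f} net cpm, "
          f"scan sensitivity = {bkg + net:.0f} gross cpm")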
4	The minimum number of source counts required to support a given level of performance for the
5	final detection decision (second scan stage) can be estimated using the same method. As
6	explained earlier, the performance goal at this stage will be more demanding. Example 4
7	illustrates the calculation of the MDCR for the probe from Example 2 but with an interval of
8	4 seconds instead of 1 second.
Example 4: Calculation of the Minimum Detectable Count Rate for the Second Stage of
Scanning
The required rate of true positives remains high (e.g., 95 percent), but fewer false positives
(e.g., 20 percent) can be tolerated, such that the index of sensitivity (d') (from Table 6.2) is
now 2.48. One will assume that the surveyor typically stops the probe over a suspect location
for about 4 seconds (s) before making a decision so that the average number of background
counts in an observation interval is
bi = (1,500 cpm) × (4 s) × (1 minute/60 s) = 100 counts

where bi is the average number of background counts in an observation interval. Therefore,
the minimum detectable number of net source counts, si, needed will be estimated by
multiplying 10 (the square root of bi) by 2.48 (the d' value) using Equation 6-6:

si = d'√bi = 2.48 × √100 = 24.8

The minimum detectable count rate (MDCR) is calculated using Equation 6-7:

MDCR = si / Δti = (24.8 counts / 4 s) × (60 s/minute) = 372 cpm

where si is the minimum detectable number of net source counts over the time interval
specified by Δti. The MDCR is 372 counts per minute (cpm) net, or 1,872 cpm gross. The value
associated with the first scanning stage (Example 2: 414 cpm net or 1,914 cpm gross) will
typically be greater, owing to the relatively brief intervals assumed.
Laboratory studies using simulated sources and backgrounds were performed to assess the
abilities of surveyors under controlled conditions. The methodology and analysis of results for
these studies are described in NUREG-1507 (NRC 1997a). The surveyor's actual performance
as compared with the ideal possible performance (using the ideal observer construct) provided
an indication of the efficiency of the surveyors. Based on the results of the confidence rating
experiment, this surveyor efficiency (p) was estimated to be between 0.5 and 0.75.
MARSSIM recommends assuming a surveyor efficiency value at the lower end of the observed
range (i.e., 0.5) when making MDC estimates. Thus, the required number of net source counts,
MDCRsurveyor, is determined by dividing the MDCR by the square root of p, as in Equation 6-8:
MDCRsurveyor = MDCR / √p     (6-8)
Example 5 shows the calculation of the surveyor MDCR for Example 2:
Example 5: Calculation of the Surveyor Minimum Detectable Count Rate for Example 2
Using the data from Example 2, the surveyor minimum detectable count rate (MDCR) is
calculated using Equation 6-8:

MDCRsurveyor = MDCR / √p = (414 cpm) / √0.5 = 585 cpm
The surveyor MDCR is 585 counts per minute (cpm) net (2,085 cpm gross).
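A one-line Python sketch of Equation 6-8, using the Example 2 MDCR and the recommended surveyor efficiency of 0.5:

import math

mdcr_ideal = 414                              # net cpm for the ideal observer (Example 2)
p = 0.5                                       # surveyor efficiency recommended for MDC estimates
mdcr_surveyor = mdcr_ideal / math.sqrt(p)     # Equation 6-8
print(f"MDCR_surveyor = {mdcr_surveyor:.0f} net cpm")   # ~585 net cpm (Example 5)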
Scan MDCs for Structure Surfaces and Land Areas
The survey design for determining the number of data points for areas of elevated activity (see
Section 5.3.5) depends on the scan MDC for the selected instrumentation. In general, alpha or
beta scans are performed on structure surfaces to satisfy the elevated activity measurements
survey design, and gamma scans are performed for land areas. Because of low background
levels for alpha emitters, the approach described here is not generally applied to determining
scan MDCs for alpha emitters; rather, the reader is referred to Section 6.3.2.2 for an
appropriate method for determining alpha scan MDCs for building surfaces. In any case, the
data requirements for assessing potential elevated areas of direct radiation depend on the scan
MDC of the survey instrument (e.g., floor monitor, GM detector, sodium iodide [Nal] scintillation
detector).
Scan MDCs for Building/Structure Surfaces
The scan MDC is determined from the MDCR by applying conversion factors that account for
detector and surface characteristics and surveyor efficiency. As discussed above, the MDCR
accounts for the background level, performance criteria (d'), and observation interval. The
observation interval during scanning is the actual time that the detector can respond to the
source of residual radioactive material—this interval depends on the scan speed, detector size
in the direction of the scan, and area of elevated activity. Because the actual dimensions of
potential areas of elevated activity in the field cannot be known a priori, MARSSIM recommends
postulating a certain area (e.g., perhaps 50-200 cm2) and then selecting a scan rate that
provides a reasonable observation interval.
Finally, the scan MDC in units of decays per minute (dpm)/100 cm2 for structure surfaces may
be calculated using Equation 6-9:
Scan MDC = MDCR / (√p × εi × εs × (W/100))     (6-9)

where
MDCR is the minimum detectable count rate
εi is the instrument efficiency
εs is the surface efficiency
p is the surveyor efficiency
W is the physical probe area in square centimeters
100 converts the result from per cm2 to per 100 cm2
Consideration may need to be given to the size of the detector probe relative to the size of the
postulated hot spot. For example, a large area floor monitor with a probe area of
approximately 600 cm2 would fully cover the area of the postulated hot spot in the scenario
presented above (i.e., 50-200 cm2). In this situation, a probe area correction is likely not
appropriate. Example 6 illustrates the calculation of the scan MDC for the probe in Example 3
for technetium-99 (99Tc).
Example 6: Calculation of the Scan Minimum Detectable Concentration for the Probe in
Example 3
As an example, the scan minimum detectable concentration (MDC) (in disintegrations per
minute [dpm]/100 square centimeters [cm2]) for technetium-99 (99Tc) on a concrete surface
may be determined for a background level of 300 counts per minute (cpm) and a 2-second
observation interval using a hand-held gas proportional detector (126 cm2 probe area). For a
specified level of performance at the first scanning stage of 95 percent true positive rate and
60 percent false positive rate (and assuming the second stage pause is long enough to
ensure that the first stage is more limiting), d' equals 1.38 (Table 6.2), and the minimum
detectable count rate (MDCR) is 130 cpm (Example 3). Using a surveyor efficiency of 0.5,
and assuming instrument and surface efficiencies of 0.36 and 0.54, respectively, the scan
MDC is calculated using Equation 6-9:

Scan MDC = MDCR / (√p × εi × εs × (W/100))
         = 130 cpm / (√0.5 × 0.36 × 0.54 × (126 cm2/100))
         = 750 dpm/100 cm2

The scan MDC for 99Tc is 750 dpm/100 cm2.
Additional examples for calculating the scan MDC may be found in NUREG-1507 (NRC 1997a).
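A minimal Python sketch of Equation 6-9 using the Example 6 inputs is shown below; the small difference from the quoted 750 dpm/100 cm2 reflects rounding in the example.

import math

def scan_mdc_structure(mdcr_net_cpm, probe_area_cm2, eff_instrument, eff_surface, p=0.5):
    # Equation 6-9: scan MDC in dpm per 100 cm2 for structure surfaces
    return mdcr_net_cpm / (math.sqrt(p) * eff_instrument * eff_surface * probe_area_cm2 / 100.0)

# Example 6: MDCR = 130 net cpm, 126 cm2 gas proportional probe, ei = 0.36, es = 0.54
print(f"Scan MDC = {scan_mdc_structure(130, 126, 0.36, 0.54):.0f} dpm/100 cm2")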
Scan MDCs for Land Areas
In addition to the MDCR and detector characteristics, the scan MDC for land areas is based on
the area of elevated activity, depth of contamination, and the radionuclide (i.e., energy and yield
of gamma emissions).
Thallium-activated Nal (Nal(TI)) scintillation detectors are generally used for scanning land areas.
Typically, the detectors are held just above the surveyed area, either by hand (e.g., by
suspending the detector from the surveyor's hand by a rope) or by mounting the detector on an
automated device.
An overview of the approach used to determine scan MDCs for land areas follows. The Nal(TI)
scintillation detector background level and scan rate (observation interval) are postulated, and
the MDCR for the ideal observer, for a given level of performance, is obtained. After a surveyor
efficiency is selected, the relationship between the surveyor MDCR (MDCRsurveyor) and the
radionuclide concentration in soil, in Bq/kilogram (kg) or picocuries/gram (pCi/g) is determined.
This correlation requires two steps: First, the relationship between the detector's net count rate
to net exposure rate (cpm per microroentgens/hour [μR/h]) is established, and second, the
relationship between the concentration of residual radioactive material and exposure rate is
determined.
For a particular gamma energy, the relationship of Nal(TI) scintillation detector count rate and
exposure rate may be determined analytically (in cpm per μR/h). The approach used to
determine the gamma fluence rate necessary to yield a fixed exposure rate (1 μR/h)—as a
function of gamma energy—is provided in NUREG-1507 (NRC 1997a). The Nal(TI) scintillation
detector response (in cpm) is related to the fluence rate at specific energies, considering the
detector's efficiency (probability of interaction) at each energy. From this, the Nal(TI) detector
count rate versus exposure rates for varying gamma energies are determined. After the
relationship between the Nal(TI) detector response and the exposure rate is established, the
MDCRsurveyor (in cpm) of the Nal(TI) detector can be related to the minimum detectable exposure
rate (MDER) using Equation 6-10:
MDER = MDCRsurveyor / (Ratio of Count Rate to Exposure Rate)     (6-10)
The MDER is used to determine the minimum detectable radionuclide concentration (i.e., the
scan MDC) by modeling a specified small area of elevated activity and then dividing the MDER
by the exposure rate conversion factor using Equation 6-11:
Scan MDC = MDER / (Exposure Rate Conversion Factor)     (6-11)
Example 7 illustrates the calculation of the scan MDC for cesium-137 (137Cs) using a 38
millimeter (mm; 1.5 inch [in.]) by 32 mm (1.25 in.) Nal(TI) scintillation detector.
Example 7: Calculation of a Scan Minimum Detectable Concentration for 137Cs
Modeling (using MicroShield 5.05™) of the small area of elevated activity (soil concentration)
is used to determine the net exposure rate produced by a radionuclide concentration at a
distance 10 centimeters (cm) above the source. This position is selected because it relates to
the average height of the Nal(TI) scintillation detector above the ground during scanning. The
factors considered in the modeling include the following:
•	radionuclide of interest (considering all gamma emitters for decay chains)
•	expected concentration of the radionuclide of interest
•	areal dimensions of the area of elevated activity
•	depth of the area of elevated activity
•	location of dose point (Nal(TI) scintillation detector height above the surface)
•	density and moisture content of soil
Modeling analyses are conducted by selecting a radionuclide (or radioactive material decay
series) and then varying the concentration of the radionuclide. The other factors are held
constant—the areal dimension of a cylindrical area of elevated activity is 0.25 square
meters (m2; radius of 28 cm), the depth of the area of elevated activity is 15 cm, the dose
point is 10 cm above the surface, and the density of soil is 1.6 g/cm3. The soil was modeled
as 50 percent aluminum and 50 percent carbon by weight. The objective is to determine the
radionuclide concentration that is correlated to the minimum detectable net exposure rate.
The scan MDC for cesium-137 (137Cs) using a 38 millimeter (mm; 1.5 inch [in.]) by 32 mm
(1.25 in.) Nal(TI) scintillation detector is considered in detail. Assume that the background
level is 4,000 counts per minute (cpm) and that the desired level of performance, 95 percent
correct detections and 60 percent false positive rate, results in a d' of 1.38. The scan rate of
0.5 m/second (s) provides an observation interval of 1 second (based on a diameter of about
56 cm for the area of elevated activity). The MDCRsurveyor may be calculated assuming a
surveyor efficiency (p) of 0.5, as follows:

bi = (4,000 cpm) × (1 s) × (1 min/60 s) = 66.7 counts

MDCR = d' × √bi × (60 s/1 min) = 1.38 × √66.7 × (60 s/1 min) = 680 cpm

MDCRsurveyor = MDCR / √p = (680 cpm) / √0.5 = 960 cpm
The corresponding minimum detectable exposure rate (MDER) is determined for this detector
and radionuclide. The manufacturer of this particular 38 mm (1.5 in.) by 32 mm (1.25 in.)
Nal(TI) scintillation detector quotes a count rate to exposure rate ratio for 137Cs of 350 cpm
per μR/hour (h), which is equivalent to the International System of Units (SI) value of
1.36 cpm per picocoulomb (pC) per kilogram (kg) per hour (cpm per (pC/kg)/h). MDER can be
calculated by dividing the MDCRsurveyor by the count rate to exposure rate ratio for 137Cs, as shown below:

MDER = (960 cpm) / (1.36 cpm per (pC/kg)/h) = 706 (pC/kg)/h (2.74 μR/h)
Both 137Cs and its short-lived progeny, 137mBa, are chosen from the MicroShield® library. The
source activity and other modeling parameters are entered into the modeling code. The
source activity is selected based on an arbitrary concentration of 185 Bq/kg (5 picocuries
[pCi]/g). The modeling code performs the appropriate calculations and determines an
exposure rate of 337 (pC/kg)/h (1.307 µR/h), which accounts for buildup.
Finally, the radionuclide concentrations of 137Cs and 137mBa (scan MDC) necessary to yield
the minimum detectable exposure rate of 706 (pC/kg)/h (2.74 µR/h) may be calculated using
the following formula:
Scan MDC = (706 (pC/kg)/h) / ((337 (pC/kg)/h) / (185 Bq/kg)) = 390 Bq/kg
The scan MDC for 137Cs using a 38 mm (1.5 in.) by 32 mm (1.25 in.) Nal(TI) scintillation
detector, rounded to the appropriate number of significant digits, is 390 Bq/kg (11 pCi/g).
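For readers who want to reproduce the arithmetic, the short Python sketch below strings together the Example 7 steps. It is an illustration only: the function and variable names are not from MARSSIM, and the modeled exposure rate per unit concentration must come from a code such as MicroShield for the specific geometry being evaluated:

# Minimal sketch of the Example 7 scan MDC calculation (137Cs, 38 mm x 32 mm NaI(Tl)).
# Function and variable names are illustrative; values are those used in the example.
import math

def scan_mdc_gamma(background_cpm, d_prime, interval_s, surveyor_eff,
                   cpm_per_uR_h, modeled_uR_h, modeled_conc_bq_kg):
    """Return the scan MDC (Bq/kg) following the steps of Section 6.3.2.1."""
    b_i = background_cpm * interval_s / 60.0                 # background counts in one observation interval
    mdcr = d_prime * math.sqrt(b_i) * 60.0 / interval_s      # minimum detectable count rate (cpm)
    mdcr_surveyor = mdcr / math.sqrt(surveyor_eff)           # adjust for surveyor efficiency p
    mder = mdcr_surveyor / cpm_per_uR_h                      # minimum detectable exposure rate (uR/h)
    conversion = modeled_uR_h / modeled_conc_bq_kg           # exposure rate conversion factor (uR/h per Bq/kg)
    return mder / conversion                                 # Equation 6-11

# Example 7 inputs: 4,000 cpm background, d' = 1.38, 1 s interval, p = 0.5,
# 350 cpm per uR/h, and a modeled 1.307 uR/h for 185 Bq/kg of 137Cs (with 137mBa).
print(round(scan_mdc_gamma(4000, 1.38, 1.0, 0.5, 350, 1.307, 185)))  # ~387 Bq/kg, i.e., 390 Bq/kg after rounding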
It must be emphasized that although a single scan MDC value can be calculated for a given
radionuclide, other scan MDC values may be equally justifiable, depending on the values
chosen for the various factors, including the MDCR (background level, acceptable performance
criteria, observation interval), surveyor efficiency, detector parameters, and the modeling
conditions of the residual radioactive material. It should also be noted that determination of the
scan MDC for radioactive materials—such as uranium and thorium—must consider the gamma
radiation emitted from the entire decay series. NUREG-1507 (NRC 1997a) provides a detailed
example of how the scan MDC can be determined for enriched uranium.
Example 7 uses 137Cs as the radionuclide, which is the same radionuclide that is used to
calibrate the instrument. When performing gamma surveys for radionuclides other than 137Cs—the
most common instrument calibration source—instruments may underestimate or overestimate
the source strength because of the different energies of the gamma rays that are emitted. This
uncertainty can be reduced or eliminated by cross calibrating the detector to energy
compensated detectors, such as pressurized ion chambers, or the instrument can be calibrated
to the specific radionuclide of concern.
Table 6.3 provides scan MDCs for common radionuclides and radioactive materials in soil. It is
important to note that the variables used in the above examples to determine the scan MDCs for
the 38 mm (1.5 in.) by 32 mm (1.25 in.) Nal(TI) scintillation detector (i.e., the MDCRsurveyor,
detector parameters (e.g., cpm per µR/h), and the characteristics of the area of elevated
activity) have all been held constant to facilitate the calculation of scan MDCs provided in
Table 6.3. The benefit of this approach is that generally applicable scan MDCs are provided for
different radionuclides. Additionally, the relative detectability of different radionuclides is evident
because the only variable in Table 6.3 is the nature of the radionuclide.
As noted above, the scan MDCs calculated using the approach in this section are dependent on
several factors. One way to validate the appropriateness of the scan MDC is by tracking the
levels of residual radioactive material (both surface activity and soil concentrations) identified
during investigations performed as a result of scanning surveys. The measurements performed
during these investigations may provide an a posteriori estimate of the scan MDC that can be
used to validate the a priori scan MDC used to design the survey.
Table 6.3: Nal(TI) Scintillation Detector Scan MDCs for Common Radionuclides and Radioactive Materials

Radionuclide/Radioactive Material | 1.25 in. by 1.5 in. Nal Detector(a): Scan MDC (Bq/kg) | 1.25 in. by 1.5 in. Nal Detector(a): Weighted cpm/µR/h | 2 in. by 2 in. Nal Detector(a): Scan MDC (Bq/kg) | 2 in. by 2 in. Nal Detector(a): Weighted cpm/µR/h
241Am | 1,650 | 5,830 | 1,170 | 13,000
60Co | 215 | 160 | 126 | 430
137Cs | 385 | 350 | 237 | 900
230Th | 111,000 | 4,300 | 78,400 | 9,580
226Ra (Individual radionuclide, in equilibrium with progeny) | 167 | 300 | 104 | 760
232Th decay series (Sum of all radionuclides in the thorium decay series) | 1,050 | 340 | 677 | 830
Th-232 (Individual radionuclide, in equilibrium with progeny in decay series) | 104 | 340 | 66.6 | 830
Depleted U(b) (0.34% U-235) | 2,980 | 1,680 | 2,070 | 3,790
U in natural isotopic abundance(b) | 4,260 | 1,770 | 2,960 | 3,990
3% Enriched U(b) | 5,070 | 2,010 | 3,540 | 4,520
20% Enriched U(b) | 5,620 | 2,210 | 3,960 | 4,940
50% Enriched U(b) | 6,220 | 2,240 | 4,370 | 5,010
75% Enriched U(b) | 6,960 | 2,250 | 4,880 | 5,030

Abbreviations: in. = inch; MDC = minimum detectable concentration; Bq = becquerel; kg = kilogram; cpm = counts per minute; µR/h = microroentgens/hour.
(a) Refer to text for complete explanation of factors used to calculate scan MDCs. For example, the background level for the 1.25 in. by 1.5 in. Nal detector was assumed to be 4,000 cpm, and 10,000 cpm for the 2 in. by 2 in. Nal detector. The observation interval was 1 second, and the level of performance was selected to yield a performance criteria (d′) of 1.38.
(b) Scan MDC for uranium includes the sum of 238U, 235U, and 234U.
10	6.3.2.2 Scanning for Alpha Emitters
11	Scanning for alpha emitters differs significantly from scanning for beta and gamma emitters in
12	that the expected background response of most alpha detectors is very close to zero. The
13	following discussion covers scanning for alpha emitters and assumes that the surface being
1	surveyed is similar in nature to the material on which the detector was calibrated. In this respect,
2	the approach is purely theoretical. Surveying surfaces that are dirty, non-planar, or weathered
3	can significantly affect the detection efficiency and therefore introduce bias to the expected
4	MDC for the scan. The use of reasonable detection efficiency values instead of optimistic values
5	is highly recommended. Appendix J contains a complete derivation of the alpha scanning
6	equations used in this section.
7	Because the time an area is under the probe varies and the background count rate of some
8	alpha instruments is less than 1 cpm, it is not practical to determine a fixed MDC for scanning.
9	Instead, it is more useful to determine the probability of detecting an area of residual radioactive
10	material at a predetermined DCGL for given scan rates.
11	For alpha survey instrumentation with backgrounds ranging from less than 1 to 3 cpm, a single
12	count provides a surveyor sufficient cause to stop and investigate further. Assuming this to be
13	true, the probability of detecting given levels of alpha-emitting radioactive materials on a surface
14	can be calculated by use of Poisson summation statistics.
15	Given a known scan rate and a DCGL for residual radioactive material on a surface, the
16	probability of detecting a single count while passing over the contaminated area is calculated
using Equation 6-12:

P(n ≥ 1) = 1 − e^(−G × εt × d / (60 × v))                                          (6-12)

where

•	P(n ≥ 1) is the probability of observing one or more counts
•	G is the activity in disintegrations per minute (dpm)
•	εt is the detector efficiency (4π)
•	d is the width of the detector in the direction of the scan in centimeters
•	v is the scan speed in centimeters/second

Equation 6-12 may be solved for a minimum detectable alpha concentration by assessing
the probability of detection using Poisson summation statistics. Specifically, by defining a
certain probability of detection, the alpha scan minimum detectable activity (MDA) may be
estimated by solving Equation 6-12 for G, as shown in Equation 6-13, where i is the
observation interval (in seconds) that can be calculated as d/v from Equation 6-12:

Alpha Scan MDA = [−ln(1 − P(n ≥ 1))] × (60/i) / εt                                 (6-13)

The scan MDC calculation may be written to account for the probe area, as shown in
Equation 6-14:
Alpha Scan MDC = [−ln(1 − P(n ≥ 1))] × (60/i) / (εt × A/100)                       (6-14)
where A is the physical probe area (cm2) and the Alpha Scan MDC is in units of dpm/100 cm2.
Note: This evaluation is shown for the situation where a surveyor is expected to
respond to one single count and would become more complex for 2 or more
counts. It is also necessary to define an acceptable probability of detection,
which should be considered during the DQO process and may require
consultation with the regulator.
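To illustrate how Equations 6-13 and 6-14 convert a required detection probability into an alpha scan MDC, a short Python sketch follows. The function name and the example inputs (a 4π efficiency of 0.15, a 10 cm probe dimension in the scan direction, a 3 cm/s scan speed, and a 100 cm2 probe area) are illustrative assumptions, not MARSSIM requirements:

# Minimal sketch of Equations 6-13 and 6-14 for alpha scanning with a
# near-zero-background detector; inputs are assumed for illustration only.
import math

def alpha_scan_mdc(p_required, eff_4pi, probe_dim_cm, scan_speed_cm_s, probe_area_cm2):
    # Scan MDC (dpm/100 cm^2) for a required probability of at least one count
    interval_s = probe_dim_cm / scan_speed_cm_s                              # i = d/v
    mda_dpm = -math.log(1.0 - p_required) * (60.0 / interval_s) / eff_4pi    # Equation 6-13
    return mda_dpm / (probe_area_cm2 / 100.0)                                # Equation 6-14

print(f"{alpha_scan_mdc(0.9, 0.15, 10, 3, 100):.0f} dpm/100 cm^2")   # ~276 for 90% detection probability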
After a count is recorded, if residual radioactive material is present at the guideline level, the
surveyor should stop and wait until the probability of getting another count is at least 90 percent.
This time interval can be calculated by using Equation 6-15:
t = [−ln(1 − P(n ≥ 1))] × 60 × 100 / (C × W × εt) = 13,800 / (C × W × εt)          (6-15)
where
•	t is the time period for static count in seconds
•	C is the DCGL in dpm/100 cm2
•	W is the physical probe area (in cm2)
•	εt is the detector efficiency (4π)
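As a quick numerical check on Equation 6-15, the few lines of Python below evaluate the pause time for an assumed DCGL and probe; the specific values are illustrative only:

# Minimal sketch of Equation 6-15: pause time for at least a 90 percent chance of another count.
def pause_time_s(dcgl_dpm_per_100cm2, probe_area_cm2, eff_4pi):
    return 13800.0 / (dcgl_dpm_per_100cm2 * probe_area_cm2 * eff_4pi)

# e.g., a DCGL of 300 dpm/100 cm^2, a 50 cm^2 probe, and a 0.15 (4-pi) efficiency
print(f"{pause_time_s(300, 50, 0.15):.1f} s")   # ~6.1 s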
Many portable proportional counters have background count rates on the order of 5-10 cpm,
and a single count should not cause a surveyor to investigate further. A counting period long
enough to establish that a single count indicates an elevated level would be prohibitively
inefficient. For these types of instruments, the surveyor usually will need to get at least 2 counts
while passing over the source area before stopping for further investigation.
If this assumption is valid, the probability of getting two or more counts can be
calculated using Equation 6-16:

P(n ≥ 2) = 1 − P(n = 0) − P(n = 1)
         = 1 − [1 + (G × εt + B) × (t/60)] × e^(−(G × εt + B) × (t/60))            (6-16)

where P(n ≥ 2) is the probability of getting 2 or more counts during the time interval t; P(n = 0) is
the probability of not getting any counts during the time interval t; P(n = 1) is the probability of
1	getting exactly 1 count during the time interval t; and B is the background count rate (cpm). All
2	other variables are the same as in Equation 6-10.
3	Appendix J provides a complete derivation of Equations 6-12 through 6-16 and a detailed
4	discussion of the probability of detecting residual alpha-emitting radioactive material on surfaces
5	for several different variables. Several probability charts are included at the end of Appendix J
6	for common detector sizes. Table 6.4 provides estimates of the probability of detecting
7	300 dpm/100 cm2 for some commonly used alpha detectors.
Table 6.4: Probability of Detecting 300 dpm/100 cm2 of Alpha Activity While Scanning with Alpha Detectors Using an Audible Output (Calculated Using Equation 6-12)

Detector Type | Detection Efficiency (cpm/dpm) | Probe Dimension in Direction of Scan (cm) | Scan Rate (cm/s) | Probability of Detecting 300 dpm/100 cm2
Proportional | 0.20 | 5 | 3 | 80%
Proportional | 0.15 | 15 | 5 | 90%
Scintillation | 0.15 | 5 | 3 | 70%
Scintillation | 0.15 | 10 | 3 | 90%

Abbreviations: cpm = counts per minute; dpm = decays per minute; cm = centimeters; s = seconds.
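The tabulated probabilities can be checked with the single-count model of Equation 6-12 under the assumption of a 100 cm2 area of residual radioactive material at 300 dpm/100 cm2 (so that G = 300 dpm while the probe passes over it) and negligible background. The following sketch is illustrative only; the function name is not from MARSSIM:

# Minimal sketch reproducing the Table 6.4 probabilities with Equation 6-12.
import math

def p_detect(G_dpm, efficiency_cpm_per_dpm, probe_dim_cm, scan_rate_cm_s):
    # Probability of one or more counts while the probe passes over the source
    expected_counts = G_dpm * efficiency_cpm_per_dpm * probe_dim_cm / (60.0 * scan_rate_cm_s)
    return 1.0 - math.exp(-expected_counts)

for eff, d, v in [(0.20, 5, 3), (0.15, 15, 5), (0.15, 5, 3), (0.15, 10, 3)]:
    print(f"eff={eff}, d={d} cm, v={v} cm/s -> {p_detect(300, eff, d, v):.0%}")
# Prints approximately 81%, 89%, 71%, and 92%, consistent with the rounded values in Table 6.4.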
11	6.4 Measurement Uncertainty
12	The quality of measurement data will be directly affected by the magnitude of the measurement
13	uncertainty associated with it. Some uncertainties, such as statistical counting uncertainties, can
14	be easily calculated from the count results using mathematical procedures. Evaluation of other
15	sources of uncertainty requires more effort and in some cases is not possible. For example, if
16	an alpha activity measurement is made on a porous concrete surface, the observed instrument
17	response when converted to units of activity will probably not exactly equal the true activity
18	under the probe. The absorption properties of the surface for particulate radiation
19	will vary from point to point and therefore will create some level of variation in the expected
20	detection efficiency. This variability in the expected detector efficiency results in uncertainty in
21	the final reported result.
22	The measurement uncertainty for every analytical result or series of results, such as for a
23	measurement system, should be reported. This uncertainty, although not directly used for
24	demonstrating compliance with the release criteria, is used for survey planning and data
25	assessment throughout the Radiation Survey and Site Investigation (RSSI) process. In addition,
26	the uncertainty is used for evaluating the performance of measurement systems using quality
27	control (QC) measurement results. QC measurement results provide an estimate of random and
28	systematic uncertainties associated with the measurement process. Uncertainty can also be
29	used for comparing individual measurements to the DCGL. This is especially important in the
30	early stages of remediation (i.e., scoping, characterization, remedial action support) when
31	decisions are made based on a limited number of measurements.
32	Finally, where controlling uncertainty is important, a required uncertainty can be specified or a
33	minimum quantifiable concentration (MQC) can be defined. The MQC could, for example, be
that concentration of residual radioactive material that can be quantified with an uncertainty no
greater than some limit (e.g., 10 percent) at some specified concentration (e.g., the DCGL).
For most sites, evaluations of uncertainty associated with field measurements are important
only for data being used as part of the final status survey (FSS) documentation. The FSS data,
which is used to document the final radiological status of a site, should state the uncertainties
associated with the measurements. Conversely, detailing the uncertainties associated with
measurements made during scoping or characterization surveys may or may not be of value,
depending on what the data will be used for, as defined by the DQOs. From a practical
standpoint, if the observed data are obviously greater than the DCGL and will be eventually
cleaned up, then the uncertainty may be relatively unimportant. However, data collected
during early phases of a site investigation that may eventually be used to show that the area is
below the DCGL, and therefore does not require any cleanup action, will need the same
uncertainty evaluation as the FSS data. In summary, the level of effort needed to evaluate the
uncertainty should match the intended use of the data.
6.4.1 Systematic and Random Uncertainties
Measurement uncertainties are often broken into two subclasses: systematic uncertainty
(i.e., methodical) and random uncertainty (i.e., stochastic). Systematic uncertainties derive
from a lack of knowledge about the true distribution of values associated with a numerical
parameter and result in data that are consistently higher or lower than the true value. An
example of a systematic uncertainty would be the use of a fixed counting efficiency value
without knowledge of its true distribution, even though it is known that the efficiency varies from
measurement to measurement. If the fixed counting efficiency value is higher than the true but
unknown efficiency—as would be the case for an unrealistically optimistic value—then every
measurement result calculated using that efficiency would be biased low. Random uncertainties
refer to fluctuations associated with a known distribution of values. An example of a random
uncertainty would be a well-documented chemical separation efficiency that is known to
fluctuate with a regular pattern about a mean. A constant recovery value is used during
calculations, but the true value is known to fluctuate from sample to sample with a fixed and
known degree of variation.
To minimize the need for estimating potential sources of uncertainty, the sources of uncertainty
themselves should be reduced to a minimal level by using such practices as the following:
•	The detector used should minimize the potential uncertainty. For example, when making
field surface activity measurements for 238U on concrete, a beta detector—such as a thin-
window GM "pancake" probe—may provide better quality data than an alpha detector,
depending on the circumstances. Less random uncertainty would be expected between
measurements with a beta detector, such as a pancake probe, because beta emissions
from the uranium will be affected much less by thin, absorbent layers than the alpha
emissions will.
•	Calibration factors should accurately reflect the efficiency of the detector used on the
surface material being measured for the radionuclide or mixture of radionuclides of concern
1	(see Section 6.6.4). For most field measurements, variations in the counting efficiency on
2	different types of materials will introduce the largest amount of uncertainty in the final result.
3	• Uncertainties should be reduced or eliminated by using standardized measurement
4	protocols (e.g., SOPs) when possible. Special effort should be made to reduce or eliminate
5	systematic uncertainties, or uncertainties that are the same for every measurement simply
6	due to an error in the process. If the systematic uncertainties are reduced to a negligible
7	level, then the random uncertainties, or those uncertainties that occur on a somewhat
8	statistical basis, can be dealt with more easily.
9	• Instrument operators should be trained and experienced with the instruments used to
10	perform the measurements.
11	• Quality assurance/quality control (QA/QC) should be conducted.
12	Uncertainties that cannot be eliminated need to be evaluated such that the effect can be
13	understood and properly propagated into the final data and uncertainty estimates. As previously
14	stated, nonstatistical uncertainties should be minimized as much as possible using good work
15	practices.
16	Overall random uncertainty can be evaluated using the methods described in the following
17	sections:
18	• Section 6.4.2 describes a method for calculating random counting uncertainty.
19	• Section 6.4.3 discusses how to combine this counting uncertainty with other uncertainties from the
20	measurement process using uncertainty propagation.
21	Systematic uncertainty is derived from calibration errors, incorrect yields and efficiencies,
22	nonrepresentative survey designs, and "blunders." It is difficult—and sometimes impossible—to
23	evaluate the systematic uncertainty for a measurement process, but bounds should always be
24	estimated and made small compared to the random uncertainty, if possible. If no other
25	information on systematic uncertainty is available, Currie (NRC 1984) recommends using
26	16 percent as an estimate for systematic uncertainties (1 percent for blanks, 5 percent for
27	baseline, and 10 percent for calibration factors).
28	6.4.2 Statistical Counting Uncertainty
29	When performing an analysis with a radiation detector, the result will have an uncertainty
30	associated with it because of the statistical nature of radioactive decay. To calculate the total
31	uncertainty associated with the counting process, both the background measurement
32	uncertainty and the sample measurement uncertainty must be accounted for. The standard
33	deviation of the net count rate, or the statistical counting uncertainty, can be calculated using
34	Equation 6-17:
σn = √( Cs+b/ts+b² + Cb/tb² )                                                      (6-17)
1	where
2	• σn is the standard deviation of the net count rate result
3	• Cs+b is the number of gross counts (sample)
4	• ts+b is the gross count time
5	• Cb is the number of background counts
6	• tb is the background count time
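A short illustration of Equation 6-17 follows; the function name and example counts and times are assumed for demonstration and are not taken from MARSSIM:

# Minimal sketch of Equation 6-17: standard deviation of a net count rate.
import math

def net_count_rate_and_sigma(gross_counts, gross_time, bkg_counts, bkg_time):
    # Returns the net count rate and its statistical counting uncertainty (same time units)
    net_rate = gross_counts / gross_time - bkg_counts / bkg_time
    sigma = math.sqrt(gross_counts / gross_time**2 + bkg_counts / bkg_time**2)
    return net_rate, sigma

rate, sigma = net_count_rate_and_sigma(560, 5.0, 400, 10.0)   # assumed counts, times in minutes
print(f"net rate = {rate:.1f} +/- {sigma:.1f} cpm")           # ~72.0 +/- 5.1 cpm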
7	6.4.3 Uncertainty Propagation
8	Most measurement data will be converted to different units or otherwise included in a calculation
9	to determine a final result. The standard deviation associated with the final result, or the total
10	uncertainty, can then be calculated. Assuming the individual uncertainties are relatively small,
11	symmetric about zero, and independent of one another, then the total uncertainty for the final
calculated result can be determined using the following error propagation formula:

σy = √( (∂y/∂x1)²·σx1² + (∂y/∂x2)²·σx2² + ... + (∂y/∂xn)²·σxn² )                   (6-18)

where y = f(x1, x2, ... xn) is a formula that defines the calculation of a final result as a function of
the collected data. All variables in this equation (i.e., x1, x2, ... xn) are assumed to have a
measurement uncertainty associated with them and do not include numerical constants. σy is
the standard deviation, or uncertainty, associated with the final result, and σx1, σx2, ... σxn are the
standard deviations, or uncertainties, associated with the parameters x1, x2, ... xn, respectively.
18	Equation 6-18, generally known as the error propagation formula, can be solved to determine
19	the standard deviation of a final result from calculations involving measurement data and their
20	associated uncertainties. The solutions for common calculations along with their uncertainty
21	propagation formulas are included in Table 6.5.
Note: In the equations in Table 6.5, x1 and x2 are measurement values with associated
standard deviations, or uncertainties, equal to σx1 and σx2, respectively. The
symbol c is used to represent a numerical constant that has no associated
uncertainty. The symbol σy is used to denote the standard deviation, or
uncertainty, of the final calculated value, y.
Table 6.5: Common Uncertainty Propagation Equations
Data Calculation | Uncertainty Propagation
y = x1 + x2  or  y = x1 − x2 | σy = √(σx1² + σx2²)
y = x1/x2  or  y = x1 × x2 | σy = y × √((σx1/x1)² + (σx2/x2)²)
y = c × x1, where c is a positive constant | σy = c × σx1
y = x1/c, where c is a positive constant | σy = σx1/c
6.4.4 Reporting Confidence Intervals
Throughout Section 6.4, the term "measurement uncertainty" is used interchangeably with the
term "standard deviation." In this respect, the uncertainty is qualified as numerically identical to
the standard deviation associated with a normally distributed range of values. When reporting a
confidence interval for a value, one provides the range of values that represent a predetermined
level of confidence (i.e., 95 percent). To make this calculation, the final standard deviation—or
total uncertainty oy, as shown in Equation 6-18—is multiplied by a constant factor k,
representing the area under a normal curve as a function of the standard deviation. The values
of k representing various intervals about a mean of normal distributions as a function of the
standard deviation are given in Table 6.6. The following example illustrates the use of this factor
in context with the propagation and reporting of uncertainty values. Example 8 demonstrates
the calculation of the activity of a sample, along with its associated uncertainty.
Table 6.6: Areas Under Various Intervals About the Mean of a Normal Distribution
Interval (µ ± kσ) | Area Under the Interval
µ ± 0.674σ | 0.500
µ ± 1.00σ | 0.683
µ ± 1.65σ | 0.900
µ ± 1.96σ | 0.950
µ ± 2.00σ | 0.954
µ ± 2.58σ | 0.990
µ ± 3.00σ | 0.997
Example 8: Uncertainty Propagation and Confidence Interval
A measurement process with a zero background yields a count result of 28 ± 5 counts in
5 minutes, where the ± 5 counts represents one standard deviation about a mean value of
28 counts. The detection efficiency is 0.1 ± 0.01 counts per disintegration, again representing
one standard deviation about the mean.
Calculate the activity of the sample in disintegrations per minute (dpm), the total measurement
uncertainty, and the 95 percent confidence interval for the result.

The total number of disintegrations is—

y = x1/x2 = (28 counts) / (0.1 counts/disintegration) = 280 disintegrations

Using the equation for error propagation for division, the total uncertainty is—

σy = y × √((σx1/x1)² + (σx2/x2)²) = 280 × √((5/28)² + (0.01/0.1)²) = 57 disintegrations

The activity will then be 280 ÷ 5 minutes = 56 dpm, and the total uncertainty will be
57 ÷ 5 minutes = 11 dpm. (Because the count time is considered to have trivial variance, it
is treated as a constant.)

Referring to Table 6.6, a k value of ±1.96 represents a confidence interval equal to
95 percent about the mean of a normal distribution. Therefore, the half-width of the 95 percent
confidence interval would be 1.96 × 11 dpm = 22 dpm. The final result would be that the 95 percent
confidence interval for the mean activity is 56 ± 22 dpm at a coverage factor of k equal to 1.96.
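The same numbers can be checked with a few lines of Python; this is a sketch of the Example 8 arithmetic only, with variable names chosen for readability rather than taken from MARSSIM:

# Minimal sketch reproducing Example 8 using the Equation 6-18 division rule.
import math

counts, sigma_counts = 28.0, 5.0        # gross counts (zero background)
efficiency, sigma_eff = 0.1, 0.01       # counts per disintegration
count_time_min = 5.0                    # treated as a constant (negligible uncertainty)

disintegrations = counts / efficiency
sigma_dis = disintegrations * math.sqrt((sigma_counts / counts)**2 + (sigma_eff / efficiency)**2)

activity_dpm = disintegrations / count_time_min
sigma_activity = sigma_dis / count_time_min
k = 1.96                                # 95 percent coverage factor from Table 6.6
print(f"{activity_dpm:.0f} +/- {k * sigma_activity:.0f} dpm (95% confidence)")   # 56 +/- 22 dpm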
6.5 Select a Service Provider to Perform Field Data Collection Activities
Often, one of the first steps in designing a survey is to select a service provider to perform field
data collection activities. MARSSIM recommends that this selection take place early in the
planning process so that the service provider can provide information during survey planning
and participate in the design of the survey. Service providers may include in-house experts in
field measurements and sample collection, health physics companies, or environmental
engineering firms, among others. See Section 6.8 and Appendix H for important information
concerning radon service providers.
Potential service providers should be evaluated to determine their ability to perform the
necessary analyses. Consideration should be given to using a field survey company that is
separate from the remediation company to preclude questions of independence and conflict of
1	interest. For large or complex sites, this evaluation may take the form of a pre-award audit. The
2	results of this audit provide a written record of the decision to use a specific service provider.
3	For less complex sites or facilities, a review of the potential service provider's qualifications is
4	sufficient for the evaluation.
5	Six criteria should be reviewed during this evaluation:
6	• Does the service provider possess the validated SOPs, appropriate instrumentation, and
7	trained personnel necessary to perform the field data collection activities, including
8	radon/thoron measurements? Field data collection activities (e.g., scanning surveys, direct
9	measurements, and sample collection) are defined by the data needs identified by the DQO
10	process.
11	• Is the service provider experienced in performing the same or similar data collection
12	activities?
13	• Does the service provider have satisfactory performance evaluation or technical review
14	results? The service provider should be able to provide a summary of QA audits and QC
15	measurement results to demonstrate proficiency. Equipment calibrations should be
16	performed using National Institute of Standards and Technology (NIST) traceable reference
17	radionuclide standards whenever possible.
18	• Is there adequate capacity to perform all field data collection activities within the desired
19	timeframe? This criterion considers the number of trained personnel and quantity of
20	calibrated equipment available to perform the specified tasks.
21	• Does the service provider conduct an internal QC review of all generated data that is
22	independent of the data generators?
23	• Are there adequate protocols for method performance documentation, sample tracking and
24	security (if necessary), and documentation of results?
25	Chapter 10 of MARLAP provides additional guidance related to field and sampling issues that
26	affect laboratory measurements (NRC 2004). Potential service providers should have an active
27	and fully documented quality system in place. The quality management system is typically
28	documented in one or more documents such as a Quality Management Plan (QMP) or Quality
29	Assurance Manual (QAM). This system should enable compliance with the objectives
30	determined by the DQO process in Section 2.3 (see also EPA 2006c). The elements of a
31	quality management system are discussed in Appendix D and EPA QA/G-5 (EPA 2002a).
32	6.6 Select a Measurement Method
33	The combination of a measurement technique with instrumentation, or measurement method, is
34	selected to implement a radiological survey design based on the ability to meet the MQOs (see
35	Section 6.1). Note that measurement techniques are separate from survey designs. A realistic
36	determination of the measurement method uncertainty is critical to demonstrating a method
37	meets the MQOs. Other considerations when selecting a measurement method include—
1	• health and safety concerns (Section 4.10)
2	• required detection capability (Section 6.3)
3	• required measurement method uncertainty and/or required measurement quantifiability
4	(Section 6.4)
5	• DQOs for the project
6	Measurement techniques are discussed in Section 6.6.1. Instrumentation includes a
7	combination of a radiation detector (Section 6.6.2) and a display (Section 6.6.3). Evaluation of
8	a measurement method and comparison to MQOs also requires an understanding of the
9	instrument calibration (Section 6.6.4). Instrumentation for performing radiological
10	measurements is varied and constantly being improved. Section 6.6.5 provides an overview of
11	some commonly used types of instruments and how they might be applied to FSSs. The
12	purpose of the discussions on instrumentation is not to provide an exhaustive list of acceptable
13	instruments, but to provide examples of how instrumentation and measurement techniques can
14	be combined to meet the survey objectives. Additional information on instrumentation is found in
15	Appendix H.
16	Section 6.6.6 provides information on selecting a combination of measurement technique and
17	instrumentation to provide a measurement method. It is necessary that the selected
18	measurement method meet the MQOs established during survey design. Selecting
19	instrumentation can be an iterative process. Certain MQOs (e.g., MDC, required measurement
20	method uncertainty) may not be attainable with some measurement methods. In some cases,
21	selection of a different instrument may be all that is necessary, and in other cases a different
22	measurement technique or an entirely different measurement method will need to be
23	considered. Finally, in cases where the MQOs cannot be met with any available measurement
24	methods, consult with the regulator for acceptable options.
25	6.6.1 Select a Measurement Technique
26	A measurement technique describes how a measurement is performed. The detector can be
27	moved relative to the surface being measured (i.e., scanning), used to perform static
28	measurements at a specified location in the survey unit (i.e., in situ or direct measurements), or
29	a representative portion of the survey unit can be removed and taken to a different location for analysis
30	(i.e., sampling). These three measurement techniques are described in Section 6.6.1.1,
31	Section 6.6.1.2, and Section 6.6.1.3, respectively. Smears are a type of sampling, where a
32	portion of the removable radioactive material is collected from the surface being investigated
33	(Section 6.6.1.4).
34	6.6.1.1 Scanning Techniques
35	Scanning techniques generally consist of moving portable radiation detectors at a specified
36	distance relative to the physical surface of a survey unit at some specified speed to meet the
37	MQOs. Scanning is used in surveys to locate radiation anomalies by searching for variations in
38	readings, indicating gross activity levels that may require further investigation or action.
1	Scanning techniques can more readily provide thorough coverage of a given survey unit and are
2	often relatively quick and inexpensive to perform.
3	Scanning techniques can be used alone to demonstrate that concentrations of radioactive
4	material do not exceed release criteria. These surveys are referred to as scan-only surveys and
5	are discussed in detail in Section 5.3.6.1. Important considerations include that the scan MDC
6	and measurement method uncertainty are sufficient to meet MQOs to both quantify the average
7	concentration of the radioactive material and to identify areas of elevated activity. Scanning
8	equipment coupled with GPS or other locational data is strongly recommended for scan-only
9	surveys.
10	Maintaining the specified distance and speed during scanning can be difficult, especially with
11	hand-held instruments and irregularly shaped surfaces. Variations in source-to-detector
12	distance and scan speed can result in increased total measurement method uncertainty.
13	Determining a calibration function for situations other than surficial radionuclides uniformly
14	distributed on a plane can be complicated and may also contribute to the total measurement
15	method uncertainty.
16	6.6.1.2 Direct Measurements
17	Direct measurements, also referred to as in situ measurements, are taken by placing the
18	instrument in a fixed position at a specified distance from the surface of a given survey unit and
19	taking a discrete measurement for a pre-determined time interval. Direct measurements may be
20	combined with scanning measurements in a FSS design. In situ measurements are used
21	generally to provide an estimate of the average radionuclide concentration or level of activity
22	over a certain area or volume defined by the calibration function. In situ techniques are not
23	typically used to identify or quantify small areas or volumes of elevated radionuclide
24	concentration or activity.
25	Determining a calibration function and the associated MDA/MDC can be complicated and may
26	contribute to the total measurement method uncertainty, especially for situations other than
27	radionuclides uniformly distributed on a plane or through a regularly shaped volume (e.g., a disk
28	or cylinder).
29	However, in applicable situations and with the concurrence of the regulator, direct measurements
30	may be substituted for laboratory analysis. For example, all or a fraction of the systematic soil
31	samples may be measured in situ rather than by traditional laboratory analysis.
32	6.6.1.3 Sampling
33	Sampling consists of removing a representative portion of the survey unit for separate
34	laboratory analysis. This measurement method generally has greater detection capability and
35	less measurement uncertainty than techniques that may be implemented as scanning or direct
36	measurements. Sampling is discussed in more detail in Chapter 7.
1	6.6.1.4 Smears
2	Smears, sometimes referred to as smear tests, swipes, or wipes, are used to provide an
3	estimate of removable radioactive material on the surface. Smears are a type of sample where
4	a filter paper or other substance is used to wipe a specified area of a surface. The filter paper or
5	other substance is then tested for the presence of removable radioactive material. The amount
6	of removable radioactive material transferred to the smear will depend on a number of factors,
7	including the type of swipe or smear material, the method used, the physical and chemical
8	nature of the surface being tested, the surface roughness, and the physical and chemical nature
9	of the radioactive material. These factors result in the need to establish a removal factor that will
10	be subject to some uncertainty. For this reason, although smears with detectable radioactive
11	material provide a qualitative indication of removable radioactive material, caution should be
12	exercised when using any quantitative results from smears. EPA 600/R-11/122 (EPA 2011)
13	provides more detailed guidance on the use of smears.
14	6.6.2 Select Instrumentation—Radiation Detectors
15	The particular capabilities of a radiation detector will establish its potential applications in
16	conducting a specific type of survey. Radiation detectors can be divided into four general
17	classes based on the detector material or the application: (1) gas-filled detectors, (2) scintillation
18	detectors, (3) solid-state detectors, and (4) passive integrating detectors. See Appendix H for
19	more information on the detectors discussed in this section.
20	6.6.2.1 Gas-Filled Detectors
21	Radiation interacts with the fill gas, producing ion-pairs that are collected by charged electrodes.
22	Commonly used gas-filled detectors are categorized as ionization, proportional, or GM, referring
23	to the region of gas amplification in which they are operated. The fill gas varies, but the most
24	common are—
25	• air
26	• argon with a small amount of methane (usually 10 percent methane by mass, referred to as
27	P-10 gas)
28	• neon or helium with a small amount of a halogen (e.g., chlorine or bromine) added as a
29	quenching agent
30	6.6.2.2 Scintillation Detectors
31	Radiation interacts with a solid or liquid medium, causing electronic transitions to excited states
32	in a luminescent material. The excited states decay rapidly, emitting photons that, in turn, are
33	captured by a photomultiplier tube. The ensuing electrical signal is proportional to the scintillator
34	light output, which, under the right conditions, is proportional to the energy loss that produced
35	the scintillation. The most common scintillator materials are Nal(TI), silver-activated zinc sulfide
36	(ZnS(Ag)), cadmium telluride (CdTe), thallium-activated cesium iodide (Csl(TI)), and plastic
organic scintillators. The most traditional radiation survey instruments are the Nal(TI) detector
used for gamma surveys and the ZnS(Ag) detector for alpha surveys.
6.6.2.3	Solid-State Detectors
Radiation interacting with a semiconductor material creates electron-hole pairs that are
collected by a charged electrode. The design and operating conditions of a specific solid-state
detector determines the types of radiations (alpha, beta, or gamma) that can be measured, the
detection limit of the measurements, and the ability of the detector to resolve the energies of the
interacting radiations. The common semiconductor materials in use are germanium, silicon, and
cadmium zinc telluride (CZT), which are available in both n and p types in various
configurations.
Spectrometric techniques using these detectors provide a marked increase in energy resolution
in many situations. When a particular radionuclide contributes only a fraction of the total particle
fluence, photon fluence, or both from all sources (natural or manmade background), gross
measurements are inadequate and nuclide-specific measurements are necessary.
Spectrometry provides the means to discriminate among various radionuclides based on
characteristic energies. Direct gamma spectrometry is particularly effective in field
measurements, because the penetrating nature of the radiation allows one to "see" beyond
immediate radioactive materials on the surface. The availability of large, high-efficiency
germanium detectors permits measurement of low-abundance gamma emitters, such as 238U.
6.6.2.4	Passive Integrating Detectors
An additional class of instruments consists of passive, integrating detectors and associated
reading/analyzing instruments. The integrated ionization is read using a laboratory or hand-held
reader. This class includes thermoluminescent dosimeters (TLDs), optically stimulated
luminescence (OSL) dosimeters, and electret ion chambers (EICs). Because these detectors
are passive and can be exposed for relatively long periods of time, they can provide better
sensitivity for measuring low activity levels, such as free release limits, or for ongoing
surveillance. The ability to read and present data onsite is a useful feature, and such systems
are comparable to direct reading instruments.
The scintillation materials in Section 6.6.2.2 are selected for their prompt fluorescence
characteristics. In another class of inorganic crystals, called TLDs, the crystal material and
impurities are chosen so that the free electrons and holes created following the absorption of
energy from the radiation are trapped by impurities in the crystalline lattice, thus locking the
excitation energy in the crystal. Such materials are used as passive, integrating detectors. After
removal from the exposure area, the TLDs are heated in a reader, which measures the total
amount of light produced when the energy is released. The total amount of light is proportional
to the number of trapped, excited electrons, which, in turn, is proportional to the amount of
energy absorbed from the radiation. The intensity of the light emitted from the
thermoluminescent crystals is thus directly proportional to the radiation dose. TLDs come in a
large number of materials, the most common of which are lithium fluoride (LiF), manganese-
activated calcium fluoride (CaF2:Mn), dysprosium-activated calcium fluoride (CaF2:Dy),
manganese-activated calcium sulfate (CaSO4:Mn), dysprosium-activated calcium sulfate
(CaSO4:Dy), and carbon-activated aluminum oxide (Al2O3:C).
OSL dosimeters are similar in principle to TLDs but use light instead of heat to release the free
electrons and holes trapped when radiation is absorbed. Advantages of OSL dosimeters over
TLDs include lower limits of detection and the fact that OSL dosimeters can be read multiple
times and can be reread later, if necessary.
The EIC consists of a very stable electret (a charged Teflon® disk) mounted inside a small
chamber made of electrically charged plastic. The ions produced inside this air-filled chamber
are collected onto the electret, causing a reduction of its surface charge. The reduction in
charge is a function of the total ionization during a specific monitoring period and the specific
chamber volume. This change in voltage is measured with a surface potential voltmeter.
6.6.3 Display and Recording Equipment
Radiation detectors are connected to electronic devices to (1) provide a source of power for
detector operation and (2) enable measurement of the quantity or quality of the radiation
interactions that are occurring in the detector. The quality of the radiation interaction refers to
the amount of energy transferred to the detector. In many cases, radiation interacts with other
material (e.g., air) before interacting with the detector or only partially interacts with the detector
(e.g., Compton scattering or pair production for photons). Because the energy recorded by the
detector is affected, there is an increased probability of incorrectly identifying the radionuclide.
The most common recording or display device used for portable radiation measurement
systems is a ratemeter. This device provides a display on either an analog meter representing
the number of events occurring over some time period (e.g., cpm), or, in the case of digital
ratemeters, on a digital display. The number of events can also be accumulated over a preset
time period using a digital scaling device. The resulting information from a scaling device is the
total number of events that occurred over a fixed period of time, where a ratemeter display
varies with time and represents a short-term average of the event rate. Determining the average
level on a ratemeter will require judgment by the user, especially when a low frequency of
events results in significant variations in the meter reading. The use of a ratemeter, although
acceptable for certain scanning applications, is discouraged for performing fixed measurements
(e.g., gross alpha/beta).
Pulse height analyzers are specialized electronic devices designed to measure and record the
number of pulses or events that occur at different pulse height levels at specific energies. These
types of devices are used with detectors that produce output pulses proportional in height to the
energy deposited within them by the interacting radiation. They can be used to record only
those events occurring in a detector within a single band of energy or can simultaneously record
the events in multiple energy ranges. In the former case, the equipment is known as a single-
channel analyzer; the latter application is referred to as a multichannel analyzer. Both types of
analyzers can quantify specific radionuclides.
1	6.6.4 Instrument Calibration
2	Calibration refers to the determination and adjustment of the instrument response in a particular
3	radiation field of known intensity. Proper calibration procedures are an essential requisite toward
4	providing confidence in measurements made to demonstrate compliance with remediation
5	criteria. Certain factors, such as energy dependence and environmental conditions, require
6	consideration in the calibration process, depending on the conditions of use of the instrument in
7	the field. Considerations for the use and calibration of instruments include—
8	• the radiation type for which the instrument is designed
9	• the radiation energies within the range of energies for which the instrument is designed
10	• the environmental conditions for which the instrument is designed
11	• the influencing factors, such as magnetic and electrostatic fields, for which the instrument is
12	designed
13	• the orientation of the instrument, such that geotropic (gravity) effects are not a concern
14	• the manner the instrument is used, such that it will not be subject to mechanical or thermal
15	stress beyond that for which it is designed
16	Routine calibration commonly involves the use of one or more sources of a specific radiation
17	type and energy and of sufficient activity to provide adequate field intensities for calibration on
18	all ranges of concern.
19	Actual field conditions under which the radiation detection instrument will be used may differ
20	significantly from those present during routine calibration. Factors that may affect calibration
21	validity include—
22	• The energies of radioactive sources used for routine calibration may differ significantly from
23	those of radionuclides in the field.
24	• The source-detector geometry (e.g., point source or large area distributed source) used for
25	routine calibration may be different than that found in the field.
26	• The source-to-detector distance typically used for routine calibration may not always be
27	achievable in the field.
28	• The condition and composition of the surface being monitored (e.g., sealed concrete,
29	scabbled concrete, carbon steel, stainless steel, and wood) and the presence of overlaying
30	material (e.g., water, dust, oil, paint) may result in a decreased instrument response relative
31	to that observed during routine calibration.
32	If the actual field conditions differ significantly from the calibration assumptions, a special
33	calibration for specific field conditions may be required. Such an extensive calibration need only
34	be done once to determine the effects of the range of field conditions that may be encountered
at the site. If responses under routine calibration conditions and proposed use conditions are
significantly different, a correction factor or chart should be supplied with the instrument for use
under the proposed conditions.
At a minimum, each measurement system (detector/readout combination) should be calibrated
annually and response checked with a source following calibration (ANSI 2013). Instruments
may require more frequent calibration, if recommended by the manufacturer. Re-calibration of
field instruments is also required if an instrument fails a performance check or if it has
undergone repair or any modification that could affect its response. The system should be
calibrated to minimize potential errors during data transmission and re-transmission. The user
may decide to perform calibrations following industry recognized procedures (ANSI 1997, NCRP
1978, NCRP 1985, NCRP 1991, ISO 1988, HPS 1994a, HPS 1994b), or the user can choose to
obtain calibration by an outside service, such as a major instrument manufacturer or a health
physics services organization.
Calibration sources should be traceable to NIST. Where NIST-traceable standards are not
available, standards obtained from an industry-recognized organization (e.g., the New
Brunswick Laboratory for various uranium, thorium, and plutonium standards) may be used.
Calibration of instruments for measurement of residual radioactive material on surfaces should
be performed such that a direct instrument response can be accurately converted to the 4π
(total) emission rate from the source. An accurate determination of activity from a measurement
of count rate above a surface in most cases is an extremely complex task because of the need
to determine appropriate characteristics of the source, including decay scheme, geometry,
energy, scatter, and self-absorption. Proper calibration ensures that systematic errors in
measurements are controlled to help ensure that the MQO for measurement method uncertainty
is met.
The variables that affect instrument response should be understood. Therefore, the calibration
should account for the following factors (where necessary):
•	Calibrations for point and large area source geometries may differ, and both may be
necessary if areas of activity smaller than the probe area and regions of activity larger than
the probe area are present.
•	Calibration should either be performed with the radionuclide of concern or with appropriate
correction factors developed for the radionuclide(s) present based on calibrations with
nuclides emitting radiations similar to the radionuclide of concern.
•	For portable instrumentation, calibrations should account for the substrate of concern
(i.e., concrete, steel) or appropriate correction factors developed for the substrates relative
to the actual calibration standard substrate. This is especially important for beta emitters
because backscatter is significant and varies with the composition of the substrate.
Conversion factors developed during the calibration process should be for the same
counting geometry to be used during the actual use of the detector.
For building surface DCGLs, the level of residual radioactive material is typically expressed in
terms of the activity per unit area, normally Bq/m2 or dpm per 100 cm2. In many facilities,
residual radioactive material on the surface is assessed by converting the instrument response
(in cpm) to surface activity using the overall total efficiency. The total efficiency may be
considered to represent the product of two factors: the instrument (detector) efficiency and the
source efficiency. Use of the total efficiency is not a problem, provided that the calibration
source exhibits characteristics similar to the residual radioactive material on the surface
(i.e., radiation energy, backscatter effects, source geometry, self-absorption). In practice, this is
rarely the case; more likely, instrument efficiencies are determined with a clean, stainless steel
source, and then those efficiencies are used to determine the level of residual radioactive
material on a dust-covered concrete surface. By separating the efficiency into two components,
the surveyor has greater ability to consider the actual characteristics of the residual radioactive
material on the surface.
The instrument efficiency is defined as the ratio of the net count rate of the instrument to the
surface emission rate of a source for a specified geometry. The surface emission rate is defined
as the number of particles of a given type above a given energy emerging from the front face of
the source per unit time. The surface emission rate is the 2π particle fluence that embodies both
the absorption and scattering processes that affect the radiation emitted from the source. Thus,
the instrument efficiency is determined by the ratio of the net count rate and the surface
emission rate.
The instrument efficiency is determined during calibration by obtaining a static count with the
detector over a calibration source that has a traceable activity or surface emission rate. In many
cases, a source emission rate is measured by the manufacturer and certified as NIST traceable.
The source activity is then calculated from the surface emission rate based on assumed
backscatter and self-absorption properties of the source. The maximum value of instrument
efficiency is 1.
The source efficiency is defined as the ratio of the number of particles of a given type emerging
from the front face of a source to the number of particles of the same type created or released
within the source per unit time. The source efficiency takes into account the increased particle
emission due to backscatter effects, as well as the decreased particle emission due to self-
absorption losses. For an ideal source (i.e., no backscatter or self-absorption), the value of the
source efficiency is 0.5. Many real sources will exhibit values less than 0.5, although values
greater than 0.5 are possible, depending on the relative importance of the absorption and
backscatter processes.
Source efficiencies may be determined experimentally. Alternatively, ISO-7503-1 (ISO 1988)
makes recommendations for default source efficiencies. A source efficiency of 0.5 is
recommended for beta emitters with maximum energies above 0.4 megaelectronvolts (MeV).
Alpha emitters and beta emitters with maximum beta energies between 0.15 and 0.4 MeV have
a recommended source efficiency of 0.25. Source efficiencies for some common surface
materials and overlaying material are provided in NUREG-1507 (NRC 1997a).
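To make the relationship between the efficiencies concrete, the sketch below converts a net count rate to surface activity using a total efficiency formed from the instrument and source efficiencies. The function name, probe area, and efficiency values are assumptions for illustration, and the sketch assumes one particle emitted per decay (otherwise the emission yield must be included):

# Minimal sketch: net count rate to surface activity (dpm/100 cm^2) using the
# total efficiency (instrument efficiency x source efficiency). Inputs are assumed.
def surface_activity_dpm_per_100cm2(net_cpm, instrument_eff, source_eff, probe_area_cm2):
    total_eff = instrument_eff * source_eff            # counts per disintegration (one particle per decay assumed)
    activity_under_probe_dpm = net_cpm / total_eff     # activity within the probe's field of view
    return activity_under_probe_dpm * 100.0 / probe_area_cm2

# e.g., 450 net cpm, an instrument efficiency of 0.36, the ISO 7503-1 default source
# efficiency of 0.5 for a high-energy beta emitter, and a 126 cm^2 probe
print(round(surface_activity_dpm_per_100cm2(450, 0.36, 0.5, 126)))   # ~1984 dpm/100 cm^2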
Instrument efficiency may be affected by detector-related factors, such as detector size (probe
surface area); window density thickness; geotropism; instrument response time; counting time
(in static mode); scan rate (in scan mode); and ambient conditions, such as temperature,
pressure, and humidity. Instrument efficiency also depends on solid angle effects, which include
source-to-detector distance and source geometry.
Source efficiency may be affected by source-related factors, such as the type of radiation and
its energy, source uniformity, surface roughness and coverings, and surface composition
(e.g., wood, metal, concrete).
The calibration of gamma detectors for the measurement of photon radiation fields should also
provide reasonable assurance of acceptable accuracy in field measurements. Use of these
instruments for demonstration of compliance with DCGLs is complicated by the fact that most
DCGLs produce exposure rates of at most a few µR/h. Several of the portable survey
instruments currently available in the United States for exposure rate measurements of ~1 µR/h
(often referred to as micro-R meters) have full-scale intensities of ~3-5 µR/h on the first range.
This is below the ambient background for most low radiation areas and most calibration
laboratories. A typical background exposure rate of 10 µR/h gives a background dose rate of
approximately 100 millirem/year (mrem/y). Even on the second range, the ambient background in the
calibration laboratory is normally a significant part of the range and must be taken into
consideration during calibration. The instruments commonly are not energy-compensated and
are very sensitive to the scattered radiation that may be produced by the walls and floor of the
room or additional shielding required to lower the ambient background.
Low-intensity sources and large distances between the source and detector can be used for
low-level calibrations if the appropriate precautions are taken. Field characterization of low-level
sources with traceable transfer standards can be difficult because of the poor signal-to-noise
ratio. To achieve adequate detector signal, the distance between the detector and the source
generally will be as small as possible while still maintaining good geometry (5-7 detector
diameters).
Corrections for scatter can be made using a shadow-shield technique in which a shield of
sufficient density and thickness is placed about midway between the source and the detector to
eliminate virtually all the primary radiation. The dimensions of the shield should be the minimum
required to reduce the primary radiation intensity at the detector location to less than 2 percent
of its unshielded value. The change in reading caused by the shield's removal is attributed to
the primary field from the source at the detector position.
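The arithmetic behind the shadow-shield correction is simple subtraction; the brief sketch below, using purely hypothetical readings, shows the bookkeeping.

# Illustrative sketch (hypothetical readings): shadow-shield scatter correction.
# With the shield in place the reading reflects scatter plus background; removing the
# shield adds back the primary field from the source.
reading_shield_removed = 12.4   # uR/h: source + scatter + background
reading_shield_in_place = 4.1   # uR/h: scatter + background only
primary_field = reading_shield_removed - reading_shield_in_place
print(f"primary field at the detector position = {primary_field:.1f} uR/h")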
In some instruments that produce pulses (GM counters or scintillation counters), the detector
can be separated electronically from the readout electronics, and the detector output can be
simulated with a suitable pulser. Caution must be exercised to ensure that either the high
voltage is properly blocked or that the pulser is designed for this application. If this can be
accomplished, the instrument can first be calibrated on a higher range that is not affected by the
ambient background and in a geometry where scatter is not a problem. The detector is then
disconnected, and the pulser is adjusted to provide the pulse rate that gives the same instrument
response. The pulse rate can then be related to field strength and reduced to give readings on
lower ranges (with the detector disconnected), even below the ambient background. This
technique does not take into account any inherent detector background independent of the
external background.
Ionization chambers are commonly used to measure radiation fields at very low levels. To
obtain the sensitivity necessary to measure these radiation levels, the instruments are
frequently very large and often pressurized. These instruments have some of the same
calibration problems as the more portable micro-R meters described above. The same
precautions (shadow shield) must be taken to separate the response of the instrument to the
source and to scattered radiation. Generally, it is not possible to substitute an electronic pulser
for the radiation field in these instruments.
For energy-dependent gamma scintillation instruments, such as NaI(Tl) detectors, calibration for
the gamma energy spectrum at a specific site may be accomplished by comparing the
instrument response to that of a pressurized ionization chamber, or equivalent detector, at
different locations on the site. Multiple radionuclides with various photon energies may also be
used to calibrate the system for the specific energy of interest.
In the interval between calibrations, the instrument should receive a daily performance check
when in use. In some cases, a performance check following use may also provide valuable
information. This calibration check is merely intended to establish whether the instrument is
operating within certain specified, rather large, uncertainty limits. The initial performance check
should be conducted following the calibration by placing the source in a fixed, reproducible
location and recording the instrument reading. The source should be identified along with the
instrument, and the same check source should be used daily in the same fashion to
demonstrate the instrument's operability when the instrument is in use. Location and other
specific conditions should be recorded as well (e.g., indoor, outdoor, inside trailer, inside
vehicle, on the roof, etc.). For analog readout (count rate) instruments, a variation of
± 20 percent is usually considered acceptable. For instruments that integrate events and display
the total on a digital readout, an acceptable response range of 2 or 3 standard deviations about
the average is typically used. This is achieved by performing a series of repetitive measurements
(10 or more is suggested) of background and check source response and determining the
average and standard deviation of those measurements. From a practical standpoint, a
maximum deviation of ± 20 percent is usually adequate when compared with other uncertainties
associated with the use of the equipment. The amount of uncertainty allowed in the response
checks should be consistent with the level of uncertainty allowed in the final data. Ultimately the
decision maker determines what level of uncertainty is acceptable.
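The sketch below illustrates one way the acceptance range described above could be established from repeated check-source counts; the counts shown are hypothetical, and the choice of 2 or 3 standard deviations (or the ± 20 percent analog window) remains a site-specific decision.

# Illustrative sketch (hypothetical counts): establishing a daily response-check
# acceptance range from a series of repeated check-source measurements.
import statistics

check_source_counts = [10250, 10312, 10198, 10275, 10330,
                       10221, 10289, 10304, 10243, 10267]  # ten one-minute counts

mean = statistics.mean(check_source_counts)
sigma = statistics.stdev(check_source_counts)

k = 3  # 2 or 3 standard deviations, per the planning team's choice
lower, upper = mean - k * sigma, mean + k * sigma
print(f"acceptable daily response: {lower:.0f} to {upper:.0f} counts "
      f"(mean {mean:.0f}, sigma {sigma:.0f})")

# For analog count-rate instruments, a simpler +/- 20 percent window about a
# reference reading is usually adequate:
reference_cpm = 10300.0
print(f"analog window: {0.8 * reference_cpm:.0f} to {1.2 * reference_cpm:.0f} cpm")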
Instrument response, including both the background and check source response of the
instrument, should be tested and recorded at a frequency that ensures the data collected with
the equipment is reliable. For most portable radiation survey equipment, MARSSIM
recommends that a response check be performed at least twice daily when in use—typically
before beginning the day's measurements and again following the conclusion of measurements
on that same day. Additional checks can be performed if warranted by the instrument and the
conditions under which it is used. If the instrument response does not fall within the established
range, the instrument is removed from use until the reason for the deviation can be resolved
and acceptable response again demonstrated. If the instrument fails the post-survey source
check, all data collected during that time period with the instrument must be carefully reviewed
and possibly adjusted or discarded, depending on the cause of the failure. Ultimately, the
frequency of response checks must be balanced with the stability of the equipment being used
under field conditions and the quantity of data being collected. For example, if the instrument
experiences a sudden failure during the day's work due to physical harm, such as a punctured
probe, then the data collected up until that point is probably acceptable even though a post-use
performance check cannot be performed. If no obvious failure occurred but the instrument failed
the post-use response check, then the data collected with that instrument since the last
response check should be viewed with great skepticism and possibly re-collected or randomly
checked with a different instrument. Additional corrective action alternatives are presented in
Appendix D. If recalibration is necessary, acceptable response ranges must be reestablished
and documented.
Record requirements vary considerably and depend heavily on the needs of the user. Even
though Federal and State regulatory agencies may specify their own requirements, the following
records should be considered a minimum:
•	laboratory quality control
o records documenting the traceability of radiological standards
o records documenting the traceability of electronic test equipment
•	record-keeping of instrument calibration files
o date the instrument was received in the calibration laboratory
o initial condition of the instrument, including mechanical condition (e.g., loose or broken
parts, dents, punctures), electrical condition (e.g., switches, meter movement, batteries),
and radiological condition (i.e., presence or absence of contamination)
o calibrator's records, including training records and signature on calibration records
o calibration data, including model and serial number of instrument, date of calibration,
recommended recalibration date, identification of source(s) used, NIST certificate or
standard certificate from the industry-recognized organization (certificate must include
the standard expiration date), "as found" calibration results, and final calibration results—
"as returned" for use
In addition, records of instrument problems, failures, and maintenance can be included and are
useful in assessing performance and identifying possible needs for altered calibration
frequencies for some instruments. Calibration records should be maintained at the facility where
the instruments are used as permanent records and should be available either as hard copies or
in safe computer storage.
1	6.6.5 Select Instrumentation Type—Radiation Survey Equipment
2	This section briefly describes the typical types of instrumentation that may be used to conduct
3	radiological surveys. More detailed information relevant to each type of instrument and
4	measurement method is provided in Appendix H.
5	6.6.5.1 Hand-Held Instruments
6	Hand-held instruments typically are composed of a detection probe (utilizing a single detector)
7	and an electronic instrument to provide power to the detector and to interpret data from the
8	detector to provide a measurement display. They may be used to perform scanning surveys or
9	direct measurements. Hand-held measurements also allow the user the flexibility to constantly
10	vary the source-to-detector geometry for obtaining data from difficult-to-measure areas.
11	6.6.5.2 Large Area Detectors
12	Although hand-held instruments are very useful for making direct measurements and scanning
13	small and/or difficult-to-measure areas, large area detectors provide advantages when the
14	survey unit includes large, easily accessible areas. These detectors may consist of either a
15	single large detector or an array of detectors. Unlike most hand-held detectors, which can only
16	measure the concentration in a small area—typically about 100 cm2—some detectors can
17	measure the concentration in a much larger area. When used in combination with data logging
18	and positioning systems, large-area detectors can be used in place of direct measurements and
19	scanning, if the systems can meet the required MQOs.
20	6.6.5.3 In situ Gamma Spectroscopy
21	Some in situ gamma spectroscopy (ISGS) systems consist of a small hand-held unit that
22	incorporates the detector and counting electronics into a single package. Other ISGS systems
23	consist of a semiconductor detector, a cryostat, a multichannel analyzer electronics package
24	that provides amplification and analysis of the energy pulse heights, and a computer system for
25	data collection and analysis. ISGS systems typically are applied to perform direct
26	measurements, but they may be incorporated into innovative detection equipment setups to
27	perform scanning surveys.
28	6.6.5.4 Laboratory Analysis
29	Laboratory analysis consists of analyzing a portion or sample of the surface soil or building
30	surface. The laboratory will generally have recommendations or requirements concerning the
31	amount and types of samples needed for the analysis of radionuclides or radiations.
32	Communications should be established between the field team collecting the samples and the
33	laboratory analyzing the samples. More information on sampling is provided in Section 7.5.
34	Laboratory analyses can be developed for any radionuclide with any material, given sufficient
35	resources. Laboratory analyses typically require more time to complete than field analyses. The
36	laboratory may be located onsite or offsite. The quality of laboratory data typically is greater
37	than data collected in the field, because the laboratory is better able to control sources of
38	measurement method uncertainty. The planning team should consider the resources available
39	for laboratory analysis (e.g., time, money), the sample collection requirements or
1	recommendations, and the requirements for data quality (e.g., MDC, required measurement
2	method uncertainty) during discussions with the laboratory.
3	6.6.6 Select a Measurement Method
4	Table 6.7 illustrates the potential applications for combinations of the instrument and
5	measurement techniques discussed in Sections 6.6.1 and 6.6.5, respectively. Sampling
6	followed by laboratory analysis is not included in these tables but is considered "GOOD" for all
7	applications. Please note the following qualifiers:
8	• GOOD: The measurement technique is well-suited for performing this application.
9	• FAIR: The measurement technique can adequately perform this application.
10	• POOR: The measurement technique is poorly suited for performing this application.
11	• NA: The measurement technique cannot perform this application.
12	• Few: A relatively small number, usually three or less.
13	• Many: A relatively large number, usually more than three.
14	Table 6.8 illustrates that most measurement techniques can be applied to almost any sample
15	and type of radioactive material. The quantity of samples to be surveyed becomes a major
16	factor for the selection of measurement instruments and techniques described in this chapter.
17	Facilities that conduct routine surveys may benefit financially from investing in measurement
18	instruments and techniques that require less manual labor to conduct disposition surveys. Use
19	of such automated systems will also reduce the potential for ergonomic injuries and attendant
20	costs associated with routine, repetitive surveys performed using hand-held instruments.
21	Hand-held surveying remains the more economical choice for a small area, but as the area
22	increases, the cost of an automated system becomes an increasingly worthwhile investment to
23	reduce manual labor costs associated with surveying. Note that alpha radiation has no survey
24	design options that are described as "GOOD" in Table 6.7. The planning team should revisit
25	earlier DQO selections to see if a different approach is more acceptable.
26	Each type of measurement technique has associated advantages and disadvantages, some of
27	which are summarized in Table 6.8. All the measurement techniques described in this table
28	include source-to-detector geometry and sampling variability as common disadvantages.
Table 6.7: Potential Applications for Instrumentation and Measurement Technique Combinations

Radiation Type        Hand-Held Instruments    In Situ Gamma Spectroscopy

Direct Measurements
  Alpha               FAIR                     NA
  Beta                GOOD                     NA
  Photon              GOOD                     GOOD
  Neutron             GOOD                     NA

Scanning Surveys
  Alpha               POOR                     NA
  Beta                GOOD                     NA
  Photon              GOOD                     GOOD
  Neutron             FAIR                     NA
3	6.7 Data Conversion
4	This section describes methods for converting survey data to appropriate units for comparison
5	to radiological criteria. As stated in Chapter 4, conditions applicable to satisfying
6	decommissioning requirements include determining that any residual radioactive material will
7	not result in individuals' being exposed to unacceptable levels of radiation or radioactive
8	materials.
9	Radiation survey data are usually obtained in units that have no intrinsic meaning relative to
10	DCGLs, such as the number of counts per unit time. For comparison of survey data to DCGLs,
11	the survey data from field and laboratory measurements should be converted to DCGL units.
12	Alternatively, the DCGL can be converted into the same units used to record survey results.
13	6.7.1 Surface Activity
14	When measuring surface activity, it is important to account for the physical surface area
15	assessed by the detector to make probe area corrections and report data in the proper units
16	(i.e., Bq/m2, dpm/100 cm2). This is termed the physical probe area. A common misuse is to
17	make probe area corrections using the effective probe area, which accounts for the amount of
18	the physical probe area covered by a protective screen. Figure 6.2 illustrates the difference
19	between the physical and effective probe areas. The physical probe area is used because the
20	reduced detector response due to the screen is accounted for during instrument calibration as
21	long as the screen is in place during calibration.
Table 6.8: Advantages and Disadvantages of Instrumentation and Measurement Technique Combinations

Instrument: Hand-Held Instruments; Measurement Technique: Direct
Advantages:
• Generally allows flexibility in media to be measured.
• Detection equipment is usually portable.
• Detectors are available to efficiently measure alpha, beta, gamma, x-ray, and neutron radiation.
• Measurement equipment is relatively low cost.
• May provide a good option for small areas.
Disadvantages:
• Requires a relatively large amount of manual labor as a surveying technique; may make surveying large areas labor-intensive.
• Detector windows may be fragile.
• Most do not provide nuclide identification.

Instrument: Hand-Held Instruments; Measurement Technique: Scanning
Advantages:
• Generally allows flexibility in media to be measured.
• Detection equipment is usually portable.
• Detectors are available to efficiently measure beta, gamma, x-ray, and neutron radiation.
• Measurement equipment is relatively low cost.
• May provide a good option for small areas.
Disadvantages:
• Requires a relatively large amount of manual labor as a surveying technique; may make surveying large areas labor-intensive.
• Detector windows may be fragile.
• Most do not provide nuclide identification.
• Incorporates more potential sources of uncertainty than most instrument and measurement technique combinations.
• Potential ergonomic injuries and attendant costs associated with repetitive surveys.

Instrument: Hand-Held Instruments; Measurement Technique: Smear
Advantages:
• Only measurement technique for assessing removable radioactive material.
• Removable radioactive material can be transferred and assessed in a low background counting area.
Disadvantages:
• Instrument background may not be sufficiently low.
• Detectors with a counting sensitive region larger than the smear surface area may require counting adjustments to account for inherent backgrounds associated with other media located under the detector sensitive region.
• The results are not always reproducible and may not be considered quantitative.

Instrument: ISGS; Measurement Technique: Direct
Advantages:
• Provides quantitative measurements with flexible calibration.
• Generally requires a moderate amount of labor.
• May be cost-effective for measuring large areas.
• Good peak resolution with high purity germanium detectors.
Disadvantages:
• Instrumentation may be expensive and difficult to set up and maintain.
• May require liquid nitrogen supply (with ISGS semiconductor systems).
• Size of detection equipment may discourage portability.
• Poor peak resolution with NaI(Tl) detectors.

Instrument: ISGS; Measurement Technique: Scanning
Advantages:
• Provides quantitative measurements with flexible calibration.
• Generally requires a moderate amount of labor.
• May be cost-effective for measuring large areas.
Disadvantages:
• Instrumentation may be expensive and difficult to set up and maintain.
• May require liquid nitrogen supply (with ISGS semiconductor systems).
• Size of detection equipment may discourage portability.

Instrument: Laboratory Analysis; Measurement Technique: Sampling
Advantages:
• Generally provides the lowest MDCs and measurement method uncertainties, even for difficult-to-measure radionuclides.
• Allows positive identification of radionuclides without gammas.
Disadvantages:
• Most costly and time-consuming measurement technique.
• May incur increased overhead costs while personnel are waiting for analytical results.
• Great care must be taken to ensure samples are representative.
• Detector windows may be fragile.

Instrument: Laboratory Analysis; Measurement Technique: Smear
Advantages:
• Only measurement technique for assessing removable radioactive material.
• Removable radioactive material can be transferred and assessed in a low background counting area.
Disadvantages:
• Instrument background may not be sufficiently low.
• Detectors with a counting sensitive region larger than the smear surface area may require counting adjustments to account for inherent backgrounds associated with other media located under the detector's sensitive region.
• The results are not always reproducible and may not be considered quantitative.

Abbreviations: ISGS = in situ gamma spectrometer; MDC = minimum detectable concentration.
Figure 6.2: The Physical Probe Area of a Detector: Gas Flow Proportional Detector with Physical Probe Area of 126 cm2
(The figure shows an 11.2 cm by 11.2 cm detector face: physical probe area = 11.2 cm × 11.2 cm = 126 cm2; area of protective screen = 26 cm2; effective probe area = 100 cm2.)
The conversion of instrument display in counts to surface activity units of Bq/m2 is obtained
using Equation 6-19:
As = (Cs/ts) / (εt × W)    (6-19)
where Cs is the integrated counts recorded by the instrument, ts is the time period over which
the counts were recorded in seconds, εt is the total efficiency of the instrument in counts per
disintegration, effectively the product of the instrument efficiency (εi) and the source efficiency
(εs), and W is the physical probe area in square meters (m2). To convert instrument counts to
conventional surface activity units of decays per minute per 100 cm2, Equation 6-19 can be
modified as shown in Equation 6-20:
As = (Cs/ts) / (εt × (W/100))    (6-20)
where ts is recorded in minutes instead of seconds, and W is recorded in square centimeters
instead of square meters.
Most instruments have background counts associated with the operation of the instrument. A
correction for instrument background can be included in the data conversion calculation, as
shown in Equation 6-21:
As = (Cs/ts - Cb/tb) / (εt × W)    (6-21)
where Cb is the background counts recorded by the instrument, and tb is the time period over
which the background counts were recorded in seconds. Note that the instrument background is
not the same as the measurements in the background reference area used to perform the
statistical tests described in Chapter 8. Equation 6-21 can be modified to provide conventional
surface activity units of decays per minute per 100 cm2, as shown in Equation 6-22:
As = (Cs/ts - Cb/tb) / (εt × (W/100))    (6-22)
where ts and tb are recorded in minutes instead of seconds, and W is recorded in square
centimeters instead of square meters.
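The following Python sketch restates Equations 6-19 through 6-22 for illustration; the counts, times, efficiency, and probe area used in the example calls are hypothetical.

# Illustrative sketch of Equations 6-19 through 6-22 (hypothetical numbers).
def surface_activity_bq_m2(counts, t_s, eff_total, probe_area_m2,
                           bkg_counts=0.0, bkg_t_s=None):
    """Equation 6-19 (or 6-21 with a background term): surface activity in Bq/m2."""
    net_rate = counts / t_s
    if bkg_t_s:
        net_rate -= bkg_counts / bkg_t_s
    return net_rate / (eff_total * probe_area_m2)

def surface_activity_dpm_100cm2(counts, t_min, eff_total, probe_area_cm2,
                                bkg_counts=0.0, bkg_t_min=None):
    """Equation 6-20 (or 6-22 with a background term): surface activity in dpm/100 cm2."""
    net_rate = counts / t_min
    if bkg_t_min:
        net_rate -= bkg_counts / bkg_t_min
    return net_rate / (eff_total * (probe_area_cm2 / 100.0))

# Example: 450 counts in a 1-minute static count, 60 instrument background counts in
# 1 minute, total efficiency 0.20, physical probe area 126 cm2 (0.0126 m2).
print(surface_activity_bq_m2(450, 60.0, 0.20, 0.0126, bkg_counts=60, bkg_t_s=60.0))
print(surface_activity_dpm_100cm2(450, 1.0, 0.20, 126.0, bkg_counts=60, bkg_t_min=1.0))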
The presence of multiple radionuclides at a site requires additional considerations for
demonstrating compliance with a dose- or risk-based requirement. As demonstrated in
Section 4.5.3, a gross activity DCGL should be determined. Example 9 illustrates the
calculation of a weighted efficiency for a gross activity DCGL.
Example 9: Calculation of a Weighted Efficiency for a Gross Activity Derived
Concentration Guideline Level
Consider a site contaminated with cesium-137 (137Cs) and strontium/yttrium-90 (90Sr/Y), with
137Cs representing 60 percent of the total activity. The relative fractions are 0.6 for 137Cs and
0.4 for 90Sr/Y. If the derived concentration guideline level (DCGL) for 137Cs is 8,300
becquerels per square meter (Bq/m2; 5,000 decays per minute [dpm]/100 square centimeters
[cm2]) and the DCGL for 90Sr/Y is 12,000 Bq/m2 (7,200 dpm/100 cm2), the gross activity
DCGL is calculated using Equation 4-10, as shown below:
DCGLGross Activity = 1 / (fCs/DCGLCs + fSr/Y/DCGLSr/Y) = 1 / (0.6/8,300 Bq/m2 + 0.4/12,000 Bq/m2)
= 9,500 Bq/m2 (5,700 dpm/100 cm2)
Note that because the half-lives of 137Cs and 90Sr are approximately the same, the relative
fractions of the two radionuclides will not change because both decay at the same rate. For
other radionuclides, the relative fractions will change over time from the decay of one
radionuclide relative to the other.
It is important to use an appropriately weighted total efficiency to convert from instrument
counts to surface activity units using Equations 6-19 through 6-22. In this example, the
individual efficiencies for 137Cs and 90Sr/Y should first be independently evaluated. The
maximum energies for beta particles for 137Cs and 90Sr/Y are 0.51 MeV and 2.28 MeV,
respectively. The corresponding instrument efficiencies for 137Cs and 90Sr/Y are determined to
be 0.38 and 0.45, respectively. The source efficiency of both nuclides is estimated to be 0.5.
The total efficiencies are calculated by multiplying the source efficiency by the instrument
efficiency, as shown below (see Section 4.12.5.1 for further explanation):
εt,Cs = εs,Cs × εi,Cs = (0.5)(0.38) = 0.19
εt,Sr/Y = εs,Sr/Y × εi,Sr/Y = (0.5)(0.45) = 0.22
The overall efficiency is then determined by weighting each individual radionuclide efficiency
by the relative fraction of each radionuclide (Equation 4-21):
εt = fCs × εt,Cs + fSr/Y × εt,Sr/Y = (0.6)(0.19) + (0.4)(0.22) = 0.20
The overall efficiency is 0.20 (20 percent).
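The Example 9 arithmetic can also be scripted; the minimal Python sketch below reproduces the gross activity DCGL (Equation 4-10) and the weighted total efficiency (Equation 4-21) from the example's values. The variable names are illustrative only.

# Illustrative sketch of the Example 9 calculations (values taken from the example).
fractions   = {"Cs-137": 0.6, "Sr/Y-90": 0.4}
dcgl_bq_m2  = {"Cs-137": 8300.0, "Sr/Y-90": 12000.0}
eff_instr   = {"Cs-137": 0.38, "Sr/Y-90": 0.45}
eff_source  = {"Cs-137": 0.5,  "Sr/Y-90": 0.5}

# Gross activity DCGL: reciprocal of the fraction-weighted sum of 1/DCGL.
gross_dcgl = 1.0 / sum(fractions[n] / dcgl_bq_m2[n] for n in fractions)

# Weighted total efficiency: fraction-weighted sum of the individual total efficiencies.
eff_total = {n: eff_source[n] * eff_instr[n] for n in fractions}
eff_weighted = sum(fractions[n] * eff_total[n] for n in fractions)

print(f"gross activity DCGL ~ {gross_dcgl:.0f} Bq/m2")    # approximately 9,500 Bq/m2
print(f"weighted total efficiency ~ {eff_weighted:.2f}")  # approximately 0.20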
6.7.2 Soil Radionuclide Concentration and Exposure Rates
Analytical procedures, such as alpha and gamma spectrometry, are typically used to determine
the radionuclide concentration in soil in units of Bq/kg. Net counts are converted to soil DCGL
units by dividing by the time, detector or counter efficiency, mass or volume of the sample, and
by the fractional recovery or yield of the chemistry procedure (if applicable). Refer to Chapter 7
for examples of analytical procedures.
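As a simple illustration of that conversion, the sketch below divides hypothetical net counts by the count time, counting efficiency, sample mass, and chemical yield; the numbers are not from this manual.

# Illustrative sketch (hypothetical values): converting net counts from a soil sample
# analysis to a concentration in Bq/kg, following the division described above.
def soil_concentration_bq_kg(net_counts, count_time_s, efficiency,
                             sample_mass_kg, chemical_yield=1.0):
    """Net counts divided by time, counting efficiency, sample mass, and yield."""
    return net_counts / (count_time_s * efficiency * sample_mass_kg * chemical_yield)

# Example: 1,200 net counts in a 3,600 s count, 25 percent counting efficiency,
# 0.5 kg sample, 85 percent chemical recovery.
print(f"{soil_concentration_bq_kg(1200, 3600, 0.25, 0.5, 0.85):.2f} Bq/kg")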
Instruments, such as a pressurized ionization chamber (PIC) or micro-R meter, are used to
measure exposure rate. Typically, exposure rates are read directly in millisieverts per hour
(mSv/h) (International System of Units, SI) or microroentgens per hour (µR/h). A gamma
scintillation detector (e.g., NaI(Tl)) provides data in cpm, and conversion to mSv/h is
accomplished by using site-specific calibration factors developed for the specific instrument
(Section 6.6.4).
In situ gamma spectrometry data may require special analysis routines before the spectral data
can be converted to soil concentration units or exposure rates. Commercially available
measurement systems may use proprietary methods to convert instrument counts to the
reported units. Although it is not always necessary to understand the conversion calculations,
any deviations from assumptions included in the conversion must be accounted for in the
estimate of total measurement uncertainty (Section 6.4). Consult the manufacturer to ensure
the total measurement uncertainty is determined correctly.
6.8 Radon Measurements
There are three radon isotopes in nature: 222Rn (radon) in the 238U decay chain, 220Rn (thoron) in
the 232Th decay chain, and 219Rn (actinon) in the 235U decay chain. 219Rn is the least abundant of
these three isotopes, and because of its short half-life of 4 seconds, it has the least probability
of emanating into the atmosphere before decaying. 220Rn, with a 55-second half-life, is
somewhat more mobile. 222Rn, with a 3.8-day half-life, is capable of migrating through soil or
building material and reaching the atmosphere. Therefore, in most situations, 222Rn should be
the predominant airborne radon isotope. In other instances, thorium-containing building material
or interior building structures where processed thorium ore is present can result in thoron's
becoming the predominant airborne radon isotope.
In some cases, radon may be detected within structures that do not contain residual radioactive
material, and conversely, some structures that contain residual radioactive material may not
yield detectable radon or thoron. Consult with your regulator for the applicability of radon or
thoron measurements as part of a site survey.
Because of the widespread nature of indoor air radon, many states have developed
requirements for certification/qualification of people who perform radon services. Therefore, as
part of the qualifications for the service provider, determine whether the measurement provider
or the laboratory analyzing the measurements is required to be certified by the state or locality
where the work is being performed. State radon contacts can be found at
https://www.epa.gov/radon/find-information-about-local-radon-zones-and-state-contact-
information.
Many techniques have been developed over the years for measuring radon (Jenkins 1986) and
radon progeny in air. In addition, considerable attention is given by the U.S. Environmental
Protection Agency to the measurement of radon and radon progeny in homes (EPA 1992e).
Radon and radon progeny emit alpha and beta particles and gamma rays. Therefore, numerous
techniques can and have been developed for measuring these radionuclides based on detecting
alpha particles, beta particles, or gamma rays, independently or in some combination. This
section contains an overview of information dealing with the measurement of radon and radon
progeny. The information is focused on the measurement of 222Rn; however, the information
may be adapted for the measurement of 219Rn and 220Rn. There are commercial options for
measurements of 220Rn, but options for 219Rn are limited. More consideration should be given to
the two latter radon isotopes because of their short half-lives, which may prevent the shipment
of the sample for off-site laboratory analyses, depending on the sampling and measurement
methods.
Radon concentrations within a fixed structure can vary significantly from one section of the
building to another and can fluctuate over time. If a home has a basement, for instance, it is
usually expected that a higher radon concentration will be found there. Radon primarily enters
buildings that are at negative pressure with respect to the soil. A small increase in the relative
pressure between the soil and the inside of a structure can cause a significant increase in the
amount of radon entering the building from the soil. Many factors play a role in these variations,
but from a practical standpoint it is only necessary to recognize that fluctuations are expected
and that they should be accounted for. Long-term measurement periods (91 days or greater)
are required to determine a mean concentration inside a structure; however, a mean may not be
necessary to determine if a risk-reduction strategy is required. It may also not be necessary if
radon is being used as an indicator of nearby residual radioactive material.
Two analytical end points are of interest when performing radon measurements. The first and
most commonly used is radon concentration, which is stated in terms of activity per unit volume,
in Bq/m3 or picocuries per liter (pCi/L). Although this terminology is consistent with most Federal
requirements, it only implies the potential dose equivalent associated with radon. The second
1	analytical end point is the potential alpha energy concentration (PAEC) (or equilibrium
2	equivalent concentration) of the radon progeny. Radon progeny usually attach very quickly to
3	charged aerosols in the air following creation. The fraction that remains unattached is usually
4	quite small (i.e., 5-10 percent). Because most aerosol particles carry an electrical charge and
5	are relatively massive (> 0.1 µm), they are capable of attaching to the surfaces of the lung.
6	Essentially all dose or risk from radon is associated with alpha decays from radon progeny
7	deposited in the respiratory system. If an investigator is interested in accurately determining the
8	potential dose or risk associated with radon in the air of a room, the radon progeny
9	concentration must be known. It should be noted, however, that various processes remove
10	radon progeny from a room. If the radon is removed or prevented from entering, there will be no
11	risk from decay products.
12	Radon progeny concentrations are usually reported in units of working levels, where one
13	working level is equal to the potential alpha energy associated with the radon progeny in secular
14	equilibrium with 100 pCi/L of radon. One working level is equivalent to 1.3 × 10^5 MeV/L of
15	potential alpha energy. Given a known breathing rate and lung attachment probability, the
16	expected mean lung dose from exposure to a known working level of radon progeny can be
17	calculated.
18	Radon progeny are not usually found in secular equilibrium with radon indoors because of the
19	plating out of the charged aerosols onto walls, furniture, etc. The ratio of 222Rn progeny activity
20	to 222Rn activity usually ranges from 0.2 to as high as 0.8 indoors (NCRP 1988). If only the 222Rn
21	concentration is measured and it is not practical to measure the progeny concentrations, then
22	general practice is to assume a progeny to 222Rn equilibrium ratio for indoor areas. The
23	appropriate regulatory agency should be consulted to determine the appropriate equilibrium
24	factor. This allows one to estimate the expected dose or risk associated with a given radon
25	concentration.
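For illustration, the sketch below applies an assumed equilibrium factor to a hypothetical radon measurement to estimate the progeny concentration in working levels; the equilibrium factor and the measured concentration are assumptions, and the appropriate factor should come from the regulatory agency as noted above.

# Illustrative sketch (hypothetical values): estimating radon progeny concentration from
# a measured 222Rn concentration and an assumed equilibrium factor.
# One working level corresponds to progeny in secular equilibrium with 100 pCi/L of radon.

radon_pci_per_l = 6.0        # measured 222Rn concentration
equilibrium_factor = 0.4     # assumed indoor progeny-to-radon ratio (confirm with regulator)

equivalent_radon = radon_pci_per_l * equilibrium_factor  # equilibrium-equivalent concentration, pCi/L
working_level = equivalent_radon / 100.0                 # working levels
print(f"estimated progeny concentration ~ {working_level:.3f} WL")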
26	In general, the following generic guidelines should be followed when performing radon
27	measurements during site investigations:
28	• The radon measurement method used should be well understood, documented, and carried
29	out in compliance with certification requirements as applicable. Measurements in buildings
30	should conform to current radon standards of practice as required by the regulator.3
31	• Long-term measurements should be considered where short-term (screening) tests are
32	close to guidance levels.
33	• In nonresidential buildings, such as schools and commercial buildings, the impact of the
34	heating, ventilation, and air conditioning system on radon entry should be considered,
35	because radon levels may change significantly between occupied and non-occupied
36	periods.
37	• The impact of variable environmental conditions (e.g., humidity, temperature, dust loading,
38	and atmospheric pressure) on the measurement process should be accounted for when
3 Contact the American Association of Radon Scientists and Technologists for the current radon standards of
practice.
1	necessary. Consideration should be given to effects on both the air collection process and
2	the counting system.
3	• The background response of the detection system should be accounted for.
4	• If it is impractical to measure the potential alpha energy concentration directly, then the
5	progeny activities can be estimated
6	by assuming a specific equilibrium with radon. The concentrations of the radon progeny are
7	then estimated by applying an equilibrium factor to the measured radon concentration. The
8	appropriate regulatory agency should be consulted to determine the appropriate equilibrium
9	factor.
10	For a general overview, a list of common radiation detectors with their usual applications during
11	radon surveys is provided in Table 6.9. Descriptions and costs for specific equipment used for
12	the measurement of radon are contained in Appendix H.
13	The following subsections provide a general overview of common radon sampling and
14	measurement concepts, methods, and terminology.
15	6.8.1 Direct Radon Measurements
16	Direct radon measurements are performed by gathering radon into a chamber and measuring
17	the ionizations produced. A variety of methods have been developed, each making use of the
18	same fundamental mechanics but employing different measurement processes. The first step is
19	to get the radon into a chamber without collecting any radon progeny from the ambient air. A
20	filter is normally used to capture charged aerosols while allowing the radon gas to pass through.
21	Most passive monitors rely on diffusion of the ambient radon in the air into the chamber to
22	establish an equilibrium between the concentrations of radon in the air and in the chamber.
23	Active monitors use some type of air pump system for the air exchange method.
24	Once inside the chamber, the radon decays by alpha emission to form 218Po, which usually
25	takes on a positive charge within thousandths of a second following formation. Some monitor
26	types collect these ionic molecules and subsequently measure the alpha particles emitted by
27	the radon progeny. Other monitor types, such as the electret ion chamber, measure the
28	ionization produced by the decay of radon and progeny in the air within the chamber by directly
29	collecting the ions produced inside the chamber. The electrets are influenced by the ambient
30	gamma radiation level; therefore, correction factors based on the gamma radiation level must be
31	established to adjust the radon results. Simple systems measure the cumulative radon during
32	the exposure period based on the total alpha decays that occur. More complicated systems
33	measure the individual pulse height distributions of the alpha and/or beta radiation emissions
34	and derive the radon plus progeny isotopic concentration in the air volume.
35	Care must be taken to accurately calibrate a system and to understand the effects of humidity,
36	temperature, dust loading, air currents, and atmospheric pressure on the system. These
37	conditions create a small adverse effect on some systems and a large influence on others.
Table 6.9: Radiation Detectors with Applications to Radon Surveys

Integrating/Averaging Methods
  System: Activated charcoal adsorption
  Measures: 222Rn
  Description: Activated charcoal is opened to the ambient air, then gamma counted on a gamma scintillator or in a liquid scintillation counter.
  Application: Measure radon concentration in indoor air.
  Time: 2-7 days
  Remarks: LLD is 0.007-0.04 Bq/L (0.2-1.0 pCi/L). Must wait 3 hours after deployment ends to begin analysis. Not a true integrating device. Must be returned to the laboratory promptly.

  System: Electret ion chamber
  Measures: 222Rn, 220Rn, radon flux
  Description: A charged plastic vessel that can be opened for air to pass through; the voltage drop is then measured.
  Application: Measure radon concentration in air.
  Time: 2-7 days for short term; 91-365 days for long term
  Remarks: Must correct reading for gamma background concentration. Electret is sensitive to extremes of temperature and humidity. Reader is sensitive to temperature changes. LLD is 0.007-0.02 Bq/L (0.2-0.5 pCi/L).

  System: Alpha track detection
  Description: A small piece of special plastic or film inside a small container. Damage tracks from alpha particles are chemically etched and the tracks counted.
  Application: Measure radon concentration in air.
  Time: 91-365 days
  Remarks: LLD is 0.04 Bq/L-d (1 pCi/L-d). Typical deployment is a minimum of 90 days.

  System: Filter/detector unit
  Measures: 222Rn progeny, 220Rn progeny
  Description: Air pump and filtration unit with TLD chips or nuclear track detectors.
  Application: Measure progeny concentration in air.
  Time: 1 day to a few weeks
  Remarks: LLD is 0.0002 Working Level for a week-long measurement.

Continuous Monitors
  System: Ionization chambers, scintillation detectors, solid state detectors
  Measures: 222Rn, 220Rn
  Description: Measure radon concentrations and log results on a real-time basis. May provide spectral data, depending on device.
  Application: Measure radon concentration in air; "sniffer" to locate radon entry points in a building.
  Time: Minutes to a few days
  Remarks: LLD is 150 Bq/m3 (4 pCi/L) in 10 minutes.

Radon Progeny Measurements
  System: Continuous radon progeny monitors
  Measures: 222Rn decay products, 220Rn decay products
  Description: Air pump and solid-state detector.
  Application: Measurement of PAEC. Can calculate equilibrium.
  Time: 1 day to 1 week; "grab samples" for some models
  Remarks: LLD is 20 nJ/m3 (0.001 Working Level).

Short-Term Radon Flux Measurements
  System: Large-area activated charcoal collector
  Measures: 222Rn
  Description: A canister containing activated charcoal is twisted into the surface and left for 24 hours.
  Application: Short-term radon flux measurements.
  Time: 24 hours
  Remarks: LLD is 0.007 Bq/m2-s (0.2 pCi/m2-s).

  System: Electret ion chamber
  Description: Ion chamber has filtered outlets to prevent saturation.
  Application: Short-term radon flux measurements.
  Time: 8-24 hours
  Remarks: Gamma correction for background required. LLD is 0.08 pCi/m2-s.
Abbreviations: Bq = becquerels; L = liters; pCi = picocuries; d = day; LLD = lower limit of detection;
TLD = thermoluminescent dosimeter; m = meter; PAEC = potential alpha energy concentration; nJ = nanoJoules;
s = second.
6.8.1.1	Integrating/Averaging Methods
With integrating methods, measurements are made over a period of days, weeks, or months,
and the device is subsequently read by an appropriate device for the detector media used. The
most common detectors used are activated charcoal adsorbers (good for up to 1 week), electret ion chambers (EICs)
(good for days to weeks), and alpha track plastics (good for weeks to months). Short-term
fluctuations are averaged out, thus making the measurement representative of average
concentration. Results in the form of an average value provide no way to determine the
fluctuations of the radon concentration over the measurement interval. Successive short-term
measurements can be used in place of single long-term measurements to gain better insight
into the seasonal dependence of the radon concentration. Continuous measurements can be
used to get better insight into the time dependence of the radon concentration, which can be of
particular importance in large buildings. Because charcoal allows continual adsorption and
desorption of radon, the method does not give a true integrated measurement over the
exposure time. Use of a diffusion barrier over the charcoal reduces the effects of drafts and high
humidity.
6.8.1.2	Continuous Monitors
Devices that measure direct radon concentrations over successive time increments are
generally called continuous radon monitors. These systems are more complex than integrating
devices, in that they measure the radon concentration and log the results to a data recording
device on a real-time basis. The monitor must take a reading at least once per hour to be
considered a continuous monitor. Continuous radon measurement devices normally allow the
noble gas radon to pass through a filter into a detection chamber where the radon decays and
the radon or the resulting progeny are measured. Common detectors used for real time
measurements are ion chambers, solid state surface barrier detectors, and ZnS(Ag) scintillation
detectors.
A principle of operation for monitors equipped with solid state detectors is an electrostatic
collection of alpha emitters with spectral analysis. The electric field within the sample cell drives
the positively charged ion to the detector where it attaches. The detector converts alpha
radiation directly to an electrical signal proportional in strength to the energy of the alpha particle.
This makes it possible to tell which radionuclide produced the radiation; therefore, one can
distinguish 222Rn from 220Rn. If operated in air with a relatively high radon concentration, these
monitors need to be purged with filtered, fresh dry air with a normal radon concentration before
taking the next series of measurements. Continuous methods offer the advantage of providing
successive, short-term results over long periods of time. This allows the investigator not only to
determine the average radon concentration, but also to analyze the fluctuations in the values
over time. More complicated systems are available that measure the relative humidity and
temperature at the measurement location and log the values along with the radon
concentrations to the data logging device. This allows the investigator to make adjustments, if
necessary, to the resulting data before reporting the results.4
6.8.2 Radon Progeny Measurements
Radon progeny measurements are usually performed by collecting aerosols onto filter paper
and subsequently counting the filter for attached progeny. Some systems pump air through a
filter and then automatically count the filter for alpha or beta emissions. An equivalent but more
labor-intensive method is to collect a sample using an air sampling pump and then count the
filter in standalone alpha or beta counting systems. The measurement system may make use of
any number of different techniques, ranging from full alpha and beta spectrometric analysis of
the filters to simply counting the filter for total alpha and/or beta emissions.
When performing total (gross) counting analyses, the assumption is usually made that the only
radioisotopes in the air are due to 222Rn and its progeny. This uncertainty, which is usually very
small, can be essentially eliminated when performing manual sampling and analysis by
performing a followup measurement of the filter after the radon progeny have decayed to a
negligible level. This value can then be used as a background value for the air. Of course, such
a simple approach is applicable only when 222Rn is the isotope of concern. For 219Rn or 220Rn,
other methods would have to be used.
Time is a significant element in radon progeny measurements. Given any initial equilibrium
condition for the progeny isotopes, an investigator must be able to correlate the sampling and
measurement technique back to the true concentration values. When collecting radon progeny,
the buildup of total activity on the filter increases asymptotically until the activity on the filter
becomes constant (after approximately 3 hours of sampling). At this point, the decay rate of the
progeny atoms on the filter is equal to the collection rate of progeny atoms. This is an important
parameter to consider when designing a radon and progeny sampling procedure. Depending on
sensitivity requirements, collection times can be as short as 5 minutes (Maiello 2010). Although
it is possible to sample for other time periods, the equations developed for the three major 222Rn
progeny concentrations are valid for sampling times of 5 minutes only. Samples should be
shipped and analyzed as expeditiously as possible after sampling is concluded.
Note that the number of charged aerosol particles in the air can affect the results for radon
progeny measurements. If the number of particles is few, as is possible when humidity is low
and a room is very clean, then most of the progeny will not be attached and can plate out on
room surfaces before reaching the sample filter.
4 Depending on the device, these measurements would indicate unexpected disruptions when the device is used for
radon testing. The theory is that opening windows or moving the device would cause a noticeable disruption in the
measurement.
6.8.3 Radon Flux Measurements
Sometimes it is desirable to characterize the source of radon in terms of the rate at which radon
is emanating from a surface—that is, soil, uranium mill tailings, or concrete. One method used
for measuring radon flux is briefly described here.
The measurement of radon flux can be achieved by adsorption onto charcoal using a variety of
methods, such as a charcoal canister or a large-area collector (e.g., 25 cm polyvinyl chloride
[PVC] end cap). The collector is deployed by sealing the collection device onto the surface of
the material to be measured. After 24 hours of exposure, the activated charcoal is removed and
transferred to plastic containers. The amount of radon adsorbed on the activated charcoal is
determined by gamma spectroscopy. Because the area of the surface is well defined and the
deployment period is known, the radon flux (in units of Bq/m2-s or pCi/m2-s) can be calculated.
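For illustration, the sketch below carries out that flux calculation for a hypothetical 25 cm collector; it ignores decay and desorption during deployment, so it is a simplification of the actual analysis, and the activity value is assumed.

# Illustrative sketch (hypothetical values): radon flux from a large-area activated
# charcoal collector, using the collection area and deployment period described above.
def radon_flux_bq_m2_s(adsorbed_activity_bq, collector_area_m2, deployment_time_s):
    """Adsorbed 222Rn activity divided by collection area and deployment period."""
    return adsorbed_activity_bq / (collector_area_m2 * deployment_time_s)

area_m2 = 3.14159 * (0.25 / 2.0) ** 2   # 25 cm diameter end cap, about 0.049 m2
time_s = 24.0 * 3600.0                  # 24-hour deployment
activity_bq = 85.0                      # 222Rn measured on the charcoal by gamma spectroscopy
print(f"radon flux ~ {radon_flux_bq_m2_s(activity_bq, area_m2, time_s):.3f} Bq/m2-s")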
This method is reliable for measuring radon flux in normal environmental situations. However,
care should be taken if an extremely large source of radon is measured with this method. The
collection time should be chosen carefully to avoid saturating the canister with radon. If
saturation is approached, the charcoal loses its ability to adsorb radon, and the collection rate
decreases. Even transporting and handling of a canister that is saturated with radon can be a
problem because of the dose rate from the gamma rays being emitted. One would rarely
encounter a source of radon that is so large that this would become a problem; however, the
potential for it should be recognized. Charcoal also can become saturated with water, which will
affect the adsorption of radon. This can occur in areas with high humidity.
An alternative method for making passive radon flux measurements has been developed
recently using EICs. EIC technology has been widely used for indoor radon measurements. The
passive EIC procedure is similar to the procedures used with large-area activated charcoal
canisters. To provide the data for the background corrections, an additional EIC monitor is
located side-by-side on a radon-impermeable membrane. These data are used to calculate the
net radon flux. The Florida State Bureau of Radiation Protection has compared the results from
measurements of several phosphogypsum flux beds using the charcoal canisters and EICs and
has shown that the two methods give comparable results. The passive method seems to have
overcome some of the limitations encountered in the use of charcoal. The measurement periods
can be extended from hours to several days to obtain a better average, if needed. EIC flux
measurements are not affected by such environmental conditions as temperature, humidity, and
air flow. The measured detection capabilities are comparable to the charcoal method, but—
unlike charcoal—EICs do not become saturated by humidity. Intermediate readings can be
made if needed. In view of the low cost of the EIC reading and analysis equipment, the cost per
measurement can be as much as 50 percent lower than the charcoal method, with additional
savings in time. There are handling and storage requirements associated with these methods
and detectors. For more information, refer to the manufacturer and Appendix H.
6.9 Special Equipment
Various specialized systems have been developed that can be used during the performance of
RSSIs. These range from specially designed quick radiation scanning systems to commercial
global positioning systems (GPSs). The equipment may be designed to detect radiation directly,
1	detect and locate materials associated with the residual radioactive material (e.g., metal
2	containers), or locate the position where a particular measurement is performed. Because these
3	specialized systems are continuously being modified and developed for site-specific
4	applications, it is not possible to provide detailed descriptions of every system. The following
5	sections provide examples of specialized equipment that have been applied to radiation surveys
6	and site investigations.
7	6.9.1 Local Microwave and Sonar Positioning Systems
8	Local microwave or sonar beacons and receivers may provide useful location data in small
9	areas and tree-covered locales. With a number of fixed beacons in place, a roving unit can be
10	oriented and provide location data with similar accuracy and precision as the differential GPS
11	(DGPS). If the beacons are located at known points, the resulting positions can be determined
12	using simple calculations based on the known reference locations of the beacons.
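As an illustration of the "simple calculations" mentioned above, the sketch below estimates a two-dimensional rover position from measured ranges to beacons at known coordinates using a linearized least-squares solution; it is a generic example with made-up coordinates and ranges, not any vendor's positioning algorithm.

# Illustrative sketch (generic, hypothetical data): estimating a rover position in 2-D
# from measured ranges to fixed beacons at known coordinates.
import numpy as np

def trilaterate(beacons, ranges):
    """beacons: (n, 2) known x,y positions; ranges: (n,) measured distances."""
    b = np.asarray(beacons, dtype=float)
    r = np.asarray(ranges, dtype=float)
    # Subtract the first beacon's range equation from the others to remove the
    # quadratic terms, leaving a linear least-squares problem in (x, y).
    A = 2.0 * (b[1:] - b[0])
    rhs = (r[0] ** 2 - r[1:] ** 2) + np.sum(b[1:] ** 2, axis=1) - np.sum(b[0] ** 2)
    position, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return position

beacons = [(0.0, 0.0), (30.0, 0.0), (0.0, 40.0)]  # known beacon locations (m)
ranges = [22.4, 28.3, 22.4]                        # measured distances to the rover (m)
print(trilaterate(beacons, ranges))                # approximately (10, 20)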
13	The logistics of deploying the necessary number of beacons properly and the short range of the
14	signals are the major limitations of the system. In addition, multipathing of signals within wooded
15	areas or interior areas can cause jumps in the positioning data. These systems have
16	applicability both indoors and outdoors but require setting up a site-specific system that may
17	require adjustment for different locations (e.g., each room in a building).
18	6.9.2 Laser Positioning Systems
19	Laser positioning systems are becoming more popular for monitoring positions in three
20	dimensions. The newest systems use reflectorless electronic distance measurement to measure
21	the distance to an object without actually accessing the object. Laser systems use the principles
22	of phase shift and pulse (or time of flight) or a hybrid combination to measure distance. This
23	allows mapping of distant or inaccessible objects in hazardous areas. Using a reflector, or
24	retroprism, to identify the location of a surveyor or detector allows the system to track the
25	location of individual measurements. Laser systems are accurate to within a few millimeters at
26	distances up to 1,000 m. Laser systems require a clear line of sight between the object and the
27	laser. Systems with multiple lasers at different locations can be used to minimize issues with
28	line-of-sight interference.
29	6.9.3 Mobile Systems with Integrated Positioning Systems
30	In recent years, the advent of new technologies has introduced mobile sensor systems for
31	acquiring data that include fully integrated positioning systems. Portable and vehicle-based
32	versions of these systems record survey data while moving over surfaces to be surveyed and
33	simultaneously recording the location data from a roving DGPS receiver, local microwave/sonar
34	receiver, or special retroprism for a laser system. All measurement data are automatically stored
35	and processed with the measurement location for later posting (see Section 8.2.2.2 for a
36	discussion of posting plots) or for mapping the results using a geographic information system.
37	These systems are designed with a variety of detectors for different applications. For example,
38	alpha or beta detectors have been mounted on a robot at a fixed distance over a smooth
39	surface. The robot moves at a predetermined speed over the surface to provide scanning
40	results and records individual direct measurements at predetermined intervals. This type of
1	system not only provides the necessary measurement data, but also reduces the uncertainty
2	associated with human factors. Other systems are equipped with several types of radiation
3	detectors, magnetometers, electromagnetic sensors, or various combinations of multiple
sensors. The limitations of each system should be evaluated on a site-specific basis to determine whether the positioning system, the detector, the transport system, or some combination of these will limit the overall performance of the system.
7	6.9.4 Radar, Magnetometer, and Electromagnetic Sensors
Sensors and sensor systems applicable to the detection and location of buried waste have increased in number, use, and reliability in recent years. These systems are typically
10	applicable to scoping and characterization surveys where the identification of residual
11	radioactive materials in the subsurface is a primary concern. However, the results of these
12	surveys may be used during FSS planning to demonstrate that subsurface materials are not a
13	concern for a particular site or survey unit. Some of the major technologies are briefly described
14	in the following sections.
15	6.9.4.1 Ground Penetrating Radar
16	For most sites, ground-penetrating radar (GPR) is the only instrument capable of collecting
images of buried objects in situ, as compared to magnetometers (Section 6.9.4.2) and electromagnetic sensors (Section 6.9.4.3), which detect the strength of signals as measured at
19	the ground surface. Additionally, GPR is unique in its ability to detect both metallic and
20	nonmetallic (e.g., plastic, glass) containers. GPR techniques are being studied to monitor the
21	performance and stability of soil covers at uranium mill tailings sites and other land disposal
22	sites with earthen covers (Necsoiu and Walter 2015).
23	Subsurface radar detection systems have been the focus of study for locating and identifying
24	buried or submerged objects that otherwise could not be detected. There are two major
25	categories of radar signals: (1) time domain and (2) frequency domain. Time-domain radar uses
26	short impulses of radar-frequency energy directed into the ground being investigated.
27	Reflections of this energy, based on changes in dielectric properties, are then received by the
28	radar. Frequency-domain radar, on the other hand, uses a continuous transmission, where the
29	frequency of the transmission can be varied either stepwise or continuously. The changes in the
30	frequency characteristics due to effects from the ground are recorded. Signal processing, in
31	both cases, converts this signal to represent the location of radar reflectors against the travel
32	time of the return signal. Greater travel time corresponds to a greater distance beneath the
33	surface.
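As an illustration of this travel-time relationship, the sketch below converts a two-way travel time to reflector depth for an assumed soil relative permittivity; both input values are hypothetical and would be site specific in practice.

    import math

    SPEED_OF_LIGHT = 2.998e8  # m/s in vacuum

    def reflector_depth(two_way_travel_s, relative_permittivity):
        """Depth of a radar reflector from the two-way travel time of the return signal."""
        velocity = SPEED_OF_LIGHT / math.sqrt(relative_permittivity)  # wave speed in the soil
        return velocity * two_way_travel_s / 2.0                      # one-way distance

    # Example: a 40-nanosecond return in dry sandy soil (relative permittivity of about 5)
    print(reflector_depth(40e-9, 5.0))  # roughly 2.7 m below the surface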
34	Examples of existing GPR technologies currently being applied to subsurface investigations
35	include the following:
36	• narrow-band radar
37	• ultra-wideband radar
38	• synthetic aperture radar
•	frequency modulated continuous radar
•	polarized radar waves
The major limitation to GPR is the difficulty in interpreting the data, which is often provided in the
form of hazy, "waterfall-patterned" data images requiring an experienced professional to
interpret. GPR performance also varies with soil type: highly conductive clay soils often absorb a large amount of the radar energy and may even reflect it. GPR can be
deployed using ground-based or airborne systems.
6.9.4.2 Magnetometers
Although soil affected by residual radioactive material and most radioactive waste possess no
ferromagnetic properties, the containers commonly used to hold radioactive waste (e.g., 55-
gallon drums) are made from steel. These containers possess significant magnetic
susceptibility, making the containers detectable using magnetometry.
Magnetometers sense the pervasive magnetic field of the Earth. This field, when encountering
an object with magnetic susceptibility, induces a secondary magnetic field in that object. This
secondary field creates an increase or decrease in Earth's ambient magnetic field.
Magnetometers measure these changes in the expected strength of the ambient magnetic field.
Some magnetometers, called "vector magnetometers," can sense both the direction and the
magnitude of these changes. However, for subsurface investigations only the magnitude of the
changes is used.
The ambient magnetic field on Earth averages 55,000 gamma (1 gamma = 1 nanotesla) in strength. The variations caused by the secondary magnetic fields typically range from 10 to 1,000 gamma and average around 100 gamma. Most magnetometers currently in use have a detection capability in the 0.01-0.1 gamma range and can detect these secondary fields.
An alternate magnetometer survey can be performed using two magnetometers in a
gradiometric configuration. This means that the first magnetometer is placed at the ground
surface, and the second is mounted approximately 0.5 m above the first. Data are recorded
from both sensors and compared. When the readings from both detectors are nearly the same,
it implies that there is no significant disturbance in the Earth's ambient magnetic field or that
such disturbances are broad and far away from the gradiometer. When a secondary magnetic
field is induced in an object, it affects one sensor more strongly than the other, producing a
difference in the readings from the two magnetometers. This approach is similar to the use of a
guard detector in anti-coincidence mode in a low-background gas-flow proportional counter in a
laboratory (see Appendix H for a description of gas-flow proportional counters). The
gradiometric configuration filters out the Earth's ambient magnetic field, large-scale variations,
and objects located far from the sensor to measure the effects of nearby objects, all without
additional data processing. Fifty-five-gallon drums buried 5-7 meters below the surface may be
detectable using a magnetometer. At many sites, multiple drums have been buried in trenches
or pits, and detection is straightforward. A single operator carrying a magnetometer with the
necessary electronics in a backpack can cover large areas in a relatively small amount of time.
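The gradiometric filtering described above amounts to differencing the two sensor readings and flagging stations where the difference exceeds a site-specific threshold. The sketch below uses hypothetical readings and an arbitrary threshold purely to illustrate the idea.

    # Each pair is (lower sensor, upper sensor) in gamma (1 gamma = 1 nanotesla).
    readings = [
        (55012.3, 55011.9),  # quiet ground
        (55010.8, 55011.1),  # quiet ground
        (55161.4, 55078.2),  # a nearby ferrous object affects the lower sensor more
        (55009.6, 55009.9),  # quiet ground
    ]

    THRESHOLD = 10.0  # gamma; would be set from the observed background variability

    for station, (lower, upper) in enumerate(readings):
        gradient = lower - upper  # Earth's field and distant sources largely cancel here
        flag = " <- possible buried object" if abs(gradient) > THRESHOLD else ""
        print(f"station {station}: gradient {gradient:+.1f} gamma{flag}")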
1	The limitations on the system are related to the size of the objects and their depth below the
2	surface. Objects that are too small or buried too deep will not provide a secondary magnetic
3	field that can be detected at the ground surface.
4	6.9.4.3 Electromagnetic Sensors
5	Electromagnetic sensors emit an electromagnetic wave, in either a pulsed or continuous wave
6	mode, and then receive the result of that transmission. The result of the transmission is two
7	signals: quadrature and in-phase. As the wave passes through some material other than air, it is
8	slowed down by a resistive medium or sped up by a conductor through dielectric effects. This
9	produces the quadrature signal. If the electromagnetic wave encounters a highly conductive
10	object, it induces a magnetic field in the object. This induced electromagnetic field returns to the
11	sensor as a reflection of the original electromagnetic wave and forms the in-phase signal.
12	The in-phase signal is indicative of the presence, size, and conductivity of nearby objects
13	(e.g., 55-gallon drums), and the quadrature signal is a measure of the dielectric properties of the
14	nearby objects, such as soil. This means that electromagnetic sensors can detect all metallic
15	objects (including steel, brass, and aluminum), such as the metal in waste containers, and
16	sample the soil for changes in properties, such as those caused by leaks of contents.
17	Depths of interest are largely determined by the spacing between the coil used to transmit the
18	primary electromagnetic wave and the receiver used to receive that transmission. The rule of
19	thumb is that the depth of interest is on the order of the distance between the transmitter and
20	the receiver. A system designed with the transmitter and receiver placed tens of meters apart
21	can detect signals from tens of meters below the surface. A system with the transmitter and
22	receiver collocated can detect signals only from depths on the order of the size of the coil, which
23	is typically about 1 m. The limitations of electromagnetic sensors include a lack of clearly
24	defined signals and decreasing resolution of the signal as the distance below the surface
25	increases.
26	6.9.5 Aerial Radiological Surveys
27	Low-altitude aerial radiological surveys are designed to encompass large areas and may be
28	useful in—
29	• providing data to assist in the identification of residual radioactive materials and their
30	corresponding concentrations and spatial distributions
31	• characterizing the nature, extent and impact of the residual radioactive materials
32	The detection capability and data processing procedures provide total area coverage and a
33	detailed definition of the extent of gamma-producing isotopes for a specific area. The gamma
34	radiation spectral data are processed to provide a qualitative and quantitative analysis of the
radionuclides in the survey area. Flyover surveys are flown along a grid pattern (e.g., east-west) of parallel lines at approximately 60-150 m (200-500 feet) above the ground surface.
37	The survey consists of airborne measurements of natural and manmade gamma radiation from
38	the terrain surface. These measurements allow the determination of terrestrial spatial
1	distribution of isotopic concentrations and equivalent gamma exposure rates (e.g., 60Co, 234mPa,
2	and 137Cs). The results are reported as isopleths or data points for the isotopes and are usually
3	superimposed on scale maps of the area.
7 SAMPLING AND PREPARATION FOR LABORATORY MEASUREMENTS
7.1 Introduction
The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) provides three
methods for collecting radiation data while performing a survey: direct measurement, scanning,
and sampling.1 A direct measurement is a radioactivity measurement obtained by placing the
detector near the surface or media being surveyed for a prescribed amount of time. An
indication of the resulting concentration of radioactive material is read out directly. Scanning is
an evaluation technique performed by moving a portable radiation detection instrument at a
constant speed and distance relative to the surface to detect radiation. These measurement
techniques are discussed in Chapter 6. The third method of obtaining radiation data involves
collecting a portion of a larger quantity of media for sample analysis using instrumentation in the
field or in a laboratory (NRC 2004).
Chapter 7 discusses issues involved in collecting and preparing samples for analysis. This
information will assist in communications with the laboratory during survey planning.
Samples should be collected and analyzed by qualified individuals using the appropriate
equipment and procedures. This manual assumes that the samples taken during the survey will
be submitted to a qualified laboratory for analysis. The laboratory should have written
procedures that document its analytical capabilities for the radionuclides of interest and a quality
assurance/quality control (QA/QC) program that documents the compliance of the analytical
process with established criteria. The method used to assay the radionuclides of concern should
be recognized as a factor affecting analysis time.
Commonly used radiation detection and measuring equipment for radiological survey field
applications is described in Chapter 6 and Appendix H. Many of these equipment types may
also be used for laboratory analyses, usually under more controlled conditions that provide lower detection limits, smaller measurement method uncertainties, and a greater ability to identify and quantify individual radionuclides. Laboratory methods often involve combinations of both
chemical and physical preparation and instrument techniques to quantify the low levels
expected in the samples. This chapter provides guidance to assist the MARSSIM user in
selecting appropriate procedures for collecting and handling samples for laboratory analysis.
More detailed information is available in documents listed in the reference section of this
manual.
The development of data quality objectives (DQOs) and measurement quality objectives
(MQOs) to define the data needs for a survey is described in Section 7.2. This includes making
decisions regarding the need to collect samples, the appropriate sampling methods, and QC
measurements implemented as part of the survey process. Section 7.3 describes
1 MARSSIM uses the word "should" as a recommendation, not as a requirement. Each recommendation in this
manual is not intended to be taken literally and applied at every site. MARSSIM's survey planning documentation will
address how to apply the process on a site-specific basis.
1	communication with laboratory personnel during survey planning, before and during sample
2	collection, and during and after sample analysis. Collaborative communication with the
3	laboratory is an important aspect of the sampling and analysis process that helps ensure that
4	survey DQOs are met.
5	The selection of radiochemical laboratories based on their capability to meet technical,
6	reporting, and other contractual requirements is described in Section 7.4. Section 7.5 covers
7	sample collection considerations to enhance the representativeness of the sample, and the
8	establishment of field sample preparation and preservation criteria is included in Section 7.6.
9	Section 7.7 describes the selection of appropriate analytical methods to ensure that the
10	residual radionuclides—either as individual radionuclides or as a total amount of radioactivity as
11	identified in the DQOs and MQOs—can be detected at appropriate levels of sensitivity and that
12	requirements for measurement uncertainties are met. Sample tracking from field activities
13	through laboratory analysis and reporting is covered in Section 7.8. Section 7.9 covers the
14	packaging and shipping of samples containing radioactive material to minimize radiation
15	exposure to the general public and meet applicable Federal and international requirements.
16	7.2 Data Quality Objectives and Measurement Quality Objectives
17	The survey design is developed and documented using the DQO process (see Appendix D).
18	The third step of the DQO process involves identifying the data needs for a survey. One
19	decision that can be made at this step is the selection of either a scan-only survey, direct
20	measurements in conjunction with scanning for performing a survey, or sampling and laboratory
21	analysis in conjunction with scanning as the appropriate data collection strategy for the survey.
22	This chapter addresses the sampling and laboratory analysis of samples.
Because DQOs apply to both sampling and analytical activities, performance objectives are also needed specifically for the analytical process of a
25	project. Chapter 3 of the Multi-Agency Radiological Laboratory Analytical Protocols (MARLAP;
26	NRC 2004) refers to these performance objectives as MQOs. An MQO is a quantitative or
27	qualitative statement of a performance objective or requirement for a performance characteristic
28	of a particular method. The MQOs can be viewed as the analytical portion of the overall project
29	DQOs. In a performance-based approach, the MQOs are used initially for the selection and
30	evaluation of analytical methods and protocols and are subsequently used for the ongoing and
31	final evaluation of the analytical data.
32	7.2.1 Identifying Data Needs
33	The decision maker and the survey planning team need to identify the data needs for the survey
34	being performed, including the following:
35	• type of samples to be collected or measurements to be performed (Chapter 5)
36	• radionuclide(s) of interest (Section 4.3)
37	• number of samples to be collected (Sections 5.3.3-5.3.5)
38	• type and frequency of field QC samples to be collected (Section 4.9)
1	• amount of material to be collected for each sample (Section 4.7.3 and Section 7.5)
2	• sampling locations and frequencies (Section 5.3.7)
3	• standard operating procedures (SOPs) to be followed or developed
4	• measurement method uncertainty (Section 6.4)
5	• target detection capabilities for each radionuclide of interest (Section 6.3)
6	• cost of the methods being evaluated (cost per analysis as well as total cost) (Appendix H)
7	• necessary turnaround time
8	• sample preservation and shipping requirements (Section 7.6)
9	• specific background for each radionuclide of interest (Section 4.5)
10	• derived concentration guideline level (DCGL) for each radionuclide of interest (Section 4.3)
11	• measurement documentation requirements (Section 5.3.11)
12	• sample tracking requirements (Section 7.8)
13	Some of this information will be supplied by subsequent steps in the DQO process, and several
14	iterations of the process may be needed to identify all of the data needs. Consulting with a
15	radiochemist or health physicist may be necessary to properly evaluate the information before
16	deciding what combination of scan-only, direct measurements and scanning, or sampling
17	methods and scanning will be required to meet the DQOs. Surveys might require data from all
18	three collection methods (i.e., sample analysis, direct measurements, and scans) to
19	demonstrate compliance with the applicable regulations and DQOs for the project.
20	7.2.2 Data Quality Indicators
21	Precision, bias, representativeness, comparability, and completeness are some of the historical
22	data quality indicators (DQIs) recommended for quantifying the amount of error in survey data
23	(EPA 2002a). The first two of these DQIs represent different aspects of the measurement
24	method uncertainty (Section 6.4), with precision representing that portion of the measurement
25	method uncertainty due to random uncertainty and bias representing that portion of the
26	measurement method uncertainty due to systematic uncertainty. Together, these DQIs should
27	be considered when selecting a measurement technique (i.e., scanning, direct measurement, or
28	sampling) or an analytical technique (e.g., radionuclide-specific analytical procedure). In some
29	instances, the DQI requirements will help in the selection of an analytical technique. In other
30	cases, the analytical requirements will assist in the selection of appropriate levels for the DQIs.
31	7.2.2.1 Precision
32	Precision is a measure of agreement among replicate measurements of the same property
33	under prescribed similar conditions (ASQC 1995). Precision is determined quantitatively based
34	on the results of replicate measurements (equations are provided in EPA 1990). The number of
replicate analyses needed to determine a specified level of precision for a project is discussed
in Section 4.9. Several types of replicate analyses are available to determine the level of
precision, and these replicates are typically distinguished by the point in the sample collection
and analysis process where the sample is divided. Determining precision by replicating
measurements with results at or near the detection limit of the measurement system is not
recommended, because the measurement uncertainty is usually greater than the desired level
of precision.
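As a simple illustration of how precision might be quantified from replicate results, the sketch below computes the relative percent difference for a duplicate pair and the relative standard deviation for a larger replicate set. The results shown are hypothetical, and the specific formulations used on a project should follow its planning documents (e.g., EPA 1990).

    import statistics

    def relative_percent_difference(a, b):
        """RPD for a pair of duplicate results."""
        return abs(a - b) / ((a + b) / 2.0) * 100.0

    def relative_standard_deviation(results):
        """RSD (coefficient of variation) for three or more replicate results."""
        return statistics.stdev(results) / statistics.mean(results) * 100.0

    duplicate_pair = (12.4, 13.1)                  # hypothetical results, Bq/kg
    replicate_set = [12.4, 13.1, 12.8, 12.2, 13.0]

    print(f"RPD = {relative_percent_difference(*duplicate_pair):.1f}%")
    print(f"RSD = {relative_standard_deviation(replicate_set):.1f}%")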
•	Field replicates2 are two or more separate samples collected at the same point in time and
space (EPA 2002b). These samples, also known as collocated samples, are collected
adjacent to the routine field sample to determine local variability of the radionuclide
concentration. Typically, including for MARSSIM collection, field replicates are collected
about 0.5-3 feet away from the selected sample location. Analytical results from field
replicates can be used to assess site variation, but only in the immediate sampling area.
Field replicates should not be used to assess variability across a site and are not
recommended for assessing error (EPA 1995). Field replicates can be non-blind, single-
blind, or double-blind.
•	Field splits are two or more representative portions taken from a single, usually
homogenized, sample collected in the field (EPA 2002b). These portions are divided into
separate containers and treated as separate samples throughout the remaining sample
handling and analytical processes and are used to assess error associated with sample
heterogeneity, sample methodology, and analytical procedures. Field splits are used when
determining total error for critical samples with residual radioactive material concentrations
near the action level. A minimum of eight field split samples is recommended for valid
statistical analysis (EPA 1995). Counting multiple split samples of a homogenized field sample will decrease the measurement uncertainty for that sample, because the individual count times can be combined into a longer overall count time. In some cases, homogenization
may not be possible (e.g., discrete [small] radioactive particles). Field split samples can be
non-blind, single-blind, or double-blind and are recommended for determining the level of
precision for a radiation survey or site investigation.
•	An analytical laboratory replicate is two or more representative aliquots (portions of a
homogeneous sample, removed for the purpose of analysis or other chemical treatment)
whose independent measurements are used to determine the precision of laboratory
preparation and analytical procedures (NRC 2004). It is used to determine method
precision, but because it is a non-blind sample (i.e., known to the analyst), it can be used by
the analyst only as an internal control tool and not as an unbiased estimate of analytical
precision (EPA 1990).
•	A laboratory instrument replicate is the repeated measurement of a sample that has been
prepared for counting (i.e., laboratory sample preparation and radiochemical procedures
have been completed). It is used to determine precision for the instrument (repeated
measurements of the same sample using the same instrument) and the instrument calibration
2 The term "field replicates" is used in some documents to refer to what this guidance calls field splits.
(repeated measurements of the same sample using different instruments, such as two
different germanium detectors with multichannel analyzers). A laboratory instrument
replicate is generally performed as part of the laboratory QC program and is a non-blind
sample. It is typically used as an internal control tool and not as an unbiased estimate of
analytical precision.
7.2.2.2 Bias
Bias is the systematic or persistent distortion of a measurement process that causes error in
one direction (ASQC 1995). Bias is determined quantitatively based on the analysis of samples
with a known concentration. There are several types of samples with known concentrations. QC
samples used to determine bias should be included as early in the analytical process as
possible.
•	Reference materials are one or more materials or substances with property values that are
sufficiently homogeneous and well established to be used for the calibration of an
apparatus, the assessment of a measurement method, or for assigning values to materials
(ISO 2008). A certified reference material is one for which each certified property value is
accompanied by an uncertainty at a stated level of confidence. Radioactive reference
materials may be available for certain radionuclides (e.g., uranium) in soil, but reference
building materials may not be available. Because reference materials are prepared and
homogenized as part of the certification process, they are rarely available as double-blind
samples. When appropriate reference materials are available (i.e., proper matrix, proper
radionuclide, and proper concentration range), they are recommended for use in
determining the overall bias for a measurement system.
•	Performance evaluation (PE) samples are used to evaluate the overall bias of the analytical
laboratory and detect any error in the analytical method used. These samples are usually
prepared by a third party, using a quantity of analyte(s) known to the preparer but unknown
to the laboratory, and always undergo certification analysis. The analyte(s) used to prepare
the PE sample is the same as the analyte(s) of interest. Laboratory procedural error is
evaluated by the percentage of analyte identified in the PE sample (EPA 1995). PE samples
are recommended for use in determining overall bias for a measurement system when
appropriate reference materials are not available. PE samples are equivalent to matrix
spikes prepared by a third party that undergo certification analysis and can be non-blind,
single-blind, or double-blind.
•	Matrix spike samples are environmental samples that are spiked in the laboratory with a
known concentration of a target analyte(s) to verify percent recoveries. They are used
primarily to check sample matrix interferences but can also be used to monitor laboratory
performance. However, a data set of at least three results is necessary to
distinguish between laboratory performance and matrix interference (EPA 1995). Matrix
spike samples are often replicated to monitor method performance and evaluate error due to
laboratory bias and precision (when four or more pairs are analyzed). These replicates are
often collectively referred to as a matrix spike/matrix spike duplicate (MS/MSD).
Several additional terms are applied to samples prepared by adding a known amount of the
radionuclide of interest to the sample. The majority of these samples are designed to isolate
1	individual sources of bias within a measurement system by preparing pre- and post-operation
2	spikes. For example, the bias from the digestion phase of the measurement system can be
3	determined by comparing the result from a pre-digest spike to the result from a post-digest
4	spike.
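As a simple illustration, bias from spiked samples is often expressed as a percent recovery of the known added activity; comparing recoveries for spikes added before and after a given step isolates the bias contributed by that step. The numbers below are hypothetical.

    def percent_recovery(measured, known):
        """Recovery of a known (spiked or certified) activity, in percent."""
        return measured / known * 100.0

    known_spike_bq = 50.0            # activity added to each spiked sample
    pre_digest_spike_result = 46.5   # spike added before digestion, measured after full analysis
    post_digest_spike_result = 49.2  # spike added after digestion, measured after full analysis

    overall = percent_recovery(pre_digest_spike_result, known_spike_bq)
    post_digestion = percent_recovery(post_digest_spike_result, known_spike_bq)

    print(f"pre-digest spike recovery:  {overall:.0f}%")
    print(f"post-digest spike recovery: {post_digestion:.0f}%")
    # The difference between the two recoveries reflects bias introduced
    # by the digestion phase of the measurement system.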
5	Several types of samples are used to estimate bias caused by contamination during the sample
6	collection or analytical process:
7	• Background samples are collected from a non-impacted area with similar characteristics
8	(either onsite or offsite) where there is little or no chance of migration of the radionuclides of
9	concern (EPA 1995). Background samples are collected from the background reference
10	area (Section 4.5), to determine the natural composition and variability of the soil (especially
11	important in areas with high concentrations of naturally occurring radionuclides). They
12	provide a basis for comparison of radionuclide concentration levels with samples collected
13	from the survey unit when the statistical tests described in Chapter 8 are performed.
14	• Field blanks are samples prepared in the field using certified clean sand, soil, or water and
15	then submitted to the laboratory for analysis (EPA 1995). A field blank is used to evaluate
16	contamination error associated with sampling methodology and laboratory procedures. It
17	also provides information about contaminants that may be introduced during sample
18	collection, storage, and transport. Field blanks are recommended for determining bias
19	resulting from contamination for a radiation survey or site investigation.
20	• Method blanks are analytical control samples used to demonstrate that reported analytical
21	results are not the result of laboratory contamination (ATSDR 2005). A method blank
22	contains distilled or deionized water and reagents and is carried through the entire analytical
23	procedure (laboratory sample preparation, digestion, and analysis).3
24	7.2.2.3 Representativeness
25	Representativeness is a measure of the degree to which data accurately and precisely
26	represent a characteristic of a population parameter at a sampling point (ASQC 1995).
27	Representativeness is a qualitative term that is reflected in the survey design through the
28	selection of a measurement technique (e.g., direct measurement or sampling) and the size of a
29	sample collected for analysis.
30	Sample collection and analysis is typically less representative of true radionuclide
31	concentrations at a specific measurement location than performing a direct measurement. This
32	is caused by the additional steps required in collecting and analyzing samples, such as sample
33	collection, field sample preparation, laboratory sample preparation, and radiochemical analysis.
34	However, direct measurement techniques with acceptable detection limits are not always
35	available. When sampling is required as part of a survey design, it is critical that the sample
36	collection procedures consider representativeness. The location of the sample is determined as
37	described in Section 5.3.7, but the size and content of the sample are usually determined as
3 The method blank is also referred to as a reagent blank. The method blank is generally used as an internal control
tool by the laboratory because it is a non-blind sample.
the sample is collected. Sample size and content are discussed in Section 4.7.3 and
Section 7.5. Sample collection procedures also need to consider the development of the
DCGLs when determining the representativeness of the samples.
7.2.2.4 Comparability
Comparability is a qualitative term that expresses the confidence that two data sets can
contribute to a common analysis and interpolation. Generally, comparability is provided by using
the same measurement system for all analyses of a specific radionuclide. In many cases,
equivalent procedures used within a measurement system are acceptable. For example, using a
liquid-liquid extraction purification step to determine the concentration of plutonium-238 (238Pu)
using alpha spectrometry may be equivalent to using an ion-exchange column purification step.
However, using a gross alpha measurement made with a gas proportional counting system
would not be considered equivalent. Comparability is usually not an issue except in cases
where historical data have been collected and are being compared to current analytical results
or when multiple laboratories are used to provide results as part of a single survey design and
the analytical methods have not been clearly communicated to the laboratories.
7.2.2.5	Completeness
Completeness is a measure of the amount of valid data obtained from the measurement
system, expressed as a percentage of the number of valid measurements that should have
been collected. Valid data are all data that are usable for an intended purpose, including data with no validation qualifiers and data qualified as estimated that remain justifiable for use. For example,
data below the DCGL determined using the Wilcoxon Rank Sum test (DCGLw) that are
estimated with high bias would be considered usable data. Completeness is of greater concern
for laboratory analyses than for direct measurements, because the consequence of having
incomplete data often requires the collection of additional samples. Direct measurements can
usually be repeated easily. The collection of additional samples generally requires a
remobilization of sample collection personnel, which can be expensive. Conditions at the site
may have changed, making it difficult or impossible to collect representative and comparable
samples without repeating the entire survey. On the other hand, if it is simply an analytical
problem and sufficient samples were originally collected, the analysis can be repeated using
archived sample material. Samples collected on a grid to locate areas of elevated activity are
also a concern for completeness. If one sample analysis result is not valid, the survey design
may not be able to detect areas of elevated activity near or at the missing sample location.
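Completeness itself is a straightforward percentage, as the short sketch below illustrates with hypothetical counts.

    planned_measurements = 30
    valid_results = 28  # for example, two sample results rejected during validation

    completeness = valid_results / planned_measurements * 100.0
    print(f"completeness = {completeness:.0f}%")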
7.2.2.6	Other Data Quality Indicators
Several additional data quality indicators that influence the final status survey (FSS) design are
identified as DQOs and MQOs in Section 2.3.1. Many of these (e.g., selection and classification
of survey units, decision error rates, variability in the contaminant concentration, lower bound of
the gray region) are used to determine the number of measurements and are discussed in detail
in Section 5.3. The required detection capability (Section 6.3) and measurement method
uncertainties (Section 6.4) are directly related to the selection of a measurement method and a
radionuclide-specific analytical technique.
Cost, time, best available technology, or other constraints may create situations where the
required detection capabilities or measurement method uncertainties are deemed impracticable.
Under these circumstances, different values may be acceptable. Although laboratories will state
detection limits, these are usually based on ideal or optimistic situations and may not be
achievable under actual measurement conditions. Detection limits and measurement method
uncertainties are subject to variation from sample to sample, instrument to instrument, and
procedure to procedure, depending on sample size, geometry, background, instrument
efficiency, chemical recovery, abundance of the radiations being measured, counting time, self-
absorption in the prepared sample, and interferences from radionuclides or other materials
present in the sample.
7.3 Communications with the Laboratory
Laboratory analyses of samples are generally performed by personnel not directly involved in
the collection of the samples being analyzed. Samples are typically collected by one group
working in the field and analyzed by a second group located in a laboratory. This separation of
tasks can potentially lead to problems based on the lack of communication between the two
groups. For this reason, communications between the project manager, field personnel, and
laboratory personnel are vital to the success of a project. Section 11.2.1 of the MARLAP manual (NRC 2004) provides more information on communications with a laboratory.
7.3.1 Communications During Survey Planning
The radioanalytical laboratory is a valuable resource during survey planning. Information on
available analytical techniques, measurement method uncertainty, method detection capability,
required measurement uncertainties, analytical costs, and turnaround times can easily be
provided by the laboratory. All this information is used to make the decision to perform direct
measurements or collect samples for laboratory measurements. Additional information, such as
required sample size/volume, type of sample container, preservative requirements, and shipping
requirements—including the laboratory's availability for receipt of samples on weekends or
holidays—can be obtained and factored into the survey plan.
Involving the radioanalytical laboratory during survey planning also provides the laboratory with
site-specific information about the project. Information on the radionuclides of interest, possible
chemical and physical form of the residual radioactive material, and mechanism for release of
the residual radioactive material to the environment is used to modify or develop the analytical
method for site-specific conditions, if required. The laboratory should also be provided with the
site-specific action levels (i.e., DCGLs, investigation levels) early in the survey planning
process.
In some cases, it is not practical to select a radioanalytical laboratory early in the survey
process to participate in the survey planning activities. For example, Federal procurement
procedures require that a statement of work (SOW) identifying the tasks to be performed by the
laboratory be developed before selecting a laboratory. Unfortunately, the details of the tasks for
the laboratory to perform are developed during survey planning. This means that the information
provided by the laboratory and used during survey planning will be obtained from another
source, usually a radiochemist or health physicist trained in radiochemistry. The uncertainty
associated with this information and subsequent decisions made based on this information
increases. This may lead to increased costs caused by specifying an unnecessarily expensive
analytical method in the SOW, repeated sampling and analysis of samples that did not meet the
required detection capabilities or measurement method uncertainties because the specified
2	analytical method was not sufficient. In addition, unnecessary or inappropriate analytical
3	methods may be selected by the laboratory because site-specific information concerning the
4	samples was not provided.
5	The laboratory should be consulted when planning the schedule for the survey to ensure that
6	the expected turnaround times can be met based on the projected laboratory workload.
7	7.3.2 Communications Before and During Sample Collection
8	In most situations, the sample collection and shipping containers are supplied by the laboratory;
9	therefore, the laboratory should be notified well in advance of the sampling trip so that these
10	items will be available to the sampling team during the survey.
11	The main purpose of communications with the laboratory during sample collection is to inform
12	the laboratory of modifications to the survey design specified in the planning documents
(e.g., Quality Assurance Project Plan [QAPP] and SOPs). The laboratory should have a copy of the survey design in its possession before samples are collected.
15	Modifications to the survey design are often minor deviations from the SOPs caused by site-
16	specific conditions and usually affect a small number of samples. For example, a rock
17	outcropping covered by a thin layer of soil may restrict the depth of the surface soil sample to
18	5 centimeters (cm; 2 inches [in.]) instead of the 10 cm (4 in.) specified in the SOP. If the mass of
19	the samples collected from this area of the site is one-half the expected sample mass, the
20	laboratory needs to be informed of this deviation from the SOP. Also, the laboratory should be
21	notified of the proper sample handling requirements (i.e., inform the laboratory of the proper
22	handling of gravel in the samples, as some residual radioactive material could be present in the
23	form of small gravel). Finally, the laboratory should be notified of the approximate activity
24	concentrations to be expected in samples to ensure that the laboratory is licensed and equipped
25	to handle samples with elevated activity concentrations.
26	In other situations, there may be an extensive modification to the number or types of samples
27	collected at the site that will affect the analytical methods, detection capabilities, required
28	measurement uncertainties, analytical costs, or even the assumptions used to develop the
29	DCGL. For example, a large portion of the site may have been converted to a parking lot. A
30	large pile of material that may represent the former surface soil will be sampled, as well as soil
31	collected from beneath the parking lot surface. The number of samples to be analyzed has
32	doubled compared to the original SOW.
33	If the expected timing of receipt of samples at the laboratory changes because of sample
34	collection schedule deviations, the laboratory should be notified. Most laboratories require prior
35	notification for samples to be received on weekends.
36	7.3.3 Communications During Sample Analysis
37	The laboratory should communicate with the project manager and field personnel during sample
38	analysis. The laboratory should provide a list of missing or damaged samples as soon as
39	practical after the samples are received. This allows the project manager to determine if
40	resampling is required to replace the missing or damaged samples. The project manager may
1	also request notification from the laboratory when samples are damaged or lost, or if any
2	security seals are missing or broken. Preliminary reports of analytical results may be useful to
3	help direct sampling activities and provide early indications of whether the survey objectives
4	defined by the DQOs and MQOs are being met. However, if preliminary results have not been
5	verified or validated, their usefulness is limited.
6	7.3.4 Communications Following Sample Analysis
7	Following sample analysis, the laboratory will provide documentation of the analytical results as
8	specified in the survey design, which should include the measurement result, measurement
9	uncertainty, minimum detectable activity, and quality control and chain-of-custody (COC)
10	documentation. Laboratory personnel should be available to assist with interpretation, data
11	verification, and data validation.
12	7.4 Selecting a Radioanalytical Laboratory
13	After the decision to perform sampling activities is made, the next step is to select the analytical
14	methods and determine the data needs for these methods. It is advisable to select a
15	radiochemical laboratory as early as possible in the survey planning process so it may be
16	consulted on the analytical methodology and the sampling activities. The laboratory provides
17	information on personnel, capabilities, and current workload that are necessary inputs to the
18	decision-making process. In addition, mobile laboratories can provide on-site analytical
19	capability. Obtaining laboratory or other services may involve a specific procurement process.
20	Federal procurement procedures may require additional considerations beyond the method
21	described here.
22	The procurement of laboratory services usually starts with the development of a request for
23	proposal (RFP) that includes an SOW describing the analytical services to be procured. Careful
24	preparation of the SOW is essential to the selection of a laboratory capable of performing the
25	required services in a technically competent and timely manner.
26	The technical proposals received in response to the procurement RFP must be reviewed by
27	personnel familiar with radioanalytical laboratory operations to select the most qualified offeror.
28	For complicated sites with a large number of laboratory analyses, it is recommended that a
29	portion of this evaluation take the form of a pre-award audit. The provision for this audit must be
30	in the RFP. The results of this audit provide a written record of the decision to use a specific
31	laboratory. Smaller sites or facilities may decide that a review of the laboratory's qualifications is
32	sufficient for the evaluation.
33	Six criteria should be reviewed during this evaluation:
34	• Does the laboratory possess the appropriate well-documented procedures, instrumentation,
35	and trained personnel to perform the necessary analyses? Necessary analyses are defined
36	by the data needs (radionuclide(s) of interest, required measurement uncertainties, and
37	target detection limits) identified by the DQO process.
38	• Is the laboratory experienced in performing similar analyses?
1	• Does the laboratory have satisfactory performance evaluation results from formal monitoring
2	or accreditation programs? The laboratory should be able to provide a summary of QA
3	audits and proof of participation in interlaboratory cross-check programs. Equipment
4	calibrations should be performed using National Institute of Standards and Technology
5	(NIST)-traceable reference radionuclide standards whenever possible.
6	• Is there an adequate capacity to perform all analyses within the desired timeframe? This
7	criterion considers whether the laboratory possesses a radioactive materials-handling
8	license or permit for the samples to be analyzed. Very large survey designs may indicate
9	that more than one analytical laboratory is necessary to meet the survey objectives. If
10	several laboratories are performing analyses as part of the survey, the analytical methods
11	used to perform the analyses should be similar to ensure comparability of results (see
12	Appendix D).
13	• Does the laboratory provide an internal QC review of all generated data that is independent
14	of the data generators?
15	• Are there adequate protocols for method performance documentation and sample security?
16	Providers of radioanalytical services should have an active and fully documented QA program in
17	place, typically via one or more documents, such as a Quality Management Plan, Quality
18	Assurance Manual, or QAPP. This program should comply with the objectives determined by
19	the DQO process in Section 2.3.
Requirements for the QA program (e.g., QAPP), COC requirements, and the numbers of
21	samples to be analyzed should be specified, communicated to the laboratory in writing, and
22	agreed upon. The Sampling and Analysis Plan (SAP), analytical procedures, and the
23	documentation and reporting requirements should also be specified, communicated to the
24	laboratory in writing, and agreed upon. The laboratory's accreditation, if required, should be
25	confirmed by contacting the organization that provided the certification. These topics are
26	discussed in detail in the following sections of this chapter. Additional guidance on obtaining
27	laboratory services can be found in Chapter 5 of the MARLAP manual (NRC 2004).
28	7.5 Sampling
29	This section provides guidance on developing appropriate sample collection procedures for
30	surveys designed to demonstrate compliance with a dose- or risk-based regulation. Sample
31	collection procedures are concerned mainly with ensuring that collected samples are
32	representative of the sample media, are large enough to provide sufficient material to achieve
33	the desired detection limit and required measurement uncertainties, and are consistent with
34	assumptions used to develop the conceptual site model and the DCGLs. Additional
35	considerations for sample collection activities are discussed in Section 4.7.3.
36	Commingled chemical and radioactive waste at a site can influence sample handling and
37	laboratory requirements. Also, the external exposure rates or radioactivity concentration of a
38	specific sample may limit the time that workers will be permitted to remain in intimate contact
39	with the samples or may dictate that smaller samples be taken and special holding areas be
40	provided for collected samples before shipment. These special handling considerations may
41	conflict with the size specifications for the analytical method, normal sampling procedures, or
1	equipment. There is a potential for biasing sampling programs by selecting samples that can be
2	safely handled or legally shipped to support laboratories, which could be a concern for scoping,
3	characterization, and Radiation Survey and Site Investigation (RSSI) samples.
4	7.5.1 Surface Soil
5	The purpose of surface soil sampling is to collect samples that accurately and precisely
represent the radionuclides and their concentrations at the location being sampled. To do this, and to plan for sampling, a decision must be made about the survey design. The selection of a survey design is based on the Historical Site Assessment, results from preliminary surveys (i.e., scoping, characterization, remedial action support), and the objectives of the survey
10	developed using the DQO process. The selection between judgment, random, and systematic
11	survey designs is discussed in Section 5.3.
12	7.5.1.1 Sample Volume
13	The volume of soil collected should be specified in the sample collection procedure. In general,
14	large volumes of soil are more representative than small volumes of soil. In addition, large
15	samples provide sufficient material to ensure that required detection limits can be achieved and
16	that sample reanalysis can be done if there is a problem. However, large samples may cause
17	problems with shipping, storage, and disposal. All of these issues should be discussed with the
18	sample collection team and the analytical laboratory during development of sample collection
19	procedures. In general, surface soil samples range in size from 100 grams up to several
20	kilograms.
The sample collection procedure should also make clear whether it is more important to meet the volume requirement of the survey design or to preserve the surface area the sample represents. Constant
23	volume is related to comparability of the results, while surface area is more closely related to the
24	representativeness of the results. Maintaining a constant surface area and depth for samples
25	collected for a particular survey can eliminate problems associated with different depth profiles.
26	The actual surface area included as part of the sample may be important for estimating the
27	probability of locating areas of elevated concentration.
28	7.5.1.2 Sample Content
29	The material present in the field at the sample location may or may not provide a representative
30	sample. Vegetative cover, soil particle size distribution, inaccessibility, and lack of sample
31	material are examples of problems that may be identified during sample collection. All
32	deviations from the survey design as documented in the SOPs should be recorded as part of
33	the field sample documentation.
34	Sample content is generally defined by the assumptions used to develop the conceptual site
35	model and the DCGLs. A typical agricultural scenario assumes that the top few centimeters of
36	soil are available for resuspension in air; that the top 15 cm (6 in.) are homogenized by
37	agricultural activities (e.g., plowing); that roots can extend down several meters to obtain water
38	and nutrients, depending on the plant; and that external exposure is based on an assumed
39	thickness of contaminated soil (usually at the surface). Depending on the dominant exposure
40	pathways for each radionuclide, this can result in a complicated set of instructions for collecting
41	representative samples. This situation can be further complicated by the fact that the site is not
currently being used for agricultural purposes. For this situation, it is necessary to look at the
analytical results from the preliminary surveys (i.e., scoping, characterization, remedial action
support) to determine the expected depth of residual radioactive material.
In most situations the vegetative cover is not considered part of the surface soil sample and is
removed in the field. It is important that the sample collection procedure clearly indicate what is
and what is not considered part of the sample.
7.5.1.3 Sampling Equipment
The selection of proper sampling equipment is important to ensure that samples are collected in
a reproducible manner and to minimize the potential for cross-contamination. Sampling
equipment generally consists of a tool to collect the sample and a container to place the
collected sample in. Sample tracking begins as soon as the sample is collected, so it may be
necessary to consider security of collected samples required by the objectives of the survey.
Sampling tools are selected based on the type of soil, sample depth, number of samples
required, and training of available personnel. The selection of a sampling tool may also be
based on the expected use of the results. For example, if a soil sample is collected to verify the
depth profile used to develop the calibration for in situ gamma spectrometry, it is important to
preserve the soil core. Table 7.1 lists several examples of tools used for collecting soil samples,
situations where they are applicable, and some advantages and disadvantages involved in their
use.
Samples collected below the surface are useful in establishing the extent of residual radioactive
material in the vertical profile. Understanding the extent of residual radioactive material below
the surface can be helpful in determining remediation alternatives and release criteria. Sample
containers are generally not a major concern for collecting surface soil samples. Polyethylene
bottles with screw caps and wide mouths are recommended and should be new or clean, dry,
and checked for residual radioactive material before reuse. Polyethylene bags are also acceptable; heavy-gauge bags help avoid sample spillage from tears. These
containers are fairly economical, provide easy access for adding and removing samples, and
resist chemicals, breaking, and temperature extremes. Glass containers are also acceptable,
but they are fragile and tend to break during shipment. Metal containers are sometimes used,
but sealing the container can present a problem, and corrosion can be an issue if the samples
are stored for a significant length of time.
Table 7.1: Soil Sampling Equipment⁴

| Equipment | Application | Advantages | Disadvantages/Considerations |
|---|---|---|---|
| Scoop, Trowel, or Post-Hole Digger | Soft surface soil | Inexpensive; easy to use and decontaminate | Trowels with painted surfaces should be avoided |
| Bulb Planter | Soft soil, 0-15 cm (0-6 in.) | Easy to use; uniform diameter and sample volume; preserves soil core | Limited depth capability; can be difficult to decontaminate |
| Soil Coring Device | Soft soil, 0-60 cm (0-24 in.) | Relatively easy to use; preserves soil core | Limited depth capability; can be difficult to decontaminate |
| Thin-Wall Tube Sampler | Soft soil, 0-3 m (0-10 ft) | Easy to use; preserves soil core; easy to decontaminate | Can be difficult to remove cores |
| Split Spoon Sampler | Soil, to bedrock | Excellent depth range; preserves soil core; useful for hard soils | Often used in conjunction with drill rig for obtaining deep cores |
| Shelby Tube Sampler | Soft soil, to bedrock | Excellent depth range; preserves soil core; tube may be used for shipping core to lab | May be used in conjunction with drill rig for obtaining deep cores |
| Bucket Auger | Soft soil, 7.5 cm-3 m (3 in.-10 ft) | Easy to use; good depth range; uniform diameter and sample volume | May disrupt and mix soil horizons greater than 15 cm |
| Hand-Operated Power Auger | Soil, 15 cm-4.5 m (6 in.-15 ft) | Good depth range; generally used in conjunction with bucket auger | Destroys soil core; requires two or more operators; can be difficult to decontaminate |

⁴ Reproduced and adapted from EPA 1995.
Abbreviations: cm = centimeters; in. = inches; m = meters; ft = feet.
1	7.5.2 Building Surfaces
2	Because building surfaces tend to be relatively smooth, and the radioactive material is assumed
3	to be on or near the surface, direct measurements are typically used to provide information on
4	residual radioactive material concentrations. Sometimes, however, it is necessary to collect
5	actual samples of the building material surface for analysis in a laboratory.
6	7.5.2.1 Sample Volume
7	The sample volume collected from building surfaces is usually a less significant DQO concern
8	than the area from which the sample was collected. This is because building surface DCGLs are
9	usually expressed in terms of activity per unit area. It is still necessary to consider the sample
10	volume to account for sample matrix effects that may reduce the chemical recovery, which in
11	turn affects the detection limit.
12	7.5.2.2 Sample Content
13	If residual radioactive material is covered by paint or some other treatment, the underlying
14	surface and the coating itself may contain residual radioactive material. If the residual
15	radioactive material is a pure alpha or low-energy beta emitter, measurements at the surface
16	will probably not be representative of the actual residual activity level. In this case, the surface
17	layer is removed from the known area, such as by using a commercial stripping agent or by
18	physically abrading the surface. The removed coating material is analyzed for activity content
19	and the level converted to appropriate units (i.e., becquerels per square meter [Bq/m2] or disintegrations per
20	minute per 100 square centimeters [dpm/100 cm2]) for comparison with surface activity DCGLs. Direct measurements can
21	be performed on the underlying surface after removal of the coating.
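The conversion to surface-activity units is simple arithmetic. The following is a minimal sketch with hypothetical values (the measured coating activity and sampled area are assumed for illustration, not taken from MARSSIM):

```python
# Minimal sketch (hypothetical values): convert the activity measured in a
# removed surface coating to surface-activity units for comparison with a DCGL.

BQ_TO_DPM = 60.0          # 1 Bq = 60 disintegrations per minute
CM2_PER_M2 = 10_000.0     # 1 m^2 = 10,000 cm^2

sample_activity_bq = 12.5   # total activity found in the removed coating (assumed)
sampled_area_cm2 = 400.0    # area from which the coating was removed (assumed)

bq_per_m2 = sample_activity_bq / (sampled_area_cm2 / CM2_PER_M2)
dpm_per_100cm2 = sample_activity_bq * BQ_TO_DPM / (sampled_area_cm2 / 100.0)

print(f"Surface activity: {bq_per_m2:.1f} Bq/m^2 = {dpm_per_100cm2:.1f} dpm/100 cm^2")
```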
22	Residual radioactive material may be incorporated into building materials, such as pieces of
23	concrete or other unusual matrices. Developing SOPs for collecting these types of samples may
24	involve consultation with the analytical laboratory to help ensure that the objectives of the
25	survey are achieved.
26	The thickness of the layer of building surface to be removed as a sample should be consistent
27	with the development of the conceptual site model and the DCGLs. For most sites, the surface
28	layer will only be the first few millimeters of the material being sampled.
29	7.5.2.3 Sampling Equipment
30	Tools used to provide samples of building surfaces depend on the material to be sampled.
31	Concrete may require chisels, hammers, drills, or other tools specifically designed to remove a
32	thin layer of the surface. Wood surfaces may require using a sander or a saw to collect a
33	sample. Paint may be chemically or physically stripped from the surface.
34	Sample containers for these samples are generally the same as those recommended for soil
35	samples. If chemicals are used to strip paint or other surface materials, the chemical resistance
36	of the container should be considered.
1	7.5.3 Other Media
2	Surface soil and building surfaces are the media addressed in MARSSIM during the FSS
3	design. Other media may be involved and may have been remediated. Data collection activities
4	during preliminary surveys (i.e., scoping, characterization, remedial action support) may involve
5	collecting samples of other media to support the FSS design. Examples of other media that may
6	be sampled include—
7	• subsurface soil
8	• ground water
9	• surface water
10	• sediments
11	• sewers and septic systems
12	• flora and fauna (plants and animals)
13	• airborne particulates
14	• air (gas)
15	7.6 Field Sample Preparation and Preservation
16	Proper sample preparation and preservation are essential parts of any radioactive material
17	sampling program. The sampling objectives should be specified before sampling activities
18	begin. Precise records of sample collection and handling are necessary to ensure that data
19	obtained from different locations or time frames are correctly compared.
20	The appropriateness of sample preparation techniques is a function of the analysis to be
21	performed (EPA 1992a, 1992b). Field sample preparation procedures are a function of the
22	specified analysis and the objectives of the survey. It is essential that these objectives be clearly
23	established and agreed on in the early stages of survey planning (see Section 2.3).
24	7.6.1 Surface Soil
25	Soil and sediment samples, in most protocols, require no field preparation and are not
26	preserved. In some protocols (e.g., if the sample will be analyzed for both volatile organics and
27	radionuclides), cooling of soil samples to 4 degrees Celsius is required during shipping and
28	storage.
29	When replicate samples are prepared in the field, it is necessary to homogenize the sample
30	before separation into replicates. There are standard procedures for homogenizing soil in the
31	laboratory (ASTM 2010), but the equipment required for these procedures may not be available
32	in the field. Simple field techniques, such as cone and quarter, or using a riffle splitter to divide
33	the sample may be appropriate if the sample can be dried (ASTM 2003, EPA 1995). If the
34	sample contains significant amounts of residual water (e.g., forms clumps of soil) and there are
35	no facilities for drying the sample, it is recommended that the homogenization and separation
1	into replicates be performed in a laboratory. It is preferable to use non-blind replicates where the
2	same laboratory prepares and analyzes the replicates rather than use poorly homogenized or
3	heterogeneous samples to prepare replicate samples.
4	7.6.2 Building Surfaces
5	Field preparation and preservation of building and associated materials, including smear
6	samples, is not generally required. Homogenization of samples to prepare replicates is the
7	same for building surface material and soil.
8	7.6.3 Other Media
9	Other media may have significant requirements related to field sample preparation and
10	preservation. For example, water samples may need filtering and acidification. Storage at
11	reduced temperatures (i.e., cooling or freezing) to reduce biological activity may be necessary
12	for some samples. Adding chemical preservatives for specific radionuclides or media may also
13	be required. Guidance on sample preparation and preservation in matrices not discussed above
14	can be found in Chapter 10 of MARLAP.
15	7.7 Analytical Procedures
16	The selection of the appropriate radioanalytical methods is normally made before the
17	procurement of analytical services and is included in the SOW of the request for proposal. The
18	SOW may dictate the use of specific methods or be performance based. Unless there is a
19	regulatory requirement, such as conformance to the EPA drinking water methods (EPA 1980b),
20	the specification of performance-based methodology is encouraged. One reason for this is that
21	a laboratory will usually perform better using the methods it routinely employs, rather than other
22	methods with which it has less experience. The laboratory is also likely to have historical data
23	on performance for methods routinely used by that laboratory. However, the methods employed
24	in a laboratory should be derived from a reliable source.
25	This section briefly describes specific equipment and procedures to be used once the sample is
26	prepared for analysis. The results of these analyses (i.e., the concentrations of radioactive
27	material found in these samples) are the values used to determine the level of residual
28	radioactive material at a site. In a decommissioning effort, the DCGLs are expressed in terms of
29	the concentrations of certain radionuclides. It is of vital importance, therefore, that the analyses
30	be accurate, of adequate sensitivity, and have adequate minimum measurement uncertainties
31	for the radionuclides of concern. The selection of analytical procedures should be coordinated
32	with the laboratory and specified in the survey plan.
33	Analytical methods should be adequate to meet the data needs identified in the DQO process.
34	Consultation with the laboratory performing the analysis is recommended before selecting a
35	course of action. MARSSIM is not intended to limit the selection of analytical procedures; rather,
36	all applicable methods should be reviewed to provide results that meet the objectives of the
37	survey. The decision maker and survey planning team should decide whether routine methods
38	will be used at the site or if non-routine methods may be acceptable.
39	• Routine analytical methods are documented with information on minimum performance
40	characteristics, such as detection limit, minimum measurement uncertainty, precision and
accuracy, and useful range of radionuclide concentrations and sample sizes. Routine
methods may be issued by a recognized organization (e.g., Federal or State agency,
professional organization), published in a refereed journal, or developed by an individual
laboratory. The following are examples of sources for routine methods:
o Methods of Air Sampling and Analysis (Lodge 1988)
o Annual Book of ASTM Standards, Water and Environmental Technology, Volume 11.05,
Environmental Assessment, Risk Management and Corrective Action (ASTM 2012)
o Standard Methods for the Examination of Water and Wastewater (APHA 2012)
o Environmental Measurements Laboratory Procedures Manual (DOE 1997)
o Inventory of Radiological Methodologies for Sites Contaminated With Radioactive
Materials (EPA 2006d)
o Radiochemistry Procedures Manual (EPA 1984)
o ANSI-AARST Radon Protocols/Standards
-	MAH-2014, Protocol for Conducting Measurements of Radon and Radon Decay
Products in Homes (AARST 2014a)
-	MAMF-2017, Protocol for Conducting Measurements of Radon and Radon Decay
Products in Multifamily Buildings (AARST 2017)
-	MALB-2014, Protocol for Conducting Measurements of Radon and Radon Decay
Products in Schools and Large Buildings (AARST 2014b)
-	MS-PC-2015, Performance Specifications for Instrumentation Systems Designed to
Measure Radon Gas in Air (AARST 2015)
-	MS-QA-2019, Radon Measurement Systems Quality Assurance (AARST 2019)
•	Non-routine methods address situations with unusual or problematic matrices; low detection
limits; or new parameters, procedures or techniques. Non-routine methods include
adjustments to routine methods, new techniques published in refereed literature, and
development of new methods.
References that provide information on radiochemical methodology and should be considered in
the methods review and selection process are available from such organizations as—
•	National Council on Radiation Protection and Measurements
•	American Society for Testing and Materials
•	American National Standards Institute
•	Radiological and Environmental Sciences Laboratory, Idaho Falls, Idaho (operated by the
U.S. Department of Energy)
1	• National Urban Security Technology Laboratory, New York City, NY (operated by the
2	U.S. Department of Homeland Security)
3	Equipment vendor literature, catalogs, and instrument manuals are often a source of useful
4	information on the characteristics of radiation detection equipment. Table 7.2 provides a
5	summary of common laboratory methods with estimated detection limits.
6	Analytical procedures in the laboratory consist of several parts that are assembled to produce
7	an SOP for a specific project or sample type. These procedures may include all or only some of
8	the following elements:
9	• laboratory sample preparation
10	• sample dissolution
11	• sample purification
12	• preparation for counting
13	• counting
14	• data reduction
15	7.7.1 Photon-Emitting Radionuclides
16	There is minimal special sample preparation required for counting samples using a germanium
17	detector or a sodium iodide (Nal) detector beyond placing the sample in a known geometry for
18	which the detector has been calibrated. The procedures to be followed to process a raw soil
19	sample to obtain a representative subsample for analysis depend, to some extent, upon the size
20	of the sample, the amount of processing already undertaken in the field, and—most important—
21	the radionuclide of interest (NRC 2004). The samples can be measured as they arrive at the
22	laboratory, or the sample can be dried, ground to a uniform particle size, and mixed to provide a
23	more homogeneous sample if required by the SOPs. Guidance on the preparation of samples,
24	including soil samples, can be found in Chapter 12 of MARLAP (NRC 2004).
25	The samples are typically counted using a germanium detector with a multichannel analyzer or
26	a Nal detector with a multichannel analyzer. Germanium detectors have better resolution and
27	can identify peaks (and the associated radionuclides) at lower concentrations. Nal detectors
28	often have a higher efficiency and are significantly less expensive than germanium detectors.
29	Low-energy photons (i.e., x-rays and gamma rays below 50 kilo-electron volts) can be
30	measured using specially designed detectors with an entrance window made from a very light
31	metal, typically beryllium. Descriptions of germanium and Nal detectors are provided in
32	Appendix H.
33	Data reduction is usually the critical step in measuring photon-emitting radionuclides. Often
34	several hundred individual gamma ray energies are detected within a single sample. Computer
35	software is usually used to identify energy peaks and associate these peaks with their
36	respective radionuclides. The software is also used to correct for the efficiency of the detector
37	and the geometry of the sample and to provide results in terms of concentrations with the
Table 7.2: Typical Measurement Sensitivities for Laboratory Radiometric Procedures

| Sample Type | Radionuclides or Radiation Measured | Procedure | Approximate Detection Capability |
|---|---|---|---|
| Smears (Filter Paper) | Gross alpha | Gas-flow proportional counter; 5 min count / Alpha scintillation detector with scaler; 5 min count | 0.08 Bq (5 dpm) / 0.33 Bq (20 dpm) |
| Smears (Filter Paper) | Gross beta | Gas-flow proportional counter; 5 min count / End window GM with scaler; 5 min count (unshielded detector) | 0.17 Bq (10 dpm) / 1.33 Bq (80 dpm) |
| Smears (Filter Paper) | Low energy beta (3H, 14C, 63Ni) | Liquid scintillation spectrometer; 5 min count | 0.50 Bq (30 dpm) |
| Soil, Sediment | 137Cs, 60Co, 226Ra (214Bi)a, 232Th (228Ac), 235U | Germanium detector (25% relative efficiency) with multichannel analyzer; pulse height analyzer; 500 g sample; 15 min analysis | 0.04-0.1 Bq/g (1-3 pCi/g) |
| Soil, Sediment | 234,235,238U; 238,239,240Pu; 227,228,230,232Th; other alpha emitters | Alpha spectroscopy with multichannel analyzer—pyrosulfate fusion and solvent extraction; surface barrier detector; pulse height analyzer; 1 g sample; 16 h count | 0.004-0.02 Bq/g (0.1-0.5 pCi/g) |
| Water | Gross alpha | Gas-flow proportional counter; 100 ml sample, 200 min count | 0.04 Bq/L (1 pCi/L) |
| Water | Gross beta | Gas-flow proportional counter; 100 ml sample, 200 min count | 0.04 Bq/L (1 pCi/L) |
| Water | 137Cs, 60Co, 226Ra (214Bi), 232Th (228Ac), 235U | Germanium detector (25% relative efficiency) with multichannel analyzer; pulse height analyzer; 3.5 L sample, 16 h count | 0.4 Bq/L (10 pCi/L) |
| Water | 234,235,238U; 238,239,240Pu; 227,228,230,232Th; other alpha emitters | Alpha spectroscopy with multichannel analyzer—solvent extraction; surface barrier detector; pulse height analyzer; 100 ml sample, 30 min count | 0.004-0.02 Bq/L (0.1-0.5 pCi/L) |
| Water | 3H | Liquid scintillation spectrometry; 5 ml sample, 30 min count | 10 Bq/L (300 pCi/L) |

Abbreviations: min = minute; Bq = becquerel; dpm = disintegrations per minute; GM = Geiger-Mueller; g = grams; h = hour; pCi = picocuries; ml = milliliters; L = liters.
a Indicates that a member of the decay series is measured to determine activity level of the parent radionuclide of primary interest.
1	associated uncertainty. It is important that the software be either a well-documented commercial
2	package or thoroughly evaluated and documented before use.
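As a rough illustration of this data-reduction step, the sketch below converts a single net peak area to a soil concentration using the standard counting relationship (activity = net counts / [efficiency × emission probability × count time]). All numerical values are hypothetical, and production spectroscopy software applies additional corrections (decay, geometry, self-absorption, summing) and full uncertainty propagation:

```python
# Minimal sketch (hypothetical values): reduce one gamma-ray peak to a soil
# concentration.  Counting statistics are the only uncertainty carried here.

net_counts = 1500.0        # background-subtracted counts in the peak (assumed)
net_counts_unc = 60.0      # 1-sigma counting uncertainty on the net area (assumed)
efficiency = 0.025         # full-energy peak efficiency for this geometry (assumed)
emission_prob = 0.85       # gamma emission probability per decay (assumed)
live_time_s = 900.0        # 15 min count
sample_mass_g = 500.0

activity_bq = net_counts / (efficiency * emission_prob * live_time_s)
conc_bq_per_g = activity_bq / sample_mass_g
rel_unc = net_counts_unc / net_counts   # relative uncertainty from counting only

print(f"Concentration: {conc_bq_per_g:.3f} +/- {conc_bq_per_g * rel_unc:.3f} Bq/g")
```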
3	7.7.2 Beta-Emitting Radionuclides
4	Laboratory sample preparation is an important step in the analysis of surface soil and other solid
5	samples for beta-emitting radionuclides. The laboratory will typically have a sample preparation
6	procedure that involves drying the sample and grinding the soil so that all particles are smaller
7	than a specified size to provide a homogeneous sample. A small portion of the homogenized
8	sample is usually all that is required for the individual analysis.
9	Once the sample has been prepared, a small portion is dissolved, fused, or leached to provide a
10	clear solution containing the radionuclide of interest. The only way to ensure that the sample is
11	solubilized is to completely dissolve the sample. However, this can be an expensive and time-
12	consuming step in the analysis. In some cases, leaching with strong acids can consistently
13	provide greater than 80 percent recovery of the radionuclide of interest (NCRP 1976) and may
14	be acceptable for certain applications. After dissolution, the sample is purified using a variety of
15	chemical reactions to remove bulk chemical and radionuclide impurities. The objective is to
16	provide a chemically and radiologically pure sample for measurement. Examples of purification
17	techniques include precipitation, liquid-liquid extraction, ion-exchange chromatography,
18	distillation, and electrodeposition. Gross beta measurements may also be performed on material
19	that has not been purified.
20	After the sample is purified, it is prepared for counting. Beta-emitting radionuclides are usually
21	prepared for a specific type of counter in a specified geometry. Some samples can be
22	precipitated and collected on a filter in a circular geometry to provide a homogeneous sample.
23	Other samples can be converted to the appropriate chemical form and diluted to a specified
24	volume in preparation for counting.
25	Measurements of some samples may be performed using a gas-flow proportional counter.
26	Because total beta activity is measured, it is important that the purification step be performed to
27	remove any interfering radionuclides. Other samples can be added to a liquid scintillation
28	cocktail and counted using a liquid scintillation spectrometer. Liquid scintillation spectrometers
29	can be used for low-energy beta-emitting radionuclides, such as 3H and 63Ni. Proper
30	application can decrease the lower limits of detection for all radionuclides; however, as typically
31	applied in many laboratories, liquid scintillation counting yields minimum detectable activities higher than those of standard gas-flow
32	proportional counting. Gas-flow proportional counters have a very low background. Appendix H
33	provides a description of both the gas-flow proportional counter and the liquid scintillation
34	spectrometer.
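As a simple illustration of how the purification (chemical yield) and counting efficiency enter the result, the following sketch reduces a gross beta count to a concentration. The values are hypothetical, and the calculation omits decay and ingrowth corrections:

```python
# Minimal sketch (hypothetical values): gross-to-net conversion for a beta count
# on a gas-flow proportional counter, corrected for detector efficiency and the
# chemical yield (recovery) of the purification step.

sample_counts = 1200.0      # gross counts in the sample count interval (assumed)
background_counts = 300.0   # counts in a matched background interval (assumed)
count_time_min = 60.0
detector_efficiency = 0.40  # counts registered per beta emitted (assumed)
chemical_yield = 0.85       # fraction of the radionuclide recovered (assumed)
aliquot_mass_g = 1.0

net_cpm = (sample_counts - background_counts) / count_time_min
activity_bq = net_cpm / 60.0 / (detector_efficiency * chemical_yield)
conc_bq_per_g = activity_bq / aliquot_mass_g

print(f"{conc_bq_per_g:.3f} Bq/g")
```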
35	7.7.3 Alpha-Emitting Radionuclides
36	Laboratory sample preparation for alpha-emitting radionuclides is similar to that for beta-emitting
37	radionuclides. Sample dissolution and purification tasks are also similar to those performed for
38	beta-emitting radionuclides.
39	Because of the limited penetrating power of alpha particles, the preparation for counting is often
40	a critical step. Gross alpha measurements can be made using small sample sizes with a gas-
41	flow proportional counter, but self-absorption of the alpha particles results in a relatively high
42	detection limit for this technique. Liquid scintillation spectrometers can also be used to measure
43	alpha-emitting radionuclides, but the resolution limits the usefulness of this technique. Most
44	alpha-emitting radionuclides are measured in a vacuum (to limit absorption by air) using alpha
spectroscopy. This method requires that the sample be prepared as a virtually weightless mount
in a specific geometry. Electrodeposition is the traditional method for preparing samples for
counting. This technique provides the highest resolution, but it requires a significant amount of
training and expertise on the part of the analyst to produce a high-quality sample. Precipitation
of the radionuclide of interest on the surface of a substrate is often used to prepare samples for
alpha spectroscopy. While this technique generally produces a spectrum with lower resolution,
the preparation time is relatively short compared to electrodeposition, and personnel can be
trained to prepare acceptable samples relatively quickly.
Alpha-emitting radionuclides are typically measured using alpha spectroscopy. The data
reduction requirements for alpha spectroscopy are greater than those for beta-emitting
radionuclides and similar to those for photon-emitting radionuclides. Alpha spectroscopy
produces a spectrum of alpha particles detected at different energies, but because the sample
is purified before counting, all of the alpha particles come from radionuclides of a single
element. This simplifies the process of associating each peak with a specific radionuclide, but
the lower resolution associated with alpha spectroscopy increases the difficulty of identifying the
peaks. Although commercial software packages are available for interpreting alpha
spectroscopy results, an experienced operator is required to ensure that the software is working
properly.
7.8 Sample Tracking
Sample tracking refers to the identification of samples, their location, and the individuals
responsible for their custody and for transfers of custody. Tracking covers the entire process,
from collection of the samples through analysis and final holding or disposal. It begins when the
sample is taken, at which point the identification and designation of the sample are critical to
relating the analytical result to a site location.
Tracking samples from collection to receipt at the analytical laboratory is normally done through
a COC process and documented on a COC or tracking record. The purpose of the COC record
is to ensure the security and legal defensibility of the sample throughout the process. When
samples are received by the laboratory, internal tracking and COC procedures should be in
place. These procedures should be documented through SOPs that ensure integrity of the
samples. Documentation of changes in the custody of a sample is important. This is especially
true for samples that may be used as evidence to establish compliance with release criteria. In
such cases, there should be sufficient evidence to demonstrate that the integrity of the sample
is not compromised from the time it is collected to the time it is analyzed. During this time, the
sample should either be under the positive control of a responsible individual or secured and
protected from any activity that could change the true value of the results or the nature of the
sample. When this degree of sample handling or custody is necessary, written procedures
should be developed for field operations and for interfacing between the field operations and the
analytical laboratory. This ensures that a clear transfer of the custodial responsibility is well-
documented and that no questions exist as to who is responsible for the sample at any time.
7.8.1 Field Tracking Considerations
Suggestions for field sample tracking are given below:
• Field personnel are responsible for maintaining field logbooks with adequate information to
relate the sample identifier (sample number) to its location and for recording other
information necessary to adequately interpret results of sample analytical data. Logbooks
1	may use electronic records, provided information is stored in a manner that is tamper-proof
2	and retrievable if electronic media fail.
3	• The sample collector is responsible for the care and custody of the samples until they are
4	properly transferred or dispatched. This means that samples are in their possession, under
5	constant observation, or secured. Samples may be secured in a sealed container, locked
6	vehicle, locked room, etc.
7	• Sample labels should be completed for each sample using waterproof ink or in a tamper-
8	proof and recoverable electronic medium.
9	• The survey manager or designee determines whether or not proper custody procedures
10	were followed during the field work and decides if additional sampling is indicated.
11	• If photographs are included as part of the sampling documentation, the name of the
12	photographer, date, time, site location, and site description should be entered sequentially in
13	a logbook as the photos are taken. After the photographs are printed, the prints should be
14	serially numbered. Alternatively, the information can be filed in a tamper-proof electronic
15	form or database.
16	7.8.2 Transfer of Custody
17	Suggestions for transferring sample custody are given below:
18	• All samples leaving the site should be accompanied by a COC record. This record should be
19	standardized and document sample custody transfer from the sampler, often through
20	another person, to the laboratory. The sample collector is responsible for initiating the
21	tracking record. The record should include a list, containing sample designation (number), of
22	the samples in the shipping container and the analysis requested for each sample.
23	• Shipping containers should be sealed and include a tamper-indicating seal that will indicate
24	if the container seal has been disturbed. The method of shipment, courier name, or other
25	pertinent information should be listed in the COC record.
26	• The original COC record should accompany the samples. A copy of the record should be
27	retained by the individual or organization relinquishing the samples. If a sample is to be split
28	and distributed to more than one analytical laboratory, multiple forms will be needed to
29	accompany sample sets.
30	• Discuss the custody objectives with the shipper to ensure that the objectives are met. For
31	example, if the samples are sent by mail and the originator of the sample requires a record
32	that the shipment was delivered, the package should be registered with return receipt
33	requested. If, on the other hand, the objective is to simply provide a written record of the
34	shipment, a certificate of mailing may be a less expensive and appropriate alternative.
35	• The individual receiving the samples should sign, date, and note the time of receipt on the
36	record. The condition of the container and the tamper-indicating seal should be documented
37	on the COC. Any problems with the individual samples, such as a broken container, should
38	be noted on the record.
39	• COC procedures may utilize tamper-proof electronic media, as appropriate.
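Where electronic COC records are used, the record essentially captures the fields listed above. The following is a minimal, hypothetical sketch of such a record (the field names are illustrative only, not a prescribed format; the actual content should be defined in project SOPs):

```python
# Minimal sketch of an electronic chain-of-custody record (hypothetical field
# names; an actual COC form or LIMS schema would be defined in the project SOPs).

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class CustodyTransfer:
    released_by: str
    received_by: str
    timestamp: datetime
    container_condition: str      # e.g., "intact" or "broken container noted"
    seal_intact: bool

@dataclass
class ChainOfCustodyRecord:
    coc_id: str
    samples: List[str]            # sample designations (numbers) in the shipment
    analyses_requested: Dict[str, str]   # sample number -> requested analysis
    shipment_method: str          # courier name or other pertinent information
    transfers: List[CustodyTransfer] = field(default_factory=list)

    def transfer(self, released_by, received_by, condition, seal_intact):
        """Document each change of custody as it occurs."""
        self.transfers.append(CustodyTransfer(
            released_by, received_by, datetime.now(), condition, seal_intact))
```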
1	7.8.3 Radiochemical Holding Times
2	In some circumstances, sample holding times are particularly important. For example, liquid
3	samples are usually analyzed as quickly as possible. This would also be true for short half-lived
4	radionuclides. Minimizing the holding times in these situations can reduce the measurement
5	uncertainties and lower the minimum detectable concentrations.
6	For this reason, the SOW should contain the requirements for radiological holding and sample
7	turnaround times. It is important that the laboratory review the specifications for radionuclides
8	that have short half-lives (less than 30 days), because the method proposed by the laboratory
9	may depend on the required radiological holding time. For very short-lived radionuclides, it is
10	crucial to analyze the samples within the first two half-lives if the MQOs are to be met.
11	Additionally, samples requiring parent decay or progeny ingrowth should be held for sufficient
12	time before counting. Limits for minimum ingrowth and maximum or minimum decay times
13	should be established for all analytical methods where they are pertinent. For ingrowth, the
14	limits should reflect the minimum time required to ensure that the radionuclides of interest have
15	accumulated sufficiently to not adversely affect the detection limit or uncertainty (e.g., holding
16	samples for 226Ra analysis to permit ingrowth of 222Rn). Alternatively, requirements for holding
17	times may be set to ensure that interfering radionuclides have a chance to decay sufficiently.
18	Conversely, the time for radioactive decay of the radionuclides of interest should be limited such
19	that the decay factor does not elevate the minimum detectable concentration or adversely affect
20	the measurement uncertainty (NRC 2004).
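The effect of radiological holding time on a short-lived radionuclide can be estimated directly from the decay law. The following sketch uses hypothetical values to show how the decay factor raises the effective detection limit when counting is delayed:

```python
# Minimal sketch (hypothetical values): effect of holding time on a short-lived
# radionuclide.  The fraction remaining after a delay t is exp(-ln(2) * t / T_half);
# a detection limit referenced to the collection time scales by the inverse of
# that decay factor.

import math

half_life_d = 8.0        # half-life of the radionuclide, days (assumed)
holding_time_d = 20.0    # delay between collection and counting, days (assumed)
mdc_at_count_time = 0.1  # minimum detectable concentration at count time, Bq/g (assumed)

decay_factor = math.exp(-math.log(2) * holding_time_d / half_life_d)
mdc_at_collection = mdc_at_count_time / decay_factor

print(f"Decay factor after {holding_time_d} d: {decay_factor:.3f}")
print(f"Effective MDC referenced to collection time: {mdc_at_collection:.2f} Bq/g")
```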
21	7.8.4 Laboratory Tracking
22	When the samples are received by the laboratory, they are prepared for radiochemical
23	analyses, which includes dividing the sample into aliquots. The tracking and COC
24	documentation within the laboratory become somewhat complicated because several portions
25	of the original sample may exist in the laboratory at a given time. The term "tracking" refers to
26	an accountability process that meets generally acceptable laboratory practices as described by
27	accrediting bodies but is less stringent than a formal COC process. Similar to the COC process,
28	tracking also develops a record of all individuals responsible for the custody and transfer of
29	samples. The use of a computer-based laboratory information management system can greatly
30	assist in tracking samples and fractions through the analytical system.
31	The minimal laboratory tracking process consists of the following:
32	• transfer of custody on receipt of the samples (original COC form is retained by the laboratory
33	and submitted with the data package for the samples)
34	• documentation of sample storage (location and amount)
35	• documentation of removal and return of sample aliquots (amount, date and time, person
36	removing or returning, and reason for removal)
37	• transfer of the samples and residues to the receiving authority (usually the site from which
38	they were taken)
39	• use of tamper-proof electronic systems, which are acceptable for laboratory tracking
40	The procedure for accomplishing the above varies from laboratory to laboratory, but the exact
41	details of performing the operations of sample tracking should be contained in an SOP.
7.9 Packaging and Transporting Samples
All samples being shipped for radiochemical analysis should be properly packaged and labeled
before transport offsite or within the site. The primary concern is the possibility of spills, leaks, or
breakage of the sample containers. In addition to resulting in the loss of samples and cross-
contamination, the possible release of hazardous material poses a threat to the safety of
persons handling and transporting the package.
Suggestions for packaging and shipping radioactive environmental samples are listed below:
•	Review NRC requirements (10 CFR Part 71) and U.S. Department of Transportation (DOT)
requirements (49 CFR Parts 171-177) for packaging and shipping radioactive
environmental samples.
•	Visually inspect each sample container for indication of leaks or defects in the sample
container.
o Liquid samples should be shipped in plastic containers, if possible, and the caps on the
containers should be secured with tape. One exception to the use of plastic bottles is
samples collected for 3H analyses, which may require glass containers.
o Heavy plastic bags with sealable tops can be used to contain solid samples (e.g., soil,
sediment, air filters). The zipper lock should be secured with tape. Heavy plastic lawn
bags can be used to contain vegetation samples. The tops should be closed with a "tie"
that is covered by tape to prevent it from loosening and slipping off.
•	Wipe individual sample containers with a damp cloth or paper towel to remove any exterior
contamination. The outer surfaces of containers holding samples collected in an area
containing residual radioactive material should be surveyed with one or more hand-held
instruments appropriate for the suspected type of radioactive material.
•	If glass sample containers are used, place sample containers inside individual plastic bags
and seal to contain the sample in case of breakage.
•	Use packing material (e.g., paper, Styrofoam™, bubble wrap) to immobilize and isolate each
sample container and buffer hard knocks on the outer container during shipping. This is
especially important in cold weather, when plastic containers may become brittle and water
samples may freeze.
•	When liquid samples are shipped, include a sufficient quantity of an absorbent material
(e.g., vermiculite) to absorb all liquid packed in the shipping container in case of breakage.
This absorbent material also may suffice as the packing material.
•	Include the original signed and dated COC form, identifying each sample in the package. It
is good practice to place the COC form in a plastic bag to prevent it from becoming wet or
contaminated in case of a spill during shipment. If possible, avoid having multiple packages
of samples covered by a single COC form.
•	Seal the package closed and apply COC tape in such a manner that it must be torn (broken)
to open the package. The tape should carry the signature of the sender, and the date and
time, so that it cannot be removed and replaced undetected.
1	• Ice chests constructed of metal or hard plastic make excellent shipping containers for
2	radioactive environmental samples.
3	• Regulations may require specific labeling and markings on the external surface of each
4	shipping container and may also require handling instructions and precautions be attached
5	to the shipping container. Some information should be included on the package even if not
6	required by the regulations, such as the sender's and receiver's (consignee and consignor)
7	names, addresses, and telephone numbers. When required by shipping regulation, proper
8	handling instructions and precautions should be clearly marked on shipping containers.
9	• Shipments with dry ice or other hazardous packaging material are subject to requirements
10	pertaining to the packaging, apart from the radioactive or hazardous contents.
11	If samples are sent offsite for analysis, the shipper is responsible for complying with all
12	applicable Federal, State, and local regulations. Applicable Federal regulations are briefly
13	addressed below. Any State or local regulation will very likely reflect a Federal regulation.
14	7.9.1 U.S. Nuclear Regulatory Commission Regulations
15	NRC regulations for packaging, preparation, and shipment of licensed material are contained in
16	10 CFR Part 71: "Packaging and Transportation of Radioactive Materials."
17	• Samples containing low levels of radioactive material are exempted as set forth in § 71.10.
18	• Low Specific Activity material (LSA) is defined in § 71.4: "Definitions." Samples classified
19	as LSA need only meet the requirements of the DOT, discussed below, and the
20	requirements of § 71.88: "Air transport of plutonium." Most environmental samples either
21	will fall into this category or will be exempt from any DOT regulations.
22	7.9.2 U.S. Department of Transportation Regulations
23	The DOT provides regulations governing the transport of hazardous materials under the
24	Hazardous Materials Transportation Act of 1975 (88 Stat. 2156, Public Law 93-633). Applicable
25	requirements of the regulations are found in 49 CFR Parts 171-177. Shippers of samples
26	containing radioactive material should be aware of the current rules in the following areas:
27	• Accident reporting: 49 CFR 171
28	• Marking and labeling packages for shipment: 49 CFR 172
29	• Packaging: 49 CFR 173
30	• Placarding a package: 49 CFR 172
31	• Registration of shipper/carrier: 49 CFR 107
32	• Shipper required training: 49 CFR 172
33	• Shipping papers and emergency information: 49 CFR 172
34	• Transport by air: 49 CFR 175
35	• Transport by rail: 49 CFR 174
1	• Transport by vessel: 49 CFR 176
2	• Transport on public highway: 49 CFR 177
3	7.9.3 U.S. Postal Service Regulations
4	Any package containing radioactive materials may not be mailed if it is required to bear the
5	DOT's Radioactive White-I (49 CFR 172.436), Radioactive Yellow-II (49 CFR 172.438), or
6	Radioactive Yellow-III (49 CFR 172.440) label, or if it contains quantities of radioactive material
7	in excess of those authorized in Publication 6, Radioactive Material, of the U.S. Postal Service.
8	7.9.4 International Atomic Energy Agency Regulations
9	In the event that samples or other radioactive materials, such as calibration sources, are
10	shipped outside the boundaries of the United States, the shipment of those materials must
11	comply with the International Atomic Energy Agency Regulations for the Safe Transport of
12	Radioactive Material (IAEA 2005). The areas addressed in the Regulations include—
13	• activity limits and material restrictions
14	• requirements and controls for transport
15	• radioactive material package and packaging requirements
16	• test procedures
17	• administrative controls and requirements
1	8 INTERPRETATION OF SURVEY RESULTS
2	8.1 Introduction
3	This chapter discusses the interpretation of survey results, primarily those of the final status
4	survey (FSS). Interpreting a survey's results is most straightforward when measurement data
5	are entirely higher or lower than the wide-area derived concentration guideline level (DCGLw).
6	In such cases, the decision that a survey unit meets or exceeds the release criteria requires little
7	in terms of data analysis. However, formal statistical tests provide a valuable tool when a survey
8	unit's measurements are neither clearly above nor exclusively below the DCGLw. Nevertheless,
9	the survey design always makes use of the statistical tests to help ensure that the number of
10	sampling points and the measurement detectability and uncertainty are adequate, but not
11	excessive, for the decision to be made. Although most statistical analysis is completed using
12	statistical software packages, this chapter provides an explanation to facilitate the reader's
13	understanding of the mechanics behind the calculations of these statistical tests.
14	Section 8.2 discusses the assessment of data quality. The remainder of Chapter 8 deals with
15	application of the statistical tests used in the decision-making process and the evaluation of the
16	test results. In addition, an example checklist is provided to assist the user in obtaining the
17	necessary information for interpreting the results of an FSS. Section 8.3 discusses the
18	application of the Sign test to survey data involving radionuclides that are not in the background.
19	Section 8.4 discusses the application of the Wilcoxon Rank Sum (WRS) test to survey data
20	involving radionuclides that are in the background and, for Scenario B, the application of the
21	quantile test when the null hypothesis is not rejected. Comparisons of scan-only results to an
22	upper confidence limit are discussed in Section 8.5. Section 8.6 discusses the results,
23	including the elevated measurement comparison (EMC), and interpretation of the statistical
24	tests. Section 8.7 discusses the documentation requirements.
25	8.2 Data Quality Assessment
26	Data Quality Assessment (DQA) is a scientific and statistical evaluation that determines whether
27	the data are of the right type, quality, and quantity to support their intended use. An overview of
28	the DQA process is presented in Section 2.3 and Appendix D. The DQA process has five
29	steps:
30	• Review the Data Quality Objectives (DQOs), Measurement Quality Objectives (MQOs) and
31	Survey Design (Section 8.2.1)
32	• Conduct a Preliminary Data Review (Section 8.2.2)
33	• Select the Statistical Test (Section 8.2.3)
34	• Verify the Assumptions of the Statistical Test (Section 8.2.4)
35	• Draw Conclusions from the Data (Section 8.2.5)
1	The effort applied to DQA should be consistent with the graded approach used to develop the
2	survey design. More information on DQA can be found in Data Quality Assessment: A User's
3	Guide (EPA QA/G-9R, EPA 2006a) and Data Quality Assessment: Statistical Tools for
4	Practitioners (EPA QA/G-9S, EPA 2006b).
5	Data should be verified and validated as described in the Quality Assurance Project Plan
6	(QAPP). Guidance on data verification and validation can be found in Appendix D and Multi-
7	Agency Radiation Laboratory Analytical Protocols (MARLAP) (NRC 2004) Chapter 8. Guidance
8	on developing a QAPP is available in EPA QA/G-5 (EPA 2002a) and MARLAP Chapter 4.
9	8.2.1 Review the Data Quality Objectives, Measurement Quality Objectives, and Survey
10	Design
11	The first step in the DQA evaluation is a review of the DQO outputs to ensure that they are still
12	applicable. The review of the DQOs and survey design should also include the MQOs
13	(e.g., measurement uncertainty, detectability). For example, if the data suggest the survey unit
14	was misclassified as Class 3 instead of Class 1 (i.e., because measurement results above the
15	DCGLw were obtained), then the original DQOs should be redeveloped for the correct
16	classification; or, for example, if the data show the measurement uncertainty exceeds the
17	estimate used to design the survey, the DQOs and MQOs should be revisited.
18	The survey design and data collection documentation should be reviewed for consistency with
19	the DQOs. For example, the review should check that the calculated number of samples, N, was
20	taken in the correct locations and that the samples were analyzed using measurement systems
21	with required detection capability and uncertainty. Example checklists for different types of
22	surveys are given in Chapter 5.
23	Determining that the survey design provides adequate power is important to decision making,
24	particularly in cases where the average levels of residual radioactive material are near the
25	DCGLw. This can be done both prospectively during survey design to test the efficacy of a
26	proposed design and retrospectively during interpretation of survey results to determine that the
27	objectives of the design are met. The procedure for generating power curves for specific tests is
28	discussed in Appendix M. Note that the accuracy of a prospective power curve depends on
29	having good estimates of the data variability, σ, and the number of measurements. After the
30	data are analyzed, a sample estimate of the data variability, namely the sample standard
31	deviation (s), and the actual number of valid measurements will be known. While the Type I (α)
32	decision error rate will always be achieved, the consequence of inadequate power is an
33	increased Type II (β, or false negative) decision error rate.
34	• For Scenario A, this means that a survey unit that actually meets the release criteria has a
35	higher probability of being incorrectly deemed not to meet the release criteria.
36	• For Scenario B, this means that a survey unit that does not meet the release criteria has a
37	higher probability of being incorrectly deemed to meet the release criteria.
38	Regulators are primarily concerned with errors that result from determining that a survey unit
39	meets the release criteria when it does not. This incorrect decision is a Type I error under
40	Scenario A and a Type II error under Scenario B. Site owners are also concerned with errors
1	that result from determining that a survey unit does not meet the release criteria when it does.
2	This incorrect decision is a Type II error under Scenario A and a Type I error under Scenario B.
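The MARSSIM procedure for generating power curves is given in Appendix M. Purely as a rough illustration of how retrospective power could be examined, the sketch below estimates power by simulation for a Sign-test-like decision rule under Scenario A (radionuclide not present in background). The decision rule, normal data model, and parameter values are illustrative assumptions, not MARSSIM requirements:

```python
# Rough Monte Carlo sketch of retrospective power for a Sign-test-like decision
# rule under Scenario A (illustrative only; see Appendix M for the MARSSIM
# procedure for generating power curves).

from math import comb
import numpy as np

def sign_test_critical_value(n, alpha=0.05):
    """Smallest k with P(S+ >= k | Binomial(n, 0.5)) <= alpha."""
    for k in range(n + 1):
        if sum(comb(n, j) for j in range(k, n + 1)) / 2**n <= alpha:
            return k
    return n + 1

def simulated_power(n, true_mean, sigma, dcgl_w, alpha=0.05, trials=20_000, seed=1):
    """Fraction of simulated survey units (normal data assumed) for which enough
    measurements fall below the DCGLw to release the survey unit."""
    rng = np.random.default_rng(seed)
    k = sign_test_critical_value(n, alpha)
    data = rng.normal(true_mean, sigma, size=(trials, n))
    s_plus = (data < dcgl_w).sum(axis=1)     # number of measurements below DCGLw
    return (s_plus >= k).mean()

# Example: 20 samples, true mean at half the DCGLw, sigma near the design estimate.
print(simulated_power(n=20, true_mean=0.5, sigma=0.3, dcgl_w=1.0))
```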
3	8.2.2 Conduct a Preliminary Data Review
4	To learn about the structure and quality of the data—identifying patterns, relationships, or
5	potential anomalies—it is recommended that the quality assurance (QA) and quality control
6	(QC) reports be reviewed and that basic statistical quantities be calculated and graphs of the
7	data, or population estimators, be prepared so that objective evidence is provided to support
8	conclusions about the data set.
9	8.2.2.1 Data Evaluation and Conversion
10	Radiological survey data are usually obtained in units that have no intrinsic meaning relative to
11	DCGLs, such as the number of counts per unit time. For comparison of survey data to DCGLs,
12	the survey data from field and laboratory measurements are converted to DCGL units. Further
13	information on instrument calibration and data conversion is given in Section 6.7.
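As a minimal example of such a conversion (hypothetical instrument parameters; see Section 6.7 for the full calibration and data conversion guidance), a net count rate from a surface-activity measurement can be expressed in DCGL units as follows:

```python
# Minimal sketch (hypothetical instrument parameters): convert a field count rate
# into surface-activity DCGL units.

gross_cpm = 250.0          # observed gross count rate, counts per minute (assumed)
background_cpm = 60.0      # instrument background count rate (assumed)
total_efficiency = 0.15    # counts per particle emitted from the surface (assumed)
probe_area_cm2 = 126.0     # physical probe area (assumed)

net_cpm = gross_cpm - background_cpm
dpm_per_100cm2 = net_cpm / (total_efficiency * (probe_area_cm2 / 100.0))
bq_per_m2 = dpm_per_100cm2 / 60.0 * 100.0   # 1 Bq = 60 dpm; 100 x (100 cm^2) per m^2

print(f"{dpm_per_100cm2:.0f} dpm/100 cm^2  ({bq_per_m2:.0f} Bq/m^2)")
```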
14	Basic statistical quantities that should be calculated for the sample data set are the—
15	• sample mean
16	• sample standard deviation
17	• sample median1
18	Other statistical quantities that may be calculated are—
19	• the standard error for the mean
20	• the highest measurement
21	• the lowest measurement
22	The sample mean, x̄, can be calculated using Equation 8-1:

\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i     (8-1)

1 The term "sample" here is a statistical term and should not be confused with laboratory samples. For the calculation
of basic statistical quantities above, data may consist of scan data, direct measurement data, or laboratory sample
data. See also the glossary definition of sample.
where N is the number of samples, and x_i are the results of the individual samples. The sample
standard deviation, s, can be calculated using Equation 8-2:

s = \sqrt{ \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2 }     (8-2)
The median is the middle value of the data set when the number of data points is odd and is the
average of the two middle values when the number of data points is even. Thus, 50 percent of
the data points are above the median, and 50 percent are below the median. Example 1
illustrates how to calculate the sample standard deviation.
Example 1: Calculate the Sample Standard Deviation
Suppose the following 20 concentration values are from a survey unit:
90.7, 83.5, 86.4, 88.5, 84.4, 74.2, 84.1, 87.6, 78.2, 77.6,
86.4, 76.3, 86.5, 77.4, 90.3, 90.1, 79.1, 92.4, 75.5, 80.5.
First, the sample mean of the data should be calculated:
\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i = \frac{1}{20} (90.7 + 83.5 + 86.4 + \cdots + 92.4 + 75.5 + 80.5) = 83.5

The sample mean is 83.5. The sample standard deviation should also be calculated:

s = \sqrt{ \frac{1}{20-1} \left[ (90.7 - 83.5)^2 + (83.5 - 83.5)^2 + \cdots + (75.5 - 83.5)^2 + (80.5 - 83.5)^2 \right] } = 5.7

The sample standard deviation is 5.7.
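The same quantities can be reproduced directly in a few lines; the sketch below computes the sample mean (Equation 8-1) and sample standard deviation (Equation 8-2) for the 20 example values, with ddof=1 giving the N-1 denominator:

```python
# Minimal sketch: sample mean (Equation 8-1) and sample standard deviation
# (Equation 8-2, N-1 denominator) for the 20 example values.

import numpy as np

values = np.array([90.7, 83.5, 86.4, 88.5, 84.4, 74.2, 84.1, 87.6, 78.2, 77.6,
                   86.4, 76.3, 86.5, 77.4, 90.3, 90.1, 79.1, 92.4, 75.5, 80.5])

mean = values.mean()
std = values.std(ddof=1)   # ddof=1 -> divide by N-1, as in Equation 8-2

print(f"mean = {mean:.1f}, s = {std:.1f}")   # mean = 83.5, s = 5.7
```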
For Scenario A, the mean concentration of the survey unit should always be compared to the
DCGLw. A mean survey unit concentration less than the DCGLw is a necessary, but not
sufficient, requirement for the release of the survey unit if the radionuclide is not present in the
background. Where remediation is inadequate, this comparison may readily reveal that a survey
1	unit contains excess residual radioactive material—even before applying statistical tests. For
2	example, if the sample mean of the data exceeds the DCGLw and the radionuclide of interest
3	does not appear in background, then the survey unit clearly does not meet the release criteria.
4	On the other hand, if every measurement in the survey unit is below the DCGLw, the survey unit
5	clearly meets the release criteria.2
6	The value of the sample standard deviation is especially important. If the standard deviation is
7	too large compared to that assumed during the survey design, this may indicate that an
8	insufficient number of samples were collected to achieve the desired power of the statistical
9	test. Again, inadequate power can lead to unnecessary remediation for Scenario A (of particular
10	interest to the regulated) or inadequate remediation for Scenario B (of particular interest to the
11	regulator).
12	Large differences between the mean and the median would be an indication of skewness in the
13	data. This would also be evident in a histogram of the data. Example 2 illustrates a comparison
14	of the sample mean and median.
Example 2: Comparison of the Sample Mean and the Median
Using the data from the earlier example, take the 20 concentration values from the survey
unit:
90.7, 83.5, 86.4, 88.5, 84.4, 74.2, 84.1, 87.6, 78.2, 77.6,
86.4, 76.3, 86.5, 77.4, 90.3, 90.1, 79.1, 92.4, 75.5, 80.5.
Sort and rank the data from lowest to highest:
Rank:   1     2     3     4     5     6     7     8     9     10
Value:  74.2  75.5  76.3  77.4  77.6  78.2  79.1  80.5  83.5  84.1

Rank:   11    12    13    14    15    16    17    18    19    20
Value:  84.4  86.4  86.4  86.5  87.6  88.5  90.1  90.3  90.7  92.4
For the example data above, the median is 84.25 (i.e., (84.1 + 84.4)/2). The difference
between the median and the mean (i.e., 84.25 - 83.5 = 0.75) is a small fraction of the sample
standard deviation (i.e., 5.7). Thus, in this instance, the mean and median would not be
considered significantly different.
15	Examining the minimum, maximum, and range of the data may provide additional useful
16	information. The maximum is the value of the largest observed sample, the minimum is the
2 It can be verified that if every measurement is below the DCGLw, the conclusion from the statistical tests will always
be that the survey unit does not exceed the release criteria.
value of the smallest observed sample, and the range is the difference between the maximum
and minimum. When there are 30 or fewer data points, values of the range much larger than
about 4 to 5 standard deviations would be unusual. For larger data sets, the range might be
wider. Example 3 illustrates how to determine the sample range.
Example 3: Determination of the Sample Range
The minimum in the previous example is 74.2 and the maximum is 92.4, so the range is 18.2
(92.4 - 74.2 = 18.2). Dividing the range by the standard deviation indicates how many
standard deviations wide the sample data represent: 18.2 / 5.7 ≈ 3.2.
This is only 3.2 standard deviations. Thus, the range is not unusually large.
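The median and range checks of Examples 2 and 3 can be reproduced the same way for the example data:

```python
# Minimal sketch: median (Example 2) and range check (Example 3) for the data.

import numpy as np

values = np.array([90.7, 83.5, 86.4, 88.5, 84.4, 74.2, 84.1, 87.6, 78.2, 77.6,
                   86.4, 76.3, 86.5, 77.4, 90.3, 90.1, 79.1, 92.4, 75.5, 80.5])

median = np.median(values)                    # 84.25 (average of the two middle values)
data_range = values.max() - values.min()      # 92.4 - 74.2 = 18.2
widths = data_range / values.std(ddof=1)      # about 3.2 standard deviations

print(f"median = {median}, range = {data_range:.1f}, range/s = {widths:.1f}")
```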
8.2.2.2 Graphical Data Review
At a minimum, a graphical data review should consist of a posting plot and a histogram.
Quantile plots are also useful diagnostic tools, particularly in the two-sample case, to compare
the survey unit and reference area in cases where the radionuclide is present in the background
or measurements are not radionuclide specific. Quantile plots are discussed in Appendix L,
Section L.2.
A posting plot is simply a map of the survey unit with the data values entered at the
measurement locations. This potentially reveals heterogeneities in the data, especially possible
areas of elevated residual radioactive material. Even in a reference area, a posting plot can
reveal spatial trends in background data that might affect the results of the statistical tests used
when the radionuclide is present in the background or measurements are not radionuclide
specific.
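A posting plot can be produced with any plotting package; the following minimal sketch uses hypothetical coordinates and annotates each measurement value at its location:

```python
# Minimal sketch (hypothetical coordinates): a posting plot annotates each
# measurement value at its survey-unit location.

import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 30, 20)          # easting of each measurement location (assumed)
y = rng.uniform(0, 20, 20)          # northing of each measurement location (assumed)
values = rng.normal(83.5, 5.7, 20)  # measurement results in DCGL units (assumed)

fig, ax = plt.subplots()
ax.scatter(x, y, marker="+", color="black")
for xi, yi, vi in zip(x, y, values):
    ax.annotate(f"{vi:.1f}", (xi, yi), textcoords="offset points", xytext=(3, 3))
ax.set_xlabel("Easting (m)")
ax.set_ylabel("Northing (m)")
ax.set_title("Posting plot of survey unit measurements")
plt.show()
```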
If the data given in the examples above were obtained using a triangular grid in a rectangular
survey unit, the posting plot might resemble the display in Figure 8.1. Figures 8.1a and 8.1c
show no unusual patterns in the data, whereas Figures 8.1b and 8.1d show the exact same
values (and therefore the same mean) but with a different distribution of residual radioactive
material. Figures 8.1b and 8.1d also reveal an obvious trend toward smaller values as one
moves from left to right across the survey unit, which can be discerned only if spatial information
is available and analyzed. The graphical display of data in a posting plot is beneficial to better
understanding the distribution of residual radioactive material at a site.
If the posting plot reveals systematic spatial trends in the survey unit, the cause of the trends
would need to be investigated. In some cases, such trends could be due to residual radioactive
material, but they may also be due to inhomogeneities in the survey unit background. Other
diagnostic tools for examining spatial data trends may be found in EPA Guidance Document
QA/G-9S (EPA 2006b). The use of geostatistical tools to evaluate spatial data trends may also
be useful in some cases (EPA 1989b).
Figure 8.1: Examples of Posting Plots [panels (a) through (d): example data values posted at their measurement locations on a triangular grid]
Geographic information system (GIS) tools can also be used to help with creation of conceptual
models (e.g., provide spatial context and a better understanding of site features that may control
or enhance radionuclide transport in the environment). Figures created with GIS tools can also
assist with identifying relatively homogeneous areas of residual radioactivity for delineation of
survey units. Examples of features that can be captured on a figure using GIS tools include the
following:
•	study area and property boundary
•	buildings where residual radioactivity may be present
•	roads
•	surface water features (streams, ponds, runoff basins, ditches, culverts)
•	underground features (underground storage tanks, piping)
•	topography, surface geology, and outcrop locations
•	hydrostratigraphic surfaces and isopach maps
•	water table and potentiometric surfaces
•	sampling locations
•	monitoring well locations
•	contaminant distributions
For example, Figure 8.2a shows a map that includes the location of two hypothetical tanks. Leaks are known to have occurred near the tanks. GIS information on the location of important features and topography of surficial (or subsurface) structures can be used to identify areas where residual radioactivity may be present and more likely to have been transported (e.g., surface water runoff direction). GIS information and geostatistical tools can be helpful in designing survey plans and identifying areas most likely to be above risk-based thresholds. For example, the geostatistical tools available in such codes as Visual Sample Plan and Spatial Analysis and Decision Assistance (SADA) can be used to analyze data and extrapolate data in areas where no data are available. Figure 8.2 illustrates the use of SADA (Version 5) in creating a three-dimensional visualization of the volume of soil most likely to be affected based on sampling results and use of geostatistical tools available in the code. Figure 8.2b illustrates how geostatistical tools can help interpolate and extrapolate data to determine the probability of exceeding a threshold following characterization.
A frequency plot (or a histogram) is a useful tool for examining the general shape of a data distribution. This plot is a bar chart of the number of data points within a certain range of values. A frequency plot of the example data from Figure 8.1 is shown in Figure 8.3. A simple method for generating a rough frequency plot is the stem-and-leaf display discussed in Appendix L, Section L.1. The frequency plot may reveal any obvious departures from symmetry, such as skewness or bimodality (two peaks), in the data distributions for the survey unit or reference area. The presence of two peaks in the survey unit frequency plot may indicate the existence of isolated areas of residual radioactive material, which may need to be further investigated as part of the EMC tests.
The presence of two peaks in the background reference area or survey unit frequency plot may also indicate a mixture of background concentration distributions due to different soil types, construction materials, etc., or it could indicate the presence of residual radioactivity in the background reference area.3 The greater variability in the data due to the presence of such a mixture will reduce the power of the statistical tests to detect an adequately remediated survey unit that meets the release criteria for Scenario A or to detect a survey unit that does not meet
3 In some cases, it may be necessary to perform additional investigation to determine if background reference areas
were properly classified as non-impacted.
the release criteria for Scenario B. These situations should be avoided whenever possible by carefully matching the background reference areas to the survey units and choosing more homogeneous survey units, as discussed in more detail in Appendix D. If relatively homogeneous survey units cannot be identified, consistent with the underlying assumptions in the DCGLw derivation, then other approaches may need to be taken to evaluate the acceptability of the survey units for release (e.g., increased focus on evaluation of the risk of elevated areas). Consult with the regulator when survey units are highly heterogeneous.
Figure 8.2: Sample GIS Visualization Showing Tank and Building Locations, Modified from Figures 3.4 and 7.8 in NUREG/CR-7021 (NRC 2012)
Figure 8.3: Example of a Frequency Plot (a) and Other Statistical Information Output from Visual Sample Plan v. 7 (b)
Summary statistics shown in panel (b): n = 20; Min = 74.2; Max = 92.4; Range = 18.2; Mean = 83.185; Median = 84.25; Variance = 32.655; Std Dev = 5.7145; Std Error = 1.2778; Interquartile Range = 10.525; Skewness = -0.12671. Percentiles: 1% = 74.2; 5% = 74.265; 10% = 75.58; 25% = 77.75; 50% = 84.25; 75% = 88.275; 90% = 90.66; 95% = 92.315. Walsh's Outlier Test: cannot be applied to fewer than 60 samples; data should not be excluded from analysis solely on the basis of this test.
Caution should be exercised when developing frequency plots (commonly referred to as
histograms). The shape of a histogram can depend on the choice of the bin widths and ranges.
If bins are too wide, features of the underlying distribution within the bin width may be missed. If
bins are too narrow, the bin-to-bin variability can be mistaken as a feature of the underlying
distribution. Additional caution should be exercised when interpreting histograms for small data
sets where smaller features, such as a second smaller peak, may not be observed.
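The effect of bin width is easy to demonstrate. The sketch below, in Python with NumPy and matplotlib (an assumption; any statistics package will do), draws the same small hypothetical data set with three different bin counts so that overly coarse, intermediate, and overly fine binning can be compared side by side.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    # Hypothetical data: a background mode plus a second, smaller elevated mode
    data = np.concatenate([rng.normal(80.0, 4.0, 17), rng.normal(95.0, 2.0, 3)])

    fig, axes = plt.subplots(1, 3, figsize=(9, 3), sharey=True)
    for ax, bins in zip(axes, [3, 8, 20]):
        ax.hist(data, bins=bins)
        ax.set_title(f"{bins} bins")
        ax.set_xlabel("Concentration")
    axes[0].set_ylabel("Frequency")
    plt.tight_layout()
    plt.show()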
Skewness or other asymmetry can impact the accuracy of the statistical tests. When the underlying data distribution is highly skewed, it is often because there are a few elevated areas. Because the Elevated Measurement Comparison derived concentration guideline level (DCGLemc) is specifically used to evaluate the acceptability of elevated areas (i.e., the ability of a site with elevated areas of residual radioactive material above the DCGLw to meet release criteria), the limitations associated with use of statistical tests based on the median or mean for nonhomogeneous residual radioactivity are mitigated. In cases where highly heterogeneous residual radioactive material is present, care should be taken to ensure that the lateral extent of the elevated area is delineated and a DCGLemc is calculated consistent with the actual size of the elevated area. When a number of elevated areas are present, techniques can be used to evaluate the cumulative risk of the elevated areas, depending on the distribution of the elevated areas in the survey unit.
8.2.2.3 Draw Conclusions from the Preliminary Data Review
In some instances, a preliminary review of the data may be sufficient to draw conclusions without performing the statistical tests described in Section 8.2.3. For example, under Scenario A, the sample mean of the survey unit data can be compared to the reference area sample mean and the DCGLw to get a preliminary indication of the survey unit status. If the difference between the survey unit sample mean and the reference area sample mean is greater than the DCGLw, then the survey unit cannot be released. Alternatively, significantly higher concentrations in the reference area compared to the survey unit may be an indicator that the reference area is not appropriate for the survey unit and warrants further investigation.
Tables 8.3-8.5 describe examples of other circumstances leading to specific conclusions based on a simple examination of the data without the need to perform certain statistical tests.
8.2.3 Select the Statistical Test
An overview of the statistical considerations important for FSSs appears in Section 2.5 and
Appendix D. The parameter of interest is the mean concentration in the survey unit. The
nonparametric tests recommended in this manual, in their most general form, are tests of the
median. For data that are from a skewed distribution, the mean could be significantly larger than
the median. Therefore, the mean should be compared to the DCGLw to ensure that the mean is
less than the DCGLw, as indicated in Section 8.2.2.3. If the data are highly skewed because of
the presence of elevated areas, the EMC test helps ensure that the site is acceptable for release.
If one assumes that the data are from a symmetric distribution where the median and the mean
are effectively equal, these statistical evaluations are also tests of the mean. If the assumption
of symmetry is violated, then nonparametric tests of the median approximately test the mean.
Computer simulations (Hardin and Gilbert, 1993) have shown that the approximation is a good
one—that is, the correct decision will be made about whether the mean concentration exceeds
the DCGLw, even when the data come from a skewed distribution. In this regard, Hardin and
Gilbert found the nonparametric tests to be correct more often than the commonly used
Student's t test. The robust performance of the Sign and WRS tests over a wide range of
conditions is the reason that they are recommended in this manual.
When a given set of assumptions is true, a parametric test designed for exactly that set of
conditions will have the highest power. For example, if the data are from a normal distribution,
the Student's t test will have higher power than the nonparametric tests. It should be noted that
for large enough sample sizes (e.g., large number of measurements), the Student's t test is not
a great deal more powerful than the nonparametric tests. On the other hand, when the
assumption of normality is violated, the nonparametric tests can be much more powerful than
the t test. Therefore, any statistical test may be used, provided that the data are consistent with
the assumptions underlying their use. When these assumptions are violated, the prudent
approach is to use the nonparametric tests, which generally involve fewer assumptions than
their parametric equivalents.
The Sign test, described in Section 5.3.4, is typically used when the radionuclide is not present
in background and radionuclide-specific measurements are made. The Sign test may also be
used if the radionuclide is present in the background at such a small fraction of the DCGLw
value as to be considered insignificant. In this case, background concentrations of the
radionuclide are included with the residual radioactive material (i.e., the entire amount is
attributed to facility operations). Thus, the total concentration of the radionuclide is compared to
the release criteria. This option should be used only if one expects that ignoring the background
concentration will not affect the outcome of the statistical tests. The advantage of ignoring a
small background contribution is that no reference area is needed. This can simplify the FSS
considerably. Some alternative statistical tests to the Sign test are described in Chapter 14 of
NUREG-1505, A Nonparametric Statistical Methodology for the Design and Analysis of Final
Status Decommissioning Surveys (NRC 1998a) and in Chapter 2.
The Sign test (Section 8.3.1) evaluates whether the median of the data is above or below the
DCGLw. If the data distribution is symmetric, the median is equal to the mean. In cases where
the data are severely skewed, the mean may be above the DCGLw, while the median is below
the DCGLw. In such cases, the survey unit does not meet the release criteria regardless of the
result of the statistical tests. On the other hand, if the largest measurement is below the DCGLw,
the Sign test will always show that the survey unit meets the release criteria.
For FSSs, the WRS test discussed in Section 5.3.3 can be used when the radionuclide of
concern appears in background or if measurements used are not radionuclide-specific. The
WRS test (Section 8.4.1) assumes the reference area and survey unit data distributions are
similar except for a possible shift in the medians. When the data are severely skewed, the value
for the mean difference may be above the DCGLw, while the median difference is below the
DCGLw. In such cases, the survey unit does not meet the release criteria regardless of the
result of the statistical test. On the other hand, if the difference between the largest survey unit measurement and the smallest reference area measurement is less than the DCGLw, the WRS test will always show that the survey unit meets the release criteria.
The use of paired observations for survey units with different backgrounds and some alternative statistical tests to the WRS test are described in Chapters 12 and 14, respectively, of NUREG-1505, A Nonparametric Statistical Methodology for the Design and Analysis of Final Status Decommissioning Surveys (NRC 1998a). If Scenario B was selected during the DQO process, the quantile test is performed to test for skewness when the WRS test does not reject the null hypothesis.
If individual scan-only survey results are recorded, a nonparametric confidence interval can be used to evaluate the results of the FSS. Similarly, a confidence interval can be used to evaluate a series of in situ measurements with overlapping fields of view. A one-tailed version of Chebyshev's inequality or software (e.g., EPA's ProUCL software) can be used to evaluate the probability of exceeding the upper bound of the gray region (UBGR) using an upper confidence limit (UCL). The use of a UCL applies to both Scenario A (where the UBGR equals the DCGLw) and Scenario B (where the UBGR equals the discrimination limit [DL]). Table 8.1 provides a summary of the statistical tests and evaluation methods discussed in this chapter.
Table 8.1: Summary of Statistical Tests and Evaluation Methods

Statistical Test or Evaluation Method | Applicability
Sign Test (see Section 8.3 and Table 8.3) | Radionuclide not in background and nuclide-specific measurements; Scenario A or B
WRS Test (see Section 8.4 and Table 8.4) | Radionuclide in background or non-nuclide specific measurements; Scenario A or B
Quantile Test (see Section 8.4) | Test for non-uniform distribution of radioactive material; combined with WRS Test; Scenario B only
Comparison to UCL (see Section 8.5 and Table 8.5) | Scan-only surveys or in situ surveys; Scenario A or B

Abbreviations: WRS test = Wilcoxon Rank Sum test; UCL = upper confidence limit.
8.2.4 Verify the Assumptions of the Statistical Tests
An evaluation to determine that the data are consistent with the underlying assumptions made for the statistical procedures helps to validate the use of a test. One may also determine that certain departures from these assumptions are acceptable when given the actual data and other information about the study.
For surveys consisting of a combination of scanning with samples or direct measurements, the nonparametric tests described in this chapter assume that the data from the reference area or survey unit consist of independent samples from each distribution. Spatial dependencies that potentially affect the assumptions can be assessed using posting plots (Section 8.2.2.2). More sophisticated tools for determining the extent of spatial dependencies are also available (e.g., EPA QA/G-9S, EPA 2006b). These methods tend to be complex and are best used with guidance from a professional statistician.
Asymmetry in the data can be diagnosed with a stem-and-leaf display, a histogram, or a quantile plot.
One of the primary advantages of the nonparametric tests is that they involve fewer assumptions about the data than their parametric counterparts. If parametric tests are used (e.g., Student's t test), then any additional assumptions made in using them should be verified (e.g., testing for normality). These issues are discussed in detail in EPA QA/G-9S (EPA 2006b).
One of the more important assumptions made in the survey designs described in Sections 5.3.3 and 5.3.4 is that the sample sizes determined for the tests are sufficient to achieve the DQOs set for the Type I and Type II error rates. Verification that the power of the tests (1 - β) is sufficient to ensure that a site that does not meet the release criterion is not released, regardless of what the test determined, may be of particular interest to the regulator and is, therefore, required for surveys conducted under Scenario B. For Scenario A, verification of the power of the tests to correctly release a site that meets the release criteria, regardless of what the test determined, may be of particular interest to the site owner/operator. Methods for assessing the power are discussed in Appendix M.
For these reasons, it is better to plan the surveys cautiously, including—
•	overestimating the potential data variability
•	taking more than the minimum number of measurements
•	overestimating minimum detectable concentrations (MDCs) and measurement method uncertainties
If one is unable to show that the DQOs and MQOs were met with reasonable assurance, a resurvey may be needed. Examples of assumptions and possible methods for their assessment are summarized in Table 8.2.
For scan-only surveys where data are compared to a UCL, Chebyshev's inequality should be used with caution when there are very few points in the data set. This is because the population mean and standard deviation in the Chebyshev formula are estimated by the sample mean and sample standard deviation. In a small data set from a highly skewed distribution, the sample mean and sample standard deviation may be underestimated if the high-concentration but low-probability portion of the distribution is not captured in the sample data set. Section 6.7 provides information on converting the instrument reading to the appropriate units for reporting the UCL.
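One commonly cited form of the one-sided Chebyshev upper confidence limit on the mean is UCL = mean + s·sqrt(1/α - 1)/sqrt(n). The sketch below (Python with NumPy, an assumption) implements that form purely for illustration; the formula actually used should be confirmed against Section 8.5 and the ProUCL technical documentation, and the caution about small, skewed data sets noted above still applies.

    import numpy as np

    def chebyshev_ucl(data, alpha=0.05):
        """One-sided Chebyshev upper confidence limit on the mean (illustrative form)."""
        data = np.asarray(data, dtype=float)
        n = data.size
        return data.mean() + np.sqrt(1.0 / alpha - 1.0) * data.std(ddof=1) / np.sqrt(n)

    # Hypothetical scan-derived concentrations; compare the 95% UCL to the UBGR
    readings = [102.0, 96.0, 110.0, 88.0, 105.0, 99.0, 93.0, 101.0, 97.0, 104.0]
    print(round(chebyshev_ucl(readings, alpha=0.05), 1))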
Table 8.2: Methods for Checking the Assumptions of Statistical Tests

Assumption | Diagnostic
Spatial Independence | Posting Plot
Symmetry | Histogram, Quantile Plot
Data Variance | Sample Standard Deviation
Adequate Power | Retrospective Power Chart
8.2.5 Draw Conclusions from the Data
The types of measurements that can be made in a survey unit are direct measurements, laboratory samples, and scans.
Specific details for conducting the statistical tests are given in Section 8.3 (Sign test), Section 8.4 (WRS test and quantile test), and Section 8.5 (upper confidence limit test). When the data clearly show that a survey unit meets or exceeds the release criteria, the result is often obvious without performing the formal statistical analysis. The data still need to meet the assumptions for the statistical tests (e.g., ensuring adequate power in Scenario B). Tables 8.3-8.5 display various survey results and their conclusions.
Table 8.3: Summary of Statistical Tests for Radionuclide Not in Background and Radionuclide-Specific Measurement

Survey Result | Conclusion
Scenario A
All measurements are less than DCGLw. | Survey unit meets release criteria.
Sample mean is greater than DCGLw. | Survey unit does not meet release criteria.
Any measurement is greater than DCGLw, and the sample mean is less than DCGLw. | Conduct Sign test and EMC.
Scenario B
Sample mean is less than the AL. | Survey unit meets release criteria.
All measurements are greater than AL. | Survey unit does not meet release criteria.
Any measurement is greater than the AL, and the sample mean is greater than AL. | Conduct Sign test and EMC.

Abbreviations: DCGLw = wide-area derived concentration guideline level; EMC = elevated measurement comparison; AL = action level.
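The Scenario A portion of Table 8.3 can be read as a simple screening rule. The sketch below expresses that rule in Python (an assumption; the table can of course be applied by inspection); the function name and data are hypothetical, and the Sign test and EMC themselves must still be performed as described later in this chapter when the third row applies.

    import numpy as np

    def scenario_a_screen(measurements, dcgl_w):
        """Apply the Scenario A screening logic summarized in Table 8.3."""
        data = np.asarray(measurements, dtype=float)
        if data.max() < dcgl_w:
            return "Survey unit meets release criteria"
        if data.mean() > dcgl_w:
            return "Survey unit does not meet release criteria"
        return "Conduct Sign test and EMC"

    print(scenario_a_screen([121, 143, 112, 125, 132], dcgl_w=140.0))   # Conduct Sign test and EMC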
Table 8.4: Summary of Statistical Tests for Radionuclide in Background or Radionuclide Non-Specific (Gross) Measurements

Survey Result | Conclusion
Scenario A
Difference between largest survey unit measurement and smallest reference area measurement is less than the DCGLw. | Survey unit meets release criteria.
Difference between survey unit sample mean and reference area sample mean is greater than the DCGLw. | Survey unit does not meet release criteria.
Difference between any survey unit measurement and any reference area measurement is greater than DCGLw, and the difference between survey unit sample mean and reference area sample mean is less than the DCGLw. | Conduct WRS test and EMC.
Scenario B
Difference between survey unit sample mean and reference area sample mean is less than the AL. | Conduct quantile test.
Difference between smallest survey unit measurement and largest reference area measurement is greater than the AL. | Survey unit does not meet release criteria.
Difference between any survey unit measurement and any reference area measurement is less than the AL, and the difference between survey unit sample mean and reference area sample mean is greater than AL. | Conduct WRS test, quantile test, and EMC.

Abbreviations: DCGLw = wide-area derived concentration guideline level; WRS test = Wilcoxon Rank Sum test; EMC = elevated measurement comparison; AL = action level.
If applicable release criteria for elevated measurements exist, then both the measurements at discrete locations and the scans are also subject to the EMC. The result of comparing individual measurements to the DCGLemc is not conclusive as to whether the survey unit meets or exceeds the release criteria, but it is a flag or trigger for further investigation. The investigation may involve taking further measurements to determine that the area and level of the elevated residual radioactive material are such that the resulting dose or risk meets the release criteria.4
4 Rather than, or in addition to, taking further measurements, the investigation may involve assessing the adequacy of the exposure pathway model used to obtain the DCGLs and the consistency of the results obtained with the Historical Site Assessment and the scoping, characterization, and remedial action support surveys.
The investigation should also provide adequate assurance, using the DQO process, that there are no undiscovered areas of elevated residual radioactive material in the survey unit that might otherwise result in a dose or risk exceeding the release criteria when considered in conjunction with the dose or risk posed by the remainder of the survey unit. In some cases, this may lead to reclassifying all or part of a survey unit unless the results of the investigation indicate that reclassification is not necessary. The investigation level appropriate for each class of survey unit and type of measurement is shown in Table 5.4 and is described in Section 5.3.8. Example 4 provides background information that will be used in Examples 5-8.
Table 8.5: Summary of Results for Scan-Only Surveys

Survey Result^a | Conclusion
Scenario A
UCL is less than DCGLw. | Survey unit meets average release criteria; conduct the EMC.
UCL is greater than DCGLw. | Survey unit does not meet release criteria.
Scenario B
UCL is less than DL. | Survey unit meets average release criteria; conduct the EMC.
UCL is greater than DL. | Survey unit does not meet release criteria.

Abbreviations: UCL = upper confidence limit; DCGLw = wide-area derived concentration guideline level; EMC = elevated measurement comparison; DL = discrimination limit.
^a See Section 8.5 for additional details on calculating the UCL.
Example 4: Illustrative Examples Background Information
This example provides the background for Examples 5-8.
To illustrate the data interpretation process, consider an example facility with 14 survey units
consisting of interior concrete surfaces, one interior survey unit with drywall surfaces, and two
outdoor surface soil survey units. The radionuclide of concern is cobalt-60 (60Co). The interior
surfaces were measured with a gas-flow proportional counter (see Appendix H) with an
active surface area of 100 square centimeters (cm2) to determine gross beta activity.
Because these measurements are not radionuclide-specific, appropriate reference areas
were chosen for comparison. The exterior surface soil was measured with a germanium
spectrometer to provide radionuclide-specific results. A reference area is not needed because
60Co does not have a significant background in soil.
The exterior surface soil Class 3 survey unit incorporates areas that are not expected to
contain residual radioactive material. The exterior surface soil Class 2 survey unit is similar to
the Class 3 survey unit but is expected to contain concentrations of residual radioactive
material below the wide-area derived concentration guideline level (DCGLw). The Class 1
interior concrete survey units are expected to contain small areas of elevated activity that
may or may not exceed the DCGLw. The Class 2 interior drywall survey unit is similar to the
Class 1 interior concrete survey unit, but the drywall is expected to have a lower background,
less measurement variability, and a more uniform distribution of radioactive material. The
Class 2 survey unit is not expected to contain areas of residual radioactive material above the
DCGLw. The survey design parameters and DQOs developed for these survey units under
Scenario A are summarized in Table 8.6.
Table 8.7 provides survey design parameters and DQOs developed for two survey units
under Scenario B where the lower bound of the gray region is zero or indistinguishable from
background for a radionuclide that is in the natural background.
Table 8.6: Final Status Survey Parameters for Example Survey Units for Scenario A

Survey Unit | Type | DQO α | DQO β | LBGR | DCGLw^a | σ^b (Survey) | σ^b (Reference) | Test/Section
Exterior Surface Soil | Class 2 | 0.025 | 0.025 | 128 Bq/kg | 140 Bq/kg | 4.0 Bq/kg | N/A | Sign/Example 5
Exterior Surface Soil | Class 3 | 0.025 | 0.01 | 128 Bq/kg | 140 Bq/kg | 4.0 Bq/kg | N/A | Sign/Example 6
Interior Concrete | Class 1 | 0.05 | 0.05 | 3,000 dpm/100 cm2 | 5,000 dpm/100 cm2 | 625 dpm/100 cm2 | 220 dpm/100 cm2 | WRS/Appendix A
Interior Drywall | Class 2 | 0.025 | 0.05 | 3,000 dpm/100 cm2 | 5,000 dpm/100 cm2 | 200 dpm/100 cm2 | 200 dpm/100 cm2 | WRS/Example 7

Abbreviations: DQO = data quality objective; LBGR = lower bound of the gray region; DCGLw = derived concentration guideline level using the Wilcoxon Rank Sum test; σ = standard deviation; α = Type I decision error; β = Type II decision error; Bq = becquerel; kg = kilogram; dpm = disintegrations per minute; cm = centimeter.
^a DCGLw is given in units of becquerels per kilogram or disintegrations per minute per 100 square centimeters.
^b Estimated standard deviation from scoping, characterization, and Remedial Action Support surveys.
8.3 Radionuclide Not Present in Background
The statistical test discussed in this section is used to compare each survey unit directly with the applicable release criteria. A reference area is not included because the measurement technique is radionuclide-specific and the radionuclide of concern is not present in background (see Section 8.2.3). In this case, the concentrations of residual radioactive material are compared directly with the DCGLw. The method in this section should be used only if the radionuclide is not present in background or is present at such a small fraction of the DCGLw value as to be considered insignificant. In addition, the Sign test is applicable only if radionuclide-specific measurements are made to determine the concentrations. Otherwise, the method in Section 8.4 is recommended.
Table 8.7: Final Status Survey Parameters for Example Survey Units for Scenario B

Survey Unit | Type | DQO α | DQO β | AL | DL^a | σ^b (Survey) | σ^b (Reference) | Test/Section
Exterior Surface Soil | Class 1 | 0.05 | 0.05 | 0 Bq/kg | 12 Bq/kg | 6 Bq/kg | 6 Bq/kg | WRS
Exterior Surface Soil^c | Class 3 | 0.05 | 0.01 | 0 Bq/kg | 12 Bq/kg | 6 Bq/kg | 6 Bq/kg | WRS

Abbreviations: DQO = data quality objective; DL = discrimination limit; σ = standard deviation; α = Type I decision error; β = Type II decision error; Bq = becquerel; kg = kilogram; WRS = Wilcoxon Rank Sum.
^a AL is zero.
^b Estimated standard deviation from scoping, characterization, and Remedial Action Support surveys.
^c This survey unit is not worked out in further examples.
Reference area samples are not needed when there is sufficient information to indicate that there is essentially no background concentration for the radionuclide being considered. With only a single set of survey unit samples, the statistical test used here is the Sign test. See Section 5.3.4 for further information appropriate to following the example and discussion presented here.
8.3.1 Sign Test
The Sign test is designed to detect failure of the survey unit to meet release criteria if the radioactive material is distributed across that survey unit. Although the parameter of interest is usually the mean concentration of residual radioactive material in the survey unit, the median is used in the Sign test as an estimate of the mean. This test does not assume that the data follow any particular distribution, such as normal or lognormal.
In Scenario A, the hypothesis tested by the Sign test is as follows:
Null Hypothesis
H0: The median concentration of residual radioactive material in the survey unit is greater than or equal to the DCGLw.
Versus
Alternative Hypothesis
H1: The median concentration of residual radioactive material in the survey unit is less than the DCGLw; also defined as Ha.
The null hypothesis is assumed to be true unless the statistical test indicates that it should be rejected in favor of the alternative. For Scenario A, the null hypothesis states that the probability
of a measurement less than the DCGLw is less than one-half, i.e., the 50th percentile (or
median) is greater than the DCGLw.
Because the Sign test uses the median instead of the mean, the null hypothesis in Scenario A
may be rejected if the median concentration is less than the DCGLw, even if the mean
concentration is greater than or equal to the DCGLw. If the mean concentration is greater than
or equal to the DCGLw, the survey unit does not meet the release criteria (see Table 8.3).
Furthermore, in addition to the Sign test, the DCGLemc (see Section 5.3.5) is compared to each
measurement to ensure none exceeds the DCGLemc. If a measurement exceeds the DCGLemc,
then additional investigation is recommended, at least locally, to determine the actual areal
extent of the elevated concentration.
In Scenario B, the hypothesis tested by the Sign test is as follows:
Null Hypothesis
H0: The median concentration of residual radioactive material in the survey unit is
less than or equal to the AL.
Versus
Alternative Hypothesis
H1: The median concentration of residual radioactive material in the survey unit is
greater than the AL.
Again, the null hypothesis is assumed to be true unless the statistical test indicates that it should
be rejected in favor of the alternative. For Scenario B, the null hypothesis states that the
probability of a measurement greater than the AL is less than one-half (i.e., the 50th percentile
[or median] is less than the AL).
When using the Sign test for both Scenario A and B, it is necessary to show that there are a
sufficient number of measurements or samples with concentrations below the DCGLw or AL,
respectively. Under Scenario A, when there are too many measurements or samples with
concentrations above the DCGLw, we fail to reject the null hypothesis that the survey unit does
not meet the release criteria. Under Scenario B, when there are too many measurements or
samples with concentrations above the lower bound of the gray region (LBGR), we reject the
null hypothesis that the survey unit does meet the release criteria.
When the values of α and β are selected in the DQO process, an important difference between Scenario A and Scenario B should be considered. For a fixed value of N, a lower value for α is more protective in Scenario A, but is less protective in Scenario B. In both scenarios, a lower value for α requires a higher degree of evidence before the null hypothesis is rejected. In Scenario A, the null hypothesis is that the survey unit exceeds the release criteria, and a lower value for α makes it more difficult to reject this hypothesis. In Scenario B, the null hypothesis is that the survey unit meets the release criteria, and a lower value of α makes it more difficult to reject this hypothesis.
Note that some individual survey unit measurements may exceed the DCGLw even when the
survey unit as a whole meets the release criteria. In fact, a survey unit sample mean that is
close to the DCGLw might have almost half of its individual measurements greater than the DCGLw. Such a survey unit may still not exceed the release criteria. The risk associated with any areas above the DCGL is evaluated by developing a DCGLemc based on dose or risk modeling. The DCGLemc is higher than the DCGLw and considers the size of the elevated area. As long as the concentrations in the elevated areas are less than the DCGLemc, the site can be released. See Section 8.6.1 for additional details.
The assumption is that the survey unit measurements are independent random samples from a
symmetric distribution. If the distribution of measurements is symmetric, the median and the
mean are the same. If the distribution of measurements is highly skewed, then the efficacy of
the statistical tests is reduced because of the underlying homogeneity assumptions inherent in
the decision criteria (i.e., DCGL calculations).
The hypothesis specifies release criteria in terms of a DCGLw. The test should have sufficient power (1 - β, as specified in the DQOs) to detect concentrations of residual radioactive material at the LBGR, which is less than the DCGLw. If σ is the standard deviation of the measurements in the survey unit, then the relative shift Δ/σ (the width of the gray region, DCGLw - LBGR, divided by the standard deviation σ) reflects the difference between the average concentration of radioactive material and the DCGL relative to measurement variability. The procedure for determining Δ/σ is given in Section 5.3.3.2.
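As a worked illustration of this definition, the relative shift for the Class 2 exterior surface soil survey unit in Table 8.6 can be computed directly (a hypothetical Python fragment; the arithmetic can obviously be done by hand):

    def relative_shift(dcgl_w, lbgr, sigma):
        """Relative shift: width of the gray region divided by the standard deviation."""
        return (dcgl_w - lbgr) / sigma

    print(relative_shift(140.0, 128.0, 4.0))   # 3.0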
As stated above, the null hypothesis for Scenario B is that the median concentration of residual
radioactive material in the survey unit is less than the LBGR (or AL). To use the Sign test with
Scenario B, the concentration of radioactive material in background should be zero or
insignificant compared to the LBGR (or AL). In some cases, the LBGR (or AL) may be set equal
to zero (e.g., release criteria require that concentrations be indistinguishable from background
and the radionuclide is not present in background). In this case, results should be scattered
about zero; therefore, if there are too many results with concentrations greater than zero, the
null hypothesis should be rejected. Results less than zero are both possible and likely when the
concentrations are truly equal to zero and measurements are subject to some random
component of measurement method uncertainty.
In this case, the number of positive and negative results are expected to be the same, and the
average of all the results is expected to be zero. When analyzing samples where the
concentration is very small, the data analysis should be reviewed carefully, because even
relatively small systematic errors can result in relatively large differences in the number of
positive and negative results.
Some laboratories report results below the lower limit of detection as "< LLD" or below the
minimum detectable activity as "< MDA". Under Scenario A, the use for the Sign test of such
results is usually not problematic, because the DL is required to be less than the DCGLw, and
any values less than the DL will also be less than the DCGLw. However, under Scenario B, in
which the DL is greater than the AL, it is difficult to determine if the concentrations reported as
"< LLD" or "< MDA" are greater than or less than the AL. For this reason, the Sign test should be used only for Scenario B when actual concentrations, no matter how small, are reported.
8.3.2 Applying the Sign Test
The Sign test is applied as outlined in the following five steps, and further illustrated by Examples 5 and 6. Separate instructions are given for Scenarios A and B.
Scenario A
1. List the survey unit measurements: xi, i = 1, 2, 3, ..., N.
2. Subtract each measurement, xi, from the DCGLw to obtain the differences: Di = DCGLw - xi, i = 1, 2, 3, ..., N.
3. Discard each difference that is exactly zero and reduce the sample size, N, by the number of such measurements exactly equal to the DCGLw.
4. Count the number of positive differences. The result is the test statistic S+. (Note that a positive difference corresponds to a measurement below the DCGLw and contributes evidence that the survey unit meets the release criteria.)
5. Large values of S+ indicate that the null hypothesis is false. The value of S+ is compared to the critical values in Table I.4. If S+ is greater than the critical value, k, in that table, the null hypothesis is rejected.
Scenario B
1. List the survey unit measurements: xi, i = 1, 2, 3, ..., N.
2. Subtract the AL from each measurement, xi, to obtain the differences: Di = xi - AL, i = 1, 2, 3, ..., N.
3. Discard each difference that is exactly zero and reduce the sample size, N, by the number of such measurements exactly equal to the AL.
4. Count the number of positive differences. The result is the test statistic S+. (Note that a positive difference corresponds to a measurement above the AL and contributes evidence that the survey unit does not meet the release criteria.)
5. Large values of S+ indicate that the null hypothesis is false. The value of S+ is compared to the critical values in Table I.4. If S+ is greater than the critical value, k, in that table, the null hypothesis is rejected.
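For either scenario, the counting in steps 1-4 reduces to a few lines of code. The sketch below (Python with NumPy, an assumption; hand tallying works just as well) computes S+ for Scenario A; the critical value k for step 5 must still be read from Table I.4 for the reduced sample size and the chosen α.

    import numpy as np

    def sign_test_statistic(measurements, dcgl_w):
        """Scenario A Sign test, steps 1-4: count positive differences DCGLw - x_i."""
        diffs = dcgl_w - np.asarray(measurements, dtype=float)
        diffs = diffs[diffs != 0.0]        # step 3: drop measurements exactly equal to the DCGLw
        s_plus = int(np.sum(diffs > 0))    # step 4: measurements below the DCGLw
        return s_plus, diffs.size          # test statistic and reduced sample size N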
Passing a survey unit without making a single calculation may seem an unconventional approach. However, the key is in the survey design, which is intended to ensure enough measurements are made to satisfy the DQOs. As in the previous example, after the data are collected, the conclusions and power of the test can be checked by constructing a retrospective power curve as outlined in Appendix M.
In addition to checking the power of the statistical test, it is also important to ensure that the
uncertainty of the measurements met the MQOs for required measurement uncertainty. One
final consideration remains regarding the survey unit classification: "Was any definite amount of
residual radioactive material found in the survey unit?" This will depend on the MDC of the
measurement method. Generally, the MDC is at least three or four times the estimated
measurement standard deviation. For example, in Table 8.9, the largest observation,
9.3 becquerels/kilogram (Bq/kg; 0.25 picocuries/gram [pCi/g]), is less than three times the
estimated measurement standard deviation of 3.8 Bq/kg (0.10 pCi/g). Thus, it is unlikely that
any of the measurements could be considered indicative of positive residual radioactive
material. This means that the Class 3 survey unit classification was appropriate. Examples 5
and 6 illustrate how to use the Sign test on Class 2 and 3 exterior soil units.
Example 5: Sign Test for a Class 2 Exterior Soil Survey Unit
Refer back to Example 4 for background information. For the Class 2 Exterior Soil survey
unit, the Sign test is appropriate, because the radionuclide of concern does not appear in
background and radionuclide-specific measurements were made. Scenario A is selected.
Table 8.6 shows that the DQOs for this survey unit include α = 0.025 and β = 0.025. The
DCGLw is 140 becquerels/kilogram (Bq/kg; 3.8 picocuries/gram [pCi/g]), and the LBGR was
selected to be 128 Bq/kg (3.5 pCi/g). The estimated standard deviation of the measurements
is σ = 4.0 Bq/kg (0.11 pCi/g). The relative shift was calculated to be
Δ/σ = (DCGLw - LBGR)/σ = (140 Bq/kg - 128 Bq/kg)/(4.0 Bq/kg) = 3.0
Table 5.3 indicates the number of measurements estimated for the Sign test, N, is 20
(α = 0.025, β = 0.025, and Δ/σ = 3). (Table I.2 in Appendix I also lists the number of
measurements estimated for the Sign test.) This survey unit is Class 2, so the 20
measurements needed were made on a random-start triangular grid. When laying out the
grid, 22 measurement locations were identified.
The 22 measurements taken on the exterior lawn Class 2 survey unit are shown in the first
column of Table 8.8. The mean of these data is 129 Bq/kg (3.5 pCi/g), and the standard
deviation is 11 Bq/kg (0.30 pCi/g). Since the number of measurements is even, the median of
the data is the average of the two middle values (126+128)/2 = 127 Bq/kg (3.4 pCi/g). A
quantile plot of the data is shown in Appendix L, Figure L.3.
Five measurements exceed the DCGLw value of 140 Bq/kg: 142, 143, 145, 148, and 148.
However, none exceed the mean of the data plus three standard deviations:
129 + (3 × 11) = 162 Bq/kg (4.3 pCi/g). Thus, these values appear to reflect the overall
variability of the concentration measurements rather than to indicate an area of elevated
activity—provided that these measurements were scattered through the survey unit.
However, if a posting plot demonstrates that the locations of these measurements are	
grouped together, then that portion of the survey unit containing these locations merits further
investigation.
The middle column of Table 8.8 contains the differences, DCGLw - Data, and the last column
contains the signs of the differences. The bottom row shows the number of measurements
with positive differences, which is the test statistic S+. In this case, S+ = 17.
The value of S+ is compared to the appropriate critical value in Table I.4. In this case, for
N = 22 and α = 0.025, the critical value is 16. Because S+ = 17 exceeds this value, the null
hypothesis that the survey unit exceeds the release criteria is rejected.
Table 8.8: Example Sign Analysis: Class 2 Exterior Soil Survey Unit

Data (Bq/kg) | DCGLw - Data (Bq/kg) | Sign
121 | 19 | +
143 | -3 | -
145 | -5 | -
112 | 28 | +
125 | 15 | +
132 | 8 | +
122 | 18 | +
114 | 26 | +
123 | 17 | +
148 | -8 | -
115 | 25 | +
113 | 27 | +
126 | 14 | +
134 | 6 | +
148 | -8 | -
130 | 10 | +
119 | 21 | +
136 | 4 | +
128 | 12 | +
125 | 15 | +
142 | -2 | -
129 | 11 | +
Number of positive differences S+ = 17
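Applying the same counting rule to the Table 8.8 data reproduces the result above (a self-contained, purely illustrative Python fragment):

    dcgl_w = 140.0
    table_8_8_data = [121, 143, 145, 112, 125, 132, 122, 114, 123, 148, 115,
                      113, 126, 134, 148, 130, 119, 136, 128, 125, 142, 129]
    s_plus = sum(1 for x in table_8_8_data if dcgl_w - x > 0)
    print(s_plus)   # 17; since 17 > k = 16 (Table I.4, N = 22, alpha = 0.025), reject the null hypothesis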
Example 6: Sign Test for a Class 3 Exterior Soil Survey Unit
Refer back to Example 4 for background information. For the Class 3 exterior soil survey
unit, the Sign test is again appropriate, because the radionuclide of concern does not appear
in background and radionuclide-specific measurements were made. Scenario A is selected.
Table 8.6 shows that the DQOs for this survey unit include α = 0.025 and β = 0.01. The
DCGLw is 140 becquerels/kilogram (Bq/kg; 3.8 picocuries/gram [pCi/g]), and the LBGR was
selected to be 128 Bq/kg (3.5 pCi/g). The estimated standard deviation of the measurements
is σ = 4.0 Bq/kg (0.11 pCi/g). The relative shift was calculated to be
Δ/σ = (DCGLw - LBGR)/σ = (140 Bq/kg - 128 Bq/kg)/(4.0 Bq/kg) = 3.0
Table 5.3 indicates that the sample size estimated for the Sign test, N, is 23 (α = 0.025,
β = 0.01, and Δ/σ = 3). This survey unit is Class 3, so the measurements were made at
random locations within the survey unit. The 23 measurements taken on the exterior lawn are
shown in the first column of Table 8.9. The mean of these data is 2.1 Bq/kg (0.057 pCi/g),
and the standard deviation is 3.3 Bq/kg (0.089 pCi/g). None of the data exceed
2.1 Bq/kg + (3 × 3.3 Bq/kg) = 12.0 Bq/kg (0.32 pCi/g). Because N is odd, the median is the
middle (12th-highest) value, namely 2.6 Bq/kg (0.070 pCi/g).
An initial review of the data reveals that every data point is below the DCGLw, so the survey
unit meets the release criteria specified in Table 8.3. For purely illustrative purposes, the Sign
test analysis is performed. The middle column of Table 8.9 contains the quantity
DCGLw - Data. Because every data point is below the DCGLw, the sign of DCGLw - Data is
always positive. The number of positive differences is equal to the number of measurements,
N, and so the Sign test statistic S+ is 23. The null hypothesis will always be rejected at the
maximum value of S+ (which in this case is 23) and the survey unit passes. Thus, the
application of the Sign test in such cases requires no calculations and one need not consult a
table for a critical value. If the survey is properly designed, the critical value must always be
less than N.
Notice that some of these measurements are negative (e.g., -0.37 for sample 5 in Table 8.9). This might occur if
an analysis background (e.g., the Compton continuum under a spectrum peak) is subtracted
to obtain the net concentration value. The data analysis is both easier and more accurate
when numerical values are reported as obtained rather than reporting the results as "less
than" or not detected.
If one determines that residual radioactive material is definitely present, this would indicate that the survey unit was initially misclassified. Ordinarily, MARSSIM recommends a resurvey using a Class 1 or Class 2 design. In some cases, the original survey may have met the requirements
for the Class 1 or 2 design. Section 8.6.3 includes additional discussion on the misclassification of survey units.
For example, if one determines that the survey unit is a Class 2, a resurvey might be avoided if the survey unit does not exceed the maximum size recommended for such a classification. In this case, the only difference in survey design would be whether the measurements were obtained on a random or on a triangular grid. Provided that the initial survey's scanning methodology has sufficient detection capability to detect areas at the DCGLw, versus the higher DCGLemc, the scan would be able to compensate for differences in the survey grid sample locations, and those differences alone would not affect the outcome of the statistical analysis. Therefore, if the above conditions were met, a resurvey might not be necessary.
Table 8.9: Sign Test Example Data for Class 3 Exterior Survey Unit

Sample Number | Data (Bq/kg) | DCGLw - Data (Bq/kg) | Sign
1 | 3.0 | 137.0 | +
2 | 3.0 | 137.0 | +
3 | 1.9 | 138.1 | +
4 | 0.37 | 139.6 | +
5 | -0.37 | 140.4 | +
6 | 6.3 | 133.7 | +
7 | -3.7 | 143.7 | +
8 | 2.6 | 137.4 | +
9 | 3.0 | 137.0 | +
10 | -4.1 | 144.1 | +
11 | 3.0 | 137.0 | +
12 | 3.7 | 136.3 | +
13 | 2.6 | 137.4 | +
14 | 4.4 | 135.6 | +
15 | -3.3 | 143.3 | +
16 | 2.1 | 137.9 | +
17 | 6.3 | 133.7 | +
18 | 4.4 | 135.6 | +
19 | -0.37 | 140.4 | +
20 | 4.1 | 135.9 | +
21 | -1.1 | 141.1 | +
22 | 1.1 | 138.9 | +
23 | 9.3 | 130.7 | +
Number of positive differences S+ = 23
8.4 Radionuclide Present in Background
The statistical tests discussed in this section will be used to compare each survey unit with an appropriately chosen, site-specific reference area. Each reference area should be selected on the basis of its similarity to the survey unit, as discussed in Section 4.6.3.
8.4.1 Wilcoxon Rank Sum Test and Quantile Test
In Scenario A, the comparison of measurements from the reference area and survey unit is made using the WRS test. In Scenario B, in addition to the WRS test, the quantile test should be used to further evaluate survey units when the WRS test fails to reject the null hypothesis. The recommended tests should be conducted for each survey unit. In addition, the EMC is performed against each measurement to ensure that it does not exceed a specified investigation level (e.g., DCGLemc for Class 1 survey units). If any measurement in the remediated survey unit exceeds the specified investigation level, then additional investigation is recommended, at least locally, regardless of the outcome of the WRS test.
The WRS test is most effective when residual radioactive material is uniformly present throughout a survey unit. For Scenario A, the test is designed to detect whether this residual radioactive material exceeds the DCGLw. For Scenario B, it is designed to detect whether this residual radioactive material exceeds the AL.
The advantage of the nonparametric WRS test is that it does not assume that the data are normally or lognormally distributed. The WRS test also allows "less than" measurements to be present in the reference area and the survey units. The WRS test can generally be used with up to 40 percent "less than" measurements in either the reference area or the survey unit. However, the use of "less than" values in data reporting is not recommended, as discussed in Sections 2.3.5 and 8.3. When possible, report the actual result of a measurement together with its uncertainty.
The quantile test is a statistical test for non-uniformity in the distribution of the residual radioactive material. The quantile test was developed to detect differences between the survey unit and the reference area that consist of a shift to higher values in only a fraction of the survey unit. The quantile test is performed only when Scenario B is used and only if the null hypothesis is not rejected for the WRS test. Using the quantile test in tandem with the WRS test results in higher power to identify survey units that do not meet the release criteria than either test by itself. Using the quantile test in tandem with the WRS test also results in a higher probability of Type I errors when the true concentration is equal to the AL. The probability of making a Type I error on at least one of the two tests is approximately α = αQ + αW, where αQ and αW are the values of alpha selected for the quantile and WRS tests, respectively. For this reason, when the quantile test is performed in tandem with the WRS test, αQ and αW should both be set equal to α/2 so that when the true concentration is equal to the AL, the probability of a Type I error of the two tests in tandem is approximately α.
In Scenario A, the hypothesis tested by the WRS test is as follows:
Null Hypothesis
H0: The median concentration of residual radioactive material in the survey unit exceeds that in the reference area by more than the DCGLw.
Versus
Alternative Hypothesis
H1: The median concentration of residual radioactive material in the survey unit exceeds that in the reference area by less than the DCGLw.
In Scenario B, the hypothesis tested by the WRS test is as follows:
Null Hypothesis
H0: The median concentration of residual radioactive material in the survey unit exceeds that in the reference area by less than the AL.
Versus
Alternative Hypothesis
H1: The median concentration of residual radioactive material in the survey unit exceeds that in the reference area by more than the AL.
Scenario B is used when the goal of remediation is that residual radioactive material in the survey
unit be indistinguishable from background activity levels in the reference area (e.g., AL = 0) or
when the AL is below some discrimination level.
When the values of α and β are selected in the DQO process, an important difference between Scenario A and Scenario B should be considered. For a fixed value of N, a lower value for α is more protective in Scenario A, and a lower value for α is less protective in Scenario B. In both scenarios, a lower value for α requires a higher degree of evidence before the null hypothesis is rejected. In Scenario A, the null hypothesis is that the survey unit exceeds the release criteria, and a lower value for α makes it more difficult to reject this hypothesis. In Scenario B, the null hypothesis is that the survey unit meets the release criteria, and a lower value of α makes it more difficult to reject this hypothesis. An illustration of this effect is shown in Example 8 presented in Section 8.4.3.
In both scenarios, the null hypothesis is assumed to be true unless the statistical test indicates
that it should be rejected in favor of the alternative. One assumes that any difference between
the reference area and survey unit concentration distributions is due to a shift in the survey unit
concentrations to higher values (i.e., due to the presence of residual radioactive material in
addition to background). Note that some or all of the survey unit measurements may be larger
than some reference area measurements while still meeting the release criteria. Indeed, some
survey unit measurements may exceed some reference area measurements by more than the
DCGLw. The result of the hypothesis test determines whether the survey unit as a whole is
deemed to meet the release criteria. The EMC is used to screen individual measurements.
Two assumptions underlie this test: (1) Samples from the reference area and survey unit are independent, identically distributed random samples, and (2) each measurement is independent of every other measurement, regardless of the set of samples from which it came.
8.4.2 Applying the Wilcoxon Rank Sum Test
The WRS test is applied as outlined in the following six steps and further illustrated by the examples in Section 8.4.3 and Appendix A. Separate instructions are provided for Scenarios A and B.
Scenario A
1. Obtain the adjusted reference area measurements, zi, by adding the DCGLw to each reference area measurement, xi: zi = xi + DCGLw. The m adjusted reference sample measurements, zi, from the reference area and the n sample measurements, yi, from the survey unit are pooled and ranked in order of increasing size from 1 to N, where N = m + n.
2. If several measurements are tied (i.e., have the same value), all are assigned the average rank of that group of tied measurements.
3. If there are t "less than" values, all are given the average of the ranks from 1 to t. Therefore, they are all assigned the rank t(t + 1)/(2t) = (t + 1)/2, which is the average of the first t integers. If there is more than one detection limit, all observations below the largest detection limit should be treated as "less than" values.5
4. Sum the ranks of the adjusted measurements from the reference area, Wr. Note that because the sum of the first N integers is N(N + 1)/2, one can equivalently sum the ranks of the measurements from the survey unit, Ws, and compute Wr = N(N + 1)/2 - Ws.
5. Compare Wr with the critical value given in Table I.5 for the appropriate values of n, m, and α. If Wr is greater than the tabulated value, reject the hypothesis that the survey unit exceeds the release criteria.
Scenario B
1. Obtain the adjusted survey unit measurements, zi, by subtracting the AL from each survey unit measurement, yi: zi = yi - AL.
5 If more than 40 percent of the data from either the reference area or survey unit are "less than," the WRS test
cannot be used. Such a large proportion of non-detects suggests that the DQO process should be revisited for this
survey to determine if the survey unit was properly classified or the appropriate measurement method was used. As
stated previously, the use of "less than" values in data reporting is not recommended. Wherever possible, the actual
result of a measurement, together with its uncertainty, should be reported.
2.	The m adjusted sample measurements, z_i, from the survey unit and the n reference
measurements, x_i, from the reference area are pooled and ranked in order of
increasing size from 1 to N, where N = m + n. (Note: When using Table I.5 for
Scenario B, the roles of m and n are reversed.)
3.	If several measurements are tied (i.e., have the same value), all are assigned the
average rank of that group of tied measurements.
4.	If there are t "less than" values, they are all given the average of the ranks from 1 to
t. Therefore, all are assigned the rank t(t + 1)/(2t) = (t + 1)/2, which is the
average of the first t integers. If there is more than one detection limit, all
observations below the largest detection limit should be treated as "less than" values.
5.	Sum the ranks of the adjusted measurements from the survey unit, Ws. Note that
because the sum of the first N integers is N(N + 1)/2, one can equivalently sum the
ranks of the measurements from the reference area, Wr, and compute Ws =
N(N + 1)/2 - Wr.
6.	Compare Ws with the critical value given in Table I.5 for the appropriate values of n,
m, and α. (Because the quantile test is used in addition to the WRS test, α/2 should
be used rather than α.) If Ws is greater than the tabulated value, reject the null
hypothesis that the survey unit does not exceed the release criteria.
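Before turning to the examples, the mechanics of these steps can be illustrated with a short computational sketch. The following Python fragment is not part of MARSSIM; it is a minimal illustration of Steps 1 through 5 for Scenario A, assumes the SciPy library is available, and uses the Example 7 data presented below (for which a DCGLw of 160 cpm applies).

```python
# Minimal sketch of the Scenario A WRS calculation (illustrative only).
import numpy as np
from scipy.stats import rankdata

def wrs_scenario_a(reference, survey, dcgl_w):
    """Return Wr, the sum of the ranks of the adjusted reference area measurements."""
    adjusted_ref = np.asarray(reference, dtype=float) + dcgl_w   # Step 1: z_i = x_i + DCGL_W
    pooled = np.concatenate([adjusted_ref, np.asarray(survey, dtype=float)])
    ranks = rankdata(pooled)                                     # Steps 2-3: ranks 1..N, ties averaged
    return ranks[:len(reference)].sum()                          # Step 4: sum of reference area ranks

# Data from Example 7 (Table 8.10), in cpm; 160 cpm corresponds to the DCGLw.
reference = [49, 35, 45, 45, 41, 44, 48, 37, 46, 42, 47]
survey = [104, 94, 98, 99, 90, 104, 95, 105, 93, 101, 92]
w_r = wrs_scenario_a(reference, survey, dcgl_w=160)
print(w_r)  # 187.0; Step 5 compares this with the Table I.5 critical value (156 for n = m = 11, alpha = 0.025)
```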
Example 7 illustrates the WRS test in practice for a Class 2 interior drywall survey unit.
Example 7: Wilcoxon Rank Sum Test Example: Class 2 Interior Drywall Survey Unit
Refer to Example 4 for background information. In this example, the gas-flow proportional
counter measures gross beta activity (see Appendix H), and the measurements are not
radionuclide-specific. The Wilcoxon Rank Sum (WRS) test is appropriate for the Class 2
interior drywall survey unit because background contributes to gross beta activity even
though the radionuclide of interest does not appear in background. Scenario A is selected
because the derived concentration guideline level using the WRS test (DCGLw) is higher than
the discrimination limit. As a result, the quantile test will not be needed for this example.
Table 8.6 shows that the data quality objectives (DQOs) for this survey unit include α = 0.025
and β = 0.05. The DCGLw is 8,300 becquerels/square meter (Bq/m2; 5,000 decays per
minute [dpm]/100 square centimeters [cm2]) and the lower bound of the gray region (LBGR)
was selected to be 5,000 Bq/m2 (3,000 dpm/100 cm2). The estimated standard deviation, σ,
of the measurements is about 830 Bq/m2 (500 dpm/100 cm2). The relative shift was
calculated to be 4.0, as shown below:

Δ/σ = (DCGLw − LBGR)/σ = (8,300 Bq/m2 − 5,000 Bq/m2)/(830 Bq/m2) = 4.0
In Table 5.2, one finds that the number of measurements estimated for the WRS test is 11 in
each survey unit and 11 in each reference area (α = 0.025, β = 0.05, and Δ/σ = 4).
(Table I.3 in Appendix I also lists the number of measurements estimated for the WRS test.)
Table 8.10 lists the data obtained from the gas-flow proportional counter in units of counts
per minute (cpm). A reading of 160 cpm with this instrument corresponds to the DCGLw of
8,300 Bq/m2 (5,000 dpm/100 cm2). Column A lists the measurement results as they were
obtained. The sample mean and sample standard deviation of the reference area
measurements are 44 and 4.4 cpm, respectively. The sample mean and sample standard
deviation of the survey unit measurements are 98 and 5.3 cpm, respectively. In column B, the
code "R" denotes a reference area measurement, and "S" denotes a survey unit
measurement. Column C contains the Adjusted Data. The Adjusted Data are obtained by
adding the DCGLw to the reference area measurements (see Section 8.4.2, Step 1). The
ranks of the adjusted data appear in Column D. They range from 1 to 22, because there is a
total of 11+11 measurements (see Section 8.4.2, Step 2).
Note that there are two cases of measurements tied at the same value, at 104 and 205. Each tied
measurement is assigned the average of the ranks for its group. Therefore, both measurements at
104 are assigned rank (9 + 10)/2 = 9.5 (see Section 8.4.2, Step 3). Also note that the sum
of all of the ranks is still 22(22 + 1)/2 = 253. Checking this value with the formula in Step 5
of Section 8.4.2 is recommended to guard against errors in the rankings.
Column E contains only the ranks belonging to the reference area measurements. The total is
187. This is compared with the critical value of 156 in Table I.5 for α = 0.025,
with n = 11 and m = 11. Because the sum of the reference area ranks is greater than the
critical value, the null hypothesis (i.e., that the mean survey unit concentration exceeds the
DCGLw) is rejected.
If some of the survey unit values had been higher and had ranked above some of the
reference area samples, then the sum of the reference area ranks would have been lower
(because the survey unit values are not counted in that sum and would have displaced some
reference area values to lower ranks). This moves the sum closer to the critical value. If enough
survey unit ranks had displaced reference area rankings, the sum would have fallen below the
critical value and the null hypothesis would not have been rejected.
Table 8.10: WRS Test for Class 2 Interior Drywall Survey Unit in Example 7

Sample    A            B              C               D       E
Number    Data (cpm)   Unit or Area   Adjusted Data   Ranks   Reference Area Ranks
1         49           R              209             22      22
2         35           R              195             12      12
3         45           R              205             17.5    17.5
4         45           R              205             17.5    17.5
5         41           R              201             14      14
6         44           R              204             16      16
7         48           R              208             21      21
8         37           R              197             13      13
9         46           R              206             19      19
10        42           R              202             15      15
11        47           R              207             20      20
12        104          S              104             9.5     0
13        94           S              94              4       0
14        98           S              98              6       0
15        99           S              99              7       0
16        90           S              90              1       0
17        104          S              104             9.5     0
18        95           S              95              5       0
19        105          S              105             11      0
20        93           S              93              3       0
21        101          S              101             8       0
22        92           S              92              2       0
                                      Sum =           253     187
Abbreviation: cpm = counts per minute.
8.4.3 Applying the Quantile Test—Used Only in Scenario B
The quantile test was developed to detect differences between the survey unit and the
reference area that consist of a shift to higher values in only a fraction of the survey unit. It
should be noted that, in general, this shift is not necessarily the same as the shift used for the
WRS test. The quantile test is better at detecting situations in which only a portion of the survey
unit contains excess residual radioactive material. The WRS test is better at detecting situations
in which any excess residual radioactive material is uniform across the entire survey unit. The
quantile test is used only in Scenario B. The quantile test is performed after the WRS test, if the
null hypothesis for the WRS test has not been rejected. Using the quantile test in tandem with
the WRS test in Scenario B results in higher power to detect survey units that have not been
adequately remediated than either test has by itself.
The quantile test is outlined in the six steps below:
1.	Calculate α_Q (α_Q = α/2).
2.	Obtain the adjusted survey unit measurements, z_i, by subtracting the AL from each survey
unit measurement, y_i: z_i = y_i - AL. If the DCGLw is equal to zero, then this step is not
necessary.
3.	The n adjusted survey unit measurements, z_i, and the m reference area measurements, x_i,
are pooled and ranked in order of increasing size from 1 to N, where N = m + n.
4. If several measurements are tied (i.e., have the same value), all are assigned the mean rank
of that group of tied measurements.
5. Look up the values for r and k in Tables I.7 through I.10, based on the number of measurements
in the survey unit (n), the number of measurements in the reference area (m), and α_Q. The
operational decision described in the next step is made using the values for r and k.
6. If k or more of the r largest measurements in the combined ranked data set are from the
survey unit, the null hypothesis is rejected.
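A minimal computational sketch of the quantile test decision is shown below. It is not part of MARSSIM and is illustrative only; the values of r and k are assumed to have already been obtained from Tables I.7 through I.10 for the actual values of n, m, and α_Q.

```python
# Minimal sketch of the quantile test (Scenario B); illustrative only.
import numpy as np
from scipy.stats import rankdata

def quantile_test_rejects(reference, survey, action_level, r, k):
    """Return True if k or more of the r largest pooled values come from the survey unit."""
    adjusted_survey = np.asarray(survey, dtype=float) - action_level    # Step 2: z_i = y_i - AL
    pooled = np.concatenate([np.asarray(reference, dtype=float), adjusted_survey])
    ranks = rankdata(pooled)                                            # Steps 3-4: ranks with ties averaged
    from_survey = np.array([False] * len(reference) + [True] * len(survey))
    largest_r = np.argsort(ranks)[-r:]        # indices of the r largest values (ties at the
                                              # boundary may require judgment in practice)
    return from_survey[largest_r].sum() >= k  # Step 6: decision
```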
Examples 8-10 illustrate how certain tests can be used under a variety of testing scenarios.
Example 8: Class 2 Interior Survey Example Under Scenario B Using Wilcoxon Rank
Sum and Quantile Tests
Refer to Example 4 for background information. The data for an example Wilcoxon Rank
Sum (WRS) test using Scenario B are shown in Column A of Table 8.11. In Column B, the
label "R" is inserted to denote a reference area measurement, and the label "S" to denote a
survey unit measurement. Column C contains the adjusted data obtained by subtracting the
lower bound of the gray region (LBGR) of 142 counts per minute (cpm) from just the survey
unit measurements (the reference area measurements are not adjusted). The ranks of the
adjusted data in Column C are listed in Column D. The ranks range from 1 to 24, because
there are 12 + 12 = 24 measurements. The sum of all the ranks is N(N + 1)/2 = (24*25)/2 =
300. Column E contains only the ranks belonging to the adjusted survey unit measurements.
The sum of the ranks of the adjusted survey unit data is 194.5. From Table I.5, for α = 0.025
and n = m = 12, the critical value is 184. Because the sum of the adjusted survey unit ranks,
194.5, is greater than the critical value, 184, the null hypothesis that the survey unit
concentrations do not exceed the LBGR is rejected (i.e., the site is determined to be dirty). In
Scenario B, the true concentration of radioactive material in the survey unit is judged to be in
excess of 142 cpm above the background.
For the quantile test, Table I.8 provides the critical value, k, of the largest r measurements for
different values of n, the number of measurements from the survey unit, and m, the number
of measurements from the reference area. The same rankings in Column D of Table 8.11 for
the WRS test can be used for the quantile test. If k or more of the r largest measurements in
the combined ranked data set are from the survey unit, the null hypothesis is rejected. For a
survey unit that has failed the WRS test, as was the case in this example, it is not usually
necessary to also perform the quantile test. However, the quantile test is presented for
illustrative purposes.
In Table 8.11, Columns F and G show the sorted ranks of the adjusted data and the location
associated with each rank (i.e., "R" for reference area and "S" for survey unit). In Table I.8,
the closest entry to n = m = 12 is for n = m = 10. The values of r = 7, k = 6, and α = 0.029 are
found. Thus, the null hypothesis is rejected if six of the seven largest adjusted measurements
come from the survey unit. From Table 8.11, we find that only five of the seven largest
adjusted measurements come from the survey unit, so the null hypothesis is not rejected
based on the quantile test. The values of n and m that were used are close to, but not equal
to, the actual values, so the α value will be different from that listed in the table. It is prudent to
check a few other entries in Table I.8 that are near the actual sample size. Additionally,
Chapter 7 in NUREG-1505 (NRC 1998a) provides equations to calculate exact and
approximate values of the alpha error for the quantile test as a function of n, m, k, and r.
Table 8.11: WRS and Quantile Test Under Scenario B for Class 2 Interior Drywall Survey Unit in Example 8

Sample    A            B      C               D       E                   F              G
Number    Data (cpm)   Area   Adjusted Data   Ranks   Survey Unit Ranks   Sorted Ranks   Location Associated with Sorted Ranks (a)
1         47           R      47              18      —                   1              R
2         28           R      28              1       —                   2              R
3         36           R      36              6       —                   3              R
4         37           R      37              7       —                   4.5            R
5         39           R      39              9.5     —                   4.5            S
6         45           R      45              13      —                   6              R
7         43           R      43              11      —                   7              R
8         34           R      34              3       —                   8              S
9         32           R      32              2       —                   9.5            R
10        35           R      35              4.5     —                   9.5            R
11        39           R      39              9.5     —                   11             R
12        51           R      51              21      —                   13             R
13        209          S      67              24      24                  13             S
14        197          S      55              23      23                  13             S
15        188          S      46              16      16                  16             S
16        191          S      49              19      19                  16             S
17        193          S      51              21      21                  16             S
18        187          S      45              13      13                  18             R
19        188          S      46              16      16                  19             S
20        180          S      38              8       8                   21             R
21        193          S      51              21      21                  21             S
22        188          S      46              16      16                  21             S
23        187          S      45              13      13                  23             S
24        177          S      35              4.5     4.5                 24             S
                               Sum =          300     194.5               —              —
a Measurements from the reference area and the survey unit are denoted by R and S, respectively. The adjusted data
and data columns are identical when AL = 0.
Example 9: Example Using the NUREG-1505 Kruskal-Wallis Test to Determine Whether It Is
Appropriate to Consider Variability from Background under Scenario B
NUREG-1505 (NRC 1998a) provides guidance on methods used to demonstrate
indistinguishability from background when Scenario B is deemed appropriate to use
(e.g., when the DCGL is close to background, considering variability). A difficulty arises in the
ability to release a site when variations in mean background among the potential reference
areas become comparable in magnitude to the width of the gray region. Because any
difference in radioactivity between the reference area and the survey unit is assumed to be due
to residual radioactivity, and it is not possible to determine whether the difference is actually due
to differences in background concentrations between the two areas, tests are available to
determine the significance of background variability and how this variability can be
accounted for in the statistical tests used to help determine whether the site is clean.
The parametric F-test (which assumes a normal distribution) and the nonparametric Kruskal-Wallis
test (which makes no assumption regarding the underlying distribution) can be used to
determine whether the variability between the means of potential reference areas is statistically
significant. The data in Table 8.12 are used to determine the ranks of the reference area
measurements used to perform the Kruskal-Wallis test. NUREG-1505 Equation 13-3 is used to
calculate the Kruskal-Wallis statistic:

K = [12 / (N(N + 1))] Σ_{i=1}^{k} (R_i² / n_i) − 3(N + 1)

where N is the total number of measurements in all of the i = 1 to k reference
areas; n_i is the number of measurements in a given reference area; and R_i is the sum of the
ranks of the measurements in a given reference area.
The test statistic of 14.0 is compared to the critical values provided in NUREG-1505
Table 13.1. In the example, the Kruskal-Wallis statistic of 14.0 is above the critical values
for 4 − 1 = 3 degrees of freedom (four reference areas), which range from 11.3 for an α value of
0.01 to 4.6 for an α value of 0.2. Therefore, the null hypothesis that the variability in the reference
area means is zero can be rejected with high confidence (i.e., the null hypothesis is rejected even
for very small α, or false positive, error rates). Although the Kruskal-Wallis test (or F-test) is used
to determine if it is appropriate to consider reference area variability in applying Scenario B,
NUREG-1505 also indicates that background variability could be given the benefit of the doubt,
in which case the Kruskal-Wallis test (or F-test) need not be conducted.
If it is determined that the variability between reference area means should be considered,
NUREG-1505 Equation 13-13 can be used to calculate the variance, ω², which can be used
to determine the lower bound of the gray region (LBGR, or action level [AL]) for Scenario B.
NUREG-1505 provides an example where the mean square between reference areas, s_b²,
and the mean square within reference areas, s_w², calculated manually using Equation 13-13 or
taken from ANOVA output, can be used to compute ω² = (s_b² − s_w²)/n₀, where n₀ is equal
to the number of measurements per reference area when the number of measurements in
each reference area is the same (see Equation 13-13 in NUREG-1505 when the numbers of
measurements in the reference areas are not the same). Using the ANOVA output in
Table 8.13,

ω² = (s_b² − s_w²)/n₀ = (6.52 − 0.97)/10 = 0.55
As part of the data quality objective process, an agreed-upon value for the LBGR as a
multiple of ω can be selected (e.g., NUREG-1505 states that 3ω is a reasonable default, or,
in this example, √0.55 × 3 = 0.74 × 3 = 2.22 for the LBGR). Note that the difference in means
between reference areas 2 and 4 in Table 13.2 is 1.82, which is similar to the LBGR
calculated based on 3ω. NUREG-1505 Table 13.5 also provides information on the power of
the F-test, which is used to approximate the power of the Kruskal-Wallis test, to help
determine the number of reference areas and the number of measurements that should be
taken in each reference area to perform the Kruskal-Wallis test and to estimate ω. In all
cases, the regulatory authority should be consulted to determine the acceptability of using
Scenario B, as well as appropriate values for the test parameters.
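The calculations in this example can be cross-checked with a short script. The sketch below is illustrative only and is not part of NUREG-1505 or MARSSIM; it uses SciPy's kruskal function (which averages tied ranks, so the statistic can differ slightly from a hand calculation) together with the reference area data and ANOVA mean squares tabulated in Tables 8.12 and 8.13.

```python
# Minimal sketch of the Example 9 Kruskal-Wallis and omega-squared calculations (illustrative only).
from scipy.stats import kruskal

area1 = [0.27, 1.87, 0.97, 1.01, 2.08, 1.62, 0.30, 1.98, 2.18, 1.02]
area2 = [1.04, 0.39, 2.07, 0.57, 1.97, 0.22, 1.39, 0.05, 0.75, 2.50]
area3 = [2.45, 0.34, 3.06, 2.83, 1.09, 0.26, 2.80, 2.77, 2.42, 2.86]
area4 = [3.77, 2.63, 4.05, 1.72, 1.50, 2.47, 1.42, 2.47, 2.76, 3.35]

# Kruskal-Wallis statistic (NUREG-1505 Equation 13-3); approximately 14.0 for these data.
h_stat, p_value = kruskal(area1, area2, area3, area4)
print(h_stat, p_value)

# Variance between reference area means from the ANOVA mean squares in Table 8.13.
s_b2, s_w2, n0 = 6.52, 0.97, 10
omega_sq = (s_b2 - s_w2) / n0        # about 0.55
lbgr = 3 * omega_sq ** 0.5           # about 2.2, the 3-omega default discussed above
print(omega_sq, lbgr)
```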
Table 8.12: Calculation of ω² for Example 9

Sample    Measurements                       Measurement Ranks                  Measurements Squared
Number    Area 1  Area 2  Area 3  Area 4     Area 1  Area 2  Area 3  Area 4     Area 1  Area 2  Area 3  Area 4
1         0.27    1.04    2.45    3.77       6       13      27      39         0.07    1.08    6.00    14.21
2         1.87    0.39    0.34    2.63       20      9       8       31         3.50    0.15    0.12    6.92
3         0.97    2.07    3.06    4.05       10      23      37      40         0.94    4.28    9.36    16.40
4         1.01    0.57    2.83    1.72       11      2       35      19         1.02    0.32    8.01    2.96
5         2.08    1.97    1.09    1.50       24      21      14      17         4.33    3.88    1.19    2.25
6         1.62    0.22    0.26    2.47       18      3       5       29         2.62    0.05    0.07    6.10
7         0.30    1.39    2.80    1.42       7       15      34      16         0.09    1.93    7.84    2.02
8         1.98    0.05    2.77    2.47       22      4       33      28         3.92    0.00    7.67    6.10
9         2.18    0.75    2.42    2.76       25      1       26      32         4.75    0.56    5.86    7.62
10        1.02    2.50    2.86    3.35       12      30      36      38         1.04    6.25    8.18    11.22
Sum       13.30   7.87    20.88   26.14      —       —       —       —          22.28   18.50   54.30   75.80
Average   1.33    0.79    2.09    2.61       —       —       —       —          —       —       —       —
Average
Squared   1.77    0.62    4.36    6.83       —       —       —       —          —       —       —       —
Table 8.13: Analysis of Variance for Example 9 Data

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F Statistic
Between Groups        19.56            3                    6.52          6.69
Within Groups         35.08            36                   0.97          —
Total                 54.65            39                   —             —
Example 10: Class 1 Interior Concrete Survey Unit
As in the previous example, the gas-flow proportional counter measures gross beta activity
(see Appendix H) and the measurements are not radionuclide-specific. The nonparametric
statistical test for when the radionuclide is present in background is appropriate for the
Class 1 interior concrete survey unit because gross beta activity contributes to the overall
background, even though the specific radionuclide of interest does not appear in background.
Appendix A provides a detailed description of the calculations for the Class 1 interior
concrete survey unit.
8.4.4 Multiple Radionuclides
The use of the unity rule when there is more than one radionuclide to be considered is
discussed in Section 4.4. Example applications of the unity rule appear in
Examples 11 and 12.
Example 11: Application of WRS Test to Multiple Radionuclides
Consider a site with both cobalt-60 (60Co) and cesium-137 (137Cs) contamination. 137Cs
appears in background from fallout at a typical concentration of about 37 becquerels/kilogram
(Bq/kg; 1 picocurie/gram [pCi/g]). Assume that the DCGLw for 60Co is 74 Bq/kg (2 pCi/g) and
for 137Cs is 52 Bq/kg (1.4 pCi/g). In disturbed areas, the background concentration of 137Cs
can vary considerably. An estimated spatial standard deviation of 19 Bq/kg (0.5 pCi/g) for
137Cs will be assumed. During remediation, it was found that the concentrations of the two
radionuclides were not well correlated in the survey unit. 60Co concentrations were more
variable than the 137Cs concentrations, and 26 Bq/kg (0.7 pCi/g) is estimated for its standard
deviation. Measurement errors for both 60Co and 137Cs using gamma spectrometry will be
small compared to this. For the comparison to the release criteria, the weighted sum of the
concentrations of these radionuclides is computed from—
T = (60Co concentration / 60Co DCGL) + (137Cs concentration / 137Cs DCGL)
  = (60Co concentration / 74 Bq/kg) + (137Cs concentration / 52 Bq/kg)
The variance of the weighted sum, assuming that the 60Co and 137Cs concentrations are
spatially unrelated, is—
σ²(T) = [σ(60Co concentration) / 60Co DCGL]² + [σ(137Cs concentration) / 137Cs DCGL]²
      = (26 Bq/kg / 74 Bq/kg)² + (19 Bq/kg / 52 Bq/kg)²
      = 0.26
Thus, σ = 0.5. The wide-area derived concentration guideline level (DCGLw) for the weighted
sum is 1. The null hypothesis for Scenario A is that the survey unit exceeds the release
criterion. During the data quality objective process, the lower bound of the gray region
(LBGR) was set at 0.5 for the weighted sum, so that Δ = DCGLw − LBGR = 1.0 − 0.5 = 0.5, and
Δ/σ = 0.5/0.5 = 1.0. The acceptable error rates chosen were α = β = 0.05. To achieve this, 32
samples each are required in the survey unit and the reference area.
The weighted sums are computed for each measurement location in both the reference area
and the survey unit. The WRS test is then performed on the weighted sum. The calculations
for this example are shown in Table 8.14. The DCGLw for the unity rule (i.e., 1.0) is added to
the weighted sum for each location in the reference area. The ranks of the combined survey
unit and adjusted reference area weighted sums are then computed. The sum of the ranks of
the adjusted reference area weighted sums is then compared to the critical value for n =
m = 32, α = 0.05, which is 1,162 (see the formula following Table I.5). In Table 8.14, the sum
of the ranks of the adjusted reference area weighted sums is 1,281. This exceeds the critical
value, so the null hypothesis is rejected. In Scenario A, this means the survey unit meets the
release criteria. The difference between the mean of the weighted sums in the survey unit
and the reference area is 1.86 -1.16 = 0.7. Thus, the estimated dose or risk due to residual
radioactive material in the survey unit is approximately equal to 70 percent of the release
criterion.
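A minimal sketch of the weighted-sum calculation used in this example is shown below. It is illustrative only; the DCGLs and standard deviations are the values assumed above, expressed here in Bq/kg.

```python
# Minimal sketch of the Example 11 weighted sum and its standard deviation (illustrative only).
import numpy as np

dcgl = {"Co-60": 74.0, "Cs-137": 52.0}     # DCGLw values, Bq/kg
sigma = {"Co-60": 26.0, "Cs-137": 19.0}    # estimated spatial standard deviations, Bq/kg

def weighted_sum(co60_conc, cs137_conc):
    """Unity-rule weighted sum T for one measurement location."""
    return co60_conc / dcgl["Co-60"] + cs137_conc / dcgl["Cs-137"]

# Standard deviation of T, assuming the two concentrations are spatially unrelated.
sigma_t = np.sqrt((sigma["Co-60"] / dcgl["Co-60"]) ** 2 + (sigma["Cs-137"] / dcgl["Cs-137"]) ** 2)
print(sigma_t)  # about 0.51, rounded to 0.5 in the example, so Delta/sigma = (1.0 - 0.5)/0.5 = 1.0
```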
Example 12: Use of ProUCL for the WRS Test for Multiple Radionuclides
As Table 8.14 does for Example 11, Table 8.15 provides sample results for a survey unit
with residual radioactive material that includes cobalt-60 (60Co) and cesium-137 (137Cs).
Because 137Cs from fallout is found in the background, samples were also collected and
analyzed from a reference area. The wide-area derived concentration guideline levels
(DCGLw) for 137Cs and 60Co are 1.4 and 2.0 Bq/kg, respectively. The unity rule is used to
determine if the survey unit meets the release criteria. Scenario A was selected. To perform
the WRS test, ProUCL, Version 5.0 was used. ProUCL is a freeware statistical software
program, provided by the U.S. Environmental Protection Agency.
The data can be entered by hand or copied and pasted from another software program.
Descriptions can be provided for the column headers by right-clicking the column header and
selecting "Header Name." For this example, the columns were named "Reference" and	
"Survey Unit." Figure 8.4 shows the initial program inputs of the weighted sums of the
reference area and survey unit with their column headings.
To perform the WRS test, the two-sample Wilcoxon-Mann-Whitney test was selected. The
Wilcoxon-Mann-Whitney test is a nonparametric test of the same hypothesis as the WRS test.
The two tests use different test statistics and critical values, but both tests will provide the
same conclusion. The test can be selected by first choosing "Two Sample" from the
"Hypothesis Testing" menu, and then selecting "Wilcoxon-Mann-Whitney."
Variables are selected by clicking on the variable name in the list of variables, and then
clicking the corresponding "»" button. The Reference Area results are selected as the
"Background/Ambient" variable, and the Survey Unit results are selected as the "Area of
Concern/Site" variable, as shown in Figure 8.5.
Clicking the "Options" button in the dialog window shown in Figure 8.5 generates another dialog
window. This window allows the user to specify the "Confidence Coefficient." The confidence
coefficient is equal to (1 − α). For this example, a "Confidence Coefficient" of 95 percent is
selected, corresponding to α = 0.05. The dialog window also allows the user to specify the
form of the hypothesis. For this example, Form 2 is selected, and the value of 1 is entered for
the "Substantial Difference." When using Form 2, the unadjusted reference area results
should be used instead of the adjusted reference area results.
Clicking the "OK" button shown in Figure 8.6 saves the changes and closes the dialog
window. Clicking the "OK" button shown in Figure 8.5 closes that dialog window and
generates the output sheet shown in Figure 8.7. As shown near the bottom of Figure 8.7, the
null hypothesis is rejected, and the survey unit is demonstrated to pass the statistical test.
The elevated measurement comparison would still need to be performed before deciding that
the survey unit has met the release criteria.
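For users without access to ProUCL, a rough cross-check of this test can be made with SciPy's Wilcoxon-Mann-Whitney implementation. The sketch below is illustrative only and is not a substitute for ProUCL; it assumes that the Form 2 null hypothesis is equivalent to shifting the reference area results upward by the substantial difference S before testing.

```python
# Rough SciPy cross-check of the ProUCL Form 2 comparison (illustrative only).
import numpy as np
from scipy.stats import mannwhitneyu

def wmw_form2(reference, survey, substantial_difference=1.0, alpha=0.05):
    """Test H0: survey unit >= reference area + S against H1: survey unit < reference area + S."""
    shifted_reference = np.asarray(reference, dtype=float) + substantial_difference
    stat, p_value = mannwhitneyu(survey, shifted_reference, alternative="less")
    return p_value, p_value < alpha   # reject H0 (survey unit passes the test) when p < alpha
```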
Table 8.14: Example 11 WRS Test for Two Radionuclides

          Reference Area      Survey Unit         Weighted Sum                 Ranks
Sample    137Cs     60Co      137Cs     60Co      Ref     Survey   Adj Ref     Survey   Adj Ref
Number
1         2.00      0         1.12      0.06      1.43    0.83     2.43        1        56
2         1.23      0         1.66      1.99      0.88    2.18     1.88        43       21
3         0.99      0         3.02      0.56      0.71    2.44     1.71        57       14
4         1.98      0         2.47      0.26      1.41    1.89     2.41        23       55
5         1.78      0         2.08      0.21      1.27    1.59     2.27        9        50
6         1.93      0         2.96      0.00      1.38    2.11     2.38        37       54
7         1.73      0         2.05      0.20      1.23    1.56     2.23        7        46
8         1.83      0         2.41      0.00      1.30    1.72     2.30        16       52
9         1.27      0         1.74      0.00      0.91    1.24     1.91        2        24
10        0.74      0         2.65      0.16      0.53    1.97     1.53        27       6
11        1.17      0         1.92      0.63      0.83    1.68     1.83        13       18
12        1.51      0         1.91      0.69      1.08    1.71     2.08        15       32
13        2.25      0         3.06      0.13      1.61    2.25     2.61        47       63
14        1.36      0         2.18      0.98      0.97    2.05     1.97        30       28
15        2.05      0         2.08      1.26      1.46    2.12     2.46        39       58
16        1.61      0         2.30      1.16      1.15    2.22     2.15        45       41
17        1.29      0         2.20      0.00      0.92    1.57     1.92        8        25
18        1.55      0         3.11      0.50      1.11    2.47     2.11        59       35
19        1.82      0         2.31      0.00      1.30    1.65     2.30        11       51
20        1.17      0         2.82      0.41      0.84    2.22     1.84        44       19
21        1.76      0         1.81      1.18      1.26    1.88     2.26        22       48
22        2.21      0         2.71      0.17      1.58    2.02     2.58        29       62
23        2.35      0         1.89      0.00      1.68    1.35     2.68        3        64
24        1.51      0         2.12      0.34      1.08    1.68     2.08        12       33
25        0.66      0         2.59      0.14      0.47    1.92     1.47        26       5
26        1.56      0         1.75      0.71      1.12    1.60     2.12        10       38
27        1.93      0         2.35      0.85      1.38    2.10     2.38        34       53
28        2.15      0         2.28      0.87      1.54    2.06     2.54        31       61
29        2.07      0         2.56      0.56      1.48    2.11     2.48        36       60
30        1.77      0         2.50      0.00      1.27    1.78     2.27        17       49
31        1.19      0         1.79      0.30      0.85    1.43     1.85        4        20
32        1.57      0         2.55      0.70      1.12    2.17     2.12        42       40
Avg       1.62      0         2.28      0.47      1.16    1.86     2.16        sum =    sum =
Std Dev   0.43      0         0.46      0.48      0.31    0.36     0.31        799      1281
Table 8.15: Example 12 WRS Test for Two Radionuclides

          Reference Area        Survey Unit          Reference     Survey       Adjusted Reference
          Results (Bq/kg)       (Bq/kg)              Area (x_i)    Unit (y_i)   Area (z_i)
Sample    137Cs     60Co        137Cs     60Co
1         2.00      0.00        1.12      0.06       1.429         0.830        2.429
2         1.23      0.00        1.66      1.99       0.879         2.181        1.879
3         0.99      0.00        3.02      0.56       0.707         2.437        1.707
4         1.98      0.00        2.47      0.26       1.414         1.894        2.414
5         1.78      0.00        2.08      0.21       1.271         1.591        2.271
6         1.93      0.00        2.96      0.00       1.379         2.114        2.379
7         1.73      0.00        2.05      0.20       1.236         1.564        2.236
8         1.83      0.00        2.41      0.00       1.307         1.721        2.307
9         1.27      0.00        1.74      0.00       0.907         1.243        1.907
10        0.74      0.00        2.65      0.16       0.529         1.973        1.529
11        1.17      0.00        1.92      0.63       0.836         1.686        1.836
12        1.51      0.00        1.91      0.69       1.079         1.709        2.079
13        2.25      0.00        3.06      0.13       1.607         2.251        2.607
14        1.36      0.00        2.18      0.98       0.971         2.047        1.971
15        2.05      0.00        2.08      1.26       1.464         2.116        2.464
16        1.61      0.00        2.30      1.16       1.150         2.223        2.150
17        1.29      0.00        2.20      0.00       0.921         1.571        1.921
18        1.55      0.00        3.11      0.50       1.107         2.471        2.107
19        1.82      0.00        2.31      0.00       1.300         1.650        2.300
20        1.17      0.00        2.82      0.41       0.836         2.219        1.836
21        1.76      0.00        1.81      1.18       1.257         1.883        2.257
22        2.21      0.00        2.71      0.17       1.579         2.021        2.579
23        2.35      0.00        1.89      0.00       1.679         1.350        2.679
24        1.51      0.00        2.12      0.34       1.079         1.684        2.079
25        0.66      0.00        2.59      0.14       0.471         1.920        1.471
26        1.56      0.00        1.75      0.71       1.114         1.605        2.114
27        1.93      0.00        2.35      0.85       1.379         2.104        2.379
28        2.15      0.00        2.28      0.87       1.536         2.064        2.536
29        2.07      0.00        2.56      0.56       1.479         2.109        2.479
30        1.77      0.00        2.50      0.00       1.264         1.786        2.264
31        1.19      0.00        1.79      0.30       0.850         1.429        1.850
32        1.57      0.00        2.55      0.70       1.121         2.171        2.121
[Figure: ProUCL worksheet screenshot showing the weighted sums entered in columns named "Reference" and "Survey Unit."]
Figure 8.4: ProUCL Worksheet for Example 12
[Figure: ProUCL "Select Variables" window with Reference selected as the Background/Ambient variable and Survey Unit selected as the Area of Concern/Site variable.]
Figure 8.5: ProUCL Select Variables Window for Hypothesis Testing for Example 12
[Figure: ProUCL Site vs Background Comparison options window showing the Substantial Difference (S), the Confidence Coefficient (95%), and the null hypothesis form "AOC >= Background + S (Form 2)" selections.]
Figure 8.6: ProUCL Hypothesis Testing Options Window for Example 12
[Figure: ProUCL Wilcoxon-Mann-Whitney output sheet for the full data sets (32 valid observations each). For H0: Site mean/median >= Background mean/median + 1, the output reports a Site Rank Sum W-Stat of 799.5, a WMW Test U-Stat of -3.223, a critical value of -1.645, and a p-value of 0.0006; with alpha = 0.05, H0 is rejected and the conclusion is Site < Background + 1.]
Figure 8.7: ProUCL Output for Example 12
8.5 Scan-Only Surveys
The use of the UCL can apply to both Scenarios A and B for scan-only surveys where individual
results are recorded. When release decisions are made about the estimated mean of a sampled
population, the assessment of the survey results is accomplished by comparing a UCL for the
mean to the DCGLw or DL for Scenarios A and B, respectively.
If individual scan-only survey results are recorded, a nonparametric confidence interval can be
used to evaluate the results of the release survey. Similarly, a confidence interval can be used
to evaluate a series of direct measurements with overlapping fields of view. A one-tailed version
of Chebyshev's inequality or software (e.g., EPA's ProUCL software) can be used to evaluate
the probability of exceeding the UBGR (i.e., using a UCL). The use of a UCL applies to both
Scenario A (where the UBGR equals the DCGLw) and Scenario B (where the UBGR equals the
DL).
Chebyshev's inequality bounds the probability that the absolute value of the difference between
the true but unknown mean of the population and a random value from the data set is at least a
specified amount. That is, given a specified positive number (ε), a mean (μ), and a random
value from the data set (r), the inequality provides an upper bound on the probability that
|μ − r| is greater than or equal to ε. In addition, a one-tailed version of the inequality can be used to calculate a UCL for
a data set that is independent of the data distribution (i.e., there is no requirement to verify the
data are from a normal, lognormal, or any other specified kind of distribution) by letting the
inequality equal the UCL. The UCL can be calculated using Equation 8-3:

UCL = μ + √(σ²/(nα) − σ²/n)                                                  (8-3)
The comparison to the UCL is described in the following steps:
1.	Calculate the mean (μ) and standard deviation (σ) of the n results in the
data set.
2.	For Scenario A, retrieve the Type I error rate (α) used to design the survey. For
Scenario B, substitute the Type II error rate (β) used to design the survey for α in
Equation 8-3.
3.	Using Chebyshev's inequality, calculate the maximum UCL using Equation 8-3.
If the maximum UCL is less than the UBGR, the survey demonstrates compliance with the
disposition criterion (i.e., reject the null hypothesis for Scenario A or fail to reject the null
hypothesis for Scenario B).
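A minimal sketch of the Equation 8-3 calculation is shown below. It is illustrative only; the scan results and α value in the usage comment are hypothetical.

```python
# Minimal sketch of the one-tailed Chebyshev UCL of Equation 8-3 (illustrative only).
import numpy as np

def chebyshev_ucl(results, alpha):
    """Distribution-free upper confidence limit on the mean at confidence level (1 - alpha)."""
    x = np.asarray(results, dtype=float)
    n = x.size
    mu = x.mean()                     # Step 1: sample mean
    sigma = x.std(ddof=1)             # Step 1: sample standard deviation
    return mu + np.sqrt(sigma**2 / (n * alpha) - sigma**2 / n)   # Step 3: Equation 8-3

# Hypothetical usage with recorded scan results (cpm) and the design alpha of 0.05:
# ucl = chebyshev_ucl([102.0, 97.5, 110.2, 95.1, 101.8], alpha=0.05)
# Compare ucl with the UBGR (the DCGLw for Scenario A or the DL for Scenario B).
```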
Chebyshev's inequality must be used with caution when there are very few points in the data
set. This is because the population mean and standard deviation in the Chebyshev formula are
being estimated by the sample mean and sample standard deviation. In a small data set from a
highly skewed distribution, the sample mean and sample standard deviation may be
underestimated if the high concentration but low probability portion of the distribution is not
captured in the sample data set.
8.6 Evaluate the Results: The Decision
When the data and the results of the tests have been obtained, the specific steps required to
achieve survey unit release depend on the procedures instituted by the governing regulatory
agencies and site-specific ALARA6 considerations. The following suggested considerations are
6 "as low as reasonably achievable"
for the interpretation of the test results with respect to the release limit established for the site or
survey unit. Note that the tests need not be performed in any particular order.
8.6.1 Elevated Measurement Comparison
If applicable release criteria for elevated measurements exist, then the EMC consists of
comparing each measurement from the survey unit with the investigation levels discussed in
Section 5.3.8. The EMC is performed both for measurements obtained on the systematic
sampling grid and for locations flagged by scanning measurements. Any measurement from the
survey unit that is equal to or greater than an investigation level indicates an area of relatively
high concentrations that should be investigated, regardless of the outcome of the nonparametric
statistical tests.
Under Scenario A, the statistical tests may reject the null hypothesis when only a very few high
measurements are obtained in the survey unit, regardless of how high the measurements are.
In a similar manner, under Scenario B, the statistical tests might not reject the null hypothesis
when only a few high measurements are obtained in the survey unit. The use of the quantile test
and the EMC against the investigation levels may be viewed as assurance that unusually large
measurements will receive proper attention regardless of the outcome of those tests and that
any area having the potential for significant dose or risk contributions will be identified. The EMC
is intended to flag potential failures in the remediation process. This should not be considered
the primary means to identify whether a survey unit meets the release criteria.
Note that the DCGLemc is an a priori limit, established both by the DCGLw and by the survey
design (i.e., grid spacing and scanning MDC). The true extent of an area of elevated activity can
be determined only after performing the survey and taking additional measurements. Upon the
completion of further investigation, the a posteriori limit can be established. The area of elevated
activity is generally bordered by concentration measurements below the DCGLw. An individual
elevated measurement on a systematic grid could conceivably represent an area four times as
large as the systematic grid area used to define the DCGLemc. This is the area bounded by the
nearest neighbors of the elevated measurement location. The results of the investigation should
show that the appropriate DCGLemc is not exceeded. If measurements above the stated
scanning MDC are found by sampling or by direct measurements at locations that were not
flagged during the scanning survey, then this may indicate the scanning method does not meet
the DQOs.
The preceding discussion primarily concerns Class 1 survey units. Measurements exceeding
the DCGLw in Class 2 or Class 3 areas may indicate survey unit misclassification. Scanning
coverage for Class 2 and Class 3 survey units is less stringent than for Class 1. If the
investigation levels of Section 5.3.8 are exceeded, an investigation should (1) ensure that the
area of elevated activity discovered meets the release criteria, and (2) provide reasonable
assurance that other undiscovered areas of elevated activity do not exist. If further investigation
determines that the survey unit was misclassified with regard to potential for residual radioactive
material, then a resurvey using the method appropriate for the new survey unit classification is
appropriate.
8.6.2 Interpretation of Statistical Test Results
The result of the statistical test is the decision to reject or not to reject the null hypothesis.
Provided that the results of investigations triggered by the EMC were resolved, a rejection of the
null hypothesis leads to the decision that the survey unit meets the release criteria in
Scenario A. In Scenario B, failure to reject the null hypothesis in both the WRS and quantile
tests leads to the decision that the survey unit meets the release criteria, provided that EMC
results are acceptable. However, estimating the mean concentration of residual radioactive
material in the survey unit may also be necessary so that dose or risk calculations can be made.
This estimate is designated by δ. The mean concentration is generally the best estimator for δ;
however, only the unbiased measurements from the statistically designed survey should be
used in the calculation of δ.
If residual radioactive material is found in an isolated area of elevated activity, in addition to
residual radioactive material distributed relatively uniformly across the survey unit, the unity
rule (Section 4.4) can be used to ensure that the total dose is within the release criteria, as
shown in Equation 8-4:

δ/DCGLw + (mean concentration in elevated area − δ)/DCGLemc < 1              (8-4)
If there is more than one elevated area, a separate term could be included in Equation 8-4 for
each area. The use of the unity rule for more than one elevated area may imply that a person is
centered on each area of elevated radioactive material and exposed simultaneously. This is an
impossible situation and represents a very cautious exposure scenario. If there are multiple
elevated areas, then alternative approaches may be considered:
1.	The MARSSIM user could determine the elevated area (primary area) that contributes the
most to the total dose or risk. As shown by Abelquist (2008), the doses from elevated areas
other than the primary area can be very small and might be negligible.
2.	The dose or risk due to the actual residual radioactive material distribution could be
calculated if an appropriate exposure pathway model is available.
Other approaches for handling elevated concentrations of radioactive material may be utilized
and should be coordinated with the regulator.
The MARSSIM user should consult with the responsible regulatory agency for guidance on an
acceptable approach to address the dose or risk from elevated areas of residual radioactive
material. Note that these approaches generally apply only to Class 1 survey units, because
areas of elevated activity above the DCGLw should not exist in Class 2 or Class 3 survey units.
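A minimal sketch of the Equation 8-4 check is shown below. It is illustrative only; the numerical values are hypothetical, and the DCGLemc is assumed to have been derived from the area factor appropriate to the actual size of the elevated area.

```python
# Minimal sketch of the Equation 8-4 unity-rule check for one elevated area (illustrative only).
def unity_rule_elevated(delta, elevated_mean, dcgl_w, dcgl_emc):
    """Return the Equation 8-4 sum; a value less than 1 indicates the release criterion is met."""
    return delta / dcgl_w + (elevated_mean - delta) / dcgl_emc

# Hypothetical values, expressed as fractions of DCGLw: delta = 0.4 over the survey unit,
# an elevated-area mean of 2.5, and an area factor of 5 (so DCGL_EMC = 5).
total = unity_rule_elevated(delta=0.4, elevated_mean=2.5, dcgl_w=1.0, dcgl_emc=5.0)
print(total, total < 1)   # about 0.82; True
```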
A retrospective power analysis for the test will often be useful, especially when the null
hypothesis is not rejected (see Appendix M). When the null hypothesis is not rejected, it may
be because it is true, or it may be because the test did not have sufficient power to detect that it
is not true. The power of the test will be primarily affected by changes in the actual number of
measurements obtained and their standard deviation. An effective survey design will slightly
overestimate both the number of measurements and the standard deviation to ensure adequate
power. This ensures that a survey unit is not subjected to additional remediation simply because
the FSS is not sensitive enough to detect that residual radioactive material is below the DCGLw.
When the null hypothesis is rejected in Scenario A, the power of the test becomes a somewhat
moot question. Nonetheless, even in this case, a retrospective power curve can be a useful
diagnostic tool and an aid to designing future surveys and for other survey units at the site.
When the null hypothesis is accepted in Scenario B, the power of the test is of particular
importance. If an insufficient number of samples are collected, the null hypothesis that the
survey unit meets the release criteria may be accepted simply because of the lack of sufficient
power to detect residual radioactive material in the survey unit above the release criteria. If the
retrospective power analysis reveals a lack of sufficient power, it may be necessary to revisit the
DQO process with the updated estimate of σ.
8.6.3 If the Survey Unit Fails
The systematic planning process included in MARSSIM should include planning for possible
survey unit failure. Early discussions with appropriate regulatory personnel about what actions
can and should be taken if the survey unit fails may prevent long delays later in the project.
However, if the survey unit fails in a way that is not anticipated, agreed-upon actions may not be
applicable, and discussions will need to take place to address the unanticipated results.
The information provided in MARSSIM is fairly explicit concerning the steps that should be
taken to show that a survey unit meets the release criteria. Less has been said about the
procedures that should be used if the survey unit fails at any point. This is primarily because
there are many different ways that a survey unit may fail the FSS. The mean concentration of
residual radioactive material may not pass the nonparametric statistical tests. Further
investigation following the elevated measurement comparison may show a large enough area
with a concentration too high to meet the release criteria. Investigation levels may have caused
locations to be flagged during scanning that indicate unexpected levels of residual radioactive
material for the survey unit classification. Site-specific information is needed to fully evaluate all
of the possible reasons for failures, their causes, and their remedies.
When a survey unit fails to demonstrate compliance with the release criteria, the first step is to
review and confirm the data that led to the decision. Once this is done, the DQO process
(Appendix D) can be used to identify and evaluate potential solutions to the problem. The
concentration of residual radioactive material in the survey unit should be determined to help
define the problem. Once the problem has been stated, the decision concerning the survey unit
should be developed into a decision rule. Next, determine the additional data, if any, that are
needed to document that the survey unit demonstrates compliance with the release criteria.
Alternatives to resolving the decision statement should be developed for each survey unit that
fails the tests. These alternatives are evaluated against the DQOs, and a survey design that
meets the objectives of the project is selected. Example 13 discusses a Class 2 survey unit with
measurements exceeding the DCGLw.
Example 13: Class 2 Survey Unit with Measurements Exceeding the DCGLw
A Class 2 survey unit passes the nonparametric statistical tests but has several
measurements on the sampling grid that exceed the derived concentration guideline level
determined using the Wilcoxon Rank Sum test (DCGLw). This is unexpected in a Class 2
area, so these measurements are flagged for further investigation. Additional sampling
confirms several areas where the concentration exceeds the DCGLw. This indicates that the
survey unit was misclassified. However, the scanning technique that was used was sufficient
to detect concentrations of residual radioactive material at the derived concentration guideline
level determined using the elevated measurement comparison (DCGLemc) calculated for the
sample grid. No areas exceeding the DCGLemc were found. Thus, the only difference
between the performed final status survey (FSS) and the required FSS for a Class 1 area is
that the scanning may not have covered 100 percent of the survey unit area. In this case, one
might simply increase the scan coverage to 100 percent. Reasons the survey unit was
misclassified should be noted. If no areas exceeding the DCGLemc are found, the survey unit
essentially demonstrates compliance with the release criteria as a Class 1 survey unit.
If a Class 2 survey unit has been misclassified and must be reclassified as a Class 1 survey
unit, the size of the survey unit should be considered to determine whether it should be divided into two or more
smaller survey units, based on the recommended survey sizes in Table 4.1 and the
concentration of radioactive material in different areas of the survey unit. If the scanning
technique was not sufficiently sensitive, it may be possible to reclassify as Class 1 only that
portion of the survey unit containing the higher measurements. This portion would be
resampled at the higher measurement density required for a Class 1 survey unit, with the rest
of the survey unit remaining as Class 2.
Example 14 discusses a Class 1 survey unit with elevated areas.
Example 14: Class 1 Survey Unit with Elevated Areas
Consider a Class 1 survey unit that passes the nonparametric statistical tests and contains
some areas that were flagged for investigation during scanning. Further investigation,
sampling, and analysis indicate one area is truly elevated. This area has a concentration that
exceeds the derived concentration guideline level determined using the Wilcoxon Rank Sum
test by a factor greater than the area factor calculated for its actual size. This area is then
remediated. Remediation control sampling shows that the residual radioactive material was
removed, and no other areas were affected by residual radioactive material. In this case, one
may simply document the original final status survey (FSS), the fact that remediation was
performed, the results of the remedial action support survey, and the additional remediation
data. In some cases, additional FSS data may not be needed to meet the release criteria.
Example 15 discusses a Class 1 survey unit that fails the statistical test.
Example 15: Class 1 Survey Unit Fails the Statistical Test
Consider a Class 1 area that fails the nonparametric statistical tests. Confirmatory data
indicate that the mean concentration in the survey unit exceeds the derived concentration
guideline level determined using the Wilcoxon Rank Sum test over a majority of its area. This
indicates remediation of the entire survey unit is necessary, followed by another final status
survey (FSS). Reasons for performing an FSS in a survey unit with significant amounts of
residual radioactive material should be noted.
Examples 13-15 are meant to illustrate the actions that may be necessary to secure the
release of a survey unit that has failed to meet the release criteria. The DQO process should be
revisited to plan how to attain the original objective, which is to safely release the survey unit by
showing that it meets the release criteria. Whatever data are necessary to meet this objective
will be in addition to the FSS data already in hand.
8.6.4 Removable Radioactive Material
Some regulatory agencies may require that smear samples be taken at indoor grid locations as
an indication of removable surface activity. In addition, the percentage of removable activity
assumed in the dose modeling can have a large impact on estimated doses. As such, it might
be necessary to confirm this assumption regarding the amount of removable contamination.
However, measurements of smears are very difficult to interpret quantitatively. In general, the
results of smear samples should be used for determining compliance with requirements that
specifically require a smear measurement. In addition, they may be used as a diagnostic tool to
determine whether further investigation is necessary.
8.7 Documentation
Documentation of the FSS should provide a complete and unambiguous record of the
radiological status of the survey unit relative to the established DCGLs. In addition, sufficient
data and information should be provided to enable an independent evaluation of the results of
the survey—including repeating, when possible, measurements at some future time. The
documentation should comply with all applicable regulatory requirements. Additional information
on documentation is provided in Chapter 3, Chapter 5, and Appendix D.
Much of the information in the final status report will be available from other decommissioning
documents. However, to the extent practicable, this report should be a stand-alone document
with minimum information incorporated by reference. This document should describe the
instrumentation or analytical methods applied, how the data were converted to DCGL units, the
process of comparing the results to the DCGLs, and the process of determining that the DQOs
were met.
The results of actions taken as a consequence of individual measurements or sample
concentrations in excess of the investigation levels should be reported with any additional data,
remediation, or resurveys performed to demonstrate that issues concerning potential areas of
elevated activity were resolved. The results of the data evaluation using statistical methods to
determine whether release criteria were satisfied should be described. If criteria were not met,
or if results indicate a need for additional data, appropriate further actions should be determined
by the site management in consultation with the responsible regulatory agency. Example 16
provides an example of a data interpretation checklist.
Example 16: Example Data Interpretation Checklist
Convert Data to Standard Units
	Structure activity should be in becquerels/square meter (Bq/m2) (decays per minute
[dpm]/100 square centimeters [cm2]).
	Solid media (soil, building surfaces, etc.) activity should be in Bq/kilogram (kg)
(picocuries/gram [pCi/g]).
Evaluate Elevated Measurements
	 Identify elevated data.
	 Compare data with derived elevated area criteria.
	 Determine need to remediate and/or reinvestigate elevated condition.
	 Compare data with survey unit classification criteria.
	 Determine need to investigate and/or reclassify.
Assess Survey Data
	Review data quality objectives (DQOs), measurement quality objectives (MQOs), and
survey design.
	 Verify that data of adequate quantity and quality were obtained.
	Perform preliminary assessments (graphical methods) for unusual or suspicious
trends or results—investigate further as appropriate.
Perform Statistical Tests
	 Select appropriate tests for the radionuclide.
	 Conduct tests.
	 Compare test results against hypotheses.
	 Confirm power level of tests.
Compare Results to Guidelines
	 Determine mean or median concentrations.
	 Confirm that residual activity satisfies guidelines.
Compare Results with DQOs and Measurement Quality Objectives (MQOs)
	 Determine whether all DQOs and MQOs are satisfied.
	 Explain/describe deviations from design-basis DQOs/MQOs.
REFERENCES, U.S. CODE, AND FEDERAL LAWS
General References
42 FR 60956, EPA, "Persons Exposed to Transuranium Elements in the Environment," Federal
Register, Vol. 42, No. 230, November 30, 1977, pp. 60956-60959.
46 FR 52061, NRC, "Disposal or On-site Storage of Thorium or Uranium Wastes from Past
Operations," Federal Register, Vol. 46, No. 205, October 23, 1981, pp. 52061-52063.
55 FR 51532, EPA, "Hazard Ranking System; Final Rule (40 CFR Part 300)," Federal Register,
Vol. 55, No. 241, December 14, 1990, pp. 51532-51667.
57 FR 13389, NRC, "Action Plan to Ensure Timely Cleanup of Site Decommissioning
Management Plan Sites (10 CFR Part 30)," Federal Register, Vol. 57, No. 74, April 16,
1992, pp. 13389-13392.
57 FR 6136, NRC, "Order Establishing Criteria and Schedule for Decommissioning the
Bloomsburg Site (10 CFR Part 30)," Federal Register, Vol. 57, No. 34, February 20, 1992,
pp. 6136-6143.
American Association of Radon Scientists and Technologists (AARST), "Protocol for Conducting
Measurements of Radon and Radon Decay Products in Homes," AARST, Fletcher, NC,
2014a (MAH-2014).
AARST, "Protocol for Conducting Measurements of Radon and Radon Decay Products in
Schools and Large Buildings," AARST, Fletcher, NC, 2014b (MALB-2014).
AARST, "Performance Specifications for Instrumentation Systems Designed to Measure Radon
Gas in Air," AARST, Fletcher, NC, 2015 (MS-PC-2015).
AARST, "Protocol for Conducting Measurements of Radon and Radon Decay Products in
Multifamily Buildings," AARST, Fletcher, NC, 2017 (MAMF-2017).
AARST, "Radon Measurement Systems Quality Assurance," AARST, Fletcher, NC, 2019 (MS-
QA-2019).
Abelquist, E., "Dose Modeling and Statistical Assessment of Hot Spots for Decommissioning
Applications," Ph.D. Dissertation, University of Tennessee, Knoxville, TN, 2008.
Abelquist, E., Decommissioning Health Physics, A Handbook for MARSSIM Users, Taylor &
Francis Group, New York, NY, 2010.
Abelquist, E., Decommissioning Health Physics, A Handbook for MARSSIM Users, Second
Edition, Taylor and Francis Group, Boca Raton, FL, 2014.
Adams, T.L., Jr., "Entry and Continued Access Under CERCLA," Memorandum to Regional
Administrators I-X and Regional Counsels I-X from Assistant Administrator, Office of
Enforcement and Compliance Monitoring, EPA, June 5, 1987, PB91-138867.
Altshuler, B. and B. Pasternak, "Statistical Measures of the Lower Limit of Detection of a
Radioactivity Counter," Health Physics, 9:293-298, 1963.
Ames, L.L. and D. Rai, "Radionuclide Interactions with Soil and Rock Media," Office of Radiation
Programs, EPA, Las Vegas, NV, 1978 (EPA 520/78-007A).
Angus, J.E., "Bootstrap One-Sided Confidence Intervals for the Log-Normal Mean," The
Statistician, 43(3): 395-401, 1994.
ANS (American Nuclear Society), "Mobile In Situ Gamma-Ray Spectroscopy System,"
Transactions of the American Nuclear Society, 70:47, 1994a.
ANS, "Large Area Proportional Counter for In Situ Transuranic Measurements," Transactions of
the American Nuclear Society, 70:47, 1994b.
ANSI (American National Standards Institute), "American National Standard Performance
Specifications for Health Physics Instrumentation - Portable Instrumentation for Use in
Extreme Environmental Conditions," Institute of Electrical and Electronics Engineers
(IEEE), Piscataway, NJ, March 28, 1990 (ANSI N42.17C).
ANSI, "American National Calibration and Usage of Thallium-Activated Sodium Iodide Detector
Systems for Assay of Radionuclides," Institute of Electrical and Electronics Engineers
(IEEE), Piscataway, NJ, January 26, 1995 (ANSI N42.12-1994).
ANSI, "Performance Criteria for Radiobioassay," American National Standards Institute,
Washington, DC, May 1, 1996 (ANSI/HPS N13.30).
ANSI, "Radiation Protection Instrumentation Test and Calibration - Portable Survey
Instruments," Institute of Electrical and Electronics Engineers (IEEE), Piscataway, NJ,
December 31, 1997 (N323A-1997).
ANSI, "Institute of Electrical and Electronics Engineers, Inc. Standard Test Procedures and
Bases for Geiger Mueller Counters," Institute of Electrical and Electronics Engineers
(IEEE), Piscataway, NJ, June 4, 1999 (IEEE 309-1999/ANSI N42.3-1999).
ANSI, "American National Standard Performance Specifications for Health Physics
Instrumentation—Portable Instrumentation for Use in Normal Environmental Conditions,"
IEEE, Piscataway, NJ, April 29, 2004 (N42.17A-2003).
ANSI, "American National Standard for Radiation Instrumentation Test and Calibration, Portable
Survey Instruments," IEEE, New York, NY, 2013 (N323AB-2013.)
ANSI, "American National Standard Performance Criteria for Handheld Instruments for the
Detection and Identification of Radionuclides," IEEE, New York, NY, August 24, 2016
(N42.34-2015).APHA (American Public Health Association), Standard Methods for the
Examination of Water and Wastewater, 22nd Edition, APHA, Washington, DC, 2012.
ASME (American Society of Mechanical Engineers), "Quality Assurance Program
Requirements for Nuclear Facilities," NQA-1, ASME, New York, NY, 1989.
ASQC (American Society for Quality Control), "Specifications and Guidelines for Quality
Systems for Environmental Data Collection and Environmental Technology Programs,"
ANSI/ASQ E4-1994, ASQC, Milwaukee, WI, 1995.
ASTM (American Society for Testing and Materials), "Standard Guide for Conducting
Ruggedness Tests," ASTM E1169-2002, ASTM, West Conshohocken, PA, 2002.
ASTM "Standard Practice for Reducing Samples of Aggregate to Testing Size," ASTM C702-98,
ASTM, West Conshohocken, PA, 2003.
ASTM, "Standard Practice for Soil Sample Preparation for the Determination of Radionuclides,
ASTM C999-90 (2010) e1, ASTM, West Conshohocken, PA, 2010.
ASTM, Annual Book of ASTM Standards, Water and Environmental Technology, Volume 11.05,
"Environmental Assessment; Hazardous Substances and Oil Spill Responses; Waste
Management; Environmental Risk Assessment," ASTM, West Conshohocken, PA, 2012.
ATSDR (Agency for Toxic Substances and Disease Registry), "Public Health Assessment
Guidance Manual (Update)," ATSDR, Atlanta, GA, 2005.
Bailey, E.N., "Lessons Learned from Independent Verification Activities (DCN 0476-TR-02-0),"
Oak Ridge Institute for Science and Education, Oak Ridge, TN, 2008.
Bernabee, R., D. Percival, and D. Martin, "Fractionation of Radionuclides in Liquid Samples
from Nuclear Power Facilities," Health Physics, 39:57-67, 1980.
Berven, B.A., W.D. Cottrell, R.W. Leggett, C.A. Little, T.E. Myrick, W.A. Goldsmith, and F.F.
Haywood, "Generic Radiological Characterization Protocol for Surveys Conducted for
DOE Remedial Action Programs," Oak Ridge National Laboratory (ORNL), Oak Ridge,
TN, 1986, Report No. ORNL/TM-7850 (DE86-011747).
Berven, B.A., W.D. Cottrell, R.W. Leggett, C.A. Little, T.E. Myrick, W.A. Goldsmith, and F.F.
Haywood. "Procedures Manual for the ORNL Radiological Survey Activities (RASA)
Program," Oak Ridge National Laboratory, Oak Ridge, TN, 1987, Report No. ORNL/TM-
8600 (DE87-009089).
Bickel, P.J. and K.A. Doksum, Mathematical Statistics: Basic Ideas and Selected Topics,
Holden-Day, San Francisco, CA, 1977.
Box, G.E.P., and G.C. Tiao, Bayesian Inference in Statistical Analysis, John Wiley and Sons, Inc.,
Hoboken, NJ, 2011.
Brodsky, A., "Exact Calculation of Probabilities of False Positives and False Negatives for Low
Background Counting," Health Physics, 63(2): 198-204, 1992.
Chen, L., "Testing the Mean of Skewed Distributions," Journal of the American Statistical
Association, 90: 767, 1995.
Chen, Z., Z. Bai, and B. Sinha, Ranked Set Sampling: Theory and Applications. Series: Lecture
Notes in Statistics, Vol. 176, Springer-Verlag, New York, NY, 2004.
Cheng, J.-J., B. Kassas, C. Yu, D. LePoire, J. Arnish, E.S. Dovel, S.Y. Chen, A.W. Williams, A.
Wallo, and H. Petersen. "RESRAD-RECYCLE: A Computer Model for Analyzing the
Radiological Doses and Risks Resulting from the Recycling of Radioactive Scrap Metal
and the Reuse of Surface-Contaminated Material and Equipment," ANL/EAD-3, Argonne
National Laboratory, Argonne, IL, 2000.
Committee on the Biological Effects of Ionizing Radiations (BEIR), National Research Council,
Health Effects of Exposure to Low Levels of Ionizing Radiation: BEIR V, National
Academies Press, Washington, DC, 1990.
Conover, W.J., Practical Nonparametric Statistics, Second Edition, John Wiley & Sons, New
York, NY, 1980.
Currie, L.A., "Limits for Qualitative Detection and Quantitative Determination," Analytical
Chemistry, 40(3):586-593, 1968.
Davidson, J.R., "ELIPGRID-PC: Upgraded Version," Oak Ridge National Laboratory, Oak
Ridge, TN, 1995 (ORNL/TM-13103).
Dawson, J., 2011, Public Comment, MARSSIM Workgroup Meeting, Washington, D.C., May
2011.
DeGroot, M.H., Optimal Statistical Decisions, John Wiley & Sons, Inc., Hoboken, New Jersey,
2005.
DoD (Department of Defense), "Environment, Safety, and Occupational Health, DoD Directive
4715.1E," DoD, Washington, DC, 2005.
DoD, "Environmental Compliance at Installations Outside the United States, DoD Instruction
4715.05, Incorporating Change 2" DoD, Washington, DC, 2018.
DoD, "Integrated Recycling and Solid Waste Management, DoD Instruction 4715.23," DoD,
Washington, DC, 2016.
DoD, "Occupational Medical Examinations and Surveillance Manual, Incorporating Change 3,
DoD 6055.05-M," DoD, Washington, DC, 2018.
DoD, "Quality Program Requirements, Military Specification MIL-Q-9858A," Washington, DC,
1963.
Department of the Air Force, "Environmental Baseline Surveys in Real Estate Transactions,"
AFI 32-7066, Air Force, Washington, DC, 2015.
Department of the Air Force, "The Environmental Restoration Program, Incorporating Change
1," AFI 32-7020, Air Force, Washington, DC, 2016.
Department of the Air Force, "Radioactive Materials (RAM) Management," AFMAN 40-201, Air
Force, Washington, DC, 2019.
Department of the Army, "Handling, Storage and Disposal of Army Aircraft Components
Containing Radioactive Materials," TB 43-0108, Army, Washington, DC, 1979.
Department of the Army, "Occupational and Environmental Health: Control of Health Hazards
from Protective Material Used in Self-Luminous Devices," TB MED 522, Army,
Washington, DC, 1980.
Department of the Army, "Control of Hazards to Health from Ionizing Radiation Used by the
Army Medical Department," TB MED 525, Army, Washington, DC, 1988a.
Department of the Army, "Handling and Disposal of Unwanted Radioactive Material," TM 3-261,
Army, Washington, DC, 1988b.
Department of the Army, "Identification of U.S. Army Communications-Electronics Command
Managed Radioactive Items in the Army Supply System," TB 43-0122, Army,
Washington, DC, 1989a.
Department of the Army, "Transportability Guidance for Safe Transport of Radioactive
Materials," TM 55-315, Army, Washington, DC, 1989b.
Department of the Army, "Safety and Hazard Warnings for Operation and Maintenance of
TACOM Equipment," TB 43-0216, Army, Washington, DC, 1990. Department of the
Army, "USAEHA Environmental Sampling Guide, Technical Guide No. 155," U.S. Army
Environmental Hygiene Agency, Aberdeen Proving Ground, MD, 1993.
Department of the Army, "Identification of Radioactive Items in the Army," TB 43-0116, Army,
Washington, DC, 1998.
Department of the Army, "Management of Equipment Contaminated with Depleted Uranium or
Radioactive Commodities," AR 700-48, Army, Washington, DC, 2002a.
Department of the Army, "Occupational and Environmental Health: Management and Control of
Diagnostic, Therapeutic, and Medical Research X-Ray Systems and Facilities," TB MED
521, Army, Washington, DC, 2002b.
Department of the Army, "Instructions for Safe Handling, Maintenance, Storage and
Transportation of Radioactive Items under License 12-00722-06," TB 43-0197, Army,
Washington, DC, 2006.
Department of the Army, "Environmental Protection and Enhancement," AR 200-1, Army,
Washington, DC, 2007a.
Department of the Army, "Health Hazard Assessment Program in Support of the Army
Acquisition Process," AR 40-10, Army, Washington, DC, 2007b.
Department of the Army, "Preventive Medicine," AR 40-5, Army, Washington, DC, 2007c.
Department of the Army, "Occupational Dosimetry and Dose Recording for Exposure to Ionizing
Radiation," DA PAM 385-25, Army, Washington, DC, 2012.
Department of the Army, "Army Test, Measurement, and Diagnostic Equipment," AR 750-43,
Army, Washington, DC, 2014.
Department of the Army, "The Army Radiation Safety Program," DA PAM 385-24, Army,
Washington, DC, 2015.
Department of the Army, "The Army Safety Program," AR 385-10, Army, Washington, DC, 2017.
Department of the Army, "Calibration and Repair Requirements for the Maintenance of Army
Materiel," TB 43-180, Army, Washington, DC, 2018.
Department of the Navy, "Initial Management of Irradiated or Radioactively Contaminated
Personnel," BUMEDINST 6470.10B, Navy, Washington, DC 2003.
Department of the Navy, "Radiological Affairs Support Program," NAVSEA 5100-18B, Navy,
Washington, DC, 2007.
Department of the Navy, "Navy Radiation Safety Committee," OPNAVINST 6470.3, Navy,
Washington, DC, 2015.
Department of the Navy, "Radiation Health Protection Manual, Incorporating Change 1,"
NAVMED P-5055, Navy, Washington, DC, 2018a.
Department of the Navy, "Radioactive Commodities in the Department of Defense Supply
System," NAVSUPINST 4000.34C, Navy, Washington, DC, 2018b.
DOE (Department of Energy), "Radiological and Environmental Sciences Laboratory
Procedures, Analytical Chemistry Branch Procedures Manual," DOE/IDO-12096, DOE,
Idaho Operations Office, Idaho Falls, ID, 1982.
DOE, "Formerly Utilized Sites Remedial Action Program, Verification and Certification Protocol
Supplement No. 2 to the FUSRAP Summary Protocol, Revision 1," Division of Facility
and Site Decommissioning Projects, Office of Nuclear Energy, DOE, Washington, DC,
1985.
DOE, "Formerly Utilized Sites Remedial Action Program, Designation/Elimination Protocol
Supp