NUREG-1575, Rev. 1
                    EPA 402-R-97-016, Rev. 1
                    DOE/EH-0624, Rev. 1
      MULTI-AGENCY
   RADIATION SURVEY
         AND SITE
      INVESTIGATION
         MANUAL
        (MARSSIM)
Revision 1
August 2000

                                     ABSTRACT
The MARSSIM provides information on planning, conducting, evaluating, and documenting
building surface and surface soil final status radiological surveys for demonstrating compliance
with dose- or risk-based regulations or standards. The MARSSIM is a multi-agency consensus
document that was developed collaboratively by four Federal agencies having authority and
control over radioactive materials: Department of Defense (DOD), Department of Energy (DOE),
Environmental Protection Agency (EPA), and Nuclear Regulatory Commission (NRC). The
MARSSIM's objective is to describe a consistent approach for planning, performing, and
assessing building surface and surface soil final status surveys to meet established dose- or risk-
based release criteria, while at the same time encouraging an effective use of resources.
                                   DISCLAIMER
This manual was prepared by four agencies of the United States Government. Neither the United
States Government nor any agency or branch thereof, or any of their employees, makes any
warranty, expressed or implied, or assumes any legal liability or responsibility for any third
party's use, or the results of such use, of any information, apparatus, product, or process
disclosed in this manual, or represents that its use by such third party would not infringe on
privately owned rights.

References within this manual to any specific commercial product, process, or service by trade
name, trademark, or manufacturer do not constitute an endorsement or recommendation by the
United States Government.
                                     CONTENTS

                                                                                   Page
Abstract 	iii
Disclaimer 	iv
Acknowledgments	xix
Abbreviations	xxiii
Conversion Factors  	  xxvii
Errata and Addenda	xxviii

Roadmap  	  Roadmap-1

1.      Introduction 	1-1
       1.1     Purpose and Scope of MARSSIM	1-1
       1.2     Structure of the Manual	1-4
       1.3     Use of the Manual 	1-6
       1.4     Missions of the Federal Agencies Producing MARSSIM	1-7
              1.4.1  Environmental Protection Agency	1-7
              1.4.2  Nuclear Regulatory Commission	1-7
              1.4.3  Department of Energy  	1-7
              1.4.4  Department of Defense  	1-8

2.      Overview of the Radiation Survey and Site Investigation Process 	2-1
       2.1     Introduction  	2-1
       2.2     Understanding Key MARSSIM Terminology	2-2
       2.3     Making Decisions Based on Survey Results	2-6
              2.3.1  Planning Effective Surveys—Planning Phase	2-8
              2.3.2  Estimating the Uncertainty in Survey Results—
                    Implementation Phase  	2-11
              2.3.3  Interpreting Survey Results—Assessment Phase	2-11
              2.3.4  Uncertainty in Survey Results	2-12
              2.3.5  Reporting Survey Results	2-13
       2.4     Radiation Survey and Site Investigation Process	2-14
              2.4.1  Site Identification	2-16
              2.4.2  Historical Site Assessment	2-22
              2.4.3  Scoping Survey 	2-22
              2.4.4  Characterization Survey	2-23
              2.4.5  Remedial Action Support Survey  	2-23
              2.4.6  Final Status Survey 	2-24
              2.4.7  Regulatory Agency Confirmation and Verification	2-25
       2.5     Demonstrating Compliance With a Dose- or Risk-Based Regulation	2-25
              2.5.1  The Decision To Use Statistical Tests	2-25
              2.5.2  Classification	2-28
              2.5.3  Design Considerations for Small Areas of Elevated Activity	2-29


              2.5.4  Design Considerations for Relatively Uniform
                    Distributions of Contamination	2-30
              2.5.5  Developing an Integrated Survey Design	2-31
       2.6    Flexibility in Applying MARSSIM Guidance	2-33
              2.6.1  Alternate Statistical Methods	2-34
              2.6.2  Alternate Null Hypothesis	2-39
              2.6.3  Integrating MARSSIM with Other Survey Designs	2-39

3.      Historical Site Assessment	3-1
       3.1    Introduction 	3-1
       3.2    Data Quality Objectives	3-2
       3.3    Site Identification	3-4
       3.4    Preliminary Historical Site Assessment Investigation	3-4
              3.4.1  Existing Radiation Data	3-7
              3.4.2  Contacts and Interviews	3-9
       3.5    Site Reconnaissance	3-9
       3.6    Evaluation of Historical Site Assessment Data	3-10
              3.6.1  Identify Potential Contaminants	3-11
              3.6.2  Identify Potentially Contaminated Areas	3-12
              3.6.3  Identify Potentially Contaminated Media 	3-13
              3.6.4  Develop a Conceptual Model of the Site	3-21
              3.6.5  Professional Judgment	3-22
       3.7    Determining the Next Step in the Site Investigation Process  	3-24
       3.8    Historical  Site Assessment Report	3-24
       3.9    Review of the Historical Site Assessment	3-25

4.      Preliminary Survey Considerations	4-1
       4.1    Introduction 	4-1
       4.2    Decommissioning Criteria	4-1
       4.3    Identify Contaminants and Establish Derived Concentration Guideline Levels  4-3
              4.3.1  Direct Application of DCGLs  	4-4
              4.3.2  DCGLs and the Use  of Surrogate Measurements  	4-4
              4.3.3  Use of DCGLs for Sites With Multiple Radionuclides	4-8
              4.3.4  Integrated Surface and Soil Contamination DCGLs  	4-8
       4.4    Classify Areas by Contamination Potential	4-11
       4.5    Select Background Reference Areas  	4-13
       4.6    Identify Survey Units	4-14
       4.7     Select Instruments and Survey Techniques	4-16
              4.7.1  Selection of Instruments	4-16
              4.7.2  Selection of Survey Techniques	4-17
              4.7.3  Criteria for Selection of Sample Collection and
                    Direct Measurement Methods  	4-19
       4.8     Site Preparation 	4-22
              4.8.1  Consent for Survey	4-22
              4.8.2  Property Boundaries	4-22
              4.8.3  Physical Characteristics of Site  	4-22
              4.8.4  Clearing To Provide Access	4-24
              4.8.5  Reference Coordinate System  	4-27
       4.9     Quality Control 	4-32
              4.9.1  Precision and Systematic Errors (Bias) 	4-33
              4.9.2  Number of Quality Control Measurements	4-34
              4.9.3  Controlling Sources of Error	4-38
       4.10   Health and Safety	4-38

5.      Survey Planning and Design  	5-1
       5.1     Introduction 	5-1
       5.2     Scoping Surveys	5-1
              5.2.1  General	5-1
              5.2.2  Survey Design  	5-2
              5.2.3  Conducting Surveys	5-3
              5.2.4  Evaluating Survey Results	5-3
              5.2.5  Documentation	5-4
       5.3     Characterization Surveys	5-7
              5.3.1  General	5-7
              5.3.2  Survey Design  	5-8
              5.3.3  Conducting Surveys	5-9
              5.3.4  Evaluating Survey Results	5-14
              5.3.5  Documentation	5-15
       5.4     Remedial Action Support Surveys	5-18
              5.4.1  General	5-18
              5.4.2  Survey Design  	5-18
              5.4.3  Conducting Surveys	5-19
              5.4.4  Evaluating Survey Results	5-19
              5.4.5  Documentation	5-19
       5.5     Final Status Surveys	5-21
              5.5.1  General	5-21
              5.5.2  Survey Design  	5-21
              5.5.3  Developing an Integrated Survey Strategy	5-46
              5.5.4  Evaluating Survey Results	5-52
              5.5.5  Documentation	5-52

6.      Field Measurement Methods and Instrumentation	6-1
       6.1     Introduction 	6-1
       6.2     Data Quality Objectives	6-2
              6.2.1  Identifying Data Needs	6-2
              6.2.2  Data Quality Indicators  	6-3
       6.3     Selecting a Service Provider to Perform Field Data Collection Activities .... 6-8
       6.4     Measurement Methods	6-10
              6.4.1  Direct Measurements	6-10
              6.4.2  Scanning Surveys	6-13
       6.5     Radiation Detection  Instrumentation	6-15
              6.5.1  Radiation Detectors	6-15
              6.5.2  Display and Recording Equipment  	6-17
              6.5.3  Instrument Selection  	6-18
              6.5.4  Instrument Calibration	6-20
       6.6     Data Conversion	6-28
              6.6.1  Surface Activity	6-29
              6.6.2  Soil Radionuclide Concentration and Exposure Rates  	6-31
       6.7     Detection Sensitivity 	6-31
              6.7.1  Direct Measurement Sensitivity	6-32
              6.7.2  Scanning Sensitivity	6-37
       6.8     Measurement Uncertainty (Error) 	6-49
              6.8.1  Systematic and Random Uncertainties  	6-50
              6.8.2  Statistical Counting Uncertainty  	6-52
              6.8.3  Uncertainty Propagation	6-52
              6.8.4  Reporting Confidence Intervals	6-53
       6.9     Radon Measurements	6-55
              6.9.1  Direct Radon Measurements  	6-58
              6.9.2  Radon Progeny Measurements	6-59
              6.9.3  Radon Flux Measurements  	6-60
       6.10   Special Equipment	6-61
              6.10.1 Positioning Systems	6-61
              6.10.2 Mobile Systems with Integrated Positioning Systems	6-62
              6.10.3 Radar, Magnetometer, and Electromagnetic  Sensors  	6-63
              6.10.4 Aerial Radiological Surveys  	6-66


7.      Sampling and Preparation for Laboratory Measurements	7-1
       7.1    Introduction  	7-1
       7.2    Data Quality Objectives	7-1
             7.2.1  Identifying Data Needs	7-2
             7.2.2  Data Quality Indicators  	7-2
       7.3    Communications with the Laboratory	7-7
             7.3.1  Communications During Survey Planning  	7-8
             7.3.2  Communications Before and During Sample Collection	7-8
             7.3.3  Communications During Sample Analysis  	7-9
             7.3.4  Communications Following Sample Analysis	7-9
       7.4    Selecting a Radioanalytical Laboratory	7-10
       7.5    Sampling  	7-11
             7.5.1  Surface Soil  	7-12
             7.5.2  Building Surfaces	7-15
             7.5.3  Other Media	7-16
       7.6    Field Sample Preparation and Preservation	7-16
             7.6.1  Surface Soil  	7-17
             7.6.2  Building Surfaces	7-17
             7.6.3  Other Media	7-17
       7.7    Analytical Procedures  	7-17
             7.7.1  Photon Emitting Radionuclides	7-21
             7.7.2  Beta Emitting Radionuclides	7-21
             7.7.3  Alpha Emitting Radionuclides	7-22
       7.8    Sample Tracking  	7-23
             7.8.1  Field Tracking Considerations	7-24
             7.8.2  Transfer of Custody	7-24
             7.8.3  Laboratory Tracking	7-25
       7.9    Packaging and Transporting Samples 	7-25
             7.9.1  U.S. Nuclear Regulatory Commission Regulations	7-27
             7.9.2  U.S. Department of Transportation Regulations	7-27
             7.9.3  U.S. Postal Service Regulations	7-28

8.      Interpretation of Survey Results	8-1
       8.1    Introduction  	8-1
       8.2    Data Quality Assessment	8-1
             8.2.1  Review the Data Quality Objectives and Sampling
                    Design 	8-2
             8.2.2  Conduct a Preliminary Data Review	8-2
             8.2.3  Select the Tests  	8-6
             8.2.4  Verify the Assumptions of the Tests 	8-7
             8.2.5  Draw Conclusions From the Data 	8-8
             8.2.6  Example	8-10
       8.3    Contaminant Not Present in Background	8-11
             8.3.1  One-Sample Statistical Test	8-11
             8.3.2  Applying the Sign Test  	8-12
             8.3.3  Sign Test Example: Class 2 Exterior Soil Survey Unit	8-12
             8.3.4  Sign Test Example: Class 3 Exterior Soil Survey Unit	8-14
       8.4    Contaminant Present in Background	8-17
             8.4.1  Two-Sample Statistical Test  	8-17
             8.4.2  Applying the Wilcoxon Rank Sum Test  	8-18
             8.4.3  Wilcoxon Rank Sum Test Example:
                    Class 2 Interior Drywall Survey Unit	8-19
             8.4.4  Wilcoxon Rank Sum Test Example:
                    Class 1 Interior Concrete Survey Unit	8-21
             8.4.5  Multiple Radionuclides 	8-21
       8.5    Evaluating the Results: The Decision  	8-21
             8.5.1  Elevated Measurement Comparison 	8-21
             8.5.2  Interpretation of Statistical Test Results  	8-23
             8.5.3  If the Survey Unit Fails  	8-23
             8.5.4  Removable Activity	8-25
       8.6    Documentation	8-25

9.      Quality Assurance and Quality Control	9-1
       9.1    Introduction  	9-1
       9.2    Development of a Quality Assurance Project Plan  	9-3
       9.3    Data Assessment  	9-5
             9.3.1  Data Verification	9-6
             9.3.2  Data Validation  	9-7

References 	Ref-1

Appendix A  Example of MARSSIM Applied to a Final Status Survey	  A-1
       A.1   Introduction  	  A-1
       A.2   Survey Preparations	  A-1
       A.3   Survey Design  	  A-7
       A.4   Conducting Surveys	  A-14
       A.5   Evaluating Survey Results	  A-15
Appendix B  Simplified Procedure for Certain Users of Sealed Sources, Short
             Half-Life Materials, and Small Quantities	B-1

Appendix C  Site Regulations and Requirements Associated With Radiation
             Surveys and Site Investigations	C-1
       C.1    EPA Statutory Authorities	C-1
       C.2    DOE Regulations and Requirements	C-4
       C.3    NRC Regulations and Requirements	C-12
       C.4    DOD Regulations and Requirements	C-15
       C.5    State and Local Regulations and Requirements	C-20

Appendix D  The Planning Phase of the Data Life Cycle	  D-1
       D.1    State the Problem	  D-4
       D.2    Identify the Decision  	  D-5
       D.3    Identify the Inputs to the Decision	  D-5
       D.4    Define the Boundaries of the Study	  D-6
       D.5    Develop a Decision Rule	  D-8
       D.6    Specify Limits on Decision Errors	  D-13
       D.7    Optimize the Design for Collecting Data	  D-28

Appendix E  The Assessment Phase of the Data Life Cycle	E-1
       E.1    Review DQOs and Survey Design	E-1
       E.2    Conduct a Preliminary Data Review	E-3
       E.3    Select the Statistical Test	E-4
       E.4    Verify the Assumptions of the  Statistical Test	E-4
       E.5    Draw Conclusions from the Data	E-5

Appendix F  The Relationship Between the Radiation Survey and Site
             Investigation Process, the CERCLA Remedial or Removal Process,
             and the RCRA Corrective Action Process	F-1

Appendix G  Historical Site Assessment Information Sources	  G-1

Appendix H  Description of Field Survey and Laboratory Analysis Equipment  	  H-1
       H.1    Introduction  	  H-3
       H.2    Field Survey Equipment	  H-5
       H.3    Laboratory Instruments  	  H-38
Appendix I   Statistical Tables and Procedures	  I-1
       I.1    Normal Distribution	  I-1
       I.2    Sample Sizes for Statistical Tests  	  I-2
       I.3    Critical Values for the Sign Test	  I-4
       I.4    Critical Values for the WRS Test  	  I-6
       I.5    Probability of Detecting an Elevated Area	  I-11
       I.6    Random Numbers  	  I-14
       I.7    Stem and Leaf Display	  I-17
       I.8    Quantile Plots	  I-18
       I.9    Power Calculations for the Statistical Tests	  I-25
       I.10   Spreadsheet Formulas for the Wilcoxon Rank Sum Test  	  I-30
       I.11   Example WRS Test for Two Radionuclides  	  I-31

Appendix J   Derivation of Alpha Scanning Equations Presented in Section 6.7.2.2	J-1

Appendix K  Comparison Tables Between Quality Assurance Documents 	  K-1

Appendix L  Regional Radiation Program Managers	L-1
       L.1    Department of Energy  	L-2
       L.2    Environmental Protection Agency	L-3
       L.3    Nuclear Regulatory Commission	L-5
       L.4    Department of the Army 	L-6
       L.5    Department of the Navy	L-7
       L.6    Department of the Air Force 	L-8

Appendix M  Sampling Methods: A List of Sources  	M-1
       M.1   Introduction   	M-1
       M.2   List of Sources	M-1

Appendix N  Data Validation Using Data Descriptors	  N-1
       N.1    Reports to Decision Maker  	  N-1
       N.2   Documentation	  N-2
       N.3    Data Sources  	  N-4
       N.4   Analytical Method and Detection Limit 	  N-4
       N.5    Data Review	  N-5
       N.6   Data Quality Indicators 	  N-6

Glossary	  GL-1

Index  	  Index-1
                                LIST OF TABLES

                                                                                 Page
1.1     Scope of MARSSIM 	1-8

2.1     The Data Life Cycle used to Support the Radiation Survey and
       Site Investigation Process  	2-16
2.2     Recommended Conditions for Demonstrating Compliance Based on Survey Unit
       Classification for a Final Status Survey	2-32
2.3     Examples of Alternate Statistical Tests	2-35

3.1     Questions Useful for the Preliminary HSA Investigation 	3-5

4.1     Selection of Direct Measurement Techniques Based on Experience	4-20
4.2     Example of DQO Planning Considerations	4-21
4.3     Upper Confidence Limits for the True Variance as a Function of the Number of
       QC Measurements used to Determine the Estimated Variance	4-36

5.1     Values of Pr for Given Values of the Relative Shift, Δ/σ, when the
       Contaminant is Present in Background 	5-28
5.2     Percentiles Represented by Selected Values of α and β  	5-28
5.3     Values of N/2 for Use with the Wilcoxon Rank Sum Test	5-30
5.4     Values of Sign p for Given Values of the Relative Shift, Δ/σ, when the
       Contaminant is Not Present in Background	5-32
5.5     Values of N for Use with the Sign Test	5-34
5.6     Illustrative Examples of Outdoor Area Dose Factors  	5-37
5.7     Illustrative Examples of Indoor Area Dose Factors	5-37
5.8     Example Final Status Survey Investigation Levels  	5-45
5.9     Recommended Survey Coverage for Structures and Land Areas  	5-47

6.1     Radiation Detectors With Applications for Alpha Surveys	6-20
6.2     Radiation Detectors With Applications for Beta Surveys	6-21
6.3     Radiation Detectors With Applications for Gamma Surveys  	6-22
6.4     Examples of Estimated Detection Sensitivities for Alpha and Beta Survey
       Instrumentation  	6-36
6.5     Values of d' for Selected True Positive and False Positive Proportions	6-40
6.6     Scanning Sensitivity (MDCR) of the Ideal Observer for Various
       Background Levels	6-41
6.7    NaI(Tl) Scintillation Detector Scan MDCs for Common Radiological
       Contaminants	6-47
6.8    Probability of Detecting 300 dpm/100 cm2 of Alpha Activity While Scanning with
       Alpha Detectors Using an Audible Output 	6-49
6.9    Areas Under Various Intervals About the Mean of a Normal Distribution	6-54
6.10   Radiation Detectors with Applications to Radon Surveys	6-57
6.11   Typical Radar Penetration Depths for Various Geologic Materials  	6-64

7.1    Soil Sampling Equipment  	7-14
7.2    Examples of Sources for Routine Analytical Methods 	7-18
7.3    Typical Measurement Sensitivities for Laboratory Radiometric Procedures  	7-20

 8.1    Methods for Checking the Assumptions of Statistical Tests	8-8
8.2    Summary of Statistical Tests	8-9
8.3    Final Status Survey Parameters for Example  Survey Units	8-10
8.4    Example Sign Analysis: Class 2 Exterior Soil Survey Unit 	8-14
8.5    Sign Test Example Data for Class 3 Exterior Survey Unit	8-16
8.6    WRS Test for Class 2 Interior Drywall  Survey Unit	8-20

9.1    The Elements of a Quality System Related to the Data Life Cycle	9-2
9.2    Examples of QAPP Elements for Site Surveys and Investigations	9-4
9.3    Suggested Content or Consideration, Impact  if Not Met, and  Corrective Actions
       for Data Descriptors	9-8

A.1    Class 1 Interior Concrete Survey Unit and Reference Area Data  	  A-15
A.2    Stem and Leaf Displays for Class 1 Interior Concrete Survey Unit	  A-16
A.3    WRS Test for Class 1  Interior Concrete Survey Unit 	  A-18

C.1    DOE Authorities, Orders and Regulations Related to Radiation Protection	C-5
C.2    Agreement States	C-21
C.3    States that Regulate Diffuse NORM  	C-21

D.1    Example Representation of Decision Errors for a Final Status Survey	  D-15

F.1    Program Comparison	F-5
F.2    Data Elements for Site Visits	F-10
F.3    Comparison of Sampling Emphasis Between Remedial Site Assessment
       and Removal Assessment  	F-10

G.1    Site Assessment Information Sources (Organized by Information Needed)	  G-2
G.2    Site Assessment Information Sources (Organized by Information Source)	  G-7
H.1    Radiation Detectors with Applications to Alpha Surveys	  H-50
H.2    Radiation Detectors with Applications to Beta Surveys 	  H-52
H.3    Radiation Detectors with Applications to Gamma Surveys	  H-53
H.4    Radiation Detectors with Applications to Radon Surveys	  H-55
H.5    Systems that Measure Atomic Mass or Emissions  	  H-56

I.1     Cumulative Normal Distribution Function Φ(z)  	  I-1
I.2a    Sample Sizes for Sign Test  	  I-2
I.2b    Sample Sizes for Wilcoxon Rank Sum Test  	  I-3
I.3     Critical Values for the Sign Test Statistic S+ 	  I-4
I.4     Critical Values for the WRS Test  	  I-6
I.5     Risk that an Elevated Area with Length L/G and Shape S will not be Detected
       and the Area (%) of the Elevated Area Relative to a
       Triangular Sample Grid Area of 0.866 G2	  I-11
I.6     1,000 Random Numbers Uniformly Distributed between Zero and One 	  I-14
I.7     Data for Quantile Plot  	  I-19
I.8     Ranked Reference Area Concentrations   	  I-22
I.9     Interpolated Ranks for Survey Unit Concentrations  	  I-23
I.10    Values of Pr and p2 for Computing the Mean and Variance of WMW	  I-28
I.11    Spreadsheet Formulas Used in Table 8.6	  I-30
I.12    Example WRS Test for Two Radionuclides  	  I-35

K.1    Comparison of EPA QA/R-5 and EPA QAMS-005/80	  K-2
K.2    Comparison of EPA QA/R-5 and ASME NQA-1  	  K-3
K.3    Comparison of EPA QA/R-5 and DOE Order 5700.6c	  K-4
K.4    Comparison of EPA QA/R-5 and MIL-Q-9858A  	  K-5
K.5    Comparison of EPA QA/R-5 and ISO 9000  	  K-6

N.1    Use of Quality Control Data 	  N-7
N.2    Minimum Considerations for Precision, Impact if Not Met,
       and Corrective Actions  	  N-9
N.3    Minimum Considerations for Bias,  Impact if Not Met,
       and Corrective Actions  	  N-10
N.4    Minimum Considerations for Representativeness, Impact if Not Met,
       and Corrective Actions  	  N-13
N.5    Minimum Considerations for Comparability, Impact if Not Met,
       and Corrective Actions  	  N-15
N.6    Minimum Considerations for Completeness, Impact if Not Met,
       and Corrective Actions  	  N-16
                                LIST OF FIGURES

                                                                                  Page
1.1    Compliance Demonstration	1-2

2.1    The Data Life Cycle	2-7
2.2    The Data Quality Objectives Process	2-10
2.3    The Assessment Phase of the Data Life Cycle	2-12
2.4    The Radiation Survey and Site Investigation Process
       in Terms of Area Classification	2-17
2.5    The Historical Site Assessment Portion of the
       Radiation Survey and Site Investigation Process	2-18
2.6    The Scoping Survey Portion of the Radiation Survey and
       Site Investigation Process  	2-19
2.7    The Characterization and Remedial Action Support Survey Portion of the
       Radiation Survey and Site Investigation Process	2-20
2.8    The Final Status Survey Portion of the Radiation Survey and
       Site Investigation Process  	2-21

3.1    Example Showing How a Site Might Be Classified Prior to Cleanup
       Based on the Historical Site Assessment	3-23
3.2    Example of a Historical Site Assessment Report Format  	3-26

4.1    Sequence of Preliminary Activities Leading to Survey Design	4-2
4.2    Flow Diagram for Selection of Field Survey Instrumentation for
       Direct Measurements and Analysis of Samples	4-18
4.3    Indoor Grid Layout With Alphanumeric Grid Block Designation	4-28
4.4    Example of a Grid System for Survey of Site Grounds Using Compass Directions . . 4-29
4.5    Example of a Grid System for Survey of Site Grounds Using Distances
       Left or Right of the Baseline 	4-30

5.1    Flow Diagram Illustrating the Process for Identifying Measurement Locations	5-22
5.2    Flow Diagram for Identifying the Number of Data Points, N, for Statistical Tests . . . 5-23
5.3    Flow Diagram for Identifying Data Needs for Assessment of Potential Areas of
       Elevated Activity in Class 1 Survey Units	5-24
5.4    Example of a Random Measurement Pattern	5-41
5.5    Example of a Random-Start Triangular Grid Measurement Pattern	5-43

6.1    The Physical Probe Area of a Detector 	6-29
6.2    Graphically Represented Probabilities for Type I and Type II Errors in Detection
       Sensitivity for Instrumentation With a Background Response 	6-33
8.1    Examples of Posting Plots	8-4
8.2    Example of a Frequency Plot	8-5

9.1    Example of a QAPP Format  	9-5

A.1    Plot Plan for the Specialty Source Manufacturing Company  	  A-3
A.2    Building Floor Plan 	  A-4
A.3    Examples of Scanning Patterns for Each Survey Unit Classification 	  A-6
A.4    Reference Coordinate System for the Class 1 Interior Concrete Survey Unit	  A-8
A.5    Power Chart for the Class 1 Interior Concrete Survey Unit	  A-9
A.6    Prospective Power Curve for the Class 1 Interior Concrete Survey Unit  	  A-12
A.7    Measurement Grid for the Class 1 Interior Concrete Survey Unit	  A-13
A.8    Quantile-Quantile Plot for the Class 1 Interior Concrete Survey Unit 	  A-17
A.9    Retrospective Power Curve for the Class 1 Interior Concrete  Survey Unit	  A-20

D.1    The Data Quality Objectives Process	  D-2
D.2    Repeated Applications of the DQO Process Throughout the
       Radiation Survey and Site Investigation Process	  D-3
D.3    Example of the Parameter of Interest for the 1-Sample Case  	  D-11
D.4    Example of the Parameter of Interest for the 2-Sample Case  	  D-12
D.5    Possible Statement of the Null Hypothesis for the Final Status Survey
       Addressing the Issue of Compliance	  D-18
D.6    Possible Statement of the Null Hypothesis for the Final Status Survey
       Addressing the Issue of Indistinguishability from Background	  D-19
D.7    Geometric Probability of Sampling at Least One Point of an
       Area of Elevated Activity as a Function of Sample Density with
       Either a Square or Triangular Sampling Pattern  	  D-24
D.8    Example of a Power Chart Illustrating the Decision Rule for the
       Final Status Survey 	  D-25
D.9    Example of an Error Chart Illustrating the Decision Rule for the
       Final Status Survey 	  D-27

E.1    The Assessment Phase of the Data Life Cycle	E-2

F.1    Comparison of the Radiation Survey and Site Investigation Process with the
       CERCLA Superfund Process and the RCRA Corrective Action Process	F-2
I.1     Example of a Stem and Leaf Display	  I-18
I.2     Example of a Quantile Plot 	  I-20
I.3     Quantile Plot for Example Class 2 Exterior Survey Unit of Section 8.3.3  	  I-21
I.4     Example Quantile-Quantile Plot  	  I-24
I.5     Retrospective Power Curve for Class 3 Exterior Survey Unit	  I-26
I.6     Retrospective Power Curve for Class 2 Interior Drywall Survey Unit  	  I-29

J.1     Probability (P) of Getting One or More Counts When Passing Over a 100 cm2
       Area Contaminated at 500 dpm/100 cm2 Alpha  	J-5
J.2     Probability (P) of Getting One or More Counts When Passing Over a 100 cm2
       Area Contaminated at 1,000 dpm/100 cm2 Alpha	J-6
J.3     Probability (P) of Getting One or More Counts When Passing Over a 100 cm2
       Area Contaminated at 5,000 dpm/100 cm2 Alpha	J-7
J.4     Probability (P) of Getting Two or More Counts When Passing Over a 100 cm2
       Area Contaminated at 500 dpm/100 cm2 Alpha  	J-8
J.5     Probability (P) of Getting Two or More Counts When Passing Over a 100 cm2
       Area Contaminated at 1,000 dpm/100 cm2 Alpha	J-9
J.6     Probability (P) of Getting Two or More Counts When Passing Over a 100 cm2
       Area Contaminated at 5,000 dpm/100 cm2 Alpha	J-10

N.1    Measurement Bias and Random Measurement Uncertainty  	  N-11
                             ACKNOWLEDGMENTS

The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) came about as a
result of individuals—at the management level—within the Environmental Protection Agency
(EPA), Nuclear Regulatory Commission (NRC), Department of Energy (DOE), and Department
of Defense (DOD) who recognized the necessity for a standardized guidance document for
investigating radioactively contaminated sites.  The creation of the MARSSIM was facilitated by
the cooperation of subject matter specialists from these agencies with management's support and
a willingness to work smoothly together toward reaching the common goal of creating a
workable and user-friendly guidance manual. Special appreciation is extended to Robert A.
Meek of the NRC and Anthony Wolbarst of EPA for developing the  concept of a multi-agency
work group and bringing together representatives from the participating agencies.

The MARSSIM could not have been possible without the technical work group members who
contributed their time, talent, and efforts to develop this consensus guidance  document:
              CDR Colleen F. Petullo, U.S. Public Health Service, EPA, Chair

       EPA:   Mark Doehnert
              Anthony Wolbarst, Ph.D.
              H. Benjamin Hull
              Sam Keith, CHP*
              Jon Richards

       NRC:   Robert A. Meek, Ph.D.
              Anthony Huffert
              George E. Powers, Ph.D.
              David Fauver, CHP
              Cheryl Trottier

       DOE:   Hal Peterson, CHP
              Kenneth Duvall
              Andrew Wallo III

       DOD:   David Alberth, CHP (Army)
              LCDR Lino Fragoso, Ph.D. (Navy)
              Lt. Col. Donald Jordan (Air Force)
              Capt. Kevin Martilla (Air Force)
              Capt. Julie Coleman (Air Force)
Special mention is extended to the Federal agency contractors for their assistance in developing
the MARSSIM:

      EPA:  Scott Hay (S. Cohen & Associates, Inc.)
             Todd Peterson, Ph.D. (S. Cohen & Associates, Inc.)
             Harry Chmelynski, Ph.D. (S. Cohen & Associates, Inc.)
             Ralph Kenning, CHP (S. Cohen & Associates, Inc.)

      NRC:  Eric Abelquist, CHP (Oak Ridge Institute of Science and Education)
             James Berger (Auxier & Associates)
             Carl Gogolak, Ph.D. (DOE/EML, under contract with NRC)

* Formerly with EPA National Air and Radiation Environmental Laboratory (NAREL). Currently with the Agency
for Toxic Substances and Disease Registry (ATSDR).
      DOE:  Robert Coleman, CHP (Oak Ridge National Laboratory)
             John Kirk Williams (Oak Ridge National Laboratory)
             Romance Carrier (Oak Ridge National Laboratory)

A special thank you is extended to Emilio Braganza (EPA), Gregory Budd (EPA), Mary Clark,
Ph.D. (EPA), Brian Littleton (EPA), John Karhnak (EPA), Sarah Seeley (EPA), Rett Sutton
(EPA/SEE), Juanita Beeson (NRC),  Stephen A. McGuire, Ph.D. (NRC), Walter Oliu (NRC), LT
James Coleman (Navy), CDR David E. Farrand (U.S. Navy), CAPT David George (Navy), CDR
Garry Higgins (Navy), CAPT James Malinoski (Navy), Harlan Keaton (State of Florida), J.
Michael Beck, J.D. (EMS), Tom McLaughlin, Ph.D. (SC&A), Kevin Miller, Ph.D. (DOE/EML),
and the members of the EPA's Science Advisory Board (SAB) for their assistance in developing
the manual.

The membership of the SAB Radiation Advisory Committee's Review Subcommittee that
conducted an extensive peer review of the MARSSIM includes:

Chair
      James E. Watson, Jr., Ph.D., University of North Carolina at Chapel Hill

Members
      William Bair, Ph.D., (Retired), Battelle Pacific Northwest Laboratory
      Stephen L. Brown, Ph.D., R2C2 (Risks of Radiation and Chemical Compounds)
      June Fabryka-Martin, Ph.D.,  Los Alamos National Laboratory
      Thomas F. Gesell, Ph.D., Idaho State University
      F. Owen Hoffman, Ph.D., SENES Oak Ridge, Inc.
      Janet Johnson, Ph.D., Shepherd Miller, Inc.
      Bernd Kahn, Ph.D., Georgia Institute of Technology
      Ellen Mangione, M.D., Colorado Department of Health
      Paul J. Merges, Ph.D., New York State Department of Environmental Conservation

SAB Consultants
      Michael E. Ginevan, Ph.D., M.E. Ginevan & Associates
      David G. Hoel, Ph.D., University of South Carolina
      David E. McCurdy, Ph.D., Yankee Atomic Electric Company
      Frank L. Parker, Ph.D., Vanderbilt University [Liaison from Environmental
                           Management Advisory Board, U.S.  Department of Energy]

Science Advisory Board Staff
      K. Jack Kooyoomjian, Ph.D., Designated Federal Official, EPA
      Mrs. Diana L. Pozun, Staff Secretary, EPA
The work group meetings were open to the public, and the following people attended meetings as
technical experts at the request of the work group or as observers:
K. Allison
L. Abramson
R. Abu-Eid
W. Beck

A. Boerner

Lt. E. Bonano
M. Boyd
J. Buckley
B. Burns
W. Cottrell

D. Culberson

M.C. Daily
M. Eagle
M. Frank
F. Galpin
R. Gilbert

I.E. Glenn
J. Hacala
L. Hendricks

K. Hogan
R. Hutchinson
G. Jablonowski
A.T. Kearney
NRC
NRC
Oak Ridge Institute of
Science and Education
Oak Ridge Institute of
Science and Education
Air Force
EPA
NRC
Army
Oak Ridge
National Laboratory
Nuclear Fuel Services,
Inc.
NRC
EPA
Booz, Allen & Hamilton
RAE Corp.
Pacific Northwest
Laboratory
NRC
Booz, Allen & Hamilton
Nuclear Environmental
Services
EPA
National Institute of
Standards and
Technology
EPA
N. Lailas
H. Larson
G. Lindsey

J. Lux
M. Mahoney
J. Malaro
H. Morton
H. Mukhoty
A.J. Nardi
D. Ottlieg

V. Patania

C.L. Pittiglio
C. Raddatz
L. Ralston
P. Reed
R. Rodriguez

N. Rohnig
R. Schroeder
C. Simmons
E. Stamataky
R. Story
E. Temple
D. Thomas
S. Walker
P. White
R. Wilhelm
EPA
NRC
International Atomic
Energy Agency
Kerr-McGee Corporation
Army
NRC
Morton Associates
EPA
Westinghouse
Westinghouse Hanford
Company
Oak Ridge
National Laboratory
NRC
NRC
SC&A, Inc.
NRC
Oak Ridge
National Laboratory

Army
Kilpatrick & Cody
EPA
Foster Wheeler
EPA
Air Force
EPA
EPA
EPA
                               ABBREVIATIONS

AEA            Atomic Energy Act
AEC            Atomic Energy Commission
AFI             Air Force Instructions
ALARA         as low as reasonably achievable
AMC           Army Material Command
ANSI           American National Standards Institute
AR             Army Regulations
ARA            Army Radiation Authorization
ASTM           American Society for Testing and Materials
ATSDR         Agency for Toxic Substances and Disease Registry

CAA            Clean Air Act
Capt.            Captain (Air Force)
CAPT           Captain (Navy)
CDR            Commander
CEDE           committed effective dose equivalent
CERCLA        Comprehensive Environmental Response, Compensation, and Liability Act
CERCLIS        Comprehensive Environmental Response, Compensation, and Liability
                Information System
CFR            Code of Federal Regulations
CHP            Certified Health Physicist
CPM            counts per minute

DCF            dose conversion factor
DCGL           derived concentration guideline level
DCGLEMC        DCGL for small areas of elevated activity, used with the EMC
DCGLW         DCGL for average concentrations over a wide area, used with statistical tests
DEFT           Decision Error Feasibility Trials
DLC            Data Life Cycle
DOD            Department of Defense
DOE            Department of Energy
DOT            Department of Transportation
DQA            Data Quality Assessment
DQO            Data Quality Objectives

EERF           Eastern Environmental Radiation Facility
Ehf             human factors efficiency
EMC            elevated measurement comparison
EML            Environmental Measurements Laboratory
EMMI           Environmental Monitoring Methods Index
EPA            Environmental Protection Agency
EPIC            Environmental Photographic Interpretation Center
ERAMS         Environmental Radiation Ambient Monitoring System
FEMA          Federal Emergency Management Agency
FIRM           Flood Insurance Rate Maps
FRDS           Federal Reporting Data System
FSP            Field Sampling Plan
FWPCA         Federal Water Pollution Control Act
FUSRAP        Formerly Utilized Sites Remedial Action Program

GEMS          Geographical Exposure Modeling System
GM            Geiger-Mueller
GPS            global positioning system
GRIDS          Geographic Resources Information Data System
GWSI           Ground Water Site Inventory

H0              null hypothesis
Ha              alternative hypothesis
HSA            Historical Site Assessment
HSWA          Hazardous and Solid Waste Amendments

ISI              Information System Inventory

Lc              critical level
LD              detection limit
LBGR           lower bound of the gray region
LCDR          Lieutenant Commander
LLRWPA       Low Level Radioactive Waste Policy Act as Amended
LT              Lieutenant

MARLAP       Multi-Agency Radiation Laboratory Analytical Protocols (Manual)
MARSSIM        Multi-Agency Radiation Survey and Site Investigation Manual
MCA           multichannel  analyzer
MDC           minimum detectable concentration
MDCR          minimum detectable count rate
MED           Manhattan Engineering District

NARM          naturally occurring or accelerator produced radioactive material
NCAPS         National Corrective Action Prioritization System
NCRP           National Council on Radiation Protection and Measurements
NCP            National Contingency Plan
NIST           National Institute of Standards and Technology
NORM          naturally occurring radioactive material
NPDC           National Planning Data Corporation
NPDES          National Pollutant Discharge Elimination System
NRC            Nuclear Regulatory Commission
NWPA           Nuclear Waste Policy Act
NWWA           National Water Well Association

ODES           Ocean Data Evaluation System
ORNL           Oak Ridge National Laboratory
ORISE          Oak Ridge Institute for Science and Education

PERALS         photon electron rejecting alpha liquid scintillator
PIC            pressurized ionization chamber

QA             quality assurance
QAPP           Quality Assurance Project Plan
QC             quality control
QMP            Quality Management Plan

RASP           Radiological Affairs Support Program
RAGS/HHEM      Risk Assessment Guidance for Superfund/Human Health Evaluation Manual
RC             release criterion
RCRA           Resource Conservation and Recovery Act
RCRIS          Resource Conservation and Recovery Information System
RI/FS          Remedial Investigation/Feasibility Study
ROD            Record of Decision
RODS           Records of Decision System
RSSI           Radiation Survey and Site Investigation

SARA           Superfund Amendments and Reauthorization Act
SAP            Sampling and Analysis Plan
SDWA           Safe Drinking Water Act
SFMP           Surplus Facilities Management Program
SOP            Standard Operating Procedures
STORET         Storage and Retrieval of U.S. Waterways Parametric Data

TEDE           total effective dose equivalent
TLD            thermoluminescence dosimeter
TRU            transuranic
TSCA           Toxic Substances Control Act
UMTRCA       Uranium Mill Tailings Radiation Control Act
USGS           United States Geological Survey
USPHS          United States Public Health Service
USRADS        Ultrasonic Ranging and Data System

WATSTORE     National Water Data Storage and Retrieval System
WL             working level
WRS            Wilcoxon rank sum
WSR            Wilcoxon signed ranks
WT             Wilcoxon test
                      CONVERSION FACTORS
To Convert From        To                    Multiply By
acre                   hectare               0.405
acre                   sq. meter (m2)        4,050
acre                   sq. feet (ft2)        43,600
becquerel (Bq)         curie (Ci)            2.7 x 10^-11
Bq                     dps                   1
Bq                     pCi                   27
Bq/kg                  pCi/g                 0.027
Bq/m2                  dpm/100 cm2           0.60
Bq/m3                  Bq/L                  0.001
Bq/m3                  pCi/L                 0.027
centimeter (cm)        inch                  0.394
Ci                     Bq                    3.70 x 10^10
Ci                     pCi                   1 x 10^12
dps                    dpm                   60
dps                    pCi                   27
dpm                    dps                   0.0167
dpm                    pCi                   0.451
gray (Gy)              rad                   100
hectare                acre                  2.47
liter (L)              cm3                   1,000
liter (L)              m3                    0.001
liter (L)              ounce (fluid)         33.8
meter (m)              inch                  39.4
meter (m)              mile                  0.000621
sq. meter (m2)         acre                  0.000247
sq. meter (m2)         hectare               0.0001
sq. meter (m2)         sq. feet (ft2)        10.8
sq. meter (m2)         sq. mile              3.86 x 10^-7
m3                     liter                 1,000
mrem                   mSv                   0.01
mrem/y                 mSv/y                 0.01
mSv                    mrem                  100
mSv/y                  mrem/y                100
ounce (oz)             liter (L)             0.0296
pCi                    Bq                    0.037
pCi                    dpm                   2.22
pCi/g                  Bq/kg                 37
pCi/L                  Bq/m3                 37
rad                    Gy                    0.01
rem                    mrem                  1,000
rem                    mSv                   10
rem                    Sv                    0.01
sievert (Sv)           mrem                  100,000
sievert (Sv)           mSv                   1,000
sievert (Sv)           rem                   100
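
As a usage illustration, the short fragment below applies a few of the factors from the table
above (only a handful of entries are reproduced, and the dictionary-based helper is simply one
possible way to organize them); for example, 5 pCi/g corresponds to 185 Bq/kg.

    # A few factors from the Conversion Factors table, keyed by (from_unit, to_unit).
    CONVERSION_FACTORS = {
        ("pCi/g", "Bq/kg"): 37,
        ("Bq/kg", "pCi/g"): 0.027,
        ("mrem", "mSv"): 0.01,
        ("acre", "hectare"): 0.405,
    }

    def convert(value, from_unit, to_unit):
        """Convert a value using the tabulated multiplier."""
        return value * CONVERSION_FACTORS[(from_unit, to_unit)]

    print(convert(5.0, "pCi/g", "Bq/kg"))   # 185.0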
                            ERRATA AND ADDENDA


In response to comments received on the December 1997 Multi-Agency Radiation Survey and
Site Investigation Manual (MARSSIM), minor modifications were made to individual pages.
Modifications to the manual that correct errors are listed as errata, while modifications made to
clarify guidance or provide additional information are referred to as addenda. The pages affected
by these modifications are listed here and have the date of the modification in the footer.  A
complete list of comments and resolutions is available on the MARSSIM web site at:

http://www.epa.gov/radiation/marssim/

August 2000

Pages Modified to Correct Errata

v, xv, xxvii, Roadmap-4, 1-3, 2-6, 2-11, 2-12, 4-33, 4-35, 4-36, 4-37, 4-38, 5-33, 6-4, 6-10, 6-23,
6-37, 7-20, 8-19, 9-3, 9-4, 9-7, Ref-3, Ref-4, A-2, A-5, A-7, A-11, A-14, A-19, E-2, H-7, H-8,
H-10, H-12, H-14, H-16, H-32, I-30, N-2, N-6, N-8, N-11, N-13

Pages Modified to Provide Addenda

xiii, xxiii, xxviii, 5-30, 5-34, 7-8, C-20, C-21, D-23, I-5, L-2, L-3, L-4, L-5, L-8, M-10


June 2001

Pages Modified to Correct Errata

v, xxiii, xxviii, Roadmap-8, 4-24, 5-12, 6-16, 6-30, 6-37, 8-19, C-19

Pages Modified to Provide Addenda

8-23


August 2002

Pages Modified to Correct Errata

Ref-3, Ref-4, Ref-8, L-1, L-2, L-3, L-4, L-5, L-6, L-7

Pages Modified to Provide Addenda

2-4, 3-12, 4-11, GL-11

                                    ROADMAP
Introduction to MARSSIM

The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) provides
detailed guidance for planning, implementing, and evaluating environmental and facility
radiological surveys conducted to demonstrate compliance with a dose- or risk-based regulation.
The MARSSIM guidance focuses on the demonstration of compliance during the final status
survey following scoping, characterization, and any necessary remedial actions.

The process of planning the survey, implementing the survey plan, and assessing the survey
results prior to making a decision is called the Data Life Cycle.  MARSSIM Chapter 2 and
Appendix D provide detailed guidance on developing appropriate survey designs using the Data
Quality Objectives (DQO) Process to ensure that the survey results are of sufficient quality and
quantity to support the final decision. The survey design process is described in MARSSIM
Chapters 3, 4, and 5. Guidance on selecting appropriate measurement methods (i.e., scan
surveys, direct measurements, samples) and measurement systems (i.e., detectors, instruments,
analytical methods) is provided in MARSSIM Chapters 6 and 7 and Appendix H.  Data Quality
Assessment (DQA) is the process of assessing the survey results, determining that the quality of
the data satisfies the objectives of the survey, and interpreting the survey results as they apply to
the decision being made. The DQA process is described in MARSSIM Chapter 2 and
Appendix E and is applied in MARSSIM Chapter 8. Quality Assurance and Quality Control
(QA/QC) procedures are developed and recorded in survey planning documents, such as a
Quality Assurance Project Plan (QAPP) which is described in MARSSIM Chapter 9.

MARSSIM does not provide guidance for translating the release criterion into derived
concentration guideline levels (DCGLs). MARSSIM discusses contamination of surface soil and
building surfaces in detail. If other media (e.g., ground water, surface water, subsurface soil,
equipment, vicinity properties) are potentially contaminated at the time of the final status survey,
modifications to the MARSSIM survey design guidance and examples may be required.
The Goal of the Roadmap

The goal of the roadmap is to present a summary of the major steps in the design,
implementation, and assessment of a final status survey and to identify where guidance on these
steps is located in MARSSIM. A brief description of each step is included in the roadmap along
with references to the sections of MARSSIM that provide more detailed guidance.

This roadmap provides the user with basic guidance from MARSSIM combined with "rules of
thumb" (indicated by ☞) for performing compliance demonstration surveys. The roadmap is not
designed to be a stand-alone document, but to be used as a quick reference to MARSSIM for

users already familiar with the process of planning and performing surveys.  Roadmap users will
also find flow charts summarizing the major steps in the Radiation Survey and Site Investigation
Process, combined with references to sections in MARSSIM where detailed guidance may be
found.  In addition, the roadmap serves as an overview and example for applying MARSSIM
guidance at sites with radioactive contamination of surface soil and building surfaces. The
roadmap assumes a working knowledge of MARSSIM terminology. If such knowledge is
lacking, the user may refer to Section 2.2 of MARSSIM for definitions of key terms. In addition,
a complete set of definitions is provided in the Glossary.
Data Life Cycle

Compliance demonstration is simply a decision as to whether or not a survey unit meets the
release criterion. For most sites, this decision is supported by statistical tests based on the results
of one or more surveys. The initial assumption used in MARSSIM is that each survey unit is
contaminated above the release criterion until proven otherwise. The surveys are designed to
provide the information needed to reject this initial assumption. MARSSIM recommends using
the Data Life  Cycle as a framework for planning, implementing, and evaluating survey results
prior to making a decision. Figure 1  summarizes the major activities associated with each phase
of the Data Life Cycle.
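
The hypothesis-testing framing described above can be illustrated with a short calculation. The
sketch below assumes the one-sample Sign test of Chapter 8 for a contaminant that is not present
in background: the survey unit is treated as contaminated (the null hypothesis) until the
measurements support rejecting that assumption. The measurement values, the DCGLw, and the
decision error rate used here are hypothetical.

    # Illustrative sketch only: one-sample Sign test framing for the compliance
    # decision (contaminant not present in background). Values are hypothetical.
    from math import comb

    def binomial_tail(n, k, p=0.5):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    dcgl_w = 140.0                                  # hypothetical DCGLw (e.g., Bq/kg)
    data = [82.4, 103.1, 96.7, 121.5, 88.9, 110.2, 79.3, 134.6, 92.0, 105.8]
    alpha = 0.05                                    # Type I decision error rate

    n = len(data)
    s_plus = sum(1 for x in data if x < dcgl_w)     # Sign test statistic S+

    # Smallest k such that P(S+ >= k) <= alpha if the survey unit median were at
    # the DCGLw; exceeding it supports rejecting the initial assumption.
    critical = next(k for k in range(n + 1) if binomial_tail(n, k) <= alpha)

    if s_plus >= critical:
        print("Reject the null hypothesis: the survey unit demonstrates compliance")
    else:
        print("Cannot reject the null hypothesis: the survey unit does not pass")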

Planning Stage

The survey design is developed and documented using the Data Quality Objectives (DQO)
Process (Section 2.3.1, Appendix D). The DQOs for the project are established and preliminary
surveys (e.g.,  scoping, characterization) are performed to provide information necessary to design
the final status survey for compliance demonstration.  The DQOs for the project are re-evaluated
for each of the preliminary surveys.  The preliminary surveys may provide information for
purposes other than compliance demonstration that are not discussed in MARSSIM.  For
example, a characterization survey may provide information to support evaluation of remedial
alternatives. In addition, any of the preliminary surveys may be designed to demonstrate
compliance with the release criterion as one of the survey objectives. These alternate survey
designs are developed based on site-specific considerations (Section 2.6). The planning phase of
the Data Life  Cycle produces a final status survey design that is used for demonstrating
compliance with the release criterion. This design is recorded in planning documents, such as a
Quality Assurance Project Plan (QAPP) described in Section 9.2.
    Figure 1  The Data Life Cycle Applied to a Final Status Survey
A minimum amount of information is needed from the preliminary surveys to develop an
effective final status survey design. This includes

•      Sufficient information to justify classification and specification of boundaries for survey
       units (the default is Class 1 which results in the highest level of survey effort)
•      An estimate of the variability of the contaminant concentration in the survey unit (σs) and
       the reference area (σr) if necessary

After the preliminary surveys are completed, the final status survey design can be developed.
Figure 2 presents the major steps in the development of a survey design that integrates scanning
surveys with direct measurements and sampling.  Most of the steps are easy to understand and
references to appropriate  sections of MARSSIM are included in the flowchart. Several of these
steps are important enough to justify additional discussion in this guide.  These steps are

       Classify Areas by Contamination Potential
       Group/Separate Areas into Survey Units
       Determine Number of Data Points
       Select Instrumentation
       Develop an Integrated Survey Design

Classify Areas by Contamination Potential (Section 4.4)

Classification is a critical step in survey design because it determines the level of survey effort
based on the potential for contamination.  Overestimating the potential for contamination results
in an unnecessary increase in the level of survey effort. Underestimating the potential for
contamination greatly increases the probability of failing to demonstrate compliance based on the
survey results. There are two key decisions made when classifying areas: 1) is the average
activity in the area likely to exceed the DCGLw, and 2) is the contamination present in small
areas of elevated activity or is the contamination distributed relatively homogeneously across the
area. Each of these decisions is considered separately when designing the survey and then
combined into an integrated survey design.  Class 1 areas, prior to remediation, are impacted
areas with concentrations of residual radioactivity that exceed the DCGLw. Class 2 areas are
impacted areas where concentrations of residual activity that exceed the DCGLw are not
expected. Class 3 areas are impacted areas that have a low probability of containing areas with
residual radioactivity. The information obtained from the preliminary surveys is crucial for
classifying areas (see Figure 2.4).
       Area classification considers both the level of contamination relative to the DCGLw and
       the distribution of the contamination. The contamination may be uniformly distributed or
       present as small areas of elevated activity.
                 Figure 2 Flow Diagram for Designing a Final Status Survey
       (Figure shows the major design steps and where they are described in MARSSIM: identify
       contaminants (Section 3.6.1, Section 4.3); establish DCGLs (Section 4.3); classify areas by
       contamination potential (Section 2.5.2, Section 4.4); group/separate areas into survey units
       (Section 4.6); determine whether the contaminant is present in background and, if so, select
       background reference areas (Section 4.5); prepare the site for survey access (Section 4.8);
       establish a survey location reference system (Section 4.8.5); determine the number of data
       points (Section 5.5.2); select instrumentation (Section 4.7, Section 6.5.3, Section 7.5,
       Section 7.7, Appendix H); and develop an integrated survey design (Section 2.5.5,
       Section 5.5.3).)

Group/Separate Areas into Survey Units (Section 4.6)

Survey units are limited in size based on classification, exposure pathway modeling assumptions,
and site-specific conditions. Table 1 provides suggested survey unit areas based on area
classification.  The rationale for selecting a larger survey unit area should be developed using the
DQO Process and fully documented.

                          Table 1 Suggested Survey Unit Areas

       Classification                      Suggested Area
       Class 1      Structures             up to 100 m2
                    Land Areas             up to 2,000 m2
       Class 2      Structures             100 to 1,000 m2
                    Land Areas             2,000 to 10,000 m2
       Class 3      Structures             no limit
                    Land Areas             no limit
       Survey unit areas should be consistent with exposure pathway modeling assumptions
       used to develop DCGLs.
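
For illustration only, the following Python sketch (the names and data structure are assumptions
introduced for this example, not part of MARSSIM) encodes the suggested survey unit areas from
Table 1 so that a proposed survey unit can be checked against them.

    # Illustrative sketch only.  The values are the suggested areas from Table 1;
    # a larger survey unit may be justified through the DQO Process.
    SUGGESTED_MAX_AREA_M2 = {
        ("Class 1", "Structures"): 100,
        ("Class 1", "Land Areas"): 2_000,
        ("Class 2", "Structures"): 1_000,      # suggested range is 100 to 1,000 m2
        ("Class 2", "Land Areas"): 10_000,     # suggested range is 2,000 to 10,000 m2
        ("Class 3", "Structures"): None,       # no limit
        ("Class 3", "Land Areas"): None,       # no limit
    }

    def within_suggested_area(classification, medium, area_m2):
        """Return True if the proposed survey unit area does not exceed the Table 1 value."""
        limit = SUGGESTED_MAX_AREA_M2[(classification, medium)]
        return limit is None or area_m2 <= limit

    # Example: a 1,500 m2 Class 2 land area is within the suggested range.
    print(within_suggested_area("Class 2", "Land Areas", 1_500))   # True
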
Determine Number of Data Points (Section 5.5.2)

The number of data points is determined based on the selection of a statistical test, which in turn
is based on whether or not the contaminant is present in background. Figure 3 presents a flow
chart for determining the number of data points.

The first step in determining the number of data points is to specify the acceptable decision error
rates, α and β.  Decision error rates are site-specific and selected using the DQO Process.
Changes in the values of α and β may result from successive iterations of the DQO Process.
       Values for α and β are site-specific and selected using the DQO Process.
              Figure 3 Flow Diagram for Determining the Number of Data Points
       (Figure shows the steps for determining the number of data points: specify decision errors;
       determine whether the contaminant is present in background (Section 4.5); estimate σ, the
       variability in the contaminant level, or, if the contaminant is present in background, the
       variabilities in both the background and contaminant levels (Sections 5.5.2.2 and 5.5.2.3);
       calculate the relative shift Δ/σ and adjust the LBGR as needed; obtain the number of data
       points for the Sign test, N, from Table 5.5, or for the WRS test, N/2, from Table 5.3 for
       each survey unit and reference area; and prepare a summary of data points from the survey
       areas.)

The next step, after determining whether or not the contaminant is present in background, is to
estimate the variability of the contaminant concentration, σ. The standard deviation of the
contaminant concentration determined from the preliminary survey results should provide an
appropriate estimate of σ. If the contaminant is present in background, the variability in the
survey unit (σs) and the variability in the reference area (σr) should both be estimated. The larger
of the two values should be selected for determining the number of data points.  Underestimating
σ can lead to an underestimate of the number of measurements needed to demonstrate compliance
with the regulation, which increases the probability that the survey unit will fail the statistical test.
Overestimating σ can result in collecting more data than is necessary to demonstrate compliance.
       It is better to overestimate values of σs and σr.
       When σs and σr are different, select the larger of the two values.
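
For illustration only, the following Python sketch (the variable names and data values are
assumptions introduced for this example) estimates σs and σr as the sample standard deviations of
hypothetical preliminary survey results and selects the larger value for use in the sample size
calculation.

    import statistics

    # Hypothetical preliminary survey results (e.g., Bq/kg); illustrative values only.
    survey_unit_results = [120.0, 135.0, 118.0, 142.0, 127.0, 131.0]
    reference_area_results = [98.0, 104.0, 101.0, 95.0, 107.0, 99.0]

    sigma_s = statistics.stdev(survey_unit_results)     # variability in the survey unit
    sigma_r = statistics.stdev(reference_area_results)  # variability in the reference area

    # When the contaminant is present in background, use the larger of the two values.
    sigma = max(sigma_s, sigma_r)
    print(f"sigma_s = {sigma_s:.1f}, sigma_r = {sigma_r:.1f}, sigma used = {sigma:.1f}")
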
The third step is to calculate the relative shift, Δ/σ. The variability of the contaminant
concentration, σ, was determined in the previous step.  The shift, Δ, is equal to the width of the
gray region. The upper bound of the gray region is defined as the DCGLW.  The lower bound of
the gray region (LBGR) is a site-specific parameter, adjusted to provide a value for Δ/σ between
one and three.  Δ/σ can be adjusted using the following steps:

•      Initially select LBGR to equal one half the DCGLW.  This means Δ = (DCGLW - LBGR)
       also equals one half the DCGLW.  Calculate Δ/σ.
•      If Δ/σ is between one and three, obtain the appropriate number of data points from Table
       5.3 or Table 5.5.
•      If Δ/σ is less than one, select a lower value for LBGR. Continue to select lower values
       for LBGR until Δ/σ is greater than or equal to one, or until LBGR equals zero.
•      If Δ/σ is greater than three, select a higher value for LBGR.  Continue to select higher
       values for LBGR until Δ/σ is less than or equal to three.

Alternatively, Δ/σ can be adjusted by solving the following equation and calculating Δ/σ:

                                 LBGR = DCGLW - σ

If LBGR is less than zero, Δ/σ can be calculated as DCGLW/σ.
       Adjust the LBGR to provide a value for Δ/σ between one and three.
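
For illustration only, the following Python sketch (the function name and adjustment logic are
assumptions introduced for this example) applies the LBGR adjustment described above to obtain
a relative shift between one and three.

    def adjust_lbgr(dcgl_w, sigma):
        """Return (LBGR, relative shift) following the adjustment steps described above.

        Starts from LBGR = DCGLW/2, then raises the LBGR if the relative shift is
        greater than three or lowers it (never below zero) if the relative shift is
        less than one.  Illustrative sketch only; the DQO Process governs the choice.
        """
        delta = dcgl_w / 2.0                 # initial shift: DCGLW - LBGR = DCGLW/2
        if delta / sigma > 3.0:
            delta = 3.0 * sigma              # raise the LBGR until delta/sigma = 3
        elif delta / sigma < 1.0:
            # lower the LBGR toward delta/sigma = 1, stopping at LBGR = 0 (delta = DCGLW)
            delta = min(sigma, dcgl_w)
        lbgr = dcgl_w - delta
        return lbgr, delta / sigma

    # Example with hypothetical values: DCGLW = 140 Bq/kg and sigma = 30 Bq/kg.
    lbgr, shift = adjust_lbgr(140.0, 30.0)
    print(f"LBGR = {lbgr:.1f}, relative shift = {shift:.2f}")   # LBGR = 70.0, shift = 2.33
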

The final step in determining the number of data points is to obtain the appropriate value from
Table 5.3 or Table 5.5.  Table 5.3 provides the number of data points for each survey unit and
each reference area when the contaminant is present in background (N/2). Table 5.5 provides the
number of data points for each survey unit when the contaminant is not present in background
(N).
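
The values in Table 5.3 and Table 5.5 are based on the sample size calculations described in
Sections 5.5.2.2 and 5.5.2.3.  For illustration only, the following Python sketch computes an
approximate number of data points for the Sign test using a normal-approximation form
commonly associated with MARSSIM; the exact formula, the rounding convention, and the
assumed 20% increase in N to allow for lost or unusable data should all be confirmed against
Section 5.5.2.3 and Table 5.5.

    from math import ceil
    from statistics import NormalDist

    def sign_test_sample_size(alpha, beta, relative_shift):
        """Approximate number of data points, N, for the Sign test (one survey unit).

        Assumed normal-approximation form:
            N = (z_(1-alpha) + z_(1-beta))**2 / (4 * (SignP - 0.5)**2)
        with SignP = Phi(delta/sigma), then increased by an assumed 20% to allow
        for lost or unusable data.  Verify against MARSSIM Section 5.5.2.3.
        """
        nd = NormalDist()
        z_alpha = nd.inv_cdf(1.0 - alpha)
        z_beta = nd.inv_cdf(1.0 - beta)
        sign_p = nd.cdf(relative_shift)
        n = (z_alpha + z_beta) ** 2 / (4.0 * (sign_p - 0.5) ** 2)
        return ceil(1.2 * n)

    # Example with hypothetical DQO inputs: alpha = beta = 0.05 and delta/sigma = 2.
    print(sign_test_sample_size(0.05, 0.05, 2.0))
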

Select Instrumentation (Section 4.7, Section 6.5.3, Section 7.5, Section 7.7, Appendix H)

Instrumentation or measurement techniques should be selected based on detection sensitivity to
provide technically defensible results that meet the objectives of the  survey.  Because of the
uncertainty associated with interpreting scanning results, the detection sensitivity of the selected
instruments should be as far below the DCGL as possible.  For direct measurements and sample
analyses, minimum detectable concentrations (MDCs) less than 10% of the DCGL are preferable
while MDCs up to 50% of the DCGL are acceptable.
       Estimates of the MDC that minimize potential decision errors should be used for planning
       surveys.
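
For illustration only, the following Python sketch (the function name is an assumption introduced
for this example) applies the rule of thumb above: an MDC below 10% of the DCGL is preferred,
and an MDC up to 50% of the DCGL is acceptable.

    def mdc_adequacy(mdc, dcgl):
        """Classify an instrument's MDC against the DCGL using the rule of thumb above."""
        fraction = mdc / dcgl
        if fraction < 0.10:
            return "preferred (MDC < 10% of DCGL)"
        if fraction <= 0.50:
            return "acceptable (MDC <= 50% of DCGL)"
        return "not recommended (MDC > 50% of DCGL)"

    # Example with hypothetical values: MDC = 35 Bq/kg against a DCGL of 140 Bq/kg.
    print(mdc_adequacy(35.0, 140.0))   # acceptable (MDC <= 50% of DCGL)
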
Develop an Integrated Survey Design (Section 5.5.3)

The integrated survey design combines scanning surveys with direct measurements and
sampling.  The level of survey effort is determined by the potential for contamination as
indicated by the survey unit classification. This is illustrated in Figure 4.  Class 3 survey units
receive judgmental scanning and randomly located measurements. Class 2 survey units receive
scanning over a portion of the survey unit based on the potential for contamination combined
with direct measurements and sampling performed on a systematic grid.  Class 1 survey units
receive scanning over 100% of the survey unit combined with direct measurements and sampling
performed on a systematic grid. The grid spacing is adjusted to account for the scan MDC
(Section 5.5.2.4).

Table 2 provides a summary of the recommended survey coverage for structures and land areas.
Modifications to the example survey designs may be required to account for other contaminated
media (e.g., ground water, subsurface soil).
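
For illustration only, the following Python sketch (the structure and wording are assumptions
introduced for this example) summarizes the integrated survey design coverage described above
and in Table 2, keyed by survey unit classification.

    # Illustrative summary of the recommended level of survey effort (see Table 2).
    INTEGRATED_DESIGN = {
        "Class 1": {
            "surface_scans": "100% of the survey unit",
            "measurements": "systematic grid; spacing adjusted for the scan MDC (Section 5.5.2.4)",
        },
        "Class 2": {
            "surface_scans": "10 to 100%, based on the potential for contamination",
            "measurements": "systematic grid",
        },
        "Class 3": {
            "surface_scans": "judgmental scanning",
            "measurements": "randomly located measurements",
        },
    }

    for classification, design in INTEGRATED_DESIGN.items():
        print(f"{classification}: scans = {design['surface_scans']}; "
              f"measurements = {design['measurements']}")
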

Implementation Phase

The objectives outlined in the QAPP are incorporated into Standard Operating Procedures
(SOPs). The final status survey design is carried out in accordance with the SOPs and the QAPP
resulting in the generation of raw data.  Chapter 6, Chapter 7, and Appendix H provide
information on measurement techniques.
             Figure 4 Flow Diagram for Developing an Integrated Survey Design
       (Figure shows the design steps for each area classification (Section 4.4).  Class 1: conduct
       surface scans for 100% of land areas and structures (Section 6.4.2); determine the number
       of data points needed (Sections 5.5.2.2 and 5.5.2.3); adjust the grid spacing based on the
       scan MDC (Section 5.5.2.4); generate a random starting point and identify data point grid
       locations (Section 5.5.2.5); where conditions prevent survey of identified locations,
       supplement with additional randomly selected locations; perform measurements at the data
       point grid locations (Sections 6.4.1 and 7.4).  Class 2: conduct surface scans for 10-100%
       of land areas and structures; determine the number of data points needed; generate a
       random starting point and identify data point grid locations; supplement with additional
       randomly selected locations where conditions prevent survey of identified locations;
       perform measurements at the data point grid locations.  Class 3: conduct judgmental
       surface scans for land areas and structures; determine the number of data points needed;
       generate sets of random values and multiply the survey unit dimensions by the random
       numbers to determine coordinates, continuing until the necessary number of data points are
       identified; perform measurements at the data point locations.)

         Table 2 Recommended Survey Coverage for Structures and Land Areas

  Class 1
       Structures - Surface Scans: 100%
       Structures - Surface Activity Measurements: Number of data points from statistical tests
              (Sections 5.5.2.2 and 5.5.2.3); additional direct measurements and samples may be
              necessary for small areas of elevated activity (Section 5.5.2.4)
       Land Areas - Surface Scans: 100%
       Land Areas - Surface Soil Measurements: Number of data points from statistical tests
              (Sections 5.5.2.2 and 5.5.2.3); additional direct measurements and samples may be
              necessary for small areas of elevated activity (Section 5.5.2.4)

  Class 2
       Structures - Surface Scans: 10 to 100% (10 to 50% for upper walls and ceilings),
              Systematic and Judgmental
       Structures - Surface Activity Measurements: Number of data points from statistical tests
              (Sections 5.5.2.2 and 5.5.2.3)
       Land Areas - Surface Scans: 10 to 100%, Systematic and Judgmental
       Land Areas - Surface Soil Measurements: Number of data points from statistical tests
              (Sections 5.5.2.2 and 5.5.2.3)

  Class 3
       Structures - Surface Scans: Judgmental
       Structures - Surface Activity Measurements: Number of data points from statistical tests
              (Sections 5.5.2.2 and 5.5.2.3)
       Land Areas - Surface Scans: Judgmental
       Land Areas - Surface Soil Measurements: Number of data points from statistical tests
              (Sections 5.5.2.2 and 5.5.2.3)
Assessment Phase

The assessment phase of the Data Life Cycle includes verification and validation of the survey
results combined with an assessment of the quantity and quality of the data. As previously
stated, both the average level of contamination in the survey unit and the distribution of the
contamination within the survey unit are considered during area classification.  For this reason,
the assessment phase includes a graphical review of the data to provide a visual representation of
the radionuclide distribution, an appropriate statistical test to demonstrate compliance for the
average concentration of a uniformly distributed radionuclide, and the elevated measurement
comparison (EMC) to demonstrate compliance for small areas of elevated activity.

The survey data are verified to ensure that the SOPs specified in the survey design were followed
and that the measurement systems performed in accordance with the criteria specified in the
QAPP (Section 9.3.1).  The data are validated to ensure that the results support the objectives of
the survey, as documented in the QAPP, or permit a determination that these objectives should
be modified (Section 9.3.2).  The Data Quality Assessment (DQA) process is then applied using
the verified and validated data to determine if the quality of the data satisfies the data user's
needs.  DQA is described in Appendix E and is applied in Chapter 8.

The first step in DQA is to review the DQOs and survey design to ensure that they are still
applicable. For example, if the data suggest that a survey unit is misclassified, the DQOs and
survey design would be modified for the new classification.

The next step is to conduct a preliminary data review to learn about the structure of the data and
to identify patterns, relationships, or potential anomalies. This review should include calculating
basic statistical quantities (i.e., mean, standard deviation, median) and graphically presenting the
data using at least a histogram and a posting plot. The results of the preliminary data review are
also used to verify the assumptions of the tests. Some of the assumptions and possible methods
for assessing them are summarized in Table 3.  Information on diagnostic tests is provided in
Section 8.2 and Appendix I.
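
For illustration only, the following Python sketch (the data values are hypothetical) computes the
basic statistical quantities mentioned above for a set of survey unit measurements; histograms and
posting plots would normally be produced with a plotting package.

    import statistics

    # Hypothetical final status survey results for one survey unit (e.g., Bq/kg).
    results = [118.0, 125.0, 131.0, 122.0, 140.0, 128.0, 119.0, 133.0]

    print(f"mean               = {statistics.mean(results):.1f}")
    print(f"standard deviation = {statistics.stdev(results):.1f}")
    print(f"median             = {statistics.median(results):.1f}")
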

             Table 3 Methods for Checking the Assumptions of Statistical Tests

       Assumption                   Diagnostic
       Spatial Independence         Posting Plot (Figure 8.1)
       Symmetry                     Histogram (Figure 8.2)
                                    Quantile Plot (Figure I.2)
       Data Variance                Sample Standard Deviation (Section 8.2)
       Power is Adequate            Retrospective Power Chart
                                    (Sign Test, Figure I.5)
                                    (WRS Test, Figure I.6)
The final step in interpreting the data is to draw conclusions from the data. Table 4 summarizes
the statistical tests recommended in MARSSIM. Section 8.3 provides guidance on performing
the Sign test when the contaminant is not present in background. Section 8.4 provides guidance
on performing the Wilcoxon Rank Sum (WRS) test when the contaminant is present in
background.
                          Table 4 Summary of Statistical Tests

  Radionuclide not in background and radionuclide-specific measurements made:

       Survey Result                                      Conclusion
       All measurements less than DCGLW                   Survey unit meets release criterion
       Average greater than DCGLW                         Survey unit does not meet release criterion
       Any measurement greater than DCGLW and             Conduct Sign test and elevated
       the average less than DCGLW                        measurement comparison

  Radionuclide in background or radionuclide non-specific (gross) measurements made:

       Survey Result                                      Conclusion
       Difference between maximum survey unit             Survey unit meets release criterion
       measurement and minimum reference area
       measurements is less than DCGLW
       Difference of survey unit average and              Survey unit does not meet release criterion
       reference area average is greater than DCGLW
       Difference between any survey unit                 Conduct WRS test and elevated
       measurement and any reference area                 measurement comparison
       measurement greater than DCGLW and the
       difference of survey unit average and reference
       area average is less than DCGLW
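
For illustration only, the following Python sketch (the function name is an assumption introduced
for this example) applies the decision logic in Table 4 for the case where the radionuclide is not
present in background and radionuclide-specific measurements were made.

    def table4_conclusion(measurements, dcgl_w):
        """Route a survey unit per Table 4 (radionuclide not present in background)."""
        average = sum(measurements) / len(measurements)
        if all(m < dcgl_w for m in measurements):
            return "Survey unit meets release criterion"
        if average > dcgl_w:
            return "Survey unit does not meet release criterion"
        # Some measurement exceeds the DCGLW, but the average is below it.
        return "Conduct Sign test and elevated measurement comparison"

    # Example with hypothetical results (Bq/kg) and DCGLW = 140 Bq/kg.
    print(table4_conclusion([118.0, 125.0, 131.0, 155.0, 128.0], 140.0))
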
Table 5 provides examples of final status survey investigation levels for each survey unit
classification and type of measurement.  For a Class 1 survey unit, measurements above the
DCGLW are not necessarily unexpected.  However, a measurement above the DCGLW at one of
the discrete measurement locations might be considered unusual if it were much higher than all
of the other discrete measurements.  Thus, any discrete measurement that is above both the
DCGLW and the statistical-based parameter for the measurements should be investigated further.
Any measurement, either at a discrete location or from a scan, that is above the DCGLEMC should
be flagged for further investigation.

In Class 2 or Class 3 areas, neither measurements above the DCGLW nor areas of elevated
activity are expected. Any measurement at a discrete location exceeding the DCGLW in these
areas should be flagged for further investigation. Because the survey design for Class 2 and
Class 3 survey units is not driven by the  EMC, the scanning MDC might exceed the DCGLW. In
this case, any indication of residual radioactivity during the scan would warrant further
investigation.
                          Table 5  Summary of Investigation Levels

       Survey Unit        Flag Direct Measurement or             Flag Scanning Measurement
       Classification     Sample Result When:                    Result When:
       Class 1            > DCGLEMC or                           > DCGLEMC
                          > DCGLW and > a statistical-based
                          parameter value
       Class 2            > DCGLW                                > DCGLW or > MDC
       Class 3            > fraction of DCGLW                    > DCGLW or > MDC
Because there is a low expectation for residual radioactivity in a Class 3 area, it may be prudent
to investigate any measurement exceeding even a fraction of the DCGLW. The level one chooses
here depends on the site, the radionuclides of concern, and the measurement and scanning
methods chosen.  This level should be set using the DQO Process during the survey design phase
of the Data Life Cycle.  In some cases, the user may also decide to follow this procedure for
Class 2 and even Class 1 survey units.

Both the measurements at discrete locations and the scans are subject to the EMC.  The result of
the EMC does not in itself lead to a conclusion as to whether the survey unit meets  or exceeds
the release criterion, but is a flag or trigger for further investigation.  The investigation may
involve taking further measurements in order to determine that the area and level of the elevated
residual radioactivity are such that the resulting dose or risk meets the release criterion.1 The
investigation should also provide adequate assurance that there are no other undiscovered areas
of elevated residual radioactivity in the survey unit that might result in a dose exceeding the
release criterion.  This could lead to a reclassification of all or part of a survey unit, unless the
results of the investigation indicate that reclassification is not necessary.

Decision Making Phase

A decision is made, in coordination with the responsible regulatory agency, based on the
conclusions drawn from the assessment phase. The results of the EMC are used to  demonstrate
compliance with the dose- or risk-based regulation for small areas  of elevated activity, while the
nonparametric statistical tests are used to demonstrate that the average radionuclide concentration
in the survey unit complies with the release criterion.  The objective is to make technically
defensible decisions with a specified level of confidence.
       1 Rather than, or in addition to, taking further measurements, the investigation may involve assessing the
adequacy of the exposure pathway model used to obtain the DCGLs and area factors, and the consistency of the
results obtained with the Historical Site Assessment and the scoping, characterization, and remedial action support
surveys.

The EMC consists of comparing each measurement from the survey unit with the investigation
levels in Table 5.  The EMC is performed for measurements obtained from the  systematic or
random sample locations as well as locations flagged by scanning surveys. Any measurement
from the survey unit that is equal to or greater than the investigation level indicates an area of
relatively higher concentration and is investigated, regardless of the outcome of the
nonparametric statistical tests.
       Any measurement from the survey unit that is equal to or greater than the investigation
       level indicates an area of relatively higher concentration and is investigated, regardless of
       the outcome of the nonparametric statistical tests.
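
For illustration only, the following Python sketch (the names and example values are assumptions
introduced for this example) flags direct measurements for further investigation using the Class 1
investigation levels from Table 5.

    def flag_for_investigation(measurements, dcgl_w, dcgl_emc, statistical_parameter):
        """Return the measurements that trigger further investigation (Class 1, Table 5).

        A result is flagged when it equals or exceeds the DCGLEMC, or when it equals
        or exceeds both the DCGLW and a statistical-based parameter value.
        Illustrative sketch only.
        """
        flagged = []
        for m in measurements:
            if m >= dcgl_emc or (m >= dcgl_w and m >= statistical_parameter):
                flagged.append(m)
        return flagged

    # Example with hypothetical values (Bq/kg): DCGLW = 140, DCGLEMC = 400, and a
    # statistical-based parameter value of 180.
    print(flag_for_investigation([118.0, 150.0, 210.0, 420.0], 140.0, 400.0, 180.0))
    # [210.0, 420.0]
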
The result of the Sign test or the WRS test is the decision to reject or not to reject the null
hypothesis that the survey unit is contaminated above the DCGLW. Provided that the results of
any investigations triggered by the EMC have been resolved, a rejection of the null hypothesis
leads to the decision that the survey unit meets the release criterion. If necessary, the amount of
residual radioactivity in the survey unit can be estimated so that dose or risk calculations can be
made.  In most cases, the average concentration is the best estimate for the amount of residual
radioactivity.
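
For illustration only, the following Python sketch outlines a Sign test of the kind described in
Section 8.3.  The test statistic, the treatment of ties, and the normal approximation used for the
critical value are assumptions based on the standard one-sample sign test; MARSSIM provides
tabulated critical values, and this sketch should be confirmed against Section 8.3 before use.

    from statistics import NormalDist

    def sign_test(measurements, dcgl_w, alpha=0.05):
        """Sketch of a Sign test: reject the null hypothesis (survey unit exceeds the
        release criterion) when enough measurements fall below the DCGLW.

        S+ counts measurements below the DCGLW (measurements equal to the DCGLW are
        discarded).  A normal approximation to the binomial(N, 0.5) distribution is
        used here for the critical value.
        """
        differences = [dcgl_w - m for m in measurements if m != dcgl_w]
        n = len(differences)
        s_plus = sum(1 for d in differences if d > 0)
        z = NormalDist().inv_cdf(1.0 - alpha)
        critical_value = 0.5 * n + z * (0.25 * n) ** 0.5
        return s_plus > critical_value, s_plus, critical_value

    # Example with hypothetical results (Bq/kg) and DCGLW = 140.
    reject, s_plus, k = sign_test(
        [118, 125, 131, 122, 140, 128, 119, 133, 126, 121, 117, 135, 129, 124, 130], 140.0)
    print(f"S+ = {s_plus}, critical value = {k:.1f}, reject null hypothesis = {reject}")
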
Summary

The roadmap presents a summary of the planning, implementation, assessment, and decision
making phases for a final status survey and identifies where guidance on these phases is located
in MARSSIM. Each step in the process is described briefly along with references to the sections
of MARSSIM to which the user may refer for more detailed guidance. Flow charts are provided
to summarize the major steps in the Radiation Survey and Site Investigation Process, again citing
appropriate sections of MARSSIM. In addition to providing the user with basic guidance from
MARSSIM, the roadmap also includes "rules of thumb" for performing compliance
demonstration surveys.
                                1  INTRODUCTION
1.1    Purpose and Scope of MARSSIM

Radioactive materials have been produced, processed, used, and stored at thousands of sites
throughout the United States. Many of these sites—ranging in size from Federal weapons-
production facilities covering hundreds of square kilometers to the nuclear medicine departments
of small hospitals—were at one time or are now radioactively contaminated.

The owners and managers of a number of sites would like to determine if these sites are
contaminated, clean them up if contaminated, and release them for restricted use or for
unrestricted public use. The Environmental Protection Agency (EPA), the Nuclear Regulatory
Commission (NRC), and the Department of Energy (DOE) are responsible for the release of sites
following cleanup. These responsibilities apply to facilities under the control of Federal
agencies, such as the DOE and Department of Defense (DOD), and to sites licensed by the NRC
and its Agreement States. Some States have responsibilities for similar sites under their control.

The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) provides a
nationally consistent consensus approach to conducting radiation surveys and investigations at
potentially contaminated sites. This approach should be both scientifically rigorous and flexible
enough to be applied to a diversity of site cleanup conditions. MARSSIM's title includes the
term "survey" because it provides information on planning  and conducting surveys, and includes
the term "site investigation" because the process outlined in the manual allows one to begin by
investigating any site (i.e., by gathering data or information) that may involve radioactive
contamination.

The decommissioning that follows remediation will normally require a demonstration to the
responsible Federal or State agency that the cleanup effort was successful and that  the release
criterion (a specific regulatory limit) was met. In MARSSIM, this demonstration is given the
name "final status survey." This manual  assists site personnel or others in performing or
assessing such a demonstration.  (Generally, MARSSIM may serve to guide or monitor
remediation efforts whether or not a release criterion is applied.)

As illustrated in Figure 1.1, the demonstration of compliance with respect to conducting surveys
comprises three interrelated parts:

I.      Translate:  Translating the cleanup/release criterion (e.g., mSv/y, mrem/y, specific risk)
       into a corresponding derived contaminant concentration level (e.g., Bq/kg or pCi/g in
       soil) through the use of environmental pathway modeling.
                      Figure 1.1 Compliance Demonstration
       (Figure shows the release criterion being translated, through modeling and tables, into a
       DCGL; surveying and sampling; and the interpretation of the results using statistical tests,
       with MARSSIM addressing the survey and interpretation steps.)

II.     Measure: Acquiring scientifically sound and defensible site-specific data on the levels
      and distribution of residual contamination, as well as levels and distribution of
      radionuclides present as background, by employing suitable field and/or laboratory
      measurement techniques.1

III.    Decide:  Determining that the data obtained from sampling do support the assertion that
      the site meets the release criterion, within an acceptable degree of uncertainty, through
      application of a statistically based decision rule.
   1 Measurements include field and laboratory analyses; however, MARSSIM leaves detailed discussions of
laboratory sample analyses to another manual (i.e., a companion document, the Multi-Agency Radiation Laboratory
Analytical Protocols (MARLAP) manual that is currently under development).


MARSSIM presents comprehensive guidance—specifically for II and III above—for
contaminated soil and buildings.  This guidance describes a performance-based approach for
demonstrating compliance with a dose- or risk-based regulation. This approach includes
processes that identify data quality needs and may reveal limitations that enter into conducting a
survey.  The data quality needs stated as Data Quality Objectives (DQOs) include performance
measures and goals in relation to a specific intended use of the data (EPA 1997a).

DQOs must be developed on a site-specific basis. However, because of the large variability in
the types of radiation sites, it is impossible to provide criteria that apply to every situation. As an
example, MARSSIM presents a method for planning, implementing, assessing, and making
decisions about regulatory compliance at sites with radioactive contaminants in surface soil and
on building surfaces. In particular, MARSSIM describes generally acceptable approaches for:

•      planning and designing scoping, characterization, remediation-support, and final status
       surveys for sites with surface soil and building surface contamination
•      Historical Site Assessment (HSA)
•      QA/QC in data acquisition and analysis
•      conducting surveys
•      field and laboratory methods and instrumentation, and interfacing with radiation
       laboratories
•      statistical hypothesis testing, and the interpretation of statistical data
•      documentation

Thus, MARSSIM provides standardized and consistent approaches for planning, conducting,
evaluating, and documenting environmental radiological surveys, with a specific focus on the
final status surveys that are carried out to demonstrate compliance with cleanup regulations.
These approaches may not meet the DQOs at every site, so other methods may be used to meet
site-specific DQOs, as long as an equivalent level of performance can be demonstrated.

Table 1.1, at the end of Chapter 1, summarizes the scope of MARSSIM. Several issues related to
releasing sites are beyond the scope of MARSSIM.  These include translation of dose or risk
standards into radionuclide-specific concentrations, or demonstrating compliance with ground
water or surface water regulations.  MARSSIM can be applied to surveys performed at vicinity
properties—those not under government or licensee control—but the decision to apply
MARSSIM at vicinity properties is outside the scope of MARSSIM. Other contaminated media
(e.g., sub-surface soil, building materials, ground water) and the release of contaminated
components and equipment are also not addressed by MARSSIM. With MARSSIM's main
focus on final status surveys, this manual continues a process that follows remediation activities
intended to remove below-surface contaminants.  Therefore, some of the reasons for
limiting the scope of the guidance to contaminated surface soils  and building surfaces include:
1) contamination is limited to these media for many sites following remediation, 2) since many

sites have surface soil and building surface contamination as the leading source of contamination,
existing computer models used for calculating the concentrations based on dose or risk generally
consider only surface soils or building surfaces as a source term, and 3) MARSSIM was written
in support of cleanup rulemaking efforts for which supporting data are mostly limited to
contaminated surface soil and building surfaces.

MARSSIM also recognizes that there may be other factors, such as cost or stakeholder concerns,
that have an impact on designing surveys.  Guidance on how to address these specific concerns is
outside the scope of MARSSIM.  Unique site-specific cases may arise that require a modified
approach beyond what is presently described in MARSSIM. This includes examples such as:
1) the release of sites contaminated with naturally occurring radionuclides in which the
concentrations corresponding to the release criteria are close to the variability of the background
and 2) sites where a reference background cannot be established. However, the process of
planning, implementing, assessing, and making decisions about a site described in MARSSIM is
applicable to all sites, even if the examples in this manual do not meet a site's specific objectives.

Of MARSSIM's many topics, the Data Quality Objective (DQO) approach to data acquisition
and analysis and the Data Quality Assessment (DQA) for determining that data meet stated
objectives are two elements that are a consistent theme throughout the manual. The DQO
Process and  DQA approach, described  in Chapter 2, present a method for building common
sense and the scientific method into all aspects of designing and conducting surveys, and making
best use of the obtainable information.  This becomes a formal framework for systematizing the
planning of data acquisition surveys so that the data sought yield the kind of information actually
needed for making important decisions—such as whether or not to release a particular site
following remediation.
1.2    Structure of the Manual

MARSSIM begins with the overview of the Radiation Survey and Site Investigation Process in
Chapter 2—Figures 2.4 through 2.8 are flowcharts that summarize the steps and decisions taken
in the process. Chapter 3 provides instructions for performing an Historical Site Assessment
(HSA)—a detailed investigation to collect existing information on the site or facility and to
develop a conceptual site model.  The results of the HSA are used to plan surveys, perform
measurements, and collect additional information at the site.  Chapter 4 covers issues that arise in
all types of surveys. Detailed information on performing specific types of surveys is included in
Chapter 5.  Guidance on selecting the appropriate instruments and measurement techniques for
each type of measurement is in Chapters 6 and 7. Chapter 6 discusses direct measurements and
scanning surveys, and Chapter 7 discusses sampling and sample preparation for laboratory
measurements.  The interpretation of survey results is described in Chapter 8.  Chapter 9 provides
guidance on data management,  quality assurance (QA), and quality control (QC). Information on
specific subjects related to radiation site investigation can be found in the appendices.

MARSSIM contains several appendices to provide additional guidance on specific topics.
Appendix A presents an example of how to apply the MARSSIM guidance to a specific site.
Appendix B describes a simplified procedure for compliance demonstration that may be
applicable at certain types of sites.  Appendix C summarizes the regulations and requirements
associated with radiation surveys and site investigations for each of the agencies involved in the
development of MARSSIM. Detailed guidance on the DQO Process is in Appendix D, and
Appendix E has guidance on DQA. Appendix F describes the relationships among MARSSIM,
the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), and
the Resource Conservation and Recovery Act (RCRA).  Sources of information used during site
assessment are listed in Appendix G. Appendix H describes field survey and laboratory analysis
equipment that may be used for radiation  surveys and site investigations. Appendix I offers
tables of statistical data and supporting information for interpreting survey results described in
Chapter 8.  The derivation of the alpha scanning detection limit calculations used in Chapter 6 is
described in Appendix J.  Comparison tables for QA documents are in Appendix K. Appendix L
lists the regional radiation program managers for each of the agencies participating in the
development of MARSSIM. Appendix M lists publications that serve as resources describing
sampling methods. Information  on data validation is provided in Appendix N.

MARSSIM is presented in a modular format, with each module containing guidance on
conducting specific aspects of, or activities related to, the survey process. Followed in order,
each module leads to the generation and implementation of a complete survey plan. Although
this approach may involve some overlap and redundancy in information, it also allows many
users to concentrate only on those portions of the manual that apply to their own particular needs
or responsibilities. The procedures within each module are listed in order of performance and
options are provided to  guide a user past portions of the manual that may not be specifically
applicable to the user's  area of interest. Where appropriate, checklists condense and summarize
major points in the process.  The checklists may be used to verify that every suggested step is
followed or to flag a condition in which specific documentation should explain why a step was
not needed.

Also included in the manual is a section titled Roadmap. The roadmap is designed to be used
with MARSSIM as a quick reference for users already familiar with the process of planning and
performing radiation surveys.  The roadmap gives the user basic guidance, rules of thumb, and
references to sections in the manual containing detailed guidance.

MARSSIM, which is based on a graded approach, also contains a simplified procedure (see
Appendix B) that many users of radioactive materials may—with the approval of the  responsible
regulatory agency—be able to employ to demonstrate compliance with the release criterion.
Sites that may qualify for simplified release procedures are those in which the radioactive
materials used were 1) of relatively short half-life (e.g., t1/2 < 120 days) and have since decayed to
insignificant quantities, 2) kept only in small enough quantities so as to be exempted  or not

requiring a specific license from a regulatory authority, 3) used or stored only in the form of non-
leaking sealed sources, or 4) combinations of the above.
1.3    Use of the Manual

Potential users of this manual are Federal, State, and local government agencies having authority
for control of radioactive environmental contamination; their contractors; and other parties, such
as organizations with licensed authority to possess and use radioactive materials. The manual is
intended for a technical audience having knowledge of radiation health physics and an
understanding of statistics as well as experience with the practical applications of radiation
protection.  An understanding of instrumentation and methodologies and expertise  in planning,
approving, and implementing surveys of environmental levels of radioactive material is assumed.
This manual has been written so that individuals responsible for planning, approving, and
implementing radiological surveys will be able to  understand and apply the guidance provided
here.  Certain situations and sites may require consultation with more experienced personnel.

MARSSIM provides guidance for conducting radiation surveys and site investigations.
MARSSIM uses the word "should" as a recommendation that ought not be interpreted as a
requirement.  The reader need not expect that every recommendation in this manual will be taken
literally and applied at every site. Rather, it is expected that the survey planning documentation
will address how the guidance will be applied on a site-specific basis.

As previously stated, MARSSIM supports implementation of dose- or risk-based regulations.
The translation of the regulatory dose limit to a corresponding concentration level is not
addressed in MARSSIM, so the guidance in this manual is applicable to a broad range of
regulations, including risk- or concentration-based regulations. The terms dose and dose-based
regulation are used throughout the manual, but these terms are not intended to limit the use of the
manual.

Note that Federal or State agencies that can approve a demonstration of compliance may support
requirements that differ from what is presented in this version of MARSSIM.  It is essential,
therefore, that the persons carrying out the surveys, whether they are conducting surveys in
accordance with the simplified approach of Appendix B or the full MARSSIM process, remain
in close communication with the proper Federal or State authorities throughout the compliance
demonstration process.
1.4    Missions of the Federal Agencies Producing MARSSIM

MARSSIM is the product of a multi-agency workgroup with representatives from EPA, NRC,
DOE, and DOD. This section briefly describes the missions of the participating agencies.
Regulations and requirements governing site investigations for each of the agencies associated
with radiation surveys and site investigations are presented in Appendix C.

1.4.1   Environmental Protection Agency

The mission of the U.S. Environmental Protection Agency (EPA) is to improve and preserve the
quality of the environment, on both national and global levels. The EPA's scope of
responsibility includes implementing and enforcing environmental laws, setting guidelines,
monitoring pollution, performing research, and promoting pollution prevention. EPA
Headquarters maintains overall planning, coordination, and control  of EPA programs, and EPA's
ten regional offices are responsible for executing EPA's programs within the boundaries of each
region. EPA also coordinates with, and supports research and development of, pollution control
activities carried out by State and local  governments.

1.4.2   Nuclear Regulatory Commission

The mission of the U.S. Nuclear Regulatory Commission (NRC) is  to ensure adequate protection
of public health and safety, the common defense and security, and the environment in the use of
certain radioactive materials in the United States. The NRC's scope of responsibility includes
regulation of commercial nuclear power reactors; non-power research, test, and training reactors;
fuel cycle facilities; medical, academic, and industrial uses of nuclear materials; and the
transport, storage, and disposal of nuclear materials and waste. The Energy Reorganization Act
of 1974 and the Atomic Energy Act of 1954, as amended, provide the foundation for regulation
of the Nation's commercial use of radioactive materials.

1.4.3   Department of Energy

The mission of the Department of Energy (DOE) is to develop and implement a coordinated
national energy  policy to ensure the availability of adequate energy  supplies and to develop new
energy sources for domestic and commercial use. In addition, DOE is responsible for the
development, construction and testing of nuclear weapons for the U.S. Military. DOE is also
responsible for managing the low- and high-level radioactive wastes generated by past nuclear
weapons and research programs and for constructing and maintaining a repository for civilian
radioactive wastes generated by the commercial nuclear reactors. DOE has the lead in
decontaminating facilities and sites previously used in atomic energy programs.
1.4.4   Department of Defense

The global mission of the Department of Defense (DOD) is to provide for the defense of the
United States.  In doing this, DOD is committed to protecting the environment. Each military
service has specific regulations addressing the use of radioactive sources and the development of
occupational health programs and radiation protection programs. The documents describing
these regulations are used as guidance in developing environmental radiological surveys within
DOD and are discussed in Appendix C.

                              Table 1.1 Scope of MARSSIM

  Within Scope of MARSSIM

       Guidance       MARSSIM provides technical guidance on conducting radiation surveys
                      and site investigations.
       Tool Box       MARSSIM can be thought of as an extensive tool box with many
                      components—some within the text of MARSSIM, others by reference.
       Measurement    The guidance given in MARSSIM is performance-based and directed
                      towards acquiring site-specific data.
       Modeling       The interface between environmental pathway modeling and MARSSIM
                      is an important survey design consideration addressed in MARSSIM.

  Beyond Scope of MARSSIM

       Regulation     MARSSIM does not set new regulations or address non-technical issues
                      (e.g., legal or policy) for site cleanup.  The release criterion will be
                      provided rather than calculated using MARSSIM.
       Tool Box       Many topics are beyond the scope of MARSSIM, for example: a public
                      participation program; packaging and transportation of wastes for
                      disposal; decontamination and stabilization techniques; training.
       Procedure      The approaches suggested in MARSSIM vary depending on the various
                      site data needs—there are no set procedures for sample collection,
                      measurement techniques, storage, and disposal established in MARSSIM.
       Modeling       Environmental pathway modeling and ecological endpoints in modeling
                      are beyond the scope of MARSSIM.
                       Table 1.1 Scope of MARSSIM (continued)

  Within Scope of MARSSIM

       Soil and       The two main media of interest in MARSSIM are contaminated surface
       Buildings      soil and building surfaces.
       Final Status   The focus of MARSSIM is on the final status survey as this is the
       Survey         deciding factor in judging if the site meets the release criterion.
       Radiation      MARSSIM only considers radiation-derived hazards.
       Remediation    MARSSIM assists users in determining when sites are ready for a final
       Method         status survey and provides guidance on how to determine if remediation
                      was successful.
       DQO Process    MARSSIM presents a systemized approach for designing surveys to
                      collect data needed for making decisions such as whether or not to
                      release a site.
       DQA            MARSSIM provides a set of statistical tests for evaluating data and
                      lists alternate tests that may be applicable at specific sites.

  Beyond Scope of MARSSIM

       Other Media    MARSSIM does not cover other media, including construction
                      materials, equipment, subsurface soil, surface or subsurface water,
                      biota, air, sewers, sediments, or volumetric contamination.
       Materials or   MARSSIM does not recommend the use of any specific materials or
       Equipment      equipment—there is too much variability in the types of radiation
                      sites—this information will be in other documents.
       Chemicals      MARSSIM does not deal with any hazards posed by chemical
                      contamination.
       Remediation    MARSSIM does not discuss selection and evaluation of remedial
       Method         alternatives, public involvement, legal considerations, or policy
                      decisions related to planning.
       DQO Process    MARSSIM does not provide prescriptive or default values of DQOs.
       DQA            MARSSIM does not prescribe a statistical test for use at all sites.
          2 OVERVIEW OF THE RADIATION SURVEY AND SITE
                           INVESTIGATION PROCESS
2.1    Introduction

This chapter provides a brief overview of the Radiation Survey and Site Investigation (RSSI)
Process, several important aspects of this Process, and its underlying principles. The concepts
introduced here are discussed in detail throughout the manual.

The purpose of MARSSIM is to provide a standardized approach to demonstrating compliance
with a dose- or risk-based regulation.  Since most of the manual is based on general technical and
statistical concepts, much of the guidance can still be applied to other types of regulations or
standards. The purpose of this chapter is to provide the overview information required to
understand the rest of this manual.

Section 2.2 introduces and defines key terms used throughout the manual.  Some of these terms
may be familiar to the MARSSIM user, while others are new terms developed specifically for
this manual.

Section 2.3 describes the flow of information used to decide whether or not a site or facility
complies with a regulation.  The section describes the framework that is used to demonstrate
compliance with a regulation, and is the basis for all guidance presented in this manual. The
decision-making process is broken down into four phases: 1) planning, 2)  implementation,
3) assessment, and 4) decision making.

Section 2.4 introduces the Radiation Survey and Site Investigation Process, which can be used
for compliance demonstration at many sites.  The section describes a series of surveys that
combine to form the core of this process. Each survey has specified goals and objectives to
support a final decision on whether or not a site or facility complies with the appropriate
regulations.  Flow diagrams showing how the different surveys support the overall process are
provided, along with descriptions of the information provided by each type of survey.

Section 2.5 presents major considerations that relate to the decision-making and survey-design
processes. This section, as well as the examples discussed in detail throughout the manual,
focuses on residual radioactive  contamination in surface soils and on building surfaces.
Recommended survey designs for demonstrating compliance are presented along with the
rationale for selecting these designs.

Section 2.6 recognizes that the methods presented in MARSSIM may not represent the optimal
survey design at all sites. Some alternate methods for applying the Radiation Survey and Site
Investigation process are discussed.  Different methods for demonstrating  compliance that are
technically defensible may be developed with the approval of the responsible regulatory agency.



MARSSIM provides an approach that is technically defensible and flexible enough to be applied
to a variety of site-specific conditions. Applying this guidance to a dose- or risk-based regulation
provides a consistent approach to protecting human health and the environment.  The manual's
performance-based approach to decision making provides the flexibility needed to address
compliance demonstration at individual sites.
2.2    Understanding Key MARSSIM Terminology

The first step in understanding the Radiation Survey and Site Investigation (RSSI) Process is
accomplished by understanding the scope of this manual, the terminology, and the concepts set
forth. Some of the terms used in MARSSIM were developed for the purposes of this manual,
while other commonly used terms are also adopted for use in MARSSIM. This section explains
some of the terms roughly in the order of their presentation in the manual.

The process described in MARSSIM begins with the premise that a release criterion has already
been provided in terms of a measurement quantity.  The methods presented in MARSSIM are
generally applicable and are not dependent on the value of the release criterion.

A release criterion is a regulatory limit expressed in terms of dose (mSv/y or mrem/y) or risk
(cancer incidence or cancer mortality).  The terms release limit or cleanup standard are also used
to describe this term. A release criterion is typically based on the total effective dose equivalent
(TEDE), the committed effective  dose equivalent (CEDE), risk of cancer incidence (morbidity),
or risk of cancer death (mortality) and generally cannot be measured directly. Exposure pathway
modeling is used to calculate a radionuclide-specific predicted concentration or surface area
concentration of specific nuclides that could result in a dose  (TEDE or CEDE) or specific risk
equal to the release criterion.  In this manual, such a concentration is termed the derived
concentration guideline level (DCGL).  Exposure pathway modeling is an analysis of various
exposure pathways and scenarios  used to convert dose or risk into concentration.  In many cases
DCGLs can be obtained from responsible regulatory agency  guidance based on default modeling
input parameters, while other users may elect to take into account site-specific parameters to
determine DCGLs. In general, the units for the DCGL are the same as the units for
measurements performed to demonstrate compliance (e.g., Bq/kg or pCi/g, Bq/m2 or dpm/100
cm2). This allows direct comparisons between the survey results and the DCGL.  A discussion of
the uncertainty associated with using DCGLs to demonstrate compliance is included in Appendix
D, Section D.6.

An investigation level is a radionuclide-specific level based on the release criterion that, if
exceeded,  triggers some response such as further investigation or remediation. An investigation
level may be used early in decommissioning to identify areas requiring further investigation, and
may also be used as  a screening tool during compliance demonstration to identify potential
problem areas. A DCGL is an example of a specific investigation level.



While the derivation of DCGLs is outside the scope of MARSSIM, it is important to understand
the assumptions that underlie this derivation.  The derivation assumptions must be consistent
with those used for planning a compliance demonstration survey.  One of the most important
assumptions used for converting a dose or risk limit into a media-specific concentration is the
modeled area of contamination. Other considerations include sample depth, composition,
modeling parameters, and exposure scenarios. MARSSIM defines two potential DCGLs based
on the area of contamination.

•      If the residual radioactivity is evenly distributed over a large area, MARSSIM looks at the
       average activity over the entire area. The DCGLW1 (the DCGL used for the statistical
       tests, see Section 2.5.1.2) is derived based on an average concentration over a large area.

•      If the residual radioactivity appears as small areas of elevated activity2 within a larger
       area, typically smaller than the area between measurement locations, MARSSIM
       considers the results of individual measurements.  The DCGLEMC (the DCGL used for the
       elevated measurement comparison (EMC),  see Section 2.5.3 and Section 2.5.4)  is derived
       separately for these small areas and generally from different exposure assumptions than
       those used for larger areas.

A site is any installation, facility, or discrete, physically separate parcel of land, or any building
or structure or portion thereof, that is being considered for survey  and investigation.

Area is a very general term that refers to any portion of a site, up to and including the entire site.

Decommissioning is the process of safely removing a site from service, reducing residual
radioactivity through remediation to a level that permits release of the property, and  termination
of the license or other authorization for site operation. Although only part of the process, the
term decommissioning is used in this sense for the  Radiation Survey and Site Investigation
(RSSI) Process, and is used this way throughout MARSSIM.
   1 The "W" in DCGLW stands for Wilcoxon Rank Sum test, which is the statistical test recommended in
MARSSIM for demonstrating compliance when the contaminant is present in background.  The Sign test
recommended for demonstrating compliance when the contaminant is not present in background also uses the
DCGLW.

   2 A small area of elevated activity, or maximum point estimate of contamination, might also be referred to as a
"hot spot." This term has been purposefully omitted from MARSSIM because the term often has different
meanings based on operational or local program concerns. As a result, there may be problems associated with
defining the term and reeducating MARSSIM users in the proper use of the term. Because these implications are
inconsistent with MARSSIM concepts, the term is not used.



A survey unit is a physical area consisting of structure or land areas of specified size and shape
for which a separate decision will be made as to whether or not that area exceeds the release
criterion.  This decision is made as a result of the final status survey—the survey in the RSSI
Process used to demonstrate compliance with the regulation or standard. The size and shape of
the survey unit are based on factors such as the potential for contamination, the expected
distribution of contamination, and any physical boundaries (e.g., buildings, fences, soil type,
surface water body) at the site.

For MARSSIM, measurement is used interchangeably to mean: 1) the act of using a detector to
determine the level or quantity of radioactivity on a surface or in a sample of material removed
from a medium being evaluated, or 2) the quantity obtained by the act of measuring.  Direct
measurements are obtained by placing a detector near the medium being surveyed and inferring the
radioactivity level directly from the detector response. Scanning is a measurement technique
performed by moving a portable radiation detector at a constant speed above a surface to semi-
quantitatively  detect areas of elevated activity.  Sampling is the process of collecting a portion of
an environmental medium as being representative of the locally remaining medium. The
collected portion, or aliquot, of the medium is then analyzed to identify the contaminant and
determine the  concentration. The word sample may also refer to a set of individual
measurements drawn from a population whose properties are studied to  gain information about
the entire population.  This second definition of sample is primarily used for statistical
discussions.

To make the best use of resources for decommissioning, MARSSIM places greater survey efforts
on areas that have, or had, the highest potential for contamination. This is referred to as a graded
approach. The final status survey uses statistical tests to support decision making. These
statistical tests are performed using survey data from areas with common characteristics, such as
contamination potential, which  are distinguishable from other areas with different characteristics.
Classification is the process by  which an  area or survey unit is described according to
radiological characteristics. The significance of survey  unit classification is that this process
determines the final status survey design and the procedures used to develop this design.
Preliminary area classifications, made earlier in the MARSSIM Process, are useful for planning
subsequent surveys.

Areas that have no reasonable potential for residual contamination are classified as non-impacted
areas.  These areas have no radiological impact from site operations and are typically identified
early in decommissioning.  Areas with reasonable potential for residual contamination are  classified as
impacted areas.

Impacted areas are further divided into one of three classifications:


•      Class 1 Areas: Areas that have, or had prior to remediation, a potential for radioactive
       contamination (based on site operating history) or known contamination (based on
       previous radiation surveys) above the DCGLW. Examples of Class 1 areas include:
       1) site areas previously subjected to remedial actions3, 2) locations where leaks or spills
       are known to have occurred, 3) former burial or disposal sites, 4) waste storage sites, and
       5) areas with contaminants in discrete solid pieces of material and high specific activity.

•      Class 2 Areas: Areas that have, or had prior to remediation, a potential for radioactive
       contamination or known contamination, but are not expected to exceed the DCGLW.  To
       justify changing the classification from Class 1 to Class 2, there should be measurement
       data that provides a high degree of confidence that no individual measurement would
       exceed the DCGLW.  Other justifications for reclassifying an area as Class 2 may be
       appropriate, based on site-specific considerations.  Examples of areas that might be
       classified as Class 2 for the final status survey include:   1) locations where radioactive
       materials were present in an unsealed form, 2) potentially contaminated transport routes,
       3) areas downwind from stack release points, 4) upper walls and ceilings of buildings or
       rooms subjected to airborne radioactivity, 5) areas handling low concentrations of
       radioactive materials, and 6) areas on the perimeter of former contamination control
       areas.

•      Class 3 Areas: Any impacted areas that are not expected to contain any residual
       radioactivity, or are expected to contain levels of residual radioactivity at a small fraction
       of the DCGLW, based on site operating history and previous radiation surveys. Examples
       of areas that might be classified as Class 3 include  buffer zones around Class 1 or Class 2
       areas, and areas with very low potential for residual contamination but insufficient
       information to justify a non-impacted classification.
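
The classification logic in the bullets above can be summarized informally as a simple decision
helper.  The Python sketch below is a simplified illustration only, not a substitute for the
site-specific judgment MARSSIM calls for; in particular, the threshold used to represent "a small
fraction of the DCGLW" is an assumed illustrative value.

    def classify_impacted_area(potential_exceeds_dcglw, expected_fraction_of_dcglw,
                               small_fraction=0.1):
        """Simplified illustration of survey unit classification for impacted areas.

        potential_exceeds_dcglw: True if the area has (or had prior to remediation) a
            potential for contamination above the DCGLw.
        expected_fraction_of_dcglw: expected residual radioactivity as a fraction of the DCGLw.
        small_fraction: what counts as "a small fraction" of the DCGLw -- an illustrative
            assumption; MARSSIM leaves this to site-specific judgment.
        """
        if potential_exceeds_dcglw:
            return "Class 1"
        if expected_fraction_of_dcglw <= small_fraction:
            return "Class 3"
        return "Class 2"

    # Example: an area not expected to exceed the DCGLw, but above a small fraction of it
    print(classify_impacted_area(False, 0.4))   # -> Class 2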

Class 1 areas have the greatest potential for contamination and therefore receive the highest
degree of survey effort for the final status survey using a graded approach, followed by Class 2,
and then by Class 3. Non-impacted areas do not receive any level of survey coverage because
they have no potential for residual contamination.  Non-impacted areas are determined on a site-
specific basis.  Examples of areas that would usually be non-impacted rather than impacted
include residential or other buildings that have or had nothing more than smoke detectors or exit
signs containing sealed radioactive sources.
   3 Remediated areas are identified as Class 1 areas because the remediation process often results in less than
100% removal of the contamination, even though the goal of remediation is to comply with regulatory standards and
protect human health and the environment. The contamination that remains on the site after remediation is often
associated with relatively small areas with elevated levels of residual radioactivity.  This results in a non-uniform
distribution of the radionuclide and a Class 1 classification. If an area is expected to have no potential to exceed the
DCGLW and was remediated to demonstrate the residual radioactivity is as low as reasonably achievable (ALARA),
the remediated area might be classified as Class 2 for the final status survey.



If the radionuclide of potential concern is present in background, or if the measurement system
used to determine concentration in the survey unit is not radionuclide-specific, background
measurements are compared to the survey unit measurements to determine the level of residual
radioactivity. The background reference area is a geographical area from which representative
reference measurements are performed for comparison with measurements performed in specific
survey units.  The background reference area is defined as an area that has similar physical,
chemical, radiological, and biological characteristics as the survey unit(s) being investigated but
has not been contaminated by site activities (i.e., non-impacted).

The process of planning the survey, implementing the survey plan, and assessing the survey
results prior to making a decision is called the Data Life Cycle.  Survey planning uses the Data
Quality Objectives (DQO) Process to ensure that the survey results are of sufficient quality and
quantity to support the final decision.  Quality Assurance and Quality Control (QA/QC)
procedures are performed during implementation of the survey plan to collect information
necessary to evaluate the survey results. Data Quality Assessment (DQA) is the process of
assessing the survey results, determining that the quality of the data satisfies the objectives of the
survey, and interpreting the survey results as they apply to the decision being made.

A systematic process and structure for quality should be established to provide confidence in the
quality and quantity of data collected to support decision making. The data used in decision
making should be supported by a planning document that records how quality assurance and
quality control are applied to obtain the type and quality of results that are needed and expected.
There are several terms used to describe a variety of planning documents, some of which
document only a small part of the survey design process.  MARSSIM uses the term Quality
Assurance Project Plan (QAPP) to describe a single document that incorporates all of the
elements of the survey design. This term is consistent with consensus guidance ANSI/ASQC E4-
1994 (ASQC 1995) and EPA guidance (EPA 1994c; EPA 1997a), and is recommended to
promote consistency. The use of the term QAPP in MARSSIM does not exclude the use of other
terms (e.g., Decommissioning Plan, Sampling and Analysis Plan, Field  Sampling Plan) to
describe survey documentation provided the information included in the documentation supports
the objectives of the survey.
2.3    Making Decisions Based on Survey Results

Compliance demonstration is simply a decision as to whether or not a survey unit meets the
release criterion. For most sites this decision is based on the results of one or more surveys.
When survey results are used to support a decision, the decision maker4 needs to ensure that the
data will support that decision with satisfactory confidence. Usually a decision maker will make
a correct decision after evaluating the data. However, since uncertainty in the survey results is
unavoidable, the possibility of errors in decisions supported by survey results is unavoidable.  For
this reason, positive actions must be taken to manage the uncertainty in the survey results so that
sound, defensible decisions may be made.  These actions include proper survey planning to
control known causes of uncertainty, proper application of quality control (QC) procedures
during implementation of the survey plan to detect and control significant sources of error, and
careful analysis of uncertainty before the data are used to support decision making. These
actions describe the flow of data throughout each type of survey, and are combined in the Data
Life Cycle as shown in Figure 2.1.

   4 The term decision maker is used throughout this section to describe the person, team, board, or committee
responsible for the final decision regarding disposition of the survey unit.

There are four phases of the Data Life Cycle:

•      Planning Phase.  The survey design is developed and documented using the Data Quality
       Objectives (DQO) Process.  Quality assurance and quality control (QA/QC) procedures
       are developed and documented in the Quality Assurance Project Plan (QAPP).  The QAPP
       is the principal product of the planning process which incorporates the DQOs as it
       integrates all technical and quality aspects for the life cycle of the project, including
       planning, implementation, and assessment.  The QAPP documents planning results for
       survey operations and provides a specific format for obtaining the type and quality of data
       needed for decision making.  The QAPP elements are presented in an order corresponding
       to the Data Life Cycle by grouping them into two types of elements: 1) project
       management; and 2) collection and evaluation of environmental data (ASQC 1995).  The
       DQO process is described in Appendix D, and applied in Chapters 3, 4, and 5 of this
       manual.  Development of the QAPP is described in Section 9.2 and applied throughout
       decommissioning.

       [Figure 2.1  The Data Life Cycle.  Planning Phase: plan for data collection using the
       Data Quality Objectives Process and develop a Quality Assurance Project Plan.
       Implementation Phase: collect data using documented measurement techniques and
       associated quality assurance and quality control activities.  Assessment Phase: evaluate
       the collected data against the survey objectives using data verification, data validation,
       and Data Quality Assessment.  Decision-Making Phase.]


•      Implementation Phase. The survey design is carried out in accordance with the SOPs and
       QAPP, resulting in the generation of raw data.  Chapter 6, Chapter 7, and Appendix H
       provide information on the selection of data collection techniques. The QA and QC
       measurements, discussed in Chapter 6 and Chapter 7, also generate data and other
       important information that will be used during the Assessment Phase.

•      Assessment Phase. The data generated during the Implementation Phase are first verified
       to ensure that the SOPs specified in the QAPP were actually followed and that the
       measurement systems performed in accordance with the criteria specified in the QAPP.
       Then the data are validated to ensure that the results of data collection activities support
       the objectives of the survey as documented in the QAPP, or permit a determination that
       these objectives should be modified. The data quality assessment (DQA) process is then
       applied using the validated data to determine if the quality of the data satisfies the data
       user's needs.  Data verification and validation are described in Section 9.3.  The DQA
       process is described in Appendix E and is applied in Chapter 8.

•      Decision-Making Phase. A decision is made, in coordination with the responsible
       regulatory agency, based on the conclusions drawn from the assessment process. The
       ultimate objective is to make technically defensible decisions with a specified level of
       confidence (Chapter 8).

2.3.1   Planning Effective Surveys—Planning Phase

The first step in designing effective surveys is planning.  The DQO Process is a series of
planning steps based on the scientific method for establishing criteria for data quality and
developing survey designs (ASQC 1995, EPA 1994a, EPA 1987b, EPA 1987c).  Planning
radiation surveys using the DQO Process improves the survey effectiveness and efficiency, and
thereby the defensibility of decisions.  This minimizes expenditures related to data collection by
eliminating unnecessary, duplicative, or overly precise data. Using the DQO Process ensures that
the type, quantity, and quality of environmental data used in decision making will be  appropriate
for the intended application. MARSSIM supports the use of the DQO Process to design surveys
for input to both evaluation techniques (elevated measurement comparison and the statistical
test).  The DQO Process provides systematic procedures for defining the criteria that the survey
design should satisfy, including what type  of measurements to perform, when and where to
perform measurements, the level of decision errors for the survey, and how many measurements
to perform.

The level of effort associated with planning a survey is based on the complexity of the survey.
Large, complicated sites generally receive a significant amount of effort during the planning
phase, while smaller sites may not require as much planning. This graded approach defines data
quality requirements according to the type of survey being designed, the risk of making a
decision error based on the data collected, and the consequences of making such an error.  This
approach provides a more effective survey design combined with a basis for judging the usability
of the data collected.

DQOs are qualitative and quantitative statements derived from the outputs of the DQO Process
that:

•      clarify the study objective
•      define the most appropriate type of data to collect
•      determine the most appropriate conditions for collecting the data
•      specify limits on decision errors which will be used as the basis for establishing the
       quantity and quality of data needed to support the decision

The DQO Process consists of seven steps, as shown in Figure 2.2. Each step is discussed in
detail in Appendix D. While all  of the outputs of the DQO Process are important for designing
efficient surveys, there are some that are referred to throughout the manual.  These DQOs are
mentioned briefly here, and are discussed in detail throughout MARSSIM and in Appendix D.

The minimum information (outputs) required from the DQO Process to proceed with the
methods described in MARSSIM are:

•      classify and specify boundaries of survey units:  this can be accomplished at any time, but
       must be finalized during final status survey planning (Section 4.4, Section 4.6)
•      state the null hypothesis (H0):  the residual radioactivity in the survey unit exceeds the
       release criterion (Section 2.5, Appendix D, Section D.6)
•      specify a gray region where the consequences of decision errors are relatively minor: the
       upper bound of the gray region is defined as the DCGLW, and the lower bound of the gray
       region (LBGR) is a site-specific variable generally initially selected to equal one half the
       DCGLW and adjusted to provide an acceptable value for the relative shift (Section 5.5.2.2,
       Section 5.5.2.3, Appendix D, Section D.6)
•      define Type I and Type II decision errors and assign probability limits for the occurrence
       of these errors:  the probabilities of making a Type I decision error (α) or a Type II
       decision error (β) are site-specific variables (Section 5.5.2.2, Section 5.5.2.3, Appendix D,
       Section D.6)
•      estimate the standard deviation of the measurements in the survey unit:  the standard
       deviation (σ) is a site-specific variable, typically estimated from preliminary survey data
       (Section 5.5.2.2, Section 5.5.2.3)
•      specify the relative shift:  the shift (Δ) is equal to the width of the gray region
       (DCGLW - LBGR), and the relative shift is defined as Δ/σ, which is generally designed to
       have a value between one and three (Section 5.5.2.2, Section 5.5.2.3)

       [Figure 2.2  The Data Quality Objectives Process.  Step 1: State the Problem; Step 2:
       Identify the Decision; Step 3: Identify Inputs to the Decision; Step 4: Define the Study
       Boundaries; Step 5: Develop a Decision Rule; Step 6: Specify Limits on Decision Errors;
       Step 7: Optimize the Design for Obtaining Data.]

•      specify the detection limit for all measurement techniques (scanning, direct measurement,
       and sample analysis) specified in the QAPP: the minimum detectable concentration
       (MDC) is unique for each measurement system (Section 6.7)
•      calculate the estimated number of measurements (N) and specify the measurement
       locations required to demonstrate compliance: the number of measurements depends on
       the relative shift (Δ/σ), Type I and Type II decision error rates (α and β), the potential for
       small areas of elevated activity, and the selection and classification of survey units
       (Section 5.5.2.2, Section 5.5.2.3); a simple worked sketch of this calculation follows this
       list
•      specify the documentation requirements for the survey, including survey planning
       documentation: documentation supporting the decision on whether or not the site
       complies with the release criterion is determined on a site-specific basis (Appendix N,
       Section N.2)
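
As a concrete illustration of how the gray region, relative shift, decision error rates, and N fit
together, the Python sketch below applies the Sign test sample-size formula of Section 5.5.2.2 to
hypothetical planning values.  It is a planning illustration only: Sign p is approximated here with
a normal cumulative distribution (an added assumption; MARSSIM tabulates these values), and
MARSSIM's recommended allowance for lost or unusable data should be applied in an actual
survey design.

    # Illustrative sketch for the case where the contaminant is not present in
    # background (Sign test).  All numbers are hypothetical planning values.
    from statistics import NormalDist

    dcgl_w = 140.0              # upper bound of the gray region (hypothetical units)
    lbgr = 70.0                 # lower bound of the gray region, often DCGLw/2 initially
    sigma = 35.0                # estimated standard deviation in the survey unit
    alpha, beta = 0.05, 0.05    # Type I and Type II decision error rates

    shift = dcgl_w - lbgr                  # width of the gray region (Delta)
    relative_shift = shift / sigma         # Delta/sigma, ideally between one and three

    z_alpha = NormalDist().inv_cdf(1 - alpha)
    z_beta = NormalDist().inv_cdf(1 - beta)
    sign_p = NormalDist().cdf(relative_shift)   # approximation of the tabulated Sign p

    n = (z_alpha + z_beta) ** 2 / (4 * (sign_p - 0.5) ** 2)
    print(f"relative shift = {relative_shift:.2f}")
    print(f"estimated number of measurements N = {n:.1f} (before any planning margin)")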


In addition to DQOs, values for the Data Quality Indicators (DQIs) should also be established
and recorded during the planning stage.  Where DQOs include performance measures and goals
in relation to a specific intended use of the data, DQIs quantify the amount of error in the data
collection process and the analytical measurement system regardless of how the data may be used
(EPA 1997a).  Precision, bias, accuracy, representativeness, comparability, and completeness are
the DQIs recommended for quantifying the amount of error for survey data. These DQIs are
discussed in detail in Appendix N, Section N.6.
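
As an informal illustration of two of these DQIs, the sketch below computes precision and bias
from hypothetical QC results.  Precision is expressed here as the relative percent difference
between duplicate measurements and bias as the percent deviation of a measured reference
sample from its known value; these are common formulations, and the numbers are assumptions
for illustration only.

    def relative_percent_difference(x1, x2):
        """RPD between a measurement and its duplicate (an indicator of precision)."""
        return abs(x1 - x2) / ((x1 + x2) / 2.0) * 100.0

    def percent_bias(measured, known):
        """Percent deviation of a measured reference sample from its known value."""
        return (measured - known) / known * 100.0

    # Hypothetical QC data: one field duplicate pair and one known-value reference sample
    print(f"precision (RPD) = {relative_percent_difference(5.2, 4.7):.1f} %")
    print(f"bias            = {percent_bias(9.3, 10.0):+.1f} %")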

2.3.2   Estimating the Uncertainty in Survey Results—Implementation Phase

To encourage flexibility and the use of optimal measurement techniques for a specific site,
MARSSIM does not provide detailed guidance on specific techniques.  Instead, MARSSIM
encourages the decision maker to evaluate available techniques based on the survey objectives.
Guidance on evaluating these objectives, such as detection limit, is provided.

QC programs can both lower the chances of making an incorrect decision and help the data user
understand the level of uncertainty that surrounds the decision (EPA 1997a). As discussed
previously, QC data are collected and analyzed during implementation to provide an estimate of
the uncertainty associated with the survey results. QC measurements (scans, direct
measurements, and samples) are technical activities performed to measure the attributes and
performance of the survey. During any survey, a certain number of measurements should be
taken for QC purposes.

2.3.3   Interpreting Survey Results—Assessment Phase

Assessment of environmental data is used to evaluate whether the data meet the objectives of the
survey and whether the data are sufficient to determine compliance with the DCGL (EPA 1992a,
EPA 1992b, EPA 1996a).  The assessment phase of the Data Life Cycle consists of three phases:
data verification, data validation, and Data Quality Assessment (DQA).

Data verification is used to ensure that the requirements stated in the planning documents are
implemented as prescribed (see Section 9.3).  Data validation is used to ensure that the results of
the data collection activities support the objectives of the survey as documented in the QAPP,  or
permit a determination that these objectives should be modified (see Section 9.3 and
Appendix N).  Data quality assessment (DQA) is the scientific and statistical evaluation of data
to determine if the data are of the right type, quality, and quantity to support their intended use
(EPA 1996a).  DQA helps complete the Data Life Cycle by providing the assessment needed to
determine that the planning objectives are achieved (see Section 8.2). Figure 2.3 illustrates
where data verification, data validation,  and DQA fit into the Assessment Phase of the Data Life
Cycle.

       [Figure 2.3  The Assessment Phase of the Data Life Cycle (EPA 1996a).  Routine data and
       QC/performance evaluation data are inputs to data validation/verification (verify
       measurement performance; verify measurement procedures and reporting requirements),
       which outputs validated/verified data.  The validated/verified data are the input to Data
       Quality Assessment (review DQOs and design; conduct preliminary data review; select
       statistical test; verify assumptions; draw conclusions), whose output is the conclusions
       drawn from the data.]

There are five steps in the DQA Process:

•      Review the DQOs and Survey Design
•      Conduct a Preliminary Data Review
•      Select the Statistical Test
•      Verify the Assumptions of the
       Statistical Test
•      Draw Conclusions from the Data

The strength of the DQA Process is that it progresses in a logical and efficient manner to
promote an understanding of how well the data meet the intended use.  The Assessment Phase is
described in more detail in Appendix E.  Section 2.6 discusses the flexibility of the Data Life
Cycle and describes the use of survey designs other than those described later in MARSSIM.

2.3.4   Uncertainty in Survey Results

Uncertainty in survey results arises primarily from two sources: survey design errors and
measurement errors.  Survey design errors occur when the survey design is unable to capture the
complete extent of variability that exists for the radionuclide distribution in a survey unit. Since
it is impossible in every situation to measure the residual radioactivity at every point in space
and time, the survey results
will be incomplete to some degree.  It is also impossible to know with complete certainty the
residual radioactivity at locations that were not measured, so the incomplete survey results give
rise to uncertainty.  The greater the natural or inherent variation in residual radioactivity, the
greater the uncertainty associated with a decision based on the survey results. The unanswered
question is:  "How well do the  survey results represent the true level of residual  radioactivity in
the survey unit?"

Measurement errors create uncertainty by masking the true level of residual radioactivity and
may be classified as random or systematic errors. Random errors affect the precision of the
measurement system, and show up as variations among repeated measurements. Systematic
errors show up as measurements that are biased to give results that are consistently higher or
lower than the true value. Measurement uncertainty is discussed in Section 6.8.
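
As an informal illustration of how the random (precision) component of measurement error is
often quantified for a single counting measurement, in the spirit of the discussion referenced
above (Section 6.8), the sketch below propagates Poisson counting uncertainty through a
background subtraction.  The counts and times are hypothetical, and systematic errors (bias) are
not captured by this kind of counting-statistics estimate.

    from math import sqrt

    gross_counts, gross_time = 1250.0, 60.0    # counts, seconds (hypothetical)
    bkg_counts, bkg_time = 900.0, 60.0         # counts, seconds (hypothetical)

    net_rate = gross_counts / gross_time - bkg_counts / bkg_time    # counts per second
    # Poisson counting uncertainty propagated through the background subtraction
    net_rate_sigma = sqrt(gross_counts / gross_time**2 + bkg_counts / bkg_time**2)

    print(f"net count rate = {net_rate:.3f} +/- {net_rate_sigma:.3f} cps (1 sigma)")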


MARSSIM uses the Data Life Cycle to control and estimate the uncertainty in the survey results
on which decisions are made. Adequate planning should minimize known sources of
uncertainty.  QC data collected during implementation of the survey plan provide an estimate of
the uncertainty.  Statistical hypothesis testing during the assessment phase provides a level of
confidence for the final decision. There are several levels of decisions included within each
survey type. Some decisions are quantitative, based on the numerical results of measurements
performed during the survey. Other decisions are qualitative based on the available evidence and
best professional judgment. The Data Life Cycle can and should be applied consistently to both
types of decisions.

2.3.5   Reporting Survey Results

The process of reporting survey results is an important consideration in planning the survey.
Again, the level of effort for reporting should be based on the complexity of the survey. A
simple survey with relatively few results may specify a single report, while a more complicated
survey may specify several reports to meet the objectives of the survey. Reporting requirements
for individual surveys should be developed during planning and clearly documented in the
QAPP. These requirements should be developed with cooperation from the people performing
the analyses (e.g., the analytical laboratory should be consulted on reporting results for samples).
The Health Physics Society has developed several suggestions for reporting survey results
(EPA 1980c).  These suggestions include:

•      Report the actual result of the analysis. Do not report data as "less than the detection
       limit."  Even negative results and results with large uncertainties can be used in the
       statistical tests to demonstrate compliance.  Results reported only as "less than the MDC"
       cannot be fully used and, for example, complicate even such simple analyses as
       calculating an average.


•      Report the measurement uncertainty for every analytical result or series of results, such as
       for a measurement system. This uncertainty, while not directly used for demonstrating
       compliance with the release criterion, is used for survey planning and data assessment
       throughout the Radiation Survey and Site Investigation Process. In addition, the
       uncertainty is used for evaluating the performance of measurement systems using QC
       measurement results (as described in Section 6.2 for scans and direct measurements, and
       in Section 7.2 for laboratory analysis of samples). The uncertainty is also used for
       comparing individual measurements to the action level, which is especially important in
       the early stages of decommissioning (scoping, characterization, and remedial action
       support surveys described in Section 2.4) when decisions are made based on a limited
       number of measurements.  Section 6.8 discusses methods for calculating the
       measurement uncertainty.

•      Report the minimum detectable concentration (MDC)  for the measurement  system as well
       as the method used to calculate the MDC.  The MDC is an a priori estimate of the
       capability for detecting an activity concentration with a specific measurement system
       (EPA 1980c).  As such, this estimate is valuable for planning and designing radiation
       surveys. Optimistic estimates of the MDC (calculated using ideal conditions that may not
       apply to actual measurements) overestimate the ability of a technique to detect residual
       radioactivity, especially when scanning for alpha or low-energy beta radiations. This can
       invalidate survey results, especially for scanning  surveys.  Using a more realistic MDC, as
       described in Section 6.7, during scoping and characterization surveys helps  in the proper
       classification of survey units for final status surveys and minimizes the possibility of
       designing and performing subsequent surveys because of errors in classification.
       Estimates of the MDC that minimize potential decision errors should be used for planning
       surveys.
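
The following sketch shows the kind of a priori MDC estimate the last bullet describes, using a
Currie-type detection limit for paired gross/background counts, in the spirit of Section 6.7.
Every instrument parameter below is a hypothetical assumption; realistic values for background,
efficiency, and probe area must come from the actual measurement system and survey
conditions, and the MDC expressions in Section 6.7 should be used for actual survey planning.

    from math import sqrt

    background_counts = 400.0    # background counts in the counting interval (hypothetical)
    count_time_min = 1.0         # counting time, minutes (hypothetical)
    efficiency = 0.20            # counts per disintegration, total efficiency (hypothetical)
    probe_area_cm2 = 100.0       # physical probe area, cm^2 (hypothetical)

    # Currie-type detection limit in counts for equal gross and background count times
    detectable_counts = 3.0 + 4.65 * sqrt(background_counts)

    # Convert detectable counts to a surface activity concentration (dpm per 100 cm^2)
    mdc = detectable_counts / (efficiency * count_time_min * (probe_area_cm2 / 100.0))
    print(f"MDC is approximately {mdc:.0f} dpm per 100 cm^2")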

Reporting requirements for individual surveys should be developed during planning and clearly
documented in the QAPP.
2.4    Radiation Survey and Site Investigation Process

The Data Life Cycle discussed in Section 2.3 is the basis for the performance-based guidance in
MARSSIM. As a framework for collecting the information required for demonstrating
compliance identified using the DQO Process, MARSSIM recommends using a series of surveys.
The Radiation Survey and Site Investigation (RSSI) Process is an example of a series of surveys
designed to demonstrate compliance with a dose- or risk-based regulation for sites with
radioactive contamination.


There are six principal steps in the RSSI Process:

•      Site Identification
•      Historical Site Assessment
•      Scoping Survey
•      Characterization Survey
•      Remedial Action Support Survey
•      Final Status Survey

Table 2.1 provides a simplified overview of the principal steps in the RSSI process and how the
Data Life Cycle can be used in an iterative fashion within the process. Each of these steps is
briefly described in Sections 2.4.1 through 2.4.6, and described in more detail in Chapter 3
and Chapter 5.  In addition, there is a brief description of regulatory agency  confirmation and
verification (see Section 2.4.7).  Because MARSSIM focuses on demonstrating compliance with
a release criterion, specifically through the use of a final status survey, these surveys have
additional objectives that are not fully discussed in MARSSIM (e.g.,  health  and safety of
workers, supporting selection of values for exposure pathway model parameters).

Figure 2.4 illustrates the Radiation Survey and Site Investigation Process in terms of area
classification, and lists the major decision to be made for each type of survey. The flowchart
demonstrates one method for quickly estimating the survey unit classification early in the
MARSSIM Process based on limited information.  While this figure shows  the relationship
between area classification and survey unit classification along with the major decision points
that determine classification, this illustration is not designed to comprehensively consider every
possibility that  may occur at individual survey units.  As such, it is  a useful  tool for visualizing
the classification process, but there are site-specific characteristics that may cause variation from
this scheme.

The flowchart,  illustrated in Figures 2.5 through 2.8, presents the principal steps and decisions in
the site investigation process and shows the relationship of the survey types  to the overall
assessment process. As shown in these figures, there are several sequential  steps in the site
investigation process and each step builds on information provided by its predecessor. Properly
applying each sequential step in the RSSI Process should provide a high degree of assurance that
the release criterion has not been exceeded.

                        Table 2.1  The Data Life Cycle used to Support the
                         Radiation Survey and Site Investigation Process

  RSSI Process        Data Life Cycle               MARSSIM Guidance

  Site                                              Provides information on identifying potential
  Identification                                    radiation sites (Section 3.3)

  Historical Site     Historical Site Assessment    Provides information on collecting and assessing
  Assessment          Data Life Cycle:              existing site data (Sections 3.4 through 3.9) and
                      Plan, Implement,              potential sources of information (Appendix G)
                      Assess, Decide

  Scoping Survey      Scoping Data Life Cycle:      Discusses the purpose and general approach for
                      Plan, Implement,              performing scoping surveys, especially as sources
                      Assess, Decide                of information when planning final status surveys
                                                    (Section 5.2)

  Characterization    Characterization Data Life    Discusses the purpose and general approach for
  Survey              Cycle:  Plan, Implement,      performing characterization surveys, especially as
                      Assess, Decide                sources of information when planning final status
                                                    surveys (Section 5.3)

  Remedial Action     Remedial Action Data Life     Discusses the purpose and general approach for
  Support Survey      Cycle:  Plan, Implement,      performing remedial action support surveys,
                      Assess, Decide                especially as sources of information when planning
                                                    final status surveys (Section 5.4)

  Final Status        Final Status Data Life        Provides detailed guidance for planning final
  Survey              Cycle:  Plan, Implement,      status surveys (Chapter 4 and Section 5.5),
                      Assess, Decide                selecting measurement techniques (Chapter 6,
                                                    Chapter 7, and Appendix H), and assessing the data
                                                    collected during final status surveys (Chapter 8
                                                    and Chapter 9)

2.4.1   Site Identification

The identification of known, likely, or potential sites is generally easily accomplished, and is
typically performed before beginning decommissioning. Any facility preparing to terminate an
NRC or agreement state license would be identified as a site.  Formerly terminated NRC licenses
may also become sites for the EPA Superfund Program. Portions of military bases or DOE
facilities may be identified as sites based on records of authorization to possess or handle
radioactive materials.  In addition, information obtained during the performance of survey
activities may identify additional potential radiation sites related to the site being investigated.
Information on site identification is provided in Section 3.3.

       [Figure 2.4  The Radiation Survey and Site Investigation Process in Terms of Area
       Classification.  The flowchart initially assumes a Class 1 projected final status survey
       classification and works through a series of decisions:  Is the area potentially
       contaminated? (if not, the area is non-impacted and no survey is required); following a
       characterization survey, Is the area actually contaminated? (if not, a Class 3 final status
       survey is performed); Is the probability of exceeding the DCGLW small?; Is the
       probability of exceeding the DCGLEMC small? (remediation and a remedial action
       support survey may be needed); Is there sufficient information to support classification
       as Class 2? (if so, a Class 2 final status survey is performed; otherwise, a Class 1 final
       status survey is performed).]

       [Figure 2.5  The Historical Site Assessment Portion of the Radiation Survey and Site
       Investigation Process.  Principal steps and decisions shown in the flowchart:  Site
       Identification; design the Historical Site Assessment (HSA) using the Data Quality
       Objectives (DQO) Process; perform the HSA; validate data and assess data quality; Was
       the area previously remediated and does it currently pose a low human health risk? (if so,
       document findings supporting a non-impacted classification and provide documentation
       sufficient to demonstrate compliance); document the findings of the HSA and continue to
       Figure 2.6.  The survey objectives listed in the figure are the six HSA objectives given in
       Section 2.4.2.]

       [Figure 2.6  The Scoping Survey Portion of the Radiation Survey and Site Investigation
       Process.  Principal steps and decisions shown in the flowchart:  design the scoping survey
       plan using the DQO Process; perform the scoping survey; validate data and assess data
       quality; Is there sufficient information to support classification as Class 3? (if so,
       document findings supporting the Class 3 classification; if not, or if unknown, continue to
       Figure 2.7).  The survey objectives listed in the figure correspond to the scoping survey
       objectives given in Section 2.4.3.]

       [Figure 2.7  The Characterization and Remedial Action Support Survey Portion of the
       Radiation Survey and Site Investigation Process.  Principal steps and decisions shown in
       the flowchart:  design the characterization survey plan using the DQO Process; perform
       the characterization survey; validate data and assess data quality; Are the DQOs
       satisfied?; classify areas as Class 1, Class 2, or Class 3; determine the remedial alternative
       and site-specific DCGLs; Do the Class 1 and Class 2 areas require remediation?;
       remediate the area; perform the remedial action support survey; Does the remedial action
       support survey indicate the remediation is complete?; Is reassessment of the remedial
       alternative and site-specific DCGLs necessary?  Survey units that fail to demonstrate
       compliance in the final status survey (Figure 2.8) re-enter the process at this point.  The
       survey objectives listed in the figure correspond to the characterization survey objectives
       given in Section 2.4.4.]

       [Figure 2.8  The Final Status Survey Portion of the Radiation Survey and Site
       Investigation Process.  Principal steps and decisions shown in the flowchart:  design the
       final status survey plan using the DQO Process (continuing from Figure 2.6 and
       Figure 2.7); perform the final status survey for Class 1, Class 2, or Class 3 survey units;
       validate data and assess data quality; Are the DQOs satisfied? (if not, reassess the DQOs
       or perform additional surveys); Do the final status survey results demonstrate compliance
       with the DCGLs?; Is additional remediation required? (if so, the survey unit re-enters the
       remedial action support survey portion of the process in Figure 2.7); document the results
       in the final status survey report.  The survey objectives listed in the figure are the three
       final status survey objectives given in Section 2.4.6.]


2.4.2   Historical Site Assessment

The primary purpose of the Historical Site Assessment (HSA) is to collect existing information
concerning the site and its surroundings.

The primary objectives of the HSA are to:

•      identify potential sources of contamination
•      determine whether or not sites pose a threat to human health and the environment
•      differentiate impacted from non-impacted areas
•      provide input to scoping and characterization survey designs
•      provide an assessment of the likelihood of contaminant migration
•      identify additional potential radiation sites related to the site being investigated

The HSA typically consists of three phases: identification of a candidate site, preliminary
investigation of the facility or site, and site visits or inspections. The HSA is followed by an
evaluation of the site based on information collected during the HSA.

2.4.3   Scoping Survey

If the data collected during the HSA indicate an area is impacted, a scoping survey could be
performed. Scoping surveys provide site-specific information based on limited measurements.

The primary objectives of a scoping survey are to:

•      perform a preliminary hazard assessment
•      support classification of all or part of the  site as a Class 3 area
•      evaluate whether the survey plan can be optimized for use in the characterization or final
       status surveys
•      provide data to complete the site prioritization scoring process (CERCLA and RCRA
       sites only)
•      provide input to the characterization survey design if necessary

Scoping surveys are conducted after the HSA is completed and consist of judgment
measurements based on the HSA data. If the results of the HSA indicate that an  area is Class 3
and no contamination is found, the area may be classified as Class 3 and a Class  3 final status
survey is performed. If the scoping survey locates contamination, the area may be considered  as
Class 1 (or Class 2) for the final status survey and a  characterization survey is typically
performed. Sufficient information should be collected to identify situations that  require
immediate radiological attention. For sites where the Comprehensive Environmental Response,
Compensation, and Liability Act (CERCLA) requirements are applicable, the scoping survey

should collect sufficient data to complete the Hazard Ranking System (HRS) scoring process.
For sites where the Resource Conservation and Recovery Act (RCRA) requirements are
applicable, the scoping survey should collect sufficient data to complete the National Corrective
Action Prioritization System (NCAPS) scoring process.  Sites that meet the National
Contingency Plan (NCP) criteria for a removal should be referred to the Superfund removal
program (EPA 1988c). A comparison of MARSSIM guidance to CERCLA and RCRA
requirements is provided in Appendix F.

2.4.4   Characterization Survey

If an area could be classified as Class 1 or Class 2 for the final status survey, based on the HSA
and scoping survey results, a characterization survey is warranted. The characterization survey is
planned based on the HSA and scoping survey results. This type of survey is a detailed
radiological environmental characterization of the area.

The primary objectives of a characterization survey are to:

•      determine the nature and extent of the contamination
•      collect data to support evaluation of remedial alternatives and technologies
•      evaluate whether the survey plan can be optimized for use in the final status survey
•      support Remedial Investigation/Feasibility Study requirements (CERCLA sites only)  or
       Facility Investigation/Corrective Measures Study requirements (RCRA sites only)
•      provide input to the final status survey design

The characterization survey is the most comprehensive of all the survey types and generates the
most data. This includes preparing a reference grid, systematic as well as judgment
measurements, and surveys of different media (e.g., surface soils, interior and exterior surfaces of
buildings). The decision as to which media will be surveyed is a site-specific decision addressed
throughout the Radiation Survey and Site Investigation Process.

2.4.5   Remedial Action Support Survey

If an area is adequately characterized and is contaminated above the derived concentration
guideline levels (DCGLs),  a decontamination plan should be prepared. A remedial action
support survey is performed while remediation is being conducted, and guides the cleanup in a
real-time mode.

Remedial action support surveys are conducted to:

•      support remediation activities
•      determine when a site or survey unit is ready for the final status survey

•      provide updated estimates of site-specific parameters used for planning the final status
       survey

This manual does not provide guidance on the routine operational surveys used to support
remediation activities. The determination that a survey unit is ready for a final status survey
following remediation is an important step in the RSSI Process.  In addition, remedial activities
result in changes to the distribution of contamination within the survey unit. For most survey
units, the site-specific parameters used during final status survey planning (e.g., variability in the
radionuclide concentration, probability of small areas of elevated activity) will need to be re-
established following remediation. Obtaining updated values for these critical parameters should
be considered when planning a remedial action support survey.

2.4.6  Final Status Survey

The final status survey is used  to demonstrate compliance with regulations. This type of survey
is the major focus of this manual.

The primary objectives of the final status survey are to:

•      select/verify survey unit classification
•      demonstrate that the potential dose or risk from residual contamination is below the
       release criterion for each survey unit
•      demonstrate that the potential dose or risk from small areas of elevated activity is below
       the release criterion for each survey unit

The final status survey provides data to demonstrate that all radiological parameters satisfy the
established guideline values and conditions.

Although the final status survey is discussed as if it were an activity performed at a single stage
of the site investigation process, this does not have to be the case. Data from other surveys
conducted during the Radiation Survey and Site Investigation Process—such as scoping,
characterization, and remedial  action support surveys—can provide valuable information for
planning a final status survey provided they are of sufficient quality.

Professional judgment and biased sampling are important for locating contamination and
characterizing the extent of contamination at a site.  However, the MARSSIM focus is on
planning the final status survey, which uses a more systematic approach to sampling.
Systematic sampling is based on rules that endeavor to achieve representativeness in
sampling consistent with the application of statistical tests.
MARSSIM, Revision 1                         2-24                                 August 2000

-------
                                       Overview of the Radiation Survey and Site Investigation Process
2.4.7   Regulatory Agency Confirmation and Verification

The regulatory agency responsible for the site often confirms whether the site is acceptable for
release. This confirmation may be accomplished by the agency or an impartial party. Although
some actual measurements may be performed, much of the work required for confirmation and
verification will involve evaluation and review of documentation and  data from survey activities.
The evaluation may include site visits to observe survey and measurement procedures or split-
sample analyses by the regulatory agency's laboratory. Therefore, accounting for confirmation
and verification activities during the planning stages is important to each type of survey. In some
cases, post-remedial sampling and analysis may be performed by  an impartial party. The review
of survey results should include verifying that the data quality objectives are met, reviewing the
analytical data used to demonstrate compliance, and verifying that the statistical test results
support the decision to release the site. Confirmation and verification are generally ongoing
processes throughout the Radiation Survey and Site  Investigation (RSSI) Process.
2.5    Demonstrating Compliance With a Dose- or Risk-Based Regulation

MARSSIM presents a process for demonstrating compliance with a dose- or risk-based
regulation. The RSSI Process provides flexibility in planning and performing surveys based on
site-specific considerations. A dose- or risk-based regulation usually allows one to take into
account radionuclide and site-specific differences.

The final status survey is designed to demonstrate compliance with the release criterion.  The
earlier surveys in the RSSI Process are performed to support decisions and assumptions used in
the design of the final status survey.  These preliminary surveys (e.g., scoping, characterization)
may have objectives in addition to compliance demonstration that need to be considered during
survey planning but that are not fully discussed in this manual. For this reason MARSSIM
focuses on final status survey design.  To allow maximum flexibility in the survey design,
MARSSIM provides guidance on designing a survey using the RSSI Process. This allows users
with few resources available for planning to develop an acceptable survey design. The rationale
for the development of the guidance in MARSSIM is presented in the following sections. Users
with available planning resources are encouraged to investigate alternate  survey designs for site-
specific applications using the information provided in Section 2.6.

2.5.1   The Decision to Use Statistical Tests

The objective of compliance demonstration is to provide some level of confidence that the
release criterion is not exceeded. As previously stated, 100% confidence in a decision cannot be
proven because the data always contain some uncertainty. The use of statistical methods is
necessary to provide a quantitative estimate of the probability that the release criterion is not

August 2000                                2-25                         MARSSIM, Revision 1

-------
Overview of the Radiation Survey and Site Investigation Process


exceeded at a particular site.  Statistical methods provide for specifying (controlling) the
probability of making decision errors and for extrapolating from a set of measurements to the
entire site in a scientifically valid fashion (EPA 1994b).

Clearly stating the null hypothesis is necessary before a statistical test can be performed. The
null hypothesis recommended for use in MARSSIM is: "The residual radioactivity in the survey
unit exceeds the release criterion."  This statement directly addresses the issue of compliance
demonstration for the regulator and places the burden of proof for demonstrating compliance on
the site owner or responsible party. The statistical tests are only applied at sites that were
subjected to an Historical Site Assessment (HSA).  At this point, the results of the HSA have
been reviewed and the site is determined to be impacted based on existing data and professional
judgment as described in Chapter 3. An impacted site, by definition, is expected to contain areas
of contamination, so this statement of the null hypothesis is reasonable for these sites.

The information needed to perform a statistical test is determined by the assumptions used to
develop the test. MARSSIM recommends the use of nonparametric statistical tests because these
tests use fewer assumptions, and consequently require less information to verify these
assumptions. The tests described in MARSSIM (see Chapter 8) are relatively easy to understand
and implement compared to other statistical tests.

Site conditions can also affect the selection of statistical tests. The distribution  of contamination
is of particular concern at sites with residual radioactivity.  Is the contamination distributed
uniformly, or is it located in small areas of elevated activity?  Is the residual radioactivity present
as surface, volumetric, or subsurface contamination? To demonstrate the use of the RSSI
Process at radiation sites, MARSSIM addresses only surface soil and building surfaces for the
final status survey to demonstrate compliance.  This represents a situation that is expected to
commonly occur at sites with radioactive contamination, and allows the survey design to take
into account the ability to directly measure surface radioactivity using scanning  techniques.
Other contaminated media may be  identified during the HSA or preliminary surveys (i.e.,
scoping, characterization, remedial action support).  If other contaminated media (e.g.,
subsurface contamination, volumetric contamination of building materials) are identified,
methodologies for demonstrating compliance other than those described in this manual may need
to be developed or evaluated. Situations where scanning techniques may not be effective (e.g.,
volumetric or subsurface contamination) are discussed in existing guidance (EPA 1989a, EPA
1994b, EPA 1994d).
MARSSIM, Revision 1                         2-26                                 August 2000

-------
                                        Overview of the Radiation Survey and Site Investigation Process
2.5.1.1  Small Areas of Elevated Activity

While the development of DCGLs is outside the scope of MARSSIM, this manual assumes that
DCGLs will be developed using exposure pathway models which in turn assume a relatively
uniform distribution of contamination. While this represents an ideal situation, small areas of
elevated activity are a concern at many sites.

MARSSIM addresses the concern for small areas of elevated activity by using a simple
comparison to an investigation level as an alternative to statistical methods. Using the elevated
measurement comparison (EMC) represents a conservative approach, in that every measurement
needs to be below the action level. The investigation level for this comparison is called the
DCGLEMC, which is the DCGLW modified to account for the smaller area.  This area factor
correction (discussed in Section 5.5.2.4) is considered to be a defensible modification because
the exposure assumptions (e.g., exposure time and duration) are the same as those used to
develop the DCGLW.  In the case of multiple areas of elevated activity in a survey unit, a posting
plot (discussed in Section 8.2.2.2) or similar representation of the distribution of activity in the
survey unit can be used to determine any pattern in the location of these areas.
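
As an informal illustration (code examples are not part of the MARSSIM guidance), the elevated
measurement comparison amounts to checking every measurement against the DCGLEMC, where the
DCGLEMC is the DCGLW scaled by the area factor for the elevated area size of concern.  All
values in the sketch below are hypothetical placeholders.

    # Sketch of the elevated measurement comparison (EMC); all values are hypothetical.
    DCGL_W = 100.0          # guideline for uniform residual radioactivity (e.g., Bq/kg)
    area_factor = 5.0       # site-specific area factor for the elevated area size of concern
    DCGL_EMC = area_factor * DCGL_W   # investigation level for small areas of elevated activity

    measurements = [12.0, 48.0, 95.0, 310.0, 77.0]   # hypothetical survey unit results

    # Every measurement is compared to the investigation level; any exceedance is
    # flagged for further investigation rather than being averaged away.
    flagged = [m for m in measurements if m > DCGL_EMC]
    print("measurements requiring investigation:", flagged)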

If elevated levels of residual radioactivity are found in an isolated area, in addition to residual
radioactivity distributed relatively uniformly across the survey unit, the unity rule (Section 4.3.3)
can be used to ensure that the total dose or risk meets the release criterion. If there is more than
one of these areas, a separate term should be included in the calculation for each area of elevated
activity. As an alternative to the  unity rule, the dose or risk due to the actual residual
radioactivity distribution can be calculated if there is an appropriate exposure pathway model
available.  Note that these considerations generally only apply to Class 1 survey units, since areas
of elevated activity should not be present in Class 2 or Class 3 survey units.
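
The unity-rule bookkeeping described above can be sketched as follows.  This sketch assumes one
common formulation in which the uniform component is compared to the DCGLW and each isolated
elevated area contributes an additional term scaled by its DCGLEMC; the exact form used at a site
should follow Section 4.3.3 and the applicable exposure pathway model, and all numbers below are
hypothetical.

    # Unity-rule sketch: uniform residual radioactivity plus isolated elevated areas.
    # All values are hypothetical; see Section 4.3.3 for the unity rule itself.
    DCGL_W = 100.0
    avg_concentration = 30.0                  # average over the survey unit

    # (average concentration within each elevated area, area factor) pairs
    elevated_areas = [(250.0, 5.0), (180.0, 8.0)]

    total = avg_concentration / DCGL_W
    for conc, area_factor in elevated_areas:
        dcgl_emc = area_factor * DCGL_W
        total += (conc - avg_concentration) / dcgl_emc   # one term per elevated area

    # The sum must not exceed 1 for the total dose or risk to meet the release criterion.
    print("unity-rule sum =", round(total, 2))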

2.5.1.2  Relatively Uniform Distribution of Contamination

As discussed previously, the development of a DCGL starts with the  assumption of a relatively
uniform distribution of contamination.  Some variability in the measurements is expected.  This
is primarily due to a random spatial distribution of contamination and uncertainties in the
measurement process. The arithmetic mean of the measurements taken from such a distribution
would represent the parameter of interest for demonstrating compliance.

Whether or not the radionuclide of concern is present in background determines the form of the
statistical test. The Wilcoxon Rank Sum (WRS) test is recommended for comparisons of survey
unit radionuclide concentrations with background. When the radionuclide of concern is not
present in background, the Sign test is recommended. Instructions on performing these tests are
provided in Section 8.3 and Section 8.4.
August 2000                                 2-27                        MARSSIM, Revision 1

-------
Overview of the Radiation Survey and Site Investigation Process


The WRS and Sign tests are designed to determine whether or not the level of residual activity
uniformly distributed throughout the survey unit exceeds the DCGLW.  Since these methods are
based on ranks, the results are generally expressed in terms of the median. When the underlying
measurement distribution is symmetric, the mean is equal to the median. When the underlying
distribution is not symmetric, these tests are still true tests of the median but only approximate
tests of the mean. However, numerous studies show that this is a fairly good approximation
(Hardin and Gilbert, 1993). The assumption of symmetry is less restrictive than that of normality
because the normal distribution is itself symmetric.  If, however, the measurement distribution is
skewed to the right, the average will generally be greater than the median. In severe cases, the
average may exceed the DCGLW while the median does not.  For this reason, MARSSIM
recommends comparing the arithmetic mean of the survey unit data to the DCGLW as a first step
in the interpretation of the data (see Section 8.2.2.1).
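
The effect described above can be seen with a small numerical example.  The lognormal parameters
below are arbitrary and chosen only to produce a right-skewed sample in which the mean can exceed
the DCGLW while the median does not.

    import numpy as np

    rng = np.random.default_rng(42)
    DCGL_W = 100.0

    # Right-skewed (lognormal) measurements; parameters are arbitrary illustrations.
    data = rng.lognormal(mean=np.log(60.0), sigma=1.2, size=30)

    print("median =", round(float(np.median(data)), 1))   # typically below the DCGLW here
    print("mean   =", round(float(np.mean(data)), 1))     # can exceed the DCGLW for skewed data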

The WRS test is a two-sample test that compares the distribution of a set of measurements in a
survey unit to that of a set of measurements in a reference  area.  The test is performed by first
adding the value of the DCGLW to each measurement in the reference area. The combined set of
survey unit data and adjusted reference area data are listed, or ranked, in increasing numerical
order.  If the ranks of the adjusted reference site measurements are significantly higher than the
ranks of the  survey unit measurements, the survey unit demonstrates compliance with the release
criterion.
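
The ranking step described above can be sketched in a few lines.  The large-sample normal
approximation used here for the critical value is only illustrative; the procedure and critical
values actually specified by MARSSIM are given in Section 8.4, and the data are hypothetical.

    import numpy as np
    from scipy.stats import rankdata, norm

    DCGL_W = 100.0
    survey_unit = np.array([82.0, 65.0, 71.0, 74.0, 90.0, 58.0, 88.0, 93.0])  # hypothetical
    reference   = np.array([10.0, 14.0,  8.0, 12.0,  9.0, 11.0, 13.0, 10.0])  # hypothetical

    adjusted_reference = reference + DCGL_W          # add the DCGLW to each reference value
    combined = np.concatenate([survey_unit, adjusted_reference])
    ranks = rankdata(combined)                       # rank the combined data set

    W_r = ranks[len(survey_unit):].sum()             # sum of ranks of adjusted reference data

    # Large-sample normal approximation for the rank sum under the null hypothesis.
    n, m = len(adjusted_reference), len(survey_unit)
    mean_W = n * (n + m + 1) / 2.0
    sd_W = (n * m * (n + m + 1) / 12.0) ** 0.5
    critical = mean_W + norm.ppf(0.95) * sd_W        # alpha = 0.05, illustrative only

    # The survey unit passes if the adjusted reference ranks are significantly higher.
    print("W_r =", W_r, "critical =", round(critical, 1), "pass:", W_r > critical)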

The Sign test is a one-sample test that compares the distribution of a set of measurements in a
survey unit to a fixed value, namely the DCGLW.  First, the value for each measurement  in the
survey unit is subtracted from the DCGLW. The resulting distribution is tested to  determine if the
center of the distribution is greater than zero.  If the adjusted distribution is significantly greater
than zero, the survey unit demonstrates compliance with the release criterion.
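
The Sign test can be sketched the same way; the binomial calculation below is an illustrative
stand-in for the tabulated procedure in Section 8.3, and the data are hypothetical.

    import numpy as np
    from scipy.stats import binom

    DCGL_W = 100.0
    survey_unit = np.array([82.0, 65.0, 71.0, 74.0, 90.0, 58.0, 88.0, 93.0])  # hypothetical

    differences = DCGL_W - survey_unit
    positives = int(np.sum(differences > 0))   # measurements below the DCGLW
    n = int(np.sum(differences != 0))          # differences of exactly zero are discarded

    # One-sided binomial p-value for seeing at least this many positive differences
    # if the survey unit median equaled the DCGLW.
    p_value = binom.sf(positives - 1, n, 0.5)
    print(positives, "of", n, "below the DCGLW, p =", round(float(p_value), 4))
    print("survey unit passes" if p_value < 0.05 else "survey unit does not pass")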

Guidance on performing the statistical tests and presenting graphical representations of the data
is provided in Chapter 8 and Appendix I.

2.5.2  Classification

Classifying a survey unit is crucial to the survey design because this step determines the  level of
survey effort based on the potential for contamination. Areas are initially classified as impacted
or non-impacted based on the results of the HSA. Non-impacted areas have no reasonable
potential for residual contamination and require no further evidence to demonstrate compliance
with the release criterion.  When planning the final status survey, impacted areas may be further
divided into survey units.  If a survey unit is classified incorrectly, the potential for making
decision errors increases.  For this reason, all impacted areas are initially assumed to be Class 1.
Class 1 areas require the highest level of survey effort because they are known to  have
contaminant concentrations above the DCGLW, or the contaminant concentrations are unknown.

MARSSIM, Revision 1                        2-28                                 August 2000

-------
                                        Overview of the Radiation Survey and Site Investigation Process


Information indicating the potential or known contaminant concentration is less than the DCGLw
can be used to support re-classification of an area or survey unit as Class 2 or Class 3.

There is a certain amount of information necessary to demonstrate compliance with the release
criterion.  The amount of this information that is available and the level of confidence in this
information is reflected in the area classification.  The initial assumption for affected areas is that
none of the necessary information is available.  This results in a default Class 1 classification.
This corresponds with the statement of the null hypothesis that the survey unit is contaminated,
and represents the most efficient case for the regulator. For this reason, the recommendations for
a Class 1 final status survey represent the minimal amount of information necessary to
demonstrate compliance.

Not all of the information  available for an area will have been collected for purposes of
compliance demonstration. For example, data are collected during characterization surveys  to
determine the extent, and not necessarily the amount, of contamination.  This does not mean that
the data do not meet the objectives of compliance demonstration, but may mean that statistical
tests would be of little or no value because the data have not been collected using appropriate
protocols or design. Rather than discard potentially valuable information, MARSSIM allows for
a qualitative assessment of existing data (Chapter 3). Non-impacted areas represent areas where
all of the information necessary to demonstrate compliance is available from existing sources.
For these areas, no statistical tests are considered necessary. A classification as Class 2 or Class
3 indicates that some information on describing the potential for contamination is available for
that survey unit. The data collection recommendations are  modified to account for the
information already available, and the  statistical tests are performed on the data collected during
the final status survey.

As previously stated, the conservative  assumption that an area receive a classification of Class 1
is only applied to impacted sites. The HSA (described in Chapter 3) is used to provide an initial
classification for the site of impacted or non-impacted based on existing data and professional
judgment.

2.5.3  Design Considerations for Small Areas of Elevated Activity

Scanning surveys are typically used to identify small areas of elevated activity. The size of the
area of elevated activity that the survey is designed to detect affects the DCGLEMC, which in turn
determines the ability of a scanning technique to detect these areas.  Larger areas have a lower
DCGLEMC and are more difficult to detect than smaller areas.
The percentage of the survey unit to be covered by scans is also an important consideration.
100% coverage means that the entire surface area of the survey unit has been covered by the field
of view of the scanning instrument.  100% scanning coverage provides a high level of confidence

August 2000                                 2-29                        MARSSIM, Revision 1

-------
Overview of the Radiation Survey and Site Investigation Process


that all areas of elevated activity have been identified.  If the available information concerning
the survey unit provides information demonstrating that areas of elevated activity may not be
present, the survey unit may be classified as Class 2 or Class 3. Because there is already some
level of confidence that areas of elevated activity are not present, 100% coverage may not be
necessary to demonstrate compliance. The scanning survey coverage may be adjusted based on
the level of confidence supplied by the existing data. If there is evidence providing a high level
of confidence that areas  of elevated activity are not present, 10% scanning coverage may meet
the objectives of the survey. If the existing information provides a lower level of confidence, the
scanning coverage may be adjusted between 10 and 100% based on the level of confidence and
the objectives of the survey. A general recommendation is to always err on the side of minimizing the decision
error. In general, scanning the entire survey unit is less expensive than finding areas of elevated
activity later in the survey process. Finding such areas will lead to performing additional surveys
due to survey unit misclassification.

Another consideration for scanning surveys is the selection of scanning locations. This is not an
issue when 100% of the survey unit is scanned. Whenever less than 100% of the survey unit is
scanned, a decision must be made on what areas are scanned. The general recommendation is
that when large amounts of the survey unit are scanned (e.g., >50%), the scans should be
systematically performed along transects of the survey unit. When smaller amounts of the survey
unit are scanned, selecting areas based on professional judgment may be more appropriate and
efficient for locating areas of elevated activity (e.g., drains, ducts, piping,  ditches).  A
combination of 100% scanning in portions of the survey unit selected based on professional
judgement and less coverage (e.g., 20-50%) for all remaining areas may result in an efficient
scanning survey design for some survey  units.

2.5.4   Design Considerations for Relatively Uniform Distributions of Contamination

The survey design for areas with relatively uniform distributions of contamination is primarily
controlled by classification and the requirements of the statistical test. Again, the
recommendations provided for Class 1 survey units are designed to minimize the decision error.
Recommendations for Class 2 or Class 3 surveys may be appropriate based on the existing
information and the level of confidence associated with this information.

The first consideration is the identification of survey units. The identification of survey units
may be accomplished early (e.g., scoping) or late (e.g., final status) in the survey process, but
must be accomplished prior to performing a final status survey. Early identification of survey
units can help in planning and performing surveys throughout the RSSI Process. Late
identification of survey units can prevent misconceptions and problems associated with
reclassification of areas based on results of subsequent surveys. The area of an individual survey
unit is determined based on the area classification and  modeling assumptions used to develop the
DCGLW.  Identification of survey units is discussed in  Section 4.6.

MARSSIM, Revision 1                         2-30                                 August 2000

-------
                                       Overview of the Radiation Survey and Site Investigation Process


Another consideration is the estimated number of measurements to demonstrate compliance
using the statistical tests.  Section 5.5.2 describes the calculations used to estimate the number of
measurements.  These calculations use information that is usually available from planning or
from preliminary surveys (i.e., scoping, characterization, remedial action support).

The information needed to perform these calculations is: 1) acceptable values for the
probabilities of making Type I (α) or Type II (β) decision errors, 2) the estimates of the
measurement variability in the survey unit (σs) and the reference area (σr) if necessary, and 3) the
shift (Δ).

MARSSIM recommends that site-specific values be determined for each of these parameters.  To
assist the user in selecting site-specific values for decision error rates and Δ, MARSSIM
recommends that an initial value be selected and adjusted to develop a survey design that is
appropriate for a specific site. An arbitrary initial value of one half the DCGLW is selected for
the lower bound of the gray region. This value is adjusted to provide a relative shift (Δ/σ) value
between one and three as  described in Section 5.5.2.  For decision error rates a value that
minimizes the risk of making  a decision error is recommended for the initial calculations.  The
number of measurements can  be recalculated using different decision error rates until an
optimum survey design is obtained. A prospective power curve (see Appendix D, Section D.6
and Appendix I, Section I.9) that considers the effects of these parameters can be very helpful in
designing a survey and considering alternative values for these parameters, and is highly
recommended.

To ensure that the desired power is achieved with the statistical test and to account for
uncertainties in the estimated values of the measurement variabilities, MARSSIM recommends
that the estimated number of measurements calculated using the formulas in Sections 5.5.2.2 and
5.5.2.3 be increased by 20%.  Insufficient numbers of measurements may result in failure to
achieve the DQO for power and result in increased Type II decision errors, where survey units
below the release criterion fail to demonstrate compliance.
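
The bookkeeping in the preceding two paragraphs can be sketched as follows.  The Sign test
sample-size expression used here, N = (Z(1-α) + Z(1-β))^2 / (4 (Sign p - 0.5)^2) with
Sign p = Φ(Δ/σ), is an assumed normal-approximation form; the expressions MARSSIM actually
specifies are in Sections 5.5.2.2 and 5.5.2.3 and should be used for real designs.  All input
values are hypothetical.

    from math import ceil
    from scipy.stats import norm

    DCGL_W = 100.0
    sigma = 30.0              # estimated measurement variability in the survey unit
    alpha, beta = 0.05, 0.05  # Type I and Type II decision error rates

    # Initial lower bound of the gray region: one half the DCGLW, then adjust the
    # relative shift Delta/sigma into the range of one to three.
    LBGR = DCGL_W / 2.0
    delta = DCGL_W - LBGR
    rel_shift = delta / sigma
    if rel_shift > 3.0:
        LBGR = DCGL_W - 3.0 * sigma          # pull the gray region in
        delta, rel_shift = DCGL_W - LBGR, 3.0
    # (If Delta/sigma is below one even with LBGR at zero, the measurement method
    # itself may be too variable for an efficient design.)

    # Assumed Sign test sample-size approximation; see Section 5.5.2.3 for the manual's form.
    sign_p = norm.cdf(rel_shift)
    N = (norm.ppf(1 - alpha) + norm.ppf(1 - beta)) ** 2 / (4 * (sign_p - 0.5) ** 2)

    N_design = ceil(1.20 * N)                # increase by 20% to protect the power DQO
    print("relative shift =", round(rel_shift, 2), "N =", round(N, 1), "design N =", N_design)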

Once survey units are identified and the number of measurements is determined, measurement
locations should be selected.  The statistical tests assume that the measurements are taken from
random locations within the survey unit.  A random survey design is used for Class 3 survey
units, and a random starting point for the systematic grid is used for Class 2 and Class 1 survey
units.
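
As an illustration of these two layouts, the sketch below generates purely random locations for a
Class 3 survey unit and a square systematic grid with a random starting point for Class 1 and
Class 2 survey units.  A square grid and the dimensions shown are arbitrary choices for the
example; the grid pattern and spacing for an actual survey are determined as described in Chapter 5.

    import numpy as np

    rng = np.random.default_rng(7)
    length_x, length_y = 30.0, 20.0      # survey unit dimensions in meters (hypothetical)

    def random_locations(n):
        """Purely random measurement locations (Class 3 survey units)."""
        return np.column_stack([rng.uniform(0, length_x, n), rng.uniform(0, length_y, n)])

    def systematic_grid(spacing):
        """Square grid with a random starting point (Class 1 and Class 2 survey units)."""
        x0, y0 = rng.uniform(0, spacing, 2)
        xs = np.arange(x0, length_x, spacing)
        ys = np.arange(y0, length_y, spacing)
        return np.array([(x, y) for x in xs for y in ys])

    print(random_locations(5))
    print(systematic_grid(spacing=5.0))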

2.5.5   Developing an Integrated Survey Design

To account for assumptions used to develop the DCGLW and the realistic possibility of small
areas of elevated activity, an integrated survey design should be developed to include all of the
design considerations. An integrated survey design combines a scanning survey for areas of

August 2000                                 2-31                         MARSSIM, Revision  1

-------
Overview of the Radiation Survey and Site Investigation Process
elevated activity with random measurements for relatively uniform distributions of
contamination.  Table 2.2 presents the recommended conditions for demonstrating compliance
for a final status survey based on classification.

       Table 2.2  Recommended Conditions for Demonstrating Compliance Based on
                   Survey Unit Classification for a Final Status Survey

  Survey Unit       Statistical   Elevated Measurement   Sampling and/or        Scanning
  Classification    Test          Comparison             Direct Measurements

  Impacted
    Class 1         Yes           Yes                    Systematic             100% Coverage
    Class 2         Yes           Yes                    Systematic             10-100% Systematic
    Class 3         Yes           Yes                    Random                 Judgmental
  Non-Impacted      No            No                     No                     None
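
For survey-planning scripts, the recommendations in Table 2.2 can be restated as a simple lookup.
The dictionary below merely transcribes the table and is offered only as an illustration.

    # Table 2.2 restated as a lookup for survey-planning scripts.
    SURVEY_DESIGN = {
        "Class 1":      {"statistical_test": True,  "emc": True,  "sampling": "systematic",
                         "scanning": "100% coverage"},
        "Class 2":      {"statistical_test": True,  "emc": True,  "sampling": "systematic",
                         "scanning": "10-100%, systematic"},
        "Class 3":      {"statistical_test": True,  "emc": True,  "sampling": "random",
                         "scanning": "judgmental"},
        "Non-Impacted": {"statistical_test": False, "emc": False, "sampling": None,
                         "scanning": None},
    }

    print(SURVEY_DESIGN["Class 2"]["scanning"])    # -> 10-100%, systematic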
Random measurement patterns are used for Class 3 survey units to ensure that the measurements
are independent and meet the requirements of the statistical tests. Systematic grids are used for
Class 2 survey units because there is an increased probability of small areas of elevated activity.
The use of a systematic grid allows the decision maker to draw conclusions about the size of any
potential areas of elevated activity based on the area between measurement locations, while the
random starting point of the grid provides an unbiased method for determining measurement
locations for the statistical tests. Class 1 survey units have the highest potential for small areas of
elevated activity, so the areas between measurement locations are adjusted to ensure that these
areas can be identified by the scanning survey if the area of elevated activity is not detected by
the direct measurements or samples.

The objectives of the scanning surveys are different.  Scanning is used to identify locations
within the survey unit that exceed the investigation level. These locations are marked and
receive additional investigations to  determine the concentration, area, and extent of the
contamination.

For Class  1 areas, scanning surveys are designed to detect small areas of elevated activity that are
not detected by the measurements using the systematic grids. For this reason, the measurement
locations and the number of measurements may need to be adjusted based on the sensitivity of
the scanning technique (see Section 5.5.2.4).  This is also the reason for recommending 100%
coverage for the scanning survey.

Scanning surveys in Class 2 areas are also performed primarily to find areas of elevated activity
not detected by the measurements using the systematic pattern. However, the measurement
MARSSIM, Revision 1
2-32
August 2000

-------
                                        Overview of the Radiation Survey and Site Investigation Process


locations are not adjusted based on sensitivity of the scanning technique, and scanning is only
performed in portions of the survey unit. The level of scanning effort should be proportional to
the potential for finding areas of elevated activity: in Class 2 survey units with residual
radioactivity close to the release criterion, a larger portion of the survey unit would be scanned,
while for survey units with residual radioactivity closer to background, scanning a smaller portion
of the survey unit may be appropriate. Class 2 survey units have a lower probability for areas of
elevated activity than Class 1 survey units, but some portions of the survey unit may have a
higher potential than others. Judgmental scanning surveys would focus on the portions of the
survey unit with the highest probability for areas of elevated activity. If the entire survey unit
has an equal probability for areas of elevated activity, or the judgmental scans do not cover at least 10% of the area,
systematic scans along transects of the survey unit or scanning surveys of randomly selected grid
blocks are performed.

Class 3 areas have the lowest potential for areas of elevated activity.  For this reason, MARSSIM
recommends that scanning surveys be performed in areas of highest potential (e.g., corners,
ditches, drains) based on professional judgment. This provides a qualitative level of confidence
that no areas of elevated activity were missed by the random measurements or that there were no
errors made in the classification of the area.

Note that the DCGL itself is not free of error. The assumptions made in any model used to
develop DCGLs for a site should be examined carefully.  The results  of this examination should
determine if the use of site-specific parameters results in large changes in the DCGLs, or whether
a site-specific model  should be developed to  obtain DCGLs more relevant  to the exposure
conditions at the site.  Appendix D, Section D.6 provides additional information about the
uncertainty associated with the DCGL and other considerations for developing an integrated
survey design using the DQO  Process.
2.6    Flexibility in Applying MARSSIM Guidance

Section 2.5 describes an example that applies the performance-based guidance presented in
Section 2.3 and Section 2.4 to design a survey for a site with specific characteristics (i.e., surface
soil and building surface contamination).  Obviously this design cannot be uniformly applied at
every site with radioactive contamination, so flexibility has been provided in the form of
performance-based guidance. This guidance encourages the user to develop a site-specific
survey design to account for site-specific characteristics. It is expected that most users will adopt
the portions of the MARSSIM guidance that apply to their site. In addition, changes to the
overall survey design that account for site-specific differences would be presented as part of the
survey plan.  The  plan should also demonstrate that the extrapolation from measurements
performed at specific locations to the entire site or survey unit is performed in a technically
defensible manner.

August 2000                                 2-33                        MARSSIM, Revision 1

-------
Overview of the Radiation Survey and Site Investigation Process


Where Section 2.5 describes the development of a generic survey design that will be applicable at
most radiation sites, this section describes the flexibility available within MARSSIM for
developing a site-specific survey design. Alternate methods for accomplishing the demonstration
of compliance are briefly described, and references for obtaining additional information on these
alternate methods are provided.

2.6.1   Alternate Statistical Methods

MARSSIM encourages the use of statistics to provide a quantitative estimate of the probability
that the release criterion is not exceeded at a site. While it is unlikely that any site will be able to
demonstrate compliance with a dose- or risk-based regulation without at least considering the use
of statistics, MARSSIM recognizes that the use of statistical tests may not always provide the
most effective method for demonstrating compliance. For example, MARSSIM recommends a
simple comparison to an investigation level to evaluate the presence of small areas of elevated
activity in place of complicated statistical tests.  At some sites a simple comparison of each
measurement result to the DCGLW, to demonstrate that all the measurement results are below the
release criterion, may be more effective than statistical tests for the overall demonstration of
compliance with the regulation, provided an adequate number of measurements are performed.

MARSSIM recommends the use of nonparametric statistical tests for evaluating environmental
data.  There are two reasons for this recommendation: 1) environmental data are usually not
normally distributed, and 2) there are often a significant number of qualitative survey results
(e.g., less than MDC). Either one of these conditions means that parametric statistical tests may
not be appropriate. If one can demonstrate that the data are normally distributed and that there
are a sufficient number of results to  support a decision concerning the survey unit, parametric
tests will generally provide higher power (or require fewer measurements to support a decision
concerning the survey unit).  The tests to demonstrate that the data are normally distributed
generally require more measurements than the nonparametric tests. EPA provides guidance on
selecting and performing statistical tests to demonstrate that data are normally distributed (EPA
1996a). Guidance is also available for performing parametric statistical tests (NRC 1992, EPA
1989a, EPA 1994b, EPA 1996a).
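
As one illustration of this kind of check (the specific test is a common choice made here for the
example, not a requirement taken from the EPA guidance), the Shapiro-Wilk test can be used to
screen data before a parametric test is considered.  The data are hypothetical.

    import numpy as np
    from scipy.stats import shapiro

    rng = np.random.default_rng(1)
    data = rng.lognormal(mean=3.5, sigma=0.8, size=25)   # hypothetical survey unit results

    # Shapiro-Wilk test of the null hypothesis that the data are normally distributed;
    # a small p-value argues against relying on a parametric test.
    statistic, p_value = shapiro(data)
    print("W =", round(float(statistic), 3), "p =", round(float(p_value), 3))
    if p_value < 0.05:
        print("normality rejected; use the nonparametric tests recommended in MARSSIM")
    else:
        print("no strong evidence against normality; a parametric test could be considered")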

There are a wide variety of statistical tests designed for use in specific situations. These tests
may be preferable to the generic statistical tests recommended in MARSSIM when the
underlying assumptions for these tests can be verified. Table 2.3 lists several examples of
statistical tests that may be considered for use at individual sites or survey units. A brief
description of the tests and references for obtaining additional information on these tests are also
listed in the table.  Applying these tests may require  consultation with a statistician.
MARSSIM, Revision 1                         2-34                                 August 2000

-------
                     Table 2.3  Examples of Alternate Statistical Tests

Alternate 1-Sample Tests (no reference area measurements)

Student's t Test
       Probability Model Assumed:  Normal
       Type of Test:  Parametric test for H0: Mean < L
       Reference:  Guidance for Data Quality Assessment, EPA QA/G-9, p. 3.2-2
       Advantages:  Appropriate if data appears to be normally distributed and symmetric.
       This is a well-known and easy-to-apply test.
       Disadvantages:  Relies on a non-robust estimator for μ and σ.  Sensitive to outliers
       and departures from normality.

t Test Applied To Logarithms
       Probability Model Assumed:  Lognormal
       Type of Test:  Parametric test for H0: Median < L
       Reference:  Guidance for Data Quality Assessment, EPA QA/G-9, p. 3.2-2
       Advantages:  Useful for a quick summary of the situation if the data is skewed to right.
       Disadvantages:  Relies on a non-robust estimator for σ.  Sensitive to outliers and
       departures from lognormality.

Minimum Variance Unbiased Estimator For Lognormal Mean
       Probability Model Assumed:  Lognormal
       Type of Test:  Parametric estimates for mean and variance of lognormal distribution
       Reference:  Gilbert, Statistical Methods for Environmental Pollution Monitoring,
       p. 164, 1987
       Advantages:  A good parametric test to use if the data is lognormal.
       Disadvantages:  Inappropriate if the data is not lognormal.

Chen Test
       Probability Model Assumed:  Skewed to right, including Lognormal
       Type of Test:  Parametric test for H0: Mean > 0
       Reference:  Journal of the American Statistical Association (90), p. 767, 1995
       Advantages:  A good parametric test to use if the data is lognormal.
       Disadvantages:  Applicable only for testing H0: "survey unit is clean."  Survey unit
       must be significantly greater than 0 to fail.  Inappropriate if the data is not
       skewed to the right.

Bayesian Approaches
       Probability Model Assumed:  Varies, but a family of probability distributions must
       be selected
       Type of Test:  Parametric test for H0: Mean < L
       Reference:  DeGroot, Optimal Statistical Decisions, p. 157, 1970
       Advantages:  Permits use of subjective "expert judgment" in interpretation of data.
       Disadvantages:  Decisions based on expert judgment may be difficult to explain and
       defend.

Bootstrap
       Probability Model Assumed:  No restriction
       Type of Test:  Nonparametric.  Uses resampling methods to estimate sampling variance.
       Reference:  Hall, Annals of Statistics (22), p. 2011-2030, 1994
       Advantages:  Avoids assumptions concerning the type of distribution.
       Disadvantages:  Computer intensive analysis required.  Accuracy of the results can be
       difficult to assess.

Lognormal Confidence Intervals Using Bootstrap
       Probability Model Assumed:  Lognormal
       Type of Test:  Uses resampling methods to estimate one-sided confidence interval for
       lognormal mean
       Reference:  Angus, The Statistician (43), p. 395, 1994
       Advantages:  Nonparametric method applied within a parametric lognormal model.
       Disadvantages:  Computer intensive analysis required.  Accuracy of the results can be
       difficult to assess.

Alternate 2-Sample Tests (reference area measurements are required)

Student's t Test
       Probability Model Assumed:  Symmetric, normal
       Type of Test:  Parametric test for difference in means, H0: μx < μy
       Reference:  Guidance for Data Quality Assessment, EPA QA/G-9, p. 3.3-2
       Advantages:  Easy to apply.  Performance for non-normal data is acceptable.
       Disadvantages:  Relies on a non-robust estimator for σ, therefore test results are
       sensitive to outliers.

Mann-Whitney Test
       Probability Model Assumed:  No restrictions
       Type of Test:  Nonparametric test for difference in location, H0: μx < μy
       Reference:  Hollander and Wolfe, Nonparametric Statistical Methods, p. 71, 1973
       Advantages:  Equivalent to the WRS test, but used less often.  Similar to resampling,
       because test is based on set of all possible differences between the two data sets.
       Disadvantages:  Assumes that the only difference between the test and reference areas
       is a shift in location.

Kolmogorov-Smirnov
       Probability Model Assumed:  No restrictions
       Type of Test:  Nonparametric test for any difference between the 2 distributions
       Reference:  Hollander and Wolfe, Nonparametric Statistical Methods, p. 219, 1973
       Advantages:  A robust test for equality of two sample distributions against all
       alternatives.
       Disadvantages:  May reject because variance is high, although mean is in compliance.

Bayesian Approaches
       Probability Model Assumed:  Varies, but a family of probability distributions must
       be selected
       Type of Test:  Parametric tests for difference in means or difference in variance
       Reference:  Box and Tiao, Bayesian Inference in Statistical Analysis, Chapter 2, 1973
       Advantages:  Permits use of "expert judgment" in the interpretation of data.
       Disadvantages:  Decisions based on expert judgment may be difficult to explain and
       defend.

2-Sample Quantile Test
       Probability Model Assumed:  No restrictions
       Type of Test:  Nonparametric test for difference in shape and location
       Reference:  EPA, Methods for Evaluating the Attainment of Cleanup Standards, Vol. 3,
       p. 7.1, 1992
       Advantages:  Will detect if survey unit distribution exceeds reference distribution
       in the upper quantiles.
       Disadvantages:  Applicable only for testing H0: "survey unit is clean."  Survey unit
       must be significantly greater than 0 to fail.

Simultaneous WRS and Quantile Test
       Probability Model Assumed:  No restrictions
       Type of Test:  Nonparametric test for difference in shape and location
       Reference:  EPA, Methods for Evaluating the Attainment of Cleanup Standards, Vol. 3,
       p. 7.17, 1992
       Advantages:  Additional level of protection provided by using two tests.  Has
       advantages of both tests.
       Disadvantages:  Cannot be combined with the WRS test that uses H0: "survey unit is
       not clean."  Should only be combined with WRS test for H0: "survey unit is clean."

Bootstrap and Other Resampling Methods
       Probability Model Assumed:  No restrictions
       Type of Test:  Nonparametric.  Uses resampling methods to estimate sampling variance.
       Reference:  Hall, Annals of Statistics (22), p. 2011, 1994
       Advantages:  Avoids assumptions concerning the type of distribution.  Generates
       informative resampling distributions for graphing.
       Disadvantages:  Computer intensive analysis required.

Alternate to Statistical Tests

Decision Theory
       Probability Model Assumed:  No restrictions
       Type of Test:  Incorporates loss function in the decision theory approach
       Reference:  DOE, Statistical and Cost-Benefit Enhancements to the DQO Process for
       Characterization Decisions, 1996
       Advantages:  Combines elements of cost-benefit analysis and risk assessment into the
       planning process.
       Disadvantages:  Limited experience in applying the method to compliance demonstration
       and decommissioning.  Computer intensive analysis required.

-------
                                      Overview of the Radiation Survey and Site Investigation Process
2.6.2   Alternate Null Hypothesis

The selection of the null hypothesis in MARSSIM is designed to be protective of human health
and the environment as well as consistent with current methods used for demonstrating
compliance with regulations. MARSSIM also acknowledges that site-specific conditions (e.g.,
high variability in background, lack of measurement techniques with appropriate detection
sensitivity) may preclude the use of the null hypothesis that the survey unit is assumed to be
contaminated. Similarly, a different null hypothesis and methodology could be used for different
survey units (e.g., Class 3 survey units). NUREG 1505 (NRC 1997b) provides guidance on
determining when background variability might be an issue, designing surveys based on the null
hypothesis that the survey unit concentration is indistinguishable from the concentration  in the
reference area, and performing statistical tests to demonstrate that the survey unit is
indistinguishable from background.

2.6.3   Integrating MARSSIM with Other Survey Designs

2.6.3.1 Accelerated Cleanup Models

There are a number of approaches designed to  expedite site cleanups. These approaches can save
time and resources by reducing sampling, preventing duplication of effort, and reducing inactive
time periods between steps in a cleanup process. Although Section 2.4 describes the RSSI
Process recommended in MARSSIM as one with six principal steps, MARSSIM is not intented
to be a serial process that would slow site cleanups. Rather, MARSSIM supports existing
programs and encourages approaches to expedite site cleanups. Part of the significant emphasis
on planning in MARSSIM is meant to promote saving time and resources.

There are many examples of accelerated cleanup approaches. The Superfund Accelerated
Cleanup Model (SACM), which includes a module called integrated site assessment, has as its
objectives increased efficiency and shorter response times (EPA 1992f, EPA 1993c,  EPA 1997b).

Sandia National Laboratories (SNL) uses the Observational Approach. This approach uses an
iterative process of sample collection and real-time data evaluation to characterize a site. This
process allows early field results to guide later data collection in the field. Data collection is
limited to  only that required for selecting a unique remedy for a site.5

At DOE's Hanford Site, the parties to the Tri-Party Agreement negotiated a method to implement
the CERCLA process in order to 1) accelerate the assessment phase, and 2) coordinate RCRA
        Information on the Observational Approach recommended by Sandia National Laboratories is available
on the internet at http://www.em.doe.gov/tie/strechar.html.

December 1997                              2-39                                 MARSSIM

-------
Overview of the Radiation Survey and Site Investigation Process


and CERCLA requirements whenever possible, thereby resulting in cost savings. The Hanford
Past Practice Strategy (HPPS) was developed in 1991 to accelerate decisionmaking and initiation
of remediation through activities that include maximizing the use of existing data consistent with
data quality  objectives.6

The adaptive sampling programs at the Environmental Assessment Division (EAD) of Argonne
National Laboratory quantitatively fuse soft data (for example, historical records, aerial photos,
nonintrusive geophysical data) with hard sampling results to estimate contaminant extent,
measure the uncertainty associated with these estimates, determine the benefits from collecting
additional samples, and  assist in siting new sample locations to maximize the information
gained.7

2.6.3.2  Superfund Soil Screening Guidance

The goal of the Soil Screening Guidance (EPA 1996b, EPA 1996c) is to help standardize and
accelerate the evaluation and cleanup of contaminated soils at sites on the National Priorities List
(NPL) designated for future residential land use. The guidance provides a methodology for
calculating risk-based, site-specific, soil screening levels for chemical contaminants in soil that
may be  used to identify areas needing further investigation at NPL  sites. While the Soil
Screening Guidance was not developed for use with radionuclides, the methodology used is
comparable to the MARSSIM guidance for demonstrating compliance using DCGLs. The Soil
Screening Guidance assumes that there is a low probability of contamination, and does not
account for small areas of elevated activity. These assumptions correlate to a Class 3 area in
MARSSIM. Because the Soil Screening Guidance is designed as a screening tool instead of a
final demonstration of compliance, the specific values for decision error levels, the bounds of the
gray region, and the number and location of measurements are developed to support these
objectives. However, MARSSIM guidance can be integrated with the survey design in the Soil
Screening Guidance, using the Soil Screening Guidance as an alternate MARSSIM survey design.

The Soil Screening Guidance survey design is based on collecting samples, so scan surveys and
direct measurements are not considered. To reduce analytical costs the survey design
recommends compositing samples and provides a statistical test for demonstrating compliance.
Compositing samples provides an additional source of uncertainty and prevents the detection of
small areas of elevated activity.
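
A short numerical illustration (with hypothetical values) shows why compositing masks small areas
of elevated activity: the elevated aliquot is averaged with the others, so the composite result can
fall below an investigation level even though one sampled location greatly exceeds it.

    # Hypothetical illustration of how compositing masks a small elevated area.
    aliquots = [12.0, 15.0, 10.0, 14.0, 240.0]     # one aliquot comes from an elevated area
    investigation_level = 100.0

    composite_result = sum(aliquots) / len(aliquots)   # what a composited sample reports
    print("composite =", composite_result, "vs investigation level =", investigation_level)
    # composite = 58.2, below the investigation level although one location is at 240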
       6 Information on the Hanford Past Practice Strategy is available on the internet at
http://www.bhi-erc.com/map/sec5.html.

         Information on the Argonne National Laboratory adaptive sampling programs can be obtained on the
internet at http://www.ead.anl.gov/~web/newead/prgprj/proj/adaptive/adaptive.html.

MARSSIM                                  2-40                              December 1997

-------
                      3 HISTORICAL SITE ASSESSMENT
3.1    Introduction

The Radiation Survey and Site Investigation (RSSI) Process uses a graded approach that starts
with the Historical Site Assessment (HSA) and is later followed by other surveys that lead to the
final status survey.  The HSA is an investigation to collect existing information describing a
site's complete history from the start of site activities to the present time. The necessity for
detailed information and amount of effort to conduct an HSA depend on the type of site,
associated historical events, regulatory framework, and availability of documented information.
For example, some facilities—such as Nuclear Regulatory Commission (NRC) licensees that
routinely maintain records throughout their operations—already have HSA information in place.
Other facilities, such as Comprehensive Environmental Response, Compensation, and Liability
Act (CERCLA) or Resource Conservation and Recovery Act (RCRA) sites, may initiate a
comprehensive search to gather HSA information (also see Appendix F for comparison of Multi-
Agency Radiation Survey and Site Investigation Manual (MARSSIM), CERCLA, and RCRA).
In the former case, the HSA is essentially complete and a review of the following sections
ensures that all information sources are incorporated into the overall  investigation.  In still other
cases, where sealed sources or small amounts of radionuclides are described by the  HSA, the site
may qualify for a simplified decommissioning procedure (see Appendix B).

The HSA

•      identifies potential,  likely, or known sources of radioactive material and radioactive
       contamination based on existing or derived information

•      identifies sites that need further action as opposed to those posing no threat  to human
       health

•      provides an assessment for the likelihood of contaminant migration

•      provides information useful to scoping and characterization surveys

•      provides initial classification of the site or survey unit1 as impacted or non-impacted

The HSA may provide information needed to calculate derived concentration  guideline levels
(DCGLs, initially described in Section 2.2) and furthermore provide information that reveals the
magnitude of a site's DCGLs.  This information is used for comparing  historical  data to potential
DCGLs and determining the suitability of the existing data as part of the assessment of the site.
The HSA also supports emergency response and removal activities within the context of the
  1 Refer to Section 4.6 for a discussion of survey units.

August 2000                                3-1                        MARSSIM, Revision 1

-------
Historical Site Assessment
EPA's Superfund program, fulfills public information needs, and furnishes appropriate
information about the site early in the Site Investigation process.  For a large number of sites (e.g.,
currently licensed facilities), site identification and reconnaissance may not be needed. For
certain response activities, such as reports concerning the possible presence of radioactivity,
preliminary investigations may consist more of a reconnaissance and a scoping survey in
conjunction with efforts to gather historical information.

The HSA is typically described in three sections: identification of a candidate site (Section 3.3),
preliminary investigation of the facility or site (Section 3.4), and site reconnaissance (Section
3.5). The reconnaissance, however, is not a scoping survey. The HSA is followed by an
evaluation of the site based on information collected during the HSA.
3.2    Data Quality Objectives

The Data Quality Objectives (DQO) Process assists in directing the planning of data collection
activities performed during the HSA.  Information gathered during the HSA supports other
DQOs when this process is applied to subsequent surveys.

Three HSA-DQO results are expected:

•      identifying an individual or a list of planning team members—including the decision
       maker  (DQO Step 1, Appendix D, Section D.I)

•      concisely describing the problem (DQO Step 1, Appendix D, Section D.I)

•      initially classifying site and survey unit as impacted or non-impacted (DQO Step 4,
       Appendix D, Section D.4)

Other results may accompany these three, and this added information may be useful in supporting
subsequent applications of the DQO process.

The planning team clarifies and defines the DQOs for a site-specific survey. This
multidisciplinary team of technical experts offers the greatest potential for solving problems
when identifying every important aspect of a survey.  Including a stakeholder group
representative is an important consideration when assembling this team. Once formed, the team
can also consider the role of public participation for this assessment and the possible surveys to
follow.  The number of team members is directly related to the scope and complexity of the
problem. For a small site or simplified situations, planning may be performed by the site owner.
For other specific sites (e.g.,  CERCLA), a regulatory agency representative may be included.
MARSSIM, Revision 1                         3-2                                 August 2000

-------
                                                                     Historical Site Assessment
The representative's role facilitates survey planning—without direct participation in survey plan
development—by offering comments and information based on past precedent, current guidance,
and potential pitfalls.  For a large, complex facility, the team may include technical project
managers, site managers, scientists, engineers, community and local government representatives,
health physicists, statisticians, and regulatory agency representatives. A reasonable effort should
be made to include other individuals—that is, specific decision makers or data users—who may
use the study findings sometime in the future.

The planning team is generally led by a member who is referred to as the decision maker. This
individual is often the person with the most authority over the study and may be responsible for
assigning the roles and responsibilities to planning team members.  Overall, the decision-making
process arrives at final decisions based on the planning team's recommendations.

The problem or situation description provides background information on the fundamental issue
to be addressed by the assessment (see EPA 1994a).  The following steps may be helpful during
DQO development:

•      describe the conditions or circumstances regarding the problem or situation and the
       reason  for undertaking the survey

•      describe the problem or situation as it is currently understood by briefly summarizing
       existing information

•      conduct literature searches and interviews, and examine past or ongoing studies to ensure
       that the problem is correctly defined

•      if the problem is complex, consider breaking it into more manageable  pieces

Section 3.4 provides guidance on gathering existing site data and determining the usability of this
data.

The initial classification of the site involves developing a conceptual model based on the existing
information collected during the preliminary investigation.  Conceptual models describe  a site or
facility and its environs and present hypotheses regarding the  radionuclides for known and
potential residual contamination (EPA 1987b, 1987c). The classification of the site is discussed
in Section 3.6, Evaluation of Historical Site Assessment Data.

Several results of the DQO Process may be addressed initially during the HSA.  This information
or decision may be based on limited or incomplete data. As the site assessment progresses and as
decisions become more difficult, the iterative nature of the DQO Process allows for re-evaluation
of preliminary decisions. This is especially important for classification of sites and survey units
where the final classification is not made until the final status survey is planned.

August 2000                                 3-3                          MARSSIM, Revision 1

-------
Historical Site Assessment
3.3    Site Identification

A site may already be known for its prior use and presence of radioactive materials. In other
cases, potential radiation sites may be identified through the following:

•      records of authorization to possess or handle radioactive materials (e.g., NRC or NRC
       Agreement State License, DOE facility records, Naval Radioactive Materials Permit,
       USAF Master Materials License, Army Radiation Authorization, State Authorization for
       Naturally Occurring and Accelerator Produced Radioactive Material (NARM))

•      notification to government Agencies of possible releases of radioactive substances

•      citizens filing a petition under section 105(d) of the Superfund Amendments and
       Reauthorization Act of 1986 (SARA; EPA 1986)

•      ground and aerial radiological surveys

•      contacts with knowledge of the site

•      review of EPA's Environmental Radiation Ambient Monitoring System (ERAMS)
       database (Appendix G)

Once identified, the name, location, and current legal owner or custodian (where available) of the
site should be recorded.
3.4    Preliminary HSA Investigation

This limited-scope investigation serves to collect readily available information concerning the
facility or site and its surroundings. The investigation is designed to obtain sufficient
information to provide initial classification of the site or survey unit as impacted or non-
impacted. Information on the potential distribution of radioactive contamination may be used for
classifying each site or survey unit as Class 2 or Class 1 and is useful for planning scoping and
characterization surveys.

Table 3.1 provides a set of questions that can be used to assist in the preliminary HSA
investigation.  Apart from obvious cases (e.g., NRC licensees), this table focuses on
characteristics that identify a previously unrecognized or known but undeclared source of
potential contamination. Furthermore, these questions may identify confounding factors for
selecting reference sites.

               Table 3.1 Questions Useful for the Preliminary HSA Investigation

  1.   Was the site ever licensed for the manufacture, use, or distribution of radioactive
       materials under Agreement State Regulations, NRC licenses, or Armed Services permits,
       or for the use of 91B material?
           Indicates a higher probability that the area is impacted.

  2.   Did the site ever have permits to dispose of, or incinerate, radioactive material
       onsite?  Is there evidence of such activities?
           Evidence of radioactive material disposal indicates a higher probability that the
           area is impacted.

  3.   Has the site ever had deep wells for injection or permits for such?
           Indicates a higher probability that the area is impacted.

  4.   Did the site ever have permits to perform research with radiation generating devices
       or radioactive materials except medical or dental x-ray machines?
           Research that may have resulted in the release of radioactive materials indicates
           a higher probability that the area is impacted.

  5.   As a part of the site's radioactive materials license were there ever any Soil
       Moisture Density Gauges (Americium-Beryllium or Plutonium-Beryllium sources), or
       Radioactive Thickness Monitoring Gauges stored or disposed of onsite?
           Leak test records of sealed sources may indicate whether or not a storage area is
           impacted.  Evidence of radioactive material disposal indicates a higher
           probability that the area is impacted.

  6.   Was the site used to create radioactive material(s) by activation?
           Indicates a higher probability that the area is impacted.

  7.   Were radioactive sources stored at the site?
           Leak test records of sealed sources may indicate whether or not a storage area is
           impacted.

  8.   Is there evidence that the site was involved in the Manhattan Project or any
       Manhattan Engineering District (MED) activities (1942-1946)?
           Indicates a higher probability that the area is impacted.

  9.   Was the site ever involved in the support of nuclear weapons testing (1945-1962)?
           Indicates a higher probability that the area is impacted.

  10.  Were any facilities on the site used as a weapons storage area?  Was weapons
       maintenance ever performed at the site?
           Indicates a higher probability that the area is impacted.

  11.  Was there ever any decontamination, maintenance, or storage of radioactively
       contaminated ships, vehicles, or planes performed onsite?
           Indicates a higher probability that the area is impacted.

  12.  Is there a record of any aircraft accident at or near the site (e.g., depleted
       uranium counterbalances, thorium alloys, radium dials)?
           May include other considerations such as evidence of radioactive materials that
           were not recovered.

  13.  Was there ever any radiopharmaceutical manufacturing, storage, transfer, or disposal
       onsite?
           Indicates a higher probability that the area is impacted.

  14.  Was animal research ever performed at the site?
           Evidence that radioactive materials were used for animal research indicates a
           higher probability that the area is impacted.

  15.  Were uranium, thorium, or radium compounds (NORM) used in manufacturing, research,
       or testing at the site, or were these compounds stored at the site?
           Indicates a higher probability that the area is impacted or results in a
           potential increase in background variability.

  16.  Has the site ever been involved in the processing or production of Naturally
       Occurring Radioactive Material (e.g., radium, fertilizers, phosphorus compounds,
       vanadium compounds, refractory materials, or precious metals) or mining, milling,
       processing, or production of uranium?
           Indicates a higher probability that the area is impacted or results in a
           potential increase in background variability.

  17.  Were coal or coal products used onsite?  If yes, did combustion of these substances
       leave ash or ash residues onsite?  If yes, are runoff or production ponds onsite?
           May indicate other considerations such as a potential increase in background
           variability.

  18.  Was there ever any onsite disposal of material known to be high in naturally
       occurring radioactive materials (e.g., monazite sands used in sandblasting)?
           May indicate other considerations such as a potential increase in background
           variability.

  19.  Did the site process pipe from the oil and gas industries?
           Indicates a higher probability that the area is impacted or results in a
           potential increase in background variability.

  20.  Is there any reason to expect that the site may be contaminated with radioactive
       material (other than previously listed)?
           See Section 3.6.3.

Appendix G of this document provides a general listing and cross-reference of information
sources, with a brief description of the information contained in each source. The Site
Assessment Information Directory (EPA 1991e) contains a detailed compilation of data sources,
including names, addresses, and telephone numbers of agencies that can provide HSA
information.

3.4.1   Existing Radiation Data

Site files, monitoring data, former site evaluation data, Federal, State, or local investigations, or
emergency actions may be sources of useful site information.  Existing site data may provide
specific details about the identity, concentration, and areal distribution of contamination.
However, these data should be examined carefully because:

•      Previous survey and sampling efforts may not be compatible with HSA objectives or may
       not be extensive enough to characterize the facility or site fully.

•      Measurement protocols and standards may not be known or compatible with HSA
       objectives (e.g., Quality Assurance/Quality Control (QA/QC) procedures, limited analysis
       rather than full-spectrum analysis) or may not be extensive enough to characterize the
       facility or  site fully.

•      Conditions may have changed since the site was last sampled (i.e., substances may have
       been released, migration may have spread the contamination, additional waste disposal
       may have occurred, or decontamination may have been performed).

Existing data can be evaluated using the Data Quality Assessment (DQA) process described in
Appendix E. (Also see DOE 1987 and EPA 1980c,  1992a, 1992b, 1996a for additional guidance
on evaluating data.)

3.4.1.1  Licenses,  Site Permits, and Authorizations

The facility or site radioactive materials license and  supporting or associated documents are
potential sources of information for licensed facilities. If a license does not exist, there may be a
permit or other document that authorized site operations involving radioactivity. These
documents may specify the quantities of radioactive material authorized for use at the site, the
chemical and physical form of the materials, operations for which the materials are (or were)
used, locations of these operations at the facility or site, and total quantities of material  used at
the site during its  operating lifetime.

EPA and State agencies maintain files on a variety of environmental programs.  These files may
contain permit applications and monitoring results with information on specific waste types and
quantities, sources, type of site operations, and operating status of the facility or site. Some of
these information sources are listed in Appendix G (e.g., Comprehensive Environmental
Response, Compensation,  and Liability Information System (CERCLIS), Resource Conservation
and Recovery Information System (RCRIS), Ocean Data Evaluation System (ODES)).

3.4.1.2  Operating Records

Records and other information sources useful for site evaluations include those describing onsite
activities; current and past contamination control procedures; and past operations involving
demolition, effluent releases, discharge to sewers or onsite septic systems, production of
residues, land filling, waste and material storage, pipe and tank leaks, spills and accidental
releases, release of facilities or equipment from radiological controls, and onsite or offsite
radioactive and hazardous waste disposal. Some records may be, or may have been, classified for
national security purposes, so a means of reviewing all pertinent records should be established. Past
operations should be summarized in chronological order along with information indicating the
type of permits and approvals that authorized these operations. Estimates of the total activity
disposed of or released at the site and the physical and chemical form of the radioactive material
should also be included. Records on waste disposal, environmental monitoring, site  inspection
reports, license applications, operational permits, waste disposal material balance and inventory
sheets, and purchase orders for radioactive materials are useful for estimating total activity.
Information on accidents, such as fires, flooding, spills, unintentional releases, or leakage, should
be collected as potential sources of contamination.  Possible areas of localized contamination
should be identified.

Site plats or plots, blueprints, drawings, and sketches of structures are especially useful to
illustrate the location and layout of buildings on the site.  Site photographs, aerial surveys, and
maps can help verify the accuracy of these drawings or indicate changes following the time when
the drawings  were prepared. Processing locations—plus waste streams to and from the site as
well as the presence of stockpiles of raw materials and finished product—should be noted on
these photographs and maps. Buildings or outdoor processing areas may have been modified or
reconfigured  such that former processing areas were converted to other uses or configurations.
The locations of sewers, pipelines, electric lines, water lines, etc., should also be identified.  This
information facilitates planning the Site Reconnaissance and subsequent surveys, developing a
site conceptual model, and increasing the efficiency of the survey program.

Corporate contract files may also provide useful information during subsequent stages of the
Radiation Survey and Site Investigation Process.  Older facilities may not have complete
operational records, especially for obsolete  or discontinued processes. Financial records may
also provide information on purchasing and shipping that in turn help to reconstruct a site's
operational history.

While operating records can be useful tools during the HSA, the investigator should be careful
not to place too much emphasis on this type of data. These records are often incomplete and lack
information on substances previously not considered hazardous.  Out-of-date blueprints and
drawings may not show modifications made during the lifetime of a facility.

3.4.2   Contacts and Interviews

Interviews with current or previous employees are performed to collect first-hand information
about the site or facility and to verify or clarify information gathered from existing records.
These interviews are generally conducted early in the data-gathering process and cover general
topics, such as radioactive waste
handling procedures.  Results of early interviews are used to guide subsequent data collection
activities.

Interviews scheduled late in the data-gathering process may be especially useful. This activity
allows questions to be directed to specific areas of the investigation that need additional
information or clarification. Photographs and sketches can be used to assist the interviewer and
allow the interviewees to recall information of interest.  Conducting interviews onsite where the
employees performed their  tasks often stimulates memories and facilitates information gathering.
In addition to interviewing managers, engineers, and facility workers, interviews may be
conducted with laborers and truck drivers to obtain information from their perspective. The
investigator should be cautious in the use of interview information. Whenever possible,
anecdotal evidence should be assessed for accuracy and results of interviews should be backed up
with supporting data.  Steps that ensure specific information is properly recorded may include
hiring trained investigators  and taking affidavits.
3.5    Site Reconnaissance

The objective of the Site Reconnaissance or Site Visit is to gather sufficient information to
support a decision regarding further action.  Reconnaissance activity is not a risk assessment, a
scoping survey, or a study of the full extent of contamination at a facility or site.  The
reconnaissance offers an opportunity to record information concerning hazardous site conditions
as they apply to conducting future survey work.  In this regard, information describing physical
hazards, structural integrity of buildings, or other conditions defines potential problems that may
impede future work.  This section is most applicable to sites with less available information and
may not be necessary at other sites having greater amounts of data, such as Nuclear Regulatory
Commission  (NRC) licensed facilities.

To prepare for the Site Reconnaissance, begin by reviewing what is known about the facility or
site and identify data gaps.  Given the site-specific conditions, consider whether or not a Site
Reconnaissance is necessary and practical. This type of effort may be deemed necessary if a site
is abandoned, not easily observed from areas of public access, or discloses little information
during file searches.  These same circumstances may also make a Site Reconnaissance risky for
health and safety reasons—in view of the many unknowns—and may make entry difficult. This
investigative  step may be practical, but less critical, for active facilities whose operators grant
access and provide requested information. Remember to arrange for proper site access and
prepare an appropriate health and safety plan, if required, before initiating the Site
Reconnaissance.

Investigators should acquire signed consent forms from the site or equipment owner to gain
access to the property to conduct the reconnaissance. Investigators are to determine if State and
Federal officials, and local individuals, should be notified of the reconnaissance schedule.  If
needed, local officials should arrange for public notification. Guidance on obtaining access to
sites can be found in Entry and Continued Access Under CERCLA (EPA 1987d).

A study plan should be prepared before the Site Reconnaissance to anticipate every
reconnaissance activity and identify specific information to be gathered. This plan should
incorporate a survey of the site's surroundings and provide details for activities that verify  or
identify the location of: nearby residents, worker populations, drinking water or irrigation wells,
foods, and other site environs information.

Preparing for the Site Reconnaissance includes initially gathering necessary materials and
equipment.  This includes a camera to document site conditions, health and safety monitoring
instruments including a radiation detection meter for use during the site visit, and extra copies of
topographic maps to mark target locations, water distribution areas, and other important site
features. A logbook is critical to keeping a record of field activities and observations as they
occur. For documentation purposes MARSSIM recommends that the logbook be completed in
waterproof ink, preferably by one individual. Furthermore, each page of the logbook should be
signed and dated, including the time of day, after the last entry  on the page. Corrections should
be documented and approved.
3.6    Evaluation of Historical Site Assessment Data

The main purpose of the Historical Site Assessment (HSA) is to determine the current status of
the site or facility, but the data collected may also be used to differentiate sites that need further
action from those that pose little or no threat to human health and the environment.  This
screening process can serve to provide a site disposition recommendation or to recommend
additional surveys. Because much of the data collected during HSA activities is qualitative or is
analytical data of unknown quality, many decisions regarding a site are the result of professional
judgment.

There are three possible recommendations that follow the HSA:

•      An  emergency action to reduce the risk to human  health and the environment—this
       alternative is applicable to Superfund removal actions, which are discussed in detail by
        EPA (EPA 1988c).

•      The site or area is impacted and further investigation is needed before a decision
       regarding final disposition can be made. The area may be Class 1, Class 2, or Class 3,
       and a scoping survey or a characterization survey should be performed. Information
       collected during the HSA can be very useful in planning these subsequent survey
       activities.

•      The site or area is non-impacted. There is no possibility or an extremely low probability
       of residual radioactive materials being present at the site.  The site or area can be released.

Historical analytical data indicating the presence of contamination in environmental media
(surface soil, subsurface soil, surface water, ground water, air, or buildings) can be used to
support the hypothesis that radioactive material was released at the facility or site.  A decision
that the site is contaminated can be made regardless of the quality of the data, its attribution to
site operations, or its relationship to background levels. In such cases, analytical indications are
sufficient to support the hypothesis—it is not necessary to definitively demonstrate that a
problem exists. Conversely, historical analytical data can also be used to support the hypothesis
that no release has occurred. However, these data should not be the sole basis for this
hypothesis.  Using historical analytical data as the principal reason for ruling out the occurrence
of contamination forces the data to demonstrate that a problem does not exist.

In most cases it is assumed there will be some level of process knowledge available in addition to
historical analytical data. If process knowledge suggests that no residual contamination should
be present and the historical analytical data also suggests that no residual contamination is
present, the process knowledge provides an additional  level of confidence and supports
classifying the area as non-impacted.  However, if process knowledge suggests no residual
contamination should be present but the historical analytical data indicate the presence of
residual contamination, the area will probably be considered impacted.
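
The classification logic just described can be summarized in a brief sketch. The following
Python fragment is illustrative only: the function and argument names are hypothetical, and the
final call on classification remains a matter of professional judgment and regulatory review.

    def initial_hsa_classification(process_knowledge_clean, historical_data_clean):
        """Illustrative sketch of the reasoning described above.

        process_knowledge_clean -- True if process knowledge suggests no residual
                                   contamination should be present
        historical_data_clean   -- True if historical analytical data show no
                                   residual contamination
        """
        if process_knowledge_clean and historical_data_clean:
            # Two independent lines of evidence support a non-impacted classification.
            return "non-impacted"
        # Analytical indications of contamination, or process knowledge suggesting a
        # release, are sufficient to treat the area as impacted.
        return "impacted"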

The following sections describe the information recommended for assessing the status of a  site.
This information is needed to accurately and completely support a site disposition
recommendation. If some of the information is not available, it should be identified as a data
need for future surveys.  Data needs are collected during Step 3 of the Data Quality Objective
(DQO) process (Identify Inputs to the Decision) as  described in Appendix D, Section D.3.
Section 3.6.5 provides information on professional judgment and how it may be applied to the
decision making process.

3.6.1   Identify Potential Contaminants

An efficient HSA gathers information sufficient to  identify the radionuclides used at the
site—including their chemical and physical form. The first step in evaluating HSA data is to
estimate the potential for residual contamination by these radionuclides.

Site operations greatly influence the potential for residual contamination (NRC 1992a).  An
operation that only handled encapsulated sources is expected to have a low potential for
contamination—assuming that the integrity of the sources was not compromised. A review of
leak-test records for such sources may be adequate to demonstrate the low probability of residual
contamination. A chemical manufacturing process facility would likely have contaminated
piping, ductwork, and process areas, with a potential for soil contamination where spills,
discharges, or leaks occurred. Sites using large quantities of radioactive ores—especially those
with outside waste collection and treatment systems—are likely to have contaminated grounds.
If loose dispersible materials were stored outside or process ventilation systems were poorly
controlled, then windblown surface contamination may be possible.

Consider how long the site was operational. If enough time elapsed since the site discontinued
operations, radionuclides with short half-lives may no longer be present in significant quantities.
In this case, calculations demonstrating that residual activity could not exceed the DCGL may be
sufficient to evaluate the potential residual  contaminants at the site.  A similar consideration can
be made based on knowledge of a contaminant's chemical and physical form.  Such a
determination relies on records of radionuclide inventories, chemical and physical forms, total
amounts of activity in waste shipments, and purchasing records to document and support this
decision. However, a number of radionuclides experience significant decay product ingrowth,
which should be included when evaluating existing site information.
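
A minimal sketch of the kind of decay calculation mentioned above is shown below. The
radionuclide, inventory, elapsed time, and DCGL are assumed values for illustration; an actual
evaluation would use site records and the applicable DCGL, and would account for decay product
ingrowth where relevant.

    import math

    def decayed_activity(initial_activity, half_life_y, elapsed_y):
        """Simple decay: A(t) = A0 * exp(-ln(2) * t / T_half). Ingrowth is not modeled."""
        return initial_activity * math.exp(-math.log(2.0) * elapsed_y / half_life_y)

    # Assumed example: Co-60 (half-life about 5.27 y), 40 years after operations ceased.
    a_now = decayed_activity(initial_activity=100.0, half_life_y=5.27, elapsed_y=40.0)
    assumed_dcgl = 1.0  # same (assumed) units as the activity, e.g., Bq/g
    print(f"Residual activity: {a_now:.2f}; DCGL: {assumed_dcgl}")
    print("could not exceed the DCGL" if a_now < assumed_dcgl else "may exceed the DCGL")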

3.6.2  Identify Potentially Contaminated Areas

Information gathered during the HSA should be used to provide an initial classification of the site
areas as impacted or non-impacted.

Impacted areas have a reasonable potential for radioactive contamination (based on historical data)
or contain known radioactive contamination (based on past or preliminary radiological
surveillance).  This includes areas where 1) radioactive materials were used and  stored;
2) records indicate spills, discharges, or other unusual occurrences that could result in the spread
of contamination; and 3) radioactive materials were buried or disposed. Areas immediately
surrounding or adjacent to these locations are included in this classification because of the
potential for inadvertent spread of contamination.

Non-impacted areas—identified through knowledge of site history or previous survey
information—are those areas where there is no reasonable possibility for residual radioactive
contamination. The criteria used for this segregation need not be as strict as those used to
demonstrate final compliance with the regulations. However, the reasoning for classifying an
area as non-impacted should be maintained as a written record. Note that—based on
accumulated survey data—an impacted area's classification may change as the RSSI Process
progresses.

All potential sources of radioactivity in impacted areas should be identified and their dimensions
recorded (in 2 or 3 dimensions—to the extent they can be measured or estimated). Sources can
be delineated and characterized through visual inspection during the site reconnaissance,
interviews with knowledgeable personnel, and historical information concerning disposal
records, waste manifests, and waste sampling data.  The HSA should address potential
contamination from the site whether it is physically within or outside of site boundaries. This
approach describes the site in a larger context, but as noted in Chapter 1, MARSSIM's scope
concerns releasing a site and not areas outside a site's boundaries.

3.6.3   Identify Potentially Contaminated Media

The next step in evaluating the data gathered during the HSA is to identify potentially
contaminated media at the site. Identifying which media may contain residual contamination and
which do not supports both preliminary area classification (Section 4.4) and planning of
subsequent survey activities.

This section provides guidance on evaluating the likelihood for release of radioactivity into the
following environmental media: surface soil, subsurface soil, sediment, surface water, ground
water, air, and buildings. While MARSSIM's scope is focused on surface soils and building
surfaces, this section notes other media to provide a starting place to identify and
address all possible media.  The evaluation will result in either a finding of "Suspected
Contamination" or "No Suspected Contamination," which may be based on analytical data,
professional judgment, or a combination of the two.

Subsequent sections describe the environmental media and pose questions pertinent to each type.
Each question is accompanied by a commentary. Carefully consider the questions within the
context of the site and the available data. Avoid spending excessive amounts of time answering
each question because answers to every question are unlikely to be available at each  site.
Questions that cannot be answered based on existing data can be used to direct future surveys of
the site. Also, keep in mind the numerous differences in site-specific circumstances  and that the
questions do not identify every characteristic that might apply to a specific site. Additional
questions or characteristics identified during a specific site assessment should be  included in the
HSA report (Section 3.8; EPA  1991f).

3.6.3.1 Surface Soil

Surface soil is the top layer of soil on a site that is available for direct exposure, growing plants,
resuspension of particles for inhalation, and mixing from human disturbances.  Surface soil may
also be defined  as the thickness of soil that can be measured using direct measurement or
scanning techniques. Typically, this layer is represented as the top 15 cm (6 in.) of soil (40 CFR
192).  Surface sources may include gravel fill, waste piles, concrete, or asphalt paving. For many
sites where radioactive materials were used, one first assumes that surface contamination exists
and the evaluation is used to identify areas of high and low probability of contamination (Class 1,
Class 2 or Class 3 areas).

•      Were all radiation sources used at the site encapsulated sources?

A site where only encapsulated sources were used would be expected to have a low potential for
contamination.  A review of the leak-test records and documentation of encapsulated source
location may be adequate for a finding of "No Suspected Contamination."

•      Were radiation sources used only in specific areas of the site?

Evidence that radioactive materials were confined to certain areas of the site may be helpful in
determining which areas are impacted and which are non-impacted.

•      Was surface soil regraded or moved elsewhere for fill or construction purposes?

This helps to identify additional potential radiation sites.

3.6.3.2  Subsurface Soil and Media

Subsurface soil and media are defined as any solid materials not considered to be surface soil.
The purpose of these investigations is to locate and define the vertical extent of the potential
contamination.  Subsurface measurements can be expensive, especially for beta- or alpha-
emitting radionuclides. Removing areas  from consideration for subsurface measurements or
defining areas as non-impacted for subsurface sampling conserves limited resources and focuses
the site assessment on areas of concern.

•      Are there areas of known or suspected surface soil contamination?

Surface soil contamination can migrate deeper into the soil. Surface soil sources should be
evaluated based on radionuclide mobility, soil permeability, and infiltration rate to determine the
potential for subsurface contamination.  Computer modeling may be helpful for evaluating these
types of situations.
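
As one simple illustration of such an evaluation (a minimal sketch only; the infiltration rate,
moisture content, bulk density, and Kd below are assumed values, and a defensible assessment
would use site-specific data or a full transport model):

    def downward_migration_rate(infiltration_m_per_y, moisture_content,
                                bulk_density_g_cm3, kd_cm3_g):
        """Simplified 1-D advection estimate of vertical contaminant movement.

        Pore-water velocity:  v = infiltration rate / volumetric moisture content
        Retardation factor:   R = 1 + (bulk density / moisture content) * Kd
        Contaminant velocity: v / R
        """
        v_water = infiltration_m_per_y / moisture_content
        retardation = 1.0 + (bulk_density_g_cm3 / moisture_content) * kd_cm3_g
        return v_water / retardation

    # Assumed inputs: 0.3 m/y infiltration, 25% moisture, 1.6 g/cm3 bulk density, Kd = 50 cm3/g.
    rate = downward_migration_rate(0.3, 0.25, 1.6, 50.0)
    print(f"Estimated downward migration: about {rate * 100:.2f} cm per year")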

•      Is there  a ground-water plume without an identifiable source?

Contaminated ground water indicates that a source of contamination is present. If no source is
identified during the HSA, subsurface contamination is a probable source.

•      Is there potential for enhanced mobility of radionuclides in soils?

Radionuclide mobility can be enhanced by the presence of solvents or other volatile chemicals
that affect the ion-exchange capacity of soil.

•      Is there evidence that the surface has been disturbed?

Recent or previous excavation activities are obvious sources of surface disturbance. Areas with
developed plant life (forested or old growth areas) may indicate that the area remained
undisturbed during the operating life of the facility.  Areas where vegetation is removed during
previous excavation activity may be distinct from mature plant growth in adjacent areas.  If a site
is not purposely replanted, vegetation may appear in a sequence starting with grasses that are
later replaced by shrubs and trees.  Typically, grasslands recover within a few years, sagebrush or
low ground cover appears over decades, while mature forests may take centuries to develop.

•      Is there evidence of subsurface disturbance?

Non-intrusive, non-radiological measurement techniques may provide evidence of subsurface
disturbance. Magnetometer surveys can identify buried metallic objects, and ground-penetrating
radar can identify subsurface anomalies such as trenches or dump sites. Techniques involving
special equipment are discussed in Section 6.10.

•      Are surface structures present?

Structures constructed at a site—during the operational history of that site—may cover below-
ground contamination. Some consideration for contaminants that may exist beneath parking lots,
buildings, or other onsite structures may be warranted as part of the investigation.  There may be
underground piping, drains, sewers, or tanks that caused contamination.

3.6.3.3  Surface Water

Surface waters include streams and rivers, lakes, coastal tidal waters, and oceans.  Note that
certain ditches and intermittently flowing streams qualify as surface water.  The evaluation
determines whether radionuclides are likely to migrate to surface waters or their sediments.
Where a previous release is not suspected, the potential for future release depends on the distance
to surface water and the flood potential at the site. With regard to the two preceding sections,
one can also consider an interaction between soil and water in relation to seasonal factors
including soil cracking due to freezing, thawing, and desiccation that influence the dispersal or
infiltration of radionuclides.

•      Is surface water nearby?

The proximity of a contaminant to local surface water is essentially determined by runoff and
radionuclide migration through the soil. The definition for nearby depends on site-specific
conditions.  If the terrain is flat, precipitation is low, and soils are sandy, nearby may be within
several meters.  If annual precipitation is high or occasional rainfall events are high, within 1,200
meters (3/4 mile) might be considered nearby.  In general, sites need not include the surface
water pathway where the overland flow distance to the nearest surface water is more than 3,200
meters (2 miles).

•      Is the waste quantity particularly large?

Depending on the physical and chemical form of the waste and its location, large is a relative
term. A small quantity of liquid waste may be of more importance—i.e., a greater risk or
hazard—than a large quantity of solid waste stored in water tight containers.

•      Is the drainage area large?

The drainage area includes the area of the site itself plus the upgradient area that produces runoff
flowing over the site. Larger drainage areas generally produce more runoff and increase the
potential for surface water contamination.

•      Is rainfall heavy?

If the site and surrounding area are flat,  a combination of heavy precipitation and low infiltration
rate may cause rainwater to pool on the  site. Otherwise, these characteristics may contribute to
high runoff rates that carry radionuclides overland to surface water.  Total annual rainfall
exceeding one meter (40 inches), or a two-year, 24-hour precipitation event exceeding five cm
(two inches) might be considered "heavy."

Rainfall varies for locations across the continental United States from high (e.g., 89 in./y, Mt.
Washington, NH) to low values (e.g., 4.2 in./y, Las Vegas, NV). Precipitation rates will vary
during the year at each location due to seasonal and geographic factors. A median value for
rainfall within the United States, as found in van der Leeden et al. 1990, is about 26 in./y as is
observed for Minneapolis, MN.
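
The rule-of-thumb screening values quoted in this section can be gathered into a short checklist
sketch (illustrative only; the function name and inputs are hypothetical, and site-specific
judgment still governs the evaluation):

    def surface_water_screening(distance_to_surface_water_m, annual_rainfall_in,
                                two_yr_24hr_precip_cm):
        """Apply the rule-of-thumb values quoted in Section 3.6.3.3."""
        notes = []
        if distance_to_surface_water_m > 3200:
            notes.append("overland flow distance exceeds 2 miles: surface water pathway "
                         "generally need not be included")
        if annual_rainfall_in > 40 or two_yr_24hr_precip_cm > 5:
            notes.append("rainfall might be considered heavy")
        else:
            notes.append("rainfall not considered heavy")
        return notes

    # Assumed site: 500 m to the nearest stream, 48 in./y rainfall, 6 cm two-year 24-hour event.
    for note in surface_water_screening(500, 48, 6):
        print(note)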

•      Is the infiltration rate low?

Infiltration rates range from very high in gravelly and sandy soils to very low in fine silt and clay
soils. Paved sites prevent infiltration and generate runoff.

•      Are sources of contamination poorly contained or prone to runoff?

Proper containment which prevents radioactive material from migrating to surface water
generally uses engineered structures such as dikes, berms, run-on and runoff control systems, and
spill collection and removal systems. Sources prone to releases via runoff include leaks, spills,
exposed storage piles, or intentional disposal on the ground surface.  Sources not prone to runoff
include underground tanks, above-ground tanks, and containers stored in a building.

•      Is a runoff route well defined?

A well defined runoff route—along a gully, trench, berm, wall, etc.—will more likely contribute
to migration to surface water than a poorly defined route.  However, a poorly defined route may
contribute to dispersion of contamination to a larger area of surface soil.

•      Has deposition  of waste into surface water been observed?

Indications of this type of activity will appear in records from past practice at a site or from
information gathered during personal interviews.

•      Is ground water discharge to surface water probable?

The hydrogeology and  geographical information of the area around and inside the site may be
sufficiently documented to indicate discharge locations.

•      Does analytical or circumstantial evidence suggest surface water contamination?

Any condition considered suspicious—and that indicates a potential contamination
problem—can be considered circumstantial evidence.

•      Is the site prone to flooding?

The Federal Emergency Management Agency (FEMA) publishes flood insurance rate maps that
delineate 100-year and 500-year flood plains.  Ten-year floodplain maps may also be available.
Generally, a site on a 500-year floodplain is not considered prone to flooding.

3.6.3.4 Ground Water

Proper evaluation of ground water includes a general understanding of the local geology and
subsurface conditions.  Of particular interest is descriptive information relating to subsurface
stratigraphy, aquifers, and ground water use.

•      Are sources poorly contained?

Proper containment which prevents radioactive material from migrating to ground water
generally uses engineered structures such as liners, layers of low permeability soil (e.g., clay),
and leachate collection systems.

•      Is the source likely to contaminate ground water?

Underground tanks, landfills,2 surface impoundments and lagoons are examples of sources that
are likely to release contaminants that migrate to ground water. Above ground tanks, drummed
solid wastes,  or sources inside buildings are less likely to contribute to ground-water
contamination.

•      Is waste quantity particularly large?

Depending on the physical and chemical form of the waste and its location, large is a relative
term.  A small quantity of liquid waste may be of more importance—i.e., greater risk or
hazard—than a large quantity of solid waste stored in water tight containers.

•      Is precipitation heavy?

If the site and surrounding area are flat, a combination of heavy precipitation and low infiltration
rate may cause rainwater to pool on the site. Otherwise, these characteristics may contribute to
high runoff rates that carry radionuclides overland to surface water. Total annual rainfall
exceeding one meter (40 in.), or a two-year, 24-hour precipitation event exceeding five cm (two
in.) might be considered "heavy."

Rainfall varies for locations across the continental United States from high (e.g., 89 in./y, Mt.
Washington, NH) to low values (e.g., 4.2 in./y, Las Vegas,  NV).  Precipitation rates will vary
during the year at each location due to  seasonal and geographic factors.  A median value for
rainfall within the United States, as found in van der Leeden et al. 1990, is about 26 in./y as is
observed for Minneapolis, MN.

•      Is the infiltration rate high?

Infiltration rates range from very high in gravelly and sandy soils to very low in fine silt and clay
soils.  Unobstructed surface areas are potential candidates for further examination to determine
infiltration rates.
   2 Landfills can affect the geology and hydrogeology of a site and produce heterogeneous conditions. It may be
necessary to consult an expert on landfills and the conditions they generate.

•      Is the site located in an area of karst terrain?

In karst terrain, ground water moves rapidly through channels caused by dissolution of the rock
material (usually limestone) that facilitates migration of contaminants.

•      Is the subsurface highly permeable?

Highly permeable soils favor downward movement of water that may transport radioactive
materials. Well logs, local geologic literature, or interviews with knowledgeable individuals may
help answer this question.

•      What is the distance from the surface to an aquifer?

The shallower the source of ground water, the higher the threat of contamination. It is difficult to
determine whether an aquifer may be a potential source of drinking water in the future (e.g., next
1,000 years). This generally applies to the  shallowest aquifer below the site.

•      Are suspected contaminants highly  mobile  in ground water?

Mobility in ground water can be estimated based on the distribution  coefficient (Kd) of the
radionuclide. Elements with a high Kd, like thorium (e.g., Kd = 3,200 cm3/g), are not mobile
while elements with a low Kd, like hydrogen (e.g.,  Kd = 0 cm3/g), are very mobile.  The NRC
(NRC 1992b) and Department of Energy (DOE) (Yu et al. 1993) provide a compilation of Kd
values. These values can be influenced by  site-specific considerations such that site-specific Kd
values need to be evaluated or determined.  Also, the mobility of a radionuclide can be enhanced
by the presence of a solvent or volatile chemical.
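
As a minimal numeric illustration of how Kd translates into relative mobility (the bulk density
and moisture content below are assumed values; site-specific Kd values should be used in
practice):

    def retardation_factor(kd_cm3_g, bulk_density_g_cm3=1.6, moisture_content=0.3):
        """R = 1 + (bulk density / moisture content) * Kd; larger R means slower movement."""
        return 1.0 + (bulk_density_g_cm3 / moisture_content) * kd_cm3_g

    # Kd values quoted in the text: thorium about 3,200 cm3/g, hydrogen (tritiated water) about 0.
    for element, kd in [("thorium", 3200.0), ("hydrogen", 0.0)]:
        print(f"{element}: retardation factor R = {retardation_factor(kd):,.0f}")
    # Thorium is predicted to move many thousands of times more slowly than the ground
    # water itself, while tritiated water moves essentially with the ground water (R = 1).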

•      Does analytical or circumstantial evidence  suggest ground water contamination?

Evidence for contamination may appear in  current site  data;  historical, hydrogeological, and
geographical information systems records;  or as a result of personal  interviews.

3.6.3.5 Air

Evaluation of air is different from evaluation of other potentially contaminated media. Air is
rarely the source of contamination. Air is evaluated both as a pathway for resuspending and
dispersing radioactive contamination and as a contaminated medium.

•      Were there observations of contaminant releases into the air?

Direct observation of a release to the air might occur where radioactive materials are suspected to
be present in particulate form (e.g., mine tailings, waste pile) or adsorbed to particulates (e.g.,
contaminated soil), and where site conditions favor air transport (e.g., dry, dusty, windy).

•      Does analytical or circumstantial  evidence suggest a release to the air?

Other evidence for releases to the air might include areas of surface soil contamination that do
not appear to be caused by direct deposition or overland migration of radioactive material.

•      For radon exposure only, are there elevated amounts of radium (226Ra) in the soil or water
       that could act as a source of radon in the air?

The source,  226Ra, decays to 222Rn, which is radon gas.  Once radon is produced, the gas needs a
pathway to escape from its point of origin into the air. Radon is not particularly soluble in water,
so this gas is readily released from water sources which are open to air.  Soil, however, can retain
radon gas until it has decayed (see Section 6.9). The rate that radon is emitted by a solid, i.e.
radon flux, can be measured directly to evaluate potential sources of radon.
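
A minimal sketch of the ingrowth relationship behind this discussion appears below; the 226Ra
concentration is an assumed value, and a real evaluation would also consider the emanation
fraction and transport of the gas through soil.

    import math

    RN222_HALF_LIFE_D = 3.82  # half-life of 222Rn in days

    def supported_radon_activity(ra226_activity, elapsed_days):
        """222Rn activity grown in from a fixed 226Ra source:
        A_Rn(t) = A_Ra * (1 - exp(-lambda * t)); approaches secular equilibrium."""
        decay_const = math.log(2.0) / RN222_HALF_LIFE_D
        return ra226_activity * (1.0 - math.exp(-decay_const * elapsed_days))

    # Assumed 226Ra concentration of 1.0 Bq/g in soil.
    for days in (1, 4, 30):
        print(f"after {days:2d} d: supported 222Rn activity = "
              f"{supported_radon_activity(1.0, days):.2f} Bq/g")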

•      Is there a prevailing wind and a propensity for windblown transport of contamination?

Information pertaining to geography, ground cover (e.g., amount and types  of local vegetation),
meteorology (e.g., windspeed at 7 meters above ground level) for and around the site, plus site-
specific parameters related to surface soil characteristics enter into calculations used to describe
particulate transport. Mean annual windspeed can be obtained from the National Weather
Service surface station nearest to the site.

3.6.3.6 Structures

Structures used for storage, maintenance, or processing of radioactive materials are potentially
contaminated by these materials.  The questions presented in Table 3.1 help to determine if a
building might be potentially contaminated. The  questions listed in this section are for
identifying potentially contaminated structures, or portions of structures, that might not be
identified using Table 3.1. Section 4.8.3.1 also presents useful information on identifying
structural  contamination.

•      Were adjacent structures used for storage, maintenance, or processing of radioactive
       materials?

Adjacent is a relative term for this question. A processing facility with a potential for venting
radioactive material to the air could contaminate buildings downwind.  A facility with little
potential for release outside of the structures handling the material would be less likely to
contaminate nearby structures.

•      Is a building, an addition to a building, or a new structure located on a former
        radioactive waste burial site or on contaminated land?

Comparing past and present photographs or site maps and retrieving building permits or other
structural drawings and records in relation to historical operations information will reveal site
locations where structures may have been built over buried waste or contaminated land.

•      Was the building constructed using contaminated material?

Building materials such as concrete, brick, or cinder block may have been formed using
contaminated material.

•      Does the potentially non-impacted portion of the building share a drainage system or
       ventilation system with a potentially contaminated area?

Technical and architectural drawings for site structures along with visual inspections are required
to determine if this is a concern in terms of current or past operations.

•      Is there evidence that previously identified areas of contamination were remediated by
       painting or similar methods of immobilizing contaminants?

Removable sources of contamination immobilized by painting may  be more difficult to locate,
and may need special consideration when planning subsequent surveys.

3.6.4   Develop a Conceptual Model of the Site

Starting with project planning activities, one gathers and analyzes available information to
develop a conceptual site model. The model is essentially a site diagram showing locations of
known contamination, areas of suspected contamination, types and concentrations of
radionuclides in impacted areas, potentially contaminated media, and locations of potential
reference (background) areas. The diagram should include the general layout of the  site
including buildings and property boundaries. When possible, produce three dimensional
diagrams. The conceptual site model will be upgraded and modified as information  becomes
available throughout the RSSI Process. The process of developing this model is also briefly
described in Attachment A of EPA 1996b.

The model is used to assess the nature and the extent of contamination, to identify potential
contaminant sources, release mechanisms, exposure pathways, human and/or environmental
receptors, and to develop exposure scenarios. Further, this model helps to identify data gaps,
determine media to be sampled, and assists staff in developing strategies for data collection. Site
history and preliminary  survey data generally are extremely useful sources of information for
developing this model.  The conceptual site model should include known and suspected sources
of contamination and the types of contaminants and affected media.  Such a model can also
illustrate known and potential routes of migration and known or potential human and
environmental receptors.
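
One convenient way to keep this information organized during planning is a small data structure.
The sketch below is illustrative only; the class and field names are hypothetical and are not a
MARSSIM requirement.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SiteArea:
        name: str                                   # e.g., "Area B: Processing"
        classification: str                         # "impacted" or "non-impacted"
        suspected_radionuclides: List[str] = field(default_factory=list)
        contaminated_media: List[str] = field(default_factory=list)
        notes: str = ""                             # migration routes, receptors, data gaps

    @dataclass
    class ConceptualSiteModel:
        site_name: str
        areas: List[SiteArea] = field(default_factory=list)
        reference_areas: List[str] = field(default_factory=list)

    # Assumed entry for the kind of hypothetical site shown in Figure 3.1.
    model = ConceptualSiteModel(
        site_name="Hypothetical Site",
        areas=[SiteArea("Area B: Processing", "impacted",
                        suspected_radionuclides=["Co-60", "Cs-137"],
                        contaminated_media=["building surfaces", "surface soil"],
                        notes="spill records; areas exceeding the DCGL are likely")],
        reference_areas=["undisturbed soil area north of the site boundary"],
    )
    print(model.areas[0].name, "->", model.areas[0].classification)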

The site should be classified or initially divided into similar areas. Classification may be based
on the operational  history of the site or observations made during the Site Reconnaissance (see
Section 3.5). After the site is classified using current and past site characteristics, further
divide the site or facility based on anticipated future use. This classification can help to a) assign
limited resources to areas that are anticipated to be released without restrictions, and b) identify
areas with little or no possibility of unrestricted release. Figure 3.1 shows an example of how a
site might be classified in this manner.  Further classification of a site may be possible based on
site disposition recommendations (unrestricted vs. release with passive controls).

3.6.5  Professional Judgment

In some cases, traditional sources of information, data, models, or scientific principles are
unavailable, unreliable,  conflicting, or too costly or time consuming to obtain. In these instances
professional judgment may be the only practical tool available to the investigator. Professional
judgment is the expression of opinion, documented in written form and based on technical
knowledge and professional experience, assumptions, algorithms, and definitions, as stated by an
expert in response to technical problems (NRC  1990). For general applications, this type of
judgment is a routine part of scientific investigation where knowledge is incomplete.
Professional judgment can be used as an independent review of historical data to support
decision making during the HSA. Professional judgment should only be used in situations where
data are not reasonably obtainable by collection or experimentation.

The process of recruiting professionals should be documented and as unbiased as possible. The
credentials of the selected individual or individuals enhance the  credibility of the elicitation, and
the ability to  communicate their reasoning is a primary determinant of the quality of the results.
Qualified professionals  can be identified by different sources, including the planning team,
professional organizations, government agencies, universities, consulting firms,  and public
interest groups. The selection criteria for the professionals should include potential conflict of
interest (economic or personal), evidence of expertise in a required topic, objectiveness, and
availability.

[Figure 3.1  Example Showing how a Site Might be Classified Prior to Cleanup Based on the
Historical Site Assessment.  The figure shows a hypothetical site divided by use into Area A
(Production), Area B (Processing), Area C (Storage & Disposal), and Area D (Administration:
(a) Office, (b) Lab).  After further classification based on the HSA: Area A is impacted (site
history shows areas exceeding the DCGL are not likely); Area B is impacted (site history shows
areas exceeding the DCGL are likely); Area C is impacted (potentially restricted access;
Radioactive Waste Management Unit); Area D subarea (a) is non-impacted and subarea (b) is
impacted.]

3.7    Determining the Next Step in the Site Investigation Process

As stated in Section 1.1, the purpose of this manual is to describe a process-oriented approach for
demonstrating compliance with the release criterion for residual radioactivity. The highest
probability of demonstrating compliance can be obtained by sequentially following each step in
the RSSI Process. In some cases, however, performing each step in the process is not practical or
necessary.  This section provides guidance on how the results of the HSA can be used to
determine the next step in the process.

The best method for determining the next step is to review the purpose for each type of survey
described in Chapter 5. For example, a scoping survey is performed to provide sufficient
information for 1) determining whether present contamination warrants further evaluation and
2) making initial estimates of the level of effort for decontamination and preparing a plan for a more
detailed survey.  If the HSA demonstrates that this information is already available, do not
perform a scoping survey. On the other hand, if the information obtained during the HSA is
limited, a scoping survey may be necessary to narrow the scope of the characterization survey.
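
The reasoning in this paragraph can be condensed into a short sketch (illustrative only; the
function and argument names are hypothetical, and the determination is made with the responsible
regulatory agency):

    def next_step_after_hsa(hsa_provides_scoping_information, hsa_information_is_limited):
        """Sketch of the scoping-survey decision described above."""
        if hsa_provides_scoping_information:
            # The HSA already answers what a scoping survey would provide.
            return "skip the scoping survey; plan the characterization survey"
        if hsa_information_is_limited:
            return "perform a scoping survey to narrow the scope of the characterization survey"
        return "review the survey purposes in Chapter 5 to select the next step"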

The exception to conducting additional surveys before a final status survey is the use of HSA
results to release a site. Generally, the  analytical data collected during the HSA are not adequate
to statistically demonstrate compliance for impacted areas as described in Chapter 8. This means
that the decision to release the site will be based on professional judgment. This determination
will ultimately be decided by the responsible regulatory agency.
3.8    Historical Site Assessment Report

A narrative report is generally a useful product for an HSA.  Use this report to summarize what is
known about the site, what is assumed or inferred, activities conducted during the HSA, and all
researched information.  Cite a supporting reference for each factual statement given in the
report. Attach copies of references (i.e., those not generally available to the public) to the report.
The narrative portion of the report should be written in plain English and avoid the use of
technical terminology.

To encourage consistency in the content of HSA narratives,  both the structure and content of
each report should follow the outline shown in Figure 3.2. Additional information not identified
in the outline may be requested by the regulatory agency at its discretion.  The level of effort to
produce the report should reflect the amount of information  gathered during the HSA.
3.9    Review of the HSA

The planning team should ensure that someone (a first reviewer) conducts a detailed review of
the HSA report for internal consistency and as a quality-control mechanism. A second reviewer
with considerable site assessment experience should then examine the entire information package
to assure consistency and to provide an independent evaluation of the HSA conclusions. The
second reviewer also evaluates the package to determine if special circumstances exist where
radioactivity may be present but not identified in the HSA.  Both the first reviewer and a second
independent reviewer should examine the HSA written products to ensure internal consistency in
the report's information, summarized data, and conclusions.  The site review ensures that the
HSA's recommendations are appropriate.

An important quality assurance objective is to find and correct errors.  A significant
inconsistency indicating either an error or a flawed conclusion, if undetected, could contribute to
an inappropriate recommendation. Identifying such a discrepancy directs the HSA investigator
and site reviewers to reexamine and resolve the apparent conflict.

Under some circumstances, experienced investigators may have differing interpretations of site
conditions and draw differing conclusions or hypotheses regarding the likelihood of
contamination.  Any such differences should be resolved during the review. If a reviewer's
interpretations contradict those of the HSA investigator, the two should discuss the situation and
reach a consensus. This aspect of the review identifies significant points about the site
evaluation that may need detailed explanation in the HSA narrative report to fully support the
conclusions. Throughout the review, the HSA investigator and site reviewers should keep in
mind the need for conservative judgments in the absence of definitive proof to avoid
underestimating the presence of contamination, which could lead to an inappropriate HSA
recommendation.
  1.       Glossary of Terms, Acronyms and Abbreviations

  2.       Executive Summary

  3.       Purpose of the Historical Site Assessment

  4.       Property Identification
          4.1     Physical Characteristics
                 4.1.1    Name - CERCLIS ID# (if applicable), owner/operator name, address
                 4.1.2    Location - street address, city, county, state, geographic coordinates
                 4.1.3    Topography - USGS 7.5 minute quadrangle or equivalent
                 4.1.4    Stratigraphy
          4.2     Environmental Setting
                 4.2.1    Geology
                 4.2.2    Hydrogeology
                 4.2.3    Hydrology
                 4.2.4    Meteorology

  5.       Historical Site Assessment Methodology
          5.1     Approach and Rationale
          5.2     Boundaries of Site
          5.3     Documents Reviewed
          5.4     Property Inspections
          5.5     Personal Interviews

  6.       History and Current Usage
          6.1     History - years of operation, type of facility, description of operations, regulatory involvement;
                 permits & licenses, waste handling procedures
          6.2     Current Usage - type of facility, description of operations, probable source types and sizes,
                 description of spills or releases, waste manifests, radionuclide inventories, emergency or
                 removal actions
          6.3     Adjacent Land Usage - sensitive areas such as wetlands or preschools

  7.       Findings
          7.1     Potential Contaminants
          7.2     Potential Contaminated Areas
                 7.2.1    Impacted Areas—known and potential
                 7.2.2    Non-Impacted Areas
          7.3     Potential Contaminated Media
          7.4     Related Environmental Concerns

  8.       Conclusions

  9.       References

  10.      Appendices
          A.      Conceptual Model and Site Diagram showing Classifications
          B.      List of Documents
          C.      Photodocumentation Log
                  Original photographs of the site and pertinent site features
              Figure 3.2  Example of a Historical Site Assessment Report Format


               4 PRELIMINARY SURVEY CONSIDERATIONS
4.1    Introduction

This chapter assists the MARSSIM user in designing a survey plan by presenting areas of
consideration common to radiation surveys and site investigations in support of
decommissioning.  The topics discussed here should be addressed during the planning stages of
each survey. Figure 4.1 illustrates the sequence of preliminary activities described in this chapter
and their relationship to the survey design process.

Conducting radiological surveys in support of decommissioning serves to answer several basic
questions, including:

•      Is there residual radioactive contamination present from previous uses?
•      What is the character (qualitative and quantitative) of the residual activity?
•      Is the average residual activity level below the established derived concentration
       guideline level?
•      Are there small localized areas of residual activity in excess of the investigation level?

The survey methods used to evaluate radiological conditions and develop answers to these
questions depend on a number of factors including: contaminants, contaminant distribution,
acceptable contaminant levels established by the regulatory agency, future site use, and physical
characteristics of the site.
4.2    Decommissioning Criteria

The decommissioning process assures that residual radioactivity will not result in individuals
being exposed to unacceptable levels of radiation or radioactive materials.  Regulatory agencies
establish radiation dose standards based on risk considerations and scientific data relating dose to
risk. Residual levels of radioactive material that correspond to allowable radiation dose
standards are calculated (derived) by analysis of various pathways and scenarios (direct radiation,
inhalation, ingestion, etc.) through which exposures could occur. These derived levels, known as
derived concentration guideline levels (DCGLs), are presented in terms of surface or mass
activity concentrations. DCGLs usually refer to average levels of radiation or radioactivity above
appropriate background levels. DCGLs applicable to building or other structural and
miscellaneous surfaces are expressed in units of activity per surface area (typically Bq/m2 or
dpm/100 cm2).  When applied to soil and induced activity from neutron irradiation, DCGLs are
expressed in units of activity per unit of mass (typically Bq/kg or pCi/g).
[Figure 4.1 is a flowchart: identify contaminants and establish DCGLs (Section 4.3); classify areas
by contamination potential (Section 4.4); if the contaminant is present in background, select
background reference areas (Section 4.5); group/separate areas into survey units (Section 4.6);
prepare site for survey access (Section 4.8); establish the survey location reference system
(Section 4.8.5); and design the survey (Chapter 5).]

           Figure 4.1 Sequence of Preliminary Activities Leading to Survey Design



The DCGLW, based on pathway modeling, is the uniform residual radioactivity concentration
level within a survey unit that corresponds to the release criterion (e.g., regulatory limit in terms
of dose or risk). Note that for the majority of MARSSIM users, the DCGL will simply be
obtained using regulatory agency guidance based on default parameters—other users may elect to
perform site-specific pathway modeling to determine DCGLs.  In both cases, the DCGL is based
on the spatial distribution of the contaminant, and each derivation can produce different values
depending on the specific radionuclide distribution and pathway modeling.

In addition to the numerical DCGLs, criteria include conditions for implementing those guideline
levels. Conditions applicable to satisfying decommissioning objectives described in Chapter 5
are as follows:

•      The uniform residual contamination above background is below the DCGLW.

•      Individual measurements or samples, representing small areas of residual radioactivity, do
       not exceed the DCGLEMC for areas of elevated residual radioactivity. These small areas of
       residual radioactivity may exceed the DCGLW established for average residual
       radioactivity levels in a survey unit, provided these areas of residual radioactivity satisfy
       the criteria of the responsible regulatory agency.

The manner in which a DCGL is applied should be clearly documented in the survey plans and
reports.
4.3    Identify Contaminants and Establish DCGLs

Some objectives of the scoping and characterization surveys, as discussed in Chapter 5, include
identifying site contaminants, determining relative ratios of contaminants, and establishing
DCGLs and conditions for the contaminants which satisfy the requirements of the responsible
agency. Identification of potential radionuclide contaminants at the site is generally performed
through laboratory analyses, such as alpha and gamma spectrometry. These analyses are used to
determine the relative ratios of the identified contaminants, as well as isotopic ratios for common
contaminants like uranium and thorium. This information is essential in establishing and
applying the DCGLs for the site.  DCGLs provide the goal for essentially all aspects of
designing, implementing, and evaluating the final status survey. The DCGLs discussed in this
manual are limited to structure surfaces and soil contamination; the user should consult the
responsible regulatory agency if it is necessary to establish DCGLs for other environmental
media (e.g., ground water, and other water pathways). This section contains information
regarding the selection and application of DCGLs.


The development of DCGLs is often an iterative process, where the DCGLs selected or
developed early in the Radiation Survey and Site Investigation (RSSI) Process are modified as
additional site-specific information is obtained from subsequent surveys.  One example of the
iterative nature of DCGLs is the development of final cleanup levels in EPA's Superfund
program. Soil Screening Levels1 (SSLs; EPA 1996b, EPA 1996c) are selected or developed at a
point early in the process, usually corresponding to the scoping survey in MARSSIM.  An SSL
can be further developed, based on site-specific information, to become a preliminary
remediation goal (PRG; EPA 1991h), usually at a point corresponding to the characterization
survey.  If the PRG is found to be acceptable during the characterization survey, it is documented
as the final cleanup level in the Record of Decision (ROD) for the site. The ROD is typically in
place prior to any remedial action, because the remedy is also documented in the ROD.
Additional information on the Superfund program can be found in Appendix F.

4.3.1   Direct Application of DCGLs

In the simplest case, the DCGLs may be applied directly to survey data to demonstrate
compliance.  This involves assessing the surface activity levels and volumetric concentrations of
radionuclides and comparing measured values to the appropriate DCGL. For example, consider
a site that used only one radionuclide, such as 90Sr, throughout its operational lifetime.  The
default DCGL for 90Sr on building surfaces and in soil may be obtained from the responsible
agency.  Survey measurements and samples are then compared to the surface and volume activity
concentration DCGLs for 90Sr directly to demonstrate compliance.  While seemingly
straightforward, this approach is not always possible (e.g., when more than one radionuclide is
present).

4.3.2   DCGLs and the Use of Surrogate Measurements

For sites with multiple contaminants, it may be possible to measure just one of the contaminants
and still demonstrate compliance for all of the contaminants present through the use of surrogate
measurements.  Both time and resources can be saved if the analysis of one radionuclide is
simpler than the analysis of the other. For example, using the measured  137Cs concentration as a
surrogate for  90Sr reduces the analytical costs because wet chemistry separations do not have to
be performed for 90Sr on every sample. In using one radionuclide to measure the presence of
others, a sufficient number of measurements, spatially separated throughout the survey unit,
should be made to establish a "consistent" ratio. The number of measurements needed to
determine the ratio is selected using the Data Quality Objectives (DQO)  Process and based on the
chemical, physical, and radiological characteristics of the nuclides and the site.  If consistent
   1  Soil Screening Levels are currently available for chemical contaminants and are not designed for use at sites
with radioactive contamination.



radionuclide ratios cannot be determined during the Historical Site Assessment (HSA) based on
existing information, MARSSIM recommends that one of the objectives of scoping or
characterization be a determination of the ratios rather than attempting to determine ratios based
on the final status survey. If the ratios are determined using final status survey data, MARSSIM
recommends that at least 10% of the measurements (both direct measurements and samples)
include analyses for all radionuclides of concern.

In the use of surrogates, it is often difficult to establish a "consistent" ratio between two or more
radionuclides. Rather than follow prescriptive guidance on acceptable levels of variability for the
surrogate ratio, a more reasonable approach may be to review the data collected to establish the
ratio and to use the DQO process to select an appropriate ratio from that data. An example is
provided to illustrate the application of surrogate measurements.

Ten soil samples within the survey unit were collected and analyzed for 137Cs and 90Sr to
establish a surrogate ratio. The ratios of 90Sr to 137Cs were as follows: 6.6,  5.7, 4.2, 7.9, 3.0, 3.8,
4.1, 4.6, 2.4, and 3.3.  An assessment of this example data set results in an  average 90Sr to 137Cs
surrogate ratio of 4.6, with a standard deviation of 1.7. There are various approaches that may be
used to develop a surrogate ratio from this data—but each must consider the variability and level
of uncertainty in the data. One may consider the variability in the surrogate ratio by selecting the
95% upper bound of the surrogate ratio (to yield a conservative value of 90Sr from the measured
137Cs), which is 8.0 in this case. Similarly, one may select the most conservative value from the
data set (7.9).  The DQO process should be used to assess the use of surrogates.  The benefit of
using the surrogate approach is the reduced cost of not having to perform costly wet chemistry
analyses on each sample. This benefit should  be considered relative to the difficulty in
establishing the surrogate ratio, as well  as the potential consequence of unnecessary
investigations that result from the error in using a "conservative" surrogate ratio.  Selecting a
conservative surrogate ratio ensures that potential exposures from individual radionuclides are
not underestimated. The surrogate method can only be used with confidence when dealing with
the same media in the same surroundings—for example, soil samples with similar physical and
geological characteristics. The MARSSIM user will need to consult with the responsible
regulatory agency for concurrence on the approach used to determine the surrogate ratio.
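
For illustration only, the following Python sketch (not part of MARSSIM) reproduces the summary
statistics for the example data set above; approximating the 95% upper bound as the mean plus roughly
two standard deviations is an assumption made here because it reproduces the value of 8.0 quoted in
the text.

    # Illustrative sketch only: summary statistics for the example 90Sr/137Cs surrogate-ratio data.
    import statistics

    ratios = [6.6, 5.7, 4.2, 7.9, 3.0, 3.8, 4.1, 4.6, 2.4, 3.3]   # 90Sr/137Cs from the ten samples

    mean = statistics.mean(ratios)               # about 4.6
    stdev = statistics.stdev(ratios)             # about 1.7 (sample standard deviation)
    upper_bound = mean + 2 * stdev               # about 8.0, a conservative choice of ratio
    most_conservative = max(ratios)              # 7.9, the largest observed ratio

    print(f"mean = {mean:.1f}, standard deviation = {stdev:.1f}")
    print(f"approximate 95% upper bound = {upper_bound:.1f}")
    print(f"most conservative observed ratio = {most_conservative:.1f}")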

Once an appropriate surrogate ratio is determined, one needs to consider how compliance will be
demonstrated using surrogate measurements.  That is, the user must modify the DCGL of the
measured radionuclide to account for the inferred radionuclide. Continuing with the above
example, the modified DCGL for 137Cs must be reduced according to the following equation:
                                                     DCGLSr
              DCGLCs,mod  =  DCGLCs  x  -------------------------------------               4-1
                                          DCGLSr + (CSr/CCs) x DCGLCs
where CSr/CCs is the surrogate ratio of 90Sr to 137Cs.



Assuming that the DCGLSr is 15 Bq/kg, the DCGLCs is 10 Bq/kg, and the surrogate ratio is 8 (as
derived previously), the modified DCGL for 137Cs (DCGLCs,mod) can be calculated using
Equation 4-1:

                     DCGLCs,mod  =  10  x  15 / [15 + (8)(10)]  =  1.6 Bq/kg
This modified DCGL is then used for survey design purposes described in Chapter 5.
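
A minimal Python sketch of the Equation 4-1 adjustment is shown below; the function and argument
names are illustrative and are not defined in MARSSIM.

    # Illustrative sketch of Equation 4-1: modify the 137Cs DCGL so that a 137Cs measurement
    # also accounts for the inferred 90Sr activity.

    def modified_dcgl_cs(dcgl_cs, dcgl_sr, sr_to_cs_ratio):
        """DCGLCs,mod = DCGLCs x DCGLSr / [DCGLSr + (CSr/CCs) x DCGLCs]."""
        return dcgl_cs * dcgl_sr / (dcgl_sr + sr_to_cs_ratio * dcgl_cs)

    # Values from the worked example above: DCGLSr = 15 Bq/kg, DCGLCs = 10 Bq/kg, ratio = 8.
    print(modified_dcgl_cs(dcgl_cs=10.0, dcgl_sr=15.0, sr_to_cs_ratio=8.0))   # about 1.6 Bq/kg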

The potential for shifts or variations in the radionuclide ratios means that the surrogate method
should be used with caution. Physical or chemical differences between the radionuclides may
produce different migration rates, causing the radionuclides to separate and changing the
radionuclide ratios.  Remediation activities have a reasonable potential to alter the surrogate ratio
established prior to remediation.  MARSSIM recommends that when the ratio is established prior
to remediation, additional post-remediation samples should be collected to ensure that the data
used to establish the ratio are still appropriate and representative of the existing site condition. If
these additional post-remediation samples are not consistent with the pre-remediation data,
surrogate ratios should be re-established.

Compliance with surface activity DCGLs for radionuclides of a decay series (e.g., thorium and
uranium) that emit both alpha and beta radiation may be demonstrated by assessing alpha, beta,
or both radiations.  However, relying on the use of alpha surface contamination measurements
often proves problematic due to the highly variable level of alpha attenuation by rough, porous,
and dusty surfaces. Beta measurements typically provide a more accurate assessment of thorium
and uranium contamination on most building surfaces because surface conditions cause
significantly less attenuation of beta particles than alpha particles. Beta measurements, therefore,
may provide a more accurate determination of surface activity than alpha measurements.

The relationship of beta and alpha emissions from decay chains or various enrichments of
uranium  should be considered when determining the surface activity for  comparison with the
DCGLW values.  When the initial member of a decay chain has a long half-life, the radioactivity
associated with the subsequent members of the series will increase at a rate determined by the
individual half-lives until all members of the decay chain are present at activity levels equal to
the  activity of the parent. This condition is known as secular equilibrium.

Consider an example where the average surface activity DCGLW for natural thorium is 1,000
Bq/m2 (600 dpm/100 cm2), and all of the progeny are in secular equilibrium — that is, for each
disintegration of 232Th there are six alpha and four beta particles emitted  in the thorium decay
series. Note that in this example, the surface activity DCGLW of 1,000 Bq/m2 is assumed to
apply to the total activity from all members of the decay chain. In this situation, the
corresponding alpha activity DCGLW should be adjusted to 600 Bq/m2 (360 dpm/100 cm2), and
the corresponding beta activity DCGLW to 400 Bq/m2 (240 dpm/100 cm2), in order to be
equivalent to 1,000 Bq/m2 of natural thorium surface activity. For a surface activity DCGLW of
1,000 Bq/m2, the beta activity DCGLW is calculated as follows:

                 (1,000 Bq of chain/m2)  x  (4 β per dis of Th-232)
                 ---------------------------------------------------  =  400 β Bq/m2
                        (10 Bq of chain per 1 Bq of Th-232)
To demonstrate compliance with the beta activity DCGLW for this example, beta measurements
(in cpm) must be converted to activity using a weighted beta efficiency that accounts for the
energy and yield of each beta particle. For decay chains that have not achieved secular
equilibrium, the relative activities between the different members of the decay chain can be
determined as previously discussed for surrogate ratios.
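
As a sketch of the arithmetic in the natural thorium example (values taken from the text; the
chain-to-parent factor of 10 and the 6 alpha / 4 beta yields are the secular-equilibrium assumptions
stated above), consider the following Python lines.

    # Illustrative sketch: apportion a total-chain surface activity DCGLW of 1,000 Bq/m2 into
    # equivalent alpha and beta activity DCGLs for natural thorium in secular equilibrium.

    total_chain_dcgl = 1000.0    # Bq/m2, applies to all members of the 232Th decay chain combined
    chain_to_parent = 10.0       # Bq of chain activity per Bq of 232Th at secular equilibrium
    alphas_per_decay = 6         # alpha emissions per 232Th disintegration (whole chain)
    betas_per_decay = 4          # beta emissions per 232Th disintegration (whole chain)

    parent_activity = total_chain_dcgl / chain_to_parent      # 100 Bq/m2 of 232Th
    alpha_dcgl = parent_activity * alphas_per_decay           # 600 Bq/m2 (360 dpm/100 cm2)
    beta_dcgl = parent_activity * betas_per_decay             # 400 Bq/m2 (240 dpm/100 cm2)

    print(alpha_dcgl, beta_dcgl)                              # 600.0 400.0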

Another example for the use of surrogates involves the measurement of exposure rates, rather
than surface or volume activity concentrations, for radionuclides that deliver the majority of their
dose through the direct radiation pathway. That is, instead of demonstrating compliance with
soil or surface contamination DCGLs derived from the direct radiation pathway,  compliance is
demonstrated by direct measurement of exposure rates. To implement this surrogate method,
Historical Site Assessment (HSA) documentation should provide reasonable assurance that no
radioactive materials are buried at the site and that radioactive materials have not seeped into the
soil or groundwater. This surrogate approach may still be possible for sites that contain
radionuclides that do not deliver the majority of their dose through the direct radiation pathway.
This requires that a consistent relative ratio for the radionuclides that do deliver the majority of
their dose through the  direct radiation pathway can be established. The appropriate exposure rate
limit in this case accounts for the radionuclide(s) that do not deliver the majority of their dose through
the direct radiation pathway.  This is accomplished by determining the fraction of the total
activity represented by radionuclide(s) that do deliver the majority of their dose through the direct
radiation pathway, and weighting the exposure rate limit by this fraction. Note that the
considerations for establishing consistent relative ratios discussed above apply to this surrogate
approach as well.  The responsible regulatory agency should be consulted prior to implementing
this surrogate approach.
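
One possible reading of the weighting described above is sketched below in Python; the numerical
values are hypothetical, and the simple multiplication of the exposure rate limit by the
direct-radiation fraction is an interpretation, not a formula given in MARSSIM.

    # Hypothetical sketch: weight an exposure-rate limit by the fraction of total activity that is
    # attributable to radionuclides delivering the majority of their dose through direct radiation.
    # Both numbers below are made-up examples, not MARSSIM values.

    exposure_rate_limit = 0.10        # hypothetical limit above background (e.g., uSv/h)
    direct_pathway_fraction = 0.8     # hypothetical fraction of total activity from such nuclides

    weighted_limit = exposure_rate_limit * direct_pathway_fraction
    print(f"weighted exposure-rate limit: {weighted_limit:.3f}")   # 0.080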
4.3.3   Use of DCGLs for Sites with Multiple Radionuclides

Typically, each radionuclide DCGL corresponds to the release criterion (e.g., regulatory limit in
terms of dose or risk). However, in the presence of multiple radionuclides, the total of the
DCGLs for all radionuclides would exceed the release criterion. In this case, the individual
DCGLs need to be adjusted to account for the presence of multiple radionuclides contributing to
the total dose.  One method for adjusting the DCGLs is to modify the assumptions made during
exposure pathway modeling to account for multiple radionuclides.  The surrogate measurements
discussed in the previous section describe another method for adjusting the DCGL to account for
multiple radionuclides. Other methods include the use of the unity rule and development of a
gross  activity DCGL for surface activity to adjust the individual radionuclide DCGLs.

The unity rule, represented in the expression below, is satisfied when radionuclide mixtures yield
a combined fractional concentration limit that is less than or equal to one:

               C1/DCGL1  +  C2/DCGL2  +  ...  +  Cn/DCGLn   ≤   1                       4-3
where
       C      =      concentration
       DCGL =      guideline value for each individual radionuclide (1,2, ..., n)

For sites that have a number of significant radionuclides, a higher sensitivity will be needed in
the measurement methods as the values of C become smaller. Also, this is likely to affect
statistical testing considerations—specifically by increasing the numbers of data points necessary
for statistical tests.
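
A minimal sketch of the unity-rule check in Equation 4-3 follows; the concentrations and DCGLs
are hypothetical example values, not taken from MARSSIM.

    # Illustrative sketch of the unity rule (Equation 4-3) for three radionuclides.
    # All values are hypothetical.

    concentrations = [100.0, 40.0, 20.0]   # Bq/kg
    dcgls = [500.0, 200.0, 100.0]          # Bq/kg

    unity_sum = sum(c / d for c, d in zip(concentrations, dcgls))
    print(f"sum of fractions = {unity_sum:.2f}")                     # 0.60
    print("unity rule satisfied" if unity_sum <= 1.0 else "unity rule not satisfied")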

4.3.4   Integrated Surface and Soil Contamination DCGLs

Surface contamination DCGLs apply to the total of fixed plus removable surface activity. For
cases where the surface contamination is due entirely to one radionuclide, the DCGL for that
radionuclide is used for comparison to measurement data (Section 4.3.1).

For situations where multiple radionuclides with their own DCGLs are present, a gross activity
DCGL can be  developed. This approach enables field measurement of gross activity, rather than
determination  of individual  radionuclide activity, for comparison to the DCGL. The gross
activity DCGL for surfaces with multiple radionuclides is calculated as follows:
1.      Determine the relative fraction (f) of the total activity contributed by the radionuclide.
2.      Obtain the DCGL for each radionuclide present.
3.      Substitute the values of f and DCGL in the following equation.

              Gross Activity DCGL  =  1 / (f1/DCGL1 + f2/DCGL2 + ... + fn/DCGLn)         4-4
       Example

       Assume that 40% of the total surface activity was contributed by a radionuclide with a
       DCGL of 8,300 Bq/m2 (5000 dpm/100 cm2); 40% by a radionuclide with a DCGL of
       1,700 Bq/m2 (1000 dpm/100 cm2); and 20% by a radionuclide with a DCGL of 830 Bq/m2
       (500 dpm/100 cm2). Using Equation 4-4,


                     Gross Activity DCGL = 1 / (0.40/8,300 + 0.40/1,700 + 0.20/830)
                                         = 1,900 Bq/m2
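
The same calculation can be sketched in Python; the helper function is illustrative and is not
part of MARSSIM.

    # Illustrative sketch of Equation 4-4 using the surface-activity example above.

    def gross_activity_dcgl(fractions, dcgls):
        """Gross Activity DCGL = 1 / (f1/DCGL1 + f2/DCGL2 + ... + fn/DCGLn)."""
        return 1.0 / sum(f / d for f, d in zip(fractions, dcgls))

    print(gross_activity_dcgl([0.40, 0.40, 0.20], [8300.0, 1700.0, 830.0]))
    # about 1905 Bq/m2, i.e., roughly the 1,900 Bq/m2 quoted above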

Note that Equation 4-4 may not work for sites exhibiting surface contamination from multiple
radionuclides having unknown or highly variable concentrations of radionuclides throughout the
site. In these situations, the best approach may be to select the most conservative surface
contamination DCGL from the mixture of radionuclides present.  If the mixture contains
radionuclides that cannot be measured using field survey equipment, laboratory analyses of
surface materials may be necessary.

Because gross surface activity measurements are not nuclide-specific, they should be evaluated
by the two-sample nonparametric tests described in Chapter 8 to determine if residual
contamination meets the release criterion. Therefore, gross surface activity measurements should
be performed for both the survey units being evaluated and for background reference areas.  The
background reference areas for surface activity typically involve building surfaces and
construction materials that are considered free of residual radioactivity (see Section 4.5). The
total surface activity due to residual contamination should not exceed  the gross activity DCGL
calculated above.


For soil contamination, it is likely that specific radionuclides, rather than gross activity, will be
measured for demonstrating compliance. For radionuclides that are present in natural
background, the two-sample nonparametric test described in Section 8.4 should be used to
determine if residual soil contamination exceeds the release criterion. The soil contamination
due to residual activity should not exceed the DCGL. To account for multiple background
radionuclides, the DCGL should be adjusted in a manner similar to the gross activity DCGL
described above. For a known mixture of these radionuclides, each having a fixed relative
fraction of the total activity, the site-specific DCGLs for each radionuclide may be calculated by
first determining the gross activity DCGL and then multiplying that gross DCGL by the
respective fractional contribution of each radionuclide.  For example, if 238U, 226Ra, and 232Th
have DCGLs of 190 Bq/kg (5.0 pCi/g), 93 Bq/kg (2.5 pCi/g), and 37 Bq/kg (1.0 pCi/g) and
activity ratios of 40%, 40%, and 20%, respectively, Equation 4-4 can be used to calculate the
gross activity DCGL.


                      Gross Activity DCGL = 1 / (0.40/190 + 0.40/93 + 0.20/37)
                                          = 85 Bq/kg

The adjusted DCGLs for each of the contributory radionuclides, when present in the given
activity ratios, are then 34 Bq/kg (0.40 x 85) for 238U, 34 Bq/kg (0.40 x 85) for 226Ra, and 17
Bq/kg (0.20 x  85) for 232Th. Determining gross activity DCGLs to demonstrate compliance
enables an evaluation of site conditions based on analysis for only one of the contributory
contaminants (surrogate approach), provided the relative ratios of the contaminants do not
change.
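
A short Python sketch of the soil example above, apportioning the gross activity DCGL back to each
radionuclide by its activity fraction (values taken from the text):

    # Illustrative sketch: gross activity DCGL and adjusted per-radionuclide DCGLs for the
    # 238U / 226Ra / 232Th soil example above.

    fractions = [0.40, 0.40, 0.20]     # activity fractions for 238U, 226Ra, 232Th
    dcgls = [190.0, 93.0, 37.0]        # individual DCGLs in Bq/kg

    gross = 1.0 / sum(f / d for f, d in zip(fractions, dcgls))    # about 85 Bq/kg
    adjusted = [f * gross for f in fractions]                     # about 34, 34, and 17 Bq/kg

    print(round(gross), [round(a) for a in adjusted])             # 85 [34, 34, 17]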

For situations where the radionuclides occurring in background have unknown or
variable relative concentrations throughout the site, it may be necessary to perform the two-
sample nonparametric tests separately for each radionuclide present. The unity rule  should be
used to determine that the sum of each radionuclide concentration divided by its DCGL is less
than or equal to one.

Therefore, at each measurement location calculate the quantity:

               C1/DCGL1  +  C2/DCGL2  +  ...  +  Cn/DCGLn                                4-5
where C is the radionuclide concentration.




The values of C are the data to be used in the statistical tests to determine if the average over the
survey unit exceeds one.

The same approach applies for radionuclides that are not present in background, with the
exception that the one-sample nonparametric statistical test described in Section 8.3 is used in
place of the two-sample nonparametric test (see Section 5.5.2.3).  Again, for multiple
radionuclides either the surrogate approach or the unity rule should be used to demonstrate
compliance, if relative ratios are expected to change.
4.4    Classify Areas by Contamination Potential

Not all areas of the site have the same potential for residual contamination and, accordingly,
will not need the same level of survey coverage to achieve the established release criteria. The
process will be more efficient if the survey is designed so areas with higher potential for
contamination (based in part on results of the HSA in Chapter 3) will receive a higher degree of
survey effort.

Classification is a critical step in the survey design process.  The working hypothesis of
MARSSIM is that all impacted areas being evaluated for release have a reasonable potential for
radioactive contamination above the DCGL.  Under this initial assumption, all areas are
considered Class 1 areas unless some basis for reclassification as non-impacted, Class 3, or
Class 2 is provided.

Areas that have no reasonable potential for residual contamination do not need any level of
survey coverage and  are designated as non-impacted areas. These areas have no radiological
impact from site operations and are typically identified during the HSA (Chapter 3). Background
reference areas are normally selected from non-impacted areas (Section 4.5).

Impacted areas are areas that have reasonable potential for containing contaminated material.  They
can be subdivided into three classes:

•      Class 1 areas: Areas that have, or had prior to remediation, a potential for radioactive
       contamination (based on site operating history) or known contamination (based on
       previous radiological surveys).  Examples of Class 1 areas include: 1) site areas
       previously subjected to remedial actions, 2) locations where leaks or spills  are known to
        have occurred, 3) former burial or disposal sites, 4) waste storage sites, and 5) areas with
        contaminants in discrete solid pieces of material and high specific activity. Note that areas
       containing contamination in excess of the DCGLW prior to remediation should be
       classified as Class 1 areas.
•      Class 2 areas:  These areas have, or had prior to remediation, a potential for radioactive
       contamination or known contamination, but are not expected to exceed the DCGLW. To
       justify changing an area's classification from Class 1 to Class 2, the existing data (from
       the HSA, scoping surveys, or characterization surveys) should provide a high degree of
       confidence that no individual measurement would exceed the DCGLW.  Other
       justifications for this change in an area's classification may be appropriate based on the
       outcome of the DQO process.  Examples  of areas that might be classified as Class 2 for
       the final status survey include: 1) locations where radioactive materials were present in an
       unsealed form (e.g., process facilities), 2) potentially contaminated transport routes,
       3) areas downwind from stack release points, 4) upper walls and ceilings of some
       buildings or rooms subjected to airborne radioactivity, 5) areas where low concentrations
       of radioactive materials were handled, and 6) areas on the perimeter of former
       contamination control areas.

•      Class 3 areas:  Any impacted areas that are not expected to contain any residual
       radioactivity, or are expected to contain levels of residual radioactivity at a small fraction
       of the DCGLW, based on site operating history and previous radiological surveys.
       Examples of areas that might be classified as Class 3 include buffer zones around Class 1
       or Class 2 areas, and areas with very low potential for residual  contamination but
       insufficient information to justify a non-impacted classification.

Class 1 areas have the greatest potential for contamination and, therefore, receive the highest
degree of survey effort, followed by Class 2 and then Class 3 areas.

The criteria used for designating areas as Class 1, 2, or 3 should be described in the final status
survey plan.  Compliance with the classification criteria should be demonstrated in the final
status survey report. A thorough analysis of HSA findings (Chapter 3) and the results of scoping
and characterization surveys provide the basis for an area's classification.  As a survey
progresses, reevaluation of this classification may be necessary based on newly acquired survey
data.  For example, if contamination is identified in a Class 3  area, an investigation and
reevaluation of that area should be performed to determine if the Class 3 area classification is
appropriate. Typically, the investigation will result in part or  all of the area being reclassified as
Class 1 or Class 2. If survey results identify residual contamination in a Class 2 area exceeding
the DCGL or suggest that  there may be a reasonable potential that contamination is present in
excess of the DCGL, an investigation should be initiated to determine  if all or part of the area
should be reclassified to Class 1.  More information on investigations and reclassifications is
provided in Section 5.5.3.
4.5    Select Background Reference Areas

Certain radionuclides may also occur at significant levels as part of background in the media of
interest (soil, building material, etc.). Examples include members of the naturally-occurring
uranium, thorium, and actinium series; 40K; 14C; and tritium.  137Cs and other radionuclides are
also present in background as a result of nuclear weapons fallout (Wallo et al. 1994).
Establishing background concentrations that describe a distribution of measurement data is
necessary to identify and evaluate contributions attributable to site operations.  Determining
background levels for comparison with the conditions determined in specific survey units entails
conducting surveys in one or more reference areas to define the radiological conditions of the
site. NUREG-1505 (NRC 1997a) provides additional information on background reference
areas.

A site background reference area should have physical, chemical, geological, radiological, and
biological characteristics similar to those of the survey unit being evaluated. Background
reference areas are normally selected from non-impacted areas, but are not limited to natural
areas undisturbed by human activities. In some situations, a reference area may be associated
with the survey unit being evaluated, provided the reference area could not have been
contaminated by site activities.
For example, background measurements may be taken from core samples of a building or
structure surface,  pavement, or asphalt. This option should be discussed with the responsible
regulatory agency during survey planning.  Generally, reference areas should not be part of the
survey unit being  evaluated.

Reference areas provide a location for background measurements which are used for
comparisons with survey unit data. Ideally, the radioactivity present in a reference area would be
the same as that in the survey unit had it never been contaminated. If a site includes physical, chemical,
geological, radiological, or biological variability that is not represented by a single reference
background area,  selecting more than one reference area may be necessary.

It may be difficult to find a reference area within an industrial complex for comparison to a
survey unit if the  radionuclides of potential concern are naturally occurring. Background may
vary greatly due to different construction activities that have occurred at the site. Examples of
construction activities that change background include: leveling; excavating; adding fill dirt;
importing rocks or gravel to stabilize soil or underlay asphalt; manufacturing asphalt with
different matrix rock; using different pours of asphalt or concrete in a single survey unit; layering
asphalt over concrete; layering different thicknesses of asphalt, concrete, rock, or gravel; and
covering or burying old features such as railroad beds or building footings.  Background
variability may also increase due to the concentration of fallout in low areas of parking lots
where  runoff water collects and evaporates. Variations in background of a factor of five or more
can occur in the space of a few hectares.


There are a number of possible actions to address these concerns. Reviewing and reassessing the
selection of reference areas may be necessary.  Selecting different reference areas to represent
individual survey units is another possibility.  More attention may also be needed in selecting
survey units and their boundaries with respect to different areas of potential or actual background
variability. More detailed scoping or characterization surveys may be needed to better
understand background variability. Using radionuclide-specific measurement techniques instead
of gross radioactivity measurement techniques may also be necessary.  If a background reference
area that satisfies the above recommendations is not available, consultation and negotiation with
the  responsible regulatory agency is recommended. Alternate approaches may include using
published studies of radionuclide distributions.

Verifying that a particular background reference area  is appropriate for a survey can be
accomplished using the techniques described or referenced in Chapter 8. Verification provides
assurance that assumptions used  to design the survey  are appropriate and defensible.  This
approach can also prevent decision errors that may result from selecting an inappropriate
background reference area.

If the radionuclide contaminants  of interest do not occur in background, or the background levels
are  known to be a small fraction  of the DCGLW (e.g.,  <10%), the survey unit radiological
conditions may be compared directly to the specified  DCGL and reference area background
surveys are not necessary. If the background is not well defined at a site, and the decision maker
is willing to accept the increased probability of incorrectly failing to release a survey unit (Type
II error), the reference area measurements can be eliminated and a one-sample statistical test
performed as described in Section 8.3.
4.6    Identify Survey Units

A survey unit is a physical area consisting of structures or land areas of specified size and shape
for which a separate decision will be made as to whether or not that area exceeds the release
criterion.  This decision is made as a result of the final status survey. As a result, the survey unit
is the primary entity for demonstrating compliance with the release criterion.

To facilitate survey design and ensure that the number of survey data points for a specific site is
relatively uniformly distributed among areas of similar contamination potential, the site is
divided into survey units that share a common history or other characteristics, or are naturally
distinguishable from other portions of the site.  A site may be divided into survey units at any
time before the final status survey. For example, HSA or scoping survey results may provide
sufficient justification for partitioning the site into Class 1, 2, or 3 areas. Note,  however, that
dividing the site into survey units is critical only for the final status survey—scoping,
characterization, and remedial  action support surveys may be performed without dividing the site
into survey units.



A survey unit should not include areas that have different classifications. The survey unit's
characteristics should be generally consistent with exposure pathway modeling that is used to
convert dose or risk into radionuclide concentrations. For indoor areas classified as Class  1, each
room may be designated as a survey unit. Indoor areas may also be subdivided into several
survey units of different classification, such as separating floors and lower walls from upper
walls and ceilings (and other upper horizontal surfaces) or subdividing a large warehouse based
on floor area.

Survey units should be limited in size based on classification, exposure pathway modeling
assumptions, and site-specific conditions. The suggested areas for survey units are as follows:

       Classification              Suggested Area
       Class 1
         Structures                up to 100 m2 floor area
         Land areas               up to 2,000 m2
       Class 2
         Structures                100 to 1,000 m2
         Land areas               2,000 to 10,000 m2
       Class 3
         Structures                no limit
         Land areas               no limit

The limitation on survey unit size for Class 1 and Class 2 areas ensures that each area is assigned
an adequate number of data points. The rationale for selecting a larger survey unit area should be
developed using the DQO Process (Section 2.3) and fully documented. Because the number of
data points (determined in Section 5.5.2.2 or 5.5.2.3) is independent of the survey unit size
(setting aside considerations for locating small areas of elevated activity), the survey coverage in
an area is determined by dividing the fixed number of data points obtained from the statistical tests by the
survey unit area. That is, if the statistical test estimates that 20 data points are necessary to
demonstrate compliance, then the survey coverage is determined by dividing 20 by the area over
which the data points are distributed.

Special considerations may be necessary for survey units with structure surface areas less than
10 m2 or land areas less than 100 m2. In this case, the number of data points obtained from the
statistical tests is unnecessarily large and not appropriate for smaller survey unit areas.  Instead,
some specified level of survey effort should be determined based on the DQO process and with
the concurrence  of the responsible regulatory agency. The  data generated from these smaller
survey units should be obtained based  on judgment, rather than on systematic or random design,
and compared individually to the DCGLs.
4.7    Select Instruments and Survey Techniques

Based on the potential radionuclide contaminants, their associated radiations, and the types of
residual contamination categories (e.g., soil, structure surfaces) to be evaluated, the detection
sensitivities of various instruments and techniques are determined and documented.  Instruments
should be identified for each of the three types of measurements: 1) scans, 2) direct
measurements, and 3) laboratory analysis of samples. In some cases, the same instrument (e.g.,
sodium iodide detector) or same type of instrument (e.g., gas-flow proportional counter) may be
used for performing several types of measurements.  Once the instruments are selected,
appropriate survey techniques and standard operating procedures (SOPs) should be developed
and documented.  The survey techniques describe how the instrument will be used to perform the
required measurements.

Chapter 6 of this manual, NRC report NUREG-1507 (NRC  1997b), and draft NRC report
NUREG-1506 (NRC 1995) discuss the concept of detection sensitivities and provide guidance on
determining sensitivities and selecting appropriate measurement methods. Chapter 6 also
discusses instruments and survey techniques for scans and direct measurements, while Chapter 7
provides guidance on sampling and laboratory analysis. Appendix  H describes typical field and
laboratory equipment plus associated cost and instrument sensitivities.

4.7.1   Selection of Instruments

Choose reliable instruments that are suited to the physical and environmental conditions at the
site and capable of detecting the radiations of concern to the appropriate minimum detectable
concentration (MDC). During survey design,  it is generally  considered good practice to select a
measurement system with an MDC  between 10-50% of the DCGL.  Sometimes this goal may not
be achievable based on site-specific conditions (e.g., best available technology, cost restrictions).

The MDC is calculated based on a hypothesis test for individual measurements (see Section
6.7), and results below the MDC are variable and lead to a high value for σ of the measured
values in the survey unit or reference area.  This high value for σ can be accounted for using the
statistical tests described in Chapter 8 for the final status survey, but a large number of
measurements are needed to account for the variability.  σ is defined as the standard deviation of
the measurements in the survey unit.

Early in decommissioning, during scoping and characterization, low MDCs help in the
identification of areas that can be classified as non-impacted or Class 3 areas.  These decisions
are usually based on fewer numbers of samples, and each measurement is evaluated individually.
Using an optimistic estimation of the MDC (see Section 2.3.5) for these surveys may result in the
misclassification of a survey unit and cleaning up an uncontaminated area or performing a final
status survey in a contaminated area.  Selecting a measurement technique with a well defined
MDC or a conservative estimate of the MDC ensures the usefulness of the data for making
decisions for planning the final status survey. For these reasons, MARSSIM recommends that a
realistic or conservative estimate of the MDC be used instead of an optimistic estimate.  A
conservative estimate of the MDC uses reasonably conservative values for parameters with a
high level of uncertainty, and results in an MDC value that is higher than a non-conservative or
optimistic estimate.

The instrument should be calibrated for the radiations and energies of interest at the site.  This
calibration should be traceable to an accepted standards organization such as the National
Institute of Standards and Technology (NIST).  Routine operational checks of instrument
performance should be conducted to assure that the check source response is maintained within
acceptable ranges and that any changes in instrument background are not attributable to
contamination of the detector. If the radionuclide contaminants cannot be detected at desired
levels by direct measurement (Section 6.7), the portion of the survey dealing with measurements
at discrete locations should be designed to rely primarily on sampling and laboratory analysis
(Chapter 7).

Assuming the contaminants can be detected, either directly or by measuring a surrogate
radionuclide in the mixture, the next decision point depends on whether the radionuclide being
measured is  present in background. Gross measurement methods will likely be more appropriate
for measuring surface contamination in structures,  scanning for locations of elevated activity, and
determining exposure rates.  Nuclide-specific measurement techniques, such as gamma
spectrometry, provide a marked increase  in detection sensitivity over gross measurements
because of their ability to screen out contributions from other sources.  Figure 4.2 illustrates the
sequence of steps in determining if direct measurement techniques can be applied at a particular
site, or if laboratory analysis is more appropriate.  Scanning surveys are typically performed at all
sites.  The selection of appropriate instruments for scanning,  direct measurement,  and  sampling
and analysis should be survey specific.

4.7.2   Selection of Survey Techniques

In practice, the DQO process is used to obtain a proper balance among the use of various
measurement techniques.  In general, there is an inverse correlation between the cost of a specific
measurement technique and the detection levels being sought. Depending on the survey
objectives, important considerations include survey costs and choosing the optimum
instrumentation and measurement mix.

A certain minimum number of direct measurements or  samples will be needed to demonstrate
compliance with the release criterion based on the nonparametric statistical tests (see Section
5.5.2). In addition, the potential for areas of elevated contamination will have to be considered
for designing scanning surveys.  Areas of elevated  activity may also affect the number of
measurements; however, scanning with survey instruments should generally be sufficient to
ensure that no areas with unusually high levels of radioactivity are left in place.  Some
measurements may also provide information of a qualitative nature to supplement other
measurements.  An example of such an application is in situ gamma spectrometry to demonstrate
the absence (or presence) of specific contaminants.

[Figure 4.2 is a flow diagram: identify the radionuclide of concern and associated radionuclides;
identify the condition to be evaluated or measured; determine DCGL values; determine whether the
contaminant is in background; calculate the required detection sensitivities; evaluate instruments
and techniques relative to the required detection sensitivities; if the required sensitivities can
be achieved using direct measurements, design a survey plan for direct measurements and sampling,
otherwise design a survey plan for sampling only; then select and obtain instruments, calibrate
instruments, and proceed with the field survey.]

          Figure 4.2 Flow Diagram for Selection of Field Survey Instrumentation for
             Direct Measurements and Analysis of Samples (Refer to Section 4.7)

Table 4.1 presents a list of common contaminants along with recommended survey methods that
have proven to be effective based on past survey experience in the decommissioning industry.
This table provides a general indication of the detection capability of commercially-available
instruments. As such, Table 4.1 may be used to provide an initial evaluation of instrument
capabilities for some common radionuclides at the example DCGLs listed in the table.  For
example, consider the  contamination of a surface with 241Am. Table 4.1 indicates that 241Am is
detectable at the example DCGLs, and that viable direct measurement instruments include gas-
flow proportional (α mode) and alpha scintillation detectors. Table 4.1 should not be interpreted
as providing specific values for an instrument's detection sensitivity, which is discussed in
Section 6.7.  In addition, NRC draft report NUREG-1506 (NRC 1995) provides further
information on factors that may affect survey instrumentation selection.

4.7.3 Criteria for Selection of Sample Collection and Direct Measurement Methods

Sample characteristics such as sample depth, volume, area, moisture level, and composition, as
well as sample preparation techniques which may alter the sample, are important planning
considerations for Data Quality Objectives.  Sample preparation may include, but is not limited
to, removing extraneous material, homogenizing, splitting, drying, compositing, and final
preparation of samples. As is the case for determining survey unit characteristics, the physical
sample characteristics  and sampling method should be consistent with the dose or risk pathway
modeling that is used to determine radionuclide DCGLs.  If a direct measurement method is
used, it should also be consistent with the pathway modeling.

For example, a sample depth of 15 cm (6 in.) for soil samples might be specified during the DQO
process for a final status  survey because this corresponds to the soil mixing or plow depth in
several environmental pathway models (Yu et al. 1993, NRC 1992b). If contamination exists at
a depth less than this, a number of models uniformly mix it throughout this depth to simulate the
soil mixing associated with plowing.  Similarly, models may be based on dry weight, which may
necessitate either drying samples or data transformation to account for dry weight.
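
Where the pathway model is expressed on a dry-weight basis, the adjustment is simple arithmetic.
The following sketch is illustrative only and not part of MARSSIM; it converts an as-received
(wet-weight) concentration to a dry-weight basis using a measured moisture fraction, determined,
for example, by weighing a sample before and after drying.

    def dry_weight_concentration(wet_basis_bq_per_kg: float, moisture_fraction: float) -> float:
        """Convert an as-received (wet-weight) concentration to a dry-weight basis.
        moisture_fraction is the water mass divided by the total wet sample mass."""
        return wet_basis_bq_per_kg / (1.0 - moisture_fraction)

    # Example: 100 Bq/kg measured on a sample that is 20% water by mass.
    print(dry_weight_concentration(100.0, 0.20))   # 125.0 Bq/kg on a dry-weight basis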

The DQOs and subsequent direction to the laboratory for analysis might include removal of
material not relevant for characterizing the sample, such as pieces of glass, twigs, or leaves.
Table 4.2 provides examples of how a particular field soil composition of fine-, medium-, and
coarse-grained materials might determine laboratory analysis DQOs for particular radionuclides.
Fine materials consist  of clay (less than 0.002 mm) and silt (0.002 to 0.062 mm). Medium
materials consist of sand, which can be further divided into very fine, fine, medium, coarse, and
very coarse sand.  Coarse materials consist of gravel, which  is composed of pebbles (2 to 64
mm), cobbles (64 to 256 mm), and boulders (greater than 256 mm) (Friedman 1978).
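
The grain-size boundaries quoted above translate directly into a classification rule. The sketch
below is illustrative only and not part of MARSSIM; it assigns a particle diameter in millimeters
to the fine, medium, or coarse category using the Friedman (1978) boundaries given in the text.

    def classify_grain(diameter_mm: float) -> str:
        """Classify a particle diameter (mm) using the boundaries cited in the text:
        clay < 0.002, silt 0.002-0.062, sand 0.062-2, pebbles 2-64, cobbles 64-256,
        boulders > 256 (Friedman 1978)."""
        if diameter_mm < 0.002:
            return "fine (clay)"
        if diameter_mm < 0.062:
            return "fine (silt)"
        if diameter_mm < 2:
            return "medium (sand)"
        if diameter_mm < 64:
            return "coarse (pebbles)"
        if diameter_mm < 256:
            return "coarse (cobbles)"
        return "coarse (boulders)"

    print(classify_grain(0.03))   # fine (silt)
    print(classify_grain(10.0))   # coarse (pebbles)
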
        Table 4.1  Selection of Direct Measurement Techniques Based on Experience

                       Structure Surfaces             Land Areas                Direct Measurement Instruments²
 Nuclide           Example DCGL¹   Detectable   Example DCGL¹  Detectable   Surface Activity     Soil Activity     Exposure Rate
                   (Bq/m²)                       (Bq/kg)
 3H                1.6×10⁶         No           1.5×10⁴        No           ND⁶                  ND                ND
 14C               4.7×10⁵         Yes          1.4×10³        No           GPβ                  ND                ND
 54Mn              1.3×10⁴         Yes          450            Yes          GPβ⁷, GM             γS, ISγ           PIC, γS, ISγ
 55Fe              1.8×10⁶         No           4.1×10⁵        No⁵          ND                   ND (ISγ)          ND (ISγ)
 60Co              8.1×10³         Yes          110            Yes          GPβ, GM              γS, ISγ           PIC, γS, ISγ
 63Ni              1.5×10⁶         Yes          2.8×10⁵        No           GPβ                  ND                ND
 90Sr              6.0×10³         Yes          420            No⁵          GPβ, GM              ND (GM, GPβ)      ND
 99Tc              6.4×10⁵         Yes          1.9×10³        No           GPβ, GM              ND                ND
 137Cs             8.2×10³         Yes          400            Yes          GPβ, GM              γS, ISγ           PIC, γS, ISγ
 152Eu             6.6×10³         Yes          240            Yes          GPβ, GM              γS, ISγ           PIC, γS, ISγ
 226Ra (C)³        970             Yes          210            Yes          GPα, αS              γS, ISγ           PIC, γS, ISγ
 232Th (C)³        340             Yes          320            Yes          GPα, αS, GPβ         γS, ISγ           PIC, γS, ISγ
 U⁴                560             Yes          710            Yes          GPα, αS, GPβ, ISγ    γS, ISγ, GPβ      PIC, γS, ISγ
 239Pu, 240Pu,
 241Pu             120             Yes          70             No⁵          GPα, αS              ND (ISγ)          ND
 241Am             110             Yes          70             Yes          GPα, αS              γS, ISγ           PIC, γS, ISγ

  ¹ Example DCGLs based on values given in NRC draft report NUREG-1500 (NRC 1994c).
  ² GPα = gas-flow proportional counter (α mode); GPβ = gas-flow proportional counter (β mode);
    GM = Geiger-Mueller survey meter; PIC = pressurized ionization chamber; αS = alpha scintillation
    survey meter; γS = gamma scintillation (gross); ISγ = in situ gamma spectrometry.
  ³ For decay chains having two or more radionuclides of significant half-life that reach secular equilibrium.
    The notation "(C)" indicates the direct measurement techniques assume the presence of progeny in the chain.
  ⁴ Depleted, natural, and enriched.
  ⁵ Possibly detectable at limits for areas of elevated activity.
  ⁶ Not detectable.
  ⁷ Bold indicates the preferred method where alternative methods are available.

                   Table 4.2  Example of DQO Planning Considerations

       •      Separate out and evaluate the fine-grained material because resuspension is associated
              with the fine-grained fraction for the air pathway.

       •      If contamination resides on sand, pebbles, and cobbles, analyze these materials for the
              direct exposure pathway and analyze the fine-grained fraction for the air pathway.

       •      Separation and homogenization are not necessary for analyses because the direct
              exposure pathway depends upon the average concentration, and the presence of cobbles
              will usually not affect the laboratory analysis.

       •      Determine whether the pathway modeling considered the presence of cobbles.

       •      Separate, homogenize, and evaluate the fine-grained material because plant root uptake
              is associated with the fine-grained fraction for the plant ingestion pathway.

       •      Separate, homogenize, and evaluate the fine-grained materials because of their relevance
              to the contaminant source term for contaminant migration to the subsurface for the
              water pathway.
Both sample depth and area are considerations in determining appropriate sample volume, and
sample volume is a key consideration for determining the laboratory MDC. The depth should
also correlate with the conceptual model developed in Chapter 3 and upgraded throughout the
Radiation Survey and Site Investigation (RSSI) Process. For example, if data collected during
the Historical Site Assessment indicate contamination may exist to a depth of greater than 15 cm
(6 in.), then samples should be deep enough to support the survey objectives, such as for the
scoping or characterization survey.  Taking samples as a function of depth might also be a survey
design objective, such as for scoping, characterization, or remediation support.

The depth and area of the sample should be recorded as well as any observations, such as the
presence of materials noted during sampling.  Chapter 6 and Chapter 7 present more detail
regarding the application of these survey planning considerations.

4.8  Site Preparation

Site preparation involves obtaining consent for performing the survey, establishing the property
boundaries, evaluating the physical characteristics of the site, accessing surfaces and land areas
of interest, and establishing a reference coordinate system. Site preparation may also include
removing equipment and materials that restrict access to surfaces. The presence of furnishings or
equipment will restrict access to building surfaces and add items that the survey should address.

4.8.1   Consent for Survey

When facilities or sites are not owned by the organization performing the surveys, consent from
the site or equipment owner should be obtained before conducting the surveys.  All appropriate
local, State, and Federal officials as well as the site owner and other affected parties should be
notified of the survey schedule. Section 3.5 discusses consent for access, and additional
guidance based on the CERCLA program is available from EPA (EPA 1987d).

4.8.2   Property Boundaries

Property boundaries may be determined from property survey maps furnished by the owners or
from plat maps obtained from city or county tax maps. Large-area properties and properties with
obscure boundaries or missing survey markers may require the services of a professional land
surveyor.

If the radiological survey is only performed inside buildings,  a tax map with the buildings
accurately located will usually suffice for site/building location designation.

4.8.3   Physical Characteristics of Site

The physical characteristics of the site will have a significant impact on the complexity, schedule,
and cost of a survey.  These characteristics include the number and size of structures, type of
building construction, wall and floor penetrations, pipes, building condition, total area,
topography, soil type, and ground cover. In particular, the accessibility of structures and land
areas (Section 4.8.4)  has a significant impact on the survey effort. In some cases survey
techniques (e.g., in situ gamma spectrometry discussed in Chapter 6) can preclude or reduce the
need to gain physical access or use intrusive techniques.  This should be considered during
survey planning.

4.8.3.1  Structures

Building design and condition will have a marked influence on the survey efforts. The time
involved in conducting a survey of building interior surfaces is essentially directly proportional to
the total surface area.  For this reason the degree of survey coverage decreases as the potential for
residual activity decreases.  Judgment measurements and sampling, which are performed in
addition to the measurements performed for the nonparametric tests, are recommended in areas
likely to have accumulated deposits of residual activity. As discussed in Section 5.5.3.3 and
Section 8.5, judgment measurements and samples are compared directly to the appropriate
DCGL.

The condition of surfaces after decontamination may affect the survey process. Removing
contamination that has penetrated a surface usually involves removing the surface material. As a
result, the floors and walls of decontaminated facilities are frequently badly scarred or broken up
and are often very uneven.  Such surfaces are more difficult to survey because it is not possible to
maintain a fixed distance between the detector and the surface. In addition, scabbled or porous
surfaces may significantly attenuate radiations—particularly alpha and low-energy beta particles.
Use of monitoring equipment on wheels is precluded by rough surfaces, and such surfaces also
pose an increased risk of damage to fragile detector probe faces.  These factors should be
considered during the calibration of survey instruments; NRC  report NUREG-1507 (NRC 1997b)
provides additional information on how to address these surface conditions.  The condition of the
building should also be considered from a safety and health standpoint before a survey is
conducted. A structural assessment may be needed to determine whether the structure is safe to
enter.

Expansion joints,  stress cracks, and penetrations into floors and walls for piping, conduit, and
anchor bolts, etc., are potential sites for accumulation of contamination and pathways for
migration into subfloor soil and hollow wall spaces.  Drains, sewers, and septic systems can also
become contaminated. Wall/floor interfaces are also likely locations for residual contamination.
Coring, drilling, or other such methods may be necessary to gain access for survey.  Intrusive
surveying may require permitting by local regulatory authorities.   Suspended ceilings may cover
areas of potential  contamination  such as ventilation ducts and fixtures.

Exterior building surfaces will typically have a low potential for residual contamination;
however, there are several locations that should be considered during survey planning. If there
are roof exhausts, roof accesses that allow for radioactive material movement, or the facility is
proximal to the air effluent discharge points, the possibility of roof contamination should  be
considered.  Because roofs are periodically resurfaced, contaminants may be trapped in roofing
material, and sampling this material may be necessary. Roof drainage points such as driplines
along overhangs,  downspouts, and gutters are also important survey locations. Wall penetrations
for process equipment, piping, and exhaust ventilation are potential locations for exterior

contamination. Window ledges and outside exits (doors, doorways, landings, stairways, etc.) are
also building exterior surfaces that should be addressed.

4.8.3.2 Land Areas

Depending upon site processes and operating history, the radiological survey may include
varying portions of the land areas.  Potentially contaminated open land or paved areas to be
considered include storage areas (e.g., equipment, product, waste, and raw material), liquid waste
collection lagoons and  sumps, areas downwind (based on predominant wind directions on an
average annual basis, if possible) of stack release points, and surface drainage pathways.
Additionally, roadways and railways that may have been used for transport of radioactive or
contaminated materials that may not have been adequately contained could also be potentially
contaminated.

Buried piping, underground tanks,  sewers, spill areas, and septic leach fields that may have
received contaminated  liquids are locations of possible contamination that may necessitate
sampling of subsurface soil (Section 7.5.3). Information regarding soil type (e.g., clay, sand)
may provide insight into the retention or migration characteristics of specific radionuclides.  The
need for special sampling by coring or split-spoon equipment should be anticipated for
characterization  surveys.

If radioactive waste has been removed, surveys of excavated areas will be necessary before
backfilling.  If the waste is to be left in place, subsurface sampling around the burial site
perimeter to assess the  potential for future migration may be necessary.

Additionally, potentially contaminated rivers, harbors, shorelines, and other outdoor areas may
require survey activities including environmental media (e.g., sediment, marine biota) associated
with these areas.

4.8.4   Clearing to Provide Access

In addition to the physical characteristics of the site, a major consideration is how to address
inaccessible areas that have a potential for residual radioactivity.  Inaccessible areas may need
significant effort and resources to adequately survey.  This section provides a description of
common inaccessible areas that may have to be considered. The  level of effort expended to
access these difficult-to-reach areas should be commensurate with the potential for residual
activity.  For example,  the potential for the presence of residual activity behind  walls should be
established before significant effort is expended to remove drywall.

4.8.4.1  Structures

Structures and indoor areas should be sufficiently cleared to permit completion of the survey.
Clearing includes providing access to potentially contaminated interior surfaces (e.g., drains,
ducting, tanks, pits, ceiling areas, and equipment) by removing covers, disassembly, or other
means of producing adequate openings.

Building features such as ceiling height, construction materials, ducts, pipes, etc., will determine
the ease of accessibility of various surfaces.  Scaffolding, cranes, lifts, or ladders may be
necessary to reach some surfaces, and dismantling portions of the building may be required.

The presence of furnishings and equipment will restrict access to building surfaces and add
additional items that the survey should address. Remaining equipment indirectly involved in the
process may need to be dismantled in order to evaluate the radiological status, particularly of
inaccessible parts of the equipment.  Removing or relocating certain furnishings,  such as lab
benches and hoods, to obtain access to potentially contaminated floors and walls may also be
necessary.  The amount of effort and resources dedicated to such removal or relocation activities
should be commensurate with the potential for contamination.  Where the potential is low, a few
spot-checks may be sufficient to provide confidence that covered areas are free of contamination.
In other cases, complete removal may be warranted.

Piping, drains, sewers, sumps, tanks, and other components of liquid handling systems present
special difficulties because of the inaccessibility of interior surfaces. Process information,
operating history, and preliminary monitoring at available access points will assist in evaluating
the extent of sampling and measurements included in the survey.

If the building is constructed of porous materials (e.g., wood, concrete) and  the surfaces were not
sealed,  contamination may be found in the walls, floors, and other surfaces.  It may be necessary
to obtain cores of these surfaces for laboratory analysis.

Another accessibility problem is the presence of contamination beneath tile  or other floor
coverings.  This often occurs because the covering was placed over contaminated surfaces,  or the
joints in tile were not sealed to prevent penetration.  The practice in some facilities has been to
"fix"  contamination (particularly alpha emitters) by painting over the surface of the contaminated
area.  Thus, actions to obtain access to potentially contaminated surfaces, such as removing wall
and floor coverings (including paint, wax, or other sealer) and opening drains and ducts, may be
necessary to enable representative measurements of the contaminant. If alpha radiation or very
low energy beta radiation is to be measured, the surface should be free of overlying material,
such as dust and water, which may significantly attenuate the radiations.

4.8.4.2 Land Areas

If ground cover needs to be removed or if there are other obstacles that limit access by survey
personnel or necessary equipment, the time and expense of making land areas accessible should
be considered. In addition, precautionary procedures need to be developed to prevent spreading
surface contamination during ground cover removal or the use of heavy equipment.

Removal or relocation of equipment and materials that may entail special precautions to prevent
damage or maintain inventory accountability should be performed by the property owner
whenever possible.  Clearing open land of brush and weeds will usually be performed by a
professional land-clearing organization under subcontract arrangements. However, survey
personnel may perform minor land-clearing activities as needed.

An important consideration prior to clearing is the possibility of bio-uptake and consequent
radiological contamination of the material to be cleared. Special precautions to avoid exposure
of personnel involved in clearing activities may be necessary.  Initial radiological screening
surveys should be performed to ensure that cleared material or equipment is not contaminated.

The extent of site clearing in a specific area depends primarily on the potential for radioactive
contamination in that area: 1) where the radiological history or results of previous surveys do not
indicate potential contamination, it may be sufficient to perform only the minimum clearing needed
to establish a reference coordinate system; 2) where contamination is known to exist or there is a
high potential for contamination, the area may need to be cleared completely to provide access to
all surfaces; and 3) new findings as the survey progresses may indicate that additional clearing
should be performed.

Open land areas may be cleared by heavy machinery (e.g., bulldozers, bushhogs, and hydroaxes).
However, care should be exercised to prevent relocation of surface contamination or  damage to
site features such as drainage ditches, utilities, fences, and buildings. Minor land clearing may be
performed using manually operated equipment such as brushhooks, power saws, knives, and
string trimmers. Brush and weeds should be cut to the minimum practical height necessary to
facilitate measurement and sampling activities (approximately 15 cm). Care should be exercised
to prevent unnecessary damage to or removal of mature trees or shrubs.

Potential ecological damage that might result from an extensive survey should be considered.  If
a survey is likely to result in  significant or permanent damage to the environment, appropriate
environmental analyses should be conducted prior to initiating the survey. In addition,
environmental hazards such as poison ivy, ticks carrying Lyme disease, and poisonous snakes,
spiders, or insects should be noted.  These hazards can affect the safety and health of the workers
as well as the schedule for performing the survey.

4.8.5  Reference Coordinate System

Reference coordinate systems are established at the site to:

       •      facilitate selection of measurement and sampling locations
       •      provide a mechanism for referencing a measurement to a specific location so that
              the same survey point can be relocated

A survey reference coordinate system consists of a grid of intersecting lines, referenced to a fixed
site location or benchmark.  Typically, the lines are arranged in a perpendicular pattern, dividing
the survey location into squares or blocks of equal area; however, other types of patterns (e.g.,
three-dimensional, polar) have been used.

The reference coordinate system used for a particular survey should provide a level of
reproducibility consistent with the objectives of the survey. For example, a commercially
available global positioning system will locate a position within tens of meters,  while a
differential global positioning system  (DGPS) provides precision on the order of a few
centimeters (see Section 6.10.1.1).  On the other hand, a metal bar can be driven into the ground
to provide a long-term reference point for establishing a local reference coordinate system.

Reference coordinate system patterns  on horizontal surfaces are usually identified numerically on
one axis and alphabetically on the other axis or in distances in different compass directions from
the grid origin. Examples of structure interior and land area grids are shown in Figures 4.3
through 4.5. Grids on vertical surfaces may include a third designator, indicating position
relative to floor or ground level. Overhead measurement and sampling locations (e.g., ceiling
and overhead beams) are referenced to corresponding floor grids.

For surveys of Class 1 and Class 2 areas, basic grid patterns at 1 to 2 meter intervals on structure
surfaces and at 10 to 20 meter intervals on land areas may be sufficient to identify survey
locations with a reasonable level of effort, while not being prohibitive in cost or difficulty of
installation. Gridding of Class 3 areas may also be necessary to facilitate referencing of survey
locations to a common system or origin but, for practical  purposes, may typically be at larger
intervals—e.g., 5 to 10 meters for large structural surfaces and 20 to 50 meters for land areas.

Reference coordinate systems on structure surfaces are usually marked by chalk line or paint
along the entire grid line or at line intersections.  Land area reference coordinate systems are
usually marked by wooden or metal stakes, driven into the surface at reference line intersections.
The selection of an appropriate marker depends on the characteristics and routine uses of the
surface. Where surfaces prevent installation of stakes, the reference line intersection can  be
marked by painting.

        Figure 4.3 Indoor Grid Layout with Alphanumeric Grid Block Designation:
                  Walls and Floors are Diagramed as Though They Lay
                           Along the Same Horizontal Plane

            Figure 4.4 Example of a Grid System for Survey of Site Grounds
                             Using Compass Directions

           Figure 4.5 Example of a Grid System for Survey of Site Grounds
                    Using Distances Left or Right of the Baseline


Three basic coordinate systems are used for identifying points on a reference coordinate system.
The reference system shown in Figure 4.3 references grid locations using numbers on the vertical
axis and letters on the horizontal axis. The reference system shown on Figure 4.4 references
distances from the 0,0 point using the compass directions N (north), S (south), E (east), and W
(west).  The reference system shown in Figure 4.5 references distances along and to the R (right)
or L (left) of the baseline.  In addition, a less frequently used reference system is the polar
coordinate system, which measures distances along transects from a central point.  Polar
coordinate systems are particularly useful for survey designs to evaluate effects of stack
emissions, where it may be desirable to have a higher density of samples collected near the stack
and fewer samples with increasing distance from the stack.

Figure 4.5 shows an example grid system for an outdoor land area. The first set of digits, with an
L or R appended (and separated from the second set by a comma), indicates the distance in units
(meters) and the direction (left or right) from the baseline. The second set of digits refers to the
distance from the 0+00 point measured along the baseline and is expressed in hundreds of units.
Point A in the example reference coordinate system for survey of site grounds, Figure 4.5, is
identified as 100R, 2+00 (i.e., 100 m to the right of the baseline and 200 m along the baseline from
the 0+00 point). Fractional distances between reference points are identified by adding the distance
beyond the reference point and are expressed in the same units used for the reference coordinate
system dimensions. Point B on Figure 4.5 is identified as 25R, 1+30 (i.e., 25 m to the right of the
baseline and 130 m along the baseline).
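
To make the convention concrete, the following sketch (not part of MARSSIM; the function name and
return convention are illustrative only) converts a label such as 100R, 2+00 into an offset from the
baseline and a distance along the baseline, both in meters.

    def parse_baseline_coordinate(label: str) -> tuple[float, float]:
        """Return (meters right of the baseline [negative = left], meters along the baseline)
        for a label of the form "<offset><L|R>, <hundreds>+<remainder>"."""
        offset_part, station_part = [part.strip() for part in label.split(",")]
        offset = float(offset_part[:-1])
        if offset_part[-1].upper() == "L":
            offset = -offset
        hundreds, remainder = station_part.split("+")   # e.g., "2+00" -> "2", "00"
        along = 100.0 * float(hundreds) + float(remainder)
        return offset, along

    assert parse_baseline_coordinate("100R, 2+00") == (100.0, 200.0)   # Point A
    assert parse_baseline_coordinate("25R, 1+30") == (25.0, 130.0)     # Point B
    assert parse_baseline_coordinate("200L, 2+00") == (-200.0, 200.0)  # shaded block in Figure 4.5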

Open land reference coordinate systems should be referenced to a location on an existing State or
local reference system or to a U.S. Geological Survey (USGS) bench mark. (This may require
the services of a professional land surveyor.) Global positioning systems (GPS) are capable of
locating reference points in terms of latitude and longitude (Section 6.10.1 provides descriptions
of positioning systems).

Following establishment of the reference coordinate system, a drawing is prepared by the survey
team or the land surveyor. This drawing indicates the reference lines, site boundaries, and other
pertinent site  features and provides a legend showing the scale and a reference compass direction.
The process used to develop the reference coordinate system should be recorded in the survey
planning documentation (e.g., the Quality Assurance Project Plan or QAPP). Any deviations from
the requirements developed during planning should be documented when the reference
coordinate system is established.

It should be noted that the reference coordinate systems described in this section are intended
primarily for reference purposes and do not necessarily dictate the spacing or location of survey
measurements or samples. Establishment of a measurement grid to demonstrate compliance with
the DCGL is discussed in Section  5.5.2.5  and Chapter 8.

4.9    Quality Control

Site surveys should be performed in a manner that ensures results are accurate and sources of
uncertainty are identified and controlled. This is especially the case for final status surveys that
are vital to demonstrating a facility satisfies pre-established release criteria. Quality control (QC)
and quality assurance (QA) are initiated at the start of a project and integrated into all surveys as
DQOs are developed.  This carries over to the writing of a Quality Assurance Project Plan
(QAPP), which applies to each aspect of a survey.  Section 9.2 provides guidance on developing
a QAPP. Data quality is routinely a concern throughout the RSSI Process, and one should
recognize that QA/QC procedures will change as data are collected and analyzed, and as DQOs
become more rigorous for the different types of surveys that lead up to a final status survey.

In general, surveys performed by trained individuals are conducted with approved written
procedures and properly calibrated instruments that are sensitive to the suspected contaminant.
However, even the best approaches for properly performing measurements and acquiring
accurate data need to consider QC activities.  QC activities are necessary to obtain additional
quantitative information to demonstrate that measurement results have the required precision and
are sufficiently free of errors to accurately represent the site being investigated. The following
two questions are the main focus of the rationale for the assessment of errors in environmental
data collection activities (EPA  1990).

•      How many and what type of measurements are required to  assess the quality of data from
       an environmental survey?

•      How can the information from the quality assessment measurements be used to identify
       and control sources of error and uncertainties in the measurement process?

These questions are introduced as part of guidance that also includes an example to illustrate the
planning process for determining a reasonable number of quality control (QC) measurements.
This guidance also demonstrates how the information from the process may be used to document
the quality of the measurement data. This process was developed in terms of soil samples
collected in the field and then sent to a laboratory for analysis (EPA 1990).  For MARSSIM,
these questions may be asked in relation to measurements of surface soils and building surfaces,
both of which include sampling, scanning, and direct measurements.

Quality control may be thought of in three parts: 1) determining the types of QC samples needed
to detect and quantify precision or bias; 2) determining the number of samples as part of the survey design;
and 3) scheduling sample collections throughout the survey process to identify and control
sources of error and uncertainties.  Section 4.9.1 introduces the concepts of precision and bias
related to survey measurements and briefly discusses the types of QC measurements needed to
detect and quantify precision and bias.  Section 6.2 and Section 7.2 provide more detailed

guidance on the types of QC measurements. The number of QC measurements is addressed in
Section 4.9.2, while Section 4.9.3 and Section 9.3 contain information on identifying and
controlling sources of uncertainty. Overall, survey activities associated with MARSSIM include
obtaining the additional information related to QA of both field and laboratory activities.

4.9.1   Precision and Systematic Errors (Bias)

Precision is a measure of agreement among repeated measurements. Precision is discussed
further in Appendix N in statistical terms.  Table N.2 presents the minimum considerations,
impacts of not meeting these considerations, and corrective actions associated with assessing
precision. Systematic errors, also called bias, accumulate during the measurement process and
result from faults in sampling designs and procedures, analytical procedures, sample
contamination, losses, interactions with containers, deterioration, inaccurate instrument
calibration, and other sources.  Bias causes the mean value of the sample data to be consistently
higher or lower than the true mean value.  Appendix N also discusses bias, and Table N.3
presents the minimum considerations associated with assessing bias, the impacts if the
considerations are not met, and related corrective actions. Laboratories typically introduce QC
samples into their sample load to assess possible bias.  In simplest terms, spikes, repeated
measurements, and blanks are used to assess bias, precision, and contamination, respectively.
See Section 6.2 for further discussion of specific measurements for determining precision and
bias for scans and direct measurements and Section 7.2 for further discussion of specific
measurements for determining precision and bias for samples.

Field work using scanning or direct measurements eliminates some sources of error because
samples are not removed, containerized, or transported to another location for analysis. The
operator's technique or field instrument becomes the source of bias. In this case, detecting bias
might incorporate field replicates (see Section 7.2.2.1) by having a second operator revisit
measurement locations and follow the same procedure with the same instrument used by the first
operator. This is also an approach used to assess the precision of measurements. A field
instrument's calibration can also be checked by one or  more operators during the course of a
survey and recorded on a control chart.  Differences in  set up or handling of instruments by
different operators may reveal a significant source of bias that is quite different from sources of
bias associated with laboratory work.
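
A calibration-check control chart of the kind described above can be maintained with a few lines of
code. The sketch below is illustrative only; the check-source responses and the three-sigma limits
are hypothetical assumptions rather than MARSSIM requirements. Control limits are derived from
baseline check-source measurements, and field checks falling outside the limits are flagged for
investigation.

    import statistics

    def control_limits(baseline_counts: list[float], k: float = 3.0) -> tuple[float, float]:
        """Center line +/- k standard deviations computed from check-source responses
        collected before the instrument is taken into the field."""
        mean = statistics.mean(baseline_counts)
        spread = statistics.stdev(baseline_counts)
        return mean - k * spread, mean + k * spread

    # Hypothetical baseline check-source responses (counts per minute) and two field checks.
    baseline = [1205.0, 1190.0, 1212.0, 1198.0, 1201.0, 1195.0, 1208.0]
    low, high = control_limits(baseline)
    for check in (1203.0, 1120.0):
        status = "in control" if low <= check <= high else "out of control - investigate"
        print(f"{check:.0f} cpm: {status}")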

The following factors should be considered when evaluating sources of bias, error, and
uncertainty. Contamination is an added factor to consider for each  of the following items.

•      sample collection methods
•      handling and preparation of samples
•      homogenization and aliquots of laboratory samples
•      field methods for sampling, scanning, or direct measurements

•      laboratory analytical process
•      total bias contributed by all sources

The magnitude of the measurement system variability should be evaluated to determine if the
variability approaches or exceeds the true but unknown variability in the population of interest.
Errors, bias, or data variability may accumulate to the point of rendering data unusable to achieve
survey objectives.  Systematic investigations of field or laboratory processes can be initiated to
assess and identify the extent of errors, bias, and data variability and to determine if the DQOs
are achieved.  An important aspect of each QC determination is the representative nature of a
sample or measurement (see Appendix N for a description of representativeness). If additional
samples or measurements are not taken according to the appropriate method, the resulting QC
information will be invalid or unusable. For example, if an inadequate amount of sample is
collected, the laboratory analytical procedure may not yield a proper result.  The QC sample must
represent the sample population being studied. Misrepresentation itself creates a bias that, if
undetected, leads to inaccurate conclusions concerning an analysis. At the very least,
misrepresentation leads to a need for additional QA investigation.

4.9.2   Number of Quality Control Measurements

The number of QC measurements is determined by the available resources and the degree to
which one needs assurance that a measurement process is adequately controlled. The process is
simplified, for example, when the scope of a survey is  narrowed to a single method, one
sampling crew, and a single laboratory to analyze field samples. Increasing the number of
samples and scheduling sample collections and analyses over time or at different laboratories
increases the level  of difficulty and necessitates increasing the number of QC measurements.
The number of QC measurements may also be  driven upward as the action level approaches a
given instrument's detection limit. This number is determined on a case-by-case basis by
assessing the ability of the specific instruments to detect the particular radionuclide of concern.

A widely used standard practice is to collect a set percentage, such as 5% (EPA 1987b), of
samples for QA purposes. However, this practice has disadvantages. For example, it provides
no real assessment of the uncertainties for a relatively small  sample  size.  For surveys where the
required number of measurements increases, there may be a point beyond which there is little
added value in performing additional QC measurements. Aside from cost, determining the
appropriate number of QC measurements essentially depends on site-specific factors. For
example, soil may present a complex and variable matrix requiring many more QC
measurements for surface soils than for building surfaces.

A performance-based alternative (EPA 1990) to a set percentage or rule of thumb can be
implemented. First, potential sources of error or uncertainty, the likelihood of occurrence, and
the consequences in the context of the DQOs should be determined.  Then, the  appropriate type

and number of QC measurements based on the potential error or uncertainty are determined. For
example, field replicate samples (i.e., a single sample that is collected, homogenized, and split
into equivalent fractions in the field) are used to estimate the combined contribution of several
sources of variation. Hence, the number of field replicate samples to be obtained in the study
should be dictated by how precisely the combined measurement variability needs to be estimated.

Factors influencing this estimate include the

•      number of measurements
•      number and experience of personnel involved
•      current and historical performance of sampling and analytical procedures used
•      the variability of survey unit and background reference area radioactivity measurement
       systems used
•      number of laboratories used
•      the level of radioactivity in the survey unit (which for a final status survey should be low)
•      how close an action level (e.g., DCGL) is to a detection limit (which may represent a
       greater concern after reducing or removing radionuclide concentrations by remediation)

The precision of an estimate of the  "true" variance for precision or bias within a survey design
depends on the number of degrees of freedom used to provide the estimate. Table 4.3 provides
the one-sided upper confidence limits for selected degrees of freedom assuming the results of the
measurements are normally distributed. Confidence limits are provided for 90, 95,  97.5, and 99
percent confidence levels. At the stated level of confidence, the "true" variance of the estimate
of precision or bias for a specified number of QC measurements will be between zero and  the
multiple of the estimated variance listed in Table 4.3. For example, for five degrees of freedom
one would be 90% confident that the true variance for precision falls between zero and 3.10
times the estimated variance.  The number of QC measurements is equal to one greater than the
degrees of freedom.

When planning surveys, the number of each type of QC measurement can be obtained from
Table 4.3. For example, if the survey objective is to estimate the variance in the bias for a
specific measurement system between zero and two times the estimated variance at a 95%
confidence  level, 15 degrees of freedom or 16 measurements of a material with known
concentration (e.g., performance evaluation samples) would be indicated. MARSSIM
recommends that the survey objective be set such that the true variance falls between zero  and
two times the estimated variance. The level of confidence is then determined on a site-specific
basis to adjust the number of each type of QC measurement to the appropriate level (i.e., 11, 16,
21 or 31 measurements). The results of the QC measurements are evaluated during the
assessment phase of the data life cycle (see Section 9.3 and Appendix N).
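
The multipliers in Table 4.3 follow from the chi-squared distribution of a variance estimated from
normally distributed results. The sketch below is not part of MARSSIM; it assumes the standard
one-sided chi-squared interval and closely reproduces the tabulated values, including the examples
above (3.10 for five degrees of freedom at 90 percent confidence and 2.07 for 15 degrees of freedom
at 95 percent confidence).

    from scipy.stats import chi2

    def upper_limit_multiplier(df: int, confidence: float) -> float:
        """One-sided upper confidence limit for the true variance, expressed as a multiple
        of the estimated variance (the quantity tabulated in Table 4.3)."""
        return df / chi2.ppf(1.0 - confidence, df)

    # Print the Table 4.3 rows; the number of QC measurements is the degrees of freedom plus one.
    for df in (2, 5, 10, 15, 20, 25, 30, 40, 50, 100):
        row = ", ".join(f"{upper_limit_multiplier(df, c):.2f}" for c in (0.90, 0.95, 0.975, 0.99))
        print(f"df={df:>3}: {row}")
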
      Table 4.3 Upper Confidence Limits for the True Variance as a Function of the
   Number of QC Measurements Used to Determine the Estimated Variance (EPA 1990)

                                       Level of Confidence (%)
      Degrees of Freedom*        90          95         97.5          99
              2                 9.49       19.49        39.21       99.50
              5                 3.10        4.34         6.02        9.02
             10                 2.05        2.54         3.08        3.91
             15                 1.76        2.07         2.40        2.87
             20                 1.61        1.84         2.08        2.42
             25                 1.52        1.71         1.91        2.17
             30                 1.46        1.62         1.78        2.01
             40                 1.38        1.51         1.64        1.80
             50                 1.33        1.44         1.61        1.68
            100                 1.21        1.28         1.35        1.43

      * To obtain the necessary number of quality control measurements, add one to the degrees of freedom.

       Example:

       A site is contaminated with 60Co and consists of four Class 1 interior survey units, nine
       Class 2 interior survey units, two Class 3 interior survey units, and one Class 3 exterior
       survey unit.  Three different measurement systems are specified in the survey design for
       performing scanning surveys, one measurement system is specified for performing direct
       measurements for interior survey units, and one measurement system is specified for
       measuring samples collected from the exterior survey unit.

        Repeated measurements are used to estimate precision. For scan surveys there is no
        specified number of measurements. Ten percent of the scans in each Class 1 survey unit
        were repeated as replicates to measure operator precision (see Section 6.2.2.1) within
        24 hours of the original scan survey, and five percent of the scans in each Class 2 and
        Class 3 survey unit were similarly repeated as replicates. The results of the repeated scans
       were evaluated based on professional judgment. For direct measurements and sample
       collection activities, a 95% confidence level was selected as consistent with the
       objectives of the survey. Using Table 4.3, it was determined that 16 repeated
       measurements were  required for both the direct measurement technique and the sample
       collection and laboratory measurement technique.  Because 72 direct measurements
       would be performed in Class  1 survey units, 99 in Class 2 survey units, and 20 in Class 3
       survey units, it was anticipated that at least  16 direct measurements would have sufficient
       activity above background to perform repeated measurements and obtain usable results
       (see Section 5.5.2 for guidance on determining the number of measurements and
       Appendix A for a more detailed discussion of the example site). The 16 direct
       measurement locations to be repeated would be selected based on the results of the direct
       measurements and would represent the entire usable range of activity found in the survey
       units rather than measuring the 16 locations with the highest activities.  (The usable range
       of activity includes the highest measurement result in the survey unit and the lowest
       measurement result with an acceptable measurement uncertainty compared to the desired
       level of precision.)  The repeated measurements would be performed by different
       operators using the same equipment, but they would not know the results of the original
       survey. To ensure that the measurements would be valid, the QC measurements to check
       for contamination would be performed at the  same time.  Because the laboratory's QA
       program called for periodic checks on the precision of the laboratory instruments, the
       total  survey design precision for laboratory measurements was measured.  Because the
       only  samples collected would come from a Class 3 area, the sample activities were
       expected to be close to or below the measurement system MDC. This meant that field
        replicate samples would not provide any usable information. Instead, the QC samples for
        bias were analyzed in replicate to obtain a usable estimate of precision for the survey design.

       Measurements of materials with known concentrations above background (e.g.,
       performance evaluation samples) and known  concentrations at or below background (e.g.,
       field blanks) are used to estimate bias.  For scan surveys, the repeated scanning performed
        to estimate precision would also serve as a check for contamination using blanks.
       Because there was no appropriate material of known concentration on which to perform
       bias measurements, the calibration checks were used to demonstrate that the instruments
       were reading properly during the surveys.  A control chart was developed using the
       instrument response for an uncalibrated check source.  Measurements were obtained
       using a specified source-detector alignment that could be easily repeated. Measurements
       were obtained at several times during the day over a period of several weeks prior to
       taking the instruments into the field. Calibration checks were performed before and after
       each survey period in the field and the results immediately plotted on the control chart to
       determine if the instrument was performing properly.  This method was also adopted for
        the direct measurement system. Twenty samples were required by the survey design for the
       Class 3 exterior survey unit. To ensure that the samples were truly blind for the
       laboratory, samples three times the requested volume were collected. These samples
       were sent to a second laboratory for preparation. Each sample was weighed, dried, and
       reweighed to determine the moisture content. Then each sample was ground to a uniform
       particle size of 1 mm (approximately 16 mesh) and divided into three separate aliquots
       (each aliquot was the same size). For each sample one aliquot was packaged for transport
       to the laboratory performing the analysis. After these samples were packaged,  16 of the
       samples had both of the remaining aliquots spiked with the same level of activity using a

        source solution traceable to the National Institute of Standards and Technology (NIST).
       The 16 samples each had a different level of activity within a range that was accepted by
       the laboratory performing the analysis.  These 32 samples were also packaged for
       transport to the laboratory.  In addition, 16 samples of a soil similar to the soil at the site
       were prepared as blanks to check against contamination.  The 20 samples, 32 spikes, and
       16 blanks were transported to the laboratory performing the analyses in a single shipment
       so that all samples were indistinguishable from each other except by the  sample
       identification.

4.9.3   Controlling Sources of Error

During the performance of a survey, it is important to identify sources of error and uncertainty
early in the process so that problems can be resolved. The timing of the QC measurements
within the survey design can be very important.  In order to identify problems as early as
possible, it may be necessary to perform a significant number of QC measurements early in the
survey. This can be especially important for surveys utilizing an innovative or untested survey
design. Survey designs that have been used previously and produced reliable results may be able
to space the QC measurements evenly throughout the survey, or even wait to have samples
analyzed at the end of the survey, as long as the objectives of the survey are achieved.

For example, a survey design may require a new scanning method to be used for several survey
units for which little performance data are available. To ensure that the technique
is working properly, the first few survey units are re-scanned to provide an initial estimate of the
precision and bias. After the initial performance of the techniques  has been verified, a small
percentage of the remaining survey units is re-scanned to demonstrate that the technique is
operating properly for the duration  of the survey.

Identifying sources of error and uncertainty is only the first step.  Once the sources of uncertainty
have been identified, they should be minimized and controlled for the rest of the survey.  Section
9.3 discusses the assessment of survey data and provides guidance  on corrective actions that may
be appropriate for controlling sources of error or uncertainty after they have been identified.
4.10  Health and Safety

Consistent with the approach for any operation, activities associated with radiological surveys
should be planned and monitored to assure that the health and safety of workers and other
personnel, both onsite and offsite, are adequately protected. At the stage of determining the final
status of the site, residual radioactivity is expected to be below the DCGL values; therefore, the
final status survey should not need to include radiation protection controls. However, radiation
protection controls may be necessary when performing scoping or characterization surveys where
the potential for significant levels of residual radioactivity is unknown.

Significant health and safety concerns during any radiological survey include the potential
industrial hazards commonly found at a construction site, such as exposed electrical circuitry,
excavations, enclosed work spaces, hazardous atmospheres, insects, poisonous snakes, plants,
and animals, unstable surfaces (e.g., wet or swamp soil), heat and cold, sharp objects or surfaces,
falling objects, tripping hazards, and working at heights. The survey plan should incorporate
objectives and procedures for identifying and eliminating, avoiding, or minimizing these
potential safety hazards.

                     5 SURVEY PLANNING AND DESIGN
5.1    Introduction

This chapter is intended to assist the user in planning a strategy for conducting a final status
survey, with the ultimate objective being to demonstrate compliance with the derived
concentration guideline levels (DCGLs).  The survey types that make up the Radiation Survey
and Site Investigation (RSSI) Process include scoping, characterization, remedial action support,
and final status surveys.  Although the scoping, characterization, and remedial action support
surveys have multiple objectives, this manual focuses on those aspects related to supporting the
final status survey and demonstrating compliance with DCGLs. In general, each of these survey
types expands upon the data collected during the previous survey (e.g., the characterization
survey is planned with information collected during the scoping survey) up through the final
status survey. The purpose of the final  status survey is to demonstrate that the release criterion
established by the regulatory agency has not been exceeded.  This final release objective should
be kept in mind throughout the design and planning phases for each of the other survey types.
For example, scoping surveys may be designed to meet the objectives of the final status survey
such that the scoping survey report is also the final status survey report. The survey and
analytical procedures referenced in this chapter are described in Chapter 6, Chapter 7, and
Appendix H.  An example of a final status survey, as described in Section 5.5, appears in
Appendix A.  In addition, example checklists are provided for each type of survey to assist the
user in obtaining the necessary information for planning a final status survey.
5.2    Scoping Surveys

5.2.1   General

If the data collected during the Historical Site Assessment (HSA) indicate that a site or area is
impacted, a scoping survey could be performed. The objective of this survey is to augment the
HSA for sites with potential residual contamination. Specific objectives may include:
1) performing a preliminary risk assessment and providing data to complete the site prioritization
scoring process (CERCLA and RCRA sites only), 2) providing input to the characterization
survey design, if necessary, 3) supporting the classification of all or part of the site as a Class 3
area for planning the final status survey, 4) obtaining an estimate of the variability in the residual
radioactivity concentration for the site, and 5) identifying non-impacted areas that may be
appropriate for reference areas and estimating the variability in radionuclide concentrations when
the radionuclide of interest is present in background.

Scoping survey information needed when conducting a preliminary risk assessment (as noted
above for CERCLA and RCRA sites) includes the general radiation levels  at the site and gross
levels of residual contamination on building surfaces and in environmental media. If unexpected

conditions are identified that prevent the completion of the survey, the MARSSIM user should
contact the responsible regulatory agency for further guidance.  Sites that meet the National
Contingency Plan criteria for a removal should be referred to the Superfund Removal program
(EPA 1988c).

If the HSA indicates that contamination is likely, a scoping survey could be performed to provide
initial estimates of the level of effort for remediation and information for planning a more
detailed survey, such as a characterization survey. Not all radiological parameters need to be
assessed when planning for additional characterization because total surface activity or limited
sample collection may be sufficient to meet the objectives of the scoping survey.

Once a review of pertinent site history indicates  that an area is impacted, the minimum survey
coverage at the site will include a Class 3 area final status survey prior to the site being released.
For scoping surveys with this objective, identifying radiological decision levels is necessary for
selecting instruments and procedures with the necessary detection sensitivities to demonstrate
compliance with the release criterion.  A methodology for planning, conducting, and
documenting scoping surveys is described in the following sections.

5.2.2  Survey Design

Planning a scoping survey involves reviewing the HSA (Chapter 3). This process considers
available information concerning locations of spills or other releases of radioactive material.
Reviewing the radioactive materials license or similar documentation provides information on
the identity, locations, and general quantities of radioactive material used at the site. This
information helps to determine which areas are likely to contain residual radioactivity and, thus,
areas where scoping survey activities will be concentrated.  The information may also identify
one or more non-impacted areas as potential reference areas when radionuclides of concern are
present in background (Section 4.5). Following  the review of the HSA, DCGLs that are
appropriate for the site are selected.  The DCGLs may be adjusted later if a determination is
made to use site-specific information to support  the development of DCGLs.

If residual radioactivity is identified  during the scoping survey, the area may be classified as
Class 1 or Class 2 for final status survey planning (refer to Section 4.4 for guidance on initial
classification), and a characterization survey is subsequently performed. For scoping surveys that
are designed to provide input for characterization surveys, measurements and sampling may not
be as comprehensive or performed to the same level of sensitivity necessary for final status
surveys.  The design of the scoping survey should be based on specific data quality objectives
(DQOs; see Section 2.3.1 and Appendix D) for the information to be collected.

For scoping surveys that potentially  serve to release the site from further consideration, the
survey design should consist of sampling based on the HSA data and professional judgment. If
residual radioactivity is not identified during judgment sampling, it may be appropriate to
classify the area as Class 3 and perform a final status survey for Class 3 areas. Refer to Section
5.5 for a description of final status surveys. However, collecting additional information during
subsequent surveys (e.g., characterization surveys) may be necessary to make a final
determination as to area classification.

5.2.3  Conducting Surveys

Scoping survey activities performed for preliminary risk assessment or to provide input for
additional characterization include a limited amount of surface scanning, surface activity
measurements, and sample collection (smears, soil, water, vegetation, paint, building materials,
subsurface materials). In this case, scans, direct measurements, and samples are used to examine
areas likely to contain residual radioactivity. These activities are conducted based on HSA data,
preliminary investigation surveys, and professional judgment.

Background activity and radiation levels for the area should be determined, including direct
radiation levels on building surfaces and radionuclide concentrations in media.  Survey locations
should be referenced to grid coordinates, if appropriate, or fixed site features. It may be
considered appropriate to establish a reference coordinate system in the event that contamination
is detected above the DCGLs (Section 4.8.5).  Samples collected as part of a scoping survey
should consider any sample tracking requirements, including chain of custody, if required
(Section 7.8).

Scoping surveys that are expected to be used as Class 3  area final status surveys should be
designed following the guidance in Section 5.5. These surveys should also include judgment
measurements and sampling in areas likely to have accumulated residual radioactivity (Section
5.5.3).

5.2.4  Evaluating Survey Results

Survey data are converted to the same units as those in which DCGLs are expressed (Section
6.6). Identification of potential  radionuclide contaminants at the site is performed using direct
measurements or laboratory analysis of samples.  The data are compared to the appropriate
regulatory DCGLs.
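As a rough illustration of this step, the sketch below converts a gross count rate from a surface
activity measurement into the same units as a surface activity DCGL and compares the result to
the DCGL. The efficiency model and every numerical value are hypothetical placeholders; the
conversions actually used should follow Section 6.6 and the instrument-specific calibration.

def surface_activity_bq_per_m2(gross_cpm, background_cpm,
                               total_efficiency, probe_area_cm2):
    # Net count rate (counts/s) divided by total efficiency gives the decay
    # rate seen by the probe; dividing by the probe area expresses the
    # result per square metre.
    net_cps = (gross_cpm - background_cpm) / 60.0
    activity_bq = net_cps / total_efficiency
    return activity_bq / (probe_area_cm2 * 1.0e-4)

# Hypothetical instrument parameters and DCGL (placeholder values only).
dcgl_bq_per_m2 = 5000.0
result = surface_activity_bq_per_m2(gross_cpm=850.0, background_cpm=300.0,
                                    total_efficiency=0.15, probe_area_cm2=100.0)
status = "exceeds" if result > dcgl_bq_per_m2 else "does not exceed"
print(f"Converted result: {result:.0f} Bq/m^2 ({status} the assumed DCGL)")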

For scoping survey activities that provide an initial assessment of the radiological hazards  at the
site,  or provide input for additional characterization, the survey data are used to identify locations
and general extent of residual radioactivity. Scoping surveys that are expected to be used as
Class 3  area final status surveys should follow the methodology presented in Chapter 8 to
determine if the release criterion has been exceeded.

5.2.5  Documentation

How the results of the scoping survey are documented depends on the specific objectives of the
survey. For scoping surveys that provide additional information for characterization surveys, the
documentation should provide general information on the radiological status of the site.  Survey
results should include identification of the potential contaminants (including the methods used
for radionuclide identification), general extent of contamination (e.g., activity levels, area of
contamination, and depth of contamination), and possibly even relative ratios of radionuclides to
facilitate DCGL application.  A narrative report or a report in the form of a letter may suffice for
scoping surveys used to provide input for characterization surveys.   Sites being released from
further consideration should provide a level of documentation consistent with final status survey
reports.

                     EXAMPLE SCOPING SURVEY CHECKLIST
SURVEY DESIGN
	 Enumerate DQOs:  State the objectives of the survey; survey instrumentation capabilities
       should be appropriate for the specified survey objectives.

	 Review the Historical Site Assessment for:

       	 Operational history (e.g., problems,  spills, releases, or notices of violation) and
               available documentation (e.g., radioactive materials license).

       	 Other available resources—site personnel, former workers, residents, etc.

       	 Types and quantities of materials that were handled and where radioactive
              materials were stored, handled, moved, relocated, and disposed.

       	 Release and migration pathways.

       	 Areas that are potentially affected and likely to contain residual contamination.
              Note: Survey activities will be concentrated in these areas.

       	 Types and quantities of materials likely to remain onsite—consider radioactive
              decay.

	 Select separate DCGLs for the site based on the HSA review.  (It may be necessary to
       assume appropriate regulatory DCGLs in order to permit selection of survey methods and
       instrumentation for the expected contaminants and quantities.)

CONDUCTING SURVEYS

	 Follow the survey design documented in the QAPP.  Record deviations from the stated
       objectives or documented SOPs and document additional observations made when
       conducting the survey.

	 Select instrumentation based on the specific DQOs of the survey. Consider detection
       capabilities for the expected contaminants and quantities.

	 Determine background activity and radiation levels for the area; include direct radiation
       levels on building surfaces, radionuclide concentrations in  media, and exposure rates.

        Record measurement and sample locations referenced to grid coordinates or fixed site
        features.

        For scoping surveys that are conducted as Class 3 area final status surveys, follow
        guidance for final status surveys.

       Conduct scoping survey, which involves judgment measurements and sampling based on
       HSA results:

       	  Perform investigatory surface scanning.

       	  Conduct limited surface activity measurements.
       	  Perform limited sample collection (smears, soil, water, vegetation, paint, building
              materials, subsurface materials).

       	  Maintain sample tracking.

EVALUATING SURVEY RESULTS

	 Compare survey results with the DQOs.

	 Identify radionuclides of concern.

	 Identify impacted areas and general extent of contamination.

	 Estimate the variability in the residual radioactivity levels for the site.

	 Adjust DCGLs based on survey findings (the DCGLs initially selected may not be
       appropriate for the site).

	 Determine the need for additional action (e.g., none, remediate, more surveys)

	 Prepare report for regulatory agency (determine if letter report is sufficient).

5.3    Characterization Surveys

5.3.1   General

Characterization surveys may be performed to satisfy a number of specific objectives. Examples
of characterization survey objectives include: 1) determining the nature and extent of radiological
contamination, 2) evaluating remediation alternatives (e.g., unrestricted use, restricted use, onsite
disposal, off-site disposal, etc.), 3) input to pathway analysis/dose or risk assessment models for
determining site-specific DCGLs (Bq/kg, Bq/m2), 4) estimating the occupational and public
health and safety impacts during decommissioning, 5) evaluating remediation technologies,
6) input to final status survey design, and 7) Remedial Investigation/Feasibility Study
requirements (CERCLA sites only) or RCRA Facility Investigation/Corrective Measures Study
requirements (RCRA sites only).

The scope of this manual precludes detailed discussions of characterization survey design for
each of these objectives, and therefore, the user should consult other references for specific
characterization survey objectives not covered. For example, the Decommissioning Handbook
(DOE 1994) is a good reference for characterization objectives that are concerned with
evaluating remediation technologies or unrestricted/restricted use alternatives. Other references
(EPA 1988b, 1988c, 1994a; NRC 1994) should be consulted for planning decommissioning
actions, including decontamination techniques, projected schedules, costs, and waste volumes,
and health and safety considerations during decontamination.  Also, the types of characterization
data needed to support risk or dose modeling should be determined from the specific modeling
code documentation.

This manual concentrates on providing information for the final status survey design, with
limited coverage on determining the specific nature and extent of radionuclide contamination.
The specific objectives for providing information to the final status survey design include:
1) estimating the projected radiological  status at the time of the final status survey, in terms of
radionuclides present, concentration ranges and variances, spatial distribution, etc., 2) evaluating
potential reference areas to be used for background measurements, if necessary, 3) reevaluating
the initial classification of survey units,  4) selecting instrumentation based on the necessary
MDCs, and 5) establishing acceptable Type I and Type II errors with the regulatory agency
(Appendix D provides guidance on establishing  acceptable decision error rates). Many of these
objectives are satisfied by determining the specific nature and extent of contamination of
structures, residues, and environmental media. Additional detail on the performance of
characterization surveys designed to determine the general extent of contamination can be found
in the NRC's Draft Branch Technical Position on Site Characterization for Decommissioning
(NRC 1994a) and EPA's RI/FS guidance (EPA 1988b; EPA 1993c).

Results of the characterization survey should include: 1) the identification and distribution of
contamination in buildings, structures, and other site facilities; 2) the concentration and
distribution of contaminants in surface and subsurface soils; 3) the distribution and concentration
of contaminants in surface water, ground water, and sediments; and 4) the distribution and
concentration of contaminants in other impacted media such as vegetation or paint. The
characterization should include sufficient information on the physical characteristics of the site,
including surface features, meteorology and  climatology, surface water hydrology, geology,
demography and land use, and hydrogeology. This survey should  also address environmental
conditions that could affect the rate and direction of contaminant transport in the environment,
depending on the extent of contamination identified above.

The following sections describe a method for planning, conducting, and documenting
characterization surveys. Alternative methodologies may also be acceptable to the regulatory
agencies.

5.3.2   Survey Design

The design of the site characterization survey is based on the specific DQOs for the information
to be collected, and is planned using the HSA and scoping survey  results. The DQO Process
ensures that an adequate amount of data with sufficient quality are collected for the purpose of
characterization. The site characterization process typically begins with a review of the HSA,
which includes available information on site description, operational history, and the type and
extent of contamination (from the scoping survey, if performed). The site description, or
conceptual site model as first developed in Section 3.6.4, consists  of the general area,
dimensions, and locations of contaminated areas on the site. A site map should show site
boundaries, roads, hydrogeologic features, major structures, and other features that could affect
decommissioning activities.

The operational history includes records of site conditions prior to operational activities,
operational activities of the facility, effluents and on-site disposal, and significant
incidents—including spills or other unusual  occurrences—involving the spread of contamination
around the site  and on areas previously released from radiological controls.  This review should
include other available resources, such as site personnel, former workers, residents, etc.  Historic
aerial photographs and site location maps may be particularly useful in identifying potential areas
of contamination.

The types and quantities of materials that were handled and the locations and disposition of
radioactive materials should be reviewed using available documentation (e.g., the radioactive
materials license). Contamination release and migration pathways should be identified, as well
as areas that are potentially affected and are  likely to contain residual contamination.  The types
and quantities of materials likely to remain onsite, considering radioactive decay, should be
determined.
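Where radioactive decay since the end of operations is significant, a simple decay correction can
bound the quantity likely to remain. The sketch below applies the standard exponential decay
law; the radionuclide, half-life, initial quantity, and elapsed time are illustrative assumptions
only.

import math

def decayed_activity(initial_activity, half_life_years, elapsed_years):
    # A = A0 * exp(-lambda * t), where lambda = ln(2) / half-life.
    decay_constant = math.log(2) / half_life_years
    return initial_activity * math.exp(-decay_constant * elapsed_years)

# Hypothetical example: 37 MBq of Co-60 (half-life about 5.27 years)
# handled at the facility, with operations ending 20 years before the survey.
remaining = decayed_activity(37.0, 5.27, 20.0)
print(f"Approximately {remaining:.2f} MBq expected to remain from decay alone")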

The characterization survey should clearly identify those portions of the site (e.g., soil, structures,
and water) that have been affected by site activities and are potentially contaminated. The survey
should also identify the portions of the site that have not been affected by these activities.  In
some cases where no remediation is anticipated, results of the characterization survey may
indicate compliance with DCGLs established by the regulatory agency.  When planning for the
potential use of characterization survey data as part of the final status  survey, the characterization
data must be of sufficient quality and quantity for that use (see Section 5.5). There are several
processes that are likely to occur in conjunction with characterization. These include considering
and evaluating remediation alternatives, and calculating site-specific DCGLs.

The survey should also provide information on variations in the contaminant distribution in the
survey area. The contaminant variation in each survey unit contributes to determining the
number of data points based on the statistical tests used during the final  status  survey (Section
5.5.2). Additionally, characterization data may be used to justify reclassification for some survey
units (e.g., from Class 1 to Class 2).

Note that because of site-specific characteristics of contamination, performing all types of
measurements described here may not be relevant at every site. For example, detailed
characterization  data may not be needed for areas with contamination well above the DCGLs that
clearly require remediation.  Judgment should be used in determining the types of
characterization  information needed to provide an appropriate basis for decontamination
decisions.

5.3.3   Conducting Surveys

Characterization survey activities often involve the detailed assessment  of various types of
building and environmental media, including building surfaces, surface  and subsurface soil,
surface water, and ground water.  The HSA data should be used to identify the potentially
contaminated media onsite (see Section 3.6.3). Identifying the media that may contain
contamination is useful for preliminary survey unit classification and for planning subsequent
survey activities. Selection of survey instrumentation and analytical techniques are typically
based on a knowledge of the appropriate DCGLs, because remediation decisions are made based
on the level  of the residual contamination as compared to the DCGL.  Exposure rate
measurements may be needed to assess occupational and public health and safety.  The location
of underground utilities  should be considered before conducting a survey to avoid compounding
the problems at the site.
5.3.3.1  Structure Surveys

Surveys of building surfaces and structures include surface scanning, surface activity
measurements, exposure rate measurements, and sample collection (e.g., smears, subfloor soil,
water, paint, and building materials).  Both field survey instrumentation (Chapter 6) and
analytical laboratory equipment and procedures (Chapter 7) are selected based on their detection
capabilities for the expected contaminants and their quantities. Field and laboratory instruments
are described in Appendix H.

Background activity and radiation levels for the area should be determined from appropriate
background reference areas. Background assessments include surface activity measurements on
building surfaces, exposure rates, and radionuclide concentrations in various media (refer to
Section 4.5).

Measurement locations should be documented using reference system coordinates, if appropriate,
or fixed site features. A typical  reference system spacing for building surfaces is 1 meter. This is
chosen to facilitate identifying survey locations, evaluating small areas of elevated activity, and
determining survey unit average activity levels.

Scans should be conducted in areas likely to contain residual activity, based on the results of the
HSA and scoping survey.

Both systematic and judgment surface activity measurements are performed. Judgment direct
measurements are performed at  locations of elevated direct radiation, as identified by surface
scans, to provide data on upper ranges of residual contamination levels. Judgment measurements
may also be performed in sewers, air ducts, storage tanks,  septic systems and on roofs of
buildings, if necessary. Each surface activity measurement location should be carefully recorded
on the appropriate survey form.

Exposure rate measurements and media sampling are performed as necessary. For example,
subfloor soil samples may provide information on the horizontal and vertical extent of
contamination.  Similarly, concrete core samples are necessary to evaluate the depth of activated
concrete in a reactor facility. Note that one type of radiological measurement may be sufficient
to determine the extent of contamination. For example, surface activity measurements alone may
be all that is needed to demonstrate that decontamination of a particular area is necessary;
exposure rate measurements would add little to this determination.

Lastly, the measuring and sampling techniques should be commensurate with the intended use of
the data, as characterization survey data may be used to supplement final status survey data,
provided that the data meet the selected DQOs.
5.3.3.2 Land Area Surveys
Characterization surveys for surface and subsurface soils and media involve employing
techniques to determine the lateral and vertical extent and radionuclide concentrations in the soil.
This may be performed using either sampling and laboratory analyses, or in situ gamma
spectrometry analyses, depending on the detection capabilities of each methodology for the
expected contaminants and concentrations. Note that in situ gamma spectrometry analyses or
any direct surface measurement cannot easily be used to determine vertical distributions of
radionuclides. Sample collection followed by laboratory analysis introduces several additional
sources of uncertainty that need to be considered during survey design. In many cases, a
combination of direct measurements and samples is required to meet the objectives of the survey.

Radionuclide concentrations in background soil samples should be determined for a sufficient
number of soil samples that are representative of the soil in terms of soil type, soil depth, etc.  It
is important that the background samples be collected in non-impacted areas. Consideration
should be given to spatial variations in the background radionuclide concentrations as discussed
in Section 4.5 and NRC draft report NUREG-1501 (NRC 1994b).

Sample locations should be documented using reference system coordinates (see Section 4.8.5),
if appropriate, or fixed site features.  A typical reference system spacing for open land areas is 10
meters (NRC 1992a). This spacing is somewhat arbitrary and is chosen to facilitate determining
survey unit locations and evaluating areas of elevated radioactivity.

Surface scans for gamma activity should be conducted in areas likely to contain residual activity.
Beta scans may be appropriate if the contamination is near the surface and represents the
prominent radiation emitted from the contamination. The sensitivity of the scanning technique
should be appropriate to meet the DQOs.

Both surface and subsurface soil and media samples may be necessary. Subsurface soil samples
should be collected where surface contamination is present and where subsurface contamination
is known or suspected.  Boreholes should be constructed to provide samples representing
subsurface deposits.

Exposure rate measurements at 1 meter above the sampling location may also be appropriate.
Each surface  and subsurface soil sampling and measurement location should be carefully
recorded.

5.3.3.3  Other Measurements/Sampling Locations

Surface Water and Sediments. Surface water and sediment sampling may be necessary
depending on the potential for these media to be contaminated.  The contamination potential
depends on several factors, including the proximity of surface water bodies to the site, size of the
drainage area, total annual rainfall, and spatial and temporal variability in surface water flow rate
and volume.  Refer to Section 3.6.3.3 for further consideration of the necessity for surface water
and sediment sampling.

Characterizing surface water involves techniques that determine the extent and distribution of
contaminants. This may be performed by collecting grab samples of the surface water in a well-
mixed zone.  At certain sites, it may be necessary to collect stratified water samples to provide
information on the vertical distribution of contamination.  Sediment sampling should also be
performed to assess the relationship between the composition of the suspended sediment and the
bedload sediment fractions (i.e., suspended sediments compared to deposited sediments).  When
judgment sampling is used to find radionuclides in sediments, contaminated sediments are more
likely to be accumulated on fine-grained deposits found in low-energy environments (e.g.,
deposited silt on inner curves of streams).

Radionuclide concentrations in background water samples should be determined for a sufficient
number of water samples that are upstream of the site or in areas unaffected by site operations.
Consideration should be given to any spatial or temporal variations in the background
radionuclide concentrations.

Sampling locations should be documented using reference system coordinates, if appropriate, or
scale drawings of the surface water bodies. Effects of variability of surface water flow rate
should be considered. Surface scans for gamma activity may be conducted in areas likely  to
contain residual  activity (e.g., along the banks) based on the results of the document review
and/or preliminary investigation surveys.

Surface water sampling should be performed in areas of runoff from active operations,  at plant
outfall locations, both upstream  and downstream of the outfall, and any other areas likely to
contain residual  activity (see Section 3.6.3.3).  Measurements of radionuclide concentrations in
water should include gross alpha and gross beta assessments, as well as any necessary
radionuclide-specific analyses.  Non-radiological parameters, such as specific conductance, pH,
and total organic carbon, may be used as surrogate indicators of potential contamination, provided
that a specific relationship exists between the radionuclide concentration and the level of the
indicator (e.g., if a linear relationship between pH and the radionuclide concentration in water is
found to exist, the pH may be measured and the radionuclide concentration calculated from the
known relationship rather than performing a more expensive nuclide-specific analysis).  The use
of surrogate measurements is discussed in Section 4.3.2.
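As an illustration of how such a surrogate relationship might be applied, the sketch below fits a
linear relationship to paired indicator and radionuclide measurements and then predicts
concentrations from indicator readings alone. The paired data, and the assumption that a simple
linear fit with a strong correlation is adequate, are hypothetical; acceptability of any surrogate
approach is governed by Section 4.3.2 and the responsible regulatory agency.

import numpy as np

# Paired measurements used to calibrate the surrogate (all values hypothetical):
indicator = np.array([120.0, 180.0, 240.0, 310.0, 400.0])   # e.g., specific conductance readings
concentration = np.array([0.4, 0.9, 1.4, 2.0, 2.7])         # paired lab results, Bq/L

slope, intercept = np.polyfit(indicator, concentration, 1)   # least-squares linear fit
r = np.corrcoef(indicator, concentration)[0, 1]              # strength of the relationship

def predicted_concentration(indicator_value):
    # Estimate the radionuclide concentration from the fitted relationship.
    return slope * indicator_value + intercept

print(f"fit: concentration = {slope:.4f} * indicator + {intercept:.3f} (r = {r:.3f})")
print(f"indicator reading of 275 -> about {predicted_concentration(275.0):.2f} Bq/L")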

Each surface water and sediment sampling location should be carefully recorded on the
appropriate survey form.  Additionally, surface water flow models may be used to illustrate
contaminant concentrations and migration rates.

Ground Water. Ground-water sampling may be necessary depending on the local geology,
potential for subsurface contamination, and the regulatory framework.  Because different
agencies handle ground water contamination situations in different ways (e.g., EPA's Superfund
program and some States require compliance with maximum contaminant levels specified in the
Safe Drinking Water Act), the responsible regulatory agency should be contacted if ground water
contamination is expected.  The need for ground-water sampling is described in Section 3.6.3.4.

If ground-water contamination is identified, the responsible regulatory agency should be
contacted at once because: 1) ground water release criteria and DCGLs should be established by
the appropriate agency (Section 4.3), and 2) the default DCGLs for soil may be inappropriate
since they are usually based on initially uncontaminated ground water.

Characterization of ground-water contamination  should determine the extent and distribution of
contaminants, rates and direction of ground water migration, and the assessment of potential
effects of ground water withdrawal on the migration of ground water contaminants.  This may be
performed by designing a suitable monitoring well network.  The actual number and location of
monitoring wells depends on the size of the contaminated area, the type and extent of the
contaminants, the hydrogeologic system, and the objectives of the monitoring program.

When ground-water samples are taken, background should be determined by sufficient sampling
and analysis of ground-water samples collected from the same aquifer upgradient of the site.  The
background samples should not be affected by site operations and should be representative of the
quality of the ground water that would exist if the site had not been contaminated. Consideration
should be given to any spatial or temporal variations in the background radionuclide
concentrations.

Sampling locations should be referenced to grid coordinates, if appropriate, or to scale drawings
of the ground-water monitoring wells.  Construction specifications on the monitoring wells
should also be provided, including elevation, internal and external dimensions, types of casings,
type of screen and its location, borehole diameter, and other necessary information on the wells.

In addition to organic and inorganic constituents, ground-water sampling and analyses should
include all significant radiological contaminants. Measurements in potential sources of drinking
water should include gross alpha and gross beta assessments, as well as any other radionuclide-
specific analyses. Non-radiological parameters, such as specific conductance, pH, and total
organic carbon may be used as surrogate indicators of potential contamination, provided that a
specific relationship exists between the radionuclide concentration and the level of the indicator.

Each ground-water monitoring well location should be carefully recorded on the appropriate
survey form.  Additionally, contaminant concentrations and sources should be plotted on a map
to illustrate the relationship among contamination, sources, hydrogeologic features and boundary
conditions, and property boundaries (EPA 1993b).

Other Media. Air sampling may be necessary at some sites depending on the local geology and
the radionuclides of potential concern. This may include collecting air samples or filtering the air
to collect resuspended particulates. Air sampling is often restricted to monitoring activities for
occupational and public health and safety and is not required to demonstrate compliance with
risk- or dose-based regulations. Section 3.6.3.5 describes examples of sites where air sampling
may provide information useful to designing a final status  survey. At some sites, radon
measurements may be used to indicate the presence of radium, thorium, or uranium in the soil.
Section 6.9 and Appendix H provide information on this type of sampling.

In rare cases, vegetation samples may be collected as part of a characterization survey to provide
information in preparation for a final status survey. Because most risk- and dose-based
regulations are concerned with potential future land use that may differ from the current land use,
vegetation samples are unsuitable for demonstrating compliance with regulations.  There is a
relationship between radionuclide concentrations in plants and those in soil (the soil-to-plant
transfer factor is  used in many models to develop DCGLs) and the plant concentration could be
used as a surrogate measurement of the soil concentration. In most cases, a measurement of the
soil itself as the parameter of interest is more appropriate and introduces less uncertainty in the
result.

5.3.4   Evaluating Survey Results

Survey data are converted to the same units as those in which DCGLs are expressed (Section
6.6). Identification of potential radionuclide contaminants at the  site is performed through
laboratory and in situ analyses. Appropriate regulatory DCGLs for the site are selected and the
data are then compared to the DCGLs. For characterization data  that are used to supplement
final status survey data, the statistical methodology in Chapter 8 should be followed to determine
if a survey unit satisfies the release criteria.

For characterization data that are used to help guide remediation efforts, the survey data are used
to identify locations and general extent of residual activity. The survey results are first compared
with DCGLs.  Surfaces and environmental media are then  differentiated as exceeding DCGLs,
not exceeding DCGLs, or not contaminated, depending on the measurement results relative to the
DCGL value.  Direct measurements indicating areas of elevated activity are further evaluated and
the need for additional measurements is determined.
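The sorting described above amounts to a simple decision rule applied to each converted result.
In the sketch below, the DCGL and the background-based threshold used to call a location "not
contaminated" are assumed placeholder values; the actual thresholds, and any statistical
treatment of the data, should come from the survey DQOs.

def categorize(result, dcgl, background_threshold):
    # Bin a single converted result relative to the DCGL and an assumed
    # background-based threshold (both placeholder values in this sketch).
    if result > dcgl:
        return "exceeds DCGL"
    if result > background_threshold:
        return "below DCGL but contaminated"
    return "not contaminated"

dcgl = 1.0                  # results expressed as a fraction of the DCGL (assumption)
background_threshold = 0.1  # assumed level distinguishing residual activity from background

for result in [1.6, 0.45, 0.07]:
    print(f"{result:4.2f} -> {categorize(result, dcgl, background_threshold)}")
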
5.3.5   Documentation
Documentation of the site characterization survey should provide a complete and unambiguous
record of the radiological status of the site. In addition, sufficient information to characterize the
extent of contamination, including all possible affected environmental media, should be provided
in the report. This report should also provide sufficient information to support reasonable
approaches or alternatives to site decontamination.

              EXAMPLE CHARACTERIZATION SURVEY CHECKLIST

SURVEY DESIGN

	     Enumerate DQOs: State objective of the survey; survey instrumentation
              capabilities should be appropriate for the specific survey objective.

              Review the Historical Site Assessment for:
                            Operational history (e.g., any problems, spills, or releases) and
                             available documentation (e.g., radioactive materials license).

                            Other available resources—site personnel, former workers,
                            residents, etc.

                            Types and quantities of materials that were handled and where
                            radioactive materials were stored, handled, and disposed of.

                            Release and migration pathways.

                            Information on the potential for residual radioactivity that may be
                            useful during area classification for final status survey design.
                            Note: Survey activities will be concentrated in Class 1 and Class 2
                            areas.

                            Types and quantities of materials likely to remain on-site—
                            consider radioactive decay.
CONDUCTING SURVEYS
              Select instrumentation based on detection capabilities for the expected
              contaminants and quantities and a knowledge of the appropriate DCGLs.

              Determine background activity and radiation levels for the area; include surface
              activity levels on building surfaces, radionuclide concentrations in environmental
              media, and exposure rates.

              Establish a reference coordinate system.  Prepare scale drawings for surface water
              and ground-water monitoring well locations.

	      Perform thorough surface scans of all potentially contaminated areas (e.g., indoor
               areas include expansion joints, stress cracks, penetrations into floors and walls for
               piping, conduit, and anchor bolts, and wall/floor interfaces; outdoor areas include
               radioactive material storage areas, areas downwind of stack release points, surface
               drainage pathways, and roadways that may have been used for transport of
               radioactive or contaminated materials).

	      Perform systematic surface activity measurements.

	      Perform systematic smear, surface and subsurface soil and media, sediment,
              surface water and groundwater sampling, if appropriate for the site.

	      Perform judgment direct measurements and sampling of areas of elevated activity
              of residual radioactivity to provide data on upper ranges of residual contamination
              levels.

	      Document survey and sampling locations.

	      Maintain chain of custody of samples when necessary.

Note:  One category of radiological data (e.g., radionuclide concentration, direct radiation level,
       or surface contamination) may be sufficient to determine the extent of contamination;
       other measurements may not be necessary (e.g., removable surface contamination or
       exposure rate measurements).

Note:  Measuring and sampling techniques should be commensurate with the intended use of the
       data because characterization survey data may be used to supplement final status survey
       data.

EVALUATING SURVEY RESULTS

	      Compare survey results with DCGLs. Differentiate surfaces/areas as exceeding
              DCGLs, not exceeding DCGLs, or not contaminated.

	      Evaluate all locations of elevated direct measurements  and determine the need for
              additional measurements/samples.

	      Prepare site characterization survey report.

5.4    Remedial Action Support Surveys

5.4.1   General

Remedial action support surveys are conducted to 1) support remediation activities, 2) determine
when a site or survey unit is ready for the final status survey, and 3) provide updated estimates of
site-specific parameters to use for planning the final status survey. This manual does not discuss
the routine operational  surveys (e.g., air sampling, dose rate measurements, environmental
sampling) conducted to support remediation activities.

A remedial action support survey serves to monitor the effectiveness of decontamination efforts
that are intended to reduce residual radioactivity to acceptable levels. This type of survey guides
the cleanup in a real-time mode. The remedial action support survey typically relies on a simple
radiological parameter, such as direct radiation near the surface, as an indicator of effectiveness.
The investigation level  (the level below which there is an acceptable level of assurance that the
established DCGLs have been attained) is determined and used for immediate, in-field decisions
(Section 5.5.2.6). Such a survey is intended for expediency and cost effectiveness and does not
provide thorough or accurate data describing the radiological status of the site. Note that this
survey does not provide information that can be used to demonstrate compliance with the
DCGLs and is an interim step in the  compliance demonstration process.  Areas that are
determined to satisfy the DCGLs on  the basis of the remedial action support survey will then be
surveyed in detail by the final status  survey. Alternatively, the remedial action support survey
can be designed to  meet the objectives of a final status survey as described in Section 5.5.
DCGLs may be recalculated  based on the results of the remediation process  as the regulatory
program allows or  permits.
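A minimal sketch of this real-time field decision follows. The use of surface activity as the
indicator matches the text above, but the numerical investigation level and the rule that any
single result above it triggers further decontamination are assumptions for illustration; Section
5.5.2.6 addresses how investigation levels are established.

investigation_level = 0.9   # assumed value, expressed as a fraction of the DCGL

def locations_needing_more_work(results):
    # Flag any location whose quick field result exceeds the investigation level.
    return [location for location, value in results if value > investigation_level]

# Hypothetical pass of quick surface activity results over a remediated area.
survey_pass = [("grid A1", 1.40), ("grid A2", 0.60), ("grid A3", 0.95)]
still_high = locations_needing_more_work(survey_pass)
if still_high:
    print("continue decontamination at:", ", ".join(still_high))
else:
    print("area appears ready for final status survey planning")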

Remedial activities result in changes to the distribution of contamination within a survey unit.
The site-specific parameters used during final status survey planning (e.g., variability in the
radionuclide concentration within a survey unit or probability of small areas of elevated activity)
will change during remediation. For most survey units, values for these parameters will need to
be re-established following remediation. Obtaining updated values for these critical planning
parameters should  be considered when designing a remedial action support survey.

5.4.2   Survey  Design

The objective of the remedial action  support survey is to detect the presence of residual activity
at or below the  DCGL criteria.  Although the presence of small areas of elevated radioactivity
may satisfy the  elevated measurement criteria, it may be more efficient to design the remedial
action support survey to identify residual radioactivity at the DCGLw (and to remediate small
areas of elevated activity that may potentially satisfy the release criteria). Survey instrumentation
and techniques  are  therefore  selected based on the detection capabilities for the known or
suspected contaminants and DCGLs to be achieved.

There will be radionuclides and media that cannot be evaluated at the DCGLW using field
monitoring techniques. For these cases, it may be feasible to collect and analyze samples by
methods that are quicker and less costly than radionuclide-specific laboratory procedures. Field
laboratories and screening techniques may be acceptable alternatives to more expensive analyses.
Reviewing remediation plans may be required to get an indication of the location and amount of
remaining contamination following remediation.

5.4.3   Conducting Surveys

Field survey instruments and procedures are selected based on their detection capabilities for the
expected contaminants and  their quantities.  Survey methods typically include scans of surfaces
followed by direct measurements to identify residual radioactivity. The surface activity levels are
compared to the DCGLs, and a determination is made on the need for further decontamination
efforts.

Survey activities for soil excavations include surface scans using field instrumentation sensitive
to beta and gamma activity. Because it is difficult to correlate scanning results to radionuclide
concentrations in soil, judgment should be carefully exercised when using scan results to guide
the cleanup efforts. Field laboratories and screening techniques may provide a better approach
for determining whether or  not further soil remediation is necessary.

5.4.4   Evaluating Survey Results

Survey data (e.g., surface activity levels and radionuclide concentrations in various media) are
converted to standard units  and compared to the DCGLs (Section 6.6).  If results of these survey
activities indicate that remediation has been successful in meeting the DCGLs, decontamination
efforts are ceased and final  status survey activities are initiated. Further remediation may be
needed if results indicate the presence of residual activity in excess of the DCGLs.

5.4.5   Documentation

The remedial action support survey is intended to guide the cleanup and alert those performing
remedial activities that additional remediation is needed or that the site  may be ready to initiate a
final survey. Data that indicate an area has been successfully remediated could be used to
estimate the variance for the survey units in that area.  Information identifying areas of elevated
activity that existed prior to remediation may be useful for planning final status surveys.

          EXAMPLE REMEDIAL ACTION SUPPORT SURVEY CHECKLIST

SURVEY DESIGN

	     Enumerate DQOs:  State the objectives of the survey; survey instrumentation
             capabilities should be able to detect residual contamination at the DCGL.

	     Review the remediation plans.

	     Determine applicability of monitoring surfaces/soils for the radionuclides of
             concern.  Note: Remedial action support surveys may not be feasible for surfaces
             contaminated with very low energy beta emitters or for soils or media
             contaminated with pure alpha emitters.

	     Select simple radiological parameters (e.g., surface activity) that can be used to
             make immediate in-field decisions on the effectiveness of the remedial action.

CONDUCTING SURVEYS

	     Select instrumentation based on its detection capabilities for the expected
             contaminants.

	     Perform scanning and surface activity measurements near the surface being
             decontaminated.

	     Survey soil excavations and perform  field evaluation of samples (e.g., gamma
             spectrometry of undried/non-homogenized soil) as remedial actions progress.
EVALUATING SURVEY RESULTS
             Compare survey results with DCGLs using survey data as a field decision tool to
             guide the remedial actions in a real-time mode.

             Document survey results.

5.5    Final Status Surveys

5.5.1   General

A final status survey is performed to demonstrate that residual radioactivity in each survey unit
satisfies the predetermined criteria for release for unrestricted use or, where appropriate, for use
with designated limitations.  The survey provides data to demonstrate that all radiological
parameters do not exceed the established DCGLs. For these reasons, more detailed guidance is
provided for this category of survey. For the final status survey, survey units represent the
fundamental elements for compliance demonstration using the statistical tests (see Section 4.6).
The documentation specified in the following sections helps ensure a consistent approach among
different organizations and regulatory agencies.  This allows for comparisons of survey results
between  sites or facilities.

This section describes methods for planning and conducting final status surveys to satisfy the
objectives of the regulatory agencies. The MARSSIM approach recognizes that alternative
methods  may be acceptable to those agencies. Flow diagrams and a checklist to assist the user in
planning a survey are included in this section.

5.5.2   Survey Design

Figures 5.1 through 5.3 illustrate the process of designing a final  status survey.  This process
begins with development of DQOs. On the basis of these objectives and the known or
anticipated radiological conditions at the site, the numbers and locations of measurement and
sampling points used to demonstrate compliance with the release criterion are then determined.
Finally, survey techniques appropriate to develop adequate data (see Chapters 6 and 7) are
selected and implemented.
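Figure 5.1 summarizes how measurement locations are identified once the number of data points
is known; the details appear in Section 5.5.2.5, outside this excerpt. As a rough illustration only,
the sketch below lays out a square grid with a random start inside a rectangular survey unit (a
systematic pattern of the kind used for Class 1 and Class 2 units) and purely random locations
for a Class 3 unit. The rectangular geometry and the choice of a square rather than triangular
grid are simplifying assumptions.

import math
import random

def frange(start, stop, step):
    # Simple float range helper for laying out grid lines.
    while start < stop:
        yield start
        start += step

def square_grid_locations(length_m, width_m, n_points):
    # Square grid with a random start; spacing L = sqrt(area / n).
    spacing = math.sqrt(length_m * width_m / n_points)
    x0 = random.uniform(0, spacing)
    y0 = random.uniform(0, spacing)
    return [(round(x, 1), round(y, 1))
            for x in frange(x0, length_m, spacing)
            for y in frange(y0, width_m, spacing)]

def random_locations(length_m, width_m, n_points):
    # Purely random locations, e.g., for a Class 3 survey unit.
    return [(round(random.uniform(0, length_m), 1), round(random.uniform(0, width_m), 1))
            for _ in range(n_points)]

random.seed(1)  # fixed seed so the sketch is reproducible
systematic = square_grid_locations(50.0, 40.0, 20)
print(len(systematic), "systematic locations:", systematic[:4], "...")
print("random Class 3 locations:", random_locations(50.0, 40.0, 3))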

Planning for the final status survey should include early discussions with the regulatory agency
concerning logistics for confirmatory or verification surveys. A confirmatory survey (also known
as an independent verification survey) may be performed by the responsible regulatory agency or
by an independent third party (e.g., contracted by the regulatory agency) to provide data to
substantiate results of the final status survey. Actual field measurements and sampling may be
performed. Another purpose of the confirmatory activities may be to identify any deficiencies in
the final  status survey documentation based on a thorough review of survey procedures and
results. Independent confirmatory survey activities are usually limited in scope to spot-checking
conditions at selected  locations, comparing findings with those of the final status survey, and
performing independent statistical evaluations of the data developed from the confirmatory
survey and the final status survey.

Figure 5.1   Flow Diagram Illustrating the Process for Identifying Measurement Locations
             (Refer to Section 5.5.2.5)

Figure 5.2   Flow Diagram for Identifying the Number of Data Points, N, for Statistical Tests

Figure 5.3   Flow Diagram for Identifying Data Needs for Assessment of Potential Areas of
             Elevated Activity in Class 1 Survey Units (Refer to Section 5.5.2.4)

5.5.2.1 Application of Decommissioning Criteria
The DQO Process, as it is applied to decommissioning surveys, is described in more detail in
Appendix D of this manual and in EPA and NRC guidance documents (EPA 1994,  1987b,
1987c; NRC 1997a). As part of this process, the objective of the survey and the null and
alternative hypotheses should be clearly stated.  The objective of final status surveys is typically
to demonstrate that residual radioactivity levels meet the release criterion. In demonstrating that
this objective is met, the null hypothesis (H0) tested is that residual contamination exceeds the
release criterion; the alternative hypothesis (Ha) is that residual contamination meets the release
criterion.

Two statistical tests are used to evaluate data from final status surveys. For contaminants that are
present in background, the Wilcoxon Rank Sum (WRS) test is used. When contaminants are not
present in background, the Sign test is used. To determine data needs for these tests, the
acceptable probability of making Type I decision errors (a) and Type n decision errors (P) should
be established (see Appendix D, Section D.6).  The acceptable decision error rates are a function
of the amount of residual radioactivity and are determined during survey planning using the DQO
Process.
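The tests themselves are applied in Chapter 8, outside this excerpt, but a rough sketch of Sign-
test-style logic may help make the hypothesis framing concrete: under the null hypothesis the
survey unit is assumed not to meet the release criterion, and only a sufficiently large count of
measurements below the DCGLw supports release. The measurement values, the choice of α,
and the use of an exact binomial calculation here are illustrative assumptions; the manual's own
procedures and tables govern the actual test.

from scipy.stats import binom

def sign_test_release(measurements, dcgl_w, alpha=0.05):
    # S+ counts measurements below the DCGLw.  The survey unit is released
    # only if S+ is large enough to reject the null hypothesis (residual
    # radioactivity exceeds the release criterion) at significance level alpha.
    n = len(measurements)
    s_plus = sum(1 for m in measurements if m < dcgl_w)
    # Probability of S+ or more values below DCGLw if each measurement were
    # equally likely to fall above or below it (the boundary of the null case).
    p_value = binom.sf(s_plus - 1, n, 0.5)
    return p_value <= alpha, s_plus, p_value

# Hypothetical measurements expressed as fractions of the DCGLw.
measurements = [0.4, 0.7, 0.3, 0.9, 0.5, 0.6, 0.2, 0.8, 0.4, 0.5,
                0.6, 0.3, 0.7, 0.5, 0.4]
release, s_plus, p = sign_test_release(measurements, dcgl_w=1.0)
print(f"S+ = {s_plus} of {len(measurements)}, p = {p:.5f}, release supported: {release}")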

The final step of the DQO process includes selecting the optimal design that satisfies the DQOs.
For some sites or survey units, the guidance provided in this section may result in a  survey design
that cannot be accomplished with the available resources.  For these situations, the planning team
will need to relax one or more of the constraints used to develop the survey design as described
in Appendix D. Examples of survey design constraints discussed in this  section include:

•      increasing the decision error rates, while still considering the risks associated with making
        an incorrect decision
•      increasing the width of the gray region by decreasing the lower bound of the gray region
•      changing the boundaries—it may be possible  to reduce measurement costs by changing or
       eliminating survey units that may require different decisions

5.5.2.2 Contaminant Present in Background—Determining Numbers of Data Points for
Statistical Tests

The comparison of measurements from the reference area and survey unit is made using the
WRS test, which should be conducted for each survey unit. In addition, the elevated
measurement comparison (EMC) is performed against each measurement to ensure  that the
measurement result does not exceed a specified investigation level.  If any measurement in the
remediated survey unit exceeds the specified investigation level, then additional investigation is
recommended, at least locally, regardless of the outcome  of the WRS test.

The WRS test is most effective when residual radioactivity is uniformly present throughout a
survey unit.  The test is designed to detect whether or not this activity exceeds the DCGLW. The
advantage of this nonparametric test is that it does not assume the data are normally or
log-normally distributed. The WRS test also allows for "less than" measurements to be present
in the reference area and the survey units. As a general rule, this test can be used with up to 40%
"less than" measurements in either the reference area or the survey unit. However, the use of
"less than" values in data reporting is not recommended.  Wherever possible, the actual  result of
a measurement, together with its uncertainty, should be reported.

This section introduces several terms and statistical parameters that will be used to determine the
number of data points needed to apply the nonparametric tests. An example is provided to better
illustrate the application of these statistical concepts.

Calculate the Relative Shift. The lower bound of the gray region (LBGR) is selected during the
DQO Process along with the target values for α and β.  The width of the gray region, equal to
(DCGLW - LBGR), is a parameter that is central to the WRS test.  This parameter is also referred
to as the shift, Δ.  The absolute size of the shift is actually of less importance than the relative
shift, Δ/σ, where σ is an estimate of the standard deviation of the measured values in the survey
unit. This estimate of σ includes both the real spatial variability in the quantity being measured
and the precision of the chosen measurement system.  The relative shift, Δ/σ, is an expression of
the resolution of the measurements in units of measurement uncertainty.

The shift (Δ = DCGLW - LBGR) and the estimated standard deviations in the measurements of the
contaminant (σr and σs) are used to calculate the relative shift, Δ/σ (see Appendix D, Section
D.6).  The standard deviations in the contaminant level will likely be available from previous
survey data (e.g., scoping or characterization survey data for unremediated survey units or
remedial action support  surveys for remediated survey units). If they are not available, it may be
necessary to 1) perform  some limited preliminary measurements (about 5 to 20) to estimate the
distributions, or 2) to make a reasonable estimate based on available site knowledge.  If the first
approach above is used,  it is important to note that the scoping or characterization survey data or
preliminary measurements used to estimate the standard deviation should use the same technique
as that to be used during the final status survey. When preliminary data are not obtained, it may
be reasonable to assume a coefficient of variation on the order of 30%, based on experience.

The value selected as an estimate of σ for a survey unit may be based on data collected only from
within that survey unit or from data collected from a much larger area of the site.  Note that
survey units are not finalized until the planning stage of the final status survey. This means that
there may be some difficulty in determining which individual measurements from a preliminary
survey may later represent a particular survey unit. For many sites, the most practical solution is
to estimate σ for each area classification (i.e., Class 1, Class 2, and Class 3) for both interior and
exterior survey units.  This will result in all exterior Class 3 survey units using the same estimate


of σ, all exterior Class 2 survey units using a second estimate for σ, and all exterior Class 1
survey units using a third estimate for σ.  If there are multiple types of surfaces within an area
classification, additional estimates of σ may be required. For example, a Class 2 concrete floor
may require a different estimate of σ than a Class 2 cinder block wall, or a Class 3 unpaved
parking area may require a different estimate of σ than a Class 3 lawn.  In addition, MARSSIM
recommends that a separate estimate of σ be obtained for every reference area.

The importance of choosing appropriate values for σr and σs must be emphasized.  If the value is
grossly underestimated, the number of data points will be too few to  obtain the desired power
level for the test and a resurvey may be recommended (refer to Chapter 8). If, on the other hand,
the value is overestimated, the number of data points determined will be unnecessarily large.

Values for the relative shift that are less than one will result in a large number of measurements
needed to demonstrate compliance. The number of data points will also increase as Δ becomes
smaller.  Since the DCGL is fixed, this means that the lower bound of the gray region also has a
significant effect on the estimated number of measurements needed to demonstrate compliance.
When the estimated standard deviations in the reference area and survey units are different, the
larger value should be used to calculate the relative shift (Δ/σ).

Determine Pr.  The probability that a random measurement from  the survey unit exceeds a
random measurement from the background reference area by less than the DCGLW when the
survey unit median is equal to the LBGR above background is defined as Pr.  Pr is used in
Equation 5-1 for determining the  number of measurements to be performed during the survey.
Table 5.1 lists relative shift values and values for Pr. Using the relative shift calculated in the
preceding section, the value of  Pr can be obtained from Table 5.1. Information on calculating
individual values of Pr is available in NUREG-1505 (NRC 1997a).

If the actual value of the relative shift is not listed in Table 5.1, always  select the next lower
value that appears in the table.  For example, Δ/σ = 1.67 does not appear in Table 5.1. The next
lower value is 1.6, so the value of Pr would be 0.871014.

Determine Decision Error Percentiles.  The next step in this process is to determine the
percentiles, Z1-α and Z1-β, represented by the selected decision error levels, α and β, respectively
(see Table 5.2).  Z1-α and Z1-β are standard statistical values (Harnett 1975).
            Table 5.1 Values of Pr for Given Values of the Relative Shift, Δ/σ,
                     when the Contaminant is Present in Background

                       Δ/σ        Pr              Δ/σ        Pr
                       0.1     0.528182           1.4     0.838864
                       0.2     0.556223           1.5     0.855541
                       0.3     0.583985           1.6     0.871014
                       0.4     0.611335           1.7     0.885299
                       0.5     0.638143           1.8     0.898420
                       0.6     0.664290           1.9     0.910413
                       0.7     0.689665           2.0     0.921319
                       0.8     0.714167           2.25    0.944167
                       0.9     0.737710           2.5     0.961428
                       1.0     0.760217           2.75    0.974067
                       1.1     0.781627           3.0     0.983039
                       1.2     0.801892           3.5     0.993329
                       1.3     0.820978           4.0     0.997658

                       If Δ/σ > 4.0, use Pr = 1.000000

              Table 5.2 Percentiles Represented by Selected Values of α and β

                   α (or β)   Z1-α (or Z1-β)        α (or β)   Z1-α (or Z1-β)
                    0.005         2.576              0.10          1.282
                    0.01          2.326              0.15          1.036
                    0.015         2.241              0.20          0.842
                    0.025         1.960              0.25          0.674
                    0.05          1.645              0.30          0.524
Calculate Number of Data Points for WRS Test.  The number of data points, N, to be obtained
from each reference area/survey unit pair for the WRS test is next calculated using Equation 5-1:

                          N = (Z1-α + Z1-β)² / [3(Pr - 0.5)²]                            (5-1)

The value of N calculated using equation 5-1 is an approximation based on estimates of o and Pr,
so there is some uncertainty associated with this calculation. In addition,  there will be some
missing or unusable data from any survey. The rate of missing or unusable measurements, R,
expected to occur in survey units or reference areas and the uncertainty associated with the
calculation of N should be accounted for during survey planning.  The number of data points
should be increased by 20%, and rounded up, over the values calculated using equation 5-1  to
obtain sufficient data points to attain the desired power level with the statistical tests and  allow
for possible lost or unusable data.  The value of 20% is selected to account for a reasonable
amount of uncertainty in the parameters used to calculate N and still allow flexibility to account
for some lost or unusable data.  The recommended 20% correction factor  should be applied as a
minimum value.  Experience and site-specific considerations should be used to increase the
correction factor if required. If the user determines that the 20% increase in the number of
measurements is excessive for a specific site, a retrospective power curve should be used to
demonstrate that the survey design provides adequate power to support the decision (see
Appendix I).

N is the total number of data points for each survey unit/reference area combination.  The N data
points are divided between the survey unit, n, and the reference area, m. The simplest method for
distributing the N data points is to  assign half the data points to the survey unit and half to the
reference area, so  n=m=N/2.  This  means that N/2 measurements are performed in each survey
unit, and N/2 measurements are performed in each reference area. If more than one survey unit is
associated with a particular reference area, N/2 measurements should be performed in each
survey unit and N/2 measurements should be performed in the reference area.

Obtain Number of Data Points for WRS Test from Table 5.3. Table 5.3 provides a list of the
number of data points used to demonstrate compliance using the WRS test for selected values of
α, β, and Δ/σ.  The values listed in Table 5.3 represent the number of measurements to be
performed in each survey unit as well  as in the corresponding reference area. The values  were
calculated using Equation 5-1 and increased by 20% for the reasons discussed in the previous
section.

       Example:

       A site has  14 survey units and  1 reference area, and the same type of instrument
       and method is used to perform measurements in each area. The contaminant has a
       DCGLW which, when converted to cpm, equals 160 cpm. The contaminant is
       present in background at a level of 45 ± 7 (1σ) cpm.  The standard deviation of the
       contaminant in the survey area is ± 20 cpm, based on previous survey results for
               Table 5.3 Values of N/2 for Use with the Wilcoxon Rank Sum Test

  α = 0.01
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      5452     4627    3972    3278    2268
   0.2      1370     1163     998     824     570
   0.3       614      521     448     370     256
   0.4       350      297     255     211     146
   0.5       227      193     166     137      95
   0.6       161      137     117      97      67
   0.7       121      103      88      73      51
   0.8        95       81      69      57      40
   0.9        77       66      56      47      32
   1.0        64       55      47      39      27
   1.1        55       47      40      33      23
   1.2        48       41      35      29      20
   1.3        43       36      31      26      18
   1.4        38       32      28      23      16
   1.5        35       30      25      21      15
   1.6        32       27      23      19      14
   1.7        30       25      22      18      13
   1.8        28       24      20      17      12
   1.9        26       22      19      16      11
   2.0        25       21      18      15      11
   2.25       22       19      16      14      10
   2.5        21       18      15      13       9
   2.75       20       17      15      12       9
   3.0        19       16      14      12       8
   3.5        18       16      13      11       8
   4.0        18       15      13      11       8

  α = 0.025
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      4627     3870    3273    2646    1748
   0.2      1163      973     823     665     440
   0.3       521      436     369     298     197
   0.4       297      248     210     170     112
   0.5       193      162     137     111      73
   0.6       137      114      97      78      52
   0.7       103       86      73      59      39
   0.8        81       68      57      46      31
   0.9        66       55      46      38      25
   1.0        55       46      39      32      21
   1.1        47       39      33      27      18
   1.2        41       34      29      24      16
   1.3        36       30      26      21      14
   1.4        32       27      23      19      13
   1.5        30       25      21      17      11
   1.6        27       23      19      16      11
   1.7        25       21      18      15      10
   1.8        24       20      17      14       9
   1.9        22       19      16      13       9
   2.0        21       18      15      12       8
   2.25       19       16      14      11       8
   2.5        18       15      13      10       7
   2.75       17       14      12      10       7
   3.0        16       14      12      10       6
   3.5        16       13      11       9       6
   4.0        15       13      11       9       6

  α = 0.05
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      3972     3273    2726    2157    1355
   0.2       998      823     685     542     341
   0.3       448      369     307     243     153
   0.4       255      210     175     139      87
   0.5       166      137     114      90      57
   0.6       117       97      81      64      40
   0.7        88       73      61      48      30
   0.8        69       57      48      38      24
   0.9        56       46      39      31      20
   1.0        47       39      32      26      16
   1.1        40       33      28      22      14
   1.2        35       29      24      19      12
   1.3        31       26      22      17      11
   1.4        28       23      19      15      10
   1.5        25       21      18      14       9
   1.6        23       19      16      13       8
   1.7        22       18      15      12       8
   1.8        20       17      14      11       7
   1.9        19       16      13      11       7
   2.0        18       15      13      10       7
   2.25       16       14      11       9       6
   2.5        15       13      11       9       6
   2.75       15       12      10       8       5
   3.0        14       12      10       8       5
   3.5        13       11       9       8       5
   4.0        13       11       9       7       5

  α = 0.10
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      3278     2646    2157    1655     964
   0.2       824      665     542     416     243
   0.3       370      298     243     187     109
   0.4       211      170     139     106      62
   0.5       137      111      90      69      41
   0.6        97       78      64      49      29
   0.7        73       59      48      37      22
   0.8        57       46      38      29      17
   0.9        47       38      31      24      14
   1.0        39       32      26      20      12
   1.1        33       27      22      17      10
   1.2        29       24      19      15       9
   1.3        26       21      17      13       8
   1.4        23       19      15      12       7
   1.5        21       17      14      11       7
   1.6        19       16      13      10       6
   1.7        18       15      12       9       6
   1.8        17       14      11       9       5
   1.9        16       13      11       8       5
   2.0        15       12      10       8       5
   2.25       14       11       9       7       4
   2.5        13       10       9       7       4
   2.75       12       10       8       6       4
   3.0        12       10       8       6       4
   3.5        11        9       8       6       4
   4.0        11        9       7       6       4

  α = 0.25
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      2268     1748    1355     964     459
   0.2       570      440     341     243     116
   0.3       256      197     153     109      52
   0.4       146      112      87      62      30
   0.5        95       73      57      41      20
   0.6        67       52      40      29      14
   0.7        51       39      30      22      11
   0.8        40       31      24      17       8
   0.9        32       25      20      14       7
   1.0        27       21      16      12       6
   1.1        23       18      14      10       5
   1.2        20       16      12       9       4
   1.3        18       14      11       8       4
   1.4        16       13      10       7       4
   1.5        15       11       9       7       3
   1.6        14       11       8       6       3
   1.7        13       10       8       6       3
   1.8        12        9       7       5       3
   1.9        11        9       7       5       3
   2.0        11        8       7       5       3
   2.25       10        8       6       4       2
   2.5         9        7       6       4       2
   2.75        9        7       5       4       2
   3.0         8        6       5       4       2
   3.5         8        6       5       4       2
   4.0         8        6       5       4       2

       the same or similar contaminant distribution. When the estimated standard deviation in
       the reference area and the survey units are different, the larger value, 20 cpm in this
       example, should be used to calculate the relative shift.  During the DQO process the
       LBGR is selected to be one-half the DCGLW (80 cpm) as an arbitrary starting point for
       developing an acceptable survey design,1 and Type I and Type II error values (α and β) of
       0.05 have been selected. Determine the number of data points to be obtained from the
       reference area and from each of the survey units for the statistical tests.

       The value of the relative shift for the reference area, Δ/σ, is (160-80)/20 or 4. From Table
       5.1, the value of Pr is 0.997658. Values of percentiles, represented by the selected
       decision error levels, are obtained from Table 5.2.  In this case Z1-α (for α = 0.05) is 1.645
       and Z1-β (for β = 0.05) is also 1.645.

       The number of data points, N, for the WRS test of each combination of reference area and
       survey units can be calculated using Equation 5-1:

                     N = (1.645 + 1.645)² / [3(0.997658 - 0.5)²] = 14.6

       Adding an additional 20% gives 17.5, which is then rounded up to the next even number,
       18.  This yields 9 data points for the reference area and 9 for each survey unit.

       Alternatively, the number of data points can be obtained directly from Table 5.3.  For
       α=0.05, β=0.05, and Δ/σ=4.0, a value of 9 is obtained for N/2. The table value has already
       been increased by 20% to account for missing or unusable data.
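The example calculation above can be reproduced with a short script. The following Python
sketch is illustrative only; the function names (wrs_data_points, z_percentile) are not part of
MARSSIM. It evaluates Equation 5-1, using a Pr value from Table 5.1 and percentiles that agree
with Table 5.2, and then applies the 20% increase and N/2 split described in this section.

    from math import ceil, erf, sqrt

    def z_percentile(p):
        """Approximate inverse of the standard normal CDF via bisection on erf.
        Reproduces the Z values in Table 5.2 (e.g., z_percentile(0.95) = 1.645)."""
        lo, hi = -10.0, 10.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if 0.5 * (1.0 + erf(mid / sqrt(2.0))) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def wrs_data_points(p_r, alpha, beta):
        """Equation 5-1: total N for one reference area/survey unit pair, then the
        per-area count N/2 increased by 20% and rounded up (as in the example above)."""
        z1a = z_percentile(1.0 - alpha)
        z1b = z_percentile(1.0 - beta)
        n_total = (z1a + z1b) ** 2 / (3.0 * (p_r - 0.5) ** 2)
        n_per_area = ceil(1.2 * n_total / 2.0)   # 20% allowance for lost or unusable data
        return n_total, n_per_area

    # Example from Section 5.5.2.2: DCGLw = 160 cpm, LBGR = 80 cpm, sigma = 20 cpm,
    # so delta/sigma = 4 and Pr = 0.997658 (Table 5.1); alpha = beta = 0.05.
    n_total, n_per_area = wrs_data_points(p_r=0.997658, alpha=0.05, beta=0.05)
    print(round(n_total, 1), n_per_area)   # approximately 14.6 and 9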

5.5.2.3 Contaminant Not Present in Background — Determining Numbers of Data Points for
Statistical Tests

For the situation where the contaminant is not present in background or is present at such a small
fraction of the DCGLW as to be considered insignificant, a background reference area is not
necessary. Instead, the contaminant levels are compared directly with the DCGL value.  The
general approach closely parallels that used for the situation when the contaminant is present in
background as described in Section 5.5.2.2. However, the statistical tests differ slightly. The
one-sample Sign test replaces the two-sample Wilcoxon Rank Sum test described above.
   1  Appendix D provides more detailed guidance on the selection of the LBGR.


Calculate the Relative Shift. The initial step in determining the number of data points in the
one-sample case is to calculate the relative shift, Δ/σs = (DCGL - LBGR)/σs, from the DCGL
value, the lower bound of the gray region (LBGR), and the standard deviation of the contaminant
in the survey unit, σs, as described in Section 5.5.2.2. Also as described in Section 5.5.2.2, the
value of σs may be obtained from earlier surveys, limited preliminary measurements, or a
reasonable estimate. Values of the relative shift that are less than one will result in a large
number of measurements needed to demonstrate compliance.

Determine Sign p.  Sign p is the estimated probability that a random measurement from the
survey unit will be less than the DCGLW when the survey unit median is actually at the LBGR.
The Sign p is used to calculate the minimum number of data points necessary for the survey to
meet the DQOs. The value of the relative shift calculated in the previous  section is used to
obtain the corresponding value of Sign p from Table 5.4.

           Table 5.4 Values of Sign p for Given Values of the Relative Shift, Δ/σ,
                    when the Contaminant is Not Present in Background

                      Δ/σ      Sign p            Δ/σ      Sign p
                      0.1     0.539828           1.2     0.884930
                      0.2     0.579260           1.3     0.903199
                      0.3     0.617911           1.4     0.919243
                      0.4     0.655422           1.5     0.933193
                      0.5     0.691462           1.6     0.945201
                      0.6     0.725747           1.7     0.955435
                      0.7     0.758036           1.8     0.964070
                      0.8     0.788145           1.9     0.971284
                      0.9     0.815940           2.0     0.977250
                      1.0     0.841345           2.5     0.993790
                      1.1     0.864334           3.0     0.998650

                      If Δ/σ > 3.0, use Sign p = 1.000000

Determine Decision Error Percentiles.  The next step in this process is to determine the
percentiles, Z1-α and Z1-β, represented by the selected decision error levels, α and β, respectively
(see Table 5.2).

Calculate Number of Data Points for Sign Test. The number of data points, N, to be obtained
for the Sign test is next calculated using Equation 5-2:

                          N = (Z1-α + Z1-β)² / [4(Sign p - 0.5)²]                        (5-2)

Finally, the number of anticipated data points should be increased by at least 20% as discussed in
Section 5.5.2.2 to ensure sufficient power of the tests and to allow for possible data losses.

Obtain Number of Data Points for Sign Test from Table 5.5.  Table 5.5 provides a list of the
number of data points used to demonstrate compliance using the Sign test for selected values of
α, β, and Δ/σ.  The values listed in Table 5.5 represent the number of measurements to be
performed in each survey unit. These values were calculated using Equation 5-2 and increased
by 20% to account for missing or unusable data and uncertainty in the calculated value of N.

       Example:

       A site has 1 survey unit.  The DCGL level for the contaminant of interest is 140
       Bq/kg (3.9 pCi/g) in soil. The contaminant is not present in background; data
       from previous investigations indicate average residual contamination at the survey
       unit of 3.7 ± 3.7 (1σ) Bq/kg.  The lower bound of the gray region was selected to
       be 110 Bq/kg. A value of 0.05 is next selected for the probability of Type I
       decision errors (α) and a value of 0.01 is selected for the probability of Type II
       decision errors (β) based on the survey objectives. Determine the number of data
       points to be obtained from the survey unit for the statistical tests.

       The value of the shift parameter, Δ/σ, is (140-110)/3.7 or 8.  From Table 5.4, the value of
       Sign p is 1.0.  Since Δ/σ > 3, the width of the gray region can be reduced. If the LBGR is
       raised to 125, then Δ/σ is (140-125)/3.7 or 4. The value of Sign p remains at 1.0. Thus,
       the number of data points calculated will not change.  The probability of a Type II error is
       now specified at 125 Bq/kg (3.4 pCi/g) rather than 110 Bq/kg (3.0 pCi/g). As a
       consequence, the probability of a Type II error at 110 Bq/kg (3.0 pCi/g) will be even
       smaller.

       Values of percentiles, represented by the selected decision error levels, are obtained from
       Table 5.2.  Z1-α (for α = 0.05) is 1.645, and Z1-β (for β = 0.01) is 2.326.
                        Table 5.5 Values of N for Use with the Sign Test

  α = 0.01
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      4095     3476    2984    2463    1704
   0.2      1035      879     754     623     431
   0.3       468      398     341     282     195
   0.4       270      230     197     162     113
   0.5       178      152     130     107      75
   0.6       129      110      94      77      54
   0.7        99       83      72      59      41
   0.8        80       68      58      48      34
   0.9        66       57      48      40      28
   1.0        57       48      41      34      24
   1.1        50       42      36      30      21
   1.2        45       38      33      27      20
   1.3        41       35      30      26      17
   1.4        38       33      28      23      16
   1.5        35       30      27      22      15
   1.6        34       29      24      21      15
   1.7        33       28      24      20      14
   1.8        32       27      23      20      14
   1.9        30       26      22      18      14
   2.0        29       26      22      18      12
   2.5        28       23      21      17      12
   3.0        27       23      20      17      12

  α = 0.025
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      3476     2907    2459    1989    1313
   0.2       879      735     622     503     333
   0.3       398      333     281     227     150
   0.4       230      192     162     131      87
   0.5       152      126     107      87      58
   0.6       110       92      77      63      42
   0.7        83       70      59      48      33
   0.8        68       57      48      39      26
   0.9        57       47      40      33      22
   1.0        48       40      34      28      18
   1.1        42       35      30      24      17
   1.2        38       32      27      22      15
   1.3        35       29      24      21      14
   1.4        33       27      23      18      12
   1.5        30       26      22      17      12
   1.6        29       24      21      17      11
   1.7        28       23      20      16      11
   1.8        27       22      20      16      11
   1.9        26       22      18      15      10
   2.0        26       21      18      15      10
   2.5        23       20      17      14      10
   3.0        23       20      17      14       9

  α = 0.05
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      2984     2459    2048    1620    1018
   0.2       754      622     518     410     258
   0.3       341      281     234     185     117
   0.4       197      162     136     107      68
   0.5       130      107      89      71      45
   0.6        94       77      65      52      33
   0.7        72       59      50      40      26
   0.8        58       48      40      32      21
   0.9        48       40      34      27      17
   1.0        41       34      29      23      15
   1.1        36       30      26      21      14
   1.2        33       27      23      18      12
   1.3        30       24      21      17      11
   1.4        28       23      20      16      10
   1.5        27       22      18      15      10
   1.6        24       21      17      14       9
   1.7        24       20      17      14       9
   1.8        23       20      16      12       9
   1.9        22       18      16      12       9
   2.0        22       18      15      12       8
   2.5        21       17      15      11       8
   3.0        20       17      14      11       8

  α = 0.10
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      2463     1989    1620    1244     725
   0.2       623      503     410     315     184
   0.3       282      227     185     143      83
   0.4       162      131     107      82      48
   0.5       107       87      71      54      33
   0.6        77       63      52      40      23
   0.7        59       48      40      30      18
   0.8        48       39      32      24      15
   0.9        40       33      27      21      12
   1.0        34       28      23      18      11
   1.1        30       24      21      16      10
   1.2        27       22      18      15       9
   1.3        26       21      17      14       8
   1.4        23       18      16      12       8
   1.5        22       17      15      11       8
   1.6        21       17      14      11       6
   1.7        20       16      14      10       6
   1.8        20       16      12      10       6
   1.9        18       15      12      10       6
   2.0        18       15      12      10       6
   2.5        17       14      11       9       5
   3.0        17       14      11       9       5

  α = 0.25
   Δ/σ    β=0.01  β=0.025  β=0.05  β=0.10  β=0.25
   0.1      1704     1313    1018     725     345
   0.2       431      333     258     184      88
   0.3       195      150     117      83      40
   0.4       113       87      68      48      23
   0.5        75       58      45      33      16
   0.6        54       42      33      23      11
   0.7        41       33      26      18       9
   0.8        34       26      21      15       8
   0.9        28       22      17      12       6
   1.0        24       18      15      11       5
   1.1        21       17      14      10       5
   1.2        20       15      12       9       5
   1.3        17       14      11       8       4
   1.4        16       12      10       8       4
   1.5        15       12      10       8       4
   1.6        15       11       9       6       4
   1.7        14       11       9       6       4
   1.8        14       11       9       6       4
   1.9        14       10       9       6       4
   2.0        12       10       8       6       3
   2.5        12       10       8       5       3
   3.0        12        9       8       5       3

       The number of data points, N, for the Sign test can be calculated using Equation 5-2:

                     N = (1.645 + 2.326)² / [4(1.0 - 0.5)²] = 15.8

       Rounding N up to 16 and adding an additional 20% gives 19.2, which is rounded up to
       yield 20 data points for the survey unit.

       Alternatively, the number of data points can be obtained directly from Table 5.5. For
       α=0.05, β=0.01, and Δ/σ>3.0, a value of 20 is obtained for N. The table value has already
       been increased by 20% to account for missing or unusable data and uncertainty in the
       calculated value of N.
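A similar sketch can be written for the Sign test. The Sign p values in Table 5.4 appear to
correspond to the standard normal cumulative distribution evaluated at Δ/σ, so the hypothetical
functions below (sign_p, sign_test_data_points; names not from MARSSIM) compute Sign p
directly, evaluate Equation 5-2, and apply the 20% increase. Rounding N up to an integer before
the 20% increase is an assumption made here so that the result matches the tabulated value of 20.

    from math import ceil, erf, sqrt

    def sign_p(relative_shift):
        """Sign p for a given delta/sigma; matches the Table 5.4 values
        (e.g., sign_p(1.0) = 0.841345).  Shifts above 3.0 are treated as 1.0."""
        if relative_shift > 3.0:
            return 1.0
        return 0.5 * (1.0 + erf(relative_shift / sqrt(2.0)))

    def sign_test_data_points(dcgl, lbgr, sigma, z1_alpha, z1_beta):
        """Equation 5-2 with the 20% increase recommended in Section 5.5.2.2.
        z1_alpha and z1_beta can be taken directly from Table 5.2."""
        p = sign_p((dcgl - lbgr) / sigma)
        n = (z1_alpha + z1_beta) ** 2 / (4.0 * (p - 0.5) ** 2)
        return ceil(1.2 * ceil(n))   # round N up, then add 20% and round up again

    # Example from Section 5.5.2.3: DCGLw = 140 Bq/kg, LBGR = 110 Bq/kg, sigma = 3.7 Bq/kg,
    # alpha = 0.05 (Z = 1.645) and beta = 0.01 (Z = 2.326).
    print(sign_test_data_points(140.0, 110.0, 3.7, 1.645, 2.326))   # 20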

5.5.2.4 Determining Data Points for Small Areas of Elevated Activity

The statistical tests described above (also see Chapter 8) evaluate whether or not the residual
radioactivity in an area exceeds the DCGLW for contamination conditions that are approximately
uniform across the survey unit.  In addition, there should be a reasonable level of assurance that
any small areas of elevated residual radioactivity that could be significant relative to the
DCGLEMC are not missed during the final status survey. The statistical tests introduced in the
previous sections may not successfully detect small areas of elevated contamination. Instead,
systematic measurements and sampling, in conjunction with surface scanning, are used to obtain
adequate assurance that small areas of elevated radioactivity will  still satisfy the release criterion
or the DCGLEMC. The procedure is applicable for all radionuclides, regardless of whether or not
they are present in background, and is implemented for survey units classified as Class 1.

The number of survey data  points needed for the  statistical tests discussed in Section 5.5.2.2 or
5.5.2.3 is identified (the appropriate section depends on whether the contaminant is present in
background or not). These data points are then positioned throughout the survey unit by first
randomly selecting a start point and establishing a systematic pattern.  This systematic sampling
grid may be either triangular or square. The triangular grid is generally more efficient for
locating small areas of elevated activity.  Appendix D includes  a brief discussion on the
efficiency of triangular and square grids for locating areas of elevated activity. A more detailed
discussion is provided by EPA (EPA 1994b).

The number of calculated survey locations, n, is used to determine the grid spacing, L, of the
systematic sampling pattern (see Section 5.5.2.5). The grid area that is bounded by these survey
locations is given by A = 0.866 x L2 for a triangular grid and A = L2 for a square grid. The risk
of not sampling a circular area—equal to A—of elevated activity by use of a random-start grid
pattern is illustrated in Figure D.7 in Appendix D.

One method for determining values for the DCGLEMC is to modify the DCGLW using a correction
factor that accounts for the difference in area and the resulting change in dose or risk. The area
factor is the magnitude by which the concentration within the small area of elevated  activity can
exceed DCGLW while maintaining compliance with the release criterion.  The area factor is
determined based on specific regulatory agency guidance.

Tables 5.6 and 5.7 provide examples of area factors generated using exposure pathway models.
The outdoor area factors listed in Table 5.6 were calculated using RESRAD 5.6.  For each
radionuclide, all  exposure pathways were calculated assuming a concentration of 37  Bq/kg
(1 pCi/g). The area of contamination in RESRAD 5.6 defaults to 10,000 m2.  Other than
changing the area (i.e.,  1, 3, 10, 30, 100, 300,  1,000, or 3,000 m2), the RESRAD default values
were not changed.  The area factors were then computed by taking the ratio of the dose or risk
per unit concentration generated by RESRAD for the default 10,000 m2 to that generated for the
other areas listed. If the DCGL for residual radioactivity distributed over 10,000 m2  is multiplied
by this value, the resulting concentration distributed over the specified smaller area delivers the
same calculated dose.  The indoor area factors listed in Table 5.7 were calculated in a similar
manner using RESRAD-BUILD 1.5. For each radionuclide, all exposure pathways were
calculated assuming a concentration of 37 Bq/m2 (1 pCi/m2). The area of contamination in
RESRAD-BUILD 1.5 defaults to 36 m2. The other areas compared to this value were 1, 4, 9, 16,
or 25 m2. Removable surface contamination was assumed to be 10%. No other changes to the
default values were made.  Note that the use of RESRAD to determine area factors is for
illustration purposes only.  The MARSSIM user should consult with the responsible  regulatory
agency for guidance on acceptable techniques to  determine area factors.

The minimum detectable concentration (MDC) of the scan procedure—needed to detect an area
of elevated activity at the limit determined by the area factor—is calculated as follows:

                          Scan MDC (required) = DCGLW × Area Factor                     (5-3)


The actual MDCs of scanning techniques are then determined for the available instrumentation
(see Section 6.7). The actual MDC of the  selected scanning technique is compared to the
required scan MDC. If the actual scan MDC is less than the required scan MDC, no additional
sampling points are necessary for assessment  of small areas of elevated activity. In other words,
the scanning technique  exhibits adequate sensitivity to detect small areas of elevated activity.

                Table 5.6 Illustrative Examples of Outdoor Area Dose Factors*

                                          Area Factor
  Nuclide    1 m2     3 m2    10 m2    30 m2   100 m2   300 m2   1000 m2   3000 m2   10000 m2
  Am-241    208.7    139.7     96.3     44.2     13.4      4.4       1.3       1.0        1.0
  Co-60       9.8      4.4      2.1      1.5      1.2      1.1       1.1       1.0        1.0
  Cs-137     11.0      5.0      2.4      1.7      1.4      1.3       1.1       1.1        1.0
  Ni-63    1175.2    463.7    154.8     54.2     16.6      5.6       1.7       1.5        1.0
  Ra-226     54.8     21.3      7.8      3.2      1.1      1.1       1.0       1.0        1.0
  Th-232     12.5      6.2      3.2      2.3      1.8      1.5       1.1       1.0        1.0
  U-238      30.6     18.3     11.1      8.4      6.7      4.4       1.3       1.0        1.0

  * The values listed in Table 5.6 are for illustrative purposes only. Consult regulatory guidance
    to determine area factors to be used for compliance demonstration.
                 Table 5.7 Illustrative Examples of Indoor Area Dose Factors*

                                    Area Factor
  Nuclide    1 m2     4 m2     9 m2    16 m2    25 m2    36 m2
  Am-241     36.0      9.0      4.0      2.2      1.4      1.0
  Co-60       9.2      3.1      1.9      1.4      1.2      1.0
  Cs-137      9.4      3.2      1.9      1.4      1.2      1.0
  Ni-63      36.0      9.0      4.0      2.3      1.4      1.0
  Ra-226     18.1      5.5      2.9      1.9      1.3      1.0
  Th-232     36.0      9.0      4.0      2.2      1.4      1.0
  U-238      35.7      9.0      4.0      2.2      1.4      1.0

  * The values listed in Table 5.7 are for illustrative purposes only. Consult regulatory guidance
    to determine area factors to be used for compliance demonstration.
If the actual scan MDC is greater than the required scan MDC (i.e., the available scan sensitivity
is not sufficient to detect small areas of elevated activity), then it is necessary to calculate the
area factor that corresponds to the actual scan MDC:
                          Area Factor = Scan MDC (actual) / DCGLW                       (5-4)


The size of the area of elevated activity (in m2) that corresponds to this area factor is then
obtained from specific regulatory agency guidance, and may be similar to those illustrated in
Table 5.6 or Table 5.7.  The data needs for assessing small areas of elevated activity can then be
determined by dividing the area of elevated activity acceptable to the regulatory agency into the
survey unit area.  For example, if the area of elevated activity is 100 m2 (from Table 5.6) and the
survey unit area is 2,000 m2, then the calculated number of survey locations is 20. The calculated
number of survey locations, nEA, is used to determine a revised spacing, L, of the systematic
pattern (refer to Section 5.5.2.5).  Specifically, the spacing, L, of the pattern (when driven by the
areas of elevated  activity) is given by:
                          L = √[ A / (0.866 nEA) ]     for a triangular grid             (5-5)

                          L = √( A / nEA )             for a square grid                 (5-6)
where A is the area of the survey unit.  Grid spacings should generally be rounded down to the
nearest distance that can be conveniently measured in the field.

If the number of data points required to identify areas of elevated activity (nEA) is greater than the
number of data points calculated using Equation 5-1 (N/2) or Equation 5-2 (N), L should be
calculated using Equation 5-5 or Equation 5-6.  This value of L is then used to determine the
measurement locations as described in Section 5.5.2.5.  If nEA is smaller than N/2 or N, L is
calculated using Equation 5-7 or Equation 5-8 as described in Section 5.5.2.5.  The statistical
tests are performed using this larger number of data points. Figure 5.3 provides a concise
overview of the procedure used to identify data needs for the assessment of small areas of
elevated activity. If residual radioactivity is found in an isolated area of elevated activity—in
addition to residual radioactivity distributed relatively uniformly across the survey unit—the
unity rule (described in Section 4.3.3) can be used to ensure that the total dose or risk does not
exceed the release criterion (see Section 8.5.2). If there is more than one elevated area, a separate
term should be included for each. As an alternative to the unity rule, the dose or risk due to the
actual residual radioactivity distribution can be calculated if there is an appropriate exposure
pathway model available.  Note that these considerations generally apply only to Class 1 survey
units, since areas of elevated  activity should not exist in Class 2 or Class 3 survey units.
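The decision logic of Figure 5.3 can be summarized in a short sketch. In the following Python
outline the function name and arguments are illustrative, not part of MARSSIM; the area factor
for the grid area and the mapping from an area factor back to an area A' are assumed to come
from agency-approved tables such as the illustrative Tables 5.6 and 5.7. Equations 5-3 through
5-5 are applied as described above.

    from math import ceil, sqrt

    def elevated_area_data_needs(dcgl_w, area_factor_for_grid_area, scan_mdc_actual,
                                 survey_unit_area, area_from_area_factor):
        """Sketch of the Figure 5.3 logic for a Class 1 survey unit.

        area_factor_for_grid_area: area factor for the grid area bounded by the
            statistically determined sample spacing (from agency-approved tables).
        area_from_area_factor: callable returning the area (m2) whose area factor
            equals a given value, taken from the same tables."""
        required_scan_mdc = dcgl_w * area_factor_for_grid_area          # Equation 5-3
        if scan_mdc_actual <= required_scan_mdc:
            return None   # scan sensitivity is adequate; no extra sampling points needed

        actual_area_factor = scan_mdc_actual / dcgl_w                   # Equation 5-4
        a_prime = area_from_area_factor(actual_area_factor)             # maximum area A'
        n_ea = ceil(survey_unit_area / a_prime)                         # data points needed
        spacing = sqrt(survey_unit_area / (0.866 * n_ea))               # Equation 5-5 (triangular)
        return n_ea, spacing

    # Example 2 of Section 5.5.2.4: DCGLw = 110 Bq/kg, grid-area area factor = 1.3,
    # scan MDC = 170 Bq/kg, survey unit area = 1,500 m2.  For this illustration the
    # lookup simply returns the 30 m2 area associated with a Co-60 area factor of 1.5
    # in Table 5.6.
    print(elevated_area_data_needs(110.0, 1.3, 170.0, 1500.0, lambda af: 30.0))
    # roughly (50, 5.9): 50 measurement locations on a grid of about 6 m spacing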


When the detection limit of the scanning technique is very large relative to the DCGLEMC, the
number of measurements estimated to demonstrate compliance using the statistical tests may
become unreasonably large. In this situation, the survey objectives and considerations should be
re-evaluated.  These considerations may include the survey design and measurement
methodology, exposure pathway modeling assumptions and parameter values used to determine
the DCGLs, Historical Site Assessment conclusions concerning source terms and radionuclide
distributions, and the results of scoping and characterization surveys. In most cases the result of
this evaluation is not expected to justify an unreasonably large number of measurements.

       Example 1:

       A Class 1 land area survey unit of 1,500 m2 is potentially contaminated  with 60Co.
       The DCGLW value for 60Co is 110 Bq/kg (3 pCi/g) and the scan sensitivity for this
       radionuclide has been determined to be 150 Bq/kg (4 pCi/g).  Calculations
       indicate the number of data points needed for  statistical testing is 27.  The
       distance between measurement locations for this number of data points  and the
       given land area is 8 m.  The area encompassed by a triangular sampling pattern of
       8 m is approximately 55.4 m2. From Table 5.6 an area factor of about 1.4 is
       determined by interpolation.  The acceptable concentration in a 55.4 m2 area is
       therefore 160 Bq/kg (1.4 x 110 Bq/kg).  Since the scan sensitivity of the procedure
       to be used is less than the DCGLW times the area factor, no additional data points
       are needed to demonstrate compliance with the elevated measurement comparison
       criteria.

       Example 2:

       A Class 1 land area survey unit of 1500 m2 is potentially contaminated with 60Co.
       The DCGL for 60Co is 110 Bq/kg (3 pCi/g). In contrast to Example 1, the scan
       sensitivity for this radionuclide has been determined to be 170 Bq/kg (4.6 pCi/g).
       Calculations indicate the number of data points needed for statistical testing is 15.
       The distance between measurement locations  for this number of data points and
       land area is 10 m.  The area encompassed by a triangular sampling pattern of 10 m
       is approximately 86.6 m2. From Table 5.6 an area factor of about 1.3 is
       determined by interpolation.  The acceptable concentration in an 86.6 m2 area is
       therefore 140 Bq/kg (1.3 x 110 Bq/kg).  Since the scan sensitivity of the procedure
       to be used is greater than the DCGLW times the area factor, the data points
       obtained for the statistical testing may not be sufficient to demonstrate compliance
       using the elevated measurement comparison.  The area multiplier for elevated
       activity  that would have to be achieved is  1.5 (170/110 Bq/kg). This is
       equivalent to an area of 30 m2 (Table 5.6) which would be obtained with a spacing
       of about 6 m. A triangular pattern of 6 m spacing includes 50 data points, so 50
       measurements should be performed in the survey unit.
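The area factors quoted in these examples appear consistent with simple linear interpolation
between the tabulated areas for Co-60 in Table 5.6, as the hypothetical sketch below illustrates;
the responsible regulatory agency should be consulted on an acceptable interpolation method and
on the area factors themselves.

    # Co-60 outdoor area factors from Table 5.6 (illustrative values only).
    CO60_AREA_FACTORS = [(1, 9.8), (3, 4.4), (10, 2.1), (30, 1.5), (100, 1.2),
                         (300, 1.1), (1000, 1.1), (3000, 1.0), (10000, 1.0)]

    def interpolate_area_factor(area_m2, table=CO60_AREA_FACTORS):
        """Linearly interpolate an area factor between the tabulated areas."""
        for (a_lo, f_lo), (a_hi, f_hi) in zip(table, table[1:]):
            if a_lo <= area_m2 <= a_hi:
                return f_lo + (area_m2 - a_lo) * (f_hi - f_lo) / (a_hi - a_lo)
        raise ValueError("area outside the tabulated range")

    print(round(interpolate_area_factor(55.4), 2))   # about 1.39 (Example 1 quotes 1.4)
    print(round(interpolate_area_factor(86.6), 2))   # about 1.26 (Example 2 quotes 1.3)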


5.5.2.5 Determining Survey Locations

A scale drawing of the survey unit is prepared, along with the overlying planar reference
coordinate system or grid system.  Any location within the survey area is thus identifiable by a
unique set of coordinates. The maximum length, X, and width, Y, dimensions of the survey unit
are then determined. Identifying and documenting a specific location for each measurement
performed is an important part of a final status survey to ensure that measurements can be
reproduced if necessary. The reference coordinate system described in Section 4.8.5 provides a
method for relating measurements to a specific location within a survey unit.

If the same values for α, β, and Δ/σ are used in Equation 5-1 or Equation 5-2, the required
number of measurements is  independent of survey unit classification.  This means that the same
number of measurements could be performed in a Class 1, Class 2, or Class 3 survey unit. While
this is a best case scenario, it points out the importance of identifying appropriate survey units
(e.g., size, classification) in  defining the level of survey effort. The spacing of measurements is
affected by the number of measurements, which is independent of classification. However, the
spacing of measurements is  also affected by survey unit area, the variability in the contaminant
concentration, and the interface with the models used to develop the DCGLs which are
dependent on classification.

Land Areas. Measurements and samples in Class 3 survey units and reference areas should be
taken at random locations. These locations are determined by generating sets of random numbers
(2 values, representing the X axis and Y axis distances). Random numbers can be generated by
calculator or computer, or can be obtained from mathematical tables.  Sufficient sets of numbers
will be needed to identify the total number of survey locations established for the survey unit.
Each set of random numbers is multiplied by the appropriate survey unit dimension to provide
coordinates, relative to the origin of the survey unit reference grid pattern.  Coordinates identified
in this manner, which do not fall within the survey unit area or which cannot be surveyed due to
site conditions, are replaced with other survey points determined in the same manner. Figure 5.4
is an example of a random sampling pattern. In this example, 8 data points were identified using
the appropriate formula based on the statistical tests (i.e., Equation 5-1 or Equation 5-2). The
locations of these points were determined using the table of random numbers found in Appendix
I, Table 1.6.
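A minimal sketch of this random-coordinate procedure is shown below; the function name and
the is_usable predicate are illustrative assumptions, not part of MARSSIM. Points that fall
outside the survey unit or cannot be surveyed are simply replaced, as described above.

    import random

    def random_survey_locations(n_points, x_max, y_max, is_usable=lambda x, y: True,
                                seed=None):
        """Generate n_points random (X, Y) coordinates within a survey unit of
        dimensions x_max by y_max.  Points rejected by is_usable (outside the survey
        unit boundary or otherwise unsurveyable) are replaced in the same manner."""
        rng = random.Random(seed)
        points = []
        while len(points) < n_points:
            x = rng.random() * x_max   # random fraction times the survey unit dimension
            y = rng.random() * y_max
            if is_usable(x, y):
                points.append((round(x), round(y)))
        return points

    # For example, 8 locations in a survey unit roughly 60 m (E) by 80 m (N):
    print(random_survey_locations(8, 60, 80, seed=1))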
[Figure 5.4 Example of a Random Measurement Pattern.  The figure shows a survey unit map
with a building, an onsite fence, the survey unit boundary, and eight randomly located surface
soil measurement/sampling locations at the following coordinates: #1: 52E, 24N; #2: 28E, 2N;
#3: 45E, 83N; #4: 47E, 5N; #5: 41E, 22N; #6: 0E, 44N; #7: 21E, 56N; #8: 35E, 63N.]


Class 2 areas are surveyed on a random-start systematic pattern.  The number of calculated
survey locations, n, based on the statistical tests, is used to determine the spacing, L, of a
systematic pattern by:
                          L = √[ A / (0.866 n) ]       for a triangular grid             (5-7)

                          L = √( A / n )               for a square grid                 (5-8)

where A is the area of the survey unit.

After L is determined, a random coordinate location is identified, as described previously, for a
survey pattern starting location. Beginning at the random starting coordinate, a row of points is
identified, parallel to the X axis, at intervals of L.

For a triangular grid, a second row of points is then developed, parallel to the first row, at a
distance of 0.866 x L from the first row. Survey points along that second row are midway (on
the X-axis) between the points on the first row. This process is repeated to identify a pattern of
survey locations throughout the affected survey unit. If identified points fall outside the survey
unit or at locations which cannot be surveyed, additional  points are determined using the random
process described above, until the desired total number of points is identified.

An example of such a survey pattern is shown in Figure 5.5. In this example, the statistical test
calculations estimate 20 samples (Table 5.5, α=0.01, β=0.05, Δ/σ>3.0). The random-start
coordinate was 27E, 53N.  The grid spacing was calculated using Equation 5-7:

                          L = √[ 5,100 m² / (0.866 × 20) ] = 17 m

Two points were identified on a row parallel to the X-axis, each 17 m from the starting point.
The subsequent rows were positioned 0.866 x L, or 15 m, from the initial row. This random-start
triangular sampling process resulted in 21 sampling locations, one of which was inaccessible
because of the building location, which yields the desired number of data points.
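The random-start triangular pattern described above can be sketched as follows; the function
name, the bounding dimensions, and the inside predicate are illustrative assumptions, not part of
MARSSIM. The spacing follows Equation 5-7, rows are separated by 0.866 × L, and alternate
rows are offset by L/2 along the X axis.

    import random
    from math import ceil, sqrt

    def triangular_grid(survey_area_m2, n_points, x_max, y_max,
                        inside=lambda x, y: True, seed=None):
        """Random-start triangular grid.  Only points for which inside(x, y) is True
        (i.e., within the survey unit and surveyable) are returned."""
        rng = random.Random(seed)
        spacing = sqrt(survey_area_m2 / (0.866 * n_points))             # Equation 5-7
        x0, y0 = rng.random() * spacing, rng.random() * spacing         # random start
        points = []
        for row in range(ceil(y_max / (0.866 * spacing)) + 1):
            y = y0 + row * 0.866 * spacing
            x_offset = (spacing / 2.0) if row % 2 else 0.0
            for col in range(ceil(x_max / spacing) + 1):
                x = x0 + x_offset + col * spacing
                if x <= x_max and y <= y_max and inside(x, y):
                    points.append((round(x, 1), round(y, 1)))
        return points

    # Figure 5.5 example: a 5,100 m2 survey unit and 20 statistically required samples
    # give a spacing of about 17 m; the bounding dimensions here are assumed only for
    # illustration.
    print(len(triangular_grid(5100.0, 20, x_max=90.0, y_max=60.0, seed=3)))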
[Figure 5.5 Example of a Random-Start Triangular Grid Measurement Pattern.  The figure shows
the survey unit map with the starting point for the triangular sampling grid at 27E, 53N, the
resulting surface soil measurement locations, one measurement location that is not sampled, the
survey unit boundary, and an onsite fence.]


For Class 1 areas a systematic pattern, having dimensions determined in Section 5.5.2.4, is
installed on the survey unit. The starting point for this pattern is selected at random, as described
above for Class 2 areas. The same process as described above for Class 2 areas applies to
Class 1, only the estimated number of samples is different.

Structure Surfaces.  All structure surfaces for a specific survey unit are included on a single
reference grid system for purposes of identifying survey locations. The same methods as
described above for land areas are then used to locate survey points for all classifications of
areas.

In addition to the survey locations identified for statistical evaluations and elevated measurement
comparisons, data will likely be obtained from judgment locations that are selected due to
unusual appearance, location relative to contamination areas, high potential for residual activity,
general supplemental information, etc. Data points selected based on professional judgment are
not included with the data points from the random-start triangular grid for statistical evaluations;
instead they are compared individually with the established DCGLs and conditions.
Measurement locations selected based on professional judgment violate the assumption of
unbiased measurements used to develop the statistical tests described in Chapter 8.

5.5.2.6  Determining Investigation Levels

An important aspect of the final status survey is the design and implementation of investigation
levels. Investigation levels are radionuclide-specific levels of radioactivity used to indicate when
additional investigations may be necessary. Investigation levels also serve as a quality control
check to determine when a measurement process begins to get out of control. For example, a
measurement that exceeds the investigation level may indicate that the survey unit has been
improperly classified  (see Section 4.4) or it may indicate a failing instrument.

When an investigation level is exceeded, the first step is to confirm that the initial
measurement/sample  actually exceeds the particular investigation level. This may involve taking
further measurements to determine that the area and level of the elevated residual radioactivity
are such that the resulting dose or risk meets the release criterion.2 Depending on the results of
the investigation actions, the survey unit may require reclassification, remediation, and/or
resurvey.  Table 5.8 illustrates an example of how investigation levels can be developed.
   2 Rather than, or in addition to, taking further measurements the investigation may involve assessing the
adequacy of the exposure pathway model used to obtain the DCGLs and area factors, and the consistency of the
results obtained with the Historical Site Assessment and the scoping, characterization and remedial action support
surveys.

               Table 5.8 Example Final Status Survey Investigation Levels

  Survey Unit       Flag Direct Measurement or             Flag Scanning Measurement
  Classification    Sample Result When:                    Result When:

  Class 1           > DCGLEMC, or                          > DCGLEMC
                    > DCGLW and > a statistical
                    parameter-based value

  Class 2           > DCGLW                                > DCGLW or > MDC

  Class 3           > fraction of DCGLW                    > DCGLW or > MDC
When determining an investigation level using a statistical-based parameter (e.g., standard
deviation), one should consider the survey objectives; the underlying radionuclide distributions
and an understanding of the corresponding distribution types (e.g., normal, log normal,
non-parametric); descriptors (e.g., standard deviation, mean, median); population stratifications
(i.e., are there sub-groups present?); and other prior survey and historical information. For example, a level might be
arbitrarily established at the mean + 3s, where s is the standard deviation of the survey unit,
assuming a normal distribution. A higher value might be used if locating discrete sources of
higher activity was a primary survey objective. By the time the final status survey is conducted,
survey units should be defined. Estimates of the mean, variance,  and standard deviation of the
radionuclide activity levels within the survey units should also be available.

For a Class 1 survey unit, measurements above the DCGLW are not necessarily unexpected.
However, a measurement above the DCGLW at one of the discrete measurement locations might
be considered unusual if it were much higher than all of the other discrete measurements.  Thus,
any discrete measurement that is both above the DCGLW and above the statistical-based
parameter for the measurements should be investigated further. Any measurement, either at a
discrete location or from a scan, that is above the DCGLEMC should be flagged for further
investigation.

In Class 2 or Class 3 areas, neither measurements above the DCGLW nor areas of elevated
activity are expected. Any measurement at a discrete location exceeding the DCGLW in these
areas should be flagged for further investigation. Because the survey design for Class 2 and
Class 3 survey units is not driven by the EMC, the  scanning MDC might exceed  the DCGLW. In
this case, any indication of residual radioactivity during the scan would warrant further
investigation.
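As a rough sketch of how the Table 5.8 investigation levels might be applied in software, the
hypothetical functions below take the statistical parameter-based value (e.g., the mean + 3s
discussed above) and the Class 3 fraction of DCGLW as planner-supplied inputs; these values,
and the function names, are illustrative assumptions rather than values specified by MARSSIM.

    def flag_direct_measurement(result, survey_class, dcgl_w, dcgl_emc,
                                stat_based_value, class3_fraction=0.5):
        """Return True if a direct measurement or sample result should be flagged for
        further investigation under the example levels in Table 5.8."""
        if survey_class == 1:
            return result > dcgl_emc or (result > dcgl_w and result > stat_based_value)
        if survey_class == 2:
            return result > dcgl_w
        return result > class3_fraction * dcgl_w          # Class 3

    def flag_scan_result(result, survey_class, dcgl_w, dcgl_emc, scan_mdc):
        """Scanning result flags based on Table 5.8.  For Class 2 and Class 3 units,
        the prose in Section 5.5.2.6 indicates that when the scan MDC exceeds DCGLW,
        any detectable indication (anything above the MDC) warrants investigation."""
        if survey_class == 1:
            return result > dcgl_emc
        threshold = dcgl_w if scan_mdc <= dcgl_w else scan_mdc
        return result > threshold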

The basis for using the DCGLEMC rather than the more conservative criteria for Class 2  and
Class 3 areas should be justified in survey planning documents. For example, where there is high
uncertainty in the reported scanning MDC, a more conservative criterion would be warranted.

Similarly, DQA for scanning may warrant a more conservative flag, as would greater uncertainty
from Historical Site Assessment or other surveys on the size of potential areas of elevated
activity.  In some cases, it may even be necessary to agree in advance with the regulatory agency
responsible for the site on which site-specific investigation levels will be used, if other than those
presented in Table 5.8.

Because there is a low expectation for residual radioactivity in a Class 3  area, it may be prudent
to investigate any measurement exceeding even a fraction of the DCGLW. The level selected in
these situations depends on the site, the radionuclides of concern,  and the measurement and
scanning methods chosen. This level should be set using the DQO Process during the survey
design phase of the Data Life Cycle. In some cases, the user may  also wish to follow this
procedure for Class 2  and even Class 1 survey units.

5.5.3   Developing an Integrated  Survey Strategy

The final step in survey design is to integrate the survey techniques (Chapter 6) with the number
of measurements and measurement spacing determined earlier in this chapter. This integration
along with the guidance provided in other portions of this manual  produce an overall strategy for
performing the survey. Table 5.9 provides  a summary of the recommended survey coverage for
structures and land areas. This survey coverage for different areas is the subject of this section.

Random measurement patterns are used for Class 3 survey units to ensure that the  measurements
are  independent and support the assumptions of the statistical tests.  Systematic grids are used for
Class 2 survey units because there is an increased probability of small areas of elevated activity.
The use of a systematic grid allows the decision maker to draw conclusions about the size of the
potential  areas of elevated activity based on the area between measurement locations. The
random starting point  of the grid  provides an unbiased method for obtaining measurement
locations to be used in the statistical tests.  Class 1  survey units have the highest potential for
small areas of elevated activity, so the areas between measurement locations  are adjusted to
ensure that these areas can be detected by scanning techniques.

The objectives of the scanning surveys are different.  Scanning is used to identify locations
within the survey unit that exceed the investigation level. These locations are marked and
receive additional investigations to  determine the concentration, area, and extent of the
contamination.

For Class 1 areas, scanning surveys are designed to detect small areas of elevated activity that are
not detected by the measurements using the systematic pattern. For this reason the measurement
locations, and the number of measurements, may need to be adjusted based on the  sensitivity of
the  scanning technique (Section 5.5.2.4). This is also the reason for recommending 100%
        Table 5.9 Recommended Survey Coverage for Structures and Land Areas

                                Structures                                  Land Areas
  Area             -------------------------------------    -------------------------------------
  Classification   Surface Scans     Surface Activity        Surface Scans     Soil Samples
                                     Measurements

  Class 1          100%              Number of data points   100%              Number of data points
                                     from statistical tests                    from statistical tests
                                     (Sections 5.5.2.2 and                     (Sections 5.5.2.2 and
                                     5.5.2.3); additional                      5.5.2.3); additional
                                     measurements may be                       measurements may be
                                     necessary for small                       necessary for small
                                     areas of elevated                         areas of elevated
                                     activity (Section                         activity (Section
                                     5.5.2.4)                                  5.5.2.4)

  Class 2          10 to 100%        Number of data points   10 to 100%,       Number of data points
                   (10 to 50% for    from statistical tests  systematic and    from statistical tests
                   upper walls and   (Sections 5.5.2.2 and   judgmental        (Sections 5.5.2.2 and
                   ceilings),        5.5.2.3)                                  5.5.2.3)
                   systematic and
                   judgmental

  Class 3          Judgmental        Number of data points   Judgmental        Number of data points
                                     from statistical tests                    from statistical tests
                                     (Sections 5.5.2.2 and                     (Sections 5.5.2.2 and
                                     5.5.2.3)                                  5.5.2.3)
coverage for the scanning survey. 100% coverage means that the entire surface area of the
survey unit is covered by the field of view of the scanning instrument.  If the field of view is two
meters wide, the survey instrument can be moved along parallel paths two meters apart to
provide  100% coverage. If the field of view of the detector is 5 cm, the parallel paths should be
5 cm apart.

Scanning surveys in Class 2 areas are also primarily performed to find areas of elevated activity
not detected by the measurements using the systematic pattern.  However, the measurement
locations are not adjusted based on sensitivity of the scanning technique and scanning is
performed in portions of the survey unit.  The level of scanning effort should be proportional to
the potential for finding areas of elevated activity based on the conceptual site model developed
and refined from Section 3.6.4.  A larger portion of the survey unit would be scanned in Class 2
survey units that have residual radioactivity close to the release criterion, but for survey units that
are closer to background, scanning a smaller portion of the survey unit may be appropriate.
Class 2 survey units have a lower probability for areas of elevated activity than Class 1 survey
units, but some portions of the survey unit may have a higher potential than others. Judgmental


scanning surveys focus on the portions of the survey unit with the highest probability for areas of
elevated activity. If the entire survey unit has an equal probability for areas of elevated activity,
or the judgmental scans don't cover at least 10% of the area, systematic scans along transects of
the survey unit or scanning surveys of randomly selected grid blocks are performed.

Class 3 areas have the lowest potential for areas of elevated activity. For this reason, scanning
surveys are recommended for areas with the highest potential for contamination (e.g., corners,
ditches, drains) based on professional judgment. Such recommendations are typically provided
by a health physics professional with radiation survey experience. This provides a qualitative
level of confidence that no areas of elevated activity were missed by the random measurements
or that there were no errors made in the classification of the area.

The sensitivity for scanning techniques used in Class 2 and Class 3 areas is not tied to the area
between measurement locations, as it is in a Class 1 area (see Section 5.5.2.4).  The scanning
techniques selected should represent the best reasonable effort based on the survey objectives.
Structure surfaces are generally scanned for alpha, beta, and gamma emitting radionuclides.
Scanning for alpha emitters or low-energy (<100 keV) beta emitters for land area survey units is
generally not considered effective because of problems with attenuation and media  interferences.
If one can reasonably expect to find any  residual radioactivity, it is prudent to perform  a
judgmental scanning survey.

If the equipment and methodology used for  scanning is capable of providing data of the same
quality as direct measurements (e.g., detection limit,  location of measurements,  ability to record
and document results), then scanning may be used in place of direct measurements. Results
should be documented for at least the number of locations estimated for the statistical tests.  The
same logic can be applied for using direct measurements instead of sampling. In addition, some
direct measurement systems may be able to  provide scanning data.

As previously discussed, investigation levels are determined and used to indicate when additional
investigations may be necessary or when a measurement process begins to get out of control.
The results of all investigations should be documented in the final status survey report, including
the results of scan surveys that may have potentially identified areas of elevated direct radiation.

5.5.3.1  Structure Surveys

Class 1 Areas. Surface scans are performed over  100% of structure surfaces for radiations
which might be emitted from the potential radionuclide contaminants. Locations of direct
radiation,  distinguishable above background radiation, are identified and evaluated. Results of
initial and followup direct measurements and sampling at these locations are recorded and
documented in the final status survey report. Measurements of total and removable
contamination are performed at locations identified by scans and at previously determined
locations (Section 5.5.2.5).  Where gamma emitting radionuclides are present, in situ gamma
spectroscopy may be used to identify the presence of specific radionuclides or to demonstrate
compliance with the release criterion.

Direct measurement or sample investigation levels for Class 1 areas should establish a course of
action for individual measurements that approach or exceed the DCGLW. Because measurements
above the DCGLW are not necessarily unexpected in a Class 1 survey unit, additional
investigation levels may be established to identify discrete measurements that are much higher
than the other measurements. Any discrete measurement that is both above the DCGLW and
exceeds three times the standard deviation (s) of the mean should be investigated further (Section
5.5.2.6).  Any measurement (direct measurement, sample, or scan) that exceeds the DCGLEMC
should be flagged for further investigation. The results of the investigation and any additional
remediation that was performed should be included in the final status survey report. Data are
reviewed as described in Section 8.2.2, additional data are collected as necessary, and the final
complete data set evaluated as described in Section 8.3 or Section 8.4.
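
The flagging logic described in this paragraph can be expressed compactly. The sketch below
assumes hypothetical results and DCGL values expressed in the same units; actual investigation
levels are established through the DQO Process (Section 5.5.2.6).

        from statistics import mean, stdev

        # Flag Class 1 measurements for further investigation: any result above
        # the DCGLemc, or any result above the DCGLw that is also more than
        # three standard deviations above the mean.  Values are hypothetical.
        def flag_for_investigation(results, dcgl_w, dcgl_emc):
            m, s = mean(results), stdev(results)
            return [i for i, x in enumerate(results)
                    if x > dcgl_emc or (x > dcgl_w and x > m + 3 * s)]

        example = [0.4, 0.6, 0.5, 1.4, 0.7, 3.2]                          # results in DCGLw units
        print(flag_for_investigation(example, dcgl_w=1.0, dcgl_emc=3.0))  # -> [5]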

Class 2 Areas. Surface scans are performed over 10 to 100% of structure surfaces. Generally,
upper wall surfaces and ceilings should receive surface scans over 10 to 50% of these areas.
Locations of scanning survey results above the investigation level  are identified and investigated.
If small areas of elevated activity are confirmed by this investigation, all or part of the survey  unit
should be reclassified as Class 1 and the  survey strategy for that survey unit redesigned
accordingly.

Investigation levels for Class 2 areas should establish a course of action for individual
measurements that exceed or approach the DCGLW. The results of the investigation of the
positive measurements and basis for reclassifying all or part of the survey unit as Class 1 should
be included in the final status survey report. Where gamma emitting radionuclides are
contaminants, in situ gamma spectroscopy may be used to identify the  presence of specific
radionuclides or to demonstrate compliance with the release criterion.  Data are reviewed as
described in Section 8.2.2, additional data are collected as necessary, and the  final complete data
set evaluated as described in Section 8.3  or Section 8.4.

Class 3 Areas. Scans of Class 3 area surfaces should be performed for all radiations which
might be emitted from the potential radionuclide contaminants. MARSSIM recommends that the
surface area be scanned. Locations of scanning survey results above the investigation level are
identified and evaluated.  Measurements of total and removable contamination are performed at
the locations identified by the scans and at the randomly selected locations that are chosen in
accordance with Section 5.5.2.5. Identification of contamination suggests that the area may be
incorrectly classified.  If so, a re-evaluation of the Class 3 area classification should be performed
and, if appropriate, all or part of the survey unit should be resurveyed as a Class 1 or Class 2 area.
In some cases the investigation may include measurements by in situ gamma spectroscopy at a
few locations in each structure in a Class 3 area. A gamma spectroscopy system might even be
an appropriate substitution for surface scans.

Because there is a low expectation for residual radioactivity in a Class 3 area, it may be prudent
to investigate any measurement exceeding even a fraction of the DCGLW.  The investigation level
selected will depend on the site, the radionuclides of concern, and the measurement and scanning
methods chosen. This level should be determined using the DQO Process during survey
planning.  In some cases, the user may wish to follow this procedure for Class 2 survey units.

The results of the investigation of the measurements that exceed the investigation level and the
basis for reclassifying all or part of the survey unit as Class 1 or Class 2 should be included in the
final status survey report. The data are tested relative to the preestablished criteria.  If additional
data are needed, they should be collected and evaluated as part of the entire data set.

5.5.3.2 Land Area Surveys

Class 1 Areas. As with structure surfaces, 100% scanning coverage of Class 1 land areas is
recommended. Locations of scanning survey results above the investigation level are identified
and evaluated. Results of initial and followup direct measurements and sampling at these
locations are recorded.  Soil sampling is performed at locations identified by scans and at
previously determined locations (Section 5.5.2.5).  Where gamma emitting radionuclides are
contaminants, in situ gamma spectroscopy may be used to confirm the absence of specific
radionuclides or to demonstrate compliance.

Direct measurement or sample investigation levels for Class 1 areas should establish a course of
action for individual measurements that approach or exceed the DCGLW. Because measurements
above the DCGLW are not necessarily unexpected in a Class 1 survey unit, additional
investigation levels may be established to identify discrete measurements that are much higher
than the other measurements.  Any discrete measurement that is both above the DCGLW and
exceeds three standard deviations above the mean should be investigated further (Section
5.5.2.6).   Any measurement (direct measurement, sample, or scan) that exceeds the DCGLEMC
should be flagged for further investigation. The results of the investigation and any additional
remediation that was performed should be included in the final status survey report. Data are
reviewed as described in Section  8.2.2, additional data are collected as necessary, and the final
complete data set evaluated as described in Section 8.3 or Section 8.4.

Class 2 Areas. Surface scans are performed over 10 to 100% of open land surfaces. Locations
of direct radiation above the scanning survey investigation level are identified and evaluated. If
small areas of elevated activity are identified, the survey unit should be reclassified as "Class 1"
and the survey strategy for that survey unit redesigned accordingly.

If small areas of elevated activity above DCGL values are not identified, direct measurement or
soil sampling is performed at previously determined locations (Section 5.5.2.5). Where gamma
emitting radionuclides are contaminants, in situ gamma spectroscopy may be used to confirm the
absence of specific radionuclides or to demonstrate compliance.  Data are reviewed as described
in Section 8.2.2, additional data are collected as necessary, and the final complete data set
evaluated as described in Section 8.3 or Section 8.4.

Investigation levels for Class 2 areas should establish levels for investigation of individual
measurements close to but below the DCGLW.  The results of the investigation of the positive
measurements and basis for reclassifying all or part of the survey unit as Class 1 should  be
included in the final status survey report.

Class 3 Areas. Class 3 areas may be uniformly scanned for radiations from the radionuclides of
interest, or the scanning may be performed in areas with the greatest potential for residual
contamination based on professional judgment and the objectives of the survey. In some cases a
combination of these approaches may be the most appropriate. Locations exceeding the scanning
survey investigation level are evaluated, and, if the presence of contamination not occurring in
background is identified, reevaluation of the classification of contamination potential should be
performed.

Investigation levels for Class 3 areas should be established to identify areas of elevated activity
that may indicate the presence of residual radioactivity.  Scanning survey locations that exceed
the investigation level should be flagged for further investigation. The results of the
investigation and basis for reclassifying all or part  of the survey unit as Class 1  or Class 2 should
be included in the final status survey report.  The data are tested relative to the preestablished
criteria. If additional data are needed, they should  be collected and evaluated as part of the entire
data set.  Soil sampling is performed at randomly selected locations (Section 5.5.2.5); if the
contaminant can be measured at DCGL levels by in situ techniques, this method may be used to
replace or supplement the sampling and laboratory analysis approach.  For gamma emitting
radionuclides, the above data should be supplemented by several exposure rate and/or in situ
gamma spectrometry measurements. Survey results are tested for compliance with DCGLs and
additional data are collected and tested, as necessary.

5.5.3.3  Other Measurement/Sampling Locations

In addition to the building and land surface areas described above, there are numerous other
locations where measurements and/or sampling may be necessary.  Examples include items of
equipment and furnishings,  building fixtures, drains, ducts, and piping. Many of these items or
locations have both internal and external surfaces with potential residual radioactivity.
Subsurface measurements and/or sampling may also be necessary.  Guidance on conducting or
evaluating these types of surveys is outside the scope of MARSSIM.

Special situations may be evaluated by judgment sampling and measurements.  Data from such
surveys should be compared directly with DCGLs developed for the specific situation. Areas of
elevated direct radiation identified by surface scans are typically followed by direct
measurements or samples. These direct measurements and samples are not included in the
nonparametric tests described in this manual, but rather, should be compared directly with
DCGLs developed for the specific situation.

Quality control measurements are recommended for all surveys, as described in Section 4.9,
Section 6.2, and Section 7.2. Also, some regulatory programs require removable activity
measurements (e.g., NRC Regulatory Guide 1.86; NRC 1974). These additional  measurements
should be considered during survey planning.

5.5.4   Evaluating Survey Results

After data are converted to DCGL units, the process of comparing the results to the DCGLs,
conditions, and objectives begins.  Individual measurements and sample concentrations are first
compared to DCGL levels for evidence of small areas of elevated activity and not to determine if
reclassification is necessary. Additional data or additional remediation and resurvey may be
necessary.  Data are then evaluated using statistical methods to determine if they  exceed the
release criterion.  If the release criterion has been exceeded or if results indicate the need for
additional data points, appropriate further actions will be determined by the site management and
the responsible regulatory agency.  The scope of further actions should be agreed upon and
developed as part of the DQO Process before the survey begins (Appendix D).  Finally, the
results of the survey are compared with the data quality objectives established during the
planning phase of the project. Note that Data Quality Objectives may require a report of the
semi-quantitative evaluation of removable contamination resulting from the analysis of smears.
These results may be used to satisfy regulatory requirements or to evaluate the  effectiveness of
ALARA  procedures.  Chapter 8 describes detailed procedures for evaluating survey results.

5.5.5   Documentation

Documentation of the final status survey should provide a complete and unambiguous record of
the radiological status of the survey unit, relative to the established DCGLs.  In addition,
sufficient data and  information should be provided to enable an independent re-creation and
evaluation at some future time. Much of the information in the final status report will be
available from other decommissioning documents; however, to the extent practicable, this report
should be a stand-alone document with minimum  information incorporated by  reference. The
report should be independently reviewed (see Section 3.9) and should be approved by a
designated  person (or persons) who is capable of evaluating all aspects of the report prior to
release, publication, or distribution.

                  EXAMPLE FINAL STATUS SURVEY CHECKLIST

SURVEY PREPARATIONS

	      Ensure that residual radioactivity limits have been determined for the
              radionuclides present at the site, typically performed during earlier surveys
              associated with the decommissioning process.

	      Identify the radionuclides of concern. Determine whether the radionuclides of
              concern exist in background.  This will  determine whether one-sample or two-
              sample tests are performed to demonstrate compliance.  Two-sample tests are
              performed when radionuclides are present in the natural background; one-sample
              tests may be performed if the radionuclide is not present in background.

	      Segregate the site into Class 1, Class 2,  and Class 3 areas, based on contamination
              potential.

	      Identify survey units.

	      Select representative reference (background) areas for both indoor and outdoor
              survey areas. Reference areas are selected from non-impacted areas and

                    	      are free of contamination from site operations,
                    	      exhibit similar physical, chemical, and biological
                                  characteristics of the survey area,

                    	      have similar construction, but have no history of
                                  radioactive operations.

              Select survey instrumentation and survey techniques.  Determine MDCs (select
              instrumentation based on the radionuclides present) and match between
              instrumentation and DCGLs—the selected instruments should  be capable of
              detecting the contamination at 10-50% of the DCGLs.

              Prepare area if necessary—clear and provide access to areas to be surveyed.

              Establish reference coordinate systems (as appropriate).
SURVEY DESIGN
	      Enumerate DQOs:  State objective of survey, state the null and alternative
              hypotheses, specify the acceptable decision error rates (Type I (α) and Type II (β)).

	      Specify sample collection and analysis procedures.

	      Determine numbers of data points for statistical tests, depending on whether or
              not the radionuclide is present in background.

              	     Specify the number of samples/measurements to be obtained based
                           on the statistical tests.

              	     Evaluate the power of the statistical tests to determine that the
                           number of samples is appropriate.

              	     Ensure that the sample size is sufficient for detecting areas of
                           elevated activity.

              	     Add additional samples/measurements for QC and to allow for
                           possible loss.

	      Specify sampling locations.

	      Provide information on survey instrumentation and techniques.  The decision to
              use portable survey instrumentation or in situ techniques, and/or a combination of
              both, depends on whether or not the radiation levels are elevated compared to
              natural background, and whether or not the residual radioactivity is present at
              some fraction of background levels.

	      Specify methods of data reduction and comparison of survey units to reference
              areas.

	      Provide quality control procedures and QAPP for ensuring validity of survey data:

              	     properly calibrated instrumentation,

              	     necessary replicate, reference and blank measurements,

              	     comparison of field measurement results to laboratory sample
                           analyses.

	      Document the survey plan (e.g., QAPP, SOPs, etc.).


CONDUCTING SURVEYS

	     Perform reference (background) area measurements and sampling.

             Conduct survey activities:

             	     Perform surface scans of the Class 1, Class 2, and Class 3 areas.

             	     Conduct surface activity measurements and sampling at previously
                          selected sampling locations.

             	     Conduct additional direct measurements and sampling at locations
                          based on professional judgment.

	     Perform and document any necessary investigation activities, including survey
             unit reclassification, remediation, and resurvey.

	     Document measurement and sample locations; provide information on
             measurement system MDC and measurement errors.

	     Document any observations, abnormalities, and deviations from the QAPP or SOPs.
EVALUATING SURVEY RESULTS

	     Review DQOs.

	     Analyze samples.

	     Perform data reduction on survey results.

	     Verify assumptions of statistical tests.

	     Compare survey results with regulatory DCGLs:

             	     Conduct elevated measurement comparison.

             	     Determine area-weighted average, if appropriate.

             	     Conduct WRS or Sign tests (a rough illustration follows this checklist).

	     Prepare final status survey report.

	     Obtain an independent review of the report.
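
As a rough, non-authoritative illustration of the WRS comparison named in the checklist above,
the sketch below uses SciPy's Mann-Whitney U implementation as a stand-in for the WRS ranking
procedure detailed in Chapter 8. The reference area data, survey unit data, and DCGLw are
hypothetical; the hypotheses, sample sizes, and critical values for an actual survey come from the
DQO-based design and Chapter 8.

        from scipy import stats

        # Sketch of a WRS-style comparison: add the (hypothetical) DCGLw to each
        # reference area result and test whether the adjusted reference results
        # are distinctly larger than the survey unit results.
        reference = [0.9, 1.1, 1.0, 0.8, 1.2, 1.0]      # hypothetical reference area results
        survey_unit = [1.0, 1.3, 0.9, 1.2, 1.1, 1.4]    # hypothetical survey unit results
        dcgl_w = 1.5                                    # hypothetical DCGLw, same units

        adjusted_reference = [r + dcgl_w for r in reference]
        # Null hypothesis: the survey unit exceeds the release criterion.
        stat, p_value = stats.mannwhitneyu(adjusted_reference, survey_unit,
                                           alternative="greater")
        print(f"U = {stat}, p = {p_value:.3f}")
        print("reject H0 (survey unit passes)" if p_value < 0.05 else "cannot reject H0")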

    6  FIELD MEASUREMENT METHODS AND INSTRUMENTATION
6.1    Introduction

Measurement is used in MARSSIM to mean 1) the act of using a detector to determine the level
or quantity of radioactivity on a surface or in a sample of material removed from a medium being
evaluated, or 2) the quantity obtained by the act of measuring. Three methods are available for
collecting radiation data while performing a survey—direct measurements, scanning, and
sampling. This chapter discusses scanning and direct measurement methods and
instrumentation. The collection and analysis of media samples are presented in Chapter 7.
Information  on the operation and use of individual field and laboratory instruments is provided in
Appendix H. Quality assurance and quality control (QA/QC) are discussed in Chapter 9.

Total surface activities, removable surface activities, and radionuclide concentrations in various
environmental media (e.g., soil, water, air) are the radiological parameters typically determined
using field measurements and laboratory analyses. Certain radionuclides or radionuclide
mixtures may necessitate the measurement of alpha, beta, and gamma radiations. In addition to
assessing each survey unit as a whole, any small areas  of elevated activity should be identified
and their extent and activities determined.  Due  to numerous detector requirements, no single
instrument (detector and readout combination) is generally capable of adequately measuring all
of the parameters required to satisfy the release  criterion or meet all the objectives of a survey.

Selecting instrumentation requires evaluation of both site and radionuclide specific parameters
and conditions. Instruments should be stable and reliable under the environmental and physical
conditions where they are used, and their physical characteristics (size and weight) should be
compatible with the intended application.  The instrument and measurement method should be
able to detect the type of radiation of interest, and should, in relation to the survey or analytical
technique, be capable of measuring levels that are less than the derived concentration guideline
level (DCGL). Numerous commercial firms offer a wide variety of instruments appropriate for
the radiation measurements described in this manual. These firms can provide thorough
information  regarding capabilities, operating characteristics, limitations, etc., for specific
equipment.

If the field instruments and measurement methods cannot detect radiation levels below the
DCGLs, laboratory methods discussed in Chapter 7 are typically used. A discussion of detection
limits and detection levels for some typical instruments is presented in Section 6.7. There are
certain radionuclides that  will be essentially impossible to measure at the DCGLs in situ using
current state-of-the-art instrumentation and techniques because of the types, energies, and
abundances of their radiations. Examples of such radionuclides include very low energy, pure
beta emitters such as 3H and 63Ni and low-energy photon emitters such as 55Fe and 125I.  Pure
alpha emitters  dispersed in soil or covered with  some absorbing layer may not be detectable
because alpha radiation will not penetrate through the media or covering to reach the detector. A
common example of such a condition would be 230Th surface contamination, covered by paint,
dust, oil, or moisture. NRC report NUREG-1507 (NRC 1997a) provides information on the
extent to which these surface conditions may affect detection sensitivity.  In circumstances such
as these, the survey design will usually rely on sampling and laboratory analysis to measure
residual activity levels.
6.2    Data Quality Objectives

The third step of the Data Quality Objectives (DQO) Process involves identifying the data needs
for a survey. One decision that can be made at this step is the selection of direct measurements
for performing a survey or deciding that sampling methods followed by laboratory analysis are
necessary.

6.2.1   Identifying Data Needs

The decision maker and the survey planning team need to identify the data needs for the survey
being performed, including the:

•      type of measurements to be performed (Chapter 5)
•      radionuclide(s) of interest (Section 4.3)
•      number of direct measurements to be performed (Section 5.5.2)
•      area of survey coverage for surface scans based on survey unit classification (Section
       5.5.3)
•      type and frequency of field QC measurements to be performed (Section 4.9)
•      measurement locations and frequencies (Section 5.5.2)
•      standard operating procedures (SOPs) to be followed or developed (Chapter 6)
•      analytical bias and precision (e.g., quantitative or qualitative) (Appendix N, Section N.6)
•      target detection limits for each radionuclide of interest (Section 6.4)
•      cost of the methods being evaluated (cost per measurement as well as total cost)
       (Appendix H)
•      necessary turnaround time
•      specific background for the radionuclide(s) of interest (Section 4.5)
•      derived concentration guideline level (DCGL) for each radionuclide of interest
       (Section 4.3)
•      measurement documentation requirements
•      measurement tracking requirements

Some of this information will be supplied by subsequent steps in the DQO process, and several
iterations of the process may be needed to identify all of the data needs.  Consulting with a health
physicist or radiochemist may be necessary to properly evaluate the information before deciding
between direct measurements or sampling methods to perform the survey. Many surveys will
involve a combination of direct measurements and sampling methods, along with scanning
techniques, to demonstrate compliance with the release criterion.

6.2.2   Data Quality Indicators

The data quality indicators identified as DQOs in Section 2.3.1 and described in Appendix N
should be considered when selecting a measurement method (i.e., scanning, direct measurement,
sampling) or a measurement system (e.g., survey instrument, human operator, and procedure for
performing measurements). In some instances, the data quality indicator requirements will help
in the selection of a measurement system.  In other cases, the requirements of the measurement
system will assist in the selection of appropriate levels for the data quality indicators.

6.2.2.1 Precision

Precision is a measure of agreement among replicate measurements of the same property, under
prescribed similar conditions (ASQC  1995). Precision is determined quantitatively based on the
results of replicate measurements (equations are provided in EPA 1990). The number of
replicate analyses needed to determine a specified level of precision for a project is discussed in
Section 4.9.  Determining precision by replicating measurements with results at or near the
detection limit of the measurement system is not recommended because the measurement
uncertainty is usually greater than the desired level of precision. The types of replicate
measurements  applied to scanning and direct measurements are limited by the relatively
uncomplicated measurement system (i.e., the uncertainties associated with sample collection and
preparation are eliminated). However, the uncertainties associated with applying a single
calibration factor to a wide variety of site conditions mean these measurements are very useful
for assessing data quality.

•      Replicates to Measure Operator Precision.  For scanning and direct measurements,
       replicates to measure operator precision provide an estimate of precision for the operator
       and the Standard Operating Procedure (SOP) or protocol used to perform the
       measurement.  Replicates to measure operator precision are measurements performed
       using the same instrument at the same location,  but with a different operator. Replicates
       to measure operator precision  are usually non-blind or single-blind measurements.

•      Replicates to Measure Instrument Precision.  For scanning and direct measurements,
       replicates to measure instrument precision  provide an estimate of precision for the type of
       instrument, the calibration, and the SOP or protocol used to perform the measurement.
       Replicates to measure instrument precision are measurements performed by the same
       operator at the same location, but with a different instrument. Replicates to measure
       instrument precision are usually non-blind or single-blind measurements.

For many surveys a combination of instrument and operator replicates are used to provide an
estimate of overall precision for both scanning and direct measurements. Replicates of direct
measurements can be compared with one another similar to the analytical results for samples.
Results for scanning replicates may be obtained by stopping and recording instrument readings at
specific intervals during the scanning survey (effectively performing direct measurements at
specified locations). An alternative method for estimating the precision of scanning is to
evaluate the effectiveness of the scanning survey for identifying areas of elevated activity. The
results of scanning are usually locations that are identified for further investigation. A
comparison of the areas identified by the replicate scanning surveys can be performed either
quantitatively (using statistical methods) or qualitatively (using professional judgment). Because
there is a necessity to evaluate whether the same number of locations were identified by both
replicates as well as if the identified locations  are the same, there is difficulty in developing
precision as a DQO that can be evaluated.
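
When replicate pairs are recorded as discrete readings, one common way to quantify their
agreement is the relative percent difference, sketched below with hypothetical readings.

        # Relative percent difference (RPD) for a pair of replicate measurements
        # (same location, different operator or different instrument).  The
        # example readings are hypothetical counts in cpm.
        def relative_percent_difference(x1, x2):
            """RPD = |x1 - x2| / mean(x1, x2) * 100."""
            return abs(x1 - x2) / ((x1 + x2) / 2.0) * 100.0

        print(f"RPD = {relative_percent_difference(250.0, 268.0):.1f}%")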

6.2.2.2  Bias

Bias is the systematic or persistent distortion of a measurement process that causes error in one
direction (EPA 1997a). Bias is determined quantitatively based on the measurement of materials
with a known concentration. There are several types of materials with known concentrations that
may be used to determine bias for scans and direct measurements.

•      Reference Material. Reference material is a material or substance one or more of whose
       property values are sufficiently homogeneous and well established to be used for the
       calibration of an apparatus, the assessment of a measurement method, or for assigning
       values to materials (ISO 1993).  A certified reference material is reference  material for
       which each certified property value is accompanied by an uncertainty at a stated level of
       confidence. Radioactive reference materials may be available for certain radionuclides in
       soil (e.g., uranium in soil), but reference building materials may not be available.
       Because reference materials are prepared and homogenized as part of the certification
       process, they are rarely available as double-blind samples.  When appropriate reference
       materials are available (i.e., proper matrix, proper radionuclide, proper concentration
       range) they are recommended for use in determining the overall bias for a measurement
       system.  For scanning and direct measurements a known amount  of reference material is
       sealed in a known geometry. This known material is measured in the field using a
       specified protocol (e.g., specified measurement time at a specified distance from the
       reference material) to evaluate the performance of the instrument only.

•      Performance Evaluation (PE) Samples. PE samples are used to evaluate the bias of the
       instrument and detect any error in the instrument calibration. These samples are usually
       prepared by a third party, using a quantity of analyte(s) which is known to the preparer
       but unknown to the operator, and always undergo certification analysis. The analyte(s)
       used to prepare the PE sample is the same as the analyte(s) of interest (EPA 1991g). PE
       samples are recommended for use in determining bias for a measurement system when
       appropriate reference materials are not available. PE samples are equivalent to matrix
       spikes prepared by a third party that undergo certification analysis and can be non-blind
       or single-blind when used to measure bias for scanning and direct measurements.

•      Matrix Spike Samples.  Matrix spike samples are environmental samples that are spiked
       in the laboratory with a known concentration of a target analyte(s) to verify percent
       recoveries.  They are primarily used to check sample matrix interferences but can also be
       used in the field to monitor instrument performance  (EPA 1991g).  Matrix Spike samples
       are often replicated to monitor a method's performance and evaluate bias and precision
       (when four or more pairs are analyzed).  These replicates are  often collectively referred to
       as a matrix spike/matrix spike duplicate (MS/MSD).

•      Calibration Checks.  Calibration checks are measurements  performed to verify instrument
       performance each time an instrument is used (see Section 6.5.4). These checks may be
       qualitative or quantitative.  Operators use qualitative checks to determine if an instrument
       is operating properly and can be used to perform measurements. Quantitative calibration
       checks require a specified protocol to measure a calibration source with a known
       instrument response, and the results are documented to provide a record of instrument
       precision and bias. The results of quantitative calibration checks are typically recorded
       on a control chart (see Section 6.2.2.7).  Note that the calibration check source does not
       need to be traceable for qualitative or quantitative calibration checks as long as the
       instrument response has been adequately established (see Section 6.5.4). Because
       calibration checks are non-blind measurements they  are only  recommended when other
       types of QC measurements are not available.

Quality control measurements can also be used to estimate bias caused by contamination.

•      Background Measurement. A background measurement is a  measurement performed
       upgradient of the area of potential contamination (either onsite or offsite) where there is
       little or no chance of migration of the contaminants of concern (EPA 1991g).
       Background measurements are performed in the background reference area (Section 4.5),
       determine the natural composition and variability of the material of interest (especially
       important in areas with high concentrations of naturally occurring radionuclides), and are
       considered "clean." They provide a basis for comparison of contaminant concentration
       levels with measurements performed in the survey unit when the statistical tests described
       in Chapter 8 are performed.

•      Measurement Blanks. Measurement blanks are samples prepared in the laboratory using
       certified clean sand or soil and brought to the field to monitor contamination for scanning
       and direct measurements.  A measurement blank is used to evaluate contamination error
       associated with the instrument used to perform measurements in the field. Measurement
       blanks are recommended for determining bias resulting from contamination of
       instruments used for scanning and direct measurements.
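
Whichever of the materials described above is used, the bias estimate itself reduces to comparing
measured results with the known value. A minimal sketch, assuming hypothetical readings of a
source or sample whose value is known in the same units:

        # Relative bias from repeated measurements of a material with a known
        # value (reference material, PE sample, or check source).  Readings and
        # known value are hypothetical.
        def percent_bias(measured_results, known_value):
            mean_measured = sum(measured_results) / len(measured_results)
            return (mean_measured - known_value) / known_value * 100.0

        readings = [98.0, 103.0, 101.0, 97.0]
        print(f"bias = {percent_bias(readings, known_value=100.0):+.1f}%")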

6.2.2.3 Representativeness

Representativeness is a measure of the degree to which data accurately and precisely represent a
characteristic of a population parameter at a sampling point (ASQC 1995)  or measurement
location.  Representativeness is a qualitative term that is reflected in the survey design through
the selection of a measurement method (e.g., direct measurement or sampling).

Sample collection and analysis is typically less representative of true radionuclide concentrations
at a specific measurement location than performing a direct measurement.  This is caused by the
additional steps required in collecting and  analyzing samples, such as sample collection, field
sample preparation, laboratory sample preparation, and radiochemical analysis. However, direct
measurement techniques with acceptable detection limits are not always available.  The location
of the direct measurement is determined in Section 5.5.2.5, where random  and systematic survey
designs are selected based on survey unit classification.  The coverage for a survey unit using
scanning techniques is discussed in Section 5.5.3 and is also based primarily on survey unit
classification. Because scanning locations are often selected based on professional judgment for
survey units with less than 100% coverage, representativeness of these locations may be a
concern.  For both scanning and direct measurements the measurement locations and method for
performing the measurements should be compared to the modeling assumptions used to develop
the DCGLs.

6.2.2.4 Comparability

Comparability is a qualitative term that expresses the confidence that two data sets can contribute
to a common  analysis and interpolation. Generally, comparability is provided by using the same
measurement system for all analyses of a specific radionuclide.  Comparability is usually not an
issue except in cases where historical data has been collected and is being compared to current
analytical results, or when multiple laboratories are used to provide results as part of a single
survey design.

6.2.2.5 Completeness

Completeness is a measure of the amount of valid data obtained from the measurement system.
This is expressed as a percentage of the number of valid measurements that should have been
collected. Completeness is of greater concern for laboratory analyses than for direct
measurements because the consequences of incomplete data often require the collection of
additional data. Completeness is a concern for scanning only if the scanning results are
invalidated for some reason.  Direct measurements and scans can usually be repeated fairly easily
while the personnel performing the measurements are still in the field. For this reason
MARSSIM strongly recommends that scanning and direct measurement results be evaluated as
soon as possible. Direct measurements performed on a systematic grid to locate areas of elevated
activity are also a concern for completeness.  If one direct measurement result is not valid, the
entire survey design for locating areas of elevated activity may be invalidated.
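
Completeness itself is a simple ratio, as the short sketch below shows with hypothetical counts.

        # Completeness: percentage of planned measurements that produced valid
        # results.  The counts are hypothetical.
        def completeness(valid_measurements, planned_measurements):
            return 100.0 * valid_measurements / planned_measurements

        print(f"{completeness(valid_measurements=18, planned_measurements=20):.0f}% complete")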

6.2.2.6 Other Data Quality Indicators

Several additional data quality  indicators that influence the final status survey design are
identified as DQOs in Section 2.3.1. Many of these (e.g., selection and classification of survey
units, decision error rates, variability in the contaminant concentration, lower bound of the gray
region) are used to determine the number of measurements and are  discussed in detail in Section
5.5.2. The method detection limit is directly related to the selection of a measurement method
and a specific measurement system.

Scanning and direct measurement techniques should be capable of measuring levels below the
established DCGLs— detection limits of 10-50% of the DCGL should be the target (see Section
6.7).  Cost, time, best available technology, or other constraints may create situations where the
above stated sensitivities are deemed impractical. Under these circumstances, higher detection
sensitivities may be acceptable. Although service providers and instrument manufacturers will
state detection limits, these sensitivities are usually based on ideal or optimistic situations and
may not be achievable under site-specific measurement conditions.  Detection limits are subject
to variation from measurement to measurement, instrument to instrument, operator to operator,
and procedure to procedure.  This variation depends on geometry, background, instrument
calibration, abundance of the radiations being measured, counting time, operator training,
operator  experience, self-absorption in the medium being measured, and interferences from
radionuclides or other materials present in the medium. The detection limit that is achievable in
practice should not exceed the  DCGL.
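
A simple screening check of candidate instruments against this target can be scripted; the MDC
values and DCGL below are hypothetical and expressed in the same units.

        # Screen candidate instruments against the 10-50% of DCGL target for the
        # detection limit.  MDCs and the DCGL are hypothetical, in the same units.
        def meets_target(mdc, dcgl, target_fraction=0.5):
            return mdc <= target_fraction * dcgl

        candidate_mdcs = {"instrument A": 120.0, "instrument B": 450.0}
        dcgl = 500.0
        for name, mdc in candidate_mdcs.items():
            print(name, "meets the target" if meets_target(mdc, dcgl) else "exceeds 50% of the DCGL")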

6.2.2.7 Using Control Charts to Provide  Control of Field Measurement Systems

Control charts are commonly used in radioanalytical laboratories to monitor the performance of
laboratory instruments. Control charts are also useful for monitoring the performance of field
instruments and can be used to help control field measurement systems.

A control chart is a graphical plot of measurement results with respect to time or sequence of
measurement, together with limits within which the measurement values are expected to lie
when the system is in a state of statistical control (DOE 1995).  Calibration check results are
typically plotted on control charts for field measurements. However, control charts may be
developed for any measurements where the expected performance is established and
documented. A separate set of control charts for monitoring each type of measurement (e.g.,
calibration check, background, measurement of PE samples) should be developed for each
instrument.

The control chart is constructed by preparing a graph showing the arithmetic mean and the
control limits as horizontal lines. The recommended control limits are two standard deviations
above and below the mean,  and three standard deviations above and below the mean. The
measurement results in the appropriate units are shown on the y-axis and time or sequence is
plotted using the x-axis. Detailed guidance on the development and use of control charts is
available in Quality Assurance of Chemical Measurements (Taylor 1987) and Statistical Methods
for Quality Improvement (Kume 1985).
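
The construction described above is straightforward to implement. The sketch below builds the
limits from hypothetical daily calibration check results and classifies a new result against them.

        from statistics import mean, stdev

        # Control chart limits from historical calibration check results:
        # centerline at the mean, warning limits at +/- 2 standard deviations,
        # control limits at +/- 3 standard deviations.  Data are hypothetical
        # daily check source counts (cpm).
        history = [1205, 1190, 1232, 1215, 1198, 1224, 1210, 1187, 1219, 1203]
        center, s = mean(history), stdev(history)
        warning = (center - 2 * s, center + 2 * s)
        control = (center - 3 * s, center + 3 * s)

        def classify(result):
            if not control[0] <= result <= control[1]:
                return "out of control - investigate before further measurements"
            if not warning[0] <= result <= warning[1]:
                return "warning - repeat the check and watch for trends"
            return "in control"

        print(classify(1228), "/", classify(1305))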

As the quality control or other measurements are performed, the results are entered on the control
chart.  If the results are outside the control limits or show a particular trend or tendency, then the
process is not in control. The control chart documents the performance of the measurement
system during the time period of interest.

Quality control measurements for field instruments may be difficult or expensive to obtain for
some surveys. In these cases control charts documenting instrument performance may represent
the only determination of precision and bias for the survey. Because control charts are non-blind
measurements they are generally not appropriate for estimating precision and bias.  However, the
control chart documents the performance of the field instruments. Provided the checks for
precision and bias fall within the control limits, the results obtained using that instrument should
be acceptable for the survey.
6.3   Selecting a Service Provider to Perform Field Data Collection Activities

One of the first steps in designing a survey is to select a service provider to perform field data
collection activities.  MARSSIM recommends that this selection take place early in the planning
process so that the service provider can provide information during survey planning and
participate in the design of the survey.  Service providers may include in-house experts in field
measurements and sample collection, health physics companies, or environmental engineering
firms among others.

When the service provider is not part of the organization responsible for the site, these services
are obtained using some form of procurement mechanism.  Examples of procurement
mechanisms include purchase orders or contracts. A graded approach should be used in
determining the appropriate method for procuring services.

Potential service providers should be evaluated to determine their ability to perform the
necessary analyses. For large or complex sites, this evaluation may take the form of a pre-award
audit. The results of this audit provide a written record of the decision to use a specific service
provider.  For less complex sites or facilities, a review of the potential service provider's
qualifications is sufficient for the evaluation.

There are six criteria that should be reviewed during this evaluation:

•      Does the service provider possess the validated Standard Operating Procedures (SOPs),
       appropriate instrumentation, and trained personnel necessary to perform the field data
       collection activities? Field data collection activities (e.g., scanning surveys, direct
       measurements, and sample collection) are defined by the data needs identified by the
       DQO process.
•      Is the service provider experienced in performing the same or similar data collection
       activities?
•      Does the service provider have satisfactory performance evaluation or technical review
       results? The service provider should be able to provide a summary of QA audits and QC
       measurement results to demonstrate proficiency. Equipment calibrations should be
       performed using National Institute of Standards and Technology (NIST) traceable
       reference radionuclide standards whenever possible.
•      Is there an adequate capacity to perform all field data collection activities within the
       desired  timeframe? This criterion considers the number of trained  personnel and quantity
       of calibrated equipment available to perform the specified tasks.
•      Does the service provider conduct an internal quality control review of all generated data
       that is independent of the data generators?
•      Are there adequate protocols for method performance documentation,  sample  tracking
       and security (if necessary),  and documentation of results?

Potential service providers should have an active and fully documented quality system in place.1
This system should enable compliance with the objectives determined by the DQO process in
Section 2.3 and Appendix D (see EPA 1994c). The elements of a quality management system
are discussed in Section 9.1 (ASQC 1995, EPA 1994f).
   1 The quality management system is typically documented in one or more documents such as a Quality
Management Plan (QMP) or Quality Assurance Manual (QAM). A description of quality systems is included in
Section 9.1.

6.4    Measurement Methods

Measurement methods used to generate field data can be classified into two categories commonly
known as scanning surveys and direct measurements. The decision to use a measurement
method as part of the survey design is determined by the survey objectives and the survey unit
classification. Scanning is performed to identify areas of elevated activity that may not be
detected by other measurement methods.  Direct measurements are analogous to collecting and
analyzing samples to determine the average activity in a survey unit.  Section 5.5.3 discusses
combining scans and direct measurements in an integrated survey design.

6.4.1   Direct Measurements

To conduct direct measurements of alpha, beta, and photon surface activity, instruments and
techniques providing the required detection sensitivity are selected.  The type of instrument and
method of performing the direct measurement are selected as dictated by the type of potential
contamination present, the measurement sensitivity requirements, and the objectives of the
radiological survey. Direct measurements are taken by placing the instrument at the appropriate
distance2 above the surface, taking a discrete measurement for a pre-determined time interval
(e.g.,  10 s, 60 s, etc.), and recording the reading. A one minute integrated count technique is a
practical field survey procedure for most equipment and provides detection sensitivities that are
below most DCGLs. However, longer or shorter integrating times may be warranted (see Section
6.7.1 for information dealing with the calculation of direct measurement detection sensitivities).
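
As a simplified sketch of how such an integrated count is reduced to surface activity, the example
below assumes a hypothetical total efficiency and probe area taken from the instrument
calibration; Section 6.7.1 discusses the related detection sensitivity calculations.

        # Convert a one-minute integrated count to surface activity in
        # dpm per 100 cm^2.  Count data, efficiency, and probe area are
        # hypothetical values for illustration.
        def surface_activity_dpm_per_100cm2(gross_counts, count_time_min,
                                            bkg_counts, bkg_time_min,
                                            total_efficiency, probe_area_cm2):
            net_cpm = gross_counts / count_time_min - bkg_counts / bkg_time_min
            return net_cpm / total_efficiency / (probe_area_cm2 / 100.0)

        activity = surface_activity_dpm_per_100cm2(gross_counts=450, count_time_min=1,
                                                   bkg_counts=300, bkg_time_min=1,
                                                   total_efficiency=0.15, probe_area_cm2=126)
        print(f"{activity:.0f} dpm per 100 cm^2")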

Direct measurements may be collected at random locations in the survey unit.  Alternatively,
direct measurements may be collected at systematic locations and supplement scanning surveys
for the identification of small areas of elevated activity (see Section 5.5.2.5). Direct
measurements may also be collected at locations identified by scanning surveys as part of an
investigation to determine the source of the elevated instrument response. Professional judgment
may also be used to identify locations for direct measurements to further define the areal extent of
contamination and to determine maximum radiation levels within an area, although these types of
direct measurements are usually associated with preliminary surveys (i.e., scoping,
characterization, remedial action support). All direct measurement locations and results should
be documented.
       2 Measurements at several distances may be needed. Near-surface or surface measurements provide the
best indication of the size of the contaminated region and are useful for model implementation. Gamma
measurements at 1 m provide a good estimate of potential direct external exposure.

If the equipment and methodology used for scanning is capable of providing data of the same
quality required for direct measurement (e.g., detection limit, location of measurements, ability to
record and document results), then scanning may be used in place of direct measurements.
Results should be documented for at least the number of locations required for the statistical
tests. In addition, some direct measurement systems may be able to provide scanning data,
provided they meet the objectives of the scanning survey.

The following sections briefly describe methods used to perform direct measurements in the
field. The instruments used to perform these measurements are described in more detail in
Section 6.5.3 and Appendix H.

6.4.1.1 Direct Measurements for Photon Emitting Radionuclides

There are  a wide variety of instruments  available for measuring photons in the field (see
Appendix H) but all of them are used in essentially the  same way. The detector is set up at a
specified distance from the surface being measured and data are collected for a specified period
of time. The distance from the surface to the detector is generally determined by the calibration
of the instrument because photons do not interact appreciably with air. When measuring x-rays
or low-energy gamma rays, the detector is often placed closer to the surface to increase the
counting efficiency. The time required to perform a direct measurement may vary from very
short (e.g., 10 seconds) to very long (e.g., several days or weeks) depending on the type of
detector and the required detection limit. In general, the lower the required detection limit the
longer the time required  to perform the measurement.  A collimator may be used in areas where
activity from adjacent or nearby areas might interfere with the direct measurement.  The
collimator (usually lead, tungsten, or steel) shields the detector from extraneous photons but
allows activity from a specified area of the surface to reach the detector.

       Example:

       The portable germanium detector, or in situ gamma spectrometer, can be used to estimate
       gamma-emitting radionuclide concentrations in the field.  As with the laboratory-based
       germanium detector with multichannel analyzer, in situ gamma spectrometry can
       discriminate among various radionuclides  on the basis of characteristic gamma and x-ray
       energies to provide a nuclide-specific measurement. A calibrated detector measures the
       fluence rate of primary photons at specific energies that are characteristic of a particular
       radionuclide (NRC 1995b).  This fluence rate can then be converted to units of
       concentration.  Under certain conditions the fluence rate  may be converted directly to
       dose or risk for a direct comparison to the release criterion rather than to the DCGLW.
       Although this conversion is generally made,  the fluence rate should be considered the
       fundamental parameter for assessing the level of radiation at a specific location because it
       is a directly measurable physical quantity.

       For outdoor measurements, where the contaminant is believed to be distributed within the
       surface soil, it may be appropriate to assume a uniform depth profile when converting the
       fluence rate to a concentration. At sites where the soil is plowed or overturned regularly,
       this assumption is quite realistic because of the effects of homogenization. At sites where
       the activity was initially deposited on the surface and has gradually penetrated deeper
       over time, the actual depth profile will have a higher activity at the surface and gradually
       diminish with depth.  In this case, the assumption of a uniform depth profile will estimate
       a higher radionuclide concentration relative to the average concentration over that depth.
       In cases where there is an inverted depth profile (i.e., low concentration at the surface that
       increase with depth), the assumption of a uniform depth profile will underestimate the
       average radionuclide concentration over that depth. For this reason, MARSSIM
       recommends that soil cores be collected to determine the actual depth profile for the site.
       These soil cores may be collected during the characterization or remedial action support
       survey to establish a depth profile for planning a final status survey. The cores may also
       be collected during the final status survey to verify the assumptions used to develop the
       fluence-to-concentration correction.

       For indoor measurements, uncollimated in situ measurements can provide useful
       information on the low-level average activity across an entire room. The position of the
       measurement within the room is not critical if the radionuclide of interest is not present in
       the building materials. A measurement of peak count rate can be converted to fluence
       rate, which can  in turn be related to the average surface activity.  The absence of a
       discernible peak would mean  that residual activity could not exceed a certain average
       level. However, this method will not easily locate small areas of elevated activity. For
       situations where the activity is not uniformly distributed on the surface, a series of
       collimated measurements using a systematic grid allows the operator to identify general
       areas of elevated contamination.

       The NRC draft report Measurement Methods for Radiological Surveys in Support of New
       Decommissioning Criteria (NRC 1995b) provides a detailed description of the theory and
       implementation of in situ gamma spectrometry.  In situ spectrometry is provided as one
       example of a useful tool for performing direct measurements  for particular scenarios, but
       interpretation of the instrument output in terms of radionuclide distributions is dependent
       on the assumptions used to  calibrate the method site-specifically.  The depth of treatment
       of this technique in this example is not meant to imply that in situ gamma spectrometry is
       preferred a priori over other appropriate measurement techniques described in this
       manual.

6.4.1.2 Direct Measurements for Alpha Emitting Radionuclides

Direct measurements for alpha-emitting radionuclides are generally performed by placing the
detector on or near the surface to be measured.  The limited range of alpha particles (e.g., about
1 cm or 0.4 in. in air, less in denser material) means that these measurements are generally
restricted to relatively smooth, impermeable surfaces such as concrete, metal, or drywall where
the activity is present as surface contamination. In most cases, direct measurements of porous
(e.g., wood) and volumetric (e.g., soil, water) material cannot meet the objectives of the survey.
However, special instruments such as the long range alpha detector (see Appendix H) have been
developed to measure the concentration of alpha emitting radionuclides in soil under certain
conditions. Because the detector is used in close  proximity to the potentially contaminated
surface, contamination of the detector or damage  to the detector caused by irregular surfaces need
to be considered before performing direct measurements  for alpha emitters.

6.4.1.3 Direct Measurements for Beta Emitting Radionuclides

Direct measurements for beta emitting radionuclides are generally performed by placing the
detector on or near the surface to be measured,  similar to measurements for alpha emitting
radionuclides. These measurements are typically restricted to relatively smooth, impermeable
surfaces where the activity is present as surface contamination.  In most cases, direct
measurements of porous (e.g., wood) and volumetric (e.g., soil, water) material cannot meet the
objectives of the survey. However, special instruments such as large area gas-flow proportional
counters (see Appendix H)  and arrays of beta scintillators have been developed to measure the
concentration of beta emitting radionuclides in  soil under certain conditions.   Similar to direct
measurements for alpha emitting radionuclides, contamination of the detector and damage to the
detector need to be considered before performing direct measurements for beta emitters.

6.4.2   Scanning Surveys

Scanning is the process by which the operator uses portable radiation detection instruments to
detect the presence of radionuclides on a specific  surface (i.e., ground, wall, floor, equipment).
The term scanning survey is used to describe the process  of moving portable radiation detectors
across a suspect surface with the intent of locating radionuclide contamination. Investigation
levels for scanning surveys  are determined during survey planning to identify  areas of elevated
activity.  Scanning surveys  are performed to locate radiation anomalies indicating residual gross
activity that may require further investigation or action.  These investigation levels may be based
on the DCGLW, the DCGLEMC, or some other level as discussed in Section 5.5.2.6.


Small areas of elevated activity typically represent a small portion of the site or survey unit.
Thus, random or systematic direct measurements or sampling on the commonly used grid spacing
may have a low probability of identifying such small areas. Scanning surveys are often relatively
quick and inexpensive to perform. For these reasons, scanning surveys are typically performed
before direct measurements or sampling.  In this way, time is not spent fully evaluating an area
that the scanning process may quickly show to be contaminated above the investigation level.
Scans are conducted so that the results are indicative of all radionuclides potentially present, based
on the Historical Site Assessment, the surfaces to be surveyed, and the survey design objectives. Surrogate
measurements may be utilized where appropriate (see Section 4.3.2). Documenting scanning
results and  observations from the field is very  important.  For example, a scan that identified
relatively sharp increases in instrument response or identified the boundary of an area of
increased instrument response should be documented.  This information is useful when
interpreting survey results.

The following sections briefly describe techniques used to perform scanning surveys for different
types of radiation. The instruments used to perform these measurements are described in more
detail in Section 6.5.3 and Appendix H.

6.4.2.1 Scanning for Photon Emitting Radionuclides

Sodium iodide survey meters (NaI(Tl) detectors) are normally used for scanning areas for gamma
emitters because they are very sensitive to gamma  radiation, easily portable and relatively
inexpensive.  The detector is held close to the  ground surface (~6 cm or 2.5 in.) and moved in a
serpentine (i.e., snake like, "S" shaped) pattern while walking at a speed that allows the
investigator to detect the desired investigation level. A scan rate of approximately 0.5 m/s is
typically used for distributed gamma emitting  contaminants in soil; however, this rate must be
adjusted depending on the expected  detector response and the desired investigation level.
Discussion of scanning rates versus detection sensitivity for gamma emitters is provided in
Section 6.7.2.1.

Sodium iodide survey meters are also used for scanning to detect areas with elevated levels of
low-energy gamma and x-ray emitting radionuclides such as 241Am and 239Pu.  Specially designed
detectors, such as the FIDLER (field instrument for the detection of low energy radiation) probe
with survey meter, are typically used to detect these types of radionuclides.

6.4.2.2 Scanning for Alpha Emitting Radionuclides

Alpha scintillation survey meters and thin window gas-flow proportional counters are typically
used for performing alpha surveys. Alpha radiation has a very limited range and, therefore,
instrumentation must be kept close to the surface—usually less than 1  cm (0.4 in.). For this
reason, alpha scans are generally performed on relatively smooth, impermeable surfaces (e.g.,
concrete, metal, drywall) and not on porous material (e.g., wood) or for volumetric
contamination (e.g., soil, water). In most cases, scanning porous material or volumetric
contamination for alpha activity cannot meet the objectives of the survey because the achievable
detection sensitivities (i.e., minimum detectable concentrations) are too high.  Under these
circumstances, samples of the material are usually collected
and analyzed as discussed in Chapter 7.  Determining scan rates when surveying for alpha
emitters is discussed in Section 6.7.2.2 and Appendix J.

6.4.2.3  Scanning for Beta Emitting Radionuclides

Thin window gas-flow proportional counters are normally used when surveying for beta emitters,
although solid scintillators designed for this purpose are also available.  Typically, the beta
detector is held less than 2 cm  from the surface and moved at a rate such that the desired
investigation level can be detected. Low-energy (<100 keV) beta emitters are subject to the same
interferences and self-absorption problems found with alpha emitting radionuclides, and scans
for these radionuclides are performed under similar circumstances.  Determination of scan rates
when surveying for beta emitters is discussed in Section 6.7.2.1.
6.5    Radiation Detection Instrumentation

Traditional radiation instruments consist of two components:  1) a radiation detector, and
2) electronic equipment to provide power to the detector and to display or record radiation
events.  This section identifies and very briefly describes the types of radiation detectors and
associated display or recording equipment that are applicable to survey activities in support of
environmental assessment or remedial action. Each survey usually requires performing direct
field measurements using portable instrumentation and collection of samples for laboratory
analysis. The selection and proper use of appropriate instruments for both direct measurements
and laboratory analyses will likely be the most critical factors in assuring that the survey
accurately determines the radiological status of a site and meets the survey objectives. Chapter 7
provides specific information on laboratory analysis of collected samples. Appendix H contains
instrument specific information for various types of field survey and laboratory analysis
equipment currently in use.

6.5.1   Radiation Detectors

The particular capabilities of a radiation detector will establish its potential applications in
conducting a specific type of survey. Radiation detectors can be divided into four general classes
based on the detector material or the application. These categories are: 1) gas-filled detectors,
2) scintillation detectors, 3) solid-state detectors, and 4) passive integrating detectors.

6.5.1.1  Gas-Filled Detectors

Radiation interacts with the fill gas, producing ion-pairs that are collected by charged electrodes.
Commonly used gas-filled detectors are categorized as ionization, proportional, or Geiger-
Mueller (GM), referring to the region of gas amplification in which they are operated. The fill
gas varies, but the most common are: 1) air, 2) argon with a small amount of organic methane
(usually 10% methane by mass, referred to as P-10 gas), and 3) argon or helium with  a small
amount of a halogen such as chlorine or bromine added as a quenching agent.

6.5.1.2  Scintillation Detectors

Radiation interacts with a solid or liquid medium causing electronic transitions to excited states
in a luminescent material.  The excited states decay rapidly, emitting photons that in turn are
captured by a photomultiplier tube. The ensuing electrical signal is proportional to the scintillator
light output, which, under the right conditions, is proportional to the energy loss that produced
the scintillation.  The most common scintillant materials are NaI(Tl), ZnS(Ag), Cd(Te), and
CsI(Tl), which are used in traditional radiation survey instruments such as the NaI(Tl) detector
used for gamma surveys and the ZnS(Ag) detector for alpha surveys.

6.5.1.3  Solid-State Detectors

Radiation interacting with a semiconductor material creates electron-hole pairs that are collected
by a charged electrode. The design and operating conditions of a specific solid-state detector
determines the types of radiations (alpha, beta, and/or gamma) that can be measured, the
detection level of the measurements, and the ability of the detector to resolve the  energies of the
interacting radiations. The semiconductor materials currently being used are germanium and
silicon which are available in both n and p types in various configurations.

Spectrometric techniques using these detectors provide a marked increase in sensitivity in many
situations. When a particular radionuclide contributes only a fraction of the total particle fluence
or photon fluence, or both, from all sources (natural or manmade background), gross
measurements are inadequate and nuclide-specific measurements are necessary.  Spectrometry
provides the means to discriminate among various radionuclides on the basis of characteristic
energies. In-situ gamma spectrometry is particularly effective in field measurements  since the
penetrating nature of the radiation allows one to "see" beyond immediate surface contamination.
The availability of large, high efficiency germanium detectors permits measurement of low
abundance gamma emitters such as 238U as well as low energy emitters such as 241Am and 239Pu.

6.5.1.4  Passive Integrating Detectors

There is an additional class of instruments that consists of passive, integrating detectors and
associated reading/analyzing instruments.  The integrated ionization is read using a laboratory or
hand-held reader. This class includes thermoluminescence dosimeters (TLDs) and electret ion
chambers (EICs). Because these detectors are passive and can be exposed for relatively long
periods of time, they can provide better sensitivity for measuring low activity levels such as free
release limits or for continuing surveillance.  The ability to read and present data onsite is a
useful feature and such systems are comparable to direct reading instruments.

The scintillation materials in Section 6.5.1.2  are selected for their prompt fluorescence
characteristics.  In another class of inorganic crystals, called TLDs, the crystal material and
impurities are chosen so that the free electrons  and holes created following the absorption of
energy from the radiation are trapped by impurities in the crystalline lattice thus locking the
excitation energy in the crystal.  Such materials are used  as passive, integrating detectors. After
removal from the exposure area, the TLDs are heated in a reader which measures the total
amount of light produced when the energy is released. The total amount of light is proportional
to the number of trapped, excited electrons, which in turn is proportional to the amount of energy
absorbed from the radiation. The intensity of the light emitted from the thermoluminescent
crystals is thus directly proportional to the radiation dose. TLDs come in a large number of
materials, the most common of which are LiF, CaF2:Mn, CaF2:Dy, CaSO4:Mn, CaSO4:Dy, and
Al2O3:C.

The electret ion chamber consists of a very stable electret (a charged Teflon® disk) mounted
inside a small chamber made of electrically conducting plastic. The ions produced inside this air
filled chamber are collected onto the electret, causing a reduction of its surface charge. The
reduction in charge is a function of the total ionization during a specific monitoring period and
the specific chamber volume. This change in voltage is measured with a surface potential
voltmeter.

6.5.2   Display and Recording Equipment

Radiation detectors  are connected to electronic devices to 1) provide a source of power for
detector operation, and 2) enable measurement of the quantity and/or quality of the radiation
interactions that are occurring in the detector. The quality of the radiation interaction refers to
the amount of energy transferred to the detector.  In many cases, radiation interacts with other
material (e.g., air) prior to interacting with the detector, or only partially interacts with the
detector (e.g., Compton  scattering for photons). Because the energy recorded by the detector is
affected, there is an increased probability of incorrectly identifying the radionuclide.

The most common recording or display device used for portable radiation measurement systems
is a ratemeter. This device provides a display on an analog meter representing the number of
events occurring over some time period (e.g., counts per minute).  Digital ratemeters are also
commercially available. The number of events can also be accumulated over a preset time period
using a digital scaling device. The resulting information from a scaling device is the total
number of events that occurred over a fixed period of time, whereas a ratemeter display varies with
time and represents a short term average of the event rate. Determining the average level on a
ratemeter will require judgment by the user, especially when a low frequency of events results in
significant variations in the meter reading.

Pulse height analyzers are specialized electronic devices designed to measure and record the
number of pulses or events that occur at different pulse height levels.  These types of devices are
used with detectors which produce output pulses that are proportional in height to the energy
deposited within them by the interacting radiation. They can be used to record only those events
occurring in a detector within a single band of energy or can simultaneously record the events in
multiple energy ranges.  In the former case, the equipment is known as a single-channel analyzer;
the latter application is referred to as a multichannel analyzer.

6.5.3   Instrument Selection

Radiation survey parameters that might be needed for site release  purposes include surface
activities, exposure rates, and radionuclide concentrations in soil.  To determine these
parameters, field measurements and laboratory analyses may be necessary.  For certain
radionuclides or radionuclide mixtures, both alpha and beta radiations may have to be  measured.
In addition to assessing average radiological conditions, the survey objectives should address
identifying small areas of elevated activity and determining the extent and level of residual
radioactivity.

Additionally, the potential uses of radiation instruments can vary  significantly depending on the
specific design and operating criteria of a given detector type.  For example, a NaI(Tl) scintillator
can be designed to be very thin with a low atomic number entrance window (e.g., beryllium)  such
that the effective detection capability for low energy photons is optimized. Conversely, the same
scintillant material can be fabricated as a thick cylinder in order to optimize the detection
probability for higher energy photons. On the recording end of a detection system,  the output
could be a ratemeter, scaler, or multichannel analyzer as described in Section 6.5.2.  Operator
variables such as training and level of experience with  specific instruments should also be
considered.

With so many variables, it is highly unlikely that any single instrument (detector and readout
combination) will be capable of adequately measuring  all of the radiological parameters
necessary to demonstrate that criteria for release have been satisfied. It is usually necessary to
select multiple instruments to perform the variety of measurements required.

Selection of instruments will require an evaluation of a number of situations and conditions.
Instruments must be stable and reliable under the environmental and physical conditions where
they will be used, and their physical characteristics (size and weight) should be compatible with
the intended application.  The instrument must be able to detect the type of radiation of interest,
and the measurement system should be capable of measuring levels that are less than the DCGL
(see Section 6.7).

For gamma radiation scanning, a scintillation detector/ratemeter combination is the usual
instrument of choice. A large-area proportional detector with a ratemeter is recommended for
scanning for alpha and beta radiations where surface conditions and locations permit; otherwise,
an alpha scintillation or thin-window  GM detector (for beta surveys) may be used.

For direct gamma measurements, a pressurized ionization chamber or in-situ gamma
spectroscopy system is recommended. As an option, a NaI(Tl) scintillation detector may be used
if cross-calibrated to a pressurized ion chamber or calibrated for the specific energy of interest.
The same  alpha and beta detectors identified above for scanning surveys are also recommended
for use in direct measurements.

There are certain radionuclides that, because of the types, energies, and abundances of their
radiations, will be essentially impossible to measure at the guideline levels, under field
conditions, using state-of-the-art instrumentation and techniques.  Examples of such
radionuclides include very low energy pure beta emitters, such as 3H and 63Ni, and low energy
photon emitters, such as 55Fe and 125I. Pure alpha emitters dispersed in soil or covered with some
absorbing layer will not be detectable because the alpha radiation will not penetrate through the
media or covering to reach the detector.  A common example of such a condition would be 230Th
surface contamination covered by paint, dust, oil, or moisture. In  such circumstances, sampling
and laboratory analysis would be required to measure the residual  activity levels unless surrogate
radionuclides are present as discussed in Section 4.3.2.

The number of possible design and operating schemes for each of the different types of detectors
is too large to discuss in detail within the context of this document. For a general overview, lists
of common radiation detectors along with their usual applications during surveys are provided in
Tables 6.1 through 6.3. Appendix H contains  specific information for various types of field
survey and laboratory analysis equipment currently in use. Continual development of new
technologies will result in changes to these listings.

            Table 6.1  Radiation Detectors with Applications to Alpha Surveys

 Detector Type | Detector Description | Application | Remarks
 Gas Proportional | <1 mg/cm2 window; probe area 50 to 1,000 cm2 | Surface scanning; surface contamination measurement | Requires a supply of appropriate fill gas
 Gas Proportional | <0.1 mg/cm2 window; probe area 10 to 20 cm2 | Laboratory measurement of water, air, and smear samples | Requires a supply of appropriate fill gas
 Gas Proportional | No window (internal proportional) | Laboratory measurement of water, air, and smear samples | Requires a supply of appropriate fill gas
 Air Proportional | <1 mg/cm2 window; probe area ~50 cm2 | Useful in low humidity conditions |
 Scintillation | ZnS(Ag) scintillator; probe area 50 to 100 cm2 | Surface contamination measurements, smears |
 Scintillation | ZnS(Ag) scintillator; probe area 10 to 20 cm2 | Laboratory measurement of water, air, and smear samples |
 Scintillation | Liquid scintillation cocktail containing sample | Laboratory analysis, spectrometry capabilities |
 Solid State | Silicon surface barrier detector | Laboratory analysis by alpha spectrometry |
 Passive, integrating electret ion chamber | <0.8 mg/cm2 window, also window-less, window area 50-180 cm2, chamber volume 50-1,000 ml | Contamination on surfaces, in pipes, and in soils | Useable in high humidity and temperature

6.5.4   Instrument Calibration

Calibration refers to the determination and adjustment of the instrument response in a particular
radiation field of known intensity.  Proper calibration procedures are an essential requisite toward
providing confidence in measurements made to demonstrate compliance with cleanup criteria.
Certain factors, such as energy dependence and environmental conditions, require consideration
in the calibration process, depending on the conditions of use of the instrument in the field.
Routine calibration of radiation detection instruments refers to calibration for normal use under
typical field conditions. Considerations for the use and calibration of instruments include:

             Table 6.2  Radiation Detectors with Applications to Beta Surveys

 Detector Type | Detector Description | Application | Remarks
 Gas Proportional | <1 mg/cm2 window; probe area 50 to 1,000 cm2 | Surface scanning; surface contamination measurement | Requires a supply of appropriate fill gas
 Gas Proportional | <0.1 mg/cm2 window; probe area 10 to 20 cm2 | Laboratory measurement of water, air, smear, and other samples | Requires a supply of appropriate fill gas
 Gas Proportional | No window (internal proportional) | Laboratory measurement of water, air, smear, and other samples | Can be used for measuring very low-energy betas
 Ionization (non-pressurized) | 1-7 mg/cm2 window | Contamination measurements; skin dose rate estimates |
 Geiger-Mueller | <2 mg/cm2 window; probe area 10 to 100 cm2 | Surface scanning; contamination measurements; laboratory analyses |
 Geiger-Mueller | Various window thicknesses; few cm2 probe face | Special scanning applications |
 Scintillation | Liquid scintillation cocktail containing sample | Laboratory analysis; spectrometry capabilities |
 Scintillation | Plastic scintillator | Contamination measurements |
 Passive, integrating electret ion chamber | 7 mg/cm2 window, also window-less, window area 50-180 cm2, chamber volume 50-1,000 ml | Low-energy beta, including H-3, contamination on surfaces and in pipes | Useable in high humidity and temperature

•      use of the instrument for radiation of the type for which the instrument is designed
•      use of the instrument for radiation energies within the range of energies for which the
       instrument is designed
•      use under environmental conditions for which the instrument is designed
•      use under influencing factors, such as magnetic and electrostatic fields, for which the
       instrument is designed
•      use of the instrument in an orientation such that geotropic effects are not a concern
•      use of the instrument in a manner that will not subject the instrument to mechanical or
       thermal stress beyond that for which it is designed

            Table 6.3  Radiation Detectors with Applications to Gamma Surveys

 Detector Type | Detector Description | Application | Remarks
 Gas Ionization | Pressurized ionization chamber; non-pressurized ionization chamber | Exposure rate measurements |
 Geiger-Mueller | Pancake (<2 mg/cm2 window) or side window (~30 mg/cm2) | Surface scanning; exposure rate correlation (side window in closed position) | Low relative sensitivity to gamma radiation
 Scintillation | NaI(Tl) scintillator; up to 5 cm by 5 cm | Surface scanning; exposure rate correlation | High sensitivity; cross calibrate with PIC (or equivalent) or for specific site gamma energy mixture for exposure rate measurements
 Scintillation | NaI(Tl) scintillator; large volume and "well" configurations | Laboratory gamma spectrometry |
 Scintillation | CsI or NaI(Tl) scintillator; thin crystal | Scanning; low-energy gamma and x-rays | Detection of low-energy radiation
 Scintillation | Organic tissue equivalent (plastics) | Dose equivalent rate measurements |
 Solid State | Germanium semiconductor | Laboratory and field gamma spectrometry and spectroscopy |
 Passive, integrating electret ion chamber | 7 mg/cm2 window, also window-less, window area 50-180 cm2, chamber volume 50-1,000 ml |  | Useable in high humidity and temperature


Routine calibration commonly involves the use of one or more sources of a specific radiation
type and energy, and of sufficient activity to provide adequate field intensities for calibration on
all ranges of concern.

Actual field conditions under which the radiation detection instrument will be used may differ
significantly from those present during routine calibration. Factors which may affect calibration
validity include:

•      the energies of radioactive sources used for routine calibration may differ significantly
       from those of radionuclides in the field
•      the source-detector geometry (e.g., point source or large area distributed source) used for
       routine calibration may be different than that found in the field
•      the source-to-detector distance typically used for routine calibration may not always be
       achievable in the field
•      the condition and composition of the surface being monitored (e.g., sealed concrete,
       scabbled concrete, carbon steel,  stainless steel, and wood) and the presence of overlaying
       material (e.g., water, dust, oil, paint) may result in a decreased instrument response
       relative to that observed during routine calibration

If the actual field conditions differ significantly from the calibration assumptions, a special
calibration for specific field conditions may be required.  Such an extensive calibration need only
be done once to determine the effects of the range of field conditions that may be encountered at
the site. If responses under routine calibration conditions and proposed use conditions are
significantly different, a correction factor or chart should be supplied with the instrument for use
under the proposed conditions.

As a minimum, each measurement system (detector/readout combination) should be calibrated
annually and response checked with a source following calibration (ANSI 1996). Instruments
may require more frequent calibration if recommended by the manufacturer.  Re-calibration of
field instruments is also required if an instrument fails a performance check or if it has undergone
repair or any modification that could affect its response.

The user may decide to perform calibrations following industry recognized procedures (ANSI
1996b, DOE Order 5484.1,  NCRP 1978, NCRP 1985, NCRP 1991, ISO 1988, HPS 1994a, HPS
1994b), or the user can choose to obtain calibration by an outside service, such as a major
instrument manufacturer or a health physics services organization.

Calibration sources should be traceable  to the National Institute of Standards and Technology
(NIST). Where NIST traceable standards are not available, standards obtained from an industry
recognized organization (e.g., the New Brunswick Laboratory for various uranium standards)
may be used.

Calibration of instruments for measurement of surface contamination should be performed such
that a direct instrument response can be accurately converted to the 4π (total) emission rate from
the source. An accurate determination of activity from a measurement of count rate above a
surface in most cases is an extremely complex task because of the need to determine appropriate
characteristics of the source including decay scheme, geometry, energy, scatter, and self-
absorption. For the purpose of release of contaminated areas from radiological control,
measurements must provide sufficient accuracy to ensure that cleanup standards have been
achieved. Inaccuracies in measurements should be controlled in a manner that minimizes the
consequences of decision errors. The variables that affect instrument response should be
understood well enough to ensure that the consequences of decision errors are minimized.
Therefore, the calibration should account for the following factors (where necessary):

•      Calibrations for point and large area source geometries may differ, and both may be
       necessary if areas of activity smaller than the probe area and regions of activity larger
       than the probe area are present.
•      Calibration should either be performed with the radionuclide of concern, or with
       appropriate correction factors developed for the radionuclide(s) present based on
       calibrations with nuclides emitting radiations similar to the radionuclide  of concern.
•      For portable instrumentation, calibrations should account for the substrate of concern
       (i.e., concrete, steel) or appropriate correction factors developed for the substrates relative
       to the actual calibration standard substrate.  This is especially important for beta emitters
       because backscatter is significant and varies with the composition of the  substrate.
       Conversion factors developed during the calibration process should be for the same
       counting geometry to  be used during the actual use of the detector.

For cleanup standards for building surfaces, the contamination level is typically expressed in
terms of the particle emission rate per unit area, normally Bq/m2 or disintegrations
per minute (dpm) per 100 cm2.  In many facilities, surface contamination is assessed by
converting the instrument response (in counts per minute) to surface activity using one overall
total efficiency. The total efficiency may be considered to represent the product of two factors,
the instrument (detector) efficiency, and the source efficiency.  Use of the total efficiency is  not a
problem provided that the calibration source exhibits characteristics similar to the surface
contamination (i.e., radiation  energy, backscatter effects, source geometry, self-absorption).  In
practice, this is rarely the case; more likely, instrument efficiencies are determined with a clean,
stainless steel source, and then those efficiencies are used to determine the level  of contamination
on a dust-covered concrete  surface. By separating the efficiency into two components, the
surveyor has a greater ability  to consider the actual characteristics  of the surface contamination.

The instrument efficiency is defined as the ratio of the net count rate of the instrument and the
surface emission rate of a source for a specified geometry. The surface emission rate is defined
as the number of particles of a given type above a given energy emerging from the front face of
the source per unit time.  The surface emission rate is the 2π particle fluence that embodies both
the absorption and scattering processes that affect the radiation emitted from the source. Thus,
the instrument efficiency is determined by the ratio of the net count rate and the  surface emission
rate.

The instrument efficiency is determined during calibration by obtaining a static count with the
detector over a calibration source that has a traceable activity or surface emission rate.  In many
cases, a source emission rate is measured by the manufacturer and certified as NIST traceable.
The source activity is then calculated from the surface emission rate based on assumed
backscatter and self-absorption properties of the source. The maximum value of instrument
efficiency is 1.

The source efficiency is defined as the ratio of the number of particles of a given type emerging
from the front face of a source and the number of particles of the same type created or released
within the source per unit time.  The source efficiency takes into account the increased particle
emission due to backscatter effects, as well as the decreased particle emission due to self-
absorption losses. For an ideal source (i.e., no backscatter or self-absorption), the value of the
source efficiency is 0.5.  Many real sources will exhibit values less than 0.5, although values
greater than 0.5 are possible, depending on the relative importance of the absorption and
backscatter processes.

Source efficiencies may be determined experimentally. Alternatively, ISO-7503-1 (ISO 1988)
makes recommendations for default source efficiencies. A source efficiency  of 0.5 is
recommended for beta emitters with maximum energies above 0.4 MeV. Alpha emitters and
beta emitters with maximum beta energies between 0.15 and 0.4 MeV have a recommended
source efficiency of 0.25. Source efficiencies for some common surface materials and overlaying
material are provided in NUREG-1507 (NRC 1997b).
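
The two-component efficiency described above can be illustrated with a short Python sketch. The
count data and certified surface emission rate below are hypothetical; only the default source
efficiency of 0.5 is taken from the ISO 7503-1 recommendation cited above.

    # A minimal sketch of the two-component efficiency described above.  The
    # count data and certified emission rate are hypothetical; only the 0.5
    # default source efficiency comes from ISO 7503-1 as cited in the text.

    gross_counts = 12000           # counts observed over the calibration source
    count_time_s = 60.0            # counting time, seconds
    background_cps = 1.0           # instrument background count rate, counts/s
    surface_emission_rate = 450.0  # certified particles/s from the source face (2-pi)

    net_cps = gross_counts / count_time_s - background_cps
    instrument_efficiency = net_cps / surface_emission_rate  # counts per emitted particle

    source_efficiency = 0.5        # ISO 7503-1 default for beta emitters >0.4 MeV max energy

    total_efficiency = instrument_efficiency * source_efficiency  # counts per disintegration
    print(round(instrument_efficiency, 3), round(total_efficiency, 3))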

Instrument efficiency may be affected by detector-related factors such as detector size (probe
surface area), window density thickness, geotropism, instrument response time, counting time (in
static mode),  scan rate (in scan mode), and ambient conditions such as temperature, pressure, and
humidity.  Instrument efficiency also depends on solid angle effects, which include source-to-
detector distance and source geometry.

Source efficiency may be affected by source-related factors such as the type of radiation and its
energy, source uniformity, surface roughness and coverings, and surface composition (e.g., wood,
metal, concrete).

The calibration of gamma detectors for the measurement of photon radiation fields should also
provide reasonable assurance of acceptable accuracy in field measurements.  Use of these
instruments for demonstration of compliance with cleanup standards is complicated by the fact
that most cleanup levels produce exposure rates of at most a few µR/h.  Several of the portable
survey instruments currently available in the United States for exposure rate measurements of
~1 µR/h (often referred to as micro-R meters) have full scale intensities of ~3 to 5 µR/h on the
first range. This is below the ambient background for most low radiation areas and most
calibration laboratories.  (A typical background dose equivalent rate of 100 mrem/y gives a
background exposure rate of about 10 µR/h.) Even on the second range, the ambient background
in the calibration laboratory is normally a significant part of the range and must be taken into
consideration during calibration. The instruments commonly are not energy-compensated and
are very sensitive to the scattered radiation that may be produced by the walls and floor of the
room or additional shielding required to lower the ambient background.

Low intensity sources and large distances between the source and detector can be used for low-
level calibrations if the appropriate precautions are taken.  Field characterization of low-level
sources with traceable transfer standards is difficult because of the poor signal-to-noise ratio in
the standard chamber.  In order to achieve adequate ionization current, the distance between the
standard chamber and the source generally will be as small as possible while still maintaining
good geometry (5 to 7 detector diameters). Generally it is not possible to use a standard
ionization chamber to characterize the field at the distance necessary to reduce the field to the
level required for calibration. A high quality GM detector, calibrated as a transfer standard, may
be useful at low levels.

Corrections for scatter can be made using a shadow-shield technique in which a shield of
sufficient density and thickness to eliminate virtually all the primary radiation is placed about
midway between the source and the detector. The dimensions of the shield should be the
minimum required to reduce the primary radiation intensity at the detector location to less than
2% of its unshielded value. The change in reading caused by the shield being removed is
attributed to the primary field from the source at the detector position.
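
As a simple numerical illustration of the shadow-shield technique, the primary field is taken as the
difference between the readings with the shield removed and with the shield in place; the readings
in the following Python sketch are hypothetical.

    # Shadow-shield scatter correction, using hypothetical readings in uR/h.
    reading_shield_removed = 8.4    # primary + scatter + ambient background
    reading_shield_in_place = 3.1   # scatter + ambient background (primary blocked)

    primary_field = reading_shield_removed - reading_shield_in_place
    print(round(primary_field, 1))  # 5.3 uR/h attributed to the primary field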

In some instruments that produce pulses (GM counters or scintillation counters), the detector can
be separated electronically from the readout electronics and the detector output can be simulated
with a suitable pulser.  Caution must be exercised to ensure that either the high voltage is
properly blocked or that the pulser is designed for this application. If this can be accomplished,
the instrument can first be calibrated on a higher range that is not affected by the ambient
background and in a geometry where scatter is not a problem; then, after disconnecting the
detector, the pulser can be adjusted to provide the pulse rate that gives the same instrument response.
The pulse rate can then be related to field strength and reduced to give readings on lower ranges
(with the detector disconnected) even below the ambient background. This technique does not
take account of any inherent detector background independent of the external background.

Ionization chambers are commonly used to measure radiation fields at very low levels.  In order
to obtain the sensitivity necessary to measure these radiation levels, the instruments are
frequently very large and often pressurized. These instruments have the same calibration
problems as the more portable micro-R meters described above. The same  precautions (shadow
shield) must be taken to separate the response of the instrument to the source and to scattered
radiation.  Generally, it is not possible to substitute  an electronic pulser for the radiation field in
these instruments.

For energy-dependent gamma scintillation instruments, such as NaI(Tl) detectors, calibration for
the gamma energy spectrum at a specific site may be accomplished by comparing the instrument
response to that of a pressurized ionization chamber, or equivalent detector, at different locations
on the site. Multiple radionuclides with various photon energies may also be used to calibrate the
system for the specific energy of interest.

In the interval between calibrations, the instrument should receive a performance check prior to
use.  In some cases, a performance check following use may also provide valuable information.
This calibration check is merely intended to establish whether or not the instrument is operating
within certain specified, rather large, uncertainty limits. The initial performance check should be
conducted following the calibration by placing the source in a fixed,  reproducible location and
recording  the instrument reading. The source should be identified along with the instrument, and
the same check source should be used in the same fashion to demonstrate the instrument's
operability on a daily basis when the instrument is in use.  For analog readout (count rate)
instruments, a variation of ± 20% is usually considered acceptable.  For instruments that
integrate events and display the total on a digital readout, an acceptable response range is
typically established as the average plus or minus 2 or 3 standard deviations.  This is achieved by performing a series of
repetitive  measurements (10 or more is suggested) of background and check source response and
determining the average and standard deviation of those measurements.  From a practical
standpoint, a maximum deviation of ± 20% is usually adequate when compared with other
uncertainties associated with the use of the equipment.  The amount of uncertainty allowed in the
response checks should be consistent with the level of uncertainty allowed in the final data.
Ultimately the decision maker determines what level of uncertainty is acceptable.
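
The following Python sketch illustrates one way to establish such an acceptance range from
repeated check-source measurements; the counts are hypothetical, and the choice among the
± 20%, 2 standard deviation, or 3 standard deviation criteria rests with the project planning team.

    import statistics

    # Hypothetical repeated check-source measurements (integrated counts in a
    # fixed, reproducible geometry) used to set an acceptance range.
    check_counts = [10250, 10180, 10310, 10090, 10275, 10160, 10340, 10205, 10120, 10290]

    mean = statistics.mean(check_counts)
    sdev = statistics.stdev(check_counts)

    # Acceptance range of +/- 2 standard deviations about the mean (a +/- 3
    # standard deviation or +/- 20% criterion could be chosen instead).
    lower, upper = mean - 2 * sdev, mean + 2 * sdev

    daily_reading = 10155
    print(round(mean), round(sdev), lower <= daily_reading <= upper)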

Instrument response, including both the background and check source response  of the instrument,
should be  tested and recorded at a frequency that ensures the data collected with the equipment is
reliable. For most portable radiation survey equipment, MARSSIM recommends that a response
check be performed twice daily when in use—typically prior to beginning the day's
measurements and again following the conclusion of measurements on that same day.
Additional checks can be performed if warranted by the instrument and the conditions under
which it is used.  If the instrument response does not fall within the established range, the
instrument is removed from use until the reason for the deviation can be resolved and acceptable
response again demonstrated.  If the instrument fails the post-survey  source check, all data
collected during that time period with the instrument must be carefully reviewed and possibly
adjusted or discarded, depending on the cause of the failure. Ultimately, the frequency of
response checks must be balanced with the stability of the equipment being used under field
conditions and the quantity of data being collected. For example, if the instrument experiences a
sudden failure during the course of the day's work due to physical damage, such as a punctured
probe, then the data collected up until that point is probably acceptable even though a post-use
performance check cannot be performed. Likewise, if no obvious failure occurred but the
instrument failed the post-use response check, then the data collected with that instrument since
the last response check should be viewed with great skepticism and possibly re-collected or
randomly checked with a different instrument.  Additional corrective action alternatives are
presented in Section 9.3.  If re-calibration is necessary, acceptable response ranges must be
reestablished and documented.

Record requirements vary considerably and depend heavily on the needs of the user. While
Federal and State regulatory agencies all specify requirements, the following records should be
considered a minimum.

Laboratory Quality Control
•      records documenting the traceability of radiological standards
•      records documenting the traceability of electronic test equipment

Records for Instruments to be Calibrated
•      date received in the calibration laboratory
•      initial condition of the instrument, including mechanical condition (e.g., loose or broken
       parts, dents, punctures),  electrical condition (e.g., switches, meter movement, batteries),
       and radiological condition (presence or absence of contamination)
•      calibrator's records including training records and signature on calibration records
•      calibration data including model and serial number of instrument, date of calibration,
       recommended recalibration date, identification of source(s) used, "as found" calibration
       results, and final calibration results—"as returned" for use.

In addition, records of instrument problems, failures, and maintenance can be included and are
useful in assessing performance and identifying possible needs for altered calibration frequencies
for some instruments. Calibration records should be maintained at the facility where the
instruments are used as permanent records, and should be available either as hard copies or in
safe computer storage.
6.6    Data Conversion

This section describes methods for converting survey data to appropriate units for comparison to
radiological criteria.  As stated in Chapter 4, conditions applicable to satisfying decommissioning
requirements include determining that any residual contamination will not result in individuals
being exposed to unacceptable levels of radiation and/or radioactive materials.

Radiation survey data are usually obtained in units, such as the number of counts per unit time,
that have no intrinsic meaning relative to DCGLs. For comparison of survey data to DCGLs, the
survey data from field and laboratory measurements  should be converted to DCGL units.

6.6.1   Surface Activity
When measuring surface activity, it is important to account for the physical surface area assessed
by the detector in order to make probe area corrections and report data in the proper units (i.e.,
Bq/m2, dpm/100 cm2).  This is termed the physical probe area. A common misuse is to make
probe area corrections using the effective probe area which accounts for the amount of the
physical probe area covered by a protective screen. Figure 6.1 illustrates the difference between
the physical probe area and the effective probe area. The physical probe area is used because the
reduced detector response due to the screen is accounted for during instrument calibration.
       [Figure 6.1 illustration: a gas-flow proportional detector with a physical probe area of
        11.2 cm x 11.2 cm = 126 cm2, a protective screen covering 26 cm2, and an effective probe
        area of 100 cm2.]

                        Figure 6.1  The Physical Probe Area of a Detector
The conversion of instrument display in counts to surface activity units is obtained using the
following equation:

        \[ \mathrm{Bq/m^2} \;=\; \frac{C_S / T_S}{\varepsilon_T \times A}  \qquad\qquad (6\text{-}1) \]
where
       Cs    =     integrated counts recorded by the instrument
       Ts    =     time period over which the counts were recorded, in seconds
       εT    =     total efficiency of the instrument in counts per disintegration, effectively
                   the product of the instrument efficiency (εi) and the source efficiency (εs)
       A     =     physical probe area in m2

To convert instrument counts to conventional  surface activity units, Equation 6-1 can be
modified as shown in Equation 6-2.


        \[ \mathrm{dpm/100\;cm^2} \;=\; \frac{C_S / T_S}{\varepsilon_T \times (A/100)}  \qquad\qquad (6\text{-}2) \]
where Ts is recorded in minutes instead of seconds, and A is recorded in cm2 instead of m2.
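
A short Python sketch of Equations 6-1 and 6-2 is given below; the counts, counting time, and
total efficiency are hypothetical values chosen only to illustrate the arithmetic.

    # A short sketch of Equations 6-1 and 6-2 with hypothetical count data.
    counts = 560              # Cs, integrated counts recorded by the instrument
    count_time_s = 60.0       # Ts, seconds
    total_efficiency = 0.22   # total efficiency, counts per disintegration (hypothetical)
    probe_area_cm2 = 126.0    # A, physical probe area
    probe_area_m2 = probe_area_cm2 / 1.0e4

    # Equation 6-1: surface activity in Bq/m2
    bq_per_m2 = (counts / count_time_s) / (total_efficiency * probe_area_m2)

    # Equation 6-2: surface activity in dpm/100 cm2 (time in minutes, area in cm2)
    count_time_min = count_time_s / 60.0
    dpm_per_100cm2 = (counts / count_time_min) / (total_efficiency * (probe_area_cm2 / 100.0))

    print(round(bq_per_m2), round(dpm_per_100cm2))   # ~3367 Bq/m2, ~2020 dpm/100 cm2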

Some instruments have background counts associated with the operation of the instrument.  A
correction for instrument background can be included in the data conversion calculation as
shown in Equation 6-3. Note that the instrument background is not the same as the
measurements in the background reference area used to perform the statistical tests described in
Chapter 8.

        \[ \mathrm{Bq/m^2} \;=\; \frac{C_S/T_S \;-\; C_B/T_B}{\varepsilon_T \times A}  \qquad\qquad (6\text{-}3) \]
where
       Cb    =     background counts recorded by the instrument
       Tb    =     time period over which the background counts were recorded in seconds

Equation 6-3 can be modified to provide conventional surface activity units as shown in Equation
6-4.

        \[ \mathrm{dpm/100\;cm^2} \;=\; \frac{C_S/T_S \;-\; C_B/T_B}{\varepsilon_T \times (A/100)}  \qquad\qquad (6\text{-}4) \]
where Ts and Tb are recorded in minutes instead of seconds, and A is recorded in cm2 instead of
m2.
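
The same sketch can be extended to Equations 6-3 and 6-4 by subtracting the instrument
background; all values below are hypothetical.

    # Extending the sketch to Equations 6-3 and 6-4 with an instrument
    # background correction (all values hypothetical).  Note that the instrument
    # background is not the background reference area data used in Chapter 8.
    counts = 560              # Cs, gross counts over the surface
    count_time_s = 60.0       # Ts
    bkg_counts = 90           # Cb, instrument background counts
    bkg_time_s = 300.0        # Tb
    total_efficiency = 0.22   # hypothetical total efficiency
    probe_area_cm2 = 126.0    # A
    probe_area_m2 = probe_area_cm2 / 1.0e4

    net_cps = counts / count_time_s - bkg_counts / bkg_time_s

    # Equation 6-3: Bq/m2
    bq_per_m2 = net_cps / (total_efficiency * probe_area_m2)

    # Equation 6-4: dpm/100 cm2 (times in minutes, area in cm2)
    net_cpm = counts / (count_time_s / 60.0) - bkg_counts / (bkg_time_s / 60.0)
    dpm_per_100cm2 = net_cpm / (total_efficiency * (probe_area_cm2 / 100.0))

    print(round(bq_per_m2), round(dpm_per_100cm2))   # ~3259 Bq/m2, ~1955 dpm/100 cm2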

The presence of multiple radionuclides at a site requires additional considerations for
demonstrating compliance with a dose- or risk-based regulation.  As demonstrated in Section
4.3.2, a gross activity DCGL should be determined. For example, consider a site contaminated
with 60Co and 63Ni, with 60Co representing 60% of the total activity.  The relative fractions are 0.6
for  60Co and 0.4 for 63Ni. If the DCGL for 60Co is 8,300 Bq/m2 (5,000 dpm/100 cm2) and the
DCGL for 63Ni is 12,000 Bq/m2 (7,200 dpm/100 cm2), the gross activity DCGL is 9,500 Bq/m2
(5,700 dpm/100 cm2) calculated using Equation 4-4.

When using the gross activity DCGL, it is important to use an appropriately weighted total
efficiency to convert from instrument counts to surface activity units using Equations 6-1 through
6-4. In this example, the individual efficiencies for 60Co and 63Ni should be independently
evaluated. The overall efficiency is then determined by weighting each individual efficiency by
the  relative fraction of each radionuclide.
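
The following Python sketch reproduces this example using Equation 4-4 and then forms a
weighted total efficiency; the individual efficiencies shown are hypothetical.

    # Reproducing the 60Co/63Ni example above.  The gross activity DCGL follows
    # Equation 4-4; the individual total efficiencies are hypothetical.
    fractions = {"Co-60": 0.6, "Ni-63": 0.4}          # relative activity fractions
    dcgl_bq_m2 = {"Co-60": 8300.0, "Ni-63": 12000.0}  # individual DCGLs

    # Equation 4-4: gross activity DCGL = 1 / sum(f_i / DCGL_i)
    gross_dcgl = 1.0 / sum(fractions[n] / dcgl_bq_m2[n] for n in fractions)
    print(round(gross_dcgl))                 # ~9,500 Bq/m2, as stated in the text

    # Weighted total efficiency for converting gross counts to surface activity
    efficiencies = {"Co-60": 0.25, "Ni-63": 0.05}     # hypothetical efficiencies
    weighted_efficiency = sum(fractions[n] * efficiencies[n] for n in fractions)
    print(round(weighted_efficiency, 2))     # 0.17 counts per disintegration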

6.6.2  Soil Radionuclide Concentration and Exposure Rates

Analytical procedures,  such as  alpha and gamma spectrometry, are typically used to determine
the  radionuclide concentration  in soil in units of Bq/kg. Net counts are converted to soil DCGL
units by dividing by the time, detector or counter efficiency, mass or volume of the sample, and
by the fractional recovery or yield of the chemistry procedure (if applicable). Refer to Chapter 7
for  examples of analytical procedures.
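
A minimal Python sketch of this conversion is shown below; all of the values are hypothetical, and
the analytical details are covered in Chapter 7.

    # Sketch of converting net laboratory counts to a soil concentration in
    # Bq/kg, following the division described above.  All values are hypothetical.
    net_counts = 1200            # net counts in the counting window
    count_time_s = 3600.0        # counting time, seconds
    counter_efficiency = 0.30    # counts per disintegration (hypothetical)
    sample_mass_kg = 0.50        # mass of the sample analyzed
    chemical_yield = 1.0         # fractional recovery of the chemistry procedure

    concentration_bq_per_kg = net_counts / (
        count_time_s * counter_efficiency * sample_mass_kg * chemical_yield)
    print(round(concentration_bq_per_kg, 2))   # ~2.22 Bq/kg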

Instruments, such as a PIC or micro-R meter, used to measure exposure rate typically read
directly in mSv/h.  A gamma scintillation detector (e.g., NaI(Tl)) provides data in counts per
minute and conversion to mSv/h is accomplished by using site-specific calibration factors
developed for the specific instrument (Section 6.5.4).

In situ gamma spectrometry data may require special analysis routines before the spectral data
can be converted to soil concentration units or exposure rates.
6.7    Detection Sensitivity

The detection sensitivity of a measurement system refers to a radiation level or quantity of
radioactive material that can be measured or detected with some known or estimated level of
confidence.  This quantity is a factor of both the instrumentation and the technique or procedure
being used.

The primary parameters that affect the detection capability of a radiation detector are the
background count rate, the detection efficiency of the detector and the counting time interval. It
is important to use actual background count rate values and detection efficiencies when
determining counting and scanning parameters, particularly during final status and verification
surveys.  When making field measurements, the detection sensitivity will usually be less than
what can be achieved in a laboratory due to increased background and, often times, a
significantly lower detection efficiency. It is often impossible to guarantee that pure alpha
emitters can be detected in situ since the weathering of aged surfaces will often completely
absorb the alpha emissions. NRC report NUREG-1507 (NRC 1997b) contains data on many of
the parameters that affect detection efficiencies in situ, such as absorption, surface smoothness,
and particulate radiation energy.

6.7.1   Direct Measurement Sensitivity

Prior to performing field measurements, an investigator must evaluate the detection sensitivity of
the equipment proposed for use to ensure that levels below the DCGL can be detected (see
Section 4.3).  After a direct measurement has been made, it is then necessary to determine
whether or not the result can be distinguished from the instrument background response of the
measurement system. The terms that are used in this manual to define detection sensitivity for
fixed point counts and sample analyses are:

       Critical  level  (Lc)
       Detection limit (LD)
       Minimum detectable concentration (MDC)

The critical level (Lc) is the level, in counts, at which there  is a statistical probability (with a
predetermined confidence) of incorrectly identifying a measurement system background value as
"greater than background."  Any response above this level is considered to be greater than
background. The detection limit (LD) is an a priori estimate of the detection capability of a
measurement system, and is also reported in units of counts. The minimum detectable
concentration (MDC) is the detection limit (counts) multiplied by an appropriate conversion
factor to give units consistent with a site guideline, such as Bq/kg.

The following discussion provides an overview of the derivation contained in the well-known
publication by Currie (Currie 1968), followed by a description of how the resulting formulae
should be used. Publications by Currie (Currie 1968, NRC 1984) and Altshuler and Pasternak
(Altshuler and Pasternak 1963) provide details of the derivations involved.

The two parameters of interest for a detector system with a background response greater than
zero are:
       Lc     the net response level, in counts, at which the detector output can be considered
              "above background"
       LD     the net response level, in counts, that can be expected to be seen with a detector
              with a fixed level of certainty

Assuming that a system has a background response and that random uncertainties and systematic
uncertainties are accounted for separately, these parameters  can be calculated using Poisson
statistics.  For these calculations, two types of decision errors should be considered. A Type I
error (or "false positive") occurs when a detector response is considered to be above background
when, in fact, only background radiation is present.  A Type II error (or "false negative") occurs
when a detector response is considered to be background when in fact radiation is present at
levels above background.  The probability of a Type I error is referred to as α (alpha) and is
associated with Lc; the probability of a Type II error is referred to as β (beta) and is associated
with LD. Figure 6.2 graphically illustrates the relationship of these terms with respect to each
other and to a normal background distribution.
        [Figure 6.2 legend: background counts (mean); critical level and detection limit
        (net counts above background); probabilities of Type I and Type II errors]

        Figure 6.2 Graphically Represented Probabilities for Type I and Type II Errors
            in Detection Sensitivity for Instrumentation with a Background Response

If α and β are assumed to be equal, the variance (σ²) of all measurement values is assumed to be
equal to the values themselves.  If the background of the detection system is not well known,
then the critical detection level and the detection limit can be calculated by using the following
formulae:
                                    Lc = k √(2B)
                                                                                   (6-5)
                                    LD = k² + 2k √(2B)
where
       Lc     =      critical level (counts)
       LD     =      detection limit (counts)
       k      =      Poisson probability sum for α and β (assuming α and β are equal)
       B      =      number of background counts that are expected to occur while performing
                     an actual measurement

The curve to the left in the diagram is the background distribution minus the mean of the
background distribution. The result is a Poisson distribution with a mean equal to zero and a
variance, σ², equal to B.  Note that the distribution accounts only for the expected statistical
variation due to the stochastic nature of radioactive decay. Currie assumed "paired blanks" when
deriving the above stated relationships (Currie 1968), which is interpreted to mean that the
sample and background count times are the same.

If values of 0.05 for both α and β are selected as acceptable, then k = 1.645 (from Appendix I,
Table I.1) and Equation 6-5 can be written as:


                                    Lc = 2.33 √B
                                                                                   (6-6)
                                    LD = 3 + 4.65 √B
       Note:  In Currie's derivation, the constant factor of 3 in the LD formula was stated as
       being 2.71, but since that time it has been shown (Brodsky 1992) and generally accepted
       that a constant factor of 3 is more appropriate. If the sample count times and background
       count times are different, a slightly different formulation is used.

For an integrated measurement over a preset time, the MDC can be obtained from Equation 6-6
by multiplying by the factor, C. This factor is used to convert from counts to concentration as
shown in Equation 6-7:

                             MDC  =  C × (3 + 4.65 √B)                             (6-7)

The total detection efficiency and other constants or factors represented by the variable C are
usually not truly constants as shown in Equation 6-7.  It is likely that at least one of these factors
will have a certain amount of variability associated with it which may or may not be significant.
These varying factors are gathered together into the single constant, C, by which the net count
result will be multiplied when converting the final data.  If C varies significantly between
measurements, then it might be best to select a value, C', from the observed distribution of C
values that represents a conservative estimate. For example, C' might be selected to
ensure that at least 95% of the possible values of C are less than it.  The MDC
calculated in this way helps assure that the survey results will  meet the Data Quality Objectives.
This approach for including uncertainties into the MDC  calculation is recommended in both
NUREG/CR-4007 (NRC 1984) and Appendix A to ANSI N13.30 (ANSI 1996a).
Underestimating an MDC can have adverse consequences, especially if activity is later detected
at a level above the  stated MDC.

Summary of Direct Measurement Sensitivity Terms

•      The MDC is the a priori net activity level  above  the critical level that an instrument can
       be expected to detect 95% of the time. This value should be used when stating the
       detection capability of an instrument.  The MDC is the detection limit, LD, multiplied by
       an appropriate conversion factor to give units of activity.  Again, this value is used before
       any measurements are made and is used to estimate the level  of activity that can be
       detected using a given protocol.

•      The critical level, Lc, is the lower bound on the 95% detection interval defined for LD and
       is the level at which there is a 5% chance of calling a background value "greater than
       background." This value should be used when actually counting samples or making
       direct radiation measurements.  Any response above this level should be considered as
        above background (i.e., a net positive result).  This will ensure a 95% detection capability.
•      From a conservative point of view, it is better to overestimate the MDC for a
       measurement method. Therefore, when calculating MDC and Lc values, a measurement
       system background value should be selected that represents the high end of what is
       expected for a particular measurement method.  For direct measurements, probes will be
       moved from point to point and, as a result, it is expected that the background will most
       likely vary significantly due to variations in background, source materials, and changes in
       geometry and shielding.  Ideally, the MDC values should be calculated for each type of
       area, but it may be more economical to simply select a background value from the highest
       distribution expected and use this for all calculations. For the same reasons, realistic
       values of detection efficiencies and other process parameters should be used when
       possible and should be reflective of the actual conditions. To a great degree, the selection
       of these parameters will be based on judgment and will require evaluation of site-specific
       conditions.


MDC values for other counting conditions may be derived from Equation 6-7 depending on the
detector and contaminants of concern. For example, it may be required to determine what level
of contamination, distributed over 100 cm2, can be detected with a 500 cm2 probe or what
contamination level can be detected with any probe when the contamination area is smaller than
the probe active area. Table 6.4 lists several common field survey detectors with estimates of
MDC values for 238U on a smooth, flat plane.  As such, these represent minimum MDC values
and may not be  applicable at all sites. Appropriate site-specific MDC values should be
determined using the DQO Process.

          Table 6.4 Examples of Estimated Detection Sensitivities for Alpha and
                              Beta Survey Instrumentation

         (Static one minute counts for 238U calculated using Equations 6-6 and 6-7)
 Detector               Probe area   Background   Efficiency    Approximate    Approximate    Sensitivity
                        (cm²)        (cpm)        (cpm/dpm)     Lc (counts)    LD (counts)    MDC (Bq/m²)ᵃ
 Alpha proportional     50           1            0.15          2              7              150
 Alpha proportional     100          1            0.15          2              7              83
 Alpha proportional     600          5            0.15          5              13             25
 Alpha scintillation    50           1            0.15          2              7              150
 Beta proportional      100          300          0.20          40             83             700
 Beta proportional      600          1,500        0.20          90             183            250
 Beta GM pancake        15           40           0.20          15             32             1,800

ᵃ Assumes that the size of the contamination area is at least as large as the probe area.

       Sample Calculation 1:

       The following example illustrates the calculation of an MDC in Bq/m2 for an instrument
       with a 15 cm2 probe area when the measurement and background counting times are each
       one minute:
       B      =      40 counts
       C      =      (5 dpm/count)(Bq/60 dpm)(1/15 cm² probe area)(10,000 cm²/m²)
              =      55.6 Bq/m² per count

       The MDC is calculated using Equation 6-7:

          MDC = 55.6 × (3 + 4.65 √40) = 1,800 Bq/m² (1,100 dpm/100 cm²)


       The critical level, Lc, for this example is calculated from Equation 6-6:


                              Lc = 2.33 √B  =  15 counts
       Given the above scenario, if a person asked what level of contamination could be detected
       95% of the time using this method, the answer would be 1,800 Bq/m2 (1,100 dpm/100
       cm2). When actually performing measurements using this method, any count yielding
       greater than 55 total counts, or greater than 15 net counts (55-40=15) during a period of
       one minute, would be regarded as greater than background.
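
The calculation above can also be scripted as a quick check. The following sketch is illustrative only; it reproduces Sample Calculation 1 using Equations 6-6 and 6-7, assuming equal ("paired blank") one-minute sample and background counts and α = β = 0.05.

    import math

    # Illustrative sketch of Equations 6-6 and 6-7 (paired blanks, alpha = beta = 0.05).
    def critical_level(bkg_counts):
        """Lc in counts: responses above this are treated as greater than background."""
        return 2.33 * math.sqrt(bkg_counts)

    def mdc(bkg_counts, conversion_factor):
        """A priori MDC: detection limit LD (counts) times a counts-to-concentration factor."""
        detection_limit = 3 + 4.65 * math.sqrt(bkg_counts)
        return conversion_factor * detection_limit

    B = 40                                   # background counts in a one-minute count
    C = 5 * (1 / 60) * (1 / 15) * 10_000     # (5 dpm/count)(Bq/60 dpm)(1/15 cm2)(10,000 cm2/m2)
    print(round(critical_level(B)))          # ~15 counts
    print(round(mdc(B, C)))                  # ~1,800 Bq/m2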

6.7.2   Scanning Sensitivity

The ability to identify a small area of elevated radioactivity during surface scanning is dependent
upon the surveyor's skill in recognizing an increase in the audible or display output of an
instrument.  For notation purposes, the term "scanning sensitivity" is used throughout this section
to describe the ability of a surveyor to detect a pre-determined level of contamination with a
detector. The greater the  sensitivity, the lower the level of contamination that can be detected.

Many of the radiological instruments and monitoring techniques typically used for occupational
health physics activities may not provide the detection sensitivities necessary to demonstrate
compliance with the DCGLs. The detection sensitivity for a given application can be improved
(i.e., lower the MDC) by: 1) selecting an instrument with a higher detection efficiency or a lower
background, 2) decreasing the scanning speed, or 3) increasing the size of the  effective probe
area without significantly increasing the background response.

Scanning is usually performed during radiological surveys in support of decommissioning to
identify the presence of any areas of elevated activity. The probability of detecting residual
contamination in the field depends not only on the sensitivity of the survey instrumentation when
used in the scanning mode of operation, but is also affected by the surveyor's ability—i.e.,
human factors.  The surveyor must make a decision whether the signals represent only the
background activity, or residual contamination in excess of background. The greater the
sensitivity, the lower the level of contamination that may be detected by scanning. Accounting
for these human factors represents a significant change from the traditionally accepted methods
of estimating scanning sensitivities.

An empirical method for evaluating the detection sensitivity for contamination surveys is by
actual experimentation or, since it is certainly feasible, by simulating an experimental setup using
computer software.  The following steps provide a simple example of how one can perform this
empirical evaluation:

1)     A desired nuclide contamination level is selected.
2)     The response of the detector to be used is determined for the selected nuclide
       contamination level.
3)     A test source is constructed which will give a detector count rate equivalent to what was
       determined in step 2. The count rate is equivalent to what would be expected from the
       detector when placed on an actual contamination area equal in value to that selected in
       step 1.
4)     The detector of choice is then moved over the source at different scan rates until an
       acceptable speed is determined.

The most useful aspect of this approach is that the source can then be used to show surveyors
what level of contamination is expected to be targeted with the scan. They, in turn, can gain
experience with what the expected response of the detector will be and  how fast they can survey
and still feel comfortable about detecting the target contamination level. The person responsible
for the survey can then use this information when developing a fixed point measurement and
sampling plan.

The remainder of this section is dedicated to providing the reader with information pertaining to
the underlying processes involved when performing scanning surveys for alpha, beta, and gamma
emitting radionuclides.  The purpose is to provide relevant information that can be used for
estimating realistic scanning sensitivities for survey activities.

6.7.2.1  Scanning for Beta and Gamma Emitters

The minimum detectable concentration of a scan survey (scan MDC) depends on the intrinsic
characteristics of the detector (efficiency, physical probe area, etc.),  the nature (type  and  energy
of emissions) and relative distribution of the potential contamination (point versus distributed
source and depth of contamination), scan rate, and other characteristics of the surveyor.  Some
factors that may affect the surveyor's performance include the costs associated with various
outcomes—e.g., fatigue, noise, level of training, experience—and the surveyor's a priori
expectation of the likelihood of contamination present. For example, if the surveyor believes that
the potential for contamination is very low, as in a Class 3 area, a relatively large signal may be
required for the surveyor to conclude that contamination is present. NRC draft report
NUREG/CR-6364 (NRC 1997d) provides a complete discussion of the human factors as they
relate to the performance of scan surveys.

Signal Detection Theory. Personnel conducting radiological surveys for residual contamination
at decommissioning sites must interpret the audible output of a portable survey instrument to
determine when the signal ("clicks") exceeds the background level by a margin sufficient to
conclude that contamination is present.  It is difficult to detect low levels of contamination
because both the signal and the background vary widely.  Signal detection theory provides a
framework for the task of deciding whether the audible output of the survey meter during
scanning is due to background or signal plus background levels. An index of sensitivity (d') that
represents the distance between the means of the background and background plus signal  (refer
to Figure 6.2 for determining LD), in units of their common standard deviation, can be calculated
for various decision errors (correct detection and false positive rate). As an example, for a
correct detection rate of 95% (complement of a false  negative rate of 5%) and a false positive
rate of 5%, d' is 3.29 (similar to the static MDC for the same decision error rates). The index of
sensitivity is independent of human factors, and therefore the ability of an ideal observer
(a theoretical construct) may be used to determine the minimum d' that can be achieved for
particular decision errors.  The ideal observer makes optimal use of the available information to
maximize the percent correct responses, providing an effective upper bound against which to
compare actual surveyors. Table 6.5 lists selected values of d'.

Two Stages of Scanning. The framework for determining the scan MDC is based on the
premise that there are two stages of scanning.  That is, surveyors do not make decisions on the
basis of a single indication, rather, upon noting an increased number of counts, they  pause briefly
and then decide whether to move on or take further measurements. Thus, scanning consists of
two components: continuous monitoring and  stationary sampling.  In the first component,
characterized by continuous movement of the probe,  the  surveyor has only a brief "look" at
potential sources, determined by the scan speed.  The surveyor's willingness to decide that a
signal is present at this stage is likely to be liberal, in that the surveyor should respond positively
on scant evidence, since the only "cost" of a false positive is a little time. The second  component
occurs only after a positive response was made at the first stage. This response is marked by the
surveyor interrupting his scanning and holding the probe stationary for a period of time, while
comparing the instrument output signal during that time to the background counting rate.  Owing
to the longer observation interval, sensitivity is relatively high.  For this decision, the criterion
should be more strict, since the cost of a "yes" decision is to spend considerably more  time taking
a static measurement or a sample.
      Table 6.5  Values of d' for Selected True Positive and False Positive Proportions

  False Positive                          True Positive Proportion
   Proportion        0.60    0.65    0.70    0.75    0.80    0.85    0.90    0.95
      0.05           1.90    2.02    2.16    2.32    2.48    2.68    2.92    3.28
      0.10           1.54    1.66    1.80    1.96    2.12    2.32    2.56    2.92
      0.15           1.30    1.42    1.56    1.72    1.88    2.08    2.32    2.68
      0.20           1.10    1.22    1.36    1.52    1.68    1.88    2.12    2.48
      0.25           0.93    1.06    1.20    1.35    1.52    1.72    1.96    2.32
      0.30           0.78    0.91    1.05    1.20    1.36    1.56    1.80    2.16
      0.35           0.64    0.77    0.91    1.06    1.22    1.42    1.66    2.02
      0.40           0.51    0.64    0.78    0.93    1.10    1.30    1.54    1.90
      0.45           0.38    0.52    0.66    0.80    0.97    1.17    1.41    1.77
      0.50           0.26    0.38    0.52    0.68    0.84    1.04    1.28    1.64
      0.55           0.12    0.26    0.40    0.54    0.71    0.91    1.15    1.51
      0.60           0.00    0.13    0.27    0.42    0.58    0.82    1.02    1.38

Since scanning can be divided into two stages, it is necessary to consider the survey's scan
sensitivity for each of the stages.  Typically, the minimum detectable count rate (MDCR)
associated with the first scanning stage will be greater due to the brief observation intervals of
continuous monitoring—provided that the length of the pause during the second stage is
significantly longer. Typically, observation intervals during the first stage are on the order of 1
or 2 seconds, while the second stage pause may be several seconds long. The greater value of
MDCR from each of the scan stages is used to determine the scan sensitivity for the surveyor.

Determination of MDCR and Use of Surveyor Efficiency.  The minimum detectable number
of net source counts in the interval is given by si.  Therefore, for an ideal observer, the number of
source counts required for a specified level of performance can be arrived at by multiplying the
square root of the number of background counts by the detectability value associated with the
desired performance (as reflected in d') as shown in Equation 6-8:

                                    si = d' √bi                                     (6-8)

where the value of d' is selected from Table 6.5 based on the required true positive and false
positive rates and bi is the number of background counts in the interval.


For example, suppose that one wished to estimate the minimum count rate that is detectable by
scanning in an area with a background of 1,500 cpm. Note that the minimum detectable count
rate must be considered for both scan stages—and the more conservative value is selected as the
minimum count rate that is detectable. It will be assumed that a typical source remains under the
probe for 1 second during the first stage, therefore, the average number of background counts in
the observation interval is 25 (bi = 1,500 × (1/60)).  Furthermore, as explained earlier, it can be
assumed that at the first scanning stage a high rate (e.g., 95%) of correct detections is required,
and that a correspondingly high rate of false positives (e.g., 60%) will be tolerated. From Table
6.5, the value of d' representing this performance goal is 1.38. The net source counts needed to
support the specified level of performance (assuming an ideal observer) will be estimated by
multiplying 5 (the square root of 25) by 1.38.  Thus, the net source counts per interval, si, needed
to yield better than 95% detections with about 60% false positives is 6.9.  The minimum
detectable source count rate, in cpm, may be calculated by:

                                MDCR = si × (60/i)                                  (6-9)

where i is the observation interval in seconds.  For this example, MDCR is equivalent to 414 cpm
(1,914 cpm gross). Table 6.6 provides the scan sensitivity for the ideal observer (MDCR) at the
first scanning stage for various background levels, based on an index of sensitivity (d') of 1.38
and a 2-second observation interval.

              Table 6.6  Scanning Sensitivity (MDCR) of the Ideal Observer for
                                Various Background Levelsᵃ

       Background (cpm)      MDCR (net cpm)      Scan Sensitivity (gross cpm)
       45                    50                  95
       60                    60                  120
       260                   120                 380
       300                   130                 430
       350                   140                 490
       400                   150                 550
       1,000                 240                 1,240
       3,000                 410                 3,410
       4,000                 480                 4,480

ᵃ The sensitivity of the ideal observer during the first scanning stage is based on an index of sensitivity (d') of 1.38
and a 2-second observation interval.


The minimum number of source counts required to support a given level of performance for the
final detection decision (second scan stage) can be estimated using the same method.  As
explained earlier, the performance goal at this stage will be more demanding.  The required rate
of true positives remains high (e.g., 95%), but fewer false positives (e.g., 20%) can be tolerated,
such that d' (from Table 6.5) is now 2.48.  One will assume that the surveyor typically stops the
probe over a suspect location for about 4 seconds before making a decision, so that the average
number of background counts in an observation interval is 100 (bi = 1,500 × (4/60)).  Therefore,
the minimum detectable number of net source counts, si, needed will be estimated by multiplying
10 (the square root of 100) by 2.48 (the d' value); so si equals 24.8.  The MDCR is calculated by
24.8 × (60/4) and equals 372 cpm. The value associated with the first scanning stage (in this
example, 414 cpm) will typically be greater, owing to the relatively brief intervals assumed.

Laboratory studies using simulated sources  and backgrounds were performed to assess the
abilities of surveyors under controlled conditions.  The methodology and analysis of results for
these studies are described in draft NUREG/CR-6364 (NRC 1997d) and NUREG-1507 (NRC
1997b). The surveyor's actual performance as compared with that which is ideally possible
(using the ideal  observer construct) provided an indication of the efficiency of the surveyors.
Based on the results of the confidence rating experiment, this surveyor efficiency (p) was
estimated to be between 0.5 and 0.75.

MARSSIM recommends assuming an efficiency value at the lower end of the observed range
(i.e., 0.5) when making MDC estimates. Thus, the required number of net source counts for the
surveyor, MDCRsurveyor, is determined by dividing the MDCR by the square root of p.  Continuing
with this example, the surveyor MDCR is calculated by 414 cpm/0.707, or 585 cpm (2,085 cpm
gross).
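
The MDCR calculations for the two scanning stages, and the adjustment for surveyor efficiency, can be sketched as follows. This is illustrative only and simply reproduces the example above (1,500 cpm background; d' values of 1.38 and 2.48 for the first and second stages; 1-second and 4-second observation intervals; p = 0.5).

    import math

    # Illustrative sketch of Equations 6-8 and 6-9 for both scanning stages.
    def mdcr(background_cpm, interval_s, d_prime):
        b_i = background_cpm * interval_s / 60.0   # expected background counts in the interval
        s_i = d_prime * math.sqrt(b_i)             # minimum detectable net source counts
        return s_i * 60.0 / interval_s             # minimum detectable count rate, net cpm

    background = 1500.0
    stage1 = mdcr(background, interval_s=1, d_prime=1.38)   # 95% true positives, 60% false positives
    stage2 = mdcr(background, interval_s=4, d_prime=2.48)   # 95% true positives, 20% false positives
    p = 0.5                                                 # surveyor efficiency
    mdcr_surveyor = max(stage1, stage2) / math.sqrt(p)
    print(round(stage1), round(stage2), round(mdcr_surveyor))   # 414 372 585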

Scan MDCs for Structure Surfaces and Land Areas. The survey design for determining the
number of data points for areas of elevated  activity (see Section 5.5.2.4) depends on the scan
MDC for the selected instrumentation. In general, alpha or beta scans are performed on structure
surfaces to satisfy the elevated activity measurements survey design, while gamma scans are
performed for land areas. Because of low background levels for alpha emitters, the approach
described here is not generally applied to determining scan MDCs for alpha contaminants—
rather, the reader is referred to Section 6.7.2.2 for an appropriate method for determining alpha
scan MDCs for building surfaces.  In any case, the data requirements for assessing potential
elevated areas of direct radiation depend on the scan MDC of the survey instrument (e.g., floor
monitor, GM detector, Nal scintillation detector).

Scan MDCs for Building/Structure Surfaces. The scan MDC is determined from the minimum
detectable count rate (MDCR) by applying  conversion factors that account for detector and
surface characteristics and surveyor efficiency.  As discussed above, the MDCR accounts for the
background level, performance criteria (d'), and observation interval. The observation interval
during scanning is the actual time that the detector can respond to the contamination source—
this interval depends on the scan speed, detector size in the direction of the scan, and area of
elevated activity. Because the actual dimensions of potential areas of elevated activity in the
field cannot be known a priori, MARSSIM recommends postulating a certain area (e.g., perhaps
50 to 200 cm2), and then selecting a scan rate that provides a reasonable observation interval.

Finally, the scan MDC for structure surfaces may be calculated:

                  Scan MDC  =  MDCR / [ √p × εi × εs × (probe area/100 cm²) ]        (6-10)

where
        MDCR   =     minimum detectable count rate
        εi     =     instrument efficiency
        εs     =     surface efficiency
        p      =     surveyor efficiency

As an example, the scan MDC (in dpm/100 cm²) for 99Tc on a concrete surface may be
determined for a background level of 300 cpm and a 2-second observation interval using a hand-
held gas proportional detector (126 cm2 probe area). For a specified level of performance at the
first scanning stage of 95% true positive rate and 60% false positive rate (and assuming the
second stage pause is sufficiently long to ensure that the first stage is more limiting), d' equals
1.38 (Table 6.5) and the MDCR is 130 cpm (Table 6.6). Using a surveyor efficiency of 0.5, and
assuming instrument and surface efficiencies  of 0.36 and 0.54, respectively, the scan MDC is
calculated using Equation 6-10:

               Scan MDC  =  130 / [ √0.5 × (0.36) × (0.54) × (1.26) ]  =  750 dpm/100 cm²
Additional examples for calculating the scan MDC may be found in NUREG-1507 (NRC
1997b).
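
The structure-surface scan MDC of Equation 6-10 can also be scripted, as in the following sketch, which reproduces the example above (MDCR of 130 cpm, surveyor efficiency of 0.5, instrument and surface efficiencies of 0.36 and 0.54, and a 126 cm² probe area); it is illustrative only.

    import math

    # Illustrative sketch of Equation 6-10: scan MDC for building/structure surfaces.
    def scan_mdc_surface(mdcr_cpm, p, eff_instrument, eff_surface, probe_area_cm2):
        return mdcr_cpm / (math.sqrt(p) * eff_instrument * eff_surface *
                           (probe_area_cm2 / 100.0))

    print(scan_mdc_surface(130, 0.5, 0.36, 0.54, 126))   # about 750 dpm/100 cm2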

Scan MDCs for Land Areas. In addition to the MDCR and detector characteristics, the scan
MDC (in pCi/g) for land areas is based on the area of elevated activity, depth of contamination,
and the radionuclide (i.e., energy and yield of gamma emissions). If one assumes constant
parameters for each of the above variables, with the exception of the specific radionuclide in
question, the scan MDC may be reduced to a function of the radionuclide alone.  Nal scintillation
detectors are generally used for scanning land areas.

An overview of the approach used to determine scan MDCs for land areas follows.  The Nal(Tl)
scintillation detector background level and scan rate (observation interval) are postulated, and the
MDCR for the ideal observer, for a given level of performance, is obtained.  After a surveyor
efficiency is selected, the relationship between the surveyor MDCR (MDCRsurveyor) and the
radionuclide concentration in soil (in Bq/kg or pCi/g) is determined.  This correlation requires
two steps—first, the relationship between the detector's net count rate and net exposure rate (cpm
per µR/h) is established, and second, the relationship between the radionuclide contamination
and exposure rate is determined.

For a particular gamma energy, the relationship of Nal(Tl) scintillation detector count rate and
exposure rate may be determined analytically (in cpm per µR/h).  The approach used to
determine the gamma fluence rate necessary to yield a fixed exposure rate (1 µR/h)—as a
function of gamma energy—is provided in NUREG-1507 (NRC 1997b). The NaI(Tl)
scintillation detector response (cpm) is related to the fluence rate at specific energies, considering
the detector's efficiency (probability of interaction) at each energy.  From this, the NaI(Tl)
scintillation detector response versus exposure rate for varying gamma energies is determined.  Once the
relationship between the NaI(Tl) scintillation detector response (cpm) and the exposure rate is
established, the MDCRsurveyor (in cpm) of the NaI(Tl) scintillation detector can be related to the
minimum detectable net exposure rate. The minimum detectable exposure rate is used to
determine the minimum detectable radionuclide concentration (i.e., the scan MDC) by modeling
a specified small area of elevated activity.

Modeling (using Microshield™) of the small area of elevated activity (soil concentration) is used
to determine the net exposure rate produced by a radionuclide concentration at a distance 10 cm
above the source.  This position is selected because it relates to the average height of the Nal(Tl)
scintillation detector above the ground during scanning.

The factors considered in the modeling include:

       radionuclide of interest (considering all gamma emitters for decay chains)
       expected concentration of the radionuclide of interest
       areal dimensions of the area of elevated activity
       depth of the area of elevated activity
       location of dose point (Nal(Tl) scintillation detector height above the  surface)
       density of soil

Modeling analyses are conducted by selecting a radionuclide (or radioactive material decay
series) and then varying the concentration of the contamination. The other factors are held
constant—the areal dimension of a cylindrical area of elevated activity is 0.25 m2 (radius of 28
cm), the  depth of the area of elevated activity is 15 cm, the dose point is 10 cm above the surface,
and the density of soil is 1.6 g/cm3.  The objective is to determine the radionuclide concentration
that is correlated to the minimum detectable net exposure rate.



As an example, the scan MDC for 137Cs using a 1.5 in. by 1.25 in. Nal(Tl) scintillation detector is
considered in detail. Assume that the background level is 4,000 cpm and that the desired level of
performance, 95% correct detections and 60% false positive rate, results in a d' of 1.38. The
scan rate of 0.5 m/s provides an observation interval of 1 second (based on a diameter of about 56
cm for the area of elevated activity).  The MDCRsurveyor may be calculated assuming a surveyor
efficiency (p) of 0.5 as follows:

             1)     bi = (4,000 cpm) × (1 sec) × (1 min/60 sec) = 66.7 counts

             2)     MDCR = (1.38) × (√66.7) × (60 sec/1 min) = 680 cpm

             3)     MDCRsurveyor = 680/√0.5 = 960 cpm
The corresponding minimum detectable exposure rate is determined for this detector and
radionuclide. The manufacturer of this particular 1.5 in. by 1.25 in. NaI(Tl) scintillation detector
quotes a count rate to exposure rate ratio for 137Cs of 350 cpm per µR/h. The minimum
detectable exposure rate is calculated by dividing the count rate (960 cpm) by the count rate to
exposure rate ratio for the radionuclide of interest (350 cpm per µR/h). The minimum detectable
exposure rate for this example is 2.73 µR/h.

Both 137Cs and its short-lived progeny, 137mBa, were chosen from the Microshield™ library. The
source activity and other modeling parameters were  entered into the modeling code. The source
activity was selected based on an arbitrary concentration of 5 pCi/g.  The modeling code
performed the appropriate calculations and determined an exposure rate of 1.307 µR/h (which
accounts for buildup). Finally, the radionuclide concentrations of 137Cs and 137mBa (scan MDC)
necessary to yield the minimum detectable exposure rate (2.73 µR/h) may be calculated using the
following formula:

                   scan MDC  =  (5 pCi/g)(2.73 µR/h) / (1.307 µR/h)  =  10.4 pCi/g              (6-11)
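
The chain of calculations in this example can be sketched as follows. The modeled exposure rate (1.307 µR/h for 5 pCi/g) is taken from the Microshield™ result described above and simply treated as an input; the sketch is illustrative only, and small differences from 10.4 pCi/g arise because the text rounds intermediate values (680, 960, and 2.73).

    import math

    # Illustrative sketch of the land-area scan MDC chain for the Cs-137 example.
    background_cpm = 4000.0
    interval_s = 1.0
    d_prime = 1.38                                        # 95% true positives, 60% false positives
    p = 0.5                                               # surveyor efficiency

    b_i = background_cpm * interval_s / 60.0              # ~66.7 background counts per interval
    mdcr = d_prime * math.sqrt(b_i) * 60.0 / interval_s   # ~680 cpm for the ideal observer
    mdcr_surveyor = mdcr / math.sqrt(p)                   # ~960 cpm

    cpm_per_uR_h = 350.0                                  # detector-specific ratio for Cs-137
    min_detectable_exposure = mdcr_surveyor / cpm_per_uR_h    # ~2.73 uR/h

    modeled_conc_pCi_g = 5.0                              # concentration assumed in the model
    modeled_exposure_uR_h = 1.307                         # exposure rate returned by the model
    scan_mdc = modeled_conc_pCi_g * min_detectable_exposure / modeled_exposure_uR_h
    print(round(scan_mdc, 1))    # ~10.4-10.5 pCi/g; the text obtains 10.4 from rounded intermediates
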
It must be emphasized that while a single scan MDC value can be calculated for a given
radionuclide—other scan MDC values may be equally justifiable depending on the values chosen
for the various factors, including the MDCR (background level, acceptable performance criteria,
observation interval), surveyor efficiency, detector parameters and the modeling conditions of the
contamination.  It should also be noted that determination of the scan MDC for radioactive
materials—like uranium and thorium—must consider the gamma radiation emitted from the
entire decay series. NUREG-1507 (NRC 1997b) provides a detailed example of how the scan
MDC can be determined for enriched uranium.

Table 6.7 provides scan MDCs for common radionuclides and radioactive materials in soil. It is
important to note that the variables used in the above examples to determine the scan MDCs for
the 1.25 in. by 1.5 in. NaI(Tl) scintillation detector—i.e., the MDCRsurveyor, detector parameters
(e.g., cpm per µR/h), and the characteristics of the area of elevated activity—have all been held
constant to facilitate the calculation of scan MDCs provided in Table 6.7.  The benefit of this
approach is that generally applicable scan MDCs are provided for different radioactive
contaminants. Additionally, the relative detectability of different contaminants is evident
because the only variable in Table 6.7 is the nature of the contaminant.

As noted above, the scan MDCs calculated using the approach in this section are dependent on
several factors. One way to validate the appropriateness of the scan MDC is by tracking the
residual radioactivity (both surface activity and soil concentrations) levels identified during
investigations performed as a result of scanning surveys. The measurements performed during
these investigations may provide an a posteriori estimate of the scan MDC that can be used to
validate the a priori scan MDC used to design the survey.

6.7.2.2  Scanning for Alpha Emitters

Scanning for alpha emitters differs significantly from scanning for beta and gamma emitters in
that the expected background response of most alpha detectors is very close to zero. The
following discussion covers scanning for alpha emitters and assumes that the surface being
surveyed is similar in nature to the material on which the detector was calibrated.  In this respect,
the approach is purely theoretical.  Surveying surfaces  that are dirty, non-planar, or weathered
can significantly affect the detection efficiency and therefore bias the expected MDC for the
scan. The use of reasonable detection efficiency values instead of optimistic values is highly
recommended.  Appendix J contains a complete derivation of the alpha scanning equations used
in this section.

Since the time a contaminated area is under the probe varies and the background count rate of
some alpha instruments is less than 1 cpm, it is not practical to determine a fixed MDC for
scanning. Instead, it is more useful to determine the probability of detecting an area of
contamination at a predetermined DCGL for given scan rates.

For alpha survey instrumentation with backgrounds ranging from <1 to 3 cpm, a single count
provides a surveyor sufficient cause to stop and investigate further. Assuming this to be true, the
probability of detecting given levels of alpha surface contamination can be calculated by use of
Poisson summation statistics.
                     Table 6.7  NaI(Tl) Scintillation Detector Scan MDCs
                            for Common Radiological Contaminantsᵃ

                                           1.25 in. by 1.5 in. NaI Detector    2 in. by 2 in. NaI Detector
 Radionuclide/Radioactive Material         Scan MDC       Weighted             Scan MDC       Weighted
                                           (Bq/kg)        cpm/µR/h             (Bq/kg)        cpm/µR/h
 Am-241                                    1,650          5,830                1,170          13,000
 Co-60                                     215            160                  126            430
 Cs-137                                    385            350                  237            900
 Th-230                                    111,000        4,300                78,400         9,580
 Ra-226
 (in equilibrium with progeny)             167            300                  104            760
 Th-232 decay series
 (sum of all radionuclides in the
 thorium decay series)                     1,050          340                  677            830
 Th-232
 (in equilibrium with progeny in
 decay series)                             104            340                  66.6           830
 Depleted Uraniumᵇ (0.34% U-235)           2,980          1,680                2,070          3,790
 Natural Uraniumᵇ                          4,260          1,770                2,960          3,990
 3% Enriched Uraniumᵇ                      5,070          2,010                3,540          4,520
 20% Enriched Uraniumᵇ                     5,620          2,210                3,960          4,940
 50% Enriched Uraniumᵇ                     6,220          2,240                4,370          5,010
 75% Enriched Uraniumᵇ                     6,960          2,250                4,880          5,030

ᵃ Refer to text for a complete explanation of the factors used to calculate scan MDCs. For example, the background
level for the 1.25 in. by 1.5 in. NaI detector was assumed to be 4,000 cpm, and 10,000 cpm for the 2 in. by 2 in. NaI
detector. The observation interval was 1 second and the level of performance was selected to yield a d' of 1.38.
ᵇ Scan MDC for uranium includes the sum of 238U, 235U, and 234U.


Given a known scan rate and a surface contamination DCGL, the probability of detecting a single
count while passing over the contaminated area is

                              P(n≥1)  =  1 − exp(−G E d / 60v)                      (6-12)

where
       P(n≥1)  =     probability of observing a single count
       G       =     contamination activity (dpm)
       E       =     detector efficiency (4π)
       d       =     width of detector in direction of scan (cm)
       v       =     scan speed (cm/s)

       Note:  Refer to Appendix J for a complete derivation of these formulas.

Once a count is recorded and the guideline level of contamination is present, the surveyor should
stop and wait until the probability of getting another count is at least 90%. This time interval
can be calculated by

                                     t  =  13,800 / (C A E)                         (6-13)

where
       t      =      time period for static count (s)
       C      =      contamination guideline (dpm/100 cm²)
       A      =      physical probe area (cm²)
       E      =      detector efficiency (4π)

Many portable proportional counters have background count rates on the order of 5 to 10 cpm,
and a single count should not cause a  surveyor to investigate further. A counting period long
enough to establish that a single count indicates an elevated contamination level would be
prohibitively inefficient. For these types of instruments, the surveyor usually will need to get at
least 2 counts while passing over the source area before stopping for further investigation.

Assuming this to be a valid assumption, the probability of getting two or more counts can be
calculated by:
                    P(n≥2)  =  1 − P(n=0) − P(n=1)

                            =  1 − [1 + (GE + B)t/60] exp(−(GE + B)t/60)            (6-14)

where
       P(n≥2)  =     probability of getting 2 or more counts during the time interval t
       P(n=0)  =     probability of not getting any counts during the time interval t
       P(n=1)  =     probability of getting 1 count during the time interval t
       B       =     background count rate (cpm)

All other variables are the same as for Equation 6-12.
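
For illustration, the sketch below evaluates Equations 6-12 through 6-14 for example inputs (a 300 dpm source under a proportional probe with 0.20 efficiency, 5 cm width in the scan direction, a 3 cm/s scan speed, a 100 cm² probe area, and a 7 cpm background). Equation 6-14 is implemented here as the standard Poisson expression for two or more counts, and the input values are examples rather than recommendations.

    import math

    # Illustrative sketch of Equations 6-12 through 6-14 for alpha scanning.
    def p_at_least_one(G_dpm, E, d_cm, v_cm_s):
        """Probability of at least one count while passing over the source (Equation 6-12)."""
        return 1.0 - math.exp(-G_dpm * E * d_cm / (60.0 * v_cm_s))

    def static_pause_s(C_dpm_per_100cm2, A_cm2, E):
        """Pause time for a >=90% chance of a confirming count (Equation 6-13)."""
        return 13_800.0 / (C_dpm_per_100cm2 * A_cm2 * E)

    def p_at_least_two(G_dpm, E, B_cpm, t_s):
        """Probability of two or more counts in t seconds (Poisson form of Equation 6-14)."""
        mu = (G_dpm * E + B_cpm) * t_s / 60.0
        return 1.0 - math.exp(-mu) * (1.0 + mu)

    print(p_at_least_one(300, 0.20, 5, 3))       # ~0.81, consistent with Table 6.8
    print(static_pause_s(300, 100, 0.20))        # 2.3 s pause for a 90% confirming count
    print(p_at_least_two(300, 0.20, 7, 2.0))     # ~0.65 for a 2-second pass over the source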

Appendix J provides a complete derivation of Equations 6-12 through 6-14 and a detailed
discussion of the probability of detecting alpha surface contamination for several different
variables.  Several probability charts are included at the end of Appendix J for common detector
sizes. Table 6.8 provides estimates of the probability of detecting 300 dpm/100 cm2 for some
commonly used alpha detectors.

       Table 6.8 Probability of Detecting 300 dpm/100 cm2 of Alpha Activity While
                Scanning with Alpha Detectors Using an Audible Output
                            (calculated using Equation 6-12)
 Detector Type     Detection Efficiency     Probe Dimension in         Scan Rate     Probability of Detecting
                   (cpm/dpm)                Direction of Scan (cm)     (cm/s)        300 dpm/100 cm²
 Proportional      0.20                     5                          3             80%
 Proportional      0.15                     15                         5             90%
 Scintillation     0.15                     5                          3             70%
 Scintillation     0.15                     10                         3             90%




6.8    Measurement Uncertainty (Error)

The quality of measurement data will be directly impacted by the magnitude of the measurement
uncertainty associated with it.  Some uncertainties, such as statistical counting uncertainties, can
be easily calculated from the count results using mathematical procedures. Evaluation of other
sources of uncertainty requires more effort and in some cases is not possible. For example, if an
alpha measurement is made on a porous concrete surface, the observed instrument response, when
converted to units of activity, will probably not exactly equal the true activity under the probe.
The absorption properties of the surface for particulate radiation vary from
point to point and therefore create some level of variation in the expected detection
efficiency.  This variability in the expected detector efficiency results in uncertainty in the final
reported result. In addition, QC measurement results provide an estimate of random and
systematic uncertainties associated with the measurement process.

The measurement uncertainty for every analytical result or series of results, such as for a
measurement system, should be reported. This uncertainty, while not directly used for
demonstrating compliance with the release criterion, is used for  survey planning and data
assessment throughout the Radiation Survey and Site Investigation (RSSI) process.  In addition,
the uncertainty is used for evaluating the performance of measurement systems using QC
measurement results. Uncertainty can also be used for comparing individual measurements to
the DCGL. This is especially important in the early stages of decommissioning (i.e., scoping,
characterization, remedial action support) when decisions are made based on a limited number of
measurements.

For most sites, evaluation of uncertainty associated with field measurements is important only
for data being used as part of the final status survey documentation.  The final status survey data,
which is used to document the final radiological  status of a site,  should state the uncertainties
associated with the measurements.  Conversely, detailing the uncertainties associated with
measurements made during scoping or characterization surveys may or may not be of value
depending on what the data will be used for—i.e., the data quality objectives (DQOs).  From a
practical standpoint, if the observed data are obviously greater than the DCGL and will be
eventually cleaned up, then the uncertainty may be relatively unimportant.  Conversely, data
collected during early phases of a site investigation that may eventually be used to show that the
area is below the DCGL—and therefore does not require any clean-up action—will need the
same uncertainty evaluation as the final status survey data.  In summary, the level of effort needs
to match the intended use of the data.

6.8.1   Systematic and Random Uncertainties

Measurement uncertainties are often broken into two sub-classes of uncertainty termed
systematic (e.g., methodical) uncertainty and random (e.g., stochastic) uncertainty. Systematic
uncertainties derive from a lack of knowledge about the true distribution of values associated
with a numerical parameter and result in data that is consistently higher (or lower) than the true
value.  An example of a systematic uncertainty would be the use of a fixed  counting efficiency
value even though it is known that the efficiency varies from measurement to measurement but
without knowledge of the frequency.  If the fixed counting efficiency value is higher than the true
but unknown efficiency—as would be the case for an unrealistically optimistic value—then every
measurement result calculated using that efficiency would be biased low.  Random uncertainties
refer to fluctuations associated with a known distribution of values. An example of a random
uncertainty would be a well documented chemical separation efficiency that is known to fluctuate
with a regular pattern about a mean.  A constant recovery value is used during calculations, but
the true value is known to fluctuate from sample to sample with a fixed and known degree of
variation.

To minimize the need for estimating potential sources of uncertainty, the sources of uncertainty
themselves should be reduced to a minimal level by using practices such as:

•      The detector used should minimize the potential uncertainty. For example, when making
       field surface activity measurements for 238U on concrete, a beta detector such as a thin-
       window Geiger-Mueller "pancake" may provide better quality data than an alpha detector
       depending on the circumstances.  Less random uncertainty would be expected between
       measurements with a beta detector such  as a pancake since beta emissions from the
       uranium will be affected much less by thin absorbent layers than will the alpha emissions.

•      Calibration factors should accurately reflect the efficiency of a detector being used on the
       surface material being measured for the  contaminant radionuclide or mixture of
       radionuclides (see Section 6.5.4). For most field measurements, variations in the
       counting efficiency on different types of materials will introduce the largest amount of
       uncertainty in the final result.

•      Uncertainties  should be reduced or eliminated by use of standardized measurement
       protocols (e.g., SOPs) when possible. Special effort should be made to reduce or
       eliminate systematic uncertainties, or uncertainties that are the same for every
       measurement  simply due to an error in the process. If the systematic uncertainties are
       reduced to a negligible level, then the random uncertainties, or those uncertainties that
       occur on a somewhat statistical basis, can be dealt with more easily.

•      Instrument operators should be trained and experienced with the instruments used to
       perform the measurements.

•      QA/QC should be conducted as described  in Chapter 9.

Uncertainties that cannot be eliminated need to be evaluated such that the effect can be
understood and properly propagated  into the final  data and uncertainty estimates. As previously
stated, non-statistical uncertainties should be minimized as much as possible through the use of
good work practices.


Overall random uncertainty can be evaluated using the methods described in the following
sections. Section 6.8.2 describes a method for calculating random counting uncertainty.  Section
6.8.3 discusses how to combine this counting uncertainty with other uncertainties from the
measurement process using uncertainty propagation.

Systematic uncertainty is derived from calibration errors, incorrect yields and efficiencies, non-
representative survey designs, and "blunders."  It is difficult—and sometimes impossible—to
evaluate the systematic uncertainty for a measurement process, but bounds should always be
estimated and made small compared to the random uncertainty, if possible.  If no other
information on systematic uncertainty is available, Currie (NRC 1984) recommends using 16%
as an estimate for systematic uncertainties (1% for blanks, 5% for baseline, and 10% for
calibration factors).

6.8.2   Statistical Counting Uncertainty

When performing an analysis with a radiation detector, the result will have an uncertainty
associated with it due to the statistical nature of radioactive decay. To calculate the total
uncertainty associated with the counting process, both the background measurement uncertainty
and the sample measurement uncertainty must be accounted for. The standard  deviation of the
net count rate, or the statistical counting uncertainty, can be calculated by
                              σn  =  √( Cs+b/Ts+b² + Cb/Tb² )                        (6-15)

where
       σn     =      standard deviation of the net count rate result
       Cs+b   =      number of gross counts (sample)
       Ts+b   =      gross count time
       Cb     =      number of background counts
       Tb     =      background count time
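
Equation 6-15 can be applied as in the short sketch below; the count data used here are hypothetical.

    import math

    # Illustrative sketch of Equation 6-15 with hypothetical count data.
    def net_rate_and_sigma(gross_counts, gross_time_min, bkg_counts, bkg_time_min):
        net_rate = gross_counts / gross_time_min - bkg_counts / bkg_time_min
        sigma = math.sqrt(gross_counts / gross_time_min ** 2 + bkg_counts / bkg_time_min ** 2)
        return net_rate, sigma

    # Example: 250 gross counts in 5 minutes; 90 background counts in 10 minutes
    rate, sigma = net_rate_and_sigma(250, 5, 90, 10)
    print(rate, round(sigma, 2))   # 41.0 cpm net with ~3.3 cpm (one standard deviation)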

6.8.3   Uncertainty Propagation

Most  measurement data will be converted to different units or otherwise included in a calculation
to determine a final  result. The standard deviation associated with the final result, or the total
uncertainty, can then be calculated. Assuming that the individual uncertainties are relatively
small, symmetric about zero, and independent of one another, then the total uncertainty for the
final calculated result can be determined by solving the following partial differential equation:
              σu  =  √( (∂u/∂x)² σx²  +  (∂u/∂y)² σy²  +  (∂u/∂z)² σz²  +  ... )      (6-16)

where
       u             =    function, or formula, that defines the calculation of a final result as
                          a function of the collected data. All variables in this equation, i.e.,
                          x, y, z..., are assumed to have a measurement uncertainty
                          associated with them and do not include numerical constants
       σu            =    standard deviation, or uncertainty, associated with the final result
       σx, σy, ...   =    standard deviation, or uncertainty, associated with the parameters
                          x, y, z, ...
Equation 6-16, generally known as the error propagation formula, can be solved to determine the
standard deviation of a final result from calculations involving measurement data and their
associated uncertainties. The solutions for common calculations along with their uncertainty
propagation formulas are included below.
          Data Calculation                                   Uncertainty Propagation

        u = x + y, or u = x − y:                      σu = √( σx² + σy² )

        u = x ÷ y, or u = x × y:                      σu = u √( (σx/x)² + (σy/y)² )

        u = c × x, where c is a positive constant:    σu = c σx

        u = x ÷ c, where c is a positive constant:    σu = σx / c

        Note: In the above examples, x and y are measurement values with associated standard
        deviations, or uncertainties, equal to σx and σy respectively. The symbol "c" is used to
        represent a numerical constant which has no associated uncertainty. The symbol σu is
        used to denote the standard deviation, or uncertainty, of the final calculated value u.
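
These rules can be chained for multi-step calculations, as in the following illustrative sketch; the numerical values are hypothetical, and the quotient case simply reflects relative uncertainties adding in quadrature.

    import math

    # Illustrative sketch of the propagation rules above, chained for a two-step calculation.
    def sigma_sum_or_difference(sigma_x, sigma_y):
        """sigma_u for u = x + y or u = x - y."""
        return math.sqrt(sigma_x ** 2 + sigma_y ** 2)

    def sigma_product_or_quotient(u, x, sigma_x, y, sigma_y):
        """sigma_u for u = x * y or u = x / y (relative uncertainties add in quadrature)."""
        return abs(u) * math.sqrt((sigma_x / x) ** 2 + (sigma_y / y) ** 2)

    # Hypothetical values: net rate = gross rate - background rate, then divide by efficiency
    net = 50.0 - 9.0                                   # cpm
    sigma_net = sigma_sum_or_difference(3.2, 0.9)      # cpm
    dpm = net / 0.25                                   # efficiency of 0.25 counts per disintegration
    sigma_dpm = sigma_product_or_quotient(dpm, net, sigma_net, 0.25, 0.01)
    print(round(dpm), round(sigma_dpm))                # 164 dpm with ~15 dpm (one standard deviation)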

6.8.4   Reporting Confidence Intervals

Throughout Section 6.8, the term "measurement uncertainty" is used interchangeably with the
term "standard deviation." In this respect, the uncertainty is qualified as numerically identical to
the standard deviation associated with a normally distributed range of values.  When reporting a
confidence interval for a value, one provides the range of values that represent a pre-determined
level of confidence (e.g., 95%). To make this calculation, the final standard deviation, or total
uncertainty σu as shown in Equation 6-16, is multiplied by a constant factor k representing the
area under a normal curve as a function of the standard deviation. The values of k representing
various intervals about the mean of a normal distribution as a function of the standard deviation are
given in Table 6.9. The following example illustrates the use of this factor in context with the
propagation and reporting of uncertainty values.

    Table 6.9 Areas Under Various Intervals About the Mean of a Normal Distribution
       Interval (µ ± kσ)        Area
       µ ± 0.674σ               0.500
       µ ± 1.00σ                0.683
       µ ± 1.65σ                0.900
       µ ± 1.96σ                0.950
       µ ± 2.00σ                0.954
       µ ± 2.58σ                0.990
       µ ± 3.00σ                0.997
       Example:

       Uncertainty Propagation and Confidence Interval:  A measurement process with a zero
       background yields a count result of 28 ± 5 counts in 5 minutes, where the ± 5 counts
       represents one standard deviation about a mean value of 28 counts. The detection
       efficiency is 0.1 counts per disintegration ± 0.01 counts per disintegration, again
       representing one standard deviation about the mean.

       Calculate the activity of the sample, in dpm, total measurement uncertainty, and the 95%
       confidence interval for the result.

              1)      The total number of disintegrations is:
                              28 counts ÷ 0.1 c/d  =  280 disintegrations

              2)     Using the equation for error propagation for division, the total
                     uncertainty is:

                            σu = 280 × √[(5/28)² + (0.01/0.1)²] = 57 disintegrations

              3)     The activity will then be 280 ÷ 5 minutes = 56 dpm and the total
                     uncertainty will be 57 ÷ 5 minutes = 11 dpm. (Since the count time is
                     considered to have trivial variance, it is treated as a constant.)

Referring to Table 6.9, a k value of 1.96 represents a confidence interval equal to 95% about the
mean of a normal distribution. Therefore, the 95% confidence interval would be 1.96 × 11 dpm
= 22 dpm.  The final result would be reported as 56 ± 22 dpm.
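
The arithmetic in this example can be reproduced with a short script. The following Python
sketch assumes the same inputs given above (28 ± 5 counts, an efficiency of 0.1 ± 0.01 counts per
disintegration, and a 5-minute count treated as a constant) and applies the division propagation
rule together with the k = 1.96 factor from Table 6.9.

    import math

    counts, sigma_counts = 28.0, 5.0    # counts observed in 5 minutes (1 sigma = 5)
    eff, sigma_eff = 0.1, 0.01          # counts per disintegration (1 sigma = 0.01)
    count_time = 5.0                    # minutes, treated as a constant

    disintegrations = counts / eff      # 280 disintegrations
    sigma_dis = disintegrations * math.sqrt((sigma_counts / counts)**2
                                            + (sigma_eff / eff)**2)   # about 57

    activity = disintegrations / count_time        # 56 dpm
    sigma_activity = sigma_dis / count_time        # about 11 dpm

    k = 1.96                                       # 95% coverage factor from Table 6.9
    print(f"{activity:.0f} +/- {k * sigma_activity:.0f} dpm")   # prints "56 +/- 22 dpm"
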
6.9    Radon Measurements

There are three radon isotopes in nature: 222Rn (radon) in the 238U decay chain, 220Rn (thoron) in
the 232Th chain, and 219Rn (actinon) in the 235U chain.  219Rn is the least abundant of these three
isotopes, and because of its short half-life of 4 seconds it has the least probability of emanating
into the atmosphere before decaying. 220Rn with a 55 second half-life is somewhat more mobile.
222Rn with a 3.8 d half-life is capable of migrating through several decimeters of soil  or building
material and reaching the atmosphere.  Therefore, in most situations, 222Rn should be the
predominant airborne radon isotope.
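
To make the half-life comparison concrete, the following Python sketch estimates the fraction of
each isotope remaining undecayed after an assumed, purely illustrative 60-second transit time
from the point of formation to the open air; the transit time is not taken from MARSSIM.

    import math

    half_lives_s = {"Rn-222": 3.8 * 24 * 3600,   # 3.8 days
                    "Rn-220": 55.0,              # 55 seconds
                    "Rn-219": 4.0}               # 4 seconds

    transit_time_s = 60.0   # assumed, illustrative migration time to the open air

    for isotope, t_half in half_lives_s.items():
        surviving_fraction = math.exp(-math.log(2) * transit_time_s / t_half)
        print(f"{isotope}: {surviving_fraction:.1%} undecayed after {transit_time_s:.0f} s")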

Many techniques have been developed over the years for measuring radon (Jenkins 1986) and
radon progeny in air.  In addition, considerable attention has been given by EPA to the
measurement of radon and radon progeny in homes (EPA 1992d).  Radon and radon progeny emit
alpha and beta particles and gamma rays, so numerous techniques have been developed for
measuring these radionuclides based on detecting alpha particles, beta particles, or gamma rays,
either independently or in some combination. The variety of techniques presently in use is
difficult to categorize.  This section contains an overview of information dealing with the
measurement of radon and radon progeny. The information focuses on the measurement of
222Rn; however, it may be adapted for the measurement of 219Rn and 220Rn.

Radon concentrations within a fixed structure can vary significantly from one section of the
building to another and can fluctuate over time.  If a home has a basement, for instance,  it is
usually expected that a higher radon concentration will be found there. Likewise, a relatively
small increase  in the relative pressure between the soil and the inside of a structure can cause a
significant increase in the radon emanation rate from the soil into the structure. Many factors
play a role in these variations, but from a practical standpoint it is only necessary to recognize
that fluctuations are expected and that they should be accounted for. Long term measurement
periods are required to determine a true mean concentration inside a structure and to account for
the fluctuations.

Two analytical end points are of interest when performing radon measurements.  The first and
most commonly used is radon concentration, which is stated in terms of activity per unit volume
(Bq/m3 or pCi/L). Although this terminology is consistent with most federal guidance values, it
only indirectly indicates the potential dose equivalent associated with radon. The second
analytical end point is the radon progeny working level. Radon progeny usually attach very
quickly to charged aerosols in the air following creation. The fraction that remains unattached is
usually quite small (typically 5-10%). Since most aerosol particles carry an electrical charge and
are relatively massive (> 0.1 µm), they are capable of attaching to the surfaces of the lung.
Essentially all dose or risk from radon is associated with alpha decays from radon progeny
attached to tissues of the respiratory system. If an investigator is interested in accurately
determining the potential dose or risk associated with radon in the air of a room, the radon
progeny concentration must be known.

Radon progeny  concentrations are usually reported in units of working levels (WL), where one
working level is equal to the potential alpha energy associated with the radon progeny in secular
equilibrium with 100 pCi/L of radon. One working level is equivalent to 1.28 × 10⁵ MeV/L of
potential alpha energy.  Given a known breathing rate and lung attachment probability, the
expected mean lung dose from exposure to a known working level of radon progeny can be
calculated.

Radon progeny  are not usually found in secular equilibrium with radon indoors due to plating out
of the charged aerosols onto walls, furniture, etc. The ratio of 222Rn progeny activity to 222Rn
activity usually  ranges from 0.2 to as high as 0.8 indoors (NCRP 1988). If only the 222Rn
concentration is measured and it is not practical to measure the progeny concentrations, then
general practice is to assume a progeny to 222Rn equilibrium ratio of 0.5 for indoor areas.  This
allows one to estimate the expected dose or risk associated with a given radon concentration.
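
The estimate described above amounts to a one-line calculation. The following Python sketch
assumes the general-practice equilibrium ratio of 0.5 and the definition of one working level as
the progeny in secular equilibrium with 100 pCi/L of radon; the function name and example value
are illustrative only.

    def estimated_working_level(radon_pci_per_liter, equilibrium_ratio=0.5):
        # One working level corresponds to progeny in secular equilibrium
        # with 100 pCi/L of radon, so WL = concentration x ratio / 100.
        return radon_pci_per_liter * equilibrium_ratio / 100.0

    # Example: 4 pCi/L of 222Rn indoors with the assumed 0.5 ratio gives 0.02 WL
    print(estimated_working_level(4.0))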

In general, the following generic guidelines should be followed when performing radon
measurements during site investigations:

•      The radon measurement method used should be well understood and documented.

•      Long term measurements should be used to determine the true mean radon concentration.

•      The impact of variable environmental conditions (e.g., humidity, temperature, dust
       loading, and atmospheric pressure) on the measurement process should be accounted for
       when necessary. Consideration should be given to effects on both the air collection
       process  and the counting system.

•      The background response of the detection system should be accounted for.

•      If the quantity of interest is the working level, then the radon progeny concentrations
       should be evaluated. If this is not practical, then the progeny activities can be estimated
       by assuming they are 50% of the measured radon activity (NCRP 1988).

For a general  overview, a list of common radiation detectors with their usual applications during
radon surveys is provided in Table 6.10. Descriptions and costs for specific equipment used for
the measurement of radon are contained in Appendix H.
           Table 6.10 Radiation Detectors with Applications to Radon Surveys

       System: Large area activated charcoal collector
       Description: A canister containing activated charcoal is twisted into the surface and
              left for 24 hours.
       Application: Short term radon flux measurements
       Remarks: The LLD is 0.007 Bq/m2-s (0.2 pCi/m2-s).

       System: Continuous radon monitor
       Description: Air pump and scintillation cell or ionization chamber.
       Application: Track the real time concentration of radon
       Remarks: Takes 1 to 4 hours for the system to equilibrate before starting. The LLD is
              0.004-0.04 Bq/L (0.1-1.0 pCi/L).

       System: Activated charcoal adsorption
       Description: Activated charcoal is opened to the ambient air, then gamma counted on a
              gamma scintillator or in a liquid scintillation counter.
       Application: Measure radon concentration in indoor air
       Remarks: Detector is deployed for 2 to 7 days. The LLD is 0.007-0.04 Bq/L (0.2 to
              1.0 pCi/L).

       System: Electret ion chamber
       Description: A charged plastic vessel that can be opened for air to pass through.
       Application: Measure short-term or long-term radon concentration in indoor air
       Remarks: Must correct reading for gamma background concentration. Electret is
              sensitive to extremes of temperature and humidity. LLD is 0.007-0.02 Bq/L
              (0.2-0.5 pCi/L).

       System: Alpha track detection
       Description: A small piece of special plastic or film inside a small container. Damage
              tracks from alpha particles are chemically etched and tracks counted.
       Application: Measure indoor or outdoor radon concentration in air
       Remarks: LLD is 0.04 Bq/L-d (1 pCi/L-d).
The following sections provide a general overview of common radon sampling and measurement
methods and terminology.

6.9.1   Direct Radon Measurements

Direct radon measurements are performed by gathering radon into a chamber and measuring the
ionizations produced. A variety of methods have been developed, each making use of the same
fundamental mechanics but employing different measurement processes. The first step is to get
the radon into a chamber without collecting any radon progeny from the ambient air.  A filter is
normally used to capture charged aerosols while allowing the radon gas to pass through.  Most
passive monitors rely on diffusion of the ambient radon in the air into the chamber to establish an
equilibrium between the concentrations of radon in the air and in the chamber. Active monitors
use an air pump to exchange air between the ambient atmosphere and the chamber.

Once  inside the chamber, the radon decays by alpha emission to form 218Po which usually takes
on a positive charge within thousandths of a second following formation.  Some monitor types
collect these ionic molecules and subsequently measure the alpha particles emitted by the radon
progeny.  Other monitor types, such as the electret ion chamber, measure the ionization produced
by the decay of radon in the  air within the chamber by directly collecting the ions produced inside
the chamber.  Simple systems measure the cumulative radon during the exposure period based on
the total alpha decays that occur. More complicated systems actually measure the individual
pulse  height distributions of the alpha and/or beta radiation emissions and derive the radon plus
progeny isotopic concentration in the air volume.

Care must be taken to accurately calibrate a system and to understand the effects of humidity,
temperature, dust loading, and atmospheric pressure on the system.  These conditions have only a
small effect on some systems but a large influence on others.

6.9.1.1 Integrating Methods for Radon Measurement

With integrating methods, measurements are made over a period of days, weeks, or months, and
the detector is subsequently read out using equipment appropriate for the detector medium used.
The most common detectors used are activated charcoal adsorbers, electret ion chambers (EIC),
and alpha track plastics. Short term fluctuations are averaged out, thus making the measurement
representative of the average concentration.  Results in the form of an average value provide no
way to determine the fluctuations of the radon concentration over the measurement interval.
Successive short term measurements can be used in place of a single long term measurement to
gain better insight into the time dependence of the radon concentration.

6.9.1.2 Continuous Methods for Radon Measurement

Devices that measure direct radon concentrations over successive time increments are generally
called continuous radon monitors.  These systems are more complex than integrating  devices in
that they measure the radon concentration and log the  results to a data recording device on a real
time basis. Continuous radon measurement devices normally allow the noble gas radon to pass
through a filter into a detection chamber where the radon decays and the radon and/or the
resulting progeny are measured. The most common detectors used for real time measurements
are ion chambers, solid state surface barrier detectors, and ZnS(Ag) scintillation detectors.

Continuous methods offer the advantage of providing successive, short-term results over long
periods of time.  This allows the investigator not only to determine the average radon
concentration, but also to analyze the fluctuations in the values over time. More complicated
systems are available that measure the relative humidity and temperature at the measurement
location and log the values along with the radon concentrations to the data logging device. This
allows the investigator to make adjustments, if necessary, to the resulting data prior to reporting
the results.

6.9.2  Radon Progeny Measurements

Radon progeny measurements are performed by collecting charged aerosols onto filter paper and
subsequently counting the filter for attached progeny. Some systems pump air through a filter
and then automatically count the filter for alpha and/or beta emissions.  An equivalent but more
labor intensive method is to collect a sample using an air sampling pump and then count the filter
in stand alone alpha and/or beta counting systems. The measurement system may make use  of
any number of different techniques ranging from full alpha and beta spectrometric analysis of the
filters to simply counting the filter for total alpha and/or beta emissions.

When performing total (gross) counting analyses, the assumption is usually made that the only
radioisotopes in the air are due to 222Rn and its progeny. The uncertainty introduced by this
assumption, which is usually very small, can be essentially eliminated in manual sampling and
analysis by performing a follow-up measurement of the filter after the radon progeny have decayed to a
negligible level.  This value can then be used as a background value for the air. Of course, such a
simple approach is only applicable when 222Rn is the isotope of concern. For 219Rn or 220Rn,  other
methods would have to be used.

Time is a significant element in radon progeny measurements. Given any initial equilibrium
condition for the progeny isotopes, an investigator must be able to correlate the sampling and
measurement technique back to the true concentration values. When collecting radon progeny,
the buildup of total activity on the filter increases asymptotically until the activity on the filter
becomes constant.  At this point, the decay rate of the progeny atoms on the filter is equal to the
collection rate of progeny atoms.  This is an important parameter to consider when designing a
radon sampling procedure.
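
The buildup described above can be illustrated with a deliberately simplified model that tracks a
single progeny nuclide collected at a constant rate and ignores the full decay chain; the collection
rate and half-life used below are illustrative assumptions, not MARSSIM values.

    import math

    def filter_activity_bq(collection_rate_atoms_per_s, half_life_s, elapsed_time_s):
        # Simplified single-nuclide buildup on a filter collected at a constant
        # rate: A(t) = r * (1 - exp(-lambda * t)).  At long times the activity
        # approaches the collection rate, i.e., decay balances collection.
        decay_constant = math.log(2) / half_life_s
        return collection_rate_atoms_per_s * (1.0 - math.exp(-decay_constant * elapsed_time_s))

    # Example: a progeny nuclide with a 27-minute half-life collected at 10 atoms/s
    for minutes in (5, 15, 30, 60, 120):
        activity = filter_activity_bq(10.0, 27.0 * 60.0, minutes * 60.0)
        print(f"{minutes:3d} min: {activity:.1f} Bq on the filter")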

Note that the number of charged aerosol particles in the air can affect the results of radon
progeny measurements.  If the number of particles is small, as is possible when humidity is low
and a room is very clean, then most of the progeny will not be attached and can plate out on room
surfaces prior to reaching the sample filter.  This is not a problem if the same conditions always
exist in the room; however, the calculated dose would underestimate the dose that would be
received under higher humidity or dust concentrations with the same radon progeny
concentration.

6.9.3   Radon Flux Measurements

Sometimes it is desirable to characterize the source of radon in terms of the rate at which radon
is emanating from a surface (e.g., soil, uranium mill tailings, or concrete).  One method used for
measuring radon flux is briefly described here.

The measurement of radon flux can be achieved by adsorption onto charcoal using a variety of
methods such as a charcoal canister or a large area collector (e.g., 25 cm PVC end cap). The
collector is deployed by firmly twisting the  end cap into the surface of the material to be
measured. After 24 hours of exposure, the activated charcoal is removed and transferred to
plastic containers.  The amount of radon adsorbed on the activated  charcoal is determined by
gamma spectroscopy.  Since the area of the  surface is well defined  and the deployment period is
known, the radon flux (in units of Bq/m2-s or pCi/m2-s) can be calculated.
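
Because the collector area and deployment period are known, the flux calculation is
straightforward. The following Python sketch is a minimal illustration that omits decay and
collection-efficiency corrections; the example values are assumptions chosen to be consistent
with the 25 cm collector and 24-hour deployment described above.

    import math

    def radon_flux_bq_per_m2_s(adsorbed_activity_bq, collector_diameter_m, deployment_time_s):
        # Average flux over the deployment period: activity / (area x time).
        # Decay and collection-efficiency corrections are omitted here.
        area_m2 = math.pi * (collector_diameter_m / 2.0)**2
        return adsorbed_activity_bq / (area_m2 * deployment_time_s)

    # Example: 30 Bq adsorbed on a 0.25 m diameter collector deployed for 24 hours
    print(radon_flux_bq_per_m2_s(30.0, 0.25, 24 * 3600))   # roughly 0.007 Bq/m2-s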

This method is reliable for measuring radon flux in normal environmental situations. However,
care should be taken if an extremely large source of radon is measured with this method. The
collection time should be chosen carefully to avoid saturating the canister with radon.  If
saturation is approached, the charcoal loses its ability to adsorb radon and the collection rate
decreases. Even transporting and handling of a canister that is saturated with radon can be a
problem due to the dose rate from the gamma rays being emitted. One would rarely encounter a
source of radon so large that this would become a problem; however, it should be recognized as a
potential issue. Charcoal can also become saturated with water, which will affect the adsorption
of radon.  This can occur in areas with high humidity.

An alternative method for making passive radon flux measurements has been developed recently
using electret ion chambers (EICs). EIC technology has been widely used for indoor
radon measurements.  The passive EIC procedure is similar to the procedures used with large
area activated charcoal canisters.  To provide data for the background correction, an
additional passive monitor is deployed side by side on a radon-impermeable membrane.  These
data are used to calculate the net radon flux. The Florida State Bureau of Radiation Protection
has compared the results from measurements of several phosphogypsum flux beds using the
charcoal canisters and EICs and has shown that the two methods give comparable results.  The
passive method seems to have overcome some of the limitations encountered in the use of
charcoal. The measurement periods can be  extended from hours to several days in order to
obtain a better average, if needed. EIC flux measurements are not affected by environmental
conditions such as temperature, humidity, and air flow.  The measured sensitivities are
comparable to the charcoal method but, unlike charcoal, EICs do not become saturated by
humidity.  Intermediate readings can be made if needed.  In view of the low cost of the EIC
reading/analyzing equipment, the cost per measurement can be as much as 50% lower than the
charcoal method, with additional savings in time.


6.10  Special Equipment

Various specialized systems have been developed which can be used during the performance of
radiation surveys and site investigations.  These range from specially designed quick radiation
scanning systems to commercial global positioning systems (GPSs). The equipment may be
designed to detect radiation directly, detect and locate materials associated with the
contamination (e.g., metal containers), or locate the position where a particular measurement is
performed (e.g.,  GPS).  Because these specialized systems are continuously being modified and
developed for site-specific applications, it is not possible to provide detailed descriptions of
every system. The following sections provide examples of specialized equipment that have been
applied to radiation surveys and site investigations.

6.10.1 Positioning Systems

As stated in Section 4.8.5, documenting the location of measurements is important for
demonstrating the reproducibility of the results. There are a variety of positioning systems
available that provide a range of accuracy and precision that can be evaluated during survey
planning to determine their applicability to a particular  site. These positioning systems can be
used to establish a reproducible reference coordinate system or to locate individual measurements
using an established reference coordinate system (e.g., longitude and latitude).

6.10.1.1 Differential Global Positioning Systems

A variety of practical and versatile GPSs based on radio signals tracked from satellite beacons
are available (e.g., Trimble™, Novatel™, Garmin™). These systems are generally used to aid in
recording and retrieving location data with precision on the order of tens of meters.  With a
stationary base station and a separate moving locator, the  system is deployed in the "differential
global positioning system" (DGPS) mode.  DGPSs can record and retrieve location data with a
precision in the centimeter range.

DGPS can be used to provide position information on surface features in areas being surveyed,
linking the  survey results to previously published maps and aerial photographs. In addition,
survey results may be positioned using the DGPS readings to accurately and precisely locate the
results as well as the results of any subsequent analyses to these same maps or photographs. A
process called waypointing uses the DGPS to locate specific points and allows the user to find
predetermined locations and set up gridded locations for measurements based on location data
that are tied into local or state coordinate systems.
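
As one hedged illustration of how waypointing might be supported in software, the following
Python sketch computes the great-circle distance from a current DGPS fix to a target waypoint
using the haversine formula; it is not tied to any particular GPS vendor's software, and the
coordinates shown are arbitrary.

    import math

    def distance_to_waypoint_m(lat1_deg, lon1_deg, lat2_deg, lon2_deg):
        # Great-circle (haversine) distance between two positions in decimal degrees.
        earth_radius_m = 6371000.0
        phi1, phi2 = math.radians(lat1_deg), math.radians(lat2_deg)
        dphi = math.radians(lat2_deg - lat1_deg)
        dlam = math.radians(lon2_deg - lon1_deg)
        a = math.sin(dphi / 2)**2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2)**2
        return 2.0 * earth_radius_m * math.asin(math.sqrt(a))

    # Example: distance from a current fix to the next gridded measurement point
    print(distance_to_waypoint_m(39.0000, -77.0000, 39.0001, -77.0001))   # about 14 m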

Limitations on the use of DGPS are related to the number of satellite beacons available to the
system. When three or fewer satellites are available the accuracy and precision of the location
data will be reduced.  There are short periods of time (usually less than one hour even on the
worst days) when a limited number of satellites are overhead in the continental United States.
Satellites may also be blocked by excess tree cover or tall buildings. Distance between the
moving locator and the stationary base station may be several  kilometers or may be limited to
line-of-sight.  This limitation can be mitigated through the strategic use of repeater stations to re-
transmit the signal between the moving locator and the base station.

6.10.1.2 Local Microwave and Sonar Positioning Systems

Local microwave or sonar beacons and receivers may provide useful location data in small areas
and tree-covered locales.  One example of a sonar-based system is the ultrasonic ranging and data
system (USRADS).  With a number of fixed beacons in place, a roving unit can be oriented and
can provide location data with accuracy and precision similar to those of a DGPS. If the beacons are
located at known points, the resulting positions  can be determined using simple calculations
based on the known reference locations of the beacons.

The logistics of deploying the necessary number of beacons properly and the short range of the
signals are  the major limitations of the system.  In addition, multipathing of signals within
wooded areas can cause jumps in the positioning data.

6.10.2 Mobile Systems with Integrated Positioning Systems

In recent years, the advent of new technologies has introduced mobile sensor systems for
acquiring data that include fully-integrated positioning  systems. Portable and vehicle-based
versions of these systems record survey data while moving over the surfaces to be surveyed and
simultaneously record the location data from either a roving DGPS receiver or a local
microwave/sonar receiver. All measurement data are automatically stored and processed with
the measurement location for later posting (see Section 8.2.2.2 for a discussion of posting plots)
or for mapping the results. These systems are designed with a variety of detectors for different
applications.  For example, alpha or beta detectors have been mounted on a robot at a fixed
distance above a smooth surface.  The robot moves at a predetermined speed over the surface to provide
scanning results, and also records individual direct measurements at predetermined intervals.
This type of system not only provides the necessary measurement data, but also reduces the
uncertainty associated with human factors.  Other systems are equipped with several types of
radiation detectors, magnetometers, electromagnetic sensors, or various combinations of multiple
sensors. The limitations of each system should be evaluated on a site-specific basis to determine
whether the positioning system, the detector, the transport system, or some combination of these
will represent the limiting factor for the overall system.

6.10.3  Radar, Magnetometer, and Electromagnetic Sensors

Sensors and sensor systems applicable to the detection and location of buried waste have
increased in number, use, and reliability in recent years. These systems are typically applicable
to scoping and characterization surveys where the identification of subsurface contamination is a
primary concern. However, the results of these surveys may be used during final status survey
planning to demonstrate that subsurface contamination is not a concern for a particular site or
survey unit. Some of the major technologies are briefly described in the following sections.

6.10.3.1  Ground Penetrating Radar

For most sites, ground penetrating radar (GPR) is the only instrument capable of collecting
images of buried objects in situ, as compared to magnetometers (Section 6.10.3.2) and
electromagnetic  sensors  (Section 6.10.3.3) which detect the strength of signals as measured at the
ground surface.  Additionally, GPR is unique in its ability to detect both metallic and non-
metallic (e.g., plastic, glass) containers.

Subsurface radar detection systems have been the focus of study for locating and identifying
buried  or submerged objects that otherwise could not be detected.  There are two major
categories of radar signals: 1) time domain, and 2) frequency domain.  Time-domain radar uses
short impulses of radar-frequency energy directed into the ground  being investigated.
Reflections of this energy, based on changes in dielectric  properties, are then received by the
radar.  Frequency-domain radar, on the other hand, uses a continuous transmission where the
frequency of the transmission can be varied either stepwise or continuously. The changes in the
frequency characteristics due to effects from the ground are recorded.  Signal processing, in both
cases, converts this signal to represent the location of radar reflectors against the travel time of
the return signal. Greater travel time corresponds to a greater distance beneath the surface.
Table 6.11 lists the typical penetration depth for various geologic materials (fresh water is
included as a baseline for comparison).
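
The conversion from travel time to depth can be illustrated with the common approximation that
the radar velocity in a material is the speed of light divided by the square root of the relative
permittivity. The Python sketch below is illustrative only; the permittivity value is an assumption,
not a MARSSIM parameter.

    import math

    def reflector_depth_m(two_way_travel_time_ns, relative_permittivity):
        # Depth is half the two-way travel time multiplied by the wave velocity,
        # where velocity = (speed of light) / sqrt(relative permittivity).
        speed_of_light_m_per_ns = 0.3
        velocity = speed_of_light_m_per_ns / math.sqrt(relative_permittivity)
        return velocity * two_way_travel_time_ns / 2.0

    # Example: a 40 ns return in a soil with an assumed relative permittivity of 9
    print(reflector_depth_m(40.0, 9.0))   # about 2 m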

Examples of existing GPR technologies currently being applied to subsurface investigations
include:

       narrow-band radar
       ultra-wideband radar
       synthetic aperture radar
       frequency modulated continuous  radar
       polarized radar waves

        Table 6.11  Typical Radar Penetration Depths for Various Geologic Materials

              Material                 Penetration Depth, m (ft)
              Fresh Water              100 (330)
              Sand (desert)            5 (16)
              Sandy Soil               3 (10)
              Loam Soil                3 (10)
              Clay Soil                2 (6)
              Salt Flats (dry)         1 (3)
              Coal                     20 (66)
              Rocks                    20 (66)
              Walls                    0.3 (1)
The major limitation of GPR is the difficulty in interpreting the data, which are often presented
in the form of hazy, "waterfall-patterned" images requiring an experienced professional to
interpret. Also, GPR performance can vary depending on the soil type, as shown in Table 6.11.
Highly conductive clay soils often absorb a large amount of the radar energy, and may even
reflect the energy.  GPR can be deployed using ground-based or airborne systems.

6.10.3.2 Magnetometers

Although contaminated soil and most radioactive waste possess no ferromagnetic properties, the
containers commonly used to hold radioactive waste (e.g., 55-gallon drums) are made from steel.
These containers possess significant magnetic susceptibility making the containers detectable
using magnetometry.

Magnetometers sense the pervasive magnetic field of the Earth. This field, when encountering an
object with magnetic susceptibility, induces a secondary magnetic field in that object. This
secondary field creates an increase or decrease in Earth's ambient magnetic field.
Magnetometers measure these changes in the expected strength of the ambient magnetic field.
Some magnetometers, called "vector magnetometers,"  can sense the direction as well as the
magnitude of these changes. However, for subsurface investigations only the magnitude of the
changes is used.

The ambient magnetic field on Earth averages 55,000 gamma in strength. The variations caused
by the secondary magnetic fields typically range from 10 to 1,000 gamma, and average around
100 gamma. Most magnetometers currently in use have a sensitivity in the 0.1 to 0.01 gamma
range and are capable of detecting these secondary fields.

An alternate magnetometer survey can be performed using two magnetometers in a gradiometric
configuration.  This means that the first magnetometer is placed  at the ground surface, while the
second is mounted approximately 0.5 meters above the first. Data is recorded from both sensors
and compared. When the readings from both detectors are nearly the same, it implies that there
is no significant disturbance in the Earth's ambient magnetic field or that such disturbances are
broad and far away from the gradiometer.  When a secondary magnetic field is induced in an
object, it affects one sensor more strongly than the other, producing a difference in the readings
from the two magnetometers.  This approach is similar to the use of a guard detector in anti-
coincidence mode in a low-background gas-flow proportional counter in a laboratory (see
Appendix H for a description of gas-flow proportional counters). The gradiometric configuration
filters out the Earth's ambient magnetic field, large scale variations, and objects located far from
the sensor to measure the effects of nearby objects, all without additional data processing.
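
A minimal sketch of the two-sensor comparison described above is given below; the readings and
the anomaly threshold are illustrative assumptions, not MARSSIM values.

    def gradiometer_difference(lower_reading_gamma, upper_reading_gamma, threshold_gamma=10.0):
        # Difference between the ground-level sensor and the sensor mounted 0.5 m
        # above it.  A difference near zero suggests only the broad ambient field;
        # a large difference suggests a nearby buried ferromagnetic object.
        difference = lower_reading_gamma - upper_reading_gamma
        return difference, abs(difference) > threshold_gamma

    # Example: a buried drum perturbs the lower sensor more strongly than the upper one
    print(gradiometer_difference(55120.0, 55020.0))   # (100.0, True)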

Fifty-five gallon drums buried 5 to 7 meters below the surface may be detectable using a
magnetometer.  At many sites, multiple drums have been buried in trenches or pits and detection
is straightforward.  A single operator carrying a magnetometer with the necessary electronics in a
backpack can cover large areas in a relatively small amount of time.

The limitations on  the system are related to the size of the objects and their depth below the
surface. Objects that are too small or buried too deep will not provide a secondary magnetic field
that can be  detected at the ground surface.

6.10.3.3  Electromagnetic Sensors

Electromagnetic sensors emit an electromagnetic wave, in either a pulsed or continuous wave
mode, and then receive the result of that transmission. The result of the transmission is two
signals: quadrature and in-phase.  As the wave passes through some material other than air, it is
slowed down by a resistive medium or sped up by a conductor through dielectric effects. This
produces the quadrature signal. If the electromagnetic wave encounters a highly conductive
object it induces a magnetic field in the object. This induced electromagnetic field returns to the
sensor as a  reflection of the original electromagnetic wave  and forms the in-phase signal.

The in-phase signal is indicative of the presence, size, and conductivity of nearby objects (e.g.,
55-gallon drums), while the quadrature signal is a measure of the dielectric properties of the
nearby objects such as soil.  This means that electromagnetic sensors can detect all metallic
objects (including steel, brass, and aluminum), such as the metal in waste containers, and also
sample the  soil for changes  in  properties, such as those caused by leaks of contaminants.

Depths of interest are largely determined by the spacing between the coil used to transmit the
primary electromagnetic wave, and the receiver used to receive that transmission. The rule of
thumb is that the depth of interest is on the order of the distance between the transmitter and the
receiver. A system designed with the transmitter and receiver placed tens of meters apart can
detect signals from tens of meters below the surface.  A system with the transmitter and receiver
collocated can only detect signals from depths on the order of the size of the coil, which is
typically about one meter. The limitations of electromagnetic sensors include a lack of clearly
defined signals, and decreasing resolution of the signal as the distance below the surface
increases.

6.10.4 Aerial Radiological Surveys

Low-altitude aerial radiological surveys are designed to encompass large areas and may be useful
in:

•      providing data to assist in the identification of radioactive contaminants and their
       corresponding concentrations and spatial distributions
•      characterizing the nature, extent, and impact of contamination

The measurement sensitivity and data processing procedures provide total area coverage and a
detailed definition of the extent of gamma-emitting isotopes for a specific area.  The gamma
radiation spectral data are processed to provide a qualitative and quantitative analysis of the
radionuclides in the survey area.  Helicopter flights establish a grid pattern (e.g., east-west) of
parallel lines approximately 61 m (200 ft) above the ground surface.

The survey consists of airborne measurements of natural and man-made gamma radiation from
the terrain surface. These measurements allow for the determination of terrestrial spatial
distribution of isotopic concentrations and equivalent gamma exposure rates (e.g., 60Co, 234mPa,
and 137Cs). The results are reported as isopleths for the isotopes and are usually superimposed on
scale maps of the area.

                   7  SAMPLING AND PREPARATION FOR
                       LABORATORY MEASUREMENTS
7.1    Introduction

There are three methods for collecting radiation data while performing a survey.  A direct
measurement is obtained by placing the detector near or against the surface or in the media being
surveyed and reading the radioactivity level directly. Scanning is an evaluation technique
performed by moving a portable radiation detection instrument at a constant speed and distance
above the surface to semi-quantitatively detect elevated areas of radiation. These measurement
techniques are discussed in Chapter 6. Sampling is the process of collecting a portion of an
environmental medium as representative of the locally remaining medium.  The collected portion
of the medium is then analyzed to determine the radionuclide concentration. This chapter
discusses issues involved in collecting and preparing samples in the field for analysis, and in
evaluating the results of these analyses. In addition, a general discussion on laboratory sample
preparation and analysis is provided to assist in communications with the laboratory during
survey planning.

Samples should be collected and analyzed by qualified individuals using the appropriate
equipment and procedures. This manual assumes that the samples taken during the survey will
be submitted to  a qualified laboratory for analysis. The laboratory should have written
procedures that  document its analytical capabilities for the radionuclides of interest and a Quality
Assurance/Quality Control (QA/QC) program that documents the compliance of the analytical
process with established criteria. The method used to assay for the radionuclides of concern
should be recognized as a factor affecting analysis time.

Commonly used radiation detection and measuring equipment for radiological survey field
applications is described in Chapter 6 and Appendix H.  Many of these equipment types are also
used for laboratory analyses, usually under more controlled conditions that provide for lower
detection limits  and greater delineation between radionuclides. Laboratory methods often
involve combinations of both chemical and instrument techniques to quantify the low levels
expected in the samples. This chapter provides guidance to assist the MARSSIM user in
selecting appropriate procedures for collecting and handling samples for laboratory analysis.
More detailed information is available in documents listed in the reference section of this
manual.
7.2    Data Quality Objectives

The survey design is developed and documented using the Data Quality Objectives (DQO)
Process (see Appendix D). The third step of the DQO Process involves identifying the data
needs for a survey. One decision that can be made at this step is the selection of direct
measurements for performing a survey or deciding that sampling methods followed by laboratory
analysis are necessary.

7.2.1   Identifying Data Needs

The decision maker and the survey planning team need to identify the data needs for the survey
being performed, including the:

       type of samples to be collected or measurements to be performed (Chapter 5)
       radionuclide(s) of interest (Section 4.3)
       number of samples to be collected (Section 5.5.2)
       type and frequency of field QC samples to be collected (Section 4.9)
       amount of material to be collected for each sample (Section 4.7.3 and Section 7.5)
       sampling locations and frequencies (Section 5.5.2)
       standard operating procedures (SOPs) to be followed or developed (Chapter 7)
       analytical bias and precision (e.g., quantitative or qualitative) (Appendix N)
       target detection limits for each radionuclide of interest (Section 6.4 and Table 7.2)
       cost of the methods being evaluated (cost per analysis as well as total cost) (Appendix H)
       necessary turnaround time
       sample preservation and shipping requirements (Section 7.6 and Section 7.9)
       specific background for the radionuclide(s) of interest (Section 4.5)
       derived concentration guideline level (DCGL) for each radionuclide  of interest
       (Section 4.3)
       measurement documentation requirements  (Section 9.4.2.2)
       sample tracking requirements (Section 7.8)

Some of this information will be supplied by subsequent steps  in the DQO process, and  several
iterations of the process may be needed to identify all of the data needs. Consulting with a
radiochemist or health physicist may be necessary to properly evaluate the information before
deciding between direct measurements or sampling methods to perform the survey.  Surveys may
require data from all three collection methods (i.e., sample analysis, direct measurements, and
scans) in order to demonstrate compliance with the regulation.

7.2.2   Data Quality Indicators

The data quality indicators identified as DQOs in Section 2.3.1 and described in Appendix N,
Section N.6, should be considered when selecting a measurement method (i.e., scanning, direct
measurement,  sampling) or an analytical  technique (e.g., radionuclide-specific analytical
procedure). In some instances, the data quality indicator requirements will help in the selection
of an analytical technique.  In other cases, the analytical requirements will assist in the selection
of appropriate levels for the data quality indicators.

7.2.2.1 Precision
Precision is a measure of agreement among replicate measurements of the same property under
prescribed similar conditions (ASQC 1995).  Precision is determined quantitatively based on the
results of replicate measurements (equations are provided in EPA 1990). The number of
replicate analyses needed to determine a specified level of precision for a project is discussed in
Section 4.9. There are several types of replicate analyses available to determine the level of
precision, and these replicates are typically distinguished by the point in the sample collection
and analysis process where the sample is divided. Determining precision by replicating
measurements with  results at or near the detection limit of the measurement system is not
recommended because the measurement uncertainty is usually greater than the desired level of
precision.
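
The equations cited from EPA 1990 are not reproduced here, but one common quantitative
expression of precision for a replicate pair is the relative percent difference, sketched below in
Python as an illustration; it may differ in detail from the cited equations.

    def relative_percent_difference(result_1, result_2):
        # RPD for a pair of replicate results: the absolute difference divided
        # by the mean of the pair, expressed as a percentage.
        mean = (result_1 + result_2) / 2.0
        return abs(result_1 - result_2) / mean * 100.0

    # Example: field replicate results of 1.8 and 2.2 Bq/g give an RPD of 20%
    print(relative_percent_difference(1.8, 2.2))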

•      Collocated Samples.  Collocated samples are samples collected adjacent to the routine
       field sample to determine local variability of the radionuclide concentration.  Typically,
       collocated samples are collected about one-half to three feet away from the selected
       sample location.  Analytical results from collocated samples can be used to assess site
       variation, but only in the immediate sampling area.  Collocated samples should not be
       used to assess variability across a site and are not recommended for assessing error (EPA
       1991g).  Collocated samples can be non-blind, single-blind, or double-blind.

•      Field Replicates. Field replicates are samples obtained from one location, homogenized,
       divided into  separate containers and treated as separate samples throughout the remaining
       sample handling and analytical processes. These  samples are used to assess error
       associated with sample heterogeneity, sample methodology and analytical procedures.
       Field replicates are used when determining total error for critical samples with
       contamination concentrations near the action level.  For  statistical analysis to be valid in
       such a case,  a minimum of eight replicate samples would be required (EPA  1991g). Field
       replicates (or field split samples) can be non-blind, single-blind, or double-blind and are
       recommended for determining the level of precision for a radiation survey or site
       investigation.

•      Analytical Laboratory Replicate. An analytical laboratory replicate is a subsample of a
       routine sample that is homogenized, divided into separate containers, and analyzed using
       the same analytical method.  It is used to determine method precision, but because it is a
       non-blind sample, or known to the analyst, it can only be used by the analyst as an
       internal control tool and not as an unbiased estimate of analytical precision (EPA 1990).

•      Laboratory Instrument Replicate. A laboratory instrument replicate is the repeated
       measurement of a sample that has been prepared for counting (i.e., laboratory sample
       preparation and radiochemical procedures have been completed). It is used to determine
       precision for the instrument (repeated measurements using same instrument) and the
       instrument calibration (repeated measurements using different instruments, such as two
       different germanium detectors with multichannel analyzers). A laboratory instrument
       replicate is generally performed as part of the laboratory QC program and is a non-blind
       sample. It is typically used as an internal control tool and not as an unbiased estimate of
       analytical  precision.

7.2.2.2 Bias

Bias is the systematic or persistent distortion of a measurement process that causes error in one
direction (ASQC  1995). Bias is determined quantitatively based on the analysis of samples with
a known concentration.  There are several types of samples with known concentrations. QC
samples used to determine bias should be included as early in the analytical process as  possible.

•      Reference Material. A material or substance one or more of whose property values are
       sufficiently homogeneous and well established to be used for the calibration of an
       apparatus, the assessment of a measurement method, or for assigning values to  materials
       (ISO 1993).  A certified reference material is reference material for which each certified
       property value is accompanied by an uncertainty at a stated level of confidence.
       Radioactive reference materials may be available for certain radionuclides in soil (e.g.,
       uranium in soil), but reference building materials may not be available.  Because
       reference materials are prepared and homogenized as part of the certification process,
       they are rarely available as double-blind samples.  When appropriate reference materials
       are available (i.e., proper matrix, proper radionuclide, proper concentration range), they
       are recommended for use in determining the overall bias for a measurement system.

•      Performance Evaluation (PE) Samples.  PE samples are samples that evaluate the overall
       bias of the analytical laboratory and detect any error in the analytical method used. These
       samples are usually prepared by a third party, using a quantity of analyte(s) which is
       known to the preparer but unknown to the laboratory, and always undergo certification
       analysis. The analyte(s) used to prepare the PE sample is the same as the analyte(s) of
       interest. Laboratory procedural error is evaluated by the percentage  of analyte identified
       in the PE sample (EPA 1991g).  PE samples are recommended for use in determining
       overall bias for a measurement system when appropriate reference materials are not
       available.  PE samples are equivalent to matrix spikes prepared by a third party that
       undergo certification analysis and can be non-blind, single-blind, or double-blind.

•      Matrix Spike Samples. Matrix spike samples are environmental samples that are spiked
       in the laboratory with a known concentration of a target analyte(s) to verify percent
       recoveries. They are used primarily to check sample matrix interferences but can also be
       used to monitor laboratory performance.  However, a data set of at least three or more
       results is necessary to distinguish between laboratory performance and matrix
       interference (EPA 1991g). Matrix spike samples are often replicated to monitor method
       performance and evaluate error due to laboratory bias and precision (when four or more
       pairs are analyzed).  These replicates are often collectively referred to as a matrix
       spike/matrix spike duplicate (MS/MSD).
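
As a hedged illustration of how spike results are typically reduced, the following Python sketch
computes a percent recovery for a matrix spike; the example concentrations are arbitrary, and the
exact reporting conventions of a given laboratory may differ.

    def percent_recovery(spiked_result, unspiked_result, amount_added):
        # Fraction of the added analyte actually measured, expressed as a percentage.
        return (spiked_result - unspiked_result) / amount_added * 100.0

    # Example: a sample at 0.5 Bq/g spiked with 2.0 Bq/g that reads 2.3 Bq/g after spiking
    print(percent_recovery(2.3, 0.5, 2.0))   # 90.0 percent recovery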

There are several additional terms applied to samples prepared by adding a known amount of the
radionuclide of interest to the sample. The majority of these samples are designed to isolate
individual sources of bias within a measurement system by preparing pre- and post-operation
spikes. For example, the bias from the digestion phase of the measurement system can be
determined by comparing the result from a pre-digest spike to the result from a post-digest spike.

There are also several types of samples used to estimate bias caused by contamination.

•      Background Sample. A background sample is a sample collected upgradient of the area
       of potential contamination (either onsite or offsite) where there is little or no chance of
       migration of the contaminants of concern (EPA 1991g). Background samples  are
       collected from the background reference area (Section 4.5),  determine the natural
       composition and variability of the soil (especially important in areas with high
       concentrations of naturally occurring radionuclides), and are considered "clean" samples.
       They provide a basis for comparison of contaminant concentration levels with  samples
       collected from the survey unit when the statistical tests described in Chapter 8  are
       performed.

•      Field Blanks. Field blanks are samples prepared in the field using certified clean sand or
       soil and then submitted to the laboratory for analysis (EPA 1991g). A field blank is used
       to evaluate contamination error associated with sampling methodology and laboratory
       procedures. It also provides information about contaminants that may be introduced
       during sample collection, storage, and transport.  Field blanks are recommended for
       determining bias resulting from contamination for a radiation  survey or site investigation.

•      Method Blank.  A method blank is an analytical control sample used to demonstrate that
       reported analytical results are not the result of laboratory contamination (ATSDR 1992).
       It contains distilled or deionized water and reagents, and is carried through the entire
       analytical procedure (laboratory sample preparation, digestion, and analysis). The
       method  blank is also referred to as a reagent blank. The method blank is generally used
       as an internal control tool by the laboratory because it is a non-blind sample.


7.2.2.3 Representativeness

Representativeness is a measure of the degree to which data accurately and precisely represent a
characteristic of a population parameter at a sampling point (ASQC 1995). Representativeness is
a qualitative term that is reflected in the survey design through the selection of a measurement
method (e.g., direct measurement or sampling) and the size of a sample collected for analysis.

Sample collection and analysis is typically less representative of true radionuclide concentrations
at a specific measurement location than performing a direct measurement.  This is caused by the
additional steps required in collecting and analyzing samples, such as sample collection, field
sample preparation, laboratory sample preparation, and radiochemical analysis. However, direct
measurement techniques with acceptable detection limits are not always available.  When
sampling is required as part of a survey design, it is critical that the sample collection procedures
consider representativeness.  The location of the sample is determined in Section 5.5.2.5, but the
size and content of the sample are usually determined as the sample is collected. Sample size
and content are discussed in Section 4.7.3 and  Section 7.5. Sample collection procedures also
need to consider the development of the DCGLs when determining the representativeness of the
samples.

7.2.2.4 Comparability

Comparability is a qualitative term that expresses the confidence that two data sets can contribute
to a common analysis and interpretation.  Generally, comparability is provided by using the same
measurement system for all analyses of a specific radionuclide.  In many cases, equivalent
procedures used within a measurement system are acceptable. For example, using a liquid-liquid
extraction purification step to determine the concentration of 238Pu using alpha spectrometry may
be equivalent to using an ion-exchange column purification step. However, using a gross alpha
measurement on a gas proportional counting system would not be  considered equivalent.
Comparability is usually not an issue except in cases where historical data have been collected
and are being compared to current analytical results, or when multiple laboratories are used to
provide results as part of a single survey design.

7.2.2.5 Completeness

Completeness is a measure of the amount of valid data obtained from the measurement system,
expressed as a percentage of the number of valid measurements that should have been collected.
Completeness is of greater concern for laboratory analyses than for direct measurements because
the consequences of incomplete data often require the collection of additional samples. Direct
measurements can usually be repeated fairly easily. The collection of additional samples
generally requires a remobilization of sample collection personnel, which can be expensive.
Conditions at the site may have changed, making it difficult or impossible to collect
representative and comparable samples without repeating the entire survey.  On the other hand, if
it is simply an analytical problem and sufficient sample was originally collected, the analysis can
be repeated using archived sample material. Samples collected on a grid to locate areas of
elevated activity are also a concern for completeness. If one sample analysis is not valid, the
entire survey design for locating areas of elevated activity may be invalidated.

7.2.2.6  Other Data Quality Indicators

Several additional data quality indicators that influence the final status survey design are
identified as DQOs in Section 2.3.1. Many of these (e.g., selection and classification of survey
units, decision error rates, variability in the contaminant concentration, lower bound of the gray
region) are used to determine the number of measurements and are discussed in detail in Section
5.5.  The method detection limit is directly related to the selection of a measurement method and
a radionuclide-specific analytical technique.

Analytical methods should be capable of measuring levels below the established DCGLs;
detection limits of 10-50% of the DCGL should be the target (see Section 6.7). Cost, time, best
available technology, or other constraints may create situations where the above  stated
sensitivities are deemed impracticable. Under these circumstances, higher detection sensitivities
may be acceptable. Although laboratories will state detection limits, these sensitivities are
usually based on ideal or optimistic situations and may not be achievable under actual
measurement conditions. Detection limits are subject to variation from sample to sample,
instrument to instrument, and procedure to procedure, depending on sample size, geometry,
background, instrument  efficiency, chemical recovery, abundance of the radiations being
measured, counting time, self-absorption in the prepared sample, and interferences from
radionuclides or other materials present in the sample. The detection limit that is achievable in
practice should not exceed the DCGL.
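
The detection limit target described above can be expressed as a simple screening check. The
Python sketch below is illustrative only; the MDC and DCGL values in the example are arbitrary.

    def detection_limit_screen(method_mdc, dcgl, target_fraction=0.5):
        # Returns whether the quoted detection limit meets the 10-50% of DCGL
        # target (using the upper end of the target range here) and whether it
        # at least stays below the DCGL itself.
        meets_target = method_mdc <= target_fraction * dcgl
        below_dcgl = method_mdc < dcgl
        return meets_target, below_dcgl

    # Example: a quoted MDC of 0.4 (in DCGL units) against a DCGL of 1.0
    print(detection_limit_screen(0.4, 1.0))   # (True, True)
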
7.3    Communications with the Laboratory

Laboratory analyses of samples are generally performed by personnel not directly involved in the
collection of the samples being analyzed.  Samples are typically collected by one group working
in the field, and analyzed by a second group located in a laboratory.  This separation of tasks can
potentially lead to problems based on the lack of communication between the two groups. For
this reason, communications between the Project Manager, field personnel, and laboratory
personnel are vital to ensuring the success of a project.

7.3.1   Communications During Survey Planning

The radioanalytical laboratory is a valuable resource during survey planning. Information on
available analytical techniques, analytical bias and precision, method detection limits, analytical
costs, and turnaround times can easily be provided by the laboratory. All of this information is
used to make the decision to perform direct measurements or collect samples for laboratory
measurements.  Additional information,  such as required sample size/volume, type of sample
container, preservative requirements, and shipping requirements, including the availability of the
laboratory for receipt of samples on weekends or holidays, can be obtained and factored into the
survey plan.

Involving the radioanalytical laboratory during survey planning also provides the laboratory with
site-specific information about the project. Information on the radionuclides of interest, possible
chemical and physical form of the contamination, and mechanism for release of the
contamination to the environment is used to modify or develop the analytical method for site-
specific conditions if required. The laboratory should also be provided with the site-specific
action levels (i.e., DCGLs, investigation levels) early in the survey planning process.

In some cases, it is not practical to select a radioanalytical laboratory early in the survey process
to participate in the survey planning activities. For example, Federal procurement procedures
require that a statement of work (SOW) identifying the tasks to be performed by the laboratory be
developed prior to selecting a laboratory. Unfortunately, the details of the tasks for the
laboratory to perform are developed during survey planning. This means that the information
provided by the laboratory and used during survey planning will be obtained from another
source, usually a radiochemist or health physicist trained in radiochemistry.  The uncertainty
associated with this information and subsequent decisions made based on this information
increases. This may  lead to increased costs caused by specifying an unnecessarily expensive
analytical method in  the SOW or repeated sampling and analysis of samples that did not meet the
target detection limits because the specified analytical method was not sensitive enough. In
addition, unnecessary or inappropriate analytical methods may be selected by the laboratory
because site-specific information concerning the samples was not provided.

The laboratory should be consulted when planning the schedule for the survey to ensure that the
expected turnaround times can be met based on the projected laboratory workload.

7.3.2   Communications Before and During Sample Collection

In most situations, the sample collection and shipping containers are supplied by the laboratory;
therefore, the laboratory should be notified well in advance of the sampling  trip so that these
items will be available to the sampling team during the survey.


The main purpose of communications with the laboratory during sample collection is to inform
the laboratory of modifications to the survey design specified in the planning documents (e.g.,
QAPP and SOPs).  The laboratory should have a copy of the survey design in their possession
prior to samples being collected.

Modifications to the survey design are often minor deviations from the SOPs caused by site-
specific conditions and usually affect a small number of samples. For example, a rock
outcropping covered by a thin layer of soil may restrict the depth of the surface soil sample to
5 cm (2 in.) instead of the 10 cm (4 in.) specified in the SOP. The mass of the samples collected
from this area of the site is one-half the expected sample mass, and the laboratory needs to be
informed of this deviation from the SOP.

In other situations, there may be an extensive modification to the number or types of samples
collected at the  site that will affect the analytical methods, detection capabilities, analytical costs,
or even the assumptions used to develop the DCGL. For example, a large portion of the site may
have been converted to a parking lot. A large pile of material that may represent the former
surface soil will be sampled as well as soil collected from beneath the parking lot surface. The
number of samples to be analyzed has doubled compared to the original SOW.

If the expected timing of receipt of samples at the laboratory changes due to sample collection
schedule deviations, the laboratory should be notified. Most laboratories require prior
notification for  samples to be received on weekends.

7.3.3  Communications During Sample Analysis

The laboratory should communicate with the Project Manager and field personnel during sample
analysis.  The laboratory should provide a list of missing or damaged samples as soon after the
samples are received as practical. This allows the Project Manager to determine if resampling is
required to replace the missing or damaged samples.  The Project Manager may also request
notification from the laboratory when samples are spilled or lost during analysis. Preliminary
reports of analytical results  may be useful to help direct sampling activities and provide early
indications of whether the survey objectives defined by the DQOs are being met. However, if
preliminary results have not been verified or validated, their usefulness is limited.

7.3.4  Communications Following Sample Analysis

Following sample analysis,  the laboratory will provide documentation of the analytical results as
specified in the  survey design.  Laboratory personnel should be available to assist with data
verification and validation.
7.4    Selecting a Radioanalytical Laboratory

Once the decision to perform sampling activities is made, the next step is to select the analytical
methods and determine the data needs for these methods.  It is advisable to select a radiochemical
laboratory early in the survey planning process in order that it may be consulted on the analytical
methodology1 and the sampling activities.  In addition, mobile laboratories can provide on-site
analytical capability. Obtaining laboratory or other services may involve a specific procurement
process. Federal procurement procedures may require additional considerations beyond the
method described here.

The procurement of laboratory services usually starts with the development of a request for
proposal that includes a statement-of-work describing the analytical services to be procured.  The
careful preparation of the statement-of-work is essential to the selection of a laboratory capable
of performing the required services in a technically competent and timely manner.

The technical proposals received in response to the procurement request for proposal must be
reviewed by personnel familiar with radioanalytical laboratory operations in order to select the
most qualified offeror.  For complicated sites with a large number of laboratory analyses, it is
recommended that a portion of this evaluation take the form of a pre-award audit. The provision
for this audit must be in the request for proposal. The results of this audit provide a written
record of the decision to use a specific laboratory.  Smaller sites or facilities may decide that a
review of the laboratory's qualifications is sufficient for the evaluation.

There are six criteria that should be reviewed during this evaluation:

•      Does the laboratory possess the appropriate well-documented procedures,
       instrumentation, and trained personnel to perform  the necessary analyses? Necessary
       analyses are defined by the data needs (radionuclide(s) of interest and target detection
       limits) identified by the DQO process.

•      Is the laboratory experienced in performing the same or similar analyses?

•      Does the laboratory have satisfactory performance evaluation results from formal
       monitoring or accreditation programs?  The laboratory should be able to provide a
       summary of QA audits and proof of participation in interlaboratory cross-check programs.
       Equipment calibrations should be performed using National Institute of Standards and
       Technology (NIST) traceable reference radionuclide standards whenever possible.
       1 The laboratory provides information on personnel, capabilities, and current workload that are necessary
inputs to the decision-making process.



•      Is there an adequate capacity to perform all analyses within the desired timeframe? This
       criterion considers whether or not the laboratory possesses a radioactive materials
       handling license or permit for the samples to be analyzed.  Very large survey designs may
       indicate that more than one analytical laboratory is necessary to meet the survey
       objectives.2

•      Does the laboratory provide an internal quality control review of all generated data that is
       independent of the data generators?

•      Are there adequate protocols for method performance documentation and sample
       security?

Providers of radioanalytical services should have an active and fully documented QA program in
place.3 This program should comply with the objectives determined by the DQO process in
Section 2.3. The QA program should include:

       laboratory organizational structure
       personnel qualifications
       written standard operating procedures and instructions
       inter- and intralaboratory performance analyses
       design control to define the flow of samples through the laboratory
       a corrective action plan
       an internal audit program

Chain-of-Custody requirements and numbers of samples are also specified. The analytical
procedures as well as the documentation and reporting requirements should be specified and
agreed upon.  These topics are discussed in detail in the following sections of this chapter.
7.5    Sampling

This section provides guidance on developing appropriate sample collection procedures for
surveys designed to demonstrate compliance with a dose- or risk-based regulation. Sample
collection procedures are concerned mainly with ensuring that a sample is representative of the
sample media, is large enough to provide sufficient material to achieve the desired detection
limit, and is consistent with assumptions used to develop the conceptual site model and the
DCGLs. Additional considerations for sample collection activities are discussed in Section 4.7.3.
   2 If several laboratories are performing analyses as part of the survey, the analytical methods used to perform the
analyses should be similar to ensure comparability of results (see Appendix N, Section N.6.5).

    3 The QA program is typically documented in one or more documents such as a Quality Management Plan,
Quality Assurance Manual, or Quality Assurance Project Plan.



The presence of radioactive and hazardous chemical wastes (mixed wastes) at a site can
influence the survey design.  The external exposure rates or radioactivity concentration of a
specific sample may limit the time that workers will be permitted to remain in intimate contact
with the samples, or may dictate that smaller samples be taken and special holding areas be
provided for collected samples prior to shipment. These special handling considerations may
conflict with the size specifications for the analytical method, normal sampling procedures, or
equipment. There is a potential for biasing sampling programs by selecting samples that can be
safely handled or legally shipped to support laboratories. Because final status surveys are
performed to demonstrate that a site can be safely released, issues  associated with high levels of
radioactivity are not expected to be a concern.

7.5.1   Surface Soil

The purpose of surface soil sampling is to collect samples that accurately and precisely represent
the radionuclides and their concentrations at the location being sampled. In order to do this and
plan for sampling, a decision must be made as to the survey design. The selection of a survey
design is based on the Historical Site Assessment, results from preliminary surveys (i.e., scoping
characterization, remedial action support),  and the objectives of the survey developed using the
Data Quality Objectives (DQO) Process. The selection between judgmental,  random, and
systematic survey designs is discussed in Section 5.5.3.

7.5.1.1 Sample Volume

The volume of soil collected should be specified in the sample collection procedure. In general,
large volumes of soil are more representative than small volumes of soil. In addition, large
samples provide sufficient sample to ensure that required detection limits can be achieved and
that sample reanalysis can be done if there  is a problem. However, large samples may cause
problems with shipping, storage, and disposal. All of these issues should be discussed with the
sample collection team and the analytical laboratory during development of sample collection
procedures.  In general, surface soil samples range in size from 100 g up to several kilograms.

The sample collection procedure should also make clear if it is more important to meet the
volume requirement of the survey design or the surface area the sample represents.  Constant
volume is related to comparability of the results while surface area is  more closely related to the
representativeness of the results.  Maintaining a constant surface area and depth for samples
collected for a particular survey can eliminate problems associated with different depth profiles.
The actual surface area included as part of the sample may be important for estimating the
probability of locating areas of elevated concentration.
7.5.1.2  Sample Content
The material present in the field at the sample location may or may not provide a representative
sample. Vegetative cover, soil particle size distribution, inaccessibility, or lack of sample
material are examples of problems that may be identified during sample collection.  All
deviations from the survey design as documented in the Standard Operating Procedures (SOPs)
should be recorded as part of the field sample documentation.

Sample content is generally defined by the assumptions used to develop the conceptual site
model and the DCGLs. A typical agricultural scenario assumes that the top few centimeters of
soil are available for resuspension in air, that the top 15 cm (6 in.) are homogenized by
agricultural activities (e.g., plowing), that roots can extend down several meters to obtain water
and nutrients depending on the plant, and  that external exposure is based on an assumed
thickness of contaminated soil (usually at the surface). Depending on the dominant exposure
pathways for each radionuclide, this can result in a complicated set of instructions for collecting
representative samples.  This situation can be further complicated by the fact that the site is not
currently being used for agricultural purposes. For this situation it is necessary to look at the
analytical results from the preliminary surveys (i.e., scoping, characterization,  remedial action
support) to determine the expected depth of contamination.

In most situations the vegetative cover is not considered part of the surface soil sample and is
removed in the field.  For agricultural scenarios where external exposure is not the primary
concern, soil particles greater than 2 mm (0.08 in.) are generally not considered as part of the
sample (EPA 1990). Foreign material (e.g., plant roots, glass, metal, or concrete) is also
generally not considered part of the sample, but should be reviewed on a site-specific basis. It is
important that the sample collection procedure clearly indicate what is and what is not considered
part of the sample.

7.5.1.3 Sampling Equipment

The selection of proper sampling equipment is important to ensure that samples are collected
effectively and efficiently. Sampling equipment generally consists of a tool to collect the sample
and a container to place the collected sample  in.  Sample tracking begins as soon as the sample is
collected, so it may be necessary to consider security of collected samples required by the
objectives of the survey.

Sampling tools are selected based on the type of soil, sample depth, number of samples required,
and training of available personnel.  The selection of a sampling tool may also be based on the
expected use of the results.  For example,  if a soil sample is collected to verify the depth profile
used to develop the calibration for in situ gamma spectrometry, it is important to preserve the soil
core.  Table 7.1 lists several  examples of tools used for collecting soil samples, situations where
they are applicable, and some advantages  and disadvantages involved in their use.

                           Table 7.1  Soil Sampling Equipment*

Trier
       Application:  Soft surface soil
       Advantages/Disadvantages:  Inexpensive; easy to use and decontaminate; difficult to
       use in stony or dry soil

Scoop or trowel
       Application:  Soft surface soil
       Advantages/Disadvantages:  Inexpensive; easy to use and decontaminate; trowels with
       painted surfaces should be avoided

Bulb planter
       Application:  Soft soil, 0-15 cm (0-6 in.)
       Advantages/Disadvantages:  Easy to use and decontaminate; uniform diameter and
       sample volume; preserves soil core; limited depth capability; can be difficult to
       decontaminate

Soil coring device
       Application:  Soft soil, 0-60 cm (0-24 in.)
       Advantages/Disadvantages:  Relatively easy to use; preserves soil core; limited depth
       capability; can be difficult to decontaminate

Thin-wall tube sampler
       Application:  Soft soil, 0-3 m (0-10 ft)
       Advantages/Disadvantages:  Easy to use; preserves soil core; easy to decontaminate;
       can be difficult to remove cores

Split spoon sampler
       Application:  Soil, to bedrock
       Advantages/Disadvantages:  Excellent depth range; preserves soil core; useful for hard
       soils; often used in conjunction with drill rig for obtaining deep cores

Shelby tube sampler
       Application:  Soft soil, to bedrock
       Advantages/Disadvantages:  Excellent depth range; preserves soil core; tube may be
       used for shipping core to lab; may be used in conjunction with drill rig for obtaining
       deep cores

Bucket auger
       Application:  Soft soil, 7.5 cm - 3 m (3 in. - 10 ft)
       Advantages/Disadvantages:  Easy to use; good depth range; uniform diameter and
       sample volume; may disrupt and mix soil horizons greater than 15 cm

Hand-operated power auger
       Application:  Soil, 15 cm - 4.5 m (6 in. - 15 ft)
       Advantages/Disadvantages:  Good depth range; generally used in conjunction with
       bucket auger; destroys soil core; requires two or more operators; can be difficult to
       decontaminate

* Reproduced from EPA 1991g

Sample containers are generally not a major concern for collecting surface soil samples.
Polyethylene bottles with screw caps and wide mouths are recommended. These containers are
fairly economical, provide easy access for adding and removing samples, and resist chemicals,
breaking, and temperature extremes. Glass containers are also acceptable, but they are fragile
and tend to break during shipment. Metal containers are sometimes used, but sealing the
container can present a problem and corrosion can be an issue if the samples are stored for a
significant length of time.
7.5.2   Building Surfaces
Because building surfaces tend to be relatively smooth and the radioactivity is assumed to be on
or near the surface, direct measurements are typically used to provide information on
contaminant concentrations.  Sometimes, however, it is necessary to collect actual samples of the
building material surface for analysis in a laboratory.

7.5.2.1 Sample Volume

The sample volume collected from building surfaces is usually a less significant DQO concern
than the area from which the sample was collected.  This is because building surface DCGLs are
usually expressed in terms of activity per unit area.  It is still necessary to consider the sample
volume to account for sample matrix effects that may reduce the chemical recovery, which in
turn has an effect on the detection limit.

7.5.2.2 Sample Content

If residual activity is covered by paint or some other treatment, the underlying surface and the
coating itself may be contaminated.  If the activity is a pure alpha or low-energy beta  emitter,
measurements at the surface will probably not be representative of the actual residual activity
level. In this case the surface layer is removed from the known area, such as by using a
commercial stripping agent or by physically abrading the surface. The removed coating material
is analyzed for activity content  and the level converted to appropriate units (i.e., Bq/m2,
dpm/100 cm2) for comparison with surface activity DCGLs. Direct measurements can be
performed on the underlying surface after removal of the coating.
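
Converting between the two surface activity units mentioned above is a common data reduction
step. The following Python sketch is illustrative only; the 5,000 dpm/100 cm2 value is
hypothetical.

    # Convert dpm per 100 cm^2 to Bq/m^2 (1 dpm = 1/60 Bq; 100 cm^2 = 0.01 m^2).
    def dpm_per_100cm2_to_bq_per_m2(dpm_per_100cm2):
        bq_per_100cm2 = dpm_per_100cm2 / 60.0   # disintegrations per minute -> Bq
        return bq_per_100cm2 / 0.01             # per 100 cm^2 -> per m^2

    print(dpm_per_100cm2_to_bq_per_m2(5000))    # approximately 8,333 Bq/m^2
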

Residual radioactivity may be incorporated into building materials, such as pieces of concrete or
other unusual matrices. Development of SOPs for collecting these types of samples may involve
consultation with the analytical laboratory to help ensure that the objectives of the survey are
achieved.

The thickness of the layer of building surface to be removed as a sample should be consistent
with the development of the conceptual site model and the DCGLs. For most sites the surface
layer will only be the first few millimeters of the material being sampled.

7.5.2.3 Sampling Equipment

Tools used to provide samples of building surfaces depend on the material to be sampled.
Concrete may require chisels, hammers, drills, or other tools specifically designed to  remove a
thin layer of the surface. Wood surfaces may require using a sander or a saw to collect a sample.
Paint may be chemically or physically stripped from the surface.



Sample containers for these samples are generally the same as those recommended for soil
samples. If chemicals are used to strip paint or other surface materials, the chemical resistance of
the container should be considered.

7.5.3   Other Media

Surface soil and building surfaces are the media addressed in MARSSIM during the final status
survey design.  Other media may be involved and may have been remediated.  Data collection
activities during preliminary surveys (i.e., scoping, characterization, remedial action support)
may involve collecting samples of other media to support the final status survey design.
Examples of other media that may be sampled include:

       subsurface soil
       ground water
       surface water
       sediments
       sewers  and septic systems
       flora and fauna (plants and animals)
       airborne particulates
       air (gas)

Appendix M provides a list of resources that can be used to develop sample collection
procedures for other media that may be required by preliminary surveys to support the development
of a final status survey design.
7.6    Field Sample Preparation and Preservation

Proper sample preparation and preservation are essential parts of any radioactivity sampling
program. The sampling objectives should be specified before sampling activities begin. Precise
records of sample collection and handling are necessary to ensure that data obtained from
different locations or time frames are correctly compared.

The appropriateness of sample preparation techniques is a function of the analysis to be
performed (EPA 1992a, 1992b). Field sample  preparation procedures are a function of the
specified analysis and the objectives of the survey.  It is essential that these objectives be clearly
established and  agreed upon in the early stages of survey planning (see Section 2.3).
7.6.1   Surface Soil
Soil and sediment samples, in most protocols, require no field preparation and are not preserved.
In some protocols, cooling of soil samples to 4 °C is required during shipping and storage.  This
is not a practice normally followed for the radiochemical analysis of soil samples.

When replicate samples are prepared in the field, it is necessary to homogenize the sample prior
to separation into replicates.  There are standard procedures for homogenizing soil in the
laboratory (ASTM 1995), but the equipment required for these procedures may not be available
in the field.  Simple field techniques, such as cone and quarter, or using a riffle splitter to divide
the sample may be appropriate if the sample can be dried (ASTM 1993, EPA 1991g).  If the
sample contains significant amounts of residual water (e.g., forms clumps of soil) and there are
no facilities for drying the sample, it is recommended that the homogenization and separation
into replicates be performed in a laboratory. It is preferable to use non-blind replicates where the
same laboratory prepares and analyzes the replicates rather than use poorly homogenized or
heterogeneous samples to prepare replicate samples.

7.6.2   Building Surfaces

Field preparation and preservation of building and associated materials, including smear samples,
is not generally required.  Homogenization of samples to prepare replicates is the same for
building surface material and soil.

7.6.3   Other Media

Other media may have significant requirements related to field sample preparation and
preservation. For example, water samples may need filtering and acidification. Storage at
reduced temperatures (i.e., cooling or freezing) to reduce biological activity may be necessary for
some samples. Addition of chemical preservatives for specific radionuclides or media may also
be required.
7.7    Analytical Procedures

The selection of the appropriate radioanalytical methods is normally made prior to the
procurement of analytical services and is included in the statement-of-work of the request for
proposal. The statement-of-work may dictate the use of specific methods or be performance
based.  Unless there is a regulatory requirement, such as conformance to the EPA drinking water
methods (EPA 1980a), the specification of performance based methodology is encouraged. One
reason for this is that a laboratory will usually perform better using the methods routinely
employed in its laboratory as contrasted to using other methods with which it has less experience.

The laboratory is also likely to have historical data on performance for methods routinely used by
that laboratory.  However, the methods employed in a laboratory should be derived from a
reliable source, such as those listed in Table 7.2.

            Table 7.2  Examples of References for Routine Analytical Methods
   •    Methods of Air Sampling and Analysis (Lodge 1988)

   •    Annual Book of ASTM Standards, Water and Environmental Technology, Volume
        11.04, Environmental Assessment; Hazardous Substances and Oil Spill Responses;
        Waste Management; Environmental Risk Assessment (ASTM 1997)

   •    Standard Methods for the Examination of Water and Wastewater (APHA 1995)

   •    EML Procedures Manual (DOE 1990b)

   •    Radiochemical Analytical Procedures for Analysis of Environmental Samples (EPA
        1979)

   •    Radiochemistry Procedures Manual (EPA 1984a)

   •    Indoor Radon and Radon Decay Product Measurement Protocols (EPA 1992d)

   •    USAEHA Environmental Sampling Guide (Department of the Army 1993)
This section briefly describes specific equipment and procedures to be used once the sample is
prepared for analysis. The results of these analyses (i.e., the levels of radioactivity found in these
samples) are the values used to determine the level of residual activity at a site. In a
decommissioning effort, the DCGLs are expressed in terms of the concentrations of certain
radionuclides.  It is of vital importance, therefore, that the analyses be accurate and of adequate
sensitivity for the radionuclides of concern.  The selection of analytical procedures should be
coordinated with the laboratory and specified in the survey plan.

Analytical methods should be adequate to meet the data needs identified in the DQO process.
Consultation with the laboratory performing the analysis is recommended before selecting a
course of action.  MARSSIM is not intended to limit the selection of analytical procedures; rather,
all applicable methods should be reviewed to provide results that meet the objectives of the
survey.  The decision maker and survey planning team should decide whether routine methods
will be used at the site or if non-routine methods may be acceptable.

•      Routine analytical methods are documented with information on minimum performance
       characteristics, such as detection limit, precision and accuracy, and useful range of
       radionuclide concentrations and sample sizes. Routine methods may be issued by a
       recognized organization (e.g., Federal or State agency, professional organization),
       published in a refereed journal, or developed by an individual laboratory.  Table 7.2 lists
       examples of sources for routine methods.

•      Non-routine methods address situations with unusual or problematic matrices, low
       detection limits, or new parameters, procedures or techniques. Non-routine methods
       include adjustments to routine methods, new techniques published in refereed literature,
       and development of new methods.

References that provide information on radiochemical methodology and should be considered in
the methods review and selection process are available from such organizations as:

•      National Council on Radiation Protection and Measurements (NCRP)
•      American Society for Testing and Materials (ASTM)
•      Radiological and Environmental Sciences Laboratory (RESL), Idaho Falls, Idaho
       (Operated by the DOE)
•      DOE Technical Measurements Center, Grand Junction, CO
•      Environmental Measurements Laboratory (EML);  formerly the Health and Safety
       Laboratory of the DOE

Equipment vendor literature, catalogs, and instrument manuals are often a source of useful
information on the characteristics of radiation detection equipment. Table 7.3 provides a
summary of common laboratory methods with estimated detection limits.

Analytical procedures in the laboratory consist of several parts that are assembled to produce an
SOP for a specific project or sample type.  These parts include:

       laboratory sample preparation
       sample dissolution
       sample purification
       preparation for counting
       counting
       data reduction

       Table 7.3  Typical Measurement Sensitivities for Laboratory Radiometric Procedures
       (organized by sample type and by the radionuclides or radiation measured; each
       procedure is followed by its approximate measurement sensitivity)

Smears (filter paper)
       Gross alpha
              Gas-flow proportional counter; 5-min count:  5 dpm
              Alpha scintillation detector with scaler; 5-min count:  20 dpm
       Gross beta
              Gas-flow proportional counter; 5-min count:  10 dpm
              End window GM with scaler; 5-min count (unshielded detector):  80 dpm
       Low energy beta (3H, 14C, 63Ni)
              Liquid scintillation spectrometer; 5-min count:  30 dpm

Soil/Sediment
       137Cs, 60Co, 226Ra (214Bi)a, 232Th (228Ac), 235U
              Germanium detector (25% relative efficiency) with multichannel analyzer;
              pulse height analyzer; 500-g sample; 15-min analysis:  0.04-0.1 Bq/g (1-3 pCi/g)
       234,235,238U; 238,239,240Pu; 227,228,230,232Th; other alpha emitters
              Alpha spectroscopy with multichannel analyzer - pyrosulfate fusion and
              solvent extraction; surface barrier detector; pulse height analyzer;
              1-g sample; 16-hr count:  0.004-0.02 Bq/g (0.1-0.5 pCi/g)

Water
       Gross alpha
              Gas-flow proportional counter; 100-ml sample, 200-min count:  0.04 Bq/L (1 pCi/L)
       Gross beta
              Gas-flow proportional counter; 100-ml sample, 200-min count:  0.04 Bq/L (1 pCi/L)
       137Cs, 60Co, 226Ra (214Bi), 232Th (228Ac), 235U
              Germanium detector (25% relative efficiency) with multichannel analyzer;
              pulse height analyzer; 3.5-L sample, 16-hr count:  0.4 Bq/L (10 pCi/L)
       234,235,238U; 238,239,240Pu; 227,228,230,232Th; other alpha emitters
              Alpha spectroscopy with multichannel analyzer - solvent extraction; surface
              barrier detector; pulse height analyzer; 100-ml sample, 30-min count:
              0.004-0.02 Bq/L (0.1-0.5 pCi/L)
       3H
              Liquid scintillation spectrometry; 5-ml sample, 30-min count:  10 Bq/L (300 pCi/L)

a Indicates that a member of the decay series is measured to determine activity level of the parent
radionuclide of primary interest.

7.7.1   Photon Emitting Radionuclides

There is no special sample preparation required for counting samples using a germanium detector
or a sodium iodide detector beyond placing the sample in a known geometry for which the
detector has been calibrated.  The samples can be measured as they arrive at the laboratory, or the
sample can be dried, ground to a uniform particle size, and mixed to provide a more
homogeneous sample if required by the SOPs.

The samples are typically counted using a germanium detector with a multichannel analyzer or a
sodium iodide detector with a multichannel analyzer. Germanium detectors have better
resolution and can identify peaks (and the associated radionuclides) at lower concentrations.
Sodium iodide detectors often have a higher efficiency and are significantly less expensive than
germanium detectors.  Low-energy photons (i.e., x-rays and gamma rays below 50 keV) can be
measured using specially designed detectors with an entrance window made from a very light
metal, typically beryllium.  Descriptions of germanium and sodium iodide detectors are provided
in Appendix H.

Data reduction is usually the critical step in measuring photon emitting radionuclides.  There are
often several hundred individual gamma ray energies detected within a single  sample.  Computer
software is usually used to identify the peaks, associate them with the proper energy, associate
the energy with one or more radionuclides, correct for the efficiency of the detector and the
geometry of the sample, and provide results in terms of concentrations with the associated
uncertainty. It is important that the software be either a well-documented commercial package or
thoroughly evaluated and documented before use.
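
The calculation performed for each identified peak is conceptually simple even though the
peak-search and identification steps are not. The sketch below shows the generic form of the
single-peak activity calculation; it is a simplified illustration with hypothetical values, not the
algorithm of any particular commercial package.

    # Generic single-peak activity calculation for gamma spectrometry.
    # All numerical inputs below are hypothetical.
    def activity_bq_per_kg(net_counts, peak_efficiency, gamma_yield, live_time_s, mass_kg):
        """Activity concentration inferred from one full-energy peak."""
        return net_counts / (peak_efficiency * gamma_yield * live_time_s * mass_kg)

    # Example: 1,200 net counts in a peak, 2% full-energy peak efficiency,
    # 85% emission probability, 900 s live time, 0.5 kg sample.
    print(activity_bq_per_kg(1200, 0.02, 0.85, 900, 0.5))   # about 157 Bq/kg
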

7.7.2   Beta Emitting Radionuclides

Laboratory sample preparation is an important step in the analysis of surface soil and other solid
samples for beta emitting radionuclides.  The laboratory will typically have a sample preparation
procedure that involves drying the sample and grinding the soil so that all of the particles are less
than a specified size to provide a homogeneous sample.  A small portion of the homogenized
sample is usually all that is required for the individual analysis.

Once the sample has been prepared, a small portion is dissolved, fused, or leached to provide a
clear solution containing the radionuclide of interest. The only way to ensure  that the sample is
solubilized is to completely dissolve the sample. However, this can be an  expensive and time-
consuming step in the  analysis. In some cases, leaching with strong acids can consistently
provide greater than 80% recovery of the radionuclide of interest (NCRP 1976a) and may be
acceptable for certain applications. Gross beta measurements may be performed on material that
has not been dissolved.


After dissolution, the sample is purified using a variety of chemical reactions to remove bulk
chemical and radionuclide impurities.  The objective is to provide a chemically and
radiologically pure sample for measurement. Examples of purification techniques include
precipitation, liquid-liquid extraction, ion-exchange chromatography, distillation, and
electrodeposition. Gross beta measurements may be performed on material that has not been
purified.

After the sample is purified, it is prepared for counting.  Beta emitting radionuclides are usually
prepared for a specific type of counter in a specified geometry.  Solid material is usually
precipitated and collected on a filter in a circular geometry to provide a homogeneous sample.
Liquid samples are typically converted to the appropriate chemical form and diluted to a
specified volume in preparation for counting.

Measurements of solid samples are typically performed using a gas-flow proportional counter.
Because total beta activity is measured, it is important that the purification step be performed to
remove any interfering radionuclides. Liquid samples are usually diluted using a liquid
scintillation cocktail and counted using a liquid scintillation spectrometer. Liquid scintillation
spectrometers can be used for low-energy beta emitting radionuclides, such as 3H and 63Ni.  They
also have high counting efficiencies, but often have a high instrument background as well. Gas-
flow proportional counters have a very low background.  Appendix H provides a description of
both the gas-flow proportional counter and the liquid scintillation spectrometer.

Data reduction for beta emitting radionuclides is less  complicated than that for photon emitting
radionuclides. Since the beta detectors report total beta activity, the calculation to determine the
concentration for the radionuclide of interest is straightforward.
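
A minimal sketch of that calculation is shown below, assuming the chemical recovery (yield) is
determined with a carrier or tracer; the numerical values are hypothetical.

    # Simplified gross-to-concentration calculation for a beta measurement.
    def beta_conc_bq_per_kg(gross_cpm, bkg_cpm, efficiency, recovery, aliquot_kg):
        net_cps = (gross_cpm - bkg_cpm) / 60.0   # counts per minute -> counts per second
        return net_cps / (efficiency * recovery * aliquot_kg)

    print(beta_conc_bq_per_kg(gross_cpm=250, bkg_cpm=10, efficiency=0.4,
                              recovery=0.85, aliquot_kg=0.001))   # about 11,800 Bq/kg
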

7.7.3   Alpha Emitting Radionuclides

Laboratory sample preparation for alpha emitting radionuclides is similar to that for beta emitting
radionuclides. Sample dissolution and purification tasks are also similar to those performed for
beta emitting radionuclides.

Because of the limited penetrating power of alpha particles,  the preparation for counting is often
a critical step. Gross alpha measurements can be made using small sample sizes with a gas-flow
proportional counter, but self-absorption of the alpha particles results in a relatively high
detection limit for this technique.  Liquid scintillation spectrometers  can also be used to measure
alpha emitting radionuclides but the resolution limits the usefulness of this technique. Most
alpha emitting radionuclides are measured in a vacuum (to limit absorption by air) using alpha
spectroscopy.  This  method requires that the sample be prepared as a virtually weightless mount
in a specific geometry.  Electrodeposition is the traditional method for preparing samples for
counting.  This technique provides the highest resolution, but it requires a significant amount of

training and expertise on the part of the analyst to produce a high quality sample. Precipitation of
the radionuclide of interest on the surface of a substrate is often used to prepare samples for alpha
spectroscopy.  While this technique generally produces a spectrum with lower resolution, the
preparation time is relatively short compared to electrodeposition, and personnel can be trained
to prepare acceptable samples relatively quickly.

Alpha emitting radionuclides are typically measured using alpha spectroscopy. The data
reduction requirements for alpha spectroscopy are greater than those for beta emitting
radionuclides, and similar  to those for photon emitting radionuclides.  Alpha spectroscopy
produces a spectrum of alpha particles detected at different energies, but because the sample is
purified prior to counting,  all of the alpha particles come from radionuclides of a single element.
This simplifies the process of associating each peak with a specific radionuclide, but the lower
resolution associated with  alpha spectroscopy increases the difficulty of identifying the peaks.
Although commercial software packages are available for interpreting alpha spectroscopy results,
an experienced operator is required to ensure that the  software is working properly.
7.8     Sample Tracking

Sample tracking refers to the identification of samples, their location, and the individuals
responsible for their custody and for transfers of that custody.  Tracking covers the entire process,
from collection of the samples through analysis to final holding or disposal.  It begins when a
sample is taken, at which point the identification and designation of the sample are critical to
relating the analytical result to a site location.

Tracking samples from collection  to receipt at the analytical laboratory is normally done through
a Chain of Custody process, and documented on a Chain-of-Custody (COC) record.  Once
samples are received by the laboratory, internal tracking (e.g., COC) procedures should be in
place and codified through SOPs that assure integrity of the samples. Documentation of changes
in the custody of a sample(s) is important.  This is especially true for samples that may be used as
evidence to establish compliance with a release criterion.  In such cases, there should be
sufficient evidence to demonstrate that the integrity of the sample is not compromised from the
time it is collected to the time it is analyzed.  During this time, the sample should either be under
the positive control of a responsible individual or secured and protected  from any activity that
could change the true value of the results or the nature of the sample. When this degree of
sample handling or custody is necessary, written procedures should be developed for field
operations and for interfacing between the field operations and the analytical laboratory.  This
ensures that a clear transfer of the  custodial responsibility is well documented and no questions
exist as to who is responsible for the sample at any time.
7.8.1   Field Tracking Considerations

•      Field personnel are responsible for maintaining field logbooks with adequate information
       to relate the sample identifier (sample number) to its location and for recording other
       information necessary to adequately interpret results of sample analytical data.
•      The sample collector is responsible for the care and custody of the  samples until they are
       properly transferred or dispatched. This means that samples are in their possession, under
       constant observation, or secured.  Samples may be secured in a sealed container, locked
       vehicle, locked room, etc.
•      Sample labels should be completed for each sample using waterproof ink.
•      The survey manager or designee determines whether or not proper  custody procedures
       were followed during the field work, and decides if additional sampling is indicated.
•      If photographs are included as part of the sampling documentation, the name of the
       photographer, date, time, site location, and site description should be entered sequentially
       in a logbook as the photos are taken.  After the photographs are developed, the prints
       should be serially numbered.

7.8.2   Transfer of Custody

•      All samples leaving the site should be accompanied by a Chain-of-Custody record.  This
       record  documents sample custody transfer from the sampler, often through another
       person, to the laboratory. The individuals relinquishing the samples should sign and date
       the record. The record should include a list, including sample designation (number), of
       the samples in the shipping container and the analysis requested for each sample.
•      Shipping containers should be sealed and include a tamper indicating seal that will
       indicate if the container seal has been disturbed. The method of shipment, courier name,
       or other pertinent information should be listed in the Chain-of-Custody record.
•      The original  Chain-of-Custody record should accompany the samples. A copy of the
       record  should be retained by the individual or organization relinquishing the samples.
•      Discuss the custody objectives with the shipper to ensure that the objectives are met. For
       example, if the samples are sent by mail and the originator of the sample requires a record
       that the shipment was delivered, the package should be registered with return receipt
       requested.  If, on the other hand, the objective is to simply provide  a written record of the
       shipment, a certificate of mailing may be a less expensive and appropriate alternative.
•      The individual receiving the samples should sign and date the record.  The condition of
       the container and the tamper indicating seal should be noted on the Chain-of-Custody
       record. Any problems with the individual samples, such as a broken container, should be
       noted on the record.
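
Projects that track custody electronically sometimes mirror the paper record in a simple data
structure. The Python sketch below captures the items listed above; the field names are
illustrative and are not prescribed by MARSSIM.

    # A minimal, hypothetical representation of a Chain-of-Custody record.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CustodyTransfer:
        relinquished_by: str
        received_by: str
        date_time: str
        condition_notes: str = ""        # e.g., tamper-indicating seal intact, broken container

    @dataclass
    class ChainOfCustodyRecord:
        sample_numbers: List[str]        # sample designations in the shipping container
        analyses_requested: List[str]    # requested analysis for each sample
        shipment_method: str             # courier name or other shipping information
        transfers: List[CustodyTransfer] = field(default_factory=list)
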
7.8.3   Laboratory Tracking

When the samples are received by the laboratory they are prepared for radiochemical analyses.
This includes the fractionation of the sample into aliquots. The tracking and Chain-of-Custody
documentation within the laboratory become somewhat complicated due to the fact that several
portions of the original sample may exist in the laboratory at a given time. The use of a
computer-based Laboratory Information Management System (LIMS) can greatly assist in
tracking samples and fractions through the analytical system.

The minimal laboratory tracking process consists of the following:

•      transfer of custody on receipt of the samples (original Chain-of-Custody form is retained
       by the laboratory and submitted with the data package for the samples)
•      documentation of sample storage (location and amount)
•      documentation of removal and return of sample aliquots (amount, date and time, person
       removing or returning, and reason for removal)
•      transfer of the samples and residues to the receiving authority (usually the site from which
       they were taken)

The procedure for accomplishing the above varies from laboratory to laboratory, but the exact
details of performing the operations of sample tracking should be contained in a SOP.
7.9    Packaging and Transporting Samples

All samples being shipped for radiochemical analysis should be properly packaged and labeled
before transport offsite or within the site.  The primary concern is the possibility of spills, leaks,
or breakage of the sample containers. In addition to resulting in the loss of samples and cross-
contamination, the possible release of hazardous material poses a threat to the safety of persons
handling and transporting the package.

Suggestions on packaging and shipping radioactive environmental samples are listed below.

1)     Review NRC requirements (10 CFR part 71)  and Department of Transportation (DOT)
       requirements (49 CFR parts 170 through 189) for packaging and shipping radioactive
       environmental samples.

2)     Visually inspect each sample container for indication of leaks or defects in the sample
       container.


       a)      Liquid samples should be shipped in plastic containers, if possible, and the caps
              on the containers should be secured with tape.  One exception to the use of plastic
              bottles is samples collected for 3H analyses which may require glass containers.
       b)      Heavy plastic bags, with sealable tops, can be used to contain solid samples (e.g.,
              soil, sediment, air filters).  The zip-lock  should be secured with tape. Heavy
              plastic lawn bags can be used to contain vegetation samples.  The tops should be
              closed with a "tie" that is covered by tape to prevent it from loosening and
              slipping off.

3)     Wipe individual sample containers with a damp cloth or paper towel to remove any
       exterior contamination. The outer surfaces of containers holding samples collected in a
       contaminated area should be surveyed with a hand-held instrument(s), appropriate for the
        suspected type of radioactivity (β/γ or α).

4)     If glass sample containers are used, place sample containers inside individual plastic bags
       and seal in order to contain the sample in case of breakage.

5)     Use packing material (e.g., paper, styrofoam, "bubble wrap") to immobilize and isolate
       each sample container and buffer hard knocks on the outer container during shipping.
       This is especially important in cold weather when plastic containers may become brittle
       and water samples may freeze.

6)     When liquid samples are shipped, include a sufficient quantity of an  absorbent material
       (e.g., vermiculite) to absorb all liquid packed in the shipping container in case of
       breakage. This absorbent material may  suffice as the packing material described above in
       item 5.

7)     Include the original, signed and dated, Chain-of-Custody (COC) form, identifying each
       sample in the package. It is good practice to place the COC form in a plastic bag to
       prevent it from becoming wet or contaminated in case of a spill during shipment.  If
       possible, avoid having multiple packages of samples covered by a single COC form.

8)     Seal the package closed and apply COC tape in such a manner that it must be torn
       (broken) in order to open the package. The tape should carry the signature of the sender,
       and the date  and time, so that it cannot be removed and replaced undetected.

9)     Ice chests, constructed of metal or hard plastic, make excellent shipping containers for
       radioactive environmental samples.


If samples are sent offsite for analysis, the shipper is responsible for complying with all
applicable Federal, State, and local regulations. Applicable Federal regulations are briefly
addressed below.  Any State or local regulation will very likely reflect a Federal regulation.

7.9.1   U.S. Nuclear Regulatory Commission Regulations

NRC regulations for packaging, preparation, and shipment of licensed material are contained in
10 CFR Part 71: "Packaging and Transportation of Radioactive Material."

Samples containing low levels of radioactivity are exempted as set forth in § 71.10.  A licensee
is exempt from all requirements of Part 71 if the specific activity of the sample being shipped is
not greater than 74,000 Bq/kg (2,000 pCi/g).
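
The two values quoted above are the same quantity expressed in different units, as a one-line
check shows:

    # 2,000 pCi/g expressed in Bq/kg (1 pCi = 0.037 Bq; 1 kg = 1,000 g).
    print(round(2000 * 0.037 * 1000))   # 74000
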

Low Specific Activity Material (LSAM) is defined in § 71.4: "Definitions." Samples classified
as LSAM need only meet the requirements of the U.S. Department of Transportation (DOT),
discussed below, and the requirements of § 71.88: "Air transport of plutonium." Most
environmental samples will fall into this category.

7.9.2   U.S. Department of Transportation Regulations

The U.S. Department of Transportation provides regulations governing the transport of
hazardous materials under the Hazardous Materials Transportation Act of 1975 (88 Stat. 2156,
Public Law 93-633). Applicable requirements of the regulations are found in 49 CFR Parts 170
through 189.  Shippers of samples containing radioactivity should be aware of the current rules in
the following areas.

•     Accident Reporting - 49 CFR 171

•     Marking and Labeling Packages for  Shipment - 49 CFR 172

•     Packaging - 49 CFR 173

•     Placarding a Package - 49 CFR 172

•     Registration of Shipper/Carrier - 49  CFR 107

•     Shipper Required Training - 49 CFR 172

•     Shipping Papers & Emergency Information - 49 CFR 172

•     Transport by Air - 49 CFR 175



•      Transport by Rail - 49 CFR 174

•      Transport by Vessel - 49 CFR 176

•      Transport on Public Highway - 49 CFR 177

7.9.3   U.S. Postal Service Regulations

Any package containing radioactive materials is nonmailable if required to bear the U.S.
Department of Transportation's Radioactive White-I (49 CFR 172.436), Radioactive Yellow-II
(49 CFR 172.438), or Radioactive Yellow-III (49 CFR 172.440) label, or if it contains quantities
of radioactive material in excess of those authorized in Publication 6, Radioactive Material, of
the  U.S. Postal Service.
                8 INTERPRETATION OF SURVEY RESULTS
8.1    Introduction

This chapter discusses the interpretation of survey results, primarily those of the final status
survey.  Interpreting a survey's results is most straightforward when measurement data are
entirely higher or lower than the DCGLW. In such cases, the decision that a survey unit meets or
exceeds the release criterion requires little in terms of data analysis. However, formal statistical
tests provide a valuable tool when a survey unit's measurements are neither clearly above nor
entirely below the DCGLW. Nevertheless, the survey design always makes use of the statistical
tests to help ensure that the number of sampling points and the measurement sensitivity are
adequate, but not excessive, for the decision to be made.

Section 8.2 discusses the assessment of data quality.  The remainder of this chapter deals with
application of the  statistical tests used in the decision-making process, and the evaluation of the
test results. In addition, an example checklist is provided to assist the user in obtaining the
necessary information for interpreting the results of a final status survey.
8.2    Data Quality Assessment

Data Quality Assessment (DQA) is a scientific and statistical evaluation that determines if the
data are of the right type, quality, and quantity to support their intended use. An overview of the
DQA process appears in Section 2.3 and Appendix E. There are five steps in the DQA process:

•      Review the Data Quality Objectives (DQOs) and Survey Design

•      Conduct a Preliminary Data Review

•      Select the Statistical Test

•      Verify the Assumptions of the Statistical Test

•      Draw Conclusions from the Data

The effort expended during the DQA evaluation should be consistent with the graded approach
used in developing the survey design.  More information on DQA is located in Appendix E, and
the EPA Guidance Document QA/G-9 (EPA 1996a). Data should be verified and validated as
described in Section 9.3 prior to the DQA evaluation.
8.2.1   Review the Data Quality Objectives (DQOs) and Sampling Design

The first step in the DQA evaluation is a review of the DQO outputs to ensure that they are still
applicable. For example, if the data suggest the survey unit was misclassified as Class 3 instead
of Class 1, then the original DQOs should be redeveloped for the correct classification.

 The sampling design and data collection documentation should be reviewed for consistency with
the DQOs. For example, the review should check that the appropriate number of samples were
taken in the correct locations and that they were analyzed with measurement systems with
appropriate sensitivity.  Example checklists for different types of surveys are given in Chapter 5.

Determining that the sampling design provides adequate power is important to decision making,
particularly in cases where the levels of residual radioactivity are near the DCGLW. This can be
done both prospectively, during survey design to test the efficacy of a proposed design, and
retrospectively, during interpretation of survey results to determine that the objectives of the
design are met. The procedure for generating power curves for specific tests is discussed in
Appendix I.  Note that the accuracy of a prospective power curve depends on estimates of the
data variability, σ, and the number of measurements. After the data are analyzed, a sample
estimate of the data variability, namely the sample standard deviation (s) and the actual number
of valid measurements will be known.  The consequence of inadequate power is that a survey
unit that actually meets the release criterion has a higher probability of being incorrectly deemed
not to meet the release criterion.

8.2.2   Conduct a Preliminary Data Review

To learn about the structure of the data—identifying patterns, relationships, or potential
anomalies—one can review quality assurance (QA) and quality control (QC) reports, prepare
graphs of the data, and calculate basic statistical quantities.

8.2.2.1 Data Evaluation and  Conversion

Radiological survey data are usually obtained in units, such as the number of counts per unit
time, that have no intrinsic meaning relative to DCGLs.  For comparison of survey data to
DCGLs, the survey data from field and laboratory measurements are converted to DCGL units.
Further information on instrument calibration and data conversion is given in Section 6.2.7.
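Section 6.2.7 gives the details of this conversion. As a simple illustration only, a count-rate
measurement from the gas-flow proportional counter example in Section 8.2.6 might be converted to
surface-activity units along the following lines (written in Python; the efficiency shown is a
placeholder chosen so the numbers reproduce the 160 cpm example later in this chapter, not a
MARSSIM default):

     # Illustrative conversion of a count rate to dpm per 100 cm2 (placeholder values only).
     count_rate_cpm = 160.0     # observed count rate
     total_efficiency = 0.16    # instrument efficiency x surface efficiency (assumed value)
     probe_area_cm2 = 20.0      # active area of the detector

     dpm_per_100cm2 = count_rate_cpm / total_efficiency * (100.0 / probe_area_cm2)
     # 160 / 0.16 x 5 = 5,000 dpm per 100 cm2, which can then be compared directly with a
     # DCGLW expressed in the same units.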

Basic statistical quantities that should be calculated for the sample data set are the:

•      mean
•      standard deviation
•      median



       Example:

       Suppose the following 20 concentration values are from a survey unit:

       90.7,  83.5, 86.4, 88.5,  84.4, 74.2, 84.1,  87.6,  78.2, 77.6,
       86.4,  76.3, 86.5, 77.4,  90.3, 90.1, 79.1,  92.4,  75.5, 80.5.

       First, the average of the data (83.5) and the sample standard deviation (5.7) should be
       calculated.
       The average of the data can be compared to the reference area average and the DCGLw to
       get a preliminary indication of the survey unit status. Where remediation is inadequate,
       this comparison may readily reveal that a survey unit contains excess residual
       radioactivity — even before applying statistical tests. For example, if the average of the
       data exceeds the DCGLW and the radionuclide of interest does not appear in background,
       then the survey unit clearly does not meet the release criterion. On the other hand, if
       every measurement in the survey unit is below the DCGLW, the survey unit clearly meets
       the release criterion.1

       The value of the sample standard deviation is especially important. If too large compared
       to that assumed during the survey design, this may indicate an insufficient number of
       samples were collected to achieve the desired power of the statistical test.  Again,
       inadequate power can lead to unnecessary remediation.

       The median is the middle value of the data set when the number of data points is odd, and
       is the average of the two middle values when the number of data points is even.  Thus
       50% of the data points are above the median, and 50% are below the median. Large
       differences between the mean and the median would be an early indication of skewness in
       the data.  This would also be evident in a histogram of the data.  For the example data
       above, the median is 84.25 (i.e., (84.1 + 84.4)/2). The difference between the median and
       the mean (i.e., 84.25 - 83.5 = 0.75) is a small fraction of the sample standard deviation
       (i.e., 5.7). Thus, in this instance, the mean and median would not be considered
       significantly different.

       Examining the minimum, maximum, and range of the data may provide additional useful
       information. The minimum in this example  is 74.2 and the maximum is 92.4,  so the
       range is 92.4 - 74.2 = 18.2. This is only 3.2 standard deviations. Thus, the range is not
       unusually large. When there are 30 or fewer data points, values of the range much larger
       than about 4 to 5 standard deviations would be unusual.  For larger data sets the range
       might be wider.
   1  It can be verified that if every measurement is below the DCGLW, the conclusion from the statistical tests will
always be that the survey unit does not exceed the release criterion.
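These basic quantities are easily reproduced by software. The short Python sketch below
(illustrative only; the variable names are not part of MARSSIM) computes the mean, sample standard
deviation, median, and range for the 20 example values:

     # Illustrative calculation of the basic statistical quantities for the example data.
     import statistics

     data = [90.7, 83.5, 86.4, 88.5, 84.4, 74.2, 84.1, 87.6, 78.2, 77.6,
             86.4, 76.3, 86.5, 77.4, 90.3, 90.1, 79.1, 92.4, 75.5, 80.5]

     mean = statistics.mean(data)            # about 83.5
     std_dev = statistics.stdev(data)        # sample standard deviation, about 5.7
     median = statistics.median(data)        # (84.1 + 84.4)/2 = 84.25
     data_range = max(data) - min(data)      # 92.4 - 74.2 = 18.2

     print(mean, std_dev, median, data_range, data_range / std_dev)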



8.2.2.2  Graphical Data Review

At a minimum, a graphical data review should consist of a posting plot and a histogram.
Quantile plots are also useful diagnostic tools, particularly in the two-sample case, to compare
the survey unit and reference area. These are discussed in Appendix I, Section I.8.

A posting plot is simply a map of the survey unit with the data values entered at the measurement
locations. This potentially reveals heterogeneities in the data—especially possible patches of
elevated residual radioactivity. Even in a reference area, a posting plot can reveal spatial trends
in background data that might affect the results of the two-sample statistical tests.
If the data above were obtained using a
triangular grid in a rectangular survey unit,
the posting plot might resemble the display in
Figure 8.1. Figure 8. la shows no unusual
patterns in the data. Figure 8. Ib shows a
different plot of the same values, but with
individual results associated with different
locations within the survey unit. In this plot
there is an obvious trend towards smaller
values as one moves from left to right across
the survey unit. This trend is not apparent in
the simple initial listing of the data. The
trend may become more apparent if isopleths
are added to the posting plot.

If the posting plot reveals  systematic spatial
trends in the survey unit, the cause of the
trends would need to be investigated.  In
some cases, such trends could be  due to
residual radioactivity, but  may also be due to
inhomogeneities in the survey unit
background.  Other diagnostic tools for
examining spatial data trends may be found in
EPA Guidance Document QA/G-9 (EPA
1996a).  The use of geostatistical  tools to evaluate spatial data trends may also be
useful in some cases (EPA 1989a).

               [Figure 8.1  Examples of Posting Plots:  (a) the example data posted on a
               triangular grid, showing no unusual pattern;  (b) the same values arranged to
               show a decreasing trend from left to right across the survey unit]

A frequency plot (or a histogram) is a useful tool for examining the general shape of a data
distribution.  This plot is a bar chart of the number of data points within a certain range of values.
A frequency plot of the example data is shown in Figure 8.2. A simple method for generating a
rough frequency plot is the stem and leaf display discussed in Appendix I, Section I.7.

                          [Figure 8.2  Example of a Frequency Plot]

The
frequency plot will reveal any obvious departures from symmetry, such as skewness or
bimodality (two peaks), in the data distributions for the survey unit or reference area. The
presence of two peaks in the survey unit frequency plot may indicate the existence of isolated
areas of residual radioactivity.  In some cases it may be possible to determine an appropriate
background for the survey unit using this information. The interpretation of the data for this
purpose will generally be highly dependent on site-specific considerations and should only be
pursued after a consultation with the responsible regulatory agency.

The presence of two peaks in the background reference area or survey unit frequency plot may
indicate a mixture of background concentration distributions due  to different soil types,
construction materials, etc.  The greater variability in the data due to the presence of such a
mixture will reduce the power of the statistical tests to detect an adequately remediated survey
unit. These situations should be avoided whenever possible by carefully matching the
background reference areas to the survey units, and choosing survey units with homogeneous
backgrounds.

Skewness or other asymmetry can impact the accuracy of the statistical tests. A data
transformation (e.g., taking the logarithms of the data) can sometimes be used to make the
distribution more symmetric.  The statistical tests would then be performed on the transformed
data. When the underlying  data distribution is highly skewed, it is often because there are a few
high areas. Since the EMC is used to detect such measurements,  the difference between using
the median and the mean as a measure for the degree to which uniform residual radioactivity
remains in a survey unit tends to diminish in importance.
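The displays described above can be produced with any plotting package. As one hedged example, the
following Python sketch uses the matplotlib library (an assumed tool choice, not a MARSSIM
requirement) to draw a rough posting plot and frequency plot; the grid coordinates are hypothetical
and would normally come from the survey design:

     # Illustrative posting plot and frequency plot (matplotlib assumed to be available).
     import matplotlib.pyplot as plt

     # Hypothetical measurement coordinates (m) on an offset (triangular-style) grid,
     # paired with the example concentrations (Bq/kg).
     x = [0, 10, 20, 30, 40, 5, 15, 25, 35, 45, 0, 10, 20, 30, 40, 5, 15, 25, 35, 45]
     y = [0, 0, 0, 0, 0, 10, 10, 10, 10, 10, 20, 20, 20, 20, 20, 30, 30, 30, 30, 30]
     z = [90.7, 83.5, 86.4, 88.5, 84.4, 74.2, 84.1, 87.6, 78.2, 77.6,
          86.4, 76.3, 86.5, 77.4, 90.3, 90.1, 79.1, 92.4, 75.5, 80.5]

     fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

     # Posting plot: print each value at its measurement location.
     ax1.scatter(x, y, marker='+')
     for xi, yi, zi in zip(x, y, z):
         ax1.annotate(f"{zi:.1f}", (xi, yi), fontsize=8)
     ax1.set_title("Posting plot")

     # Frequency plot (histogram) of the same values.
     ax2.hist(z, bins=range(70, 100, 5))
     ax2.set_xlabel("Measured value")
     ax2.set_ylabel("Frequency")
     ax2.set_title("Frequency plot")

     plt.show()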


8.2.3   Select the Tests

An overview of the statistical considerations important for final status surveys appears in Section
2.5 and Appendix D. The most appropriate procedure for summarizing and analyzing the data is
chosen based on the preliminary data review. The parameter of interest is the mean
concentration in the survey unit. The nonparametric tests recommended in this manual, in their
most general form, are tests of the median.  If one assumes that the data are from a symmetric
distribution—where the median and the mean are effectively equal—these are also tests of the
mean.  If the assumption of symmetry is violated, then nonparametric tests of the median
approximately test the mean. Computer simulations (e.g., Hardin and Gilbert, 1993) have shown
that the approximation is a good one.  That is, the correct decision will be made about whether or
not the mean concentration exceeds the DCGL, even when the data come from a skewed
distribution. In this regard, the nonparametric tests are found to be correct more often than the
commonly used Student's t test.  The robust performance of the Sign and WRS tests over a wide
range of conditions is the reason that they are recommended in this manual.

When a given set of assumptions is true, a parametric test designed for exactly that set of
conditions will have the highest power. For example, if the data are from a normal distribution,
the Student's t test will have higher power than the nonparametric tests.  It should be noted that
for large enough sample sizes (e.g., large number of measurements), the Student's t test is not a
great deal  more powerful than the nonparametric tests.  On the other hand, when the assumption
of normality is violated, the nonparametric tests can be very much more powerful than the t test.
Therefore, any statistical test may be used provided that the data are consistent with the
assumptions underlying their use. When these assumptions are violated, the prudent approach is
to use the  nonparametric tests which generally involve fewer assumptions than their parametric
equivalents.

The one-sample statistical test (Sign test) described in Section 5.5.2.3  should only be used if the
contaminant is not present in background and radionuclide-specific measurements are made.  The
one-sample test may also be used if the contaminant is present at such a small fraction of the
DCGLW value as to be considered insignificant. In this  case, background concentrations of the
radionuclide are included with the residual radioactivity (i.e., the entire amount is attributed to
facility operations).  Thus, the total concentration of the radionuclide is compared to the release
criterion.  This option should only be used if one expects that ignoring the background
concentration will not affect the outcome of the statistical tests. The advantage of ignoring a
small background contribution is that no reference area  is needed.  This can simplify the final
status survey considerably.

The one-sample Sign test (Section 8.3.1) evaluates whether the median of the data is above or
below the  DCGLW. If the data distribution is symmetric, the median is equal to the mean. In
cases where the data are severely skewed, the mean may be above the DCGLW, while the median
is below the DCGLW.  In such cases, the survey unit does not meet the release criterion regardless
of the result of the statistical tests. On the other hand, if the largest measurement is below the
DCGLW, the Sign test will always show that the survey unit meets the release criterion.

For final status surveys, the two-sample statistical test (Wilcoxon Rank Sum test, discussed in
Section 5.5.2.2) should be used when the radionuclide of concern appears in background or if
measurements are used that are not radionuclide specific. The two-sample Wilcoxon Rank Sum
(WRS) test (Section 8.4.1) assumes the reference area and survey unit data distributions are
similar except for a possible shift in the medians.  When the data are severely skewed, the value
for the mean difference may be above the DCGLW, while the median difference is below the
DCGLW. In such cases, the survey unit does not meet the release criterion regardless of the result
of the statistical test.  On the other hand, if the difference between the largest survey unit
measurement and the smallest reference area measurement is less than the DCGLW, the WRS test
will always show that the survey unit meets the release criterion.

8.2.4   Verify the Assumptions of the Tests

An evaluation to determine that the data are consistent with the underlying assumptions made for
the statistical procedures helps to validate the use of a test.  One may also determine that certain
departures from these assumptions are acceptable when given the actual data and other
information about the study. The nonparametric tests described in this chapter assume that the
data from the reference area or survey unit consist of independent samples from each
distribution.

Spatial dependencies that potentially affect the assumptions can be assessed using posting plots
(Section 8.2.2.2). More sophisticated tools for determining the extent of spatial dependencies are
also available (e.g., EPA QA/G-9).  These methods tend to be complex and are best used with
guidance from a professional statistician.

Asymmetry in the data can be diagnosed with a stem and leaf display, a histogram, or a Quantile
plot. As discussed in the previous section, data transformations can sometimes be used to
minimize the effects of asymmetry.

One of the primary advantages of the nonparametric tests used in this report is that they involve
fewer assumptions about the data than their parametric counterparts. If parametric tests are used,
(e.g., Student's t test), then any additional assumptions made in using them should be verified
(e.g., testing for normality). These issues are discussed in  detail in EPA QA/G-9 (EPA 1996a).


One of the more important assumptions made in the survey design described in Chapter 5 is that
the sample sizes determined for the tests are sufficient to achieve the data quality objectives set
for the Type I (α) and Type II (β) error rates.  Verification of the power of the tests (1-β) to detect
adequate remediation may be of particular interest. Methods for assessing the power are
discussed in Appendix I.9.  If the hypothesis that the survey unit residual radioactivity exceeds
the release criterion is accepted, there should be reasonable assurance that the test is equally
effective in determining that a survey unit has residual contamination less than the DCGLW.
Otherwise, unnecessary remediation may result. For this reason, it is better to plan the surveys
cautiously—even to the point of:

•      overestimating the potential data variability
•      taking too many samples
•      overestimating minimum detectable concentrations  (MDCs)

If one is unable to show that the DQOs were met with reasonable assurance, a resurvey may be
needed. Examples of assumptions and possible methods for their assessment are summarized in
Table 8.1.

           Table 8.1 Methods for Checking the Assumptions of Statistical Tests
 Assumption                      Diagnostic
 -----------------------------   -------------------------------
 Spatial Independence            Posting Plot
 Symmetry                        Histogram, Quantile Plot
 Data Variance                   Sample Standard Deviation
 Power is Adequate               Retrospective Power Chart
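Appendix I.9 describes the recommended retrospective power evaluation. As a rough cross-check only,
the power of the Sign test at an assumed true concentration can also be approximated by simulation,
as in the following sketch (all inputs, including the normal-distribution assumption and the
commented-out parameter values, are illustrative; the critical value k must still come from
Table I.3):

     # Illustrative Monte Carlo approximation of Sign test power (not the Appendix I.9 method).
     import random

     def sign_test_power(true_mean, sigma, dcgl_w, n, critical_k, trials=10000):
         rejections = 0
         for _ in range(trials):
             data = [random.gauss(true_mean, sigma) for _ in range(n)]
             s_plus = sum(1 for x in data if x < dcgl_w)   # positive differences DCGLW - x
             if s_plus > critical_k:    # reject H0 (survey unit exceeds the release criterion)
                 rejections += 1
         return rejections / trials

     # Example call with hypothetical inputs (critical_k is a placeholder; use Table I.3):
     # print(sign_test_power(true_mean=128, sigma=3.8, dcgl_w=140, n=20, critical_k=15))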
8.2.5   Draw Conclusions from the Data

The types of measurements that can be made in a survey unit are 1) direct measurements at
discrete locations, 2) samples collected at discrete locations, and 3) scans.  The statistical tests
are only applied to measurements made at discrete locations.  Specific details for conducting the
statistical tests are given in Sections 8.3 and 8.4.  When the data clearly show that a survey unit
meets or exceeds the release criterion, the result is often obvious without performing the formal
statistical analysis. Table 8.2 describes examples of circumstances leading to specific
conclusions based on a simple examination of the data.
                          Table 8.2  Summary of Statistical Tests
 Radionuclide not in background and radionuclide-specific measurements made:

   Survey Result                                              Conclusion
   All measurements less than DCGLW                           Survey unit meets release criterion
   Average greater than DCGLW                                 Survey unit does not meet release criterion
   Any measurement greater than DCGLW and the                 Conduct Sign test and elevated
     average less than DCGLW                                    measurement comparison

 Radionuclide in background or radionuclide non-specific (gross) measurements made:

   Survey Result                                              Conclusion
   Difference between largest survey unit measurement         Survey unit meets release criterion
     and smallest reference area measurement is less
     than DCGLW
   Difference of survey unit average and reference area       Survey unit does not meet release criterion
     average is greater than DCGLW
   Difference between any survey unit measurement and         Conduct WRS test and elevated
     any reference area measurement greater than DCGLW          measurement comparison
     and the difference of survey unit average and
     reference area average is less than DCGLW
Both the measurements at discrete locations and the scans are subject to the elevated
measurement comparison (EMC). The result of the EMC is not conclusive as to whether the
survey unit meets or exceeds the release criterion, but is a flag or trigger for further investigation.
The investigation may involve taking further measurements to determine that the area and level
of the elevated residual radioactivity are such that the resulting dose or risk meets the release
criterion.2 The investigation should also provide adequate assurance, using the DQO process,
that there are no other undiscovered areas of elevated residual radioactivity in the survey unit that
might otherwise result in a dose or risk exceeding the release criterion. In some cases, this may
lead to re-classifying all or part of a survey unit—unless the results of the investigation indicate
that reclassification is not necessary.  The investigation level appropriate for each class of survey
unit and type of measurement is shown in Table 5.8 and is described in Section 5.5.2.6.
   2 Rather than, or in addition to, taking further measurements, the investigation may involve assessing the
adequacy of the exposure pathway model used to obtain the DCGLs and area factors, and the consistency of the
results obtained with the Historical Site Assessment and the scoping, characterization and remedial action support
surveys.
8.2.6   Example

To illustrate the data interpretation process, consider an example facility with 14 survey units
consisting of interior concrete surfaces, one interior survey unit with drywall surfaces, and two
exterior survey units.  The contaminant of concern is 60Co.  The interior surfaces were measured
with a gas-flow proportional counter (see Appendix H) with an active surface area of 20 cm2 to
determine total beta-gamma activity. Because these measurements are not radionuclide specific,
appropriate reference areas were chosen for comparison.  The exterior soil was measured with a
germanium spectrometer to provide  radionuclide-specific results. A reference area is not needed
because 60Co does not have a significant background in soil.

The exterior Class 3 survey unit incorporates areas that are not expected to contain residual
radioactivity. The exterior Class 2 survey unit is similar to the Class 3 survey unit, but is
expected to contain residual radioactivity below the DCGLW. The Class 1 Interior Concrete
survey units are expected to contain  small areas of elevated activity that may or may not exceed
the DCGLW. The Class 2 Interior Drywall survey unit is similar to the Class 1 Interior Concrete
survey unit, but the drywall is expected to have a lower background, less measurement
variability, and a more uniform distribution of contamination.  The Class 2 survey unit is not
expected to contain areas of activity above the DCGLW. Section 8.3 describes the Sign test used
to evaluate the survey units where the contaminant is not present in background.  Section 8.4
describes the WRS test used to evaluate the survey units where the contaminant is present in
background. Section 8.5 discusses the evaluation of the results of the statistical tests and the
decision regarding compliance with  the release criterion.  The survey design parameters and
DQOs developed for these survey units are summarized in Table 8.3.

           Table 8.3 Final Status Survey Parameters for Example Survey Units
                                                          Estimated Standard
                             DQO                          Deviation, σ
 Survey Unit      Type       α       β      DCGLW         Survey          Reference        Test/Section

 Interior         Class 1    .05     .05    5000 dpm      625 dpm         220 dpm          WRS/App. A
 Concrete                                   per 100 cm2   per 100 cm2     per 100 cm2

 Interior         Class 2    .025    .05    5000 dpm      200 dpm         200 dpm          WRS/8.4.3
 Drywall                                    per 100 cm2   per 100 cm2     per 100 cm2

 Exterior Lawn    Class 2    .025    .025   140 Bq/kg     3.8 Bq/kg       N/A              Sign/8.3.3

 Exterior Lawn    Class 3    .025    .01    140 Bq/kg     3.8 Bq/kg       N/A              Sign/8.3.4
8.3    Contaminant Not Present in Background

The statistical test discussed in this section is used to compare each survey unit directly with the
applicable release criterion. A reference area is not included because the measurement technique
is radionuclide-specific and the radionuclide of concern is not present in background (see Section
8.2.6). In this case the contaminant levels are compared directly with the DCGLW. The method
in this section should only be used if the contaminant is not present in background or is present at
such a small fraction of the DCGLW value as to be considered insignificant.  In addition, one-
sample tests are applicable only if radionuclide-specific measurements are made to determine the
concentrations.  Otherwise, the method in Section 8.4 is recommended.

Reference areas and reference samples are not needed when there is sufficient information to
indicate there is essentially no background concentration for the radionuclide being considered.
With only a single set of survey unit samples, the statistical test used here is called a one-sample
test.  See Section 5.5 for further information appropriate to following the example and discussion
presented here.

8.3.1   One-Sample Statistical Test

The Sign test is designed to detect uniform failure of remedial action throughout the survey unit.
This test does not assume that the data follow any particular distribution, such as normal or
log-normal. In addition to the Sign Test, the DCGLEMC (see Section 5.5.2.4) is compared to each
measurement to ensure none exceeds the DCGLEMC. If a measurement exceeds this DCGL, then
additional investigation is recommended, at least locally, to determine the actual areal  extent of
the elevated concentration.

The hypothesis tested by the Sign test is

       Null Hypothesis
       H0:  The median  concentration of residual radioactivity  in the survey unit is greater than
       the DCGLW

       versus

       Alternative Hypothesis
       Ha:  The median concentration of residual radioactivity  in the survey unit is less than the
       DCGLW

The null hypothesis is assumed to be true unless the statistical test indicates that it should be
rejected in favor of the alternative.  The null hypothesis states that the probability of a
measurement less than the DCGLW is less than one-half, i.e., the 50th percentile (or median) is
greater than the DCGLW. Note that some individual survey unit measurements may exceed the
DCGLW even when the survey unit as a whole meets the release criterion. In fact, a survey unit
average that is close to the DCGLW might have almost half of its individual measurements
greater than the DCGLW. Such a survey unit may still not exceed the release criterion.

The assumption is that the survey unit measurements are independent random samples from a
symmetric distribution. If the distribution of measurements is symmetric, the median and the
mean are the same.

The hypothesis specifies a release criterion in terms of a DCGLW. The test should have sufficient
power (1-β, as specified in the DQOs) to detect residual radioactivity concentrations at the Lower
Boundary of the Gray Region (LBGR). If σ is the standard deviation of the measurements in the
survey unit, then Δ/σ expresses the size of the shift (i.e., Δ = DCGLW - LBGR) as the number of
standard deviations that would be considered "large" for the distribution of measurements in the
survey unit.  The procedure for determining Δ/σ is given in Section 5.5.2.3.

8.3.2   Applying the Sign Test

The Sign test is applied as outlined in the following five steps, and further illustrated by the
examples in Sections 8.3.3 and 8.3.4.

1.      List the survey unit measurements, Xi, i = 1, 2, 3, ..., N.

2.      Subtract each measurement, Xi, from the DCGLW to obtain the differences:
               Di = DCGLW - Xi,  i = 1, 2, 3, ..., N.

3.      Discard each difference that is exactly zero and reduce the sample size, N, by the number
       of such zero measurements.

4.      Count the number of positive differences. The result is the test statistic S+.  Note that a
       positive difference corresponds to a measurement below the DCGLW and contributes
       evidence that the survey unit meets the release criterion.

5.      Large values of S+ indicate that the null hypothesis (that the survey unit exceeds the
        release criterion) is false. The value of S+ is compared to the critical values in Table I.3.
       If S+ is greater than the critical value, k, in that table, the null hypothesis is rejected.
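The five steps above amount to a simple counting calculation. A minimal Python sketch, assuming the
measurements are already in the same units as the DCGLW, is shown below; the critical value k in
Step 5 must still be taken from Table I.3 for the appropriate N and α:

     # Illustrative Sign test statistic (Steps 1-4 of Section 8.3.2).
     def sign_test_statistic(measurements, dcgl_w):
         differences = [dcgl_w - x for x in measurements]    # Step 2
         nonzero = [d for d in differences if d != 0]        # Step 3: drop zero differences
         n = len(nonzero)                                    # reduced sample size
         s_plus = sum(1 for d in nonzero if d > 0)           # Step 4: count positive differences
         return s_plus, n
     # Step 5: reject the null hypothesis if s_plus exceeds the critical value k from Table I.3.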

8.3.3   Sign Test Example: Class 2 Exterior Soil Survey Unit

For the Class 2 Exterior Soil  survey unit, the one-sample nonparametric  statistical test is
appropriate since the radionuclide of concern does not appear in background and radionuclide-
specific measurements were made.



Table 8.3 shows that the DQOs for this survey unit include α = 0.025 and β = 0.025. The
DCGLW is 140 Bq/kg (3.8 pCi/g) and the estimated standard deviation of the measurements is
σ = 3.8 Bq/kg (0.10 pCi/g).  Since the estimated standard deviation is much smaller than the
DCGLW, the LBGR should be set so that Δ/σ is about 3.

If     Δ/σ   = (DCGLW - LBGR)/σ
             = 3
then   LBGR = DCGLW - 3σ
             = 140 - (3 x 3.8)
             = 128 Bq/kg (3.5 pCi/g).

Table 5.5 indicates the number of measurements estimated for the Sign Test, N, is 20 (α = 0.025,
β = 0.025, and Δ/σ = 3). (Table I.2a in Appendix I also lists the number of measurements
estimated for the Sign test.) This survey unit is Class 2, so the 20 measurements needed were
made on a random-start triangular grid. When laying out the grid, 22 measurement locations
were identified.

The 22 measurements taken on the exterior lawn Class 2 survey unit are shown in the first
column of Table 8.4.  The mean of these data is 129 Bq/kg (3.5 pCi/g) and the standard deviation
is 11 Bq/kg (0.30 pCi/g).  Since the number of measurements is even, the median of the data is
the average of the two middle values, (126+128)/2 = 127 Bq/kg (3.4 pCi/g). A Quantile Plot of
the data is shown in Appendix I.8, Figure I.3.

There are five measurements that exceed the DCGLW value of 140 Bq/kg: 142, 143, 145, 148,
and 148. However, none exceed the mean of the data plus three standard deviations:
127 + (3 x 11) = 160 Bq/kg (4.3 pCi/g).  Thus, these values appear to reflect the overall
variability of the concentration measurements rather than to indicate an area of elevated
activity—provided that these measurements were scattered through the survey unit.  However, if
a posting plot demonstrates that the locations of these measurements are grouped together, then
that portion of the survey unit containing these locations merits further investigation.

The middle column of Table 8.4 contains the differences, DCGLW - Data, and the last column
contains the signs of the differences.  The bottom row shows the number of measurements with
positive differences, which is the test statistic S+. In this case, S+ = 17.

The value of S+ is compared to the appropriate critical value in Table I.3.  In this case, for N = 22
and α = 0.025, the critical value is 16.  Since S+ = 17 exceeds this value, the null hypothesis that
the survey unit exceeds the release criterion is rejected.
           Table 8.4 Example Sign Analysis: Class 2 Exterior Soil Survey Unit
     Data             DCGLW - Data          Sign
     (Bq/kg)          (Bq/kg)

     121              19                    1
     143              -3                    -1
     145              -5                    -1
     112              28                    1
     125              15                    1
     132              8                     1
     122              18                    1
     114              26                    1
     123              17                    1
     148              -8                    -1
     115              25                    1
     113              27                    1
     126              14                    1
     134              6                     1
     148              -8                    -1
     130              10                    1
     119              21                    1
     136              4                     1
     128              12                    1
     125              15                    1
     142              -2                    -1
     129              11                    1

     Number of positive differences S+ = 17
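As a quick check of the arithmetic in Table 8.4 (a sketch only, using the data values listed above):

     # Class 2 exterior soil data from Table 8.4 (Bq/kg).
     data = [121, 143, 145, 112, 125, 132, 122, 114, 123, 148, 115,
             113, 126, 134, 148, 130, 119, 136, 128, 125, 142, 129]
     dcgl_w = 140

     s_plus = sum(1 for x in data if (dcgl_w - x) > 0)
     print(s_plus)    # 17, which exceeds the critical value of 16 from Table I.3
                      # (N = 22, alpha = 0.025), so the null hypothesis is rejected.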
8.3.4   Sign Test Example: Class 3 Exterior Soil Survey Unit

For the Class 3 exterior soil survey unit, the one-sample nonparametric statistical test is again
appropriate since the radionuclide of concern does not appear in background and radionuclide-
specific measurements were made.

Table 8.3 shows that the DQOs for this survey unit include α = 0.025 and β = 0.01. The DCGLW
is 140 Bq/kg (3.8 pCi/g) and the estimated standard deviation of the measurements is σ = 3.8
Bq/kg (0.10 pCi/g).  Since the estimated standard deviation is much smaller than the DCGLW, the
lower bound for the gray region should be set so that Δ/σ is about 3.


If     Δ/σ    = (DCGLW - LBGR)/σ
               = 3
then   LBGR = DCGLW - 3σ
               = 140 - (3 x 4)
               = 128 Bq/kg (3.5 pCi/g).

Table 5.5 indicates that the sample size estimated for the Sign Test, N, is 23 (α = 0.025, β = 0.01,
and Δ/σ = 3). This survey unit is Class 3, so the measurements were made at random locations
within the survey unit.

The 23 measurements taken on the exterior lawn are shown in the first column of Table 8.5.
Notice that some of these measurements are negative (-0.37 in cell A6).  This might occur if an
analysis background (e.g., the Compton continuum under a spectrum peak) is subtracted to
obtain the net concentration value. The data analysis is both easier and more accurate when
numerical values are reported as obtained rather than reporting the results as "less than" or not
detected. The mean of these data is 2.1 Bq/kg (0.057 pCi/g) and the standard deviation is 3.3
Bq/kg (0.089 pCi/g). None of the data exceed 2.1 + (3 x 3.3) = 12.0 Bq/kg (0.32 pCi/g). Since
N is odd, the median is the middle (12th highest) value, namely 2.6 Bq/kg (0.070 pCi/g).

An initial review of the data reveals that every data point is below the DCGLW, so the survey unit
meets the release criterion specified in Table 8.3. For purely illustrative purposes, the Sign test
analysis is performed. The middle column of Table 8.5 contains the quantity DCGLW - Data.
Since every data point is below the DCGLW, the sign of DCGLW - Data is always positive. The
number of positive differences is equal to the number of measurements, N, and so the Sign test
statistic S+ is 23.  The null hypothesis will always be rejected at the maximum value of S+
(which in this case is 23) and the survey unit passes. Thus, the application of the Sign test in
such cases requires no calculations and one need not consult a table for a critical value.  If the
survey is properly designed, the critical value must always be less than N.
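A one-line check against the data in Table 8.5 makes the same point (a sketch using the values
listed there):

     # Class 3 exterior soil data from Table 8.5 (Bq/kg).
     data = [3.0, 3.0, 1.9, 0.37, -0.37, 6.3, -3.7, 2.6, 3.0, -4.1, 3.0, 3.7,
             2.6, 4.4, -3.3, 2.1, 6.3, 4.4, -0.37, 4.1, -1.1, 1.1, 9.3]
     dcgl_w = 140

     s_plus = sum(1 for x in data if (dcgl_w - x) > 0)
     print(max(data) < dcgl_w, s_plus)    # True, 23: every measurement is below the DCGLW,
                                          # so S+ equals N and the survey unit passes.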

Passing a survey unit without making a single calculation may seem an unconventional approach.
However, the key is in the survey design which is intended to ensure enough measurements are
made to satisfy the DQOs.  As in the previous example, after the data are collected the
conclusions and power of the test can be checked by constructing a retrospective power curve as
outlined in Appendix I, Section I.9.

One final consideration remains regarding the survey unit classification:  "Was any definite
amount of residual radioactivity found in the survey unit?" This will depend on the MDC of the
measurement method. Generally the MDC is at least 3 or 4 times the estimated measurement
standard deviation. In the present case, the largest observation, 9.3 Bq/kg (0.25 pCi/g), is less
than three times the estimated measurement standard deviation of 3.8 Bq/kg (0.10 pCi/g). Thus,
it is unlikely that any of the measurements could be considered indicative of positive
contamination.  This means that the Class 3 survey unit classification was appropriate.

            Table 8.5 Sign Test Example Data for Class 3 Exterior Survey Unit
               A                      B                          C
    1          Data (Bq/kg)           DCGLW - Data (Bq/kg)       Sign
    2          3.0                    137.0                      1
    3          3.0                    137.0                      1
    4          1.9                    138.1                      1
    5          0.37                   139.6                      1
    6          -0.37                  140.4                      1
    7          6.3                    133.7                      1
    8          -3.7                   143.7                      1
    9          2.6                    137.4                      1
   10          3.0                    137.0                      1
   11          -4.1                   144.1                      1
   12          3.0                    137.0                      1
   13          3.7                    136.3                      1
   14          2.6                    137.4                      1
   15          4.4                    135.6                      1
   16          -3.3                   143.3                      1
   17          2.1                    137.9                      1
   18          6.3                    133.7                      1
   19          4.4                    135.6                      1
   20          -0.37                  140.4                      1
   21          4.1                    135.9                      1
   22          -1.1                   141.1                      1
   23          1.1                    138.9                      1
   24          9.3                    130.7                      1
   25          Number of positive differences S+ =               23
If one determines that residual radioactivity is definitely present, this would indicate that the
survey unit was initially mis-classified. Ordinarily, MARSSIM recommends a resurvey using a
Class 1 or Class 2 design.  If one determines that the survey unit is a Class 2, a resurvey might be
avoided if the survey unit does not exceed the maximum size for such a classification. In this
case, the only difference in survey design would be whether the measurements were obtained on
a random or on a triangular grid. Provided that the initial survey's scanning methodology is
sufficiently sensitive to detect areas at DCGLW without the use of an area factor, this difference
in the survey grids alone would not affect the outcome of the statistical analysis. Therefore, if the
above conditions were met, a resurvey might not be necessary.
8.4    Contaminant Present in Background

The statistical tests discussed in this section will be used to compare each survey unit with an
appropriately chosen, site-specific reference area.  Each reference area should be selected on the
basis of its similarity to the survey unit, as discussed in Section 4.5.

8.4.1   Two-Sample Statistical Test

The comparison of measurements from the reference area and survey unit is made using the
Wilcoxon Rank Sum (WRS) test (also called the Mann-Whitney test). The WRS test should be
conducted for each survey unit. In addition, the EMC is performed against each measurement to
ensure that it does not exceed a specified investigation level.  If any measurement in the
remediated survey unit exceeds the specified investigation level, then additional investigation is
recommended, at least locally, regardless of the outcome of the WRS test.

The WRS test is most effective when residual radioactivity is uniformly present throughout a
survey unit.  The test is designed to detect whether or not this activity exceeds the DCGLW. The
advantage of the nonparametric WRS test is that it does not assume that the data are normally or
log-normally distributed. The WRS test also allows for "less than" measurements to be present
in the reference area and the survey units. As a general rule, the WRS test can be used with up to
40 percent "less than" measurements in either the  reference area or the survey unit. However, the
use of "less than" values in data reporting is not recommended as discussed in Section 2.3.5.
When possible, report the actual result of a measurement together with its uncertainty.

The hypothesis tested by the WRS test is

       Null Hypothesis
       H0: The median  concentration in the survey unit exceeds that in  the reference area by
       more than the DCGLW

       versus

       Alternative Hypothesis
       Ha: The median  concentration in the survey unit exceeds that in  the reference area by less
       than the DCGLW

The null hypothesis is assumed to be true unless the statistical test indicates that it should be
rejected in favor of the alternative.  One assumes that any difference between the reference area
and survey unit concentration distributions is due to a shift in the  survey unit concentrations to
higher values (i.e.,  due to the presence of residual  radioactivity in addition to background).
Note that some or all of the survey unit measurements may be larger than some reference area
measurements, while still meeting the release criterion. Indeed, some survey unit measurements
may exceed some reference area measurements by more than the DCGLW. The result of the
hypothesis test determines whether or not the survey unit as a whole is deemed to meet the
release criterion.  The EMC is used to screen individual measurements.

Two assumptions underlying this test are: 1) samples from the reference area and survey unit are
independent, identically distributed random samples, and 2) each measurement is independent of
every other measurement, regardless of the set of samples from which it came.

8.4.2  Applying the Wilcoxon Rank Sum Test

The WRS test is applied as outlined in the following six steps and further illustrated by the
examples in Section 8.4.3 and Appendix A.

1.      Obtain the adjusted reference area measurements, Zi, by adding the DCGLW to each
        reference area measurement, Xi:  Zi = Xi + DCGLW

2.      The m adjusted reference sample measurements, Zi, from the reference area and the n
        sample measurements, Yi, from the survey unit are pooled and ranked in order of
       increasing size from 1 to N, where N = m+n.

3.      If several measurements are tied (i.e., have the same value), they are all assigned the
       average rank of that group of tied measurements.

4.      If there are t "less than" values, they are all given the average of the ranks from 1 to t.
       Therefore, they are all assigned the rank t(t+l)/(2t)  = (t+l)/2, which is the average of the
       first t integers.  If there is more than one detection limit, all observations below the largest
       detection limit should be treated as "less than" values.3

5.      Sum the ranks of the adjusted measurements from the reference area, Wr. Note that since
        the sum of the first N integers is N(N+1)/2, one can equivalently sum the ranks of the
        measurements from the survey unit, Ws, and compute Wr = N(N+1)/2 - Ws.

6.      Compare Wr with the critical value given in Table I.4 for the appropriate values of n, m,
        and α. If Wr is greater than the tabulated value, reject the hypothesis that the survey unit
       exceeds the release criterion.
   3 If more than 40 percent of the data from either the reference area or survey unit are "less than," the WRS test
cannot be used. Such a large proportion of non-detects suggests that the DQO process be re-visited for this survey to
determine if the survey unit was properly classified or the appropriate measurement method was used. As stated
previously, the use of "less than" values in data reporting is not recommended. Wherever possible, the actual result
of a measurement, together with its uncertainty, should be reported.
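The ranking steps above can also be scripted. The following Python sketch (illustrative only; it
does not implement the "less than" handling of Step 4) adjusts the reference area data, assigns
midranks to ties, and returns the sum of the reference area ranks, Wr, for comparison with the
critical value from Table I.4:

     # Illustrative computation of the WRS test statistic Wr (Section 8.4.2, Steps 1-3 and 5).
     def wrs_statistic(reference, survey, dcgl_w):
         adjusted_ref = [x + dcgl_w for x in reference]      # Step 1: adjust reference data
         pooled = adjusted_ref + list(survey)                # Step 2: pool the two data sets

         # Steps 2-3: rank the pooled data, giving tied values the average of their ranks.
         order = sorted(range(len(pooled)), key=lambda i: pooled[i])
         ranks = [0.0] * len(pooled)
         i = 0
         while i < len(order):
             j = i
             while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
                 j += 1
             average_rank = (i + j) / 2 + 1                  # ranks are 1-based
             for k in range(i, j + 1):
                 ranks[order[k]] = average_rank
             i = j + 1

         # Step 5: Wr is the sum of the ranks of the adjusted reference area measurements.
         return sum(ranks[:len(adjusted_ref)])
     # Step 6: compare the returned Wr with the critical value from Table I.4 for n, m, and alpha.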

8.4.3   Wilcoxon Rank Sum Test Example: Class 2 Interior Drywall Survey Unit

In this example, the gas-flow proportional counter measures total beta-gamma activity (see
Appendix H) and the measurements are not radionuclide specific.  The two-sample
nonparametric test is appropriate for the Class 2 interior drywall survey unit because gross beta-
gamma activity contributes to background even though the radionuclide of interest does not
appear in background.

Table 8.3 shows that the DQOs for this survey unit include α = 0.025 and β = 0.05. The DCGLW
is 8,300 Bq/m2 (5,000 dpm per 100 cm2) and the estimated standard deviation of the
measurements is about σ = 1,040 Bq/m2 (625 dpm per 100 cm2). The estimated standard
deviation is about one-eighth of the DCGLW. With this level of precision, the width of the gray
region can be made fairly narrow. As noted earlier, sample sizes do not decrease very much once
Δ/σ exceeds 3 or 4. In this example, the lower bound for the gray region was set so that Δ/σ is
about 4.

If     Δ/σ    =  (DCGLW - LBGR)/σ
               =  4
then   LBGR =  DCGLW - 4σ
               =  8,300 - (4 x 1,040)
               =  4,100 Bq/m2 (2,500 dpm per 100 cm2).

In Table 5.3, one finds that the number of measurements estimated for the WRS test is 11 in each
survey unit and 11 in each reference area (α = 0.025, β = 0.05, and Δ/σ = 4). (Table I.2b in
Appendix I also lists the number of measurements estimated for the WRS test.)  This survey unit
was classified as Class 2, so the 11 measurements needed in the survey unit and the 11
measurements needed in the reference area were made using a random-start triangular grid.4

Table 8.6 lists the data obtained from the gas-flow proportional counter in units of counts per
minute. A reading of 160 cpm with this instrument corresponds to the DCGLW of 8,300 Bq/m2
(5,000 dpm per 100 cm2). Column A lists the measurement results as they were obtained. The
average and standard deviation of the reference area measurements are  44 and 4.4 cpm,
respectively.  The average and standard deviation of the survey unit measurements are 98  and 5.3
cpm, respectively.
   4 A random start systematic grid is used in Class 2 and 3 survey units primarily to limit the size of any potential
elevated areas. Since areas of elevated activity are not an issue in the reference areas, the measurement locations
can be either random or on a random start systematic grid (see Section 5.5.2.5).

              Table 8.6 WRS Test for Class 2 Interior Drywall Survey Unit

              A            B          C              D           E
              Data                    Adjusted                    Reference Area
     Row      (cpm)        Area       Data           Ranks       Ranks

      2       49           R          209            22          22
      3       35           R          195            12          12
      4       45           R          205            17.5        17.5
      5       45           R          205            17.5        17.5
      6       41           R          201            14          14
      7       44           R          204            16          16
      8       48           R          208            21          21
      9       37           R          197            13          13
     10       46           R          206            19          19
     11       42           R          202            15          15
     12       47           R          207            20          20
     13       104          S          104            9.5         0
     14       94           S          94             4           0
     15       98           S          98             6           0
     16       99           S          99             7           0
     17       90           S          90             1           0
     18       104          S          104            9.5         0
     19       95           S          95             5           0
     20       105          S          105            11          0
     21       93           S          93             3           0
     22       101          S          101            8           0
     23       92           S          92             2           0

     24                               Sum =          253         187
In column B, the code "R" denotes a reference area measurement, and "S" denotes a survey unit
measurement.  Column C contains the Adjusted Data. The Adjusted Data are obtained by adding
the DCGLW to the reference area measurements (see Section 8.4.2, Step 1). The ranks of the
adjusted data appear in Column D. They range from  1 to 22, since there is a total of 11+11
measurements (see Section 8.4.2, Step 2).

Note that there were two cases of measurements tied with the same value, at 104 and 205. Each
tied measurement is always assigned the average of the ranks. Therefore, both measurements at
104 are assigned rank (9+10)/2 = 9.5 (see Section 8.4.2, Step 3). Also note that the sum of all
of the ranks is still 22(22+1)/2 = 253.  Checking this value with the formula in Step 5 of Section
8.4.2 is recommended to guard against errors in the rankings.
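The rankings and the total in Column E can be checked independently; for example, the following
sketch uses the rankdata function from the scipy library (an assumed tool choice, not a MARSSIM
requirement) to reproduce Wr for the data in Table 8.6:

     # Check of the Wr calculation for Table 8.6 (scipy assumed available; illustrative only).
     from scipy.stats import rankdata

     reference = [49, 35, 45, 45, 41, 44, 48, 37, 46, 42, 47]     # cpm
     survey = [104, 94, 98, 99, 90, 104, 95, 105, 93, 101, 92]    # cpm
     dcgl_w_cpm = 160    # 160 cpm corresponds to the DCGLW of 8,300 Bq/m2

     adjusted_ref = [x + dcgl_w_cpm for x in reference]
     ranks = rankdata(adjusted_ref + survey)      # midranks assigned to tied values
     w_r = ranks[:len(adjusted_ref)].sum()
     print(w_r)   # 187.0; greater than the critical value of 156 from Table I.4
                  # (alpha = 0.025, n = 11, m = 11), so the null hypothesis is rejected.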


Column E contains only the ranks belonging to the reference area measurements. The total is
187. This is compared with the entry for the critical value of 156 in Table I.4 for α = 0.025, with
n = 11 and m = 11.  Since the sum of the reference area ranks is greater than the critical value, the
null hypothesis (i.e., that the average survey unit concentration exceeds the DCGLW) is rejected.

The analysis for the WRS test is very well suited to the use of a computer spreadsheet. The
spreadsheet formulas used for the example above are given in Appendix I.10, Table I.11.

8.4.4   Class 1 Interior Concrete Survey Unit

As in the previous example, the gas-flow proportional counter measures total beta-gamma
activity (see Appendix H) and the measurements are not radionuclide specific. The two-sample
nonparametric test is appropriate for the Class 1 interior concrete survey unit because gross beta-
gamma activity contributes to background even though the radionuclide of interest does not
appear in background.

Appendix A provides a detailed description of the calculations for the Class  1 interior concrete
survey unit.

8.4.5   Multiple Radionuclides

The use of the unity rule when there is more than one radionuclide to be considered is discussed
in Appendix I.11. An example application appears in Section I.11.4.
8.5    Evaluating the Results:  The Decision

Once the data and the results of the tests are obtained, the specific steps required to achieve site
release depend on the procedures instituted by the governing regulatory agency and site-specific
ALARA considerations. The following suggested considerations are for the interpretation of the
test results with respect to the release limit established for the site or survey unit. Note that the
tests need not be performed in any particular order.

8.5.1  Elevated Measurement Comparison

The Elevated Measurement Comparison (EMC) consists of comparing each measurement from
the survey unit with the investigation levels discussed in Section 5.5.2.6 and Section 8.2.5. The
EMC is performed for both measurements obtained on the systematic-sampling grid and for
locations flagged by scanning measurements.  Any measurement from the survey unit that is
equal to or greater than an investigation level  indicates an area of relatively high concentrations
that should be investigated—regardless of the outcome of the nonparametric statistical tests.



The statistical tests may not reject H0 when only a very few high measurements are obtained in
the survey unit.  The use of the EMC against the investigation levels may be viewed as assurance
that unusually large measurements will receive proper attention regardless of the outcome of
those tests and that any area having the potential for significant dose contributions will be
identified. The EMC is intended to flag potential failures in the remediation process. This
should not be considered the primary means to identify whether or not a site meets the release
criterion.

The derived concentration guideline level for the EMC is:

                              DCGLEMC = Am x DCGLW                              (8-1)


where Am is the area factor for the area of the systematic grid. Note that DCGLEMC is an a
priori limit, established both by the DCGLW and by the survey design (i.e., grid spacing and
scanning MDC).  The true extent of an area of elevated activity can only be determined after
performing the  survey and taking additional measurements.  Upon the completion of further
investigation, the a posteriori limit, DCGLEMC = Am x DCGLW, can be established using the
value of Am appropriate for the actual area of elevated concentration. The area of elevated
activity is generally bordered by concentration measurements below the DCGLW. An individual
elevated measurement on a systematic grid could conceivably represent an area four times as
large as the systematic grid area used to define the DCGLEMC. This is the area bounded by the
nearest neighbors of the elevated measurement location.  The results of the investigation should
show that the appropriate DCGLEMC is not exceeded.  Area factors are discussed in Section
5.5.2.4.

If measurements above the stated scanning MDC are found by sampling or by direct
measurement at locations that were not flagged by the scanning survey, this may indicate that the
scanning method did not meet the DQOs.

The preceding discussion primarily concerns Class 1 survey units. Measurements exceeding
DCGLW in Class 2 or Class 3 areas may indicate survey unit mis-classification.  Scanning
coverage for Class 2 and Class 3 survey units is less stringent than for Class 1. If the
investigation levels of Section 8.2.5  are exceeded, an investigation should: 1) ensure that the area
of elevated activity discovered meets the release criterion, and 2) provide reasonable assurance
that other undiscovered areas of elevated activity do not exist. If further investigation determines
that the survey unit was mis-classified with regard to contamination potential, then a resurvey
using the method appropriate for the new survey unit classification may be appropriate.
8.5.2  Interpretation of Statistical Test Results

The result of the statistical test is the decision to reject or not to reject the null hypothesis.
Provided that the results of investigations triggered by the EMC were resolved, a rejection of the
null hypothesis leads to the decision that the survey unit meets the release criterion. However,
estimating the average residual radioactivity in the survey unit may also be necessary so that dose
or risk calculations can be made.  This estimate is designated δ. The average concentration is
generally the best estimator for δ (EPA 1992g). However, only the unbiased measurements from
the statistically designed survey should be used in the calculation of δ.

If residual radioactivity is found in an isolated area of elevated activity—in addition to residual
radioactivity distributed relatively uniformly across the survey unit—the unity rule (Section
4.3.3) can be used to ensure that the total dose is within the release criterion:

                 δ           (average concentration in elevated area - δ)
             ---------  +  -----------------------------------------------  ≤  1               (8-2)
              DCGLW          (area factor for elevated area) x DCGLW
If there is more than one elevated area, a separate term should be included for each. When
calculating δ for use in this inequality, measurements falling within the elevated area may be
excluded, provided the overall average in the survey unit is less than the DCGLW.  As an
alternative to the unity rule, the dose or risk due to the actual residual radioactivity distribution
can be calculated if there is an appropriate exposure pathway model available. Note that these
considerations generally apply only to Class  1 survey units, since areas of elevated activity
should not exist in Class 2 or Class 3 survey units.
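Evaluating the inequality in Equation 8-2 is then straightforward once δ, the average over the
elevated area, and the applicable area factor are in hand. The numbers in the following sketch are
hypothetical and are used only to show the form of the check; actual area factors come from the
analysis described in Section 5.5.2.4:

     # Hypothetical check of the unity rule in Equation 8-2 (example values only).
     dcgl_w = 140.0               # Bq/kg
     delta = 95.0                 # estimated average residual radioactivity in the survey unit
     elevated_average = 400.0     # average concentration over the isolated elevated area
     area_factor = 10.0           # hypothetical area factor for the size of the elevated area

     total = delta / dcgl_w + (elevated_average - delta) / (area_factor * dcgl_w)
     meets_criterion = total <= 1.0   # here about 0.68 + 0.22 = 0.90, so the criterion is met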

A retrospective power analysis for the test will often be useful, especially when the null
hypothesis is not rejected (see Appendix I.9). When the null hypothesis is not rejected, it may be
because it is in fact true, or it may be because the test did not have sufficient power to detect that
it is not true.  The power of the test will be primarily affected by changes in the actual number of
measurements obtained and their standard deviation. An effective survey design will slightly
overestimate both the number of measurements and the standard deviation to ensure adequate
power. This ensures that a survey unit is not subjected to additional remediation simply because
the final status survey is not sensitive enough to detect that residual radioactivity is below the
guideline level.  When the null hypothesis is rejected, the power of the test becomes a somewhat
moot question.  Nonetheless, even in this case, a retrospective power curve can be a useful
diagnostic tool and an aid to designing future surveys.

8.5.3  If the Survey Unit Fails

The guidance provided in MARSSIM is fairly explicit concerning the steps that should be taken
to show that a survey unit meets release criteria.  Less has been said about the procedures that
should be used if at any point the survey unit fails.  This is primarily because there are many
different ways that a survey unit may fail the final status survey.  The overall level of residual
radioactivity may not pass the nonparametric statistical tests.  Further investigation following the
elevated measurement comparison may show that there is a large enough area with a
concentration too high to meet the release criterion. Investigation levels may have caused
locations to be flagged during scanning that indicate unexpected levels of residual radioactivity
for the survey unit classification. Site-specific information is needed to fully evaluate all of the
possible reasons for failure, their causes, and their remedies.

When a survey unit fails to demonstrate compliance with the release criterion, the first step is to
review and confirm the data that led to the decision. Once this is done, the DQO Process
(Appendix D) can be used to identify and evaluate potential solutions to  the problem. The level
of residual radioactivity in the survey unit should be determined to help define the problem.
Once the problem has been  stated the decision concerning the survey unit should be developed
into a decision rule. Next, determine the additional data, if any, needed to document that the
survey unit demonstrates compliance with the release criterion.  Alternatives to resolving the
decision statement should be developed for each survey unit that fails the tests. These
alternatives are evaluated against the DQOs, and a survey design that meets the objectives of the
project is selected.

For example, a Class 2  survey unit passes the nonparametric statistical tests, but has several
measurements on the sampling grid that exceed the DCGLW.  This is unexpected in a Class 2
area, and so these measurements are flagged for further investigation. Additional sampling
confirms that there are several areas where the concentration exceeds the DCGLW.  This indicates
that the survey unit was misclassified.  However, the scanning technique that was used was
sufficient to detect residual radioactivity at the DCGLEMC calculated for the sample grid. No
areas exceeding the DCGLEMC were found. Thus, the only difference between the final status
survey actually done, and that which would be required for a Class 1 area, is that the scanning
may not have covered 100% of the survey unit area. In this case, one might simply increase the
scan coverage to 100%.  Reasons why the survey unit was misclassified  should be noted. If no
areas exceeding the DCGLEMC are found, the survey unit essentially demonstrates compliance
with the release criterion as a Class 1 survey unit.

If, in the example above, the scanning technique was not sufficiently sensitive, it may be possible
to re-classify  as Class 1 only that portion of the survey unit containing the higher measurements.
This portion would be re-sampled at the higher measurement density required for a Class 1
survey unit, with the rest of the survey unit remaining Class 2.
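
The adequacy of scanning in examples like this one rests on the relationship DCGLEMC =
(area factor) × DCGLW for the area bounded by adjacent sampling points. The Python sketch
below is a hypothetical illustration of that check for an assumed triangular sample grid; the
grid spacing, area factor, and scan MDC are illustrative assumptions, and area factors must be
obtained from the applicable dose or risk model.

    # Sketch of checking whether a scan technique can detect an elevated area the
    # size of the sample grid. The spacing, area factor, and scan MDC below are
    # hypothetical; area factors come from the applicable exposure pathway model.

    def grid_area(spacing_m, pattern="triangular"):
        """Area (m^2) bounded by adjacent sampling points for a given grid spacing."""
        if pattern == "triangular":
            return 0.866 * spacing_m ** 2
        return spacing_m ** 2          # square grid

    dcgl_w = 1.0                       # Bq/g, hypothetical
    spacing = 10.0                     # m, hypothetical sample spacing
    area = grid_area(spacing)          # about 87 m^2 for a triangular grid
    area_factor = 6.0                  # hypothetical area factor for that area
    dcgl_emc = area_factor * dcgl_w    # DCGL for the elevated measurement comparison

    scan_mdc = 4.5                     # Bq/g, hypothetical scan sensitivity
    if scan_mdc <= dcgl_emc:
        print(f"Scan MDC {scan_mdc} <= DCGL_EMC {dcgl_emc}: scan coverage is adequate")
    else:
        print(f"Scan MDC {scan_mdc} > DCGL_EMC {dcgl_emc}: tighten the grid or improve the scan")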

A second example might be a Class 1 survey unit that passes the nonparametric statistical tests
and contains some areas that were flagged for investigation during scanning. Further
investigation, sampling, and analysis indicates that one area is truly elevated.  This area has a
concentration that exceeds the DCGLW  by a factor greater than the area factor calculated for its
actual size. This area is then remediated. Remediation control sampling shows that the residual
radioactivity was removed, and no other areas were contaminated with removed material.  In this
case one may simply document the original final status survey, the fact that remediation was
performed, the results of the remedial action support survey, and the additional remediation data.
In some cases, additional final status survey data may not be needed to demonstrate compliance
with the release criterion.

As a last example, consider a Class 1 area which fails the nonparametric statistical tests.
Confirmatory data indicates that the average concentration in the survey unit does exceed the
DCGLW over a majority of its area. This indicates remediation of the entire survey unit is
necessary, followed by another final status survey. Reasons for performing a final status survey
in a survey unit with significant levels of residual radioactivity should be noted.

These examples are meant to illustrate the actions that may be necessary to secure the release of a
survey unit that has failed to meet the release criterion.  The DQO Process should be revisited to
plan how to attain the original objective, that is to safely release the survey unit by showing that
it meets the release criterion.  Whatever data are necessary to meet this objective will be in
addition to the final  status survey data already in hand.

8.5.4  Removable Activity

Some regulatory agencies may require that smear samples be taken at indoor grid locations as an
indication of removable surface activity.  The percentage of removable activity assumed in the
exposure pathway models has a great impact on dose calculations.  However, measurements of
smears are very difficult to interpret quantitatively.  Therefore, the results of smear samples
should not be used for determining compliance. Rather, they should be used as a diagnostic tool
to determine if further investigation is necessary.

8.6    Documentation

Documentation of the final status survey should provide a complete and unambiguous record of
the radiological status of the survey unit relative to the established DCGLs.  In addition,
sufficient data and information should be provided to enable an independent evaluation of the
results of the survey including repeating measurements at some future time. The documentation
should comply with all applicable regulatory requirements. Additional information on
documentation is provided in Chapter 3, Chapter 5, Chapter 9, and Appendix N.

Much of the information in the final status report will be available from other decommissioning
documents.  However, to the extent practicable, this report should be a stand-alone document
with minimum information incorporated by reference. This document should describe the
instrumentation or analytical methods used, how the data were converted to DCGL units, the
process of comparing the results to the DCGLs, and the process of determining that the data
quality objectives were met.

The results of actions taken as a consequence of individual measurements or sample
concentrations in excess of the investigation levels should be reported together with any
additional data, remediation, or re-surveys performed to demonstrate that issues concerning
potential areas of elevated activity were resolved.  The results of the data evaluation using
statistical methods to determine if release criteria were satisfied should be described. If criteria
were not met or if results indicate a need for additional data, appropriate further actions should
be determined by the site management in consultation with the responsible regulatory agency.

                 EXAMPLE DATA INTERPRETATION CHECKLIST

CONVERT DATA TO STANDARD UNITS

____      Structure activity in Bq/m2 (dpm/100 cm2)
____      Solid media (soil, etc.) activity in Bq/kg (pCi/g)

EVALUATE ELEVATED MEASUREMENTS

____      Identify elevated data
____      Compare data with derived elevated area criteria
____      Determine need to remediate and/or reinvestigate elevated condition
____      Compare data with survey unit classification criteria
____      Determine need to investigate and/or reclassify

ASSESS SURVEY DATA

____      Review DQOs and survey design
____      Verify that data of adequate quantity and quality were obtained
____      Perform preliminary assessments (graphical methods) for unusual or suspicious trends
          or results—investigate further as appropriate

PERFORM STATISTICAL TESTS

____      Select appropriate tests for category of contaminant
____      Conduct tests
____      Compare test results against hypotheses
____      Confirm power level of tests

COMPARE RESULTS TO GUIDELINES

____      Determine average or median concentrations
____      Confirm that residual activity satisfies guidelines

COMPARE RESULTS WITH DQOs*

____      Determine whether all DQOs are satisfied
____      Explain/describe deviations from design-basis DQOs

*  ALARA may be included in the DQOs.


            9 QUALITY ASSURANCE AND QUALITY CONTROL
9.1    Introduction

The goal of quality assurance and quality control (QA/QC) is to identify and implement sampling
and analytical methodologies which limit the introduction of error into analytical data. For
MARSSIM data collection and evaluation, a system is needed to ensure that radiation surveys
produce results that are of the type and quality needed and expected for their intended use. A
quality system is a management system that describes the elements necessary to plan, implement,
and assess the effectiveness of QA/QC activities.  This system establishes many functions
including: quality management policies and guidelines for the development of organization- and
project-specific quality plans; criteria and guidelines for assessing data quality; assessments to
ascertain effectiveness of QA/QC implementation; and training programs related to QA/QC
implementation. A quality system ensures that MARSSIM decisions will be supported by
sufficient data of adequate quality and usability for their intended purpose,  and further ensures
that such data are authentic, appropriately documented, and technically defensible.

Any organization collecting and evaluating data for a particular program must be concerned with
the quality  of results.  The organization must have results that:  meet a well-defined need, use, or
purpose; comply with program requirements; and reflect consideration of cost and economics.
To meet the objective, the organization should control the technical, administrative, and human
factors affecting the quality of results.  Control should be oriented toward the appraisal,
reduction, elimination, and prevention of deficiencies that affect quality.

Quality systems already exist for many organizations involved in the use of radioactive materials.
These may be self-imposed internal quality management systems (e.g., DOE) or quality systems
required by regulation by another entity (e.g., NRC) as a condition of the operating license.1
These systems are typically called Quality Assurance Programs. An
organization may also obtain services from another organization that already has a quality system
in place. When developing an organization-specific quality system, there is no need to develop
new quality management systems, to the extent that a facility's current Quality Assurance
Program can be used. Standard ANSI/ASQC E4-1994 (ASQC 1995) provides national
consensus quality standards for environmental programs. It addresses both quality systems and
the collection and evaluation of environmental data.  Annex  B  of ANSI/ASQC E4-1994
(ASQC 1995) and Appendix K of MARSSIM illustrate how existing quality system documents
compare with organization- and project-specific environmental quality system documents.

   1  Numerous quality assurance and quality control (QA/QC) requirements and guidance documents have been
applied to environmental programs. Until now, each Federal agency has developed or chosen QA/QC requirements
to fit its particular mission and needs. Some of these requirements include DOE Order 5700.6c (DOE 1991c); EPA
QA/R-2 (EPA 1994f); EPA QA/R-5 (EPA 1994c); 10 CFR 50, App. B; NUREG-1293, Rev. 1 (NRC 1991); Reg.
Guide 4.15 (NRC 1979); and MIL-Q-9858A (DOD 1963). In addition, there are several consensus standards for
QA/QC, including ASME NQA-1 (ASME 1989) and the ISO 9000/ASQC Q9000 series (ISO 1987). ANSI/ASQC
E4-1994 (ASQC 1995) is a consensus standard specifically for environmental data collection.

Table 9.1 illustrates elements of a quality system as they relate to the Data Life Cycle. Applying a
quality system to a project is typically done in three phases, as described in Section 2.3:

       1)  the planning phase, where the Data Quality Objectives (DQOs) are developed following
           the process described in Appendix D and documented in the Quality Assurance Project
           Plan (QAPP)2
       2)  the implementation phase, involving the collection of environmental data in accordance
           with approved procedures and protocols
       3)  the assessment phase, including the verification and validation of survey results as
           discussed in Section 9.3 and the evaluation of the environmental data using Data Quality
           Assessment (DQA) as discussed in Section 8.2 and Appendix E

Detailed guidance on quality systems is not provided in MARSSIM because a quality system
should be in place and functioning prior to beginning environmental data collection activities.

                   Table 9.1  The Elements of a Quality System Related
                                  to the Data Life Cycle

     Data Life Cycle        Quality System Elements
     Planning               Data Quality Objectives (DQOs)
                            Quality Assurance Project Plans (QAPPs)
                            Standard Operating Procedures (SOPs)
     Implementation         QAPPs
                            SOPs
                            Data collection
                            Assessments and audits
     Assessment             Data validation and verification
                            Data Quality Assessment (DQA)

A graded approach bases the level of controls on the intended use of the results and the degree of
confidence needed in their quality. Applying a graded approach may mean that some
organizations (e.g., those using the simplified procedures in Appendix B) make use of existing
plans and procedures to conduct surveys. For many other organizations, the need for cleanup and
restoration of contaminated facilities may create the need for one or more QAPPs suitable to the
special needs of environmental data gathering, especially as it relates to the demonstration of
compliance with regulatory requirements. There may even be a need to update or revise an
existing quality management system.
   2  The quality assurance project plan is sometimes abbreviated QAPjP. MARSSIM adopts the terminology and
abbreviations used in ANSI/ASQC E4-1994 (ASQC 1995) and EPA QA/R-5 (EPA 1994c).

9.2    Development of a Quality Assurance Project Plan

The Quality Assurance Project Plan (QAPP)3 is the critical planning document for any
environmental data collection operation because it documents how QA/QC activities will be
implemented during the life cycle of a project (EPA 1997a).  The QAPP is the blueprint for
identifying how the quality system of the organization performing the work is reflected in a
particular project and in associated technical goals.  This section provides information on how to
develop a QAPP based on the DQO process.  The results of the DQO process provide key inputs
to the QAPP and will largely determine the level of detail in the QAPP.

The consensus standard ANSI/ASQC E4-1994 (ASQC 1995) describes the minimum set of
quality elements required to conduct programs involving environmental data collection and
evaluation. Table 9.2 lists the quality elements for collection and evaluation of environmental
data from ANSI/ASQC E4-1994.  These quality elements are provided as examples that should
be addressed when developing a QAPP. This table also includes references for obtaining
additional information on each of these quality elements.  Many of these elements will be
addressed in existing documents, such as the organization's Quality Assurance Program or
Quality Management Plan. Each of these quality elements should be considered during survey
planning to determine the degree to which they will be addressed in the QAPP. Additional
quality elements may need to be added to this list as a result of organizational preferences or
requirements of Federal and State regulatory authorities. For example, safety and health or
public participation may be included as elements to be considered during the development of a
QAPP.

The QAPP should be developed using a graded approach as discussed in Section 9.1. In other
words, existing procedures and survey designs can be included by reference. This is especially
useful for sites using a simplified survey design process (e.g., surveys designed using
Appendix B).

A QAPP should be developed to document the results of the planning phase of the Data Life
Cycle (see Section 2.3). The level of detail provided in the QAPP for relevant quality elements is
determined using the DQO process during survey planning activities.  Information that is already
provided in existing documents does not need to be repeated in the QAPP, and can be included
by reference (EPA 1997a).
   3 MARSSIM uses the term Quality Assurance Project Plan to describe a single document that incorporates all
of the elements of the survey design. This term is consistent with ANSI/ASQC E4-1994 (ASQC 1995) and EPA
guidance (EPA 1994c, EPA 1997a), and is recommended to promote consistency. The use of the term QAPP in
MARSSIM does not exclude the use of other terms (e.g., Decommissioning Plan, Sampling and Analysis Plan,
Field Sampling Plan) to describe survey planning documentation as long as the information in the documentation
supports the objectives of the survey.

        Table 9.2  Examples of QAPP Elements for Site Surveys and Investigations

  QAPP Element                          Information Source
  Planning and Scoping (reference       ASQC 1995:  Part A, Sections 2.1 and 2.7; Part B, Section 3.1
  the QA Manual for information on      EPA 1994c:  Sections A4, A5, A6, and A7
  the quality system)                   EPA 1997a:  Chapter III, Sections A4, A5, A6, and A7
                                        NRC 1997c:  Chapter 14
                                        EPA 1993d:  Project Objectives

  Design of Data Collection             ASQC 1995:  Part A, Section 2.3; Part B, Section 3.2
  Operations (including training)       EPA 1994c:  Sections A9 and B1
                                        EPA 1997a:  Chapter III, Sections A9 and B1
                                        EPA 1993d:  Sampling Design

  Implementation of Planned             ASQC 1995:  Part A, Section 2.8; Part B, Section 3.3
  Operations (including documents       EPA 1994c:  Sections A1, A2, A3, B2, B3, B4, B5, B6, B7, B8, B9, and B10
  and records)                          EPA 1997a:  Chapter III, Sections A1, A2, A3, B2, B3, B4, B5, B6, B7, B8, B9, and B10
                                        NRC 1997c:  Chapter 5
                                        EPA 1993d:  Sampling Execution, Sample Analysis

  Assessment and Response               ASQC 1995:  Part A, Section 2.9; Part B, Section 3.4
                                        EPA 1994c:  Sections C1 and C2
                                        EPA 1997a:  Chapter III, Sections C1 and C2
                                        EPA 1993d:  Exhibit 3, Reference Box 3

  Assessment and Verification of        ASQC 1995:  Part B, Section 3.5
  Data Usability                        EPA 1994c:  Sections D1, D2, and D3
                                        EPA 1997a:  Chapter III, Sections D1, D2, and D3
                                        NRC 1997c:  Chapter 20, Appendix J, Appendix Q
                                        EPA 1993d:  Assessment of Data Quality

For example, the quality system description, personnel qualifications and requirements, and
Standard Operating Procedures (SOPs) for the laboratory analysis of samples may simply be
references to existing documents (e.g., Quality Management Plan, Laboratory Procedure
Manual). SOPs for performing direct measurements with a specific instrument may be attached
to the QAPP because this information may not be readily available from other sources.

There is no particular format recommended for developing a QAPP. Figure 9.1 provides an
example of a QAPP format presented in EPA QA/R-5 (EPA 1994c). Appendix K compares the
quality elements presented in this example to the quality elements found in EPA QAMS-005-80
(EPA 1980d), ASME NQA-1 (ASME 1989), DOE Order 5700.6c (DOE 1991c), MIL-Q-9858A
(DOD 1963), and ISO 9000 (ISO 1987).

  Project Management
         Title and Approval Sheet
         Table of Contents
         Distribution List
         Project/Task Organization
         Problem Definition/Background
         Project Task Description
         Quality Objectives and Criteria for Measurement Data
         Special Training Requirements/Certification

  Measurement/Data Acquisition
         Sampling Process Design (Experimental Design)
         Sampling Methods Requirements
         Sample Handling and Custody Requirements
         Analytical Methods Requirements
         Quality Control Requirements
         Instrument/Equipment Testing, Inspection, and Maintenance Requirements
         Instrument Calibration and Frequency
         Inspection/Acceptance Requirements for Supplies and Consumables

  Assessment/Oversight
         Assessments and Response Actions
         Reports to Management

  Data Validation and Usability
         Data Review, Validation, and Verification Requirements
         Validation and Verification Methods
         Reconciliation with User Requirements
                           Figure 9.1 Example of a QAPP Format

9.3    Data Assessment

Assessment of environmental data is used to evaluate whether the data meet the objectives of the
survey, and whether the data are sufficient to determine compliance with the DCGL (EPA 1992a,
1992b, 1996a).  The assessment phase of the Data Life Cycle consists of three phases: data
verification, data validation, and Data Quality Assessment (DQA). This section provides
guidance on verifying and validating data collected during a final status survey designed to
demonstrate compliance with a dose- or risk-based regulation. Guidance on DQA is provided in
Chapter 8 and Appendix E. As with all components of a successful survey, the level of effort
associated with the assessment of survey data should be consistent with the objectives of the
survey (i.e., a graded approach).


9.3.1   Data Verification

Data verification ensures that the requirements stated in the planning documents (e.g., Quality
Assurance Project Plan, Standard Operating Procedures) are implemented as prescribed.  This
means that deficiencies or problems that occur during implementation should be documented and
reported. This also means that activities performed during the implementation phase are assessed
regularly with findings documented and reported to management. Corrective actions undertaken
should be reviewed for adequacy and appropriateness and documented in response to the
findings. Data verification activities should be planned and documented in the QAPP. These
assessments may include but are not limited to inspections, QC checks, surveillance, technical
reviews, performance evaluations, and audits.

To ensure that conditions requiring corrective actions are identified and addressed promptly, data
verification activities should be initiated as part of data collection during the implementation
phase of the survey. The performance of tasks by personnel  is generally compared to a
prescribed method documented in the SOPs, and is generally assessed using inspections,
surveillance, or audits. Self-assessments and independent assessments may be planned,
scheduled, and performed as part of the survey. Self-assessment also means that personnel doing
work should document and report deficiencies or problems that they encounter to their
supervisors or management.

The performance of equipment such as radiation detectors or measurement systems such as an
instrument and human operator can be monitored using control charts. Control charts are used to
record the results of quantitative QC checks such as background and daily calibration or
performance checks.  Control charts document instrument and measurement system performance
on a regular basis and identify conditions requiring corrective actions on a real time basis.
Control charts are especially useful for surveys that extend over a significant period of time (e.g.,
weeks instead of days) and for equipment that is owned by the organization and is frequently used to
collect survey data. Surveys that are accomplished in one or two days and use rented instruments
may not benefit significantly from the preparation and use of control charts. The use of control
charts is usually documented in the SOPs.
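
As one possible implementation, the Python sketch below derives a center line and control limits
from a set of baseline QC check counts and classifies later daily checks against them. The counts
shown and the choice of 2-sigma warning and 3-sigma action limits are illustrative assumptions,
not MARSSIM requirements; the limits actually used should be documented in the SOPs.

    # Minimal sketch of a control chart for daily instrument performance checks
    # (e.g., counts from a check source). The baseline counts and the 2-sigma
    # warning / 3-sigma action limits are illustrative assumptions.
    import statistics

    baseline_counts = [1012, 998, 1005, 1021, 989, 1003, 995, 1010,
                       1001, 1007, 993, 1015, 999, 1004, 1008, 996]

    center = statistics.mean(baseline_counts)
    sigma = statistics.stdev(baseline_counts)
    warning = (center - 2 * sigma, center + 2 * sigma)   # investigate if exceeded
    action = (center - 3 * sigma, center + 3 * sigma)    # corrective action if exceeded

    def evaluate(daily_count):
        """Classify a daily QC check result against the control limits."""
        if not action[0] <= daily_count <= action[1]:
            return "outside action limits: corrective action required"
        if not warning[0] <= daily_count <= warning[1]:
            return "outside warning limits: investigate"
        return "in control"

    for count in (1006, 982, 1042):    # hypothetical daily check results
        print(count, "->", evaluate(count))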

A technical review is an independent assessment that provides an in-depth  analysis and
evaluation of documents, activities, material, data, or items that require technical verification to
ensure that established requirements are satisfied (ASQC 1995). A technical review typically
requires a significant effort in time and resources and may not be necessary for all surveys. A
complex survey using a combination of scanning, direct measurements, and sampling for
multiple survey units is more likely to benefit from a detailed technical review than a simple
survey design calling for relatively few measurements using  one or two measurement techniques
for a single survey unit.

9.3.2   Data Validation

Data validation activities ensure that the results of data collection activities support the objectives
of the survey as documented in the QAPP, or support a determination that these objectives
should be modified. Data Usability is the process of ensuring or determining whether the quality
of the data produced meets the intended use of the data (EPA 1992a, EPA 1997a). Data
verification compares the collected data with the prescribed activities documented in the SOPs;
data validation compares the collected data to the DQOs documented in the QAPP.  Corrective
actions may improve data quality and reduce uncertainty, and may eliminate the need to qualify
or reject data.

9.3.2.1  Data Qualifiers

Qualified data are any data that have been modified or adjusted as part of statistical or
mathematical evaluation, data validation, or data verification operations (ASQC 1995).  Data
may be qualified or rejected as a result of data validation or data verification activities. Data
qualifier codes or flags are often used to identify data that have been qualified. Any scheme used
should be fully explained in the QAPP and survey documentation.  The following are examples
of data qualifier codes or flags derived from national qualifiers assigned to results in the Contract
Laboratory Program (CLP; EPA 1994g).

U or 


9.3.2.2 Data Validation Descriptors

Data validation is often defined by six data descriptors. These six data descriptors are
summarized in Table 9.3 and discussed in detail in Appendix N. The decision maker or reviewer
examines the data, documentation, and reports for each of the six data descriptors to determine if
performance is within the limits specified in the DQOs during planning.  The data validation
process for each data descriptor should be conducted according to procedures documented in the
QAPP.
            Table 9.3  Suggested Content or Consideration, Impact if Not Met,
                        and Corrective Actions for Data Descriptors

 Data Descriptor: Reports to Decision Maker
   Suggested Content or Consideration:
     • Site description
     • Survey design with measurement locations
     • Analytical method and detection limit
     • Detection limits (MDCs)
     • Background radiation data
     • Results on per measurement basis, qualified for analytical limitations
     • Field conditions for media and environment
     • Preliminary reports
     • Meteorological data, if indicated by DQOs
   Impact if Not Met:
     • Unable to perform a quantitative radiation survey and site investigation
   Corrective Action:
     • Request missing information
     • Perform qualitative or semi-quantitative site investigation

 Data Descriptor: Documentation
   Suggested Content or Consideration:
     • Field reports
     • Chain-of-custody records
     • SOPs
     • Field and analytical records
     • Measurement results related to geographic location
   Impact if Not Met:
     • Unable to identify appropriate concentration for survey unit measurements
     • Unable to have adequate assurance of measurement results
     • Potential for Type I and Type II decision errors
   Corrective Action:
     • Request that locations be identified
     • Resurveying or resampling
     • Correct deficiencies

 Data Descriptor: Data Sources
   Suggested Content or Consideration:
     • Historical data used meets DQOs
   Impact if Not Met:
     • Lower confidence of data quality
   Corrective Action:
     • Resurveying, resampling, or reanalysis for unsuitable or questionable measurements

                                   Table 9.3 (continued)

 Data Descriptor: Analytical Method and Detection Limit
   Suggested Content or Consideration:
     • Routine methods used to analyze radionuclides of potential concern
   Impact if Not Met:
     • Unqualified precision and accuracy
     • Potential for Type I and Type II decision errors
   Corrective Action:
     • Reanalysis
     • Resurveying, resampling, or reanalysis
     • Documented statements of limitation

 Data Descriptor: Data Review
   Suggested Content or Consideration:
     • Defined level of data review for all data
   Impact if Not Met:
     • Potential for Type I and Type II decision errors
     • Increased variability and bias due to analytical process, calculation errors, or
       transcription errors
   Corrective Action:
     • Perform data review

 Data Descriptor: Data Quality Indicators
   Suggested Content or Consideration:
     • Surveying and sampling variability identified for each radionuclide
     • QC measurements to identify and quantify precision and accuracy
     • Surveying, sampling, and analytical precision and accuracy quantified
   Impact if Not Met:
     • Unable to quantify levels for uncertainty
     • Potential for Type I and Type II decision errors
   Corrective Action:
     • Resurveying or resampling
     • Perform qualitative site investigation
     • Documented discussion of potential limitations

Data collected should meet the performance objectives for each data descriptor. If they do not,
deviations should be noted and corrective action taken to improve the usability of the data.
                     AVAILABILITY OF REFERENCE MATERIALS
NRC Reference Material

As of November 1999, you may electronically
access NUREG-series publications and other
NRC records at NRC's Public Electronic Reading
Room at www.nrc.gov/NRC/ADAMS/index.html.
Publicly released records include, to name a few,
NUREG-series publications; Federal Register
notices; applicant, licensee, and vendor
documents and correspondence; NRC
correspondence and internal memoranda;
bulletins and information notices; inspection and
investigative reports; licensee event reports; and
Commission papers and their attachments.

NRC publications in the NUREG series, NRC
regulations, and Title 10, Energy, in the Code of
Federal Regulations may also be purchased from
one of these two sources.
1.   The Superintendent of Documents
    U.S. Government Printing Office
    P. O. Box 37082
    Washington, DC 20402-9328
    www.access.gpo.gov/su_docs
    202-512-1800
2.   The National Technical Information Service
    Springfield, VA 22161-0002
    www.ntis.gov
    1-800-533-6847 or,  locally, 703-805-6000

A single copy of each NRC draft report for
comment is available free, to the extent of supply,
upon written request as follows:
Address:  Office of the Chief Information Officer,
            Reproduction and Distribution
           Services Section
          U.S. Nuclear Regulatory Commission
          Washington, DC 20555-0001
E-mail:    DISTRIBUTION@nrc.gov
Facsimile: 301-415-2289

Some publications in the NUREG series that are
posted at NRC's Web site address
www.nrc.gov/NRC/NUREGS/indexnum.html
are updated periodically and may differ from the
last printed version. Although references to
material found  on a Web site bear the date the
material was accessed, the material available on
the date cited may subsequently be removed from
the site.
Non-NRC Reference Material

Documents available from public and special
technical libraries include all open literature items,
such as books,  journal articles, and transactions,
Federal Register notices, Federal and State
legislation, and congressional reports. Such
documents as theses, dissertations, foreign
reports and translations, and non-NRC
conference proceedings may be purchased from
their sponsoring organization.

Copies of industry codes and standards used in a
substantive manner in the NRC regulatory
process are maintained at—
    The NRC Technical Library
    Two White Flint North
    11545 Rockville Pike
    Rockville,  MD 20852-2738

These standards are available in the library for
reference use by the public. Codes and
standards are usually copyrighted and may be
purchased from the originating organization or, if
they are American National Standards, from—
    American  National Standards Institute
    11 West 42nd Street
    New York, NY  10036-8002
    www.ansi.org
    212-642-4900
 The NUREG series comprises (1) technical and
 administrative reports and books prepared by the staff
 (NUREG-XXXX) or agency contractors
 (NUREG/CR-XXXX), (2) proceedings of conferences
 (NUREG/CP-XXXX), (3) reports resulting from
 international agreements (NUREG/IA-XXXX), (4)
 brochures (NUREG/BR-XXXX), and (5) compilations of
 legal decisions and orders of the Commission and Atomic
 Safety and Licensing Boards and of Directors' decisions
 under Section 2.206 of NRC's regulations.
Submit written comments arising from the review or
use of MARSSIM to EITHER the U.S. Environmental
Protection Agency, ATTN: Air and Radiation Docket,
Mail Stop 6102, Air Docket No. A-96-44, First Floor
Waterside Mall (geographic address at 401  M Street,
SW.), mailing address 1200 Pennsylvania Ave., NW.,
Washington D.C. 20460-2001 or the Chief,  Rules and
Directives Branch, Division of Administrative Services,
U.S. Nuclear Regulatory Commission, Washington DC
20555-0001. As appropriate, revised pages of
MARSSIM will be posted on the Internet at:
http://www.epa.gov/radiation/marssim.

               REFERENCES, REGULATIONS, & U. S. CODE

General References

42 FR 60956. November 30, 1977. "Persons Exposed to Transuranium Elements in the
      Environment," Federal Register

46 FR 52601. October 1981.  "Disposal or On-site Storage of Thorium or Uranium Wastes
      From Past Operations," NRC Branch Technical Position, Federal Register

57 FR 13389. April 16, 1992. "Action Plan to Ensure Timely Cleanup of Site
      Decommissioning Management Plan Sites," Federal Register

57 FR 6136. February 20, 1992.  "Order Establishing Criteria and Schedule for
      Decommissioning the Bloomberg Site." Federal Register

Agency for Toxic Substances and Disease Registry (ATSDR). 1992. ATSDR - Public Health
      Assessment Guidance Manual. ATSDR, Atlanta, GA. (PB92-147164)

Altshuler, B., and B. Pasternak.  1963.  Statistical Measures of the Lower Limit of Detection of a
      Radioactivity Counter.  Health Physics 9:293-298.

American National Standards Institute (ANSI). 1996a. Performance Criteria for
      Radiobioassay, N13.30.

American National Standards Institute (ANSI). 1996b. Radiation Protection Instrumentation
      Test and Calibration - Portable  Survey Instruments, N323 A.

American Nuclear Society (ANS).  1994. Mobile In Situ Gamma-Ray Spectroscopy System.
      Transactions of the American Nuclear Society 70:47.

American Nuclear Society (ANS).  1994. Large Area Proportional Counter for In Situ
      Transuranic Measurements.  Transactions of the American Nuclear Society 70:47.

American Public Health Association (APHA).  1995. Standard Methods for the Examination of
      Water and Wastewater.  19th Edition, APHA, Washington, DC.

American Society of Mechanical  Engineers (ASME).  1989. Quality Assurance Program
      Requirements for Nuclear  Facilities. NQA-1, ASME, New York, New York.

American Society for Quality Control (ASQC). 1995. Specifications and Guidelines for Quality
      Systems for Environmental Data Collection and Environmental Technology Programs.
      ANSI/ASQC E4-1994, ASQC, Milwaukee, Wisconsin.



American Society for Testing and Materials (ASTM).  1993. Reducing Samples of Aggregate to
        Testing Size, C702-93. ASTM, West Conshohocken, PA.

American Society for Testing and Materials (ASTM).  1995. Soil Sample Preparation for the
       Determination of Radionuclides, C999-90 (1995) e1. ASTM, West Conshohocken, PA.

American Society for Testing and Materials (ASTM).  1997. Annual Book of ASTM Standards,
        Water and Environmental Technology: Environmental Assessment; Hazardous
       Substances and Oil Spill Responses; Waste Management; Environmental Risk
       Assessment. Volume 11.04, ASTM, West Conshohocken, PA.

Bernabee, R., D. Percival, and D.  Martin. 1980. Fractionation of Radionuclides in Liquid
       Samples from Nuclear Power Facilities. Health Physics 39:57-67.

Berven, B. A., W. D. Cottrell, R. W. Leggett, C. A. Little, T. E. Myrick, W. A. Goldsmith, and
      F. F. Haywood. 1986. Generic Radiological Characterization Protocol for Surveys
      Conducted for DOE Remedial Action Programs.  ORNL/TM-7850, Martin Marietta
      Energy Systems, Inc., Oak Ridge National Laboratory. (DE86-011747)

Berven, B. A., W. D. Cottrell, R. W. Leggett, C. A. Little, T. E. Myrick, W. A. Goldsmith, and
      F. F. Haywood. 1987. Procedures Manual for the ORNL Radiological Survey Activities
      (RASA) Program.  ORNL/TM-8600, Martin Marietta Energy Systems, Inc., Oak Ridge
      National Laboratory.  (DE87-009089)

Brodsky, A. 1992. "Exact Calculation of Probabilities of False Positives and False Negatives
      for Low Background Counting," Health Physics 63(2): 198-204.

Committee on the Biological Effects of Ionizing Radiations (BEIR).  1990.  Health Effects of
      Exposure to Low Levels of Ionizing Radiation.  BEIR V. National Academy of Sciences,
      National Academy Press, Washington D.C.

Conover, W. J.  1980. Practical Nonparametric Statistics, Second Edition.  John Wiley & Sons,
      New York.

Currie, L.A. 1968. Limits for Qualitative Detection and Quantitative Determination.  Analytical
       Chemistry 40(3):586-593.

Davidson, J.R. 1995.  ELIPGRID-PC: Upgraded Version. ORNL/TM-13103. Oak Ridge
       National Laboratory.

Department of the Army. 1993. USAEHA Environmental Sampling Guide. Technical Guide
      No. 155, U.S. Army Environmental Hygiene Agency, Aberdeen Proving Ground, MD.

Department of Defense (DOD).  1963. Quality Program Requirements. Military Specification
      MIL-Q-9858A. DOD, Washington, D.C.

Department of Energy (DOE).  1982. Radiological and Environmental Sciences Laboratory
      Procedures, Analytical Chemistry Branch Procedures Manual.  DOE/IDO-12096, DOE,
      Idaho Operations Office, Idaho Falls.

Department of Energy (DOE).  1985. Formerly Utilized Sites Remedial Action Program,
      Verification and Certification Protocol — Supplement No. 2 to the FUSRAP Summary
      Protocol, Revision 1.  DOE, Division of Facility and Site Decommissioning Projects,
      Office of Nuclear Energy.

Department of Energy (DOE).  1986a. Formerly Utilized Sites Remedial Action Program,
      Summary Protocol, Identification - Characterization - Designation - Remedial Action -
      Certification. DOE, Division of Facility and Site Decommissioning Projects, Office of
      Nuclear Energy.

Department of Energy (DOE).  1986b. Formerly Utilized Sites Remedial Action Program,
      Designation/Elimination Protocol—Supplement No. 1 to the FUSRAP Summary Protocol.
      DOE, Division of Facility and Site Decommissioning Projects, Office of Nuclear Energy.

Department of Energy (DOE).  1987. The Environmental Survey Manual,  Appendix A - Criteria
      for Data Evaluation. DOE/EH-0053,  DOE, Office of Environmental Audit, Washington,
      D.C.  (DE88-000254)

Department of Energy (DOE).  1990a. Environmental Protection, Safety, and Health Protection
      Information Reporting Requirements.  DOE Order 5484.1, Change 7, DOE, Washington,
      DC.

Department of Energy (DOE).  1990b. EML Procedures Manual, HASL-300, 28th ed. HASL-
      300-ED.27-Vol 1, DOE, Environmental Measurements Laboratory, New York. (DE91-
      010178)

Department of Energy (DOE).  1991a. Deleted.

Department of Energy (DOE). 1991c. Quality Assurance. U.S. DOE Order 5700.6c.

Department of Energy (DOE). 1992. Environmental Implementation Guide for Radiological
      Survey Procedures Manual, DOE Report for Comment. Martin Marietta Energy Systems,
      Oak Ridge National Laboratory.

Department of Energy (DOE). 1994. Decommissioning Handbook. DOE/EM-0142P, DOE,
       Washington, D.C. (DE94-008981)

Department of Energy (DOE). 1995. DOE Methods for Evaluating Environmental and Waste
      Management Samples. DOE/EM-0089T, Rev. 2. Prepared for the U.S. Department of
      Energy by Pacific Northwest Laboratory, Richland, WA.

Department of Energy (DOE). 1996. Statistical and Cost-Benefit Enhancements to the DQO
      Process for Characterization Decisions. DOE/EM-0316, U.S. Department of Energy,
      Washington, D.C.

Department of Energy (DOE). 1997. EML Procedures Manual, HASL-300, 28th ed.  HASL-
      300-ED.27-Vol 1, DOE, Environmental Measurements Laboratory, New York. (DE91-
      010178)

Environmental Protection Agency (EPA).  1974. Methods for Chemical Analysis of Water and
      Wastes. EPA 625/6-74-003, (revised), EPA, Washington, D.C.

Environmental Protection Agency (EPA).  1979. Radiochemical Analytical Procedures for
      Analysis of Environmental Samples, EMSL-LV-0539-17, EPA, Office of Radiation and
      Indoor Air, Las Vegas.  (EMSL-LV-0539-17)

Environmental Protection Agency (EPA).  1980a. Prescribed Procedures for Measurement of
      Radioactivity in Drinking Water. EPA-600/4-80-032, EPA, Environmental Monitoring
      and Support Laboratory, Cincinnati, Ohio. (PB80-224744)

Environmental Protection Agency (EPA).  1980b. Samplers and Sampling Procedures for
      Hazardous Waste Streams. EPA-600/2-80-018, EPA, Washington, D.C. (PB80-135353)

Environmental Protection Agency (EPA).  1980c. Upgrading Environmental Radiation Data,
      Health Physics Society Committee Report HPSR-1. EPA 520/1-80-012, EPA, Office of
      Radiation Programs, Washington, D.C. (PB81-100364)


Environmental Protection Agency (EPA). 1980d. Interim Guidelines and Specifications for
      Preparing Quality Assurance Project Plans. QAMS-005/80, EPA, Washington, D.C.

Environmental Protection Agency (EPA). 1982. Handbook for Sampling and Sample
      Preservation of Water and Wastewater. EPA 600/4-82-029, EPA, Washington, D.C.
      (PB83-124503)

Environmental Protection Agency (EPA). 1983. Interim Guidelines and Specifications for
      Preparing Quality Assurance Project Plans. EPA, Washington, D.C.  (PB83-170514)

Environmental Protection Agency (EPA). 1984a. Eastern Environmental Radiation Facility:
      Radiochemical Procedures Manual. EPA 520/5-84-006, EPA, Office of Radiation
      Programs, Eastern Environmental Radiation Facility [renamed the National Air and
      Radiation Environmental Laboratory (NAREL) in 1989], Montgomery, Alabama.  (PB84-
      215581)

Environmental Protection Agency (EPA). 1984b. Soil Sampling Quality Assurance User's
      Guide. EPA 600/4-84-0043, EPA, Washington, D.C. (PB84-198621)

Environmental Protection Agency (EPA). 1986. Preliminary Assessment Petition. Publication
      9200.5-301FS, EPA, Office of Emergency and Remedial Response, Washington, D.C.

Environmental Protection Agency (EPA). 1987a. A Compendium of Superfund Field
      Operations Methods. EPA 540/P-87-001, EPA, Office of Emergency and Remedial
      Response, Washington, D.C. (PB88-181557)

Environmental Protection Agency (EPA). 1987b. Data Quality Objectives for Remedial
      Response Activities-Development Process.  EPA/540/G-87/003, OSWER Directive
      9355.07B, EPA, Washington, D.C. (PB88-131370)

Environmental Protection Agency (EPA). 1987c. DQOs for Remedial Response Activities-
      Example Scenario: RI/FS Activities at a Site with Contaminated Soils and Groundwater.
      EPA/540/G-87/004. OSWER Directive 9355.07B, EPA, Washington, D.C. (PB88-
      131388)

Environmental Protection Agency (EPA). 1987d. Entry and Continued Access Under CERCLA.
      EPA, Washington, D.C.  (PB91-138867)


Environmental Protection Agency (EPA).  1988a. Field Screening Methods Catalog - User's
      Guide. EPA 540/2-88-005, EPA, Office of Emergency and Remedial Response,
      Washington, D.C.  (PB89-134159)

Environmental Protection Agency (EPA).  1988b.  Guidance for Conducting Remedial
       Investigations and Feasibility Studies Under CERCLA, Interim Final.  EPA/540/G-
       89/004, OSWER Directive 9355.3-01, EPA, Washington, D.C. (PB89-184626)

Environmental Protection Agency (EPA).  1988c. Superfund Removal Procedures. OSWER
      Directive 9360.0-03B, EPA, Office of Emergency and Remedial Response, Washington,
      D.C.

Environmental Protection Agency (EPA).  1989a. Methods for Evaluating the Attainment of
       Cleanup Standards, Volume 1: Soils and Solid Media.  EPA-230/02-89-042, EPA, Office
      of Policy, Planning, and Evaluation, Washington, D.C.  (PB89-234959)

Environmental Protection Agency (EPA).  1989b. Procedures for Completion And Deletion of
      National Priorities List Sites and (Update); Final Report. OSWER Directive 9320.2-03B
      EPA, Office of Solid Waste and Emergency Response,  Washington, D.C. (PB90-274556)

Environmental Protection Agency (EPA).  1989c. Background Information Document on
      Procedures Approved for Demonstrating Compliance with 40 CFR Part 61, Subpart I.
      EPA/520/1-89-001, EPA, Washington, D.C.

Environmental Protection Agency (EPA).  1990. A Rationale for the Assessment of Errors in the
      Sampling of Soils.  EPA 600/4-90/013.

Environmental Protection Agency (EPA).  1991a. Description and Sampling of Contaminated
       Soils. EPA 625/12-91-002, EPA, Washington, D.C.

Environmental Protection Agency (EPA).  1991b.  Compendium of ERT Soil Sampling and
       Surface Geophysics Procedures.  EPA 540/P-91-006, EPA, Washington, D.C.
       (PB91-921273/CCE)

Environmental Protection Agency (EPA).  1991c.  Compendium of ERT Ground Water Sampling
       Procedures. EPA 540/P-91-007, EPA, Washington, D.C.  (PB91-921275/CCE)


Environmental Protection Agency (EPA).  1991d.  Compendium of ERT Surface Water and
      Sediment Sampling Procedures. EPA 540/P-91-005, EPA, Washington, D.C. (PB91-
      921274/CCE)

Environmental Protection Agency (EPA).  1991e.  Site Assessment Information Directory.  EPA,
      Office of Emergency and Remedial Response, Washington, D.C.

Environmental Protection Agency (EPA).  1991f.  Guidance for Performing Preliminary
      Assessments Under CERCLA.  EPA/540/G-91/013, EPA, Office of Emergency and
      Remedial Response, Washington, D.C. (PB92-963303)

Environmental Protection Agency (EPA).  1991g.  Removal Program Representative Sampling
      Guidance: Volume 1 - Soil. Publication 9360.4-10, EPA, Office of Emergency and
      Remedial Response, Washington, D.C. (PB92-963408)

Environmental Protection Agency (EPA).  1991h.  Risk Assessment Guidance for Superfund.
      Volume 1, Human Health Evaluation Manual. Part B, Development of Risk Based
      Preliminary Remediation Goals.  Interim Report, EPA/540/R-92/003, OSWER 9285.7-
      01B, Office of Solid Waste and Emergency Response, Washington, D.C. (PB92-963333)

Environmental Protection Agency (EPA).  1992a.  Guidance for Data Useability in Risk
      Assessment, Part A. OSWER Directive 9285.7-09A, EPA, Office of Emergency and
      Remedial Response, Washington, D.C. (PB92-963356)

Environmental Protection Agency (EPA).  1992b.  Guidance for Data Useability in Risk
      Assessment, Part B. OSWER Directive 9285.7-09B, EPA, Office of Emergency and
      Remedial Response, Washington, D.C. (PB92-963362)

Environmental Protection Agency (EPA).  1992c.  Radon Measurement in Schools, Revised
      Edition. EPA 402-R-92-014, EPA, Office of Air and Radiation, Washington, D.C.

Environmental Protection Agency (EPA).  1992d.  Indoor Radon and Radon Decay Product
      Measurement Device Protocols. EPA 402-R-92-004, EPA, Office of Air and Radiation,
      Washington, D.C.

Environmental Protection Agency (EPA).  1992e.  Guidance for Performing Site Inspections
      Under CERCLA. EPA/540-R-92-021, EPA, Office of Solid Waste and Emergency
       Response, Washington, D.C.

Environmental Protection Agency (EPA).  1992f. Guidance on Implementation of the
      Superfund Accelerated Cleanup Model (SACM) under CERCLA and the NCP. OSWER
      Directive 9203.1-03, EPA, Office of Solid Waste and Emergency Response, Washington,
      D.C.

Environmental Protection Agency (EPA).  1992g. Supplemental Guidance to RAGS:
      Calculating the Concentration Term. Publication 9285.7-081, EPA, Office of Solid
      Waste and Emergency Response, Washington, DC.  (PB92-963373)

Environmental Protection Agency (EPA).  1993a. Deleted.

Environmental Protection Agency (EPA).  1993b. RCRA Groundwater Monitoring: Draft
      Technical Guidance.  EPA/530-R-93-001. EPA Office of Solid Waste, Washington, D.C.
      (PB93-139350)

Environmental Protection Agency (EPA).  1993c. Integrating Removal and Remedial Site
      Assessment Investigations. OSWER Directive 9345.1-16FS, EPA, Office of Solid Waste
      and Emergency Response, Washington, D.C. (PB93-963341)

Environmental Protection Agency (EPA).  1993d. Quality Assurance for Superfund
      Environmental Data Collection Activities. Publication 9200.2-16FS, EPA, Office of
      Solid Waste and Emergency Response, Washington, D.C.

Environmental Protection Agency (EPA).  1993e. Subsurface Characterization and Monitoring
      Techniques: A Desk Reference Guide,  Volume 1. EPA/625/R-93/003A, U.S.
      Environmental Protection Agency, Cincinnati, Ohio.  (PB94-136272)

Environmental Protection Agency (EPA).  1993f. Description and Sampling of Contaminated Soils: A
       Field Pocket Guide. EPA/625/12-91/00.

Environmental Protection Agency (EPA).  1994a. Guidance for the Data Quality Objectives
      Process.  EPA/600/R-96/055, EPA QA/G-4, Final, EPA, Quality Assurance Management
      Staff, Washington, D.C.

Environmental Protection Agency (EPA).  1994b. Statistical Methods for Evaluating the
      Attainment of Cleanup Standards, Volume 3: Reference Based Standards for Soils and
      Solid Media.  EPA 230-R-94-004,  EPA, Office of Policy, Planning, and Evaluation,
       Washington, D.C. (PB94-176831)

Environmental Protection Agency (EPA). 1994c.  EPA Requirements for Quality Assurance
      Project Plans for Environmental Data Operations.  EPA QA/R-5, EPA, Draft Interim
      Final, Quality Assurance Management Staff, Washington, D.C.

Environmental Protection Agency (EPA). 1994d.  An SAB Report: Review of EPA 's Approach
      to Screening for Radioactive Waste Materials at a Superfund Site in Uniontown, Ohio.
      Prepared by the ad hoc Industrial Excess Landfill Panel of the Science Advisory Board
      (SAB). EPA-SAB-EC-94-010. EPA, SAB, Washington, D.C.

Environmental Protection Agency (EPA). 1994e.  Methods for Monitoring Pump-and-Treat
      Performance. EPA/600/R-94/123, EPA, Office of Research and Development,
      Washington, D.C.

Environmental Protection Agency (EPA). 1994f.  EPA Requirements for Quality Management
      Plans.  EPA QA/R-2, Interim Draft. Quality Assurance Management Staff, Washington,
      D.C.

Environmental Protection Agency (EPA). 1995a.  DEFT Software for Data Quality Objectives.
      EPA/600/R-96/056, EPA QA/G-4D. EPA, Washington, D.C.

Environmental Protection Agency (EPA). 1995b.  Guidance for the Preparation of Standard
       Operating Procedures (SOPs) for Quality Related Documents. EPA QA/G-6, EPA,
      Quality Assurance Management Staff, Washington, D.C.

Environmental Protection Agency (EPA). 1996a.  Guidance for Data Quality Assessment:
      Practical Methods for Data Analysis. EPA QA/G-9 QA96 Version, EPA/600/R-96/084,
      EPA, Quality Assurance Management Staff, Washington, D.C.

Environmental Protection Agency (EPA). 1996b.  Soil Screening Guidance:  User's Guide.
      EPA/540/R-96/018, EPA, Office of Emergency and Remedial Response, Washington,
      D.C.

Environmental Protection Agency (EPA). 1996c.  Soil Screening Guidance:  Technical
      Background Document.  EPA/540/R-95/128, EPA, Office of Solid Waste and Emergency
      Response, Washington, D.C. (PB96-963502)

Environmental Protection Agency (EPA). 1997a.  EPA Guidance for Quality Assurance Project
      Plans.  Final, EPA QA/G-5, EPA, Office of Research and Development, Washington,
      D.C.


Environmental Protection Agency (EPA). 1997b. Integrated Site Assessments may Expedite
      Cleanups. GAO/RCED-97-181. GAO, Washington, D.C.

Egan, J.P. 1975. Signal Detection Theory and ROC Analysis. Academic Press, Inc., New York.

Eisenbud, M. 1987. Environmental Radioactivity, 3rd ed. Academic Press, Inc., New York.

Friedman, G.M. and J.L. Sanders.  1978. Principles of Sedimentology.  John Wiley and Sons,
      New York, NY.

Fritzsche, A.E.  1987. An Aerial Radiological Survey of the White Creek Floodplain, Oak Ridge
       Reservation, Oak Ridge, Tennessee. EGG-10282-1136, Remote Sensing Laboratory,
      EG&G/EM, Las Vegas, NV.

George, A.C. 1984. Passive, Integrated Measurement of Indoor Radon Using Activated Carbon.
      Health Physics 46:867.

Gilbert, R. O.  1987. Statistical Methods for Environmental Pollution Monitoring. Van
      Nostrand Reinhold, New York.

Hardin, J.W. and R.O. Gilbert. 1993.  Comparing Statistical Tests for Detecting Soil
      Contamination Greater Than Background. PNL-8989, Pacific Northwest Laboratory,
      Richland, WA.

Harnett, D. L.  1975. Introduction to Statistical Methods,  2nd ed. Addison-Wesley, Reading,
      Massachusetts.

Health Physics Society (HPS). 1994a. Program handbook for the Accreditation of Calibration
      Laboratories by the Health Physics Society. HPS, McLean, VA.

Health Physics Society (HPS). 1994b.  Criteria for Accreditation of Calibration Laboratories
      by the Health Physics Society.  HPS, McLean, VA.

Hora, S. C. and R. L. Iman.  1989.  Expert Opinion in Risk Analysis: The NUREG-1150
      Methodology. Nuclear Science and Engineering.  102 (4):323-331.

International Atomic Energy Agency (IAEA).  1971.  Handbook on Calibration of Radiation
      Protection Monitoring Instruments. IAEA, Technical Report Series 133, Vienna.


International Organization for Standardization (ISO).  1987. ISO 9000/ASQC Q9000 Series.
      American Society for Quality Control, Milwaukee, Wisconsin.
      ISO 9000-1, Quality Management and Quality Assurance Standards - Guidelines for
      Selection and Use.
      ISO 9001-1, Quality Systems - Model for Quality Assurance in Design/Development,
      Production, Installation and Servicing.
      ISO 9002, Quality Systems -Model for Quality Assurance in Production and Installation,
      and Servicing.
      ISO 9003, Quality Systems -Model for Quality Assurance in Final Inspection and Test.
      ISO 9004-1, Quality Management and Quality System Elements - Guidelines.

International Organization for Standardization (ISO).  1988. Evaluation of Surface
      Contamination - Part 1: Beta Emitters and Alpha Emitters.  ISO-7503-1 (first edition),
      ISO, Geneva, Switzerland.

International Organization for Standardization (ISO).  1993. International Vocabulary of Basic
      and General Terms in Metrology. ISO, Geneva, Switzerland.

Jenkins, P.H.  1986.  Radon Measurement Methods: An Overview. In Proceedings of Health
      Physics Society Meeting, 29 June 1986, Pittsburgh, PA. CONF-8606139-5, Monsanto
      Research Corporation, Miamisburg, Ohio, p. 38.

Kume, H. 1985. Statistical Methods for Quality Improvement. The Association of Overseas
      Technical Scholarship, Tokyo, Japan.

Lodge, J.P., Jr. (Ed.).  1988. Methods of Air Sampling and Analysis. 3rd Edition, American
      Chemical Society, American Institute of Chemical Engineers, American Public Works
      Association, American Society of Mechanical Engineers, Association  of Official
      Analytical Chemists, Air and Waste Management Association, Health Physics Society,
      Instrument Society of America. CRC Press, Inc. Boca Raton, FL.

Macmillan, N.A., and C.D. Creelman. 1991. Detection Theory: A User's Guide. Cambridge
      University Press, Cambridge, England.

Miller, K. M., P. Shebell and G. A. Klemic.  1994.  In Situ Gamma-Ray  Spectrometry for the
      Measurement of Uranium in Surface Soils. Health Physics 67,  140-150.

Myrick, T.E. et al.  1981.  State Background Radiation Levels: Results of Measurements Taken
      During 1975-1979.  ORNL/TM 7343, Oak Ridge National Laboratory, Oak Ridge,
      Tennessee.
National Council on Radiation Protection and Measurements (NCRP).  1976a. Environmental
      Radiation Measurements.  NCRP Report No. 50, NCRP, Bethesda, Maryland.

National Council on Radiation Protection and Measurements (NCRP).  1976b. Tritium
      Measurement Techniques. NCRP Report 47, NCRP, Bethesda, Maryland.

National Council on Radiation Protection and Measurements (NCRP).  1978. Instrumentation
      and Monitoring Methods for Radiation Protection. NCRP Report 57, NCRP, Bethesda,
      Maryland.

National Council on Radiation Protection and Measurements (NCRP).  1985. A Handbook of
      Radioactivity Measurement Procedures. NCRP Report 58, 2nd ed, NCRP, Bethesda,
      Maryland.

National Council on Radiation Protection and Measurements (NCRP).  1987. Exposure of the
      Population in the United States and Canada from Natural Radiation. NCRP Report 94,
      NCRP, Bethesda, Maryland.

National Council on Radiation Protection and Measurements (NCRP).  1991. Calibration of
      Survey Instruments used in Radiation Protection for the Assessment of Ionizing Radiation
       Fields and Radioactive Surface Contamination. NCRP Report 112, NCRP, Bethesda,
      Maryland.

Nuclear Regulatory Commission (NRC). 1974. Termination of Operating Licenses for Nuclear
      Reactors.  Regulatory Guide 1.86, NRC, Washington, D.C.

Nuclear Regulatory Commission (NRC). 1975. Programs for Monitoring Radioactivity in the
      Environs of Nuclear Power Plants.  NRC Regulatory Guide 4.1, Rev. 1, NRC,
      Washington, D.C.

Nuclear Regulatory Commission (NRC). 1979. Quality Assurance for Radiological Monitoring
      Programs (Normal Operations) - Effluent Streams and the Environment. Regulatory
      Guide 4.15, NRC, Washington, D.C.

Nuclear Regulatory Commission (NRC). 1980. Radiological Effluent and Environmental
      Monitoring at Uranium Mills. NRC Regulatory Guide 4.14, Rev. 1, NRC, Washington,
      D.C.

Nuclear Regulatory Commission (NRC). 1982. NRC, Office of Nuclear Reactor Regulation
      Letter to Stanford University, NRC Docket No. 50-401, NRC, Washington, D.C.
Nuclear Regulatory Commission (NRC). 1984. Lower Limit of Detection: Definition and
      Elaboration of a Proposed Position for Radiological Effluent and Environmental
      Measurements. NUREG/CR-4007, U.S. Nuclear Regulatory Commission, Washington
      D.C.

Nuclear Regulatory Commission (NRC). 1990. Severe Accident Risks: An Assessment for Five
      U.S. Nuclear Power Plants.  NUREG-1150, Volume 1.  Office of Nuclear Regulatory
      Research, NRC, Washington, D.C.

Nuclear Regulatory Commission (NRC). 1991. Quality Assurance Guidance for a Low-Level
      Radioactive Waste Disposal Facility.  NUREG-1293, Revision 1. NRC, Washington,
      D.C.

Nuclear Regulatory Commission (NRC). 1992a. Manual for Conducting Radiological Surveys
       in Support of License Termination. NUREG/CR-5849, Draft Report for Comment, U.S.
      Nuclear Regulatory Commission, Washington, D.C. and Oak Ridge Associated
      Universities, Oak Ridge, TN.

Nuclear Regulatory Commission (NRC). 1992b. Residual Radioactive Contamination from
      Decommissioning. NUREG/CR-5512, Final Report.  U.S. Nuclear  Regulatory
      Commission, Washington, D.C. and Pacific Northwest Laboratory,  Richland, WA.

Nuclear Regulatory Commission (NRC). 1994a. Draft Branch Technical Position on Site
      Characterization for Decommissioning. NRC, Washington, D.C.

Nuclear Regulatory Commission (NRC). 1994b. Background as a Residual Radioactivity
      Criterion for Decommissioning, Draft Report.  NUREG-1501,  U. S. Nuclear Regulatory
      Commission, Office of Nuclear Regulatory Research, Washington, D.C.

Nuclear Regulatory Commission (NRC). 1994c.  Working Draft Regulatory Guide on Release
       Criteria for Decommissioning: NRC Staff's Draft for Comment. NUREG-1500, U.S.
      Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington,
      D.C.

Nuclear Regulatory Commission (NRC). 1995. Proposed Methodologies for Measuring Low
      Levels of Residual Radioactivity for Decommissioning.  NUREG-1506, Draft Report for
      Comment, NRC, Washington, D.C.
Nuclear Regulatory Commission (NRC).  1997a. A Proposed Nonparametric Statistical
      Methodology for the Design and Analysis of Final Status Decommissioning Surveys.
      NUREG-1505, Final, NRC, Washington, D.C.

Nuclear Regulatory Commission (NRC).  1997b. Minimum Detectable Concentrations with
      Typical Radiation Survey Instruments for Various Contaminants and Field Conditions.
       NUREG-1507, Final, NRC, Washington, D.C.

Nuclear Regulatory Commission (NRC).  1997c. NMSS Handbook for Decommissioning Fuel
      Cycle and Materials Licensees. NUREG/BR-0241, NRC, Washington, D.C.

Perdue, P. A., R. W. Leggett, and F. F. Haywood.  1978. A Technique for Evaluating Airborne
      Concentrations of Daughters of Radon Isotopes. T. F. Gesell and W. M. Lowder, (eds).
      Third International Symposium on the Natural Radiation Environment, 23-28 April 1978,
       in Natural Radiation Environment III, Volume 1, pp. 347-356.  CONF-780422, Houston,
      Texas.

Peter Gray & Associates (PGA). 2000. The NORM Report. (P.O. Box 11541, Fort Smith,
      Arkansas 72917). Fall 1999/Winter 2000, pp. 1-10.

Taylor, J.K. 1987. Quality Assurance of Chemical Measurements. Lewis Publishers, Inc.,
      Chelsea, MI.

van der Leeden, F., F.L. Troise, and D.K. Todd.  1990.  The Water Encyclopedia. 2nd Edition,
      Lewis Publishers, Chelsea, MI.

Wallo, A., M. Moscovitch, I.E. Rodgers, D. Duffey, and C. Soares. 1994. "Investigations of
       Natural Variations of Cesium-137 Concentrations in Residential Soils." The Health
       Physics Society 39th Annual Meeting, June 28, 1994. McLean, Virginia: The Health
       Physics Society.

Yu, C., A. J. Zielen, J.-J. Cheng, Y. C. Yuan, L. G. Jones, D. J. LePoire, Y. Y. Wang, C. O.
       Loureiro, E. Gnanapragasam, E. Faillace, A. Wallo III, W. A. Williams, and H. Peterson.
      1993. Manual for Implementing Residual Radioactive Material Guidelines Using
      RESRAD, Version 5.0. ANL/EAD/LD-2, Argonne National Laboratory, Argonne, Illinois.
      (DE94-015594)

Walpole, R.E. and R.H. Myers.  1985. Probability and Statistics for Engineers and Scientists.
      3rd Edition,  MacMillan Publishing Company, New York, NY.
 U. S. Code of Federal Regulations

 10 CFR, Chapter 1.  1995. U.S. Nuclear Regulatory Commission.  "Nuclear Regulatory
      Commission."

 10 CFR 20.  1995.  U.S. Nuclear Regulatory Commission.  "Standards for Protection Against
      Radiation."

 10 CFR 20.1001.  1995.  U.S. Nuclear Regulatory Commission.  "Standards for Protection
      Against Radiation—Subpart A—General Provisions: Purpose."

 10 CFR 20.1301.  1995.  U.S. Nuclear Regulatory Commission.  "Dose limits for individual
      members of the public—Subpart D—Occupational Dose Limits:  Dose Limits for
      Individual Members of the Public."

 10 CFR 20.2002.  1995.  U.S. Nuclear Regulatory Commission.  "Method for obtaining
      approval of proposed disposal procedures."

 10 CFR 30.  1995.  U.S. Nuclear Regulatory Commission.  "Rules of General Applicability to
      Domestic Licensing of Byproducts and Material."

 10 CFR 30.36.  1995. U.S. Nuclear Regulatory Commission.  "Licenses: Expiration and
      Termination of Licenses and Decommissioning of Sites and Separate Buildings or
      Outdoor Areas."

 10 CFR 40.  1995.  U.S. Nuclear Regulatory Commission.  "Domestic Licensing of Source
      Material."

 10 CFR 40.42.  1995. U.S. Nuclear Regulatory Commission.  "Licenses: Expiration and
      Termination of Licenses and Decommissioning of Sites and Separate Buildings or
      Outdoor Areas."

 10 CFR 40.65.  1995. U.S. Nuclear Regulatory Commission.  "Domestic Licensing of Source
      Material: Effluent Monitoring Reporting Requirements."

 10 CFR 40, Appendix A.  1995.  U.S. Nuclear Regulatory Commission. "Criteria Relating to
      the Operation of Uranium Mills and the Disposition of Tailings or Wastes Produced by
      the Extraction or Concentration of Source Material From Ores Processed Primarily for
      Their Source Material Content"

 10 CFR Part 50. 1995.  U.S. Nuclear Regulatory Commission. "Domestic Licensing of
      Production and Utilization Facilities."

 10 CFR Part 50, Appendix I.  1995.  U.S. Nuclear Regulatory Commission.  "Numerical Guides
      for Design Objectives and Limiting Conditions for Operations to Meet the Criterion 'As
      Low as is Reasonably Achievable' For Radioactive Material in Light-Water-Cooled
      Nuclear Power Reactor Effluents."

 10 CFR Part 50.82.  1995. U.S. Nuclear Regulatory Commission. "Domestic Licensing of
      Production and Utilization Facilities:  US/IAEA Safeguards Agreement: Application for
      Termination of License."

 10 CFR 70. 1995.  U.S. Nuclear Regulatory Commission. "Domestic Licensing of Special
      Nuclear Material."

 10 CFR 70.38.  1995.  U.S. Nuclear Regulatory Commission.  "Licenses: Expiration and
      Termination of Licenses and Decommissioning of Sites and Separate Buildings or
      Outdoor Areas."

 10 CFR 70.59.  1995.  U.S. Nuclear Regulatory Commission.  "Special Nuclear Material
      Control, Records, Reports and Inspections: Effluent Monitoring Reporting Requirements."

 10 CFR 71. 1996.  U.S. Nuclear Regulatory Commission. "Packaging and Transportation of
      Radioactive Material."

 10 CFR 71.4.  1996. U.S. Nuclear Regulatory Commission. "Packaging and Transportation of
      Radioactive Material—Subpart A—General Provisions: Definitions."

 10 CFR 71.10.  1996.  U.S. Nuclear Regulatory Commission.  "Packaging and Transportation of
      Radioactive Material—Subpart B—Exemptions: Exemption for Low-level Materials."

 10 CFR 71.88.  1996.  U.S. Nuclear Regulatory Commission.  "Packaging and Transportation of
      Radioactive Material—Subpart F—Operating Controls and Procedures: Air Transport of
      Plutonium."

 10 CFR 72.54.  1995.  U.S. Nuclear Regulatory Commission.  "Licensing Requirements for the
      Independent Storage of Spent Nuclear Fuel and High-level Radioactive Waste—Subpart
      C—Issuance and Conditions of License: Expiration and Termination of Licenses and
      Decommissioning of Sites and Separate Buildings or Outdoor Areas."

 40 CFR 141. 1994.  U.S. Environmental Protection Agency. "National Primary Drinking Water
      Regulations."
 40 CFR 141.15.  1994. U.S. Environmental Protection Agency. "National Primary Drinking
      Water Regulations—Subpart B—Maximum Contaminant Levels for Radium-226,
      Radium-228, and Gross Alpha Particle Radioactivity in Community Water Systems."

 40 CFR 141.16.  1994. U.S. Environmental Protection Agency. "National Primary Drinking
      Water Regulations—Subpart C—Maximum Contaminant Levels for Beta Particle and
      Photon Radioactivity from Man-made Radionuclides in Community Water Systems."

 40 CFR Part 190. 1995.  U.S. Environmental Protection Agency. "Environmental Radiation
      Protection Standards for Nuclear Power Operations."

 40 CFR 192, 30-34. 1994. U.S. Environmental Protection Agency.  "Health and
      Environmental Protection Standards for Uranium and Thorium Mill Tailings—Subpart
      D—Standards for Management of Uranium Byproduct Materials Pursuant to Section 84 of
      the Atomic Energy Act of 1954, as Amended."

 40 CFR 192, 40-43. 1994. U.S. Environmental Protection Agency.  "Health and
      Environmental Protection Standards for Uranium and Thorium Mill Tailings—Subpart
      E—Standards for Management of Thorium Byproduct Materials Pursuant to Section 84 of
      the Atomic Energy Act of 1954, as Amended."

 40 CFR 300.  1990.  U.S. Environmental Protection Agency. "Hazard Ranking System."

 49 CFR 107.  1996.  U.S. Department of Transportation.  "Registration of
      shipper/carrier—Subpart G—Registration of Persons Who Offer or Transport Hazardous
      Materials."

 49 CFR 171.  1996.  U.S. Department of Transportation.  "Accident Reporting."

 49 CFR 172.  1996.  U.S. Department of Transportation.  "Marking and Labeling Packages for
      Shipment."

 49 CFR 173.  1996.  U.S. Department of Transportation.  "Packaging."

 49 CFR 174.  1996.  U.S. Department of Transportation.  "Transport by Rail."

 49 CFR 175.  1996.  U.S. Department of Transportation.  "Transport by Air."

 49 CFR 176.  1996.  U.S. Department of Transportation.  "Transport by Vessel."

 49 CFR 177.  1996.  U.S. Department of Transportation.  "Transport on Public Highway."

 U.S. Federal Code

 Atomic Energy Act of 1954 (AEA), as Amended.

 Clean Air Act of 1955 (CAA).

 Diplomatic Security and Anti-Terrorism Act of 1986.

 Energy Reorganization Act of 1974, as Amended.

 Executive Order 10831, "Federal Compliance With Pollution Control Standards."

 Energy Policy Act of  1992.

 Federal Water Pollution Control Act of 1948 (FWPCA).

 Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA),
      as Amended.

 Low Level Radioactive Waste Policy Act (LLRWPA) of 1980, as amended.

 Low-Level Radioactive Waste Policy Amendments Act of 1985.


 Nuclear Non-Proliferation Act of 1982.

 Nuclear Waste Policy Act of 1982 (NWPA).

 Nuclear Waste Policy Amendments Act of 1987.

 Resource Conservation and Recovery Act of 1976 (RCRA).

 Safe Drinking Water Act of 1974 (SDWA).

 Solar, Wind, Waste and Geothermal Power Production Incentives Act of 1990.

 Toxic Substances Control Act of 1976 (TSCA).

 Uranium Mill Tailings Radiation Control Act of 1978 (UMTRCA), as Amended.

 West Valley Demonstration Project Act of 1980.

                                    APPENDIX A
            Example of MARSSIM Applied to a Final Status Survey
A.1 Introduction

This appendix presents the final status survey for a relatively simple example of a radiation site.
Portions of this example appear earlier in Chapter 5 and Chapter 8. This appendix highlights the
major steps for implementing a final status survey and gathering information needed to prepare a
report. The report's format will vary with the requirements of the responsible regulatory agency.
The Final Status Survey Checklist given at the end of Section 5.5 serves as a general outline for
this appendix—although not every point is discussed in detail. Chapters providing discussions
on particular points are referenced at each step.  This example presents detailed calculations for a
single Class 1 survey unit.  Section A.2 addresses the completion of steps 1-4 of the Data Quality
Objectives (DQO) Process (see Appendix D, Sections D.1 to D.4). Section A.3 addresses the
completion of steps 5-7 of the DQO Process (see Appendix D, Sections D.5 to D.7).  Section A.4
covers survey performance.  Section A.5 discusses evaluating the survey results using Data
Quality Assessment (DQA, see Appendix E).
A.2 Survey Preparations
(Chapter 3- Historical Site Assessment)

The Specialty Source Manufacturing Company produced low-activity encapsulated sources of
radioactive material for use in classroom educational projects, instrument calibration, and
consumer products.  The manufacturing process—conducted between 1978 and 1993—involved
combining a liquid containing a known quantity of the radioactive material with a plastic binder.
This mixture was poured into a metal form and allowed to solidify. After drying, the form and
plastic were encapsulated in a metal holder which was pressure sealed.  A variety of
radionuclides were used in this operation, but the only one having a half-life greater than 60 days
was 60Co. Licensed activities were terminated as of April 1993  and stock materials containing
residual radioactivity were disposed using authorized procedures. Decontamination activities
included the initial identification and removal of contaminated equipment and facilities.  The site
was then surveyed to demonstrate that the radiological conditions satisfy regulatory agency
criteria for release.

A.2.1 Identify the Radionuclides of Concern
(Section 4.3)

More than 15 half-lives have passed for the materials with a half-life  of 60 days or less.  Based
on radioactive  decay and the initial quantities of the radionuclides, the quantities that could
remain at the site are negligible.  A characterization survey confirmed that no radioactive
contaminants, other than 60Co, were present.

A.2.2 Determine Residual Radioactivity Limits (DCGLs)
(Section 4.3)

The objective of this survey is to demonstrate that residual contamination in excess of the release
criterion is not present at the site.  The DCGLW for 60Co used for evaluating survey results is
8,300 Bq/m2 (5,000 dpm/100 cm2) for surface contamination of structures.  The DCGLW for
contamination in soil is 140 Bq/kg (3.8 pCi/g).1
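
The SI and conventional values quoted above are related by the standard unit definitions (1 Bq =
60 dpm; 1 pCi = 0.037 Bq). The following minimal sketch in Python (variable names are
illustrative and not part of MARSSIM) reproduces the conversions:

    # Structure DCGLw: dpm/100 cm2 to Bq/m2.
    dpm_per_100cm2 = 5000.0
    bq_per_100cm2 = dpm_per_100cm2 / 60.0          # 1 Bq = 60 dpm
    bq_per_m2 = bq_per_100cm2 * (10000.0 / 100.0)  # 10,000 cm2 in a square meter
    print(round(bq_per_m2))                        # 8333, quoted as approximately 8,300 Bq/m2

    # Soil DCGLw: pCi/g to Bq/kg.
    pci_per_g = 3.8
    bq_per_kg = pci_per_g * 0.037 * 1000.0         # 0.037 Bq per pCi; 1,000 g per kg
    print(round(bq_per_kg))                        # 141, consistent with the 140 Bq/kg quoted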

A.2.3 Classify Areas Based on Contamination Potential.
(Section 4.4)

This facility consists of one administration/manufacturing building situated on approximately 0.4
hectares (1.0 acres) of land as shown in Figure A.1.  The building is a concrete block structure on
a poured concrete slab with a poured concrete ceiling.  The northern portion of the building
housed the manufacturing operations, and consists of a high-bay area of approximately 20 m x 20
m with a 7 m high ceiling.  The remainder of the building is single-story with numerous small
rooms partitioned by drywall construction. This portion of the building, used for administration
activities,  occupies an area of approximately 600 m2 (20 m x 30 m). The license does not
authorize use of radioactive materials in this  area. Operating records and previous radiological
surveys do not identify a potential for residual contamination in this section of the building.
Figure A.2 is a drawing of the building.

The property is surrounded by a chain-link security fence. At the northern end of the property,
the surface is paved and was used as a parking lot for employees and for truck access to the
manufacturing and shipping/receiving areas.  The remainder of the property is grass-covered.
There are no indications of incidents or occurrences leading to radioactive material releases from
the building. Previous surveys were reviewed and the results were determined to be appropriate
for planning the final status survey.  These surveys identified no radioactive contamination
outside the building.

A.2.4 Identify Survey Units
(Section 4.6)

Based on the results of other decommissioning surveys at the site and the operating history, the
following survey units were used to design the final status survey.  All of the interior survey units
consist of concrete surfaces (either poured concrete or cinder block) with the exception of the
administration areas which are drywall. The results of previous surveys demonstrated that the
same reference area could be used to represent the poured concrete and cinder block surfaces.
         The DCGL values used in this appendix are meant to be illustrative examples and are not meant to be
generally applied.

[Plot plan: fenced property bordered by Main Street, showing the paved area, the manufacturing
and administration portions of the building, and a reference grid marked at 10 m intervals
(10W to 70E, 10N to 70S); scale bars in feet and meters.]
          Figure A.1  Plot Plan of the Specialty Source Manufacturing Company




[Floor plan: manufacturing area in the northern portion and administration area in the southern
portion of the building; internal partitions (walls) removed.]
 Figure A.2 Building Floor Plan




Structures
       Class 1       Floor and lower walls (up to 2 meters above the floor) of manufacturing
                    area - 4 survey units of 140 m2 each.

       Class 2       Upper walls (over 2 meters above the floor) of manufacturing area - 4
                    survey units of 100 m2 each.
                    Ceiling of manufacturing area - 4 survey units of 100 m2 each.
                    Paved area outside manufacturing area roll-up door - 1 survey unit of
                     60 m2.

       Class 3       Floors and lower walls of administration areas - 1 survey unit.
                    Remainder of paved surfaces - 1 survey unit.

Land Areas
       Class 3       Lawn areas - 1 survey unit.

A.2.5 Select Survey Instrumentation and Survey Techniques
(Section 4.7, Chapter 6, Chapter 7, Appendix H, and Appendix M)

For interior surfaces, direct measurements of gross beta activity were made using one minute
counts on a gas flow proportional counter with an MDC of 710 Bq/m2 (425 dpm/100 cm2). This
is actually less than  10% of the DCGL for 60Co. Surfaces were scanned using either a 573  cm2
floor monitor with an MDC of 6,000 Bq/m2 (3,600 dpm/100 cm2) or a 126 cm2 gas flow
proportional counter with an MDC of 3,300 Bq/m2 (2,000 dpm/100 cm2).

Exterior soil surfaces were sampled and counted in a laboratory using a Ge spectrometer with an
MDC of 20 Bq/kg (0.5 pCi/g). This is actually slightly greater than 10% of the DCGL for 60Co.
Soil surfaces were scanned using a NaI(Tl) scintillator with an MDC of 185 Bq/kg (5.0 pCi/g) of
60Co.
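
As a quick check of these instrument selections, each measurement MDC can be compared with the
corresponding DCGLW. The comparison below is a minimal Python sketch using only the values
quoted above; the 10% benchmark is the one discussed in this section.

    # Ratio of each measurement MDC to its DCGLw.
    surface_mdc, surface_dcgl = 710.0, 8300.0      # Bq/m2
    soil_mdc, soil_dcgl = 20.0, 140.0              # Bq/kg

    print(f"surface: {surface_mdc / surface_dcgl:.1%} of DCGLw")  # 8.6%, below 10%
    print(f"soil: {soil_mdc / soil_dcgl:.1%} of DCGLw")           # 14.3%, slightly above 10%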

Examples of scanning patterns used in each of the Class 1, 2, and 3 areas are shown in Figure
A.3.

A.2.6 Select Representative Reference (Background) Areas
(Section 4.5)

For the purposes of evaluating gross beta activity on structure surfaces, a building of similar
construction was identified on the property immediately east of the site. This building served as
a reference for surface activity measurements.  Two reference areas—one  for concrete surfaces
and one for drywall  surfaces—were required.  Because 60Co is not a constituent of background
and evaluation of the soil concentrations was radionuclide-specific, a reference area was not
needed for the  land area surveys.

[Scanning coverage shown in the figure:
       Interior concrete survey units - Class 1 floors: 100% scan with floor monitor; Class 1
       walls: 100% scan with gas flow proportional counter.
       Manufacturing area upper walls and ceiling (Class 2): 25% scan with gas flow
       proportional counter.
       Administration/office areas (Class 3): floors, 25% scan with floor monitor; walls, 25%
       scan with gas flow proportional counter.
       Class 2 paved area: 100% scan with floor monitor.
       Class 3 paved area: 25% scan with NaI(Tl).
       Class 3 lawn area: 100% scan with NaI(Tl) at downspouts and edge of pavement (runoff
       areas); 10% scan with NaI(Tl) on remaining lawn area.]

     Figure A.3 Examples of Scanning Patterns for Each Survey Unit Classification

A.2.7 Prepare Area
(Section 4.8)

Prior to the survey, and as part of the decommissioning process, all internal partitions were
removed from the manufacturing area. Other items removed include the radioactive material
control exhaust system, a liquid waste collection system, and other furnishings and fixtures not
considered an integral part of the structure.

A.2.8 Establish Reference Coordinate Systems
(Section 4.8.5)

Land areas were gridded at 10 m intervals along north-south and east-west axes in preparation for
the characterization survey as shown in Figure A.1.  The grid was checked to verify its use for the
final status survey.

Structure surfaces were already gridded at 2 m intervals,  incorporating the floors and the lower 2
m of the walls. Figure A.4 is an example of the coordinate system installed for one of the Class 1
interior concrete  survey units.
A.3 Survey Design

A.3.1 Quantify DQOs
(Section 2.3, Appendix D)

The null hypothesis for each survey unit is that the residual radioactivity concentrations exceed
the release criterion (Scenario A, Figure D.5).  Acceptable decision error probabilities for
testing the hypothesis were determined to be α = 0.05 and β = 0.05 for the Class 1 interior
concrete survey units, and α = 0.025 and β = 0.05 for all other survey units.

A.3.2 Construct the Desired Power Curve
(Section 2.3, Appendix D.6, Appendix I.9)

The desired power curve for the Class 1 interior concrete survey units is shown in Figure A.5.
The gray region extends from 4,200 to 8,300 Bq/m2 (2,500 to 5,000 dpm/100 cm2). The survey
was designed for the  statistical test to have 95% power to decide that a survey unit containing
less than 4,200 Bq/m2 (2,500 dpm/100 cm2) above background meets the release criterion.  For
the same test, a survey unit containing over 17,000 Bq/m2 (10,000 dpm/100 cm2) above
background had less than a 2.5% probability of being released.
[Reference coordinate grid at 2 m intervals covering the floor and the north and west walls of
the survey unit, marked 28E to 42E and 18S to 32S; scale bars in feet and meters; north arrow.]
                Figure A.4 Reference Coordinate System for the Class 1
                            Interior Concrete Survey Unit

[Chart axes: decision probability (0 to 1) versus true activity above background, 0 to 15,000
dpm/100 cm2. Annotations mark the acceptable Type I decision error rate (1-α), the acceptable
Type II decision error rate (β), and the gray region in which larger decision error rates are
acceptable.]
         Figure A.5 Power Chart for the Class 1 Interior Concrete Survey Unit

A.3.3 Specify Sample Collection and Analysis Procedures
(Chapter 7)

In the Class 3 exterior survey unit, soil cores were taken to a depth of 7.5 cm (3 in.) based on
development of the DQOs, the conceptual site model, and the assumptions used to develop the
DCGLs. Each sample was labeled with the location code and the date and time of sampling, sealed
in a plastic bag, and weighed prior to shipment to the analytical laboratory. At the laboratory, the
samples were weighed, dried, and weighed again. The samples were then ground to a uniform
particle size to homogenize them, consistent with the modeling assumptions used to develop the
DCGLs. One hundred gram (100 g) aliquots were gamma counted using a germanium detector
with a multichannel analyzer.

The decision to use radionuclide-specific measurements for soil means that the survey of the
Class 3 exterior soil surface survey unit was designed for use with the one-sample Sign test.

A.3.4 Provide Information on Survey Instrumentation and Techniques
(Chapter 6)

A gas flow proportional counter with a 20 cm2 probe area and 16% 4π response was placed on the
surface at each direct measurement location, and a one-minute count was taken. Calibration and
background were checked before and after each series of measurements. The DCGLW, adjusted
for the detector size and efficiency, is:

              (5,000 dpm/100 cm2) (0.20) (0.16) = 160 cpm                             A-1
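
Equation A-1 simply scales the surface DCGLW by the fraction of 100 cm2 covered by the probe
and by the detector efficiency. A minimal Python sketch of that arithmetic, with illustrative
variable names:

    # Gross count-rate equivalent of the DCGLw for this detector.
    dcgl_dpm_per_100cm2 = 5000.0   # surface DCGLw
    probe_area_cm2 = 20.0          # physical probe area
    efficiency = 0.16              # 4-pi response

    dcgl_cpm = dcgl_dpm_per_100cm2 * (probe_area_cm2 / 100.0) * efficiency
    print(dcgl_cpm)                # 160.0 cpm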

The decision to use total  activity measurements for interior surfaces means that the survey of all
the interior survey units was designed for use with the two-sample WRS test for comparison with
an appropriate reference area.

A.3.5 Determine Numbers of Data Points
(Section 5.5.2.2)

This facility contains 15 survey units consisting of interior concrete surfaces, interior drywall
surfaces, exterior surface soil, and exterior paved surfaces.

Concrete Surfaces

The site has  12 interior concrete survey units to be compared with 1 reference area.  The same
type of instrument and method were used to perform measurements in each area.
The lower bound of the gray region is selected to be one-half the DCGL, and Type I and Type II
error values (α and β) of 0.05 were selected. The number of samples/measurements to be
obtained, based on the requirements of the statistical tests, was determined using Equation 5-1 in
Section 5.5.2.2:

                        N = (Z1-α + Z1-β)² / (3(Pr - 0.5)²)                           A-2

From Table 5.2 it is found that Z1-α = Z1-β = 1.645 for α = β = 0.05.

The parameter Pr depends on the relative shift, Δ/σ.  The width of the gray region, Δ, in Figure
A.5 is 4,200 Bq/m2 (2,500 dpm/100 cm2), which corresponds to 80 cpm. Data from previous
scoping and characterization surveys indicate that the background level is 45 ± 7 (1σ) cpm. The
standard deviation of the contaminant in the survey unit (σs) is estimated at ± 20 cpm.  When the
estimated standard deviations in the reference area and the survey units are different, the larger
value should be used to calculate the relative shift. Thus, the value of the relative shift, Δ/σ, is
(160 - 80)/20, or 4.² From Table 5.1, the value of Pr is approximately 1.000.

The number of data points for the WRS test of each combination of reference area and survey
units according to the allocation formula was:

                        N = (1.645 + 1.645)² / (3(1.000 - 0.5)²) = 14.4               A-3
Adding an additional 20% and rounding up yielded 18 data points total for the reference area and
each survey unit combined.  Note that the same result is obtained by simply using Table 5.3 or
Table I.2b with α = β = 0.05 and Δ/σ = 4. Of this total number, 9 were planned from the
reference area and 9 from each survey unit. The total number of measurements calculated based
on the statistical tests was 9 + (12)(9) = 117.
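
The sample-size arithmetic above can be reproduced in a few lines. The sketch below is
illustrative only: it takes Pr from Table 5.1 as stated in the text and computes the Z quantiles
with the Python standard library rather than reading them from Table 5.2.

    from math import ceil
    from statistics import NormalDist

    alpha = beta = 0.05
    z_alpha = NormalDist().inv_cdf(1 - alpha)      # 1.645, matching Table 5.2
    z_beta = NormalDist().inv_cdf(1 - beta)
    p_r = 1.000                                    # Table 5.1 value for a relative shift of 4

    n = (z_alpha + z_beta) ** 2 / (3 * (p_r - 0.5) ** 2)
    print(round(n, 1))                             # 14.4 (Equation A-3)
    print(ceil(1.2 * n))                           # 18 after the 20% increase, split 9 and 9
    print(9 + 12 * 9)                              # 117 measurements for 12 survey units plus reference area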

A.3.6 Evaluate the Power of the Statistical Tests Against the DQOs
(Appendix I.9.2)

Using Equation I-8, the prospective power expected of the WRS test was calculated using the
fact that 9 samples were planned in each of the survey units and the reference area.  The value of
σs was taken to be 20 cpm, the larger of the two values anticipated for the reference area (7 cpm)
and the survey unit (20 cpm).  This prospective power curve is shown in Figure A.6.
        ² Ordinarily, Δ/σ would be adjusted to a value between 1 and 3. For this example the adjustment was not
made.

[Plot of prospective power versus residual radioactivity, 100 to 200 cpm.]
       Figure A.6 Prospective Power Curve for the Class 1 Interior Concrete Survey Unit
A.3.7 Ensure that the Sample Size is Sufficient for Detecting Areas of Elevated Activity
(Chapter 5.5.2.4)

The Class 1 concrete interior survey units each have an area of 140 m2 (Figure A.7).  The
distance between measurement locations in these survey units was:

                        L = √(A / (0.866n)) = √(140 / (0.866 x 9)) = 4.2 m            A-4

[Diagram of the triangular measurement grid over the floor and the north and west walls,
showing direct measurement locations and the random start location; scale in feet.]
      Figure A.7 Measurement Grid for the Class 1 Interior Concrete Survey Unit


The result for L was rounded down to the nearest meter, giving L = 4 m. This resulted in an area
between sampling points of 0.866L² = 13.9 m2. The DCGLW of 8,300 Bq/m2 (5,000 dpm/100
cm2) was well above the scanning MDC of 6,000 Bq/m2 (3,600 dpm/100 cm2) for the least
sensitive of the two scanning instruments (the floor monitor).  Therefore, no adjustment to the
number of data points to account for areas of elevated activity was necessary.
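
The spacing and the area per measurement point can be checked with a short calculation. A
minimal sketch, assuming the 9 measurement locations planned per survey unit in Section A.3.5:

    from math import sqrt

    area_m2 = 140.0                       # Class 1 interior survey unit area
    n_points = 9                          # planned measurement locations

    spacing = sqrt(area_m2 / (0.866 * n_points))
    print(round(spacing, 1))              # about 4.2 m (Equation A-4)

    spacing = 4.0                         # rounded down to the nearest meter
    print(round(0.866 * spacing ** 2, 1)) # about 13.9 m2 between sampling points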

A.3.8 Specify Sampling Locations
(Chapter 5.5.2.5)

Two random numbers between zero and one were generated to locate the random start for the
sampling grid.  Using Table I.6 in Appendix I, 0.322467 and 0.601951 were selected.  The
random start for the triangular sampling pattern was found by multiplying these numbers by the
length of the reference grid X and Y axes:

                          X = 0.322467 x 12 m = 3.9 m                                A-5
                          Y = 0.601951 x 12 m = 7.2 m                                A-6

The first row of measurement locations was laid out at 4 m intervals parallel to one axis of the
reference grid.  The second row was positioned (0.866)(4) = 3.5 m from the first row,  with
measurement locations offset by 2 m from those in the first  row. The measurement grid is shown
in Figure A.7. When the measurement grid was constructed it was found that 10 measurement
locations were identified within the boundaries of the survey unit, which is greater than the 9
measurement locations calculated to be required for the statistical test. Because the spacing
between the measurements (L) is important for identifying areas of elevated activity, all of the
identified sampling locations should be used.
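
The layout just described (a random start scaled to the reference grid, rows about 3.5 m apart,
points 4 m apart within a row, and alternate rows offset by 2 m) can also be generated
programmatically. The sketch below assumes an idealized 12 m by 12 m footprint for illustration;
the actual survey unit outline (floor plus lower walls) is irregular, and counting the points that
fall inside it is what produced the 10 locations used here.

    from math import sqrt

    L = 4.0                                   # spacing between measurement locations (m)
    x0, y0 = 0.322467 * 12, 0.601951 * 12     # random start from Equations A-5 and A-6
    row_gap = sqrt(3) / 2 * L                 # 0.866 x 4, about 3.5 m between rows

    points = []
    for j in range(-5, 6):
        y = y0 + j * row_gap
        offset = L / 2 if j % 2 else 0.0      # alternate rows offset by 2 m
        for i in range(-5, 6):
            x = x0 + offset + i * L
            if 0 <= x <= 12 and 0 <= y <= 12: # keep points inside the idealized footprint
                points.append((round(x, 1), round(y, 1)))

    print(sorted(points))                     # candidate measurement locations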

A.3.9 Develop Quality Control Procedures
(Section 4.9)

A.3.10 Document Results of Planning into a Quality Assurance Project Plan
(Section 9.2)
A.4 Conducting Surveys

A.4.1 Perform Reference (Background) Area Measurements and Scanning
(Chapter 6)

A.4.2 Collect and Analyze Samples
(Chapter 7)
A.5  Evaluating Survey Results

A.5.1  Perform Data Quality Assessment
(Chapter 8.2)

The data from the one Class 1 interior concrete survey unit and its associated reference area are
given in Table A.1.  Since ten sampling locations were identified, ten results are listed for the
survey unit.3 The average measurement in the survey unit is 206 cpm, and in the reference area
the average is 46 cpm.  The means and the medians are nearly equal in both cases. The standard
deviations are also consistent with those estimated during the survey design.  The survey unit
clearly contains residual radioactivity close to the DCGLW of 160 cpm (calculated using
Equation A-1).

         Table A.1 Class 1 Interior Concrete Survey Unit and Reference Area Data

                                  Reference Area (cpm)      Survey Unit (cpm)
                                           45                     205
                                           36                     207
                                           32                     203
                                           57                     196
                                           46                     211
                                           60                     208
                                           39                     172
                                           45                     216
                                           53                     233
                                           42                     209
         mean                              46                     206
         standard deviation                 9                    15.4
         median                            45                   207.5
        There are also ten results listed for the reference area. This is only because there were also ten locations
identified there when the grid was laid out. Had nine locations been found, the survey would proceed using those nine
locations. There is no requirement that the number of sampling locations in the survey unit and reference area be equal.
It is only necessary that at least the minimum number of samples required for the statistical tests is obtained in each.
The stem and leaf displays (see Appendix I.7) for the data appear in Table A.2. They indicate
that the data distributions are unimodal with no notable asymmetry. There are two noticeably
extreme values in the survey unit data set, at 172 and 233 cpm.  These are both about 2 standard
deviations from the mean. A check of the data logs indicated nothing unusual about these points,
so there was no reason to conclude that these values were due to anything other than random
measurement variability.

       Table A.2 Stem and Leaf Displays for Class 1 Interior Concrete Survey Unit
         Reference Area                             Survey Unit
          30 | 2 6 9                                 170 | 2
          40 | 2 5 5 6                               180 |
          50 | 3 7                                   190 | 6
          60 | 0                                     200 | 3 5 7 8 9
                                                     210 | 1 6
                                                     220 |
                                                     230 | 3
A Quantile-Quantile plot (see Appendix I.8) of these data, shown in Figure A.8, is consistent with
these conclusions. The median and spread of the survey unit data are clearly above those in the
reference area. The middle part of the curve has no sharp rises. However, the lower and upper
portion of the curve both show a steep rise due to the two extreme measurements in the survey
unit data set.

A.5.2 Conduct Elevated Measurement Comparison
(Section 8.5.1)

The DCGLW is 160 cpm above background.  Based on an area of 13.9 m2 between measurement
locations for L = 4 m, the area factor (from Table 5.7) is approximately 1.5. This means the
DCGLEMC is 240 cpm above background.  Even without subtracting the average background
value of 46, there were no survey unit measurements exceeding this value. All of the survey unit
measurements exceed the DCGLW and six exceed 206 cpm—the DCGLW plus the average
background. If any of these data exceeded three standard deviations of the survey unit mean, they
might have been considered unusual, but this was not the  case. Thus, while the amount of
residual radioactivity appeared to be near the release criterion, there was no  evidence of smaller
areas of elevated residual radioactivity.
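
The elevated measurement comparison reduces to multiplying the DCGLW by the area factor and
screening each measurement against the result. A minimal sketch using the survey unit data from
Table A.1:

    survey_unit = [205, 207, 203, 196, 211, 208, 172, 216, 233, 209]  # gross cpm, Table A.1
    background = 46                     # average reference area count rate (cpm)
    dcgl_w = 160                        # cpm above background
    area_factor = 1.5                   # Table 5.7 value for 13.9 m2

    dcgl_emc = area_factor * dcgl_w     # 240 cpm above background
    flagged = [m for m in survey_unit if m - background > dcgl_emc]
    print(dcgl_emc, flagged)            # 240.0 []: no measurements flagged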

[Quantile-quantile plot of survey unit measurements (about 170 to 240 cpm) against reference
area measurements (about 35 to 60 cpm).]
    Figure A.8 Quantile-Quantile Plot for the Class 1 Interior Concrete Survey Unit

A.5.3 Conduct Statistical Tests
(Section 8.3, 8.4)

For the Class 1 interior concrete survey unit, the two-sample nonparametric statistical tests of
Section 8.4 were appropriate since, although the radionuclide of concern does not appear in
background, radionuclide-specific measurements were not made.  This survey unit was classified
as Class 1, so the 10 measurements performed in the reference area and the 10 measurements
performed in the survey unit were made on random start triangular grids.

Table A.3 shows the results of the twenty measurements in the first column.  The average and
standard deviation of the reference area measurements were 46 and 9, respectively.  The average
and standard deviation of the survey unit measurements were 206 and 15, respectively.
             Table A.3  WRS Test for Class 1 Interior Concrete Survey Unit
           Data     Area     Adjusted Data     Ranks     Reference Area Ranks
            45       R            205            7.5             7.5
            36       R            196            4               4
            32       R            192            3               3
            57       R            217           15              15
            46       R            206            9               9
            60       R            220           16              16
            39       R            199            5               5
            45       R            205            7.5             7.5
            53       R            213           13              13
            42       R            202            6               6
           211       S            211           12               0
           208       S            208           10               0
           172       S            172            1               0
           216       S            216           14               0
           233       S            233           18               0
           209       S            209           11               0
           237       S            237           19               0
           176       S            176            2               0
           253       S            253           20               0
           229       S            229           17               0
                                       Sum =    210              86
The analysis proceeded as described in Section 8.6.3. In the "Area" column, the code "R" is
inserted to denote a reference area measurement, and "S" to denote a survey unit measurement.
In the "Data" column, the data were simply listed as obtained.  The Adjusted Data were obtained
by adding the DCGLW to the reference area measurements and leaving the survey unit
measurements unchanged. The ranks of the Adjusted Data appear in the "Ranks" column.  They
range from 1 to 20, since there is a total of 20 (10+10) measurements.  The sum of all of the
ranks is 20(20+1)/2 = 210. It is recommended to check this value as a guard against errors in the
rankings.

The "Reference Area Ranks" column contains only the ranks belonging to the reference area
measurements. The total is 86. This was compared with the entry in Table I.4 for α = 0.05, with
n = 10 and m = 10. This critical value is 127.  Thus, the sum of the reference area ranks was less
than the critical value, and the null hypothesis (that the survey unit concentrations exceed the
DCGLW) was accepted.
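
The ranking in Table A.3 is straightforward to script. The sketch below uses only the Python
standard library: it adds the DCGLW to the reference area data, ranks the pooled values (using
midranks for ties, such as the two adjusted values of 205), and sums the reference area ranks for
comparison with the Table I.4 critical value.

    reference = [45, 36, 32, 57, 46, 60, 39, 45, 53, 42]
    survey = [211, 208, 172, 216, 233, 209, 237, 176, 253, 229]
    dcgl_w = 160
    critical_value = 127                   # Table I.4 for alpha = 0.05, n = m = 10

    adjusted = [(r + dcgl_w, "R") for r in reference] + [(s, "S") for s in survey]
    pooled = sorted(v for v, _ in adjusted)

    def midrank(v):
        # average rank of all pooled values equal to v
        first = pooled.index(v) + 1
        ties = pooled.count(v)
        return first + (ties - 1) / 2

    ref_rank_sum = sum(midrank(v) for v, area in adjusted if area == "R")
    print(ref_rank_sum)                    # 86.0
    print(ref_rank_sum >= critical_value)  # False: the null hypothesis is not rejected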


Again, as in Section 8.6.3, the retrospective power curve for the WRS test was constructed as
described in Appendix I.9, using Equations I-8, I-9, and I-10, together with the actual number of
concentration measurements obtained, N. The power as a function of Δ/σ was calculated using
the observed standard deviation, s = 15.4, in place of σ. The values of Δ/σ were converted to
cpm using:

        cpm = DCGLW - (Δ/σ)(observed standard deviation)                              A-7

The results for this example are plotted in Figure A.9, showing the probability that the survey
unit would have passed the release criterion using the WRS test versus cpm of residual
radioactivity.  This curve shows that the data quality objectives were easily met: a survey unit
with less than about 130 cpm above background would almost always pass, and a survey unit
with more than about 170 cpm above background would almost always fail.

A.5.4 Estimate Amount of Residual Radioactivity
(Chapter 8.5.2.1)

The amount of residual radioactivity in the survey unit above background was estimated
following the WRS test using the difference between the mean measurement in the survey unit
and the mean measurement in the reference area: δ = 206 - 46 = 160 cpm.  This was converted to
a surface activity concentration of 8,300 Bq/m2 (5,000 dpm/100 cm2), which is just at the
limiting value, the DCGLW.

The difference in the median measurements (207.5 - 45 = 162.5) was converted to a surface
activity concentration of 8,500 Bq/m2 (5,100 dpm/100 cm2). This slightly exceeds the DCGLW.
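
Converting these count-rate differences back to surface activity reverses Equation A-1: divide by
the probe-area fraction and efficiency to obtain dpm/100 cm2, then convert to Bq/m2. A minimal
sketch of that arithmetic:

    probe_fraction, efficiency = 0.20, 0.16        # 20 cm2 probe, 16% response

    def net_cpm_to_bq_per_m2(net_cpm):
        dpm_per_100cm2 = net_cpm / (probe_fraction * efficiency)
        return dpm_per_100cm2 / 60.0 * 100.0       # 1 Bq = 60 dpm; 100 x (100 cm2) per m2

    print(round(net_cpm_to_bq_per_m2(206 - 46)))   # 8333, quoted as 8,300 Bq/m2
    print(round(net_cpm_to_bq_per_m2(207.5 - 45))) # 8465, quoted as 8,500 Bq/m2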

[Plot of retrospective power versus residual radioactivity, 100 to 200 cpm.]
      Figure A.9 Retrospective Power Curve for the Class 1 Interior Concrete Survey Unit
                                    APPENDIX B

           SIMPLIFIED PROCEDURE FOR CERTAIN USERS OF
          SEALED SOURCES, SHORT HALF-LIFE MATERIALS,
                           AND SMALL QUANTITIES

A large number of users of radioactive materials may use a simplified procedure to demonstrate
regulatory compliance for decommissioning, avoiding complex final status surveys.  Sites that
qualify for simplified decommissioning procedures are those where radioactive materials have
been used or stored only in the form of: non-leaking, sealed sources; short half-life radioactive
materials (e.g., t1/2 <  120 days) that have since decayed to insignificant quantities; small quantities
exempted or not requiring a specific license from a regulatory authority; or combinations of the
above.

The user of a site that may qualify for implementation of a simplified procedure should provide
the regulatory authority with a minimum of: (1) a certification that no residual radioactive
contamination attributable to the user's activities is detectable by generally accepted survey
methods for decommissioning; and (2) documentation on the disposal of nuclear materials, such
as the information required in Form NRC-314 (Certification of Disposition of Materials).  This
minimum information may be used by the regulatory authority to document protection of both
the public health and safety and the environment, based on the transfer, decay, or disposal of
radioactive material  in some authorized manner.

Normally, the absence of radioactive contamination can be demonstrated by: (1) documenting the
amounts, kinds and uses of radionuclides as well as the processes involved; (2) conducting a
radiation survey of the site; and (3) submitting a report on this survey. More specifically, a user
of a qualified site should document from process knowledge and the nature of the use that either
no or unmeasurable  quantities of radioactive material remain onsite—whether on surfaces,
buried, imbedded, submersed, or dissolved. The submittal to the regulatory authority should
include possession history, use of the radioactive materials, and, if applicable, results of all leak
tests.  Where only small quantities or short half-life materials were handled, the regulatory
authority may consider the documentation on a case-by-case basis.

For those sites where a simple final status survey is conducted to demonstrate compliance with
the release criterion, the following information should be included in the final status survey
report:

•      basis for selecting  the instrumentation used for the survey
•      nature of the radionuclides surveyed
•      measurement techniques and instruments used, including references for procedures and
       protocols used to perform the measurements

•      minimum detectable concentrations (MDCs) of the instruments and measurement systems
       used to perform the measurements
•      calibration, field testing, and maintenance of the instrumentation
•      qualifications of the personnel using the instrumentation
•      methods used to interpret the survey measurements
•      qualifications of the personnel interpreting the survey measurements
•      measurement results and measurement locations including the operator's name,
       instrument model and serial number, date the measurement was performed, and
       traceability of the measurement location

The number of measurements in each survey unit and each reference area can be determined
using Table 5.3 for sites where the radionuclide of potential interest is present in background.
The number of measurements for each survey unit where the radionuclide is not present in
background can be determined using Table 5.5. Values for acceptable decision error levels (α
and β) and the relative shift (Δ/σ) can be determined as described in Section 5.5.2.  For sites
where the simplified approach in this appendix is appropriate, reasonably conservative values for
these parameters would be α = 0.05, β = 0.05, and Δ/σ = 1. After increasing the number of
measurements by 20% to ensure adequate power for the statistical tests, Table 5.3 and Table 5.5
list a value of approximately 30 measurements for each survey unit and each reference area.
Therefore, 30 measurements may be used in place of the guidance in Section 5.5.2 at sites that
qualify for the simplified survey design process.

The results  of the survey should be compared to derived concentration guideline levels (DCGLs)
using an appropriate statistical test, such as the Student's t test or Wilcoxon test. If all
measurements are less than the DCGLW, then the statistics do not need to be addressed because
the conclusions are obvious.  If the mean  of the measurements  exceeds the DCGLW, the survey
unit obviously fails to demonstrate compliance and  the statistics do not need to  be addressed.

Radiation levels and concentrations should be reported as follows:

•      For  external  dose rates, units of:
              milli-Sieverts (micro-rem) per hour at one meter from  surfaces;

•      For  levels of radioactive materials, including alpha and beta measurements, units of:
              Bq/m2 (dpm/100 cm2, pCi/100 cm2) (removable and fixed) for surfaces;
              Bq/L (pCi/mL) for water;
              Bq/kg (pCi/g) for solids such as soils or concrete.
                                   APPENDIX C
      REGULATIONS AND REQUIREMENTS ASSOCIATED WITH
           RADIATION SURVEYS AND SITE INVESTIGATIONS1

 C.1  EPA Statutory Authorities

The U.S. Environmental Protection Agency administers several statutes that address various
aspects of the cleanup of radioactively contaminated sites. Listed below are the statutes, the
implementing regulations, and the responsible EPA offices.

C.1.1 The Office of Air and Radiation (OAR) administers several statutes and
      implementing regulations:

•     Clean Air Act (CAA) as amended (42 U.S.C. 7401-7671 q.): The CAA protects and
      enhances the nation's air quality through national ambient air quality standards, new
      source performance standards, and other provisions. Radionuclides are a hazardous air
      pollutant regulated under Section 112 of the Act.

             National Emissions Standard for Hazardous Air Pollutants for Radionuclides (40
             CFR Part 61, 10 CFR 20.101-20.108)

•     Uranium Mill Tailings Radiation  Control Act (UMTRCA) of 1978 (42 U.S.C. 2022):
      UMTRCA requires stabilization and control of byproduct materials (primarily mill
      tailings) at licensed commercial uranium and thorium processing sites.  NRC and  DOE
      implement standards under this Act.

             Health and Environmental Protection Standards for Uranium and Thorium Mill
             Tailings (40  CFR Part 192)

             This regulation, along with "Criteria Relating to the Operation of Uranium Mills
             and the Disposition of Tailings or Wastes Produced by the Extraction or
             Concentration of Source Material From Ores Processed Primarily for Their
             Source Material Content"  (10 CFR 40, Appendix A), issued by the NRC and
             EPA, establish technical criteria related to the operation, decontamination,
             decommissioning, and reclamation of uranium or thorium mills and mill tailings.
             Both regulations provide design requirements for closure of the mill's waste
             disposal area.
       1 The user of this manual should consult the text of the statutes and regulations listed in this Appendix to
ensure compliance with all requirements applicable to a specific site and to ensure the use of current versions of
applicable statutes and regulations.

             The principal radiological hazards from uranium milling operations and mill
             tailings disposal are due to radon gas emissions originating from uranium and
             thorium daughters. Release rates to the atmosphere are limited to an average rate
             of 0.7 Bq (20 pCi) per square meter per second. This rate is applicable to any
             portion of a licensed or disposal site unless land areas do not contain radium
             concentrations—averaged over 100 square meters—greater than (i) 185 Bq/kg
             (5 pCi/g) of radium averaged over the first 15 centimeters below the surface and
             (ii) 555 Bq/kg (15 pCi/g) of radium averaged over 15 cm thick layers more than
             15 centimeters below the surface.

       Atomic Energy Act (AEA) as amended (42 U.S.C. 2011-2296): The AEA requires the
       management, processing, and utilization of radioactive materials in a manner that protects
       public health and the environment.  This is the principal basis for EPA, NRC and DOE
       authorities.

       The AEA requires that source, special nuclear, and byproduct materials be managed,
       processed, and used in  a manner that protects public health and the environment. Under
       the AEA and Reorganization Plan No.  3 of 1970, EPA is authorized to issue federal
       guidance on radiation protection matters as deemed necessary by the Agency or as
       mandated by Congress. This guidance may be issued as regulations, given that EPA
       possesses the authority to promulgate generally applicable radiation protection standards
       under Reorganization Plan No. 3. For example, under AEA authority EPA promulgated
       its environmental radiation protection standards for nuclear power operations in 40 CFR
       Part 190.

       In conjunction with the AEA, EPA presently supports the following:

             Environmental  Radiation Protection Standards for the Management and Disposal
              of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191)

       Nuclear  Waste Policy Act (NWPA), as amended (Pub. L. 100-507, 42 U.S.C. 10101):
       The NWPA is intended to provide an orderly scheme for the selection and development
       of repositories for high-level radioactive waste and spent nuclear fuel.

       Low Level Radioactive Waste Policy Act (LLRWPA), as amended (Pub. L.  99-240, 42
       U.S.C. 2021b):  LLRWPA assigns States responsibility for ensuring adequate disposal
       capacity for low-level radioactive waste generated within their borders.

       Indoor Radon Abatement Act of 1988 (15 U.S.C.  2601 Sec. 301-311)
MARSSIM, Revision 1                         C-2                                August 2000

-------
                                                                            Appendix C
C.1.2  The Office of Emergency and Remedial Response (OERR) administers the
       Comprehensive Environmental Response, Compensation, and Liability Act
       (CERCLA) of 1980, as amended (Pub. L. 99-499, 42 U.S.C. 9601-9657)

•      CERCLA authorizes EPA, consistent with the National Oil and Hazardous Substances
       Contingency Plan (NCP, 40 CFR 300) to provide for remedial action in response to
       releases or substantial threats of releases of hazardous substances into the environment.
       Hazardous substances are defined as any substance designated or listed under the Clean
       Air Act, the Federal Water Pollution Control Act, the Toxic Substances Control Act, and
       the Resource Conservation and Recovery Act. Because the CAA designated
       radionuclides as a hazardous air pollutant, the provisions of CERCLA apply to
       radionuclides.

C.1.3  The Office of Solid Waste (OSW) administers the Resource Conservation and
       Recovery Act of 1976 (RCRA), as amended (Pub. L. 94-580, 42 U.S.C. 6901 et seq.)

•      RCRA provides for detailed regulation of hazardous waste from generation to final
       disposal. Hazardous waste generators and transporters must comply with EPA standards.
       Owners and operators of treatment, storage, or disposal facilities must obtain RCRA
       permits.  Materials defined in the AEA are  expressly excluded from the definition of solid
       waste, and, thus, from regulation under RCRA. Naturally occurring and accelerator
       produced radioactive materials, however, are not excluded.

C.1.4  The Office of Water (OW) administers several statutes and implementing
       regulations:

•      Section 1412 of the Public Health Service Act as amended by the Safe Drinking Water
       Act (SDWA) as amended (Pub. L. 93-523, 42 U.S.C. 300f et seq.). As amended in 1986,
       SDWA seeks to protect public water supply systems through protection of groundwater.
       Any radioactive substance that may be found in water is regulated under the Act
       (although the current regulations only specify a limited number of individual substances).

             Maximum Contaminant Levels (includes certain radionuclides). (40 CFR 141.11-
             141.16)

•      Clean Water Act as amended (Pub. L.  92-500, 33 U.S.C. 1251 etseq.)

             Requirements (40 CFR Parts 131, 400-469) established pursuant to sections 301,
              302, 303 (including State water quality standards), 306, 307 (including Federal
             Pretreatment requirements for discharge into a publicly owned treatment works),
             and 403 of the Clean Water Act.

August 2000                                C-3                        MARSSIM, Revision 1

-------
Appendix C


C.1.5  The Office of Prevention, Pesticides and Toxic Substances administers the Toxic
       Substances Control Act (TSCA; 15 U.S.C. 2601)

•      TSCA regulates the manufacture, distribution in commerce, processing, use, and disposal
       of chemical substances and mixtures. Materials defined in the AEA are expressly
       excluded from TSCA.  However, naturally occurring and accelerator produced
       radionuclides are not excluded.
C.2   DOE Regulations and Requirements

C.2.1  Authorities of the Department of Energy

The Department of Energy Organization Act, which created DOE, the Energy Reorganization
Act of 1974, which created the Energy Research and Development Administration, and the
Atomic Energy Act of 1954² provide the basic authorities of the Department of Energy. The
principal DOE statutory authorities and regulations that pertain to radiation protection are shown
in Table C.1.

C.2.1.1  Atomic Energy Act of 1954, as amended

The Atomic Energy Act of 1954 established a program of private ownership and use of nuclear
materials and nuclear facilities,  such as nuclear research reactors, and a program for government
regulation of those applications. (Prior to 1954, all source, byproduct, and special nuclear
materials were government owned). The Atomic Energy Commission was given both the
regulatory authorities and the mission to develop both the peaceful and military uses of atomic
energy.  The Act also retained the Atomic Energy Commission as the civilian  agency responsible
for weapons programs production, development  and research consistent with the Atomic Energy
Act of 1946.

Under the Act, the Atomic Energy Commission was responsible for establishing regulations
ensuring the safety of commercial facilities and establishing requirements that ensure public
protection from radiation and radioactive materials resulting from or used in its research,
development, and production activities.
       2 The Atomic Energy Commission was created by the Atomic Energy Act of 1946, not the 1954 act.

MARSSIM, Revision 1                         C-4                                August 2000

-------
                                                                                        Appendix C
                                            Table C.1

                  DOE AUTHORITIES, ORDERS AND REGULATIONS
                        RELATED TO RADIATION PROTECTION

                   Statutes

Atomic Energy Act of 1954, as amended
Energy Reorganization Act of 1974
Uranium Mill Tailings Radiation Control Act of 1978, as amended
Nuclear Non-Proliferation Act of 1978
Department of Energy Organization Act of 1977
West Valley Demonstration Project Act of 1980
Nuclear Waste Policy Act of 1982
Low-Level Waste Policy Act of 1980
Low-Level Waste Policy Amendments Act of 1985
Energy Policy Act of 1992
Waste Isolation Pilot Plant Land Withdrawal Act
Price-Anderson Act

                   DOE Regulations

10 CFR Part 835, "Occupational Radiation Protection"

                   Executive Orders

Executive Order 12580

                   DOE Orders

Order 5400.1, "General Environmental Protection Program"
Order 5400.2A, "Environmental Compliance Issue Coordination"
Order DOE 5400.5, "Radiation Protection of the Public and the Environment"
Order DOE 5400.4, "Comprehensive Environmental Response, Compensation and Liability Act
Requirements"
Order DOE 5440.1E, "National Environmental Policy Act Compliance Program"
Order DOE 5480.1B, "Environment, Safety and Health Program for Department of Energy Facilities"
Order DOE 5480.3, "Safety Requirements for the Packaging and Transportation of Hazardous
Materials, Hazardous Substances & Hazardous Wastes"
Order DOE 5480.4, "Environment, Safety and Health Protection Standards"
Order DOE 5480.6, "Safety of Department of Energy Owned Nuclear Reactors"
Order DOE 5480.11, "Occupational Radiation Protection"
Order DOE 5480.24, "Nuclear Criticality Safety"
Order DOE 5480.25, "Safety at Accelerator Facilities"
Order DOE 5484.1, "Environmental Protection, Safety and Health Protection Information
Reporting Requirements"
Order DOE 5820.2A, "Radioactive Waste Management"
August 2000                                C-5                        MARSSIM, Revision 1

-------
Appendix C
C.2.1.2  Energy Reorganization Act of 1974 (Public Law 93-438 (1974), as amended)

The Energy Reorganization Act of 1974 divided the former Atomic Energy Commission and
created the Energy Research and Development Administration (ERDA) and the Nuclear
Regulatory Commission.  The ERDA was responsible for radiation protection at its facilities, to
provide for worker and public health, worker safety, and environmental protection. ERDA was
abolished with the creation of the Department of Energy in 1977.

C.2.1.3  Department of Energy Organization Act of 1977 (Public Law 95-91)

The Department of Energy Organization Act created the Department of Energy (DOE) by
combining the Energy Research & Development Administration, the Federal Energy
Administration, Federal Power Commission, and part of the Department of Interior.

The DOE was intended to identify potential environmental, health, safety, socioeconomic,
institutional, and technological issues associated with the development and use of energy
sources.  Through this Act, DOE retained the responsibilities and authorities—held by its
predecessor agencies—to take actions necessary to protect the public from radiation associated
with radioactive materials production, research, and development. DOE established
requirements through a directives system that largely used DOE Orders as its regulatory
procedures.  With the passage of the Price-Anderson Act Amendments of 1990, DOE began
converting its health and safety Orders to rules.

C.2.1.4  Uranium Mill Tailings Radiation Control Act of 1978, as amended

The Uranium Mill Tailings Radiation Control Act (UMTRCA) provides a program of assessment
and remedial action at active and inactive uranium mill sites to control their tailings in a safe and
environmentally sound manner and to reduce radiation hazards to the public residing  in the
vicinity of these sites.  The DOE was directed to complete remedial action at 21 sites of inactive
uranium mills.

C.2.1.5  West Valley Demonstration Project Act of 1980

This act authorized DOE to carry out a project at West Valley, New York to demonstrate
solidification techniques which could be used for preparing high level radioactive waste for
disposal. The Act provides for informal review and project consultation by the NRC.

C.2.1.6  Low-Level Waste Policy Act of 1980

This act established the policy that each State is responsible for providing for the disposal of low-
level radioactive waste generated within its borders, except for waste from defense activities of

MARSSIM, Revision 1                         C-6                                 August 2000

-------
                                                                               Appendix C
DOE or Federal research and development activities, and authorized States to enter into
compacts to carry out this policy. DOE was required to take actions to assist the States in
carrying out this policy.

C.2.1.7  Nuclear Waste Policy Act of 1982  (Public Law 97-425, 1983)

This Act gives DOE the responsibility to develop repositories and to establish a program of
research, development, and demonstration for the disposal of high-level radioactive waste and
spent nuclear fuel. Title to and custody of commercial low-level waste sites under certain
conditions could be transferred to DOE.

C.2.1.8  Low-Level Waste Policy Amendments Act of 1985

This act amends the Low-Level Waste Policy Act of 1980 to improve the procedures for State
compacts. It also  assigns responsibility to the Federal government for the disposal of low-level
waste generated or owned by the DOE, specific other Federally generated or owned wastes, and
wastes with concentrations of radionuclides that exceed the limits established by the NRC for
class C radioactive waste.  The Act provides that all  class C radioactive wastes designated as a
Federal  responsibility—those that result from activities licensed by the NRC—shall be disposed
of in a facility licensed by the NRC.  The Act also assigns responsibilities to DOE to provide
financial and technical assistance to the States in carrying out the Act.

C.2.1.9  Waste Isolation Pilot Plant Land Withdrawal Act

The Waste Isolation Pilot Plant (WIPP) is a repository  intended for the disposal of transuranic
radioactive waste  produced by defense  activities. The Act establishes the following:

       1)      an  isolated parcel of land for the WIPP
      2)      provisions concerning testing and limits on the quantities of waste which may be
              disposed at the WIPP
      3)      EPA certification of compliance with disposal standards

C.2.1.10 Price-Anderson Act

C.2.2 Executive Orders

Executive Order (E.O.) 12580 delegates to various Federal officials the responsibilities vested in
the  President for implementing the Comprehensive Environmental Response, Compensation,  and
Liability Act of 1980 (CERCLA) as amended by the Superfund Amendments and
Reauthorization Act of 1986  (SARA).
August 2000                                C-7                        MARSSIM, Revision 1

-------
Appendix C


C.2.3  DOE Regulations and Orders

C.2.3.1  10 CFR Part 835, "Occupational Radiation Protection"

This rule, which became effective on January 13, 1993, provides for the protection of radiation
workers at DOE owned facilities. The requirements contained in Part 835 are generally similar
to those in Order DOE 5480.11 and those used in NRC Regulations pertaining to the commercial
nuclear industry. In addition to the rule, DOE issued a dozen implementation guides, including
the "DOE Radiological Control Manual," (DOE/EH-0256T, Rv.l, April 1994).

C.2.3.2  Order DOE 5400.5, "Radiation Protection of the Public and the Environment"

This Order, issued in February 1990, contains DOE's requirements for ensuring the protection of
the public from the hazards of radiation.  This regulation includes dose limits for protection of
the public and environment, plus requirements:

1)     to apply the ALARA process—to reduce doses to the public as far below the release
       criterion as is practicable
2)     to apply the best available control technology to liquid effluents
3)     for control of property containing residual radioactive material

DOE 5400.5 is supported by numerous guidance documents, including those listed in this
section.

DOE 5400.5 is the primary directive relating to the release of property subject to radiological
contamination by DOE operations.  DOE 5400.5 will be replaced by  10 CFR Part 834 and its
guidance will be adopted for Part 834 when it is issued.

Under DOE 5400.5 and the guidance included in this section (C.2.3), DOE established
requirements for a case-by-case review and approval for release of real or non-real property
containing residual radioactive material.  Authorized limits and measurement procedures must be
developed by DOE before facilities can release property from their control. The principal
requirement is to reduce doses to levels that are as low as practicable using the ALARA process
and assuming realistic but conservative use scenarios that are not likely to underestimate dose.
This requirement ensures that  doses are as far below the primary dose limit (1 mSv/y [100
mrem/y]) as is reasonably achievable. Because the primary dose limit is for doses from all
sources and pathways, authorized limits should be selected at levels below a DOE dose constraint
of 0.3 mSv/y (30 mrem/y).  However, the goal is to reduce doses under likely-use scenarios to a
small fraction of a mSv/y or less.
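
As a minimal sketch of the screening arithmetic implied by these requirements, the example below
(in Python) compares a hypothetical modeled scenario dose against the 1 mSv/y primary dose limit
and the 0.3 mSv/y constraint cited above; the function, its threshold logic, and the input value
are illustrative assumptions rather than DOE procedure.

# Simplified illustration of the dose hierarchy described above: a 1 mSv/y primary
# dose limit, a 0.3 mSv/y constraint for selecting authorized limits, and ALARA
# reduction below that. The scenario dose is a hypothetical model output.

PRIMARY_DOSE_LIMIT_MSV_Y = 1.0   # all sources and pathways
DOSE_CONSTRAINT_MSV_Y = 0.3      # constraint for selecting authorized limits


def screen_candidate_limit(conservative_scenario_dose_msv_y):
    """Crude screen of the dose estimated for a realistic-but-conservative use
    scenario under a candidate authorized limit."""
    if conservative_scenario_dose_msv_y >= PRIMARY_DOSE_LIMIT_MSV_Y:
        return "exceeds the primary dose limit"
    if conservative_scenario_dose_msv_y >= DOSE_CONSTRAINT_MSV_Y:
        return "exceeds the 0.3 mSv/y constraint; further reduction needed"
    return "below the constraint; apply the ALARA process to reduce the dose further"


print(screen_candidate_limit(0.12))   # hypothetical modeled dose in mSv/y
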
MARSSIM, Revision 1                        C-8                                August 2000

-------
                                                                            Appendix C


In addition to the requirement to apply ALARA and the dose constraint, DOE also utilizes
surface contamination guidelines similar to those in NRC Regulatory Guide 1.86 and the 40 CFR
Part 192 soil concentration limits for radium and thorium. The ALARA requirement ensures that
the 40 CFR Part 192 limits are appropriately used.  DOE also permits the use of supplemental
limits for situations where cleanups to authorized limits are not practicable or where the
scenarios used to develop the authorized limits are not appropriate. DOE 5400.5 permits the
release of property for restricted use and requires procedures to ensure these restrictions are
maintained.

Most DOE remedial action and restoration activities are also subject to CERCLA. In such cases,
DOE requirements are integrated into the CERCLA process.

The following sections describe the scope and importance of several guidance documents.

A. Residual Radioactive Material Control:

DOE/CH-8901, Manual  for Implementing Residual Radioactive Material Guidelines - A
Supplement to the U.S. Department of Energy Guidelines for Residual Radioactive Material at
FUSRAP and SFMP Sites. Department of Energy,  June 1989.

DOE Guidance Memorandum,  "Unrestricted Release of Radioactively Contaminated Personal
Property," J. Maher, DOE Office of Nuclear Safety, Mar. 15, 1984.

ANL/EAD/LD-2, Manual for Implementing Residual Radioactive Material Guidelines Using
RESRAD, Version 5.0. Published by Argonne National Laboratory and prepared by ANL and
DOE staff, September 1993.

ANL/EAIS-8, Data Collection Handbook to Support Modeling the Impacts of Radioactive
Material in  Soil. Argonne National Laboratory, April 1993.

ANL/EAIS/TM-103, A Compilation of Radionuclide Transfer Factors for Plant, Meat, Milk, and
Aquatic Food Pathways and Suggested Default Values for the RESRAD Code. Argonne National
Laboratory, August 1993.

PNL-8724, Radiation Dose Assessments to Support Evaluations of Radiological Control Levels
for Recycling or Reuse of Material and Equipment. Pacific Northwest Laboratory, July 1995.

ANL/EAD/LD-3, RESRAD-Build: A Computer Model for Analyzing the Radiological Doses
Resulting from the Remediation and  Occupancy of Buildings Contaminated with Radioactive
Material. Argonne National Laboratory, November 1994.
August 2000                                C-9                        MARSSIM, Revision 1

-------
Appendix C
B. ALARA

DOE Guidance: DOE Guidance on the Procedures in Applying the ALARA Process for
Compliance with DOE 5400.5. Department of Energy, Office of Environmental Guidance,
March 8, 1991.

ANL/EAD/LD-2, Manual for Implementing Residual Radioactive Material Guidelines Using
RESRAD, Version 5.0, Chapters 1 and 5 and App. M, September 1993.

C. Measurement and Data Reporting

DOE Manual for use and Comment, Environmental Implementation Guide for Radiological
Survey Procedures. Department of Energy, Office of Environmental Guidance, Nov. 1992.

DOE/EH-0173T, Environmental Regulatory Guide for Radiological Effluent Monitoring and
Environmental Surveillance. Department of Energy, Jan. 1991.

D. Dose Factors

DOE/EH-0071, Internal Dose Conversion Factors for Calculation of Dose to the Public. DOE,
July 1988. DOE currently recommends use of EPA-520-1-88-020, Federal Guidance Report No.
11, Limiting Radionuclide Intake and Air Concentrations and Dose Conversion Factors for
Inhalation, Submersion, and Ingestion. Environmental Protection Agency, Sept. 1988, as an
alternative to DOE/EH-0071.

DOE/EH-0070, External Dose-Rate Conversion Factors for Calculation of Dose to the Public.
DOE, July 1988. DOE currently recommends use of EPA 402-R-93-081, Federal  Guidance
Report No. 12, External Exposure to Radionuclides in Air, Water, and Soil. Environmental
Protection Agency, Sept. 1993, as an alternative to DOE/EH-0070.

E. Liquid Effluents

Implementation Guidance for DOE 5400.5, Section II.3 (Management and Control of
Radioactive Materials in Liquid Discharges and the Phaseout of Soil Columns). DOE Office of
Environment, June 1992.

C.2.3.3  Order DOE 5820.2A, "Radioactive Waste Management"

Order DOE 5820.2A establishes the policies, guidelines, and requirements by which the DOE
manages its radioactive and mixed waste and contaminated facilities. The Order implements
DOE's responsibilities and authorities for protection of public and worker health and safety and

MARSSIM, Revision 1                       C-10                               August 2000

-------
                                                                                Appendix C


the environment under the Atomic Energy Act. It contains the requirements for management and
disposal of high-level waste, transuranic waste, low-level waste, NARM waste, and for the
decommissioning of radioactively contaminated facilities.

A. High-level Waste

The Order (1) specifies requirements for storage operations, including requirements for waste
characterization, transfer operations, monitoring, surveillance, and leak detection, and (2)
specifies that disposal shall be in accordance with the requirements of the Nuclear Waste Policy
Act of 1982.

B. Transuranic Waste

The Order requires waste to be certified in compliance with the Waste Isolation Pilot Plant
Waste Acceptance Criteria and sent to the WIPP. There are requirements for waste
classification, waste generation and treatment, waste certification, waste packaging, temporary
storage, transportation and shipping, and interim storage. There are provisions for use of the
WIPP, and for assessing the disposition of previously buried transuranic-contaminated wastes.

C. Low-level Waste

The Order specifies performance objectives which assure that external exposure to the waste and
concentrations of radioactive material—which may be released into surface water, ground water,
soil, plants, and animals—result in an effective dose equivalent that does not exceed 0.25 mSv/y
(25 mrem/y) to a member of the public. Releases to the atmosphere shall meet the requirements
of 40  CFR Part 61.  Reasonable efforts should be made to maintain releases of radioactivity in
effluents to the general environment as low as is reasonably achievable.  Radiological
performance assessments are required for the disposal of waste for the purpose of demonstrating
compliance with these performance objectives.

For low-level waste, there are also requirements on waste generation, waste characterization,
waste acceptance criteria, waste treatment, and long term storage. The Order includes additional
disposal requirements concerning disposal facility and disposal site design and waste
characteristics, site selection, facility operations, site closure and post-closure, and environmental
monitoring.

D. NARM Waste

For management of Naturally-Occurring and Accelerator-Produced Radioactive Materials
(NARM) and 11(e)(2) byproduct materials (the tailings or wastes resulting from the
concentration of uranium or thorium), the order specifies that storage and disposal shall be

August 2000                                C-11                        MARSSIM, Revision 1

-------
Appendix C
consistent with the requirements of the residual radioactive material guidelines contained in
40 CFR 192.

E. Decommissioning of Radioactively Contaminated Facilities

For the decommissioning of contaminated facilities, the order requires DOE organizations to
develop and document decommissioning programs which include provisions for surveillance and
maintenance.  There are requirements for facility design, post-operational activities,
characterization, and environmental review.
C.3   NRC Regulations and Requirements

C.3.1  NRC's Mission and Statutory Authority

The mission of the U.S. Nuclear Regulatory Commission (NRC) is to ensure adequate protection
of the public health and safety, the common defense and security, and the environment in the use
of nuclear materials in the United States. The NRC's scope of responsibility includes regulation
of commercial nuclear power reactors; nonpower research, test, and training reactors; fuel cycle
facilities; medical, academic, and industrial uses of nuclear materials; and the storage and
disposal of nuclear materials and waste.

The NRC is an independent agency created by the Energy Reorganization Act of 1974.  This Act
abolished the Atomic Energy Commission (AEC), moved the AEC's regulatory function to NRC,
and, along with the Atomic Energy Act of 1954, as amended, provides the foundation for
regulation of the nation's commercial nuclear power industry.

NRC regulations are issued under the United States Code of Federal Regulations (CFR) Title 10,
Chapter I. Principal statutory authorities that govern NRC's work are:

       Atomic Energy Act of 1954, as amended
       Energy Reorganization Act of 1974,  as amended
       Uranium Mill Tailings Radiation Control Act of 1978, as amended
       Nuclear Non-Proliferation Act of 1978
       Low-Level Radioactive Waste Policy Act of 1980
       West Valley Demonstration Project Act of 1980
       Nuclear Waste Policy Act of 1982
       Low-Level Radioactive Waste Policy Amendments Act of 1985
       Diplomatic Security and Anti-Terrorism Act of 1986
       Nuclear Waste Policy Amendments Act of 1987
       Solar, Wind, Waste and Geothermal  Power Production Incentives Act of 1990
       Energy Policy Act of 1992

MARSSIM, Revision 1                        C-12                                August 2000

-------
                                                                               Appendix C
The Atomic Energy Act of 1954, as amended, allows the NRC to issue orders to both licensees
and persons not licensed by the NRC. NRC orders may be a means of compelling
decommissioning at sites where the license has been terminated or at sites that were not
previously licensed but currently contain radioactive material that is under the jurisdiction of the
NRC.

The NRC and its licensees share a common responsibility to protect the public health and safety.
Federal regulations and the NRC regulatory program are important elements in the protection of
the public.  NRC licensees, however, have the primary responsibility  for the safe use of nuclear
materials.

C.3.2  NRC Criteria for Decommissioning

This section of the survey manual contains information on the existing cleanup criteria for
decommissioning sites regulated by the NRC. Additional cleanup criteria established by State
and local governments may also be applicable at NRC-licensed sites at the time of
decommissioning.

NRC's requirements for decommissioning and license termination are contained in 10 CFR
30.36, 40.42, 50.82, 70.38, and 72.54. The radiological criteria for license termination are
contained in 10 CFR 20.1401 through 1406 (62 FR 39058, July 21, 1997).

Prior to the adoption of the current regulations on radiological criteria for license termination, the
Commission's position on residual contamination criteria, site characterization, and other related
decommissioning issues was outlined in an NRC document entitled "Action Plan to Ensure
Timely Cleanup of Site Decommissioning Management Plan Sites," which was published in the
Federal Register on April  6, 1993 (57 FR 13389).  Other documents that were used in the past
and which may continue to have some applicability in special cases include:

"Criteria Relating to the Operation of Uranium Mills and the Disposition of Tailings or Wastes
Produced by the Extraction or Concentration of Source Material From Ores Processed Primarily
for Their Source Material Content" (10 CFR 40, Appendix A) and Health and Environmental
Protection Standards for Uranium and Thorium Mill Tailings (40  CFR 192, Subparts D and E)

       These regulations, issued by the NRC and EPA, establish technical criteria related to the
       operation, decontamination, decommissioning, and reclamation of uranium or thorium
       mills and mill tailings. Both regulations provide design requirements for closure of the
       mill's waste disposal area, which requires an earthen cover over tailings or waste piles to
       control radiological hazards from uranium and thorium tailings for 200 to 1,000 years,
       according to Technical Criterion 6 of Appendix A to 10 CFR Part 40.
August 2000                                C-13                        MARSSIM, Revision 1

-------
Appendix C
       The principal radiological hazards from uranium milling operations and mill tailings
        disposal are radon from uranium and thorium daughters. Release rates of these gaseous
        radionuclides to the atmosphere are limited to an average rate of 0.7 Bq (20
       pCi) per square meter per second.  This rate is applicable to any portion of a licensed or
       disposal site unless land areas do not contain radium concentrations—averaged over
       100 square meters—greater than:  (i) 0.2 Bq/g (5 pCi/g) of radium averaged over the first
       15 centimeters below the surface, and (ii) 0.6 Bq/g (15 pCi/g) of radium averaged over
       15-centimeter thick layers more than 15 centimeters below the surface.

       Criterion 6 allows radon release rates to be averaged over a period of at least 1 year (but
       much less than 100 years) to account for the wide variability in atmospheric radon
       concentrations over short time periods and seasons.  In addition, this criterion applies
       only to emissions from uranium daughters and does not include radon emissions from
       earthen materials used to cover the tailings piles.  If appropriate, radon emissions from
       cover materials are evaluated when developing a closure plan for each site to account for
       this additional contribution from naturally occurring radon. However, direct gamma
       exposure rates from tailings or wastes should be reduced to background levels according
       to this standard.

C.3.3  NRC Decommissioning Process and Staff Plans for Implementing Survey
       Procedures in this Manual

NRC licensees are required to conduct radiation surveys of the premises where the licensed
activities were conducted and submit a report describing the survey results.  The survey process
follows requirements contained in 10 CFR 30.36, 40.42, 50.82, 70.38,  and 72.54, which pertain
to decommissioning of a site and termination of a license. This process leads to the unrestricted
release of a site; however, many of the requirements may not be necessary if the licensee
demonstrates that the premises are suitable for release in some other manner. Each year, the
NRC staff routinely evaluates licensee requests to discontinue licensed operations.  The majority
of these requests are straightforward, requiring little,  if any, site remediation before radiological
surveys are conducted and evaluated. However, some NRC sites require substantial remediation
because buildings and lands contain nonroutine amounts of radiological contamination.
Radiological surveys may also be performed by the NRC at sites where there is not a license.

The NRC decommissioning process for a  site requiring substantial remediation can be described
by the activities listed below:

•      licensee notifies the NRC they intend to decommission all or part of the site
•      site characterization, including preparation of the characterization plan and performance
       of site characterization
•      development and submission of decommissioning plan

MARSSIM, Revision 1                        C-14                                August 2000

-------
                                                                               Appendix C
•      NRC review and approval of decommissioning plan
•      performance of decommissioning actions described in the plan
•      performance of termination survey and submittal of termination survey report
•      NRC performance and documentation of confirmatory survey
•      NRC termination of license

The NRC staff plans to use the information contained in this manual as primary guidance for
conducting radiological surveys of routine licensee requests for license termination and
nonroutine license termination requests that require more extensive decommissioning actions.
Supplementary guidance may be used by the NRC staff to assist licensees in conducting such
surveys or aid the NRC staff in evaluating licensee's survey plans and survey results to determine
compliance with decommissioning criteria. Examples of supplementary guidance include NRC
Information Notices, Bulletins, Generic Letters, Branch Technical Positions, NUREG reports,
Regulatory Guides, and other regulatory documents that transmit NRC requirements and
guidance.
C.4   DOD Regulations and Requirements

The Department of Defense (DOD) consists of four primary military services: the United States
Air Force, the United States Army, the United States Navy, and the United States Marine Corps.

DOD installations use sources of ionizing radiation and support radiation protection programs for
the control of these radioactive materials. As a Federal agency, the DOD complies with all
applicable environmental regulations under the Federal Facilities Compliance Act of 1992.

C.4.1  DOD Sources of Ionizing Radiation

DOD's list of radioactive materials includes:

•      Special nuclear material such as plutonium or enriched uranium
•      Source material such as uranium or thorium
•      Byproduct material such as any radioactive material yielded in or made radioactive by
       exposure to radiation incident to the process of producing special nuclear material
•      Naturally occurring or accelerator-produced radioactive material (NARM), such as
       radium, and not classified as source material
•      Materials containing induced or deposited radioactivity

Ionizing Radiation Producing Devices: Electronic devices that are capable of emitting ionizing
radiation.  Examples are linear  accelerators, cyclotrons, radiofrequency generators that use
klystrons or magnetrons, and other electron tubes that produce x-rays.  These devices may have

August 2000                                C-15                        MARSSIM, Revision 1

-------
Appendix C
components that contain radioactive material or they may induce radioactivity in certain other
materials.

C.4.2  Commodities Containing Radioactive Material Within the DOD System

The DOD uses a variety of manufactured items (commodities) incorporating in whole or in part
both sealed and unsealed radioactive material. A sealed source is any radioactive material that is
permanently bound or fixed in a capsule or matrix designed to prevent the release or dispersal of
such material under the most severe conditions encountered in normal use.

Ionizing radiation is used directly in DOD systems as calibration and check sources for RADIAC
or other survey-type instruments, as a source of radioluminescence in meters and gauges,  as an
ionization source in various devices, and as radiographic sources.

Indirectly, ionizing radiation may be emitted from a DOD material system as natural radioactivity
or induced radioactivity incorporated into material or a component of the system.

Specific examples of commodities include  instrument calibration sources, luminescent
compasses and exit signs, certain electron tubes and spark gaps, depleted uranium
counterweights and munitions, and magnesium-thorium aircraft components.

C.4.3  Licensed Radioactive Material

Licensed radioactive material is source, special nuclear, or byproduct material received, stored,
possessed, used, or transferred under a specific or general license issued by the NRC or an NRC
Agreement State.

Radioactive material licensed or controlled by the individual military services:

•      The Department of the Air Force has been designated by the NRC, through the issuance
       of a Master Materials License, to have regulatory authority for the receipt, possession,
       distribution, use, transportation, transfer,  and disposal of radioactive material at Air Force
       activities. The Air Force Radioisotope Committee was established to provide
       administrative control of all radioactive material used in the Air Force except for reactors
       and associated radioactivity, nuclear weapons, and certain components of weapons
       delivery systems. Air Force Radioactive Material Permits are used to maintain this
       control.

•      The Department of the Army, through the issuance of NRC specific licenses to Army
       installations and activity commanders, maintains the regulatory authority for the receipt,
       possession, distribution, use, transportation, transfer,  and disposal of radioactive material

MARSSIM, Revision 1                         C-16                                 August 2000

-------
                                                                                Appendix C


       at Army activities. In addition, within the Department of the Army, radioactive material
       classified as NARM may be used under a Department of the Army Radioactive Material
       Authorization (DARA) issued by the Army Materiel Command (AMC) or the Office of
       The Army Surgeon General. A Department of the Army Radiation Permit is required for
       use, storage, possession, and disposal of radiation sources by non-Army agencies
       (including contractors)  on Army installations.

•      The Department of the Navy is designated by the NRC to have—through the issuance of a
       Master Materials License—regulatory authority for the receipt, possession, distribution,
       use, transportation, transfer, and disposal of radioactive material at Navy and Marine
       Corps activities. The Navy Radiation Safety Committee was established to provide
       administrative control of all radioactive material used in the Navy and Marine Corps
       except for nuclear propulsion reactors and associated radioactivity, nuclear weapons, and
       certain components of weapons delivery systems. Navy Radioactive Material Permits are
       used to maintain this control.

C.4.4  Other Controlled Radioactive Material

Certain radioactive material on DOD installations may not be controlled or regulated by either
the NRC or the DOE. However, during Base Realignment and Closure actions, DOD installation
property which is identified to be returned to civilian use may have the potential for radioactive
contamination by such  material. The DOD complies with applicable State limits, guidelines, and
procedures for this material.  The methodologies and technical approaches for environmental
radiological  surveys outlined in this manual will provide guidance for dealing with issues
concerning this material.

Naturally Occurring and Accelerator-Produced Radioactive Material

•      Naturally occurring and accelerator-produced radioactive material (NARM) is controlled
       and regulated by the individual military services, as is similarly done by certain States for
       corporations and other users residing within their boundaries.

Special Nuclear Material  Used in Military Applications

•      Special nuclear material used in military applications is a unique category of radioactive
       material.  This may be buried as radioactive waste on DOD installations, used in military
       weapons or utilization facilities, or used in nuclear reactors involving military
       applications on  DOD installations. Radioactive material used or associated with weapons
       systems or reactors associated with such military applications is exempt from NRC and
       State regulations under Section 91b,  Chapter 9, Military Application of Atomic Energy,
       Atomic Energy  Act of 1954.

August 2000                                C-17                        MARSSIM, Revision 1

-------
Appendix C


C.4.5  DOD Regulations Concerning Radiation and the Environment

The DOD, with its global mission, supports several directives and instructions concerning
environmental compliance. The individual military services have regulations implementing these
directives and instructions. The documents describing these regulations are used as guidance in
developing environmental radiological surveys within DOD.

The DOD and each military service also have specific regulations addressing the use of
radioactive sources and the development of occupational health programs and radiation
protection programs.  These regulations may help in identifying potential locations and sources
of radioactive contamination on DOD installations.

C.4.6  DOD Regulations and Requirements

Regulations and Requirements Concerning Development of Environmental Radiological Surveys

1.      DOD Directive 4165.60, Solid and Hazardous Waste Management-Collection, Disposal,
       Resource Recovery, and Recycling Program.
2.      DOD Directive 4210.15, Hazardous Material  Pollution Prevention.
3.      DOD Directive 5100.50, Protection and Enhancement of Environmental Quality.
4.      DOD Directive 6050.1, Environmental Effects in the United States of Department of
       Defense Actions.
5.      DOD Directive 6050.7, Environmental Effects Abroad of Major Department of Defense
       Actions.
6.      DOD Directive 6050.8, Storage and Disposal of Non-DOD-Owned-Hazardous  or Toxic
       Materials on DOD Installations.
7.      DOD Instruction 4120.14, Environmental Pollution Prevention, Control, and Abatement.
8.      DOD Instruction 5100.5, Protection and Enhancement of Environmental Quality.

Regulations and Requirements Concerning Use of Radioactive Sources and Development of
Occupational Health Programs and Radiation Protection Programs:

1.      DOD Instruction 6055.5-M, Occupational Health Surveillance Manual.
2.      DOD Instruction 6055.8, Occupational Radiation Protection Program.

Examples of Air Force Instructions (AFIs):

1.      AFI 40-201, Managing Radioactive Materials in the Air Force.
2.      AFI 32-7020, Environmental Restoration Program.
3.      AFI 32-7066, Environmental Baseline and Close-out Surveys in Real Estate Transactions.
MARSSIM, Revision 1                        C-18                               August 2000

-------
                                                                            Appendix C
Examples of Army Regulations (ARs):

1.      AR 11-9, The Army Radiation Safety Program.
2.      AR 40-5, Preventive Medicine.
3.      AR 40-10, Health Hazard Assessment Program in Support of the Army Materiel
       Acquisition Decision Process.
4.      AR 200-1, Environmental Protection and Enhancement.
5.      AR 200-2, Environmental Effects of Army Actions.
6.      AR 385-30, Safety Color Code Markings and Signs.
7.      AR 700-64, Radioactive Commodities in the DOD Supply System.
8.      AR 750-25, Army Test, Measurement, and Diagnostic Equipment (TMDE) Calibration
       and Repair Support Program.
9.      TB MED 521, Management and Control of Diagnostic X-Ray, Therapeutic X-Ray, and
       Gamma Beam Equipment.
10.    TB MED 522, Control of Health Hazards from Protective Material Used in Self-
       Luminous Devices.
11.    TB MED 525, Control of Hazards to Health from Ionizing Radiation Used by the Army
       Medical Department.
12.    TB 43-180, Calibration and Repair Requirements for the Maintenance of Army Materiel.
13.    TB 43-0108, Handling, Storage, and Disposal of Army Aircraft Components Containing
       Radioactive Material.
14.    TB 43-0116, Identification of Radioactive Items in the Army.
15.    TB 43-0122, Identification of U.S. Army Communications-Electronic Command
       Managed Radioactive items in the Army.
16.    TB 43-0141, Safe Handling, Maintenance, Storage, and Disposal of Radioactive
       Commodities Managed by U.S. Army Troop Support and Aviation Material Readiness
       Command (Including Aircraft Components).
17.    TB 43-0197, Instructions for Safe Handling, Maintenance, Storage, and Disposal of
       Radioactive Items Managed by U.S. Army Armament Material Command.
18.    TB 43-0216, Safety and Hazard Warnings for Operation and Maintenance of TACOM
       Equipment.
19.    TM 3-261, Handling and Disposal of Unwanted Radioactive Material.
20.    TM 55-315, Transportability Guidance for Safe Transport of Radioactive Materials.

Examples of Navy Regulations:

1.      NAVMED P-5055, Radiation Health Protection Manual.
2.      NAVSEA SO420-AA-RAD-010, Radiological Affairs  Support Program (RASP) Manual.
3.      OPNAV 6470.3, Navy Radiation Safety Committee.
4.      NAVSEA 5100.18A, Radiological Affairs Support Program.
5.      OPNAV 5100.8G, Navy Safety and Occupational Safety and Health Program.

June 2001                                 C-19                        MARSSIM, Revision 1

-------
Appendix C


6.     NAVMEDCOM 6470.10, Initial Management of Irradiated or Radioactively
       Contaminated Personnel.
7.     OPNAV 3710.31, Carrying Hazardous Materials; Operational Procedures.
8.     NAVSUP 5101.11, Procedures for the Receipt, Storage, and Handling of Radioactive
       Material Shipments.
9.     NAVSUP 5101.6, Procedures for the Requisitioning, Labeling, Handling,  Storage, &
       Disposal of Items Which Contain Radioactive By-Product Material.
10.    NAVSUP 4000.34, Radioactive Commodities in the DOD Supply System.
11.    NAVSEA 9639.1, Radioluminescent Sources and Radioactively Contaminated
       Equipment Aboard Inactive Naval Ships and Craft.
12.    NAVSUP 4510.28, Special Restrictions on Issue and Disposal of Radiological Control
       Materials.
13.    NAVMED 6470.7, Procedures and Responsibilities for Use of Radioactive Materials at
       NAVMED Activities.
C.5   State and Local Regulations and Requirements

An Agreement State is a state that has signed an agreement with the NRC allowing the State to
regulate the use of radioactive materials—i.e., specifically Atomic Energy Act materials—within
that state. Table C.2 lists the Agreement States as of April 15, 2000 (see Appendix L for
contacts and addresses). Each Agreement State provides regulations governing the use of
radioactive materials that may relate to radiation site investigations.3 Table C.3 lists the  states
that regulate naturally occurring radioactive material (NORM) as of January 1, 2000 (PGA
2000). A number of other states are in the process of developing regulations governing the use of
NORM.  The decision maker should check with the state to ensure compliance with all
applicable regulations.
         3 A current list of agreement states can be obtained through the U.S. Nuclear Regulatory Commission on
the Internet on the State Program Directory page operated by the Oak Ridge National Laboratory at
http://www.hsrd.ornl.gov/nrc/asframe.htm.

MARSSIM, Revision 1                        C-20                                August 2000

-------
                                                                                 Appendix C
Table C.2 Agreement States
Alabama
Arizona
Arkansas
California
Colorado
Florida
Georgia
Illinois
Iowa
Kansas
Kentucky
Louisiana
Maine
Maryland
Massachusetts
Mississippi
Nebraska
Nevada
New Hampshire
New Mexico
New York

North Carolina
North Dakota
Ohio
Oregon
Rhode Island
South Carolina
Tennessee
Texas
Utah
Washington


Table C.3 States That Regulate Diffuse NORM
Alabama (proposed)
Arkansas
Colorado (proposed)
Georgia
Illinois (proposed)
Louisiana
Michigan
Mississippi
New Jersey
New Mexico
North Dakota
Ohio
Oklahoma (proposed)
Oregon
South Carolina
Texas
Utah

August 2000                                C-21                        MARSSIM, Revision 1

-------
                                    APPENDIX D

            THE PLANNING PHASE OF THE DATA LIFE CYCLE
The planning phase of the Data Life Cycle is carried out using the Data Quality Objectives
(DQO) Process.  The DQO Process is a series of planning steps based on the scientific method
for establishing criteria for data quality and developing survey designs (EPA 1994a, 1987b,
1987c).  The level of effort associated with planning is based on the complexity of the survey.
Large, complicated sites generally receive a significant amount of effort during the planning
phase, while smaller sites may not require as much planning effort.

Planning radiological surveys using the DQO Process can improve the survey effectiveness and
efficiency, and thereby the defensibility of decisions.  It also can minimize expenditures related
to data collection by eliminating unnecessary, duplicative, or overly precise data. The use of the
DQO Process assures that the type,  quantity, and quality of environmental data used in decision
making will be appropriate for the intended application. It provides systematic procedures for
defining the criteria that the survey design should satisfy, including when and where to perform
measurements, the level of decision errors for the survey, and how many measurements to
perform.

The expected output of planning a survey using the DQO Process is a quality assurance project
plan (QAPP). The QAPP integrates all technical and quality aspects of the Data Life Cycle, and
defines in detail how specific quality assurance and quality control activities will be implemented
during the survey.

The DQO Process provides for early involvement of the decision maker and uses a graded
approach to data quality requirements.  This graded approach defines data quality requirements
according to the type of survey being designed, the risk of making a decision error based on the
data collected, and the consequences of making such an error. This approach provides a more
effective  survey design combined with a basis for judging the usability of the data collected.

DQOs are qualitative and quantitative statements derived from the outputs of the DQO Process
that:

•      clarify the study objective
•      define the most appropriate type of data to collect
•      determine the most appropriate conditions for collecting the data
•      specify limits on decision errors that will be used as the basis for establishing the
       quantity and quality of data needed to support the decision (one way of recording these
       outputs is sketched after this list)
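
As an illustration of how the four kinds of statements above might be recorded for a final status
survey, the sketch below (in Python) defines a simple container for DQO outputs; the field names
and example values are hypothetical and are not prescribed by the DQO Process.

# Illustrative container for DQO outputs; field names and example values are
# hypothetical and only mirror the four kinds of statements listed above.
from dataclasses import dataclass


@dataclass
class DataQualityObjectives:
    study_objective: str        # what question the survey is intended to resolve
    data_to_collect: str        # most appropriate type of data
    collection_conditions: str  # when, where, and how the data are collected
    alpha: float                # limit on the Type I decision error rate
    beta: float                 # limit on the Type II decision error rate


example_dqos = DataQualityObjectives(
    study_objective="Decide whether residual radioactivity in each survey unit "
                    "is below the release criterion",
    data_to_collect="Surface soil concentration measurements of the radionuclide "
                    "of concern",
    collection_conditions="Systematic measurements over each survey unit during "
                          "the final status survey",
    alpha=0.05,
    beta=0.05,
)

print(example_dqos)
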
August 2000                                D-1                        MARSSIM, Revision 1

-------
Appendix D

The DQO Process consists of seven steps, as shown in Figure D.1. The output from each step
influences the choices that will be made later in the Process.  Even though the DQO Process is
depicted as a linear sequence of steps, in practice it is iterative; the outputs of one step may lead
to reconsideration of prior steps as illustrated in Figure D.2. For example, defining the survey
unit boundaries may lead to classification of the survey unit, with each area or survey unit having
a different decision statement.  This iteration is encouraged since it ultimately leads to a more
efficient survey design. The first six steps of the DQO Process produce the decision performance
criteria that are used to develop the survey design. The final step of the Process develops a
survey design based on the DQOs. The first six steps should be completed before the final
survey design is developed, and every step should be completed before data collection begins.


[Figure D.1 is a flow chart of the seven steps of the DQO Process, applied in sequence:
Step 1: State the Problem; Step 2: Identify the Decision; Step 3: Identify Inputs to the
Decision; Step 4: Define the Study Boundaries; Step 5: Develop a Decision Rule; Step 6:
Specify Limits on Decision Errors; Step 7: Optimize the Design for Obtaining Data.]

                        Figure D.1  The Data Quality Objectives Process
When the DQO Process is used to design a survey, it helps ensure that planning is performed
properly the first time and establishes measures of performance for the data collector
(implementation) and the decision maker (assessment) during subsequent phases of the Data Life
Cycle.  DQOs provide up-front planning and define decision maker/data collector relationships
by presenting a clear statement of the decision maker's needs. This information is recorded in the
QAPP.
MARSSIM, Revision 1
D-2
August 2000

-------
                                                                       Appendix D
[Figure D.2 is a flow chart showing the seven DQO steps (State the Problem; Identify the
Decision; Identify Inputs to the Decision; Define the Study Boundaries; Develop a Decision
Rule; Specify Limits on Decision Errors; Optimize the Survey Design) iterated as needed and
applied in turn to each survey in the process: the Historical Site Assessment, scoping survey,
characterization survey, remedial action support survey, and final status survey, ending with
demonstration of compliance based on the results of the final status survey.]

           Figure D.2 Repeated Applications of the DQO Process Throughout
                  the Radiation Survey and Site Investigation Process
August 2000                                D-3                        MARSSIM, Revision 1

-------
Appendix D

DQOs for data collection activities describe the overall level of uncertainty that a decision maker
is willing to accept for survey results.  This uncertainty is used to specify the quality of the
measurement data required in terms of objectives for precision, accuracy, representativeness,
comparability, and completeness.  These objectives are presented in detail in Section 9.3.2 and
Appendix N.
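
As one illustration of how two of these data quality indicators can be quantified, the sketch below
(in Python) computes precision as the relative percent difference of a sample/duplicate pair and
completeness as the percentage of planned measurements that yielded usable results. These are
common formulations offered only as an example, and the numerical inputs are hypothetical; the
manual's own definitions appear in Section 9.3.2 and Appendix N.

# Illustrative calculations for two of the data quality indicators named above.
# The measurement values and counts are hypothetical.

def relative_percent_difference(sample, duplicate):
    """Precision estimate from a sample/duplicate pair, as a percentage."""
    return abs(sample - duplicate) / ((sample + duplicate) / 2.0) * 100.0


def completeness(valid_results, planned_measurements):
    """Percentage of planned measurements that produced usable results."""
    return 100.0 * valid_results / planned_measurements


print(f"Precision (RPD): {relative_percent_difference(1.25, 1.10):.1f}%")
print(f"Completeness: {completeness(57, 60):.1f}%")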

The DQO Process is a flexible planning tool that can be used more or less intensively as the
situation requires. For surveys that have multiple decisions, such as characterization or final
status surveys, the DQO Process can be used repeatedly throughout the performance of the
survey.  Decisions made early in decommissioning are often preliminary in nature.  For this
reason, a scoping survey may only require a limited planning and evaluation effort. As the site
investigation process nears conclusion the necessity of avoiding a decision error becomes more
critical.

The following sections briefly discuss the steps of the DQO Process, especially as they relate to
final status survey planning,  and list the outputs for each step in the process. The outputs from
the DQO Process should be included in the documentation for the survey plan.
D.I   State the Problem

The first step in any decision making process is to define the problem so that the focus of the
survey will be unambiguous.  Since many sites or facilities present a complex interaction of
technical, economic, social, and political factors, the success of a project is critically linked to a
complete but uncomplicated definition of the problem.

There are four activities associated with this step:

•      identifying members of the planning team and stakeholders
•      identifying the primary decision maker or decision-making method
•      developing a concise description of the problem
•      specifying available resources and relevant deadlines for the study

The expected outputs of this step are:

•      a list of the planning team members and identification of the decision maker
•      a concise description of the problem
•      a summary of available resources and relevant deadlines for the survey

For a final status survey, examples of planning team members and stakeholders are described in
Section 3.2. A description of the problem would typically involve the release of all or some
portion of a site to demonstrate compliance with a regulation. The resources and deadlines are
typically identified on a site-specific basis.

MARSSIM, Revision 1                         D-4                                  August 2000

-------
                                                                                Appendix D

D.2   Identify the Decision

The goal of this step is to define the question that the survey will attempt to resolve and identify
alternative actions that may be taken based on the outcome of the survey. The combination of
these two elements is called the decision statement. The decision statement would be different
for each type of survey in the Radiation Survey and Site Investigation Process, and would be
developed based on the survey objectives described in Chapter 5.

There are four activities associated with this step in the DQO Process:

•      identifying the principal study question
•      defining the alternative actions that could result from resolution of the principal study
       question
•      combining the principal study question and the alternative actions into a decision
       statement
•      organizing multiple decisions

The expected output from this step is a decision statement that links the principal study question
to possible solutions to the problem.

For a final status survey, the principal study question could be: "Is the level of residual
radioactivity in the survey units in this portion of the site below the release criterion?"
Alternative actions may include further remediation, re-evaluation of the modeling assumptions
used to develop the DCGLs, re-assessment of the survey unit to see if it can be released with
passive controls, or a decision not to release the survey unit. The decision statement may be:
"Determine whether or not all the survey units in this portion of the site satisfy the release
criterion."
D.3   Identify the Inputs to the Decision

Collecting data or information is necessary to resolve most decision statements. In this step, the
planning team focuses on the information needed for the decision and identifies the different
types of information needed to resolve the decision statement.

The key activities for this step include:

•      Identifying the information required to resolve the decision statement. Ask general
       questions such as: "Is information on the physical properties of the site required?" or "Is
       information on the chemical characteristics of the radionuclide or the matrix required?"
       Determine which environmental variables or other information are needed to resolve the
       decision statement.
•      Determining the sources for each item of information. Identify and list the sources for the
       required information.
•      Identifying the information needed to establish the action level or the derived
       concentration guideline level (DCGL) based on the release criterion.  The actual
       numerical value will be determined in Step 5 (i.e., Section D.5).
•      Confirming that appropriate measurement methods exist to provide the necessary data.  A
       list of potentially appropriate measurement techniques should be prepared based on the
       information requirements determined previously in this step. Field and laboratory
       measurement techniques for radionuclides are discussed in Chapters 6 and 7 of this
       manual. Information on the use of field and laboratory equipment, including detection limits
       and analytical costs, is listed in Appendix H.  This performance information will be used in
       Steps 5 and 7 of the DQO Process.

The expected outputs of this step are:

•      a list of informational inputs needed to resolve the decision statement
•      a list of environmental variables or characteristics that will be measured

For the final status survey, the list of information inputs generally involves measurements of the
radioactive contaminants of concern in each survey unit.  These inputs include identifying survey
units, classifying survey units, identifying appropriate measurement techniques (including
measurement costs and detection limits), and determining whether or not background measurements
from a reference area or areas need to be performed. The list of environmental variables measured
during the final status survey is typically limited to the level of residual radioactivity in the
affected media for each survey unit.
D.4   Define the Boundaries of the Study

During this step the planning team should develop a conceptual model of the site based on
existing information collected in Step 1 of the DQO Process or during previous surveys.
Conceptual models describe a site or facility and its environs, and present hypotheses regarding
the radionuclides present and potential migration pathways. These models may include
components from computer models, analytical models, graphic models, and other techniques.
Additional data collected during decommissioning are used to expand the conceptual model.

The purpose of this step is to define the spatial and temporal boundaries that will be covered by
the decision statement so data can be easily interpreted. These attributes include:

•      spatial boundaries that define the physical area under consideration for release (site
       boundaries)
•      spatial boundaries that define the physical area to be studied and locations where
       measurements could be performed (actual or potential survey unit boundaries)
•      temporal boundaries that describe the time frame the study data represents and when
       measurements should be performed
•      spatial and temporal boundaries developed from modeling used to determine DCGLs

There are seven activities associated with this step:

•      specifying characteristics that define the true but unknown value of the parameter of
       interest
•      defining the geographic  area within which all decisions must apply
•      when appropriate, dividing the site into areas or survey units that have relatively
       homogeneous characteristics
•      determining the time frame to which the decision applies
•      determining when to collect data
•      defining the scale of decision making
•      identifying any practical constraints on data  collection

The expected outputs of this step are:

•      a detailed description of the spatial and temporal boundaries of the problem (a conceptual
       model)
•      any practical constraints that may interfere with the full implementation of the survey
       design

Specifying the characteristics that define the true but unknown value of the parameter of interest
for the final status survey typically involves identifying the radionuclides of concern. If possible,
the physical and chemical form  of the radionuclides should be described.  For example,
describing the residual radioactivity in terms of total uranium is not as specific or informative as
describing a mixture  of uraninite (UO2) and uranium metaphosphate (U(PO3)4) for natural
abundances of 234U, 235U, and 238U.

As an example, the study boundary may be defined  as the property boundary of a facility or, if
there is only surface contamination expected at the site, the soil within the property boundary to a
depth of 15 cm.  When appropriate (typically during and always before final status survey
design), the site is subdivided into survey units with relatively homogeneous characteristics
based on information collected during previous surveys.  The radiological characteristics are
defined by the area classification (Class 1, Class 2, or Class 3) while the physical characteristics
may include structures vs. land areas, transport routes vs. grassy areas, or soil types with different
radionuclide transfer characteristics.

The time frame to which the final status survey decision applies is typically defined by the
regulation.  For example: "The data are used to reflect the condition of radionuclides leaching
into ground water over a period of 1,000 years." Temporal boundaries may also include seasonal
conditions such as winter snow cover or summer drought that affect the accessibility of certain
media for measurement.

For the final status survey, the smallest, most appropriate subsets of the site for which decisions
will be made are defined as survey units. The size of the survey unit and the measurement
frequency within a survey unit are based on classification, site-specific conditions, and relevant
decisions used during modeling to determine the DCGLs.
D.5   Develop a Decision Rule

The purpose of this step is to define the parameter of interest, specify the action level (or DCGL),
and integrate previous DQO outputs into a single statement that describes a logical basis for
choosing among alternative actions.

There are three activities associated with this step:

•      specifying the statistical parameter that characterizes the parameter of interest
•      specifying the action level for the study
•      combining the outputs of the previous DQO steps into an "if...then..." decision rule that
       defines the conditions that would cause the decision maker to choose among alternative
       actions

Certain aspects of the site investigation process,  such as the HSA, are not so quantitative that a
statistical parameter can be specified. Nevertheless, a decision rule should still be developed that
defines the conditions that would cause the decision maker to choose among alternatives.

The expected outputs of this step are:

•      the parameter of interest that characterizes the level of residual radioactivity
•      the action level
•      an "if...then..." statement  that defines the conditions that would cause the decision  maker
       to choose among alternative actions

The parameter of interest is a descriptive measure (such as a mean or median) that specifies the
characteristic or  attribute that the decision maker would like to know about the residual
contamination in the survey unit.

The mean is the value that corresponds to the "center" of the distribution in the sense of the
"center of gravity" (EPA 1989a). Positive attributes of the mean include: 1) it is useful when the
action level is based on long-term, average health effects, 2) it is useful when the population is
uniform with relatively small spread, and 3) it generally requires fewer samples than other
parameters of interest. Negative attributes include: 1) it is not a very representative measure of
central tendency for highly skewed distributions, and 2) it is not useful when a large proportion
of the measurements are reported as less than the detection limit (EPA 1994a).

The median is also a value that corresponds  to the "center" of a distribution, but where the mean
represents the center of gravity the median represents the "middle" value of a distribution. The
median is that value such that there are the same number of measurements greater than the
median as less than the median. The positive attributes of the median include: 1) it is useful
when the action level is based on long-term, average health effects, 2) it provides a more
representative measure of central tendency than the mean for  skewed populations, 3) it is useful
when a large proportion of the measurements are reported as less than the detection limit, and 4)
it relies on few statistical assumptions. Negative attributes include:  1) it will not protect against
the effects of extreme values, and 2) it is not a very representative measure of central tendency
for highly skewed distributions (EPA 1994a).

The nonparametric statistical tests discussed in Chapter 8 are  designed to determine whether or
not the level of residual activity uniformly distributed throughout the survey unit exceeds the
DCGLW. Since these methods are based on  ranks, the results  are generally expressed in terms of
the median.  When the underlying measurement distribution is symmetric, the mean is equal to
the median.  The assumption of symmetry is less restrictive than that of normality because the
normal distribution is itself symmetric.  If, however, the measurement  distribution is skewed to
the right, the average will generally be greater than the median.  In severe cases, the average may
exceed the DCGLW while the median does not.  For this reason, MARSSIM recommends
comparing the arithmetic mean of the survey unit data to the DCGLW as a first step in the
interpretation of the data (see Section 8.2.2.1).
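
The practical difference between the mean and the median for skewed data can be seen with a short
calculation. The following sketch (in Python, with hypothetical concentration values chosen only for
illustration) shows the arithmetic mean pulled above the median by a single high measurement, which
is why comparing the mean of the survey unit data to the DCGLW is recommended as a first step.

    # Hypothetical, right-skewed set of survey unit measurements (Bq/kg).
    measurements = [5, 6, 7, 8, 9, 10, 12, 15, 60]

    n = len(measurements)
    mean = sum(measurements) / n
    ordered = sorted(measurements)
    median = ordered[n // 2] if n % 2 == 1 else (ordered[n // 2 - 1] + ordered[n // 2]) / 2

    print(f"mean   = {mean:.1f} Bq/kg")    # pulled upward by the single high value
    print(f"median = {median:.1f} Bq/kg")  # unaffected by the extreme value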

The action level is a measurement threshold value of the parameter of interest that provides the
criterion for choosing among alternative actions. MARSSIM uses the investigation level, a
radionuclide-specific level of radioactivity based on the release criterion  that results in additional
investigation when it is exceeded, as an action level.  Investigation levels are developed for both
the Elevated Measurement Comparison (EMC) using scanning techniques and the statistical tests
using direct measurements and samples.  Section 5.5.2.6 provides information on investigation
levels used in MARSSIM.

The mean concentration of residual radioactivity is the parameter of interest used for making
decisions based on the final status survey. The definition of residual radioactivity depends on
whether or not the contaminant appears as part of background radioactivity in the reference area.
If the radionuclide is not present in background, residual radioactivity is defined as the mean
concentration in the survey unit. If the radionuclide is present in background, residual
radioactivity is defined as the difference between the mean concentration in the survey unit and
the mean concentration in the reference area selected to represent background.  The term
1-sample case is used when the radionuclide does not appear in background, because
measurements are only made in the survey unit.  The term 2-sample case is used when the
radionuclide appears in background, because measurements are made in both the survey unit and
the reference area.

Figure D.3 contains a simple, hypothetical example of the 1-sample case.  The upper portion of
the figure shows a probability distribution of residual radionuclide concentrations in the surface
soil of the survey unit.  The parameter of interest is the location of the mean of this distribution,
represented by the vertical dotted line and denoted by the symbol D.

The decision rule for the 1-sample case is: "If the mean concentration in the survey unit is less
than the investigation level, then the survey unit is in compliance with the release criterion."  To
implement the decision rule, an estimate of the mean concentration in the  survey unit is required.
An estimate of the mean of the survey unit distribution may be obtained by measuring
radionuclide concentrations in soil at a set of n randomly selected locations in the survey unit. A
point estimate for the survey unit mean is obtained by calculating the simple arithmetic average
of the n measurements. Due to measurement variability, there is a distribution of possible values
for the point estimate for the survey unit mean, δ. This distribution is referred to as f(δ), and is
shown in the lower graph of Figure D.3.  The investigation level for the Sign test used in the
1-sample case is the DCGLW, shown on the horizontal axis of the graph.

If f(δ) lies far to the left (or to the right) of the DCGLW, a decision of whether or not the survey
unit demonstrates compliance can be easily made.  However, if f(δ) overlaps the DCGLW,
statistical decision rules are used to assist the decision maker. Note that the width of the
distribution for the estimated mean may be reduced by increasing the number of measurements.
Thus, a large number of samples will reduce the probability of making decision errors.
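
A minimal sketch of the calculation behind the 1-sample decision rule is given below. The
measurement values and the DCGLW are hypothetical; the sketch computes only the point estimate of
the survey unit mean and the count of measurements below the DCGLW on which the Sign test is
based, not the full test procedure and critical values of Chapter 8.

    # Hypothetical 1-sample case: measurements (Bq/kg) from one survey unit.
    dcgl_w = 100.0
    survey_unit = [62.0, 71.5, 80.2, 55.9, 90.3, 68.7, 74.1, 83.6, 59.4, 77.0]

    # Point estimate of the survey unit mean (the parameter of interest).
    mean_estimate = sum(survey_unit) / len(survey_unit)

    # The Sign test is based on the number of measurements below the DCGLw.
    below_dcgl = sum(1 for x in survey_unit if x < dcgl_w)

    print(f"estimated survey unit mean: {mean_estimate:.1f} Bq/kg")
    print(f"measurements below the DCGLw: {below_dcgl} of {len(survey_unit)}")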

Figure D.4 shows a simple, hypothetical example of the 2-sample case.  The upper portion of the
figure shows one probability distribution representing background radionuclide concentrations in
the surface soil  of the reference area, and another probability distribution representing
radionuclide concentrations in the surface soil of the survey unit. The graph in the middle
portion of the figure shows the distributions of the estimated mean concentrations in the
reference area and the survey unit. In this case, the parameter of interest is the difference
between the means of these two distributions, D, represented by the distance between the two
vertical  dotted lines.

The decision rule for the 2-sample case is: "If the difference between the mean concentration in
the survey unit and the mean concentration in the reference area is less than the investigation
level, then the survey unit is in compliance with the release criterion."

            Figure D.3 Example of the Parameter of Interest for the 1-Sample Case
            (The upper graph shows the distribution of residual radionuclide concentrations in the
            survey unit, with D the difference due to residual radioactivity; the lower graph shows
            f(δ), the sampling distribution of the estimated survey unit mean, with the DCGLW
            marked on the concentration axis and δ the mean shift above zero.)

            Figure D.4 Example of the Parameter of Interest for the 2-Sample Case
            (The upper graph shows the reference area and survey unit concentration distributions;
            the middle graph shows the sampling distributions of their estimated means, separated
            by D, the mean difference due to residual radioactivity; the lower graph shows f(δ),
            the sampling distribution of the difference between the survey unit mean and the
            reference area mean, with the DCGLW marked and δ the mean shift above background.)

To implement the decision rule, an estimate of the difference is required.  This estimate may be obtained by
measuring radionuclide concentrations at a set of "n" randomly selected locations in the survey
unit and "m" randomly selected locations in the reference area. A point estimate of the survey
unit mean is obtained by calculating the simple arithmetic average of the n measurements in the
survey unit.  A point estimate of the reference area mean is similarly calculated.  A point estimate
of the difference between the two means is obtained by subtracting the reference area average
from the survey unit average.

The measurement distribution of this difference, f(δ), is centered at D, the true value of the
difference.  This distribution is shown in the lower graph of Figure D.4.

Once again, if f(δ) lies far to the left (or to the right) of the DCGLW, a decision of whether or not
the survey unit demonstrates compliance can be easily made.  However, if f(δ) overlaps the
DCGLW, statistical decision rules are used to assist the  decision maker.
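
The corresponding point estimate for the 2-sample case can be sketched in the same way. The
concentrations below are hypothetical, and the sketch shows only the arithmetic of the point estimate;
the actual comparison of the survey unit to the reference area is made with the nonparametric tests of
Chapter 8.

    # Hypothetical 2-sample case (Bq/kg): the radionuclide is present in background.
    survey_unit    = [130.1, 118.4, 142.7, 125.3, 136.8, 121.9, 147.2, 128.5]
    reference_area = [110.2, 104.8, 117.5, 108.9, 113.1, 106.4, 119.7, 111.3]

    survey_mean    = sum(survey_unit) / len(survey_unit)
    reference_mean = sum(reference_area) / len(reference_area)

    # Point estimate of the difference attributable to residual radioactivity.
    difference = survey_mean - reference_mean
    print(f"estimated difference (survey unit minus reference area): {difference:.1f} Bq/kg")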
D.6 Specify Limits on Decision Errors

Decisions based on survey results can often be reduced to a choice between "yes" or "no", such
as determining whether or not a survey unit meets the release criterion.  When viewed in this
way, two types of incorrect decisions, or decision errors, are identified:  1) incorrectly deciding
that the answer is "yes" when the true answer is "no", and 2) incorrectly deciding the answer is
"no" when the true answer is "yes".  The distinctions between these two types of errors are
important for two reasons: 1) the consequences of making one type of error versus the other may
be very different, and 2) the methods for controlling these errors are different and involve
tradeoffs.  For these reasons, the decision maker should specify levels for each type of decision
error.

The purpose of this section is to specify the decision maker's limits on decision errors, which are
used to establish performance goals for the data collection design. The  goal of the planning team
is to develop a survey design that reduces the chance of making a decision error.

While the possibility of a decision error can never be totally eliminated, it can be controlled.  To
control the possibility of making decision errors, the planning team attempts to control
uncertainty in the survey results caused by sampling design error and measurement error.
Sampling design error may be controlled by collecting a large number of samples. Using more
precise measurement techniques or field duplicate analyses can reduce measurement error.
Better  sampling designs can also be developed to collect data that more accurately and efficiently
represent the parameter of interest.  Every survey will use a slightly different method of
controlling decision errors, depending on the largest source of error and the ease of reducing
those error components.

The estimate of the standard deviation for the measurements performed in a survey unit (σs)
includes the individual measurement uncertainty as well as the spatial and temporal variations
captured by the survey design.  For this reason, individual measurement uncertainties are not
used during the final status survey data assessment.  However, individual measurement
uncertainties may be useful for determining an a priori estimate of σs during survey planning.
Since a larger value of σs results in an increased number of measurements needed to demonstrate
compliance during the final status survey, the decision maker may seek to reduce measurement
uncertainty through various methods (e.g., different instrumentation). There are trade-offs that
should be considered during survey planning. For example, the costs associated with performing
additional measurements with an inexpensive measurement system may be less than the costs
associated with a measurement system with better sensitivity (i.e., lower measurement
uncertainty, lower minimum detectable concentration). However, the more expensive
measurement system with better sensitivity may reduce σs and the number of measurements used
to demonstrate compliance to the point where it is more cost effective to use the more expensive
measurement system.  For surveys in the early stages of the Radiation Survey and Site
Investigation Process, the measurement uncertainty and instrument sensitivity become even more
important.  During scoping,  characterization, and remedial action support surveys, decisions
about classification and remediation are made based on a limited number of measurements.
When the measurement uncertainty or the instrument sensitivity values approach the value of the
DCGL, it becomes more difficult to make these decisions.  From  an operational standpoint,  when
operators of a measurement system have an a priori understanding of the sensitivity and potential
measurement uncertainties,  they are able to recognize and respond to conditions that may warrant
further investigation—e.g., changes in background radiation levels, the presence of areas of
elevated activity, measurement system failure or degradation, etc.
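
The trade-off between measurement cost and measurement uncertainty can be roughed out during
planning. The sketch below is only an illustration: it assumes a generic normal-approximation scaling
in which the number of measurements grows as (σs/Δ)², rather than the exact Chapter 5 formulas, and
the standard deviations and per-measurement costs are hypothetical.

    import math

    # Hypothetical comparison of two measurement systems for one survey unit.
    delta = 50.0                      # width of the gray region, DCGL - LBGR (Bq/kg)
    z_sum_sq = (1.645 + 1.645) ** 2   # (z_{1-alpha} + z_{1-beta})^2 for alpha = beta = 0.05

    systems = {
        # name: (sigma_s in Bq/kg, cost per measurement in dollars) -- hypothetical values
        "inexpensive system, larger sigma_s": (80.0, 50.0),
        "expensive system, smaller sigma_s":  (25.0, 200.0),
    }

    for name, (sigma_s, unit_cost) in systems.items():
        n = math.ceil(z_sum_sq * (sigma_s / delta) ** 2)
        print(f"{name}: about {n} measurements, about ${n * unit_cost:,.0f}")

With these assumed values the more sensitive system needs far fewer measurements and is cheaper
overall, but with a smaller difference in σs the inexpensive system could win; the point is that the
comparison should be made explicitly during planning.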

The probability of making decision errors can be controlled by adopting a scientific approach,
called hypothesis testing. In this approach, the survey results are used to  select between one
condition of the environment (the null hypothesis, H0) and an alternative condition (the
alternative hypothesis, Ha).  The null hypothesis is treated like a baseline  condition that is
assumed to be true in the absence of strong evidence to the contrary.  Acceptance or rejection of
the null hypothesis depends  upon whether or not the particular survey results are consistent with
the hypothesis.

A decision error occurs when the decision maker rejects the null hypothesis when it is true, or
accepts the null hypothesis when it is false. These two types of decision errors are classified as
Type I and Type II decision errors, and can be represented by a table as shown in Table D.1.

A Type I decision error occurs when the null hypothesis is rejected when it is true, and is
sometimes  referred to as a false positive error.  The probability of making a Type I  decision error,
or the level of significance, is denoted by alpha (α).  Alpha reflects the amount of evidence the
decision maker would like to see before abandoning the null hypothesis, and is also referred to as
the size of the test.

       Table D.1 Example Representation of Decision Errors for a Final Status Survey

              (H0: Residual activity in the survey unit exceeds the release criterion)

                                                DECISION
  TRUE CONDITION            Reject H0                            Accept H0
  OF SURVEY UNIT            (Meets Release Criterion)            (Exceeds Release Criterion)

  Meets Release             (No decision error)                  Incorrectly Fail to Release
  Criterion                                                      Survey Unit (Type II)

  Exceeds Release           Incorrectly Release                  (No decision error)
  Criterion                 Survey Unit (Type I)

A Type II decision error occurs when the null hypothesis is accepted when it is false. This is
sometimes referred to as a false negative error.  The probability of making a Type II decision
error is denoted by beta (β).  The term (1-β) is the probability of rejecting the null hypothesis
when it is false, and is also referred to as the power of the test.

There is a relationship between α and β that is used in developing a survey design.  In general,
increasing α decreases β and vice versa, holding all other variables constant. Increasing the
number of measurements typically results in a decrease in both α and β.  The number of
measurements that will produce the desired values of α and β from the statistical test can be
estimated from α, β, the DCGLW, and the estimated variance of the distribution of the parameter
of interest.
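
The interplay among α, β, and the number of measurements can be illustrated with a rough
normal-approximation calculation. This is a generic sketch, not the Chapter 5 procedure; the
standardized difference (the width of the gray region divided by the standard deviation of the
measurements) and the sample sizes used below are hypothetical.

    import math
    from statistics import NormalDist

    nd = NormalDist()
    standardized_diff = 1.0   # hypothetical: gray region width divided by the standard deviation
    n = 10                    # hypothetical number of measurements

    # For a fixed number of measurements, lowering alpha raises beta (and vice versa).
    for alpha in (0.01, 0.05, 0.10, 0.20):
        beta = 1.0 - nd.cdf(standardized_diff * math.sqrt(n) - nd.inv_cdf(1 - alpha))
        print(f"alpha = {alpha:4.2f}  ->  beta ~ {beta:.3f}")

    # Increasing the number of measurements lowers both error rates for a fixed alpha.
    for n in (10, 20, 40):
        beta = 1.0 - nd.cdf(standardized_diff * math.sqrt(n) - nd.inv_cdf(0.95))
        print(f"n = {n:2d}, alpha = 0.05  ->  beta ~ {beta:.3f}")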

There are five activities associated with specifying limits on decision errors:

•      Determining the possible range of the parameter of interest.  Establish the range by
       estimating the likely upper and lower bounds based on professional judgement.
•      Identifying the decision errors and choosing the null hypothesis.
       a.      Define both types of decision errors (Type I and Type II) and establish the true
              condition of the survey unit for each decision error.
       b.      Specify and evaluate the potential consequences of each decision error.
       c.      Establish which decision error has more severe consequences near the action
              level.  Consequences include health, ecological, political, social, and resource
              risks.
       d.      Define the null hypothesis and the alternative hypothesis and assign the terms
              "Type I" and "Type II" to the appropriate decision error.
•      Specifying a range of possible parameter values, a gray region, where the consequences of
       decision errors are relatively minor. It is necessary to specify a gray region because
       variability in the parameter of interest and unavoidable imprecision in the measurement
       system combine to produce variability in the data such that a decision may be "too close
       to call" when the true but unknown value of the parameter of interest is very near the
       action level. Additional guidance on specifying a gray region is available in Guidance for
       the Data Quality Objectives Process (EPA 1994a).
•      Assigning probability limits to points above and below the gray region that reflect the
       probability for the occurrence of decision errors.
•      Graphically representing the decision rule.

The expected outputs of this step are decision error rates based on the consequences  of making
an incorrect decision. Certain aspects of the site investigation process, such as the Historical Site
Assessment (HSA), are not so quantitative that numerical values for decision errors can be
specified.  Nevertheless, a "comfort region" should be identified where the consequences of
decision errors are relatively minor.

In Section D.5, the parameter of interest was defined as the difference  between the survey unit
mean concentration of residual radioactivity and the reference area mean concentration in the
2-sample case, or simply the survey unit mean concentration in the 1-sample case. The possible
range of values for the parameter of interest is determined based on existing information (such as
the Historical Site Assessment or previous surveys) and best professional judgement. The likely
lower bound for f(δ) is either background or zero. For a final status survey, when the residual
radioactivity is expected to meet the release criterion, a conservative upper bound might be
approximately three times the DCGLW.

Hypothesis testing is used to determine whether or not a statement concerning the  parameter of
interest should be verified.  The statement about the parameter of interest is called the null
hypothesis.  The alternative hypothesis is the opposite of what is stated in the null  hypothesis.
The decision maker needs to choose between two courses of action, one associated with the null
hypothesis and one associated with the alternative hypothesis.

To make a decision using hypothesis testing, a test statistic is compared to a critical value.  The
test statistic is a number calculated using data from the survey; it is not necessarily identical to
the parameter of interest, but is functionally related to it through the statistical analysis. The
critical value of the test statistic defines a rejection region based on some assumptions about the
true distribution of data in the survey unit.  If the value of the test statistic falls within the
rejection region, the null
hypothesis is rejected.  The decision rule, developed in Section D.5, is used to describe the
relationship between the test statistic and the critical value.

MARSSIM considers two ways to state H0 for a final status survey.  The primary consideration in
most situations will be  compliance with the release criterion.  This is shown as Scenario A in
Figure D.5.  The null hypothesis is that the survey unit exceeds the release criterion. Using this
statement of H0 means that significant evidence that the survey unit does not exceed the release
criterion is required before the survey unit would be released.

In some situations, however, the primary consideration may be determining if any residual
radioactivity at the site  is distinguishable from background, shown as Scenario B in Figure D.6.
In this manual, Scenario A is used as an illustration because it directly addresses the compliance
issue and allows consideration of decision errors. More information on Scenario B can be found
in the NRC draft report NUREG-1505 (NRC 1995a).

For Scenario A, the null hypothesis is that the survey unit does not meet the release criterion. A
Type I decision error would result in the release of a survey unit containing residual radioactivity
above the release criterion.  The probability of making this error is α. Setting a high value for α
would result in a higher risk that survey units that might be somewhat in excess of the release
criterion would be passed as meeting the release criterion. Setting a low value for α would result
in fewer survey units where the null hypothesis is rejected. However, the cost of setting a low
value for α is either a higher value for β or an increased number of samples used to demonstrate
compliance.

For Scenario A, the alternative hypothesis is that the survey unit does meet the release criterion.
A Type II decision error would result in either unnecessary costs due to remediation of survey
units that are truly below the release criterion or additional survey activities to demonstrate
compliance. The probability of making a Type II error is β.  Selecting a high value for β (low
power) would result in a higher risk that survey units that actually meet the release criterion are
subject to further investigation. Selecting a low value for β (high power) will minimize these
investigations, but the tradeoff is either a higher value for α or an increased number of
measurements used to demonstrate compliance. Setting acceptable values for α and β, as well as
determining an appropriate gray region, is a crucial step in the DQO Process.

In the MARSSIM framework, the gray region is always bounded from above by the DCGL
corresponding to the release criterion.  The Lower Bound of the Gray Region (LBGR) is selected
during the DQO Process along with the target values for α and β.  The width of the gray region,
equal to (DCGL - LBGR), is a parameter that is central to the nonparametric tests discussed in
this manual. It is also referred to as the shift, Δ. The absolute size of the shift is actually of less
importance than the relative shift Δ/σ, where σ is an estimate of the standard deviation of the
measured values in the survey unit. The estimated standard deviation, σ, includes both the real
spatial variability in the quantity being measured and the precision of the chosen measurement
method.

                                       SCENARIO A

 Assume as a null hypothesis that the survey unit exceeds the release criterion.  This requires
 significant evidence that the residual radioactivity in the survey unit is less than the release
 criterion to reject the null hypothesis (and pass the survey unit). If the evidence is not
 significant at level α, the null hypothesis of a non-complying survey unit is accepted (and the
 survey unit fails).

  HYPOTHESIS TEST
  H0: Survey unit does not meet the release criterion
  Ha: Survey unit does meet the release criterion

 The survey unit passes if and only if the test statistic falls in the rejection region (in the figure,
 α is the probability that the null hypothesis is rejected when the survey unit is at the release
 criterion, with the critical value lying below the release criterion).

 This test directly addresses the compliance question.

 The mean shift for the survey unit must be significantly below the release criterion for the null
 hypothesis to be rejected.

 With this test, site owners face a trade-off between additional sampling costs and unnecessary
 remediation costs. They may choose to increase the number of measurements in order to decrease
 the number of Type II decision errors (reducing the chance of remediating a clean survey unit)
 for survey units at or near background levels.

 Distinguishability from background is not directly addressed. However, sample sizes may be
 selected to provide adequate power at or near background levels, hence ensuring that most survey
 units near background would pass.  Additional analyses, such as point estimates and/or confidence
 intervals, may be used to address this question.

 A high percentage of survey units slightly below the release criterion may fail the release
 criterion unless large numbers of measurements are used. This achieves a high degree of assurance
 that most survey units that are at or above the release criterion will not be improperly released.

      Figure D.5 Possible Statement of the Null Hypothesis for the Final Status Survey
                             Addressing the Issue of Compliance

                                       SCENARIO B

 Assume as a null hypothesis that the survey unit is indistinguishable from background.  This
 requires significant evidence that the survey unit residual radioactivity is greater than
 background to reject the null hypothesis (and fail the survey unit).  If the evidence is not
 significant at level α, the null hypothesis of a clean survey unit is accepted (and the survey
 unit passes).

  HYPOTHESIS TEST
  H0: Survey unit is indistinguishable from background
  Ha: Survey unit is distinguishable from background

 The survey unit fails if and only if the test statistic falls in the rejection region (in the figure,
 α is the probability that the null hypothesis is rejected when the survey unit is at background).

 Distinguishability from background may be of primary importance to some stakeholders.

 The residual radioactivity in the survey unit must be significantly above background for the null
 hypothesis to be rejected.

 Compliance with the DCGLs is not directly addressed. However, the number of measurements may
 be selected to provide adequate power at or near the DCGL, hence ensuring that most survey units
 near the DCGL would not be improperly released.  Additional analysis, based on point estimates
 and/or confidence intervals, is required to determine compliance if the null hypothesis is rejected by
 the test.

 A high percentage of survey units slightly below the release criterion will fail unless large numbers
 of measurements are used. This is necessary to achieve a high degree of assurance that, for most
 survey units at or above the release criterion, the null hypothesis will be rejected so that they are
 not improperly released.

      Figure D.6 Possible Statement of the Null Hypothesis for the Final Status Survey
                Addressing the Issue of Indistinguishability from Background

The relative shift, Δ/σ, is an expression of the resolution of the measurements in units
of measurement uncertainty.  Expressed in this way, it is easy to see that relative shifts of less
than one standard deviation, Δ/σ < 1, will be difficult to detect.  On the other hand, relative shifts
of more than three standard deviations, Δ/σ > 3, are generally easier to detect. The number of
measurements that will be required to achieve given error rates, α and β, depends almost entirely
on the value of Δ/σ (see Chapter 5).
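
The dependence of the number of measurements on the relative shift can be sketched with the
large-sample approximation used for the Sign test in Chapter 5, N = (z1-α + z1-β)² / [4(Sign p - 0.5)²],
where Sign p is approximated here by the normal probability Φ(Δ/σ). Treat the result as a planning
illustration rather than a substitute for the Chapter 5 tables, which include additional adjustments.

    import math
    from statistics import NormalDist

    def sign_test_sample_size(alpha, beta, rel_shift):
        # N = (z_{1-alpha} + z_{1-beta})^2 / (4 * (SignP - 0.5)^2), with SignP
        # approximated by the normal probability Phi(delta/sigma).
        nd = NormalDist()
        z = nd.inv_cdf(1 - alpha) + nd.inv_cdf(1 - beta)
        sign_p = nd.cdf(rel_shift)
        return math.ceil(z * z / (4.0 * (sign_p - 0.5) ** 2))

    for rel_shift in (0.5, 1.0, 2.0, 3.0):
        n = sign_test_sample_size(0.05, 0.05, rel_shift)
        print(f"delta/sigma = {rel_shift:3.1f}: approximately {n} measurements")

The output shows the sample size rising sharply below Δ/σ = 1 and changing only slowly above
Δ/σ = 2, which is the behavior described in the following paragraphs.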

Since small values of Δ/σ result in large numbers of samples, it is important to design for Δ/σ > 1
whenever possible. There are two obvious ways to increase Δ/σ.  The first is to increase the
width of the gray region by making the LBGR small. Only Type II decision errors occur in the gray
region. The disadvantage of making this gray region larger is that the probability of incorrectly
failing to release a survey unit will increase.  The target false negative rate β will be specified at
lower residual radioactivity levels, i.e., a survey unit will generally have to be lower in residual
radioactivity to have a high probability of being judged to meet the release criterion.  The second
way to increase Δ/σ is to make σ smaller. One way to make σ small is by having survey units
that are relatively homogeneous in the amount of measured radioactivity.  This is an important
consideration in selecting survey units that have both relatively uniform levels of residual
radioactivity and relatively uniform background radiation levels. Another way to make
σ small is by using more precise measurement methods.  The more precise methods might be
more expensive, but this may be compensated for by the decrease in the number of required
measurements. One example would be using a radionuclide-specific method rather than gross
radioactivity measurements for residual radioactivity that does not appear in background. This
would eliminate the variability in background from σ, and would also eliminate the need for
reference area measurements.

The effect of changing the width of the gray region and/or changing  the measurement variability
on the estimated number of measurements (and cost) can be investigated using the DEFT
(Decision Error Feasibility Trials) software developed by EPA (EPA 1995a).  This program can
only give approximate sample sizes and costs, since it assumes that the measurement data are
normally distributed and that a Student's t test will be used to evaluate the data, and it currently
has no provision for comparison to a reference area. Nevertheless, as a rough rule of
thumb, the sample sizes calculated by DEFT are about 85%  of those required by the one-sample
nonparametric tests recommended in this manual. This rule of thumb works better for large
numbers of measurements than for smaller numbers of measurements, but can be very useful for
estimating the relative impact on costs of decisions made during the planning process.

Generally, the design goal should be to achieve Δ/σ values between one and three. The number
of samples needed rises dramatically when Δ/σ is smaller than one.  Conversely, little is usually
gained by making Δ/σ larger than about three.  If Δ/σ is greater than three or four, one should
take advantage of the measurement precision available by making the width of the gray region
smaller. It is even more important, however, that overly optimistic estimates for σ be avoided.
The consequence of taking fewer samples than are needed, given the actual measurement
variations, will be unnecessary remediations (increased Type II decision errors).

Once the preliminary estimates of Δ and σ are available, target values for α and β can be
selected. The values of α and β should reflect the risks involved in making Type I and Type II
decision errors, respectively.

One consideration in setting the false positive rate is the health risk associated with releasing a
survey unit that might actually contain residual radioactivity in excess of the DCGLW.  If a survey
unit did exceed the DCGLW, the first question that arises is: "How much above the DCGLW is the
residual radioactivity likely to be?"  The DEFT software can be used to evaluate this.

For example, if the DCGLW is 100 Bq/kg (2.7 pCi/g), the LBGR is 50 Bq/kg (1.4 pCi/g), σ is 50
Bq/kg (1.4 pCi/g), α = 0.10, and β = 0.05, the DEFT calculations show that while a survey unit
with residual radioactivity equal to the DCGLW has a 10% chance of being released, a survey unit
at a level of 115 Bq/kg (3.1 pCi/g) has less than a 5% chance of being released, and a survey unit
at a level of 165 Bq/kg (4.5 pCi/g) has virtually no chance of being released. However, a survey
unit with a residual radioactivity level of 65 Bq/kg (1.8 pCi/g) will have about an 80% chance of
being released, and a survey unit with a residual radioactivity level of 80 Bq/kg (2.2 pCi/g) will
only have about a 40% chance of being released.  Therefore, it is important to examine the
probability of deciding that the survey unit does not meet the release criterion over the entire
range of possible residual radioactivity values, and not only at the boundaries of the gray region.
Of course, the gray region can be made narrower, but at the cost of additional sampling.  Since
the equations governing the process are not linear, small changes can lead to substantial changes
in survey costs.
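
The probabilities quoted above can be approximated with a simple calculation that treats the sample
mean as normally distributed with known σ. This is an illustration of the idea rather than a
reproduction of the DEFT algorithm, so the values differ slightly from the DEFT results cited in the
text.

    import math
    from statistics import NormalDist

    nd = NormalDist()
    dcgl, lbgr, sigma = 100.0, 50.0, 50.0   # Bq/kg, as in the example above
    alpha, beta = 0.10, 0.05

    # Sample size and critical value from a known-sigma normal approximation.
    z_alpha, z_beta = nd.inv_cdf(1 - alpha), nd.inv_cdf(1 - beta)
    n = math.ceil(((z_alpha + z_beta) * sigma / (dcgl - lbgr)) ** 2)
    se = sigma / math.sqrt(n)
    critical = dcgl - z_alpha * se   # decide "meets the criterion" if the sample mean is below this

    for true_conc in (65, 80, 100, 115, 165):
        p_release = nd.cdf((critical - true_conc) / se)
        print(f"true concentration {true_conc:3d} Bq/kg: chance of release ~ {p_release:.0%}")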

As stated earlier, the values of α and β that are selected in the DQO Process should reflect the
risk involved in making a decision error.  In setting values for α, the following are important
considerations:

•      In radiation protection practice, public health risk is modeled  as a linear function of dose
       (BEIR 1990). Therefore a 10% change in dose, say from 15 to  16.5, results in a 10%
       change in risk.  This situation is quite different from one in which there is a threshold. In
       the latter case, the risk associated with a decision error can be quite high, and low values
       of α should be selected.  When the risk is linear, much higher values of α at the release
       criterion might be considered adequately protective when the survey design results in
       smaller decision error rates at doses or risks greater than the release criterion. False
       positives will tend to be balanced by false negatives across sites and survey units,
       resulting in approximately equal  human health risks.
•      The DCGL itself is not free of error. The dose or risk cannot  be measured directly, and
       many assumptions are made in converting doses or  risks to derived concentrations. To be
       adequately protective of public health, these models are generally designed to over predict
       the dose or risk. Unfortunately, it is difficult to quantify this.  Nonetheless, it is probably
       safe to say that most models have uncertainty sufficiently large  such that the true dose or
       risk delivered by residual radioactivity at the DCGL is very likely to be lower than the
       release criterion. This is an additional consideration for setting the value of α that could
       support the use of larger values in some situations. In this case, one would prospectively
       address, as part of the DQO process, the magnitude, significance, and potential
       consequences of decision errors at values above the release criterion.  The assumptions
       made in any model used to predict DCGLs for a site should be examined carefully to
       determine if the use of site specific parameters results in large changes in the DCGLs, or
       whether a site-specific model should be developed rather than designing a survey around
       DCGLs that may be too conservative.
•      The risk of making the second type of decision error, β, is the risk of requiring additional
       remediation when a survey unit already meets the release criterion. Unlike the health
       risk, the cost associated with this type of error may be highly non-linear. The costs will
       depend on whether the survey unit has already had remediation work performed on it, and
       the type of residual radioactivity present. There may be a threshold below which the
       remediation cost rises very rapidly.  If so, a low value for β is appropriate at that threshold
       value.  This is primarily an issue for survey units that have a substantial likelihood of
       falling at or above the gray region for residual radioactivity.  For survey units that are
       very lightly contaminated, or have been so thoroughly remediated that any residual
       radioactivity is expected to be far below the DCGL, larger values of β may be appropriate
       especially if final status survey sampling costs are a concern.  Again, it is important to
       examine the probability of deciding that the survey unit does not meet the release
       criterion over the entire range of possible residual radioactivity values, below as well as
       above the gray region.
•      Lower decision error rates may be possible if alternative sampling and analysis
       techniques can be used that result in higher precision.  The same might be achieved with
       moderate increases in sample sizes. These alternatives should be explored before
       accepting higher design error rates.  However, in some circumstances, such as high
       background variations, lack of a radionuclide specific technique, and/or radionuclides that
       are very difficult and expensive to quantify, error rates that are lower than the
       uncertainties in the dose or risk estimates may be neither cost effective nor necessary for
       adequate radiation protection.

None of the above discussion is meant to suggest that under any circumstances a less than
rigorous, thorough, and professional approach to final status surveys would be satisfactory. The
decisions made and the rationale  for making these decisions should be thoroughly documented.

For Class 1 Survey Units, the number of samples may be driven more by the need to detect small
areas of elevated activity than by the requirements of the statistical tests. This in turn will depend
primarily on the sensitivity of available scanning instrumentation, the size of the area of elevated
activity, and the dose or risk model. A given concentration of residual radioactivity spread over a
smaller area will, in general, result in a smaller dose or risk.  Thus, the DCGLEMC used for the
elevated measurement comparison is usually larger than the DCGLW used for the statistical test.
In some cases, especially radionuclides that deliver dose or risk primarily via internal pathways,
dose or risk is approximately proportional to inventory, and so the difference in the DCGLs is
approximately proportional to the areas.

However, this may not be the case for radionuclides that deliver a significant portion of the dose
or risk via external exposure.  The exact relationship between the DCGLEMC and the DCGLW is a
complicated function of the dose or risk modeling pathways, but area factors to relate the two
DCGLs can be tabulated for most radionuclides (see Chapter 5), and site-specific area factors can
also be developed.
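
The use of area factors can be illustrated with a short sketch. The area factors, DCGLW, and scan
MDC below are hypothetical placeholders; actual area factors are radionuclide- and model-specific
and are discussed in Chapter 5.

    # Hypothetical area factors relating the DCGL_EMC to the DCGL_W for small areas
    # of elevated activity (actual values are radionuclide- and model-specific).
    dcgl_w = 100.0                              # Bq/kg
    area_factors = {1: 12.0, 3: 5.5, 10: 2.4}   # elevated area in m^2 -> hypothetical area factor
    scan_mdc = 400.0                            # hypothetical scan MDC (Bq/kg)

    for area_m2, factor in area_factors.items():
        dcgl_emc = factor * dcgl_w
        detectable = "yes" if scan_mdc <= dcgl_emc else "no"
        print(f"{area_m2:2d} m^2: DCGL_EMC = {dcgl_emc:6.1f} Bq/kg, detectable by scanning: {detectable}")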

For many radionuclides, scanning instrumentation is readily available that is sensitive enough to
detect residual radioactivity concentrations at the DCGLEMC derived for the sampling grid of
direct measurements used in the statistical tests. Where instrumentation of sufficient sensitivity
(MDC, see Chapter 6) is not available, the number of samples in the survey unit can be increased
until the area between sampling points is small enough (and the resulting area factor is large
enough) that DCGLEMC can be detected by scanning. The details of this process are discussed in
Chapter 5. For some radionuclides (e.g., 3H) the scanning sensitivity is so low that this process
would never terminate—i.e., the number of samples required could increase without limit. Thus,
an important part of the DQO Process is to determine the smallest size of an area of elevated
activity that it is important to detect, AEM, and an acceptable level of risk, RA, that it may go
undetected. The probability of sampling a circular area of size A with either a square or
triangular sampling pattern is shown in Figure D.7.  The ELIPGRID-PC (Davidson 1995)
computer code can also be used to calculate these probabilities.
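
The geometric probabilities plotted in Figure D.7 (or computed with ELIPGRID-PC) can be
approximated with a simple Monte Carlo sketch. The version below treats only a square grid with a
hypothetical spacing and a randomly located circular area of elevated activity; it is a rough planning
aid, not a replacement for the published curves.

    import math
    import random

    def hit_probability(hot_spot_area, grid_spacing, trials=100_000, seed=1):
        # Monte Carlo estimate of the probability that a square grid with the given
        # spacing places at least one sampling point inside a randomly located
        # circular area of elevated activity with the given area.
        random.seed(seed)
        radius = math.sqrt(hot_spot_area / math.pi)
        hits = 0
        for _ in range(trials):
            # By symmetry, placing the circle's center uniformly within one grid cell
            # is equivalent to placing it anywhere relative to the full grid.
            cx = random.uniform(0, grid_spacing)
            cy = random.uniform(0, grid_spacing)
            # Checking the cell's four corner points is sufficient: every other grid
            # point is at least one spacing away, and once the radius reaches that
            # distance the nearest corner is already inside the circle.
            corners = ((0, 0), (grid_spacing, 0), (0, grid_spacing), (grid_spacing, grid_spacing))
            if any(math.hypot(cx - gx, cy - gy) <= radius for gx, gy in corners):
                hits += 1
        return hits / trials

    spacing = 10.0   # hypothetical grid spacing (m)
    for area in (10.0, 50.0, 100.0):
        print(f"elevated area {area:5.1f} m^2: probability of sampling it ~ {hit_probability(area, spacing):.2f}")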

In this part of the DQO Process, the concern is less with areas of elevated activity that are found
than with providing adequate assurance that negative scanning results truly demonstrate the
absence of such areas. In selecting acceptable values for AEM and RA, maximum use should be
made of information from the HSA and all surveys prior to the final status survey to determine
what sort of areas of elevated activity could possibly exist, their potential size and shape, and
how likely they are to exist. When the detection limit of the scanning technique is very large
relative to the DCGLEMC, the number of measurements estimated to demonstrate compliance
using the statistical tests may become unreasonably large.  In this situation, an evaluation of the
survey objectives and related considerations should be performed.  These considerations may
include the survey design and measurement methodology, exposure pathway modeling
assumptions and parameter values used to determine the DCGLs, Historical Site Assessment
conclusions concerning source terms and radionuclide distributions, and the results of scoping
and characterization surveys. In most cases the results of this evaluation are not expected to
justify an unreasonably large number of measurements.

A convenient method for visualizing the decision rule is to graph the probability of deciding that
the survey unit does not meet the release criterion, i.e., that the null hypothesis of Scenario A is
accepted.  An example of such a chart is shown in Figure D.8.

           Figure D.7 Geometric Probability of Sampling at Least One Point of
            an Area of Elevated Activity as a Function of Sample Density with
                   Either a Square or Triangular Sampling Pattern
           (Two panels plot the probability of sampling the area against the area of
           elevated activity, n x L2, for a triangular and a square systematic grid.)

           Figure D.8  Example of a Power Chart Illustrating the Decision Rule
                            for the Final Status Survey
           (The chart plots the probability of deciding that the survey unit does not meet the
           release criterion against the true activity above background, from 0 to 2 DCGLW;
           the gray region, where larger error rates are acceptable, lies between 0.5 DCGLW
           and the DCGLW, with the acceptable Type II decision error rate (β) marked at its
           lower bound and the value 1 - α marked at its upper bound.)

In this example α is 0.025 and β is 0.05, providing an expected power (1-β) of 0.95 for the test.
A second method for presenting the information is shown in Figure D.9. This figure shows the
probability of making a decision error for possible values of the parameter of interest, and is
referred to as an error chart.  In both examples a gray region, where the consequences of decision
errors are deemed to be relatively minor, is shown.  These charts are used in the final step of the
DQO Process, combined with the outputs from the previous steps, to produce an efficient and
cost-effective survey design. It is clear that setting acceptable values for a and P, as well as
determining an appropriate gray region, is a crucial step in the DQO Process.  Instructions for
creating a prospective power curve, which can also be used to visualize the decision rule, are
provided in Appendix I.
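
The values plotted in a chart like Figure D.8 can be tabulated with the same kind of normal
approximation used earlier. The sketch below uses hypothetical planning values with α = 0.025 and
β = 0.05, as in the example; Appendix I gives the actual instructions for constructing a prospective
power curve.

    import math
    from statistics import NormalDist

    nd = NormalDist()
    dcgl, lbgr, sigma = 100.0, 50.0, 50.0   # hypothetical planning values (Bq/kg)
    alpha, beta = 0.025, 0.05               # error rates used in the example above

    z_alpha, z_beta = nd.inv_cdf(1 - alpha), nd.inv_cdf(1 - beta)
    n = math.ceil(((z_alpha + z_beta) * sigma / (dcgl - lbgr)) ** 2)
    se = sigma / math.sqrt(n)
    critical = dcgl - z_alpha * se          # decide "meets the criterion" below this sample mean

    print(f"approximate number of measurements: {n}")
    print("true activity     P(decide the survey unit does NOT meet the release criterion)")
    for frac in (0.0, 0.5, 1.0, 1.5, 2.0):
        p_fail = 1.0 - nd.cdf((critical - frac * dcgl) / se)
        print(f"{frac:3.1f} x DCGLw       {p_fail:.3f}")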

After the survey design is implemented, the expected values of a and P determined in this step
are compared to the actual significance level and power of the statistical test based on the
measurement results during the assessment phase of the Data Life Cycle.  This comparison is
used to verify that the objectives of the survey have been achieved.

EPA QA/G-9 (EPA 1996a) discusses considerations for selecting a particular null hypothesis.
Because of the basic hypothesis testing philosophy, the null hypothesis is generally specified in
terms of the status quo (e.g., no change or action will take place if the null hypothesis is not
rejected). Also, since the classical hypothesis testing approach exercises direct control over the
Type I (false positive) error rate, this rate is generally associated with the error of most concern.
In the case of the null hypothesis in which the residual radioactivity in the survey unit exceeds
the release criterion, a Type I decision error would conclude that the residual activity was less
than the release criterion when in fact it was above the release criterion. One difficulty,
therefore, may be obtaining a consensus on which error should be of most concern (i.e., releasing
a site where the residual activity exceeds the release criterion or failing to release a site where the
residual activity is less than the release criterion). It is likely that the regulatory agency's public
health-based protection viewpoint will differ from the viewpoint of the regulated party.  The
ideal approach is not only to define the null hypothesis  in such a way that the Type I decision
error protects human health and the environment but also in a way that encourages quality (high
precision and accuracy) and minimizes expenditure of resources in situations where decisions are
relatively "easy" (e.g., all observations are far below the threshold level of interest or DCGL).

To avoid excessive expense in performing measurements, compromises are sometimes
necessary. For example, suppose that a significance level (α) of 0.05 is to be used. However, the
affordable sample size may be expected to yield a test with power (1-β) of only 0.40 at some
specified parameter value chosen to have practical significance. One possible compromise may
be to relax the Type I decision error rate (α) and use a value of 0.10, 0.15, or even 0.20. By
relaxing the Type I decision error rate, a higher power (i.e., a lower Type II decision error rate)
can be achieved. An argument can be made that survey designs should be developed and the number
of measurements determined in such a way that both the Type I (α) and Type II (β) decision error
rates are treated simultaneously and in a balanced manner (i.e., α = β = 0.15). This approach of
treating the Type I and Type II decision error rates simultaneously is taken by the DQO Process.
It is recommended that several different values for α and β be investigated before specific values
are selected.

[Figure D.9: the probability of making a decision error plotted against the true activity above
background, from 0.5 DCGL_W to 2 DCGL_W (vertical axis from 0.00 to 0.20, with 1.00 at the top of
the scale). The gray region, where larger error rates are acceptable, lies between 0.5 DCGL_W and
DCGL_W; the acceptable Type II decision error rate (β) is indicated to its left and the acceptable
Type I decision error rate (α) to its right.]

          Figure D.9  Example of an Error Chart Illustrating the Decision Rule
                            for the Final Status Survey

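The compromises discussed above can be examined quickly before specific values are fixed. A minimal
sketch follows, assuming a Sign-test sample-size relationship of the form used in Chapter 5,
N = (z_{1-α} + z_{1-β})² / (4 (Sign p - 0.5)²), with Sign p estimated from a normal measurement
distribution; the gray region and standard deviation are illustrative values.

    # Minimal sketch (illustrative only): how different choices of alpha and beta change
    # the number of measurements for a Sign-test design, assuming the sample-size
    # relationship used in Chapter 5, N = (z_{1-alpha} + z_{1-beta})^2 / (4*(SignP - 0.5)^2),
    # with SignP estimated from a normal measurement distribution.  Inputs are assumptions.
    from scipy.stats import norm
    import math

    DCGL_W, LBGR, sigma = 1.0, 0.5, 0.3          # illustrative gray region and variability
    sign_p = norm.cdf((DCGL_W - LBGR) / sigma)   # P(measurement < DCGL_W) when the median is at the LBGR

    for alpha, beta in [(0.05, 0.05), (0.10, 0.05), (0.20, 0.05), (0.15, 0.15)]:
        z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
        n = math.ceil(z**2 / (4 * (sign_p - 0.5)**2))
        # (MARSSIM also suggests adding extra measurements to allow for lost or unusable data.)
        print(f"alpha = {alpha:.2f}, beta = {beta:.2f}: N = {n}")
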
D.7 Optimize the Design for Collecting Data

This step is designed to produce the most resource-effective survey design that is expected to
meet the DQOs. It may be necessary to work through this step more than once after revisiting
previous steps in the DQO Process.

There are six activities included in this step:

•      Reviewing the DQO outputs and existing environmental data to ensure they are internally
       consistent.
•      Developing general data collection design alternatives. Chapter 5 describes random and
       systematic sampling designs recommended for final status surveys based on survey unit
       classification.
•      Formulating the mathematical expressions needed to solve the design problem for each
       data collection design alternative.
•      Selecting the optimal design that satisfies the DQOs for each data collection  design
       alternative. If the recommended design will not meet the limits on decision errors within
       the budget or other constraints, then the planning team will need to relax one or more
       constraints.  Examples include:
       a.      increasing the budget for sampling and analysis
       b.      using exposure pathway modeling to develop site-specific DCGLs
       c.      increasing the decision error rates, not forgetting to consider the risks associated
              with making an incorrect decision
       d.      increasing the width of the gray region by decreasing the LBGR
       e.      relaxing other project  constraints—e.g., schedule
       f.      changing the boundaries—it may be possible to reduce measurement costs by
              changing or eliminating survey units that will require different decisions
       g.      evaluating alternative  measurement techniques with lower detection limits or
              lower survey costs
       h.      considering the use of passive controls when releasing the survey unit rather than
              unrestricted release
•      Selecting the most resource-effective survey design that satisfies all of the DQOs.
       Generally, the survey designs  described in Chapter 5 will be acceptable for demonstrating
       compliance. Atypical sites (e.g., mixed-waste sites) may require the planning team to
       consider alternative survey designs on a site-specific basis.
•      Documenting the operational details and theoretical assumptions of the selected design in
       the QAPP, the field sampling plan, the sampling and analysis plan, or the
       decommissioning plan. All of the decisions that will be made based on the data collected
       during the survey should be specified along with the alternative actions that may be
       adopted based on the survey results.

Chapters 4 and  5 present a framework for a final status survey design.  When this framework is
combined with the site-specific DQOs developed using the guidance in this section, the survey
design should be acceptable for most sites. The key inputs to  Chapters 4 and 5 are:

•      investigation levels and DCGLs for each radionuclide  of interest
•      acceptable measurement techniques for scanning, sampling, and direct measurements,
       including detection limits and estimated survey costs
•      identification and classification of survey units
•      an estimate of the variability in the distribution of residual radioactivity for each survey
       unit, and in the reference area if necessary
•      the decision maker's acceptable a priori values for decision error rates (α and β)
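
As a rough illustration of how these inputs combine in the Chapter 5 design calculations, the sketch
below estimates the number of measurements for the Wilcoxon Rank Sum (WRS) test, assuming the
commonly used relationship N = (z_{1-α} + z_{1-β})² / (3 (P_r - 0.5)²) and a normal approximation
for P_r; every numeric value is an assumption, not a value from the manual.

    # Minimal sketch (illustrative only): a first estimate of the number of measurements
    # for the Wilcoxon Rank Sum (WRS) test from the DQO inputs listed above, assuming
    # N = (z_{1-alpha} + z_{1-beta})^2 / (3*(P_r - 0.5)^2) and a normal approximation
    # for P_r.  All numeric values are assumptions.
    from scipy.stats import norm
    import math

    DCGL_W = 1.0            # derived concentration guideline level (arbitrary units)
    LBGR   = 0.5            # lower bound of the gray region
    sigma  = 0.3            # estimated standard deviation of the measurements
    alpha, beta = 0.05, 0.05

    shift = DCGL_W - LBGR                                # width of the gray region
    p_r = norm.cdf(shift / (sigma * math.sqrt(2.0)))     # approximate P_r for normal data
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
    n_total = z**2 / (3.0 * (p_r - 0.5)**2)
    n_each = math.ceil(n_total / 2.0)                    # per survey unit and per reference area

    print(f"relative shift = {shift / sigma:.2f}, P_r = {p_r:.3f}")
    print(f"roughly {n_each} measurements each in the survey unit and the reference area")
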
                                    APPENDIX E

          THE ASSESSMENT PHASE OF THE DATA LIFE CYCLE
The assessment phase of the Data Life Cycle includes verification and validation of the survey
data and assessment of the quality of the data. Data verification is used to ensure that the
requirements stated in the planning documents are implemented as prescribed. Data validation is
used to ensure that the results of the data collection activities support the objectives of the survey
as documented in the Quality Assurance Project Plan (QAPP), or permit a determination that
these objectives should be modified. Data Quality Assessment (DQA) is the scientific and
statistical evaluation of data to determine if the data are of the right type, quality, and quantity to
support their intended use (EPA 1996a). DQA helps complete the Data Life Cycle by providing
the assessment needed to determine that the planning objectives are achieved. Figure E.1
illustrates where data verification, data validation and DQA fit into the Assessment Phase of the
Data Life Cycle.

There are five steps in the DQA Process:

       Review the Data Quality Objectives (DQOs) and Survey Design
       Conduct a Preliminary Data Review
       Select the Statistical Test
       Verify the Assumptions of the Statistical Test
       Draw Conclusions from the Data

These five steps are presented in a linear sequence, but the DQA process is applied in an iterative
fashion much like the DQO process. The strength of the DQA process is that it is designed to
promote an understanding of how well the data will meet their intended  use by progressing in a
logical and efficient manner.
E.1   Review DQOs and Survey Design

The DQA process begins by reviewing the key outputs from the Planning phase of the Data Life
Cycle that are recorded in the planning documents (e.g., the QAPP). The DQOs provide the
context for understanding the purpose of the data collection effort.  They also establish
qualitative and quantitative criteria for assessing the quality of the data set for the intended use.
The survey design (documented in the QAPP) provides important information about how to
interpret the data.
[Figure E.1: flowchart of the Assessment Phase. Routine data and QC/performance evaluation data are
the inputs to DATA VALIDATION/VERIFICATION (verify measurement performance; verify measurement
procedures and reporting requirements), whose output is VALIDATED/VERIFIED DATA. That output is the
input to DATA QUALITY ASSESSMENT (review DQOs and design; conduct preliminary data review; select
statistical test; verify assumptions; draw conclusions), whose output is CONCLUSIONS DRAWN FROM
DATA.]

        Figure E.1 The Assessment Phase of the Data Life Cycle (EPA 1996a)

There are three activities associated with this step in the DQA process:

•      Translating the data user's objectives into a statement of the hypotheses to be tested using
       environmental data.  These objectives should be documented as part of the DQO Process,
       and this activity is reduced to translating these objectives into the statement of
       hypotheses.  If DQOs have not been developed, which may be the case for historical data,
       review Appendix D for assistance in developing these objectives.

•      Translating the objectives into limits on the probability of committing Type I or Type II
       decision errors. Appendix D, Section D.6 provides guidance on specifying limits on
       decision errors as part of the DQO process.

•      Reviewing the survey design and noting any special features or potential problems.  The
       goal of this activity is to familiarize the analyst with the main features of the survey
       design used to generate the environmental data. Review the survey design documentation
       (e.g., the QAPP) with the data user's objectives in mind. Look for design features that
       support or contradict these objectives.

For the final status survey, this step would consist of a review of the DQOs developed using
Appendix D and the QAPP developed in Chapter 9.
E.2   Conduct a Preliminary Data Review

In this step of the DQA process, the analyst conducts a preliminary evaluation of the data set,
calculating some basic statistical quantities and looking at the data through graphical
representations.  By reviewing the data both numerically and graphically, the analyst can learn
the "structure" of the data and thereby identify appropriate approaches and limitations for their
use.

This step includes three activities:

•      reviewing quality assurance reports
•      calculating statistical quantities (e.g., relative standing, central tendency, dispersion,
       shape, and association)
•      graphing the data (e.g., histograms, scatter plots, confidence intervals, ranked data plots,
       quantile plots, stem-and-leaf diagrams, spatial or temporal plots)

Chapter 8 discusses the application of these activities to a final status survey.
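
The numerical part of a preliminary data review can be scripted in a few lines. The following
minimal sketch computes the kinds of quantities listed above and prints a simple stem-and-leaf
display; the measurement values are made up for the example.

    # Minimal sketch (illustrative only): basic quantities for a preliminary data review.
    # The measurement values are invented for the example.
    import numpy as np

    data = np.array([0.42, 0.55, 0.31, 0.47, 0.66, 0.52, 0.38, 0.49, 0.71, 0.44])

    print("n          =", data.size)
    print("mean       =", round(float(data.mean()), 3))         # central tendency
    print("median     =", round(float(np.median(data)), 3))
    print("std dev    =", round(float(data.std(ddof=1)), 3))    # dispersion
    print("quartiles  =", np.percentile(data, [25, 50, 75]))    # relative standing
    # Rough shape indicator (Pearson's second skewness coefficient):
    print("skew (approx) =", round(float(3 * (data.mean() - np.median(data)) / data.std(ddof=1)), 3))

    # A simple text stem-and-leaf display, one of the graphical reviews listed above:
    for stem in sorted({int(v * 10) for v in data}):
        leaves = sorted(int(round(v * 100)) % 10 for v in data if int(v * 10) == stem)
        print(f"{stem / 10:.1f} | " + " ".join(str(leaf) for leaf in leaves))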


E.3   Select the Statistical Test

The statistical tests presented in Chapter 8 are applicable for most sites contaminated with
radioactive material. Chapter 2 discusses the rationale for selecting the statistical methods
recommended for the final status survey in more detail.  Additional guidance on selecting
alternate statistical methods can be found in Section 2.6  and in EPA's DQA guidance document
(EPA 1995).
E.4   Verify the Assumptions of the Statistical Test

In this step, the analyst assesses the validity of the statistical test by examining the underlying
assumptions in light of the environmental data. The key questions to be resolved are: "Do the
data support the underlying assumptions of the test?", and:  "Do the data suggest that
modifications to the statistical analysis are warranted?"

The underlying assumptions for the statistical tests are discussed in Section 2.5.  Graphical
representations of the data, such as those described in Section 8.2 and Appendix I, can provide
important qualitative information about the validity of the assumptions. Documentation of this
step is always important, especially when professional judgement plays a role in accepting the
results of the analysis.
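
As one concrete (and purely illustrative) example of such a check, the sketch below compares
quantiles of survey unit and reference area data; if the two distributions differ only by a shift,
as the WRS test assumes, the quantile differences should be roughly constant apart from sampling
noise. Both data sets are invented for the example.

    # Minimal sketch (illustrative only): an informal check of the WRS test's assumption
    # that the survey unit and reference area distributions differ only by a shift,
    # made by comparing their quantiles.  Both data sets are invented for the example.
    import numpy as np

    survey_unit = np.array([0.61, 0.58, 0.72, 0.66, 0.55, 0.69, 0.74, 0.63, 0.59, 0.70])
    reference   = np.array([0.40, 0.36, 0.48, 0.45, 0.33, 0.47, 0.52, 0.41, 0.38, 0.49])

    percentiles = [10, 25, 50, 75, 90]
    differences = np.percentile(survey_unit, percentiles) - np.percentile(reference, percentiles)

    # If the shift-only assumption holds, these differences should be roughly constant
    # (apart from sampling noise); a strong trend would suggest a change in shape.
    for p, d in zip(percentiles, differences):
        print(f"{p:2d}th percentile difference: {d:.3f}")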

There are three activities included in this step:

•      Determining the approach for verifying assumptions. For this activity, determine how the
       assumptions of the hypothesis test will be verified, including assumptions about
       distributional form, independence, dispersion, type,  and quantity of data. Chapter 8
       discusses methods for verifying assumptions for the final status survey statistical test
       during the preliminary data review.

•      Performing tests of the assumptions.  Perform the calculations selected in the previous
       activity for the statistical tests. Guidance on performing the tests recommended for the
       final status survey is included in Chapter 8.

•      Determining corrective actions (if any). Sometimes the assumptions underlying the
       hypothesis test will not be satisfied and some type of corrective action should be
       performed before proceeding. In some cases, the data for verifying some key assumption
       may not be available and existing data may not support the assumption. In this situation,
       it may be necessary to collect new data, transform the data to correct a problem with the
       distributional assumptions, or select an alternate hypothesis test.  Section 9.3 discusses
       potential corrective actions.



E.5   Draw Conclusions from the Data

The final step of the DQA process is performing the statistical test and drawing conclusions that
address the data user's objectives. The procedure for implementing the statistical test is included
in Chapter 8.
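
For orientation, the following is a minimal sketch of a one-sample Sign test of the kind described
in Chapter 8 for a radionuclide that is not present in background, using an exact binomial critical
value rather than the manual's tables; the measurements, DCGL_W, and alpha are made-up inputs, and
Chapter 8 remains the governing procedure.

    # Minimal sketch (illustrative only): a one-sample Sign test for a survey unit whose
    # radionuclide is not present in background, under the Scenario A null hypothesis
    # that the survey unit exceeds the release criterion.  An exact binomial critical
    # value is used here instead of the manual's tables; all inputs are made up.
    from scipy.stats import binom
    import numpy as np

    DCGL_W = 1.0
    alpha  = 0.05
    measurements = np.array([0.62, 0.85, 0.44, 0.91, 0.73, 0.58, 0.80, 0.67, 0.95, 0.71,
                             0.49, 0.88, 0.77, 0.64, 0.59])

    diffs = DCGL_W - measurements
    diffs = diffs[diffs != 0.0]          # differences of exactly zero are discarded
    s_plus = int(np.sum(diffs > 0))      # number of measurements below the DCGL_W
    n = diffs.size

    # Smallest k with P(S+ >= k) <= alpha when the true median equals DCGL_W.
    k = int(binom.isf(alpha, n, 0.5)) + 1

    print(f"S+ = {s_plus}, critical value = {k} (n = {n}, alpha = {alpha})")
    if s_plus >= k:
        print("Reject the null hypothesis: the data support release of the survey unit.")
    else:
        print("Do not reject the null hypothesis: compliance is not demonstrated.")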

There are three activities associated with this final step:

•      Performing the calculations for the statistical hypothesis test (see Chapter 8).

•      Evaluating the statistical test results and drawing the study conclusions. The result of
       the statistical test will be either acceptance or rejection of the null hypothesis.

•      Evaluating the performance of the survey design if the design is to be used again. If the
       survey design is to be used again, either in a later phase of the current study or in a similar
       study, the analyst will be interested in evaluating the overall  performance of the design.
       To evaluate the survey design, the analyst performs a statistical power analysis that
       describes the estimated power of the test over the full range of possible parameter values.
       This helps the  analyst evaluate the adequacy of the sampling design when the true
       parameter value lies in the vicinity of the action level (which may not have been the
       outcome of the current study).  It is recommended that a statistician be consulted when
       evaluating the  performance of a survey  design for future use.
                                  APPENDIX F

 THE RELATIONSHIP BETWEEN THE RADIATION SURVEY AND SITE
 INVESTIGATION PROCESS, THE CERCLA REMEDIAL OR REMOVAL
      PROCESS, AND THE RCRA CORRECTIVE ACTION PROCESS
This appendix presents a discussion of the relationship between the Radiation Survey and Site
Investigation Process, the Comprehensive Environmental Response, Compensation, and Liability
Act (CERCLA) Remedial or Removal Process, and the Resource Conservation and Recovery Act
(RCRA) Corrective Action Process. Each of these processes has been designed to incorporate
survey planning using the Data Quality Objectives (DQO) Process and data interpretation using
Data Quality Assessment (DQA), and each uses a series of surveys to accomplish the project objectives.
At this basic level, MARSSIM is consistent with the other  processes.

Figure F.1 illustrates the relationship between the major steps in each of these processes. As
shown in Figure F.I, the scope of MARSSIM (Section 1.1) results in steps in the CERCLA
Remedial or Removal Process and the RCRA Process that are not directly addressed by
MARSSIM (e.g., Feasibility Study or Corrective Measure Study). MARSSIM's focus on the
demonstration of compliance for sites with residual radioactivity using a final  status survey
integrates with the remedial design/remedial action (RD/RA) step of the CERCLA Remedial
Process described in Sec. 300.435(b)(1) of Title 40 of the Code of Federal Regulations. However,
MARSSIM's focus is not directly addressed by the major steps of the CERCLA Removal
Process or the RCRA Corrective Action Process.

Much of the guidance presented in MARSSIM for designing surveys and assessing the survey
results is taken directly from the corresponding CERCLA or RCRA guidance.  MARSSIM users
familiar with the Superfund Preliminary Assessment guidance (EPA 1991f) will recognize the
guidance provided on performing the Historical Site Assessment (Chapter 3) for identifying
potentially contaminated soil, water, or sediment. In addition, MARSSIM provides guidance for
identifying potentially contaminated structures which is not covered in the original CERCLA
guidance. The survey designs and statistical tests for relatively uniform distributions of residual
radioactivity discussed in MARSSIM are also discussed in CERCLA guidance (EPA 1989a, EPA
1994b).  However, MARSSIM includes scanning for radioactive materials which isn't discussed
in the  more general CERCLA guidance that doesn't specifically address radionuclides.
MARSSIM is not designed to replace or conflict with existing CERCLA or RCRA guidance, it is
designed to provide supplemental guidance for specific applications of the CERCLA Remedial
or Removal Process or the RCRA Corrective Action Process.
[Figure F.1: parallel flowcharts comparing the Radiation Survey and Site Investigation Process with
the CERCLA and RCRA programs. The CERCLA Soil Screening Level Guidance column runs from developing a
conceptual site model through a soil screening survey with a pass/fail decision. The CERCLA Remedial
Process column runs from preliminary assessment through site inspection, remedial investigation,
feasibility study, and remedial design/remedial action. The CERCLA Removal Process column runs from
removal site evaluation through removal preliminary assessment, removal site inspection (if needed),
and removal action. The RCRA Corrective Action Process column runs from RCRA facility assessment
through RCRA facility investigation, corrective measure study, and corrective measure
implementation. All columns lead to closure/post-closure or long-term remedial assessment.]

      Figure F.1  Comparison of the Radiation Survey and Site Investigation Process
     with the CERCLA Superfund Process and the RCRA Corrective Action Process

Table F.1 lists the major steps in MARSSIM and other CERCLA and RCRA processes and
describes the objectives of each step. This table provides a direct comparison of these processes,
and it shows the correlation between the processes.  This correlation is the result of carefully
integrating CERCLA and RCRA guidance with guidance from other agencies participating in the
development of MARSSIM to produce a multi-agency consensus document.

The first step in the CERCLA Remedial Process is the preliminary assessment to obtain existing
information about the site and determine if there is a threat to human health and the environment.
The next step is the site inspection which includes risk prioritization using the Hazard Ranking
System—sites with a score above a certain level are put on the National Priorities List (NPL).
Following the site assessment, the remedial investigation (RI) is performed to characterize the
extent and type of release, and to evaluate the risk to human health and the environment. A
Sampling and Analysis Plan is constructed as part of the remedial investigation which consists of
a Quality Assurance Project Plan, a Field Sampling Plan, a Health and Safety Plan, and a
Community Relations Plan. The site feasibility study (FS) is the next step in the CERCLA
Remedial Process (although the RI and FS are intended to be done concurrently) which involves
an evaluation of alternative remedial actions.  For sites listed on the NPL the next action would
be to obtain a Record of Decision (ROD) which provides the remedy selected for the site.  The
remedial design/remedial action (RD/RA), which includes the development of the selected
remedy and its implementation, follows development of the ROD. After the RD/RA activities
there is a period of operation and maintenance when the site is given a long term remedial
assessment followed by closure/post-closure of the site (or removal from the NPL). A removal
action may occur at any stage of the CERCLA Remedial Process.

The CERCLA Removal Process is similar to the Remedial Process for the first few steps.
40 CFR § 300.400 (NCP Subpart E—Hazardous Substance Response) establishes methods and
criteria for determining the extent of response when there is a release into the environment of a
hazardous substance or any pollutant or contaminant that may present an imminent and
substantial danger to the public health or welfare of the United States. The first step in the
Removal Process is a removal site evaluation which includes a removal preliminary assessment
and, if warranted, a removal site inspection.  A removal preliminary assessment may be based on
available information and should include an evaluation of the factors necessary to make the
determination of whether a removal is necessary. A removal site inspection  is  performed, if
warranted, in a similar manner as in the CERCLA Remedial Process. If environmental samples
are to be collected, a sampling and analysis plan should be developed which consists of a field
sampling plan and a quality assurance project plan.  Post-removal site controls are those activities
necessary to sustain the effectiveness and integrity of the removal action. In the case of all
CERCLA removal actions taken pursuant to § 300.415, a designated spokesperson will inform
the community of actions taken, respond to inquiries, and provide information concerning the
release—this may include a formal community relations plan specifying the  community relations
activities expected during the removal response.

Comparisons have been made between the CERCLA Remedial Process and  CERCLA Removal
Process (EPA, 1993c). Table F.2 presents the data elements that are common to both programs
and those that are generally common to one program rather than the other. Table F.3 shows the
emphasis placed on sampling for remedial site assessment versus removal site  assessment.

Another guidance document that can be compared to MARSSIM is the Soil Screening Guidance
(EPA 1996b, EPA 1996c), which facilitates removing sites from consideration early in the
CERCLA Process. Although not written to specifically address radioactive contaminants, the
Soil Screening Guidance leads the user from the initial site conceptualization and planning stages
through data collection and evaluation to the final testing step. MARSSIM also leads the user
through similar planning, evaluation, and testing stages, but the guidance focuses on the final
compliance demonstration step.

The Soil Screening Guidance provides a way to calculate risk-based, site-specific, soil screening
levels (SSLs) for contaminants in soil. SSLs can be used as preliminary remediation goals
(PRGs) if the conditions found at a specific site are similar to the conditions assumed in
calculating the SSLs.

Both the Soil Screening Guidance and MARSSIM provide examples of acceptable sampling and
analysis plans (SAP) for site contaminants. The Soil Screening Guidance recommended default
survey design for surface soils is very specific: recommendations for the grid size for sampling,
the number of soil samples collected from each subarea and composited, and data analysis and
interpretation techniques are described in detail. MARSSIM provides guidance that is consistent
and compatible with the Soil Screening Guidance with respect to the approaches, framework,
tools, and overall objectives.

SSLs calculated using the CERCLA Soil Screening Guidance could also be used for RCRA
Corrective Action sites as action levels. The RCRA Corrective Action program views action
levels as generally fulfilling the same purpose as soil screening levels. Table F.1 shows other
similarities between the RCRA Corrective Action Process, CERCLA Remedial or Removal
Process, and MARSSIM.

The similarities between the CERCLA Remedial Process and Removal Process have led to a
number of streamlined approaches to expedite  site cleanups by reducing sampling and preventing
duplication of effort. One example of these  approaches is the Superfund Accelerated Cleanup
Model (SACM) where the concept of integrating the removal and remedial site assessment was
introduced (EPA, 1993c). A memorandum from EPA, DOE, and DOD (August 22, 1994)
discusses guidance on accelerating and developing streamlined approaches for the cleanup of
hazardous waste at federal facility sites.
                               Table F.1  Program Comparison

MARSSIM: Historical Site Assessment
    Performed to gather existing information about radiation sites. Designed to distinguish
    between sites that possess no potential for residual radioactivity and those that require
    further investigation. Performed in three stages: 1) Site Identification, 2) Preliminary
    Investigation, 3) Site Reconnaissance.
CERCLA Remedial Process: Preliminary Assessment
    Performed to gather existing information about the site and surrounding area. The emphasis
    is on obtaining comprehensive information on people and resources that might be threatened
    by a release from the site. Designed to distinguish between sites that pose little or no
    threat to human health and the environment and sites that require further investigation.
CERCLA Removal Process: Preliminary Assessment
    Performed in a similar manner as in the CERCLA Remedial Process. The removal preliminary
    assessment may be based on available information. A removal preliminary assessment may
    include an identification of the source, nature and magnitude of the release, evaluation by
    ATSDR of the threat to public health, and evaluation of factors necessary to make the
    determination of whether a removal is necessary.

MARSSIM: Scoping Survey
    Performed to provide a preliminary assessment of the radiological hazards of the site.
    Supports classification of all or part of the site as Class 3 areas and identification of
    non-impacted areas of the site. Scoping surveys provide data to complete the site
    prioritization scoring process for CERCLA or RCRA sites.
CERCLA Remedial Process: Site Inspection
    Performed to identify the substances present, determine whether hazardous substances are
    being released to the environment, and determine whether hazardous substances have impacted
    specific targets. Designed to gather information on identified sites in order to complete
    the Hazard Ranking System to determine whether removal actions or further investigations
    are necessary.
CERCLA Removal Process: Site Inspection
    Performed in a similar manner as in the Remedial Process. A removal site inspection may be
    performed as part of the removal site evaluation (§ 300.410) if warranted. A removal site
    inspection may include a perimeter or on-site inspection. If the removal site evaluation
    shows that removal is not required, but that remedial action under § 300.430 may be
    necessary, a remedial site evaluation pursuant to § 300.420 would be initiated.
RCRA: Facility Assessment
    Performed to identify and gather information at RCRA facilities, make preliminary
    determinations regarding releases of concern, and identify the need for further actions and
    interim measures at the facility. Performed in three stages: 1) Preliminary Review,
    2) Visual Site Inspection, 3) Sampling Visit (if necessary). The RCRA Facility Assessment
    accomplishes the same objectives as the Preliminary Assessment and Site Inspection under
    the Superfund Process. The RCRA Facility Assessment often forms the basis for the first
    conceptual model of the site.

MARSSIM: Characterization Survey
    Performed to support planning for final status surveys to demonstrate compliance with a
    dose- or risk-based regulation. Objectives include determining the nature and extent of
    contamination at the site, as well as meeting the requirements of RI/FS and FI/CMS.
CERCLA Remedial Process: Remedial Investigation
    Performed to characterize the extent and type of release of contaminants. The RI is the
    mechanism for collecting data to characterize site conditions, determine the nature of the
    waste, assess risk to human health and the environment, and conduct treatability testing as
    necessary to evaluate the potential performance and cost of the treatment technologies that
    are being considered. EPA guidance presents a combined RI/FS Model Statement of Work. The
    RI is generally performed in seven tasks: 1) project planning (scoping): summary of site
    location, history and nature of problem, history of regulatory and response actions,
    preliminary site boundary, development of site operations plans; 2) field investigations;
    3) sample/analysis verification; 4) data evaluation; 5) assessment of risks;
    6) treatability study/pilot testing; 7) RI reporting.
CERCLA Removal Process: Removal Action
    Performed once the decision has been made to conduct a removal action at the site (under
    § 300.415). Whenever a planning period of at least six months exists before on-site
    activities must be initiated, an engineering evaluation/cost analysis or its equivalent is
    conducted. If environmental samples are to be collected, a sampling and analysis plan is
    developed to provide a process for obtaining data of sufficient quality and quantity to
    satisfy data needs. The sampling and analysis plan consists of: 1) the field sampling plan,
    which describes the number, type, and location of samples and the type of analysis;
    2) the quality assurance project plan, which describes policy, organization, and functional
    activities and the data quality objectives and measures necessary to achieve adequate data
    for use in removal actions.
RCRA: Facility Investigation
    Defines the presence, magnitude, extent, direction, and rate of movement of any hazardous
    wastes and hazardous constituents within and beyond the facility boundary. The scope is to:
    1) characterize the potential pathways of contaminant migration; 2) characterize the
    source(s) of contamination; 3) define the degree and extent of contamination; 4) identify
    actual or potential receptors; 5) support the development of alternatives from which a
    corrective measure will be selected by the EPA. The Facility Investigation is performed in
    seven tasks: 1) description of current conditions; 2) identification of preliminary
    remedial measures technologies; 3) FI work plan requirements (project management plan, data
    collection QAPP, data management plan, health and safety plan, community relations plan);
    4) facility investigation; 5) investigation analysis; 6) laboratory and bench-scale
    studies; 7) reports.

MARSSIM: DCGLs
    Residual levels of radioactive material that correspond to allowable radiation dose
    standards are calculated (derived concentration guideline levels) and provided to the user.
    The survey unit is then evaluated against this radionuclide-specific DCGL. The DCGLs in
    this manual are for structure surfaces and soil contamination. MARSSIM does not provide
    equations or guidance for calculating DCGLs.
CERCLA Remedial Process: PRGs
    Preliminary remediation goals are developed early in the RI/FS process. PRGs may then be
    used as the basis for final cleanup levels based on the nine criteria in the National
    Contingency Plan. Soil Screening Levels (SSLs) can be used as PRGs provided conditions at a
    specific site are similar to those assumed in calculating the SSLs. SSLs are derived with
    exposure assumptions for suburban residential land use only. SSLs are based on a 10⁻⁶ risk
    for carcinogens, a hazard index quotient of 1 for noncarcinogens (child ingestion
    assumptions), or MCLGs, MCLs, or HBLs for the migration to groundwater. The User's Guide
    provides equations and guidance for calculating site-specific SSLs.
CERCLA Removal Process: Removal Levels
    The removal level is established by identification of applicable or relevant and
    appropriate requirements (ARARs), or by health assessments. Concern is for protection of
    human health and the environment from the immediate hazard of a release rather than a
    permanent remedy.
RCRA: Action Levels
    At certain facilities subject to RCRA corrective action, contamination will be present at
    concentrations (action levels) that may not justify further study or remediation. Action
    levels are health- or environmental-based concentrations derived using chemical-specific
    toxicity information and standardized exposure assumptions. The SSLs developed under CERCLA
    guidance can be used as action levels since the RCRA corrective action program currently
    views them as serving the same purpose.

MARSSIM: No Direct Correlation
    (MARSSIM characterization and remedial action support surveys may provide data to the
    Feasibility Study or the Corrective Measures Study.)
CERCLA Remedial Process: Feasibility Study
    The FS serves as the mechanism for the development, screening, and detailed evaluation of
    alternative remedial actions. As noted above, the RI and the FS are intended to be
    performed concurrently. However, the FS is generally considered to be composed of four
    general tasks: 1) development and screening of remedial alternatives; 2) detailed analysis
    of alternatives; 3) community relations; 4) FS reporting.
CERCLA Removal Process: No Direct Correlation
RCRA: Corrective Measures Study
    The purpose of the CMS is to identify, develop, and evaluate potentially applicable
    corrective measures and to recommend the corrective measures to be taken. The CMS is
    performed following an FI and consists of the following four tasks: 1) identification and
    development of the corrective measures alternatives; 2) evaluation of the corrective
    measures alternatives; 3) justification and recommendations of the corrective measures
    alternatives; 4) reports.

MARSSIM: Remedial Action Support Survey
    Performed to support remediation activities and determine when a site or survey unit is
    ready for the final status survey. These surveys monitor the effectiveness of
    decontamination efforts in reducing residual radioactivity to acceptable levels. Remedial
    action support surveys do not include routine operational surveys conducted to support
    remedial activities.
CERCLA Remedial Process: Remedial Design/Remedial Action
    This activity includes the development of the selected remedy and implementation of the
    remedy through construction. A period of operation and maintenance may follow the RD/RA
    activities. Generally, the RD/RA includes: 1) plans and specifications (preliminary design,
    intermediate design, prefinal/final design, estimated cost, correlation of plans and
    specifications, selection of appropriate RCRA facilities, compliance with requirements of
    other environmental laws, equipment startup and operator training); 2) additional studies;
    3) operation and maintenance plan; 4) QAPP; 5) site safety plan.
CERCLA Removal Process: No Direct Correlation
RCRA: Corrective Measures Implementation
    The purpose of the CMI is to design, construct, operate, maintain, and monitor the
    performance of the corrective measures selected in the CMS. The CMI consists of four
    activities: 1) Corrective Measure Implementation Program Plan; 2) corrective measure design
    (design plans and specifications, operation and maintenance plan, cost estimate, schedule,
    construction QA objectives, health and safety plan, design phases); 3) corrective measures
    construction (includes a construction QA program); 4) reporting.

MARSSIM: Final Status Survey
    Performed to demonstrate that residual radioactivity in each survey unit satisfies the
    release criterion.
CERCLA Remedial Process: Long-Term Remedial Assessment, Closure/Post-Closure, NPL De-Listing
CERCLA Removal Process: Post-Removal Site Control
    Those activities that are necessary to sustain the integrity of a removal action following
    its conclusion.
RCRA: Closure/Post-Closure

                            Table F.2  Data Elements for Site Visits*

 Data elements common to both remedial and removal assessment:
  - Current human exposure identification
  - Sources identification, including locations, sizes, volumes
  - Information on substances present
  - Labels on drums and containers
  - Containment evaluation
  - Evidence of releases (e.g., stained soils)
  - Locations of wells on site and in immediate vicinity
  - Nearby wetlands identification
  - Nearby land uses
  - Distance measurements or estimates for wells, land uses (residences and schools),
    surface waters, and wetlands
  - Public accessibility
  - Blowing soils and air contaminants
  - Photodocumentation
  - Site sketch

 Generally remedial site assessment only:
  - Perimeter survey
  - Number of people within 200 feet
  - Some sensitive environments
  - Review all pathways

 Generally removal assessment only:
  - Petroleum releases
  - Fire and explosion threat
  - Urgency of need for response
  - Response and treatment alternatives evaluation
  - Greater emphasis on specific pathways (e.g., direct contact)
  - Sampling

 * From EPA, 1993c
                     Table F.3  Comparison of Sampling Emphasis Between
                      Remedial Site Assessment and Removal Assessment*

 Remedial site assessment emphasis:
  - Attribution to the site
  - Background samples
  - Ground water samples
  - Grab samples from residential soils
  - Surface water sediment samples
  - HRS factors related to surface water sample locations
  - Fewer samples on average (10-30) than removal assessment
  - Strategic sampling for HRS
  - Contract Laboratory Program usage
  - Full screening organics and inorganics analyses
  - Definitive analyses
  - Documentation, including targets and receptors
  - Computing HRS scores
  - Standardized reports

 Removal assessment emphasis:
  - Sampling from containers
  - Physical characteristics of wastes
  - Treatability and other engineering concerns
  - On-site contaminated soils
  - Composite and grid sampling
  - Rapid turnaround on analytical services
  - Field/screening analyses
  - PRP-lead removal actions
  - Goal of characterizing site
  - Focus on NCP removal action criteria

 * From EPA, 1993c
                                   APPENDIX G

       HISTORICAL SITE ASSESSMENT INFORMATION SOURCES

This appendix provides lists of information sources often useful to site assessment. The lists are
organized in two ways:

•      Table G.1, beginning on page G-2, identifies information needs by category and lists
       appropriate information sources for each. The categories are:

                    General site information, p. G-2
                    Source and waste characteristics, p. G-2
                    Ground water use and characteristics, p. G-3
                    Surface water use and characteristics, p. G-4
                    Soil exposure characteristics, p. G-5
                    Air characteristics, p. G-6

•      The reverse approach is provided in Table G.2, beginning on page G-7. Categories of
       information sources are listed with a brief explanation of the information provided by
       each source.  A contact is provided for additional information. The categories are:

                    Databases, p. G-7
                    Maps and aerial photographs, p. G-13
                    Files, p. G-17
                    Expert and other sources, p. G-19

More complete listings of site assessment information sources are available in the Site
Assessment Information Directory (EPA 1991e).
                    Table G.1 Site Assessment Information Sources
                          (Organized by Information Needed)
                               General Site Information
 Site Location, Latitude/Longitude
 CERCLIS
 USGS Topographic Maps
 State Department of Transportation Maps
 Site Reconnaissance
 USGS Global Land Information System
 U.S. Census Bureau Tiger Mapping Services
   Type of Operation and Site Status

   EPA Regional Libraries
   State Environmental Agency Files
   Site Reconnaissance
 Owner/Operator Information
 EPA Regional Libraries
 State Environmental Agency Files
 Local Tax Assessor
    Environmental Setting, Size of Site

   USGS Topographic Maps
   Aerial Photographs
   Site Reconnaissance
                           Source and Waste Characteristics
 Source Types, Locations, Sizes
 EPA Regional Libraries
 State Environmental Agency Files
 Aerial Photographs
 Site Reconnaissance
 DOE Field Offices
   Hazardous Substances Present

   EPA Regional Libraries
   State Environmental Agency Files
   RCRIS
   Local Health Department
   Local Fire Department
   ERAMS
   Local Public Works Department
 Waste Types and Quantities
 EPA Regional Office Files
 State Environmental Agency Files
 RCRIS
 Local Fire Department
 Aerial Photographs
 Site Reconnaissance
 Aerial Radiation Surveys
                        Ground Water Use and Characteristics
 General Stratigraphy
 USGS Topographic Maps
 U.S. Geological Survey
 State Geological Surveys
 Geologic and Bedrock Maps
 Local Experts
 Local University or College
   Private and Municipal Wells

   Local Water Authority
   Local Health Department
   Local Well Drillers
   State Environmental Agency Files
   WellFax
   WATSTORE
 Karst Terrain
 USGS Topographic Maps
 U.S. Geological Survey
 State Geological Surveys
 Geologic and Bedrock Maps
 Local Experts
 Local University or College
   Distance to Nearest Drinking Water Well

   USGS Topographic Maps
   Local Water Authority
   Local Well Drillers
   Local Health Department
   WellFax
   WATSTORE
   Site Reconnaissance
 Depth to Aquifer
 U.S. Geological Survey
 State Geological Surveys
 Geologic and Bedrock Maps
 Local Experts
 Local Well Drillers
 WATSTORE
   Wellhead Protection Areas

   State Environmental Agency
   Local Water Authority
   Local Well Drillers
   Local Health Department
   EPA Regional Water Officials
                        Surface Water Use and Characteristics
 Surface Water Body Types
 USGS Topographic Maps
 State Department of Transportation Maps
 Aerial Photographs
 Site Reconnaissance
   Drinking Water Intakes

   Local Water Authority
   USGS Topographic Maps
   U.S. Army Corps of Engineers
   State Environmental Agency
 Distance to Nearest Surface Water Body
 USGS Topographic Maps
 State Department of Transportation
 Aerial Photographs
 Site Reconnaissance
   Fisheries

   U.S. Fish and Wildlife Service
   State Environmental Agency
   Local Fish and Wildlife Officials
 Surface Water Flow Characteristics
 U.S. Geological Survey
 State Environmental Agency
 U.S. Army Corps of Engineers
 STORET
 WATSTORE
   Sensitive Environments

   USGS Topographic Maps
   State Department of Transportation Maps
   State Environmental Agency
   U.S. Fish and Wildlife Service
   Local Fish and Wildlife Officials
   National Wetland Inventory Maps
   Ecological Inventory Maps
   Natural Heritage Program
 Flood Frequency at the Site
 Federal Emergency Management Agency
 State Environmental Agency
                            Soil Exposure Characteristics
 Number of People Living Within 200 Feet
 Site Reconnaissance
 USGS Topographic Maps
 Aerial Photographs
 U.S. Census Bureau Tiger Mapping Service
   Schools or Day Care Within 200 Feet

   Site Reconnaissance
   USGS Topographic Maps
   Local Street Maps
 Number of Workers Onsite
 Site Reconnaissance
 Owner/Operator Interviews
    Locations of Sensitive Environments

    USGS Topographic Maps
   State Department of Transportation Maps
   State Environmental Agency
   U.S. Fish and Wildlife Service
   Ecological Inventory Maps
   Natural Heritage Program
Air Pathway Characteristics
Populations Within Four Miles
GEMS
NPDC
USGS Topographic Maps
Site Reconnaissance
U.S. Census Bureau Tiger Mapping Services
Distance to Nearest Individual
USGS Topographic Maps
Site Reconnaissance
Locations of Sensitive Environments, Acreage of Wetlands
USGS Topographic Maps
State Department of Transportation Maps
State Environmental Agency
U.S. Fish and Wildlife Service
National Wetland Inventory Maps
Ecological Inventory Maps
Natural Heritage Program

                    Table G.2 Site Assessment Information Sources
                          (Organized by Information Source)
Databases
Source:    CERCLIS (Comprehensive Environmental Response, Compensation, and Liability
           Information System)
Provides:  EPA's inventory of potential hazardous waste sites. Provides site name, EPA
           identification number, site address, and the date and types of previous
           investigations.
Supports:  General Site Information
Contact:   U.S. Environmental Protection Agency
           Office of Solid Waste and Emergency Response
           Office of Emergency and Remedial Response
           Mike Cullen 703/603-8881
           Fax 703/603-9133

Source:    RODS (Records of Decision System)
Provides:  Information on technology justification, site history, community participation,
           enforcement activities, site characteristics, scope and role of response action,
           and remedy.
Supports:  General Site Information, Source and Waste Characteristics
Contacts:  U.S. Environmental Protection Agency
           Office of Solid Waste and Emergency Response
           Office of Emergency and Remedial Response
           Mike Cullen 703/603-8881
           Fax 703/603-9133
Source:    RCRIS (Resource Conservation and Recovery Information System)
Provides:  EPA's inventory of hazardous waste generators. Contains facility name, address,
           phone number, and contact name; EPA identification number; treatment, storage
           and disposal history; and date of notification.
Supports:  General Site Information, Source and Waste Characteristics
Contacts:  U.S. Environmental Protection Agency
           Office of Solid Waste and Emergency Response
           Office of Solid Waste
           Kevin Phelps 202/260-4697
           Fax 202/260-0284

Source:    ODES (Ocean Data Evaluation System)
Provides:  Information associated with both marine and fresh water supplies with the
           following programs:
           • 301(h) sewage discharge
           • National Pollutant Discharge Elimination System (NPDES)
           • Ocean Dumping
           • National Estuary Program
           • 403c Industrial Discharge
           • Great Lakes Remedial Action Program
           • National Coastal Waters Program
           Houses a variety of data pertaining to water quality, oceanographic descriptions,
           sediment pollutants, physical/chemical characteristics, biological
           characteristics, and estuary information.
Supports:  General Site Information, Source and Waste Characteristics,
           Surface Water Use and Characteristics
Contact:   U.S. Environmental Protection Agency
           Office of Water
           Robert King 202/260-7026
           Fax 202/260-7024
Source:    EMMI (Environmental Monitoring Methods Index)
Provides:  U.S. Environmental Protection Agency's official methods compendium. Serves as a
           source of standard analytical methods.
Supports:  General Site Information
Contact:   U.S. Environmental Protection Agency
           User Support 703/519-1222
           Annual updates may be purchased from the National Technical Information Service
           at 703/487-4650

Source:    WellFax
Provides:  National Water Well Association's inventory of municipal and community water
           supplies. Identifies public and private wells within specified distances around
           a point location and the number of households served by each.
Supports:  Ground Water Use and Characteristics
Contact:   National Water Well Association (NWWA)
           6375 Riverside Drive
           Dublin, OH 43017

Source:    Geographic Resources Information Data System (GRIDS)
Provides:  National access to commonly requested geographic data products such as those
           maintained by the U.S. Geological Survey, the Bureau of the Census, and the U.S.
           Fish and Wildlife Service.
Supports:  General Site Information, Ground Water Use and Characteristics,
           Surface Water Use and Characteristics, Soil Exposure Characteristics,
           Air Pathway Characteristics
Contact:   U.S. Environmental Protection Agency
           Office of Administration and Resources Management
           Office of Information Resources Management
           Bob Pease 703/235-5587
           Fax 703/557-3186

Source:     National Planning Data Corporation (NPDC)
Provides:   Commercial database of U.S. census data. Provides residential populations in
            specified distance rings around a point location.
Supports:   Soil Exposure Characteristics, Air Pathway Characteristics
Contact:    National Planning Data Corporation
            20 Terrace Hill
            Ithaca, NY 14850-5686

Source:     STORET (Storage and Retrieval of U.S. Waterways Parametric Data)
Provides:   EPA's repository of water quality data for waterways within the U.S. The system
            is capable of performing a broad range of reporting, statistical analysis, and
            graphics functions.
Supports:   Geographic and descriptive information on various waterways; analytical data
            from surface water, fish tissue, and sediment samples; stream flow data.
Contact:    U.S. Environmental Protection Agency
            Office of Water
            Office of Wetlands, Oceans, and Watersheds and
            Office of Information Resources Management
            Louie H. Hoelman 202/260-7050
            Fax 202/260-7024

Source:     Federal Reporting Data System (FRDS)
Provides:   General information on public water supplies, including identification
            information, noncompliance related events, violations of the Safe Drinking
            Water Act, enforcement actions, identification of significant noncompliers, and
            information on variances, exemptions, and waivers.
Supports:   Ground Water Use and Characteristics, Surface Water Use and Characteristics
Contact:    U.S. Environmental Protection Agency
            Office of Water
            Office of Ground Water and Drinking Water
            Abe Seigel 202/260-2804
            Fax 202/260-3464

Source:     WATSTORE
Provides:   U.S. Geological Survey's National Water Data Storage and Retrieval System.
            Administered by the Water Resources Division and contains the Ground Water
            Site Inventory file (GWSI), which provides physical, hydrologic, and geologic
            data about test holes, springs, tunnels, drains, ponds, other excavations, and
            outcrops.
Supports:   General Site Information, Ground Water Use and Characteristics, Surface Water
            Use and Characteristics
Contact:    U.S. Geological Survey or USGS Regional or Field Office
            12201 Sunrise Valley Drive
            Reston, VA 22092

Source:     ISI (Information Systems Inventory)
Provides:   Abstracts and contacts who can provide information on U.S. Environmental
            Protection Agency databases.
Supports:   All information needs
Contact:    U.S. Environmental Protection Agency
            Office of Information and Resources Management
            Information Management and Services Division
            ISI System Manager 202/260-5914
            Fax 202/260-3923

Source:     ERAMS (Environmental Radiation Ambient Monitoring System)
Provides:   A direct assessment of the population intake of radioactive pollutants due to
            fallout, data for developing dose computational models, population exposures
            from routine and accidental releases of radioactivity from major sources, data for
            indicating additional measurement needs or other actions required in the event of
            a major release of radioactivity in the environment, and a reference for data
            comparison with other localized and limited monitoring programs.
Supports:   Source and Waste Characteristics
Contact:    U.S. Environmental Protection Agency
            National Air and Radiation Environmental Laboratory
            540 South Morris Avenue
            Montgomery, AL 36115
            Phone 334/270-3400
            Fax 334/270-3454

Maps and Aerial Photographs
Source:     U.S. Geological Survey (USGS) Topographic Quadrangles
Provides:   Maps detailing topographic, geographical, political, and cultural features.
            Available in 7.5- and 15-minute series.
Supports:   Site location and environmental setting; latitude/longitude; houses, schools, and
            other buildings; distances to targets; surface water body types; drainage routes;
            wetlands and sensitive environments; karst terrain features.
Contact:    U.S. Geological Survey or USGS Regional or Field Office
            12201 Sunrise Valley Drive
            Reston, VA 22092

Source:     National Wetland Inventory Maps
Provides:   Maps delineating boundaries and acreage of wetlands.
Supports:   Environmental setting and wetlands locations.
Contact:    U.S. Geological Survey, 12201 Sunrise Valley Drive, Reston, VA 22092, or
            U.S. Fish and Wildlife Service, 18th and C Streets, NW, Washington, DC 20240

Source:     Ecological Inventory Maps
Provides:   Maps delineating sensitive environments and habitats, including special land use
            areas, wetlands, study areas, and native plant and animal species.
Supports:   Environmental setting, sensitive environments, wetland locations and size.
Contact:    U.S. Geological Survey, 12201 Sunrise Valley Drive, Reston, VA 22092, or
            U.S. Fish and Wildlife Service, 18th and C Streets, NW, Washington, DC 20240

Source:     Flood Insurance Rate Maps (FIRM)
Provides:   Maps delineating flood hazard boundaries for flood insurance purposes.
Supports:   Flood frequency.
Contact:    Federal Emergency Management Agency (FEMA)
            Federal Insurance Administration
            Office of Risk Assessment
            500 C Street, SW
            Washington, DC 20472
            or Local Zoning and Planning Office

Source:     State Department of Transportation Maps
Provides:   State maps detailing road systems, surface water systems, and other geographical,
            cultural, and political features.
Supports:   Site location and environmental setting, distances to targets, wetlands, and
            sensitive environments.
Contact:    State or Local Government Agency

Source:     Geologic and Bedrock Maps
Provides:   Maps detailing surficial exposure and outcrop of formations for interpreting
            subsurface geology. Bedrock maps describe depth and lateral distribution of
            bedrock.
Supports:   General stratigraphy beneath and surrounding the site.
Contact:    U.S. Geological Survey or USGS Regional or Field Office
            12201 Sunrise Valley Drive
            Reston, VA 22092
            or State Geological Survey Office

Source:     Aerial Photographs
Provides:   Black and white and/or color photographic images detailing topographic,
            physical, and cultural features.
Supports:   Site location and size, location and extent of waste sources, identification of
            surrounding surficial geology, distances to targets, wetlands and sensitive
            environments. May provide information on historical site operations, waste
            quantity, and waste handling practices.
Contact:    State Department of Transportation
            Local Zoning and Planning Office
            County Tax Assessor's Office
            Colleges and Universities (geology or geography departments)
            EPA's Environmental Monitoring Services Laboratory (EMSL)
            EPA's Environmental Photographic Interpretation Center (EPIC)
            U.S. Army Corps of Engineers
            U.S. Department of Agriculture, Forest Service
            U.S. Geological Survey

Source:     Global Land Information System (GLIS)
Provides:   An interactive computer system providing information about the Earth's land
            surfaces. GLIS contains abstract, description, and search information for each
            data set. Through GLIS, scientists can evaluate data sets, determine their
            availability, place online requests for products, or, in some cases, download
            products. GLIS also offers online samples of earth science data.
Supports:   Site location and environmental setting; latitude/longitude; houses, schools, and
            other buildings; distances to targets; surface water body types; drainage routes;
            wetlands and sensitive environments; karst terrain features.
Contact:    Internet: http://mapping.usgs.gov or U.S. Geological Survey
            12202 Sunrise Valley Drive
            Reston, VA 20192, USA

Source:     Topologically Integrated Geographic Encoding and Referencing (TIGER) System
Provides:   Automates the mapping and related geographic activities required to support the
            decennial census and sample survey programs of the U.S. Census Bureau, starting
            with the 1990 decennial census. The topological structure of the TIGER database
            defines the location and relationship of streets, rivers, railroads, and other
            features to each other and to the numerous geographic entities for which the
            Census Bureau tabulates data from its censuses and sample surveys.
Supports:   General Site Information, Soil Exposure Characteristics, Air Pathway
            Characteristics
Contact:    http://www.census.gov/geo/www/tiger
            Public Information Office
            Room 2705, FB-3
            Census Bureau
            U.S. Department of Commerce
            Washington, DC 20233

Files
Source:     Office project files
Provides:   Site investigation reports, logbooks, telecons, references, etc.
Supports:   Information on nearby sites such as town populations, public and private water
            supplies, well locations, targets, and general stratigraphy descriptions.

Source:     State Environmental Agency files
Provides:   Historical site information, permits, violations, and notifications.
Supports:   General site information and operational history, source descriptions, waste
            quantities and waste handling practices. May provide results of previous site
            investigations.

Source:     EPA Regional Libraries
Provides:   Historical information on CERCLIS sites, permits, violations, and notification.
            Additionally provides interlibrary loan services.
Supports:   General site information and operational history, source descriptions, waste
            quantities and waste handling practices. May provide results of previous site
            investigations.
Contact:    USEPA, Region 1 Library, JFK Federal Building, Boston, MA 02203, 617/565-3300
            USEPA, Region 2 Library, 290 Broadway, 16th Floor, New York, NY 10007-1866, 212/264-2881
            USEPA, Region 3 Information Resources Center, 3PM52, 841 Chestnut Street, Philadelphia, PA 19107, 215/597-0580
            USEPA, Region 4 Library, Atlanta Federal Center, 61 Forsyth Street, SW, Atlanta, GA 30303-8909, 404/562-8190
            USEPA, Region 5 Library, 77 W. Jackson Blvd., 12th Floor, Chicago, IL 60604-3590, 312/353-2022
            USEPA, Region 6 Library, 6M-A1, 1445 Ross Avenue, Suite 1200, First Interstate Bank Tower, Dallas, TX 75202-2733, 214/655-6427
            USEPA, Region 7 Information Resources Center, 726 Minnesota Avenue, Kansas City, KS 66101, 913/551-7358
            USEPA, Region 8 Library, 8PM-IML, 999 18th Street, Suite 500, Denver, CO 80202-2405, 303/293-1444
            USEPA, Region 9 Library, MS:P-5-3, 75 Hawthorne Street, San Francisco, CA 94105, 415/744-1510
            USEPA, Region 10 Library, MD-108, 1200 Sixth Avenue, Seattle, WA 98101, 206/553-1289 or 1259

Expert and Other Sources
Source:     U.S. Geological Survey
Provides:   Geologic, hydrogeologic, and hydraulic information including maps, reports,
            studies, and databases.
Supports:   General stratigraphy descriptions, karst terrain, depth to aquifer, stream flow,
            ground water and surface water use and characteristics.
Contact:    U.S. Geological Survey or USGS Regional or Field Office
            12201 Sunrise Valley Drive
            Reston, VA 22092

Source:     U.S. Army Corps of Engineers
Provides:   Records and data surrounding engineering projects involving surface waters.
Supports:   Ground water and surface water characteristics, stream flow, locations of
            wetlands and sensitive environments.
Contact:    U.S. Army Corps of Engineers

Source:     State Geological Survey
Provides:   State-specific geologic and hydrogeologic information including maps, reports,
            studies, and databases.
Supports:   General stratigraphy descriptions, karst terrain, depth to aquifer, ground water
            use and characteristics.
Contact:    State Geological Survey (Local or Field Office)

Source:     Natural Heritage Program
Provides:   Information on Federal and State designated endangered and threatened plants,
            animals, and natural communities. Maps, lists and general information may be
            available.
Supports:   Location of sensitive environments and wetlands.
Contact:    State Environmental Agency

Source:     U.S. Fish and Wildlife Service
Provides:   Environmental Information
Supports:   Locations of sensitive environments, wetlands, fisheries; surface water
            characteristics and stream flow.
Contact:    U.S. Fish and Wildlife Service, 18th and C Streets, NW, Washington, DC 20240,
            or U.S. Fish and Wildlife Service Regional Office

Source:     Local Fish and Wildlife Officials
Provides:   Local Environmental Information
Supports:   Locations of sensitive environments, wetlands, fisheries; surface water
            characteristics and stream flow.
Contact:    State or Local Environmental Agency
            State or Local Game or Conservation Office

Source:     Local Tax Assessor
Provides:   Past and present land ownership records, lot and building sizes, assessors maps.
            May also provide historical aerial photographs.
Supports:   Name of present and past owners/operators, years of ownership, size of site,
            and operational history.
Contact:    Local Town Government Office

Source:     Local Water Authority
Provides:   Public and private water supply information, including service area maps, well
            locations and depths, well logs, surface water intake locations, and information
            regarding water supply contamination.
Supports:   Locations and populations served by municipal and private drinking water
            sources (wells and surface water intakes), pumpage and production, blended
            systems, depth to aquifer, general stratigraphic descriptions, ground water and
            surface water characteristics, stream flow.
Contact:    Local Town Government Office

Source:     Local Health Department
Provides:   Information and reports regarding health-related problems that may be
            associated with a site. Information on private and municipal water supplies
            and onsite monitoring wells.
Supports:   Primary/secondary target differentiation, locations and characteristics of public
            and private water supplies, identification of substances present at the site.
Contact:    Local Town Government Office

Source:     Local Zoning Board or Planning Commission
Provides:   Records of local land development, including historical land use and
            ownership, and general stratigraphy descriptions.
Supports:   General site description and history, previous ownership, and land use.
Contact:    Local Town Government Office

Source:     Local Fire Department
Provides:   Records of underground storage tanks in the area, material safety data sheets
            (MSDS) for local commercial and industrial businesses, and other information
            on hazardous substances used by those businesses.
Supports:   Location and use of underground storage tanks and other potential sources of
            hazardous substances, identification of hazardous substances present at the site.
Contact:    Local Town Government Office

Source:     Local Well Drillers
Provides:   Public and private water supply information including well locations and
            depths, well logs, pumpage and production.
Supports:   Populations served by private and municipal drinking water wells, depth to
            aquifer, general stratigraphic information.

Source:     Local University or College
Provides:   Geology/Environmental Studies departments may have relevant published
            materials (reports, theses, dissertations) and faculty experts knowledgeable in
            local geologic, hydrologic, and environmental conditions.
Supports:   General stratigraphic information, ground water and surface water use and
            characteristics, stream flow.

Source:     Site Reconnaissance
Provides:   Onsite and/or offsite visual observation of the site and surrounding area.
Supports:   General site information; source identification and descriptions; general ground
            water, surface water, soil, and air pathway characteristics; nearby targets;
            probable point of entry to surface water.

                            APPENDIX H

                         DESCRIPTION OF
     FIELD SURVEY AND LABORATORY ANALYSIS EQUIPMENT


H.1  INTRODUCTION                                              H-3

H.2  FIELD SURVEY EQUIPMENT                                    H-5
     H.2.1  Alpha Particle Detectors                                    H-5
           ALPHA SCINTILLATION SURVEY METER  	 H-6
           ALPHA TRACK DETECTOR 	 H-7
           ELECTRET ION CHAMBER	 H-8
           GAS-FLOW PROPORTIONAL COUNTER	 H-9
           LONG RANGE ALPHA DETECTOR (LRAD)	 H-10
      H.2.2  Beta Particle Detectors                                    H-11
           ELECTRET ION CHAMBER	 H-12
           GAS-FLOW PROPORTIONAL COUNTER	 H-13
            GM SURVEY METER WITH BETA PANCAKE PROBE	 H-14
     H.2.3  Gamma Ray Detectors                                     H-15
           ELECTRET ION CHAMBER	 H-16
           GM SURVEY METER WITH GAMMA PROBE	 H-17
           HAND-HELD ION CHAMBER SURVEY METER	 H-18
            HAND-HELD PRESSURIZED ION CHAMBER SURVEY METER .... H-19
           PORTABLE GERMANIUM MULTICHANNEL ANALYZER 	 H-20
           PRESSURIZED IONIZATION CHAMBER (PIC)	 H-22
           SODIUM IODIDE SURVEY METER	 H-23
           THERMOLUMINESCENCE DOSIMETER 	 H-24
     H.2.4  Radon Detectors                                          H-25
           ACTIVATED CHARCOAL ADSORPTION	 H-26
           ALPHA TRACK DETECTION 	 H-27
           CONTINUOUS RADON MONITOR	 H-28
           ELECTRET ION CHAMBER	 H-29
           LARGE AREA ACTIVATED CHARCOAL COLLECTOR	 H-30
     H.2.5  X-Ray and Low Energy Gamma Detectors	 H-31
           FIDLER PROBE WITH SURVEY METER	 H-32
           FIELD X-RAY FLUORESCENCE SPECTROMETER 	 H-33
     H.2.6  Other Field Survey Equipment                               H-34
           CHEMICAL SPECIES LASER ABLATION MASS SPECTROMETER . . H-35
           LA-ICP-AES AND LA-ICP-MS	 H-36

H.3   LABORATORY INSTRUMENTS                                     H-38
      H.3.1  Alpha Particle Analysis                                        H-38
            ALPHA SPECTROSCOPY WITH MULTICHANNEL ANALYZER	 H-39
            GAS-FLOW PROPORTIONAL COUNTER	 H-40
            LIQUID SCINTILLATION SPECTROMETER 	 H-41
            LOW-RESOLUTION ALPHA SPECTROSCOPY 	 H-42
      H.3.2  Beta Particle Analysis                                         H-43
            GAS-FLOW PROPORTIONAL COUNTER	 H-44
            LIQUID SCINTILLATION SPECTROMETER 	 H-45
      H.3.3  Gamma Ray Analysis	 H-46
            GERMANIUM DETECTOR WITH MULTICHANNEL ANALYZER  . . . H-47
            SODIUM IODIDE DETECTOR WITH MULTICHANNEL ANALYZER . H-48

EQUIPMENT SUMMARY TABLES                                        H-49

Table H.1 -   Radiation Detectors with Applications to Alpha Surveys	 H-50

Table H.2 -   Radiation Detectors with Applications to Beta Surveys 	 H-52

Table H.3 -   Radiation Detectors with Applications to Gamma Surveys	 H-53

Table H.4 -   Radiation Detectors with Applications to Radon Surveys	 H-55

Table H.5 -   Systems that Measure Atomic Mass or Emissions 	 H-56

                               H.1 INTRODUCTION

This appendix provides information on various field and laboratory equipment used to measure
radiation levels and radioactive material concentrations.  The descriptions provide general
guidance, and those interested in purchasing or using the equipment are encouraged to contact
vendors and users of the equipment for specific information and recommendations. Although
most of this equipment is in common use, a few specialty items are included to demonstrate
promising developments.

The equipment is divided into two broad groupings of field survey and laboratory instruments,
and each group is subdivided into equipment that measures alpha, beta, gamma, x-rays, and
radon.  A single sheet provides information for each system and includes its type of use (field or
lab), the primary and secondary radiation detected, applicability for site surveys, operation,
specificity/sensitivity, and cost of the equipment and surveys performed.

The Applicability for Site Surveys section discusses how the equipment is most useful for
performing site radiological surveys. The Operation section provides basic technical information
on what the system includes, how it works, how to use it practically in the field, and its features.
The Specificity/Sensitivity section addresses the system's strengths and weaknesses, and the
levels of radioactivity it can measure. Information for the Cost section was obtained primarily
from discussions with manufacturers, users, and reviews of product literature.  The cost per
measurement is an estimate of the cost of producing and documenting a single data point,
generally as part of a multipoint survey. It includes the time needed for instrument calibration (primarily if
conducted at the time of the survey), instrument use, sample analysis, and report preparation and review. It
should be recognized that these values will change over time due to factors like inflation and
market expansion.

It is assumed that the user of this appendix has a basic familiarity with field and laboratory
equipment.  Some of the typical  instrument features and terms are listed below and may not be
described separately for the individual instruments:

•      Field survey equipment consists of a detector, a survey meter, and interconnected cables,
       although these are sometimes packaged in a single container. The detector or probe is
        the portion which is sensitive to radiation. Its design, construction materials, and
        operating voltage determine the type or types of radiation to which it is sensitive.
        Some detectors feature a window or a shield whose construction
       material and thickness make the detector more or less sensitive to a particular radiation.
       The size of the detector can vary depending on the specific need, but is often limited by
       the characteristics of the  construction materials and the physics  of the detection process.
       The survey meter contains the electronics and provides high voltage to the detector,
       processes the detector's signal, and displays the readings in analog or digital fashion. An
       analog survey meter has a continuous swing needle and typically a manually operated

August 2000                                 H-3                        MARSSIM, Revision 1

-------
Appendix H
       scale switch, used to keep the needle on scale. The scaling switch may not be required on
       a digital survey meter.  The interconnecting cables serve to transfer the high voltage and
       detector signals in the proper direction. These cables may be inside those units which
       combine the meter and detector into a  single box, but they are often external with
       connectors that allow the user to interchange detectors.

•      Scanning and measuring surveys. In a scanning survey, the field survey meter is operated
       while moving the detector over an area to search for a change in readings.  Since the
       meter's audible signal  responds faster than the meter display, listening to the built-in
       speaker or using headphones allows the user to more quickly discern changes in radiation
       level. When a scanning survey detects a change, the meter can be held in place for a
       more accurate static measurement.

•      Integrated readings. Where additional sensitivity is desired, the reading can be integrated
        using internal electronics or an external scaler to give total values over time. The degree
        to which the sensitivity can be improved depends largely on the integration time selected
        (see the sketch following this list).

•      Units of measure. Survey meters with conventional meter faces measure radiation levels
        in units of counts, microRoentgen (µR), millirad (mrad), or millirem (mrem) in terms of
        unit time, e.g., cpm or µR/hr. Those with SI meter faces use units of microSievert (µSv)
        or milliGray per unit time, e.g., µSv/hr or mGy/hr.
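
The following is a minimal sketch of the integration-time point above; it is not part of the original manual. It assumes Poisson counting statistics and uses the common Currie approximation for the detection limit (detectable net counts of about 2.71 + 4.65 times the square root of the expected background counts). The background rate and integration times are illustrative only.

    # Sketch only: effect of integration time on the minimum detectable count rate,
    # assuming Poisson statistics and the Currie approximation L_D = 2.71 + 4.65*sqrt(B).
    import math

    def minimum_detectable_cpm(background_cpm, integration_min):
        """Smallest net count rate (cpm) reliably distinguishable from background."""
        b_counts = background_cpm * integration_min      # expected background counts
        l_d = 2.71 + 4.65 * math.sqrt(b_counts)          # detectable net counts (Currie)
        return l_d / integration_min                     # convert back to a rate

    for t in (0.5, 1, 5, 10):                            # integration times in minutes
        print(f"{t:4} min: {minimum_detectable_cpm(50, t):5.1f} cpm above a 50 cpm background")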

                         H.2  FIELD SURVEY EQUIPMENT




                            H.2.1 Alpha Particle Detectors

System:             ALPHA SCINTILLATION SURVEY METER
Lab/Field:          Field
Radiation Detected: Primary: Alpha     Secondary:  None (in relatively low gamma fields)

Applicability to Site Surveys: The alpha scintillation survey meter is useful for determining the
presence or absence of alpha-emitting contamination on nonporous surfaces, swipes, and air
filters, or on irregular surfaces if the degree of surface shielding is known.

Operation: This survey meter uses an alpha radiation detector with a sensitive area of
approximately 50 to 100 cm2 (8 to 16 in.2).  The detector has a thin, aluminized window of mylar
that blocks ambient light but allows alpha radiation to pass through. The detecting medium is
silver activated zinc sulfide, ZnS(Ag).  When the discriminator is appropriately adjusted, the
meter is sensitive only to alpha radiation. Light pulses are amplified by a photomultiplier tube
and passed to the survey meter.

The probe is generally placed close to the surface due to the short range of alpha particles in air.
A scanning survey is used to identify areas of elevated surface contamination and then a  direct
survey is performed to obtain actual measurements.  Integrating the readings over time improves
the sensitivity enough to make the instrument very useful for alpha surface contamination
measurements for many isotopes.  The readings are displayed in counts per minute, but factors
can usually be obtained to convert readings from cpm to  dpm.  Conversion factors, however, can
be adversely affected by the short range of alpha particles which allows them to be shielded to
often uncertain degrees if they are embedded in the surface. Systems typically  use 2 to 6 "C"  or
"D" cells and will operate for 100-300 hours.

Specificity/Sensitivity: When the alpha discriminator is correctly adjusted,  the alpha
scintillation survey meter measures only alpha radiation,  even  if there are other radiations
present. A scanning survey gives a quick indication of the presence or absence of surface
contamination, while integrating the readings provides a  measure  of the activity on a surface,
swipe, or filter. Alpha radiation is easily absorbed by irregular, porous, moist, or overpainted
surfaces, and this should be carefully considered when converting count rate data to surface
contamination levels. This also requires wet swipes and  filters to be dried before counting. The
minimum sensitivity is around  10 cpm using the needle deflection or 1 to 2 cpm when using
headphones or a scaler. Some headphones or scalers give one click for every two counts, so the
manual should be consulted to preclude underestimating  the radioactivity by a factor of two.

Cost of Equipment:  $1,000

Cost per Measurement:   $5

System:             ALPHA TRACK DETECTOR
Lab/Field:          Field and Indoor Surfaces
Radiation Detected: Primary: Alpha     Secondary: None

Applicability to Site Surveys:  Alpha track detectors measure gross alpha surface
contamination, soil activity levels, or the depth profile of contamination.

Operation:  This is a passive integrating detector. It consists of a 1 mm-thick sheet of
polycarbonate material which is deployed directly on the soil surface or in close proximity to the
contaminated surface.  When alpha particles strike the detector surface, they cause microscopic
damage centers to form in the plastic matrix.  After deployment, the detector is etched in a
caustic solution which preferentially attacks the damage centers.  The etch pits may then be
counted in an optical scanner. The density of etch pits, divided by the deployment time, is
proportional to the soil or surface alpha activity.  The measurement may be converted to isotopic
concentration if the isotopes are known or measured separately.  The area of a standard detector
is 2 cm2 (0.3 in.2), but it may be cut into a variety of shapes and sizes to suit particular needs.
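
A minimal sketch of the track-density relationship described above (not from the manual): the etch-pit density divided by the deployment time is scaled by a calibration constant to estimate surface activity. The calibration constant and counts below are hypothetical; in practice the constant is established by exposing detectors to sources of known activity.

    # Sketch only: etch-pit (track) density -> surface activity; cal_const is hypothetical.
    def track_surface_activity_bq_m2(etch_pits, detector_area_cm2, deployment_hr, cal_const):
        """cal_const: (Bq/m2) per (track/cm2/hr), taken from a vendor calibration."""
        track_rate = etch_pits / detector_area_cm2 / deployment_hr   # tracks/cm2/hr
        return cal_const * track_rate

    print(track_surface_activity_bq_m2(etch_pits=120, detector_area_cm2=2.0,
                                       deployment_hr=8.0, cal_const=7.0))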

Specificity/Sensitivity:  Alpha track detectors are relatively inexpensive, simple, passive, and
have no measurable response to beta/gamma radiation. They provide a gross alpha measurement
where the lower limit of detection is a function of deployment time. For surface contamination it
is 330 Bq/m2 (200 dpm/100 cm2) @ 1 hour, 50 Bq/m2 (30 dpm/100 cm2) @ 8 hours, and 17 Bq/m2
(10 dpm/100 cm2) @ 48 hours. For soil contamination it is 11,000 Bq/kg (300 pCi/g) @ 1 hour,
3,700 Bq/kg (100 pCi/g) @ 8 hours, and 740  Bq/kg (20 pCi/g) @ 96 hours. High surface
contamination or soil activity levels may be measured with deployment times of a few minutes,
while activity down to background levels  may require deployment times of 48-96 hours. When
placed on a surface, they provide an estimate  of alpha surface contamination or soil
concentration.  When deployed against the side of a trench, they can provide an estimate of the
depth profile of contamination.  They may also be used in pipes and under/inside of equipment.

For most applications, the devices are purchased for a fixed price per measurement, which
includes readout.  This requires that the detectors be returned to the vendor and the data are not
immediately available. For programs having  continuing needs and a large number of
measurements, automated optical scanners may be purchased. The  cost per measurement is then
a function of the number of measurements required.

Cost of Equipment: $65,000

Cost per Measurement: $5 to $10

System:             ELECTRET ION CHAMBER
Lab/Field:          Field
Radiation Detected: Primary: Alpha, beta, gamma, or radon     Secondary: None
Applicability to Site Surveys: An electret is a passive integrating detector for measurements of
alpha- or beta-emitting contaminants on surfaces and in soils, gamma radiation dose, or radon air
concentration.

Operation: The system consists of a charged Teflon disk (electret), open-faced ionization
chamber, and electret voltage reader/data logger. When the electret is screwed into the chamber,
a static electric field is established and a passive ionization chamber is formed. For alpha or beta
radiation, the chamber is opened and deployed directly on the surface or soil to be measured so
the particles can enter the chamber.  For gammas, however, the chamber is left closed and the
gamma rays incident on the chamber penetrate the 2 mm-thick plastic detector wall.  These
particles or rays ionize the air molecules, the ions are attracted to the charged electret, and the
electret's charge is reduced.  The electret charge is measured before and after deployment with
the voltmeter, and the rate of change of the charge is proportional to the alpha or beta surface or
soil activity, with appropriate compensation for background gamma levels. A thin Mylar
window may be used to protect the electret from dust.  In low-level gamma measurements, the
electret is sealed inside a Mylar bag during deployment to minimize radon interference. For
alpha and beta measurements, corrections must be made for background gamma radiation and
radon response. This correction is accomplished by deploying additional gamma or radon-
sensitive detectors in parallel with the alpha or beta detector.  Electrets are simple and can
usually be reused several times before recharging by a vendor. Due to their small size (3.8 cm
tall by 7.6 cm diameter or 1.5 in. tall by 3 in. diameter), they may be deployed in hard-to-access
locations.
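
The proportionality described above can be sketched as follows (an illustration, not a vendor procedure): the rate of voltage drop, after subtracting the drop caused by background gamma radiation, is multiplied by a calibration factor to estimate surface activity. The calibration factor and gamma correction below are hypothetical; actual factors depend on the chamber volume and electret type and are supplied by the vendor.

    # Sketch only: electret voltage drop -> surface activity (hypothetical factors).
    def electret_surface_activity(v_initial, v_final, deployment_hr,
                                  gamma_drop_v_per_hr, cal_factor_bq_m2):
        """cal_factor_bq_m2: (Bq/m2) per (volt/hr) of net voltage drop."""
        total_drop_rate = (v_initial - v_final) / deployment_hr   # V/hr
        net_drop_rate = total_drop_rate - gamma_drop_v_per_hr     # remove gamma background
        return cal_factor_bq_m2 * net_drop_rate

    print(electret_surface_activity(v_initial=742, v_final=715, deployment_hr=24,
                                    gamma_drop_v_per_hr=0.3, cal_factor_bq_m2=120))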

Specificity/Sensitivity:  This method gives a gross alpha, gross beta, gross gamma, or gross
radon measurement.  The lower limit of detection depends on the exposure time and the volume
of the chamber used. High surface alpha or beta contamination levels or high  gamma radiation
levels may be measured with deployment times of a few minutes. Much lower levels can be
measured by extending the deployment time to 24 hours or longer. For gamma radiation, the
response of the detector is nearly independent of energy from 15 to 1200 keV, and fading
corrections are not required.  To quantify ambient gamma radiation fields of 10 µR/hr, a 1000
mL chamber may be deployed for two days or a 50 mL chamber deployed for 30 days. The
smallest chamber is particularly useful for long-term monitoring and reporting of monthly or
quarterly measurements. For alpha and beta particles, the measurement may be converted to
isotopic concentration if the  isotopes are known or measured separately. The lower limit of
detection for alpha radiation is 83 Bq/m2 (50 dpm/100 cm2) @ 1 hour, 25 Bq/m2 (15 dpm/100
cm2) @ 8 hours, and 13 Bq/m2 (8 dpm/100 cm2) @ 24 hours. For beta radiation from tritium it is
10,000 Bq/m2 (6,000 dpm/100 cm2) @ 1 hour and 500 Bq/m2 (300 dpm/100 cm2) @ 24 hours. For beta
radiation from 99Tc it is 830 Bq/m2 (500 dpm/100 cm2) @ 1 hour and 33 Bq/m2 (20 dpm/100 cm2) @ 24
hours.

Cost of Equipment: $4,000 to $25,000, for system  if purchased.
Cost per Measurement: $8-$25, for use under service contract

System:             GAS-FLOW PROPORTIONAL COUNTER
Lab/Field:          Field
Radiation Detected: Primary: Alpha, Beta       Secondary:  Gamma
Applicability to Site Surveys: This equipment measures gross alpha or gross beta/gamma
surface contamination levels on relatively flat surfaces like the floors and walls of facilities. It
also serves as a screen to determine whether or not more nuclide-specific analyses may be
needed.
Operation:  This system consists of a gas-flow proportional detector, gas  supply, supporting
electronics, and a scaler or rate meter.  Small detectors (~100 cm2) are hand-held and large
detectors (~400-600 cm2) are mounted on a rolling cart.  The detector entrance window can be <1
to almost 10 mg/cm2 depending on whether alpha, alpha-beta, or gamma radiation is monitored.
The gas used is normally P-10, a mixture of 10% methane and 90% argon. The detector is
positioned as close as practical to the surface being monitored for good counting efficiency
without risking damage from the detector touching the surface.  Quick disconnect fittings allow
the  system to be disconnected from the gas bottle for hours with little loss of counting efficiency.
The detector operating voltage can be set to make it  sensitive only to alpha radiation, to both
alpha and beta radiation, or to beta and low energy gamma radiation. These voltages are
determined for each system by placing either an alpha source, such as 230Th or 241Am, or a beta
source, such as 90Sr, facing and near the detector window, then increasing the high voltage in
incremental steps until the count rate becomes constant.  The alpha plateau, the region of
constant count rate, will be almost flat. The beta plateau will have a slope of 5 to 15 percent per
100 volts. Operation on the beta plateau  allows detection of some gamma radiation, but the
efficiency is very low. Some systems use a spectrometer to separate alpha and beta/gamma
events, allowing simultaneous determination of both the  alpha and beta/gamma surface
contamination levels.
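
One way to picture the plateau check described above (an illustrative sketch, not a prescribed procedure) is to compute the slope, in percent per 100 volts, from two count-rate readings taken on the plateau; a beta plateau slope of roughly 5 to 15 percent per 100 volts is typical. The voltages and count rates below are invented for illustration.

    # Sketch only: plateau slope from two (voltage, count rate) points on the plateau.
    def plateau_slope_pct_per_100v(v1, r1, v2, r2):
        """r1, r2: count rates (cpm) observed at voltages v1 < v2."""
        return (r2 - r1) / r1 * 100.0 * (100.0 / (v2 - v1))

    slope = plateau_slope_pct_per_100v(v1=1500, r1=9800, v2=1600, r2=10600)
    print(f"plateau slope: {slope:.1f} % per 100 V")
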
Specificity/Sensitivity:  These systems do not identify the alpha or beta energies detected and
cannot be used to identify specific radionuclides. Background for operation on the alpha plateau
is very low, 2 to 3 counts per minute, which is higher than for laboratory detectors because of the
larger detector size. Background for operation on the beta plateau is dependent on the ambient
gamma and cosmic ray background, and typically ranges from several hundred to a thousand
counts per minute. Typical efficiencies for unattenuated alpha sources are 15-20%.  Beta
efficiency depends on the window thickness and the beta energy.  For 90Sr/90Y in equilibrium,
efficiencies range from  5% for highly attenuated to  about 35% for unattenuated  sources.
Typical gamma ray efficiency is <1%.  The presence of natural  radionuclides in the surfaces
could interfere with the detection of other contaminants.  Unless the nature of the contaminant
and any naturally-occurring radionuclides is well known, this system is better used for assessing
gross surface contamination levels.  The texture and porosity of the surface can hide or shield
radioactive material from the detector, causing levels to be underestimated. Changes in
temperature can affect the detector's sensitivity. Incomplete flushing with gas can cause a
nonuniform response over the detector's surface.  Condensation in the gas lines or using the quick
disconnect fittings can cause count rate instability.
Cost of Equipment: $2,000 to $4,000
Cost per Measurement: $2-$10 per m2

System:             LONG RANGE ALPHA DETECTOR (LRAD)
Lab/Field:          Field
Radiation Detected: Primary: Alpha     Secondary: None

Applicability to Site Surveys: The LRAD is a rugged field-type unit for measuring alpha
surface soil concentration over a variety of dry, solid, flat terrains.

Operation:  The LRAD system consists of a large (1 m x 1 m) aluminum box, open on the
bottom side, containing copper plates that collect ions produced in the soil or surface under the
box, and used to measure alpha surface contamination or soil concentration. It is attached to a
lifting device on the front of a tractor and can be readily moved to new locations. Bias power is
supplied by a 300-V dry cell battery, and the electrometer and computer are powered by an
automobile battery and DC-to-AC inverter. A 50 cm grounding rod provides electrical
grounding.  A notebook computer is used for data logging and graphical interpretation of the
data. Alpha particles emitted by radionuclides in soil travel only about 3 cm in  air. However,
these alpha particles interact with the air and produce ions that travel considerably farther. The
LRAD detector box is lowered to the ground to form an enclosed ionization region. The copper
detector plate is raised to +300V along with a guard detector mounted above the detector plate to
control leakage current.  The ions are then allowed to collect on the copper plate producing a
current that is measured with a sensitive electrometer.  The signal is then averaged and processed
on a computer.  The electric current produced is proportional to the ionization within the
sensitive area of the detector and to the amount of alpha contamination present on the surface
soil.

Due to its size and weight (300 lb), the unit can be mounted on a tractor for ease of movement.
All metal surfaces are covered with plastic to reduce the contribution from ion sources outside
the detector box.  At each site, a ground rod is driven into the ground. Each location is
monitored for at least 5  min. After each location is monitored, its data is fed into a notebook
computer and an interpolative graph of alpha concentration produced. The unit is calibrated
using standard alpha sources.
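
The current-to-activity proportionality described above can be sketched as follows (illustrative only, not part of the manual): a leakage or background current is subtracted from the measured electrometer current, and the net current is scaled by a calibration factor derived from the standard alpha sources. The current values and calibration factor below are hypothetical.

    # Sketch only: net electrometer current -> alpha surface activity (hypothetical factor).
    def lrad_surface_activity_bq_m2(current_fA, background_fA, cal_bq_m2_per_fA):
        """cal_bq_m2_per_fA: calibration factor from standard-source measurements."""
        net_current = current_fA - background_fA    # remove leakage/ambient-ion current
        return net_current * cal_bq_m2_per_fA

    print(lrad_surface_activity_bq_m2(current_fA=35.0, background_fA=5.0,
                                      cal_bq_m2_per_fA=2.5))   # -> 75.0 Bq/m2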

Sensitivity/Specificity:  The terrain over which this system is used must be dry, to prevent the
shielding of alpha particles by residual moisture, and flat, to prevent air infiltration from outside
the detector, both of which can lead to large errors.  The unit can detect a thin  layer of alpha
surface contamination at levels of 33-83 Bq/m2 (20-50 dpm/100 cm2), but does not measure
alpha contamination of deeper layers.  Alpha concentration errors are ±74-740 Bq/kg (±2-20
pCi/g), with daily repeat accuracies of ±370-3,700 Bq/kg (±10-100 pCi/g), depending on the
contamination level.  The dynamic measurement range appears to be 370-110,000 Bq/kg
(10-3,000 pCi/g).

Cost of Equipment: $25,000 (est. for tractor, computer, software, electrometer, and detector)
Cost per Measurement: $80 (based on 30 min per point and a 2 person team)





                             H.2.2 Beta Particle Detectors

System:             ELECTRET ION CHAMBER
Lab/Field:          Field
Radiation Detected: Primary:     Low energy beta (e.g. tritium, 99Tc, 14C, 90Sr, 63Ni), alpha,
                                 gamma, or radon     Secondary: None
Applicability to Site Surveys:  This system measures alpha- or beta-emitting contaminants on
surfaces and in soils, gamma radiation dose, or radon air concentration, depending on how it is
configured.
Operation:  The system consists of a charged Teflon disk (electret), open-faced ionization
chamber, and electret voltage reader/data logger. When the electret is screwed into the chamber,
a static electric field is established and a passive ionization chamber is formed.  For alpha or beta
radiation, the chamber is opened and deployed directly on the surface or soil to be measured so
the particles can enter the chamber.  For gammas, however, the chamber is left closed and the
gamma rays incident on the chamber penetrate the 2 mm-thick plastic detector wall.  These
particles or rays ionize the air molecules, the ions are attracted to the charged electret, and the
electret's charge is reduced.  The electret charge is measured before and after deployment with
the voltmeter, and the rate of change of the charge is proportional to the alpha or beta surface or
soil activity, with appropriate compensation for background gamma levels. A thin Mylar window
may be used to protect the electret from dust. In low-level gamma measurements, the electret is
sealed inside a Mylar bag during deployment to minimize radon interference. For alpha and beta
measurements, corrections must be made for background gamma radiation and radon response.
This correction is accomplished by deploying additional gamma or radon-sensitive detectors in
parallel with the alpha or beta detector. Electrets are simple and can usually be reused several
times before recharging by a vendor. Due to their small size (3.8 cm tall by 7.6 cm diameter or
1.5 in. tall by 3 in. diameter), they may be deployed in hard-to-access locations.
Specificity/Sensitivity: This method gives a gross alpha, gross beta, gross gamma, or gross
radon measurement.  The lower limit of detection depends on the exposure time and the volume
of the chamber used.  High surface alpha or beta contamination levels or high gamma radiation
levels may be measured with deployment times of a few minutes. Much lower levels can be
measured by extending the deployment time to 24 hours or longer.  For gamma radiation, the
response of the detector is nearly independent of energy from 15 to 1200 keV, and fading
corrections are not required.  To quantify ambient gamma radiation fields of 10 µR/hr, a 1000
mL chamber may be deployed for two days or a 50  mL chamber deployed for 30 days.  The
smallest chamber is particularly useful for long-term monitoring and reporting of monthly or
quarterly measurements. For alpha and beta particles, the measurement may be converted to
isotopic concentration if the isotopes are known or measured separately. The lower limit of
detection for alpha radiation is 83 Bq/m2 (50 dpm/100 cm2) @ 1 hour, 25 Bq/m2 (15 dpm/100
cm2) @ 8 hours, and 13 Bq/m2 (8 dpm/100 cm2) @ 24 hours.  For beta radiation from tritium it is
10,000 Bq/m2 (6,000 dpm/100 cm2) @ 1 hour and 500 Bq/m2 (300 dpm/100 cm2) @ 24 hours. For beta
radiation from 99Tc it is 830 Bq/m2 (500 dpm/100 cm2) @ 1 hour and 33 Bq/m2 (20 dpm/100 cm2) @ 24
hours.
Cost of Equipment:  $4,000 to $25,000, for system if purchased.
Cost per Measurement: $8-$25, for use under service contract


System:             GAS-FLOW PROPORTIONAL COUNTER
Lab/Field:          Field
Radiation Detected: Primary: Alpha, Beta       Secondary:  Gamma
Applicability to Site Surveys: This equipment measures gross alpha or gross beta/gamma
surface contamination levels on relatively flat surfaces like the floors and walls of facilities. It
would serve as a screen to determine whether or not more nuclide-specific analyses were needed.
Operation:  This system consists of a gas-flow proportional detector, gas  supply, supporting
electronics, and a scaler or rate meter.  Small detectors (~100 cm2) are hand-held and large
detectors (~400-600 cm2) are mounted on a rolling cart. The detector entrance window can be <1
to almost 10 mg/cm2 depending on whether alpha, alpha-beta, or gamma radiation is monitored.
The gas used is normally P-10, a mixture of 10% methane and 90% argon. The detector is
positioned as close as practical to the surface being monitored for good counting efficiency
without risking damage from the detector touching the surface.  Quick disconnect fittings allow
the system to be disconnected from the gas bottle for hours with little loss of counting efficiency.
The detector operating voltage can be set to make it sensitive only to alpha radiation, to both
alpha and beta radiation, or to beta and low energy gamma radiation. These voltages are
determined for each system by placing either an alpha source, such as 230Th or 241Am, or a beta
source, such as 90Sr, facing and near the detector window, then increasing the high voltage in
incremental  steps until the count rate becomes constant. The alpha plateau, the region of
constant count rate, will be almost flat. The beta plateau will have a slope of 5 to 15 percent per
100 volts. Operation on the beta plateau  allows detection of some gamma radiation, but the
efficiency is very low. Some systems use a spectrometer to separate alpha and beta/gamma
events, allowing simultaneous determination of both the alpha and beta/gamma surface
contamination levels.
Specificity/Sensitivity:  These systems do not identify the alpha or beta energies detected and
cannot be used to identify specific radionuclides. Background for operation on the alpha plateau
is very low, 2 to 3 counts per minute, which is higher than for laboratory detectors because of the
larger detector size. Background for operation on the beta plateau is dependent on the ambient
gamma and cosmic ray background, and typically ranges from several hundred to a thousand
counts per minute. Typical efficiencies for unattenuated alpha sources are 15-20%. Beta
efficiency depends on the window thickness and the beta energy.  For 90Sr/90Y in equilibrium,
efficiencies range from  5% for highly attenuated to about 35% for unattenuated  sources.
Typical gamma ray efficiency is <1%.  The presence of natural  radionuclides in the surfaces
could interfere with the detection of other contaminants.  Unless the nature of the contaminant
and any naturally-occurring radionuclides is well known, this system is better used for assessing
gross surface contamination levels.  The texture and porosity of the surface can hide or shield
radioactive material from the detector, causing levels to be underestimated. Changes in
temperature can affect the detector's sensitivity. Incomplete flushing with gas can cause a
nonuniform  response over the detector's surface.  Condensation in the gas lines or using the quick
disconnect fittings can cause count rate instability.
Cost of Equipment: $2,000 to $4,000
Cost per Measurement:  $2-$10 per m2


System:             GM SURVEY METER WITH BETA PANCAKE PROBE
Lab/Field:          Field
Radiation Detected:        Primary: Beta      Secondary: Gamma and alpha
Applicability to Site Surveys: This instrument is used to find and measure low levels of
beta/gamma contamination on relatively flat surfaces.
Operation:  This instrument consists of a flat "pancake" type Geiger-Mueller detector connected
to a survey meter which measures radiation response in counts per minute. The detector housing
is typically a rigid metal on all sides except the radiation entrance face or window, which is made
of Mylar, mica, or a similar material. A steel, aluminum, lead, or tungsten housing  surrounds the
detector on all  sides except the window, giving the detector a directional response.  The detector
requires approximately 900 volts for operation. It is held within a few cm of the surface to
minimize the thickness of air shielding in between the radioactive material and the detector. It is
moved slowly to scan the surface in search of elevated readings, then held in place long enough
to obtain a stable measurement. Radiation entering the detector ionizes the gas, causes a
discharge throughout the entire tube, and results in a single count being sent to the meter. The
counts per minute meter reading is converted to a beta surface contamination level in the range of
1,700 Bq/m2 (1,000 dpm/100 cm2) using isotope specific factors.

Specificity/Sensitivity: Pancake type GM detectors primarily measure beta count rate in close
contact with surfaces to indicate the presence of contamination. They are sensitive to any alpha,
beta, or gamma radiation that enters the detector and causes ionization. As a result, they cannot
determine the type or energy of that radiation, except by using a set of absorbers.  To be detected,
beta particles must have enough energy to penetrate through any surface material that the
contamination  is absorbed in, plus  the detector window, and the layer of air and other shielding
materials in between. Low energy beta particles from emitters like 3H (17 keV) that cannot
penetrate the window alone are not detectable, while higher energy betas like those  from 60Co
(314 keV) can  be readily detected.  The beta detection efficiency at a field site is primarily a
function of the beta energy, window thickness, and the surface condition. The detection
sensitivity can be improved by using headphones or the audible response during scans. By
integrating the count rate over a longer period or by counting the removable radioactive material
collected on a swipe, the ability to detect surface contamination can be improved.  The nominal
2 in. diameter detector can measure an  increase of around 100 cpm above background, which
equates to 4,200 Bq/m2 (2,500 dpm/100 cm2) of 60Co on a surface under the detector or 20 Bq
(500 pCi) on a swipe. Larger 100 cm2 detectors improve sensitivity and eliminate the need to
swipe. A swipe's collection efficiency may be below 100%, and depends on the wiping
technique, the actual surface area covered, the texture and porosity of the surface, the affinity of
the contamination for the swipe material, and the dryness of the swipe.  This will proportionately
change the values above. The sensitivity to gamma radiation is around 10% or less  of the beta
sensitivity, while the alpha detection efficiency is difficult to evaluate.
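
As a rough companion to the swipe numbers above (illustrative, not from the manual's tables), the activity collected on a swipe follows from the net count rate and the detector efficiency, and the removable surface activity follows from the swipe collection efficiency. The efficiencies below are assumed values.

    # Sketch only: swipe count rate -> removable surface activity (assumed efficiencies).
    def swipe_activity_bq(net_cpm, detector_efficiency):
        """Activity collected on the swipe, in Bq (1 Bq = 60 dpm)."""
        return net_cpm / detector_efficiency / 60.0

    def removable_bq_per_100cm2(swipe_bq, collection_efficiency, swiped_area_cm2=100.0):
        """Removable surface activity, corrected for imperfect swipe collection."""
        return swipe_bq / collection_efficiency * (100.0 / swiped_area_cm2)

    swipe = swipe_activity_bq(net_cpm=100, detector_efficiency=0.08)      # about 20 Bq
    print(f"{swipe:.0f} Bq on the swipe, "
          f"{removable_bq_per_100cm2(swipe, collection_efficiency=0.5):.0f} Bq per 100 cm2 removable")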

Cost of equipment: $400 to $1,500
Cost per Measurement:  $5 to $10 per location





                            H.2.3  Gamma Ray Detectors

System:             ELECTRET ION CHAMBER
Lab/Field:          Field
Radiation Detected: Primary:     Low energy beta (e.g. tritium, 99Tc, 14C, 90Sr, 63Ni), alpha,
                                 gamma, or radon     Secondary: None
Applicability to Site Surveys:  This system measures alpha- or beta-emitting contaminants on
surfaces and in soils, gamma radiation dose, or radon air concentration, depending on how it is
configured.
Operation:  The system consists of a charged Teflon disk (electret), open-faced ionization
chamber, and electret voltage reader/data logger. When the electret is screwed into the chamber,
a static electric field is established and a passive ionization chamber is formed.  For alpha or beta
radiation, the chamber is opened and deployed directly on the surface or soil to be measured so
the particles can enter the chamber.  For gammas, however, the chamber is left closed and the
gamma rays incident on the chamber penetrate the 2 mm-thick plastic detector wall.  These
particles or rays ionize the air molecules, the ions are attracted to the charged electret, and the
electret's charge is reduced.  The electret charge is measured before and after deployment with
the voltmeter, and the rate of change of the charge is proportional to the alpha or beta surface or
soil activity, with appropriate compensation for background gamma levels. A thin Mylar window
may be used to protect the electret from dust. In low-level gamma measurements, the electret is
sealed inside a Mylar bag during deployment to minimize radon interference. For alpha and beta
measurements, corrections must be made for background gamma radiation and radon response.
This correction is accomplished by deploying additional gamma or radon-sensitive detectors in
parallel with the alpha or beta detector. Electrets are simple and can usually be reused several
times before recharging by a vendor. Due to their small size (3.8 cm tall by 7.6 cm diameter or
1.5 in. tall by 3 in. diameter), they may be deployed in hard-to-access locations.
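
As a rough illustration of the data reduction described above, the fragment below converts an
electret voltage drop into a surface activity estimate after subtracting the gamma-background
contribution. The readings and calibration constants are hypothetical placeholders; actual
factors come from the vendor's calibration for the specific electret and chamber combination.

# Illustrative electret ion chamber data reduction (hypothetical factors).
# A companion gamma-only (closed) chamber or measurement provides the
# background term, as described in the text.

def surface_activity_bq_per_m2(v_initial, v_final, hours_deployed,
                               gamma_uR_per_hr,
                               cf_volts_per_bq_m2_hr,    # alpha/beta response factor
                               gamma_volts_per_uR):      # gamma response factor
    """Return estimated alpha or beta surface activity in Bq/m2."""
    total_drop = v_initial - v_final                      # volts
    gamma_drop = gamma_uR_per_hr * hours_deployed * gamma_volts_per_uR
    net_drop = total_drop - gamma_drop
    return net_drop / (cf_volts_per_bq_m2_hr * hours_deployed)

if __name__ == "__main__":
    a = surface_activity_bq_per_m2(v_initial=720.0, v_final=655.0, hours_deployed=24.0,
                                   gamma_uR_per_hr=8.0,
                                   cf_volts_per_bq_m2_hr=0.002,
                                   gamma_volts_per_uR=0.02)
    print(f"Estimated surface activity: {a:.0f} Bq/m2")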
Specificity/Sensitivity: This method gives a gross alpha, gross beta, gross gamma, or gross
radon measurement.  The lower limit of detection depends on the exposure time and the volume
of the chamber used.  High surface alpha or beta contamination levels or high gamma radiation
levels may be measured with deployment times of a few minutes. Much lower levels can be
measured by extending the deployment time to 24 hours or longer.  For gamma radiation, the
response of the detector is nearly independent of energy from 15 to 1200 keV, and fading
corrections are not required. To quantify ambient gamma radiation fields of 10 µR/hr, a 1000
mL chamber may be deployed for two days or a 50  mL chamber deployed for 30 days.  The
smallest chamber is particularly useful for long-term monitoring and reporting of monthly or
quarterly measurements. For alpha and beta particles, the measurement may be converted to
isotopic concentration if the isotopes are known or measured separately. The lower limit of
detection for alpha radiation is 83 Bq/m2 (50 dpm/100 cm2) @ 1 hour, 25 Bq/m2 (15 dpm/100
cm2) @ 8 hours, and 13 Bq/m2 (8 dpm/100 cm2) @ 24 hours.  For beta radiation from tritium it is
10,000 Bq/m2 (6,000 dpm/100 cm2) @ 1 hour and 500 Bq/m2 (300 dpm/100 cm2) @ 24 hours. For beta
radiation from 99Tc it is 830 Bq/m2 (500 dpm/100 cm2) @ 1 hour and 33 Bq/m2 (20 dpm/100 cm2) @ 24
hours.
Cost of Equipment:  $4,000 to $25,000 for the system, if purchased.
Cost per Measurement: $8-$25 for use under a service contract


System:             GM SURVEY METER WITH GAMMA PROBE
Lab/Field:          Field
Radiation Detected:        Primary: Gamma    Secondary:  Beta

Applicability to Site Surveys: This instrument is used to give a quick indication of gamma-
radiation levels present at a site. Due to its high detection limit, the GM survey meter may be
useful during characterization surveys but may not meet the needs of final status surveys.

Operation:  This instrument consists of a cylindrical Geiger Mueller detector connected to a
survey meter. It is calibrated to measure gamma exposure rate in mR/hr. The detector is
surrounded on all sides by a protective rigid metal housing. Some units, called end-window or
side-window detectors, have a hinged door or rotating sleeve that opens to expose an entry window of
Mylar, mica, or a similar material, allowing beta radiation to enter the sensitive volume. The
detector requires approximately 900 volts for operation. It is normally held at waist height, but is
sometimes placed in contact with an item to be evaluated. It is moved slowly over the area to scan
for elevated  readings, observing the meter or, preferably, listening to the audible signal. Then it
is held in place long enough to obtain a stable measurement. Radiation entering the detector
ionizes the gas, causes a discharge throughout the entire tube, and results in a single count being
sent to the meter. Conversion from count rate to exposure rate is accomplished at calibration by
exposing the detector at discrete levels and  adjusting the meter scale(s) to read accordingly. In
the field, the exposure rate is read directly from the meter.  If the detector housing has an entry
window, an increase in "open-door" over "closed-door" reading indicates the presence of beta
radiation in the radiation field, but the difference is not a direct measure of the beta radiation
level.

Specificity/Sensitivity: GM meters measure gamma exposure rate,  and those with an entry
window  can identify if the radiation  field includes beta radiation. Since GM detectors are
sensitive to any energy  of alpha, beta, or gamma radiation that enters the detector, instruments
that use these detectors cannot identify the type  or energy of that radiation, or the specific
radionuclide(s) present. The sensitivity can be improved by using headphones or the audible
response during scans, or by integrating the exposure rate over time. The instrument has two
primary limitations for  environmental work. First, its minimum sensitivity is high, around 0.1
mR/hr in rate meter mode or 0.01 mR/hr in integrate mode.  Some instruments use a large
detector to improve low end sensitivity. However, in many instances the instrument is not
sensitive enough for site survey work.  Second,  the detector's energy response is nonlinear.
Energy compensated survey meters are commercially available, but the instrument's sensitivity
may be reduced.

Cost of Equipment: $400 to $1,500.

Cost per Measurement:  $5 per measurement for survey and report.



System:             HAND-HELD ION CHAMBER SURVEY METER
Lab/Field:          Field
Radiation Detected:        Primary:  Gamma   Secondary: None

Applicability to Site Surveys: The hand-held ion chamber survey meter measures true gamma
radiation exposure rate, in contrast to most other survey meter/probe combinations which are
calibrated to measure exposure rate at one energy and approximate the exposure rate at all other
energies. Due to their high detection limit, these instruments are not applicable for many final
status surveys.

Operation:  This device uses an ion chamber operated at a bias voltage sufficient to collect all
ion pairs created by the passage of ionizing radiation, but not sufficiently high to generate
secondary ion pairs as a proportional counter does. The units of readout are mR/hr, or some
multiple of mR/hr.  If equipped with an integrating mode, the operator can measure the total
exposure over a period of time. The instrument may operate on two "D"  cells or a 9 volt battery
that will last for 100 to 200 hours of operation.

Specificity/Sensitivity: Ion chamber instruments respond only to gamma or x-radiation. They
have no means to provide the identity of contaminants.  Typical ion chamber instruments have a
lower limit of detection of 0.5 mR/hr. These instruments can display readings below this, but the
readings may be erratic and have large errors associated with them.  In integrate mode, the
instrument sensitivity can be as low as 0.05 mR/hr.

Cost of Equipment: $800 to $1,200

Cost per Measurement: $5, or higher for making integrated exposure measurements.

System:             HAND-HELD PRESSURIZED ION CHAMBER (PIC) SURVEY
                    METER
Lab/Field:          Field
Radiation Detected:        Primary:     Gamma      Secondary: None

Applicability to Site Surveys: The hand-held pressurized ion chamber survey meter measures
true gamma radiation exposure rate, in contrast to most other survey meter/probe combinations
which are calibrated to measure exposure rate at one energy and approximate the exposure rate at
all other energies.  Due to their high detection limit, these instruments are not applicable for
many final status surveys.

Operation:  This device uses a pressurized air ion chamber operated at a bias voltage sufficient
to collect all ion pairs created by the passage of ionizing radiation, but not sufficiently high to
cause secondary ionization. The instrument is identical to the ion chamber meter described in the
preceding entry, except in this case the ion chamber is sealed and pressurized to 2 to 3 atmospheres
to increase the sensitivity of the instrument by roughly the same factor. The units of readout are µR/hr
or mR/hr. A digital meter will allow  an operator to integrate  the total exposure  over a period of
time. The unit may use two "D" cells or a 9-volt battery that will last for 100  to 200 hours of
operation.

Specificity/Sensitivity:  Since the ion chamber is sealed, pressurized ion chamber instruments
respond only to gamma or X-radiation.  They have no means to provide the identity of
contaminants.  Typical instruments have a lower limit of detection of 0.1 mR/hr, or as low as
0.01 mR in integrate mode. These instruments can display readings below this, but the readings
may be erratic and have large errors associated with them.

Cost of Equipment: $1,000 to $1,500

Cost per Measurement: $5, or higher for making integrated exposure measurements.

System:      PORTABLE GERMANIUM MULTICHANNEL ANALYZER (MCA) SYSTEM
Lab/Field:    Field
Radiation Detected:        Primary: Gamma    Secondary: None

Applicability to Site Surveys:     This system produces semi-quantitative estimates of the
concentration of uranium and plutonium in soil, water, and air filters, and quantitative estimates of
many other gamma-emitting isotopes. With an appropriate dewar, the detector may be used in a
vertical orientation to determine, in situ, gamma-emitting isotope concentrations in soil.

Operation: This system consists of a portable germanium detector connected to a dewar of
liquid nitrogen, high voltage power supply, and multichannel analyzer.  It is used to identify and
quantify gamma-emitting isotopes in soil or other surfaces.

Germanium is a semiconductor material. When a gamma ray interacts with a germanium crystal,
it produces electron-hole pairs. An electric field is applied which causes the electrons to move in
the conduction band and the holes to pass the charge from atom to neighboring atoms. The
charge is collected rapidly and is proportional to the deposited energy.

The typical system consists of a portable multichannel analyzer (MCA) weighing about 7-10 lbs
with batteries, a special portable low energy germanium detector with a built-in shield, and the
acquisition control and spectrum analysis software.  The detector is integrally mounted to a liquid
nitrogen dewar. The liquid nitrogen is added 2-4 hours before use and replenished every 4-24
hours based on capacity.

The MCA includes all required front end electronics, such as a high voltage power supply, an
amplifier, a digital stabilizer, and an analog-to-digital converter (ADC), which are fully
controllable from a laptop computer and software.

One  method uses the 94-104 keV peak region to analyze the plutonium isotopes from either
"fresh" or aged materials. It requires virtually no user input or calibration.  The source-to-
detector distance for this method does not need to be calibrated as long as there are enough
counts in the  spectrum to perform the analysis.

For in situ applications, a collimated detector is positioned at a fixed distance from a surface to
provide multichannel spectral data for a defined surface area.  It is especially useful for
qualitative and (based on careful field calibration or appropriate algorithms) quantitative analysis
of freshly deposited contamination. Additionally, with prior knowledge of the depth distribution
of the primary radionuclides of interest,  which is usually not known, or using algorithms that
match the site, the in situ system can be  used to estimate the content of radionuclides distributed
below the surface (dependent, of course, on adequate detection capability.)
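
To make the in situ data reduction concrete, the sketch below converts a net full-energy peak
count rate into a soil concentration using a single calibration factor of the kind obtained from
field calibration or Monte Carlo modeling. The peak area, count time, and calibration factor are
hypothetical; they stand in for the detector- and geometry-specific values a real analysis would use.

# Illustrative in situ gamma spectrometry data reduction (hypothetical values).
# A field- or modeling-derived calibration factor relates the net peak count
# rate for a given gamma line to the average soil concentration for an assumed
# depth distribution.

def soil_concentration_bq_per_kg(net_peak_counts, live_time_s,
                                 cal_factor_cps_per_bq_kg):
    """Convert a net full-energy peak area to soil concentration (Bq/kg)."""
    net_cps = net_peak_counts / live_time_s
    return net_cps / cal_factor_cps_per_bq_kg

if __name__ == "__main__":
    # Hypothetical: 1,200 net counts in the 662 keV peak over a 900 s count,
    # with a calibration factor of 0.011 cps per Bq/kg for an assumed
    # surface-deposition profile.
    conc = soil_concentration_bq_per_kg(1200.0, 900.0, 0.011)
    print(f"Estimated 137Cs concentration: {conc:.0f} Bq/kg")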

Calibration based on Monte Carlo modeling of the assumed source-to-detector geometry, or on
computation of fluence rates with analytical expressions, is an important component of the
accurate use of field spectrometry when it is not feasible or desirable to use real radioactive
sources. Such modeling used in conjunction with field spectrometry is becoming much more
common, especially using the MCNP Monte Carlo computer software system.

Specificity/Sensitivity: With proper calibration or algorithms, field spectrometers can identify
and quantify concentrations of gamma emitting radionuclides in the middle to upper energy range
(i.e., above 50 keV with a P-type detector or above 10 keV with an N-type detector).

For lower energy photons, as are important for plutonium and americium, an N-type detector or a
planar crystal is preferred with a very thin beryllium (Be) window. This configuration allows
measurement of photons in the energy range 5 to 80 keV. The Be window is quite fragile and
susceptible to corrosion, and should be protected accordingly.

The detector high voltage should only be applied when the cryostat has contained sufficient
liquid nitrogen for several hours.  These systems can accurately identify plutonium, uranium, and
many gamma-emitting isotopes in environmental media, even if a mixture of radionuclides is
present. Germanium has an advantage over sodium iodide because it can produce a quantitative
estimate of concentrations of multiple radionuclides in samples like soil, water, and air filters.

A specially designed low energy germanium  detector that exhibits very little deterioration in the
resolution as a function of count rate may be  used to analyze uranium and plutonium, or other
gamma-emitting radionuclides. When equipped with a built-in shield, it is unnecessary to build
complicated shielding arrangements while  making field measurements.  Tin filters can be used to
reduce the count rate from the 241Am 59 keV line which allows the electronics to process more of
the signal coming from Pu or U.

A plutonium content of 10 mg can be detected in a 55 gallon waste drum in about 30 minutes,
although with high uncertainty. A uranium analysis can be performed for an enrichment range
from depleted to 93% enrichment. The measurement time can be on the order of minutes
depending on the enrichment and the attenuating materials.

Cost of Equipment:   $40,000

Cost per Measurement:  $ 100 to $200

System:             PRESSURIZED IONIZATION CHAMBER (PIC)
Lab/Field:          Field
Radiation Detected:        Primary: Moderate (>80 keV) to high energy photons
                                  Secondary: None
Applicability to Site Surveys: The PIC is a highly accurate ionization chamber for measuring
gamma exposure rate in air, and for correcting other instruments for their energy dependence. It is
excellent for characterizing and evaluating the effectiveness
of remediation of contaminated sites based on exposure rate. However, most sites also require
nuclide-specific identification of the contributing radionuclides.  Under these circumstances,
PICs must be used in conjunction with other soil sampling or spectrometry techniques to evaluate
the success of remediation efforts.

Operation: The PIC detector is a large sphere of compressed argon-nitrogen gas at 10 to 40
atmospheres pressure surrounded by a protective box. The detector is normally mounted on a
tripod and positioned to sit about three feet off the ground. It is connected to an electronics box
in which a strip chart recorder or digital integrator measures instantaneous and integrated
exposure rate. It operates at a bias voltage sufficient to  collect all ion pairs created by the
passage of ionizing radiation, but not sufficiently high to amplify or increase the number of ion
pairs. The high pressure inside the detector and the integrate feature make the PIC much more
sensitive and precise than other ion chambers for measuring low exposures.  The average
exposure rate is calculated from the integrated exposure and the operating time. Arrays of PIC
systems can be linked by telecommunications so their data can be observed from a central and
remote location.
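
The last step of the data reduction described above is simple enough to show directly; the values
below are hypothetical and serve only to illustrate averaging an integrated exposure over the
operating time.

# Average exposure rate from an integrated PIC reading (hypothetical values).
integrated_exposure_uR = 252.0   # total exposure accumulated by the integrator
operating_time_hr = 24.0         # time the PIC was recording

average_rate_uR_per_hr = integrated_exposure_uR / operating_time_hr
print(f"Average exposure rate: {average_rate_uR_per_hr:.1f} uR/hr")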

Specificity/Sensitivity:  The PIC measures gamma or x-radiation and cosmic radiation. It is
highly stable, relatively energy independent, and serves  as an excellent tool to calibrate (in the
field) other survey equipment to measure exposure rate.  Since the PIC is normally uncollimated,
it measures cosmic, terrestrial,  and foreign source contributions without discrimination. Its
rugged and stable behavior makes it an excellent choice for an unattended sensor where area
monitors for gamma emitters are needed. PICs are highly sensitive, precise, and accurate over a very
wide range of exposure rates (1 µR/hr up to 10 R/hr). PICs lack any ability to distinguish either
energy spectral characteristics or source type. If sufficient background information is obtained,
the data can be processed using algorithms that employ time and frequency domain analysis of
the recorded signals to effectively separate terrestrial, cosmic, and "foreign" source
contributions. One major advantage of PIC systems is that they can record exposure rate over
a range of 1 to 10,000,000 µR per hour (i.e., 1 µR/hr to 10 R/hr) with good precision and accuracy.

Cost of Equipment:  $15,000 to $50,000 depending on the associated electronics, data
processing, and telecommunications equipment.

Cost per Measurement:  $50 to $500 based on the operating time at each site and the number of
measurements performed.


System:              SODIUM IODIDE SURVEY METER
Lab/Field:           Field
Radiation Detected:        Primary:  Gamma    Secondary:  None
Applicability to Site Surveys:  Sodium iodide survey meters can be response checked against a
pressurized ionization chamber (PIC) and then used in its place so readings can be taken more
quickly.  This check should be performed often, possibly several times each day. They are useful
for determining ambient radiation levels and for estimating the concentration of radioactive
materials at a site.
Operation:  The sodium iodide survey meter measures gamma radiation levels in µR/hr (10⁻⁶
R/hr) or counts per minute (cpm). Its response is energy and count rate dependent, so
comparison with a pressurized ion chamber necessitates a conversion factor for adjusting the
meter readings to true µR/hr values.  The conversion factor obtained from this comparison is
valid only in locations where the radionuclide mix is identical to that where the comparison is
performed, and over a moderate range of readings.  The detector is held at waist level or
suspended near the surface while the surveyor walks through an area, listening to the audio and watching the
display for changes. It is held in place and the response allowed to stabilize before each
measurement is taken, with longer times required for lower responses.  Generally, the center of
the needle swing or the integrated reading is recorded.  The detector is  a sodium iodide crystal
inside an aluminum container with an optical glass window that is connected to a photomultiplier
tube. A gamma ray that interacts with the crystal produces light that travels out of the crystal and
into the photomultiplier tube. There, electrons are produced and multiplied to produce a readily
measurable pulse whose magnitude is proportional to the energy of the gamma ray incident on the
crystal. Electronic filters accept the pulse as a count if certain discrimination height restrictions
are met.  This translates into a meter response. Instruments with pulse height discrimination
circuitry can be calibrated to view the primary gamma decay energy of a particular isotope.  If
laboratory analysis has shown a particular isotope to be present, the discrimination circuitry can
be adjusted to partially tune out other isotopes, but this also limits its ability to measure exposure
rate.
Specificity/Sensitivity:  Sodium iodide survey meters measure gamma radiation in µR/hr or cpm
with a minimum sensitivity of around 1-5 µR per hour, or 200-1,000 cpm, or lower in digital
integrate mode. A reading error of 50% can occur at low count rates because of a large needle
swing, but this decreases with increased count rate.  The instrument is quite energy sensitive,
with the greatest response around 100-120 keV and decreasing in either direction. Measuring the
radiation level at a location with both a PIC and the survey meter gives a factor for converting
subsequent readings to actual exposure rates. This ratio can change with location.  Some meters
have circuitry that looks at a few selected ranges of gamma energies, or one at a time with the
aid of a single channel analyzer.  This feature is used to determine if a particular isotope is
present. The detector should be protected against thermal or mechanical shock which can break
the sodium iodide crystal or the photomultiplier tube. Covering at least the crystal end with
padding is often sufficient. The detector is heavy, so adding a carrying strap to the meter and a
means of easily attaching and detaching the detector from the meter case helps the user endure
long surveys.
Cost of Equipment:  $2,000
Cost per Measurement: $5


System:             THERMOLUMINESCENCE DOSIMETER (TLD)
Lab/Field:          Field and lab
Radiation Detected:        Primary:  Gamma   Secondary:  Neutron, beta, x-ray
Applicability to Site Surveys: TLDs can be used to measure such a low dose equivalent that
they can identify gamma levels slightly above natural background. TLDs should be placed in
areas outside the site but over similar media to determine the average natural background
radiation level in the area.  Other TLDs should  be posted on site to determine the difference
from background. Groups should be posted each quarter, for periods ranging from days to a full
quarter, and compared to identify locations of increased onsite doses.
Operation: A TLD is a crystal that measures radiation dose. TLDs are semiconductor crystals
that contain small amounts of added impurities.  When radiation interacts with the crystal,
electrons in the valence band are excited into the conduction band. Many lose their energy and
return directly to the valence band, but some are trapped at an elevated energy state by the
impurity atoms. This trapped energy can be stored for long periods, but the signal can fade with
age, temperature, and light. Heating the  TLD in a TLD reader releases the excess energy in the
form of heat and light. The quantity or intensity of the light given off gives a measure of the
radiation dose the TLD received.  If the  TLDs are processed at an off site location, the transit
dose (from the location to the site and return) must be determined and subtracted from the net
dose.  The ability to determine this transit dose affects the net sensitivity of the measurements.
The TLD is left in the field for a period of a day to a quarter and then removed from the field and
read in the laboratory on a calibrated TLD reader. The reading is the total dose received by the
TLD during the posting period. TLDs come in various shapes (thin-rectangles, rods, and
powder), sizes (0.08 cm to 0.6 cm (1/32 in. to 1/4 in.) on a side), and materials (CaF2:Mn,
CaSO4:Dy, 6LiF:Mn, 7LiF:Mn, LiBO4, LiF:Mg,Cu,P and Al2O3:C). The TLD crystals can be held
loosely inside a holder, sandwiched between layers of Teflon, affixed to a substrate, or attached
to a heater strip and surrounded by a glass envelope. Most are surrounded by special thin shields
to correct for an over response to low-energy radiation. Many have special radiation filters to
allow the same type TLD to measure various types and energies of radiation.
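
The background and transit-dose subtraction described above can be summarized in a few lines; the
readings below are hypothetical, and reader calibration and fading corrections are omitted for
brevity.

# Illustrative TLD net-dose calculation (hypothetical readings, in mrem).
site_tld_total = 28.0          # total dose read from a TLD posted on site
background_tld_total = 22.0    # total dose read from an off-site background TLD
transit_dose = 1.5             # transit dose determined from control dosimeters

net_site = site_tld_total - transit_dose
net_background = background_tld_total - transit_dose
dose_above_background = net_site - net_background

print(f"Net dose above background for the posting period: {dose_above_background:.1f} mrem")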
Specificity/Sensitivity:  TLDs are primarily sensitive to gamma radiation, but selected
TLD/filter arrangements can be used to measure beta, x-ray, and neutron radiation. They are
posted both on site  and off site in comparable areas. These readings are compared to determine
if the site can cause personnel to receive  more radiation exposure than would be received from
background radiation. The low-end detection limit can be lowered by specially calibrating each TLD
and selecting those with high accuracy and good precision. The new Al2O3 TLD may be capable
of measuring doses as low as 0.1 µSv (0.01 mrem) while specially calibrated CaF2 TLDs posted
quarterly can measure dose differences as low as 0.05 mSv/y (5 mrem/y). This is in contrast to
standard TLDs that are posted monthly and may not measure doses below 1  mSv/y (100
mrem/y). TLDs should be protected from damage as the manufacturer recommends. Some are
sensitive to visible light, direct sunlight,  fluorescent light, excessive heat, or high humidity.
Cost of Equipment: $5K-$100K (reader), $25-$40 (TLD). TLDs cost $5 to $40 per rental.
Cost per Measurement: $25 to $125
                         H.2 FIELD SURVEY EQUIPMENT




                               H.2.4 Radon Detectors

System:             ACTIVATED CHARCOAL ADSORPTION
Lab/Field:          Field
Radiation Detected:        Primary:  Radon gas        Secondary: None

Applicability to Site Surveys: Activated charcoal adsorption is a passive low cost screening
method for measuring indoor air radon concentration. The charcoal adsorption method is not
designed for outdoor measurements.  For contaminated structures, charcoal is a good short-term
indicator of radon contamination. Vendors provide measurement services which include the
detector and subsequent readout.

Operation:  For this method, an airtight container with activated charcoal is opened in the area
to be sampled and radon in the air adsorbs onto the charcoal.  The detector, depending on its
design, is deployed for 2 to 7 days. At the end of the sampling period, the container is sealed and
sent to a laboratory for analysis.  Proper deployment and analysis will yield accurate results.

Two analysis methods are commonly used in activated charcoal adsorption.  The first method
calculates the radon concentration based on the gamma decay from the radon progeny analyzed
on a gamma scintillation or semiconductor detection system.  The second method is liquid
scintillation which employs a small vial containing activated charcoal for sampling. After
exposure, scintillation fluid is added to the vial and the radon concentration is determined by the
alpha and beta decay of the radon and progeny when counted in a liquid scintillation
spectrometer.
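
A simplified version of the first analysis method (gamma counting of the canister) is sketched
below. The counting efficiency, effective sampling rate, and count data are hypothetical
placeholders for the laboratory's canister-specific calibration, which also accounts for exposure
time and humidity.

import math

# Illustrative charcoal-canister radon analysis (hypothetical calibration values).
RN222_HALF_LIFE_H = 91.8   # hours (about 3.82 days)

def radon_concentration_pci_per_L(net_cpm, counting_efficiency,
                                  effective_sampling_rate_L_per_min,
                                  exposure_hours, delay_hours):
    """Estimate the average radon concentration during canister exposure."""
    # Correct the counted activity back to the end of the sampling period.
    decay_correction = math.exp(math.log(2) * delay_hours / RN222_HALF_LIFE_H)
    activity_pci = (net_cpm / counting_efficiency) / 2.22 * decay_correction
    sampled_volume_L = effective_sampling_rate_L_per_min * exposure_hours * 60.0
    return activity_pci / sampled_volume_L

if __name__ == "__main__":
    c = radon_concentration_pci_per_L(net_cpm=45.0, counting_efficiency=0.25,
                                      effective_sampling_rate_L_per_min=0.05,
                                      exposure_hours=72.0, delay_hours=48.0)
    print(f"Average radon concentration: {c:.2f} pCi/L")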

Specificity/Sensitivity: Charcoal adsorbers are designed to measure radon concentrations in
indoor air. Some charcoal adsorbers are sensitive to drafts, temperature, and humidity. However,
the use of a diffusion barrier over the charcoal reduces these effects.  The minimum detectable
concentration for this method ranges from 0.007-0.04 Bq/L (0.2-1.0 pCi/L).

Cost of Equipment: $10,000 for a liquid scintillation counter, $10,000 for a sodium iodide
multichannel analyzer system, or $30,000+ for a germanium multichannel analyzer system. The
cost of the activated charcoal itself is minimal.
Cost per Measurement:  $5 to $30 including canister.

System:             ALPHA TRACK DETECTOR
Lab/Field:          Field
Radiation Detected:        Primary: Radon Gas (Alpha Particles)     Secondary: None
Applicability to Site Surveys: An alpha track detector is a passive, low cost, long term method
used for measuring radon.  Alpha track detectors can be used for site assessments both indoors
and outdoors (with adequate protection from the elements).

Operation:  Alpha track detectors  employ a small piece of special plastic or film inside a small
container. Air being tested diffuses through a filtering mechanism into the container. When
alpha particles from the decay of radon and its progeny strike the detector, they cause damage
tracks.  At the end of exposure, the container is sealed and returned to the laboratory for analysis.

The plastic or film detector is chemically treated to amplify the damage tracks and then the
number of tracks over a predetermined area are counted using a microscope, optical reader, or
spark counter.  The radon concentration is determined by the number of tracks per unit area.
Detectors are usually exposed for 3 to 12 months, although shorter time frames may be used
when measuring high radon concentrations.
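
The track-density conversion described above reduces to a single division; the sketch below uses a
hypothetical calibration factor of the kind a vendor would supply for a specific detector design.

# Illustrative alpha track detector data reduction (hypothetical calibration).
tracks_counted = 480            # net tracks counted over the read-out area
readout_area_mm2 = 100.0        # area of plastic examined
exposure_days = 90.0            # deployment period
cal_tracks_per_mm2_per_pCi_L_day = 0.05   # vendor-supplied calibration factor

track_density = tracks_counted / readout_area_mm2
exposure_pci_L_days = track_density / cal_tracks_per_mm2_per_pCi_L_day
average_concentration_pci_per_L = exposure_pci_L_days / exposure_days

print(f"Integrated exposure: {exposure_pci_L_days:.0f} pCi/L-days")
print(f"Average radon concentration: {average_concentration_pci_per_L:.2f} pCi/L")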

Specificity/Sensitivity: Alpha track detectors are primarily used for indoor air measurements
but specially designed detectors are available for outdoor measurements.  Alpha track results are
usually expressed as the radon concentration over the exposure period (Bq/L-days). The
sensitivity is a function of detector design and exposure duration, and is on the order of 0.04
Bq/L-day (1  pCi/L-day).

Cost of Equipment: Not applicable when provided by a vendor

Cost per Measurement: $5 to $25

System:             CONTINUOUS RADON MONITOR
Lab/Field:          Field
Radiation Detected:        Primary:  Radon gas        Secondary: None

Applicability to Site Surveys: Continuous radon monitors are devices that measure and record
real-time measurements of radon gas or variations in radon concentration on an hourly basis.
Since continuous monitors display real-time hourly radon measurements, they are useful for
short-term site investigation.

Operation:  Continuous radon monitors are precision devices that track and record real-time
measurements and variations in radon gas concentration on an hourly basis.  Air either diffuses or
is pumped into a counting chamber. The counting chamber is typically a scintillation cell or
ionization chamber.  Using a calibration factor, the counts are processed electronically, and radon
concentrations for predetermined intervals are stored in memory or directly transmitted to a
printer.

Most continuous monitors are used for a relatively short measurement period, usually 1 to 7 days.
These devices  do require some operator skills and often have a ramp-up period to equilibrate
with the surrounding atmosphere. This ramp-up time can range from 1 to 4 hours depending on
the size of the counting chamber and rate of air movement into the chamber.

Specificity/Sensitivity:  Most continuous monitors are designed for both indoor and outdoor
radon measurements. The limiting factor for outdoor usage is the need for electrical power. In
locations where external power is unavailable, the available operating time depends on the
battery lifetime of the monitor. The minimum detectable concentration for these detectors ranges
from 0.004-0.04 Bq/L (0.1-1.0 pCi/L).

Cost of Equipment: $1,000 to $5,000.

Cost per Measurement: $80+ based on duration of survey.

System:             ELECTRET ION CHAMBER
Lab/Field:          Field
Radiation Detected:        Primary: Radon gas (alpha, beta)   Secondary: Gamma

Applicability to Site Surveys: Electrets are used to measure radon concentration in indoor
environments.  For contaminated structures, the electret ion chamber is a good indicator of short-
term and long-term radon concentrations.

Operation:  For this method, an electrostatically charged disk (electret) is situated within a small
container (ion chamber). During the measurement period, radon diffuses through a filter  into the
ion chamber, where the ionization produced by the decay of radon and its progeny reduces the
charge on the electret.  A calibration factor relates the voltage drop, due to the  charge reduction,
to the radon  concentration. Variations in electret design enable the detector to make long-term or
short-term measurements.  Short-term detectors are deployed for 2 to 7 days, whereas long-term
detectors may be deployed from 1 to 12 months.

Electrets are relatively inexpensive, passive, and can be used several times before discarding or
recharging, except in areas of extreme radon concentrations. These detectors need to be
corrected for the background gamma radiation during exposure since this ionization also
discharges the electret.
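
The gamma-background correction described above enters the calculation as a subtraction; the
sketch below shows the general form with hypothetical calibration constants standing in for the
vendor's electret- and chamber-specific factors.

# Illustrative electret radon calculation (hypothetical calibration constants).
v_initial = 735.0          # electret voltage before deployment (V)
v_final = 680.0            # electret voltage after deployment (V)
exposure_days = 7.0
cf_V_per_pCi_L_day = 1.8   # vendor calibration factor for this electret/chamber
gamma_uR_per_hr = 9.0      # ambient gamma background at the site
gamma_equiv_pCi_L_per_uR_hr = 0.087   # gamma correction constant (hypothetical)

voltage_drop = v_initial - v_final
uncorrected = voltage_drop / (cf_V_per_pCi_L_day * exposure_days)
radon_pci_per_L = uncorrected - gamma_uR_per_hr * gamma_equiv_pCi_L_per_uR_hr

print(f"Radon concentration: {radon_pci_per_L:.2f} pCi/L")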

Specificity/Sensitivity:  Electrets are designed to make radon measurements primarily in indoor
environments.  Care must be taken to measure the background gamma radiation at the site during
the exposure period. Extreme temperatures and humidity encountered outdoors may affect
electret voltage.  The minimum detectable concentration ranges from 0.007-0.02 Bq/L (0.2 to
0.5 pCi/L).

Cost of Equipment: Included in rental price

Cost per Measurement: $8 to $25 rental for an electret supplied by a vendor

System:             LARGE AREA ACTIVATED CHARCOAL COLLECTOR
Lab/Field:          Field
Radiation Detected:        Primary: Radon gas Secondary: None

Applicability to Site Surveys:  This method is used to make radon flux measurements (the
surface emanation rate of radon gas) and involves the adsorption of radon on activated carbon in
a large area collector.

Operation:  The collector consists of a 10 inch diameter PVC end cap, spacer pads, charcoal
distribution grid, retainer pad with screen, and a steel retainer spring. Between 170 and 200
grams of activated charcoal is spread in the distribution grid and held in place by the retainer pad
and spring.

The collector is deployed by firmly twisting the end cap into the surface of the material to be
measured. After 24 hours of exposure, the activated charcoal is removed and transferred to
plastic containers.  The amount of radon adsorbed on the activated charcoal  is determined by
gamma spectroscopy.  This data is used to calculate the radon flux in units of Bq m⁻² s⁻¹.
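
The flux calculation follows directly from the adsorbed activity, the collector area, and the
exposure time; the numbers below are hypothetical and omit the decay and humidity corrections a
laboratory would apply.

import math

# Illustrative radon flux calculation for a 10 in. (25.4 cm) diameter collector.
adsorbed_rn222_bq = 55.0        # radon activity on the charcoal from gamma spectroscopy
collector_diameter_m = 0.254
exposure_time_s = 24 * 3600.0   # 24 hour deployment

area_m2 = math.pi * (collector_diameter_m / 2.0) ** 2
flux_bq_per_m2_per_s = adsorbed_rn222_bq / (area_m2 * exposure_time_s)

print(f"Radon flux: {flux_bq_per_m2_per_s:.3f} Bq m^-2 s^-1")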

Specificity/Sensitivity: These collectors give an accurate short-term assessment of the radon gas
surface emanation rate from a material. The minimum detectable concentration of this method is
0.007 Bq m⁻² s⁻¹ (0.2 pCi m⁻² s⁻¹).

Exposures greater than 24 hours are not recommended due to atmospheric and surface moisture
and temperature extremes which may affect charcoal efficiency.

Cost of Equipment:  Not applicable

Cost per Measurement:  $20 - $50 including canister
                        H.2 FIELD SURVEY EQUIPMENT




                   H.2.5 X-Ray and Low Energy Gamma Detectors

System:             FIDLER PROBE WITH SURVEY METER
Lab/Field:          Field
Radiation Detected:        Primary: X-ray      Secondary: Low Energy Gamma
Applicability to Site Surveys:     The FIDLER (Field Instrument for the Detection of Low
Energy Radiation) probe is a specialized detector consisting of a thin layer of sodium or cesium
iodide which is optimized to detect gamma and x-radiation below 100 keV.  It is most widely
used for determining the presence of Pu and 241Am, and can be used for estimating radionuclide
concentrations in the field.
Operation: The FIDLER consists of a thin beryllium or aluminum window, a thin crystal of
sodium iodide, a quartz light pipe, and photomultiplier tube. The probe can have either a 3 in. or
5 in. crystal. The discussion below is applicable to 5 in. crystals. The survey meter requires
electronics capable of setting a window about an x-ray or gamma ray energy. This window
allows the probe and meter to detect specific energies and, in most cases, provide information
about a single element or radionuclide.  The window also lowers the background count.  Two
types of survey meters are generally used with FIDLER probes. One type resembles those used
with GM and alpha scintillation probes. They have an analog meter and range switch. The
second type is a digital survey meter, which can display the count rate or accumulate counts in a
scaler mode for a preset length of time.  Both types have adjustable high voltage and window
settings.  The advantage of the digital meter is that both background and sample counts can be
acquired in scaler mode, yielding a net count above background. The activity of a radionuclide
can then be estimated in the field.
Specificity/Sensitivity: The FIDLER probe is quite sensitive to x-ray and low energy gamma
radiation. Since it has the ability to discriminate energies, an energy window can  be set that
makes it possible to determine the presence of specific radionuclides when the nature of the
contamination is known. If the identity of a contaminant is known, the FIDLER can be used  to
quantitatively determine the concentration.  However, interferences can cause erroneous results if
other radionuclides are present. The FIDLER can also be used as a survey instrument to detect
the presence of x-ray or low energy gamma contaminants, and to determine the extent of the
contamination.  FIDLER probes are most useful for determining the presence of Pu and 241Am.
These isotopes have a complex of x-rays and gamma rays from 13-21 keV that have energies
centered around 17 keV, and 241Am has a gamma at 59 keV. There is an interference at 13 keV
from both americium and uranium x-rays. The FIDLER cannot distinguish which isotope of Pu
is present. 241Am can be identified based on the 59 keV gamma. Typical sensitivities for 238Pu
and 239Pu at one foot above the surface of a contaminated area are 500 to 700 and  250 to 350
counts per minute per µCi per square meter (cpm/µCi/m2), respectively. Assuming a soil density
of 1.5, uniform contamination of the first 1 mm of soil, and a typical background of 400 counts
per minute, the MDC for 238Pu and 239Pu would be 370 and 740 Bq/kg (10 and 20 pCi/g), or 1500
and 3000 Bq/m2 (900 and 1,800 dpm/100 cm2). This MDC applies to fresh deposition; the detection
sensitivity will be significantly lower as the plutonium migrates into the soil. Because the window is fragile, most
operations with  a FIDLER probe require a low mass protective cover to prevent damaging the
window.  Styrofoam, cardboard, and other cushioning materials are common choices for a
protective cover.
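
The MDC figures quoted above follow from the detector sensitivity and the background count rate. A
simplified estimate using a Currie-type relation for a static count is sketched below; the count
time is hypothetical, and the sensitivity and background are the values given in this entry.

import math

# Simplified FIDLER MDC estimate (Currie-type relation; hypothetical count time).

def mdc_uCi_per_m2(background_cpm, count_time_min, sensitivity_cpm_per_uCi_m2):
    """Minimum detectable 239Pu (or 238Pu) areal activity for a static count."""
    background_counts = background_cpm * count_time_min
    detectable_net_counts = 2.71 + 4.65 * math.sqrt(background_counts)
    detectable_net_cpm = detectable_net_counts / count_time_min
    return detectable_net_cpm / sensitivity_cpm_per_uCi_m2

if __name__ == "__main__":
    mdc = mdc_uCi_per_m2(background_cpm=400, count_time_min=10,
                         sensitivity_cpm_per_uCi_m2=300)    # 239Pu, 5 in. crystal
    print(f"MDC: {mdc:.2f} uCi/m2 ({mdc * 37000:.0f} Bq/m2)")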
Cost of Equipment:  $4,000 to $7,000
Cost per Measurement:  $10 to $20


System:             FIELD X-RAY FLUORESCENCE SPECTROMETER
Lab/Field:          Field
Radiation Detected:        Primary:  X-ray and low energy gamma radiation
                                  Secondary:   None

Applicability to Site Surveys: The system accurately measures relative concentrations of metal
atoms in soil or water samples down to the ppm range.

Operation:  This system is a rugged form of x-ray fluorescence system that measures the
characteristic x-rays of metals as they are released from excited electron structures. The
associated electronic and multi-channel analyzer systems are essentially identical to those used
with germanium spectrometry systems. The spectra of characteristic x-rays give information for
both quantitative and qualitative analysis; however, most frequently, the systems are only
calibrated for relative atomic abundance or percent composition.

Specificity/Sensitivity: This is ideal for cases of contamination by metals that have strong x-ray
emissions within 5-100 keV. Application for quantification of the transition metals (in the
periodic table) is most common because of the x-ray emissions. Operation of this equipment is
possible with only a moderate amount of training. The sensitivity ranges from a few percent to
ppm depending on the particular atoms and their characteristic x-rays.  When converted to
activity concentration, the minimum detectable concentration for 238U is around 1,850 Bq/kg
(50 pCi/g) for typical soil matrices.

Cost of Equipment: $15,000 - $75,000 depending on size, speed of operation and auxiliary
features employed for automatic analysis of the results.

Cost per Measurement: $200
                         H.2 FIELD SURVEY EQUIPMENT




                         H.2.6 Other Field Survey Equipment

System:             CHEMICAL SPECIES LASER ABLATION MASS SPECTROMETER
Lab/Field:          Field
Radiation Detected: None

Applicability to Site Surveys: Chemical Species Laser Ablation Mass Spectrometry has been
successfully applied to the analysis of organic and inorganic molecular species in condensed
material with high sensitivity and specificity.

Operation:  Solids can be converted into aerosol particles which contain much of the molecular
species information present in the original material. (One way this is done is by laser excitation
of one component of a solid mixture which, when volatilized, carries along the other molecular
species without fragmentation.) Aerosol particles can be carried hundreds of feet without
significant loss in a confined or directed air stream before analysis by mass spectrometry. Some
analytes of interest already exist in the form of aerosol particles. Laser ablation is also preferred
over traditional  means for the conversion of the aerosol particles into molecular ions for mass
spectral analysis. Instrument manufacturers are working with scientists at national laboratories
and universities in the development of compact portable laser ablation mass spectrometry
instrumentation for field based analyses.

Specificity/Sensitivity: This system can analyze soils and surfaces for organic and  inorganic
molecular species, with extremely good sensitivity. Environmental concentrations in the range of
10⁻⁹ to 10⁻¹⁴ g/g can be determined, depending on environmental conditions. It is highly effective
when used by a skilled operator, but of limited use due to high costs. It may be possible to
quantify an individual radionuclide if no other nuclides of the same mass are present in the sample
matrix. Potential MDCs are 4×10⁻⁸ Bq/kg (1×10⁻⁹ pCi/g) for 238U, 0.04 Bq/kg (10⁻³ pCi/g) for
239Pu, 4 Bq/kg (1 pCi/g) for 137Cs, and 37 Bq/kg (10 pCi/g) for 60Co.

Cost of Equipment: Very expensive (prototype)

Cost per Measurement:  May be comparable to  laser ablation inductively coupled plasma
atomic emission spectrometry (LA-ICP-AES) and laser ablation inductively coupled plasma
mass spectrometry (LA-ICP-MS).  When using the Atomic Emission Spectrometer,  the reported
cost is $4,000 per sample, or 80% of conventional sampling and analysis costs.  This high cost
for conventional samples is partly due to the 2-3 day time to analyze a sample for thorium by
conventional methods. When using the mass  spectrometer, the time required is about 30 minutes
per sample.

System:             LA-ICP-AES AND LA-ICP-MS
Lab/Field:          Field
Radiation Detected: None

Applicability to Site Surveys: LA-ICP-AES and LA-ICP-MS are acronyms for Laser Ablation-
Inductively Coupled Plasma-Atomic Emission Spectrometry or Mass Spectrometry. LA-ICP-
AES/MS techniques are used to screen/characterize very small samples of soils and concrete
(non-destructively) in situ to determine the level of contamination. It is particularly suited to
measuring the surface concentration of uranium and thorium. The unit can assess the
concentrations at various depths when lower levels are exposed by some means. It has the
advantages of not consuming surface material, providing real time response, reducing sampling
and analysis time, and keeping personnel clear of the materials being sampled.  The information
developed can assist in identifying locations for excavation. It is currently being tested.

Operation: Components of the system include a sampling system, fiber optics cables,
spectrometer, potable water supply, cryogenic and high-pressure gas supply, a robotics arm,
control computers,  inductively coupled plasma torch, and video monitor.

Sampling probes have been developed and prototyped that will screen/characterize surface soils,
concrete floors or pads, and subsurface soils.  The  sampling probes, both surface and subsurface,
contain the laser (a 50-Hz Nd/YAG laser), associated optics, and control circuitry to raster the
laser (ablation) energy across one square inch  of sample surface.  Either sampling probe is
connected by an umbilical, currently 20 m long, to the Mobile Demonstration Laboratory for
Environmental Screening Technologies (MDLEST), a completely self-contained mobile
laboratory containing the instrumentation to immediately analyze the samples generated by the
laser ablation.

A fiber optic cable  delivers laser light to the surface of interest. This ablates a small quantity of
material that is carried away in a stream of argon gas.  The material enters the plasma torch
where it is vaporized, atomized, ionized, and electrically excited at about 8,000 K. This produces
an ionic emission spectrum that is analyzed on the atomic emission spectrometer.

The analysis instrumentation  (ICP-AES/MS) in the MDLEST does not depend on radioactive
decay for detection but looks directly at the atomic makeup of the element(s) of interest. A
large number of metals including the longer half-life radioactive elements can be detected and
quantified.  The spectrometer is set up using either hardware, software, or both to simultaneously
detect all elements  of interest in each sample.

The MDLEST can  be set up on site to monitor soil treatment processes. This function enables
the remediation manager to monitor, in real time, the treatment processes removing the
contaminants and ensure that satisfactory agreement with both regulatory agency and QC/QA
requirements is attained.


Specificity/Sensitivity:  This system measures the surface or depth concentration of atomic
species, and is particularly suited to uranium and thorium analysis.  It is highly effective with
skilled operators.  Some advantages are no contact with the soil, real time results, and no samples
to dispose of. The sample results are quickly available for field remediation decisions, with the
LA-ICP-AES taking about 10 minutes and LA-ICP-MS taking about 30 minutes.  The detection
limits for the two spectrometers that have been used are as follows:

1)     The AES (atomic emission spectrometer) can see ppm levels for some 70 elements and
       reportedly detects uranium  and thorium concentrations at 1 ppm, or 10 Bq/kg (0.3 pCi/g)
for 238U and 4 Bq/kg (0.1 pCi/g) for 232Th. However, the technique is only sensitive to
       elements; it cannot discriminate between the different isotopes of uranium and thorium.
       This prevents it from being used for assessing lower Z elements that have stable isotopes,
       or from determining relative abundances of isotopes of any element. This may
       significantly limit its use at some sites.
2)     The MS (mass  spectrometer) can see sub-ppb levels and is capable of quantifying the
       uranium and thorium isotopes. This system has been used to search for 230Th and 226Ra
and is reportedly useful in reaching 0.8 ppb or 0.6 Bq/g (15 pCi/g) for 230Th content for
       remediated soil. It appears to measure uranium and thorium concentration of soil more
       sensitively than the LA-ICP-AES system.

Cost of Equipment:  Very expensive, >$1M.

Cost per Measurement:  When using the Atomic Emission Spectrometer,  the reported cost is
$4,000 per sample. When using the mass spectrometer, a dollar price was not provided.
                        H.3  LABORATORY INSTRUMENTS




                             H.3.1 Alpha Particle Analysis

System:             ALPHA SPECTROSCOPY WITH MULTICHANNEL ANALYZER
Lab/Field:          Lab
Radiation Detected:        Primary: Alpha            Secondary: None
Applicability to Site: This is a very powerful tool for accurately identifying and quantifying the
activity of multiple alpha-emitting radionuclides in a sample of soil, water, air filters, etc.
Methods exist for the analyses of most alpha emitting radionuclides including uranium, thorium,
plutonium, polonium, and americium. Samples must first be prepared in a chemistry lab to
isolate the radionuclides of interest from the environmental matrix.
Operation:  This system consists of an alpha detector housed in a light-tight vacuum chamber, a
bias supply, amplifier, analog-to-digital converter, multichannel analyzer, and computer. The
bias is typically 25 to 100 volts.  The vacuum is typically less than 10 microns of mercury (10 millitorr).
The detector is a silicon diode that is reverse biased.  Alpha particles which strike the diode
create electron-hole pairs; the number of pairs is directly related to the energy of each alpha.
These pairs cause a breakdown of the diode and a current pulse to flow. The charge is collected
by a preamplifier and converted to a voltage pulse which is proportional to the alpha energy. It
is amplified and shaped by an amplifier. The MCA stores the resultant pulses and displays a
histogram of the number of counts vs. alpha energy. Since most alphas will lose all of their
energy to the diode, peaks are seen  on the MCA display that can be identified by specific alpha
energies. Two system calibrations are necessary. A source with at least two known alpha
energies is counted to correlate the voltage pulses with alpha energy. A standard source of
known activity is analyzed to determine the system efficiency for detecting alphas.  Since the
sample and detector are in a vacuum,  most commonly encountered alpha energies will be
detected with approximately the  same efficiency, provided there is no self-absorption in the
sample. Samples are prepared in a chemistry lab.  The sample is placed in solution  and the
element of interest (uranium, plutonium, etc.) separated. A tracer of known activity is added
before separation to determine the overall recovery of the sample from the chemical procedures.
The sample is converted to a particulate having very little mass and collected on a special filter,
or it is collected from solution by electroplating onto a metal disk.  It is then placed  in the
vacuum chamber at a fixed distance from the diode and analyzed.  For environmental levels,
samples are typically analyzed for 1000 minutes or more.
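
The quantities described above (count time, detector efficiency, and tracer recovery) combine into
a single activity calculation. The sketch below shows the general form with hypothetical inputs,
omitting uncertainty propagation and decay corrections.

# Illustrative alpha spectrometry activity calculation (hypothetical inputs).

def activity_bq_per_g(net_peak_counts, count_time_s, detector_efficiency,
                      tracer_recovery, sample_mass_g):
    """Activity concentration for one alpha peak, corrected for chemical recovery."""
    count_rate = net_peak_counts / count_time_s
    disintegration_rate = count_rate / (detector_efficiency * tracer_recovery)
    return disintegration_rate / sample_mass_g

if __name__ == "__main__":
    # Hypothetical: 150 net counts in the 239Pu peak over a 1000 minute count,
    # 25% detector efficiency, 80% tracer recovery, 1 g soil aliquot processed.
    a = activity_bq_per_g(net_peak_counts=150, count_time_s=1000 * 60,
                          detector_efficiency=0.25, tracer_recovery=0.80,
                          sample_mass_g=1.0)
    print(f"239Pu activity concentration: {a * 1000:.1f} mBq/g ({a / 0.037:.2f} pCi/g)")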
Specificity/Sensitivity: The system can accurately identify and quantify the various alpha
emitting radioactive isotopes of each elemental species provided each has a different alpha
energy that can  be resolved by the system. For soils, a radionuclide can be measured below
0.004 Bq/g (0.1 pCi/g). The system is appropriate for all alphas except those from  gaseous
radionuclides.
Cost of Equipment:  $10,000 - $100,000 based on the number of detectors and sophistication of
the computer and data reduction  software. This does not include the cost of equipment for the
chemistry lab.
Cost per Measurement:  $250-$400 for the first element, $100-200 for each additional element
per sample.  The additional element cost depends on the separation chemistry involved and may
not always be less.  $200-$300 additional for a rush analysis.

System:             GAS-FLOW PROPORTIONAL COUNTER
Lab/Field:          Lab
Radiation Detected:        Primary:  Alpha, Beta      Secondary:  Gamma
Applicability to Site Surveys: This system can determine the gross alpha or gross beta activity
of water, soil, air filters, or swipes. Results can indicate if nuclide-specific analysis is needed.
Operation:  The system consists of a gas-flow detector, supporting electronics, and an  optional
guard detector for reducing background count rate.  A thin window can be placed between the
gas-flow detector and sample to protect the detector from contamination, or the sample can be
placed directly into the detector. Systems with guard detectors operate sample and guard
detectors in anticoincidence mode to reduce the background and MDC. The detector high
voltage and discriminator are set to count alpha radiation, beta radiation, or both simultaneously.
The alpha and beta operating voltages are determined for each system by placing an alpha source,
like 230Th or 241Am, in the detector  and increasing the high voltage incrementally until the count
rate becomes constant, then repeating with a beta source, like 90Sr.  The alpha plateau, or region
of constant count rate, should have a slope <2%/100V and be >800V long.  The beta plateau
should have  a slope of <2.5%/100V and be >200V long. Operation on the beta plateau will also
allow detection of some gamma radiation and bremsstrahlung (x-rays), but the efficiency is very
low. Crosstalk from the alpha channel into the beta channel is typically around 10%, while beta-to-alpha
crosstalk should be <1%. The activity in soil samples is chemically extracted, separated if necessary,
deposited in a thin layer in a planchet to minimize self absorption, and heated to dryness.
Liquids are deposited and dried, while air filters and swipes are  placed directly in the planchet.
After each sample is placed under the detector, P-10 counting gas constantly flows through the
detector.  Systems with automatic sample changers can analyze tens to hundreds of planchet
samples in a single run.
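
A minimal reduction of simultaneous alpha/beta counting data is sketched below. The efficiencies
and crosstalk fraction are hypothetical values of the kind determined during the plateau and
calibration measurements described above, and self-absorption corrections are omitted.

# Illustrative gross alpha/beta calculation for a gas-flow proportional counter.
# Efficiencies and crosstalk are hypothetical calibration results.

def gross_activity_dpm(alpha_gross_cpm, beta_gross_cpm,
                       alpha_bkg_cpm, beta_bkg_cpm,
                       alpha_eff, beta_eff, alpha_to_beta_crosstalk):
    """Return (gross alpha dpm, gross beta dpm) for one planchet."""
    net_alpha_cpm = alpha_gross_cpm - alpha_bkg_cpm
    net_beta_cpm = beta_gross_cpm - beta_bkg_cpm
    alpha_dpm = net_alpha_cpm / alpha_eff
    # Remove alpha events that spilled into the beta channel before converting.
    beta_only_cpm = net_beta_cpm - alpha_to_beta_crosstalk * net_alpha_cpm
    beta_dpm = beta_only_cpm / beta_eff
    return alpha_dpm, beta_dpm

if __name__ == "__main__":
    a_dpm, b_dpm = gross_activity_dpm(alpha_gross_cpm=3.2, beta_gross_cpm=18.0,
                                      alpha_bkg_cpm=0.1, beta_bkg_cpm=2.5,
                                      alpha_eff=0.40, beta_eff=0.55,
                                      alpha_to_beta_crosstalk=0.10)
    print(f"Gross alpha: {a_dpm:.1f} dpm, gross beta: {b_dpm:.1f} dpm")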
Specificity/Sensitivity: Natural radionuclides present in soil samples can interfere with the
detection of  other contaminants. Unless the nature of the contaminant and any naturally-
occurring radionuclides is well known, this system is better used for screening samples. Although
it is possible to use a proportional counter to roughly determine  the energies of alpha and beta
radiation, the normal mode of operation is to detect all alpha events or all alpha and beta events.
Some systems use a discriminator to separate alpha and beta events, allowing simultaneous
determination of both the  alpha and beta activity in  a sample. These systems do not identify the
alpha or beta energies detected and cannot be used to identify specific radionuclides.  The alpha
channel background is very  low, <0.2 cpm (<0.04 cpm guarded), depending on detector size.
Typical 4-pi efficiencies for very thin alpha sources are 35-45% (window) and 40-50%
(windowless).  Efficiency depends  on window thickness, particle energy, source-detector
geometry, backscatter from the sample and holder, and detector size.  The beta channel
background ranges from 2 to 15 cpm (<0.5 cpm guarded).  The 4-pi efficiency for a thin 90Sr/90Y
source is >50% (window) to >60% (windowless), but can reduce to <5% for a thick source.
MDA's for guarded gas-flow proportional counters are somewhat lower for beta emitters than for
internal proportional counters because of the lower backgrounds. Analyzing  a high radioactivity
sample or flushing the detector with P10 gas at too high a flow rate can suspend fine particles
and contaminate the detector.
Cost of Equipment:  $4K-$5K (manual), $25K-$30K (automatic)
Cost per Measurement:  $30 to $50  plus radiochemistry
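
Where the counter is run in simultaneous alpha/beta mode, the two channel count rates can be
unfolded for the crosstalk described above.  The short Python sketch below illustrates the arithmetic
only; the background rates and crosstalk fractions are placeholder values, not calibration data for any
particular instrument.

    # Sketch: correct simultaneous alpha/beta channel count rates for background
    # and for channel crosstalk on a gas-flow proportional counter.
    # All numerical values are illustrative placeholders, not calibration data.

    def unfold_alpha_beta(r_alpha_ch, r_beta_ch,
                          bkg_alpha_ch=0.05, bkg_beta_ch=1.0,
                          f_ab=0.10,   # fraction of alpha events counted in the beta channel
                          f_ba=0.01):  # fraction of beta events counted in the alpha channel
        """Return (alpha_cpm, beta_cpm) corrected for background and crosstalk."""
        na = r_alpha_ch - bkg_alpha_ch          # net alpha-channel rate (cpm)
        nb = r_beta_ch - bkg_beta_ch            # net beta-channel rate (cpm)
        # Model: na = A + f_ba*B  and  nb = f_ab*A + B; solve the 2x2 system.
        det = 1.0 - f_ab * f_ba
        alpha = (na - f_ba * nb) / det
        beta = (nb - f_ab * na) / det
        return alpha, beta

    alpha_cpm, beta_cpm = unfold_alpha_beta(52.0, 118.0)
    print(f"alpha = {alpha_cpm:.1f} cpm, beta = {beta_cpm:.1f} cpm")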

MARSSIM, Revision 1                        H-40                                August 2000

-------
                                                                                Appendix H

System:              LIQUID SCINTILLATION SPECTROMETER
Lab/Field:           Lab (primarily), field (secondarily)
Radiation Detected:        Primary:  Alpha, beta      Secondary: Gamma
Applicability to Site Surveys:  Liquid Scintillation can be a very effective tool for measuring
the concentration of radionuclides in soil, water, air filters, and swipes. Liquid scintillation has
historically been applied more to beta emitters, particularly the low energy beta emitters 3H and
14C, but it can also apply to other radionuclides.  More recently it has been used for measuring
radon in air and water. Initial scoping surveys may be done (particularly for loose surface
contamination) with surface swipes or air particulate filters. They may be counted directly in
liquid scintillation cocktails with no paper dissolution or other sample preparation.
Operation: The liquid scintillation process involves detection of light pulses (usually in the near
visible range) by photo-multiplier tubes (or conceptually similar devices).  The detected light
pulses originate from the de-excitation of previously excited molecular electronic states. The
molecular species that first absorb and then re-emit the visible light are called "liquid
scintillators"  and the solutions in which they reside are called "liquid scintillation cocktails."  For
gross counting, samples may be placed directly into a LSC vial of cocktail, and counted with no
preparation. Inaccuracies result when the sample itself absorbs the radiation before it can reach
the LSC cocktail, or when the sample absorbs the light produced by the cocktail.  For accurate
results, these  interferences are minimized. Interferences in liquid scintillation counting due to the
inability of the solution to deliver the full energy pulse to the photo-multiplier detector, for a
variety of reasons, are called "pulse quenching." Raw samples that cloud or color the LSC
cocktail so the resulting scintillations are absorbed will "quench" the sample and result in
underestimates of the activity.  Such samples are first processed by ashing, radiochemical or
solvent extraction, or pulverizing to place the sample in intimate contact with the LSC cocktail.
Actions like bleaching the sample may also be necessary to make the cocktail solution
transparent to the wavelength of light it emits. The analyst has several reliable computational or
experimental  procedures to account for "quenching."  One is by exposing the sample and pure
cocktail to an external radioactive standard and measuring the difference in response.
Specificity/Sensitivity:  The method is extremely flexible and accurate when used with proper
calibration and compensation for quenching effects.  Energy spectra are 10 to 100 times broader
than gamma spectrum photopeaks so that quantitative determination of complex multi-energy
beta spectra is impossible. Sample preparation can range from none to complex chemical
reactions. In some cases, liquid scintillation offers unique advantages, such as requiring no sample
preparation before counting, in contrast to the conventional sample preparation needed for gas
proportional counting.  Recent advances in electronic stability and energy pulse shape discrimination
have greatly expanded its uses. Liquid scintillation counters are ideal instruments for moderate to high
energy beta as well as alpha emitters, where the use of pulse shape discrimination has allowed
dramatic increases in sensitivity by electronic discrimination against beta and gamma emitters.
Additionally, very high energy beta emitters (above 1.5 MeV) may be counted using liquid
scintillation equipment without "liquid scintillation cocktails" by use of the Cerenkov light pulse
emitted as high energy charged particles move through water or similar substances.
Cost of Equipment: $20,000 to $70,000 based on the specific features and degree of automation
Cost per Measurement:  $50-$200 plus cost of chemical separation, if required
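
The external-standard quench correction mentioned above amounts to looking up a counting
efficiency on a quench curve and dividing it out of the net count rate.  The Python sketch below
shows that bookkeeping; the quench-curve points and sample data are hypothetical.

    # Sketch: external-standard quench correction for liquid scintillation counting.
    # The quench curve points below are hypothetical; a real curve is built by
    # counting a set of quenched standards of known activity on the instrument.

    from bisect import bisect_left

    # (quench-indicating parameter from the external standard, counting efficiency)
    QUENCH_CURVE = [(200.0, 0.25), (300.0, 0.40), (400.0, 0.55),
                    (500.0, 0.70), (600.0, 0.85), (700.0, 0.95)]

    def efficiency_from_qip(qip):
        """Linear interpolation on the quench curve (clamped at the ends)."""
        xs = [x for x, _ in QUENCH_CURVE]
        ys = [y for _, y in QUENCH_CURVE]
        if qip <= xs[0]:
            return ys[0]
        if qip >= xs[-1]:
            return ys[-1]
        i = bisect_left(xs, qip)
        x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
        return y0 + (y1 - y0) * (qip - x0) / (x1 - x0)

    def activity_bq(gross_cpm, bkg_cpm, qip):
        """Background-subtracted activity in Bq for a quenched sample."""
        net_cps = (gross_cpm - bkg_cpm) / 60.0
        return net_cps / efficiency_from_qip(qip)

    print(f"{activity_bq(gross_cpm=4500.0, bkg_cpm=30.0, qip=450.0):.1f} Bq")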

August 2000                                H-41                        MARSSIM, Revision 1

-------
Appendix H

System:             LOW-RESOLUTION ALPHA SPECTROSCOPY
Lab/Field:          Lab (Soil Samples)
Radiation Detected:        Primary: Alpha      Secondary:

Applicability to Site Surveys:  Low-resolution alpha spectroscopy is a method for measuring
alpha activity in soils with a minimum of sample preparation.  Some isotopic information can be
obtained.

Operation:  The system consists of a 2 in. diameter silicon detector, small vacuum chamber,
roughing pump, multichannel analyzer, laptop or benchtop computer, and analysis software.  Soil
samples are dried,  milled to improve homogeneity, distributed into 2 in. planchets, loaded into
the vacuum chamber, and counted. The accumulated alpha spectrum is displayed in real time.
When sufficient counts have been accumulated, the spectrum is transferred to a data file and the
operator inputs the known or suspected contaminant isotopes.  The analysis software then fits the
alpha spectrum with a set of trapezoidal peaks, one for each isotope, and outputs an estimate of
the specific activity of each isotope.

Specificity/Sensitivity: This method fills the gap between gross alpha analysis and
radiochemical separation/high-resolution alpha spectroscopy. Unlike gross alpha analysis, it
does provide some isotopic information.  Because this is a low-resolution technique, isotopes
with energies closer than approximately 0.2 MeV cannot be separated.  For example, 238U (4.20 MeV) can be
readily distinguished from 234U (4.78 MeV), but 230Th (4.69 MeV) cannot be distinguished from
234U.

Because no chemical separation of isotopes is involved, only modest MDC's can be achieved.
Detection limits are determined by the background alpha activity in the region of interest of the
contaminant of concern, and also by the counting time.  Typical MDC's are 1,500 Bq/kg (40
pCi/g) @ 15 min counting time, 260 Bq/kg (7 pCi/g) @  8 hours, and 185 Bq/kg (5 pCi/g) @ 24
hours.  The method does not generate any new waste streams and does not require a sophisticated
laboratory or highly-trained personnel.

Cost of Equipment:  $11,000

Cost per Measurement: $25-$100
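
The roughly 0.2 MeV resolution quoted above determines which alpha emitters the technique can
distinguish.  The Python sketch below groups alpha lines that would merge at that resolution, using
the three energies quoted in the text; the grouping logic itself is only illustrative.

    # Sketch: which alpha emitters a low-resolution spectrometer can separate,
    # assuming peaks closer than roughly 0.2 MeV merge (per the text above).
    # The energies are the ones quoted in the text; the logic is illustrative.

    RESOLUTION_MEV = 0.2

    def resolvable_groups(lines, resolution=RESOLUTION_MEV):
        """Group (isotope, alpha energy in MeV) pairs that are too close to separate."""
        groups = []
        for iso, e in sorted(lines, key=lambda x: x[1]):
            if groups and e - groups[-1][-1][1] < resolution:
                groups[-1].append((iso, e))
            else:
                groups.append([(iso, e)])
        return groups

    lines = [("U-238", 4.20), ("Th-230", 4.69), ("U-234", 4.78)]
    for group in resolvable_groups(lines):
        print(" + ".join(f"{iso} ({e:.2f} MeV)" for iso, e in group))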
MARSSIM, Revision 1                        H-42                                August 2000

-------
                                                                            Appendix H
                        H.3  LABORATORY INSTRUMENTS




                             H.3.2 Beta Particle Analysis
August 2000                               H-43                       MARSSIM, Revision 1

-------
Appendix H

System:             GAS-FLOW PROPORTIONAL COUNTER
Lab/Field:          Lab
Radiation Detected:        Primary:  Alpha, Beta      Secondary:  Gamma
Applicability to Site Surveys: This system can determine the gross alpha or gross beta activity
of water, soil, air filters, or swipes. Results can indicate if nuclide-specific analysis is needed.
Operation:  The system consists of a gas-flow detector, supporting electronics, and an  optional
guard detector for reducing background count rate.  A thin window can be placed between the
gas-flow detector and sample to protect the detector from contamination, or the sample can be
placed directly into the detector. Systems with guard detectors operate sample and guard
detectors in anticoincidence mode to reduce the background and MDC. The detector high
voltage and discriminator are set to count alpha radiation, beta radiation, or both simultaneously.
The alpha and beta operating voltages are determined for each system by placing an alpha source,
like 230Th or 241Am, in the detector  and increasing the high voltage incrementally until the count
rate becomes constant, then repeating with a beta source, like 90Sr.  The alpha plateau, or region
of constant count rate, should have a slope <2%/100V and be >800V long.  The beta plateau
should have  a slope of <2.5%/100V and be >200V long. Operation on the beta plateau will also
allow detection of some gamma radiation and bremsstrahlung (x-rays), but the efficiency is very
low. Crosstalk from the alpha channel into the beta channel is typically around 10%, while
beta-to-alpha crosstalk should be <1%. The activity in soil samples is chemically extracted, separated if necessary,
deposited in a thin layer in a planchet to minimize self absorption, and heated to dryness.
Liquids are deposited and dried, while air filters and swipes are  placed directly in the planchet.
After each sample is placed under the detector, P-10 counting gas constantly flows through the
detector.  Systems with automatic sample changers can analyze tens to hundreds of planchet
samples in a single run.
Specificity/Sensitivity: Natural radionuclides present in soil samples can interfere with the
detection of  other contaminants. Unless the nature of the contaminant and any naturally-
occurring radionuclides is well known, this system is better used for screening samples. Although
it is possible to use a proportional counter to roughly determine  the energies of alpha and beta
radiation, the normal mode of operation is to detect all alpha events or all alpha and beta events.
Some systems use a discriminator to separate alpha and beta events, allowing simultaneous
determination of both the  alpha and beta activity in  a sample. These systems do not identify the
alpha or beta energies detected and cannot be used to identify specific radionuclides.  The alpha
channel background is very  low, <0.2 cpm (<0.04 cpm guarded), depending on detector size.
Typical 4-pi efficiencies for very thin alpha sources are 35-45% (window) and 40-50%
(windowless).  Efficiency depends  on window thickness, particle energy, source-detector
geometry, backscatter from the sample and holder, and detector size.  The beta channel
background ranges from 2 to 15 cpm (<0.5 cpm guarded).  The 4-pi efficiency for a thin 90Sr/90Y
source is >50% (window) to >60% (windowless), but can reduce to <5% for a thick source.
MDA's for guarded gas-flow proportional counters are somewhat lower for beta emitters than for
internal proportional counters because of the lower backgrounds. Analyzing  a high radioactivity
sample or flushing the detector with P10 gas at too high a flow rate can suspend fine particles
and contaminate the detector.
Cost of Equipment:  $4K-$5K (manual), $25K-$30K (automatic)
Cost per Measurement:  $30 to $50  plus radiochemistry

MARSSIM, Revision 1                        H-44                                August 2000

-------
                                                                                Appendix H

System:              LIQUID SCINTILLATION SPECTROMETER
Lab/Field:           Lab (primarily), field (secondarily)
Radiation Detected:        Primary:  Alpha, beta      Secondary: Gamma
Applicability to Site Surveys:  Liquid Scintillation can be a very effective tool for measuring
the concentration of radionuclides in soil, water, air filters, and swipes. Liquid scintillation has
historically been applied more to beta emitters, particularly the low energy beta emitters 3H and
14C, but it can also apply to other radionuclides.  More recently it has been used for measuring
radon in air and water. Initial scoping surveys may be done (particularly for loose surface
contamination) with surface swipes or air particulate filters. They may be counted directly in
liquid scintillation cocktails with no paper dissolution or other sample preparation.
Operation: The liquid scintillation process involves detection of light pulses (usually in the near
visible range) by photo-multiplier tubes (or conceptually similar devices).  The detected light
pulses originate from the de-excitation of previously excited molecular electronic states. The
molecular species that first absorb and then re-emit the visible light are called "liquid
scintillators"  and the solutions in which they reside are called "liquid scintillation cocktails."  For
gross counting, samples may be placed directly into a LSC vial of cocktail, and counted with no
preparation. Inaccuracies result when the sample itself absorbs the radiation before it can reach
the LSC cocktail, or when the sample absorbs the light produced by the cocktail.  For accurate
results, these  interferences are minimized. Interferences in liquid scintillation counting due to the
inability of the solution to deliver the full energy pulse to the photo-multiplier detector, for a
variety of reasons, are called "pulse quenching." Raw samples that cloud or color the LSC
cocktail so the resulting scintillations are absorbed will "quench" the sample and result in
underestimates of the activity.  Such samples are first processed by ashing, radiochemical or
solvent extraction, or pulverizing to place the sample in intimate contact with the LSC cocktail.
Actions like bleaching the sample may also be necessary to make the cocktail solution
transparent to the wavelength of light it emits. The analyst has several reliable computational or
experimental  procedures to account for "quenching."  One is by exposing the sample and pure
cocktail to an external radioactive standard and measuring the difference in response.
Specificity/Sensitivity:  The method is extremely flexible and accurate when used with proper
calibration and compensation for quenching effects.  Energy spectra are 10 to 100 times broader
than gamma spectrum photopeaks so that quantitative determination of complex multi-energy
beta spectra is impossible. Sample preparation can range from none to complex chemical
reactions. In  some cases, liquid scintillation offers many unique advantages such as no sample
preparation before counting in contrast to conventional sample preparation for gas proportional
counting.  Recent advances in electronic stability and energy pulse shape discrimination has
greatly expanded uses. Liquid scintillation counters are ideal instruments for moderate to high
energy beta as well as alpha emitters, where the use of pulse shape discrimination has allowed
dramatic increases in sensitivity by electronic discrimination against beta and gamma emitters.
Additionally, very high energy beta emitters (above 1.5 MeV) may be counted using liquid
scintillation equipment without "liquid scintillation cocktails" by use of the Cerenkov light pulse
emitted as high energy charged particles move through water or similar substances.
Cost of Equipment: $20,000 to $70,000 based on the specific features and degree of automation
Cost per Measurement:  $50-$200 plus cost of chemical separation, if required

August 2000                                H-45                        MARSSIM, Revision 1

-------
Appendix H
                       H.3 LABORATORY INSTRUMENTS




                            H.3.3 Gamma Ray Analysis
MARSSIM, Revision 1                      H-46                             August 2000

-------
                                                                              Appendix H

System:             GERMANIUM DETECTOR WITH MULTICHANNEL ANALYZER
                    (MCA)
Lab/Field:          Lab
Radiation Detected:        Primary:  Gamma   Secondary: None
Applicability to Site: This system accurately measures the activity of gamma-emitting
radionuclides in a variety of materials like soil, water, air filters, etc. with little preparation.
Germanium is especially powerful in dealing with multiple radionuclides and complicated
spectra.
Operation:  This system consists of a germanium detector connected to a dewar of liquid
nitrogen, high voltage power supply,  spectroscopy grade amplifier, analog to digital converter,
and a multichannel analyzer. P-type germanium detectors  typically operate from +2000 to +5000
volts. N-type germanium detectors operate from -2000 to  -5000 volts.  Germanium is a
semiconductor material.  When a gamma ray interacts with a germanium crystal, it produces
electron-hole pairs. An electric field  is applied which causes the electrons to move in the
conduction band and the holes to pass the charge from atom to neighboring atom.  The charge is
collected rapidly and is proportional to the deposited energy. The count rate/energy spectrum is
displayed on the MCA screen with the full energy photopeaks providing more useful information
than the general smear of Compton scattering events shown in between. The system is energy
calibrated using isotopes that emit at  least two known gamma ray energies,  so the MCA data
channels are given an energy equivalence. The MCA's display  then becomes a display of
intensity versus energy. Efficiency calibration is performed using known concentrations of
mixed isotopes. A curve of gamma ray energy versus counting efficiency is generated, and it
shows that P-type germanium is most sensitive at 120 keV and trails off to either side. Since the
counting efficiency depends on the distance from the sample to the detector, each geometry must
be given a separate efficiency calibration curve.  From that point the center  of each gaussian-
shaped peak tells the gamma ray energy that produced it, the combination of peaks identifies
each isotope, and the area under selected peaks is a measure of the amount of that isotope in the
sample. Samples are placed in containers and tare weighed.  Plastic petri dishes sit atop the
detector and are useful for small volumes or low energies,  while Marinelli beakers fit around the
detector and provide exceptional counting efficiency for volume samples. Counting times of
1000 seconds to 1000 minutes are typical. Each peak is identified manually or by gamma
spectrometry analysis software.  The  counts in each peak or energy band, the sample weight, the
efficiency calibration curve, and the isotope's decay scheme are factored together to give the
sample concentration.
Specificity/Sensitivity:  The system  accurately identifies and quantifies the concentrations of
multiple gamma-emitting radionuclides in samples like soil, water, and air filters with minimum
preparation.  A P-type detector is good for energies over 50 keV. An N-type or P-type planar
(thin crystal) detector with beryllium-end window is good  for 5-80 keV energies using a thinner
sample placed over the window.
Cost of Equipment:  $35,000 to $150,000 based on detector efficiency and sophistication of
MCA/computer/software system
Cost per Measurement: $100 to $200 (rush requests can double or triple costs)
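
The data reduction described above (energy calibration of the MCA, then conversion of a net
photopeak area to a concentration) can be summarized in a few lines of Python.  The calibration
lines, efficiency, gamma yield, and sample data below are hypothetical values chosen only to show
the arithmetic.

    # Sketch: two-point MCA energy calibration and conversion of a net photopeak
    # area into a specific activity.  All numbers are hypothetical examples.

    def energy_calibration(ch1, e1_kev, ch2, e2_kev):
        """Return a channel -> keV function from two known gamma lines."""
        slope = (e2_kev - e1_kev) / (ch2 - ch1)
        return lambda ch: e1_kev + slope * (ch - ch1)

    def specific_activity_bq_per_kg(net_peak_counts, live_time_s,
                                    efficiency, gamma_yield, sample_kg):
        """Activity = net counts / (efficiency * emission probability * time * mass)."""
        return net_peak_counts / (efficiency * gamma_yield * live_time_s * sample_kg)

    chan_to_kev = energy_calibration(ch1=662, e1_kev=661.7, ch2=1332, e2_kev=1332.5)
    print(f"channel 1173 ~ {chan_to_kev(1173):.0f} keV")
    print(f"{specific_activity_bq_per_kg(12500, 3600, 0.025, 0.85, 0.45):.0f} Bq/kg")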
August 2000                               H-47                       MARSSIM, Revision 1

-------
Appendix H

System:             SODIUM IODIDE DETECTOR WITH MULTICHANNEL ANALYZER
Lab/Field:          Lab
Radiation Detected:        Primary: Gamma   Secondary: None
Applicability to Site Surveys: This system accurately measures the activity of gamma-emitting
radionuclides in a variety of materials like soil, water, air filters, etc. with little preparation.
Sodium iodide is inherently more efficient for detecting gamma rays but has lower resolution
than germanium, particularly if multiple radionuclides and complicated spectra are involved.
Operation:  This system consists of a sodium iodide detector, a high voltage power supply, an
amplifier, an analog to digital converter, and a multichannel analyzer.  The detector is a sodium
iodide crystal connected to a photomultiplier tube (PMT). Crystal shapes can vary extensively
and typical detector high voltages are 900-1,000 V. Sodium iodide is a scintillation material.  A
gamma ray interacting with a sodium iodide crystal produces light which is passed to the PMT.
This light ejects electrons which the PMT multiplies into a pulse that is proportional to the
energy the gamma ray imparted to the crystal.  The MCA assesses the pulse size and places a
count in the corresponding channel. The count rate and energy spectrum is displayed on the
MCA screen with the full energy photopeaks providing more useful information than the general
smear of Compton  scattering events shown in between. The system is energy calibrated using
isotopes that emit at least two gamma ray energies, so the MCA data channels are given an
energy equivalence. The MCA's CRT then becomes a display of intensity  versus energy. A
non-linear energy response and lower resolution make isotopic identification  less precise than
with a germanium detector. Efficiency calibration is performed using known concentrations of
single or mixed isotopes. The single isotope method develops a count rate to activity factor.  The
mixed isotope method produces a gamma ray energy versus counting efficiency curve that shows
that sodium iodide  is most sensitive around 100-120 keV and trails off to either side. Counting
efficiency is a function of sample to detector distance, so each geometry must have a separate
efficiency calibration curve.  The center of each peak tells the gamma ray energy that produced  it
and the combination of peaks identifies each isotope.  Although the area under a peak relates to
that isotope's activity in the sample, integrating a band of channels often provides better
sensitivity.  Samples are placed in containers and tare weighed.  Plastic petri dishes sit atop the
detector and are useful for small volumes or low energies, while Marinelli beakers fit around the
detector and provide exceptional counting efficiency for volume samples.  Counting times of 60
seconds to 1,000 minutes are typical. The CRT display is scanned and each peak is identified by
isotope.  The counts in each peak or energy band, the sample weight, the efficiency calibration
curve, and the isotope's decay scheme are factored together to give the sample concentration.
Specificity/Sensitivity:  This system analyzes gamma-emitting isotopes with minimum
preparation and better efficiency, but lower resolution, than most germanium detectors.
Germanium detectors do reach relative efficiencies of 150% (measured relative to a 3 in. by 3 in.
sodium iodide detector), but such a detector costs around $100,000 compared with about $3,000
for the sodium iodide.  Sodium iodide measures
energies over 80 keV.  The instrument response is energy dependent, the resolution is not superb,
and the energy calibration is not totally linear, so care should be taken when identifying or
quantifying multiple isotopes. Computer software can help interpret complicated spectra.
Sodium iodide is fragile and should be protected from shock and sudden temperature changes.
Cost of Equipment: $6K-$20K
Cost per Measurement: $100-$200 per sample.
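
Because sodium iodide peaks are broad, integrating a band of channels and subtracting the underlying
continuum (estimated from channels on either side of the peak) is a common way to extract a net peak
area, as noted above.  The Python sketch below shows one simple version of that calculation on a
made-up spectrum; the channel limits and counts are illustrative only.

    # Sketch: net counts in a photopeak region of an MCA spectrum, subtracting the
    # Compton continuum estimated from flanking background channels.

    def net_peak_area(spectrum, peak_lo, peak_hi, n_bkg_channels=3):
        """Continuum subtraction using channels on either side of the peak."""
        peak = spectrum[peak_lo:peak_hi + 1]
        left = spectrum[peak_lo - n_bkg_channels:peak_lo]
        right = spectrum[peak_hi + 1:peak_hi + 1 + n_bkg_channels]
        continuum_per_ch = (sum(left) / len(left) + sum(right) / len(right)) / 2.0
        gross = sum(peak)
        return gross - continuum_per_ch * len(peak)

    spectrum = [40, 38, 41, 55, 90, 160, 210, 155, 85, 50, 37, 39, 36]
    print(f"net peak counts ~ {net_peak_area(spectrum, peak_lo=3, peak_hi=9):.0f}")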

MARSSIM, Revision 1                       H-48                                August 2000

-------
                                                                              Appendix H
                          EQUIPMENT SUMMARY TABLES







Table H.1 -   Radiation Detectors with Applications to Alpha Surveys




Table H.2 -   Radiation Detectors with Applications to Beta Surveys




Table H.3 -   Radiation Detectors with Applications to Gamma Surveys




Table H.4 -   Radiation Detectors with Applications to Radon Surveys




Table H.5 -   Systems that Measure Atomic Mass or Emissions
August 2000
H-49
MARSSIM, Revision 1

-------
                              Table H.1  Radiation Detectors with Applications to Alpha Surveys
System
Alpha
spectroscopy
Alpha
scintillation
survey meter
Alpha Track
Detector
Electret ion
chamber
Long range
alpha detector
(LRAD)
Description
A system using silicon diode
surface barrier detectors for
alpha energy identification
and quantification
<1 mg/cm2 window, probe
face area 50 to 100 cm2.
Polycarbonate plastic sheet is
placed in contact with a
contaminated surface and kept
in place
A charged Teflon disk in an
open-faced ion chamber
1m x 1m detector measures
ionization inside the box.
Attached to tractor for
movement. Has location
finder and plots graph of
contamination.
Application
Accurately identifies and
measures the activity of
multiple alpha radionuclides
in a thin extracted sample of
soil, water, or air filters.
Field measurement of
presence or absence of alpha
contamination on nonporous
surfaces, swipes, and air
filters, or on irregular surfaces
if the degree of surface
shielding is known.
Measures gross alpha surface
contamination, soil activity
level, or the depth profile of
contamination
Measures alpha or beta
contamination on surfaces and
in soils, plus gamma radiation
dose or radon concentration
Measures surface
contamination or soil
concentration at grid points
and plots curves of constant
contamination. Intended for
large areas.
Remarks
Sample requires radiochemical
separation or other preparation before
counting
Minimum sensitivity is 10 cpm, or 1
cpm with headphones
Alpha radiation produces holes that
are enlarged chemically. Density of
holes gives a measure of the
radioactivity level.
The type of radiation is determined by
how the electret is employed, e.g., the
unit is kept closed and bagged in
plastic to measure gammas
Alpha detection limit is 20-50
dpm/100 cm2 or 0.4 Bq/g (10 pCi/g).
Equipment
Cost
$10K-$100K
$1000

$4,000-$5,000
$25,000
Measurement
Cost
$250-$400
$5
$5-$25
$8-$25
$80

-------
                              Table H.1 Radiation Detectors with Applications to Alpha Surveys
System
Gas-flow
proportional
counter (field)
Gas-flow
proportional
counter (lab)
Liquid
scintillation
counter (LSC)
Description
A detector through which P 10
gas flows and which measures
alpha and beta radiation. < 1-
10 mg/cm2 window, probe
face area 50 to 100 cm2 for
hand held detectors; up to 600
cm2 if cart mounted
Windowless (internal
proportional) or window <0.1
mg/cm2, probe face area 10 to
20 cm2. May have a second or
guard detector to reduce
background and MDA.
Samples are mixed with LSC
cocktail and the radiation
emitted causes light pulses
with proportional intensity.
Application
Surface scanning, surface
activity measurement, or field
evaluation of swipes. Serves
as a screen to determine if
more nuclide-specific
analyses are needed.
Laboratory measurement of
water, air, and swipe samples
Laboratory analysis of alpha
or beta emitters, including
spectrometry capabilities.
Remarks
Natural radionuclides in samples can
interfere with the detection of other
contaminants. Requires P10 gas
Requires P10 gas. Windowless
detectors can be contaminated.
Highly selective for alpha or beta
radiation by pulse shape
discrimination. Requires LSC
cocktail.
Equipment
Cost
$2K-$4K
$4K-$30K
$20K-$70K
Measurement
Cost
$2-$10/m2
$50
$50-$200

-------
                              Table H.2  Radiation Detectors with Applications to Beta Surveys
System
GM survey meter
with beta
pancake probe
Gas-flow
proportional
counter (field)
Gas-flow
proportional
counter (lab)
Liquid
scintillation
counter (LSC)
Description
Thin 1.4 mg/cm2 window
detector, probe area 10 to 100
cm2
A detector through which P 10
gas flows and which measures
alpha and beta radiation. < 1-
10 mg/cm2 window, probe
face area 50 to 100 cm2
Windowless (internal
proportional) or window <0. 1
mg/cm2, probe face area 10 to
20 cm2. May have a second or
guard detector to reduce
background and MDA.
Samples are mixed with LSC
cocktail and the radiation
emitted causes light pulses
with proportional intensity.
Application
Surface scanning of
personnel, working areas,
equipment, and swipes for
beta contamination.
Laboratory measurement
of swipes when connected
to a scaler.
Surface scanning, surface
activity measurement, or
field evaluation of swipes.
Serves as a screen to
determine if more nuclide-
specific analyses are
needed.
Laboratory measurement
of water, air, and swipe
samples
Laboratory analysis of
alpha and beta emitters,
including spectrometry
capabilities.
Remarks
Relatively high detection limit
making it of limited value in final
status surveys.
Natural radionuclides in samples can
interfere with the detection of other
contaminants. Requires P10 gas, but
can be disconnected for hours.
Requires P10 gas. Windowless
detectors can be contaminated.
Highly selective for alpha and beta
radiation by pulse shape
discrimination. Requires LSC
cocktail.
Equipment
Cost
$400-$1,500
$2K-$4K
$4K-$30K
$20K-$70K
Measurement
Cost
$5-$10
$2-$10/m2
$50
$100-$200

-------
                       Table H.3 Radiation Detectors with Applications to Gamma and X-Ray Surveys
System
GM survey meter
with gamma probe
Pressurized ion
chamber (PIC)
Electret ion
chamber
Hand-held ion
chamber survey
meter
Hand-held
pressurized ion
chamber survey
meter
Sodium Iodide
survey meter
FIDLER (Field
Instrument for
Detection of Low
Energy Radiation)
Description
Thick-walled 30 mg/cm2
detector
A highly accurate
ionization chamber that is
rugged and stable.
Electrostatically charged
disk inside an ion
chamber
Ion chamber for
measuring higher
radiation levels than
typical background.
Ion chamber for
measuring higher
radiation levels than
typical background.
Detectors sizes up to
8"x8". Used in micro R-
meter in smaller sizes.
Thin crystals of Nal or
Csl.
Application
Measure radiation levels
above 0.1 mR/hr.
Excellent for measuring
gamma exposure rate during
site remediation.
Gamma exposure rate
Measures true gamma
exposure rate.
Measures true gamma
exposure rate with more
sensitivity than the
unpressurized ion chamber.
Measures low levels of
environmental radiation.
Scanning of gamma/X
radiation from plutonium and
americium.
Remarks
Its non-linear energy response can
be corrected by using an energy
compensated probe.
Is used in conjunction with
radionuclide identification
equipment.
N/A, rented
Not very useful for site surveys
because of high detection limit
above background levels.
Not very useful for site surveys
because of high detection limit
above background levels.
Its energy response is not linear,
so it should be calibrated for the
energy field it will measure or
have calibration factors developed
by comparison with a PIC for a
specific site.

Cost of Equipment
$400-$1,000
$15K-$50K
included in rental
price
$800-$1,200
$1,000-$1,500
$2K
$6K-$7K
Cost per
Measurement
$5
$50 - $500
$8 - $25
$5
$5
$5
$10-$20

-------
                      Table H.3  Radiation Detectors with Applications to Gamma and X-Ray Surveys
System
Sodium iodide
detector with
multichannel
analyzer (MCA)
Germanium
detector with
multichannel
analyzer (MCA)
Portable
Germanium
Multichannel
Analyzer (MCA)
System
Field x-ray
fluorescence
spectrometer
Thermoluminescence
dosimeters
(TLDs)
Description
Sodium iodide crystal
with a large range of sizes
and shapes, connected to
a photomultiplier tube and
MCA.
Intrinsic germanium
semiconductor in p- or n-
type configuration and
without a beryllium
window.
A portable version of a
laboratory based
germanium detector and
multichannel analyzer.
Uses silicon or
germanium
semiconductor
Crystals that are sensitive
to gamma radiation
Application
Laboratory gamma
spectroscopy to determine the
identity and concentration of
gamma emitting radionuclides
in a sample.
Laboratory gamma
spectroscopy to determine the
identity and concentration of
gamma emitting radionuclides
in a sample.
Excellent during
characterization through final
status survey to identify and
quantify the concentration of
gamma ray emitting
radionuclides and in situ
concentrations of soil and
other media
Determining fractional
abundance of low percentage
metal atoms.
Measure cumulative radiation
dose over a period of days to
months.
Remarks
Sensitive for surface soil or
groundwater contamination.
Analysis programs have difficulty
if sample contains more than a few
isotopes.
Very sensitive for surface soil or
groundwater contamination. Is
especially powerful when more
than one radionuclide is present in
a sample.
Requires a supply of liquid
nitrogen or a mechanical cooling
system, as well as highly trained
operators.

Requires special calibration to
achieve high accuracy and
reproducibility of results.
Cost of Equipment
$6K-$20K
$35K-$150K
$40K
$15K-$75K
$5K-$50K for
reader +
$25-$40 per TLD
Cost per
Measurement
$100 to $200
$100 to $200
$100
$200
$25-$125

-------
Table H.4 Radiation Detectors with Applications to Radon Surveys
System
Large area
activated charcoal
collector
Continuous radon
monitor
Activated
charcoal
adsorption
Electret ion
chamber
Alpha track
detection
Description
A canister containing activated
charcoal is twisted into the
surface and left for 24 hours.
Air pump and scintillation cell
or ionization chamber
Activated charcoal is opened
to the ambient air, then gamma
counted on a gamma
scintillator or in a liquid
scintillation counter.
This is a charged plastic vessel
that can be opened for air to
pass into.
A small piece of special plastic
or film inside a small
container. Damage tracks from
alpha particles are chemically
etched and tracks counted.
Application
Short term radon flux
measurements
Track the real time
concentration of radon
Measure radon
concentration in indoor
air
Measure short-term or
long-term radon
concentration in indoor
air.
Measure indoor or
outdoor radon
concentration in air.
Remarks
The LLD is 0.007 Bq m⁻² s⁻¹
(0.2 pCi m⁻² s⁻¹).
Takes 1 to 4 hours for system to
equilibrate before starting. The LLD is
0.004-0.04 Bq/L (0.1-1.0 pCi/L).
Detector is deployed for 2 to 7 days.
The LLD is 0.007-0.04 Bq/L (0.2 to
1.0 pCi/L).
Must correct reading for gamma
background concentration. Electret is
sensitive to extremes of temperature
and humidity. LLD is 0.007-0.02 Bq/L
(0.2-0.5 pCi/L).
LLD is 0.04 Bq L⁻¹ d⁻¹
(1 pCi L⁻¹ d⁻¹).
Equipment
Cost
N/A, rented
$1K-$5K
$10K-$30K
N/A, rented

Measurement
Cost
$20-$50
including
canister
$80
$5-$30
including
canister if
outsourced.
$8-$25 for rental
$5-$25

-------
                                Table H.5 Systems that Measure Atomic Mass or Emissions
System:  LA-ICP-AES (Laser Ablation Inductively Coupled Plasma Atomic Emissions Spectrometer)
  Description:           Vaporizes and ionizes the surface material, and measures emissions from the
                         resulting atoms.
  Application:           Live time analysis of radioactive U and Th contamination in the field.
  Remarks:               Requires expensive equipment and skilled operators.  LLD is 0.004 Bq/g
                         (0.1 pCi/g) for 232Th and 0.01 Bq/g (0.3 pCi/g) for 238U.
  Cost of Equipment:     >$1,000,000
  Cost per Measurement:  $4,000

System:  LA-ICP-MS (Laser Ablation Inductively Coupled Plasma Mass Spectrometer)
  Description:           Vaporizes and ionizes the surface material, then measures the mass of the
                         resulting atoms.
  Application:           Live time analysis of radioactive U and Th contamination in the field.
  Remarks:               Requires expensive equipment and skilled operators.  More sensitive than
                         LA-ICP-AES.  LLD is 0.6 Bq/g (15 pCi/g) for 230Th.
  Cost of Equipment:     >$1,000,000
  Cost per Measurement:  >$4,000

System:  Chemical speciation laser ablation/mass spectrometer
  Description:           A laser changes the sample into an aerosol that is analyzed with a mass
                         spectrometer.
  Application:           Analyze organic and inorganic species with high sensitivity and specificity.
  Remarks:               Volatilized samples can be carried hundreds of feet to the analysis area.
  Cost of Equipment:     >$1,000,000
  Cost per Measurement:  >$4,000

-------
                               APPENDIX I
               STATISTICAL TABLES AND PROCEDURES
I.1    Normal Distribution

              Table I.1 Cumulative Normal Distribution Function Φ(z)

   z     0.00    0.01    0.02    0.03    0.04    0.05    0.06    0.07    0.08    0.09
  0.00  0.5000  0.5040  0.5080  0.5120  0.5160  0.5199  0.5239  0.5279  0.5319  0.5359
  0.10  0.5398  0.5438  0.5478  0.5517  0.5557  0.5596  0.5636  0.5674  0.5714  0.5753
  0.20  0.5793  0.5832  0.5871  0.5910  0.5948  0.5987  0.6026  0.6064  0.6103  0.6141
  0.30  0.6179  0.6217  0.6255  0.6293  0.6331  0.6368  0.6406  0.6443  0.6480  0.6517
  0.40  0.6554  0.6591  0.6628  0.6664  0.6700  0.6736  0.6772  0.6808  0.6844  0.6879
  0.50  0.6915  0.6950  0.6985  0.7019  0.7054  0.7088  0.7123  0.7157  0.7190  0.7224
  0.60  0.7257  0.7291  0.7324  0.7357  0.7389  0.7422  0.7454  0.7486  0.7517  0.7549
  0.70  0.7580  0.7611  0.7642  0.7673  0.7704  0.7734  0.7764  0.7794  0.7823  0.7852
  0.80  0.7881  0.7910  0.7939  0.7967  0.7995  0.8023  0.8051  0.8078  0.8106  0.8133
  0.90  0.8159  0.8186  0.8212  0.8238  0.8264  0.8289  0.8315  0.8340  0.8365  0.8389
  1.00  0.8413  0.8438  0.8461  0.8485  0.8508  0.8531  0.8554  0.8577  0.8599  0.8621
  1.10  0.8643  0.8665  0.8686  0.8708  0.8729  0.8749  0.8770  0.8790  0.8810  0.8830
  1.20  0.8849  0.8869  0.8888  0.8907  0.8925  0.8944  0.8962  0.8980  0.8997  0.9015
  1.30  0.9032  0.9049  0.9066  0.9082  0.9099  0.9115  0.9131  0.9147  0.9162  0.9177
  1.40  0.9192  0.9207  0.9222  0.9236  0.9251  0.9265  0.9279  0.9292  0.9306  0.9319
  1.50  0.9332  0.9345  0.9357  0.9370  0.9382  0.9394  0.9406  0.9418  0.9429  0.9441
  1.60  0.9452  0.9463  0.9474  0.9484  0.9495  0.9505  0.9515  0.9525  0.9535  0.9545
  1.70  0.9554  0.9564  0.9573  0.9582  0.9591  0.9599  0.9608  0.9616  0.9625  0.9633
  1.80  0.9641  0.9649  0.9656  0.9664  0.9671  0.9678  0.9686  0.9693  0.9699  0.9706
  1.90  0.9713  0.9719  0.9726  0.9732  0.9738  0.9744  0.9750  0.9756  0.9761  0.9767
  2.00  0.9772  0.9778  0.9783  0.9788  0.9793  0.9798  0.9803  0.9808  0.9812  0.9817
  2.10  0.9821  0.9826  0.9830  0.9834  0.9838  0.9842  0.9846  0.9850  0.9854  0.9857
  2.20  0.9861  0.9864  0.9868  0.9871  0.9875  0.9878  0.9881  0.9884  0.9887  0.9890
  2.30  0.9893  0.9896  0.9898  0.9901  0.9904  0.9906  0.9909  0.9911  0.9913  0.9916
  2.40  0.9918  0.9920  0.9922  0.9925  0.9927  0.9929  0.9931  0.9932  0.9934  0.9936
  2.50  0.9938  0.9940  0.9941  0.9943  0.9945  0.9946  0.9948  0.9949  0.9951  0.9952
  2.60  0.9953  0.9955  0.9956  0.9957  0.9959  0.9960  0.9961  0.9962  0.9963  0.9964
  2.70  0.9965  0.9966  0.9967  0.9968  0.9969  0.9970  0.9971  0.9972  0.9973  0.9974
  2.80  0.9974  0.9975  0.9976  0.9977  0.9977  0.9978  0.9979  0.9979  0.9980  0.9981
  2.90  0.9981  0.9982  0.9982  0.9983  0.9984  0.9984  0.9985  0.9985  0.9986  0.9986
  3.00  0.9987  0.9987  0.9987  0.9988  0.9988  0.9989  0.9989  0.9989  0.9990  0.9990
  3.10  0.9990  0.9991  0.9991  0.9991  0.9992  0.9992  0.9992  0.9992  0.9993  0.9993
  3.20  0.9993  0.9993  0.9994  0.9994  0.9994  0.9994  0.9994  0.9995  0.9995  0.9995
  3.30  0.9995  0.9995  0.9995  0.9996  0.9996  0.9996  0.9996  0.9996  0.9996  0.9997
  3.40  0.9997  0.9997  0.9997  0.9997  0.9997  0.9997  0.9997  0.9997  0.9997  0.9998

Negative values of z can be obtained from the relationship Φ(-z) = 1 - Φ(z).
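
Values of Φ(z), including those not listed above, can be generated directly from the error function
available in most numerical libraries, as in the short Python sketch below.

    # Sketch: the cumulative standard normal distribution function of Table I.1,
    # and the relation Phi(-z) = 1 - Phi(z) noted above.

    import math

    def phi(z):
        """Cumulative standard normal distribution function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    print(f"Phi(1.645) = {phi(1.645):.4f}")   # ~0.95, used for alpha = 0.05
    print(f"Phi(-1.00) = {phi(-1.00):.4f}")   # equals 1 - Phi(1.00) = 1 - 0.8413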
August 2000
1-1
MARSSIM, Revision 1

-------
Appendix I
 I.2    Sample Sizes for Statistical Tests
                         Table I.2a Sample Sizes for Sign Test
               (Number of measurements to be performed in each survey unit)

Δ/σ
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2.0
2.5
3.0
(α,β) or (β,α)
0.01
0.01
4095
1035
468
270
178
129
99
80
66
57
50
45
41
38
35
34
33
32
30
29
28
27
0.01
0.025
3476
879
398
230
152
110
83
68
57
48
42
38
35
33
30
29
28
27
26
26
23
23
0.01
0.05
2984
754
341
197
130
94
72
58
48
41
36
33
30
28
27
24
24
23
22
22
21
20
0.01
0.1
2463
623
282
162
107
77
59
48
40
34
30
27
26
23
22
21
20
20
18
18
17
17
0.01
0.25
1704
431
195
113
75
54
41
34
28
24
21
20
17
16
15
15
14
14
14
12
12
12
0.025
0.025
2907
735
333
192
126
92
70
57
47
40
35
32
29
27
26
24
23
22
22
21
20
20
0.025
0.05
2459
622
281
162
107
77
59
48
40
34
30
27
24
23
22
21
20
20
18
18
17
17
0.025
0.1
1989
503
227
131
87
63
48
39
33
28
24
22
21
18
17
17
16
16
15
15
14
14
0.025
0.25
1313
333
150
87
58
42
33
26
22
18
17
15
14
12
12
11
11
11
10
10
10
9
0.05
0.05
2048
518
234
136
89
65
50
40
34
29
26
23
21
20
18
17
17
16
16
15
15
14
0.05
0.1
1620
410
185
107
71
52
40
32
27
23
21
18
17
16
15
14
14
12
12
12
11
11
0.05
0.25
1018
258
117
68
45
33
26
21
17
15
14
12
11
10
10
9
9
9
9
8
8
8
0.1
0.1
1244
315
143
82
54
40
30
24
21
18
16
15
14
12
11
11
10
10
10
10
9
9
0.1
0.25
725
184
83
48
33
23
18
15
12
11
10
9
8
8
8
6
6
6
6
6
5
5
0.25
0.25
345
88
40
23
16
11
9
8
6
5
5
5
4
4
4
4
4
4
4
3
3
3
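
The entries in Table I.2a can be reproduced, to within rounding, from the normal-approximation
sample-size formula for the Sign test given in Chapter 5, increased by 20 percent to allow for lost or
unusable data.  The Python sketch below shows that calculation; the rounding convention used here is
inferred, so individual results may differ from the table by a sample or two.

    # Sketch: Sign test sample size, N = (z_(1-alpha) + z_(1-beta))^2 / (4 (SignP - 0.5)^2)
    # with SignP = Phi(Delta/sigma), then increased by 20 percent.

    import math

    def phi(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def z_quantile(p):
        """Bisection for the p-th quantile of the standard normal distribution."""
        lo, hi = -10.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if phi(mid) < p else (lo, mid)
        return (lo + hi) / 2.0

    def sign_test_n(delta_over_sigma, alpha, beta):
        sign_p = phi(delta_over_sigma)
        n = (z_quantile(1 - alpha) + z_quantile(1 - beta)) ** 2 / (4 * (sign_p - 0.5) ** 2)
        return math.ceil(1.2 * math.ceil(n))   # inferred rounding convention

    print(sign_test_n(1.0, 0.05, 0.05))   # Table I.2a lists 29 for this case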
 MARSSIM, Revision 1
1-2
August 2000

-------
                                                                          Appendix I
                 Table I.2b Sample Sizes for Wilcoxon Rank Sum Test
    (Number of measurements to be performed in the reference area and in each survey unit)

Δ/σ
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2.0
2.25
2.5
2.75
3.0
3.5
4.0
(α,β) or (β,α)
0.01
0.01
5452
1370
614
350
227
161
121
95
77
64
55
48
43
38
35
32
30
28
26
25
22
21
20
19
18
18
0.01
0.025
4627
1163
521
297
193
137
103
81
66
55
47
41
36
32
30
27
25
24
22
21
19
18
17
16
16
15
0.01
0.05
3972
998
448
255
166
117
88
69
56
47
40
35
31
28
25
23
22
20
19
18
16
15
15
14
13
13
0.01
0.1
3278
824
370
211
137
97
73
57
47
39
33
29
26
23
21
19
18
17
16
15
14
13
12
12
11
11
0.01
0.25
2268
570
256
146
95
67
51
40
32
27
23
20
18
16
15
14
13
12
11
11
10
9
9
8
8
8
0.025
0.025
3870
973
436
248
162
114
86
68
55
46
39
34
30
27
25
23
21
20
19
18
16
15
14
14
13
13
0.025
0.05
3273
823
369
210
137
97
73
57
46
39
33
29
26
23
21
19
18
17
16
15
14
13
12
12
11
11
0.025
0.1
2646
665
298
170
111
78
59
46
38
32
27
24
21
19
17
16
15
14
13
12
11
10
10
10
9
9
0.025
0.25
1748
440
197
112
73
52
39
31
25
21
18
16
14
13
11
11
10
9
9
8
8
7
7
6
6
6
0.05
0.05
2726
685
307
175
114
81
61
48
39
32
28
24
22
19
18
16
15
14
13
13
11
11
10
10
9
9
0.05
0.1
2157
542
243
139
90
64
48
38
31
26
22
19
17
15
14
13
12
11
11
10
9
9
8
8
8
7
0.05
0.25
1355
341
153
87
57
40
30
24
20
16
14
12
11
10
9
8
8
7
7
7
6
6
5
5
5
5
0.1
0.1
1655
416
187
106
69
49
37
29
24
20
17
15
13
12
11
10
9
9
8
8
7
7
6
6
6
6
0.1
0.25
964
243
109
62
41
29
22
17
14
12
10
9
8
7
7
6
6
5
5
5
4
4
4
4
4
4
0.25
0.25
459
116
52
30
20
14
11
8
7
6
5
4
4
4
3
3
3
3
3
3
2
2
2
2
2
2
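
Table I.2b likewise follows, to within rounding, from the WRS sample-size formula described in
Chapter 5: compute the total number of measurements, increase it by 20 percent, and split it evenly
between the reference area and the survey unit.  The Python sketch below illustrates this; again, the
rounding convention is inferred.

    # Sketch: WRS sample size per area.  N_total = (z_(1-alpha) + z_(1-beta))^2 / (3 (Pr - 0.5)^2)
    # with Pr = Phi(Delta / (sigma*sqrt(2))); add 20 percent and split between the two areas.

    import math

    def phi(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def z_quantile(p):
        """Bisection for the p-th quantile of the standard normal distribution."""
        lo, hi = -10.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if phi(mid) < p else (lo, mid)
        return (lo + hi) / 2.0

    def wrs_n_per_area(delta_over_sigma, alpha, beta):
        pr = phi(delta_over_sigma / math.sqrt(2.0))
        n_total = (z_quantile(1 - alpha) + z_quantile(1 - beta)) ** 2 / (3 * (pr - 0.5) ** 2)
        return math.ceil(1.2 * n_total / 2.0)

    print(wrs_n_per_area(1.0, 0.05, 0.05))   # Table I.2b lists 32 for this case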
August 2000
1-3
MARSSIM, Revision 1

-------
Appendix I
I.3    Critical Values for the Sign Test
                 Table I.3  Critical Values for the Sign Test Statistic S+
     N
     4
     5
     6
     7
     8
     9
     10
     11
     12
     13
     14
     15
     16
     17
     18
     19
     20
     21
     22
     23
     24
     25
     26
     27
     28
     29
     30
Alpha
0.005 0.01 0.025 0.05 0.1 0.2 0.3 0.4 0.5
4
5
6
7
7
8
9
10
10
11
12
12
13
14
14
15
16
16
17
18
18
19
19
20
21
21
22
4
5
6
6
7
8
9
9
10
11
11
12
13
13
14
14
15
16
16
17
18
18
19
19
20
21
21
4
5
5
6
7
7
8
9
9
10
11
11
12
12
13
14
14
15
16
16
17
17
18
19
19
20
20
4
4
5
6
6
7
8
8
9
9
10
11
11
12
12
13
14
14
15
15
16
17
17
18
18
19
19
3
4
5
5
6
6
7
8
8
9
9
10
11
11
12
12
13
13
14
15
15
16
16
17
17
18
19
3
3
4
5
5
6
6
7
7
8
9
9
10
10
11
11
12
12
13
14
14
15
15
16
16
17
17
3
3
4
4
5
5
6
6
7
7
8
9
9
10
10
11
11
12
12
13
13
14
14
15
15
16
16
2
3
3
4
4
5
5
6
6
7
7
8
9
9
10
10
11
11
12
12
13
13
14
14
15
15
16
2
2
3
3
4
4
5
5
6
6
7
7
8
8
9
9
10
10
11
11
12
12
13
13
14
14
15
MARSSIM, Revision 1
1-4
August 2000

-------
                                                                                 Appendix I
            Table I.3 Critical Values for the Sign Test Statistic S+ (continued)
Alpha
0.005 0.01 0.025 0.05 0.1 0.2 0.3 0.4 0.5
23
23
24
24
25
26
26
27
27
28
29
29
30
30
31
32
32
33
33
34
22
23
23
24
24
25
26
26
27
27
28
28
29
30
30
31
31
32
33
33
21
22
22
23
23
24
24
25
26
26
27
27
28
28
29
30
30
31
31
32
20
21
21
22
22
23
23
24
25
25
26
26
27
27
28
29
29
30
30
31
19
20
20
21
21
22
22
23
23
24
25
25
26
26
27
27
28
28
29
30
18
18
19
19
20
21
21
22
22
23
23
24
24
25
25
26
26
27
27
28
17
17
18
19
19
20
20
21
21
22
22
23
23
24
24
25
25
26
26
27
16
17
17
18
18
19
19
20
20
21
21
22
22
23
23
24
24
25
25
26
15
16
16
17
17
18
18
19
19
20
20
21
21
22
22
23
23
24
24
25
     N
     31
     32
     33
     34
     35
     36
     37
     38
     39
     40
     41
     42
     43
     44
     45
     46
     47
     48
     49
     50
For N greater than 50, the table (critical) value can be calculated from:

                                    N/2 + (z/2)√N

where z is the (1-α) percentile of a standard normal distribution, which can be found on page I-10 or
on page 5-28 in Table 5.2.
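
As a check on the expression above, the Python sketch below evaluates it for N = 50, α = 0.05, where
the tabulated critical value is 31.

    # Sketch: large-N critical value for the Sign test statistic S+
    # (normal approximation to a binomial with p = 0.5).

    import math

    Z = {0.001: 3.09, 0.005: 2.575, 0.01: 2.326, 0.025: 1.960, 0.05: 1.645, 0.1: 1.282}

    def sign_test_critical_value(n, alpha):
        return n / 2.0 + Z[alpha] * math.sqrt(n) / 2.0

    print(f"{sign_test_critical_value(50, 0.05):.1f}")   # ~30.8; Table I.3 lists 31 for N = 50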
August 2000
                               1-5
MARSSIM, Revision 1

-------
Appendix I


I.4     Critical Values for the WRS Test


                           Table I.4  Critical Values for the WRS Test

m is the number of reference area samples and n is the number of survey unit samples.

        n=     2   345   678   9   10  11  12  13 14  15  16  17  18   19  20
m = 2    a=0.001 7   9   11  13   15   17  19  21  23  25  27  29 31  33  35  37  39   41  43
        a=0.005 7   9   11  13   15   17  19  21  23  25  27  29 31  33  35  37  39   40  42
        a=0.01  7   9   11  13   15   17  19  21  23  25  27  28 30  32  34  36  38   39  41
        a=0.025 7   9   11  13   15   17  18  20  22  23  25  27 29  31  33  34  36   38  40
        a=0.05  7   9   11  12   14   16  17  19  21  23  24  26 27  29  31  33  34   36  38
        a=0.1   7   8   10  11   13   15  16  18  19  21  22  24 26  27  29  30  32   33  35
        n=     2   345   678   9   10  11  12   13 14  15  16  17  18   19  20
m=3    a=0.001 12  15  18  21   24  27  30  33  36  39  42   45 48  51  54  56  59   62  65
        a=0.005 12  15  18  21   24  27  30  32  35  38  40   43 46  48  51  54  57   59  62
        a=0.01  12  15  18  21   24  26  29  31  34  37  39   42 45  47  50  52  55   58  60
        a=0.025 12  15  18  20   22  25  27  30  32  35  37   40 42  45  47  50  52   55  57
        a=0.05  12  14  17  19   21  24  26  28  31  33  36   38 40  43  45  47  50   52  54
        a=0.1   11  13  16  18   20  22  24  27  29  31  33   35 37  40  42  44  46   48  50
        n=     2   345   678   9   10  11  12   13 14  15  16  17  18   19  20
m = 4    a=0.001 18  22  26  30  34   38  42  46  49  53  57   60 64  68  71  75  78   82  86
        a=0.005 18  22  26  30  33   37  40  44  47  51  54   58 61  64  68  71  75   78  81
        a=0.01  18  22  26  29  32   36  39  42  46  49  52   56 59  62  66  69  72   76  79
        a=0.025 18  22  25  28  31   34  37  41  44  47  50   53 56  59  62  66  69   72  75
        a=0.05  18  21  24  27  30   33  36  39  42  45  48   51 54  57  59  62  65   68  71
        a=0.1   17  20  22  25  28   31  34  36  39  42  45   48 50  53  56  59  61   64  67
        n=     2   345   678   9   10  11  12   13 14  15  16  17  18   19  20
m=5    a=0.001 25  30  35  40  45   50  54  58  63  67  72   76 81  85  89  94  98   102 107
        a=0.005 25  30  35  39  43   48  52  56  60  64  68   72 77  81  85  89  93   97  101
        a=0.01  25  30  34  38  42   46  50  54  58  62  66   70 74  78  82  86  90   94   98
        a=0.025 25  29  33  37  41   44  48  52  56  60  63   67 71  75  79  82  86   90   94
        a=0.05  24  28  32  35  39   43  46  50  53  57  61   64 68  71  75  79  82   86   89
        a=0.1   23  27  30  34  37   41  44  47  51  54  57   61 64  67  71  74  77   81   84
        n=     2   345   678   9   10  11  12   13 14  15  16  17  18   19  20
m=6    a=0.001 33  39  45  51   57  63  67  72  77  82  88   93 98  103 108 113 118  123 128
        a=0.005 33  39  44  49   54  59  64  69  74  79  83   88 93  98  103 107 112  117 122
        a=0.01  33  39  43  48   53  58  62  67  72  77  81   86 91  95  100 104 109  114 118
        a=0.025 33  37  42  47   51  56  60  64  69  73  78   82 87  91  95  100 104  109 113
        a=0.05  32  36  41  45   49  54  58  62  66  70  75   79 83  87  91  96  100  104 108
        a=0.1   31  35  39  43   47  51  55  59  63  67  71   75 79  83  87  91  94   98  102
MARSSIM, Revision 1                            1-6                                      August 2000

-------
m=7
m=8
m=9
                                                                                           Appendix I


                    Table I.4 Critical Values for the WRS Test (continued)
n =

-------
Appendix I


                    Table I.4 Critical Values for the WRS Test (continued)

        n=      2
m=12
n =

-------
                                                                                            Appendix I
                    Table I.4 Critical Values for the WRS Test (continued)

        n=      2   3   4   5    6   7   8   9   10   11  12   13  14  15  16  17  18  19  20
m= 17  
-------
Appendix I


Reject the null hypothesis if the test statistic (Wr) is greater than the table (critical) value.
For n or m greater than 20, the table (critical) value can be calculated from:

        Wcrit = m(n+m+1)/2 + z √[ nm(n+m+1)/12 ]                                             (I.1)

if there are few or no ties, and from

        Wcrit = m(n+m+1)/2 + z √{ (nm/12) [ (n+m+1) - Σ tj(tj² - 1) / ((n+m)(n+m-1)) ] }     (I.2)

if there are many ties, where the sum runs over j = 1 to g, g is the number of groups of tied
measurements, and tj is the number of tied measurements in the jth group. z is the (1-α) percentile of
a standard normal distribution, which can be found in the following table:

          α         z
        0.001     3.09
        0.005     2.575
        0.01      2.326
        0.025     1.960
        0.05      1.645
        0.1       1.282

Other values can be found in Table I.1.
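
The Python sketch below evaluates equations I.1 and I.2 above; with no ties it reproduces, for
example, the tabulated critical value of 108 for m = 6, n = 20, α = 0.05.

    # Sketch: large-sample critical value for the WRS test statistic Wr
    # (sum of the adjusted reference-area ranks).

    import math

    Z = {0.001: 3.09, 0.005: 2.575, 0.01: 2.326, 0.025: 1.960, 0.05: 1.645, 0.1: 1.282}

    def wrs_critical_value(n, m, alpha, tie_groups=()):
        """n = survey unit samples, m = reference area samples, tie_groups = sizes of tied groups."""
        mean = m * (n + m + 1) / 2.0
        variance = n * m * (n + m + 1) / 12.0
        if tie_groups:                      # equation I.2 tie correction
            correction = sum(t * (t * t - 1) for t in tie_groups)
            variance = n * m / 12.0 * ((n + m + 1) - correction / ((n + m) * (n + m - 1)))
        return mean + Z[alpha] * math.sqrt(variance)

    print(f"{wrs_critical_value(20, 6, 0.05):.1f}")   # ~108, as tabulated for m=6, n=20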
MARSSIM, Revision 1
                       I- 10
                                                                                August 2000

-------
                                                                                 Appendix I
1.5    Probability of Detecting an Elevated Area

        Table I.5  Risk that an Elevated Area with Length L/G and Shape S will not be Detected
     and the Area (%) of the Elevated Area Relative to a Triangular Sample Grid Area of 0.866 G²

L/G
0.01
0.02
0.03
0.04
0.05
0.06
0.07
0.08
0.09
0.10
0.11
0.12
0.13
0.14
0.15
0.16
0.17
0.18
0.19
0.20
0.21
0.22
0.23
0.24
0.25
0.26
0.27
0.28
0.29
0.30
0.10
Risk
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
0.99
0.99
0.99
0.99
0.99
0.99
0.99
0.99
0.99
0.98
0.98
0.98
0.98
0.98
0.98
0.97
0.97
0.97
0.97
Area
<1%
<1%
<1%
<1%
<1%
<1%
<1%
<1%
<1%
<1%
<1%
1%
1%
1%
1%
1%
1%
1%
1%
1%
2%
2%
2%
2%
2%
2%
3%
3%
3%
3%
0.20
Risk
1.00
1.00
1.00
1.00
1.00
1.00
1.00
1.00
0.99
0.99
0.99
0.99
0.99
0.99
0.98
0.98
0.98
0.98
0.97
0.97
0.97
0.96
0.96
0.96
0.95
0.95
0.95
0.94
0.94
0.93
Area
<1%
<1%
<1%
<1%
<1%
<1%
<1%
<1%
1%
1%
1%
1%
1%
1%
2%
2%
2%
2%
3%
3%
3%
4%
4%
4%
5%
5%
5%
6%
6%
7%
0.30
Risk
1.00
1.00
1.00
1.00
1.00
1.00
0.99
0.99
0.99
0.99
0.99
0.98
0.98
0.98
0.98
0.97
0.97
0.96
0.96
0.96
0.95
0.95
0.94
0.94
0.93
0.93
0.92
0.91
0.91
0.90
Area
<1%
<1%
<1%
<1%
<1%
<1%
1%
1%
1%
1%
1%
2%
2%
2%
2%
3%
3%
4%
4%
4%
5%
5%
6%
6%
7%
7%
8%
9%
9%
10%
0.40
Risk
1.00
1.00
1.00
1.00
1.00
0.99
0.99
0.99
0.99
0.99
0.98
0.98
0.98
0.97
0.97
0.96
0.96
0.95
0.95
0.94
0.94
0.93
0.92
0.92
0.91
0.90
0.89
0.89
0.88
0.87
Area
<1%
<1%
<1%
<1%
<1%
<1%
<1%
<1%
1%
1%
2%
2%
2%
3%
3%
4%
4%
5%
5%
6%
6%
7%
8%
8%
9%
10%
11%
11%
12%
13%
Shape Parameter, S
0.50
Risk
1.00
1.00
1.00
1.00
1.00
0.99
0.99
0.99
0.99
0.98
0.98
0.97
0.97
0.96
0.96
0.95
0.95
0.94
0.93
0.93
0.92
0.91
0.90
0.90
0.89
0.88
0.87
0.86
0.85
0.84
Area
<1%
<1%
<1%
<1%
<1%
1%
1%
1%
1%
2%
2%
3%
3%
4%
4%
5%
5%
6%
7%
7%
8%
9%
10%
10%
11%
12%
13%
14%
15%
16%
0.60
Risk
1.00
1.00
1.00
1.00
0.99
0.99
0.99
0.99
0.98
0.98
0.97
0.97
0.96
0.96
0.95
0.94
0.94
0.93
0.92
0.91
0.90
0.89
0.88
0.87
0.86
0.85
0.84
0.83
0.82
0.80
Area
<1%
<1%
<1%
<1%
1%
1%
1%
1%
2%
2%
3%
3%
4%
4%
5%
6%
6%
7%
8%
9%
10%
11%
12%
13%
14%
15%
16%
17%
18%
20%
0.70
Risk
1.00
1.00
1.00
1.00
0.99
0.99
0.99
0.98
0.98
0.97
0.97
0.96
0.96
0.95
0.94
0.94
0.93
0.92
0.91
0.90
0.89
0.88
0.87
0.85
0.84
0.83
0.81
0.80
0.79
0.77
Area
<1%
<1%
<1%
<1%
1%
1%
1%
2%
2%
3%
3%
4%
4%
5%
6%
7%
7%
8%
9%
10%
11%
12%
13%
15%
16%
17%
19%
20%
21%
23%
0.80
Risk
1.00
1.00
1.00
1.00
0.99
0.99
0.99
0.98
0.98
0.97
0.96
0.96
0.95
0.94
0.93
0.93
0.92
0.91
0.90
0.88
0.87
0.86
0.85
0.83
0.82
0.80
0.79
0.77
0.76
0.74
Area
<1%
<1%
<1%
<1%
1%
1%
1%
2%
2%
3%
4%
4%
5%
6%
7%
7%
8%
9%
10%
12%
13%
14%
15%
17%
18%
20%
21%
23%
24%
26%
0.90
Risk
1.00
1.00
1.00
0.99
0.99
0.99
0.98
0.98
0.97
0.97
0.96
0.95
0.94
0.94
0.93
0.92
0.91
0.89
0.88
0.87
0.86
0.84
0.83
0.81
0.80
0.78
0.76
0.74
0.73
0.71
Area
<1%
<1%
<1%
1%
1%
1%
2%
2%
3%
3%
4%
5%
6%
6%
7%
8%
9%
11%
12%
13%
14%
16%
17%
19%
20%
22%
24%
26%
27%
29%
1.00
Risk
1.00
1.00
1.00
0.99
0.99
0.99
0.98
0.98
0.97
0.96
0.96
0.95
0.94
0.93
0.92
0.91
0.90
0.88
0.87
0.85
0.84
0.82
0.81
0.79
0.77
0.75
0.74
0.72
0.69
0.67
Area
<1%
<1%
<1%
1%
1%
1%
2%
2%
3%
4%
4%
5%
6%
7%
8%
9%
10%
12%
13%
15%
16%
18%
19%
21%
23%
25%
26%
28%
31%
33%
Guidance for using Table I.5 can be found in Gilbert 1987 and EPA 1989a.
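
The Area (%) column of Table I.5 is consistent with treating the elevated area as an ellipse with
semi-major axis L and semi-minor axis S·L, compared with the triangular grid cell area 0.866 G².
The Python sketch below evaluates that ratio; this geometric reading is an interpretation consistent
with the tabulated values rather than a statement taken from the table itself.

    # Sketch: area of an elliptical elevated area (semi-axes L and S*L) relative
    # to a triangular sample grid cell of area 0.866*G^2.

    import math

    def relative_area_percent(l_over_g, shape_s):
        return 100.0 * math.pi * shape_s * l_over_g ** 2 / 0.866

    print(f"{relative_area_percent(0.30, 1.00):.0f}%")   # table lists 33% for L/G=0.30, S=1.00
    print(f"{relative_area_percent(0.50, 0.50):.0f}%")   # table lists 45% for L/G=0.50, S=0.50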
   August 2000
1-11
MARSSIM, Revision 1

-------
Appendix I
      Table I.5 Risk that an Elevated Area with Length L/G and Shape S will not be Detected
   and the Area (%) of the Elevated Area Relative to a Triangular Sample Grid Area of 0.866 G²
                                          (continued)

L/G
0.31
0.32
0.33
0.34
0.35
0.36
0.37
0.38
0.39
0.40
0.41
0.42
0.43
0.44
0.45
0.46
0.47
0.48
0.49
0.50
0.51
0.52
0.53
0.54
0.55
0.56
0.57
0.58
0.59
0.60
0.61
0.62
0.63
0.64
0.65
0.10
Risk
0.97
0.96
0.96
0.96
0.96
0.95
0.95
0.95
0.94
0.94
0.94
0.94
0.93
0.93
0.93
0.92
0.92
0.92
0.91
0.91
0.91
0.90
0.90
0.89
0.89
0.89
0.88
0.88
0.87
0.87
0.87
0.86
0.86
0.85
0.85
Area
3%
4%
4%
4%
4%
5%
5%
5%
6%
6%
6%
6%
7%
7%
7%
8%
8%
8%
9%
9%
9%
10%
10%
11%
11%
11%
12%
12%
13%
13%
13%
14%
14%
15%
15%
0.20
Risk
0.93
0.93
0.92
0.92
0.91
0.91
0.90
0.90
0.89
0.88
0.88
0.87
0.87
0.86
0.85
0.85
0.84
0.83
0.83
0.82
0.81
0.80
0.80
0.79
0.78
0.77
0.77
0.76
0.75
0.74
0.73
0.73
0.72
0.71
0.70
Area
7%
7%
8%
8%
9%
9%
10%
10%
11%
12%
12%
13%
13%
14%
15%
15%
16%
17%
17%
18%
19%
20%
20%
21%
22%
23%
24%
24%
25%
26%
27%
28%
29%
30%
31%
0.30
Risk
0.90
0.89
0.88
0.87
0.87
0.86
0.85
0.84
0.83
0.83
0.82
0.81
0.80
0.79
0.78
0.77
0.76
0.75
0.74
0.73
0.72
0.71
0.70
0.68
0.67
0.66
0.65
0.64
0.63
0.62
0.60
0.59
0.58
0.57
0.56
Area
10%
11%
12%
13%
13%
14%
15%
16%
17%
17%
18%
19%
20%
21%
22%
23%
24%
25%
26%
27%
28%
29%
31%
32%
33%
34%
35%
37%
38%
39%
40%
42%
43%
45%
46%
0.40
Risk
0.86
0.85
0.84
0.83
0.82
0.81
0.80
0.79
0.78
0.77
0.76
0.74
0.73
0.72
0.71
0.69
0.68
0.67
0.65
0.64
0.62
0.61
0.59
0.58
0.56
0.55
0.54
0.52
0.51
0.49
0.48
0.46
0.45
0.43
0.42
Area
14%
15%
16%
17%
18%
19%
20%
21%
22%
23%
24%
26%
27%
28%
29%
31%
32%
33%
35%
36%
38%
39%
41%
42%
44%
46%
47%
49%
51%
52%
54%
56%
58%
59%
61%
Shape Parameter, S
0.50
Risk
0.83
0.81
0.80
0.79
0.78
0.76
0.75
0.74
0.72
0.71
0.70
0.68
0.66
0.65
0.63
0.62
0.60
0.58
0.56
0.55
0.53
0.51
0.49
0.47
0.46
0.44
0.42
0.40
0.39
0.37
0.35
0.34
0.32
0.30
0.29
Area
17%
19%
20%
21%
22%
24%
25%
26%
28%
29%
30%
32%
34%
35%
37%
38%
40%
42%
44%
45%
47%
49%
51%
53%
55%
57%
59%
61%
63%
65%
67%
70%
72%
74%
77%
0.60
Risk
0.79
0.78
0.76
0.75
0.73
0.72
0.70
0.69
0.67
0.65
0.63
0.62
0.60
0.58
0.56
0.54
0.52
0.50
0.48
0.46
0.43
0.41
0.39
0.37
0.35
0.33
0.31
0.29
0.27
0.25
0.23
0.21
0.20
0.18
0.16
Area
21%
22%
24%
25%
27%
28%
30%
31%
33%
35%
37%
38%
40%
42%
44%
46%
48%
50%
52%
54%
57%
59%
61%
63%
66%
68%
71%
73%
76%
78%
81%
84%
86%
89%
92%
0.70
Risk
0.76
0.74
0.72
0.71
0.69
0.67
0.65
0.63
0.61
0.59
0.57
0.55
0.53
0.51
0.49
0.46
0.44
0.41
0.39
0.37
0.34
0.32
0.29
0.27
0.24
0.22
0.20
0.18
0.16
0.14
0.12
0.10
0.09
0.07
0.06
Area
24%
26%
28%
29%
31%
33%
35%
37%
39%
41%
43%
45%
47%
49%
51%
54%
56%
59%
61%
63%
66%
69%
71%
74%
77%
80%
83%
85%
88%
91%
94%
98%
101%
104%
107%
0.80
Risk
0.72
0.70
0.68
0.66
0.64
0.62
0.60
0.58
0.56
0.54
0.51
0.49
0.46
0.44
0.41
0.39
0.36
0.33
0.30
0.27
0.25
0.22
0.19
0.17
0.14
0.12
0.10
0.08
0.06
0.04
0.03
0.02
0.01
0.00
0.00
Area
28%
30%
32%
34%
36%
38%
40%
42%
44%
46%
49%
51%
54%
56%
59%
61%
64%
67%
70%
73%
75%
78%
82%
85%
88%
91%
94%
98%
101%
104%
108%
112%
115%
119%
123%
0.90
Risk
0.69
0.67
0.64
0.62
0.60
0.58
0.55
0.53
0.50
0.48
0.45
0.42
0.40
0.37
0.34
0.31
0.28
0.25
0.22
0.18
0.15
0.13
0.10
0.08
0.06
0.04
0.02
0.01
0.00
0.00
0.00
0.00
0.00
0.00
0.00
Area
31%
33%
36%
38%
40%
42%
45%
47%
50%
52%
55%
58%
60%
63%
66%
69%
72%
75%
78%
82%
85%
88%
92%
95%
99%
102%
106%
110%
114%
118%
121%
126%
130%
134%
138%
1.00
Risk
0.65
0.63
0.61
0.58
0.56
0.53
0.50
0.48
0.45
0.42
0.39
0.36
0.33
0.30
0.27
0.23
0.20
0.16
0.13
0.09
0.07
0.05
0.03
0.02
0.01
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
Area
35%
37%
40%
42%
44%
47%
50%
52%
55%
58%
61%
64%
67%
70%
73%
77%
80%
84%
87%
91%
94%
98%
102%
106%
110%
114%
118%
122%
126%
131%
135%
139%
144%
149%
153%
MARSSIM, Revision 1
                        1-12
August 2000

-------
                                                                                Appendix I
     Table 1.5  Risk that an Elevated Area with Length L/G and Shape S will not be Detected
     and the Area (%) of the Elevated Area Relative to a Triangular Sample Grid Area of 0.866G2
                                        (continued)

L/G
0.66
0.67
0.68
0.69
0.70
0.71
0.72
0.73
0.74
0.75
0.76
0.77
0.78
0.79
0.80
0.81
0.82
0.83
0.84
0.85
0.86
0.87
0.88
0.89
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
0.10
Risk
0.84
0.84
0.84
0.83
0.83
0.82
0.82
0.81
0.81
0.80
0.80
0.79
0.79
0.78
0.78
0.77
0.77
0.76
0.76
0.75
0.74
0.74
0.73
0.73
0.72
0.72
0.71
0.71
0.70
0.69
0.69
0.68
0.68
0.67
0.67
Area
16%
16%
17%
17%
18%
18%
19%
19%
20%
20%
21%
22%
22%
23%
23%
24%
24%
25%
26%
26%
27%
27%
28%
29%
29%
30%
31%
31%
32%
33%
33%
34%
35%
36%
36%
0.20
Risk
0.69
0.68
0.68
0.67
0.66
0.65
0.64
0.63
0.62
0.61
0.61
0.60
0.59
0.58
0.57
0.56
0.55
0.54
0.53
0.52
0.51
0.50
0.50
0.49
0.48
0.47
0.46
0.45
0.44
0.43
0.42
0.41
0.40
0.40
0.39
Area
32%
33%
34%
35%
36%
37%
38%
39%
40%
41%
42%
43%
44%
45%
46%
48%
49%
50%
51%
52%
54%
55%
56%
57%
59%
60%
61%
63%
64%
65%
67%
68%
70%
71%
73%
0.30
Risk
0.55
0.53
0.52
0.51
0.50
0.49
0.48
0.46
0.45
0.44
0.43
0.42
0.40
0.39
0.38
0.37
0.36
0.35
0.33
0.32
0.31
0.30
0.29
0.28
0.27
0.26
0.25
0.24
0.23
0.22
0.21
0.20
0.19
0.18
0.17
Area
47%
49%
50%
52%
53%
55%
56%
58%
60%
61%
63%
65%
66%
68%
70%
71%
73%
75%
77%
79%
80%
82%
84%
86%
88%
90%
92%
94%
96%
98%
100%
102%
105%
107%
109%
0.40
Risk
0.40
0.39
0.38
0.36
0.35
0.33
0.32
0.31
0.29
0.28
0.27
0.25
0.24
0.23
0.22
0.20
0.19
0.18
0.17
0.16
0.14
0.13
0.12
0.11
0.10
0.10
0.09
0.08
0.07
0.07
0.06
0.05
0.05
0.04
0.04
Area
63%
65%
67%
69%
71%
73%
75%
77%
79%
82%
84%
86%
88%
91%
93%
95%
98%
100%
102%
105%
107%
110%
112%
115%
118%
120%
123%
126%
128%
131%
134%
137%
139%
142%
145%
Shape Parameter, S
0.50
Risk
0.27
0.25
0.24
0.22
0.21
0.20
0.18
0.17
0.15
0.14
0.13
0.12
0.10
0.09
0.08
0.07
0.06
0.05
0.05
0.04
0.03
0.02
0.02
0.01
0.01
0.01
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
Area
79%
81%
84%
86%
89%
91%
94%
97%
99%
102%
105%
108%
110%
113%
116%
119%
122%
125%
128%
131%
134%
137%
140%
144%
147%
150%
154%
157%
160%
164%
167%
171%
174%
178%
181%
0.60
Risk
0.15
0.13
0.12
0.10
0.09
0.08
0.07
0.05
0.04
0.04
0.03
0.02
0.01
0.01
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
Area
95%
98%
101%
104%
107%
110%
113%
116%
119%
122%
126%
129%
132%
136%
139%
143%
146%
150%
154%
157%
161%
165%
169%
172%
176%
180%
184%
188%
192%
196%
201%
205%
209%
213%
218%
0.70
Risk
0.05
0.03
0.02
0.01
0.01
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
Area
111%
114%
117%
121%
124%
128%
132%
135%
139%
143%
147%
151%
154%
158%
163%
167%
171%
175%
179%
183%
188%
192%
197%
201%
206%
210%
215%
220%
224%
229%
234%
239%
244%
249%
254%
0.80
Risk
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
Area
126%
130%
134%
138%
142%
146%
150%
155%
159%
163%
168%
172%
177%
181%
186%
190%
195%
200%
205%
210%
215%
220%
225%
230%
235%
240%
246%
251%
256%
262%
267%
273%
279%
284%
290%
0.90
Risk
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
Area
142%
147%
151%
155%
160%
165%
169%
174%
179%
184%
189%
194%
199%
204%
209%
214%
220%
225%
230%
236%
241%
247%
253%
259%
264%
270%
276%
282%
288%
295%
301%
307%
314%
320%
326%
1.00
Risk
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
Area
158%
163%
168%
173%
178%
183%
188%
193%
199%
204%
210%
215%
221%
226%
232%
238%
244%
250%
256%
262%
268%
275%
281%
287%
294%
300%
307%
314%
321%
327%
334%
341%
348%
356%
363%
August 2000
                        1-13
MARS SIM, Revision 1

-------
 Appendix I
1.6   Random Numbers




      Table 1.6 1,000 Random Numbers Uniformly Distributed between Zero and One
0.163601
0.934196
0.054552
0.972409
0.556401
0.625153
0.527330
0.826643
0.296068
0.848882
0.779276
0.095038
0.011672
0.215993
0.982374
0.860868
0.718917
0.800735
0.915538
0.975506
0.435196
0.692512
0.678823
0.642075
0.174285
0.951401
0.186824
0.105673
0.801080
0.101214
0.177754
0.846157
0.812147
0.691055
0.483819
0.165133
0.281668
0.554337
0.647423
0.951102
0.965257
0.241889
0.621126
0.838711
0.124034
0.673286
0.891272
0.083603
0.484461
0.577943
0.844846
0.476035
0.101973
0.794380
0.696798
0.225556
0.711742
0.652654
0.272807
0.368151
0.930602
0.029842
0.863244
0.921291
0.005407
0.026338
0.619461
0.236405
0.930066
0.322467
0.306383
0.059046
0.797573
0.985134
0.476899
0.350955
0.555548
0.979831
0.999181
0.799991
0.293328
0.196153
0.351792
0.550827
0.392367
0.274621
0.101393
0.186239
0.443407
0.354717
0.683995
0.819422
0.463655
0.398048
0.232159
0.928348
0.452254
0.821543
0.657348
0.289042
0.133649
0.210993
0.310843
0.878006
0.933720
0.945199
0.390527
0.156607
0.201517
0.104390
0.174899
0.214681
0.839512
0.942401
0.248859
0.132364
0.172627
0.926726
0.984335
0.630553
0.161947
0.885295
0.649633
0.268003
0.995100
0.267852
0.915087
0.883172
0.730612
0.752871
0.762408
0.437067
0.242961
0.513444
0.793464
0.583707
0.025057
0.891009
0.773819
0.369411
0.998118
0.105936
0.275881
0.005975
0.575622
0.253388
0.306651
0.427038
0.892670
0.595309
0.057760
0.526759
0.259801
0.157808
0.583713
0.585505
0.366531
0.867808
0.688925
0.690781
0.261410
0.272254
0.085164
0.786070
0.275906
0.840666
0.548200
0.158956
0.823097
0.642698
0.327863
0.744095
0.817291
0.802354
0.294093
0.813844
0.891664
0.196909
0.725887
0.612556
0.637352
0.893786
0.390428
0.739021
0.827112
0.148688
0.118990
0.741697
0.474156
0.509846
0.718368
0.040605
0.852958
0.453993
0.912588
0.957094
0.140346
0.371540
0.523221
0.017727
0.611426
0.208937
0.883009
0.393867
0.084302
0.317468
0.843209
0.144068
0.156608
0.972031
0.828245
0.133831
0.499623
0.973093
0.246417
0.054389
0.143171
0.124601
0.644996
0.082317
0.600575
0.133498
0.277716
0.480788
0.813221
0.418602
0.898409
0.408165
0.305020
0.997626
0.116336
0.877990
0.733824
0.830218
0.553577
0.108632
0.769081
0.309463
0.030270
0.184565
0.243728
0.374810
0.145212
0.062387
0.368678
0.104212
0.260175
0.527368
0.407518
0.569521
0.006423
0.952871
0.272407
0.364475
0.293721
0.922558
0.713379
0.648743
0.460949
0.293141
0.660224
0.026511
0.857964
0.301917
0.482638
0.800079
0.812482
0.896462
0.748483
0.947022
0.092405
0.783518
0.890058
0.090765
0.358794
0.445986
0.494982
0.826397
0.865552
0.222167
0.337680
0.909843
0.996266
0.675095
0.385141
0.494287
0.441518
0.474516
0.290613
0.361623
0.517658
0.716718
0.841304
0.648985
0.302687
0.511871
0.191600
0.144834
0.268538
0.572705
0.279164
0.338913
0.198725
0.789263
0.601951
0.360578
0.058602
0.910821
0.717362
0.141557
0.470457
0.618443
0.924341
0.244653
0.426236
0.256825
0.796671
0.114691
0.566173
0.779089
0.542048
0.318953
0.681475
0.602829
0.358966
0.437608
0.325204
0.709933
0.132225
0.096843
0.661969
0.896805
0.904515
0.298942
0.910079
0.626600
0.518416
0.745522
0.883509
0.680062
0.888281
0.564192
0.973160
0.443218
0.738495
0.388081
0.423421
0.444997
0.566196
0.937184
0.167665
0.944564
0.270225
0.489034
0.314429
0.596046
0.592776
0.648967
0.663842
0.648478
0.978186
0.592834
0.619741
0.961559
0.044439
0.466955
0.795514
0.308418
0.409622
0.737256
0.457172
0.121573
0.099444
0.045169
0.579216
0.986078
0.154562
0.097350
0.018872
0.140684
 MARSSIM, Revision 1
1-14
August 2000

-------
                                                                           Appendix I
     Table 1.6 1,000 Random Numbers Uniformly Distributed between Zero and One
                                    (continued)
0.873143
0.401675
0.574987
0.745415
0.613554
0.880368
0.567556
0.280015
0.502862
0.738375
0.366209
0.739267
0.375690
0.894101
0.668169
0.470107
0.047906
0.917713
0.839439
0.488244
0.488369
0.311380
0.028802
0.466082
0.720229
0.861579
0.849884
0.989999
0.337214
0.706330
0.417239
0.653326
0.099373
0.860299
0.067160
0.944317
0.917419
0.365705
0.911453
0.349662
0.061151
0.154831
0.929459
0.926550
0.303741
0.183534
0.237361
0.818555
0.794328
0.749763
0.554299
0.866922
0.178824
0.296926
0.135634
0.694949
0.072793
0.338565
0.260352
0.485094
0.270400
0.072165
0.603884
0.575779
0.778039
0.917789
0.994007
0.987184
0.082994
0.916556
0.529996
0.156385
0.210143
0.791992
0.348844
0.185575
0.800723
0.591254
0.238282
0.771468
0.808117
0.425406
0.857632
0.247850
0.696381
0.336240
0.238758
0.305231
0.634971
0.979969
0.256930
0.443631
0.324041
0.271284
0.309033
0.107402
0.254833
0.129716
0.322236
0.807264
0.944160
0.959713
0.939622
0.331677
0.816247
0.349735
0.344245
0.299909
0.707773
0.305465
0.067157
0.026232
0.363875
0.210015
0.743859
0.116707
0.920222
0.383195
0.795760
0.723544
0.118845
0.014438
0.341580
0.373333
0.424191
0.057148
0.887161
0.261038
0.489597
0.518074
0.110614
0.616290
0.494071
0.223989
0.007328
0.924413
0.153558
0.894264
0.348433
0.804761
0.547834
0.234554
0.608231
0.572502
0.954437
0.039033
0.613361
0.249767
0.181747
0.755573
0.838499
0.825052
0.769274
0.655124
0.386073
0.707522
0.568383
0.365952
0.134014
0.386382
0.004214
0.867155
0.716762
0.192603
0.461531
0.021104
0.869115
0.545130
0.217373
0.556232
0.799426
0.485610
0.008978
0.176598
0.871833
0.305933
0.781546
0.172763
0.770481
0.487552
0.767389
0.646094
0.753757
0.741124
0.549585
0.031334
0.169301
0.153359
0.689979
0.108975
0.047561
0.253032
0.185320
0.837800
0.782902
0.298471
0.221234
0.360957
0.867386
0.592513
0.542130
0.526636
0.770194
0.904929
0.469779
0.787951
0.931869
0.027043
0.969563
0.372555
0.382772
0.383695
0.576809
0.480599
0.777100
0.770237
0.914856
0.104256
0.455150
0.735335
0.498720
0.857324
0.791852
0.688526
0.941102
0.914420
0.353168
0.494021
0.455260
0.311194
0.239894
0.237660
0.244896
0.092884
0.490431
0.947374
0.166572
0.808757
0.280223
0.473418
0.306862
0.284572
0.521982
0.913966
0.678287
0.096443
0.801938
0.291364
0.070954
0.418470
0.479858
0.052969
0.172846
0.111924
0.707400
0.011893
0.112919
0.240324
0.941002
0.140520
0.988330
0.986074
0.225470
0.772731
0.732687
0.673377
0.996216
0.320633
0.447486
0.208165
0.271534
0.337304
0.426444
0.731405
0.375686
0.112314
0.009573
0.283447
0.650251
0.904790
0.992475
0.599127
0.266514
0.667142
0.374089
0.040364
0.695764
0.045748
0.004082
0.894958
0.421803
0.736102
0.412930
0.587451
0.014317
0.184068
0.428921
0.794021
0.259197
0.597085
0.444554
0.556251
0.198070
0.934912
0.448970
0.051811
0.150619
0.971659
0.600014
0.949825
0.869528
0.320336
0.339906
0.828215
0.242857
0.229879
0.943793
0.326222
0.151931
0.308979
0.239509
0.647901
0.216531
0.140070
0.624283
0.306903
0.505327
0.298068
0.597796
0.737514
0.471802
0.601453
0.571609
0.820797
0.940946
0.648821
0.291615
0.782477
0.186087
0.177531
0.157058
0.460602
0.985594
0.546347
0.049321
0.445073
0.876616
0.945046
0.441666
0.845737
0.226369
0.431645
0.113060
0.309290
0.849242
0.205750
0.036285
0.328792
0.698329
0.424858
0.246223
0.763214
0.840563
0.292810
0.303885
0.027722
0.539847
0.162072
0.340966
0.783451
0.083217
0.981580
0.261767
0.238087
0.277620
0.165732
0.922273
0.771997
0.743725
0.681447
0.778659
0.726957
0.885438
0.595525
0.275619
0.455018
0.505316
0.811135
0.194553
0.377845
August 2000
1-15
MARS SIM, Revision 1

-------
Appendix I
     Table 1.6 1,000 Random Numbers Uniformly Distributed between Zero and One
                                     (continued)
0.027171
0.768066
0.052305
0.623285
0.624284
0.853117
0.967796
0.759450
0.514703
0.826296
0.354198
0.744807
0.642312
0.824625
0.755877
0.625370
0.124012
0.825392
0.999194
0.536855
0.361632
0.923253
0.845432
0.058193
0.387888
0.899285
0.492051
0.308522
0.671602
0.933631
0.768853
0.108915
0.264540
0.792775
0.960789
0.356643
0.855876
0.679791
0.967123
0.133851
0.382001
0.297058
0.667083
0.797162
0.479871
0.202336
0.726183
0.655990
0.092643
0.644294
0.208541
0.018316
0.397054
0.115419
0.864053
0.255775
0.051583
0.123099
0.797708
0.770743
0.442388
0.321605
0.761154
0.847909
0.617183
0.636883
0.136063
0.022855
0.348421
0.057705
0.690208
0.058916
0.821341
0.297156
0.095780
0.682343
0.744466
0.076280
0.180449
0.806962
0.163569
0.505570
0.678619
0.899944
0.697578
0.501578
0.520741
0.570478
0.043774
0.487575
0.673915
0.050704
0.935493
0.746739
0.826653
0.600824
0.576129
0.871263
0.505977
0.607572
0.352557
0.405715
0.385851
0.621969
0.418534
0.927298
0.563383
0.122418
0.204221
0.404959
0.875712
0.113509
0.682796
0.733795
0.171916
0.688071
0.936409
0.772790
0.901289
0.373705
0.885420
0.406611
0.179839
0.674917
0.740170
0.655314
0.571558
0.634642
0.204828
0.197074
0.475395
0.866481
0.308849
0.581618
0.980045
0.952708
0.811955
0.600557
0.752543
0.685458
0.785028
0.774379
0.370345
0.919787
0.539543
0.413809
0.572689
0.423514
0.046701
0.482449
0.033111
0.831460
0.679568
0.068207
0.925783
0.418976
0.284410
0.237797
0.759989
0.417970
0.284838
0.932781
0.090931
0.967761
0.391874
0.372748
0.439594
0.066152
0.228607
0.588574
0.537793
0.860466
0.346358
0.393330
0.979875
0.244433
0.070374
0.329001
0.972838
0.405575
0.618925
0.058556
0.095675
0.606715
0.048914
0.242120
0.588503
0.810022
0.965550
0.460586
0.885414
0.362857
0.596215
0.877436
0.848112
0.795845
0.105093
0.566627
0.786084
0.353248
0.327832
0.452438
0.362205
0.670767
0.292400
0.831670
0.758190
0.591035
0.067899
0.896590
0.437879
0.874416
0.629443
0.857606
0.826932
0.639101
0.512284
0.515684
0.207558
0.328848
0.056160
0.337991
0.461960
0.844681
0.600528
0.427077
0.814902
0.871674
0.043950
0.394811
MARSSIM, Revision 1
1-16
August 2000

-------
                                                                                Appendix I


1.7    Stem and Leaf Display

The construction of a stem and leaf display is a simple way to generate a crude histogram of the
data quickly. The "stems" of such a display are the most significant digits of the data. Consider the
sample data of Section 8.2.2.2:

       90.7,  83.5,  86.4,   88.5,  84.4,  74.2,  84.1, 87.6,  78.2,  77.6,
       86.4,  76.3,  86.5,   77.4,  90.3,  90.1,  79.1, 92.4,  75.5,  80.5.

Here the data span three decades, so one might consider using the stems 70, 80 and 90. However,
three is too few stems to be informative, just as three intervals would be too few for constructing a
histogram. Therefore, for this example, each decade is divided into two parts.  This results in the six
stems 70, 75, 80, 85, 90, 95. The leaves are the least significant digits, so 90.7 has the stem 90 and
the leaf 0.7. 77.4 has the stem 75 and the leaf 7.4. Note that even though the stem is 75, the leaf is
not 2.4. The leaf is kept as 7.4 so that the data can be read directly from the display without any
calculations.

As shown in the top part of Figure 1.1, simply arrange the leaves of the data into rows, one stem per
row. The result is a quick histogram of the data. In order to ensure this, the same number of digits
should be used for each leaf, so that each occupies the same amount of horizontal space.

If the stems are arranged in increasing order, as shown in the bottom half of Figure 1.1, it is easy
pick out the minimum (74.2), the maximum (92.4), and the median (between 84.1 and 84.4).

A stem and leaf display (or histogram) with two peaks may indicate that residual radioactivity is
distributed  over only a portion of the survey unit.  Further information on the construction and
interpretation of data plots is given in EPA QA/G-9 (EPA 1996a).
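
For illustration, the sorted-leaf form of such a display can be generated with a few lines of
Python (the function name and stem width are illustrative); the example call reproduces the
bottom half of Figure 1.1:

from math import floor

def stem_and_leaf(data, stem_width=5):
    # Group each value by its stem (the lower multiple of stem_width) and keep
    # the leaf exactly as it reads in the data (77.4 under stem 75 has leaf 7.4).
    stems = {}
    for x in sorted(data):
        stem = int(x // stem_width) * stem_width      # 74.2 -> 70, 77.4 -> 75
        leaf = round(x - 10 * floor(x / 10), 1)       # 77.4 -> 7.4, 83.5 -> 3.5
        stems.setdefault(stem, []).append(leaf)
    for stem, leaves in sorted(stems.items()):
        print(f"{stem:>4}  " + ", ".join(f"{leaf:.1f}" for leaf in leaves))

stem_and_leaf([90.7, 83.5, 86.4, 88.5, 84.4, 74.2, 84.1, 87.6, 78.2, 77.6,
               86.4, 76.3, 86.5, 77.4, 90.3, 90.1, 79.1, 92.4, 75.5, 80.5])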
August 2000                                1-17                        MARS SIM, Revision 1

-------
Appendix I
                      Stem   Leaves
                      70     4.2
                      75     8.2, 7.6, 6.3, 7.4, 9.1, 5.5
                      80     3.5, 4.4, 4.1, 0.5
                      85     6.4, 8.5, 7.6, 6.4, 6.5
                      90     0.7, 0.3, 0.1, 2.4
                      95

                      Stem   Sorted Leaves
                      70     4.2
                      75     5.5, 6.3, 7.4, 7.6, 8.2, 9.1
                      80     0.5, 3.5, 4.1, 4.4
                      85     6.4, 6.4, 6.5, 7.6, 8.5
                      90     0.1, 0.3, 0.7, 2.4
                      95
                     Figure 1.1 Example of a Stem and Leaf Display
1.8    Quantile Plots

 A Quantile plot is constructed by first ranking the data from smallest to largest. Sorting the
data is easy once the stem and leaf display has been constructed. Then, each data value is simply
plotted against the percentage of the samples with that value or less.  This percentage is
computed from:
                 Percent  =  100 (rank − 0.5) / (number of data points)                      (1-3)
The results for the example data of Section 1.7 are shown in Table 1.7. The Quantile plot for this
example is shown in Figure 1.2.

The slope of the curve in the Quantile plot is an indication of the amount of data in a given range
of values. A small amount of data in a range will result in a large slope. A large amount of data
in a range of values will result in a more horizontal slope. A sharp rise near the bottom or the top
is an indication of asymmetry. Sudden changes in slope, or notably flat or notably steep areas
may indicate peculiarities in the  survey unit data needing further investigation.
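
The percentages of equation 1-3 can be computed with a short Python sketch (the function name
is illustrative; tied values are given their midrank, as in Table 1.7):

def quantile_points(data):
    # Return (percent, value) pairs for a Quantile plot, per equation 1-3.
    ordered = sorted(data)
    n = len(ordered)
    points = []
    for value in ordered:
        below = sum(1 for v in ordered if v < value)
        ties = ordered.count(value)
        rank = below + (ties + 1) / 2.0              # both 86.4 values get rank 12.5
        points.append((100.0 * (rank - 0.5) / n, value))
    return points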
MARSSIM, Revision 1
               1-18
August 2000

-------
                                                                                Appendix I


                             Table 1.7  Data for Quantile Plot
Data:      74.2   75.5   76.3   77.4   77.6   78.2   79.1   80.5   83.5   84.1
Rank:         1      2      3      4      5      6      7      8      9     10
Percent:    2.5    7.5   12.5   17.5   22.5   27.5   32.5   37.5   42.5   47.5

Data:      84.4   86.4   86.4   86.5   87.6   88.5   90.1   90.3   90.7   92.4
Rank:        11   12.5   12.5     14     15     16     17     18     19     20
Percent:   52.5   60.0   60.0   67.5   72.5   77.5   82.5   87.5   92.5   97.5
A useful aid to interpreting the quantile plot is the addition of boxes containing the middle 50%
and middle 75% of the data. These are shown as the dashed lines in Figure 1.2. The 50% box has
its upper right corner at the 75th percentile and its lower left corner at the 25th percentile. These
points are also called the Quartiles. These are approximately 78 and 88, as indicated by the
dashed lines. They bracket the middle half of the data values.  The 75% box has its upper right
corner at the 87.5th percentile and its lower left corner at the 12.5th percentile. A sharp increase
within the 50% box can indicate two or more modes in the data. Outside the 75% box, sharp
increases can indicate outliers. The median (50th percentile) is indicated by the heavy solid line
at a value of approximately 84, and can be used as an aid to judging the symmetry of the data distribution.
There are no especially unusual features in the example Quantile plot shown in Figure 1.2, other
than the possibility of slight asymmetry around the median.

Another Quantile plot, for the example data of Section 8.3.3, is shown in Figure 1.3.
August 2000                                 1-19                         MARS SIM, Revision 1

-------
Appendix I
[Quantile plot of the example data: concentration (74 to 92) versus percent (0 to 100), with
dashed boxes marking the middle 50% and 75% of the data and the median shown as a heavy solid line.]
Figure 1.2 Example of a Quantile Plot
MARSSIM, Revision 1
1-20
August 2000

-------
                                                               Appendix I
                       [Quantile plot titled "Class 2 Exterior Survey Unit": concentration
                       (110 to 150) versus percent (0 to 100).]
         Figure 1.3 Quantile Plot for Example Class 2 Exterior Survey Unit of Section 8.3.3.
August 2000
1-21
MARS SIM, Revision 1

-------
Appendix I


A Quantile-Quantile plot is extremely useful for comparing two sets of data.  Suppose the
following 17 concentration values were obtained in a reference area corresponding to the
example survey unit data of Section 1.7:

       92.1,  83.2,  81.7,  81.8,  88.5,   82.4,  81.5,  69.7,   82.4,  89.7,
       81.4,  79.4,  82.0,  79.9,  81.1,   59.4,  75.3.

A Quantile-Quantile plot can be constructed to compare the distribution of the survey unit data,
Yj, j = 1,...,n, with the distribution of the reference area data, Xi, i = 1,...,m.  (If the reference area
data set were the larger, the roles of X and Y would be reversed.) The data from each set are
ranked separately from smallest to largest. This has already been done for the survey unit data in
Table 1.7. For the reference area data, we obtain the results in Table 1.8.

                    Table  1.8 Ranked Reference Area Concentrations
Data:   59.4   69.7   75.3   79.4   79.9   81.1   81.4   81.5   81.7   81.8
Rank:      1      2      3      4      5      6      7      8      9     10

Data:   82.0   82.4   82.4   83.2   88.5   89.7   92.1
Rank:     11   12.5   12.5     14     15     16     17


The median for the reference area data is 81.7, the sample mean is 80.7, and the sample standard
deviation is 7.5.

For the larger data set, the data must be interpolated to match the number of points in the smaller
data set.  This is done by computing
      v1 = 0.5(n/m) + 0.5   and   vi+1 = vi + (n/m)   for i = 1, ..., m−1,                    (1-4)



where m is the number of points in the smaller data set and n is the number of points in the larger
data set. For each of the ranks, i, in the smaller data set, a corresponding value in the larger data
set is found by first decomposing vi into its integer part, j, and its fractional part, g.

Then the interpolated values are computed from the relationship:


MARSSIM, Revision 1                         1-22                                 August 2000

-------
                                                                                 Appendix I


                            zi  =  (1 − g) Y(j)  +  g Y(j+1)                                  (1-5)

where Y(j) denotes the j-th smallest value in the larger data set.
The results of these calculations are shown in Table 1.9.
               Table 1.9  Interpolated Ranks for Survey Unit Concentrations
Rank       vi        zi        xi            Rank       vi        zi        xi
  1       1.09      74.3      59.4            11      12.85      86.4      82.0
  2       2.26      75.7      69.7            12.5    14.03      86.5      82.4
  3       3.44      76.8      75.3            12.5    15.21      87.8      82.4
  4       4.62      77.5      79.4            14      16.38      89.1      83.2
  5       5.79      78.1      79.9            15      17.56      90.2      88.5
  6       6.97      79.1      81.1            16      18.74      90.6      89.7
  7       8.15      80.9      81.4            17      19.91      92.3      92.1
  8       9.33      83.7      81.5
  9      10.50      84.3      81.7
 10      11.68      85.8      81.8

Finally, zi is plotted against xi to obtain the Quantile-Quantile plot. This example is shown in
Figure 1.4.

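The interpolation of equations 1-4 and 1-5 can be sketched in Python (the function name is
illustrative; the smaller data set here is the reference area and the larger is the survey unit):

def qq_points(smaller, larger):
    # Pair each ordered value of the smaller data set with an interpolated
    # value from the larger data set (equations 1-4 and 1-5).
    x = sorted(smaller)                       # m reference area values
    y = sorted(larger)                        # n survey unit values
    m, n = len(x), len(y)
    ratio = n / m
    v = 0.5 * ratio + 0.5                     # v1 from equation 1-4
    points = []
    for i in range(m):
        j, g = int(v), v - int(v)             # integer and fractional parts of vi
        z = y[j - 1] if g == 0 else (1 - g) * y[j - 1] + g * y[j]
        points.append((x[i], z))              # (xi, zi) pairs, as in Table 1.9
        v += ratio
    return points
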
The Quantile-Quantile Plot is valuable because it provides a direct visual comparison of the two
data sets. If the two data distributions differ only in location (e.g. mean) or scale (e.g. standard
deviation), the points will lie on a straight line. If the two data distributions being compared are
identical, all of the plotted points will lie on the line Y=X. Any deviations from this would point
to possible differences in these distributions. The middle data point plots the median of Y against
the median of X. That this point lies above the line Y=X, in the example of Figure 1.4, shows that
the median of Y is larger than the median of X. Indeed, the cluster of points above the line Y = X
in the region of the plot where the data points are dense, is an indication that the central portion
of the survey unit distribution is shifted toward higher values than the reference area distribution.
This could imply that there is residual radioactivity in the survey unit. This should be tested
using the nonparametric statistical tests described in Chapter 8.

Another Quantile-Quantile plot, for the Class 1 Interior Survey Unit example data,  is shown in
Figure A.8.

Further information on the interpretation of Quantile and Quantile-Quantile plots is given in
EPA QA/G-9 (EPA 1996a).
August 2000                                 1-23                         MARSSIM, Revision 1

-------
Appendix I
               [Quantile-Quantile plot titled "Example Q-Q Plot": survey unit concentrations
               versus reference area concentrations (55 to 95).]

               Figure 1.4 Example Quantile-Quantile Plot
MARSSIM, Revision 1
1-24
August 2000

-------
                                                                                Appendix I


1.9    Power Calculations for the Statistical Tests

1.9.1 Power of the Sign Test

The power of the Sign test for detecting residual radioactivity at the concentration level LBGR =
DCGLW − Δ, may be found using equation 1-6.

The test statistic S+ is the number of survey unit measurements falling below the DCGLW, and the
null hypothesis is rejected when S+ exceeds the critical value k. When the true concentration is at
the LBGR, each measurement falls below the DCGLW with probability Sign p = Φ(Δ/σ), so that

            Power  =  1 −  Σ (i = 0 to k)  C(N, i) (Sign p)^i (1 − Sign p)^(N−i)              (1-6)

where N is the number of measurements and C(N, i) is the binomial coefficient.
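
This calculation can be written as a short Python function, assuming the cumulative binomial
form given above (the function and parameter names are illustrative):

from math import comb, erf, sqrt

def sign_test_power(N, k, delta_over_sigma):
    # Probability that more than k of N measurements fall below the DCGLw
    # when the true concentration is at the LBGR.
    sign_p = 0.5 * (1.0 + erf(delta_over_sigma / sqrt(2.0)))   # Sign p = Phi(delta/sigma)
    p_k_or_fewer = sum(comb(N, i) * sign_p**i * (1 - sign_p)**(N - i) for i in range(k + 1))
    return 1.0 - p_k_or_fewer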
-------
Appendix I
              [Retrospective power curve: power (0.00 to 1.00) versus concentration (130 to 150 Bq/kg).]
             Figure 1.5 Retrospective Power Curve for Class 3 Exterior Survey Unit
MARSSIM, Revision 1
            1-26
August 2000

-------
                                                                               Appendix I
1.9.2 Power of the Wilcoxon Rank Sum Test

The power of the WRS test is computed from

          Power  =  1 −  Φ[ (Wc − 0.5 − E(Wr)) / √Var(Wr) ]                                   (1-8)

where Wc is the critical value found in Table 1.4 for the appropriate values of α, n and m, and
E(Wr) and Var(Wr) are the mean and variance of the adjusted reference area rank sum. Values of
the probabilities P1 and P2 used to compute this mean and variance are given in Table 1.10 as a
function of Δ/σ.
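
This calculation can be sketched in Python. The expressions used below for E(Wr) and Var(Wr)
in terms of P1 and P2, namely E(Wr) = m(m+1)/2 + mnP1 and Var(Wr) = mn[P1(1−P1) +
(n+m−2)(P2 − P1²)], are one standard large-sample formulation and are stated here as an
assumption; the function names are illustrative:

from math import erf, sqrt

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def wrs_power(n, m, w_c, p1, p2):
    # n = survey unit measurements, m = reference area measurements,
    # w_c = critical value of the rank sum, p1 and p2 from Table 1.10.
    mean_w = m * (m + 1) / 2.0 + m * n * p1
    var_w = m * n * (p1 * (1 - p1) + (n + m - 2) * (p2 - p1**2))
    return 1.0 - phi((w_c - 0.5 - mean_w) / sqrt(var_w))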
-------
Appendix I
      Table 1.10  Values of P1 and P2 for Computing the Mean and Variance of the WRS Test Statistic
Δ/σ
-6.0
-5.0
-4.0
-3.5
-3.0
-2.5
-2.0
-1.9
-1.8
-1.7
-1.6
-1.5
-1.4
-1.3
-1.2
-1.1
-1.0
-0.9
-0.8
-0.7
-0.6
-0.5
-0.4
-0.3
-0.2
-0.1
0.0
0.1
0.2
0.3
0.4
0.5
0.6
P1
1.11E-05
0.000204
0.002339
0.006664
0.016947
0.038550
0.078650
0.089555
0.101546
0.114666
0.128950
0.144422
0.161099
0.178985
0.198072
0.218338
0.239750
0.262259
0.285804
0.310309
0.335687
0.361837
0.388649
0.416002
0.443769
0.471814
0.500000
0.528186
0.556231
0.583998
0.611351
0.638163
0.664313
P2
1.16E-07
6.14E-06
0.000174
0.000738
0.002690
0.008465
0.023066
0.027714
0.033114
0.039348
0.046501
0.054656
0.063897
0.074301
0.085944
0.098892
0.113202
0.128920
0.146077
0.164691
0.184760
0.206266
0.229172
0.253419
0.278930
0.305606
0.333333
0.361978
0.391392
0.421415
0.451875
0.482593
0.513387
Δ/σ
0.7
0.8
0.9
1.0
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2.0
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
3.0
3.1
3.2
3.3
3.4
3.5
4.0
5.0
6.0

P1
0.689691
0.714196
0.737741
0.760250
0.781662
0.801928
0.821015
0.838901
0.855578
0.871050
0.885334
0.898454
0.910445
0.921350
0.931218
0.940103
0.948062
0.955157
0.961450
0.967004
0.971881
0.976143
0.979848
0.983053
0.985811
0.988174
0.990188
0.991895
0.993336
0.997661
0.999796
0.999989

P2
0.544073
0.574469
0.604402
0.633702
0.662216
0.689800
0.716331
0.741698
0.765812
0.788602
0.810016
0.830022
0.848605
0.865767
0.881527
0.895917
0.908982
0.920777
0.931365
0.940817
0.949208
0.956616
0.963118
0.968795
0.973725
0.977981
0.981636
0.984758
0.987410
0.995497
0.999599
0.999978

MARSSIM, Revision 1
1-28
August 2000

-------
                                                                         Appendix I
           [Retrospective power curve: power versus surface activity (4,000 to 6,000 dpm per 100 cm2).]
        Figure 1.6 Retrospective Power Curve for Class 2 Interior Drywall Survey Unit
August 2000
            1-29
     MARS SIM, Revision 1

-------
Appendix I
1.10   Spreadsheet Formulas for the Wilcoxon Rank Sum Test

The analysis for the WRS test is very well suited for calculation on a spreadsheet. This is how
the analysis discussed above was done. This particular example was constructed using Excel
5.0™.  The formula sheet corresponding to Table 8.6 is given in Table 1.11.  The function in
Column D of Table 1.11 calculates the ranks of the data. The RANK function in Excel™ does
not return tied ranks in the way needed for the WRS.  The COUNTIF function is used to correct
for this. Column E simply picks out the reference area ranks from Column D.

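The same tie-corrected ranking can also be computed outside a spreadsheet. The short Python
sketch below mirrors the RANK/COUNTIF correction; the shift of 160 corresponds to the
adjustment added to the reference area data in Column C (the function and parameter names are
illustrative):

def wrs_rank_sum(reference, survey, shift=160):
    # Shift the reference area data, rank the combined data using midranks
    # for ties, and return the sum of the reference area ranks.
    adjusted = [x + shift for x in reference] + list(survey)
    ranks = []
    for value in adjusted:
        below = sum(1 for v in adjusted if v < value)
        ties = adjusted.count(value)
        ranks.append(below + (ties + 1) / 2.0)        # midrank of tied values
    return sum(ranks[: len(reference)])
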
                  Table 1.11  Spreadsheet Formulas Used in Table 8.6
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
A          B          C                               D                                E
Data
49
35
45
45
41
44
48
37
46
42
47
104
94
98
99
90
104
95
105
93
101
92

Area
R
R
R
R
R
R
R
R
R
R
R
S
S
S
S
S
S
S
S
S
S
S

Adjusted Data
=IF(B2="R",A2+160,A2)
=IF(B3="R",A3+160,A3)
=IF(B4="R",A4+160,A4)
=IF(B5="R",A5+160,A5)
=IF(B6="R",A6+160,A6)
=IF(B7="R",A7+160,A7)
=IF(B8="R",A8+160,A8)
=IF(B9="R",A9+160,A9)
=IF(B10="R",A10+160,A10)
=IF(B11="R",A11+160,A11)
=IF(B12="R",A12+160,A12)
=IF(B13="R",A13+160,A13)
=IF(B14="R",A14+160,A14)
=IF(B15="R",A15+160,A15)
=IF(B16="R",A16+160,A16)
=IF(B17="R",A17+160,A17)
=IF(B18="R",A18+160,A18)
=IF(B19="R",A19+160,A19)
=IF(B20="R",A20+160,A20)
=IF(B21="R",A21+160,A21)
=IF(B22="R",A22+160,A22)
=IF(B23="R",A23+160,A23)
Sum=
Ranks
=RANK(C2,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C2) - 1) / 2
=RANK(C3,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C3) - 1) / 2
=RANK(C4,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C4) - 1) / 2
=RANK(C5,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C5) - 1) / 2
=RANK(C6,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C6) - 1) / 2
=RANK(C7,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C7) - 1) / 2
=RANK(C8,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C8) - 1) / 2
=RANK(C9,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C9) - 1) / 2
=RANK(C10,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C10) - 1) / 2
=RANK(C11,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C11) - 1) / 2
=RANK(C12,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C12) - 1) / 2
=RANK(C13,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C13) - 1) / 2
=RANK(C14,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C14) - 1) / 2
=RANK(C15,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C15) - 1) / 2
=RANK(C16,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C16) - 1) / 2
=RANK(C17,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C17) - 1) / 2
=RANK(C18,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C18) - 1) / 2
=RANK(C19,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C19) - 1) / 2
=RANK(C20,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C20) - 1) / 2
=RANK(C21,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C21) - 1) / 2
=RANK(C22,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C22) - 1) / 2
=RANK(C23,$C$2:$C$23,1)+(COUNTIF($C$2:$C$23,C23) - 1) / 2
=SUM(D2:D23)
Reference Area
Ranks
=IF(B2="R",D2,0)
=IF(B3="R",D3,0)
=IF(B4="R",D4,0)
=IF(B5="R",D5,0)
=IF(B6="R",D6,0)
=IF(B7="R",D7,0)
=IF(B8="R",D8,0)
=IF(B9="R",D9,0)
=IF(B10="R",D10,0)
=IF(B11="R",D11,0)
=IF(B12="R",D12,0)
=IF(B13="R",D13,0)
=IF(B14="R",D14,0)
=IF(B15="R",D15,0)
=IF(B16="R",D16,0)
=IF(B17="R",D17,0)
=IF(B18="R",D18,0)
=IF(B19="R",D19,0)
=IF(B20="R",D20,0)
=IF(B21="R",D21,0)
=IF(B22="R",D22,0)
=IF(B23="R",D23,0)
=SUM(E2:E23)
MARSSIM, Revision 1
1-30
August 2000

-------
Appendix I

1.11   Multiple Radionuclides

There are two cases to be considered when dealing with multiple radionuclides, namely 1) the
radionuclide concentrations have a fairly constant ratio throughout the survey unit, or 2) the
concentrations of the different radionuclides appear to be unrelated in the survey unit. In
statistical terms, we are concerned about whether the concentrations of the different
radionuclides are  correlated or not. A simple way to judge this would be to make a scatter plot of
the concentrations against each other, and see if the points appear to have an underlying linear
pattern. The correlation coefficient can also be computed to see if it lies nearer to zero than to
one.  One could also perform a curve  fit and test the significance of the result.  Ultimately,
however, sound judgement must be used in interpreting the results of such calculations. If there
is no physical reason for the concentrations to be related, they probably are not. Conversely, if
there is sound evidence that the radionuclide concentrations should be  related because of how
they were treated, processed or released, this information should be used.

1.11.1 Using the Unity Rule

In either of the two above cases, the unity rule described in  Section 4.3.3 is applied.  The
difference is in how it is applied. Suppose there are n radionuclides. If the concentration of
radionuclide i is denoted by Ci, and its DCGLW is denoted by Di, then the unity rule for the n
radionuclides states that:

                         C1/D1 + C2/D2 + C3/D3 + ... + Cn/Dn  ≤  1                    (1-11)

This will ensure that the total dose or risk due to the sum of all the radionuclides does not exceed
the release criterion.  Note that if Dmin is the smallest of the DCGLs, then

            C1/D1 + C2/D2 + C3/D3 + ... + Cn/Dn  ≤  (C1 + C2 + C3 + ... + Cn)/Dmin    (1-12)

so that the smallest DCGL may be applied to the total activity concentration, rather than using
the unity rule.  While this option may be considered, in many cases it will be too conservative to
be useful.
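
A minimal Python sketch of this check follows (the function name and the example
concentrations and DCGLs are illustrative):

def unity_rule_fraction(concentrations, dcgls):
    # Sum of fractions of the DCGLs, equation 1-11; compliant if the sum is at most one.
    return sum(c / d for c, d in zip(concentrations, dcgls))

fraction = unity_rule_fraction([0.8, 0.5], [2.0, 1.4])   # about 0.76, so the sum is below one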
1.11.2  Radionuclide Concentrations with Fixed Ratios

If there is an established ratio among the concentrations of the n radionuclides in a survey unit,
then the concentration of every radionuclide can be expressed in terms of any one of them, e.g.,
radionuclide #1. The measured radionuclide is often called a surrogate radionuclide for the
others.
August 2000                                 1-31                         MARS SIM, Revision 1

-------
                                                                               Appendix I


If                   C2 = R2 C1,  C3 = R3 C1, ...,  Ci = Ri C1, ...,  Cn = Rn C1,
then
                           C1/D1 + C2/D2 + C3/D3 + ... + Cn/Dn
                      = C1/D1 + R2 C1/D2 + R3 C1/D3 + ... + Rn C1/Dn
                         = C1 [1/D1 + R2/D2 + R3/D3 + ... + Rn/Dn]
                                        = C1/Dtotal                                   (1-13)

where

                       Dtotal = 1 / [1/D1 + R2/D2 + R3/D3 + ... + Rn/Dn]                  (1-14)
Thus, Dtotal is the DCGLW for the surrogate radionuclide when the concentration of that
radionuclide represents all radionuclides that are present in the survey unit.  Clearly, this scheme
is applicable only when radionuclide specific measurements of the surrogate radionuclide are
made.  It is unlikely to apply in situations where the surrogate radionuclide appears in
background, since background variations would tend to obscure the relationships between it and
the other radionuclides.

Thus, in the case where there are constant ratios among radionuclide concentrations, the
statistical tests are applied as if only the surrogate radionuclide were contributing to the residual
radioactivity, with the DCGLW for that radionuclide replaced by Dtotal. For example, in planning
the final status survey, only the expected standard deviation of the concentration measurements
for the surrogate radionuclide is needed to calculate the sample size.

For the elevated measurement comparison, the DCGLEMC for the surrogate radionuclide is
replaced by
                       Etotal = 1 / [1/E1 + R2/E2 + R3/E3 + ... + Rn/En]                  (1-15)

where Ei is the DCGLEMC for radionuclide i.
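
Equations 1-14 and 1-15 amount to a single reciprocal sum, sketched below in Python (the
function name and the example ratio and DCGL values are hypothetical):

def surrogate_dcgl(d1, ratios_and_dcgls):
    # DCGL for the surrogate radionuclide (equation 1-14); pass the DCGLemc
    # values instead to obtain Etotal (equation 1-15).
    return 1.0 / (1.0 / d1 + sum(r / d for r, d in ratios_and_dcgls))

d_total = surrogate_dcgl(2.0, [(3.0, 8.0)])   # 1/(0.5 + 0.375) = about 1.14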

1.11.3 Unrelated Radionuclide Concentrations

If the concentrations of the different radionuclides appear to be unrelated in the survey unit, there
is little alternative but to measure the concentration of each radionuclide and use the unity rule.
The exception would be  in applying the most restrictive DCGLW to all of the radionuclides, as
mentioned later in this section.

Since the release criterion is

                        C1/D1 + C2/D2 + C3/D3 + ... + Cn/Dn  ≤  1                    (1-16)

MARSSIM, Revision 1                        1-32                                August 2000

-------
Appendix I

the quantity to be measured is the weighted sum, T = C1/D1 + C2/D2 + C3/D3 + ... + Cn/Dn.
The DCGLW for T is one. In planning the final status survey, the measurement standard
deviation of the weighted sum, T, is estimated by

        σ(T) = √( [σ(C1)/D1]² + [σ(C2)/D2]² + [σ(C3)/D3]² + ... + [σ(Cn)/Dn]² )        (1-17)

since the measured concentrations of the various radionuclides are assumed to be uncorrelated.

For the elevated measurement comparison, the inequality

                        C1/E1 + C2/E2 + C3/E3 + ... + Cn/En  ≤  1

is used, where Ei is the DCGLEMC for radionuclide i. For scanning, the most restrictive DCGLEMC
should generally be used.

When some of the radionuclides also appear in background, the quantity T = C1/D1 + C2/D2 +
C3/D3 + ... + Cn/Dn must also be measured in an appropriate reference area. If radionuclide i
does not appear in background, set Ci = 0 in the calculation of T for the reference area.

Note that if there is a fixed ratio between the concentrations of some radionuclides, but not
others, a combination of the method of this section with that of the previous section may be used.
The appropriate value of Dtotal with the concentration of the measured surrogate radionuclide
should replace the corresponding terms in equation 1-17.
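
The weighted sum and its standard deviation (equation 1-17) can be computed as follows (the
function name is illustrative):

from math import sqrt

def weighted_sum_and_sigma(concentrations, sigmas, dcgls):
    # T and sigma(T), assuming the radionuclide concentrations are uncorrelated.
    t = sum(c / d for c, d in zip(concentrations, dcgls))
    sigma_t = sqrt(sum((s / d) ** 2 for s, d in zip(sigmas, dcgls)))
    return t, sigma_t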

1.11.4 Example Application of WRS Test to Multiple Radionuclides

This section contains an example application of the nonparametric statistical methods in this
report to sites that have residual radioactivity from more than one radionuclide. Consider a site
with both 60Co and 137Cs contamination.  137Cs appears in background from global  atmospheric
weapons tests at a typical concentration  of about 1 pCi/g.  Assume that the DCGLw for  60Co is 2
pCi/g and for 137Cs is 1.4 pCi/g.  In disturbed areas, the background  concentration of 137Cs can
vary considerably.  An estimated spatial standard deviation of 0.5 pCi/g for 137Cs  will be
assumed. During remediation, it was found that the concentrations of the two radionuclides were
not well correlated in the survey unit.  60Co concentrations were more variable than the 137Cs
concentrations, and 0.7 pCi/g is estimated for its standard deviation. Measurement errors for
both 60Co and 137Cs using gamma spectrometry will be small compared to this. For the
comparison to the release criteria, the weighted sum of the concentrations of these radionuclides
is computed from:

Weighted sum = (60Co concentration)/(60Co DCGLW) + (137Cs Concentration)/(137Cs DCGLW)
              = (60Co concentration)/(2) + (137Cs Concentration)/(1.4)


August 2000                                 1-33                         MARS SIM, Revision 1

-------
                                                                               Appendix I


The variance of the weighted sum, assuming that the 60Co and 137Cs concentrations are spatially
unrelated, is

σ² = [(60Co standard deviation)/(60Co DCGLW)]² + [(137Cs standard deviation)/(137Cs DCGLW)]²
   = [0.7/2]² + [0.5/1.4]² = 0.25

Thus σ = 0.5. The DCGLW for the weighted sum is one. The null hypothesis is that the survey
unit exceeds the release criterion. During the DQO process, the LBGR was set at 0.5 for the
weighted sum, so that Δ = DCGLW − LBGR = 1.0 − 0.5 = 0.5, and Δ/σ = 0.5/0.5 = 1.0. The
acceptable error rates chosen were α = β = 0.05. To achieve this, 32 samples each are required in
the survey unit and the reference area.

The weighted sums are computed for each measurement location in both the reference area and
the survey unit. The WRS test is then performed on the weighted sums. The calculations for this
example are shown in Table 1.12. The DCGLW (i.e., 1.0) is added to the weighted sum for each
location in the reference area.  The ranks of the combined survey unit and adjusted reference area
weighted sums are then computed. The sum of the ranks of the adjusted reference area weighted
sums is then compared to the critical value for n = m = 32, α = 0.05, which is 1162 (see formula
following Table 1.4).  In Table 1.12, the sum of the ranks of the adjusted reference area weighted
sums is 1281.  This exceeds the critical value, so the null hypothesis is rejected.  The survey unit
meets the release criterion. The difference between the mean of the weighted sums in the survey
unit and the reference area is 1.86 − 1.16 = 0.7.  Thus, the estimated dose or risk due to residual
radioactivity in the survey unit is 70% of the release criterion.
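
The adjustment and ranking used in Table 1.12 can be sketched in Python (the function name is
illustrative):

def wrs_on_weighted_sums(reference_sums, survey_sums, dcgl_w=1.0):
    # Add the DCGLw for T (one) to each reference area weighted sum, rank the
    # combined data with midranks for ties, and sum the adjusted reference ranks.
    adjusted_ref = [w + dcgl_w for w in reference_sums]
    combined = adjusted_ref + list(survey_sums)
    def midrank(value):
        below = sum(1 for v in combined if v < value)
        ties = combined.count(value)
        return below + (ties + 1) / 2.0
    return sum(midrank(v) for v in adjusted_ref)

# The survey unit passes if this rank sum exceeds the critical value
# (1162 for n = m = 32 and alpha = 0.05).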
MARSSIM, Revision 1                        1-34                                August 2000

-------
Appendix I
                 Table 1.12 Example WRS Test for Two Radionuclides

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
Avg
StdDev
Reference Area
137Cs
2.00
1.23
0.99
1.98
1.78
1.93
1.73
1.83
1.27
0.74
1.17
1.51
2.25
1.36
2.05
1.61
1.29
1.55
1.82
1.17
1.76
2.21
2.35
1.51
0.66
1.56
1.93
2.15
2.07
1.77
1.19
1.57
1.62
0.43
60Co
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
Survey Unit
137Cs
1.12
1.66
3.02
2.47
2.08
2.96
2.05
2.41
1.74
2.65
1.92
1.91
3.06
2.18
2.08
2.30
2.20
3.11
2.31
2.82
1.81
2.71
1.89
2.12
2.59
1.75
2.35
2.28
2.56
2.50
1.79
2.55
2.28
0.46
60Co
0.06
1.99
0.56
0.26
0.21
0.00
0.20
0.00
0.00
0.16
0.63
0.69
0.13
0.98
1.26
1.16
0.00
0.50
0.00
0.41
1.18
0.17
0.00
0.34
0.14
0.71
0.85
0.87
0.56
0.00
0.30
0.70
0.47
0.48
Weighted Sum
Ref
1.43
0.88
0.71
1.41
1.27
1.38
1.23
1.30
0.91
0.53
0.83
1.08
1.61
0.97
1.46
1.15
0.92
1.11
1.30
0.84
1.26
1.58
1.68
1.08
0.47
1.12
1.38
1.54
1.48
1.27
0.85
1.12
1.16
0.31
Survey
0.83
2.18
2.44
1.89
1.59
2.11
1.56
1.72
1.24
1.97
1.68
1.71
2.25
2.05
2.12
2.22
1.57
2.47
1.65
2.22
1.88
2.02
1.35
1.68
1.92
1.60
2.10
2.06
2.11
1.78
1.43
2.17
1.86
0.36
Adj Ref
2.43
1.88
1.71
2.41
2.27
2.38
2.23
2.30
1.91
1.53
1.83
2.08
2.61
1.97
2.46
2.15
1.92
2.11
2.30
1.84
2.26
2.58
2.68
2.08
1.47
2.12
2.38
2.54
2.48
2.27
1.85
2.12
2.16
0.31
Ranks
Survey
1
43
57
23
9
37
7
16
2
27
13
15
47
30
39
45
8
59
11
44
22
29
3
12
26
10
34
31
36
17
4
42
sum =
799
Adj Ref
56
21
14
55
50
54
46
52
24
6
18
32
63
28
58
41
25
35
51
19
48
62
64
33
5
38
53
61
60
49
20
40
sum =
1281
August 2000
1-35
MARS SIM, Revision 1

-------
                                                                                   Appendix I
MARSSIM, Revision 1                          1-36                                  August 2000

-------
                                   APPENDIX J

             DERIVATION OF ALPHA SCANNING EQUATIONS
                       PRESENTED IN SECTION 6.7.2.2

For alpha survey instrumentation with a background around one to three counts per minute, a
single count will give a surveyor sufficient cause to stop and investigate further. Assuming this
to be true, the probability of detecting given levels of alpha emitting radionuclides can be
calculated by use of Poisson summation statistics.

Discussion
Experiments yielding numerical values for a random variable X, where X represents the number
of events occurring during a given time interval or a specified region in space, are often called
Poisson experiments (Walpole and Myers 1985).  The probability distribution of the Poisson
random variable X, representing the number of events occurring in a given time interval t, is
given by:

                         P(x; λt)  =  e^(−λt) (λt)^x / x! ,     x = 0, 1, 2, ...              (J-1)
where:
       P(x; λt)  =   probability of x events in time interval t
       λ          =   average number of events per unit time
       λt         =   average value expected

To define this distribution for an alpha scanning system, substitutions may be made giving:

                                P(n; m)  =  e^(−m) m^n / n!                                   (J-2)
where:
       P(n; m) =     probability of getting n counts when the average number expected is m
       m      =      λt, average number of counts expected
       n     =      x, number of counts actually detected
For a given detector size, source activity, and scanning rate, the probability of getting n counts
while passing over the source activity with the detector can be written as:
August 2000                                 J-l                        MARS SIM, Revision 1

-------
Appendix J
          P(n; m)  =  e^(−GEd/(60v)) [GEd/(60v)]^n / n!  =  e^(−GEt/60) [GEt/60]^n / n!       (J-3)
where:
       G   =   source activity (dpm)
       E   =   detector efficiency (4π)
       d   =   width of the detector in the direction of scan (cm)
       v   =   scan speed (cm/s)
       t   =   d/v, dwell time over source (s)
If it is assumed that the detector background is equal to zero, then the probability of observing
greater than or equal to 1 count, P(n ≥ 1), within a time interval t is:

                             P(n ≥ 1)  =  1 − P(n = 0)                                        (J-4)
If it is also assumed that a single count is sufficient to cause a surveyor to stop and investigate
further, then:
     P(n ≥ 1)  =  1 − P(n = 0)  =  1 − e^(−GEd/(60v))                                         (J-5)
Figures J.1 through J.3 show this function plotted for three different detector sizes and four
different source activity levels. Note that the source activity levels are given in terms of areal
activity values (dpm per 100 cm2), the probe sizes are the dimensions of the probes in line with
the direction of scanning, and the detection efficiency has been assumed to be  15%. The
assumption is made that the areal activity is contained within a 100 cm2 area and that the detector
completely passes over the area either in one or multiple passes.
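
Equation J-5 can be evaluated directly, as in the short Python sketch below (the function and
parameter names are illustrative; the example point assumes a 5 cm probe scanned at 5 cm/s):

from math import exp

def p_one_or_more(dpm_per_100cm2, efficiency, probe_width_cm, speed_cm_per_s):
    # Probability of at least one count while passing over a 100 cm2 source,
    # assuming negligible detector background (equation J-5).
    m = dpm_per_100cm2 * efficiency * probe_width_cm / (60.0 * speed_cm_per_s)
    return 1.0 - exp(-m)

p = p_one_or_more(500, 0.15, 5, 5)   # about 0.71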

Once a count has been recorded and the surveyor stops, the surveyor should wait a sufficient
period of time such that if the guideline level of contamination is present, the probability of
getting another count is at least 90%. This minimum time interval can be calculated for given
contamination guideline values by substituting the following parameters into Equation J-5 and
solving:
MARSSIM, Revision 1
                       J-2
August 2000

-------
                                                                               Appendix J
       P(n ≥ 1)  =  0.9
       d/v       =  t
       G         =  CA/100

where:
       C   =   contamination guideline (dpm/100 cm2)
       A   =   detector area (cm2)

Giving:

                                    t  =  13800 / (CAE)                                       (J-6)
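
Equation J-6 can be evaluated as follows (the function name is illustrative):

def pause_time_seconds(guideline_dpm_per_100cm2, probe_area_cm2, efficiency):
    # Minimum pause for a 90% chance of at least one additional count if the
    # guideline level of contamination is present (equation J-6).
    return 13800.0 / (guideline_dpm_per_100cm2 * probe_area_cm2 * efficiency)

t = pause_time_seconds(500, 100, 0.15)   # about 1.8 seconds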
Equation J-3 can be solved to give the probability of getting any number of counts while passing
over the source area, although the solutions can become long and complex. Many portable
proportional counters have background count rates on the order of 5 to 10 counts per minute and
a single count will not give a surveyor cause to stop and investigate further. If a surveyor did
stop for every count, and subsequently waited a sufficiently long period to make sure that the
previous count either was or wasn't caused by an elevated contamination level, little or no
progress would be made.  For these types of instruments, the surveyor usually will need to get at
least 2 counts while passing over the source area before stopping for further investigation.
Assuming this to be a valid assumption, Equation J-3 can be solved for n ≥ 2 as follows:
     P(n ≥ 2)  =  1 − P(n = 0) − P(n = 1)

               =  1 − e^(−(GE+B)t/60) − [(GE+B)t/60] e^(−(GE+B)t/60)                          (J-7)
Where:

             P(n ≥ 2)  =  probability of getting 2 or more counts during the time interval t
             P(n=0)  =  probability of not getting any counts during the time interval t
             P(n=l)  =  probability of getting 1 count during the time interval t
             B       =  background count rate (cpm)

All other variables are the same as in Equation J-3.
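
Equation J-7 can be evaluated as in the short Python sketch below (the function and parameter
names are illustrative; the example assumes a 5 cm probe, a 5 cm/s scan speed, the 7 cpm
background used for Figures J.4 through J.6, and the full 100 cm2 of activity within the probe's path):

from math import exp

def p_two_or_more(dpm_per_100cm2, efficiency, probe_width_cm, speed_cm_per_s, bkg_cpm):
    # Probability of two or more counts while passing over a 100 cm2 source,
    # including the detector background (equation J-7).
    t = probe_width_cm / speed_cm_per_s                       # dwell time (s)
    m = (dpm_per_100cm2 * efficiency + bkg_cpm) * t / 60.0    # expected counts
    return 1.0 - exp(-m) - m * exp(-m)

p = p_two_or_more(500, 0.15, 5, 5, 7)   # roughly 0.4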
August 2000
                      J-3
                         MARS SIM, Revision 1

-------
Appendix J

Figures J-4 through J-7 show this function plotted for three different probe sizes and three
different source activity levels. The same assumptions were made when calculating these curves
as were made for Figures J-l through J-3 except that the background was assumed to be 7 counts
per minute.
MARSSIM, Revision 1                         J-4                                 August 2000

-------
                                                                             Appendix J
             [Chart: Alpha Surveys (500 dpm/100 cm2): probability P versus survey speed (cm/s)
             for three probe sizes.]

Figure J.1    Probability (P) of getting one or more counts when passing over a 100 cm2
              area contaminated at 500 dpm/100 cm2 alpha. The chart shows the
              probability versus scanning speed for three different probe sizes. The probe
              size denotes the dimensions of the probes which are in line with the direction
              of scanning.  A detection efficiency of 15% (4π) is assumed.
August 2000
        J-5
      MARS SIM, Revision 1

-------
Appendix J
             [Chart: Alpha Surveys (1,000 dpm/100 cm2): probability P versus survey speed (cm/s)
             for three probe sizes.]

Figure J.2   Probability (P) of getting one or more counts when passing over a 100 cm2
             area contaminated at 1,000 dpm/100 cm2 alpha. The chart shows the
             probability versus scanning speed for three different probe sizes.  The probe
             size denotes the dimensions of the probes which are in line with the direction
             of scanning.  A detection efficiency of 15% (4π) is assumed.
MARSSIM, Revision 1
J-6
August 2000

-------
                                                                              Appendix J
             [Chart: Alpha Surveys (5,000 dpm/100 cm2): probability P versus survey speed (cm/s)
             for three probe sizes.]

Figure J.3   Probability (P) of getting one or more counts when passing over a 100 cm2
             area contaminated at 5,000 dpm/100 cm2 alpha. The chart shows the
             probability versus scanning speed for three different probe sizes.  The probe
             size denotes the dimensions of the probes which are in line with the direction
             of scanning.  A detection efficiency of 15% (4π) is assumed.
August 2000
J-7
MARS SIM, Revision 1

-------
Appendix J
              [Chart: Alpha Surveys (500 dpm/100 cm2): probability P versus survey speed (cm/s)
              for three probe sizes.]

 Figure J.4    Probability (P) of getting two or more counts when passing over a 100 cm2
               area contaminated at 500 dpm/100 cm2 alpha.  The chart shows the
               probability versus scanning speed for three different probe sizes. The probe
               size denotes the dimensions of the probes which are in line with the direction
               of scanning. A detection efficiency of 15% (4π) is assumed.
MARSSIM, Revision 1
J-8
August 2000

-------
                                                                             Appendix J
             [Chart: Alpha Surveys (1,000 dpm/100 cm2): probability P versus survey speed (cm/s)
             for three probe sizes.]

Figure J.5   Probability (P) of getting two or more counts when passing over a 100 cm2
             area contaminated at 1,000 dpm/100 cm2 alpha. The chart shows the
             probability versus scanning speed for three different probe sizes. The probe
             size denotes the dimensions of the probes which are in line with the direction
             of scanning.  A detection efficiency of 15% (4π) is assumed.
August 2000
J-9
MARS SIM, Revision 1

-------
Appendix J
             [Chart: Alpha Surveys (5,000 dpm/100 cm2): probability P versus survey speed (cm/s)
             for three probe sizes.]

Figure J.6   Probability (P) of getting two or more counts when passing over a 100 cm2
             area contaminated at 5,000 dpm/100 cm2 alpha. The chart shows the
             probability versus scanning speed for three different probe sizes. The probe
             size denotes the dimensions of the probes which are in line with the direction
             of scanning.  A detection efficiency of 15% (4π) is assumed.
MARSSIM, Revision 1
J-10
August 2000

-------
                                  APPENDIX K

               COMPARISON TABLES BETWEEN QUALITY
                          ASSURANCE DOCUMENTS

The comparison tables in this appendix provide a reference for the MARSSIM user who may not
be familiar with developing a QAPP based on EPA QA/R-5 (EPA 1994c). The tables relate the
basic recommendations and requirements of EPA QA/R-5 and other quality assurance documents
the reader may be more familiar with.

Each of the quality assurance documents compared in these tables was developed for a specific
industry and scope.  For this reason, there is not a direct comparison from one document to
another. Rather, the tables are designed to show similarities between different quality assurance
documents. In addition, there are topics specific to certain quality assurance documents that do
not have a counterpart in these comparison tables.

If there is no section listed as being comparable with a section of EPA QA/R-5, this does not
necessarily mean that the topic is not covered by the quality assurance document. In some cases
the topic may have been divided up into several subtopics that are distributed between other
sections of the particular document.

This appendix is not meant to provide a thorough cross-reference between different quality
assurance documents.  The purpose of these comparison tables is to demonstrate how the content
of QAPPs might be arranged differently and show a user the location of important information
concerning radiation surveys and site investigations. This might occur if the QAPP is developed
using guidance the reviewer is unfamiliar with.

EPA QA/R-5 is compared with five quality assurance documents in the following tables:

             EPA QAMS-005/80 (EPA 1980d)
              ASME NQA-1 (ASME 1989)
             DOE Order 5700.6c (DOE 1991c)
             MIL-Q-9858A (DOD 1963)
             ISO 9000 (ISO 1987)
August 2000                               K-l                        MARSSIM, Revision 1

-------
Appendix K
            Table K.1 Comparison of EPA QA/R-5 and EPA QAMS-005/80
EPA QA/R-5 Elements
Al
A2
A3
A4
A5
A6
A7
A8
A9
A10
Bl
B2
B3
B4
B5
B6
B7
B8
B9
B10
Cl
C2
Dl
D2
D3
Title and Approval Sheet
Table of Contents
Distribution List
Project/Task Organization
Problem Definition/Background
Project/Task Description
Quality Objectives and Criteria for
Measurement Data
Project Narrative
Special Training Requirements/Certification
Documentation and Records
Sampling Process Design
Sampling Methods Requirements
Sample Handling and Custody Requirements
Analytical Methods Requirements
Quality Control Requirements
Instrument/Equipment Testing, Inspection,
and Maintenance Requirements
Instrument Calibration and Frequency
Inspection/ Acceptance Requirements for
Supplies and Consumables
Data Acquisition Requirements
Data Quality Management
Assessments and Response Actions
Reports to Management
Data Review, Validation, and Verification
Requirements
Validation and Verification Methods
Reconciliation with User Requirements
EPA QAMS-005/80
1.0
2.0
Title Page with Provision for Approval
Signatures
Table of Contents

4.0
3.0
3.0
5.0
Project Organization and Responsibility
Project Description
Project Description
Quality Assurance Objectives for
Measurement Data



6.0
6.0
7.0
9.0
11.0
13.0
8.0
Sampling Procedures
Sampling Procedures
Sample Custody
Analytical Methods
Internal Quality Control Checks and
Frequency
Preventive Maintenance Procedures and
Schedules
Calibration Procedures and Frequency



12.0
15.0
16.0
10.0
10.0
Assessment and Response Actions
Corrective Actions
Quality Assurance Reports to Management
Data Reduction, Validation, and Reporting
Data Reduction, Validation, and Reporting

MARSSIM, Revision 1
K-2
August 2000

-------
                                                                         Appendix K
              Table K.2 Comparison of EPA QA/R-5 and ASME NQA-1
EPA QA/R-5 Elements
A1
A2
A3
A4
A5
A6
A7
A8
A9
A10
B1
B2
B3
B4
B5
B6
B7
B8
B9
B10
C1
C2
D1
D2
D3
Title and Approval Sheet
Table of Contents
Distribution List
Project/Task Organization
Problem Definition/Background
Project/Task Description
Quality Objectives and Criteria for
Measurement Data
Project Narrative
Special Training Requirements/Certification
Documentation and Records
Sampling Process Design
Sampling Methods Requirements
Sample Handling and Custody Requirements
Analytical Methods Requirements
Quality Control Requirements
Instrument/Equipment Testing, Inspection,
and Maintenance Requirements
Instrument Calibration and Frequency
Inspection/Acceptance Requirements for
Supplies and Consumables
Data Acquisition Requirements
Data Quality Management
Assessments and Response Actions
Reports to Management
Data Review, Validation, and Verification
Requirements
Validation and Verification Methods
Reconciliation with User Requirements
ASME NQA-1 Elements



1.
Organization

3.
2.
Design Control
Quality Assurance Program
8. Identification and Control of Items

4.
6.
3.
5.
13.
5.
9.
11.
10.
12.
14.
7.
8.
Procurement Document Control
Document Control
Design Control
Instructions, Procedures, and Drawings
Handling, Storage, and Shipping
Instructions, Procedures, and Drawings
Control of Processes
Test Control
Inspection
Control of Measuring and Test Equipment
Inspection, Test, and Operating Status
Control of Purchased Items and Services
Identification and Control of Items


15.
16.
18.
17.
Control of Nonconforming Items
Corrective Action
Audits
Quality Assurance Records



August 2000
K-3
MARSSIM, Revision 1

-------
Appendix K
            Table K.3 Comparison of EPA QA/R-5 and DOE Order 5700.6c
EPA QA/R-5 Elements
A1
A2
A3
A4
A5
A6
A7
A8
A9
A10
B1
B2
B3
B4
B5
B6
B7
B8
B9
B10
C1
C2
D1
D2
D3
Title and Approval Sheet
Table of Contents
Distribution List
Project/Task Organization
Problem Definition/Background
Project/Task Description
Quality Objectives and Criteria for
Measurement Data
Project Narrative
Special Training Requirements/Certification
Documentation and Records
Sampling Process Design
Sampling Methods Requirements
Sample Handling and Custody Requirements
Analytical Methods Requirements
Quality Control Requirements
Instrument/Equipment Testing, Inspection,
and Maintenance Requirements
Instrument Calibration and Frequency
Inspection/Acceptance Requirements for
Supplies and Consumables
Data Acquisition Requirements
Data Quality Management
Assessments and Response Actions
Reports to Management
Data Review, Validation, and Verification
Requirements
Validation and Verification Methods
Reconciliation with User Requirements
DOE Order 5700.6C Elements



2 Personnel Training and Qualification
1 Program

1 Program

2 Personnel Training and Qualification
4 Documents and Records
6 Design
5 Work Processes

5 Work Processes

8 Inspection and Acceptance Testing

7 Procurement
8 Inspection and Acceptance Testing


10 Independent Assessment
9 Management Assessment


3 Quality Improvement
MARSSIM, Revision 1
K-4
August 2000

-------
                                                                        Appendix K
              Table K.4 Comparison of EPA QA/R-5 and MIL-Q-9858A
EPA QA/R-5 Elements
A1
A2
A3
A4
A5
A6
A7
A8
A9
A10
B1
B2
B3
B4
B5
B6
B7
B8
B9
B10
C1
C2
D1
D2
D3
Title and Approval Sheet
Table of Contents
Distribution List
Project/Task Organization
Problem Definition/Background
Project/Task Description
Quality Objectives and Criteria for
Measurement Data
Project Narrative
Special Training Requirements/Certification
Documentation and Records
Sampling Process Design
Sampling Methods Requirements
Sample Handling and Custody Requirements
Analytical Methods Requirements
Quality Control Requirements
Instrument/Equipment Testing, Inspection,
and Maintenance Requirements
Instrument Calibration and Frequency
Inspection/Acceptance Requirements for
Supplies and Consumables
Data Acquisition Requirements
Data Quality Management
Assessments and Response Actions
Reports to Management
Data Review, Validation, and Verification
Requirements
Validation and Verification Methods
Reconciliation with User Requirements


MIL-Q-9858A Elements



3.1
Organization


3.2
Initial Quality Planning


3.4
4.1
Records
Drawings, Documentation, and Changes

3.3
6.4
o o
5.5
6.7
4.2
4.2
5.0
6.1
Work Instructions
Handling, Storage, and Delivery
Work Instructions
Identification of Inspection Status
Measuring and Test Equipment
Measuring and Test Equipment
Control of Purchases
Materials and Material Control

3.4
3.5
6.5
3.6
Records
Corrective Action
Nonconforming Material
Costs Related to Quality

6.6
Statistical Quality Control

6.2
6.3
Production Processing and Fabrication
Completed Item Inspection and Test
August 2000
K-5
MARSSIM, Revision 1

-------
Appendix K
                 Table K.5  Comparison of EPA QA/R-5 and ISO 9000
EPA QA/R-5 Elements
A1
A2
A3
A4
A5
A6
A7
A8
A9
A10
B1
B2
B3
B4
B5
B6
B7
B8
B9
B10
C1
C2
D1
D2
D3
Title and Approval Sheet
Table of Contents
Distribution List
Project/Task Organization
Problem Definition/Background
Project/Task Description
Quality Objectives and Criteria for
Measurement Data
Project Narrative
Special Training Requirements/Certification
Documentation and Records
Sampling Process Design
Sampling Methods Requirements
Sample Handling and Custody Requirements
Analytical Methods Requirements
Quality Control Requirements
Instrument/Equipment Testing, Inspection,
and Maintenance Requirements
Instrument Calibration and Frequency
Inspection/Acceptance Requirements for
Supplies and Consumables
Data Acquisition Requirements
Data Quality Management
Assessments and Response Actions
Reports to Management
Data Review, Validation, and Verification
Requirements
Validation and Verification Methods
Reconciliation with User Requirements

ISO 9000 Elements



4
Management Responsibility


5
5.2
Quality System Principles
Structure of the Quality System



8
10
16
10
11
13
Quality in Specification and Design
Quality in Production
Handling and Post Production Functions
Quality in Production
Control of Production
Control of Measuring and Test Equipment

9
11.2
Quality in Procurement
Material Control and Traceability


5.4
14
15
5.3
6
11.7
12
Auditing the Quality System
Nonconformity
Corrective Action
Documentation of the Quality System
Economics — Quality Related Costs
Control of Verification Status
Verification Status

7
Quality in Marketing
MARSSIM, Revision 1
K-6
August 2000

-------
                                   APPENDIX L

             REGIONAL RADIATION PROGRAM MANAGERS

The following is a directory list of regional program managers in Federal agencies who
administer radiation control activities and have responsibility for certain radiation protection
activities.  The telephone numbers and addresses in this appendix are subject to change without
notice. A more complete directory list of professional personnel in state and local government
agencies is available from the Conference of Radiation Control Program Directors, Inc.
(CRCPD). This directory is updated and distributed yearly.  To obtain a copy of this annual
publication please contact:

                                       CRCPD
                                 205 Capital Avenue
                                 Frankfort, KY 40601
                                   (502) 227-4543
                                  http://www.crcpd.org
                                   staff@crcpd.org
August 2002                                L-1                         MARSSIM, Revision 1

-------
Appendix L

L.1  Department of Energy (DOE)

DOE Home Page                                                    http://www.doe.gov

             Oak Ridge Operations Office                       Telephone: (865) 576-1005
             ORO Public Affairs Office                       http://www.oakridge.doe.gov/
             Post Office Box 2001
             Oak Ridge, Tennessee  37831

             Savannah River Operations Office                  Telephone: (803) 725-2889
             Department of Energy                                   http://www.srs.gov/
             Post Office Box A
             Aiken,  South Carolina  29808

             Albuquerque Operations Office                     Telephone: (505) 845-6202
             Department of Energy                                 http://www.doeal.gov/
             Post Office Box 5400
             Albuquerque, New Mexico 87185-5400

             Chicago Operations Office                         Telephone: (630) 252-2000
             Department of Energy                                http://www.ch.doe.gov/
             9700 South Cass Avenue
             Argonne, Illinois 60439

             Idaho Operations Office                           Telephone: (208) 526-0833
             Department of Energy                  http://www.id.doe.gov/doeid/index.html
             Post Office Box 1625
             Idaho Falls, Idaho 83401

             Oakland Operations Office                         Telephone: (510) 637-1762
             Department of Energy                               http://www.oak.doe.gov/
             1301 Clay Street
             Oakland, California  94612

             Richland Operations Office                        Telephone: (509) 376-7501
             Department of Energy                               http://www.hanford.gov/
             Post Office Box 550, A7-75
             Richland, Washington  99352

             Nevada Operations Office                         Telephone: (702) 295-3521
             Department of Energy                                http://www.nv.doe.gov/
             PO Box 98518
             Las Vegas, NV  89193-8518


MARSSIM, Revision 1                        L-2                               August 2002

-------
                                                                           Appendix L
L.2  Environmental Protection Agency (EPA)

EPA Home Page                                                      http://www.epa.gov

Region 1     (CT, MA, ME, NH, RI, VT)                     Telephone: (888) 372-7341
             U.S. Environmental Protection Agency                    (617) 918-1111
             Region 1                                  http://www.epa.gov/region01/
             1 Congress Street
             Boston, Massachusetts 02114-2023

Region 2     (NJ, NY, PR, VI)                             Telephone: (212) 637-3000
             U.S. Environmental Protection Agency       http://www.epa.gov/Region2/
             Region 2
             290 Broadway
             New York, New York 10007-1866

Region 3     (DC, DE, MD, PA, VA, WV)                     Telephone: (215) 597-9800
             U.S. Environmental Protection Agency                    (215) 814-5000
             Region 3 (3CGOO)                                        (800) 438-2474
             1650 Arch Street                          http://www.epa.gov/region03/
             Philadelphia, Pennsylvania 19103-2029

Region 4     (AL, FL, GA, KY, MS, NC, SC, TN)             Telephone: (404) 562-9900
             U.S. Environmental Protection Agency                    (800) 241-1754
             Region 4                                   http://www.epa.gov/region4/
             Atlanta Federal Center
             61 Forsyth Street, SW
             Atlanta, Georgia 30303-3104

Region 5     (IL, IN, MI, MN, OH, WI)                     Telephone: (312) 353-2000
             U.S. Environmental Protection Agency                    (800) 621-8431
             Region 5                                   http://www.epa.gov/region5/
             77 West Jackson Boulevard
             Chicago, Illinois 60604
August 2002
L-3
MARSSIM, Revision 1

-------
Appendix L

Region 6     (AR, LA, NM, OK, TX)                         Telephone: (214) 665-2200
             U.S. Environmental Protection Agency                    (800) 887-6063
             Region 6                         http://www.epa.gov/earth1r6/index.htm
             1445 Ross Avenue, Suite 1200
             Dallas, Texas 75202

Region 7     (IA, KS, MO, NE)                             Telephone: (913) 551-7003
             U.S. Environmental Protection Agency                    (800) 223-0425
             Region 7                                  http://www.epa.gov/rgytgrnj/
             901 North 5th Street
             Kansas City, Kansas 66101

Region 8     (CO, MT, ND, SD, UT, WY)                     Telephone: (303) 312-6312
             U.S. Environmental Protection Agency                    (800) 227-8917
             Region 8                                  http://www.epa.gov/unix0008/
             999 18th Street, Suite 500
             Denver, Colorado 80202-2466

Region 9     (AZ, CA, HI, NV, American Samoa, and Guam)   Telephone: (415) 947-8700
             U.S. Environmental Protection Agency      http://www.epa.gov/region09/
             Region 9
             75 Hawthorne Street
             San Francisco, California 94105

Region 10    (AK, ID, OR, WA)                             Telephone: (206) 553-1200
             U.S. Environmental Protection Agency                    (800) 424-4372
             Region 10                                 http://www.epa.gov/r10earth/
             1200 Sixth Avenue
             Seattle, Washington 98101
MARSSIM, Revision 1
                            L-4
August 2002

-------
                                                                           Appendix L

L.3  Nuclear Regulatory Commission (NRC)

NRC Home Page                                                    http://www.nrc.gov

Region I      (CT, DC, DE, MA, MD, ME, NH, NJ, NY, PA, RI, VT)
             Administrator                                   Telephone: (610) 337-5000
             U.S. Nuclear Regulatory Commission                         (800) 432-1156
             475 Allendale Road
             King of Prussia, Pennsylvania 19406-1415

Region II     (AL, FL, GA, KY, MS, NC, PR, SC, TN, VA, VI, WV, Panama Canal)
             Administrator                                   Telephone: (404) 562-4400
             U.S. Nuclear Regulatory Commission                         (800) 577-8510
             Sam Nunn Atlanta Federal Center, 23 T85
             61 Forsyth Street, SW
             Atlanta, Georgia 30303-8931

Region III   (IA, IL, IN, MI, MN, MO, OH, WI)
             Administrator                                   Telephone: (630) 829-9500
             U.S. Nuclear Regulatory Commission                         (800) 522-3025
             801 Warrenville Road
             Lisle, Illinois 60532-4351

Region IV    (AR, CO, ID, KS, LA, MT, NE, ND, NM, OK, SD, TX, UT, WY, AK, AZ, CA,
             HI, NV, OR, WA, Pacific Trust Territories)
             Administrator                                   Telephone: (817) 860-8100
             U. S. Nuclear Regulatory Commission                         (800) 952-9677
             Texas Health Resources Tower
             611 Ryan Plaza Drive, Suite 400
             Arlington, Texas  76011-8064
August 2002                               L-5                        MARSSIM, Revision 1

-------
Appendix L

L.4   Department of the Army

       The following is a list of key personnel within the Department of the Army who
       administer radiation control activities and have responsibilities for certain radiation
       protection activities.

             Deputy for Environmental Safety &                  Telephone: (703) 695-7824
                    Occupational Health
             Office of the Assistant Secretary of the Army
             (Installations, Logistics, & Environment)
             110 Army Pentagon
             Washington, DC 20310-0110

             Director of Army Radiation Safety                  Telephone: (703) 695-7291
             Army Safety Office
             DACS-SF
             Chief of Staff
             200 Army Pentagon
             Washington, DC 20310-0200

             Radiological Hygiene Consultant                    Telephone: (301) 295-0267
             Office of The Surgeon General
             Walter Reed Army Medical Center
             Attn:  MCHL-HP
             Washington, DC 20307-5001
MARSSIM, Revision 1                        L-6                                August 2002

-------
                                                                            Appendix L

L.5  Department of the Navy

      The following is a list of key personnel within the Department of the Navy who
      administer radiation control activities and have responsibilities for certain radiation
      protection activities.

      Naval Radiation Safety Committee                        Telephone: (703) 602-2582
      Chief of Naval Operations (N455)
       2211 S. Clark Place
      Crystal Plaza #5, Room 680
      Arlington, VA 22202-3735

      Commander (SEA-07R)                                 Telephone: (703) 602-1252
      Radiological Controls Program
      Naval Sea Systems Command
      2531 Jefferson Davis Highway
      Arlington, VA 22242-5160

      Officer in Charge                                       Telephone: (757) 887-4692
      Radiological Affairs Support Office
      P.O. Drawer 260
      Yorktown, VA  23691-0260
August 2002                                L-7                        MARSSIM, Revision 1

-------
Appendix L

L.6   Department of the Air Force

       The following is a list of key personnel within the Department of the Air Force who
       administer radiation control activities and have responsibilities for certain radiation
       protection activities.

             Chief, Materials Licensing                         Telephone: (202) 767-4313
             USAF Radioisotope Committee
             AFMOA/SGOR
             110 Luke Avenue, Room 405
              Bolling AFB, DC  20332-7050

             Chief, Consultant Branch                          Telephone: (210) 536-3486
             Radiation Services Division, Armstrong Laboratory
             IERA/SDRH
             2402 E Street
             Brooks AFB, TX  78235-5114
MARSSIM, Revision 1                        L-8                                August 2002

-------
                                   APPENDIX M

                SAMPLING METHODS: A LIST OF SOURCES
M.1   Introduction

Planning activities associated with field survey work include developing new sampling methods
and compiling or adopting existing ones. The following listing includes documents that represent
examples of the types of information one encounters when searching for sampling methods.
The references are presented with brief annotations that characterize the information found in
each document.

Journal articles and books may list references that lead to still other types of useful information.
Depending on survey needs, the media being sampled, or site-specific requirements, one may
follow these references to the original papers or documents in which specific sampling
techniques were first introduced.

The present listing is not exhaustive. Other titles or resources for sampling methods are available
through online literature databases; Federal, State, and university libraries; the internet; and other
sources.
M.2   List of Sources

Department of Energy (DOE). 1987. The Environmental Survey Manual. DOE/EH-0053, Vol.
1 of 4. DOE, Office of the Assistant Secretary for Environment, Safety, and Health, Office of
Environmental Audit, Washington, D.C.

•      General Description of Document:  Size: Approximately 188 pages (single sided)—This
        is the first of a four-volume set that amounts to over 4 inches (total thickness) of
        documentation related to environmental surveys. The first volume represents the main
        document, with the remaining three volumes containing eleven appendices.

•      Key Features of This Document:  Unlike a number of other references listed here, this
       document does include information related to radionuclides and considers biota (animal,
       plant, and related sample types).  Flow charts, checklists, planning diagrams, and figures
       help the reader to visualize a number of topics described in the text of all four volumes.
        Section 2 of this volume covers topics related to a survey team's activities and survey
        reports.  Section 3 considers the use of existing data, followed by technical checklists in
        Section 4 and health and safety issues described in Section 5.
August 2000                               M-1                        MARSSIM, Revision 1

-------
Appendix M
        A quick review of this first volume reveals limited depth to the information presented.
        There is little descriptive "How To Sample" information given here.  However, as an
        overview, the document is quite comprehensive and this may encourage a survey team to
        consider obtaining additional information relevant to a particular project need.
Department of Energy (DOE). 1987. The Environmental Survey Manual: Appendices A, B, and
C. DOE/EH-0053, Vol. 2 of 4.  DOE, Office of the Assistant Secretary for Environment, Safety,
and Health, Office of Environmental Audit, Washington, D.C.

•      General Description of Document:  Size: Approximately 188 pages (double sided)—This
       second volume contains three of eleven appendices.

•      Key Features of This Document:  The appendices include: A) Criteria for Data
       Evaluation, B) Checklists and Lines of Inquiry, and C) Health and Safety Plan for On-Site
       Survey Activities.
Department of Energy (DOE). 1987. The Environmental Survey Manual: Appendix D.
DOE/EH-0053, Vol. 3 of 4.  DOE, Office of the Assistant Secretary for Environment, Safety, and
Health,  Office of Environmental Audit, Washington, D.C.

•      General Description of Document:  Size: Approximately 438 pages (double sided)—This
       single volume is the largest part of the four part set and contains only one appendix:
       Appendix D - Analytical Methods.

•      Key Features of This Document:  The topics presented here have little to do with sample
       collection and are mostly concerned with the types of compounds or constituents within a
       sample. A radiological section covers a number of radionuclides that one may encounter
       in a number of sample matrices—including in water, air, soil, and sediments. Again, this
       is an appendix dedicated to sample analysis.
Department of Energy (DOE). 1987. The Environmental Survey Manual: Appendices E, F, G,
H, I, J, and K. DOE/EH-0053, Vol. 4 of 4. DOE, Office of the Assistant Secretary for
Environment, Safety, and Health, Office of Environmental Audit, Washington, D.C.

•      General Description of Document: Size: Approximately 312 pages (double sided)—This
       fourth and final volume includes seven appendices.
MARSSIM, Revision 1                        M-2                                August 2000

-------
                                                                              Appendix M


        Key Features of This Document:  Appendix E, entitled Field Sampling Protocols and
        Guidance, offers a number of site scenarios to describe an approach to sampling
        under varied conditions. Each scenario is followed by a set of sampling procedures
        appropriate for a particular sample matrix.  This appendix is 216 pages in length, making
        it the largest part of Volume 4.  Diagrams are included to illustrate scenarios and the
        appearance of sampling equipment.

       The remaining appendices cover:  F) guidelines for preparation of quality assurance plans,
       G) decontamination guidance, H)  data management and analysis, I) sample and document
       management guidance, J) health and safety guidance for sampling and analysis teams, and
       K) documents for sampling and analysis program.
Department of Energy (DOE).  1991. Environmental Regulatory Guide for Radiological Effluent
Monitoring and Environmental Surveillance. DOE/EH-0173T, DOE, Assistant Secretary for
Environment, Safety, and Health, Washington, D.C.  (DE91-013607)

•      General Description of Document: Size: approximately 90 pages— This guide covers a
       number of topics related to radiation and environmental surveillance.

•      Key Features of This Document:  To accomplish environmental surveillance, various
        sample types—from biotic (animal and plant) to abiotic (air, water, soil, etc.)—are
       considered in Chapter 5 (title: Environmental Surveillance).  The basis for taking certain
       samples appears along with information on sample location and frequency.  A brief
       statement on sampling methods completes each section but procedures or techniques are
       not given in detail.  References to other guidance documents  on sampling are cited. The
       reader is directed to other  sources to obtain additional regulatory information or
       descriptions of specific procedures.

       Chapter 6 provides  information on laboratory procedures.  Other chapters cover: liquid
       effluent monitoring, airborne effluent monitoring, meteorological monitoring, data
       analysis and statistical treatment, dose calculations, records and reports, quality assurance
       (QA), and reports.
Department of Energy (DOE).  1994. Decommissioning Handbook.  DOE/EM-0142P.  DOE,
Office of Environmental Restoration, Germantown, MD

•      General Description of Document: Size: Approximately 312 pages—The manual is
       essentially written for those involved in decommissioning a nuclear power facility.  While
       not specifically focused on radiation sampling methods, this document may play a role in

August 2000                                M-3                        MARSSIM, Revision 1

-------
Appendix M
       identifying activities or sampling needs related to survey work before or during
       remediation at some Federal facilities.

       Key Features of This Document: Chapter 6 presents information on final project
       configuration based on planning and as such speaks of site boundaries. Chapter 7
       presents topics related to characterization including on-site measurements.

       This document includes discussion and illustrations of robotic devices used in sampling
       operations. Perhaps only appropriate in extreme situations, the use of a robot for
       obtaining a sample may apply where radiation levels are high, dust or air quality pose
       problems,  or where technical staff cannot physically reach a sample location due to
       structural limitations.
Environmental Protection Agency (EPA).  1980.  Samplers and Sampling Procedures for
Hazardous Waste Streams. EPA-600/2-80-018, EPA, Municipal Environmental Research
Laboratory, Cincinnati, OH.

•      General Description of Document: Size: 67 pages—the procedures listed here cover
       different types of media and include helpful diagrams of sampling devices.

•      Key Features of This Document: While not specifically geared to radioactive samples,
       this short manual outlines and presents information in a logical sequence—starting with
       descriptions of sampling devices, followed by discussion of selecting an appropriate
       device for various media (including samples taken from various sources;  e.g., drum,
       barrel, waste pile), container types, labels, seals, use of a log book, chain  of custody,
       sample receipt and logging, preservation and storage of samples, and references. The
       document includes five appendices, covering development of the composite liquid waste
       sampler, parts for constructing the sampler, checklist of items required in the field for
       sampling hazardous waste, random sampling, and systematic errors in using the
       composite liquid waste sampler.
Environmental Protection Agency (EPA).  1982.  Test Methods For Evaluating Solid Waste,
Physical / Chemical Methods, 2nd Edition.  EPA, Office of Solid Waste, Washington, D.C.
(PB87-120291)

•      General Description of Document: Size: Approximately 375 pages—composed of
       chapters and methods that update the first edition of this volume.
MARSSIM, Revision 1                        M-4                                August 2000

-------
                                                                             Appendix M


       Key Features of This Document:  Chapter 1 of this manual pulls together information
       from the first three chapters of the first edition.  This includes a Sampling Methodology
       section that addresses statistics, sampling strategies and examples, implementing a
       sampling plan, plus tables and figures of sampling devices, etc.  The main focus is on
       solid waste including metals and organics. Methods are described with the same format
       as indicated above in reference 1.  As above, the methods include some information
       relevant to the field component of sampling work, but the remainder of each method
       essentially is most useful to laboratory personnel.
Environmental Protection Agency (EPA).  1982. Handbook for Sampling and Sample
Preservation of Water and Wastewater. EPA-600/4-82-029, EPA, Environmental Monitoring
and Support Laboratory, Cincinnati, OH.  (PB83-124503)

•      General Description of Document: Size: Approximately 500 pages—composed of
       information specifically focused on sample collection and preservation. While the
       document concerns only water sampling, this volume is comprehensive and even includes
       a chapter on Sampling Radioactive Materials.

•      Key Features of This Document:  The handbook is geared to address sampling issues.
       The scope of the document covers all types or sources of water, including: municipal,
       industrial, surface, agricultural, ground, and drinking waters.  Types of samples are
       defined and discussed, including grab and composite samples. Diagrams, tables, and
       forms are provided to illustrate key points raised in the text. Statistical methods and
       related tables are provided. Each topic is accompanied by references.  The chapter on
       radioactive samples is brief but touches on: background, radioactive decay, detection
       capability, frequency of sampling, sampling location, sample volume, containers,
       filtration, preservation, general procedures, radiation safety, and references.
Environmental Protection Agency (EPA).  1984. Soil Sampling Quality Assurance User's
Guide. EPA 600/4-84-043, EPA, Environmental Monitoring Systems Laboratory, Office of
Research and Development, Las Vegas, NV.

•      General Description of Document:  Size: 102 pages—The introduction to this document
       starts with:  "An adequate quality assurance/quality control (QA/QC) program requires
       the identification and quantification of all sources of error associated with each step of a
        monitoring program so that the resulting data will be of known quality. The components
       of error, or variance, include those associated with sampling, sample preparation,
       extraction, analysis, and residual error."
August 2000                                M-5                        MARSSIM, Revision 1

-------
Appendix M
       Key Features of This Document:  Because of potential inhomogeneity in soil samples, the
       authors state this QA/QC document is specifically concerned with soil sampling.  The
       general outline of the document includes: objectives of QA/QC, statistics, exploratory
       studies, sample number and sample sites, sample collection, sample handling and
       documentation, analysis and interpretation of QA/QC data, and systems audits and
       training. References are provided followed by two appendices covering sample number
       precision and confidence plus tables for use in calculating confidence tolerance limits and
       judging validity of measurements.

       The sample collection chapter is very brief and does not specifically outline methods or
       types of equipment.  This and the following chapter on sample handling and
       documentation mention relevant topics in light of QA/QC.
Environmental Protection Agency (EPA). 1986. Engineering Support Branch Standard
Operating Procedures and Quality Assurance Manual. EPA, Region IV, Environmental
Services Division, Athens, GA. (Sections 3 to 5 reviewed)

•      General Description of Document:  Size: approximately 90 pages (single sided)—The
       introduction states: "The objectives of this section are to present the Branch standard
       operating procedures for sample identification, sample control and chain of custody,
        maintenance of field records, and document control."

•      Key Features of This Document:  The basic format of the document is that of a
       compendium of standard operating procedures bound in one volume. Each Standard
       Operating Procedure (SOP) is several pages and is dedicated to a specific topic.  A five
       page outline pertaining to sampling procedures presents a brief overview that is a
       relatively typical treatment of this topic.  Sample preservation, for example, is
       summarized with five bullet points.  The next section offers a three page listing of
       definitions covering grab, composite, split, duplicate, reference or control, and
       background samples, plus a very brief definition for sample aliquot.

       The document lacks figures but does include descriptive notes for equipment and
       methods related to taking samples of waste water, surface water (fresh and salt water),
       ground water, potable water supply, soil, samples from landfills and hazardous waste
        sites, followed by references.  The last part of the guide includes information on making
       flow measurements.

       The document does not appear to focus on radioactive materials, but as with other
       documents the information can in part be used in conjunction with obtaining radioactive
       samples.

MARSSIM, Revision 1                        M-6                                August 2000

-------
                                                                             Appendix M


Environmental Protection Agency (EPA). 1987. A Compendium of Superfund Field Operations
Methods. EPA/540/P-87/001, EPA, Office of Emergency and Remedial Response, Washington,
D.C.

•      General Description of Document:  Size: Approximately 375 pages—the size and title of
        this document are a clue to the comprehensive nature of this volume.  In brief, the text of
        this document provides a potentially valuable resource to field workers involved with
        Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) surveys.
       While relatively complete—in that the document covers a broad range of topics—some
       readers may desire additional depth to the information provided here. Conversely,
       planners and field personnel might gain added insight by considering the broad range of
       topics included here before approaching the survey process.

•      Key Features of This Document:  Perhaps the best summary of this compendium is
       provided by a listing of sections, as follows: 1) Use of the Compendium, 2) Preparation
       of Project Description and Statement of Objectives, 3) Implementing Field Objectives, 4)
       Sample Control, Including Chain of Custody, 5) Laboratory Interface, 6) Sample
       Containers, Preservation, and Shipping, 7) Field Methods for Screening Hazardous
       Material, 8) Earth Sciences (i.e., drilling, excavations, reconnaissance, geophysics, and
       ground water), 9) Earth Sciences Laboratory Procedures, 10) Surface Hydrology, 11)
       Meteorology and Air Quality, 12) Specialized Sampling Techniques (e.g., wipes, human
       habitation sampling, TCDD, and container sampling), 14) Land Surveying, Aerial
       Photography, and Mapping, 15) Field Instrumentation (a comprehensive treatment
       including radiation monitors), 16) data handling, 17) Document Control, 18) Corrective
       Action, 19) QA Audit Procedures, and 20) QA Reporting.

       That this document serves objectives set forth by Superfund—and is not specifically
       focused on radionuclide sampling—in no way diminishes the importance of the
       compendium's complete overview of field sampling equipment and activities.
Environmental Protection Agency (EPA). 1989. Test Methods For Evaluating Solid Waste
Physical / Chemical Methods - Third Edition Proposed Update Package. EPA, Office of Solid
Waste, Washington, D.C. (PB89-148076)

•      General Description of Document: Size: Approximately 500 pages—composed of several
        updated chapters and 46 methods that are described by text and graphics. Only methods
        that are updated from the 2nd Edition appear in this volume.
August 2000                               M-7                        MARSSIM, Revision 1

-------
Appendix M
       Key Features of This Document: Chapters 1, 2, 4, and 7 describe QC, Choosing the
       Correct Procedure, Organic Analytes, and Regulatory Definitions, respectively.  Of
       primary interest are the 46 methods that are described in what constitutes the bulk of this
       document. However, as is evident from some of the first methods listed for organics,
       sample collection techniques are only briefly touched on by a section of Chapter Four.
        This essentially makes the methods laboratory-oriented protocols; the only reference
        to field methods appears in the text of a short chapter as opposed to part of each method.
       Some methods do list Sample Collection, Preservation, and Handling information with
       emphasis on use of containers, acidification or refrigeration, or a brief set of points to
       consider when preparing to go out to the field.

       Each method includes a method number and a title,  plus the following information:
       1) Scope and Application, 2) Summary of Method, 3) Interferences, 4) Apparatus and
       Materials, 5) Reagents, 6) Sample Collection, Preservation, and Handling, 7) Procedure,
       8) QC, 9) Method Performance, and  10) References. Diagrams,  flow charts, and tables
       follow the initial sequence of sections.

        The listing of methods includes Method 9320 for Radium-228, Method 9310 for Gross
       Alpha & Gross Beta, and Method 9315 for Alpha-Emitting Radium Isotopes.  These
       methods do not appear in the bound volume used for this review and thus no further
       comment is offered here.
Environmental Protection Agency (EPA).  1991.  Compendium of ERT Surface Water and
Sediment Sampling Procedures. OSWER Directive 9360.4-03, EPA, Office of Emergency and
Remedial Response, Washington, D.C.  (PB91-921274)

•      General Description of Document: Size: 31 pages—this document includes three
       standard operating procedures (SOPs), the first of which is the same as the first SOP
       listed in the document described below.

•      Key Features of This Document: The three SOPs included in this document are: 1)
       Sampling Equipment Decontamination, 2) Surface Water Sampling, and 3) Sediment
       Sampling. Each SOP is similar in content with sections that cover: scope, method
       summary, preservation, containers, equipment, apparatus, etc.
Environmental Protection Agency (EPA).  1991.  Compendium of ERT Groundwater Sampling
Procedures. OSWER Directive 9360.4-06, EPA, Office of Emergency and Remedial
Response, Washington, D.C. (PB91-921275)
MARSSIM, Revision 1                        M-8                               August 2000

-------
                                                                            Appendix M


       General Description of Document: Size: 71 pages—this document embodies eight
       standard operating procedures (SOPs) with a similar format as that described above.

       Key Features of This Document: The SOPs covered in this document include sampling
       equipment decontamination, ground water well sampling, soil gas samples, installing
       monitor wells, water level measurements, and other topics related to ground water and
       wells.
Environmental Protection Agency (EPA).  1991.  Compendium of ERT Soil Sampling and
Surface Geophysics Procedures.  OSWER Directive 9360.4-02, EPA, Office of Emergency and
Remedial Response, Washington, D.C.  (PB91-921273)

•      General Description of Document: Size: 39 pages—this document lists four standard
       operating procedures (SOPs) for soil sampling—with a similar format as that described
       above.

•      Key Features of This Document: The SOPs covered in this document include sampling
       equipment decontamination, soil sampling, soil gas sampling, and soil sampling and
       surface geophysics.  The SOP for soil sampling is five pages in length. This treatment
        essentially covers samples collected from the soil surface, the use of augers and tube
        samplers, a trier, a split-spoon (barrel) sampler, and excavation techniques.
Environmental Protection Agency (EPA).  1991.  Environmental Compliance Branch Standard
Operating Procedures and Quality Assurance Manual. EPA, Region IV, Environmental
Services Division, Athens, GA.

•      General Description of Document: Size: Approximately 500 pages (single sided)—This
       document is presented with seven sections and eleven appendices. The main sections
        cover standard operating policies and procedures, ranging from those related to the
        Region IV laboratory's administrative functions to SOPs that are specifically focused on
        sampling activities.

•      Key Features of This Document: Sections 3 and 4 are of primary importance when
       thinking of sample control, field record keeping, document control and sampling
       procedures. Section 4 on sampling procedures is descriptive—without diagrams or
       figures—and quite comprehensive in that this section touches on a multitude of topics not
       mentioned in a number of other guides, including: selection of parameters to be
       measured, holding time, cross contamination, and Data Quality Objectives (DQOs)
       (described as Level I to V). The sampling of soil, water, and air are covered in this

August 2000                                M-9                        MARSSIM, Revision 1

-------
Appendix M


       section with many of the subsections covering topics that are common to other documents
       reviewed here. A number of example forms are presented, including several that relate to
       State programs. Section 6 covers field analytical methods and Section 7 describes field
       physical measurements.

       The appendices include helpful information relevant to sampling, including: A) sample
       containers, preservation, holding times, and permissible sample type, B) standard
       cleaning procedures, C) shipping procedures, D) standard field analytical methods, E)
       monitoring wells, F) pump operation procedures, G) air monitoring, H) wastewater field
       methods, I) saturation monitoring, and K) safety protocols.

Environmental Protection Agency (EPA).  1992.  Characterizing Heterogeneous Waste:
Methods and Recommendations. EPA/600/R92/033, EPA, Environmental Monitoring Systems
Laboratory, Office of Research and Development, Las Vegas, NV.  (PB92-216894)

•      General Description of Document: Size: 144 pages—the focus of this document is on all
       types of waste materials that one might encounter.  The base scenario appears to be one
        where a drum is encountered and the objective is to work to the point where the drum
       contents are understood. Because a drum may include more than one type of waste, this
       document provides a review of a wide variety of materials one might expect when
       surveying a site.

•      Key Features of This Document:  The table of contents reveals that the text attempts to
       provide a complete picture, from definitions of terms, to planning studies, QA/QC and
       data assessment, to sample acquisition, and steps that follow to the lab and what makes
       the characterization process a success.  Radioactive waste materials, along with organics,
       solids, liquids, etc., are covered, but in a relatively brief fashion.  The model scenario of
       dealing with wastes in a drum is incorporated into a hypothetical example in an appendix.

Environmental Protection Agency (EPA).  1992.  Preparation of Soil Sampling Protocols:
Sampling  Techniques and Strategies. EPA/600/R92/128, EPA, Office of Research and
Development, Washington, DC. (PB92-220532)

•      General Description of Document: Size: 174  pages—this document summarizes various
       statistical and geostatistical concepts and procedures pertaining to the design,
       implementation, and data interpretation of appropriate sampling designs.

•      Key Features of This Document: This document focuses on applying the concept of the
       Data Life Cycle to soil sampling.  The document describes statistical concepts that apply
       to  soil sampling, including particulate sampling theory. Types of samples, numbers of
       samples, and size of samples as well as methods for sampling soils from conveyor belts
       and stockpiles are also discussed.  A bibliography is provided.
MARSSIM, Revision 1                       M-10                               August 2000

-------
                                    APPENDIX N

                     Data Validation Using Data Descriptors
Data validation is often defined by six data descriptors:

1)     reports to decision maker
2)     documentation
3)     data sources
4)     analytical method and detection limit
5)     data review
6)     data quality indicators

The decision maker or reviewer examines the data, documentation, and reports for each of the six
data descriptors to determine if performance is within the limits specified in the DQOs developed
during survey planning.  The data validation process should be conducted according to
procedures documented in the QAPP.
N.1   Reports to Decision Maker

Data and documentation supplied to the decision maker should be evaluated for completeness
and appropriateness and to determine if any changes were made to the survey plan during the
course of work. The survey plan discusses the surveying, sampling, and analytical design and
contains the QAPP and DQOs.  The decision maker should receive all data as collected plus
preliminary and final data reports.  The final decision on qualifying or rejecting data will be made
during the assessment of environmental data. All data, including qualified or rejected data,
should be documented and recorded even if the data are not included in the final report.

Preliminary analytical data reports  allow the decision maker to begin the assessment process as
soon as the surveying effort  has begun. These initial reports have three functions.

1)     For scoping or characterization survey data, they allow the decision maker to begin to
        characterize the site on the basis of actual data.  Radionuclides of interest will be
        identified and the variability in concentration can be estimated (a simple summary of this
        variability is sketched after this list).

2)     They allow potential measurement problems to be identified and the need for corrective
       action can be assessed.

3)     Schedules are more likely to be met if the planning of subsequent survey activities can
       begin before the final data reports are produced.
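
As a minimal sketch of the variability estimate mentioned in item 1—using a hypothetical
radionuclide and concentration values rather than data from MARSSIM—the preliminary results
might be summarized as follows.  The resulting standard deviation is the kind of planning input
used when the number of measurements for a survey unit is determined.

       import statistics

       # Hypothetical preliminary characterization results (e.g., pCi/g);
       # the radionuclide and values below are placeholders for illustration.
       preliminary_results = {
           "Cs-137": [1.2, 0.8, 1.5, 2.1, 0.9, 1.7],
       }

       for nuclide, values in preliminary_results.items():
           mean = statistics.mean(values)
           sigma = statistics.stdev(values)   # sample standard deviation
           cv = sigma / mean                  # relative variability
           print(f"{nuclide}: mean = {mean:.2f}, sigma = {sigma:.2f}, CV = {cv:.2f}")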
August 2000                                N-1                         MARSSIM, Revision 1

-------
Appendix N


N.2   Documentation

Three types of documentation should be assessed:  (1) field operation records; (2) laboratory
records; and (3) data handling records (EPA 1997a).

N.2.1  Field Operation Records

The information contained in these records documents overall field operations and generally
consists of the following:

•      Field measurement records. These records show that the proper measurement protocol
        was performed in the field. At a minimum, this documentation should include the names
        of the persons conducting the activity, measurement identification, measurement
        locations, measurement results, maps and diagrams, equipment and SOP used, and
        unusual observations (one way to capture this minimum content is sketched at the end of
        this list). Bound field notebooks are generally used to record raw data and make
        references to prescribed procedures and changes in planned activities. Data recording
        forms might also be used. A document control system should be used for these records to
        control attributes such as formatting, including pre-numbered pages with date and
        signature lines.

•      Sample tracking records.  Sample tracking records (e.g., chain-of-custody) document the
       progression of samples as they travel from the original sampling location to the laboratory
       and finally to disposal (see Section 7.7).

•      QC measurement records. QC measurement records document the performance of QC
       measurements in the field. These records should include calibration and standards'
       traceability documentation that can be used to provide a reproducible reference point to
       which all similar measurements can be correlated.  QC measurement records should
       contain information on the frequency, conditions, level of standards, and instrument
       calibration history.

•      Personnel files.  Personnel files record the names and training certificates of the staff
       collecting the data.

•      General field procedures. General field procedures (e.g., SOPs) record the procedures
       used in the field to collect data and outline potential areas of difficulty in performing
       measurements.

•      Deficiency and problem identification reports. These reports document problems and
       deficiencies encountered  as well as suggestions for process improvement.
MARSSIM, Revision 1                         N-2                                August 2000

-------
                                                                              Appendix N


•      Corrective action reports. Corrective action reports show what methods were used in
       cases where general field practices or other standard procedures were violated and include
       the methods used to resolve noncompliance.
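
A minimal sketch—using illustrative field names rather than a MARSSIM-prescribed format—of
how the minimum content of a field measurement record listed above might be captured:

       from dataclasses import dataclass
       from typing import List, Optional

       # Illustrative sketch only; field names are not a MARSSIM-prescribed format.
       @dataclass
       class FieldMeasurementRecord:
           surveyor_names: List[str]              # persons conducting the activity
           measurement_id: str                    # measurement identification
           location: str                          # measurement location
           result: float                          # measurement result
           result_units: str                      # e.g., cpm or Bq/m2
           instrument: str                        # equipment used
           sop_reference: str                     # SOP used
           map_reference: Optional[str] = None    # map or diagram reference
           unusual_observations: str = ""         # free-text notes

       record = FieldMeasurementRecord(
           surveyor_names=["J. Doe"], measurement_id="SU1-015",
           location="Survey Unit 1, grid node 15", result=3250.0,
           result_units="cpm", instrument="2x2 NaI detector (serial 1234)",
           sop_reference="SOP-07")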

N.2.2  Laboratory Records

The following list describes some of the laboratory-specific records that should be compiled if
available and appropriate:

•      Laboratory measurement results and sample data. These records contain information on
       the sample analysis used to verify that prescribed analytical methods were followed. The
       overall number of samples, sample identification, sample measurement results, any
       deviations from the SOPs, time of day, and date should be included. Sample location
       information might also be provided.

•      Sample management records.  Sample management records should document sample
       receipt, handling and storage, and scheduling of analyses. The records will verify that
       sample tracking requirements were maintained, reflect any anomalies in the samples (e.g.,
       receipt of damaged samples), and note proper log-in of samples into the laboratory.

•      Test methods. Unless analyses were performed exactly as prescribed by SOPs, this
       documentation will describe how the analyses were carried out in the laboratory. This
       documentation includes sample preparation and analysis, instrument standardization,
       detection and reporting limits, and method-specific QC requirements. Documentation
       demonstrating laboratory proficiency with each method used could also be a part of the
       data reporting package, particularly for subcontracted work.

•      QC measurement records. These include the general QC records,  such as initial
       demonstration of capability,  instrument calibration, routine monitoring of analytical
       performance, calibration verification, etc., considered in Section 7.3 for selecting a
       radioanalytical laboratory. Project-specific information from the QC checks such as
       blanks, spikes, calibration check samples, replicates, splits, and so on should be included
       in these reports to facilitate data quality analysis.

•      Deficiency and problem identification reports.  These reports document problems and
       deficiencies encountered as well as suggestions for process improvement.

•      Corrective action reports. Corrective action reports show what methods were used in
       cases where general laboratory practices or other standard procedures were violated and
       include the methods used to resolve noncompliance.  Corrective action procedures to
       replace samples violating the SOP also should be noted.

August 2000                                 N-3                        MARSSIM, Revision 1

-------
Appendix N
N.2.3  Data Handling Records

Data handling records document protocols used in data reduction, verification, and validation.
Data reduction addresses data transformation operations such as converting raw data into
reportable quantities and units, using significant figures, calculating measurement uncertainties,
etc. The records document procedures for handling data corrections.
N.3   Data Sources

Data source assessment involves the evaluation and use of historical analytical data. Historical
analytical data should be evaluated according to data quality indicators and not the source of the
data (e.g., analytical protocols may have changed significantly over time).  Data quality
indicators are qualitative and quantitative descriptors used in interpreting the degree of
acceptability or utility of data.  Historical  data sources are addressed during the Historical Site
Assessment, and are discussed in Section 3.4.1.
N.4   Analytical Method and Detection Limit

The selection of appropriate analytical methods based on detection limits is important to survey
planning.  The detection limit of the method directly affects the usability of the data because
results near the detection limit have a greater possibility of false negatives and false positives.
Results near the detection limit have increased measurement uncertainty.  When the
measurement uncertainty becomes large compared to the variability in the radionuclide
concentration, it becomes more difficult to demonstrate compliance using the guidance provided
in MARSSIM.

The decision maker compares detection limits (i.e., minimum detectable concentrations; MDCs)
with radionuclide-specific results to determine their effectiveness in relation to the DCGL.
Assessment of preliminary data reports provides an opportunity to review the detection limits
early and resolve any detection sensitivity problems. When a radionuclide is reported as not
detected, the result can only be used with confidence if the MDCs reported are lower than the
DCGL.
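
A minimal sketch of this comparison—using hypothetical function and variable names, not a
prescribed MARSSIM algorithm—might flag non-detects whose MDC is not below the DCGL so
that they can be qualified:

       # Hypothetical sketch: screening a non-detect against the DCGL.
       def screen_nondetect(result, mdc, dcgl, detected):
           """Return a data-usability note for a single radionuclide result."""
           if detected:
               return f"detected at {result}: report the result with its uncertainty"
           if mdc < dcgl:
               return f"not detected (MDC {mdc} < DCGL {dcgl}): usable with confidence"
           # MDC >= DCGL: report the actual numerical result (even if negative)
           # rather than "less than the detection limit," so the result can still
           # be used in statistical tests, and qualify it accordingly.
           return f"not detected (MDC {mdc} >= DCGL {dcgl}): report {result} and qualify"

       print(screen_nondetect(result=-0.02, mdc=1.5, dcgl=1.0, detected=False))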

If the DCGL is less than or equal to the MDC, and the radionuclide is not detected, report the
actual result of the analysis. Do not report data as "less than the detection limit." Even negative
results and results with large uncertainties can be used  in the statistical tests described in Chapter
8. Results reported as "
-------
                                                                               Appendix N
concerning non-detects or detections at or near MDCs should be qualified according to the
degree of acceptable uncertainty.
N.5   Data Review

Data review begins with an assessment of the quality of analytical results and is performed by a
professional with knowledge of the analytical procedures. Only data that are reviewed according
to a specified level or plan should be used in the quantitative site investigation. Any analytical
errors or limitations in the data that are identified by the review should be noted.  An
explanation of data qualifiers should be included with the review report.

All data should receive some level of review. Data that have not been reviewed should be
identified, because the lack of review increases the uncertainty in the data. Unreviewed data may
lead to Type I and Type II decision errors, and may also contain transcription errors and
calculation errors. Data may be used in the preliminary assessment before review, but  should be
reviewed at a predetermined level before use in  the final survey report.

Depending on the survey objectives,  the level and depth of the data review varies.  The level and
depth of the data review  may be determined during the planning process and should include an
examination of laboratory and method performance for the measurements and radionuclides
involved.  This examination includes

•      evaluation of data completeness
•      verification of instrument calibration
•      measurement of precision using duplicates, replicates, or split samples
•      measurement of bias using reference materials or spikes
•      examination of blanks for contamination
•      assessment of adherence to method specifications and QC limits
•      evaluation of method performance in the sample matrix
•      applicability and validation of analytical procedures for site-specific measurements
•      assessment of external QC measurement results and QA assessments

A different level or depth of data review may be indicated by the results of this evaluation.
Specific data review procedures are dependent upon the survey objectives and should be
documented in the QAPP.
August 2000                                N-5                        MARSSIM, Revision 1

-------
Appendix N
N.6   Data Quality Indicators

The assessment of the data quality indicators presented in this section is central to determining
data usability.  The principal data quality indicators are precision, bias, representativeness,
comparability, and completeness (EPA 1997a).  Other data quality indicators affecting the RSSI
process include the selection and classification of survey units, Type I and Type II decision error
rates, the variability in the radionuclide concentration measured within the survey unit, and the
lower bound of the gray region (see Section 2.3.1).

Of the principal data quality indicators, precision and bias are quantitative measures,
representativeness and comparability are qualitative, and completeness is a combination of both
qualitative and quantitative measures; accuracy is a combination of precision and bias. The
selection and classification of survey units is qualitative, while decision error rates, variability,
and the lower bound of the gray region are quantitative measures.

The major activity in determining the usability of data based on survey activities is assessing the
effectiveness of measurements.  Scanning and direct measurements taken during survey activities
and samples collected for analysis should meet site-specific objectives based on scoping and
planning decisions.

Determining the usability of analytical results begins with the review of QC measurements and
qualifiers to assess the measurement result and the performance of the analytical method.  If an
error in the data is discovered, it is  more important to evaluate the effect of the error on the data
than to determine the source of the  error. The documentation described in Section N.2 is
reviewed as a whole for some criteria.  Data are  reviewed at the measurement level for other
criteria.

Factors affecting the accuracy of identification and the precision and bias of quantitation of
individual radionuclides, such as calibration and recoveries, should be examined radionuclide by
radionuclide. Table N.1 presents a summary of the QC measurements and the data use
implications.

N.6.1  Precision

Precision is a measure of agreement among  replicate measurements of the same property under
prescribed similar conditions. This agreement is calculated as either the range or the standard
deviation. It may also be expressed as a percentage of the mean of the measurements such as
relative range (for duplicates) or coefficient of variation.
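
For illustration, the sketch below computes the relative range for a duplicate pair and the
coefficient of variation for a set of replicates using the conventional definitions (range or
standard deviation divided by the mean, expressed as a percentage); the measurement values are
hypothetical.

```python
# Illustrative sketch of common precision statistics (hypothetical data).
from statistics import mean, stdev

def relative_range(x1, x2):
    """Relative range for a duplicate pair, as a percentage of the mean."""
    return 100.0 * abs(x1 - x2) / mean([x1, x2])

def coefficient_of_variation(values):
    """Sample standard deviation as a percentage of the mean."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical replicate measurements of the same location (Bq/kg).
duplicates = (10.2, 11.0)
replicates = [10.2, 11.0, 9.7, 10.5]

print(f"Relative range: {relative_range(*duplicates):.1f}%")
print(f"Coefficient of variation: {coefficient_of_variation(replicates):.1f}%")
```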
MARSSIM, Revision 1                         N-6                                  August 2000

-------
                                                                               Appendix N
                         Table N.1 Use of Quality Control Data

Spikes (Higher than expected result)
       Effect on identification when criterion is not met:  Potential for incorrectly deciding a
       survey unit does not meet the release criterion (Type II decision error)
       Quantitative bias:  High
       Use:  Use data as upper limit

Spikes (Lower than expected result)
       Effect on identification when criterion is not met:  Potential for incorrectly deciding a
       survey unit does meet the release criterion (Type I decision error)ᵃ
       Quantitative bias:  Low
       Use:  Use data as lower limit

Replicates (Inconsistent)
       Effect on identification when criterion is not met:  None, unless analyte found in one
       duplicate and not the other—then either Type I or Type II decision error
       Quantitative bias:  High or Lowᵇ
       Use:  Use data as estimate—poor precision

Blanks (Contaminated)
       Effect on identification when criterion is not met:  Potential for incorrectly deciding a
       survey unit does not meet the release criterion (Type II decision error)
       Quantitative bias:  High
       Use:  Check for gross contamination or instrument malfunction

Calibration (Bias)
       Effect on identification when criterion is not met:  Potential for Type I or Type II
       decision errors
       Quantitative bias:  High or Lowᵇ
       Use:  Use data as estimate unless problem is extreme

  a     Only likely if recovery is near zero.
  b     Effect on bias determined by examination of data for each radionuclide.
For scanning and direct measurements, precision may be specified for a single person performing
the measurement or as a comparison between people performing the same measurement. For
laboratory analyses, precision may be specified as either intralaboratory (within a laboratory) or
interlaboratory (between laboratories). Precision estimates based on a single surveyor or
laboratory represent the agreement expected when the same person or laboratory uses the same
method to perform multiple measurements of the same location. Precision estimates based on
two or more surveyors or laboratories refer to the agreement expected when different people  or
laboratories perform the same measurement using the same method.

The two basic activities performed in the assessment of precision are estimating the radionuclide
concentration variability from the measurement locations and estimating the measurement error
attributable to the data collection process. The level for each of these performance measures
August 2000                                N-7                        MARSSIM, Revision 1

-------
Appendix N


should be specified during development of DQOs.  If the statistical performance objectives are
not met, additional measurements should be taken or one (or more) of the performance
parameters changed.

Measurement error is estimated using the results of replicate measurements, as discussed in
Chapter 6 for field measurements and Chapter 7 for laboratory measurements. When collocated
measurements are performed (in the field or in the laboratory) an estimate of total precision is
obtained. When collocated samples are not available for laboratory analysis, a sample
subdivided in the field and preserved separately can be used to assess the variability of sample
handling, preservation, and storage along with the variability in the analytical process, but
variability in sample acquisition is not included. When only variability in the analytical  process
is desired, a sample can be subdivided in the laboratory prior to analysis.

Summary statistics such as the sample mean and sample variance can provide an assessment of the
precision of a measurement system or component thereof for a project. These statistics may be
used to estimate precision at discrete concentration levels, to average estimated precision over
applicable concentration ranges, or to provide the basis for a continual assessment of precision for
future measurements.  Methods for calculating and reporting precision are provided in EPA
Guidance for Quality Assurance Project Plans (EPA 1997a).

Table N.2 presents the minimum considerations, impacts if the considerations are not met, and
corrective actions for precision.

N.6.2  Bias

Bias is the systematic or persistent distortion of a measurement process that causes errors in one
direction. Bias assessments for radioanalytical measurements should be made using personnel,
equipment, and spiking materials or reference materials as independent as possible from those
used in the  calibration of the measurement system. When possible, bias assessments should be
based on certified reference materials rather than matrix spikes or water spikes so that the effect
of the matrix and the chemical composition of the contamination is incorporated into the
assessment. While matrix spikes include matrix effects, the addition of a small amount  of liquid
spike does not always reflect the chemical composition of the contamination in the sample
matrix. Water spikes do not account for either matrix effects or chemical composition of the
contamination.  When spikes are used to assess bias, a documented spiking protocol and
consistency in following that protocol are important to obtaining meaningful data quality
estimates.
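
As an illustration of how spike results are commonly summarized, the sketch below computes a
percent recovery (measured activity attributable to the spike divided by the known activity
added) and the corresponding percent bias; the values and the recovery convention shown are
illustrative rather than a MARSSIM requirement.

```python
# Illustrative sketch (hypothetical values): percent recovery and percent bias
# for a spiked sample, using the common conventions
#   %recovery = 100 * (measured_spiked - measured_unspiked) / known_added
#   %bias     = %recovery - 100

def percent_recovery(measured_spiked, measured_unspiked, known_added):
    return 100.0 * (measured_spiked - measured_unspiked) / known_added

# Hypothetical spike of 50 Bq added to a sample aliquot.
recovery = percent_recovery(measured_spiked=58.0, measured_unspiked=12.0, known_added=50.0)
print(f"Recovery: {recovery:.0f}%  (bias: {recovery - 100:+.0f}%)")
```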
MARSSIM, Revision 1                        N-8                                 August 2000

-------
                                                                                    Appendix N
                     Table N.2 Minimum Considerations for Precision,
                         Impact if Not Met, and Corrective Actions

Minimum Considerations for Precision:

•      Confidence level as specified in DQOs.
•      Power as specified in DQOs.
•      Minimum detectable relative differences specified in the survey design and modified
       after analysis of background measurements if necessary.
•      One set of field duplicates or more as specified in the survey design.
•      Analytical duplicates and splits as specified in the survey design.
•      Measurement error specified.

Impact When Minimum Considerations Are Not Met:

•      Errors in decisions to act or not to act based on analytical data.
•      Unacceptable level of uncertainty.
•      Increased variability of quantitative results.
•      Potential for incorrectly deciding a survey unit does meet the release criterion for
       measurements near the detection limits (Type I decision error).

Corrective Action:

For Surveying and Sampling:
•      Add survey or sample locations based on information from available data that are
       known to be representative.
•      Adjust performance objectives.

For Analysis:
•      Analysis of new duplicate samples.
•      Review laboratory protocols to ensure comparability.
•      Use precision measurements to determine confidence limits for the effects on the data.
•      The investigator can use the maximum measurement results to set an upper bound on the
       uncertainty if there is too much variability in the analyses.
Activity levels for bias assessment measurements should cover the range of expected
contaminant concentrations, although the minimum activity is usually at least five times the
MDC. For many final status surveys, the expected contaminant concentration is zero or
background, so the highest activity will be associated with the bias assessment measurements.
The minimum and maximum concentrations allowable in bias assessment samples should be
agreed on during survey planning activities to prevent accidental contamination of the
environment or an environmental level radioanalytical laboratory.

For scanning and direct measurements there are a limited number of options available for
performing bias assessment measurements. Perhaps the best estimate of bias for scanning and
direct measurements is to collect samples from locations where scans or direct measurements
were  performed,  analyze the samples in a laboratory, and compare the results. Problems
associated with this method include the time required to obtain the results and the difficulty in
August 2000                                N-9                        MARSSIM, Revision 1

-------
Appendix N


obtaining samples that are representative of the field measurement to provide comparable results.
A simple method of demonstrating that analytical bias is not a significant problem for scanning
or direct measurements is to use the instrument performance checks to demonstrate the lack of
analytical bias.  A control chart can be used to  determine the variability of a specific instrument
and track the instrument performance throughout the course of the survey.  Field background
measurements can also be plotted on a control  chart to estimate bias caused by contamination of
the instrument.
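
As an illustration of such a control chart, the sketch below classifies daily performance-check
counts against warning and control limits set at two and three standard deviations about the mean
of a baseline period; the counts and the choice of limits are assumptions for the example, not
MARSSIM requirements.

```python
# Illustrative control-chart check for instrument performance-check counts.
# Assumes warning/control limits at mean +/- 2 and 3 standard deviations of a
# baseline period (a common convention, not a MARSSIM requirement).
from statistics import mean, stdev

baseline_counts = [1205, 1189, 1222, 1198, 1210, 1193, 1207, 1201]  # hypothetical
m, s = mean(baseline_counts), stdev(baseline_counts)

def classify(count):
    if abs(count - m) > 3 * s:
        return "outside control limits; investigate instrument"
    if abs(count - m) > 2 * s:
        return "outside warning limits; repeat check"
    return "in control"

for todays_count in (1212, 1251, 1140):  # hypothetical daily checks
    print(todays_count, classify(todays_count))
```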

There are several types of bias assessment samples available for laboratory analyses as discussed
in Chapter 7. Field blanks can be evaluated to estimate the potential bias caused by
contamination from sample collection, preparation, shipping, and  storage.

Table N.3 presents the minimum considerations, impacts if the considerations are not met, and
corrective actions for bias.

                        Table N.3 Minimum Considerations for Bias,
                          Impact if Not Met, and Corrective Actions

Minimum Considerations for Bias:

•      Matrix spikes to assess bias of non-detects and positive sample results if specified
       in the survey design.
•      Analytical spikes as specified in the survey design.
•      Use analytical methods (routine methods whenever possible) that specify expected or
       required recovery ranges using spikes or other QC measures.
•      No radionuclides of potential concern detected in the blanks.

Impact When Minimum Considerations Are Not Met:

•      Potential for incorrectly deciding a survey unit does meet the release criterion
       (Type I decision error): if spike recovery is low, it is probable that the method or
       analysis is biased low for that radionuclide, and values of all related samples may
       underestimate the actual concentration.
•      Potential for incorrectly deciding a survey unit does not meet the release criterion
       (Type II decision error): if spike recovery exceeds 100%, interferences may be
       present, and it is probable that the method or analysis is biased high.  Analytical
       results overestimate the true concentration of the spiked radionuclide.

Corrective Action:

•      Consider resampling at affected locations.
•      If recoveries are extremely low or extremely high, the investigator should consult
       with a radiochemist or health physicist to identify a more appropriate method for
       reanalysis of the samples.
MARSSIM, Revision 1                        N-10                                August 2000

-------
                                                                                Appendix N
N.6.3  Accuracy
Accuracy is a measure of the closeness of an individual measurement or the average of a number
of measurements to the true value (EPA 1997a). Accuracy includes a combination of random
error (precision) and systematic error (bias) components that result from performing
measurements.  Systematic and random uncertainties (or errors) are discussed in more detail in
Section 6.8.1.

Accuracy is determined by analyzing a reference material of known contaminant concentration or
by reanalyzing material to  which a known concentration of contaminant has been added. To be
accurate, data must be both precise and unbiased. Using the analogy of archery, to be accurate
one's arrows must land close together and, on average, at the spot where they are aimed. That is,
the arrows must all land near the bull's eye (see Figure N.1).

           Figure N.1  Measurement Bias and Random Measurement Uncertainty:
           (a) high bias + low precision = low accuracy; (b) low bias + low precision = low
           accuracy; (c) high bias + high precision = low accuracy; (d) low bias + high
           precision = high accuracy
August 2000                                N-11                        MARSSIM, Revision 1

-------
Appendix N


Accuracy is usually expressed either as a percent recovery or as a percent bias. Determination of
accuracy always includes the effects of variability (precision); therefore, accuracy is used as a
combination of bias and precision.  The combination is known statistically as mean square error.
Mean square error is the quantitative term for overall quality of individual measurements or
estimators.

Mean square error is the sum of the variance plus the square of the bias. (The bias is squared to
eliminate concern over whether the bias is positive or negative.) Frequently it is impossible to
quantify all of the components of the mean square error—especially the biases—but it is
important to attempt to quantify the magnitude of such potential biases, often by comparison with
auxiliary data.
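
As a brief numerical illustration of this relationship, the sketch below computes the mean square
error of a set of hypothetical measurements of a known reference value as the sum of the variance
and the square of the bias.

```python
# Illustrative sketch: mean square error = variance + bias^2 (hypothetical data).
from statistics import mean, pvariance

reference_value = 100.0                      # known concentration of reference material
measurements = [96.0, 102.0, 98.0, 95.0]     # hypothetical repeated measurements

bias = mean(measurements) - reference_value
variance = pvariance(measurements)
mse = variance + bias**2

print(f"bias = {bias:.2f}, variance = {variance:.2f}, mean square error = {mse:.2f}")
```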

N.6.4  Representativeness

Representativeness is a measure of the degree to which data accurately and  precisely represent a
characteristic of a population parameter at a sampling point or for a process condition or
environmental condition. Representativeness is a qualitative term that should be evaluated to
determine whether in situ and other measurements are made and physical samples collected in
such a manner that the resulting data appropriately reflect the media and contamination measured
or studied.

Representativeness of data is critical to data usability assessments. The results of the
environmental radiological  survey will be biased to the degree that the data do not reflect the
radionuclides and concentrations present at the site. Non-representative radionuclide
identification may result in  false negatives. Non-representative estimates of concentrations may
be higher or lower than the  true concentration.  With few exceptions, non-representative
measurements are only resolved by additional measurements.

Representativeness is primarily a planning concern. The solution to enhancing
representativeness is in the  design of the survey plan.  Representativeness is determined by
examining the survey plan.  Analytical data quality affects representativeness since data of low
quality may be rejected for use.

Table N.4 presents the minimum considerations, impacts if the considerations are not met, and
corrective actions for representativeness.

N.6.5  Comparability

Comparability is the qualitative term that expresses the confidence that two data sets can
contribute to a common analysis and interpolation.  Comparability should be carefully evaluated
to establish whether two data sets can be considered equivalent in regard to the measurement of a
specific variable or groups of variables.

MARSSIM, Revision 1                        N-12                                 August 2000

-------
                                                                                     Appendix N
                Table N.4 Minimum Considerations for Representativeness,
                         Impact if Not Met, and Corrective Actions

Minimum Considerations for Representativeness:

•      Survey data representative of survey unit.
•      Documented sample preparation procedures.  Filtering, compositing, and sample
       preservation may affect representativeness.
•      Documented analytical data as specified in the survey design.

Impact When Minimum Considerations Are Not Met:

•      Bias high or low in estimate of extent and quantity of contaminated material.
•      Potential for incorrectly deciding a survey unit does meet the release criterion
       (Type I decision error).
•      Inaccurate identification or estimate of concentration of a radionuclide.
•      Remaining data may no longer sufficiently represent the site if a large portion of
       the data are rejected, or if all data from measurements at a specific location are
       rejected.

Corrective Action:

•      Additional surveying or sampling.
•      Examination of effects of sample preparation procedures.
•      Reanalysis of samples, or resurveying or resampling of the affected site areas.
•      If the resurveying, resampling, or reanalyses cannot be performed, document in the
       site environmental radiological survey report what areas of the site are not
       represented due to poor quality of analytical data.
Comparability is not compromised provided that the survey design is unbiased, and the survey
design or analytical methods are not changed over time. Comparability is a very important
qualitative data indicator for analytical assessment and is a critical parameter when considering
the combination of data sets from different analyses for the same radionuclides.  The assessment
of data quality indicators determines if analytical results being reported are equivalent to data
obtained from similar analyses.  Only comparable data sets can be readily combined.

The use of routine methods (as defined in Section 7.6) simplifies the determination of
comparability because all laboratories use the same  standardized procedures and reporting
parameters. In other cases, the decision maker may have to consult with a health physicist and/or
radiochemist to evaluate whether different methods  are sufficiently comparable to combine data
sets.

There are a number of issues that can make two data sets comparable, and the presence of each of
the following items enhances their comparability (EPA 1997a).
August 2000                                N-13                        MARSSIM, Revision 1

-------
Appendix N


•      two data sets should contain the same set of variables of interest
•      units in which these variables were measured should be convertible to a common metric
•      similar analytic procedures and quality assurance should be used to collect data for both
       data sets
•      time of measurements of certain characteristics (variables) should be similar for both data
       sets
•      measuring devices used for both data sets should have approximately similar detection
       levels
•      rules for excluding certain types of observations from both samples should be similar
•      samples within data sets should be selected in a similar manner
•      sampling frames from which the samples were selected should be similar
•      number of observations in both data sets should be of the same order of magnitude

These characteristics vary in importance depending on the final use of the data.  The closer two
data sets are with regard to these characteristics, the more appropriate it will be to compare them.
Large differences between characteristics may be of only minor importance depending on the
decision that is to be made from the data.

Table N.5 presents the minimum considerations, impacts if they are not met, and corrective
actions for comparability.

N.6.6  Completeness

Completeness is a measure of the amount of valid data obtained from the measurement system,
expressed as a percentage of the number of valid measurements that should have been collected
(i.e., measurements that were planned to be collected).

Completeness for measurements is calculated by the following formula:

       Completeness (%) = (Number of Valid Measurements x 100) / (Total Number of Measurements Planned)
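
Expressed as a short calculation with hypothetical counts:

```python
# Completeness as a percentage of planned measurements (hypothetical counts).
def completeness(valid_measurements, planned_measurements):
    return 100.0 * valid_measurements / planned_measurements

print(f"{completeness(valid_measurements=54, planned_measurements=60):.0f}% complete")
```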
Completeness is not intended to be a measure of representativeness; that is, it does not describe
how closely the measured results reflect the actual concentration or distribution of the
contaminant in the media being measured. A project could produce 100% data completeness
(i.e., all planned measurements were actually performed and found valid), but the results may not
be representative of the actual contaminant concentration.
MARSSIM, Revision 1                        N-14                               August 2000

-------
                                                                                   Appendix N
                  Table N.5 Minimum Considerations for Comparability,
                         Impact if Not Met, and Corrective Actions

Minimum Considerations for Comparability:

•      Unbiased survey design or documented reasons for selecting another survey design.
•      The analytical methods used should have common analytical parameters.
•      Same units of measure used in reporting.
•      Similar detection limits.
•      Equivalent sample preparation techniques.
•      Analytical equipment with similar efficiencies, or the efficiencies should be
       factored into the results.

Impact When Minimum Considerations Are Not Met:

•      Non-additivity of survey results.
•      Reduced confidence, power, and ability to detect differences, given the number of
       measurements available.
•      Increased overall error.

Corrective Action:

For Surveying and Sampling:
•      Statistical analysis of effects of bias.

For Analytical Data:
•      Preferentially use those data that provide the most definitive identification and
       quantitation of the radionuclides of potential concern.  For quantitation, examine
       the precision and accuracy data along with the reported detection limits.
•      Reanalysis using comparable methods.
Alternatively, there could be only 70% data completeness (30% lost or found invalid), but, due to
the nature of the survey design, the results could still be representative of the target population
and yield valid estimates.  The degree to which lack of completeness affects the outcome of the
survey is a function of many variables ranging from deficiencies in the number of measurements
to failure to analyze as many replications as deemed necessary by the QAPP and DQOs.  The
intensity of effect due to incompleteness of data is sometimes best expressed as a qualitative
measure and not just  as a quantitative percentage.

Completeness can have an effect on the DQO parameters. Lack of completeness may require
reconsideration of the limits for decision error rates because insufficient completeness will
decrease the power of the statistical tests described in Chapter 8.

For most final status  surveys, the issue of completeness only arises when the survey unit
demonstrates compliance with the release criterion and less than 100% of the measurements are
determined to be acceptable.  The question now becomes whether the number of measurements is
sufficient to support the decision to release the survey unit.  This question can be answered by
constructing a power curve as described in  Appendix I and evaluating the results.  An alternative
August 2000                                N-15                        MARSSIM, Revision 1

-------
Appendix N


method is to consider that the number of measurements estimated to demonstrate compliance in
Chapter 5 was increased by 20% to account for lost or rejected data and uncertainty in the
calculation of the number of measurements. This means a survey with 80% completeness may
still have sufficient power to support a decision to release the survey unit.

Table N.6 presents the minimum considerations, impacts if the considerations are not met, and
corrective actions for completeness.

                  Table N.6 Minimum Considerations for Completeness,
                         Impact if Not Met, and Corrective Actions

Minimum Considerations for Completeness:

•      Percentage of measurement completeness determined during planning to meet specified
       performance measures.

Impact When Minimum Considerations Are Not Met:

•      Higher potential for incorrectly deciding a survey unit does not meet the release
       criterion (Type II decision error).
•      Reduction in power.
•      A reduction in the number of measurements reduces site coverage and may affect
       representativeness.
•      Reduced ability to differentiate site levels from background.
•      Impact of incompleteness generally decreases as the number of measurements increases.

Corrective Action:

•      Resurveying, resampling, or reanalysis to fill data gaps.
•      Additional analysis of samples already in laboratory.
•      Determine whether the missing data are crucial to the survey.
N.6.7  Selection and Classification of Survey Units

Selection and classification of survey units is a qualitative measure of the assumptions used to
develop the survey plan.  The level of survey effort, measurement locations (i.e., random vs.
systematic and density of measurements), and the integrated survey design are based on the
survey unit classification.  The results of the survey should be reviewed to determine whether the
classification used to plan the survey is supported by the results of the survey.
MARSSIM, Revision 1                        N-16                                August 2000

-------
                                                                               Appendix N


If a Class 3 survey unit is found to contain areas of contamination (even if the survey unit passes
the statistical tests), the survey unit may be divided into several survey units with appropriate
classifications, and additional surveys planned as necessary for these new survey units.

Class 3 areas may only require additional randomly located measurements to provide sufficient
power to release the new survey units.  Class 2 and Class 1 areas will usually require a new
survey design based on systematic measurement locations, and Class 1 areas may require
remediation before a new final status survey is performed.

If a Class 2 survey unit is determined to be a Class 1 survey unit following the final  status survey
and remediation is not required, it may not be necessary to plan a new survey. The scan MDC
should be compared to the DCGLEMC to determine if the measurement spacing is adequate to
meet the survey objectives.  If the scan MDC is too high, a new scan survey using a  more
sensitive measurement technique may be available.  Alternatively, a new survey may be planned
using a new measurement spacing or a stratified survey design may be implemented to use as
much of the existing data as possible.

N.6.8  Decision Error Rates

The decision error rates developed during survey planning are related to completeness. A low
level of completeness will affect the power of the statistical test. It is recommended that a power
curve be constructed as described in Appendix I, and the expected decision error rates compared
to the actual decision error rates to determine if the survey objectives have been accomplished.

N.6.9  Variability in Contaminant Concentration

The variability in the contaminant concentration (both in the survey unit and the reference area)
is a key parameter in survey planning, and is related to the precision of the measurements.
Statistical simulations show that underestimating the value of σ (the standard deviation of the
survey unit measurements) can greatly increase the probability that a survey unit will fail to
demonstrate compliance with the release criterion.

If a survey unit fails to demonstrate compliance and the actual σ is greater than the σ used during
survey planning, there are several options available to the project manager.  If the major
component of variability is measurement uncertainty, a new survey can be designed  using a
measurement technique with higher precision or a lower MDC to reduce variability.  If samples
were collected as part  of the survey design, it may only be necessary to reanalyze the samples
using a method with higher precision rather than collect additional samples. Alternatively, the
number of measurements can be increased to reduce the variability.
August 2000                                N-17                         MARSSIM, Revision 1

-------
Appendix N


If the variability is due to actual variations in the contaminant concentration, there are still
options available. If there is a high variability in the reference area, it may be appropriate to
demonstrate the survey unit is indistinguishable from background. NUREG 1505 (NRC 1997b)
provides guidance on determining whether this test is appropriate and performing the statistical
tests. If the variability is caused by different contaminant distributions in different parts of the
site (i.e., changing soil types influences contaminant concentrations), it may be appropriate to
redefine the  survey unit boundaries to provide a more homogeneous set of survey units.

N.6.10 Lower Bound of the Gray Region

The lower bound of the gray region (LBGR) is used to calculate the relative shift, which in turn is
used to estimate the number of measurements required to demonstrate compliance. The LBGR is
initially set arbitrarily to one half the DCGLW. If this initial selection is used to design the
survey, there is no technical basis for the selection of this value. This becomes important
because the Type II decision error rate (β) is calculated at the LBGR.

For survey units that pass the statistical tests, the value selected for the LBGR is generally not a
concern.  If the survey unit fails to demonstrate compliance, it may be caused by improper
selection of the LBGR. Because the  number of measurements estimated during survey planning
is based on the relative shift (which includes both σ and the LBGR), MARSSIM recommends
that a power curve be constructed as  described in Appendix I. If the survey unit failed to
demonstrate compliance because  of a lack of statistical power, an adjustment of the LBGR may
be necessary when planning subsequent surveys.
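
As an illustration, the relative shift can be recomputed with the observed variability using its
definition as the width of the gray region (DCGLW minus the LBGR) divided by the standard
deviation of the measurements; the values below are hypothetical.

```python
# Illustrative sketch: relative shift = (DCGLw - LBGR) / sigma (hypothetical values).
def relative_shift(dcgl_w, lbgr, sigma):
    return (dcgl_w - lbgr) / sigma

dcgl_w = 1.0                              # hypothetical DCGLw (arbitrary units)
lbgr = 0.5 * dcgl_w                       # initial arbitrary choice of one half the DCGLw
sigma_planned, sigma_actual = 0.25, 0.45  # planning estimate vs. observed variability

print("planned relative shift:", relative_shift(dcgl_w, lbgr, sigma_planned))
print("actual relative shift: ", relative_shift(dcgl_w, lbgr, sigma_actual))
# A smaller actual relative shift means more measurements (or an adjusted LBGR)
# may be needed to achieve the planned statistical power.
```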
MARSSIM, Revision 1                        N-18                                 August 2000

-------
                                     GLOSSARY

91b material: Any material identified under Section 91b of the Atomic Energy Act of 1954 (42
U.S.C. Section 2121).

Amin:  The smallest area of elevated activity identified using the DQO Process that is important to
identify.

action level:  The numerical value that will cause the decision maker to choose one of the
alternative actions. It may be a regulatory threshold standard (e.g., Maximum Contaminant Level
for drinking water), a dose- or risk-based concentration level (e.g., DCGL), or a reference-based
standard. See investigation level.

activity: See radioactivity.

ALARA (acronym for As Low As Reasonably Achievable):  A basic concept of radiation
protection which specifies that exposure to ionizing radiation and releases of radioactive
materials should be managed to reduce collective doses as far below regulatory limits as is
reasonably achievable considering economic, technological, and societal factors, among others.
Reducing exposure at a site to ALARA strikes a balance between what is possible through
additional planning and management, remediation,  and the use of additional resources to achieve
a lower collective dose level. A determination of ALARA is a site-specific analysis that is open to
interpretation, because it depends on approaches or circumstances that may differ between
regulatory agencies.  An ALARA recommendation should not be interpreted as  a set limit or level.

alpha (α):  The specified maximum probability of a Type I error. In other words, the maximum
probability of rejecting the null hypothesis when it is true. Alpha  is also referred to as the size of
the test. Alpha reflects the amount of evidence the decision maker would like to see before
abandoning the null hypothesis.

alpha particle: A positively charged particle emitted by some radioactive materials undergoing
radioactive decay.

alternative hypothesis (Ha):  See hypothesis.

area: A general term referring to any portion of a site, up to and including the  entire site.

area of elevated activity:  An area over which residual radioactivity exceeds  a specified value
DCGLEMC.
August 2000                                GL-1                        MARSSIM, Revision 1

-------
Glossary
area factor (Am):  A factor used to adjust DCGLW to estimate DCGLEMC and the minimum
detectable concentration for scanning surveys in Class 1 survey units—DCGLEMC = DCGLW × Am.
Am is the magnitude by which the residual radioactivity in a small area of elevated activity can
exceed the DCGLW while maintaining compliance with the release criterion.  Examples of area
factors are provided in Chapter 5 of this manual.
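
As a brief numerical illustration with hypothetical values:

```python
# Illustrative use of an area factor (hypothetical values).
dcgl_w = 2.0        # hypothetical wide-area DCGLW for the radionuclide (e.g., Bq/g)
area_factor = 10.0  # hypothetical Am for the small area of elevated activity
dcgl_emc = dcgl_w * area_factor
print(f"DCGLEMC = {dcgl_emc}")  # 20.0, in the same units as DCGLW
```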

arithmetic mean: The average value obtained when the sum of individual values is divided by
the number of values.

arithmetic standard deviation: A statistic used to quantify the variability of a set of data.  It is
calculated in the following manner: 1) subtracting the arithmetic mean from each data value
individually, 2) squaring the differences, 3) summing the squares of the  differences, 4) dividing
the sum of the squared differences by the total number of data values less one, and 5) taking the
square root of the quotient.  The calculation process produces the Root Mean  Square Deviation
(RMSD).
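
A minimal sketch of these five steps, using hypothetical data values:

```python
# Arithmetic standard deviation following the five steps above (hypothetical data).
from math import sqrt

data = [4.2, 5.1, 3.8, 4.7, 4.4]
m = sum(data) / len(data)                        # arithmetic mean
squared_diffs = [(x - m) ** 2 for x in data]     # steps 1 and 2
s = sqrt(sum(squared_diffs) / (len(data) - 1))   # steps 3, 4, and 5
print(f"arithmetic standard deviation = {s:.3f}")
```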

assessment:  The evaluation process used to measure the performance or effectiveness  of a
system and its elements. As used in MARSSIM, assessment is an all-inclusive term used to
denote any of the following: audit, performance evaluation, management systems review, peer
review, inspection, or surveillance.

attainment objectives: Objectives that specify the design and scope of the sampling study
including the radionuclides to be tested, the cleanup standards to be attained, the measure or
parameter to be compared to the cleanup standard, and the Type I and Type II error rates for the
selected statistical  tests.

audit (quality): A systematic and independent examination to determine whether quality
activities and related results comply with planned arrangements and whether these arrangements
are implemented effectively and are suitable to achieve objectives.

background reference area:  See reference area.

background radiation: Radiation from cosmic sources, naturally occurring radioactive
material, including radon (except as a decay product of source or special nuclear material), and
global fallout as it exists in the environment from the testing of nuclear explosive devices or
from nuclear accidents like Chernobyl which contribute to background radiation and are not
under the control of the cognizant organization. Background radiation does not include radiation
from source, byproduct, or special nuclear materials regulated by the cognizant Federal or State
agency. Different  definitions may exist for this term.  The definition provided in regulations or
regulatory program being used for a site release should always be used if it differs from the
definition provided here.

MARSSIM, Revision 1                        GL-2                               August 2000

-------
                                                                                  Glossary
Becquerel (Bq):  The International System (SI) unit of activity equal to one nuclear
transformation (disintegration) per second. 1 Bq = 2.7×10⁻¹¹ Curies (Ci) = 27.03 picocuries
(pCi).

beta (β):  The probability of a Type II error, i.e., the probability of accepting the null hypothesis
when it is false. The complement of beta (1-β) is referred to as the power of the test.

beta particle: An electron emitted from the nucleus during radioactive decay.

bias: The systematic or persistent distortion of a measurement process which causes errors in
one direction (i.e., the expected sample measurement is different from the sample's true value).

biased sample or measurement: See judgment measurement.

byproduct material: Any radioactive material (except special nuclear material) yielded in or
made radioactive  by exposure to the radiation incident to the process of producing or utilizing
special nuclear material.

calibration:  Comparison of a measurement standard, instrument, or item with a standard or
instrument of higher accuracy to detect and quantify inaccuracies and to report or eliminate those
inaccuracies by adjustments.

CDE (committed dose equivalent):  The dose equivalent calculated to be received by a tissue or
organ over a 50-year period after the intake into the body. It does not include contributions from
radiation sources  external to the body. CDE is expressed in units of Sv or rem.

CEDE (committed effective dose equivalent):  The sum of the committed dose equivalent to
various tissues in  the body, each multiplied by the appropriate weighting factor (Wt).  CEDE is
expressed in units of Sv or rem.  See TEDE.

chain of custody: An unbroken trail of accountability that ensures the physical security of
samples, data, and records.

characterization  survey:  A type of survey that includes facility or site sampling, monitoring,
and analysis activities to determine the extent and nature of contamination.  Characterization
surveys provide the basis for acquiring necessary technical information to develop, analyze, and
select appropriate cleanup techniques.

Class 1 area: An area that is projected to require a Class 1 final status survey.
August 2000                                GL-3                        MARSSIM, Revision 1

-------
Glossary
Class 1 survey: A type of final status survey that applies to areas with the highest potential for
contamination and that meet the following criteria: (1) impacted; (2) potential for delivering a dose
above the release criterion; (3) potential for small areas of elevated activity; and (4) insufficient
evidence to support reclassification as Class 2 or Class 3.

Class 2 area:  An area that is projected to require a Class 2 final status survey.

Class 2 survey: A type of final status survey that applies to areas that meet the following
criteria: (1) impacted; (2) low potential for delivering a dose above the release criterion; and (3)
little or no potential for small areas of elevated activity.

Class 3 area:  An area that is projected to require a Class 3 final status survey.

Class 3 survey: A type of final status survey that applies to areas that meet the following
criteria: (1) impacted; (2) little or no potential for delivering a dose above the release criterion;
and (3) little or no potential for small areas of elevated activity.

classification:  The act or result of separating areas or survey units into one  of three designated
classes: Class 1 area, Class 2 area, or Class 3 area.

cleanup: Actions taken to deal with a release or threatened release of hazardous substances that
could affect public health or the environment. The term is often used broadly to describe various
Superfund response actions or phases of remedial responses, such as remedial investigation/
feasibility study.  Cleanup is sometimes used interchangeably with the terms remedial action,
response action, or corrective action.

cleanup standard: A numerical limit set by a regulatory agency as a requirement for releasing a
site after cleanup.  See release criterion.

cleanup (survey) unit:  A geographical area of specified size and shape defined for the purpose
of survey design and compliance testing.

coefficient of variation: A unitless measure that allows the comparison of dispersion across
several sets of data. It is often used in environmental applications because variability (expressed
as a standard deviation) is often proportional to the mean. See relative standard deviation.

comparability: A measure of the confidence with which one data set can be compared to
another.

completeness: A measure of the amount of valid data obtained from a measurement system
compared to the amount that was expected to be obtained under correct, normal conditions.

MARSSIM, Revision 1                        GL-4                                 August 2000

-------
                                                                                   Glossary
composite sample: A sample formed by collecting several samples and combining them (or
selected portions of them) into a new sample which is then thoroughly mixed.

conceptual site model: A description of a site and its environs and presentation of hypotheses
regarding the contaminants present, their routes of migration, and their potential impact on
sensitive receptors.

confidence interval:  A range of values for which there is a specified probability (e.g., 80%,
90%, 95%) that this set contains the true value of an estimated parameter.

confirmatory survey: A type of survey that includes limited independent (third-party)
measurements, sampling, and analyses to verify the findings of a final status survey.

consensus standard: A standard established by a group representing a  cross section of a
particular industry or trade, or a part thereof.

contamination: The presence of residual radioactivity in excess of levels which are acceptable
for release of a site or facility for unrestricted use.

control chart: A graphic representation of a process, showing plotted values of some statistic
gathered from that characteristic, and one or two control limits. It has two basic uses: 1) as a
judgement to determine if a process was in control, and 2) as an aid in achieving and maintaining
statistical control.

core sample:  A soil sample taken by core drilling.

corrective action: An action taken to eliminate the causes of an existing nonconformance,
deficiency, or other undesirable situation in order to prevent recurrence.

criterion: See release criterion.

critical group:  The group of individuals reasonably  expected to receive the greatest exposure to
residual radioactivity for any applicable set of circumstances.

critical level (Lc): A fixed value of the test statistic  corresponding to a given probability level,
as determined from the sampling distribution of the test statistic.  Lc is the level at which there is
a statistical probability (with a predetermined confidence) of correctly identifying a background
value as "greater than background."
August 2000                                 GL-5                         MARSSIM, Revision 1

-------
Glossary
critical value:  The value of a statistic (t) corresponding to a given significance level as
determined from its sampling distribution; e.g., if Pr(t > t0) = 0.05, t0 is the critical value of t at
the 5 percent level.

curie (Ci): The customary unit of radioactivity.  One curie (Ci) is equal to 37 billion
disintegrations per second (3.7 × 10¹⁰ dps = 3.7 × 10¹⁰ Bq), which is approximately equal to the
decay rate of one gram of ²²⁶Ra.  Fractions of a curie, e.g., picocurie (pCi) or 10⁻¹² Ci and
microcurie (µCi) or 10⁻⁶ Ci, are levels typically encountered in decommissioning.

cyclotron: A device used to impart high energy to charged particles, of atomic weight one or
greater, which can be used to initiate nuclear transformations upon collision with a suitable
target.

D:  The true, but unknown, value of the difference between the mean concentration of residual
radioactivity in the survey unit and the reference area.

DQA (Data Quality Assessment):  The scientific and statistical evaluation of data to determine
if the data are of the right type, quality, and quantity to support their intended use.

DQOs (Data Quality Objectives):  Qualitative and quantitative statements derived from the
DQO process that clarify study technical and quality objectives, define the appropriate type of
data, and specify tolerable levels of potential decision errors that will be used as the basis for
establishing the quality and quantity of data needed to support decisions.

Data Quality Objectives Process:  A systematic strategic planning tool based on the scientific
method that identifies and defines the  type, quality, and quantity of data needed to satisfy a
specified use. The key  elements of the process include:

•      concisely defining the problem
•      identifying the decision to be made
•      identifying the inputs to that decision
•      defining the boundaries of the study
•      developing the decision rule
•      specifying tolerable limits on potential decision errors
•      selecting the most resource efficient data collection design

DQOs are the qualitative and quantitative outputs from the DQO process. The DQO process was
developed originally by the U.S. Environmental Protection Agency, but has been adapted for use
by other organizations to meet their specific planning requirement. See also graded approach.
MARSSIM, Revision 1                         GL-6                                 August 2000

-------
                                                                                   Glossary
data quality indicators: Measurable attributes of the attainment of the necessary quality for a
particular decision.  Data quality indicators include precision, bias, completeness,
representativeness, reproducibility, comparability, and statistical confidence.

data usability: The process of ensuring or determining whether the quality of the data produced
meets the intended use of the data.

DCGL (derived concentration guideline level):  A derived, radionuclide-specific activity
concentration within a survey unit corresponding to the release criterion. The DCGL is based on
the spatial distribution of the contaminant and hence is derived differently for the nonparametric
statistical test (DCGLW) and the Elevated Measurement Comparison (DCGLEMC).  DCGLs are
derived from activity/dose relationships through various exposure pathway  scenarios.

decay: See radioactive decay.

decision maker:  The person, team, board, or committee responsible for the final decision
regarding disposition of the survey unit.

decision rule: A statement that describes a logical basis for choosing among alternative actions.

decommission:  To remove a facility or site safely from service and reduce residual radioactivity
to a level that permits release of the property and termination of the license  and other
authorization for site operation.

decommissioning:  The process of removing a facility or site from operation, followed by
decontamination, and license termination (or termination of authorization for operation) if
appropriate.  The objective of decommissioning is to reduce the residual radioactivity in
structures, materials, soils, groundwater, and other media at the site so that the concentration of
each radionuclide contaminant that contributes to residual radioactivity is indistinguishable from
the background radiation concentration for that radionuclide.

decontamination:  The removal of radiological contaminants from, or their neutralization on, a
person, object or area to within levels established by governing regulatory agencies.
Decontamination is sometimes used interchangeably with remediation, remedial action, and
cleanup.

delta (δ): The amount that the distribution of measurements for a survey unit is shifted to the
right of the distribution of measurements of the reference area.
August 2000                                GL-7                        MARSSIM, Revision 1

-------
Glossary
delta (Δ): The width of the gray region. Δ divided by σ, the arithmetic standard deviation of
the measurements, is the relative shift expressed in multiples of standard deviations.  See relative
shift, gray region.

derived concentration guideline level:  See DCGL.

design specification process:  The process of determining the sampling and analysis procedures
that are needed to demonstrate that the attainment objectives are achieved.

detection limit: The net response level that can be expected to be seen with a detector with a
fixed level of certainty.

detection sensitivity:  The minimum level of ability to identify the presence of radiation or
radioactivity.

direct measurement:  Radioactivity measurement obtained by placing the detector near the
surface or media being surveyed. An indication of the resulting radioactivity level is read out
directly.

distribution coefficient (Kd): The ratio of elemental (i.e., radionuclide) concentration in soil to
that in water in a soil-water system at equilibrium.  Kd is generally measured in terms of gram
weights of soil and volumes of water (cm3/g or ml/g).

dose commitment:  The dose that an organ or tissue would receive during a specified period of
time (e.g., 50 or 70 years) as a result of intake (as by ingestion or inhalation) of one or more
radionuclides from a given release.

dose equivalent (dose): A quantity that expresses all radiations on a common scale for
calculating the effective absorbed dose. This quantity is the product of absorbed dose (rads)
multiplied by a quality factor and any other modifying factors.  Dose is measured in Sv or rem.

double-blind measurement:  Measurements that cannot be distinguished from routine
measurements by the individual performing the measurement.  See non-blind measurement and
single-blind measurement.

effective probe area:  The physical probe  area corrected for the amount of the probe area
covered by a protective screen.

elevated  area:  See area of elevated activity.
MARSSIM, Revision 1                        GL-8                                August 2000

-------
                                                                                  Glossary
elevated measurement: A measurement that exceeds a specified value DCGLEMC.

Elevated Measurement Comparison (EMC): This comparison is used in conjunction with the
Wilcoxon test to determine if there are any measurements that exceed a specified value
DCGLEMC.

exposure pathway:  The route by which radioactivity travels through the environment to
eventually cause radiation exposure to a person or group.

exposure rate: The amount of ionization produced per unit time in air by X-rays or gamma rays.
The unit of exposure rate is Roentgens/hour (R/h); for decommissioning activities the typical
units are microRoentgens per hour (µR/h), i.e., 10⁻⁶ R/h.

external radiation: Radiation from a source outside the body.

false negative decision error: The error that occurs when the null hypothesis (H0) is not
rejected when it is false. For example, the false negative decision error occurs when the decision
maker concludes that the waste is hazardous when it truly is not hazardous. A statistician usually
refers to a false negative error as a Type II decision error. The measure of the size of this error is
called beta, and is also known as the complement of the power of a hypothesis test.

false positive decision error:  A false positive decision error occurs when the null hypothesis
(H0) is rejected when it is true.  Consider an example where the decision maker presumes that a
certain waste is hazardous (i.e., the null hypothesis or baseline condition is "the waste is
hazardous").  If the decision maker concludes that there is insufficient evidence to classify the
waste as hazardous when it truly is hazardous, the decision maker would make a false positive
decision error.  A statistician usually refers to the false positive error as a Type I decision error.
The measure  of the size of this error is called alpha, the level of significance, or the size of the
critical region.

Field Sampling Plan: As defined for Superfund in the Code of Federal Regulations 40 CFR
300.430, a document which describes the number, type, and location of samples and the type of
analyses to be performed. It is part of the Sampling and Analysis Plan.

final status survey:  Measurements and  sampling to describe the radiological conditions of a
site, following completion of decontamination activities (if any) in preparation for release.
August 2000                                GL-9                       MARSSIM, Revision 1

-------
Glossary
fluence rate: A fundamental parameter for assessing the level of radiation at a measurement
site. In the case of in situ spectrometric measurements, a calibrated detector provides a measure
of the fluence rate of primary photons at specific energies that are characteristic of a particular
radionuclide.

gamma (y) radiation:  Penetrating high-energy, short-wavelength electromagnetic radiation
(similar to X-rays) emitted during radioactive decay.  Gamma rays are very penetrating and
require dense materials (such as lead or steel) for shielding.

graded approach:  The process of basing the level of application of managerial controls applied
to an item or work according to the intended use of the results and the degree of confidence
needed in the quality of the results. See data quality objectives process.

gray region: A range of values of the parameter of interest for a survey unit where the
consequences of making a decision error are relatively minor. The upper bound of the gray
region in MARSSIM is set equal to the DCGLW and the lower bound of the gray region (LBGR)
is a site-specific variable.

grid: A network of parallel horizontal and vertical lines forming squares on a map that may be
overlaid on a property parcel for the purpose of identification of exact locations. See reference
coordinate system.

grid block:  A square defined by two adjacent vertical and two adjacent horizontal reference grid
lines.

half-life (t1/2): The time required for one-half of the atoms of a particular radionuclide present to
disintegrate.

Historical Site Assessment  (HSA): A detailed investigation to collect existing information,
primarily historical, on a site and its surroundings.

hot measurement:  See elevated measurement.

hot spot:  See area of elevated activity.

hypothesis: An assumption about a property or characteristic of a set of data under study. The
goal of statistical  inference is to decide which of two complementary hypotheses is likely to be
true. The null hypothesis (H0) describes what is assumed to be the true  state of nature and the
alternative hypothesis (Ha) describes the opposite situation.

impacted area:  Any area that is not classified as non-impacted; an area with a reasonable
possibility of containing residual radioactivity in excess of natural background or fallout levels.

independent assessment: An assessment performed by a qualified individual, group, or
organization that is not part of the organization directly performing and accountable for the work
being assessed.

indistinguishable from background: The term indistinguishable from background means that
the detectable concentration distribution of a radionuclide is not statistically different from the
background concentration distribution of that radionuclide in the vicinity of the site or, in the
case of structures, in similar materials using adequate measurement technology, survey, and
statistical techniques.

infiltration rate: The rate at which a quantity of a hazardous substance moves from one
environmental medium to another—e.g., the rate at which a quantity of a radionuclide moves
from a source into and through a volume of soil or solution.

inspection: An  activity such as measuring, examining, testing, or gauging one or more
characteristics of an entity and comparing the results with specified requirements in order to
establish whether conformance is achieved for each characteristic.

inventory: Total residual quantity of formerly licensed radioactive material at a site.

investigation level:  A derived media-specific, radionuclide-specific concentration or activity
level of radioactivity that: 1) is based on the release criterion, and 2) triggers a response, such  as
further investigation or cleanup, if exceeded.  See action level.

isopleth:  A line drawn through points on a graph or plot at which a given quantity has the same
numerical value  or occurs with the  same frequency.

judgment measurement: Measurements performed at locations selected using professional
judgment based  on unusual  appearance, location relative to known contaminated areas, high
potential for residual radioactivity,  general supplemental information, etc. Judgment
measurements are not included in the statistical evaluation of the survey unit data because they
violate the assumption of randomly selected, independent measurements. Instead, judgment
measurements are individually compared to the DCGLW.

karst terrain: A kind of terrain with characteristics of relief and drainage arising from a high
degree of rock solubility.  The majority of karst conditions occur in limestone areas, but karst
may also occur in areas of dolomite, gypsum, or salt deposits.  Features associated with karst
terrain may include irregular topography, abrupt ridges, sink holes, caverns, abundant springs,
and disappearing streams. Well developed or well integrated drainage systems of streams and
tributaries are generally not present.

klystron: An electron tube used in television, etc., for converting a stream of electrons into ultra
high-frequency waves that are transmitted as a pencil-like radio beam.

less-than data: Measurements that are less than the minimum detectable concentration.

license:  A license issued under the regulations in parts 30 through 35, 39, 40, 60, 61, 70 or part
72 of 10  CFR Chapter I.

licensee: The holder of a license.

license termination: Discontinuation of a license, the eventual conclusion to decommissioning.

lower bound of the gray region (LBGR): The minimum value of the gray region. The width
of the gray region (DCGL-LBGR) is also referred to as the shift, Δ.

lower limit of detection (LD):  The smallest amount of radiation or radioactivity that statistically
yields a net result above the method background.  The critical  detection level, Lc, is the lower
bound of the 95% detection interval defined for LD and is the level at which there is a 5% chance
of calling a background value "greater than background." This value should be used when
actually counting samples or making direct radiation measurements.  Any response above this
level should be considered as above background; i.e., a net positive result. This will ensure 95%
detection capability for LD.  A 95% confidence interval should be calculated for all  responses
greater than Lc.

m:  The number of measurements  from the reference area used to conduct a statistical test.

magnetron: A vacuum tube in which the flow of ions from the heated cathode to the anode is
controlled by a magnetic field externally applied and perpendicular to the electric field by which
they are propelled.  Magnetrons are used to produce very short radio waves.

measurement:  For the purpose of MARSSIM, the term is used interchangeably to mean: 1) the act of
using a detector to determine the level or quantity of radioactivity on a surface or in a sample of
material removed from the media being evaluated, or 2) the quantity obtained by the act of
measuring.

micrometeorology: The study of weather conditions in a local or very small area, such as
immediately around a tree or building, that can affect meteorological conditions.

minimum detectable concentration (MDC):  The minimum detectable concentration (MDC) is
the a priori activity level that a specific instrument and technique can be expected to detect 95%
of the time. When stating the detection capability of an instrument, this value should be used.
The MDC is the detection limit, LD, multiplied by an appropriate conversion factor to give units
of activity.
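As an illustration only, one common formulation (Currie's) for a measurement paired with a
background count B uses a critical level Lc = 2.33√B and a detection limit LD = 3 + 4.65√B in
counts, with the MDC obtained by converting counts to activity. The sketch below assumes these
expressions and hypothetical instrument parameters; it is not a restatement of the detection
capability equations used elsewhere in this manual:

    import math

    background_counts = 400     # counts in the counting interval (hypothetical)
    count_time_min = 1.0        # counting time, minutes (hypothetical)
    efficiency = 0.20           # counts per disintegration (hypothetical)

    L_c = 2.33 * math.sqrt(background_counts)       # critical level, counts
    L_d = 3 + 4.65 * math.sqrt(background_counts)   # detection limit, counts
    mdc_dpm = L_d / (efficiency * count_time_min)   # about 480 dpm with these values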

minimum detectable count rate (MDCR): The minimum detectable count rate (MDCR) is the
a priori count rate that a specific instrument and technique can be expected to detect.

missing or unusable data: Data (measurements) that are mislabeled, lost, or do not meet
quality control standards. Less-than data are not considered to be missing or unusable data. See
R.

munitions: Military supplies, especially weapons and ammunition.

N: N = m + n, the total number of measurements required from the reference area and a survey
unit. See m and n.

n: Number of measurements from a survey unit used to conduct a statistical test.

nf:  The number of samples that should be collected in an area to assure that the required number
of measurements from that area for conducting statistical tests is obtained. nf = n/(1-R).
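For example, if n = 20 measurements are required for the statistical test and the anticipated rate
of missing or unusable data is R = 0.1 (values chosen only for illustration), the number of samples
to collect, rounded up to a whole sample, is:

    import math

    n = 20      # measurements required for the statistical test
    R = 0.10    # anticipated rate of missing or unusable data
    n_f = math.ceil(n / (1 - R))   # 20/0.9 = 22.2, so collect 23 samples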

NARM: Naturally occurring or accelerator-produced radioactive material, such as radium, and
not classified as source material.

naturally occurring radionuclides: Radionuclides and their associated progeny produced
during the formation of the earth or by interactions of terrestrial matter with cosmic rays.

non-blind measurement: Non-blind measurements are measurements that have a concentration
and origin that are known to the individual performing the measurement.  See single-blind
measurement and double-blind measurement.

nonconformance: A deficiency in characteristic, documentation, or procedure that renders the
quality of an item or activity unacceptable or indeterminate; nonfulfillment of a specified
requirement.

non-impacted area: Areas where there is no reasonable possibility (extremely low probability)
of residual contamination. Non-impacted areas are typically located off-site and may be used as
background reference areas.

nonparametric test: A test based on relatively few assumptions about the exact form of the
underlying probability distributions of the measurements. As a consequence, nonparametric tests
are generally valid for a fairly broad class of distributions.  The Wilcoxon Rank Sum test and the
Sign test are examples of nonparametric tests.

normal (gaussian) distribution: A family of bell shaped distributions described by the mean
and variance.

organization: A company, corporation, firm, government unit, enterprise, facility, or institution,
or part thereof, whether incorporated or not, public or private, that has its own functions and
administration.

outlier: Measurements that are unusually large or small relative to the rest and therefore are
suspected of misrepresenting the population from which they were collected.

p: The probability that a random measurement from the survey unit is less than A.

p':  The probability that the sum of two independent random measurements from the survey unit
is less than 2A.

Pr:  The probability that a measurement performed at a random location in the survey unit is
greater than  a measurement performed at a random location in the reference area.

peer review: A documented critical review of work generally beyond the state of the art or
characterized by the existence of potential uncertainty. The peer review is conducted by
qualified individuals (or organization) who are independent of those who performed the work,
but are collectively equivalent in technical expertise (i.e., peers) to those who performed the
original work. The peer review is conducted to ensure that activities are technically adequate,
competently performed, properly documented, and satisfy established technical and quality
requirements. The peer review is an in-depth assessment of the assumptions, calculations,
extrapolations, alternate interpretations, methodology, acceptance criteria, and conclusions
pertaining to specific work and of the documentation that supports them. Peer reviews provide
an evaluation of a subject where quantitative methods of analysis or measures of success are
unavailable or undefined, such as in research and development.

performance evaluation: A type of audit in which the quantitative data generated in a
measurement system are obtained independently and compared with routinely obtained data to
evaluate the proficiency of an analyst or laboratory.

physical probe area:  The physical surface area assessed by a detector.  The physical probe area
is used to make probe area corrections in the activity calculations.

Pitman efficiency:  A measure of performance for statistical tests. It is equal to the reciprocal of
the ratio of the sample sizes required by each of two tests to achieve the same power, as these
sample sizes become large.

power (1-β): The probability of rejecting the null hypothesis when it is false. The power is
equal to one minus the Type II error rate, i.e., (1-β).
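For example, if the acceptable probability of a Type II decision error is β = 0.10 (a hypothetical
planning value), the corresponding power of the test is:

    beta = 0.10          # Type II error rate (hypothetical planning value)
    power = 1 - beta     # 0.90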

precision:  A measure of mutual agreement among individual measurements of the same
property, usually under prescribed similar conditions, expressed generally in terms of the
standard deviation.

process:  A combination of people, machine and equipment, methods, and the environment in
which they operate to produce a given product or service.

professional judgement: An expression of opinion, based on technical knowledge and
professional experience, assumptions, algorithms, and definitions, as stated by an expert in
response to technical problems.

qualified data: Any data that have been modified or adjusted as part of statistical or
mathematical evaluation, data validation, or data verification operations.

quality:  The totality of features and characteristics of a product or service that bear on its ability
to meet the stated or implied needs and expectations of the user.

quality assurance (QA): An integrated system of management activities involving planning,
implementation, assessment, reporting, and quality improvement to ensure that a process, item,
or service is of the type and quality needed and expected by the customer.

Quality Assurance Project Plan (QAPP):  A formal document describing in comprehensive
detail the necessary QA, QC, and other technical activities that must be implemented to ensure
that the results of the work performed will satisfy the stated performance criteria. As defined for
Superfund in the Code of Federal Regulations 40 CFR 300.430, the Quality Assurance Project
Plan describes policy, organization, and functional activities and the Data Quality Objectives and
measures necessary to achieve adequate data for use in selecting the appropriate remedy. The
QAPP is a plan that provides a process for obtaining data of sufficient quality and quantity to
satisfy data needs. It is a part of the Sampling and Analysis Plan.

quality control (QC): The overall system of technical activities that measure the attributes and
performance of a process, item, or service against defined standards to verify that they meet the
stated requirements established by the customer; the operational techniques and activities that are
used to fulfill requirements for quality.

quality indicators: Measurable attributes of the attainment of the necessary quality for a
particular environmental decision. Indicators of quality include precision, bias, completeness,
representativeness, reproducibility, comparability, and statistical confidence.

Quality Management Plan (QMP):  A formal document that describes the quality system in
terms of the organizational structure, functional responsibilities of management and staff, lines of
authority, and required interfaces for those planning, implementing, and assessing all activities
conducted.

quality system: A structured and documented management system describing the policies,
objectives, principles, organizational authority, responsibilities, accountability, and
implementation plan of an organization for ensuring quality in its work processes, products
(items), and services.  The quality system provides the framework for planning, implementing,
and assessing work performed by the organization and for carrying out required QA and QC.

R:  The rate of missing or unusable measurements expected to occur for samples collected in
reference areas or survey units. See missing or unusable data. See nf.  (Not to be  confused with
the symbol for the radiation exposure unit Roentgen.)

RA:  The acceptable level of risk associated with not detecting an area of elevated activity of
area A.

radiation survey: Measurements of radiation levels associated with a site together with
appropriate documentation and data evaluation.

radioactive decay: The spontaneous transformation of an unstable atom into one or more
different nuclides accompanied by either the emission of energy and/or particles from the
nucleus, nuclear capture or ejection of orbital electrons, or fission. Unstable atoms decay into a
more stable state, eventually reaching a form that does not decay further or has a very long half-
life.

radioactivity: The mean number of nuclear transformations occurring in a given quantity of
radioactive material per unit time. The International System (SI) unit of radioactivity is the
Becquerel (Bq).  The customary unit is the Curie (Ci).

radiological survey:  Measurements of radiation levels and radioactivity associated with a site
together with appropriate documentation and data evaluation.

radioluminescence: Light produced by the absorption of energy from ionizing radiation.

radionuclide: An unstable nuclide that undergoes radioactive decay.

random error:  The deviation of an observed value from the true value is called the error of
observation. If the error of observation behaves like a random variable (i.e., its value occurs as
though chosen at random from a probability distribution of such errors) it is called a random
error.  See systematic error.

readily removable:  A qualitative statement of the extent to which a radionuclide can be
removed from a  surface or medium using non-destructive, common, housekeeping techniques
(e.g., washing with moderate amounts of detergent and water) that do not generate large volumes
of radioactive waste requiring subsequent disposal or produce chemical wastes that are expected
to adversely affect public health or the environment.

reference area:  Geographical area from which representative reference measurements are
performed for comparison with measurements performed in specific survey units at the
remediation site. A site radiological reference area (background area) is defined as an area that has similar
physical, chemical, radiological, and biological characteristics as the site area being remediated,
but which has not been contaminated by site activities.  The distribution and concentration of
background radiation in the reference area  should be the same as that which would be expected
on the site if that site had never been contaminated.  More than one reference area may be
necessary for valid comparisons if a site exhibits considerable physical, chemical, radiological, or
biological variability.

reference coordinate system: A grid of intersecting lines  referenced to a fixed site location or
benchmark.  Typically the lines are arranged in a perpendicular pattern dividing the survey
location into squares or blocks of equal areas.  Other patterns include three-dimensional and
polar coordinate systems.

reference region:  The geographical region from which reference areas will be selected for
comparison with survey units.

regulation:  A rule, law, order, or direction from federal or state governments regulating action
or conduct. Regulations concerning radioisotopes in the environment in the United States are
shared by the Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory
Commission (NRC), the U.S. Department of Energy (DOE), and many State governments.
Federal regulations and certain directives issued by the U.S. Department of Defense (DOD) are
enforced within the DOD.

relative shift (Δ/σ): Δ divided by σ, the standard deviation of the measurements. See delta.
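For example, with a hypothetical DCGLW of 1.0 Bq/g, an LBGR of 0.5 Bq/g, and an estimated
measurement standard deviation of 0.25 Bq/g:

    dcgl_w = 1.0     # Bq/g (hypothetical)
    lbgr = 0.5       # Bq/g (hypothetical)
    sigma = 0.25     # Bq/g, estimated standard deviation of the measurements

    shift = dcgl_w - lbgr            # Δ = 0.5 Bq/g
    relative_shift = shift / sigma   # Δ/σ = 2.0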

relative standard deviation:  See coefficient of variation.

release criterion: A regulatory limit expressed in terms of dose or risk.

rem (radiation equivalent man): The conventional unit of dose equivalent.  The corresponding
International System (SI) unit is the Sievert (Sv):  1 Sv = 100 rem.

remedial action:  Those actions that are consistent with a permanent remedy taken instead of, or
in addition to, removal action in the event of a release or threatened release of a hazardous
substance into the environment, to prevent or minimize the release of hazardous substances so
that they do not migrate to cause substantial danger to present or future public health or welfare
or the environment.  See remedy.

remediation: Cleanup or other methods used to remove or contain a toxic spill or hazardous
materials from a Superfund site.

remediation control survey:  A type of survey that includes monitoring the progress of remedial
action by real time measurement of areas being decontaminated to determine whether or not
efforts are effective and to guide further decontamination activities.

remedy: See remedial action.

removable activity: Surface activity that is readily removable by wiping the surface with
moderate pressure and can be assessed  with standard radiation detectors. It is usually expressed
in units of dpm/100 cm2.

removal: The cleanup or removal of released hazardous substances, or pollutants or
contaminants which may present an imminent and substantial danger; such actions as may be
necessary taken in the  event of the threat of release of hazardous substances into the
environment; such actions as may be necessary to monitor, assess, and evaluate the threat of
release of hazardous substances; the removal and disposal of material, or the taking of other such
actions as may be necessary to prevent, minimize or mitigate damage to the public health or
welfare or the environment.

replicate: A repeated analysis of the same sample or repeated measurement at the same location.

representative measurement:  A measurement that is selected using a procedure in such a way
that it, in combination with other representative measurements, will give an accurate
representation of the phenomenon being studied.

representativeness: A measure of the degree to which data accurately and precisely represent a
characteristic of a population, parameter variations at a sampling point, a process condition, or an
environmental condition.

reproducibility: The precision, usually expressed as a standard deviation, that measures the
variability among the results of measurement of the same sample at different laboratories.

residual radioactivity: Radioactivity in structures, materials, soils, groundwater, and other
media at a site resulting from activities under the cognizant organization's control. This includes
radioactivity from all sources used by the cognizant organization, but excludes background
radioactivity as specified by the applicable regulation or standard. It also includes radioactive
materials remaining at the site as a result of routine or accidental releases of radioactive material
at the site and previous burials at the site, even if those burials were made in accordance with the
provisions of 10 CFR Part 20.

restoration:  Actions to return a remediated area to a usable state following decontamination.

restricted use: A designation following remediation requiring radiological controls.

robust:  A statistical test or method that is approximately valid under a wide range of conditions.

run chart: A chart used to visually represent data.  Run charts are used to monitor a process to
see whether or not the long range average is changing.  Run charts are points plotted on a graph
in the order in which they become available, such as parameters plotted versus time.

s: The arithmetic standard deviation of the mean.

S+: The test statistic used for the Sign test.

sample:  (As used in MARSSIM) A part or selection from a medium located in a survey unit or
reference area that represents the quality or quantity of a given parameter or nature of the whole
area or unit; a portion serving as a specimen.

sample:  (As used in statistics) A set of individual samples or measurements drawn from a
population whose properties  are studied to gain information about the entire population.

Sampling and Analysis Plan (SAP):  As defined for Superfund in the Code of Federal
Regulations 40 CFR 300.430, a plan that provides a process for obtaining data of sufficient quality
and quantity to satisfy data needs. The Sampling and Analysis Plan consists of two parts:  1) the
Field Sampling Plan, which describes the number, type, and location of samples and the type of
analyses; and 2) the Quality Assurance Project Plan, which describes policy, organization,
functional activities, the Data Quality Objectives, and measures necessary to achieve adequate
data for use in selecting the appropriate remedy.

scanning:  An evaluation technique performed by moving a detection device over a surface at a
specified speed and distance above the surface to detect radiation.

scoping survey: A type of survey that is conducted to identify: 1) radionuclide contaminants,
2) relative radionuclide ratios, and 3) general levels and extent of contamination.

self-assessment:  Assessments of work conducted by individuals, groups, or organizations
directly responsible for overseeing and/or performing the work.

shape parameter (S):  For an elliptical area of elevated activity, the ratio of the semi-minor axis
length to the semi-major axis length.  For a circle, the shape parameter is one. A small shape
parameter corresponds to a flat ellipse.

shift: See delta (Δ).

Sievert (Sv):  The special  name for the International System (SI) unit of dose equivalent.
1 Sv = 100 rem = 1 Joule per kilogram.

Sign test: A nonparametric statistical test used to demonstrate compliance with the release
criterion when the radionuclide of interest is not present in background and the distribution of
data is not symmetric. See also Wilcoxon Rank Sum test.

single-blind measurement: A measurement that can be distinguished from routine
measurements but is of unknown concentration. See non-blind measurement and double-blind
measurement.

site: Any installation, facility, or discrete, physically  separate parcel  of land, or any building or
structure or portion thereof, that is being considered for survey and investigation.

site reconnaissance:  A visit to the site to gather sufficient information to support a site decision
regarding the need for further action, or to verify existing site data. Site reconnaissance is not a
study of the full extent of contamination at a facility or site, or a risk assessment.

size (of a test): See alpha.

soil: The top layer of the earth's surface, consisting of rock and mineral particles mixed with
organic matter. A particular kind of earth or ground—e.g., sandy soil.

soil activity (soil  concentration): The level of radioactivity present in soil and expressed in
units of activity per soil mass (typically Bq/kg or pCi/g).
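Because 1 pCi = 0.037 Bq and 1 g = 0.001 kg, 1 pCi/g corresponds to 37 Bq/kg. A one-line
conversion sketch (the concentration value is hypothetical):

    soil_conc_pci_per_g = 5.0                        # pCi/g (hypothetical)
    soil_conc_bq_per_kg = soil_conc_pci_per_g * 37   # = 185 Bq/kg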

source material:  Uranium and/or Thorium other than that classified as special nuclear material.

source term:  All residual radioactivity remaining at the site, including material released during
normal operations, inadvertent releases, or accidents, and that which may have been buried at the
site in accordance with 10 CFR Part 20.

special nuclear material: Plutonium, 233U, and Uranium enriched in 235U; material capable of
undergoing a fission reaction.

split: A sample that has been homogenized and divided into two or more aliquots for subsequent
analysis.

standard normal distribution: A normal (Gaussian) distribution with  mean zero and variance
one.

standard operating procedure (SOP):  A written document that details the method for an
operation, analysis, or action with thoroughly prescribed techniques and  steps, and that is
officially approved as the method for performing certain routine or repetitive tasks.

statistical control:  The condition describing a process from which all special causes have been
removed, evidenced on a control chart by the absence of points beyond the control limits and by
the absence of non-random patterns or trends within the control limits. A special cause is a
source of variation that is intermittent, unpredictable, or unstable.

stratification: The act or result of separating an area into two or more sub-areas so that each
sub-area has relatively homogeneous characteristics such as contamination level, topology, surface
soil type, vegetation cover, etc.

subsurface soil sample:  A soil sample that reflects the modeling assumptions used to develop
the DCGL for subsurface soil activity.  An example would be soil taken deeper than 15 cm below
the soil surface to support surveys performed to demonstrate compliance with 40 CFR 192.

surface contamination:  Residual radioactivity found on building or equipment surfaces and
expressed in units of activity per surface area (Bq/m2 or dpm/100 cm2).
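Because 1 dpm = 1/60 Bq and 100 cm2 = 0.01 m2, 1 dpm/100 cm2 corresponds to about
1.7 Bq/m2. A short conversion sketch (the surface activity value is hypothetical):

    surface_dpm_per_100cm2 = 1000.0                                    # dpm/100 cm2 (hypothetical)
    surface_bq_per_m2 = surface_dpm_per_100cm2 * (1.0 / 60.0) / 0.01   # about 1,667 Bq/m2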

surface soil sample: A soil sample that reflects the modeling assumptions used to develop the
DCGL for surface soil activity.  An example would be soil taken from the first 15 cm of surface
soil to support surveys performed to demonstrate compliance with 40 CFR 192.

surveillance (quality):  Continual or frequent monitoring and verification of the status of an
entity and the analysis of records to ensure that specified requirements are being fulfilled.

survey: A systematic evaluation and documentation of radiological measurements with a
correctly calibrated instrument or instruments that meet the sensitivity required by the objective
of the evaluation.

survey plan: A plan for determining the radiological characteristics of a site.

survey unit: A geographical area consisting of structures or land areas of specified size and
shape at a remediated site for which a separate decision will be made whether the unit attains the
site-specific reference-based cleanup standard for the designated pollution parameter.  Survey
units are generally formed by grouping contiguous site areas with a similar use history and the
same classification of contamination potential.  Survey units are established to facilitate the
survey process and the statistical analysis of survey data.

systematic error:  An error of observation based on system faults which are biased in one or
more ways, e.g., tending to be on one side of the true value  more than the other.

T+:  The test statistic for the Wilcoxon Signed Rank test.

tandem testing: Two or more statistical tests conducted using the same data set.

technical  review:  A documented critical review of work that has been performed within the
state of the art.  The review is accomplished by one or more qualified reviewers who are
independent of those who performed the work,  but are collectively equivalent in technical
expertise to those who performed the original work. The review is an in-depth analysis and
evaluation of documents, activities, material, data,  or items that require technical verification or
validation for applicability,  correctness, adequacy,  completeness, and assurance that established
requirements are satisfied.

technical  systems audit (TSA): A thorough, systematic, on-site, qualitative audit of facilities,
equipment, personnel, training, procedures, recordkeeping,  data validation, data management,
and reporting aspects of a system.

TEDE (total effective dose equivalent): The sum of the effective dose equivalent (for external
exposure) and the committed effective dose equivalent (for internal exposure). TEDE is
expressed in units of Sv or rem. See CEDE.

test statistic: A function of the measurements (or their ranks) that has a known distribution if
the null hypothesis is true. This is compared to the critical level to determine if the null
hypothesis should be accepted or rejected.  See S+, T+, and Wr.

tied measurements: Two or more measurements that have the same value.

traceability: The ability to trace the history, application, or location of an entity by means of
recorded identifications.  In a calibration sense, traceability relates measuring equipment to
national or international standards, primary standards, basic physical constants or properties, or
reference materials. In a data collection sense, it relates calculations and data generated
throughout the project back to the requirements for quality for the project.

triangular sampling grid: A grid of sampling locations that is arranged in a triangular pattern.
See grid.

two-sample t test:  A parametric statistical test used in place of the Wilcoxon Rank Sum  (WRS)
test if the reference area and survey unit measurements are known to be normally (Gaussian)
distributed and there are no less-than measurements in either data set.

Type I decision error:  A decision error that occurs when the null hypothesis is rejected when it
is true. The probability of making a Type I decision error is called alpha (α).

Type II decision error:  A decision error that occurs when the null hypothesis is accepted when
it is false. The probability of making a Type II decision error is called beta (β).

unity rule (mixture rule):  A rule applied when more than one radionuclide is present at a
concentration that is distinguishable from background and where a single concentration
comparison does not apply. In this case, the mixture of radionuclides is compared against default
concentrations by applying the unity rule. This is accomplished by determining the ratio between
1) the concentration of each radionuclide in the mixture and 2) the concentration for that
radionuclide in an appropriate listing of default values. The sum of the ratios for all
radionuclides in  the mixture should not exceed 1.
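For example, for a hypothetical two-radionuclide mixture with measured concentrations C1 and
C2 and corresponding default concentrations D1 and D2, the unity rule is satisfied only if
C1/D1 + C2/D2 does not exceed 1:

    concentrations = {"nuclide_1": 0.4, "nuclide_2": 1.5}   # measured values (hypothetical units)
    default_values = {"nuclide_1": 1.0, "nuclide_2": 5.0}   # default concentrations (same units)

    sum_of_ratios = sum(concentrations[n] / default_values[n] for n in concentrations)
    meets_unity_rule = sum_of_ratios <= 1.0   # 0.4 + 0.3 = 0.7, so True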

unrestricted area:  Any area where access is not controlled by a licensee for purposes of
protection of individuals from exposure to radiation and radioactive materials—including areas
used for residential purposes.

unrestricted release: Release of a site from regulatory control without requirements for future
radiological restrictions. Also known as unrestricted use.

validation: Confirmation by examination and provision of objective evidence that the particular
requirements for a specific intended use are fulfilled. In design and development, validation
concerns the process of examining a product or result to determine conformance to user needs.

verification: Confirmation by examination and provision of objective evidence that the
specified requirements have been fulfilled. In design and development, verification concerns the
process of examining a result of given activity to determine conformance to the stated
requirements for that activity.

Wr: The sum of the ranks of the adjusted measurements from the reference area, used as the test
statistic for the Wilcoxon Rank Sum test.

Ws: The sum of the ranks of the measurements from the survey unit, used with the Wilcoxon
Rank Sum test.

weighting factor (wT): The fraction of the overall health risk, resulting from uniform, whole-
body radiation, attributable to specific tissue.  The dose equivalent to tissue is multiplied by the
appropriate weighting factor to obtain the effective dose equivalent to the tissue.
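For example, the effective dose equivalent is the sum of the weighted tissue dose equivalents.
The sketch below uses hypothetical tissue doses and weighting factors; the actual values of wT are
specified in the applicable regulations:

    # tissue: (dose equivalent in rem, weighting factor) -- hypothetical values
    tissue_doses = {"lung": (0.10, 0.12), "thyroid": (0.50, 0.03)}

    effective_dose_rem = sum(dose * w for dose, w in tissue_doses.values())
    # 0.10*0.12 + 0.50*0.03 = 0.027 rem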

Wilcoxon Rank Sum (WRS) test: A nonparametric statistical test used to determine
compliance with the release criterion when the radionuclide of concern is present in background.
See also Sign test.

working level: A special  unit of radon exposure defined as any combination of short-lived
radon daughters in 1 liter of air that will result in the ultimate emission of 1.3×10⁵ MeV of
potential alpha energy. This value is approximately equal to the alpha energy released from the
decay of progeny in equilibrium with 100 pCi of 222Rn.

Z1-α: The value from the standard normal distribution that cuts off 100α% of the upper tail of
the standard normal distribution. See standard normal distribution.
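For example, for α = 0.05 the value cutting off the upper 5% of the standard normal distribution
is approximately 1.645, and for α = 0.025 it is approximately 1.96. A sketch using the inverse of
the standard normal cumulative distribution (assumes the scipy package is available):

    from scipy.stats import norm

    alpha = 0.05
    z_value = norm.ppf(1 - alpha)   # about 1.645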

                                              INDEX
α
    see Type I decision error
β
    see Type II decision error
91b material
Amin
    see area of elevated activity
action level
activity
    activity concentration
    distribution

    ratios
    gross activity
    units of activity
    see elevated activity
air
ALARA

alpha (α) radiation
    analysis
    detection sensitivity
        direct measurement
        scanning

    detectors
    attenuation
    measurement

    radon
alternative hypothesis

area
    evaluation & HSA
    classification

    contaminated
    land
    reference coordinate system
    scanning
    site
    site diagram
    structures
    survey unit
3-5

D-23
2-14, 27; 4-34,
35;7-3;D-6, 8,
9, 15, 16
2-3; 3-11
4-1,6
2-29, 30;
6-33, 34
4-4,5
4-8
2-14; 4-1

3-19; 5-10, 14,
18; 6-11, 13,
55 to 60; 7-13,
16, 27; App. M
2-5; 5-52;  8-21,
27; C-8 to  10
4-6, 7; 7-15
7-22

6-32 to 37
2-14; 5-48;
6-47 to 49
6-15 to 17, 20
4-23, 25
5-12, 13;
6-13, 14
6-55 to 59
2-39; 5-25;
8-11, 17

3-11
2-4, 5, 17,  28;
4-11
2-3
4-26
4-27
2-31; 5-46 to 48
4-17
3-21
4-23, 25
2-4; 4-15
    demonstrating compliance
    determining data points
    flagging
    investigation level
    final status survey design

area factor
                               2-3, 4, 27, 28,
                               30; 5-35 to 39;
                               6-42 to 45;
                               8-22, 23, 27
                               2-27
                               5-35
                               5-44
                               5-44 to 46
                               2-29, 32;
                               5-46 to 52
                               2-27; 5-36 to 39;
                               8-16, 22, 24
arithmetic mean
    see mean
arithmetic standard deviation
    see standard deviation
background (radiation)
    activity                     5-10, 11
    decommissioning            4-13
    detection sensitivity          6-37, 39 to 49
    ground water                5-13
    indistinguishable from        2-39
    samples                     5-10, 11; 7-2, 5
    statistical tests               2-26; 4-9; 5-28
    see background reference area
background reference area 2-6,28; 4-13 to
                               16; 7-5; 8-3 to
                               11, 17to21;A-5
                               4-13
                               5-25 to 31
                               5-27
    background radiation
    data points
    Pr
    relative shift
        WRS test
    survey
Becquerel (Bq)
    see conversion table
beta (β) radiation
    analysis
    detection sensitivity
        direct measurement
        scanning

    detectors
    attenuation
    measurement
    radon
                               5-26
                               5-1,2,
10
bias
    field measurements
    laboratory measurements
                               4-6
                               7-21, 22

                               6-32 to 37
                               2-14; 5-48;
                               6-37 to 47
                               6-15 to 17, 21
                               4-23, 25
                               5-12, 13
                               6-55, 58, 59
                               2-11; 4-32 to 38
                               6-4 to 6
                               7-4,5
biased sample measurement
    see judgement measurement
byproduct material
    byproducts
calibration
                              C-15, 16
                              3-5
                              4-17; 6-20 to 28;
                              7-4, 13; 9-5, 6
CEDE (committed effective dose
    equivalent)
CERCLA

    compared to MARSSIM
Chain of Custody

characterization survey
    checklist
    DCGLs
checklist(s)
    see survey checklist
Class 1 area

    investigation level
    scanning
Class 2 area

    investigation level
    scanning
Class 3 area
    investigation level
    scanning
classification
    areas
    HSA/scoping
    see Class 1, 2, and 3 area
cleanup
    regulations
    release criterion
cleanup standard
cleanup (survey) unit
    see survey unit
coefficient of variation
                              2-2
                              2-22, 39; 3-1, 2;
                              5-1,7
                              App. F
                              5-3, 17;
                              7-23 to 25; 9-8
                              2-15, 16,22,23;
                              3-24; 4-21;
                              5-7 to 17; A-17
                              5-16, 17
                              4-4
                              2-5; 4-11; 5-48;
                              8-24, 25
                              5-45
                              2-32; 5-46
                              2-5; 4-12; 5-49;
                              8-24
                              5-45
                              2-32; 5-47
                              2-5; 4-12; 5-49
                              5-45
                              2-33; 5-48
                              2-4, 10, 17, 28;
                              3-1, 12, 22; 4-11;
                              5-46 to 51; 7-7;
                              8-1,2, 15, 16,22,
                              24, 27; A-5; N-16
                              2-5
                              2-23

                              1-1, 4; 5-18, 19
                              1-3
                              2-2
                              2-2
                              5-26
comparability

completeness

computer code
    DEFT
    ELIPGRID
    RESRAD
    RESRAD-BUILD
conceptual site model
confidence interval
    alternate null hypothesis
confirmatory survey
    survey design
    see final status survey
contamination
    characterization survey
    classification

    DCGLs
    decommissioning criteria
    field measurements
    final status survey
    HSA
       historical data
       reconnaissance
       identifying
       in soil
       in water
       in structures
       in air
    remedial action
    sampling

    surrogate measurements
    see area of elevated activity
    see impacted area
control chart

corrective action

    bias
    comparability
    completeness
    precision
    representativeness
2-11; 6-6; 7-6,
12;N-12tol5
2-11; 6-6, 7; 7-6,
7;N-14to 16

D-20, 21
D-23
5-36
5-36
3-21, 22; 4-21;
5-8, 47; 7-11, 13,
15;A-10
6-53 to 55
2-36

5-21

1-1,2,3,6
5-7 to 15
2-4, 5, 28; 3-3;
4-11
2-2, 3; 4-3
5-25
6-5,6
5-25 to 52
2-22
3-7, 10
3-9
3-11
3-13, 14
3-15, 17
3-20
3-19
2-23; 5-18, 19
7-11 to 16;
App. M
4-4
4-33, 37;
6-5,7,  8
2-23; 6-28; 7-11;
9-8,9
N-10
N-15
N-16
N-9
N-13
criterion
    alternate hypothesis
    compliance
    DCGLs
    FSS
    measurement
    QC
    release criterion
    statistical tests
    null hypothesis
critical level (Lc)
critical value
                               2-39
                               2-25
                               4-3
                               2-24
                               6-1
                               4-32 to 38
                               1-1 to 3; 3-24
                               2-22, 34
                               2-9
                               6-32 to 37
                               8-12, 13, 15, 18,
                               21;A-18;
                               D-16, 17
curie (Ci)
    see conversion table
data
    conversion                  6-28 to 31
    data interpretation checklist   8-27
    distribution                 8-4,5
    number of points needed      2-10
       EMC                   5-35 to 39
       Sign test                5-31 to 35
        WRS test               5-25 to 31
    preliminary review (DQA)    E-3
    review                     N-5
    skewness                   8-5
    spatial dependency           8-4
    see mean, median, standard deviation
    see posting plot
    see ranked data
    see stem and leaf display
Data Life Cycle              2-6 to 12; 4-35;
                               5-46; 9-2, 3, 5
    figure                      2-7
    steps:
        1. planning              2-8; App. D
       2. implementation       2-11
       3. assessment           2-11; App. E
       4. decision making       2-7
    table                       2-16

Data Quality Assessment (DQA)
                               1-4; 2-6; 5-46;
                               8-1, 2; 9-2, 5;
                               App. E
    assessment phase            2-8, 11; App. E
    historical data               3-7
Data Quality Objectives (DQOs)
                               1-3, 4; 2-7, 9;
                               4-4, 19; 5-2, 8,
                               21, 52; 6-2;
                               7-1, 2; 8-1, 2;
                               9-2, 7, 8; App.D
    DQO Process               2-10; App. D
        iterations (figure)        D-3
        state problem            D-4
        identify decision         D-5
        inputs                  D-5,6
        study boundaries         D-6 to 8
        develop decision rule     D-8 to 13
        decision errors           D-13 to 28
        optimize design         D-28, 29
    HSA                       3-2
    Planning                   2-9
    preliminary review (DQA)    E-l
    measurement uncertainty     6-50
    QAPP                      9-2,3
data quality indicators      2-11; 6-3, 7; 7-2,
                               7; 9-9; N-6 to 18
Derived Concentration Guideline Level
(DCGL)                      2-2,11,33;
                               4-3 to 11; 6-1, 2,
                               7, 19, 32, 50;
                               7-2, 7, 9; 8-2, 6,
                               11,22, 26; 9-5
    DCGLW                    2-3; A-2; D-9
    DCGLEMC                   2-3
    HSA                       3-1, 12
    gross activity               4-8
    sampling                   7-2, 7, 9
    surveys                     5-1
decay
    see radioactive decay
decision error               D-13 to 17,
                               20 to 22, 26 to
                               29;N-17
    error chart                  D-27
    false positive                D-14, 21,26
        see Type I error
    false negative               D-15, 20
        see Type II error
    feasibility trials
        DEFT                  D-20,21
    specifying limits             D-15
    table                       D-15
decision maker

    alternate methods
    estimating uncertainty
    DQOs
decision rule
    one-sample case
    power chart (example)
    two-sample case
decision statement
decommissioning
    Characterization Survey
    criteria
    documentation
    simplified procedure
    site identification
    site investigation
delta (δ)
delta (Δ)
    see relative shift
2-6; 4-14; 5-46;
6-27; 7-2, 18; 9-8
2-32
2-11
3-2; 6-2
1-2; 8-24
D-11
D-25
D-12
8-24; D-2, 5, 6
1-1; 2-3; 3-1
2-23; 5-7, 8
4-1
5-52
App. B
2-16
4-1
5-26 to 35;
8-12 to 15, 19,
23;A-11, 19;
D-10, 13, 16, 17,
20,21
2-9, 10,31
direct measurement

    background
    description
    detectors
    instruments
    methods
    QC
    radon
    replicates
    sensitivity
    surveys
distribution coefficient (Kd)
documentation
dose equivalent (dose)
    DCGL
    release criterion
effective probe area
elevated area
    see area of elevated activity
elevated measurement
    see area of elevated activity
Elevated Measurement Comparison
2-4; 4-17;
Chap. 6
6-7, 35
6-10 to 13
6-15 to 22;
App. H
4-16, 6-15 to 28
4-17
4-32 to 38
6-55 to 60
6-3
6-31 to 49
5-45 to 51
3-19
N-2 to 4
1-1, 3; 2-1,2
2-3; 5-36 to 38
2-2
6-29, 37
detection limit
    see minimum detectable concentration
detector(s)                  Chap. 6; 9-6; App. H
    alpha
        field survey         6-15 to 18, 20; H-5 to 10
        laboratory           7-20, 22; H-38 to 42
    beta
        field survey         6-15 to 18, 21; H-11 to 14
        laboratory           7-20, 21; H-43 to 45
    calibration              6-20 to 28
    in situ spectrometry     6-11, 12
    gamma
        field survey         6-15 to 18, 22; H-15 to 24
        laboratory           7-20, 21; H-46 to 48
    low energy               H-31 to 33
    radon                    6-57; H-25 to 30
    sensitivity              6-31 to 49
    X-ray                    H-31 to 33
(EMC)


DCGLEMC
number of data points
example
see area of elevated activity
exposure pathway model


exposure rate

field sampling plan
field survey equipment
final status survey



checklist
classification
compliance
DCGL
example
figure

2-3, 27, 32; 8-5, 9, 17, 18, 21 to 23
2-3, 27
5-35 to 39
5-39; A-16

2-2, 15, 27;
5-38, 44; 8-9, 23

4-20; 5-9 to 11,
n^i
, J1
2-6; 9-3
H-5 to 37
2-4, 24, 32; 3-24;
5-21 to 55; 8-1,
6, 10, 23 to 25;
9-5
5-53 to 55
2-28; 4-11
2-25
4-3
App. A
2-21

final status survey (continued)
    health and safety
    integrated design
    investigation process
    planning
    sampling

    survey units
fluence rate
frequency plot
gamma (γ) radiation
    analysis
    detection sensitivity
        direct measurement
        scanning
    detectors
    measurement
    radon
    scanning
    spectrometry
    surface measurement
graded approach
graphical data review
    see frequency plot
    see posting plot
    see stem and leaf display
gray region
    example
    see decision error
    see lower bound (LBGR)
grid
    example
    positioning systems
    random start example
    reference coordinate system

        example(s)
4-38
2-32
2-16
2-9; 5-21 to 55
7-7 to 16;
App. M
4-14
6-11, 12,44
8-4,5

7-21
6-31
6-32 to 37
6-37 to 47
6-15 to 18, 22;
7-20, 21;H-15to
24, 46 to 48
4-16
6-55, 57, 60
6-14
4-16
6-11, 12
1-5; 2-4, 5, 8;
3-1; 6-8; 8-1;
9-2, 3, 5
8-4; E-3
2-9, 31; 5-25 to
27, 32, 33; 6-7;
7-7, 8 to 12, 14,
19;D-16, 17,
20 to 22, 26, 28
A-7, 11
2-31; 4-27 to 31;
5-3, 16, 40 to 43;
7-7
A-7, 13, 14, 15
6-61, 62
5-40, 41; A-14
2-23; 4-27;
6-61, 66
4-28, 29, 30
grid (continued)
    sample/scan
    spacing
    triangular grid
        figure
half-life (t1/2)
                              2-32; 5-40
                              5-42
                              5-40 to 43
                              5-43
                              1-5; 4-6; 6-55;
histogram
    see frequency plot
    see stem and leaf display
Historical Site Assessment (HSA)
                               1-3, 4; 2-16, 22;
                               Chap. 3; 5-1, 16
                               39; 6-14; 7-12;
                               8-9; A-l
    data sources                 App. G
    figure                      2-18
    information sources          App. G
    survey planning              4-11
hot measurement
    see area of elevated activity
hot spot
    see area of elevated activity
hypothesis
    alternative hypothesis
    null hypothesis
    statistical testing
        approach explained
        Sign test
        WRS test
impacted area
    classification
    DQO
    HSA
    non-impacted
    Scoping Survey
    site diagram
    survey design
    see residual radioactivity
indistinguishable from background
                              2-39; D-19
infiltration rate             3-14,16,18
inventory                    3-8; 4-26
                              2-26; 8-8, 12, 18
                              2-39; D-14, 15
                              2-9, 26; 8-11, 15,
                              17, 23; D-14, 15
                              1-3; 2-13, 26
                              2-26
                              2-28; 8-11
                              2-28; 8-17
                              2-4
                              4-11
                              3-2
                              2-23; Chap. 3
                              2-4
                              2-23
                              3-23
                              2-25
investigation level
    example (table)
    scanning
    survey strategy
    see release criterion
    see action level
judgment measurement
karst terrain
laboratory equipment
less-than data
license
2-2, 32; 4-1;
5-18, 44 to 46;
6-14, 15;
8-9, 17, 21
5-45
6-3
5-46
2-22, 23, 30, 33;
5-2, 3, 44, 48,
51,55
3-19
4-16;H-38to48
2-13
2-16; 3-4, 5, 7, 8;
7-11
license termination
    see decommissioning
lower bound of the gray region (LBGR)
                             2-9, 31; 5-25 to
                             27, 31 to 33; 6-7;
                             7-7; 8-12, 13, 15,
                             19;D-17, 20,
                             21,28;N-18
    example                  A-11
    see gray region
m (number of data points in the reference
area)                       5-29, 39,42;
                             8-18,21
mean                       2-27, 28; 4-33;
                             5-49, 50; 8-2, 3,
                             5 to 7, 12, 13, 15;
                             D-9
    of data (example)           8-3
measurement techniques   1-2, 4; 2-4; 3-7;
                             4-16, 17;
                             7-20 to 22
median                     2-28; 5-27, 32,
                             45; 8-2, 3, 5 to 7,
                             12, 13, 15; D-9
minimum detectable concentration
(MDC)                      2-10, 34; 4-16,
                             17, 34, 35;
                             5-36, 37, 48;
                             6-31 to 49;
                             8-15, 18,22;
                             9-7 to 9
    direct measurement         6-32 to 37
    elevated activity            5-39
    reporting                  2-13
    scan                      6-37 to 49
minimum detectable count rate
(MDCR)                    6-40 to 45
missing or unusable data   5-29, 31, 33, 35
model(s)
    conceptual site model        3-3, 22; 5-8, 47
    defining study boundaries    D-6, 7
    exposure pathway           1-4; 2-2, 15, 27;
                             6-10, 28
       area factor (example)    5-36
       determining DCGLs     4-3,6
N (number of data points)  2-10; 5-25 to 39;
                             8-12, 13, 15, 18
    QC measurements           4-32 to 38
    Sign test                  5-31 to 35
       example               5-33, 35; B-2
       table                  5-34
    WRS test                  5-25 to 31
       example               5-29,31;
                              A-11; B-2
       table                  5-30
n (number of data points in survey unit)
                             5-29, 38, 42;
                             8-18,21
NARM                      3-4
naturally occurring radionuclides
                              1-4; 3-3; 6-5; 7-5
non-impacted area          2-4
    (background reference area)   4-13
    classification               2-28; 4-11
    DQO                     3-2
    HSA                     2-17;
                             3-10 to 12
    survey design              2-31
nonparametric test
    alternate methods
    one-sample test

    two-sample test
                              2-26; 4-10, 11;
                              5-25; 8-6, 7, 22,
                              24,25
                              2-34 to 38
                              2-28; 5-31;
                              8-11 to 16;D-10
                              2-28; 5-25;
                              8-17to21;D-10
    see Sign test
    see Wilcoxon Rank Sum test
    see Wilcoxon Signed Rank test
normal (gaussian) distribution
                              2-28; 5-45;
                              6-54, 55; 8-6; 1-1
one-sample test              2-28; 5-25,
                              31 to 35
    see Sign test
outlier                       9-7
Pr                            5-27, 28; 1-27, 28
performance evaluation     4-35, 37; 6-4, 9;
                              7-4, 10
physical probe area         6-29, 30, 38,48
posting plot                 2-27; 8-4, 8, 13
power (1-β)                  2-31, 34; 4-26; 5-27, 29, 33, 54;
                             6-15, 17; 8-2, 3, 5, 6, 8, 12, 15,
                             23, 27; D-15, 17 to 19, 25, 26
    Sign test                I-25, 26
    WRS test                 I-27 to 29
    chart                    D-25
    power curve              I-26, 29
    example                  A-7, 9, 11, 12
precision                     2-11; 4-32 to 38;
                              9-9; N-6 to 8
    global positioning system     6-61, 62
    QC measurements          4-35, 37; 6-3, 4; 7-3, 4
probe area                   6-20, 21, 24, 29, 30, 36, 37, 38, 43, 48
quality                       2-6, 8, 9
    assessment data             2-11
    data quality needs           2-8
    HSA data                  3-10
    professional judgment        3-22
quality assurance (QA)       2-6; 4-32; 8-1, 2, 4, 7; 9-1 to 4
    review of HSA              3-25
    document comparison tables  App. K
Quality Assurance Project Plan (QAPP)
                             2-6; 4-31, 32; 5-5, 54, 55; 7-9; 9-2, 3, 6
quality control (QC)        2-6; 8-2; 9-1, 5, 7
    field measurement control    6-3 to 8
    laboratory control           7-2 to 7
    number of measurements      4-32 to 38
quality system               9-1 to 4
Quantile plot                8-4, 7, 8, 13; I-18 to 21
Quantile-Quantile plot       A-16, 17; I-22 to 24
R                            5-29, 31, 33, 35
RA                           D-23
radiation program managers
    list by region               App. L
radiation survey             1-1, 4; 4-4, 21
    data life cycle               2-16
    HSA                      2-22; 3-1, 8
    scoping survey              2-22; 5-1 to 6
    characterization survey       2-23; 5-7 to 17
    remedial action support survey
                              2-23; 5-18  to 20
    final status survey           2-24; 5-21  to 55
    planning                   2-8 to 11;
                              Chap. 4; Chap. 5
    process                    2-14, 17 to 21
radioactive decay            3-12; 7-18, 20
    decay chain              4-6, 7
    half-life                4-5
    radon                    6-55, 58, 59
    scan MDC                 6-44 to 46
    survey design               5-5, 8, 16
radioactivity
    see residual radioactivity
radiological survey
    see radiation survey
radionuclide                 2-2, 5
    compliance/dose             2-25
    see unity rule
radon                        3-20; 5-14; 6-55 to 60
random uncertainty           2-14; 6-50 to 52
ranked data                  I-22
    interpolated ranks       I-23
RCRA                         2-22, 23, 39; 3-1; 5-1, 7
    compared to MARSSIM      App. F
reference coordinate system
    see grid
regulations & requirements   App. C
remediation                  1-1, 3, 4; 8-9, 11
    see remedial action support survey
    DOD                      C-15 to 20
    DOE                      C-4 to 12
    EPA                      C-1 to 4
    NRC                      C-12 to 15
    States                   C-20, 21
relative shift (Δ/σ)         5-26 to 35, 40, 42; 8-12 to 15, 19; D-17, 20
    calculate                5-26, 5-32
        example              5-29, 5-33; A-11, 19
    DQO process              2-9, 10, 31
    number of data points    5-28, 33
    Pr                       5-27
    Sign p                   5-32
    tables
        N (Sign test)        5-34
        N/2 (WRS test)       5-30
        Pr                   5-28
        Sign p               5-32
release criterion            1-1, 2, 5; 2-2
    alternate null hypothesis      2-39
    compliance                 2-25
    DCGLs                    4-3
    final status survey           2-24
    null hypothesis              2-9, 26
    statistical tests              2-25
    survey planning             5-1
rem (radiation equivalent man)
    see conversion table
remedial action support survey
                              2-15, 23; 5-18 to
                              20; 6-12; 8-25
    checklist                   5-20
    figure                     2-20
    table                      2-16
removable activity           5-17, 52; 6-20, 21
    see surface contamination
removal                      2-5; 5-2
    criteria                 2-23; App. F
    of structures/equipment  4-24 to 26
    Superfund                App. F
       HSA                   3-1
       scoping survey        5-2
replicate                    4-35, 37
    sample                   7-3
    measurement              6-3
representativeness           2-11, 24; 4-34; 6-6; 7-3; N-12, 13
reproducibility              4-27; 6-61
residual radioactivity       2-3, 26; 3-24; 4-1, 24
    analytical procedures    7-17 to 23
    characterization surveys
       land areas            5-11
       structures            5-10
    final status survey
       land areas            5-40, 50, 51
       structures            5-44, 48 to 50
    remedial action design   5-18
    see surface contamination
restricted use               1-1; 5-7
    see unrestricted release
robust                       2-35, 37; 8-6
s                            5-45, 49; 8-2
S+                           8-12 to 16
    see test statistic
sample(s)                    2-4
    alternate survey design  2-33
    background               4-13
    blanks                   7-5
    Chain of Custody         7-23 to 25
    characterization
       land                  5-11
       structures            5-10
    confirmation/verification   2-25
    criteria                 4-19, 21
    DCGLs                    4-4
sample(s) (continued)
    documentation            5-52
    final status survey
        locations            5-40 to 44
        number of data points   5-25 to 39
    matrix spikes            7-4
    packing/transport        7-25 to 28
    preservation of          7-16, 17
    QC                       4-32 to 38
    remedial action          5-19
    sampling                 2-4
    scoping                  5-2, 3
    soil                     7-11 to 14
    surrogate                4-4
    water & sediments        5-12, 13
Sampling and Analysis Plan   2-6; 9-3
scanning                     2-4; 4-17
    alpha                    6-14
    alpha scanning sensitivity
        equations - derivations   App. J
    beta                     6-15
    demonstrating compliance 2-31
    detectors                6-15 to 18, 20 to 22, 57; App. H
    elevated activity        2-29
    gamma                    6-14
    MDCs                     6-37 to 49
    pattern (example)        A-6
    sensitivity              6-37 to 49
    survey techniques        4-17; 6-13 to 15
    scanning surveys
        scoping              5-3, 6
        characterization
         land areas          5-11
         structures          5-10
        remedial action      5-19
        final status
         Class 1 areas       2-32; 5-46
         Class 2 areas       2-32; 5-47
         Class 3 areas       2-33; 5-48
scoping survey               2-15, 22; 5-1 to 6
    area classification      4-11
    checklist                5-5, 6
    figure                   2-19
    HSA & planning           3-1, 2
    table                    2-16
sealed source
    final status survey example   App. B
sigma (σ)
    see standard deviation
Sievert (Sv)
    see conversion table
Sign test                    2-3, 27, 28; 5-25; 8-11 to 16
    applying test            8-12
    example(s)               8-12, 14
    hypothesis               8-11
    number of data points    5-31 to 35
       example               5-33, 35
    power                    I-25, 26
    Sign p                   5-32
site(s)                      Chap. 1
    clearing for access      4-24
    decommissioning          4-1
    definition               2-3
    historical assessment    Chap. 3
    identification           2-16; 3-4
    investigation process    2-14
    site preparation         4-22
site reconnaissance          3-9
    identify contamination   3-13
    site model               3-22
smear (swipe)
    see removable activity
soil                         3-13 to 15
    analysis                 7-17 to 23
    background               4-13
    sampling                 7-11 to 14
    surveys                  5-3, 9 to 11, 19, 33, 47, 50, 51
    survey coverage          2-32; 5-47
source term                  4-21
split
    regulatory verification  2-25
    sample                   4-35; 7-3, 14
standard deviation           2-9, 31; 4-16; 5-26, 29, 31, 32, 45, 49;
                             8-2, 10, 12 to 15, 19, 23; A-11, 19; N-17
standard operating procedure (SOP)
                             6-3, 51; 7-9, 19, 25
statistical tests            2-25; 4-11; 5-25; Chap. 8; App. I
    alternate methods        2-34 to 38
    documenting              8-25, 26
    interpreting results     8-21 to 25
    selecting a test         8-6, 7; E-4
    summary (table)          8-9
    verify assumptions       8-7, 8; E-4
stem & leaf display          8-5, 7; I-17, 18
structures                   3-20
    access                   4-25
    HSA site plots           3-8
    measurements             4-20
    reference coordinate system   4-27 to 31
    surface activity         5-10
    surveys                  5-7 to 10, 46, 47
    survey coverage          5-47
    survey example           App. A
    survey unit              2-4; 4-14, 15
    WRS test (example)
        Class 1              8-21, App. A
        Class 2              8-19
Student's t test             2-35, 37
subsurface soil (sample)     1-9; 4-24
    characterization survey  5-9, 5-11
    HSA                      3-11, 13, 14
    sampling                 7-16; App. M
surface contamination        1-3, 4
    detectors
        alpha                6-20
        beta                 6-21
        gamma                6-22
    direct measurements      6-10 to 13
    identification           3-12
    in situ spectrometry     6-11, 12
    land areas               4-24
    scanning                 6-13 to 15
    soil                     3-14
    structures               4-23; 5-10
    surface activity DCGLs   4-4
    surrogates/DCGLs         4-4
surface soil                 1-3, 1-4; 3-13
    background               4-13
    sampling                 7-9, 12 to 14, 16, 17, 21; App. M
surrogate measurements       4-4 to 7; 5-12; 6-14; 9-7
survey
    approach                 Chap. 1
    DCGLs                    4-3
    decommissioning criteria 4-1
    DQOs                     2-9 to 11
    field measurements       Chap. 6
    instruments/technique    4-16; App. H
    overview                 Chap. 2
    planning                 2-8 to 11; Chap. 5
    QAPP                     2-6
    sampling/preparation     Chap. 7, App. M
    simplified procedure     App. B
    site investigation process   2-14
    statistical tests        2-25; Chap. 8; App. I
    survey considerations    Chap. 4
    using MARSSIM            1-6; Roadmap
    see characterization     5-7 to 16
    see final status         5-20 to 53
    see HSA                  Chapter 3
    see remedial action      5-17 to 19
    see scoping              5-1 to 6
    see Data Life Cycle
    see survey unit
survey checklist
    characterization         5-16, 17
    final status             5-53 to 55
    remedial action          5-20
    scoping                  5-5, 6
    statistical tests        8-27
survey plan                  1-5; 2-6; 5-54; 7-8, 18
    alternate designs        2-33 to 40
    design                   Chap. 4; Chap. 5
    DQOs                     2-9; 3-3
    optimizing survey        2-30
survey unit                  2-4; 4-14; 7-5; 9-6, 8; N-16
    area                     4-15
    characterization         5-9 to 5-11
    characterize/DQOs        2-9
    classification           2-28; 4-11, 12
    classify/flowchart       2-17
    elevated activity        2-27
    HSA                      3-1, 2, 4
    identifying              4-14
    investigation level      5-44 to 46
    statistics & final status survey   5-21 to 55
    uniform contamination    2-28
surveyor(s)                  4-22, 31; 6-24, 37, 38, 40 to 48
    selecting                6-8, 9
systematic uncertainty       6-50 to 52
systematic grid              2-31, 32; 5-46; 6-7, 12; 8-19, 22
test statistic               8-12, 13, 15; D-16 to 19
    example (S+)             8-12 to 16
    example (Wr, Ws)         8-18
    see critical level
total effective dose equivalent (TEDE)
                              2-2
triangular sampling grid    5-35, 36,
                              42 to 44; 8-4, 13,
                              16, 19
    see systematic grid
two-sample test              2-28; 5-25 to 31;
                              D-10
    alternate methods            2-37, 38
    nonparametric test           4-9 to 11
    see Wilcoxon Rank Sum test
Type I decision error        5-25 to 35; 6-33, 34; 8-8, 10, 13 to 15,
                             18, 19, 21; 9-8, 9; D-14 to 17, 21, 26, 28
    DQOs                     2-9, 10, 31
    examples                 8-10; A-7, 11, 18; B-2
Type II decision error       5-25 to 35; 6-33, 34; 8-8, 10, 12 to 15,
                             19; 9-8, 9; D-14 to 18, 20, 21, 26, 28
    DQOs                     2-9, 10, 31
    examples                 8-10; A-7, 11; B-2
uncertainty                  1-2; 2-25; 5-11, 14, 26, 29, 33, 35, 45, 46;
                             6-49 to 55; 7-3, 4, 8, 21; 8-17, 18; 9-7, 9
uncertainty (continued)
    confidence intervals     6-53 to 55
    decision making          2-7
    DCGL                     2-33
    estimating               2-11
    measurement              6-49 to 55
    MDC                      4-17
    propagation              6-52, 53
    QC                       4-32 to 38
    reporting                2-14
    statistical counting     6-52
    systematic/random        6-50 to 52
unity rule (mixture rule)    2-27; 4-8; 5-38; 8-21, 23
    adjusting DCGLs          4-8 to 4-10
unrestricted release         3-22
validation                   2-8, 11; 7-9; 9-2, 5, 7, 8; App. N
verification                 2-15, 25; 5-21; 6-32; 7-9; 8-8; 9-2, 4 to 7
Wr                           8-18
    see test statistic
Ws                           8-18
    see test statistic
Wilcoxon Rank Sum (WRS) test
                             2-28; 5-25 to 31; 8-17 to 21
    adjusted data            8-20
       example               8-19, 21; A-10, 11, 18, 19
    applying the test        8-18
       Class 1 example       8-21
       Class 2 example       8-19
    power                    I-27 to 29
    spreadsheet formulas     I-30
    see two-sample test
working level                6-56
NRC FORM 335
(2-89)
NRCM 1102,
3201, 3202
                            U.S. NUCLEAR REGULATORY COMMISSION

BIBLIOGRAPHIC DATA SHEET
      (See instructions on the reverse)
1. REPORT NUMBER (Assigned by NRC, Add Vol., Supp., Rev., and Addendum Numbers, if any.)
   NUREG-1575, Rev. 1; EPA-402-R-97-016, Rev. 1; DOE/EH-0624, Rev. 1
2. TITLE AND SUBTITLE
   Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM), Revision 1
3. DATE REPORT PUBLISHED
   MONTH: August    YEAR: 2000
4. FIN OR GRANT NUMBER
5. AUTHOR(S)
6. TYPE OF REPORT
   Technical
7. PERIOD COVERED (Inclusive Dates)
8. PERFORMING ORGANIZATION - NAME AND ADDRESS (If NRC, provide Division, Office or Region, U.S. Nuclear Regulatory Commission, and mailing address; if contractor,
  provide name and mailing address.)
  Department of Defense, Washington, DC 20301-3400
  Department of Energy, Washington, DC 20585-0119
  Environmental Protection Agency, Washington, DC 20460-0001
  Nuclear Regulatory Commission, Washington, DC  20555-0001
9. SPONSORING ORGANIZATION - NAME AND ADDRESS (If NRC, type "Same as above"; if contractor, provide NRC Division, Office or Region, U.S. Nuclear Regulatory Commission,
  and mailing address.)
  Department of Defense, Washington, DC 20301-3400
  Department of Energy, Washington, DC 20585-0119
  Environmental Protection Agency, Washington, DC 20460-0001
  Nuclear Regulatory Commission, Washington, DC 20555-0001	
10. SUPPLEMENTARY NOTES
11. ABSTRACT (200 words or less)
   The MARSSIM provides information on planning, conducting, evaluating, and documenting building and surface soil final status
   radiological surveys for demonstrating compliance with dose or risk-based regulations or standards.  The MARSSIM is a
   multi-agency consensus document that was developed collaboratively by four Federal agencies having authority and control over
   radioactive materials: Department of Defense (DOD), Department of Energy (DOE), Environmental Protection Agency (EPA), and
   Nuclear Regulatory Commission (NRC). The MARSSIM's objective is to describe a consistent approach for building and surface
   soil final status surveys to meet established dose or risk-based release criteria, while at the same time encouraging an effective use
   of resources.
12. KEYWORDS/DESCRIPTORS (List words or phrases that will assist researchers in locating the report.)
    Measurement, Planning, Data Quality Objectives, Survey(s), Decommissioning, Clean-up, Statistics,
    Quality Assurance
13. AVAILABILITY STATEMENT
    unlimited
14. SECURITY CLASSIFICATION
    (This Page) unclassified
    (This Report) unclassified
15. NUMBER OF PAGES
16. PRICE