PB81-100364
Upgrading Environmental Radiation Data. Health Physics Society Committee
Report HPSR-1 (1980)
North Carolina Univ. at Chapel Hill
Prepared for
Office of Radiation Programs
Washington, D.C.
August 1980
U.S. DEPARTMENT OF COMMERCE
National Technical Information Service
NTIS
-------
United States
Environmental Protection
Agency
Office of
Radiation Programs
Washington DC 20460
EPA 520/1-80-012
August 1980
Radiation
Health Physics Society
Committee Report
HPSR-1 (1980)
-------
TECHNICAL REPORT DATA
(Please read Instructions on the reverse before completing)
1 REPORT NO
EPA 520/1-80-012
3 RECIPIENT'S ACCESSION NO
4 TITLE AND SUBTITLE
Upgrading Environmental Radiation Data
5 REPORT DATE
August 1980
6 PERFORMING ORGANIZATION CODE
7 AUTHOR(S)
Health Physics Society Committee Report HPSR-1 (1980)
8. PERFORMING ORGANIZATION REPORT NO
9 PERFORMING ORGANIZATION NAME AND ADDRESS
Health Physics Society Ad Hoc Committee
Department of Environmental Sciences and Engineering
University of North Carolina - Rosenau Building
Chapel Hill, North Carolina 27514
10 PROGRAM ELEMENT NO
11 CONTRACT/GRANT NO
12 SPONSORING AGENCY NAME AND ADDRESS
Office of Radiation Programs
Environmental Protection Agency
Washington, D.C. 20460
13 TYPE OF REPORT AND PERIOD COVERED
14 SPONSORING AGENCY CODE
ANR-461
15 SUPPLEMENTARY NOTES
16 ABSTRACT
This report is a collection of nine individual Health Physics Society
subcommittee reports on different aspects of environmental radiation data
associated with nuclear power plants. The subcommittee reports include:
Environmental Radiation Monitoring Objectives, Definition of Critical Pathways and
Radionuclides for Population Radiation Exposure at Nuclear Power Stations,
Propagation of Uncertainties in Environmental Pathway Dose Models, Detection of
Changes in Environmental Levels Due to Nuclear Power Plants, Quality Assurance for
Environmental Monitoring Programs, Reporting of Environmental Radiation Measurements,
Data, Statistical Methods for Environmental Radiation Data Interpretation,
Effective Communication with the Public, Environmental Radiological Surveillance-
Mechanisms for Information Exchange.
17
KEY WORDS AND DOCUMENT ANALYSIS
DESCRIPTORS
b IDENTIFIERS/OPEN ENDED TERMS
c COSATI Field/Group
environmental radiation data,
environmental pathway dose
Models, population radiation
dose at nuclear power plants,
environmental radiation surveillance
18 DISTRIBUTION STATEMENT
550 copies distributed by Health Physics
Society; 50 copies distributed within EPA
19 SECURITY CLASS (This Report)
21 NO OF PAGES
20 SECURITY CLASS (This page)
22 PRICE
EPA Form 2220-1 (Rev 4-77) PREVIOUS EDITION IS OBSOLETE
-------
THIS DOCUMENT HAS BEEN REPRODUCED
FROM THE BEST COPY FURNISHED US BY
THE SPONSORING AGENCY. ALTHOUGH IT
IS RECOGNIZED THAT CERTAIN PORTIONS
ARE ILLEGIBLE, IT IS BEING RELEASED
IN THE INTEREST OF MAKING AVAILABLE
AS MUCH INFORMATION AS POSSIBLE.
-------
Upgrading Environmental Radiation Data
Health Physics Society Committee Report
HPSR-1 (1980)
August 1980
Office of Radiation Programs
U.S. Environmental Protection Agency
Washington, D.C. 20460
-------
Foreword
The Office of Radiation Programs (ORP) of the U.S. Environmental
Protection Agency is responsible for monitoring the environment and for
assessing the public health impact of radiation from all sources in the
United States. To meet these responsibilities ORP's radiological dose
assessment program relies heavily on environmental radiation data
gathered by numerous organizations and laboratories across the country.
In the past decade, EPA has found that the poor technical quality of
some radiation data has been a source of difficulty in its own program
and a concern to other agencies, including the Nuclear Regulatory
Commission, the Department of Energy, the Bureau of Radiological Health,
and the National Bureau of Standards. For some years now, representa-
tives from these agencies as well as from State agencies, universities,
and private industry have discussed the problems and explored ways, in
both business and professional forums, in which to improve the quality of
environmental radiation data. The Environmental Protection Agency has
encouraged its professional staff members to participate in such
activities.
The Office of Radiation Programs has closely observed the activities
of the Health Physics Society that led to the production of this report.
Those activities are summarized in the Preface. The report represents
iii
-------
the efforts of more than 75 radiation health professionals who, under the
auspices of the Health Physics Society, have pooled their expertise to
identify and solve some of the problems associated with the quality of
environmental radiation data.
The Office of Radiation Programs is pleased to have the opportunity
to fund the printing charges for the report in order to facilitate its
wide distribution and use. The Office of Radiation Programs has not,
however, assessed the technical validity of this report, and our support
for printing it is not an endorsement of its contents. Readers are
encouraged to direct their comments on this report to Dr. J. E. Watson,
Chairman, Health Physics Society Ad Hoc Committee, Department of
Environmental Sciences & Engineering, School of Public Health, Rosenau
Building, University of North Carolina, Chapel Hill, North Carolina
27514.
Office of Radiation Programs
Environmental Protection Agency
Washington, D.C. 20460
iv
-------
UPGRADING ENVIRONMENTAL RADIATION DATA
REPORT OF HEALTH PHYSICS SOCIETY AD HOC COMMITTEE
J.E. Watson, Jr. (University of North Carolina), Chairman
Preface
The organization of the committee for Up-
grading Environmental Radiation Data resulted
from needs expressed at several conferences and
workshops held over the past decade related to
environmental radiation data.
An effort to bring together representatives
from industries, universities, state governments
and federal agencies to discuss and explore means
to improve environmental radiation protection was
initiated in 1967 in Atlanta, Ga. At the
Symposium on Off-Site Public Health Aspects of
Nuclear Power Plant Operation, sponsored by the
U.S. Public Health Service, National Center for
Radiological Health, and the Georgia Department
of Public Health, technical presentations of good
quality were made on environmental radiation
surveillance which noted problems related to
environmental radiation data.
Discussion and interchange of information
was disappointing among participants of the 1971
Southern Conference on Environmental Radiation
Protection for Nuclear Plants in St. Petersburg,
Fla. The conference was organized and sponsored
by the U.S. Environmental Protection Agency (EPA)
and the Florida Division of Health with the co-
operation of the Florida Power Corporation.
A June 1974 seminar and March 1976 workshop
both, however, had positive impacts and served as
a direct impetus to the organization of this
committee. The Southeastern Seminar on Environ-
mental Radiation Surveillance was held in 1974 in
Columbia, S.C., under the sponsorship of EPA and
the South Carolina Department of Health and
Environmental Control. The 1976 Southeastern
Workshop on the Utilization and Interpretation
of Environmental Radiation Data in Orlando, Fla.,
was organized by the Florida Department of Health
and Rehabilitative Services and the University of
Florida in collaboration with the Nuclear
Regulatory Commission and EPA.
The proceedings of the 1976 workshop,
moderated by Melvin W. Carter, included the
following summary:
"The objectives of the workshop were to make
presentations and provide for open and candid dis-
cussion on the proper purposes for environmental
surveillance programs; data requirements and the
development of baselines; and utilization and
interpretation of data.
Some of the primary issues raised included:
The quality and usability of existing data
The need for different objectives for differ-
ent monitoring programs
The feasibility of establishing common de-
finitions of what constitutes critical path-
ways and model verification
What constitutes an optimum blend of monitor-
ing and modeling
The advisability and feasibility of generat-
ing data requirements for regional impact
assessments of multiple nuclear facilities
Methods to determine if a detected increase
in radiation levels is "real" and whether it
is due to plant operations
Methods to convert data to individual and
population dose commitments
More standardization in data reporting con-
ventions, including the definitions of de-
tection limits and the treatment of results
at or below these limits
Methods of reporting data for general public
understanding
Improving the Quality Assurance of the entire
program including sampling
The meeting provided a mechanism for scoping
of problems and identification of various ap-
proaches to their solution. One of the most strik-
ing and significant elements during the entire
workshop was the willingness of the representa-
tives from industry, governmental agencies and
academic institutions to freely exchange informa-
tion and opinions, thus establishing communication
which lays the groundwork for mutual cooperation
in solving the pressing and complex environmental
radiation data problems confronting the Health
Physics Profession."
Communications among the participants of the
1976 workshop had improved significantly compared
with the 1971 conference. This workshop indicated
that most organizations were having similar pro-
blems in making meaningful use of environmental
radiation data. Specific problems related to the
quality and utilization of data were identified
and emphasized. Many of these problems had been
identified as early as the 1967 symposium but
still lacked solutions. The demonstrated willing-
ness of representatives from industries, universi-
ties and government agencies to exchange informa-
tion and opinions laid the groundwork for co-
operation in solving the problems identified at
the workshop.
-------
Following the 1976 workshop, H. Richard
Payne proposed the formation of a multi-organiza-
tional project to pursue solutions to the pro-
blems identified in the quality and utilization
of environmental radiation data. The proposed
project was discussed with persons who had
participated in the 1976 workshop and with other
representatives of organizations involved in
environmental radiation activities. There was
an overwhelming consensus of opinion that the
project should be pursued and there was an en-
couraging willingness of persons contacted to
participate in the project.
An organizational meeting was held in the
spring of 1977. Committee officers selected by
participants were: Chairman - James E. Watson,
Jr.; Vice-chairman - Enrico F. Conti; and
Secretary - H. Richard Payne. Committee members
are listed following this preface. At that
meeting it was decided that the Health Physics
Society should be requested to serve as sponsor
for the project. John A. Auxier participated
in the organization of this project and carried
the participants' request for sponsorship to the
HPS's Board of Directors. The Board approved
sponsorship-of the Ad Hoc Committee for Upgrading
Environmental Radiation Data in July 1977.
Based upon the needs identified at the
Orlando workshop and other needs identified by
committee participants, nine project objectives
were assigned top priority by the committee.
These project objectives are the topics included
in this report. The committee provided overall
coordination and direction of the project. A
project manager, from the committee, was estab-
lished for each objective and a subcommittee was
formed to pursue solutions to the problems posed
by the objective. Project managers selected
subcommittee members from the most qualified
persons available from organizations represented
on the committee and from other organizations.
This provided very broad national participation
in this project. Subcommittee members are list-
ed at the beginning of each section of this
report. The project manager had the option of
serving as subcommittee chairperson or appoint-
ing the chairperson.
The results of the committee's work were
presented at a special session of the HPS's
Annual Meeting in Philadelphia on July 12, 1979.
This special session was made possible through
the excellent cooperation of the HPS Program
Committee, James E. McLaughlin, Jr., Chairman.
This report was finalized following the
HPS's Annual Meeting. It is an HPS committee
report and as such was not reviewed and approved
by the Society's Board of Directors. Committee
approval of subcommittee reports was based upon
a favorable vote by a simple majority of commit-
tee members with the understanding that dissent-
ing opinions would be presented in the report
preface. The following dissenting opinions were
prepared by H.L. Volchok:
Chapter 4, p 4-2, in reference to
section "Effects of Effluents from
Coal-Fired Stations": Recent pub-
lished studies indicate that while
the radioactivity associated with
effluents from coal-fired power plants
is not insignificant, the proximity
to a nuclear power plant and the
differences in radionuclides involved
should preclude misidentification.
Chapter 4, p 4-6, in reference to
section "Conclusions", item 4 e:
Attempts to standardize analytical and
sampling procedures are often counter-
productive and tend to stifle respon-
sibility and improvisation. We would
emphasize improving but not standardiz-
ing.
Chapter 6, p 6-4, in reference to sec-
tion "Exponential Notation": In direct
opposition to the position outlined,
several participants contend that (a)
rarely are investigators faced with re-
sults beyond the 36-order-of-magni-
tude range of the prefixes, and (b)
computational mistakes are not mini-
mized by use of exponents; the reverse
is often the case. For the three ex-
amples given, the dissidents strongly
recommend:
2810 pCi·L⁻¹ = 104 Bq·L⁻¹
12 μR·h⁻¹ = 3.1 nC·kg⁻¹·h⁻¹
123 mrad = 1.23 mGy
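For readers who wish to check the arithmetic, the three conversions can be verified with a short calculation; the sketch below is only an illustration (in Python) using the standard factors 1 pCi = 0.037 Bq, 1 R = 2.58 x 10^-4 C/kg, and 1 rad = 0.01 Gy, and is not part of the committee's material.

    # Illustrative check of the three conversions above; the factors are the
    # standard definitions, and the snippet itself is only an example.
    PCI_TO_BQ = 0.037          # 1 pCi = 0.037 Bq
    R_TO_C_PER_KG = 2.58e-4    # 1 R = 2.58e-4 C/kg (exposure)
    RAD_TO_GY = 0.01           # 1 rad = 0.01 Gy (absorbed dose)

    print(2810 * PCI_TO_BQ)               # ~104 (Bq/L from 2810 pCi/L)
    print(12e-6 * R_TO_C_PER_KG * 1e9)    # ~3.1 (nC/kg per hour from 12 uR/h)
    print(123 * RAD_TO_GY)                # 1.23 (mGy from 123 mrad; the milli- carries through)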
It should be emphasized that this report is
the collection of nine individual subcommittee
reports. The report is not intended to be a
completely integrated and comprehensive guide
for environmental radiation surveillance.
Efforts were made to avoid contradictions within
this report, but some overlaps and omissions are
to be expected. This document is intended to
report on work to resolve long standing problems
in the quality and utilization of environmental
radiation data. As previously stated, the re-
port contains the results of nine project ob-
jectives assigned top priority by the committee.
From its conception the committee recognized that
consideration of the broad spectrum of all types
of environmental radiation data was ideal.
However, in a number of cases this was not
feasible, and it was necessary for subcommittees
to limit their treatment to environmental radia-
tion data associated with nuclear power plants.
The committee recognizes that this report will
not be the final answer to all problems and that
efforts in the future will be needed to continue
to upgrade the quality and utilization of
environmental radiation data. The committee be-
lieves that the continuation of this effort is
an appropriate task for one of the new sections
to be organized within the Health Physics Society.
Subcommittee manuscripts were prepared in
accordance with directions developed by the
National Bureau of Standards. The report was
published as a Health Physics Society Committee
Report by the Office of Radiation Programs, U.S.
vi
-------
Environmental Protection Agency. The committee
is indebted to Raymond H. Johnson, Jr. for his
coordination of the report's publication. The
committee requested support from the HPS for
funds to help pay for the preparation and dis-
tribution of the report. The HPS's Board respond-
ed generously to this request. The support and
encouragement by Board members of this committee's
efforts are gratefully acknowledged.
A special acknowledgement is made to
committee member Herbert L. Volchok. He served
as the committee "whip" — encouraging project
managers and subcommittee chairpersons to comply
with schedules established by the committee.
Appreciation is also extended to the committee
chairman's secretary, Frances L. Dancy. She
efficiently assisted in administrative matters,
handled a multitude of committee correspondence
and prepared a large portion of the final report
manuscript for publication.
vii
-------
COMMITTEE FOR UPGRADING ENVIRONMENTAL RADIATION DATA
John A. Auxier
Industrial Safety & Applied Health
Physics Division
Oak Ridge National Laboratory
Ernest A. Belvin
Office of Health and Safety
Tennessee Valley Authority
Wayne L. Britz
Office of Nuclear Reactor Regulation
Nuclear Regulatory Commission
Lawrence K. Cohen
Office of Inspection & Enforcement
Nuclear Regulatory Commission
Ronald Collé
Center for Radiation Research
National Bureau of Standards
Frank J. Congel
Office of Nuclear Reactor Regulation
Nuclear Regulatory Commission
Enrico Conti
Office of Standards Development
Nuclear Regulatory Commission
Geoffrey G. Eichholz
School of Nuclear Engineering
Georgia Institute of Technology
Elmer Eisenhower
Center for Radiation Research
National Bureau of Standards
Judith D. Foulke
Office of Nuclear Regulatory Research
Nuclear Regulatory Commission
Raymond H. Johnson, Jr.
Office of Radiation Programs
Environmental Protection Agency
Bernd Kahn
Environmental Resources Center
Georgia Institute of Technology
Sam A. Kingsbury
Florida Power & Light Company
Miami, Florida
Lionel Lewis
Duke Power Company
Charlotte, N.C.
Joe Lochamy
Duke Power Company
Charlotte, N.C.
Thomas W. Oakes
Industrial Safety & Applied Health
Physics Division
Oak Ridge National Laboratory
H. Richard Payne
Environmental Radiation Section,
Region IV
Environmental Protection Agency
Colin G. Sanderson
Environmental Measurements Laboratory
Department of Energy
Heyward G. Shealy
Bureau of Radiological Health
State of South Carolina
Herbert L. Volchok
Environmental Measurements Laboratory
Department of Energy
David A. Waite
Office of Nuclear Waste Isolation
Battelle Memorial Institute
James E. Watson, Jr.
Department of Environmental Sciences
& Engineering
University of North Carolina
M. Marcy Williamson
Radiological and Environmental
Sciences Laboratory
Department of Energy
Gary Wright
Division of Nuclear Safety
State of Illinois
ix
-------
PROJECT MANAGERS AND SUBCOMMITTEE CHAIRPERSONS
Manager/Chairperson
Environmental Radiation Monitoring Objectives
Enrico Conti
Definition of Critical Pathways and Radionuclides
for Population Radiation Exposure at Nuclear
Power Stations
Bernd Kahn
Propagation of Uncertainties in Environmental
Pathway Dose Models
Frank J. Congel
*Wayne L. Britz
*Judith D. Foulke
Detection of Changes in Environmental Levels Due
to Nuclear Power Plants
Geoffrey G. Eichholz
Quality Assurance for Environmental Monitoring
Programs
Lawrence K. Cohen
*Colin G. Sanderson
Reporting of Environmental Radiation Measurements
Data
Elmer Eisenhower
*Ronald Collé
Statistical Methods for Environmental Radiation
Data Interpretation
David A. Waite
Effective Communication with the Public
H. Richard Payne
Environmental Radiological Surveillance:
Mechanisms for Information Exchange
John A. Auxier
*Thomas W. Oakes
* Subcommittee Chairperson
-------
TABLE OF CONTENTS
ENVIRONMENTAL RADIATION MONITORING OBJECTIVES
Introduction 1-1
Background 1-1
Basic Purpose of Environmental Radiation Monitoring 1-2
Types of Programs 1-2
Regional Monitoring 1-2
The Objectives of the ERAMS Program 1-2
Design of Program 1-2
Implementation of Program 1-2
Use and Interpretation of Data 1-3
Nuclear Facility Monitoring 1-3
Design of Program 1-3
Implementation of Program 1-3
Special Studies 1-3
Introduction 1-3
Types of Special Studies for Decontamination and/or Decommissioning 1-3
Limitations of Surveillance Programs 1-4
General 1-4
Ambient Monitoring 1-5
Nuclear Facility Monitoring 1-6
Special Studies for Decontamination and/or Decommissioning 1-6
References 1-6
DEFINITION OF CRITICAL PATHWAYS AND RADIONUCLIDES FOR POPULATION RADIATION EXPOSURE
AT NUCLEAR POWER STATIONS
Introduction 2-1
Selection of Pathways and Environmental Monitoring Activities 2-1
Additional Studies 2-2
Appendix: Definition of the term "critical" 2-2
References 2-2
PROPAGATION OF UNCERTAINTIES IN ENVIRONMENTAL PATHWAY DOSE MODELS
Introduction 3-1
Statistical Evaluation 3-2
Comments on the Data 3-2
Results and Discussion 3-3
Conclusion 3-3
References 3-3
DETECTION OF CHANGES IN ENVIRONMENTAL LEVELS DUE TO NUCLEAR POWER PLANTS
Nature of the Problem 4-1
Regulatory Background 4-1
Specific Causes of Variations 4-1
Transient Release of Liquid or Airborne Radionuclides by the Nuclear Power Plants 4-1
Fallout from Atmospheric Nuclear Weapons Tests 4-2
Effects of Effluents from Coal-Fired Stations 4-2
Technologically Enhanced Airborne Concentrations from Industrial and Medical Activities 4-2
Variations in Natural Background Radiation Levels 4-2
Variations Due to Improper Radiochemical Procedures and Calibrations 4-2
Instrumental Fluctuations 4-2
Sampling Variations 4-2
Limits in Detection Sensitivity 4-3
Recommended Technical Solutions 4-3
xi
-------
Identification of Transients 4-3
Data Analysis 4-3
Development of a Predictive Model 4-3
Detector Response of Transients 4-3
Procedural Recommendations 4-4
Validation of Control Locations 4-4
Follow-up Procedures 4-4
Documentation 4-4
Information on Fallout from Other Transients 4-4
Data Presentation 4-4
Uniformity of Evaluation 4-4
Suggested Administrative Solutions 4-5
Flexibility in Tech Specs. 4-5
Reporting on Environmental Radiation Levels 4-5
Emission from Industrial and Medical Radionuclide Sources 4-5
Coordination of Milk Sampling and Analysis 4-5
Certification of Analytical Laboratories 4-5
Short-Lived Radionuclides 4-5
Quality Assurance Program 4-5
Training of Personnel and Upgrading of Detection Procedures 4-5
Conclusions 4-5
QUALITY ASSURANCE FOR ENVIRONMENTAL MONITORING PROGRAMS
Introduction 5-1
Organization and Responsibilities 5-1
Personnel Qualifications 5-2
Operating Procedures 5-2
Records 5-2
Sampling Procedures 5-2
Radioanalytical Procedures 5-2
Analytical Methodology 5-3
Counting Instrumentation 5-3
Data Reduction and Reporting 5-4
Audits 5-5
References 5-5
REPORTING OF ENVIRONMENTAL RADIATION MEASUREMENTS DATA
Overview 6-1
Dimensioning (Units) 6-2
Radiation Units 6-2
Radioactivity 6-2
Exposure 6-2
Absorbed dose 6-3
Dose equivalent 6-3
Compound Units 6-3
Multiples and Submultiples of Units 6-3
Special Units 6-3
Exponential Notation 6-4
Conversion to SI Units 6-4
Recommended Units for Use in Data Reporting 6-4
Additional Specifications 6-4
Other Unsatisfactory Current Practices 6-4
Recommendations 6-5
Significant Figures 6-6
Recommendations 6-7
Treatment of Uncertainty Statements 6-7
Precision, Accuracy, Bias, and Random and Systematic Uncertainties 6-8
Precision and Random Uncertainty 6-8
Accuracy 6-8
Bias 6-8
Systematic Uncertainty 6-9
"Blunders" and Data Rejection 6-9
Statistical Treatment of Random Uncertainties 6-10
Sample Mean and Standard Deviation 6-10
Standard Error of the Mean 6-11
xii
-------
Propagation of Random Uncertainties (Measurements Involving Several Physical Quantities) 6-11
Grand Averaging and Weighted Means 6-12
Statistics of Radioactive Decay (Poisson Distribution) 6-13
Background Subtraction 6-14
Random Uncertainty Components 6-14
Assessment of Systematic Uncertainties 6-15
Confidence Limits 6-16
Confidence Limits for Systematic Uncertainty 6-16
Confidence Limits for Random Uncertainty (Normal Distribution) 6-16
Degrees of Freedom 6-17
Confidence Limits for Random Uncertainty (t-Distribution) 6-17
Confidence Limits for Random Uncertainty (Poisson Distribution) 6-18
Reporting the Overall Uncertainty 6-19
Categorization of the Random and Systematic Uncertainty Components 6-19
Total Random Uncertainty 6-20
Systematic Uncertainty Components 6-20
Approaches to an Overall Uncertainty Statement 6-20
Methods of Combining the Random and Systematic Uncertainty Components 6-20
Recommended Statement of Overall Uncertainty and Final Result 6-22
Recommendations 6-22
Detection Limits 6-24
The Estimated Lower Limit of Detection (LLD) 6-24
The Estimated Minimum Detectable Concentration (MDC) 6-26
Misapplication of the LLD or MDC for A Posteriori Decisions 6-27
The LLD and MDC for Multi-Component and Spectral Analyses 6-28
Interpretations and Restrictions 6-30
Recommendations 6-31
References 6-32
STATISTICAL METHODS FOR ENVIRONMENTAL RADIATION DATA INTERPRETATION
Statement of the Problem 7-1
Areas of Concern 7-2
Experimental Design 7-2
Required Sensitivity 7-2
Sensitivity Definitions 7-2
The Sampling Equation 7-3
Program Specifics 7-3
Findings 7-4
Recommendations and Conclusions 7-4
Sampling Representativity 7-8
Environmental Variability 7-8
How May Environmentalists Approach "Representativity"? 7-9
Summary 7-9
Establishing a Reliable Baseline 7-10
Baseline Model 7-10
Obtaining a Central Value and Variance 7-11
Fractional Exposure Method 7-11
Contrast 7-12
Summary 7-12
Data Analysis 7-13
Estimates of Precision 7-13
Numerical Tests 7-14
Sequential Plots 7-14
Probability Plotting 7-15
Asymmetric Case 7-15
Standard Deviations and Other Estimates of Dispersion 7-15
Distribution Analysis 7-16
Handling of "Less Than Detectable" Values 7-16
Group Comparisons 7-16
Summary of Recommended Data Treatment 7-17
References 7-17
Selected Bibliography: Experimental Design of Environmental Surveillance Programs 7-18
xiii
-------
EFFECTIVE COMMUNICATION WITH THE PUBLIC
Purpose 8-1
Objectives 8-1
Introduction 8-1
Mass Communication Principles 8-1
Effective Communication with the News Media and the Public 8-3
Accident Situations 8-3
Television Interview 8-6
Interview Bill of Rights 8-6
Reporting Environmental Radiation Data 8-7
Conclusion 8-7
References 8-8
ENVIRONMENTAL RADIOLOGICAL SURVEILLANCE: MECHANISMS FOR INFORMATION EXCHANGE
Purpose 9-1
Methodology of Approach 9-1
Contacts 9-1
Summary of Comments 9-1
Assessment of Existing Information Exchange Methods 9-2
Recommendations 9-2
References 9-2
Sample Newsletter 9-8
xiv
-------
ENVIRONMENTAL RADIATION MONITORING OBJECTIVES
E.F. Conti (Nuclear Regulatory Commission, Washington, DC), R.H. Johnson, Jr. (Environmental
Protection Agency, Washington, DC), D.E. McCurdy (Yankee Atomic Electric Co., Westborough,
Mass.), J.A. McLaughlin (Department of Energy, Environmental Measurements Laboratory,
New York, NY) and C.A. Pelletier (Science Applications, Inc., Rockville, MD)
The lack of specific guidance for environmental monitoring has confused the ob-
jectives of different programs and affected the quality and usefulness of re-
sults. Beginning with the general principles stated by the ICRP in 1965, and
by other authoritative groups, a viewpoint on objectives was developed to aid
our understanding of the potential and limitations of three types of monitor-
ing. Excluded from explicit consideration are monitoring for accidents, basic
environmental research and geological surveys. The purposes of monitoring,
which relate to assessing risk to man, are to facilitate the assessment of dose
to man and the assessment of changes in the radiation environment. The emphasis
depends on the type of monitoring identified. The three types are ambient, i.e.,
regional or global monitoring; facility monitoring; and special investigations
that are mainly retrospective studies such as those needed for decommissioning.
The workgroup noted that technical limitations in measurement and calculation
also lead to a diminished and sometimes inadequate data quality. This is a
significant problem when measurements are performed near the lower limit of de-
tection and the pre-existing radiation field and contributions to concentration
and dose from any undesirably dispersed radionuclides cannot be quantified. The
workgroup identified particular limitations for each of the three types of moni-
toring related to consideration of the stated objectives.
(Environmental; limitations; monitoring; objectives)
Introduction
The Health Physics Society Committee on
Upgrading Environmental Radiation Data identified
the need for an evaluation of the objectives of
environmental monitoring. Representatives of
regulatory, developmental, commercial, State, and
utility organizations comprised the work group.
Monitoring objectives were identified in publica-
tions, and examples were developed on how these
objectives are reflected in different types of
monitoring programs. The work group emphasized
routine environmental monitoring as opposed to
the objectives related to accidental releases of
radioactive material to the environment. How-
ever, some of the considerations of this report
would be of value in an evaluation of accidental
releases of radioactive material to the environ-
ment. The limitations of monitoring programs
related to objectives were evaluated.
Background
The broad objectives of environmental moni-
toring programs identified by the International
Commission on Radiological Protection were:
a. The assessment of the actual or poten-
tial exposure of man to radioactive
materials or radiation present in his
environment or the estimation of the
probable upper limits of such exposure.
b. Scientific investigation, sometimes re-
lated to the assessment of exposures,
sometimes to other objectives, e.g.,
the general characterization of the
radiation environment, and
c. Improved public relations.
In 1972, the Environmental Protection Agency
identified three similar objectives but in a
somewhat more specific manner.[2]
Despite these objectives, routine environ-
mental monitoring programs conducted at nuclear
facilities by facility operators, government
organizations, and research laboratories con-
tinued to consist mostly of collecting environ-
mental samples and conducting analyses for gross
or total alpha, beta, and gamma radioactivity.
In the past few years, both the types of
environmental samples collected and the analyses
performed have been focused more on dose-to-man
considerations. Environmental media represent-
ing pathways of exposure to man by specific
radionuclides are now often collected to allow
correlating environmental radioactivity concen-
trations to accepted dose values.[3,4]
Programs of ambient or regional monitoring,
facility operation, and special studies were ex-
amined by the work group. This examination in-
dicated that a number of objectives, not all
complementary, are used for the organization and
operation of these programs. The objectives
stated by the operators of specific programs have
included assessment of adequacy of facility con-
trols, assessment of dose, demonstration of
compliance with regulations, detection of long-
term trends, protection of the general public,
public acceptance, detection of environmental
1-1
-------
pathways, emergency response, and offsite versus
onsite source delineation. For many reactor
facilities, effluent releases have been markedly
reduced in the last decade, but environmental
monitoring programs are still required as a
means of continuing the assessment of the environ-
mental impact of facility operation. Several
site-specific studies on effluent and environ-
mental field measurement have demonstrated the
difficulty of unambiguously correlating the
environmental radioactivity concentration of a
radionuclide like radioiodine with the release of
nuclear power plant effluents. A presentation of
the objectives for'the conduct of nuclear facili-
ty environmental monitoring programs was issued
by the NRC in 1978.[6]
Nuclide-specific environmental measurements
require that programs be conducted in a frame-
work that relates the results to some environ-
mental transport model and the resultant dose to
man. Environmental radiological monitoring thus
needs to be designed and operated in a manner that
allows the environmental dose rate and radio-
activity concentration to be used in meaningful
estimates of potential dose to man.
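As an illustration of the kind of calculation such a design must support, the short sketch below converts a measured concentration in a single pathway into an estimated dose. The nuclide, usage rate, and dose conversion factor shown are assumed placeholder values for the example, not figures from this report.

    # Single-pathway illustration: dose = concentration x usage rate x dose conversion factor.
    # All numerical values are assumed placeholders for the example.
    milk_conc_pci_per_l = 1.0        # measured radionuclide concentration in milk, pCi/L
    usage_l_per_yr = 310.0           # assumed annual milk consumption, L/yr
    dcf_mrem_per_pci = 1.5e-2        # assumed ingestion dose conversion factor, mrem/pCi

    dose_mrem_per_yr = milk_conc_pci_per_l * usage_l_per_yr * dcf_mrem_per_pci
    print(round(dose_mrem_per_yr, 2), "mrem/yr (illustrative estimate for one pathway)")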
Basic Purpose of Environmental
Radiation Monitoring
The basic purpose of environmental radiation
monitoring is to assess dose or estimate the risk
to an individual or a population that may be ex-
posed to radiation and radioactivity in the
environment. The risk estimated need not be
absolute but can be comparative in nature, i.e.,
the measured or calculated dose can be compared
with background or the estimated risk compared to
acceptable values.
This underlying purpose is expressed by one
or more of the following specific objectives:
a. To aid in or assess dose,
b. To determine any trends of environmental
radiation dose rates and radioactivity
concentrations, and
c. To reassure members of the public and
governmental organizations.
In addition to monitoring programs performed
for primarily technical reasons, e.g., to assess
dose or trends in environmental levels, public
reassurance requires monitoring approaches that
are responsive to the general public's percep-
tion of risk. This latter approach may be differ-
ent from the first two, since public reassurance
is related to perception of risk that is variable
and difficult to quantify.
However, the underlying purpose of any
environmental radiation monitoring is to aid in
estimating risk that may result from exposures
of the general public. The other monitoring
objectives stem from this purpose.
Types of Programs
The basic objectives for environmental
radiation monitoring are interpreted in a
variety of programs in terms of program design,
implementation, and use of monitoring data.
Three types of programs (regional, facility, and
special studies) were considered to indicate how
the basic objectives are reflected in each pro-
gram.
Regional Monitoring
An example of an ambient (regional) moni-
toring program is the Environmental Radiation
Ambient Monitoring System (ERAMS).[7] This
system is composed of nationwide sampling
stations that provide air, surface and drinking
water, and milk samples from which environmental
radiation levels are derived.
The objectives of the ERAMS program are:
a. To provide data for developing a
national dose model.
b. To provide a direct assessment of the
population intake of significant radio-
active pollutants.
c. To monitor pathways for significant
population exposure from routine and
accidental releases from major sources.
d. To estimate ambient levels of radio-
active pollutants for standard-setting
activities, verification of abatement
processes, and identification of
trends in the accumulation of long-lived
radionuclides in the environment.
e. To provide an "early warning" system
for emergency abatement actions or to
indicate the necessity for further
evaluation in the form of contingency
sampling operations.
Design of Program - ERAMS is designed to
accomplish stated objectives by providing base-
line and long-term trend data for major airsheds,
watersheds, and milk-producing regions. This
system therefore complements localized facility
programs conducted by State agencies and facil-
ity operators. ERAMS objectives are met by the
selection of sampling stations to provide the
best combination of radiation source monitoring
(such as surface waters downstream from nuclear
power reactors) and wide population coverage.
Implementation of Program - ERAMS objectives
are implemented through several programs that
provide coverage for particular exposure path-
ways or media and for particular radionuclides.
These programs include collection of air partic-
ulates on filters that are analyzed for gross
beta activity and for gamma emitters. Precipi-
tation samples are also collected. The water
program includes collection of surface water
located downstream from nuclear facilities or at
background locations. Grab samples for drinking
1-2
-------
water are taken at major population centers and
selected nuclear facility environs. Pasteurized
milk is also collected at one or more collection
sites in each State.
Use and Interpretation of Data - ERAMS data
are available quarterly in the publication en-
titled "Environmental Radiation Data". These
data are not interpreted and are provided mainly
for use by health professionals in State programs
that cooperate in collecting ERAMS samples. Trend
evaluations are prepared annually and published
in "Radiological Quality of the Environment la
the United States". Special dose assessments
have been made to determine public health signifi-
cance of fallout from atmospheric nuclear weapons
tests. Public reassurance with regard to fallout
is provided by the identified reports and through
official news releases during periods of fallout
incidents.
Nuclear Facility Monitoring
Environmental monitoring programs are con-
ducted during the operational lifetime of nuclear
facilities primarily to meet the three basic ob-
jectives: determination of dose, assessment of
trends, and public reassurance.
Design of Program - Programs that are con-
ducted for routine facility operations are de-
signed to accomplish the above-stated objectives
by providing data (a) on dose rates and radio-
nuclide concentrations for the important exposure
pathways and (b) for detecting and assessing
trends in environmental levels and concentrations
of radionuclides that may contribute to human ex-
posure. The public reassurance objective is
accomplished, in part, by a demonstration of the
quality of the environmental measurements.
Implementation of Program - As noted above,
environmental sampling is implemented around
nuclear facilities according to the most im-
portant exposure pathways. NRC has published a
listing of typical features of environmental
monitoring programs for nuclear facilities, in-
cluding nuclear power reactors, uranium mills,
and fuel fabrication facilities. The program
features include number of samples and locations,
sampling and collection frequency, type and
frequency of analyses, and collection and analy-
sis of data from key locations as well as from
an unaffected "control" location.
Special Studies
Introduction - A type of monitoring that is
not easily classified as to technical purpose
includes investigations of fairly localized envi-
ronmental radioactivity that may produce unwanted
radiation exposure to man. Basic environmental
research and development, as well as efforts for
geological surveys,[8,9] are not included in this
discussion of special studies.
Natural radioactive material is perturbed
by mining and milling and is often redistributed
in man's environment as waste material.[10,11]
Nuclides left by a nuclear operation may be add-
ed to the same nuclides that were widely deposit-
ed from worldwide nuclear weapons debris,[12] thus
compounding the difficulty of measurement.
These programs are conducted to define con-
taminated areas by determining radionuclide con-
centrations and distributions and external radia-
tion levels.
The objectives for this program are:
a. To respond to public concern.
b. To provide guidance for needed decon-
tamination and to facilitate decommis-
sioning.
Unlike the operational monitoring performed
at nuclear facilities and regional or global
monitoring, this monitoring is often performed
retrospectively, i.e., after the discovery of the
possibility of abnormal radioactivity where it
previously was not expected, and it is usually
performed for definite or relatively short peri-
ods of time.
Types of Special Studies for Decontamina-
tion and/or Decommissioning - The characteriza-
tion of potentially contaminated areas is requir-
ed for systematic dose assessment to aid either
in estimating the risk to human inhabitants or
in simply determining that the radiation para-
meters, e.g., concentrations and dose, are with-
in an acceptable standard.* If the characteriza-
tion indicates unacceptable conditions, it pro-
vides guidance for any cleanup effort, including
protection of workers during decontamination, and
helps to ensure that cleanup is effective and
economical. In some instances, property is to
remain within the administrative control of the
owner or user of the property but will be used
for an alternative purpose. In other instances,
the property will revert to a new administrative
control that will involve no use of radioactive
material.
The type and extent of monitoring depend on
whether only the first instance or both instances
need to be addressed and whether
decontamination is necessary. Since the criteria
for each instance, as well as the numerical
limits on radioactivity or radiation quantity, .
may differ, the details of the required monitor-
ing may also differ.
There are two levels of survey. The first-
level survey is an evaluation of the extent of
radionuclide contamination and may include dose
assessment. This survey is made primarily to
identify places that may need decontamination.
It makes little sense to perform full-fledged
studies of areas that obviously will require de-
contamination and then to repeat a detailed final
survey. For example, for a study of a large area
of land, such as that of Enewetak Atoll,[13] an
*In the United States, there is no generally
accepted standard.
1-3
-------
airborne survey helps to determine the extent of
the decontamination necessary. Then, ground-
level evaluations are made in selected areas for
detailed evaluations leading to dose assessment.
An example of the first-level survey is the work
of Leggett et al. for the Department of Energy's
Remedial Action Program.[14] Following decon-
tamination, a complete survey of record is
necessary.
In cases that involve variable environ-
mental levels, e.g., radon and its daughters in
land and inside structures contaminated with
radium, characterizing the area depends on many
successive measurements before a meaningful
determination of some average exposure parameter
can be made.
Another problem is choosing locations
to be measured or sampled so as to be representa-
tive of the radiological conditions. The survey
should identify average and "worst-case" condi-
tions in each location surveyed. If there is
good reason to believe that measurements and
sample collection vary fairly continuously and
moderately over the area surveyed, a reasonable
approximation is obtained by choosing a "network"
of points as uniformly spaced as practical.
For radiological surveys of land, gamma
radiation dose rates at some standard height (1
meter) are a measure of the average value over
approximately 100 m².* This suggests a survey
block approximately 10 meters on a side for in-
tensive coverage of the land area. Instrument
readings should also be observed in transit from
one grid point to the next grid point to locate
highly elevated concentrations.
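The uniform network and 10-meter survey blocks described above can be laid out with a simple calculation; the sketch below is illustrative only, and the parcel dimensions used are assumed example values.

    # Illustrative layout of survey points on 10-m centers; each point represents
    # roughly 100 m^2. The parcel size is an assumed example value.
    spacing_m = 10
    parcel_x_m, parcel_y_m = 120, 80     # assumed parcel dimensions, meters

    grid_points = [(x, y)
                   for x in range(0, parcel_x_m + 1, spacing_m)
                   for y in range(0, parcel_y_m + 1, spacing_m)]

    print(len(grid_points), "gamma dose-rate readings at 1-m height, one per ~100 m^2 block")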
The second-level survey, and possibly the
most important part of a decommissioning survey
program or any special monitoring program,
establishes a complete record of the character
of the land or structure prior to transfer to
the alternative use. This phase of the study
may include dose assessments for different types
of human use as well as possible estimates of
population health effects. This record should
be developed and preserved indefinitely as was
done, for example, for a former location con-
taminated with uranium and plutonium in Los
Alamos, New Mexico.
Because special surveys are performed for
short periods and not indefinitely, as in the
case of monitoring near operating facilities,
periodic "check" surveys may be desirable for
allaying local concern and to determine the possi-
ble importance of any trends in radiation levels
or radionuclide concentrations. These check
surveys are beneficial whether or not decontami-
*This is the height to which airborne sur-
veys are often normalized and, though the area
"sampled" is much larger than 100 m*, airborne
and ground-level measurements can and should be
complementary.
nation or decommissioning is involved.
Limitations of Surveillance Programs
General
In order to meet the basic objectives of a
surveillance program, considerable care must be
taken to ensure that the program design is
technically sound. The previous sections have
outlined general monitoring philosophies and
several types of surveillance programs designed
to emphasize sampling "Important pathways". For
nuclear facilities, a monitoring program is
normally formulated and implemented prior to the
initial operation of the plant. In this case,
the pathways should be selected based on the
evaluation of the source terms, release mecha-
nisms as well as environmental parameters, and
land and water usage in the immediate vicinity
of the operation. In the case of national
surveillance programs covering larger geographic
areas and uncontrolled global or geological re-
leases of radioactive material, the basic path-
way analysis concept is still applicable, but
the scope of the program may be limited to high
population density areas or regional areas of
concern.
In most cases, the information gathered from
an environmental surveillance program cannot be
used to determine the radiation risk to a popula-
tion (or individual) exposed to all naturally
occurring and man-made radioactivity inherent in
or discharged to the environment. Current reg-
ulatory requirements limit the amount of radio-
active material discharged offsite to a level at
which, under normal dilution or diffusion pro-
cesses in the environment, typical radiation
detection equipment and laboratory radiochemical
techniques do not have the sensitivity to deter-
mine the specific radiation flux or concentration
for all radionuclides. However, this does not
mean that the current monitoring techniques are
antiquated or insufficient. The sensitivity of
surveillance methodologies is sufficient to pro-
vide an upper bound on the radiation flux and
radionuclide concentrations which, when coupled
with the important pathway parameters, can lead
to an upper limit estimation of the population
dose and possibly the radiation risk.
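One way to picture this bounding use of the data is sketched below: results reported as below the detection limit are replaced by the detection limit itself before assumed pathway and dose conversion factors are applied, so the product is an upper-limit estimate rather than a best estimate. All numerical values in the sketch are placeholders, not data from this report.

    # Illustrative upper-bound estimate: non-detects are set equal to the detection limit.
    # All numbers are assumed placeholders.
    samples_pci_per_l = [None, None, 0.8, None]   # None means "below the detection limit"
    lld_pci_per_l = 0.5                           # assumed lower limit of detection, pCi/L
    usage_l_per_yr = 730.0                        # assumed annual drinking-water intake, L/yr
    dcf_mrem_per_pci = 5.0e-5                     # assumed dose conversion factor, mrem/pCi

    mean_conc = sum(c if c is not None else lld_pci_per_l
                    for c in samples_pci_per_l) / len(samples_pci_per_l)
    print(round(mean_conc * usage_l_per_yr * dcf_mrem_per_pci, 4), "mrem/yr upper-limit estimate")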
Since, in many cases, the estimation of the
radiation risk depends solely on the evaluation
of the final results of the monitoring program,
all aspects of a monitoring program should be
treated with equal importance. Experience has
shown that reliable results and interpretations
can come only from programs formulated and im-
plemented by individuals who are knowledgeable
in all facets of the program. The areas to be
considered are chemical and physical character-
istics of the source terms, pathway selection,
media to be evaluated, field sampling procedures,
laboratory analytical techniques, radiation de-
tection requirements, statistical interpretation
of data, and dosimetric evaluation. If possible,
the implementation of all program aspects should
be under the direction of one organization
1-4
-------
rather than fragmented into areas of separate
group responsibilities so that continuity and
overall program understanding are maintained.
In the past, the usability of the environ-
mental data compiled from surveillance programs
that did not stress equal importance of the in-
dividual monitoring components has been question-
able. Current state-of-the-art radiation de-
tection equipment and radiochemical techniques
have the necessary sensitivity to measure radia-
tion and radioactivity levels corresponding to about a millirem
per year. Therefore, if the sampling protocol,
i.e., station location, media sampled, and sam-
pling techniques, is poorly defined or inappro-
priate, the resultant analytical measurements, no
matter how precise or accurate, are meaningless
and could not be used to estimate realistic
radiological risks. Furthermore, it would be
just as meaningless to have excellent sampling
and laboratory analytical techniques if the
chemical form of the nuclide in the effluent and
mixing zone media has not been determined. In
most cases, the chemical form of the radioactive
material dictates the exposure pathways of im-
portance. For example, many nuclides may be very
soluble in process solutions but when discharged
to a fresh or estuarine water environment form
insoluble compounds that are unavailable for di-
rect water pathway considerations. Similarly,
volatile airborne contaminants may be in several
chemical forms, some of which may not be collect-
ed by normal sampling means, e.g., radioactive
I2 and CH3I.
In order to ensure the usability of the data
compiled as part of the surveillance function, it
is essential that a quality assurance (QA) pro-
gram be an integral part of the program. The QA
program should be applied to every major portion
of the surveillance program, i.e., sampling and
analytical techniques, recordkeeping, and dosi-
metric evaluation. The subject of quality
assurance is discussed in Chapter 5.
To accomplish the stated objectives, the de-
sign and execution of the field sampling phase
of a monitoring program should be given at least
as much attention as laboratory analyses. Sam-
ples must be representative of the pathway under
investigation in terms of time, space, and media
characteristics of that pathway.
Environmental samples should be analyzed
to determine those radionuclides released from a
nuclear facility to the environment that have
the greatest significance from the standpoint of
radiation dose. This is necessary since radia-
tion dose estimates can be made only on the
basis of specific radionuclide measurements.
The ultimate use of environmental data will
define the sampling program parameters. If
primary interest is in average population dose
assessment, compositing techniques will suffice.
However, if there is a need for more detailed
information, such as defining a temporal distri-
bution, analyses of individual samples will be
required.
Because of their inherent lack of specifi-
city and other limitations, measurements of gross
radioactivity (gross alpha, gross beta, gross
beta-gamma) generally are inadequate and there-
fore unacceptable for making estimates of radia-
tion dose. In certain cases, measurements of
gross radioactivity may be acceptable for screen-
ing purposes, e.g., when the radionuclide composi-
tion of the environmental media is not expected
to change rapidly with time and when the results
can be correlated with determinations of specific
radionuclides from a selected number of similar
samples. For example, air and water are environ-
mental media for which comparative measurements
of gross radioactivity may be acceptable.
Analyses of composite samples are often used
to keep the analytical workload within reason
without sacrificing significant information.
Time composites for analyzing long-lived nuclides
may be in order if short-term time variations are
not important. Analyses of spatial composites
may also be carried out depending on the objec-
tives of a particular monitoring program.
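As a simple illustration of a time composite, each aliquot can be weighted by the interval it represents; the concentrations and intervals in the sketch below are assumed example values, not data from this report.

    # Illustrative time-weighted composite; values are assumed examples.
    aliquots = [(0.9, 7), (1.4, 7), (1.1, 14)]   # (concentration in pCi/L, days represented)

    composite = sum(c * d for c, d in aliquots) / sum(d for _, d in aliquots)
    print(round(composite, 2), "pCi/L time-weighted composite concentration")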
Ambient Monitoring
ERAMS attempts to meet its objectives pri-
marily through the location of one monitoring
station per State for each sampling program.
Only a few States with the highest population
densities have more than one station. There-
fore, ERAMS is able to effectively monitor only
widespread contamination involving several
States. The reason for locating at least one
sampling station in each State is to assure the
public that monitoring is being done in their
State. ERAMS would probably not be effective
for monitoring local contamination from an in-
dividual nuclear facility.
Because of the wide scope of a national pro-
gram, the types of media analyzed are kept to a
minimum. However, media such as airborne parti-
culates and dietary samples should provide at
least an indication of the magnitude of any
trend of radioactivity in other pathway media.
The distribution and levels of airborne and
dietary radioactivity should be fairly uniform
in a region with temporal variation on a seasonal
basis.
For regional monitoring programs, the sam-
pling frequency will depend on whether the source
term is manmade or naturally occurring. In
general, natural radioactivity in potable sur-
face and ground water as well as surface air does
not vary significantly in concentration over
time unless changes occur in the watershed or
geological formations. Therefore, the sampling
schedule for pathway analysis of natural radio-
activity can be made on a relatively infrequent
basis.
While trend assessments may be done annually
as for the ERAMS program, such is not the case
for the assessment of doses. There are a number
of technical limitations on dose assessment on
the basis of regional data. Average exposure
conditions for individuals in various places in
the United States are variable as are the data
from regional measurements to characterize such
1-5
-------
exposure conditions.
Nuclear Facility Monitoring
In recent years, there has been a trend to
use environmental surveillance as confirmatory
programs at nuclear facilities to verify the in-
plant controls on the release of radioactive
materials or to verify that the releases of radio-
active material to the offsite environment have
met certain regulatory criteria for acceptable
individual doses. Current regulatory guidance
for environmental monitoring programs at commer-
cial nuclear power plants has recommended evalu-
ating only those direct pathways, or the last
trophic level of a dietary pathway, that contri-
bute to an individual's radiation dose. For ex-
ample, only water and edible fin/shell fish would
be considered important in the water pathway.
For the confirmatory analysis of the liquid path-
way, the lower and intermediate trophic levels
(plankton, algae, consumer fish, and sediment)
are not deemed relevant and do not enter into
modeling techniques to evaluate individual doses.
Of course, the correlation between the estimated
dose values (probable upper limits) of the con-
firmatory environmental surveillance program and
the predicted individual dose from the effluent
monitoring program depends on the dosimetric
parameters used in each program as well as the
technical adequacy of the two monitoring programs.
It should be pointed out that most dosimetric
parameters for effluent monitoring are generic
rather than site specific and were obtained from
previous detailed scientific investigations of
all components of a given pathway. Furthermore,
the dosimetric parameters are normally very
conservative and overestimate the resultant pre-
dicted radiation dose. Therefore, it would not
be unusual to find that the absolute results of
the programs are not in agreement.
A limitation to this approach is that direct
measurement of radiation exposure, with its
associated contribution to public reassurance, is
missing. A few continuously monitoring and re-
cording instruments located at a few key loca-
tions near some nuclear facilities (power reac-
tors) would supplement more extensive passive
gamma-ray monitoring. This would make it possi-
ble to provide information, if available, on a
more timely basis for public reassurance.
No environmental monitoring program can be
effective without direct knowledge of the efflu-
ent constituents as well as knowledge of the
frequency of routine and nonroutine effluent dis-
charges. Environmental samples should be ana-
lyzed to determine those radionuclides that are
released from a nuclear facility to the environ-
ment that have the greatest significance from the
standpoint of radiation dose. Determinations of
specific radionuclide concentrations are needed
in order to relate the results of the analysis
to an estimated radiation dose level. As was
indicated in the discussion of general limita-
tions, because of their inherent lack of specifi-
city and tendency for large variation, measure-
ments of gross radioactivity (gross alpha, gross
beta, gross beta-gamma) generally are inadequate.
The design and execution of the field sam-
pling phase of a monitoring program should be
given attention equal to that given the labora-
tory analysis. Very accurate and precise ana-
lyses of nonrepresentative field samples yield
results that may appear to be valid but that are
incapable of interpretation and not worthy of
detailed analysis. The main goal of a sampling
program is to obtain a sample that is repre-
sentative of the medium under investigation in
terms of time, space, and medium characteristics.
For example, it would be inappropriate to rely
on monthly grab samples of water if the plant
discharges infrequently (16 hours per week) or if
the effluent constituents vary dramatically over
time.
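The weakness of infrequent grab sampling in that situation is easy to quantify; using the 16-hour-per-week figure cited above, a randomly timed grab sample has roughly a one-in-ten chance of coinciding with a discharge, as the illustrative sketch below shows.

    # Illustrative: probability that a randomly timed grab sample coincides with a
    # discharge occurring 16 of the 168 hours in a week (figure cited in the text).
    p_hit = 16.0 / 168.0
    print(round(p_hit, 3))   # about 0.095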
Special Studies for Decontamination and/or
Decommissioning
The first problem of this type of monitor-
ing concerns the lack of acceptable environmental
residual activity guidance, particularly for
property that was contaminated with either
natural or manmade radioactivity and, after some
type of decommissioning, was transferred to a
public use or one not involving radioactive
material.
The second problem involves inadequacies
in methods for the characterization of some
radioactive contamination in the environment.
Computational methods for dose assessment, in-
cluding the estimation of effects on humans who
may inhabit the property, have not been satis-
factory, nor have measurement methods, including
sampling methods, been entirely adequate. The
latter problem seems partly attributable to a
failure, largely for reasons of cost, to employ
some existing monitoring methods and to ensure
that adequate criteria and standards are avail-
able. Adequate criteria and standards are
needed to improve cost estimation for decon-
tamination and decommissioning efforts and to
guide the development of standard measurement
methods. This oversight may have contributed
to the concern of local citizens, because surveys
have had to be repeated to show that the
environmental characterization was complete or
at least technically defensible. A corollary
problem has arisen because of the lack of pro-
grams for systematic development of needed
environmental instrumentation.
References

[1] International Commission on Radiological Protection, "Principles of
    Environmental Monitoring Related to the Handling of Radioactive
    Materials," ICRP Publication 7 (September 1965).
[2] United States Environmental Protection Agency, "Environmental
    Radioactivity Surveillance Guide," USEPA ORP/SID 72-2 (June 1972).
[3] United States Nuclear Regulatory Commission, Regulatory Guide 4.1,
    "Programs for Monitoring Radioactivity in the Environs of Nuclear
    Power Plants" (April 1975).
[4] United States Energy Research and Development Administration, "A Guide
    for Environmental Radiological Surveillance at ERDA Installations,"
    USERDA 77-14 (March 1977).
[5] United States Nuclear Regulatory Commission, Code of Federal Regulations
    Title 10 Part 50, Appendix I (April 1975).
[6] United States Nuclear Regulatory Commission, "Radiological Environmental
    Monitoring by NRC Licensees for Routine Operations of Nuclear
    Facilities," NUREG-0475 (October 1978).
[7] W.D. Rowe, F.L. Galpin, and H.T. Peterson, "EPA's Environmental
    Radiation-Assessment Program," Nuclear Safety 16, 6, 667-682
    (November 1975).
[8] United States Department of Energy, "Radiological Survey and
    Decontamination of the Former Main Technical Area (TA-1) at Los Alamos,
    New Mexico," USDOE Report LA-6887 (1977).
[9] R.M. Kogan, I.M. Nazarov, and Sh. D. Fridman, "Gamma Spectrometry of
    Natural Environments and Formations," Atomizdat, Moscow (English
    translation, Israel Program for Scientific Translations, Jerusalem)
    (1969).
[10] National Council on Radiation Protection and Measurements,
    "Environmental Radiation Measurements," NCRP Report 50 (1976).
[11] Zbigniew Jaworowski, Jan Bilkiewicz, Ludwika Kownacka, and S. Wlodek,
    "Artificial Sources of Natural Radionuclides in Environment," The
    Natural Radiation Environment II, J.A.S. Adams et al. (eds.), Report
    CONF-720805-P2, U.S. ERDA (1972).
[12] Richard J. Guimond and Samuel T. Windham, "Radioactivity Distribution
    in Phosphate Products, By-Products, Effluents and Wastes," United States
    Environmental Protection Agency, USEPA Technical Note ORP/CSD-75-3
    (1975).
[13] United States Energy Research and Development Administration, "HASL
    Measurements of Fallout Following the September 26, 1976 Chinese
    Nuclear Test," USERDA Report HASL-314 (1976).
[14] United States Atomic Energy Commission, "Enewetak Radiological Survey,"
    USAEC Report NVO-140 (3 volumes) (1973).
[15] R.W. Leggett, H.W. Dickson, and F.F. Haywood, "A Statistical Methodology
    for Radiological Surveying," Symposium on Advances in Radiation
    Protection Monitoring, June 26-30, 1978, Stockholm, Sweden,
    International Atomic Energy Agency, IAEA-SM-299/103 (1978).
DEFINITION OF CRITICAL PATHWAYS AND RADIONUCLIDES FOR
POPULATION RADIATION EXPOSURE AT NUCLEAR POWER STATIONS
B. Kahn (Georgia Institute of Technology), J. Golden (Commonwealth Edison Co.), A. Goldin
(U.S. EPA) and P. Magno (U.S. EPA)
The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.109 identifies poten-
tial critical radionuclides and pathways of exposure at nuclear power stations
and presents generic environmental transfer factors and calculational models
for preoperational, and later operational, estimates of doses to individuals and
population groups. The Guide also encourages use of site-specific factors where
they are available. For identifying critical pathways, the subcommittee
on Defining Critical Pathways considers the calculational approach of the Guide
satisfactory so long as site-specific factors are used where available and all
pathways are included that contribute at least 10 percent of the total site dose
limits (specified in 10 CFR 50 Appendix I) or 1 mrem/yr (4 percent of the 40
CFR 190 whole-body limit), whichever is less. Critical pathways determined by
preoperational estimates of radionuclide releases should be confirmed with re-
spect to the source term when actual release data are available. Environmental
monitoring of these pathways is necessary, but the effort may be reduced if
environmental measurements show that the actual dose equivalent rates are below
1 mrem/yr.
(Critical pathways; environmental monitoring; radiation exposure)
Introduction
The International Commission on Radiological
Protection (ICRP) has recommended that environ-
mental radiological monitoring be undertaken by
considering the critical radionuclides, pathways,
and exposed population groups. In this con-
text, "critical" means "much more important than
others"; the detailed definition is given in the
Appendix. The U.S. Nuclear Regulatory Commission
has followed this concept in its Regulatory Guide
1.109, which identifies potential critical radio-
nuclides and pathways at nuclear power stations
and presents generic environmental transfer fac-
tors and calculational models for preoperational
estimates of doses to individuals and population
groups. The Guide also encourages use of site-
specific factors and indicates the need to con-
sider other pathways if the additional dose ex-
ceeds [. . .] cause individual doses. It is possible, however,
that consideration of such pathways will indicate
that they are not suitable for environmental
monitoring because of expected undetectable levels
of activity.
The operator of a nuclear power station
should have the option of eliminating unnecessary
monitoring activities by demonstrating that the
calculational model unduly overestimates the dose.
Improved transfer factors can be applied by anal-
ysis of data from at least two years of routine
monitoring, or by brief studies under the range
of pertinent conditions. Because such reduced
. environmental monitoring places the main burden
of the radiation protection program on effluent
monitoring, the measured radionuclide source terms
must be demonstrably accurate, as confirmed by
independent quality assurance programs.
Additional Studies
In order to place these recommendations on a
firmer factual basis, it is recommended that the
factors used in the calculational model for the
critical pathway be periodically scrutinized for
reliability and evaluated through additional
studies where necessary. Useful information may
also be available from facility monitoring pro-
grams and special studies, if only to indicate
upper limits of transfer factors.
It is anticipated that the Nuclear Regulatory
Commission will develop similar Regulatory Guides
for other nuclear facilities to calculate doses
to individuals and population groups, and that
these Guides will identify the common critical
pathways and radionuclides. Until these become
available, such other facilities will have to be
examined in detail with regard to radioactivity
source terms, pathways, and points of population
radiation exposure to assure that no critical
pathways are omitted.
Appendix: Definition of the term "critical"
Critical. "The word 'critical' has been used
by the ICRP to describe the organ of the body
whose damage by radiation results in the greatest
injury to the individual (or his descendants).
The injury may result from inherent radiosensiti-
vity or indispensability of the organ, or from
high dose, or from a combination of all three.
The use of the term "critical" has here been ex-
tended to describe nuclides, articles of diet,
and pathways of exposure which deserve primary
consideration as being the mechanisms of princi-
pal exposure of individuals. By a further exten-
sion, the term has been used to describe groups
of the population whose exposure is homogeneous
and typical of that of the most exposed popula-
tion."
References

[1] Committee 4 of the International Commission on Radiological Protection,
    Principles of Environmental Monitoring Related to the Handling of
    Radioactive Materials, ICRP Publication 7, Pergamon Press, New York, 1965.
[2] Office of Standards Development, Calculation of Annual Doses to Man from
    Routine Releases of Reactor Effluents for the Purpose of Evaluating
    Compliance with 10 CFR Part 50, Appendix I (Rev. 1), U.S. Nuclear
    Regulatory Commission Regulatory Guide 1.109, October 1977.
[3] C.W. Miller et al., The Evaluation of Models Used for the Assessment of
    Radionuclide Releases to the Environment, USDOE Rept. ORNL-5382,
    June 1978.
[4] Cited by C.P. Straub, Public Health Implications of Radioactive Waste
    Releases, World Health Organization, Geneva, 1970, p. 47.
PROPAGATION OF UNCERTAINTIES IN ENVIRONMENTAL PATHWAY DOSE MODELS
W. Britz (Nuclear Regulatory Commission, Washington, D.C.), F. Congel (Nuclear Regulatory
Commission, Washington, D.C.), D. Ebenhack (Chem-Nuclear, Barnwell, S.C.), K. Eckerman
(Oak Ridge National Laboratory, TN), J. Foulke (Nuclear Regulatory Commission, Washington,
D.C.), O. Hoffman (Oak Ridge National Laboratory, TN), C. Nelson (Environmental Protection
Agency, Washington, D.C.), C. Wakamo (Environmental Protection Agency, Atlanta, GA),
W. Wilkie (Tennessee Valley Authority, AL)
The uncertainty in the dose predicted by environmental pathway and internal
dosimetry models can be estimated from an analysis of the statistical properties
of the input parameters. This chapter presents one method, known as "imprecision
analysis," for propagating the uncertainties in the multiplicative chain model
for calculating the dose to an infant's thyroid from iodine-131 via the pasture-
cow-milk pathway. For this pathway, the largest source of uncertainty is the
dose conversion factor, which contributes nearly half of the uncertainty. Some
of the uncertainty due to other parameters can be reduced by obtaining site-
specific data. Because critical assumptions were required for this approach,
testing of the complete model by field measurements will be necessary to
determine the true uncertainty.
(Dose assessment; imprecision analysis; pathway models; uncertainty propagation)
Introduction
Limits on the amount of radioactivity in
effluents released to unrestricted areas estab-
lished by the Nuclear Regulatory Commission for
its licensees are given in 10CFR20. The values
listed in Appendix B, "Concentrations in Air and
Water Above Natural Background," can be easily
measured by state-of-the-art techniques. Imple-
mentation of the "as low as reasonably achievable"
criterion in 10CFR50, Appendix I and the regula-
tions of the Environmental Protection Agency in
40CFR190, however, set limits on doses to in-
dividuals. Since some of these doses cannot be
directly measured, it is necessary to rely on
calculations based on the amount of radionuclides
in the effluents released from licensed
facilities.
Because the levels of radioactivity in the
environment resulting from routine releases of
radioactivity by nuclear facilities are expected
to be too low to be accurately measured, dose
assessment is based on environmental pathway
models and internal dosimetry models. Predic-
tions of dose to any single individual are sub-
ject to uncertainty because models are only
characterizations of reality and their input
parameters are inherently variable. For environ-
mental pathway models the best method of deter-
mining the uncertainty associated with model
predictions is experimental validation (i.e., the
comparison of model predictions with field obser-
vations). Unfortunately, model validation experi-
ments are usually not feasible because of their
cost and the difficulty in detecting low concen-
trations of radionuclides.
An alternative approach relies on an analysis
of the statistical properties of the input para-
meters in order to estimate the overall impreci-
sion in the calculated dose to specific individ-
uals. Such an imprecision analysis requires two
critical assumptions: (1) the model is a correct
representation of reality, and (2) the available
data for input parameters are representative of
the true distribution of parameter values. This
approach has been applied to the pasture-cow-milk
pathway to calculate the dose to infants'
thyroids.[1,2] This pathway was chosen as an
example of propagation of uncertainty because of
its importance in reactor licensing and because a
range of data for input parameters is available
from the literature.
The model used to calculate doses to an in-
fant thyroid from 131I via the pasture-cow-milk
pathway is a simple multiplicative chain:

    R = χ k V_D (1/λ_eff) Q_F f_s f_p F_m U D

where
    R   = annual dose (mrem/year) to the thyroid;
    χ   = equilibrium air concentration (pCi/m³);
    k   = a unit conversion factor (86400 sec/day);
    V_D = an air concentration to pasture grass
          transfer factor (m³/kg dry wt.·sec);
    1/λ_eff = effective mean time on pasture
          vegetation (days);
    Q_F = total daily dry matter intake of a
          dairy cow (kg/day);
    f_s = fraction of the total dry matter intake
          composed of fresh forage;
    f_p = fraction of a year that dairy cows
          receive fresh forage;
    F_m = intake-to-milk transfer factor
          (day/liter);
    U   = annual milk consumption rate for in-
          fants, ages 0.5 to 1.5 years
          (liters/year);
    D   = thyroid dose conversion factor for in-
          fants, ages 0.5 to 1.5 years
          (mrem/pCi ingested).

Details of the analyses and similar sections for
other pathways can be found in the report, "A
Statistical Analysis of Selected Parameters for
Predicting Food Chain Transport and Internal Dose
of Radionuclides."[1]
Statistical Evaluation
The statistical evaluation of predictive un-
certainty is based on estimating the distribution
of doses calculated by the air-grass-cow-milk-
child pathway given the distributions of para-
meter values for that model. The parameter dis-
tributions are obtained by an analysis of data
available in the literature. Since these data
represent samples of convenience and judgement,
they contain a bias which cannot be assessed.
The dose distribution is biased both by the para-
meter data and by the limitations of the pathway
model itself. Since the biases introduced by the
parameter data and the model structure are un-
known, the evaluation used here is referred to as
an imprecision analysis. The relationship be-
tween the predicted and the true distribution of
individual doses could only be determined by a
model validation study. However, for the example
given here the assumption made is that the struc-
ture of the model is correct and that the para-
meter data are independently distributed and
unbiased.
The imprecision analysis for this pathway
has been performed analytically. Numerical
techniques (Monte Carlo) have also been developed
for this purpose. The analytical approach in-
volves estimating the mean μ and variance σ² of
the log-transformed data for each parameter in
the model and assuming the parameters are in-
dependently distributed. In this case the sum
of the means of each log-transformed parameter
will then equal the mean of the logarithm for
the distribution of the model output:

    μ(ln R) = μ(ln χ) + μ(ln V_D) + μ(ln 1/λ_eff)
            + μ(ln Q_F) + μ(ln f_s) + μ(ln f_p)
            + μ(ln F_m) + μ(ln U) + μ(ln D).

Similarly, the sum of the variances of each log-
transformed parameter will equal the variance of
the distribution estimated for the model output:

    σ²(ln R) = σ²(ln χ) + σ²(ln V_D) + σ²(ln 1/λ_eff)
             + σ²(ln Q_F) + σ²(ln f_s) + σ²(ln f_p)
             + σ²(ln F_m) + σ²(ln U) + σ²(ln D).

In a multiplicative chain model of this sort,
the multiplication of values is equivalent to
the addition of the logarithms of these values.
Thus, if

    Dose = A · B · C,

then

    Dose = exp (ln A + ln B + ln C).

The addition of normally distributed vari-
ables yields a quantity that is also normally
distributed. Furthermore, if the number of para-
meters in the model is sufficient and if the
total variance is not dominated by the contribu-
tion of any individual parameter, the final dis-
tribution will, according to the Central Limit
Theorem,* also approximate a normal distribution
regardless of the type of distribution associated
with each input parameter. A variable whose
logarithm is normally distributed is itself
lognormally distributed. Thus, the sum of the
means and the sum of the variances of log-trans-
formed parameter values can be assumed to be the
mean and variance of the logarithms of a log-
normal variate.[4] Since the distribution of
the model output has been assumed to be lognormal,
any quantile of that distribution is determined
by the values of the distribution parameters, μ
and σ². For example, the most probable value or
mode (X_p) is

    X_p = exp (μ - σ²);

the geometric mean or median (X_m) is given by

    X_m = exp (μ);

the arithmetic mean is

    X = exp (μ + σ²/2);

the 84th percentile is

    X_84 = exp (μ + σ);

and the 99th percentile is

    X_99 = exp (μ + 2.33 σ).

In lognormal statistics, reference is also
made frequently to the geometric standard devia-
tion S_g. The geometric standard deviation is
the value which, when multiplied by and divided
into the geometric mean, gives an interval of
values within which 68 percent of the distribu-
tion is located. The geometric standard devia-
tion is

    S_g = exp (σ).

*See Rao (1965), p. 128, for a more formal state-
ment and discussion of the Lindeberg-Feller form
of the central limit theorem.[5]
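As an illustration, the propagation above can be
carried out in a few lines of code. The following
is a minimal sketch (in Python, for illustration
only) that sums the means and variances of the
log-transformed parameters listed in Table 2, adds
ln k for the unit conversion factor, and evaluates
the lognormal quantiles defined above; the pairing
of numerical values with parameter symbols follows
Table 2.

    # Analytic imprecision analysis for the multiplicative chain model:
    # sum the means and variances of ln(parameter) and evaluate lognormal
    # quantiles of the model output (annual thyroid dose per unit air
    # concentration). Values are those given in Table 2.
    import math

    params = {            # symbol: (mean, variance) of ln(parameter)
        "V_D":     (-2.1,  0.002),
        "1/l_eff": ( 1.8,  0.02),
        "Q_F":     ( 2.7,  0.014),
        "f_s":     (-0.87, 0.058),
        "F_m":     (-4.6,  0.3),
        "f_p":     (-1.0,  0.17),
        "U":       ( 5.7,  0.04),
        "D":       (-4.5,  0.49),
    }

    mu = math.log(86400) + sum(m for m, _ in params.values())  # add ln k
    var = sum(v for _, v in params.values())
    sigma = math.sqrt(var)

    print("mode              X_p  = %.0f" % math.exp(mu - var))
    print("geometric mean    X_m  = %.0f" % math.exp(mu))
    print("arithmetic mean   X    = %.0f" % math.exp(mu + var / 2))
    print("84th percentile   X_84 = %.0f" % math.exp(mu + sigma))
    print("99th percentile   X_99 = %.0f" % math.exp(mu + 2.33 * sigma))
    print("geometric s.d.    S_g  = %.1f" % math.exp(sigma))

The output agrees with the bottom line of Table 2
to within rounding: a geometric mean near 4900
mrem·m³ per pCi·yr and a geometric standard
deviation near 2.9.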
Comments on the Data
Although the literature is rich in data on
131I transported through the air-grass-cow-milk-
child pathway, very few studies could be found
in ORNL/NUREG/CR-1004 that could be related
directly to the parameters as defined by the
model. Most measurements have been conducted
over only short time periods, whereas the model
parameters are assumed to be averaged over
reasonably long time periods (e.g., the length of
the grazing season) . To account for this short-
coming, judgement was exercised and only average
values derived from each published document were
used in the analysis whenever appropriately
averaged data were unavailable.
Results and Discussion
A summary of statistical information is
presented in Table 1 for each model parameter.
The result of the imprecision analysis using
lognormal statistics to propagate the uncertain-
ties in the multiplicative chain model is given
in Table 2. The column in Table 1 headed "NRC"
contains typical values used by the Nuclear
Regulatory Commission in evaluating doses to
individuals and is provided for comparison
with the data from the literature. More site-
specific data may be used as deemed necessary.
The data from the literature, as summarized in
Table 2, give a geometric standard deviation
of 2.9. The calculated 84th percentile is about
three times the geometric mean value, and the
99th percentile is about ten times the geometric
mean value, of the dose calculated from a given
air concentration of 131I. It should be stressed
that these are calculated percentiles and that
the values for the true distribution may be
different.
Because of the unknown bias in the data and
model structure as well as the inherent impre-
cision in dose estimation, an absolute guarantee
that no individual child will receive a dose in
excess of a prescribed individual dose limit
cannot be given. About one-half of the uncertain-
ty is from the thyroid dose conversion factor
for infants. The refinement of this input in
the dose model is not practical because of the
difficulty in measuring this parameter. Some of
the other uncertainties may be reduced by ob-
taining site-specific information for the input
parameters and considering the various chemical
forms of 131I actually released by a nuclear
facility. The variability within these dose
parameters as indicated by the global literature
will remain unless additional data or more site-
specific data are used in the model. The deter-
mination of an acceptable probability that
calculated doses will not underestimate actual
doses received by a given member of the popula-
tion could ultimately be based on the health risk
associated with a particular range of dose limits
and the potential increase in risk related to
that portion of the estimated dose distribution
which may exceed the dose limit. For example, a
dose calculation to determine compliance with a
300-millirem dose limit may require greater
accuracy than one to show compliance with a 25-
millirem dose limit.
A numerical approach using Monte Carlo
techniques has been used to analyze uncertainties
in environmental model predictions.[3] The re-
sults obtained with this approach, using an
assemblage of normal and lognormal distributions
for each parameter and truncating these distri-
butions at observed maximum and minimum values,
are not significantly different from the results
given in Table 2. The distribution of dose
determined in this manner is illustrated in
Fig. 1.
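The same distribution can also be approximated by
simulation. The sketch below (Python, illustrative
only) draws each parameter from a lognormal dis-
tribution with the Table 2 mean and variance of
its logarithm and multiplies through the chain; it
omits the normal distributions and the truncation
at observed maximum and minimum values used in the
analysis described above, so its output is only
indicative.

    # Monte Carlo sketch: sample each parameter from a lognormal distribution
    # with the Table 2 mean and variance of ln(parameter), multiply through
    # the chain, and read quantiles from the sampled dose-to-air-concentration
    # ratios. (Truncation at observed extremes is not reproduced here.)
    import math
    import random

    random.seed(1)
    params = [(-2.1, 0.002), (1.8, 0.02), (2.7, 0.014), (-0.87, 0.058),
              (-4.6, 0.3), (-1.0, 0.17), (5.7, 0.04), (-4.5, 0.49)]
    k = 86400.0  # sec/day

    ratios = []
    for _ in range(100000):
        value = k
        for m, v in params:
            value *= random.lognormvariate(m, math.sqrt(v))
        ratios.append(value)

    ratios.sort()
    n = len(ratios)
    print("median   D/chi = %.0f" % ratios[n // 2])
    print("84th pct D/chi = %.0f" % ratios[int(0.84 * n)])
    print("99th pct D/chi = %.0f" % ratios[int(0.99 * n)])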
Conclusion
The approaches used herein to propagate un-
certainties in environmental dose models indi-
cate that the variability in dose estimation
may be quite large, even for a select group of
the population. We recognize that quantifica-
tion of the uncertainty associated with environ-
mental pathway dose models is best accomplished
by conducting field tests under the conditions
for which the models were intended. However,
such experiments are difficult and expensive to
perform. In addition, measurement of some in-
put parameters such as the internal dose con-
version factor is impractical. Therefore analyt-
ical and numerical methods to investigate the
inherent propagation of uncertainties resulting
from parameter imprecision will offer the best
alternative approaches in lieu of model valida-
tion. Improvement in these techniques will re-
quire an increase in both the quantity and
quality of relevant data for environmental trans-
fer parameters as well as quantification of
statistical correlations between parameters.
Identification and quantification of site-
specific driving variables such as soil pH,
temperature, agricultural practices, season,
and climate, which may influence the values of
the parameters, should also contribute to a re-
duction of imprecision and an increase in pre-
dictive accuracy.
References

[1] Hoffman, F.O., Baes, C.F. III, Dunning, D.E. Jr., Little, C.A.,
    Miller, C.W., Orton, T.H., Rupp, E.M., Shaeffer, D.L., Shore, R.W.,
    and Fields, D.E. 1979. "A Statistical Analysis of Selected Parameters
    for Predicting Food Chain Transport and Internal Dose of Radionuclides."
    ORNL/NUREG/CR-1004 (ORNL/TM/282).
[2] Shaeffer, D.L. and Hoffman, F.O. 1979. "Uncertainties in Radiological
    Assessments - A Statistical Analysis of Radioiodine Transport via the
    Pasture-Cow-Milk Pathway." Nuclear Technology 45, 99-106.
[3] Schwarz, G. and Hoffman, F.O. (in preparation). "Imprecision of Dose
    Predictions for Radionuclides Released to the Environment: Application
    of a Monte Carlo Simulation Technique." Environment International.
[4] Schubert, J., Brodsky, A., and Tyler, S. 1967. "The Lognormal Function
    as a Stochastic Model of the Distribution of Strontium-90 and Other
    Fission Products in Humans." Health Physics 13, 1187-1204.
[5] Rao, C.R. 1965. Linear Statistical Inference and Its Applications.
    Second Edition. J. Wiley and Sons, Inc., New York.
Table 1. Statistical properties of the parameters used to calculate the dose to a child's thyroid
resulting from a given air concentration (χ) of 131I

Parameter  Units        n        X_p      X_m      X        X_84     X_99     Range of data      NRC
V_D        m³/kg·sec    2        0.12     0.12     0.12     0.13     0.14     0.115 to 0.123     0.039
1/λ_eff    days         10       5.9      6.0      6.1      7.0      8.4      5.5 to 7.2         7.4
Q_F        kg/day       2927*    16       16       16       19       22       6 to 25            12.5
f_s        -            2927*    0.43     0.43     0.43     0.56     0.73     0.1 to 0.8         1.0
F_m        day/liter    20       7.4E-3   1.0E-2   1.2E-2   2.3E-2   4.0E-2   2.7E-3 to 3.5E-2   0.006
f_p        -            11468*   0.40     0.40     0.40     0.62     0.91     0 to 1.0           1.0
U          liter/year   45       290      300      300      370      450      175 to 431         330
D**        mrem/pCi     -        6.8E-3   1.1E-2   1.4E-2   2.2E-2   5.7E-2   -                  0.014

n    - Number of averaged values used in the analysis.
X_p  - Mode or most probable value of estimated distribution.
X_m  - Median or geometric mean of estimated distribution.
X    - Arithmetic mean of estimated distribution.
X_84 - 84th percentile of estimated distribution.
X_99 - 99th percentile of estimated distribution.
*    Herd averages; distribution appears to be normal.
**   Estimated using statistical properties of thyroid mass, thyroid uptake, and retention for
     children ages 0.5 to 2 years.
Ref: Hoffman and Baes (1979).
Table 2. Propagation of uncertainties in the air-grass-cow-milk-child pathway model for 131I

Parameter   μ*        σ²**     Contribution to the total variance (%)
V_D         -2.1      0.002      0.18
1/λ_eff      1.8      0.02       1.8
Q_F          2.7      0.014      1.3
f_s         -0.87     0.058      5.2
F_m         -4.6      0.3       27
f_p         -1.0      0.17      16
U            5.7      0.04       3.7
D           -4.5      0.49      45
D/χ***       8.5****  1.1      100

D/χ*** (mrem·m³/pCi·yr):  X_p = 1600   X_m = 4900   X = 8500   X_84 = 14000   X_99 = 51000

*    Mean of log-transformed data.
**   Variance of log-transformed data.
***  Annual thyroid dose resulting from 1 pCi per m³ of 131I in air.
**** Calculated by adding (ln 86400 sec/day) to Σμ (-2.9).
Ref: Hoffman and Baes (1979).
Fig. 1. Dose to air concentration ratio (D/χ) for infants from ingestion of
131I via the pasture-cow-milk pathway: frequency distribution of D/χ
(rem/yr per pCi/m³).
DETECTION OF CHANGES IN ENVIRONMENTAL LEVELS DUE TO NUCLEAR POWER PLANTS
G.G. Eichholz (Georgia Tech), A.E. Desrosiers (BNWL), B. Kahn (Georgia Tech), A. Strong
(USEPA), C.L. Wakamo (USEPA), W.H. Wilkie (TVA), E.F. Williams (S.C. Department of Health
and Environmental Control)
This committee report reviews the sources for sudden and slow variations in
detectable radiation levels in environmental samples from nuclear power plants.
The statistical nature of the operational and preoperational data makes it
important to determine their accuracy before evaluation of any apparent devia-
tions. Recommendations are presented for improved identification of tran-
sient events and for procedural developments.
(Background radiation; counting statistics; environmental monitoring; fallout;
low-level detection; power plant effluents)
Nature of the Problem
Nuclear power plants are required to monitor
liquid and gaseous effluents and radioactive con-
tamination in air, water, soil, vegetation, food
stuffs (milk) and selected animals. Accidental
releases of radionuclides should show up in the
effluent monitors and, depending on the pathways
involved, may appear in environmental samples.
Since the levels of radioactivity involved
are very low, small fluctuations in background,
for whatever reason, may mask any increase in
levels due to releases from the power plant.
Such fluctuations may be internal in nature, e.g.
due to counting statistics, sampling procedures,
radiochemical procedures or fluctuations in meteo-
rological conditions, or external, due to varia-
tions in cosmic ray background, weapons test fall-
out or to emissions from other power plants,
whether nuclear or coal-fired, or users of radio-
nuclides in the same general area.
Unexplained fluctuations in readings on
environmental samples have occurred on many occa-
sions. The problem is how to reduce such occur-
rences or, alternatively, how to identify more
effectively the cause of these fluctuations. In
particular, it is important to determine if
operational survey results should be compared
with preoperational data or with contemporary
data obtained at control locations well off site.
Many of the fluctuations observed require care-
ful statistical analysis for proper evaluation,
but the depth of such evaluations may be limited
by cost and manpower considerations.
Regulatory Background
The magnitude of the radiological impact
has been described in the "GESMO Report" (NUREG-
0002), ORNL-5315 and EPA 520/1-77-009. In many
cases these make conservative assumptions, in
effect overestimating levels, so that measured
concentrations may be expected to lie well below
them. Environmental monitoring programs for
nuclear power plants are typically entrenched in
the requirements for the operating license (Tech
Specs) to assure compliance with Appendix I of
10CFR50 and NRC Reg. Guide 1.21. There is a
great deal of variation in monitoring programs
from plant to plant and attempts are being made
to introduce greater flexibility, without the
rather cumbersome amendment process. Reg. Guides
4.1 and 4.8 outline recommended surveillance pro-
grams and (ideal) detection sensitivities and
Reg. Guide 4.15 describes the program design for
quality assurance of such programs. Being
generic and rather general in nature those guide-
lines do not address themselves fully to the pro-
blems posed to this Committee, namely how to dis-
tinguish between internal and external variations
in environmental samples and how to identify a
specific source of origin. Furthermore, at most
plants, collection and reporting of radiation data
are mandated by Tech. Specs., but their analysis
is not.
Specific Causes of Variations
Transient Release of Liquid or Airborne Radio-
nuclides by the Nuclear Power Plants
Transient release to air or water may be due
to failure of plant equipment, human error,
planned waste discharges or certain accidents
affecting plant systems integrity. One con-
sequence may be the undetected appearance of
short-lived nuclides.
The detection and evaluation of the conse-
quences of such events is the principal reason
for operating environmental surveillance pro-
grams. The release may be sudden and brief or
may lead to continued, slowly changing concentra-
tion levels. In general, many such events should
be immediately evident on the effluent monitors
and trigger implementation of both precautionary
measures and, if the release is large enough,
immediate intensification of the environmental
sampling program. However, this may not be the
case for samples that are monitored only period-
ically, such as iodine-131 in air, particulates
in off-gases, or tritium in air or water.
A problem arises if the release is fairly
small and so transient that it escapes detection
by the effluent monitoring system due to the time
constants employed. This is one case where it is
essential that the environmental surveillance
system must respond adequately, allowing for
natural dispersion times and processes. Occasion-
ally, at some nuclear facilities, a short, in-
tense release of activity may overwhelm the
effluent monitors, making a quantitative assess-
ment difficult. Another common situation arises
when not all release points are monitored, so that
some pathways or effluents are not accounted for
properly.
Fallout from Atmospheric Nuclear Weapons Tests
The Chinese and, earlier, the French atmo-
spheric weapons tests led to measurable increases
in environmental levels even on the fourth or
fifth passage around the globe. Residual effects
in crops and water were observed for weeks after-
ward, and some long-lived tracers from much earlier
tests still linger in soil and vegetation. Time-
ly notification helps in identifying causes of
observed increases and in planning for the col-
lection of additional samples. Principal effects
arise from iodine-131 in milk, strontium-89 in
vegetation, and some shorter-lived fission prod-
ucts in air filters and marine organisms. TLD
dosimeters may be affected, and their evaluation
would also require timely information on fallout
levels in surrounding areas.
Effects of Effluents from Coal-Fired Stations
There is increasing awareness of the pres-
ence of uranium and thorium and their daughter
radionuclides in fly ash and in the airborne
particulates emitted by coal-fired stations. In
certain localities, under particular meteorolog-
ical conditions, this excess activity may signifi-
cantly increase the radioactivity levels (gross
beta or gamma) monitored near nuclear power
plants. After some time, plant operators may
learn to allow for this, and in practice addition-
al measurements may have to be incorporated in the
surveillance programs, to permit identification.
Technologically Enhanced Airborne Concentrations
from Industrial and Medical Activities
In certain areas increases in ambient radio-
activity levels may arise from mining activities
involving minerals, such as shale, phosphates, or
granites, that may lead to a significant increase
in radon and radon daughter levels in the air.
In construction, earthmoving operations may
cause significant increases in airborne radio-
activity detected in filter samples, mainly due
to radon daughters. High dust levels may in-
crease airborne concentrations of Be-7 by resus-
pension. Air monitoring stations in heavily traf-
ficked areas may be particularly susceptible.
Similar increases may arise from the use of phos-
phate fertilizers in dry form. Effluents from
hospitals, radiopharmaceutical plants, research
laboratories, and other radionuclide users may
introduce substantial and erratic levels of fre-
quently short-lived radioactivity into the
environment.
Variations in Natural Background Radiation Levels
As detectors of increasingly greater sensi-
tivity are employed, variations in natural radia-
tion background form the largest contribution to
the measurement uncertainty and hence affect the
ability to recognize a small but statistically
significant increase in observed environmental
activity. Short-term variations
in cosmic ray intensity may arise during solar
flares and other geophysical disturbances affect-
ing the upper atmosphere and may be particular-
ly noticeable at higher altitudes. Radon levels
in the atmosphere will vary with meteorological
conditions, precipitation, and snow cover. Pre-
operational surveys should establish "typical"
seasonal fluctuations in atmospheric radon levels
and the magnitude of daily fluctuations, both for
air samples and in well water. However, addi-
tional short-term variations may arise from a
multiplicity of causes.
Variations Due to Improper Radiochemical Proce-
dures and Calibrations
In several instances sudden increases in
activity in environmental samples have been re-
ported that, on investigation, were found to
arise from sudden and ill-documented changes in
radiochemical extraction procedures or changes in
standards. While these changes presumably re-
sulted in more accurate results, the apparent
changes in reported activity were often left un-
explained and hence, in the absence of any expla-
nation, had to be assumed to be due to plant
operations. Such occurrences can be minimized by
a critical review of reported data and improved
quality control programs. Another possible
source of occurrences is unsuspected contamina-
tion of facilities, samplers, or reference
samples.
Instrumental Fluctuations
A major contribution to variations in re-
corded data results from systematic fluctuations
in detector response. In the case of TLD's this
may be related to variable dark current, changes
in thermoluminescence, and reader performance.
There are also variations in the detector "con-
stant," the inherent difference in response rate
of individual dosimeters. With other detectors it
may depend on variations in humidity, temperature,
supply voltage or the movement of calibration
sources, without the technician doing the work
being aware of these effects. These effects can
be minimized by frequent recalibrations and back-
ground counts but in many cases the demands on
equipment and staff time impose practical limita-
tions on their frequency. Ten to twenty-five
percent of operating time has been suggested as
being appropriate for this purpose. A further
means of ensuring more consistent detector opera-
tion is through better understanding of detector
parameters by the user.
Sampling Variations
Variations in output data are frequently en-
countered due to nonrepresentativeness of the
samples, inconsistencies in the method of sampling
from different sources, effects of weather and
seasonal conditions on the nature of the sample
and incompleteness of the monitoring plan, such
as may occur if not all effluent pathways are
monitored and recorded. Care must be taken to
avoid cross contamination and to employ consis-
tent procedures for sample collection.
Limits in Detection Sensitivity
Some determinations, particularly for iodine
in milk, are at the limit of practical detector
sensitivity for routine monitoring programs and
should be recognized as being potentially unreli-
able. As a result, excessively large counting
times may be required for adequate statistical
precision; failing this, rather large variations
may be expected in the results for comparable
samples. Instrument stability also imposes prac-
tical limits on counting time, as do diurnal
variations in background. There is then a trade-
off between the number of comparable samples that
can be analyzed and the consistency of results.
Thus some uncertainty may exist as to whether a
transient increase in observed activity was real
and deserves further investigation.
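The trade-off can be made concrete with a simple
counting-statistics estimate. The sketch below
(Python, with hypothetical count rates) computes
the counting time needed to reach a target rela-
tive standard deviation of the net count rate,
assuming Poisson statistics and equal sample and
background counting times; it ignores instrument
drift and background variations, which, as noted
above, impose additional practical limits.

    # Counting-time estimate for a sample barely above background, assuming
    # Poisson counting statistics and equal sample and background counting
    # times. Count rates below are hypothetical.
    def required_counting_time(gross_cpm, background_cpm, target_rel_sd):
        net = gross_cpm - background_cpm
        if net <= 0:
            raise ValueError("no net activity above background")
        # variance of the net rate = gross_cpm/t + background_cpm/t
        return (gross_cpm + background_cpm) / (target_rel_sd * net) ** 2

    # e.g., 2.4 cpm gross vs. 2.0 cpm background, 10 percent precision goal
    minutes = required_counting_time(2.4, 2.0, 0.10)
    print("%.0f minutes (about %.0f hours)" % (minutes, minutes / 60))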
Recommended Technical Solutions
The above sources of variations are clearly
not of equal importance and some of them may be
resolved by changes in equipment or technical pro-
cedures, while others call for additions or modi-
fications in administrative or regulatory proce-
dures. However, the basic problem is technical
in nature and changes in regulatory procedures
can merely change the degree of emphasis. Much
information related to this can be found in NCRP
Report 45. Any proposed solutions must address
themselves to two different aspects:
One concerns the positive identification of
any observed change in measured environmental
radioactivity as being statistically significant,
i.e., "real" in that sense. This should then be
followed by assigning it to a specific cause if
possible. Since most of the effects to be ob-
served are expected to be small in amplitude, any
statistical evaluation depends on the standard
comparison value chosen, which may be based on
preoperational surveys or on simultaneous monitor-
ing results elsewhere in the region.
The other aspect is procedural in nature,
by attempting to improve detectability of small
events by improved methods of sampling, of
analysis, or by organizational measures. Clear-
ly no one approach will adequately cover the
various situations described in the preceding
section.
Identification of Transients
It is generally agreed that the concept of
a detectable "increase" in measured ambient
activity is meaningful only if one can answer the
question: Increase over what? This implies that
there is a statistically valid difference in
measured level between that determined at the
time and place of interest and some previously
established "background" level. This background
may have been determined either during the pre-
operational survey, preferably over a 2-3 year
period, or it may represent the average of a
number of readings obtained during operations
off-site, upwind or upstream.
Ideally these background readings should be
analyzed for daily and seasonal periodicities to
permit extrapolation for future measurements.
Optimum use of this information requires develop-
ment of an accessible computer program capable
of evaluating environmental measurements. The
statistical nature of the data available must be
examined carefully and their use in providing
more reliable information should be analyzed. A
sufficient data base must be established to per-
mit prediction of background levels during opera-
tion. The control locations themselves must be
validated to exclude any reference location with
normally high background levels from any continu-
ing cause.
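A minimal form of such an evaluation is sketched
below (Python, with hypothetical readings): the
background mean and standard deviation are estab-
lished from preoperational or control-station
data, and an operational indicator reading is
flagged only if it exceeds the background mean by
more than a chosen multiple of that standard
deviation. A validated data base, adjusted for
the periodicities discussed above, would replace
the simple list used here.

    # Flag an indicator-station reading that is statistically high relative
    # to an established background data set. Readings are hypothetical.
    import statistics

    def flag_increase(background_readings, new_reading, n_sigma=3.0):
        mean = statistics.mean(background_readings)
        sd = statistics.stdev(background_readings)
        return new_reading > mean + n_sigma * sd, mean, sd

    background = [8.1, 7.6, 8.4, 9.0, 7.9, 8.3, 8.8, 7.7]  # e.g., monthly gross beta
    flagged, mean, sd = flag_increase(background, new_reading=10.9)
    print("background %.2f +/- %.2f; flagged: %s" % (mean, sd, flagged))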
Data Analysis
Mere collection of environmental data with-
out careful scrutiny is certainly not in the
spirit of the law. In many cases more detailed
analysis of reported data than currently employ-
ed is highly desirable.
To identify the causes of specific events, com-
parison with a predictive model should be done to
account for seasonal variations, precipitation
effects on radon levels, fallout increases in
spring, etc.
Development of a proper statistical data
base then facilitates evaluation of measurements
with their standard deviation in comparison with
preoperational data and control locations.
Analysis of current data should also be
used to establish the validity of the predictive
model and to adjust its data base if appropriate.
Development of a Predictive Model
The above suggestions lead naturally to a
proposal that the industry obtain a predictive
model for their site that will produce data on
background variations, probable effects of weap-
ons fallout for specific samples, seasonal
effects, sample locations and radionuclides.
This undoubtedly will require some new develop-
ment work as present predictive models are not
sufficiently advanced.
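One simple starting point is a harmonic fit to the
preoperational record. The sketch below (Python,
with hypothetical monthly readings) fits a con-
stant plus an annual sine and cosine term to
evenly spaced monthly background data covering
whole years and uses the fit to predict the ex-
pected background for a later month; a site model
of the kind proposed here would add further terms
for fallout, precipitation, and other driving
variables.

    # Fit a constant plus an annual cycle to evenly spaced monthly background
    # readings (whole years only) and predict a later month. Data hypothetical.
    import math

    def fit_annual_cycle(monthly):
        n = len(monthly)
        a0 = sum(monthly) / n
        a1 = 2.0 / n * sum(y * math.cos(2 * math.pi * i / 12)
                           for i, y in enumerate(monthly))
        b1 = 2.0 / n * sum(y * math.sin(2 * math.pi * i / 12)
                           for i, y in enumerate(monthly))
        return a0, a1, b1

    def predict(month, a0, a1, b1):
        return (a0 + a1 * math.cos(2 * math.pi * month / 12)
                   + b1 * math.sin(2 * math.pi * month / 12))

    # two years of hypothetical monthly background readings
    data = [9.5, 9.0, 8.2, 7.4, 6.8, 6.5, 6.7, 7.2, 8.0, 8.8, 9.4, 9.7] * 2
    a0, a1, b1 = fit_annual_cycle(data)
    print("predicted background for month 27: %.2f" % predict(27, a0, a1, b1))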
Pathway analysis in most cases will indicate
that only about 13 radionuclides participate in
critical pathways, depending on location. Only
four radionuclides or their isotopic ratios may
be required for initial discrimination between
fallout events and plant-related transients.
Examples of these are cobalt 58/60, cesium 134/
137, neptunium-239 and strontium 89/90.
Detector Response to Transients . '
It is recommended that plant operators
examine their effluent monitors for their sensi-
tivity to short-term transients. In some cases
intensified monitoring may be required, espe-
cially for drinking water, to check unmonitored
pathways, such as floor washings draining by
surface runoff. For airborne pathways the pres-
surized ion chamber is the most satisfactory de-
tector, but it is dominated by noble gases in
effluents, so that other constituents cannot be
distinguished. If the transient is due to de-
fective operation of filters, frequent sampling
of fenceline particulate detectors may be ade-
quate.
Since periodic "grab" samples may miss short-
lived transients, continuous monitoring leading
to a composite weekly or monthly sample is pre-
ferred.
Procedural Recommendations
Some of the above suggestions require adjust-
ments in the way surveillance programs are con-
ducted and evaluated. Several of them involve
cooperation between plant operators, contractors
and local and federal agencies. Among these the
following changes or additions in existing pro-
cedures are recommended.
Validation of Control Locations
Most surveillance plans use "control loca-
tions" to provide reference data with which plant-
related effluent measurements can be compared.
These locations typically are upstream for water
samples and in locations ringing the plant in
widening circles for airborne samples and well
water. Occasionally such locations may be
anomalous themselves, by being located on gran-
ite-bearing rock or other sites with high radon
levels, or by being affected by industrial
sources. Unless such anomalies are recognized,
these samples may skew the apparent average
values used for baseline purposes. For this
reason it is important
to validate all control locations in the course
of the pre-operational program.
This implies that comparison of monitoring
data be conducted between indicator stations and
control stations during the operational program.
Consideration must be given to whether the pre-
operational data set was sufficiently large to
provide mean and standard deviation values for
validating the control stations used in the
operational monitoring. Some emphasis should be
placed on increased use of pressurized ioniza-
tion chamber measurements in the selection of
preoperational sampling locations and more
attention to control station selection during
the preoperational program. If the sampling
period during preoperational monitoring is not
representative, because of abnormal conditions,
the period may require extension until repre-
sentative samples can be obtained.
It is also clear that initial selection of
control sites requires more care than has often
been devoted to this matter and should be sub-
ject to subsequent validation.
Follow-up Procedures
Procedures should be developed to analyze
environmental data when they are reported and
to follow up any apparently significant changes,
to compare them with measurements at control
stations, with any other plant or government
site reporting data on a regular basis, and with
the preoperational levels.
The statistical validity of any deviation
should be established and, as far as possible,
any analytical errors or procedural mistakes
should be noted to remove their consequences
from the list of "significant" transients. What
constitutes a "sufficient" data base has to be
assessed in each specific case.
Any changes in sampling or analytical pro-
cedures should be identified that may result in
a new range of standard deviations.
To avoid excessive additional costs the
extra effort involved in these follow-up proce-
dures should be evaluated for cost-effectiveness
from time to time and new or continued sample
collection should be justified.
Documentation
To maintain continuity and to assist in the
follow-up described above it is important that
records be kept of any procedural or instrumental
changes that may affect results. This would in-
clude changes in TLD readers, changes in sampling
procedures, analytical procedures or methods of
sample compositing.
Information on Fallout from Other Transients
There is a clear need to improve the flow
of information on weapons fallout and other re-
leases from government agencies to the industry.
Moves are underway for EPA to coordinate informa-
tion flow from federal agencies to the states.
However, we believe it would still be desirable
to have a non-government group, perhaps under
the auspices of the Health Physics Society or a
National Laboratory, to ensure rapid dissemina-
tion of any such data to plant operators and
environmental contractors.
Such a committee may also assist in publi-
cizing and standardizing improvements in ana-
lytical techniques, cross calibrations of equip-
ment and in developing sampling standards.
Data Presentation
To simplify the recognition of significant
trends it is recommended that environmental mea-
surements be presented, as much as possible, both
in graphical form and in a computer-compatible
format for continuing evaluation.
Uniformity of Evaluation
At present intercomparison between different
plants and recognition of common events is diffi-
cult because of variations in the reporting for-
mat and in the evaluation procedure. It is re-
commended that the industry adopt uniform methods
of data evaluation and report presentation; these
could perhaps be developed through the coordinat-
ing committee (above).
Suggested Administrative Solutions
Flexibility in Tech Specs.
Present Tech Specs in many cases make it dif-
ficult for the operator to adopt critical path-
way programs or to eliminate unneeded samples to
concentrate on significant ones. In this con-
nection Reg. Guides must recognize differences
between ideal laboratory conditions and routine
commercial operations. It is understood that
some expected modifications in NRC Regulations
may introduce greater flexibility.
Reporting on Environmental Radiation Levels
Central dissemination of environmental levels
by a Government agency (perhaps EPA) would be
most desirable, perhaps through a revival of the
Radiation Data Reports. Some information of that
type is distributed now, quarterly, by the USEPA
Eastern Environmental Radiation Facility.
Emission from Industrial and Medical Radionuclide
Sources
The significance of emissions from hospitals,
manufacturers, coal-fired stations, etc. as well
as sources of natural radioactivity should be
evaluated to determine if the emissions from
these facilities can significantly affect the
levels of radioactivity in the environment of
nuclear power plants. If it is determined there
is significant potential for changes in measured
levels, then the methods of analysis used for the
nuclear power plant programs should be upgraded
to make them more nuclide-specific. This then
would allow for the differentiation between
nuclides released from the nuclear power plant
and those released from other nearby sources.
Coordination of Milk Sampling and Analysis
Some variations in reported data for milk
may reflect dispersed responsibilities for sample
collection and assaying. Some administrative
action may be desirable to improve uniformity and
coordination.
Certification of Analytical Laboratories
Periodic certification of analytical labora-
tories, including auditing of environmental moni-
toring records, by an appropriate agency may be
desirable to assure quality and consistency of
performance. This would be distinct from NRC
inspections that focus mainly on compliance with
regulations.
Short-Lived Radionuclides
The role of short-lived radionuclides not
usually monitored or specified in surveillance
programs should be examined as they may well
account for unexpected peak counts that cannot
be verified later.
Quality Assurance Program
Procedures should be developed for compre-
hensive quality assurance programs in the
management of releases and the operation of
effluent monitors in accordance with Reg. Guide
4.15.
Training of Personnel and Upgrading of Detection
Procedures
Personnel must be trained to recognize in-
strumental malfunctions and to pay attention to
detail in detector performance. This may involve
computer indexing of individual TLD characteris-
tics, more
frequent background checks and attention to low-
level laboratory contamination and to source
movement and storage. Upgrading of facilities at
some plants would minimize errors.
Conclusions
Specific recommendations of this subcommit-
tee are summarized as follows:
1. All environmental monitoring data (in-
cluding preoperational data) should re-
side in an easily accessible computer
data base.
2. Two or more years of preoperational data
should be analyzed by sample type and
location:
a. To determine predictable periodici-
ties (seasonal, diurnal, etc.)
b. To identify any anomalies associated
with specific sample types or loca-
tions
c. To establish the validity of pro-
posed control and indicator stations
to be compared during plant opera-
tion
d. To define expected statistical
characteristics of the data
e. To determine any interference from
radionuclides released by other
facilities
3. Operational environmental monitoring
data from indicator stations should be
compared with similar data from control
stations. These analyses should in-
clude:
a. Comparison of current individual
indicator and control data points
using previously established stan-
dard deviations and cumulative
averages from previous information
to date (adjusted for predictable
periodicity)
b. Graphical presentation of current
and previous information
4. Establish one or more groups:
a. To develop uniform methods of data
evaluation, presentation, and com-
puter storage
b. To provide central dissemination of
environmental levels
c. To develop procedures for the
selection of control monitoring
locations
d. To ensure prompt notification of
operators and analytical laboratories
of regional or global events, such
as weapons testing, that could affect
environmental monitoring results
e. To assist in improving and standard-
izing analytical techniques and
sampling procedures
f. To coordinate periodic cross cali-
brations among interested organiza-
tions
g. To perform quality assurance audits
of individual laboratories and their
reported data and to provide peri-
odic certification of these analyti-
cal laboratories
h. To develop a generic statistical
model to aid in prediction of
changes in environmental radiation
levels.
5. Establish comprehensive quality assur-
ance programs in accordance with NRC
Regulatory Guide 4.15.
QUALITY ASSURANCE FOR ENVIRONMENTAL MONITORING PROGRAMS
C.G. Sanderson (Environmental Measurements Laboratory, USDOE, New York, NY), L.K. Cohan
(Nuclear Regulatory Commission, Washington, DC), A. Goldin (Office of Radiation Programs,
Washington, DC), A.N. Jarvis (National Environmental Research Center, Las Vegas, NV), L.
Kanipe (Tennessee Valley Authority, Muscle Shoals, AL), C. Sill (Health Services Laboratory,
USDOE, Idaho Falls, ID), M. Trautman (Eberline Instrument Corporation, West Chicago, IL),
and B. Kahn (Georgia Institute of Technology, Atlanta, GA)
Quality Assurance (QA) is the summation of all programmed events required to en-
sure that data being generated by a laboratory are as meaningful as possible.
A comprehensive QA program must begin with management's commitment to quality re-
sults and continue through site selection, sampling, analysis, and data reduc-
tion until a final report is issued. This report is intended to be a guide for
the development of a complete QA program. Detailed information relating to the
many aspects of QA can be found in the references cited.
(Analysis; calibration; data reduction; data reporting; personnel; quality
assurance; quality control; records; sampling; standards)
Introduction
Quality Assurance (QA) is the summation of
all programmed events imposed internally and ex-
ternally in order to ensure that data being gener-
ated by a laboratory are as meaningful as possi-
ble. Quality Control (QC) procedures, which are
generally considered to be task specific, are a
very important aspect of a comprehensive QA pro-
gram but individually are not a guarantee of
quality. A complete program must also take into
account management's responsibility to provide an
organizational structure and budget so that QC can
be routinely implemented, administered and review-
ed. Securing qualified personnel, providing
training and supervision are as important as
written QC procedures and record keeping. Before
samples are collected or analyses performed, pro-
cedures must be developed, tested, and adopted to
ensure not only the validity of final results but
also the integrity of the original sample. Site
selection, sample size, storage and preparation
are just as important as the calibration of elec-
tronic counting equipment. The use of computers
in the laboratory for process control, data reduc-
tion and report preparation has greatly relieved
the burden of routine operations but has at the
same time created new areas for QC. The goal of
any QA program is to maintain the quality of re-
sults within established limits of acceptance.
Quality assurance is not complete if it only de-
tects substandard results while not providing pro-
cedures for remedial action.
It is the purpose of this document to pro-
vide a guide for the development of a compre-
hensive QA program. In most instances the reader
will be directed to publications which provide
adequate detailed information. Areas which are
not adequately discussed in the literature will
be covered here.[1-5]
Organization and Responsibilities
The first and most important aspect of a QA
program is management's commitment to produce
quality results. Once this premise has been
accepted, a QA program can be established along
the following steps published by K.R. Wilcox:[6]
1. Set objectives and policies to determine the
quality of work needed.
2. Establish methods to evaluate the levels of
accuracy and consistency achieved, and wheth-
er established standards are met.
3. Establish methods of initiating corrective
measures if unacceptable accuracy is dis-
covered.
4. Be certain that personnel are competent to
perform the tasks required in the laboratory,
including the quality control measures.
5. Institute the methods and evaluative tools
with which personnel may determine whether or
not they are performing properly. This in-
cludes training the personnel so they know
the significance of the measures adopted and
the application of the results to their own
work.
6. Provide proper space, equipment, and materi-
als for personnel to carry out the organiza-
tion's missions.
7. Develop specific quality assurance objectives
for the bench worker, and make it the work-
er's responsibility to meet these objectives.
This step should be as much a part of one's
work goals as keeping up with the workload.
The worker should know what is expected of
him.
'8. Develop a monitoring system for periodic re-
view so the supervisors and director know
whether or not the system is performing as,.; «
designed.
Finally, individuals responsible for QA must
be given the authority to initiate, recommend, or provide solutions and to verify the implementation of corrective actions when required. The amount of effort required to maintain a desired degree
of quality depends upon: the types and importance
of samples being analyzed at a particular time;
the amount of method development or method evalua-
tion being performed; and the laboratory's past
history and demonstrated competence. Although no single percentage will be appropriate in all situations, 10% to 20% of a laboratory's total effort
is generally considered to be adequate and neces-
sary for QA.
Personnel Qualifications
Quality data can only be produced if the in-
dividuals performing the analyses are qualified.
Education and past experience are not satisfacto-
ry indicators of proficiency. Every newly assigned
function should be preceded by an appropriate
training program. Training should be performed
by or under the direction of qualified supervisory
personnel. Newly trained individuals should be
certified as qualified upon the successful analy-
sis of QC samples. Training and certification of
personnel should be documented and recorded.
A number of government and academic institutions provide training for individuals or groups in radiochemical methodology, counting, and environmental monitoring [7-9].
Operating Procedures
Written procedures that have been approved
should become part of a permanent library subject
to annual review. These written procedures should include all of the program's activities, including sample site selection; sample collection; packaging, shipment, and receipt of samples for off-site analysis; preparation and analysis of samples;
maintenance, storage, and use of radioactive re-
ference standards; calibration methods, and in-
strument QC; and collection, reduction, evaluation,
and reporting of data. In addition, procedures
should be prepared which clearly outline the pro-
per manner of record keeping.
Records
The records necessary to document all activi-
ties of a QA program should be specified. One key
aspect of a QA program is maintaining the ability
to follow a sample from selection to data reporting.
Records to accomplish this should cover the
following processes: field sample collection and
sample description; sample receipt and laboratory
identification coding; sample preparation and
radiochemical processing method; radioactivity measurements (counting) of samples, instrument backgrounds, and analytical blanks; and data reduction and verification. The individuals responsible for these various functions should be identified on these records.
Quality control records for laboratory count-
ing systems should include the results of measure-
ments of radioactive check sources, calibration
sources, backgrounds, and blanks, as well as a
complete record of all routine maintenance and
service.
Records relating to overall laboratory per-
formance should include the results of analysis
of quality control samples such as analytical
blanks, duplicates, interlaboratory cross-check
samples and other quality control analyses; use
of standard (radioactive) reference materials to
prepare working standards; preparation and stan-
dardization of carrier solutions; and calibration
of analytical balances. In addition, permanent files should be maintained on personnel qualifications and the results of audits, including recommended remedial actions.
Sampling Procedures
Environmental radiation monitoring frequent-
ly requires the collection of water and air sam-
ples. Therefore, instruments used to measure the
sample volume must also be incorporated into QC
procedures so that accurate sample collection can
be assured. Data pertaining to the calibration
and performance of these instruments must be re-
corded and retained with the previously mentioned
records. The importance of establishing and main-
taining sample integrity cannot be overemphasized.
If the sample is not representative of its envi-
ronment the resulting data, regardless of the
accuracy of the analytical measurement, will be
meaningless. Written sampling and shipping pro-
cedures should specify containers and include any
required sample treatment. Special attention
should be given to liquid samples which might de-
posit some of their radioactivity on container
walls. Sampling procedures should indicate the number, quantity, and location of the samples to be obtained. Photographic
and/or written documentation of the sampling
should become an integral part of that sample's
history.
Radioanalytical Procedures
Radioanalytical QC usually accounts for the largest portion of a total QA program. Undoubted-
ly, this is where the greatest QA effort should be
because the sample is exposed to the greatest num-
ber of possible sources of error in the laboratory.
Laboratory QC, therefore, must cover the following
general areas of laboratory practice:
A. Analytical Methodology
B. Counting Instrumentation
C. Data Reduction
Analytical Methodology
Standard methodology used by all laboratories, for all radionuclides in all sample matrices might be ideal, but is unrealistic. However, even accepted standard methods of analysis must be sub-
jected to rigorous testing of the worst probable
sample matrix before being routinely implemented
in a particular laboratory. Without proper con-
trol of reagent material and other areas of possi-
ble sample contamination by the use of blanks and
blank data control charts, the best analytical
methods will not yield the desired results. If
extremely sensitive counting techniques are being
used, contamination from ambient air supplies must
be considered, e.g., carbon-14, radon daughters,
thoron daughters, etc. Continuous laboratory mon-
itoring can be accomplished by the introduction of
known samples into the routine sample inventory.
Blind replicates on samples known to be homoge-
neous can be used conveniently to check the re-
producibility of both the procedure and analyst.
Blanks can be used to check on contamination. The
best way to demonstrate the overall accuracy of
the procedure is by analysis of a standard refer-
ence material. A valid reference material con-
tains a known quantity of the element being deter-
mined, is known to be homogeneous, and contains
the most likely "real world" problems needing to
be checked. Most emphatically, neither replica-
tion nor intercomparison with other laboratories,
no matter how good the agreement, is conclusive
proof of accuracy. Intercomparison with others
is a valuable method of QC when the procedures em-
ployed are markedly different, employ direct
measurements without involved sample pretreatment,
or when the laboratory to which the comparison is
being made is acknowledged to have expertise in
the particular field being treated.
The following table lists a number of items
that should be considered and adapted to meet a
laboratory's analytical QC commitment.
TABLE I
ANALYTICAL METHODOLOGY QUALITY CONTROL

Practice                                               Reference
1. Separation of function and levels of activity      14
2. Reagent quality                                     14
3. Low-level laboratory air supply                     14
4. Laboratory contamination                            14
5. Sample decontamination                              13
6. Reagent and sample blank control charts             13
7. Standard reference materials                        15-19
8. Natural matrix materials                            18-20
9. Blind replicates                                    21
10. Spiked samples and blanks                          15,22
11. Laboratory intercomparisons                        23,24
12. Calibration standards                              25,32
13. Purification and standardization of standards      17
Counting Instrumentation
In order to ensure that counting equipment remains in calibration and functions properly, timely QC procedures must be followed. These pro-
cedures should begin with a complete and thorough
documentation of a new system's characteristics.
The system's responses to various input conditions
should be recorded for future reference. In
general, counting equipment operation should be checked daily and, at regular intervals, calibrated and monitored for background variations. Non-
spectrometric counting procedures should include
steps of sufficient specificity to verify sample
purity, such as decay or absorber measurements.
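As an illustration of the kind of daily check that the control-chart references [13,34] describe, the following sketch (not part of the report; the function name, the counts, and the three-standard-deviation criterion are hypothetical examples) tests whether a day's check-source count falls within limits derived from earlier counts.

    # Illustrative sketch only: a basic control-chart test for a daily
    # check-source count. All names and numbers are hypothetical.
    def check_source_within_limits(todays_count, historical_counts, k=3.0):
        """Return True if today's check-source count lies within +/- k sample
        standard deviations of the mean of the historical counts."""
        n = len(historical_counts)
        mean = sum(historical_counts) / n
        variance = sum((c - mean) ** 2 for c in historical_counts) / (n - 1)
        return abs(todays_count - mean) <= k * variance ** 0.5

    # Hypothetical example: 30 prior daily counts of a check source.
    history = [10250, 10310, 10180, 10275, 10295] * 6
    print(check_source_within_limits(10290, history))   # True  -> in control
    print(check_source_within_limits(10900, history))   # False -> investigate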
The following table outlines a number of re-
quired features for a valid QC program.
TABLE II
COUNTING EQUIPMENT QUALITY CONTROL

Practice                                                      Reference
1. Documentation of new instrument response characteristics  33
2. Daily stability checks and control charts                 13,34
3. Periodic background and control chart                     35
4. Periodic calibration and control chart                    36
5. Primary calibration standards                             37
Data Reduction and Reporting
Data reduction and reporting of final results
require as much attention and control as any of
the previous steps in the analytical procedure. The value of any measurement is only as good as the uncertainty associated with it.
Without some indication of the precision and
accuracy with which the measurement was made, the
user cannot make a valid interpretation of the results. In fact, with results near the detection limit, as is the case with many environmental measurements, the uncertainty is frequently more important than the result itself, because it is the uncertainty that tells the user just how accurately the measurement was made. Consequently, every
measured value must include an uncertainty due to
all significant sources of inaccuracy involved.
This does not refer to just the counting errors of
sample and background as is frequently practiced,
but must include every significant uncertainty in-
curred anywhere in the entire measurement process
if it will affect the final result within the num-
ber of significant figures retained. Particular-
ly, uncertainties in the activity of the radio-
nuclide being determined, in the activity of the
tracer recovered, in the activity of the tracer
added, in counting times and efficiencies, in
chemical yields, in backgrounds and reagent blanks,
in sample volumes, etc. must all be evaluated and
the significant ones propagated to the final result [39]. It must be emphasized that the statistical analysis of replicate determinations provides only an indication of the random variability. A complete uncertainty assessment should
also include the analysts' best estimate of any
systematic biases. Reported values should include,
in addition to the final result, a propagated to-
tal random uncertainty expressed as the standard
deviation and an estimated overall uncertainty
that combines all of the significant sources of inaccuracy [39].
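To make the propagation step concrete, the short sketch below (not part of the report) combines the Poisson counting term with hypothetical relative uncertainties in efficiency, chemical yield, and sample volume by simple quadrature, in the spirit of the error-propagation references [39-41]; the function name and all numerical values are invented for illustration.

    # Illustrative sketch only: propagating the major random uncertainties of a
    # single radiochemical measurement to the final result. Values are hypothetical.
    import math

    def propagated_activity(gross, t_g, bkg, t_b, eff, u_eff, yld, u_yld, vol, u_vol):
        """Return (activity, standard deviation) for
        activity = net count rate / (efficiency * chemical yield * volume)."""
        net_rate = gross / t_g - bkg / t_b                  # counts per second
        u_net = math.sqrt(gross / t_g**2 + bkg / t_b**2)    # Poisson counting term
        activity = net_rate / (eff * yld * vol)             # e.g. Bq per litre
        rel = math.sqrt((u_net / net_rate)**2 + (u_eff / eff)**2 +
                        (u_yld / yld)**2 + (u_vol / vol)**2)
        return activity, activity * rel

    # Hypothetical example: 1600 gross counts and 900 background counts, each in
    # 6000 s; 25% efficiency (+/-0.02), 85% yield (+/-0.03), 2.0 L (+/-0.02 L).
    a, s = propagated_activity(1600, 6000, 900, 6000, 0.25, 0.02, 0.85, 0.03, 2.0, 0.02)
    print(f"{a:.3f} +/- {s:.3f} Bq/L")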
Terms such as "below detection limits", nil,
none, trace, zero, negligible should not be used
for reporting analytical results. Such terms in-
dicate only that the substance was not in fact detected under whatever conditions were used but give
no quantitative information as to the level at
which the substance was not detected. Reporting
results as "less than" some minimum detectable
activity results in some statistical loss of in-
formation and a slight bias in the results when
large numbers of results are combined in obtain-
ing averages. Undoubtedly, maximum information
is retained when the results actually obtained,
including negative values, are reported along with
their associated statistical uncertainties.
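The statistical loss mentioned above can be illustrated with a small simulation (not part of the report; the distribution, the reporting level, and the substitution rules are hypothetical): when results below a "less than" level are replaced by that level (or by zero) before averaging, the average is biased, whereas averaging the results as actually obtained, negative values included, recovers the true mean.

    # Illustrative sketch only: bias introduced by censoring results at a
    # "less than" reporting level before averaging. Parameters are hypothetical.
    import random

    random.seed(1)
    true_mean, sigma, level = 0.5, 1.0, 2.0      # true level well below the limit
    results = [random.gauss(true_mean, sigma) for _ in range(10000)]

    mean_as_obtained = sum(results) / len(results)
    mean_sub_level = sum(r if r >= level else level for r in results) / len(results)
    mean_sub_zero = sum(r if r >= level else 0.0 for r in results) / len(results)

    print(f"as obtained:      {mean_as_obtained:.2f}")   # close to the true 0.5
    print(f"substitute level: {mean_sub_level:.2f}")     # biased high
    print(f"substitute zero:  {mean_sub_zero:.2f}")      # biased low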
It is equally bad practice to permit a measured value to appear to be more precise and/or accurate than it really is; such overstatement invites unwarranted and inaccurate conclusions being drawn from the results. Statistical methods be-
come quite inaccurate when applied to small num-
bers of observations such as generally encounter-
ed in environmental monitoring. Yet it is quite
common to see standard deviations reported to
three or more significant figures, or to have the
last significant figure in the result and its
associated uncertainty appear in different decimal
places. The rounded standard deviation reported
should not deviate from the original value by more
than 20%. For example, if the computed standard
deviation is 0.1635, it should be reported as 0.16
which differs from the original by only 2%. Round-
ing to 0.2 should be avoided because it differs
from the original value by over 22%. The re-
sult itself is then rounded such that its last
significant figure will be in the same decimal
place as that of the uncertainty.
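A sketch of the rounding convention just described follows (not part of the report; the helper functions are ours): the uncertainty is rounded to one significant figure unless that would change it by more than 20%, in which case two figures are kept, and the result is then rounded to the same decimal place.

    # Illustrative sketch only: rounding an uncertainty so the rounded value
    # differs from the computed one by no more than 20%, then rounding the
    # result to the matching decimal place. Function names are hypothetical.
    import math

    def round_uncertainty(u):
        """Round u to one significant figure, or to two if one figure would
        change it by more than 20%. Returns (rounded value, decimal places)."""
        exponent = int(math.floor(math.log10(abs(u))))
        for sig in (1, 2):
            rounded = round(u, sig - 1 - exponent)
            if abs(rounded - u) <= 0.20 * u:
                break
        # decimal place of the last retained digit of the rounded uncertainty
        ndigits = sig - 1 - int(math.floor(math.log10(abs(rounded))))
        return rounded, ndigits

    def report(value, u):
        """Round value and uncertainty for reporting, in decimal agreement."""
        rounded_u, ndigits = round_uncertainty(u)
        return round(value, ndigits), rounded_u

    print(report(1.234, 0.164))   # (1.23, 0.16): rounding to 0.2 would differ by ~22%
    print(report(95.0, 14.25))    # (95.0, 14.0): i.e. 95 +/- 14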
All too often data are reported without a final review by the individuals who generated the data or without being proofread by the typists of
the report. Hand calculations should be checked
routinely by a second individual and periodically
checked by resubmission of older data. The wide-
spread use of electronic data processing has re-
lieved one possible source of error while intro-
ducing a new one. Computer errors, which are rare, are usually catastrophic and can therefore be
easily uncovered. True computer errors usually
cause program failure or produce absolutely
ridiculous results. However, subtle errors have
been known to creep into a complex data reduction
scheme. For this reason, resubmission of refer-
ence data should be included with all sample data
submissions.
The following table lists the required fea-
tures of a QC program in this area.
TABLE III
DATA REDUCTION QUALITY CONTROL

Practice                                      Reference
1. Error propagation                          39-41
2. Calculation of lower limit of detection    10, 37, 39, 42, 43, 44-46
3. Data evaluation                            43, 40, 47
4. Data reporting                             38, 39, 48
Audits
A valid QA program is composed of numerous
QC procedures and checks that often cross depart-
mental lines of responsibility. Audits, therefore, should be made by individuals outside the
department being audited. These audits should be
frequent and allow the auditor to make recommenda-
tions for improvement and also follow-up on pre-
vious recommendations.
Quality assurance programs which are supported by management, administered by responsible individuals, implemented by committed and dedicated workers, and audited by conscientious reviewers will ensure the quality of environmental radiation data.
References
[1] Environmental Radioactivity Surveillance Guide, ORP/SID 72-2, U.S. Environmental Protection Agency, Washington, D.C. (1972).
[2] Quality Assurance for Radiological Monitoring Programs (Normal Operations) - Effluent Streams and the Environment, Regulatory Guide 4.15, Revision 1, U.S. Nuclear Regulatory Commission, Washington, D.C. (1979).
[3] Guide for Environmental Radiological Surveillance at ERDA Installations, U.S. Energy Research and Development Administration Report ERDA 77-24, Washington, D.C. (1977).
[4] Radiological Environmental Monitoring by NRC Licensees for Routine Operations of Nuclear Facilities, NUREG-0475, U.S. Nuclear Regulatory Commission, Washington, D.C. (1978).
[5] T.W. Oakes, K.E. Shank, and J.S. Eldridge, Quality Assurance Applied to an Environmental Surveillance Program, in Proceedings of the Fourth Joint Conference on Sensing of Environmental Pollutants, New Orleans, Louisiana (1977).
[6] K.R. Wilcox, Jr., et al., Laboratory Management, in Quality Assurance Practices for Health Laboratories, American Public Health Association, Washington, D.C., pp. 3-126 (1978).
[7] J.H. Harley, Private Communication, Environmental Measurements Laboratory, U.S. Department of Energy, New York (1979).
[8] Rockwell International, Nuclear Training Center, Canoga Park, CA.
[9] Harvard University, Short Course in Environmental Radiation Surveillance for Nuclear Power, Dade W. Moeller, Director, Cambridge, MA.
[10] J.H. Harley, ed., HASL Procedures Manual, U.S. Department of Energy Report HASL-300, New York, revised annually (1972).
[11] S.D. Shearer, et al., Ambient Air Testing, in Quality Assurance Practices for Health Laboratories, American Public Health Association, Washington, D.C., pp. 297-379 (1978).
[12] Quality Assurance Handbook for Air Pollution Measurement Systems, Volume II, U.S. Environmental Protection Agency Report EPA-600/4-77-027a, Research Triangle Park, NC (1977).
[13] L.G. Kanipe, Handbook for Analytical Quality Control in Radioanalytical Laboratories, U.S. Environmental Protection Agency Report EPA-600/7-77-088, Washington, D.C. (1977).
[14] J.M. Mullins, et al., Radiochemistry, in Quality Assurance Practices for Health Laboratories, American Public Health Association, Washington, D.C., pp. 1007-1031 (1978).
[15] C.W. Sill and F.D. Hindman, Preparation and Testing of Standard Soils Containing Known Quantities of Radionuclides, Anal. Chem., 46, 113 (1974).
[16] NBS Radioactive Standards for Science, Industry, Environment, and Health, National Bureau of Standards, Washington, D.C. (1977).
[17] Catalog of NBS Standard Reference Materials, NBS Special Publication 260, National Bureau of Standards, Washington, D.C. (1977).
[18] C.W. Sill, Determination of Thorium and Uranium Isotopes in Ores and Mill Tailings by Alpha Spectrometry, Anal. Chem., 49, 618 (1977).
[19] C.W. Sill, Simultaneous Determination of 238U, 234U, 230Th, 232Th, and 210Pb in Uranium Ores, Dusts and Mill Tailings, Health Physics, 33, 393 (1977).
[20] J.R. Noyce, Development of a National Bureau of Standards Environmental Radioactivity Standard: River Sediment, in IEEE International Conference on Environmental Sensing and Assessment, Las Vegas, NV, September (1975).
[21] M. Rosenstein and A.S. Goldin, Statistical Techniques for Quality Control of Environmental Radioassay, Health Laboratory Science, 2, 93-102 (1965).
[22] Report of Calibration - Gamma-Ray Emission Rate Standards: Spiked Clay, National Bureau of Standards, Washington, D.C. (1978).
[23] Analytical Quality Control Service 1978-1979, LAB/243, International Atomic Energy Agency, Vienna, Austria (1978).
[24] Environmental Monitoring Series, Environmental Radioactivity Laboratory Intercomparison Studies Program 1978-1979, U.S. Environmental Protection Agency Report EPA-600/4-78-032, Las Vegas, NV (1978).
[25] L.H. Ziegler, Environmental Monitoring Series, Radioactivity Standards Distribution Program 1978-1979, U.S. Environmental Protection Agency Report EPA-600/4-78-003, Las Vegas, NV (1978).
[26] Laboratory Seibersdorf, International Atomic Energy Agency, P.O. Box 590, A-1011 Vienna, Austria (1978).
[27] 1968 Certification of Standardized Radioactive Sources, in International Commission on Radiation Units and Measurements Report ICRU 12, Washington, D.C. (1968).
[28] Y. LeGallic, Validity of Radioactivity Standards, NBS Special Publication 331 (1970).
[29] Labeled Compounds, Radionuclides, Reference Sources, LSC Fluors and Chemicals, Radioassay Products Catalog, New England Nuclear, Boston, MA 02118 (1978).
[30] 1978/79 Radiochemical Catalog, Amersham Corporation, Arlington Heights, IL 60005 (1979).
[31] 1974 Users' Guide for Radioactivity Standards, National Academy of Sciences Report NAS-NS 3115 (1974).
[32] B.M. Coursey, Use of NBS Mixed-Radionuclide Gamma-Ray Standards for Calibration of Ge(Li) Detectors Used in the Assay of Environmental Radioactivity, U.S. National Bureau of Standards Report NBS SP456, Washington, D.C. (1976).
[33] L.H. Ziegler and H.M. Hunt, U.S. Environmental Protection Agency Report EPA-600/7-77-144, Las Vegas, NV (1977).
[34] G.I. Coats and A.S. Goldin, Energy Alinement of Gamma Spectrometers, Public Health Reports, 81, 999-1007 (1966).
[35] R.A.G. Marshall, Cumulative Sum Charts for Monitoring of Radioactivity Background Count Rates, Anal. Chem., 49, 2193 (1977).
[36] IEEE Standard Technique for Determination of Germanium Semiconductor Detector Gamma-Ray Efficiency Using Standard Marinelli (Reentrant) Beaker Geometry, American National Standards Institute Report ANSI/IEEE STD 680-1978, New York (1978).
[37] A Handbook of Radioactivity Measurements Procedures, National Council on Radiation Protection and Measurements, Washington, D.C. (1978).
[38] C.W. Sill, Reporting of Analytical Results, in Proceedings of the 21st Annual Conference on Bioassay, Environmental and Analytical Chemistry, San Francisco, California, Lawrence Livermore Laboratory Report CONF-751004 (1975).
[39] Health Physics Society, Upgrading Environmental Radiation Data, Health Physics Society Committee Report HPSR-1 (1980).
[40] P.R. Bevington, Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill Book Co., New York, p. 56 (1969).
[41] H.H. Ku, Notes on the Use of Propagation of Error Formulas, J. of Research, National Bureau of Standards - Section C, Engineering and Instrumentation, 70C, No. 4, p. 263 (1966).
[42] L.A. Currie, Limits for Qualitative Detection and Quantitative Determination, Application to Radiochemistry, Anal. Chem., 40, 586 (1968).
[43] J.J. Donn and R.L. Wolke, The Statistical Interpretation of Counting Data from Measurements of Low-Level Radioactivity, Health Physics, 32, 1 (1977).
[44] Annual Report, Health Services Laboratory, IDO-12076, U.S. Department of Energy, Idaho Falls, ID (1971).
[45] Hazards Control Report 43, Lawrence Livermore Laboratory Report UCRL-50007-72-2, Livermore, CA (1972).
[46] B. Altshuler and B. Pasternack, Statistical Measures of the Lower Limit of Detection of a Radioactivity Counter, Health Physics, 9, 293-298 (1962).
[47] J. Mandel and L.F. Nanni, Measurement Evaluation, in Quality Assurance Practices for Health Laboratories, American Public Health Association, Washington, D.C., pp. 209-272 (1978).
[48] G.P. Hicks, A.F. Krieg, and R.E. Thiers, Data Transmission, in Quality Assurance Practices for Health Laboratories, American Public Health Association, Washington, D.C., pp. 273-294 (1978).
REPORTING OF ENVIRONMENTAL RADIATION MEASUREMENTS DATA
R. Colle (National Bureau of Standards, Washington, DC), H. H. Abee (Union Carbide Corp., Oak Ridge,
TN), L. K. Cohen (Nuclear Regulatory Commission, Washington, DC), D. Ed (State of Illinois,
Dept. of Public Health, Springfield, IL) , E. H. Eisenhower (National Bureau of Standards,
Washington, DC), A. N. Jarvis (Environmental Protection Agency, Las Vegas, NV), I. M. Fisenne
(Environmental Measurements Laboratory, New York, NY), M. Jackson (Alabama Power Co., Birmingham, AL), R. H. Johnson, Jr. (Environmental Protection Agency, Washington, DC),
D. Olson (Radiological and Environmental Sciences Laboratory, Idaho Falls, ID) and J. Peel
(Department of Energy, Washington, DC)
This report is intended to serve as a practical guide to treating and reporting environmental radiation measurements data. Recommendations for a uniform method of data reporting are presented and justified. Three primary requisites are considered: proper units, an appropriate number of significant figures, and an unambiguous
statement of measurement uncertainty. Present practices are summarized and evaluated,
and their deficiencies are examined. To avoid confusion, it is recommended that
the existing multiplicity of units used to report various radiation quantities be
reduced to a smaller consistent set. Use of the metric system of units is encouraged.
Rules for rounding reported values to an appropriate number of significant figures are presented. The appropriate number of significant figures for a reported value is determined by the magnitude of the total uncertainty associated with the value. Guidelines are given for estimating random and systematic uncertainties, and for propagating and combining them to form an overall uncertainty. It is recommended
that each reported measurement result include the value, the total random uncertainty
expressed as the standard deviation, and the combined overall uncertainty. To avoid
possible biases of data, all measurement results should be reported directly as
obtained, including negative values. The lower limit of detection (LLD) should
serve only as an a priori estimate of detection capability for the instrumentation,
and not as an absolute level of activity that can or cannot be detected. The concept
of a minimum detectable concentration (MDC) is introduced to serve as an a priori
estimate of the capability for detecting an activity concentration with a given
measurement instrument, procedure, and type of sample. Neither the LLD nor the MDC
is intended to be an a posteriori criterion for the presence of activity.
(Accuracy; activity; data reporting; detection limit; environmental; error; lower limit of detection [LLD]; measurements; minimum detectable concentration [MDC]; precision; radiation; random uncertainty; significant figures; statistics; systematic uncertainty; uncertainty; units)
Overview
This report highlights the guidelines and
recommendations contained within a much lengthier
version prepared by the subcommittee. The full
report, A Guide and Recommendations for Reporting
of Environmental Radiation Measurements
Data, is available as a special publication of
th" National Bureau of Standards [1]. The guide
and its recomendatio.ns are intended "• be useful
and specific. It is not intended to be a general
or exhaustive theoretical treatment of measure-
ment results, but rather to serve as a
practical guide to treating and reporting environ-
mental radiation measurements data.
There are three primary requisites to reporting measurement data. The reported value
of a measurement result must always:
1) be unequivocal and properly dimensioned
(i.e., the units must be clearly under-
stood and suitable for the value),
2) be expressed in an appropriate number of significant figures, and
3) include an uncertainty statement (whose meaning is unambiguous).
Mistreatments of conditions (1) and (2) are usu-
ally not the most serious failings in data report-
ing. There is, however, considerable variation
in the practices recommended and required by the
various federal agencies concerned with environ-
mental radiation data reporting. Recommended con-
ventions to satisfy these conditions are covered
in Parts I and II, respectively.
Failure to satisfy condition (3) is probably
the most frequent abuse and is of greater concern.
Unfortunately, almost all environmental radiation
data presently being reported fall into one of
three categories:
i) the value is reported without stating the uncertainty,
ii) the uncertainty is reported, but it is not specified, and
iii) the reported uncertainty is specified, but is based on an incomplete assessment.
Further, federal practices pertaining to the reporting of uncertainties not only contribute to these abuses, but are also as diverse as the number of individual requirements specified within
an agency's regulations. Recommendations for
assessing, propagating and reporting uncertain-
ties are given in Part III.
Perhaps the most non-uniform practices in environmental radiation data reporting are those involving detection limits. These and the treatment of results at or below these levels are covered
in Part IV.
I. Dimensioning (Units)
As outlined in the Overview, the first re-
quirement for the reporting of environmental ra-
diation measurements data is that reported values
must "be unequivocal and properly dimensioned."
That is, the units for the values must be clearly
understood and must be suitable for the value.
A survey and evaluation of typical units
currently in use indicates that there are con-
siderable variations in practice. This existing
multiplicity is both needless and confusing.
Two additional observations can be made. First,
compound units (e.g., for radioactivity concentration) are frequently formed by using more than one prefix in forming the multiple of the compound unit (e.g., pCi·mL⁻¹). Accepted convention recommends that only one prefix be used [2,3]. Normally the prefix should be attached to a unit in the numerator. The second
observation is that the conventional units of ra-
diation quantities (curie, roentgen, rad, rem)
are almost exclusively used and that few have
converted to use of the International System of
Units (SI).
It would be beneficial and helpful (to both
laboratories and data users) if the number of
units in use were reduced to a smaller consis-
tent set. This goal toward greater uniformity
could be accelerated by the encouragement of
national use of the metric system, formally
called SI, and by the use of standard metric
practices and conventions. The familiar "curie,"
"roentgen" and "rem" are deeply entrenched with-
in the radiological sciences and radiation fields,
and will not be easy to forsake. This natural reluctance to change can be appreciated, but the eventual conversion to SI is inevitable and must
be recognized.
In addition to our national commitment to
the SI, there are many advantages to its use.
Some of these are excellently described in the following brief summary from a British committee report [4]:
The International System of Units (SI) is a
rational and comprehensive system of units.
It has seven base units. The unit for any physical quantity within the system is de-
rived from one or more of these by multipli-
cation or division and without introducing
any numerical factors. The system is therefore coherent. It avoids the chaos of individual choice of units, it breaks down the barriers to communication arising from
separate systems of units, and it removes
the need for conversion factors that arises
when incoherent units are used. The SI is
clearly destined to become the universal
currency of science and commerce. At first
sight the use of the new SI units seems to
bring little direct benefit to those...areas
in which radiation quantities are exten-
sively used, but their adoption is essential
if these areas are not to be cut off from
their parent sciences. The use of the new
units for ionizing radiations will only ex-
tend the existing and growing application
of SI....All changes bring their problems
but the introduction of SI units does not
change existing quantities although it does
require people to become familiar with new
numerical values.
A number of sources describing the use of the SI are available [2,3,5,6]. For environmental radiation measurements data, we are primarily concerned with the units for radioactivity, exposure, absorbed dose, and dose equivalent. The status of each of these, in terms of both the SI and non-SI units, will be considered in turn.
Radiation Units
Radioactivity is a measure of the rate of
those spontaneous, energy-emitting, atomic tran-
sitions that involve changes in state of the
nuclei of radioactive atoms [7]. It is given by the quotient of dN by dt, where dN is the number of spontaneous radioactive events which occur in a quantity of a radioactive nuclide in the time interval dt. The SI unit of activity is the becquerel (Bq), which is equal to one radioactive event per second, i.e.,

1 Bq = 1 s⁻¹.

The special unit of activity is the curie (Ci) which is defined as

1 Ci = 3.7 x 10¹⁰ Bq (exactly).
The International Committee for Weights and Mea-
sures (CIPM) has authorized, for a limited (un-
specified) time, the use of the curie and its
multiples. This temporary sanction was provided
to allow time for familiarization with the new
units.
Exposure, X, is the quotient of dq by dm
where dq is the absolute value of the total
charge of the ions of one sign produced in air
when all the electrons (negatrons and positrons)
liberated by photons in a volume element of air of mass dm are completely stopped in air [7].
The SI unit for exposure is coulomb per kilogram (C·kg⁻¹). The familiar unit for exposure is the roentgen (R) which is defined as

1 R = 2.58 x 10⁻⁴ C·kg⁻¹ (exactly).

There is no special name for the SI unit of exposure (C·kg⁻¹) as a replacement for the roentgen. As for the curie, temporary use of the roentgen has been sanctioned.
Absorbed dose, D, is the quotient of de by
dm where de is the mean energy imparted by ioniz-
ing radiation to the matter in a volume element
and dm is the mass of the matter in that volume element [7]. The SI unit for absorbed dose and
related quantities (specific energy imparted,
kerma, and absorbed dose index)* is the gray
(Gy). The gray, in SI base units, is a joule per
kilogram.
1 Gy = 1 J·kg⁻¹.
The gray is intended to replace the rad which
is defined as
1 rad = 10⁻² Gy.
The unit rad has also been sanctioned for tem-
porary use.
Dose equivalent accounts for the relative
biological effectiveness of a given absorbed
dose. It depends on the type of radiation, on
the irradiation conditions, and, for a given
organ, is inferred by weighting the absorbed
dose in that organ by certain modifying fac-
tors[7]. The dose equivalent, H, is given as the
product of D, Q and N, at the point of interest
in tissue, where D is the absorbed dose, Q is the quality factor for the type of radiation, and N is the product of any other modifying factors:

H = DQN.

The SI unit for dose equivalent is the sievert (Sv) which is equal, in SI base units, to J·kg⁻¹.
Hence, when D is expressed in grays (Gy), H is
in sieverts (Sv). The special non-SI unit for the dose equivalent is the rem. When D is ex-
pressed in rads, H is in rems. The unit rem has
also been sanctioned for temporary use.
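As a brief arithmetic illustration of the relation H = DQN (not part of the report; the quality factors shown are commonly tabulated round values used here only as examples):

    # Illustrative sketch only: evaluating H = D * Q * N. If D is in grays the
    # result is in sieverts; if D is in rads the result is in rems.
    def dose_equivalent(absorbed_dose, quality_factor, modifying_factor=1.0):
        return absorbed_dose * quality_factor * modifying_factor

    d = 1.0e-5                          # 10 uGy absorbed dose (equal to 1 mrad)
    print(dose_equivalent(d, 1.0))      # photons, Q ~ 1:  1e-05 Sv  (1 mrem)
    print(dose_equivalent(d, 20.0))     # alpha, Q ~ 20:   0.0002 Sv (20 mrem)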
Compound Units
Compound units for other quantities are
formed by combination of two or more units. The
SI unit for radioactivity concentration, for example, could be Bq·kg⁻¹ or Bq·L⁻¹. Exposure rate, given by the quotient of dX by dt where dX is the increment of exposure in the time interval dt, is expressed in units of any quotient of C·kg⁻¹ by a suitable unit of time (e.g., C·kg⁻¹·h⁻¹). The unit for absorbed dose rate, dD·dt⁻¹, is any quotient of the gray or its multiple or submultiple by a suitable unit of time (Gy·s⁻¹, µGy·h⁻¹, mGy·yr⁻¹, etc.). As will be
discussed shortly, there are a number of advan-
tages to using an exponential notation instead of
prefixes. The use of prefixes to form multiples
and submultiples of the SI units is permissible,
however, and for completeness is considered
here.
Multiples and Submultiples of Units
Multiples and submultiples of the SI units are formed by adding prefixes to the names or symbols of the units. The SI prefixes are listed in Table 1. Only one prefix should be used in forming a multiple of a compound unit, and it should normally be attached to a unit in the numerator, e.g., MBq·L⁻¹, not kBq·mL⁻¹. The use of the unit kilogram is the only exception to this rule since the kilogram is the base unit of mass. One should consult more comprehensive guides [2,3,6] for additional details.
TABLE 1
SI Prefixes

FACTOR     PREFIX    SYMBOL
10¹⁸       exa       E
10¹⁵       peta      P
10¹²       tera      T
10⁹        giga      G
10⁶        mega      M
10³        kilo      k
10²        hecto     h
10¹        deka      da
10⁻¹       deci      d
10⁻²       centi     c
10⁻³       milli     m
10⁻⁶       micro     µ
10⁻⁹       nano      n
10⁻¹²      pico      p
10⁻¹⁵      femto     f
Special Units
There are a number of units which are not
part of the SI, but are so widely used that they
are accepted for continued use with the SI. The
combination of these units with SI units should
be restricted to special uses in order to not
lose the advantage of the coherence of SI units.
Those of interest for environmental radiation
measurements data include the units of time: minute (min), hour (h) and day (d), and the special name liter. The recommended symbol for the liter is "L" in the U.S., not the lower case "l", which may be confused with the numeral
*For definitions of these quantities see, for example, ICRU Report
•"1"16. The SI unit of volume is the cubic meter
<"(in-'), and it or one of its multiples or sub-
multiples is preferred for all applications. In
f any case, the special name liter is restricted
for use with only liquid or gas vo-lumes. No
prefix other than milli should be used with,
liter.
As noted earlier, the common non-SI units
for radiation quantities (curie, roentgen, rad and
rem) are authorized, for a limited time, for use
with the SI. These are destined to be replaced
by the becquerel (Bq), coulomb per kilogram
(C·kg⁻¹), gray (Gy) and sievert (Sv), respec-
tively.
Exponential Notation
There are a number of advantages to express-
ing results using powers of ten applied to the
base and coherent derived SI units. First, it
eliminates problems when values lie outside the range covered by the prefixes; second, it mini-
mizes the possibility of computational mistakes;
and third, it helps to indicate the number of
significant figures in the value thereby reducing
ambiguity (see Part II). Some examples of values
expressed in SI units and this exponential no-
tation follow.
2810 pCi·L⁻¹ = 104 Bq·L⁻¹
             = 1.04 x 10² Bq·L⁻¹
             = 1.04 E+02 Bq·L⁻¹

12 µR = 3.1 x 10⁻⁹ C·kg⁻¹
      = 3.1 E-09 C·kg⁻¹

123 mrad = 0.00123 Gy
         = 1.23 x 10⁻³ Gy
         = 1.23 E-03 Gy
Conversion to SI Units
It obviously is neither feasible nor eco-
nomically justifiable to completely and immediately change over from non-SI to SI units. This change will require familiarization and a transition period. Steps must be taken by both laboratories and regulatory agencies to aid and ex-
pedite this inevitable transition. This will
only be achieved if there is a spirit of cooper-
ation, adequate planning, and logical implemen-
tation by all those involved. At the same time,
a number of initial steps taken during the
transition would immediately further the goal of
greater uniformity in data reporting practices.
It is therefore recommended that data
reports be converted to the SI base and derived
units as soon as technically and economically
feasible. In many cases, this cannot be accom-
plished without changes in governmental regula-
tions and data-reporting requirements imposed on
laboratories. During the transition it is recom-
mended that data reports include conversion
factors between the SI and non-SI units, and
whenever possible include the measurement results
in both units. Since the scope of this change
is much broader than the parochial scope of environmental radiation measurements data, efforts
must be made by concerned laboratories, agencies,
and organizations to implement simultaneous
changes in all areas concerned with radiation
quantities. Broad-based initiatives to plan and
implement a safe and orderly transition are
highly recommended.
Recommended Units for Use in Data Reporting
Whether temporarily sanctioned non-SI units (Ci, rad, etc.) or SI units (Bq, Gy, etc.) are employed, it is recommended that the number of combined units for radioactivity concentrations,
exposure rates, etc., be immediately reduced to
a smaller consistent set. The existing multi-
plicity of units in use is both needless and con-
fusing.
It is recommended that radioactivity concen-
trations (except for effluent concentrations) be reported in units of activity per kg for mass measurements, and per m³ or L for volume measurements. Recommended conventions are therefore:
1. Units for activity concentration in solid samples (e.g., soils, vegetation, etc.) should be Bq·kg⁻¹ expressed in exponential notation (using the new SI units) or submultiples of Ci·kg⁻¹ (using non-SI units).
2. Units for activity concentration in airborne particulate or gas samples should be Bq·m⁻³ expressed in exponential notation or submultiples of Ci·m⁻³.
3. Units for activity concentration in liquid samples (e.g., water, milk, etc.) should be Bq·L⁻¹ expressed in exponential notation or submultiples of Ci·L⁻¹.
The recommended units for these activity
concentrations and other radiation quantities, in
terms of both the SI and non-SI units, are contained in Table 2. The table also illustrates the conversion factors (in E-notation) from the non-SI to SI units.
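The factors in Table 2 lend themselves to a simple look-up; the sketch below (not part of the report; the dictionary keys and function name are ours) applies them and reproduces the 2810 pCi·L⁻¹ ≈ 104 Bq·L⁻¹ example given earlier under Exponential Notation.

    # Illustrative sketch only: applying the non-SI to SI conversion factors
    # collected in Table 2. Keys and function name are hypothetical.
    TO_SI = {
        "pCi/m3 -> Bq/m3":    3.70e-2,
        "pCi/L  -> Bq/L":     3.70e-2,
        "pCi/kg -> Bq/kg":    3.70e-2,
        "uCi/mL -> Bq/m3":    3.70e10,   # effluent gas
        "uCi/mL -> Bq/L":     3.70e7,    # effluent liquid
        "uR/h   -> C/(kg h)": 2.58e-10,
        "mrad   -> Gy":       1.00e-5,
        "mrem   -> Sv":       1.00e-5,
    }

    def convert(value, conversion):
        """Multiply a non-SI value by the Table 2 factor to obtain SI units."""
        return value * TO_SI[conversion]

    print(convert(2810, "pCi/L  -> Bq/L"))   # ~104 Bq/L, as in the earlier example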
Additional Specifications
Obviously, additional specification of the results of measurements may be necessary. Measurements of solid samples, for example, will
still require specification as to "dry weight" or
"wet weight" and moisture content. Similarly,
measurements of gas volumes require specification
of the ambient temperature and pressure. One
should always explicitly state all of the conditions that are necessary for the results to be understood and interpreted unequivocally.
Other Unsatisfactory Current Practices
As stated initially, proper dimensioning of
measurement results requires that the values and
units be clearly understood and suitable for the
value. There are a number of still unaddressed
current practices that do not satisfy these basic
criteria.
Perhaps one of the most serious is the
failure to distinguish between measurements of
"dose equivalent" and "absorbed dose." The
abbreviation "dose" is continually, although
incorrectly, used for both quantities. Oftentimes,
the only way one can ascertain which is meant is
by seeing if the values are reported in units of
rems or rads. It should be apparent that not only must the measurement value be properly di-
mensioned, but the reported quantity itself must
also be properly labelled.
A second major concern is the extensive reporting of "gross activity" ("total beta", "total gamma") measurement results in units of radioac-
tivity. The inherent problems and abuses of such
practices have been addressed previously.
These measurements contain no information on the
identity of the radionuclides present in the sam-
ple. Therefore, no meaningful conclusions on the
dosimetric significance of the results can be ob-
tained. "Gross activity" measurements do serve
a useful function for screening purposes to decide
if additional, radionuclide-specific measurements
should be made. Their value in data reports, how-
ever, can be seriously questioned. Reporting
values of "gross activity" in units of Bq or pCi
(or some other submultiple of the curie) is
extremely misleading. At best, the result refers
only to the activity that would be obtained from the observed counting rate for the standard radionuclide (that was used to obtain the counting efficiency). This condition must be clearly
described in any report of the result.
Recommendations
I.1 A consistent set of units for various environmental radiation quantities (for both SI and non-SI units) is contained within this guide (cf., Table 2) and should be employed for data reporting. The existing multiplicity in the number of units in use is both needless and confusing.
I.2 The use of the metric system, formally
called the International System of Units
(SI), and the use of standard metric prac-
tices and conventions should be encouraged. These units and recommended practices are briefly described within this guide. Additional details are available from more comprehensive sources referenced therein.
I.3 Data reports should convert to the SI
base and derived units for radiation
quantities as soon as technically and
TABLE 2
Recommended Units For Data Reporting

Quantity                                    Non-SI Units (a)   SI Units      Conversion Factor from Non-SI to SI Units (b)

Activity Concentrations (Environmental)
  Airborne Particulates and Gas             pCi·m⁻³            Bq·m⁻³        3.70E-02
  Liquids (Water, Milk, etc.)               pCi·L⁻¹            Bq·L⁻¹        3.70E-02
  Solids (Soil, Sediment, Vegetation,
    Foodstuff, etc.)                        pCi·kg⁻¹           Bq·kg⁻¹       3.70E-02

Activity Concentrations (Effluent)
  Gas (Air)                                 (µCi·mL⁻¹)*        Bq·m⁻³        3.70E+10
  Liquid                                    (µCi·mL⁻¹)*        Bq·L⁻¹        3.70E+07

Exposure Rate (Environmental)               µR·h⁻¹             C·kg⁻¹·h⁻¹    2.58E-10
Absorbed Dose                               mrad               Gy            1.00E-05
Dose Equivalent                             mrem               Sv            1.00E-05
Dose Equivalent Rate (Commitment)           mrem·yr⁻¹          Sv·yr⁻¹       1.00E-05

(a) Sanctioned for temporary use.
(b) To convert non-SI units to SI units, multiply the non-SI units by the conversion factor.
* Adopted because of established convention and use in Maximum Permissible Concentration (MPC) tabulations.
economically feasible. In view of
existing governmental regulations and data-
reporting requirements, it is, at present,
neither feasible"nor economically justi-
fiable to completely and immediately change
over from non-SI to SI units. This change
will require familiarization and a transi-
tion period.
II. Significant Figures
The second requirement (outlined in the
Overview) for reporting measurement data was
that the reported value must be "expressed in
an appropriate number of significant figures."
Fortunately, this is nearly universally recog-
nized, and is seldom a problem.
The "number of significant figures" refers to the number of numerical digits that is used to express the value. It is obvious that this number must be reasonable, and must not mislead or imply fictitious accuracy in the reported value. An "appropriate" number of digits
(or significant figures) is that which is war-
ranted by the accuracy of the reported value.
That is, the appropriate number of digits to be
retained in the reported value depends on the
magnitude of the total uncertainty (see Part III)
attached to this value.
For example, a reported value of 1.53365 E+12 Bq (41.45 Ci) for the quarterly tritium release from a nuclear facility is not reasonable. Surely no one would be fooled by the apparent implied accuracy of this value. Suppose further that the estimated total uncertainty in the value is ±40%. Then, with the result expressed as

(1.53365 ± 0.61346) E+12 Bq
(i.e., 41.45 ± 16.58 Ci),

the ludicrousness in the reported value is even more apparent. This result should be reported as

(1.5 ± 0.6) E+12 Bq.
Similarly, suppose the annual release of
radioactivity from several facilities at a site
was reported (to two significant figures) as
21,000 Ci from one, 560 Ci from another, and
1.4 Ci from a third. To then say that the total
activity released from the site was 21,561.4 Ci
implies an accuracy in the measurement that is
both incorrect and unintended.
Another, equally obvious, bad practice is
the failure to ensure decimal agreement between
the result and its associated uncertainty. For
example, results such as 1.2 ± 0.002 or 1.234 ± 0.2 are internally inconsistent. Either
the result is more accurate than is given, as
indicated by its associated uncertainty, and more
significant figures should be retained, or the
result is much less accurate than indicated, and
the number of significant figures should be decreased to agree with the accuracy indicated by the quoted uncertainty. Care must therefore be
taken in the number of significant figures re-
ported both for the value itself and for the un-
certainty term.
In general, environmental radiation measure-
ments data seldom justify more than two or
three significant figures for the value and one
or two significant figures for the uncertainty.
More significant figures should only be reported
after careful consideration and a decision that
the extra figures are indeed reportable. It is
recommended that the uncertainty should be re-
ported to no more than two significant figures,
and the value itself should be stated to the last
place affected by the qualification given by the
uncertainty term.
For example, given a measurement result of 123.45 Bq·L⁻¹ for an activity concentration with an estimated total uncertainty of ±12% (i.e., ±14.8 Bq·L⁻¹), the result would be reported as

(1.23 ± 0.15) E+02 Bq·L⁻¹.

In this case, two significant figures for the uncertainty and three for the value might be justified. One might not wish to round the result to

(1.2 ± 0.1) E+02 Bq·L⁻¹

since the latter result implies a greater accuracy (approximately 8% compared to the original estimated uncertainty of ±12%). The decision to report either two or three significant figures for a value is particularly critical when the value is close to crossing from one decade to another, such as for 89, 95, and 103. With estimated uncertainties of ±15%, the above values would be reported as
89 ± 13
95 ± 14
103 ± 15
if two significant figures in the uncertainty term
are retained. Otherwise, the values would be
rounded to
90 ± 10
100 ± 10
100 ± 20.
The rules for rounding of numbers (i.e., the dropping of insignificant or unwarranted figures) are generally well known and available in a large number of sources [9, 10, 11, 12], including elementary mathematics texts. These rules are summarized below (a brief illustrative sketch follows the list):
1. To round off the number to n signifi-
cant figures, truncate the number to
n digits and treat the excess digits
as a decimal fraction.
2. If the fraction is greater than or equal to 1/2, increase the least significant digit (i.e., the nth) by 1.*
3. If the fraction is less than 1/2, leave the nth digit unchanged.
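The sketch referred to above (not part of the report; the function names are ours) applies the round-half-up rule of items 1-3 and, for comparison, the unbiased "round half to even" variant mentioned in the footnote to rule 2.

    # Illustrative sketch only: round-half-up (rules 1-3) versus the unbiased
    # round-half-to-even alternative noted in the footnote. Names are ours.
    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

    def round_half_up(value, n_decimals):
        return float(Decimal(str(value)).quantize(Decimal(10) ** -n_decimals,
                                                  rounding=ROUND_HALF_UP))

    def round_half_even(value, n_decimals):
        return float(Decimal(str(value)).quantize(Decimal(10) ** -n_decimals,
                                                  rounding=ROUND_HALF_EVEN))

    print(round_half_up(2.45, 1), round_half_even(2.45, 1))   # 2.5  2.4
    print(round_half_up(2.35, 1), round_half_even(2.35, 1))   # 2.4  2.4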
Similarly, rules for the number of significant figures to be retained in successive arithmetic operations have been developed.
The rules for addition and subtraction differ
from those for multiplication and division, and
involve other assumptions (such as, that the
values are independent and uncorrelated) . Since
no general rule can be given for all situations
involving different types of arithmetic opera-
tions, a recommended procedure is to avoid
rounding off until after the final calculation.
This can be accomplished by treating all measure-
ment values as exact numbers in the operations and
to only round off the final result, or to carry
two or more extra (insignificant) figures through-
out the computation, and then to round off the
final reported value to an appropriate number of significant figures [9]. Carrying along
the extra insignificant figures will reduce the
possibility of "rounding errors." This practice
does not, however, excuse one from rounding the
final result to an appropriate number of signi-
ficant figures for the data reports.
These views are to a large degree currently
incorporated into most data reporting practices
and regulatory requirements.
Recommendations
II.1 A reported value should be expressed in an appropriate number of significant figures which is determined by the magnitude of the total uncertainty associated with the value.
II.2 The uncertainty should be reported to no more than two significant figures, and the value itself should be stated to the last place affected by the qualification given by the uncertainty term.
II.3 Care should be taken in the number of significant figures reported both for the value itself and for its associated uncertainty. The value and its uncertainty must be in decimal agreement.
II.4 Reported values and their uncertainties should be rounded to an appropriate number of significant figures using the consistent, well-developed rules outlined in the text.
II.5 To avoid "rounding errors" in computations involving successive arithmetic operations, two or more extra (insignificant) figures should be carried on all the values throughout the computation, and then the final reported value should be rounded.
III. Treatment of Uncertainty Statements
Three of the most common abuses concerning
the reporting of "errors" for environmental
radiation measurements data were outlined in the
Overview to this guide. These abuses shall be
addressed in turn, but it may be useful to state, at the outset, the necessary conditions for avoiding their failings:
A REPORTED VALUE MUST INCLUDE AN ASSESS-
MENT OF ITS UNCERTAINTY.
THE REPORTED UNCERTAINTY MUST BE CLEARLY
UNDERSTOOD AND MUST CONVEY SUFFICIENT
INFORMATION SO THAT ITS MEANING, USING
CORRECT TERMINOLOGY, IS UNAMBIGUOUS.
THE REPORTED UNCERTAINTY MUST BE BASED ON
AS NEARLY COMPLETE AN ASSESSMENT AS POS-
SIBLE.
To help avoid ambiguity, the use of the words "error" and "uncertainty" (which are frequently used interchangeably) should be clarified. The word "error" has familiar usages which range from "a mistake" or "an oversight" to "a deviation from what is correct, right, or true," "the condition of having incorrect or false knowledge," and "the difference between a measured or computed value and a correct value." Further, its use in even the statistical sense is often confusing. Therefore, it is recommended that it not be used except:
1) in those cases where its meaning is
unambiguous or of no portent, e.g.,
"absolute error";
2) when used in uniquely defined statis-
tical terms, e.g., "standard error of
the mean"; and
3) in commonly recognized phrases, e.g.,
"propagation of errors" or "statistical
theory of errors."
As such the word "error" should be used as in-
frequently as possible. The word "uncertainty"
is a preferred substitute which can be used to
refer to the values following the + symbol and
all of its component parts (random and systematic
uncertainties). Terms such as "uncertainty," "accuracy," etc. are more likely to be clearly defined, better understood, and unambiguously used. Data reports are less apt to be confusing if the frequent and loose use of "error" is avoided. This practice has been incorporated into this guide, and it is recommended that it
*This rule will introduce a slight rounding bias due to the treatment of the result when the fraction is exactly equal to 1/2. An unbiased procedure consists of incrementing the nth digit if it is an odd number (leaving the nth digit unaltered if it is an even number).
be followed for all environmental radiation measurement data reports and documents.
Consider now the necessary conditions, given above, for reporting an uncertainty.
A reported value without an accompanying uncertainty statement is for nearly all purposes worthless; the value is rendered useless because it cannot be put to use with any confidence. Al-
though the particular form of the uncertainty
statement may be dependent on the intended or
ultimate use of the result, it is without debate
that the value must include one.
The absolute error or uncertainty of a re-
ported value., which is the deviation from the
true value, is unknowable because the true value
can never be known exactly. Limits to this un-
certainty, however, can be inferred and esti-
mated from the measurement process itself. Fore-
most, the uncertainty should be a statement, based
on a credible assessment, of the likely inaccuracy
or the likely limits to the "absolute error" in
the reported value. This uncertainty assessment
also includes an incumbent risk of being incorrect.
Similarly, a reported value, say 15 ± 3 Bq, without further information is equally trouble-
some. The user would be forced to speculate
whether the quoted 20% uncertainty is a standard
error of the mean based on multiple measurements,
or an estimated "statistical or counting error."
Also unknown is whether it includes possible
systematic uncertainties or is only the random
uncertainty, and to what level of confidence the
value can be ascribed. The possibilities are
innumerable. In short, the user might ask: "How certain am I that the value is between 12 and 18 Bq?" To be useful, the uncertainty statement must convey sufficient information so that its meaning, based upon correct terminology, is unambiguous.
Suppose the above quoted uncertainty was
reported as being a "counting error at 2 sigma
(95%) confidence interval" [Cf., Environmental
Radiation Data, quarterly compilations by the
U.S. Environmental Protection Agency's Office of Radiation Programs]. Such reporting is certainly clearer. The user could infer that it
is derived from an estimate of the standard
deviation of the measurement or counting process
(assuming it follows a Poisson distribution)
taken to some higher confidence limit ("2
sigma").* This inference can be problematic how-
ever. First, although it is frequently assumed,
twice the standard deviation is not necessarily a
95% confidence limit (see subsequent section on
confidence limits). And second, the reported
uncertainty is only an estimate of the random un-
certainty or precision in the measurement process
which does not address the accuracy of the value.
The user might legitimately ask: "Is this the sole contribution to the overall uncertainty?"
Precision, Accuracy, Bias, and Random and Syste-
matic Uncertainties
Precision and Random Uncertainty. The random uncertainty is a statement of precision (or more correctly, of imprecision) and is a measure of
the reproducibility or scatter in a set of suc-
cessive independent measurements [Ref. 13, p. 23-
1]. Precision refers to the closeness of the set
of results among themselves. When differences in
the magnitudes of the observations are small,
precision is said to be high; when the differ-
ences are large, precision is low. Random uncer-
tainties are assessed and propagated by statisti-
cal methods. The treatment of these uncertain-
ties is most familiar to physical scientists and
those concerned with environmental radiation data.
Accuracy. The accuracy of a measurement pro-
cess is a measure of the ability to obtain close-
ness to the true value. The absolute error of a
particular result is just the difference between
the measured or reported result and the true
value. The exact difference of course is unknow-
able because the true value can never be known
exactly. Although the absolute error is unknow-
able, limits to its magnitude can be inferred and
estimated from the measurement process itself.
The estimate of these limits to the absolute
error is referred to as the uncertainty, and is
used to estimate the inaccuracy of the measure-
ment process.
Bias. A bias is a deviation from the true
value which is always of the same magnitude and
direction. It cannot be estimated or calculated
from a given set of replicate measurements since
each and every measurement is affected by the
systematic bias in the same way [Ref. 13, p. 23-1
and Ref. 14]. There can be many contributing
sources of bias in a given measurement process.
They are introduced by the process and are char-
acteristic of it. Such biases are not amenable
to statistical treatments. They should instead be estimated as upper limits for each conceivable (or assessable) source of inaccuracy in the measurement process. Their magnitudes would pref-
erably be based on experimental verification,
but may have to be estimated from experience and
judgement.
The very familiar "bull's eye" example shown
in Figure 1 should help illustrate the distinction
between precision and systematic biases, and
their relation to accuracy. The bull's eye of the
target corresponds to the true value, and the
six shots represent individual measurement re-
sults. The figure illustrates the concept of
an inaccurate measurement due largely to impre-
cision, a precise but inaccurate measurement, and
an accurate measurement. Inasmuch as accuracy
requires precision, there is no such case as an
accurate but imprecise measurement.
*The term "sigma," in this case, is used incorrectly.' Sigma is a parameter for the
population, and should not be used to refer to the calculated sample statistic
"standard deviation."
Figure 1. Illustration of the distinction between precision and systematic biases, and their relation to accuracy. The three targets depict: an inaccurate measurement (systematic bias may be present; poor reproducibility, i.e., imprecise); a precise but inaccurate measurement (systematic bias present, with the length of B a measure of the bias; good reproducibility, i.e., precise); and an accurate measurement (systematic bias removed; precision maintained).
Systematic Uncertainty. For practical purposes, we may define systematic uncertainty to consist of those sources of inaccuracy which are biased, and those which may be due to random causes (stochastic processes) but cannot be or are not assessed by statistical methods. This broader definition, which includes sources of inaccuracy in addition to biases, is necessary to obtain a complete accounting of all sources of inaccuracy in the total uncertainty. It is the total uncertainty (obtained by combining the random and systematic components) that is ultimately used to estimate the inaccuracy.
"Blunders" and Data Rejection
Before proceeding to discussions of random
and systematic uncertainties, it may Tse useful
to consider the other source of inaccuracy.
These are the outright mistakes or "blunders" in
the measurement process. If one or more blun-
ders are known to exist, then the value should be
discarded, or a correction should be applied to
the measurement. Parratt [Ref. 11, p. 69] de-
scribed some possible blunders and their sources.
A few examples which could easily be applied to
environmental radiation measurements follow:
1. Incorrect logic, 'or misunderstanding
what one is or should b~e doing. For.
example, applying a decay correction
in the wrong direction.
2. Misreading of an instrument. Perhaps
a field survey meter is an exigent
situation, such as on a cold rainy
day.
3. Errors in transcription of data which
might occur in recording a sealer read-
ing, transferring a value from one page
to another, or even mis-entering a num-
ber into a calculator or computer ter-
minal.
o-
4. Calculational and arithmetical mistakes,
which include misplaced decimal points,
etc.
Interestingly, Parratt considered confusion of
units and listing of an improper number of signi-
ficant figures to be blunders.
One must recognize that blunders do and will occur. With an appreciation of this fact, every effort should be made to detect or avoid them. Many can be detected by establishing a system of independent checks on each stage of the measurement and data reduction system. A simple but effective system is an informal arrangement with a partner, colleague, or co-worker to check each other's work. It should include checking all stages of the data reduction (arithmetic, etc.) and an independent appraisal of the logic and the measurement process used to obtain the reported value. This checking is also useful in trying to assess all of the sources of systematic error. In effect, the checker serves as a "devil's advocate" to discover where the blunders may be.
Lastly, there is the question of rejecting suspicious or questionable data. When replicate results are available, standard statistical tests may be applied to decide if outliers should be rejected. Details on applying rejection tests can be found in Natrella [13, p. 17-1], Dixon and Massey, and other standard statistical texts. Although this is an acceptable practice, caution must be exercised. Rejection of data may result in an unrealistically self-consistent set and may actually bias the result. A further problem is that it may mask a real, anomalous and unexpected effect. Had Lord Rutherford's graduate students, Geiger and Marsden, applied data rejection tests to the small number of "unexpected" large scatterings in their α-particle scattering experiments, they might have failed to discover or interpret the existence of the atomic nucleus.
Jaffey outlined three reasons why one should be conservative in rejecting moderate outliers:
a) Apparent patterns in sequences of random data are often startling. In the long run, averaging only bunched results gives averages that deviate more from true values than do means of all values.
b) As the number of measurements increases, the probability of an outlier increases.
c) In a large group of measurements, omission of the outlier has little effect on the average, although it makes the data look better by decreasing scatter.
One of the purposes of environmental radiation surveillance is to detect (perhaps "unexpected") changes in trends. Judgement is necessary in order to avoid rejecting a result which in fact reflects an important effect. A practical guide is to let the suspicious result serve as a stimulant to find out what went wrong or what happened.
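As an illustration of the kind of screening that might precede any rejection decision, the Python sketch below applies Chauvenet's criterion, a simple classical rule chosen here for brevity (it is not necessarily the procedure given in the references cited above); the replicate values are hypothetical.

    import math

    def chauvenet_flags(values):
        """Flag values whose deviation from the mean is improbably large under a
        Normal assumption: flag if n * P(|z| >= z_value) < 0.5 (Chauvenet's criterion)."""
        n = len(values)
        mean = sum(values) / n
        s = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
        flags = []
        for v in values:
            z = abs(v - mean) / s
            p_tail = math.erfc(z / math.sqrt(2.0))   # two-sided Normal tail probability
            flags.append(n * p_tail < 0.5)
        return flags

    # Hypothetical replicate results (Bq) containing one suspicious value.
    results = [14.8, 15.2, 15.0, 14.9, 21.7, 15.1]
    for value, suspect in zip(results, chauvenet_flags(results)):
        print(f"{value:6.1f}  {'suspect' if suspect else 'ok'}")

Consistent with the guidance above, a flagged value is best treated as a prompt to investigate the measurement, not as an automatic rejection.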
Statistical Treatment of Random Uncertainties
As described in the preceding section, random uncertainties are those which can be treated by statistical methods, and are derived from an analysis of replicate observations of a random or stochastic process. Numerous excellent sources on statistical theory are available [Cf., Ref. 13 and 15] and should be consulted for greater detail or a more rigorous treatment. Only the results and a small amount of background information are presented here.
Before proceeding, some terminology is necessary. The term variate (or random variable) is used to denote the quantity which may take on any of the observed values. The aggregate of these observations is termed a sample of some parent population, and may be described by a frequency distribution. This distribution of the population is a specification of the way in which the number of observations (frequencies) are distributed according to the values of the variates. The parameters of a population are the descriptive measures of the distribution. The mean (μ), a measure of the center or location of the distribution, and the standard deviation (σ), a measure of the spread or scatter of the distribution, are examples of parameters. The mean (μ) is also termed the first moment of the distribution, and the square of the standard deviation (σ²), called the variance, is the second central moment. In the absence of an infinite population, one must make estimates of the parameters from finite populations (the sample of observations). A sample statistic is the estimator of the population parameter. The values of sample statistics are computed entirely from the sample and are the basic measures of the central tendency (location) and dispersion (variation). The mean (x̄) and standard deviation (s) are widely known examples of statistics. Unfortunately, the distinction between population parameters and sample statistics is frequently ignored; the two are often confused and incorrectly spoken of interchangeably. The distinction may be summarized as follows: statistics (e.g., x̄ and s²), calculated from the sample, are used to estimate the corresponding parameters of the population (e.g., μ and σ²). In practice, the parameters of the population are denoted by Greek alphabet characters, and the corresponding estimators of these parameters (the statistics) by Roman alphabet characters. Table 3 lists a number of commonly used parameters and statistics.
The population distribution must be known before one can proceed with the treatment of random uncertainties. A rigorous analysis would require confirmation that the sample of observations follows a Normal or some other known distribution. Numerous statistical tests, such as the χ²-, t- and F-tests, are available for this. Standard statistical sources such as References 11, 13, 15, 16, and 19 may be consulted for details. These tests are not always practical, particularly since they are not very applicable with samples of less than about 30 observations. With fewer observations, a Normal (or Gauss) distribution, which is completely characterized by the mean and variance, is assumed. For some other distributions, further parameters, such as skewness (third central moment) or peakedness (fourth central moment), may be necessary [Cf., Ref. 11, p. 94]. The justification for this assumption of normality is based on precedent. The Normal distribution can be viewed as a mathematical result empirically shown to be valid for a large number of different experimental situations. It is still an assumption, and it is well worthwhile to make a visual examination of the data for any marked departures from normality. There are some simple procedures to do this; they include construction of a histogram or graphical tests using probability paper [Cf., Ref. 17, p. 6-14 and Ref. 15, p. 55]. The discussion of random uncertainties which follows assumes that a Normal distribution is justifiable. Provided the departures are not too great, this subsequent treatment is not absolutely dependent on a Normal population distribution; the Central Limit Theorem further predicts that the convolution or folding-together of non-Normal distributions tends to form Normal distributions. The probabilities for some typical intervals in the Normal distribution are provided in Table 4. As stated before, an analysis of the observed values will be used to estimate μ and σ².
Sample Mean and Standard Deviation. For n measurements of x, the best estimate of the parameter μ is obtained from the mean (x̄) of the sample, and the best estimate of σ² from the variance (s_x²), where

    x̄ = (1/n) Σ_{i=1}^{n} x_i                                  (1)

and

    s_x² = [1/(n-1)] Σ_{i=1}^{n} (x_i - x̄)²                     (2)
TABLE 3

    Population Parameters                                Sample Statistics (Estimators of Parameters)
    μ      (mean - first moment)                         x̄
    σ²     (variance - second central moment)            s_x²
    σ      (standard deviation of x about μ)             s_x = √(s_x²)
    σ_x̄    (standard error of the mean, or               s_x̄ = s_x/√n
            standard deviation of the average)
    σ_xy = σ_yx  (covariance)                            s_xy
    (σ/μ)(100)   (coefficient of variation, or           (s_x/x̄)(100)
            relative standard deviation,
            expressed in per cent)

The sample standard deviation is the square root of the variance, or the quantity s_x. It refers to the standard deviation computed from a sample of measurements.
TABLE 4

    k         Probability (%) of x having a value within the interval (μ - kσ_x) to (μ + kσ_x)
    0.6745    50
    1.000     68.269
    1.960     95
    2.000     95.450
    2.576     99
    3.000     99.73
Standard Error of the Mean. Any mean x̄ is determined from a finite number of measurements. If the determination is repeated, one can obtain a series of slightly different x̄ values. According to the Central Limit Theorem, for large n, the distribution of these x̄ values will be close to Normal for any distribution of x. Thus, a standard deviation of this distribution could be obtained from repeat determinations of x̄. It may, however, also be estimated from just the measurements used in a single determination of x̄. This estimate of the precision of the mean is termed the standard error of the mean, which is given by

    s_x̄² = s_x²/n = [1/(n(n-1))] Σ_{i=1}^{n} (x_i - x̄)²         (3)

The quantity s_x̄² is termed the variance of the mean. The standard error of the mean (s_x̄) must not be confused with the sample standard deviation (s_x). The standard deviation s_x is only dependent on the measurement precision, while s_x̄ depends on both the precision and the number of observations.
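The calculations of Eqs. (1) through (3) are summarized in the short Python sketch below; the replicate values are hypothetical and the variable names are illustrative only.

    import math

    # Hypothetical replicate measurements of the same quantity (Bq).
    x = [12.1, 11.7, 12.4, 11.9, 12.3]
    n = len(x)

    xbar = sum(x) / n                                    # Eq. (1): sample mean
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)     # Eq. (2): sample variance
    s_x = math.sqrt(s2)                                  # sample standard deviation
    s_xbar = s_x / math.sqrt(n)                          # Eq. (3): standard error of the mean

    print(f"mean = {xbar:.2f} Bq,  s_x = {s_x:.2f} Bq,  s_xbar = {s_xbar:.2f} Bq")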
Propagation of Random Uncertainties (Measurements Involving Several Physical Quantities). The uncertainties that have been described up to now are those derived by statistical methods from replicate measurements. A reported value is seldom measured directly, and its associated total random uncertainty is usually not entirely due to random fluctuations occurring just during that measurement. The reported value may be derived from quantities that were constant during the measurement (e.g., an instrument calibration factor) or from several measured quantities. These quantities, including constants, have uncertainties in their numerical values due to random causes. Estimates of their uncertainty may have been previously derived from replicate measurements at the time the values were determined. It is necessary to combine (or propagate) these individual contributions into a total random uncertainty term for the reported value.
The statistical theory of errors used to combine the separate variances of the quantities involved is usually based on the assumption of Normal distribution functions. This results in the very familiar "propagation of error formula," which is based on just the first-order Taylor series expansion of the functional form of x about the mean of x (see Ku [20] for a derivation and greater detail). If the reported value of x is determined by a combination of independent variables u, v, w, ..., which is given by
    x = f(u, v, w, ...)                                         (4)

then the estimated variance of x is

    s_x² = (∂f/∂u)² s_u² + (∂f/∂v)² s_v² + (∂f/∂w)² s_w² + ... + covariance terms      (5)

where s_u², s_v², s_w², ... are the estimated variances of u, v, w, respectively. The partial derivatives are evaluated with all other variables fixed at their mean values. For simplicity, let a_i correspond to the quantity (∂f/∂i)² s_i² for the ith component. Then,

    s_x² = Σ_i a_i                                              (6)

This treatment assumes that the terms of higher order than the first partial derivatives are negligible,* and that all of the component quantities are independent and not correlated with each other. This is frequently the case, but not necessarily always true. For details of treating partially correlated components, the reader is referred to Ku [20] or other standard statistical sources. It is always necessary to consider the basic functional form of the component variables before assuming a particular propagation formula. Error propagation formulae for some simple, common functions are provided in Table 5.

Grand Averaging and Weighted Means. The preceding discussion concerned only the calculation and propagation of mean values and their variances from a set of independent measurements of the same quantity. Frequently, one may have estimates of the variance of each individual value in the set. This may arise from estimates of the "counting errors" derived from the total number of counts in single determinations (as will be described shortly), or when computing grand averages of previously averaged values where a variance was calculated for each individual mean. In this case, one may intuitively feel that the averaging process should be performed by giving greater weight to those values that have greater precision. It can be shown that for a set of m independent values of the quantity x (all taken from the same population), each with mean x_j and variance s_xj², the best estimate of the population mean is obtained with a weighting factor of 1/s_xj² for each x_j. Thus, the overall mean (grand average) is given by

    x̄ = [ Σ_{j=1}^{m} (x_j / s_xj²) ] / [ Σ_{j=1}^{m} (1 / s_xj²) ]        (7)

The use of this weighting factor results in a mean that has the least variance. This variance is

    s_x̄² = 1 / Σ_{j=1}^{m} (1 / s_xj²)                          (8)

This treatment is not applicable if any of the x_j values are correlated with any of the others. Therefore, the weighting factors, which are based on the reciprocals of the individual uncertainties, must not include any contribution from systematic biases, but only those uncertainties due to independent random causes. One could, just as well, have grand averaged the results without weighting. That is,

    x̄ = (1/m) Σ_{j=1}^{m} x_j                                   (9)

with

    s_x̄² = [1/(m(m-1))] Σ_{j=1}^{m} (x_j - x̄)²                  (10)

*This is based on the assumption that over the range of the variables given by their errors, only small changes in the partial derivatives occur. If the errors are large, the second partial and partial cross derivatives should be included. Cf., Ku [20].
This latter approach results in a mean that does not have the least variance. It can, however, serve as a useful guide. In general, one should always treat with suspicion any result that has significantly different weighted and unweighted means.
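The comparison suggested above can be carried out as in the following Python sketch, which computes the weighted mean of Eqs. (7) and (8) and the unweighted grand average of Eqs. (9) and (10) for a set of hypothetical values.

    import math

    # Hypothetical independent results x_j with their individual variances s_xj^2.
    x = [10.2, 9.8, 10.5, 11.4]
    var = [0.04, 0.09, 0.04, 0.36]
    m = len(x)

    # Weighted mean, Eqs. (7) and (8): each value is weighted by 1/s_xj^2.
    w = [1.0 / v for v in var]
    xbar_w = sum(wj * xj for wj, xj in zip(w, x)) / sum(w)
    var_w = 1.0 / sum(w)

    # Unweighted grand average, Eqs. (9) and (10).
    xbar_u = sum(x) / m
    var_u = sum((xj - xbar_u) ** 2 for xj in x) / (m * (m - 1))

    print(f"weighted mean   = {xbar_w:.2f} +/- {math.sqrt(var_w):.2f}")
    print(f"unweighted mean = {xbar_u:.2f} +/- {math.sqrt(var_u):.2f}")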
Statistics of Radioactive Decay (Poisson Distribution). In addition to the usual random uncertainties associated with replicate observations (discussed above), radioactivity measurements are also subject to a random variation arising from the nature of the radioactive decay process itself. The rate of radioactive decay is not a constant with time, but fluctuates randomly about a mean or expectation value. Thus, decay can be described by the Poisson probability distribution function (see the cited references for a derivation of the Poisson function as a limiting case of the Binomial, or Bernoulli, distribution).
TABLE 5
Propagation of Error Formulas for Some Simple Functions†

    Function form†            Approximate variance
    x = au ± bv               s_x² = a²s_u² + b²s_v²
    x = auv                   s_x²/x² = s_u²/u² + s_v²/v²
    x = au/v                  s_x²/x² = s_u²/u² + s_v²/v²
    x = a√u                   s_x²/x² = (1/4)(s_u²/u²)
    x = c u^(±a) v^(±b)       s_x²/x² = a²(s_u²/u²) + b²(s_v²/v²)
    x = a ln(±bu)             s_x² = a²s_u²/u²

(If u and v are correlated, an additional covariance term must be included in each formula.)

†Where a, b and c are constants; u and v are assumed to be statistically independent, and the value of x is finite and real [e.g., v ≠ 0 for ratios with v as denominator, u > 0 for √u and ln u, and (±bu) > 0 for ln(±bu)].

The Poisson assumption allows one to estimate, from a single measurement, the spread or scatter of a measured number of counts (c) about the mean value n. Assuming that the total number of counts c obtained in the measurement is the best estimate (or is a good estimate of
the mean value that would be obtained from a large number of replicate measurements) of the mean value n, i.e.,

    c → n

then the best estimate of the standard deviation in n may be calculated by

    s_n = √c                                                    (11)

where the arrows are read as "used to estimate." Since the counting rate (r) measured over a given counting time T is determined by c, the mean rate (r) and its standard deviation are estimated by

    r = c/T                                                     (12)

and

    s_r = √c / T                                                (13)

The relative standard deviation in r (s_r/r) obviously decreases with increasing total number of counts. Similarly, an increase in the counting time of a source with a given rate decreases s_r.
Background Subtraction. For most environmental radiation measurements, the counting rate of a source (r_s) is determined by subtracting a background rate (r_b) from a gross (source plus background) rate. That is,

    r_s = r_s+b - r_b

By the propagation of errors, as described earlier, the standard deviation in r_s is estimated by

    s_rs = √( s_rs+b² + s_rb² ) = √( r_s+b/T_s + r_b/T_b ) = √( c_s+b/T_s² + c_b/T_b² )      (14)

where T_s and T_b refer to the gross and background counting times, respectively. These and similar equations which can be derived form the basis for the experimental design of radioactivity measurements (e.g., optimizing the division of counting time between T_s and T_b). See, for instance, Ref. 21 for greater detail on radioactivity measurement procedures and the processing of counting data.
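A minimal Python sketch of Eqs. (12) through (14), using hypothetical gross and background counts and counting times:

    import math

    # Hypothetical counting data.
    c_gross, t_gross = 2450, 1000.0    # gross (source + background) counts and time (s)
    c_bkg, t_bkg = 1800, 2000.0        # background counts and time (s)

    r_gross = c_gross / t_gross        # Eq. (12): gross counting rate
    r_bkg = c_bkg / t_bkg              # background counting rate
    r_net = r_gross - r_bkg            # net counting rate

    # Eq. (14): Poisson counting uncertainty of the net rate.
    s_net = math.sqrt(c_gross / t_gross ** 2 + c_bkg / t_bkg ** 2)

    print(f"net rate = {r_net:.4f} +/- {s_net:.4f} counts per second")

As the next subsection emphasizes, this figure reflects only the Poisson component of the random uncertainty.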
Random Uncertainty Components. The standard deviations described above, which are estimated from the total number of counts in a single determination of the counting rate, are usually termed the "counting errors." Although the use of this term will probably continue to prevail, the term is a misnomer. It takes into consideration only the random scatter about the mean from the radioactive decay process itself. To presume that this is the only source of random fluctuation in the overall measurement or counting process is fatuous. Unfortunately, federal guidance, as it pertains to reporting environmental radiation data, does not suggest otherwise. Other sources may be random timing uncertainties, random variations in the sample preparation, positioning of the sample at the detector, etc. The list is nearly endless. Some of these are addressed in Ref. 1.
The only way to realistically assess the overall random uncertainty is to make replicate determinations and calculate the standard deviation (or variance) of the mean by the usual statistical methods (e.g., by Eq. 2). For reported values derived from several component quantities, the individual uncertainty estimates (obtained from replicate measurements for each independent variable) must be propagated (e.g., by Eq. 5) to obtain the overall standard deviation (or variance) of the final value.
It is recognized that replicate measurements of environmental samples are uncommon and may not be feasible because of time and cost constraints. Yet an uncertainty assessment based on some procedure beyond calculating the square root of the total number of counts is clearly needed. A number of procedures, such as the Poisson index of dispersion test, are available to test for the presence of extraneous measurement variations beyond that inherent in the radioactive decay process itself (Cf., Ref. 14; Ref. 16; Ref. 22, p. 789; Ref. 23, p. 64). Again, such tests require multiple measurements, at least occasionally, to assess the magnitude of the overall variability from all random causes. Various control charts (Cf., Natrella 13, p. 18-1 and Ref. 24, p. 32) can also be used as a continuous graphical record of this variability. It must be emphasized, however, that these estimates of random uncertainty should not be based exclusively on the information derived from just the present measurements. Presently derived information should be added to the information accumulated in the past on the variability of the measurement process. In this way, more realistic and reliable canonical values of the random uncertainty estimates may be established over time. Ideally, every major step or component of the measurement process should be independently assessed. This would include not only the variability inherent in the particular measurement of concern but also the imprecision arising from corrections, constants, calibration factors and any other measurements that make up the final result.
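One common formulation of the Poisson index of dispersion test mentioned above is sketched below in Python; the cited references should be consulted for the exact procedures they recommend, and the replicate counts are hypothetical.

    from scipy import stats

    # Hypothetical replicate counts taken under nominally identical conditions.
    counts = [980, 1012, 955, 1046, 1003, 968, 1031, 990]
    n = len(counts)
    mean = sum(counts) / n

    # Index of dispersion: sum of (c_i - mean)^2 / mean, referred to a chi-square
    # distribution with n-1 degrees of freedom under the pure-Poisson hypothesis.
    chi2_stat = sum((c - mean) ** 2 for c in counts) / mean
    p_value = stats.chi2.sf(chi2_stat, df=n - 1)

    # A very small p-value indicates variability beyond counting statistics alone.
    print(f"index of dispersion = {chi2_stat:.2f} (df = {n - 1}), p = {p_value:.2f}")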
A practical procedure for making a more complete uncertainty assessment will be discussed later. So far, we have only addressed the calculation of statistics (e.g., the standard deviation) from replicate measurements, and the estimation of the Poisson counting error from a single measurement. In either case, in order to utilize these pieces in an overall uncertainty estimate, we need to consider the assessment of systematic uncertainties.
Assessment of Systematic Uncertainties
The distinction between random and systematic
uncertainties was demonstrated earlier. In prac-
tice, the systematic uncertainties can be con-
sidered to be those sources of inaccuracy which
are biased and not subject to random fluctuations,
and those which may be due to random causes but
can not be or are not assessed by statistical
methods. Although a general guideline for the
approach to the assessment can be formulated, there
are, unfortunately, no rules to objectively assign
a magnitude to the systematic uncertainties. For
the most part, it is a subjective process. Their
magnitudes would preferably be based on experi-
mental verification, but may have to rely on the
judgement and experience of the experimenter. The
general approach is:
Consider and identify all of the conceiv-
able sources of inaccuracy. This requires
a careful scrutiny of every detail and as-
pect of the measurement that can affect
the reported value. The "devil's advocate"
can be a useful aid in this step.
From the above set, extract the blunders
and those sources which can and have been
(or will be) assessed by statistical methods
(i.e., the random uncertainty contributions).
Assign a magnitude to the conceivable limit
of uncertainty for each of the remaining
sources.
There are at least two general reference frames for estimating the magnitude of the systematic uncertainties. One is to consider them as upper bounds, overall or maximum limits. The other is to express them in terms of some probability (e.g., the limits within which the true value is expected to lie in two out of three cases). The latter approach is not recommended since it implies some knowledge of their probability distribution, which is not likely [26].
In general, each systematic uncertainty contribution is considered as a quasi-absolute upper bound, overall or maximum limit on its inaccuracy value. Its magnitude is typically estimated in terms of a semi-range of plus or minus δ about the mean of the measurement result. If the reported value or mean x is determined by a combination of independent variables [as in Eq. (4)],

    x = f(u, v, w, ...),

then the contribution to the systematic uncertainty in x due to the estimated systematic uncertainty δ_u in one of the variables (u) is

    δ_x,u = (∂f/∂u) δ_u                                         (15)

(assuming that the variables are independent of each other).
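The contribution described by Eq. (15) can be evaluated numerically, as in the hypothetical Python sketch below, where the partial derivative is approximated by a finite difference; the function and numbers are illustrative only.

    def activity_conc(net_rate, efficiency, volume_l):
        """Hypothetical reported result: activity concentration (Bq per litre)."""
        return net_rate / (efficiency * volume_l)

    net_rate, efficiency, volume = 1.55, 0.25, 3.5
    delta_eff = 0.01     # estimated systematic semi-range on the efficiency

    # Eq. (15): contribution to the systematic uncertainty in x from one variable,
    # (df/du) * delta_u, with the derivative estimated by a symmetric difference.
    h = 1.0e-6
    dfdu = (activity_conc(net_rate, efficiency + h, volume)
            - activity_conc(net_rate, efficiency - h, volume)) / (2.0 * h)
    delta_x = abs(dfdu) * delta_eff

    x = activity_conc(net_rate, efficiency, volume)
    print(f"x = {x:.3f} Bq/L, systematic bound from efficiency = {delta_x:.3f} Bq/L")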
By what method then should the magnitude of these maximum limits to the systematic uncertainties be assigned? It may be based on a comparison to a standard, or on verification with two or more independent and reliable measurement methods. Additionally, it may be
based on judgement;
based on experience;
based on intuition;
based on measurements and data (e.g., by "varying the factor at issue to its extreme range and noting the change on the results," or by comparison to other methods of measurement, other instruments, etc.); or
just a pure guess [28] (which is not recommended, but may be better than nothing).
Or it may include combinations of some, or all, of the above factors.
The fact that a conceivable source of inaccuracy is ascribable to random causes is not a sufficient condition for treating it as a random uncertainty in error reporting. It must be assessable by statistical methods derived from an analysis of repeated measurements. In the absence of such data, the effect of this random cause must be treated as a systematic uncertainty. A program to design and execute experiments (Cf., Ref. 13, 14, and 29 for statistical designs of experiments) to determine the effects of all such factors and considerations requires resources beyond the capacity of most laboratories. Yet a conscientious effort must be made to assess every source of inaccuracy.
The process of assessing the systematic uncertainties is not a simple task. Because of the complexity and its rather subjective nature, it has frequently been ignored. This has resulted in optimistic and underestimated overall uncertainty assessments. This is due as much to overlooking and ignoring possible sources of inaccuracy as to underestimating those that are known. To aid the experimenter in this process, a list of conceivable sources of inaccuracy and some suggestions for treating them is provided in Ref. 1.
Confidence Limits
The net result from a single counting experiment (x) or the calculated mean from multiple measurements (x̄) is used as the best estimate of the population parameter μ_x and hence of the true value (in the absence of bias). Because of the associated statistical fluctuations about x or x̄, it may not be exactly equal to μ_x. The confidence interval is the range of possible values on either side of x or x̄ within which μ_x can be expected to fall. Confidence limits are the numerical values at the limits of this range. The parameter μ_x can be expected to lie within the confidence limits with a given probability. This probability, usually expressed as a percentage, is called the confidence coefficient.* Alternatively, the confidence coefficient can be considered to be the probability that the confidence interval will include the parameter μ_x. These concepts are illustrated below for a mean x̄:

    CONFIDENCE LIMITS are the values (x̄ - L₁) and (x̄ + L₂) of the measured variable x.
    CONFIDENCE INTERVAL is the set of possible values between (x̄ - L₁) and (x̄ + L₂).
    CONFIDENCE COEFFICIENT (P) is the probability (in %) that μ_x will lie within the confidence interval, or between the confidence limits.

When addressing the subject of confidence limits, what is sought is a procedure to calculate the confidence limits for a given confidence coefficient (probability), or to determine the probability for some confidence interval.

*The confidence coefficient is often referred to as the "confidence level". Although the latter usage is probably more familiar to most readers, it is in conflict and often confused with the term "significance level" which is employed in the statistical literature. The significance level (α) associated with the specification of a confidence interval is the probability that the interval will not include the true value. The confidence coefficient (1 - α), which is often expressed as a percentage P, is the term used to describe the probability that a confidence interval will include the true value. For example, a 95% confidence interval is one for which the confidence coefficient (P), not the significance level (α), is 95%. In this case, the significance level is 5%. To avoid this confusion, the term "confidence level" is deliberately avoided and, therefore, not employed.
Confidence Limits for Systematic Uncertainty
As discussed in the preceding section, a systematic uncertainty is an estimated upper or maximum bound for each conceivable and assessable contributing systematic source of inaccuracy. That is, there is a high degree of confidence (high likelihood or large probability) that the systematic uncertainty due to this contributing source of inaccuracy would not exceed the numerical value of this stated systematic uncertainty. Therefore, a systematic uncertainty which is estimated in terms of a semi-range may be regarded as a 99% or greater confidence interval. Although the entire subject of systematic uncertainty is characterized by an uneasy nebulousness, the above approach, which is unsupported by statistical or physical principles, does contain a degree of decisiveness. One can hardly argue that a better estimate than that provided by the conscientious experimenter is obtainable.
Confidence Limits for Random Uncertainty (Normal Distribution)
Fortunately, one can rely on statistical theory when considering the confidence limits for random uncertainties. A common practice is to express the confidence limits in terms of
some multiple of the standard deviation or standard error of the mean. For results from multiple measurements, the confidence limits for the mean μ may be expressed in terms of some integral multiple of the calculated standard deviation s_x̄, i.e.,

    x̄ ± s_x̄ ,   x̄ ± 2s_x̄ ,   x̄ ± 3s_x̄ , etc.

If the estimated statistic s_x̄ was derived from a large number of measurements n, and if the distribution of x̄ values is Normal, then these are 68.3%, 95.5% and 99.7% confidence intervals, respectively. It must be emphasized that these confidence coefficients apply only when n is very large and when the distribution is Normal. As the number of measurements n decreases, the confidence coefficient or probability that μ_x lies within a given confidence interval gradually diminishes. Table 6, taken from [25], demonstrates the effect of various values of n on the confidence coefficients. In order to determine the magnitude of a confidence interval (L₁ + L₂) at a given confidence coefficient (P), we need first to consider the number of measurements, or what is termed the number of degrees of freedom.
Degrees of Freedom. For purposes of this guide, the number of degrees of freedom (v) for a calculated statistic (e.g., s_x) is the number of measurements (n) in excess of those needed to determine the estimate of the population parameter of interest. For example, we noted previously that the precision of the mean (x̄) may be estimated in terms of the standard error of the mean (s_x̄). Recalling the definition of the variance, given by Eq. (2), the number of degrees of freedom in s_x̄ is the number of independent terms in the residual sum of squares, Σ(x_i - x̄)². Thus, in calculating the mean of n observations, the number of degrees of freedom (v) in s_x̄ is (n-1).
If x is derived from a combination of more than one variable [see Eq. (5)], the number of degrees of freedom does not have an exact meaning. An approximation, derived by Welch [30], can be used to obtain an effective number of degrees of freedom (v_eff) in s_x². This is given by

    v_eff = ( Σ_i a_i )² / Σ_i ( a_i² / v_i )                   (16)

where v_i is the number of degrees of freedom in s_i² for the ith component, and a_i is defined by Eq. (6). The value of v_eff calculated in this way is sufficiently accurate in most applications. When the v_i for the individual components become small, the approximation becomes less valid [31]. Examples illustrating this type of calculation for n measurements of a sample and k measurements of a standard are provided in NCRP Report 12 [32].
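The approximation of Eq. (16) can be computed as in the brief Python sketch below, in which each a_i is a variance contribution as defined by Eq. (6) and v_i is its number of degrees of freedom; the numerical values are hypothetical.

    def effective_dof(contributions):
        """Welch approximation, Eq. (16): v_eff = (sum a_i)^2 / sum(a_i^2 / v_i).
        Each entry is a pair (a_i, v_i)."""
        total = sum(a for a, _ in contributions)
        return total ** 2 / sum(a ** 2 / v for a, v in contributions)

    # Hypothetical variance contributions: sample counting, calibration, blank.
    contributions = [(0.012, 9), (0.004, 4), (0.002, 5)]
    print(f"v_eff = {effective_dof(contributions):.1f}")

The resulting v_eff is usually fractional; as discussed below, it may be interpolated in Table 7 or rounded down to the next integral value.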
Confidence Limits for Random Uncertainty (t-Distribution). With a value of v (or v_eff) and s_x̄ for a mean x̄, one can determine the confidence interval for any given confidence coefficient P. The confidence interval about the mean x̄ is symmetrical for a Normal distribution, and the confidence limits may be calculated by

    L₁ = L₂ = t_v(P) s_x̄                                        (17)

where t_v(P) is the Student-t value for a confidence coefficient P and v degrees of freedom. Table 7 contains values of the t-statistic as a function of v for various values of the confidence coefficient P.
TABLE 6
Confidence coefficients P for the x̄ ± s_x̄, x̄ ± 2s_x̄, and x̄ ± 3s_x̄ confidence intervals for the Normal distribution with various values of n [16].

    CONFIDENCE                CONFIDENCE COEFFICIENT (P) IN %
    INTERVAL          n=5        n=10       n=20       n=∞
    x̄ ± s_x̄           62.6       65.7       67.0       68.269
    x̄ ± 2s_x̄          88.4       92.3       94.0       95.450
    x̄ ± 3s_x̄          96.0       98.5       99.3       99.730
    x̄ ± 4s_x̄            -          -          -        99.994
When more than one variable is involved, the value of v_eff [from Eq. (16)] is usually fractional. The value of t may then be obtained from the table by interpolation, or v_eff may be rounded down to the next integral value. Thus, the random uncertainty could be expressed, in terms of the calculated or propagated s_x̄ and v, as the confidence limits (±t s_x̄) for a given confidence coefficient P. As before, this process of going to higher confidence intervals requires an assumption about the form of the distribution and is dependent on the number of measurements. Table 7 illustrates that the common practice of equating 3s_x̄ with the 99% confidence interval is erroneous for v less than about 9 or 10.
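Where Student-t values are needed for arbitrary v and P, they can also be obtained numerically rather than from Table 7, as in the hypothetical Python sketch of Eq. (17) below (for v = 4 and P = 95% it returns t = 2.776, in agreement with Table 7).

    from scipy import stats

    xbar, s_xbar, n = 12.08, 0.12, 5     # hypothetical mean, standard error, sample size
    v = n - 1                            # degrees of freedom
    P = 0.95                             # confidence coefficient

    # Eq. (17): L1 = L2 = t_v(P) * s_xbar, with t_v(P) the two-sided Student-t value.
    t = stats.t.ppf(0.5 + P / 2.0, df=v)
    L = t * s_xbar

    print(f"{P:.0%} confidence interval: {xbar - L:.2f} to {xbar + L:.2f}  (t = {t:.3f})")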
Confidence Limits for Random Uncertainty (Poisson Distribution). An analogous situation exists when considering the confidence limits for the Poisson counting error from single determinations. In this case, one assumes the counting process can be described by a Poisson distribution, and the dispersion statistic (e.g., the standard deviation) is estimated from the number of counts obtained in the measurement [such as by Eqs. (11), (13), and (14)]. As described before, the Poisson distribution approximates the Normal for very large counts (>100). Then, the confidence limits for the number of counts c could be given by

    c ± t_∞(P) √c                                               (18)

where t_∞ is the Student-t value for infinite degrees of freedom corresponding to a given confidence coefficient for the Normal distribution. For example, t_∞ is 1.96 and 2.58 for the 95% and 99% confidence coefficients, respectively (refer to Tables 4 and 7). This estimate will, of course, be more correct the larger the number of counts. For small numbers of counts, the Poisson distribution no longer approximates the Normal very well; the confidence limits become asymmetrical and the approximation of Eq. (18) poorly estimates the confidence interval. For example, the lower and upper confidence limits for a low number of counts c may be given by (c - L₁) and (c + L₂), respectively, where
TABLE 7
Student t-Values Corresponding to a Given Confidence Coefficient P for Use with an Estimated Standard Deviation for a Normal Distribution Based on v Degrees of Freedom.

      v     68.269%    95.000%    95.450%    99.000%    99.730%
      1      1.837     12.706     13.968     63.657     235.80
      2      1.321      4.303      4.527      9.925      19.207
      3      1.197      3.182      3.307      5.841       9.219
      4      1.142      2.776      2.869      4.604       6.620
      5      1.111      2.571      2.649      4.032       5.507
      6      1.091      2.447      2.517      3.707       4.904
      7      1.077      2.365      2.429      3.499       4.530
      8      1.067      2.306      2.366      3.355       4.277
      9      1.059      2.262      2.320      3.250       4.094
     10      1.053      2.228      2.284      3.169       3.957
     11      1.048      2.201      2.255      3.106       3.850
     12      1.043      2.179      2.231      3.055       3.764
     13      1.040      2.160      2.212      3.012       3.694
     14      1.037      2.145      2.195      2.977       3.636
     15      1.034      2.131      2.181      2.947       3.586
     16      1.032      2.120      2.169      2.921       3.544
     17      1.030      2.110      2.158      2.898       3.507
     18      1.029      2.101      2.149      2.878       3.475
     19      1.027      2.093      2.140      2.861       3.447
     20      1.026      2.086      2.133      2.845       3.422
     25      1.020      2.060      2.105      2.787       3.330
     30      1.017      2.042      2.087      2.750       3.270
     40      1.013      2.021      2.064      2.704       3.199
     60      1.008      2.000      2.043      2.660       3.130
    120      1.005      1.980      2.023      2.617       3.069
     ∞       1.000      1.960      2.000      2.576       3.000

Adapted from Brian L. Joiner, "Student t-Deviate Corresponding to a Given Normal Deviate," J. Res. NBS 73C, 15 (1969); and CRC Handbook of Mathematical Tables, First Ed., Chemical Rubber Publishing Co., Cleveland, Ohio (1962), p. 271.
    L₁ = T₁(P) √c   and   L₂ = T₂(P) √c                         (19)

Values of T as a function of the number of counts for confidence coefficients of P=68%, P=95%, and P=99% are illustrated in Figure 2 and tabulated in Table 8. As indicated, for low counts (under 100 or so), the Normal approximation of Eq. (18) can give a result which is in error. As a result, special tables [33] are needed to obtain appropriate confidence limits for small numbers of counts.
From Eq. (15) it follows that, for a counting rate r_s with standard deviation s_rs, the confidence limits for a higher confidence coefficient P would be given by r_s ± L, where

    L = √( [T_s+b(P) s_rs+b]² + [T_b(P) s_rb]² ) = √( T_s+b(P)² c_s+b/T_s² + T_b(P)² c_b/T_b² )      (20)

and the T(P) values correspond to the gross and background counts, respectively. Alternatively, the T values in Eq. (20) could be replaced with values of t_∞ when the Normal approximation is valid (i.e., for very large counts).
In summary, any consideration of confidence limits for single-determination or multiple-measurement situations is, first, distribution dependent, requiring an assumption about the underlying population distribution; and second, size dependent, requiring knowledge of the number of measurements, the degrees of freedom, or the magnitude of the counts.
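For small numbers of counts, exact Poisson confidence limits can also be computed directly from the chi-square distribution, as in the Python sketch below; this standard construction reproduces the tabulated factors of Table 8 (for c = 1 at P = 95% it gives T1 ≈ 0.975 and T2 ≈ 4.57). The loop values are illustrative.

    from scipy import stats

    def poisson_limits(c, P=0.95):
        """Exact lower and upper confidence limits for a Poisson mean, given c
        observed counts and a confidence coefficient P."""
        alpha = 1.0 - P
        lower = 0.0 if c == 0 else 0.5 * stats.chi2.ppf(alpha / 2.0, 2 * c)
        upper = 0.5 * stats.chi2.ppf(1.0 - alpha / 2.0, 2 * (c + 1))
        return lower, upper

    for c in (1, 4, 9, 16, 25, 50):
        lo, hi = poisson_limits(c)
        # Express as the asymmetric factors of Eq. (19): L1 = T1*sqrt(c), L2 = T2*sqrt(c).
        print(f"c = {c:3d}:  T1 = {(c - lo) / c ** 0.5:.2f},  T2 = {(hi - c) / c ** 0.5:.2f}")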
Reporting the Overall Uncertainty
If the reported uncertainty is to be a credible assessment of the likely accuracy, it must be based on as nearly a complete an assessment as possible and must consider all conceivable sources of inaccuracy.
Categorization of the Random and Systematic Uncertainty Components. A clear distinction between random and systematic uncertainties is often difficult and troublesome. In part, this is because many experimental processes embody both systematic and stochastic (random) elements.
In general, the random uncertainty contributions can be considered to be those sources of inaccuracy which can be and are assessed and propagated by statistical methods. Estimates of the population parameters, or statistics, are computed entirely from the measurement data.
The systematic uncertainty components can be considered to be the conceivable sources of inaccuracy which are biased and arise from non-stochastic systematic effects, as well as those which may be due to random causes but cannot be or are not assessed by statistical methods. Hence, it is meaningful to speak of a random uncertainty contribution only if one has a computed statistic for the magnitude of the random variation. Further, this does not imply that every conceivable source of inaccuracy (say, the chemical yield) lies in either just the random or just the systematic uncertainty category.
TABLE 8
Values of T Corresponding to a Given Confidence Coefficient P for Use with an Estimated Standard Deviation for a Poisson Distribution.*

                  P=68%              P=95%              P=99%
    COUNTS      T₁      T₂         T₁      T₂         T₁      T₂
       1       0.83    2.24       0.975   4.57       0.995   6.43
       4       0.95    1.60       1.46    3.12       1.66    4.30
       9       0.97    1.37       1.63    2.69       1.96    3.67
      16       0.98    1.28       1.71    2.50       2.11    3.37
      25       0.98    1.22       1.76    2.38       2.20    3.20
      50       0.98    1.15       1.82    2.25       2.31    3.01
       ∞       1.00    1.00       1.96    1.96       2.58    2.58

* Adapted from E. S. Pearson and H. O. Hartley, Biometrika Tables for Statisticians, Vol. 1, Cambridge Univ. Press (1954).
Figure 2. Standardized values of the variate T corresponding to a given confidence coefficient P, for use with an estimated standard deviation for a Poisson distribution, plotted as a function of the number of counts. For a Poisson distribution with mean and variance c, the upper confidence limit is (c + L₂) and the lower limit is (c - L₁).
One might obtain an estimate of the random uncertainty contribution by calculating a standard deviation from the results of multiple determinations of the chemical yield, and still have a systematic uncertainty in the chemical yield due to a bias in all of the mass measurements made in the determination of the yields. The diagram below illustrates the concept of making a complete assessment of all of the conceivable sources of inaccuracy, and their division between random and systematic components:

    ALL CONCEIVABLE SOURCES OF INACCURACY
        RANDOM UNCERTAINTIES: estimate and calculate a dispersion statistic for each
        SYSTEMATIC UNCERTAINTIES: evaluate and assign a magnitude for each
Total Random Uncertainty. From the preceding sections, we can obtain a combined total random uncertainty contribution, stated in terms of some dispersion statistic, such as the standard deviation s_x for a measurement result x. The value of s_x may be evaluated from the results of multiple measurements using, for example, Eq. (3). The Poisson counting error contribution may be estimated from Eq. (13), (14), or comparable equations. Except for this contribution, which may be evaluated from single measurements (owing to the Poisson assumption), all other contributions must be evaluated by calculating statistics from the results of replicate determinations. As noted, control charts may aid this process. The total random uncertainty, s_x, would include not only the Poisson counting error and random uncertainty derived from the particular measurement under concern, but also the random components from corrections, constants, calibration factors and any other measurements that also make up the final result x. These contributions may be combined by propagation of error formulae, such as Eq. (5).
Conceptually, one can propagate and express the total random uncertainty in terms of some confidence limits at any chosen confidence coefficient. This requires a number of assumptions, such as to the underlying population distributions, and can very rapidly become exceedingly complicated, requiring computations based on the number of measurements involved in each random uncertainty component, the number of degrees of freedom, the magnitude of the counts in the counting error contribution, etc. One is therefore confronted with the choice between making simplifying assumptions (which ultimately may provide results that are misleading) or performing complex calculations (whose efficacy, in terms of the time and cost required, can be seriously questioned).
Systematic Uncertainty Components. Lastly, we can obtain a list of each conceivable source of systematic uncertainty expressed as a confidence limit (±δ_j), which can be considered to correspond to some 99% or greater confidence interval.
Approaches to an Overall Uncertainty Statement. One must now consider 1) to what confidence coefficient the total random uncertainty s_x should be reported; 2) how the individual systematic uncertainty components (δ_j) should be combined; and 3) how the random and systematic uncertainties should be combined to form an overall uncertainty. All three questions are interrelated. Underlying this entire discussion is the fundamental consideration that the way uncertainties are reported depends on their intended or ultimate use.
Methods of Combining the Random and Systematic Uncertainty Components. Purists would argue that it is incorrect from a theoretical point of view to combine random and systematic uncertainties in any fashion [34]. Their approach is to state the random and systematic uncertainty components separately. For the random contribution, this is to include the confidence limits, the confidence coefficient, and the number of degrees of freedom. For the systematic uncertainty contribution, each component is to be listed, and the method of combination clearly stated. This approach requires a detailed uncertainty statement, usually at least a paragraph in length. For additional discussion and references see Eisenhart [35], Ku [36], Natrella [13], and Campion, et al. [37].
This purist approach does not seem practical. First, it is not amenable to large compilations
(e.g., the quarterly EPA Environmental Radiation Data reports) of data from numerous sources. And second, it shifts the burden of evaluating the uncertainties to the users. "Users now need, and will do so for many years to come, a single error value resulting from the combination of all the uncertainties. Indeed, a compromise leading to a single error value seems to be better than a perfect solution which in reality shifts the problem of combination of errors to the users."
Several arguments can be advanced for favoring the standard deviation as the measure of the random uncertainty over higher confidence limits. It facilitates subsequent data handling, since all weighting and uncertainty propagation is performed with the variances. A stronger argument is that the variance is the only statistic which is derived directly and unambiguously from the measurements. It is unbiased and distribution-free. Converting the measured standard deviation to some other confidence interval requires an assumption that the distribution is known. This applies equally to the use of the Student-t value. "Since an arbitrary multiple of a standard deviation(s) does not contain more information than s itself, the simplest and most reasonable choice of indicating the (random) uncertainty of an experimental result seems to me to go on stating its standard deviation, eventually completed by the number of measurements it is based on" [38].
The arguments for retaining the use of the standard deviation do not address the question of combining it with systematic uncertainties. If one agrees that the systematic uncertainties are to be expressed as estimated maximum limits (the semi-range), then some higher confidence interval for the random uncertainty part (e.g., 99.7% for 3s_x for a Normal distribution) provides an acceptable or reasonable level for direct combination with the systematic uncertainty. One can also invoke an argument as to which confidence coefficient will be of greatest value to the user of the data. Returning to the original question posed in the example of Section III for a reported value of 15 ± 3 Bq ("How certain is the user that the value is between 12 and 18 Bq?"), a high confidence coefficient may be the only unambiguous and unequivocal choice. How sophisticated is the user in ascertaining the meaning and correctly interpreting a 67% confidence interval? If the reported uncertainty is to be a statement of the "likely limits to the accuracy," then a high confidence interval may be the best choice.
Several acceptable models for propagating the systematic uncertainty components and combining them with the total random uncertainty may be considered. The two most commonly employed methods are:
A. Add all components of the systematic uncertainty linearly, and add the sum to the random part. Since the systematic uncertainties (δ_j) are considered to be 99% or greater confidence intervals, the sum logically should be added to the total random uncertainty at a similar confidence coefficient. Using this model, the overall uncertainty for a reported value x is

    t_v(P=99+%) s_x + Σ_j δ_j                                   (21)

where s_x is an overall standard deviation based on v degrees of freedom and t_v(P=99+%) is the Student-t value corresponding to a 99% or greater confidence coefficient. This is the approach favored by metrology and standardizing laboratories. It probably overestimates the total uncertainty, but can be considered as an estimate of the maximum possible limit. For example, if you estimated that five contributions of about equal magnitude made up the systematic error, you would have to be very unlucky if all five were minus. Yet, if there was one dominant contributor, it might be a very valid approximation.
B. Add all of the systematic uncertainty components in quadrature, and either add the result linearly to s_x,

    s_x + √( Σ_j δ_j² )                                         (22a)

or add it in quadrature to s_x,

    √( s_x² + Σ_j δ_j² )                                        (22b)
These are frequently considered (erroneously) to be overall 68% confidence intervals. They are probably the most widely employed approaches (when an attempt is made to assess systematic uncertainties at all).
Actually, without making assumptions about the distributions of the systematic uncertainties, it is impossible to proceed except by arbitrarily selecting some method, such as those above. One of the simplest sets of assumptions that can be made is that 1) the component systematic uncertainties are all independent, and 2) they are distributed such that all values within the estimated limits are equally likely (rectangular distribution). This approach, often termed the PTB approach [39], is gaining in popularity.
With the above two assumptions, the rectangular systematic uncertainty distributions can be folded together to obtain a combined probability distribution for which the variance may be computed. This may then be combined in quadrature with that for the random uncertainty. In effect, the assumed Normal distribution of the random uncertainty is convoluted with the combined systematic uncertainty distribution to obtain an overall distribution. With this, the overall uncertainty limits at a given confidence coefficient can be evaluated. For the case of a number of comparably sized com-
ponent systematic uncertainties, a very reasonable estimate of the confidence interval ±L_x for any confidence coefficient P is

    L_x = √( [t_v(P) s_x]² + [ζ_N(P)]² (1/3) Σ_j δ_j² )          (23)

where t_v(P) and ζ_N(P) are the Student-t factor and the value of the variate in a Normal distribution, respectively, for the same confidence coefficient P (see Table 9).* For the special case when one of the δ_j's is much larger than the others,

    L_x = √( [t_v(P) s_x]² + [C_R(P) δ_j]² )                     (24)

is a better approximation, where C_R(P) is the value of the variate in a rectangular distribution for a confidence coefficient P. For additional details see Wagner [39] and Williams, et al. [40].

TABLE 9

    P        t (v=∞)    ζ_N       C_R
    50%      0.6745     0.6745    1
    68.3%    1          1         1
    95%      1.960      1.960     1
    95.4%    2          2         1
    99%      2.576      2.576     1
    99.7%    3          3         1
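A hypothetical Python sketch of the kind of combination Eq. (23), as reconstructed above, describes: the random part is scaled by the Student-t factor, and the combined rectangular systematic variance (one-third of the sum of the squared semi-ranges) by the corresponding Normal variate. All numerical inputs are illustrative assumptions.

    import math
    from scipy import stats

    s_x = 0.15               # total random uncertainty (standard deviation), hypothetical
    v = 8                    # its degrees of freedom
    deltas = [0.06, 0.10]    # estimated systematic semi-ranges, hypothetical
    P = 0.95                 # confidence coefficient

    t_v = stats.t.ppf(0.5 + P / 2.0, df=v)    # Student-t factor for the random part
    z_N = stats.norm.ppf(0.5 + P / 2.0)       # Normal variate for the systematic part

    # Eq. (23) as reconstructed: each rectangular component contributes delta_j^2 / 3.
    sys_var = sum(d ** 2 for d in deltas) / 3.0
    L = math.sqrt((t_v * s_x) ** 2 + (z_N ** 2) * sys_var)

    print(f"approximate {P:.0%} overall confidence limit: +/- {L:.3f}")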
Recommended Statement of Overall Uncertainty and Final Result. There is no one clearly superior method among the range of approaches to reporting an overall uncertainty. Yet the diversity of current practices and methods, whether recommended or required by governmental bodies or used by those who report data, makes it difficult if not impossible to combine or compare the data in a manner that would be useful for determining trends or possible necessary actions. There is, therefore, a great need for uniformity in the method used to report the uncertainties of environmental radiation data. In the hope of achieving greater uniformity, the PTB approach, which was outlined above, is recommended. The detailed recommendations for the assessment and propagation of uncertainties which follow are given in this spirit. The recommended method requires that each reported measurement result include the value, the total random uncertainty expressed as the standard deviation, and a combined overall uncertainty. Examples illustrating the method are given in Ref. 1. This recommended approach may not be best for all purposes. It is intended to be useful and practical without imposing unnecessary burdens on the time, money and personnel resources of laboratories. Adoption of this method may eventually demonstrate its most serious shortcomings and ultimately lead to better methods.
Recommendations
III.1 Every reported measurement result (x) should include an estimate of its overall uncertainty (u_x) which is based on as nearly a complete an assessment as possible.
III.2 The uncertainty assessment should include every conceivable or likely source of inaccuracy in the result.
III.3 Every conceivable source of inaccuracy should be classified into one of two categories depending on how the uncertainty is estimated.
The random uncertainty contributions are estimated by a statistical analysis of replicate measurements. The random counting error contribution may be estimated from single measurements by making the Poisson assumption.
The systematic uncertainty contributions are estimated by less rigorous methods, as described in the text.
Until combined for the measurement result, the individual uncertainties in both categories should be retained separately.
III.4 Each random uncertainty component should be estimated in terms of its standard deviation (s_i), or the standard error of the mean (s_x̄) for independent multiple measurements.

*S. R. Wagner [PTB-Mitteilungen 89, 83-89 (1979)] recently pointed out that the quadratic sum of confidence intervals [as in Eqs. (23) and (24)] is without theoretical basis. The overall uncertainty u_x for any confidence coefficient P is more properly obtained by

    u_x(P) = k_P √( s_x² + (1/3) Σ_j δ_j² )

where k_P is the standardized variate corresponding to P for the combined random and systematic uncertainty probability distribution.
III.5 The total random uncertainty (s_x) should be obtained by propagating the individual variances (s_i²) for each random uncertainty component. This would include not only the Poisson counting error and random uncertainty derived from the particular measurement under concern, but also the random components from corrections, constants, calibration factors and any other measurements that make up the final result x. Many of these sources of inaccuracy are tabulated and discussed in Ref. 1. These contributions may be combined by summing in quadrature or by the use of propagation of error formulae, such as Eq. (5) or those in Table 5. This overall random uncertainty may be represented by

    s_x² = Σ_{i=1}^{p} a_i                                       (25)

which was given as Eq. (6) for p independent random uncertainty components.
III.6 Systematic uncertainties are considered to be independent, and each is expected to be uniformly distributed over its range (±δ_j). Each systematic component (δ_j) should be estimated in terms of the semi-range about the measurement result for the contributing source of inaccuracy [as in Eq. (15)].
III.7 The random and systematic uncertainties should be combined to form an overall uncertainty on the result x by
    u_x = √( s_x² + (1/3) Σ_j δ_j² )                             (26)

where s_x is the standard deviation corresponding to the overall random uncertainty and (1/3) Σ_j δ_j² is the combined contribution of the systematic components. If the systematic components cannot be considered to be uniformly distributed over their ranges, then the overall uncertainty should be given by

    u_x = √( s_x² + Σ_j δ_j² )                                   (29)

(or by comparable equations, analogous to Eqs. (27) and (28), which do not include the factor of 1/3).
Every reported measurement result x (or mean x̄ for averages of results) should satisfy a three-part requirement for the reporting of uncertainties:
(a) the value of the result x (or mean x̄);
(b) the propagated total random uncertainty, expressed as the standard deviation s_x (or the standard error of the mean s_x̄ for results based on multiple determinations of x or for averaged results of x); and
(c) the combined overall uncertainty u_x (or u_x̄) as in Eqs. (26), (27), (28), or (29).
When short-hand expressions are necessary, the measurement result should be reported in the format

    x ± s_x; u_x

For example, a result of x = 123 Bq L⁻¹ with s_x = 15 Bq L⁻¹ and u_x = 18 Bq L⁻¹ should be reported in exponential notation as

    (1.23 ± 0.15; 0.18) E+02 Bq L⁻¹

Confidence intervals based on the overall uncertainty should not suggest a particular confidence coefficient. Rather, the two reported uncertainties
may be referenced as:
"The total random uncertainty is the propagated standard deviation (or standard error of the mean) of all sources of random uncertainty."
"The overall uncertainty was propagated by adding in quadrature the total random uncertainty and one-third of the estimated upper limits of all conceivable sources of systematic uncertainty."
Confidence limits for a measurement result at a particular confidence coefficient cannot be obtained by merely
multiplying the uncertainty by arbi-
trary constants and adding and sub-
tracting this value to and from the
result. Determination of limits for
higher confidence coefficients re-
quires knowledge of the underlying
population distribution, and knowledge
of the number of measurements, degrees
of freedom, or magnitude of the counts.
IV. DETECTION LIMITS
A myriad of vastly different expressions and definitions of "detection limits" is frequently encountered. Their meanings are often ambiguous, inconsistent and incorrectly interpreted. Some of these include "detection sensitivity," "minimum detectable activity (or level)," "lower limit of detection," and "background equivalent activity." Currie[41] commented on a number of such terms and addressed some of the problems and inconsistencies.
Pragmatically, a detection limit is useful as a criterion for experiment design, comparison and optimization purposes, such as in selecting among alternative measurement procedures. Additionally, detection limits may serve as guides which are set by regulatory bodies for establishing minimum acceptable detection capabilities for a given type of analysis. For the purposes of this report, the intent of detection limit calculations is to satisfy both of these uses.
It must be emphasized that any calculation of a detection limit is at best only an estimate. Their use is "limited to that of serving as guideposts only, and not as absolute levels of activity that can or cannot be detected by a counting system."[44]
Much of the existing confusion with detection limits for environmental ionizing radiation measurements arises not only from the large number of different expressions that are in use, but also from incorrect interpretations and misapplications of some of the original definitions. In order to satisfy both of the above-mentioned purposes, two distinctly different concepts are required.
The first is an estimated detection limit that is related to the characteristics of the counting instrument. It is not dependent on other factors involved in the measurement method or on the sample characteristics. It is a lower limit in the true sense of the word. Because of its current wide usage, the recommended term is the ESTIMATED LOWER LIMIT OF DETECTION (LLD).
The second concept is that most useful for
regulatory purposes. It corresponds to a level
of activity concentration that is practically
achievable with a given instrument, method and
type of sample. It depends not only on the in-
strument characteristics, but also on many other
specific factors involved in the measurement
process, as well as the characteristics of the
sample being measured. As such, it is not a
limit at all, but only an estimated level achiev-
able under a given set of practical conditions.
The recommended expression for this concept is
the ESTIMATED MINIMUM DETECTABLE CONCENTRATION
(MDC).
It is recommended that only these two ex-
pressions be employed for all environmental
radiation measurement detection limits. Both the
LLD and MDC concepts can be based on a uniform
consistent methodology. The use of this method-
ology with only these two expressions should
help alleviate and avoid much of the existing
ambiguity and misapplications. Both the LLD
and MDC concepts will be considered in turn. If
the word "estimated" is emphasized and continually
used in reporting a LLD and MDC, then their
limited nature should be more apparent, and hope-
fully avoid the implication of an absolute
significance in their numerical values.
The Estimated Lower Limit of Detection (LLD)
The LLD may be defined on the basis of statistical hypothesis testing for the presence of activity. This approach is common to both that of Pasternack and coauthors[42,43] and Currie[41]. Procedures for calculating a LLD based on this approach have also been described in the EML Procedures Manual[44], the EPA quality control program report[24], and the NCRP Handbook of Radioactivity Measurements Procedures[14].
Pasternack and Harley[43] defined the LLD as "the smallest amount of sample activity that will yield a net count for which there is a confidence at a predetermined level that activity is present."[43] In theory, this approach for calculating a LLD requires the number of counts to be sufficient for the Poisson distribution to approach the normal distribution so that Gaussian statistics can be applied. It has been noted, however, that in practice "the approximation is good down to a few total counts."[44] The LLD is an "a priori ESTIMATE of the detection capabilities of a given measurement process."[41] It is based on the premise that from a knowledge of the background count and measurement system parameters (i.e., detection efficiency), an a priori (before the fact) limit can be established for a particular measurement. This limit does not depend on the sample activity, but rather on the detection capability of the measurement process itself. It is important in the application of the LLD to make the distinction between
it and other limits that are directly applicable
to the net sample activity. The latter are
applied as a posteriori limits (after the fact)
and can be determined only after the sample
has been counted.
The LLD is derived from the approximation

    LLD ≈ K (k_α s_0 + k_β s_D)    (30)

where
    K is the proportionality constant relating the detector response (counts) to the activity, such as K = 1/ε, where ε is an overall detection efficiency, or K = 1/(I_γ ε_γ), where I_γ is the gamma-ray emission probability per decay and ε_γ is the detection efficiency for the gamma ray;
    k_α and k_β are the upper percentiles of the standardized normal variate corresponding to the preselected risk for concluding falsely that activity is present (α) and the predetermined degree of confidence for detecting its presence (1-β);
    s_0 is the estimated standard deviation of the net sample count (N_n) when the limiting mean of N_n equals zero;
    s_D is the estimated standard deviation of the net sample count (N_n) when the limiting mean of N_n equals the LLD.
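A minimal Python sketch of Eq. (30), anticipating the paired-background case treated below by assuming s_0 = s_D = √(2 × background counts), is given here; the function name and the Poisson assumption are illustrative only:

    import math
    from statistics import NormalDist

    def estimated_lld(background_counts, K=1.0, alpha=0.05, beta=0.05):
        """Eq. (30): LLD ~ K (k_alpha s_0 + k_beta s_D), taking
        s_0 = s_D = sqrt(2 * background counts) for a paired background
        subtraction under the Poisson assumption."""
        k_alpha = NormalDist().inv_cdf(1.0 - alpha)
        k_beta = NormalDist().inv_cdf(1.0 - beta)
        s0 = math.sqrt(2.0 * background_counts)
        return K * (k_alpha + k_beta) * s0

    print(round(estimated_lld(100), 1))   # ~46.5 counts, i.e. 4.65*sqrt(100) for 5%/5% risks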
The basis for this approximation for the LLD is illustrated in Figure 3. Additional details and discussion may be obtained in References [41], [42] and [43]. In statistical hypothesis testing, α and β are the probabilities for what are frequently referred to as Type I (false detection) and Type II (false non-detection) errors, respectively. Values for k_α and k_β corresponding to the risks for false detection (α) and non-detection (β) can be found in most statistical texts. As stated before, this assumes that the random uncertainties are normally distributed.
In general, both α and β should be reasonably small in order to provide a high degree of confidence that neither type of error will be made. If α = β = 0.5 (i.e., a 50% risk for each type of error), then the LLD would always be zero. In this case, activity detection near the LLD will be wrong 50% of the time, and "the experiment could be performed equally well by the flipping of a coin."[41] Conversely, if α and β are set very small (say α = β = 0.001), then one would rarely be incorrect. At the same time, however, one would seldom attribute significance to anything but very large activity measurements. A convenient compromise and most common practice is to set both risks equal, and to accept a 5% chance of incorrectly detecting activity when it is absent (α = 0.05) and a 95% confidence that activity will be detected when it is present (1-β = 0.95). Then
    k = k_α = k_β = 1.645  (α = β = 0.05),    (31)
which is recommended for use in all environmental radiation LLDs. It is incorrect to refer to this as a 5% confidence level. First, the LLD cannot be characterized by a single confidence level; and second, its use can lead to confusion with the confidence level for an a posteriori decision on the presence of activity after the measurement is made. Preferred language is THE ESTIMATED LLD FOR 5% RISKS OF FALSE DETECTION AND FALSE NON-DETECTION.
Using the recommended approximation of Eq. (30) and the convention of Eq. (31), several typical and specific applications can be developed.
Consider measurement processes in which the net activity is derived by subtracting a background from a gross activity measurement. The standard deviation of the net activity is

    s_n = (s_g² + s_b²)^1/2    (32)

where s_g and s_b are the standard deviations of the gross activity and background, respectively. If the gross activity and background counts are nearly equal (which is a reasonable approximation near the LLD), then Eq. (32) reduces to

    s_n = √2 s_b    (33)
Further, if one assumes that s_n over the small range of net activity from zero to the LLD is approximately constant, i.e. s_D ≈ s_0, then from Eqs. (30) and (33)

    LLD = K (2√2 k s_b) = 4.65 K s_b    (34)

Alternatively, if one does not make the above assumption (s_0 ≈ s_D), Currie[41] has shown that, with s_0 = √2 s_b, the expression for the LLD becomes

    LLD = K (k² + 2√2 k s_b) = K (2.71 + 4.65 s_b)    (35)

[Figure 3. Probability distributions of the net sample count (N_n) at zero and at the LLD, illustrating the relations k_α s_0 and k_β s_D.]
Except at extremely small counts, the difference in the two calculations is trivial (see Table 10). For a Poisson distribution, s_n is not independent of the activity level, and Eq. (35) may be a better approximation. In most cases, both calculations will be comparable. When they are not (e.g., extremely small background counts), Eq. (35) is recommended.
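A short Python sketch of Eqs. (34) and (35), which reproduces the entries of Table 10 (the function names are illustrative), is:

    import math

    k = 1.645   # k_alpha = k_beta for 5% risks of false detection and non-detection

    def lld_eq34(background_counts, K=1.0):
        """Eq. (34): LLD = 4.65 K s_b, with s_b = sqrt(background counts) and
        4.65 = 2*sqrt(2)*1.645. K is the proportionality constant of Eq. (30)."""
        s_b = math.sqrt(background_counts)
        return K * 2.0 * math.sqrt(2.0) * k * s_b

    def lld_eq35(background_counts, K=1.0):
        """Eq. (35): LLD = K (2.71 + 4.65 s_b); the k**2 term (2.71) matters
        only at very small background counts."""
        s_b = math.sqrt(background_counts)
        return K * (k ** 2 + 2.0 * math.sqrt(2.0) * k * s_b)

    # Reproduces Table 10 (LLD in units of K):
    for n_b in (0, 1, 4, 10, 35, 100, 350, 1000, 3500, 10000):
        print(n_b, round(lld_eq34(n_b), 1), round(lld_eq35(n_b), 1))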
In many applications, the measurement procedure would frequently consist of a paired sequential observation of the gross and background counts. One could envisage obtaining a single background value which is not exactly equal to the known long-term average background. This single measurement would be used as a check, and perhaps to continually update the average. To calculate a new LLD from a single observation is without merit. It would ultimately lead to a LLD for each and every background measurement. This approach would defeat the spirit of the LLD concept, which is to provide an A PRIORI ESTIMATE or a guidepost of the detection capability of the instrument. Similarly, the quantities contained within the proportionality constant K, such as the detection efficiency, should be average values for the instrument. If the values vary substantially from measurement to measurement, or if they abruptly change, then this would suggest that there is a serious instrument or procedural problem. This erratic behavior may be due to an instrument malfunction, or may be an indication of poor laboratory practice. Under any circumstance, it demonstrates the existence of a problem, and should serve as a stimulus to ascertain the cause of the problem (see "blunders" in Part III). If the measurement process or procedure is modified, or if substantive changes in the background or in any of the parameters comprising the proportionality constant K occur, then the LLD should be recalculated.
TABLE 10

    TOTAL BACKGROUND                    ESTIMATED LLD (K units)
    COUNTS               s_b          Eq. (34)        Eq. (35)
         0                 0              0              2.7
         1                 1              4.7            7.4
         4                 2              9.3           12.
        10                 3.2           15.            17.
        35                 5.9           28.            30.
       100                10.            47.            49.
       350                19.            87.            90.
     1,000                32.           150.           150.
     3,500                59.           280.           280.
    10,000               100.           470.           470.
The Estimated Minimum Detectable Concentration
(MDC)
The MDC is a level (not a limit) of activity
concentration which is practically achievable by
an overall measurement method. As distinguished
from the LLD, the MDC considers not only the in-
strument characteristics (background and effi-
ciency), but all other factors and conditions
which influence the measurement. It is an
a priori estimate of the activity concentration
that can be practically achieved under a specified
set of typical measurement conditions. These
include the sample size, counting time, self-
absorption and decay corrections, chemical yield
and any other factors that comprise the activity
concentration determination. It cannot serve
as a detection limit per se, for any change in
measurement conditions or factors will influence
the value of the MDC. Its use is limited to
establishing, for regulatory purposes, that
some minimum overall measurement conditions
are met. Any of several factors, such as sample
size or counting time, could be varied to satisfy
these regulatory values.
Expressions for the MDC can be derived
analogously to those for the LLD using the ap-
proximation of Eq. (30) and convention of Eq.
(31). The results are
MDC " 4.65 K* sb
(36)
TOTAL
BACKGROUND
COUNTS
0
1
4
10
35-1
100
350
1,000
3,500
10,000.
sb
0
1.
2.
3.2
5.9
10.
19.
32.
59.
. '100.
Eq.
0
4.
9.
15.
28.
47.
87.
150.
280.
470.
ESTIMATED LLD
(K units)
(34) Eq. (35)
2.7
7 7.4
3 12.
17.
30.
49.
90.
150.
280.
470.
MDC = K*(2.7l + 4.65 sb)
(37)
which are analogous to Equations (34) and (35) for the LLD. The proportionality constant K*, in this case, relates the detector response (counts) to the activity concentration in a sample for a given set of measurement conditions. It may, for example, consist of

    K* = 1 / [Y V T S ε exp(-λt)]    (38)

where Y is the fractional yield for the radiochemical separation;
    V is the sample size;
    T is the counting time interval;
    S is the self-absorption correction factor;
    ε is the detection or counting efficiency;
    exp(-λt) is the correction for radioactive decay between sample collection and counting (time interval t);
    λ is the decay constant for the particular radionuclide, with half-life T_1/2 = ln 2/λ.

As discussed previously when considering the K for the LLD, all of the factors contained within the proportionality constant K* for the MDC should be typical or average values for the instrument and measurement procedure.

†A value of 4.66 has also been frequently used for the quantity 2√2 k [cf. Ref. 44]. The difference is merely one of rounding.
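A compact Python sketch of Eqs. (37) and (38) follows; the numerical conditions in the example (yield, sample size, counting time, efficiency) are hypothetical, and the resulting MDC is in whatever activity-concentration units the chosen parameters imply:

    import math

    def k_star(Y, V, T, S, eff, decay_const, t_elapsed):
        """Proportionality constant K* of Eq. (38), relating net counts to
        activity concentration for a given set of measurement conditions."""
        return 1.0 / (Y * V * T * S * eff * math.exp(-decay_const * t_elapsed))

    def mdc(background_counts, Y, V, T, S, eff, decay_const=0.0, t_elapsed=0.0):
        """Estimated MDC = K* (2.71 + 4.65 s_b), Eq. (37)."""
        s_b = math.sqrt(background_counts)
        return k_star(Y, V, T, S, eff, decay_const, t_elapsed) * (2.71 + 4.65 * s_b)

    # Hypothetical conditions: 85% yield, 1 L sample, 1000 min count,
    # no self-absorption correction, 25% efficiency, negligible decay.
    print(round(mdc(background_counts=400, Y=0.85, V=1.0, T=1000.0, S=1.0, eff=0.25), 3))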
Misapplication of the LLD or MDC for A Posteriori
Decisions
As stated earlier, the LLD is an a priori
estimate dependent on only the instrument back-
ground and detection efficiency, and the MDC is
an a priori estimate for a given type of analysis
or measurement process under specified typical
conditions. They are not a posteriori decision
limits for every measurement. They need not, and
should not, therefore, be calculated for each
individual measurement. The practice of compar-
ing a unique computed LLD or MDC for each measure-
ment against the measurement result should be
avoided. This has sometimes been employed to de-
termine the "significance" of the result for re-
porting purposes.
If results below a particular computed LLD
or MDC are rejected or excluded from data re-
ports, serious errors and distortion in long-term
trends could result. Consider the hypothetical
situation illustrated in Figure 4. The back-
ground is represented by the probability distri-
bution with the mean ^ and standard deviation
Ob« Similarly, a gross count is given by the
distribution with mean „„ and standard deviation
Og. Thus the net count, obtained by subtracting
the background from the gross count, is the
probability distribution with mean Wn «•( Pg - uj,)
and variance On2 » (ag2 + <,b2)' This net count
distribution can be compared, as shown, to the
esti ated LLD calculated with the standard de-
viation of the background (Og) using EM, (35).
In this case, the true or limiting mean of the
net count, and a substantial fraction of the
distribution are below the computed LLD. A
resultant net count Oln) from a typical individual
paired observation of background and gross counts
(Nb, Ng), which are both well within +CT, is
illustrated. If this value and many others less
than the LLD are not reported, then the distri-
bution of the reported values would grossly
distort the true situation given by Un and On
for the population. This point is demonstrated
further in the data of Table 11. In this ex-
ample, twenty-five net count (Nn) measurement re-
sults from a population with a mean of 20 were
obtained from paired observations of background
(Nt) and gross counts (Ng). The average background
of 16 counts was used to estimate the LLD:
    LLD = 2.71 + 4.65 √16 = 21.3 counts [from Eq. (35)]

or

    LLD = 4.65 √16 = 18.6 counts [from Eq. (34)].

[Figure 4. Illustration of the LLD and its relation to the underlying population distributions in a hypothetical measurement situation. See text for details.]
The last two columns in the table contain the tabulated results when values less than 21.3 and 18.6 counts are reported as "less than the LLD" (<LLD). If results reported as less than the LLD are subsequently set equal to the LLD* in computing average estimates, then these estimates are always going to be greater than the "true" representation of the environment. Therefore, it is recommended that every measurement result should be recorded and reported directly as found.
"Less Chan Che LtD," "not detected" and similar
expressions should -not be used in reporting data1.
This does not imply that the activity in the
sample is truly less than some absolute level
that can be detected. Rather, it merely indi-
cates that this particular measurement resulted
in a single value which was less than an estimated
LLD or MDC for the measurement procedure.
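The positive bias illustrated by Table 11 and Figure 4 can be demonstrated with a short simulation sketch in Python (numpy is assumed available; the population parameters are illustrative and chosen to resemble the example above):

    import numpy as np

    rng = np.random.default_rng(1)
    mean_background, mean_net, n = 16.0, 20.0, 100_000
    lld = 2.71 + 4.65 * np.sqrt(mean_background)          # ~21.3 counts, Eq. (35)

    n_b = rng.poisson(mean_background, n)                  # background counts
    n_g = rng.poisson(mean_background + mean_net, n)       # gross counts
    n_n = n_g - n_b                                        # net counts, true mean 20

    reported_only = n_n[n_n >= lld]                        # "<LLD" results dropped
    set_to_lld = np.where(n_n >= lld, n_n, lld)            # "<LLD" set equal to LLD

    print(n_n.mean())            # close to 20 (unbiased)
    print(reported_only.mean())  # substantially greater than 20
    print(set_to_lld.mean())     # also biased high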
The LLD and MDC for Multi-Component and Spectral Analyses
The MDC is conceptually more complex when the net count is a function of the detector response from two or more radionuclides in the sample. In this situation, the proportionality constant K* cannot relate the net count to the activity of just a single radionuclide as was previously done. The simplest case can be represented by

    N_n = A_1 C_1 ε_1 + A_2 C_2 ε_2    (39)

where A_1 and A_2 refer to the activities of two different radionuclides, ε_1 and ε_2 are the respective detection efficiencies, and C_1 and C_2 are the respective constants which include chemical yields, absorption corrections, decay factors, etc. Then, the LLDs for A_1 and A_2 are just given by

    LLD(A_1) = K_1 (2.71 + 4.65 s_b)
    LLD(A_2) = K_2 (2.71 + 4.65 s_b)    (40)

where K_1 = 1/(C_1 ε_1) and K_2 = 1/(C_2 ε_2). Obviously, this interpretation results in an LLD for A_1 when A_2 is absent. And conversely, the LLD for A_2 is independent of the amount of A_1. For purposes of the LLD calculation then, Equation (39) in effect reverts to either N_n = A_1 C_1 ε_1 (with A_2 absent) or N_n = A_2 C_2 ε_2 (with A_1 absent). For many procedures, such as the determination of 89Sr and 90Sr with typical samples, this approach has been subject to the criticism that
it does not reflect realistic detection capabilities. It is sometimes argued that the LLD must consider that any net count N_n results from contributions from both 89Sr and 90Sr. This argument is another misapplication of the LLD concept. If it is accepted, then the LLD would no longer be a limit and would be dependent on something other than the instrument background and efficiency characteristics.
The MDC, however, is intended to serve as a practical level that is achievable under a given set of specified measurement conditions, and must consider the effect of multiple components in the sample. Referring again to the case of Equation (39), the MDC for A_1 can be written

    MDC(A_1) = K_1* (2 k s_1)    (41)

where k = k_α = k_β = 1.645, and s_1 is the estimated standard deviation of the net sample count corresponding to A_1 over the range of net activities from A_1 = 0 to A_1 = MDC, i.e. s_1 = s_0(1) = s_D(1)
[refer to Eq. (30)]. Since the net sample count corresponding to A_1 is given by

    N_1 = N_(1+2+b) − N_2 − N_b,

the estimated standard deviation is

    s_1² = s²_(1+2+b) + s_2² + s_b²

and, with s²_(1+2+b) = s²_(1+b) + s_2²,

    s_1² = s²_(1+b) + 2 s_2² + s_b².

If the gross counts N_(1+b) and background N_b are approximately equal (which is a reasonable assumption near the MDC), then s_1 reduces to

    s_1 = √2 (s_b² + s_2²)^1/2.

Making a Poisson assumption, s_2² becomes

    s_2² = N_2/T² = A_2 C_2 ε_2 / T²

where C_2 and ε_2 were given in Eq. (39) and T is the count time. Hence, substitution into Eq. (41) yields

    MDC(A_1) = K_1* 2√2 k (s_b² + s_2²)^1/2 = K_1* (4.65 s_b) [1 + A_2 C_2 ε_2/(s_b² T²)]^1/2.    (42)

Analogously to Eq. (37), the MDC for A_1 may alternatively be given by

    MDC(A_1) = K_1* [2.71 + 4.65 (s_b² + s_2²)^1/2].    (43)

The factor K_1* is, by definition, given by Eq. (44)
and is dependent on the relative amount of activity A_2, since the net count is apportioned between that due to A_1 and that due to A_2.
One is therefore confronted with a paradox. The MDC should be an a priori estimate, but its determination requires a posteriori knowledge of the amounts of activity in the sample. This dichotomy can be overcome by calculating the MDC for typical sample conditions. This is subject to the criticism that some samples may vary substantially from the typical, and that the MDC may then be in considerable error. Without denying this argument, there are two reasons why the effects from such errors are unimportant. First, the MDC is only intended to be an ESTIMATE; and second, all measurement results should be reported without a comparison to the MDC. As a result, there is no great need for knowing some absolute detection level for every measurement.
The determination of a LLD and MDC for activity measurements by gamma-ray spectrometry is a good example of the above-described situation. It is complicated in that the background count rate for the same radionuclide may vary from sample to sample. This is due to the fact that the background is usually the sum of two separate sources of gamma radiation. First, there is the system background radiation for a blank, corresponding to the sample to be measured, as discussed previously. Second, there is Compton-scattered radiation from other higher energy gamma rays in the sample. Thus, it is readily seen that the measurement of blank backgrounds has meaning only for very specific situations, e.g., when only one radionuclide is present in the sample, or when the radionuclide of interest emits the highest energy gamma ray.
It then follows that the LLD, which is independent of sample conditions, is a function of only the instrument performance (as contained within K) and the uncertainty in this blank background, i.e.

    LLD = f(K, s_blank background)    (45)

Blank backgrounds are also useful for evaluating the adequacy of shielding and system performance. A large deviation in blank background values may indicate instrument malfunction.
For environmental samples one can assume that almost always two or more gamma-emitting radionuclides will be present in any given sample; thus a more realistic background must be determined for calculating a MDC. This sample background will depend upon the relative amounts of other radionuclides that are present in the sample and the energy of each of their gamma rays. One can then represent the MDC as a function of K* and the uncertainty in this sample background, i.e.

    MDC = f(K*, s_sample background)    (46)
Pasternack and Harley[43], for a number of typical counting situations, considered the influence of other radionuclides on the detection limit in mock multi-component samples. Their experimental comparisons were made with large NaI(Tl) detectors. This was extended by Wrenn et al.[45] for the effect of the presence of natural radionuclides on the detection limits of man-made radionuclides in environmental samples. They also made comparisons to Ge(Li) spectrometry. The general problem of detecting small photopeaks in Ge(Li) spectra was also addressed by Head[46]. Fisenne et al.[47] applied the Pasternack and Harley[43] approach to multi-component alpha spectrometry, which has pulse-height distribution data very similar to that of gamma spectrometry. In all cases, the MDC for a particular radionuclide was shown to be dependent on the composition of the sample. It depends upon the number, quantity, and spectral characteristics of the other radionuclides in the sample.
Therefore, to satisfy the condition that the
MDC be an a priori estimate, it is recommended
that it be calculated for each radionuclide
from typical spectrum backgrounds for each type
of sample.
Examples of these more complex MDC calculations, such as for multi-component and spectral analyses, can be found in Ref. [43].
Interpretations and Restrictions
In summary, the major concepts underlying
the Lower Limit of Detection (LLD) and Minimum
Detectable Concentration (MDC) are:
1. The LLD should be viewed as an a priori
estimate or guidepost of the detection
capability for an instrument. Its
value is dependent upon only the de-
tection instrument characteristics
(e.g., the efficiency) and the uncer-
tainty in the instrument's background.
2. The MDC should be viewed as an a priori
estimate or guidepost of the level of
activity concentration that is prac-
tically achievable by a specific given
instrument, measurement method and type
of sample. Its value is dependent
upon the characteristics and conditions
of the overall measurement system (in-
strument and method) and on the sample
characteristics.
The LLD and MDC are only estimates, and not absolute levels of activity or activity concentration that can or cannot be detected. Similarly, they are not intended to be a posteriori criteria for the presence or absence of activity. The practical significance of the estimated LLD and MDC is only to serve as guideposts or criteria for experiment design, comparison and optimization purposes, and to serve, for regulatory purposes, as approximate guidelines of minimally acceptable levels that can be practically achieved. As such, all measurement results should be reported directly as obtained, and the estimated LLD or MDC should not be employed to exclude some results from reports. Similarly, the practice of calculating a new LLD or MDC value for every measurement defeats the spirit of the concepts, which are to provide a priori estimates. A posteriori calculations contain no additional information, and are neither technically nor economically justifiable.
The above interpretations place some important restrictions on the use of the LLD or MDC, particularly with respect to satisfying regulatory specifications. The most important restriction is that it is unreasonable for a regulator to establish absolute values for the MDC or LLD for various combinations of radionuclide and type of sample media, and then expect compliance 100% of the time in all situations. Even ideally, the MDC and LLD, by convention, involve 5% risks of false detection and false non-detection.
A further restriction arises from the fact that the MDC for a radionuclide may vary from sample to sample depending on the characteristics of the sample. For example, the MDC for a radionuclide assayed by gamma-ray spectrometry depends upon the number, quantity, and spectral characteristics of other radionuclides which may be present in the sample. It must therefore be recognized that an MDC established for one specific set of conditions may not be applicable for all other conditions.
Detection capabilities, notably those specified by the Nuclear Regulatory Commission, are stated to be "state-of-the-art for routine environmental measurement"[48]. If these state-of-the-art values are established for one set of assumptions (instrument, procedures and sample variables), then the MDC values should not be expected to be technically achievable unless the assumptions, particularly typical sample composition, are still valid. Although NRC documents reflect this view[48,49], a number of licensees have reported cases where inspectors have interpreted the MDC as an absolute level, and have tested licensees for specification compliance with simulated spiked samples. If an MDC contained within a licensee's Technical Specifications was determined with blanks or typical samples, then there is no reason to expect the same value to be applicable for atypical samples of dissimilar composition.
The frequent use of an MDC or LLD as a
criterion for excluding some results from data
reports must not continue. The resulting posi-
tive biasing effects of this practice were dis-
cussed previously. As a result, it is recommen-
ded that all measurement results be reported
directly as found; and that "less than MDC" and
similar terms never be used.
For some measurements, the reporting of all
results will not present a problem. Others, like
automated gamma-ray-spectrometer systems, can,
however, present a practical problem. Some com-
puter-coupled systems which routinely test for
upwards of a hundred or so radionuclides in a
system library would require a data report con-
sisting of a value for every radionuclide in the
library. Obviously, such an approach is not
reasonable. The current practice of using MDC
values to decide which results from the long list
will be reported is not the solution. What is needed is a practical approach that avoids the problems inherent in using a preselected exclusion criterion, but at the same time is reasonable. This approach requires that efforts be made to reflect on the purpose of the measurements. Measurement of a hundred different radionuclides in a surveillance or monitoring program is neither reasonable nor justifiable. Fewer reliable measurement results are much better than many questionable results. What is needed are more good measurements whose results can be viewed with confidence, not more measurements per se. The criteria for what should be measured (and hence reported) should be based upon what is actually needed for the purpose of environmental radioactivity monitoring[50] and upon the dosimetric significance of the radionuclides. Attempts to measure everything that is technically achievable do not serve anyone (not the laboratories or the public). Laboratories, as well as regulators, must begin to recognize this. An approach which is more reasonable and justifiable in terms of the dosimetry and purposes of the monitoring must be taken when designing measurement and data reporting programs.
A practical approach to this problem may
be for laboratories and regulators to preselect
only those measurements which are reasonable and
justifiable in terms of the dosimetry and purposes of the monitoring program. The effect of this preselection could be evaluated and its magnitude incorporated as a systematic uncertainty component in the assessment and propagation of an overall uncertainty (see Part III). In gamma-ray spectrometry with automated
systems, for example, the magnitude of this
systematic uncertainty could be evaluated by
analyzing a test spectrum first without any pre-
selection criteria, and then with a limited
system library containing only the preselected
radionuclides. The difference between the
two results could be used as an estimate of the
additional systematic uncertainty. This uncer-
tainty component could be evaluated for each
preselected radionuclide of interest using test
spectra containing a full range of radionuclides
found in typical samples. This practical ap-
proach may be a reasonable compromise.
Recommendations
IV.1 Only two expressions for detection limits, based on a uniform consistent methodology, should be employed. The recommended terms are the ESTIMATED LOWER LIMIT OF DETECTION (abbreviated LLD) and the ESTIMATED MINIMUM DETECTABLE CONCENTRATION (MDC).
IV.2 The practical significance of the estimated
LLD and MDC is only to serve as guideposts
or criteria for experiment design, com-
parison and optimization purposes, and to
serve, for regulatory purposes, as approxi-
mate guidelines of minimally acceptable
detection capabilities.
IV.3 The LLD should be viewed as an a priori estimate or guidepost of detection capability, and not as an absolute level of activity that can or cannot be detected. It is not intended to be an a posteriori criterion for the presence of activity.
IV.4 The MDC should be viewed as an a priori estimate or guidepost of the capability for detecting an activity concentration by a given measurement system, procedure and type of sample, and not as an absolute activity concentration that can or cannot be detected.
IV.5 The estimated LLD or MDC should be based
on the approximation of Eq. (30) which
is an approach derived from statistical
hypothesis testing.
IV.6 The estimated LLD or MDC should be calculated, using the convention of Eq. (31), for "5% risks of false detection and false non-detection."
IV.7 The estimated LLD should be calculated from Eq. (34), (35), (40), (45) or comparable equations which are derived from the approximation of Eq. (30) and convention of Eq. (31).
IV.8 The estimated MDC should be calculated from Eq. (36), (37), (42), (43), (46), or comparable equations which are derivable from the approximation of Eq. (30) and convention of Eq. (31).
IV.9 The estimated LLD should be expressed in units of activity and should include only the instrument parameters which relate the detector response (counts) to activity. This normally would be only the detection efficiency or calibration factor for the instrument, and would not include other parameters such as the sample size, chemical yield, decay scheme parameters, absorption and attenuation corrections, decay factors, etc.
IV.10 The estimated MDC should be expressed
as activity per unit mass or activity
per unit volume, and its calculation
would include all parameters which relate
the detector response (counts) to the
activity concentration in a given type
of sample. These may include detection
efficiencies, chemical yields, absorption
and attenuation corrections, decay scheme
parameters, decay factors, etc.
IV.11 The estimated LLD should be calculated
using average blank backgrounds and
efficiencies for the instrument; the
estimated MDC should be calculated using
average sample backgrounds and parameters
for typical samples.
IV.12 The estimated LLD for a given instrument
should be calculated for each radio-
nuclide of interest, but it is independent
of the sample characteristics; in contradis-
tinction, the estimated MDC for a given
procedure should be calculated for each
radionuclide for each type of sample.
IV.13 To avoid possible positive biases in
long-term data trends, all measurement
results should be reported directly
as obtained. "Less than LLD," "less
than MDC," "not detected" and similar
expressions should not be reported.
Similarly, the LLD or MDC should not
be used as a decision criterion for
excluding some results from data re-
ports.
REFERENCES
{. 1] A Guide and Recommendations for Reporting of
Environmental Radiation Measurements Data,
Ad Hoc Subcommittee on Data Reporting,
National Bureau of Standards Special Publi-
cation, to be published (1980).
[ 2] American Society for Testing and Materials,
ASTM Standard for Metric Practice E380-72
(1972). Excerpts from this Standard also
appear in the Annual Book of ASTM Standards
Part 31 (Water) and Part 45 (Nuclear Stan-
dards).
[ 3] American National Standards Institute,
"American Standard for Metric Practice,"
American National Standard Z210.1-76 (1976).
[ 4] British Committee on Radiation Units and Measurements, "Draft Recommendations on the Introduction of the New SI Units for Use with Radioactivity and Ionizing Radiations," published by National Physical Laboratory, United Kingdom (January 1978).
[ 5] National Bureau of Standards, The International System of Units (SI), NBS Special Publication 330, 1977 Edition.
[ 6] National Bureau of Standards, NBS Guide-
lines for Use of the Metric System, LC1056
(Revised August 1977).
[ 7] International Commission on Radiation Units
and Measurements, "Radiation Quantities and
Units," ICRU Report 19 (July 1971).
[ 8] National Council on Radiation Protection and
Measurements, Environmental Radiation Mea-
surements, NCRP Report No. 50 (1976).
[ 9] Harry H. Ku, "Statistical Concepts in Me-
trology," in Precision^ Measurement and Cali-
bration: Statistical Concepts and Pro-
cedures, National Bureau of Standards,
Special Publication 300, Vol. 1, 296-330
(1969).
[10] R. C. Pinkerton and C. E. Gleit, "The Signif-
icance of Significant Figures," Journal of
Chemical Education, 44, No. 4, 2332-34 (April
1967).
[11] L. G. Parratt, Probability and Experimental
Errors in Science, Wiley NY (1961), p.69-71.
[12] P. R. Bevington, Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill Book Co., NY (1969), p. 4-6.
[13] M. G. Natrella, "Experimental Statistics,"
National Bureau of Standards Handbook 91
(1973).
[14] National Council on Radiation Protection
and Measurements, "A Handbook of Radioactivity
Measurements Procedures," NCRP Report No. 58
(1978). A revision of National Bureau of
Standards Handbook 80.
[15] W. J. Dixon and F. J. Massey, Jr., Intro-
duction to Statistical Analysis, 1st. Ed.,
McGraw-Hill Book Co., N.Y. (1951).
[16] A. H. Jaffey, "Statistical Tests for Count-
ing," Nucleonics 18, No. 11, 180-4 (1960).
[17] Energy Research & Development Administration,
"A Guide for Environmental Radiological
Surveillance at ERDA Installations," ERDA
77-24 (March 1977).
[18] H. H. Ku (ed.), "Precision Measurement and
Calibration: Statistical Concepts and Pro-
cedures," National Bureau of Standards,
Special Publication 300, Volume 1 (1969).
[19] A. Goldstein, Biostatistics, Macmillan Publ.
Co., N.Y. (1964).
[20] H. H. Ku, "Notes on the Use of Propagation
of Error Formulas," J. Research NBS 70C,
6-32
-------
No. 4 236-73 (1966). (Also reprinted in
Ref. I18J, p. 331-41.)
[21] P. C. Stevenson, "Processing df Counting
Data," National Academy of Sciences - Nation-
al Research Council, Nuclear Science Series
NAS-NS-3109, U. S. Atomic Energy Commission
(1965).
[22] R. D. Evans, "Statistical Fluctuations in Nuclear Processes," Chap. 5 in C. L. Yuan and C. S. Wu (eds.), Methods of Experimental Physics, Vol. 5, Pt. B, Academic Press (1963).
[23] W. J. Price, Nuclear Radiation Detection, 2nd
ed., McGraw-Hill (1964).
[24] Lee H. Ziegler and H. M. Hunt, "Quality Con-
trol for Environmental Measurements Using
Gamma-Ray Spectrometry," U.S. Environmental
Protection Agency, EPA-600/7-77-144 (Dec.
1977).
[25] International Commission on Radiation Units
and Measurements, "Certification of Stan-
dardized Radioactive Sources," ICRU Report
12 (1968).
[26] A. Williams, P. J. Campion and J. E. Burns,
"Statement of Results and Their Accuracy,"
Nucl. Instr. and Meth. 112, 373-376 (1973).
[27] H. H. Ku, "Discussion on Statement of Data
and Errors," Nucl. Instr. and Meth. 112,
391 (1973).
[28] H. H. Ku, "Statistical Methods Applicable
to Counting Experiments and Evaluation of
Experimental Data," Nucl. Instr. and Meth.
112, 377-383 (1973).
[29] W. J. Youden, eleven papers on statistical experimental designs useful in physical sciences, reprinted in Ref. [18].
[30] B. L. Welch, "The Generalization of 'Student's' Problem When Several Different Population Variances Are Involved," Biometrika 34, 28-35 (1947).
[31] A. A. Aspin, "Tables for Use in Comparisons Whose Accuracy Involves Two Variances, Separately Estimated," Biometrika 36, 290-6 (1949). See also Ref. [30] and B. L. Welch, Biometrika 34, 87 (1947).
[32] National Council on Radiation Protection and Measurements, "Tritium Measurement Techniques," NCRP Report No. 47, p. 72-74 (1976).
[33] E. S. Pearson and H. 0. Hartley, Biometrika
Tables for Statisticians, Vol. 1, Cambridge
Univ. Press, London (1954).
[34] Y. Le Gallic, "Discussion on Statement of
Data and Errors," Nucl. Instr. and Meth.
112, 393 (1973).
[35] C. Eisenhart, "Expression of the Uncertainties of Final Results," Science 160, 1201-1204 (1968). (Also reprinted in Ref. [18], p. 69-72.)
[36] H. H. Ku, "Expressions of Imprecision, Systematic Error, and Uncertainty Associated with a Reported Value," Measurements and Data, p. 72-77 (July-Aug. 1968). (Also reprinted in Ref. [18], p. 73-78.)
[37] P. J. Campion, J. E. Burns and A. Williams
"A Code of Practice for the Detailed State-
ment of Accuracy," National Physical Lab-
oratory, London (1973).
[38] J. Müller, "Discussion on Statement of Data and Errors," Nucl. Instr. and Meth. 112, 393 (1973).
[39] S. Wagner, PTB-Mitteilungen 79, 343 (1969). An English version, "How to Treat Systematic Errors in Order to State the Uncertainty of a Measurement," is available from the Physikalisch-Technische Bundesanstalt, Braunschweig, as report number FMRB 31/69.
[40] A. Williams, P. J. Campion and J. E. Burns ,
"Statement of Results of Experiments and
Their Accuracy," Nucl. Instr. and Meth.
112, 373-376 (1973).
[41] Lloyd A. Currie, "Limits for Qualitative Detection and Quantitative Determination," Anal. Chem. 40, 586-93 (March 1968).
[42] B. Altshuler and B. Pasternack, "Statistical Measures of the Lower Limit of Detection of a Radioactivity Counter," Health Physics 9, 293-8 (1963).
[43] B. S. Pasternack and N. H. Harley, "De-
tection Limits for Radionuclides in the
Analysis of Multi-Component Gamma-Spectro-
meter Data," Nucl. Instr. and Meth. 91,
533-40 (1971).
[44] John H. Harley (ed.), "EML Procedures
Manual," Department of Energy, Environ-
mental Measurements Laboratory, HASL-300
(1972 ed., revised annually).
[45] McDonald E. Wrenn, S. M. Jinks, L. M. Hairr, A. S. Paschoa and J. W. Lentsch, "Natural Activity in Hudson River Estuary Samples and Their Influence on the Detection Limits for Gamma Emitting Radionuclides Using NaI Gamma Spectrometry," p. 897-916 in The Natural Radiation Environment II, J. A. S. Adams, et al. (eds.), Report CONF-720805 (U.S. ERDA, 1972).
[46] J. H. Head, "Minimum Detectable Photopeak Areas in Ge(Li) Spectra," Nucl. Instrum. & Meth. 98, 419 (1972).
[47] I. M. Fisenne, A. O'Toole and R. Cutler, "Least Squares Analysis and Minimum Detection Levels Applied to Multi-Component Alpha Emitting Samples," Radiochem. Radioanal. Letters 16, 1, 5-16 (1973).
[48] Nuclear Regulatory Commission, Branch Technical Position on Regulatory Guide 4.8 (March 1978).
[49] Cf., Nuclear Regulatory Commission, Regulatory Guide 4.8, "Environmental Technical Specifications for Nuclear Power Plants" (Dec. 1975); Regulatory Guide 4.14, "Measuring, Evaluating, and Reporting Radioactivity in Releases of Radioactive Materials in Liquid and Airborne Effluents from Uranium Mills" (June 1977); Regulatory Guide 4.16, "Measuring, Evaluating, and Reporting Radioactivity in Releases of Radioactive Materials in Liquid and Airborne Effluents from Nuclear Fuel Processing and Fabrication Plants" (March 1978); and others.
[50] Nuclear Regulatory Commission, Radiological Environmental Monitoring by NRC Licensees for Routine Operations of Nuclear Facilities, NUREG-0475 (October 1978).
STATISTICAL METHODS FOR ENVIRONMENTAL RADIATION DATA INTERPRETATION
D. A. Waite (Battelle, Office of Nuclear Waste Isolation), D. H. Denham (Battelle-Northwest Laboratory), J. E. Johnson (Colorado State University), D. E. Michels (Republic Geothermal, Inc.), N. Turnage (Tennessee Valley Authority).
The interpretation of environmental radiation data encompasses those activities and operations used to draw conclusions from measurements. In general, these conclusions will extend considerably beyond the collected data to the environmental compartments which the samples or measurements were intended to represent. This chapter summarizes the results of an evaluation of alternative statistical methods for data treatment and interpretation. Following a brief introduction, major Areas of Concern are discussed. Areas addressed are Experimental Design, Sampling Representativeness and Data Analysis. Recommended methods were chosen to provide the best possible input into dose evaluation procedures. In addition to containing recommendations for meeting minimum requirements in each Area of Concern, several particularly troublesome topics are discussed. Among these are establishing a reliable baseline, quantifying variables in the sampling equation and handling less-than-lower-limit-of-detection data.
(Accuracy; baseline study; critical pathway analysis; data analysis; data interpretation; distribution analysis; dose assessment; environmental compartments; experimental design; homogeneity; lower limit of detection [LLD]; precision; probability plots; radiation; representativeness; sampling; statistics; surveillance objectives; variability)
Statement of the Problem
The general purpose of environmental radia-
tion surveillance in the vicinity of any nuclear
installation is to obtain information essential
to assessing and controlling the exposure of the
neighboring population to radiation and/or radio-
active materials. Various components of the
nuclear community supplement this purpose with
secondary purposes in different ways. These
alternative statements of objectives are discussed
elsewhere in this document.
The necessity of adjusting environmental
surveillance techniques to meet selected ones of
these established objectives is a widely recog-
nized, yet seldom followed, principle. Often a
program is planned to meet several objectives,
but without explicitly stating the rationale for
any of these several objectives so that compatible
data collection, treatment and interpretation
techniques can be selected.
Recently, a variety of data handling techniques have come into common use. With the large number of techniques presently in use, it is nearly impossible to compare even similar data at different sites. Both the regulatory agencies and the public have deemed comparability of results to be desirable, and hence the techniques suggested in this chapter have been adopted to help achieve that end.
The marked increase of public and regulatory interest in radiological environmental surveillance in recent years and the continuing decline in the environmental concentrations of most radionuclides make appropriate and as-rigorous-as-possible data collection, treatment and interpretation techniques mandatory. Areas of concern in environmental surveillance which are affected by these changing requirements are (1) experimental design, (2) sampling, and (3) data analysis.
Two problem issues commonly encountered in the interpretation and reporting of environmental data are the treatment of "less-than-minimum-detection-limit" values, and the determination of contributions to the environment by a nuclear facility. Routine environmental surveillance data are best interpreted by treating the data as groups (not as individual data points) and plotting the groups on probability paper of some kind. Because the resulting distributions are presented graphically, the median and geometric standard deviation (on log-probability paper) or the mean and standard deviation (on linear-probability paper) can be readily determined visually. Contributions from operating plants and expected upper limits can also be identified.
The advantage of handling environmental surveillance data as distributions instead of individual data points, and graphically instead of numerically, is that these distribution plots yield, with the same or less effort, average values and standard deviations that are the same as those obtained by numerical methods; they quickly show whether the distribution choice was correct (as between Gaussian and log-normal), and whether the data belong to two statistical groups.
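As a numerical stand-in for reading the log-probability plot by eye, the following Python sketch computes the plotting positions and fits the line whose intercept and slope correspond to the median and geometric standard deviation; the data and function name are hypothetical:

    import math
    from statistics import NormalDist

    def log_probability_summary(values):
        """Estimate the median and geometric standard deviation that would be
        read from a log-probability plot of a group of results. Sorted log-values
        are paired with normal quantiles at plotting positions (i - 0.5)/n; a
        least-squares line through these points has intercept ln(median) and
        slope ln(GSD)."""
        logs = sorted(math.log(v) for v in values)
        n = len(logs)
        quantiles = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
        mean_q = sum(quantiles) / n
        mean_l = sum(logs) / n
        slope = (sum(q * l for q, l in zip(quantiles, logs)) - n * mean_q * mean_l) / \
                (sum(q * q for q in quantiles) - n * mean_q ** 2)
        intercept = mean_l - slope * mean_q
        return math.exp(intercept), math.exp(slope)   # (median, geometric std. dev.)

    # Twelve hypothetical weekly gross-beta results (pCi/L):
    data = [0.9, 1.1, 1.3, 1.0, 1.8, 2.2, 1.4, 0.8, 1.6, 1.2, 2.6, 1.5]
    median, gsd = log_probability_summary(data)
    print(round(median, 2), round(gsd, 2))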
Groups of data including 10 to 100 items, the
range of group sizes most common in environmental
programs, can be handled efficiently by graphical
methods. For example, weekly or monthly data may
be grouped for interpretation. Sampling sites are
generally limited to less than 100 in number.
These graphical methods are equally useful for
larger groups of data, but manual treatment be-
comes tedious, particularly for routine
applications.
In this chapter the mechanics and applica-
bility of the recommended data handling and
interpretation techniques are detailed for each
of the three areas of concern.
Areas of Concern
Experimental Design
Required Sensitivity. Evaluation of population doses from the "as low as practicable"* viewpoint has created pressure for increasingly sensitive radiation measurement and evaluation techniques. As a result, nearly all DOE sites, except for those with high-energy machine radiations near site boundaries, currently report[3] maximum individual whole-body doses of less than one mrem per year for comparison with a standard[4] of 500 mrem per year. Supporting such dose estimates is the availability at most DOE and commercial laboratories of radioanalytical detection levels for the more common analyses, using practicable sample sizes and counting times, equivalent** to annual organ doses of less than 5 mrem (for most analyses less than 1 mrem). Detection levels of this magnitude for direct measurement of incremental external radiation in the environs are not readily available because of the comparatively large variability of natural radiation levels.
Attempts to calculate "health effects" in a large population exposed to minuscule doses, as well as the increased attention being paid by regulatory agencies and public interest groups to radiation levels far below the previously accepted standards, reinforce the desirability of achieving such dose discrimination levels when technology permits. This will become even more important within the next few years as power reactors and other non-DOE nuclear facilities governed by NRC licensing requirements begin to operate at locations sufficiently close to DOE facilities to have potentially overlapping environmental impacts. A few definitions are required before the details of this subject can be discussed.
Sensitivity Definitions. Confusion and non-
comparability of reported data have resulted both
from differing terminology for the measure of
analytical sensitivity and from differing mathe-
matical definitions. Frequently, as in DOE Manual
Chapter 0513[6] and the EPA/ORP Environmental
Radioactivity Surveillance Guide[5], "Minimum
Detection Limit" or a similar term has been used
without definition.
The Minimum Detectable Concentration, Minimum
Detection Limit or Minimum Detectable Level (MDL),
as commonly used, refer to a minimum incremental
concentration or exposure rate based only on
analytical or instrument sensitivity. In order to
lessen confusion, the Environmental Measurements
Laboratory (EML) usage[7] of Lower Limit of
Detection (LLD) is recommended for the minimum
detectable increase in sample counting rate. The
LLD can be converted to a MDL by specification of an acceptable confidence level and conversion to sample concentration. A LLD for measurement of external radiation can be calculated in exactly the same way as for counting of samples; for such measurements the LLD and the MDL are identical. A MDL for either a sample concentration or a direct measurement can be translated to an equivalent Minimum Detectable Dose by carrying it through an environmental pathway matrix.

*The alternative term "as low as reasonably achievable" has been recommended by the ICRP[2] and adopted by the Nuclear Regulatory Commission.
**Table 2 of Reference 5 gives dose equivalents for stated analytical levels. The levels given in that 1972 document and included in Appendix C of this document are readily lowered by most laboratories, especially for 90Sr and 131I.
Since both the background (C_B) and sample counts (C_S) (and rates) estimate central values of distributions rather than points, preselecting a lower limit of detection equivalent to (C_S + C_B) as some multiple of C_B implies the acceptance of uncertainty in evaluating a sample count. The total uncertainty is the sum of probabilities of Type I decision error (P_I) (accepting the presence of radioactivity in the sample when none is present) and Type II error (P_II, failure to recognize the presence of a specified amount of radioactivity, the LLD, when it is present). For direct counting of a sample with no interferences, several reasonable assumptions permit the following derivation:
Assume: (a) Poisson counting statistics: standard error = √C;
(b) similar counting times for determination of (C_S + C_B) and of C_B;
(c) standard error of (C_S + C_B) ≈ standard error of C_B for small increments of C_S.
Then, with C_S = (C_S + C_B) − C_B,

    S.E.(C_S) = [S.E.²(C_S + C_B) + S.E.²(C_B)]^1/2 ≈ √(2 C_B).

Let C_S = LLD, which can be set to some factor K times the standard error of C_S, that is,

    LLD = K √(2 C_B)

and

    K = k_I + k_II,

where the k values are taken from the standardized normal variate probability tables for one-sided errors.
It is Environmental Measurements Laboratory practice to use the term Lower Limit of Detection for the quantity here defined, that is:
The Lower Limit of Detection (LLD) is the smallest true sample net count rate which, using a given measurement process, will be detected 100(1 − P_II) percent of the time, while the risk of falsely concluding that sample activity is present, when it is not, is 100 P_I percent.
Johnston[8], addressing the equivalent of a
MDL directly and incorporating other sources of
variability in the overall sampling/analysis
scheme, derived an "Index of Adequacy" equivalent
to this combined probability factor.
It has been the practice at several sites to assign to the factor K a value of 2 (an approximation of 1.96) for a claimed 95 percent confidence level of detection. This may be standardized in a definition of LLD, and by implication the MDL, for reporting purposes, but with the recognition that a continued probability of 95 percent detection of positive results, by virtue of the relationship between the k's, has a corollary of false positive reporting for as many as one out of three determinations.
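The "one out of three" consequence can be checked with a short Python sketch, assuming K = k_I + k_II with k_II fixed for 95 percent detection (the function name is illustrative):

    from statistics import NormalDist

    def implied_false_positive_rate(K=2.0, detection_prob=0.95):
        """With LLD = K * S.E. and K = k_I + k_II, fixing the detection
        probability (1 - P_II) leaves k_I = K - k_II; the corresponding
        one-sided false-positive probability is P_I = 1 - Phi(k_I)."""
        k_II = NormalDist().inv_cdf(detection_prob)   # 1.645 for 95%
        k_I = K - k_II                                # 0.355 when K = 2
        return 1.0 - NormalDist().cdf(k_I)

    print(round(implied_false_positive_rate(), 2))    # about 0.36, roughly 1 in 3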
Pasternack and Harley[9] have extended the derivation to the much more complex problem of gamma spectrum analysis, pointing out that for such analyses the LLD for a given nuclide varies not only with the other nuclides present but also with the list of nuclides assumed to be present. No attempt is made here to provide standard MDLs, and for gamma emitters, comparability between laboratories will undoubtedly remain questionable.
The Sampling Equation. Potential methods of achieving a lower Minimum Detection Limit for a given analytical Lower Limit of Detection are indicated by the following sampling equation:

    A = a C / (V f r ε)

where A = target concentration, the smallest concentration which will be quantitatively measured by the procedure;
    t = counting time devoted to a sample;
    C = minimum count rate measurable by the detector in time t;
    V = volume of the sample collected;
    f = the fraction (or aliquot) of the sample collected which is carried into the purification and counting steps;
    r = fractional analytical recovery (efficiency) for the radionuclide of interest;
    ε = counting efficiency of the detector (fraction of a count per disintegration);
    a = unit conversion factor.
Values of the terms V, C, ε, and r are all
affected by available hardware, technical train-
ing, and operating expenses. Obviously, a broad
range of combinations of these four terms will
satisfy the requirement for measuring a given
value of A. Neither the minimum cost combination
nor the combination which can be most easily met
with capability on hand is straightforwardly
determined. In practice, several trial combina-
tions and their implications should be evaluated
in order to optimize the sampling and analytical
procedures. Note that if C is set equal to the
LLD, A becomes the MDL. However, the equation is
valid at all count rates and concentrations.
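A sketch of the sampling equation as written above (the example values and the unit conversion factor are hypothetical) shows how trial combinations of the terms can be compared:

    def target_concentration(C, V, f, r, eff, a=1.0):
        """Sampling equation: A = a * C / (V * f * r * eff).

        C    minimum measurable count rate (e.g., counts per minute)
        V    sample volume collected (e.g., litres)
        f    aliquot fraction carried through purification and counting
        r    fractional radiochemical recovery
        eff  counting efficiency (counts per disintegration)
        a    unit conversion factor, e.g. 1/2.22 to convert dpm to pCi
        """
        return a * C / (V * f * r * eff)

    # Hypothetical trial combination: 0.5 cpm measurable, 4 L sample, half aliquot,
    # 80% recovery, 30% efficiency, reported in pCi/L.
    print(round(target_concentration(C=0.5, V=4.0, f=0.5, r=0.8, eff=0.3, a=1/2.22), 2))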
In this analysis, the variabilities of other
parameters deserve attention because their rela-
tive errors, especially for V, may be comparable
to counting error. The chance of making improve-
ments in counting statistics depends on the pre-
cision of the rest of the sampling-measurement
system. For example, the volume of air pumped may
not be known more precisely than ± 10 percent. In
this case there may be little merit in incurring
significant costs to make the counting procedure
much more precise than ± 10 percent.
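The trade-off described here can be illustrated with a short sketch that combines independent relative errors in quadrature; the percentages are illustrative:

    import math

    def total_relative_error(counting_pct, volume_pct=10.0, other_pct=()):
        """Combine independent relative errors (in percent) in quadrature."""
        parts = [counting_pct, volume_pct, *other_pct]
        return math.sqrt(sum(p * p for p in parts))

    # With a 10% volume uncertainty, tightening counting precision from 10% to 2%
    # improves the overall uncertainty only from about 14% to about 10%.
    for counting in (10.0, 5.0, 2.0):
        print(counting, round(total_relative_error(counting), 1))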
Program Specifics. Most environmental surveillance guides* devote one or more sections to program design, using terms such as protocol, objectives, and rationale. Yet few of these programs adequately specify such parameters as number of samples, sampling frequency, or methods of compositing. Some guides have suggested minimum numbers of samples and sampling frequencies, but it is not wise to apply simple criteria to all facility types or sites.
One method of choosing criteria for sample collection (numbers of sampling locations per site, sample collection and/or analysis frequency, and whether to composite samples or analytical data) is formal sampling theory.[10] An alternative is to base a program design on experience.
To do this, the 1976 annual environmental monitoring reports for 13 major nuclear sites within the U.S. were reviewed. Seven Department of Energy (DOE) contractor sites and six Nuclear Regulatory Commission (NRC) reactor sites were included. The DOE contractor sites were chosen on the basis of one site within the purview of each of the regional operational offices from ERDA 77-104 (Environmental Monitoring at Major U.S. Energy Research and Development Administration Contractor Sites, Calendar Year 1976), while the reactor sites were based on the availability of 1976 reports. Through this selection process it was possible to utilize the reported details from a wide cross section of facility types and geographic locations around the U.S.
The information extracted from these reports
was grouped in one of three environments—atmo-
spheric, terrestrial, or aquatic. This grouping
was based on the "environment" most likely to be
impacted by the routine effluents from each site.
Only that sampling and analytical information
considered part of the routine surveillance pro-
gram at each site was included. Hence, precipita-
tion collection, special one-time environmental
studies, etc., were excluded. However, the impor-
tance of these media or parameters needs to be
judged on a site-specific basis and should not be
ruled out of an environmental surveillance pro-
gram simply because of omission here.
*See for example: ORP/SID 72-2, ERDA 77-24, Reg. Guides 4.1 and 4.8, NUREG-0475, and NCRP-50.
Findings. Tables 1-3 summarize the results of this compilation, giving the range and typical (median) values for the following parameters:
o Number of sampling locations per site
o Frequency of sample collection
o Number of analyses per sample collected
o Frequency of sample analyses (often different from the frequency of collection because of compositing).
In comparing the median number of sampling locations in Tables 1-3 with the "perspective" tables of Denham[11], it is clear that more than a single rationale exists for environmental sampling. On the other hand, for those sample media which are difficult to collect or for which there are limited dose pathways (e.g., HTO or radioiodine in air), a relatively consistent minimum number of sampling locations is apparent. This approach was used to arrive at an acceptable minimum number of sample locations per site, based on the calculated annual dose to individuals as the rationale. Similarly, the frequency of sampling and analytical measurement experience of others (Tables 1-3) provides a useful base from which to establish a "frequency" protocol. A third item, compositing of samples or data, while not given explicitly in Tables 1-3, was discussed to some degree in several of the reports. Typically, some samples (e.g., air filters and milk) are composited over time (by location) to provide a larger sample and enhanced analytical sensitivity, while others (e.g., soil and water) are composited at collection prior to analysis to avoid major concentration inhomogeneity problems. Both are considered acceptable and useful techniques, yet the reasons should be examined on a case-by-case basis to ensure meaningful results from which appropriate interpretations can be made.
The previous discussions point to the fact that a weak link in environmental surveillance continues to be insufficient documentation of why and how samples are collected. Although sampling rationale was not explicitly stated in any of the reports reviewed, the following objectives, including the four primary ones cited by Denham[11], are suggested for consideration in program planning:
o Dose assessment
o Compliance with standards
o Verification of effluent controls
o Trend evaluation
o Public information.
Recommendations and Conclusions. To plan and conduct environmental surveillance programs on a minimum cost basis, it is essential that an appropriate list of rationales be established for that particular site. One method of establishing an acceptable minimum number of sampling locations (by medium) per site is to perform a critical pathway analysis. The doses thereby obtained should be compared with the recommended number of sampling locations given in Table 4. The same technique can and should be used for other rationales. It should be recognized, however, that these choices are not necessarily mutually exclusive (i.e., sampling at a given location may satisfy several rationales, or different rationales may require different sampling locations for the same medium).
Once the number of sampling locations has been chosen, the distribution techniques of Waite (BNWL-SA-4676), the EPA (ORP/SID 72-2), and Regulatory Guides 4.5 and 4.8 can be used. Another recent publication by Leggett et al.[12] addresses a statistical methodology for surveying contaminated facilities. Many of the suggested methods are applicable to environmental surveillance.
It was interesting to note that several soil
sample compositing methods are presently in use,
ranging from a series of sample cores along a
straight line to taking one at each of the 12
hourly locations on the circumference of a circle.
However, no consistent depth of soil sampling is
observed. This inconsistency remains, but a depth
of 5 cm seems to be reliable and applicable to
most soil types. This was not considered a serious
issue since soil sampling is not recommended for
dose evaluation.
Section 3.6 in the DOE Guide (DOE 77-24)
should be consulted for an analysis of sample
size, especially with respect to analytical
detection levels and variability.
Similarly, Section 3.7 of the DOE Guide treats the topic of measurement frequency and is reproduced here in full:
"Aside from sample sensitivity requirements,
the frequency of sample collection and
measurement must take into account the half-
life of the radionuclide being measured.
Even though a decay correction can be made,
a delay of two half-lives between sampling
and analysis for an intermittent occurrence
of a radionuclide in the medium being
analyzed increases the probable error and
thus the MDL by a factor of at least two, and
probably more depending on the time of
arrival relative to the time of sample col-
lection. For a sample analysis with a barely
acceptable sensitivity in any case, such an
increase may be unacceptable."
"Because effluent releases and the environ-
mental media they affect may both vary with
time, nonproportional and periodic grab
sampling risk bias by being synchronized
with some cyclic feature of the process or
the environment (e.g., a liquid effluent
release schedule, tidal cycles or daily
fluctuations of stream flow). Synchronization
(i.e., collecting an environmental sample
either in or out of phase with effluent
releases) may be desirable if the sensitivity
of the monitoring system can be increased,
but inadvertent synchronization should be
avoided because the resulting bias may lead
to misinterpretation of the results."
TABLE 1. SUMMARY OF ENVIRONMENTAL MONITORING
PRACTICES—ATMOSPHERIC PATHWAY [11]
Number of sites
Number of sampling
locations
Number of analyses
Sampling Frequency
Daily
Weekly
Biweekly
Monthly
Quarterly
Semiannual
Annual
Other
Analytical Frequency
Daily
Weekly
Biweekly
Monthly
Quarterly
Semiannual
Annual
Other
Particulates
13
1-29
(12)*
3-6
(4)*
2
2-11
1-3
1
2
3-11
3
1-8
1-4
Tritium    Iodine    Direct Radiation (TLD)
5 8 13
3-29 3-12 4-53
(7) (7) (24)
111
1 7
2 1
2 7
7
1 7
2
217
7
2
1
1
*Typical or median value for those sites collecting and analyzing the
respective media shown.
TABLE 2. SUMMARY OF ENVIRONMENTAL MONITORING
PRACTICES—TERRESTRIAL PATHWAY
Number of sites
Number of sampling
locations
Number of analyses
Sampling Frequency
Daily
Weekly
Biweekly
Monthly
Quarterly
Semiannual
Annual
Other
Analytical Frequency
Daily
Weekly
Biweekly
Monthly
Quarterly
Semiannual
Annual
Other
Soil
12
3-28
(12)*
1-4
(3)*
1
1
1
9
1
1
1
9
Vegetables
and
Grasses
9
3-25
(12)
1-5
(3)
4
2
1
9
4
2
1
9
Forage Milk
3 11
2-14 1-13
(5) (5)
2-4 1-5
(3) (3)
4
2
2 8
1
1
4
2
2 9
4
1
Animals
7
1-14
(3)
1-3
(2)
1
1
5
1
2
5
*Typical or median value for those sites collecting and analyzing the
respective media shown.
TABLE 3. SUMMARY OF ENVIRONMENTAL MONITORING PRACTICES—AQUATIC PATHWAY
Number of sites
Number of sampling
locations
Number of analyses
Sampling Frequency
Daily
Weekly
Biweekly
Monthly
Quarterly
Semiannual
Annual
Other
Analytical Frequency
Daily
Weekly
Biweekly
Monthly
Quarterly
Semiannual
Annual
Other
Surface
Water
12
2-15
(6)*
2-9
(5)*
5
1
7
2
1
1
1
9
8
1
Ground
Water
9
1-75
(6)
2-8
(4)
5
4
3
1
5
5
3
1
Drinking
Water
10
1-16
(3)
2-8
(3)
3
6
2
1
1
3
8
4
1
1
Sediment
11
1-24
(4)
1-6
(3)
1
1
1 0
5
3
1
1
1
5
3
Fish/
Shellfish
11
1-10
(3)
1-4
(2)
2
4
4
1
2
5
4
1
Plants,
Waterfowl,
Etc.
9
1-31
1-3
(2)
3
2
4
3
1
2
4
*Typical or median value for those sites collecting and analyzing the respective media shown.
TABLE 4. RECOMMENDED MINIMUM NUMBER OF ENVIRONMENTAL
SAMPLING LOCATIONS PER NUCLEAR SITE AS A
FUNCTION OF CALCULATED ANNUAL DOSE TO THE
MAXIMUM INDIVIDUAL*
Environmental Media**
Calculated Annual Dose Level (mrem)
1-10 0.1-1 0.01-0.1 <0.01
Air, ambient radiation 10
Milk, other foodstuffs,
water, fish 5
*These criteria are for a dose based program, but are
assumed to be equally applicable for other rationales,
such as compliance and public relations. At higher dose
levels (e.g., 10-100 mrem/yr), increase the number of
locations by a factor of 3 for each order of magnitude
increase in the dose or factor of 10 for each factor of
100 increase in the dose level.
**Other media, such as soil and sediment, which are not part
of a direct-dose pathway, are excluded (i.e., not re-
quired as part of the environmental program). However,
there may be other compelling reasons to include them, in
which case soil should be added at the same level as air;
sediment at the same level as water.
"Seasonal habits of people and animals can
also result in nonrelevant data from a uni-
form year-round program. Recreational expo-
sures are an excellent example, for aquatic
sports as well as hunting and fishing.
Aquatic and terrestrial biota can be selective in what they eat, and they select differently according to availability. For
example, assessing the dose impact via milk
cattle forage requires sampling in the sea-
son the forage is eaten. The temptation to
take either random or a complete series of
samples for such a purpose, based solely on
an arbitrary schedule such as every six
months, should be resisted because the data
may well be of no value in pathway analysis.
Although such data may be suggestive, they
are not likely to be reliable."
Another consideration relevant to compositing
is that it is characteristic of environmental
monitoring data that many results occur at or
below the minimum detectable level. For reporting
and calculation of averages (see the last section
in this chapter for a discussion of acceptable
methods of data analysis), it is imperative that
these facts be considered. When results of all
samples are grouped according to analysis type,
means and standard deviations of these results
can be calculated. The standard deviations so calculated from these grouped data represent total sample variability rather than merely analytical variability, and they are the values recommended for reporting.
Sampling Representativity
Environmental Variability. Concern about
whether data are representative expresses an intui-
tion that composition across an environmental com-
partment might not be uniform. Less intuitive are
factors about the actual degree of nonuniformity and about how large a degree of nonrepresentativity
one may be willing to accept in the data. Degree
of nonuniformity concerns the material substance
of interest whereas representativity per se con-
cerns our motives for wanting to know anything at
all in a quantitative way. These two aspects of
representativity will be contrasted throughout the
following discussion.
We worry whether the data are representative
because our interests extend beyond the data them-
selves. We wish to use the data as a springboard
to reach conclusions about something we did not
sample, perhaps for reasons of inaccessibility or
costs. Our springboard will contain a logical
basis that can be reviewed and refined at will,
but we worry that the data that go into the
major or minor premises might be faulty—nonrepre-
sentative—so that we might be led to incorrect
conclusions. We want to use data from a small part
of the physical world to calculate the "facts"
about a larger and possibly different part. Data
are said to be representative if the calculated
"facts" are similar to the larger reality in a
practical way.
In these guidelines emphasis is on the situa-
tion where the mass of the sample may be only a
millionth or less of the mass of the environmental
compartment it is intended to represent. Addi-
tionally, there is a dynamic aspect. We would like
to reach some conclusion that the sample can tell
us about its source (historical point), or what
might be inferred about events yet to come
(projection in time), or about events that went
on in an adjacent area at the time the sample was
collected (geographic extension). Finally, we may
combine the field data with demographic data and
with information about biologic effects in order
to describe an impact.
The issue of representativity is important. Because the impact might be serious, being sure of what the data really represent, and what they do not, deserves the highest priority, higher than the priority given to accurate chemical analysis, which usually bears the burden of accusation when the data "just don't look right".
How May Environmentalists Approach "Representativity"? Representativity, of course,
is something that happens in the field when the
sample is taken. That is, the field operations
and technique will determine how reliably the
sample represents a part of the environment we
are interested in. However, representative
sampling does not begin in the field—it only
ends there. To achieve representativity one
must start at the beginning. That beginning is
a debate about what we really want to learn of
the environment and of how sampling might play a
role in that learning process.
For most persons concerned about environmental degradation, the ultimate question concerns the item called "impact", a net result of all the factors involved, a measure of how severely living space has been affected, might have been affected, or will be affected. For example, the mining industry has an analogous "bottom line" to "impact": the value of the rock. And as the value of a rock cannot be determined by sampling alone, no matter how good or representative, neither can impact be measured by sampling, good or bad. Of course, neither value nor impact can be reliably estimated without good sampling.
This emphasizes that all of the factors which go into the logic net, and the calculations too, must form a coherent and intercompatible assemblage. The requirements on the sample, and thus also on the field techniques which yield the samples, must be rooted in the coherence of the entire assemblage of factors. Several programs to assess impact can be proposed for any specific combination of contaminant and geography. More than one can be right in the sense that the derived conclusions would not be misleading beyond the statistical certainties. They would differ in cost and emphasis. It is important that the sampling requirements for one program are not thrust into the logic set of another program with which they are logically ill-suited. Such would be one form of nonrepresentativity.
To be more concrete on this point, in a net-
work of samplers one intends that all the space
between samplers is "represented" by at least one
sampler. That is to say, in the calculational
model the data recovered from one sample point will be applied in some way to the space between that point and the next sampler. There are several ways to do this mathematically: interpolations may be linear, parabolic, logarithmic, step function, etc. Which kind of interpolation
is used can be debated according to what is known
or suspected about dispersion, etc. The actual
choice for interpolation will determine, in part,
what kind of sample spacing to use. As a general
rule, data are more accurately interpreted if
the incremental values between data from adjacent
sampling points are similar in magnitude as used
in the model. Thus, a linear interpolation
would lead to equal spacings between sample sites
or sampling times, whereas a logarithmic interpolation, which is linear in logarithmic units, would be better served by placing the sample sites
closer where the concentration gradients are
steeper (in absolute, not logarithmic units).
Specifically, in the latter case, for a logarith-
mic concentration field the sample sites would be
optimally located if/when the data values among
them varied in a way that their logarithmic in-
crements (not the absolute ones) were equal. To
the degree that this is achieved one may minimize
the number of sample points needed to calculate,
with a preselected statistical certainty, the in-
ventory of material in the field. We should
expect there to be an economic drive to apply
similar techniques because once the gradient type
is known the number of samplers required to mea-
sure the gradient can be reduced to a minimum
number, commensurate with the degree of statisti-
cal precision desired in the calculated result.
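As a minimal sketch of this spacing argument, the Python fragment below assumes a hypothetical power-law fall-off of concentration with distance from a source and places a fixed number of samplers so that the expected concentrations at adjacent sites differ by equal logarithmic increments. The fall-off law, distances, and concentrations are illustrative assumptions only.

    import math

    # Hypothetical power-law fall-off c(x) = c0 * (x0 / x), assumed purely for
    # illustration.  Equal log-concentration steps then place the samplers
    # geometrically, i.e. closer together near the source where the absolute
    # gradient is steepest.
    c0, x0 = 50.0, 0.5                      # concentration at reference distance x0 (km)
    x_near, x_far, n_sites = 0.5, 16.0, 6   # extent of the field and number of samplers

    log_c_near = math.log(c0 * x0 / x_near)
    log_c_far = math.log(c0 * x0 / x_far)
    for i in range(n_sites):
        log_c = log_c_near + i * (log_c_far - log_c_near) / (n_sites - 1)
        x = c0 * x0 / math.exp(log_c)       # invert c(x) = c0*x0/x for the position
        print(f"site {i + 1}: x = {x:6.2f} km   expected c = {math.exp(log_c):6.2f}")

With these assumptions the sites fall at 0.5, 1, 2, 4, 8, and 16 km, illustrating how the chosen interpolation model, not field convenience, drives the spacing.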
Summary. Representativity is achieved when the calculated results based on the samples and the logic net correlate well with the physical world. In addition to motives for being accurate about the calculations and derived implications, there are economic incentives to bring the sampling into good alignment with the most realistic model. In that way the number of sample points can be minimized, thus minimizing the cost/quality factor for the surveillance. Representativity in sampling is not simply a matter of good technique in the field, although that is essential. Still more crucial is for the sample taking to be consistent with the calculational model and its logical correlates. It is the model, not the samples, which represents the physical world. The samples/data merely serve to calibrate the model. Multiple models might be useful in a specific situation, and they may require different sampling programs. If only one sampling program can be carried out, calculational models inappropriate to it should be avoided. Representativity begins with a precise
statement of what sort of information is required
from the field samples. Field technique is only
the last step in the chain that yields represen-
tative samples.
Verifying that representativity has been
achieved or missed deserves considerable deli-
berate effort. Environmental issues are usually
without a referee, hence the verification must be
approached through indirect means. These include
attempts at disproving that an alleged situation
actually exists. The subtleties of logic for
proof versus disproof are important to this issue.
Acquiring data on a temporary basis in order to pursue a choice between alternative interpretive models should be aimed at crucial tests of the differences between those models.
Establishing a Reliable Baseline. The motives for establishing a baseline hinge on quantitative results. Perhaps one wishes to know in a simple way "how much is out there" without regard to past or future measurements. In that case a simple central value would be adequate, even if there were actually a considerable range of concentrations, and it could be obtained by simple grab sampling. Perhaps instead one wishes for a reference against which past or future measurements can be compared. Then some time factor must be brought into the sampling and also into the theory used for extrapolating outside the realm of sampling. Alternatively, one may wish for current data on background to use in assessing the current impact of an effluent, a situation that requires a decision about which samples of a set were affected by the effluent or, if all were suspected to be affected, an assessment of the relative amounts.
It is useful in this discussion to distinguish the meanings of "background" and "baseline".
Background refers to the reality of lower concen-
tration levels of a contaminant as it exists at
any time, whether measured or not. Baseline
means a numerical approximation of the real back-
ground. Baseline is the thing compared with mea-
sured concentrations to assess whether they
exceed background.
Whether a baseline is reliable will depend as much on one's needs as on the facts. Specifically, how precise must extrapolations or comparisons be in order to be serviceable? How does the time interval involved compare with the periods of natural fluctuation of either the background or the strength/direction of the effluent source? How much resolution is desired compared to the cost of acquiring more resolution?
Three aspects of a baseline will be described in more detail. First is concept: precisely stated concepts about what the baseline is intended to represent, when it is to be applied, and what form the numerical data should have. Second is measurability. A central value will not be enough if there is intent to use the baseline for comparative assessments about whether a new item of data belongs to the background, or how much it deviates from the background. Such applications require that the variance be known, and one needs also to know if the variance changes with time or if it is different from place to place. Third is contrast between the baseline
range and the arguable increments of contamina-
tion. If the nature of the effluent is to be
either very high in concentration relative to
the background or very low, then simple facts
about the background will suffice. However, if
the effluent concentrations are serious consid-
erations even when near background levels then
not only does one need to know the background
variance to good precision, but methods for
minimizing the uncertainties due to that
variance should be used.
Baseline Model. Baseline is a concept, not
a part of the physical world. It belongs to the
calculational models. There, it may variously
represent (1) pristine concentration that existed
before there was an artificial impact, (2) a nor-
mal level of effluent against which one looks for
excursions, (3) a level existing for nonlocal
reasons when one is interested in the local situa-
tion, (4) a background level due to multiple local
inputs; and there may be others. Data one might
bring into the issue serve to calibrate the con-
cept with respect to the physical world. Estab-
lishing a reliable baseline involves concepts
from the section "Representativity", including a
precise description about what sort of baseline
might be appropriate for a specific situation.
The term baseline is somewhat misleading.
Not only is the concept used in a plurality of
ways but also the real-world concentrations to
which it refers are not regular. Consequently,
the conceptual baseline, which would be easier to
use if it were simple, must suffer some degree of
mismatch with the range of physical concentration
it is intended to represent. Thus, "reliable"
has much to do with the appropriateness of the
model to be used.
Deciding which sort of model to use for base-
line begins with concepts about how the contami-
nant at issue enters the environment, accumulates,
and is consumed. Some materials degrade so slowly
that their buildup can be monitored. Plutonium in
soil is an example. Other materials, like tritium or DDT, degrade measurably, so it is possible for
their buildup to reach a steady state or even de-
cline. In such a case it is the flux of material
through the environment, rather than the amount,
that might be of greatest pertinence to an assess-
ment program. For others, like iodine and xenon,
which are still more ephemeral, their concentra-
tions at selected points may yield patterns along
a time dimension that carries both a baseline
quality and a record of events. Materials like
NOx and photochemical smog yield patterns that
reflect weather conditions as strongly as they
characterize the sources of effluent. For them
the concept of background is obscure since to a
degree they are the environment and people who
live with the problem are concerned with the
absolute value of the concentrations more than
with the value at one time relative to the value
at some reference condition.
Setting up a reliable baseline begins by getting the issue onto the right conceptual footing. Does the model call for increments, for flux, or for concentrations? Whatever it calls for, the baseline must match. Furthermore, the baseline must be measurable. Concentrations can usually be measured readily, but increments and fluxes may require several coordinated samples, each of which provides a concentration, in order to get one estimate of flux or increment. That is to say, the environmental model should not be too sophisticated. Even though logically elegant, if key
data for a model are difficult to obtain, then its
reliability will suffer. There is a trade-off
here between representativity and reliability. A
model which is astutely representative may not be
reliable in practice if the required inputs of
data are too difficult or costly to obtain.
Obtaining a Central Value and Variance. Once
the conceptual baseline has been established,
then work may begin to calibrate it. The form of
the baseline would be defined by the terms of the
model, and there are several possibilities: (1) a single numerical value, A ± B, to be applied at all times and at all sample points; (2) single values for each sample point, having the form A ± B but with tailored values of A and B for each sample point; and (3) time-variable values based on circadian, monthly, or seasonal systematics in the concentration of the monitored material.
The variability may be applied uniformly to all
sample points in the net as in case (1) or
selectively as in case (2). How much complica-
tion is put into the definition will be determined
by how intense the differences are between sites
compared to the impact to be measured. The
simpler forms of baseline are easier to use and
discuss, but they may be lacking in fairness.
Thus, the degree of complexity to use may be a
topic for debate and there may be a trade-off be-
tween utility and accuracy.
If plant operations have not yet begun then
the measurement of background may be uncomplicated.
There may be advantages to studying the background
in much detail, not only to clearly measure cen-
tral value and variance, but also to find pat-
terns in the mechanisms which disperse the
material of interest so that the finally selected
locations of samplers may be more astute. At
this stage there should be an intent to refine
any supposition about dispersion modes and adjust
the monitoring program according to improved
concepts. Baseline dispersions of trace materials
will show patterns. The complexity of those
patterns depends on many local factors. Work
done to establish baseline should proceed until
the features of those patterns become clear.
These features may be statistical in nature, or
they may be cause/effect. The reliability of
the baseline will be proportional to the clarity
with which one can describe the patterns of the
background.
If plant operations are already underway
then baseline can still be quantified, but there
will be some constraints about how to arrange
samplers and select data. There is little complication for liquid effluents since the inputs
and outputs of a plant are separately accessible.
If discharge is into a stream, then upstream and
downstream sampling will simultaneously obtain
data for both background and additions but ques-
tions of mixing and lack thereof are serious.
For air discharges a remote location may be used
to assess background, but representativity may
be suspect. Features of context, geography,
weather, and other factors partly measure the
suitability of a remote site and it must not
possess confusing attributes.
Obtaining a measure of background in the
face of active contaminating events can be done.
How it may be done, validly and precisely, de-
pends on local situations to such a degree that
general rules would be dangerous advice. Some
local quirk, ignored in the phrasing of the
general rule might invalidate the scheme. Yet,
the need tor a quantified baseline may justify
attempts at its measurement that are served, by non-
standard arrangements of data. Ah example of this
kind of approach will be given for the case of air
samplers surrounding a single source of contami-
nant. The purpose of showing this method is less
a specific formula for how to sample around parti-
cular sites than it is a qualitative example to
show how data from samplers that are affected by
effluent can be used also to extract information
about background. This method gets dual use from
the data. That is, this method arranges "the data
so they simultaneously indicate background and
contamination, quantitatively.
Fractional Exposure Method. The principle of
the method depends on (1) the different samplers
being exposed to the effluent-laden air for unequal
amounts of time during a sampling period and (2)
the rate of effluent discharge being somewhat
uniform. By using a wind rose constructed for the
period of sampling the sample points can be
evaluated according to the percentage of the total
period they were downwind from the effluent source.
Then, a graph is constructed of concentration
versus percent of time exposure. If the effluent
truly is measurable and adds to the background,
then the scatter diagram of plotted points in the
graph will have a slope that can be evaluated
statistically.
There are several important features in such a
graph, which are shown in Figure 1. A straight
line may be fitted to the data according to some
method like least squares; the line will have both
a slope and an intercept. The value of the slope
can be tested by statistical means against the
value zero (no slope) to assess whether the slope
(hence impact) is statistically significant. If
the slope is statistically indistinct from zero,
then all the data may be combined directly to
yield an estimate of background. If the slope
were statistically different from zero, then the
calculated intercept (at zero percent exposure)
is one estimate of background. Successive esti-
mates of background provide data for a baseline.
Contamination level can be estimated at each
sampling point by subtracting the value B from
the apparent concentration. Impact could be
assessed either through an integration among the
individual contamination levels as fitted to their
geographic and demographic locations, or through
the slope of the line, adapted to a model based
on wind data and demography.
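A minimal Python sketch of the calculation is given below, using hypothetical wind-rose percentages and air concentrations. It fits the least-squares line, forms a t statistic for the slope, and takes the intercept as the background estimate B; none of the numbers are from an actual site.

    import math

    # Hypothetical data: percent of the sampling period each sampler was downwind,
    # and the apparent concentration it measured (arbitrary units).
    percent_exposed = [5, 10, 20, 35, 50, 65, 80]
    concentration = [12.1, 12.8, 14.0, 16.2, 18.1, 20.5, 22.8]

    n = len(percent_exposed)
    mx = sum(percent_exposed) / n
    my = sum(concentration) / n
    sxx = sum((x - mx) ** 2 for x in percent_exposed)
    sxy = sum((x - mx) * (y - my) for x, y in zip(percent_exposed, concentration))

    slope = sxy / sxx
    background = my - slope * mx                 # intercept at zero percent exposure (B)
    residuals = [y - (background + slope * x)
                 for x, y in zip(percent_exposed, concentration)]
    s_err = math.sqrt(sum(r * r for r in residuals) / (n - 2))
    t_slope = slope / (s_err / math.sqrt(sxx))   # compare with a t table, n - 2 d.f.

    print(f"background B = {background:.2f}   slope = {slope:.4f}   t = {t_slope:.1f}")
    # Per-sampler contamination estimates: apparent concentration minus B.
    print([round(y - background, 2) for y in concentration])

If the t statistic for the slope is not significant, the same data would simply be pooled as an estimate of background, as described above.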
The graph has interesting properties. Note
that it yields an estimate of background even
though no sampler is in a background condition for
the entire sampling period. Additionally, the
estimate of background will have a lower statisti-
cal variance than the individual data points do
because the slope of the line is established by
several data. There may be an uncertainty about
the background estimate due to the difference be-
tween extrapolation and interpolation but it will
not be serious because the extrapolated distance
usually will be small and because conceptually
one is confident that a finite background truly exists. At the other end, extrapolation to 100 percent exposure yields the best estimate for the theoretical concept of continuous exposure.
FIGURE 1 SIMULTANEOUS ESTIMATE OF BACKGROUND
AND CONTAMINATION FROM ONE DATA SET
Figure 1. Data for air concentrations (dots) are plotted versus percent of time the wind blew from a source toward the sampler, as determined from a wind rose (not shown) constructed for the sampling period. Background (square B) is estimated by the intercept of a least-squares linear fit to the data. Square T represents the concept of concentration at continuous exposure.
The procedure above is capable of being refined in several ways to make it fit better to local circumstances. The first refinement might recognize that the different air samplers are not at the same ranges from the source. Thus, for a better comparison the data values obtained at each point could be adjusted to what they might have been if they were at a standard range from the source. Typical factors of atmospheric dispersion would be applied here to inflate or diminish the data value prior to plotting it in the graph, depending on whether the sampler's actual position was farther from or closer to the source than the "standard" range. Additionally, the diffusional mixing that goes on naturally in the winds might not have been similar in all directions or at all times. So far as data can be brought to bear on this issue, the measured concentrations can be adjusted and the adjusted values applied to a graph of the kind described.
Obviously, this method can be applied at any time
in the history of a program. Even if a baseline
were established before operations began, this
approach provides a way to assess whether the
baseline still is appropriately calibrated.
This methodology has also a substantial
tactical advantage in that it forces a clear
assessment of how successfully a plant's impact
was measured in each sampling period. A clear
indication is given about whether the sampling
program itself is adequate to detect an impact.
For example, if the calculated slopes on the
graphs were repeatedly not different, statistically, from zero, then one would be forced to conclude that either (a) the plant was practically clean or (b) the monitoring system was not sensitive enough to measure its effect. The same
data can be used to calculate a variance which
could be compared with the variance expected from
background—either from professional judgment or
independent measurements. If the calculated vari-
ance were relatively larger, then one would be led
to accept conclusion (b) over (a), and changes in
the sampling/interpretation program could be
entertained.
Note that in this method each item of data
participates in both a measure of contamination
and an estimate of background. Compared to the
alternative of using remote samplers for esti-
mating baseline there are several advantages. No
extra samplers are required, so costs are less.
There can be no quarrels about whether the base-
line data are biased for reasons of nonrepresenta-
tive geography. And the background data are timely: each background estimate applies to the same time period as the sampling of impact.
Contrast. Recognizing a real increment of con-
tamination in the face of a variable background
may require more than a good knowledge of the
background variability. This situation requires
special methods if the increment of contamination
that is looked for happens to be on the order of
the size of the background variability. If the
background variability conforms to a Gaussian
distribution then elementary statistical tests for
a suspect datum belonging to background can be
made without complication. For more difficult
situations there are additional treatments of the
data that enhance the contrast between background
and a contamination. These are discussed else-
where in this document.
Summary. Reliability of a baseline depends
jointly on the established facts about background
concentrations, the details of concept regarding
variations in background, and the contrast dis-
played when data are compared to the baseline.
The contrasts can be assessed by statistical
methods, but they may be of the simplest kind
unless near-background contamination levels are of
concern. The contrasts may be enhanced either by
reducing the effective variance of the baseline or
by transforming the data to a form that is amena-
ble to statistical tests that are more valid or
more precisely applicable.
As the facts about background concentrations
are learned, concepts about the baseline (as a
model) deserve to be refined and the results
translated into a more astute distribution of
samplers, followed by a more precise interpreta-
tion of the data they return. Knowledge and con-
cepts about the movements of contaminants may be
revised as new data are obtained on background
and contaminants. Proposed revisions to currently
used concepts can be encouraged throughout the
sampling program and new data used to demonstrate
their values. In this way the representativity
of future data can be pushed to a high degree.
Data Analysis
Data acquired during a data collection inter-
val may be grouped according to geographical loca-
tion, media sampled, nuclide sampled, or time of
sampling. (Such groupings are based on physical
considerations or specific program objectives
rather than on the actual data values obtained.)
The analysis and characterization of each of these
data groups constitutes a major emphasis in the
data treatment. Characterization may be graphical
or numerical, but usually involves determination
of statistics which estimate central value
(average or arithmetic mean, geometric mean or
median, mode, midrange) and dispersion (range,
standard deviation). Determination of these
statistics and further data comparisons are often
simplified by graphical analysis if the data con-
form to some standard distribution.
Skewed and mixed distributions, along with
the presence of outliers and of less-than-detec-
tion-limit values, are general features that per-
vade environmental radiological surveillance data.
The analysis and subsequent reporting of surveil-
lance data having these features are not easy or
routine. Moreover, the same analysis procedure
may not apply to different data groups.
One intermediate goal in environmental data
analysis is to characterize the value of some
parameter in a portion of the environment over
some period of time to a stated degree of accuracy
from a limited number of data points. The follow-
ing general principles apply:
o One datum provides an estimate of the
parameter value, but in nearly all
practical cases has such a large
uncertainty associated with it as to
be in itself of little value.
o Spatial and temporal variations have
significance to be determined on a
case-by-case basis. In trend indica-
tion, for example, spatial variation
may have no significance, but ex-
traneous sources of temporal variation
are highly important. For an inventory
of accumulated radioactivity in the
environment, temporal variation may
have little significance, but spatial
distribution is critical. For dose
estimation, as a basic goal, both
spatial and temporal variations are
important, since annual averages of
exposure to a distributed human popula-
tion are desired.
o Completely random sampling (or measure-
ment) is neither astute nor economic
for environmental programs. Continuous
sampling may satisfy needs to account
for temporal variation. For spatial
distributions stratified sampling and
subgrouping of results are commonly used
but require a priori assumptions.
Estimates of Precision. The overall precision estimator, the variance (S²), equals the square of the standard deviation (S). Note that estimates of the variances may include nonidentified systematic or cyclical errors.
Then,

    ST² = Sa² + St² + Ss² + Sh² + Sp² + Sm²
where the subscripts denote:
    T = total
    a = space
    t = time
    s = sampling
    h = sample handling
    p = sample processing
    m = measurement
The degree to which these components can be
quantified and separated determines in part the
sensitivity of the measurement system for detect-
ing the presence of a contaminant.
Estimates of the magnitudes of some combina-
tions of variance components may be obtained from
replicate measurements, either as part of the routine quality assurance program or from specifically designed sampling and analytical experiments. Such experiments should be performed adjunct to routine environmental monitoring, both
to assure that observed variances are pertinent
to the monitoring system and to make the studies
more cost effective.
Results of replicate samples intended to measure variance inherent to the environment (Sa² and St²) should be treated cautiously, since these components may be divisible into subcomponents for time (both short and long intervals) and for scale of geographic association with the facility being monitored (local versus regional). There is a con-
tinuing risk that the differences between environ-
mental variance locally and regionally may be
wrongly assigned, biasing the apparent impact
of local operations. Similarly a normal seasonal
effect may be wrongly interpreted as a change in
environmental impact.
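As an illustration of how some of these components might be separated, the Python sketch below uses hypothetical co-located (duplicate) samples, each analyzed in duplicate, to estimate a measurement component and a sampling component of variance. The replicate design and all numbers are assumptions made only for illustration.

    # Hypothetical duplicate samples at three locations, each analyzed twice.
    # Each entry: ((sample 1 analysis a, analysis b), (sample 2 analysis a, analysis b)).
    pairs = [
        ((4.1, 4.3), (4.8, 4.6)),
        ((3.6, 3.9), (3.2, 3.4)),
        ((5.0, 4.7), (5.6, 5.9)),
    ]

    # Paired-difference estimate of variance for duplicates: var = d**2 / 2.
    meas_d2 = [(a - b) ** 2 / 2 for pair in pairs for (a, b) in pair]
    s2_meas = sum(meas_d2) / len(meas_d2)          # measurement component

    # Differences between co-located sample means estimate sampling variance
    # plus half the measurement variance (each mean averages two analyses).
    sample_means = [((a1 + b1) / 2, (a2 + b2) / 2) for (a1, b1), (a2, b2) in pairs]
    s2_between = sum((m1 - m2) ** 2 / 2 for m1, m2 in sample_means) / len(sample_means)
    s2_sampling = max(0.0, s2_between - s2_meas / 2)

    print(f"s2(measurement) = {s2_meas:.4f}   s2(sampling) = {s2_sampling:.4f}")

Such a calculation is only a sketch of one possible replicate design; the cautions above about confounding local and regional, or seasonal, variability still apply.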
It has been common practice to report only a multiple of the standard deviation of the sample count as a measure of uncertainty of an environmental datum. This counting component is better known routinely than all the other variances, and frequently it is the only one quantified. Each analytical result provides its own measure of it, undoubtedly a major reason for its popularity. Unfortunately, it is often the smallest of all the components of variance, so that its unqualified use alone yields an unwarranted impression of
-------
variance from a sufficiently large number (at least ten and preferably more) of data points may be satisfactory. For groups of fewer than five data points, such estimates are of little value; lacking other information, only the precision of measurement can be given. In either case, the basis should be clearly identified.
Testing for Homogeneity. "When seemingly extreme observations are obtained, the question always arises whether these observations should be considered discrepant and rejected. Points of view range from the flat statement that "data should never be rejected" to the view that it is not "practical" to include any suspicious measurement, regardless of the fact that the suspicions arose as a result of the examination of the data. Both points of view have their merits; often the single discrepant observation contains the most significant information of the group. On the other hand, this information may be completely irrelevant to the question that the data were originally supposed to answer. For example, one discrepant analysis in a group of five may give pertinent information with regard to difficulties present in the analytical method, but may also bias the analytical result if included in its calculation."
- Bennett and Franklin[13]
In addition to the general problem in the referenced quotation, environmental program managers are faced with the question of initiating investigative action or even halting facility operation on the basis of environmental data. The probability that a single environmental data point would imply an immediate risk of population doses exceeding the established limits is considered extremely low in the absence of prior and supplemental data. If such an occasion does occur, the assignment of cause and an indication of needed corrective action would be expected to be straightforward. Far more frequent are those "out-of-pattern" data points which may or may not reflect changing environmental or operating conditions, but which, if accepted, will distort the eventual evaluation of environmental impact. How intense and prompt the follow-up efforts may be depends both on the initial believability of the datum and the seriousness of the problem it might indicate. For
example, a requirement that a search for cause be
made for all values exceeding two times the stan-
dard deviation implies that on average at least
2-1/2 percent of all data points should be in-
vestigated. The bulk of these data will be re-
lated only to ordinary variations in the processes.
Accordingly, the standard response to values between the mean value plus two to three times the standard deviation could be casual and nondisruptive of operations. At the other extreme, response to values exceeding the mean plus four times the standard deviation should be quicker and more intense. Since there is only a vanishingly small likelihood that such a relatively large value, if real, is the one observation in 31,000 expected during normal operations, serious consideration might be given to shutting down or isolating the offending operation. Intermediate concentrations may merit an intermediate kind of response.
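The investigation rates discussed above follow from one-sided normal tail probabilities, as the short Python sketch below illustrates; it simply evaluates the expected fraction of routine results exceeding the mean plus k standard deviations under a normal assumption.

    import math

    def upper_tail(k):
        """One-sided normal tail probability P(X > mean + k * s.d.)."""
        return 0.5 * math.erfc(k / math.sqrt(2.0))

    # Roughly 2.3 percent of values exceed mean + 2 s.d., and about 1 in 31,600
    # exceed mean + 4 s.d., consistent with the figures quoted in the text.
    for k in (2, 3, 4):
        p = upper_tail(k)
        print(f"k = {k}:  {100 * p:.4f} percent of values, about 1 in {round(1 / p):,}")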
At one site, better than 90 percent of those
few data points which, on initial inspection,
appeared suspicious were found to be due to ob-
vious computational, measurement, or sample pro-
cessing errors. Such immediate review of incom-
ing data and feedback to the field measurement or
sample-processing staff by an individual familiar
with the program and past results is a most valu-
able adjunct to the program. However, a few mea-
surements remain which cannot be explained and
which suggest casual effects. Each program has
its own rules for level and scope of follow-up
investigation of apparently discrepant results;
sometimes different rules apply to different
kinds of data. Frequently, such rules are based
implicitly on some statistical property such as
range or median value. No single rule or set of
rules is recommended, but more formal statistical
techniques are suggested as an aid to a decision
that a particular result is or is not anomalous.
Numerical Tests. If the distribution of a given set of data and its measure of dispersion (standard deviation or geometric standard deviation) have been established, a new data point, as discussed earlier in the Data Analysis section, can be tested for fit to that same distribution
by standard statistical tests. A t-test is the
most common means of estimating the probability
that the given data point is part of the same
distribution.
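A minimal Python sketch of such a test is given below; it uses hypothetical historical results and a prediction-interval form of the t statistic for a single new observation. For log-normally distributed data the same test would be applied to the logarithms.

    import math

    history = [3.1, 2.8, 3.4, 3.0, 2.6, 3.3, 2.9, 3.2, 3.1, 2.7]   # hypothetical results
    new_value = 4.2

    n = len(history)
    mean = sum(history) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in history) / (n - 1))

    # Prediction-interval form of the t statistic for one future observation.
    t = (new_value - mean) / (s * math.sqrt(1.0 + 1.0 / n))
    print(f"t = {t:.2f} with {n - 1} degrees of freedom")
    # Compare |t| with the tabulated critical value (about 2.26 at the
    # two-sided 95 percent level for 9 degrees of freedom).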
Nonparametric tests, involving generally
some function of the range of the data set, are
also available, although less efficient (in the
statistical sense of the precision of the esti-
mate) if a standard distribution fits the data.
References 13-15, as well as other statistical
references, give range probability tables.
Sequential Plots. Sequential plotting of
data points as generated, even if no particular
statistical distribution is assumed, has the
potential advantage of revealing trends or peri-
odicity in the data. Even more useful, once sufficient data (at least 10 and preferably as many as 50 points) have been accumulated to establish a central value and measure of dispersion, is a graph similar to the familiar control chart used in quality control. Such a graph, showing confidence intervals (multiples of the standard deviation), minimizes wasted effort on values that are seemingly higher than usual but which may not be statistically different. This is especially useful when the data, like so much environmental data, are distributed log-normally.
A modification of the simple control chart is
aimed at improving the detection of trends. Trends
in the data may exist entirely below selected ac-
tion levels, and indicate either incipient process
control problems or degraded procedures. The
search for such trends can be facilitated by
plotting a running average in a control chart. The
reduced scatter results from plotting the geo-
metric mean of a series of data points rather than
single values. Specifically, each plotted point is the geometric mean, appropriate for log-normally distributed data, of, for example, three weeks' data.*
The reduced scatter yields not only a
simpler curve, but also a more sensitive in-
dicator of small persistent anomalies. Be-
cause the standard deviation for the average
of a group of data is smaller than the stan-
dard deviation for single values, the action
level for a running average can be set closer
to the average value.
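A minimal Python sketch of such a running geometric mean, using hypothetical weekly results and an arbitrary action level, is given below.

    import math

    weekly = [1.2, 0.9, 1.5, 1.1, 1.8, 2.1, 2.4, 1.9, 2.6, 3.0]   # hypothetical weekly results

    def geometric_mean(values):
        return math.exp(sum(math.log(v) for v in values) / len(values))

    # Three-point running geometric mean, as might be plotted on a control chart.
    running = [geometric_mean(weekly[i - 2:i + 1]) for i in range(2, len(weekly))]
    for week, gm in zip(range(3, len(weekly) + 1), running):
        flag = "  <-- check trend" if gm > 2.0 else ""   # hypothetical action level
        print(f"week {week:2d}: running GM = {gm:.2f}{flag}")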
The assumption of normal (Gaussian) distri-
bution is often made, if only implicitly, since
the associated statistical parameters and pro-
cedures are most familiar and most sample count-
ing data can be treated satisfactorily** in that
way. However, experience at several laboratories
indicates[16] that much environmental data more
closely fit a log-normal distribution which, after a logarithmic transform, can be treated as a
normal distribution. Other recognized distribu-
tions have been identified with specific
data groups. [17,18]
When the data in a set number fewer than 10, or Sg < 1.3, no assumed distribution is likely to be superior to another as judged solely on the data.
In this case, treatment of the data as distrib-
uted either normally or log-normally is unlikely
to have a high confidence level. Dividing the
group into subgroups may result in still lower
levels of confidence. Averages can always be
obtained by calculation, and standard deviations
for small numbers of data may be estimated partly
on the basis of distributional assumptions as
derived from experience, treating small groups in
ways most appropriate for larger groups that
clearly come from a particular distributional
form.
Numerical methods are available for testing the fit of data to at least the simpler distributions. The familiar χ² test is such a test for normality, or for log-normality after a logarithmic transform. However, the process can be laborious with sufficient data points to provide confidence in the result. Graphical procedures, using probability graph paper, may save not only time and labor but have other benefits as well. In addition, the numbers of data most conveniently treated with graphical methods correspond to the most common frequencies of data grouping for annual reporting purposes.
Probability Plotting. Probability paper is available commercially in several forms, but the normal (Gaussian) and log-normal are most commonly used for environmental data. Data are ranked, assigned percentiles, and entered onto the probability paper. As a group, they should trend along a straight line; the usual visual test for a straight-line trend is subjective, although least-squares fitting procedures are available for at least the normal and log-normal distributions. Linearity verifies the assumptions that (a) the grouping is homogeneous*** and (b) the distribution corresponds to the kind of probability paper used. A
text by Hahn and Shapiro[19] is a useful refer-
ence for probability plotting.
When data are plotted, some variation will
exist about any straight line because of statistical fluctuations. A best-fitted straight line drawn through the data yields values for the median and the standard deviation, the median by the fiftieth-percentile intercept and the standard deviation by the slope of the line. With log-normal (logarithmic ordinate scale) probability paper, the median gives the geometric mean and the slope gives the geometric standard deviation.**** From these, the arithmetic mean and normal standard deviation can be calculated.[20]
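A numerical analogue of the graphical procedure can be sketched in Python as below; the data and the plotting-position formula (i - 0.5)/n are illustrative assumptions. The logarithms of the ranked values are regressed on the corresponding normal quantiles, the intercept giving the geometric mean and the slope the geometric standard deviation.

    import math
    from statistics import NormalDist

    data = sorted([0.8, 1.1, 1.3, 1.6, 2.0, 2.4, 3.1, 3.9, 5.2, 7.5])   # hypothetical
    n = len(data)
    z = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]  # plotting positions
    y = [math.log(v) for v in data]

    mz, my = sum(z) / n, sum(y) / n
    slope = sum((a - mz) * (b - my) for a, b in zip(z, y)) / sum((a - mz) ** 2 for a in z)
    intercept = my - slope * mz

    gm = math.exp(intercept)        # geometric mean (50th percentile)
    gsd = math.exp(slope)           # geometric standard deviation
    # Arithmetic mean of a log-normal distribution from GM and GSD.
    arith_mean = gm * math.exp(0.5 * math.log(gsd) ** 2)
    print(f"GM = {gm:.2f}  GSD = {gsd:.2f}  arithmetic mean = {arith_mean:.2f}")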
Asymmetric Case. In the asymmetric case, the first consideration should be to relate the measure of central tendency explicitly to the actual uses to be made of the measure, and to select an appropriate measure of the "center" of the distribution on that basis. It may be more informative to give a selected number of different estimates so that a clearer understanding of the information is possible.
One common way to proceed is to transform the concerned data, e.g., by logarithms, so that the transformed data appear symmetric. Then the center is estimated for the transformed data and the estimate (if desired) transformed back. This does not necessarily eliminate the problem of defining the best central value of an asymmetric distribution for the intended use. If this procedure achieves symmetry, any of the mentioned estimators may be used.
Standard Deviations and Other Estimates of
Dispersion. In addition to estimating the center
of the distribution, some measure of the spread
of the data around the center is also informative
and useful. Alternate terminology for the spread
is precision, dispersion, or variability. Regard-
less of the terminology, the idea is to measure
the dispersion of the data about the center.
Again the symmetric and asymmetric cases must
be differentiated.
The most familiar measure of dispersion is
the standard deviation. Like the mean, it is not
resistant to the presence of outliers and is dif-
ficult to interpret if "less-than" values pre-
dominate. In particular, large outliers tend to
unduly inflate the standard deviation estimate.
It has the advantage that standard statistical
tests and tables are readily available to use with
it when it can be validly estimated.
*The number of data used in a running mean in-
volves several trade-offs that must be made by
persons handling the data. Too few data will
achieve only a minor reduction of scatter com-
pared to plots of single data; too many data
points per mean value make the running mean
insensitive and cumbersome to use.
**Since Poisson distributions applicable to
counting statistics approach a Gaussian distri-
bution with a large enough count.
***The members of a homogeneous group are collec-
tively described by a single central value and
a variance, each datum being a valid estimate
of the central value.
****Sg = x0.8413/x0.50 or x0.50/x0.1587.
The commonly used sample range, based on the maximum and minimum sample values, is extremely sensitive to outliers, but requires no assumptions as to the distribution of the data. The interquartile
range is given by the difference between the 75th
and 25th percentile points and is a resistant es-
timator. Note that 50 percent of the data occur
between the two percentile values.
Another not so commonly used estimate, but
one that is resistant and efficient, is the
median absolute deviation. This is computed by
subtracting an estimate of the center, prefer-
ably a resistant one, from all the data values,
taking the absolute value of every difference, and
then finding the median of the absolute values.
Reference 9 gives an example of its use.
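A minimal Python sketch of the computation, with hypothetical data containing one large outlier, is given below; the outlier shifts the median absolute deviation only slightly.

    from statistics import median

    data = [2.1, 2.4, 2.2, 2.6, 2.3, 2.5, 9.8]    # hypothetical results, one outlier
    center = median(data)                          # a resistant estimate of the center
    mad = median(abs(x - center) for x in data)    # median of the absolute deviations
    print(f"median = {center:.2f}   median absolute deviation = {mad:.2f}")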
Distribution Analysis. In view of the many
potential causes of perturbation in a group of
environmental data there is no basis for assuming
in advance that the data will follow any standard
statistical distribution.
Handling of "Less Than Detectable" Values. The direct reporting of "less-than-detectable" values is discouraged. However, since extensive data exist in this form, it was thought useful to include applicable methodology.
If a large fraction of a data group shows
less-than-detectable (LD) concentrations, special
considerations are needed to determine a central
value and measure of dispersion. Such groups of
data are termed censored.
There is a significant difference between
censored and truncated distributions. Both terms
concern situations where some individual values
are not known, either because they are beyond the
range of measurement or because values beyond
some limit have been rejected. For censored dis-
tributions the number of data points beyond the
limit is known. For truncated distributions the
number of such data points is not known. Ob-
viously, percentile values can only be assigned to
data in censored distributions. Grouping of en-
vironmental data should always yield censored,
not truncated, distributions, i.e., at least in-
clude the information that a "less than detect-
able" result was obtained for the sample.
For conservatism, the assumption is sometimes
made that all LD samples actually had concentra-
tions equal to the detection limit and a group
average is computed accordingly. Since this
choice severely biases the computed average, it
should be avoided. Other artificial rules for
averaging LD values, such as setting LD or net
negative counting results to zero, or applying a
factor to LD results, also cause biased averages.
Hahn and Nelson [21] discuss various methods for
treating censored data.
One method [22] of group averaging of mixed
positive and LD values is to average all values,
including negative ones. By pooling the vari-
ances of the individual results, a detection level for the average can be calculated. Although the procedure is laborious for a large group of data, the approximation of identical variances for each data point may be made, in which case the uncertainty of the mean can be quickly approximated by the relationship:

    Uncertainty of Mean = S.D./√n

where S.D. is the apparent standard deviation and n is the number of data points. The detection levels for the average and individual results are similarly related for the same confidence level.
If only a few positive values are included with a
large number of LD values, the average may still
have to be reported as a LD value.
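A sketch of this short-cut calculation, written in
Python with hypothetical values for the data, the
common standard deviation, and the confidence
multiple, is given below; it illustrates the
arithmetic rather than prescribing a procedure:

    import math

    # Group average of mixed positive, LD, and net-negative results,
    # assuming identical variances for all data points.
    results = [0.4, -0.1, 0.2, 0.0, 0.7, -0.3, 0.5, 0.1]   # hypothetical net concentrations
    sd_individual = 0.3      # assumed common standard deviation of a single result
    k = 1.645                # assumed confidence multiple for the detection level

    n = len(results)
    mean = sum(results) / n                  # average all values, negatives included
    sd_mean = sd_individual / math.sqrt(n)   # uncertainty of the mean = S.D./sqrt(n)
    detection_level_mean = k * sd_mean       # scales with the same confidence multiple

    if mean > detection_level_mean:
        print(f"mean = {mean:.2f} +/- {sd_mean:.2f}")
    else:
        print(f"mean is less than {detection_level_mean:.2f}; report as LD")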
A preferred method for handling LD values in-
volves probability plotting. As described
earlier, the data are ranked by size, assigned
percentile values and plotted on probability
paper. The LD values span a range of percentile
values in accord with the fraction of the group
they represent. The positive values are plotted
in their respective percentile positions and a
best straight line is fitted through them ignor-
ing the misfit to the LD values. As before, the
slope* of the best straight line yields the stan-
dard deviation, and the geometric mean value or
median, is obtained from the intersection of the
50th percentile with the fitted line. This
method succeeds even when more than half the data
are less than detectable, since the fitted line
can be extrapolated to the 50th percentile in
any case. Note that when more than half the data
are LD, this method yields a mean value smaller
than the individual value detection limit. The
method is valid so long as a few data exceed the
detection limit, although as a practical matter
the confidence level is low if fewer than ten
positive data points are involved with the fitting.
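The fitting step can be sketched as follows, using
the numpy and scipy libraries; the ranked data, the
marking of LD results, and the plotting-position
convention (i + 0.5)/n are illustrative assumptions
rather than requirements of the method:

    import numpy as np
    from scipy.stats import norm

    # Log-probability fit for a censored group: positions are assigned to all
    # ranked results (LD included), but the line is fitted only to detected values.
    values = [None, None, None, None, 0.06, 0.09, 0.12, 0.20, 0.35, 0.55]  # None = LD
    n = len(values)
    positions = [(i + 0.5) / n for i in range(n)]   # one common plotting-position rule

    z, logx = [], []
    for p, v in zip(positions, values):
        if v is not None:                 # ignore the misfit to the LD values
            z.append(norm.ppf(p))         # probability scale as standard normal quantile
            logx.append(np.log(v))

    slope, intercept = np.polyfit(z, logx, 1)   # best straight line through the positives
    geometric_mean = np.exp(intercept)          # value at the 50th percentile (z = 0)
    geometric_sd = np.exp(slope)                # s_g = x_0.8413 / x_0.50
    print(geometric_mean, geometric_sd)

Because the fitted line is extrapolated to the 50th
percentile, the geometric mean returned here can
fall below the individual detection limit, as noted
above.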
A question will remain as to whether two data
distributions are involved, such that some or all
of the LD values belong to a background distribu-
tion, with the higher values indicating impact
from a source with intermittent loss of control.
If all the positive data are very much above the
detection limit, such a situation is indeed indi-
cated. In other situations, the question cannot
be answered solely from the data involved, al-
though other independent data may be found to
resolve the issue.
Group Comparisons. The primary purpose of
most environmental surveillance is to determine
the existence of and to quantify the effect of a
facility on environmental radioactivity levels.
When the existence of a local "background" level
cannot be ruled out and potentially affected en-
vironmental media show measurable levels, a com-
parison may be made between a set of background
or control measurements and a set of the same
measurements in the potentially affected locations.
If estimates of central value and dispersion
for each set have been obtained as discussed in
the previous section, and tests for homogeneity
*s_g = x_0.8413/x_0.50 or x_0.50/x_0.1587
7-16
-------
are satisfactory, the data sets can be tested for
equivalence. Commonly, this is directed toward
determining if the central values of the two
groups are different, but differences in vari-
ability are also informative and may be impor-
tant. Graphical techniques may be useful, but
the standard statistical tests for differences of
means are usually relied upon.
In actual practice such comparisons can be
of dubious reliability when faced with data sets
containing outliers, less-than-detectable values,
and non-Gaussian distributions, since most of the
standard statistical techniques are not directly
applicable in these situations. Moreover, the
number of alternative procedures having good
statistical properties is somewhat limited.
Generally, the alternatives are nonparametric
procedures.
The basic nonparametric tests available are
the sign test, permutation tests, the Mann-Whitney
rank sum test, and the Kolmogorov-Smirnov two-sample
test. These tests are all based on the relative
rank ordering of the two data groups and do not
assume any particular distributional shape for
the groups. The specific procedures for apply-
ing the tests are not given here, but they are
readily available in many elementary statistical
texts. [14,15] These tests should not be used
if actual distribution types are known, because
they are less sensitive than are specific dis-
tribution-based tests.
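As an illustration, two of these tests can be
applied with the scipy library; the control and
indicator concentrations below are hypothetical:

    from scipy import stats

    # Rank-based comparison of a background (control) set and an indicator set,
    # with no distributional assumptions about either group.
    control   = [0.9, 1.1, 1.0, 1.2, 0.8, 1.0, 1.3, 0.9]
    indicator = [1.1, 1.4, 1.6, 1.2, 1.5, 1.3, 1.8, 1.4]

    u_stat, u_p = stats.mannwhitneyu(control, indicator, alternative='two-sided')
    ks_stat, ks_p = stats.ks_2samp(control, indicator)

    print(f"Mann-Whitney rank sum: U = {u_stat:.1f}, p = {u_p:.3f}")
    print(f"Kolmogorov-Smirnov:    D = {ks_stat:.2f}, p = {ks_p:.3f}")

A small p-value indicates that the two groups
differ, in central value for the rank sum test or
in overall distribution for the Kolmogorov-Smirnov
test.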
What about resistant alternatives? For the
symmetric distributional shape, one possible
alternative is to use the classical two sample
t-test for differences in means (see above), re-
placing the usual arithmetic means and standard
deviations used in the test with selected re-
sistant alternatives such as medians. Presently,
this type of approach is only beginning to be
understood and no particular alternative can be
recommended.
Summary of Recommended Data Treatment

                            Dose          Trend
                            Estimators    Indicators
Estimates of Precision        X
Corrections for Bias          X
Sequential Plotting           X               X
Distribution Analysis         X              (b)
Group Analysis                X              (b)

(a) Required for critical exposure mode; desir-
    able for others exceeding sampling criteria.
    Annual averages are required.
(b) Desirable for more significant measurements.
    Distribution analysis is required if group
    comparison is to be made.
References

[1] J.P. Corley, D.A. Waite, and H.R. Elle, "A
    Practical Guide for Radiological Surveillance
    of the Environment at Federally-Owned Nuclear
    Sites in the USA", BNWL-SA-5894 (ERDA 77-24),
    March 1977.
[2] International Commission on Radiological
    Protection, Recommendations of the Inter-
    national Commission on Radiological Pro-
    tection (Adopted September 17, 1965),
    ICRP Publication No. 9, Pergamon Press,
    New York, 1966.
[3] Division of Operational Safety, Environmental
    Monitoring at Major U.S. Energy Research and
    Development Administration Contractor Sites -
    Calendar Year 1974, ERDA-54(74), in 2 Volumes,
    Energy Research and Development Administration,
    Washington, D.C., August 1975.
[4] Energy Research and Development Administration,
    Manual of Operations, Chapter 0524, "Radiation
    Standards", Energy Research and Development
    Administration, Washington, D.C., April 1975
    (revised).
[5] Office of Radiation Programs, Environmental
    Radioactivity Surveillance Guide, ORP/SID
    72-2, Environmental Protection Agency,
    Washington, D.C., June 1972.
[6] Energy Research and Development Administra-
    tion, Manual of Operations, Chapter 0513,
    "Effluent and Environmental Monitoring and
    Reporting", Energy Research and Development
    Administration, Washington, D.C., March 1974
    (revised).
[7] J.H. Harley, HASL Procedures Manual, HASL-300,
    Health and Safety Laboratory, Energy Research
    and Development Administration, New York, 1975
    (revised annually).
[8] J.W. Johnston, An Index of Adequacy and Some
    Other Statistical Considerations for an
    Environmental Monitoring System, BNWL-B-250,
    Battelle, Pacific Northwest Laboratories,
    Richland, WA, February 1973.
[9] B.S. Pasternack and N.H. Harley, "Detection
    Limits for Radionuclides in the Analysis of
    Multi-Component Gamma Ray Spectrometer Data",
    Nuclear Instruments and Methods, 91, p. 533,
    North-Holland Publishing Co., Amsterdam, 1971.
[10] W.G. Cochran, Sampling Techniques, John Wiley
    and Sons, New York, 1963.
[11] D.H. Denham, "Environmental Radiological
    Surveillance in Perspective: The Relative
    Importance of Environmental Media as a Func-
    tion of Effluent Pathway and Radionuclides",
    UCRL-80279, October 1977.
[12] R.W. Leggett, H.W. Dickson, and F.F. Haywood,
    "A Statistical Methodology for Radiological
    Surveying", IAEA-SM-299/103, Proceedings of the
    Symposium on Advances in Radiation Protection
    Monitoring, Stockholm, June 1978.
[13] C.A. Bennett and N.L. Franklin, Statistical
    Analysis in Chemistry and the Chemical In-
    dustry (Sponsored by the Committee on Applied
    Mathematical Statistics, National Research
    Council), John Wiley & Sons, Inc., New York,
    1954.
[14] W.J. Conover, Practical Nonparametric
    Statistics, John Wiley & Sons, Inc., New
    York, 1971.
[15] F. Mosteller and R.E.K. Rourke, Sturdy
    Statistics: Nonparametrics and Order Statis-
    tics, Addison-Wesley Publishing Co., Reading,
    Massachusetts, 1973.
[16] D.H. Denham and D.A. Waite, Some Practical
    Applications of the Log-Normal Distribution
    for Interpreting Environmental Data, Paper
    presented at the 20th Annual Meeting of the
    Health Physics Society, Radiation Management
    Corporation, Philadelphia, July 1975.
[17] D.R. Speer and D.A. Waite, Statistical
    Distributions as Applied to Environmental
    Surveillance Data, BNWL-SA-5482, Battelle,
    Pacific Northwest Laboratories, Richland,
    Washington, September 1975.
[18] L.L. Eberhardt and R.O. Gilbert, Gamma
    and Log-Normal Distributions as Models in
    Studying Food-Chain Kinetics, BNWL-1747,
    Battelle, Pacific Northwest Laboratories,
    Richland, Washington, May 1973.
[19] G.J. Hahn and S.S. Shapiro, Statistical
    Models in Engineering, John Wiley & Sons,
    Inc., New York, 1967.
[20] J. Schubert, A. Brodsky, and S. Tyler,
    "The Log-Normal Function as a Stochastic
    Model of the Distribution of Strontium-90
    and Other Fission Products in Humans", Health
    Physics, 13, 1187-1204, 1967.
[21] G.J. Hahn and W.B. Nelson, A Review and
    Comparison of Methods for Regression
    Analysis of Censored Data, Report No. 71-C-
    196, General Electric Co., Schenectady,
    July 1971.
[22] R.O. Gilbert, Recommendations Concerning
    the Computation and Reporting of Counting
    Statistics for the Nevada Applied Ecology
    Group, BNWL-B-368, Battelle, Pacific North-
    west Laboratories, Richland, Washington,
    September 1975.
Selected Bibliography
Experimental Design of Environmental
Surveillance Programs

J. Aitchison and J.A.C. Brown, The Log-Normal
Distribution, Cambridge University Press,
Cambridge, England, 1957.
American National Standards Institute, American
National Standard Performance, Testing, and
Procedural Specifications for Thermoluminescent
Dosimetry: Environmental Applications, ANSI
N545-1975, American National Standards Insti-
tute, Inc., New York, December 1975.
Principles and Practice of Environmental
Monitoring in the UK, IAEA-SM-180/8, Warsaw,
November 1973.
R.M. Bethea, B.S. Duran, and T.L. Boullion,
Statistical Methods for Engineers and Scien-
tists, Marcel Dekker, Inc., New York, 1975.
M.B. Biles, M.W. Tiernan, A. Schoen, and C.G.
Welty, The Objectives and Requirements for
Environmental Surveillance at U.S. Atomic
Energy Commission Facilities, IAEA-SM-180/33,
Warsaw, November 1973.
L.L. Eberhardt, Applied Systems Ecology:
Models, Data, and Statistical Methods, BNWL-
SA-5216, Battelle, Pacific Northwest Labora-
tories, Richland, Washington, 1975.
F. Haghiri, Detailed Characterization of Soil
and Vegetation on Selected Sites to Serve as
Basis for Future Evaluation of Effect of
Radioactive Contamination, COO-414-18, Ohio
Agricultural Research, 1972.
M. Hollander and D.A. Wolfe, Nonparametric
Statistical Methods, John Wiley and Sons, Inc.,
New York, 1973.
International Atomic Energy Agency, Objectives
and Design of Environmental Monitoring Pro-
grammes for Radioactive Contaminants, Safety
Series No. 41, International Atomic Energy
Agency, Vienna, 1975.
G.S. Koch and R.G. Link, "The Coefficient of
Variation - A Guide to the Sampling of Ore
Deposits", Econ. Geol., 66, 293-301, 1971.
D.E. Michels, "Lognormal Analysis for Pluton-
ium in the Outdoors", Proceedings of Environ-
mental Plutonium Symposium, LA-4756, Los
Alamos Scientific Laboratory, Los Alamos, New
Mexico, August 1971.
7-18
-------
I.P. Murarka, A.J. Policastro, and J.G.
Ferrante, An Evaluation of Environmental Data
Relating to Selected Nuclear Power Plant Sites,
ANL-EIS-8, Argonne National Laboratory,
November 1976.
National Council on Radiation Protection and
Measurements, Environmental Radiation Measure-
ments, NCRP Report No. 50, 1976.
C.A. Pelletier, "Performance and Design of An
Environmental Survey", Chapter 9, Environ-
mental Surveillance in the Vicinity of Nuclear
Facilities, Charles C. Thomas (Publisher),
Springfield, Illinois, 1970.
W.C. Renfro and S.W. Fowler, "General Recom-
mendations for Designing Marine Radioecological
Experiments", Health Physics, 24, 572, 1973.
R.R. Sokal and F.J. Rohlf, Biometry, W.H.
Freeman and Co., San Francisco, 1969.
R.G.D. Steel and J.H. Torrie, Principles and
Procedures of Statistics, McGraw-Hill Book
Company, Inc., New York, 1960.
P.C. Stevenson, Processing of Counting Data,
National Academy of Sciences - National Re-
search Council, Nuclear Science Series NAS-
NS-3109, Department of Commerce, Springfield,
Virginia, September 1965.
U.S. Energy Research and Development Adminis-
tration, A Manual for Environmental Radio-
logical Surveillance at ERDA Installations,
ERDA 77-24, March 1977.
U.S. Environmental Protection Agency, Environ-
mental Radioactivity Surveillance Guide,
ORP/SID 72-2, June 1972.
U.S. Nuclear Regulatory Commission, Environ-
mental Technical Specifications for Nuclear
Power Plants, Regulatory Guide 4.8, December
1975.
D.A. Waite, Analysis of an Analytical Tech-
nique for Distributing Air Sampling Locations
Around Nuclear Facilities, BNWL-SA-4676,
Battelle, Pacific Northwest Laboratories,
Richland, Washington, May 1973.
D.A. Waite, Use and Interpretation of Parti-
culate Deposition Collector Data, BNWL-SA-
4874, Battelle, Pacific Northwest Laboratories,
Richland, Washington, July 1974.
D.A. Waite and D.H. Denham, Log-Normal
Statistics Applied to the Planning, Collection,
and Analysis of Preoperational Environmental
Surveillance Data, BNWL-SA-4840, Battelle,
Pacific Northwest Laboratories, Richland,
Washington, February 1974.
H. Wedlick, G. Chabot, and K. Skrable, "A
Practical Environmental Surveillance Program
Utilizing a Minimal Number of Samples",
Paper No. 145, Health Physics Society 19th
Annual Meeting, Houston, Texas, July 1974.
7-19
-------
EFFECTIVE COMMUNICATION WITH THE PUBLIC
H.R. Payne (Environmental Protection Agency), J.T. Alexander (Department of Energy), K.M.
Clark (Nuclear Regulatory Commission), R.R. Pierce (Duke Power Company), R.T. Phillips
(Department of Natural Resources-State of Georgia) and H. Thompson (Environmental Protection
Agency)
Effective communication of radiological information to the public is one of the
difficult tasks managers and scientists in the health physics field must face.
This report explores this problem from the standpoints of an analysis of mass
communication principles; enhancement of communication with the news media; and
reporting environmental radiation data. Emphasis is placed on understanding the
constraints of news media representatives by managers and scientists being inter-
viewed. The goal is to provide a tool which will significantly increase their
ability to communicate accurate and understandable radiological information to
the public. (A list of references is included as an attachment to this report.)
(Communication; examples; media; reporting)
The purpose of this report is to establish a plan
for the professional practitioner to effectively
communicate with the public.
Objectives
The objectives of this report are:
1. To educate management, scientific, and
other professional staff on the impor-
tance of effective communication. This
may involve a change in motivation and
attitude on the part of the professional.
2. To enhance the ability of the profession-
al to fulfill his responsibility to
communicate with the public.
3. To translate radiation information into
language meaningful to the public.
Introduction
Few subjects are more of a mystery to the
public than radiation. This lack of
public understanding creates multiple problems
such as:
1. Overreaction regarding the harmful
physical effects it may produce.
2. Failure to place it in perspective with
other hazards.
3. Vulnerability to exaggerated claims of
both pro-nuclear and anti-nuclear in-
dividuals or organizations.
4. Inaccurate handling of radiation data
and incidents by the news media.
In the light of these facts it behooves
managers and scientists to inform themselves and
their staff on the need for effective communica-
tion with those not familiar with the field. In
the past, this has not been given sufficient
priority and time with the inevitable results of
confusion, misunderstanding, fear and bewilder-
ment so common today among the public in radia-
tion matters. There is a current trend to cor-
rect this situation but it will take many years
of dedicated effort to fill the vacuum and make
up for past failures.
This report is designed to inform managers
and scientists in the health physics field on the
nature and structure of mass communication and
how to relate to news media representatives and
the public. The principle is that the more which
is known of the news media methodology, problems
and constraints, the more effectively communica-
tion may be accomplished.
However, this, for the most part, will be in
vain unless it is used by people in the radiation
protection field in their day-to-day contacts
with the lay person, lay groups, the press and
the electronic news media - hence, the importance
of developing a deliberate plan to place this
information in the hands of those who need to
know.
Mass Communication Principles
Communication is the basic building block of
human society. It is the process of transmitting
meaning between individuals. Society, as we know
it, could not very well exist in its absence.
Human beings, primitive to modern, have
always relied upon person-to-person communication
to transmit intentions, feelings, knowledge and
experience.
Only in the past century or so have methods
of communication really changed. The technologi-
cal advances of the 20th century have worked many
changes among and within the world's civiliza-
tions. People are no longer born to live and work
and die in the same locale. Individuals travel
frequently. They may change jobs, even whole
vocations, several times. The individual in our
society is constantly on the move and is constant-
ly exposed to new acquaintances and new ideas.
8-1
-------
The tremendous population growth and in-
creased mobility of our civilization have resulted
in a somewhat rapid decrease in personal inter-
action between individuals. The same technology
which makes it possible for our society to func-
tion with such mobility is also utilized to pro-
vide rapid and mass dissemination of information
to a constantly shifting and faceless constituency.
Personal interaction between individuals has been
replaced by impersonal means of communication in
which the communicator and the individual with
whom he wishes to communicate seldom, if ever,
meet.
This impersonal means of communication among
the constantly shifting members of our society
takes many forms, but it can best be described
simply as "mass communication."
This is a special kind of communication in-
volving distinctive operating conditions. Health
physics is a technical discipline utilizing high-
ly specialized skills and knowledge. Such dis-
ciplines, of necessity, develop specialized
terminology to provide their practitioners maxi-
mum communication capability. However, when
technical information is mass communicated to the
public in an unsimplified and unexplained form,
it often serves only to confuse, to annoy, and to
misinform.
Operating conditions which distinguish mass
communication from more personal and simplified
forms of human contact involve:
1. The nature of the communicator.
2. The communicating conditions.
3. The nature of the audience.
The nature of the communicator is especially
important in mass communication because the
communicator is usually not one individual, but
rather a complex, organized entity involving an
extensive division of labor and accompanied by a
rather high degree of expense.
When the manager, scientist, or technician in
the health physics field communicates with the
public by way of mass media, he or she will do
well to keep the following in mind: Response to
a media inquiry or initiation of an unsolicited
news announcement in simple, clear and concise
terms does not assure its delivery to the public
in its original context. First, the individual
to whom you communicate the information has not
been educated and trained in your field and
frequently has, at best, a superficial knowledge
of your terminology and its meaning. So you are
immediately involved with a problem in semantics.
That individual writes down or records on tape
and camera what you say. But that individual's
perception of the meaning of your words may not
resemble accepted meanings within the context of
radiation physics, and the "story" presented the
public, whether in writing, on tape or on film,
will reflect the perceptions of the communicator
rather than yours.
Second, the "story" will be handled by an
editor, print or electronic, and its context will
be influenced by a variety of factors not related
to the meaning of the text. It may be shortened
to fit the "news hole" in the newspaper or cut
from a minute and a half to 45 seconds or less to
conform with time requirements on the broadcast
or telecast. In other words, the person who
receives your original information must wade
through a sea of semantics to put the story to-
gether. And a person who is competent in his
field will then make the information fit the slot
assigned it with little or no regard for the
effect the deletion of a sentence or a paragraph
may have on the audience or reader perception of
the communicated information.
Third, the information will be passed through
the hands of a headline writer or promotion
writer whose job it is to read quickly through
a multitude of information and write headlines or
radio or television "promos" to attract the
attention of the reader/viewer. By the time this
individual goes to work on your information, it
has passed through the perceptions and value
judgments of at least two other persons. The
headline or electronic media "lead in" may bear
scant resemblance to the original information.
Even here, space becomes a problem because the
newspaper headline writer is assigned a type
font and size which limits what words can be
pieced together to form the headline in the
allocated space.
Fourth, material picked up and transmitted
by news wire services is often taken from news-
papers, shortened to fit wire space requirements,
and distributed with absolutely no contact with
the principals involved. This type of operation
is the nature of the beast for persons who work
in the newswire field, but it can lead to extreme
havoc when involved with a "breaking story" in-
volving real or alleged contact with or exposure
to radioactive materials.
Mass communication conditions are such that
the audience is exposed to the communication for
only a short period of time. Audience size
normally prohibits interaction between the
communicator and individual audience members.
This means information of a highly complex nature
must be presented to individual recipients by
means of an impersonal, diversified, technologi-
cal flash of lightning. The communication is
public because it is addressed to no one in
particular. It is rapid in that it is geared to
reach large audiences simultaneously, and it is
transient because it is intended for immediate consumption.
The mass communication audience is hetero-
geneous in that it is composed of an aggregation
of individuals who occupy various positions in
society with wide variations in age, sex, educa-
tion and geographic location. The audience is
anonymous because its individual members remain
personally unknown to the communicator.
Complexities inherent in all phases of mass
communications move the original communicator of
information many stages away from the individual
member of society with whom he or she is attempt-
8-2
-------
ing to communicate.
Mass communication can be both functional and
dysfunctional for the managers and scientists in
the health physics field who are attempting to
communicate information.
Mass communicated information is functional
when it provides the individual recipient of the
information with the capability to make intelli-
gent personal decisions regarding the use of
diagnostic or therapeutic equipment in today's
technical environment which exposes one to vari-
ous types of radiation. It is dysfunctional when
it fails to properly inform and serves merely to
frighten the individual to the extent that he or
she may refuse examination or treatment using
devices involving X-ray or other controlled
radiation exposure useful in detecting and treat-
ing conditions which could lead to the indivi-
dual's death.
It is fair to observe that today's health
physicist, involved in a highly technical and
complex field, is confronted with the task of
conveying information to an anonymous, hetero-
geneous population by way of a communications
medium that is so rapid, impersonal and diversi-
fied as to create a situation where the prover-
bial irresistible force strikes an immovable ob-
ject. It may seem similar to looking at a road
map only to find that there is no way to reach your
desired destination from your present location.
Due to the complexity of both the informa-
tion and the medium by which it must be conveyed,
no guaranteed plan of action or program exists to
assure that the modern manager or scientist will
always be certain that his communications with
the general population will reach their intended
recipients undiluted and unchanged.
However, the following suggestions are sub-
mitted, in the belief that their utilization will
increase the likelihood of success to a sub-
stantial degree.
1. Educate managers and scientists in your
organization as to the nature of the
medium through which they must communi-
cate. Knowledge of problem areas with-
in the mass communications concept can
and will avert many of the unpleasant
circumstances which confront health
physics professionals on a daily basis.
2. Know what to expect. Do not expect the
reporter to serve as your personal re-
cording secretary. Do expect the re-
porter to take what you say with "a
grain of salt" and to compare your
comments with those of others who may or
may not agree with what you say. Be as
brief and as specific as possible. Try
to avoid complex technical terms unless
specifically accompanied by a simplified
explanation.
3. Do make managers and scientists avail-
able to answer questions from newsper-
sons if at all possible. Reporters do
not like to depend entirely upon state-
ments or material from the information
or public relations office. Such in-
formation is frequently used only as
the basis for a story. The reporter
feels defeated if unable to talk in
person with the principals of whatever
activity is involved and will reflect
this in his or her story.
4. Develop information useful in communica-
tion with media representatives in
varied circumstances and on short
notice.
Effective Communication with the News Media
and the Public
Accident Situations
The health physics professional will find
that during the course of his career many of his
contacts with the news media will occur during
accident or "incident" situations involving
radioactive materials.
The nature of this contact may be direct at
the site of an accident, or indirect, in provid-
ing technical assistance to a public information
specialist who is carrying the lead in communi-
cations with the media.
The health physicist will find that he has
two critical responsibilities during accidents
involving nuclear materials. First, and most
obvious, he will put to use his skills and train-
ing in attending the accident situation; second-
ly, he becomes a principal source of information
for the news media and public.
Either the health physicist working alone
or the public information specialist working at
his side will immediately be besieged with ques-
tions of the following character: "Is there any
radiation leakage? Is there any danger? Has
anyone been exposed to radiation? How badly
has the person been exposed? If the site is
contaminated, will you have to evacuate?" And,
from each one of these basic questions can flow
an almost endless chain of subquestions as the
reporter attempts to comprehend the situation
and translate that comprehension to a story.
Accidents involving nuclear material are
unique in terms of information handling, present-
ing a distinct set of problems. It should be
understood, however, that in terms of informa-
tion handling, both the health physicist and the
news media representatives share a common ob-
jective and that is accurate reporting of the
accident. That the health physics professional
often perceives the end product to be inaccurate,
distorted or overplayed is a common reaction.
The purpose of this section is to first attempt
to explain why news coverage of radiation inci-
dents is so zealous, then provide suggestions to
the health physics professional on how to handle
the flow of information to the news media during
an incident to assure that he has done every-
thing within his control to achieve accurate and
fair reporting of the news.
8-3
-------
The words "radiation accident" mean one
'thing to the health physics professional and
anothey to the news reporter.. • To the former, It
'is an event to be expected during the course of
one's career, providing the occasion to exercise
his knowledge and expertise- To the news media,
it is 'an event that as a rule, demands high atten-
tion, zealous, lengthy coverage and prominent dis-
play on the front page' of a newspaper or high in
the order of daily news covered by radio and
televvsion. '
The difference in reaction is a matter of
perspective. To some, radiation is perceived as a
completely understood, well-controlled, wisely
used by-product of the nuclear age that is making
invaluable contributions to the fields of medi-
cine, industry, agriculture and research. On the
other hand, radiation is also viewed as a mysteri-
ous force that completely evades all senses, that
knows no obstacle, and can cause burns, disfigure-
ment, cancer and bizarre biological mutations.
In terms of dealing with the public as a whole and
the news media in general, the latter perception,
which drapes the subject of radiation in a veil
of mystery, should be assumed as the prevailing
attitude. There are additional suggestions as to
the basis for this difference in perspective.
One perspective suggests that since the world was
formally introduced to atomic energy through its
destructive profile, a conditioned, negative
response, supported in part by feelings of guilt,
is the inevitable result.
How to alter what is thought to be a general
negative, uneasy attitude about radiation is the
subject of endless discussion. It is generally
thought that through education and gradually
acquired familiarity with radiation will come an
informed, rational, comfortable understanding and
knowledge of the subject.
In the meantime, suffice it to say that
"radiation" today Is a word in our vocabulary
that elicits a very strong, if not emotional,
response; and from the viewpoint of the health
physicist and information specialist involved
in a radiation incident, a subject that must be
treated with utmost care and sensitivity.
It bears repeating that unquestionably the
most important role for the health physicist in-
volved in an accident situation is resolution of
the problem. On the other hand, because of the
media's assured high level of interest in radia-
tion incidents, a parallel responsibility exists
to provide directly or through an information
specialist prompt, factual information on the
incident. To effect a reasonable balance between
these two priorities requires a high degree of
planning, preparation and coordination.
Following is a set of general guidelines to
follow when dealing with the news media during a
radiation accident:
1. Information necessary for the newsman to
construct the "who, what, why, when, where, and
how" of the accident should be provided as quick-
ly as possible. In a significant accident,
first attention should be directed to the so-
called "electronic" media, i.e., radio and tele-
vision plus the wire services, all of which have
the capability of virtual "instantaneous" news
reporting, compared with newspapers which operate
on deadlines dictated by their mechanical repro-
duction processes.
When first arriving on the scene of an
accident, it should be assumed that some media
may already be on site, having obtained incom-
plete and often speculative information from
emergency personnel that are unfamiliar with
radioactive materials. Those newsmen should be
intentionally sought out and the offer made to
provide professional, clarifying and perhaps
corrective information.
In those instances where a health physics
professional along with other emergency personnel
are dispatched to an accident site involving
hours of travel time, consideration should be
given to local news media representatives who,
during the period of time required for travel,
may arrive and depart the accident scene with
fragmented, incomplete, or distorted information.
It may be prudent in the interval to call the
press, radio, television, and wire services in
the immediate proximity of the accident giving
them as much information as is currently avail-
able with a commitment to update and expand that
information once the emergency personnel arrive
on the scene. This call may involve no more than
properly identifying the nature of the radio-
active material involved in the incident, but
even that simple confirmation can avoid early,
erroneous news accounts that will later have to
be corrected.
Rumors, chit-chat, the grapevine gossip and
the like spread rapidly following an accident;
and if the media feeds on this class of informa-
tion, real trouble is the result and it may take
hours to untangle and correct erroneous informa-
tion. Often the success in correction is only
partial.
2. Never try to hide or minimize the
significance of a mishap. The true facts will
emerge eventually; and it is better to seize the
initiative by volunteering the information prompt-
ly, rather than be placed in an apparent defen-
sive position which is extremely difficult to
turn around, once established. Your relation-
ship with the reporter will prove much more
satisfactory if you, rather than he, initiated
the dialogue. Your initiative demonstrates an
attitude of openness and candor which will
strengthen your position in the communication
process.
3. Be sure of the facts before they are re-
leased, but do not let lack of detailed informa-
tion serve as an excuse for not taking action.
In the early minutes and sometimes hours after
an accident, many facts cannot and will not be
known. Newsmen should be so advised with the
assurance that they will be given additional in-
formation as it becomes available. The first
contact with a newsman may be no more than an
advisory that an accident has occurred. Follow-
8-4
-------
up information will fill in details as to the
specific nature of the accident, the extent or
seriousness of injury, nature of corrective
measures to be undertaken, etc. Editors appre-
ciate the early advisory for it enables them to
plan their news "budget," make adjustments in
reporting assignments, and contemplate news page
makeup or newscast content, all of which can work
in the interest of story accuracy. By making the
initial contact as early as possible, the damag-
ing effects of the rumor are also ameliorated.
A separate and distinct set of problems a-
rises in the handling of information if the acci-
dent involves a radiation exposure victim. If the
exposure is low, actual confirmation of the ex-
posure and dose may be delayed while tests are
being performed. If confirmation of an exposure
slips from the day of the accident to the follow-
ing day or longer, considerable difficulty will
be experienced in dealing with newsmen who are
accustomed to covering conventional accidents
where injuries are known immediately. During the
intervening hours while confirmation of a radia-
tion exposure is being made, the individual deal-
ing with the news media is in a very awkward posi-
tion. Belief may exist that there has been ex-
posure, but it has not been established as fact.
If confirmation of the exposure is delayed, then
the news media is often erroneously led to the
suspicion that information is purposely being
withheld. Care must be taken to dispel that
suspicion by explaining the procedures involved in
determining whether an exposure has occurred.
4. Avoid being drawn into making specula-
tive statements based on hypothetical, "what if"
kinds of questions. There may be rare occasions
where such a question should be answered to avoid
a reaction that something is being concealed. It
is better to explain the rationale for the re-
luctance to answer a question than to respond
with a "no comment." It is difficult to conceive of
a situation where a simple "no comment" response
would ever be appropriate.
5. Ideally, there should be only one spokes-
man involved in providing information on an acci-
dent situation to the news media, in the interest
of achieving consistency. If a situation develops
where more than one person is sharing the chore
of providing information, each must keep the other
fully briefed on the content of any discussions
with the media. Frequently, an information
specialist will carry the lead in a conversation
with a member of the news media and a health
physicist will be present (or a party to a tele-
phone conversation) to provide backup for dis-
cussions bearing on the technical aspects of the
accident.
6. Remember that any action taken at the
site of an accident in the public domain can be
misinterpreted. The taking of radiation readings
with Geiger counters, the taking of smear samples,
the cordoning off of a given area and posting of
warning signs are of course commonplace to the
health physicist and other trained emergency
personnel. However, to members of the public and
press, such actions are likely to be a first time
experience with the inherent risk that these
novice observers can misunderstand their signifi-
cance. Therefore, it is important to explain
any overt precautionary or corrective steps taken
in the full public view.
7. In the case of significant accidents
which will likely come to wide attention, it is
advisable to take the time to make sure that
local officials, e.g., the mayor or city manager,
are advised at an early opportunity. The news
media will likely call them for reaction or
comment, and if they are officially briefed on the
situation they will be able to respond from an
informed position, aiding in the orderly, accu-
rate dissemination of information.
Under the heading of advanced planning, a
number of actions can be undertaken by health
physics professionals that will make working
under the stress of an accident situation less
difficult.
First of all, all organizations, public or
private, involved in handling or processing
nuclear materials, should include in their
emergency planning contingencies for handling
inquiries from the press. A public information
plan will include not only designation of the
individual who will take the lead role in hand-
ling inquiries and his responsibilities, but al-
so delineation of the responsibilities and obli-
gations of other key organizational officials in-
volved in handling the flow of information dur-
ing an accident situation. The individual named
to an information responsibility should not be so
low in an organizational hierarchy that he must
wade through various levels of management for the
authority to make statements on behalf of the
organization. As was mentioned earlier, informa-
tion must move to the press quickly to avoid the
damages that can be caused by the rumor mill.
Secondly, a continuing effort must be ex-
pended to educate the various "publics" which
interact with a particular organization, facility
or operation dealing in nuclear materials. This
education effort should focus on explaining the
nature of an organization's activities plus an
outline of the facility's safety program to both
prevent and to correct accident situations onsite
and offsite. Such an education program can in-
clude an active press release program, the offer
of press interviews with key plant personnel,
arranging for plant tours, open houses, family
days, etc., and maintaining a speaker's bureau.
Health physicists can make a valuable contribu-
tion by offering to speak before civic clubs,
schools, and various other community organiza-
tions on the nature of his profession with lay-
level explanation of the nature of radiation,
uses of radioactive materials in industry, re-
search, medicine and agriculture, biological
effects from exposure, etc. The obvious benefit
of such a general educational effort is that
should an accident occur, the negative effects
of ignorance, misunderstanding, fear and mystery
can be reduced.
Health physicists can also make their ex-
pertise available to training programs presented
for police, state patrol, firemen, ambulance
8-5
-------
attendants, civil defense personnel and the like.
As previously stated, emergency personnel and news
media often arrive at the scene of an accident
simultaneously, and the proficiency with which
emergency personnel perform their duty reflects
critically on both the situation and the associ-
ated flow of information to the news media.
Television Interview
1. Do your homework.
(a) Know your subject. Nothing builds
confidence or enables you to keep control of an
interview more than knowing your subject thorough-
ly.
(b) Get someone to play the devil's
advocate and put to you as many questions as they
can think of that your interviewer may use. This
will help keep you from being caught with an un-
expected question which you can't answer on the
spur of the moment.
2. For the actual interview:
(a) Use some make-up. A good appear-
ance on the screen helps get a favorable reaction
to what you say.
(b) Dress conservatively. Avoid
patterns of clothing such as houndstooth or bold
plaids, which tend to jiggle on the screen. A
pastel colored plain shirt will look better than
a white one, but no loud fancy ones.
(c) The interviewer will usually dis-
cuss the subject, at least briefly, before start-
ing the interview, although he or she will not
tell you the exact questions to be asked. This
is not to try to trip you up but to make the
interview appear spontaneous rather than re-
hearsed.
(d) Sit in a four-legged chair. Never
use a swivel chair. There is a subconscious tendency
to squirm in a swivel chair, giving the appearance
of tension. If you're more comfortable standing,
ask the interviewer to do the interview standing.
That will usually be all right.
(e) Try to make the important points
you want to make right away. Time devoted to
your subject on the air will be very limited, and
the real impact of your message will stand a
better chance of getting on the air if it is
stated as quickly as possible.
(f) Without being curt or resorting to
unnaturally clipped speech, make your statements
as tight and concise as possible. With the brief
time available on a newscast, each second is
valuable.
(g) Do not be facetious. Appear
friendly and confident; concerned but not worried
or browbeaten.
(h) The interviewer is asking the
question, not the camera. Therefore, look at him,
the person asking the question, when answering.
(i) Do not call the interviewer by
first name, even if you are well acquainted.
This could be taken as a buddy-buddy relation-
ship, and some of the audience might feel you are
being fed questions slanted to your advantage.
If you feel you must address the interviewer,
say "Mr. ____" or "Ms. ____."
(j) If you don't know an answer, say so
and that you'll try to get it to him promptly -
and do so. This segment will likely be edited
out.
An interview may be taped for five minutes
or longer but it will usually be cut to 40
seconds or less. If editing time is tight, the
editor will probably use the first portion and
barely look at the rest. That's one reason it
is important to get your main points made early.
When the interview is -over, keep looking at
the interviewer until he indicates he is through.
The camera will probably continue rolling for a
little time to provide a video picture while the
announcer makes "voice over" comments in his own
words.
The camera crew will often shoot some foot-
age at different angles while you are asked to
keep talking. This will also be used for voice
over comments.
Frequently, only a very short segment will
be your actual picture and voice. The remainder
of the time will be voice over by the reporter.
His comments are usually based on what you told
him during pre-filming discussion.
And lastly, remember even though the inter-
view is over, the reporter is still at liberty
to use anything you say. So don't make careless
remarks thinking that you are "off the record."
"Interview Bill of Rights"*
1. Rights of the Interviewee
(a) The right to an objective listen-
ing to the facts he presents.
(b) The right to an accurate representa-
tion of his position.
(c) The right to a fair and balanced
context for his statements.
(d) The right to know in advance the
general area of questioning and to have reason-
able time for preparation.
(e) The right to reasonable flexi-
bility as to when to have the interview. (Just
as there are times when a reporter cannot be in-
terrupted near a deadline, there are times when
others cannot be interrupted).
(f) The right to expect the interviewer
to have done some homework.
* (Editor & Publisher, May 28, 1977) by John
Jamison, Director, Corporate Communications-NCNB.
8-6
-------
(g) The right to withhold comment when
there is good reason without having this translat-
ed as evading or "stonewalling," for example, in-
formation governed by SEC regulations, competitive
secrets, matters in litigation or negotiation, in-
formation that could damage innocent persons.
(h) The right to an assumption of
innocence until guilt is proven.
(i) The right to offer feedback to the
reporter, especially to call attention to in-
stances in which the story, in the honest opinion
of the interviewee, missed the point or was in
error - and to have this feedback received in
good faith.
(j) The right to appropriate correction
of substantial errors without further damage to
the credibility or reputation of the inter-
viewee's organization.
2. Rights of the Interviewer
(a) The right to access to an authori-
tative source of information on a timely basis.
(b) The right to candor, within the
limits of propriety.
(c) The right to access to information
and assistance on adverse stories as well as on
favorable ones.
(d) The right to protection on a story
the reporter has developed exclusively, until it
has been published or until another reporter asks
independently for the same information.
(e) The right not to be used by business
for "free-advertising" on a purely commercial
story.
(f) The right not to be reminded that
advertising pays the reporter's salary.
(g) The right not to be held account-
able for ill treatment by another reporter or
another medium at another time.
(h) The right to publish a story with-
out showing it to the interviewee in advance.
(i) The right not to be asked to
suppress legitimate news purely on the grounds
that it would be embarrassing or damaging.
(j) The right not to be summoned to a
news conference when a simple phone call, written
statement, news release or interview would do
just as well.
Reporting Environmental Radiation Data
Reporting environmental radiation data to
the news media and/or the general public poses
many problems for the professional health
physicist. How can these data be communicated in
a manner which will be understood and properly
interpreted? How can over-emphasis on certain
facets of a particular report be avoided? Listed
below are some suggestions which will help:
Avoid technical jargon and use of acro-
nyms to the maximum extent possible.
When technical terms must be used, define
the terms in simple everyday words. If
data are to be presented in a formal re-
port, a summary, written in nontechnical
language, should be a part of the report
or included as an attachment.
Compare radiation data with a familiar
item such as diagnostic radiation X-rays.
Report environmental radiation data in
terms of concentrations or doses. If
there are no statistical increases in
environmental radiation levels then a
statement such as "no increase in radia-
tion detected above background levels"
would suffice.
If the problem is minimal, so indicate in
specific statements which are not likely
to be misinterpreted.
Avoid rambling dialogue on non-pertinent
data as this may create more confusion
and misunderstanding than education.
Recognize that the individual seeking in-
formation in all probability knows little
or nothing about radiation and may be-
lieve what he wants, including the
"worst." The point is to keep language
simple and avoid using words or phrases
which can be exaggerated and blown out
of proportion.
Recognize that most news media represent-
atives are operating on a very short
time schedule, and give the meaning of
the environmental radiation data in clear,
concise statements at the outset of the
interview. Double-talk or vague state-
ments lead to suspicion and often
erroneous interpretation by the media.
If data are highly significant or tech-
nically complex, consideration should be
given to holding a press conference to
release the information in both written
and verbal format. This provides more
time for discussing the subject and data
and answering questions at one time and
place.
Avoid language which may be ambiguous
and lead to misunderstanding.
Conclusion
The material presented in the report demon-
strates that managers and scientists must take
time to educate themselves in communication prin-
ciples and techniques if they expect to success-
fully transmit information to the public which
will be understood and interpreted as intended.
The time and effort expended will be found to be
worthwhile many times over, particularly in the
field of radiation, to which the public is current-
ly so responsive.
8-7
-------
References
Fedler, Fred. An Introduction to the Mass Media.
Editor & Publisher Bookshelf, New York.
Ferkiss, Victor C. Technological Man: The Myth
and the Reality. New American Library, New
York, 1969.
Goffman, Erving. Relations in Public: Micro-
studies of the Public Order. Harper & Row,
Publishers, New York, 1971.
Harris, Morgan and Karp, Patti. How to Make News
and Influence People. Editor & Publisher
Bookshelf, New York.
Hogerton, John. The Atomic Energy Deskbook.
Reinhold Publishing Corporation.
Keeny, Spurgeon M., Jr., et al. (The Nuclear
Energy Policy Study Group), Nuclear Power
Issues and Choices, Ballinger Publishing
Company, Cambridge, Massachusetts, 1977.
McCombs, Maxwell E. and Becker, Lee B. Using
Mass Communication Theory. Editor & Publish-
er Bookshelf, New York.
McGiffert, Robert C. The Art of Editing the News.
Editor & Publisher Bookshelf, New York.
Swain, Bruce M. Reporters' Ethics. Editor &
Publisher Bookshelf, New York.
Weiner, Richard. Professional's Guide to Public-
ity. Editor & Publisher Bookshelf, New York.
Wright, Charles R. Mass Communication. University
of California, Los Angeles, Random House,
Inc., New York, 1959.
"Everything You Always Wanted to Know About Ship-
ping High-Level Nuclear Wastes," DOE Report
No. DOE/EV-0003, 1978.
"Operational Accidents and Radiation Exposure
Experience Within the U.S. AEC, 1943-1975,"
WASH-1192.
"Radiological Emergency Procedures for the Non-
Specialist," AEC, 1969.
"Report of the President's Commission on the
Accident at Three Mile Island," U.S. Govern-
ment Printing Office, 1979.
8-8
-------
ENVIRONMENTAL RADIOLOGICAL SURVEILLANCE: MECHANISMS FOR INFORMATION EXCHANGE
T.W. Oakes (Oak Ridge National Laboratory, Oak Ridge, Tenn.), K.E. Shank (Oak Ridge National
Laboratory, Oak Ridge, Tenn.), J.A. Auxier (Oak Ridge National Laboratory, Oak Ridge, Tenn.),
J.S. Eldridge (Oak Ridge National Laboratory, Oak Ridge, Tenn.), P. Jenkins (Mound Laboratory,
Miamisburg, Ohio), G.L. Love (Department of Energy, Oak Ridge Operations, Oak Ridge, Tenn.),
S.G. Oberg (Utah State University, Logan, Utah), V. Panesko (Rockwell Hanford Operations,
Richland, WA.), B. Selby (Tennessee Valley Authority, Muscle Shoals, Ala.), W.D. Travers
(U.S. Nuclear Regulatory Commission, Washington, D.C.), W.R. Strodl (Consumers Power
Company, Jackson, Mich.)
Subcommittee number 9's charter was to assess the adequacy of present means for
environmental radiological information exchange and to propose new techniques
if needed. Over two hundred health physicists throughout the country, who are
involved in environmental surveillance, were contacted for their comments and
suggestions. After these contacts were made, the discussions were analyzed in
regard to existing communication mechanisms of other professional societies,
government agencies, and information centers; these findings have been summarized
and will be presented. It was concluded that strong support exists for creation
of an Environmental Surveillance Newsletter and that this activity could con-
ceivably fit into an Environmental Surveillance Section of the Health Physics
Society. The desirability of the Health Physics Society's support for pro-
posed symposiums, the format of the proposed newsletter, and possible mechanisms
for support for these activities, will be reviewed.
(Environmental Data Bases; Environmental Information Centers; Environmental
Surveillance Activities; Information Exchange)
Purpose of Subcommittee Number 9
This report assesses the need for and ana-
lyzes the various options available for,informa-
tion exchange mechanisms in the field of environ-
mental radiation surveillance. In this field
where technology and techniques are constantly
being updated, it was the project steering com-
mittee's hypothesis that information flow could
be inadequate. This committee's charter was to
test this hypothesis.
Methodology of Approach
Contacts
The logical first step was to have the mem-
bers of the committee contact various people
throughout the country associated with radio-
logical environmental surveillance activities.
People associated with state and utility organi-
zations were particularly singled out as it was
felt that these individuals may be more in need
of technological information updating; however,
other industrial sites, EPA and NRC regional
offices, universities, and national laboratory
personnel were also contacted. The questions
asked of the individuals interviewed were: (1)
How do they get their information now? (2) Is
it adequate? (3) Do they have any suggestions
for improvement?
Summary of Comments
Just about everyone contacted had definite
views concerning information exchange and was
quite helpful. Because close to
200 people were contacted, we cannot list all
the comments; however, certain concepts were
voiced repeatedly. The majority of contacts
favored the creation of an Environmental Surveil-
lance Newsletter over other mechanisms for ex-
change, such as an information center or other
ideas. The contacts stated that in the environ-
mental surveillance field they now try to contact
someone else if they have a question. It is
presently a "catch-as-catch-can" situation.
Some random comments concerning a newsletter are
stated below:
- "it should be oriented towards a practical
grass roots approach"
- "information that is printed in the journal
now is related to theory, not practical pro-
blems"
- "this information would be helpful, as I am
just starting out in environmental monitoring"
- "would like to see nonroutine data covered in
the newsletter, e.g., weapons fallout data"
- "would like to see new designs, methods, and
instruments for environmental monitoring"
- "operational experience in environmental pro-
grams at other nuclear plants would be appre-
ciated"
- "like to see suggested ways to simplify pro-
cedures and ways to compare results from
various plants"
- "would like to see meetings, job opportunities,
and training courses related to environmental
surveillance listed in one place"
- "need a Sounding board to ask questions related'
to radiological environmental surveillance"
- "... this is long overdue"
9-1
-------
After sorting through the comments, the sub-
committee agreed the newsletter should contain
items concerning updated references and notices
from regulatory agencies, letters to the editor
(as a sounding board), meetings, training courses,
and job opportunities and other items of interest.
A draft of a sample newsletter containing some of
the above items is contained in Appendix A.
Assessment of Existing Information Exchange
Methods
At the same time that individuals were contacted concerning their opinions, the committee looked into existing information exchange methods to ensure that what we would propose was not already available through the government or a professional society. In this endeavor, the Nuclear Regulatory Commission, Department of Energy, and Environmental Protection Agency were looked into. For the professional societies, the American Industrial Hygiene Association, American Public Health Association, American Nuclear Society, Institute of Electrical and Electronics Engineers, and American Institute of Chemical Engineers were also contacted concerning environmental information exchange. In short, it was concluded that nothing exists at this time dealing with the exchange of radiological environmental information. The information bases that do exist are included in this section. Some data collected on environmental surveillance activities in other professional societies and existing information centers of interest are included in the sample newsletter in the following section.
Data bases and data banks exist in various sizes and forms. The collection of raw and processed information is an essential tool for an environmental surveillance program. Table 1 is a summary of useful computerized systems. Government agencies, industries, trade associations, and foundations keep data bases of historical environmental information. Table 2 describes selected data handling and information systems at the federal level.
The Environmental Protection Agency maintains a surveillance program for radioactivity levels called the Environmental Radiation Ambient Monitoring System (ERAMS). Table 3 is a summary of the current status and historical perspective of ERAMS.
Recommendations of Subcommittee 9

We encourage the formation of an Environmental Surveillance Section within the national Health Physics Society. Officers of the Section should be elected.

An information exchange newsletter should be part of the newly formed Section's activities. The editors for the newsletter will be appointed by the officers of the Section (with the editors' approval). The newsletter will be funded by the dues of the Section, and the printing and distribution of the newsletter should be contracted out to a private firm. One item the newsletter should contain is information that was included in the now defunct Public Health Service Radiation Data and Reports, since this was noted as an especially popular item with the people we contacted. We would prefer, however, that the Environmental Protection Agency renew this activity.

We also encourage a specialty certification in Environmental Surveillance, similar to the Reactor Health Physics Certification.

The Section should sponsor a symposium every other year or so. A special committee within the Section will be set up for this purpose.

Better public information on radiation effects should be made available.

A list of persons specializing in environmental surveillance should be made available to all members of the Health Physics Society.

References

[1] Golden, J., R.P. Ouellette, S. Saari, and P.N. Cheremisinoff, Environmental Impact Data Book, Ann Arbor Science, 1979.

[2] Green, K.H., D.A. Blume, and J.E. Jones, A Directory of Computerized Environmental Information Resources, PB-262-486, 1976.

[3] Fennelly, P.E., et al., Environmental Assessment Perspectives, PB-257-911, 1976.

[4] Bracken, M.J., J. Dorigan, J. Hushon, and J. Overbay, User Requirements for Chemical Information Systems, MRS-7558, The MITRE Corp., 1977.
9-2
-------
Table 1. Summarized List of Environmental Data Services(a,b)

[In the source table, each service is cross-indexed by X marks against the types of environmental data and services it offers; those column headings and index marks are illegible in this reproduction and are omitted. Entries 1 through 49 are:]

1. American Agricultural Economics Documentation Center (USDA)
2. CAIN System (Cataloging and Indexing), 1970-present
3. Center for Air Environment Studies, The Pennsylvania State University
4. Air Pollution Technical Information Center (APTIC) (EPA)
5. National Air Data Branch, Air Pollution Office (EPA)
6. Air Quality Implementation Planning Program, Computer Program Tape
7. Projection Algorithm for Vehicular Emissions (PAVE 1)
8. Hazardous Air Pollutants Enforcement Management System (HAPEMS)
9. Central Abstracting and Indexing Service, American Petroleum Institute
10. Franklin Institute Research Laboratories
11. General Electric Company, Space and RESD Divisions
12. Center for Urban Regionalism, Kent State University
13. Textile Research Center, Illinois Institute of Technology
14. Freshwater Institute Numeric Database (FIND)
15. Environmental Information Retrieval On-Line (EPA)
16. Tittsch Associates
17. ERIC Clearinghouse for Science, Mathematics, and Environmental Education, U.S. National Institute of Education
18. Computerized Products and Services, Data Courier, Inc.
19. Waterways Experiment Station, U.S. Army Corps of Engineers
20. NASA Regional Center, Los Angeles
21. American Society of Civil Engineers
22. Smithsonian Science Information Exchange, Smithsonian Institution
23. Center for Short-Lived Phenomena, Smithsonian Institution
24. Conservation Library Center, Denver Public Library
25. Ronald J. Scheidelman and Associates, Searchline
26. Scientists' Institute for Public Information
27. Ecology Forum, Inc., Environmental Information Center
28. BioSciences Information Service of Biological Abstracts
29. Environmental Mutagen Information Center
30. Biological Information Service
31. Geographic Information Systems, Geographic Applications Program (USGS)
32. Bibliography and Index to the Literature in the NBS Alloy Data Center
33. Mineral Supply, U.S. Bureau of Mines
34. Soil Data Storage and Retrieval Unit, Soil Conservation Service (USDA)
35. Rock Mechanics Information Retrieval System, University of London
36. ENDEX/OASIS NOAA Environmental Data Key
37. Overview of the Water Quality Control Information System (STORET)
38. Matrix of Environmental Residuals for Energy Systems (MERES)
39. Institute for Scientific Information
40. National Weather Service River Forecast System
41. The Defense Documentation Center
42. RECON (ERDA)
43. Land Base, 1973
44. Oil Shale Project, 1969 to Present
45. Battelle Energy Information Center (BEIC)
46. Natural Resources Library, U.S. Department of the Interior
47. Federal Aid in Fish and Wildlife, Denver Public Library
48. Environmental Technical Information Center, Institute for Paper Chemistry
49. Transportation Noise Research Information Service, National Academy of Sciences
-------
Table 1. Summarized List of Environmental Data Services (cont'd)

[Entries 50 through 79 appear on this page of the source table; the service names and index marks for these entries are illegible in this reproduction.]
-------
Table 1. Summarized List of Environmental Data Services (cont'd)

80. Data Bases: Solid Earth and Solar Terrestrial Environmental Data
81. Data Bases: Oceanography
82. Analyses of Natural Gases
83. Data Bases: Climatology
84. Energy and Environmental Systems Division (ANL)
85. NUS Corporation Technical Library
86. SDC/POLLUTION
87. SDC/COMPENDEX
88. American Geological Institute
89. National Technical Information Service (NTIS)
90. Chemical Abstracts Service, Division of American Chemical Society
91. Nuclear Science Abstracts (ERDA)

(a) Green, et al., 1976
(b) Golden, et al., 1979
-------
Table 2. Data Handling and Information Systems at the Federal Level(a,b)

U.S. Environmental Protection Agency
  Air: AEROS - Aerometric and Emissions Reporting Systems; NEDS - National Emissions Data System; SAROAD - Storage and Retrieval of Aerometric Data; QAMIS - Quality of Aerometric Data; SOTDAT - Source Test Data Storage; HATREMS - Hazardous and Trace Substance Inventory System; SIP - State Implementation Plan; EDS - Energy Data System; FPC-67 - Cumulative FPC Form 67 Data System; APTIC - Air Pollution Technical Information Center; UNAMAP - Users Network for Applied Modeling of Air Pollution
  Water: STORET - Storage and Retrieval of Water Quality Data; ENVIRON - Environmental Information Retrieval On-Line; GPSF - General Point Source File; NEI - National Estuarine Inventory; Water Quality Standards
  Comprehensive environmental data: NERC - National Environmental Research Center; PRES - Program Review and Evaluation; Environmental Impact Statement System

U.S. National Oceanic and Atmospheric Administration
  ENDEX - Environmental Data Index; EDS - Environmental Data Service; EDBD - Environmental Data Base Directory; ESIC - Environmental Science Information Center

U.S. Geological Survey, Water Resources Division
  OWDC - Office of Water Data Coordination; NWDS - National Water Data System

U.S. Department of the Interior
  WRSIC - Water Resources Scientific Information Center Network

(a) Fennelly, et al., 1976
(b) Golden, et al., 1979
9-6
-------
Table 3. EPA's Environmental Radiation Ambient Monitoring System: Current Status (Golden, et al., 1979)

[For each ERAMS component, the source table lists the number of sampling sites, the sampling frequency, the analyses performed, the frequency of analysis, the previous network from which the component evolved, and the year established. The row-by-row entries are not legibly aligned in this reproduction; the recoverable information is summarized below.]

Components:
- Air Monitoring Program: gross radioactivity and deposition; deposition; plutonium and uranium in air; 85Kr in air
- Water Analysis and Sampling Program: surface water; drinking water; interstate carrier system
- Milk Analysis Program: pasteurized milk; special milk analysis
- Human Organ Program: bone analysis

Site counts listed range from 9 to 658 per component, including 20 active and 54 standby(a) air stations. Sampling frequencies range from semiweekly (or as rain occurs) to annually. Analyses performed include gross alpha, gross beta(b), gamma scan, gamma spectrometry, tritium, 14C, 85Kr, 90Sr, 226Ra, 238,239Pu, uranium isotopes, and stable calcium. Previous networks include the radiation alert network, plutonium in airborne particulates, the tritium surveillance system, the Interstate Carrier Drinking Water Project, the pasteurized milk network, and the human bone network; establishment dates run from 1956 to 1974.

Footnotes:
(a) Activated if a contaminating event occurs (e.g., an atmospheric nuclear test in the Northern Hemisphere)
(b) Field estimate and subsequent laboratory determination
(c) If gross beta is greater than 10 pCi/m3
(d) If deposition exceeds gross beta of 15 nCi/m3
(e) Composite sample analyzed
(f) If gross alpha level exceeds 3 pCi/liter
(g) If gross beta level exceeds 10 pCi/liter
(h) For 131I, 137Cs, 140Ba, and K
Source: Golden, et al., 1979
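The footnotes above describe a simple screening protocol: an inexpensive gross measurement is made first, and more specific analyses follow only when a trigger level is exceeded. The short Python sketch below is offered purely as an illustration of that logic, using the drinking-water trigger levels quoted in footnotes (f) and (g); the function name, the particular follow-up analyses listed, and the output format are illustrative assumptions and are not part of ERAMS.

# Illustrative sketch only: screening logic implied by the Table 3 footnotes.
# Thresholds are taken from footnotes (f) and (g); everything else
# (function name, follow-up analyses, return format) is hypothetical.

GROSS_ALPHA_LIMIT_PCI_PER_L = 3.0   # footnote (f)
GROSS_BETA_LIMIT_PCI_PER_L = 10.0   # footnote (g)

def follow_up_analyses(gross_alpha_pci_per_l, gross_beta_pci_per_l):
    """Return the additional analyses suggested by the screening results."""
    analyses = []
    if gross_alpha_pci_per_l > GROSS_ALPHA_LIMIT_PCI_PER_L:
        # Elevated gross alpha: look for specific alpha emitters (hypothetical follow-up).
        analyses.append("identify specific alpha emitters")
    if gross_beta_pci_per_l > GROSS_BETA_LIMIT_PCI_PER_L:
        # Elevated gross beta: gamma scan or specific beta emitters (hypothetical follow-up).
        analyses.append("gamma scan and specific beta emitters")
    return analyses

# Example: a sample at 4.2 pCi/liter gross alpha and 6 pCi/liter gross beta
# would be flagged for the alpha follow-up only.
print(follow_up_analyses(4.2, 6.0))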
9-7
-------
Sample Newsletter
A MECHANISM FOR INFORMATION EXCHANGE
Vol. 1, No. 1
H.P.S. Subcommittee Number 9
Fall 1979
Editor-in-Chief:
Editors:
STATES CONTACTED: OPINION SURVEY REGARDING ENVIRONMENTAL RADIATION SURVEILLANCE
Two hundred individuals involved in some aspect of environmental radiation surveillance were contacted by Subcommittee 9 members in order to obtain their opinions on how best to obtain and interpret environmental monitoring information. State government representatives, reactor and fuel cycle managers, EPA regional officers, and NRC field persons were among those polled with questions regarding the adequacy of current data resources and present information exchange methods.
The majority of respondents felt that improvements in communication were badly needed and thought that a periodical newsletter could answer many of their needs. Many contacts indicated that field persons are forced to rely too greatly on the "grapevine" for knowledge and support in their monitoring efforts and subsequently do not have equal access to current techniques and methods.
Comments most frequently submitted by respondents include: a) new reports and pertinent notices should be made available through a readily accessible medium, b) a nuts-and-bolts, field-oriented approach to current practices would be most useful, c) evaluation of environmental radiation surveillance (ERS) instruments should be shared, and d) methodologies of data analyses should be openly reviewed.
WATCH THIS SPACE.
Watch this space for future articles on topics such as
What are other professional societies doing in the field of radiation surveillance
Should a new section be formed? An invitation for comments
What's new in information retrieval resources
The need for QA
What is the utility viewpoint
Should the Subcommittee conduct a symposium
Guest editorials submitted by state reps, utilities, feds, etc.
Would you like to present a poster session at a non-HPS meeting explaining the function of HPS radiation surveillance interests
9-8
-------
WATSON GIVES AD HOC COMMITTEE REPORT
In 1977 the HPS established an ad hoc committee with a multipurpose mission. Upgrading Environmental Radiation Data was the Committee's designated task, and Chairman James E. Watson, Jr. (Dept. Env. Sci. and Eng., Univ. of North Carolina) recognized the charge as one requiring input from persons representing many areas of science and policy. Watson's ad hoc committee subsequently identified nine priority areas of interest, established subcommittee status for the projects, and recruited subcommittee members from industry, academia, state governments, and federal agencies including DOE, EPA, NBS, NRC, and TVA.

Watson and each of the project leaders will present the results of their subcommittee efforts at a special session of the 1979 Annual Meeting of the HPS (Philadelphia, PA). The titles of the nine subcommittee projects and the committee member serving as project manager in each case are identified as follows:
(1) Objectives of Environmental Radiation Monitoring (E.F. Conti): three types of monitoring are especially considered by the group: ambient, operational, and special investigation-type efforts.

(2) Definition of Critical Pathways and Radionuclides for Population Radiation Exposure at Nuclear Power Stations (B. Kahn): power plant design objectives are compared to evaluation techniques and applied to predicted effluent fractions.

(3) Propagation of Uncertainty Analysis in Environmental Pathway Dose Modeling (W. Britz): the applicability of pathway dose models is considered as a function of the reliability of individual measurement parameters.

(4) Detection of Changes in Environmental Radiation Levels Due to Nuclear Power Plant Operations (G.G. Eichholz): variations in normal environmental radiation background must be quickly identified as resulting from operational or extraneous activities.

(5) Effective Communication with the Public (R. Payne): methods of communication that will significantly improve the health physicist's ability to provide understandable radiological information to the public (via the news media) are suggested.

(6) Statistical Methods for Environmental Radiation Data Interpretation (D.A. Waite): experimental designs, sample selection, data analysis, establishment of reliable baselines, handling of variables in sampling equations, and choices of alternative techniques are considered.

(7) Reporting of Environmental Radiation Measurements Data (R. Colle): standardization of data reporting in terms of units, significant figures, and measurement uncertainty is advised, and methods for instituting uniform reporting procedures have been devised.

(8) Quality Assurance for Environmental Monitoring Programs (C.G. Sanderson): comprehensive QA programs that appreciate administrative input as well as sample integrity are analyzed, and guidelines for such are provided.

(9) Mechanisms for Environmental Radiation Information Exchange (T.W. Oakes): utilizing solicited input from field workers in environmental radiation surveillance, techniques of improved information exchange, including the issuance of timely newsletters, can be initiated by members of this subcommittee.
ENVIRONMENTAL SURVEILLANCE ACTIVITIES IN OTHER SOCIETIES

The American Nuclear Society sponsors symposia of interest to workers in environmental surveillance activities at its annual meetings and at topical meetings. At the 25th Annual Meeting in Atlanta, GA, June 3-7, 1979, symposia on "Environmental Monitoring and Measurement Techniques," "Environmental Transport Mechanisms," "Waste Management," etc. were conducted.
The Institute of Electrical and Electronics Engineers regularly sponsors Nuclear Science Symposia which include papers on instruments and methodology for environmental radiation measurement.
The Nuclear Technology Section of the American Chemical Society sponsors symposia on various aspects of radioactivity measurements at its annual meetings. At the 178th ACS National Meeting, September 10-14, 1979, in Washington, D.C., symposia on "Waste Chemistry of the Nuclear Fuel Cycle as it Relates to Health and Safety," "Recent Developments in Biological and Chemical Research with Short-Lived Radioisotopes," and "Radionuclides in Earth and Space Sciences" were conducted.
The Annual Bioassay Conference is presented
in the fall of each year. The 23rd Annual Meeting is scheduled for October 1979 in Las Vegas,
Nevada. Many of these papers will be of interest
to environmental surveillance workers.
The 23rd Annual Conference on Analytical
Chemistry in Energy Technology was conducted on
October 9-11, 1979 at Gatlinburg, Tennessee.
This annual conference is sponsored by the
Analytical Chemistry Division of Oak Ridge
National Laboratory. The theme of this year's
conference was "Progress and Problems in Radio-
active Analysis."
Future issues of the FORUM will update
environmental surveillance activities of the
aforementioned organizations and will include
additional information from domestic and inter-
national conferences.
9-9
-------
CENTERS OF INTEREST TO ENVIRONMENTAL SURVEILLANCE ACTIVITIES
Information centers play a valuable role in many aspects of environmental surveillance programs. Information centers are conducted at many organizations throughout the United States, and a large number of such centers and related projects are resident at Oak Ridge National Laboratory. A 65-page brochure, "ORNL Technical Information Center Program" (October 1978), describes the current extent and availability of such information. The brochure is available from the Office of Information Centers, Oak Ridge National Laboratory, P.O. Box X, Oak Ridge, TN 37830.
Some of the more than two dozen centers are:

Ecological Sciences Information Center: Data bases to support bioenvironmental research specializing in nuclear and fossil energy and environmental effects of electric power generation.

Nevada Applied Ecology Information Center: Major emphasis on the environmental aspects of plutonium and other transuranics.

Health and Environmental Studies Program: Comprehensive, multidisciplinary state-of-the-knowledge monographs and assessment reports on the effects of environmental pollutants.

Ecosystem Analysis Data Center: Repository for data generated by the Eastern Deciduous Forest Biome Program; regional assessment and ecosystem studies.

Geoecology Project: Data describing the spatial and temporal distribution of environmental resources for analysis of regional and national energy-related problems.

National Inventory of Biological Monitoring Programs: A centralized and coordinated system for accessing biological monitoring information.

Nuclear Safety Information Center: National center for nuclear safety information; publication of Nuclear Safety.
EMPLOYMENT OPPORTUNITIES
This space will be devoted to current job opportunities in the broad areas of health physics, environmental surveillance, etc. Employers having current needs for specialists in these areas may send announcements of such openings to the editors for inclusion in two successive issues of the FORUM. No attempt at providing a clearinghouse will be made. Only announcements for direct contact between employer and employee will be included. As an example of the format, a current opening from the USNRC (June 1979) Engineering and Scientific Employment Opportunities is quoted:
Title: Health Physicist GS-9/11
Duties: Participates in the development of
occupational health protection standards, in-
cluding regulations, guides and topical reports.
Performs calculations and analyses to develop or
assist in the development of standards for occupa-
tional health protection, particularly those need-
ed in the preparation of value-impact analyses.
Qualifications: Graduate level training in health
physics, or B.S. level, plus specialized work
experience. Submit Standard Form 171, "Personal
Qualification Statement," available at most
Federal Offices to: U.S. Nuclear Regulatory
Commission Division of Organization and Personnel
Recruitment Branch, Washington, D.C. 20555.
SELECT PAPERS FOR YOUR REFERENCE
This section will include a listing of
current journal articles, conference proceedings,
regulatory guides, and general literature refer-
ences of interest to the environmental radiation
surveillance field. Three members of the editori-
al staff will routinely scan a list of journals
and other information sources to select the titles
to be included in this section. A listing of the
literature included in the search will be pub-
lished once or twice per year, and additions can
be made upon recommendation of the readership.
The format of literature citation will be similar
to the example shown: "Environmental Radiologi-
cal Surveillance in Perspective: The Relative
Importance of Environmental Media as a Function
of Effluent Pathway and Radionuclides," D.H.
Denham, Health Physics, March 1979, pp. 273-282.
SELECTED REFERENCE PUBLICATIONS
This section will include references to proceedings of meetings and symposia of interest to environmental surveillance workers. Special publications of IAEA, ANS, and NCRP will be listed. In some cases, editorial comments will be added to call attention to certain publications useful as desk references. Examples of two recent publications are:

Instrumentation and Monitoring Methods for Radiation Protection, 1978, NCRP Report No. 57, 177 pp. (Ed: This report supersedes NCRP Report No. 10, "Radiological Monitoring Methods and Instruments.")
A Handbook of Radioactivity Measurement Procedures, 1978, NCRP Report No. 58, 506 pp. (Ed: This report updates and extends the 1961 NCRP Report No. 28, "A Manual of Radioactivity Procedures." It is a useful desk reference and contains a 135-page appendix of nuclear-decay data for some 200 radioactive nuclides of interest in medical practice, research, health physics, industry, nuclear power, environmental impact studies, and as reference standards.)
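Compilations of nuclear-decay data such as the appendix to NCRP Report No. 58 are commonly used, among other things, to decay-correct an environmental result from the time of counting back to the time of collection. A minimal sketch of that arithmetic, assuming only the standard relation A0 = A exp((ln 2 / T1/2) t), is shown below in Python; the 8.02-day half-life used for 131I is the commonly tabulated value and appears here purely for illustration.

# Minimal sketch: decay-correcting a measured activity back to collection time,
# using a half-life taken from a decay-data compilation. Numbers are illustrative.
import math

def decay_correct(activity_at_count, half_life_days, elapsed_days):
    """Activity at collection time, given the activity measured after a delay."""
    decay_constant = math.log(2) / half_life_days
    return activity_at_count * math.exp(decay_constant * elapsed_days)

# Example: an I-131 result (half-life about 8.02 days) counted 5 days after
# sampling is corrected upward by about a factor of 1.54.
print(decay_correct(activity_at_count=1.0, half_life_days=8.02, elapsed_days=5.0))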
ANNOUNCEMENTS OF MEETINGS, SYMPOSIA, AND COURSES
This section will include announcements of
interest to environmental surveillance workers
9-10
-------
concerning future meetings, courses, or symposia according to the format below:
19-23 Nov. 1979
"The Second Asian Regional Congress on
Radiation Protection," IRPA Manila,
Phillipines. Contact: Dr. Delia T.
Anstalio, Ministry of Health, Rizal
Ave. Sta. Cruz, Manila, Phillipines.
9-11
------- |