U.S. Environmental Protection Agency, U.S. Geological Survey,
and Electric Power Research Institute

Topics: Precipitation chemistry; Dry deposition; Air quality;
Methods standardization; Sampling

EPRI EA-4663, Project 1630-52
EPA 600/9-86/014
USGS Contract 54000-5622
Proceedings, August 1986

Proceedings: Methods for Acidic
Deposition Measurement

Prepared by ENVIROTEST
Seattle, Washington

ERRATA: EPRI Report EA-4663 (EPA 600/9-86/014)

p. 5-27: For Figure ___, see Figure ___ on p. ___. For Figure ___, see
Figure ___ on p. ___, illustrating the same phenomenon.

Subjects: Acidic deposition; Precipitation chemistry; Methods standardization;
Dry deposition; Air quality

AUDIENCE Environmental managers and analysts / R&D scientists and planners
Proceedings: Methods for Acidic Deposition Measurement
What methods are used to measure wet and dry acidic deposi-
tion? How can they be standardized? Eighty workshop par-
ticipants discussed these questions at length, identified specific
priorities for standardization, and recommended approaches and
research needs for achieving these goals.
Continuing research on the nature of acidic deposition, its relationship to
emissions, and the consequences at receptors requires reliable measure-
ments of gases or particles deposited directly (dry deposition) or by precipi-
tation (wet deposition). Although wet deposition has been measured for
several years, dry deposition measurements are still being planned and
tested. Standardizing measurement procedures would enhance the value of
the data.
To exchange information among practitioners and managers; to select
deposition sampling and chemical analysis methods for standardization; to
specify performance goals for measuring dry deposition; and to identify en-
gineering and research needs.
EPA, the U.S. Geological Survey, and EPRI sponsored a workshop in
Raleigh, North Carolina, April 30-May 3, 1985. Workshop organizers invited
representatives of acidic deposition-monitoring networks, methods stan-
dardization agencies, and the scientific research community to attend.
Presentations on wet deposition concerned sampler-siting criteria; sample
collection; chemical analysis; data handling, archiving, and accessing; qual-
ity auditing; and the process for developing standard operating procedures.
Another technical session, on dry deposition, addressed the deposition of air
pollutants to forests, current abilities to measure chemical species, and operating
monitoring networks. Workshop participants also discussed the sampling
and chemistry of fog water.
• The 80 participants identified standardizable practices and methods, ob-
stacles to standardization, and needs for further advances in measurement
technology.

•	High priorities for standardizing wet deposition methods include terminol-
ogy, specifications for representative sampling, quality assurance, data
archiving and reporting, network documentation, wet collectors, analytic
methods, and data validation. A standard reference center was recom-
mended for evaluating different types of collectors.
•	Obstacles to achieving standardization are incomplete knowledge
(19 areas were identified), resistance to changing existing procedures,
differences in judgment, differences between network objectives, a lack
of precision and accuracy data, lack of colocation data or collaborative
testing, and insufficient resources for accomplishing the work.
•	The methods for measuring dry deposition are not as ready for stan-
dardization as are those for wet deposition. Measuring only airborne
concentrations and mathematically deriving from them the dry deposi-
tion is inadequate because the techniques used to infer deposition
velocities must be improved and tested. Obtaining flux data for this pur-
pose is essential. Simple-to-use concentration monitors still need fur-
ther development for an increasing number of substances. However,
procedures for quality assurance, computations, and data reporting and
archiving could be written now. Standardizing terminology was given a
high priority.
EPRI Project Manager: Peter K. Mueller
Energy Analysis and Environment Division
Contractor: ENVIROTEST
Environmental Protection Agency Contract 5D0304NASA
Project Officer: R. Paur
U.S. Geological Survey Contract 54000-5622
Project Officer: J. Frisch
For further information on EPRI research programs, call
EPRI Technical Information Specialists (415) 855-2411.

Proceedings: Methods for Acidic Deposition
Research Project 1630-52
EPA 600/9-86/014
USGS Contract 54000-5622
Proceedings, August 1986
Raleigh, North Carolina
April 30-May 3, 1985
Prepared by
ENVIROTEST
1108 Northeast 200th
Seattle, Washington 98155
D. R. Anderson
Prepared for
U.S. Environmental Protection Agency
Environmental Measurements and Standard Laboratory
Research Triangle Park, North Carolina
Project Officer
R. Paur
U.S. Geological Survey
Office of Atmospheric Deposition Analysis
Reston, Virginia
Project Officer
J. Frisch
Electric Power Research Institute
3412 Hillview Avenue
Palo Alto, California 94304
EPRI Project Manager
P. K. Mueller
Environmental Physics and Chemistry Program
Energy Analysis and Environment Division

Requests for copies of this report should be directed to Research Reports Center
(RRC), Box 50490, Palo Alto, CA 94303, (415) 965-4081. There is no charge for reports
requested by EPRI member utilities and affiliates, U.S. utility associations, U.S. government
agencies (federal, state, and local), media, and foreign organizations with which EPRI has
an information exchange agreement. On request, RRC will send a catalog of EPRI reports.
Electric Power Research Institute and EPRI are registered service marks of Electric Power Research Institute, Inc.
Copyright © 1986 Electric Power Research Institute, Inc. All rights reserved.
Although the research described in this report has been funded in part by the U.S. Environmental Protection
Agency (Contract 5D0304NASA) and the U.S. Geological Survey (Contract 54000-5622), it has not been subjected
to Agency or Survey review and therefore does not necessarily reflect the views of U.S.E.P.A. or U.S.G.S. and no
official endorsement should be inferred.
This report was prepared by the organization(s) named below as an account of work sponsored by the Electric
Power Research Institute, Inc. (EPRI). Neither EPRI, members of EPRI, the organization(s) named below, nor any
person acting on behalf of any of them: (a) makes any warranty, express or implied, with respect to the use of any
information, apparatus, method, or process disclosed in this report or that such use may not infringe privately
owned rights; or (b) assumes any liabilities with respect to the use of, or for damages resulting from the use of,
any information, apparatus, method, or process disclosed in this report.

Workshop Proceedings Report on Deposition
Research on the nature of acidic deposition, its relationship to emissions, and
its consequences at receptors requires reliable measurements. This workshop
provided a forum for scientists and managers to exchange information on procedures
for sampling and chemical analysis with the goal of identifying methods ready for
standardization. Eighty representatives of monitoring networks, methods
standardization agencies and the research community attended. Presentations
explaining the technical issues were followed by working group sessions. These
identified candidate methods for standardization and developed criteria for
ranking and prioritizing their importance. Obstacles to standardization were also
identified. It was recommended that the documentation of standard practices first
be developed for broad topics, such as network design, sampling instruments, and
chemical analysis. Sampling precipitation accurately was identified as urgently
needing further development. Standardization of analytical laboratory procedures
was considered achievable now. For dry deposition measurement, special emphasis
was placed on the need for more research.

Section	Page
2	CHARGE TO THE WORKSHOP--Peter K. Mueller	2-1
Methods Development and Standardization Activities of the	3-3
Intersociety Committee (ISC)--Axel Hendrickson
Here a Site, There a Site, Everywhere a Site--John Robertson	3-9
Collection, Analysis, and Screening of Wet Deposition	3-19
Samples --Van Bowersox
Interpretation of Individual Precipitation Chemistry	3-21
Data Sets: General Considerations and Approaches-
Elaine Chapman
Wet Deposition Data Handling, Reduction, and Reporting--	3-29
Anthony Olsen
Siting Criteria—John Robertson	4-3
Sampling and Field Monitoring—Leo Topol	4-9
Chemical Analytical Methods—Jane Rothert	4-13
Wet Deposition Data Handling, Archiving, and Accessing--	4-19
Anthony Olsen
Quality Auditing--Bill Mitchell	4-37
Development of Standardized Operating Procedures--	4-45
Sally Campbell
Priorities and Processes for Achieving	4-53
Standardization, A Plenary Session Discussion-
Peter K. Mueller

Section	Page
Measuring the Deposition of Air Pollutants to	5-3
Forests—Ronald Bradow
Review of Methods to Measure Chemical Species that	5-7
Contribute to Acid Dry Deposition—Bob Stevens
Practical Aspects of Measuring Dry Deposition--	5-25
Marv Wesely
Dry Deposition Siting Criteria—George Sehmel	6-3
Dry Deposition Sampling and Analysis--Bob Stevens	6-11
Dry Deposition Data Handling—Anthony Olsen	6-23
Dry Deposition Methods Specifications—Ray Hosker	6-33
Priorities and Process for Achieving Standardization,	6-43
A Plenary Discussion—Steven Bromberg
Characterization of the Mesh Impaction Fog	7-3
Sampler—Andy Huang
Conversion of SO2(g) to Sulfate in a Fog Bank--	7-13
Delbert Eatough
Annular Denuders to Collect Reactive Gases: Theory	7-21
and Application —Ivo Allegrini

Figure	Page
Suggested Framework for Individual Data Set Analysis	3-27
Network Operations Block Diagram	4-21
Site Information	4-24
Precipitation Chemistry Collection Characteristics	4-24
Precipitation Standard Gauge and Other Instrumentation at Site	4-25
Individual Sampling Period Data items	4-26
Denuder Difference Analysis Method for Nitrates and Nitric Acid	5-13
Annular Denuder	5-15
Diagram of Annular Denuder System Showing Cyclone, Two Annular Denuders	5-17
Coupled in Series Followed by a Filter Pack (Teflon:Nylon Filters) and
Pump: Flow Controller Assembly
Nitric Acid Concentration in Air at Raleigh, NC, March 28-29, 1985	5-19
Diurnal Variation in Nitrate Concentration, Raleigh, NC, 1985	5-20
Typical Ion Chromatogram of Parallel Annular Denuders which Have	5-21
Collected Ambient Air Samples
ESRL Prototype Concentration Monitor for Estimating Acidic Dry Deposition	5-22
US - Italy Annular Denuder Acid Deposition Sampling System	5-23
Dry Deposition Monitoring Network Operational Function Diagram	6-25
Archived Data Block Structure for Dry Deposition Computerized Data Base	6-28
The Mesh Impaction Fog Sampler	7-5
Aerosol Penetration through Mesh Impaction Fog Sampler	7-6
Liquid Holdup in Polypropylene Mesh Filter of Fog Sampler	7-7
Pooled Standard Deviations (σ) and Coefficients of Variation (%) for Acid	7-9
Measurements for the Henninger Flats Intercomparison Study
Pooled Standard Deviations (σ) and Coefficients of Variation (%) for	7-10
Nitrate, Sulfate, and Ammonium for the Henninger Flats Intercomparison
Pooled Standard Deviations (σ) and Coefficients of Variation (%) for Major	7-11
Cations for the Henninger Flats Intercomparison Study
Variation of Sulfate/sulfur Total Versus Plume Travel Time for the Plume	7-15
of an Oil-fired Power Plant Emitted into a Fog Bank

Table	Page
Critical Attributes: Functional Categories	3-23
Formal Documentation for Wet Deposition Monitoring Network Operation	3-33
National Atmospheric Deposition Program 1982 Annual Data Summary	3-37
1980 Annual Unified Data Summary	3-38
1980 Annual Unified Data Summary - Nitrate Concentration	3-39
Site Representativeness Levels	3-44
A Proposed Definition of Data Completeness Levels Using Five Data	3-45
Completeness Measures
Standardization of Siting Issues	4-5
Wet Precipitation Chemistry Networks and the Methods of Analysis Used	4-14
Per Parameter
Analytical Methods Being Standardized	4-15
Formal Documentation for Wet Deposition Monitoring Network Operation	4-23
Prioritization of Methods and Practices for Standardization	4-51
Typical Concentrations of Chemical Species Related to Acid Dry Deposition	5-9
Description of Sampling Systems and Instruments that Measure Chemical	5-11
Species that Contribute to Acidic Dry Deposition
Italy - USA Field Study, RTP, NC -- Spring 1985 -- SO2 Replicate ADM Test	5-18
Italy - USA Field Study, RTP, NC — Spring 1985 -- HNO3 Replicate ADM Test	5-18
Dry Deposition Information Required for Different Applications	5-29
Viewpoints on the State-of-the-Art of Measurement of Trace Gases and	5-31
Aerosols and of Knowledge of Appropriate Deposition Velocities
Summary of Instrumental Techniques	6-13
Methods of Trends Measurement	6-19
Formal Documentation for Dry Deposition Monitoring Network Operation	6-29
Suitability of Various Dry Deposition Flux Measurement Methods for Use	6-34
in Intensive Case Studies or in Routine Monitoring
Amounts of Ionic Species Measured by the Annular Denuder Methods during	7-25
27 hr Sampling and Relative Atmospheric Levels in the Gas and Particle

As part of the continuing research on the nature of acidic deposition and its rela-
tionship to emissions and its consequences at receptors, there is a continuing need
to obtain reliable measurements of the quantities of various materials deposited
directly (dry deposition) on various surfaces or by precipitation (wet deposition).
While extensive sampling networks have been employed for several years in North
America for studying wet deposition, the expansion of these networks to include
measurement of dry deposition is only in the planning stages. The value of the data
being produced is subject to considerable enhancement if the procedures for making
deposition measurements could be standardized.
Because of the extensive experience with wet deposition, it seems likely that many
aspects of the measurement process are ready for standardization. However, methods
for monitoring dry deposition routinely are still being developed. Therefore before
standardization can proceed, several measurement principles and their respective
practical attributes need to be established.
These concerns about the acidic deposition measurement technology are shared by
many, motivating the convening of workshops to exchange information and to achieve
congruency. This workshop is intended to identify which aspects of the process lend
themselves to immediate standardization, on the one hand, and those which require
first the identification of attributes which ought to be met by the emerging mea-
surement technology.
The concepts behind the workshop and its organization were established by a consen-
sus of an Executive Committee which included Steven Bromberg (USEPA), Joel Frisch
(USGS), Axel Hendrickson (Intersociety Committee), John Jansen (Southern Company
Services), Paul Kapinos (USGS), Bernard Malo (USGS), John McManus (American Electric
Power Service Corporation), Peter K. Mueller (EPRI), Richard Paur (USEPA), Jack
Pickering (USGS), and David Anderson (ENVIROTEST).

The primary workshop objectives were:
•	To exchange information among practitioners and managers of acid
deposition measurement and chemical analysis;
•	To select methods of deposition sampling and chemical analysis as
candidates for standardization and recommend routes for composing
and publishing standardized methods and establishing reference
•	To specify design and performance goals and identify engineering
and research needs of suitable methods for measuring dry
deposition.
The workshop began with technical presentations that explained the methods stan-
dardization process; selection of sampling locations; influences of the measurement
process attributes and data users; and the aspects of data handling, data reduction
and data reporting that should be considered for standardization. Working groups
were convened which addressed the following tasks:
•	Listing of methods and procedures to be considered for standardiza-
tion (including quality control and auditing),
•	Proposing decision criteria for use in ranking the methods or
procedures as candidates for standardization,
•	Identifying problems blocking standardization, and
•	Prioritizing items for standardization.
Eighty representatives of acidic deposition monitoring networks, methods standardi-
zation agencies and the scientific research community attended. The participants
divided the activities into three categories: development of standard practices,
the documentation and acceptance of standard methods, and research on measurement
processes. It was recommended that standard practices covering broad topics such as
specifications for network design, sampling instruments and criteria, and chemical
analysis for desired components be developed first. Subsequently, specific aspects
could be promoted to standard methods.
Obstacles to standardization were identified as: resistance to evaluating and
changing existing procedures and to participating in the standardization process,
differences in judgment, differences between network objectives, a lack of data
suitable for establishing precision and accuracy, reluctance to evaluate data from
colocated sampling and collaborative testing studies, and a need for new knowledge.

For wet deposition, high priorities for standardization include: terminology;
network documentation; wet collector design and operation; analytical chemical
methods; and data processing, validation, and archiving. The creation of a standard
reference center for evaluating different types of collectors, including new tech-
nologies, was recommended. Practices identified as ready for standardization in-
clude: planning and how to use goals and objectives in writing plans, the selection
of methods, quality assurance practices, data archiving and reporting formats, and
wet sample collection and handling.
The major problem requiring more research concerns the accuracy with which the
samples collected represent the actual precipitation. Research is also needed to
establish the density of sampler deployment with respect to spatial variability and
for using quality control information in data analysis.
The methods for measuring dry deposition are not as close to standardization as
those for wet deposition. Considerably more research is needed to learn how to monitor
dry deposition before standardization can occur. While no methods appeared to be
good candidates for standardization, a number of associated procedures could be
developed as standard practices or at least well-documented protocols. Such proce-
dures include: calibration of meteorological and chemical sensors; siting criteria;
quality control testing and auditing; computations; data reporting and archiving;
and the different principles applicable to trends monitoring, effects determination
and air quality model verification. A high priority was placed on developing a
standard terminology.
Research needed to achieve capabilities for directly measuring dry deposition was
identified. The needs relate primarily to development of technology for obtaining flux
data for an increasing number of substances and for observational techniques that
are representative of large areas. The mathematical techniques used to infer dry
deposition from air concentration measurements and characteristic surface parameters
remain to be improved and tested.
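The inferential technique questioned above multiplies a measured airborne concentration by a modeled deposition velocity to estimate a flux. The following sketch is illustrative only; the function name and the example values are assumptions, not taken from the proceedings:

```python
def inferred_dry_flux(concentration, deposition_velocity):
    """Inferential dry deposition estimate: flux = v_d * C.

    concentration: airborne concentration (e.g., ug/m^3)
    deposition_velocity: modeled deposition velocity v_d (m/s) -- the
    poorly characterized factor the workshop said must be improved and
    tested against directly measured flux data
    """
    return deposition_velocity * concentration

# Illustrative values: a gas at 10 ug/m^3 with an assumed v_d of 0.005 m/s
flux = inferred_dry_flux(10.0, 0.005)  # flux in ug m^-2 s^-1
```

The sketch makes the workshop's concern concrete: the arithmetic is trivial, so the accuracy of the estimate rests entirely on how well v_d is known.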

As part of the continuing research on the nature of acidic deposition and its rela-
tionship to emissions and its consequences at receptors, there is a continuing need
to obtain reliable measurements of the quantities of various materials deposited
directly (dry deposition) on various surfaces or by precipitation (wet deposition).
While extensive sampling networks have been employed for several years in North
America for studying wet deposition, the expansion of these networks to include
measurement of dry deposition is only in the planning stages. The value of the data
being produced is subject to considerable enhancement if the procedures for making
deposition measurements could be standardized.
Because of the extensive experience with wet deposition, it seems likely that many
aspects of the measurement process are ready for standardization. However, methods
for surveying dry deposition routinely are still being developed. So before stand-
ardization can proceed several measurement principles and their respective practical
attributes need to be established.
These concerns about the acidic deposition measurement technology are shared also by
others, motivating the convening of workshops to exchange information and to achieve
congruency. This workshop was intended to identify which aspects of the process
lend themselves to immediate standardization, on the one hand, and those which
require first the identification of attributes which ought to be met by the emerging
measurement technology.
The concepts behind the workshop and its organization were established by a consen-
sus of an Executive Committee which included Steven Bromberg (USEPA), Joel Frisch
(USGS), Axel Hendrickson (Intersociety Committee), John Jansen (Southern Company
Services), Paul Kapinos (USGS), Bernard Malo (USGS), John McManus (American Electric
Power Service Corp.), Peter K. Mueller (EPRI), Richard Paur (USEPA), Jack Pickering
(USGS), and David Anderson (ENVIROTEST).

The primary workshop objectives were:
•	To exchange information among practitioners and managers of acid
deposition measurement and chemical analysis;
•	To select methods of deposition sampling and chemical analysis as
candidates for standardization and recommend routes for composing
and publishing standardized methods and establishing reference
•	To specify design and performance goals and identify engineering
and research needs of suitable methods for measuring dry
deposition.
In wet deposition the participants identified those aspects of the measurement
process that could be standardized and discussed a follow-up mechanism for producing
standards and standardized methods. They felt that dry deposition measurement was
not as close to standardization except in chemistry and some aspects of air samp-
ling. Major uncertainties about the dry deposition process remain. Therefore, a
primary hoped-for outcome from the sessions on dry deposition was specifications for
how the measurement process ought to be engineered. In the process of achieving
these outcomes the objective to exchange information was met.
The workshop structure had three technical parts - Wet Deposition (Section 3), Dry
Deposition (Section 5), and Special Topics (Section 7), with subparts or working
groups on wet and dry deposition (Sections 4 and 6). Keynote speakers' technical
presentations in Sections 3, 5, and 7 provided a common information base and posed
questions and views to consider in working group discussions (Sections 4 and 6).
Each working group focused on a specific technical issue in detail. It was in the
working groups where we accomplished the workshop objectives for each topic issue.
Each working group addressed four tasks:
•	Develop a list of methods and procedures to be considered for
standardization,
•	Propose decision criteria for use in ranking the methods or proce-
dures as candidates for standardization,
•	Identify problems in the way of standardization that need to be
addressed before standardization can occur, and
•	Apply the criteria for standardization to the list of methods to
develop a prioritized list of methods and procedures.
The proceedings document provides the sponsoring agencies with current information
to guide them in the development of their research programs in wet and dry deposi-
tion monitoring.

Peter K. Mueller*
A great deal of valuable precipitation chemistry data is being generated. Millions
of dollars are being spent each year. Of great concern is that the measurements we
are making do not have any official standards behind them. Accordingly, several
public and private organizations are active in trying to get some standard methods
established. In the interim we all use draft methods and standard operating proce-
dures (SOPs), but we have not put forth the necessary effort to agree on what the
standards should be for specific applications. Therefore, this workshop has been
convened to identify those parts of the precipitation measurement process that lend
themselves to standardization now.
In 1982 the Utility Acid Precipitation Study Program (UAPSP) workshop had the objec-
tive of finding comparability in the data generated by different precipitation
chemistry measurement networks (1). Many of you attended that workshop. The pro-
ceedings document rather well the attributes of each component of the measurement
process that need to be carefully specified. Many of those now lend themselves to
standardization. The question is what does the community involved in the measure-
ment process think, and what do you feel could be done?
In organizing this workshop, we have identified several working groups. One of them
focused on siting criteria. It is important to have certain minimal siting specifi-
cations met. In the prior UAPSP report (1), the attributes of sites and what ought
to be specified have been outlined. Many purposes for taking samples exist which
influence the selection of sampling locations and other observational methods.
Therefore, it may be necessary to have several alternative methods specified for
networks with different objectives. Then people can choose depending on the
application.
*Electric Power Research Institute, Palo Alto, CA.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

The first step in the process of getting data is the operation of sampling equipment
at a monitoring site. What must be done there? Minimally, precipitation amount, pH
and specific conductivity ought to be recorded at the sampling location. Are there
other measurements that should be specified? Are current practices sufficiently
well described so that they can be standardized as is, or is more work needed?
The step following sample collection is transferring the sample to a chemistry
laboratory. This involves containers and transfers, shipping of materials to the
sampling locations and their return with the samples to the laboratory. All the
details of sample transfer must be specified. Temperature control may be critical.
One needs to consider the importance of doing duplicate sampling (colocated samp-
ling) to get precision information. This issue addresses the basic question of how
one actually obtains precision and accuracy information for an observation of envi-
ronmental quality. All groups need to address this issue.
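One common way duplicate (colocated) sampler data yield a precision estimate is the pooled standard deviation over n pairs, sqrt(sum((x1 - x2)^2 / 2) / n), the statistic reported in the Henninger Flats intercomparison tables later in these proceedings. A minimal sketch, with invented sample values; the function names are not from the proceedings:

```python
import math

def pooled_sd(pairs):
    """Pooled standard deviation from colocated (duplicate) sampler pairs.

    Each pair contributes a two-point variance (x1 - x2)^2 / 2; pooling
    across n pairs gives sqrt of the mean of those variances.
    """
    n = len(pairs)
    return math.sqrt(sum((a - b) ** 2 / 2 for a, b in pairs) / n)

def coefficient_of_variation(pairs):
    """Pooled SD expressed as a percentage of the overall mean."""
    mean = sum(a + b for a, b in pairs) / (2 * len(pairs))
    return 100.0 * pooled_sd(pairs) / mean

# Hypothetical colocated sulfate concentrations (mg/L) at one site
sulfate_pairs = [(2.10, 2.04), (1.85, 1.91), (3.02, 2.95)]
sd = pooled_sd(sulfate_pairs)
cv = coefficient_of_variation(sulfate_pairs)
```

Because each pair is collected under identical conditions, the statistic isolates measurement-process precision from the spatial and temporal variability of the precipitation itself.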
The Chemical Analytical Methods group may have the easiest job. The state-of-the-
art in the laboratory is the most advanced and has the longest history. Yet we are
discovering important differences when we start looking at the data sets. For
instance, the UAPSP laboratory applies a hierarchy for measuring analytes depending
on the sample size. There is hardly a sample that does not have at least a pH and a
sulfate measurement. Others have established a minimum sample volume that will be
analyzed; for some sampling locations this minimum often cannot be achieved, and
that can lead to an unacceptable data loss. This practice could be addressed by
both the Chemical Analytical Methods and by the Data Handling, Archiving, and Acces-
sing working groups.
During the last two or three years tremendous strides have been made in combining
the data sets from various networks and in documenting each entry in the data set.
Even at that, such write-ups are full of jargon. For instance, we recently combined
the data sets from several daily collection networks. Although the wording for
describing the data entries and flags is often similar, actually trying to combine
them was a nightmare because the same wording often was intended to have distinctly
different meanings. To sort out the confusion we had to study procedures and inter-
view the leaders who really knew what they were trying to communicate. At least we
should all be using the same language so that there is a unique label for every
item. One of the chores for the data handling group is to consider a lexicon that
will define and standardize terms.
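The merging problem described above can be made concrete with a toy lexicon: each network's local flag codes are mapped onto one shared vocabulary before data sets are combined, so that identical letters with different meanings cannot silently collide. All network names, flag codes, and meanings below are invented for illustration:

```python
# Shared vocabulary: (network, local flag) -> standard term.
# Note "T" means different things in the two hypothetical networks.
STANDARD_LEXICON = {
    ("NETWORK_A", "T"): "trace_precipitation",
    ("NETWORK_A", "C"): "sample_contaminated",
    ("NETWORK_B", "T"): "sample_contaminated",
    ("NETWORK_B", "X"): "trace_precipitation",
}

def translate_flag(network, flag):
    """Map a network-local flag to its standard term, or raise so that
    an unmapped flag must be resolved before data sets are merged."""
    try:
        return STANDARD_LEXICON[(network, flag)]
    except KeyError:
        raise ValueError(f"Unmapped flag {flag!r} from {network}")
```

The design choice is deliberate: an unknown flag raises an error rather than passing through, forcing the ambiguity to be resolved by the network operator instead of by guesswork downstream.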

Finally, there is quality assurance. Most people use jargon to say, "We are doing
QA/QC." Sometimes one finds it repeated twice in a line, or once every line. I
wonder if this practice is intended to convince somebody that a really bang-up job
is being done. But at best I find this indiscriminate use of QA/QC confusing. So I
am advocating the use of clear language.
Quality assurance is the process that ascertains that we know the precision and the
accuracy of our data. Two kinds of activities establish quality. One is the quali-
ty control which tests products, just like in a factory. Thus the measurement
process should include tests which provide rapid feedback concerning performance
within previously specified tolerances. Secondly, the quality control test data
serves retrospectively to establish the actual precision experienced for a selected
sampling period and location. Such data are important because we can still operate
usefully outside of our own specified tolerance for a period of time. For instance,
when a pH meter starts going bad, so that instead of being accurate to ±0.1 it is
±0.2 or even ±0.3 units it does not mean you have to stop measuring pH until you
fix the meter. One should continue measurements and also enter the uncertainty
data, whatever it actually was, into the data archive. The user can subsequently
decide how to treat such data. So, there is a need to identify the quality control
tests that should be and have been included in every standardized procedure.
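The practice recommended above can be sketched as a small archiving routine: readings made while an instrument drifts outside its specified tolerance are kept and flagged with their actual uncertainty rather than discarded. The record fields and tolerance value are illustrative assumptions, not specifications from the proceedings:

```python
PH_TOLERANCE = 0.1  # assumed specified accuracy for the pH meter, pH units

def archive_reading(value, uncertainty, tolerance=PH_TOLERANCE):
    """Return an archive record: the value, its actual uncertainty, and
    a flag telling the data user whether the instrument was operating
    within its specified tolerance at the time of measurement."""
    return {
        "pH": value,
        "uncertainty": uncertainty,
        "within_tolerance": uncertainty <= tolerance,
    }

# A drifting meter: the second reading is outside tolerance but is
# archived anyway, carrying its degraded uncertainty with it.
readings = [archive_reading(4.35, 0.1), archive_reading(4.41, 0.3)]
```

Both records survive into the archive; the flag and the recorded uncertainty let a future user decide how, or whether, to use the out-of-tolerance data.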
Another part of quality assurance is auditing. There is a need to identify what the
audit process should be, how the auditing is done, how frequently it is done, what
performance audits should include, and what system audits should consist of. Audit
data help to achieve accuracy, but auditing does not really provide the data needed
to specify precision because it is done sporadically. It is a spot check to make
sure that claims for a measurement process are warranted.
Auditing is something an accountant does. One comes into a business and says, "Yes,
this business is doing its job within normal accounting standards." Thus the audit-
or is to establish the deviations from the expectations, but the expectations are
established by the network operators themselves. The auditor should not make a
value judgment, i.e. bad, good, or excellent. The auditor should only judge if
performance is within or outside of specified tolerances. In addition, the auditor
should identify practices or omissions that may lead to data quality limitations
which may not have been addressed by the operator.
In addition to standard measurement procedures, we need standard methods for perfor-
ming the auditing. Some of us have already drafted quality assurance plans that we

do our auditing from, which should serve as the starting point for standardization.
This will be addressed in the working group on Quality Auditing.
These are some of the ideas for the workshop to address. How are we going to imple-
ment the standardization process? What standardization activities already have
taken place? What is the rate of the current process? How much faster can we push it?
1. Proceedings: Advisory Workshop on Methods for Comparing Precipitation Chemistry
Data. UAPSP Report 100. Utility Acid Precipitation Study Program, Reports
Center, P.O. Box 599, Damascus, MD 20872.

The plenary session addressed those aspects of the measurement process and follow-up
mechanisms that can be standardized. Keynote speakers provided a common information
base and posed questions and views to consider in the working group discussions.
Presentations followed the sequence of steps in acid deposition monitoring.
They began with an overall view of standardization and then progressed from siting
to sample collection, analysis, interpretation of results, and data handling.
Axel Hendrickson discussed the overall issues of standardization activities and the
use of draft methods or standard operating procedures (SOPs) which may be used in
the interim while standard methods are being established.
The first step in the process of monitoring is selection of a sampler location.
John Robertson addressed the issues of location of a sampling site and parameters
that must be dealt with in choosing a site. He presented an overview of the siting
issue as a basis for discussion in proposing standards for siting criteria.
Van Bowersox discussed sample collection and chemical analysis, presenting examples
of analytical results and sampling issues for the development of representative
data. He also addressed some of the basic issues of precision and accuracy
information for observations of environmental quality.
Elaine Chapman addressed the issue of interpretation of precipitation chemistry data
sets. She suggested a framework for analyzing individual data sets and identified
potential areas for standardization that will aid the data analyst.
In the final paper of the wet deposition technical session, Anthony Olsen discussed
the issue of data handling. He addressed the usefulness of documentation beyond the
perceived present needs. Thus, when a future need to interpret the data arises,
sufficient documentation will exist to address those unforeseen needs. Data handling,
reduction, and reporting from the individual level to the network level were also
discussed.

Axel Hendrickson*
This is a brief discussion of the methods development and standardization activities
of the Intersociety Committee (ISC). All of us bring different experience and
different backgrounds to the activities of the Committee and subcommittees that work
on it. Part of that background includes the old, original air pollution episode in
the United States at Donora in the late 1940's. This was investigated by industrial
hygienists in the State of Pennsylvania and the U.S. Public Health Service.
Immediately, doubts arose about the data because of uncertainties in the sampling and
analytical procedures used. By the late 1950's the U.S. Public Health Service had
established a division devoted to air pollution problems. The need was recognized
for the development and "standardization" of the methods that were being used.
Action in this direction began in 1958. The American Public Health Association
appointed a committee to evaluate the need for a book on laboratory methods for the
examination of air and to inquire whether APHA should sponsor this activity. It
concluded that APHA should sponsor its publication provided collaboration with other
professional organizations could be obtained. The model for such a book was the
predecessor of what is now known as Standard Methods for the Examination of Water and
Wastewater. This is a successful joint venture among the American Public Health
Association, the American Water Works Association, and the Water Pollution Control
Federation. The venture was started in 1905 and has now published its 16th edition.
Early in 1960 APHA convened a conference attended by about 30 representatives of the
U.S. Public Health Service, industrial organizations, and professional organizations
interested in air pollution control. The conferees endorsed the APHA
*Intersociety Committee, Gainesville FL.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

report. They emphasized several points: the manual should be a joint effort; the
methods to be developed should be prioritized; the manual should be published as soon
as possible; and, especially, each method should be evaluated. An ad hoc committee
was selected to prepare a plan
of action. They called for the formation of an Intersociety Committee on Air Sampling
and Analysis (now referred to as the ISC), to be managed by APHA and funded at
least partially by the U.S. Public Health Service. The conferees accepted the
report, and the ISC was launched.
During 1962 participating professional organizations who subscribed to the articles
of agreement began appointments to the ISC. By March of 1963 the ISC was incorporated
by ten participating organizations including: American Conference of Governmental
Industrial Hygienists; American Chemical Society; American Industrial Hygiene
Association; Association of Official Analytical Chemists; Air Pollution Control
Association; American Public Health Association; American Public Works Association;
American Society of Civil Engineers; American Society of Mechanical Engineers;
American Institute of Chemical Engineers; and the American Society for Testing and
Materials.
Now the Intersociety Committee is organized as an executive committee, which oversees
all operations, and various special-topic subcommittees. Each participating
organization appoints one senior, experienced member to the Executive Committee, who
serves an indefinite term. The chairmanship is held for a three-year period and
rotates among the participating organizations. The ISC employs an executive
secretary and an editor; Dr. George Kupchick has been the executive secretary from
the beginning.
The general functions of the ISC include the governance, policy, editorial, and
acceptance and upgrading of methods. The function of governance involves the day to
day operations of the ISC including financial matters and supervision of the activi-
ties of various subcommittees. Internal policies and procedures include, for example,
conditions of reimbursement of expenses for the subcommittees and the subcommittee
members; members receive only expenses, no other income.
The general guidelines for subcommittee operations and policies for elevating the
status of methods from tentative to recommended have been established. Of major
interest to this group is probably the latter policy. This requires resolution of
all the comments from users and satisfactory results from collaborative testing
before a method can be elevated from tentative to recommended. Early in the plan-
ning we talked about "tentative" versus "standard." Many of the government agen-
cies that were involved in the planning objected to the term "standard." This may
change over time, but at that time they suggested the term "recommended" rather
than standard.
Basic guidance for policy was established by the Articles of Agreement discussed
earlier. ISC operating procedures were developed for the ISC and its subcommittees.
Acceptance/upgrade procedures for methods were established early in the planning.
Great emphasis was placed by the planners on the evaluation of methods. When a
method was submitted by a substance subcommittee to the ISC for approval, it presum-
ably was one of the best methods available. It was designated as tentative because
it really had not been tested. The ISC then would assign the method for collabora-
tive testing and if the results were satisfactory the method might be upgraded to
recommended. A recommended method, therefore, is one of the best methods available
for a substance, one that has been collaboratively tested and whose limitations are
known.
The editorial activities are concerned with decisions for publication of the methods
in journals and when new editions of the manual should be published. The editorial
mechanics of publication, of course, are the responsibility of the editor.
Scientific work of the ISC is in the hands of what we call substance subcommittees.
These are the people who develop the methods of air sampling and analysis. They
exist for various groups of compounds like: sulfur compounds; halogen compounds;
oxidants and nitrogen compounds; carbon compounds; hydrocarbon compounds; metals;
radioactive compounds; and particulate matter. There are other subcommittees which
are not substance subcommittees. They concern themselves with general techniques,
precautions, quality assurance, stationary source sampling, and management of col-
laborative testing.
Each participating organization appoints a member to each subcommittee who is expert
in that particular area. Special consultants who are particularly qualified on a
particular topic may be included in the development of a method on an ad hoc basis.
When a group of substances or ions, for example, might logically be determined by a
specific technique or equipment like AA, IC, or ICAP, a subcommittee of specialists
is assembled. Then a generic method is developed. The ability to address new
techniques easily is particularly important for precipitation chemistry.
Within the overall ISC priorities, each substance subcommittee chairman is responsi-
ble for the operation of his subcommittee. The development of a new method or
modification of an existing one may be on his own initiative, or it might be assigned
to him by the ISC in response to users in the field. In general the flow of
work occurs as follows: The literature is examined by the substance subcommittee.
It develops and formats a method after achieving a consensus within itself. The
method is sent to the editor. The editor then sends it to the ISC for final approv-
al. It is then published as a tentative method. We decided that it is important to
get the methods out as soon as possible. A method is usually published first, as a
tentative method, in the journals of participating organizations. It is then
assigned for collaborative testing. After the results of the collaborative tests are
back, it goes back to the subcommittee. They make the necessary changes and send it on to
the editor and to the ISC for final approval. Finally, if appropriate, it is pub-
lished as a recommended method.
Methods that were adopted by the ISC initially were published in Health Laboratory
Science, which was a laboratory publication of the American Public Health Associa-
tion. If we did not feel that provided sufficient breadth of distribution to work-
ers in the field, we had tentative methods published in the Journal of the Air
Pollution Control Association and the Journal of the Association of Official
Analytical Chemists. The first collection of these methods, after they had been
published in professional journals, was the first edition of Methods of Air Sampling and
Analysis in 1972. There were 57 methods in the first edition. The second edition
was published in 1977. It contained 136 methods. The third edition is currently
under preparation. Methods are replaced, upgraded or revised for each edition as
necessary. We get feedback from those who use the methods. It either comes to the
substance subcommittee chairman or to the ISC.
In addition to journal publications and the publication of these manuals, special
publications of groups of methods have been produced to meet the particular needs of
specific clients or sponsors. For example, the early sponsors were U.S. Public
Health Service, then the National Air Pollution Control Administration, then EPA,
and then NIOSH. We produced special publications for NIOSH who were interested in
workplace methods. They wanted a special booklet that contained all the methods
that they were interested in. These publications are in addition to journal publi-
cation and manuals. State-of-the-art reviews have been produced for specific indus-
tries or specific sources.
From the beginning an integral part of the ISC program was collaborative testing to
establish the precision, accuracy and the limitations of the methods. In December
1960 the ad hoc committee that I mentioned previously pointed out that, "Many
methods now in use have never adequately been evaluated for reliability, interfering

substances, or even suitability for measuring small quantities." When we get into
precipitation chemistry we are talking about even smaller quantities than we had in
the air pollution samples.
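The purpose of a collaborative test is to separate within-laboratory scatter (repeatability) from the overall scatter that also includes lab-to-lab differences (reproducibility). A minimal sketch of that computation, using hypothetical sulfate numbers and a deliberately simplified estimator (the actual collaborative-test protocols are more elaborate):

```python
import statistics

def collaborative_stats(results):
    """results: {lab: [replicate measurements of one shared sample]}.
    Returns (repeatability_sd, reproducibility_sd). Simplified: assumes
    equal replicate counts per lab, so the pooled within-lab variance is
    just the mean of the individual lab variances."""
    within = [statistics.variance(reps) for reps in results.values()]
    repeatability = statistics.mean(within) ** 0.5
    # Reproducibility: scatter of every result about the grand mean,
    # which folds in the lab-to-lab differences.
    all_results = [x for reps in results.values() for x in reps]
    reproducibility = statistics.stdev(all_results)
    return repeatability, reproducibility

# Hypothetical sulfate results (mg/L) from four labs on one shared sample
labs = {"A": [2.10, 2.14, 2.08], "B": [2.30, 2.28, 2.33],
        "C": [2.05, 2.01, 2.07], "D": [2.22, 2.19, 2.25]}
r_sd, R_sd = collaborative_stats(labs)
print(f"repeatability {r_sd:.3f}, reproducibility {R_sd:.3f}")
```

A method whose reproducibility is much worse than its repeatability points to lab-to-lab bias rather than analytical noise, which is exactly the kind of limitation collaborative testing is meant to expose.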
During the early discussions with the U.S. Public Health Service on financing, they
insisted on a collaborative test program, and the ISC responded. Later the
National Air Pollution Control Administration, an interim agency between the U.S.
Public Health Service and EPA, perceived a potential conflict of interest because the
same organization developed the methods, produced the methods, and then tested the methods.
So the collaborative testing was assigned to a federal government group called the
Analytical Methods Evaluation Service based in Cincinnati. Since that time collab-
orative testing has been performed by various government agencies and by private
contractors (as in ASTM Project Threshold) and the data are used to move the status
of methods from tentative to recommended.
What I have tried to show is that the ISC is a thriving organization. It operates on
the profits from publication of the manuals and no longer has any outside sponsors
except occasionally. It has been active for more than 20 years and has a very strong
group of knowledgeable, experienced people involved in it. The ISC is dedicated to
collaborative testing as a means of finding out what we are really measuring.

John K. Robertson*
I have been asked to give an overview of siting so that those in the two siting
workshops have a basis for discussion in proposing standards. What I hope to show
is that good siting is in the eye of the beholder: the siting methods and criteria
we have are subjective, based more on gut feeling and accepted practice than on
research results and the scientific method. Having said that, let me
say that I do not think that that is all bad. I think we can arrive at good siting
procedures without experiment. But no matter how many people participate or how
sound our approach there will always be someone who thinks the criteria lax or ill
applied and, therefore, the data produced from the experiment in question are no
good in that person's judgment.
For those of you who do not know me, I think it is important that you know something
of my background so that you understand the prejudices I represent. I have been
involved with the National Atmospheric Deposition Program (NADP) since 1977, first
as a member of the site selection and certification committee, then as vice-chairman
and chairman of that committee. I am now secretary of NADP. The Military Academy
at West Point has operated an NADP site since June 1979. We are currently working
with the Geological Survey on an intercomparison of samplers and a determination of
the spatial variability of precipitation chemistry over short distances. We operate
one of the prototype NOAA Air Turbulence and Diffusion Laboratory dry deposition
samplers and also operate two of the EPA prototype dry deposition monitoring sta-
tions. We run a HiVol for Ken Rahn's tracer experiments and are cooperating with
the Department of Energy's Environmental Measurement Laboratory (EML) in New York
City as a station in their precipitation monitoring network. EML is also working at
our site to intercompare the standard dry bucket chemistry data with the new
prototype dry deposition methods by operating dry buckets for varying periods of
exposure.
*U.S. Military Academy, West Point NY.

In my previous role as NADP siting chairman, I took on the job of ensuring
that all sites were properly documented according to the NADP protocol. The NADP/
NTN site documentation now occupies over 20 linear feet of maps, questionnaires,
photos, site evaluations, etc. Because this material was at West Point, I was
asked in 1981 to work on the design of the National Trends Network (NTN) for Task
Group D of NAPAP. Jerre Wilson and I spent a year developing NTN and on July 8,
1982, our design was adopted by Task Group D. In the summer of 1982, the Geological
Survey hired us to visit the existing network sites in NADP, and those MAP3S, UAPSP,
TVA and NOAA Reference Climate Stations we had recommended for inclusion in NTN. In
addition we were to visit the 42 locations not involved in any monitoring program
which we had proposed for NTN, to ascertain their suitability and willingness to
participate. Three of us, Jerre Wilson, Dick Graham, and I spent the better part of
a year visiting over 220 sites. Since that program was completed in 1983, Dick
Graham has made several additional trips to find sites to replace unsuitable loca-
tions or sites that did not care to participate. Within the next several months the
last of the 150 NTN sites will be installed completing the network according to the
design adopted in 1982 with only a small number of substitutions or deviations. So
much for background.
Because the purpose of this workshop is to propose standards and since NADP/NTN has
become the de facto national standard, I will present the siting criteria we used in
evaluating sites for NADP/NTN with what little rationale or reason we used in adopt-
ing those criteria. These criteria are sound, well-thought-out, workable compromises,
and in many cases more detailed than anything we could find in the literature.
I know many of you will disagree with these criteria; if you did not, then a
meeting like this would not be needed, or as much fun.
The first thing I think we must realize is that implicitly we make several assump-
tions when we sample. We sample "to obtain specimens that represent a larger
population being studied" (1). The first assumption can be stated, "A sample is repre-
sentative of the area from which it is collected." Put another way it says that the
site and sampling method do not affect sample integrity. I will deal with siting,
and Van Bowersox will address sampling in the next paper. The second assumption
says, "The spatial statistical structure of all analytes is a continuous function
that varies slowly with distance." It only becomes important when we consider
network design. Taken together if these two assumptions are true, then a single
sample can represent a "large area" (i.e. the sample is regionally representative).
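The second assumption, that the spatial structure of an analyte varies slowly and continuously with distance, can in principle be tested with data from closely spaced collectors such as those in the spatial-variability study mentioned earlier. One common tool is the empirical semivariogram; here is a minimal sketch with hypothetical numbers (the function and data are illustrative only, not from any study):

```python
import math
from itertools import combinations

def semivariogram(sites, bin_width_km):
    """sites: list of (x_km, y_km, concentration) tuples. Returns
    {bin_center_km: gamma}, where gamma is the mean of 0.5*(ci - cj)**2
    over all site pairs whose separation falls in the bin. A curve that
    rises gently from near zero supports the slowly-varying assumption;
    a large value at short range (a "nugget") argues against it."""
    bins = {}
    for (x1, y1, c1), (x2, y2, c2) in combinations(sites, 2):
        d = math.hypot(x2 - x1, y2 - y1)
        bins.setdefault(int(d // bin_width_km), []).append(0.5 * (c1 - c2) ** 2)
    return {(k + 0.5) * bin_width_km: sum(v) / len(v)
            for k, v in sorted(bins.items())}

# Hypothetical sulfate concentrations (mg/L) at five closely spaced collectors
sites = [(0, 0, 2.1), (1, 0, 2.2), (0, 2, 2.0), (3, 1, 2.4), (5, 4, 2.6)]
print(semivariogram(sites, bin_width_km=2.0))
```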

I see three aspects to siting: the first involves agreeing on a set of criteria to
be used in evaluating a site; the second has to do with how we locate a site against
which to apply the criteria; and the third involves the placement of the equipment
within the site. Each has been called siting separately, but together all are
necessary to put a monitoring site in place.
What do we need to consider in finding a representative site? One manual says, "To
select station locations it is necessary to have detailed information on the varia-
bilities of ambient pollutant concentrations, the precipitation amounts, the
prevailing winds and other meteorological data" (2). In my experience this kind of
information seldom exists; and if you had it all, there would be no need to sample
because you would already have the answer. Below is a list of the factors I think
are important.
1.	Site stability
2.	Analyte(s) being sampled
3.	Meteorological factors
4.	Sources of contamination
5.	Topography and terrain
6.	Practical considerations
Let us look at these in more detail.
Site stability involves the ownership of the land on which the site will be placed
and the use of the land both at the site and in its immediate surroundings, but
only insofar as that use might change over time. The actual use of the
site is a factor to be considered under sources of contamination. Site ownership
stability is translated by NTN to mean "that the surrounding land be federally
owned, but state or other public ownership is acceptable" (3). There are only a few
NADP/NTN sites that violate this rule, and these are corporate research or corporate
monitoring sites. I know of only one case of private ownership and that site is
covered by a long term easement on use of the land. The intent is to give the
program the needed long term stability for assessing long term trends. By the
nature of the federal/state beast, land use at most sites is stable and in most
cases under the control of the sites' administrators. The most difficult factor to
predict is changes in land use around a site. By locating well within the boun-
daries of federal/state preserves we can minimize the effect of change of land use,
but even here we must be ever vigilant to changes that can affect the site. Site
stability is needed to make our first assumption true. With little or no change at
a site we can assert that longitudinal changes in chemistry are real and not due to
siting factors.
The medium the sample is being obtained from will drive the sampling method and to
some extent the frequency with which a site needs to be serviced. Together, samp-
ling method and sampling frequency will greatly affect site location. For example,
a liquid absorption sampler may require less sophisticated equipment with minimal
power requirements (i.e. battery power) but, because of sample preservation, etc.,
must be changed more frequently than a continuous gas sampler, which has greater
power requirements (maybe even environmental requirements [i.e. air conditioning])
but can operate for longer unattended periods. The liquid site can be away from a
source of electricity, the other cannot.
Sampling method will govern how big a site is needed and how close to civilization
(electricity) it must be located. The more sophisticated the instrumentation, the
more limits are placed on location because of the skill required of the operator.
Sampling frequency will affect the site location. The more a site needs service,
the closer it will be to the operator's home base, especially if the operator has
other duties.
The number of colocated experiments will increase the demand for space at the site
because instruments can get in each other's way. A site with one instrument will be
smaller than one with two or more. As we move from only sampling wet deposition to
sampling both wet and dry deposition, more space will be required at each site.
Some existing sites, particularly those in a small clearing in the woods, will not
house more instruments without additional clearing, which violates our stability
criterion.
Wind is the mover for our weather systems. It brings us our samples. Many of our
rules for contamination give different measures of closeness for an upwind or down-
wind source. Rules for orientation of collectors may be different for wet and dry
deposition because the mean dry weather wind direction may be different from the
mean wet weather wind direction.
Type of precipitation becomes a factor predominantly in areas that receive snow.
The NADP/NTN rules require an Alter Wind Shield at all sites receiving more than 20%
of their annual precipitation as snow and require installation of a snow roof.

Precipitation amount seems like an odd factor, but I know of two cases where we
did not take it into account and should have. The first is at Olympic National
Park in Washington State. Here rainfall averages over 120 inches per year. Many
weeks, more than one bucket is needed to catch the complete sample. This has impli-
cations for sampling protocol and data base design. The second site is my own at
West Point. We are situated on an old rifle range adjacent to a swamp. Most of the
site is four feet above mean water level. In the spring of 1984 we had a family of
beaver build a dam which raised the water level slightly, followed by seven and one-
half inches of rain in a 36 hour period. When the beaver dam washed out and blocked
the culvert that drains our area, the water level rose about seven feet putting most
of the site under water. The same thing happens with snow in a much more discernible
way. NADP/NTN siting rules allow sites with more than 0.5 meters of annual
snow accumulation to raise their collector (and rain gauge) off the ground.
The biggest meteorological factor legislated against in our rules has to do with
local perturbations to the wind field that can cause the amount of sample collected
to differ from that which actually falls. "Residential structures within 30 meters
of the collector should not be within a 30° cone of the mean wind direction" because
of the bulk of these buildings and their blocking effect on the wind. "The base of
the collector will not be enclosed," to allow the wind to pass under the collector and
not create turbulence at the collector. "Any object over 1 meter high with suffi-
cient mass to deflect the wind should not be located within 5 meters of the collec-
tor." Alter wind shields and open fences are excluded from this requirement.
Because of wind turbulence and screening of precipitation we require the rain gauge
to be at least 5 meters from the collector, but no further than 30 meters.
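Quantitative rules like these can be checked mechanically during site evaluation. A sketch of such a check (the function and data layout are mine, not an NADP/NTN tool; the 30° cone is read as ±15° about the mean wind direction):

```python
import math

def placement_issues(collector, gauge, obstacles, mean_wind_deg):
    """collector, gauge: (x, y) positions in meters; obstacles: list of
    (x, y, height_m, kind), kind in {"residential", "object", "fence"}.
    Flags violations of the placement rules quoted above."""
    issues = []
    cx, cy = collector
    for x, y, height, kind in obstacles:
        d = math.hypot(x - cx, y - cy)
        # Bearing from collector to obstacle, degrees clockwise from north
        bearing = math.degrees(math.atan2(x - cx, y - cy)) % 360
        off_wind = abs((bearing - mean_wind_deg + 180) % 360 - 180)
        if kind == "residential" and d < 30 and off_wind <= 15:
            issues.append(f"residential structure {d:.0f} m away in the wind cone")
        # Alter shields and open fences are exempt from the 5 m rule
        if kind == "object" and height > 1 and d < 5:
            issues.append(f"object over 1 m high only {d:.1f} m from collector")
    dg = math.hypot(gauge[0] - cx, gauge[1] - cy)
    if not 5 <= dg <= 30:
        issues.append(f"rain gauge {dg:.0f} m from collector (must be 5-30 m)")
    return issues

# Demo: gauge too close, and a residential structure 20 m upwind
print(placement_issues((0, 0), (2, 0), [(0, 20, 6, "residential")],
                       mean_wind_deg=0))
```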
Sources of contamination are the biggest group legislated against when it comes to
establishing siting rules. The rules fall into several categories:
1.	Stationary point sources
2.	Area sources
3.	Line sources
Everybody has ideas on how far a site should be from a large pollution point source.
This is an area where Wilson and I departed from the conventional wisdom. Rule 1 -
"If a region is characterized by a certain type of . . . industrialization, the
collector should be located to provide representation of such extensive deposition
sources." For example, there are many areas of the country where large regions are
covered by oil fields or gas fields. Our sites, if called for by the plan, are
placed so that no one well, or group of wells, dominates the sample, but so that the
site is nestled in among the wells at a reasonable distance. Rule 2 - "Industrial operations such

as power plants, chemical plants and manufacturing facilities should be at least
10 km away from the collector. If the emission sources are located in the general
upwind direction . . . , then this distance should be increased to 20 km." The
conventional wisdom says 50 km. Why did we change? Two reasons. When we first
started placing sites for NTN 50 km was no problem. We had started working in the
western United States, but there are large portions of the east and southeast that
would not have a site under this rule; I mean Maryland, the northern portion of
Delaware, eastern Pennsylvania, most all of New York and New Jersey except the
northernmost portions, Connecticut, Rhode Island, Massachusetts, and parts of south-
ern New Hampshire and southern Maine. The second reason we adopted this rule was
the accumulated experience of NADP. NADP had a rule in its original site selection
manual (4) that said, "No continuous sources of pollution shall be within 50 km in
the direction of the mean wind direction for the site, and 30 km in all other direc-
tions." When we started to examine how with this kind of rule we got sites in the
east, what we found was that most sites had a buffer of at least 10 km without large
pollution sources. We first said that our scientists must not have known their
local region out past 10 km, but we found they had reported the sources out to great
distances. We interpreted this to mean that even though they knew the rules their
collective wisdom was that it did not matter in terms of their view of the quality
of the site. This is one of those areas without definitive experimental data. I
know of nobody who has placed a grid of collectors around a power plant to see at
what distance the effect of the power plant goes away. Nor do I know of any funding
source out there willing to pay for such an experiment. However, a great body of
knowledge exists for tracing plumes around plants, but the data has not been applied
to aid site placement.
Nobody has done an experiment around cities aimed at discerning how close to a city
a site can be located and not see the influence of the city. A lot of data exists
from urban networks of one sort or another that could be sifted through to come up
with such a limit. Again, with the lack of data and based on the experience of NADP
operators, we have adopted a rule. Suburban/urban areas with a population of
10,000 people follow the same rule as for power plants above. As cities grow in
size (>75,000 people) then the collector should be no closer than 20 km and at least
40 km if the city is upwind of the site.
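Encoded as a lookup, the source-distance rules described above come down to four numbers. A sketch (my own condensation of the thresholds as stated in the text, with hypothetical function and parameter names):

```python
def min_distance_km(source, upwind, population=0):
    """Minimum collector-to-source distance under the NTN rules described
    above. source: "industrial" or "urban"; upwind: True if the source
    lies in the general upwind direction of the site."""
    if source == "industrial":
        return 20 if upwind else 10
    if source == "urban":
        if population > 75000:       # large city
            return 40 if upwind else 20
        return 20 if upwind else 10  # smaller towns: same rule as power plants
    raise ValueError(f"unknown source kind: {source}")

# An upwind power plant must be at least 20 km away; an upwind large city, 40 km.
```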
Local requirements for siting differ widely among networks. Each has established
its minimum distances from roads, airports, railroads, rivers, etc. All are based
on the feeling of the individuals involved that the dust and turbulence from these
potential sources can influence precipitation chemistry. Data does not exist to
support any particular distance, but no one will say that we should not locate away
from roads. For NTN we adopted 100 meters as our standard. It seemed like a rea-
sonable number and most researchers felt comfortable with it.
Similarly, we established rules of 500 meters for feedlots and barns housing animals,
no active grazing within 20 meters, and no surface storage of agricultural products,
fuels, vehicles, or other materials within 100 meters. The latter was based not so
much on the stored materials polluting as on the people who would come and go to
these stockpiles. We also legislate against parking lots and maintenance yards
within 100 meters because of the traffic and dust. These rules are refinements of
the NADP rules, which were basically the same (4).
Topography can be a factor in siting. No network that I am aware of addresses it
though. In a national network like NTN how do you take account of topography? Our
guidance from Task Group D was to get a reasonable distribution of elevations.
Altitude obviously has an effect on rain amount. You can see this by looking at any
topographic map of the western U.S. Why else are the hills shaded green? Everyone
has learned about orographic effect on precipitation. Which side of a ridge you
place a site on can make a difference in the amount of sample and presumably its
chemistry. Another effect of topography is on accessibility. There are fewer
roads in the mountains. How many valleys have an overprint that shows the roads
closed in winter? There is a topographic effect, but I think each network will have
to sort it out in terms of its goals and density.
Terrain in the immediate neighborhood of the site is also a factor. Sudden breaks
in slope will affect the wind pattern at a site and could affect sampling rate.
Most sites and networks are in clearings in the woods, where the surrounding trees
protect from the wind, or are located on open level ground (or gently sloping ter-
rain). These conditions are legislated, so the conventional wisdom believes this to
be a factor. Again there is a dearth of data to support amount of slope, etc.
"Nontechnical considerations such as convenience and accessibility are usually the
dominant factors in selecting a specific monitoring location within an area of
interest" (5). These are not the only practical considerations that impact on
siting. Perhaps the dominant factor is budget. Many ambitious projects have been
checked by a lack of adequate funds to do them right. Another factor is an adequate
site but no personnel with the required level of training or the right background,
or, worse yet, nobody with the time needed to take on an additional project. Personnel,
budget and accessibility can interact. A dedicated observer can travel far to

service a remote site, but someone with other duties, particularly if they relate to
the care and feeding of living things, will only be able to spend a short time as an
observer and therefore travel shorter distances and less remotely. Budget deter-
mines the ability to hire a dedicated observer. All the good intentions in the
world cannot overcome this.
One technical, practical consideration we have not addressed well is the
reliability of power. We put an electrically powered machine out to sample
precipitation knowing that the most probable time we will lose power is during a
snowstorm or thunderstorm. Our desire to sample in remote areas also conflicts with
the need for year-round accessibility. Building a network is a compromise between
design principles (e.g., remoteness or high elevations) and the practicalities of
budget, personnel, and accessibility. Is it better to sample all the time where we
can service a site, or to sample remotely at a site we may service regularly only
70% of the time? That is a tough question each network or site operator must
resolve based on network objectives.
Let me wrap this paper up with a quotation: "Currently the determination of the
number and location of monitoring stations required in a network is primarily based
on subjective considerations, semi-quantitative rules supported by experience, or
limited use of analytical tools such as simple Gaussian models" (5). I have tried
to show this in my discussion. I think standardization is practical in some areas
of siting, but because of the subjective, semi-quantitative nature of our rules
there will always be someone who disagrees with what has been agreed to. What is
needed is some basic, very unglamorous research on the effect of sources of poten-
tial contamination on precipitation chemistry. I do not think this will ever occur;
therefore, we are left with subjectivity and semi-quantitativeness.
1.	ACS Committee on Environmental Improvement and the Subcommittee on Environmental
Analytical Chemistry. Analytical Chemistry, v. 52, 1980, pp. 2242-2249.
2.	L.E. Topol, J. Flanagan, P. Chen, M. Lev-on, R. Schwall and L.S. Shepard.
Quality Assurance Manual for Precipitation Measurement Systems: Research
Triangle Park, NC, Environmental Protection Agency, EPA-600/4-82-042A&B, 1983.
3.	J.K. Robertson and J.W. Wilson. Design of the National Trends Network for
Monitoring the Chemistry of Atmospheric Precipitation: U.S. Geological Survey
Circular 964, 1985.
4.	Subcommittee No. 1, undated. Site Selection and Certification Manual:
National Atmospheric Deposition Program.
5.	M. Liu and J. Arvin. Methodology for the Design of an Optimum Air Quality
Monitoring Network: Las Vegas, Nevada, Environmental Protection Agency,
EPA-600/S4-81-002, 1981.

Van Bowersox**
Data are routinely reported for acidic deposition samples collected and analyzed as
part of local, regional, and national monitoring networks. Each of these networks
has specific siting criteria, sampling equipment, operating protocols, and analyti-
cal procedures. These specifications are expected to impose sufficient controls on
network operations so that measurements of the experimental variables can be used to
address the stated objectives in the least ambiguous way.
The National Atmospheric Deposition Program/National Trends Network (NADP/NTN) has
adopted practices of operation that are intended to capture data that are spatially
representative over a sufficient period of time to identify trends in precipitation
quality over the U.S. Key criteria related to this objective are discussed. Fol-
lowing a critical review, data for each sample where these criteria have not been
satisfied are flagged. Flagged data are screened from the set reported to the
scientific (user) community. These data screening procedures verify that:
1)	the collected sample was a wet-only precipitation sample,
2)	there was an adherence to network operating procedures by field and lab
personnel, and
3)	if extrinsic contaminants were reported, the associated sample chemistry
was not anomalous.
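As an illustration only, the three checks might be sketched as follows (the field names, thresholds, and the anomaly test are hypothetical stand-ins, not NADP/NTN's actual screening procedures):

```python
# Hypothetical sketch of a three-step wet-deposition sample screen;
# field names and the contamination test below are illustrative only.

def screen_sample(sample):
    """Return a list of flags; an empty list means the sample passes."""
    flags = []
    # 1) Collected as a wet-only precipitation sample?
    if sample.get("sampler_mode") != "wet-only":
        flags.append("not wet-only")
    # 2) Field and lab personnel adhered to network operating procedures?
    if not sample.get("protocol_followed", False):
        flags.append("protocol deviation")
    # 3) If extrinsic contaminants were reported, is the chemistry anomalous?
    #    (An arbitrary pH window stands in for a real anomaly test.)
    if sample.get("contaminant_reported") and not (3.0 <= sample["pH"] <= 8.0):
        flags.append("contaminated and anomalous")
    return flags

clean = {"sampler_mode": "wet-only", "protocol_followed": True,
         "contaminant_reported": False, "pH": 4.6}
bad = {"sampler_mode": "bulk", "protocol_followed": True,
       "contaminant_reported": True, "pH": 9.2}
print(screen_sample(clean))
print(screen_sample(bad))
```

Samples that accumulate one or more flags would then be screened from the reported data set.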
In looking toward standardization, experiences with several operating features of
the NADP/NTN network may serve as lessons. When they are contrasted with several
other large networks, potential shortcomings in the collection, analysis, and
screening of wet deposition samples deserve consideration.
*Only the abstract of the presentation was available at printing.
**Illinois State Water Survey, Champaign IL.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

E.G. Chapman*
The operation of precipitation chemistry monitoring networks is an important compo-
nent of research efforts to understand acid deposition processes. Operation of such
networks includes not only locating sites, collecting precipitation, and analyzing
samples, but also conducting a comprehensive quality control/quality assurance
program, and preparing and utilizing a high quality data set. The preparation of
data sets from the standpoint of validating field and laboratory measurements was
discussed in the previous paper, and the next paper will discuss the merging and archiving
of data sets into usable large data bases. This paper addresses the analysis and
interpretation of individual data sets. A common thread through each of these
papers is that of data flagging, and the problems that unflagged data can create.
This thread underscores the point that data quality affects all aspects of acid
precipitation research.
The purpose of this paper is threefold:
1)	to identify the major components that influence interpretation of
individual precipitation chemistry data sets,
2)	to suggest a framework for analyzing individual data sets that
provides a built-in system for determining the influence of these
components, and
3)	to identify potential areas for standardization that will aid the
data analyst in the above work and in the overall interpretation of
precipitation chemistry data.
The presentation is given from the perspective of an individual looking at the data
set from one precipitation chemistry monitoring network, whether it be one year or
multi-year, and asking the questions, "What can I learn from this data set? How can
I ensure that my analysis approach is valid?"
*Battelle Northwest Laboratories, Richland WA.

Following the basics of good scientific method, the data analyst will approach
the interpretation work with a formulated set of hypotheses. In the field of acid
precipitation research, it is likely that the hypotheses will fall into one of four
general classes: physio-chemical behavior, deposition patterns, total deposition,
and transport/source receptor relationships. Physio-chemical behavior hypotheses
are those which attempt to relate ion chemistry to precipitation amount or type,
specific classes of meteorological systems, or aerosol and air chemistry. Deposi-
tion pattern investigations may include studies of seasonal or spatial variations,
while total deposition work may involve short-term (i.e. seasonal or annual) or
long-term studies that provide input to agricultural activity, forest productivity
or biological effect studies. The transport/source receptor category is related to
scavenging and long-range transport model validation.
These categories do not have rigid boundaries. For example, a deposition pattern
study could include investigation of variations in source emissions and atmospheric
chemistry pathways relative to observed deposition amounts. Such work could also be
considered a physio-chemical behavior study. Formulating and classifying an hypo-
thesis, however, does help the analyst to design an investigative approach. The
next step is to determine whether a data set is truly applicable for testing the
hypotheses under consideration.
One of the simplest ways for the data analyst to approach the applicability question
is to perform a stepwise check of the critical procedures and protocols that affect
data quality and quantity. These critical procedures and protocols have been termed
"network attributes" (1). The 1982 workshop on "Methods for Comparing Precipitation
Chemistry Data" identified six functional areas of critical network attributes (2).
These areas are listed in Table 1. Although each category includes many attributes
that will affect data quality, and therefore the interpretation tasks to which the
data can be applied, it is likely that the data analyst must pay close attention, no
matter what the hypotheses under consideration, to attributes in the functional
areas of site surroundings, sampling activities, chemistry laboratory activities and
data management. Major attributes in each category and examples of their impact on
data analysis work are discussed below.

Table 1
Critical Attributes: Functional Categories
Network Stability
Site Surroundings
On Site Measurements
Sample Transfer
Chemistry Laboratory Activities
Data Management
In terms of site surroundings, the data analyst must consider whether the network
was designed to have sites representative of a given area or whether it was designed
to investigate the impact of specific local sources. MAP3S, Air and Precipitation
Network (APN) and the Utility Acid Precipitation Study Program (UAPSP) networks are
examples of the former, while the short term Meteorological Effects of Thermal
Releases (METER) and the mesoscale study Philadelphia networks are examples of the
latter. Nearby sources, land use, and topography and altitude are considered in
locating sites in both types of networks. Normally all siting criteria are clearly
identified by the network. Data analysts must be aware that subsequent information
may determine that unforeseen forces impacted the quality or representativeness of
data available at certain sites. The work of Barrie and Sirois (3) in determining
that precipitation samples collected at certain CANSAP sites in use prior to 1980
were unduly influenced by dust and salts from nearby roads is a good example of this
latter situation.
In terms of sampling protocols, sampler design (i.e. bulk or wet only) and sample
frequency (daily, weekly, monthly, etc.) are of concern, and probably exert the
greatest influence on deciding whether to employ a given data set. If the analyst
is investigating hypotheses relating to source receptor relationships or physio-
chemical mechanisms, analysis of weekly data is not likely to be productive because
of the many different weather patterns moving through an area in this time period.
Use of daily samples and sub-event data is more appropriate.
In terms of chemistry laboratory activities, there are obvious attributes to consi-
der. One is simply to ensure that data on compounds needed for the study are ac-
tually measured and that these compounds are high enough on the analytical priority
list that an adequate data set exists. Also, the analyst must know what criteria
are employed for flagging data, so that the need for further data culling can be
assessed.

More subtle aspects also need to be considered, such as determining that laboratory
precision and accuracy are adequate for the types of statistical tests to be employ-
ed. Any changes in analytical bias with time must also be clearly identified, since
such changes may produce or obscure a trend. The analyst must also understand
exactly what information is provided by a given analytical technique. For example,
ion exclusion chromatography (ICE) analysis is often thought to directly yield
organic acid concentrations. However, ICE measures the total free ion and the
undissociated acid present in the sample, and non-acid compounds may contribute to
the free ion levels. Salts from aerosols, such as ammonium or sodium formate, and
gases such as peroxyacetyl nitrate (PAN), which produce acetate and nitrate ion on
dissolution in acidic aqueous media (4), may contribute to the observed levels. An
upper bound on organic acid concentrations can be made by assuming that all measured
organic ions were produced from the acid forms, but subsequent data analysis must be
performed with recognition of this assumption.
The attributes of data management relate to data entry and storage, which
Dr. Anthony Olsen will address in the following paper. Data analysis attributes
relate to decisions the analyst makes in handling the data base. One obvious deci-
sion concerns the handling of flagged data; in many cases these will simply be
eliminated. The influence of the analyst in handling unflagged data is also impor-
tant. Often an overall data culling procedure is used to eliminate "unusual"
samples. Ion balance and collection efficiency are the two criteria most often
employed in this process. It appears to the author that the collection efficiency
criterion is more useful, because it removes samples where part of the event was
missed and, therefore, where the reported chemistry results may not be representa-
tive. Elimination via the ion balance criterion may remove samples where question-
able but unflagged chemistry data were reported, but it seems more likely to remove samples
with good data but where all important constituents were not analyzed. Such samples
may provide good insights into a given problem, especially in studies of scavenging
processes and chemical mechanisms.
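As a sketch of how these two culling criteria are commonly computed (the thresholds below are invented for illustration and are not any network's published values):

```python
# Illustrative ion-balance and collection-efficiency culling checks;
# the +/-25% and 75% thresholds are arbitrary examples.

def ion_balance_pct(cations_ueq, anions_ueq):
    """Percent ion difference from summed cation/anion equivalents (ueq/L)."""
    return 100.0 * (cations_ueq - anions_ueq) / (cations_ueq + anions_ueq)

def collection_efficiency(sample_depth_mm, gauge_depth_mm):
    """Fraction of gauge-measured precipitation caught by the collector."""
    return sample_depth_mm / gauge_depth_mm

def cull(cations_ueq, anions_ueq, sample_depth_mm, gauge_depth_mm,
         max_imbalance=25.0, min_efficiency=0.75):
    reasons = []
    if abs(ion_balance_pct(cations_ueq, anions_ueq)) > max_imbalance:
        reasons.append("ion imbalance")
    if collection_efficiency(sample_depth_mm, gauge_depth_mm) < min_efficiency:
        reasons.append("low collection efficiency")
    return reasons

print(cull(98.0, 102.0, 9.5, 10.0))   # balanced, well-caught sample
print(cull(150.0, 60.0, 4.0, 10.0))   # fails both criteria
```

A low collection efficiency indicates part of the event was missed, which is why that criterion bears more directly on representativeness.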
The data analyst must also consider how to handle samples listed as below quantifi-
able detection limits (BQDL). Often such values are set equal to the detection
limit, half this limit or to zero. Arbitrary selection of one approach may affect
interpretation of statistical tests involving parameters such as phosphate, magnes-
ium, or potassium where BQDL values are common. Usually this impact can be deter-
mined with statistical tests comparing the means or variances obtained in using the
various values.
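A minimal sketch of such a sensitivity check, with invented concentrations and an invented detection limit:

```python
# Effect of the three common BQDL substitutions on a summary statistic.
# Values and the detection limit are invented for illustration.

DL = 0.01  # hypothetical detection limit, mg/L

def substitute(values, bqdl_value):
    """Replace below-detection results (None) with the chosen surrogate."""
    return [v if v is not None else bqdl_value for v in values]

def mean(values):
    return sum(values) / len(values)

# None marks a below-quantifiable-detection-limit (BQDL) result.
potassium = [0.05, None, 0.03, None, None, 0.08]

for surrogate in (DL, DL / 2.0, 0.0):
    m = mean(substitute(potassium, surrogate))
    print(f"surrogate={surrogate:.3f}  mean={m:.4f}")
```

Comparing the resulting means (or variances) across the three substitutions indicates whether the arbitrary choice materially affects the statistical conclusions.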

A more complex problem with unflagged data arises when the analyst must determine
how to handle samples with atypical meteorological or biological influences. For
example, in a study of seasonal deposition patterns, should data from events associ-
ated with tornadoes in a region where tornadoes are a rarity be excluded? Decisions
on how to handle these questions obviously depend on the specific hypothesis under
consideration. However, such questions should not be ignored.
The analyst must also ensure that appropriate techniques are employed. This inclu-
des recognizing that precipitation chemistry data sets often exhibit the character-
istics of a lognormal distribution (5, 6) and that simple arithmetic descriptors or
tests based on these descriptors may not be appropriate. It also implies under-
standing the basis for and assumptions made in the calculations performed, not an
easy task given the plethora of sophisticated tests included in the many commercial
statistical packages now commonly available.
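For example, the arithmetic and geometric means of a small, invented set of roughly lognormal concentrations differ noticeably, which is why a geometric descriptor is often preferred for such data:

```python
# Arithmetic vs. geometric descriptors for skewed, roughly lognormal
# concentration data; the sample values are invented for illustration.
import math

sulfate = [12.0, 35.0, 8.0, 60.0, 15.0, 22.0, 95.0, 10.0]  # ueq/L

arith_mean = sum(sulfate) / len(sulfate)
geo_mean = math.exp(sum(math.log(x) for x in sulfate) / len(sulfate))

print(f"arithmetic mean: {arith_mean:.1f} ueq/L")
print(f"geometric mean:  {geo_mean:.1f} ueq/L")  # less pulled by the high value
```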
The above presentation shows that data analysts must consider a wide range of items
in their work and that the concept of network attributes provides a convenient
framework for the step-wise consideration of these items. Figure 1 illustrates this
suggested framework and summarizes the overall process for analyzing individual
precipitation chemistry data sets just presented.
Figure 1 also helps focus attention on areas where this workshop can aid data
analysts, since two major blocks in this figure relate to network attributes and
data culling. Reaching consensus on the contrasting of network attributes, on
minimum criteria for data flagging, and on data culling procedures will help data
analysts to better assess their work. For example, if
attributes other than those discussed here and at the UAPSP workshop (1) are impor-
tant, they need to be clearly identified and their potential impacts highlighted.
This is especially important concerning a consensus on suitable and appropriate
analytical techniques for both major components, such as sulfate, nitrate, and
hydrogen ions, and for minor analytes such as S(IV), organic ions, and trace metals.
Recommendations on criteria for minimum data flagging will allow analysts to know
that no matter what data set is used, samples with certain characteristics or cer-
tain types of measurements will be flagged. Similarly, discussions on data culling
procedures, especially concerning the use of ion balance and collection efficiency
criteria, may provide analysts with a more laboratory-oriented perspective and
better insights into this problem.

This workshop is also an excellent opportunity to begin looking at future data
needs. Realistically, an understanding of the factors controlling precipitation
chemistry must come from a combination of laboratory, modeling, and field studies.
Network managers, especially those involved with research networks, need to address
the question of how their fieldwork can shed light on the major puzzles confronting
researchers today. We need to evaluate modeling needs, and consider how network
procedures can be modified to address these needs. The establishment of concurrent
air chemistry measurements (e.g., S(IV), organics, H2O2) is one example of an area
where useful information for modeling work would result almost immediately. Ideas
and comments from those intimately involved with network operations are needed if
truly useful studies are to be initiated.
There are major components influencing the analysis and interpretation of individual
precipitation chemistry data sets. A general framework for such interpretation
work, utilizing the concept of network attributes, can be used to determine whether
a certain data set is truly appropriate for testing the hypotheses under considera-
tion. The suggested framework also illustrates the need for standardization or
recommended methods in the areas of analytical techniques, minimum data flagging
criteria, and data culling procedures. It is hoped that this paper will serve as a
useful starting point for discussions of these issues during the workshop.
This research has been funded as part of the National Acid Precipitation Assessment
Program by the U.S. Environmental Protection Agency under a Related Services Agree-
ment No. EPA-DW930059. The Pacific Northwest Laboratory is operated for the Depart-
ment of Energy by Battelle Memorial Institute. Although the research described in
this article has been funded wholly or in part by the EPA, it has not been subjected
to EPA review and therefore does not necessarily reflect the views of EPA and no
official endorsement should be inferred.
1.	Mueller, P.K. 1983. "Practical Objectives and Criteria Governing Quality
Control and Assurance Activities." In Proceedings: Advisory Workshop on
Methods for Comparing Precipitation Chemistry Data. UAPSP-100.
2.	Utility Acid Precipitation Study Program (UAPSP). 1983. Proceedings: Advis-
ory Workshop on Methods for Comparing Precipitation Chemistry Data. UAPSP-100,
February 1983.

[Figure 1 (flow chart): Form Hypothesis -> Contrast Network Attributes -> Is the
data set applicable? If No, Reform Hypothesis or Obtain New Data Set; if Yes,
Select Data Culling Procedures -> Perform Analysis -> Interpret Results.]
Figure 1. Suggested Framework for Individual Data Set Analysis

3.	Barrie, L.A. and A. Sirois. 1982. An Analysis and Assessment of Precipita-
tion Chemistry Measurements Made by CANSAP: 1977-1980. AES Report AQRB-82-
003-T, Downsview, Ontario, Canada M3H 5T4.
4.	Holdren, M.W., C.W. Spicer, and J.M. Hales. 1984. "Peroxyacetyl Nitrate
Solubility and Decomposition Rate in Acidic Water." Atmos. Environ., 18:1171-
5.	Rothert, J.E. and M.T. Dana. 1984. The MAP3S Precipitation Chemistry Network:
Seventh Periodic Summary Report (1983). PNL-5298, Pacific Northwest Labora-
tory, Richland, Washington.
6.	MAP3S/RAINE Research Community. 1982. "The MAP3S/RAINE Precipitation Chemis-
try Network: Statistical Overview of the Period 1976-1979." Atmos. Environ.,

Anthony R. Olsen*
Wet deposition monitoring programs have several different objectives as to why
precipitation samples are collected and chemically analyzed. Regardless of the
objective, a data base of the information obtained from the precipitation samples is
necessary. This paper discusses key aspects of data handling, data reduction and
data reporting associated with wet deposition data bases. Specific topics and
issues to be addressed are 1) what information should be stored on network proto-
cols, siting, chemical analysis procedures and sample results, 2) should question-
able results be omitted, sequestered or flagged and reported, 3) can the determina-
tion of valid/invalid samples for statistical reporting be standardized, 4) what are
reasonable data completeness measures for statistical summaries, 5) can agreement be
reached on potentially useful statistical summaries and their definition, and 6) is
it possible to define a data quality level to be associated with each wet deposition
measurement.
Previously the topic of data handling, archiving and accessing has been addressed
and described by others. The first descriptions and documentation are probably the
result of individual wet deposition monitoring efforts to design their data manage-
ment systems. Documentation now available from existing networks, e.g., NADP/NTN,
CANSAP, APN, UAPSP, MAP3S, and APIOS, provides examples of specific applications of
data handling procedures. Typically, a single comprehensive document discussing all
aspects of data handling, archiving and accessing from initial field collection to
the production of annual summary data reports is not prepared. Prior to the current
workshop two previous workshops were held that included discussions on data hand-
ling. In 1979, a workshop on data management needs for atmospheric deposition data
was held (1). The purposes of the workshop were to review existing and planned
*Battelle Northwest Laboratories, Richland WA.

programs in North America for collecting acid deposition data and to identify speci-
fic needs to establish a coordinated, readily accessible data base. The workshop
attendees concluded that a single data base system could be designed. In the pre-
sent context such a data system is termed a centralized multi-network data base
system. A second advisory workshop on methods for comparing precipitation chemistry
data was held in August 1982 (2). One of their objectives was to establish charac-
teristics of data archiving systems that would facilitate access and to outline
documents needed to describe the features of network design, operation and evolu-
tion. The proceedings of the workshop contain a comprehensive description of key
elements of a data management system, including example data forms and document
outlines. Under the auspices of the development of a quality assurance manual for
precipitation measurement systems, Topol et al. (3) include discussions on numerous
details related to data management systems. This is natural since a good data man-
agement system and a good quality assurance program for a network have substantial
overlap in documentation and information required. Finally, a centralized multi-
network integrated wet deposition data base system is described by Watson and Olsen
(4). The Acid Deposition System (ADS) for statistical reporting is an actual imple-
mentation of the ideas resulting from the first workshop (1).
The main perspective taken for this paper is to focus on the overall structure of
data management function for a single network and to cover the major topics from
initial data collection to final summary data report production. The integration of
data from multiple networks into a centralized data base will not be specifically
addressed. However, the construction of individual network data bases with accom-
panying documentation as suggested will naturally lead to a better ability to inte-
grate data from multiple networks. Rather than repeat the material developed in
prior workshops and reports, only an overview will be given. The process of data
validation, coding, selection of valid/invalid sample data, statistical summary
procedures, data completeness measures and data quality levels will be included.
The latter topics have received less emphasis in the past and more attention is now
being directed toward them.
Documentation of network protocols, procedures and data is essential for effective
use of the data and the ascertainment of its quality. Several levels of documenta-
tion exist. Detailed documentation of procedures to be followed in the field col-
lection, chemical analysis, data screening and data archiving is required for net-
work personnel who actually perform each of the functions. A general data user
rarely has ready access to this documentation, nor typically requires the detailed
information given. What a general data user does require is overview
information on the network procedures. It must be in sufficient detail to enable
the user to discern if the data fits their specific requirements. In addition a
network prepares data reports, summary data reports, and results of quality assur-
ance activities. Each of these has a role to fulfill and is essential for adequate
documentation. A maxim is to document past the point of resentment.
Table 1 lists formal documentation considered to be essential. Documentation states
not only what the network planned to do but also what it actually did. Documenta-
tion of the latter implementation portion cannot be overemphasized. Because of
the maturity of wet deposition monitoring knowledge, a minimal documentation re-
quirement for a quality network should be possible. A group of knowledgeable re-
searchers should be organized to design the necessary documentation. Clearly,
sufficient flexibility must be maintained to allow individual networks to tailor the
documents to meet their specific objectives.
The design of a wet deposition data base requires knowledge of siting protocols,
field data collection procedures, chemical analysis protocols, network sample
screening and coding procedures, and reporting procedures. Based on experience
gained from combining information from major wet deposition networks, a framework
for including key information is suggested.
A network data base includes two broad categories of information: site characteris-
tics including history of changes and individual sample data results including
associated auxiliary information. Typically, only the latter category is actually
included in a computerized data base system. Principally, this is because site
information does not change frequently, a portion is not amenable to computeriza-
tion, and the information does not enter directly into other calculations. Sample
information is routinely computerized but the amount of supporting information
included varies widely. Potential data users have two widely differing options in
how they choose to operate. They may acquire the computerized data base, accept
whatever documentation accompanies it, and proceed with their analysis. At the
other extreme the user may demand additional information, usually through requests
for additional documentation and extensive discussions with network personnel. The
latter results in the user being very knowledgeable about the limitations of the
data but requires substantial investment in time from both the user and network
personnel. The standardization of procedures wherever possible and the availability
of user-oriented documentation would decrease the effort required significantly.
More importantly, the overall quality of application of the data should increase.
Site information includes all aspects of siting, field equipment, and chemical
analysis procedures. Topol et al. (3) describe some of the desirable information.
Even though site information is not always computerized, key information that is
amenable to computerization should be made available in a form from which computeri-
zation is possible. Additional information is available in the documents discussed
previously. One aspect of site information that has not received sufficient atten-
tion is the recording of changes that occur at the site or in network protocol.
These changes do occur, hopefully to improve data quality, and must be recorded in a
form accessible to the data user. Samples collected under the "same" conditions
should be identifiable. One way of accomplishing this has been included in the ADS
data base system (4). A revision code is attached to each sample as well as a site
identifier. All samples with the same revision code are collected under the same
conditions. Documentation of the time period associated with the revision as well
as the reason for a change in revision code is also maintained as part of the ADS
data base.
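The revision-code idea can be sketched as follows (the field names, site identifier, and values are illustrative, not the actual ADS schema):

```python
# Sketch of an ADS-style revision code: samples sharing a site identifier
# and revision code were collected under the same conditions. All record
# contents below are invented for illustration.

samples = [
    {"site": "IL11", "revision": 1, "date": "1980-06-03", "so4": 55.0},
    {"site": "IL11", "revision": 1, "date": "1980-06-10", "so4": 48.0},
    {"site": "IL11", "revision": 2, "date": "1981-02-17", "so4": 61.0},  # after a protocol change
]

def by_conditions(samples):
    """Group sample records by (site, revision code)."""
    groups = {}
    for s in samples:
        groups.setdefault((s["site"], s["revision"]), []).append(s)
    return groups

groups = by_conditions(samples)
for key, recs in sorted(groups):
    pass  # each key identifies one set of "same" collection conditions
print({k: len(v) for k, v in groups.items()})
```

A data user can then restrict an analysis to samples collected under one revision, or test whether a protocol change coincides with a shift in the measurements.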
Sample information includes the data result for each analyte and all supporting
information that describes the conditions during collection and analysis, decisions
resulting from data screening and validation, and precision and accuracy (if avail-
able). The ADS User's Manual (Watson and Olsen 1985) and individual network docu-
mentation on their data base structures describe the many fields of information
required to adequately document a sample result. Major areas include: sample key,
sample dates, arrival dates, analysis dates, sample period description, precipita-
tion collection, sample notes and codes, and analyte analysis results including
notes and codes.
During field collection, chemical analysis and data screening of wet deposition
samples, a portion of the individual results will be identified as questionable,
unreliable, suspect or possibly contaminated. A decision must be made on how to
handle and archive these sample results. Should they be appropriately coded, flag-
ged and archived in the data base or should they be excluded? If they are included,
then should they be sequestered from or reported to users? These are questions that
must be addressed by each network as they design their data base system and their
data management philosophy. A quick review of major North American networks amply
illustrates the variation possible. It also suggests the nightmare that faces any

Table 1
Formal Documentation for Wet Deposition Monitoring Network Operation

Network Overview (once). Program plan of study: goals of network, design criteria,
overview of individual operation components. Describes network to outside users.

Siting Manual (once). Detailed procedures of siting, including protocol for siting,
equipment requirements, etc.

Site Description (once). Detailed description of each site, including photographs
and maps. Documentation of changes over time. A shorter summary report may also be
prepared for general data users.

Field Operation and Maintenance Manual (once). Describes activities of site
operators and field operation management personnel. Detailed procedures for sample
collection, field analysis, equipment maintenance, QC activities, data forms, etc.

Chemical Laboratory Operation, Analysis Procedures Manual(s) (once). Describes
activities of laboratory personnel. Overview of schedules and responsibility.
Detailed instructions for sample handling, analysis, and reporting. QC activities.

Archive Data Management Manuals (once). Describes responsibilities, procedures and
activities. QC activities. Software documents, listings, data formats. Data base
description. Coding procedures.

Quality Control Procedure Manual (once). Prepared for general data users outside
network. General overview of procedures, QC objectives, responsibility for QC, data
validation procedures, handling of missing and discarded data.

Quality Control Report (annually). Summary report of QC activities during year.
Documentation of data quality including precision and accuracy summary data as
appropriate.

Quality Assurance Plan (once). Prepared for internal use and general distribution.
Describes activities conducted with outside organizations to assure quality of
reported data.

Quality Assurance Report (annually). Summary report of QA activities during year.
Documentation of quality of data including accuracy and precision summary data.

serious data user who must understand all the idiosyncrasies of each network
included in their study.
Notes refer to comments on the condition of the sample or at the field site. They
are thought of as phrases produced by field or laboratory personnel. Codes refer
to the translation of note phrases into specific codes for computerization. This
may be a simple assignment of a numerical code to each phrase or the categoriza-
tion of several phrases with similar impact on interpretation. It is recommended
that codes be designed with two levels of information: primary information descri-
bing sample conditions, and secondary judgements resulting from data screening and
validation. Primary information does not change once it is observed. Secondary
judgements involve a decision process which is susceptible to change and includes
any biases of the individuals involved. If only the results of the secondary
judgements are archived, then other researchers cannot apply different criteria of
their own. This is especially critical when data from different networks are to be
compared.
Assume that the recommended two levels of codes are followed by a network. Some of
the sample results will have been judged to be questionable, invalid or suspect. A
network may choose not to make these sample results available to outside users,
i.e., sequester the data. Alternatively, it may clearly identify such samples with
appropriate codes, indicate to the user that suspect data are included, and describe
how the network handles suspect data in its summary process. Regardless of which
approach the network chooses, it is essential that their internal archived data base
include all sample data, valid and invalid. The major reason is that the determina-
tion based on data screening and validation is a secondary judgement and is subject
to change with increased knowledge. Additional information concerning characteris-
tics of sample results can be attained by statistically analyzing differences be-
tween valid and invalid samples. It is also recommended that the entire archived
data base be made available to the ADS data base with the provision that ADS will
follow any network restrictions on general access to invalid data.
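The two-level coding scheme recommended above can be sketched as a simple data
structure. This is an illustrative sketch only: the specific code values, field
names, and the `strict` criterion below are hypothetical, not any network's
standard.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical primary condition codes: observed facts, archived unchanged.
PRIMARY_CODES = {
    0: "no anomaly noted",
    1: "leaves in sample",
    2: "insects in sample",
    3: "bulk (lid open between events)",
}

# Hypothetical secondary validity codes: screening judgements, revisable.
SECONDARY_CODES = {"V": "valid", "S": "suspect", "I": "invalid"}

@dataclass
class SampleRecord:
    sample_id: str
    primary: List[int]               # condition codes recorded in field/lab
    secondary: Optional[str] = None  # screening judgement, may be revised

    def revalidate(self, criteria) -> None:
        """Re-apply a (possibly different) validation criterion.

        Because primary codes are archived unchanged, any researcher can
        rebuild the secondary judgement from them using their own criteria.
        """
        self.secondary = criteria(self.primary)

def strict(codes):
    """Illustrative criterion: any contamination code makes the sample suspect."""
    return "S" if any(c != 0 for c in codes) else "V"

rec = SampleRecord("NE-281520-1982-14", primary=[1])
rec.revalidate(strict)
print(rec.secondary)  # -> S
```

Storing the primary codes rather than only the validity verdict is what lets a
later user substitute a different `criteria` function without re-collecting data.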
Sufficient experience operating wet deposition networks has accumulated to address
the issue of standardizing part of the sample code process. No universal set of
codes will ever be possible, due to differences in network objectives and the occur-
rence of unusual events that have not been considered. Certainly, a philosophy of
coding similar to that suggested above is a starting point. A minimal set of codes
that covers standard occurrences (leaves in sample, bulk, etc.) can be standard-
ized. Additional flexibility to handle other codes would be essential.

All potential uses of wet deposition data require a determination of conditions
under which a sample will be considered valid or invalid. This determination is
intimately related to sample screening and data validation criteria discussed pre-
viously, but differs in that it depends on the specific objective of the intended
use. An issue to be resolved is whether it is possible to recommend a standard set
of criteria for sample validation.
Comparing data from multiple networks depends on the successful determination of
valid data for each network on a common basis. Currently the process is very diffi-
cult to implement due to the wide differences in how each network codes and reports
sample data. Unless each network applies equivalent data screening and sample
validation criteria, the data user must rely on the primary coding information to
construct common criteria. Standardization of sample coding and data screening will
simplify the situation immensely.
Since determination of valid or invalid samples depends on the objective for the
summary, it will not be possible to standardize the conditions a sample must meet to
be valid. What can be required is that the conditions be explicitly documented and
included with the summary.
Ion species concentration and deposition occurring at a site during a specified time
period may be statistically summarized in a number of ways. Average concentration
may be measured by the arithmetic mean, geometric mean, median or a precipitation-
weighted mean. Procedures for estimating total deposition during a summary period
must consider the effect of incomplete information. These and other statistical
summaries are defined. An issue is whether it is now possible to recommend a subset
of the statistical summaries for standard use. At a minimum, standard terminology
will be proposed.
For concreteness assume that an annual statistical summary for a single analyte at a
site is desired. Moreover, assume that valid data have been identified for inclu-
sion. Two important concerns that must be addressed are how sample results below
the limit of quantitation are handled and how incomplete data, e.g. sample periods
with invalid data, are handled. Decisions on how to address the two concerns
can be objective specific. Below-limit-of-quantitation values may be used directly,
or one-half of the quantitation limit used. Regardless of the procedure, it must be

explicitly stated. The extent of complete data is given by the proposed data com-
pleteness measures. If sufficient data are "missing," then no statistical summary
may be warranted. If sufficient data are available, then adjustments to the statis-
tical summary procedures may be necessary to account for any resulting bias. Al-
though important, these concerns will not be discussed further.
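The two substitution conventions mentioned above can be made explicit so that the
choice travels with the summary. This is a minimal sketch, not a network standard;
the function name, argument names, and example values are illustrative.

```python
def substitute_below_loq(values, loq, method="half"):
    """Replace below-limit-of-quantitation results before summarizing.

    values -- measured concentrations; entries below `loq` are below the LOQ
    loq    -- limit of quantitation, in the same units as `values`
    method -- "direct": use the reported value as-is
              "half":   replace with loq / 2 (a common convention)
    Whichever method is used, it should be explicitly stated with the summary.
    """
    if method == "direct":
        return list(values)
    if method == "half":
        return [v if v >= loq else loq / 2.0 for v in values]
    raise ValueError("unknown method: " + method)

conc = [0.02, 0.40, 0.05, 1.10]  # mg/l, with an assumed LOQ of 0.10 mg/l
print(substitute_below_loq(conc, 0.10, "half"))  # [0.05, 0.4, 0.05, 1.1]
```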
Statistical summaries typically completed can be categorized as (1) characterization
of the concentration distribution, (2) characterization of the deposition distribu-
tion, and (3) estimation of total deposition during the summary period. Similar
statistical summaries are used for the first two and they are discussed together for
the concentration distribution.
Standard statistical summaries include measures of the average concentration and
associated variability, i.e. standard deviation. In addition, it is important to
report the number of observations and the number of values below limit of quantita-
tion. The minimum, maximum and selected percentiles are also used to characterize
the distribution. It is useful to report the Lilliefors D statistic to test whether
the distribution is normal or lognormal. Graphical summaries such as frequency histo-
grams and empirical distribution function plots also are important. All of the
above are routinely applied in annual summary reports by wet deposition monitoring
networks. Tables 2, 3, and 4 are examples of report formats being used. The termi-
nology and definitions appearing on the reports are not given but are available from
the author.
Average concentration measures include arithmetic mean, geometric mean, median,
volume-weighted mean and precipitation-weighted mean. Each has an appropriate
application. Typically, concentration distributions are not symmetric so that the
arithmetic mean, geometric mean and median are measuring different aspects of the
center of the distribution. Which one is appropriate depends on the objective for
the summary; however, the median and geometric mean tend to be more useful. Confi-
dence limits for the non-weighted means are readily available and should be included
with the average estimate. For the arithmetic and geometric means, arithmetic and
geometric standard deviations can be obtained. For the median, confidence limits can
be determined as a function of sample size and reported.
The precipitation-weighted mean (PWM) is defined as
PWM = (sum of P(i) * C(i)) / (sum of P(i))
where P(i) is the precipitation depth, C(i) is the measured ion species concentra-
tion for the ith valid sample, and the sums are over the n valid samples included
in the average.

Table 2
National Atmospheric Deposition Program 1982 Annual Data Summary
(Example annual report format for a single station: station 281520, operated and
funded by the University of Nebraska; elevation 352 m, longitude 96:29:34, lati-
tude 41:9:11; summary period 365 days; total estimated precipitation 95.6 cm. The
report lists data completeness measures -- percent of summary period with precipi-
tation coverage, percent of summary period with valid samples, percent of esti-
mated precipitation with valid samples, percent collector efficiency for valid
samples -- followed, for conductance (micro S/cm), SO4/NO3 equivalent ratios, and
each ion concentration, by the precipitation-weighted mean, geometric mean, stan-
dard deviation of logarithms, arithmetic mean, arithmetic standard deviation,
number below detection, minimum value, percentiles 10, 25, 50, 75 and 90, maximum
value, and total deposition in g/m**2.)

Table 3
1980 Annual Unified Data Summary
(Example multi-site report format listing, for sites in New York, North Carolina
(Clinton Station), Pennsylvania (Penn State), and West Virginia: the site name,
period summarized (first and last dates), number of samples N, mean, and detection
limit in ng/l.)

Table 4
1980 Annual Unified Data Summary
(Example multi-site concentration report format listing, by state and site name:
the weighted mean, geometric mean, arithmetic mean, selected percentiles, and a
Kolmogorov-Smirnov statistic.)

When the concentration C(i) is in units of mg/l and
precipitation amount P(i) is in units of mm, then D(i) = P(i) * C(i) is an estimate
of the deposition (mg/m**2) associated with the sample. The volume-weighted mean is
calculated in a similar way, except the sample volume from the precipitation chemis-
try sampling device is used instead of the precipitation depth. The term precipita-
tion depth usually refers to the precipitation amount as measured by a colocated
standard gage. A more general interpretation is that it is the best estimate of the
precipitation amount that occurred during the sample period. Hence even though the
precipitation-weighted mean is easily defined, it is necessary to state how the
precipitation-weights are defined.
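The precipitation-weighted mean and the associated deposition estimate follow
directly from the definitions above. A minimal sketch, assuming depths in mm and
concentrations in mg/l for valid samples only:

```python
def precip_weighted_mean(depths_mm, conc_mg_per_l):
    """PWM = sum(P(i) * C(i)) / sum(P(i)), summed over valid samples."""
    num = sum(p * c for p, c in zip(depths_mm, conc_mg_per_l))
    return num / sum(depths_mm)

def total_deposition_mg_per_m2(depths_mm, conc_mg_per_l, total_precip_mm=None):
    """Sum of per-sample depositions D(i) = P(i) * C(i), in mg/m**2.

    If total_precip_mm is given (total precipitation including periods
    without valid samples), estimate deposition as PWM * total precipitation,
    which assumes concentration and precipitation amount are independent.
    """
    if total_precip_mm is None:
        return sum(p * c for p, c in zip(depths_mm, conc_mg_per_l))
    return precip_weighted_mean(depths_mm, conc_mg_per_l) * total_precip_mm

P = [10.0, 5.0, 25.0]  # mm, precipitation depths for valid samples
C = [2.0, 4.0, 1.0]    # mg/l, measured concentrations
print(precip_weighted_mean(P, C))        # 1.625
print(total_deposition_mg_per_m2(P, C))  # 65.0
```

Whether `P` holds gauge depths or depths back-calculated from sample volume is
exactly the choice of precipitation weights that the text says must be stated.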
For the ions generally considered to be conserved, the volume-weighted mean concen-
tration is conceptually the same as the single concentration that would be measured
if all samples had been combined into one sample and measured. This is not true for
non-conservative ions such as hydrogen. Similar statements apply to the precipita-
tion-weighted mean, with the additional assumption that the concentration measured
from the sample volume is representative of the concentration associated with the
precipitation depth estimate. Little is known about variance estimates for weighted
means. Miller (5) has proposed one estimate but it has not been generally applied.
Stensland and Bowersox (6) compared four different averaging techniques for their
effect on average pH maps in the United States. Their results show that indeed
there are significant differences between the four approaches for certain regions of
the United States. Additional research on which measure of average concentration is
appropriate under specific objectives and conditions is necessary.
Estimating total deposition during the summary period, e.g., annual sulfate deposi-
tion, is simple when valid sample results are available for all sample periods when
precipitation occurred. The deposition estimate is the sum of the depositions for
the sample periods with valid data. The only issue to be resolved in this case is
the selection of the precipitation amount estimate. No standard terminology exists
to distinguish between the options. One recommendation is to reserve the phrase
"total measured deposition" for the case when the sample volume is the basis for the
precipitation amount information. In this case total measured deposition refers to
the deposition that actually occurred in the sample bucket during the summary
period. The sample volume may not provide the "best" estimate of precipitation
amount, in which case it may be preferable to base the deposition estimate on the
"best" estimate.

When invalid data are present but precipitation amount estimates are available for
all sample periods, then total deposition can be estimated. One common procedure is
to multiply the precipitation-weighted mean by the total precipitation amount. This
procedure assumes sample concentration and precipitation amount are independent.
Not all precipitation at a site is measured during a summary period, due to opera-
tional problems. No standard procedure is available for estimating deposition that
occurred during periods with no precipitation coverage.
Procedures for estimating total deposition are available but no standard terminology
exists to distinguish between them. The assumptions necessary for some of the
procedures are not generally known or their validity well researched. A research
effort to address these general issues would be useful.
Annual and quarterly summaries of ion species concentration and deposition at a site
are routine data summaries calculated and reported. It is unusual for all precipi-
tation events to be sampled and result in valid samples. Data completeness measures
are necessary to quantify the amount of information in the summary. Recommended
data completeness measures are percent precipitation coverage length, percent total
precipitation, percent valid sample length, percent of samples with measured preci-
pitation that are valid, and percent collection efficiency.
Percent precipitation coverage length (%PCL) is defined as the percent of the sum-
mary period for which information on whether or not precipitation occurred is
available.
%PCL = 100 * (SPL - NDPP)/SPL
where SPL = number of days in the summary period
NDPP = number of days when it is not known if precipitation occurred.
If precipitation is known to have occurred during a particular sampling period but
no quantitative estimate of the amount is available then no knowledge of precipita-
tion is assumed. Total precipitation depth is the amount of precipitation occurring
during the period of precipitation coverage. This data completeness measure does not
include any consideration of the availability of a valid precipitation chemistry
sample.

The remaining data completeness measures are computed for each ion species or compo-
nent summary and may differ even though they are for the same summary period and
site. The cause of the difference is that determination of valid samples is comple-
ted for each ion species or component independently. If a network has a protocol
for prioritizing the ion species to be measured in the event of insufficient sample
volume to complete all measurements or if a network's validation procedure is ap-
plied to individual ion species, then the number of valid sulfate and nitrate sample
values may differ, affecting the data completeness measures.
Percent total precipitation (%TP) is the percent of total precipitation depth mea-
sured during the summary period that is associated with valid component samples.
%TP = 100 * TPVC/TP
where TP = total precipitation depth
TPVC = portion of total precipitation depth associated
with a valid sample component measurement.
Component is a generic term that refers to any ion species or other measurement made
on a wet deposition sample.
Percent valid sample length (%VSL) is the percent of the days in the summary period
that are associated with valid sample periods.
%VSL = 100 * (NDNP + NDVCMP)/SPL
where NDNP = number of days in sample periods during which
no precipitation occurs
NDVCMP = number of days in sample periods with valid
sample component measurement on measured
precipitation sample.
This measure is most useful for sites with weekly, monthly or 28-day sampling
periods.
%VSMP is the percent of wet deposition samples that have valid sample component
measurements.
%VSMP = 100 * NSVCMP/NSMP
where NSVCMP = number of wet deposition samples in summary
period that result in a valid sample component
measurement
NSMP = number of wet deposition samples in summary period.
%COL EFF, percent collection efficiency, is the ratio (converted to a percent) of
the total sample volume (converted to a depth) to the total precipitation depth,
where totals are for qualifying samples.
%COL EFF = 100 * EPDVC/ERGVC
where EPDVC = sum of depths predicted from sample volume for
qualifying samples
ERGVC = sum of standard gauge depths for qualifying samples
and qualifying samples are those a) that have both a colocated standard gauge and
sample volume measurement available and b) that have a valid sample component
measurement.
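The completeness measures defined above reduce to simple ratios over per-period
records. The sketch below is illustrative, not a network implementation; the
record keys (`days`, `precip`, `vol_depth`, `valid`) are assumed names.

```python
def pct(numer, denom):
    """Percent helper; returns 0.0 when the denominator is zero."""
    return 100.0 * numer / denom if denom else 0.0

def completeness(spl_days, ndpp_days, periods):
    """Compute (%PCL, %TP, %VSL, %VSMP, %COL EFF) from sample-period records.

    periods -- list of dicts with illustrative keys:
      days      : length of the sample period in days
      precip    : standard-gauge precipitation depth (0 if none fell)
      vol_depth : depth predicted from sample volume, or None
      valid     : True if the component measurement is valid
    """
    pcl = pct(spl_days - ndpp_days, spl_days)                    # %PCL
    tp = pct(sum(p["precip"] for p in periods if p["valid"]),
             sum(p["precip"] for p in periods))                  # %TP
    vsl = pct(sum(p["days"] for p in periods
                  if p["precip"] == 0 or p["valid"]), spl_days)  # %VSL
    wet = [p for p in periods if p["precip"] > 0]
    vsmp = pct(sum(1 for p in wet if p["valid"]), len(wet))      # %VSMP
    qual = [p for p in wet if p["valid"] and p["vol_depth"] is not None]
    col_eff = pct(sum(p["vol_depth"] for p in qual),
                  sum(p["precip"] for p in qual))                # %COL EFF
    return pcl, tp, vsl, vsmp, col_eff

periods = [
    {"days": 7, "precip": 10.0, "vol_depth": 9.5,  "valid": True},
    {"days": 7, "precip": 0.0,  "vol_depth": None, "valid": True},
    {"days": 7, "precip": 5.0,  "vol_depth": 4.0,  "valid": False},
    {"days": 7, "precip": 15.0, "vol_depth": 14.0, "valid": True},
]
print(completeness(28, 0, periods))  # approx (100.0, 83.3, 75.0, 66.7, 94.0)
```

Note that %TP, %VSL, %VSMP and %COL EFF would be recomputed per ion species,
since validity is determined for each component independently.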
The data completeness measures reflect the type of problems one encounters when a
data user is actually confronted with a "real" data set, and are motivated by the
following questions, all of which require an answer for the data completeness and
temporal representativeness to be properly assessed: For what portion of the summary
period do we have good knowledge of the amount of precipitation that fell (%PCL)?
What portion of the precipitation measured is associated with a valid chemical
analysis and sample result (%TP)? Further, what percent of the time during the
summary period, and what percent of the total number of actual sample periods with
measured precipitation, do these valid samples represent (%VSL and %VSMP)? And
finally, what was the collection efficiency, referred to a colocated standard gage,
over the summary period (%COL EFF)?
The data completeness measures have not been applied and evaluated for their ability
to adequately measure characteristics important in judging the "representativeness"
or "completeness" of a summary. Work is currently in progress to answer the ques-
tion. What is certain is that some set of data completeness measures is necessary
for evaluating the "completeness" of a summary. It should also be possible to
standardize the measures' definitions.
An effort is currently underway to assign data quality levels to annual and quarter-
ly statistical summaries of wet deposition data. The "quality" of a summary is a
function of the representativeness of the site and of the completeness of the data
on which the summary is based. A related, and very important, aspect of data quality
is whether the concentrations are representative of the chemistry in the precipita-
tion that actually occurred at the site. A framework for assigning data quality
levels is described. Whether it is possible to recommend standardized data quality
levels is an issue open for discussion.

One process for assigning site representativeness levels uses three levels and
centers around the concept of "regionally representative" (see Table 5). Regionally
unrepresentative, level 3, sites are identified using specific criteria applicable
to the study objectives. In this case these are sites with local influences (e.g.
dusty surroundings, significant emission sources within 40km) known or strongly
suspected to have a significant effect on seasonal or annual concentration or depo-
sition. On the other hand, regionally representative, level 1, sites do not suffer
from any of the interferences. For a large portion of existing sites, evaluation of
site representativeness is not conclusive and these sites are assigned to level 2.
They have failed one or more of the criteria; however, the severity of the local
influences is difficult to assess a priori. Whenever possible, comments of individ-
ual network operators, analyses of historical data, or site audits are relied on to
distinguish further between level 2a and 2b sites. In the former case, the local
interferences are judged to be small or insignificant and the sites are potentially
representative. Level 2b sites have more problems associated with them and are
judged to be potentially unrepresentative. In some cases sufficient information is
not available to make the above assessment and these sites are assigned as level 2.
This process is being used by the Unified Deposition Data Base committee in their
work to develop a well documented deposition data set that meteorological modelers
could use for evaluation of their long-range transport models.
Table 5
Site Representativeness Levels
1	Regionally representative
2a	Potentially representative
2b	Potentially unrepresentative
2	Potentially (un)representative
3	Regionally unrepresentative
A similar concept can be applied to obtain data completeness levels from the propo-
sed data completeness measures. Four levels of data completeness could be named:
excellent, good, fair and poor. The specific mapping of the data completeness
measures into the four levels is study objective specific. An excellent rating
level summary has the best information or the highest level of data completeness.
In effect, the data collection, analysis and reporting process for data in the
summary period came closest to meeting the planned program of data acquisition. The
least confidence is given to a summary with a poor rating, which accordingly is not
generally viewed as providing a representative summary for the time period. The
Unified Deposition Database Committee proposed a preliminary procedure for assigning
data completeness levels using five data completeness measures defined previously
(see Table 6).
Table 6
A Proposed Definition of Data Completeness Levels Using Five
Data Completeness Measures
(Threshold criteria on the five data completeness measures for each data complete-
ness level; see text.)
(1) Criteria apply only to NADP, MAP3S
(2) Criteria apply only to CANSAP, APN and APIOS
All criteria must be satisfied for the given data completeness level to be assigned
and the minimum achievable level is assigned. Level 4, poor, is assigned to all
sites not meeting the criteria for level 3, fair. The criteria are assumed to apply
to an annual summary. The criteria are being reevaluated by the committee at this
time. Hence the assignment is not recommended for use and must be viewed only as an
example of how data completeness levels could be defined.
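A level-assignment rule of the kind described ("all criteria must be satisfied,
and the minimum achievable level is assigned") can be sketched as follows. The
threshold values here are purely illustrative, not the committee's criteria.

```python
# Illustrative thresholds only -- NOT the committee's actual criteria.
# Each row: (level, name, minimum values for (%PCL, %TP, %VSL, %VSMP, %COL EFF)).
LEVELS = [
    (1, "excellent", (95, 90, 90, 90, 90)),
    (2, "good",      (90, 80, 80, 80, 80)),
    (3, "fair",      (75, 60, 60, 60, 60)),
]

def completeness_level(measures):
    """Assign the best level whose criteria are ALL satisfied.

    measures -- tuple of (%PCL, %TP, %VSL, %VSMP, %COL EFF).
    Falls through to level 4 ("poor") if level 3 is not met.
    """
    for level, name, mins in LEVELS:
        if all(m >= lo for m, lo in zip(measures, mins)):
            return level, name
    return 4, "poor"

print(completeness_level((100.0, 97.5, 98.4, 95.0, 92.0)))  # (1, 'excellent')
print(completeness_level((80.0, 55.0, 70.0, 65.0, 60.0)))   # (4, 'poor')
```

Because the mapping of measures into levels is study-objective specific, only the
assignment rule itself, not the thresholds, is a candidate for standardization.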
An overall data quality level may be constructed from the site representativeness
and data completeness levels. At this time additional research and experience are
required before the utility of data quality levels can be determined. The general
format presented appears to be useful in organizing information on the data quality
of a summary. Implementation is objective specific and standardization is unlikely
to be possible.

This work is funded as part of the National Acid Precipitation Assessment Program by
EPA under an Interagency Agreement with the U.S. Department of Energy (Contract DE-
AC06-76RL0 1980).
1.	J. H. Gibson. Workshop Proceedings: Data Management Needs for Atmospheric
Deposition. Palo Alto, CA: EPRI. WS-79-163. 1980.
2.	UAPSP. Proceedings: Advisory Workshop on Methods for Comparing Precipitation
Chemistry Data. Palo Alto, CA: EPRI. UAPSP 100. 1983.
3.	L. E. Topol, et al. Quality Assurance Manual for Precipitation Measurement
Systems. U.S. Environmental Protection Agency, Research Triangle Park, NC.
January 1985.
4.	C. R. Watson and A. R. Olsen. Acid Deposition System (ADS) for Statistical
Reporting: System Design and User's Code Manual. U.S. Environmental Protection
Agency, Research Triangle Park, NC. EPA-600/8-84-023. September 1984.
5.	J. M. Miller. "A Statistical Evaluation of the U.S. Precipitation Chemistry
Network." In Precipitation Scavenging (1974), ERDA CONF-741003, 639-659. 1977.
6.	G. J. Stensland and V. C. Bowersox. "A Comparison of Methods of Computing
Precipitation pH Averages." Paper 84-19.1, 77th Annual APCA Meeting, San Fran-
cisco, CA. 16 pp. 1984.

The working groups were key to accomplishing the workshop objectives. To focus the
working group discussions, each group was asked to address their topic by developing
a list of methods and procedures to be considered for standardization. They were
then asked to prioritize these procedures and identify any obstacles in the way of
standardization. These obstacles or research needs must be addressed before stan-
dardization can occur.
The Wet Deposition Siting Criteria working group chaired by John Robertson addressed
siting in relation to network objectives. They discussed siting philosophy, siting
criteria, made recommendations, and identified specific research needs.
The Sampling and Field Monitoring working group chaired by Leo Topol addressed which
technologies or methods can be applied to satisfy data collection requirements.
Sampling equipment, methods, and times were discussed. Recommendations were made
for research needed prior to standardization.
Jane Rothert chaired the Chemical Analytical Methods working group, which discussed
the analytical methods already in the process of being standardized. They made
research recommendations in specific areas that are not currently in the process
of being standardized.
The Data Handling, Archiving, and Accessing working group chaired by Anthony R.
Olsen discussed a general structure for handling data from wet deposition monitoring
networks. Important issues requiring further research were also identified.
Bill Mitchell chaired the Quality Auditing working group, which defined quality
auditing issues and prioritized a set of standardized quality auditing procedures
for development.

The working group for Development of Standardized Operating Procedures led by Sally
Campbell addressed the overall issues and obtained input from each of the other
working groups in developing their output.
Following the presentations of the working group results, Peter Mueller led a plen-
ary discussion which addressed the progress of the workshop and identified priori-
ties that may not have been specifically addressed in each working group. The
mechanism for developing standards or standard practices was discussed in detail.

John K. Robertson*
The group used the outline of Robertson's paper to look at issues which could be
standardized across sites and networks. These issues are summarized in Table 1.
There are many areas in which standardization can be attempted. There are some
factors which depend on the objectives of the sampling program in terms of area
coverage, accuracy of data, density of the network and use of the data which we
believe are network specific and therefore not standardizable. Even though we do
not see total standardization as possible, in those areas where it is possible it
is worth doing, so that all networks achieve some standardization among them.
The most critical standardization need, as indicated in Table 1, lies in the proxi-
mity of sources of contamination in the immediate surroundings of a site. This
coupled with keeping these sources of contamination stable (no change) will pay the
biggest dividend in data quality and is where standardization can do the most good.
The second most important area for standardization is how close a site should be to
stationary point or area sources. We felt that such standardization must take
cognizance of the number of sources and their distance. We do not feel that one
general standard will apply to all analytes; rather, it will vary analyte to analyte.
With this type of standard the siting criteria of a network would be governed by the
analyte with the most stringent requirements. Such a standard would also allow
someone who is not sampling all analytes to use less restrictive siting measures.
The working group also felt that any set of rules is empty unless there is periodic
review (audit) for adherence to criteria. Adherence to siting standards should be
part of each network's data record available to data users. The group felt strongly
*Military Academy, West Point, NY.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

that siting information should be available as part of the data base of chemical
data. Documentation of distance to sources, local siting conditions, site changes
(new instruments placed, movement of equipment, equipment outages, new operators,
etc.) should be available to data users. The working group feels strongly that data
for sites not meeting criteria be flagged and that each network should document
changes that bring sites into conformance with network standards.
There are a number of parameters that fall into our third priority group: instru-
ment installation, meteorological factors, and topography and terrain. Agreement
ought to be possible in these areas. Again some monitoring data can be used from a
siting perspective to glean insight.
The one area in which we saw no real hope for standardization was practical consid-
erations. Many are site and network dependent. We did see hope for some standardi-
zation on accessibility, but the standards will vary between an event network and a
weekly network.
The working group recommends that whatever body attempts such standardization consi-
der the approach of Topol, et al. (1) in terms of criteria for different kinds of
networks (i.e. the group felt more than one standard is needed for networks of diff-
erent scope). The group further recommends that when standardizing, the group doing
the standardization make its siting rules as objective and detailed as possible.
One of the great failings in current siting rules is the generality of the rules,
which leaves their application to the subjective interpretation of the applier
(i.e., monitoring would be better served by a series of yes-no decisions to be made
in siting rather than by subjective application of nebulous rules).
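The group's preference for objective yes-no siting decisions over nebulous rules
could take the form of an explicit checklist. Every question and distance below is
hypothetical, for illustration only; actual criteria would come from the
standardizing body.

```python
# Hypothetical yes/no siting checklist -- illustrative questions and
# distances only, not an actual network standard.
# Each entry: (question, required answer).
CHECKLIST = [
    ("paved_road_within_100m",     False),  # answer must be "no"
    ("tilled_field_within_500m",   False),
    ("point_source_within_40km",   False),
    ("collector_orifice_above_1m", True),   # answer must be "yes"
    ("clear_45deg_cone_overhead",  True),
]

def site_passes(answers):
    """Return (passes, failures): each question is a yes/no decision
    compared against the required answer -- no subjective weighting."""
    failures = [q for q, required in CHECKLIST
                if answers.get(q) is not required]
    return (not failures), failures

answers = {
    "paved_road_within_100m": False,
    "tilled_field_within_500m": True,   # fails this criterion
    "point_source_within_40km": False,
    "collector_orifice_above_1m": True,
    "clear_45deg_cone_overhead": True,
}
ok, failed = site_passes(answers)
print(ok, failed)  # False ['tilled_field_within_500m']
```

Recording the individual failures, rather than only a pass/fail verdict, supports
the group's recommendation that non-conforming sites be flagged in the data record.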
Local Source Influence on Rain Water Samples
Determine how rain water composition at a particular sampling site deviates from the
regional level due to local sources. Each analyte should be treated separately in
relation to the source. Research would primarily be a review of the literature to
find empirical data about contributions from splash (from trees, buildings, etc.),
resuspension (from trees, farm fields, roads, etc.), and gaseous emissions (e.g.
NH3 from feedlots and fertilization). Also envisioned would be estimates by simplified
modeling of typical and maximum contributions from local sources with known or
estimated emission rates (e.g. power plant or wood burning, etc.) combined with wet
wind rose data. From such a study we envision data supported siting rules to re-
place today's empirical, subjective rules.

Table 1
Standardization of Siting Issues
(Standardizability for an event network versus a weekly network)

I. Site Stability
1. Site Ownership
2. Land Use at the Site
3. Land Use Around Site
-- Depends on the long-term goals

II. Instrument Installation Parameters
(orientation, angle to overhead objects, platforms, etc.)

III. Meteorological Factors
1. Local Perturbations to the Wind Field
2. Type of Precip
3. Amount of Precip
-- See research items

IV. Closeness to Sources of Contamination
1. Stationary Point
2. Area Sources
3. Linear Sources
4. Local Sources (houses, trees, etc.)
-- Must consider number of sources and be analyte specific;
depends on network objectives

V. Topography and Terrain
1. Topography around Site
2. Terrain at Site
3. Altitude at Site
-- Depends on network objectives

VI. Practical Considerations
1. Accessibility -- Yes, within event sites; yes, within weekly sites
2. Budget -- Not within realm of control
3. Reliability of Power -- Depends on sampling design and equipment
4. Site Security -- Depends on network objectives
5. Qualified Personnel -- Depends on network objectives

VII. Site Audits
1. Periodic Review
2. Audit Reports
3. Corrective Action and Data Flagging

Validity of Siting Criteria
This is a reiteration of the recommendation of the UAPSP sponsored workshop at
Rensselaerville, N.Y. (2), which called for a quantification of the effect of local
emissions on precipitation chemistry by doing carefully controlled experiments to
determine the amount of contamination from roads, parking lots, railroads, tilled
land, airports, feedlots, sewage plants and lagoons, power plants, different types
of industry, houses, swamps and marshes, etc. This research coupled with our first
recommendation will provide the data needed to determine the validity of our siting
criteria.
An Intercomparison of Site Descriptor Profiles
An intercomparison will evaluate how each network sites its collectors away from
potential sources of contamination. This, coupled with statistical analysis of each
network's precipitation chemistry data, may allow a determination of whether system-
atic differences among networks are due to consistent siting differences. Again,
this idea was proposed at Rensselaerville (2) but has not drawn financial support.
Criteria for Determining Site Identity
Criteria should be established to determine when a site relocation has a quantifi-
able impact on the data. Or put another way, how far can you move a site before it
becomes a new site (or conversely gives the same data as the old site and therefore
can be considered the equivalent of the old site)? Some data from dense networks
such as the EPRI ILWAS study (3J and the METROMAX study may give enough information
from which to draw conclusions. If they do not, then field experiments to ascertain
this information is warranted. All networks experience minor equipment movements.
Data is needed to allow the networks and data users to quantify the effect of these
moves on data quality. Until such data exists, all networks should document equip-
ment moves, site moves, etc. so that when information on the effect of moves is
available, then the effect on each site's data can be sorted out.
Windscreen Guidance/Criteria
Criteria should be established to determine the need for, effectiveness of, and
adverse effects of an Alter windscreen and other types of windscreens on the rain
gauge and collector at all sites in all precipitation conditions. Issues to be
addressed include collection efficiency, chemistry
effects, and installation logistics. This would be a field study that should be
replicated in several climatic and wind regimes.

Snow Roof Guidance/Criteria
Criteria should be established to determine the need for, effectiveness of, and
possible adverse effects of a snow roof on the typical network collector. Issues to
be addressed include whether the snow roof should be on all network collectors, not
just those in the snow belt, whether heating the roof is desirable, etc.
1.	L.E. Topol, J. Flanagan, J. Chen, M. Lev-on, R. Schwall, and L.S. Shepherd.
Quality Assurance Manual for Precipitation Measurement Systems. Research
Triangle Park, N.C., Environmental Protection Agency, EPA-600/4-82-042A&B.
2.	D.H. Pack and A.A. Shepherd, Eds. Proceedings: Advisory Workshop on Methods
for Comparing Precipitation Chemistry Data. Utility Acid Precipitation Study
Program, UAPSP-100, 1983.
3.	A.H. Johannes, E.R. Altwicker, and N.L. Clesceri. Characterization of Acid
Precipitation in the Adirondack Region. EPRI EA-1826, Project 1155-1, Final
report by Rensselaer Polytechnic Institute, Troy, N.Y., for Electric Power
Research Institute, Palo Alto, CA, 1981.

Leo Topol*
The problem of collecting samples representative of the site area in both composi-
tion and quantity was addressed at some length. In order to compare the sample
quantity collected by a sampler with the correct value, the rain gauge is used as a
reference. For rain it appears that any rain gauge that is acceptable to the
meteorological societies can be used. The advantage of using the Belfort rain gauge
is that it can be interfaced by Aerochem Metrics to its HASL-type sampler, and an
event pen marker then shows the lid movements. This helps indicate if there is a
sampler malfunction. Thus, for rain quantity the rain gauge appears to be amenable
or ready for standardization and can be accepted as a standard today.
For snow, however, the rain gauge has a problem. Studies in Canada have shown that
the rain gauge without a good wind shield has capture losses which increase with
wind velocity. These losses are due to both blow by and blow out of snow. The
Canadians believe that the Nipher snow gauge is the best state-of-the-art gauge for
snow today. However, it can result in positive errors with rain since its wide lip
can cause extra splash into the weighing bucket of the snow gauge. The other disad-
vantage of the Nipher gauge is that, at present, in contrast to weighing bucket rain
gauges it is not hooked up to a recorder.
As for the sampler, the HASL type is commonly used in the U.S. at present. Problems
similar to those with the rain gauge exist for snow, and there is one additional
problem. Dry snow may not remain on the sampler sensor long enough to melt and
activate the lid covering the sampler bucket. Also for rain there are two potential
problem areas. First, splash off the bucket lid of the sampler can cause sample
contamination as well as erroneously high sample capture. Second, the shape of the
sampler can cause aerodynamic problems, e.g. turbulence, which can affect sample
capture. However, based on a study performed by Battelle with its original one-
*Combustion Engineering, Inc., Newbury Park CA.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

bucket MAP3S collector and the HASL-type sampler, it was concluded that there were
no significant differences in sample capture between the two types of collectors.
The MAP3S collector is similar in design to the symmetrically shaped, single-bucket
European collectors. Since the HASL-type collector has a collection efficiency of
about 100% for rain, it appears that the HASL collector does not have a serious
problem in regard to rain sampling and can be used as a standard or de facto stan-
dard. For snow, it is not a good collector, although it may be as good as any
available today.
In view of the above, the following studies are suggested. Design changes in the
sensor and sampler should be tested to improve the collection efficiency of the
sampler for snow. The importance of small droplets or snowflakes, which are most
likely to be lost under windy conditions, to the chemical composition and quantity
of a sample should be determined as a function of hydrometeor size and wind velocity.
A direct comparison of performance between the HASL collector and at least the three
present European collectors (IMI, Slanina, WMO) should be made to ascertain the
relative merits of each and the optimum collector at present.
Collection conditions to maintain sample integrity are discussed next. The basic
question here is, "Is the rain sample analyzed representative of the composition of
the rainfall that occurred at the site?" For ambient type (no refrigeration) sam-
pling collectors such as those used in most networks at present, there is some
evidence that a sample collected on a weekly schedule frequently is significantly
different in composition from a week's composite of daily samples. The stability of
a sample is expected to depend on storage time, temperature, composition, bacteria,
etc., and thus will vary with the site and season. If necessary, sample stability
can be optimized by use of a refrigerating unit in the sampler, filtration to remove
particles and bacteria, and irradiation or preservatives (biocides) to kill bacter-
ia. These methods should be used in the field and are listed in decreasing order of
usefulness. The problem with refrigeration is that it is costly and requires an AC
power source. Filtration requires a vacuum pump in the field and any additional
sample handling increases the chance of contamination. Also, changes in composition
can occur during filtration by the increased exposure of the solution to atmospheric
gases as well as to solids on the filter. The effectiveness of UV irradiation on
the bacteria in question is not known and gamma irradiation could not be performed
at the site. As for chemical preservatives, the problem at present is that they
usually affect the analysis of some of the important ions. However, a chemical may
be found that would be beneficial and not interfere with any of the analyses.

Another possibility is the use of ion exchange where less stable ions are exchanged
for more stable ions. In order to make adequate decisions here, some studies must
be performed.
In addition to the collection stage, the sample can also degrade during storage and
shipment from the field to the laboratory. To minimize degradation the sample
should be stored and shipped cold. Also the sample should be shipped to the labora-
tory as quickly as possible after collection. This holds especially for event or
daily samples or samples that are kept cold in the sampler. For samples that remain
in the sampler for several days at ambient temperature, the tendency for degradation
is enhanced, and shipping under cold conditions is probably not worthwhile. Another
means of maintaining sample integrity is to minimize the headspace in the sample
container to decrease the volume of gas that can exchange between air and solution.
Finally, if the sample is not filtered, it would be beneficial to decant the sample
before it is shipped (and preferably immediately after collection) to remove
sediment and decrease reaction of particles with solution. For example, basic soil
particles will neutralize acid.
The working group has identified the lack of a standard reference instrument for
monitoring rain water as a serious problem in comparing results from different
samplers and procedures. Although no attempt was made to enumerate all characteris-
tics of the ideal sampler, the group consensus was that development of a standard
reference sampler should be encouraged. Such samplers may well prove too expensive
for routine use, but measurement in real-time (or near real time, i.e. every small
increment of rainfall) of pH, conductivity and as many ions and other species as
practical would provide a basis for determining the sample/analysis integrity from
other samplers/procedures.
Numerical ratings in order of importance were not placed on the different aspects of
sample collection and transfer activities. Essentially, all steps are of equal
importance and are of high priority since it does not really matter where the sample
degrades -- whether it is in sampling, handling, shipping, etc. Once a sample has
degraded it becomes practically worthless, if not completely worthless.
QUESTION: Do you mean the samples should be clarified or decanted but not filtered?
ANSWER: Yes, if the samples are kept at 4°C.

Jane Rothert*
The working group discussed the analytical methods currently used for the determina-
tion of the major ion concentrations in precipitation samples and the need to stand-
ardize these methods. Table 1 lists several of the major precipitation networks/
laboratories and the methods used for the analysis of the major analytes. It shows
that the majority of precipitation chemistry laboratories use relatively few methods
for analysis. The Illinois State Water Survey (ISWS) is preparing a two volume
precipitation chemistry methods manual for the United States Environmental Protec-
tion Agency (USEPA). Volume 1, now in draft form, contains analytical methods for
precipitation samples. Table 2 lists the analytical methods already being standard-
ized. Volume 2 will contain sample collection, handling, processing, and storage
procedures as well as quality assurance procedures. The working group recommends
that further standardization of analytical methods be delayed until these manuals
are published.
Standardized quality assurance procedures for precipitation sample collecting,
handling, and analysis were recommended. In addition to the quality assurance
section of the second volume of the USEPA manual mentioned above, two other quality
assurance manuals were discussed. A manual from the Quality Assurance Systems
Laboratory, Office of Research and Development, U.S. Environmental Protection
Agency, Research Triangle Park, North Carolina, has been available for several
years; the latest revision is January 1985. The American Society for Testing and
Materials (ASTM) is preparing a
guide entitled, Quality Assurance for Precipitation Measurements. A partial list of
quality assurance manuals, plans, papers, and reports was compiled and is enclosed.
*Illinois State Water Survey, Champaign IL.

Table 1
Wet Precipitation Chemistry Networks and the Methods of Analysis Used Per Parameter
[Table body not legible in this copy; it lists the analysis method per analyte
(Ca2+, Mg2+, pH, etc.) for laboratories including Rockwell International, BAPMoN,
Global Geo., and Min. of Ont.]

AWC	Automated Wet Chemistry
AA	Atomic Absorption
IC	Ion Chromatography (suppressed)
FES	Flame Emission Spectrophotometer
ISE	Ion Selective Electrode

Table 2
Analytical Methods Being Standardized
[Table body not legible in this copy; analytes, including trace metals, are listed
with the methods below.]

ISE - Ion Selective Electrode
AWC - Automated Wet Chemistry
IC - Ion Chromatography
AA - Atomic Absorption (flame)
ICP - Inductively Coupled Plasma
1.	All precipitation networks and precipitation chemistry laboratories need to use
the same definition of detection limit. A modified version of the American
Chemical Society definition was chosen as the standard method.
a.	Limit of Detection (LOD)
The Limit of Detection is to be defined as three (3) times the standard
deviation of a minimum of five (5) replicate reagent blank readings.
b.	Limit of Quantitation (LOQ)
The Limit of Quantitation is to be defined as ten (10) times the standard
deviation of a minimum of five (5) replicate reagent blank readings.
2.	The lower limit of the working range should be greater than or equal to the
Limit of Quantitation (LOQ).
3.	All samples whose concentrations are below the lowest working standard or above
the highest working standard must be flagged.
4.	Changes in the methods of analysis occurring during the life of a network
should be documented with accuracy, precision, and bias data.
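As an illustration of the LOD/LOQ definitions in item 1 and the flagging rule in item 3, the following sketch computes both limits from replicate reagent-blank readings and flags out-of-range results. The function names and example readings are hypothetical, not taken from the ISWS manual.

```python
# Sketch only: LOD/LOQ from replicate reagent-blank readings, plus the
# working-range flag of recommendation 3.  Names and data are hypothetical.
from statistics import stdev

def lod_loq(blank_readings):
    """LOD = 3 * s and LOQ = 10 * s, where s is the standard deviation of
    at least five replicate reagent-blank readings."""
    if len(blank_readings) < 5:
        raise ValueError("need at least five replicate blank readings")
    s = stdev(blank_readings)
    return 3.0 * s, 10.0 * s

def flag_result(value, lowest_standard, highest_standard):
    """Flag samples outside the range spanned by the working standards."""
    if value < lowest_standard:
        return "BELOW_WORKING_RANGE"
    if value > highest_standard:
        return "ABOVE_WORKING_RANGE"
    return "OK"

blanks_mg_per_l = [0.012, 0.015, 0.011, 0.014, 0.013]  # hypothetical blanks
lod, loq = lod_loq(blanks_mg_per_l)
```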
1. Discrepancies between laboratory and field pH should be investigated and
resolved.

2. A comprehensive bibliography of precipitation chemistry research projects and
network reports should be compiled.
3.	Sample stability during collection, shipping, analysis, and storage should be
investigated and published.
4.	Research on the following precipitation constituents is encouraged:
a.	peroxides
b.	organic acids
c.	nutrients
d.	industrial chemicals
e.	agricultural chemicals
5.	Laboratories that use ion chromatography for anion analysis are encouraged to
investigate possible SO2 interference with the analysis of Br- and NO3- and to
eliminate the interference when present, if possible.
The standardization of analytical methods and quality assurance procedures for
precipitation samples is already in progress. Until the current work in progress is
completed, no further standardization is recommended. However, it is recommended
that the definition of LOD and LOQ be adopted by all precipitation chemistry net-
works and laboratories, a complete bibliography of "acid rain" research projects be
compiled and made available, and research on several additional constituents of
precipitation be continued.
1.	S. Campbell and H. Scott. "Quality Assurance in Acid Precipitation Measure-
ments," 1982.
2.	Environmental Science and Engineering, Inc. Quality Assurance Plan for the
Florida Acid Deposition Study, 1981.
3.	H. Feely. Quality Assurance Plan: EML Participation in the June 1982 Dry
Deposition Intercomparison, Environmental Measurements Laboratory, U.S.
Department of Energy, 1982.
4.	L. C. Friedman and D. E. Erdmann. Techniques of Water-Resources Investigations
of the United States Geological Survey: Quality Assurance Practices for the
Chemical and Biological Analyses of Water and Fluvial Sediments, Book 5,
Chapter A6, 1982.
5.	National Atmospheric Deposition Program Quality Assurance Steering Committee,
The NADP Quality Assurance Plan, 1984.
6.	Pacific Northwest Laboratory. The MAP3S/RAINE Precipitation Chemistry Network:
Quality Control, PNL-3612, for EPA, 1980.
7.	M. E. Peden. "Sampling, Analytical, and Quality Assurance Protocols for the
National Atmospheric Deposition Program," Sampling and Analysis of Rain, ASTM
STP 823, S. A. Campbell, Ed., 1983, pp. 72-83.

8.	J. Rothert and M. T. Dana. The MAP3S Precipitation Chemistry Network: Seventh
Periodic Summary Report (1983), Pacific Northwest Laboratory, 1984.
9.	L. E. Topol, et al. Quality Assurance Manual for Precipitation Measurement
Systems, Rockwell International for EPA, 1981.
10.	WMO Operational Manual for Sampling and Analysis Techniques for Chemical Con-
stituents in Air and Precipitation, World Meteorological Organization Pub. No.
299 (1974).
11.	Quality Assurance Handbook for Air Pollution Measurement Systems. Vol. II
- Ambient Air Specific Methods, EPA-600/4-77-027a, Research Triangle Park,
North Carolina (1977).
12.	Guide to Meteorological Instrument and Observing Practices, World Meteorologi-
cal Organization Pub. No. 8, TP 8 (1971).
13.	Quality Assurance Handbook for Air Pollution Measurement Systems, Vol. I -
Principles, EPA-600/9-76-005. Research Triangle Park, North Carolina (1976).
14.	Quality Assurance Handbook for Air Pollution Measurement Systems, Vol. V -
Manual for Precipitation Measurement Systems, Part II - Operation and
Maintenance Manual. U.S. EPA, RTP, NC, EPA-600/4-82-042b (Jan. 1981).
15.	Acidic Precipitation in Ontario Study, Technical and Operating Manual, APIOS
Deposition Monitoring Program, W.S. Bardswick, Ed. Ontario Ministry of the
Environment. Toronto, Ontario, Canada (April 1983).
16.	G. Marinenko and W.F. Koch. "A Critical Review of Measurement Practices for
the Determination of pH and Acidity of Atmospheric Precipitation," NBSIR 84-2866.
17.	M. E. Peden and L. M. Skowron. "Ionic Stability of Precipitation Samples,"
Atmos. Environ. 12, 2343 (1978).

Anthony R. Olsen*
Data handling, archiving and accessing are components of the data management func-
tion associated with the operation of a wet deposition network. The working group
discussions were undertaken with the understanding that a data management function
for a wet deposition network exists and that the essential structure of that func-
tion is well understood. The working group believed the function was described in
some detail in several available reports. The working group was familiar with
several reports by UAPSP (1), EPA (2), and EPRI (3).
In addition to the above reports, documentation available from existing networks
(e.g., NADP/NTN, CANSAP, APIOS, UAPSP, MAP3S, APN) contains descriptions of aspects
of the data management function for each network. Time constraints precluded the
working group from synthesizing all the material contained in the above reports into
a single comprehensive report on the data management function. It is recommended
that a comprehensive report devoted to the data management function be prepared.
Data management function refers to the entire process of data management. The
function includes the development and administration of a system that allows the
collection, transmission, storage and reporting of information generated in a wet
deposition monitoring program. An integral part is the preparation and availability
of documentation of all aspects of network operations.
The working group organized its discussions around five general topics:
•	Network documentation
•	Network computerized data base
•	Data screening and validation of sample data
•	Network QC and QA sample analysis data
•	Data reduction and reporting
*Battelle Northwest Laboratories, Richland WA.

The topics begin with the initial design of the network, include the collection and
analysis of wet deposition samples and continue on to the summarization and repor-
ting of the results.
The UAPSP workshop proceedings contain a description of a data management system
model (see Figure 1). The model assumes a monitoring system encompasses four func-
tions: program management, field operations, laboratory operations, and data man-
agement. Program management (or network coordination office) oversees the field,
laboratory and data management functions. Field operation management includes
routine interaction with field sites; receiving, recording and checking field samp-
les and supporting information; quality control checks; and dissemination of samples
and data to the laboratory and data management office. The data management office
performs additional quality control checks, archives data in a data base, dissemina-
tes data and compiles data summaries. The working group accepted the general model
proposed and considered the material to be a reasonable base on which to start the
recommended data management report.
A summary of the working group discussions follows. No attempt was made to include
all topics associated with data management of wet deposition data.
The working group concluded that the operation of wet deposition networks had reach-
ed a high enough state of development that a minimum set of information to be docu-
mented could be standardized. The structure of the documentation would have to be
sufficiently flexible to allow for additional information and for network opera-
tional differences and sizes. The working group considered formal documentation in
the areas listed in Table 1 to be essential. One or more documents could be issued
as a combined single document.
In addition to the formal documents other necessary documentation includes log books
for site operations, field operation management, QC, laboratory operations and data
management. A variety of forms are required to record and transmit field and
laboratory data and network management information. Example forms are described in
the UAPSP workshop proceedings (1).
The working group considered what information should be made readily available for
outside data users. Generally, the working group believed that the information
would either be computerized (preferred) or be in a form amenable to computerization

Figure 1. Network Operations Block Diagram (1)

in order for data users to use it effectively. In some instances, e.g. maps and
photos of sites, the information would be available only in formal documents.
Site location and characterization information is required to be sufficiently de-
tailed to enable a potential data user to determine if the site is suitable for use
in their study. A partial list of desirable information is given in Figure 2. Compila-
tion of a comprehensive list requires additional input from researchers knowledge-
able on siting criteria and potential data users. An important feature recommended
is the inclusion of a data item, called site revision/change number, to track chan-
ges that occur at the site. The revision must also identify the time the change
occurred. This allows the identification of samples with specific time periods of
constant operating conditions.
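The site revision/change number can be sketched as a small data structure; every name and field below is illustrative, not any network's actual schema.

```python
# Illustrative only: a site revision/change record that ties each sample
# to a period of constant operating conditions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SiteRevision:
    site_id: str
    revision: int             # site revision/change number
    start_date: date          # when this configuration took effect
    end_date: Optional[date]  # None while the revision is current
    narrative: str            # description of what changed

def revision_for_sample(revisions, site_id, sample_date):
    """Return the revision number in effect when the sample was collected."""
    for r in revisions:
        if (r.site_id == site_id and r.start_date <= sample_date
                and (r.end_date is None or sample_date <= r.end_date)):
            return r.revision
    return None
```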
Precipitation chemistry collection and field sampling protocol information is con-
tained in formal documentation of network operations. In Figure 3 primary informa-
tion that is readily computerized and considered useful in automated data analysis
efforts is given. Figure 4 contains similar information for precipitation standard
gauge and other meteorological instrumentation at the site. The working group
emphasized the advantages of knowing whether colocated samplers (even if operated by
another network) were present.
Field and laboratory sampling and analysis protocols are important to know. In the
context of the computerized data base, knowing what field measurements and labora-
tory analyte measurements are planned is important. Explicit definitions of the
analytes measured, including the laboratory analysis method and instrumentation, are
necessary. The analysis hierarchy for analytes must be identified so that when
result values are missing for a specific analyte it is possible to discern why they
are missing.
If changes in any of the information described above occur, then a record of the
change should be made. This may be accomplished through the site revision number and
accompanied by a narrative describing the change.
Individual sampling period information should include pertinent characteristics
associated with each sample collected in the field. Sample identification codes,
sample condition, field and laboratory analyte results, and data screening and
validation notes and codes are necessary items for inclusion in the data base. The
working group emphasized the importance of having an explicit accounting for all

Table 1
Formal Documentation for Wet Deposition Monitoring Network Operation

Network Overview
    Program plan of study: goals of network, design criteria, overview of
    individual operation components. Describes network to outside users.

Siting Manual
    Detailed procedures of siting, including protocol for siting, equipment
    requirements, etc.

Site Description
    Detailed description of each site, including photographs and maps.
    Documentation of changes over time. A shorter summary report may also be
    prepared for general distribution.

Field Operation and Maintenance Manual (Once)
    Describes activities of site operators and field operation management
    personnel. Detailed procedures for sample collection, field analysis,
    equipment maintenance, QC activities, data forms, etc.

Chemical Laboratory Operation, Analysis Procedures Manual(s) (Once)
    Describes activities of laboratory personnel. Overview of schedules and
    responsibility. Detailed instructions for sample handling, analysis, and
    reporting. QC activities.

Archive Data Management Manuals (Once)
    Describes responsibilities, procedures and activities. QC activities.
    Software documents, listings, data formats. Data base description.
    Coding procedures.

Quality Control Procedure Manual (Once)
    Prepared for general data users outside network. General overview of
    procedures, QC objectives, responsibility for QC, data validation
    procedures, handling of missing and discarded data.

Quality Control Report (Annually)
    Summary report of QC activities during year. Documentation of data
    quality including precision and accuracy summary data.

Quality Assurance Plan (Once)
    Prepared for internal use and general distribution. Describes activities
    conducted with outside organizations to assure quality of reported data.

Quality Assurance Report (Annually)
    Summary report of QA activities during year. Documentation of quality of
    data including accuracy and precision summary data.

Site identification code: unique
Site name
FIPS state and county codes
Site revision/change number
start date, end date
Latitude (D.M.S.): -South and +North
Longitude (D.M.S.): -West and +East
Elevation of site above sea level
Time zone: standard time from GMT
UTM coordinates
Sampling purpose: regional, background, urban, source-oriented
Site descriptions (see siting criteria for a more extensive list):
direction and distance from sources
land use category
topography; photos
site operated by
site funded by
climatology data
local roughness/canopy heights
Figure 2. Site Information
Collector model
Collector functional description
Collector cross-sectional area
Height to bucket top
Platform: no, yes-open, yes-enclosed
Intended deposition collection: wet only, bulk
Protocol sampling period: daily, weekly, monthly,
sequential (volume or time controlled)
Sample composited at site versus cumulative collection
Chemical analysis laboratory used
Figure 3. Precipitation Chemistry Collection Characteristics

Presence of precipitation gauge at site: yes, no
Standard rain gauge model
Snow gauge description (if used)
Height of gauge
Distance and direction to precipitation chemistry collector
Frequency of reporting
Minimum detection limit
Precipitation collection protocol description
Presence of other meteorological instrumentation at site
including appropriate descriptions
Use of procedures for estimating precipitation if no
on site precipitation measurement is available
Presence of colocated precipitation chemistry collectors
Figure 4. Precipitation Standard Gauge and Other Instrumentation
at Site
time periods at the site. It was recognized that it is not sufficient to have the
data base include only sample periods associated with valid sample results. Know-
ledge of precipitation occurrence is required on a continuous basis. If no know-
ledge of precipitation is available during a specific period, then a record explain-
ing this must be entered into the computerized data base. Specific items discussed
by the working group for inclusion in the data base are in Figure 5.
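The continuous-accounting requirement can be sketched as follows; the occurrence codes mirror the categories in Figure 5, while the record and function names are invented for illustration.

```python
# Sketch only: every sampling period gets a record, even when no valid
# sample exists, and holes in the site's time line are detectable.
from dataclasses import dataclass

PRECIP_OCCURRENCE = {"NO", "YES_AMOUNT_KNOWN", "YES_AMOUNT_UNKNOWN", "DONT_KNOW"}

@dataclass
class SamplePeriod:
    site_id: str
    period_start: str        # ISO date, e.g. "1984-01-01"
    period_stop: str
    precip_occurrence: str   # one of PRECIP_OCCURRENCE
    note: str = ""           # e.g. "sampler inoperative"

    def __post_init__(self):
        if self.precip_occurrence not in PRECIP_OCCURRENCE:
            raise ValueError(f"invalid occurrence code: {self.precip_occurrence}")

def unexplained_gaps(periods):
    """Return (stop, next_start) pairs where consecutive records leave an
    unexplained hole in a site's time line."""
    ordered = sorted(periods, key=lambda p: p.period_start)
    return [(a.period_stop, b.period_start)
            for a, b in zip(ordered, ordered[1:])
            if a.period_stop < b.period_start]
```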
Current wet deposition networks have defined a number of note codes that accompany
sample results and describe the condition of the sample as collected in the field,
unusual occurrences in the field, and departures from protocol. The
working group considered this information essential to be part of the computerized
data base. The combining of data from multiple networks usually requires informa-
tion at this level of detail to obtain comparable data. The group also believed
that a minimum set of condition codes could be developed and standardized for all
networks. Flexibility for additional notes unique to a network or to account for
unusual conditions would still be required. Additional work is required to implement
the recommendation, and the group recommended that such an effort be made. A
study group of researchers knowledgeable about the sample condition coding schemes
currently being used by major networks could form the core for the effort.
A subject that elicited considerable discussion by the working group was the report-
ing of analyte results. The consensus of the group was that all analyte results be

Sample identification
date: year, month, day, hour
Unique laboratory sample number
Sample period dates
sample period start
sample period stop
event start
event stop
arrival at lab: lab processing starts
Precipitation occurrence: no, yes-amount known, yes-amount unknown, don't know
Precipitation type: rain, snow, mixed, unknown
Deposition type actual: wet, bulk
Actual sample period length: number of days
Sample sequestered from general use: yes/no
Precipitation information
number of events in sample period
amount: mm of depth
flags: trace, BDL, bias corrected
precision: value and type code
Sample volume information
volume: ml
flags and note codes
precision: value and type code
Sample condition/collection codes
field condition codes
laboratory analysis codes and notes
Analyte results: repeated for all analytes measured
note codes: data present and valid
data present and suspect
data absent with reason
result value: actual value or detection limit
result flag: detection limit code
result precision: value and type code
result accuracy if available
Figure 5. Individual Sampling Period Data Items
entered in the network archive data base. This includes invalid as well as valid
sample results. Obvious errors, such as transcription errors, are assumed to be
corrected
before archiving. The group also strongly supported the explicit flagging of the
results so that results judged to be invalid or suspect by the network are well
identified. The decision on whether to report invalid or suspect data in routine
data reports or data tapes was thought to be in the domain of network program
management. It was recommended by the working group, however, that all sample
results be transferred to the multi-network data base, acid deposition system (ADS)

for statistical reporting operated by Pacific Northwest Laboratory for the U.S.
Environmental Protection Agency. The network has the option of having ADS restrict
access to invalidated analyte results to correspond with network protocol.
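A sketch of that archive-everything-and-flag policy follows, with hypothetical flag codes; the actual flags and access-restriction mechanism would be network- and ADS-specific.

```python
# Illustrative only: all analyte results are archived with explicit
# validity flags; routine reports filter on the flag rather than
# dropping data from the archive.
VALID, SUSPECT, INVALID = "V", "S", "I"

def reportable(results, include_flagged=False):
    """Select results for routine data reports.  Invalid and suspect
    results stay in the archive but are excluded unless requested."""
    return [r for r in results if include_flagged or r["flag"] == VALID]
```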
The working group discussed whether it was possible to recommend standard units for
reporting of analyte results. The determination was made that the objectives of the
network, e.g. effects, trend or atmospheric modeling, dominated any standardization
effort. The group did emphasize the necessity for clearly identifying the reporting
units of each analyte in the data base. A related topic concerned the need for a clear definition of whether sulfate concentrations, or sulfur concentrations measured as sulfate, are reported. Similar statements apply to other ion species. The question of the number of significant digits required for archived data was addressed
briefly. No specific statement on standardization was made but the group thought
that the number of significant digits must be related to the precision of the mea-
surement, e.g. to at least 10% of the precision. The number of digits reported
should be adequate to preserve the precision of the measurement while not implying
an unjustified precision level. Further discussions are required before a recommen-
dation for standardization can be made.
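As a sketch of the rule of thumb above, the formatter below keeps one decimal place beyond the stated precision (so the last digit resolves roughly 10% of the precision). The exact cutoff is an assumption, since the group made no standardization statement.

```python
import math

def digits_for_precision(value: float, precision: float) -> str:
    """Format value so the last digit resolves about 10% of the precision.

    E.g. a precision of 0.5 yields two decimal places (last digit 0.01 <= 0.05),
    preserving the measurement precision without implying an unjustified level.
    """
    if precision <= 0:
        raise ValueError("precision must be positive")
    # decimal places needed so the final digit is no coarser than precision/10
    places = max(0, -math.floor(math.log10(precision)) + 1)
    return f"{value:.{places}f}"
```

A measurement of 4.237 with precision 0.5 would be archived as "4.24"; with precision 5.0, as "4.2".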
Data handled in the field and laboratory must be critically reviewed to identify and
isolate errors. Errors in the present context are occurrences during the data
handling process that produce mistakes in the recorded information that are correc-
table, e.g. transcription errors, or non-correctable and fatal errors, e.g. field pH
reading of 21.1. This process has been referred to as data screening. It includes
sample validation in the field, preliminary physical screening of sample upon re-
ceipt at the laboratory, and automated statistical screening of data. Statistical
screening includes range checking for valid values and outlier detection procedures
to identify potential errors.
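The screening steps above can be sketched as follows. The valid ranges and the outlier rule here are illustrative assumptions, not network protocol.

```python
from statistics import mean, stdev

# Assumed plausibility limits for range checking; real networks set their own.
VALID_RANGES = {"pH": (2.0, 9.0), "sulfate_mg_L": (0.0, 50.0)}

def range_check(analyte: str, value: float) -> bool:
    """A field pH of 21.1 fails here: a non-correctable, fatal error."""
    lo, hi = VALID_RANGES[analyte]
    return lo <= value <= hi

def flag_outliers(values: list[float], k: float = 1.5) -> list[bool]:
    """Mark values more than k standard deviations from the mean.

    Flagged values are potential errors for follow-up, not automatic rejects.
    """
    m, s = mean(values), stdev(values)
    return [abs(v - m) > k * s for v in values]
```

In a series of pH readings such as 4.1, 4.2, 4.3, 4.2, 9.9, only the last would be flagged for investigation.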
Data validation of individual sample data is the process of evaluating the data
after its preliminary screening. It is designed to identify data for subsequent
investigation of its validity. Data further investigated may be determined to be
valid, invalid, or suspect and then flagged accordingly if invalid or suspect.
Several data validation criteria are used by wet deposition networks. Topol (1985)
discusses many of the criteria and gives an indication of how the individual criter-
ia are applied to reach a decision.

The working group concluded that the process of data validation was not amenable to
standardization at this time. The analyst brings to the decision process factors
that are qualitative and depend on his experience with data from a site. Eventually
artificial intelligence procedures or expert systems may be developed to perform
data validation. Explicit documentation of the data screening and data validation
process was recommended as a minimal requirement. The documentation must be made
available to the general data user community.
The outcome of the data validation process is the identification of valid, invalid
and suspect data. The recommendation of the working group was that the decision be
documented by explicitly including a flag field to accompany the data result rather than deleting the result or including it unflagged. Standardizing the flags was considered
a distinct possibility. The issue of having the data validation result apply to all
analytes as opposed to an individual analyte was discussed. The group concluded
that occasions occur when one or the other is appropriate. The documentation should
clearly state how the network completes this part of the validation process.
A recommendation was made by the working group to have a study of the data valida-
tion process completed. The study would consider the procedures currently used by
networks with a goal toward creating a framework for the process. This would in-
clude a discussion of the pros and cons of specific data validation procedures. The
group felt that a comprehensive study would require the analysis of actual network
data. All data collected by a network accompanied by applicable sample condition
notes and codes would be required.
Quality control and quality assurance activities for a wet deposition network were
not explicitly considered by the working group. The quality auditing working group
considered one portion of the activities. The Quality Assurance Manual for Precipitation Measurement Systems discusses these in depth (2). The group considered two
specific items related to QC and QA activities. They emphasized the necessity for
the preparation and open distribution of QC/QA reports summarizing the process and
results. The reports should have precision and accuracy summary information clearly
identified and in a useable form. The report should also state the sites and the
time period to which the information applies. The second item considered was the
archiving of individual QC and QA sample results. Although the group did not iden-
tify explicitly those QC or QA sample results that were appropriate for archiving,
the group considered the archiving of QC and QA sample results that lead directly to
estimates of sample precision and accuracy essential. Further definition is required before a final recommendation can be made. Both items were considered important
enough to recommend that the information be transferred to the ADS wet deposition
data base.
Discussion of data reduction and reporting of wet deposition sample data was organ-
ized around four topics:
•	Determination of valid/invalid samples for reporting
•	Data completeness measures
•	Data summary reporting criteria
•	Statistical summary procedures for concentration and deposition
Two types of reports were considered: quarterly or annual sample data reports and
annual data summary reports. The structure and contents of the reports were considered network and objective specific by the working group. Hence no recommendation
was made. Both types of reports require the reporting organization to determine
which sample results are valid or invalid for the report objective. Note that this
determination is based upon the previous data validation process but is a subsequent
separate process. The remaining three topics are applicable to annual (or other
time period) summary reports. To focus the discussion the group assumed that an
annual summary for a single analyte at a single site was being prepared.
Determination of valid/invalid sample results is specific to the report objectives.
The process depends on the availability of sample data accompanied by well defined
and understood sample condition notes, codes and flags. This determination is a
secondary or even tertiary judgement of the sample result. Primary judgement is the
assigning of sample condition codes and notes to document the sampling and analysis
process of the sample. A secondary judgement is made during data validation when
sample results are flagged as valid, invalid or suspect. Depending on the objective
the tertiary judgement may simply be to accept the data validation flag judgement or
to use additional criteria, e.g. exclusion of bulk or long duration samples. Suc-
cessful completion of this valid/invalid determination step is a key to combining
data from different networks. Since the process is objective specific, no standar-
dization is possible. However, detailed documentation of the procedures applied
must be included in the report. Currently, successful completion of this step is
very difficult due to differences in how networks complete data validation and
sample flagging.
Compilation of an annual statistical summary for a site requires the researcher to
consider the handling of missing data, invalid data and below detection limit data.

Virtually all sites have these data problems present. Data completeness measures
are one way to document the amount of data loss in quantitative terms. The working
group discussed the following measures and proposed that they be considered for
standard use. The descriptions that follow are taken from the overview presentation
on data handling, reduction and reporting. Further study and refinement of the data
completeness measures was recommended.
Percent precipitation coverage length (%PCL) is defined as the percent of the summary period for which information on whether or not precipitation occurred is available:
%PCL = 100 * (SPL - NDPP)/SPL
where	SPL = number of days in the summary period
	NDPP = number of days when it is not known
	if precipitation occurred.
If precipitation is known to have occurred during a particular sampling period but
no quantitative estimate of the amount is available then no knowledge of precipita-
tion is assumed. Total precipitation depth is the amount of precipitation occurring
during the period of precipitation coverage. This data completeness measure does
not include any consideration of the availability of a valid precipitation chemistry sample.
The remaining data completeness measures are computed for each ion species or compo-
nent summary and may differ even though they are for the same summary period and
site. The cause of the difference is that determination of valid samples is comple-
ted for each ion species or component independently. If a network has a protocol
for prioritizing the ion species to be measured in the event of insufficient sample
volume to complete all measurements or if a network's validation procedure is ap-
plied to individual ion species, then the number of valid sulfate and nitrate sample
values may differ, affecting the data completeness measures.
Percent total precipitation (%TP) is the percent of total precipitation depth mea-
sured during the summary period that is associated with valid component samples.
%TP = 100 * TPVC/TP
where	TP = total precipitation depth
TPVC = portion of total precipitation depth associated
with a valid sample component measurement.
Component is a generic term that refers to any ion species or other measurement made
on a wet deposition sample.

Percent valid sample length (%VSL) is the percent of the days in the summary period
that are associated with valid sample periods.
%VSL = 100 * (NDNP + NDVCMP)/SPL
where NDNP = number of days in sample periods during which
no precipitation occurs
NDVCMP = number of days in sample periods with valid
sample component measurement on measured
precipitation sample.
This measure is most useful for sites with weekly, monthly or 28-day sampling periods.
%VSMP, percent valid samples, is the percent of wet deposition samples that have valid sample component measurements.
%VSMP = 100 * NSVCMP/NSMP
where	NSVCMP = number of wet deposition samples in summary
	period that result in a valid sample component
	measurement
	NSMP = number of wet deposition samples in summary period.
%COL EFF, percent collection efficiency, is the ratio (converted to a percent) of the total sample volume (converted to a depth) to the total precipitation depth, where totals are for qualifying samples.
%COL EFF = 100 * EPDVC/ERGVC
where	EPDVC = sum of depths predicted from sample volume for
	qualifying samples
	ERGVC = sum of standard gauge depths for qualifying samples
and qualifying samples are those a) that have both a colocated standard gauge and sample volume measurement available and b) that have a valid sample component measurement.
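The completeness measures above can be sketched directly from their definitions. Function and variable names follow the text; the arithmetic is a straightforward reading of the formulas, and any example values are invented.

```python
def pct(numer: float, denom: float) -> float:
    return 100.0 * numer / denom

def pcl(spl: int, ndpp: int) -> float:
    """%PCL: percent of the summary period with precipitation information."""
    return pct(spl - ndpp, spl)

def tp(tpvc: float, tp_depth: float) -> float:
    """%TP: percent of precipitation depth tied to valid component samples."""
    return pct(tpvc, tp_depth)

def vsl(ndnp: int, ndvcmp: int, spl: int) -> float:
    """%VSL: percent of summary-period days in valid sample periods."""
    return pct(ndnp + ndvcmp, spl)

def vsmp(nsvcmp: int, nsmp: int) -> float:
    """%VSMP: percent of samples with a valid component measurement."""
    return pct(nsvcmp, nsmp)

def col_eff(sample_depths: list[float], gauge_depths: list[float]) -> float:
    """%COL EFF: total sample-volume depth over total standard gauge depth,
    both summed over qualifying samples only."""
    return pct(sum(sample_depths), sum(gauge_depths))
```

For a year with 14 days of unknown precipitation coverage, pcl(365, 14) gives about 96.2 percent.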
Data summary reporting criteria were the third topic on data reduction and repor-
ting. Completion of a statistical summary and accompanying data completeness mea-
sures is followed by a decision on whether or not the summary is "representative"
enough to satisfy the objective of the report. Data summary reporting criteria are
designed to document the decision process. Criteria should consider the site repre-
sentativeness, data completeness and representativeness of the chemistry.
Measures of data completeness are defined above. Representativeness of the chemis-
try refers to the sample concentration representing the chemistry of wet deposition
that actually occurred at the collection site. Representativeness depends on the
ability of the network sampling and analysis protocol to preserve the characteris-
tics of actual deposition. Currently evaluation of representativeness depends on a

qualitative assessment of network protocols. Site representativeness may be ap-
proached by evaluating the effectiveness of siting criteria and their implementa-
tion. Any assessment depends on the specific objective for the summary.
The working group considered the question of whether it was possible to standardize
a format for the assignment of an overall data quality level to an individual sum-
mary based on the three considerations above. Assignment of quality levels depends on the summary objective, so standardization is not possible. A format
for assignment of data quality levels has been formulated by the Unified Deposition
Data Base Committee organized by the Canadian Federal-Provincial Research and Moni-
toring Coordinating Committee. The philosophy adopted was to categorize site repre-
sentativeness and data completeness into qualitative levels and then assign an
overall data quality level using those assigned levels. Representativeness of
chemistry was addressed through qualitative assessment of network protocols and
selection of valid/invalid sample data. The working group, in principle, considered
the approach worth further investigation.
Statistical summary and reporting procedures were discussed by the working group.
An overall recommendation proposed was to encourage (demand) that summary procedures
be explicitly defined and that standard terminology be adopted for specific statis-
tical summaries, if at all possible. Substantial confusion and differences in
results occur because of differences in terminology and criteria for valid/invalid
sample data. The group concluded that it was not possible to recommend a standard
set of statistical summary measures for an annual summary due to the requirements of
different objectives.
The group discussed procedures for estimating total precipitation at a site. For a
site operating according to a network protocol with a standard precipitation gauge,
the group concluded total precipitation should be estimated from the standard gauge.
The handling of missing standard gauge information evoked additional discussion.
However, on site sample volume measurements, converted to a depth, were preferred to
an off site or climatological estimate. Specific procedures may depend on the
planned use of the summary data. The maximum of standard gauge and depth based on
sample volume was not recommended.
Measures of "average" analyte concentration include arithmetic mean, geometric mean,
median, volume-weighted mean and precipitation-weighted mean. Each has a correspon-
ding variability estimate. The primary recommendation of the working group was for
explicit documentation of the procedure used. Each has an appropriate use depending

on the objective of the summary. Typically, analyte concentration distributions are not symmetric, so the arithmetic mean is not as useful a summary measure as the others. The working group recommended the arithmetic mean not be used except
for specific instances where it is justified. A research study on the comparison,
use and interpretation of different measures of average concentration was recommen-
ded by the group. The study would include the use of actual data to illustrate the
differences among the measures. The study should identify which measures are appro-
priate for specific objectives.
Standardized terminology and definition for precipitation-weighted and volume-weigh-
ted means is lacking. Operational definitions use standard gauge depth and sample
volume for precipitation-weighted and volume-weighted mean respectively. Working
group members adamantly supported or opposed one or the other for use. Differences
were identified as being related to different unstated objectives and to willingness
to make assumptions about the representativeness of the concentration measured from
the sample collected. The group concluded that the study on comparisons of means
should include a careful assessment of weighted means.
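The two weighted means under their operational definitions (sample volume versus standard gauge depth) can be sketched as follows; the sample data are invented for illustration.

```python
def volume_weighted_mean(concs: list[float], volumes: list[float]) -> float:
    """Weight each sample concentration by its sample volume."""
    return sum(c * v for c, v in zip(concs, volumes)) / sum(volumes)

def precipitation_weighted_mean(concs: list[float], depths: list[float]) -> float:
    """Same form, but weighted by standard gauge depth instead."""
    return sum(c * d for c, d in zip(concs, depths)) / sum(depths)

# Invented data: one small-volume, concentrated event among larger dilute ones.
concs = [10.0, 40.0, 20.0]      # concentration, e.g. ueq/L
volumes = [200.0, 10.0, 100.0]  # sample volume, mL
```

For the data above the volume-weighted mean (about 14.2) falls well below the arithmetic mean (about 23.3), the typical behavior for the skewed concentration distributions discussed above.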
The working group considered procedures for the estimation of total annual (or other
period) deposition of an analyte. Again terminology and calculation procedure
differences were apparent to the group. A research study on procedures for estima-
ting total deposition was recommended by the group. The study would be similar in
concept to that for studying estimates of average concentration. The group sugges-
ted that the phrase "total measured deposition" refer to the total estimated deposi-
tion associated with analyte that actually is collected in the sample bucket and
analyzed. Computationally, this is the product of measured analyte concentration
multiplied by the sample volume converted to a depth summed over all precipitation
events collected. A complete definition requires that the procedures for handling concentrations below the limit of quantitation be identified. No single estimation
procedure was identified as the recommended procedure for obtaining the "best"
estimate of total deposition when the realities of incomplete data were present.
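Computationally, the suggested "total measured deposition" is the sum of concentration times sample-volume depth over collected events. The below-quantitation-limit substitution used here (half the limit) is one common convention, assumed for illustration rather than a group recommendation.

```python
def total_measured_deposition(concs: list[float], depths: list[float],
                              loq: float, below_loq_factor: float = 0.5) -> float:
    """Sum of concentration x sample-volume depth over collected events.

    Concentrations below the limit of quantitation (loq) are replaced by
    below_loq_factor * loq -- an assumed convention; network policies vary.
    """
    total = 0.0
    for c, d in zip(concs, depths):
        effective = c if c >= loq else below_loq_factor * loq
        total += effective * d
    return total
```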
The working group identified a number of topics where additional work or research
would be useful. The list is not exhaustive nor is it prioritized according to importance.
• Considerable variation in the definition of a precipitation event
occurs between networks and even sites. A study to develop an
operational definition of an event is required.

•	A study on the spatial variability of analyte concentration and
deposition in regions nearby the sample site is recommended. The
region envisioned extends from close colocated samplers to distan-
ces comparable to the open region required by siting criteria.
Additional study of spatial variability for area sizes typically
used by modelers also is needed.
•	A research study on the comparison, use and interpretation of
different measures of average concentration is proposed. The study
should be based on actual data representing all regions of North
America and should include discussion of procedures for handling
incomplete and below limit of quantitation data.
•	Procedures for estimating total wet deposition during a specific
period at a site require additional study. Actual data should be
used. The study could be combined with the study on measures of
average concentration. In both cases a link between possible
summary objectives and the summary procedure should be identified.
•	Additional discussion and study of procedures for estimating total
precipitation during an annual period is recommended. Specifi-
cally, issues to be addressed are selection of maximum gauge and
sample volume (depth); handling of missing standard gauge depths;
use of nearby weather service or climatological precipitation
information to fill in missing on site precipitation information.
•	A proposed set of data completeness measures were identified for
possible standard use. Additional research on their utility and
application is warranted before they can be recommended for general
use. A study may lead to their refinement or identification of a
better set of data completeness measures.
•	Development of data quality categories to accompany annual summar-
ies is a natural next step to consider. A study to formulate a
format for assignment of data quality levels based on site repre-
sentativeness, data completeness and representativeness of chemis-
try is recommended.
•	A comprehensive study of spatial variation in concentration and
deposition is needed. The region to be considered for the study
would extend to areas (100 km sq) used by modelers to collocated
samplers. Generalization of the results to all regions of North
America is recommended wherever possible.
•	The study could begin with an analysis of existing data where available and applicable. A comprehensive study will require the design
of a special field study with appropriate spacing of sample collec-
tion sites.
1.	Proceedings: Advisory Workshop on Methods for Comparing Precipitation Chemistry Data. Palo Alto, CA: EPRI. 1983.
2.	L.E. Topol, et al. Quality Assurance Manual for Precipitation Measurement Sys-
tems. U.S. Environmental Protection Agency, Research Triangle Park, NC. January

3.	C.R. Watson and A.R. Olsen. Acid Deposition System (ADS) for Statistical Repor-
ting. System Design and User's Code Manual. U.S. Environmental Protection
Agency, Research Triangle Park, NC. EPA-600/8-84-023. September 1984.
4.	J.H. Gibson. Workshop Proceedings: Data Management Needs for Atmospheric Depo-
sition. Palo Alto, CA: EPRI. WS-79-163. 1980.

Bill Mitchell*
The charge of the working group on Quality Auditing was to determine which quality
auditing activities for wet deposition measurements could be standardized. As
Dr. P.K. Mueller noted in his opening remarks, the terms quality control (QC) and
quality auditing are frequently used interchangeably, when, in fact, they are two
distinct quality assurance activities. Thus, the initial activities of the working
group concerned:
a.	Defining the meaning of the terms quality assurance, quality control and
quality auditing;
b.	Reaching a consensus on which activities should be included in QC and
which should be in quality auditing.
It was agreed to define these terms as follows:
Quality Control
These are routine checks, tests, inspections, etc. included in the measurement
procedure that are carried out by those actually involved in the measurement pro-
cess. (Measurement process includes site selection, sampling, sample handling,
analysis, data recording and validation.) QC tests, which include the routine
checks made to assess the precision and accuracy of the measurement data, are gener-
ally administered close in time to the part of the measurement process that is being
evaluated. They determine if the measurement process is performing within the
control limits specified in the measurement procedure and, if not, corrective action
is taken.
*Environmental Protection Agency, Research Triangle Park, NC.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

Quality Auditing
These are checks that are administered by persons not involved in the measurement
process. These tests are done to assure that the QC checks specified in the proce-
dures used in the measurement process are being done in a timely manner and in
conformance to the criteria specified in the procedure.
In other words these checks are done to assure that data of defined quality that
meets the objectives (needs) of the project are being generated. These tests can be
done by someone on the staff of the organization responsible for the measurement
process (internal audit) or by someone who is outside the organization (external
audit). Either type is desirable. They may be system audits or performance audits
or a combination of both. Quality auditing by its nature is a rather infrequent
activity compared to QC.
Quality Assurance
Quality assurance (QA) includes all activities performed to control, assess and
assure the quality of data obtained from the measurement processes. Quality assur-
ance includes quality control, quality audits, and all other activities planned and
performed to achieve data quality such as: document control, quality organization,
quality policy, quality objectives and procurement control.
Employing these definitions one can see that using the abbreviation QA/QC in reports
is not really appropriate since QC is only one small part of QA. One can also
appreciate why so many interpretations of these three terms (QC, QA, quality audit-
ing) exist. That is, the measurement process under consideration frequently deter-
mines which activities are considered as quality control and which as quality audit-
ing and how these functions are organized under the broader quality assurance func-
tion. (An excellent discussion of the differences between these terms can be found
in the ANSI/ASQC report A3-1978 cited in the General Subject Documents section.)
After agreeing on the definitions of quality control and quality auditing, the
working group developed a method for ranking those aspects of quality auditing that
should be standardized. Seven major components of the wet deposition measurement
process were identified and ranked with respect to:
• Impact on the usability of the data collected from the measurement
process if the QC checks were not performed. For example, could
the lost or invalidated data be recovered or corrected at a later

time? The less likely that the data could be recovered or correct-
ed, the more important it is to audit that part of the process.
• Cost to audit that part of the measurement process and the cost
savings that could result from standardizing the audit procedures
both within and between networks. For example, field site audits
are expensive. If auditing procedures were standardized across
networks, it would be possible for an auditor for one network to
visit other networks' sites located in the same general area of his
network's site. This could yield significant cost savings.
The two scores obtained were then summed and the parts of the measurement process
were ranked from lowest score (most important) to highest score (least important).
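The two-score ranking procedure can be sketched as follows; the component scores are invented for illustration.

```python
def rank_components(scores: dict[str, tuple[int, int]]) -> list[str]:
    """Sum each component's (data-impact, audit-cost) scores and rank,
    lowest total first (most important)."""
    return sorted(scores, key=lambda name: sum(scores[name]))

# Invented scores on the two criteria; lower totals rank as more important.
scores = {
    "Data validation": (3, 3),
    "Site selection and equipment set-up": (1, 2),
    "Sample analysis": (2, 2),
}
```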
The relative ranking of the seven components of the precipitation measurement process was (most important first):
1.	Site selection and equipment set-up
2.	Sample analysis
3a.	Sample collection (includes proper operation and maintenance of sampler and rain gauge and ancillary equipment)
b.	Equipment selection and its conformance to equipment performance
c.	Data recording and summarization
4.	Data validation
5.	Sample handling (includes setting out bucket, recovery of sample,
shipping of sample to laboratory for analysis)
We felt that the above ordering fairly reflected the order of priority for develop-
ing a set of standardized quality auditing procedures. Those procedures that could
be applied by all organizations to the measurement process were then identified and
the minimum frequency at which each should be applied were determined. The results
of these exercises are described below.
Site's Conformance to Site Selection Criteria and Equipment Set-Up Specifications
The purpose of auditing the site to see if it conforms to the network's site selec-
tion criteria is to assure that the data from the site will be adequate for its
intended use. All networks in North America making similar measurements for SIMILAR
PURPOSES should agree on: a minimum set of site selection requirements, the site
records that should be maintained, the items that should be audited, and the frequency at which these audits should be performed. (Items a, b, and c in the ranking above were considered about equal in importance.) This working group recommends
that all networks audit each site to assure that the minimum site criteria are being
met. It is recommended that standard forms and a standard procedure for photograph-
ing the site be developed and adhered to by all networks. For example, at six month
intervals the site operator could conduct a self audit by completing a form and
sending it along with a set of photographs of the site and surrounding area to the
person responsible for quality assurance activities for the network. At three year
intervals, (more frequently if the site operator changes or if major changes occur
in the area around the site), an external auditor should conduct an on-site visit.
It would be helpful if a standardized (across all networks) numerical scoring system
for siting were developed for site auditing purposes and the total "audit" score
entered into the data base management system with the actual monitoring data. We
would like to see agreement between networks on the number of collocated samplers,
the distance between sampler(s) and rain gauge, the tension on the lid spring, etc.
Analytical Laboratory
It is recommended that the central analytical laboratory of each network participate
in a common interlaboratory comparison at least yearly and that the results be
submitted to each network's QA person and be included in the data bases. Implemen-
ting this recommendation may only require setting up a procedure for reporting the data to the data base, since most laboratories making precipitation measurements already participate in the interlaboratory comparisons conducted by EPA, USGS, Canada Centre for Inland Waters, etc.
Each laboratory should also analyze the NBS precipitation reference sample that will
soon be released. (This sample will provide NBS-certified values for pH, conductiv-
ity, acidity and common anions and cations.) Weekly, each laboratory should analyze
a set of blind samples supplied by the laboratory's QC person (or equivalent) and
monthly should analyze a blind sample submitted by those responsible for auditing
activities. Ideally this latter sample should pass through as much of the measure-
ment process as possible. For example, it should be put in the bucket at a field
site and returned to the laboratory under the same conditions normally used for
actual precipitation samples.
Each network should agree to employ a minimum number of collocated samplers for
determination of precision and to provide uniform training for its site operators.
Written standard operating procedures should also be employed.

Equipment Selection Specifications
All networks making similar measurements should agree on a minimum set of equipment
selection and performance criteria and ensure that all required acceptance tests are
performed for each procurement prior to use of the equipment. Any time changes are
made in equipment specification or in other parts of the measurement system, the
impact of the change on the comparison of data from the same network or from differ-
ent networks should be documented. Parallel operations of the "old" and the "new"
designs should be done to obtain side by side comparison data.
Data Recording
Annually, data from 3 to 5 samples processed in the last year should be inspected by
an independent auditor to determine if all steps of the data transcription and
recording process were followed, including proper identification of samples, proper
coding of samples, etc. In addition, at least quarterly one sample should be selec-
ted at random and followed through the entire data handling process to assure that
the process is operating properly and all quality control checks are being done.
Each network should ensure that all precision and accuracy data associated with the samples are reviewed at frequent intervals and are entered into the data base.
Data Validation
At six month intervals a sample problem ideally consisting of an original raw data
set from a previously validated sample should be sent to the person responsible for
validating data and this person should process the data as if it were an actual
sample. The results of the data validation should then be inspected by the auditor
to assure that specified validation procedures are being followed.
Sample Handling
At least every three years an on-site systems audit should be conducted, at which
time the proficiency of the site operator in carrying out his various duties would
be evaluated through direct observation and a review of records and documentation.
This on-site audit should be done more frequently if the field check samples and
field blanks mentioned previously indicate that samples are being contaminated,
mishandled, etc.
Except for developing improved data auditing procedures, we were not able to identi-
fy other areas of quality auditing where further research needs to be done. Quality

auditing in our opinion really is intended to assure that all QA procedures are
being performed and are meeting their objectives. Quality auditing results, for
example, should not be the sole means for determining precision and accuracy of the
system. Such data quality indicators should be based on the results of QC checks to
prevent compromising the independence of the quality auditing function. As much as
possible the auditing procedures used should be standardized across all precipita-
tion networks making similar measurements for similar purposes. Because of the
increasing trend to acquire, handle, reduce, transmit and archive data by electron-
ic methods, a systematic review and development of quality assurance techniques
should be pursued to assure control of data from the time it is electronically
acquired until it is acquired from the archives by a data user years later. This is
an important area for future research and development effort.
Precipitation Specific Documents
Quality Assurance Manual for Precipitation Measurement Systems. EPA 600/4-82-042a.
Available from ORD Publications, U.S. EPA, 26 West St. Clair Street, Cincinnati, OH
45268, revised October 1984.
Quality Assurance Plan for External Audits of the Utility Acid Precipitation Study
Program (UAPSP). Report: UAPSP 110. (Available from UAPSP Report Center, P.O. Box
599, Damascus, MD 20872.), in preparation.
Internal Quality Assurance Plan, Wisconsin Acid Deposition Monitoring Project.
Battelle Pacific Northwest Laboratories, Richland, WA., August 1983.
Work Plan for Providing Technical Assistance and Quality Assurance to the NTN/NADP
Network. Draft available from Berne-Bennett, U.S. EPA, Quality Assurance Division,
MD-77B, Research Triangle Park, NC 27711.
S. Campbell, editor. Sampling and Analysis of Rain. ASTM Publication ASTM STP 823.
December, 1983.
P.K. Mueller. Practical Aspects of QC and QA for Precipitation Chemistry Measurements. Paper presented at 76th APCA National Meeting, Atlanta, GA, 1983.
D. Bigelow. Quality Assurance Considerations for Multiple-Network Data Comparison.
Paper presented at APCA/ASQC Specialty Conference, Boulder, CO, 1984.
L. Schroeder and B. Malo. Quality Assurance Program for Wet Deposition Sampling and
Chemical Analyses for the National Trends Network. Paper presented at APCA/ASQC
Specialty Conference, Boulder, CO, 1984.
General Subject Documents
Quality Systems Terminology. American National Standard/American Society for Quality
Control publication ANSI/ASQC A3-1978. (An excellent description and explanation of
the terms used in quality assurance programs.)

Quality Assurance Handbook for Air Pollution Measurement Systems. Volume I (General
Principles) and Volume II (Ambient Air Specific Methods). EPA 600/9-76-005 (Volume
I) and EPA 600/4-77-027a (Volume II). Available from ORD Publications, U.S. EPA,
26 W. St. Clair St., Cincinnati, OH 45268.
Quality Assurance Practices for the Chemical and Biological Analyses of Water and
Fluvial Sediments. USGS Open-File Report 81-65-0, 1981.

Sally Campbell**
In developing program descriptions, first the need for the program is defined, then
the approach is conceptualized and a work plan is developed. The plan has to go
through some review process to see if it will meet the objectives. The plan has to
consider whether or not to use standard methods and to consider the advantages of
using standard methods. These are comparability to other people's work and consis-
tency within a program. Standard methods cover administrative procedures, sample
collection, field analysis, lab analysis, data processing, and quality assurance.
When available standard methods are not suitable, they should be revised. At
that point a workshop, like this one, would start a new standardization process.
Standardization occurs around specific methods and general practices. However, if
you are going to standardize, the standardization will be successful only if it is
presented as an available option and not if presented as a dictum. For instance, a
standard could be, "If you want to measure sulfate in rain, here is a good way to
do it." A standard that is not likely to be very successful would read, "Everybody
measuring rain water must measure sulfate in rain." Although a standard practice
could be written to say, "If you are measuring rain water for purposes of talking
about SO2 reductions, it is a good practice to measure sulfate."
Our group conducted a survey in the other working groups and integrated the find-
ings. We also reviewed the standardization process and established how it would be
applied to precipitation chemistry measurements. One of the questions that we asked
^Based on taped transcript edited by David Anderson and Peter Mueller.
**Martin Marietta Corporation, Columbia MD.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

was, "What activities do you know about that are going on in standardization?" This
is the list we came up with:
•	ASTM -
--Collection
--Quality assurance
•	Intersociety Committee
•	UAPSP/ASTM conferences
•	Informal agreements
•	ISWS study for EPA, EPA-QA document
•	USGS methods manual, WMO
•	Rockwell, UAPSP, MAP3S, NADP methods manual, CANSAP
•	NBS studies
•	Colocation studies
There is a fairly large standardization activity in ASTM. It has been going on for
two years as a task group. Last week in Milwaukee we elevated it to the status of a
subcommittee under the leadership of Mark Peden. There is apparently some Intersociety Committee activity going on about which I am not knowledgeable. There have
been several conferences like this one, the UAPSP conference we were all part of and
an ASTM conference. Those have led to informal agreements between groups of people
or networks to share procedures and to implement particular limited sets of activi-
ties so that data is more comparable. For instance, we have in Maryland an ongoing
process where the organizations collecting and analyzing rain have agreed to a
minimum set of reporting procedures and a minimum set of analytical procedures so
that everybody is able to use the same data set. There are also a series of differ-
ent activities going on to develop proposed standard procedures in EPA, the EPA
quality assurance document, the Illinois Water Survey work for EPA, the USGS methods
manual, Rockwell, UAPSP, CANSAP, MAP3S, and the long list of methods manuals. There
are some research activities going on that we know about including NBS studies,
especially Phil Cook's studies on measurement of pH. There is a whole set of co-
location studies that no one seems to have the time to draw conclusions from but
they should give good evidence about the comparability between networks.
I want to go over the standardization process again so that we are all talking about
the same thing. Since I am an officer in ASTM I will discuss that process.
•	Method selection
•	Preparation of formatted draft
•	Committee review/tentative approval
•	Collaborative testing
•	Revision
•	Final balloting
•	Publication

First is method selection. A group of people come together to discuss standardiza-
tion and select methods that are candidates for standardization. A draft of that
method has to be prepared in the format of the group you are working with, and it is
circulated for comment. At this point the question is whether the procedure is
likely to work. It must be written carefully so the scope indicates the right uses
for it and the ranges it is supposed to measure. At that point it can be published
as a proposed standard or as an emergency standard. However, without collaborative
testing it soon dies. Collaborative testing means that different laboratories take
that method as it was written and apply it religiously to see how good the agreement
is between different laboratories using reference samples. Out of that comes rigor-
ous precision and accuracy data, both interlaboratory and intralaboratory. That
data forms the basis of a final approval. The final approval in ASTM parlance means
that it is balloted by the group that initiated the standard, then by the committee
they work for, and then by the entire society. All methods in ASTM are based on a
consensus, therefore all negatives have to be addressed with some basis of data.
Then the procedure is published.
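The "rigorous precision and accuracy data, both interlaboratory and intralaboratory" that collaborative testing produces are conventionally split into repeatability (within-lab) and reproducibility (between-lab) components. A minimal sketch of that decomposition, in the spirit of an ASTM E691-style interlaboratory analysis but not an implementation of any specific standard; the laboratory names and values are illustrative:

```python
import statistics

def collaborative_precision(results_by_lab):
    """Given replicate results on one reference sample from several
    laboratories, return (repeatability_sd, reproducibility_sd).

    Repeatability: pooled within-lab standard deviation.
    Reproducibility: within-lab variance plus the between-lab
    variance component from a one-way ANOVA decomposition.
    Assumes equal replicate counts per laboratory.
    """
    labs = list(results_by_lab.values())
    within_var = statistics.mean(statistics.variance(r) for r in labs)
    lab_means = [statistics.mean(r) for r in labs]
    n_rep = len(labs[0])
    between_var = max(statistics.variance(lab_means) - within_var / n_rep, 0.0)
    repeatability = within_var ** 0.5
    reproducibility = (within_var + between_var) ** 0.5
    return repeatability, reproducibility

# Hypothetical replicate sulfate results (mg/L) on a common reference sample
results = {
    "lab_A": [2.01, 1.98, 2.05],
    "lab_B": [2.10, 2.14, 2.08],
    "lab_C": [1.92, 1.95, 1.90],
}
sr, sR = collaborative_precision(results)
print(f"repeatability s_r = {sr:.3f}, reproducibility s_R = {sR:.3f}")
```

Reproducibility can never be smaller than repeatability here, which is the property that lets negatives in balloting be addressed "with some basis of data."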
The significant thing about this process is that methods selection is the purview of
the group that needs the standards. No process will survive standardization unless
there is data to support it. If you try to standardize a method for which the data
is not there or if someone can produce data to indicate that the method will not
work then the standardization process will fail.
That brings us to the subject of obstacles to standardization.
•	Knowledge needs
•	Costs of improved procedures
•	Inertia to changing procedures
•	Honest disagreement in judgment
•	Undetected reference to objectives
•	Precision and accuracy (P & A) data
The first is knowledge needs. You may not know enough to write a standard that will
stand up to the rigors of collaborative testing. There may be some vested interest
present in the community that for one reason or another does not want a standard
method. The improved procedure that the standardization represents can be expen-
sive. By the time you get the whole society of ASTM, for instance, to agree to a
quality assurance procedure in which you have to standardize on 10% duplicate
samples, it may be too expensive for some of the people who have to be guided by it.

The second obstacle may be inertia to changing procedures because there is a long-
standing base. At this point we may say, "Well, of course these are standard proce-
dures. They are meant to be reference points, not something you absolutely have to
adhere to. You can diverge from them as long as you note carefully what you did."
Sometimes there are honest disagreements in judgment. Everybody may be working from
the same information base but disagree on the interpretation of the information.
That can hold up the standard for some time. You may try to standardize something
that is not a candidate for standardization because it is related to goal oriented
objectives. People with different objectives will never agree. For instance, if we
try to agree that everybody will collect samples weekly, half of you in this room
would strongly disagree. Once in a while you try to standardize something that in
fact treads on some of those goal oriented objectives. Then standardization will
either fail or will take a very long time. The main thing that often happens to
standards is that they languish for years because no one comes up with precision and
accuracy data, colocation data, or collaborative testing.
There are advantages in going through the process. Some of those advantages are:
•	Improved data, especially from newcomers
•	Improved data comparability
•	Time savings in selection, implementation of methods
•	Separating different methods for determining average concentrations and
total deposition
People who are new to the field will produce usable data very quickly. There is
improved data comparability among the members of the group who are standardizing the
process. If you begin a new process and you know that your group standardized some
methods, it is a lot faster to pull out a standard method and give it to your chem-
ist than it is to search around and figure out how to do it, consult with the ex-
perts, etc. This would prevent delays and reduce costs, so there are some advan-
tages in going through the standardization process.
Members of our working group have talked with the other groups about what we are
considering as standardizable methods. I have a tentative proposal as to how a set
of standards could be written. They fall into the categories of standard practices
and of standard methods. I will address standard practices first.
•	Planning—setting goals and objectives
--Selection of siting criteria
--Selection of methods and hierarchy of application
--Quality assurance
--Specification for data archive

•	Determination of representativeness
--Statistical design
--Use of land use data
--Region of influence of sources (by objective)
•	Sample collection and handling
--Contact materials
--Storage protocols
--Shipping protocols
•	Network documentation
--Ultimate goals and specific aims
--Site descriptions
--Precision and accuracy test results
--Auditing results
--Lower quantifiable limits
--Significant figures
--Sampling, sample condition, and laboratory processing codes
There are four standard practices or recommended procedures. The first deals with
planning and how to use goals and objectives in writing plans. That includes siting
criteria, where the standard practice may have several sets of criteria depending
upon different objectives. The options may involve selection of methods, a
hierarchy for analysis, design of a QA program, design of a data archive, and speci-
fications for reporting styles or recommended reporting styles. The second recom-
mended standard practice deals with determination of representativeness. That might
include statistical design approaches, addressing the question of wind fields, use
of land use data in talking about influences of sources, etc. The third standard
practice is on sample collection and handling. This involves equipment choice,
contact materials, how you collect the samples, how you hold them, holding time, and
how you ship them. The fourth standard practice addresses documentation. This is
in reference to Dr. Olson's comments and those of his working group.
The items that need to be standardized include:
•	Terminology
—LOD (limit of detection)
--LOQ (limit of quantification)
--Volume weighted mean
--Precipitation weighted mean
--Measures of data dispersion
--Measures of data completeness
--Data representativeness
--Dynamic blank, field blank, reagent blank
--Wet deposition measurement
--Bulk deposition measurement
--Collection efficiency
--Total deposition
--Observed deposition
•	Wet collectors
•	Analytical methods
•	Data validation

•	Determination of LOD, LOQ
•	Snow collectors
•	Determination of sample contamination
The first, and one of the key ones, is terminology. Under terminology we came up
with a long list. The point is that when we say total deposition we all know that
the total deposition was computed by a particular method. It does not mean that
everybody has to report total deposition, but that if you do, compute it in a stan-
dard way.
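Several of the terms above, such as volume-weighted mean and total deposition, each imply one specific computation, which is exactly what a terminology standard would pin down. A minimal sketch of the conventional definitions, with illustrative sample values (the unit conversion assumes concentration in mg/L and precipitation depth in cm):

```python
def volume_weighted_mean(concs, volumes):
    """Volume-weighted mean concentration: sum(c_i * v_i) / sum(v_i).
    Weighting by sample volume keeps small, often concentrated,
    events from dominating the average."""
    return sum(c * v for c, v in zip(concs, volumes)) / sum(volumes)

def total_deposition(concs_mg_per_L, depths_cm):
    """Total wet deposition (kg/ha) from per-sample concentration
    (mg/L) and precipitation depth (cm).
    1 cm of rain over 1 ha is 10^5 L, and 1 mg is 10^-6 kg,
    so mg/L * cm * 0.1 = kg/ha."""
    return sum(c * d * 0.1 for c, d in zip(concs_mg_per_L, depths_cm))

# Hypothetical weekly sulfate samples: concentration (mg/L), rain depth (cm)
concs = [2.0, 1.5, 3.0]
depths = [1.0, 4.0, 0.5]
print(volume_weighted_mean(concs, depths))
print(total_deposition(concs, depths))
```

For a collector of fixed opening area, precipitation depth is proportional to sample volume, so weighting by depth and weighting by volume give the same mean; the two terms in the list differ only when catch efficiency varies.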
Then we went through another process to prioritize the needs for standardization.
We talked about three ways to determine priorities for standardization. One was the
data integrity that the quality auditing working group talked about. The second was
data comparability, which was a priority for some of the rest of us in the group.
The third was how easy it would be to do or how complete the knowledge base was that
was needed to achieve standardization. In Table 1 we ranked these different candi-
date methods or practices. We came up with an overall ranking, and just about
everything got a very similar number. The rankings are judgments based on our
discussions with members of the other working groups. One of the questions that we
looked at was how much underlying agreement there seemed to be on some of these
topics. In at least one case we heard people saying that we all do it the same way.
But when the representative from our committee looked at it he said, "They do not do
it all the same way, and they do not even know how different they are." So this
reality also influenced our judgments.
The first column, knowledge, is whether the knowledge base is in place to standardize
a good method. Agreement reflects whether everybody now does it the same way or not.
Need is whether it is necessary to maintain data comparability. We specifically keyed
that one on comparability, not on quality. The overall column is a measure of whether we
thought standardization could be achieved in a short time frame. A short time frame
is in the order of a year, depending on what organization you are talking about. A
long time frame might be three, four, or five years until the state of the knowledge
becomes improved. The asterisks mark the ones of which Dr. Mueller and several other
people said, in effect, "If I had money, that is where I would put it."
Table 1
Prioritization of Methods and Practices for Standardization

	Knowledge	Agreement	Need	Overall
Wet collectors
Analytical methods
Data validation
Determination of LOD, LOQ
Snow collectors
Determination of
  sample contamination
Sample collection

H = High
M = Medium
L = Low
^Participants recommended for initial funding.

We thought terminology was cheap, easy to achieve, and a high priority. It could
easily be done. Wet collectors ranged from very high all across the board for
the weekly collector, to low across the board for the daily or fractionation collector.
This indicates that you start with the weekly to get the easy ones done and mean-
while work on the fractionation collector. Analytical methods is high all the way
across the board. It is the one that is furthest along and is about to go to bal-
lot. We thought that a lot of people knew the different ways of doing data valida-
tion, but the probability that any one group of people would agree on the same set
of procedures was pretty slim. Therefore it comes in a low category. You can see
how the rest of Table 1 reads.

That is the result of what we did. We did develop an integrated list of research
needs, which follows. There is not really anything on it that differs from the
research needs defined by other working groups, except two items.
•	Define an event
•	Effect of obstructions on wind field
•	Definition of deposition when data are incomplete
•	Determining if chemistry is representative
•	How close or far to put samplers from each other
•	Improve reliability and decrease energy cost
•	Definition of moved or new site
•	Dealing with local sources
•	How do you measure true rain
•	Appropriate collector height
•	Design of rain gauge
•	Measuring deposition in forest areas
•	Holding time
•	Study colocation data
•	SO2 and HNO3 interference
•	Research bibliography on methods
•	Field pH versus lab pH
•	Experimental design with respect to spatial variability
•	Use of quality control information in data analysis

led by
Peter K. Mueller*
I am extraordinarily impressed by this workshop and the productivity of the working
group chairs. With what you have reported this afternoon, the hoped-for outcome has
been achieved. I want to express my appreciation for the fantastic cooperation and
hard work that has been done.
Sally Campbell already indicated what the groups thought priorities ought to be for
standardization. She also described the ASTM standardization process in an abbrevi-
ated way. I would like to accomplish two objectives in this session:
• Look at the standardization process from a broader perspective; and
• Provide input to the sponsoring agencies so they can implement
standardization quickly.
It is relatively easy to implement a research project on a topic, but I see a lot of
practical obstacles to achieving what would be a standard method.
We have identified methods that lend themselves to standardization. Dr. Campbell
described the next step which is to prepare a formatted draft. We have to prepare a
document that people can follow and understand. We could begin with existing SOPs
and establish a review process to identify the differences and similarities and to
flag the important differences. Differences in style should not delay acceptance of
an SOP.
Progress on chemical laboratory procedures is well under way. I am not concerned
about implementation there. I am concerned about implementation on all the other
topics. From a managerial agency viewpoint, what do I do next to get that done?
*Electric Power Research Institute, Palo Alto CA.
What format should the documentation follow? Dr. Campbell suggested that it would
be very good to follow the format of a prestigious national/international standardizing
agency. ASTM is organized and has a long-standing process to achieve procedural
standardization. However, they are not the only standard-setting organization.
Also, because of their thoroughness, and because it is primarily a volunteer effort,
it is a very slow process. Yet we need to move rapidly to get a formatted draft.
There are several recognized formats for procedures: EPA, Intersociety Committee, and others.
Comment: The WMO will only develop its own procedures if it cannot resort to stan-
dards of other organizations.
If we are going to implement the next round of documentation, then we need to settle
on a format. Dr. Campbell suggested using the ASTM format since you are going to
end up with ASTM approval or publication. Is there any discussion about using ASTM
format as an initial specified format?
Comment: Should we select ASTM format when we know they may not be able to apply
the necessary resources to get the documentation prepared quickly?
The ASTM format is acceptable to every agency. The community represented here could
prepare the document in that format. We need to know whether ASTM would then be
willing to participate.
Comment: We should make sure that ASTM is agreeable before we proceed further.
Comment: As a former vice chairman of ASTM committee D-22, which would address
these procedures, I can assure you that they are willing to participate.
Mark Peden is now in charge of that.
Comment: EPA really does not have a set of EPA specified procedures for acid depo-
sition monitoring. We have guidance documents for our state and local
agencies. They are air quality measurement methods, yes. They would not
necessarily apply to wet deposition methods.
Comment: The USGS is in about the same situation as EPA. Even though we fund a lot
of monitoring, we do it through the NTN almost totally. We do not have a
formulated plan to change methodology. We are members of the D-19 commit-
tee. We accept those methods. I am certain that ASTM procedures would be

acceptable. Their voting procedure is usually such a long, exhausting
process that we cannot wait until they come up with approved methods. We
do a lot of collaborative testing on our own as we proceed. The documen-
tation could be ASTM's format. We also use the Office of Water Data
Coordination format. We are not fixed to any format and are very flexible.
Comment: The Department of Energy is flexible regarding procedural format. ASTM
format is acceptable.
Then format is not a major problem.
Comment: For those of us who have worked with ASTM in other subcommittees, the
process is very lengthy in regard to our 2- and 3-year funding proposals
through Congress.
Comment: I would like to make a comment on the speed of ASTM. For the last four
years I have been administrative vice chairman of D-22. What I would like
to say is that we have identified in D-22 four or five specific points at
which the process can, and should be, speeded up. We have strategies for
doing that. Many of us now believe that it is possible to get a method
standardized within a year if you follow procedures and have push and
funds to do it. Otherwise you will never get it standardized. Those
strategies are: not trying to standardize something before it is reviewed
at ASTM; having collaborative testing data; writing a draft that is com-
plete and reviewed by the ASTM editor so as to get editorial comments;
having the ballot issued immediately after the meeting, getting the results,
and resolving all the negatives by telephone before the next meeting. If you
follow those five procedures carefully, you can cut down the time substantially.
So if we put a professional management structure into place to follow through, we
should be able to do two things: We should be able to prepare a document, and we
should be able to put it into the ASTM process. My constraints are similar to those
of USGS that Dr. Schroeder mentioned. I would like to see peer reviews and have some
semblance of acceptance in the community that conducts this work. I do not mean to
exclude ASTM, but possibly to move a little faster than some of the volunteer
efforts of ASTM.
Does USGS have peer review of methods that you accept outside of these?

Comment: Well, all of the USGS work is internally peer reviewed, but we rely heavi-
ly on the other groups that we think our work will impact. We rely on
professional courtesies. We review things for EPA extensively and we meet
a 30-day time frame. We do the same reviews for DOE on the same time
frame. Therefore, they respond to our requests in a similar time frame.
That is the way we have worked it out over the years. Then as an agency
the decision is made to publish that method.
Comment: I think Dr. Campbell did a good job of presenting an overall view of what
went on in the working groups. I am looking forward to the proceedings
where this will be recorded. In regard to the standardization process,
the Intersociety Committee format is almost the same as ASTM. As a matter
of fact a lot of ASTM methods were developed by the Intersociety Commit-
tee. This was done because the Intersociety Committee has funds to pay
for the people to do the work in the substance subcommittees. That means
an awful lot in terms of getting these things out quicker. The review
process, peer review, the resolution of negatives, etc., are essentially
the same.
I have been a member of D-22 since 1958 and a member of D-19, the water
committee. There are some turf problems. If it is water, you are going
to have a problem since that is D-19's topic of concern.
Response: Yes. There are already some internal ASTM communications regarding how to
handle acid deposition. Which subcommittee will handle all of it has not
been resolved.
Question: Could you address the procedural differences between standard practices as
opposed to standard methods?
Response: A standard practice does not require collaborative testing. There are
various levels of differences. The development of a standard practice
occurs entirely within a subcommittee. For standard methods we get meth-
ods written and approved on a temporary basis and on the books as tempor-
ary standards before collaborative testing is done. For example, if this
group decided they wanted to constitute themselves with the appropriate
ASTM committee, which there is very little resistance to, then we could
effectively write the standards and approve them, and get them published by ASTM
as temporary standards without going through more than one or two committee
ballots.

Question: It appears that we have a shorter turnaround time and a better chance of
agreement on standard practices than we do on standard methods. That is,
except for those cases we have identified that are nearly complete. Maybe
we should steer away from standard methods initially.
Response: I think some standard methods are going to be fairly automatic. Those are
the ones that are in the process now. Everybody knows they are coming and
essentially what they are going to say. They are practically written, and
most of the hassles have been dealt with. Some of the other methods might
not be so fast.
Comment: It seems that standard practices have fewer problems than standard me-
thods. We would like to get standard methods, but standard practices may
be a reasonable compromise in the near future.
Response: Another reason standard practices do not have the turf problems in ASTM is
because D-22.06, the subcommittee for acid deposition, had undisputed turf
control over all the standard practice topics. There are no turf problems
at all from that standpoint. Some of the standard methods topics have to
be ironed out.
Comment: It does not make any difference which way you go. There are going to be
some problems. Standard methods of the old school were developed for each
element. Sulfate, for example, was a step by step, page by page drawn out
process. The way we are operating in the Florida Acid Deposition Study is
toward using generic methods. We are using IC, for example. You do not
have to go through that whole development for each particular ion. You
write a generic method and plug these ions in. In this way you can deter-
mine what the problems are. This is going to be a much quicker process
than the old method that ASTM, D-22, EPA and everybody else has been using
in developing standards or standardized methods.
What is happening on the chemistry methods now?
Comment: Well, they are looking at them individually.
Comment: There are methods already for water. Perhaps requesting a scope change is
one way to approach the new methods to allow use with rain water.

Comment: Essentially that would be included with the standard practice for collect-
ing precipitation samples. You may want to start with a practice for
collecting precipitation samples. Variations of the theme can be produced
for specific applications analogous to specific analytes, and those would
then become standard methods.
In the case of data handling and analysis, is there an ASTM committee that deals
with data management procedures?
Comment: I think there are some that deal with quality control procedures in the E
series, but there are one or two methods that are probably applicable. As
far as recording QC standards, this is terminology that has been standard-
ized and can be referenced. As far as data management specifically for
acid rain networks, that would be considered a standard practice by ASTM.
There is a community of people who specialize in data processing and management.
Perhaps we need to interact with them because they have the expertise to do the work.
Comment: There is a standard practice for statistical design, recording, and dis-
play of data put out by E-34. It may contain some reasonable standards.
Comment: What you might run into there is the present committee not relating to
acid rain and having no knowledge of environmental analysis or environmental
sampling. You may have to start a new subcommittee of that committee.
Comment: Or you may have to bring some people into the group who are expert in
documenting data from measurement activities. The group tends to be
dominated by analytical chemists now.
Comment: We have a lot of people in this room with experience who could function in
that ASTM committee. There is no barrier to everyone at this workshop
becoming part of that process, either by joining ASTM or communicating
with the committee.
In the geological sciences USGS is doing sampling all the time and collecting huge
quantities of data. Data is stored, processed, and made accessible.

Comment: We do not have a standard, other than reporting standards, in that we say
to what significance we report values. We identify what testing we will
do before we will record numbers. I do not know of any standard that we
have for data reporting per se, other than the pure chemical end of the
laboratory analysis. For surface waters we have what we call our TWRI
series, water resources investigation reports. There are literally sever-
al hundreds of those which define very minute aspects of the work. At
this time we do not have a TWRI, which is our standard practices, for deposition.
That is interesting. There is a group of people who are experienced in sampling a
complex system. It is very difficult to get an accurate representative sample of
that system. What kind of standards have they invented? What kinds of reference
materials or standards have they developed to check the sample representing the
environment they are trying to categorize? It would be good to identify people who
could transfer that kind of thinking and experience to sampling precipitation.
Comment: At USGS our process is the same thing that we did for NADP. We look
throughout our group and designate one of our experts and ask him to serve
with them. Vance Kennedy was designated to work with the deposition
people, and he has worked with them for five or six years.
USGS has leaned on an expert. You have taken his advice on a standard for taking
precipitation samples. Have you come up with a standard for precipitation sampling?
Comment: We do not have a written document for deposition in our process. We do
have ideas on how to approach it. Someone in our group would probably be
designated to handle it. It seems that we are talking about a statistical
design. When we can define the criteria a statistics group requires, then
we are talking about a statistical design. But on precipitation sampling
networks we have not been able to give the statisticians information to
obtain a "statistically valid design." This seems to be the nagging problem.
The statistically valid design applies to the deployment of the network, frequency of
sampling, etc. I understand that. That is network design. I have been focusing in
on what Dr. Topol has been saying about a reference, something that would serve as

a reference for determining to what extent our current practice of taking a sample
either weekly or daily actually represents the chemistry and the amount of material
that is falling on the earth's surface.
Comment: It is still a statistical problem. You are asking how many grab samples,
and in what kind of statistical design, I need to take so that the
probability of underestimating what I am trying to measure is minimized.
Yes, what you are saying is that we need to know what the natural variability is
before we know what subset of it we can sample to get an estimate.
Comment: That is one of the questions that statisticians are going to ask you. It
has been my observation that we have not been good at trying to pass that
information to them or putting it in a way that is understandable, or they
have not been able to ask it in a way that we understand.
We could probably recruit people to help us. That is a good point.
I feel there is a consensus on what we should do to get on with the standardization
process. Many here share my sense of urgency to get some standard operating proce-
dures prepared and accepted widely in the community taking precipitation measure-
ments. I hear also that the ASTM process is a laudable one and that we should try
to get on its treadmill in this process. While many of us have had experiences with
ASTM taking a great deal of time, much of that delay is also associated with a lack
of management of the process for a specific method. If the effort for managing
this, doing the leg work, and producing the written work were managed and adequately
funded, we ought to be able to make substantial progress in the next 12 months. Is
there any disagreement with that?
Question: What are we going to do from here? Where do we go from here? This pre-
sumably is going to be published. Then what do we do with it?
I am glad you asked that important question. I would like to have some proposals.
We have three sponsoring agencies represented here who want to go on with this
process. What are the suggestions from EPA for following up, to answer Dr. Hendrickson's
question?
Comment: I think ASTM would have to send a letter to Barry Goldstein asking EPA to
formally participate. Then we would assign permanent government representatives.
I presume they would be at RTP. I do not know yet what to do in
terms of the acid rain task force. That would be another consideration,
but to start the process out we need some kind of permanent organization.
Comment: First we have to decide what we want to do. What part of the procedures
should we attempt to standardize first? You cannot standardize a measurement
process of eight major categories at the same time, because you do not have
the resources, the manpower, or the dollars to do that. Something has to be
done to identify what we do, and then the resources available have to be
examined to see what can be done.
You are already sponsoring the chemical methods, the analytical methods standardiza-
tion process right now. Isn't that being sponsored by EPA?
Comment: The development of methodology is being sponsored by the EPA at the Illi-
nois State Water Survey. Strictly speaking, the standardization through
ASTM is not sponsored by EPA.
Comment: There are going to be differences in objectives, and thus methods, between
networks. The first course of action might be deciding where we need to
focus. My group thought it was site selection. We should identify the
priorities first, some reasonable minimum standards for specifications,
and adhere to those. If the site does not meet the minimum set of speci-
fications, the site data is not used. That is a harsh decision, but I
think it needs to be made in order to straighten out a major source of
error. That is my group's feeling. Other groups felt differently.
It points toward a standard practice.
Comment: A standard practice, rather than a standard method, is what should be
done. There are several documents becoming available that could be com-
bined by several knowledgeable people, the common factors determined, and
differences identified and voted on.
Comment: USGS contributes quite heavily to D-19 in particular. We have made a
tentative decision that we are probably not going to have any standing set
of representatives for Dr. Campbell's subgroup, D-22.06. That is because
of the manpower availability. What we hoped is that enough would come out
of the documents from this workshop to give us an indication of
what aspects of deposition monitoring we will focus more funding towards.
I do not think that we are as worried about going through the Intersociety
Committee, OWDC's group, or ASTM for standard practices certification of
what we decide to do. I am not as worried about that as some people are,
although we never have worked around ASTM. We are trying to work through
them. We will certify our own methods and use them whenever we think they
need to be used.
As far as EPRI is concerned, we worry about the tremendous amount of resources
going into the collection of this information without having any standard
references. We have an equal concern whether the data are being generated by
EPRI-, EPA-, or USGS-funded projects, or anybody's projects. EPRI's research
design very often takes
into account what other people are doing. The value of our work is very much depen-
dent on the information that is produced by other people at the same time. Standar-
dization is very important. Reference centers and reference material are very
important. Time and time again, in any kind of large scale measurement activity
there are peer enclaves who claim that they know how to do this, have done it all
their lives, and are making measurements on large scale. All of a sudden another
possibly eminent group comes into being or surfaces with similar claims. Let us
compare information from both groups to offset being surprised by unacceptably large
deviation between groups.
EPRI is very much concerned, as far as this environmental issue is concerned, to
prevent that from occurring later. Therefore, it is a matter of principle to estab-
lish standards that are widely accepted in the community, not just accepted by those
who think they do it the best. That kind of claim just does not stand up. We want
to facilitate this process and do it in a cooperative fashion with any group that
would like to participate. We would like to do it by getting projects started
cooperatively, or by co-funding with the concurrence of groups like EPA. We want
peer review of the objectives of the project and a plan for the study and the
Comment: An initial approach that might be beneficial would be developing the
standard practice objectives across as much of the measurement process as
we can. The standard practice would take the existing method by each of
the various networks. We should also include the Canadians, who have some
unique ideas about deposition sampling.

Comment: Let us find out what we are doing that is similar. We agree on that.
That starts us on a uniform set of standards. It might not be the ideal,
or perfect, but we begin by focusing on what we are doing right together.
We all agree that should be done. Then we iron out where the differences
are, if they are major or not, whether they should be allowed or not, and
we vote on it. Then we decide whether we should be doing real intercom-
parison or collaborative testing, which is very expensive, and which
statisticians fight over how it is designed.
Comment: I come back to my original question. Where do we go from here? We all
agree we do not have enough money to do all of them at one time. How are
we going to prioritize them? Are we going to try to get a consensus of
the group that is here, are we going to try to get a consensus and priori-
tization based on a subcommittee of the group that is here? That is what
I mean by where we go next.
Comment: The next step could be for the workshop participants to actively parti-
cipate in the next ASTM meeting. ASTM is a management organization. It
is a management framework for the development of standards. Those of us
in this room can decide to have a workshop at the next ASTM meeting. We
could develop drafts and work them over until there is agreement. The
ASTM would be very happy to have that activity going forth in conjunction
with the regular ASTM meeting. That could take place as part of the
meeting of 22.06 which Mark Peden would be leading, or interacting with
22.06. It depends on how you want to do it. Not only could that result
in prioritization but in a draft, probably an approved draft.
Comment: A first priority is the formation of a library where we can have the
available documentation. Right now if you try to work with more than one
data base, the information that you need to correctly analyze the data is
not readily available. We need a librarian of this data so that everyone
who has collected data and prepared documents can send a copy to the
library. The librarian needs a budget to make copies of the information
he receives. We could start by getting the information together in a
central library. That could be done easily. We are already proceeding to
get chemical methods standardized. That is in process, so continuing that
process is a number one priority.

As a member of the executive committee for this workshop, my sense of the process
that we will follow is that a draft report of this workshop, including all the working
group leaders' reports and the drafts of the keynote talks, will be prepared quickly.
The draft will be sent to the executive committee and the working group chairmen for
review. Each of the working group chairmen has identified priorities, and Dr. Camp-
bell's group has extracted priorities from each of those groups as well. When this
is all written up, we can cross check for agreement and consistency. Then it will
become essentially the marching orders. If we have to fine tune because of resource
limitations, we will have to do that as authors or co-authors of this workshop
report. At the same time we will have to establish some sponsorship or funding
mechanism for getting the work started. This could be done with an advisory struc-
ture that includes the technical input of the key agencies and key knowledgeable
people involved here. I expect that there is enough interest so that co-funding
will take place in the next series of steps.
However, there are blocks to standardization other than the management of the pro-
cess itself. One is not having enough knowledge. That leads to the topics for
research that have been identified. Again, they will be reviewed by those of us who
will be reviewing the document for concurrence, but I think there are some people
here now who would like to address some specific topics that should receive attention.
Comment: My point is in regard to the standardization of equipment, in particular
the wet only precipitation collector. This is no longer an open question for
WMO. In the early '70s our working group on instruments and
methods for measurement of environmental parameters agreed on specifica-
tions. These specifications were adopted by our Commission for Instruments
and Methods of Observation. At that time the chairman was a Canadian.
Now the chairman is a citizen of this country. In listening to the dis-
cussion yesterday, I was very much impressed by the enthusiastic reluc-
tance to improve your current HASL type of precipitation collector. I
think that we may consider changing our standards in WMO. In order to do
that some studies should be conducted. I propose the following.
One would have to answer the question, when you want to collect material
that is carried by the air and that is subjected to aerodynamics, is it
meaningful to design the collector in a way that its properties are inde-
pendent from the wind direction? In particular, would it be meaningful to
give the mouth area the exact dimensions of a standard precipitation collector,
so that the volume collected would also give a reading of the precipitation
amount at the same time? The answer to this question must be "No" in
order to change our standard.
If you assume that solar radiation can cause transformation or initiation
of chemical reactions, then would it be meaningful to protect the
sample already collected from sun radiation? The answer would have to be
"No" in order to change our standard.
The final topic you have to study is: "Is it meaningful to equip a wet
only collector with a dry only bucket?" The answer would have to be "Yes"
in order to change our standard. I wanted to underline the design
Are you saying that our de facto standard, which is the HASL collector in its var-
ious configurations, has a dry only bucket which is not needed?
Response: Yes.
I see. And are you also saying that we have exposure to sunlight in our tubs that
could be avoided?
Response: Yes, the radiation is a problem. The heat, of course, promotes some
evaporation, and some people have proposed that one should refrigerate the
sample. This cannot be done with the present construction. You would
have to have the collecting bottle inside in a separate casing.
Now you are saying that there is no need to do research on this topic because WMO
has already got the information on which the design of the sampler can be based.
Response: I would say so, yes. I can very easily subscribe to that because these
decisions were made long before my time. When I came to WMO this already
existed. The precipitation chemistry standards have already been select-
ed. We do not standardize a certain make, only the design specifications
and performance criteria, since we are not allowed to endorse
particular products.
Have you followed those in building your sampler? Dr. Granat?

Response: I think so. They comply with most of these. It is the smallest possible
construction which makes the wind pass the collector. It is protected
against light. We have tested it in a few cases where we could have the
samples in a refrigerator below the collector. We have some indication
that there is no difference between the refrigerated samples and those
kept at ambient levels. We find that with the dry side bucket, in Sweden,
there is a little difference for major constituents between the one col-
lector and the wet only collector. We think that dry deposition would be
better estimated from calculations in relation to receptor areas, receptor
material like forest, grass and so on, rather than by a dry side bucket.
So we skipped the dry side buckets for two reasons: we thought it was
insufficient to determine the dry deposition; and it made an extra obsta-
cle to the wind. It simplified the construction not to have this extra
unit because we could fold the lid down below the collector to avoid snow
collecting on the lid. That provided several benefits.
My reaction to this input is that as we work on a standard practice for sampling,
one of the key chores is to review these design specifications and performance
criteria and determine whether we should start new, or whether the differences
between that and the current practice in the United States within the networks ought
to be standardized before we start something new.
Comment: Another priority is the establishment of a center where collectors of any
new type can be evaluated on a continuing basis. As new proposals come up
we can: 1) evaluate whether they meet the needs of the intended use, and 2)
compare them with existing equipment in light of long term records to show
changes that might be made because of new technology. We always come up
against these comparisons. Sometimes we try to achieve the comparison by
colocating samplers for some period of time. Perhaps what we are really
talking about is establishing some place where some proposed protocols or
new designs can be evaluated and compared to what we have done historical-
ly. I do not think that is being done.
What agency in this country could serve to manage such a standard reference center?
I think it is a very important suggestion and should be followed up.
Comment: The EPA operates an equivalency function for air monitors which we have
done for 10 to 12 years at Research Triangle Park. Other groups at EPA
have similar functions for some part of a measurement process. Whether or

not that could be done through EPA is a long-term funding problem.
Funding is very spotty and variable. You are not certain what you get
from year to year. It might be difficult for a government agency to do this.
The National Bureau of Standards has a nominal role for being a reference center in
this country.
Comment: But they are dependent upon funding from other groups or government agencies.
Comment: For EPA's part, what we fund each year is determined by what we are told
to research or develop or quality assure from the research committees. I
am not sure how the other government agencies are funded and directed.
Response: USGS contributes, but it is normally on a mission oriented basis rather
than a long-term continual funding basis.
That was an excellent suggestion. It is necessary to set up a standard reference
center where testing can be conducted on a continuing basis on the sampling process.
Comment: I would like to point out that the ASTM has its own test sites where they
have been testing materials for 50 or 60 years.
Comment: That is true, but on the atmospheric side ASTM committees have never used them.
Is there any more on research topics that people want to add?
Comment: Sample stability?
Comment: Probably the most important topic.
What research needs to be done along those lines? You want to make sure it is at
the top of the list.

Comment: I do not see any need for any work to be done on the analytical, chemical
aspects of rain measurement. It has been done. I am sure the methods we
use are, or should be, ASTM's standard procedures for water.
Comment: No, they are not the same.
Comment: We have had a lot of analytical work done. We found that you cannot do
that because the chemical matrix, the ionic strength, is subject to con-
tamination. Those problems have to be addressed with precipitation samp-
les. Many of the classical analytical methods for water do not have to be
concerned with this problem. It is an entirely different set of problems. You
cannot measure pH accurately in a precipitation sample using the ASTM
method for measuring pH in water. There is an entirely different ionic
strength that affects the performance of the pH measurement. You cannot do
the conductivity method exactly the same either. We have the NBS documen-
ted research to show that. There are problems with saying, "Use the ASTM
method." ASTM's methods have to go through the validation balloting
period, and they can be out of date with respect to current technology.
Comment: We are at concentration ranges that those methods were not designed for.
Comment: Last year at the D-19 meeting in November, I prepared a whole list of
results that had been obtained in various networks around the country.
They were going to look at their methods to see how many of these were
applicable. So far I have not heard anything.
Comment: I am hoping that we will have a meeting between D-19 and D-22 to look at
that at the next ASTM meeting.
Methods are being written for the chemical analysis of precipitation. That process
is well on its way. I think there is consensus that for the common analytes not
much more research is needed. For substances like H2O2, the organics, the agricul-
tural chemicals, etc., we are not yet ready to think about standardizing methods to
determine them. I am not quite sure where we are with trace elements.
It is time to close this session. It has been very valuable. I am very pleased
that we all could get together, and the priorities of what we have to do have been
established. We have a good idea of what the process must be to get on with the
standardization process and where we need to get more information.

Acidic deposition measurement technology is not as far along in dry deposition
monitoring as in wet deposition monitoring. Therefore, the dry deposition session
focused on how the measurement process should be engineered. Major uncertainties
about the dry deposition process still remain. These problems were identified so
that work can be addressed to resolve them.
Bob Stevens presented a review of methods to measure dry deposition chemical
species. Current methods in use were discussed as well as those under development
and evaluation.
The practical aspects of measuring dry deposition were discussed by Marv Wesely.
Dry deposition rates, deposition velocity, and the major acidic deposition species
were included. Monitoring for trends, effects research, and model evaluation were
discussed in light of the data requirements for each. A brief discussion of current
monitoring research underway was also presented.

Ronald Bradow**
There are three potential uses of air monitoring data in forest decline studies:
1.	Establish the typical levels of forest exposure to known air pollutants in
order to scale and design laboratory or field-based controlled exposure
experiments.
2.	Provide specific support for field-based measurements of forest exposure to
test hypotheses about the nature of pollutant-related damage, if there is any.
3.	Provide estimates of exposure of forest areas to pollutants for the purpose of
conducting epidemiology studies of forest response. Thus, the general experi-
ment would deploy sensitive indicators of biological harm in key forested areas
where air pollutant exposures are likely to be substantially different. Rele-
vant output information would involve correlation of forest productivity loss
with increasing concentrations of air pollutants. Such information would be
useful in economic analysis of air pollution effects; in fact, this kind of
data is probably the most valuable obtainable.
Unfortunately, such experiments are extremely expensive and also very risky. It is
not always possible to assure that there will be relatively clean and relatively
dirty areas with respect to the variety of pollutants present. Even if there are
such areas, it usually happens that pollutant concentrations are highly auto-
correlated and it is not possible to determine which substance was responsible for
any effect seen. Then too there are many variables which cannot be controlled such
as moisture, day length, average radiant energy over a growing season, nature of
soil chemistry, presence or absence of soil or plant pests. In most cases it is
*Only the abstract of the presentation was available at printing.
**Environmental Protection Agency, Research Triangle Park NC.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

impossible to construct an experiment completely redundant with respect to all these
sources of productivity variability without some mechanical control.
Therefore, it is highly likely that the early use of air pollutant measurements will
be primarily in designing controlled exposure experiments with plants. Since the
forest environment is remote from pollutant sources, it is also likely that the
usual pollutants directly emitted from traditional transportation and industrial
sources are present in very small concentrations. Consequently, it is probable that
there will be considerable interest in analysis of monitoring data for photo-
chemically generated pollutants such as ozone, nitric acid vapor, hydrogen peroxide
and related organic peroxides and nitrogen compounds, in the hope that forest expo-
sures to such chemicals can be simulated in controlled experiments.
The second consideration in monitoring air pollutants is the type of forest for
which such monitoring is necessary and desirable. The combined EPA-U.S. Forest
Service program has determined the following priority scheme for conducting forest
damage studies:
1.	Spruce/fir forests in the northeast and southern Appalachian Mountains.
2.	Southern commercial forests mainly involving loblolly or slash pine.
3.	Pacific Coast conifers, primarily Douglas fir and redwood.
4.	Eastern hardwood forest.
Another important consideration is dose. For there to be observed expression of
toxicity response, the air pollutant must first be deposited in the tissue of a
plant or an animal. In most cases, air concentration will be linearly proportional
to deposition rate for any given air pollutant, but there are vast differences in
deposition velocity, both wet and dry, for the chemicals of interest. Therefore, it
is necessary to measure the concentrations of each species independently to estimate
overall deposition, for example of nutrient nitrogen. It is not usually feasible to
measure deposition directly and the application of assumed deposition velocities is
questionable in complex terrain or in the presence of strong vertical temperature
gradients. Consequently, measurement of concentrations, hence of exposure, is the
common fall-back position.
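The fall-back described above, inferring deposition from measured concentration and an assumed deposition velocity, amounts to a one-line calculation of flux = v_d x C. A minimal sketch; the deposition velocities used here are placeholder assumptions for illustration, not values from this report:

```python
# Illustrative sketch of estimating dry deposition flux as F = v_d * C.
# The deposition velocities below are hypothetical placeholders; real
# values differ widely by species, surface type, and atmospheric
# stability, which is why the text calls the approach questionable in
# complex terrain or strong vertical temperature gradients.

ASSUMED_VD_CM_S = {"HNO3": 2.0, "SO2": 0.8, "O3": 0.4}  # cm/s, assumed

def dry_deposition_flux(species, conc_ug_m3):
    """Estimate flux in ug/(m2*s) from a concentration in ug/m3."""
    vd_m_s = ASSUMED_VD_CM_S[species] / 100.0  # convert cm/s -> m/s
    return vd_m_s * conc_ug_m3

# Example: 5 ug/m3 of HNO3 with an assumed v_d of 2 cm/s
flux = dry_deposition_flux("HNO3", 5.0)  # 0.02 m/s * 5 ug/m3 = 0.1 ug/(m2*s)
```

Because concentration is linearly proportional to flux for a fixed v_d, measuring concentration is indeed a usable proxy for exposure, but only as good as the assumed velocity.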
Another aspect of dose involves the time response of the biological system. Most
biochemical responses to air pollutants in animals or in plants involve some sort of
tolerance level. Generally, living things possess suppression or
detoxification mechanisms which prevent major damage from toxic gas exposure, and it

is only when these systems are overwhelmed that an adverse response is observed. It
is common to find a bolus effect in which a single short-term dose overwhelms the
defense mechanism, leaving the organism much more susceptible to lower doses. This
kind of observation has important implications for monitoring frequency. For ex-
ample, in the latest ozone criteria document threshold levels of air pollutants
causing damage to trees and shrubs are cited to be:

Robert K. Stevens*
Dry deposition of acidic and basic air pollutants has been identified as an important
contributor to the acid properties of the atmosphere that influence air quality and
adversely impact forest and aquatic ecosystems. The gaseous and particle pollutant
species that have been identified as the major chemicals present in the atmosphere
contributing to acidic deposition are shown in Table 1. The concentrations of those
species typically present in non-urban atmospheres are also shown in this table.
Commercial instrumental methods are available to monitor some of these pollutants,
such as SO2, NO, NO2, and O3. However, the sensitivity of these instruments is not
adequate to measure these pollutants, except for O3, at the concentrations typically
found in non-urban locations. In addition, the cost and maintenance of these moni-
tors is often prohibitive for many air quality studies. For these reasons a number
of investigators over the past five years have been developing filterpacks and
denuders (to be described more fully later) to measure these species by conventional
wet chemical procedures. This report will discuss the various sampling and analysis
procedures currently being used or under development for most of the pollutants
listed in Table 1. The discussion that follows will concentrate on methods of
measuring the nitrogenous gases and particles (HNO3, NH4NO3, HNO2) since these are
species which have been the most difficult to measure in non-urban atmospheres. We
will also report on recent advances in denuder technology and present results ob-
tained with a unique annular denuder assembly.
*Environmental Protection Agency, Research Triangle Park NC.

Acid Deposition Monitors and Monitoring Requirements
Table 2 contains a description of the sampling systems and instruments that measure
the key pollutants contributing to acidic deposition. The ideal sampling system to
satisfy the monitoring and modeling needs of the dry deposition research programs
should have the following characteristics:
(a)	be able to quantitatively collect SO2, HNO3, HCl, HNO2, NO2, NH3,
SO4=, NO3-, H+, NH4+, organics and trace metals;
(b)	collect enough of these species over 6 to 24-hour sampling periods
to exceed 3 times the detection limit of the analytical methods used
to measure the chemical species at the lowest expected ambient level;
(c)	be able to operate over a wide range of environmental conditions;
(d)	be based on proven laboratory methodologies; and
(e)	be amenable to field audits and cost less than $3,000.
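Criterion (b) above reduces to simple arithmetic: the mass collected is concentration times sampled air volume, and it must exceed three times the analytical detection limit. A minimal sketch, with an assumed flow rate and detection limit (neither is specified in the text):

```python
# Sketch of criterion (b): collected mass must be at least 3x the
# analytical detection limit at the lowest expected ambient level.
# The flow rate and detection limit are assumed values for
# illustration, not specifications from this report.

def collected_mass_ug(conc_ug_m3, flow_l_min, hours):
    """Mass collected (ug) = concentration x sampled air volume."""
    volume_m3 = flow_l_min / 1000.0 * 60.0 * hours  # L/min -> m3 sampled
    return conc_ug_m3 * volume_m3

def meets_criterion(conc_ug_m3, flow_l_min, hours, detection_limit_ug):
    return collected_mass_ug(conc_ug_m3, flow_l_min, hours) >= 3 * detection_limit_ug

# Example: 0.1 ug/m3 sampled at an assumed 10 L/min for 24 h collects
# 1.44 ug, which clears an assumed 0.3 ug detection limit (3 x 0.3 = 0.9).
ok = meets_criterion(0.1, 10.0, 24.0, 0.3)
```

Working the arithmetic this way makes clear why the low end of each concentration range in Table 1 drives the choice of flow rate and sampling period.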
Methods to Measure Ammonia, SOx, Nitric Acid and Nitrates: Problems and Solutions
As investigators have improved methods of sampling and analysis for atmospheric
nitrate, it has become evident that distribution between the gaseous and particle
phases has often been masked by experimental artifacts. Many early particle nitrate
data were based on analyses of extracts from glass fiber aerosol filters used in hi-
vol samplers. It is now known that these filters contain active sites which fix
gaseous HNO3 and make it appear as particle nitrate (1-3). Other filter materials
have also been shown to react with and collect gaseous HNO3 and SO2, creating a
positive particle nitrate and sulfate artifact (1). The use of an inert filter
material such as Teflon removes the "positive artifact" problem, except for the
possibility of reaction of gases with the collected aerosol particles. It has been
shown, however, that collected aerosol nitrate particles (true particle nitrate) may
be lost from filters due to reactions with other materials or to evaporation. Loss
of particles is known as "negative artifact." Reactive loss may occur if, for
example, H2SO4 aerosol comes in contact with nitrate aerosol on the filter surface.
Evaporative loss may occur if, due to decreases in ambient gas concentrations, the
solid and gaseous nitrate phases are no longer in equilibrium. Thus, on the one
hand, measurements of particle nitrate using glass fiber filters are expected to be
systematically high; on the other hand, measurements of particle nitrate using inert
Teflon filters are expected to be systematically low. The extent of loss due to
reaction and evaporation are difficult to predict. Recent measurements indicate
that because of the distribution between HNO3 and particle nitrate the glass fiber
filter nitrate overestimates may be considerable.
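The two filter biases just described can be summarized in a toy model: glass fiber reads true particle nitrate plus adsorbed HNO3 (positive artifact), while Teflon reads true particle nitrate minus reactive and evaporative losses (negative artifact), so the two readings bracket the true value. All numbers below are invented for illustration:

```python
# Toy model of the positive and negative filter artifacts described
# in the text. All values are invented for illustration only.
true_particle_no3 = 2.0    # ug/m3, the quantity we actually want
adsorbed_hno3     = 1.5    # positive artifact on glass fiber filters
evap_and_reactive = 0.6    # negative artifact on Teflon filters

glass_fiber_reading = true_particle_no3 + adsorbed_hno3      # biased high
teflon_reading      = true_particle_no3 - evap_and_reactive  # biased low

# The true particle nitrate lies between the two readings:
assert teflon_reading <= true_particle_no3 <= glass_fiber_reading
```

The bracketing only bounds the answer; resolving it requires a method, such as the denuder difference technique described below in the text, that separates gaseous HNO3 from particle nitrate during sampling.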

Table 1
Typical Concentrations of Chemical Species
Related to Acid Dry Deposition

Gases                                Concentration
SO2                                  0.2   - 30
HNO3                                 0.1   - 15
NO2                                  0.2   - 30
HCl                                  0.1   - 10
HNO2                                 0.2   - 2
NH3                                  0.1   - 20
O3                                   30    - 150

Aerosols                             Concentration
SO4=                                 0.2   - 40
NO3-                                 0.1   - 10
H+                                   0.02  - 10
Trace Metals (elements, e.g.         0.001 - 30
  Na, As, Se, Pb)
Organics (organic acids,             0.001 - 30
  agricultural pesticides, etc.)

Methods currently under development or used in research studies to measure gas phase
nitric acid and other gaseous and particulate species, include the use of continuous
(real-time) and semi-continuous monitors as well as the integrative collection of
HNO3 on adsorbing materials. The continuous methods are:
(a)	chemiluminescence, and
(b)	Fourier transform long path infrared spectrometry (FTS-LPIR) (J_).
Methods involving preconcentration are:
(a)	collection of HNO3 on nylon or cotton followed by extraction,
conversion to nitrobenzene (C6H5NO2) and analysis by gas
chromatography (JJ;
(b)	reduction to ammonium ion (NH4+) of fixed inorganic nitrogen
collected on nylon filter, followed by iodophenol ammonia test;
(c)	collection of HNO3 on sodium chloride impregnated filters followed
by extraction and hydrazine reduction-diazotization analysis of nitrate;
(d)	denuder difference experiment: collection of total nitrate from two
parallel air streams, removal of nitric acid from one stream using
a diffusion denuder, and subsequent determination of HNO3 by
difference via ion chromatography, and
(e)	tungsten(VI) oxide technique: short term collection of nitric acid
on diffusion tubes followed by release and detection by
chemiluminescence.
This last technique is a considerable advance, permitting measurements down to 0.25
µg/m3 of nitric acid for sampling intervals of 20 minutes; it also permits
simultaneous measurement of NH3.
The denuder difference method (DDM) has been used by a number of investigators over
the past few years due to its relative simplicity, sensitivity, and ease of analysis
of the filters used to collect the species of interest (5, 6). The diagram of the
denuder difference method is shown in Figure 1. The system consists of a Teflon
cyclone (not shown in Figure 1) to remove the coarse particles from the sample and
precedes the nylon filter and denuder. The ambient air passes through the cyclone
at ~ 30 1/min, and into a manifold where two parallel samples are collected down-
stream of the two tubes. One of the tubes, the denuder, is coated with MgO; the
other tube, constructed of Teflon, is uncoated. The residence time of the gas and
fine particles in the denuder is 0.2 s. After the air sample exits the tubes, it
passes through a 25 mm diameter, 1 nm pore size Membrana nylon filter. The MgO
removes the HN03 and the true fine particle nitrate is collected on the nylon fil-

Table 2
Description of Sampling Systems and Instruments that Measure Chemical Species that Contribute
to Acidic Dry Deposition*

System                   Sampling time     Denuder/filter configuration          Species measured

DDM (cyclone, denuder,   Denuder: 1 day;   300 mm x 4 mm tube coated with MgO;   Fine particle nitrate, HNO3,
filter pack system)      filter pack:      nylon (25 mm), Teflon (47 mm),        SO4=, SO2, NO2
                         7 day             TEA-treated W41 filters

AIHL filter pack         4-24 hr           Teflon (47 mm) followed by            Fine particle nitrate, HNO3,
                                           NaCl-treated W41                      SO4=, SO2, NO2

Modified Canadian        1 day, 7 day,     1) Teflon, 2) nylon,                  SO4=, total nitrate, SO2
filter pack              28 day            3) W41 Na2CO3

Filter pack              > 12 hr           320 mm x 10 mm nylon-lined tube;      HNO3, NH3, H+, SO4=, SO2, NO2,
                                           Teflon, nylon, TEA-treated W41        nitrate

Annular denuder/filter   7 day, 24 hr,     Annular design(1); Na2CO3 (SO2,       HNO3, HNO2, SO2, HCl, NH3, SO4=,
pack in series           12-24 hr          HNO2, HNO3); citric acid (NH3);       H+, nitrate, NO2, and fine
                                           Teflon, nylon, TEA-treated W41        particle mass

NOAH filter pack         --                W41, Na2CO3                           SO4=, total nitrate, SO2

Tungstic acid            30 min            200 mm x 3 mm WO3 diffusion tube      HNO3 and NH3
diffusion tube

Continuous (real-time)   1 minute          --                                    O3, SO2, NOx
monitors

(1) The annular design is 20 times more efficient in removing gases than tubular designs of equivalent length and
Reynolds number operation.
* Sensitivity of the methods listed in this table that directly derive the analysis value from a filter or denuder is
approximately 0.1 µg/m3, assuming at least 2 m3 of air is sampled.

On the other nylon filter, HNO3 and fine particle nitrate are collected together.
The difference between the HNO3-plus-nitrate sample and the fine-nitrate sample is
the HNO3, hence the name of the method.
The nylon filters are removed and stored in a sealed desiccator at 5°C until they
are extracted. The nylon filters are extracted in 1 x 10^-5 M NaHCO3 solution in an
ultrasonic bath. The extract is analyzed by ion chromatography for nitrate content.
The precision of the method for 19 replicates is: nitrate = 0.1 µg/m3; nitric
acid = 0.2 µg/m3. The precision refers to an ambient air sample size of 2.0 m3.
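The subtraction underlying the denuder difference method can be sketched numerically. This is a minimal illustration only; the filter masses below are hypothetical, and the 2 m3 sample volume is the value quoted for the method:

```python
# Sketch of the denuder difference method arithmetic.
# Filter masses below are hypothetical illustrations, not measured data.

def concentration_ug_m3(mass_ug: float, volume_m3: float) -> float:
    """Convert extracted filter mass (ug) to an air concentration (ug/m3)."""
    return mass_ug / volume_m3

air_volume = 2.0  # m3, typical sample size quoted for the method

# Filter downstream of the uncoated Teflon tube: HNO3 + fine particle nitrate.
total_nitrate = concentration_ug_m3(6.4, air_volume)  # ug/m3 as NO3-

# Filter downstream of the MgO-coated denuder: fine particle nitrate only.
fine_nitrate = concentration_ug_m3(3.2, air_volume)   # ug/m3 as NO3-

# The difference is attributed to gaseous HNO3 (expressed as nitrate).
hno3_as_nitrate = total_nitrate - fine_nitrate
print(f"HNO3 (as NO3-): {hno3_as_nitrate:.1f} ug/m3")  # 1.6 ug/m3
```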
Each technique to measure the pollutants that contribute to dry acid deposition
problems has unique features. FTS-LPIR is suitable for providing benchmark measure-
ments of HNO3, SO2, and NH3 because measurement takes place in the atmosphere and
identification is made unambiguous by the recognition of characteristic infrared
absorption of the species. However, the equipment is not portable, and the method
has a minimum detection level of 5 ppb. Chemiluminescence has the inherent sensitiv-
ity of a rate sensor, but involves the measurement of small differences in a signal
that is frequently large and time-varying due to interferences (e.g., total oxides of
nitrogen, NOx; peroxyacetyl nitrate, PAN; and organic nitrates). Sample collection
techniques using treated filters generally require relatively simple equipment;
however, they require documentation of collection and release efficiencies, maximum
loading, and possible interferences.
Annular Denuder Systems
During the past five years a new design for denuders has emerged from the Laboratory
for Atmospheric Pollution of the National Research Council (CNR), Rome, Italy (10).
They have developed a denuder-filter pack arrangement that is suitable for the
simultaneous measurement of a variety of different species contributing to acid
deposition. Diffusion denuders are devices that take advantage of the different
diffusion coefficients of gas molecules and aerosols. Gas molecules diffuse rapidly
to the denuder wall, while fine particles (< 2.5 µm in aerodynamic diameter) proceed
unaffected through the denuder and are either discarded or recovered on a filter.
After sampling, the denuder can be extracted with the appropriate solvent and the
extract analyzed. Ferm demonstrated the applicability of this technique when he used
an oxalic acid coated denuder to collect and analyze for ammonia (8).
Possanzini et al. have made a substantial improvement in the design of denuders by
going to an annular tube configuration (see Figure 2) (7). The annular denuder

Figure 1. Denuder Difference Analysis Method for Nitrates and Nitric Acid

collects the gaseous pollutants by moving air through an annular space between two
concentric glass cylinders coated with the appropriate material. The walls that
collect the gas have been etched by sandblasting the surface with ~100 µm size
sand. This etching increases the capacity of the walls to support the denuding
chemical substrate. Possanzini et al. have shown that the equivalent diffusion
equation describing the denuder properties of an annular design may be expressed as
follows (7):

    C/C0 = (0.82 ± 0.10) exp[-(22.53 ± 1.22) A]                        Eq. 1

           πDL    d2 + d1
    A  =  ----- • -------
           4F     d2 - d1

where D = diffusion coefficient of the species; L = length of the tube; d2 = internal
diameter of the tube; d1 = diameter of the internal cylinder; F = flow rate;
C = concentration of gas exiting the denuder; and C0 = concentration of gas entering
the denuder.
Comparing Eq. 1 with the classical tubular denuder equation describing the conditions
needed to quantitatively remove a gas from an air stream, we can deduce the following:
(a) for equal sampling times and the same length of tubing, the annular denuder can
operate at 20 times the flow rate of a cylindrical denuder and still collect the gas
species of interest quantitatively; and (b) the Reynolds number (degree of turbulence)
would still be well under turbulent flow conditions for the annular tube system. For
this reason the EPA recognizes the potential of the annular tube denuder system as a
device for use in dry acid deposition studies.
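Eq. 1 can be exercised numerically to see why the annular geometry achieves near-quantitative collection at a high flow rate. The geometry and SO2 diffusivity below are plausible assumptions for illustration, not the actual CNR dimensions, and only the central fit values of Eq. 1 are used:

```python
import math

# Penetration through an annular denuder per Eq. 1 (central values only;
# the +/- terms in Eq. 1 are fit uncertainties and are ignored here).
def penetration(D, L, F, d1, d2):
    """C/C0 for a gas of diffusivity D (cm2/s) through an annulus:
    L = denuder length (cm), F = flow (cm3/s),
    d1 = inner cylinder diameter, d2 = outer tube internal diameter (cm)."""
    A = (math.pi * D * L / (4.0 * F)) * (d2 + d1) / (d2 - d1)
    return 0.82 * math.exp(-22.53 * A)

# Hypothetical but plausible values (geometry and diffusivity are assumptions):
D_SO2 = 0.13           # cm2/s, approximate diffusivity of SO2 in air
length = 20.0          # cm
flow = 15.0 * 1000 / 60  # 15 L/min expressed in cm3/s
p = penetration(D_SO2, length, flow, d1=2.0, d2=2.2)
print(f"penetration C/C0 = {p:.3f}, collection = {100 * (1 - p):.1f}%")
```

With these assumed dimensions the predicted collection exceeds 95%, consistent with the performance claimed for the annular system later in the text.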
In March and April of 1985 members of the CNR research laboratory, EPA, and Research
Triangle Institute performed a field experiment to evaluate the sampling character-
istics of the annular denuder system developed by CNR. The CNR group brought to
Research Triangle Park, North Carolina, several annular denuders to compare results
with measurements using EPA's denuder difference method and filter pack combination
system. The experiment was designed to measure HNO3, HNO2, nitrates, and SO2. The
annular denuder filter pack system used in this dry deposition study is shown in
Figure 3. The system consisted of a cyclone followed by two denuders coated with
glycerine and Na2CO3 and a filter pack. The denuders were coated by placing 20 ml of
a 50:50 methanol:water solution containing 1% Na2CO3 and 1% glycerine into the annular
space of the denuder. The tube was rotated to wet the surfaces of the denuder,



Figure 2. Annular Denuder

decanted, and the denuder dried with clean filtered air. A 47 mm diameter, 2 µm
pore size Membrana Zefluor filter followed by a 47 mm diameter, 2 µm pore size nylon
filter were housed in the filter pack connected to the exit of the second denuder as
shown in Figure 3. The flow through the system during the study was maintained at
15 ± 1 L/min with a differential downstream flow controller. Tables 3 and 4 give
the results of a comparison of two annular denuder systems operated in parallel to
collect both HNO3 and SO2 during the spring of 1985 in Research Triangle Park. We
can see from the tables that the reproducibility of the collection system is
typically better than ±5% for both SO2 and HNO3 at concentrations under 1 ppb
(< 5 µg/m3).
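Replicate agreement of this kind is commonly expressed as a relative percent difference between the parallel samplers. A minimal sketch follows; the paired values are hypothetical, chosen only to illustrate agreement within the ±5% level reported:

```python
# Relative percent difference (RPD) used to judge replicate agreement
# of two parallel annular denuder systems. Sample values are hypothetical.

def rpd(a: float, b: float) -> float:
    """|a - b| as a percentage of the pair mean."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

so2_pair = (6.6, 6.8)     # ug/m3, hypothetical parallel SO2 results
hno3_pair = (1.10, 1.05)  # ug/m3, hypothetical parallel HNO3 results

print(f"SO2 RPD:  {rpd(*so2_pair):.1f}%")
print(f"HNO3 RPD: {rpd(*hno3_pair):.1f}%")
```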
Figure 4 shows the concentration of HNO3 collected at 4-hour intervals in the Re-
search Triangle Park, North Carolina. The data are consistent with the photochemi-
cal predictions indicating HNO3 concentrations would be highest during the daylight
hours while the nighttime HNO3 concentrations would tend to diminish. Figure 5
further amplifies this prediction with data taken at 12-hour intervals (day/night)
during March and April of 1985. The aerosol particle nitrate data did not show a
distinctive diurnal pattern.
An interesting feature of the annular denuder assembly used in this study was the
HNO2 data obtained from the extract of the Na2CO3 coated denuders in the study
conducted in Research Triangle Park, North Carolina, in March and April of 1985.
We found the HNO2 concentrations often greater than 0.2 µg/m3 and, as predicted from
photochemical principles, higher at night than during the daytime. Sjodin and Ferm,
and Allegrini et al., have made this same observation for studies performed in
Gothenburg, Sweden, and Rome, Italy, respectively (9, 10). Nitrous acid, present as
NO2- in the chromatogram, is also observed in this analysis. The absence of NO2- in
the second denuder is evidence of the presence of HNO2 in the sample. A typical
chromatogram obtained from the ion chromatographic analysis of an extract of the
annular denuders is shown in Figure 6. There is visually no evidence of SO4= or NO3-
in the extract of the second denuder, indicating that the first denuder collected all
the HNO3 and SO2 from the sample stream.
Filter Packs and Other Denuder Approaches
Some investigators are using filter packs, and combinations of filter packs and denu-
ders, to collect HNO3, SO2, and NO2. One such assembly is described by Durham et al.
(11). This device (Figure 7) uses a nylon denuder followed by a series of filters
and filter packs. The denuder is designed to collect ~10% of the HNO3 from the

Figure 3. Diagram of Annular Denuder System Showing Cyclone,
Two Annular Denuders Coupled in Series, Followed by a Filter
Pack (Teflon and Nylon Filters) and Pump/Flow Controller

Table 3
U.S.-Italy Field Study, Spring 1985 -- Replicate ADM Test
Table 4
U.S.-Italy Field Study, Spring 1985 -- Replicate ADM Test
Figure 4. Nitric Acid Concentration in Air at Raleigh, NC,
March 28-29, 1985

Figure 5. Diurnal Variation in Nitrate Concentration, Raleigh,
NC, 1985

[Sample annotation: March 27, 1985, 1035-1605 hrs, 15 L/min; SO2 = 6.6, HNO2 = 0.4 µg/m3]
Figure 6. Typical Ion Chromatogram of Parallel Annular Denu-
ders Which Have Collected Ambient Air Samples. The secondary
denuder after the first denuder does not contain measurable
amounts of SO4= or NO3-, indicating the efficiency of the
first denuder.

Figure 7. ESRL Prototype Concentration Monitor for Estimating Acidic Dry Deposition

Figure 8. U.S. - Italy Annular Denuder Acid Deposition
Sampling System

sample stream. The balance of the HNO3 is collected on a nylon filter downstream of
a Teflon filter located after the denuder. Two filter packs containing alkaline-
treated filters to collect the SO2 and NO2 are part of the package. These two
filters have 2 L/min flowing through the assembly and therefore must be operated for
12 or more hours to collect enough material for a reliable analysis of these species
under most non-urban ambient conditions. For the HNO3 measurement to be accurate, a
reproducible collection of 10% of the HNO3 on the denuder must be maintained during
the sampling period. For example, a collection efficiency change of 1% in the
denuder results in a 10% uncertainty in the HNO3 measurement.
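One way to read this scaling, sketched below, is that the total HNO3 is inferred as the denuder catch divided by its nominal 10% efficiency, so a drift of one percentage point in efficiency maps to roughly a 10% relative error. All numbers are hypothetical:

```python
# Error propagation for a denuder designed to strip ~10% of the HNO3.
# Hypothetical numbers; only the ratio of efficiencies matters.

true_hno3 = 5.0    # ug/m3, assumed ambient concentration
design_eff = 0.10  # denuder nominally collects 10% of the HNO3
actual_eff = 0.11  # efficiency drifted by one percentage point

denuder_catch = true_hno3 * actual_eff      # what the denuder collected
inferred_hno3 = denuder_catch / design_eff  # inferred assuming 10%

rel_error = (inferred_hno3 - true_hno3) / true_hno3
print(f"relative error in HNO3: {rel_error:+.0%}")  # +10%
```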
A family of monitoring systems in use today is able to measure the acidic and basic
properties of the atmosphere. Instrumental methods for measuring these species are
generally expensive and require special environmental housing. For these reasons and
others, a family of sampling packages has been, and is being, developed and evaluated.
Filter pack and denuder combinations are the devices with the most promise for
obtaining the air quality data needed to support acid deposition research studies.
One of the most promising systems currently under development and evaluation is
based on the use of annular denuders and filter packs operated in series. This
system's main features are:
(a)	it operates at a relatively high flow rate while maintaining collec-
tion efficiencies greater than 95%;
(b)	it uses a substrate to collect the pollutant of interest that is
easily extractable in water and compatible with conventional labora-
tory analysis;
(c)	it needs to maintain only one flow;
(d)	it is relatively easy to set up, maintain, and manipulate during
preparation and extraction; and
(e)	it is relatively inexpensive.
More work is needed to validate the performance of all the denuders, filter packs,
and instrumentation being used today for acid deposition and related environmental
air quality studies. An intercomparison study to be conducted by the State of
California Air Resources Board in the fall of 1985 will provide a basis for selecting
the most reliable, cost-effective system for obtaining dry acid deposition data. The
Environmental Protection Agency has purchased an automated denuder package (Figure 8)
consisting of duplicate denuder and filter pack assemblies so that 12-hour samples
can be collected uninterrupted. This system will be evaluated as part of the
forthcoming California study.
1.	R.K. Stevens, ed. "Current methods to measure atmospheric nitric acid and
nitrate artifacts," 1979, EPA-600/2-79-05.
2.	B.R. Appel, S.M. Wall, Y. Tokiwa, and M. Haik. "Interference effects in samp-
ling particulate nitrate in ambient air," 1979, Atmos. Environ., 13:319-325.
3.	R.W. Shaw, R.K. Stevens, J. Bowermaster, J. Tesch, and E. Tew. "Measurements
of atmospheric nitrate and nitric acid: the denuder difference experiment,"
1982, Atmos. Environ., 16:845-853.
4.	R.S. Braman, T.J. Shelley, and W.A. McClenny. "Tungsten(VI) oxide for precon-
centration and determination of gaseous and particulate ammonia and nitric acid
in ambient air," 1982, Anal. Chem., 54:358-364.
5.	J. Forrest, D.J. Spandau, R.L. Tanner, and L. Newman. "Determination of atmos-
pheric nitrate and nitric acid employing a diffusion denuder with a filter
pack," 1982, Atmos. Environ., 16:1473-1485.
6.	S.H. Cadle, J.M. Dasch, and P.A. Mulawa. "The deposition velocity of nitric
acid, sulfur dioxide, and various particulate species," GM Report 4918 EWV-196,
January 28, 1985, General Motors Research Laboratories, Warren, MI 48090.
7.	M. Possanzini, A. Febo, and A. Liberti. "New design of a high-performance denu-
der for the sampling of atmospheric pollutants," 1983, Atmos. Environ., 17:2605-
8.	M. Ferm. "Method for the determination of atmospheric ammonia," 1979, Atmos.
Environ., 13:1385-1393.
9.	A. Sjodin and M. Ferm. "HONO measurements in Gothenburg, Sweden," 1985, Atmos.
Environ., in press.
10.	I. Allegrini, F. DeSantis, A. Febo, C. Perrino, M. Possanzini, and R.K.
Stevens. 1985, Proceedings, this conference.
11.	J.L. Durham and L.L. Spiller. "Measurement of gaseous, volatile and non-vola-
tile inorganic nitrate in Riverside, California," 1984, Volume 22, No. 2,
Division of Environmental Chemistry Extended Abstract Book, 184th National
ACS Meeting, Kansas City, MO.

Marvin Wesely*
The dry deposition of atmospheric trace substances is the transfer of substances to
the surface of the earth by all processes excluding those aided directly by precipi-
tation. These processes include the vertical turbulent transfer of trace gases and
fine particles and the gravitational settling of large particles in the lower
atmosphere. To address the dry deposition of acidic substances, we need mainly to
consider the major sulfur and nitrogen substances, which include SO2, HNO3 vapor,
NO2, particulate sulfate, and particulate nitrate. Others often considered are NO,
PAN, and organic sulfates. Ozone should be added to this list because it is a
substance that is often linked to harmful effects on surface vegetation and
materials, and its effects are frequently intermingled with the effects of airborne
sulfur and nitrogen substances. Here, our discussion will be confined mainly to SO2,
HNO3, NO2, particulate sulfur, particulate nitrate, and O3. Other species, although
also of interest, cannot yet be addressed even with limited confidence.
The dry deposition rates of most substances are highly variable. Since concentra-
tions alone show strong variations in time and space, discussion often centers on
the deposition velocity, defined as the downward deposition rate divided by the
concentration at a specified height, in order to work with a less variable quantity.
This approach is practical because deposition velocity and concentration are largely
independent variables for most surfaces exposed to ambient atmospheric conditions.
That is, with only a few exceptions, dosing a natural surface with sulfur and
nitrogen compounds at the concentrations usually found does not have a notable
immediate effect on the ability of the surface to remove that substance. Of course,
this is an approximation applied to fairly short time periods, and cumulative
effects associated with changes in the ability of a surface to remove substances
from the air can usually be observed and presumably taken into account when
deposition velocities must be estimated.
*Argonne National Laboratory, Argonne, IL.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

The concept of a deposition velocity is very useful for applications involving
modeling and monitoring of dry deposition; the value of vd is simply a number that
can be multiplied by the local concentration to produce an estimate of the deposition
rate. Despite this utility, the deposition velocity does not represent a fundamental
concept but is a derived quantity that alone tells us little about important
processes. Further, estimating appropriate deposition velocities that are site
specific and species dependent constitutes a major difficulty in the application of
inferential methods to derive deposition rates from concentration data.
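The inferential calculation itself is simply flux = vd x c, with attention to units. A minimal sketch follows; the deposition velocity of 0.5 cm/s and the concentration of 10 µg/m3 are illustrative assumptions, not recommended values:

```python
# Inferential (concentration-based) estimate of a dry deposition rate:
# flux = vd * c. Input values are hypothetical illustrations.

def dry_deposition_flux(vd_cm_s: float, conc_ug_m3: float) -> float:
    """Deposition rate (ug m^-2 s^-1) from a deposition velocity (cm/s)
    and a concentration (ug/m3) referenced to the same measurement height."""
    return (vd_cm_s / 100.0) * conc_ug_m3  # convert cm/s to m/s

# e.g., SO2 over a crop canopy: vd ~ 0.5 cm/s, c ~ 10 ug/m3 (assumed)
flux = dry_deposition_flux(0.5, 10.0)
print(f"flux = {flux:.3f} ug m^-2 s^-1")  # 0.050
daily = flux * 86400.0 / 1000.0           # ~4.3 mg m^-2 per day
print(f"daily deposition ~ {daily:.1f} mg m^-2")
```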
The task of evaluating dry deposition presently falls into two categories: monitor-
ing and direct measurement. The distinction is made because there is no simple
method of direct measurement that can be used routinely. All of the currently
popular flux-measurement methods (surface sampling, eddy correlation, etc.) are
practical only for intensive studies of dry deposition processes. In this paper,
our primary interest is in monitoring rather than short-term intensive studies of
dry deposition.
Dry deposition data are urgently needed for use in activities and assessments for
the National Acid Precipitation Assessment Program (NAPAP), for the following three
types of applications:
1.	to document trends, both spatial and temporal;
2.	to provide information supporting effects studies; and
3.	to evaluate numerical models.
Data requirements are different for these three applications, as summarized in
Table 1. For studies of trends, interest is in slow changes occurring over periods
of time extending to at least 10 years. We want a continuous record of deposition
at sites selected to be unaffected by local sources and representative of the areas
in which they exist. For this purpose, field monitoring of chemical data at numerous
sites will probably focus on measurements of concentration (c), because direct
measurement of the vertical flux is presently considered impractical for long-term
continuous efforts. Observations of surface and micrometeorological conditions
can be used to infer deposition velocities (vd) as they evolve in time for specific
sites and individual airborne trace substances.
Effects studies usually do not require such extended periods of operation, but fine
resolution in both space and time is often necessary. For studies of effects on

Table 1
Dry Deposition Information Required for Different Applications. Underlined
responses indicate main products desired.

                          Trends    Effects Research          Model Evaluation
                                    Materials   Ecological    Regional   Local

Local c
  measured                yes
  inferred                no
Area-averaged c
  measured                no(a)
  inferred                yes
Local vd
  measured                no(b)
  inferred                yes
Area-averaged vd
  measured                no(b)
  inferred                yes

(a) Local values obtained not as a final product, but as an intermediate step.
(b) Direct measurements desired but not considered practical because of the
long-term, intensive effort necessary.
(c) Deposition inferred by direct examination of the surface.
materials, the deposition rates are best evaluated by effects researchers, since
they can control the exposure periods and locations of manufactured samples and can
bring them to the laboratory, where sophisticated measurements of the surfaces can
be made. To extrapolate their findings to surfaces fixed in the field, however, they
require knowledge of concentrations at many locations; it is desirable to distinguish
between local and regional influences. For studies of effects on natural surfaces,
a variety of types of dry deposition information is needed because of the large
number of types of ecological effects being investigated. Evaluations of dry deposi-
tion rates in the field are needed, and the techniques used require the services of
specialists in dry deposition measurements.
For evaluation of numerical models, the spatial scale of the dry deposition informa-
tion is no larger than the smallest area resolved by the model, typically 20-200 km.
Even this requires averaging over areas where diverse surface types exist and where
the effects of surface irregularities are poorly understood; the associated research
questions are formidable because the direct dry deposition measurement techniques
used to date are proven only over fairly simple, uniform, flat surfaces. Hence,
development and testing of the dry deposition portion (module) of the model are
necessary to produce realistic estimates of dry deposition over real-world
landscapes. For this, fine time resolution (approximately three hours) is needed
over many surfaces.
A large number of techniques to measure dry deposition have been proposed and at-
tempted. According to a workshop conducted in 1979, these can be considered in two
groups: for monitoring and for research (1). For monitoring, the workshop concluded
that a high potential for successful development existed for the micrometeorological
techniques of modified Bowen ratio, variance, and eddy accumulation. These
techniques have now been studied and developed for a few years, but still fall short
of providing true monitoring capabilities (2, 3, 4, 6). Methods identified by the
workshop as having a medium potential for successful development included leaf
washing and surface snow sampling. These surface-sampling techniques have also been
developed and tested to some extent, and some valuable research results have been
produced (7, 8).
Methods to measure dry deposition directly include those listed above plus a wide
variety of other techniques, such as those reviewed by Hicks et al. (1). It is
likely that micrometeorological techniques, direct sampling of surfaces, mass
balances, and multiple tracer techniques will continue to provide the information
needed to understand and parameterize deposition velocities for use in indirect
monitoring and in the calculation of fluxes in numerical models. Different
measurement methods are best for different pollutants. At this time, for example,
eddy correlation appears to be preferred for measurements of ozone and sulfur
dioxide deposition, and gradient methods seem best for nitric acid vapor. Leaf
washing and throughfall measurement methods appear very promising for particulate
materials.
In the absence of simple direct monitoring methods, it is necessary to infer dry
deposition from quantities that can be measured easily. The standard approach is to
evaluate dry deposition from measurements of air concentration together with
knowledge of the transfer mechanism used to evaluate appropriate deposition
velocities. The so-called concentration monitoring approach (CMA) is the application
of this philosophy. In addition to measuring concentration, it is necessary to
measure the local meteorological and surface conditions needed to infer deposition
velocities. As shown in
Table 2, the ability to derive deposition velocities varies with chemical species,
because the amount of successful intensive research to evaluate and parameterize the
deposition velocities has varied for each substance. The state of the art just to
measure concentrations accurately and for long periods of time also is not uniform
from substance to substance. A troublesome subset of measurements concerns large
particle deposition; research is currently addressing the need for a standardized
collection surface to provide reliable measurements.
Table 2
Viewpoints on the State of the Art of Measurement of Trace Gases and
Aerosols and of Knowledge of Appropriate Deposition Velocities. Only
four subjective ratings are listed: good, fair, poor, and bad.

                        Concentration Measurement      Ability to     Suitability
                        Real-time    Filterpacks,      derive         for routine
                        monitors     denuders, etc.    deposition     monitoring
                                                       velocity

Sulfur dioxide
Nitric acid vapor
Nitrogen dioxide
Sulfate aerosol
Nitrate aerosol
Other submicron
  aerosol species
Existing knowledge allows application of the CMA to only fairly simple surface condi-
tions. If the intent is to evaluate dry deposition, it would not be helpful, for
example, to measure concentrations in rugged, mountainous areas, because appropriate
deposition velocities could not be produced. Further, the CMA is limited to
addressing only a fairly small area because the deposition velocity parameterizations
available are highly specific to the type and condition of the surface. Spatial
averages can be constructed only by adding up the contributions computed for all the
surfaces in the area considered. Such methods have yet to be tested by field
measurements of landscape-averaged deposition rates. For areas that contain
non-uniform surfaces, which dominate natural landscapes, methods of averaging are a
subject for future research.
Dry deposition research stations were established in 1983 at three locations with
the combined support of NOAA, EPA, and DOE. They are located near Oak Ridge,
Tennessee; at Argonne, Illinois; and near State College, Pennsylvania; and serve
southern forested surfaces, midwestern cropland, and eastern cropland, respectively.
These locations were chosen on the basis of the availability of the necessary
experimental capabilities and a fairly wide geographical distribution in the eastern
United States. Prototype devices that couple air chemistry filterpacks with
atmospheric and surface sensors have been deployed at these sites and a few satellite
stations. The deployment and analyses of these devices are a joint NOAA and USGS
activity, conducted in collaboration with EPA, intended to develop and verify the
procedures, both chemical and meteorological, associated with applying the CMA
protocol. At the end of 1984 EPA's pilot network operation also commenced. This
program is testing a variety of air concentration devices in field conditions, using
many of the same experimental locations as the NOAA and USGS activity.
The number of chemical substances measured in the first monitoring efforts with the
NOAA-USGS system is limited but includes SO2, HNO3, and a range of particulate
anions and cations. Ozone is not included, since there is no simple method yet
suitable for routine filterpack detection; ozone is nevertheless monitored routinely
with real-time sensors at each of the three core dry deposition stations. The
filterpacks and recorded meteorological and surface information are collected and
sent to analysis centers on a weekly basis. Weekly averaged concentration values
are evaluated, and diurnal cycles of deposition velocities are computed from
measured variables such as wind speed, wind direction variability, and surface
wetness. Corrections are applied for the error involved in the use of weekly
averages of concentration instead of hourly data throughout each week. Errors found
in early comparisons vary from 10 to 20%, although intensive experiments have shown
that the diurnal correlation between concentration and deposition velocity near the
surface can result in errors as large as 30% for ozone and is typically 0% for
particulate sulfur (9). An approach to lessen this source of error is to use a
concentration sampling method that differentiates between degrees of atmospheric
stability stratification or times of the day, such as daytime versus nighttime, but
this would increase the number of filterpacks that must be processed each week.
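The weekly-average error arises because the mean of the product vd x c differs from the product of the weekly means by the covariance of the two diurnal cycles. A sketch with synthetic, in-phase diurnal cycles (the amplitudes and phases are illustrative assumptions, not measured behavior) reproduces a bias of the quoted magnitude:

```python
import math

# Why weekly-averaged c times weekly-averaged vd misestimates the true
# mean flux: the diurnal covariance between vd and c is lost.
# The synthetic diurnal cycles below are illustrative assumptions.

hours = range(24 * 7)  # one week of hourly values
# Deposition velocity (cm/s) and concentration (ug/m3), both peaking midday:
vd = [0.5 + 0.4 * math.sin(2 * math.pi * (h % 24 - 6) / 24) for h in hours]
c = [10.0 + 4.0 * math.sin(2 * math.pi * (h % 24 - 6) / 24) for h in hours]

true_flux = sum(v * x for v, x in zip(vd, c)) / len(vd)  # mean of products
avg_flux = (sum(vd) / len(vd)) * (sum(c) / len(c))       # product of means

bias = (avg_flux - true_flux) / true_flux
print(f"true {true_flux:.2f}, averaged {avg_flux:.2f}, bias {bias:+.1%}")
```

With these assumed in-phase cycles the weekly-average product underestimates the true mean flux by roughly 14%, in line with the 10 to 20% errors quoted above; cycles that are out of phase would bias the estimate the other way.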

This research has been funded as part of the National Acid Precipitation Assessment
Program (NAPAP), partially by the U.S. Environmental Protection Agency (EPA) through
IAG DW89930069-01 to the U.S. Department of Energy; this article has not been sub-
jected to EPA review and therefore does not necessarily reflect the view of EPA, and
no official endorsement should be inferred. Support was also received directly, as
part of NAPAP, from the National Oceanic and Atmospheric Administration and the U.S.
Department of Energy.
1.	B.B. Hicks, M.L. Wesely, and J.L. Durham. "Critique of Methods to Measure Dry
Deposition: Workshop Summary." Report EPA-600/9-80-050 (NTIS Publication
PB81-126443), 1980, Springfield, VA.
2.	M.L. Wesely. "Turbulent Transport of Ozone to Surfaces Common in the Eastern
Half of the United States." In Trace Atmospheric Constituents: Properties,
Transformations, and Fates (edited by S.E. Schwartz), John Wiley & Sons, Inc.,
New York, 1983, pp. 345-370.
3.	B.B. Hicks and R.T. McMillen. "A Simulation of the Eddy Accumulation Method
for Measuring Pollutant Fluxes." 1984, J. Clim. Appl. Met. 23:637-643.
4.	J.G. Droppo, Jr. "Concurrent Measurements of Ozone Dry Deposition Using Eddy
Correlation and Profile Flux Methods." 1985, J. Geophys. Res. 90:2111-2118.
5.	B.J. Huebert and C.H. Robert. "The Dry Deposition of Nitric Acid to Grass."
1985, J. Geophys. Res. 90:2085-2090.
6.	R.E. Speer, K.A. Peterson, T.G. Ellestad, and J.L. Durham. "Test of a Prototype
Eddy Accumulator for Measuring Atmospheric Vertical Fluxes of Water Vapor and
Particulate Sulfate." 1985, J. Geophys. Res. 90:2119-2122.
7.	L.A. Barrie and J.L. Walmsley. "A Study of Sulphur Dioxide Deposition Velocities
to Snow in Northern Canada." 1978, Atmos. Environ. 12:2321-2332.
8.	S.E. Lindberg and G.M. Lovett. "Field Measurements of Particle Dry Deposition
Rates to Foliage and Inert Surfaces in a Forest Canopy." 1985, Environ. Sci.
Technol. 19:238-244.
9.	M.L. Wesely, D.R. Cook, R.L. Hart, and R.E. Speer. "Measurements and Parameter-
ization of Particulate Sulfur Dry Deposition Over Grass." 1985, J. Geophys.
Res. 90:2131-2141.

The dry deposition workshop was divided into working groups on siting criteria,
sampling and analysis, data handling, and methods specifications. Although dry
deposition measurement may not be as far along as wet deposition, a primary outcome
from the dry deposition session was specifications for how the measurement process
should be engineered.
The dry deposition siting criteria working group, chaired by George Sehmel,
addressed siting in relation to network objectives, siting philosophy, and siting
criteria; it made recommendations and identified specific research needs.
The sampling and analysis group, chaired by Bob Stevens, addressed which
technologies and methods can be applied to satisfy data collection requirements.
Sampling methods, equipment, and sampling times were discussed, and
recommendations for research were made.
The dry deposition data handling working group chaired by Tony Olsen developed a
general structure for data handling associated with the operation of dry deposition
monitoring networks. Although the structure was not as detailed as for wet deposi-
tion, it did reflect the current stage of development of dry deposition monitoring.
Important issues to be addressed were identified.
Specifications for dry deposition methods were discussed in the working group led by
Ray Hosker. While no methods "per se" appear to be good candidates for standardiza-
tion, a number of associated procedures could be standardized or established as
protocols. The working group developed desirable features of monitoring networks
and identified a number of research needs and general procedural recommendations.
In the final plenary session discussion, led by Steven Bromberg, the priorities and
process for achieving standardization in dry deposition monitoring were discussed.
Plans for installation of monitoring networks in the near future were presented in
this session, as well as discussion of the constraints and needs for further
standardization.

George Sehmel*
Siting criteria for dry deposition networks were discussed by this working group.
Program management was emphasized because many aspects of network design and
operation fall under siting criteria: siting either impacts or is impacted by them.
Our discussions did not consider the selection of air concentration and
meteorological measurement methods or of sampling and analysis methods. Although
knowledge of
the planned EPA dry deposition network provided information on siting criteria, the
working group included evaluation of siting criteria that might be applied to other
networks. Data from the three "core" stations in that network were not applied to
siting criteria. In the discussion, we use the terminology that networks are compo-
sed of "stations," and that each station could consist of several individual sites.
The working group considered the following:
•	Definition of network objectives
•	Philosophy for siting
•	siting criteria
•	station selection
•	representativeness
•	station operation
•	data collection protocol
•	Siting criteria which might be considered for inclusion in:
•	standard practices
•	siting criteria for potential standardization
•	Recommendations
•	Research needs
Network objectives must be clearly stated to firmly develop siting criteria. That
is, siting criteria must reflect the objectives of networks. Network objectives
*Battelle Northwest Laboratories, Richland WA.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

could be directed toward air quality trends in time and/or space, effects, and
modeling.
If networks are directed toward air quality trends in time and/or space, networks
must be sited to measure the desired trends. Depending on objectives, siting could
be on a local, intermediate, or nationwide scale. We feel individual stations
should be selected on the basis of variations in airborne concentrations.
If networks are directed toward effects, stations in the network must be sited in
sensitive areas. For instance, if effects on forest ecosystems are of prime consi-
deration, sampling stations should be located in forests. If effects on a watershed
are of prime consideration, sampling stations should be located in the watershed.
Similarly if effects on materials are of prime consideration, sampling stations
should be located in urban areas.
If networks are directed toward modeling, sampling stations should be oriented
toward evaluating the effects of sources and source abatement on downwind
concentrations and deposition. If modeling or effects are the prime focus of
networks, the concept of both permanent and temporary mobile sampling stations
should be considered in network design.
Objectives for a network must distinguish between the need for reporting airborne
concentrations and the need for evaluating area-average deposition fluxes. In
addition for the EPA network, objectives must include validation of techniques for
predicting deposition fluxes from airborne concentrations.
We feel that siting criteria must be directed toward the objective of data collec-
tion for determining processes of dry deposition fluxes. Because estimates of dry
deposition fluxes are limited both by uncertainties in computed dry deposition
velocities and by uncertainties in measured air concentrations, we feel that siting
criteria must emphasize data collection to fully describe the processes controlling
dry deposition.
Approximations and simplifying assumptions must be made to calculate dry deposition
rates at a station, or for area-average deposition velocities. Concentration mea-
surements by themselves are inadequate for predicting area-average deposition
fluxes.

The philosophy for siting includes consideration of siting criteria, station selec-
tion, representativeness, station operation, and data collection protocol. Each of
these considerations is discussed briefly.
There are many aspects of siting criteria that must be considered. The aspects
discussed were:
•	Siting criteria must consider variations in land usage and area-
average values.
•	Siting criteria may depend on the method selected for evaluating
dry deposition.
•	Development of siting criteria for measuring dry deposition is more
complicated than for developing siting criteria for airborne con-
centrations. Although dry deposition velocities are a function of
pollutant properties, meteorological and deposition surface proper-
ties control deposition of each species.
•	Siting criteria must consider changes in airborne concentrations as
a function of height, changes which may depend on dry deposition.
•	Siting criteria should consider potential effects of changes in
emissions on expected airborne concentrations and dry deposition.
Meteorological models should be used to assist station selection by predicting
expected variation of measured properties temporally and spatially around stations.
Even though predictive models may not yet be fully developed, modeling approaches should
include consideration of sources, wind direction, terrain, edge effects, and canopy
characteristics including height, homogeneity, density, and coupling coefficients
for wind speed above and below vegetative canopies.
Representativeness is a crucial siting criterion that depends on the specific objec-
tive of a network. Several areas of representativeness were discussed:
•	Network operation should be designed to ease data interpretation by
determining what station "representativeness" means for selection
of the station locations, and determining variations in measured
properties both temporally and spatially around locations selected
for network stations.
•	We feel that station locations are important for obtaining repre-
sentative concentrations. Siting criteria should consider expected

isopleths of airborne concentrations in order to assist meteorolog-
ical model validation/calibrations.
•	We feel that station representativeness must be investigated exper-
imentally for most stations, preferably at all stations in a network.
•	It was suggested that each station initially consist of a "perma-
nent" station and mobile sites for rapidly evaluating averages and
variations about each station.
•	Colocation of several sites at a station should be an important
consideration during initial operation of a network. Even though
there will be a limited number of sampling sites, we believe sever-
al sampling sites should be colocated at the same station to inves-
tigate temporal and area averages and variations in pollutant
concentrations and fluxes, and meteorological variables.
•	Colocation means there should be several instrumented sites at each
station; because of funding limitations, this may come at the
expense of reducing the total number of stations in a network.
Station operation must be directed toward data usage. Average concentrations,
deposition rates, and temporal and spatial variations need to be evaluated. Data
users will need to know if there are statistically significant differences in mea-
sured properties and variations of properties between monitoring stations, compared
to temporal and spatial variations in measured properties at a single station. That
is, are temporal and spatial within-station variations less than, equal to, or
greater than between-station variations?
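The within-station versus between-station question can be framed as a simple variance decomposition. A minimal sketch in Python (the station labels and concentration values are hypothetical, for illustration only):

```python
# Sketch: compare within-station and between-station variance for a
# hypothetical network. Station names and concentrations are invented
# for illustration only.
from statistics import mean, pvariance

# Weekly SO2 concentrations (ppb) at three hypothetical stations,
# each with several colocated or repeated measurements.
stations = {
    "A": [4.1, 3.8, 4.5, 4.0],
    "B": [6.2, 5.9, 6.5, 6.0],
    "C": [5.0, 5.3, 4.8, 5.1],
}

# Within-station variance: average of each station's own variance.
within = mean(pvariance(v) for v in stations.values())

# Between-station variance: variance of the station means.
between = pvariance([mean(v) for v in stations.values()])

print(f"within-station variance:  {within:.3f}")
print(f"between-station variance: {between:.3f}")
print("between > within:", between > within)
```

If the between-station variance does not clearly exceed the within-station variance, the stations are not resolving spatial differences beyond local noise, which bears directly on how many colocated sites versus how many stations a fixed budget should buy.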
We believe that networks must be designed and operated to maximize the success of
subsequent data analysis. We emphasize that networks must consider how small "aver-
aging" windows can be used so that data stratification ability is not lost (i.e. many
data users will need averages for time periods of less than a week).
Areas of data collection protocol discussed include:
•	Depending on the objectives for a station, sampling could be con-
tinuous or stratified as a function of time and meteorological conditions.
•	Data analysis may be helped if data collection is segregated as a
function of important variables. Representative concentrations
might imply data stratification as a function of wind direction
sectors, wind speed increments, sampling times, atmospheric stabil-
ity, etc.

•	Since airborne concentrations at some height above the canopy are a
function of deposition to the upwind fetch, consideration should be
given to operating the station to stratify data as a function of a
wind direction sector from that upwind fetch.
• Since deposition velocities increase with increasing wind speed
(friction velocity), data stratification could include data collec-
tion above minimum deposition fluxes.
The following potential standard practices for station operation were discussed:
Station Operators: Stations should be selected at locations where qualified opera-
tors are available to operate the stations. Standards should
address effects of operator schedules on servicing the stations,
and what is to be done during weekends, holidays, and vacations.
Precipitation:	Standard practices should address effects of precipitation on
collected data and station operation. Standard practices should
specify if stations will be operated or deactivated during
precipitation, and how precipitation is to be documented.
Sampling Height: Standard practices should address sampling height in relation-
ship to the height of the vegetative canopy and height of the
surrounding terrain, and whether multiple sampling heights
should be required for estimating deposition fluxes.
Sampling Location: Standard practices should address sampling locations in rela-
tionship to the extent of the vegetative canopy, and selection
of station location in the canopy, in clearings, and distances
from the edges between the canopy and clearing.
Siting criteria that might be considered for inclusion in standards were discussed,
but the working group feels that all standards must be subject to revision after
experience is obtained using networks. Although we can learn from experiences of
siting for wet deposition, some siting criteria are different because of the differ-
ences in removal processes for wet and dry deposition. The areas suggested for
possible standardization for dry deposition follow:
Station Stability:	Since station stability is essential for networks evaluating
long-term trends and effects of dry deposition, it is essential
that stations be located on federal land, state lands, purchased
land, or land with long-term leases to facilitate continuity of
station operation.
Station Location:	Sites should be representative of land usage and representative
of pollutants.
Electrical Power:	Sites should be selected to include availability of electrical
power.
Seasonal Access:	Stations should have year-round access, or should standards
allow data collection only when the station is accessible to
vehicular traffic for servicing? If not serviced, is the
impact acceptable for operation of networks?
Stations should also be selected on the basis of distances from roads, blowing
dust, local sources, and large sources.
Standards should address documentation of the station description. Documentation
should include, but not be limited to, the following: land use map, topographic
map, station-specific map, aerial photographs, surface photographs, wind rose, and
sampling protocols.
The following recommendations for siting and network operation were made:
•	In order to more firmly establish siting criteria that would
reflect the physical processes being considered, the objectives of
a network must be defined precisely. Once objectives are
established and described, the distinct and differing requirements
for networks must be examined in greater detail.
•	Given the present research knowledge about dry deposition, there is
a need to build in flexibility and feedback mechanisms in station
selection and operation, especially during the earliest stages of
network operation.
•	There is a need to test/validate siting assumptions experimentally
for the selected station locations. Flexibility in network design
may permit relocating the station.
•	In order to develop siting criteria, there is a need for estimating
the range of dry deposition velocities expected for stations in the
network. Estimates of spatial variations in deposition fluxes are
needed to establish station representativeness.
•	The criteria for specifying sampling height(s) need to be documen-
ted. Documentation may include estimates of measured parameters as
a function of height.

•	There is a need to develop and document an objective method for
using maps of land usage, airborne concentrations, etc., for selec-
tion of station location.
•	To more firmly establish criteria for siting, we feel an increased
effort should be directed toward reducing the greatest uncertainty
in predicting dry deposition. The deposition rate is equal to the
product of airborne concentration times the dry deposition velo-
city. Since concentration measurements are usually more precise
than estimates of dry deposition velocities, improvements are
needed for predicting dry deposition velocities.
•	Methods must be developed and validated for predicting area-average
dry deposition velocities for heterogeneous surfaces composed of
sub-areas of homogeneous deposition surfaces.
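For independent errors, the relative uncertainty of the flux product described in the recommendations above combines in quadrature, which makes the dominance of the deposition-velocity term easy to see. A minimal sketch (the uncertainty percentages are assumed, illustrative values):

```python
# Sketch: error propagation for the deposition flux F = C * vd.
# For independent relative errors, (dF/F)^2 = (dC/C)^2 + (dvd/vd)^2.
# The uncertainty percentages below are illustrative assumptions.
import math

rel_err_C = 0.10   # assume 10% uncertainty in airborne concentration
rel_err_vd = 0.50  # assume 50% uncertainty in deposition velocity

rel_err_F = math.sqrt(rel_err_C**2 + rel_err_vd**2)
print(f"relative uncertainty in F: {rel_err_F:.2%}")

# The vd term dominates: halving the concentration error barely
# changes the total, while halving the vd error nearly halves it.
print(f"with dC/C halved:   {math.sqrt((rel_err_C / 2)**2 + rel_err_vd**2):.2%}")
print(f"with dvd/vd halved: {math.sqrt(rel_err_C**2 + (rel_err_vd / 2)**2):.2%}")
```

Under these assumed numbers, improving the concentration measurement alone leaves the flux uncertainty essentially unchanged, which is the quantitative content of the recommendation to direct effort at the deposition velocity.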
Research needs were identified for improving the development of siting criteria,
network operation, and data analysis for the proposed dry deposition network for
EPA. Although some of the needs may be satisfied by operation of the three existing
dry deposition research "core" stations, the following needs were identified:
•	Research is needed to reduce uncertainties in measurement methods
for dry deposition and in estimating methods for area-average dry
deposition. One objective is to estimate area-average deposition
velocities with accuracies equivalent to accuracies for estimates
of wet deposition on large time and space scales.
•	Our knowledge of dry deposition velocities for the wide range of
surfaces and pollutants that are of interest for a dry deposition
network is limited. Research is needed to develop a validated
method to translate station-specific dry deposition velocities to
area-average dry deposition velocities and rates. This translation
from point measurements also requires research to evaluate area-
average dry deposition velocities experimentally for both homogen-
eous and heterogeneous surfaces.
•	Continued laboratory and small-scale dry deposition experiments are
needed to generalize area-average results because effects of all
variables controlling dry deposition cannot be economically evalua-
ted in field experiments for area-average dry deposition. Dry
deposition needs to be evaluated in both wind tunnel and chamber
experiments.
•	Expanded research is needed to examine experimentally and theoreti-
cally deposition rates as a function of canopy height and density,
surface moisture, canopy clearings and edges for tall vegetative
stands, topography, co-pollutants in regions with high airborne
concentrations, etc.

•	Research is needed to develop techniques for quantifying represen-
tativeness, which includes evaluations of variations. Research is
also needed to quantify the variations around stations for concen-
trations as a function of distributions of canopy characteristics
and topography. Similarly if dry deposition rates are measured,
variations are needed for dry deposition rates as a function of
distributions of canopy characteristics and topography.
•	Research is needed to develop and validate measurement techniques
for deposition associated with fog and cloud impingement.

Robert K. Stevens*
The objectives of the working group were to determine which technologies/methods can
be applied to satisfy data collection requirements in order to:
1.	Obtain pollutant concentration trends at acid dry deposition sites
(1-7 day averages);
2.	Obtain pollutant concentration measurements to assess effects on
ecological systems (12-24 hour averages); and
3.	Obtain pollutant concentration measurements at a precision and time
resolution suitable for input into the various dry deposition models
currently under development (1-6 hour averages at 10-15% precision).
The group discussed methods that have the potential of providing data to satisfy
the requirements of the three types of research described above. Table 1 is a list
of the pollutants generally considered to be contributing to acidic dry deposition.
Denuder Difference Filter Pack Method
Denuder difference method and filter pack studies conducted by EPA/RTP suggest SO2
is converted to sulfate on nylon filters. The working group discussion indicated the
amount of SO2 converted was highly variable and appeared to be a function of a
variety of factors. For example:
a.	SO2 is converted but not quantitatively to sulfate on nylon fil-
ters. Therefore filter packs which are designed to collect HNO3
after a teflon filter may not provide accurate SO2 data.
b.	TEA-treated filters to collect NO2 are influenced by flow rate and
temperature. Flow rates of less than 2 L/min and temperatures
between 10-40°C appear to provide reliable data; below 5°C a loss
of collection efficiency for NO2 is experienced.
*Environmental Protection Agency, Research Triangle Park NC.
c.	PAN is also collected on TEA treated filters.
d.	Recovery of NO2 from TEA treated filters and interferences are a
function of humidity.
Annular Denuder
The discussion of this device centered on elaboration of its theory. Some
questioned whether the system can be used to collect samples over a seven-day
period. Dr. Allegrini stated that the annular denuder system has been operated over
a seven-day period with no apparent loss of efficiency for collection of HNO3 or
SO2. Dr. Allegrini stated that CNR is working on a KI-Na arsenite/glycerine coating
to collect NO2. Preliminary data show that a 2 cm diameter x 150 cm long tube
coated with the KI-Na arsenite quantitatively collects NO2 at 5 L/min.
Brookhaven Filter Pack
This system basically is a filter pack composed of 3 filters in series that collects
non-size-fractionated aerosols, HNO3, and SO2. The system is primarily designed to
collect over sampling periods from 1 to 24 hours. Sample flow rate is 1 m3 per
minute. Some questions were raised concerning the ability of the filter pack to
differentiate between fine particle nitrate and nitric acid. Newman cited work
obtained by his group at Whiteface Mountain which suggested most of the atmospheric
nitrate was in the form of HNO3. Variations in HNO3/NO3 concentrations in different air sheds
were discussed.
Canadian and NOAA Filter Pack
The device is similar to the Brookhaven device except flow rates are in the range of
1-20 L/min. Their systems use nylon filters to collect the HNO3. Some studies sug-
gest nylon also collects SO2 that may compromise the SO2 measurements. Also collec-
tion efficiency of SO2 and NO2 appear to be affected by temperature and humidity.
The group agreed the system's main virtue is simplicity. The Canadian system heats
incoming air to prevent clogging of filter with ice.
Cyclone Denuder Concentration Filter Pack
The system is designed to obtain 12 hr to 7 day integrated samples. Limitations are
identical to filter pack method. A system has been designed to transport denuder
and filter packs between the laboratory and field site. There was discussion of
whether this system could operate 7 days without overloading filters downstream of
the denuders.

Table 1
Summary of Instrumental Methods
[Table illegible in the source scan. Recoverable fragments list instrumental
methods such as sulfur flame photometric detection (FPD), pulsed fluorescence,
ultraviolet and infrared absorption (tunable diode laser), and tungstic acid
diffusion tubes, with detection limits ranging from about 0.1 to 5 ppb, and sources
including commercial vendors and university laboratories.]
Denuder Differences: Filter Pack Assembly
The sampling system, a combination of tubular denuders and filter packs, has been
designed and is currently undergoing evaluation. The system is designed primarily
to collect integrated samples over seven days.
Instrumental Methods
The working group discussed a variety of instrumental methods to monitor O3, SO2,
NOx, and sulfate. Discussion centered on modification of commercial systems to
measure low-ppb or sub-ppb concentrations of the dry deposition pollutants.
Commercially available O3 monitors, when properly calibrated, provide reliable
real-time data.
A brief description follows of each of the methods currently being considered for
deployment in dry acid deposition networks. A unique feature of this section is
that the developers of the methodology were part of the working group and were key
contributors to this section of the report.
Filter Pack Methods
Principle of Operation. The principle of the filter pack is that air is pulled
through filters in series to sample particles, HNO3, and SO2, in that order. The
major advantages are that it measures major species, has a long history of
application, is simple to deploy, is low in cost, and analysis is by routine
methods.
Potential Problems. The potential problems include transfer of NH4NO3 from the
particle filter to subsequent filters, SO2 retention on the particle filter, HNO3
retention on the particle filter, and neutralization of H+ on the particle filter.
These problems are a greater concern with longer sampling times and in light of
sulfur retention on nylon filters.
Denuder Samplers
Principle of Operation. Denuder samplers employ the diffusion of gases to surfaces
where they can be collected. The rate of diffusion is affected by gas viscosity,
temperature, Reynolds number, gas composition, flow rate, residence time (length of
denuder), and the number of waters of hydration of the gases of interest. The
collection of the gas by the surface also depends on these same variables. All
samplers rely on either filters or cyclones to remove particles from the air stream.
Advantages. Denuder samplers can provide very reliable measurements for selected
gases under tightly controlled sampling conditions. The approach also attempts to
separate gases from particulate matter.
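The gas/particle separation exploits the enormous difference in diffusivity between gases and fine particles. As an illustration (not a method from this report), the classic Gormley-Kennedy series for laminar flow in a circular tube can be used to compare penetrations; the tube dimensions and diffusivities below are assumed, representative values:

```python
# Sketch: penetration of a gas vs. a fine particle through a tubular
# denuder, using the Gormley-Kennedy series for laminar tube flow.
# Dimensions and diffusivities below are assumed, illustrative values.
import math

def penetration(D_cm2_s, length_cm, Q_cm3_s):
    """Fraction penetrating a circular tube (Gormley-Kennedy, first 3 terms)."""
    mu = D_cm2_s * length_cm / Q_cm3_s
    return (0.8191 * math.exp(-11.49 * mu)
            + 0.0975 * math.exp(-70.1 * mu)
            + 0.0325 * math.exp(-179.0 * mu))

L = 50.0            # tube length, cm (assumed)
Q = 1.0 * 1000 / 60 # 1 L/min expressed in cm^3/s
D_hno3 = 0.12       # HNO3 diffusivity, cm^2/s (approximate)
D_particle = 7e-6   # ~0.1 um particle diffusivity, cm^2/s (approximate)

# Note: the truncated series sums to ~0.95 even at mu = 0, so values
# near 0.95 indicate essentially complete penetration.
print(f"HNO3 penetration:     {penetration(D_hno3, L, Q):.3f}")     # gas mostly collected
print(f"particle penetration: {penetration(D_particle, L, Q):.3f}") # particles mostly pass
```

With these assumed numbers the gas is almost entirely deposited on the denuder wall while the particle penetrates essentially unattenuated, which is the behavior the text describes.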
Denuder Accumulator and Filter Pack
Principle of Operation. This system measures gaseous HNO3, NO3-, NO2, SO2, SO4-2,
NH3, NH4+, H+, and fine particles. Total fine particulate matter and trace elements
of this material can also be determined if needed. The method draws air through an
all-teflon system; each pollutant is removed for further analyses. The particles
are separated into coarse and fine fractions with a cyclone. The cyclone is
followed by a vortex-removing tee and a 1 cm diameter tubular denuder which contains
a 3 mm wide nylon strip 8 mm downstream of the vortex remover. This removes about
10% of the gaseous HNO3. It is followed by a 3 mm wide strip that removes about 15%
of the gaseous NH3. The sample then passes into a three-filter holder with a teflon
filter for fine particle collection, and a nylon filter to remove the remaining
gaseous HNO3 and any HNO3 from the decomposition of NH4NO3 collected on the teflon
filter. The final filter is an oxalic acid coated filter to collect the remaining
gaseous NH3 and any NH3 from the decomposition of NH4NO3. The flow stream is split,
and a 2 L/min stream is passed through a tandem filter system with two filters
coated with triethanolamine (TEA). The overall flow through the system is 18 L/min.
The system contains two identical trains with teflon solenoid valves for
intermittent sampling, e.g. day/night.
Characteristics. The advantages are that the system can operate under most suggested
procedures and collects most of the desired pollutants. The system is more compli-
cated than filter packs and flow must be maintained within the predetermined range.
Limitations. The denuder samplers require a higher level of operator skill than
filter pack samplers. If the filter removing device fails to operate as intended, the
sample may be contaminated without the analyst being aware of it. The flow rate
must be maintained in a tightly defined range as for all sampling procedures.

Denuder Difference/Filter Pack System
Principles of Operation. The DDM is used to measure fine particulate nitrate by
collection on a nylon filter after removal of HNO3 with a denuder. It also measures
HNO3 indirectly by difference between total HNO3 + PN collected on a nylon filter
and the PN obtained from the denuded sample stream.
Relative Accuracy. Accuracy is limited by the low flow (1 L/min) and, for HNO3, by
the inherent loss of accuracy when measuring the difference between two samples,
especially if the difference is small.
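That loss of accuracy can be quantified: for independent measurements, absolute uncertainties add in quadrature, so the relative error of a small difference is much larger than that of either input. A sketch with assumed, illustrative numbers:

```python
# Sketch: uncertainty of HNO3 measured by difference (DDM).
# HNO3 = total (undenuded) nitrate - particulate nitrate (denuded line).
# Absolute errors add in quadrature for independent measurements.
# All concentrations and uncertainties below are assumed, illustrative values.
import math

total_no3 = 2.0  # ug/m^3 on undenuded nylon filter (HNO3 + PN), assumed
pn = 1.8         # ug/m^3 particulate nitrate from denuded line, assumed
sigma = 0.1      # assumed absolute uncertainty of each catch, ug/m^3

hno3 = total_no3 - pn
sigma_diff = math.sqrt(2) * sigma  # quadrature sum of two equal errors

print(f"HNO3 = {hno3:.2f} +/- {sigma_diff:.2f} ug/m^3 "
      f"({sigma_diff / hno3:.0%} relative error)")
# Each input is known to 5%, but the small difference is only known
# to roughly 70%.
```

This is why the text flags the method as weakest exactly when HNO3 is a small fraction of total nitrate.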
Advantages. DDM separates coarse particles prior to fine particle collection. It
overcomes problems of acid gas and particle volatility (NH3/NH4NO3) reactions on
the measurement of particulate nitrate. DDM uses routine laboratory analytical
methods, e.g. IC.
Disadvantages. DDM is limited by its low flow rate (1 L/min). Therefore long sampling
times, e.g. seven days, are needed for ambient air. DDM measures only PN and HNO3,
no other species. DDM does not preserve acidity of particles, e.g. does not remove
ammonia gas.
Observations. DDM is a moderately complex system in regard to the number of compo-
nents. It can be automated, but for sequential sampling it requires replication of
the system.
Costs. The complete system cost includes cyclone, flow control system, denuder,
filter packs and pump - about $3,000.
To consider total costs, the costs for the following must be considered: analytical
systems, extraction of filters, coating and extraction of denuders, and standard IC
analyses.
Annular Denuder
Principle. Selective discrimination between gases and particles is based on the
diffusion of gases to a reactive surface. It yields direct measurement of selected
gaseous species. The particles pass through the denuder and can be collected on a
filter. It is generally used with a teflon cyclone inlet to remove large (>5 µm)
particles. The annular design allows sampling at relatively high flow rates of
15-20 L/min with compact (5 x 25 cm) denuders.

Characteristics. Use of two denuders in series can provide a working blank for
particles or for high concentrations of relatively unreactive gases (e.g. collection
of 1% of 20 µg/m3 NO2 when measuring 1 µg/m3 of HNO2).
The minimum detection limits (12 hr at 15 L/min) with two denuders in series coated
with Na2CO3 are:
HNO3	0.1 µg/m3	possible even if NO3- >> HNO3 (20:1)
SO2	0.1 µg/m3	possible even if SO4= >> SO2
HNO2	0.1 µg/m3
HCl	0.1 µg/m3	limited by blank
With different coatings other substances are collected:
NH3 (citric acid surface)	0.2 µg/m3 MDL at flow = 25 L/min
NO2 (KI + NaAsO2)	0.4 µg/m3 MDL at flow = 5 L/min
(interference from PAN under study)
Typically 3 cm x 25 cm denuder tubes are extracted with a minimum of 10 ml of solu-
tion and the extract is analyzed by standard techniques (e.g. IC or colorimetry).
The denuders for NO3- analysis are typically followed by two filters; the first is
teflon, the second is nylon (for collection of volatile nitrates). Denuders for NH3
analysis are followed by a teflon filter for NH4+. A second denuder (or filter pack)
can be used to collect NH3 from volatile NH4+ salts.
A microprocessor controlled, temperature compensated flow measurement and control
system has been designed to provide extremely precise flow data. The system
provides a printed record of sample date, time, flow, temperature, power failures,
etc. The flow control system and denuder system are commercially available.
Instrumental Techniques
For purposes of discussion here, instrumental techniques are defined as techniques
that sample and analyze at the same site on a continuous or semi-continuous
(≤ 1 hr) manner. Examples are the current monitors commercially available for the
USA criteria pollutants. In some cases, the response of these or similar research
prototypes based on the same principles can be enhanced, e.g. NO and O3 chemilumi-
nescence techniques combined with selective gas scrubbers for NO, NO2, HNO3 and NH3.
In the case of O3 the sensitivity of the commercial monitors, based on either the
chemiluminescence reaction of ozone and ethylene or on absorption of radiation at

2537 Å, is already sufficient. New techniques in part stimulated by the needs for
trend monitoring are also emerging, e.g. the instrumental technique based on the
reaction of NO2 and a surface supported solution containing luminol. Research
groups involved in special studies requiring high sensitivity and specificity have
evolved techniques that are one-of-a-kind and often exist only as a research proto-
type. All these techniques are summarized in Table 1 with respect to sensitivity,
approximate cost, availability and sources.
With respect to the different monitoring requirements to establish trends or effects
or to test the applicability of models, the continuous or semi-continuous temporal
response of the instrumental technique can satisfy all reasonable requirements by
integrating signal response. However, calibration of response is necessary on a
periodic basis, and the availability of standards and standard delivery systems is
an additional requirement.
Special instruments based on fundamental physical parameters such as absorption
coefficients have been developed in some cases for use with audit functions or in
one-of-a-kind monitoring applications in which unambiguous identification and sensi-
tive quantitation are necessary. Two examples of this are the ultraviolet photometer
for O3, and the tunable diode laser system based on infrared absorption of
simple gases. Even HNO2 has been measured using long, open air paths over which
ultraviolet absorption by HNO2 has been measured.
Rating of Sampling Systems for Dry Acid Deposition Measurement
The groups were provided worksheets containing a list of criteria for which they
were asked to rate the various sampling systems discussed at the workshop to measure
long term trends. Attached are the results of the evaluation (Table 2). From this
evaluation the working groups were not able to select one sampling approach superior
to another. One suggested explanation for this observation was that the working
group was composed of individuals who were each more familiar with one device and,
therefore, the ratings possibly reflect their biases.
The group was not able to adequately address or evaluate these systems for ecologi-
cal or dry deposition modeling purposes, although some of the devices have the
potential of meeting the criteria required for these types of studies.

Table 2
Methods for Trends Measurement
Rating criteria (some labels truncated in the source):
1.	Based on [...]
2.	Reliable field [...]
3.	Measures all gases reliably
4.	Measures particle composition
5.	Analysis [...]
6.	System [...]
7.	Particle size [...]
8.	Availability and [...]
9.	Flow control
10.	Sampling time [...]
11.	Amenable to QA audits
Scores for the six systems rated (system headings illegible in the source):
Unweighted totals:	755	696	687	663	667	617
Means (standard deviations):	75.5 (8)	69.6 (9.1)	68.7 (6.5)	66.3 (4.6)	66.3 (12.9)	66.5 (11.4)

Sampling Duration
One aspect of the demand on sampling time duration can be expressed as follows:
1.	Make a sensitivity calculation based on the relationship
F = v_gt * C_t
where v_gt and C_t are the deposition velocity and concentration averaged
over, for instance, 1, 4, 6, or 24 hours, or 1, 2, 7, or 30 days.
Then compare this with:
F = v_gm * C_m * K
where v_gm and C_m are averaged over a long period, e.g. 1 or 3
months, and K is a correction coefficient for the covariance between C_m
and v_gm.
2.	Consider spatial variability in concentration.
3.	For a given budget, compare uncertainties due to temporal variations
in C with uncertainties due to areal variability in C.
4.	Determine which of the following would give the lowest uncertainty
for a given overall cost (equipment, sampling, and analysis).
a.	Sample only for a portion of a day.
b.	Use 2-6 parallel collectors, each operating for a portion of
the day over several days.
c.	Each sampler operates for several days.
Consider constraints regarding sampling duration due to technical matters (overload,
aging, etc.).
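The step 1 comparison above can be sketched numerically. The series below are synthetic and every number is illustrative; the sketch simply assumes that v_g and C covary (for example, both depending on atmospheric stability), so that the long-period product v_gm * C_m requires the covariance correction K.

```python
import random

random.seed(1)

# Synthetic hourly deposition velocity (cm/s) and concentration (ug/m^3).
# The two series share a common driver, so they covary; all values are
# illustrative, not measurements.
hours = 24 * 30  # one month of hourly samples
v_g, C = [], []
for _ in range(hours):
    common = random.gauss(0.0, 1.0)        # shared driver (e.g., stability)
    v_g.append(max(0.05, 0.5 + 0.2 * common + random.gauss(0.0, 0.05)))
    C.append(max(0.1, 5.0 + 2.0 * common + random.gauss(0.0, 0.5)))

# "True" monthly flux: average of the short-interval products v_gt * C_t.
F_true = sum(v * c for v, c in zip(v_g, C)) / hours

# Flux from long-period averages: F = v_gm * C_m * K.
v_gm = sum(v_g) / hours
C_m = sum(C) / hours
K = F_true / (v_gm * C_m)   # covariance correction coefficient

print(f"F (short-interval averaging) = {F_true:.3f}")
print(f"v_gm * C_m (uncorrected)     = {v_gm * C_m:.3f}")
print(f"K (covariance correction)    = {K:.3f}")
```

When v_g and C covary positively, K exceeds 1 and the uncorrected long-period product underestimates the flux; sampling-duration choices therefore trade off against the need to estimate K.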
•	Evaluation and comparison of sampling systems currently being used
to measure acid dry deposition is required.
•	Annular denuder package should be included in evaluation programs.
(Delete denuder difference method since annular denuder supersedes
this methodology.)
•	Perform sensitivity analyses of trade-offs between spatial and
temporal variation in pollutant concentration.
•	Develop a system to obtain data on coarse particle composition to
complement fine particle data bases.

•	Evaluate instrumentation relative to the needs of trends, ecologi-
cal, or dry deposition studies.
•	Delineate requirements to measure H2O2 and other peroxides.
•	Expand analytical methodology to include all major gaseous and
aerosol nitrogen species (e.g. NH3, HNO2, and PAN).

Anthony R. Olsen*
The working group considered the elements of the Data Management Function that are
an integral part of the operation of any dry deposition network. Data management
function refers to the entire process of data management, including administration,
documentation and a data management system. Data management system refers to the
hardware-software system and its capability. Because of the uncertain state of
knowledge concerning the actual operation of a dry deposition network, the working
group decided to address the data management function generically.
The following general overall recommendation was made by the working group. Regard-
less of the specific characteristics of a network operation and data management
system, written documentation of all aspects of network operation and a computerized
data base for the network including supporting notes, codes and flags is essential.
This recommendation assumes greater importance because the current state-of-the-art
in dry deposition monitoring has many unanswered questions. Consequently, as the
state-of-the-art advances, improvements and changes in operational protocol will
occur and subsequent data analysis of previously collected data will require expli-
cit knowledge of the protocols used.
The working group took as its objective the establishment of the characteristics of
a data management function that would facilitate access to dry deposition data and
provide documentation on network design, operation and evolution. First, a data
management function model was developed that encompassed the functions of a dry
deposition network. Second, a generic data base organization structure was identi-
fied that addressed the specific problems posed by planned dry deposition monitor-
ing, data collection and processing. Third, general types of information thought to
be important to include with primary data results were considered. It was not
possible to provide an exhaustive list of information due to the current uncertain-
*Battelle Northwest Laboratories, Richland, WA.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

ties accompanying siting, field operations, chemical sampling and analysis, and
methods for calculation of dry deposition velocities and fluxes.
A dry deposition monitoring network was assumed to consist of one or more monitoring
sites coordinated by a network coordination office. The network coordination office
is the focal point for the entire network operation. A monitoring network was
envisioned as encompassing five functions: program management, field operation
management, chemical analysis laboratory operation, data preprocessing operation and
archive data management (see Figure 1).
Within this system some documentation, data and samples originate in the field;
additional documentation, data and samples are generated in the chemical analysis
laboratory; further documentation and necessary preliminary data reduction occur in
data preprocessing; and data screening, validation, consolidation and supporting
note generation occur during field operation management. Primary data and sup-
porting documentation are transferred to archive data management for generation of
derived primary data and the storage, retrieval, data reporting and analysis of
network information.
Network program management oversees field operations, chemical laboratory, data
preprocessing, and archive data management functions. It also assumes responsibility
for quality assurance programs and reports. The working group viewed the network
program management, i.e., network coordination office, function as a necessary and
essential component of all dry deposition monitoring network organizations.
Field operation management includes three primary interaction functions: routine
interaction with each field monitoring site, routine interaction with chemical
laboratory and data preprocessing centers, and transfer of primary data and documen-
tation to archive data management. Field operation management staff have day-to-day
responsibility for the primary data generation, whether in overseeing the field
collection, subsequent chemical analysis, or standard data preprocessing.
Air concentration sample collection, meteorological and other time dependent data
collection, field operations, and documentation of field results are handled by
field operations personnel. The central staff are the primary contact with the site
personnel and are responsible for receiving field data (both sample data and opera-
tion data), reviewing field documentation, generating codes/notes detailing data
collection conditions, and forwarding samples and data to chemical laboratory, data
preprocessing, or archive data personnel.

[Figure 1. Dry Deposition Monitoring Network Operational Function Diagram. Boxes shown: Network Coordination Office (Monitoring Network Program Management Function), Field Operation Management Function, Individual Field Monitoring Sites, Archive Data Management.]

A primary quality control function is a
central aspect of the interaction between the field operation management and site
operation personnel.
Another primary interaction occurs between field operation personnel and personnel
in chemical laboratory and data preprocessing centers. It is similar in function to
interaction with sites in that quality control is a key aspect and that the interac-
tion is on a routine or day-to-day basis. A difference is that chemical laboratory
and data preprocessing center do not collect field samples or data but process data
collected at the monitoring sites. Transfer of samples and results between the
chemical laboratory and field operation center is similar to a wet deposition net-
work operation. One difference that may occur is the use of instrumental methods
for determining concentrations where no laboratory analysis is required. Site and
field operation personnel assume a greater responsibility in this situation. Data
preprocessing functions are an added complexity that dry deposition network operation
imposes compared to wet deposition. Determination of deposition velocities requires
(depending on method) meteorological data, snow cover, surface wetness and other
time dependent data. Some preprocessing of field data, especially meteorological
data, may be required to generate the primary data to be used in the deposition
velocity calculation later. This preprocessing may occur at each site or at a
central data preprocessing center.
Transfer of documented, quality controlled primary data to the archive data manage-
ment personnel is another responsibility of the field operation function. Informa-
tion transferred should include all documents, reports and primary data supported by
notes, flags and codes that will enable subsequent routine data reports and analyses
to be completed in a scientifically defensible manner. Although additional quality
control activities occur as part of the archive data management function, primary
responsibility on a near real time basis resides with the field operation management function.
The archive data management function includes more activities for dry deposition net-
works than for wet deposition networks. Primarily, this is related to the introduc-
tion of models to calculate dry deposition velocities. In addition to archiving
primary data obtained from the field operation management function, the archive
function includes generation and archiving of derived primary data, e.g. deposition
velocities and depositions. Archiving is viewed as including all network documen-
tation, deposition model computer codes and documentation, and the computerized data
base.

The working group emphasized the necessity for and the benefits of adequate documen-
tation of all phases of a dry deposition network operation. The group recognized
the need for descriptive manuals, operators' log books, periodic and annual reports,
a number of operation forms, and formal documents. Wet deposition networks require
similar items and generally their characteristics will be transferable to the dry
deposition situation. Because of the additional information and different sample
averaging periods used in collecting air concentration and meteorological data,
additional complexities occur for dry deposition documentation items. The working
group did not attempt to compile an exhaustive list of documentation items required
nor to specify the exact contents of necessary documents and forms. Lack of opera-
tional experience in dry deposition monitoring and the uncertainty associated with
actual field data collection requirements restricted the ability of the group to do
so. Necessary formal documents are listed in Table 1.
The archive data management function includes several distinct activities. The
working group concluded that a key activity was the centralization of all formal
documentation associated with the network operation and archiving (for active use)
with the computerized data base system. This includes computer software programs
and documentation used anywhere within the network operation. The primary activity
is the construction and updating of the archive computerized data base for the dry
deposition network data collection effort. Inclusion of as much information as
possible in the computerized data base is encouraged, especially supporting
sample condition codes and site characteristics. A significant activity, not
present with wet deposition data management, is the preparation of derived primary
data such as deposition velocities. Other activities include preparation of routine
data summaries and reports, filling data requests, and transferring archived network
data to an integrated multi-network dry deposition data base.
The organization of the computerized data base system was considered by the working
group. The current state of knowledge was not considered well enough developed to
present the same level of detail as was possible for wet deposition. However, the
group defined a generic data block structure that embodied known characteristics
(see Figure 2).
The four primary data blocks are organized around the major functions of network
operation. The exact contents of each primary data block are not specified but are
similar to those recommended for corresponding wet deposition data blocks.

[Figure 2. Archived Data Block Structure for Dry Deposition Computerized Data Base. Blocks shown: a time dependent sample block (air concentration sample data and associated codes); an other time dependent block (meteorological data, surface wetness, and other factors affecting deposition velocity, assumed to have a common time interval); a site, laboratory, and preprocessing operation block (usually unchanging, but occasionally updated due to network changes); a non-time dependent block (flux models, conversion factors); and a derived primary data block (information like deposition velocity, total deposition, and averaged primary data).]

The site,
laboratory and data preprocessing operation primary data block contains relatively
static information. It is updated (with a history maintained) only as network
operation or protocol changes. For a small network this information is generally
not computerized. However, it is recommended that it be available in a form that is
amenable to computerization and easy access. It is not sufficient to be available
in limited access documentation.

Table 1
Formal Documentation for Dry Deposition Monitoring Network Operation

Network Overview: Program plan of study: goals of network, design criteria,
overview of individual operation components. Describes the network to outside
users.

Siting Manual: Detailed procedures for siting, including the siting protocol,
equipment requirements, etc.

Site Description: Detailed description of each site, including photographs and
maps; documentation of changes over time. A shorter summary report may also be
prepared for general use.

Field Operation and Maintenance Manual: Describes activities of site operators
and field operation management personnel. Detailed procedures for sample
collection, field analysis, equipment maintenance, QC activities, data forms,
etc.

Chemical Laboratory Operation, Analysis: Describes activities of laboratory
personnel. Overview of procedures, schedules, and responsibilities. Detailed
instructions for sample handling, analysis, and reporting. QC activities.

Data Preprocessing: Activities of preprocessing, including responsibility and
location of each activity. Software documentation, listings, data formats, etc.
QC activities.

Archive Data Management Manuals: Describe responsibilities, procedures, and
activities. QC activities. Software documents, listings, data formats. Data
base description. Coding procedures.

Quality Control Procedure Manual: Prepared for general data users outside the
network. General overview of procedures, QC objectives, responsibility for QC,
data validation procedures, and handling of missing and discarded data.

Quality Control: Summary report of QC activities during the year. Documentation
of data quality, including precision and accuracy summary data.

Quality Assurance: Prepared for internal use and general distribution.
Describes activities conducted with outside organizations to assure the quality
of reported data.

Quality Assurance: Summary report of QA activities during the year.
Documentation of data quality, including accuracy and precision summary data.

The gas/aerosol/particle primary data block contains the sample air concentration
data and characteristics similar to wet deposition sample ion concentration data
blocks. It is likely that not all sample intervals (time averaging) will be the
same for all analytes measured. The data system must be able to handle the added
complexity. The working group recommended that work be initiated to construct
prototype documentation for this data block. This document would dis-
cuss what information, and in what form, should be included with the analyte concen-
tration results. Items of importance include: sample identification key, analyte/
species key, units of reporting, sample condition codes, analyte result, result flag
identifying limit of quantitation, result data screen codes, precision and accuracy
for result (if available). It is recommended that all data remain in the archive
data base, including invalidated data with well-identified flags. Missing data
periods must be clearly identified and reason for absence stated. Although the
working group did not believe it was possible to standardize the analytes/species to
be measured, the group recommended that a minimum set of analytes/species be identi-
fied. A protocol sampling period (interval) was determined to be objective specific
and not amenable to standardization.
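The items listed for this data block can be sketched as a record layout. The field names, code values, and filtering rule below are illustrative only; the working group did not standardize them, and a real network would define its own keys and codes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnalyteResult:
    """One archived air concentration result with its supporting codes.

    Field names and code values are illustrative, not a network standard.
    """
    sample_id: str            # sample identification key
    analyte: str              # analyte/species key, e.g. "SO2"
    units: str                # units of reporting
    value: Optional[float]    # None marks a missing result
    condition_codes: tuple    # sample condition codes
    below_loq: bool           # flag: result below limit of quantitation
    screen_codes: tuple       # result data screen codes
    valid: bool               # invalidated data stay archived, flagged
    missing_reason: str = ""  # required whenever value is None

def usable(results):
    """Select valid, quantifiable results for a routine data summary.

    Note that nothing is deleted: invalidated and missing records remain
    in the archive with their flags, as the working group recommended.
    """
    return [r for r in results
            if r.valid and r.value is not None and not r.below_loq]

archive = [
    AnalyteResult("S01-1985-120", "SO2", "ug/m3", 4.2, (), False, (), True),
    AnalyteResult("S01-1985-121", "SO2", "ug/m3", 0.1, (), True, (), True),
    AnalyteResult("S01-1985-122", "SO2", "ug/m3", None, ("C3",), False, (), False,
                  missing_reason="sampler pump failure"),
]
print(len(usable(archive)))  # only the first record qualifies
```

The key design point is that validity and quantitation are flags on the record rather than criteria for deletion, so later reanalysis can revisit screened-out data.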
The other time dependent primary data block is designed to include that information
that deposition velocity models require. Primarily this consists of meteorological
data. Because of the capabilities of field instrumentation, meteorological data can
be acquired on short sampling intervals. The working group did not believe the time
resolution required for primary archived data needed to be the same as that during
acquisition. This is an area where a data preprocessing function may occur to
summarize the data to a time interval consistent with deposition velocity model
requirements. The working group believed that it would be possible for a knowledge-
able group to recommend a standard time interval for meteorological data, e.g. 15
min. or 1 hr. Additional information on the current state of deposition velocity
model input data requirements (or desired requirements) is necessary to begin defin-
ing all time varying information that should be included in this data block. Re-
gardless of the information selected, it must be accompanied by supporting data
similar to that specified for the previous primary data block.
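The preprocessing step described above, which summarizes short-interval field data onto a standard archive interval, can be sketched as follows. The 5-minute raw interval, 1-hour target interval, and wind speed values are all illustrative; the 1-hour target is one of the standard intervals the working group suggested (15 min or 1 hr).

```python
def summarize(samples, raw_interval_min=5, target_interval_min=60):
    """Average short-interval field data onto a standard archive interval.

    samples: list of (minute_of_day, value) pairs at the raw interval;
    missing raw values are given as None and are excluded from the mean.
    Returns (interval_index, mean, completeness) tuples.
    """
    per_block = target_interval_min // raw_interval_min
    blocks = {}
    for minute, value in samples:
        blocks.setdefault(minute // target_interval_min, []).append(value)
    out = []
    for idx in sorted(blocks):
        vals = [v for v in blocks[idx] if v is not None]
        # Archive the mean plus a completeness fraction so later users can
        # judge whether the interval was adequately sampled.
        mean = sum(vals) / len(vals) if vals else None
        out.append((idx, mean, len(vals) / per_block))
    return out

# Example: 5-min wind speeds for one hour, with one raw value missing.
raw = [(m, 2.0 + 0.1 * (m // 5)) for m in range(0, 60, 5)]
raw[3] = (15, None)
print(summarize(raw))
```

Carrying the completeness fraction into the archive is one way to satisfy the requirement that missing data periods be clearly identified.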
A final primary data block has been identified as containing other non-time depen-
dent information. It was identified to make explicit the necessity of maintaining
that non-time dependent information entering into deposition velocity models and
calculations. This includes assumed constants, parameters of parameterized pro-
cesses, and the model itself. It may also include unit conversion factors or other
'housekeeping' information used in completing the archive data management function.

Each of the four primary data blocks enters into the construction of the derived
primary data block. A narrow view of a dry deposition archive computerized data
base is to have the data base consist only of this derived data block. The informa-
tion envisioned in the derived data block would be confined to data calculated or
summarized to a common sampling interval (i.e. time averaging period). It would
include all deposition velocity calculated data and primary data summarized (if
necessary) to the common sampling interval, e.g. daily, weekly. The working group
emphasized the importance of clearly identifying the handling of missing data.
Because no commonly accepted deposition velocity methodology exists and advances in
methodology are expected, the group stated that it was not sufficient to archive
only the derived velocity and flux data and that the primary data blocks must be
maintained to accommodate the expected changes. Not maintaining the primary data
blocks could result in significant loss of information during the early development
of dry deposition monitoring efforts.
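The argument for retaining the primary data blocks can be made concrete with a small sketch. The two "models" below are placeholders, not real deposition velocity parameterizations; the point is only that the derived data block can be regenerated under a revised methodology if, and only if, the primary concentration and meteorological blocks survive.

```python
# Hypothetical early and revised deposition velocity models (cm/s as a
# linear function of wind speed); coefficients are invented for illustration.
def vd_model_1985(wind_speed):
    return 0.1 + 0.05 * wind_speed

def vd_model_revised(wind_speed):
    return 0.08 + 0.06 * wind_speed

def derive_fluxes(concentrations, winds, vd_model):
    """Rebuild the derived primary data block from the primary blocks."""
    return [vd_model(w) * c for c, w in zip(concentrations, winds)]

conc = [5.0, 4.0, 6.0]     # primary block: air concentrations
wind = [2.0, 3.0, 1.0]     # primary block: meteorological data

# Archiving only `old` would freeze the 1985 methodology into the record;
# keeping conc and wind allows recomputation when the model changes.
old = derive_fluxes(conc, wind, vd_model_1985)
new = derive_fluxes(conc, wind, vd_model_revised)
print(old)
print(new)
```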
The working group developed a general structure for data handling associated with
the operation of dry deposition monitoring networks. The structure is not as de-
tailed as for wet deposition but that reflects the current stage in development of
dry deposition monitoring. No comprehensive summary is presented but a few ques-
tions and statements identified by the group as important are collected here.
•	A standardized, generic data management structure for dry deposition
monitoring networks has been identified. The working group recom-
mends that their initial efforts be continued by an active working
group. Members of the group should include potential data users
and researchers knowledgeable on siting, field operations, sampling
and analysis, current wet deposition data handling and deposition
velocity modeling.
•	Additional knowledge is required on how much primary data, and in how
'raw' a form, is needed for archive purposes.
•	Potential data users must begin discussions now to define required
primary data and standard data summaries and reports. These dis-
cussions must include researchers familiar with planned dry
deposition monitoring networks.
•	A significant difference in data handling requirements is present
for dry deposition compared to wet deposition data. Primarily, more
data must be collected, and not all data are collected on
the same sampling interval.
•	The data management function is more automated and likely to be
more dispersed than for wet deposition. This will require more
organization and quality control to assure the completeness and
quality of the data reported.

Ray P. Hosker, Jr.*
The task group on Dry Deposition Methods Specification was asked to consider the
methods available for measurement of dry deposition, to indicate those methods and
procedures (if any) that might be candidates for standardization, to specify design
and performance goals for the methods where possible, and to identify engineering
and research needs for effective use of the methods. It was generally agreed that
measurements of airborne pollutant concentrations alone would be inadequate; dry
deposition flux data are also essential.
The group first considered the methods available, and discussed their suitability
for standardization, as well as their suitability for use in both intensive case
studies and in monitoring. While no methods per se appeared to be good candidates
for standardization, a number of associated procedures could be standardized, or at
least established as well-documented protocols. The group next considered desirable
features of monitoring networks to be used for trends studies, effects studies, and
model verification. Some of these features are common to all intended uses, but
some are specialized to meet particular needs. The discussion of system features
and of requirements for data handling and processing led to a number of identified
research needs, as well as some general procedural recommendations.
Many methods have been suggested to determine dry deposition fluxes. The task group
attempted to deal only with those intended for use at individual sites, and which
are at least somewhat suited to the monitoring process. Methods such as aircraft
measurements of fluxes and tracer studies were not considered because they are
essentially case study techniques, although their importance in testing the extrapo-
lation of "point" data to regional areas was recognized.
*NOAA Environmental Research Laboratories, Oak Ridge, TN.

Table 1 shows the measurement methods discussed, and attempts to assess their suita-
bility for use in either intensive case studies or in routine monitoring. Paren-
theses suggest limited suitability for a particular technique in a given applica-
tion. For routine monitoring work associated with trends determination and effects
research, the so-called inferential methods (using measured concentrations and dry
deposition velocities inferred from meteorological and surface condition observa-
tions to calculate dry deposition fluxes) are probably best, given present techno-
logical capabilities. Even these methods still require a good deal of additional
effort to deal adequately with the wide range of pollutant species and receptor
surfaces of concern. In particular, the mathematical techniques now used to infer
dry deposition velocities must be improved and tested, and further development is
needed of simple-to-use concentration monitors that can deal with the increasing
number of pollutants that must be considered.
Table 1
Suitability of Various Dry Deposition Flux Measurement Methods for Use in
Intensive Case Studies or in Routine Monitoring

Method                                  Case Studies    Monitoring
Concentration Accumulation                  (x)*            x
Eddy Accumulation                           (x)             x
Eddy Correlation                             x
Gradient Techniques                          x             (x)
Variance Techniques                          x
Inferential Methods:
  Continuous Chemical Monitors               x              x
  Integrating Chemical Samplers             (x)             x
Natural Surface Accumulation:
  Leaf Washing Techniques                    x             (x)
  Snowpack Accumulation                      x              x
  Watershed Accumulation                     x
Surrogate Surface Accumulation                              x
  (for large particles only; not suitable
  for gases or small particles)

*Parentheses indicate limited suitability for a given application.
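The inferential approach favored above for routine monitoring can be sketched in one common formulation, the resistance analogy, in which a deposition velocity inferred from meteorological and surface observations multiplies the measured concentration. The resistance values, surface roughness, and concentration below are assumed for illustration, not drawn from this report.

```python
import math

def aerodynamic_resistance(z, z0, u_star, k=0.4):
    """Neutral-stability aerodynamic resistance, R_a = ln(z/z0) / (k * u_star),
    in s/m, from measurement height z, roughness length z0, and friction
    velocity u_star."""
    return math.log(z / z0) / (k * u_star)

def deposition_velocity(r_a, r_b, r_c):
    """Inferential deposition velocity via the resistance analogy:
    v_d = 1 / (R_a + R_b + R_c), where R_b is the quasi-laminar boundary
    layer resistance and R_c the surface (canopy) resistance, all in s/m."""
    return 1.0 / (r_a + r_b + r_c)

# Hypothetical mid-day conditions over short vegetation (all values assumed).
r_a = aerodynamic_resistance(z=10.0, z0=0.03, u_star=0.3)
r_b = 30.0      # s/m, assumed quasi-laminar resistance
r_c = 100.0     # s/m, assumed surface resistance for an SO2-like gas
v_d = deposition_velocity(r_a, r_b, r_c)   # m/s

C = 10e-6       # measured concentration, kg/m^3 (10 ug/m^3)
F = v_d * C     # inferred dry deposition flux, kg/(m^2 s)
print(f"v_d = {v_d * 100:.2f} cm/s, F = {F:.2e} kg/m2/s")
```

The mathematical weak point the text identifies lies in the resistance terms: R_a requires stability corrections away from neutral conditions, and R_c varies strongly with pollutant species and receptor surface.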

The task group felt that none of the dry deposition measurement methods presently
used was suitable for standardization in the usual sense of an unchanging reference
method, simply because all of these methods are still being refined as ongoing
research programs suggest needed improvements. However, most of these methods are
conducted according to certain protocols, and these have been written down to some
extent in the technical literature, or (more likely) in internal reports and manuals
of the research groups using the methods. With some effort, these protocols could
be more thoroughly documented and made available to newcomers entering the field, to
designers of the next generation of measurement methods, and to users of the data
collected by these methods. This would be a highly useful exercise.
A single contact person in each research group is needed to coordinate standardiza-
tion efforts. This is perhaps even more important in multi-agency networks, where
communications with "outsiders" and newcomers would be greatly enhanced by a single
contact person. This person should be cognizant of the broad scope of the work
underway, should be able to direct technical questions to the appropriate scientist
for detailed response, and should be familiar with the existing documentation. This
function might usefully be connected with a central data bank, especially in its
initial phases.
Furthermore, there are a number of procedures associated with these methods that
probably could be standardized, or at least placed on the footing of a recognized
protocol. These include:
•	Calibration procedures for most meteorological and chemical instruments.
•	Some siting criteria and site audit procedures.
•	QA and QC procedures for some methods or portions thereof.
•	Some computational procedures (especially statistical evaluations).
•	Data reporting and archiving methods.
To ensure that everyone is using a common language, some immediate effort is needed
to produce a standard terminology for dry deposition processes, components, symbols,
and so on. There is also a real need to develop standardized data sampling frequen-
cies, averaging times, and data stratification classes for the various methods, but
additional research is needed before this can be accomplished.

Data base design could be facilitated if the designers of monitoring networks provi-
ded a list of the variables to be archived. Given the present uncertainties in, for
example, the methods of evaluating deposition velocities, enough information should
be included to permit recalculation of key parameters as the state of the art im-
proves. Provision should be made for the inclusion or exclusion, as appropriate, of
slightly questionable data ("flagging"), bad data, and missing data.
Some documents are available to assist the standardization process. These include
U.S. EPA (and other) protocols for concentration monitors, a limited number of ASTM
standards for meteorological and chemical sensor calibration, and an ASTM standard
on intercomparison of instruments. Internal agency documents are undoubtedly avail-
able to supplement this list.
The task group considered desirable features of dry deposition measurement methods
suitable for different purposes; specifically, trends monitoring, effects determin-
ations, and mathematical model verification.
Trends Monitoring
The intent of such work is to establish the temporal trends and spatial distribution
of pollutant dry deposition. This goal mandates long term operation of a large
number of sites scattered across the country in carefully selected locations. The
following features are especially important in monitoring trends.
•	Long term (i.e. ten years or more) operation must be guaranteed, so
that trends may be determined with reasonable certitude. Clima-
tological variability may mask the "signal" if the measurements are
made for only a few years.
•	During this operating period, the sites and their surroundings
should remain unchanged, so that local land use changes do not
obscure the results.
•	Very stable (over a period of, say, twenty years) calibration stan-
dards must be available to ensure periodic reliable calibrations of
all instruments.
•	The instruments used should themselves have stable calibrations;
the instrument characteristics should remain unchanged over periods
of about one year.
•	Periodic recalibrations should be performed according to an estab-
lished protocol on all instruments at all sites. Annual recalibra-
tion is the minimum acceptable period.

•	Data loggers should routinely test their own calibration and record
the results, to ensure correct data capture. This should be done
at least weekly.
•	Standard methods should be developed, documented, and promulgated
for all data validation and computational practices, to ensure that
the data are treated in the same way at all sites.
•	Data recording should be accomplished using a standardized, well-
documented format on computer-compatible media.
•	A quality assurance team should visit each station at least annual-
ly, to check for changes in site condition or instrument exposure,
and to ensure site-to-site compatibility of operations.
A number of other important features were also identified by the task group.
•	The individual sites should be representative of local air quality,
and, if possible, of local land use patterns, to ensure that the
data are representative of the surrounding region.
•	Ideally, the sites should be locally uniform (in the micrometeor-
ological sense), although this may not be feasible in certain
topographic regions where dry deposition data must nevertheless be
obtained. In fact, it is important that at least some data be
collected in typically non-ideal "real-world" locations to improve
our understanding of the influence of complicating factors such as
forests, vegetation "edge effects," complex terrain, and bodies of
water.
•	The trends monitoring stations must be able to detect mean values
of both pollutant concentrations and fluxes, because both quanti-
ties are important. The data should provide time resolution simi-
lar to that obtained in wet deposition monitoring; weekly average
values seem quite feasible, and allow one to calculate monthly,
seasonal, and annual means. It may be useful to provide day/night
stratification of these data.
•	The accuracy of the concentration data should be on the order of
±10%, although an exact specification of the tolerable error has
not yet evolved. The meteorological data should meet the commonly
accepted levels of accuracy published, for example, by the World
Meteorological Organization.
The task group identified a number of features that an ideal trends monitoring
network would possess; these should probably be regarded as design goals for future
use, although some features could be implemented using presently available technology.
• The equipment should be compact, easily transportable, and quickly
installed and tested. It should need little or no environmental
conditioning (e.g. shelter, climate conditioning) and should
require little or no AC power, to facilitate installation at remote sites.

•	Remote sensing would provide excellent areal coverage, if adequate
vertical resolution could be obtained. It must be remembered that
data near receptor surfaces, i.e., relatively close to ground level,
are of principal interest in this application.
•	An ability to follow trends in the peak values of concentration and
deposition flux would be useful; resolution of peaks occurring over
a few hours time would probably be adequate for this purpose.
•	Ideally, data would be telemetered to a central location, for
prompt QC checks and early identification of equipment failures.
•	A central data bank would be identified and funded for long term
operation. User access would be provided by telephone computer
link using a documented protocol, but data insertion or modifica-
tion would not be allowed via this route. All additions or changes
in the data would be accomplished through an established procedure
under the direction of the data librarian.
Effects Determination
The characteristics of a monitoring network suitable for pollutant effects research
are similar in many ways to those for a trends network, discussed above. However,
the task group identified some important differences.
•	The lifetime of a given network may be only three to five years at
a particular set of sites.
•	Many different sites encompassing a very wide variety of potential
receptors (vegetation, water bodies, etc.) will be needed.
•	It will probably be necessary to include a number of spatially
dense sub-networks, to examine the spatial variability of concen-
tration and flux data. Identification of locations of locally high
concentrations or fluxes is important.
•	The effects network may need lower detection limits of concentra-
tions and fluxes than for trends monitoring, so that observed
effects can be quantitatively related to exposure levels.
•	In some effects work, it may be adequate to determine the total
flux of an element (e.g. nitrogen, in studies of the role of foliar
fertilization), without regard to the exact chemical form. This
could greatly simplify the required measurement method. Guidance
from the effects research community is needed to determine instan-
ces where this simplification may be appropriate.
•	Good time resolution (better than one hour) of concentration and
flux maxima is essential for determining the effects of short-term
high exposures.
•	The stations must monitor surface wetness and precipitation, be-
cause these may strongly affect the way pollutants react with local
receptor materials, and the resultant damage to the receptor.

•	The effects network should monitor materials (e.g. sea salt)
emitted by natural sources, so that observed receptor effects are
not erroneously attributed to other species, and so synergistic
effects can be determined.
•	Stations should have local data readouts, to facilitate frequent
(perhaps daily) operational and calibration checks.
•	The hardware should be easily relocatable, to facilitate the neces-
sary studies of a range of receptor types.
•	It may be possible to relax the power and service requirements for
stations located in urban areas. On the other hand, urban equip-
ment must be vandal proof. The determination of representative
sites may be difficult in urban areas, especially when logistical
constraints are considered.
•	It may be possible to determine average pollutant concentrations in
urban areas by installing automatic equipment in city-owned vehi-
cles (e.g. police cars) that circulate routinely throughout the
area. If automatic vehicle location equipment is also included,
then concentration isopleths could be developed for the city.
Model Verification
The task group recognized that the needs of a model verification study are generally
very different from those in trends monitoring or effects research. In particular,
a dry deposition model test can be expected to be confined to a particular region,
with rather fine spatial and temporal resolution. In practice, there may be a need
for high time resolution data at a number of sites, or for fine spatial resolution
at certain times; the exact data requirements will depend on the type of model under
test. While some model evaluations may be conducted using data from monitoring
networks, case study data will probably be needed for most other tests. Design of
an appropriate system must be strongly linked to the requirements of the mathemati-
cal model being tested (an episodic Eulerian model will have vastly different needs
than a model supplying long-term averages), but a few general recommendations and
features can be given.
•	Tests of dry deposition models will probably be confined to a 500
km x 500 km area. Within this area, data will be needed on a grid
scale of a few tens of km.
•	For intensive case studies, it might well be possible to include
data from stations in the effects and trends networks. This would
help position the case study results within the much longer data
record from the monitoring stations.
•	The data should include information on surface energy budgets and
surface wetness, because these factors will strongly influence the
transfer and collection of pollutants at receptor surfaces. The
data will also serve to test the parameterizations and predictive
capabilities of the model.

•	Ideally, the model testing network would provide isopleths of
concentration and deposition fluxes several times daily. Using
present technology, it should be possible to collect such data
(from at least key sites) at a central site via telemetry or tele-
phone links. Automatic interrogation of the individual sites is
quite feasible.
•	Ideally, remote sensors would determine spatially averaged data
over squares some tens of km in size, for direct use in the mathe-
matical model. Otherwise, point measurement data will have to be
adjusted to be representative of the grid square where they were
collected; this will introduce a good deal of uncertainty into the
comparisons. Special methods (aircraft measurements; tracer stu-
dies) will be necessary to confirm the accuracy of the spatial
averaging techniques that are used. Tests of so-called sub-grid
scale parameterizations must be conducted.
The task group identified a number of areas where additional research is needed.
These are grouped into the following categories.
Methods Development
•	Determine the optimal stratification of dry deposition monitoring
data. Possible strategies include day/night sampling (to account
for the changes in both concentration and deposition velocity that
can occur for various air pollutants), and sampling according to
wind direction (to account for direction-dependent changes in
upwind fetch and the influence of local sources). The existing
prototype monitoring networks should provide data suitable for this purpose.
•	Develop improved hardware for dry deposition monitoring. This
might include automatic equipment for gradient methods which are
applicable in certain circumstances, fast-response switching and
pumping systems for eddy accumulation, improvements in the equip-
ment and methods for eddy correlation work, and improved chemical
monitoring devices. In particular, present chemical monitors
capable of good time resolution generally require elaborate climate
conditioning and AC power, and may not respond well to the low
pollutant concentration levels encountered in remote locations.
Simpler systems such as filterpacks can deal only with a rather
limited number of pollutants, and inherently provide poor time
resolution. The ideal instrument would combine the best features
of both types, and would be inexpensive to purchase and operate as
well.
•	Conduct careful and frequent intercomparisons of the various moni-
toring methods at a number of sites under different conditions, to
determine the accuracy and precision of the various techniques.
Some of this work is already being accomplished at the "core"
research sites in the prototype network being operated by NOAA
under multi-agency support, but additional sites and effort are needed.

•	Direct measurements of dry deposition flux by the eddy correlation
method are possible for only a few chemical species, and can be
accomplished only with some difficulty in any case. Hence alter-
nate methods such as the inferential techniques have become popu-
lar. The credibility of the resulting data would be greatly enhan-
ced if direct flux measurements of some type could be made more or
less continually at a reasonably large number of monitoring sta-
tions. It would therefore be useful to develop an eddy flux system
capable of continuous unattended operation, as a test of the fluxes
estimated using other methods. If the agreement between, say, an
eddy heat flux measurement and the heat flux determined by some
other method were good, one would be reasonably confident that the
similarly derived estimates of pollutant flux would also be accur-
ate. Sensible heat flux is probably the most likely candidate; the
measurement might be accomplished using a ruggedized sonic anemo-
meter and fast-response thermometer, and a dedicated small compu-
ter, but a good deal of engineering effort will be needed to pro-
duce a reliable system.
•	In many ways, sampling during periods of fog slips through the gap
between wet and dry deposition measurements; yet fog is a common
occurrence in many parts of the country. Simple, reliable fog
monitors are needed to establish those periods during which fog is
present; quantification of the density of the fog is perhaps a
secondary issue at this time. Collection and analysis of the fog
moisture may have to be conducted on an event basis.
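The eddy-flux cross-check proposed above (comparing a directly measured sensible heat flux against heat fluxes derived by other methods) reduces to a covariance of vertical-wind and temperature fluctuations. A minimal sketch with synthetic numbers; the 10 Hz rate, ten-minute duration, and correlation strength are all illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6000  # ten minutes of 10 Hz sonic anemometer / fast thermometer data
w = rng.normal(0.0, 0.3, n)                    # vertical wind, m/s (synthetic)
T = 15.0 + 0.5 * w + rng.normal(0.0, 0.1, n)   # temperature, °C (synthetic,
                                               # positively correlated with w)

rho, cp = 1.2, 1005.0          # air density kg/m³, specific heat J/(kg·K)
w_fluct = w - w.mean()         # w': fluctuation about the mean
T_fluct = T - T.mean()         # T'
H = rho * cp * np.mean(w_fluct * T_fluct)  # sensible heat flux, W/m²
```

A pollutant flux by eddy correlation has the same form with concentration fluctuations c' in place of T'; good agreement of H with an independently estimated heat flux is what lends confidence to the analogous pollutant-flux estimates.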
Measurement Representativeness Studies
•	A great deal of work remains to be done on the fundamental question
of how to extrapolate data from a point measurement location to
some large region. This becomes especially troublesome when the
measurement site falls in a region of nonuniform terrain, or where
there are many discontinuities in the type of surface (forest to
field to lake, say). Schemes to perform this extrapolation should
be explored. It may also be necessary to address the problem
directly by a well-planned series of aerial measurements and
regional-scale tracer studies to determine the approximate extent
of a "representative" region, although this can be expected to vary
with location.
•	Mobile measurement systems may be very useful in addressing the
representativeness question in some areas (cities), and in deter-
mining true spatial averages. Highly portable systems that can be
set up quickly or systems capable of measurements while in motion
are needed for this application.
•	It would probably be worthwhile to explicitly include skilled
statisticians in the design of monitoring networks, to help reduce
bias in the results. The statisticians should thoroughly under-
stand the method of measurement and the data analysis techniques to
be used, so that their expertise can be applied optimally.
•	It might be possible to develop highly portable, inexpensive flux
monitoring systems based, for example, on the filterpack system. A
number of these systems could be temporarily installed in a dense
network surrounding a likely permanent monitoring site to directly
study local variability in dry deposition results. From these
data, a rational decision could be made concerning the final site selection.

Discussion
led by
Steven Bromberg*
I would like to set the stage and provide a basis for discussion. Beginning in 1985
and 1986 EPA will establish some dry deposition monitoring sites. Because of the
short time frame, there will not be time to take into consideration all of the
research recommendations and deficiencies discussed here. For instance, in siting
there may not be sufficient time to take a detailed look at each site. In methods
evaluations we will probably have to go without totally evaluated methods. The
question I pose is, what problems are we going to have, given that we are not going
to be able to address all of these concerns?
Comment: Given those constraints, you will have to document much more than you
think because the data will be questioned. Without sufficient documen-
tation the questions cannot be answered. Then arbitrary decisions will
have to be made to toss or believe the data. It is most important to
thoroughly document the quasi research-monitoring efforts.
Comment: In choosing the instrumentation one must decide whether to monitor
concentration or flux. More elaborate instrumentation is needed to
monitor flux. Expenses will be very high in comparison to the wet
deposition monitoring. If you monitor only concentration, then call it
a concentration monitoring network, and set it up to gather only concen-
tration data.
Comment: Most of the uncertainty in dry deposition measurement is in the calcula-
tion of deposition velocities. If that is true, then narrowing down the
deposition velocity method is of real importance in light of the time
*Environmental Protection Agency, Research Triangle Park NC.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

priority. It seems like siting is the place to make sure you get enough
information so that we can begin to get a handle on what that uncertain-
ty is. Are we in agreement that most of the uncertainty in the whole
process is with deposition velocity?
Comment: That is true only if you are talking about deposition models. When you
are talking about trends, you want actual values.
Comment: If we are measuring concentration, call it a concentration network. If
we call it a deposition network and get concentrations, that is in error.
We are certainly interested in flux measurements. There has been a lot of discus-
sion of where we use concentration information and where we need flux information.
I do not believe anyone has stated that we need only concentration information.
If you think the meteorology is inadequate, let us know. We are operating under the
impression that the meteorology package we are planning to put out will be enough to
calculate deposition velocities.
Question: Briefly, can you tell us what you decided on?
Yes. We have had to make some decisions for practical reasons in order for the
contractor to prepare a bid. The decisions are not necessarily popular with a lot
of people, but they have been made.
We have decided to put a meteorology package in the monitoring shelter that we have
been talking about. That includes wind speed, wind direction, delta T (vertical on a 7
meter mast), solar radiation, relative humidity, some kind of surface wetness, and
ozone. I am not sure how we would measure surface wetness right now. We also
include something similar to the Canadian filterpack. That would be included in the
initial expansion.
We have other equipment in the six sites that we have running now. As we are able
to get the Italian denuders and install them in these six sites, then we will do so.
We fully expect that after the first 30 sites are deployed and after we have approx-
imately a year's worth of information, we should be able to do something better than
the Canadian filterpack. As other requirements are defined, e.g. low level SO2 or
continuous SO2 or very low level NO2, then instrumentation can be put in the sta-

tions as the instruments are developed. The inlet for both the ozone instrument and
the Canadian filterpack is approximately 7 meters, since the 10 meter tower was too
Question: Are you using chemiluminescence to measure ozone?
There are two schools of thought. One is that if you use chemiluminescence then you
have to take that big cylinder up to the station every six or nine months. Another
is that the chemiluminescent method is easier to keep operating than the UV. I am
not sure that we have decided which of those to put in.
Question: Has there been discussion of putting samplers at two elevations?
That has not come up before.
Comment: That was a research topic mentioned. It is not in the program now. It
is difficult to determine real differences between sampling stations
when the expected difference between stations is about 5% and the mea-
surement accuracy is ±10%.
Question: Where are you making the wind measurements? Only at one level?
Yes. Only at one level.
Someone mentioned earlier that there were some issues that had not been brought up
in any of the working groups. If you have an issue that you would like to have in
the record and would be helpful in the direction of the program, I wish you would
speak now. I would hope that the proceedings of this will act as a guide for us for
the next several years.
Comment: I have two things to say. One of them concerns your RFP. I say this in
full appreciation of the need to move on, to do engineering, and to make
some measurements. Since you have decided to put specifications into an
RFP, you have also made a de facto decision to standardize for some time
in the future, whatever that length of time is. Therefore all of the
items that you have in the RFP by definition lend themselves to standard
methods drafting. Perhaps that ought to be done, if not necessarily by
your group, by an expert group capable of doing that. The minimum
advantage is that it provides uniformity to whatever will be done in
traceability. The maximum advantage is that it provides a vehicle for
quantitative analysis of the risks that are being taken in getting bogus
data. That analysis should be done quantitatively by people who are qualified to do it.
That brings me to my second point. My statement concerning anxiety
about the collection of bogus data is based on experience. That happen-
ed on large scale. An example is the National Air Sampling Network,
which was established a couple of decades ago. There were similar
pressures for a need to know the trends, to use methods that were affor-
dable and could be operated by the states, could be more or less self
sustaining, and easy to operate. You will recall what those methods
were. They were bubbler samples. They looked good to many of us.
It was only after years of analysis that it was discovered the methods
were not adequate. The result was that we did not get the trend inform-
ation we needed. We did not get information on NO2. We did not even
get information on SO2, which we thought was a pretty easy thing to do,
simply because at that time we were not smart enough to undertake this
kind of critical, semi-educated public look at the methods that were
chosen to get that data. I am concerned about the risks that you are
taking. The only suggestion that I can make is to go back to the begin-
ning of this comment.
I am also very concerned because it is EPA's name on the RFP! I guess that we have
not learned, in one respect, from the examples of NASN and are going ahead and doing
it even though we know the deficiencies. We have learned that we have to look at
the data in real time, which is less than a couple of years. We need to be able to
evaluate that data very quickly. We have looked at some of the methods we have
chosen and know there are some deficiencies. We will continue to have an ongoing
evaluation and development program until we come up with something that everyone or
most people accept.
Comment: Please clarify whether your RFP has enough latitude for the contractor
to insert another piece of technology, if there is enough reason to put
it into the stations.
A specific piece of equipment is specified. If we can come up with a reason to
change one of those pieces of equipment, specifically the Canadian filterpack,
before that contract gets on the street for bidding, then there is still time to do
that because the whole package is now in Washington for approval. After it is
approved the contracts people will start working on it and the RFP will be on the
street. Then it will be too late.
Comment: This is a routine operating network. As Dr. Bromberg mentioned, we
operate prototype sites now. They are under our EPA staff research
guidance. As improvements come about, they will be put into the other
network which is contractor operated. We do not think it fair to a
contractor to have him bid on something where we might change the equip-
ment suddenly. It is a very flexible system in which the prototype site
or research site feeds to the network.
That is a very important point. As other pieces of equipment come along, they will
be put in the first six prototype sites to determine the operational problems. When
it is in place we are assuming it is chemically correct, and the mechanical problems
are solved. Our experience has been that once you get into routine operations
things happen that should not happen, but they do. That is what we will find out in
the six prototype sites. Once the equipment goes through the six sites and operates
for six months or so, and after evaluation of the information, we can put it into
the routine network if it is appropriate. This will be a dynamic system for a
number of years, and we have to document everything we are going through until we
finally settle down.
Comment: I want to make a point from a meteorological view. One-level wind
direction information will not provide enough information to determine
the flux.
If that is the case, then there is a subgroup that needs to get together and make
some decisions on that.
Comment: I suggest that you include in your RFP a mechanism for evaluating poten-
tial new methods to be considered for change. Possibly, you could set
some criteria by which those changes might be made. Can you include
statements like that?
The decision to make changes will be made by EPA. If we ask the contractor to do
the evaluation, it adds a degree of complexity which we would rather they not do.
We would rather have control over doing it in-house. I think we can be much more
responsive in-house to making those changes and evaluations than the contractor can
be. I understand what you are saying. I am not sure putting it in this particular
RFP is the way to achieve it.
Comment: You seem to hedge your bets very carefully. With a six month evaluation
period, if you can get the operating procedures prepared and externally
reviewed, and the testing site, which you call the research station run
by your own staff, the program is dynamically operated.
Yes. I share everyone's concern because of what has happened in the past. We
recognized in the beginning that this was going to be a high risk operation. There
is a very good chance that we are going to fail at some things. In fact we stated
that some of the equipment we put out initially will probably fail. Those of us
with our names on the line are going to have to take the flak.
Comment: Given the scrutiny under which each working group evaluated methods and
the uncertainty each group has identified in their particular area of
expertise, do you think it is realistic to think that whoever bids that
RFP as a single unit can effectively evaluate the data that comes back
in six months?
I do not think the contractor can do that. We perceive him as being a pair of hands.
What we do expect is that after things get going reasonably well, he will be able to
turn over the data to someone who can evaluate that fairly quickly. The data base
handling system is being developed now. By the time the contract is in place, the
mechanism for the data flow should be there. It is a matter of working through the
field portion and making sure that the samples flow. It is realistic to have some
validated data from a contractor in less than six months. By validated I mean he
has made sure, given the existing methods, that what is being reported is what is
being measured. Then another group can decide whether or not that method gives an
accurate reading. The contractor will not do this. I would not want the contractor
to do that because he has some vested interest in the outcome.

In addition to the dry deposition and wet deposition sessions of the workshop, a
special topics session addressed issues of fogwater sampling and presented recent
results of the denuder research conducted in Italy. Dr. Andrew Huang discussed the
means of sampling acid fog with a mesh impaction fog sampler. The fog sampler
design and results of field studies were presented along with a discussion of the
aerodynamics of acid fog sampling.
Delbert Eatough presented a discussion of the conversion of SO2(g) to sulfate in a
fog bank on the West Coast. The conversion to sulfate occurs more rapidly in a
cloud or fog than it does in clean air.
Dr. Ivo Allegrini presented results of recent studies from the Laboratory for Atmos-
pheric Pollution in Rome, Italy, on the annular denuder. A discussion of the theory
and application of annular denuders was presented as well as evaluation of results
from annular denuders.

Andrew A. Huang *, Elizabeth C. Ellis *
Andrew R. McFarland **, Carlos A. Ortiz **
and Robert L. Brewer ***
An active fog sampler that employs a mesh impactor at its inlet to intercept and
collect fogwater is described. This simple, portable, and rugged device called the
Mesh Impaction Fog Sampler (MIFS) was developed by Global Geochemistry Corporation
(GGC) and has been used to collect fogwater in the Los Angeles Basin since 1980 (1).
The MIFS has been characterized both in the laboratory and in the field. In the
former, the MIFS was subjected to a wind tunnel test at the Texas A&M University to
determine its droplet collection characteristics. In the latter, it was intercom-
pared with four other fogwater collectors at Henninger Flats near Pasadena, Cali-
fornia, to determine their comparability in fog chemistry. This paper describes the
MIFS design and presents results of the laboratory and field characterization.
The MIFS (Figure 1) collects fogwater by drawing an airflow of approximately
1.5 m³/min into the sampler inlet and through a polypropylene mesh filter using a
vacuum cleaner blower downstream. The mesh, used commercially to coalesce acid
mists, is made of interlaced filaments (410 µm diameter) and has a void volume of
96%. The fogwater intercepted by the 10 cm-diameter 4 cm-thick mesh drains down the
Teflon-lined 10 cm polyvinyl chloride pipe and into a polyethylene collection bottle
for subsequent chemical analyses. Since the air sampling flowrate is clearly de-
fined, the fogwater chemistry can be related back to the ambient fog chemistry.
Such relationships are important in studying the sorption and scavenging processes
of the ambient gases and aerosols of a fog event.
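Because the sampling flowrate is fixed, relating fogwater chemistry back to the air sampled is simple arithmetic; a sketch in Python, where the duration, water mass, and nitrate level are hypothetical values invented for illustration (only the 1.5 m³/min flowrate comes from the text).

```python
flow_m3_per_min = 1.5    # MIFS air sampling rate, m³/min (from the design)
minutes = 120.0          # sampling duration (hypothetical)
water_g = 60.0           # fogwater collected (hypothetical), g (≈ mL)
no3_ueq_per_l = 250.0    # nitrate in the fogwater (hypothetical), µeq/L

air_m3 = flow_m3_per_min * minutes       # total air sampled, m³
lwc_g_per_m3 = water_g / air_m3          # collected liquid water content, g/m³
# Ambient loading: total µeq collected, divided by the air volume sampled
no3_ueq_per_m3 = no3_ueq_per_l * (water_g / 1000.0) / air_m3
```

This is the link used when studying sorption and scavenging: fogwater concentration times liquid water content gives the burden per cubic meter of ambient air.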
*Southern California Edison Company, Rosemead CA.
**Texas A&M University, College Station TX.
***Global Geochemistry Corporation, Canoga Park CA.

A MIFS was subjected to wind tunnel tests at the Texas A&M University to determine
its droplet sampling characteristics at several wind speeds and wind directions (2).
The test equipment and methodology employed by the Texas A&M University have been
described by McFarland and Ortiz (3) and Olan-Figueroa et al. (4).
The MIFS, less the exhaust tube and receiver elbow, was first mounted in the aerosol
wind tunnel. During testing monodisperse oil droplets tagged with sodium fluores-
cein were generated with a vibrating jet atomizer (5) and drawn into the tunnel.
The oil droplets were then sampled alternately with the MIFS and an isokinetic
probe. Droplets that penetrated through the inlet, mesh, and connecting tubing of
the MIFS were collected on a glass fiber filter. The isokinetic probe, which was
used to establish the actual concentration in the wind tunnel, also employed a glass
fiber filter for droplet collection. The droplets collected on the filters were
quantified by fluorimetry. Droplet penetration through the MIFS was calculated by
dividing the aerosol concentration detected on the MIFS filter by that detected on
the isokinetic probe filter. Figure 2 presents the MIFS characteristic curve thus
determined using droplets of several diameters. As shown, the 50% cutpoint droplet
size is 2.4 µm, and very few droplets (less than 2%) larger than 5 µm can penetrate
through the MIFS. Thus, the MIFS can effectively intercept droplets larger than
5 µm.
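The penetration calculation described above, and reading the 50% cutpoint off a characteristic curve, can be sketched as follows; the curve points here are hypothetical stand-ins for Figure 2, not digitized values.

```python
import numpy as np

def penetration(c_mifs_filter, c_iso_filter):
    """Fraction of droplets passing the MIFS inlet, mesh, and tubing:
    concentration on the MIFS backup filter divided by the concentration
    on the isokinetic reference filter."""
    return c_mifs_filter / c_iso_filter

# Hypothetical characteristic-curve points (droplet diameter µm, penetration)
d_um = np.array([1.0, 2.0, 2.4, 3.0, 5.0, 10.0])
p = np.array([0.90, 0.62, 0.50, 0.30, 0.02, 0.002])

# 50% cutpoint: diameter at which penetration falls to 0.5
# (np.interp requires ascending x, so reverse both arrays)
d50_um = np.interp(0.5, p[::-1], d_um[::-1])
```

Collection efficiency at any diameter is simply one minus the penetration at that diameter.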
Effects of wind speed were studied using 10 µm droplets at 2, 4, and 8 km/h, with
the MIFS inlet facing the wind. It was found that penetration is only slightly
affected by wind speed, increasing from 0.2 to 0.8% with increasing wind speed.
Effects of wind direction were also examined using 10 µm droplets in a wind speed of
4 km/h with the MIFS inlet facing toward (0°), perpendicular to (90°), and away from
(180°) the wind. The results showed that the penetration is relatively unaffected
by the wind direction.
Liquid holdup on the mesh and other internal elements of the MIFS was examined using
a 2.4 x 3.6 x 5.5 m fog chamber. Before and after each test the collection bottle
and other MIFS elements were weighed. Duration of sampling for each test was selec-
ted to provide a range of values for the total weight of sampled liquid. Figure 3
presents the liquid holdup of the mesh filter as a function of total liquid collec-
ted by the MIFS. The solid line represents an undisturbed liquid collection mode,
and the dotted line represents the actual field technique whereby the sampler is
given multiple shakes-and-taps to get as much liquid into the sampling bottle as
possible. This figure shows that for extremely light loadings when only about 1 g

Figure 1. The Mesh Impaction Fog Sampler; the polypropylene mesh is 10 cm in diameter and
4 cm thick, installed inside the Teflon-lined 10 cm PVC down tube.

Figure 2. Aerosol Penetration through Mesh Impaction Fog Sampler;
flow rate = 1.59 m³/min, wind speed = 2 km/h, wind direction = 0°.

Figure 3. Liquid Holdup in Polypropylene Mesh Filter of Fog Sampler

of total liquid is sampled, virtually all of it remains as holdup. However, when
the total liquid sampled is 100 g, only 5% remains as holdup using the field technique.
The MIFS was intercompared with four other fog collectors in June, 1983, at Hennin-
ger Flats (about 780 m msl) near Pasadena, California. The five collectors were
operated by their respective designers, i.e. Caltech (Pasadena, California), Aero-
Vironment (Pasadena, California), Atmospheric Sciences Research Center (ASRC) of the
State University of New York, (Albany, New York), Desert Research Institute (DRI),
(Reno, Nevada), and GGC (Canoga Park, California). The Caltech sampler (5) uses a
rotating arm, the ends of which have slots milled into the leading edge. Fogwater
which impacts in the slots is collected in the bottles mounted at the ends of the
arm by centrifugal force. The AeroVironment sampler uses rotating rods designed to
give two size cuts at 2.5 and 10 µm. Fogwater impacting on the rods is transferred
by centrifugal force to circular troughs and drains to collection bottles. The ASRC
sampler consists of 150 strings mounted between two plates. The assembly is rotated
so that fogwater is impacted on the strings and collected in traps on the bottom
plate. The DRI sampler (7) draws air through three rectangular jets. The fog
droplets impacted onto rotating rollers mounted behind the jets are transferred onto
a center roller and collected in a bottle.
All five samplers were sited within a 20 x 25 m area and operated simultaneously
during fog incursions. Duplicate collectors for the Caltech sampler and the GGC
MIFS were operated to determine data precision (when colocated) and homogeneity of
the fog (when separated across the study area). A description of the study has been
reported by Hering and Blumenthal (8).
Figures 4-6 present the results of the intercomparison study in the form of pooled
standard deviation (σ) and coefficients of variation (%) (9). In general, the
laboratory precision is better than 5% for acids, nitrate, sulfate, and ammonium,
and not as good for the major cations. The colocated results show a data precision
within 16% for acids, nitrate, sulfate, and ammonium, and 14-46% for the major
cations. The results from the separated samplers show an agreement of 17-22% for
acids, 8-12% for nitrate, sulfate, and ammonium, and 30-70% for the major cations.
Combining the data from all of the samplers, the agreement for acids is 21-23%, for
nitrate, sulfate, and ammonium is 24-33%, and for the major cations is 34-80%.
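For readers reproducing these statistics, the pooled standard deviation and coefficient of variation can be computed as below. The replicate values are invented for illustration, and the pooling convention (variances weighted by degrees of freedom) is a common one assumed here, not stated in the text.

```python
import numpy as np

def pooled_std(groups):
    """Pooled standard deviation: square root of the degrees-of-freedom-
    weighted mean of the within-group sample variances."""
    num = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return float(np.sqrt(num / den))

def pooled_cv_percent(groups):
    """Coefficient of variation: pooled std over the grand mean, in %."""
    grand_mean = np.mean(np.concatenate([np.asarray(g, float) for g in groups]))
    return 100.0 * pooled_std(groups) / grand_mean

# e.g. hypothetical nitrate values (µeq/L) from pairs of colocated
# identical samplers over three fog events
events = [[210.0, 230.0], [150.0, 160.0], [300.0, 280.0]]
```

Each replicate pair forms a group, so the pooled σ reflects only within-event sampler disagreement, not event-to-event differences in the fog itself.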

[Figure 4. Pooled Standard Deviations (σ) and Coefficients of Variation (%) for
Acid Measurements for the Henninger Flats Intercomparison Study; type IV data
include those from ASRC, Caltech, DRI, and GGC samplers. Categories: replicate
chemical analysis, collocated identical samplers, separated identical samplers,
different sampler types.]

[Figure 5. Pooled Standard Deviations (σ) and Coefficients of Variation (%) for
Nitrate, Sulfate, and Ammonium Measurements for the Henninger Flats
Intercomparison Study; same categories as Figure 4.]

[Figure 6. Pooled Standard Deviations (σ) and Coefficients of Variation (%) for
Major Cations for the Henninger Flats Intercomparison Study; type IV data include
those from ASRC, Caltech, DRI, and GGC samplers.]

It has been shown that the MIFS has good sampling characteristics (Figure 2), with a
50% collection efficiency at 2.4 µm. Since fogs typically have a droplet diameter
size range of 5-65 µm, the MIFS should effectively collect fogwater from a sampled
air stream.
The intercomparison results show that the MIFS compares favorably with the collec-
tors used by other researchers. When examining fog chemistry data, one should
recognize the data variation due to sampler precision (colocated), fog homogeneity
(separated samplers), and different sampler types.
The MIFS was developed by GGC under the sponsorship of the Southern California
Edison Company (SCE). The wind tunnel characterization of the MIFS, performed by
the Texas A&M University, was also sponsored by the SCE. The Henninger Flats inter-
comparison study was jointly supported by the Coordinating Research Council and
1.  R.L. Brewer, R.J. Gordon, L.S. Shepard, and E.C. Ellis. Chemistry of mist and
    fog from the Los Angeles urban area. Atmos. Environ., 17 (1983) 2267-2270.
2.  A.R. McFarland and C.A. Ortiz. Characterization of the mesh impaction fog
    sampler. Texas Engineering Experiment Station Project 32525-1107 (1984).
3.  A.R. McFarland and C.A. Ortiz. A 10 µm cutpoint ambient aerosol sampling
    inlet. Atmos. Environ., 16 (1982) 2959-2965.
4.  E. Olan-Figueroa, A.R. McFarland, and C.A. Ortiz. Flattening coefficients for
    DOP and oleic acid droplets deposited on treated glass slides. Am. Ind. Hyg.
    Assoc. J., 43 (1982) 395-399.
5.  R.N. Berglund and B.Y.H. Liu. Generation of monodisperse aerosol standards.
    Environ. Sci. & Technol., 7 (1973) 147-153.
6.  D.J. Jacob, R.F.T. Wang, and R.C. Flagan. Fogwater collector design and
    characterization. Environ. Sci. & Technol., 18 (1984) 827-833.
7.  U. Katz. A droplet impactor to collect liquid water from laboratory clouds for
    chemical analysis. Communications à la VIIème Conférence Internationale sur la
    Physique des Nuages, Clermont-Ferrand, France (1980), pp. 696-700.
8.  S.V. Hering and D.L. Blumenthal. Field comparison of fog/cloud water
    collectors: preliminary results. Transactions, The Meteorology of Acid
    Deposition - An APCA Specialty Conference (1983), pp. 45-56.
9.  S.V. Hering and D.L. Blumenthal. Fog sampler intercomparison study, final
    report draft. Sonoma Technology Inc., STI 11 90063-308-FR (1985).

Delbert J. Eatough*
Lee D. Hansen*
High acidity in rainfall (1-4), cloud droplets (5, 6), and fog droplets (7-10) in
areas influenced by anthropogenic sources of SO2(g) and NOx(g) has been attributed
to the formation of both H2SO4 and HNO3. The analysis of field data (5, 8, 11, 12)
suggests that rapid conversion of SO2(g) to sulfate must occur in cloud or fog
droplets. Data on the conversion of SO2(g) to sulfate in water droplets are largely
confined to results of laboratory studies (e.g., 12, 13). Waldman et al. (7) and
Brewer et al. (10) have recently reported on the chemical composition of fogs along
the coast in the Los Angeles Basin and report pH values from 2.2 to 4.0. In
comparison, rain events in polluted areas in the Los Angeles Basin are reported
(1, 2) to produce precipitation with an average pH value of about 4.5. The
conversion mechanisms and rates responsible for this apparent rapid conversion of
SO2(g) to sulfate in fogs along the Los Angeles coast are presently unknown, but the
rates must be rapid compared to those occurring outside fog banks. Hegg and Hobbs
(14, 15) have reported rates for the formation of sulfate from SO2(g) in wave clouds
based upon measurements before and after passage of an air mass through the cloud.
Because of major uncertainties in the measurement of the residence (and hence
reaction) time of the air mass in the cloud, their results have large uncertainties.
However, the oxidation rate in the more acidic clouds (pH of cloud droplets 4 to 5)
was 20±33% SO2(g) hr⁻¹. They interpret their data with a model in which SO2(g) is
oxidized by O3(g) in water droplets and the rate of oxidation of sulfite ion is
inversely proportional to the hydrogen ion concentration in the clouds studied (14).
The data of Hegg and Hobbs (14, 15) suggest that conversion of SO2(g) to sulfate in
a fog droplet with pH <3.5 would be less than 10% hr⁻¹. Novakov has reviewed
evidence (16) suggesting that soot may play a role in the aqueous phase oxidation of
SO2(g). The reaction of SO2(g) with freshly emitted soot can be expected to be very rapid
*Brigham Young University, Provo, UT.
Source: Proc. Methods for Acidic Dep. Measurements EA-4663, EPRI, Palo Alto CA 1986.

once the soot particle is wet, until the initial "activated" sites have been
consumed. However, it can be expected that this process will lead to the very rapid
conversion of only about 0.05% of the emitted SO2(g) from an oil-fired power plant
plume (16, 17). The estimated conversion is based on an initial C/SO2(g) mole ratio
of 0.005 (17) and an expected sulfate/C ratio of 0.1 resulting from this reaction
(16). Hydrogen peroxide (19, 20) and organic peroxides (16) may also be important
in the aqueous droplet oxidation of SO2(g) in an oil-fired power plant plume. Both
have been postulated to be emitted from oil-fired power plants (16). In very acidic
fog droplets the rate of oxidation by H2O2 (20, 21) may be at least an order of
magnitude more rapid than the oxidation by O3(g). In comparison, the homogeneous
daytime rate of oxidation of SO2(g) to sulfate in plumes of power plants (22-25) and
smelters (22, 26) is in the range of 1 to 6% SO2(g) hr⁻¹.
Direct measurements of the rate of SO2(g) to sulfate conversion in an oil-fired
power plant plume as it passed through a fog bank were recently reported by us (27).
A conversion rate of 30±4% SO2(g) hr⁻¹ was found in the fog bank.
During the summer of 1980, a field program was initiated to study the sulfur chemis-
try in the plume of an oil-fired power plant at Morro Bay, California (27-29). The
plant is located on the shore of the Pacific Ocean in central California. During
the summer, daytime on-shore breezes carry the plume inland up the Los Osos Valley.
The direction of plume flow inland is constrained by mountains on either side of the
valley. Several hills jut up in the center of the valley and are impacted by the
plume as it travels inland. A series of four sampling sites was established on
these hills. Samples of emissions were collected from the flue lines of the power
plant and from the four ambient stations on days when the power plant burned oil or
only gas (27). The samples collected on days when the plant was burning only gas
were used to correct for background contribution to the data obtained when oil was
burned. The resulting data were then used to calculate the rate of conversion of
SO2 to sulfate both when the plume was in and when it was out of the fog bank (27).
Typical
results obtained by this analysis for several sampling periods are given in
Figure 1.
The data given in Figure 1 indicate that two different mechanisms contribute to
sulfate production in the plume. For plume travel times greater than about one
hour, the data are linear with a slope corresponding to a first order conversion of
SO2(g) to sulfate of 3±1% SO2(g) hr⁻¹. This is typical of rates seen for the
OH catalyzed conversion of SO2 to sulfate in the daytime (22-26). However, the
conversion rate during the first hour of plume transport is much higher, averaging
30±4% SO2(g) hr⁻¹ for the several sampling sets (27). During the early morning
hours, the coastline is covered by a fog bank which extends inland for several km.
The initial rapid conversion of SO2(g) to sulfate occurs within the fog bank.

[Figure 1. Variation of Sulfate/Total Sulfur versus Plume Travel Time for the
Plume of an Oil-fired Power Plant Emitted into a Fog Bank (legend: Clear Air,
Plume Emission; x-axis: time, hours)]
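The two-regime behavior described above can be sketched numerically. The following illustration (not part of the original study) treats the conversion as first order, with the reported rates of 30% hr⁻¹ while the plume is in the fog bank (taken here as the first hour of transport) and 3% hr⁻¹ afterwards; the initial sulfate fraction is assumed to be zero.

```python
import math

def sulfate_fraction(t_hours, k_fog=0.30, k_clear=0.03, t_fog=1.0):
    """Fraction of plume sulfur present as sulfate after t_hours of
    transport, for first-order SO2 -> sulfate conversion at rate
    k_fog (hr^-1) inside the fog bank (first t_fog hours) and
    k_clear (hr^-1) in clear air afterwards."""
    if t_hours <= t_fog:
        so2_left = math.exp(-k_fog * t_hours)
    else:
        so2_left = math.exp(-k_fog * t_fog - k_clear * (t_hours - t_fog))
    return 1.0 - so2_left

for t in (0.5, 1.0, 2.0, 4.0):
    print(f"t = {t:3.1f} hr: sulfate/total S = {sulfate_fraction(t):.3f}")
```

The resulting curve rises steeply during the first hour and then follows the shallow slope characteristic of daytime photochemical conversion, the shape shown in Figure 1.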
The measured size distribution of the sulfate is consistent with formation of
sulfate via droplet chemistry. The photochemical conversion of SO2(g) to sulfate in
power plant plumes gives sulfate mainly in <0.5 µm aerosols (23, 26, 30). The
formation of sulfate in droplets gives sulfate in larger particles (30). The
particle size distribution of sulfate at the first sampling site after the plume
exited the fog bank was 33% in <0.5 µm particles and 67% in 0.5-3.5 µm particles.
Huang et al. (25) have previously shown that the rapid initial conversion observed
in this study does not occur in the plume of a similar, coastal-sited, oil-fired
power plant in which the plume did not impact a fog bank prior to sampling.
The results obtained for the conversion of SO2(g) to sulfate in the fog bank can be
compared to results previously reported by Meagher et al. (31) and Hegg and Hobbs
(15). Conversion rates of SO2(g) to sulfate resulting when the plume of a coal-
fired power plant mixes with the plume from the cooling tower cannot be directly
calculated from the measurements of Meagher et al. (31). However, their results
suggest that the rate is greater than 15% SO2(g) hr⁻¹ and that the oxidation is
catalyzed by constituents of the power plant plume. Hegg and Hobbs (14) reported
the conversion rate of SO2(g) to sulfate as 20±33% SO2(g) hr⁻¹ in wave clouds when
the pH of the cloud droplets was less than 5. The measured reaction rate was
dependent on the first power of the hydrogen ion concentration (assuming sulfite
ion to be the species oxidized).
Fog droplet pH was not measured directly in the present study. The concentration of
water-soluble strong acid in the collected aerosols was measured. The ratio of
H+/SO4²⁻ in aerosols collected at the first sampling site immediately after the
plume exited the fog bank varied from 0.2 to 1.0. If the concentration of sulfate
in the fog droplets is comparable to that observed in previous field studies (1, 2,
6, 7, 12), the pH of the fog droplets in the plume would average about 3.5±0.5.
This is at the lower limit of pH values measured in cloud droplets by Hegg and Hobbs
(14), and their model would predict that the rate of conversion of SO2(g) to sulfate
in the fog bank should have been less than 10% SO2(g) hr⁻¹ instead of the 30% SO2(g)
hr⁻¹ measured. Furthermore, during the early stages of plume transport, oxidation
of SO2(g) by ozone would be limited by mass transport of O3(g) into the droplets
(32). This suggests that O3(g) is probably not the principal oxidant in the plume
studied by us. The heavy metals found in the plume aerosol (28) also cannot account
for the observed rapid conversion rate in our study (21). It has been postulated
that H2O2 or organic peroxides are present in the primary emissions from oil-fired
power plants (16) or may be rapidly formed in the plume (33-35). The conversion of
SO2(g) to sulfate by peroxides at the fog droplet pH found in this study should be
rapid enough to account for the reaction rates observed (21, 36). We conclude that
flue-line-derived peroxides probably contribute to the oxidation rate in the fog.
Appreciation is expressed to personnel from Pacific Gas and Electric Company for
their valuable assistance in obtaining the flue line samples and to Laura Lewis and
R.T. DeCesar for technical assistance. The research was supported by the U.S.
Department of Energy, Contract DE-AC02-80EV10405 and the U.S. Environmental Protec-
tion Agency, EPA Assistance Agreement CR-810335-01.
Although the research described in this article has been funded wholly or in part by
the United States Environmental Protection Agency through Assistance Agreement CR-
810335-01 to Brigham Young University, it has not been subjected to agency review
and therefore does not necessarily reflect the views of the Agency, and no official
endorsement should be inferred.
1.  E.B. Cowling. Environ. Sci. Tech., 16, 1982, pp. 110A-122A.
2.  J.M. Hales, et al. Atmos. Environ., 16, 1982, pp. 1603-1631.
3.  T.E. Graedel and C.J. Weschler. Rev. Geophys. Space Phys., 19, 1981, pp. 505-
4.  A.L. Lazrus, et al. Atmos. Environ., 17, 1983, pp. 581-591.
5.  P.H. Daum, S.E. Schwartz and L. Newman. J. Geophys. Res., 89, 1984,
    pp. 1447-1458.
6.  R.F. Falconer and P.D. Falconer. "Determination of cloud water acidity at a
    mountain observatory in the Adirondack Mountains," December 1979, A.S.R.C.
    Publication 741.
7.  J.M. Waldman, J.W. Munger, D.J. Jacob, J.J. Morgan, and M.R. Hoffmann.
    Science, 218, 1982, pp. 677-680.
8.  D.R. Hitchcock, L.F. Spiller, and W.E. Wilson. Atmos. Environ., 14, 1980,
    pp. 165-182.
9.  B. Hileman. Environ. Sci. Tech., 17, 1983, pp. 117A-120A.
10. R.L. Brewer, R.J. Gordon, L.S. Shepard, and E.C. Ellis. Atmos. Environ., 17,
    1983, pp. 2267-2270.
11. L. Newman. "General considerations of how rainwater must obtain sulfate,
    nitrate and acid," Division of Environmental Chemistry, American Chemical
    Society, Preprints of the 177th National Meeting, Honolulu, 1979, pp. 475-478.
12. P.V. Hobbs. "A reassessment of the mechanisms responsible for the sulfur
    content of acid rain," Advisory Workshop to Identify Research Needs on the
    Formation of Acid Precipitation, Electric Power Research Institute, 1979,
    EA-1074.
13. B.C. Scott and D.J. McNaughton. J. Air Poll. Control Assoc., 30, 1981,
    pp. 272-273.
14. D.A. Hegg and P.V. Hobbs. Atmos. Environ., 16, 1982, pp. 2663-2668.
15. D.A. Hegg and P.V. Hobbs. Atmos. Environ., 15, 1981, pp. 1597-1604.
16. T. Novakov. "The role of soot and primary oxidants in atmospheric chemistry,"
    Sci. Total Environ., 36, 1984, pp. 1-11.
17. M.W. Henry and K.T. Knapp. Env. Sci. Tech., 14, 1980, pp. 450-456.
18. J.J. Bufalini, H.T. Lancaster, G.R. Namie, and G.W. Gay, Jr. J. Environ. Sci.
    Health, A14, 1979, pp. 135-141.
19. G.L. Kok. Atmos. Environ., 14, 1980, pp. 653-656.
20. H.G. Maahs. "The importance of ozone in the oxidation of sulfur dioxide in
    nonurban tropospheric clouds," Symposium on the Composition of the Nonurban
    Troposphere, Amer. Met. Soc., Boston, 1982, pp. 261-264.
21. National Research Council. "Acid deposition - Atmospheric processes in
    eastern North America," National Academy Press, Washington, D.C., 1983.
22. L. Newman. Atmos. Environ., 15, 1981, pp. 2231-2239.
23. D.J. Eatough, B.E. Richter, N.L. Eatough and L.D. Hansen. Atmos. Environ.,
    15, 1981, pp. 2241-2253.
24. J. Forrest, R.W. Garber and L. Newman. Atmos. Environ., 15, 1981,
    pp. 2273-2282.
25. A.A. Huang, R.J. Farber, R.L. Mahoney, D.J. Eatough, L.D. Hansen and D.W.
    Allard. "Chemistry of invisible power plant plumes in Southern California -
    The airborne perspective," Proceedings of the 1982 APCA Meeting, 1982,
    Paper No.
26. D.J. Eatough, J.J. Christensen, N.L. Eatough, M.W. Hill, T.D. Major, N.F.
    Mangelson, M.E. Post, J.F. Ryder, L.D. Hansen, R.G. Meisenheimer and J.W.
    Fischer. Atmos. Environ., 16, 1982, pp. 1001-1015.
27. D.J. Eatough, R.S. Arthur, N.L. Eatough, M.W. Hill, N.F. Mangelson, B.E.
    Richter, L.D. Hansen and J.R. Cooper. Environ. Sci. Tech., 18, 1984, pp. 855-
28. D.J. Eatough, N.L. Eatough, M.W. Hill, N.F. Mangelson and L.D. Hansen.
    Environ. Sci. Tech., 18, 1984, pp. 124-126.
29. L.D. Hansen, D.J. Eatough, N.L. Eatough and J.L. Cheney. "The Chemistry of
    Dimethyl Sulfate in the Atmosphere," Proceedings of the 1985 APCA Meeting,
    1985, Paper No. 85-36.6.
30. S.V. Hering and S.K. Friedlander. Atmos. Environ., 16, 1982, pp. 2647-2656.
31. J.F. Meagher, E.M. Bailey and M. Luria. J. Air Poll. Control Assoc., 32,
    1982, pp. 389-391.
32. J.E. Freiberg and S.E. Schwartz. Atmos. Environ., 15, 1981, pp. 1145-1154.
33. W.L. Chameides and D.D. Davis. J. Geophys. Res., 87, 1982, pp. 4863-4877.
34. B.G. Heikes, A.L. Lazrus, G.L. Kok, S.M. Kunen, B.W. Gandrud, S.N. Gitlin and
    P.D. Sperry. J. Geophys. Res., 87, 1982, pp. 3045-3051.
35. L.W. Richards, J.A. Anderson, D.L. Blumenthal, J.A. McDonald, G.L. Kok and
    A.L. Lazrus. Atmos. Environ., 17, 1983, pp. 911-914.
36. S.M. Kunen, A.L. Lazrus, G.L. Kok and B.G. Heikes. J. Geophys. Res., 88,
    1983, pp. 3671-3674.

I. Allegrini*, F. DeSantis*, A. Febo*
C. Perrino*, M. Possanzini*, and Robert K. Stevens**
During the past two years a new design for denuders has been developed at the
Laboratory for Atmospheric Pollution of CNR, Rome, Italy, which has led to a device
that is suitable for the simultaneous measurement of different species contributing
to acid deposition (1). The theory and the technique have been well developed and
for this reason the U.S. Environmental Protection Agency (EPA) invited a team from
CNR, Italy, to bring the equipment needed for the measurement of acid species and to
compare the results with those obtained with devices currently being evaluated by
the Research Triangle Institute in North Carolina for the EPA.
Diffusion denuders are devices that have been used recently to remove selected gases
from an air stream by taking advantage of the very different diffusion coefficients
of gas molecules and aerosols. Gas molecules diffuse rapidly to the denuder walls,
where they react with a selective substance, while fine particles (<2.5 µm
aerodynamic diameter) proceed unaffected through the denuder and are recovered by
filtration. After sampling, the denuders can be extracted with water and analyzed
for their content of selected species, as first shown by Ferm (2). Denuders used
for this application are made with cylinders whose efficiencies are large only at
very low flow rates. They have low capacity, so that several tubes operated in
parallel are needed to obtain a reasonable amount of analyte, and, in addition, they
are not very suitable for the simultaneous collection of gases and particles. To
overcome these difficulties, two new systems were recently developed: the Denuder
Difference Method (DDM) for the measurement of HNO3 (3), and the Annular Denuder
Method (ADM) (1). An adequate description and discussion of the DDM system can be
found in the literature (3). This report focuses on the ADM system, information
given in the literature (1), and as yet unpublished data obtained in Italy.
*Istituto sull'Inquinamento Atmosferico, Rome, Italy
**Environmental Protection Agency, Research Triangle Park, NC.

The annular denuder is designed to collect gas by moving air through an annular
space between two concentric glass cylinders coated with the appropriate chemical
(Figure 1). In this annular configuration gases are efficiently removed at high
flow rates with tubes of relatively short length compared to conventional diffusion
denuder tubes. In order to fully understand conceptual features and advantages of
the ADM system, it is necessary to examine the basic equation which describes the
diffusion through an annular channel. The general solution for the denuder collec-
tion efficiency can be written as follows:
    Cz/C0 = Σ (i=1 to ∞) ai exp(-λi Z')                               Eq. 1

where ai and λi are coefficients of the series expansion, Cz and C0 are the mean
concentrations of the gas after and before passing through the channel,
respectively, and Z' a parameter which is in turn defined as:

    Z' = D Z / (V0 deq²)                                              Eq. 2

where deq is the equivalent diameter of the annular channel, D the diffusion
coefficient of the gas, and V0 the mean velocity of the air in the channel. Z is
the coordinate along the axis. Substituting Z = L, with L the length of the
denuder, and deq = d1 - d2, where d1 and d2 are the diameters of the two concentric
cylinders, introducing the flow rate F through the tube, and approximating to the
first term of the expansion, one gets:

    C/C0 = a · exp(-λ · (πDL/4F) · (d1 + d2)/(d1 - d2))               Eq. 3

The terms a and λ have been determined experimentally (1):

    a = 0.82 ± 0.10
    λ = 22.53 ± 1.22

Thus the equation for efficiency becomes:

    C/C0 = 0.82 exp(-22.53 · (πDL/4F) · (d1 + d2)/(d1 - d2))          Eq. 4

It is interesting to compare this finding with that found by Gormley and Kennedy for
cylindrical channels (4):

    C/C0 = 0.819 exp(-14.62 · πDL/4F)                                 Eq. 5

The truncation to the first term is justified by the fact that the tube works in
asymptotic conditions, i.e., Z' > 0.05. By assuming typical sizes for an annular
denuder (d1 = 3.3 cm and d2 = 3.0 cm), it is possible to compare the performance of
the cylindrical and annular denuders at a given efficiency, by comparing the F/L
ratio for both cases. It can be shown easily that:

    (F/L)annular ≈ 30 · (F/L)cylindrical                              Eq. 6

which means the annular denuder can reach the same efficiency as a cylindrical
denuder about 30 times longer or, assuming the same length, the annular denuder can
operate at flow rates 30 times larger. For instance, a tube of d1 = 3.3 cm,
d2 = 3.0 cm, L = 20 cm can be used to collect ammonia at flow rates up to 25 l/min
while still maintaining efficiencies larger than 97%.
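Eq. 4 and Eq. 5 are straightforward to evaluate numerically. The sketch below checks the ammonia example and the roughly 30-fold F/L advantage; the NH3 diffusion coefficient of 0.23 cm²/s is an assumed round literature value, not a figure from this paper.

```python
import math

def annular_efficiency(D, L, F, d1, d2):
    """Collection efficiency 1 - C/C0 of an annular denuder (Eq. 4):
    C/C0 = 0.82 exp(-22.53 * (pi*D*L/4F) * (d1+d2)/(d1-d2)).
    D in cm^2/s; L, d1, d2 in cm; F in cm^3/s."""
    penetration = 0.82 * math.exp(-22.53 * (math.pi * D * L) / (4 * F)
                                  * (d1 + d2) / (d1 - d2))
    return 1.0 - penetration

def cylindrical_efficiency(D, L, F):
    """Gormley-Kennedy efficiency for a cylindrical tube (Eq. 5)."""
    return 1.0 - 0.819 * math.exp(-14.62 * (math.pi * D * L) / (4 * F))

# Example from the text: d1 = 3.3 cm, d2 = 3.0 cm, L = 20 cm,
# NH3 (assumed D ~ 0.23 cm^2/s) sampled at 25 l/min.
F = 25.0e3 / 60.0  # 25 l/min expressed in cm^3/s
eff = annular_efficiency(D=0.23, L=20.0, F=F, d1=3.3, d2=3.0)
print(f"NH3 collection efficiency: {100 * eff:.1f}%")  # exceeds the quoted 97%

# F/L advantage of the annular geometry at equal efficiency (Eq. 6):
ratio = (22.53 / 14.62) * (3.3 + 3.0) / (3.3 - 3.0)
print(f"(F/L)annular / (F/L)cylindrical = {ratio:.0f}")
```

The ratio works out to about 32 for these dimensions, consistent with the factor of roughly 30 quoted in the text.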
The large operating flow rates make the ADM very useful for experiments where
collection of low concentrations of certain gases is required over short sampling
periods (1-2 hr). Another function of the ADM is to remove reactive molecules
(e.g., NH3 or HNO3) from air samples, to prevent artifact-causing reactions of such
species with collected particles or back-up filters.
The annular denuder's unique geometry, combined with appropriate coatings to remove
selected species and with filter packs downstream, is intended to collect both
gases and particles. The denuder functions to separate the gases from the
particles. This separation can be nearly quantitative, given that diffusional
deposition and inertial deposition of particles within the denuder are not larger
than about 3% (1, 5). Another desirable aspect is that the transit time of the
sample through the denuder is very small, so that the gas-particle equilibrium
cannot be reestablished in a state different from that existing in the atmosphere.
Also, because of its capacity to collect several mg of pollutants, the ADM is
conceptually appropriate for collection of atmospheric components over long
sampling periods.
The field apparatus to implement the ADM consists of a sampler and a denuder
assembly which, in turn, consists of a train including:
•	A cyclone for size segregation of particles entering the sampling
train. In the present configuration a Teflon cyclone having a cut
size of approximately 2.5 µm at 15 l/min is used.
•	Two annular denuders in series coated with 1% Na2CO3 and 1%
glycerine in a 1:1 water-methanol solution for the collection of
acid species.
•	One annular denuder coated with 1% citric acid in methanol for the
collection of gaseous ammonia.
•	A filter pack which accommodates a Teflon filter for the collection
of particulate matter and a back-up nylon filter to collect
nitrates evaporated from the Teflon filter.
The components of the train are assembled by means of threaded rings and connectors.
After sampling, the tubes are recapped for transportation to the laboratories where
the analyses will be carried out. The analytical procedure requires the extraction
of the tubes and filters with about 10 ml of pure water; the extracts are analyzed
by ion chromatography or, for ammonia, by colorimetry (Indophenol Blue Method).
Teflon filters are analyzed by ion chromatography or titration for the measurement
of strong acidity, while nylon filters require IC analysis for the measurement of
nitrate evaporated from the Teflon filter.
The ion chromatography analysis of the extract of the first Na2CO3 denuder gives the
concentration of chlorides, nitrites, nitrates, and sulfates formed on the denuder
by uptake of HCl, HNO2, HNO3, and SO2, respectively. However, such a direct
measurement can be affected by errors because other substances deposited on the
denuder may give rise to the formation of the same ions; namely, deposition of
particulate matter containing chlorides, sulfates, and nitrates interferes with the
measurement of HCl, SO2, and HNO3, while the absorption of NO2 and PAN on Na2CO3
yields nitrites which interfere with the measurement of HNO2. However, the
efficiency for the collection of the interfering species, such as NO2, PAN, and
particulates (4), is small (about 1-3%). Thus, the amount of interfering species
collected in the first denuder will be equal to that found in the second denuder,
which can therefore be used to correct the data obtained from the analysis of the
first denuder. The use of two denuders in series then permits the simultaneous
analysis of several acidic compounds, even when the ratio of analytes in the gas
phase to those in particulate matter is extremely low. For instance, the technique
is very valuable for the measurement of trace levels (<0.1 µg/m³) of SO2 and HNO3
in the presence of large quantities of sulfates and nitrates in particulate matter
and for unambiguous measurement of trace levels of HNO2. Table 1 shows some as yet
unpublished results obtained in several studies performed in Italy. Although the
results of these studies have not undergone peer review, the data shown in Table 1
provide an example of the data that may be obtained with this device when
collecting ambient samples. The values in Table 1 are typical of those observed in

Table 1
Amounts of Ionic Species Measured by the Annular Denuder Method during 27 hr
Sampling and Relative Atmospheric Levels in the Gas and Particle Phases
[Table data not legible in source. Columns: Ion content (µg) in Denuder 1,
Denuder 2, and Filter; Atmospheric Concentration (µg/m³) in the Gas Phase and
Particle Phase; samples from 1985.]
Rome (1). It is worth stressing that the ADM has the potential for both short and
long sampling times (3 hr-1 week), and therefore it is possible to obtain diurnal
profiles of the concentrations of reactive species over 24 hr periods. Figure 2 is
an example of monitoring HNO2 and HNO3 over a 24 hr period at 3 hr sampling
intervals in Rome, Italy, during April, 1985. Nitric acid accumulates during the
day and, due to its large deposition velocity, almost disappears overnight.
Nitrous acid (HNO2) concentration is typically higher at night because during the
day it is rapidly photolyzed by sunlight (5).
The determination of HCl at very low levels requires particular care, as the
accuracy of the analysis is strongly affected by blank values, which should be kept
as small as possible if precise and reliable results are needed.
Gaseous ammonia may also be collected with relatively short annular denuders because
of its large diffusion coefficient, which ensures quantitative collection on acidic
materials such as citric acid. Other acids can also be used for coating the
denuder, but attention should be paid to secondary reactions and interferences with
the analysis of other components. For instance, oxalic acid evaporates from the
denuder walls and is partly collected on the Teflon filter, modifying its acidic
content.
Ammonia in ambient air is often in equilibrium with ammonium nitrate aerosol and
HNO3. Thus, the removal of this gas from the air stream may cause a modification of
the equilibrium between ammonium nitrate, NH3, and HNO3. For example, if NH3 or
HNO3 is removed from the gas/aerosol stream, NH4NO3 may dissociate and yield gas-
phase ammonia, which would be collected on the denuder walls. This effect would
result in an overestimation of gaseous ammonia and an underestimation of ammonium
ion. However, the transit time of the air in the denuder is so small (<0.1 s) that
the dissociation is not expected to proceed to an appreciable extent for either
neat particles or adsorbed molecules (3). Thus, with the usual train, when the air
passes through the Teflon filter it no longer contains HNO3 and NH3 in the gas
phase, and this allows reliable measurement of total ammonium nitrate in the
samples as follows. Free ammonia and nitric acid that may result from dissociation
of NH4NO3 in the sample stream can be collected on an acid-impregnated filter and a
nylon filter, respectively, located downstream of the Teflon filter. Thus, by
adding up the amounts of ammonium and NO3 found on the Teflon filter and on the
back-up filters, the total ammonium and NO3 in particulate matter can be
determined.
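The mass-closure bookkeeping just described amounts to a simple per-species sum. A minimal sketch, with hypothetical filter loadings in µg:

```python
def total_particulate(teflon_ug, backup_ug):
    """Total particulate amount of a species = catch on the Teflon
    filter plus the amount recovered on its back-up filter (material
    lost from the Teflon filter through NH4NO3 dissociation)."""
    return teflon_ug + backup_ug

# Hypothetical loadings (ug): nylon back-up for nitrate,
# acid-impregnated back-up for ammonium.
total_no3 = total_particulate(teflon_ug=12.4, backup_ug=3.1)
total_nh4 = total_particulate(teflon_ug=4.2, backup_ug=1.0)
print(f"total NO3 = {total_no3:.1f} ug, total NH4 = {total_nh4:.1f} ug")
```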

1.	M. Possanzini, A. Febo, and A. Liberti. "New Design of High-Performance
Denuder for the Sampling of Atmospheric Pollutants," Atmos. Environ. 17, 1983,
pp. 2605-2610.
2.	M. Ferm. "Method for the Determination of Atmospheric Ammonia," Atmos.
Environ. 13, 1979, pp. 1385-1393.
3.	R.W. Shaw, R.K. Stevens, J. Bowermaster, J. Tesch, and E. Tew. "Measurements
of Atmospheric Nitrate and Nitric Acid: The Denuder Difference Experiment,"
Atmos. Environ. 16, 1982, pp. 845-853.
4.	M. Ferm and A. Sjodin. "A Sodium Carbonate Coated Denuder for the Determina-
tion of Nitric Acid in the Atmosphere," Atmos. Environ. 19, 1985, pp. 979-985.


Ivo Allegrini
Consiglio Nazionale delle Ricerche
Area della Ricerca di Roma
Istituto sull'Inquinamento Atmosferico
Posta Via Salaria Km 29300 - C.P.10
Rome, Italy
Mary Ann Allan
Electric Power Research Institute
P.O. Box 10412
Palo Alto, CA 94303
(415) 855-2000
Bill Alsop
NCSU Acid Deposition Program
1509 Varsity Drive
Raleigh, NC 27606
David R. Anderson
1108 N.E. 200th St.
Seattle, WA 98155
(206) 363-0706
J.M. Barnes
Room 213, J. Morrill Hall
Washington, D.C. 20251
Ralph Baumgardner
Environmental Protection Agency
MD 47
Research Triangle Park, NC 27711
Berne Bennett
Environmental Protection Agency
Research Triangle Park, NC 27711
Dave Bigelow
Natural Resource Ecology Laboratory
Colorado State University
Fort Collins, CO 80521
(303) 491-5574
Van Bowersox
Illinois State Water Survey
2204 Griffith Drive
Champaign, IL 61820
(217) 333-7873
Ronald L. Bradow
Environmental Protection Agency
Mail Drop 84
Research Triangle Park, NC 27711
(919) 541-5179
Bob Brewer
Global Geochemistry
6919 Eton Avenue
Canoga Park, CA 91303
(818) 992-4103
Steve Bromberg
Mail Drop 75
Research Triangle Park, NC 27711
(919) 541-2919
F.	Burman
2421 West Hillcrest Drive
Newbury Park, CA 91320
G.	Byers
Natural Resource Ecology Laboratory
Colorado State University
Ft. Collins, CO 80523
Sally A. Campbell
Martin Marietta Corporation
Environmental Systems
9200 Rumsey Road
Columbia, MD 21045
(301) 964-9200

Elaine G. Chapman
P.O. Box 999
Richland, WA 99352
(509) 376-8316
Dwight Clay
Environmental Protection Agency
Mail Drop 56
Research Triangle Park, NC 27711
Kelly Cox
Research Triangle Institute
Research Triangle Park, NC 27711
Cliff Decker
Research Triangle Institute
P.O. Box 12194
Research Triangle Park, NC 27711
J.G. Droppo
Battelle Northwest Laboratories
P.O. Box 999
Richland, WA 99352
(509) 376-7915
W. Cary Eaton
Research Triangle Institute
P.O. Box 12194
Research Triangle Park, NC 27711
(919) 541-6720
Delbert J. Eatough
Chemistry Department
Brigham Young University
Provo, UT 84602
(801) 378-6040
Eric Edgerton
ESE, Inc.
P.O. Box ESE
Gainesville, FL
Bill Ellenson
Northrup Services, Inc.
c/o Environmental Protection Agency
MD 47
Research Triangle Park, NC 27711
Tom Ellestad
Environmental Protection Agency
Research Triangle Park, NC 27711
Judy Elson
NCSU Acid Deposition Program
1509 Varsity Drive
Raleigh, NC 27606
E. Gardner Evans
Environmental Protection Agency
MD 56
Research Triangle Park, NC 27711
(919) 541-3887
Antonio Febo
Consiglio Nazionale delle Ricerche
Area della Ricerca di Roma
Istituto sull'Inquinamento Atmosferico
Posta Via Salaria Km 29300 - C.P.10
Rome, Italy
Herbert W. Feely
Environment Measurements Lab
Department of Energy
376 Hudson Street
New York, NY 10014
(212) 620-3637
Joel Frisch
U.S. Geological Survey
National Center, MS 416
Reston, VA 22092
(703) 648-6877
James H. Gibson
Natural Resource Ecology Laboratory
Colorado State University
Fort Collins, CO 80521
(303) 491-1977
Lennart Granat
International Meteorological Institute
Arrhenius Laboratory
University of Stockholm
S-106 Stockholm, Sweden
Cary Gravatt
National Bureau of Standards
Washington, D.C. 20234
Bill Guyton
2834 Avent Ferry Rd., Apt. 302
Raleigh, NC 27606
Charles S. Hakkarinen
Electric Power Research Institute
P.O. Box 10412
Palo Alto, CA 94303
(415) 855-2592
Jim Healey
W.S. Fleming & Associates
55 Colvin Avenue
Albany, NY 12206

Axel Hendrickson
2044 N.W. 7th Place
Gainesville, FL 32603
(904) 372-8759
Ray Hosker
U.S. Dept. of Commerce
NOAA Environmental Research Labs
P.O. Box E
Oak Ridge, TN 37831
(615) 576-1248
Andy Huang
S. Calif. Edison, Inc.
2244 Walnut Grove Ave.
Rosemead, CA 91770
(818) 302-4165
Barry J. Huebert
Department of Chemistry
Colorado College
Colorado Springs, CO 80903
John Kadlecek
Atmospheric Sciences Research Center
State Univ. of New York at Albany
1400 Washington Avenue
Albany, NY 12222
(518) 457-4930
Paul Kapinos
U.S. Geological Survey
National Center, MS 412
Reston, VA 22092
(703) 648-6876
Ken Knapp
Atmospheric Science Research Lab
Environmental Protection Agency
MD 46
Research Triangle Park, NC 27711
Ronald Knuth
Environment Measurements Lab
Department of Energy
376 Hudson Street
New York, NY 10014
(212) 620-3637
Earl Knutson
Environment Measurements Lab
Department of Energy
376 Hudson Street
New York, NY 10014
(212) 620-3637
Albert Koehler
Environmental & Instruments Program
World Meteorological Organization
Case Postale No. 5
CH-1211 Geneva 20, Switzerland
Sagar Krupa
University of Minnesota
Dept. of Plant Pathology
Room 304
1519 Gortner Avenue
St. Paul, MN 55108
(612) 376-3871
Dick Livingston
Environmental Protection Agency
Acid Deposition Research Staff
RD 676
Washington, D.C. 20460
(202) 382-2606
Warren Loseke
Environmental Protection Agency
Research Triangle Park, NC 27711
(919) 541-2173
Barry Martin
Environmental Protection Agency
Mail Drop 76
Research Triangle Park, NC 27711
Chris Maxwell
Martin Marietta Environmental Systems
9200 Rumsey Road
Columbia, MD 21045-1934
(415) 859-4854
William McClenny
Environmental Protection Agency
MD 44
Research Triangle Park, NC 27711
(919) 541-3158
John M. McManus
American Electric Power Service Corp.
1 Riverside Plaza
Columbus, OH 43216
(614) 223-1268
Bill Mitchell
Environmental Protection Agency
MD 77 B
Research Triangle Park, NC 27711
(919) 541-2769

C. Moore
Research Triangle Institute
P.O. Box 12194
Research Triangle Park, NC 27711
Peter Mueller
Electric Power Research Institute
P.O. Box 10412
Palo Alto, CA 94303
(415) 855-2586
Stuart J. Nagourney
Environment Measurements Lab
Department of Energy
376 Hudson Street
New York, NY 10014
(212) 620-3637
Lenny Newman
Brookhaven National Laboratory
Building 51
Upton, NY 11973
(516) 282-4467
Anthony Olsen
Battelle Northwest
P.O. Box 999
Richland, WA 99352
(509) 376-4265
Richard Paur
Environmental Protection Agency
Mail Drop 44
Research Triangle Park, NC 27711
(919) 541-4459
Eric Peake
Kananaskis Centre
University of Calgary
Biosciences 402
2500 University Dr. N.W.
Calgary, Alberta
Canada T2N 1N4
Jack Pickering
U.S. Geological Survey
National Center, MS 412
Reston, VA 22092
(703) 648-6875
Myron Plooster
Denver Research Institute
University of Denver
P.O. Box 10127
Denver, CO 80210
(303) 871-2834
R.C. Rhodes
Environmental Protection Agency
Research Triangle Park, NC 27711
(919) 541-2574
John Robertson
U.S. Military Academy
10 B Thayer Road
West Point, NY 10996
(914) 938-2105
Jane Rothert
Illinois State Water Survey
P.O. Box 5050, Station A
Champaign, IL 61820
(217) 333-7942
H.B. Sauls
MD 56
Research Triangle Park, NC 27711
V.K. Saxena
Associate Professor
Dept. of Marine, Earth and Atmospheric Sciences
P.O. Box 5068
Raleigh, NC 27650
(919) 737-2210
Leroy Schroeder
U.S. Geological Survey
P.O. Box 25046
MS 407
Lakewood, CO 80225
(303) 234-3975
George Sehmel
Battelle Northwest Laboratories
P.O. Box 999
Richland, WA 99352
(509) 376-8527
Joseph Sickles
Research Triangle Institute
P.O. Box 12194
Research Triangle Park, NC 27711
(919) 541-6903
Gene Stephenson
8413 Two Courts
Raleigh, NC 27612
Robert K. Stevens
Environmental Protection Agency
Mail Drop 47
Research Triangle Park, NC 27711
(919) 541-3156

Harry Ten Brink
ECN
P.O. Box 1
1755 ZG Petten, The Netherlands
E. Tew
Research Triangle Institute
Research Triangle Park, NC 27711
Richard J. Thompson
School of Public Health
University Station
Birmingham, AL 35294
(205) 934-5154
J.B. Tommerdahl
Research Triangle Institute
P.O. Box 12154
Research Triangle Park, NC 27711
Leo Topol
Combustion Engineering, Inc.
Environmental Monitoring and Services
2421 West Hillcrest Drive
Newbury Park, CA 91320
(805) 498-6771
Camille C. Torquato
Environment Measurements Lab
Department of Energy
376 Hudson Street
New York, NY 10014
(212) 620-3637
Joe Walling
Environmental Protection Agency
MD 78
Research Triangle Park, NC 27711
(919) 541-7954
Marv Wesely
Argonne National Laboratory
Bldg. 181, ERD
Argonne, IL 60439
(312) 972-5827


Siting Criteria
John Robertson Chairman
Harry Ten Brink
Joel Frisch
Lennart Granat
John McManus

Quality Auditing
Bill Mitchell Chairman
Mary Ann Allan
W. Cary Eaton
R.C. Rhodes
Leroy Schroeder
Sampling and Field Monitoring
Leo Topol Chairman
Van Bowersox
Bob Brewer
Steve Bromberg
Jim Gibson
Paul Kapinos

Development of Standardized Operating
Sally Campbell Chairman
Elaine Chapman
Dick Livingston
Curtis Moore
Richard Paur
Albert Koehler
Peter Mueller
Chemical Analytical Methods
Jane Rothert Chairman
Berne Bennett
Delbert Eatough
Herb Feely
Warren Loseke
Leonard Newman
Joe Sickles
Camille Torquato
Data Handling, Archiving and
Tony Olsen Chairman
Mary Ann Allan
William Alsop
Van Bowersox
Gardner Evans
Charles Hakkarinen
James Healey
Andy Huang
Albert Koehler
Christopher Maxwell


Siting Criteria
George Sehmel
Harry Ten Brink
Steve Bromberg
Eric Edgerton
Gardner Evans
Joel Frisch
Dick Livingston
Peter Mueller
Edward Tew

Data Handling
Tony Olsen Chairman
Mary Ann Allan
William Alsop
David Bigelow
Dwight Clay
Gardner Evans
James Healey
Harold Sauls
Sampling and Analysis
Bob Stevens Chairman
Ralph Baumgardner
Berne Bennett
Van Bowersox
Jerry Byers
Elaine Chapman
Kelly W. Cox
Delbert Eatough
Bill Ellenson
Tom Ellestad
Herb Feely
Lennart Granat
Andy Huang
Ken Knapp
Ron Knuth
Albert Koehler
Warren Loseke
Barry Martin
William A. McClenny
John McManus
Bill Mitchell
Stuart Nagourney
Leonard Newman
Richard Paur
Jane Rothert
Joe Sickles
Camille Torquato
Methods Specifications
Ray Hosker Chairman
Sally Campbell
Jim Droppo
W. Cary Eaton
Barry Huebert
Earl Knutson
Myron Plooster
Marvin Wesely