EPA
United States
Environmental Protection
Agency
Environmental Monitoring Systems
Laboratory
Research Triangle Park, NC 27711
EPA-600/9-84-019
November 1984
Research and Development
Proceedings:
National Symposium on
Recent Advances in
Pollutant Monitoring of
Ambient Air and
Stationary Sources
-------
EPA-600/9-84-019
November 1984
PROCEEDINGS: NATIONAL SYMPOSIUM ON RECENT ADVANCES IN
POLLUTANT MONITORING OF AMBIENT AIR
AND STATIONARY SOURCES
Radisson Plaza Raleigh Hotel
May 8-10, 1984
U.S. Environmental Protection Agency
-------
NOTICE
This document has been reviewed in accordance with
U.S. Environmental Protection Agency policy and
approved for publication. Mention of trade names
or commercial products does not constitute endorse-
ment or recommendation for use.
ii
-------
FOREWORD
Measurement and monitoring research efforts are designed to anticipate
potential environmental problems, to support regulatory actions by develop-
ing an in-depth understanding of the nature and processes that impact health
and the ecology, to provide innovative means of monitoring compliance
with regulations and to evaluate the effectiveness of health and environ-
mental protection efforts through the monitoring of long-term trends.
The Environmental Monitoring Systems Laboratory, Research Triangle Park,
North Carolina, has the responsibility for: assessment of environmental
monitoring technology and systems; implementation of agency-wide quality
assurance programs for air pollution measurement systems; and supplying
technical support to other groups in the Agency including the Office of
Air, Noise and Radiation, the Office of Pesticides and Toxic Substances
and the Office of Solid Waste and Emergency Response.
This symposium is part of a continuing effort to explore recent advances
in pollutant monitoring of ambient air and stationary sources. It serves as
a forum for exchange of ideas and scientific information. In response to
the Agency regulatory needs, this symposium focused on acid deposition,
personal exposure and toxic substances. This publication is intended to
assist those researchers interested in furthering the science of air
monitoring.
Thomas R. Hauser, Ph.D.
Director
Environmental Monitoring Systems Laboratory
Research Triangle Park, North Carolina
iii
-------
TABLE OF CONTENTS
Foreword iii
Introduction vii
PM-10 Instruments: A Manufacturer's Perspective 1
Generation and Use of Large, Solid Calibration Aerosols 8
A Size Classifying Isokinetic Aerosol Sampler Designed for
Application at Remote Sites 9
Particle and Substrate Losses During Shipment of Teflon and
Quartz Filters 12
Pollutant Losses in Dichotomous Samplers 24
Mass Distribution of Large Ambient Aerosols and Their Effect on
PM-10 Measurement Methods 28
Rotary Impactor for Coarse Particle Measurement - Mass and
Chemical Analysis 33
Individual Micrometer-Size Aerosol Compounds 36
Human Exposure Assessment: A New Methodology for Determining the
Risk of Environmental Pollution to Public Health 52
Results of the Carbon Monoxide Study in Washington, D.C., and
Denver, Colorado, in the Winter of 1982-83 57
A Review of Indoor Air Quality Research at Oak Ridge National
Laboratory 61
Passive Sampling Devices with Reversible Adsorption: Mechanics
of Sampling 67
Portable Instrument for the Detection and Identification of Air
Pollutants 73
Problems and Pitfalls of Trace Ambient Organic Vapor Sampling
at Uncontrolled Hazardous Waste Sites 82
New Continuous Monitoring Systems for Measurement of
Hazardous Pollutants 91
Reagent Impregnated Film Badges for Passive Pollutant Sampling 96
A Cryogenic Preconcentration-Direct Flame Ionization Method
for Measuring Ambient NMOC 104
Mobile Air Monitoring by MS/MS - A Study of the TAGA 6000 System 109
Development of Surface-Enhanced Raman Spectroscopy for
Monitoring Toxic Organic Pollutants 113
-------
Thermal Desorption Techniques for the Gas Chromatographic
Analysis of Particulate Matter 115
A Method to Specify Measurements for Receptor Models 127
The Application of SIMCA Pattern Recognition to Complex Chemical
Data 131
Description of a Continuous Sulfuric Acid/Sulfate Monitor 140
Automated Sampling and Analysis of Flue Gases from Plasma Pyrolizer. . .152
The Ratio of Benzo(a)pyrene to Particulate Matter in Smoke from
Prescribed Burning 161
Volatile Organic Sampling Train (VOST) Development at MRI 171
An Evaluation of Instrumental Methods for the Analysis of Vinyl
Chloride in Gaseous Process Streams 180
Overview of Semiconducting Gas Sensing Devices 193
Examination of Calibration Precision Calculations and Protocols for
Air Monitoring Data 198
-------
INTRODUCTION
The fourth annual national symposium sponsored by EPA's Environmental
Monitoring Systems Laboratory was held May 8-10, 1984 in Raleigh, North
Carolina. In seven sessions over three days, papers and discussions
focused on state-of-the-art systems for monitoring source emissions,
ambient air, acid deposition, hazardous emissions and personal monitoring.
The sessions were categorized as follows:
SESSION I Particulate Pollutants
SESSION II Personal Monitoring
SESSION III Hazardous Waste Monitoring
SESSION IV Organic Pollutants
SESSION V Analysis of Complex Chemical Data
SESSION VI Acid Deposition
SESSION VII General and Source Oriented Monitoring
The papers are in the same order as presented by the speakers.
Several papers are omitted because the speakers did not submit them
in time for the agency's peer review.
vii
-------
PM-10 INSTRUMENTS: A MANUFACTURER'S PERSPECTIVE
By: Michael L. Smith, Andersen Samplers, Inc.
Andersen Samplers, Inc. and its subsidiaries, General Metal Works (GMW)
and Sierra-Andersen (S-A), have been manufacturing and marketing size specific
particulate samplers since the mid-1970's. The U. S. Environmental Protection
Agency recently proposed revisions to the National Ambient Air Quality
Standard (NAAQS) for particulate matter which would base the primary,
health-related standard on only those particles smaller than 10 micrometers
aerodynamic diameter (PM-10).
Both GMW and S-A manufacture and market a complete line of PM-10 instru-
ments, including Medium Flow Samplers, Dichotomous Samplers and Size Selective
High Volume Samplers. Although the instruments sold by each company are
designed to measure the same particulate pollutants, the collection mechanisms
are different. The GMW instruments are based upon cyclonic collection whereas
the S-A instruments are based upon impaction.
Each of the three types of instruments has specific features which give
it certain advantages in certain applications. Both the Medium Flow and
the Dichotomous Samplers utilize high vacuum pumps and therefore can use
Teflon membrane filters to collect the PM-10 particles. The use of Teflon
filters allows subsequent chemical analysis using x-ray fluorescence analysis.
The Dichotomous Sampler operates at a low flowrate (16.7 lpm) but separates "fine"
mode particles smaller than 2.5 micrometers from "coarse" mode particles in
the range of 2.5 to 10 micrometers. The Medium Flow Sampler operates at
4 CFM and collects all PM-10 particles on one 102 mm filter.
Because of the separation of coarse and fine mode particles, the Dicho-
tomous Sampler provides the most information in areas where difficult
compliance strategies are required. An automated version of the Dichotomous
Sampler which allows up to 15 samples without operator intervention provides
a method to sample "episode" events and to study short term (e.g., day
versus night) fluctuations or cycles. The Medium Flow Sampler collects
larger samples and provides the basis for developing compliance strategies
for noncompliance areas. The Medium Flow Sampler is easier to use than the
Dichotomous Sampler.
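As a rough illustration of the difference in sample size implied by the flow rates quoted above, the sketch below converts them into 24-hour sampled volumes and filter deposits; the PM-10 concentration used is an assumed value, not a figure from this paper.

```python
# Rough illustration (not from the paper): 24-hour sample volumes and the
# filter deposit each sampler would collect at an assumed PM-10 level.
CFM_TO_LPM = 28.317           # 1 cubic foot per minute in litres per minute

flows_lpm = {
    "Dichotomous (total)": 16.7,            # quoted flow rate
    "Medium Flow Sampler": 4 * CFM_TO_LPM,  # 4 CFM, quoted above
}

assumed_pm10_ugm3 = 50.0      # hypothetical ambient PM-10 concentration

for name, q in flows_lpm.items():
    volume_m3 = q * 60 * 24 / 1000.0        # litres -> m3 over 24 hours
    deposit_ug = volume_m3 * assumed_pm10_ugm3
    print(f"{name}: {volume_m3:.1f} m3 sampled, ~{deposit_ug:.0f} ug collected")
```

The larger sampled volume is what makes the Medium Flow Sampler deposits easier to weigh and analyze.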
1
-------
For routine monitoring stations, the Size Selective High Volume Sampler
will probably be the instrument of choice because the operating procedures
are similar to the current TSP High Volume Sampler and it is easy to use.
Existing TSP Hi-Vols can be easily converted to PM-10 Hi-Vols by adding a size
selective inlet, a flow controller, a flow recorder and a filter paper cart-
ridge. Glass fiber filters will not be allowed because of artifact formation,
and the quartz filters require a filter paper cartridge because they are more
fragile.
The proposed Federal Reference Method (FRM) performance specifications
for PM-10 Samplers are shown in Table 1 and the "sampling effectiveness"
(penetration) curves for the GMW and S-A Size Selective High Volume Samplers
are shown in Figure 1. Both inlets exhibit sharp sampling effectiveness
curves which meet the FRM performance specifications. The cutpoint of the
S-A Model 321-A Two Stage SSI is closer to the 10 micrometer cutpoint
desired by EPA (10 micrometers versus 9.0 micrometers for the GMW Model
9000 inlet). Table 2 summarizes the cutpoints of the two inlets at wind-
speeds of 2, 8 and 24 km/h.
As part of the FRM performance specifications, candidate samplers must
collect within ±10% of the mass that an "Ideal Sampler" would collect if
both sampled a hypothetical ambient mass distribution. A summary of the
performance of the GMW and S-A inlets compared to the "Ideal Sampler" is
shown in Table 3. For the hypothetical mass distribution specified in the FRM,
the S-A Model 321-A would read 0.4% low whereas the GMW Model 9000 would
read 3.6% low.
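The comparison described above can be sketched numerically: a sampler's effectiveness curve is integrated over an ambient mass distribution and the result compared with that of the "Ideal Sampler". The curve shapes and log-normal parameters below are illustrative assumptions, not the curves or distribution tabulated in the FRM.

```python
import numpy as np

def lognormal_mass(dp, mmd, gsd, total):
    """Mass distribution dM/dlnD for one log-normal mode (total in ug/m3)."""
    return (total / (np.sqrt(2 * np.pi) * np.log(gsd))
            * np.exp(-0.5 * (np.log(dp / mmd) / np.log(gsd)) ** 2))

def effectiveness(dp, d50, slope):
    """Smooth S-shaped sampling-effectiveness curve (illustrative shape)."""
    return 1.0 / (1.0 + (dp / d50) ** slope)

dp = np.logspace(-2, 2, 2000)      # 0.01 to 100 um grid
lnd = np.log(dp)

# Assumed bimodal ambient distribution (fine + coarse modes), ug/m3
ambient = lognormal_mass(dp, 0.5, 2.0, 30) + lognormal_mass(dp, 15.0, 2.5, 70)

for label, d50, slope in [("Ideal sampler", 10.0, 8.0), ("Candidate inlet", 9.0, 6.0)]:
    integrand = effectiveness(dp, d50, slope) * ambient
    mass = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(lnd)))
    print(f"{label}: expected collected mass {mass:.1f} ug/m3")
```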
There has been some question as to whether the hypothetical ambient
mass distribution specified in the FRM is truly representative. The FRM
distribution is representative of urban environments with relatively high
fine particle concentrations, but may not be representative of rural or
fugitive emissions distributions. As a further test of the PM-10 samplers,
we have compared their expected performance to the "Ideal Sampler" for the
three additional hypothetical ambient mass distributions shown in Figure 2.
Table 4 summarizes the variances of the S-A and GMW samplers from the
"Ideal Sampler" for each different mass distribution. Even for the Case
III distribution (fugitive emission, high large-particle concentration),
-------
the S-A Model 321-A would read 0.7% high while the GMW Model 9000 would
read 4.5% low.
SUMMARY AND CONCLUSION
Commercial PM-10 instruments are now available which meet or exceed all
of the proposed Federal Reference Method performance specifications. These
inlets have been tested in the wind tunnel and in collocated field inter-
comparison studies. PM-10 concentrations measured with different commercial
instruments should all be well within ±10% of the concentration that an "Ideal
Sampler" would be expected to measure.
REFERENCES
1. Federal Register, Vol. 49, No. 55, pages 10408-10462, March 20, 1984.
-------
TABLE 1: PERFORMANCE SPECIFICATIONS, PM-10 SAMPLERS

Parameter                   Units   Specification
Sampling Effectiveness
  A. Liquid Particles       %       Within ±10% of "Ideal Sampler"
  B. Solid Particles        %       <5% higher than results for liquid
                                    particles for 20 µm particles
Cutpoint (50%)              µm      10 ± 1 µm
Reproducibility             %
Flow Rate Stability         %
-------
TABLE 3: COMPARISON WITH "IDEAL SAMPLER"

                          Expected Mass       Relative Error
                          (µg/m³)             Compared to
Sampler                   (at 8 km/h)         "Ideal Sampler"
S-A 321-A (2 Stage)       52.1                -0.4%
GMW-9000                  50.4                -3.6%
"Ideal Sampler"           52.3
TABLE 4: COLLECTED MASS COMPARISONS

                                             Expected Mass Collected By (µg/m³)
                                             321-A Two Stage   Wedding Hi-Vol   Ideal
Size Distribution                            SSI               Inlet            Sampler

Case I:
  Total Aerosol Concentration 27.2 µg/m³     16.4              15.8             16.6
  Variance from Ideal                        -1.2%             -4.8%

Case II:
  Total Aerosol Concentration 71.5 µg/m³     46.1              44.8             46.1
  Variance from Ideal                        0%                -2.8%

Case III:
  Total Aerosol Concentration 179.9 µg/m³    74.5              70.7             74.0
  Variance from Ideal                        +0.7%             -4.5%
-------
Figure 1: Aerosol Sampling Characteristics of PM-10 Inlets for the
Hi-Vol Sampler. Wind Speed = 8 km/h. Flow Rate = 1.13 m³/min.
(Plot of penetration, in percent, versus aerodynamic particle diameter, µm,
with cutpoint sizes marked; the GMW-9000 curve is labeled.)
-------
FIGURE 2: Three Hypothetical Ambient Mass Distributions (Cases I, II and III),
plotted against particle diameter, Dp, from 0.01 to 100 µm.
-------
GENERATION AND USE OF LARGE, SOLID CALIBRATION AEROSOLS
R.W. Vanderpool and D.A. Lundgren
University of Florida
Gainesville, Florida
The calibration of four, large-particle impactors developed at the
University of Florida required the development of a technique for the
generation of large calibration aerosols. Slight modifications to a vibrating
orifice aerosol generator (Model 3050, TSI Inc., St. Paul, Minn.) enabled the
successful generation of solid ammonium fluorescein particles up to 70 µm
aerodynamic diameter. When generated under the proper test conditions, the
particles were found to be spherical and of uniform size.
The developed generation procedure does not involve the somewhat awkward
inversion of the aerosol generator. The dilution flowrate of the generator,
however, is inadequate to suspend generated droplets larger than about 65 µm.
As a result, large droplets will normally settle out and be lost before having
sufficient time to dry to the desired particle size. The successful
generation of large particles, therefore, requires the use of liquid solutions
of high volume concentrations. This allows production of droplets of
suspendable size which dry to form particles of the desired diameter. By
dissolving fluorescein powder in aqueous ammonia, volume concentrations as
high as 30% were produced. Although the use of high volume concentration
solutions requires more patience to start and maintain the liquid jet through
the orifice, the overall generation process is considered to be more
convenient than inversion of the generator.
The generation of particles larger than 20 µm requires careful
optimization of the operating parameters of the aerosol generator including
orifice diameter, liquid feedrate, vibrational frequency, and dilution
flowrate. Guidelines for the proper selection of these parameters are
outlined. The results of the impactor calibrations using the described
aerosol generation technique are briefly discussed.
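For orientation, the sketch below applies the standard vibrating-orifice relations that underlie this procedure: the droplet diameter follows from the liquid feed rate and vibration frequency, and the dried particle diameter from the solution volume concentration. The operating settings shown are illustrative assumptions, not the values used in this work.

```python
import math

def droplet_diameter_um(feed_cm3_min, freq_hz):
    """Droplet diameter from liquid feed rate and vibration frequency."""
    q = feed_cm3_min / 60.0 * 1e12          # cm3/min -> um3/s
    return (6.0 * q / (math.pi * freq_hz)) ** (1.0 / 3.0)

def particle_diameter_um(droplet_um, volume_fraction):
    """Dried particle diameter from the solution volume concentration."""
    return droplet_um * volume_fraction ** (1.0 / 3.0)

# Illustrative settings (not from this abstract)
dd = droplet_diameter_um(feed_cm3_min=0.12, freq_hz=14000)
print(f"droplet:  {dd:.1f} um")
print(f"particle: {particle_diameter_um(dd, 0.30):.1f} um  (30% volume concentration)")
```

The cube-root dependence on volume concentration is why concentrated solutions let droplets of suspendable size dry to the large particle diameters sought here.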
-------
A SIZE CLASSIFYING ISOKINETIC AEROSOL SAMPLER
DESIGNED FOR APPLICATION AT REMOTE SITES
C.F. Rogers and J.G. Watson, Desert Research Institute,
Atmospheric Sciences Center, P.O. Box 60220, Reno, NV 89506;
and C.V. Mathai, AeroVironment, Inc., Pasadena, CA 91107
Research quality aerosol monitoring projects require an aerosol filter
sampling device with the following characteristics:
• Measurement of inhalable (0 to 10 or 15 µm) and fine (0 to 2.5 µm)
size-classified particulate matter, with acceptable sampling effec-
tiveness.
• Simultaneous sampling on two different substrates, one amenable to
elemental and the other amenable to carbon analysis.
• Sequential sampling, without operator intervention, at greater than
75 L/min flow rates to obtain continuous samples over 4- to 24-hour
sample durations with sufficient deposits for chemical analysis.
• Simple and reliable field and laboratory operation at an affordable
price.
A Size Classifying Isokinetic Sequential Aerosol Sampler (SCISAS) com-
bines the best features of the SURE/ERAQS (Mueller and Hidy et al., 1983)
sequential filter sampler and the WRAQS (Allard et al., 1982) and Henry
(1977) isokinetic sampling manifolds to meet these requirements.
Ambient air is continuously drawn at a rate of 1100 L/min into a ten
inch diameter PVC stack through a 10 or 15 µm McFarland size-selective
inlet (SSI). Particles are then drawn from this stack, at a velocity close
to that in the main stack, to sample 0 to 10 or 15 µm particles
on two different filter media. Each of the two 0 to 2.5 µm aerosol samples
is withdrawn from the main stack at a flow rate of 113 L/min through a
single two-inch internal diameter tube which leads to a cyclone for exclusion
of particles larger than 2.5 µm diameter. The outlet of the cyclone
leads to a simple rectangular supply manifold into which six 47 mm
Nuclepore filter holders are mounted. A general view of the SCISAS is
shown in Figure 1.
The basic configuration of the SCISAS includes fourteen two-inch internal
diameter supply tubes clustered inside the main ten-inch stack. The
flow rate of approximately 80 L/min within each inhalable particle two-inch
supply tube was chosen empirically to provide very nearly isokinetic
matching between the average velocities in these tubes and that in the main
stack with 1100 L/min flow. An alternative design draws the 0 to 10 or
15 µm sample at isokinetic velocities through a four inch diameter tube
into a 46" long sampling plenum. Filters along the side of the plenum draw
the sample from it.
-------
The following particle loss mechanisms were theoretically evaluated and
predicted to be negligible (less than 1%):
• Electrostatic capture in the main PVC stack.
• Turbulent diffusion in the main stack and two inch sampling tubes.
• Brownian diffusion losses.
• Inertial impaction losses in the two inch tubes.
Sedimentation losses in the inhalable particle sampling tubes are calculated
to be a maximum of 1% for 15 µm particles. The maximum bias in the 0
to 10 or 15 µm mass estimation is much less than 5%.
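The settling behavior behind such sedimentation estimates can be sketched with Stokes' law and a slip correction; the calculation below is generic aerosol mechanics for unit-density (aerodynamic) particles, not the authors' loss model.

```python
import math

MU_AIR = 1.81e-5     # Pa*s, air viscosity near 20 C
RHO_P = 1000.0       # kg/m3, unit density for aerodynamic diameter
G = 9.81             # m/s2
MFP = 0.066e-6       # m, mean free path of air

def settling_velocity(dp_um):
    """Stokes settling velocity with Cunningham slip correction."""
    d = dp_um * 1e-6
    kn = 2 * MFP / d
    cc = 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))
    return RHO_P * d ** 2 * G * cc / (18 * MU_AIR)    # m/s

for dp in (2.5, 10.0, 15.0):
    print(f"{dp:4.1f} um particle: settling velocity {settling_velocity(dp)*100:.3f} cm/s")
```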
In ten tests of a 15 µm cut-point SCISAS prototype, aerosol mass concentrations
measured in Reno, NV, ranged from 6 µg/m³ to 32 µg/m³; the
maximum difference between any two of the four SCISAS 0 to 15 µm particle
sampling tubes operated simultaneously in each of these tests was 4%. More
typically, any two sampling tubes agreed to better than 3%. At the same
sampling site, three comparisons of the SCISAS prototype to a collocated
hivol outfitted with an identical 15 µm size selective inlet were conducted.
Ratios of the mass concentrations measured by the SCISAS, to those
measured by the hivol/SSI, were 0.98, 0.95, and 0.96 for these three
preliminary tests.
Further tests of the SCISAS are now scheduled and will include exten-
sive comparisons with other sampling devices. Other tests include
1) measurement of passive deposition inside the SCISAS sampling tubes,
2) measurements of re-entrainment of large particles inside the SCISAS,
3) evaluation of virtual impaction into non-operating sampling tubes in the
SCISAS tube cluster, 4) quantitative evaluation of the effects of non-
isokinetic mismatches at the entrance to the tube cluster, and 5) measurement
of the effects of flow rate variations in the main stack and SSI.
Velocities inside the main stack at the approach to the tube cluster will
be mapped with a hot-wire anemometer. A theoretical evaluation of the
possible effects of aerosol particle deliquescence or shrinking, due to
heat transfer in the SCISAS, will also be performed.
REFERENCES
Allard, D.W., Tombach, I.H., Mayrsohn, H., and Mathai, C.V., 1982. "Aerosol
Measurements: Western Regional Air Quality Studies" Air Pollution
Control Association Annual Meeting, New Orleans, LA.
Henry, R., 1977. "A Factor Model of Urban Aerosol Pollution," Ph.D.
Dissertation, Oregon Graduate Center, Beaverton, Oregon.
Mueller, P.K., Hidy, G.M., Baskett, R.L., Fung, K.K., Henry, R.C., Lavery,
T.F., Warren, K.K. and Watson, J.G., 1983. "The Sulfate Regional
Experiment: Report of Findings," Volume 1, Report EA-1901, Electric Power
Research Institute, Palo Alto, CA.
10
-------
FIGURE 1: General view of SCISAS, not showing Nuclepore filter
holders, three of four vacuum manifolds, four suction
pumps, connecting tubing and stand. (Diagram labels: size selective
inlet, 10 or 15 µm; ten inch stack; tube cluster; fine particle manifold;
cyclone; plenum; total particle tubes (12); hi-vol fan; a typical vacuum
manifold, 3 more not shown.)
11
-------
PARTICLE AND SUBSTRATE LOSSES DURING SHIPMENT OF TEFLON
AND QUARTZ FILTERS
V. Ross Highsmith, U.S. Environmental Protection Agency
Andrew E. Bond, U.S. Environmental Protection Agency
James E. Howes, Battelle Columbus Laboratories
ABSTRACT
A special study was conducted to evaluate particle and filter substrate
losses resulting from routine handling of particulate samples collected on
quartz and teflon filters. Filters were weighed at pre-determined stages of
the filter handling process to estimate changes in mass corresponding to the
various filter handling operations. Control filters, both field blanks and
sampled filters, were used to estimate passive artifact formation and particu-
late matter volatilization. The remaining filters were shipped to the labora-
tory for observation and returned for final weighing. Changes in shipped
filter mass could be attributed to both a loss of large particles during
shipment and a loss of particulate matter from volatilization. A comparison
of control filter weight changes with shipped filter weight changes would
provide an estimate of the overriding mechanism responsible for any observed
particle and substrate losses following sample collection.
The data presented in this report suggest no significant weight
loss from routine high volume sampling of particulate matter using quartz
filter media, as long as the final filter weighings are performed without
archiving or shipping the filter to the laboratory. A reduction in filter
mass was observed after the shipment of total suspended particulate quartz
filters to the laboratory. Particulate matter loss from volatilization was
also noted with the high volume samples collected in Phoenix, AZ. No signifi-
cant weight change was observed in the routine handling of dichotomous teflon
filters.
This is an abstract for proposed publication and does not necessarily
reflect EPA policy.
12
-------
INTRODUCTION
The Environmental Protection Agency recently conducted a field evaluation
of commercially available nominal 10 micrometer (10 µm) inlets for particulate
samplers. Total suspended particulate (TSP) and 10 µm size selective inlet
(SSI) high volume samplers as well as 10 µm dichotomous samplers were operated
in four cities using established Inhalable Particulate Network (IPN) operating
procedures (2). TSP and SSI samples were collected using 8" x 10" quartz filters.
Dichotomous samples were collected using 37 mm Teflon filter media identical
to those employed in the IPN.
A special filter evaluation study was conducted at two of the four
cities, Phoenix, AZ and St. Louis, MO. The purpose of this special study was
to evaluate various aspects of the filter handling operation in order to
estimate the magnitude of particle and filter substrate losses from quartz
and teflon filters during the particulate matter (PM) measurement process.
The initial consideration was to determine the magnitude of particle and
substrate losses resulting from the folding of quartz high volume sampler
filters. Commercially available quartz filters have a tensile strength equal
to about 25-40% of the tensile strength cited for glass fiber filters (3). When
folded, quartz filters have a greater tendency to crack and fray along the
crease. Glass fiber filters typically do not crack or fray when folded.
Monitoring both particulate-loaded and blank quartz filter weights before and
after folding would provide information to assess the combined particle and
substrate losses resulting from the folding process. The second aspect to be
considered was the loss of particulate matter resulting from the shipment of
sampled filters to the laboratory for final weighing. Filter weights were
monitored both before and after shipment to document mass losses resulting
from shipment. Loss of PM mass during filter shipment is thought to be a
large particle phenomenon; i.e., with increased large particle concentration,
PM loss resulting from shipment is expected to increase. Compared to St.
Louis, Phoenix's particle size distribution data indicates a significantly
larger coarse particle fraction. Therefore, the Phoenix samples would be
expected to be more adversely affected by large particle losses during ship-
ment than the St. Louis samples.
13
-------
The third aspect to be evaluated in this study was PM losses resulting
from volatilization over long periods of time (4). For a typical PM monitoring
network, a lapse of up to 30 days occurs from the date the sample is collected
in the field to the date the gross weight is obtained in the laboratory.
This study data would provide a means to estimate particulate mass loss
resulting from volatilization. Estimating passive artifact formation on
quartz filters was the last aspect considered. Sulfate and nitrate artifact
formation on glass fiber filters, routinely used in PM monitoring networks
such as the IPN, has been documented. Unlike glass fiber filters which have
a pH of ca. 9.5, quartz filters have a pH of ca. 7.0. Consequently, passive
sulfate and nitrate artifact formation on the quartz filters used in this
study should be minimal.
The results from this study are considered to be "best case". Extra
precautions were taken by the field operators in both the handling of samples
in the field and the operation of the samplers in accordance with the
manufacturer's specifications. In addition, the study quality assurance
protocol, especially with regards to acceptable weighing procedures, was
strictly enforced. Most important, both TSP and SSI sampling was conducted
using filter cassettes, minimizing filter handling in the field and reducing
the potential for voided filters.
EXPERIMENTAL
Five sampling days were scheduled in both Phoenix, AZ and St. Louis, MO
to conduct the special study. Figure 1 diagrams the high volume filter
handling process. Prior to sampling, groups of quartz and teflon filters
were placed open in racks inside the mobile weighing laboratory with controlled
chamber conditions of 40±3% relative humidity and 20±2° Centigrade. Following
24 hours of conditioning, filter tare weights were recorded. Quartz filters
were then loaded into filter cassettes, with lids, while the teflon filters
were first placed in Lexan jigs and then into petri dishes. On each sampling
day, two TSP and four SSI high volume samplers were operated. Four dichotomous
samplers were also operated for each 24-hour sampling period. Additionally,
on each sampling day, two quartz filter field blanks and two teflon filter
field blanks were identified. These field blanks, quartz filters loaded in
cassettes with lids and teflon filters in Lexan jigs inside petri dishes,
were removed from the conditioning chamber and placed in the sampling
environment during the 24-hour sample period.
14
-------
At the completion of the sampling period, each filter was then returned
to the controlled conditioning chamber. Filters were removed from the cassettes
and Lexan jigs and placed open in the conditioning chamber racks for 24
hours. The gross weight was then recorded for each filter. After weighing,
one TSP filter sample, one SSI filter sample, one quartz filter blank, one
dichotomous fine sample, one dichotomous coarse sample and one blank teflon
filter were designated as controls and returned to the conditioning shelf.
The remaining TSP, three SSI and one blank quartz filters were individually
folded, placed in separate cardboard supports and enclosed inside manila
envelopes. These folded quartz filters were immediately removed from the
envelopes and reweighed to obtain a weight after folding. After reweighing,
the filters were returned to their appropriate cardboard supports and envelopes.
After weighing, the three coarse, three fine and one blank dichotomous samples
were reloaded into their Lexan jigs and placed in their petri dishes. The
folded quartz filters and dichotomous filters were then mailed to the laboratory
using Jiffy® bags, with three or four samples per bag.
Upon receipt of the filters at the laboratory, each filter was visually
inspected for obvious PM loss, cracks, tears or any other unusual physical
change. Following this visual inspection, the filters were repackaged and
mailed back to the sampling site. Upon arrival at the sampling site, the
filters were opened and returned to the conditioning shelf for 24 hours.
After reconditioning, the shipped and corresponding control samples were
again weighed and final weights obtained. For both Phoenix and St. Louis,
the final filter weighing occurred within 30 days after sample collection.
The resulting mass data was summarized and analyzed using standard
statistical tests for both paired and non-paired data at the 5% significance
level as described below:
For paired data

    Calculated T Statistic:  T = d̄ / (σ_d √(1/n))

    where  d̄   = mean difference between pairs
           σ_d = standard deviation of the differences
           n   = number of pairs
15
-------
    Paired Test Interval:  T ± t(n-1, α/2)

For non-paired data

    T = (d̄1 - d̄2) / √[ Sp² (1/n1 + 1/n2) ]

    where  d̄1 - d̄2 = difference between the two group means
           n1, n2   = number of observations for groups 1 and 2
           Sp²      = [(n1 - 1)S1² + (n2 - 1)S2²] / (n1 + n2 - 2)
           S1², S2² = variances for groups 1 and 2
For paired data sets, if the test interval, the calculated test statistic (T)
plus or minus the tabulated Student-t value t(n-1, α/2), contains zero,
then the two sets of data being compared are determined to be indistinguishable.
For non-paired data sets, if the test interval defined above includes zero,
then the two sets of non-paired data are considered to be indistinguishable.
Otherwise, the two data sets are considered distinguishable, i.e., the
difference between the two sets is statistically significant. In instances where
the statistical analysis of the data indicates the two data sets to be
distinguishable, but the magnitude of the real difference between the two
data sets falls within the experimental error associated with
the weighing process, the difference is considered marginal and not of
practical significance.
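A minimal sketch of this decision rule, assuming the standard paired and pooled two-sample t statistics and SciPy's distribution tables, is shown below; the filter weights and weight losses are invented for illustration.

```python
import numpy as np
from scipy import stats

alpha = 0.05   # 5% significance level, as in the study

def paired_indistinguishable(x, y):
    """True if the interval T +/- t(n-1, alpha/2) contains zero."""
    d = np.asarray(x) - np.asarray(y)
    n = len(d)
    t_calc = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    t_tab = stats.t.ppf(1 - alpha / 2, n - 1)
    return (t_calc - t_tab) <= 0.0 <= (t_calc + t_tab)

def nonpaired_indistinguishable(x, y):
    """Pooled-variance two-sample test; equivalent zero-in-interval rule."""
    t_calc, _ = stats.ttest_ind(x, y, equal_var=True)
    t_tab = stats.t.ppf(1 - alpha / 2, len(x) + len(y) - 2)
    return abs(t_calc) <= t_tab

w_24h   = [3.9512, 3.9488, 3.9530, 3.9475, 3.9501]   # g, hypothetical filter weights
w_30day = [3.9489, 3.9470, 3.9508, 3.9461, 3.9483]
print("paired 24-h vs 30-day indistinguishable:",
      paired_indistinguishable(w_24h, w_30day))

loss_shipped = [2.3, 1.8, 2.2, 1.4, 1.8]   # mg, hypothetical weight losses
loss_control = [0.4, 0.6, 0.2, 0.5, 0.3]
print("non-paired shipped vs control losses indistinguishable:",
      nonpaired_indistinguishable(loss_shipped, loss_control))
```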
Three paired statistical comparisons were performed on the shipped
quartz filter weight data for each of the three sample types (TSP, SSI and
blank) for each sampling city. First, the 24-hour weights were statistically
compared to the corresponding folded filter weights to determine mass loss
resulting from folding. Secondly, the folded filter weights were compared to
the 30-day weights to estimate the combined losses resulting from PM
volatilization and shipment of the filters to the laboratory. Finally, the
24-hour weights were compared to the 30-day weights and a total mass loss
since sample collection was calculated. Likewise, the 24-hour control filter
weights were compared to the corresponding 30-day weights. As these filters
were neither shipped nor folded, any significant loss in mass could be attributed
to volatilization. Any significant gain in filter weight observed with the
16
-------
control filters would be attributed to passive artifact formation. Using the
non-paired test statistic, the average weight loss for each shipped filter
type was then compared to the average weight loss for the corresponding
control filter type to determine if any statistical difference in filter mass
observed with the shipped filters was also observed with the control filters.
The results of both paired and non-paired statistical tests were then used to
determine whether the filter folding process, shipment of the filter or
volatilization was the overriding mechanism contributing to any observed
weight change.
The dichotomous filter weight data was statistically tested following
the same procedures outlined above for the quartz high volume samples with
one exception. The shipped dichotomous filters were only weighed at two
intervals, 24 hours following sample collection and at the 30-day interval.
Therefore, only one paired comparison of the shipped dichotomous sample data
was conducted for each city.
RESULTS AND DISCUSSION
When received at the laboratory, the shipped filters were opened and
examined for tears, cracks, loss of large particles or any other unusual physical
change. More than 80% of the shipped TSP and SSI samples received from both
Phoenix and St. Louis were cracked and/or frayed along the folded crease.
However, less than 10% of the shipped blank filters experienced cracking
along the crease. No explanation can be given at this time regarding why the
sampled filters cracked and the blank filters did not crack upon folding and
shipping. Examination of the St. Louis dichotomous filters revealed no
obvious physical changes in the filters. The Phoenix coarse dichotomous
filters, however, did show some loss of large particles both to the Lexan jig
and the petri dish. No particle loss or other physical change was noted with
the Phoenix fine dichotomous samples.
Table 1 summarizes the results of the paired t-test statistics performed
on the Phoenix and St. Louis quartz high volume filter weights. For all
three sample types (TSP, SSI and blank samples), at both cities, comparisons
of the 24-hour filter weights to the corresponding folded filter weights
yielded no significant difference in mass. This implies that neither
particulate matter nor filter substrate material was lost as a result of the
folding process. A statistically significant loss in filter mass was observed
for the Phoenix TSP (12.7 µg/m³), Phoenix SSI (4.7 µg/m³) and St. Louis TSP
17
-------
(4.6 µg/m³) samples shipped to the laboratory. This statistical test suggests
that this change in filter mass is directly attributed to both a large particle
loss during shipment and to a loss of PM due to volatilization. The shipped
St. Louis SSI filter weight change, although statistically significant, was
considered marginal as the average SSI filter mass loss falls within the
experimental error established for the filter weighing process. An analysis
of the shipped blank quartz filter data shows no significant weight change.
This suggests that the blank quartz filters used in this special study neither
lost substrate due to folding or shipping nor did they undergo significant
passive artifact formation. For both St. Louis and Phoenix shipped quartz
filters, the overall mass change observed in the 24-hour versus 30-day
comparisons corresponded to the summation of the mass change calculated for
both the 24-hour versus folded comparison and the folded versus the 30-day
comparison. The control quartz high volume sample data reveals that the
Phoenix control TSP and SSI filters experienced a significant loss in filter
mass equivalent to ca. 3 µg/m³, over the 30-day period. Since the control
filters were neither folded nor shipped, this mass loss is thought to correspond
to volatilization. The St. Louis control quartz filter data, as well as the
blank quartz filter data, indicate no significant mass change over the 30-day
period and therefore no loss of PM due to volatilization. The overall
differences in the St. Louis particle size distribution, the chemical
constituency of the particles and the 24-hour mass loadings as compared to
Phoenix are considered the primary reasons for this observation.
Using standard statistics for non-paired data, the overall weight change
for each shipped sample type was compared with the weight change for the
corresponding control sample type. Testing the Phoenix shipped TSP filters
against the Phoenix control TSP filters yielded a significant difference in
weight loss. Recalling the earlier paired filter comparisons, this indicates
that the observed weight change for the shipped Phoenix TSP filters represents
the combined effects of shipment and volatilization. A similar comparison of
the Phoenix SSI shipped and controlled filters yielded no statistically
significant difference in observed shipped SSI weight loss. Therefore, the
shipped Phoenix SSI sample weight loss is solely attributable to volatilization
and not large particle loss during shipment. The St. Louis shipped versus
controlled TSP filter data suggests that the observed weight changes results
solely from large particle losses occurring during shipment of the sample to
the laboratory.
18
-------
Table 2 summarizes the analysis results for the dichotomous filter data.
A significant loss in filter mass was noted with the shipped Phoenix fine and
coarse dichotomous samples. As was noted previously for the shipped quartz
filters, this data indicates a loss of large particles during shipment as
well as a loss of PM due to volatilization. No other significant changes in
shipped filter mass was observed for either the Phoenix or St. Louis shipped
dichotomous filters. The control dichotomous sample statistics show no
distinquishable differences between the 24-hour and 30-day weights and indicates
that volatilization did not significantly affect the collected dichotomous
samples. Therefore, the mass loss observed with the shipped Phoenix dichotomous
samples is contributed to a loss of large particles during shipment. Since
the Phoenix size distribution contains an extremely large coarse fraction,
the loss in filter mass seen in the shipped Phoenix dichotomous filters is
attributed to this abnormal large particle loading. Although significant for
Phoenix or any other arid environment heavily laden loaded with coarse particles,
a loss in filter mass resulting from shipping dichotomous filters to the
laboratory is considered unlikely for most routine sampling sites.
CONCLUSIONS
Folding quartz high volume filters does not significantly affect filter
mass. Both Phoenix and St. Louis shipped TSP samples experienced large
particle losses equivalent to approximately 5% of the filter mass loading as
a result of shipping the filter to the laboratory. Based on the blank quartz
filter data, no filter substrate material was lost nor did passive artifact
formation significantly affect the mass determinations. The Phoenix TSP and
SSI sample data showed significant weight loss corresponding to volatilization.
Except in areas with extremely high coarse particle loadings, particle loss
from shipping dichotomous filters to the laboratory would not be expected.
This special study suggests that quartz filters can be used in routine
PM monitoring. However, this was a "best case" study. Field operators and
laboratory personnel exercised caution in the handling of these quartz filters.
The sampling equipment was routinely monitored to insure compliance with the
manufacturer's specifications. And most important, filter cassettes were
used for high volume sampling, minimizing the handling of quartz filters in
the field. Additionally, cracked or frayed quartz filters were not voided
but were considered as acceptable samples in this study.
19
-------
REFERENCES
1. Rodes, C.E., R.M. Burton, L.J. Purdue and K.E. Rehme. Protocol for PM
Inlet Evaluation and Comparison with the Wide Range Aerosol Classifier,
April 1983, U.S. Environmental Protection Agency, Research Triangle
Park, N.C. 27711.
2. Inhalable Particulate Network Operations and Quality Assurance Manual,
March 1983, U.S. Environmental Protection Agency, Research Triangle
Park, N.C. 27711.
3. Whatman, Inc. 9 Bridwell Place, Clifton, N.J., 1984.
4. Clement, R.E. and F.W. Kurasek. Sample Composition Changes in Sampling
and Analysis of Organic Compounds in Aerosols. Int. J. Environ. Analyt.
Chem. 7:109, 1979.
5. Appel, B.R., S.M. Wall, Y. Tokiwa and M. Haik. Interference Effects in
Sampling Particulate Nitrate in Ambient Air, Atmos. Environ. 13:319,
1979.
6. Remington, R.D. and M. Anthony Schork. Statistics with Applications to
the Biological and Health Sciences, Prentice-Hall, Inc., Englewood
Cliffs, N.J., 1970.
20
-------
TABLE 1. Results of Statistical Analysis on Quartz Filter Weights

                                        PHOENIX                    ST. LOUIS
COMPARISON                        TSP     SSI     BLANK      TSP     SSI     BLANK

Shipped Filter
24-Hour versus Folded Weight      Indist  Marg    Indist     Indist  Marg    Indist

Shipped Filter
Folded versus 30-Day Weight       Dist    Dist    Indist     Dist    Marg    Indist

Shipped Filter
24-Hour versus 30-Day Weight      Dist    Dist    Indist     Dist    Marg    Indist

Control Filter
24-Hour versus 30-Day Weight      Dist    Dist    Indist     Indist  Indist  Indist

Shipped versus Control Filter
30-Day Weight Loss                Dist    Indist  Indist     Dist    Indist  Indist
21
-------
TABLE 2. Results of Statistical Analysis on Teflon Filter Weights

                                        PHOENIX                    ST. LOUIS
COMPARISON                        FINE    COARSE  BLANK      FINE    COARSE  BLANK

Shipped Filter
24-Hour versus 30-Day Weight      Dist    Dist    Indist     Marg    Indist  Indist

Control Filter
24-Hour versus 30-Day Weight      Indist  Indist  Indist     Indist  Marg    Indist

Shipped versus Control Filter
30-Day Weight Loss                Indist  Indist  Indist     Marg    Indist  Indist

Dist indicates a significant difference between paired data
Indist indicates no significant difference between paired data
Marg indicates that although a statistical difference is noted, the real
difference is within the experimental error associated with the weighing
process
22
-------
Figure 1. High Volume Sample Handling Diagram. (Flow diagram; labels
include 24-hour conditioning.)
23
-------
POLLUTANT LOSSES IN DICHOTOMOUS SAMPLERS
T. Jarv and O.T. Melo
Ontario Hydro Research
Toronto, Ontario, Canada M8Z 5S4
INTRODUCTION
Ontario Hydro (OH) voluntarily initiated a sulfate aerosol monitoring
program in 1975/1/ in response to concerns over health effects. The monitoring
network has undergone several changes since its inception. In 1981, the sampler
used for sulfate aerosol sampling was changed. Dichotomous samplers replaced
hi-vol and RAC low-vol air samplers, to take advantage of improved knowledge
and methodologies in the study of sulfate aerosol. At the same time, a shift
in emphasis from sulfate aerosol to acid precipitation required that other acidic
pollutants, such as nitric acid, nitrate and sulfur dioxide, be monitored also.
To accomplish this the dichotomous sampler, originally developed for aerosol
sampling, was modified slightly to allow collection of these other pollutants.
A comparison with the Ontario Ministry of the Environment (OME) Acidic
Precipitation in Ontario Study (APIOS) event network and the Atmospheric Environ-
ment Service (AES) of Environment Canada Air and Precipitation Monitoring Network
(APN) was undertaken in 1981. The comparison, undertaken at the APIOS monitoring
site at Dorset, Ontario, indicated that the OH results for total nitrate and
sulfur dioxide were lower than those obtained by OME and AES/2/.
In order to explore the differences observed in the Dorset comparison,
two additional studies were undertaken: i) a laboratory investigation to deter-
mine whether gaseous nitric acid losses occur in the dichotomous sampler and
ii) a second field comparison of a dichotomous sampler, a Teflon-coated dicho-
tomous sampler and an OME/AES-type sampling system under meteorological condi-
tions similar to those encountered in the Dorset comparison. In this paper,
the highlights of these more recent studies are presented. Some interpreta-
tion of the results is also provided.
LABORATORY STUDY
A modified dichotomous sampler, similar to the ones used in the OH network,
was tested in the laboratory. The sampler is composed of three components:
1) inlet head, 2) inlet tube and 3) virtual impactor. The inlet head was
placed in a 0.022 m³ Teflon chamber into which a nitric acid containing atmosphere
was introduced. A reference filter cassette, attached to the Teflon chamber, was
used to sample the test atmosphere.
24
-------
Four sampling configurations were examined: these are noted in Figure 1. A
total of thirty-six 24-h samples were collected, with at least six 24-h samples
for each experimental arrangement. The filter cassettes used in the laboratory
tests consisted of a Teflon filter to remove particulates and a single nylon fil-
ter which was then extracted and analysed for nitrate using automated colouri-
metry. The ratio, (C+F)/REF, of the total (coarse plus fine) nitric acid sampled
by the dichotomous sampler to that collected on the reference filter was
determined for each configuration; the results are shown in Figure 1.
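The data reduction implied here reduces to a simple ratio per run; the sketch below shows the arithmetic with invented nitrate loadings.

```python
def recovery_ratio(coarse_ug, fine_ug, reference_ug):
    """(C + F) / REF for one 24-h run, expressed as nitrate mass."""
    return (coarse_ug + fine_ug) / reference_ug

runs = [
    {"coarse": 1.1, "fine": 9.4, "ref": 14.8},   # hypothetical run 1
    {"coarse": 0.9, "fine": 8.7, "ref": 13.2},   # hypothetical run 2
]
for i, r in enumerate(runs, 1):
    ratio = recovery_ratio(r["coarse"], r["fine"], r["ref"])
    print(f"run {i}: (C+F)/REF = {ratio:.2f}")
```

Ratios well below 1.0 would indicate nitric acid lost to the sampler interior, which is the behavior reported below.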
The (C+F)/REF results for the complete sampler configuration indicated
that gaseous nitric acid is lost to the interior of the dichotomous sampler.
The addition of a Teflon liner to the inlet tube reduced the nitric acid loss.
Each sampler component was found to contribute significantly to the loss of
nitric acid, with the virtual impactor appearing to be the largest sink.
FIELD COMPARISON
An OME/AES-type sampler (1), a modified dichotomous sampler (2), and a
Teflon-coated modified dichotomous sampler (3) were compared. The following
sampler pairings were made: Pair A - 1 versus 2; Pair B - 1 versus 3 and Pair C -
2 versus 3.
The OME/AES-type sampling system consisted of a multi-stage filter cassette
mounted in a Teflon holder, a protective polyethylene cone, a flow controller, a
Gast pumping system and a supporting stand. The filter cassette was held in
the inverted polyethylene cone 2 meters above ground level. The sampling flow
rate was identical to that of the modified dichotomous sampler, 16.7 L/min. The
dichotomous samplers collected air at 2 meters also. The dichotomous sampler
had an upper cut-off of 15 µm diameter. The OME/AES-type sampler has been esti-
mated to collect particles smaller than about 30 µm, under laminar atmospheric
flow conditions.
Identical filter cassettes were used with all samplers. Each multi-stage
filter cassette consisted of a 37-mm diameter, 1 µm pore-size Teflon filter for
sulfate, nonvolatile nitrate and ammonium aerosol collection; followed by a
37-mm diameter, 1.1 µm pore-size nylon filter for gaseous nitric acid and vola-
tile nitrate collection; and terminated with a 37-mm diameter, potassium
carbonate-impregnated Whatman 41 cellulose filter for sulfur dioxide collection.
A total of 25 concurrent 24-h samples were collected with the three samplers.
25
-------
The filters were extracted and analysed for sulfate, nitrate, ammonium and
sulfur dioxide by continuous flow analysis (automated colourimetry). The
results were evaluated statistically with scattergrams, the nonparametric sign
test and least squares linear regression. The nonvolatile nitrate, volatile
nitrate and sulfur dioxide scattergrams are shown in Figure 2. The sign test
and linear least squares results are presented in Table I.
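A minimal sketch of this evaluation, using SciPy's binomial test for the nonparametric sign test and its least-squares regression routine, is given below with invented paired concentrations.

```python
import numpy as np
from scipy import stats

# Hypothetical paired 24-h concentrations from two collocated samplers, ug/m3
sampler_a = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.2, 1.5, 2.7])
sampler_b = np.array([2.0, 3.1, 1.9, 3.6, 2.8, 3.0, 1.4, 2.5])

# Sign test: count pairs where A > B and test against a binomial null of p = 0.5
diff = sampler_a - sampler_b
n_pos = int((diff > 0).sum())
n_nonzero = int((diff != 0).sum())
p_sign = stats.binomtest(n_pos, n_nonzero, 0.5).pvalue

# Least-squares linear regression of A on B
fit = stats.linregress(sampler_b, sampler_a)

print(f"sign test p-value: {p_sign:.3f}")
print(f"A = {fit.slope:.2f} * B + {fit.intercept:.2f}   (r = {fit.rvalue:.3f})")
```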
An examination of Figure 2 and Table I indicates that sulfate, nonvolatile
nitrate and ammonium aerosol and sulfur dioxide concentrations measured with
the two dichotomous samplers were statistically equivalent. Volatile nitrate
concentrations measured with the Teflon-coated dichotomous sampler were larger
than those measured with the uncoated dichotomous sampler. This is consistent
with the laboratory results. The OME/AES-type sampler was found to collect more
nonvolatile nitrate and volatile nitrate (total nitrate) than either of the di-
chotomous samplers. These nitrate concentration differences are attributed to
the additional coarse particulate nitrate sampled by the OME/AES-type sampler
and to the partial volatilization of this material/3,4/.
As a comparison, sulfate and ammonium aerosol, both predominantly associated
with submicron particles/5/, were sampled in a statistically equal fashion by
all three samplers. If nitrate aerosol were predominantly associated with
submicron particles then the three samplers would have collected nitrate aerosol
in a statistically equal fashion. The loss of nitrate aerosol through volatili-
zation would have been comparable in all samplers, as the flow rates were equal.
However, the OME/AES-type sampler collected more nonvolatile nitrate. The
difference likely results from the additional coarse aerosol that appears to
be sampled by the OME/AES-type sampler.
Sulfur dioxide concentrations measured with the OME/AES-type sampler and
the Teflon-coated dichotomous sampler were statistically equivalent. A similar
result was expected for volatile nitrate. However, the results were clouded
by artifact volatile nitrate from the partial volatilization of the additional
coarse nitrate aerosol collected by the OME/AES-type sampler.
These results, combined with the Dorset comparison, suggest that the
large nonvolatile and volatile nitrate and sulfur dioxide concentration dif-
ferences found at Dorset resulted from the following: (a) a small contribution
from the inlet height difference (<10%), (b) losses to the aluminum inner walls
of the dichotomous sampler, (c) occasional losses to moisture trapped on the OH
26
-------
filter during the Dorset study, (d) nitric acid artifact contributions during
sampling, (e) quality control variations in the preparation of the potassium
carbonate-impregnated filters, (f) differences in sample flow rates and (g) the
different particle size ranges sampled.
ACKNOWLEDGEMENT
The authors thank Messrs. G. Till, B. Handy and D. Knebel for their help
in conducting the experiments. This work was supported by the Chemical Research
Department of Ontario Hydro Research.
REFERENCES
1. Melo, O.T., (1975). A Proposal for Atmospheric Sulphate Monitoring in
Southern Ontario, Ontario Hydro Research Division Report No. 75-19-K.
2. Concord Scientific Corporation, (1982). The Dorset Intercomparison of Pre-
cipitation and Air Sampling Methodologies, CSC Report 182-2 prepared for
Ontario Ministry of the Environment - Air Resources Branch, Atmospheric
Environment Service and Ontario Hydro.
3. Appel, B.R., Y. Tokiwa and M. Haik, (1981). Sampling of Nitrates in Ambient
Air, Atmos. Environ. 15, pp 283.
4. Appel, B.R., S.M. Wall, Y. Tokiwa and M. Haik, (1979). Interference Effects
in Sampling Particulate Nitrate in Ambient Air, Atmos. Environ. 13, pp 319.
5. Kadowaki, S., (1976). Size Distribution of Atmospheric Total Aerosols,
Sulfate, Ammonium and Nitrate Particulates in Nagoya Area. Atmos. Environ.
10, pp 39.
27
-------
MASS DISTRIBUTION OF LARGE AMBIENT AEROSOLS
AND THEIR EFFECT ON PM-10 MEASUREMENT METHODS
Dale A. Lundgren and Brian Hausknecht
University of Florida
Gainesville, FL 32611
Robert M. Burton, EMSL, EPA, R.T.P., N.C.
EXTENDED ABSTRACT
Introduction
A mobile aerosol size classifying sampling system for the collection of
very large (100 µm diameter) particles was designed and constructed by
Lundgren and Rovell-Rixx at the University of Florida.1 An analysis van was
outfitted to accompany the sampling trailer. A specially designed air
sampling inlet was fitted to a very high flowrate (~40 m³/min) sampler, which
greatly reduced the large particle sampling errors due to inertial effects, as
described by Lundgren and Paulus.2 In a 10 km/hr wind, the design criteria
predicted a less than 20% error for sampling 100 µm particles. Test results
indicated that the sampling error was within this design limit.
Ambient aerosol mass distributions were measured in five cities across the
U.S. and compared with data collected using several conventional ambient
aerosol samplers and size selective inlet (SSI) samplers. The five cities
sampled were Birmingham, Alabama (an industrial area); Research Triangle Park,
North Carolina (a background site); Philadelphia, Pennsylvania (metropolitan
site); Phoenix, Arizona (high fugitive dust area); and Riverside, California
(photochemical aerosol site). These cities provided a variety of sampling
conditions and aerosol compositions. The actual location selected in each
city was at an EPA Inhalable Particulate (IP) Network station where a history
of data for the high-volume air sampler, size selective inlet sampler and the
dichotomous sampler were available.
Present ambient air quality standards for particulate matter are based on
measurements made by the EPA reference method (High-Volume Method).3 Weight
gain by the sampler filter media, divided by the volume of air sampled, is
defined as total suspended particulate matter, or TSP. Health effect studies
have correlated this measurement with adverse health effects. However, it is
generally accepted that some of the particulate mass collected by this
reference method sampler is too large to cause health effects. This has
resulted in proposed changes to the primary air quality standard and method of
measurement. If new standards are to be set and the method of measurement
changed it is necessary to determine the relationship between the present
reference method (High Volume Method) measurements and a size selective
reference method measurement.
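The TSP definition quoted above amounts to a one-line calculation; the sketch below uses invented filter weights and a nominal Hi-Vol flow rate, and ignores the flow corrections to standard conditions applied in practice.

```python
# Hypothetical Hi-Vol filter weights and nominal flow (illustration only)
tare_g = 3.9147          # filter weight before sampling
gross_g = 4.0292         # filter weight after sampling
flow_m3_min = 1.13       # nominal Hi-Vol flow rate
minutes = 24 * 60        # 24-hour sample

volume_m3 = flow_m3_min * minutes
tsp_ug_m3 = (gross_g - tare_g) * 1e6 / volume_m3
print(f"sampled volume: {volume_m3:.0f} m3, TSP = {tsp_ug_m3:.1f} ug/m3")
```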
Equipment
Large particle size-distribution data were obtained using the mobile
aerosol-sampling system, called the Wide Range Aerosol Classifier (WRAC). A
schematic diagram of the WRAC is shown in Figure 1. The large (60 cm) diameter
aerosol inlet tube leads to a cluster of five individual sampler units, each
of which operates at an actual sampling rate of 1.56 m³/min (55 acfm). The
center sampler collects what is considered to be a total aerosol mass sample
onto a standard 20.3 by 25.4 cm (8" by 10") glass or quartz fiber filter
28
-------
media. Four other samplers, placed at 90° intervals around the center
sampler, are single stage, rectangular slot impactors. These single stage
impactors collect size fractionated samples of the large ambient particles
onto grease coated impaction plates. Remaining particles are collected by a
standard filter which follows each single stage impactor. Each impactor has a
different particle collection efficiency. The particle cutoff diameters (for
50% collection efficiency) for Impactors 1, 2, 3, and 4 are 47, 34, 18.5 and
9.3 µm, respectively. These impactor nozzles were carefully calibrated at the
University of Minnesota Particle Technology Laboratory and the University of
Florida to determine their exact cutpoints, as described by Vanderpool.4
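Such calibrations are commonly compared against classical rectangular-jet impactor theory; the sketch below evaluates the 50% cut diameter from an assumed 50% Stokes number of about 0.59, with illustrative jet dimensions rather than the actual design values of these impactors, and neglects the slip correction for these large sizes.

```python
import math

MU_AIR = 1.81e-5    # Pa*s, air viscosity
RHO_P = 1000.0      # kg/m3, unit density for aerodynamic diameter
STK50 = 0.59        # assumed 50% Stokes number for a rectangular jet

def d50_um(jet_width_m, jet_velocity_m_s):
    """50% cut diameter of a rectangular-slot impactor, classical theory."""
    d50 = math.sqrt(9 * MU_AIR * jet_width_m * STK50 / (RHO_P * jet_velocity_m_s))
    return d50 * 1e6

# Illustrative jet width and velocity (not the WRAC impactor geometry)
print(f"d50 ~ {d50_um(jet_width_m=0.01, jet_velocity_m_s=2.0):.1f} um")
```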
Particle size-fractionating samplers were operated simultaneously with
the WRAC at each site. These samplers included instruments typically found at
an Inhalable Particulate (IP) network site such as: a high-volume air sampler
(HIVOL), a 15 µm type size selective inlet sampler (SSI), and a dichotomous
sampler. These instruments were normally located at the site and were
operated by the WRAC sampling team during the special sampling. At least one
high-volume ambient cascade impactor was also used at each site. Most of
these samplers were run with a duplicate unit at one or more locations to check
for repeatability of results.
Results
At each site, samples collected under similar conditions were averaged to
determine a representative distribution.
Aerosol Mass Fraction and Particle Size
Collected by the High-Volume Sampler
The total atmospheric aerosol mass fraction and particle size collected
by the standard High-Volume Sampler (Hi-Vol) can be inferred by comparison
with the aerosol mass and size distribution measurements made using the Wide
Range Aerosol Classifier (WRAC).
The grand distribution for all 41 usable WRAC runs produces a total
aerosol concentration of 134.0 µg/m³ with 91.0% of the aerosol mass < 34 µm
diameter. Most single-city average distributions and the 41-day grand
average distribution suggest that the standard High-Volume Sampler collected
all particles less than ~30 µm diameter (on the average). Calibration data of
McFarland, Ortiz and Rodes5 also suggest a Hi-Vol sampler 50% cut size of
about 30 µm. These data were also presented in an article by Watson, Chow,
Shah and Pace which discusses the Hi-Vol aerosol collection.
Atmospheric Aerosol Large Particle Mass Distribution
Plots of the total large particle grand average distribution and various
city average distributions suggest that the large particle distribution is
approximately log-normal. These data also suggest a minimum value between
the large and small particle mass modes at about 3 µm (aerodynamic diameter).
If one assumes the large particle mass mode is log-normal and that there is a
minimum point at 3 µm, several features of the distribution can be determined.
A best fitting curve was drawn through the actual mass measurement data
plotted as a cumulative distribution curve on log-normal probability paper.
Several of these curves were then drawn as histograms in Figure 2.
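The graphical fit described above has a direct numerical analogue: fitting a log-normal cumulative distribution to cumulative mass fractions, which is what a straight line on log-probability paper represents. The sketch below uses invented data points.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical cumulative mass fractions below each size cut (illustration)
dp = np.array([5.0, 10.0, 20.0, 35.0, 50.0])        # um
cum_frac = np.array([0.08, 0.30, 0.62, 0.85, 0.94]) # mass fraction below dp

def lognorm_cdf(d, mmd, gsd):
    """Cumulative log-normal distribution with mass median diameter mmd."""
    return stats.norm.cdf(np.log(d / mmd) / np.log(gsd))

(mmd, gsd), _ = optimize.curve_fit(lognorm_cdf, dp, cum_frac, p0=(15.0, 2.0))
print(f"fitted MMD = {mmd:.1f} um, geometric standard deviation = {gsd:.2f}")
```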
29
-------
Discussion
There has been much discussion recently about incorporating an upper size
limit for the regulation of particulate matter. A size limit of 10 µm has
been suggested. The WRAC measurements reveal that the fraction of
particulate matter in ambient air associated with particles less than 10 µm
diameter can vary between about 50% and 90% for single run percentages (for
Birmingham and Riverside, respectively). The average distribution data for the
high concentration days in Birmingham and Riverside suggest that a 10 µm size
selective sampler would collect 68% and 89%, respectively, of what the standard
High Volume Air Sampler would collect. An average distribution for all 41
test days from 5 cities suggests the 10 µm sampler would collect 14% of that
collected by the Hi-Vol.
Ambient aerosol distributions display a large particle mass mode under a
variety of situations. The situations include: relatively clean areas like
Research Triangle Park, areas with high small-particle concentrations like
Riverside, areas with high large-particle concentrations like Phoenix, and
areas with high concentrations of large and small particles like Birmingham.
Each of these areas will be affected differently by the implementation of a
new health-related particulate matter standard. Such a standard will relieve certain
areas which have historically been in non-attainment status because of the
presence of a high mass fraction of large particles.
Acknowledgment
This investigation was supported by cooperative agreement CR808606 from
the Environmental Monitoring Systems Laboratory, Environmental Protection
Agency, Research Triangle Park, North Carolina.
References
1. Lundgren, Dale A. and David C. Rovell-Rixx, 1982. Wide Range Aerosol
Classifier, EPA-600/4-82-040, PB82-256264 N.T.I.S.
2. Lundgren, Dale A. and H.J. Paulus, 1975. The Mass Distribution of Large
Atmospheric Particles, JAPCA 25 (12):1227.
3. "Reference Method for the Determination of Suspended Particulates in the
Atmosphere (High Volume Method)", 40 CFR 50, Appendix B, U.S. Government
Printing Office.
4. Vanderpool, Robert, 1983. Particle Collection Characteristics of High
Flow-Rate Single Stage Impactors, M.S. Thesis, University of Florida.
5. McFarland, A.P., C.A. Ortiz and C.E. Rodes, 1979. Characteristics of
Aerosol Samplers Used in Ambient Air Monitoring, Presented at 86th
National Meeting of the American Institute of Chemical Engineers, Houston,
TX.
6. Watson, J.G., J.C. Chow, J.J. Shah and T.G. Pace, 1983. The Effect of
Sampling Inlets on the PM-10 and PM-15 to TSP Concentration Ratios, JAPCA
33:114.
30
-------
Figure 1. Schematic diagram of mobile sampling system (showing raincap and wind shroud).
31
-------
Figure 2. Large particle mode distributions. (Mass concentration versus particle diameter (Dp), µm; curves include NB-HIGH, 205 µg/m3, and the grand average, 134 µg/m3.)
32
-------
ROTARY IMPACTOR FOR COARSE PARTICLE MEASUREMENT -
MASS AND CHEMICAL ANALYSIS
Kenneth E. Noll
Department of Environmental Engineering
Illinois Institute of Technology
Chicago, IL 60616
Yaacov Mamane
NRC Research Associate
Environmental Sciences Research Laboratory
U.S. Environmental Protection Agency
Research Triangle Park, NC 27711
ABSTRACT
A unique combination of an effective sampler and analysis of individual
particles has been used in studying large particles (> 5 µm) at a rural
site in the eastern United States. The sampler is a modified "high volume"
rotary inertial impactor, which consists of four collectors of different
widths, rotating at high speed and collecting particles by impaction.
The collector surfaces were mylar films coated with apiezon to ensure
retention. After sampling, the collection surfaces were weighed to
obtain the mass-size distribution. A section of the mylar sample was
transferred to a scanning electron microscope to study in detail the
morphology and elemental content of individual particles.
The following features characterize the impactor: (a) Particles 6
to 100 µm are collected effectively on four stages. Stages A, B, C and
D collect particles larger than 6 µm, 11 µm, 20 µm, and 29 µm,
respectively. (b) The sampler operates at high velocities, therefore
sampling a "large volume" of air — a necessary requirement because of
the low concentration of large particles; (c) Due to the special collection
technique, no losses "to the walls" or "bouncing off" are expected. To
insure a high degree of retention the collector faces were coated with a
thin film of apiezon; (d) No problems associated with isokinetic sampling
at variable wind speed are expected, since the collectors operate at
velocities considerably higher than the average wind speed, and the
instrument has a wind vane to point it into the wind; (e) The various
stages allow the collection of particles over restricted ranges of the
33
-------
size distribution without interference from particles outside of the
range. This eliminates errors in counting and x-ray analysis of indi-
vidual particles due to the excessive covering of the collector surfaces
by numerous small particles.
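Because each rotating collector behaves as a cut at its stated minimum diameter, the stage assignment for a given particle size follows directly. The sketch below is only an idealized illustration of that bookkeeping, assuming sharp cuts at the 6, 11, 20, and 29 µm values quoted above; the function name and the test diameters are invented.

    # Sketch: idealized stage assignment for the rotary impactor, treating each
    # stage as a sharp cut at its stated minimum aerodynamic diameter (um).
    STAGE_CUTS = {"A": 6.0, "B": 11.0, "C": 20.0, "D": 29.0}

    def collecting_stages(dp_um):
        """Stages expected to collect a particle of diameter dp_um (um)."""
        return [stage for stage, cut in STAGE_CUTS.items() if dp_um >= cut]

    for dp in (8.0, 15.0, 25.0, 40.0):
        print(dp, collecting_stages(dp))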
Samples were taken during the month of August 1983 at a rural site
in western Maryland as part of the Deep Creek Lake (DCL) Experiment.
The objective of the DCL study was to collect an air quality data base and
source signatures to determine the impact of primary emissions and
secondary pollutants from combustion sources on a remote site. The
sampling site is located in a rural area surrounded by over fifteen
coal-fired power plants which are big enough (> 1000 MW) and close
enough (50 to 300 km) to have a significant impact on the site.
For the electron microscopy analysis, the mylar films were observed
with an optical microscope to verify a homogeneous collection of particles
on the collector surface. The analysis includes particle size, shape
and special surface features, and elemental content of the particle.
Out of the samples collected at the DCL site, two were chosen for
SEM analysis, representing two different atmospheric
conditions — low versus high wind speed. Both samples were represen-
tative of midday summertime conditions.
Information on a few hundred individual coarse particles has been
obtained, including their heterogeneity and surface properties. Based
on the elemental content and morphology, particles were assigned to
several category groups such as clay minerals, quartz, calcite, gypsum,
coal and oil fly ash, biological (pollen, spores, plant debris) particles,
and special particles — mostly of anthropogenic sources with high metal
content — rich in Fe, Pb, Zn.
The main results are summarized as follows:
34
-------
(a) In the rural area studied here the aerosol mass distribution peaks
in the 10 to 20 µm range with fairly significant mass in the 20 to
60 µm range. During windy conditions mass concentration is higher for
most parts of the size range, but not in the below 10 µm range.
The wind speed may have two effects on aerosol concentration:
higher wind speeds cause resuspension of particles, while low wind
speeds are associated with less dispersion and higher concentration
of the smaller size fraction.
(b) Electron microscopy analysis of individual large particles revealed
the overwhelming presence of natural contributions in the whole
range, namely minerals (clay minerals, calcite and quartz — about
50 percent), and biological particles such as pollen and spores.
(c) Contribution of anthropogenic sources to large particles was
limited to a few percent and mainly to particles smaller than 10
µm. Most of these were fly ash transported from coal-fired power
plants situated 50 to 300 km upwind of the sampling site.
(d) Pollen particles represent a large fraction of the large particles
collected at the DCL site. Different types have been observed even
on the calm day, indicating a fairly long residence time in the air.
The pollens contained large amounts of sulfur, either as small
sulfate particles deposited on the pollen surfaces, or as absorption
of SO2 through the wet surfaces.
(e) Mineral particles were found to be enriched in sulfate. As with
the pollen the sulfate may have accumulated on the particle surfaces
while being airborne. The sulfate was found to be associated with
calcite and clay minerals in significant amounts, about 1.5 to 3
percent of the particle mass, or an average of 0.02 g SO4 per gram of
solid.
35
-------
INDIVIDUAL MICROMETER-SIZE AEROSOL COMPOUNDS
Eliezer Ganor* and Rudolf F. Pueschel**
* Research Institute for Environmental Health
Ministry of Health and Tel-Aviv University, Israel
** NOAA, Environmental Research Laboratories, Air Resources Laboratory
Boulder, CO 80303, USA
ABSTRACT
A quantitative method for the analysis of individual micrometer-size dry
and wet aerosols is developed. It is based on mineralogical and microchemical
analysis. An aerosol compound is analyzed for crystallography by petrographic
microscopy, and for anions and cations by electron microscopy. Analysis of
the anions NO3- and SO4(2-) is based on their microchemical reactions with
nitron and BaCl2, respectively. The microchemical reaction is identified by
transmission electron microscopy, and the anion Cl and cations such as Ca, Mg,
K and Na are determined with a scanning electron microscope interfaced with an
K and Na are determined with a scanning electron microscope interfaced with an
X-ray energy spectrometer. The methods were tested in several locations:
(a) At the Boulder Atmospheric Observatory, a 300 m tower located in a rural
area, during Chinook conditions and within clouds, (b) At Tel-Aviv
University, during air pollution and Sharav conditions, (c) At Masada, Dead
Sea, 10 cm above sea level, during winter conditions. The aerosols are
classified as mineral, soot containing, water containing and electrolyte
(mixed dry and wet aerosols).
36
-------
INTRODUCTION
The study of individual micrometer-size aerosols provides a great deal of
new information on the characteristics of aerosols, which otherwise cannot be
obtained. The aerosols in the atmosphere are not stable; they change during
transport due to chemical reactions within the particles and of particles with
gases, coagulation of particles and alterations of relative humidity (Hanel
and Zankl, 1979; Mamane et al., 1980). These changes can be noticed by
analysis of individual aerosols from different sources.
There are several sources of aerosols, which can be grouped in two
categories: (1) Natural (soil-derived aerosols, sea-spray aerosols, volcanic-
derived aerosols and organic particles) and (2) Anthropogenic. The basic
aerosol components have been classically defined as water-soluble particles,
dust like particles, oceanic particles, soot particles and ash particles (WCP,
1983).
In our work we classify the aerosols into four types: mineral and dust
like particles, soot particles, electrolyte particles and mixed particles.
The electrolyte particles are sea salt particles generated at the sea surface
by the action of the wind: such particles are halite (NaCl), sylvite (KCl),
carnallite (KMgCl3·6H2O) and those containing sulfate and nitrate. The mixed
particles consist of several compounds and are water containing, such as soot
coated with H2SO4 and dust like particles coated with electrolytes.
37
-------
METHODS AND ANALYSIS
Aerosols were collected on electron microscope (EM) grids, on plain glass
and on blue gelatine, that were mounted on three stages of a four-stage
Casella impactor. The aerosols were collected for chemical and mineralogical
analysis by microchemical spot test (Mamane and Pueschel, 1980), X-ray energy
dispersion analyzer, electron microprobe analyzer and petrographic microscope
(Ganor et al., 1982).
GEOGRAPHIC LOCATION AND METEOROLOGY
The methods were tested in several locations: (1) At the Boulder
Atmospheric Observatory (BAO), USA. The BAO is a 300 m tower located in a
rural area 20 km east of Boulder and 25 km north of Denver. Aerosol samples
were collected at 10, 22, 50, 100, 150, 200, 250 and 300 m levels on the tower
(Ganor and Van Valin, 1982). (2) In Israel, at three places: (a) Tel-Aviv
University (TAU), a residential area in north Tel-Aviv, on the roof of a 15 m
building, located 2 km from the Mediterranean Sea; (b) Tel-Aviv Marina shore
(TAM), 0.5 m above sea level; and (c) Masada shore (DSM), 10 cm above the Dead
Sea level.
The aerosols were collected during different meteorological conditions:
at BAO, during chinook conditions and within a cloud; at Tel-Aviv, in winter,
during air pollution and Saharan dust storms; at Masada shore, in winter.
38
-------
RESULTS
PARTICLE MORPHOLOGY
A treated and a non-treated marked screen were observed with a petrographic
microscope, a transmission electron microscope (TEM) and a scanning electron
microscope (SEM) for shape, size distribution and chemical composition.
Figures 1-3 are photomicrographs of particles collected at TAU on a dusty
day, November 19, 1983. Figure 1 is a petrographic photomicrograph of Saharan
particles on a nontreated screen. The particles were identified as quartz,
calcite, dolomite, feldspar, clay minerals, fossil fragments and oil soot.
Figure 2 is an SEM photomicrograph of the same sample on a non-treated
screen. The particles were analyzed for their elements with an X-ray energy
analyzer. The major elements found in the particles were Al, Si and Ca.
Figure 3 is an SEM photomicrograph of the same sample on a BaCl2-treated
screen. The figure shows that the particles are mixed sulfates. The circular
spots indicate the presence of sulfates. Most of the Saharan particles are
coated with a thin layer of sulfate. In the figure there are three particles,
identified as (a) clay, (b) textularia fossil fragment and (c) calcium
carbonate aggregate.
Reaction spots of individual droplets were tested in the laboratory on
blue gelatine, BaCl2 and nitron pre-coated screens for shape, size
distribution and microchemical composition by a petrographic microscope and a
scanning transmission electron microscope (STEM). Later, the methods were
tested at TAM and TAU.
Figures 4 and 5 are typical photomicrographs of aerosols collected on
February 21, 1984, at TAM and on October 24, 1983, at TAU. Figure 4a is a
39
-------
typical TEM photomicrograph of sea drops collected at TAM on February 21,
1984. The reaction spots indicate the impaction of the drops onto a BaCl2
pre-coated EM screen. Figure 4b is an SEM photomicrograph of the sample shown
in Figure 4a. The cubic particles inside the reaction spot are salt
particles, containing Mg, Na, S, Cl, K and Ca. The use of the TEM and SEM
with an X-ray energy analyzer in this case gives us more information, which
otherwise cannot be obtained. Figure 5 is a typical photomicrograph of air
pollution particles collected onto gelatine pre-coated glass at TAU on October
24, 1983. Some of the rounded particles show drop replicas, and are therefore
identified as water-containing particles.
The size distribution of particles collected on stages 3 and 4 of the
Casella impactor is shown in Figure 6 (0.8
-------
collected on a pre-coated carbon screen. The particles inside the drop are
NaCl crystals.
Figure 9 shows typical TEM and SEM photomicrographs of nitrate, and the
X-ray energy dispersion spectrum (XEDS). The nitrate, identified by the
fingerlike microreaction, contains Al, Si, Ca and a trace of K and Fe.
PARTICLE COMPOUNDS
The particles, which have been tested in several locations, were
classified as dust like and minerals, soot containing, water containing,
droplets, nitrate and sulfate. Tables 1-3 summarize the microchemical
analyses: Table 1, the frequency of elements present in the aerosol
particles, in percentage collected at BAO, TAU and DSM; Table 2, the
percentage of the elements contained in the particles at TAU, TAM and DSM; and
Table 3, the percentage of aerosol compounds at BAO, TAM, TAU and Dead Sea,
based on sulfate and nitrate identification using the BaCl2 and nitron
techniques, by petrographic microscopy and by SEM-XEDS.
SUMMARY
The size spectra of cloud droplets and aerosols and of the aerosol
compounds were measured using multi-microchemical techniques, in different
meteorological conditions. The techniques provided an assessment of water-
containing aerosols, and most particles were identified as electrolyte
aerosols. It was also found that because of the high relative humidity at TAU
the electrolyte particles became water-containing droplets. Simultaneously,
various techniques were used to obtain relevant data on individual aerosol
41
-------
compounds. For instance, it was found that a considerable portion of
particles contain dust like nuclei, as in the Dead Sea droplets. On a
polluted day at TAU, about 42% of the micrometer-size particles were found to
be electrolytes; 25%, soot; and 33%, dust like particles. The mixed sulfate
and nitrate particles were probably formed through a heterogeneous nucleation
of S02 and N02 on the surface of insoluble dust like particles.
REFERENCES
1. Ganor, E. and C. C. Van Valin, 1982. Vertical profiles of gases and
particles in the nonurban atmosphere. Proceedings, 2nd symposium on the
Composition of the Nonurban Troposphere, Williamsburg, VA, May 1982, pp.
214-217.
2. Ganor, E., R. F. Pueschel, and C. T. Nagamoto. Sulfates and nitrates:
Concentration as function of particle size in eastern Colorado, (in
preparation).
3. Hanel, G. and B. Zankl, 1979. Aerosol and relative humidity: Water
uptake by mixtures of salts. Tellus, 31, 478-486.
4. Mamane, Y. and R. F. Pueschel, 1980. A method for the detection of
individual nitrate particles. Atmospheric Environment, 14, pp. 629-639.
5. Mamane, Y., E. Ganor, and A. E. Donagi, 1980. Aerosol composition of
urban and desert origin in the eastern Mediterranean. I: Individual
particle analysis. Water, Air, and Soil Pollution, 14: pp. 29-42.
6. WCP (World Climate Programme) 1983. Aerosols and Their Climatic Effects,
Williamsburg, Va.
42
-------
Table 1. PERCENTAGE OF PARTICLES CONTAINING THE ELEMENT AT VARIOUS LOCATIONS

Location            BAO         TAU         DSM         TAU
Height (m)          300         15          0.1         15
Diameter (µm)       0.5-4.0     0.4-20      0.5-5.0     0.5-5.0
No. of aerosols     200         100         100         100
Conditions          Polluted    Polluted    Winter      Dusty

Element                       Percentage
Na                  27          43          5           24
Mg                  12          37          28          24
Al                  36          75          37          73
Si                  65          93          58          89
P                   5           0           4           7
S                   66          81          60          63
Cl                  13          50          13          61
K                   28          75          37          59
Ca                  35          88          61          73
Ti                  3           67          3           9
Fe                  31          63          38          70
Zn                  4           0           1           3
Pb                  0           25          0           3
V                   0           18          0           1
Se                  0           12          0           0
Br                  0           6           0           0
-------
Table 2. PERCENTAGE OF THE ELEMENTS CONTAINED IN THE PARTICLES AT VARIOUS LOCATIONS

Location            TAU         TAM         TAM         DSM
Date                23 Jan 84   21 Feb 84   Nov 82      21 Dec 83
Height (m)          15          0.5         1           0.1
Conditions          Polluted    Bright      Dusty       Bright

Element                       Percentage
Na                  3           31          1           7
Mg                  1           4           1           8
Al                  6           1           9           23
Si                  19          1           36          15
S                   28          6           2           19
Cl                  5           56          2           3
K                   5           1           10          22
Ca                  24          1           30          ...
Ti                  1           ...         2           3
Fe                  4           ...         7           ...
Pb                  2           ...         ...         ...
V                   1           ...         ...         ...
Se                  1           ...         ...         ...
Br                  ...         ...         ...         ...
-------
Table 3. PERCENTAGE OF AEROSOL COMPOUNDS AT VARIOUS LOCATIONS

Location            BAO         BAO            TAU         TAM         TAM         DSM
Height (m)          300         300            15          1           0.5         0.1
Date                16 Apr 81   27 Jul 81      24 Oct 83   19 Nov 82   21 Feb 84   21 Dec 83
Conditions          Chinook     Within Cloud   Polluted    Dusty       Bright      Bright

                                           Percentage
Nitrate             ...         26             6           ...         ...         ...
Sulfate             95          1              36          10          ...         ...
Dust-like           6           ...            33 #        88 *        14          32
Droplets            ...         73             ...         ...         86 **       68 ##
Water containing    ...         ...            34          ...         ...         ...
Soot containing     ...         ...            25          2           ...         ...

#  42% of the minerals contain sulfate.
*  The droplets contained Na, Al, Si, S, Cl, Ca, Fe.
** The droplets contained NaCl and elements such as Mg, S, K, Ca.
## The droplets contained Mg, Al, Si, S, Cl, K, Ca, Fe.
-------
Fig. 1 - Petrographic microscope photo-
graph of Saharan particles on
a non-treated EM screen
Fig. 2 - Typical non-treated screen SEM
photomicrograph of Saharan par-
ticles shown in Fig. 1
Fig. 3 - Photomicrograph of the BaCl2-treated sample shown in Fig. 2.
The circular reaction spots indicate the presence of sulfates.
(a) clay, (b) textularia fossil fragment, and (c) calcium
carbonate aggregate.
46
-------
Fig. 4a - TEM photomicrograph of Mediterranean drops collected at TAM
on the pre-coated BaCl2 screen
Fig. 4b - SEM photomicrograph of the drops shown in Fig. 4a.
The cubic particles in the reaction spot are salt particles,
containing Mg, Na, S, Cl, and Ca.
47
-------
Fig. 5 - SEM microphotograph of particles collected onto gelatine at TAU during
a polluted day, October 24, 1983. The circular replicas indicate the
presence of water.
Fig. 6 - Particle size distribution of the sample collected at TAU
on October 24, 1983 (concentration versus particle radius, µm)
48
-------
Fig. 7 - Typical TEM microphotograph of Dead Sea aerosols collected on pre-coated
(a) BaCl2 and (b) nitron. The circular reaction spots on the BaCl2 indicate
the presence of drops. The circular reaction spots on the pre-coated nitron
also show the presence of drops.
49
-------
Fig. 8 - SEM photomicrograph of cloud droplets collected onto
pre-coated carbon screen at BAO. The particles inside
the drops are NaCl crystals.
50
-------
Fig. 9 - TEM and SEM with X-ray energy spectra of
nitrate collected at BAO. The nitrate is
identified by the finger-like micro-reaction.
It is mixed with Al, Si, Ca, K, and Fe.
51
-------
HUMAN EXPOSURE ASSESSMENT: A NEW METHODOLOGY FOR DETERMINING
THE RISK OF ENVIRONMENTAL POLLUTION TO PUBLIC HEALTH
Wayne R. Ott
U.S. Environmental Protection Agency
Office of Research and Development
Washington, DC 20460
INTRODUCTION
Determining the risk of environmental pollution to public health
requires a knowledge of five fundamental components: (1) the sources of
pollutants, (2) the transport of these pollutants from sources to humans,
(3) the distribution of exposures of humans to these pollutants, (4) the
doses received by people who are thereby exposed, and (5) the adverse
health effects resulting from these doses. These five components may be
viewed as links in a chain — from source to effect — comprising the
full risk model (Figure 1):
SOURCE -> FATE AND TRANSPORT -> EXPOSURE -> DOSE -> EFFECT
Figure 1. Major components of conceptual risk model relating
the sources of environmental pollution to the ultimate effects
of pollutants on the population.
Despite the importance of each of the five components for estimating
the public health risk associated with environmental pollution, our
scientific knowledge about each component is not balanced. Usually,
environmental pollution comes to the attention of public officials be-
cause pollutant sources, such as smokestack plumes or leaking toxic
waste drums, provide obvious evidence of a disturbing environmental
condition. Thus, a great deal is known about the sources of pollution,
and source abatement and control has received considerable research
attention. Once a source of environmental pollution is known and iden-
tified, interest often focuses on the manner in which the pollutant
moves through the environment — its fate and transport — ultimately
becoming assimilated by ecosystems or transported to humans. As with the
source component of this risk model, the fate and transport component
52
-------
likewise has received considerable research attention. The field of
meteorology has developed a great number of atmospheric dispersion models,
and other fields have developed models for the movement of pollutants
through streams, soil, and the food chain1.
As with the first two components, the last component — the effects
of pollutants on humans — also has received considerable research atten-
tion. Numerous studies have been undertaken relating various exposures
and doses to identifiable effects on animals and humans, as can be seen
in any of the published air quality criteria documents2,3. However,
our knowledge of two important components of the risk model — exposure
and dose — is very limited for most pollutants. Accurate exposure data
unfortunately are lacking for most of the air pollutants that EPA regulates.
The environmental risk model is serial in structure: the output of
each component serves as the input to the next component. Thus, the
absence of valid information on any component seriously impairs our abil-
ity to assess public health risk, and the absence of human exposure data
has serious implications for regulatory policies. If, for example, human
exposures to a criteria air pollutant were found to be negligible, then
the public health risk of the pollutant may be exaggerated, and concern
about controlling this pollutant could be reduced. Conversely, if human
exposures to a pollutant were found to be higher than previously sus-
pected, then additional control actions might be warranted. In all
cases, the important information needed is the frequency distribution of
exposures of the population.
APPROACHES FOR DETERMINING EXPOSURES
Two alternative conceptual approaches have been proposed for obtain-
ing information on the frequency distribution of exposures of the popula-
tion to environmental pollutants:4,5
Approach A. An obvious solution, called the "direct approach" by
Duan5, is to measure the concentration at the boundary of the person
by monitoring the air he breathes, the water he drinks, and the food he
eats. Several recent field studies have implemented this conceptual
approach. The Total Exposure Assessment Methodology (TEAM) study measured
53
-------
the concentration of an important class of chemicals, volatile organic
compounds (VOC's), in the air, drinking water, and breath of respondents
using personal monitoring techniques. In order to generalize to a
larger population than the number actually surveyed, a multi-stage statis-
tical sampling design was used. In Elizabeth and Bayonne, New Jersey,
365 respondents carried a Tenax portable personal exposure monitor, and
the levels of VOC's in their breath and drinking water also were deter-
mined. A major finding was that personal exposures (and indoor levels)
were much greater than outdoor exposures for at least 11 important
carcinogens.8,9
More recently, the Denver-Washington, DC, carbon monoxide (CO) human
exposure field survey was carried out10. Because CO is associated only
with air pollutant exposures, it was not necessary to monitor food or
drinking water. A specially designed personal exposure monitor for CO
was developed which could measure and record concentrations with a time
resolution of one minute or less11. An interviewer delivered the monitor
to each respondent and picked it up 24 hours later. Each respondent
carried the monitor while engaging in his or her normal daily activities.
By interpreting the diaries listing the times that each
activity began, it was possible to obtain 712 24-hour exposure profiles
in Washington, DC, and 450 48-hour profiles in Denver, CO. Although
these data currently are being analyzed, many new findings are emerging.
Approach B. An alternative approach, called the "indirect approach"
by Duan5, is to measure and fully characterize pollutant concentrations
in the locations (called "microenvironments") that people normally visit.
Then, by combining this information with data from activity pattern and
time budget studies, it is possible to compute an estimated exposure
profile for each person. This approach initially was suggested by Fugas15
and is discussed by Duan5. Computer models such as SHAPE16 and NEM17
have been developed for combining the activity data with the microenviron-
mental concentration data, but human exposure activity modeling is in
its infancy and needs further development and field testing.
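In its simplest form the indirect approach is a time-weighted sum of microenvironment concentrations over a person's activity diary. The sketch below shows only that bookkeeping, with hypothetical concentrations and times; it is not an implementation of SHAPE or NEM.

    # Sketch of the indirect ("microenvironment") exposure estimate:
    # integrated exposure = sum over microenvironments of concentration x time.
    # All concentrations and the diary below are hypothetical illustrations.

    microenv_conc_ppm = {            # CO concentration by microenvironment, ppm
        "home": 2.0,
        "in_vehicle": 9.0,
        "office": 1.5,
        "outdoors": 3.0,
    }

    activity_diary_hours = [         # (microenvironment, hours spent)
        ("home", 14.0),
        ("in_vehicle", 1.5),
        ("office", 8.0),
        ("outdoors", 0.5),
    ]

    ppm_hours = sum(microenv_conc_ppm[m] * t for m, t in activity_diary_hours)
    avg_24h_ppm = ppm_hours / sum(t for _, t in activity_diary_hours)
    print(f"24-h average exposure: {avg_24h_ppm:.2f} ppm")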
54
-------
SUMMARY AND CONCLUSIONS
A new methodology, human exposure assessment, is emerging for
determining the frequency distribution of exposures of the population
to environmental pollutants. Two approaches, direct measurement and
indirect estimation through models, have been developed. Several field
studies have been undertaken demonstrating the feasibility of the direct
approach, yielding a wealth of new exposure data and many important new
findings. The indirect approach has not been fully developed and needs
further work, but it, too, may yield much important exposure informa-
tion. Initial studies have dealt with volatile organics and CO, and the
same methodology now needs to be extended to N02, inhaled particles, and
other important pollutants.
REFERENCES
1. Ott, Wayne R., ed., "Proceedings of the EPA Conference on Environ-
mental Modeling and Simulation," U.S. Environmental Protection Agency,
Report No. EPA-600/9-76-016, Washington, DC, July 1976.
2. "Air Quality Criteria for Particulate Matter and Sulfur Oxides," U.S.
Environmental Protection Agency, Vol. I, No. EPA-600/8-82-029a; Vol.
II, No. EPA-600/8-82-029b; Vol. III, No. EPA-600/8-82-029c; Research
Triangle Park, NC, December 1982.
3. "Air Quality Criteria for Oxides of Nitrogen," U.S. Environmental
Protection Agency, Report No. EPA-600/8-82-026F, December 1982.
4. Ott, Wayne R., "Concepts of Human Exposure to Air Pollution," Environ-
ment International, Vol. 7, pp. 179-196, 1982.
5. Duan, Naihua, "Models for Human Exposure to Air Pollution," Environment
International, Vol. 8, pp. 305-309, 1982.
6. Wallace, Lance, Ruth Zweidinger, Mitch Erickson, S. Cooper, Don
Whitaker, and Edo Pellizzari, "Monitoring Individual Exposure Measure-
ments of Volatile Organic Compounds in Breathing-Zone Air, Drinking
Water, and Exhaled Breath," Environment International, Vol. 8, pp.
269-282, 1982.
7. Zweidinger, Ruth, Mitch Erickson, S. Cooper, Don Whittaker, Edo
Pellizzari, and Lance Wallace, "Direct Measurement of Volatile
Organic Compounds in Breathing-Zone Air, Drinking Water, Breath,
Blood, and Urine," U.S. Environmental Protection Agency, Report No.
EPA-600/4-82-015, Washington, DC, June 1983.
55
-------
8. Pellizzari, E.D., T.D. Hartwell, C.M. Sparacino, L.S. Sheldon, R.
Whitmore, C. Leininger, H. Zelon, and L. Wallace, "Total Exposure
Assessment Methodology (TEAM) Study: First Season - Northern New
Jersey," Research Triangle Institute, Report No. RTI/2392/03-03S,
Research Triangle Park, NC, June 1984.
9. Wallace, L., E. Pellizzari, T. Hartwell, M. Rosenzweig, M. Erickson,
C. Sparacino, and H. Zelon, "Personal Exposure to Volatile Organic
Compounds: I. Direct Measurements in Breathing-Zone Air, Drinking
Water, Food, and Exhaled Breath," in press, Environmental Research,
1984.
10. Akland, Gerald G., Wayne R. Ott, and Lance A. Wallace, "Human Exposure
Assessment: Background, Concepts, Purpose, and Overview of the
Washington, DC-Denver, Colorado Field Studies," Paper No. 84-121.1
presented at the 77th Annual Meeting of the Air Pollution Control
Association, San Francisco, CA, June 24-29, 1984.
11. Ott, W.R., C. Williams, C. Rhodes, R. Drago, and F. Burmann, "Application
of Microprocessors to Data Logging Problems in Air Pollution Exposure
Field Studies," Paper No. 84-121.2 presented at the 77th Annual
Meeting of the Air Pollution Control Association, San Francisco, CA,
June 24-29, 1984.
12. Johnson, Ted, "A Study of Personal Exposure to Carbon Monoxide in
Denver, Colorado," Paper No. 84-121.3 presented at the 77th Annual
Meeting of the Air Pollution Control Association, San Francisco, CA,
June 24-29, 1984.
13. Hartwell, Tyler D., Carlisle A. Clayton, Raymond Michie, Jr., Roy W.
Whitmore, Harvey S. Zelon, and Deborah A. Whitehurst, "Study of Carbon
Monoxide Exposure of Residents of Washington, DC," Paper No. 84-121.4
presented at the 77th Annual Meeting of the Air Pollution Control
Association, San Francisco, CA, June 24-29, 1984.
14. Wallace, Lance A., David T. Mage, and Jacob Thomas, "Alveolar
Measurements of 1,000 Residents of Denver and Washington, DC — A
Comparison with Preceding Personal Exposures," Paper No. 121.5
presented at the 77th Annual Meeting of the Air Pollution Control
Association, San Francisco, CA, June 24-29, 1984.
15. Fugas, Mirka, "Assessment of Total Exposure to Air Pollution,"
Proceedings of the International Conference on Environmental Sensing
and Assessment, Paper No. 3R-5, Vol. 2, IEEE #75-CH 1004-1, Las Vegas,
NV.
16. Ott, W.R., "Exposure Estimates Based on Computer Generated Activity
Patterns," Paper No. 81-51.6 presented at the 74th Annual Meeting of
the Air Pollution Control Association, Philadelphia, PA, June 21-26,
1981 .
17. Johnson, T., and R.A. Paul, "The NAAQS Exposure Model (NEM) Applied
to Carbon Monoxide," U.S. Environmental Protection Agency, Office of
Air Quality Planning and Standards, Strategies and Air Standards
Division, Research Triangle Park, NC, April 1982.
56
-------
RESULTS OF THE CARBON MONOXIDE STUDY IN
WASHINGTON, D.C., AND DENVER, COLORADO,
IN THE WINTER OF 1982-83
Introduction
During the winter of 1982-83, the U. S. Environmental Protection
Agency conducted a large-scale urban field study to develop and test
methodologies for determining, with known accuracy, the exposures to
carbon monoxide (CO) of the population of a city. The primary study
objective was to develop and evaluate a methodology for measuring the
distribution of CO exposures of a representative population of an urban
area. Two urban areas were chosen for study: Denver, Colorado, and
Washington, D.C. These areas were selected because they differ in ele-
vation, relative CO levels based on historical fixed site data, diversi-
ty of land use characteristics and commuter patterns. Approximate dates
of field monitoring were November 1, 1982, through February 28, 1983.
Participants in the study were chosen using a 3-stage design. Ap-
proximately 3200 households in Denver and 5800 households in Washington,
D.C., were screened by telephoning a representative random sample of
the population. During the screening process, respondents were asked
about their smoking habits, commute times, and other factors which might
influence CO exposures. Data from the screener survey made it possible
to subsequently create a stratified random sample of individuals with
particular characteristics of interest. For example, only non-smokers
were selected in the final sample, and persons who commuted long dis-
tances were more heavily sampled than were those who commuted short
distances.
The exposure measurements were made with a specially designed per-
sonal exposure monitor (PEM) with a built-in data logger. The data
logger was developed to provide an integrated value expressed in ppm-
minutes which was determined by change of activity pattern or automati-
cally on the clock-hour. The clock-hour values were necessary for
comparing PEM results with the fixed site results. The interviewer
visited the respondent and left a calibrated PEM with instructions for
its use and a diary. The respondent carried the PEM, recording each
change of location and activity in the diary and depressing the data
logging button at each change of activity. A questionnaire also was
administered to obtain detailed information about the respondent's home,
workplace and commute habits. Details of the survey are presented by
Whitmore et al.1 Other details are reported in the final reports by
Johnson2 and Hartwell.3
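The integrated ppm-minute values described above amount to summing one-minute concentrations between logging events (an activity change or the clock hour). The sketch below illustrates that accumulation with hypothetical readings; it is not the instrument's actual firmware or data format.

    # Sketch: accumulate 1-minute CO readings into ppm-minute segments, closing
    # a segment at each event (activity change or clock hour). Hypothetical data.

    def integrate_ppm_minutes(readings_ppm, event_marks):
        """Return ppm-minute totals for segments ending at the marked minutes."""
        segments, total = [], 0.0
        for i, c in enumerate(readings_ppm, start=1):
            total += c                      # each reading covers one minute
            if i in event_marks:
                segments.append(total)
                total = 0.0
        if total:
            segments.append(total)
        return segments

    readings = [2.0] * 30 + [12.0] * 15 + [3.0] * 15     # one hour of data
    print(integrate_ppm_minutes(readings, event_marks={30, 45, 60}))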
57
-------
Study Results
1. Quality Assurance
(a) Precision of PEM Values
The assessment of PEM precision was determined by having a
member of the project field staff carry two or more randomly
assigned monitors for a 24-hour period. The monitors were exposed
to typical sampling conditions of changing temperature, humidity,
elevation, etc., as well as to the vibrations and physical shocks
inherent in transporting the instruments. In Washington, the mean
relative standard deviation of the measurement pairs was
30.6%. In Denver, where the average concentrations were higher,
the mean relative standard deviation was 14.2%.
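The precision statistic quoted here is the mean of the per-pair relative standard deviations of the collocated monitors. A minimal sketch of that calculation follows; the duplicate readings are hypothetical, not study data.

    # Sketch: mean relative standard deviation of duplicate PEM measurements.
    import statistics

    pairs = [(4.1, 4.6), (2.0, 2.4), (7.8, 7.1), (1.1, 1.4)]   # hypothetical ppm

    rel_sd = [statistics.stdev(p) / statistics.mean(p) for p in pairs]
    print(f"mean relative SD = {100 * statistics.mean(rel_sd):.1f}%")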
(b) Accuracy
Two independent audits of the PEM's were conducted by the
Quality Assurance Division, Environmental Monitoring Systems Labor
atory, EPA. The first audit was conducted at the start of the pro-
ject and the second near the end of the study. Results of both
audits indicated that the audited PEM's were within ±10% of the
audit gases in both cities.
2. Fixed Site Concentrations
One goal of these studies was to compare exposure results obtained
from fixed monitors with directly measured personal exposure for CO.
It should be noted that the National Ambient Air Quality Standard
(NAAQS) levels for CO are 35 ppm for 1-hour concentrations and 9 ppm
for 8-hour concentrations. During the study period the 35 ppm level
for 1 hour was never reached in Washington, but it was exceeded in
Denver on one day (12/16/82) at one site (<1% of site-days). The 8-hour level was
exceeded at two Washington sites - one site had one exceedance and the
other site had five exceedances (0.5% site-days). The 8-hour level was
exceeded at 11 Denver sites (8.7% site-days).
3. Personal Exposures
The field study yielded 712 24-hour exposure profiles in Washington
and 900 24-hour exposure profiles (450 persons @ 2 days each) in Denver.
The 8-hour maximum results in Denver were approximately twice as high as
the levels found in Washington. (The fixed site CO concentrations at
Denver were also about twice that observed in Washington.) The Denver
personal exposure distribution indicates 10.7% of the 8-hour maximum
daily CO exposures were above 9 ppm. This compares to 3.9% above 9 ppm
observed in Denver at the fixed sites.
58
-------
Other results include:
(a) The distribution obtained from the concentrations measured at
a combination of fixed site monitors can generally provide a reason-
able measure of CO exposure for the study population, except for the
upper 10 percent of the exposure distribution.
(b) Personal CO exposures were higher in microenvironments asso-
ciated with motor vehicles, such as while commuting.
(c) Personal CO exposures were also higher for persons in high-
exposure occupations, e.g., truck drivers, construction workers
and garage/service station workers.
(d) In Denver, indoor concentrations were higher than correspond-
ing fixed site concentrations during the time period 0900-1600 hours.
(e) In Denver, the first day and the second day personal exposure
profiles are approximately equivalent for workdays.
Summary
From these studies we can conclude that the methodology exists for
conducting exposure studies for CO for an urban area. The studies have
provided an extensive data base from which statistical comparisons can be
performed between population subgroups, between fixed site concentrations
and personal exposure, and between indoor and outdoor concentrations. In
addition, factors associated with exposures can be estimated and modeled.
It is clear that an extension of this concept to other pollutants and
other areas over differing time periods (for temporal resolution) is
warranted.
59
-------
References
1. R. W. Whitmore, Jones, S. M., and Rosenzweig, M. S. "Final Sampling
Report for the Study of Personal CO Exposure." Report by Research
Triangle Institute to the U. S. Environmental Protection Agency,
Research Triangle Park, N. C., January 1984.
2. T. Johnson. "A Study of Personal Exposure to Carbon Monoxide in
Denver, Colorado." Report by PEDCo Environmental, Inc., to the U.
S. Environmental Protection Agency, Research Triangle Park, N. C.,
December 1983.
3. T. D. Hartwell, et al. "Study of Carbon Monoxide Exposure of
Residents of Washington, D.C., and Denver, Colorado." Report by
Research Triangle Institute to the U. S. Environmental Protection
Agency, Research Triangle Park, N. C., January 1984, Parts I and II.
60
-------
A REVIEW OF INDOOR AIR QUALITY RESEARCH AT OAK RIDGE NATIONAL LABORATORY*
A. R. Hawthorne, T. G. Matthews, R. B. Gammage,
C. S. Dudney, and T. Vo-Dinh
Health and Safety Research Division
Oak Ridge National Laboratory
By acceptance of this article, the
publisher or recipient acknowledges
the U.S. Government's right to
retain a nonexclusive, royalty-free
license in and to any copyright
covering the article.
* Research sponsored by the Tennessee Valley Authority under Interagency
Agreement IAG-40-1406-83 with the Martin Marietta Energy Systems, Inc. under
Contract DE-AC05-84OR21400 with the U.S. Department of Energy.
61
-------
INTRODUCTION
Indoor air pollutants are increasingly recognized as important contributors
to the total public exposure to pollutants. Radon, formaldehyde, volatile
organic compounds, combustion gases, and particulates are among the more
important indoor air pollutants. Indoor levels may in fact be comparable to or
greater than levels that have caused concern outdoors. Sources identified as
contributing to reduced indoor air quality include construction products,
consumer products, combustion appliances, and lifestyle habits. When the
potential for elevated concentrations is considered in conjunction with the fact
that many people spend a large fraction of their time indoors, the need to
understand better the indoor component of the population's total exposure is
evident. In addition to assessing the direct impact of indoor air quality, there
is a need to determine the impact of indoor exposures on conclusions drawn about
outdoor air quality. Much health effects information is based on the assumption
that outdoor air quality is the dominant determinant of observed health effects.
A better assessment of pollutant exposures from indoor air relative to outdoor
air is necessary to test this assumption.
For approximately five years, Oak Ridge National Laboratory has had an
active indoor air quality research program. Areas of activity include
instrumentation and methods development, source characterization, field studies,
modeling, remedial measures, and impact assessment. This paper will briefly
review the following components of our research: (1) measurement developments,
(2) source characterization, and (3) field studies.
MEASUREMENT DEVELOPMENTS
There is a need for relatively low-cost, easy-to-use monitoring
instrumentation that is sensitive enough to meet the requirements for measuring
indoor air quality. Much of the available industrial hygiene instrumentation is
not suitable for monitoring indoor air quality. Similarly, much of the equipment
used in assessing outdoor air quality is either too large, noisy, or expensive
for practical use in indoor air quality research. Methods to address this need
62
-------
have been developed as part of our research program.
Both active and passive methods have been developed for formaldehyde
monitoring. A pumped molecular sieve sampling technique was developed and
reported by Matthews, T. G., and T. C. Howell, 1982a. This procedure addresses
the problem of water vapor collection and presents a procedure that allows low-
level formaldehyde monitoring even with relatively high humidities. The method
uses a simple water rinse desorption followed by pararosaniline colorimetric
analysis. For a 30-L sample taken at 1-2 L/min a detection limit of about 25 ppb
is achieved. A second improvement in active formaldehyde monitoring involved
modifications to a commercially available CEA-555 formaldehyde monitor (Matthews,
T. G., 1982b). Using the reported protocol, this instrument has monitored
formaldehyde vapor as low as 10 ppb in a controlled laboratory environment.
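The quoted detection limit depends on the sampled air volume. The sketch below shows the underlying mass-to-mixing-ratio conversion, assuming an ideal-gas molar volume at 25 C and 1 atm; the recovered mass is a hypothetical number chosen to land near the 25 ppb level, not a reported result.

    # Sketch: convert formaldehyde mass recovered from the sorbent into an air
    # concentration for a known sample volume. The 0.92 ug figure is hypothetical.
    MW_HCHO = 30.03        # g/mol
    MOLAR_VOL = 24.45      # L/mol for an ideal gas at 25 C and 1 atm

    def ppb_from_mass(mass_ug, sample_volume_l):
        """Mixing ratio (ppb) for mass_ug of HCHO in sample_volume_l of air."""
        mol = mass_ug * 1e-6 / MW_HCHO
        return mol * MOLAR_VOL / sample_volume_l * 1e9

    print(f"{ppb_from_mass(0.92, 30.0):.1f} ppb")   # a 30-L sample, as in the text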
Passive formaldehyde monitoring techniques have also been developed. A
dimethylsilicone membrane sampler containing water sorbent is exposed for a 24-h
period and analyzed using the pararosaniline procedure (Matthews, T. G. , et al.,
1982c). The detection limit for this method is about 25 ppb. This sampler was
used extensively in our field studies.
A surface monitor has been developed to measure the formaldehyde flux rate
from a solid material such as pressed-wood products or a wall insulated with
urea-formaldehyde foam insulation. The formaldehyde surface emission monitor
(FSEM) is a device approximately 20 cm in diameter which holds a layer of
molecular sieve parallel to the emitting surface and provides a means to measure
the emission rate nondestructively (Matthews, T. G., et al., 1984). For a 2-h
measurement period, a detection limit of about 0.025 mg/cm2-h is achieved.
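The FSEM emission rate reduces to collected mass divided by exposed area and sampling time. A minimal sketch of that arithmetic follows; the collected mass is hypothetical, while the 20 cm diameter and 2-h period follow the description above.

    # Sketch: surface emission flux = collected mass / (exposed area x time).
    import math

    monitor_diameter_cm = 20.0
    exposed_area_cm2 = math.pi * (monitor_diameter_cm / 2.0) ** 2
    collected_mass_mg = 0.8          # hypothetical mass trapped on the sieve
    sampling_time_h = 2.0

    flux = collected_mass_mg / (exposed_area_cm2 * sampling_time_h)
    print(f"emission flux = {flux:.4f} mg/cm2-h")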
Although not developed specifically for indoor air quality monitoring,
recent advancements in screening methodology for PNAs by room temperature
phosphorescence and synchronous luminescence by Vo-Dinh, T., 1983, offer a low-
cost means of screening indoor air sample extracts for PNA content. This
approach was found to be particularly useful in evaluating indoor air quality in
homes with wood stoves (Vo-Dinh, T., et al., 1984a) . Another attractive device
that is currently being further evaluated for indoor air quality monitoring is
the passive PNA dosimeter developed by Vo-Dinh, T., 1984b. The monitor is a
diffusion device using a heavy-atom-treated filter paper as the sorbent and room
temperature phosphorescence as the analytical method. The unit is particularly
attractive in that it does not require sample treatment after exposure and is
placed directly into a spectrometer for readout.
63
-------
SOURCE CHARACTERIZATION
Our activities in source characterization have emphasized resin-containing
materials that emit formaldehyde. Laboratory measurements have been conducted
using both small chambers and the surface emission monitor. Current activities
also include experiments in a room-sized environmental chamber.
An early example of formaldehyde source characterization involved the
measurement of emission rates from simulated wall panels containing urea-
formaldehyde foam insulation (UFFI) (Hawthorne, A. R., and R. B. Gammage,
1982). The results of this work indicated that UFFI could be a significant
source of formaldehyde and that the levels measured in the laboratory were
similar to levels observed in homes with recently installed UFFI.
Characterization of formaldehyde release rates from fiberglass insulation
has recently been completed (Matthews, T. G., et al., 1983). These results
indicate that formaldehyde release from fiberglass insulation is expected to
produce a minimal impact on indoor air quality.
The most extensive source characterization activity involves a continuing
study of the formaldehyde emission characteristics of pressed-wood products.
This work includes the measurement of emission rates of pressed-wood materials
from a product survey (measured at standard environmental conditions),
measurement of emission decay rates, emission rate dependence on environmental
conditions, and a study of permeation barriers and potential sinks such as gypsum
wallboard.
FIELD STUDIES
Field studies of indoor air quality in occupied residences are an important
component of our research activities. The major study that we have conducted
involved 40 homes in the Oak Ridge/Knoxville area of East Tennessee (Hawthorne,
A. R. , et al., 1984). This study measured the levels of formaldehyde, volatile
organics, particulates, and combustion gases during a one-day visit to each
house. Formaldehyde concentrations were also monitored once a month with a 24-h
passive sampler for about nine months. Radon levels were measured in all 40
homes using passive track etch monitors exposed for three months. Hourly
readings of radon were obtained in a subset of the homes for periods of up to a
week. Air exchange rates and meteorological data were obtained during the one-
day visits. Air leakage was also measured in a subset of the houses using a
blower door (Gammage, R. B. , et al., 1984).
64
-------
A continuing investigation of volatile organic compounds in a subset of the
40 homes is currently underway. Compounds more volatile than toluene are being
emphasized using a mixed sorbent bed collection tube and high-resolution gas
chromatography. A portable photoionization gas chromatograph is also being used
to locate sources.
A preliminary study to measure combustion gases produced from the operation
of unvented gas space heaters was conducted this spring in six houses. Levels of
carbon oxides, nitrogen oxides, and oxygen depletion were monitored. Air
exchange rate measurements were also performed.
Monitoring of radon and radon daughter levels in 60 homes in the Tennessee
valley is planned to begin this summer. Quarterly measurements will be conducted
to evaluate the variability of radon in both basements and living areas of the
homes. Air exchange rates will also be determined.
REFERENCES
1. Gammage, R. B. , A. R. Hawthorne, and D. A. White, 1984. Parameters
affecting air infiltration and air tightness in thirty-one east Tennessee
homes. ASTM Sysposium on Measured Air Leakage Performance of Buildings,
Philadelphia, Penn.
2. Hawthorne, A. R., and R. B. Gammage, 1982. Formaldehyde release from
simulated wall panels insulated with urea-formaldehyde foam insulation. J.
Air Pollut. Cont. Assoc. 32, p.1126.
3. Hawthorne, A. R., et al., 1984. An indoor air quality study of forty East
Tennessee homes. ORNL-5965, Oak Ridge National Laboratory.
4. Matthews, T. G., and T. C. Howell, 1982a. Solid sorbent methodology for
formaldehyde monitoring. Anal. Chem. 54, p. 1495.
5. Matthews, T. G., 1982b. Evaluation of a modified CEA Instruments, Inc.
Model 555 analyzer for the monitoring of formaldehyde vapor in domestic
environments. Am. Ind. Hyg. Assoc. J. 43, p. 547.
6. Matthews, T. G., A. R. Hawthorne, T. C. Howell, C. E. Metcalfe, and R. B.
Gammage, 1982c. Evaluation of selected monitoring methods for formaldehyde
in domestic environments. Environ. Int. 8, p. 143.
7. Matthews, T. G., et al., 1983. Determination of formaldehyde emission
levels from ceiling tiles and fibrous glass insulation products. Project
report to the U.S. Consumer Product Safety Commission.
65
-------
8. Matthews, T. G., A. R. Hawthorne, C. R. Daffron, M. D. Corey, T. J. Reed,
and J. H. Schrimsher, 1984. Formaldehyde surface emission monitor. Anal.
Chem. 56, p. 448.
9. Vo-Dinh, T., 1983. Rapid screening luminescence techniques for trace
organic analysis. New Directions in Molecular Luminescence. ASTM
Publications, pp. 5-16.
10. Vo-Dinh, T., T. B. Bruewer, G. Colovos, T. J. Wagner, and R. H. Jungers,
1984a. Field evaluation of a cost effective screening procedure for PNA
pollutants in ambient air. Environ. Sci. Tech., 18, p477.
11. Vo-Dinh, T., 1984b. Air pollution: Applications of simple luminescence
techniques. Identification and Analysis of Organic Pollutants in Air.
Butterworth Publishers, pp. 257-269.
66
-------
PASSIVE SAMPLING DEVICES WITH REVERSIBLE
ADSORPTION: MECHANICS OF SAMPLING
Robert W. Coutant
Battelle Columbus Laboratories
Columbus, Ohio 43201
Robert G. Lewis and James D. Mulik
Advanced Analysis Techniques Branch,
Environmental Monitoring Systems Laboratory,
U.S. Environmental Protection Agency
Research Triangle Park, North Carolina 27711
INTRODUCTION
Most commercially available passive sampling devices employ activated
carbon as the sorbent. With such devices, the sorption process is not
thermally reversible, and solvent desorption must be used to recover the
sample. Consequently, the use of these devices to sample ambient concen-
trations (0.1-10 ppbv) of volatile organic compounds (VOC) can impose
severe restrictions on the analytical techniques (1). On the other hand,
passive devices using reversible adsorption offer several advantages speci-
fically suited to sampling of ambient concentrations of VOC's. These
include:
1. Independence from solvent contamination
2. Increased sensitivity because of the availability
of the whole sample for analysis
and 3. more rapid sample turnaround
However, the sampling behavior of these devices differs from the ideal
case normally assumed for activated carbon, and failure to recognize the
differences can lead to biases in sampling and interpretation.
This paper discusses the mechanics of sampling with reversible
adsorption, and presents a simple model for calculating sampling rates.
This model provides guidelines for proper design and application of passive
monitors employing reversible adsorption, and the performance of the EPA
personal exposure monitor (PEM) is used to illustrate the consequences
of proper and improper application of the fundamental principles.
SAMPLING MECHANICS
There are currently two designs of PEMs that use reversible adsorption.
Both of these use Tenax GC, and their major difference is in the thickness of
67
-------
the sorbent bed. The EPA PEM is a large face area system having a thin bed,
while the device developed by Brown (2) is a thick-bed system. The funda-
mental mechanics of sorption are the same for both devices, but the thin
bed system is subject to simplifications that more readily obviate the
significance of key physical parameters. For the thin bed system, the time
averaged sampling rate can be written as

    R(t) = Ro (1 - e^(-kt)) / (kt)                                    (1)

where Ro is the sampling rate at zero time and is given by Ro = DA/l.
(D is the gas phase diffusion coefficient of the sorbate, A is the
effective area of the diffusion barrier, and l is the effective length
of the diffusion path.) k is the ratio of Ro to the bed capacity, WVb,
where W is the weight of sorbent and Vb is the GC retention volume for
the sorbate. Equation 1 indicates that for sorbates having relatively
low retention volumes, the sampling rates will be strongly dependent on
sampling time, but this effect can be offset to some extent by design
of the device to yield lower values of Ro.
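A minimal numeric sketch of the thin-bed relation as reconstructed above follows. The sorbent weight and zero-time rate are illustrative round numbers rather than calibrated PEM constants, and the two retention volumes simply echo the values quoted for 1,2-dichloroethane and hexachlorobutadiene in Figure 2 below.

    # Sketch of the thin-bed reversible-adsorption model: time-averaged rate
    # R(t) = Ro * (1 - exp(-k*t)) / (k*t), with k = Ro / (W * Vb).
    # Parameter values are illustrative, not calibrated PEM constants.
    import math

    def avg_sampling_rate(t_min, ro_cc_min, w_g, vb_l_g):
        """Time-averaged sampling rate (cc/min) after t_min minutes of exposure."""
        bed_capacity_cc = w_g * vb_l_g * 1000.0   # W*Vb, litres converted to cc
        k = ro_cc_min / bed_capacity_cc           # 1/min
        if t_min == 0.0:
            return ro_cc_min
        return ro_cc_min * (1.0 - math.exp(-k * t_min)) / (k * t_min)

    # A weakly retained sorbate loses sampling rate quickly over 24 h;
    # a strongly retained one stays much closer to Ro.
    for vb in (18.0, 324.0):                      # retention volumes, L/g
        print(vb, round(avg_sampling_rate(24 * 60, 80.0, 0.1, vb), 1))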
EXPERIMENTAL RESULTS
The thin bed model was evaluated through exposure of the EPA PEM
to various mixtures of VOC's in the Battelle dosimeter test facility.
Concentrations were in the range of 1-10 ppbv, and exposure times were
varied from 15 minutes to 24 hours. Figure 1 shows a comparison of
experimentally determined one hour average sampling rates with values
predicted by Equation 1 for 17 common VOC's. Agreement is good for all
but 3 compounds. For acrylonitrile, literature values of the retention
volume vary widely and good agreement could be obtained by choosing a
retention volume near the upper limit of those cited in the literature.
Figure 2 shows examples of long term behavior typical of compounds
having high and low retention volumes, with the curves having been calculated
using Equation 1. In general, we found excellent agreement between Equa-
tion 1 and measured sampling rates for sampling times between 15 minutes
and 24 hours, and for compounds having retention volumes ranging from 0.5
L/g (trichlorotrifluoroethane) to over 2000 L/g (o-xylene).
68
-------
An alternative illustration of the applicability of Equation 1 can
be gained by using the experimentally measured sampling rates to calculate
apparent retention volumes. Table 1 shows calculated retention volumes for
4 VOC's in comparison with literature values for the same compounds.
CONCLUSIONS
Passive monitors utilizing reversible adsorption can be used for
monitoring of ambient concentrations of VOC's, but strict attention must
be paid to device design and bed capacity to avoid severely time sensitive
sampling rates. The thin bed model, which assumes that all of the bed
capacity is available, is applicable to the EPA PEM. However, with thick
bed systems only a small volume of sorbent near the face of the device
will be utilized and sampling rates can be even more time sensitive than
illustrated with the EPA PEM, depending on the face area to volume ratio.
With thick bed systems, the thin bed model is not applicable, and one
must resort to a more complex treatment involving a series solution
to the problem.
69
-------
FIGURE 1. ONE HOUR SAMPLING RATES (measured versus predicted one-hour sampling rates, cc/min; Rm/Rp = 0.96 ± 0.08)
CHEMICAL KEY
VINYLIDENE CHLORIDE
ACRYLONITRILE
FREON 113
1,2-DICHLOROETHANE
CHLOROFORM
METHYLCHLOROFORM
CARBONTETRACHLORIDE
BENZENE
1,2-DIBROMOETHANE
TRANS-1,3-DICHLOROPROPENE
TOLUENE
CHLOROBENZENE
HEXACHLOROBUTADIENE
O-XYLENE
TRICHLOROETHYLENE
TETRACHLOROETHYLENE
BENZYL CHLORIDE
-------
FIGURE 2. RESPONSE WITH HIGH AND LOW RETENTION VOLUMES (time-averaged sampling rate R, cc/min, versus sampling time, hr. 1,2-Dichloroethane: Ro = 81.2 cc/min, Vb = 18 ± 4 L/g, SDEV = 3.3 cc/min. Hexachlorobutadiene: Ro = 42 cc/min, Vb = 324 L/g, SDEV = 1.3 cc/min.)
-------
TABLE 1. RETENTION VOLUMES

Compound              Calculated Vb, L/g    Literature Vb, L/g
Acrylonitrile         4.9 ± 0.8             0.3 - 7.0
Vinylidene chloride   1.5 ± 0.1             2 - 6
Freon 113             0.5 ± 0.08            0.23 - 0.47
1,2-Dichloroethane    18 ± 4                24.4
REFERENCES
1. Coutant, R. W., and Scott, D. R., 1982. "Applicability of Passive
Dosimeters for Ambient Air Monitoring of Toxic Organic Compounds",
Environ. Sci. Technol., 16, pp. 410-413.
2. Brown, R. H., and Walkin, K. T., "Performance of a Tube-Type Diffusive
Sampler for Organic Vapors in Air", Proc. Fifth Int. SAC Conference, pp.
205-208, May 1981.
72
-------
PORTABLE INSTRUMENT FOR THE DETECTION AND
IDENTIFICATION OF AIR POLLUTANTS
J.R. Stetter, S. Zaromb, W.R. Penrose, and T. Otagawa
Argonne National Laboratory
and
J. Sinclair and J. Stall
United States Coast Guard, Washington, DC 20593
INTRODUCTION
A portable instrument for detecting, identifying, and monitoring chemical
hazards is described by Stetter et al. (1984a, 1984b). The instrument was
developed at Argonne National Laboratory for the purpose of alerting U.S. Coast
Guard personnel to the presence of hazardous vapors during cleanup of chemical
spills or during inspection of chemical shipments. Instruments of the same type
may be used as personal monitors for employees in hazardous waste cleanup
operations and in various industrial environments, especially in the chemical,
pharmaceutical, petroleum, mining, and metallurgical industries. They may also
serve as inexpensive substitutes for, or supplements to, the instrumentation now
used to monitor hazardous emissions from smokestacks and other stationary
sources.
MAIN FEATURES OF THE PROTOTYPE INSTRUMENT
The recently completed prototype instrument, which uses the array shown in
Fig. 1, comprises four electrochemical sensors that respond to toxic gases and
two heated noble-metal filaments that cause many compounds to be partially
pyrolyzed or oxidized in air (Stetter, Zaromb, and Findlay, 1984). The four
sensors can be rapidly switched to one of several operating modes. In practice,
four modes and four sensors yield 16 measured parameters, that is, 16 independent
data channels.
The prototype instrument fits into a camera bag (Fig. 1), weighs about 15
pounds, and can operate for at least four hours on self-contained rechargeable
batteries. The user interface consists of five keys and a 32-character
display. The user selects detection, identification, or calibration modes by
pressing the appropriate keys. Extensive training of personnel is avoided by
having the instrument provide menus of choices for each operating mode desired.
73
-------
The menus are controlled by a microprocessor that has 12 kilobytes of memory and
extensive self-test capabilities.
When monitoring for the presence of any unknown air contaminant, the sensor
array is connected directly to a sampling probe. A signal from any of the
sensors indicates the presence of a possibly hazardous species near the probe
intake. To identify the detected species, the user first draws a sample through
the probe intake into a sampling bag. The collected sample is then passed
through the sensor array, with the sensors being switched into four differently
selective modes at appropriate intervals. The response of each sensor at the end
of each interval is recorded in one of 16 independent data channels. The
relative magnitudes of these response signals provide the information needed to
identify the particular species giving rise to the observed signals. Once the
microprocessor identifies a compound based on the recorded data, it then sets the
sensor array for maximum sensitivity to that compound in the monitoring mode.
The microprocessor can also set the alarm level to correspond to an appropriate
fraction of the short-term exposure limit or immediately-dangerous-to-life-and-
health concentration of the identified compound.
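A minimal sketch of how such an alarm threshold might be derived once a compound has been identified is given below; the exposure-limit values, compound names, and the fraction used are hypothetical placeholders, not the instrument's stored data.

```python
# Hypothetical exposure limits (ppm); an actual instrument would use its own
# stored tables, not this illustrative dictionary.
EXPOSURE_LIMITS_PPM = {
    "carbon tetrachloride": {"STEL": 10.0, "IDLH": 200.0},
    "toluene":              {"STEL": 150.0, "IDLH": 500.0},
}

def alarm_level(compound, fraction=0.5, basis="STEL"):
    """Alarm concentration as a fraction of the chosen exposure limit."""
    return fraction * EXPOSURE_LIMITS_PPM[compound][basis]

print(alarm_level("toluene"))                              # 75.0 ppm
print(alarm_level("carbon tetrachloride", basis="IDLH"))   # 100.0 ppm
```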
Of some 30 different compounds tested with the array shown in Fig. 1, each
yielded a distinctive response pattern, as illustrated by the histograms of Figs.
2 and 3. These histograms, as well as the response versus concentration plots of
Fig. 4, obtained as described in Stetter et al. (1984a), show how specificity and
quantitative determinations derive from the measured values. To demonstrate the
differences in response patterns, a "fingerprint index" can be derived from each
histogram by:
1. Assigning a two-digit index (01 through 16) to each of the
channels 1-16;
2. Listing the indices of the strongest channels in order of de-
creasing channel strength; and
3. If the signal in one of the three strongest channels is negative,
drawing a bar over the corresponding channel index (Stetter
et al., 1984b).
Thus, for carbon tetrachloride and tetrachloroethylene, the fingerprint indices
become 020307 and 070602, respectively (cf. Fig. 2). Figure 4 shows the
proportionality of the signals in the strongest channels to the concentrations of
four different compounds.
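As an illustration of the three rules above, the following short Python sketch derives a fingerprint index from a vector of 16 channel responses. The function name and the example response values are hypothetical, chosen only to show the form of index described in the text.

```python
def fingerprint_index(responses, n_strongest=3):
    """Derive a fingerprint index from an array of channel responses.

    responses : sequence of 16 signed signal values, one per data channel
                (channel numbering starts at 1).
    Returns the concatenated two-digit indices of the strongest channels,
    ranked by signal magnitude, plus the set of those channels whose signal
    is negative (rendered in the paper as a bar over the channel index).
    """
    ranked = sorted(range(len(responses)),
                    key=lambda i: abs(responses[i]), reverse=True)
    strongest = ranked[:n_strongest]
    index = "".join(f"{ch + 1:02d}" for ch in strongest)
    negative = {ch + 1 for ch in strongest if responses[ch] < 0}
    return index, negative

# Hypothetical response pattern (not measured data): channel 2 strongest,
# then channel 3, then channel 7, with channel 7 negative.
demo = [0.0] * 16
demo[1], demo[2], demo[6] = 1.00, 0.62, -0.25
print(fingerprint_index(demo))   # ('020307', {7})
```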
74
-------
Table 1 lists 19 of the tested compounds that are of concern to the U.S.
Environmental Protection Agency (Federal Register, 1980). All but one of the
fingerprint indices listed in the second column of Table 1 can be seen to differ
from each other. The one exception is the identical fingerprint index (070603)
for chloroform and pyridine. However, it is clear from Fig. 3 that the
histograms of these two compounds differ substantially in the ratios of the
normalized responses in the three strongest channels -- 1.00:0.51:0.18 for
chloroform as compared with 1.00:0.77:0.17 for pyridine. Moreover, if our
fingerprinting procedure is extended to the five (instead of three) strongest
channels, then the new indices for chloroform and pyridine become 070603 0205
and 070603 0208, respectively. Thus, these two compounds are also distinguish-
able from each other with the array used.
The last two columns in Table 1 list the ELD (estimated lowest detectable)
concentrations determined as twice the noise levels (Stetter et al., 1984b) and
the TWA (time-weighted average) threshold limit values of the 19 compounds. In
16 out of 19 cases, the ELD concentrations fall below the TWA values. Only three
nitrogen-containing compounds (hydrazine, methylhydrazine, and nitrobenzene)
present detection problems below the TWA threshold levels with the sensor-
filament array used in this work. Since this particular array did not include
some of the most sensitive sensors, such as those used in the parts-per-billion-
level hydrazine detector (Rogers et al., 1980), we expect that low-level
detection should usually be achievable with arrays tailored to the compounds of
interest.
FUTURE DIRECTIONS
The numbers of differently selective sensors, S, and their differently
selective sensing modes, M, can be varied to yield P = MS independent
parameters. The number P required to perform a specified task depends on the
number N of different compounds that may be encountered in a given environment
and also on the number A of significant components that may be encountered at one
time, in accordance with the following inequality (Zaromb and Stetter, 1984):
2^P - 1 ≥ Σ (i = 1 to A) N!/[i!(N - i)!]
Thus, P = 16 can serve to identify a maximum of N = 74 different species in
mixtures containing up to three different air contaminants or a maximum of N = 30
75
-------
different species in mixtures of up to four different contaminants. Alter-
natively, P = 24 (obtainable with S = 4 and M = 6 or S = 6 and M = 4) may serve
to identify up to 100 different species in mixtures of up to four contaminants or
up to 30 species in mixtures of up to eight contaminants. A computer algorithm
developed for estimating measurement errors can be used to evaluate the
appropriateness of any given sensor array and selected operating modes for
identifying and monitoring any group of compounds that may be encountered in a
given environment.
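The exact combinatorial form of the inequality is not fully legible in the source; the sketch below assumes it compares the number of distinguishable non-zero response patterns, 2^P - 1, with the number of possible mixtures of up to A components drawn from N candidate compounds (a sum of binomial coefficients). The function and example values are illustrative only.

```python
from math import comb

def parameters_sufficient(P, N, A):
    """Check whether P independent parameters can, in principle, distinguish
    all mixtures of up to A components drawn from N candidate compounds.

    Assumes the inequality 2**P - 1 >= sum_{i=1..A} C(N, i), i.e. the number
    of distinguishable non-zero response patterns must be at least the number
    of possible mixtures (an assumed reading of the inequality above).
    """
    mixtures = sum(comb(N, i) for i in range(1, A + 1))
    return 2 ** P - 1 >= mixtures

print(parameters_sufficient(16, 30, 4))    # True under this reading
print(parameters_sufficient(24, 100, 4))   # True under this reading
```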
REFERENCES
1. American Conference of Governmental Industrial Hygienists, 1983. Threshold
Limit Values for Chemical Substances in the Work Environment Adopted by ACGIH
for 1983-84, Cincinnati, Ohio.
2. Federal Register, 1980. Rules and Regulations -- Appendix VII -- Hazardous
Constituents, pp. 33132-33133.
3. Rogers, Perry M., et al., 1980. Instrument development of a toxic level
hypergolic vapor detector, Proceedings of the JANNAF Safety and Environmental
Protection Subcommittee, Chemical Propulsion Information Agency, Laurel,
Maryland, pp. 1-22.
4. Stetter, Joseph R., et al., 1984a. Portable device for detecting and
identifying hazardous vapors, Proceedings of the Hazardous Material Spills
Conference, J. Ludwigson, ed., Government Institutes, Inc., Rockville,
Maryland, pp. 183-190.
5. Stetter, Joseph R., et al., 1984b. Selective monitoring of hazardous
chemicals in emergency situations, Proceedings of the JANNAF Safety and
Environmental Protection Subcommittee, Chemical Propulsion Information
Agency, Laurel, Maryland, in press.
6. Stetter, Joseph R., Solomon Zaromb, and Melvin W. Findlay, Jr., 1984.
Monitoring of electrochemically inactive compounds by amperometric toxic gas
sensors, Extended Abstracts of 1984 Pittsburgh Conference on Analytical
Chemistry and Applied Spectroscopy, Atlantic City, New Jersey, March 5-9,
p. 17.
7. Zaromb, Solomon, and Joseph R. Stetter, 1984. Theoretical basis for
identification and measurement of air contaminants using selective sensor
arrays, submitted for publication to Sensors and Actuators.
76
-------
TABLE 1. SUMMARY OF TEST RESULTS FOR 19 COMPOUNDS OF
CONCERN TO THE U.S. ENVIRONMENTAL PROTECTION AGENCY

Compound                          Fingerprint   ELD Concentration#   TWA##
                                  Index*        (ppm)                (ppm)
Simple Hydrocarbon Ring Compounds
  Cyclohexane                     020706        0.4                  300
  Benzene                         070611        2.5 ± 1.5            10
  Toluene                         150706        3 ± 2                100
Carbon-Oxygen Compounds
  Carbon monoxide                 161314        2 ± 1                50
  Formaldehyde                    120910        0.1                  1
Chlorinated Aliphatics
  Carbon tetrachloride            020307        0.1                  5
  Chloroform                      070603        1                    10
  Tetrachloroethylene             070602        4                    50
Nitrogen Compounds
  Hydrogen cyanide                091011        0.4                  10
  Hydrazine                       040302        0.1                  0.1
  Methylhydrazine                 080712        0.8                  0.2
  1,1-Dimethylhydrazine           120811        0.2                  0.5
  Pyridine                        070603        0.2                  5
  Acrylonitrile                   060705        0.1                  2
  Nitrobenzene                    041110        10                   1
  Nitric oxide                    070609        0.1                  25
  Nitrogen dioxide                050607        0.15 ± 0.05          3
Sulfur Compounds
  Hydrogen sulfide                080512        0.1                  10
  Sulfur dioxide                  121011        0.2                  2

*See text.
#Estimated lowest detectable concentration.
##Time-weighted average threshold exposure level (American Conference
of Governmental Industrial Hygienists, 1983).
77
-------
Figure 1. Prototype Instrument for Detection, Identification,
and Monitoring of Chemical Hazards: A. Photograph and B. Basic Components.
[Components labeled in the block diagram (B): electrochemical sensors, rhodium filament, platinum filament, pump, polluted-gas inlet, vent, microprocessor with memory and power control, data and power lines, display, and alarm.]
78
-------
[Figure 2. Normalized response histograms (normalized response versus channel number, channels 1-16) for individual compounds, including formaldehyde, carbon tetrachloride, and tetrachloroethylene.]
-------
[Figure 3. Normalized response histograms (normalized response versus channel number) for chloroform and pyridine.]
-------
Figure 4. Proportionality of the Response Signals in the Strongest
Channels to the Concentration in Air of the Sampled Compounds.
[Signal versus concentration (ppm) plots for NO (earlier and later runs, average sensor noise 0.0063 and 0.0072), formaldehyde (average sensor noise 0.18 µA), carbon tetrachloride (average noise 0.00045), and tetrachloroethylene (average noise 0.0036).]
81
-------
Problems and Pitfalls of Trace Ambient Organic Vapor
Sampling at Uncontrolled Hazardous Waste Sites
Michael S. Zachowski
Stephen A. Borgianini
New Jersey Department of Environmental Protection
Hazardous Site Mitigation Administration
CN 028
Trenton, New Jersey 08625
Vectors of pollutant migration from uncontrolled hazardous waste sites have
been under investigation by regulatory agencies for many years. The current
data base has largely been limited to direct impact upon surface and ground
water via leachate discharge and ground water infiltration. Leachate sampling
and characterization protocols are well established. Consequently, current
engineering practices are adequate to address and mitigate environmental impact
from leachate streams. Potential ground water impacts from uncontrolled
hazardous waste sites present a more complex problem to environmental scientists
than the leachate route. Earlier ground water investigations in the previous
decade, though ambitious, fell short of accurately defining ground water flow
networks. Refinement of analytical techniques and mathematical modeling
approaches in the late 1970's demanded a more sophisticated approach to field
techniques and monitoring network design. Subsequent refinements of
environmental measurement system design and collection procedures are
approaching convergence with state-of-the-art analytical and modeling
protocols.
The airborne route of contaminant migration, though always a concern, has
not received attention comparable to that given the better defined routes, i.e.,
ground and surface waters. Both liquids and gases, in their natural physical
state, take on the shape of their container. In the biosphere, leachate and
surface waters take on the shape of their stream channel or local ground water basin. The
82
-------
atmosphere, however, is relatively dimensionless. Water is of much greater
relative density than air and is constrained by its container, which minimizes
dilution effects relative to air. In the past, regulatory agencies have placed
their emphasis on point source air emissions because of the high concentrations
emitted and the relative ease of controlling such sources. Efforts to control
these point sources have largely been concerned with potential health effects
on affected individuals and communities.
In the New Jersey Department of Environmental Protection (NJDEP) Hazardous
Site Mitigation Administration's (HSMA) efforts to perform environmental
evaluations and risk assessments at uncontrolled hazardous waste sites, it has
become clear to these investigators that non-point source emissions from the
subject sites were being largely ignored by federal and state regulatory
agencies. These investigators believe organic vapor and particulate emissions
from uncontrolled hazardous waste sites represent a significant pathway for
offsite migration of contaminants. In order to provide toxicologists with the
data necessary to perform a risk assessment, emissions must be qualified and
quantified. In order to develop the data base, a literature search was
conducted, revealing inadequacies and difficulties in sampling network design
and a wide divergence of analytical methodologies. It became apparent that the
problems encountered were not unlike those faced by investigators performing
water quality studies in the early 1970's. In many ways, the problems faced by
investigators attempting to design comprehensive sampling and analytical plans
to delineate airborne pollution migration from non-point sources are compounded
by the relative instability of the air matrix in relation to the water matrix,
with regard to dilution effects, variability of direction, and speed of
transport.
83
-------
Design
In attempting to approximate environmental conditions in the air matrix,
the investigator must correlate the physical nature of the environment with the
design of the monitoring network. One of the most critical physical
considerations is the availability of adequate meteorological data, such as
air temperature, wind speed, wind direction, humidity and barometric pressure.
Though these meteorological data may be readily available at regional airports
or meteorological stations, such conditions may be indicative of the region but
lack site specificity. On-site conditions, such as landfill elevations
considerably above grade, can significantly disrupt "normal" meteorological
conditions. Localized anthropogenic disturbances of natural geomorphology can
lead to very localized atmospheric conditions, either affecting the site as a
whole or causing a broad heterogeneity of meteorological conditions within the site.
The design of an environmental sampling network must take into
consideration not only the regional, but also the localized, physical conditions
outlined above. In order to adequately design a monitoring network, such that
mathematical modeling of site conditions and contaminant transport can be
addressed, accurate meteorological data must be available for each sampling
station for the duration of the program. Only by such a strategy can localized
site conditions be defined.
Design and implementation of the actual sampling network is dependent upon
accurate definition of the regional and localized effects catalogued above. These
conditions will bear significantly upon the selection of sampling locations. Major
problems in the selection of sampling locations are the number of sampling stations
required to adequately define the site situation and the selection of a true
84
-------
statistical background. Selection of a background location is extremely
critical in order to determine whether ambient site concentrations are
significantly different from background concentrations. Establishment of the
variability of background concentrations is required to statistically evaluate
background versus site concentrations.
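One way to make the background-versus-site comparison concrete is a two-sample test on replicate measurements. The sketch below uses Welch's t-test from SciPy with entirely hypothetical concentration values; it is only one of several defensible statistical treatments.

```python
from scipy import stats

# Hypothetical replicate concentrations (ppb) for one compound; these numbers
# are illustrative only, not data from the study.
background = [1.2, 0.9, 1.5, 1.1]   # replicate samples, single background station
site = [3.4, 2.8, 4.1, 3.0]         # replicate samples, on-site station

# Welch's t-test: does the site mean differ from background, given the
# variability of each set of replicates?
t_stat, p_value = stats.ttest_ind(site, background, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```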
The lack of a standardized data base for uncontrolled hazardous waste site
emissions causes difficulties in the design of an adequate monitoring network with
respect to suspected contaminants and concentrations. Uncontrolled hazardous
waste sites are characterized by a broad heterogeneity of compounds disposed as
well as uneven distribution throughout the site. These facts must be considered
in selection of the analytical scheme as well as the number and location of
sampling stations.
Classically, air quality standards have been directed toward occupational
exposures. The relationship of these air quality standards to data derived from
ambient air monitoring is nebulous. The paucity of ambient air quality
standards drastically affects sampling network design, since the design should be
dependent upon the compounds and concentrations of contaminants which are known
to adversely affect the environment or public health. In the absence of
well defined action levels, the goals and targets of the program become unclear
to the investigator.
Sampling
As mentioned earlier, much of the focus of ambient air evaluation has been
upon industrial or occupational exposure. Consequently, much of the
sampling equipment available to the investigator is limited in its usefulness.
85
-------
Specifically, most sampling pumps are designed to monitor eight hour exposures.
NJDEP-HSMA's investigations were more concerned with long term chronic exposure.
Based on a literature review, no standard sampling periods were established. In
order to calculate daily exposures and to minimize manpower requirements, a
twenty-four hour sampling period at low flow rates (10-15 ml/min) was
chosen. This immediately presented a problem because, to the best of our knowledge,
no such pump was readily available commercially. For this study, only volatile
organic constituents were considered, due to budgetary and logistical constraints.
The sampling device that was chosen was fabricated, consisting of a battery
powered Gillian - 10020 pump which draws 10-15 ml/minute of air through a
collection trap packed with Tenax-GC. Laboratory calibrated rotameters were
adjusted in the field to ensure that the proper flow rates were achieved. To
prevent airborne particulates from entering the traps, a glass fiber filter was
placed before each trap. These filters were impregnated with sodium thiosulfate
to avoid any oxidation of the Tenax-GC and to minimize the formation of
artifacts.
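For orientation, the total air volume drawn over a 24-hour period at these flow rates, and the analyte mass that would be collected at a given concentration, can be estimated as follows; the benzene concentration used here is a hypothetical example, not a measured value.

```python
flow_ml_per_min = 12.5                 # midpoint of the 10-15 ml/min range
minutes = 24 * 60
volume_l = flow_ml_per_min * minutes / 1000.0   # total volume sampled, litres
print(f"Sampled volume: {volume_l:.0f} L")       # ~18 L

# Mass collected for a hypothetical 10 ppb (v/v) benzene concentration,
# assuming ~24.45 L/mol molar volume at 25 C and 1 atm.
ppb = 10.0
mw_benzene = 78.11                     # g/mol
moles = (ppb * 1e-9) * volume_l / 24.45
mass_ng = moles * mw_benzene * 1e9
print(f"Collected benzene: {mass_ng:.0f} ng")
```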
In conclusion, sampling efforts for trace ambient organic vapors at
uncontrolled hazardous waste sites were hindered by the lack of standard
procedures and readily available commercial samplers, making data
representativeness and comparability among investigations poor.
Analytical
There are a wide variety of analytical methods for specific pollutants as
well as scanning procedures for chemically related pollutants. None of these
analytical methodologies is all inclusive. Many of these methodologies include
their own specific sample collection technique. If an investigator chooses to
86
-------
examine a wide variety of contaminants in ambient air, he is faced with the
possibility of having to use several different sampling apparatuses to collect
samples as the specific methods require. It is obvious that this can be quite
cumbersome and require a high degree of labor to maintain the sampling network.
In an attempt to maximize the information gained and minimize the physical effort
and money involved, investigations should be narrowed in scope. This
analytical targeting of compounds could be based on toxicity, mode of
transport or pervasiveness in the environment. Analytical targeting can be
accomplished either by fine tuning sample collection methodologies to cover a
broad spectrum of compounds or by adapting current analytical techniques to enhance
the detection of specific chemical species. From a regulatory standpoint,
standard analytical methods must be employed that are legally defensible and
scientifically valid. All of the HSMA sites carry the potential of litigation;
therefore, all analytical methods performed must be of demonstrated quality.
A rigid, well defined Quality Assurance/Quality Control program must be
established to consistently demonstrate the validity, representativeness, and
comparability of the data generated from standardized analytical and field
methodologies. At present the type and frequency of Quality Control procedures
vary greatly. Therefore, standard procedures such as duplicate/replicate
analysis, blank methodologies, and field practices must be established. The
problem of establishing background for an ambient air study is a case in
point. The background can change day by day, if not hour by hour; therefore,
without measuring the variability of the background sample, data interpretation
can be complicated. All background data should at a minimum be replicated by
collecting and analyzing at least two background samples from the same station
collected in the same manner. Utilizing more than one sampling station for
background calculations can lead to errors caused by pseudoreplication.
87
-------
Selection of background location and a knowledge of regional ambient
concentration of the compounds of interest are the most critical factors in
obtaining useful data. It becomes evident that design of the sampling network
determines the usefulness of the data to be used in environmental evaluations.
Evaluation
The problems and pitfalls of trace ambient organic vapor sampling at
uncontrolled hazardous waste sites outlined throughout this paper present the
investigator with a multitude of complexities in the evaluation of the data
generated from such a study. In the process of fine tuning and standardizing the
environmental measurement system, the information gained will be of superior
quality. This increase in data quality does not, however, lead to an immediate
corresponding increase in the amount of information gained by the study. The
evaluation of these data as they relate to public health, environmental impact
and environmental fate of trace ambient contaminants emanating from non-point
sources such as uncontrolled hazardous waste sites thrusts the investigator into
areas of environmental evaluation and risk assessment not previously addressed
at the same level of sophistication as other vectors of pollution migration.
The lack of trace ambient air standards creates problems for toxicological
assessments of potential health effects. Improperly designed sampling and
analytical strategies may overlook high, short term exposures, potentially due
to specialized or localized meteorological conditions, that would not be accounted
for in mathematical modeling for risk assessment. One area of promise, recently
made commercially available to the investigator, is real time, real world, multi-point
analysis of ambient air. Advantages of a real time analytical system include
the large number of analyses possible over a short period of time, which yields
information about the variability of the contaminant load in the atmosphere as
88
-------
well as allowing correction of any apparent analytical problems to take
place almost immediately without loss of valuable data. Such a system, used in
conjunction with an ambient monitoring network, should prove more effective than
either methodology used separately.
The problems presented in this paper provide a substantial challenge to the
environmental scientist, but are not insurmountable. Accurate and definitive
characterization of trace ambient organic vapors emitted from non-point sources
is currently limited to an evaluation process. Point source emissions,
once a source of gross environmental pollution, have been significantly reduced
due to regulatory and technical advances. Non-point sources, such as uncontrolled
hazardous waste sites, still pose a threat to the environment and public health
and will require greatly increased efforts and state-of-the-art technologies to
evaluate and mitigate.
89
-------
REFERENCES
1. Barras, R.C. ed., Instrumentation for Monitoring Air Quality, American
Society for Testing and Materials, STP 555, 1974.
2. J. Bozzelli, B. Kebbekus, "Collection and Analysis of Selected Volatile
Compounds in Ambient Air", paper no. 82-65.2, in Proceedings of the 75th
Annual Meeting, Air Pollution Control Association, Pittsburgh, 1982.
3. R. Harkov, Toxic Air Pollutants in New Jersey, N.J. Department of
Environmental Protection, Trenton, 49 pp., 1983.
4. R. Harkov, R. Katz, J. Bozzelli, B. Kebbekus, "Toxic and Carcinogenic Air
Pollutants in New Jersey - Volatile Organic Substances", Proceedings, Toxic
Air Contaminants, VIP - I, Air Pollution Control Association, Pittsburgh,
1981.
5. R. Harkov, B. Kebbekus, J. Bozzelli, P. Lioy, "Measurement of Selected
Volatile Organic Compounds at Three Locations in New Jersey during the
Summer Season", JAPCA 33:1177 (1983).
6. Himmelsbach, B.F. ed., Toxic Materials In the Atmosphere, American Society
for Testing and Materials, STP 786, 1981.
7. D. Lillian, H. B. Singh, A. Appleby, L. Lobban, R. Arnts, R. Gumpert, R.
Hague, J. Tosney, Jr., Kazzazis, M. Antell, D. Hansen, B. Scott,
"Atmospheric fates of halogenated compounds", Environ. Sci. Technol. 9:1042
(1979).
8. E. Pellizzari, "Analysis for organic vapor emissions near industrial and
chemical waste disposal sites", Environ. Sci. Technol. 16:781 (1982).
9. E.D. Pellizzari, and John Bunch, "Ambient Air Carcinogenic Vapors Improved
Sampling and Analytical Techniques and Field Studies", United States
Environmental Protection Agency, EPA-600/2-79-081, 1979.
10. E.D. Pellizzari, M. D. Erickson, R. A. Zweidinger, "Formulation of a
Preliminary Assessment of Halogenated Organic Compounds in Man and
Environmental Media", EPA-560/13-79-010, U.S. Environmental Protection
Agency, 1979.
11. H. B. Singh, L. J. Salas, A. J. Smith, H. Shigeishi, "Measurement of some
potentially hazardous organic chemicals in urban atmospheres", Atmos.
Environ. 15:601, (1981).
12. United States Environmental Protection Agency, Quality Assurance Handbook
for Air Pollution Measurement Systems, EPA-600/9-76-005, 1976.
13. United States Environmental Protection Agency, Technical Assistance
Document for Sampling and Analysis of Toxic Organic Compounds in Ambient
Air. EPA-600/4-83-027, June 1983.
14. Verner, S. S. ed., Sampling and Analysis of Toxic Organics In the
Atmosphere, American Society for Testing and Materials, STP 721, 1979.
90
-------
NEW CONTINUOUS MONITORING SYSTEMS FOR MEASUREMENT OF HAZARDOUS POLLUTANTS
J.N. Driscoll, A.G. Wilshire, J.W. Bodenrader
HNU Systems, Incorporated
160 Charlemont Street
Newton, Massachusetts 02161
Over the past few years, the HNU PI-101 photoionizer has been one of the
primary instruments used for hazardous waste site entry programs by the FIT
and TAT environmental response teams (1). The PI-101 is used for preliminary
screening of the area, including determining the extent of the problem. The 101 can also
be used to determine where barrels are located and/or where problematic
barrels are stored (underground) by simply breaking the ground with a heel
or shovel and looking for an increase in hydrocarbon concentration with the
analyzer. The PI-101 can also be used during Phase II to check soil samples
or water samples which are collected and stored for subsequent laboratory
analysis. One gas chromatograph which can be used for analysis in the field
is our Model 301P, which will operate on batteries in the
field and provides complete GC capabilities in the laboratory, with tempera-
tures to 300°C and packed/capillary capability.
Once in the laboratory, the PI-101 can be used as a qualitative tool
(2,3) to screen samples (via headspace) and ensure their integrity prior to GC-
MS or other analysis. Of course a simpler, less costly, and more useful
sample screening technique involves using GC - PID/FID for analysis.
Several years ago, a technique was described (7) that utilized the dif-
ference in response ratios for organic compounds on a photoionization detector
(PID/10.2 eV) and a flame ionization detector (FID) as a means for hydrocarbon
identification.
Typical ratios, calculated as (PID attenuation x peak height)/(FID attenuation
x peak height), were as follows (a short worked sketch follows the table):

Type of Hydrocarbon        PID/FID Ratio
Saturated                  10 or less
Unsaturated                10 - 25
Aromatic                   >25
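A minimal sketch of this ratio calculation and the resulting class assignment is given below; the attenuation and peak-height values are hypothetical.

```python
def pid_fid_ratio(pid_atten, pid_height, fid_atten, fid_height):
    """Response ratio as defined in the text:
    (PID attenuation x PID peak height) / (FID attenuation x FID peak height)."""
    return (pid_atten * pid_height) / (fid_atten * fid_height)

def hydrocarbon_class(ratio):
    """Rough class assignment from the ratio ranges quoted above."""
    if ratio <= 10:
        return "saturated"
    if ratio <= 25:
        return "unsaturated"
    return "aromatic"

# Hypothetical peak from a chromatogram (values are illustrative only).
r = pid_fid_ratio(pid_atten=32, pid_height=85.0, fid_atten=8, fid_height=9.5)
print(f"PID/FID ratio = {r:.1f} -> {hydrocarbon_class(r)}")
```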
The HNU Gas Chromatographs (GC 301 or GC 421) were designed to incorpor-
ate an integral PID (a non-destructive detector) with an FID or another detec-
tor in series.
The PID response (10.2 eV) was found to be a function of the electronic
structure (pi vs sigma electrons) of the solute; as a consequence, it provided
91
-------
a discriminative response. This characteristic (discriminative response)
can be turned into a very powerful qualitative tool, by comparing (ratioing)
the PID signal to that of another detector. The preferred detector for com-
parison was one which had a homogeneous response (i.e., the FID has a homo-
geneous response in the sense that its response for organic molecules depends
mainly on the number of carbon atoms in the molecule). In this way, general
classes of compounds could be identified, since the FID response provides the
normalizing factor. We have been able to differentiate the presence of aroma-
tic, unsaturated, and small molecular weight chlorinated compounds in a haz-
ardous waste dump site by calculating the PID/FID ratios (5). The compounds
present in each group can be further clarified by using retention indices.
Retention indices (RI) can be very useful in aiding solute identification from
both the quantitative and qualitative points of view; they provide
structural information (i.e., two adjacent homologs in a series differ by 100
index units). Several of the unknown compounds in the dump site were "identified"
by matching up their retention indices. In Table IV we have detailed an instru-
ment selection guide.
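Retention-index matching can be sketched as follows, assuming the standard isothermal Kovats formulation with adjusted retention times bracketed by two n-alkanes; the numerical values below are hypothetical.

```python
from math import log10

def kovats_index(t_unknown, t_n, t_n1, n):
    """Isothermal Kovats retention index.

    t_unknown : adjusted retention time of the unknown peak
    t_n, t_n1 : adjusted retention times of the n-alkanes with n and n+1
                carbons that bracket the unknown peak
    n         : carbon number of the earlier-eluting n-alkane
    """
    return 100 * (n + (log10(t_unknown) - log10(t_n)) /
                      (log10(t_n1) - log10(t_n)))

# Hypothetical adjusted retention times (minutes): unknown elutes between
# n-octane (n = 8) and n-nonane (n = 9).
ri = kovats_index(t_unknown=6.1, t_n=4.8, t_n1=8.3, n=8)
print(f"Retention index ~ {ri:.0f}")   # adjacent homologs differ by 100 units
```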
If a hazardous waste site is particularly problematic, e.g., very large
(cleanup cost >$1 M), it is quite possible that Phase III will be required
(Tables I and II), and some types of continuous monitoring equipment will be
necessary for Air Quality Monitoring.
We have developed a very powerful and flexible series of analyzer systems
for continuous monitoring measurement of pollutants. The first is a dedicated
microprocessor controlled chromatograph (501) for fixed installations of spec-
ific pollutants (up to 10); the other (301) is a small transportable system
which is also microprocessor controlled but can be utilized for a variety of
analyses.
The microprocessor controlled environmental chromatograph, the HNU Model
501, utilizes the high sensitivity Photoionization Detector (PID) or Far
UV Absorbance Detector (FUV) (6) for monitoring inorganics or hydrocarbons at
percentage to sub-ppm levels. The Model 501 consists of four sections: the
oven, which contains the columns and PID; the fluidics bay (hardware and valves
needed to inject the sample); the electronics bay, where all the electronics
(including the microprocessor) for the system are located; and the multipoint
sequencer, capable of sampling up to ten points (7,8).
92
-------
The Z-80 microprocessor controls all functions of the Model 501 under
direction of the operating system in ROM. This includes temperature control,
valve control, diagnostic tests, as well as data collection and manipulation.
As a result, the Model 501 is designed for unattended operation 24 hours a
day. Automated calibration with a known level of a hydrocarbon occurs every
eight hours as part of the program.
The chromatographic system used consists of a precolumn, an analytical
column and a detector (PID or FUV). The precolumn is used to remove the more
strongly retained compounds from the air sample during injection. All materials
trapped on the precolumn after injection are back flushed to vent during the
remainder of the analytical cycle. The analytical column separates the com-
ponents for quantitation using the PID or FUV.
The use of a properly chosen precolumn as part of the chromatographic
system allows the development of a specific analyzer.
The Model 301 system weighs about 30 pounds and consists of two packages:
the chromatographic oven, valving, detector, and analog electronics; and the
microprocessor controller/programmer, which can be used for temperature control, valve
two of the following: PID, FUV, or flame ionization (FID). Virtually any
possible combination of contaminants can be analyzed with these detectors. A
very flexible automatic analyzer can be constructed with the 301 components.
In conclusion, we have shown that HNU Systems has a variety of instruments
which can be utilized for preliminary site entry, screening, analysis and/or
continuous monitoring of hazardous waste sites.
REFERENCES
1) Driscoll, J.N., & Hewitt, G.F. , Instrumentation for "On Site" Survey &
Identification of Hazardous Waste, Ind. Hyg. News (May 1982).
2) Becker, J.H., Driscoll, J.N. & Higgins, M., A Sensitive Portable Instru-
ment for Arson Detection. Pitt. Conf. Paper #617, Atlantic City, N.J.
(May 1980).
3) Driscoll, J.N., Becker, J.H., Click, A., & Renaud, D., Rapid Screening
Techniques for Determination of Residual Organics in Foods, Polymers &
Soils. Pitt. Conf. Paper #603, Atlantic City, N.J. (March 1981).
4) Driscoll, J.N., Ford, J., Jaramillo, L.F. & Gruber, E.T., J.Chrom., 158,
171 (1978).
93
-------
5) Jaramillo, L.F., Driscoll, J.N., & Conron, D.W., Identification of Hazard-
ous Waste Compounds, Using Retention Indexes & Response Ratios of PID &
FID in Series, Pitts. Conf. Paper #864, Atlantic City, N.J. (March 1981).
6) Driscoll, J.N. Ferioli, P., & Towns, B., A New Sensitive Universal Detect-
or for Gas Chromatography: Far UV Absorbance, Research & Development
(in press).
7) Hewitt, G.F. & Driscoll, J.N., A New Concept in Env. Chromatography,
Anal. Inst., 19, 5 (1981).
8) Driscoll, J.N., Atwood, E.S., Hewitt, G.F., PID-Automatic GC Combination
Detects Toxic Chemicals at ppb Levels, Research & Development (Feb. 1982)
94
-------
Table II. Tasks for Hazardous Waste Site Evaluation

PHASE I
• Preliminary Field Screening
• Site Evaluation
• Location of Drums

PHASE II
• Sample Collection (soil, water, air, hazardous waste)
• Sample Screening
• Analysis and Identification

PHASE III
• Continuous Monitoring
• TLV Data
• Drilling

Table I. "Air Quality Monitoring" -- What Type of Instrumentation is Needed

PHASE I
• Initial Survey - Field
• Portable instruments for initial survey to measure hydrocarbons and inorganics
  Examples:
  HNU PI-101    Battery operated photoionization based analyzer
  HNU GC-301P   Battery operated gas chromatograph

PHASE II
• Screening - Field or Laboratory
  Example: HNU PI-101 - Headspace
• Identification & Analysis
  Example: GC - MS (?)
  Alternative: GC - PID/(?) other detector

PHASE III
• Continuous Monitoring/Air Quality Monitoring - Field
  Example: GC 301 Automatic, EC 501/511
Table III. Hazardous Waste Instrument Selection Guide

                               PI-101   GC301        GC421   EC 501   EC 511
Skill Level of Operator        low      high         high    low†     high
Portability                    yes      yes (301P)   no      no       no
Ability to Operate in a Van    yes      yes          yes     yes      yes
Lab. Screening (gas samples)   no‡      yes          yes     yes      yes
Liquid Samples                 no       yes          yes     no‡      no‡
Capillary Capability           no       yes          yes     no       no
Auto Sampler                            no           yes     no       no

†Only if 501 is not used with 511
‡Only headspace
-------
REAGENT IMPREGNATED FILM BADGES
FOR PASSIVE POLLUTANT SAMPLING
Rene Surgi and Jimmie Hodgeson
Department of Chemistry
Louisiana State University
Baton Rouge, LA 70803
INTRODUCTION
Silicone polymer films have previously been used as diffusion barriers
placed over reagent substrates for the passive collection of pollutants from
the atmosphere (Reiszner and West, 1973; Nelms, 1976). In the present
application the collection device is the polymer film itself, in which a
reagent or trapping site, has been dissolved and homogeneously dispersed.
Prior to this experimental work, a mathematical diffusion model was developed
to predict the response of such a device to a given pollutant dose (Rubin,
1980). The model predicted a much improved sensitivity over comparable
polymer diffusion barrier sensors and a response which varies as the square
root of pollutant dose. Prior to this effort, experimental verification of
the model has been lacking.
Briefly, we sought to design and evaluate a personal sampler for ozone
using a reagent, 10,10'-dimethyl-9,9'-biacridylidene (DBA), dissolved
homogeneously in a gas-permeable, silicone-polycarbonate copolymer film badge
(General Electric, 1978). This system was chosen because the simplicity and
rapidity of the O3-DBA reaction provided an ideal means for testing the
behaviour of this general kind of sensor and for model validation. DBA has a
reactive double bond that is expected to undergo ozonolysis at a diffusion
controlled rate (Kearns, 1969; Turro, 1970). The rate constant of the reaction
between ozone and DBA was determined to be 5.2 ± 0.6 x 10^9 M^-1 sec^-1.
96
-------
The disappearance of DBA upon ozonolysis can be monitored spectrophoto-
metrically at 430 nm. Unlike other reagents for ozone determination (Pryor and
Collard, 1978; Hodgeson and Surgi, 1984), this reagent is specific for
ozone. Interferences from NO2 and SO2 are less than 1%. Currently, there is
no inexpensive, convenient personal monitor for ambient concentrations of
ozone, no experimental evidence supporting Rubin's mathematical models, nor is
there an experimental measurement of the permeability of ozone in silicone.
EXPERIMENTAL
Twenty-six independent exposures using ozone concentrations from 0.036 to
1.29 ppm ranging over times of 15 to 400 minutes were run. Each data point
(see figure 1) is the average of three independent readings. Various
concentrations of ozone were generated by photolysis of air or oxygen by a
mercury arc lamp housed in a retractable sleeve. The flow rate was maintained
at 12 L/min (0.63 M.P.H.) by bearing flow meters. The ozone concentration,
corrected for temperature and pressure, was continuously monitored by a Dasibi
Model 1008-AH ozone photometer.
RESULTS AND DISCUSSION
Rubin (1980) has solved Fick's laws of diffusion to obtain two equations
which can be combined to relate the amount of reagent (DBA) depleted to the
dose (atm-min) of ozone:

R(t) = (2 S P0 D T N0)^1/2                                          (1)

where:
R(t) = total amount of gas per unit area which has reacted up to time
97
-------
FIGURE 1. EXPOSURE TO 0.083 PPM OZONE
[Sensor response (total ozone reacted per unit area) plotted against (N0 x T)^1/2 x 10^3, i.e., (no. of initial sites x exposure time)^1/2 x 10^3.]
-------
t, moles cm^-2.
S = solubility coefficient of gas in membrane, moles cm^-3 atm^-1.
P0 = partial pressure of gas, atm.
D = diffusion coefficient of gas in membrane, cm^2 min^-1.
T = time, min.
N0 = initial concentration of trapping sites, moles cm^-3.
(SD)b = permeability at a given thickness, b.
P(t) = the instantaneous partial pressure at a time t, atm.

Since the average partial pressure of pollutant, P0, is simply the time-
averaged instantaneous partial pressure:

P0 = (1/T) ∫(0 to T) P(t) dt.

Thus the following expression can be written:

R(t) = [2 S D N0 ∫(0 to T) P(t) dt]^1/2                             (2)

or

[R(t)]^2 / (2 S D N0) = ∫(0 to T) P(t) dt
The quantity of gas reacted, R(t), is directly proportional to the sensor
response as measured by the decrease in absorbance at 430 nm. The
proportionality constant includes the molar absorptivity of DBA and the film
99
-------
thickness, both of which have been determined experimentally. Thus the
response of the sensor should be proportional to the square root of the time-
averaged dose. Furthermore an experimental determination of the
proportionality constant yields a measurement of the pollutant permeability in
the polymer film.
The following quantities have been determined experimentally:

(2SD)b = 62.3 ± 5.8 x 10^-8 (units as given by the above expressions)
N0 = (Ai/b) x 8.59 x 10^-8
R(t) = (ΔA) x 8.59 x 10^-8

where: 8.59 x 10^-8 = (density of membrane 213, 1.156 g/cm^3) /
       [(molar absorptivity of DBA, 34,800 M^-1 cm^-1) x (molecular weight of DBA, 386.5 g/mole)]
ΔA = initial absorbance, Ai, minus final absorbance, Af
Membrane thickness (b) = 36.5 ± 1.3 x 10^-4 cm

Substituting these values into the above equation yields:

(ΔA)^2 / (1990 Ai) = ∫(0 to T) P(t) dt                              (3)
By the use of this expression, the integrated dose may be determined
directly from the measured response. A unique advantage of this device is
that the proportionality constant given here remains constant as long as the
film thickness is controlled. The uncertainty limits given above for membrane
thickness reflect the degree of control which has been attained for film
100
-------
thickness.
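Equation (3) can be applied directly; the sketch below converts a measured absorbance change to an integrated ozone dose, with the conversion to ppm-min resting on the assumption of a 1 atm total pressure. The badge absorbances shown are hypothetical.

```python
def integrated_dose_atm_min(delta_a, a_initial, k=1990.0):
    """Integrated ozone dose from equation (3): dose = (delta A)^2 / (k * Ai)."""
    return delta_a ** 2 / (k * a_initial)

# Hypothetical badge reading: initial absorbance 0.40, final 0.31 at 430 nm.
a_i, a_f = 0.40, 0.31
dose = integrated_dose_atm_min(a_i - a_f, a_i)
print(f"Dose = {dose:.2e} atm-min "
      f"= {dose * 1e6:.1f} ppm-min (at 1 atm total pressure)")
```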
The results of one typical run are shown in Figure 1. In this figure the
y-axis is proportional to the square root of the dose. All the runs performed
showed the same linear relationship. Thus the unique dose-response
relationship predicted by the Rubin model has been verified. For each run the
doses may be calculated from equation (3) and compared to the actual doses as
determined from the measurement of O3 concentration and time. The results of
one such typical comparison are shown below:
  Ai      ΔA     Calculated Dose: ∫P(t)dt    Actual Dose           % Error
                 (atm x min) x 10^6          (atm x min) x 10^6
 .370    .031           1.26                      1.11               14
 .300    .040           2.71                      2.85               4.9
 .360    .056           4.31                      4.68               7.9
 .430    .092          11.0                      10.6                3.8
 .339    .120          20.1                      24.6               18
 .400    .140          27.4                      32.8               16
 .468    .208          47.4                      45.0                5.3

Average of % Error = 10%
Average Deviation = 5%
Since reagent is depleted from the silicone portion of the copolymer 300
times faster than from the polycarbonate portion (Hwang et al., 1974a), it is also
possible to calculate the intrinsic permeability of ozone in siloxane. A plot
of (2SD)b^1/2 against b, extrapolated to zero thickness, yields a y-intercept
which is taken as the intrinsic permeability. A comparison of the
experimentally derived intrinsic permeability to that predicted by modeling
the permeability of ozone in siloxane (General Electric, 1978; Lange, 1952) is
given below.
101
-------
A) Experimental - Extrapolation to Zero Thickness:

(SD)b=0 = 153 ± 20 x 10^-9 (cm^3 gas)(cm thick) / [(sec)(cm^2 polymer)(cm Hg, ΔP)]

B) Model: Boiling Point of Gas, Molecular Diameter of Ozone

S = 1.18 x 10^-2 (cm^3 gas) / [(cm^3 polymer)(cm Hg, ΔP)]

D = 13 x 10^-6 cm^2/sec

SD = 150 ± 25 x 10^-9 (cm^3 gas)(cm thick) / [(sec)(cm^2 polymer)(cm Hg, ΔP)]
Although the use of this approach to determine the intrinsic permeability
of ozone in siloxane is novel, such an approach is not entirely without
precedent. Hwang et al. (1971, 1974b) have investigated permeabilities of
carbon dioxide, oxygen and water through acetyl cellulose acetate membranes
and discovered the following relationship: a plot of (b) against (SD) can
be extrapolated to a y-intercept which is equal to the reciprocal of the
intrinsic permeability.
REFERENCES
1. General Electric Permselective Membranes, 1978. Membrane Products
Operation - Medical Systems Business Operations, Schenectady, N.Y.
2. J.A. Hodgeson and M.R. Surgi, 1984. The Air Quality Criteria Document
for Ozone and Other Photochemical Oxidants, Chapter 4.4.4 (to be
published). U.S. Environmental Protection Agency, Research Triangle Park,
N.C.
3. S.T. Hwang, C.K. Choi and K. Kammermeyer, 1974a. Gaseous Transfer
Coefficients in Membranes, Separation Science, 9, pp 461-478.
4. S.T. Hwang and K. Kammermeyer, 1974b. Effects of Thickness on
Permeability, Polymer Science and Technology, 6, pp 197-205.
5. S.T. Hwang, T.E. Tang, and K. Kammermeyer, 1971. Transport of Dissolved
Oxygen through Silicone Rubber Membrane, Journal of Macromolecular
Science and Physics, B5, pp 1-10.
6. D.R. Kearns, 1969. Selection Rules for Singlet-Oxygen Reactions.
Concerted Addition Reactions, Journal of The American Chemical Society,
91, pp 6554-6563.
7. N.A. Lange, 1952. Handbook of Chemistry, 8th edition, Handbook
Publishers, Inc., Sandusky, Ohio, pp 264-265.
8. P.O. Mollere, K.N. Houk, D.S. Bomse, and T.H. Morton, 1976.
Photoelectron Spectra of Sterically Congested Alkenes and Dienes,
Journal of the American Chemical Society, 98, pp 4732-4736.
9. L.H. Nelms, 1976. The Development of a Personal Dosimeter for Vinyl
Chloride Utilizing the Permeation Approach. Ph.D. Dissertation,
Louisiana State University, 93 pp.
10. W.A. Pryor and R.S. Collard, 1981. Measurement of Ozone in the Presence
of Sulfur Dioxide and Nitrogen Oxides, Journal of Environmental Science
and Health, A16, pp 73-86.
11. K.D. Reiszner and P.W. West, 1973. Collection and Determination of
Sulfur Dioxide Incorporating Permeation and West-Gaeke Procedure,
Environmental Science and Technology, 7, pp 526-532.
12. R.J. Rubin, 1980. Analysis of Mathematical Models of Integrating
Monitoring Devices. NBSIR 80-1975, National Bureau of Standards,
Washington, D.C., 31 pp.
13. N.J. Turro, 1978. Modern Molecular Photochemistry. Benjamin-Cummings
Publishing Co., Menlo Park, CA, p 246.
103
-------
A CRYOGENIC PRECONCENTRATION-DIRECT FLAME IONIZATION METHOD
FOR MEASURING AMBIENT NMOC
Frank F. McElroy and Vinson L. Thompson
Environmental Monitoring Systems Laboratory
U.S. Environmental Protection Agency
Research Triangle Park, North Carolina 27711
INTRODUCTION AND APPLICABILITY
A variety of photochemical dispersion models have been developed to
describe the quantitative relationships between ambient concentrations of
precursor organic compounds and subsequent downwind concentrations of
ozone (1). An important application of such models is to determine the
degree of control of such organic compounds that is necessary in a par-
ticular area to achieve compliance with applicable ambient air quality
standards for ozone (1,2). For this purpose, the models require input of
data on ambient concentrations of nonmethane organic compounds (NMOC).
The more elaborate theoretical models generally require detailed
organic species data (2). Such species data must be obtained by analysis of
air samples with a sophisticated, multicomponent gas chromatographic
(GC) analysis system (2,3). Simpler empirical models such as the Empirical
Kinetic Modeling Approach (EKMA) (1) require only total NMOC concentration
data, specifically the average total NMOC concentrations from 6 a.m. to
9 a.m. daily (2).
Commercial, continuous NMOC analyzers have been used to obtain urban
NMOC concentrations (2), but these methods have proved to be only marginally
adequate (4) because of limitations from variability, zero and span drift,
lack of sensitivity, non-uniform response characteristics, and the in-
direct nature of the measurement. Moreover, these methods are clearly
inadequate for determining the low, upwind NMOC concentrations needed
when transport of precursors into an area is to be considered in the EKMA
application (4,5).
NMOC GC species measurements can be used by summing the various com-
ponents to obtain a total NMOC concentration (2). These measurements are
much more accurate than continuous NMOC analyzer data, but species data are
not needed for EKMA, and the procedure is therefore unnecessarily expensive and
complex.
104
-------
The cryogenic Preconcentration-Direct Flame Ionization (PDFID) method
can be used to obtain the requisite upwind, as well as urban, NMOC measure-
ments (6,7,8). This method is based on a simplification of the GC speciation
technique. It combines the cryogenic concentration technique used in the GC
method for high sensitivity with the simple flame ionization detector (FID) for
total NMOC measurements, without the complex GC columns necessary for species
separation. And because of the use of helium carrier gas, the FID has less
response variation to various organic compounds than a conventional NMOC analyz-
er with air carrier or direct sample injection into the FID.
This method can be used either for direct, in situ ambient measurements or
for analysis of integrated samples contained in metal canisters. Making direct
measurements at the monitoring site avoids the potential sample loss or contami-
nation problems possible with the use of canisters. However, the analyst must
be present during the 6 to 9 a.m. period, and repeated measurements (approxi-
mately six per hour) must be taken to obtain the 6 to 9 a.m. average NMOC con-
centration. A separate analytical system and analyst are needed for each moni-
toring site. (Further development of the method may allow for automatic
operation for on-line semi-continuous analysis in the future.)
The use of sample canisters allows the collection of integrated air samples
over the 6 to 9 a.m. period by automated samplers at unattended monitoring sites.
One centralized system can then analyze the samples from several sites. Degrada-
tion or contamination of the air samples by the canister could be a potential
problem, but tests indicate that the use of properly fabricated, treated, and
cleaned stainless steel canisters, as described in the procedure, is practical
and adds relatively little additional variability to the method (8).
PRINCIPLE
An air sample is taken either directly from the ambient air at the monitor-
ing site, where the analytical system is located, or from a sample canister
filled previously at a remote monitoring site. A fixed-volume portion of the
sample is drawn at a low flow rate through a glass beaded trap cooled to approxi-
mately -186° C. This temperature is such that all organic compounds in the
sample other than methane are collected (either via condensation or adsorption)
in the trap, while methane, nitrogen, oxygen, etc., pass through. The system is
dynamically calibrated so that the volume of sample passing through the trap
105
-------
does not have to be quantitatively measured, but must be precisely repeatable
between the calibration and analytical phases.
After the fixed volume air sample has been drawn through the trap, the
helium carrier gas is diverted to pass through the trap in a direction opposite
to that of the sample flow and into a flame ionization detector (FID). When
the residual air and methane have been cleared from the trap and the FID base-
line becomes steady, the cryogen is removed and the temperature of the trap is
raised to approximately 90° C. The organic compounds collected in the trap
revolatilize and are carried into the FID, resulting in a response peak or peaks
from the FID. The area of the peak or peaks is integrated, and the integrated
value is translated to concentration units via a previously obtained calibration
curve relating integrated areas with known concentrations of propane.
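The calibration step amounts to a simple linear fit of integrated FID area against known propane concentrations (in ppmC), which is then inverted for ambient samples. The sketch below uses NumPy with hypothetical calibration points; it is an illustration, not the documented data-reduction procedure.

```python
import numpy as np

# Hypothetical calibration: integrated peak areas for propane standards (ppmC).
ppmc_std = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # propane, ppm carbon
area_std = np.array([20., 540., 1050., 2110., 4180.])  # integrated FID area counts

# Least-squares straight line: area = slope * ppmC + intercept.
slope, intercept = np.polyfit(ppmc_std, area_std, 1)

def nmoc_ppmc(sample_area):
    """NMOC concentration (ppmC, as propane) from an integrated sample area."""
    return (sample_area - intercept) / slope

print(f"{nmoc_ppmc(1600.0):.2f} ppmC")   # hypothetical ambient sample
```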
The cryogenic trap simultaneously concentrates the nonmethane organic com-
pounds while separating and removing the methane from air samples. Thus the
technique is direct reading for NMOC and, because of the concentration step, is
more sensitive than conventional NMOC analyzers. Quantitative trapping has
been shown for most compounds tested (6). A complete description of the method,
including the collection of integrated air samples in stainless steel canisters,
is provided in Reference 8. Figure 1 is a schematic diagram of the analytical
system.
PRECISION AND ACCURACY
The overall precision estimate for the method, including the effect of
collecting and storing the ambient samples in stainless steel canisters, has
been found to be 4.5% (9).
Because of the number and variety of organic compounds included in the
NMOC measurement, determination of absolute accuracy is not practical. Based
on comparison with manual GC speciation analysis--regarded as the best avail-
able for measurement of organic compounds--the proportional bias was determined
to be +5.7%, with a negligible fixed bias (9). Although the 5.7% bias was statisti-
cally significant, no correction factor is proposed for the method because this
bias is modest, and the speciation technique is not an absolute standard.
Experimental tests indicate some degree of FID baseline shift from water
vapor in ambient samples, which could result in positive bias, variability, or
both. These problems can be adequately minimized by careful selection of the
integration termination point and appropriate baseline corrections (9).
106
-------
REFERENCES
1. Uses, Limitations and Technical Basis of Procedures for Quantifying
Relationships Between Photochemical Oxidants and Precursors. EPA-
450/2-77-021a, U.S. Environmental Protection Agency, Research Triangle
Park, NC, November 1977.
2. Guidance for Collection of Ambient Non-Methane Organic Compound (NMOC)
Data for Use in 1982 Ozone SIP Development, and Network Design and
Siting Criteria for the NMOC and NO Monitors. EPA-450/4-80-011, U.S.
Environmental Protection Agency, Research Triangle Park, North Caro-
lina, June 1980.
3. Guidance for the Collection and Use of Ambient Hydrocarbon Species
Data in Development of Ozone Control Strategies. EPA-450/4-80-008,
U.S. Environmental Protection Agency, Research Triangle Park, North
Carolina, April 1980.
4. Richter, Harold G. Analysis of Organic Compound Data Gathered During
1980 in Northeast Corridor Cities. EPA-450/4-83-017, Environmental
Protection Agency, Research Triangle Park, NC, August 1983.
5. Sexton, F.W., F.F. McElroy, R.A. Michie, Jr., and V.L. Thompson. A Com-
parative Evaluation of Seven Automatic Ambient Non-Methane Organic
Compound Analyzers. EPA-600/S4-82-046, Environmental Monitoring
Systems Laboratory, U.S. Environmental Protection Agency, Research
Triangle Park, NC, August 1982.
6. Jayanty, R.K.M., A. Blackard, F.F. McElroy, and W.A. McClenny.
Laboratory Evaluation of Nonmethane Organic Carbon Determination
in Ambient Air by Cryogenic Preconcentration and Flame Ionization
Detection. EPA-600/S4-82-019, July 1982.
7. Cox, R.D., M.A. McDevitt, K.W. Lee, and G.K. Tannahill. 1982. Deter-
mination of Low Levels of Total Nonmethane Hydrocarbon Content in Ambi-
ent Air. Environ. Sci. Technol. 16, 57-61.
8. Determination of Atmospheric Nonmethane Organic Compounds (NMOC) by
Cryogenic Preconcentration and Direct Flame Ionization Detection.
Method description available from the Methods Standardization Branch
(MD-77), Quality Assurance Division, Environmental Monitoring Systems
Laboratory, U.S. Environmental Protection Agency, Research Triangle
Park, NC 27711, September 1983.
9. McElroy, F.F., V.L. Thompson, D. Holland, W.A. Lonneman, and R.L.
Seila. Cryogenic Preconcentration-Direct FID Method for Measurement
of Ambient NMOC: Refinement and Comparison with GC Speciation. Sub-
mitted for publication, February, 1984.
107
-------
Figure 1. Schematic of analysis system showing two sampling modes.
[Components labeled in the schematic: absolute pressure gauge, vacuum valve and sample valve, low pressure regulator, vacuum pump, 1.7 liter reservoir, sample metering valve, pressurized canister sampling line, canister valve, helium carrier, cryogenic sample trap packed with glass beads (liquid argon), hydrogen, air, and vent.]
108
-------
MOBILE AIR MONITORING BY MS/MS - A STUDY OF THE TAGA® 6000 SYSTEM
Bruce A. Thomson, John E. Fulford and William R. Davidson,
SCIEX®, 55 Glen Cameron Road, Unit 202, Thornhill, Ontario, Canada, L3T 1P2
In this paper we report on a series of experiments undertaken to evaluate a
mobile Tandem Quadrupole Mass Spectrometer System (the TAGA® 6000) in performing
qualitative and quantitative real-time air analysis. The work was a cooperative
venture between SCIEX® and the Environmental Monitoring Systems Laboratory,
Environmental Protection Agency, at Research Triangle Park. A mobile TAGA® 6000
located at RTP was used to analyze both synthetic gas mixtures and ambient air
in an industrial environment. The results of the tests were intended to indi-
cate to both SCIEX® and EPA what the current strengths and weaknesses of the
system are, to show what areas require further development in order to streng-
then the capabilities, and to suggest how such a system can best be used as part
of an overall approach to air monitoring.
OBJECTIVES
A series of controlled experiments was undertaken in order to evaluate a)
the ability to identify unknown organics at ppm and ppb levels in a mixture; b)
the ability to rapidly measure the concentrations in a mixture with sufficient
accuracy for a field program; c) the possible presence of matrix effects; d)
ease of operation of the system and data manipulation and interpretation
facilities and e) the reliability of the system in a mobile mode.
During the two week program the TAGA® 6000 was used to analyze seven pre-
pared gas cylinder mixtures. Two of the cylinders were mixtures of compounds
selected from a target list of 32, and were prepared and certified by an outside
supplier under the direction of EPA. The actual components and concentrations
in each were unknown to SCIEX®. Two of the cylinders were uncertified mixtures
also supplied to EPA, but with the components completely unknown to SCIEX® (i.e.
not necessarily selected from the target list). The other three cylinders were
two component mixtures with differing relative concentrations designed to show
whether matrix or interference effects were present.
109
-------
EXPERIMENTAL PROCEDURES
The challenge mixtures were analyzed using both an APCI (Atmospheric Pres-
sure Chemical Ionization) and a more conventional CI (with a discharge ioniza-
tion process) source. The APCI source is very sensitive to polar compounds;
the CI source, using charge transfer from N2+, O2+, and NO+, is sensi-
tive towards the chlorinated hydrocarbons and aromatics. Both sources are
designed to allow air to be sampled directly into the source (with no pre-separ-
ation or concentration) so that analysis is performed continuously in real-time.
The cylinder mixtures were admitted either directly into the sources or by
diluting with clean bottled air. Compounds were identified by first performing
scans with a single mass spectrometer (Ql) to reveal the parent ions from the
source. Each parent ion noted in the spectrum was then collisionally dissoci-
ated to produce a daughter ion spectrum and then compared (using computer-match
procedures) with standard CAD library spectra. Where library spectra existed,
identification was thus accomplished in a few seconds. Where no library spec-
trum existed, CAD spectra were manually interpreted to identify the compound.
Quantitation was performed using a headspace injection technique to produce a
five point calibration curve. This technique requires that the vapor pressure
of the compound be known, and takes about 5 minutes per compound to perform.
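The identification step just described can be pictured as a similarity search of the measured daughter-ion (CAD) spectrum against library spectra. The following is a minimal sketch in Python using a cosine-similarity score and a tiny hypothetical library; it is not the actual TAGA® 6000 matching software, whose algorithm is not described here.

```python
# Minimal sketch of matching a daughter-ion (CAD) spectrum against a library.
# The library entries, m/z values, and score are illustrative only; this is
# not the matching algorithm used by the TAGA 6000 data system.
import math

def cosine_match(spectrum, library):
    """Rank library compounds by cosine similarity to 'spectrum'.
    'spectrum' and each library entry are dicts of {m/z: intensity}."""
    def score(a, b):
        mz = set(a) | set(b)
        dot = sum(a.get(m, 0.0) * b.get(m, 0.0) for m in mz)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0
    return sorted(((score(spectrum, s), name) for name, s in library.items()),
                  reverse=True)

library = {                        # hypothetical two-entry CAD library
    "benzene": {77: 100, 51: 35, 50: 20},
    "toluene": {91: 100, 65: 30, 39: 15},
}
unknown = {91: 95, 65: 28, 39: 12}           # hypothetical unknown CAD spectrum
print(cosine_match(unknown, library))        # best match is listed first
```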
RESULTS AND DISCUSSION
The two certified cylinders were seven component mixtures; the components
in each were identical, but the levels were approximately twenty times lower in
one cylinder than the other. The components in each were correctly identified
by MS/MS, and the concentrations measured in a separate experiment. The other
two multicomponent mixtures consisted of 16 and 7 components respectively. In
total, among the four cylinders, 13 compounds were correctly and unambiguously
identified, 7 were identified as either one or both of a pair, 1 was incorrectly
identified and 2 were missed. Table 1 summarizes the results of the qualitative
experiments.
The compounds which were identified as either/or could not be resolved
because each pair of compounds forms the same parent ion in the source (for
example, methylene chloride forms (M-H)+ and chloroform forms (M-Cl)+, both
at m/z 83). Appearance potentials of these ions are such that it is difficult
110
-------
TABLE 1. SUMMARY OF QUALITATIVE CYLINDER ANALYSES

Compound Present in Mixture                 Identification by MS/MS
Benzene                                     Benzene
Toluene                                     Toluene
Chlorobenzene                               Chlorobenzene
Carbon Tetrachloride                        Carbon Tetrachloride
Trichloroethylene                           Trichloroethylene
Tetrachloroethylene                         Tetrachloroethylene
Acrylonitrile                               Acrylonitrile
Pyridine                                    Pyridine
Xylene                                      Xylene
Benzyl Chloride                             Benzyl Chloride
Dibromoethane                               Dibromoethane
Hexachlorobutadiene                         Hexachlorobutadiene
Vinyl Chloride                              Vinyl Chloride or Dichloroethene
Dichloroethane                              Dichloroethane or Vinyl Chloride
Methylene Chloride                          Methylene Chloride or Chloroform
Chloroform                                  Chloroform or Methylene Chloride
Vinylidene Chloride                         Vinylidene Chloride or Methylene Chloride
Methyl Chloroform                           Methyl Chloroform or Vinylidene Chloride
Ethylene Oxide                              Ethylene Oxide or Acetaldehyde
Dichloropropene (cis-1,3 and trans-1,3)     Allyl Chloride
Propane                                     Not Identified
Trichloro-trifluoroethane                   Not Identified
TABLE 2. SUMMARY OF QUANTITATIVE CYLINDER ANALYSES

                          Cylinder 9558                       Cylinder 11745
                      SCIEX      Manufacturer            SCIEX            Manufacturer
Compound              (ppm)      Certified (ppm)         (ppb)            Certified (ppb)
Vinyl Chloride        1.05       0.904                   Detected but     19.1
                                                         not quantitated
Benzene               1.68       1.28                    23               17.7
Toluene               1.03       0.894                   20               14.4
Chlorobenzene         1.07       0.862                   23               18.0
Carbon Tetrachloride  0.63       0.737                   17               19.8
Trichloroethylene     0.78       0.66                    23               16
Tetrachloroethylene   0.56       0.52                    20               15.7
111
-------
to generate characteristic molecular ions under any ionization condition. Since
identical parent ions are formed in the source, CAD spectra are also identical.
Some preseparation will likely be required in order to resolve these ambiguities
in identification. Evidence existed in the parent ion spectra for the presence
of propane and trichloro-trifluoroethane, but this was only observed after their
presence was known. Dichloropropene was mis-identified initially as allyl
chloride; later experiments revealed that these two compounds also form the
same ions in the source, and so cannot be resolved in real-time.
The results of the quantitative experiments are summarized in Table 2. The
average deviation from the manufacturer's certified values was 17% in the ppm
cylinder and 36% in the ppb cylinder. Both of these are based on the primary
calibration technique using introduction of headspace vapor. The ppb cylinder
was also calibrated by using the other cylinders as a standard; this procedure
gave better agreement, yielding an average deviation from the manufacturer of
only 13%. The matrix experiments, in which benzene and toluene were present in
mixtures at ratios of approximately 1:10, 1:1 and 10:1, revealed that each
component could be quantitatively measured in the presence of the other.
However with the APCI source, reagent ion depletion resulted in matrix effects
above a concentration of about 500 ppb. No matrix effects were noted with the
CI source at concentrations of up to 10 ppm.
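The quoted average deviations are means of the per-compound relative differences between the measured and certified values. The sketch below applies that calculation to the ppm-cylinder entries of Table 2; the exact averaging and rounding convention used in the study is not stated, so the result agrees only approximately with the 17% quoted above.

```python
# Sketch of the "average deviation from certified values" figure for the
# ppm cylinder, using the Table 2 entries; the exact averaging and rounding
# convention used in the study is not stated.
sciex     = [1.05, 1.68, 1.03, 1.07, 0.63, 0.78, 0.56]      # MS/MS result, ppm
certified = [0.904, 1.28, 0.894, 0.862, 0.737, 0.66, 0.52]  # certified, ppm

deviations = [abs(s - c) / c for s, c in zip(sciex, certified)]
print(f"average deviation: {100 * sum(deviations) / len(deviations):.0f}%")
```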
SUMMARY
The mobile MS/MS system proved capable of identifying unknown components in
a mixture at the ppm and ppb level in real-time, using computer library search
procedures. Some problems were encountered in resolving pairs of compounds
which form the same ion in the CI source; a fast preseparation technique will
likely be required where unambiguous identification is necessary. Quantitation
could be performed in-situ, with sufficient accuracy for a real-time field
monitoring program. More effort needs to be devoted to increasing the CAD
library size, and to characterizing single MS spectra of environmentally inter-
esting compounds so that their possible presence can be recognized in a mixture.
ACKNOWLEDGEMENTS
The cooperation and assistance of the Advanced Analysis Techniques Branch,
Environmental Monitoring Division of EPA, Research Triangle Park, North Carolina
is gratefully acknowledged.
112
-------
DEVELOPMENT OF SURFACE-ENHANCED RAMAN SPECTROSCOPY
FOR MONITORING TOXIC ORGANIC POLLUTANTS*
T. Vo-Dinh, P. D. Enlow, T. L. Ferrell,
T. A. Callcott, E. T. Arakawa, and J. P. Goudonnet
Health and Safety Research Division
Oak Ridge National Laboratory
Oak Ridge, TN 37831
ABSTRACT
Raman spectroscopy has proved its usefulness as a practical tool for
organic analysis.(1,2) One limitation of this spectroscopic technique,
however, is its low sensitivity due to the small Raman cross section, which
often requires the use of powerful and costly laser sources for excitation.
Recently a renewed interest has developed in Raman spectroscopy as a result
of observations indicating enhancement in the Raman scattering efficiency
by factors of 10^5 to 10^6 when a compound of interest is adsorbed onto special
metal surfaces.(3) These spectacular enhancement factors for the weak
conventional Raman scattering process help overcome the normally low
sensitivity of Raman spectroscopy. This new Raman technique, known as
surface-enhanced Raman spectroscopy (SERS), could open new horizons for
trace organic analysis.
This paper presents the analytical usefulness of a novel technique based
on SERS for monitoring toxic organic pollutants. A new method for preparing
SERS-active substrate using submicron silver-coated spheres deposited on filter
paper substrates is described in detail. The analytical advantages and
limitations of the technique are discussed. Figure 1 shows a typical
SERS signal of 3.6 ng benzoic acid on a SERS active cellulosic surface having
910 Å spheres coated with a 2000 Å film of silver. The detection limits for
several organic compounds such as carbazole, 1-aminopyrene, 1-nitropyrene, and
benzoic acid are at the nanogram and subnanogram levels.(4) The results
of this study indicate that SERS shows great promise as a useful analytical
tool for monitoring various important air pollutants, such as the nitro-
polyaromatic species, that cannot be detected by other spectroscopic techniques.
*
Research sponsored jointly by the Department of the Army under Inter-
agency Agreement Numbers DOE 40-1294-82 and ARMY 2211-1450, and the Office of
Health and Environmental Research, U.S. Department of Energy, under contract
DE-AC05-84OR21400 with Martin Marietta Energy Systems, Incorporated.
Present address: Université de Dijon, Faculté des Sciences, Dijon,
France.
113
-------
Figure 1. Surface-Enhanced Raman Signal of 3.6 ng of Benzoic Acid.
(Laser excitation wavelength = 632 nm; SERS intensity plotted against
wavenumber from 950 to 1070 cm-1.)
REFERENCES
1. Lord, R. C., 1977, Applied Spectroscopy. 31, p. 187.
2. Harvey, A. B., (Editor), 1981, Chemical Applications of Non-Linear Raman
Spectroscopy, Academic Press, New York.
3. Chang, R. K. and T. E. Furtak, (Editors), 1982, Surface-Enhanced Raman
Scattering, Plenum Press, New York, New York.
4. Vo-Dinh, T., M.Y.K. Hiromoto, G. M. Begun, and R. L. Moody, 1984, Surface-
Enhanced Raman Spectroscopy For Trace Organic Analysis. Analytical
Chemistry, in press.
114
-------
THERMAL DESORPTION TECHNIQUES FOR THE GAS
CHROMATOGRAPHIC ANALYSIS OF PARTICULATE MATTER
Stanley L. Kopczynski
U. S. Environmental Protection Agency
Research Triangle Park, N. C. 27711
INTRODUCTION
Simple screening techniques for polycyclic aromatic hydrocarbons (PAHs) can
facilitate the characterization of ambient air quality by providing a quick rou-
tine means of identifying those particulate samples which should be subjected
to a rigorous detailed compositional analysis. Lengthy and laborious solvent
extraction and fractionation procedures commonly employed in analyses for PAHs
may be circumvented by thermal desorption of PAH directly from particulate sam-
ples. Studies reported by other investigators indicate that PAHs can be effec-
tively extracted from particulate matter by sublimation at both atmospheric
pressure and at reduced pressure.^"') Vacuum-sublimation has been used to
reduce extraction time, improve extraction yields, and avoid thermal degrada-
tion.(1,4,5) In this study a vacuum-sublimation system was designed and con-
structed for direct analysis of volatilized PAHs with a capillary gas chromato-
graph. Early results obtained with test mixtures are reported.
EXPERIMENTAL
Analyses were performed with a Varian Model 3700 gas chromatograph (GC)
equipped with a Durabond, DB-5, fused silica capillary column (30m x 0.32mm x
0.1 µm film thickness) and an HNU photoionization detector (PID) (Model 52-02,
9.5 eV lamp). The sample injector of the GC was removed and the GC column was
extended from the oven through the injector heating block to a connecting union
on an external sample oven (Figure 1). The sample trap (borosilicate glass
tubing) was connected to the carrier gas line and the end of the capillary
column by means of stainless steel tube fittings using graphite or 40% gra-
phite/vespel ferrules. The exit end of the column emerged from the oven
through an auxiliary injector heating block and was inserted directly into the
115
-------
ionization chamber of the detector (Figure 2). The vacuum-sublimation system
is shown in Figure 3. The particulate sample is loaded into a borosilicate
glass tube (6.4 mm O.D. x 11 cm), which is connected to the sample trap (3.2
mm O.D. x 11 cm) by means of a tube fitting using 40% graphite/vespel fer-
rules. The tube fitting is bored through so that the arm of the trap can be
extended into the sample tube.
Sample tubes and sample traps were cleaned before use with a 20 min helium
purge at 260°C. PAHs were vacuum-sublimed for a period of 30 min at 260-
280°C and 0.04 to 0.05 Torr. The sample trap containing the extracted ma-
terial was transferred to the external oven of the GC and purged with carrier
gas (helium) at room temperature before analysis. The extracted material was
then volatilized at 260°C and concentrated on the cooled section of the
external GC column (28-30°C). After 10 min at 260°C the oven cover was
removed. After 2 min more, the external column was rapidly heated to 260°C
and the temperature programmed analysis was begun at a carrier flow rate of 6
ml/min. The column was held initially at 55°C for 10 min, then raised to
225°C at 10°C/min, and held at the final temperature for 30 min. Measure-
ments were made with a photoionization detector to minimize chromatographic
interferences from non-aromatic species co-desorbed with the PAHs.
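The oven program above fixes the chromatographic run time. The sketch below simply encodes the stated program (hold 55 °C for 10 min, ramp at 10 °C/min to 225 °C, hold 30 min) and reports the total program time; the function is illustrative and is not instrument-control code.

```python
# Sketch of the GC oven program described above (not instrument-control code):
# hold 55 C for 10 min, ramp at 10 C/min to 225 C, then hold 30 min.
def oven_program_minutes(start_c=55.0, hold_start=10.0,
                         ramp_c_per_min=10.0, final_c=225.0, hold_final=30.0):
    ramp_time = (final_c - start_c) / ramp_c_per_min   # 17 min for 55 -> 225 C
    return hold_start + ramp_time + hold_final

print(oven_program_minutes())   # 57.0 min total program time
```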
RESULTS AND DISCUSSION
Standard reference material (SRM) 1647 from the National Bureau of Stan-
dards (NBS) was used to test the transfer of PAHs from the sample trap to the
GC column. A 30 µl sample which had been diluted 10:1 was injected onto
the glass wool plug of the trap and evaporated to dryness at room temperature
with a stream of helium. All of the PAHs, ranging in volatility from acenaph-
thylene to benzo(g,h,i)perylene, were successfully transferred to the GC
column and eluted within 35 min.
A previously desorbed urban dust sample (SRM 1649 from the NBS) was
spiked with a 5-component PAH solution, dried, and then extracted by thermal
purging and by vacuum-sublimation. In both cases 4 of the components, phenan-
threne, fluoranthene, benz(a)anthracene, and benzo(a)pyrene were desorbed
with approximately 90% or greater efficiency. However, the least volatile
component, benzo(g,h,i) perylene was not desorbed sufficiently to be detected
(Figure 4).
116
-------
Vacuum sublimations conducted with virgin SRM 1649 produced chromatograms
dominated by a strong envelope of material peaking at an elution time of 28.0
min and containing a strong peak at 22.9 min (Figure 5a). The presence of
most PAH compounds was rather obscure although the test sample contained
25-70 ng of several PAHs. Weak peaks were found at elution times consistent
with those for phenanthrene and benz(a)anthracene. Fluoranthene plus pyrene
and benzo(a)pyrene were overlapped by the strong peaks at 22.9 and 28.0 min,
respectively. A repeat sublimation test with the spent SRM 1649 sample
indicated that the major portion of desorbable compounds detectable by the PID
can be extracted in a reasonable time (Figure 5b). However, the PAHs of
interest constitute only a minor portion of these compounds. Similar results
were obtained with a 60 min thermal purge of SRM 1649 at 280°C and 100 ml/
min, although the extraction was somewhat less efficient.
CONCLUSIONS
Although PAHs deposited from solution on borosilicate glass sample hol-
ders may be readily volatilized for analysis by capillary column gas chroma-
tography, urban dust samples are much more resistant to volatilization of
adsorbed PAHs. At low PAH concentrations and in the absence of a sufficient-
ly selective detector or a clean-up step the presence of desorbed PAHs is ob-
scured by other co-desorbed species. A multidimensional gas chromatograph
with a mass selective detector would offer improved selectivity for PAHs
thermally desorbed from urban particulate matter. A more selective PID (8.3
eV lamp) may also be helpful.
117
-------
REFERENCES
1. Ball, W.L., G.E. Moore, J.L. Monkman, and Morris Katz, 1962. An Evalua-
tion of Micro-Vacuum Sublimation Separation of Atmospheric Polycyclics.
American Industrial Hygiene Association Journal, 23, pp 222-227.
2. Burchfield, H.P., Ernest E. Green, Ralph J. Wheeler, and Stanley M.
Billedeau, 1974. Recent Advances in the Gas and Liquid Chromatography of
Fluorescent Compounds I. A Direct Gas-Phase Isolation and Injection System
for the Analysis of Polynuclear Arenes in Air Particulates by Gas-Liquid
Chromatography. Journal of Chromatography, 99, pp 697-708.
3. Monkman, J.L., L. Dubois, and C.J. Baker, 1970. The Rapid Measurement of
Polycyclic Hydrocarbons in Air by Microsublimation. Pure and Applied
Chemistry, 24, pp 731-738.
4. Schultz, Michael J., Robert M. Orheim, and Harley H. Bovee, 1973. Simpli-
fied Method for the Determination of Benzo(a)Pyrene in Ambient Air.
American Industrial Hygiene Association Journal, 34, pp 404-408.
5. Stenberg, Ulf R. and Thomas E. Alsberg, 1981. Vacuum Sublimation and
Solvent Extraction of Polycyclic Aromatic Compounds Adsorbed on Carbon-
aceous Materials. Analytical Chemistry, 53, pp 2067-2072.
6. Thomas, Jerome F., Eldon N. Sanborn, Mitsugi Mukai, and Bernard D. Tebbens,
1958. A Fractional Sublimation Technique for Separating Atmospheric
Pollutants. Analytical Chemistry, 30, pp 1954-1958.
7. Weschler, Charles J., 1983. Indoor-Outdoor Relationships for Selected
Organic Constituents of Aerosol Particles Collected at Wichita, Kansas
and Lubbock, Texas. 187th National Meeting. American Chemical Society,
Seattle, Washington.
118
-------
Figure 1. Sample Injection System.
(Components labeled in the original figure: aluminum external oven cover,
nichrome wire heater, adjustable faceplate, carrier gas line, thermometer,
adjusting nut, sample trap, aluminum sample oven, cold N2 gas line for column
focusing, heating tape wrapped over a copper tubing sheath, injector heating
block (250 °C), capillary column, gas chromatograph.)
119
-------
Figure 2. Detection System Configuration.
(Components labeled in the original figure: vent to hood, exhaust line, HNU
photoionization detector (230 °C), makeup gas, auxiliary injector heater block
(250 °C), capillary detector insert tube, capillary column, gas chromatograph.)
120
-------
Figure 3. Vacuum-Sublimation System.
(Components labeled in the original figure: sample tube with glass wool plugs,
sample tube supports and base, aluminum oven cover, nichrome wire heater,
sample trap, liquid argon Dewar flasks, vacuum gauge, vent, line to vacuum
pump.)
-------
Figure 4. Chromatogram of Vacuum-Sublimed Material from Spiked SRM 1649.
(Detector response plotted against time in minutes; the benz(a)anthracene
peak is labeled.)
122
-------
Figure 5. (a) Chromatogram of Vacuum-Sublimed Material from Virgin SRM 1649.
(b) Chromatogram of Vacuum-Sublimed Residual Material from SRM 1649.
(Detector response plotted against time, 5-40 min; labeled peaks include
phenanthrene (x16), benz(a)anthracene, unknown + pyrene + fluoranthene, and
unknown + benzo(a)pyrene.)
123
-------
A REVIEW OF MULTIVARIATE RECEPTOR MODELS
Philip K. Hopke
Institute for Environmental Studies
University of Illinois at Urbana-Champaign
The objective of receptor modeling is to deduce information regarding the
origins of observed concentrations of ambient species from those measured
concentrations. This approach is in contrast to source or dispersion modeling
that predicts the ambient concentrations from emission source data and the
meteorology of the region. In general, receptor modeling has focused on the
identification of and apportionment of aerosol mass to particulate emission
sources.
Several previous articles have reviewed the development and implementation
of these models up to several years ago (Cooper and Watson, 1980; Gordon, 1980).
Prior work has primarily been empirical explorations of the application of these
approaches to specific urban air quality problems. Recently, there have been the
beginnings of some more fundamental studies examining the basic mathematical
methods being used and the ways in which the limits of these methods can be
rigorously defined.
The fundamental assumption of receptor models has been that a mass balance
can be applied; that is, the amount of any particular observed species is a sum
of contributions from the independent sources of that species. For example, the
total lead concentration observed in a parcel of air can be considered to be the
sum of the lead in that parcel from motor vehicles burning leaded gasoline plus
smelter emissions plus refuse incinerators, etc.
Pb_t = Pb_motor + Pb_refuse + Pb_smelter + ...

Total aerosol mass would represent a similar mass balance. However, the
airborne lead is not the only species in motor-vehicle-generated particles.
Thus, Pb_motor can be considered to be the product of the concentration of lead
in the particles, a_Pb,motor, times the mass of motor vehicle particles in the
sampled air parcel, f_motor:

Pb_motor = a_Pb,motor x f_motor
Generalizing this approach to multiple variables measured in multiple samples
yields

x_ij = SUM(k=1 to p) a_ik f_kj,    i = 1, ..., m;  j = 1, ..., n

where x_ij is the amount of the ith species measured in the jth sample, a_ik is
the concentration of species i in material from source k at the receptor site,
and f_kj is the amount of mass contributed to sample j by source k.
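Written out numerically, this mass balance is just the product of a source-profile matrix and a source-contribution matrix. The following is a minimal sketch in Python with made-up species, profiles, and contributions, intended only to illustrate the form x_ij = SUM_k a_ik f_kj; none of the numbers come from the paper.

```python
# Numerical illustration of the mass balance x_ij = sum_k a_ik * f_kj.
# Species, source profiles, and contributions are hypothetical.
import numpy as np

# a[i, k]: fraction of species i in material from source k at the receptor
a = np.array([[0.20, 0.01],    # Pb:  motor vehicle, soil
              [0.02, 0.30],    # Si
              [0.05, 0.05]])   # Fe
# f[k, j]: mass contributed by source k to sample j (ug/m3)
f = np.array([[10.0,  5.0],    # motor vehicle
              [20.0, 40.0]])   # soil

x = a @ f                      # x[i, j]: amount of species i in sample j
print(x)
```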
124
-------
There are two commonly used multivariate methods to solve for the desired
parameters: multiple linear regression and factor analysis. In the multiple
linear regression approach, commonly called the Chemical Mass Balance method, it
is assumed that the number of sources, p, and their compositions, the a_ik's, are
known. The mass contributions, the f_kj's, are then calculated as the regression
parameters. It is usually assumed that the compositions do not change from
source to receptor, and source-measured values are employed. Efforts have been
made to examine reactions of polynuclear aromatic hydrocarbons using first order
reaction kinetics (Duval and Friedlander, 1982). These methods have been widely
applied to the study of a variety of air quality problems with particularly good
results in the Portland Aerosol Characterization Study (Core et al., 1982) and
in Washington, D.C. (Kowalczyk et al., 1982).
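A minimal sketch of that regression step follows, reusing the hypothetical profiles from the previous sketch and plain (unweighted) least squares; the effective-variance weighting actually used in CMB applications is omitted for brevity.

```python
# Sketch of the Chemical Mass Balance step: with the source profiles taken
# as known, estimate the contributions f for one ambient sample by ordinary
# least squares (effective-variance weighting omitted).  Profiles and the
# "true" contributions are the hypothetical values used above.
import numpy as np

A = np.array([[0.20, 0.01],
              [0.02, 0.30],
              [0.05, 0.05]])
f_true = np.array([10.0, 20.0])
rng = np.random.default_rng(0)
x = A @ f_true + rng.normal(0.0, 0.05, size=3)   # one noisy ambient sample

f_hat, *_ = np.linalg.lstsq(A, x, rcond=None)
print(f_hat)   # close to [10, 20]
```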
The other approach is factor analysis (Hopke, 1981). In this method only
the ambient data are employed and the analysis is used to deduce the number of
identifiable sources, their composition, and the mass contributions. These
methods have been primarily applied to data from St. Louis, MO (Alpert and
Hopke, 1981; Liu et al., 1982; Severin et al., 1983) where results have appeared
to be quite good.
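In its simplest form, the factor-analytic route looks at an eigenanalysis of the species correlation matrix of the ambient data: the number of large eigenvalues suggests the number of resolvable source factors. The sketch below uses synthetic data generated from two underlying sources; a real analysis adds rotation and further diagnostics.

```python
# Sketch of the factor-analysis idea: the number of large eigenvalues of the
# species correlation matrix of ambient data suggests how many source factors
# are resolvable.  Data are synthetic, generated from two underlying sources.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200
f = rng.lognormal(mean=1.0, sigma=0.5, size=(2, n_samples))       # 2 sources
A = np.array([[0.20, 0.01], [0.02, 0.30], [0.05, 0.05], [0.10, 0.10]])
X = A @ f + rng.normal(0.0, 0.01, size=(4, n_samples))            # 4 species

eigvals = np.linalg.eigvalsh(np.corrcoef(X))[::-1]
print(eigvals)   # two eigenvalues dominate, consistent with two sources
```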
Both approaches have limitations that are only now being studied at a more
fundamental level. Inherent in the regression approach are problems in
correctly calculating the mass contributions when two or more of the sources
have similar compositions even when those compositions are perfectly known. In
real situations where there are fluctuating compositions measured with sampling
and analytical errors additional problems arise. In many cases in the
literature, the number of sources reported to be resolved accurately is
considerably overestimated and the results are much more uncertain than they are
reported to be (Henry, 1983).
A similar problem exists for the factor analysis in that it also will not be
able to separately identify sources of similar composition. However, it can be
used to find appropriate linear combinations of the sources that can be
accurately fit. Furthermore, factor analysis is driven by variations in the
system that can come from both varying source emission rates and from
meteorology. If the latter dominates, two sources with quite different
composition may be found to be inseparable by factor analysis. However, factor
analysis is unlikely to overestimate the resolvable number of sources or to
overlook an unsuspected source. Thus, a combination of these methods is the
best current approach to obtain results upon which air quality management
decisions can be confidently made.
REFERENCES
Alpert, D.J. and P.K. Hopke, 1981. A Determination of the Sources of Airborne
Particles Collected During the Regional Air Pollution Study, Atmospheric
Environ. 15, pp675-687.
Cooper, J.A. and J.G. Watson, 1980. Receptor-Oriented Methods of Air Particulate
Source Apportionment, J. Air Pollut. Control Assoc. 30, pp1116-1125.
125
-------
Core, J.E., J.A. Cooper, P.L. Hanrahan, and W.M. Cox, 1982. Particulate
Dispersion Model Evaluation: A New Approach Using Receptor Models, J. Air
Pollut. Control Assoc. 32, pp1142-1147.
Duval, M.M. and S.K. Friedlander, 1982. Source Resolution of Polycyclic Aromatic
Hydrocarbons in the Los Angeles Atmosphere: Application of a Chemical Species
Balance Method with First Order Chemical Decay, U.S. Environmental Protection
Agency Report No. EPA-600/S2-81-161, January 1982.
Gordon, G.E., 1980. Receptor Models, Environ. Sci. Technol. 14, pp792-800.
Henry, R.C., 1983. Stability Analysis of Receptor Models that Use Least-Squares
Fitting, Receptor Models Applied to Contemporary Pollution Problems, S.L.
Dattner and P.K. Hopke, eds, Air Pollution Control Association, Pittsburgh, PA,
pp141-157.
Hopke, P.K., 1981. The Application of Factor Analysis to Urban Aerosol Source
Resolution, Atmospheric Aerosol: Source/Air Quality Relationships. E.S. Macias
and P.K. Hopke, eds, American Chemical Society, Washington, D.C., pp21-49.
Kowalczyk, G.S., G.E. Gordon, and S.W. Rheingrover, 1982. Identification of
Atmospheric Particulate Sources in Washington, D.C. Using Chemical Element
Balances, Environ. Sci. Technol. 16, pp79-90.
Liu, C.K., B.A. Roscoe, K.G. Severin, and P.K. Hopke, 1982. The Application of
Factor Analysis to Source Apportionment of Aerosol Mass, Am. Ind. Hyg. Assoc. J.
43, pp314-318.
Severin, K.G., B.A. Roscoe, and P.K. Hopke, 1983. The Use of Factor Analysis in
Source Determination of Particulate Emissions, Particulate Sci. Technol. 1,
pp183-192.
126
-------
A METHOD TO SPECIFY MEASUREMENTS
FOR RECEPTOR MODELS
by John G. Watson and Norman F. Robinson
Desert Research Institute, University of Nevada, Reno, NV 89506
INTRODUCTION
Numerous chemical measurements have become available in recent years
which can be used in receptor models to differentiate between and to
quantify the contributions of source emissions to ambient pollutant con-
centrations. Instrumental neutron activation analysis, x-ray
fluorescence, ion chromatography, and step-wise thermal combustion have
been used individually and in combinations to supply chemical con-
centration input data to receptor models for a large number of samples.
X-ray diffraction, computer automated microscopy, mass spectrometry,
electron capture and flame ionization gas chromatography, and isotopic
enrichment analysis have recently been proposed as analytical methods
which would provide more characterization of pollutant sources. Since
these analytical methods can be very expensive, and since most receptor
model studies are performed on a fixed budget, some objective procedure
of selecting the observables to be used in the models, and the measure-
ment methods required to obtain values for those observables, needs to
be applied at the study design stage. This paper proposes such a proce-
dure. The paper's objectives are to 1) describe a receptor model
measurement specification methodology, 2) illustrate the methodology,
and 3) provide measurement uncertainty specifications for a specific
application of the mass balance receptor model.
MEASUREMENT DEFINITION
A measurement possesses the following attributes (Watson et al.,
1983): 1) an observable specification, 2) a value, 3) a lower quan-
tifiable limit, 4) precision, 5) accuracy, and 6) validity. The model
imposes certain requirements on the tolerances assigned to each of these
attributes. Measurement methods must then be selected such that they
meet or exceed these tolerances. The most important questions regarding
measurements used in receptor models are:
• Does a proposed observable differentiate between sources?
• How many observables or measurements of the same observables
are required?
• How low must each detection limit be?
• What precision must each measurement have?
127
-------
MEASUREMENT SPECIFICATION PROCEDURE
The following steps can be followed to provide a quantitative answer
to these questions.
1. List all p likely sources and expected contributions.
2. Obtain likely emissions compositions for all quantifiable spe-
cies.
3. Set the true contributions (S_j) and true compositions (a_ij) at
expected values.
4. Generate n true concentrations using a model which physically
represents the situation under study. The linear model is

   C_i = SUM(j=1 to p) a_ij S_j,    i = 1, ..., n          (1)
5. Simulate the measurement process to obtain measured values (C_ik
and a_ijk), k = 1 to m times (Watson, 1979), using random numbers
(e_ik and e_ijk) drawn from a normal distribution of mean zero
and unity standard deviation, and measurement uncertainties σ_Ci
and σ_aij:

   C_ik = C_i + e_ik σ_Ci
   a_ijk = a_ij + e_ijk σ_aij
6. Apply the receptor model to answer the questions.
• To determine whether or not a new observable differen-
tiates between sources, apply the model to simulated data
sets with and without the observable.
• To determine the number of different observables or
measurements, apply the model to selected subsets of the
simulated data.
• To determine required detection limits (D_i), vary the ratio
of C_i/D_i.
• To determine required precisions, vary σ_Ci and σ_aij.
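The procedure above amounts to a Monte Carlo exercise: perturb the true concentrations and compositions with the assumed measurement uncertainties, re-solve the receptor model many times, and examine the spread of the recovered contributions. The sketch below is a minimal version of steps 3 through 6 with hypothetical profiles, and with plain least squares standing in for the effective variance solution.

```python
# Sketch of steps 3-6: perturb the true concentrations and compositions with
# the assumed measurement uncertainties, re-solve for the contributions many
# times, and examine the spread.  Profiles and contributions are hypothetical;
# plain least squares stands in for the effective variance solution.
import numpy as np

rng = np.random.default_rng(1)
a_true = np.array([[0.20, 0.01], [0.02, 0.30], [0.05, 0.05], [0.10, 0.10]])
S_true = np.array([10.0, 20.0])          # true source contributions, ug/m3
C_true = a_true @ S_true                 # true species concentrations
rel_unc = 0.10                           # assumed 10% relative uncertainty

estimates = []
for _ in range(500):                     # k = 1 ... m simulated measurements
    C = C_true * (1 + rel_unc * rng.standard_normal(C_true.shape))
    a = a_true * (1 + rel_unc * rng.standard_normal(a_true.shape))
    S_hat, *_ = np.linalg.lstsq(a, C, rcond=None)
    estimates.append(S_hat)

estimates = np.array(estimates)
print("average S:", estimates.mean(axis=0))
print("std dev S:", estimates.std(axis=0))
```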
-------
APPLICATION
To illustrate this methodology, it is applied to the mass balance
receptor model using the effective variance least squares solution
(Watson et al., 1984). A desert environment is likely to receive
ambient particulate contributions from secondary nitrate, secondary
sulfate, soil, burning, and motor vehicle sources. Typical compositions
of these sources have been drawn from Watson (1979). The chemical spe-
cies included are organic carbon, elemental carbon, NH4+, NO3-, SO4=,
Al, Si, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Br, and Pb. In this
application of the procedure, the measurement precision requirements are
evaluated, so σ_Ci and σ_aij are set to 10%, 20%, and 30% of their respec-
tive values.
The results of this application appear in Table 1. These results
indicate that measurement methods must have precision less than 20% in
order to assure model calculations which are within a factor of two of
reality and to have the majority of the calculated source contributions
fall within one calculated standard deviation of the true contributions.
More detailed observations about Table 1 are presented in Watson et al.
(1984).
FUTURE STUDIES
This method can be applied to factor analysis and linear regression
receptor models in order to determine the value of measurement methods
to the modeling process. Future research plans include these applica-
tions along with creation of a model/measurement evaluation computer
package which can be used to design future receptor modeling studies in
an optimal manner.
REFERENCES
1. Watson, J.G., 1979. "Chemical Element Balance Receptor Model
Methodology for Assessing the Sources of Fine and Total Suspended
Particulate Matter in Portland, OR," Ph.D. Dissertation, Oregon
Graduate Center, Beaverton, OR.
2. Watson, J.G., P.J. Lioy, and P.K. Mueller, 1983. "The Measurement
Process: Precision, Accuracy, and Validity" in Air Sampling
Instruments for Evaluation of Atmospheric Contaminants, 6th Edition,
American Conference of Governmental Industrial Hygienists,
Cincinnati, OH.
3. Watson, J.G., N.F. Robinson, A.P. Waggoner, R.E. Weiss, and J.
Trijonis, 1984a. "Error Analysis of Mass Balance and Particle
Scattering Budget for RESOLVE," Desert Research Institute Document
6660.1D1, Reno, Nevada.
4. Watson, J.G., J.A. Cooper, and J.J. Huntzicker, 1984b. "The
Effective Variance Weighting for Least Squares Calculations Applied
to the Mass Balance Receptor Model," accepted by
129
-------
TABLE 1. Averages, Standard Deviations, and Ranges of Source Contributions
and Their Uncertainties as a Function of Uncertainty Level.
(Units are µg/m³)

Source types: 1. NH4NO3, 2. (NH4)2SO4, 3. Soil, 4. Burning, 5. Motor Vehicle.
(For each source type the original table lists the average, standard
deviation, and range of the calculated contribution S_j, and the average,
standard deviation, and range of its calculated uncertainty σ_Sj, at the
10%, 20%, and 30% measurement uncertainty levels.)
-------
THE APPLICATION OF SIMCA PATTERN RECOGNITION
TO COMPLEX CHEMICAL DATA
W. J. Dunn III and Michael Koehler
Department of Medicinal Chemistry
The University of Illinois at Chicago
833 South Wood Street
Chicago, Illinois 60680
Svante Wold
Research Group for Chemometrics
Umeå University
S-901 87 Umeå, Sweden
Introduction
A number of analytical techniques are used in air quality
monitoring depending on the nature of the agents being monitored.
For low molecular weight volatile organic chemicals such as
hydrocarbons, aliphatic halides, etc., the method of choice is
gas chromatography/mass spectroscopy (GC/MS). Figure 1 is an
illustration of an output from GC/MS analysis of a complex
mixture. The output consists of two parts, the gas chromatogram
and the mass spectrum. The GC data are 2-dimensional in
concentration vs. retention time. The mass spectrum is 3-
dimensional with ion intensity as a function of retention time
and mass (m/e). The GC data contain information regarding the
number of components in the sample and their concentration. The
mass spectral data, if in terms of relative ion intensities,
contains information which can be used to identify the chemical
species present. By applying methods of classification or
pattern recognition to such data it is possible to classify and
identify the components present. It is this aspect of the data
analytic problem that will be discussed in this report.
Figure 1. Output from a GC/MS analysis of a complex mixture.
131
-------
Historical
Before a discussion of the specific methods of data analysis
is presented, it is worthwhile to review the levels of
information which can be obtained from the application of
classification methods to GC/MS data (Albano, et al., 1978).
There are two major objectives of such an analysis: 1)
classification and 2) quantification. With these two objectives
in mind, three levels of information can be obtained from the
data analytic method. At the first or lowest level,
classification is into one or the other of a group of well
defined classes.
At the next level it would be possible, considering the
example above, to classify a compound into one or the other of
the defined classes, with the possibility that the sample may be
neither. At the highest level of classification, it is the
objective to quantify the amount of a classified compound. This
level includes, in addition to a classification step, a
calibration step.
A number of pattern recognition methods are available. These
methods have differing potential with regard to the above
mentioned levels of classification, so it is necessary to know in
advance the desired level of classification. The methods
commonly used are:
1. the hyperplane or class discrimination methods such as
the linear learning machine (LLM) and linear discriminant
analysis (LDA),
2. distance based methods such as the k-nearest neighbor
(KNN) and
3. the class modeling methods such as SIMCA.
All methods of pattern recognition operate at the first
level. However, LLM and LDA operate only at this level. KNN and
SIMCA both operate at the second level while SIMCA can operate at
level three.
Theory of SIMCA Pattern Recognition
The objective of methods of classification is to identify
objects, in this case low molecular weight volatile organics.
For the purpose of this report, the identification will be based
on information obtained in the mass spectra of the compounds.
From information obtained on compounds similar to those whose
identity is to be determined, rules are developed which allow the
unknowns to be classified. The compounds of known class
assignment are the training sets while those compounds of unknown
classification are the test set compounds.
In order to apply mathematical methods to the data in the
132
-------
analysis, the mass spectrum for each compound is represented as an
object vector as in Figure 2. The elements of the vector are ion
intensities at each mass in the interval observed for the
classes. The object vectors are tabulated in matrix form in which
the elements of the matrix are ion intensities I_ki, with k the
compound index and i the mass index.
x = (I_1, I_2, ..., I_p)

Figure 2. Object vector and matrix employed in SIMCA analysis.
(Each compound's spectrum is an object vector of ion intensities at masses
20, 21, 22, ..., p; the object vectors of the class 1, class 2, and unknown
compounds form the rows 1 to n of the data matrix, with elements I_ki.)
If the compounds in the classes (training sets) are similar,
the data for each class can be modeled by a principal components
model in few terms (Wold, 1976). This is shown in equation 1,

   I_ki = Ī_i + SUM(a=1 to A) b_ia t_ak + e_ki          (1)

where Ī_i is the mean of ion i, A is the number of product terms
or principal components in the model, and e_ki is the residual of
the observed and predicted ion intensities. The product terms,
which model the systematic variation in the data, are composed of
the loading term, b_ia, and the principal component, t_ak. For A=0
the class is represented by a point in space (class members are
identical); for A=1 the class data structure is approximated by a
line and for A >= 2 the class is approximated by a plane or
hyperplane.
Classification of the unknowns is based on information in the
residuals, e_ki, for the test compounds. From the fit of the
training set data to their respective class models, a residual
133
-------
standard deviation (RSD) for each object and for each class can
be calculated. From the fit of the unknowns to the class models,
a classification result can then be obtained. Since the RSD is
approximately F-distributed, a confidence interval for the
classification result can be established.
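A minimal sketch of this class-modeling step follows, using synthetic spectra: fit an A-component principal components model to a training class, compute the class RSD, and compare an unknown's residual against it. Scaling, cross-validation of A, and the formal F-test limits are omitted.

```python
# Sketch of SIMCA-style class modeling: fit an A-component principal
# components model to a training class, compute the class residual standard
# deviation (RSD), and compare an unknown's residual to it.  Spectra are
# synthetic; scaling, cross-validation of A, and F-test limits are omitted.
import numpy as np

def fit_pc_model(X, A):
    """X: (n_objects, n_masses).  Returns the class mean and A loading vectors."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:A]

def residual_rms(x, mean, loadings):
    t = loadings @ (x - mean)              # scores (principal components)
    e = (x - mean) - loadings.T @ t        # residual after the PC model
    return np.sqrt(np.mean(e ** 2))

rng = np.random.default_rng(2)
scores = rng.normal(0.0, 1.0, (9, 2))                  # 9 training spectra
basis = rng.normal(0.0, 1.0, (2, 16))                  # lying near a plane
train = 5.0 + scores @ basis + rng.normal(0.0, 0.05, (9, 16))

mean, load = fit_pc_model(train, A=2)
class_rsd = np.mean([residual_rms(x, mean, load) for x in train])
unknown = rng.normal(5.0, 1.0, 16)                     # hypothetical unknown
print(residual_rms(unknown, mean, load) / class_rsd)   # >> 1 flags an outlier
```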
A geometric interpretation of the SIMCA classification result
is shown in Figure 3. Here the result is shown in 3-dimensions
for convenience while in reality the data space is much higher in
dimensionality.
Figure 3. A 3-dimensional representation of a SIMCA classification.
(Axes are m/e intensity dimensions; the class 1 model is shown as a line
with its RSD envelope.)
In this example, the classes are represented by a line. The
projection of the objects onto the line gives the relative
position of each compound in the class structure. This is
important if classification is desired at the third level. The
RSD about the line defines a volume element in space and
classification is based on where the unknown falls with respect
to the defined classes.
The class modeling philosophy has a number of advantages when
considered in terms of the levels of pattern recognition as
discussed earlier. If the possibility exists that a compound is
a member of none of the defined classes, it will be observed as an
outlier to the defined classes. This result is not possible with
the methods of LLM or LDA. Another advantage of this approach is
that if other information, such as health effects data, is
available for the members of the defined classes, correlation
methods can be used to relate levels of these effects to chemical
structure.
134
-------
Pretreatment of Data
In order to enhance the various types of information in mass
spectral data, the spectral data can be scaled or transformed.
This is especially critical with mass spectral data since in
normalized form it may not be appropriate for classification
purposes. Mass spectra are interpreted by attempting to identify
fragmentation patterns within the spectrum of a compound. These
patterns result from the loss of a common fragment(s) to give
sequences of peaks that are related or correlated. This is
illustrated in Figure 4 in which the ion of mass i results from the
fragmentation of the ion i+j by loss of fragment j.
Figure 4. Example of fragmentation patterns within a mass spectrum.
The molecular weight of compounds which contain a common
functional group will vary depending on the masses of the
residual molecular fragments attached to the functional group.
The loss of a common fragment j within the series can occur in
each spectrum to varying extents, but will be shifted to
different masses i. Classification should be based on the
relative extent of this fragmentation and this information can be
amplified by applying the autocorrelation transform (Wold, et
al., 1984) to the normalized mass spectrum. The autocorrelation
transform is given in equation 2 (Box, Hunter and Hunter, 1978).

   r_j = SUM(i) (I_i - Ī)(I_(i+j) - Ī) / SUM(i) (I_i - Ī)^2          (2)
135
-------
I_i and I_(i+j) are the intensities of the respective ions i and i+j,
and Ī is the spectral mean. r_j is called the autocorrelation
coefficient for lag j. Its range is -1 <= r_j <= 1, and it measures
the correlation between ions in the MS which result from loss of the
common fragment j.
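A minimal sketch of the transform in equation 2 follows, applied to a made-up normalized spectrum laid out on a contiguous mass grid so that lag j corresponds to ions j mass units apart.

```python
# Sketch of the autocorrelation transform of equation 2:
#   r_j = sum_i (I_i - Ibar)(I_{i+j} - Ibar) / sum_i (I_i - Ibar)^2
# The spectrum below is made up; intensities sit on a contiguous mass grid so
# that lag j means "ions j mass units apart".
def autocorrelation(intensities, max_lag):
    n = len(intensities)
    mean = sum(intensities) / n
    dev = [x - mean for x in intensities]
    denom = sum(d * d for d in dev)
    return [sum(dev[i] * dev[i + j] for i in range(n - j)) / denom
            for j in range(1, max_lag + 1)]

spectrum = [0, 5, 0, 1, 40, 2, 100, 3, 0, 12, 0, 4, 35, 1, 0, 0]
print(autocorrelation(spectrum, max_lag=4))
```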
The MS of 1,2-dichlorobutane and its autocorrelation
transform are given in Figure 5. Very pronounced
autocorrelations are observed between ions 2, 36 and 38 mass
units apart. These result from loss of fragments which contain
the isotopes 35Cl and 37Cl and from the loss of HCl containing
these isotopes.
Figure 5. MS of 1,2-dichlorobutane (top) and its autocorrelation
transform (bottom).
Application of SIMCA to MS Data
In order to illustrate the utility of SIMCA to classification
of volatile organic compounds, the MS of 9 dichlorobutanes and 8
chloromethyl butenes were obtained from the Finnigan catalogue of
mass spectra. These two classes of halogenated compounds were
used as training sets. The data matrix consisted of relative ion
intensities in the interval 39 to 118 m/e with each spectrum
consisting of the 16 most intense ions.
Each spectrum was transformed to the autocorrelation spectrum
and SIMCA was applied to both types of data for the purpose of
comparison. The data for each class were modeled by a 2-
component (A=2 in equation 1) model. This is equivalent to
fitting the data to a plane. This is somewhat arbitrary but is
done in this case so as to give the analyst a view of the structure
of the classes. By generating principal components plots of the
data, the approximate structure of the data can be observed.
136
-------
Figure 6 is such a plot of the classes when the MS data are
modeled. When compared to the results of applying SIMCA to the
autocorrelation transformed spectra (Figure 7) it appears that
the two classes overlap with very little structure in the
dichlorobutanes from either treatment. A more revealing method
of displaying the results is the Cooman's plot (Wold, et al.,
1984). This plot is obtained by fitting all of the compounds to
the class 1 and class 2 models, respectively, and calculating the
distance of each compound to each class model. When these class
distances are plotted from the analysis of the MS data (Figure 8)
and the autocorrelation transformed data (Figure 9) it is seen
that the latter result in a much better description of the
chloromethyl butenes while the dichlorobutanes are not well
described by either method. It is possible, then, to classify
the class 2 compounds with a high level of certainty, and the
classification results from the autocorrelation data are much
better.
Figure 6. Principal components plot of the classes when the MS
data are modeled. (• di-Cl butanes; Δ Cl butenes.)
137
-------
Figure 7. Principal components plot of the classes when the autocorrelation
transformed MS data are modeled.

Figure 8. Cooman's plot of the SIMCA analyzed MS data (distance to class 1
plotted against distance to class 2).
-------
Figure 9. Cooman's plot of the SIMCA analyzed autocorrelation
transformed MS data. (Distance to class 1 plotted against distance to
class 2; • di-Cl butanes; Δ Cl butenes.)
Acknowledgements.
The authors wish to acknowledge discussions with Dr. Donald Scott
of the EPA Environmental Monitoring and Surveillance Laboratory
regarding the treatment of mass spectral data.
References.
1. Albano, C., W. J. Dunn III, U. Edlund, B. Norden, M. Sjostrom and
S. Wold, 1978. Four Levels of Pattern Recognition. Anal. Chim. Acta,
103, 429-443.
2. Box, G. E. P., W. G. Hunter and J. S. Hunter, 1978. Statistics for
Experimenters, Wiley and Sons, New York.
3. Wold, S., 1976. Pattern Recognition by Disjoint Principal
Components Models. Pattern Recognition, 8, 127-139.
4. Wold, S., C. Albano, W. J. Dunn III, U. Edlund, K. Esbensen,
P. Geladi, S. Hellberg, W. Lindberg and M. Sjostrom, 1984.
Multivariate Data Analysis in Chemistry, Proceedings of
NATO ASI on Chemometrics, B. R. Kowalski, Ed., Reidel
Publishing Co., Dordrecht, Holland.
139
-------
DESCRIPTION OF A CONTINUOUS
SULFURIC ACID/SULFATE MONITOR
George A. Allen, William A. Turner, Jack M. Wolfson, John D. Spengler
Department of Environmental Science & Physiology
Harvard School of Public Health
665 Huntington Avenue
Boston, Massachusetts 02115
ABSTRACT
A flame photometric/thermal speciation system for continuous measurement
of ambient total sulfate, sulfuric acid, and two other sulfate fractions is
described. The instrumentation is suitable for long-term ground-based installa-
tions, and has a limit of detection for sulfate as sulfuric acid of 2 µg/m³
for an integrated sample period of one hour.
An example of episodic ambient data from the system is presented. These
data are compared to water soluble sulfate data from a co-located dichotomous
sampler, and particle scattering extinction coefficient (b_sp) data from an inte-
grating nephelometer.
Limitations of the thermal speciation technique with regard to the measure-
ment of total strong acidity of sulfates are discussed.
Presented at the Fourth Annual National Symposium on Recent Advances in Pollutant
Monitoring of Ambient Air and Stationary Sources in Raleigh, North Carolina,
May 1984.
140
-------
I. Introduction
Acid Aerosols
There is growing evidence that atmospheric aerosols in the lower troposphere at times can
be acidic. Unfortunately, the measurements have been neither extensive nor systematic. The
temporal variation and the geographic extent of acid aerosol events have not been documented.
While nitrous, hydrochloric, or organic gases can lead to acidified particles by absorption or con-
densation, sulfur gases are believed to be the dominant source of acid species in the atmosphere.
Health Evidence
Some studies of human populations have linked sulfur dioxide and ambient particulate sul-
fates to increased respiratory diseases, but have been unable to identify the specific pollutants
responsible. It remains to be established which species of sulfates are physiochemically important,
since they can occur within a variety of metal cations as well as with the more common
ammonium and hydrogen ions. As has been illustrated in a recent review of the toxic effects on
pulmonary macrophages, these sulfate species have widely varying effects. Of the variety of sul-
fate species that exist in the atmosphere, the strong acid sulfates (ammonium bisulfate and sul-
furic acid) residing in the submicron size range are more likely to induce responses in the human
respiratory tract. Evidence exists for alterations in epithelial secretory cells in the lower bronchial
airways in rabbits exposed to 250 µg/m³ sulfuric acid after relatively short duration exposures
(8h/d, 5d/w, 1m). In his work, Lippman points out that these studies provide further support for
the role of acid sulfate species in the pathogenesis of chronic bronchitis via effects on the mucocili-
ary clearance system.
As a component of Harvard's Air Pollution Health Study, we have developed an operational
monitoring system for real-time sulfate/sulfuric acid aerosol measurements. This system will be
deployed in each of the six cities participating in the Harvard Air Pollution Health Study in order
to more fully characterize the nature of sulfate particulate exposure. Peak, hourly, and daily con-
centrations of sulfate particle and sulfuric acid fractions will be incorporated into our aerometric
database for the analysis of daily pulmonary function data, or to more fully understand the simi-
larities or differences among our cities.
As we assemble additional units, we will characterize the nature of atmospheric aerosol aci-
dity throughout the year in each city. A plan being considered involves the use of the continuous
sulfate monitor to trigger a dichotomous particle sampler during 'episodic' conditions. The parti-
culate filter samples would be used to characterize the elemental composition (by XRF) or ion
composition of aerosols, including total strong acid (H+).
Instrument Description
Based on work done by Dr. Roger Tanner at Brookhaven National Laboratories and earlier
cooperative work with Dr. Rudy Husar and Geoff Cobourn at Washington University, we at HSPH
have developed a low temperature volatilization flame photometric detection (FPD) method for
continuous measurement of ambient sulfate aerosol suitable for long-term ground based field
operations. This method allows for some discrimination of the acidic component of the aerosol
due to the higher rate of volatilization of the H2SO4 species at lower temperature, as well as those
sulfates that are stable at 300 °C (sulfates of calcium, sodium, lead, zinc, iron, and copper).
The FPD method has been used routinely to measure total gaseous sulfur in ambient air by
excluding particles with a membrane filter. The sulfur is detected in a hydrogen flame which
results in an optical emission (fluorescence) from the electronically excited sulfur dimer (S2*). To
determine particulate sulfur, a lead oxide (PbO) coated tube is used as a diffusion denuder to
remove gaseous sulfur species. Under appropriate conditions of laminar flow, the particles, with
much higher momentum, pass through the denuder tube, while the sulfur-containing gas
141
-------
molecules collide and react with the walls and remain deposited on them. SF6 doped hydrogen is
used to improve the FPD's sensitivity and stability and reduce interferences.
II. Thermal Analysis
Thermal analysis allows discrimination between the different sulfate species. With this tech-
nique, the ambient air is heated initially to about ~120°C. At this point most of the H2SO4
(sulfuric acid) is vaporized. The vaporized material is subsequently scrubbed by the denuder.
Consequently, if a decrease is observed in the FPD signal when the temperature is elevated to
120 ° C, the size of this decrease may be correlated with the amount of H2SO4 in the ambient air
sample. The next step is to heat the sample air to about 300 ° C. At this point the NH4HSO4
(ammonium bisulfate) and (NH4)2SO4 (ammonium sulfate) in the particles is completely vapor-
ized. Again, the amount of the sum of these two species can be correlated with the change in the
FPD signal. Any non-volatile sulfur particulate [e.g., Na2SO4 (sodium sulfate) from marine air
masses] which is present will give a residual FPD signal. Interference by any potential non-
denuded gaseous sulfur species and changes in the FPD's zero will be accounted for by determin-
ing the analyzer's baseline signal with particle-free air.
The cycle of measurements (12 min./cycle) is controlled by a timer that is synchronized
with the data acquisition system, and includes: 1) ambient temperature; 2) 120 ° C; 3) 300 ° C; 4)
Instrument Baseline. See Section V for a detailed example of the thermal analysis technique.
The temperature for the second part of the cycle (~120°C) is chosen to maximize the frac-
tion of H2SO4 that is volatilized without any significant loss of (NH4)2SO4. With the apparatus
we are currently using, about 5% of the H2SO4 remains unvolatilized at the temperature that
~1% of the (NH4)2SO4 is lost. The Limits of Detection for the system using SF6 doped hydrogen
are presented in Table I. The system's flow diagram can be found in Figure 1.
III. Limitations
It is important to note that this method generally underestimates the total amount of strong
acidity of ambient sulfate containing particles because both ammonium bisulfate (NH4HSO4) and
ammonium sulfate |(NH4)2SO4] vaporize at about the same temperature. The NH4HSO4 is almost
as strong an acid as H2SO4, whereas (NH4)2SO4 is a relatively weak acid.
If and when there are occasions in which there are aerosol particles that are only pure
H2SO4, and others that are only pure (NH4)2SO4, the TA-FPD method will give a good quantita-
tive determination of total strong acidity. In general, however, we expect that whenever any
H2SO4 is present in ambient air, there is also some NH4HSO4 present. Under these conditions the
TA-FPD method will underestimate the amount of total strong acidity. The overall situation is
even more complicated since: 1) individual aerosol particles generally have a mixture which
includes sulfate, ammonium, and hydrogen ions; and 2) different aerosol particles within the same
sample of ambient air may have varying amounts of each of these three ions.
IV. Calibration
The FPD method requires a calibration to determine the relation between the aerosol sulfate
concentration and the size of the emission signal as measured by the flame photometer. Dynami-
cally generated SO2/air mixtures between 2 and 15 ppb are used as the principal means of cali-
brating the flame photometer. A portable aerosol generator is used in situ to determine the
system's response to H2SO4 and (NH4)2SO4 at different temperatures. A diagram of the aerosol
generator can be found in Figure 2. In addition, we have semi-continuous calibration by measur-
ing the water soluble sulfate of simultaneous samples (4 to 24-hour collection period) from mem-
brane filters. This method will compare the time-integrated FPD signal for total sulfate with
chemically determined water-soluble sulfate data. Figures 3 and 4 show the relationship between
142
-------
the FPD sulfate and a dichotomous sampler and integrating nephelometer.
V. Explanation of Continuous Sulfate Data Reduction
Figure 5 is an example of the system's output during an episode in St. Louis, Missouri on
December 20, 1983, showing both the Meloy analyzer's output and the temperature of the sample
heater tube. Four values are taken from the particulate sulfate analyzer every 12 minute meas-
urement cycle. Point #1 represents the analyzer's output when the sample air has not been
heated. Point #2 is the output when the sample air is heated to ~120 °C to volatilize most of the
H2SO4. Point #3 is the output when the sample air is heated to 300 °C to volatilize H2SO4,
ammonium sulfate and ammonium bisulfate. Point #4 is the output when the sample air has
been filtered to remove all particles (instrument baseline).
The first step in reducing the data is to calculate sulfate concentrations for the first three
points, using Point #4 as the baseline. In this example, data are reduced as follows:
ppb SO2 = 3.06 x (net chart div)^0.929
The scaling factor of 3.06 and the exponent of 0.929 are derived from the most recent cali-
bration of the analyzer's net voltage output against multiple SO2 concentration inputs. The first
point is total sulfate in µg/m³. The third point is sulfate that does not volatilize at 300 °C
(sodium sulfate, etc.) in µg/m³. To calculate sulfate as sulfuric acid, subtract the sulfate reading
for Point #2 from Point #1 and multiply that result by 1.08 (a factor that corrects for the frac-
tion of sulfuric acid that is not volatilized at the mid-point temperature, determined by in-situ
testing for this specific unit). To calculate sulfate as ammonium sulfate plus ammonium bi-
sulfate (this system cannot distinguish between these two species of sulfate), subtract the non-
volatile sulfate and sulfate as sulfuric acid from the total sulfate.
For this example,
Point #1 = 52.0 µg/m³ Total Sulfate
Point #3 = 2.7 µg/m³ Non-Volatile Sulfate
Point #2 = 14.2 µg/m³ Sulfate, so:
Sulfate as sulfuric acid = (52.0 - 14.2) x 1.08 = 40.8 µg/m³
Sulfate as ammonium sulfate
plus ammonium bisulfate = 52.0 - 40.8 - 2.7 = 8.5 µg/m³
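The reduction of the four cycle points into speciated sulfate values is a small arithmetic routine; the sketch below reproduces the worked example, with the 1.08 correction taken from the text and the conversion from chart divisions to µg/m³ assumed to have been done already.

```python
# Sketch of the data reduction described above.  Inputs are the baseline-
# corrected sulfate readings (ug/m3) at the three heater settings; 1.08 is
# the unit-specific correction quoted in the text for H2SO4 not volatilized
# at the ~120 C step.  Conversion from chart divisions is assumed done already.
def speciate(total, after_120c, after_300c, h2so4_correction=1.08):
    sulfate_as_h2so4 = (total - after_120c) * h2so4_correction
    non_volatile = after_300c
    ammonium_sulfates = total - sulfate_as_h2so4 - non_volatile
    return sulfate_as_h2so4, ammonium_sulfates, non_volatile

# Worked example from the text (St. Louis, 20 December 1983):
print(speciate(total=52.0, after_120c=14.2, after_300c=2.7))
# -> roughly (40.8, 8.5, 2.7) ug/m3
```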
The signals from the sulfate analyzer are sampled by a data logger and stored on a cassette
tape. The tape and strip charts are changed after 7 days and returned to HSPH for validation,
processing, and Quality Assurance checks. Figure 6 is a plot of sulfate and particle scattering
extinction coefficient data from St. Louis for December 16 to December 23, 1983.
Acknowledgments
The cooperation and help of Dr. Roger Tanner (Brookhaven National Laboratories) contin-
ues to be invaluable. We are also indebted to Andrew English for our original prototype develop-
ment, which he began at HSPH in 1981; to Anthony Majahad, Stephen Bertolino, and Craig Nor-
berg of the HSPH staff for assistance in construction, development, and operation of this system;
to Steve Fick and John Chao for data processing efforts; and to Allison Maskell for typing and
editorial assistance. This work is funded by grants from the National Institute of Environmental
Health Sciences (ESP-1108), and the Electric Power Research Institute (RP-1001).
143
-------
BIBLIOGRAPHY
Acid Aerosols
1. R.L. Tanner, B.P. Leaderer, J.D. Spengler, "Acidity of atmospheric aerosols," Env. Sci. &
Tech. 15:1150 (October 1981).
2. J.W. Waldman, J.W. Munger, D.J. Jacob, R.C. Flagan, R.C. Morgan, M.R. Hoffman,
"Chemical composition of acid fog," Science 218:677 (November 1982).
3. P.J. Lioy, "Ambient measurement of acidic sulfate species in the U.S.," Presented at the
76th Annual Meeting of the Air Pollution Control Association, Paper No. 83-8.3, Atlanta,
Georgia, June 1983.
4. P.D.E. Biggins, R.M. Harrison, " Characterization and classification of atmospheric sul-
phates," J. Air Poll. Control Assoc. 29: 838 (1979).
Health Evidence
1. R. Ferek, "Review of atmospheric acidity measurements," Progress Report, Study of Health Effects of Exposures to Airborne Particles, Spengler and Ozkaynak, Harvard University, DOE HERAP contract.
2. J.G. French, G. Lowrimore, W.C. Nelson, J.F. Finklea, T. English, M. Hertz, "The effect of
sulfur dioxide and suspended sulfates on acute respiratory disease," Arch. Environ. Health.
27:129 (1973).
3. D. Levy, M. Gent, "Relationship between acute respiratory illness and air pollution levels in
an industrial city," World Health Organization (UNESCO) International Symposium on
Recent Advances in the Assessment of the Health Effects of Environmental Pollution,
Vol. Ill, Paris, France: Commission of the European Communities, Luxembourg, 1975.
4. "Health consequences of sulfur oxides: a report from CHESS, 1970-1971, U.S. Environmental
Protection Agency," EPA 650-74-004, U.S. Government Printing Office, Washington, DC,
1974.
5. A.A. Cohen, S. Bromberg, R.W. Buechley, L.T. Heider-Scheit, C.M. Shy, "Asthma and air
pollution from a coal-fueled power plant," Amer. J. Public Health 62:1181 (1972).
6. B.G. Ferris, I.T. Higgins, J.M. Peters, "Sulfur oxides and suspended particulates," Arch. Environ. Health 27:179 (1973).
7. U.S. House Committee on Science and Technology, "The Environmental Protection Agency's
research program with primary emphasis on the community health and environmental
surveillance system (CHESS): An investigative report," U.S. Government Printing Office,
Washington, DC, 1976.
8. M. Lippman, "Effects of airborne particles on physiological parameters," Study of Health
Effects of Exposures to Airborne Particles, Spengler and Ozkaynak, Harvard University
DOE, HERAP contract, January 1983.
9. R.B. Schlesinger, L.C. Chen, G. Leikauf, D. Spektor, "Alteration of lung defenses by acid
sulfates," Presented at the 76th Annual Meeting of the Air Pollution Control Association,
Paper No. 83-8.4, Atlanta, Georgia, 1983.
10. M. Lippman, "Health effects of atmospheric aerosols," Presented at the 76th Annual Meeting
of the Air Pollution Control Association, Paper No. 83-8.7, Atlanta, Georgia, 1983.
11. R.B. Schlesinger, M. Halpern, R.E. Albert, M. Lippman, "Effect of chronic inhalation of sul-
furic acid mist upon mucociliary clearance from the lungs of donkeys," J. Environ.
Pathol. Toxicol. 2:1351 (1979).
12. B.G. Ferris, Jr., F.E. Speizer, J.D. Spengler, D. Dockery, Y.M.M. Bishop, M. Wolfson, and C. Humble, "Effects of sulfur oxides and respirable particles on human health," Amer. Rev. of Resp. Disease, 120:767-779 (1979).
Instrumentation
1. S.S. Brody, J.E. Chaney, "Flame photometric detector: The application of a specific detector for phosphorus and for sulfur compounds sensitive to sub-nanogram quantities," J. Gas Chromatogr. 2:42 (1966).
2. D.C. Camp, R.K. Stevens, W.G. Cobourn, R.B. Husar, J.F. Collins, J.J. Huntzicker, J.M. Jaklevic, R.L. McKenzie, R.L. Tanner, J.W. Tesch, "Intercomparison of concentration results from fine particle sulfur monitors," Atm. Env. 16:911 (1982).
3. W.G. Cobourn, "In-situ measurements of sulfuric acid and sulfate aerosol in St. Louis,"
Ph.D. Thesis, Washington University, Sever Institute of Technology, St. Louis, Mo., 1979.
4. W.G. Cobourn, R.B. Husar, J.D. Husar, "Continuous in-situ monitoring of ambient particu-
late sulfur using flame photometry and thermal analysis," Atm. Env. 12:89 (1978).
5. P. Gormley, M. Kennedy, "Diffusion from a stream flowing through a cylindrical tube," Proc.
R. Ir. Acad. (52A), 1949.
6. J.J. Huntzicker, R.S. Hoffman, C. Ling, "Continuous measurement and speciation of sulfur-
containing aerosols by flame photometry," Atm. Env. 12:83 (1978).
7. D.B. Kittelson, R. McKenzie, M. Vermeersch, F. Dorman, D. Pui, M. Linne, B. Liu, K.
Whitby, "Total sulfur aerosol concentration with an electrostatically pulsed flame pho-
tometric detector system," Atm. Env. 12:105 (1978).
8. T. Sugiyama, S. Yoshihito, T. Takeuchi, "Characteristics of S2 emission intensity with a
flame photometric detector," J. Chromatogr. 77:309 (1973).
9. R.L. Tanner, P.H. Daum, T.J. Kelley, "New instrumentation for airborne acid rain
research," Environmental Chemistry Division, Brookhaven National Labs. BNL 31596
Presented at the 12th Annual Symposium on the Analytical Chemistry of Pollutants,
Amsterdam, The Netherlands, April 14-16, 1982.
10. R.L. Tanner, T. D'Ottavio, "Preparation of a gaseous sulfur denuder," In-house document, Brookhaven National Labs., Upton, N.Y.
11. R.L. Tanner, T. D'Ottavio, R. Garber, L. Newman, "Determination of ambient aerosol sulfur
using a continuous flame photometric detection system. I. Sampling system for aerosol
sulfate and sulfuric acid," Atm. Env. 14:121 (1980).
12. T. D'Ottavio, R. Garber, R.L. Tanner, and L. Newman, "Determination of ambient aerosol
sulfur using a continuous flame photometric detection system. II. The measurement of low
level sulfur concentrations under varying atmospheric conditions." Atm. Env. 15:197-203
(1981) .
TABLE I
HSPH Continuous Sulfate TA FPD System
Limits of Detection with SF6-Doped Hydrogen

                                          L.O.D. in µg/m³ by averaging period
SULFATE                                     1 Hour      4 Hours      24 Hours
Total                                          1           0.5          0.5
As sulfuric acid                               2           1            0.5
As ammonium sulfate plus bisulfate             3           1.5          0.7
That does not volatilize at 300°C              1           0.5          0.5

NOTES:
1) L.O.D. is defined as twice the short term peak to peak noise of the system.
2) A concentration of five times the L.O.D. is necessary to insure data precision of 10%.
FIGURE 1. Flow diagram for the HSPH continuous sulfate system (sample inlet at ~1.7 L/min with bypass flow, 5 µm filters, zero-air cycle through activated charcoal and silica gel, heater, NH3 permeation tube addition at ~18 ppm, Magnehelic gauge, and Meloy total sulfur analyzer).
-------
FIGURE 2. HSPH aerosol generator (modified "Lovelace" baffled-jet nebulizer with flow-controlled zero air, mixing tube, membrane filter holder, metered vacuum pump, and ports for FPD sampling).
FIGURE 3. St. Louis dichotomous sampler SO4 vs. FPD total SO4, 24-hr integrated samples, 12/16/83 through 12/22/83. Regression: FPD TSO4 = 0.982 (Dichot SO4) - 0.45, R² = 0.931.
FIGURE 4. St. Louis FPD total SO4 (µg/m³) vs. particle scattering coefficient (Bsp), 24-hr integrated samples, 12/16/83 through 12/22/83 (regression slope 0.0677, R² = 0.891).
FIGURE 5. Example of sulfate data reduction.
FIGURE 6. St. Louis one-hour average values (reported at the beginning of each averaging period) from day 350 (12/16/83) to day 357 (12/23/83): total sulfate and sulfate as sulfuric acid; sulfate as ammonium sulfate plus bisulfate and sulfate that does not volatilize at 300 °C; particle scattering extinction coefficient.
-------
AUTOMATED SAMPLING AND ANALYSIS OF
FLUE GASES FROM PLASMA PYROLIZER
Marek E. Krzymien and Lorne Elias,
National Research Council of Canada,
National Aeronautical Establishment
1.0 INTRODUCTION
A Canadian company, Plasma Research, Inc., of Kingston, Ontario, is
currently constructing a plasma torch incinerator for the purpose of disposing
of toxic waste chemicals on a commercial scale. Hazardous materials, such as
PCBs, when subject to the intense, electrically-produced plasma of the facility,
are expected to undergo a thorough chemical degradation to form innocuous
products such as C02 and water, or other products which can be readily
neutralized and released safely into the environment.
The breakdown process occurring in the plasma is highly complex and not
completely understood. It is possible that highly reactive molecular fragments
(free radicals, atoms, ions) produced in the plasma might recombine in a cooler
region of the torch to form environmentally undesirable products. At the same
time, many of the hazardous chemicals to be destroyed are very stable. For
these reasons, it is essential that the gaseous products vented to the
atmosphere from the exhaust stack of the facility be closely monitored to ensure
proper operation of the incinerator.
The Unsteady Aerodynamics Laboratory (UAL), following a request from the
company, has undertaken to assist in the development of a trace gas analysis
system suitable for monitoring the concentration levels in the flue stack of the
plasma torch. In this paper a design of the monitoring system is outlined, and
some preliminary work is described on the collection and analysis of PCBs,
2,3,7,8-tetrachlorodibenzodioxin (TCDD) and 2,3,7,8-tetrachlorodibenzofuran
(TCDF).
2.0 DESIGN CONSIDERATIONS
2.1 Requirements
Among the prerequisites for a suitable analyzer in the plasma torch
scenario are the following:
(1) high sensitivity, to allow the detection of ppt-concentration levels;
(2) qualitative reliability, or specificity, to ensure unambiguous
identification of the target compounds;
(3) quantitative accuracy, to meet environmental control standards;
(4) versatility, to cover a broad range of vapours and gases;
(5) computer-based, for unattended analysis as well as for possible feedback
control of incinerator operation;
(6) economical, in terms of capital outlay and operating cost.
2.2 Gas Chromatography/ECD/TFD/FID
The detection levels mentioned in (1) dictate the use of a preconcentration
technique. Preconcentration of a large air sample results unavoidably in the
collection of many vapours and gases in addition to those of interest and,
therefore, suggests the use of a separation technique, e.g. gas chromatography
(GC), to permit sensing of the target chemicals. The GC sensor could take the
form of a class-specific detector, such as the electron-capture (ECD),
thermionic flame (TFD), or flame ionization (FID) detector. However, although
highly responsive to certain types of compounds, these detectors may lack the
specificity and versatility to meet the requirements of (2) and (4).
2.3 Gas Chromatography/MS
A more universal and positive detection technique can be achieved by the
use of a mass spectrometer (MS) as the GC sensor. Compact, simplified MS
systems have recently become available for use in capillary GC. As analytical
instrumentation this equipment has similar specifications to those of more
sophisticated mass spectrometers while being lower priced and is a priori the
preferred choice of GC detector over the ECD, TFD and FID with respect to
fulfilling the above prerequisites, especially items (2) and (4).
2.4 Preconcentration
The method of trace vapour preconcentration developed previously at UAL is
considered to be adaptable to the present case. In that approach, air to be
tested is drawn through an adsorbent-packed tube which collects or
preconcentrates the vapours of interest; after sampling, the vapours trapped in
the tube are thermally desorbed and transferred by a carrier stream to a second,
smaller adsorber tube from which they are subsequently desorbed and injected
into the GC column.
The two-stage adsorber concept has been used with success to quantify ppt
levels of airborne vapours, including organophosphorus and carbamate
insecticides, chlorophenoxy and chlorobenzoic acid herbicides, as well as
organonitrate explosives (1-5). It has recently been extended for use with
capillary-column GC, and tested in headspace sampling of high-molecular weight
hydrocarbons and fenitrothion (6).
As currently implemented, the technique was designed to permit the
(detachable) first-stage adsorber to be used for sampling at remote locations,
then returned and manually reinstalled in the analyzer unit. To render the
technique suitable for automated sampling and analysis, as required for the type
of fixed-installation monitoring of the plasma incinerator products, some
modification in instrumentation is required.
3.0 EXPERIMENTAL
3.1 Capillary-column GC Analysis
Two GC instruments have been utilized, a Varian 1600/Vista U01 and an HP
5790A, each fitted for capillary column operation.
The Varian GC was equipped with an adsorber tube injector port, illustrated
in Fig. 1, as well as a regular septum inlet. The column used in this
instrument was 30 m x 0.32 mm I.D. SPB-5 fused silica. The column oven was
temperature programmed as follows: initial temperature 80°, hold 10 min; 80° to
150° at 20°/min; 150° to 260° at Wmin, hold 2 min. Helium carrier gas flow
velocity was 39 cm/sec. Under these conditions good separation of the
individual PCBs in Aroclors 1242, 1254, and 1260 and of 2,3,7,8-TCDD and 2,3,7,8-
TCDF was achieved in approximately 30 min. Both FID and ECD were used in the
PCB analysis. The sensitivity of the FID allowed the analysis of microgram
quantities of the Aroclors, while with the ECD sub-nanogram samples were
analyzed.
The HP 5790A GC was fitted with a 12.5 m x 0.2 mm cross-linked dimethyl
silicone WCOT fused silica column. The column oven was operated with the same
temperature programme as the Varian GC. Helium carrier gas velocity was 20
cm/sec. Under these conditions chromatograms were similar to those obtained
with the Varian GC. Splitless injection mode was used to inject samples. The
GC was coupled with an HP 5970A Mass Selective Detector. The detector was
operated in the Peakfinder programme to identify the peaks and in the Selective
Ion Monitor programme to determine trace quantities of analytes.
The GCs were calibrated by means of standard solutions of the Aroclors in
iso-octane having concentrations ranging from 10~^ to 10^-10 g/µL. The
concentration of TCDD and TCDF standard solutions was 10^-9 g/µL. Calibrations
were made both through direct liquid injection and, in the case of the Varian
instrument, through deposition of the solution in the adsorber tube followed by
the appropriate desorption/injection procedure.
3.2 Adsorber Tube Sampling
Pyrex adsorber tubes were 7.5 cm x 6.3 mm O.D. containing a 1 cm column of
Tenax GC 45/60 mesh adsorbent. Tenax has been reported to be superior to
polyurethane foam, XAD-2 resin and Florisil as a sorbent for collecting PCBs in
air sampling (7). The thermal stability, hydrophobic properties, and high
retention capacity of Tenax make it suitable for trapping PCBs and dioxins from
large sample volumes of moisture-laden air, and subsequent recovery of the
target vapours through thermal desorption.
The breakthrough volume of the sorbent plug was estimated by placing a
backup adsorber in series with the first and sampling a spiked air stream; the
presence of PCB vapours in the backup adsorber for a measured volume of air
sample signifies breakthrough from the first tube.
3.3 PCB Vapour Source
A continuous stream of PCB vapours in air was generated by passage of a low
flow of N2 through a U-tube containing glass beads wet with Aroclors 1254 and
1260. This vapour stream was mixed with a larger flow of air (8) to achieve a
controlled dilution ratio of the equilibrium vapour pressure of the PCBs in the
test stream. With the U-tube thermostated at 0°C and a dilution ratio of 1/500,
PCB concentrations of the order of 100 ng/m3 were obtained.
In sampling the test stream, adsorbers were maintained at room temperature,
or heated to 80°C to simulate the plasma stack temperatures. At room
temperature (22°C) it is estimated that less than 5% of the total Aroclors in
30 L of air sampled at 0.5 L/min escaped the first adsorber; when kept at 80°C,
the first adsorber trapped over 90% of the PCBs from a 20 L volume.
Chromatograms from some of these tests are shown in Fig. 2, obtained with the
Varian GC/ECD.
In Fig. 2 differences between the vapour and liquid signatures are evident,
and attest to the fact that the partial pressure of a particular component in
the vapour phase may far exceed the mole fraction in solution.
Using the ECD, the smallest mass of Aroclor mixture that can be measured
with S/N ≥ 5 is about 0.5 ng. With the MSD operated in the Selected Ion
Monitoring (SIM) mode and the electron multiplier voltage set at 1600V, the
smallest quantity of Aroclor that can be measured is about 1 ng. Assuming a
breakthrough volume in sampling of not less than 20 L, the minimum detectable
concentration of PCBs measurable with the adsorber tubes is about 25-50 ng/m3.
By way of comparison, in a recent survey the atmospheric PCB background level in the province of Ontario was found to range from 0.01 to 1.4 ng/m3, averaging about 0.20 ng/m3 (9).
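The detection-limit arithmetic follows directly from the smallest measurable mass and the breakthrough-limited sample volume. A minimal sketch, using only the values quoted above:

    def min_detectable_conc(min_mass_ng, sample_volume_L):
        """Minimum detectable concentration (ng/m3) for an adsorber-tube sample."""
        return min_mass_ng / (sample_volume_L / 1000.0)   # 1 m3 = 1000 L

    print(min_detectable_conc(0.5, 20))   # ECD, S/N >= 5: ~25 ng/m3
    print(min_detectable_conc(1.0, 20))   # MSD in SIM mode: ~50 ng/m3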
The sampling system proposed in monitoring the plasma flue gases is based
on a first-stage adsorbent bed of comparable dimensions to that tested above.
4.0 SAMPLER CONFIGURATION
A module has been designed and fabricated which interfaces with the Hewlett
Packard 5790A/5970A MSD system, and is presently being tested.
The module is essentially an auxiliary oven supporting the first and
second-stage adsorbers, and housing two six-port switching valves and associated
plumbing. Connection from the second adsorber to the GC is made through heated
capillary tubing. The air sampling line is a length of heated stainless steel
tubing, 6.3 mm O.D., which is provided with an injection port for
calibration/test purposes, and with a (replaceable) filter to remove particulates
from the air sample. Care has been taken to avoid cold spots in all vapour
transfer lines. A schematic view of the sampler configuration is given in
Fig. 3.
The twin-valve design shown is sufficiently flexible to allow for purging
of Ads 1 before transfer of the PCB vapours to Ads 2, by operating the valves
independently. Valve actuators, adsorber heaters and the air pump are
microprocessor-controlled.
5.0 CONCLUDING REMARKS
From the initial study carried out to date, it is felt that a viable monitoring system can be based on the sampler configuration and GC/MS approach outlined in this report.
In principle, the proposed system is useful for any vapour or gas that is
amenable to GC analysis. The preconcentrator component of the system, involving
the two-adsorber concept, is of proven efficacy in trace vapour detection, and
can, moreover, be tailored to the gases of interest through selection of
suitable adsorbent packings. At the same time, the MS detector provides
complete versatility of detection.
6.0 ACKNOWLEDGEMENT
The authors thank Dr. Andre Lawrence of this laboratory for his valuable
assistance in the initial stage of the work.
7.0 REFERENCES
1. M. McCooeye, C. Cooke and L. Elias, March 1984. GC Analysis of Post-Spray
Air Samples in Priceville Forest Field Study. NRC NAE LTR-UA-72.
2. R.S. Crabbe, L. Elias and S.J. Davie, January 1983. Field Study of Effect
of Atmospheric Stability on Target Deposition and Effective Swath Widths
for Aerial Forest Sprays in New Brunswick. Part II. NRC NAE LTR-UA-65.
3. R.S. Crabbe, M. McCooeye and L. Elias, January 1984. Effect of Atmospheric
Stability on Wind Drift in Aerial Forest Spray Trials. Neutral to Stable
Conditions. NRC NAE LTR-UA-73.
4. R.S. Crabbe and M. McCooeye, March 1984. Field Measurement of Ground
Deposit and Windborne Drift from Herbicide Sprays in New Brunswick. NRC
NAE LTR-UA-72.
5. L. Elias, January 1981. Development of Portable GC Explosives Detector.
NRC NAE LTR-UA-57.
6. M.E. Krzymien, November 1983. Dual Adsorber-Capillary Column System for
Gas Chromatographic Analysis of Air Samples. NAE-AN-20, NRC No. 22889;
also unpublished data.
7. W.N. Billings and T.F. Bidleman, 1981. High Volume Collection of
Chlorinated Hydrocarbons in Urban Air Using Three Solid Adsorbents. Atmos.
Env., Vol. 17 (1981), pp 383-391.
8. M.E. Krzymien and L. Elias, 1976. A Continuous-Flow Trace Vapour Source.
J. Phys. E: Scient, Inst., Vol. 9 (1976), pp 584-586.
9. E. Singer, T. Jarv and M. Sage, 1983. Survey of Polychlorinated Biphenyls
in Ambient Air Across the Province of Ontario. Physical Behaviour of PCBs
in the Great Lakes (Papers Presented at a Meeting, 1981), pp 367-383
(1983), Ann Arbor Sci.
FIG. 1: DUAL TRAP
1 - Septum retaining nut
2 - Septum
3 - Injector cap
4 - Silicone rubber O-ring
5 - Bayonet coupling
6 - First adsorber glass tube
7 - Glass wool plug
8 - Solid sorbent
9 - Carrier gas (He) inlet
10 - 1/4 inch Swagelok nut
11 - Graphite-filled Vespel reducing ferrule (1/4 to 1/8 inch)
12 - Solenoid valve
13 - Solenoid valve
14 - Split valve
15 - 1/16 inch Swagelok fitting where the capillary column is attached
16 - Stainless steel tubes housing cartridge heaters and platinum temperature sensor
17 - Second adsorber nickel tube
18 - Bakelite insulator
FIGURE 2. CHROMATOGRAMS OF PCB ANALYSIS.
(a) Back-up adsorber, #1 adsorber at 22°C, 30 L vapour sample; (b) back-up adsorber, #1 adsorber at 80°C, 20 L vapour sample; (c) vapour collected on Tenax GC adsorber at 80°C, 20 L vapour sample; liquid injection of 1.5 ng of mixture: 20% Aroclor 1254, 20% Aroclor 1260, 60% trichlorobenzene.
FIGURE 3. SAMPLER CONFIGURATION.
C1, C2 - carrier gas; ADS 1, ADS 2 - first- and second-stage adsorbers; P - air pump; V - vent. Valve positions are shown for sampling, sample transfer, and sample injection/standby.
-------
THE RATIO OF BENZO(A)PYRENE TO PARTICULATE MATTER
IN SMOKE FROM PRESCRIBED BURNING
Jerry D. White, Charles K. McMahon, and Hilliard L. Gibbs
Southern Forest Fire Laboratory
Southeastern Forest Experiment Station
Route 1, Box 182A
Dry Branch, Georgia 31020
INTRODUCTION
Although forest burning is prescribed widely across the United States, it is most commonly practiced in the Northwestern and the Southern United States (1,2). In 1978, approximately 37 million metric tons of forest fuels on all forest ownerships were burned by prescription; approximately 12.5 million metric tons were burned in the South (3). This burning produces an estimated 0.6 million metric tons of total suspended particulate matter (TSP) annually in the United States. Of that total, about 0.2 million metric tons of TSP originate in the South (3).
Considerable uncertainty exists over the estimation of benzo(a)pyrene (BaP) produced by prescribed burning. Forest and agricultural burning were estimated by the National Academy of Sciences to emit 127 metric tons per year in 1968, but that figure was reduced to 9.5 metric tons per year in 1972 (4). In a 1977 report, EPA estimated BaP emissions from prescribed burning to be 4.5 metric tons nationally, which was 0.5 percent of the BaP from all sources (5).
Early data gathered by the Southern Forest Fire Laboratory suggested that the amount of BaP emitted might also vary with the fuel condition and method of burning. In a series of experimental fires conducted by McMahon and Tsoukalas (4), the ratio of BaP to TSP was found to be much higher among simulated backing fires than among simulated heading fires in pine needle fuel (Table 1). The measurements were made in a special combustion chamber at the Southern Forest Fire Laboratory (Figure 1). Backing fires are spreading fires that progress into the wind, and heading fires are those that progress with the wind. Both types of fires are commonly prescribed.
A serious limitation in these results was that they represented only one
fuel type burned by prescription in the South. Perhaps more important, they
represented a fire environment in which pine needles were isolated from all
natural variations in conditions of duff, soil, moisture, and wind. Questions
were raised. Are the order of magnitude differences between BaP/TSP ratios
from backing and heading fires seen in laboratory fires also characteristic of
fires in natural forest settings? What is the range of BaP/TSP ratios for some
other fuels commonly prescribed burned in the South? The study described here
was directed toward these questions.
METHODS
Experimental Design
In the forest, several factors, which can be selected or measured prior to
a prescribed burn, are believed to influence BaP and TSP production. These
factors fall into two broad categories: fuel conditions and weather
conditions. The fuel conditions are fuel type, fuel load, and moisture
content. Weather conditions are fire type (or wind direction), wind velocity,
and relative humidity.
In this field experiment, we examined the effect of fire type and fuel
type only. For comparison with the laboratory experiment, we incorporated
three levels of fuel loading for one fuel type—pine needle litter. The
statistical design chosen was a factorial experiment (2 fire types x 4 fuel
types) with an unbalanced incidence matrix. The two fire types examined were
backing and heading fires. The four fuel types examined were pine needles
(litter of pure slash pine needles), hardwood litter (mixed hardwood leaves and
pine needles), broomsedge (pine-needle litter with broomsedge understory), and
palmetto (pine-needle litter with palmetto understory).
The plots burned in each fuel type were approximately 5m by 25m. With the
exception of the pine-needle fuel, 6 plots were burned in each fuel type—3
replicate backing fires and 3 replicate heading fires. In the case of the
pine-needle fuel, each of 3 levels of loading was treated separately, giving a
subtotal of 18 pine needle fires. The statistical analysis was appropriately
adjusted for this unbalanced incidence of fuel types. In all, 36 individual
plots were burned and sampled. Results were subjected to statistical analysis
of variance.
The Sampler
A light, portable sampler was designed to collect, simultaneously, samples
of total suspended particulate matter, benzo(a)pyrene, and combustion gases.
The sampling train consisted of five units: a glass-fiber filter holder, a
polyurethane foam (PUF) trap, two personal pumps, and a gas bag (Figure 2).
The sampler, assembled from components and attached to a long aluminum pole,
was designed for portability and safety. On one end was an air-intake probe
which could be extended into the smoke plume directly over the flaming zone.
By raising or lowering the probe above the flames, the operator positioned it so that its temperature rarely exceeded 60°C. On the other
end of the pole were the pumps and other electrical components which were less
resistant to heat. The person who carried the sampler could walk near the
advancing fire line holding the probe within the plume of emissions directly
above the flames.
The sampling probe (Figure 3) consisted of four main parts: filter
holder, PUF trap, thermocouple and anemometer. The open-faced aluminum filter
holder contained a 47-mm glass-fiber filter. The exit of the holder fed
directly into a PUF trap constructed of PVC pipe and end caps. The trapping
material was polyurethane foam in 3 cylindrical plugs (30-mm diameter by 35-mm
length") prepared in advance by soxhlet extraction with methylene chloride. We
expected most, if not all, BaP to be trapped on the glass-fiber filter; but to
be safe, we placed the PUF plug into the sampling train to trap any BaP in the
vapor phase as well. ' The filter holder and PUF trap were attached by a
"quick-disconnect" to an extension tube running to the pumps. A thermocouple
was placed on the probe very near the entrance to the filter holder with a
temperature readout in sight of the person using the sampler. The thermocouple
was used to monitor the temperature of gases entering the sampling probe. A
Biram anemometer, not used in this study, was located adjacent to the sampling
probe and could be used to determine average windspeed flowing by the sampler.
On the other end of the sampler (Figure 4), a Dupont P-4000 pump powered
the probe's air flow with a flow rate of 4.0 liters per minute. A smaller
pump, a Dupont P-200, pulled a constant proportion of the exhaust gases from
the main pump into a 2.5 liter aluminized gas bag at a flow rate of 0.12 liter
per minute. Thermocouple and time readouts were also located here. About 1 to
5 mg of TSP and 1 to 2 liters of gases were collected for subsequent analysis.
TSP was determined gravimetrically while the concentrations of carbon monoxide (CO) and carbon dioxide (CO2) were determined by a nondispersive infrared technique. CO and CO2 values, while not used in this study, could be used to estimate emission factors by the carbon balance technique as reported by Ward, et al. (8). BaP trapped in the TSP and PUF was determined by a routine
method validated for wood smoke at the Southern Forest Fire Laboratory. In
this method, BaP was recovered from the TSP and PUF by soxhlet extraction with
methylene chloride and quantified via high performance liquid chromatography on
a bonded octadecyl column. The limit of detection was about 1.34 ng BaP and
the limit of quantitation was about 2.02 ng. A precision of better than 10%
was typical at the BaP levels determined.
RESULTS AND DISCUSSION
Benzo(a)pyrene appeared to be trapped completely by the sampler's
glass-fiber filter. In only one PUF analysis out of 12 did BaP exceed the limit of quantitation (2.02 ng), and this one case was thought to be due to leakage of TSP rather than breakthrough of BaP. In separate tests, samples of TSP were held for BaP analysis for at least 4 months under refrigeration without significant degradation (9).
For the pine-needle fuels, the trends of the ratios of BaP to TSP in the
field were similar to trends reported in the laboratory (Table 1). For
example, backing fires produced higher ratios than heading fires, except for
backing fires with heavy fuel loads. Also, ratios decreased with increasing
loading of needles. In the field, however, the range in observed values (7 to 45 µg per gram) was far narrower than in the laboratory (2 to 274 µg per gram). The unusually high ratios for laboratory backing fires occurred, we believe, because conditions for pyrosynthesis of BaP were more favorable in those fires.
What are the conditions that influence formation of BaP during prescribed
burning? Strong experimental evidence suggests that BaP pyrosynthesis within
the flame envelope is governed by temperature, oxygen concentration, and length
of time BaP precursors remain inside the flame. If the flames are too hot (above 1000°C) and turbulent, BaP levels are low because oxidation is favored
over pyrosynthesis. On the other hand, if temperatures are low (below 600°C),
as often occurs in smoldering combustion, the BaP precursors do not cyclize to
the 5-ring BaP structure. The optimum temperature for BaP pyrosynthesis
is near 800°C. In prescribed burning, our evidence suggests that the
conditions that favor BaP pyrosynthesis are low-intensity fires in which
flaming combustion predominates over smoldering combustion. These conditions
are produced in light fuel loadings that burn with relatively nonturbulent
flames.
When BaP/TSP ratios were listed by fire type and fuel type, the range of
values was less than an order of magnitude (Table 2). Over the four fuel
types, no significant difference was found between ratios from backing and
heading fires. A mean of 23 µg per gram for backing fires and 25 µg per gram
for heading fires showed this clearly. However, there was a significant
difference among mean values by fuel type. We cannot explain the variation
among fuel types at this time. However, we believe that it is caused by a
combination of fuel characteristics such as fraction of green fuels, fuel bed
porosity, fuel loading, and chemical composition, which contribute to fire
behavior factors such as reaction intensity and fire line intensity.
Additional work is planned.
The mean BaP/TSP ratio of all 36 fires in the experiment was 24 µg per gram with a relative standard deviation of 0.47. In another study by Ward, et al. (14), currently in progress in the Pacific Northwest, a BaP/TSP ratio of 15 µg per gram with a relative standard deviation of 0.49 has been determined from 27 TSP samples. These samples were obtained from burning unpiled forest residues (slash burning). Applying these new ratios (24 and 15) to the fuel and TSP data available from Chi, et al. (3), we calculate a new annual BaP production of 11 metric tons for prescribed burning in the United States.
Although this new value is still an approximation, we believe it to be accurate
within a factor of two and a significant improvement over previous estimates
because of the new information available on BaP/TSP ratios.
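The revised national figure can be reproduced from the TSP estimates quoted in the introduction if the Southern ratio (24 µg/g) is applied to the roughly 0.2 million metric tons of TSP attributed to the South and the Pacific Northwest slash-burning ratio (15 µg/g) to the remaining 0.4 million metric tons; this regional split is our reading of the calculation rather than a detail stated explicitly here.

    # Hedged reconstruction of the ~11 t/yr estimate; the regional assignment of
    # the two BaP/TSP ratios is an assumption consistent with the TSP figures above.
    TSP_SOUTH_T = 0.2e6      # metric tons TSP per year, Southern prescribed burning
    TSP_OTHER_T = 0.4e6      # metric tons TSP per year, rest of the United States
    RATIO_SOUTH = 24e-6      # g BaP per g TSP (this study)
    RATIO_OTHER = 15e-6      # g BaP per g TSP (Ward et al., slash burning)

    bap_tons = TSP_SOUTH_T * RATIO_SOUTH + TSP_OTHER_T * RATIO_OTHER
    print(bap_tons)          # about 10.8, i.e., ~11 metric tons per year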
CONCLUSIONS
1. The ratio of BaP/TSP averaged 24 µg per gram with a relative standard
deviation of 0.47 in four forest fuels common to the Southeast.
2. Significant differences were not found between heading and backing fire
types, but were found amongst the fuel types.
3. BaP production from prescribed burning is estimated to be 11 metric tons
annually.
REFERENCES
1. Southern Forest Fire Laboratory Personnel, 1976. Southern Forest Smoke
Management Guidebook. Gen. Tech. Rep. SE-10. U.S. Department of
Agriculture, Forest Service, Southeastern Forest Experiment Station,
Asheville, NC, 140pp.
2. Johnson, V. J., 1984. Prescribed burning: Requiem or renaissance?
J. For. 82:2, pp82-90.
3. Chi, C.; D. Horn; R. Reznik; D. Zanders; R. Opferkuch; J.Nyers;
J. Pierovich; L. Lavdas; C. McMahon; R. Nelson; R. Johansen; P. Ryan,
1979. Source Assessment: Prescribed Burning, State of the Art. EPA (U.S.)
Report EPA-600/l-79-019h, Research Triangle Park, NC, 107pp.
4. McMahon, C. K.; S. N. Tsoukalas, 1978. Polynuclear aromatic hydrocarbons
in forest fire smoke. In: Jones, P. W. and R. I. Freudenthal, eds.
Carcinogenesis, Vol. 3: Polynuclear Aromatic Hydrocarbons. Raven Press,
New York, NY, pp61-73.
5. Eimutis, E. C.; R. P. Quill, 1977. Source Assessment: Noncriteria
Pollutant Emissions. EPA (U. S.) Report EPA-600/2-77-107e, Research
Triangle Park, NC, 99pp.
6. Thrane, K. E.; A. Mikalsen, 1981. High volume sampling of airborne
polycyclic aromatic hydrocarbons using glass fibre filters and
polyurethane foam. Atmospheric Environment 15:6, pp909-918.
7. Yamasaki, H.; K. Kuwata; H. Miyamoto, 1982. Effects of ambient temperature
on aspects of airborne polycyclic aromatic hydrocarbons. Environ. Sci.
Technol. 16:4, pp 189-194.
8. Ward, D. E.; D. V. Sandberg; R. D. Ottmar; J. A. Anderson; G. C. Hofer;
C. K. Fitzsimmons, 1982. Measurement of smoke from two prescribed fires in
the Pacific Northwest. Presented at the 75th Annual Meeting of the Air
Pollution Control Association, New Orleans, LA.
9. White, J. D., 1984. A simplified determination of benzo(a)pyrene in
particulate matter from prescribed burning. (Submitted to Am. Ind. Hyg.
Assoc. J.)
10. Badger, G. M.; R. W. L. Kimber; J. Novotny, 1964. The formation of aromatic hydrocarbons at high temperatures. XXI. The pyrolysis of n-butylbenzene over a range of temperatures from 300 to 900°C at 50°C intervals. Aust. J. Chem. 17, pp778-786.
11. Crittenden, B. D.; R. Long, 1976. The mechanisms of formation of
polynuclear aromatic compounds in combustion systems. In: Freudenthal,
R. I.; P. W. Jones, eds. Carcinogenesis, Vol. I, Polynuclear Aromatic
Hydrocarbons. Raven Press, New York, NY, pp209-223.
12. Schmeltz, I.; D. Hoffman, 1976. Formation of polynuclear aromatic
hydrocarbons from combustion of organic matter. In: Freudenthal, R. I.;
P. W. Jones, eds. Carcinogenesis, Vol. I, Polynuclear Aromatic
Hydrocarbons. Raven Press, New York, NY, pp225-239.
13. Commins, B. T., 1969. Formation of polycyclic aromatic hydrocarbons
during pyrolysis and combustion of hydrocarbons. Atmos. Environ. 3, pp565.
14. Ward, Darold E.; Colin C. Hardy, 1984. Advances in the characterization and
control of emissions from prescribed fires. Presented at the 77th Annual
Meeting of the Air Pollution Control Association, San Francisco, CA.
Figure 1. Bed of pine needles
burning in combustion
chamber at Southern Forest
Fire Laboratory.
Figure 2. Portable smoke sampler.
Figure 3. Portable smoke sampler,
probe unit.
Figure 4. Portable smoke sampler,
control unit.
TABLE 1. RATIO OF BENZO(A)PYRENE TO TOTAL SUSPENDED PARTICULATE MATTER
FROM BURNING PINE NEEDLES IN THE LABORATORY AND FIELD

                            Laboratory fires             Field fires
Fire type and            Fuel load*     Ratio        Fuel load     Ratio
fuel loading               kg/m2        µg/g           kg/m2       µg/g

Backing fires
  Light load                0.5         274**           0.3          45
  Median load               1.5         135             1.3          14
  Heavy load                2.4          98             1.6           7
  Mean value                1.5         169             1.1          22

Heading fires
  Light load                0.5           3             0.3          24
  Median load               1.5           2             1.3          11
  Heavy load                2.4           2             1.5          11
  Mean value                1.5           2             1.0          15

* In the laboratory, fuel load referred to kilograms per square meter of pine
  needles placed on the burning rack; in the field, fuel load referred to the
  difference in the average kilograms per square meter of pine needle litter
  (6 replicates) before and after the fire.
** Ratio values for laboratory fires, taken from reference 4, were recalculated.
   The correct ratio for the backing fire with light load should be 318 µg per
   gram and not 274 µg per gram.
TABLE 2. RATIO OF BENZO(A)PYRENE TO TOTAL SUSPENDED PARTICULATE MATTER
ARRANGED BY FUEL TYPE AND FIRE TYPE

                          Backing fires                         Heading fires
               Ratio*   Number of    Relative**      Ratio    Number of    Relative
Fuel types      µg/g    replicates   std. dev.        µg/g    replicates   std. dev.
Pine needle      22         9           1.00            15         9          0.60
Hardwood          9         3           0.44            13         3          0.18
Broomsedge       17         3           0.23            13         3          0.30
Palmetto         44         3           0.40            60         3          0.14

* Ratios are reported in micrograms of benzo(a)pyrene per gram of total
  suspended particulate matter.
** Relative standard deviation (coefficient of variation) is the ratio of the
   standard deviation of the replicates to the mean.
170
-------
VOLATILE ORGANIC SAMPLING TRAIN (VOST)
DEVELOPMENT AT MRI
Fred J. Bergman
Midwest Research Institute
The purpose of this presentation is to describe volatile organic sampling
train (VOST) technology presently in use at Midwest Research Institute (MRI).
We hope this information will help you avoid many of the problems MRI en-
countered when it first sampled for volatile organics using Tenax traps.
Early in 1983 MRI received a task on an EPA Office of Toxic Substances pro-
gram to evaluate the emissions from hazardous waste incinerators. We started
the program using a commercial conditioner, desorber, and Tenax traps. The
Tenax in the trap (Figure 1) was held in place with plugs of glass wool, and the
traps were stored in test tubes with Teflon-lined caps. The traps were con-
nected to the sampling system using Swagelok fittings with Teflon front and back
ferrules. A laboratory evaluation for compound retention was performed and the
system appeared to be working. After our first field test, however, we found
that the blanks contained almost the same levels of volatile organics as our
samples. A frantic search was initiated to eliminate the problem while the test
program continued. We employed a multi-approach attack, obtained very low
blanks, and adopted a procedure which is still being used.
To our knowledge, MRI has had more experience with the VOST as applied to
source measurements than any other organization. Because this experience has
yielded additional information, we have developed what we believe is a better
understanding of the VOST's problems. We now know, for example, that many of
the steps, originally incorporated in the procedure to eliminate the high
blanks, are unnecessary. The major difficulties with using the VOST will be
getting the cartridges clean, knowing when they are clean, and keeping them
clean.
The successful use of the VOST requires good cartridge design. We designed
the MRI double-walled cartridge (Figure 2) to minimize the contamination which
we felt was coming from outside the tube. Tests show, however, that even the
double-walled cartridge does not completely protect the cartridge from contami-
nation. With this knowledge, we are investigating a new simplified design (Fig-
ure 3).
Our first VOST runs were made on incinerators with wet scrubbers. We were
having difficulty during our analysis because there was high water retention in
the cartridges. We found that most of the water was contained in the glass wool
used to retain the Tenax. To solve this problem, a system of C clips and stain-
less steel screens was developed to hold the Tenax in the traps. This retention
system kept the Tenax compressed, with a surprising but now understandable im-
provement in cartridge performance. Keeping the Tenax compressed eliminated
voids and channeling in the resin bed. The result was cartridges which were
more uniform in their compound retention and which were significantly improved
in their retention capacity.
Another way we modified the original Tenax tube system was the method of
connecting the tubes to the system. We found that the Swagelok fittings with
Teflon front and back ferrules we originally used frequently broke the tubes
when tightened sufficiently to obtain a leak-free system. We therefore used the
end plates in the double-walled design to obtain a seal with the tubes and at-
tached the end plates to the system using VCO fittings.
In the new tube design, we are attempting to use Ultra-Torr fittings, which use an O-ring. While we were trying to reduce the blanks, we filled a cartridge with the Viton O-rings and desorbed them into the mass spectrometer. High levels of hydrocarbons were detected. We decided to condition the O-rings in a vacuum oven to remove the volatile material. When the first batch of O-rings was removed from the oven, about half had turned to black glass, indicating that the vendor had mixed Viton and rubber O-rings together. We changed to a vendor that color-codes its O-rings (Viton is tan). Because we have not yet checked volatiles collected with the new O-rings, to be safe we are continuing to condition the new rings. It appears that we may be able to employ Teflon O-rings in the Ultra-Torr fittings in place of Viton for even better performance.
Let us summarize the advantages and disadvantages of the two MRI tube de-
signs and the original all-glass commercial tube design. The MRI double-walled
design gives reproducible results, provides low water retention, is rugged, pro-
tects the exterior of the tube from contamination, goes in the system one way
only, and does work. It is, however, heavy, large, expensive to construct, and
requires more time to recycle. (The cartridges are disassembled and only the
inside tube is placed in the desorber, so the cartridge must be reassembled.)
The new simplified MRI tube design will have the same advantages as those of
the double-walled design but have the added advantages of being smaller,
lighter, and lower in cost. It will probably require some type of outside pro-
tection in the field such as being wrapped in aluminum foil. A system to assure
proper orientation of inlet to inlet will also be desirable. It is currently
untried. The original commercial all-glass tube design using C clips and stain-
less steel screen in place of the glass wool should perform as well as the new
simplified MRI design, and the different sized ends will assure proper orienta-
tion. Replacement glass tubes will be more expensive because one end must be
drawn down, but the sampling train (metal parts) will cost less because of the
smaller fittings on one tube end. The small tube end will also be more fragile.
A fourth tube design (Figure 4) has been proposed in the VOST protocol. We
believe this new commercial tube design is unacceptable. Because the tube is
necked down to 1/4 in. on each end, it is necessary to hold the Tenax in place
with glass wool plugs, with the attendant disadvantages.
It has been our experience that for good conditioning, especially in the
case of used cartridges, it is essential that the gas be forced to pass through
the cartridges. In the commercial unit that is presently available for car-
tridge conditioning, the gas does not normally pass through the cartridges but
flows around the tube. We understand that the manufacturer has developed a
modification which forces the gas through the cartridges. If you decide to use
this conditioner, we strongly recommend that you use the cartridge flow-through
modification. We have found that many cartridges that have been used still fail
the purity check after two 8-hr conditioning periods using the unmodified sys-
tem. These same cartridges will clean up, however, after only 4 to 8 hr when
the purge gas goes through the cartridge. We have been informed that some users
have solved the cleanup problem by discarding the approximately $10 worth of
Tenax in each cartridge after it has been used once, an approach you may wish to
take.
We recommend adopting the one-step conditioning and monitoring technique
(Figure 5) regardless of which cartridge design is used. The one-step procedure
consists of passing hydrocarbon-free nitrogen at 30 mL/min through the cartridge
while it is heated at 200°C. The exit gas stream from every tube is checked at
regular intervals using a flame ionization detector (FID) until the hydrocarbon
level approaches the lower detection limit (LDL). Using the one-step technique,
you know when the cartridges are clean. This permits stopping the conditioning
when most of the cartridges pass the purity check. In addition, all components
of the cartridge can be cleaned (conditioned) at the same time. This has the
added advantage of not requiring stringently clean facilities for cartridge
assembly. If you elect to condition and perform the purity check separately,
you will find it necessary to recycle cartridges through the desorber until they
pass the purity check.
A manifold for conditioning the new tube design is shown in Figure 6. Each
manifold holds 10 tubes in the conditioning oven, so that with four manifolds
40 tubes can be conditioned at a time.
We do not agree with the introduction of a chromatographic column during
the cleanup procedure, as proposed in the VOST protocol. We do not see how it
improves the method, and it has a number of disadvantages. Use of a column in-
creases the time required for each purity measurement from the present 2 min to
at least 30 min. Use of a column also significantly reduces the sensitivity of
the measurement. When using an FID, the hydrocarbon response is additive so
that the lower detection limit is the sum of all components eluted. If the
cartridge contains, for example, 15 components just below the 0.2 ng level, they
would pass using the column technique. With the FID only, the response would
equal 3 ng and would fail to pass. Another disadvantage concerns column contam-
ination. If you use a column when checking cartridges that have collected field
samples, you will find that high boiling hydrocarbons are slowly accumulated in
the column. This will cause an increase in the background hydrocarbon level.
It will, therefore, be necessary to stop and bake out or replace the column at
frequent intervals.
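The additivity point above is easy to make concrete. In the hypothetical fifteen-component case, each peak passes a per-peak check, yet the undifferentiated FID response sums to about 3 ng; the few lines below simply restate that example with illustrative numbers.

    components_ng = [0.19] * 15      # fifteen components, each just below the 0.2 ng level
    per_peak_limit_ng = 0.2          # limit applied peak by peak when a column is used

    passes_column_check = all(c < per_peak_limit_ng for c in components_ng)
    total_fid_response_ng = sum(components_ng)

    print(passes_column_check)              # True: every component passes individually
    print(round(total_fid_response_ng, 2))  # 2.85 ng: near the ~3 ng that fails the FID-only check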
The only method we have found that protects clean cartridges is to store
them over activated charcoal or under water. Tests have demonstrated repeatedly
that neither the double-walled MRI cartridges nor the all-glass cartridges stored
in a Teflon-capped test tube will remain uncontaminated. After conditioning,
the cartridges should be placed under water or over charcoal as soon as they
reach room temperature. As an added precaution, we maintain the purge gas flow
on the cartridges until they are cool.
As indicated in the VOST protocol, and as has been done by users of all-
glass tubes in the past, the tubes are capped and placed in Teflon-lined screw-
capped test tubes after sampling for shipping and storage. We question this
procedure. Any contamination on the exterior of the Tenax tubes, if carried to
the inside of the test tubes, will migrate to the Tenax.
Since we found that volatile organics diffused through the 0 ring seals on
the double-walled design, we also believe that storing the tubes after sampling
in screw-capped test tubes over charcoal should be avoided.
The original MRI train is described in the VOST protocol. The train is be-
ing improved and modified to use the new MRI cartridge and the type of lubricant-free valve required by EPA (Figure 7). The valve manufacturer recommends the use of a small amount of lubricant to maintain leak-free operation. We determined that modest amounts of Apiezon grease in hydrocarbon sampling trains will neither add to nor remove measurable quantities of hydrocarbons from the gas stream. However, the use of greases has been forbidden by EPA.
The addition of a third valve to the train will permit carrying out all the
required operations without having to remove or replace various components. The
valves may be arranged in such a manner that only one valve is in the system dur-
ing the leak check.
In summary, MRI believes the VOST procedure can be simple and straight-
forward, as shown in the following steps:
Clean new metal parts with suitable solvent.
Sonicate all components in hot detergent solution.
Rinse with water and oven-dry.
Assemble cartridges in clean area.
Condition at 200°C with 30 mL/min of hydrocarbon free gas.
Check exit gas of each tube at intervals using FID.
Stop conditioning when hydrocarbon level approaches lower detection
limit.
For cartridges that do not pass after 8 hr, fill with fresh Tenax and
recycle.
Cool with gas flowing, cap, and store over activated charcoal.
Protect outside of tube while sampling with aluminum foil.
After sampling, cap and store tubes under water until analyzed.
If benzene or toluene is to be measured, the cartridges should be conditioned as
close as possible to the sampling time and analyzed as soon as possible. Cart-
ridges for benzene or toluene should be stored under ice water after condition-
ing and until analyzed. For samples not requiring benzene or toluene analysis,
storage over charcoal after conditioning and under water after sampling should
be sufficient. However, it would probably be a good idea to keep cartridges
cold when possible.
Our presentation has been limited to the sampling train. Our analytical
procedure remains basically the same as reported to the contractor who prepared
the protocol. The one exception is that we have discontinued using the com-
mercial desorber. In its place we are connecting inlet and outlet fittings
directly to the Tenax tubes (Figure 8) so that the purge gas must pass through
the tube. The tube is then heated by a small resistance heater placed around
the glass tube. Using this system we have improved the repeatability of standardization from 10% to 2%.
In conclusion, I would like to acknowledge the following MRI personnel who
made significant contribution during this work: Paul Gorman, Greg Jungclaus,
Gil Radolovich, George Scheil, Bob Stultz, George Vaughn, and Ken Wilcox.
176
-------
Figure 1. Original commercial trap (Tenax held by glass wool plugs; VCO fittings).
Figure 2. MRI double-walled cartridge (Tenax held by "C" clips and two layers of stainless steel screen; VCO fittings).
Figure 3. New simplified MRI trap (Tenax TA held by "C" clips and stainless steel screens; Ultra-Torr fittings).
Figure 4. New commercial trap (Tenax held by glass wool plugs).
-------
Figure 5. One-step conditioning and monitoring technique (nitrogen from a liquid N2 supply passes through a pressure regulator and flow-limiting orifice to the Tenax cartridge in a 200°C conditioning oven; the exit gas is carried through a heated loop and a flow restrictor in place of a column to a gas chromatograph with FID).
Figure 6. New MRI conditioning manifold (brass tube with 1/2 Ultra-Torr fittings; gas outlet to FID).
-------
Figure 7. New VOST sampling train (probe with thermocouple, cartridge and trap connected by Ultra-Torr unions, rotameter, and sampling pump).
Figure 8. New desorber (resistance heater coil around the glass tube, power supply, purge gas supply, and outlet to the GC/MS system).
-------
AN EVALUATION OF INSTRUMENTAL METHODS FOR THE ANALYSIS
OF VINYL CHLORIDE IN GASEOUS PROCESS STREAMS
George W. Scheil
Midwest Research Institute
Kansas City, Missouri 64110
INTRODUCTION
This presentation describes a project conducted by Midwest Research Insti-
tute (MRI) and funded by the Environmental Protection Agency (EPA) to provide
background information for the development of performance specifications for
continuous analyzers for hazardous organic pollutants. The project has two main
purposes: to assess the state of the art in continuous monitoring for vinyl
chloride (VC) and to measure the actual performance of two different analyzers
over a period of 6 months.
In the test design (Figure 1), the analyzers are connected to a calibrator, which substitutes calibration gases of 0, 5, and 9 ppm VC for the sample gas once each day, and to a digital data logger designed by MRI, which provides a phone modem link for transmitting data back to MRI on request, with backup records on a printed log and magnetic tape. The system operates unattended; only a twice-monthly supply
visit is made unless the phone link checks indicate the need for a repair trip
to the site.
ANALYZER SELECTION
One of the analyzers to be tested is an existing EPA furnished process gas
chromatograph (GC) (Applied Automation). This unit has a 30-cm backflush column
of 1/16-in. stainless steel tubing packed with n-octane Durapak for removing
heavy organics, followed by a 30-cm analytical column of 1/16-in. stainless
steel tubing packed with Porasil C to separate VC from other light organics; a
flame ionization detector (FID); and an analog preset time window integrator.
The system also has a sample conditioning section, gas sampling valve, and con-
trol system for automatic, repetitive sampling and analysis.
A review of currently available instruments was conducted to select the
second analyzer. Several techniques are commercially used for continuous moni-
toring of VC such as gas chromatography, infrared, and electrochemical sensors.
Only gas chromatography has the necessary sensitivity and selectivity for mea-
suring VC near the current 10 ppm standard in process streams in the presence
of significant amounts of ethylene dichloride and other organics. A growing
number of VC analyzers are using photoionization detectors (PID) in place of an
FID, with some use of electron capture detectors. Digital integrators are also gradually replacing analog systems for peak analysis, with peak height occasionally being measured instead of peak area.
A process GC with a PID and a digital integrator was selected as the best
choice for the second analyzer. The PID has potentially better sensitivity to
VC than an FID and little response to potential interferences such as ethyl
chloride. The PID, as opposed to an FID, requires no special supply gases, and
the PID is not as easily poisoned as an electron capture detector. Peak height
measurements suffer from nonlinearity problems whereas digital integration
matches the growing use of microprocessor controlled analyzers.
More than half the cost of purchasing a second process GC is needed to
duplicate the valves, column, and other basic hardware to support a detector and
integrator. Since the PID is a nondestructive detector, an in-line PID could be
added to the existing system. A simple switching relay allows either integrator
to be selected. Adding a PID and digital integrator to the existing system thus
saves considerable money and has the added advantage of allowing better dis-
crimination of any differences between the types of detectors and integrators by
having the control system alternate each integrator with each detector. Thus, a
revised analyzer system was assembled (Figure 2). The added modules were a
Model PI52 PID with a 10.2 eV lamp (HNU Systems, Inc.) and a Model BC-2 instru-
ment control computer (Action Instruments).
LABORATORY TESTS
Before proceeding to the field test, a series of experiments was completed
in the laboratory to determine optimum operating conditions and possible inter-
ferences, and a matrix test of the analyzer control variables was conducted.
The typical VC reactor product gas contains significant concentrations of ethyl-
ene and ethylene dichloride, as well as VC, and smaller amounts of other light
chlorinated hydrocarbons. Small amounts of chlorine and hydrochloric acid are
also present, which required the replacement of all stainless steel in the sample
conditioning system with Monel, Teflon, or Kynar parts. Test mixtures of the
compounds (Table 1) were prepared with permeation tubes at about 10 ppm with
similar concentrations of VC and were sampled by the process GC. The backflush
column rejected most of the compounds, and the positive interferences had
shorter retention times than VC with at least partial separation. Although the
FID response to chloromethane was similar to that to VC, the PID response was
less than 1% of the equivalent VC response.
During preliminary testing the BC-2 digital integrator system developed
severe problems due to a lack of isolation from electrical noise. The computer
was having nonrecoverable system crashes about once an hour and the probable
cost of remedial action was excessive. Fortunately the MRI data logger, based
upon an Epson HX-20 briefcase computer matched with a Wintek MCS analog inter-
face, was functioning well in the same environment. The excellent line isola-
tion of a Nicad-powered computer together with the optical isolation and reset
capabilities of the Wintek system allowed reliable recovery from noise. The
logger system could measure the detector signals with the addition of a simple
amplifier and had sufficient idle time during each analysis cycle to perform
the necessary peak integration. Therefore, the logger was reprogrammed to per-
form the digital integration task as well as to log the results and communicate
with MRI.
The reliability of this modified system proved satisfactory and the matrix
test of control variables was conducted. The variables can be separated into
variables affecting the entire analyzer (Table 2) and variables affecting only
part of the system (Table 3). Each variable was tested by measuring a VC stan-
dard gas at the optimum condition and then at reasonable steps higher and lower
than the optimum. A strong effect showed a change of more than three standard
deviations from the average concentration at the optimum condition, and no ef-
fect showed a change of less than one standard deviation. Since the entire ana-
lyzer is pneumatically driven, the air pressure effects occurring only at lower
pressure are not surprising.
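A minimal sketch of this classification rule, assuming the effect is judged
from the shift of the perturbed reading relative to replicate measurements at
the optimum condition (the three- and one-standard-deviation thresholds are
from the text; the data and names are illustrative):

    import statistics

    def classify_effect(optimum_readings, perturbed_reading):
        # strong: shift of more than 3 standard deviations from the optimum mean
        # none:   shift of less than 1 standard deviation
        mean = statistics.mean(optimum_readings)
        sd = statistics.stdev(optimum_readings)
        shift = abs(perturbed_reading - mean) / sd
        if shift > 3:
            return "strong"
        if shift < 1:
            return "none"
        return "intermediate"

    optimum = [5.02, 4.97, 5.05, 4.99, 5.01]     # ppm VC at the optimum setting
    print(classify_effect(optimum, 5.30))        # e.g., carrier pressure stepped up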
The analog integrator measures the baseline at a fixed time just before the
VC peak and integrates any signal above that baseline until the fixed stop time.
Since the two detectors measure the peaks at slightly different times, a change
in the integration windows or the retention time will affect each detector dif-
ferently. The digital integrator program first scans the chromatogram for the
peak maximum nearest the expected time of the VC peak, jumps forward by a pre-
set offset to begin searching for a level baseline at the start of the peak,
performs a similar operation to find the end of the peak, and subtracts the
average baseline from the area under the peak. The program is insensitive to
changes in the offsets or the factors used to set the minimum width and flatness
of the baselines. Although the PID has no readily controlled unique variables,
the FID is sensitive to changes in the flame hydrogen supply pressure and thus
to its flow rate.
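The digital integration logic described above can be sketched roughly as
follows; the parameter names, window sizes, and flatness test are illustrative
assumptions and not the code actually programmed into the MRI logger.

    def integrate_vc_peak(signal, expected_idx, search=20, offset=5,
                          width=4, flatness=0.02):
        # apex: chromatogram maximum within a window around the expected VC time
        lo = max(0, expected_idx - search)
        hi = min(len(signal), expected_idx + search)
        apex = max(range(lo, hi), key=lambda i: signal[i])

        def level_point(start, step):
            # walk away from the apex until `width` consecutive points are flat
            i = start
            while 0 <= i and i + width <= len(signal):
                window = signal[i:i + width]
                if max(window) - min(window) < flatness:
                    return i
                i += step
            return start

        start = level_point(max(0, apex - offset), -1)                 # baseline before peak
        end = level_point(min(len(signal) - width, apex + offset), 1)  # baseline after peak
        baseline = (signal[start] + signal[end]) / 2.0                 # average baseline
        return sum(s - baseline for s in signal[start:end + 1])        # corrected area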
FIELD TEST
While the complete instrument system was undergoing a 1-month reliability
test in the laboratory under simulated field conditions, the final arrangements
for the field installation were completed. The test design required that the
analyzers be operated for a period of 6 months at a VC monomer production fa-
cility with three 5-day periods of equivalence tests comparing the analyzers
with EPA Method 106. The primary difficulty in selecting a test site was that
all the plants contacted had gas streams which were either much less than 1 ppm
or at concentrations of at least 1,000 ppm VC. The analyzer was finally in-
stalled on a reactor offgas stream with a nitrogen dilution tee to bring the VC
concentration within the 1 to 10 ppm range needed for equivalency testing.
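For orientation, the dilution required at such a tee follows from a simple
mass balance, C_target = C_source · Q_sample/(Q_sample + Q_N2); the 1,000 ppm
source level and 5 ppm target below are illustrative values consistent with the
ranges mentioned in the text, not measured plant data.

    def dilution_ratio(c_source_ppm, c_target_ppm):
        """Volumetric N2-to-sample ratio needed to reach the target concentration."""
        return c_source_ppm / c_target_ppm - 1.0

    print(dilution_ratio(1000.0, 5.0))   # about 199 parts N2 per part offgas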
Only partial data are available since the test period does not end until
late May. Figure 3 shows linear regression lines from the 20 one-hour equiva-
lence tests in the initial test series. The analyzer abbreviations shown in
these figures refer to PID (P), FID (F), digital integration (D), and analog
integration (A). All four detector-integrator channels read lower than the
reference method at low concentrations. As a further check of equivalence the
Method 106 integrated bag was also connected to the analyzer sample inlet. This
detects differences caused by variations in the Method 106 sampling rate or
sudden changes missed during the 150-second analyzer cycle. The results from
the integrated bag measurements (Figure 4) indicate the same bias pattern.
After a series of tests the problem was isolated to the Porasil C analyti-
cal column. The column has a short retention time for VC but the uncoated
silica was causing nonuniform adsorption. After replacing the analytical column
with a Porapak Q-S column the nonlinearity disappeared. After resetting the
system timing, the analyzer operated for about 1 month before the second equiva-
lence test.
During the second test series the direct monitor readings (Figure 5) show
some scatter but only random bias. The integrated bag readings (Figure 6) have
less scatter. An examination of the individual test runs indicates that the
integrated bag concentrations are biased toward the initial sample concentration,
caused by a higher-than-normal flow rate during the first few minutes as the
pressure within the bag enclosure stabilizes.
Data recovery efficiency is shown in Figure 7. The daily data sets are
checked for outliers by measuring the standard deviation of the differences be-
tween the simultaneous pairs and rejecting any pair which exceeds four standard
deviations. The over-range readings were caused by changes within the host
monomer plant which upset the sample dilution ratio. Data recovery for November
and December was affected by bad weather which produced repeated flameouts when
the plant instrument air supply failed. During February the sample gas input
overloads were so severe that the integrator skipped cycles. During April the
data logger malfunctioned when a power supply in the data logger analog inter-
face failed. The power supply was successfully replaced.
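A hedged sketch of this outlier screen, assuming the rejection is judged
against the mean of the daily differences (the four-standard-deviation limit is
from the text; the helper name and data layout are illustrative):

    import statistics

    def screen_pairs(pairs, limit=4.0):
        # pairs: list of simultaneous (analyzer, reference) readings for one day
        diffs = [a - b for a, b in pairs]
        mean = statistics.mean(diffs)
        sd = statistics.stdev(diffs)
        # keep only pairs whose difference lies within `limit` standard deviations
        return [p for p, d in zip(pairs, diffs) if abs(d - mean) <= limit * sd]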
Finally, Figure 8 shows the daily bias and precision results for the test
period following the analytical column replacement. The number shown below each
set of error bars is the average VC concentration for that month. The different
detector-integrator combinations show little overall bias with reference to the
analyzer's original FID with analog integration. A more detailed analysis will
be made after the test period is completed.
Figure 1. Original schematic of analyzer system.
Figure 2. Revised schematic of analyzer system (sampling valve, backflush and
analytical columns, PID and FID, zero/span/mid-scale calibration gases, data
logger, and modem).
Figure 3. Bias of analyzer direct readings for the first equivalence test
(percent bias for the PD, FD, PA, and FA channels vs. ppm VC from Method 106).
Figure 4. Analyzer bias when sampling from the integrated bags for the first
equivalence test (percent bias for the PD, FD, PA, and FA channels vs. ppm VC
from Method 106).
Figure 5. Bias of analyzer direct readings for the second equivalence test
(percent bias vs. ppm VC from Method 106).
Figure 6. Analyzer bias when sampling from the integrated bags for the second
equivalence test (percent bias vs. ppm VC from Method 106).
Figure 7. Average monthly data recovery of complete data sets (percent usable,
over-range, and outlier data, November through April).
Figure 8. Analyzer bias, compared to the FA channel, and precision (monthly
values for the PD, FD, and PA channels, November through April, with the average
VC concentration noted below each set of error bars).
TABLE 1. TEST FOR INTERFERENCES
Compound                     Effect
Dichloromethane              None
Acetaldehyde                 None
cis-1,2-Dichloroethylene     None
1,1-Dichloroethane           None
Chloroethane                 None
Chloroform                   None
Chloromethane                FID only
Isobutane                    Both
TABLE 2. EFFECT OF GENERAL
INSTRUMENT VARIABLES
Variable Effect
Backflush time Strong
Carrier pressure Strong
Main air pressure Strong-low
Valve air pressure Moderate-low
Oven air pressure None
TABLE 3. EFFECT OF SPECIFIC
INSTRUMENT VARIABLES
Variable Effect
Analog integration
Integration stop time Moderate-PID
Integration start time Weak
Digital integration
Baseline scan forward Weak
Noise factor None
Leading edge offset None
Trailing edge offset None
FID detector
Fuel pressure Strong
Flame air pressure None
OVERVIEW OF SEMICONDUCTING GAS SENSING DEVICES
R. H. Krueger
J. M. Fildes
Roy C. Ingersoll Research Center
Borg-Warner Corporation
Wolf & Algonquin Roads
Des Plaines, IL 60018
The measurement of low concentrations of gases in air and of chemicals in
water is becoming of increasing importance. For this reason, there is a need to
develop less expensive equipment to make these measurements. Low cost, quick and
easy-to-use sensors with sufficient selectivity, sensitivity and durability are
needed. Now, more and more attempts are being made to satisfy these requirements
through the use of sensors made from semiconductors and transistors. These
sensors have several potential advantages, such as miniaturization, speed, and
long service life.
We believe a new type of gas detector, an integrated silicon sensor or
sensor-on-a-chip, will be introduced into many new applications in the next few
years. The purpose of this paper is to give an overview of some of the present
work and predict future developments.
The object of the work on integrated sensors is to take advantage of the
advances made in recent years in the microfabrication of various potential
sensors, such as ion-selective field-effect transistors (ISFET) and
metal-oxide-semiconductor field effect transistors (MOSFET). Bergveld (1) was
first to propose a sensor based on a modification of a MOSFET where the gate
metal was replaced by an aqueous solution. This resulted in a device in which
the channel conductance appeared to be a function of the ionic concentration of
the solution. Bergveld called this device a CHEMFET, or ion-sensitive field
effect transistor. Since this work, other researchers - Zemel (6),
Lundstrom (3), Senturia (5), and Krey et al. (2) - have prepared and tested
these devices as sensors.
A metal-oxide-semiconductor field effect transistor (MOSFET) is shown in
Figure 1. The substrate material is p-type silicon with a source and drain of
n-silicon. The gate is a metal film evaporated over a thin insulating layer of
SiO2. With no voltage on the gate, the source and drain are insulated from
each other. When a positive voltage is applied to the gate, electrons are
attracted to the surface of the silicon. This produces a thin conductive surface
layer of induced n-type material (electrons) which now forms a channel connecting
the source and drain. The number of electrons is directly proportional to the
gate voltage so that the conductivity of the channel increases with gate voltage.
How does the MOSFET act as a sensor for a gas? One example is the
mechanism proposed by Lundstrom (3) for the detection of hydrogen. It is known
that a number of metals, such as palladium and platinum, adsorb and dissolve
hydrogen. This occurs at the gate surface. Lundstrom explains: "Some of the
hydrogen atoms diffuse through the thin metal film and are adsorbed onto the
metal-SiO2 interface. An equilibrium develops between the number of adsorbed
hydrogen atoms on the surface and those at the interface. The number of adsorbed
hydrogen atoms on the surface depends not only on the hydrogen present in the
atmosphere, but also on the other gases present. The hydrogen atoms at the
interface are polarized, and this gives rise to a dipole layer which corresponds
to a voltage drop (ΔV) which is added to the external voltage Vg." This is
shown in Figure 2 for a palladium MOSFET, and in Figure 3 for a palladium MOS
capacitor.
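Reference 3 also treats this effect quantitatively; assuming Langmuir-type
adsorption of hydrogen at the metal-SiO2 interface, the voltage shift is
commonly written in the form below, where ΔV_max is the saturation shift, c is
a temperature-dependent constant, and P_H2 is the hydrogen partial pressure
(notation ours, given here only as an illustration of the mechanism):

    \[
      \frac{\Delta V}{\Delta V_{\max}}
        = \frac{c\,\sqrt{P_{\mathrm{H_2}}}}{1 + c\,\sqrt{P_{\mathrm{H_2}}}}
    \]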
Figure 4 illustrates the voltage necessary to keep a small, constant drain
current on an n-channel Pd-MOSFET. This is called the threshold voltage and it
depends on the hydrogen pressure and temperature. Structures such as those shown
in Figures 2 and 3 have been used as sensors to detect gases in air at levels
as low as 5 ppb for H2, 50 ppb for H2S, and 100 ppb for ammonia.
Non-hydrogenous gases cannot diffuse through the gate and, therefore, they
cannot be detected by the sensor developed by Lundstrom. To detect carbon
monoxide, for example, Krey and co-workers (2) modified the transistor to have a
palladium gate with holes. In this way, carbon monoxide was able to reach the
metal-oxide interface. When carbon monoxide is adsorbed at the palladium-SiO2
interface, a dipole layer forms just as with hydrogen. Again, the threshold
voltage increases with carbon monoxide pressure.
Although these palladium gate MOSFETs look very promising as sensors for
hydrogen and carbon monoxide, there are some remaining problems. Lundstrom has
reported that storage in oxygen gives a slow response the first time this sensor
is exposed to hydrogen. In some instances, a drift in threshold voltage due to
so-called negative bias stress instability or hole trapping occurs. Another
problem is poor palladium adhesion caused by phase changes in the palladium under
high hydrogen pressure, even at low temperatures.
In addition to metal gate MOSFETs, Senturia (5) and researchers at Siemens
in Germany (4) have worked with polymer and organic semiconductor coatings
deposited on the gate region. Some minor success was achieved by the Siemens
researchers, but the objective of producing a usable gas sensor for gases such
as CO, CO2, SO2, and NO with sufficient sensitivity, reproducibility, and
stability was not achieved.
The relative value of organic and inorganic sensing materials is yet to be
defined. Many variables are important in this work - composition, film
thickness, surface chemistry, and topography; therefore, much more work needs to
be done to develop better gas sensors. Lundstrom, Zemel and others have outlined
several areas of research needed to improve the sensitivity and selectivity of
gas sensors:
1. Metals and polymer coatings on the gate
2. Doping of gate metals
3. Porous gates
4. Insulator thickness
5. Insulator composition: SiO2, Si3N4, Al2O3
6. Bulk composition
7. Device packaging.
The sensor work is interdisciplinary and now it appears that there is an
increasing emphasis being placed on bringing together workers in the fields of
chemistry, physics, and electronics. The performance of chemically sensitive
devices is limited only by the selection of the proper chemistry. This is a
relatively new area for research and development. It has a high risk, but the
potential payoff should also be high.
REFERENCES
1. Bergveld, P., 1970. Development of an ion-sensitive solid-state device for
neurophysiological measurements. IEEE Trans. Biomed. Eng., 17, pp. 70-71.
2. Krey, D., K. Dobos, and G. Zimmer, 1982/83. An integrated CO-sensitive MOS
transistor. Sensors and Actuators, 3, pp. 169-177.
3. Lundstrom, Ingemar, 1981. Hydrogen sensitive MOS-structures Part I:
Principles and applications. Sensors and Actuators, 1, pp. 403-426.
4. Plihal, Manfred, Hans Pink, Ludwig Treitinger, and Peter Tischer, 1980. Gas
sensitive semiconductor field effect sensors. NTIS Report No. BMFT-FB-T
80-091, 78 pp.
5. Senturia, Stephen D., 1980. Studies of conduction mechanisms in
gas-sensitive polymer films. Naval Research Report AD-A100995, 8 pp.
6. Zemel, J. N., 1975. Ion-sensitive field effect transistors and related
devices. Analytical Chemistry, 47, pp. 255A-266A.
Fig. 1. Cross Section of a MOSFET.
Fig. 2. Hydrogen Sensitivity of a Pd-MOS Transistor (from Ref. 3).
Fig. 3. Hydrogen Sensitivity of a Pd-MOS Capacitor (from Ref. 3).
Fig. 4. Response of Pd-MOSFET to Hydrogen (from Ref. 3).
EXAMINATION OF CALIBRATION PRECISION
CALCULATIONS AND PROTOCOLS FOR AIR MONITORING DATA
by James B. Flanagan
Rockwell International
Environmental Monitoring and Services Center
Chapel Hill, NC
The Clinical Environmental Laboratory (CEL) is an EPA research facility
located on the campus of the University of North Carolina in Chapel Hill. CEL has
been established for the study of the effects of exposure to priority pollutant
gases on human subjects. Standard air monitoring equipment is used to monitor and
control the pollutant gas levels during exposure sessions. These instruments are
calibrated every few days using automated 3-point calibrations. Old calibration
constants are superseded whenever a new calibration is performed.
This paper examines the hypothesis that by increasing the size of the
ensemble of calibration data points, the net accuracy and precision of exposure
session averages will improve as a result of the corresponding reduction in the
calibration confidence interval. Ozone instrument calibrations provide actual
data with which the theory is tested. It will be shown that drift and instrument
nonlinearity complicate predictions made on the basis of simple statistical
theory.
THE LINEAR REGRESSION MODEL
The following linear regression model is assumed for the instrument
calibrations:
    y_i = B x_i + C                                                  (1)
During calibration, the paired y_i and x_i values are assumed known, and the B
and C constants are calculated from the ensemble of data pairs. The following
assumptions are the basis for elementary linear regression applications:
1. Instrument response is linear in concentration.
2. The error variance of the output is independent of level.
3. Random errors are small compared to the total variation in levels.
4. The error in y is Normally and Independently Distributed.
5. Error in x is negligible relative to error in y.
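As a point of reference, the B and C constants of equation (1) are the usual
least-squares estimates; a minimal sketch follows (the 3-point calibration shown
is illustrative, not CEL data).

    def fit_line(xs, ys):
        # ordinary least squares for y = B*x + C
        n = len(xs)
        x_mean, y_mean = sum(xs) / n, sum(ys) / n
        s_xx = sum((x - x_mean) ** 2 for x in xs)
        s_xy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
        B = s_xy / s_xx
        C = y_mean - B * x_mean
        return B, C

    # e.g., a 3-point ozone calibration: concentration (ppm) vs. analyzer output (V)
    B, C = fit_line([0.00, 0.05, 0.10], [0.01, 0.51, 0.99])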
PRECISION ESTIMATION FOR LINEAR REGRESSION
The confidence interval is a statistical estimate of the region about the
calibration line in which a specified percentage (e.g., 95%) of additional (x,y)
points taken under specified conditions would lie. Points taken subsequent to
calibration are referred to in this paper as "probe" points. A "probe" of the
calibration precision may be either a single data point or an average of a number
of data points. In order to correctly assess the confidence interval about a
regression line, the following additional conditions must be specified:
1. Estimating the Population Variance. If an instrument is calibrated
without any knowledge of previous precision history, it is necessary to
estimate the instrument's variance from the calibration data itself. In this
case, it is necessary to use Student's t statistics to estimate the
confidence region.
If (1) the instrument has a known history from which a prediction of its
variance can be made, and (2) it can be statistically shown that the
calibration is representative of the same variance, then the confidence
interval can be derived using Gaussian probability tables.
2. "Probing" the Confidence Interval. Calibration curves are made
using individual data points, such as 2-minute averages. The confidence
interval for subsequently acquired "probe" points is dependent upon the
method used to acquire these points. For example, the confidence interval for
an hourly average will be substantially narrower than the confidence interval
for a single 2-minute average.
The following four equations represent the different cases:
Population Variance Unknown; Single Sample "Probe" of Precision:
    p = t_{n-2} · s_y · (1 + 1/n + (x - x_m)^2/S_xx)^(1/2)           (2)
Population Variance Unknown; q-sample Mean "Probe" of Precision:
    p = t_{n-2} · s_y · (1/q + 1/n + (x - x_m)^2/S_xx)^(1/2)         (3)
Population Variance Controlled; Single Sample "Probe" of Precision:
    p = z · sigma · (1 + 1/n + (x - x_m)^2/S_xx)^(1/2)               (4)
Population Variance Controlled; q-sample Mean "Probe" of Precision:
    p = z · sigma · (1/q + 1/n + (x - x_m)^2/S_xx)^(1/2)             (5)
where,
s_y     - sample standard deviation of y about the regression line,
sigma   - population standard deviation of the y variate,
z       - Gaussian probability statistic for the specified probability level,
t_{n-2} - Student's t value for n-2 degrees of freedom,
x_m     - mean value of x used in the regression line data set,
x       - any particular observation of x,
S_xx    - sum of squares of (x - x_m) for all x values,
q       - number of points averaged for the "probe",
n       - number of calibration points used in the regression line,
p       - confidence interval half-width.
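A small computational sketch of equations (2)-(5) follows; the t (or z)
statistic is supplied by the caller rather than computed, and all numeric
values are illustrative assumptions, not CEL results. Setting q = 1 reproduces
the single-sample cases (2) and (4).

    import math

    def ci_half_width(stat, s, n, q, x, x_mean, s_xx):
        # stat = t_{n-2} or z; s = s_y or sigma; q = points averaged in the "probe"
        return stat * s * math.sqrt(1.0 / q + 1.0 / n + (x - x_mean) ** 2 / s_xx)

    # single 2-minute "probe" point (eq. 2) vs. a 30-point session mean (eq. 3)
    p_single = ci_half_width(2.23, 0.002, 12, 1, 0.08, 0.05, 0.005)
    p_mean = ci_half_width(2.23, 0.002, 12, 30, 0.08, 0.05, 0.005)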
TESTING THE STATISTICAL PRECISION THEORY
Calibrations of CEL gas analyzers are done approximately every other day and
results are archived. This provides a data set which can be used to test the
theory of the single-point "probe" of calibration precision. First, a data set is
designated as the calibration ensemble; data points from the next succeeding
calibration are designated as "probe" data points. Next, differences are
calculated between the calibration curve and the "probe" points. Finally, the
confidence interval of the method can be investigated by accumulating ensembles of
these differences and evaluating these by appropriate statistical means. If the
basic assumptions listed above hold exactly, the statistical characteristics of
the ensembles of differences should be exactly described by equations (2) - (5).
The actual ensembles of experimental data are constructed by taking a "moving
window" j calibrations wide to calculate the B and C constants. The data points
for the calibration immediately following are taken as "probe" points, and the
differences added to the respective ensembles for each concentration/voltage
level. The window is then moved one calibration forward in time, and a new set of
j calibrations consisting of j-1 of the previously used calibrations and one new
point is used to derive a second set of calibration constants. The window is
moved in this manner until N differences are collected.
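The moving-window bookkeeping can be sketched as follows, assuming the
calibrations are stored chronologically as lists of (x, y) points at the zero,
midpoint, and span levels; the data structure and function name are illustrative
assumptions.

    from collections import defaultdict
    from statistics import linear_regression   # Python 3.10+

    def probe_differences(cals, j):
        # cals: chronological list of calibrations, each a list of (x, y) points
        diffs = defaultdict(list)               # ensembles keyed by level x
        for k in range(len(cals) - j):
            window = [pt for cal in cals[k:k + j] for pt in cal]
            fit = linear_regression([x for x, _ in window], [y for _, y in window])
            for x, y in cals[k + j]:            # the next calibration is the "probe"
                diffs[x].append(y - (fit.slope * x + fit.intercept))
        return diffs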
In the absence of bias, the mean squared error should equal the variance, and
the ensemble mean would be zero. In the presence of bias, the mean square error
exceeds the variance and the mean error is nonzero.
The standard deviation of the ensemble of differences should be related to
the confidence interval. For a single-point "probe," such as the case here, the
"theoretical" standard deviation would be:
    E.S.D. = sigma · (1 + 1/n + (x_i - x_m)^2/S_xx)^(1/2)            (6)
where,
sigma  - the true (unknown) standard deviation of the y variate,
E.S.D. - expected standard deviation of an ensemble of deviations
         at concentration level x_i,
n      - number of individual points used in the regression:
         if j three-point calibrations are combined, then n = 3j,
x_i    - x-value: zero, midpoint, span, etc.
Figure 1 shows a plot of mean and standard deviation of the ensemble of
differences computed as a function of number of combined calibration curves, j,
used to derive B and C. The data are for a Bendix Ozone analyzer Model 8002,
0.1 ppm range, using the j most recent calibrations to form the ensemble of
calibration points for the regression. Each point plotted is based upon an
ensemble size of N=12 difference values. Also shown is the theoretical curve
(E.S.D.) for the standard deviation given by eqn. (6). The solid curves in the
figure refer to the expected standard deviation of eqn. (6), with the value of
"sigma" being obtained by forcing the line through the first point. Examination
of Figure 1 yields the following observations:
1. Agreement between the theoretical and experimental standard
deviation curves as a function of j, the number of 3-point calibrations
forming the regression set, is moderately good. This gives some confidence
in the applicability of the theory represented by equations (2) - (5) to the
CEL data.
2. The mean value of the differences increases as the number of
calibrations used to derive the regression coefficients B and C increases.
Furthermore, for the midpoint data, the mean difference starts out negative,
goes through zero and becomes positive at larger j values. Two effects
appear to cause this behavior:
a. Systematic drift leads to increasing mean error when older
calibrations are included in the calculation of the regression lines.
b. Nonlinearity of the instrument biases the mean deviation for
the midpoint data.
3. The assumption of independence of variance from the concentration
level is clearly violated. Physically, this probably arises from random flow
and pressure errors which are simply "amplified" more at higher concentration
levels.
PRECISION AND ACCURACY FOR SESSION AVERAGES
The work above used only a single point "probe" of the calibration precision;
we have not yet addressed the precision for session averages, which is the actual
quantity of interest. By using equations (2) - (5) above, we may make an estimate
of the confidence interval when the "probe" is a mean of many 2-minute averages
which comprise a session average. The correction factor for converting from a
confidence interval for 2-minute averages to a confidence interval for session
averages can be expressed as the ratio of eqn. (5) to eqn. (4), assuming that the
session average is composed of q data points:
                           (1/q + 1/n + (x - x_m)^2/S_xx)^(1/2)
    Ratio of Improvement = ---------------------------------------          (7)
                           (1 + 1/n + (x - x_m)^2/S_xx)^(1/2)
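As a worked illustration of equation (7), assume a session average of q = 30
two-minute points, n = 18 calibration points, and x near x_m so that the
(x - x_m)^2/S_xx term is negligible; these values are hypothetical, not taken
from the CEL data base.

    import math

    q, n = 30, 18
    ratio = math.sqrt(1/q + 1/n) / math.sqrt(1 + 1/n)   # about 0.29

Under these assumptions the confidence interval for the session average would be
roughly a third the width of that for a single 2-minute point.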
This ratio of improvement is applied directly to the ensemble Standard
Deviations shown in Figure 1 to obtain the corresponding Standard Deviations in
Figure 2. Since the bias may be assumed to be unaffected, the Mean Errors plotted
in Figures 1 and 2 are identical. Total Error in Figure 2 is obtained as the
square root of the sum of the squares of the corrected Standard Deviation and the
Mean Error.
Since the Standard Deviation in Figure 2 is decreased, the bias error has
more relative importance in the Total Error. Thus, the relative effect of drift
and nonlinearity on precision for session averages is significantly greater than
for individual 2-minute averages.
In summary, the data from the CEL gas data base illustrate the improvement in
net precision and accuracy attainable by using larger calibration ensembles.
Although combining historical calibration data points reduces the width of the
calibration confidence interval, the systematic bias caused by drift and
nonlinearity reduces the amount of improvement in total accuracy attainable.
REFERENCES
1. Quality Assurance Handbook for Air Pollution Measurement Systems,
Vol. I, Principles. U.S. Environmental Protection Agency, EPA-600/9-76-005.
2. Acton, Forman, Analysis of Straight-Line Data, John Wiley and
Sons, Inc., New York, 1959.
Figure 1. Mean error, standard deviation, and net error of the ensembles of
differences at the zero, midpoint, and span levels, plotted against window size
(number of combined calibrations).
Figure 2. Mean error, standard deviation corrected for session averages, and
total error, plotted against window size (see text).