-------
[Table: matrix of sampling and analytical methods versus the studies reviewed; the X-mark matrix and most row and column labels could not be recovered from the source. The only legible method labels are Inductively Coupled Plasma (ICP) and Pyrolysis (carbon analysis).]
56
-------
57
-------
Semivolatile Organics
The class with the most limited coverage among the studies reviewed is
semivolatile organics. These samples can be collected on polyurethane foam (PUF) or on
XAD-2, another sorbent compound. Analysis is by gas chromatography/mass spectrometry
(GC/MS) or by GC/MS coupled with high performance liquid chromatography (HPLC).
XAD-2 is generally preferred when biological testing is to be performed, as it has been in
the Integrated Air Cancer Project (IACP). The IEMP Denver study used PUF sampling and
GC/MS analysis to cover this class of pollutants. One reason that the sampling of
semivolatile organics is so limited is cost: analyses can range from $1,000 to $2,000 per
sample, as shown in the documentation for the Denver study. (Machlin, 1986) The trend
toward sample compositing for semivolatiles is an attempt to extend the number of hours
integrated into each sample to stretch analytical dollars within the generally tight budgets
of applied programs.18
Metals
Metals have been included in many urban studies. In the South Coast MATES and
Urban Air Toxics Monitoring Studies, high volume samplers were used to collect metals
samples, a different procedure from that used by the Denver studies, which collected fine
particulate mass using fine particulate samplers. Atomic absorption spectrometry,
neutron activation analysis, inductively coupled plasma spectroscopy, and X-ray
fluorescence spectroscopy are the methods generally preferred for metals analysis.
Hexavalent chromium (Cr+6), an important contributor to urban air toxics risk,
has not been distinguished from total chromium in most ambient measurements. Cr+6 was
reported in ambient samples collected as part of the SCAQMD MATES project. (Shikiya,
1988) Many studies that have analyzed Cr+6 risk have either (1) assumed a certain
percentage of total ambient chromium is Cr+6 (e.g., 10 percent or 100 percent) or
(2) modeled Cr+6 and Cr+3 emissions distinctly.
18 See Section 2.8 for a more detailed discussion of sample compositing.
58
-------
Aldehydes/Ketones
Formaldehyde and the remaining aldehydes/ketones were addressed by 2,4-
dinitrophenylhydrazine (DNPH) cartridges and HPLC analysis in Denver, Staten Island,
and the Kanawha Valley studies and in the Urban Air Toxics Program.
The results of the Denver and Staten Island studies were not available at this
writing. The Kanawha Valley study, however, encountered serious contamination problems
under this method, invalidating the use of its samples. This is not to suggest that there is
a general flaw with DNPH cartridges or HPLC analysis, but the Kanawha report does not
discuss the cause of, or possible corrections for, this problem. This problem has been
overcome in the Urban Air Toxics Monitoring Program by careful preparation of the
sample matrix before use.
2.7 Evolving Monitoring Technologies
New developments in technology for analyzing air toxics offer the promise of
increased breadth and depth for air toxics studies, with greater reliability and possibly
lower costs. Two major EPA research efforts are providing measured data that can help
guide present and future applied studies of the urban soup. These are the Total Exposure
Assessment Methodology (TEAM) and the Integrated Air Cancer Project (IACP). In
addition, two specialized monitoring techniques that were used in the studies under
review are briefly discussed later in this section—the TAGA® and Remote Optical Sensor
(ROSE) systems.
The TEAM studies have collected extensive measured data sets of personal
exposure and ambient concentrations of selected toxic air pollutants. These studies, of
which the principal ones were conducted in New Jersey and California,19 measured
exposures to 20 toxic air pollutants for a total of over 600 subjects. The TEAM data base
19 Additional field studies have been conducted in North Dakota, North Carolina and
most recently, in Baltimore, Maryland, a study still in progress as of this writing.
More studies are planned.
59
-------
provides measured ambient and personal data to display more fully the range of personal
exposures than can be revealed by the more traditional approach of using measured or
modeled ambient concentrations as a surrogate for personal exposure. Chapter 4 more
fully describes the exposure data collected in the TEAM studies. This section briefly
discusses TEAM'S measured data set.
The TEAM study personal samples were collected by human subjects wearing
vests containing a Tenax® cartridge and a sampling pump; samples were collected over two
12-hour periods for each subject.20 Gas chromatography/mass spectrometry (GC/MS) has
been used for analysis of 20 preselected compounds. In parallel with the personal
sampling, each TEAM study took ambient samples using traditional fixed site samplers.
In each case, approximately 20 liters of air were collected during the sampling period and
the 12-hour integrated averages were compiled. Tenax® was used for both personal and
ambient samples in the early TEAM studies. The recent Baltimore TEAM study employed
a combination of Tenax® (for personal sampling) and canisters (for fixed stations).
During the TEAM studies, several criteria were used to select the target chemicals
(EPA, 1986). These criteria included:
1. Toxicity, carcinogenicity, mutagenicity;
2. Production volume;
3. Presence in ambient air or drinking water;
4. Existence of National Bureau of Standards permeation standards; and
5. Amenability to collection on Tenax®.
20 Despite the problems with Tenax discussed by Walling (Walling 1984), TEAM studies
continue to rely on Tenax for at least two reasons. First, Tenax appears to be
acceptable for characterizing distributions of air concentrations, which is an important
output of the TEAM studies; Tenax problems are most severe when the goal is to
measure minor variations in concentrations over time, as is typical with fixed-station
programs. Second, no satisfactory substitute for Tenax appears to be available for
portable use in personal monitoring vests.
60
-------
A total of 20 compounds have been routinely measured by TEAM, but 11
compounds were given particular emphasis during interpretation because: (1) this subset
was identified as being the most amenable to the Tenax®/GC/MS sampling and analytical
techniques, and (2) these pollutants were identified as being of greatest interest from a
health perspective. The pollutants emphasized were:
1. Chloroform
2. 1,1,1-Trichloroethane
3. Benzene
4. Carbon tetrachloride
5. Trichloroethylene
6. Tetrachloroethylene
7. "Styrene
8. meta or para-Dichlorobenzene
9. Ethylbenzene
10. o-Xylene
11. meta or para-Xylene
The Integrated Air Cancer Project
Drawing on the combined expertise of four EPA research laboratories, the IACP
program probably provides the best opportunity to further develop monitoring methods
for toxic air pollutants. The broad nature of IACP, with the goal of identifying species
most likely to be carcinogenic and their sources, requires development of monitoring
methods to better identify specific carcinogens in a wide range of classes, including VOCs,
semivolatile/particulate organic compounds, and inorganic pollutants. In addition to
developing monitoring methods for directly characterizing pollutant species, IACP is also
advancing the state of the art in monitoring methods for biological sampling and data
collection to support receptor modeling applications.
61
-------
Special Monitoring Techniques
This section briefly describes the TAGA® and ROSE® systems.
TAGA®
The TAGA® system is a mobile mass spectrometry/mass spectrometry (MS/MS)
system that has been used to collect near real-time air quality concentrations. It can be
driven to many pre-selected sites or used to search more randomly for "hot spots." Some
limited sampling was performed in Santa Clara (EPA, 1984)
and Denver (Dumdei, 1986) using this system over a one-week period in each study area.
Problems with detection limits lessened the ability of this system to meet the
objectives of the Denver and Santa Clara studies—most compounds at most sites were
present in the ambient air at levels well below the detection limits of the TAGA®,
limiting sampling to major intersections during rush hour in Denver and some vent
sampling in Santa Clara. Data gathered were not useful in supporting the broader objectives
of these studies.
ROSE®
The Remote Optical Sensing System (ROSE®) is a portable infrared sensor housed
in a van. It was used to monitor at selected municipal and hazardous waste landfills in
New Jersey as part of the Philadelphia study (EPA, 1986). As with the TAGA® system,
high detection limits undermined the effort; the data were not usable for meeting the
study's stated objectives.
2.8 Insights into the Use of Monitoring in Air Toxics Programs
Cost-Saving Measures
Since monitoring programs are often the most expensive component of an urban-
scale air toxics study, they offer the greatest potential for employing cost-saving
measures. Perhaps the key point to consider when designing a cost-effective monitoring
program is that the goal is to best characterize air quality, often for a metropolitan area.
62
-------
Assuming funds are fixed, tradeoffs exist between the coverage of the sampling
program (i.e., the number of sites and days of sampling per site) and the data quality
objective of each sample.
If cost-saving measures can be instituted, and if the data quality of each sample
can be documented to be within limits defined as acceptable to the project, methods such
as those described below may help improve spatial and temporal coverage.
Sample Compositing—Sample compositing (combining short-term samples before
analysis to increase the temporal representativeness of each composited sample) can be an
attractive concept, especially if the major concern is with carcinogenic risks, where annual
averages are the most important statistic of concern. Although metals have been
composited effectively for many years and the concept has been used with source testing,
only during the past several years has compositing gained acceptance as a viable potential
option for ambient air sampling.
Instead of combining short-term (e.g., <24 hr) samples after they are collected, a
variation of sample compositing involves the collection of long-term, but intermittent,
samples. An example of this might be for each sample to be collected over a period of
several days or a week, but actually "pulling" sample air for only 15 minutes within each
hour of sample duration. Thus, the sample compositing is being done, in effect, by the
sampling device rather than by the analyst in the lab. One concern with this approach is
the potential for pump failure, which could result in substantial loss in coverage.
Another concern is the possibility for sample loss or degradation over the long collection
period or during subsequent storage.
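To make the tradeoff concrete, the following sketch (purely hypothetical numbers; actual flow rates and schedules vary by method) computes the effective duty cycle and collected air volume for the week-long intermittent schedule described above.

```python
# Hypothetical intermittent-sampling schedule: a one-week sample that pulls
# air for only 15 minutes of each hour, compositing in the sampler itself.

FLOW_RATE_LPM = 1.0           # assumed pump flow rate, liters per minute
MINUTES_ON_PER_HOUR = 15      # sampling duty within each hour
HOURS_PER_SAMPLE = 7 * 24     # one-week sample duration

duty_cycle = MINUTES_ON_PER_HOUR / 60.0
volume_liters = HOURS_PER_SAMPLE * MINUTES_ON_PER_HOUR * FLOW_RATE_LPM

print(f"Duty cycle: {duty_cycle:.0%}")
print(f"Air collected: {volume_liters:.0f} L representing {HOURS_PER_SAMPLE} h")
# One laboratory analysis now represents 168 hours of coverage rather than 24,
# at the cost of the pump-failure and storage-loss risks noted above.
```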
To date, sample compositing has only been used in laboratory testing and was not
used in any of these studies, with the exception of semivolatile organic compounds in the
Denver study. While further documentation appears to be needed to better describe the
tradeoffs in terms of accuracy, especially for VOCs, sample compositing appears to offer
63
-------
the potential for making effective air toxics monitoring programs more affordable at the
local level.
Indicator Pollutants—Another potential cost-saving measure is the use of indicator
pollutants to prioritize and/or limit subsequent sample analysis. The concept here is to
identify an easily measured substance or property that reasonably relates to other
substances or properties that are more expensive to measure. Thus, one can rank the
samples by the value of the indicator pollutant to isolate a subset of the samples that
warrants further analysis. In the Denver IEMP study, for example, total organic carbon
will be used to rank semivolatile samples collected at each site. The groups of low,
medium, and high concentrations based on total organic carbon will be analyzed in
composited batches. This offers the benefit of performing individual analysis for the
upper end of the distribution, which may be of greatest interest in terms of short-term
exposures, and also provides a means of quality assurance of the compositing step.
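A minimal sketch of this ranking step follows; the sample IDs and total organic carbon values are invented, and the Denver study's actual cut points are not given in the source.

```python
# Hypothetical ranking of semivolatile samples by an inexpensive indicator
# (total organic carbon) into low/medium/high batches for composited analysis.

toc = {   # sample ID -> total organic carbon measurement (invented values)
    "S01": 4.2, "S02": 9.8, "S03": 1.1, "S04": 6.5,
    "S05": 12.3, "S06": 2.7, "S07": 7.9, "S08": 3.4,
}

ranked = sorted(toc, key=toc.get)            # sample IDs, low to high TOC
n = len(ranked)
batches = {
    "low": ranked[: n // 3],
    "medium": ranked[n // 3 : 2 * n // 3],
    "high": ranked[2 * n // 3 :],
}

for label, ids in batches.items():
    print(label, ids)
# The "high" batch, the upper end of the distribution, can also be analyzed
# sample by sample where short-term exposures are of interest.
```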
Periodic-Network Enhancement—This technique is based on the concept of a
routinely operated core monitoring network (such as three sites for a metropolitan area),
which can be expanded to a larger network (such as eight to ten sites for a metropolitan
area) to collect very detailed data for a short time period every two to three years. Having
a screening-level monitoring program with a larger network in place aids the selection of
optimal sites to address local needs. Performing follow-up monitoring every two to three
years for a one-season program could provide documentation concerning changes in the
relationship between the core and supplemental sites. This approach provides the benefit
of an expanded network to characterize spatial distributions, without the continuous
collection of data at the supplemental sites and attendant costs. None of the studies have
applied this concept to date.
Selective Analysis—Another approach to reduce the costs of monitoring programs
is the concept of selective analysis. If samples can be collected at a fraction of the costs
of analysis (as is the case for many methods), a larger data set can be collected than
analyzed; i.e., not every sample that is collected needs to be analyzed. By this
approach, samples can be selected to fill out a matrix of conditions of greatest importance
64
-------
to a study; for example, worst-case dispersion, flow from specific facilities, or
nonprecipitation days. By having broad coverage, the objectives of a monitoring program
may be achieved by selecting those samples that best meet prespecified objectives. If
estimates of annual averages are to be based on user-selected days, however, the burden is
on the user to demonstrate that the averages are not biased by selecting unrepresentative
days. For example, a weighting scheme may be needed to effectively represent long-term
averaging. The South Coast study, which is using a form of selective analysis for its
monitoring program, could benefit by describing the representativeness of the measured
data set to estimate annual average conditions.
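The weighting idea can be illustrated as follows; the meteorological regimes, their annual frequencies, and the concentrations are all invented for illustration.

```python
# Hypothetical weighting scheme: weight the concentrations measured on
# user-selected days by the annual frequency of the condition each represents.

# condition -> (annual frequency, mean concentration over analyzed days)
conditions = {
    "worst_case_dispersion": (0.10, 8.4),   # invented values
    "typical":               (0.75, 2.1),
    "clean_onshore_flow":    (0.15, 0.9),
}

weighted = sum(freq * conc for freq, conc in conditions.values())
naive = sum(conc for _, conc in conditions.values()) / len(conditions)

print(f"Frequency-weighted annual estimate: {weighted:.2f}")
print(f"Naive average of selected days:     {naive:.2f}")
# Without weights, deliberately oversampled worst-case days bias the
# annual-average estimate upward.
```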
The "Supersite" Concept—As part of the Philadelphia study, a quantitative
methodology was developed to select the optimal number and location of monitoring sites
based on dispersion modeling of available emissions data (see Appendix B). The goal was
to maximize the use of available information to select the core sites that best met project
goals. Air toxics monitoring is expensive, especially the cost of maintaining
long-term sites that might be used to help track trends in urban air toxics levels.
Allocating a fraction of these resources to help support the selection of sites that provide
the most independent data is one way of obtaining the most information per monitoring
dollar spent.
This approach can be briefly summarized as follows (a sketch of the correlation and
elimination steps appears after the list):
1. Model the available emissions data for the key pollutants to be addressed
in the monitoring program. Perhaps 5 to 10 pollutants could be used to
guide the selection of monitoring sites. Normalized modeling with tight
resolution would be recommended, such as a 1 km grid spacing.
2. Based on the initial results, select a long list of 25 to 30 potential site
areas (at the block or neighborhood level).
3. Compute the correlation of modeled hourly or daily ambient
concentrations among all sites for each of the targeted pollutants based on
meteorological and emissions data that are representative of seasons in
which monitoring programs will be done.21
21 Summary statistics could also be considered in evaluating the independence of
sites.
65
-------
4. Select a maximum correlation, such as a correlation coefficient in the range
of 0.6 to 0.8. For purposes of illustration, consider 0.7 as the desired
cutoff.
5. For each pollutant, group all sites that are correlated by 0.7 or higher as a
cluster.
6. Develop a scoring technique that identifies sites that provide the most
independent information. (Refer to Appendix A for the details of the
proposed scoring technique.)
7. Develop an iterative procedure that eliminates sites that provide the least
amount of information, until the only sites left are the most independent.
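A sketch of steps 3 through 7 for a single pollutant follows; the synthetic data and the simple redundancy score stand in for the modeled concentrations and for the scoring technique detailed in Appendix A.

```python
# Sketch of steps 3-7: correlate modeled concentrations across candidate
# sites, then iteratively eliminate the most redundant site.
import numpy as np

rng = np.random.default_rng(0)
n_days = 90
common = rng.lognormal(sigma=0.5, size=n_days)   # shared urban background signal
# Hypothetical modeled daily concentrations: four sites dominated by the shared
# background (mutually redundant) and four driven by independent local sources.
modeled = np.array(
    [common + 0.1 * rng.normal(size=n_days) for _ in range(4)]
    + [rng.lognormal(sigma=0.5, size=n_days) for _ in range(4)]
)

CUTOFF = 0.7                        # step 4: maximum allowed correlation
corr = np.corrcoef(modeled)         # step 3: site-by-site correlation matrix

sites = list(range(len(modeled)))
while True:
    sub = corr[np.ix_(sites, sites)]
    # Redundancy: how many other remaining sites each site exceeds the cutoff with.
    redundancy = (np.abs(sub) >= CUTOFF).sum(axis=1) - 1   # subtract self-correlation
    if redundancy.max() == 0:       # remaining sites are mutually independent
        break
    sites.pop(int(redundancy.argmax()))   # step 7: drop the least-informative site

print("Most independent candidate sites:", sites)
```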
The Philadelphia Study presented the opportunity to test this approach by using
this methodology for the actual monitoring sites, and then comparing the model-based
results with rankings based on the actual measured data. Figure 2-2 shows the optimal
four sites based on modeling and monitoring. As shown, there was overlap on two of the
four sites. Appendix B provides a more detailed description of this potential cost-saving
measure.
Issues
Accuracy, Precision, and Representativeness—It is a common misconception that
measured air quality data are inherently "better" than modeled concentrations. As the
experience of these studies shows, measured data can have as many limitations as
modeled estimates of ambient air concentrations and can be equally misleading—more so,
perhaps, because of their presumptive validity. Especially in air toxics studies, where
costs tend to be high and funding limited, the number of sites and samples is often very
small. Data sets developed to characterize urban-scale concentrations may often not be
representative either in space or in time. In addition, the uncertainties of sampling and
analytical methods, as well as the errors that can be present in identifying and quantifying
pollutants, can often reduce the effectiveness of monitoring approaches.
This is not to say that monitoring should not be an important element of air toxics
studies, but rather to suggest that monitoring should, in most cases, (1) be carefully
planned in relation to its own study objectives and (2) be integrated as much as possible
66
-------
[Figure: map of Philadelphia County identifying the optimal top four sites from the correlation analysis, with a legend distinguishing sites selected based on measured data, modeled data, or both. Sampling sites: 1. Naval Hospital; 2. Goodyear; 3. FD 16; 4. FD 7; 5. St. John; 6. Lardner's Point; 7. FD 71; 8. FD 36; 9. Sam Baxter; 10. NE Airport; plus a roadway meteorological/monitoring site.]
FIGURE 2-2 OPTIMAL TOP FOUR MONITORING SITES FOR PHILADELPHIA PROGRAM
BASED ON MODELED AND MEASURED DATA
67
-------
into complementary analytic processes—namely, emissions inventory development and
air dispersion modeling.
Vulnerability of Programs to Project Schedules—In a number of instances,
monitoring results have been to some degree compromised by overall program schedules.
Especially for estimating carcinogenic risk, where the key parameter of concern is annual
average concentrations (either for the population as a whole or for the maximum exposed
individual), monitoring programs of single seasons (or even shorter periods) must be
carefully reviewed to ensure that the risks are not unduly biased by unrepresentative
meteorological conditions or seasonal variations in emission rates from key sources or
source categories. Ideally, concentrations should be estimated over a period of more than
one full year, as has been routinely possible with the criteria pollutants. Unfortunately,
air toxics funding tends to be provided on a special project basis, and few projects can
afford more than selective seasonal sampling programs. Much has been done to
compensate for this perennial problem through analysis of annual meteorological data,
links to air modeling efforts, and the like, but the issue remains a significant one.
Lack of Comprehensive Pollutant Coverage—Limitations in funding, lack of
suitable ambient sampling and analytic methods for some pollutants, and lack of
available potency data mean that all monitoring studies must concentrate on a subset of
potential pollutants of concern. The subset may be limited to carcinogens only, to certain
classes of pollutants (metals, volatiles, etc.), or to a selected group of substances from a
variety of classes; however, the effect is the same—a possible underestimation of
exposures and risks from certain pollutants. For example, even in studies such as IACP or
the Denver study, where the goal is broad pollutant coverage, the limitations of the state
of the art for measuring many compounds in the ambient air are substantial. Although
measurement techniques continue to improve, only those compounds that have been
validated for the selected methods can be accurately quantified.
Compromise Among Program Objectives—Most, if not all, of the studies
reviewed were compelled to make compromises among multiple monitoring objectives,
68
-------
with varying effects on project results. In many cases, these compromises did not
seriously hinder the quality or usefulness of study results, but in others they apparently
have. There is a tendency to attempt to respond to as many pertinent questions as
possible. Serious problems can occur when compromises prevent attainment of one or
more primary goals.
Use of New Technologies—Studies that have attempted to apply evolving
laboratory techniques to operational field studies have frequently failed to meet their
original program objectives. The recurring theme is that the detection limits for which
these technologies were developed may not be appropriate for field studies of ambient air.
Although there is every need to evolve new laboratory techniques for operational studies,
projects on limited budgets should be skeptical of applying innovative sampling and
analysis techniques in the absence of concrete information on their performance
(especially detection limits and interference and contamination problems), reliability,
suitability to planned field conditions, and costs.
Spatial Coverage—Generally, measured data are collected to help meet the
objective of characterizing concentrations within a large area. It is impractical, however,
to equip a large number of sites to meet this goal because of the high cost of monitoring
most noncriteria pollutants. The range in the network size among these studies was 1 to
13 sites in an urban area. For applied studies, therefore, the analyst must extrapolate data
from a limited number of sampling points to represent a much larger area.
The extrapolation of measured data to a geographic area broader than specific
monitoring sites needs to be carefully considered. For example, the 13-station network
for the Staten Island study has by far the most extensive spatial coverage among the
studies reviewed. The 13 sites, however, are a very small subset of the points in a study
area that likely has large concentration gradients in some subsections. The question is
where these specific sites fall within the spatial distribution of concentrations.
Dispersion modeling can help shed light on this problem, which again points to the
need to more fully integrate monitoring and modeling components of such studies.
69
-------
Dispersion modeling provides the opportunity in some cases to estimate likely
concentration gradients throughout an urban area prior to establishing a monitoring
network. Interpreting the modeled results, including evaluating exposure correlation
among candidate monitoring sites, may improve the site selection process. Refer to
Appendix B for further details.
Temporal Coverage—Ambient measurements will reflect tremendous variability as
a function of time because of meteorological and emissions variability. This is most
significant for major point source releases, as was shown in the measured data for
Kanawha Valley where the industrial clusters dominated impacts for selected compounds.
It was also shown in the Philadelphia IEMP study (for 1,2-dichloropropane and
1,2-dichloroethane), where a major wastewater treatment plant was a dominant emitter that
could show two orders of magnitude difference in ambient concentrations as a function of
wind flow alone. Area source-dominated receptors are less affected, but seasonal
variability and diurnal patterns in emissions and meteorology will nevertheless produce
substantial variations in concentrations.
For carcinogens, the goal in these studies was to estimate annual average
concentrations, so the question becomes one of how well the short-term measurements
for each site represent annual average conditions. The IEMP Philadelphia study made this
comparison during a 30-day winter sampling program. Concentrations were modeled to
match the days of the monitoring period, and also for a five-year climatological average
data set. These modeled data were compared with the measured data set and found to be
in reasonably close agreement.
In the studies reviewed, the sampling program ranged from 5 days per year in Santa
Clara up to 50 days in Staten Island. Intuitively, it appears that Santa Clara data would
be totally inappropriate for use in estimating annual average conditions. Staten Island, on
the other hand, should reasonably represent annual averages at these sites. Modeling
similar to what was done for the Philadelphia study could help confirm this hypothesis.
70
-------
Acute noncancer risks, because of their relation to short-term exposures, pose
substantial challenges in terms of temporal coverage. Here, the problem involves
characterizing peak concentrations, such as over a 1- to 24-hour period. None of the studies
to date provides sufficient measured data to effectively evaluate acute exposures.
Methods Limitations—It appears that methods for measuring toxic air pollutants
are becoming more standardized over time, but there still are no standard methods such as
those that exist for criteria pollutants. Until standardization is achieved, inherent
variability in the data collected by different studies will persist. This may inadvertently
lead to invalid comparisons of the resulting risk estimates across different metropolitan
areas.
71
-------
Chapter 2
References
Barcikowski, W., 1988. South Coast Air Quality Management District. Letter and
attachments to Tom Lahre, U.S. Environmental Protection Agency, Research Triangle
Park, North Carolina. October 24, 1988.
Dumdei, Bruce, David Mickunas and James Zoldak, 1986. "Characterization of the Urban
Plume in Denver, Colorado: February through March 1986," prepared for EPA Region VIII,
1986.
EPA, 1984, Keith Hinman et al., "Santa Clara Study Report," EPA Integrated
Environmental Management Division, Washington, D.C., 1984.
EPA, 1986, Final Report of the Philadelphia Integrated Environmental Management
Project. U.S. Environmental Protection Agency, Washington, D.C.
Machlin, Paula R., 1986. "Denver Integrated Environmental Management Project:
Approach to Ambient Air Toxics Monitoring," EPA Region VIII, Denver, Colorado, 1986.
Shikiya, et al., 1988. Analysis of Ambient Data From Potential Toxics "Hot Spots" in the
South Coast Air Basin: Multiple Air Toxics Exposure Study. Working Paper No. 5,
September 1988.
Wallace, L., USEPA, Office of Research and Development, personal correspondence,
March 1988.
Walling, Joseph F., 1984. "Experience from the Use of Tenax in Distributed Ambient Air
Volume Sets," presented at APCA/ASQC Specialty Conference on Quality Assurance in
Air Pollution Measurements, Boulder, Colorado, October 1984.
72
-------
CHAPTER 3
EMISSION INVENTORIES
The following topics are discussed in this chapter:
• Use of emission inventories in multi-pollutant, multi-source urban air
toxics assessments
• Pollutant coverage
• Source coverage
• Estimating emissions
• Spatial and temporal resolution
• Quality assurance
• Insights into compiling inventories for urban air toxics assessments
3.1 Use of Emission Inventories in Multi-Pollutant, Multi-Source Urban Air Toxics
Assessments
Compiling an emission inventory is one of the first steps normally taken in urban
air toxics assessment programs. Emission inventories can be used in many ways in air
toxics programs. They can be used to identify sources and emission strengths, patterns,
and trends. They can be used to store information from related programs, such as permit
or right-to-know data. They can be used to predict ambient concentrations and to assist in
the development of control strategies and regulations.
73
-------
Most commonly, in urban air toxics assessment studies, emission inventories are
used as input to ambient air dispersion models for estimating concentrations, exposures,
and risks across an urban area. In the dispersion modeling approach, all inventoried
sources within a study area are modeled, using one or several dispersion models to
estimate pollutant concentrations across a representative receptor network. The resulting
model-predicted concentrations are applied to a population distribution to yield estimates
of'the number of persons exposed to each concentration. The resulting person-
concentration totals are then multiplied by cancer unit risk factors to estimate aggregate
risk, which is a measure of the number of excess cancer cases expected in an area due to
combined exposures to the pollutants of concern. Model-predicted concentrations can
also be used to estimate individual risks, such as average areawide risk or risk to the
maximum exposed individual. Another expression of risk could simply be the number of
persons exposed to various levels of each pollutant.
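A worked sketch of this arithmetic follows; the unit risk factor, populations, and concentrations are invented for illustration.

```python
# Hypothetical aggregate-risk arithmetic: excess cancer cases equal the sum
# over receptors of persons times concentration times the cancer unit risk.

UNIT_RISK = 8.3e-6        # invented lifetime unit risk per (ug/m3)
LIFETIME_YEARS = 70

receptors = {             # grid cell -> (population, modeled annual avg, ug/m3)
    "cell_A": (12000, 1.4),   # invented values
    "cell_B": (45000, 0.6),
    "cell_C": (3500, 4.2),
}

lifetime_cases = sum(pop * conc * UNIT_RISK for pop, conc in receptors.values())
annual_cases = lifetime_cases / LIFETIME_YEARS
max_individual_risk = max(conc for _, conc in receptors.values()) * UNIT_RISK

print(f"Aggregate incidence: {annual_cases:.5f} excess cases per year")
print(f"Maximum individual lifetime risk: {max_individual_risk:.1e}")
```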
A major advantage of the dispersion modeling approach is that it allows the agency
to project changes in ambient air quality and risk as a function of projected changes in
emissions. This allows the agency to test the impact of growth and alternative control
measures on air quality and makes the emission inventory an important tool in control
strategy development. A second advantage is that concentrations and risk can be
estimated for many more receptors than could reasonably be covered in an ambient air
monitoring network.
3.2 Pollutant Coverage
Table 3-1 shows the pollutants that were inventoried and modeled in the urban
studies reviewed in this report. The fact that 70 compounds were covered in one or
another of these studies is indicative of the difficult task confronting the study manager—
he/she must choose from literally thousands of potential pollutants to define the
coverage of the emission inventory, after which the inventory must be compiled for each
of these compounds.
74
-------
TABLE 3-1. Pollutants Inventoried in Urban Air Toxics Studies

[The original table is a matrix of pollutants (rows) against ten studies (columns: Six Months Study (35 County), Six Months Study (NESHAPS), Kanawha Valley IEMP, Santa Clara IEMP, Philadelphia IEMP, Baltimore IEMP, SE Chicago, Five City Controllability, South Coast MATES*, and Motor Vehicle), with an X marking each pollutant inventoried in a given study. The X placements could not be recovered from the source; several columns cross-reference 1,2-dichloroethane to the ethylene dichloride (EDC) entry. Pollutants listed: Acetone; Acrylamide; Acrylonitrile; Allyl chloride; Arsenic; Asbestos; Benzene; Benzo(a)pyrene; Benzyl chloride; Beryllium; Bromochloromethane; Bromoform; 1,3-Butadiene; Cadmium; Carbon tetrachloride; CFC-113; Chloroform; Chromium (total); Chromium+6; Coke oven emissions; Dichlorobenzene; 1,2-Dichloroethane; Dichloroethylene; 1,2-Dichloropropane; Diethanolamine; Dimethylnitrosamine; Dioctyl phthalate; Dioxin; Epichlorohydrin; Ethyl acrylate; Ethyl benzene; Ethylene; Ethylene dibromide (EDB); Ethylene dichloride (EDC); Ethylene oxide; Formaldehyde; Gasoline vapors; Glycol ethers; Isopropyl alcohol; 4,4-Isopropylidenediphenol; Lead; Manganese; Melamine; Mercury; Methyl bromide; Methyl chloride; Methyl chloroform; Methylene chloride; 4,4-Methylene dianiline; Nickel; Nickel subsulfide; Nitrobenzene; Nitrosomorpholine; PCBs; Pentachlorophenol; Perchloroethylene; Phenol; POM/PIC; Propylene dichloride; Propylene oxide; Styrene; Terephthalic acid; Titanium dioxide; Toluene; 1,1,1-Trichloroethane; Trichloroethylene; Vinyl chloride; Vinylidene chloride; Xylene isomers.]

* South Coast actually included many compounds in their inventory
not shown in this table. Only those pollutants
included in their MATES are shown here.
78
-------
Note in Table 3-1 that some compounds were inventoried in more studies than
others. This gives some indication of what compounds might logically form a core list for
consideration by a study manager at the outset of a new assessment. The following
compounds were inventoried in five or more of the studies reviewed:
Arsenic
Benzene
B(a)P
Beryllium
Butadiene 1,3-
Cadmium
Carbon tetrachloride
Chloroform
Chromium (total or +6)
Ethylene dibromide
Ethylene dichloride
Gasoline vapors
Ethylene oxide
Formaldehyde
Methylene chloride
Perchloroethylene
POM
Trichloroethylene
Vinyl chloride
It is important to note that the above compounds account for the vast majority of
aggregate cancer incidence in most of the studies reviewed. Of these pollutants, POM,
formaldehyde, 1,3 butadiene, chromium, and benzene generally contributed most to
aggregate incidence.
In addition, the above list of compounds conforms well with the recommended
list of compounds given in Table 9 of earlier EPA guidance on compiling air toxics
emission inventories (EPA, 1986), even though the lists were independently derived.
Some of the pollutants listed in Table 3-1 were inventoried out of concern for
their contribution to maximum exposed individual (MEI) risks rather than to their
contribution to areawide cancer incidence. Some pollutants were included because of
their potential for noncancer health effects, while others were probably included because
of general public concern.
The treatment of several pollutants contributing significantly to cancer risk—
polycyclic organic matter (POM), hexavalent chromium (chromium +6), and
formaldehyde—is important to note in the studies reviewed and is discussed below.
treatment of beryllium and nickel is also discussed below, since, like chromium, the
79
-------
chemical form assumed in one's analysis will have an important effect on cancer
incidence.
Treatment of Polycyclic Organic Matter (POM)
There was considerable divergence among the studies on the treatment of POM,
which is sometimes (as in the Six Months Study) called PIC, or products of incomplete
combustion. Two distinct approaches have been used in those studies where POM was
addressed:
1. B(a)P Surrogate Approach. The Six Months Study, Southeast Chicago
Study, and the Santa Clara IEMP assumed that all POM could be
represented (with acknowledged uncertainty) by a surrogate compound,
benzo(a)pyrene. The Six Months Study inventoried B(a)P specifically, and
then applied a potency factor that represented all PIC. Somewhat
differently, the Southeast Chicago Study and Santa Clara IEMP inventoried
all POM and then applied the potency factor for B(a)P. The Baltimore
IEMP and Motor Vehicle Study also used this approach as one of several
alternatives for assessing risk from POM.
2. Comparative Potency Factor Approach. This approach (described in greater
detail in Chapter 6) applies a potency score (or cancer unit risk factor) to
the entire mixture of POM emitted by each source category rather than to a
particular surrogate compound. Comparative potency factors are
forthcoming from EPA's Integrated Air Cancer Program and have been
developed for road vehicle exhaust (diesel and gasoline fueled vehicles),
wood smoke, and various other combustion sources. In the studies using
comparative potency factors (Baltimore IEMP, 5 City Controllability, and
Motor Vehicle), total solvent-extractable organic particulate was
inventoried for those categories for which comparative potency factors
existed. In these studies, POM was assumed to be represented by the
solvent-extractable fraction of total particulate. As an option, total
particulate can be inventoried rather than just the organic fraction if the
comparative potency factors are increased accordingly to account for the
non-solvent extractable particulate fraction that contains POM. The latter
option allows the direct use of existing particulate emissions data.
The divergence that exists in the treatment of POM reflects the lack of consensus
among study managers on which of these approaches (if any) is most suitable for
estimating cancer risk. Hence, at present, POM incidence estimates must be considered to
be highly uncertain.
80
-------
Treatment of Chromium
Chromium is an important contributor to aggregate risk in the studies reviewed,
but only the Cr+6 valence state is considered carcinogenic. Hence, the treatment of
chromium is very important and has been approached in several ways. One approach is to
treat all chrome emissions the same in the inventory (i.e., inventory total chromium) and
then assume that some fraction of the resulting modeled ambient concentrations is Cr+6.
Early studies, such as the Six Months Study, very conservatively assumed that all chrome
was Cr+6, as did the South Coast MATES project in some of its risk estimates. The Santa
Clara IEMP assumed 10 percent of total chromium was Cr+6, whereas the Baltimore IEMP
assumed that varying fractions—from 0 to 100 percent—were Cr+6.
The 5 City Controllability Study distinguished between Cr+6 and total chromium
directly in the inventory and thus was able to model both ambient Cr+6 and total
chromium levels. One advantage of this approach is that it allows one to project what
fraction of ambient chromium is Cr+6. This is important since Cr+6 cannot be directly
measured at typical ambient levels.
Treatment of Beryllium and Nickel
As with chromium, it is important to be aware of the chemical form of other
metals such as beryllium and nickel. In the 5 City Controllability Study, most beryllium
was assumed to be in the oxide form rather than to be present in the more carcinogenic
ore, fluoride, phosphate, or sulfate forms. Similarly, in this study, no nickel emissions
were assumed to be in the carcinogenic refinery dust or subsulfide forms; hence, no cancer
incidence was associated with nickel emissions. It appears from the various studies that
conservatively assuming all beryllium and nickel emissions to be as carcinogenic as the
most potent compounds containing these elements would greatly overestimate risks from
these two substances.
81
-------
Secondary Pollutants
None of the studies assessed secondarily formed (e.g., photochemical) pollutants
via dispersion modeling since no validated models yet exist for predicting transformation
products such as formaldehyde, peroxyacetyl nitrate (PAN), and the like. Hence, none of
the studies expressly developed emission inventories of precursors of these secondary
products for modeling purposes.
Generally, secondary pollutant levels have been approximated by using ambient
air monitoring data. The 5 City Controllability Study, for example, used measured
ambient air levels of formaldehyde in each city to estimate exposures to both primary
formaldehyde emissions and photochemically formed formaldehyde. After backing out
that fraction of ambient formaldehyde directly attributable to primary emissions, the
balance was assumed to be due to secondarily formed formaldehyde. The 5 City
Controllability Study then assumed, since formaldehyde is photochemically formed from
VOC emissions, that each VOC emitter was culpable for that fraction of risk attributable
to secondary formaldehyde in direct proportion to its level of VOC emissions.
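The backing-out and apportionment logic can be sketched as follows; all values are invented, and the actual 5 City calculations may have differed in detail.

```python
# Hypothetical sketch of the 5 City approach: subtract modeled primary
# formaldehyde from the measured ambient level, then apportion the secondary
# remainder to VOC emitters in proportion to their VOC emissions.

measured_ambient = 3.0          # ug/m3, invented measured formaldehyde
modeled_primary = 0.8           # ug/m3, modeled from direct emissions
secondary = max(measured_ambient - modeled_primary, 0.0)

voc_emissions = {"refinery": 500.0, "mobile": 2000.0, "solvents": 1500.0}  # tons/yr
total_voc = sum(voc_emissions.values())

for source, voc in voc_emissions.items():
    share = voc / total_voc
    print(f"{source}: {share:.0%} of secondary formaldehyde "
          f"({share * secondary:.2f} ug/m3)")
```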
3.3 Source Coverage
Because the emphasis in urban air assessments is on multi-source, multi-pollutant
exposures and risks, most studies have tried to be comprehensive in their coverage of
point, area, and mobile sources. So-called "nontraditional" sources have also been
included in several studies, such as wastewater treatment facilities (including publicly
owned treatment facilities or POTWs); treatment, storage, and disposal facilities (TSDFs),
which include aeration tanks, landfills, and surface impoundments for waste handling;
waste oil combustion; hazardous waste combustion; and ground-water aeration. Most
nontraditional sources involve pollutant transfer from another medium (e.g., water or solid
waste) to air and are not yet well characterized.
By and large, the sources covered in most urban air toxics assessments are the
same sources covered in criteria pollutant inventories. However, most of the study
managers were conscious of the need to inventory important sources of air toxics that may
82
-------
not have been included in their criteria pollutant inventories, because either (1) their
criteria pollutant emissions were below some point source cutoff level or (2) they did not
emit any criteria pollutants at all. Important examples of this kind of source include
chrome platers, cooling towers (using chromium corrosion inhibitors), hospital sterilizers,
wood stoves and fireplaces, and small surface coating/degreasing operations that emit
toxic but photochemically nonreactive VOCs (e.g., methylene chloride).
Some sources received particular or unique emphasis in certain studies. The Santa
Clara IEMP, for example, included drinking water treatment plants, municipal landfills,
and semiconductor manufacturing in the inventory. The Kanawha Valley IEMP
emphasized fugitive equipment and vent releases from the major organic chemical
manufacturing plants. The Baltimore IEMP and Southeast Chicago Study both placed
particular emphasis on large iron and steel facilities in each of their respective urban areas.
3.4 Estimating Emissions
Two approaches were used in the studies reviewed for estimating air toxics
emissions. In one approach, emissions were derived from existing data bases.
Alternatively, emissions data were developed from source-specific surveys. A
combination of these approaches was used in most studies.
Emissions Estimation from Existing Data Bases
To differing degrees in the studies reviewed, existing data bases were used as a
starting point for locating sources of air toxics and estimating emissions therefrom.
Existing data bases included criteria pollutant inventories, published emission factors,
species profiles, and information in the general literature. There were two variations on
this approach. In one, an air toxics emission factor was applied to the existing throughput
or activity level to estimate emissions. This works well for sources where the air toxics
emission factors are expressed in the same units as the criteria pollutant factors (e.g.,
Ib/ton of fuel burned or per vehicle mile traveled). All studies used this emission
estimating approach for some sources.
83
-------
A second variation is to apply species factors to existing VOC and particulate
matter (PM) emission totals to estimate emissions for particular toxics. The Southeast
Chicago Study provides an example of the use of the species factor method to estimate
point source emissions. First, emissions estimates for total VOC and TSP were obtained
from EPA's National Emissions Data System (NEDS) for each process within 88 point
sources. Species factors (representing the fraction of the total TSP or VOC emitted as
various individual species from each process) were then multiplied by the process-specific
emission totals to yield emission levels of specific toxics. Information on species
fractions is available from EPA. (EPA, 1988)
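A minimal sketch of the species factor calculation follows; the process total and profile fractions are invented, whereas actual profiles would come from the EPA speciation data cited above.

```python
# Hypothetical sketch of the species-factor method: multiply a process's
# total VOC emissions by the fraction of each toxic species in its profile.

process_voc_tpy = 120.0                       # total VOC from one process, tons/yr

species_profile = {                           # invented fractions of total VOC
    "benzene": 0.03,
    "toluene": 0.12,
    "xylene_isomers": 0.08,
}

for species, fraction in species_profile.items():
    print(f"{species}: {process_voc_tpy * fraction:.1f} tons/yr")
```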
Source-Specific Survey Data
Local survey data were used in a number of the studies, including the 5 City
Controllability Study, the Southeast Chicago Study, the South Coast Study, and the
IEMPs, to estimate the emissions of selected air toxics from point sources. The IEMPs
illustrate the use of local survey data; with the exception of the Denver IEMP study, each
study included the incorporation of survey-based emissions data into the emissions
inventory used for the exposure assessment. Each survey was done somewhat differently,
as shown below:
Philadelphia—The Philadelphia Air Management Services (PAMS) conducted a
survey of 345 potential point sources of air toxics in the Philadelphia study area.
Emissions estimates at the facility level were solicited from industry; these were, in turn,
subjected to engineering review by the PAMS engineering staff. This survey did not,
however, include a large wastewater treatment plant. IEMP estimated releases from this
plant based on influent, effluent, and sludge data to help complement the PAMS
inventory.
Baltimore—Surveys were conducted by the Maryland Air Management
Administration (AMA) to obtain input data needed to estimate the emissions from
approximately 200 point sources. AMA staff then computed emissions estimates at the
facility level.
84
-------
Kanawha Valley—The West Virginia Air Pollution Control Commission
(WVAPCC) conducted a survey at the release point level for all major sources addressed
in the study. WVAPCC staff conducted limited verification of these emissions data. A
total of 17 facilities, 2,258 release points, and 570 pollutants were covered.
The WVAPCC provided guidance to the industries on generating emission rates for
three major categories: process, combustion, and fugitive releases. Unlike the other
studies, the Kanawha Valley study had very detailed release specifications, grouped into
logical modeling units (e.g., building sources, area sources, and stacks). The other studies
did not subdivide the data below the facility level, resulting in relatively crude treatment
of release specifications.
The vast majority of emissions within this study were from vents and fugitive
releases rather than from stacks. Since the Kanawha Valley experiences such wide
variations in meteorological conditions as a function of daytime or nighttime conditions, it
was important to account for the variability in emissions on this basis to avoid
introducing model bias for sources that primarily emit during the daytime hours. Hours of
operation data were thus considered when using the inventory for dispersion modeling
purposes.
Southeast Chicago—Questionnaires were sent to 29 of the 88 facilities in the
source grid. These 29 were selected in a two-step process to rank the most important
sources. First, all chemical manufacturing facilities were automatically listed because of
presumed importance. Second, the remaining sources were ranked by probable impact of
their VOC/PM emissions on the receptor area and on the probability of these sources
emitting air toxics.
A questionnaire was sent to each of the 29 companies asking them to make their
own emission estimates of about 50 pollutants and also asking for certain modeling data.
Substantial follow-up was often necessary to obtain, clarify, or confirm company
responses. In many cases, the companies asked EPA for species fractions to estimate their
own emissions.
85
-------
South Coast MATES—The South Coast Air Quality Management District
(SCAQMD) conducted a mail survey of 1,606 companies in the South Coast Air Basin to
update its toxics emission inventory. A number of local information systems were used
to compile the emissions inventory. These information systems included SCAQMD's:
1. Automated Equipment Information System
2. Emissions Inventory System
3. Annual Emission Fee Reports
4. New Source Review Files.
Literature searches were conducted, letter mail-outs were sent, and telephone calls
were made in cases where insufficient information was received.
3.5 Spatial and Temporal Resolution
Spatial resolution is a measure of how finely emissions data are subdivided (i.e.,
resolved) in space, whereas temporal resolution is a measure of how finely emissions data
are subdivided in time. The resolution of any risk estimates resulting from an urban
assessment cannot exceed the resolution of the emissions (and other) data used as input.
Grid Size and Grid Cell Resolution
Many of the studies superimposed a dispersion modeling grid over the urban area.
Each grid comprised many, usually square, grid cells. The South Coast and Southeast
Chicago studies did this, as did the IEMPs in Baltimore, Kanawha Valley, Santa Clara, and
Philadelphia. The grid at once defines the receptor network for dispersion modeling
(receptors being defined as one or several points within each grid cell) as well as the
necessary resolution of the emission inventory.
Grid cell sizes varied from 1 km x 1 km in the South Coast MATES project to 5 km
x 5 km in the Philadelphia and Baltimore IEMPs. (The Baltimore IEMP actually defined a
"refined" grid, comprised of 2.5 km x 2.5 km grid cells over Baltimore City within the
larger grid.) Southeast Chicago adopted a 2 km x 2 km grid cell size and the Kanawha
86
-------
Valley IEMP used a 2.5 km x 2.5 km grid cell size. (Note that the IEMP studies also
included special discrete receptors in residential areas near major facilities to support
modeling of the maximally exposed individuals.)
Southeast Chicago actually defined separate source and receptor grids. Their
source grid was 46 km x 46 km, comprised of 529 2 km x 2 km grid cells. The emission
inventory was compiled for this area. In contrast, their receptor grid was only 13 km x
13 km and was comprised of 169 1 km x 1 km grid cells. The idea here was to define a
smaller receptor grid within a larger source grid to assure coverage of all sources that
could reasonably be expected to impact on the receptor grid. This same concept was also
employed in the Baltimore and Philadelphia IEMPs. In the Southeast Chicago study, the
receptor grid was located within the source grid, but skewed slightly off center in the
direction of the prevailing winds.
Point Source Resolution
Typically, point source locations are known to the nearest 0.1 km in emission
inventories, which is adequate resolution for areawide cancer assessments. Several of the
studies (e.g., the Baltimore IEMP) may have lost a measure of point source resolution
because the emissions data were submitted at the facility level and could not be assigned
to specific stacks, vents, and the like, within the facility.
In studies evaluating maximum individual exposures and risks, more attention
needs to be given to the actual release configuration within large, sprawling facilities that
have different types of release points. The Kanawha Valley IEMP, of the studies
reviewed, best delineated different kinds of releases within the large chemical
manufacturing plants in the study area. Instead of assuming complex facilities could be
characterized by one or several release points, up to 20 release groups were defined.
Stacks were modeled individually. All vents on each building were modeled as a volume
source based on the dimensions of the applicable building. Fugitive releases extending
over an area, such as tank farms or equipment leaks, were modeled as industrial area
87
-------
sources. The Southeast Chicago Study also characterized several large coking operations
at this level of resolution.
Area Source Allocation
All of the studies that used grid networks necessarily had to disaggregate area
source emissions to the grid cell level. The 5 City Controllability Study did not define a
modeling grid for each urban area, nor did it allocate emissions to the subcounty level.
This is because EPA's Human Exposure Model (HEM)—used therein for dispersion
modeling and risk assessment—does not require subcounty apportioned data. HEM (1)
internally apportions county level area source emissions to the Block Group/Enumeration
District (BG/ED) level based on the population within each BG/ED, (2) runs a simplified
box model to calculate population exposure at each subcounty level, and (3) sums the
resulting BG/ED risks to produce a measure of aggregate risk in the entire study area. HEM
does not yield spatially disaggregated exposure and risk results as do the models used in
the studies where gridding was done.
The basic allocation approach used in most of the inventories was to apportion
county level emissions to individual grid cells based on some surrogate indicator(s) such
as population, employment within certain SICs, or vehicle miles traveled (VMT). The
inherent assumption is that area source emissions are distributed according to the known
distribution of the surrogate indicator.
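The allocation arithmetic can be sketched as follows, with an invented county total and surrogate distribution.

```python
# Hypothetical surrogate apportionment: distribute a county-level area source
# total to grid cells in proportion to each cell's share of the surrogate.

county_total_tpy = 40.0      # e.g., dry cleaning emissions, tons/yr (invented)

surrogate = {                # grid cell -> surrogate value, e.g., commercial
    "(1,1)": 0.5,            # land area in km2 (invented)
    "(1,2)": 2.0,
    "(2,1)": 1.0,
    "(2,2)": 0.5,
}
total = sum(surrogate.values())

cell_emissions = {cell: county_total_tpy * value / total
                  for cell, value in surrogate.items()}
print(cell_emissions)
# Emissions inherit the surrogate's spatial distribution; the inherent
# assumption is that the source activity tracks the surrogate.
```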
The simplest approach is to apportion all area source emissions by population.
This was done in the South Coast MATES study for all area sources but road vehicles and
service stations. Road vehicle emissions of toxics were distributed based on the known
distribution of VOC in the criteria pollutant inventory. Service station emissions were
clustered at street intersections based on a weighting model developed from a telephone
survey.
In the Southeast Chicago study and the Baltimore, Santa Clara, and Philadelphia
IEMPs, area source emissions were apportioned by a mix of surrogate parameters. In
Baltimore and Philadelphia, digitized United States Geological Survey (USGS) land use
88
-------
data were used, including residential land use, commercial/services land use, and
industrial and transportation land use. As an example, dry-cleaning emissions were
apportioned by commercial/service land use. As another example, Southeast Chicago used
VMT distributions to apportion gasoline marketing emissions and used manufacturing
employment to allocate degreasing emissions. In the Southeast Chicago study, special
surveys were used to assess the potential locations of hospital sterilizers and chrome
platers.
Temporal Allocation
Most of the studies appear to have collected annual emissions data with no
consideration of diurnal, seasonal, or batch operation variability. This has generally been
considered satisfactory for cancer assessments where the emphasis is on long-term
exposures to pollutants.
Since the dispersion models used in urban assessments distinguish daytime and
nighttime dispersion conditions, some error is introduced if emissions are assumed to be
uniformly emitted both day and night. In the Kanawha Valley, Baltimore, and
Philadelphia IEMPs, engineering estimates were made of diurnal emissions variability for
certain source categories, and these estimates were then used as input to evaluate
dispersion model performance (Sullivan, 1985a). This was done to reduce some of the
bias introduced by assuming uniform emission rates throughout the day. The assumption
of uniform emissions can result in a disproportionate amount of emissions being
associated with the poor nighttime dispersion conditions.
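A sketch of splitting an annual total into distinct daytime and nighttime rates follows; the 85 percent daytime fraction is an invented engineering estimate, not a value from the studies.

```python
# Hypothetical sketch: convert an annual emissions total into distinct
# daytime and nighttime hourly rates instead of a uniform rate.

annual_tons = 10.0
HOURS_PER_YEAR = 8760.0
DAY_FRACTION_OF_EMISSIONS = 0.85   # invented estimate: 85% emitted 6 a.m.-6 p.m.
DAY_HOURS = NIGHT_HOURS = HOURS_PER_YEAR / 2

uniform_rate = annual_tons / HOURS_PER_YEAR
day_rate = annual_tons * DAY_FRACTION_OF_EMISSIONS / DAY_HOURS
night_rate = annual_tons * (1 - DAY_FRACTION_OF_EMISSIONS) / NIGHT_HOURS

print(f"Uniform:   {uniform_rate:.2e} tons/h")
print(f"Daytime:   {day_rate:.2e} tons/h")
print(f"Nighttime: {night_rate:.2e} tons/h")
# A uniform rate would overweight poor nighttime dispersion for this source.
```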
3.6 Emission Inventory Quality Assurance
Quality assurance checks were not explicitly discussed in most of the studies
reviewed, so for the most part the following discussion infers measures that were taken to
assure emission inventory quality. Some of these measures may not have been undertaken
intentionally as quality assurance activities, but they nevertheless served in that capacity.
Emission Inventory Review
A number of the studies incorporated reviews at various times to help assure that
the best possible inventory procedures and data were being used. Ideally, such reviews
would be done during the planning stages, at the end of the data collection effort, and
when the results were compiled. As an example of this, the 5 City Controllability Study
regularly reviewed source and emissions data coming out of EPA's regulatory programs
and incorporated changes to reflect these data. Two particular EPA programs worth
following are the NESHAPS program and the motor vehicle testing program. The
NESHAPS program routinely generates source, emissions, and risk data for specific
facilities based on "114 letters," i.e., responses obtained under the authority of Section
114 of the Clean Air Act. As a result of following these programs, significant changes
were made during the 5 City Study for several major source categories and pollutants. In
addition, previously uninventoried sources and pollutants were uncovered and added.
Data Verification
In a number of the studies, including the Baltimore, Philadelphia, and Kanawha
Valley IEMPs, State or local agency personnel spot-checked the adequacy of emissions
data submitted by industry. In some cases, site inspections were made. Emissions
estimates were reevaluated and revised or updated as appropriate.
The Kanawha Valley IEMP inventory was resolved at the stack/vent level of
detail for large organic chemical manufacturing facilities. The State agency spot-checked
release points that emitted large quantities of highly potent pollutants. For example, for
the most important facility in the study area, the State requested backup calculations for
15 to 20 selected processes to confirm the emissions data.
In the Southeast Chicago Study, the emissions data for the largest point sources in
the study area—several major coking operations—were personally reviewed by agency
personnel who had inspected each facility.
Monitoring vs. Modeling Comparisons
The IEMP methodology (Sullivan, 1985b) is based on using measured data as a
means of facilitating the quality control of emissions data. For many major facilities,
especially where there are large uncertainties in fugitive and vent releases, it can be
prohibitively expensive to perform comprehensive source testing. However, by measuring
the composite plume, such as at points 500 to 1,000 m from the facilities, comparisons
with modeled data can be an effective means of checking the reasonableness of the
emissions data. This approach was used in designing the Kanawha Valley monitoring
program.
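A minimal sketch of this reasonableness check, using hypothetical pollutant names and concentrations, might simply flag modeled-to-measured ratios falling outside a chosen factor:

    # Hypothetical sketch: compare modeled annual averages at a composite-
    # plume monitor against measured averages; large disagreement suggests
    # missing sources or erroneous emission estimates.
    def screen_inventory(modeled_ugm3, measured_ugm3, factor=2.0):
        flags = {}
        for pollutant, c_mod in modeled_ugm3.items():
            c_obs = measured_ugm3.get(pollutant)
            if not c_obs:
                continue  # no measurement available for this pollutant
            ratio = c_mod / c_obs
            if ratio > factor or ratio < 1.0 / factor:
                flags[pollutant] = round(ratio, 2)
        return flags  # ratios well below 1 hint at under-inventoried sources

    print(screen_inventory({"benzene": 1.2, "perchloroethylene": 0.3},
                           {"benzene": 1.5, "perchloroethylene": 2.1}))

In the Philadelphia and Baltimore cases described next, discrepancies of this kind pointed to an under-inventoried garment manufacturer and to uninventoried sewer lines.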
Comparing model predictions with measured ambient levels was also done in the
Philadelphia and Baltimore IEMPs to check emissions data. In one case, discrepancies led
to significant increases in the inventory estimates for a large garment manufacturer. In
another case, such discrepancies led to the identification of sewer influent and effluent
lines to a major wastewater treatment plant as a missing source of toxic emissions. The
IEMPs developed a computer program called the Monitoring and Data Analysis Module
(MADAM), which facilitates this comparison of measured and model-predicted data.
(MADAM is discussed further in Section 4.9.)
Receptor models represent another potential technique for quality assuring
emission inventories, but have not been used for this purpose in any of the studies
reviewed in this report.
3.7 Insights into Compiling Inventories for Urban Air Toxics Assessment
The following insights can be drawn from the studies conducted to date:
• The study area (grid) should be defined to be as large as possible to assure
that all sources are included in the inventory that may impact on the
proposed receptor sites. The Southeast Chicago study and Baltimore and
Philadelphia IEMPs defined source grids larger than their respective
receptor grids to account for all local sources impacting on the receptor
sites. The Southeast Chicago study source grid was about 13 times larger
in area than the receptor grid and accounted for "upwind" sources up to
20 km away in the direction of the prevailing wind. Theoretically, this
approach should help account for local background levels of pollutants
that are not secondarily formed or due to gradual global buildup.
• The chemical form of some pollutants is important in cancer risk
assessments. For example, it is important to distinguish between total
chromium and Cr+6; between total nickel and the carbonyl/subsulfide
forms of nickel; and between the oxide and other forms of beryllium
(particularly beryllium sulfate). Only some forms of these metals are
considered carcinogenic. Limited test data are becoming available to allow
this kind of speciation in emissions inventories.
• Consideration should be given to how POM will be handled in the
inventory. Depending on the risk approach adopted in the urban study,
emissions of either total POM emissions or some surrogate—probably
B(a)P—will be required.
• Broad source coverage is important in urban air toxics assessments, as
many sources contribute to aggregate risk. Area and road vehicle sources
must be included for reasonable completeness. Chrome platers and
cooling towers should be included, whether treated as area or point
sources. Wood smoke should be included. Formaldehyde emitters—
particularly road vehicles—should be included, even though aggregate risk
from primary formaldehyde emissions is probably less than from
secondary formaldehyde.
• The spatial relationship of release points within large, complex facilities
(e.g., iron and steel plants or synthetic organic chemical manufacturing
facilities) may need to be characterized if one's study will assess maximum
individual risks near these facilities (< 1 km). This requires better
delineation of stacks, vents, building dimensions, equipment leaks, and
other fugitive sources than is typically available in most inventories
without special site surveys. For broad screening of areawide aggregate
cancer incidence, simpler point source emissions characterizations may be
adequate.
• The spatial resolution of the area source inventory needs to be
commensurate with the level of spatial resolution desired in the resulting
risk estimates. To the extent possible, the study area manager should pick
and choose from several socioeconomic and/or land use parameters as
surrogates for allocating area source emissions rather than rely on a single
indicator such as population. Use of the appropriate variables for different
area source categories should improve the accuracy of the study results
since the distribution of emissions within an urban area can have a great
effect on population exposures.
• The use of EPA's Human Exposure Model (HEM) frees the study manager
from having to allocate emissions to the subcounty level because HEM
performs this step internally (down to the BG/ED level) based on U.S.
Census Bureau data. A downside is that HEM only allows this subcounty
allocation to be done by population or else a spatially uniform distribution
is assumed, neither alternative being as accurate as the use of direct survey
techniques and better surrogate apportioning variables.
• Limited data suggest that increasing the temporal resolution in the
emissions data may improve the accuracy of the study results. Typically,
most studies have just compiled annual average emissions. However, since
emission rates differ considerably for some sources, both diurnally and
seasonally, and since most models reflect different diurnal and seasonal
dispersion patterns, it may behoove the study area manager to consider
diurnal and/or seasonal patterns when compiling the emission inventory,
especially if a more detailed assessment is contemplated or if short-term
concentrations and maximum exposures are desired as output. No studies
reviewed have attempted to compile short-term (e.g., hourly) variability in
emissions data. This level of temporal resolution would probably not be
warranted in an urban assessment whose focus was on cancer, but could be
important for evaluating acute, noncancer effects.
• As with any inventory, data quality objectives and quality assurance are
important. As a planning step, a formal data quality protocol should be
considered that reflects all anticipated end uses of the emissions inventory.
During and after inventory compilation, as many review steps as possible
should be planned. These reviews should involve parties who will use the
data, parties who might be affected by the study results, and parties who
have particular expertise concerning certain aspects of the inventory.
• Comparing monitoring data with modeling results can provide insights into
missing sources, missing pollutants, or erroneous emission estimates in the
air toxics emission inventory.
Chapter 3
References
EPA, 1986. Compiling Air Toxics Emission Inventories, EPA-450/4-86-010, Office of Air
Quality Planning and Standards, Durham, North Carolina.

EPA, 1988a. Air Emissions Species Manual, EPA-450/2-88-003a—VOC Species Profiles,
Office of Air Quality Planning and Standards, Durham, North Carolina.

EPA, 1988b. Air Emissions Species Manual, EPA-450/2-88-003b—Particulate Matter
Species Profiles, Office of Air Quality Planning and Standards, Durham, North Carolina.

Sullivan, D.A., 1985a. Evaluation of the Performance of the Dispersion Model SHORTZ
for Predicting Concentrations of Air Toxics in the U.S. Environmental Protection Agency's
Philadelphia Geographic Study. U.S. Environmental Protection Agency, Integrated
Environmental Management Division, Washington, D.C.

Sullivan, D.A., and C. Carter, 1985b. A Screening Methodology for Air Quality Analysis
(Draft). U.S. Environmental Protection Agency, Regulatory Integration Division,
Washington, D.C.
CHAPTER 4
DISPERSION MODELING
The following subjects are covered in this chapter:
• Use of models for estimating exposure in multi-pollutant, multi-source
urban assessments.
• Decisions affecting modeling protocols
• Model selection
• Release specifications
• Selection of receptor network
• Meteorological data
• Decay, transformation, deposition
• Model execution
• Model performance evaluation
• Insights
4.1 Use of Models in Estimating Exposure in Multi-Pollutant, Multi-Source Urban
Assessments
Most studies of toxic air pollutants in the urban environment rely on dispersion
modeling as their chief means of estimating ambient concentrations of pollutants.
Modeling is used at scales ranging from microscale to urban-wide, sometimes including
transport from neighboring regions. As shown by the studies under review here,
concentration estimates are generally used directly as input to risk assessments, although
they may also be used as a preliminary step in the planning of monitoring programs, as
input to receptor modeling analyses, or, in conjunction with monitoring data, for the
verification of emission inventories.1
The Gaussian models in use for toxic pollutant analysis were originally developed
for analyzing ambient air quality for criteria pollutants and represent nearly two decades
of practical development experience. There has been no need to develop new classes of
dispersion models specifically for the noncriteria pollutants—Gaussian model
performance has been demonstrated by White (White, 1984) and others to be effective for
the annual or seasonal averages that are the most common averaging periods for air toxics
studies. As discussed in this chapter, however, there are a variety of areas (e.g., dense gas
releases) where the application of existing models to air toxics studies requires careful
review and analysis to achieve best performance. How well a model performs for a
specific use depends on the accuracy of emissions data, including release specifications;
the representativeness of meteorological data for the area under study; and how well the
application matches the source and terrain types for which the model was originally
developed.
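Since all of these are Gaussian plume models, the textbook steady-state relation may help orient the reader; this is the generic formula, not any particular study's implementation, and the dispersion coefficients are supplied directly here, whereas operational models derive them from stability class and downwind distance:

    import math

    # Textbook ground-level (z = 0) Gaussian plume concentration with
    # reflection off the ground. Units: q in g/s, u in m/s, sigmas and
    # distances in m; result in g/m3.
    def gaussian_plume(q, u, sigma_y, sigma_z, y, h):
        return (q / (2.0 * math.pi * u * sigma_y * sigma_z)
                * math.exp(-y**2 / (2.0 * sigma_y**2))
                * 2.0 * math.exp(-h**2 / (2.0 * sigma_z**2)))

    # Illustrative numbers only: 1 g/s source, 3 m/s wind, 10 m effective
    # release height, centerline receptor (y = 0).
    print(gaussian_plume(q=1.0, u=3.0, sigma_y=80.0, sigma_z=40.0, y=0.0, h=10.0))

Annual averages are then built by weighting such per-condition estimates by the joint frequency of wind direction, wind speed, and stability.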
In the studies under review here, emission inputs were compiled for specific
industrial facilities, area sources (such as mobile sources, commercial development, and
residential heating), and a variety of miscellaneous sources sometimes referred to as
"nontraditional sources," such as comfort cooling towers, volatilization of organics from
sewage treatment plants, wood burning furnaces or stoves, and other small point sources
not generally included (at least until recently) in urban emission inventories. Some
studies relied predominantly on individually defined emission rates; most used default
estimates for at least some categories of sources (see Chapter 3).
1 Potential future applications of dispersion models could include estimating deposition
rates (to analyze indirect pathways of toxic chemical uptake), or analysis of
hypothetical or actual accident scenarios.
Meteorological data are input to dispersion models to characterize the direction
and speed of transport, estimate horizontal and vertical dispersion rates, and account for
the vertical extent of turbulent mixing. For the modeling analyses of these studies,
meteorological data sets were chosen that yielded annual average concentrations because
this averaging period was needed for exposure assessments of cancer effects.
The advantages of the use of air dispersion modeling in urban scale assessments
are straightforward. Modeling analyses generally require only a small fraction of the
resources needed to conduct comprehensive ambient air quality monitoring programs.
They can provide extensive spatial and temporal resolution of estimated concentrations
and can cover pollutants for which suitable ambient air monitoring methods are not
available. Modeling analyses can isolate the effect of any single source or evaluate the
impact of any aggregate of sources. Finally, modeling can analyze hypothetical situations,
such as the imposition of a range of control scenarios on existing sources, addition of new
sources, or operational changes in the utilization patterns of facilities.
An important disadvantage of modeling lies in the limited validation of available
models and the consequent uncertainties this produces. Available validation data apply
to only a small subset of the scenarios in which models can be used, and models are often
applied under conditions quite different from those for which they were developed. Also,
models cannot accurately estimate concentrations unless sources are adequately
characterized, which is problematic in many cases.
4.2 Decisions Affecting Modeling Protocols
Unlike monitoring programs, for which it is common (and often required) practice
to prepare quality assurance/quality control (QA/QC) plans, modeling protocols have
typically not been prepared to guide the design and execution of modeling programs.
Ideally, all studies would benefit from defining protocols that would: (1) clearly define
objectives to be achieved in relation to general program goals and (2) present the details of
the approach to meet these objectives. Detailed elements of the approach would include:
• Level of analysis: In modeling, "screening level" or "scoping" analyses are
often used as the first phase in a multi-stage study, such as to estimate the
scope of potential problems or aid in design of monitoring programs.
Depending on their purposes, screening level programs can use less
detailed input data and more generalized modeling techniques. Where the
purpose of a program is to develop refined modeling, input requirements
are more stringent and modeling protocols are likely to be more elaborate.
• Source categories/emissions data: Study managers may elect to model only
certain categories of sources (e.g., industrial point sources) or may attempt
to be comprehensive, including a wide range of area sources and minor
point sources. In addition, the level of detail in emissions data
specifications may vary considerably from study to study. For instance,
for screening purposes a major industrial source might be modeled as a
single point, but for refined modeling might be treated as a combination of
point sources, volume sources, and area sources.
• Pollutants to be modeled: Both gaseous and particulate pollutants may be
of concern to air toxics studies. For most analyses, the design and
execution of the modeling program is seldom significantly affected by
pollutant selection. An exception would be where analysis of atmospheric
reactions or transformations is required, but none of the studies reviewed
here attempted such analysis.
• Receptor networks/Scale of analysis: Design of modeling programs is
significantly affected by whether the scale of interest is to define average
urban-wide pollutant concentrations or to define concentrations for the
most exposed individual (MEI). These concerns strongly influence the
design of the grid used to structure model outputs.
• Terrain: Terrain is a key factor of concern; modeling complexity increases
significantly if terrain is defined as "complex"—i.e., if some receptor points
in the study area are at higher elevation than release points, or if the study
has unusual geographic features (proximity to large bodies of water,
location in valleys or in mountainous areas, etc.).
• Meteorological data: The two factors of concern are the data's
appropriateness and detail. Available recording stations for the region
must adequately represent conditions in the study area. In addition, for
uses such as running models in complex terrain, evaluating model
performance, or estimating short-term concentrations, detailed hour-by-
hour data may be required. In some cases, air toxics studies collected and
used their own meteorological data set.
• Averaging period of interest: All the studies reviewed included
consideration of carcinogenic pollutants, for which average annual
exposures are the statistic of concern. Such studies can use more
simplified data to achieve study goals. Modeling is also capable of
estimating short-term exposures (e.g., exposures averaged over a few
minutes to a few hours). Although few air toxics studies have investigated
short-term exposures, short-term exposures may be of increasing interest in
the future.
All of the modeling studies reviewed in this report dealt with these issues, even if
they did not develop formal protocols. Table 4-1 summarizes their general goals, as
reflected in published reports. Of the studies reviewed here, only the Kanawha Valley
study prepared a formal modeling protocol covering the elements listed above. The
Kanawha study was the exception because of the highly unusual geography of the study
region; the Kanawha River lies in a deep, winding valley with steep walls, a situation that
no single available EPA model is designed to handle effectively. This study therefore
developed a formal modeling and meteorological data collection protocol for review by
the EPA Science Advisory Board to clarify the project's goals and document its methods
(see discussion below).
A broader discussion of the above decision elements is given below.
Level of Analysis—Both the NESHAPS study and the 35 County study (within the
general Six Months Study) can be considered screening level or scoping studies in their
entirety. Several other studies used screening-level analysis for preliminary evaluations.
The South Coast study conducted a screening-level evaluation of 4,200 grid cells to select
sites for the monitoring program (see Chapter 2 for a full discussion). The Philadelphia
study used a screening level analysis to help select study area boundaries. Major point
sources across the general region, including sources in Delaware, were modeled using
generalized data to see whether the effects of the Delaware sources were significant in the
Philadelphia metropolitan area; their impacts were found to be negligible, so the Delaware
sources were not included in the program. In Baltimore, screening-level modeling of VOC
sources helped in the selection of monitoring sites. In the Kanawha study, simplified
screening helped select pollutants and sources of greatest concern for refined modeling.2
2 Another type of screening, not involving air dispersion modeling, was used in the 35-
County study: counties across the country were evaluated for inclusion in the study
by ranking them (1) by total emissions and (2) by total emissions multiplied by
Table 4-1. Goals for Dispersion Modeling Analysis in
Multi-Pollutant, Multi-Facility Air Quality Studies
(NOTE: See Chapter 1 for a summary of each of these studies.)

                             Spatial Resolution             Temporal Resolution
                          (* included, ** emphasized)   (* included, ** emphasized)
                          Average        MEI             Annual      Short-Term
                          Exposures      Exposures       Average     Evaluations
Baltimore                 **             *               *
Santa Clara               **             *               *
Kanawha Valley            *              **              *           ††
NESHAPS†                  **             *               *
35 County Study†          **             *               *
South Coast               **             *               *
Southeast Chicago         **             *               *
Philadelphia              **             *               *           Model performance only
5-City Controllability    **             *               *

†  Components of the Six Months Study.
†† Anticipated study.
Source Categories/Emissions Data—All the studies in Table 4-1 attempted to cover
all categories of sources, both point and area. They varied, however, in the level of detail
of their emissions statistics and release terms. As discussed further below, the Kanawha
Valley study, for instance, developed highly disaggregated release points within the large
chemical complexes of interest within the study; it also investigated diurnal differences in
release terms, primarily because dispersion conditions within the valley sometimes vary
dramatically from day to night.
Pollutants to Be Modeled—With the exception of the Kanawha study, none of the
studies reviewed here evaluated pollutants that posed unusual technical problems in
modeling. Where particulate matter was modeled, the implicit assumption was that
particle size was less than or equal to 10 microns, that gravitational settling was
negligible, and that all particulates could be modeled the same as gases. The Kanawha
project investigated the possible effects of decay of pollutants within the modeling area:
separate model runs were done to evaluate allyl chloride.3
Receptor Networks/Scale of Analysis—Most of the studies concentrated on
developing estimates of population weighted average annual exposures. From the point of
view of dispersion modeling, estimation of these average exposures poses the fewest
technical problems and requires the most generalized types of data (e.g., average annual
emissions and average annual meteorological data).
The Kanawha Valley study, and, to a lesser extent, the Philadelphia study, were
the only studies that put relatively heavy emphasis on modeling of MEI (most exposed
individual) exposures. The main objective in the Kanawha study was to determine
whether maximum exposures in the neighborhoods of the large chemical complexes
located along the 50 km valley corridor between Nitro and Belle are significant. It was
also pertinent to focus the modeling effort on relatively near-field effects because of
concerns, such as those expressed by the EPA Science Advisory Board, that modeling of
broader exposed areas was technically questionable. The Philadelphia study modeled
potential MEI exposures primarily because of concern in one heavily industrialized area of
the city that air toxics exposures were high.
The primary technical difference in conducting MEI evaluations in these two
studies had to do with resolution of the emission inventories. Because of the large size of
the chemical complexes in the Kanawha Valley, several of which span many hundreds of
acres, the study coded hundreds of individual release points (vents, stacks, storage tanks)
within each complex so as to support detailed estimates of MEIs offsite.4 The
3 Although this pollutant has a relatively short atmospheric half-life (about five hours)
compared with others in the study, the effects of decay were determined to be
negligible within the short transport distances of interest within the study area.
4 Accommodating the large size of the source complexes also led to location of
monitoring sites at between 1.5 and 5 km from the fencelines so as to ensure
maximum coverage of multiple emission points (see discussion in Chapter 2).
Philadelphia emission inventory was not resolved at this level of detail, although some
further detail was added for select facilities to support the goals of the MEI analyses.
Terrain—EPA modeling guidance stipulates that if any receptor points within a
modeling area are higher in elevation than physical stack heights of any sources, then
complex terrain models should be used to evaluate exposures for those locations. None
of the studies, however, used complex terrain models for this purpose, even though the
geography in several of the areas could be formally defined as "complex."5 The reason for
not including complex terrain modeling in the South Coast and Kanawha studies is that
the influence of terrain on predicted concentrations would be minor for the distances and
stack heights evaluated and, therefore, would not be critical to project objectives. Two
complex terrain models were used, SHORTZ (Philadelphia Study) and LONGZ (Kanawha
Valley Study), but terrain data were not input. These models were used because of their
treatment of dispersion.
Meteorological Data—Most of the studies relied on available meteorological data,
but three developed their own sets. The Philadelphia and Baltimore studies developed
independent data sets, including sequential data for one-year periods, in order to better
interpret their studies' monitoring data and to support model performance evaluation. The
Kanawha Valley study developed its own data set because the closest available
meteorological data station was located at an airport on a plateau above the valley, and
was therefore entirely unrepresentative of conditions within the study area.
Averaging Periods of Interest—As noted, the risk evaluations conducted by these
studies emphasized chronic health effects from long-term (i.e., annual or lifetime)
exposures. The Philadelphia and Baltimore studies, however, used short-term modeling
to aid in model performance evaluation. In addition, future phases of the Kanawha
Valley study are planning the use of short-term exposure analysis for evaluation of
possible health effects (such as respiratory or neurological impacts) of less-than-lifetime
exposures to toxic air pollutants.
5 As discussed below, however, LONGZ—a complex terrain model—was used in several
studies for other reasons.
4.3 Model Selection
The recommendations in EPA Guideline on Air Quality Models were used as a
basis to select models for the Baltimore, Kanawha Valley and Southeast Chicago Studies.
A combination of models (ISCLT and CDM) was used because none of the current models
in the Guideline provides adequate treatment of both complex industrial sources and urban-scale
area sources. The NESHAPS Study, 35-County Study, South Coast Study, and 5 City
Controllability Study (see Table 4-2), all used exposure models that contain a dispersion
model that is not listed in the Guideline as a recommended model for this application.
This does not imply that the model selection in these studies, or the Philadelphia Study,
is not appropriate. A major advantage of using a Guideline-recommended model, however,
is that such a model provides greater consistency among studies. Table 4-2 lists the
dispersion models used in these studies.
Table 4-2. Dispersion Models Used in the Reviewed
Studies
.-"-
ISCLT
CDM
SHORTZ
LONGZ
HEM-SHEAR
SCREAM
GAMS
- • Santa Kanawha 35- South Southeast Phila- 5 City
Baltimore Clara .". Valley NESHAPS County Coast Chicago delphia Control.
X
X
X
X
X
X
X
X
X
X
X
X
X
X
Model Characteristics
Each of the models has specific strengths and limitations.
HEM/SHEAR—HEM (Human Exposure Model) is an exposure modeling system
that internally includes national population data from the Census Bureau at the block
group level6 and national meteorological ("STAR") data from the National Climatic Center
6 Reference is made within this report to census data reported both at the block
group/enumeration district (BG/ED) and block levels. For exposure evaluations,
(NCC). HEM was designed to accept output from any dispersion model that produces
displays in a compatible format, although most urban air toxics studies using HEM have
employed the internal SHEAR model rather than conducting modeling separate from HEM.
SHEAR (Systems Applications Human Exposure and Risk) is consistent
in many respects with the stand-alone EPA dispersion models shown in Table 4-2; it uses
similar treatments for dispersion coefficients, plume rise, and building downwash. Two
major differences between SHEAR and the EPA-developed dispersion models are:
(1) SHEAR uses simplified box model treatments and prototype sources to represent area
sources (rather than using gridded area source data7) and (2) SHEAR does not contain a
mixing height term.
HEM/SHEAR uses modeled data as a basis to interpolate average concentrations
for all block groups within the modeling domain, such as from 30 to 50 km from a source.
Contributions from multiple sources are summed to estimate total concentrations for
population exposure within each block group. Since HEM/SHEAR uses national
population data, it does not include the exact locations of residences within each block
group, and can therefore develop only approximations of MEI exposures.
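A minimal sketch of the aggregation step follows (hypothetical source and BG/ED identifiers; HEM's actual interpolation of modeled values is more involved):

    # Hypothetical sketch: total a pollutant's concentration at each BG/ED
    # by summing the modeled contribution of every source, the step an
    # exposure system performs before applying census population counts.
    def total_by_bged(contributions):
        """contributions: dict of source_id -> {bged_id: ug/m3}."""
        totals = {}
        for per_bged in contributions.values():
            for bged, conc in per_bged.items():
                totals[bged] = totals.get(bged, 0.0) + conc
        return totals

    print(total_by_bged({
        "plant_1": {"bged_01": 0.40, "bged_02": 0.10},
        "plant_2": {"bged_01": 0.05, "bged_02": 0.25},
    }))

Exposure is then the product of each BG/ED's total concentration and its population, summed over the study area.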
SCREAM—SCREAM (South Coast Risk and Exposure Assessment Model) is a
version of HEM/SHEAR as developed for use by the South Coast Study. It involved the
following changes to SHEAR:
1. The model's code was changed to facilitate the use of a 16-station
meteorological data set that was available for the study area. The closest
meteorological data set was selected when modeling each source.
dispersion modeling usually uses population data resolved to the census tract level,
apportioning that data to a modeling grid with cells that might typically be 1 to 5 km
square. BG/EDs, used in both HEM and GAMS, are a smaller census reporting unit
than a census tract. Concentration estimates produced by HEM and GAMS are used to
estimate concentrations at the centroids of individual BG/EDs. The SCREAM model,
which was adapted from HEM, considers the smallest census unit, i.e., block-level
data.
7 SHEAR uses prototype sources for some area source components, such as gasoline
marketing. For example, rather than assume uniform emissions throughout a specified
area, specific hypothetical sources are located on the modeling grid to match the
expected density of the service station coverage.
2. Area source modeling was done using an adapted version of the
Climatological Dispersion Model (CDM) rather than the box model
treatment internal to SHEAR.

3. Population was resolved to the street block rather than the block group
level.8
4. City-specific population growth projections were used.
The SCREAM computer model is being enhanced and will include options for
population mobility, indoor versus outdoor exposure, and noninhalation routes of
exposure. SCREAM will also be enhanced to provide better estimates of individual risk
and community cancer burden in regions and subregions of the air basin. (Barcikowski,
1988)
GAMS—GAMS (GEMS Atmospheric Modeling System) is an exposure modeling
system similar to HEM/SHEAR, developed by the EPA Office of Toxic Substances as part
of its Graphical Exposure Modeling System (GEMS). It uses the ISCLT model for point
sources and a box model approach for area sources based on national meteorological
("STAR") data.9 GAMS integrates population data with concentration data in a manner
similar to HEM, although the approaches differ for close-in receptors, which could lead to
minor differences between the two models' exposure estimates across a
study area.
ISCLT—ISCLT (Industrial Source Complex-Long Term) is designed to provide
enhanced flexibility for evaluating complex industrial sources. It includes separate
treatments for stack emissions, volume-source emissions, and area sources, as well as a
wide range of control options to tailor the model run to match the degree of specificity in
the available source and meteorological data. ISCLT requires joint frequency data (wind
speed, wind direction and stability class data) in the "STAR" format. A prime limitation
of ISCLT to support modeling goals in multi-pollutant, multi-facility studies is its
8 SCREAM used dispersion model estimates to predict concentrations at the centroid of
the block or BG/ED, depending on the distance from the source.
9 As noted below, at the time of the 35-County Study, GAMS used ATM as its internal
air dispersion model, which tended to bias study results.
relatively weak treatment of widespread urban area sources (e.g., mobile sources,
residential heating, and distributed solvent use).
CDM—CDM (Climatological Dispersion Model) does not provide the degree of
flexibility offered by ISCLT to address major point sources—which can be a significant
limitation for estimating MEI exposures. Its greatest strength for modeling the urban soup
is its relatively detailed area source treatment. If matched with highly resolved emissions
data for area sources, this model theoretically can provide the most representative
treatment available for area sources. If MEI analyses are not required, the resolution in
CDM may be suitable for most point or area source treatments. CDM requires
meteorological data similar to ISCLT, except that neutral stability is subdivided into
daytime and nighttime periods.
SHORTZ/LONGZ—SHORTZ and LONGZ are EPA-recommended second-level
screening models for estimating short- and long-term averages, respectively, in urban areas
having complex terrain. They have two potentially useful features: (1) representative
treatment of urban area sources and (2) the potential to use site-specific measured
turbulence data to characterize dispersion rates. These models require meteorological data
in the form of a joint frequency distribution, as in ISCLT. The user has the option of using
site-specific meteorological data to characterize dispersion rates, however.
Rationale for Model Selection
Baltimore and Southeast Chicago—These studies used a combination of two
models to reach their objectives: ISCLT (for its strength in modeling complex point
sources) and CDM (because of its relatively refined treatment of area sources). Model
selection was guided by adherence to the Guideline on Air Quality Models (revised) (EPA,
1986).
Philadelphia and Santa Clara—As the first IEMP pilot study, the Philadelphia
study tested two modeling approaches. The CDM model was used early in the study for
all point and area sources, but the program later shifted to SHORTZ/LONGZ, primarily to
have the option of using site-specific meteorological data (collected over a one year period
to document the project's 10-station air quality monitoring network) to define dispersion
conditions. SHORTZ/LONGZ were not selected because of their complex terrain
capability.10
Santa Clara used LONGZ primarily for consistency with the concurrent
Philadelphia study. During the initial design stages of the Santa Clara study, it was
unclear if site-specific meteorological monitoring would be done as in Philadelphia. If a
comparable meteorological monitoring program were done in Santa Clara, LONGZ would
have provided the option of inputting this data set as an alternative means of
characterizing dispersion. Such a meteorological data set was not collected, however.
Although Santa Clara has substantial terrain variations within the study area, the
terrain features of LONGZ were not used, essentially because none of the releases in this
study area were from stacks (process vents and fugitive releases dominated the inventory)
and terrain rise consequently was not a sensitive model input.11
Kanawha Valley—Model selection for Kanawha Valley was controversial because
of the region's complex topographical setting. At the outset, the study made the
following assumptions to simplify the modeling analysis and model selection:
• The modeling region was confined to the valley floor, thereby removing
the complication of selecting models recommended for use in complex
terrain.
10 The Philadelphia Study might have been strengthened by a more detailed comparison
of these two alternative modeling techniques. Although minor improvements were
observed for a few pollutants in the model performance tests using the more specific
dispersion treatment in SHORTZ (as opposed to the default stability class approach
used by CDM), the benefits of using SHORTZ/LONGZ over the more simplified CDM
approach were not established. Perhaps the most useful finding of the Philadelphia
comparison was that the more specific evaluations possible with SHORTZ/LONGZ
are best suited for settings with complex topography and for
applications where short-term averaging is a modeling goal.
11 In the authors' opinion, an alternative modeling procedure, such as that used in
Southeast Chicago, may have better served the needs of the Santa Clara Study.
• The valley was subdivided into four relatively homogeneous zones, each
to be modeled independently to minimize complications in flow analysis.
• Modeling objectives emphasized MEI concentrations in the areas near the
major chemical facilities; this was expected to simplify the influence of
complex flows and minimize the need to consider reflection of plumes off
the valley walls.
Based on these simplifications, the project selected a reference model (ISCLT),
consistent with EPA's Guideline on Air Quality Models (revised) (EPA, 1986), and an
alternative model (LONGZ), essentially for research purposes. The study's hypothesis
was that ISCLT, which uses generalized stability classes, may not adequately characterize
site-specific dispersion because of the Valley's complicated flow patterns. It therefore
used LONGZ, which can accept measured site-specific turbulence data, as an alternative
model to ISCLT and compared the results.12 Overall, ISCLT met the project's objectives
better—LONGZ substantially underestimated concentrations relative to the limited
measured data set, usually by a factor of 2 to 3.13
Because of ISCLT's simplified method for handling emissions from large area
sources, the study also used a third model—a simple box model treatment—to
evaluate area source impacts.
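The box model treatment mentioned here is typically of the simple sweep-through form sketched below (illustrative numbers only; the study's implementation is not documented at this level of detail):

    # Hypothetical sketch of a box model for a uniform area source: emissions
    # released into, and diluted by, the volume of air swept over the source.
    def box_model(flux_g_m2_s, fetch_m, wind_ms, mix_height_m):
        """flux: area emission rate (g/m2/s); fetch: along-wind extent of
        the source (m); wind: transport speed (m/s); mix_height: depth of
        vertical mixing (m). Returns concentration in g/m3."""
        return flux_g_m2_s * fetch_m / (wind_ms * mix_height_m)

    # 1e-7 g/m2/s over a 5 km fetch, 3 m/s wind, 500 m mixing height
    # -> about 3.3e-7 g/m3 (0.33 ug/m3).
    print(box_model(1e-7, 5000.0, 3.0, 500.0))

Note that the mixing height appears explicitly in this formulation; as discussed below, models such as SHEAR that omit a mixing height term lose this dilution cap.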
5 City Controllability Study; NESHAPS Study—Several studies chose to utilize
EPA's Human Exposure Model (HEM) because (1) it is consistent with what EPA has used
in its NESHAPS regulatory program and (2) it currently includes dispersion modeling
12 LONGZ was used as an alternative model because it is not listed within the
EPA Guideline on Air Quality Models as a recommended model for this application.
In this sense, it was used for research purposes.
13 Although LONGZ showed a consistent bias in its estimates, it also showed systematic
differences in dispersion rates caused by site-specific turbulent intensity differences,
as hypothesized by the study. There appeared to be a benefit in using LONGZ or
SHORTZ if concentration bias could be removed. It appears that the empirical
algorithms used by both LONGZ and SHORTZ to relate turbulence to dispersion rates
work best with elevated sources. Most of the industrial sources of concern in this
study, however, had low-level release points.
The study concluded that the apparent bias in LONGZ's estimates could be removed
by modifications to the model code; currently proposed investigations of short-term
ambient conditions in the Valley may therefore use a suitably modified version of
SHORTZ to exploit available site-specific turbulent intensity data.
exposure and risk characterization modules, all within one system. (HEM is being
updated by EPA to give it much more capability.)
The NESHAPS and 5 City Controllability Studies used the HEM/SHEAR model.
The use of HEM/SHEAR appears to have met study goals, but the lack of a mixing
height treatment may significantly underestimate average concentrations for some
applications. It is also unclear whether different conclusions would have been reached if EPA-
recommended dispersion models had been used in lieu of HEM.
South Coast MATES—The South Coast adapted HEM/SHEAR for its own purposes
by incorporating a modified version of the CDM model in place of SHEAR, but retained
some of the assumptions of HEM/SHEAR, such as not considering mixing height in the
calculations (Liu, 1987). The adapted model was named "SCREAM." The motivation to
develop an alternative model appears to have been: (1) to use more sophisticated
modeling techniques for area sources, (2) to take advantage of the extensive coverage of
meteorological data in the study area, and (3) to obtain finer resolution in predicted
concentrations. Development of the SCREAM model appears to have produced a model
that better met the goals of the South Coast study, while retaining much of the exposure
modeling capability available within HEM.
35-County Study—Model selection was difficult for this study. An approach was
needed that could readily model the 600-plus point source and 35 metropolitan-scale area
source treatments that make up this study. The EPA GAMS exposure model, developed
by the Office of Toxic Substances, was selected over HEM primarily because: (1) model
execution for this scale of study was expected to be facilitated by the GAMS approach and
(2) the analysts were more familiar with GAMS. The tight project schedule also favored the
selection of GAMS.
GAMS was successful in meeting the immediate objectives of this project, and
appears to have provided more efficient execution and processing of this large data set
than would have been possible with HEM. A limitation of this approach, however,
observed two years after the completion of this project (Sullivan and Hlinka, 1986), was the presence of
errors in the ATM model, which was the only Gaussian model available in GAMS at the
time. An error in ATM's plume rise term, and questionable assumptions regarding
depletion of mass from the plumes caused by wet and dry deposition (Sullivan, 1986),
were hypothesized to underestimate toxic air pollutant risk throughout the 35 counties
analyzed in the study.14 (Sullivan and Hlinka, 1986) The magnitude of the
underestimates is mainly a function of the exhaust temperature. For sources with high
exhaust temperatures (e.g., greater than 500°F), model underestimates could be in the
range of a factor of 3 to 10, in the authors' opinion; at near-ambient temperatures,
estimates might be expected to be low by a factor of 2 to 3.
4.4 Release Specifications for Emissions
Model outputs can, in some cases, be sensitive to the degree of resolution in
release specifications,15 with optimal resolution being a function of the scale of analysis.
For example, complex facilities can be represented as a single release point to estimate
average exposures across a study area without necessarily sacrificing the
representativeness of the average modeled concentrations. On the other hand, the level of
detail in release specifications can be an important model input for MEI analyses—the
greatest differences in modeled concentrations occur within the first 1 to 2 kilometers of a
source, where MEIs are likely to be located.16
With the exception of Kanawha Valley, and to a limited extent the Philadelphia
study, all modeling analyses used simplified treatments to characterize release
14 This problem points to the risk taken when selecting a dispersion model that has not
been subject to the review process of recommended models in the Guideline on Air
Quality Models (EPA, 1986). The same risk appears to be present in using the SHEAR
model routinely used within the HEM system.
15 Release specifications required as model input are as follows:
Stack Source: release height, inner stack diameter, exit velocity, and exhaust
temperature.
Area Source: horizontal dimensions of area, characteristic release height.
Building Source: vent specifications (similar to stack data), horizontal and
vertical dimensions of the building (or structure).
16 This is especially true of air toxics evaluations, where most sources of concern are
low-level sources that produce MEI concentrations relatively close to the facility
itself.
specifications. Often an assumed release height, such as 10 m, or one release point within
a major facility, was used to represent a complicated industrial facility. This was
consistent with the goals of the modeling analyses (see Table 4-1) because most studies
were primarily concerned with average annual population exposures and areawide
incidence.
The Kanawha Valley study, however, was primarily concerned with MEI
exposures and therefore needed greater detail in its emissions specificity. The available
emissions inventory, compiled by the West Virginia Air Pollution Control Commission,
was coded at the release point level—too much detail to model within resource
constraints. The releases were therefore grouped into units that were more suitable for
practical modeling purposes—up to 20 groups within each complex facility. Stacks were
modeled individually. Vents on a common building were modeled within one volume
source. In some cases, fugitive sources emitted over an area, such as tank farms, piping
arrays, and so forth, were modeled as area sources. Figure 4-1 summarizes the approach
used in Kanawha to group sources; this may be of use to future studies with similar
emphasis.
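A minimal sketch of such a grouping rule, assuming hypothetical release-point records rather than the study's actual coding scheme, is:

    # Hypothetical sketch of the Figure 4-1 grouping rule: stacks modeled
    # individually, vents collapsed into one volume source per building,
    # and distributed fugitives combined into an area source.
    def group_release_points(points):
        groups = [("STACK", [p["id"]])
                  for p in points if p["kind"] == "stack"]
        vents_by_building = {}
        for p in points:
            if p["kind"] == "vent":
                vents_by_building.setdefault(p["building"], []).append(p["id"])
        for building, ids in vents_by_building.items():
            groups.append((f"VOLUME SOURCE ({building})", ids))
        fugitives = [p["id"] for p in points if p["kind"] == "fugitive"]
        if fugitives:
            groups.append(("AREA SOURCE", fugitives))
        return groups

    print(group_release_points([
        {"id": "A", "kind": "stack"},
        {"id": "C", "kind": "vent", "building": "bldg_1"},
        {"id": "D", "kind": "vent", "building": "bldg_1"},
        {"id": "O", "kind": "fugitive"},
    ]))

Capping the result at a manageable number of groups (up to 20 per complex in Kanawha) is what made hundreds of coded release points practical to model.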
4.5 Selection of Receptor Network Array
The receptor arrays used in these studies can be grouped into four categories:
(1) grid-based selection of receptors, (2) a Block Group/Enumeration District (BG/ED)-based
approach, (3) use of special receptors to match monitoring sites, and (4) complex terrain
receptors.
[Figure 4-1 graphic: schematic plan of "Facility X" showing two individual stacks (A, B),
banks of vents on two buildings (C-H and I-N), and a fugitive release area (O-Q).]

Group    Source Type                   Specific Release Points
1        STACK                         A
2        STACK                         B
3        VOLUME SOURCE (BUILDING)      C, D, E, F, G, H
4        VOLUME SOURCE (BUILDING)      I, J, K, L, M, N
5        AREA SOURCE                   O, P, Q

Figure 4-1. Example of source grouping technique used for the Kanawha Valley Study.
Grid-Based Approach—In a grid-based receptor array, estimates of impacts from all
sources modeled are output to a single set of cells, with each cell including one or more
discrete receptor points within or along its boundaries. Since the grid is rectangular, all
cells are the same size unless otherwise defined (some studies define subdivided cells in
more densely populated areas). Figure 4-2 provides an example of a rectangular grid
system used to define a receptor array.
This approach was used for the IEMP studies, where a rectangular grid was
employed for all but the microscale analyses. First, a standard grid spacing was
established (such as a 2.5 to 5 km square17). Average concentrations were then estimated
for each grid cell by placing evenly spaced receptors (one to four per cell) within each cell
and averaging all receptors within a cell to represent the average concentrations for all
residents of the grid cell. Population within "each grid cell was allocated by overlaying
census maps on the grid and allocating population from each census tract to the grid
proportionally on- the basis of area (i.e., if half the area of a census tract lay within a cell,
half the population of that tract is assigned to that cell). The population thus assigned is
assumed to experience exposures equal to the average concentrations estimated for that
cell.
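A minimal sketch of this area-weighted allocation, with hypothetical tracts and overlap fractions, is:

    # Hypothetical sketch: assign each census tract's population to grid
    # cells in proportion to the fraction of the tract's area in each cell.
    def allocate_population(tracts):
        """tracts: list of (population, {cell_id: area_fraction}) pairs,
        where each tract's fractions sum to 1."""
        cell_pop = {}
        for population, overlap in tracts:
            for cell, fraction in overlap.items():
                cell_pop[cell] = cell_pop.get(cell, 0.0) + population * fraction
        return cell_pop

    # A tract of 4,000 people split evenly between two cells contributes
    # 2,000 to each cell; each person is then assumed to experience the
    # cell's average modeled concentration.
    print(allocate_population([(4000, {"cell_A": 0.5, "cell_B": 0.5}),
                               (1500, {"cell_B": 1.0})]))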
Resolution tighter than 2.5 to 5 km grids is often needed to meet objectives other
than estimating average concentrations. For example, the South Coast MATES used a
1 km grid system to help select monitoring sites in the South Coast Air Basin. As another
example, the Southeast Chicago Study also used a 1 km grid system, albeit for only a
relatively small (13 km x 13 km) receptor area. Within the IEMP studies, supplemental
receptors within certain grid cells were needed to estimate MEI concentrations—the
standard grid was complemented with specially selected locations (discrete receptors) that
17 For core urban areas with relatively high population density, or sharp gradients in
concentrations caused by major industrial releases, a tighter (2.5 km) grid was
generally used.
Figure 4-2. Example of rectangular grid system.
represented residential areas closest to major facilities. The maximum concentration
within residential areas was selected from the discrete receptor array to represent MEI
concentrations.
BG/ED-Based Approach—This method is used in the HEM and GAMS exposure
modeling systems. A polar coordinate grid (such as Figure 4-3) is used to estimate
concentrations around each facility. Unlike the rectangular grid-cell approach in which all
facilities output to the same grid, every facility has its own (polar) grid. Also unlike the
rectangular grid-cell approach, cells in a polar array become larger the farther they lie from
a source.
To aggregate exposures from multiple sources, the HEM and GAMS systems assign
concentrations from each grid cell of each polar array to the appropriate BG/ED, and then
aggregate exposures for each BG/ED.18 HEM and GAMS use somewhat different methods
of assigning concentrations to each BG/ED. In GAMS, all BG/EDs whose centroids lie
within a particular cell are assigned the same concentration. HEM interpolates values and
assigns interpolated values to each BG/ED based on the location of its centroid in relation
to each cell's assigned receptor points.
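As a minimal sketch of the underlying geometry (hypothetical ring distances; the actual HEM and GAMS arrays differ in detail), a BG/ED centroid can be located within a 16-sector polar array as follows:

    import math

    RING_KM = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 30.0, 50.0]  # hypothetical

    # Hypothetical sketch: find the polar-grid cell (sector, ring) holding
    # a BG/ED centroid offset (dx east, dy north) from the source, in km.
    def polar_cell(dx_km, dy_km, n_sectors=16):
        distance = math.hypot(dx_km, dy_km)
        bearing = math.degrees(math.atan2(dx_km, dy_km)) % 360.0
        width = 360.0 / n_sectors
        sector = int((bearing + width / 2.0) // width) % n_sectors
        ring = next((i for i, r in enumerate(RING_KM) if distance <= r), None)
        return sector, ring  # ring is None beyond the outermost distance

    print(polar_cell(3.0, 4.0))  # 5 km out at bearing ~37 degrees -> (2, 3)

GAMS would assign that centroid the cell's concentration outright, while HEM would interpolate among the cell's receptor points.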
An innovative approach along the same lines, developed for the South Coast
MATES Study, was to consider distance from the source as a means of identifying the
resolution to be used for receptor coverage. For example, for distances less than 2.5 km,
the street block level of detail was used. BG/EDs were used for distances 2.6 to 10 km.
This permits the model to allocate concentrations more accurately, compensating for the
variation in grid-cell size across the polar array.
The HEM and GAMS exposure modeling systems are quite similar in their
treatment of receptors, although there are some differences between the two in the
treatment of receptors within 3.5 km of a facility. Because of its finer detail, the BG/ED
18 A block group (BG) is an area representing a combination of contiguous blocks having
an average population of about 1,100. Enumeration districts (ED) are areas containing
an average of about 800 people and are used when block groups are not defined.
[Figure 4-3 graphic: polar receptor grid of 16 compass-direction sectors (N, NNE, ...,
NNW) crossed with concentric distance rings; one sector segment is highlighted.]

Figure 4-3. Example of polar grid system.
approach should theoretically develop better characterizations of average exposures for
each cell than the rectangular grid approach, possibly leading to minor differences in
estimates of total cancer incidence across a study area. Intuitively, it does not appear,
however, that their more refined resolution in population data necessarily improves the
accuracy of the exposure assessment on the metropolitan scale. Since the goal is to
estimate total exposure generally across a metropolitan area (e.g., 50 x 50 km), minor
differences in allocating population to model receptors would not be expected to make
major differences in exposure. Differences at the fine scale are smoothed when producing
an estimate of total exposure. The major limitation of these two exposure modeling
systems is their relatively poor resolution for MEI exposures. Supplemental receptor
coverage to include points representing distances to property boundaries or nearest
residences appears to be needed when using HEM or GAMS if MEI analyses are one of the
modeling goals of a study. (Refer to Chapter 5 for more detail on exposure assessment
issues.)
Special Receptors Matched to Monitoring Sites—To evaluate the performance of
models, the IEMP studies and the South Coast Study identified additional receptor points
representing the locations of monitoring sites in order to be able to compare monitored
ambient concentrations with modeled concentrations. Similar special sites are located
where the study has exact knowledge of potential MEI locations, such as the exact
distances to the fenceline of houses nearest a major source.
Selecting receptors for this purpose is obviously straightforward; the important
concern here is that the exact location of the site be entered into the modeling analysis in
order to accurately define the receptor locations. Exact locations are especially important
when large point sources exist within 1 to 2 km of the monitoring or receptor sites.
Characterizing Terrain Rise for Receptors—Modeling analyses should include
consideration of terrain differences between release heights and receptor elevations
whenever terrain rise is expected to be a significant term in a modeling analysis. This can
occur with complex terrain or when the effective height of the plume is substantially
lowered with respect to the ground level. For emissions from elevated stacks, such as
incinerators or power plants, terrain differences can be a significant issue.
The majority of the air toxics emissions considered in these studies, however, were
from sources near ground level. Consideration of terrain differences between sources and
receptors, therefore, was generally not a sensitive issue, and none of these studies
incorporated it.
Of all the studies reviewed, the South Coast and Kanawha Valley projects had the
greatest potential for terrain-related modeling complications. Both studies considered the
issue, but concluded that it was unimportant in meeting project goals. In the South Coast
study, power plants were the source category expected to be most affected by terrain, but
these sources turned out to be located 10 km or more from areas with major terrain rise
(Shikiya, 1987). In Kanawha Valley, flat terrain modeling was used even though large
terrain rise occurred within 1 km of the sources because: (1) in nearly all cases, pollutants
were emitted from near ground level; (2) the modeling goals emphasized MEI
concentrations, which were essentially unaffected by terrain rise; and (3) the population
was densely clustered along the valley floor.19
Modeling receptors located in areas with substantial terrain rise could, however,
be a major limitation for exposure modeling systems such as HEM, SCREAM, and GAMS,
adversely affecting model results. The automated feature of these exposure modeling
systems to estimate concentrations without consideration of terrain rise can lead to large
model inaccuracies for source categories, such as power plants and incinerators, that
release from high stacks. A bias to underestimate impacts for tall stacks can occur on
this basis for areas with moderate to high terrain.
19 A limited special modeling analysis was done for Kanawha Valley to demonstrate the
limited sensitivity of model results to terrain rise for this application, based on the
LONGZ model.
4.6 Meteorological Data
In the ideal case, at least one year of hourly meteorological data collected at a
representative location within a study area would be available to support air toxics
modeling. The following parameters would be optimal for most of the models used in
these studies:
• Wind speed/wind direction
• Atmospheric stability
• Mixing height specific to the study area
• Ambient temperature
In the studies reviewed, three different approaches were used to obtain the
required meteorological data: (1) use of available data from one existing site to represent
the general study area, (2) use of available meteorological data from multiple sites, and
(3) collection of site-specific meteorological data at one or more sites within the study
area.
Studies That Used Existing Data from a Single Site—Most studies used available
meteorological data from one site in the region to represent a study area. These included
the Southeast Chicago study, the NESHAPS study (within the Six Months Study), the
Santa Clara study, the 35 County Study, and the 5 City Controllability study. Since all of
these emphasized estimation of areawide average concentrations, not MEIs, over annual
averaging periods, reasonably representative annual meteorological data from a single site
could be expected to satisfy the modeling objectives.
Study That Used Meteorological Data from Multiple Sites with Available Data—
The South Coast Study is the only one to use data from multiple existing sites within one
defined study area. Here, the closest meteorological station to each source was selected
from among 16 available sites. The incremental contributions from each source were
modeled on this basis. This procedure was used to account for the differences in wind
flow across the relatively large study region, which included substantial terrain
complexities in some subareas.
Studies Collecting Site-Specific Meteorological Data—The collection of site-
specific meteorological data was only done in these studies to (1) support the
interpretation of detailed air quality monitoring programs, as in Philadelphia and
Baltimore, or (2) support modeling within a complex topographical setting, i.e., Kanawha
Valley.
The IEMP studies in Philadelphia, Baltimore, and Kanawha Valley collected
meteorological data at one, two, and four monitoring sites, respectively. Wind speed,
wind direction, and turbulence intensity data were collected, generally at 20 m above
ground level, at sites selected to best represent the entire study area or subregions within
the study area.
In the Philadelphia and Baltimore studies, site-specific meteorological monitoring
was primarily done to support the interpretation of the 10-station measured air quality
data sets that were collected in each city.
monitoring at three new sites and one existing site was conducted because of the lack of
available data to account for the variability in wind flow and dispersion characteristics
across four delineated zones within this complex study area.
Treatment of Meteorological Parameters
Differences in the treatment of meteorological conditions can make large
differences in modeled concentrations:
Wind Flow—Many of these study areas were relatively flat, such as Philadelphia,
Baltimore, and Southeast Chicago, where during most periods the differences in wind flow
across the study area would be expected to be relatively small. A likely exception is
during evening hours under inversion conditions. During such periods, wind flow can be
highly variable and dispersion rates very small. Meteorological data that best represent
receptors of greatest concern, such as the locations of air quality monitoring stations,
areas with high population density, or MEI locations, are highly desirable for representing
such conditions. This can be especially important when specific days are being modeled,
such as to support a model performance evaluation.
Stability—Atmospheric stability is a term used within a dispersion model to
indicate the rate of horizontal and vertical pollutant mixing within a plume. Unstable
conditions produce vigorous mixing, neutral conditions indicate moderate mixing, and
stable conditions result in very limited mixing.
Stability can be estimated based on "default" data sources, such as available
National Weather Service data, or site-specific data, such as turbulent intensity data. For
many applications, default treatments likely provide reasonable estimates and are the
only means of estimating concentration because more site-specific data are unavailable.
There are applications, however, where default treatments may introduce considerable
uncertainty into the modeling analysis.
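For illustration, site-specific stability can be assigned from measured wind direction
fluctuations. The Python sketch below maps sigma-theta (the standard deviation of
horizontal wind direction) to a Pasquill-Gifford class; the class boundaries follow
commonly cited EPA sigma-theta criteria but are assumptions for illustration, not a
procedure taken from any of these studies.

```python
# Hedged sketch: estimating a Pasquill-Gifford stability class from
# sigma-theta (standard deviation of horizontal wind direction, degrees).
# Boundaries follow commonly cited EPA daytime criteria and are shown for
# illustration; real procedures also account for time of day, wind speed,
# and surface roughness.

def stability_class(sigma_theta_deg):
    """Return class 'A' (very unstable) through 'F' (very stable)."""
    bounds = [(22.5, "A"), (17.5, "B"), (12.5, "C"), (7.5, "D"), (3.8, "E")]
    for lower_bound, pg_class in bounds:
        if sigma_theta_deg >= lower_bound:
            return pg_class
    return "F"

for st in (25.0, 15.0, 8.0, 2.0):
    print(f"sigma-theta = {st:4.1f} deg -> class {stability_class(st)}")
```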
Research has shown that large differences in dispersion conditions can occur
within specific default stability classes (Luna and Church, 1972). The use of default
stability data to represent dispersion rates can therefore introduce substantial inaccuracies
into the modeling analyses for some applications. An alternative is to collect site-specific
turbulent intensity data specifically for the study, as was done in the IEMP studies of
Philadelphia, Baltimore, and Kanawha Valley.20 Based on their experience, two
conclusions can be drawn:
1. The influence of terrain on dispersion rates appeared to be relatively
significant in the Kanawha Valley study. The benefit of collecting
turbulent intensity data as site-specific indicators of dispersion seems
greater in this setting, whether the averaging period is short term or
annual average.

2. For areas with flat terrain, the site-specific dispersion data appear to be
useful for characterizing conditions for short periods, such as for model
performance evaluations or short-term ambient exposures. For annual
averaging periods, the default data appear to be a more reasonable indicator
of dispersion.

20 These studies measured the standard deviation of horizontal and vertical wind speed,
which was used in conjunction with mean wind speed to estimate turbulent intensity.
These data were then used within LONGZ to estimate average dispersion rates within
each stability class.
Mixing Height—Mixing height refers to the vertical limit of turbulent mixing.
Because toxic air emissions are generally dominated by low-level sources, which are not
as significantly influenced by the mixing height term as elevated sources, none of the
studies collected site-specific mixing height data. Regionally available mixing height data
were used as a default. For relatively large study areas, however, or for modeling the
impacts of outlying major industrial areas on central business district receptors, the
mixing height term could significantly affect model estimates.
It is important to note that there can be large differences between regionally
collected mixing height data and site-specific data. In fact, mixing heights can be quite
variable in time and space within a specific metropolitan area. Differences can be most
pronounced during evening or early morning hours during low-level inversions.
Although the collection of site-specific mixing heights can be an expensive
addition to a meteorological monitoring program, it can at least theoretically improve the
representativeness of modeled air toxics concentrations in urban environments. The lack
of a mixing height term in the HEM/SHEAR and SCREAM models is a pertinent example in
which model estimates might be improved to some degree by inclusion of a representative
treatment of mixing height in the modeling analyses.
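To make the role of the mixing height term concrete, the sketch below shows the
standard Gaussian point-source form with ground reflection, plus the crude
limited-mixing approximation often applied once vertical plume spread approaches the
lid. It is a minimal illustration of the physics, not the formulation of HEM/SHEAR,
SCREAM, or any model used in these studies, and all input values are hypothetical.

```python
# Minimal sketch of why a mixing height (lid) matters in a Gaussian plume
# calculation. Once sigma_z grows to roughly 1.6 times the mixing height,
# the plume is commonly treated as uniformly mixed below the lid, so the
# ground-level concentration scales with 1/mixing_height. Inputs are
# illustrative; operational models use sums of reflection terms.
import math

def centerline_conc(q, u, sigma_y, sigma_z, h_eff, mix_h):
    """Ground-level centerline concentration (g/m3): q in g/s, u in m/s,
    dispersion parameters, effective height, and mixing height in meters."""
    if sigma_z >= 1.6 * mix_h:
        # Uniformly mixed below the lid.
        return q / (math.sqrt(2.0 * math.pi) * sigma_y * u * mix_h)
    # Gaussian form with total reflection at the ground.
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-h_eff ** 2 / (2.0 * sigma_z ** 2)))

# Same distant release with a 500 m lid versus an effectively unlimited lid:
for lid in (500.0, 10_000.0):
    print(lid, "->", centerline_conc(1.0, 3.0, 400.0, 900.0, 50.0, lid))
```

Run with the illustrative numbers above, the capped case yields roughly twice the
uncapped concentration, which is the direction of error implied by omitting the term.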
Treatment of Meteorological Variability
The incorporation of diurnal or seasonal differences in meteorological conditions
can be important when modeling industrial facilities or area source categories that have
distinct diurnal or seasonal patterns in emission rates. A number of studies took such
differences into account.
Diurnal Differences—For modeling purposes, separate daytime and nighttime
model input files can be created, containing meteorological and emissions data
representative of each period. This approach was used in the Philadelphia, Baltimore,
Kanawha Valley, and South Coast MATES studies to minimize diurnal bias, although the
results were reported without displaying diurnal differences.
Hours of operation data within the National Emissions Data System (NEDS) are a
default data source to aid in apportioning emissions on a day/night basis. More detailed
and accurate data on the hours of operation of processes and facilities may be available at
the local level. Modeling daytime and nighttime conditions essentially doubles data
processing and requires additional quality control, but ignoring these differences can
introduce substantial bias in some situations, tending to overestimate concentrations from
near-ground-level releases and underestimate those from high-level releases. Routine
model runs with HEM or GAMS do not consider diurnal differences in release.
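As a simple illustration of the day/night apportionment described above, the sketch below
splits an annual emission total into separate daytime and nighttime model input rates from
an assumed fraction of operating hours falling in daytime. The 12-hour day/night split and
all values are hypothetical; in practice, NEDS-type hours-of-operation data would supply
the fraction.

```python
# Hypothetical sketch: apportioning annual emissions into separate daytime
# and nighttime g/s rates for day/night model input files. The 12-hour
# day/night split and the operating-hours fraction are illustrative.
SECONDS_PER_HALF_YEARDAY = 365.0 * 12.0 * 3600.0  # seconds of "day" per year

def day_night_rates(annual_metric_tons, fraction_daytime_ops):
    grams = annual_metric_tons * 1.0e6  # metric tons -> grams
    day_rate = grams * fraction_daytime_ops / SECONDS_PER_HALF_YEARDAY
    night_rate = grams * (1.0 - fraction_daytime_ops) / SECONDS_PER_HALF_YEARDAY
    return day_rate, night_rate

# A facility emitting 10 t/yr with 80 percent of operations in daytime:
day, night = day_night_rates(10.0, 0.80)
print(f"day: {day:.3f} g/s   night: {night:.3f} g/s")
```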
Seasonal Differences—Some emission source categories, such as wood burning,
have distinct seasonal patterns in their emission rates. Some less obvious source
categories can also show distinct seasonal patterns. For example, mobile source
emissions are highly sensitive to ambient temperature; winter periods show higher
emissions per vehicle mile traveled because of reduced combustion efficiencies (Black,
1986).
If there are also large seasonal differences in transport and dispersion within a
study area, bias can be introduced to the modeling analyses if annual average emissions
and annual meteorological data sets are used for modeling. This bias could be reduced by
modeling on a seasonal basis and by linking seasonal emission rates with meteorological
data representative of each season. Annual average concentrations can then be averaged
across all seasons. None of the studies used this level of resolution in the modeling
analyses, a limitation that most significantly affects heating-related emissions (such as
wood and oil burning) and major industrial facilities with emission rates that are highly
variable on a seasonal basis.
4.7 Decay, Deposition, and Transformation
The models used in all of these studies contain simplified treatments for decay,
deposition, and transformation. These factors generally have not been significant issues in
most studies of urban air toxics, but a number of issues deserve at least some
consideration.
Decay
The half life values of most pollutants are long in relation to the travel times for
traversing the 50 km distances that are the limit of most of these modeling analyses. Over
these distances, the exponential decay term contained in these dispersion models would
generally produce only minor differences in model output. A widely used, practical
simplification in these studies was to assume an essentially unlimited half life, such that
the decay term becomes zero and incremental concentrations from each source can be
scaled by emission rates for different pollutants.
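A back-of-the-envelope check, sketched below, shows why this simplification is usually
safe: with a first-order decay factor of exp(-ln 2 x t / t-half), a few hours of travel
against a half-life of days removes only a few percent of the pollutant. The wind speed
and half-life used are illustrative.

```python
# Sketch of the exponential decay term: fraction of pollutant remaining
# after transport over a given distance. Values are illustrative.
import math

def decay_factor(distance_m, wind_ms, half_life_s):
    travel_time_s = distance_m / wind_ms
    return math.exp(-math.log(2.0) * travel_time_s / half_life_s)

# 50 km at 3 m/s (~4.6 h of travel) against a 5-day half-life:
print(f"{decay_factor(50_000.0, 3.0, 5 * 24 * 3600):.3f} of the mass remains")
```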
Deposition
Deposition of pollutants removes pollutants from the ambient air and therefore
lowers ambient concentrations.21 It can occur through three mechanisms: (1) gravitational
settling, (2) dry deposition, and (3) wet deposition. Consideration of these terms has not
been included in the modeling analyses under review in this study. Model-predicted
concentrations may therefore be biased upward on this basis, but in many cases these
terms would not substantially affect the results.
Gravitational Settling—Gravitational settling has a significant effect only on
pollutants that are bound or attached to particles that are relatively large, such as
10 microns in diameter or larger. Since most directly emitted, particulate-phase toxic air
pollutants are released from combustion sources with relatively fine particles, little
accuracy is likely to be lost by assuming zero gravitational settling for these pollutants.
21 Indirect effects of deposited pollutants, such as contamination of surface or ground
water, uptake through the food chain, or direct ingestion (pica), have not been
considered in any of the current studies and are therefore not discussed here.
The adsorption in the atmosphere of gas-phase pollutants onto particulates is another
mechanism where gravitational settling could potentially be an issue, but such model
treatments are beyond the current state of the art. In addition, a major percentage of the
particles involved in the adsorption process likely are less than 10 microns in diameter,
and therefore have minimal settling velocities.
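The point can be checked with a rough Stokes-law estimate of terminal settling velocity
for small spheres in still air, sketched below; unit particle density and standard air
viscosity are assumptions made for illustration.

```python
# Rough Stokes-law settling velocities, illustrating why sub-10-micron
# particles settle too slowly to matter over urban transport distances.
# Unit-density spheres and air at ~20 C are assumptions for illustration.
def stokes_settling_velocity(diameter_um, particle_density_kgm3=1000.0):
    g = 9.81                  # m/s^2
    mu_air = 1.8e-5           # Pa*s, dynamic viscosity of air
    d_m = diameter_um * 1.0e-6
    return particle_density_kgm3 * g * d_m ** 2 / (18.0 * mu_air)

for d in (1.0, 10.0, 50.0):
    print(f"{d:5.1f} um -> {stokes_settling_velocity(d) * 100.0:7.3f} cm/s")
```

A 10-micron unit-density sphere settles at roughly 0.3 cm/s, slow enough that settling
is negligible over the travel times involved here.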
Wet and Dry Deposition—Most of the pollutants emphasized in urban air toxics
studies to date are volatile organics that are minimally affected by wet or dry deposition
because the washout ratios and deposition velocities are relatively low. All of these
studies used modeling techniques that assumed that ambient concentrations are not
affected by wet or dry deposition.22 This assumption probably results in slightly higher
estimates of concentration than would be predicted if these terms were included.
The wet and dry deposition terms can produce more significant reductions in
average concentrations at the urban scale for metals and high molecular weight organic
compounds. The current model treatments could have a bias to overestimate
concentrations for metals and high molecular weight organics relative to volatile organics.
4.8 Model Execution
For criteria pollutant modeling analyses, the specific procedures in the Guideline
on Air Quality Models (Revised) (EPA, 1986) are followed. This reference guides model
selection, as well as the manner in which the models are to be executed. For example,
22 The one exception is the version of GAMS used for the 35 County Study. At that time
the Atmospheric Transport Model (ATM) was used as the dispersion model in GAMS,
rather than ISCLT. ATM could model gravitational settling, dry deposition, and wet
deposition. There were substantial limitations, however, noted with the use of these
terms (Sullivan and Hlinka, 1986). The wet deposition term was set to zero for the 35
County Study, and the dry deposition term was set to its minimum value. Still,
concentrations were probably reduced on the basis of deposition in the 35 County
study to a greater extent than appropriate for most toxic air pollutants evaluated in
that study.
Another problem that was identified with ATM is an error in the plume rise term
(Sullivan and Hlinka, 1986). For sources with relatively high exhaust temperature,
such as incinerators or power plants, the 35 County Study would underestimate
concentration and risk on this basis.
guidance is provided regarding how to select model options for specific applications.
Most of these studies of noncriteria pollutants, however, did not use procedures
recommended in the guideline. Alternative model execution procedures were used,
apparently because the goals of most studies were perceived to be substantially different
from those of typical criteria pollutant modeling programs. The three exceptions (i.e.,
where the modeling analyses were consistent with the guideline) are Baltimore, Kanawha
Valley, and Southeast Chicago.23
In terms of the mechanics of executing the model runs, two primary techniques
were used: (1) one model run made for each pollutant, using actual emission rates and no
post-processing,24 or (2) model runs made for each source at a normalized emission rate,
such as 1 kkg/yr or 1 g/sec, with post-processing used to sum concentrations across all
sources at each receptor.
With the exception of the early analyses in the Philadelphia Study and the
NESHAPS Study, normalized modeling techniques have been used in all urban scale toxic
air studies, for a variety of reasons. The primary consideration is the relatively large
number of pollutants modeled in air toxics studies; for criteria pollutant analyses, where
the number of pollutants is much more limited and multiple averaging periods frequently
need to be addressed, it is more common practice to make pollutant-specific model
runs. Another important factor that leads to normalized modeling is the often dynamic
nature of the emissions inventories that support analyses of the urban soup. Since
normalized modeling techniques link modeled data to emissions during post-processing
rather than within the modeling program,25 the effects of changes in emissions terms on
23 The Baltimore modeling was consistent with EPA guidance, with the exception that
Urban Mode 1 dispersion coefficients were used in place of Urban Mode 3 coefficients.
This departure from recommended practices was made because of the study managers'
concern that the Urban Mode 3 dispersion coefficients would introduce a bias,
underestimating concentrations for the predominant, low-level heights of release.
24 Post processing involves multiplying normalized concentrations for each source times
emissions rates for each source, and then summing, at each receptor, the incremental
concentrations across all sources.
Although some dispersion models, such as ISCLT, allow for changing modeled data for
selected sources, the incorporation of updates and control scenarios in a data base is
much easier to implement.
predicted concentrations can be displayed readily. The same applies to displaying the
benefits of various control scenarios. Hence, changes in exposures and risks can be
linearly predicted from changes in emissions without having to rerun the models for each
emission scenario.
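The bookkeeping behind this approach can be sketched in a few lines. Below, each
source's stored normalized concentration profile (concentration per unit emission rate at
each receptor) is scaled by a current emission rate and summed at each receptor. The data
layout, source names, and values are hypothetical, but the linear scaling is the essence of
the technique.

```python
# Sketch of normalized-modeling post-processing: each source is modeled once
# at a unit emission rate; current concentrations come from scaling the
# stored normalized values by emission rates and summing over sources.
# chi_norm[source][receptor]: concentration per unit emission (ug/m3 per g/s)
chi_norm = {
    "plant_A": [1.2e-2, 4.0e-3],
    "plant_B": [6.0e-3, 9.5e-3],
}

def total_concentrations(emission_rates_gps):
    """Sum scaled contributions across all sources at each receptor."""
    n_receptors = len(next(iter(chi_norm.values())))
    totals = [0.0] * n_receptors
    for source, profile in chi_norm.items():
        rate = emission_rates_gps[source]
        for i, chi in enumerate(profile):
            totals[i] += rate * chi
    return totals

# Baseline emissions, then a control scenario, without rerunning the model:
print(total_concentrations({"plant_A": 10.0, "plant_B": 5.0}))
print(total_concentrations({"plant_A": 2.0, "plant_B": 5.0}))
```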
Although the normalized modeling approach is now almost universally adopted, it
introduces several significant technical limitations that deserve attention:
• Potential Complications with Atmospheric Decay—Normalized modeling
techniques restrict consideration of pollutant decay because differences in
the treatment of decay for different pollutants result in the relation
between emissions and concentrations for each source becoming nonlinear.
As noted above, this is currently not considered an important problem, but
there may be situations where decay must be addressed. Although special
normalized model runs can be made for pollutants with particularly short
half lives, such as the pollutant-specific modeling of allyl chloride in the
Kanawha Valley Study, this level of detail can be impractical for a study
with wide pollutant coverage.
• Representativeness of Release Specifications—When estimates of ambient
concentrations are made for a source on a normalized basis, a set of release
specifications is used, such as stack specifications, area source
dimensions, and so forth. Some changes in emissions may be associated
with changes in release specifications, violating the assumption of linearity
inherent in the normalized modeling approach.
4.9 Model Performance Evaluation
Model performance evaluations are only an option for studies that have an
adequate measured data set against which to compare the modeled values. Where they are
possible, performance evaluations offer two major benefits: (1) they can help evaluate the
uncertainties in modeling results—a prime concern when using modeled air toxics data to
support regulatory or policy developments, and (2) they can be used as a basis to improve
future model performance.
Evaluating Model Performance
Procedures have been established to help support the standardization of model
performance evaluations; these include consideration of bias, noise, correlation, and other
statistical tests (Fox, 1981). The Philadelphia study showed statistical analyses for each
of these tests. The more limited model performance evaluations presented for the South
Coast MATES and Kanawha Valley studies emphasized tests of model bias.
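For reference, minimal forms of two such statistics are sketched below: a fractional bias
of the means and the correlation coefficient (R) between paired modeled and measured
values. The definitions follow common usage in the model evaluation literature; exact
formulations vary from study to study, and the sample numbers are illustrative.

```python
# Minimal versions of two model performance statistics: fractional bias of
# the means and the correlation coefficient (R) of paired modeled/measured
# values. Definitions follow common usage; exact forms vary by study.
import math

def fractional_bias(modeled, measured):
    mo = sum(modeled) / len(modeled)
    mm = sum(measured) / len(measured)
    return 2.0 * (mm - mo) / (mm + mo)

def correlation(modeled, measured):
    n = len(modeled)
    mx, my = sum(modeled) / n, sum(measured) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(modeled, measured))
    sxx = sum((a - mx) ** 2 for a in modeled)
    syy = sum((b - my) ** 2 for b in measured)
    return sxy / math.sqrt(sxx * syy)

mod = [0.2, 0.4, 0.1, 1.0, 2.3]   # illustrative paired averages
obs = [0.3, 0.4, 1.8, 1.6, 6.0]
print(fractional_bias(mod, obs), correlation(mod, obs))
```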
Improving model performance implies more than simply comparing statistical
analyses of alternative model formulations.26 Much can be learned about the strengths
and weaknesses of alternative model formulations through detailed review of the patterns
revealed by the raw measured data set. The measured data can provide clues for
improving model performance, such as:
• If measured concentrations decrease with distance from a source at a rate
inconsistent with that predicted by the model, perhaps the treatment of
dispersion is suspect.
• If wide variations are observed in measured concentrations for similar
meteorological conditions, perhaps more specific emissions data are needed
for key facilities to better describe emissions variability.
For the Philadelphia, Kanawha Valley, and South Coast MATES studies, it appears
that the demonstration of model performance strengthened confidence in using modeled
data for the exposure assessments. Particularly for studies with complex emissions or
topographical factors, model performance testing can help show the reasonableness of
predicted concentrations to meet project goals, or demonstrate the failure of modeling
techniques to meet goals for some pollutants. Kanawha Valley is a good example of
complicated sources located in complicated terrain. Model performance testing in this
study, although relatively crude, was used to confirm that the modeled data were at least
the correct order of magnitude.27
26 Formulation refers to a dispersion model run with specific input data and control
settings. A dispersion model can be run with multiple formulations.
27 For this study, the EPA Science Advisory Board (SAB) noted during the series of
review meetings the benefits of evaluating model performance in this complex setting.
This component of the study appeared to be instrumental in obtaining SAB acceptance
of the modeling protocol.
Improving Model Performance
Reviewers of studies that have undertaken model performance testing commonly assume,
incorrectly, that measured data are used to calibrate model output to reduce bias,
thereby showing a better match with observed results. This has not been done in any of
these studies, principally because of the difficulty of demonstrating that model
performance on an annual average basis is improved throughout a study area, and not just
for the monitoring sites and specific days of the sampling program.
Instead, model performance evaluation is usually used to help reveal the
limitations in the physical aspects of a model, such as the strengths and weaknesses of
emissions for specific point sources or source categories, weaknesses of treatments of
transport and dispersion, and so forth. For example, the Philadelphia study used model
performance testing as a basis for developing a priority ranking for independently verifying
emissions data. Its procedures and results illustrate many of the issues of concern with
model performance evaluation:
Philadelphia Model Performance Evaluation—Four model formulations were
tested in the Philadelphia Study, all of which were based on the SHORTZ dispersion
model.28 Different treatments of stability and different methods of considering emissions
variability were tested.
Table 4-3 summarizes the means and correlation coefficients across the ten
pollutants and ten sites that were included in this program. Figures 4-4 and 4-5 display
examples of average modeled and measured concentrations for each of the 31 days of the
monitoring program at one of the sites (Site 7) of the Philadelphia network; pollutant
coverage is for 1,2-dichloroethane and carbon tetrachloride, respectively.
28 As already noted, the Philadelphia study used SHORTZ purely for model performance
evaluations, not for evaluating complex terrain. In the opinion of the authors, who
participated in the Philadelphia study, it would have been desirable to have tested
alternative models, such as CDM and RAM, in addition to SHORTZ.
Table 4-3. Summary of Means and Correlation Coefficients for
Modeled and Measured VOCs in the Philadelphia Study

Compound                 Measured    Modeled    R Value
Chloroform                  0.3        0.2        0.27
Dichloroethane              0.4        0.4        0.89
Carbon Tetrachloride        1.8        0.1        0.47
Trichloroethylene           1.6        1.0        0.00
Benzene                     6.0        2.3        0.58
Dichloropropane             1.2        0.5        0.95
Toluene                    12.0        8.8        0.76
Perchloroethylene           4.7        3.5        0.88
Ethyl Benzene               4.7        0.4        0.64
Xylene                     12.3        2.4        0.23
The IEMP model performance evaluations were performed based on the
Monitoring and Ambient Data Assessment Module (MADAM)29 of the PIPQUIC data
management system. Figure 4-6 presents a summary of the inputs and outputs to this
system. The Philadelphia study focused on partitioning the measured and modeled data
by wind flow quadrant as a means of identifying systematic differences associated with
flow across source regions. This approach provided a means of improving the verification
of emissions data. The following factors reduced bias, but did not significantly improve
correlations.
29 MADAM is a software package within PIPQUIC that inputs measured concentrations,
normalized modeled concentrations, emissions data, and meteorological data concurrent
with the work days of the monitoring program. Model performance can be
displayed through MADAM by partitioning the data by wind flow, stability, or other
parameters. See Chapter 7 for a discussion of PIPQUIC.
[Figure 4-4. Average modeled and measured concentrations of 1,2-dichloroethane on each
of the 31 monitoring days at Site 7 of the Philadelphia network.]
[Figure 4-5. Average modeled and measured concentrations of carbon tetrachloride on each
of the 31 monitoring days at Site 7 of the Philadelphia network.]
[Figure 4-6. Summary of MADAM Model Performance Evaluation Tool: long/short term
modeled concentrations, ambient monitoring data, and hourly meteorological data feed a
comparison of modeled and measured concentrations, with results partitioned by wind
direction, heat island, rainfall, stability, mixing height, and weekday vs. weekend.]
• The study identified an order of magnitude error in the emission rate for a
major facility on this basis. Engineering review by an independent agency
identified a process that was missing from the inventory—although process
emissions as a category were included in the inventory, the solvent
recovery system, which was the dominant source, was inadvertently
omitted (Sullivan, 1985).

• Volatilization from industrial sewage en route to the wastewater treatment
plant was flagged during the model performance review because of the
relatively low concentrations for two key pollutants transferred from an
industrial facility to the wastewater treatment plant via the sewage
system.

• An area source term was also added for drinking water releases of
chloroform, essentially removing the bias for this compound. Again, this
assessment was done by an independent analyst based on water
consumption rates and average chloroform concentrations in the drinking
water, not by model calibration techniques.
Only 31 days of monitoring data were available, however, an insufficient
sample to allow partitioning of the data set in greater detail. It was not
possible, for instance, to evaluate performance as a function of mixing height, stability, or
similar factors. More data-rich monitoring programs, such as the Staten Island study,
could provide a more robust data set and permit more comprehensive model performance
evaluations for a broader range of toxic air pollutants.
Results of Model Performance Evaluations for South Coast—Table 4-4 presents
the limited results of the model performance evaluation in the South Coast area, and
Table 4-5 presents the results for the Kanawha Valley. These tables show the ranges of
measured and modeled data to provide an indication of the magnitude of model bias. Note
that benzene and chromium, which account for about 95 percent of the risk in the South
Coast Air Basin, show relatively good agreement between measured and modeled
concentrations.
Table 4-4. Comparison of Measured and Model-Predicted
Toxic Air Pollutants in the South Coast MATES Project*

                                          Model              Predicted/
Air Toxics             Measured         predicted          measured ratio

ORGANIC GASES
Benzene                1.0-4.9          0.56-5.0           0.22-1.8
Carbon Tetrachloride   0.10-0.12        1.1-24 x 10^-5     1.0-25 x 10^-4
Chloroform             0.02-0.30        2.0-17 x 10^-8     0.68-49 x 10^-7
Ethylene Dibromide     0-100            1.1-22 x 10^-4     0.11-110 x 10^-5
Ethylene Dichloride    0-18             1.7-10 x 10^-3     1.2-52 x 10^-4
Perchloroethylene      0.5-3.1          0.28-2.4           0.22-1.5
Toluene                2.5-6.7          0.80-3.7           0.16-0.66
Trichloroethylene      1.1-7.1          0.33-2.9           0.13-54
Vinyl Chloride         0-2.0            5.1 x 10^-5        2.6 x 10^-5

TRACE METALS (ng/m3)
Arsenic                0-8.8            5.0-10 x 10^-4     1.5-2.3 x 10^-4
Beryllium              0-0.5            0-5.4 x 10^-3      0.003-3.4
Cadmium                0-4.1            1.1-9.6            0.71-1,200
Chromium               1.8-11           3.6-60             1.06-8.6
Lead                   180-280          1,100-1,700        3.9-9.4
Nickel                 3.7-8.9          0.7-5.6            0.08-7.3

*This table shows the ranges in measured and modeled concentrations across all of the "existing"
sampling sites, which are more indicative of average/background concentrations than the
SCAQMD's "new" sites, which were selected to reflect maximum exposures.
Table 4-5. Comparison of Measured and Modeled Data for
Kanawha Valley Toxics Screening Study

                                  Concentration (ug/m3)
                       Site 1       Site 2       Site 3       Site 4
Pollutant             Mod.  Mon.   Mod.  Mon.   Mod.  Mon.   Mod.  Mon.
Chloroform            1.9   3.6    4.5   8.7    8.1   8.9    11.5  8.0
Methylene Chloride    0.6   3.3    1.3   3.1    10.8  20.8   14.6  12.3
"Ethylene Oxide"      2.4   8.4    12.8  11.3   0.0   13.3   0.0   12.4
 (unknown compound)
Carbon Tetrachloride  0.0   1.4    0.0   1.0    2.6   3.4    3.3   2.2

NOTE: Ethylene oxide ("unknown compound") averages are based on very limited data:
Site 1 (n=10); Site 2 (n=14); Site 3 (n=3); and Site 4 (n=4).
Conclusions Drawn from Available Model Performance Evaluations to Date—The
limited model performance evaluations completed to date in these three study areas tend
to support the following general conclusions:
• Models tend to underestimate urban concentrations of many toxic air
pollutants. It is obvious from review of Tables 4-3, 4-4, and 4-5 that, in
general, there is a bias to underestimate modeled concentrations relative to
ambient concentrations for volatile organics. (From Table 4-4, no clear
trend exists for metals.) Similar model underestimations for volatile
organics were found in other studies (EPA, 1988; Sullivan and Martini,
1987). In many cases modeled results are low by a factor of 2 or 3 or
more. Four factors may be responsible for this tendency toward
underestimation:

1. All of the model-based analyses assumed zero background. For some
of these compounds, especially those with long atmospheric half
lives such as carbon tetrachloride, the background term is probably a
substantial portion of total concentrations.

2. Gaps in coverage in emissions inventories could be accounting for a
significant percentage of the model underestimates.

3. Current models do not predict secondary pollutant formation (e.g.,
photochemical formaldehyde).

4. For the more recent studies (EPA, 1988; Sullivan and Martini, 1987)
that evaluated model performance based on the EPA-recommended
dispersion coefficients for urban applications (i.e., Urban Mode 3
dispersion coefficients), substantial bias to underestimate
concentrations was observed for low-level sources relative to the
more traditional Pasquill-Gifford coefficients that are modified for
urban applications.

• Model performance appears to be strongly pollutant dependent. These
studies appeared to be successful in separating pollutants with relatively
low bias (such as less than a factor of 2) from those with high bias, which
in some cases was orders of magnitude. These findings help support the
use of modeling for exposure assessments for the subset of pollutants
found to be within reasonable accuracy limits.30
The correlation of the average concentrations across all sites is another important
factor with which to judge model performance, particularly to evaluate the ability of a
model formulation to identify gradients in concentration across a study area. The
Philadelphia study is the only study to report correlation results; its correlation
coefficients (R), presented in Table 4-3, indicate wide differences in correlation across
™ "I80 " Con?ider1fble uncertainty in the measured data, which
be considered when assessing the confidence in the modeled data.
pollutants, ranging from 0 to 0.95. When considered in conjunction with bias, as shown
in Table 4-4, it is clear that far more confidence should be placed in the results for some
compounds than for others. For example, modeling predictions of carbon tetrachloride or
arsenic levels appear to be orders of magnitude less reliable than predictions for pollutants
that appear to be better characterized in emission inventories, such as perchloroethylene
or toluene.
4.10 Insights into the Use of Modeling in Air Toxics Evaluations
The experience gained through the air toxics studies conducted to date leads to
insights that can guide future use of dispersion models in multi-facility, multi-pollutant
studies. Observations on their experience are grouped below under four headings:
(1) Modeling Techniques, (2) Model Performance Considerations, (3) Completeness of
Scope, and (4) Cost Saving Measures.
Modeling Techniques
There are many similarities between routine modeling analyses done for permitting
or State Implementation Plan (SIP) development for criteria pollutants, on the one hand,
and modeling done to support multi-pollutant, multi-facility urban air toxics studies, on
the other. In many cases the same Gaussian dispersion models are applied, with similar
meteorological and release specifications data. There are also some major differences.
The following are issues that appear to be of greatest concern for modeling noncriteria
pollutants:
Background—All of these modeling analyses were limited to pollutants released
within the study area. No background term was included to estimate concentrations
already present in the ambient air upon transport into the study area, in addition to the
incremental contributions from pollutants released within the study area. This background
term is not necessarily represented by available data from remote rural sites, but should
be representative of concentrations at the upwind fringes of the modeling domain.
Although in some cases, such as the Philadelphia and Southeast Chicago studies, the
boundary of the modeling domain was substantially larger than the receptor area to
partially account for local transport, regional background was not effectively
considered. Available national data bases, which cover a wide range of monitoring sites,
suggest that some of the pollutants commonly reviewed may have background levels that
are significant in relation to modeled concentrations (Sullivan and Martini, 1987). The
increased ambient monitoring by EPA and others should improve information on pollutant
background levels, at least in the urban context. Little rural sampling is under way; if
attempted, it would probably yield nondetectable levels for some pollutants.
Tendency to Underestimate Modeled Concentrations for Some Compounds—
Although limited data are available, the results in Table 4-4 suggest that model
underestimates can be relatively large for pollutants such as carbon tetrachloride,
chloroform, ethylene dibromide, xylene, and ethyl benzene. These and other pollutants
suspected of being substantially underestimated by modeling should be (1) pursued to
evaluate cause or (2) flagged in the reported results.
Use of Normalized Modeling Techniques—When normalized model runs are used
to support the analysis of control options, there is a potential for mischaracterizing
release specifications for alternative controls. As processes change, and especially if
controls are added, the release specifications can substantially change, requiring follow-up
modeling of these sources to replace outdated normalized model output. For example, if
a scrubber is assumed to be added to enhance the control of emissions from a facility, the
release specifications can be substantially altered. In this example, the benefits of a
control option could be greatly exaggerated because the lower effective release height of
the scrubber would not be considered if only the resulting emissions were adjusted in the
modeling analysis. (A scrubber can produce a substantially lower exhaust temperature,
which can result in much lower plume rise and can increase the air quality impacts on a
unit mass release basis.) This issue can be a problem especially for data bases that are
used on a long-term basis as a repository for emissions and modeled data. Some system
controls are needed to maintain the currency of the data base, including flagging major
process changes and changes in facility status through the permit process.
A problem related to the representativeness of the release specifications is
maintaining the currency of the emissions data in a data base. If a normalized modeling
approach is selected to meet the immediate project goals, and to become part of a data
base for future uses, a system is needed to ensure that emission rates and release
specifications remain current.
Microscale Errors for Simplified Release Specifications—Most facilities in these
studies were modeled with highly simplified release specifications. In some cases this
likely was a reasonable approach, especially for estimating average concentrations across
a study area. In the case of modeling MEI concentrations for highly complex facilities,
however, major inaccuracies likely resulted from using simplified release specifications to
estimate maximum offsite annual average concentrations. Depending on the layout of
each facility, it is possible that modeled concentrations could underestimate or
overestimate on this basis. The study that appears to best represent MEI concentrations is
the Kanawha Valley study, which is consistent with the emphasis provided on the
modeling goals in Table 4-1.
Modeling Fugitive and Vent Releases—Fugitive and vent releases are dominant
source categories for many major and minor industrial releases of toxic air pollutants. In
many cases, stack emissions make minor contributions, yet the emphasis on model
development within EPA has been on releases of criteria pollutants from stacks. Errors
within area and building source treatments within models such as ISCLT and LONGZ can
introduce bias that acts to underestimate the risks from air toxics (Sullivan and
Hlinka, 1985). These two models were found to have an error in the smoothing term that
acts to underestimate concentration from area or building source treatments. It is unclear,
based on Sullivan and Hlinka, 1986, if this error is present in other models, such as CDM.
Coupling modeling uncertainties for fugitive/vent releases with the ioften large
uncertainties in emission terms adversely affects the confidence in modeling many major
point sources of air toxics. Model performance testing within the composite plume (such
as 1 km from the fenceline of a facility, as used in Kanawha Valley) provides a means of
displaying the overall acceptability of the modeling analyses for such sources.
Model Bias for Low-Level Sources in Urban Areas—It does not appear that any of
these modeling analyses are based on the most recent set of dispersion coefficients
recommended for modeling in urban areas. This is largely because most of these studies
predate this guidance (EPA, 1986). The UNAMAP 6 version of the EPA dispersion models
provides regulatory guidance for using an adaptation of the McElroy-Pooler dispersion
coefficients for urban applications. These coefficients predict concentrations from low-
level sources that are substantially lower than predicted by previous versions of the
UNAMAP models, which may result in significant underestimations of the magnitude of
air toxics risks (Sullivan, 1985). It is also likely that source culpability will be
inaccurately apportioned between high-level releases (such as power plant or incinerator
emissions) and low-level industrial and area sources. (EPA, 1987)
Transformation Products—Limited laboratory testing has indicated that the
reaction products of some pollutants have order of magnitude higher mutagenicity than
the parent compounds. This is a research issue that is beyond the scope of applied
modeling studies conducted at this time, and likely in the near future. Some
transformation products (e.g., secondary formaldehyde) may be covered by aldehyde
monitoring, but others likely are missed by both modeling and monitoring components of
current multi-pollutant, multi-facility studies. Although little can be done to fill this
void in present studies, it is an issue that should be considered when interpreting the
magnitude of the overall results.
Short-Term Variability in Concentrations—There has been no comprehensive
treatment of the effect of the variability of emissions and meteorological data on the
short-term variability in modeled concentrations in an urban area, or in the immediate
vicinity of major industrial sources. For air toxics evaluations, the issue of most likely
potential future concern would be noncarcinogenic health effects related to maximum
short-term concentrations. Studies based on annual average emissions rates and the use of
stability class data could underestimate peak concentrations by orders of magnitude (Luna
and Church, 1972).
To date, it does not appear that any studies have adequately screened the
magnitude of potential for exceedance of short-term noncarcinogenic thresholds, either
region-wide or within isolated subregions. From a modeling perspective, the issue is
accurate representation of dispersion rates on a short-term (hourly) basis and estimation of
hourly distributions of emission rates for key selected industrial processes (and possibly
area source categories).
Two EPA efforts currently in the planning stages may contribute to the
understanding of short-term variations in concentrations of air toxics. EPA's Office of
Research and Development is initiating a study to characterize variations in concentrations
of a wide range of toxic air pollutants in urban areas. In addition, a follow-up study is
planned in the Kanawha Valley to address the distribution of concentrations of selected
toxic air pollutants. These studies could help show the relationship of peak hourly
concentrations to data compiled to estimate thresholds for toxic air pollutants.
Absence of a Mixing Height Term in HEM/SHEAR and SCREAM—The mixing
height term can result in significantly higher predicted concentrations when modeling at
the scale of a metropolitan area, or when assessing exposure for major industrial facilities
that are located 10 to 15 km or more from high population centers. The lack of a mixing
height term in SHEAR and SCREAM appears to be a simplification that could
underestimate some concentrations and risks.
Use of Turbulence Intensity Data to Characterize Dispersion Rates—The Kanawha
Valley study demonstrated the potential benefits that could be achieved if a dispersion
model, such as SHORTZ or LONGZ, were modified to include functional relationships
between turbulence intensity data and site-specific dispersion rates representative of near
ground-level releases.
Availability of Terrain Data—The HEM and GAMS exposure modeling systems
could be refined to include access to nationally available terrain data. This could lead to
more accurate model prediction, especially for tall stacks located in moderate to rough
terrain. Without consideration of terrain, the model estimates for power plants,
incinerators, and other sources with high effective stack heights may underestimate certain
ambient concentrations.
Network Size for Model Performance Evaluations—A common question posed
during the design stages of a study involves the optimal number of monitoring sites and
the length of the field program that is needed to support a model performance evaluation.
The duration of most of these monitoring studies was one season, with
intermittent sampling. As long as meteorological and emissions data are available to
compare this season to annual and climatological conditions, the single season approach
appears to be a reasonable tradeoff between cost and data needs. There also is precedent
for single-season model performance testing for criteria pollutant applications.
Appendix B presents a methodology to help address the issue of the optimal
number of monitoring sites, but ultimately the size of the network needs to be selected to
match the goals of the model performance evaluation. For example, if only model bias is
evaluated, as presented in the South Coast Study, a few carefully selected sites may meet
this objective. If spatial correlation were also to be assessed, such as in the Philadelphia
study, it would appear more reasonable to design an eight to ten site network. If costs
were prohibitive, perhaps some of the cost-saving measures as presented below could be
considered to stretch monitoring resources.
Cost-Saving Measures
Screening versus Refined Modeling—An obvious cost-saving measure for modeling
analyses is to first do screening-level analysis prior to performing refined analysis. By
using coarse modeling grids and simplified release specifications, it is possible to identify
the geographic areas and sources that require the greatest emphasis in the more detailed
modeling to follow. In this manner, available resources can be most efficiently allocated
to improving estimates in high impact areas.
Normalized Emissions Modeling—Modeling of normalized, rather than actual,
emissions is discussed earlier in this chapter. This technique saves resources insofar as it
eliminates the need to rerun the model(s) every time emissions change or a different
control scenario is evaluated (assuming that the release specification for each facility does
not change significantly as emissions change).
Chapter 4
References
Barcikowski, W., 1988. South Coast Air Quality Management District. Letter and
attachments to Tom Lahre, U.S. Environmental Protection Agency, RTP, NC. October 25,
1988.
Black, P., 1986. Workshop to Support the Design of Denver Air Toxics Monitoring
Program, Research Triangle Park, North Carolina, May 1986.
Environmental Protection Agency, 1986. Guideline on Air Quality Models (Revised),
EPA-450/2-78-027R, Office of Air Quality Planning and Standards, Durham, North
Carolina.
Environmental Protection Agency, 1987. Integrated Environmental Management Project:
Baltimore Phase II Report (in progress). Regulatory Integration Division, Washington, D.C.
Environmental Protection Agency, 1988. Baltimore Integrated Environmental
Management Project: Phase II, Draft Final Report, Regulatory Integration Division, EPA
Office of Policy, Planning and Evaluation (in progress).
Fox, D.G., 1981. Judging Air Quality Model Performance. Bulletin of the American
Meteorological Society, Vol. 62, No. 5, May 1981, pp. 599-609.
Liu, C., 1987. California South Coast Air Quality Management District. Personal
Communication.
Luna, R.E. and Church, H.W., 1972. "A Comparison of Turbulent Intensity and Stability
Ratio Measurements to Pasquill Stability Classes," Journal of Applied Meteorology, June
1972.
Shikiya, D., L. Chung, E. Nelson, and R. Rapoport, 1987. The Magnitude of Ambient Air
Toxics Impacts from Existing Sources in the South Coast Air Basin: 1987 Air Quality
Management Plan Working Paper (Revision Number 3). South Coast Air Quality
Management District, El Monte, California.
Sullivan, D. A., and J. Martini, 1987. Evaluation of Total Exposure Assessment
Methodology (TEAM) Data: Review of Ambient and Personal Data Sets (In Progress).
U.S. Environmental Protection Agency, Office of Air Policy Analysis and Review.
Sullivan, D.A., 1985. Evaluation of the Performance of the Dispersion Model SHORTZ
for Predicting Concentrations of Air Toxics in the U.S. Environmental Protection Agency's
Philadelphia Geographic Study. U.S. Environmental Protection Agency, Integrated
Environmental Management Divisions, Washington, D.C.
Sullivan, D.A. and D.J. Hlinka. 1986. Air Quality Exposure Assessments: Comparison
and Evaluation of the Human Exposure Model, Atmospheric Transport Model, Industrial
Source Complex Model, and LONGZ. U.S. Environmental Protection Agency, Office of
Air Policy and Review, Washington, D.C.
White, Fred D. (Editor), 1984. "Review of the Attributes and Performance of Six Urban
Dispersion Models," U.S. Environmental Protection Agency, Environmental Sciences
Research Laboratory, Research Triangle Park, North Carolina, EPA-600/3-84-089.
CHAPTER 5
EXPOSURE AND RISK ASSESSMENT
The following topics will be discussed in this chapter:
• The use of exposure and risk assessment in air toxics studies
• Methodological issues
• Comparison of approaches to interpretation of exposure and risk
• Comparison of field-oriented and research-oriented approaches to exposure
and risk assessment
• Insights into the use of exposure and risk assessment in air toxics studies
5.1 The Use of Exposure and Risk Assessment in Air Toxics Studies
Estimating exposures and risks associated with air toxics is a complex issue that
most studies have addressed in a highly simplified manner. Total human exposures to air
toxics include contributions from numerous residential, commercial, occupational, and
transportation-related microenvironments. Fully describing these exposures and their
consequent risks is beyond the scope of field-oriented studies concerned with ambient
outdoor air concentrations of air toxics. The Total Exposure Assessment Methodology
(TEAM) program has attempted more comprehensive exposure evaluations, but has not
attempted detailed examinations of individual microenvironments. The Integrated Air
Cancer Project, on the other hand, is investigating microenvironments, but is not
conducting comprehensive exposure assessments, such as through the personal sampling
methods employed by TEAM. Evaluation of all risks associated with air toxics is
hampered by gaps in available cancer and noncancer potency data; much additional
research must be completed before reasonably comprehensive evaluations of all potential
air toxic risk can be attempted.
All of the studies reviewed in this report evaluated cancer risks from air toxics.
With the exception of the TEAM studies and the Integrated Air Cancer Project, all the
studies simplified their exposure and risk assessment in a similar manner, essentially
reducing them to simple multiplications of ambient concentration values times population
exposed times available cancer potency scores, as shown in the following formula:
                          ( Model-Predicted  )   ( Number of Persons  )   ( Cancer    )
   Areawide Cancer  = SUM ( or Measured      ) x ( Exposed to Ambient ) x ( Unit Risk )
   Incidence              ( Ambient          )   ( Concentrations     )   ( Factor    )
                          ( Concentrations   )
In this formula, areawide, excess, additive (i.e., multi-pollutant) cancer incidence is
computed by summing over each subarea (e.g., 1x1 km2 grid square) and each pollutant.
Cancer unit risk factors are, with some exceptions, drawn from current listings developed
by the EPA Cancer Assessment Group and are accepted as constants. These factors are
available from EPA's Integrated Risk Information System (IRIS). (EPA, 1988) Some
groups used other unit risk factors for certain pollutants. The South Coast MATES
project, for example, used risk factors developed by the California Department of Health
Services, as required under State law. (Barcikowski, 1988) The issues of greatest practical
concern are (1) estimating concentrations (usually by monitoring or modeling) and
(2) assigning population to these concentrations. Both topics have been discussed in
earlier chapters. This chapter therefore deals primarily with the purposes to which
exposure and risk assessment have been put and how their results have been interpreted.
those obtained through modeling. It is also important to discuss differences between
typical field methods and more refined methods explored by research programs such as
IACP or TEAM.
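A minimal sketch of the incidence calculation appears below: excess lifetime cancer cases
summed over grid cells and pollutants as concentration x exposed population x unit risk
factor. The unit risk factors, populations, and concentrations shown are placeholders
rather than values from any of the studies; where annual incidence is reported, the
lifetime total is typically divided by an assumed ~70-year lifetime.

```python
# Sketch of the areawide incidence formula: sum over grid cells and
# pollutants of concentration x exposed population x unit risk factor.
# All numbers are illustrative placeholders.
unit_risk = {"benzene": 8.3e-6, "chloroform": 2.3e-5}  # per ug/m3, illustrative

# Each grid cell: exposed population and annual-average concentrations (ug/m3).
grid_cells = [
    {"population": 12_000, "conc": {"benzene": 2.5, "chloroform": 0.3}},
    {"population": 4_500,  "conc": {"benzene": 1.1, "chloroform": 0.1}},
]

def areawide_incidence(cells, urf):
    """Excess lifetime cancer cases summed across cells and pollutants."""
    return sum(cell["population"] * conc * urf[pollutant]
               for cell in cells
               for pollutant, conc in cell["conc"].items())

print(f"{areawide_incidence(grid_cells, unit_risk):.4f} excess lifetime cases")
```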
At least two of the studies reviewed—the Baltimore and Santa Clara IEMPs—also
evaluated noncancer health effects. In the Baltimore IEMP, the investigators calculated a
"health hazard index" to assess the potential hazards from cumulative exposures to the
target compounds. The index is based on the assumption of dose additivity and is defined
by the following equation:
HI = E1/AL1 + E2/AL2 + ... + Ei/ALi

Where:
HI = Health index for a particular category of health effect
Ei = Ambient concentration of pollutant i
ALi = Threshold value for pollutant i for a particular category of
health effect
After first calculating the ratio of the ambient concentration for the pollutant to
the threshold for the health effect (Ei/ALi), these ratios are then summed across all
pollutants with the same effect.
The health hazard index is a numerical indication of the level of concern
associated with exposures to complex mixtures of pollutants in the- environment. As the
index approaches unity, concern for the potential hazard of the chemical mixture
increases. If the index exceeds 1, the concern is the same as if the no-effect threshold
were exceeded by the same amount by an individual pollutant.
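The index translates directly into code, as sketched below; the thresholds and ambient
values are made up for illustration.

```python
# Direct transcription of the hazard index: ratio of each pollutant's
# ambient concentration to its effect threshold, summed across pollutants
# sharing the same health endpoint. Values are illustrative placeholders.
thresholds = {"toluene": 400.0, "xylene": 300.0}   # AL_i, ug/m3 (illustrative)
ambient    = {"toluene": 12.0,  "xylene": 12.3}    # E_i,  ug/m3 (illustrative)

def hazard_index(concs, limits):
    return sum(concs[p] / limits[p] for p in concs)

hi = hazard_index(ambient, thresholds)
print(f"HI = {hi:.3f} (concern increases as HI approaches 1)")
```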
As noted in earlier chapters, the studies under review have varied in the types of
exposure and risk estimates they developed. Most, for instance, emphasized areawide or
grid cell average annual population exposures; only a subset estimated values for "most
exposed individuals" (MEIs). Beyond this, however, the interpretation of results follows
some common patterns.
• Ranking the relative importance of pollutants: One of the most common
analyses has been to rank pollutants in order of importance, either in terms
of ambient concentrations or in terms of potential risk (ambient
concentration times potency). Where "threshold" (most noncarcinogenic)
pollutants are concerned, the ranking may simply distinguish between
pollutants whose concentrations fall above health thresholds and those
that do not.
• Ranking the relative importance of sources or source categories: The other
most common analysis is to investigate the relative importance of sources
or source categories — for single pollutants, for common pollutant groups,
or for total risk. Source categories can be broad (area sources versus point
sources, mobile sources versus stationary sources) or narrow
(discrimination among industrial types). In some cases, total risks or
pollutant-specific risks are compared from source to source. Any of these
analyses can help set priorities for potential future source controls.
• Estimating "culpability" of particular sources or source groups in regard to
individual receptors: Culpability analysis determines, for a particular grid
cell or receptor point, the risks or pollutant-specific ambient air
concentrations at that point which are attributable to specific sources in
the area. This type of analysis may be most useful for MEI analysis, where
the immediate question after identifying the most exposed individual is to
establish which sources are responsible for that exposure.
• Evaluating variations in exposures and risks: For studies that estimate MEI
exposures and risks, the ratio of MEI to average values may be of
considerable interest. For all studies, the ranges of exposures or risks are
also of interest, particularly in terms of the number of people exposed to
various levels. Note that population risks can be characterized in two
significantly different ways: (1) as the total number of cancer or other
disease cases estimated within the exposed population, a figure that is
inherently dependent on the size of the exposed population (some studies
normalize their incidence figures by population to avoid this
dependency), and (2) as the probabilistic risk (i.e., ranging from 0 to 1)
experienced by the average exposed person in the region or in each grid cell,
a figure that is not dependent on the total number of people in the region.
• Evaluating relative effectiveness of alternative control scenarios: The 5-
City Controllability Study and several of the IEMPs projected future risk
reductions as a function of alternative combinations of measures
superimposed to control air toxics emissions (see Chapter 6).
5.2 Methodological Issues
Although many of the technical issues involved in the initial steps of exposure and
risk assessment have already been discussed in earlier chapters, there are a number of
special topics that require additional clarification. This section also briefly summarizes
risk assessment methods and available data.
Exposure Assessments
Most technical issues pertinent to conducting exposure assessments have already
been discussed in previous chapters. These included such factors as:
• Selection of study boundaries
• Location of monitoring sites in relation to sources and population
• Design of receptor grids and special receptors for modeling analyses
— Type (polar, rectangular)
— Density
— Special Points (monitoring station locations, known MEI locations)
Various other exposure-related topics are discussed below.
Linking Population Data with Concentration Data—The South Coast MATES
study, which estimated exposure based on the direct use of measured data, employed the
most detailed treatment for assigning concentration data to population of any study
reviewed. The study first plotted the locations of the study's ten monitoring stations on
its 4,200-cell grid, each cell being 1 km on a side. It then estimated average
concentrations for each grid cell by averaging the reported concentrations at each
monitoring station, weighted in inverse proportion to the square of the distance to the cell
from the monitoring station. In other words, concentrations at a monitoring station 2 km
from a particular, cell would be given one-fourth the weight of concentrations 1 km from a
cell. All population within a grid cell was then assumed to have the same average
concentration, based on the distance-weighting procedure. (SCAQMD, 1987)
The 5 City Controllability Study and the NESHAPS component of the Six Months
Study utilized EPA's Human Exposure Model (HEM) to link population and concentration
data. HEM is an integrated modeling-exposure assessment system that first runs a
dispersion model to estimate ambient air concentrations around sources and then assigns
these concentrations to BG/ED (census block group/enumeration district) population
centroids, based on internally stored 1980 census data. HEM calculates a polar
concentration array for 160 receptors around each point source (ten receptors extending
radially out to 50 km along each of 16 wind directions); see Figure 4-3 in Chapter 4.
Then, depending on whether one is running HEM/SHED or HEM/SHEAR, HEM associates
concentrations and populations in different ways as a function of distance from the
emission source. Within 3.5 km, the polar grids are smaller than typical BG/EDs, and
SHED apportions BG/ED populations to each grid point concentration based on proximity
and area. Beyond 3.5 km in SHED, and at all radial distances in SHEAR, a log-log linear
interpolation is made among the polar grid points to estimate concentrations at each
BG/ED centroid. Incremental contributions from
multiple facilities in an area are stored at the BG/ED level in SHEAR and summed across
all facilities. Total exposures are then estimated in SHEAR across all BG/EDs in the study
area. The SHED methodology is considered more accurate than SHEAR for estimating
maximum lifetime risks, but not significantly more accurate for estimating aggregate
incidence.
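The log-log interpolation step can be sketched as follows; the receptor radii and concentrations are hypothetical, and the sketch handles only a single radial rather than HEM's full 16-direction polar array.

    import math

    # HEM-style log-log linear interpolation between two polar-grid
    # receptors on the same radial; equivalent to fitting c = a * r**b
    # locally, which suits concentrations that fall off with distance.
    def loglog_interpolate(r1, c1, r2, c2, r):
        b = (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))
        return c1 * (r / r1) ** b

    # Receptors at 5 and 10 km; BG/ED centroid at 7 km (all hypothetical).
    print(loglog_interpolate(5.0, 2.0, 10.0, 0.6, 7.0))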
Within HEM, area sources are handled only in the SHEAR module. The user does
not provide subcounty-allocated data to SHEAR; rather, SHEAR performs this subcounty
allocation in either of two user-specified ways: (1) uniformly, assuming no spatial
variation in emissions, or (2) by population. In the latter case, a simplified box model
is run within hypothetical boxes superimposed over each BG/ED, and the resulting
concentrations are associated directly with the population centroid of each BG/ED.
In the IEMP studies, a rectangular grid was used rather than a polar coordinate
system like that used in HEM. The census tract maps for the metropolitan areas were
overlaid onto the modeling grid and population was assigned to each grid cell. For census
tracts that extended into two or more grid cells, population was assigned to each affected
grid cell on the basis of the percentage of the tract's area falling within each cell.
Estimates of exposure for each grid cell were made by multiplying the resulting grid cell
populations by the model-predicted concentrations for each grid cell. In several of the
IEMPs, up to four receptor points were defined within each grid cell; in these instances,
the average concentration from these multiple points was multiplied by the grid cell
population to estimate exposure. Total exposure was estimated by summing across all
grid cells.
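A minimal sketch of this grid-cell bookkeeping, with hypothetical tract populations, area fractions, and modeled concentrations:

    # Apportion census-tract population to grid cells by area fraction,
    # then sum population-weighted exposure across cells.
    # (tract population, {cell id: fraction of tract area in that cell})
    tracts = [
        (5200, {"A": 1.0}),
        (8100, {"A": 0.4, "B": 0.6}),
        (3000, {"B": 0.7, "C": 0.3}),
    ]
    modeled_conc = {"A": 2.4, "B": 1.1, "C": 0.8}   # ug/m3 per grid cell

    cell_pop = {}
    for pop, shares in tracts:
        for cell, frac in shares.items():
            cell_pop[cell] = cell_pop.get(cell, 0.0) + pop * frac

    # Exposure per cell = population x concentration; total across cells.
    total_exposure = sum(cell_pop[c] * modeled_conc[c] for c in cell_pop)
    print(cell_pop, total_exposure)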
A number of quite different approaches for relating populations and concentrations
are planned for the Denver study. As noted earlier, most toxic pollutants in the Denver
area are believed to be mobile-source related, with concentration peaks near the urban
center; hence, more homogeneous concentrations are expected there than in a more
industrialized area. Because of this anticipated regional homogeneity, only three
monitoring stations will be used, but, as discussed in Chapter 2, pollutant coverage at
each site will be extensive. One goal of the study is to represent uncertainty in
concentrations by estimating ranges of exposures as well as best estimates. Plans for
defining these ranges include:1
1. Calculating expected exposures by simply assigning measured values at
each of the three monitoring stations to the population residing in each
station's delineated zone.
2. Calculating exposures at distances away from the three monitoring stations
through a variety of statistical correlations, one of which is expected to be
multiple regression analysis involving (1) available conventional pollutant
data and (2) distance from the center of the city. CO data would serve as
an indicator of all automotive-related toxics, which include all categories
of pollutants of concern in the study (metals, VOCs, semivolatiles, and
aldehydes/ketones). Fine particulate data would be correlated with metals
concentrations, and ozone data with aldehydes/ketones. Distance from the
center of the city is significant because empirical evidence suggests that,
despite regional mixing, air pollutant concentrations peak in the central
urban area.
3. Estimating high and low average concentrations across all three monitoring
sites for each pollutant to define the endpoints of ranges of exposures in the
region.

1 These approaches may not necessarily be carried out as described here.
Population Mobility/Microenvironmental Exposure—Population mobility has not
been widely addressed in studies to date. Of the studies reviewed, only the Motor
Vehicle Study attempted to factor in non-outdoor exposures, using a modified version of
EPA's NAAQS Exposure Model (NEM) for CO. (Ingalls, 1985) The NEM approach relies
on an activity pattern model that simulates a set of population groups, called cohorts, as
they go about their day-to-day activities. Each cohort is assigned to a specific
location type during each hour of the day, and each of several specific location types in
the urban area is assigned a particular ambient pollutant concentration based on
fixed-site monitoring data. The model computes the hourly exposures for each cohort and
then sums these values over the desired averaging time to arrive at average population
exposure and exposure distributions. Annual averages are possible because a full year's
data from fixed-site monitors are input to the model.
It should be noted that indoor concentrations (and, therefore, exposures) caused by
ambient mobile source pollutants are also accounted for in the model. A scaling factor of
0.85 was applied to the appropriate neighborhood monitoring data to estimate indoor
exposures to the pollutant of interest in each neighborhood. The scaling factor was based
on comparisons of indoor and outdoor CO levels of homes with no indoor CO sources
(e.g., gas stoves, smokers).
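The cohort logic, including the 0.85 indoor scaling, can be sketched as follows; the location types, concentrations, and activity pattern are hypothetical, and a full NEM run would track many cohorts over a year of hourly data.

    # NEM-style cohort exposure: each cohort spends each hour in a location
    # type carrying an ambient-based concentration; indoor hours are scaled.
    ambient = {"home_area": 1.8, "work_area": 3.2, "roadway": 6.5}  # ug/m3
    INDOOR_FACTOR = 0.85   # indoor/outdoor ratio for homes with no indoor sources

    def hourly_conc(location, indoors):
        c = ambient[location]
        return c * INDOOR_FACTOR if indoors else c

    # (location type, indoors?, hours per day) for one cohort
    cohort_pattern = [
        ("home_area", True, 14),
        ("roadway", False, 2),
        ("work_area", True, 8),
    ]

    # Time-weighted daily average exposure concentration for the cohort.
    daily_avg = sum(hourly_conc(loc, ind) * h
                    for loc, ind, h in cohort_pattern) / 24
    print(f"{daily_avg:.2f} ug/m3")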
The modified NEM does not account for photochemical reactions. The exposure
levels predicted by the model are those resulting from direct exhaust emissions, and do
not account for either the destruction or the photochemical formation of the pollutant in
the atmosphere. The model also assumes that the pollutant of interest has emission,
formation, and dispersion characteristics similar to those of CO.
In contrast to the NEM approach in the Motor Vehicle Study, most studies
reviewed herein implicitly assume a fixed population defined by place of residence.
To evaluate the reasonableness of this assumption, Figure 5-1 summarizes the results of
the work done by Rheingrover (1984), which helps shed light on the assumption of a
nonmobile population. This figure applies to the Philadelphia Metropolitan area, as
represented by 85 five-km grid cells.
Figure 5-1. Comparison of Residential and Workday Population by Grid Cell

[Line plot of residential and workday population across the 85 grid cells; the
core central business district corresponds to the peak near grid cells 47 and 48.]
This figure shows population totals (1) under the assumption that all residents remain at
their place of residence and (2) under workday estimates that account for occupational
mobility. As shown, most grid cells had relatively equal residential and workday
populations, reflecting a roughly equal influx and outflux of people during the work
day. The core central business district (see grid cells 47 and 48 in Figure 5-1) is a
noticeable exception: the influx of workers substantially increased daytime exposures
over what would have been estimated from residential exposures alone.
If this is representative of other metropolitan areas (which has not been
demonstrated to date), the assumption of no mobility for ambient-based exposure
assessments does not appear to be a major limitation, at least in terms of outdoor air
exposures. A more important factor is likely the mobility of subjects among different
microenvironments (e.g., indoors, automobiles). Only personal sampling and
microenvironment sampling are able to account for the mobility of the population beyond
outdoor air exposures.
Risk Assessment
The air toxics studies reviewed here used existing health effects data to estimate
potential air toxics risks based on their exposure estimates. None of the applied studies
generated any original health effects data. Although technical discussion of the
development and interpretation of dose-response data is beyond the scope of this study, a
brief review of the use of existing health effects data for air toxics studies is appropriate.
The use of potency figures for carcinogens and for noncarcinogens is discussed separately
below.
The conversion from exposure to risk contributes significantly to the uncertainty in
urban assessments, especially when unit cancer risk factors are used. The
use of limited animal data to evaluate human risks, and the extrapolation from high to
low doses, are two contributors to uncertainties that can span orders of magnitude. For
air studies, additional uncertainties also exist when oral ingestion effects data are used to
estimate potencies of the same substances when inhaled; in the absence of other evidence,
researchers must often make the assumption that oral and inhalation potencies are the
same.
Carcinogens—Cancer has been the major emphasis of these studies, with nearly all
analyses focused on this effect. All of the studies that performed exposure/risk
assessments relied heavily on unit risk values developed by the EPA Cancer Assessment
Group (CAG). These are usually expressed as "unit risk factors": a number that represents
the probability of contracting cancer from constant inhalation, over a nominal 70-year
lifetime, of 1 ug/m3 of the substance in question. These values are established
conservatively, generally representing the 95th percentile upper bound based on
laboratory experiments.
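The arithmetic of applying a unit risk factor is straightforward, as the minimal sketch below shows; the unit risk, concentration, and population are all hypothetical. The division by 70 reflects the annualization convention discussed in the first assumption below.

    # Applying a unit risk factor (URF): lifetime cancer probability per
    # ug/m3 of constant inhalation over a nominal 70-year lifetime.
    urf = 8.3e-6              # hypothetical lifetime risk per ug/m3
    annual_avg_conc = 1.7     # ug/m3, assumed constant for 70 years
    population = 250000

    lifetime_risk = urf * annual_avg_conc         # per-person probability
    expected_cases = lifetime_risk * population   # aggregate 70-year incidence
    annualized_cases = expected_cases / 70        # common, if not strictly
                                                  # valid, policy convention
    print(lifetime_risk, expected_cases, annualized_cases)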
Studies that employ these dose-response functions explicitly or tacitly assume the
following:
• Risks calculated are theoretically valid only for constant exposures over a
70-year period. Some studies have "annualized" cancer incidence or risk
values by dividing lifetime risk or the total number of cases by 70.
Although this is intuitively useful for policy purposes, it is not, strictly
speaking, a valid application of these numbers.
• There is assumed to be no threshold of exposure below which carcinogenic
effects cannot occur. Any dose of a carcinogenic substance, no matter how
small, is assumed to impose a finite risk of contracting disease. For most
studies of carcinogenicity at the low levels encountered in ambient air, the
shape of the dose-response function is assumed to be linear.2
• Cancer effects are additive. Studies have assumed that it is valid to sum
the risks of different carcinogenic substances. For instance, if substances A
and B are each assumed to impose a risk of 10 cancer cases over 70 years to
a given population, then the additive risk for the two together is 20 cases.3
• No interactions exist among chemicals that would cause simultaneous
exposures to multiple carcinogens to lead to either higher or lower disease
incidence than would otherwise be presumed to occur. CAG potency
values include no allowance for synergistic or antagonistic effects.
According to EPA guidance, all cancer incidence estimates are assumed to
be additive unless there is specific evidence of interaction among
substances. So far, no such evidence is available with reference to the
concentrations of carcinogens that may be found in the ambient air.

• No cancer occurs from secondary or transformation products or other
compounds for which cancer potency scores are unavailable.
Since all dose-response values published by CAG have undergone extensive peer
review, individual studies generally have not had to defend the validity of any
toxicological potency estimates developed by this group. Some studies, however, such as
2 Obviously, risk levels cannot exceed 1, but the shape of dose-response curves as they
approach 1 will vary. This issue is assumed to be unimportant in ambient air studies
because measured or predicted ambient concentrations are almost always at the very
low end of the observable range.
3 The theoretically more correct approach is to assume that effects should be summed
only for carcinogens affecting the same organ. For instance, risks from all carcinogens
linked with lung cancer could be summed, but not risks from carcinogens affecting the
lungs and, say, the liver. Another more theoretically correct approach to cancer risk
assessment is to consider the weight of available evidence on hazard in defining
potential risks: risks from proven human carcinogens should, in such an approach, be
given more weight than risks from substances for which evidence of carcinogenicity
comes only from laboratory experiments. EPA uses a five-category classification
scheme (A through E) to distinguish among pollutants showing varying weights of
evidence of carcinogenicity. Group A includes definite human carcinogens, Group B
includes probable human carcinogens, etc., on down to Group E, which includes
compounds for which there is no evidence of carcinogenicity. (EPA, 1987)
the Kanawha Valley and Southeast Chicago studies, have augmented the CAG list with
additional unit risk values developed by other offices within EPA. For example, in
Kanawha Valley other unit risk values were used for formaldehyde and propylene oxide.
Although use of CAG dose-response values has been the standard approach
throughout these studies, at least two alternative approaches have been used, or
suggested, as methods of investigating the potential health effects of the urban soup:
(1) the use of structure-activity relationships and (2) comparative risk methods.
Structure-Activity Methods—The Southeast Chicago study considered but rejected
the use of structure-activity data as a potential means of expanding the pollutants covered
in its risk assessment (Summerhays, 1987). In this approach, pollutants for which CAG
has not yet developed dose-response scores are compared to CAG-scored chemicals on the
basis of similar chemical structures and activities. Where chemical structures and
properties are sufficiently similar, risks from the matched chemical are presumed to be the
same as for the scored chemical. The benefit of this approach is to expand the number of
pollutants evaluated; its obvious disadvantage is its uncertainty. Pollutants with closely
similar chemical structures and activities may, in fact, have entirely different potencies.
Comparative Risk Method—The most fundamentally different risk assessment
approach suggested in the studies reviewed here is to estimate the risk of a suspect
carcinogen (e.g., diesel emissions), for which there are no epidemiological cancer data, by
comparing the potency of the agent in short-term mutagenicity and carcinogenicity
bioassays to those of known human carcinogens. The known human carcinogens are coke
oven, roofing tar, and cigarette smoke tar emissions. The following formula is used:
    Estimated Human Risk (untested mixture) =
        Known Human Risk (tested carcinogen)
        x [Bioassay Potency (untested mixture) / Bioassay Potency (tested carcinogen)]
The IACP has used this biological approach to evaluate organic particulate
emissions from specific source categories—road vehicles, residential heating (oil, wood),
and utility power plants. These tests consider the relative potency in both skin tumor
initiation and short-term bioassays. (Lewtas, 1987)
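A minimal sketch of the calculation, with hypothetical potency values rather than measured ones:

    # Comparative potency: scale a known human unit risk by the ratio of
    # short-term bioassay potencies.
    known_human_risk = 6.2e-4   # unit risk of the tested carcinogen (hypothetical)
    bioassay_tested = 1.9       # bioassay potency of the tested carcinogen
    bioassay_untested = 0.4     # bioassay potency of the untested mixture

    estimated_human_risk = known_human_risk * (bioassay_untested / bioassay_tested)
    print(f"{estimated_human_risk:.2e}")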
The advantage of this approach is that it directly evaluates complex organic
mixtures (e.g., POM), potentially taking synergistic or antagonistic effects into account.
This avoids potential errors in previous approaches (e.g., in the Six Months Study) of using
a single compound, often B(a)P, as a surrogate for all compounds within a class (typically
POM). And, because this approach uses short-term tests, potency factors can be
developed more quickly and cheaply than human cancer data or long-term
animal data. The disadvantage is that the short-term tests commonly used (including in
vivo skin tumorigenicity bioassays and various in vitro bioassays such as the Ames
Salmonella test) evaluate mutagenicity rather than carcinogenicity. Although most
known human carcinogens are also mutagens, the reverse is not true—evidence suggests
that many mutagens are not carcinogens.
Comparative potency factors were used in the Motor Vehicle, 5 City
Controllability, and Baltimore IEMP studies.
Noncancer Effects—Noncancer effects have been considered in the IEMP
Baltimore, Santa Clara, and Denver projects, but work in evaluating health endpoints
other than cancer is still fairly recent and not well developed. Although some
pollutants—like carcinogens—are assumed to pose risks at any dose, most noncancer
effects are presumed to occur only above a definable "no effect" level. Defining this
level, and predicting whether it is ever exceeded in the ambient environment, is the focus
of noncancer health effects evaluations of the urban soup.
Although few studies have attempted to assess noncancer endpoints, those that
do must consider the following issues:
• Noncancer effects are assumed not to be additive for dissimilar endpoints:
Where noncancer effects have been demonstrated, the range of their nature and
severity is wide. Some effects are temporary and reversible (headaches,
blurred vision); others are permanent but nonfatal (birth defects, neurological
disorders); and some are fatal or potentially fatal (heart disease). It is clearly
inappropriate to sum across such different effects.
• Threshold values for noncancer effects may be time-sensitive: For some
pollutants, adverse health effects are assumed to exist only if the
threshold is exceeded for long periods of time (e.g., one year). For others,
the significant exceedance period may be minutes or hours. Very few data
are available on which to ground reasonable assessments of effects.
• Thresholds may vary among individuals or across population groups: For
instance, lead has much more severe effects on children than on adults.
Individual sensitivities can also change over time—sensitization to
formaldehyde is an example.
• Dose-response relationships above threshold doses are a separate factor for
evaluation: When thresholds are exceeded, dose-response relationships can
follow different curves—linear, step-function, or curves of various shapes.
Studies that have addressed these issues typically make a number of simplifying
assumptions. The most common is to focus on the threshold level alone, disregarding the
shape of the dose-response curve above the threshold. The statistic of interest is simply
whether or not a substance exceeds its threshold or some fixed fraction of its threshold,
such as 10 percent or 25 percent—the latter approach leaving room for other routes of
exposure, such as residential indoor air, occupational exposures, or noninhalation routes.
As an example, the Santa Clara IEMP estimated that there were 100,000 people in the
study area at some risk of blood effects from benzene exposures.
Health data on noncarcinogens are limited. Although industrial exposure
standards have been established for many chemicals, EPA has not endorsed their use for
ambient air toxics studies for a number of reasons, one being that they are
usually based on the acute effects of relatively high exposures to healthy, adult male
populations in work settings. Effects of involuntary, long-term exposures at subacute
levels have not been evaluated in equivalent detail, particularly on potentially more
vulnerable subpopulations. EPA is in the process of developing inhalation reference
doses, which will facilitate the inclusion of noncarcinogenic effects in future studies.
The IEMP Santa Clara, Baltimore, and Denver projects have, however, provided
limited data on noncarcinogenic evaluations of the urban soup.
Santa Clara—Because the only noncarcinogenic potency scores available at the
time were based on using oral reference doses as a surrogate for inhalation values,
the Santa Clara study stopped short of actually performing a risk assessment for
noncarcinogens. Instead, the percentages of the population above the threshold were
estimated for the following effects: immune system, blood, liver, and kidney. These
estimates were based solely on chronic effects because only annual average modeled
concentrations were available, and because of data limitations for short-term health
effects.
It is possible that the threshold exceedances identified in the Santa Clara study
could be translated into estimates of noncarcinogenic risk in the future, once more
complete coverage of inhalation reference doses is available.
Baltimore—Noncancer effects were determined in two ways. First, the increased
risks of several noncancer effects (liver toxicity, kidney toxicity, reproductive,
neurological, fetal, and blood effects) were evaluated for specific compounds in the
modeling exercise. In particular, the model-predicted ambient air concentration of each
pollutant was divided by the effect threshold(s) relevant to that pollutant. If the
resulting ratio exceeded unity, a concern for noncancer effects was identified. Second, a
hazard index (discussed earlier in this chapter) was developed that summed individual
pollutant ratios by effect category. This latter analysis was aimed at examining the
impact of exposure to complex chemical mixtures in the ambient air: if the hazard index
exceeded unity, the concern for noncancer effects would be the same as for the exceedance
of an individual threshold value. The resulting analysis suggested, preliminarily, some
concern for blood effects from benzene exposures and elevated concern for the following
effects from xylene exposures: liver, kidney, reproductive, neurological, fetal, and blood.
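The hazard index bookkeeping can be sketched as follows; the pollutants, concentrations, and thresholds are hypothetical, chosen only to mirror the effect categories above.

    # Baltimore-style hazard index: sum concentration/threshold ratios by
    # effect category; an index above 1 flags a concern.
    # pollutant: (predicted ug/m3, {effect: threshold ug/m3})
    pollutants = {
        "benzene": (2.0, {"blood": 3.0}),
        "xylene":  (5.0, {"liver": 8.0, "blood": 12.0}),
    }

    hazard_index = {}
    for conc, thresholds in pollutants.values():
        for effect, threshold in thresholds.items():
            hazard_index[effect] = hazard_index.get(effect, 0.0) + conc / threshold

    for effect, hi in sorted(hazard_index.items()):
        print(effect, round(hi, 2), "concern" if hi > 1 else "below unity")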
Denver—The IEMP Denver study collected 12- and 24-hour measured air quality
data on days that include limited coverage of peak pollution levels. The study anticipated
that the data gathered on peak exposure days may be of future use when additional
short-term health effects data become available. The project will evaluate these effects
to the extent feasible with currently available data, but it is deliberately gathering more
data than it can now evaluate, in expectation of better future information—an interesting
precedent for other studies.
5.3 Comparison of Approaches to Interpreting Exposure and Risk
Table 5-1 compares the emphasis of the various studies in relation to interpreting
exposure or risk information.4
Table 5-1. Comparison of Approaches to Interpreting Exposure and Risk Information

[Matrix indicating which types of interpretation each study emphasized. Rows:
South Coast MATES; Motor Vehicle Study; Clark County; Southeast Chicago; the
IEMPs (Philadelphia, Baltimore, Santa Clara, Kanawha, Denver); NESHAPS*;
35 County Study*; 5 City Controllability Study; Urban Air Toxics Program; TEAM;
IACP. Columns: population exposure or risk; maximum exposure or risk; ranking of
exposure or risk by source (e.g., stationary vs. mobile, area sources); ranking of
exposure or risk by pollutant; evaluation of patterns of exposure or risk. Nearly
all studies are marked for population exposure or risk and for ranking by
pollutant; the Urban Air Toxics Program is marked "Goals vary."]

* Part of Six Months Study
4 This information is taken, in some cases, only from published final reports and may
therefore not capture all analyses conducted for each study.
The purpose of this report is to present methods and approaches, not results. The
brief discussions below therefore cover only the most general types of exposure and risk
conclusions, emphasizing problems of interpretation rather than statements of results.
Ranking the Relative Importance of Pollutants
Most studies evaluated the relative importance of pollutants in terms of potential
population risks. An important general issue in ranking the relative significance of
pollutants has been the accuracy of potencies or the assignment of default potencies to
certain classes of pollutants. For instance, in the absence of data on the speciation of
airborne chromium between the hexavalent and the trivalent forms, some studies have
assumed, conservatively, that all chromium emissions are in the hexavalent form. This
significantly increases the calculated risks associated with chromium. Actual chromium
risks are likely far lower, since a substantial portion of emissions likely occurs in the
trivalent form. The Santa Clara IEMP assumed that only 10 percent of ambient chromium
is Cr+6. The 5 City Controllability Study avoided this problem by directly estimating
both Cr+6 and total chromium emissions and modeling them separately. Preliminary
results from the 5 City Study suggest that only about one-third of total chromium
emissions occur as hexavalent chromium.
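The sensitivity of calculated risk to the speciation assumption can be seen in a small sketch; the total chromium concentration and Cr+6 unit risk are hypothetical.

    # Effect of the assumed Cr+6 fraction on calculated chromium risk.
    total_cr = 0.005     # ug/m3 total ambient chromium (hypothetical)
    urf_cr6 = 1.2e-2     # lifetime unit risk per ug/m3 of Cr+6 (hypothetical)

    for fraction_cr6 in (1.00, 0.33, 0.10):   # all, about one-third, 10 percent
        risk = total_cr * fraction_cr6 * urf_cr6
        print(f"assumed Cr+6 fraction {fraction_cr6:.2f}: lifetime risk {risk:.1e}")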
The problem of speciation has also arisen with beryllium, nickel, and products of
incomplete combustion (PICs), where some investigators have assigned to all species of a
compound the toxicity of a particular compound or chemical form for which data are
available.
Ranking the Relative Importance of Sources or Source Categories
A general conclusion of a number of the studies reviewed was that area source-
related pollutants tend to dominate population-weighted average risks, but that point
sources often dominate MEI impacts. Within area sources, mobile source-related
compounds—generally POM and VOC—tend to dominate exposures and risks. Wood
stoves, fireplaces, chrome platers, comfort cooling towers, hospital sterilizers, small
degreasing operations, and gasoline marketing are other area source contributors to
exposures and risks in most urban areas.
The Santa Clara study investigated risks from some relatively unusual classes of
sources, such as volatilization of organics from drinking water treatment plants and
electronic component manufacturing. Risks from these sources were found to be
relatively minor in comparison with more common sources of toxic air exposure. This
study also did some risk analysis of highways to evaluate MEI exposures near
intersections.
For some point source emission categories, such as those investigated in detail in
the Kanawha Valley study, the general conclusion has been that fugitive and vent releases
may often dominate air toxic risks, in large part because of their release characteristics.
Since emissions from these sources occur at low elevations, potential exposures and risks
near the sites may be relatively high. Stack emissions of a particular air toxic tend to pose
lower risks than low-level emissions of the same compound because greater dispersion is
likely to occur prior to exposure.
One "source" that has not been addressed in significant analytical detail is
background contributions from adjoining regions. As noted in Chapter £, modeled
exposures tend to be consistently lower than monitoring concentrations of the same
pollutants. The inability of modeling studies to consider background is one of several
factors contributing to this underestimation. Transport may be most important for long-
lived substances such as carbon tetrachloride, but it can theoretically be a significant
factor for other pollutants as well. "
Evaluating Exposure and Risk Patterns
Comparisons of MEI risks to average individual risks were done in the IEMP
studies (especially Philadelphia and Kanawha). On the basis of the types of evidence
developed by these programs, it appears that the ratios of MEI to average risks (or
concentrations) may be significant in some areas, possibly between 10 and 100. This
ratio, however, is strongly dependent on how close residential areas are to major
industrial facilities or other significant point sources. In Philadelphia, the nearest
residences to industrial facilities were typically much farther away than they were in the
Kanawha Valley. In relatively unindustrialized areas, such as Denver, ratios of MEIs to
average population exposures will typically be considerably lower.
Using TEAM data, it is possible to look at ranges of average population exposures.
TEAM studies suggest that maximum exposures within a population are likely to be far
higher than mean exposures, possibly by as much as three or more orders of magnitude.
Figure 5-2 presents examples of the wide range in concentrations observed based on
personal sampling. (Wallace, 1987) These plots show the differences in indoor and
outdoor concentrations, based on matched-pair TEAM samples, for several important
VOCs. (The matched-pair values actually represent the differences between personal
monitoring samples and outdoor samples; however, since the samples represent the
7 p.m. to 7 a.m. time frame, the personal monitoring samples largely reflect indoor
concentrations.) All outdoor sampling was conducted in direct proximity to the indoor
sampling locations.
Figure 5-2. Distributions of Matched-Pair Concentration Differences, Based on
Indoor and Outdoor TEAM Samples, from 7 p.m. to 7 a.m.

[First panel: tetrachloroethylene. Indoor-minus-outdoor concentration difference
plotted against cumulative frequency (percent), by season (winter, summer or
fall); values above zero indicate [indoor] > [outdoor].]
Figure 5-2 (cont'd). [Second panel: m,p-dichlorobenzene, fall and summer.]
Figure 5-2 (cont'd). [Third panel: chloroform, winter, fall, and summer.]
Figure 5-2 (cont'd). [Fourth panel: benzene, summer.]
Since TEAM compares outdoor ambient exposures with indoor exposures of all
types (occupational exposures, consumer product exposures, residential exposures), it is
not surprising that some fraction of the indoor sample concentrations exhibits much
higher levels than the matched outdoor samples. This effect is pronounced during the
winter for pollutants (e.g., p-dichlorobenzene) associated with consumer products (e.g.,
mothballs) or indoor activities. On the other hand, the middle range and low end of
indoor and outdoor exposures documented by TEAM tend to be in much closer agreement.
Again, this is intuitively plausible—ambient air will be a proportionally more important
source of exposure for populations experiencing low workplace or other indoor air toxics
concentrations, in part because of infiltration of outdoor air to the indoors.
In investigating patterns of exposure near large industrial complexes, the Kanawha
study apparently was successful in the level of source disaggregation that it selected. As
discussed in Chapter 4, this study represented emissions within large chemical facilities as
a series of grouped sources. The alternatives would have been, at one extreme, to model
all emissions as a single point source at the centroid of each facility or, at the other
extreme, to attempt to model every release point individually. Analysis suggested that
results would have been significantly less accurate, especially for MEI exposures, if these
large complex sources had been modeled as single points, but that little additional
accuracy, if any, would have been gained if all sources had been modeled individually.
5.4 Insights into the Use of Exposure and Risk Assessment in Multiple Air Toxics
Studies

The following insights have been gained from air toxics exposure and risk
assessments done to date.
• Perhaps the single most important insight that can be offered is that
exposure and risk assessments are very complex, integrating many data
bases, procedures, and assumptions. As such, the user should be very
careful in interpreting the results or comparing them to those of
another study. Major uncertainties are involved in all steps of the process,
from monitoring and emission inventorying through modeling, assigning
exposure levels to populations, and attributing health effects to exposure
levels.
• Because of the errors, uncertainties, assumptions, and limitations involved
in risk assessments, most investigators concede that their results should
not be interpreted as absolute or precise predictions of cancer or
other health risk. At best, most studies have labeled themselves as
screening or scoping studies, yielding only estimates of the relative
importance of various sources, pollutants, etc., or of the relative merits of
alternative regulatory measures to control air toxics.
• Because cancer unit risk factors are considered to be conservative for a
number of reasons, some studies have concluded that their results are
biased conservatively high. In fact, there are other potential biases in these
studies, some of which may lead to underpredictions of cancer risk.
Missing source categories, unaddressed pollutants, underpredicting models,
and underaccounting of pollutant transformation all represent potential
biases on the low side. Conservative unit risk factors, conservative
exposure assumptions, and the assumption of additivity among different
carcinogens all represent potential biases on the high side. The practice of
considering all chromium, beryllium, and nickel to be as carcinogenic as
particular forms of these metals (i.e., Cr+6, beryllium sulfate, and nickel
carbonyl/subsulfide, respectively), where this is done, also biases results
significantly on the high side.
• The use of EPA's Human Exposure Model frees the study manager from
having to associate concentration data with population data in an exposure
assessment, as this is done internally using stored 1980 U.S. Census
Bureau data. This is a potential advantage for areas not wanting to develop
their own exposure assessment modeling capabilities.
• Evolving methods being developed by the TEAM and IACP studies both
challenge the "standard" assessment methods used in most studies to date
and offer promise for improved methods in future studies. For example,
the TEAM results challenge the standard assumption of constant exposure
to ambient outdoor air, and suggest that personal exposures may in many
cases be much more influenced by indoor, workplace, and product exposures
than by ambient air. As another example, the IACP challenges the
practice of evaluating pollutants individually, and is developing techniques
to assess the effects of complex mixtures through the use of source
apportionment/bioassay fractionation of ambient samples and the
development of comparative potency factors for complex organic mixtures.
• The study manager should be aware that the data, techniques, and
assumptions in the field of cancer risk assessment have changed rapidly in
the past five years and will probably continue to be dynamic as better data
become available. Changes in key emission factors and cancer unit risk
factors, for example, can dramatically change both the absolute and
relative contributions of particular pollutants, source categories, etc., in
one's study analysis. The study manager should thus either maintain a
dynamic data base, incorporating changes as they are perceived, or be
prepared to be second-guessed if he or she "freezes" the study at a certain
point in time and certain data elements, assumptions, or techniques become
outdated.
Chapter 5
References
Barcikowski, W., 1988. South Coast Air Quality Management District. Letter and
attachments to Tom Lahre, U.S. Environmental Protection Agency, Research Triangle
Park, North Carolina, October 24, 1988.
EPA, 1987. National Air Toxics Information Clearinghouse report entitled "Qualitative
and Quantitative Carcinogenic Risk Assessment," EPA 450/5-87-003, U.S. Environmental
Protection Agency and STAPPA/ALAPCO, pp. A-11, A-12.
EPA, 1988. Letter from W.H. Farland to Potential Risk Assessors announcing availability
of IRIS, and accompanying brochure. EPA Office of Research and Development.
Washington, D.C., April 15, 1988.
Ingalls, M.N., 1985. SWRI, Improved Mobile Source Exposure Estimation, EPA-460/3-85-
002, March 1985.
Lewtas, J., and L. Cupitt, 1987. "Overview of the Integrated Air Cancer Project,"
Proceedings of the 1987 EPA/APCA Symposium on Measurement of Toxics and Related
Air Pollutants, Research Triangle Park, North Carolina, pp. 555-561.
Rheingrover, S., D. Ward, B. Mazaache, and J. Thomas, 1984. "Draft Report: Estimates
of 1980 Work and Residential Populations for Philadelphia, Pa.," Contract No. 68-02-
3970, General Software Corporation, prepared for EPA Office of Toxic Substances,
Washington, D.C., December 1984.
SCAQMD, 1987. MATES Working Paper #4. Urban Air Toxics Exposure Model:
Development and Application. South Coast Air Quality Management District and SAI,
Inc. October 1987.
Sullivan, D. A., and J. Martini, 1987. Evaluation of Total Exposure Assessment
Methodology (TEAM) Data: Review of Ambient and Personal Data Sets (In Progress).
U.S. Environmental Protection Agency, Office of Air Policy Analysis and Review.
Summerhays, 1987. "Air Toxics Emission Inventory for the Southeast Chicago Area."
U.S. Environmental Protection Agency Region V.
Wallace, Lance, 1987. The Total Exposure Assessment Methodology (TEAM) Study:
Summary and Analysis (Vol. I), U.S. Environmental Protection Agency, Office of Research
and Development, Washington, D.C.
CHAPTER 6
CONTROL STRATEGY SIMULATION AND EVALUATION
The following topics are discussed in this chapter:
• The use of control strategy simulation and evaluation in urban air toxics
assessments
• Comprehensive vs. site-specific strategy simulation
• Control strategy simulation procedures in the 5 City Controllability Study
• Insights on control strategy simulation and evaluation
6.1 The Use of Control Strategy Simulation and Evaluation in Urban Air Toxics
Assessments
A primary objective of urban air toxics assessments has been to define the existing
levels of emissions, concentrations, exposures, and risks in the study area. This has
generally involved the assessment of current conditions in what is often termed
"baseline" or "base year" analyses. Some studies have stopped at this point. Ambient air
monitoring studies can only assess current conditions because of their inherent inability to
project future ambient air levels. Emission inventory/dispersion modeling studies, on the
other hand, are suitable for control analyses, because emissions can reasonably be
projected into the future both as a function of anticipated growth in an area and as a
function of alternative control measures that may be applied. As indicated previously,
this represents a significant advantage of using an emission inventory as a basis for
conducting an urban air toxics assessment.
A principal objective in several of the studies reviewed was the analysis of the
potential for risk reductions achievable through alternative control measures. To do this,
specific control measures or combinations of control measures are superimposed on the
base case emission inventory, and the resulting emission projections are then modeled in
the same manner as in the base year analysis to estimate reductions in exposure and risk.
Thus, the emission inventory projection becomes an important tool in carrying out the
control strategy evaluation.
Once the control analysis is completed, the study manager can provide the local
policymakers (i.e., risk managers) with the information needed to prioritize various
alternative measures, based on risk reduction potential. Cost-effectiveness can also be
evaluated. A cost-effectiveness analysis of control options can generate information
highly useful to local policymakers in setting control priorities. Decision makers can get
a sense of the effectiveness of individual controls and total control strategies by seeing
their costs to reduce cancer risk, either in terms of aggregate (population) incidence or
risk to the maximum exposed individual. This allows them to allocate their community's
limited resources to the activities that provide the greatest environmental health
protection.
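Average and marginal cost-effectiveness fall directly out of such scenario results, as in the minimal sketch below; the cases-reduced and cost figures are illustrative only (cf. Table 6-2).

    # Average and marginal cost per cancer case reduced across a sequence
    # of increasingly stringent control packages.
    # (annual cancer cases reduced, total annual cost in $1,000)
    scenarios = [(0.03, 323), (0.09, 632), (0.15, 900), (0.25, 2080)]

    prev_cases, prev_cost = 0.0, 0.0
    for cases, cost in scenarios:
        average = cost / cases                                # $1,000 per case
        marginal = (cost - prev_cost) / (cases - prev_cases)  # $1,000 per
                                                              # incremental case
        print(f"cases {cases:.2f}: average {average:,.0f}, marginal {marginal:,.0f}")
        prev_cases, prev_cost = cases, cost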
Such analyses can also yield estimates of "co-control" potential, i.e., the extent to
which measures designed for air toxics control will also control ozone, PM-10, or other
criteria pollutants. This is an important consideration as air toxics control benefits may
help "sell" certain criteria pollutant control measures that may otherwise have marginal
acceptability on their own merits. Similarly, the concept of co-control may help the
acceptance of certain air toxics measures.
6.2 Comprehensive vs. Site-specific Strategy Analyses
Two types of control strategy analyses are employed in the studies reviewed. One
can be best described as a comprehensive or "across-the-board" type of analysis, wherein
different combinations of controls are applied (hypothetically) to all facilities within
many source categories. This type of analysis was employed in the 5 City Controllability
Study. The second type of analysis is more limited and site specific. In this latter type of
analysis, the technical feasibility of candidate control measures is first evaluated in detail
for particular facilities within the study area, followed by a simulated application of these
specific measures to determine specific exposure and risk reductions. The IEMPs
employed the more focused, site-specific type of analysis (to varying degrees).
In both types of analyses, emissions projections are made corresponding to the
control scenarios simulated. Fundamental differences are due to (1) the extent of source
coverage, (2) the consideration of multiple vs. single control strategies, and (3) the
incorporation of growth and plant retirement into the future projections of emissions and
risk.
Extent of Source Coverage
The comprehensive coverage in the 5 City Study allowed for the simulated
application of controls to all sources within any source category to which a given
regulatory option may apply. For example, the 5 City Controllability Study could
simulate the incorporation of drift eliminator retrofits on all industrial cooling towers or
scrubbers on all hospital sterilizers within each study area, without having to identify
specific candidate facilities for analysis. In contrast, the control measures superimposed
in the IEMPs typically targeted a handful of source categories or specific facilities for
analysis, albeit the categories thought to be most important.
Multiple vs. Limited Control Options
In the 5 City Controllability Study and Philadelphia IEMP, multiple control
options were analyzed for the sources under consideration. The idea behind this analysis
was to provide an assessment of different combinations of potential measures, from both
a technical and a cost standpoint. In contrast, the Santa Clara IEMP considered only one
(or two) control options per source type based on a preliminary decision of what appeared
to be the most cost-effective.
Inclusion of Source Growth and Retirement
Of the studies reviewed, only the 5 City Controllability Study projected emissions
and risks to a specific year in the future. This study used 1980 as its base year and 1995
as the year to which projections would be made. This time interval was considered long
enough that new source growth and old source retirement had to be factored into the
analysis. This added considerable complexity to the analysis, as growth and retirement
rates had to be determined for each source category undergoing analysis, and separate
control efficiencies had to be considered for existing sources vs. new growth and
replacement growth sources.
6.3 Control Strategy Simulation Procedures in the 5 City Controllability Study
The 5 City Controllability Study is both an analysis of existing conditions in a
"base year" (defined as 1980) and an analysis of conditions in a projection year (1995),
reflecting alternative control scenarios. The heart of this study is the base year inventory
and a regulatory impact model (RIM) that operates on the base year inventory to simulate
different combinations of control measures. Emission projections made by RIM are
subsequently used in EPA's Human Exposure Model (HEM) to estimate reductions in
cancer incidence.
RIM Operation

Figure 6-1 shows a schematic of the Regulatory Impact Model (RIM), which is used to
estimate future emissions and the costs of emission control. As shown in Figure 6-1,
RIM has two functional components: an emissions projection module and a control cost
module.
Operation of RIM starts with the baseline emissions inventory, which contains, for
each source type, annual emissions and the existing level of control in place. From this
inventory, uncontrolled emission rates for the base year can be determined by
back-calculation.
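The back-calculation, and its use in projecting a control scenario, can be sketched in a few lines; the emission rates and efficiencies are hypothetical.

    # Back-calculate uncontrolled emissions from inventoried (controlled)
    # emissions and the existing control efficiency, then apply a scenario
    # efficiency. Note the sensitivity to efficiency errors near 100 percent.
    def uncontrolled(controlled_tpy, efficiency):
        return controlled_tpy / (1.0 - efficiency)

    base = uncontrolled(12.0, 0.90)       # 120 tons/year uncontrolled
    projected = base * (1.0 - 0.99)       # 1.2 tons/year under the new control
    print(base, projected)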
To project changes in the baseline emissions to any future year, three pieces of
information, specific to each source type, must be established:
1. The rate at which old equipment will be replaced with new, less polluting
equipment;
2. The rate at which the industry (or emissions source category) is expected to
experience growth in a geographic region; and
3. The constraints that existing and future environmental regulations impose
on sources to reduce emissions from uncontrolled levels.
Accordingly, three files are created in RIM, each of which operates on the
uncontrolled PM and VOC emission levels of each source category in the baseline
inventory. (Note: In this analysis, RIM calculates PM and VOC reductions and assumes
that particulate toxics and toxic VOC are controlled to the same extent.) The actual
calculations made by RIM cannot be described here in detail; the interested reader should
consult EPA (1985).
6.4 Control Strategies Evaluated in the 5 City Controllability Study
The following listing shows the kinds of control strategies evaluated in the 5 City
Controllability Study.
Scenario 1: Emission projections for 1995 under existing and expected criteria
pollutant and NESHAP regulatory programs. Results of this scenario are considered
representative of the 1995 emissions picture assuming the anticipated regulatory agenda
is accomplished. This scenario would reflect the air toxics co-control benefits of ozone
and PM-10 SIPs.
Scenario la: Same as 1, plus the effects of new NESHAP initiatives. This scenario
might result if EPA focuses control of toxic air emissions on Section 112 of the Clean Air
Act, resulting in more NESHAPS.
Scenario 2: Same as 1, with the addition of the most stringent (reasonable)
controls on new capacity emissions. The incremental effect of this scenario may be
described as requiring very stringent BACT on all new sources of air toxics.
Scenario 2a: Same as Scenario 2, with the addition of the most stringent controls
on road vehicle emissions. This scenario extends to all study areas the mobile source
controls required in Los Angeles.
Scenario 2b: Same as 2a, with the addition of the most stringent controls
(reasonable) on replacement emissions. The scenario extends stringent BACT to
replacement sources of toxic air emissions.
Scenario 3: Same as 2b, with most stringent (reasonable) control on all (i.e., new,
replacement, existing, and retrofit) emissions. This may best represent a requirement for
stringent BACT on all air toxics sources.
6.5 Assumptions Inherent in 5 City Controllability Study Control Analysis
A number of important assumptions had to be made in the 5 City Controllability
Study in order to complete the analysis. Assumptions that are made in a broad national
scoping study, because of insufficient resources to analyze individual
facilities/pollutants/controls, may not be acceptable to some policymakers who need a
firmer basis for taking regulatory action. Many of these assumptions, listed below,
should be reviewed carefully in more detailed, locale-specific studies.
• The baseline emissions and control efficiency data in the existing
inventory are assumed to be accurate. Many of the projection algorithms are
based on these two parameters, so it is essential that they be as accurate as
possible. Because uncontrolled emissions are back-calculated from
existing controlled emissions using the control device efficiency levels,
this type of analysis is very sensitive to errors in the control device
efficiency, especially as it approaches 100 percent.
• Control levels of toxic organics and toxic PM are assumed proportional to
control levels of VOC and PM, respectively. This assumption should be
challenged in a more detailed study, at least for the more important
sources and pollutants. Some metals, for example, may be present in a
gaseous state in a high-temperature exhaust and may not be controlled at
the same level as PM by some control hardware.
• Both new and replacement growth, as well as capacity retirement, are
assumed to occur at the same location as existing sources. A State or local
agency may have more detailed, plant-specific data that could obviate the
need to make projections in this manner. The locations of large plants
expected to open or close in a few years is probably well known to local
officials, so growth and retirement could probably be handled without
having to make general projections based on published trends data.
• Control measures are assumed to be applicable to the targeted sources,
without regard for technical feasibility on a case-by-case basis. Although
the measures selected in this study are generically applicable, there may be
specific technical reasons why they would not be appropriate in certain
facilities. Again, a State or local agency would be in a better position to
make case-by-case projections.
6.6 Control Strategy Evaluation in the IEMPs
The approach for simulating and evaluating controls in the IEMPs was much more
focused on specific sources, pollutants, and control measures than in the 5 City
Controllability Study discussed above. The Philadelphia and Baltimore IEMPs were
somewhat more site specific than the Santa Clara IEMP. All are discussed separately
below.
Philadelphia IEMP Control Analysis
In the Philadelphia study, controls were evaluated for a "cluster" of point sources
centered in a particular section of the city. Questionnaires and phone calls to industry
were used to help tailor the control options to specific space, power, and cost constraints
at each plant site. Even so, the analysis provided only a rough estimate of costs and
effectiveness because a more comprehensive engineering analysis was beyond the scope of
the study. A total of eight point sources were evaluated. Control options included
charcoal adsorption, solvent substitution, steam stripping, and scrubbers. Costs and
emission rates to all media were estimated to address risk reductions on a multimedia
basis.
The Philadelphia IEMP study also evaluated controls for several area sources
including dry cleaning, gasoline marketing, degreasing, and miscellaneous solvent usage.
Control options included vapor recovery, solvent substitution, and Stage I and Stage II
controls for gasoline marketing. The feasibility of each option was evaluated, as well as
the cost and pollutant removal effectiveness.
Table 6-1 summarizes the control options evaluated in the Philadelphia IEMP.
Multiple options were analyzed for each facility or source category and then different
combinations of options were evaluated for cost-effectiveness. Table 6-2 shows the
projected cost-effectiveness of various control simulations in terms of reducing annual
aggregate cancer incidence.
Table 6-1. Control Options Evaluated in the Philadelphia IEMP

Degreasing
  Option 1: Cold cleaners: cover during idle time (90 percent control efficiency);
    drain racks with 30-second drains (50 percent control efficiency). Open top
    vapor degreasers: cover during idle time (90 percent control efficiency);
    increase freeboard ratio during operation (27 percent control efficiency).
  Option 2: Cold cleaners: cover during idle time (90 percent control efficiency);
    drain racks with 30-second drains (50 percent control efficiency). Open top
    vapor degreasers: cover during idle time (90 percent control efficiency); use
    refrigerated freeboard device (60 percent reduction in vapor losses, 29 percent
    reduction of carry-out losses).
  Option 3: Cold cleaners: cover during idle time (90 percent control efficiency);
    drain racks with 30-second drains (50 percent control efficiency). Open top
    vapor degreasers: cover during idle time (90 percent control efficiency);
    carbon adsorber (reduces vapor losses 70 percent, carry-out losses 30 percent).

Refinery B
  Option 2: Convert noncontact floating roofs to contact floating roofs on benzene
    tanks only; leak detection and repair methods (air).
  Option 3: Secondary seals on benzene tanks only; install rupture disks with
    controlled degassing reservoirs for compressors (air).
  Option 4: Convert noncontact floating roofs to contact floating roofs for benzene
    tanks only; install rupture disks for safety/relief valves and seals with
    controlled degassing reservoirs for compressors (air).
  Option 21: Secondary seals on benzene tanks only; install rupture disks with
    controlled degassing reservoirs for compressors; convert noncontact floating
    roofs to contact floating roofs for benzene tanks only; install dual mechanical
    seals with a barrier fluid system and degassing reservoir vents on light liquid
    pumps (air).
  Option 23: Secondary seals on benzene tanks only; install rupture disks with
    controlled degassing reservoirs for compressors; install a thermal oxidation
    system on all benzene and gasoline tanks; install dual mechanical seals with a
    barrier fluid system and degassing reservoir vents on light liquid pumps;
    require more frequent inspection of valves and addition of sealed bellows
    valves (air).

Other Industrial
  Option 1: Enlarged condensation zone, waste recovery facility, manual enclosure (air).
  Option 2: Same as Option 1, except automatic enclosure.

Industrial Dry Cleaner
  Option 1: Inspection and maintenance.

Queen Lane and Belmont Drinking Water Treatment Plants
  Option 1: Granular activated carbon (GAC), high effectiveness.
  Option 2: GAC, medium effectiveness.
  Option 3: GAC, low effectiveness.

Baxter Drinking Water Treatment Plant
  Option 2: GAC, high effectiveness.
  Option 3: GAC, medium effectiveness.

Dry Cleaning
  Option 1: Inspection and maintenance (air).
  Option 2: Same as Option 1, plus carbon adsorption for commercial dry cleaners (air).
  Option 3: Same as Option 2, plus carbon adsorption for coin-op and commercial dry
    cleaners (air).

Chemical Manufacturer
  Option 1: Carbon adsorbers on each of three vents (DCS emissions).
  Option 2: Carbon adsorbers on each of three vents (DCP emissions).
  Option 3: Option 2 plus Option 1 (air).
  Option 4: GAC treatment of DCS and DCP waste streams (water).
  Option 5: Steam stripping of DCS and DCP waste streams (water).
  Option 10: Option 2 plus Option 4.
  Option 11: Option 2 plus Option 5.

Refinery A
  Option 1: Secondary seals on internal floating roofs of gasoline tanks (air).
  Option 2: Install secondary seals on internal floating roof gasoline tanks only;
    leak detection and repair methods (air).

Gasoline Marketing
  Option 1: Stage II, no enforcement inspections (air).
  Option 2: Stage II, plus enforcement inspections (air).
  Option 5: Option 2 plus onboard controls on all vehicles.
Table 6-2. Estimated Cost Effectiveness of Philadelphia IEMP Control Options
[Table body largely illegible in source. The table summarized control options for
reducing annual cancer incidence (air), with columns for: cancer cases reduced per
year; percent reduction in cancer incidence from current control; total cost
($1,000/year); average cost per case reduced from current control ($1,000/case);
incremental cost per incremental case ($1,000/case); and the pollution controls
implemented, listed by source (industrial dry cleaner, other industrial, refineries,
degreasing, dry cleaning, chemical manufacturer, gasoline marketing) and option
number.]
1 The unit risk factors used in this analysis are based on conservative assumptions that generally
produce upper bound estimates. Because of limitations in data and methods in several areas of the
analyses, such as exposure calculations and pollutant selection, risk estimates were calculated as aids to
policy development, not as predictions of actual cancer risks in Philadelphia. Actual risks may be
significantly lower; in fact, they could be zero. The proper function of the estimates is to help local officials
select and evaluate issues, set priorities, and develop control strategies for the topics examined.
2 See Table 6-1 for definitions of control options.
CAUTION: All emission and risk reductions and cost estimates in this table are shown only for example
purposes, and are not intended to apply in other situations and locales.
183
-------
Baltimore IEMP Control Analysis \
The Baltimore IEMP performed a control analysis for four point sources. Sources
and pollutants were selected for the control options analysis if they contributed 1 percent
or more of the cancer incidence in the study area, or more than five in a million average
individual risk within a particular grid cell. Each facility emitted metals (most notably
hexavalent chromium) and POMs, with one plant also being a major source of benzene
emissions. Emission reductions and associated costs were determined for baghouses and
for air toxics co-control benefits of VOC and particulate controls. Air pollution controls
were evaluated on a risk reduction basis.
The study identified potentially available point source controls by first consulting
with Maryland Air Management Administration (AMA) staff responsible for monitoring
permit compliance of the respective facilities. Once potential control options were
identified, costs and removal efficiencies were calculated based on data in the literature,
input from AMA staff, and estimates provided by vendors.
Performing the site-specific assessment needed to accurately quantify control
costs and removal efficiencies for complex industrial facilities was beyond the scope of
this study, since neither the time nor the resources were available to identify and accurately
evaluate individual control options. The intent of the study was to estimate relative costs
and effectiveness rather than to perform the much more intensive engineering studies.
Results must be interpreted accordingly, keeping in mind that the actual cost of
implementing these controls, as well as the actual reductions that would be achieved,
might be significantly different from the generated estimates.
The area source controls analysis for the Baltimore IEMP was more extensive than
for the Philadelphia IEMP since it also included a heating source and diesel road vehicles.
IEMP generated the cost estimates and pollutant removal efficiencies for heating, dry
cleaning, degreasing, miscellaneous industrial solvent usage, and diesel vehicles, while
Maryland AMA provided cost estimates and efficiencies for gasoline marketing.
184
-------
Santa Clara IEMP Control Analysis
The Santa Clara IEMP was undertaken with the assumption that local efforts can
complement the work of State and Federal agencies. Therefore, this controllability study
provided added emphasis on unique, localized air toxics sources. At the same time,
sources of air toxics currently being considered for regulation at the State and Federal
level were included in the study to add perspective and provide a point of reference from
which to measure the cost-effectiveness of other control strategies.
There are approximately 24 principal source categories (e.g., cement manufacturing,
residential wood combustion) of air toxics in the Santa Clara Valley. The following
criteria were used to select source types for the controllability study.
• Estimated health risk. Do the pollutants from this source category
represent a significant health risk?
• "Feasibility of control. Is it technically feasible to control emissions in this
source category?
• Data availability. Is there easily accessible information on costs and
efficiencies of control measures?
Often, regulatory development programs examine several control options for one
source type. The most stringent control option that is economically achievable is chosen
as the preferred option. Rather than compare several options for a given source type, the
Santa Clara IEMP chose for evaluation what appeared to be the most cost-effective
control strategy. In general, only one or two control methodologies were thus considered
per source type. In this way, numerous source types were considered in the analysis and
compared with each other. Where a risk reduction strategy appears desirable for a
particular source type, more in-depth control studies should be performed in which
various control options might be considered.
Where possible, the capital and annualized costs of the control strategies were
evaluated. Annualized costs consist of annual costs and capital recovery (depreciation
and interest on capital). The reductions in excess cancer incidence were also
estimated for each control strategy. Generally, the reductions in the incidence rate were
calculated from average risk values (i.e., the number of excess cancer cases divided by the
185
-------
overall study population). For a few select source categories, the reduction in risk for a
hypothetical maximum exposed individual (MEI) was evaluated.
Cost-effectiveness was measured in terms of the dollars per reduced cancer case.
More specifically, cost-effectiveness was calculated by dividing the. net present worth
(NPW) of the control strategy by the estimated number of cancer cases reduced. The NPW
provides an indication of how much would theoretically have to be set aside in the
present to provide for the future services of the control option. A time period of 30 years
was used; this is the maximum estimated lifetime of the control strategies considered.
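To make the arithmetic concrete, the brief sketch below (Python) computes a control
strategy's NPW over a 30-year period and divides by the estimated cancer cases reduced.
The discount rate and all input values are hypothetical placeholders, not figures from
the Santa Clara analysis.

    def net_present_worth(annual_cost, capital_cost, years=30, rate=0.10):
        # NPW: capital outlay today plus the discounted stream of annual costs,
        # i.e., what would be set aside now to fund the control's future services
        discounted = sum(annual_cost / (1.0 + rate) ** t for t in range(1, years + 1))
        return capital_cost + discounted

    npw = net_present_worth(annual_cost=25_000, capital_cost=150_000)
    cases_reduced = 0.02   # estimated incidence reduction over 30 years (hypothetical)
    print(f"NPW over 30 years: ${npw:,.0f}")
    print(f"Cost-effectiveness: ${npw / cases_reduced:,.0f} per case reduced")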
Table 6-3 summarizes the results of the cost-effectiveness analysis for cancer
incidence reduction in the Santa Clara IEMP. Table 6-3 also shows the specific control
methods simulated on the source categories and pollutants selected for analysis.
Analysis of Areawide vs. Most Exposed Individual (MEI) Risk Reduction
The primary controllability emphasis in the IEMPs (and 5 City Controllability
Study) is the reduction in areawide cancer incidence. Assessing risks to the most exposed
individual (MEI) is another way to evaluate the effectiveness of control alternatives. In
the management of risks, policymakers are often faced with value judgments between
control options that reduce risks to the overall population and alternatives that reduce
risk to the most exposed individuals. Analysis that considers both measures of risks adds
clarity to the implications of policymakers' decisions and allows fair consideration of the
risks and control costs among different exposed groups of people.
Cost-effective controls to reduce risk to the most exposed individual are not
necessarily the same as and may be quite different from the cost-effective controls
identified to reduce risk to the general population. Though consideration of these two
different measures of risk in a cost-effectiveness framework may result in widely differing
control solutions, the contrasting quantitative information places into sharper focus some
of the values policymakers must weigh before making public health protection decisions.
186
-------
Table 6-3. Estimated Cost Effectiveness of Santa Clara IEMP Control Options
Source Type | Control Method | Pollutant | Cancer Incidence Reduction (incidence in 30 years) | Net Present Value for 30 Years of Service ($) | Cost Effectiveness (million $/reduced incidence)

Dry Cleaning
  Converting perc transfer to perc dry-to-dry | perchloroethylene | 0.0019 | 56,000 | 29
  Converting perc transfer to CFC-113 dry-to-dry | perchloroethylene | 0.00028 | 130,000 | 50
  Converting perc dry-to-dry to CFC-113 dry-to-dry | perchloroethylene | 0.00044 | 37,000 | 200

Solvent Degreasing
  Refrigerated freeboard chillers | perchloroethylene | 0.069 | 2,400,000 | 36
  Refrigerated freeboard chillers | trichloroethylene | 0.021 | 290,000 | 14
  Refrigerated freeboard chillers | methylene chloride | 0.12 | 470,000 | 4.1

Residential Wood Combustion
  Fuel efficient wood stoves | (e) | 0.9 | 420,000 | 0.47
  Burning curtailments | (e) | 0.74 | N/A | N/A

Hospital Sterilization
  Hydrolyzing wet scrubbers | ethylene oxide | 0.33 | 1,200,000 | 3.1

Manufacturing
  Thermal incinerators | cellosolve | (f) | (f) | (f)
  Catalytic incinerators | cellosolve | (f) | (f) | (f)

Mobile Sources
  Oxygen sensor durability (g) | benzene and organic particulate | 0.09 | 0 | ~0
  Modified certified new vehicle registration (g) | benzene and organic particulate | 0.034 | 0 | 0
  Mass transit improvements | benzene and organic particulate | 0.21 | 1.5 x 10^6 | 3,800
  Catalyst retrofit for heavy-duty gasoline vehicles | benzene and organic particulate | 2.3 | 170,000,000 | 76
  I/M for heavy-duty gasoline vehicles | benzene and organic particulate | 0.31 | 6,600,000 | 22

(g) Adopted by ARB in 1985.
[Several cells and footnote definitions, including those marked (e) and (f), are
illegible in the source; values shown are the most legible reading, and some could
not be cross-checked.]
187
-------
Unfortunately, population-averaged cost-effectiveness cannot be used in
determining MEI cost-effectiveness. If the size of the general population is large and the
risk to them low, the average cost-effectiveness of control will be poor regardless of the
risk to the maximum exposed individual. In addition, a single, meaningful cost-
effectiveness value for the MEI is impossible to calculate without an estimate of the size
of the affected population. Thus, several IEMPs computed the total annualized cost of
control and cost per pound of pollutant removed along with the risk reduction to the
MEI. Together, these variables were felt to offer qualitative insight into the relative
merits of the various controls.
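As a simple illustration of such a qualitative comparison, the sketch below (Python,
with entirely hypothetical figures) lays the three measures side by side for two
notional controls.

    controls = [
        # (control, annualized cost $/yr, pollutant removed lb/yr, MEI risk reduction)
        ("Carbon adsorber", 120_000, 48_000, 3.0e-5),
        ("Secondary seals",  15_000,  9_500, 4.0e-6),
    ]

    for name, cost, pounds, mei_cut in controls:
        # cost per pound removed is the common screening metric when a single
        # MEI cost-effectiveness value cannot be computed
        print(f"{name:16s} ${cost:>8,}/yr  ${cost / pounds:5.2f}/lb  "
              f"MEI risk reduction {mei_cut:.1e}")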
Table 6-4 shows the results of the MEI analysis in the Santa Clara IEMP.
6.6 Insights on Control Strategy Simulation and Evaluation
Perhaps most important, the study manager should be well aware of the
limitations of the controllability analysis and should not use any conclusions to support
actions unwarranted by the accuracy of the underlying data base or the validity of the
assumptions made. All of the studies reviewed were described as screening or scoping
studies, and were not intended to predict absolute reductions in risk associated with any
particular control strategy. Rather, they were intended to be used in a relative sense only
to begin to develop priorities.
Case-by-case emission and control projections at specific facilities are generally
more accurate than more general types of projections based on surrogate measures of
growth and plant retirement. The latter type of analysis, used in the 5 City
Controllability Study, gives only a very broad sense of control potential associated with
certain groups of control measures, and may not be appropriately applied to determine
control potential and feasibility in a particular locale.
The study manager should plan at the outset of the controllability study just what
measures of risk reduction need to be quantified. Generally, the studies reviewed focused
on areawide cancer incidence reductions; these were also the reductions for which cost-
effectiveness data were easiest to develop.
188
-------
Table 6-4. Results of the MEI Analysis in the Santa Clara IEMP
[Table body illegible in source. Consistent with the discussion above, the table
presented, by source type and control, the annualized cost of control, the cost per
pound of pollutant removed, the emission reduction, and the risk reduction to the
maximum exposed individual.]
189
-------
While MEI reductions can also readily be
projected, they are more difficult to evaluate from a cost-effectiveness standpoint.
None of the studies documented how, if at all, the resulting controllability
conclusions would be used in the risk management process, or broached the topic of
defining acceptable public risk.
190
-------
CHAPTER 7
COMPUTERIZED DATA HANDLING
The following topics are presented in this chapter:
• Data handling considerations in urban air toxics studies
• Data handling aspects of emission inventory/dispersion modeling studies
• Data handling aspects of ambient air monitoring studies
• Insights on computerized data handling
7.1 Data Handling Considerations in Urban Air Toxics Studies
The development of exposure and risk estimates in multi-source, multi-pollutant
assessments involves extensive data handling. Because exposure and risk estimates are
made, both individually and collectively, for many pollutants and receptors across broad
geographical areas, computerized data handling is a virtual necessity in most cases.
Hence, the study manager should consider the data handling aspects of carrying out an
urban air toxics study at the outset of his/her study as part of the overall study protocol.
The development of specialized data handling software is expensive and time consuming
and can potentially be avoided if existing software can be utilized. Many of the studies
reviewed in this report adapted data handling capabilities already in existence rather than
developing new capabilities.
191
-------
Study Type
As discussed earlier, most urban air toxics studies have involved either emission
inventory/dispersion modeling studies or ambient air monitoring studies, albeit with some
hybridization common. Data handling is more complex for the former study type because
one has to work with emission inventory data and dispersion models in order to predict
ambient air concentrations of air toxics, whereas these concentrations are measured
directly in the latter type of study. In both study types, ambient air concentrations must
be applied to population data, usually at some level of spatial disaggregation, to estimate
population-averaged exposures. These exposures are, in turn, adjusted by potency values
to estimate cancer risks or other health effects.
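As a minimal sketch of this chain of calculations (Python; the concentrations,
populations, and unit risk factor below are hypothetical), ambient concentrations are
multiplied by the population of each spatial unit to yield aggregate exposure, which a
potency value then converts to estimated lifetime cancer incidence.

    cells = [
        # (annual average concentration ug/m3, population in grid cell)
        (1.20, 4_000),
        (0.40, 11_000),
        (0.05, 2_500),
    ]
    unit_risk = 8.3e-6   # lifetime cancer risk per ug/m3 (hypothetical potency)

    exposure = sum(conc * pop for conc, pop in cells)   # person-ug/m3
    incidence = unit_risk * exposure                    # estimated lifetime cases
    print(f"Aggregate exposure: {exposure:,.0f} person-ug/m3")
    print(f"Estimated lifetime incidence: {incidence:.4f} cases")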
Computer Accessibility
An obvious consideration is whether the study has access to a mainframe
computer. Most State and local agencies can access EPA's mainframe computers at
Research Triangle Park, North Carolina through remote terminals. In some cases, these
agencies also have access to their own States' computer facilities. EPA maintains an IBM
3090 computer and several DEC VAX computers. EPA's Univac computer, used in several
of the studies reviewed, is no longer on line as of October 1988. Many of the dispersion,
exposure, and risk assessment models discussed earlier in this report reside on EPA's
mainframe computers.
Most agencies now have personal computers, and many agency personnel untrained
in mainframe languages prefer to operate in the PC environment. Some of the operations
involved in urban air toxics assessments, such as dispersion modeling, cannot realistically
be carried out using today's PCs, but some operations are possible, including inventory
compilation, exposure and risk estimation, and control strategy evaluation. The extent to
which these latter operations can be accomplished on PCs will depend on the number of
pollutants, sources of pollution, and the spatial resolution reflected in the analysis.
Regardless of the extent of mainframe involvement in "number crunching," PCs can be
useful for analyzing summary data sets created by the mainframe and for tailoring special
192
-------
reports and graphics. Hence, PC data handling should be considered to complement
mainframe analyses in many facets of a study.
PCs are certainly capable of storing air quality monitoring data from one or several
sites. Data summaries can be produced using spreadsheets or data base management
programs commonly available for most PCs.
7.2 Data Handling Aspects of Emission Inventory/Dispersion Modeling Studies
PIPQUIC
Many of the studies reviewed herein (the Philadelphia, Baltimore, Santa Clara, and
Kanawha Valley IEMPs; the Southeast Chicago Study; the 35 County Study within the Six
Months Study) used a computerized data handling system called PIPQUIC to store their
emissions data and develop estimates of exposure and risk. PIPQUIC was developed for
the IEMPs and resides on EPA's IBM 3090 at Research Triangle Park, North Carolina. It
is accessible to account holders through any dialup terminal, including desktop PCs.
Various terminal emulator software programs may be used for accessing PIPQUIC.
(PIPQUIC, 1989)
PIPQUIC offers the user a "tool kit" to produce bar charts, tables, printouts, pie
charts, area maps, 3-D maps, contour maps, and spreadsheets. PIPQUIC creates charts,
maps, tables, and the like, using air toxics emissions data (or, as a default, data generated
by PIPQUIC by applying species factors to NEDS data). PIPQUIC executes two EPA
models—ISCLT and CDM—using gridded emissions data and appropriate meteorological
data for the area. Optionally, the user can run his/her own models of choice in lieu of ISC
and CDM. In either case, the model-predicted ambient concentrations are then coupled
with population and cancer potency data to estimate individual cancer risks and aggregate
incidence. PIPQUIC stores emissions, modeled concentrations, risk, and incidence data
for efficient retrieval and analysis to create tables, charts, and maps.
A discussion of specific PIPQUIC tools is useful as these tools were designed to
assist study area managers in answering key questions regarding urban air toxics risks.
193
-------
Illustrations of example outputs from PIPQUIC are included to show the reader the
various ways the data are handled to provide useful summary graphics.
PIPQUIC's Tool 450 creates a broad range of study maps, and allows the user to
overlay point sources, modeling grids, and area source emissions data on these maps.
Figures 7-1 and 7-2 show examples of several study area maps produced by PIPQUIC.
Tool 440 allows the user to rank order his/her source and emissions data in many
ways via bar charts, cross-tabulations, pie charts and printouts. The user can select one or
more pollutants, facilities, counties, industries, source types or estimation methods, and
can also define variables whose values should appear as separate pages, rows, or
columns. Figure 7-3 and Table 7-1, respectively, show example bar charts and cross-
tabulations created by Tool 440.
194
-------
Figure 7-1. Example of PIPQUIC Output for Southeast Chicago Study Area
SOUTHEAST CHICAGO STUDY AREA
TOOL 450A: MAP OF AREA-SOURCE EMISSION SQUARES
195
-------
Figure 7-2. Example of PIPQUIC Output [map graphic illegible in source]
196
-------
Figure 7-3. Example Bar Chart Created by PIPQUIC Tool 440 [graphic illegible in source]
197
-------
Table 7-1. Example Cross-Tabulation Created by PIPQUIC Tool 440
EXAMPLE STUDY AREA
TOOL 440: EMISSIONS IN METRIC TONS PER YEAR
POLLUTANT: BENZENE
TABLE OF INDUSTRY BY COUNTY

INDUSTRY            ADAMS CO    JONES CO    LAKE CO    SAND CO      TOTAL
3318 STEEL MILLS     2401.68     642.493          0          0    3044.15
3989 AREA SOURCE     53.4081     740.087    35.5435    23.5138    852.553
9998 OTHER POINT     4.25125     56.6334      3.695          0    64.5796
4952 POTWS                 0    0.725755          0          0   0.725755
TOTAL                2459.32     1439.94    39.2385    23.5138    3962.01
198
-------
PIPQUIC's Tool 453 allows the user to pinpoint the sites of maximum
concentrations, individual risk, and aggregate cancer incidence and to assess the impact of
each pollutant and source at any receptor within the study area. Output options include
bar charts, 3-D maps, contour maps, charts ranking receptors, charts ranking sources
culpable at the point of maximum individual exposure (or at any other point), and tables
of concentration and risk. Tool 453 is perhaps PIPQUIC's most powerful analysis tool.
Figures 7-4 through 7-6 and Tables 7-2 and 7-3 show example graphics created by Tool
453, which characterize emissions data and associated exposure and risks in effective
ways to help understand the nature and magnitude of the air toxics problem in a given
area.
PIPQUIC enables the user to download maps, graphs, and the like, to his/her
desktop computer and then to re-create and edit them without having to re-enter PIPQUIC.
Using various inexpensive PC software packages, the user can edit titles, change colors,
assemble video .presentations, route to color plotters, or convert downloaded files into
spreadsheet or graphics files for detailed editing. As a specific example, PIPQUIC's Tool
123 supports downloading of source, emissions, and aggregate incidence data for creating
LOTUS 1-2-3® spreadsheets to evaluate control scenario effectiveness.
Other Data Handling Systems
Several other emission inventory/dispersion modeling studies developed their own
data handling capabilities. The 5 City Controllability Study developed input/output
software around EPA's HEM/SHEAR model, whereas the South Coast Study did likewise
around its modified version of HEM called SCREAM. The 5 City Study also produced a
series of files containing regulatory projection information to run a Regulatory Impact
Model (RIM). RIM allows the user to project future emissions and cancer incidence by
simulating various hypothetical control strategies. HEM/SHEAR and SCREAM are
mainframe models, whereas RIM is a PC-based model. As of this writing, HEM/SHEAR is
being updated by EPA and converted to a VAX computer at Research Triangle Park, North
Carolina. HEM/SHEAR will be accessible to users having an account on the EPA VAX
199
-------
Figure 7-4. Example PIPQUIC Output: Incidence [graphic illegible in source]
200
-------
Figure 7-5. Example PIPQUIC Output [graphic illegible in source]
201
-------
Figure 7-6. Example PIPQUIC Output [graphic illegible in source]
202
-------
Table 7-2. Example Cross-Tabulation Created by PIPQUIC Tool 453

XYZ STUDY AREA
TOOL 453: ANALYSIS OF MODELING RESULTS - FOR POLICY-MAKING ONLY
STANDARD (1-KM) GRID, ESTIMATED 70-YEAR INCIDENCE
POLLUTANTS: ALL C.A.G. POLLUTANTS
FACILITIES: ALL
RECEPTORS: 4616.00 454.00
SOURCES: ALL

TABLE OF COMPOUND BY SOURCE

Incidence by compound (row totals):
  COKE OVEN 4.4075; ARSENIC 0.8915; BENZENE 0.3916; CADMIUM 0.3705;
  1,3-BUTADIENE 0.2394; GAS VAPORS 0.1790; CHROMIUM HEXAVAL 0.1026;
  METHYLENE CHLORIDE 0.0490; FORMALDEHYDE 0.0433; ETHYLENE OXIDE 0.0103;
  CHLOROFORM 0.0068; PERCHLOROETHYLENE 0.0057; TRICHLOROETHYLENE 0.0034

Incidence by source (column totals):
  POINT SOURCES 5.9751; ROAD VEHICLES 0.4982; MISC AREA 0.1107;
  COOLING TOWERS 0.0410; GAS MKTG 0.0353; SOLVENT USAGE 0.0221;
  HEATING 0.0182

TOTAL 6.7007

[The individual compound-by-source cells are illegible in the source; only the row
and column totals above are recoverable.]
203
-------
Table 7-3. Example PIPQUIC Output [table illegible in source]
204
-------
computer, and a user's guide, reflecting the updates made to the model, will be
forthcoming. A user's guide for the current version of the HEM/SHEAR model is presently
available. (EPA, 1986) The SCREAM model is applicable only to the Los Angeles
geographical area.
Cost Saving Techniques
Normalized Modeling—As mentioned in Chapter 4, the practice of normalized
modeling will minimize the number of dispersion model runs necessary in an urban air
assessment. This practice will commensurately save on data handling expense.
Normalized modeling was done in most of the emission inventory/dispersion modeling
studies reviewed in this report.
For example, instead of running HEM/SHEAR separately for each pollutant and
each emission.projection, the 5 City .Controllability Study used a single run, assuming
100 tons per year of pollutant was emitted from each point source. For each point source,
the output of SHEAR was-saved in an intermediate file containing the cumulative
population exposure (microgram-persons/cubic meter-year) for each modeled point. These
cumulative values were thus used to estimate population exposures to individual
pollutants by multiplying them by the ratio of actual-to-modeled emissions.
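A minimal sketch of this scaling step follows (Python); the source names, normalized
exposures, and emission rates are hypothetical, but the 100 tons-per-year reference and
the actual-to-modeled ratio mirror the approach described above.

    REFERENCE_TPY = 100.0   # emission rate assumed in the single normalized run

    # cumulative population exposure per source from the normalized model run
    # (microgram-persons/cubic meter-year at 100 tons per year)
    normalized_exposure = {"source_A": 5.2e4, "source_B": 1.9e3}

    # actual emissions by source and pollutant (tons per year)
    actual_emissions = {
        ("source_A", "benzene"): 12.0,
        ("source_B", "benzene"): 0.8,
        ("source_A", "perchloroethylene"): 3.5,
    }

    for (source, pollutant), tpy in actual_emissions.items():
        exposure = normalized_exposure[source] * (tpy / REFERENCE_TPY)
        print(f"{source}/{pollutant}: {exposure:.3g} ug-persons/m3-yr")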
Modeling Small Point Sources as Area Sources—Because the cost and execution
time of modeling point sources is much greater than for modeling area sources, the 5 City
Controllability Study opted to treat small point sources as area sources if they emitted
below a particular cutoff level. This minimum cutoff level varied by pollutant and was
determined to some extent to be a function of the number of small sources that emitted a
particular pollutant within the study area, as well as by the toxicity of that pollutant.
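A sketch of such a cutoff rule appears below (Python); the thresholds and source
records are hypothetical placeholders.

    cutoff_tpy = {"benzene": 1.0, "perchloroethylene": 5.0}   # hypothetical cutoffs

    point_sources = [
        {"name": "plant_1",   "pollutant": "benzene",           "tpy": 12.0},
        {"name": "shop_7",    "pollutant": "benzene",           "tpy": 0.3},
        {"name": "cleaner_2", "pollutant": "perchloroethylene", "tpy": 2.0},
    ]

    # sources below the pollutant-specific cutoff are re-classified as area sources
    as_point = [s["name"] for s in point_sources
                if s["tpy"] >= cutoff_tpy[s["pollutant"]]]
    as_area = [s["name"] for s in point_sources
               if s["tpy"] < cutoff_tpy[s["pollutant"]]]
    print("Modeled as point sources:", as_point)
    print("Modeled as area sources: ", as_area)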
7.3 Data Handling Aspects of Ambient Air Monitoring Studies
Limited information is available on data handling specifics in the ambient air
monitoring studies reviewed herein. Two of the studies for which some information is
available—the Staten Island/Northern New Jersey Study and the Urban Air Toxics
Monitoring Program—are only in the data collection phases at present and have not yet
205
-------
completed any exposure or risk assessment. Both of these studies are using Lotus 1-2-3
spreadsheets to store the raw data and to develop summary reports. Table 7-4 is an
example of a quarterly report generated in the Staten Island/Northern New Jersey Study.
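The kind of data reduction shown in Table 7-4 can be reproduced in a few lines; the
sketch below uses Python rather than a spreadsheet, hypothetical sample values, and the
low-flow detection limit quoted in the table.

    import statistics

    samples = [0.039, 0.051, 0.008, 0.321, 0.125, 0.044, 0.030]   # ppb, hypothetical
    MDL = 0.04   # method detection limit, ppb (low-flow sampler)

    ranked = sorted(samples, reverse=True)
    print(f"# of samples: {len(samples)}")
    print(f"Arith. mean:  {statistics.mean(samples):.3f} ppb")
    print(f"Std. dev.:    {statistics.stdev(samples):.3f} ppb")
    print(f"1st/2nd max:  {ranked[0]:.3f} / {ranked[1]:.3f} ppb")
    print(f"Min:          {min(samples):.3f} ppb")
    print(f"# > MDL:      {sum(1 for s in samples if s > MDL)}")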
7.4 Insights on Computerized Data Handling
Data handling in urban air assessment studies involving dispersion modeling,
exposure/risk assessment, and control scenario evaluation can become quite complex and
should be carefully considered when developing the study protocol. The agency may
want to consider contractual assistance in this area.
The study area manager should consider using available mainframe software for
conducting his/her urban study. EPA maintains various dispersion models as well as
exposure/risk models that can perform many of the core data handling functions necessary
in an urban air toxics assessment.
Many data handling functions can be performed efficiently and more readily on a
PC than on a mainframe computer. Specifically, the preparation of emissions data and
other data needed to run dispersion models can be done on a PC in cases where
extraordinarily large data bases are not involved. Additionally, the outputs can
effectively be downloaded to desktop PCs for editing and analysis, and for the creation of
summaries and graphics.
Data handling complexity and costs can be reduced by normalized modeling and
by modeling small point sources as area sources. Both techniques result in fewer point
source modeling runs. The reader is cautioned, however, that there are drawbacks to both
techniques. Normalized modeling assumes a linear relationship between emission changes
and model-predicted concentrations. This assumption may be invalidated if the release
specifications change' as emissions change (e.g., a control device may alter a plant's
stack/exhaust parameters as well as its emissions) and, hence, may need to be carefully
examined in detailed assessments. Also, the treatment of small point sources as area
sources may change the exposures resulting from those sources, as their emissions will
subsequently be "smeared out" over entire grid cells and could be assumed to be emitted
206
-------
Table 7-4. Example of Ambient Air Data Set Summary

Reduced Data from All Sampling Systems (Quarterly Report)
Agency: College of Staten Island
Pollutant: Chloroform (CAS #: 67-66-3)
Quarter Beginning (Month, Year): Jan 88
Units: ppb
MDL: 0.04 ppb low flow & 0.02 ppb high flow

SAROAD #  Site          Sampling Code  Analytical Code  # of Samples  Arith. Mean  Std. Dev.  1st Max  2nd Max  Min    # > MDL
8         Bayley Seton  Tenax          GC/MS            82            0.039        0.037      0.321    0.125    0.008  81
3         Eltingville   Tenax          GC/MS            57            0.043        0.023      0.121    0.112    0.011  57
6         Dongan Hills  Tenax          GC/MS            55            0.046        0.037      0.269    0.111    0.012  55
207
-------
at ground level. Such a distribution of emissions from small sources could potentially
overemphasize the risks from these small point sources, especially if they are treated as
area sources that are subcounty-apportioned by population.
208
-------
Chapter 7
References
EPA, 1986. User's Manual For the Human Exposure Model (HEM). Office of Air Quality
Planning and Standards, Research Triangle Park, North Carolina.
EPA, 1988. Staten Island New Jersey Urban Air Toxics Assessment Project, "Air Quality
Data Report. Volume I: Quarterly Summary Reports. July 1987-March 1988." U.S.
Environmental Protection Agency Region II.
PIPQUIC, 1989. Draft PIPQUIC User's Guide being prepared for T. Lahre of
U.S. Environmental Protection Agency, Research Triangle Park, North Carolina, by
American Management Systems.
209
-------
CHAPTER 8
EMERGING METHODS: RECEPTOR MODELING
AND BIOLOGICAL TESTING
The following topics are presented in this chapter:

Receptor modeling
• Use of receptor modeling in urban assessments
• Pollutants and source categories addressed
• Source signature testing and tracer analysis
• Measured ambient air quality data sets
• Statistical techniques used for source apportionment
• Spatial and temporal representativeness of results
• Comments on the use of receptor modeling

Biological testing
• Approach to use of biological testing in urban air toxics studies
• Comments on the use of biological testing
8.1 The Use of Receptor Modeling in Urban Assessments
Receptor modeling—also called source apportionment—is an evolving science and
is not yet widely implemented in urban air toxics assessments. Receptor modeling
techniques were originally developed to study sources of particulate matter, and have
210
-------
been used to identify sources of certain toxic metals and extractable organic matter within
the particulate catch (Lioy, 1988). Recently, investigators have begun to use receptor
modeling to study sources of VOC (O'Shea, 1988; Scheff, 1987), which can yield
information on specific gaseous toxics such as benzene. The Denver IEMP will attempt to
apportion gaseous VOC if adequate data are acquired during the air monitoring phase of
the study. In addition, EPA's Integrated Air Cancer Project (IACP) is using receptor
modeling to apportion the mutagenic activity of ambient particulate matter between
mobile sources and wood smoke.
The purpose of receptor modeling is to estimate contributions of sources to
monitored pollutant concentrations at specific receptor sites. The process employs a
variety of statistical techniques to identify the site-specific impacts of pollutant sources,
or source categories, on the basis of their emission "signatures" or "fingerprints." Several
basic signature types may be employed. One type of signature is the specific mixture of
chemical species emitted from a particular source, identifying the ratio or relative
concentration of "each chemical species to the whole quantity of pollutant in an emission
stream. For instance, if one source emits two grams of benzene for every one gram of
toluene, its impacts may be distinguished, through statistical analysis, from another
source that emits two grams of toluene for every gram of benzene. Another signature type
involves the use of unique "tracer" pollutants in sources' emissions. For instance, lead and
bromine emissions are associated with mobile source emissions, potassium and iron with
wood burning emissions, and so forth. These tracers can be used in conjunction with
other source signature information to determine source contributions at a receptor point.
Since receptor modeling is dependent on the availability of source/emissions chemical
composition data, it can be applied only to those pollutants for which adequate emissions
data are available for all sources, or source categories, in the study area emitting those
pollutants.
Advantages of receptor modeling are that it can (1) confirm source contributions
estimated through air dispersion modeling and (2) provide data on source contributions
where air dispersion modeling has not been done, where modeling results are suspect
because of terrain or meteorological complexities, or where uncertainties exist regarding
211
-------
atmospheric transformation mechanisms. As receptor modeling techniques improve—
particularly as hybrid techniques more comprehensively integrate emissions, dispersion
and transport, and measured data—receptor modeling may significantly improve the
reliability of policy conclusions about the nature of the urban soup problem.
The disadvantages of receptor modeling are its technical complexity, its data
requirements, and the inherent limitations of the monitoring data (and sometimes
modeling data) upon which it relies.
Pollutants and Source Categories Addressed
Past receptor modeling analyses have dealt almost exclusively with particulate
matter. Little has been done on gaseous pollutants, although theoretically these could be
addressed as long as the signatures of sources, or source categories, are sufficiently
different to allow for differentiation (Pace, 1987). The current IACP and Denver IEMP
studies may further develop the appropriate techniques. j
Both the IACP and Denver IEMP studies are using receptor modeling primarily to
distinguish between mobile source and wood combustion contributions to ambient
particulate concentrations. The IACP Boise study will also include the apportionment of
specific gaseous pollutants and pollutant classes. The Denver study will attempt
apportionment of gaseous pollutants only if the study's monitoring program develops
adequate data on the pollutants of concern. (Stevens, 1987)
At this time, neither of these studies plans to use receptor modeling to distinguish
between point and area source categories, although some work may be done with power
plants and refineries in the Denver study.
Source Signature Testing and Tracer Analysis
The IACP study is performing source testing for wood combustion and mobile
sources to complement the source signature data available in the literature. Similarly, the
Denver IEMP will test mobile sources and power plants to compile sufficient signature
data for receptor modeling. (Stevens, 1987)
212
-------
In certain kinds of receptor modeling, tracers are used as unique signatures for
certain source categories. The following tracers are commonly used to identify source
categories:
Mobile Sources: Lead, bromine, carbon monoxide
Power Plants: Sulfur, selenium, arsenic
Wood Burning: Potassium, iron
Refineries: Lanthanides
Incinerators: Zinc
Measured Ambient Air Quality Data Sets
Pollutant coverage and representativeness in time and space of the measured
ambient air quality data set are important considerations when drawing conclusions from
the receptor modeling analysis, as discussed below:
Diurnal Coverage—The monitoring programs for the IACP and Denver studies are
very similar; the Denver study, in fact, was patterned after the IACP approach. Each
study relied on 12-hour sampling to separate diurnal trends, although in Denver a
combination of 12- and 24-hour samples was collected to reduce costs. In these studies,
it is important to be able to resolve the chemical species emitted from residential wood
burning (predominantly a nighttime activity) and mobile source emissions (predominantly
a daytime activity). Hence, the 12-hour sampling periods each day were extended from
7 a.m. to 7 p.m. and from 7 p.m. to 7 a.m.
Pollutant Coverage—Chapter 2 discussed the pollutant coverage of these programs
in general terms. Pollutants collected specifically to support the receptor modeling
include sulfates and nitrates, elemental and organic carbon, elemental analysis, and, in the
case of IACP, carbon-14 data to help separate wood from fossil fuel combustion sources.
CO data are collected at some sites, along with other criteria pollutants and
meteorological data, to provide additional input on source contributions. The IACP study
also analyzed samples for mutagenicity to apportion biological activity to appropriate
source categories.
213
-------
Seasonal Coverage—The results of a receptor modeling analysis are directly
applicable to the monitoring period for the ambient air quality data set. Both the IACP
and the Denver study are performing monitoring in distinct seasonal blocks. For the
IACP, the major emphasis will be on the winter season. The Denver study will provide a
more balanced emphasis between the winter and summer seasons.
The two-season coverage in Denver provides a more representative data set for
estimating annual average source-receptor relationships and source culpability.
Spatial Coverage—The Denver study has three ambient air quality monitoring sites
in the air toxics monitoring network; a fourth (supplemental) site was available during the
winter season. The IACP studies have used as many as seven fixed sites to support the
receptor modeling analysis, although only two sites were used in the earlier testing in
Raleigh, North Carolina and Albuquerque, New Mexico. With only a few sites there is a
possibility that the results of source apportionment do not represent averages for the
metropolitan area being evaluated.
Statistical Techniques Used to Estimate Apportionment
The statistical techniques used in receptor modeling strive to find the best fit
between the measured data at one or several receptors and the source signature data. It is
beyond the scope of this report to address these techniques in detail. References such as
EPA 1981, 1983, 1985 provide more comprehensive treatment of this subject. The
following is a capsule description of these techniques:
Chemical Mass Balance—The CMB method is based on the assumptions that the
mass of material deposited on a filter at a receptor site is a linear combination of the mass
contributed from each of the sources and that the mass and chemical speciation are
conserved from the time of emission to the time it is measured at a receptor site. The
measured data and the data on source signatures are used to form a set of simultaneous
equations. There are as many equations as there are chemical species being addressed.
The best fit for the set of equations is identified.
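A minimal sketch of this balance (Python with NumPy; the source profiles and ambient
measurements are hypothetical) treats the measured species concentrations as a linear
combination of source signature profiles and recovers the source contributions by
least squares.

    import numpy as np

    # rows = chemical species; columns = sources (e.g., mobile, wood burning);
    # each entry is the fraction of a source's emitted mass made up of that species
    profiles = np.array([
        [0.60, 0.05],
        [0.25, 0.15],
        [0.05, 0.55],
    ])

    ambient = np.array([3.4, 1.9, 2.3])   # measured concentrations (ug/m3)

    # best-fit mass contribution of each source at the receptor
    contributions, _, _, _ = np.linalg.lstsq(profiles, ambient, rcond=None)
    for name, mass in zip(["mobile", "wood burning"], contributions):
        print(f"{name}: {mass:.2f} ug/m3")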
214
-------
Factor Analysis—Whereas CMB methods apply knowledge about source
characteristics to a single filter data set to derive a source's contribution, multivariate
methods such as factor analysis extract information about a source's contribution on the
basis of the variability of elements measured on a large number of filters. If two or more
chemical components originate from the same source, their variability as a function of
time as measured at a receptor site is assumed to be similar.
Multiple Linear Regression Analyses—This analysis provides a means of
calculating the mass of emissions from a given source once the tracer species from that
source are known.
Hybrid Receptor Modeling—The measured ambient data and the dispersion term
between sources and monitoring sites are used as known values, and the emission rate
from each source is solved as the unknown.
To date, the IACP has only used multiple linear regression because the studies are
reviewing only two source categories—wood combustion and mobile sources—and more
complex models are not needed. As more complex airsheds are studied, future IACP
studies may include more complex receptor modeling techniques such as chemical mass
balance, factor analysis, and hybrid receptor modeling. For example, factor analysis may
be used in the IACP to provide groupings of chemical species (both organic and inorganic)
that are characteristic of the emissions and transformation products of sources within the
airshed under study.
Spatial and Temporal Representativeness of Results
The Denver study plans to add a step in the data interpretation that will evaluate
the representativeness of the ambient air quality measured data set for areas in the city
beyond the receptor locations and for periods not covered during sampling. Two
techniques—dispersion modeling and use of CO data as a surrogate to toxic air
pollutants—will be used to extrapolate the receptor modeling results to develop more
general conclusions regarding culpability.
215
-------
Dispersion Modeling—There are clear limitations to the use of dispersion models
in Denver because of the city's topography. The complex drainage flows that occur during
periods with peak concentrations are relatively difficult to model, considering the
available wind data and the cost of performing non-Gaussian modeling to address the
complex trajectories. Confidence in modeled exposure estimates during peak days may be
particularly low because of these factors. Dispersion modeling will, however, be used as
input to the assessment of the spatial representativeness of the monitoring sites and the
representativeness of the monitoring periods to typical concentrations. This link will
provide some limited input to support the extrapolation of the results.
CO Measured Data as a Surrogate for Toxic Air Pollutants—As already noted, the
Denver air toxics monitoring network has four sites; CO coverage is available at these
sites as well as at three additional sites located in widely varying settings, including a
downtown site, a residential site, and a relatively remote site.
CO should be a valuable indicator of the magnitude of mobile source impacts at
each monitoring site because (1) mobile sources are the dominant source category for both
CO and air toxics emitted in the Denver area (Machlin, 1986), and (2) it is reasonable to
expect that the measured data for CO could be used to estimate the general magnitude of
some air toxics for sites where CO data only were available.
&2 Comments on the Use of Receptor Modeling for Urban Air Toxics Studies
All receptor modeling analyses are limited by the representativeness of the
ambient air quality monitoring sites used to support their general conclusions. The
primary disadvantage of receptor modeling is that a limited number of locations
(monitoring sites) are often used to support general conclusions regarding urban-scale
impacts. For example, it could be interpreted from a residential sample obtained during
the winter that 30 percent of the fine particulate concentrations in an urban area are from
mobile sources and 70 percent are from wood combustion. It is always essential,
therefore, to place the receptor modeling results in proper spatial and temporal context.
Any extrapolation of the results beyond this point must be supported by a justification of
the representativeness of the data.
216
-------
Expanded use of dispersion modeling in conjunction with receptor modeling
analyses provides a means for assessing the representativeness of the measured data in
time and space, which should improve the interpretation of the results of a receptor
modeling analysis. Since the IACP and previous Denver studies do not perform this
interpretive step, receptor modeling could be strengthened in this area.
It is assumed that the ratios of signature pollutants will remain unchanged during
transport from the source to the receptor. If significant changes do occur during transport,
the apportionment of impacts among sources could be biased.
The measurements need to be precise enough to distinguish among sources; that is,
if little difference is discerned among signatures of potential sources, the imprecise nature
of the measuring techniques could blur the differences among the sources under review.
Ambient concentrations should be sufficiently above the detection limits as to
allow accurate quantification. This could be a limiting factor for many compounds and
could be addressed by a complementary dispersion modeling analysis.
The study area in question should not contain other significant sources that are not
considered in a receptor modeling analysis.
The IACP methodology recommends that at least 40 daytime and 40 nighttime
samples be available to characterize a season (Stevens, 1987), which can be resource
intensive. Similarly, research into hybrid receptor modeling techniques has also indicated
that relatively detailed temporal coverage is needed in order for receptor modeling to be
effective. (Draxler, 1987) There is some leeway in terms of the minimum number of
samples that is considered reasonable for attempting source apportionment (Stevens,
1987); however, any study considering this approach should devote adequate resources to
the ambient monitoring and possibly to source testing to assure its effectiveness.
8.3 The Use of Biological Testing in Urban Air Toxics Studies
Testing of toxic air pollutants on living organisms has attracted increased interest
over the past several years, but this approach is still basically experimental. By exposing
217
-------
microorganisms, whole animals, or selected plant species to potential genotoxicants,
biological monitoring provides a method of assessing the potential biological effects of
previously untested chemicals or chemical mixtures. Generally, bioassay techniques are
limited to assessing direct mutagenic effects; by comparing the number of revertants to
known mutagens, bioassaying can be used as a relative indicator of mutagenic activity.
These tests are not used to identify tumor promoters or other nonmutagenic effects in
environmental monitoring applications.
The advantage of biological testing as part of multi-pollutant, multi-facility
studies is that such testing can address interactions among pollutants in a complex
mixture, thereby reducing study uncertainties and characterizing sources and fractionated
samples of ambient air in terms of their relative mutagenicity.
Biological testing has been conducted primarily in the IACP studies, although
some testing has also been done in California, Connecticut, and New Jersey. Current
testing has been carried out in research programs rather than in support of regulatory
programs. For this reason, this chapter will provide only a brief overview of biological
testing in order to introduce this technique as an approach to be considered in future
applications. Providing details on biological testing is beyond the scope of this report.
Details are available in (Claxton, 1987) and (Lioy, 1988).
At present, biological testing has a number of practical and theoretical
disadvantages. Because biological testing is still experimental, there are a number of
purely practical problems in attempting to apply it within current field-oriented air toxics
surveys. For instance, chemicals such as acetone and toluene/ethanol, which are used to
extract samples, can produce artifacts that affect the estimates of mutagenicity. The
relatively small mass available from typical ambient air samples is another important
factor: it limits the degree of fractionation and the subsequent chemical and biological
testing that can be performed. Similar practical problems exist with recovery rates of
pollutants from collection media and contamination and/or loss of material in the
collection medium.
218
-------
The theoretical development of the technique is also limited. For instance,
experience with gas phase bioassay techniques is still limited (Claxton, 1985; Hughes,
1987). Field studies to date generally do not cover gas phase pollutants in a
comprehensive sense because of limitations in the sensitivity of the techniques. Existing
studies to address gas phase pollutants have only been carried out in the laboratory, at
higher than ambient conditions. At a more basic level, additional research is needed on
evaluating relationships between mutagenic activity and human or animal data on
carcinogenicity or other health impacts.
Approach
The IACP relies on short-term bioassay tests primarily because they require smaller
quantities of sample, are rapid enough to guide the chemical speciation work,
and correlate well with known carcinogenesis studies (Claxton, 1987). Two types of
biological testing are used in the IACP:
1. Ames testing is used for large samples, i.e., when more than 10 mg of
organic material are available. Ames test procedures allow for the
calculation of dose-response relationships, mutagenicity slopes, and so
forth. In the IACP studies, Ames tests are used to compare combustion-
impacted ambient air samples to background (clean) ambient air samples,
indoor ambient air samples, and the combustion source emissions. The
IACP is currently examining wood smoke-affected ambient air sites.
2. For relatively small samples, high performance liquid chromatography
(HPLC)-coupled liquid pre-incubation assay is used to provide a bioassay
"fingerprint" of the sample. Each HPLC fraction is characterised as to its
mutagenicity. If the mutagenic fractions are not highly toxic, an indication
of their relative mutagenicity is available as revertants per plate per
fraction recovered.
8.4 Comments on the Use of Biological Testing in Air Toxics Studies
Relative Importance of Gas Phase Pollutants
Perhaps the most significant finding thus far is that the mutagenicity of the gas
phase pollutants, after aging and transformation, can be much greater than either the
"fresh" gaseous pollutants or the particulate matter in urban air. This cpnclusion is
tentative, however, and more research is needed to corroborate initial' indications.
219
-------
Nevertheless, it suggests that if biological testing were used as a comprehensive screening
technique, future applications would have to address gaseous aged pollutants as well as
the more routinely studied particulate matter.
Indoor/Outdoor Differences
In another finding of the IACP study, particulate mutagenicity levels inside a
subset of residences were found to be lower than those immediately outside the home.
The basis of this finding, as well as the gas versus particle phase issue mentioned above,
will undoubtedly be subjects of future research.
Use of Biological Testing to Set Priorities
Biological testing could provide a broad check of genotoxic risks among different
metropolitan areas, though variations resulting from the use of different strains of the
same species, or varying laboratory conditions, could limit the accuracy of these
comparisons. Overall, however, biological testing could become a useful means of
identifying urban or industrial areas where a more detailed review of source-receptor
relationships for specific pollutants is warranted.
220
-------
Chapter 8
References
Claxton, L.D., A.D. Stead, and D. Walsh. 1987. "An Analysis by Chemical Class of
Salmonella Mutagenicity as Predictors of Animal Carcinogenicity." Mutation Research (in
press).
Draxler, R. R. 1987. "Estimating Emissions from Air Concentration Measurements."
Journal of Air Pollution Control and Hazardous Waste Management, Volume 37, No. 6,
pp. 708-714.
EPA, 1981, 1983, 1985. Receptor Model Technical Series. Volume I. EPA-450/4-81-016a,
1981. Volume II. EPA-450/4-81-016b, 1981. Volume III. EPA-450/4-83-014, 1983.
Volume IV. EPA-450/4-83-018, 1983. Volume V. EPA-450/4-85-007, 1985.
Lioy, P., et al., 1988. Toxic Air Pollution, A Comprehensive Study of Non-Criteria Air
Pollutants, Chapter 5, "Mutagenicity of Inhalable Particulate Matter at Four Sites in New
Jersey," Lewis Publishers, pp. 125-166.
Machlin, Paula R., 1986. "Denver Integrated Environmental Management Project:
Approach to Ambient Air Toxics Monitoring," Environmental Protection Agency Region 8,
Denver Colorado', November. 1986.
O'Shea, William J., and Peter A Sheff, 1988. "A Chemical Mass Balance for Volatile
Orgnaics in Chicago," Journal of the Air Pollution Control Association, Volume 38, No. 8,
pp. 1020-1026, August 1988.
Pace, T. 1987, U.S. Environmental Protection Agency. Personal Communication.
Scheff, Peter A. and Mardi Klevs, 1987, "Source-Receptor Analysis of Volatile
Hydrocarbons," Journal of Environmental Engineering, Volume 113, No. 5. October 1987.
Stevens, R. 1987. U.S. Environmental Protection Agency. Personal Communication,
1987.
APPENDIX A
GLOSSARY OF TERMS
Acute exposure: One or a series of short-term exposures, generally less than 24 hours.

Additive risk/incidence: Risk/incidence due to the interaction of two or more chemicals in which the combined health effect is equal to the sum of the effect of each chemical alone.

Aggregate incidence: In an urban air toxics context, an areawide, additive incidence. This term sometimes refers to areawide incidence or additive incidence only, so one should pay attention to the contextual use for the proper meaning.

Ambient (air) monitoring: The collection of ambient air samples and the analysis thereof for air pollutant concentrations.

Annual incidence: Lifetime cancer incidence adjusted to a yearly basis, typically by dividing lifetime incidence by 70.

Area source: Any source too small and/or numerous to consider individually as a point source in an emissions inventory.

Areawide average individual risk: Average individual risk to everyone in an area (but not necessarily the actual risk to anyone). May be computed by dividing lifetime aggregate incidence by the population within the area.

Areawide incidence: Incidence over a broad area, such as a city or county, rather than at a particular location, such as an individual grid cell.

ATM: Atmospheric Transport Model. A Gaussian point source dispersion model used in GAMS (before incorporation of ISCLT for this purpose).
Averaging period: The length of time over which concentrations are averaged, such as 1 hour, 8 hours, or 24 hours.

B(a)P: Benzo(a)pyrene. One of a group of compounds called polycyclic organic matter (POM). B(a)P is sometimes used as a surrogate for all POM in computing emissions, exposures, and risks.

Background: A term used in dispersion modeling representing the contribution to ambient concentrations from sources not specifically modeled in the analysis, including natural and man-made sources.

BACT: Best available control technology.

BG/ED: Block group/enumeration district, as designated by the Bureau of the Census. A block group is an area representing a combination of contiguous blocks having an average population of about 1,100. An enumeration district is an area containing an average of about 800 people and is designated when block groups are not defined.

Bioassay: A test in living organisms, e.g., a test for carcinogenicity in laboratory animals, generally rats and mice, which includes a near-lifelong exposure to the agent under test.

Biological testing: See Bioassay.

Box model: A simplified modeling technique that assumes uniform emissions within an urban area, and uniformly mixed concentrations within a specified mixing depth.

CAG: Carcinogen Assessment Group. EPA group that prepared qualitative and quantitative carcinogenic risk assessments.

Cancer: A cellular tumor, the natural course of which is fatal. Cancer cells, unlike benign cells, exhibit properties of invasion and metastasis (malignancy). Cancers are divided into two broad categories: carcinoma and sarcoma.

Carcinogenicity: The extent to which a substance is able to induce a cancer response.

Catalyst: A substance that promotes a chemical reaction. In the context of this report, a device installed on the tailpipe of a motor vehicle to control exhaust emissions.

CDM: Climatological Display Model. A Gaussian dispersion model whose particular strength is its detailed area source treatment. Can also handle point sources, but not in as detailed a manner as ISC.
Centroid (population, source): A single point whose coordinates represent the location of a BG/ED, in the case of a population centroid, or the location of an emission point, in the case of a complex source.

Chemical mass balance: A type of receptor model, employing chemical methods for source impact determinations.

Chronic exposure: Long-term exposure usually lasting six months to a lifetime.

Co-control: In the context of air toxics, co-control represents the simultaneous control or mitigation of air toxics and criteria pollutants via the same control measure. For example, a motor vehicle catalyst would reduce VOC and CO emissions and also reduce benzene and other gaseous toxics.

Comparative potency factor: A cancer unit risk factor for a complex substance or mixture that is extrapolated from human risk data for a reference substance based on the ratio of short-term bioassay responses of the complex substance to the reference substance. EPA is developing comparative potency factors for various sources of POM.

Complex facility: A point source covering a large area and comprised of many, generally different, kinds of emission points such as fugitive equipment leaks, vents, stacks, volume sources, etc.

Complex terrain: Terrain exceeding the height of a stack.

Composite plume: The result of the merging of multiple plumes downwind of an industrial complex.

Coverage (pollutant, source, spatial): The extent of inclusion of pollutants or sources in an emission inventory or risk analysis, or the amount of space or area represented in a monitoring program, inventory, or risk analysis.

Cr+6: Hexavalent chromium, i.e., chromium in the +6 valence state.

Criteria pollutants: Pollutants defined pursuant to Section 108 of the Clean Air Act and for which national ambient air quality standards are prescribed. Current criteria pollutants include particulate matter, SOx, NOx, ozone, CO, and lead.
Culpability: The extent to which something (generally a pollutant or source) is responsible for some effect (such as exposure or risk).

Decay: A term that represents pollutant removal by physical or chemical processes.

Deposition: The removal of particulate matter and gases, at a land or water body surface, by precipitation or dry removal mechanisms, including surface reactions and filtering.

Detection limits: See minimum detection limits.

Dispersion coefficients: Parameters used in Gaussian dispersion modeling to estimate plume growth through dispersion along the horizontal and vertical axes. These are computed as a function of downwind distance and atmospheric stability.

Dispersion modeling: A means of estimating ambient concentrations at locations (receptors) downwind of a source, or an array of sources, based on emission rates, release specifications, and meteorological factors such as wind speed, wind direction, atmospheric stability, mixing height, and ambient temperature.

DNPH: 2,4-Dinitrophenylhydrazine. Material used in special cartridges for monitoring formaldehyde and other aldehydes.

Dose-response assessment: The determination of the relation between the magnitude of exposure and the probability of occurrence of the health effects in question.

Effective stack height: The height above ground level of the centerline of a plume. It is the sum of the physical stack height, plume rise, and stack-tip downwash (as applicable).

Excess cancer risk: An increased risk of cancer above the normal background.

Exposure: An event in which an organism comes into contact with a chemical or physical agent.

Exposure assessment: Measurement or estimation of the magnitude, frequency, duration, and route of exposure to substances in the environment. The exposure assessment also describes the nature of exposure and the size and nature of the exposed populations.

Factor analysis: A type of receptor model, employing chemical methods for source impact determination.
Fenceline: A term used to represent the property boundary of a facility.

Fine particulate: Particulate matter less than 2.5 microns in size.

FMVCP: Federal Motor Vehicle Control Program. EPA's program to control motor vehicle emissions.

Formulation: (See Model formulation.)

Fugitive emission/release: Emissions unconfined to a stack or duct, such as equipment leaks from valves, flanges, etc., or open spills.

GAMS: GEMS Atmospheric Modeling System. GAMS is an exposure model, similar to HEM/SHEAR, developed by EPA's Office of Toxic Substances.

Gaussian model: A Gaussian dispersion model represents the distribution of concentrations within a plume by assuming a normal distribution along the horizontal and vertical axes. In the basic form, predicted concentrations are estimated as a function of emission rate, horizontal and vertical dispersion coefficients, and vertical and horizontal distance from the plume centerline.
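A standard textbook form of this basic Gaussian plume equation, supplied here for reference (it does not appear in the original glossary), gives the concentration from a source with emission rate Q, effective stack height H, and wind speed u as:

$$
\chi(x,y,z) = \frac{Q}{2\pi u\,\sigma_y\sigma_z}\,
\exp\!\left(-\frac{y^{2}}{2\sigma_y^{2}}\right)
\left[\exp\!\left(-\frac{(z-H)^{2}}{2\sigma_z^{2}}\right)+
\exp\!\left(-\frac{(z+H)^{2}}{2\sigma_z^{2}}\right)\right]
$$

where sigma_y and sigma_z are the horizontal and vertical dispersion coefficients (see Dispersion coefficients) and the second bracketed term accounts for reflection of the plume at the ground surface.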
GC: Gas chromatography. A technique for separating compounds on a chromatographic column for subsequent analysis.

GC/MS: Gas chromatography coupled with mass spectrometry for analysis of compounds.

GEMS: Graphical Exposure Modeling System. An interactive, multimedia information management system that contains physiochemical parameters, fate data, and multimedia exposure models (e.g., GAMS), developed by EPA's Office of Toxic Substances.

Global buildup: The widespread accumulation of pollutants in the atmosphere over the years. Global buildup is generally associated with more inert compounds such as halogens (e.g., carbon tetrachloride).

Grid: A network of rectilinear or polar grid cells superimposed over an area, generally for modeling analyses. A rectilinear grid is defined by a series of perpendicular lines defining rectangular or square grid cells, whereas a polar grid is defined by a series of concentric circles and straight lines radiating from the center of the circles.
Grid cell: The smallest area resolved within a modeling grid.

Grid spacing: The dimensions of the grid cells within a grid.

Grid square: A grid cell whose sides are equal.

Hazard identification: The determination of whether a particular chemical is or is not causally linked to particular health effects.

HEM: Human Exposure Model. EPA model used for exposure and risk analysis, which defines polar receptor grids around each point source. Can also model area sources by apportioning county level emissions to each BG/ED and running a simple box model. HEM contains two component modules: SHED and SHEAR. (Note: HEM is being upgraded by EPA.)

Highway vehicle source: Car, truck, or motorcycle. Also called road vehicles or motor vehicles.

Hot spot: A particular receptor, grid cell, or localized area wherein exposure or risk is high.

HPLC: High performance liquid chromatography.

Hybrid modeling: The use of several different models, particularly the mixed use of dispersion and receptor modeling, for complementary analyses.

IACP: Integrated Air Cancer Program. EPA long-term research and development program to develop methods and conduct field and lab tests to learn what causes cancer in complex urban air mixtures and what sources are contributing to this cancer burden.

IEMP: Integrated Environmental Management Project. A series of studies conducted in Philadelphia, Baltimore, Santa Clara, Kanawha Valley, and Denver to evaluate multimedia contributions to various health risks, with emphasis on cancer.

Incidence: The frequency of occurrence of a certain event or condition, such as the number of new cases of a specific disease or tumor occurring during a certain period. The incidence rate is the number of new cases during a certain period divided by the population size (e.g., 10 cases per 100,000 exposed persons).

Individual risk: The increased risk for a person exposed to a specific concentration of a toxicant. May be expressed as a lifetime individual risk or as an annual individual risk, the latter usually computed as 1/70 of the lifetime risk.

IRIS: Integrated Risk Information System. EPA computer system containing risk information (e.g., cancer unit risk factors) for specific chemicals.

ISC: Industrial Source Complex model. EPA Gaussian dispersion model designed to handle complex sources. ISC contains a long-term module (ISCLT) and a short-term module (ISCST).

Lifetime: Considered to be 70 years in EPA health risk assessments.

LONGZ: A predecessor model of ISCLT, containing many of the source-specific features of ISCLT. A long-term (seasonal or annual average) Gaussian dispersion model that is used for point and urban-wide area sources. This model can accept site-specific turbulence data to estimate local dispersion rates.

MADAM: Monitoring of Ambient Data Assessment Module. Module within EPA's PIPQUIC system used to evaluate model performance by comparing measured and model-predicted ambient air quality data and partitioning the results by various meteorological parameters.

MEI: Maximum exposed individual.

Microenvironment: Localized environment in which one may be exposed to pollutant concentrations that differ considerably from ambient (outdoor) air (e.g., indoor household air, occupational exposures, air within automobiles, etc.). EPA's TEAM studies evaluate personal exposures as individuals are exposed to air in different microenvironments during each day.

Micron: One millionth of a meter. A dimensional unit used to measure the diameters of particles.

Microscale: The immediate vicinity of a source, e.g., within 1 to 2 kilometers.

Minimum detection limit: The lowest level measurable by a monitoring technique at some level of confidence.

MIR: Maximum individual risk, i.e., risk to the most exposed individual.
Mitigation: The reduction or control of emissions, exposures, or risks due to air toxics.

Mixing height (or mixing depth): The height above the surface through which vertical mixing occurs without suppression by an elevated stable layer.

Mobile source: Any motorized vehicle, such as cars, trucks, airplanes, or trains. Sometimes refers specifically to highway vehicle sources.

Model formulation: A model formulation is determined by the model selected, the specific input data, and options selected for a model run. A range of model formulations could be made using the same dispersion model.

Model performance: The evaluation of the performance of a dispersion model by comparing modeled concentrations to measured air quality data, generally based on statistical tests such as measures of bias, variance, and correlation.

Model(ing): See dispersion modeling or receptor modeling. This term can also be used in the context of emissions modeling, referring to the prediction of emissions from a source or source category.

Modeling protocol: As used in this report, a modeling protocol provides a detailed account of the specific model formulation(s) that will be used to perform a dispersion modeling analysis. It generally is used to obtain comment prior to doing a detailed analysis.

Monitoring: The collection and analysis of ambient air samples. Sometimes refers specifically to sampling alone and not to analysis. Can also refer to source (stack) sampling.

Motor vehicle: On-road or off-road cars, trucks, or motorcycles.

Multiple linear regression analysis: A type of receptor model employing chemical methods for source impact determination.

Mutagenicity: The extent to which a chemical or physical agent interacts with DNA to cause a permanent, transmissible change in the genetic material of a cell.

NAAQS: National Ambient Air Quality Standard. Set by EPA for criteria pollutants under the Clean Air Act.
NEDS: National Emissions Data System. EPA's centralized emission inventory of criteria pollutant emissions.

NEM: NAAQS Exposure Model. EPA exposure model that considers movement of individuals through various microenvironments.

NESHAP: National Emission Standard for Hazardous Air Pollutants. Standards set by EPA for hazardous air pollutants under Section 112 of the Clean Air Act.

Network: An array of ambient air monitors distributed over an area.

Network enhancement: The enhancement of an ambient air network to improve spatial or temporal coverage. Sometimes done on a temporary basis.

Noncancer risk: Risk of a health effect other than cancer.

Nontraditional sources: Sources not usually included in an emission inventory, such as wastewater treatment plants, ground-water aeration facilities, hazardous waste combustors, landfills, etc., which are air emitters due to intermedia transfer from water or solid waste.

Normalized modeling: Modeling of unit weights (e.g., 1 mg/yr) of emissions from each source, rather than modeling of actual emissions, and displaying incremental receptor concentrations or receptor coefficients. Thereafter, the resulting normalized receptor coefficients are adjusted by actual emission rates to simulate different emission scenarios, rather than re-running the model over and over with different emissions totals. This process assumes linearity between emissions and modeled ambient air concentrations, which does not always hold true if stack and exhaust parameters change.
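As an illustration of this bookkeeping, the short sketch below (hypothetical values and names, not drawn from any study in this report) scales a matrix of unit-emission receptor coefficients by candidate emission rates instead of re-running a dispersion model for each scenario:

import numpy as np

# Receptor coefficients: concentration at each receptor per unit of emissions
# from each source, obtained from one dispersion-model run per source with
# unit emissions. Shape: (n_receptors, n_sources). Values are illustrative.
unit_coefficients = np.array([
    [0.8, 0.1, 0.3],
    [0.2, 0.6, 0.1],
])

# Two hypothetical emission scenarios (one rate per source, consistent units).
scenarios = {
    "base case":  np.array([100.0, 50.0, 10.0]),
    "controlled": np.array([60.0, 50.0, 5.0]),
}

# Linearity assumption: total concentration at each receptor is the
# emission-weighted sum of the unit coefficients. As the glossary entry
# notes, this breaks down if stack or exhaust parameters change.
for name, rates in scenarios.items():
    concentrations = unit_coefficients @ rates
    print(name, concentrations)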
NSR: New Source Review. Permit process for evaluating emissions and the need for controls before construction and operation of a proposed facility. EPA, as well as many States and local agencies, has NSR requirements for air toxics sources.

OPPE: EPA's Office of Policy, Planning, and Evaluation. Initiator of the Integrated Environmental Management Project, a series of geographic, multimedia studies in various cities. (See IEMP.)

PAN: Peroxyacetyl nitrate. A photochemical oxidant formed in urban atmospheres along with ozone.
Personal monitoring: Sampling done in EPA's TEAM study by individuals wearing personal monitors.

Photochemically formed pollutant: A secondarily formed pollutant due to atmospheric photochemistry, e.g., formaldehyde or PAN.

PIC: Products of incomplete combustion. A term used somewhat loosely in various studies, referring generally to polycyclic organic matter.

PIPQUIC: Program Integration Project - Queries Using Interactive Commands. A data handling system developed by EPA as part of the IEMP to calculate exposures and risks from air toxics emissions data. PIPQUIC is being used in various urban air toxics studies.

PM: Particulate matter.

Point source: A source large enough for an individual record to be kept in an emission inventory, often emitting above a certain cutoff level or threshold.

Polar grid: See grid.

POM: Polycyclic organic matter. A broad class of compounds that generally includes all organic structures having two or more fused aromatic rings (i.e., rings sharing a common border). POM includes polynuclear aromatic hydrocarbons (PAH or PNA).

Population risk: Generally synonymous with areawide incidence.

Primary pollutant: One emitted directly from an emission source prior to any secondary physical or chemical reaction.

PUF: Polyurethane foam. An adsorbent material used for sampling semivolatile organic compounds.

QA/QC: Quality assurance/quality control.

Receptor: A particular point in space where a monitor is located or where an exposure or risk is modeled.

Receptor grid: An array of receptors. Generally synonymous with network.

Receptor modeling: A technique for inferring source culpability at a receptor(s) by analysis of the ambient sample composition. There are various receptor models employing microscopic and chemical methods for analysis.
Release specifications: Used as model inputs to characterize the location, release height, and buoyant and momentum fluxes of each source. Required terms include stack height, exit velocity, inner stack diameter, exhaust temperature, and the dimensions of nearby structures.

Risk: The probability of injury, disease, or death under specific circumstances. In quantitative terms, risk is expressed in values ranging from zero (representing the certainty that harm will not occur) to one (representing the certainty that harm will occur).

Risk assessment: The use of the factual base to define the health effects of exposure of individuals or populations to hazardous materials and situations. May contain some or all of the following four steps: hazard identification, dose-response assessment, exposure assessment, and risk characterization.

Risk characterization: The description of the nature and often the magnitude of human risk, including attendant uncertainty.

Risk management: The decision-making process that uses the results of risk assessment to produce a decision about environmental action. Risk management includes consideration of technical, social, economic, and political information.

Road vehicle source: See highway vehicle source.

Sample compositing: The combining of samples before analysis to increase temporal or spatial representativeness while reducing analytical costs.

Sampling: See monitoring or ambient air monitoring.

Sampling duration: The length of time (generally in hours) each sample is taken or drawn (e.g., 12 or 24 hours).

Sampling frequency: The length of time between samples (1 hour, 1 day, 6 days, etc.).

Sampling period: The length of time (days, months, years) for which a sampling program is operational.

SCAQMD: South Coast Air Quality Management District. The local air pollution control agency in California responsible for the Los Angeles area.
Scoping study: Also known as a screening study. An assessment or analysis using tentative or preliminary data whose results are not accepted as absolute indicators of risk or exposures, but rather are taken as an indication of the relative importance of various sources, pollutants, and control measures. Most urban air toxics assessments conducted to date have been considered to be scoping studies, useful for pointing out where more detailed work is needed prior to regulation.

SCREAM: South Coast Risk and Exposure Model. An enhancement of EPA's HEM/SHEAR developed by SCAQMD that uses more detailed population and meteorological data.

Screening study: See scoping study.

Secondary pollutant: Also, "secondarily formed pollutant." A pollutant formed in the atmosphere as a result of chemical reaction and/or condensation, such as PAN. Some pollutants (e.g., formaldehyde) are both primary and secondary pollutants.

Semivolatile organics: Compounds that have vapor pressures (in clean air) of 10^-8 to 10^-4 torr, and which readily adsorb upon particulate matter. Not clearly gaseous or particulate under all conditions.

SHEAR: Systems Application Human Exposure and Risk. A module within EPA's Human Exposure Model designed to focus on multiple pollutant, multiple source exposures, including area source analyses. SHEAR uses a Gaussian dispersion model for point sources and a box model for area sources.

SHORTZ: Short-term version (e.g., 1-hr to 24-hr averaging periods) of the SHORTZ/LONGZ companion models. See LONGZ.

SIC: Standard Industrial Classification. A series of codes or classifications to categorize industry, published regularly by the Office of Management and Budget.

SIP: State Implementation Plan. Required of States under the Clean Air Act to indicate a plan of action to meet National Ambient Air Quality Standards for criteria pollutants.

Source apportionment: See receptor modeling.

Source grid: A grid defined to encompass all emission sources that one wants to model. The source grid is sometimes defined bigger than a corresponding receptor grid so that all local sources impacting on the receptor grid will be considered. More typically, though, the source grid and receptor grid coincide.
Spatial coverage: The area included or covered by a sampling network or a source/receptor grid.

Spatial resolution: The extent to which emissions, monitoring, or any other data are subdivided or resolved in space, generally across a geographical area. For example, emissions data may be spatially resolved to 1 kilometer by 1 kilometer squares within an urban area.

Species profile: A set of apportioning factors that allow one to subdivide VOC or PM emission totals into individual chemicals or chemical classes.

Stability: A parameter to describe the degree of turbulence in the atmosphere, ranging from unstable (vigorous mixing) to stable (suppressed mixing).

Subchronic exposure: Exposure to a substance spanning approximately 10 percent of the lifetime of an organism.

Supersite: A monitoring site that alone, or in conjunction with other sites, best represents the scale of interest, such as suburban neighborhoods, the central business district, or rural areas. Such sites can be inferred by statistical analysis of modeled data.

Surrogate indicator: A variable whose spatial or temporal distribution is assumed to behave in the same manner as some variable of interest. Surrogate indicators are used for spatial and temporal apportionment of emissions data, especially for area sources.

TEAM: Total Exposure Assessment Monitoring. The type of monitoring being conducted by EPA's Office of Research and Development to measure total human exposures of individuals as they occupy various microenvironments, such as outdoors, indoors, and commuting in motor vehicles.

Temporal resolution: The extent to which some variable, typically emissions and monitoring data, is subdivided or resolved in time. Data, for example, may be resolved hourly or seasonally.

Tenax: A porous polymer material often used for sample collection of certain organic materials.
Tracer: Tracer pollutants are used in receptor modeling to estimate the contribution of a source, or source category, to total ambient levels of the pollutant or pollutant class (such as PM-10) of interest. Ideally, tracers are unique to a source or source category.

Transformation: The conversion, through chemical or physical processes, of one compound or several compounds into other compounds as a result of aging and irradiation in the atmosphere.

Transport: The movement of pollutants by wind flow. Transport is characterized for modeling purposes by wind speed and wind direction.

UNAMAP model(s): User's Network for Applied Modeling of Air Pollution. A set of dispersion models compiled by EPA that is used to support regulatory and other needs for modeled data.

Unit cancer risk factors: The incremental upper bound lifetime risk estimated to result from a lifetime exposure to an agent if it is in the air at a concentration of 1 microgram per cubic meter.

Urban soup: An expression referring to the multi-source, multi-pollutant urban air toxics problem resulting from the complex interaction of many pollutants, sources, and atmospheric transformation.

VMT: Vehicle miles traveled. Mobile source emission factors are typically expressed in terms of grams per VMT.

VOC: Volatile organic compounds.
APPENDIX B
EVALUATION OF OPTIMUM SIZE AND LOCATION OF AIR
TOXICS MONITORING NETWORKS BASED ON SPATIAL
CORRELATIONS OF CONCENTRATION
Introduction
The purpose of this Appendix is to describe a method for selecting optimum monitoring sites based on dispersion modeling and statistical analysis. This method uses modeled emissions data to design a monitoring network that best meets project objectives within a specific study area. The ultimate goal of this methodology is to select a limited number of monitoring sites that yield relatively independent (i.e., noncorrelated) air quality data, and to avoid selecting sites that provide relatively little new information. Appendix B is organized to first describe this optimization method, and then to provide a limited evaluation of its effectiveness based on the IEMP Philadelphia measured air toxics data set. This measured data set is used to evaluate how well the modeling-based site selection approach corresponds with the selection of optimal sites ("Supersites").
B.1 Method
There are six steps that lead to a quantitative determination of the sites that best meet study objectives, based on the best available emissions data. The steps can be briefly summarized as follows.
Step 1 - Select a "long list" of candidate monitoring sites. Select a relatively broad range of candidate monitoring sites. In many metropolitan areas this set could include all existing criteria pollutant monitoring sites and other readily accessible locations, such as schools, state/county facilities, etc. The goal of Step 1 is to provide broad coverage of candidate sites throughout the study area.
Step 2 - Compile emissions data. Obtain (or compile as necessary) available emissions data for point and area sources (such as mobile sources, degreasing, etc.).
Step 3 - Perform dispersion modeling analysis. Perform dispersion modeling using the emissions data obtained in Step 2. The meteorological data ideally would be five(1) separate years of sequential data to represent the season(s) proposed for the monitoring program. The averaging period should also match that proposed for the monitoring program, e.g., 24-hour periods. If every 3rd or 6th day sampling frequencies will be used for the monitoring program, the same should be done for the modeling. In short, the goal of the modeling is to compile an output data set comparable to the measured data set.
Step 4 - Perform statistical analysis. Correlation matrices for each pollutant are compiled to show "R" values for all site combinations. The correlation considers all "samples." For example, if a three-month sampling period is proposed, taking 24-hour integrated samples on every third day, then there would be 30 modeled 24-hour averages for each site and each pollutant.
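The "R" value here is presumably the ordinary Pearson correlation coefficient; the formula below is supplied for reference and is not spelled out in the original text. For two sites with modeled sample series $x_i$ and $y_i$, $i = 1, \dots, n$:

$$ R = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}} $$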
Step 5 - Group sites. Group sites by combining sites correlated by 0.7 or higher into one cluster. (A 0.7 correlation coefficient was arbitrarily selected. A higher or lower value could be selected to best meet project needs.) There would be separate groups of sites for each pollutant. Assign a score of +1 to each cluster and subdivide the score among the members of the cluster. This is done separately for each pollutant, and then the scores at each site are summed across all pollutants.
Step 6 - Successive elimination of sites. The site with the lowest score is eliminated and the scores recomputed (as was initially done in Step 5). Another site is eliminated as above and the scores recomputed, and so forth, until a core network (e.g., 3 sites) remains.
(1) If multiple years of meteorological data are obtained, the model analysis is run for each receptor for each year to assess the influence of meteorological variability on site selection.
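To make the mechanics of Steps 4 through 6 concrete, the following sketch (illustrative only, and not part of the original method description; all names are hypothetical) implements the correlation, clustering, scoring, and elimination arithmetic in Python. One simplifying assumption is flagged in the comments: clusters are formed as connected components of the R >= 0.7 graph, whereas the cluster lists shown later in this Appendix allow a site to appear in more than one cluster.

import numpy as np

def clusters_for_pollutant(conc, threshold=0.7):
    """Steps 4-5 grouping for one pollutant.

    conc is an (n_samples, n_sites) array of modeled (e.g., 24-hour average)
    concentrations. Returns a list of site-index sets. Clusters here are the
    connected components of the graph linking sites with R >= threshold, a
    simplification of the report's overlapping cluster lists.
    """
    r = np.corrcoef(conc, rowvar=False)  # Step 4: site-by-site R matrix
    n_sites = conc.shape[1]
    parent = list(range(n_sites))        # union-find over correlated pairs

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n_sites):
        for j in range(i + 1, n_sites):
            if r[i, j] >= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n_sites):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

def information_scores(data_by_pollutant, site_ids, threshold=0.7):
    """Step 5 scoring: each cluster carries +1, split evenly among its
    members (a site that is independent for a pollutant forms a one-member
    cluster and earns the full +1); scores are summed across pollutants."""
    scores = {s: 0.0 for s in site_ids}
    for conc in data_by_pollutant.values():
        for cluster in clusters_for_pollutant(conc, threshold):
            for idx in cluster:
                scores[site_ids[idx]] += 1.0 / len(cluster)
    return scores

def eliminate_to_core(data_by_pollutant, site_ids, core_size=3, threshold=0.7):
    """Step 6: repeatedly drop the lowest-scoring site and rescore until
    only core_size sites remain. Returns the surviving site ids."""
    site_ids = list(site_ids)
    data = {p: np.asarray(c, dtype=float) for p, c in data_by_pollutant.items()}
    while len(site_ids) > core_size:
        scores = information_scores(data, site_ids, threshold)
        worst = min(site_ids, key=scores.get)
        keep = [k for k, s in enumerate(site_ids) if s != worst]
        data = {p: c[:, keep] for p, c in data.items()}
        site_ids = [s for s in site_ids if s != worst]
    return site_ids

For the Philadelphia example below, data_by_pollutant would hold one array per compound, each with one column per candidate site and one row per modeled 24-hour sample.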
B.2 Example Application: Philadelphia IEMP
This example application of the supersite concept first shows site selection based on modeled data, followed, as a comparison, by the sites that would be selected based on the actual measured data set. For the measured data set, greater detail is provided on the site elimination procedure of Step 6. (The reader is referred to the model performance study conducted for the IEMP Philadelphia Study(2) for background information on the measured data set used for this example.)
Siting Analysis Based on Modeled Data
The IEMP Philadelphia Study operated a ten-site monitoring network3 during the
period of 1983-1984. Using these ten locations as "candidate sites," dispersion modeling
was used to select sites expected to be most independent. Independence was defined as
having a correlation (R value) of less than or equal to 0.70 with all other "candidate sites."
For benzene, ethylbenzene, trichloroethylene, and xylene all correlations in Step 1
were above the cutoff of 0.7 (in fact, they were above 0.84, 0.70, 0.73, and 0.79,
respectively), making all ten sites into one cluster. This is attributed to the dominance of
area sources in the model and the inability to detect many of the spatial variabilities
caused primarily by mobile sources and possibly by gasoline marketing. The remaining
clusters are:
Carbon tetrachloride    3, 5, and 10; 4, 7, and 10; 3, 4, 5, 8, and 9
Chloroform              2 and 9; 3, 4, 5, 7, 8, 9, and 10
Ethyl chloride          2 and 3; 5, 7, 8, and 10; 5, 8, 9, and 10
Perchloroethylene       1, 2, 3, and 5; 3, 4, 5, 7, 8, 9, and 10
1,2-Dichloroethane      2 and 3; 5, 7, 8, and 10; 8 and 9
Elimination of sites by Steps 5 and 6 of the site selection procedure leads to:
(2) Sullivan, D. A., 1985. Evaluation of the Performance of the Dispersion Model SHORTZ for Predicting Concentrations of Air Toxics in the U.S. Environmental Protection Agency's Philadelphia Geographic Study. U.S. Environmental Protection Agency, Integrated Environmental Management Division, Washington, D.C.
(3) Nine sites had high enough data recovery to support the objectives of this analysis.
Stage      Site Eliminated    Sites Left
Stage 1    7 out              1, 2, 3, 4, 5, 8, 9, 10
Stage 2    5 out              1, 2, 3, 4, 8, 9, 10
Stage 3    8 out              1, 2, 3, 4, 9, 10
Stage 4    3 out              1, 2, 4, 9, 10
Stage 5    10 out             1, 2, 4, 9
Stage 6    4 out              1, 2, 9
etc.
This process would thus prescribe sites 1, 2, and 9 as yielding the most independent information, and thus the optimum locations for 3 monitoring sites. The process could be truncated or continued to yield any number of sites, down to a single location.
Siting Analysis Based on Measured Data
As a test of the site selection approach described above, the measured data set from the Philadelphia IEMP air toxics monitoring network was used in place of the modeled data used in the previous subsection. The goal was to use the measured data set to help evaluate the usefulness of the model-based siting procedure. (The following discussion describes the site elimination process in detail as it was carried out in Philadelphia.)
The final clusters of Step 1 based on measured data are as follows:

Compound                Clusters                        Number of Sites Needed for Network(4)
Benzene                 9 and 10                        8
Carbon Tetrachloride    4 and 9                         8
Chloroform              1, 3, 4, 8, 9, and 10           4
Ethylbenzene            1 and 5; 4, 7, and 8            6
1,2-Dichloroethane      1 and 10                        8
Perchloroethylene       4 and 10; 8 and 10              7
Toluene                 Sample size too small           --
Trichloroethylene       3, 4, and 8; 9 and 10           6
Xylene                  4 and 8; 7, 9, and 10           6
1,2-Dichloropropane     1 and 4; 3 and 5                7

(4) Considering one compound at a time.
We note that for perchloroethylene, site 10 is associated with sites 4 and 8, although 4 and 8 are not highly correlated (0.58). We then assigned a score of +1 to each cluster and subdivided the score among the members of each cluster. Adding up the scores across the nine chemicals produced the following values:
Site    Score
1       6.67
2       9.00
3       7.00
4       4.84
5       8.00
7       7.67
8       5.84
9       5.84
10      5.84
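For example, site 4's score can be reconstructed from the cluster table above: site 4 shares a cluster with other sites for seven compounds, earning 1/2 (carbon tetrachloride) + 1/6 (chloroform) + 1/3 (ethylbenzene) + 1/2 (perchloroethylene) + 1/3 (trichloroethylene) + 1/2 (xylene) + 1/2 (1,2-dichloropropane), and stands alone for benzene and 1,2-dichloroethane, earning 1.0 each; the sum, 4.83, matches the tabulated 4.84 to within rounding.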
From this analysis, site 4 clearly produced the least information; i.e., for seven of the nine compounds considered, site 4 was well correlated with other site(s) at the 0.7 level or higher, and added marginal information to the network. Consequently, if one wanted to have a network of only nine sites, site 4 would be the first one to eliminate based on this approach.
Next we assumed that station 4 was eliminated and repeated the above analysis. This resulted in the following information scores:
Site    Score
1       7.20
2       9.00
3       7.20
5       8.00
7       7.83
8       6.70
9       6.53
10      5.53
Thus, the next site we would eliminate would be site 10. Continuing in this manner, we created networks with fewer and fewer sites until only three sites remained. At that stage, the sites were all independent and so represented a "core." The networks we obtained via this procedure are as follows:

Stage      Site Eliminated    Remaining Sites
Stage 1    4 out              1, 2, 3, 5, 7, 8, 9, 10
Stage 2    10 out             1, 2, 3, 5, 7, 8, 9
Stage 3    8 out              1, 2, 3, 5, 7, 9
Stage 4    3 out              1, 2, 5, 7, 9
Stage 5    9 out              1, 2, 5, 7
Stage 6    1 out              2, 5, 7
It should be noted, based on the initial "value" ranking, that sites 2, 5, and 7 had the highest scores: site 2 (9.0), site 5 (8.0), and site 7 (7.7). This result is not surprising when one considers the orientation of sources along the corridor in which the monitoring network is located. Site 2 is located in the downtown area, which is substantially different from the locations of the other sites, which are in more industrialized areas. Site 5 is located in the heart of the industrial area of Bridesburg, and as such would be expected to be relatively independent. Site 7 is located in an area north of the rest of the network, and experienced substantially elevated concentrations of automobile-related emissions with flow from the southwest; as such, it could be expected to be relatively independent based on the measured data.
Method Evaluation
We note that at the end of the fourth stage of the modeling analysis, we have five sites (1, 2, 4, 9, and 10) in our proposed network, which include at least one representative from each cluster for each of the nine chemicals. In contrast, sites 1, 2, 5, 7, and 9 would have been selected as optimum based on the actual measured data set. It is interesting to note that sites 5 and 7 were preferred sites based on the measured data, while those same two sites were the first to be eliminated for the theoretical model data. On the other hand, the model-based process selected sites 1, 2, and 9 as important; these sites were among the five most informative for the actual data. Therefore, it appears that there is some value in using dispersion models and spatial correlation analysis as a guide to site selection for air quality studies. Caution should be used in selecting only the top few sites because, as shown, the modeled data did well in selecting the top five sites but resulted in significant differences compared to the measured results when selecting a 2-3 site network. Further data need to be evaluated to fully assess this approach.
TECHNICAL REPORT DATA

Report No.: EPA-450/2-89-010
Title and Subtitle: Assessing Multiple Pollutant Multiple Source Cancer Risks From Urban Air Toxics (Summary of Approaches and Insights From Completed and Ongoing Urban Air Toxics Assessment Studies)
Authors: D. Sullivan, T. Lahre, M. Alford
Report Date: April 1989
Performing Organization Name and Address: Air Quality Management Division, Office of Air Quality Planning and Standards, Research Triangle Park, N.C. 27711
Abstract: ...techniques that others have elected to employ and offers insights that may assist the reader in selecting a particular set of techniques for use in a given locale. Major topics covered include: (1) a summary of completed and ongoing urban air toxics assessment studies, (2) ambient monitoring assessment approaches, (3) emission inventory/dispersion modeling assessment approaches, (4) aspects of exposure and risk assessment, (5) control strategy evaluation, (6) data handling, and (7) evolving assessment technologies (e.g., receptor modeling, personal monitoring, and bioassay sampling).
Key Words and Document Analysis, Descriptors: Air Toxics; Assessment Methods, Air Toxics; Air Toxics Cancer Studies; Toxics, Air; Urban Air Toxics Assessments; Urban Soup
No. of Pages: 237