Past, Present, and Future Air Quality Modeling and Its Applications in
the United States
S.T. Rao, J. Irwin, K. Schere, T. Pierce, R. Dennis, and J. Ching
NOAA Atmospheric Sciences Modeling Division1
1On Assignment to the U.S. Environmental Protection Agency
Research Triangle Park, North Carolina 27711, U.S.A.
Since the inception of the Clean Air Act (CAA) in 1969, atmospheric models have been used to
assess source-receptor relationships for sulfur dioxide (SO2), CO, and total suspended particulate matter
(TSP) in urban areas. The focus through the 1970's was on Gaussian dispersion models for non-reactive
pollutants. The 1977 Amendments to the CAA mandated the use of dispersion models for assessing
compliance with the relevant National Ambient Air Quality Standards (NAAQS) when new sources of
pollution are permitted and for prevention of significant deterioration. In the 1980's, the focus shifted to
secondary pollutants (e.g., ozone and acid rain), which led to the development of grid-based photochemical
models to better understand urban- and regional-scale pollution. In the 1990's, attention turned to the
development of one-atmosphere models to deal with multiple pollutants. The new NAAQS for ozone and
fine particulate matter (PM2.5) that were promulgated in 1997 call for the use of one-atmosphere models in
designing multi-pollutant emission control strategies. In the 2000's, there is considerable interest in the
development of integrated airshed-watershed models to properly assess the effects of atmospheric pollution
on sensitive ecosystems. Air quality models can help improve our understanding of the transport and fate of
pollutants, and are essential tools for designing meaningful and effective emission control strategies. Future
applications of air quality models will be towards the prediction and improved understanding of human
exposure, especially in urban areas, and of intercontinental, cross-oceanic, and hemispheric air pollutant transport.
Air Quality Models
There are six types of atmospheric transport and diffusion models: Plume, Segmented Plume and
Puff, Lagrangian Particle, Box, Eulerian Grid, and Computational Fluid Dynamics. Plume models assume
conditions are horizontally homogeneous (everywhere the same) and steady state. Segmented Plume and
Puff models divide the emissions into a series of overlapping volumes (or puffs) so that we no longer need
assume horizontal homogeneous conditions or require conditions to be steady state. Lagrangian Particle
models divide the emissions into tiny masses that are individually "traced" as they are stochastically
transported and diffused downwind. The emitted gaseous or particulate material is "moved" at each time step
by pseudo-velocities that take into account the three basic components of transport and diffusion: (1) the
transport due to the mean wind, (2) the turbulent diffusion caused by the (seemingly) random fluctuations of
wind components (both horizontal and vertical), and (3) the molecular diffusion (if not negligible). Box
models assume the modeling domain is one large homogeneous volume (box); emissions entering this
volume are assumed to be uniformly and instantaneously mixed throughout the volume. The top of the box
may rise to simulate the growth of the mixing depth after sunrise, and pollutants above this rising lid would
then be entrained into the volume. The location of the box can be stationary (to simulate the air over a city),
or can move with the transport wind (to simulate the 'aging' of an air mass). Eulerian Grid models divide the
world into a three-dimensional array of rectangular cells ("grids") within which mixing is considered
uniform and instantaneous. Grid models are used to simulate the formation of products through atmospheric
chemistry and the removal of products by clouds and precipitation, all of which are usually sufficiently
removed from the emissions of immediate concern that the "well-mixed" assumption in each cell is
reasonable. Computational Fluid Dynamics (CFD) modeling is the science of determining a numerical
solution to the governing equations of fluid flow while advancing the solution through space or time to
obtain a numerical description of the complete flow field of interest. The governing equations for Newtonian
fluid dynamics, the unsteady Navier-Stokes equations, have been known for over a century. However, the
analytical investigation of reduced forms of these equations is still an active area of research as is the
problem of turbulent closure for the Reynolds-averaged form of the equations. For non-Newtonian fluid
dynamics, the theoretical development of chemically reacting flows and multiphase flows is at a less
advanced stage.
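For the plume models described above, the core calculation is the steady-state Gaussian plume equation. The following sketch illustrates it; the power-law dispersion coefficients are illustrative assumptions, not values from any regulatory stability scheme.

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h, sy_coef=0.08, sz_coef=0.06):
    """Steady-state Gaussian plume concentration (g/m^3).

    q : emission rate (g/s), u : mean wind speed (m/s),
    x, y, z : downwind, crosswind, vertical receptor coords (m),
    h : effective stack height (m).  The power-law sigma
    coefficients below are placeholders, not values from any
    particular stability classification scheme.
    """
    sigma_y = sy_coef * x**0.9           # crosswind spread (m)
    sigma_z = sz_coef * x**0.85          # vertical spread (m)
    # Crosswind Gaussian term and a vertical term that includes an
    # image source reflecting the plume off the ground surface.
    cross = np.exp(-y**2 / (2.0 * sigma_y**2))
    vert = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
            + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * cross * vert

# Ground-level centerline concentration 2 km downwind of a
# 100 g/s source with a 50 m effective stack height in a 5 m/s wind.
c = gaussian_plume(q=100.0, u=5.0, x=2000.0, y=0.0, z=0.0, h=50.0)
print(f"{c:.2e} g/m^3")
```

The horizontally homogeneous, steady-state assumptions of plume models enter here through the single wind speed `u` and the dispersion curves that depend only on downwind distance.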

Of the transport and diffusion models listed, plume and puff models are used operationally for air
quality assessments of chemical species whose chemical/radioactive transformations can be represented
using time-dependent decay approximations (e.g., SO2, CO, primary emissions of particulate matter). Plume
models are used for near-field assessments, and puff models for mesoscale assessments.
Eulerian grid models are used for operational air quality assessments of multi-pollutants having nonlinear
chemical/radioactive reactions and secondary species of interest resulting from these reactions (e.g., ozone,
sulfate and nitrate aerosols). Box models are primarily used for testing and comparison of alternative
chemical kinetics. Regardless of the transport and diffusion model chosen, there are four additional
fundamental requirements: characterization of the meteorological conditions, characterization of the
emissions, characterization of the chemical/radioactive transformations, and characterization of the wet and
dry deposition processes.
Meteorological characterization for air quality modeling has evolved considerably over the last three
decades. Initially, fairly straightforward objective analyses of ambient surface and upper air data from
routine meteorological monitoring networks were used to obtain gridded fields of winds, temperature, and
moisture for input to Eulerian grid models, such as the Urban Airshed Model (UAM). This technique was
facilitated by the ready accessibility of data from U.S. and Canadian centralized data sources. Limitations
included the sparseness of data in key areas (e.g., mountain areas, seacoasts), interpolation and extrapolation
uncertainties, and physical inconsistencies in the analyses, such as in the derivation of vertical wind
velocities or boundary-layer depths. The next methodological advance included physical constraints with the
objective analyses of data. These included divergence minimization schemes to constrain mass fluxes and
vertical velocities. These techniques were called diagnostic analyses of meteorological data, to distinguish
them from the earlier objective analyses. Some of the same limitations (data sparseness, interpolation errors)
applied to the diagnostic techniques as well. A major advance occurred in the mid-1980s when the Regional
Acid Deposition Model (RADM) was linked to a prognostic meteorological model, the Penn State/NCAR
Mesoscale Model (MM4). For the first time, fully self-consistent dynamically balanced meteorological
fields were used to drive an air quality model. With the addition of four-dimensional data assimilation of
observed profile data to the MM4 a few years later, error propagation and growth was controlled in the
numerical meteorological simulation. The state of the science has since evolved, with nested versions of
mesoscale models (e.g., MM5, RAMS) with advanced data assimilation techniques as drivers for air quality
models.
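The objective-analysis step described above can be sketched as simple inverse-distance weighting of scattered observations onto a regular grid; real analysis packages add quality control, background fields, and the divergence-minimization constraints mentioned above, all omitted here, and the station locations and wind values below are hypothetical.

```python
import numpy as np

def idw_analysis(obs_xy, obs_val, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted objective analysis.

    obs_xy  : (n, 2) array of station coordinates (km)
    obs_val : (n,) observed values (e.g., u-wind component, m/s)
    grid_x, grid_y : 1-D grid coordinates (km)
    Returns a (ny, nx) gridded field.
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.column_stack([gx.ravel(), gy.ravel()])          # (m, 2)
    # Distance from every grid point to every station.
    d = np.linalg.norm(pts[:, None, :] - obs_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)                                  # avoid /0
    w = d**-power
    # Weighted average of the observations at each grid point.
    field = (w * obs_val).sum(axis=1) / w.sum(axis=1)
    return field.reshape(gy.shape)

# Three hypothetical stations reporting the u-wind component (m/s).
stations = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 20.0]])
u_obs = np.array([3.0, 5.0, 2.0])
u_grid = idw_analysis(stations, u_obs,
                      np.linspace(0.0, 100.0, 11),
                      np.linspace(0.0, 50.0, 6))
print(u_grid.shape)  # (6, 11)
```

Note that a weighted average like this can never produce values outside the range of the observations, one reason data sparseness in mountain and coastal areas limited these techniques.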
Emissions are an essential part of an air quality modeling application. Emissions have evolved from
simple time-invariant estimates of pollutant mass to emissions modeling systems, which are as complex as
the underlying chemical-transport model. For plume modeling, proper depiction of the relative position, size
and amount of the release with respect to nearby receptors is critical. Early Gaussian dispersion model
applications rarely considered temporal changes in emissions, although the Tennessee Valley Authority
employed emission models in the mid-1970s for its supplementary control program that considered
variations in power load, which were directly related to projected variations in stack parameters and hourly
sulfur dioxide emissions. With the advent of photochemical models in the 1980s, emission estimates began
to consider temporal variations (time-of-day and day-of-week) especially for mobile sources. The MOBILE
models, first introduced in 1978, considered fleet composition, spatial variations, driving cycles, and
temperature effects on evaporative emissions and cold starts. By the late 1980s, the importance of biogenic
hydrocarbons was recognized for photochemical models, as emission models such as the Biogenic Emissions
Inventory System (BEIS) began to account for spatial vegetation patterns, temperature, solar radiation, and
leaf phenology. Another important advancement in the late 1990s was the development of modeling systems
and frameworks for handling the disparate emission inventories, emissions modeling systems, and model
formats. For air quality management applications, projected emissions are a key consideration. Projections
must consider the current state of emissions and use economic forecasts, anticipated technological changes,
and forecasts of land use and population patterns. Projected inventories and emission control strategies
lie at the heart of air quality model simulations; they influence the confidence that can be placed in
the selected emission control strategy's efficacy in achieving compliance with the relevant NAAQS.
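For the biogenic emissions mentioned above, BEIS-type systems adjust a standard-condition emission rate with environmental correction factors. The sketch below uses the isoprene temperature and light corrections of Guenther et al. (1993) as commonly cited; the coefficient values should be checked against the BEIS documentation before any real use.

```python
import math

# Guenther et al. (1993) isoprene corrections, as commonly cited:
# emission = standard_rate * C_L * C_T.  Coefficients are the
# published values; verify against BEIS documentation before reuse.
R = 8.314                          # gas constant, J mol-1 K-1
ALPHA, C_L1 = 0.0027, 1.066        # light-response parameters
C_T1, C_T2 = 95000.0, 230000.0     # J mol-1
T_S, T_M = 303.0, 314.0            # standard and optimum temps, K

def light_correction(par):
    """C_L for photosynthetically active radiation (umol m-2 s-1)."""
    return ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)

def temperature_correction(t):
    """C_T for leaf temperature t (K); peaks near T_M."""
    num = math.exp(C_T1 * (t - T_S) / (R * T_S * t))
    den = 1.0 + math.exp(C_T2 * (t - T_M) / (R * T_S * t))
    return num / den

# Midday summer conditions: PAR = 1000 umol m-2 s-1, leaf at 30 C.
gamma = light_correction(1000.0) * temperature_correction(303.15)
print(f"adjustment factor: {gamma:.2f}")
```

These two factors are what make biogenic inventories so sensitive to the temperature, solar radiation, and vegetation inputs discussed above.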
Photochemical kinetic mechanisms contain the set of reactions responsible for the generation of
ambient ozone and other photooxidants, and are key components of air quality simulation models for ozone
and other chemically reactive ambient trace gases. Both inorganic (mainly NOx and HOx) and organic (HC)
chemistry of the atmosphere are described in these mechanisms. Because the organic chemistry involves
hundreds, if not thousands, of chemical reactions among several hundred chemical trace gases, mechanism
compression techniques have evolved over time that aggregate the organic compounds into classes to make
the numerical solution within gridded air quality models a tractable problem. One compression technique
entails lumping similar organic (carbon-hydrogen) structures together as defined by the bonds between
carbon atoms. A series of mechanisms known as Carbon Bond evolved from this approach (the latest version is CB4).
compression technique lumps organic compounds of similar reactivity together into certain classes. The
series of mechanisms developed at the Statewide Air Pollution Research Center (SAPRC) in California uses
this method (latest version is SAPRC-99) as does the chemical mechanism developed for the RADM model
(latest versions are RADM2, RACM).
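The inorganic core shared by all of these mechanisms reduces, in the absence of organic radicals, to the NO-NO2-O3 photostationary state, which makes a compact illustration; the rate values below are typical midday magnitudes chosen for illustration only.

```python
# Photostationary-state ozone estimate from the NO-NO2-O3 cycle:
#   NO2 + hv -> NO + O(3P); O(3P) + O2 -> O3   (rate j_no2)
#   NO + O3  -> NO2 + O2                        (rate k_no_o3)
# At steady state, [O3] = j_no2 [NO2] / (k_no_o3 [NO]).

def photostationary_o3(j_no2, k_no_o3, no2_ppb, no_ppb):
    """Steady-state O3 mixing ratio (ppb)."""
    return j_no2 * no2_ppb / (k_no_o3 * no_ppb)

j = 8.0e-3   # NO2 photolysis frequency, s^-1 (midday-like value)
k = 4.0e-4   # NO + O3 rate constant, ppb^-1 s^-1 (illustrative)
o3 = photostationary_o3(j, k, no2_ppb=20.0, no_ppb=10.0)
print(f"O3 ~ {o3:.0f} ppb")
```

The organic chemistry that the lumped mechanisms add perturbs this balance by converting NO to NO2 without consuming ozone, which is how net ozone production occurs.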
Wet and dry deposition models are relevant to the determination of the fate of all chemical
species emitted into and formed within the atmosphere, for without these removal processes, pollutant
concentrations would increase indefinitely over time. For local-scale modeling assessments, these removal processes
are only significant for very brief periods when wet deposition occurs, or for very large particles where
gravitational settling is significant. Beginning with the assessment for acid deposition of sulfates, which was
completed in 1990, it became clear that improvements were needed in the treatment of cloud processes in the
removal of species by wet deposition. Then with the current interest in the characterization of fine particulate
matter (aerosols), wet and dry deposition models have become critical for the proper
partitioning of species between gas-phase and particle-phase.
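Dry deposition in these models is commonly parameterized with an electrical resistance analogy; the sketch below uses illustrative resistance magnitudes rather than values from any published land-use table.

```python
# Resistance analogy for dry deposition: the deposition velocity is
# the inverse of three resistances in series -- aerodynamic (Ra),
# quasi-laminar boundary-layer (Rb), and bulk surface/canopy (Rc).
# The values used below are illustrative orders of magnitude only.

def deposition_velocity(ra, rb, rc):
    """Deposition velocity v_d (m/s) from resistances (s/m)."""
    return 1.0 / (ra + rb + rc)

# Daytime ozone over vegetation (illustrative): Ra = 30 s/m,
# Rb = 20 s/m, Rc = 100 s/m.
vd = deposition_velocity(30.0, 20.0, 100.0)
print(f"v_d = {100.0 * vd:.2f} cm/s")
```

The dry deposition flux is then the product of this velocity and the near-surface concentration, which is why the surface resistance Rc, the hardest term to characterize, dominates model uncertainty.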
Model Evaluation
The terms "evaluation", "verification", and "validation" are all used to describe the process of
comparing model estimates with observations. The validity of a model is examined by assessing how well
model predictions agree with observations given a perfect specification of model inputs. Since the model
inputs are less than perfect, can the models be validated? The term "verification" usually applies to models
that are used in the prognostic mode (e.g., a weather prediction model). The term "evaluation" entails a
systematic comparison of the model predictions against observations. Two kinds of model evaluations are
typically performed: performance evaluations and diagnostic evaluations. Performance evaluations and
diagnostic evaluations assess different qualities of how well a model is performing, and both are needed to
establish the model's credibility within the client and scientific community. Performance evaluations allow
us to decide how well the model simulates the temporal and spatial features imbedded in the observations.
Performance evaluations employ large spatial/temporal scale data sets (e.g., national data sets) and allow the
determination of the relative performance of a model vis-a-vis alternative models by comparing end-to-end
model calculations against full-scale data sets. Diagnostic evaluations allow determination of model
capability to simulate individual processes that affect the results (e.g., droplet fall velocity using small-scale
data sets such as those from special field experiments, wind tunnels, or other laboratory equipment). When model
formulation and model inputs are precise, diagnostic evaluations allow us to decide if we get the right answer
for the right reason, and usually employ smaller spatial/temporal scale data sets (e.g., field studies). Although
different in analysis scale and technique, these two evaluation types will require the same basic protocol,
namely, data collection and organization, analysis development and application, and the dissemination of
results to both client and scientific communities.
In the past, the emphasis of the statistical evaluation comparisons has been on the "intended use."
For instance, one of the uses for modeling results is to estimate the highest concentration values to be
expected over a 5-year period, resulting from the operation of a proposed new power plant. Other statistical
measures have also been employed to compare the concentration (or dose) values of "intended use," such as
number of values within a factor of two, linear least-square fits to scatter plots of observed and predicted
values, and normalized mean-squared errors of observed and predicted values. Implicit in such statistical
comparisons is an assumption that the predicted and observed distributions of concentration values are from
the same population, which may not be a well-founded assumption. Work is underway to develop a new
generation of evaluation metrics: ones that diagnostically probe a model's chemical and thermodynamic
processing and ones that take into account the statistical differences (in statistical deconvolutions and error
distributions) between model predictions and observations. As a result, a shift in philosophy is occurring as
to how models of environmental processes can be acceptably evaluated. Most models provide estimates of
the first moment of conditions to be expected for each ensemble (e.g., average time-space variation of the
meteorological conditions, average time-space variation of the surface-level concentration values). The key
to the next-generation evaluation metrics is that they will no longer assume that the modeled and observed
values come from the same statistical population of values. They will assume that they "share" certain
fundamental properties, but could be inherently different. A next step is to develop spatio-temporal analysis
evaluation procedures that compare the modeled and observed spatial and temporal patterns of dispersing
material. Concentration patterns are known to contain correlation structures in time and space that could be
tested. For instance, the time series of ozone and particulate observations contains fluctuations occurring on
many different time scales. Since these observations are taken at discrete time intervals, the sampling interval
limits the highest frequencies that can be observed, and the sampling duration limits the lowest frequencies
that can be observed. Various authors have illustrated the use of spectral decomposition of a time series of
ozone concentration values to investigate model performance at selected time scales of interest. Research is
being conducted to explore other methods for the statistical comparison of modeled and observed spatial and
temporal behavior of pollutant concentrations.
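Two of the operational statistics mentioned above, the fraction of predictions within a factor of two of the observations (FAC2) and the normalized mean squared error (NMSE), can be computed directly from paired values; the hourly ozone values below are hypothetical.

```python
import numpy as np

def fac2(obs, mod):
    """Fraction of pairs with modeled within a factor of 2 of observed."""
    ratio = mod / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

def nmse(obs, mod):
    """Normalized mean squared error: mean((o-m)^2) / (mean(o)*mean(m))."""
    return np.mean((obs - mod)**2) / (np.mean(obs) * np.mean(mod))

# Hypothetical paired hourly ozone values (ppb).
obs = np.array([45.0, 60.0, 80.0, 95.0, 70.0, 50.0])
mod = np.array([50.0, 55.0, 60.0, 110.0, 72.0, 20.0])
print(f"FAC2 = {fac2(obs, mod):.2f}, NMSE = {nmse(obs, mod):.3f}")
```

Both statistics implicitly pair predictions and observations point by point in space and time, which is exactly the assumption the next-generation, ensemble-aware metrics discussed above seek to relax.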
Model Applications
To date, air quality models have been used primarily in the retrospective mode, to simulate historical
episodes and to approve or deny a new source permit or an emissions management plan to meet and maintain
the relevant NAAQS. From the discussion above, it is clear that models cannot be relied upon to predict the
absolute concentration values at a given time and location. Not only are the model formulations less than
perfect, but the input data for the models are also uncertain. Therefore, models are more reliable when used in a
relative rather than in absolute sense. When the air quality models are used in the regulatory framework, it is
important to consider probabilistic-based decision-making. Hence, outputs from deterministic models need to
be transformed into the probabilistic form so the model results can be used more confidently in making
emissions management decisions. In other words, policy makers need to see and understand the relative
efficacy of various emission control strategies in meeting the NAAQS and the probability of the success of
the adopted control option in maintaining the NAAQS over a region. Such integrated observational-modeling
techniques are emerging now to help address the new ozone and PM standards in the United States.
Future Needs
In the future, we see several areas for air quality model research. On large scales, these include
addressing modeling issues associated with cross-boundary transport at hemispheric and global scales; on
fine scales, they include the role of models in driving human exposure assessment, especially in highly
populated urban areas. Research is underway to determine the limits of Eulerian grid models in providing
operational fine-scale estimates of concentration values. Then, the goal is to 'add' to this solution a
stochastic component that represents the sub-grid variability in the predicted concentration values to enable
the assessment of population exposure to various pollutants at the neighborhood scale. Also, attention to
modeling emissions from natural and prescribed fires, from wind-blown dust, and from marine sources such
as sea salt is needed. Further, some classes of "emissions" should be handled within the chemical-transport
model, because some compounds may be either emitted or deposited (bi-directional flux). Emissions and dry
deposition are currently treated separately in air quality models, but future models should consider bi-
directional fluxes of compounds such as ammonia, mercury, and trace gases in a holistic manner.
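The idea of 'adding' a stochastic sub-grid component to a grid-model concentration, as described above, can be sketched by sampling a distribution centered on the grid-cell mean; the lognormal form, its spread, and the exceedance threshold below are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def subgrid_samples(grid_mean, gsd, n=10000):
    """Sample sub-grid concentrations around a grid-cell mean.

    A lognormal with geometric standard deviation `gsd` is an
    ASSUMED form for the unresolved variability; the distribution
    and its spread would in practice come from fine-scale studies.
    Parameterized so the arithmetic mean equals the grid-cell mean.
    """
    sigma = np.log(gsd)
    mu = np.log(grid_mean) - 0.5 * sigma**2   # preserve the mean
    return rng.lognormal(mu, sigma, n)

# Grid model predicts 40 ppb in a cell; sample neighborhood-scale
# values and estimate the chance of exceeding a 60 ppb threshold.
samples = subgrid_samples(40.0, gsd=1.5)
p_exceed = np.mean(samples > 60.0)
print(f"P(C > 60 ppb) ~ {p_exceed:.2f}")
```

The deterministic grid value alone would report zero exceedance probability for any threshold above the mean, which is why a stochastic sub-grid component matters for neighborhood-scale exposure assessment.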
Work is underway to apply numerical models to routinely forecast air quality over the United States.
To this end, ensemble modeling might be needed to improve the quality of the predictions. These efforts, in
turn, would help us in developing the next generation of coupled meteorological-chemical transport models.
It is important to realize that the new model evaluation metrics will require dense field data sets that are of
sufficient length in time to support spectral decomposition analyses, or sufficient extent to define samples
from ensembles for analysis. To this end, organizing and making publicly available field data from past
model evaluation exercises is important, so that research can be conducted towards best use of these often
episodic data sets, in light of the data needs of the new model evaluation procedures.

TECHNICAL REPORT DATA
1. REPORT NO.
2.
3. RECIPIENT'S ACCESSION NO.
4 . TITLE AND SUBTITLE
Past, Present, and Future Air Quality Modeling and
Its Applications in the United States
5. REPORT DATE
6.PERFORMING ORGANIZATION CODE
7. AUTHOR(S)
S.T. Rao, J. Irwin, K. Schere, T. Pierce, R. Dennis, and J. Ching
8. PERFORMING ORGANIZATION REPORT NO.
9. PERFORMING ORGANIZATION NAME AND ADDRESS
National Exposure Research Laboratory - RTP, NC
Office of Research and Development
U.S. Environmental Protection Agency
Research Triangle Park, NC 27711
10.PROGRAM ELEMENT NO.
11. CONTRACT/GRANT NO.
12. SPONSORING AGENCY NAME AND ADDRESS
Same as 9.
13.TYPE OF REPORT AND PERIOD COVERED
14. SPONSORING AGENCY CODE
15. SUPPLEMENTARY NOTES
16. ABSTRACT
Since the inception of the Clean Air Act (CAA) in 1969, atmospheric models have been used to assess source-receptor
relationships for sulfur dioxide (SO2), CO, and total suspended particulate matter (TSP) in urban areas. The focus
through the 1970's was on Gaussian dispersion models for non-reactive pollutants. The 1977 Amendments to the
CAA mandated the use of dispersion models for assessing compliance with the relevant National Ambient Air Quality
Standards (NAAQS) when new sources of pollution are permitted and for prevention of significant deterioration. In the
1980's, the focus shifted to secondary pollutants (e.g., ozone and acid rain), which led to the development of grid-based
photochemical models to better understand urban- and regional-scale pollution. In the 1990's, attention turned to the
development of one-atmosphere models to deal with multiple pollutants. The new NAAQS for ozone and fine particulate
matter (PM2.5) that were promulgated in 1997 call for the use of one-atmosphere models in designing multi-pollutant
emission control strategies. In the 2000's, there is considerable interest in the development of integrated airshed-watershed
models to properly assess the effects of atmospheric pollution on sensitive ecosystems. Air quality models can help improve
our understanding of the transport and fate of pollutants, and are essential tools for designing meaningful and effective
emission control strategies. Future applications of air quality models will be towards the prediction and improved
understanding of human exposure, especially in urban areas, and of intercontinental, cross-oceanic, and hemispheric air
pollutant transport.
17. KEY WORDS AND DOCUMENT ANALYSIS
a. DESCRIPTORS
b. IDENTIFIERS/OPEN ENDED TERMS
c. COSATI



18. DISTRIBUTION STATEMENT
19. SECURITY CLASS (This Report)
Unclassified
20. SECURITY CLASS (This Page)
Unclassified
21. NO. OF PAGES
22. PRICE
