EPA
United States
Environmental Protection
Agency
Environmental Sciences Research
Laboratory
Research Triangle Park NC 27711
EPA-600/9-83-014
August 1983
Research and Development
Proceedings of the
Empirical Kinetic
Modeling Approach
(EKMA) Validation
Workshop
-------
EPA-600/9-83-014
PROCEEDINGS OF THE
EMPIRICAL KINETIC MODELING APPROACH
(EKMA) VALIDATION WORKSHOP
Research Triangle Park, NC
December 15-16, 1981
Edited by
Basil Dimitriades, Director
and
Marcia Dodge, Research Chemist
Atmospheric Chemistry and Physics Division
Environmental Sciences Research Laboratory
U.S. Environmental Protection Agency
ENVIRONMENTAL SCIENCES RESEARCH LABORATORY
OFFICE OF RESEARCH AND DEVELOPMENT
U.S. ENVIRONMENTAL PROTECTION AGENCY
RESEARCH TRIANGLE PARK, NORTH CAROLINA 27711
-------
NOTICE
This document has been reviewed in accordance with U.S. Environmental
Protection Agency policy and approved for publication. Views expressed by
non-EPA speakers or discussers do not necessarily reflect the views or
policies of the Environmental Protection Agency.
Mention of trade names or commercial products does not constitute
endorsement or recommendation for use.
ii
-------
PREFACE
The Empirical Kinetic Modeling Approach (EKMA) is the most recent and
presumably the most advanced air quality guideline issued by the U.S.
Environmental Protection Agency for use in preparing State Implementation
Plans. However, researchers who have used EKMA over the past few years have
pointed out two important weaknesses in the approach.
The approach's first weakness is uncertainty over whether the
chemical mechanism used in EKMA is the most sound one; several mechanisms are
available that could be used. The other major problem with the approach is
that it has not been verified adequately. In particular, results obtained from
the approach have not been compared to real atmospheric data. Several methods
for the verification of EKMA are also available.
The objective of this workshop was to discuss and rate the methods
available for verifying EKMA. The conclusions resulting from the workshop
should aid in the development of comprehensive future research programs to
address these problems. These proceedings include the text of presentations
made at the workshop, follow-up discussions, and the conclusions and
recommendations arrived at by a wrap-up committee consisting of Drs. Alan
Eschenroeder, Alan Lloyd, Gregory McRae, and Richard Derwent.
iii
-------
ABSTRACT
The U.S. Environmental Protection Agency currently expects to continue
recommending use of the Empirical Kinetic Modeling Approach (EKMA) for the
preparation of State Implementation Plans aimed at achieving the National
Ambient Air Quality Standard for ozone. In view of this fact, efforts to
evaluate and document the performance of EKMA as an ozone-predictive model
must be continued. To best guide these future efforts, the workshop
documented in these proceedings was organized.
These proceedings contain the ten presentations made at the workshop by
some of the world's foremost experts in the fields of air quality
measurement and monitoring. Also included are discussions of the
presentations, informal presentations made at the workshop, a concluding
discussion of the issues, and the recommendations made by a wrap-up committee
formed to review and analyze the workshop proceedings. Among the issues
discussed were the differences between four kinetic mechanisms now in
existence and between six EKMA field-evaluation methods now available.
iv
-------
CONTENTS
Preface iii
Abstract iv
Figures vii
Tables xi
Abbreviations and Symbols xii
1. Trend Analysis of Historical Emissions and Air Quality Data,
John Trijonis 1
2. Evaluation of the Empirical Kinetic Modeling Approach As
Predictor of Maximum Ozone, J. Raul Martinez and Christopher
Maxwell 42
3. Predicting Ozone Frequency Distributions from Ozone Isopleth
Diagrams and Air Quality Data, Harvey E. Jeffries and Glenn
T. Johnson 109
4. Simplified Trajectory Analysis Approach for Evaluating
OZIPP/EKMA, Gerald L. Gipson and Edwin L. Meyer 162
5. Application of EKMA to the Houston Area, Harry M. Walker .... 200
6. A Comparison of the Empirical Kinetic Modeling Approach Models
with Air Quality Simulation Models, Gary Z. Whitten 215
7. Deriving EKMA Isopleths from Experimental Data: The Los Angeles
Captive Air Study, Daniel Grosjean, Richard Countess, Kochy
Fung, Kumaraswamy Ganesan, Alan Lloyd, and Fred Lurmann 265
8. Description and Comparison of Available Chemical Mechanisms,
Gary Z. Whitten and James P. Killus 285
9. Effects of Chemistry and Meteorology on Ozone-Control
Calculations Using Simple Trajectory Models in the EKMA
Procedure, Harvey E. Jeffries, K.J. Sexton, and C.N. Salmi 331
10. Effect of Radical Initiators on Empirical Kinetic Modeling
Approach Predictions and Validation, William P.L. Carter .... 339
-------
Comments, Herbert McKee 377
Workshop Commentary, William P.L. Carter 383
Final Discussion 387
Follow-Up Committee Recommendations 406
Introduction 406
Reviews of Papers Presented at Workshop 409
Response to Agency Research and Regulation Planning Needs 425
Committee Recommendations Based on Workshop Inputs 444
An Action Plan for the Agency 452
References 456
vi
-------
FIGURES
Number Page
1-1 NMHC emission trends for the Houston study area 9
1-2 NOX emission trends for the Houston area 11
1-3 Historical 03 trends at Mae Drive and Aldine 13
1-4 Predicted versus actual 03 trends at Mae Drive 15
1-5 Historical RHC emission trends in the Los Angeles basin 18
1-6 Historical NOX emission trends in the Los Angeles basin 19
1-7 Approximate source areas affecting 03 at downtown Los Angeles
and San Bernardino 21
1-8 Comparison of RHC emissions trends with ambient NMHC trends
for various source areas in the Los Angeles region 23
1-9 Comparison of NOX emission trends with ambient NOX trends
for various source areas in the Los Angeles region 24
1-10 Historical 03 trends at the Los Angeles study sites 25
1-11 Sensitivity of predicted 03 trends to the range in the
NMHC:NOX ratio at all four study sites 28
1-12 Sensitivity of predicted O3 trends at Anaheim to various
EKMA simulation conditions 30
1-13 Results of the validation studies for the 95th percentile of
daily maximum O3 at Azusa 31
1-14 Results of the validation studies for yearly maximum 03
at Azusa 32
1-15 Results of the validation studies for the 95th percentile of
daily maximum 03 at downtown Los Angeles 33
1-16 Results of the validation studies for yearly maximum 03
at downtown Los Angeles 34
2-1 Evaluation methodology used in the study 45
2-2 Evaluation region of the NMOC-NOX plane for St. Louis 47
2-3 For St. Louis, standard EKMA overestimates reference 03 50
2-4 Accuracy probability describes performance of standard EKMA
for St. Louis 51
2-5 Accuracy regions define NMOC and NOX values associated with
standard-EKMA predictions for St. Louis 53
2-6 City-specific EKMA estimates for St. Louis tend to be more
accurate 55
2-7 Region of high accuracy probability is larger for city-specific
than for standard EKMA for St. Louis 56
2-8 Standard EKMA produced substantial overprediction for Houston
(HAOS) 58
vii
-------
2-9 Probability of an accurate prediction is less than 0.6 for
standard EKMA for HAOS 60
2-10 Region of overprediction includes most of the evaluation
region for standard EKMA for HAOS 61
2-11 City-specific EKMA underpredicted substantially for HAOS 63
2-12 Accurate estimates have a low probability for city-specific
EKMA for HAOS 64
2-13 Standard EKMA for Philadelphia tends to overpredict 66
2-14 Probability of accurate predictions is moderately high for
standard EKMA for Philadelphia 68
2-15 High NMOC concentrations yield overpredictions for standard
EKMA for Philadelphia 69
2-16 City-specific and standard-EKMA estimates for Philadelphia
were very similar 70
2-17 Standard-EKMA estimates for Los Angeles fell in each region in
equal proportions 72
2-18 Under, over, and accurate predictions have similar
probabilities for standard EKMA for Los Angeles 73
2-19 City-specific EKMA for Los Angeles overpredicts substantially. . . 76
2-20 Accurate predictions are most probable for 0.1 < Z < 0.3 for
city-specific EKMA for Los Angeles 77
2-21 Most NMOC and NOX lead to overprediction for city-specific
EKMA for Los Angeles 78
2-22 Standard EKMA with CBII overpredicts more than Dodge 82
2-23 Standard-EKMA estimates for DM are similar to CBII predictions . . 83
2-24 CBII/standard-EKMA estimates have a small high-accuracy region
compared to the Dodge model's 85
2-25 DM/standard EKMA overpredicts over most of the evaluation
region 87
2-26 Accuracy regions for the three models are very different 88
2-27 CBII/city-specific EKMA estimates have improved accuracy 89
2-28 Accuracy of DM/city-specific EKMA estimates shows slight
improvement 91
2-29 Regions where accurate estimates are at least 70% probable
do not overlap 93
2-30 Point-estimate method 95
2-31 Monte Carlo method 98
3-1 Projection of an O3-precursor surface into the L-D plane 113
3-2 Standard Dodge isopleth surface produced by OZIPP computer
program 118
3-3 Fitted standard Dodge isopleth surface produced by mathematical
equation 119
3-4 The regional air monitoring stations network 122
3-5 Joint NOX/NMHC distribution for June to September 124
3-6 Fitted standard Dodge isopleth surface for St. Louis produced by
mathematical equation 126
3-7 HC and NOX precursor pairs selected by the procedure as having
maximum 03 formation potential using the Dodge isopleth surface. . 127
viii
-------
3-8 Ambient 03 distribution and predicted distribution using Dodge
fitted isopleth surface 128
3-9 Cumulative ambient 03 distribution and cumulative predicted
distribution using fitted Dodge isopleth surface 129
3-10 Predicted 03 distributions using fitted Dodge isopleth surface
and joint precursor distributions 134
3-11 Cumulative 03 distributions as in Figure 3-10. 135
3-12 HC and NOX precursor pairs selected by the procedure as having
maximum 03 formation potential for 80% NMHC reduction 137
3-13 Fitted Carbon Bond II isopleth surface for St. Louis . . 138
3-14 Ambient 03 distribution and predicted distribution using fitted
carbon-bond isopleth surface 141
3-15 Cumulative ambient O3 distribution and cumulative predicted
distribution using carbon-bond isopleth surface 142
3-16 Predicted 03 distributions using carbon-bond surface and joint
precursor distributions. 144
3-17 Cumulative 03 distributions as above 145
4-1 Conceptual view of trajectory model in OZIPP 166
4-2 RAMS station locations 169
4-3 Air-parcel trajectory for July 19 test case 173
4-4 Air-parcel trajectory for July 19 test case demonstrating how
fresh precursor emissions are considered 176
4-5 Comparison of observed and predicted peak hourly 03
concentrations ...... 180
4A-1 Observed 03 (ppm) for Philadelphia EKMA analysis 1982 SIP 198
4A-2 Level III EKMA U.S. air quality (St. Louis RAPS) 199
5-1 114 predicted days on standard 03 isopleth diagram 203
5-2 61 decisive days on standard O3 isopleth diagram 206
5-3 EKMA 03 predictions versus actual observed O3 values 208
6-1 Emission-oriented Level II EKMA isopleth diagram based on a
26 June 1974 trajectory in Los Angeles 228
6-2 Isopleth for 26 July 1973: Trajectory 1—HC:NOX ratio = 100:1 . . 234
6-3 Isopleth for 26 July 1973: Trajectory 1—HC:NOX ratio = 100:1 . . 235
6-4 Comparison of NOX, PAR, ETH, BZA, PAN, and O3 for the SAI
trajectory and CBM/OZIPP models for 29 July 1977 238
7-1 Light-molecular-weight HC's 274
7-2 Heavier-molecular-weight HC's 275
10-1 Plots of ln([n-butane]/[propene]) against time for the evacuable
chamber run EC-623 350
10-2 OH radical concentrations as a function of irradiation time. ... 351
10-3 OH radical concentrations as a function of irradiation time. . . . 352
10-4 Plot of (radical source)/k1 against the average NO2 concentration. 356
10-5 Plots of ln([propane]/[propene]) against irradiation time for
evacuable chamber run ..... 359
10-6 Calculated 03 isopleths at 0.12 ppm and 0.3 ppm 364
ix
-------
10-7 Predicted present HC control required to reduce O3 from 0.3 ppm
to 0.12 ppm for a range of HC:NOX ratios 365
10-8 Calculated 03 isopleths at 0.12 ppm and 0.3 ppm 368
10-9 Predicted present HC control required to reduce 03 from 0.3 ppm
to 0.12 ppm for a range of HC:NOX ratios 370
1 Model output probability distribution 433
2 Framework in which sensitivity/uncertainty is linked
with model comparison through the probability density
distribution of the predicted ΔO3:Δprecursor ratio 441
-------
TABLES
Number
2-1 Number of Standard-EKMA 03 Estimates in Each Accuracy Region. ... 84
2-2 Number of City-Specific EKMA 03 Estimates in Each Accuracy
Region 90
3-1 Parameters for 03 Isopleth Surface Equation 117
3-2 Regional Maximum O3 120
3-3 Precursor Pairs Selected From Morning Time Periods 131
3-4 Precursor Pairs Selected From 25 Stations 131
3-5 Effect of Parameter Variation on Sums of Squares of Differences
Between Observations and Predictions 131
3-6 Control Predictions (Standard EKMA) Cumulative Percentage of
Regional Daily Maxima 133
3-7 Cumulative Percentage of Regional Daily Maxima Control Predictions
(Carbon Bond II Mechanism) 140
3-8 Maximum O3 Precursor Pairs 146
4-1 Model Test Cases 171
4-2 Estimates in 03 Aloft 178
4-3 Range of Sensitivity Tests for 10 Test Cases 182
5-1 Checking Predicted Days with High Adverse Factors 205
5-2 03 Versus Mixing Height 205
6-1 Level III EKMA Relative to SAI Airshed Model 219
6-2 Level II EKMA Relative to SAI Airshed Model 221
6-3 Concentration at 8:00 a.m. After 22 h of Modified EKMA
Simulations 230
6-4 Values for Maximum O3 Versus Values for Initial Conditions 232
6-5 Control-Strategy Comparison for San Francisco between the LIRAQ,
EKMA/CBM, City-Specific EKMA, and Level III EKMA Models 236
6-6 Simulated Maximum 03 241
7-1 Example of Light- and Heavier-Molecular-Weight HC Composition in
Ambient Los Angeles Air 276
7-2 Total HC's by Class 277
7-3 Example of Initial Carbonyl Concentrations in Los Angeles Ambient
Air - 9/30/81 277
7-4 Preliminary O3 and NOX Data for Large Outdoor Chamber 279
xi
-------
10-1 Dependence of Radical Source on Chamber 358
1 Summary of Areas and Questions for Additional Research 448
2 Summary of Meteorological Measurements Needed for Model
Evaluation 450
3 Summary of Needed Chemical Measurements 451
xii
-------
LIST OF ABBREVIATIONS AND SYMBOLS
ABBREVIATIONS
AQSM    Air Quality Simulation Model
BOM     Bureau of Mines
CDT     Central Daylight Time
CIT     California Institute of Technology
CRC     Coordinating Research Council
EKMA    Empirical Kinetic Modeling Approach
ERT     Environmental Research and Technology, Inc.
FID     flame ionization detector
GC      gas chromatography
HAOS    Houston Area Oxidant Study
HPLC    high-performance liquid chromatography
HSD     Hecht, Seinfeld, and Dodge mechanism
LARPP   Los Angeles Reactive Pollutant Program
LDT     local daylight time
LIRAQ   Livermore Regional Air Quality model
LST     local standard time
LT      local time
MS      mass spectrometry
NAAQS   National Ambient Air Quality Standard
NTIS    National Technical Information Service
OAQPS   Office of Air Quality Planning and Standards
OZIPP   Ozone Isopleth Plotting Package
RAMS    Regional Air Monitoring Stations
RAPS    Regional Air Pollution Study
RH      relative humidity
SAI     Systems Applications, Inc.
SCAQMD  South Coast Air Quality Management District
SIP     State Implementation Plan
TACB    Texas Air Control Board
UASN    Upper Air Sounding Network
UNC     University of North Carolina
USC     University of Southern California
UV      ultraviolet
xiii
-------
SYMBOLS
CO      carbon monoxide
CH4     methane
DNPH    2,4-dinitrophenylhydrazine
HC      hydrocarbon
H2O2    hydrogen peroxide
HNO3    nitric acid
HO2     hydroperoxy radical
HONO    nitrous acid
MBTH    2-benzothiazolone hydrazone hydrochloride
NMHC    nonmethane hydrocarbon
NH3     ammonia
NH4NO3  ammonium nitrate
NMOC    nonmethane organic compounds
NMVOC   nonmethane volatile organic compounds
NO      nitric oxide
NO2     nitrogen dioxide
NOX     oxides of nitrogen
O3      ozone
OH      hydroxyl
PAN     peroxyacetylnitrate
RHC     reactive hydrocarbon
RO2     peroxy radical
SO2     sulfur dioxide
THC     total hydrocarbon
TNMHC   total nonmethane hydrocarbon
VOC     volatile organic compound
xiv
-------
1. TREND ANALYSIS OF HISTORICAL EMISSIONS AND AIR QUALITY DATA
John Trijonis
Santa Fe Research Corporation
228 Griffin Street
Santa Fe, New Mexico 87501
ABSTRACT
Historical trend data for the Houston and Los Angeles regions are used to
check the EKMA isopleth method of relating ozone concentrations to precursor
control. Historical trends in the ozone precursors (nonmethane hydrocarbons
and oxides of nitrogen) are estimated from detailed studies of emissions and
ambient data; these precursor trends are entered into the Empirical Kinetic
Modeling Approach (EKMA) method to predict historical ozone trends, which are
then compared to actual ozone trends to test the EKMA method. The
investigation also includes studies of the errors in the historical data bases as well
as sensitivity analyses regarding EKMA simulation conditions and precursor
ratios. The historical trend analyses are useful not only as a means to
validate EKMA, but also as a means to understand the air quality effects
produced by source growth and source control in Houston and Los Angeles.
The Houston studies were conducted on a yearly basis from 1974 to 1978
for two ozone monitoring sites. Because historical emission changes in
Houston were rather small from 1974 to 1978, a definitive test of the EKMA
method cannot be performed with Houston data.
The Los Angeles studies were conducted on a tri-yearly basis from 1964 to
1978 for four ozone monitoring sites. The EKMA model performs fairly well in
the Los Angeles validation analysis, although there is a general tendency for
predicted ozone trends to underestimate decreases in actual ozone trends.
This disagreement apparently cannot be accounted for by error in estimated
precursor trends, by meteorological fluctuations in actual ozone trends, or by
sensitivity to varied EKMA simulation conditions. The reason for the
disagreement appears to be error in the choice of a precursor ratio. Further
research should be conducted to test the EKMA chemical-kinetic mechanism and
to check the equivalency between ambient and EKMA precursor ratios.
1
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
INTRODUCTION
The Empirical Kinetic Modeling Approach (EKMA) has recently been proposed
as a method for evaluating ozone (03) control strategies. Before the EKMA
method is accepted as a reliable technique for control strategy analysis, it
should be subjected to validation studies. This paper describes validation
studies of the EKMA method using historical trend data for the Houston (Davis
and Trijonis, 1981) and Los Angeles (Trijonis and Mortimer, 1981) areas.
Specifically, historical trends in the 03 precursors [nonmethane hydrocarbons
(NMHC) and oxides of nitrogen (NOX)] are estimated from emissions and ambient
data; these precursor trends are entered into the EKMA method to predict
historical 03 trends, and the predicted 03 trends are then compared to actual
03 trends as a test of the EKMA method.
Historical trend analysis may be the most appropriate and useful way of
validating the EKMA method because, in historical trend analysis, the method
is tested under conditions that are entirely parallel to the intended use of
EKMA in evaluating O3 control strategies. Historical trend analysis also
provides a very convenient format for illustrating the sensitivity of the EKMA
method to various inputs, such as the NMHC:NOX ratio and the choice of
specific photochemical simulation conditions. The historical trend data,
however, are of interest not only with respect to testing EKMA but also in
and of themselves. By conducting a detailed analysis of precursor
emission trends, we can show how emissions for individual source categories
have changed due to controls and source growth, and how total emissions have
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
changed as a net response to trends for individual source categories. By
examining trends in ambient precursor data, we can check the emission trend
estimates and verify that control programs and source growth have had the
anticipated effects. By analyzing historical 03 trends, we can investigate
whether they make sense in light of precursor changes and meteorological
fluctuations. Thus, our studies are useful not only as a means to validate
EKMA, but also as a means to understand the air quality effects being produced
by source growth and control programs in Houston and Los Angeles.
STUDY AREAS
There are two basic reasons why Houston was selected as a study region.
First, as evidenced by the Houston Area Oxidant Study (HAOS) sponsored by the
Chamber of Commerce and by several Houston programs sponsored by EPA, a great
deal of interest currently exists in the Houston 03 problem. Second,
substantial changes in historical precursor emissions are required for an
adequate test of the EKMA method; previous investigations (Tannahill, 1976;
Radian, 1977) suggested that large reductions in NMHC emissions, on the order
of 20 to 35%, have occurred in the Houston area after 1974. As will be
explained later, our detailed analyses of emission and ambient trend data
indicate that NMHC emissions changed very little in Houston from 1974 to 1978.
This unexpected finding not only contradicted the results of the previous
investigations cited above but also implied that definitive tests of the EKMA
method could not be conducted with Houston data.
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
The Los Angeles region offered the best opportunity for EKMA validation
studies based on historical trend data. Only Los Angeles could provide nearly
two decades of high-quality, spatially resolved, long-term trend data for
ambient concentrations of 03, hydrocarbons (HC's), and NOX. Only Los Angeles
could provide independent data sets regarding the NMHC:NOX ratios from several
monitoring programs. Los Angeles was also unique in regard to the excellence
of data for emission trend analysis; the Los Angeles area had unusually good
information regarding existing emission levels, source growth rates, and
source control levels. Furthermore, Los Angeles met the requirement of
historical changes in precursor levels: we know that Los Angeles has
undergone significant decreases in HC's and increases in NOX since the middle
1960's (Trijonis et al., 1978; Trijonis, 1980). Los Angeles, of course, was
also an intriguing study area because it has a very severe photochemical smog
problem and because there is a controversy regarding the recent lack of air
quality improvement.
STUDY DESIGN
The Houston studies were restricted to the years from 1974 to 1978
because of constraints imposed by the availability of ambient O3 data. The
year 1974 served as a base year for the analysis, with predictions made for
each of the subsequent years. The Los Angeles study covered the time period
from 1964 to 1978. To provide more robust data sets for the Los Angeles
analysis, 3-year averages (1964-1966, 1967-1969, 1970-1972, 1973-1975, and
1976-1978) of air quality data were used. The years 1964-1966 served as a
4
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
base period, with predictions made for each of the subsequent 3-year
periods.
The EKMA validation studies were conducted for two 03 monitoring sites
in Houston (Mae Drive and Aldine) and four O3 monitoring sites in Los
Angeles (Azusa, downtown Los Angeles, Anaheim, and San Bernardino). For each
of the six study sites, four types of data were used:
• Information on the base-year 6 to 9 a.m. ambient NMHC:NOX ratio
for the source area affecting the site.
• Estimates of historical emission trends for the source area
affecting the" site.
• Ambient precursor trend data for the source area.
• 03 trend data for the site.
The source areas for each study site were determined by analyzing wind
transport patterns (wind roses, pollution roses, wind streamlines, and/or wind
trajectories). The base-year 6 to 9 a.m. NMHC:NOX ratios for Houston sites
were estimated from ambient data collected by the Texas Air Control Board
(TACB). The base-period 6 to 9 a.m. NMHC:NOX ratios for the Los Angeles
locations were determined by examining ambient data from several monitoring
programs of the early 1970's and by extrapolating the results back to the base
period (1964-1966) by historical precursor trends. For both Houston and Los
Angeles, historical trends in NMHC and NOX emissions were documented through a
new and comprehensive analysis of growth and control schedules for all major
source categories. In fact, a sizeable fraction of the entire project
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
resources was devoted just to the emission trend analysis. Ambient precursor
trends were examined in detail for both Houston and Los Angeles to verify the
emission trend results. The final input to the study, ambient 03 data,
consisted of yearly or tri-yearly 03 air quality indices (e.g., yearly
1-h 03 maxima or 95th percentile of daily 1-h O3 maxima).
The procedure for the study was to locate the base-period point on the
EKMA diagram using the base-period NMHC:NOX ratio and the base-period value of
the 03 air quality index. The historical precursor changes (based on
emissions and/or ambient precursor trends) were then used to locate points on
the EKMA diagram for subsequent years. The O3 values predicted by EKMA for
the subsequent years were then compared to actual O3 levels as a test of the
EKMA model.
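This stepwise procedure can also be stated algorithmically. The sketch below is illustrative only: ozone_isopleth is a hypothetical stand-in for interpolation of an OZIPP-generated isopleth diagram (the power-law surface inside it is a toy placeholder, not the EKMA chemistry), and the example inputs are assumptions except where noted.

```python
# Illustrative sketch of the trend-validation procedure described above.
# ozone_isopleth() stands in for interpolating an OZIPP-generated isopleth
# diagram; the power-law surface below is a toy placeholder, NOT the
# actual EKMA chemistry.

def ozone_isopleth(nmhc, nox):
    """Toy surface: maximum O3 (ppm) from initial NMHC (ppmC) and NOx (ppm)."""
    return 0.03 * nmhc ** 0.6 * nox ** 0.2

def predict_o3_trend(base_o3, base_ratio, precursor_changes):
    """base_o3: base-period O3 air quality index (ppm).
    base_ratio: base-period 6-9 a.m. NMHC:NOx ratio.
    precursor_changes: {year: (nmhc_factor, nox_factor)}, each factor
    giving the precursor level relative to the base period (1.0 = no change).
    """
    # Step 1: locate the base-period point on the diagram by moving out
    # along the line NMHC = base_ratio * NOx until the diagram reproduces
    # the observed base-period O3 index.
    nox = 1e-4
    while ozone_isopleth(base_ratio * nox, nox) < base_o3:
        nox += 1e-4
    nmhc = base_ratio * nox

    # Step 2: shift the point by the historical precursor changes and read
    # off the predicted O3 for each subsequent year (or 3-year period).
    return {year: ozone_isopleth(nmhc * f_hc, nox * f_nox)
            for year, (f_hc, f_nox) in precursor_changes.items()}

# Example call. The base index and ratio are made up; the 1978 factors
# echo the net Houston changes reported later (NMHC -7%, NOx +18%).
print(predict_o3_trend(0.25, 17.0, {1978: (0.93, 1.18)}))
```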
In the analysis, we included error bounds for both the predicted and
actual 03 trends. The error bounds on the predicted 03 trends represent the
uncertainties in our emission trend estimates and the uncertainties in the
base-year ambient NMHC:NOX ratio. The error bounds on the actual 03 trends
represent the statistical variation in the 03 trend index caused by yearly
weather fluctuations. Our studies included special analyses that allowed us
to quantify these various sources of error. The error bounds on both the
predicted and actual 03 trends were critical in interpreting the results of
the validation tests.
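The paper does not state how the statistical error bounds were quantified. One plausible illustration, offered purely as an assumption rather than as the study's method, is to bootstrap the seasonal sample of daily maxima underlying a percentile index:

```python
import numpy as np

rng = np.random.default_rng(0)

def percentile_index_spread(daily_max_o3, q=95, n_boot=2000):
    """Bootstrap standard deviation of a seasonal O3 percentile index.

    This is an assumed illustration of how statistical error bounds on an
    index such as the 95th percentile of daily 1-h maxima could be
    quantified; it is not necessarily the method used in the study.
    """
    x = np.asarray(daily_max_o3, dtype=float)
    boot = [np.percentile(rng.choice(x, size=x.size, replace=True), q)
            for _ in range(n_boot)]
    return float(np.std(boot))
```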
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
As part of our studies, we also investigated the sensitivity of the
results to the specific EKMA simulation conditions. The "standard" EKMA
isopleths pertain to the maximum 03 produced during 10-h irradiations under
the following conditions: diurnal sunlight intensity corresponding to
8 a.m. to 6 p.m. LDT for the summer solstice at 34°N latitude; a dilution rate
of 3%/h for the first 7 h and zero dilution thereafter; no emissions after
8 a.m.; no transported or advected O3; and an HC mix of 25% propylene and 75%
n-butane, with aldehydes at 5% of the initial NMHC. In addition to the "standard"
EKMA isopleths, we considered isopleths that included emissions after 8 a.m.
and special dilution patterns. We also considered isopleths representing 03
at fixed irradiation times (specific to the individual monitoring site) rather
than maximum 03 over the entire irradiation period.
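For reference, the "standard" conditions enumerated above can be collected in one place. The dictionary below simply restates them; the key names are illustrative and are not the actual OZIPP input format.

```python
# The "standard" EKMA simulation conditions restated as data.
# Key names are illustrative only; they are not the OZIPP input format.
STANDARD_EKMA_CONDITIONS = {
    "irradiation_hours": 10,                  # 8 a.m. to 6 p.m. LDT
    "sunlight": "diurnal, summer solstice, 34 deg N latitude",
    "dilution_rate_per_h": 0.03,              # first 7 h only
    "dilution_after_7_h": 0.0,
    "emissions_after_8am": False,
    "transported_or_advected_o3": 0.0,
    "hc_mix": {"propylene": 0.25, "n_butane": 0.75},
    "aldehydes_fraction_of_initial_nmhc": 0.05,
}
```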
HOUSTON RESULTS
Historical Precursor Trends
The most difficult and time-consuming task in our Houston study was the
compilation of historical emission trends for the 03 precursors, NMHC and NOX.
We compiled quite detailed emission trend information — year-by-year
emissions from 1974 to 1978 for 26 NMHC source categories and 15 NOX source
categories. For each source category, emission trends are governed by three
factors: uncontrolled emission levels, growth of source activity levels, and
the timing and effectiveness of controls. Each of these three factors often
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
involves considerable uncertainty. To minimize these uncertainties, we used
the most accurate and up-to-date information available from industrial
representatives, consulting firms, trade associations, and various local,
state, and federal agencies. Despite our efforts, certain information gaps
persist which lead to significant uncertainties in the emission trend
estimates for Houston.
Figure 1-1 summarizes the total NMHC emissions in our Houston study
region (Harris County, Brazoria County, and Galveston County). To simplify
this figure, we have summarized the 26 NMHC source categories according to
four major source types: the chemical industry, the petroleum industry, motor
vehicles, and other sources. Figure 1-1 shows that total emissions in the
three-county study region decreased 17% from 1974 to 1976, and increased 12%
from 1976 to 1978, so that only a slight NMHC reduction (7%) occurred from
1974 to 1978. The single major source category undergoing a significant
reduction in NMHC from 1974 to 1978 was the chemical industry which, despite
extremely rapid growth in output, managed to decrease NMHC emissions by 34%.
Although some degree of control was installed on all other major NMHC source
categories (both stationary and mobile) from 1974 to 1978, these controls were
insufficient to overcome growth rates; thus, NMHC emissions from the petroleum
industry, motor vehicles, and other sources increased slightly from 1974 to
1978. The two major reasons for the lack of significant NMHC reductions in
the Houston region from 1974 to 1978 were rapid source growth (activity
increases of 78% for the chemical industry, 24% for the petroleum industry,
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-1 appears here: NMHC emissions (tons/day) plotted against year,
1973-1978, showing total NMHC in the study area and the contributions of the
chemical industry, the petroleum industry, motor vehicles, and other sources.]
Figure 1-1. NMHC emission trends for the Houston study area.
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
30% for motor vehicles, and 16% for other sources) and the fact that most of
the stationary-source controls were already in place by 1974.
Figure 1-2 shows that the estimated NOX emissions in the three-county
region rose by 18% from 1974 to 1978. The NOX increase was a product of high
growth and the absence of NOX controls (except for automobiles). The
predominant part of the NOX rise resulted from a 43% increase in chemical
industry NOX over the 4 years. Increases in NOX from other sources from
1974 to 1978 were as follows: petroleum industry, 13%; power plants, 15%;
motor vehicles, 7%; and all other sources, 13%. The increase in NOX emissions
from the chemical and petroleum industries would have been substantially
greater were it not for a voluntary fuel conservation program that
significantly decreased fuel use per pound of product.
In addition to determining total three-county emission trends, we also
estimated emission trends for various subregions. Phenomenological analysis
of 03 and wind data indicated that Harris County should be considered as the
basic source area for 03 at the Mae Drive and Aldine study sites. The net
percentage emission changes for Harris County are similar to, but slightly
different from, those for the entire three-county region.
To check the emission trend estimates, we also examined ambient precursor
trends for 1974 to 1978. The ambient trend analysis was based on regression
lines fit to NMHC and NOX data at Mae Drive and Aldine for three air quality
indices: 6 to 9 a.m. concentrations averaged over the smog season,
10
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-2 appears here: NOX emissions (tons/day) plotted against year,
1973-1978, showing total NOX in the study area and the contributions of the
chemical industry, the petroleum industry, power plants, motor vehicles, and
other sources.]
Figure 1-2. NOX emission trends for the Houston area.
11
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
6 to 9 a.m. concentrations averaged annually, and annual mean concentrations
(all hours). The ambient data present a more pessimistic picture than our
emission trend estimates. The ambient data indicate that NMHC concentrations
increased somewhat (on the order of 7 to 23%) and NOX concentrations increased
substantially (on the order of 16 to 48%) from 1974 to 1978. It should be
noted, however, that the ambient trend data (like the emission trend data)
also involve significant uncertainties.
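As a minimal sketch of this kind of trend check (assuming ordinary least-squares fits, since the paper does not give the regression details), the percent change implied by a fitted line over the period can be computed as follows; the data values are placeholders, not the Mae Drive measurements.

```python
import numpy as np

def percent_change_from_trend(years, index_values):
    """Fit a least-squares line to a yearly air quality index and return
    the percent change implied by the fitted line over the period."""
    slope, intercept = np.polyfit(years, index_values, 1)
    start = slope * years[0] + intercept
    end = slope * years[-1] + intercept
    return 100.0 * (end - start) / start

# Placeholder values (ppmC), not the actual Mae Drive NMHC data:
years = np.array([1974.0, 1975.0, 1976.0, 1977.0, 1978.0])
smog_season_nmhc = np.array([1.10, 1.16, 1.12, 1.22, 1.25])
print(percent_change_from_trend(years, smog_season_nmhc))
```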
Historical O3 Trends
Figure 1-3 illustrates the historical 03 trends at Mae Drive and Aldine.
Two of the three air quality indices included in Figure 1-3 (the yearly second
maximum 1-h concentration and 95th percentile of daily 1-h maximum
concentration during the smog season) were used in the EKMA validation study.
Figure 1-3 shows that no overall trends are evident in the O3 data from 1974
to 1978. The only exception is the yearly second maximum at Aldine, which
displays an anomalous decrease from 1975 to 1978.
NMHC:NOX Ratio and EKMA Simulation Conditions
After examining ambient 6 to 9 a.m. NMHC:NOX ratios during the smog
season at Mae Drive and Aldine, we conclude that a median base-year ratio of
17:1 is appropriate for use in EKMA. In arriving at this ratio, we have
divided the raw data for NMHC by a factor of 0.7 to make the NMHC data (based
on methane-calibrated instruments) approximately equivalent to data based on
12
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-3 appears here: O3 concentration (ppm) plotted against year at Mae
Drive and Aldine.]
Figure 1-3. Historical O3 trends at Mae Drive and Aldine.
13
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
propane-calibrated instruments (which were used in the chamber studies that
serve as a foundation for EKMA). To represent the uncertainty in the ambient
6 to 9 a.m. NMHC:NOX ratio, we have chosen a low value of 8.5:1 and a high
value of 47:1, which are the 10th and 90th percentiles of the ambient values.
We find, however, that because of the relatively small emission changes and
because of the qualitatively similar variations of NMHC and NOX emissions from
1975 to 1978, the predicted 03 trends in Houston are very insensitive to the
initial NMHC:NOX ratio. Uncertainty in the ratio is, therefore, a minor
source of error in the Houston validation studies.
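A worked illustration of these two steps, the division by 0.7 and the percentile-based uncertainty bounds, is sketched below; the daily values are hypothetical, not the TACB data.

```python
import numpy as np

def ekma_ratio_with_bounds(nmhc_ch4_cal, nox):
    """Median and 10th/90th-percentile 6-9 a.m. NMHC:NOx ratios.

    nmhc_ch4_cal: daily 6-9 a.m. NMHC (ppmC) from methane-calibrated
    instruments; dividing by 0.7 approximates propane-calibrated values,
    as done in the paper.
    """
    nmhc = np.asarray(nmhc_ch4_cal, dtype=float) / 0.7
    ratios = nmhc / np.asarray(nox, dtype=float)
    return (np.percentile(ratios, 10),    # low bound
            np.percentile(ratios, 50),    # median ratio used in EKMA
            np.percentile(ratios, 90))    # high bound

# Hypothetical daily values, not the actual Houston measurements:
low, median, high = ekma_ratio_with_bounds(
    [0.8, 1.5, 2.4, 0.9, 3.1, 1.2, 2.0],
    [0.09, 0.10, 0.05, 0.18, 0.04, 0.07, 0.11])
```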
Standard EKMA isopleths for basinwide maximum O3 are used in the
validation study. Sensitivity analyses show that, for our particular Houston
emission changes, the results of our study are insensitive to the choice of
specific EKMA simulation conditions.
EKMA Validation
The Houston validation studies were conducted from 1974 to 1978 at Mae
Drive and from 1975 to 1978 at Aldine using two 03 air quality indices: the
95th percentile of daily maxima during the smog season, and the yearly second
maximum hourly concentration. Figure 1-4 presents an example of our results:
the tests for the two air quality indices at Mae Drive. As noted previously,
the error bars on the predicted 03 trends represent uncertainties in the
NMHC:NOX ratio and in the emission trend estimates (mostly the latter), and
14
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-4 appears here: predicted and actual O3 concentrations (ppm)
plotted against year at Mae Drive for the two air quality indices, with error
bars on both the predicted and the actual trends.]
Figure 1-4. Predicted versus actual O3 trends at Mae Drive.
15
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
the larger error bars on the actual 03 trends represent statistical variance
caused by yearly weather fluctuations.
The findings of our validation studies at Mae Drive and Aldine can be
summarized as follows: predicted 03 trends at both sites show little change
from 1974 to 1976, a slight increase from 1976 to 1977, and little change from
1977 to 1978. For both air quality indices at Mae Drive and for the 95th
percentile of daily maxima at Aldine, actual 03 trends increase moderately in
1975 and/or 1976 and then decrease somewhat in 1977 and 1978. The net result
for these three cases is that the isopleth model underpredicts O3 somewhat in
1975 and/or 1976 but yields fairly good agreement in 1977 and 1978. The
discrepancies could easily be explained by potential errors in the emission
trends and, especially, by meteorological variance in the actual 03 trends.
In the fourth case, yearly second maximum O3 at Aldine, actual 03 decreased
substantially from 1975 to 1978, producing a large discrepancy with predicted
trends. This large discrepancy might be explained by an extreme statistical
error in this 03 air quality index at Aldine.
Given the rather large error bounds in our analysis, it is difficult to
tell if there is any fault with the EKMA model itself. In fact, a major
conclusion of this study is that emission changes in Houston from 1974 to 1978
were not large enough to provide an adequate test of the EKMA model.
16
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
LOS ANGELES RESULTS
Historical Precursor Trends
As was the case with the Houston study, much of the effort in the Los
Angeles study was devoted to the task of determining historical emission
trends for reactive hydrocarbons (RHC's) and NOX. We compiled the precursor
emission trends for Los Angeles year-by-year from 1965 to 1977. Although the
Los Angeles emission trends involve significant uncertainties, these are less
than those for Houston because better data are available in Los Angeles for
base-year emissions, source growth, and control schedules.
Figures 1-5 and 1-6 summarize basinwide emission trends for RHC and NOX
on a yearly basis from 1965 to 1977. As shown in Figure 1-5, estimated
basinwide emissions of RHC decreased continually during the study period, with
a net reduction of 29% from 1965 to 1977. The predominant part of this
reduction was due to decreases in emissions from light-duty vehicles (the
largest source category); light-duty vehicle RHC emissions decreased 40% from
1965 to 1977 despite a 54% increase in traffic levels. Organic solvent
emissions also underwent a significant (30%) decrease, with this reduction
basically occurring between 1965 and 1974.
As shown in Figure 1-6, estimated basinwide NOX emissions rose rapidly
from 1965 to 1973 and then basically leveled off. The net increase over the
entire study period, 1965 to 1977, was 34%. The predominant part of this rise
17
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-5 appears here: RHC emissions (tons/day) plotted against year,
1965-1977.]
Figure 1-5. Historical RHC emission trends in the Los Angeles basin.
18
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-6 appears here: NOX emissions (tons/day) plotted against year,
1965-1977.]
Figure 1-6. Historical NOX emission trends in the Los Angeles basin.
19
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
was due to a 55% increase in NOX emissions from light-duty vehicles (the
largest source category). The net increase in light-duty vehicle NOX from
1965 to 1977 basically represented traffic growth. Light-duty vehicle NOX
emission factors for new cars jumped upward in the late 1960's, but by 1977
the fleet-averaged NOX emission factor was reduced back to the 1965 level due
to the new car NOX emission standards of the 1970's. Oxides of nitrogen
emissions from heavy-duty vehicles and residential, commercial, and industrial
fuel burning also increased significantly from 1965 to 1977, reflecting growth
in traffic and natural gas usage, respectively. Power-plant NOX emissions
decreased slightly from 1965 to 1977.
The EKMA validation studies in Los Angeles were conducted using basinwide
O3 trends as well as O3 trends at four individual monitoring sites: Azusa,
downtown Los Angeles, Anaheim, and San Bernardino. In addition to determining
basinwide emission trends, we estimated emission trends individually for the
source areas affecting the various monitoring sites. Figure 1-7 provides
examples of source areas affecting 03 at downtown Los Angeles and San
Bernardino. Emission trends differ somewhat among the various individual
source areas, reflecting differences in growth rates and source types within
those areas.
Using the numerous long-term monitoring sites in Los Angeles, we also
made a detailed study of ambient precursor trends for each source area. As
was the case with the emission trend studies, the ambient precursor trend
studies included analyses of the errors in the trend estimates.
20
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-7 appears here: a map of Los Angeles, Orange, Riverside, and San
Bernardino Counties showing the approximate source areas, with Azusa, Pomona,
Whittier, La Habra, Riverside, and San Bernardino marked.]
Figure 1-7. Approximate source areas affecting O3 at downtown Los Angeles and
San Bernardino.
21
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
Figures 1-8 and 1-9 compare emission trends with ambient trends in
various source areas for HC's and NOX, respectively. As evidenced by these
figures, the agreement between emission trends and ambient trends is generally
very good, especially when the data are viewed in an overall sense from 1965
to 1977. In light of this agreement, we can state with a high degree of
confidence that there has been a moderate (15 to 30%) reduction in HC's and a
moderate (25 to 35%) increase in NOX within the Los Angeles basin from 1965 to
1977.
Historical 03 Trends
Figure 1-10 illustrates historical 03 trends at the four study sites for
the two air quality indices used in our Los Angeles validation studies: the
95th percentile of daily maximal 1-h 03 and yearly maximum 03. The 03
concentrations at Azusa, downtown Los Angeles, and Anaheim decreased from the
middle 1960's to the early- to middle-1970's and then leveled off. All sites
in Los Angeles except those in the extreme eastern part of the basin exhibit
this historical pattern in O3 trends (Trijonis, 1980). Characteristic of the
far eastern part of the basin, 03 at San Bernardino increased at a very slight
rate during the entire study period.
NMHC:NOX Ratio and EKMA Simulation Conditions
One of the critical inputs to the EKMA isopleth model is the base-period
ambient 6 to 9 a.m. NMHC:NOX ratio. Our best estimate of this ratio for the
22
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-8 appears here: RHC emission trends and ambient NMHC trends,
expressed relative to the 1964-1966 base period, plotted against year for
various source areas.]
Figure 1-8. Comparison of RHC emission trends with ambient NMHC trends for
various source areas in the Los Angeles region.
23
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-9 appears here: NOX emission trends and ambient NOX trends,
expressed relative to the 1964-1966 base period, plotted against year for
various source areas.]
Figure 1-9. Comparison of NOX emission trends with ambient NOX trends for
various source areas in the Los Angeles region.
24
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-10 appears here: O3 concentration (pphm) plotted against year at
the four Los Angeles study sites for the two air quality indices.]
Figure 1-10. Historical O3 trends at the Los Angeles study sites.
25
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
1964-1966 base period in Los Angeles is 13:1. This estimate is derived by
examining data from several monitoring programs during the early 1970's and by
extrapolating back to 1964-1966 based on historical trends in ambient
precursor levels. In an attempt to make atmospheric NMHC concentrations more
nearly equivalent to EKMA NMHC concentrations, the ambient NMHC data are
adjusted to correspond to measurements made with propane calibration; the
contributions from the nearly nonreactive compounds ethane and propane are
also subtracted from the ambient NMHC data. Because of disagreements among
various monitoring programs concerning the NMHC:NOX ratio, because of
uncertainties in some of the adjustments applied to the data, and because of
stochastic day-to-day fluctuations in the ratio, there is considerable
potential for error in our estimate of the NMHC:NOX ratio. In sensitivity
studies, we consider the ratio to range from 7:1 to 25:1.
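The back-extrapolation amounts to scaling the measured ratio by the two ambient precursor trend factors. The sketch below states that arithmetic; the trend factors and the example numbers are placeholders, not the actual Los Angeles data.

```python
def base_period_ratio(measured_ratio, nmhc_trend_factor, nox_trend_factor):
    """Extrapolate an early-1970's NMHC:NOx ratio back to the 1964-1966
    base period using ambient precursor trends between the two periods.

    nmhc_trend_factor, nox_trend_factor: early-1970's ambient level
    relative to the base period (e.g., 0.85 means 15% below 1964-1966).
    """
    # ratio_early = ratio_base * (nmhc_factor / nox_factor), so invert:
    return measured_ratio * nox_trend_factor / nmhc_trend_factor

# Placeholder numbers only: a measured ratio of 10:1 in the early 1970's,
# with NMHC down 15% and NOx up 25% since the base period, implies a
# base-period ratio of about 14.7:1.
print(base_period_ratio(10.0, nmhc_trend_factor=0.85, nox_trend_factor=1.25))
```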
For the specific EKMA simulation conditions used, and for the type of
historical precursor changes examined (decreasing NMHC with increasing NOX),
predicted O3 trends are extremely sensitive to the base-period NMHC:NOX ratio.
In fact, it is obvious that potential errors in the NMHC:NOX ratio alone could
account for any of the discrepancies between predicted O3 trends and actual
O3 trends (see Figure 1-11). Holding this conclusion in reserve, we decided
to conduct the EKMA validation studies under a very stringent set of test
conditions (not allowing for potential errors in the NMHC:NOX ratio).
The EKMA simulation conditions chosen for this study are the standard
conditions except for the inclusion of emissions after 8 a.m. and the use of a
27
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-11 appears here: predicted O3 trends (pphm) plotted against year at
the four study sites for the range of NMHC:NOX ratios considered, together
with the actual O3 trends.]
Figure 1-11. Sensitivity of predicted O3 trends to the range in the NMHC:NOX
ratio at all four study sites.
28
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
special Los Angeles-specific dilution pattern. For the studies at individual
monitoring sites, we also used isopleths representing O3 at fixed irradiation
times rather than maximum 03 over the entire irradiation period. A
sensitivity analysis using six types of EKMA isopleths indicated that
predicted 03 trends are moderately sensitive to the specific simulation
conditions (Figure 1-12). In particular, the addition of emissions after 8:00
a.m. to the model significantly improved the agreement between predicted and
actual O3 trends in Los Angeles.
EKMA Validation
The EKMA validation studies for Los Angeles were conducted for two air
quality indices: the 95th percentile of daily maximal 03 and yearly maximum
1-h 03. Also, at each site, separate analyses were conducted using either
emission trends or ambient precursor trends as inputs to the EKMA model.
Figures 1-13 to 1-16 present some example results for Azusa and downtown Los
Angeles. In these figures, the error bars on predicted 03 trends represent
uncertainties in the precursor trend estimates; the error bars on the actual
03 trends represent the meteorological variance in the 03 trend indices. As
noted previously, the Los Angeles results are extremely sensitive to the
initial NMHC:NOX ratio, so that very large error bars would have to be added
to represent uncertainty in the NMHC:NOX ratio.
The results of the Los Angeles EKMA validation studies are nearly the
same whether one uses emission trends or ambient precursor trends to derive
29
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-12 appears here: O3 concentration (pphm) plotted against year,
1965-1977, for Anaheim, comparing the actual O3 trend with predicted trends
for six isopleth types: STANDARD; FIXTIME; FIXTIME-LADILUTION;
FIXTIME-CHARCURVEDILUTION; FIXTIME-CHARCURVEDILUTION-EMISSIONS; and
FIXTIME-LADILUTION-EMISSIONS. All predicted O3 trends are for the 95th
percentile of daily maximum O3, using historical changes in ambient precursor
levels and an NMHC:NOX ratio of 13:1.]
Figure 1-12. Sensitivity of predicted O3 trends at Anaheim to various EKMA
simulation conditions.
30
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-13 appears here: two plots of O3 concentration (pphm) against year,
1965-1977, for Azusa, showing predicted ozone trends based on emission trends
and on ambient precursor trends against the actual ozone trends; error bars
show (A) statistical error in ambient ozone trends and (B) error in precursor
trend data.]
Figure 1-13. Results of the validation studies for the 95th percentile of
daily maximum O3 at Azusa.
31
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-14 appears here, in the same format as Figure 1-13.]
Figure 1-14. Results of the validation studies for yearly maximum O3 at
Azusa.
32
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-15 appears here: two plots of O3 concentration (pphm) against year,
1965-1977, for downtown Los Angeles, in the same format as Figure 1-13.]
Figure 1-15. Results of the validation studies for the 95th percentile of
daily maximum O3 at downtown Los Angeles.
33
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
[Figure 1-16 appears here, in the same format as Figure 1-15.]
Figure 1-16. Results of the validation studies for yearly maximum O3 at
downtown Los Angeles.
34
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
predicted 03 trends. In both cases, the agreement between predicted and
actual trends is not poor considering the stringency of the test conditions
(in the sense that potential errors in the NMHC:NOX ratio are not included).
For most sites and years, the predicted and actual 03 trends agree within the
error bars representing variance in precursor trends and meteorological
fluctuations in 03 trends. There is, however, a disturbing general tendency
for predicted 03 trends to underestimate decreases in actual 03 trends at all
sites except San Bernardino. This discrepancy is rather large for certain
sites and years.
We are able to eliminate three possible explanations for the general
discrepancy that predicted 03 levels tend to underestimate historical
improvements in actual 03 levels. The cause is not errors in the historical
precursor trends, because the error bars on precursor trends are rather small,
and because the precursor trends are confirmed by two independent data bases
(emissions data and ambient data). Studies of weather-adjusted 03 data
strongly suggest that the discrepancy is also not due to meteorological
fluctuations in actual 03 trends. Also, further refinements in the EKMA
simulation conditions (improved dilution patterns, improved estimates of
emissions after 8 a.m., addition of carry-over 03, etc.) will probably not
eliminate the discrepancy because our predicted O3 trends are not quite
sensitive enough to changes in the simulation conditions.
The obvious factor that can account for observed discrepancies between
predicted and actual O3 trends in Los Angeles is error in the NMHC:NOX ratio.
35
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
We find that using a 10:1 base-year ratio rather than a 13:1 base-year ratio
would eliminate all of the discrepancies. There are two ways in which the
NMHC:NOX ratio could be in error. First, there could be a random error in our
estimate of the ratio — an error that would not necessarily reproduce itself
in applying EKMA to other areas. Second, there could be a systematic error in
the EKMA chemical-kinetic mechanism or in matching the atmospheric NMHC:NOX
ratio with the EKMA NMHC:NOX ratio. Further research should be conducted to
test the EKMA kinetic mechanism and to check the equivalency between
atmospheric and EKMA NMHC:NOX ratios. This research might lead to
improvements in the EKMA chemical mechanism or to guidelines for adjusting
atmospheric NMHC:NOX ratios before they are used in EKMA.
The Los Angeles study points out two potentially significant
sensitivities of the EKMA isopleth method. The predictions of EKMA can be
moderately sensitive to the specific EKMA simulation conditions and can be
very sensitive to the NMHC:NOX ratio. Thus, it may be important, in some
applications, to ensure that the most realistic, area-specific EKMA simulation
conditions are used. Also, it may be critical to obtain an accurate estimate
of the EKMA-equivalent NMHC:NOX ratio. Fortunately, predictions of future 03
trends are likely to be less sensitive to the simulation conditions and to the
NMHC:NOX ratio because future emission changes are likely to involve
reductions in both NMHC and NOX, and because the predictions of EKMA are much
less sensitive to the simulation conditions and the NMHC:NOX ratio when both
precursors undergo similar changes.
36
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
CONCLUSIONS
Historical trend analysis may be the most appropriate way of validating
the EKMA method because, in historical trend analysis, the EKMA method is
tested under conditions that are entirely parallel to the intended use of EKMA
in evaluating control strategies. Because precursor emissions did not undergo
large changes from 1974 to 1978, definitive EKMA validation studies cannot be
conducted with the available data for Houston. Historical data for Los
Angeles, however, provide a very useful test of the EKMA method.
The EKMA model performs fairly well in the Los Angeles validation
studies, although there is a general tendency for predicted 03 trends to
underestimate historical decreases in actual 03 trends. The reason for this
discrepancy appears to be error in the choice of an NMHC:NOX ratio or error in
the sensitivity of the EKMA chemical-kinetic mechanism. Further research
should be conducted to test the EKMA chemical-kinetic mechanism and to check
the equivalency between ambient and EKMA NMHC:NOX ratios.
REFERENCES
Davis, M., and J. Trijonis. 1981. Historical Emission and Ozone Trends in
the Houston Area. Prepared for EPA Office of Research and Development under
Contract 68-02-2976.
Radian Corporation. 1977. Examination of Ozone Levels and Hydrocarbon
Emissions Reduction. DCN 77-100-151-04.
37
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
Tannahill, G.K. 1976. The hydrocarbon/ozone relationship in Texas.
Presented at the Air Pollution Control Association Ozone/Oxidants Conference,
Dallas, Texas.
Trijonis, J. 1980. Oxidant and Precursor Trends in the Metropolitan Los
Angeles Region: An Update. Report to California Air Resources Board under
Contract A9-095-31.
Trijonis, J., and S. Mortimer. 1981. Analysis of Historical Ozone Trends in
the Los Angeles Region Using the EKMA Isopleth Model. Prepared for EPA
Offices of Research and Development and Air Quality Planning and Standards
under Contract 68-02-2976.
Trijonis, J., et al. 1978. Oxidant and precursor trends in the metropolitan
Los Angeles region. Atmos. Environ., 12:1413-1420.
WORKSHOP COMMENTARY
ESCHENROEDER: What was the data source for your emission inventory; official
figures from the district?
TRIJONIS: We compiled a base year from HAOS data or district data, although
we also made some of our own calculations for the base-year inventory. To get
the source growth rates we went to industry personnel, trade associations, or
federal, state, and local agency publications. Then to get the amount of
control put on, again we consulted with state and local agencies and industry
personnel. We compiled it all ourselves, basically, except for the base-year
inventory.
ESCHENROEDER: How were the footprints shown on the map for the source
influence areas determined for Los Angeles?
TRIJONIS: Those were determined from a review of various types of wind
analyses: wind roses, pollution roses, streamlines, and trajectory analysis.
For Los Angeles, we examined previous streamline and trajectory studies. In
Houston, we did our own pollution roses, wind roses, and so forth.
ESCHENROEDER: And the sources only in those areas were the ones that you were
considering the emission trends?
TRIJONIS: That's correct.
ESCHENROEDER: Finally, were your HC measurements, as far as the ambient
measurements or the inventories, in some way corrected to reflect the aldehyde
component of organic emissions?
38
-------
1. TREND ANALYSIS OF HISTORICAL DATA Trijonis
TRIJONIS: No, we didn't consider aldehydes at all. The only ambient and
emission HC data we had were for NMHC.
ESCHENROEDER: It's been suggested that that might explain the stronger
leverage in the real results than the model would predict. That might be
reflected in organic controls and maybe not reflected in the measurements or
in the inventories.
TRIJONIS: Yes, that is possible.
McRAE: How sensitive were your results to the choice of the validation
statistics? You used two; one was equivalent to the standard, which is the
maximum 03 value exceeded per year; and the other one was the 95th percentile.
Error bounds on the 95th percentile seemed to be much better than the use of
the statistic corresponding to the standard.
TRIJONIS: In Los Angeles we saw about the same validation results for both
the maximal air quality index and the 95th percentile air quality index. Both
had a slight disagreement between predicted trend lines and actual O3 trends.
Depending on which sites you looked at, each air quality index had a few error
bars that weren't in agreement and a few that were. I don't think the choice
was too critical. I'd generally recommend the 95th percentile because it does
have a lower variance in the actual trends.
McRAE: The second question I have relates to the use of EKMA. EKMA was
basically designed to help the various agencies work out the control
requirements to meet the standards. The validation results that you have
presented for Los Angeles, for example, are looking at much smaller changes in
emissions. Would you care to comment on how you think EKMA might perform when
you're talking about changes by factors of two or three in the emissions and
relatively small changes in the trends?
TRIJONIS: For Los Angeles, the 29% decrease in RHC emissions coupled with the
35% increase in NOX emissions was fairly considerable. I wouldn't know about
extrapolating that to larger changes. It seems that there is this discrepancy
in Los Angeles, and if EKMA were incorrect, I would imagine for larger
emission changes the discrepancy would increase. But it is not obvious that
there is a problem with EKMA anyway. Our results are very sensitive to the
NMHC:NOX ratio — future predictions might not be so. Historically, we have
been dealing with really rapid decreases in HC's and rapid increases in NOX.
In the future, you would expect that NOX and HC's would decrease together. In
fact, we find that if HC and NOX emissions are changing in about the same
manner, EKMA is very insensitive to the initial NMHC:NOX ratio. If there is
some difficulty matching the ambient and EKMA NMHC:NOX ratios, it might have
been important historically in Los Angeles, but it might not be nearly as
important in the future.
LLOYD: Did you examine any data on changes in the chemical composition with
time? I realize that the data would be scarce.
TRIJONIS: We examined chemical composition, but not with time. What we did
in Los Angeles was take out the ethane and propane in the ambient data to
lower our ratio by 10%. I thought of looking at historical changes in
composition with time, maybe to track sources with time. We haven't done
that, but it would be a very good idea to try to determine the composition
trends, especially in Los Angeles. It seems that over the last 3 or 4 years
ambient HC's aren't going down while emissions are supposed to be going down.
If you looked at trends in the composition you might get an idea of what
sources are causing the problem.
McRAE: What about your assumptions for 03 aloft?
TRIJONIS: We didn't have 03 aloft built into the model. There is a recent
study by Southern California Edison that suggested that once the O3 goes up
the sides of the mountains and goes back over the basin it stays at a fairly
high level and very seldom gets re-entrained. We didn't have 03 aloft in
our EKMA simulation conditions.
McKEE: In your graphs from the Los Angeles area, I believe both HC and NOX
emission data showed a sharp break in the curve in 1971. Also, your ambient
air monitoring data for 03 showed a break and a sharp leveling off in 1971.
When you compared predicted with actual O3, you started out with zero
difference because that was your base year, and then the discrepancy between
actual and predicted values increased to 1971, when it reached a maximum, and
then decreased again down to fairly good agreement in the latest years for
which comparison was made. My question is, what happened in 1971? Why do all
these curves show a sharp break in 1971?
TRIJONIS: There is nothing specific about 1971, although there was an unusual
meteorological phenomenon going on. It seems that '69, '70, and '71 were bad
years meteorologically for O3 in Los Angeles, and then '72, '73, and '74,
were good years. That might be part of it, but I don't think it is a general
phenomenon in all of our validation studies. It might have been true for the
two sites and the particular air quality index (95th percentile) I put up
there.
There was a break in our emissions data, not in 1971, but in 1974, with
the gasoline crisis. Gasoline sales is one of our parameters to track growth,
and emissions jogged down especially rapidly for HC's that year, and also for
NOX; 1974 was the only year that NOX had a significant decrease. So the
emissions jump you saw was not in '71 but in '74. And the only thing I can
think of special about '71 was that there was a break from 2 or 3 bad years in
terms of very high 03 potential to 2 or 3 good years, meteorologically.
McKEE: Well, it wasn't just '71. I looked at your original report, and there
was some sort of a long-term trend for several years which seemed to reach a
peak or a nadir or something in 1971, and then that trend reversed over the
next several years. So it wasn't just meteorologic emissions in any 1 year.
TRIJONIS: We were grouping the data by 3-year periods. There were a couple
of summers in a row during 1972 to 1974 that were good years, i.e., smogless
years.
JEFFRIES: What method did you use to generate the isopleth diagrams?
TRIJONIS: One was standard EKMA. Another, instead of isopleths representing
maximum O3 over 10-h radiation, used O3 at exactly 7 h; another added the
emissions after 8 a.m.; another used a special dilution pattern.
JEFFRIES: The results you showed on the comparison, then, use a mixture of
diagrams or use one diagram?
TRIJONIS: The final comparison in Houston was the standard EKMA diagram,
because in Houston we found the results were insensitive to the EKMA diagram.
In Los Angeles the final results were based on O3 for radiation at fixed
times, depending on the monitoring site; addition of emissions after 8 a.m.;
and a special dilution pattern. Those are the three extra additions to EKMA.
These were not standard; standard EKMA would not have given as good agreement
as these.
JEFFRIES: So, in effect, you generated a type of city-specific diagram and
then used it throughout the whole time period?
TRIJONIS: That's right, for Los Angeles.
2. EVALUATION OF THE EMPIRICAL KINETIC MODELING APPROACH
AS PREDICTOR OF MAXIMUM OZONE
J. Raul Martinez
Christopher Maxwell
SRI International
Menlo Park, California 94025
ABSTRACT
The Empirical Kinetic Modeling Approach (EKMA) is a Lagrangian photo-
chemical air quality simulation model that calculates ozone from its
precursors, nonmethane organic compounds, and oxides of nitrogen. This study
evaluated the performance of this approach when used to estimate the maximum
ozone concentration that can occur in an urban area and its environs. The
evaluation was conducted using data for St. Louis, MO, Houston, TX,
Philadelphia, PA, and Los Angeles, CA.
A novel statistical evaluation procedure was developed to measure the
accuracy of the EKMA ozone estimates. The accuracy parameter is defined as
the ratio of observed ozone to estimated ozone. Associated with this ratio is
an accuracy probability, which is defined as the probability that the ratio
lies within a predefined percent (e.g., ± 20%) of unity, a unit value of the
ratio denoting perfect agreement between observation and prediction. The
evaluation procedure uses nonmethane organic compounds and oxides of nitrogen
as inputs to calculate the accuracy probability of the EKMA ozone estimate.
The full range of accuracy probabilities associated with the EKMA ozone
estimates is displayed in graphical form on the nonmethane organic compound-
oxides of nitrogen plane.
For St. Louis, results are presented comparing EKMA's performance for
three different chemical models. A Monte Carlo method for using EKMA to
predict the distribution of ozone maxima is described, and applications of the
method to the design and analysis of ozone control strategies are discussed.
INTRODUCTION
The Empirical Kinetic Modeling Approach (EKMA) is a Lagrangian
photochemical model that calculates ozone (03) as a function of nonmethane
organic compounds (NMOC) and oxides of nitrogen (NOX). The EKMA approach has
been extensively documented, and we will forego discussing the technical
details of the model (EPA, 1977, 1978; Dodge, 1977; Trijonis and Hunsaker,
1978; EPA, 1980; Whitten and Hogo, 1978).
This paper describes the results of a study conducted to evaluate the
performance of EKMA in estimating the maximum 03 concentration that could
occur in an urban area and its environs. The study measured EKMA's ability to
predict maximum 03 and defined conditions under which 03 estimates could
achieve specific accuracy levels. The paper also summarizes the results of
the evaluation for four cities: St. Louis, MO, Houston, TX, Philadelphia, PA,
and Los Angeles, CA. A detailed description of the results is contained in
reports by Martinez et al. (1982) and Martinez and Maxwell (1982).
METHODOLOGY
Figure 2-1 illustrates the steps of the evaluation methodology, which are
described in this section.
Reference Data Set
The reference data set consisted of observed 6:00 a.m. to 9:00 a.m. local
time (LT) NMOC and NOX and the measured 03 maximum for each day. We averaged
6:00 a.m. to 9:00 a.m. NMOC and NOX over a group of sites located in the
subregion of the urban area where their respective sources were concentrated.
EKMA considers the 6:00 a.m. to 9:00 a.m. LT NMOC and NOX to be the precursors
of the maximum 03 that occurs downwind of the emissions subregion later in the
day. The observed daily 03 maximum is defined as the highest hourly average
03 measured from 12:00 noon to 5:00 p.m. LT at any monitoring station in the
area. The timing of the 03 maximum is dictated by EKMA's assumptions. Each
reference data set contains one entry per day. The data set for St. Louis
contains 100 days; for Houston, 61 days; for Philadelphia, 29 days; and for
Los Angeles, 176 days. The data sets are listed in Martinez et al. (1982).
The criteria for selecting days for the reference data set consisted of:
data availability, the time of occurrence of the daily O3 maximum, and the
prevalence of meteorological conditions that are necessary, but may not be
sufficient, for the 03 maximum to be at least 100 ppb. The 100-ppb cutoff
ensures that all days when the 03 national ambient air quality standard
(NAAQS) of 120 ppb is exceeded are included in the evaluation data set. Such
days were chosen for the evaluation for two reasons. First, EKMA is based on
worst-case assumptions and therefore cannot be expected to perform well for
days that do not meet these conditions. Second, because we were interested in
maximum 03 potential with respect to the NAAQS, days with conditions
associated with low 03 were not important to our evaluation.

Figure 2-1. Evaluation methodology used in the study. [Flowchart: the
reference data set supplies NMOC and NOX to EKMA; the estimated O3 is
compared with the reference O3 to define accuracy.]
We used the reference concentrations of 6:00 a.m. to 9:00 a.m. LT NMOC
and NOX to define an evaluation region on the NMOC-NOX plane. (The evaluation
region is defined to limit the results of the EKMA evaluation to the area of
the NMOC-NOX plane where the data are located.) Figure 2-2 shows the
evaluation region for St. Louis. It was defined by regressing NOX on NMOC and
placing a band around the regression line to capture approximately 95% of the
joint distribution of NMOC and NOX. For St. Louis, the regression line is
NOX = 0.0958 (NMOC) + 10.4, and the evaluation region is defined by NOX =
0.0958 (NMOC) + 10.4 ± 1.96s, where s is the standard error of regression
(s = 16.9 for St. Louis). For St. Louis, but not for the other three cities,
we further restricted the evaluation region by omitting all points in the
shaded area of Figure 2-2, in which NMOC/NOX < 5. We omitted these points
because in St. Louis the daily maximum 03 is always below 100 ppb when
NMOC/NOX < 5. We defined evaluation regions for the remaining three cities
using the same procedure.
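As an illustration, the membership test for the St. Louis evaluation region
can be written directly from the constants above; this is a sketch, and the
example point is arbitrary.

    # Sketch: does an (NMOC, NOx) pair fall in the St. Louis evaluation
    # region? Constants are those reported in the text.
    SLOPE, INTERCEPT, S = 0.0958, 10.4, 16.9   # NOx regressed on NMOC

    def in_evaluation_region(nmoc, nox, require_ratio=True):
        center = SLOPE * nmoc + INTERCEPT
        within_band = abs(nox - center) <= 1.96 * S
        if require_ratio:                # St. Louis only: NMOC/NOx >= 5
            return within_band and nmoc / nox >= 5.0
        return within_band

    print(in_evaluation_region(400.0, 45.0))   # arbitrary example point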
EKMA 03 Estimates
Basically, EKMA takes two forms — standard EKMA and city-specific EKMA.
Standard EKMA is based on conditions that prevail in the Los Angeles area,
whereas the city-specific version uses model inputs for a particular city. We
evaluated the performance of both forms of EKMA for each city.
Figure 2-2. Evaluation region of the NMOC-NOX plane for St. Louis.
For both standard and city-specific EKMA, the inputs used to estimate 03
were 6:00 a.m. to 9:00 a.m. LT NMOC and NOX, spatially averaged over a
specific set of monitors located in the emissions region of the city.
Statistical Evaluation of EKMA Performance
We used the ratio R = REF/EST, where REF denotes the reference daily O3
maximum and EST the EKMA estimate, as the performance measure in evaluating
EKMA. Using this ratio, we defined three accuracy regions as follows:
Region 1: 1.2 < REF/EST
Region 2: 0.8 < REF/EST < 1.2
Region 3: REF/EST < 0.8
Region 1 contains ratios representing cases of substantial
underprediction. We consider ratios in Region 2 to represent the most
accurate predictions that can be made because the error is at most 20%, which
is consistent with measurement precision. Region 3 contains ratios
representing cases of substantial overprediction.
To evaluate EKMA performance, we calculated the probability that a given
EKMA estimate falls in one of the three regions defined above. We made this
calculation by finding a multiple linear regression equation that described
the ratio R as a function of NMOC, NOX, and other variables such as maximum
daily temperature. We then used the equation for R to estimate the
probabilities P(R > 1.2), P(0.8 < R < 1.2), and P(R < 0.8), based on the
assumption that the residuals of the regression fit are Gaussian. We then
displayed the accuracy probabilities on graphs showing the probability as a
function of NMOC and NOX, and plotted probability isopleths on the NMOC-NOX
plane.
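Given the fitted equation for R and its standard error, the three
probabilities follow directly from the Gaussian-residual assumption. A
minimal sketch, using the standard-EKMA regression for St. Louis shown in
Figure 2-4 and the standard error s = 0.15 quoted later in the text (the
function names are ours, not the study's):

    # Sketch: accuracy probabilities for given NMOC (ppbC), NOx (ppb),
    # temperature T (deg C), and background ozone BO3 (ppb), assuming
    # Gaussian residuals about the fitted R regression.
    from scipy.stats import norm

    S = 0.15   # standard error of the R regression (St. Louis, standard)

    def r_hat(nmoc, nox, t, bo3):
        # Regression for R from Figure 2-4
        return -0.6 + 88.7/nmoc + 7.6/nox - 0.0009*nox + 0.02*t + 0.005*bo3

    def accuracy_probabilities(nmoc, nox, t, bo3, s=S):
        mu = r_hat(nmoc, nox, t, bo3)
        p_under = 1.0 - norm.cdf(1.2, mu, s)                 # P(R > 1.2)
        p_accurate = norm.cdf(1.2, mu, s) - norm.cdf(0.8, mu, s)
        p_over = norm.cdf(0.8, mu, s)                        # P(R < 0.8)
        return p_under, p_accurate, p_over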
EKMA Evaluation
The results discussed below for St. Louis, Houston, Philadelphia, and Los
Angeles pertain to EKMA with the Dodge chemical mechanism. Another section
describes the EKMA evaluation for two other chemical mechanisms using St.
Louis data.
Results for St. Louis
Figure 2-3 plots the standard-EKMA 03 estimates against the reference O3
concentrations. The figure shows the three accuracy regions. Most of the
points evidently fall within Region 3. Hence, standard EKMA tends to
overpredict most of the time. The figure also shows that the correlation
between estimate and reference values is low, being only r = 0.476.
For the standard-EKMA estimates, Figure 2-4 shows the accuracy
probability associated with the regression equation for R that is shown at the
bottom of the figure; R is a function of 1/NMOC, 1/NOX, NOX, maximum daily
temperature (denoted by T), and background O3 (denoted by BO3).
Figure 2-3. For St. Louis, standard EKMA overestimates reference O3.
[Scatterplot of reference O3 vs. Dodge standard-EKMA ozone estimate (ppb),
accuracy regions marked; r = 0.4757, Y = 77.13 + 0.2251X, s = 32.78.]
Figure 2-4. Accuracy probability describes performance of standard EKMA for
St. Louis. [Probability curves vs. R = reference ozone / predicted ozone, for
the regression R = -0.6 + 88.7/NMOC + 7.6/NOX - 0.0009 NOX + 0.02 T +
0.005 BO3.]
Figure 2-4 shows that for a value of R equal to 1.0, there is about an
85% probability that the model estimate is within 20% of the reference 03
concentration. Correspondingly, the probability that the model estimate will
overpredict or underpredict the reference 03 by at least 20% is about 8%. The
two vertical dashed lines, drawn from the curve defining the probabilities for
accurate predictions, define the range of R values for which there is at least
a 70% probability that the EKMA estimate is within 20% of the reference 03
concentration; these values range approximately from 0.875 to 1.12. The
shaded area identifies the 70% or greater probability range for the most
accurate predictions.
Figure 2-5 displays the constant-R contours on the NMOC-NOX plane for
EKMA. The evaluation region shown in the figure corresponds to that shown in
Figure 2-2; it contains the range of NMOC and NOX combinations observed in the
evaluation data set. The shaded area in the figure corresponds to the shaded
area in Figure 2-4. The shaded area depicts the range of NMOC, NOX pairs for
which the EKMA estimate has a 70% or greater probability of accurately
predicting (within 20%) the reference 03 concentration. Figure 2-5 shows that
an accurate estimate of the reference 03 concentration is most probable for
NMOC concentrations approximately between 200 ppbC and 500 ppbC, and for NOX
concentrations approximately between 18 ppb and 50 ppb. The figure also shows
that the majority of the evaluation region corresponds to R values less than
1.0 or where EKMA overpredicts the reference 03. The region of overprediction
corresponds to the large NMOC and moderate to large NOX concentrations. The
region where the model underpredicts, R > 1.2, is relatively small, and the
range of NMOC and NOX concentrations that will result in underpredictions is
small.

Figure 2-5. Accuracy regions define NMOC and NOX values associated with
standard-EKMA predictions for St. Louis. [Constant-R contours (R = 0.4 to
1.2) on the NMOC-NOX plane, NMOC 100-2000 ppbC, NOX 10-500 ppb, with the
evaluation region and the shaded high-accuracy band marked.]
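The contours in Figure 2-5 can be reproduced by evaluating the fitted R over
a grid of precursor values; the sketch below holds T and BO3 at assumed
representative values, which are our assumptions rather than the study's
settings.

    # Sketch: trace the high-accuracy band on the NMOC-NOx plane by
    # evaluating the fitted R (Figure 2-4) over a grid. T and BO3 are
    # held fixed at assumed values.
    import numpy as np

    T, BO3 = 30.0, 40.0                      # assumed representative values
    nmoc = np.linspace(100.0, 2000.0, 200)   # ppbC
    nox = np.linspace(10.0, 500.0, 200)      # ppb
    NMOC, NOX = np.meshgrid(nmoc, nox)
    R = -0.6 + 88.7/NMOC + 7.6/NOX - 0.0009*NOX + 0.02*T + 0.005*BO3
    accurate_band = (R >= 0.8) & (R <= 1.2)  # candidate high-accuracy region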
Figure 2-6 depicts the reference 03 plotted against the city-specific
EKMA estimates. Compared with Figure 2-3 for standard EKMA, Figure 2-6 shows
a much larger percentage of points in Region 2 (0.8 < R < 1.2).
Figure 2-6. City-specific EKMA estimates for St. Louis tend to be more
accurate. [Scatterplot of reference O3 vs. Dodge city-specific EKMA ozone
estimate (ppb), Regions 1-3 marked; r = 0.4705, Y = 60.74 + 0.3949X,
s = 32.88.]
Figure 2-7. Region of high accuracy probability is larger for city-specific
than for standard EKMA for St. Louis. [Constant-R contours (R = 0.4 to 1.2)
on the NMOC-NOX plane for the regression R = -0.5 + 84.8/NMOC - 0.002 NOX +
0.03 T + 0.006 BO3.]
For city-specific EKMA, the range of NMOC concentrations yielding accurate
predictions, which for standard EKMA was from 200 ppbC to 500
ppbC, has decreased, but the range of NOX concentrations (previously between
18 ppb and 50 ppb) increased. As was found for standard EKMA, the majority of
the evaluation region results in overpredictions (R < 1.0) of reference 03.
Conversely, there is a small range of NMOC and NOX concentrations that will
result in underpredictions (R > 1.2).
Results for Houston
Figure 2-8 shows a scatterplot of observed (OBS) and estimated (EST) 03
for the Houston Area Oxidant Study (HAOS) data set. Region 1 contains two
points, Region 2 has 11, and Region 3 has 48. (One of the points in Region 2
is plotted just below the line OBS = 0.8 EST.) Thus, about 3% of the cases
are underpredicted, and the remainder satisfy the inequality OBS < 1.2 EST.
These percentages are similar to those previously obtained for St. Louis (see
Figure 2-3).
A multiple regression equation was derived for R = OBS/EST as a function
of NMOC and temperature difference (denoted by DT) for the HAOS data set; the
equation for R is shown at the bottom of Figure 2-9. The multiple correlation
coefficient is r = 0.63 and the standard error of the regression is s = 0.25.
The significance level of the regression and of the coefficients of NMOC and
DT is p < 0.0001, but the constant term is not statistically significant at
the 0.05 level.
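A sketch of the fitting step for the Houston case follows: ordinary least
squares of R on NMOC and DT, with the data arrays as placeholders for the
HAOS observations.

    # Sketch: fit R = b0 + b1*NMOC + b2*DT by least squares and report
    # the standard error of the regression. Input arrays are placeholders.
    import numpy as np

    def fit_r_regression(nmoc, dt, r):
        X = np.column_stack([np.ones_like(nmoc), nmoc, dt])
        beta, *_ = np.linalg.lstsq(X, r, rcond=None)
        resid = r - X @ beta
        s = np.sqrt(resid @ resid / (len(r) - X.shape[1]))
        return beta, s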
Figure 2-8. Standard EKMA produced substantial overprediction for Houston
(HAOS). [Scatterplot of observed O3 vs. standard-EKMA ozone estimate (EST),
ppb.]
Figure 2-9 displays the probability curves for the three accuracy regions
as a function of the variable Z = -0.0002681 (NMOC). The figure shows that
P(R < 1.2) > 0.5 for Z < -0.175, which corresponds to NMOC > 654 ppbC. This
reflects standard EKMA's tendency to overpredict.
Figure 2-9 shows that the curve for P(0.8 < R < 1.2) is flattened and
spread out, in sharp contrast to the relatively narrow curve shown for St.
Louis in Figure 2-4. (Note, however, that the horizontal axes are not scaled
equally in Figures 2-4 and 2-9.) The maximum value of P(0.8 < R < 1.2) is
0.58 in Figure 2-9, compared to the maximum of 0.83 in Figure 2-4. The curves
in Figures 2-4 and 2-9 differ in shape because the standard error of the
regression is smaller for the St. Louis data (s = 0.15) than for the Houston
data (s = 0.25). The maximum probability of 0.58 in Figure 2-9 occurs in the
neighborhood of Z = -0.21, which corresponds to NMOC = 723 ppbC. The shaded
region in Figure 2-9 is where P(0.8 < R < 1.2) > 0.50 and is defined by
-0.35 < Z < -0.060, which corresponds to 224 ppbC < NMOC < 1305 ppbC.
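The NMOC values quoted above follow from inverting the definition of Z, as in
this short check:

    # Sketch: convert a Z bound back to NMOC, since Z = -0.0002681 * NMOC
    def nmoc_from_z(z, coeff=-0.0002681):
        return z / coeff

    print(nmoc_from_z(-0.35), nmoc_from_z(-0.060))   # ~1305 and ~224 ppbC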
Figure 2-10 displays constant-Z lines on the NMOC-NOX plane, along with
the evaluation region for the Houston data. The shaded area corresponds to
the region where P(0.8 < R < 1.2) > 0.50. Inside the evaluation region, NMOC,
NOX combinations that fall within and to the right of the shaded area satisfy
the inequality P(R < 1.2) > 0.60. Points to the left of the shaded area
satisfy the relation P(R > 1.2) > 0.42, and thus are more likely to yield
underestimates of observed 03. The tendency of standard EKMA to overpredict
03 is reflected in the large difference in the respective sizes of the
regions that flank the shaded area in Figure 2-10.

Figure 2-9. Probability of an accurate prediction is less than 0.6 for
standard EKMA for HAOS. [Probability curves vs. Z = -0.0002681 (NMOC), for
the regression R = 0.1 - 0.0002681 NMOC + 0.06 DT.]

Figure 2-10. Region of overprediction includes most of the evaluation region
for standard EKMA for HAOS. [Constant-Z lines (Z = -0.1 to -0.9) on the
NMOC-NOX plane, NMOC 500-3500 ppbC, NOX 50-400 ppb, with the evaluation
region and the shaded area where P(0.8 < R < 1.2) > 0.50 marked.]
Figure 2-11 shows a scatterplot of observed and estimated O3 for the HAOS
data. Whereas the standard-EKMA estimates demonstrate a marked tendency to
overpredict, the city-specific estimates demonstrate the opposite. The
majority of the points in the figure are in Region 1, the region of
underprediction. For the HAOS data, Figure 2-11 has 33 points (54%) in Region
1, 16 points (26%) in Region 2, and 12 points (20%) in Region 3. Observed and
estimated 03 are correlated for the HAOS data; the correlation is low
(r = 0.24) but statistically significant (p < 0.04).
We derived a multiple regression equation for the ratio R = OBS/EST for
the HAOS data set. The equation is shown at the bottom of Figure 2-12, and,
as for standard EKMA, it shows R as a function of NMOC and DT. The multiple
correlation coefficient is r = 0.65, and the standard error is s = 0.60.
Figure 2-12 displays the probability curves for the three accuracy
regions as a function of the variable Z = -0.0006191 (NMOC). The curves are
computed for DT = 11.6°C, which is its mean value. The probability P(R < 1.2)
exceeds 0.5 for Z < -0.90, which corresponds to NMOC > 1454 ppbC. Thus, the
city-specific 03 estimates can be considered to be upper bounds generally for
high values of NMOC. The curve for P(0.8 < R < 1.2) covers the entire range
of NMOC concentrations, but the probability is low that a given NMOC value
will fall in Region 2. The highest probability of an accurate prediction is
about 0.26, and occurs for Z = -1.11, which corresponds approximately to
NMOC = 1793 ppbC.

Figure 2-11. City-specific EKMA underpredicted substantially for HAOS.
[Scatterplot of observed O3 vs. city-specific EKMA ozone estimate (CSO3),
ppb.]

Figure 2-12. Accurate estimates have a low probability for city-specific
EKMA for HAOS. [Probability curves vs. Z = -0.0006191 (NMOC), for the
regression R = 0.2 - 0.0006191 NMOC + 0.2 DT.]
For the HAOS data, EKMA substantially overpredicted in the standard mode,
and underpredicted in the city-specific mode. As a result, the probability of
an accurate prediction for the HAOS data was generally low. In general, it
appears that in either mode EKMA tends to be a low-accuracy predictor of 03
for the Houston area.
Results for Philadelphia
Figure 2-13 shows a scatterplot of observed 03 and standard-EKMA 03
estimates. Of the 29 points plotted, 4 (14%) are in Region 1, 5 (17%) are in
Region 2, and the remainder are in Region 3. Note, however, that the four
points in Region 1 are not grossly underpredicted.
Thus, as in St. Louis and Houston, standard EKMA shows a marked tendency
to overpredict. Unlike the St. Louis case, there is no statistically
significant correlation between observed and estimated 03.
The multiple regression equation derived for the ratio R = OBS/EST is
shown at the bottom of Figure 2-14; R is a function of 1/NMOC and daily
maximum temperature (T).
Figure 2-13. Standard EKMA for Philadelphia tends to overpredict.
[Scatterplot of observed O3 vs. standard-EKMA ozone estimate (EST), ppb.]
Figure 2-14 displays the accuracy probability plot for R as a function of
Z = 162.45/NMOC for the mean value of T, which is 27.5°C. The probability
P(R < 1.2) exceeds 0.5 for Z < 0.85, which corresponds to NMOC > 191 ppbC. The
shaded area in Figure 2-14 defines the region with the highest probability of
an accurate prediction, i.e., where 0.8 < OBS/EST < 1.2. In the shaded area,
the maximum probability is approximately 0.71, and occurs for Z = 0.65, which
corresponds to NMOC = 250 ppbC. For the shaded area the variable Z is bounded
by 0.51 < Z < 0.79, for P(0.8 < R < 1.2) > 0.6.
The accuracy regions are illustrated in the NMOC-NOX plane in Figure
2-15. The shaded area of the figure corresponds to that in Figure 2-14 and is
defined by 206 ppbC < NMOC < 319 ppbC; NMOC, NOX combinations within the
shaded area have the highest probability of yielding an accurate 03 estimate.
Values of NMOC and NOX to the right of the shaded area have a high probability
that the ratio OBS/EST < 0.8; hence, this is the region of overprediction.
Underprediction is most probable in the thin slice to the left of the shaded
area. Thus, the vast majority of the evaluation region is associated with
OBS/EST ratios smaller than 1.2.
The accuracy regions for city-specific-EKMA estimates are displayed on
the NMOC-NOX plane in Figure 2-16. The shaded area of this figure corresponds
to the 03 estimates that have the highest probability of being most accurate.
The region where R < 1.2 includes the shaded area and all of the evaluation
region to the right of the shaded area. Only a very narrow slice of the
NMOC-NOX plane to the left of the shaded area corresponds to the region of
underprediction, and that slice is actually outside the bounds of the
evaluation region.

Figure 2-14. Probability of accurate predictions is moderately high for
standard EKMA for Philadelphia. [Probability curves vs. Z = 162.45/NMOC, for
the regression R = -0.7 + 162.45/NMOC + 0.04 T.]

Figure 2-15. High NMOC concentrations yield overpredictions for standard
EKMA for Philadelphia. [Constant-Z lines on the NMOC-NOX plane, NMOC
200-1600 ppbC, NOX 40-140 ppb, with the evaluation region marked.]

Figure 2-16. City-specific and standard-EKMA estimates for Philadelphia were
very similar. [Constant-Z lines on the NMOC-NOX plane for the regression
R = -0.6 + 117.7/NMOC + 0.04 T.]
For Philadelphia, the standard- and city-specific-EKMA 03 estimates were
very similar. However, in a reversal of roles from the St. Louis and Houston
cases, the Philadelphia city-specific estimates were more accurate and also
displayed a lower tendency toward underprediction than did the standard-EKMA
estimates. However, in keeping with previous results, the standard-EKMA 03
estimates showed a pronounced tendency toward overprediction.
Results for Los Angeles
Figure 2-17 is a scatterplot of OBS and EST for Los Angeles. As the
scatter suggests, no statistically significant correlation exists between
observation and prediction. Nevertheless, the distribution of the points
among the three accuracy regions is of interest. Region 1 contains 62 points
(35%), Region 2 has 51 (29%), and Region 3 has 63 (36%). Thus,
overpredictions, underpredictions, and accurate predictions are about equally
probable. This is surprising because standard EKMA is supposed to simulate
worst-case 03 conditions in the Los Angeles area.
A multiple regression equation was derived for the ratio R = OBS/EST as a
function of several variables: the equation is shown at the bottom of Figure
2-18. R is a function of 1/NOX, NMOC, and 1/NMOC, and maximum daily
temperature (T). The multiple regression coefficient is r = 0.68, and the
standard error of the regression is s = 0.43.

Figure 2-17. Standard-EKMA estimates for Los Angeles fell in each region in
equal proportions. [Scatterplot of observed O3 vs. standard-EKMA ozone
estimate (EST), ppb.]

Figure 2-18. Under-, over-, and accurate predictions have similar
probabilities for standard EKMA for Los Angeles. [Probability curves vs.
Z = -39.06/NOX + 371.6/NMOC - 0.000466 NMOC, for the regression
R = -0.2 - 39.06/NOX + 371.6/NMOC - 0.000466 NMOC + 0.02 T.]
The equation for R in Figure 2-18 resembles that for St. Louis in Figure
2-4 because it includes both 1/NMOC and 1/NOX. However, for St. Louis, these
two variables had positive coefficients, which is not the case for Los
Angeles. Another difference between the equation for St. Louis and that for
Los Angeles is that the latter includes NMOC; despite the fact that 1/NMOC,
1/NOX, and NMOC are correlated, the latter two variables increase the amount
of variance explained. By far the most important variable in terms of amount
of variance explained is 1/NMOC, which by itself explains about 40% of the
variance. The remaining three variables (T, NMOC, and 1/NOX) together add
another 6% to the total explained variance. Thus, NMOC or its reciprocal
continues to play a large role in explaining the predictive performance of
EKMA.
Figure 2-18 displays a plot of the accuracy probability derived from the
expression for R as a function of the variable Z, which is defined in the
figure. Reflecting the indications of Figure 2-17, Figure 2-18 shows that the
probability of an accurate prediction is relatively low, with P(0.8 < R < 1.2)
< 0.37. Moreover, the magnitude of the three probabilities is about the same
in the neighborhood where P(0.8 < R < 1.2) has its maximum. Thus, relatively
small changes in the value of Z in this neighborhood can radically shift the
probability of overprediction or underprediction. The steepness of the curves
for P(R < 0.8) and for P(R > 1.2) suggests a similarly sensitive behavior in
the overprediction and underprediction regimes, respectively. The sensitivity
of the probabilities causes the standard-EKMA 03 estimates to be of limited
usefulness for obtaining upper bounds for O3 in Los Angeles.
Figure 2-19 shows a scatterplot of observed O3 and the city-specific EKMA
estimate. In contrast to Figure 2-17, Figure 2-19 has a preponderance of
overpredictions and few underpredictions. Region 1 contains six points (3% of
the total), Region 2 has 25 (14%), and Region 3 has 145 (83%). Thus, the
number of underpredictions has been reduced by more than a factor of 10, but
the number of accurate predictions has decreased by a factor of 2.
Figure 2-20 shows the multiple regression equation derived for the ratio
R = OBS/EST for the city-specific case. The equation for R in Figure 2-20
differs from that in Figure 2-18 in the presence of the variable NOX in the
former instead of the NMOC that appeared in the latter. Nevertheless, in both
equations, 1/NMOC explained approximately the same amount of variance, 37% in
Figure 2-20 and 40% in Figure 2-18.
Constant-Z contours are displayed on the NMOC-NOX plane in Figure 2-21.
The shaded area of Figure 2-21 corresponds to the interval 0.1 < Z < 0.3, in
which P(0.8 < R < 1.2) > 0.60. Thus, NMOC, NOX combinations in the shaded
area have the highest probability of yielding accurate estimates. The small
area to the left of the shaded slice is the region where underprediction
becomes more probable. The part of the evaluation region to the right of the
shaded area is associated with an increasing probability of overprediction.
Figure 2-19. City-specific EKMA for Los Angeles overpredicts substantially.
[Scatterplot of observed O3 vs. city-specific EKMA ozone estimate (CSO3),
ppb.]
Figure 2-20. Accurate predictions are most probable for 0.1 < Z < 0.3 for
city-specific EKMA for Los Angeles. [Probability curves vs.
Z = -54.999/NOX + 286.822/NMOC - 0.001687 NOX, for the regression
R = 0.08 - 54.999/NOX + 286.822/NMOC - 0.001687 NOX + 0.009 T.]
Figure 2-21. Most NMOC and NOX lead to overprediction for city-specific EKMA
for Los Angeles. [Constant-Z contours on the NMOC-NOX plane, NMOC 500-2500
ppbC, NOX 50-450 ppb.]
The tendency toward overprediction is thus made apparent, because most of the
evaluation region is to the right of the shaded area.
That the standard EKMA yielded so many underestimates was surprising,
indicating that the worst-case conditions supposedly embodied in standard EKMA
do not in fact define a worst case. The city-specific EKMA, by contrast,
yielded a large majority of overestimates. This suggests that the city-
specific EKMA is the operational mode of choice for the purpose of obtaining
an upper bound for O3, although the magnitude of the overprediction can be
very large.
DISCUSSION
The results of the EKMA evaluation for the four test cities indicate that
it is feasible to use EKMA to estimate maximum 03 because:
• Predictive accuracy is a function of NMOC and NOX, and hence can
be estimated in advance.
• Different data bases produce similar patterns of predictive
accuracy, with NMOC being the most important explanatory
variable.
• Evaluation methodology can be extended to other urban areas.
Three levels of accuracy were defined based on the ratio R = reference/
estimated — R > 1.2, 0.8 < R < 1.2, and R < 0.8. The interval R > 1.2
defines cases of underestimation; R < 0.8 defines cases of overprediction.
The closed interval 0.8 < R < 1.2 defines the most accurate estimates. The
accuracy of the O3 estimates depended not only on the level of pollutant and
meteorological variables, but also on whether standard or city-specific EKMA
was used. Moreover, the frequency of occurrence of 03 estimates that fell in
each of the three accuracy intervals was different for all the data sets
studied. Nevertheless, a general pattern emerged that relates low, medium,
and high values of NMOC and NOX to the three accuracy intervals. Low values
tended to yield R > 1.2, medium values, 0.8 < R < 1.2, and high values, R <
0.8. The precise values of NMOC and NOX that mark the boundaries of the three
accuracy intervals differed among the individual data sets. St. Louis and
Houston exhibited a general trend toward a high frequency of overestimation,
and a low frequency of underprediction, for the standard EKMA. Although for
St. Louis and Houston standard EKMA yielded estimates in the interval 0.8 < R
< 1.2, the frequency was low, and the range of NMOC and NOX values that
produced such estimates with high probability was narrow.
The situation was different for city-specific EKMA, which tended to have
a higher frequency of estimates in the interval 0.8 < R < 1.2. City-specific
EKMA also tended to underpredict more frequently, and to overpredict less
frequently, than standard EKMA. Thus, although city-specific EKMA offers a
higher probability of producing an accurate estimate (for which 0.8 < R < 1.2),
standard EKMA has a higher probability of yielding an upper-bound estimate
(for which R < 1.2).
We regard the evaluation methodology developed in the study as an
important contribution. The methods are general, and the numerical results
reported are real-life examples of what can be accomplished with these
techniques. In the future, this methodology can be applied to other data
bases to extend and generalize the results reported here.
COMPARISON OF CHEMICAL MECHANISMS
Three different chemical mechanisms were used with EKMA, and the model's
response was evaluated using St. Louis data. The three mechanisms are:
• Dodge
• Carbon Bond II (CBII)
• Demerjian (DM)
These mechanisms have been described by Jeffries et al. (1981).
Figures 2-22 and 2-23 plot the CBII and DM standard-EKMA estimates
against the reference O3. It is apparent that the CBII and DM mechanisms
yield similar estimates that exhibit a stronger tendency to overpredict than
the Dodge estimates (Figure 2-3).
Table 2-1 compares the number of points in the accuracy regions for the
three models. The table shows that most of the data points are in Region 3,
which indicates that the models strongly tend to overpredict the reference 03
concentration. The CBII and DM models overpredicted with greater frequencies
than did the Dodge model. The Dodge model had more than twice as many
predictions in the accurate region (Region 2) than did the CBII model, and
nearly six times as many as the DM model. The CBII and DM models each had
only one data point in the underpredicted accuracy region (Region 1). In
contrast, the Dodge model had five data points in Region 1.

Figure 2-22. Standard EKMA with CBII overpredicts more than Dodge.
[Scatterplot of reference O3 vs. Carbon Bond II standard-EKMA ozone estimate
(ppb); r = 0.4800, Y = 77.13 + 0.1696X, s = 32.69.]

Figure 2-23. Standard-EKMA estimates for DM are similar to CBII predictions.
[Scatterplot of reference O3 vs. Demerjian standard-EKMA ozone estimate
(ppb), Regions 1-3 marked; r = 0.4325, Y = 69.66 + 0.1640X, s = 33.60.]
TABLE 2-1. NUMBER OF STANDARD-EKMA 03 ESTIMATES IN EACH ACCURACY REGION

                               Accuracy Region
Model     Region 1 (R > 1.2)   Region 2 (0.8 < R < 1.2)   Region 3 (R < 0.8)
Dodge              5                     17                        78
CBII               1                      7                        92
DM                 1                      3                        96
Figure 2-24 shows the equation derived for R = REF/EST and the constant-R
contours on the NMOC-NOX plane for the CBII EKMA. Similar to the Dodge EKMA
(Figure 2-5), most of the NMOC, NOX combinations in the evaluation region are
expected to overpredict the reference 03. Accurate estimates of the reference
03 occur with a relatively small group of NMOC, NOX combinations depicted by
the shaded area in the figure. The shaded area for the CBII model is about
1/3 the size of the shaded area for the Dodge model (Figure 2-5), and the
range of NMOC values that will result in an accurate prediction (150 ppbC to
250 ppbC) is also about 1/3 the size of that for the Dodge model (200 ppbC to
500 ppbC). The range of NOX concentrations resulting in an accurate
prediction is not much different for the CBII model (14 ppb to 50 ppb) than
the Dodge model (18 ppb to 50 ppb). The area within the evaluation region
where the CBII model is expected to underpredict (R > 1.2) is very small.
Figure 2-24. CBII/standard-EKMA estimates have a small high-accuracy region
compared to the Dodge model's. [Constant-R contours on the NMOC-NOX plane for
the regression R = -0.6 + 200.2/NMOC - 7.8/NOX + 0.02 T + 0.004 BO3, with the
evaluation region and the shaded high-accuracy area marked.]
Figure 2-25 shows constant-R contours on the NMOC-NOX plane for the DM
EKMA. For the DM model, there are very few combinations of NMOC, NOX
concentrations for which the model is not expected to overpredict the reference
03 concentration. There are no NMOC, NOX combinations for which the DM EKMA
is expected to underpredict (R > 1.0).
Figure 2-26 compares the regions of the NMOC-NOX plane for which the
probability P(0.8 < R < 1.2) is at least 0.70 for the three EKMA models.
There is little overlap of the areas for the three models, and none of the
models is expected to predict accurately for NMOC concentrations > 500 ppbC or
NOX concentrations > 50 ppb. The Dodge model is accurate over the widest
range of NMOC, NOX combinations, while the DM model has the smallest range.
Figure 2-26 also shows that for low input NMOC and NOX concentrations, the DM
and CBII EKMA models are expected to be more accurate than the Dodge EKMA.
Figure 2-27 compares the CBII city-specific EKMA estimates to the
reference 03 concentrations. Compared with Figure 2-22 for the CBII standard
EKMA, we see an increase in the number of points in Regions 1 and 2, and a
corresponding decrease in Region 3. However, the majority of the points are
still in Region 3. The correlation coefficient between the CBII city-specific
EKMA estimates and reference 03 concentrations is r = 0.530. This value is
slightly better than the correlation (r = 0.480) between the CBII standard-
EKMA estimates and the reference 03 concentrations (Figure 2-22).
Figure 2-25. DM/standard EKMA overpredicts over most of the evaluation
region. [Constant-R contours (R = 0.4 to 0.8) on the NMOC-NOX plane for the
regression R = -0.4 + 94.0/NMOC + 0.01 T + 0.003 BO3, with the evaluation
region marked.]
Figure 2-26. Accuracy regions for the three models are very different.
[Regions of the NMOC-NOX plane where PROB(0.8 < R < 1.2) > 0.7 for standard
EKMA, shown for the Dodge, CBII, and DM mechanisms; NMOC 100-2000 ppbC.]
Figure 2-27. CBII/city-specific EKMA estimates have improved accuracy.
[Scatterplot of reference O3 vs. Carbon Bond II city-specific EKMA ozone
estimate (ppb); r = 0.5297, Y = 67.51 + 0.3550X, s = 31.61.]
Figure 2-28 shows that for the DM EKMA there is no large increase in the
number of points in Regions 1 and 2 for the city-specific estimates. However,
the scatter in Figure 2-28, for the DM city-specific EKMA, is not as
widespread as in Figure 2-23 for the DM standard EKMA. The decrease in the
overall scatter for the DM city-specific model is indicated by a slightly
larger correlation coefficient (r = 0.480) as compared with the DM standard
EKMA (r = 0.432).
Table 2-2 lists the number of cases for each EKMA version for which the
city-specific estimates fall within each of the three accuracy regions.
Comparing Table 2-2 with Table 2-1 shows a marked improvement in the number
of 03 estimates in Region 2 when changing from the standard to the
city-specific mode for the Dodge and CBII models. However, the DM model
showed only slight improvement. The comparison also shows that the number of
03 estimates in Region 1 increased for the Dodge and CBII models when
changing from the standard to city-specific modes; but for the DM model there
was no change.
TABLE 2-2. NUMBER OF CITY-SPECIFIC EKMA O3 ESTIMATES IN EACH ACCURACY REGION

                               Accuracy Region
Model     Region 1 (R > 1.2)   Region 2 (0.8 < R < 1.2)   Region 3 (R < 0.8)
Dodge              9                     39                        52
CBII              10                     37                        53
DM                 1                      8                        91
Figure 2-28. Accuracy of DM/city-specific EKMA estimates shows slight
improvement. [Scatterplot of reference O3 vs. Demerjian city-specific EKMA
ozone estimate (ppb); r = 0.4800, Y = 63.31 + 0.2539X, s = 32.69.]
Figure 2-29 compares the location of the accuracy regions for the Dodge
and DM mechanisms in the city-specific mode. The shaded areas correspond to
the probability P(0.8 < R < 1.2) being 70% or greater, that is, where the
city-specific estimates have the highest probability of being accurate.
Figure 2-29 shows that the areas for the two models do not overlap, and for
very low NMOC and NOX concentrations, the DM city-specific EKMA may predict
the reference 03 concentration more accurately than the Dodge model.
Comparing Figure 2-29 with Figure 2-26 for the standard-EKMA models, we see
that the areas are larger for the Dodge and DM city-specific models. However,
for the majority of NMOC-NOX paired concentrations, the models still tend to
overpredict.
The evaluation showed that the three chemical mechanisms have different
accuracy characteristics. Thus, the accuracy regions of the NMOC-NOX plane
are different for the three models. This feature suggests the following
criterion for choosing a chemical mechanism for a particular EKMA application:
select the model with the highest probability of producing an accurate
prediction for the specific application of interest.
EKMA APPLICATIONS
Using EKMA to estimate maximum 03 from its NMOC and NOX precursors
implies that EKMA could be used to assess the effect on O3 of actions that
modify the concentration of NMOC and NOX, for example, emissions control
strategies. A corollary is that EKMA could also be used in the design of
control strategies by performing a sequence of analyses of the effect on O3
of a variety of postulated control measures. Below we examine the
possibilities and pitfalls of using EKMA in this fashion in light of the
results of this study. The discussion will be cast in terms of emissions
control strategies, because we consider this to be the most common
application of EKMA. However, it should be understood that any action that
modifies precursor levels is implicitly treated. The discussion assumes that
the reader is familiar with the principles and assumptions of EKMA as
presently formulated.

Figure 2-29. Regions where accurate estimates are at least 70% probable do
not overlap. [Regions of the NMOC-NOX plane where PROB(0.8 < R < 1.2) > 0.7
for city-specific EKMA, for the Dodge and DM mechanisms, with the evaluation
region marked.]
The problem at issue is to evaluate the effectiveness of a control
strategy aimed at reducing 03 by curtailing emissions of NMOC or NOX, or both.
In this context, EKMA could be used to estimate the maximum 03 associated with
the control strategy in an attempt to answer the following questions:
• What is the probability that the maximum 03 will exceed 120 ppb?
• If it appears that 120 ppb will be exceeded, is the estimated
O3 close to or much greater than 120 ppb?
One general procedure, which we call the point-estimate method, is
depicted in the flow chart in Figure 2-30. EKMA could be used to answer the
first question provided that the 03 estimate is less than or equal to 120 ppb,
and there is a high probability that the estimate is an upper bound for the
actual concentration. This corresponds to taking the right-hand branch of
Figure 2-30. In this case, the answer to Question 1 above is that the
probability that the control strategy will produce 03 levels that exceed 120
ppb is very low; the precise value of the probability would be obtained from
accuracy probability plots such as Figure 2-4. Note that the accuracy of the
03 estimate is not important in this situation because an upper-bound estimate
is all that is needed. Hence, standard EKMA may be satisfactory for this
application.
EKMA would not be as helpful in answering Question 1 if the estimate is
under 120 ppb, but there is a high probability of underprediction. As
indicated in the right-most branch of Figure 2-30, in this case, the analysis
must be refined to establish the accuracy of the estimate and the probability
associated with that accuracy. The methods previously described allow a user
to perform such a refined analysis. Because accuracy is important in these
circumstances, it may be necessary to resort to city-specific EKMA to obtain
the 03 estimate.
Answering Question 2 requires analyzing the accuracy of the estimate and
its associated probability. As depicted in the left branch of Figure 2-30,
three cases are to be considered when the estimate is over 120 ppb:
• There is a high probability of overprediction.
• There is a high probability of underprediction.
• There is a relatively high probability that the estimate is
accurate, for example, to within ± 20%.
EKMA would be least helpful in the first case (the left-most branch of Figure
2-30), because although the control strategy may actually reduce 03 below 120
ppb, the overprediction masks the effect. This case thus requires a very
thorough analysis of the accuracy of the estimate. In the second case, the
control strategy clearly does not work, and the margin of ineffectiveness
should be assessed by analyzing the accuracy of the estimate as a means of
guiding the reformulation of the control strategy. The third case is where
EKMA would be most useful, because the estimate is relatively accurate. Such
an estimate can also be used to guide the design of a new control strategy.
Used in the fashion described above, EKMA can be considered as a
screening tool, albeit coarse at times, that allows one to analyze the
potential impact of a control strategy. Coupled with the analysis of the
accuracy of the estimates, EKMA could also be used to help formulate a control
strategy by sequentially screening a series of control strategies. In
general, we recommend applying standard EKMA first, then going to the city-
specific mode if the results obtained with the standard mode warrant it.
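The decision logic just described can be summarized in a short sketch; the
0.7 cutoff for a "high" probability is illustrative, as the text does not fix
a numerical value.

    # Sketch of the point-estimate screening logic. p_over, p_accurate,
    # and p_under come from the accuracy-probability analysis; the 0.7
    # cutoff for "high probability" is an assumed, illustrative value.
    NAAQS = 120.0   # ppb

    def screen_strategy(o3_est, p_over, p_accurate, p_under, cutoff=0.7):
        if o3_est <= NAAQS:
            if p_over >= cutoff or p_accurate >= cutoff:
                return "estimate is a probable upper bound; standard likely met"
            return "underprediction risk; refine with city-specific EKMA"
        if p_over >= cutoff:
            return "overprediction may mask attainment; analyze accuracy"
        if p_under >= cutoff:
            return "strategy ineffective; assess margin and reformulate"
        return "estimate relatively accurate; strategy does not attain"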
The screening model provided by the point-estimate method is of the
go/no-go type, and is based on a single 03 prediction. The Monte Carlo
method, by contrast, is designed to predict the distribution of 03 maxima,
from which one can estimate the statistical information required by the O3 air
quality standard. Figure 2-31 shows a flowchart of the Monte Carlo method.
The input consists of N pairs of NMOC and NOX concentrations drawn from a
joint distribution; N is an arbitrary number determined by statistical
criteria. The NMOC, NOX distribution is associated with a postulated control
strategy. For example, one could obtain a joint distribution of NMOC and NOX
by scaling an observed distribution. The N pairs of NMOC and NOX are used
with EKMA to obtain N 03 estimates. Each NMOC, NOX pair also defines a
distribution of the ratio R = reference O3/estimated O3. The next step is a
key element of the method: we obtain an adjusted 03 estimate by multiplying
the EKMA 03 estimate by its corresponding R distribution. Thus, in this step
we compensate for the expected error in the EKMA estimate. In this manner we
obtain N distributions of adjusted 03 estimates. As Figure 2-31 indicates, we
then obtain a distribution of 03 maxima by using Monte Carlo sampling with
the N distributions of adjusted 03 estimates.

Figure 2-31. Monte Carlo method. [Flowchart: an ozone control strategy
defines a joint distribution of NMOC and NOX, from which N pairs are drawn;
EKMA produces N ozone estimates; each pair also defines a distribution for
R = REF/EST; adjusted O3 = EKMA O3 x R gives N distributions of adjusted O3;
Monte Carlo sampling (one draw from each distribution) yields Mi, the maximum
O3 over the N samples; after k iterations the Mi (i = 1,...,k) form the
distribution of O3 maxima.]
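A minimal sketch of this loop follows; ekma_estimate and the arguments
describing the R distribution are placeholders for the study's model and
fitted regression.

    # Sketch of the Monte Carlo method: N precursor pairs -> N adjusted-O3
    # distributions -> k sampled maxima. ekma_estimate, r_mean, and
    # r_sigma are placeholders, not the study's own code.
    import numpy as np

    rng = np.random.default_rng(0)

    def monte_carlo_maxima(nmoc, nox, ekma_estimate, r_mean, r_sigma, k=1000):
        est = ekma_estimate(nmoc, nox)       # N EKMA O3 estimates
        mu = r_mean(nmoc, nox)               # fitted mean of R per pair
        maxima = np.empty(k)
        for i in range(k):
            r = rng.normal(mu, r_sigma)      # one R draw per pair
            adjusted = est * r               # compensate for model error
            maxima[i] = adjusted.max()       # max over the N samples
        return maxima                        # distribution of O3 maxima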
The Monte Carlo method has several advantages:
• It uses the entire distribution of NMOC and NOX. This contrasts
with the conventional approach to the use of EKMA, which uses
only the median of the distribution of NMOC:NOX ratios.
• It compensates for modeling errors. The error structure of
EKMA is reflected in the distribution of R, and this distri-
bution is used to adjust the EKMA estimates.
• It estimates the distribution of 03 maxima, which provides the
statistics required by the O3 air quality standard.
Both the Monte Carlo and point-estimate methods can be used as screening
tools for statistically analyzing the effectiveness of a control strategy.
Such a statistical approach is consistent with the 03 air quality standard,
which is itself statistical.
To apply EKMA in the manner described, one must have a means of
calculating the accuracy, and its associated probability, of the 03 estimate.
This, of course, presumes that EKMA performance has already been evaluated
following the methods described in this report. Hence one would have
available probability curves and equations for the R ratio, such as shown in
Figure 2-4, for analyzing the accuracy of the estimates.
ACKNOWLEDGMENT
The work reported here was sponsored in part by the U.S. Environmental
Protection Agency and the American Petroleum Institute. The views expressed
in this report are the authors' and do not necessarily represent those of the
sponsors.
REFERENCES
Dodge, M.C. 1977. Effect of Selected Parameters on Predictions of a
Photochemical Model. EPA-600/3-77-048, U.S. Environmental Protection Agency,
Research Triangle Park, NC (June).
Jeffries, H.E., K.G. Sexton, and C.N. Salmi. 1981. Effects of Chemistry and
Meteorology on Ozone Control Calculations Using Simple Trajectory Models
and the EKMA Procedure. EPA-450/4-81-034, U.S. Environmental Protection
Agency, Research Triangle Park, NC (November).
Martinez, J.R., C. Maxwell, H.S. Javitz, and R. Bawol. 1982. Evaluation of
the Empirical Kinetic Modeling Approach (EKMA). Draft Final Report, SRI
Project 7938, SRI International, Menlo Park, CA (April).
Martinez, J.R., and C. Maxwell. 1982. Evaluation of New Versions of the
EKMA. Interim Report, SRI Project 3502, SRI International, Menlo Park, CA
(January).
Trijonis, J., and D. Hunsaker. 1978. Verification of the Isopleth Method for
Relating Photochemical Oxidant to Precursors. EPA-600/3-78-019, U.S.
Environmental Protection Agency, Research Triangle Park, NC (February).
U.S. Environmental Protection Agency. 1977. Uses, Limitations, and Technical
Basis of Procedures for Quantifying Relationships Between Photochemical
Oxidants and Precursors. EPA-450/2-77-021a, U.S. Environmental Protection
Agency, Research Triangle Park, NC (November).
U.S. Environmental Protection Agency. 1978. Procedures for Quantifying
Relationships Between Photochemical Oxidants and Precursors: Supporting
Documentation. EPA-450/2-78-021b, U.S. Environmental Protection Agency,
Research Triangle Park, NC (February).
U.S. Environmental Protection Agency. 1980. Guideline for Use of City-
Specific EKMA in Preparing Ozone SIP's. EPA-450/4-80-027, U.S. Environmental
Protection Agency, Research Triangle Park, NC (October).
Whitten, G.Z., and H. Hogo. 1978. User's Manual for Kinetics Model and Ozone
Isopleth Plotting Package. EPA-600/8-78-014a, U.S. Environmental Protection
Agency, Research Triangle Park, NC (July).
WORKSHOP COMMENTARY
DIMITRIADES: How did you deal with the variation in sunlight intensity from
day to day? Did you screen out all the data for cloudy days?
MARTINEZ: No. We were interested in predicting the maximum potential O3 for
an area. So we fixed the meteorological variables. In essence, we had one
set of meteorological conditions for all the points I showed you. The only
things that changed were the HC and the NOX.
We fixed the meteorological variables, based on the data analysis. Like
standard EKMA, it has fixed meteorology, but the meteorology is tailored for a
city in the city-specific case, and based on the data analysis.
DIMITRIADES: Let me see if I understand. When you say that in those points
EKMA overpredicts, couldn't that be because during those days the sunlight
intensity was low and that's why we had an overprediction?
MARTINEZ: It's possible. The wind may have been 40 mph. There are a number
of reasons for that.
DIMITRIADES: Well, you're not just saying that the EKMA model is bad?
MARTINEZ: No, in fact, I consider overpredictions very useful because they
give you the upper bounds that can help you in the go/no-go type screening.
Underpredictions are more worrisome to me.
Basically, I wanted to characterize the errors of EKMA used in this way.
When I say that EKMA overpredicts, I'm not saying that it is bad.
ESCHENROEDER: I have a couple of questions, Dr. Martinez, which probably
indicate that I don't really understand the Monte Carlo method.
One question is how do you enter the method and get your joint
distribution of HC or NMOC and NOX points, if it is for some time in the
future or some configuration that doesn't—
MARTINEZ: That has nothing to do with the Monte Carlo method. That is an
input.
ESCHENROEDER: An input of historical statistics.
MARTINEZ: You can do it several ways. You can, given an 03 control strategy
to control HC's and NOX such as Dr. Trijonis discussed here, take an existing
distribution and decrease HC, the mean HC, by 20% and the mean NOX by 15%.
You also have to consider the correlation between the two. Then, if you
want to, you can keep the same variance as the existing distribution and
change that everywhere, then use that as your base distribution.
Alternatively, you can try to come up with an emissions model that can
give you sources and that can relate the sources to the concentrations of HC
and NOX. That is simpler than estimating O3 from the emissions themselves.
ESCHENROEDER: So you are saying that you don't need to know what the joint
frequency distribution of these might be in the future. You just take a point
and then find out what the probability of your 03 prediction is as far as the
correctness of over or underpredicting?
MARTINEZ: You have to postulate a joint probability distribution for the HC
and NOX.
ESCHENROEDER: You do have to know that then?
MARTINEZ: Yes.
ESCHENROEDER: Okay. I think that is the problem.
MARTINEZ: It could be just a set of points. You don't have to say that it is
a Gaussian or a log normal distribution or whatever. Just a set of points
will do it.
ESCHENROEDER: I'm thinking of the user who, having read this in the Federal
Register, says that it's hard enough just to take existing data, not to
mention an estimated joint frequency distribution of NMOC.
MARTINEZ: I think, looking at this from California, we haven't really—
ESCHENROEDER: I'm trying to think what step—
MARTINEZ: —handholding with the potential user, but that is a necessary
step, no question.
ESCHENROEDER: The second question is, does the Monte Carlo approach actually
constitute a rational method of correcting EKMA? In other words, are you
considering what, when there is an overprediction, you're going to modify?
MARTINEZ: Yes, provided, of course, that you have good statistics for your
EKMA errors and for your distribution of what the ratio is. It certainly
compensates for modeling errors.
ESCHENROEDER: Is there any other rational method for correction? In other
words, given that the patterns are similar from city to city and model to
model, is there some simple, rational correction scheme for EKMA that might
have occurred to you in doing this?
MARTINEZ: I would have to look into that. That would be an extension of the
present work. This is fairly new.
ESCHENROEDER: I would encourage that because it looks like there may be some
way of fixing the patterns.
MARTINEZ: Yes, undoubtedly.
ESCHENROEDER: Purely empirical.
MARTINEZ: There are a lot of things behind this that I haven't mentioned, for
lack of time. One is that the city-specific EKMA and standard EKMA
predictions were all very highly correlated, and I mean 0.98 and 0.99.
It seems like one should be able to derive one from the other and then go
with only one of them. It may be that standard EKMA is the one to choose.
That should be investigated.
McRAE: Dr. Martinez, I would like to follow up on a question that Dr.
Dimitriades raised with regard to the choice of the meteorological conditions.
How did you go about choosing those meteorological conditions that would
produce the maximum O3 in the region?
Related to that question, if you put in the specific meteorology of the
day, wouldn't that, using city-specific EKMA, tend to tighten up the accuracy
bounds on the predictions?
MARTINEZ: Well, no.
Concerning the first question, we examined the data bases for all these
places, and our concern was to come up with meteorological conditions that
were associated with high 03. The data that went into the model were those
obtained on the date when the highest O3 occurred. We felt that on that day
there was enough sunlight.
Concerning the second question, the NO2/NOX ratio was the average of the
ratios for the 5 days with the highest 03.
The background O3 was the same for the cases that we have information on.
Background 03 was the average of the 5 days of the highest observed O3
concentrations.
We used wind speeds and wind trajectories to come up with post-8:00 a.m.
emissions.
We took the emissions inventory for the areas in question, we plotted the
emissions on a map, and then we drew a set of circles based on the central
region, and computed emissions densities along trajectories.
The trajectories were based on several days, 5 days or so, on which
the highest O3 concentration values were observed, so you have fairly low
wind speeds for those days.
We averaged the trajectories out and came up with the amount of the
increments of the emissions after 8:00 a.m.
So if you have a slow trajectory, it means lots of emissions for a while
after 8:00 a.m. If the winds are high, then, of course, you don't have as
much emissions input.
McRAE: The second question was, why wouldn't you tighten up the error bounds
if you used better characterization of the physical processes?
MARTINEZ: Could you rephrase the question?
LLOYD: If you put in the specific meteorology for those 5 days, since you are
putting more physics into the problem, I would expect that you would—
MARTINEZ: We have data for a lot of other days in there that have somewhat
different meteorology, and that meteorology is reflected in the ambient
concentrations of HC and NOX. So, it probably washes out.
LLOYD: Did you check the chemical mechanism against any other data base
besides the St. Louis one?
MARTINEZ: No. But I'm going to check it against the Houston data base.
UNIDENTIFIED SPEAKER: In the State Implementation Plan (SIP) guidelines for
the application of the EKMA model, we request the states and local agencies to
use day-specific meteorological inputs for each day that they model, and,
indeed, we have found that the results obtained with the model are pretty
dependent on these day-specific meteorological inputs.
I think that has also been found to be the case when mechanisms other
than Dodge's have been incorporated into the model.
One of the interesting observations you had, however, was that when you
switched from standard inputs to city-specific inputs, not all of the
mechanisms reacted to that in exactly the same way.
For example, there was a difference between Carbon Bond II and the
Demerjian mechanisms in St. Louis, and I suspect that is a topic that we will
probably be discussing tomorrow.
MARTINEZ: I have a lot of other things that I could tell you about that.
First of all, the Demerjian mechanism behaved very differently in many, many
respects.
For one thing, if you plot the relative change in the O3, going from the
standard to a city-specific EKMA model, as a function of the input HC, it's
essentially over 25% for all the hundreds of points.
Regardless of what the value for the O3 concentration was, you got a
horizontal line. There appears to be a saturation here. No matter what you
do to that model, it reduces everything by 25%. That's very strange.
Another thing is that whereas the predictions of the Dodge and Carbon Bond
II models were linearly correlated, and generally highly linearly correlated,
the predictions of the Demerjian model and the other two were correlated in a
nonlinear fashion.
In fact, the parabolic curve is quite noticeable. You can fit a nice
parabola to that, and the correlation then becomes very high. The Demerjian
model does things in a different way compared to the other two.
DUNKER: When you go to these other mechanisms, they have different HC
species. Particularly, the Carbon Bond mechanism uses ethylene and aromatics.
Have you changed the HC mix for those tests?
MARTINEZ: Yes, Dr. Jeffries has. Based on his work with the EKMA model, he
was able to translate the overall HC (NMOC) concentration into different
species.
DUNKER: But aren't you then comparing not simply different mechanisms, but
possibly different HC reactivities?
MARTINEZ: That is possible.
DUNKER: You are comparing it in terms of absolute 03 with observed ambient
data. I think that would be important.
MARTINEZ: It's possible that that is the case. The only thing that we know
for sure is that the different HC species in the Demerjian model add up to the
same number as the ones in the Carbon Bond II model. But, their speciation is
somewhat different.
Dr. Jeffries, would you like to address that point?
JEFFRIES: I'll talk a little bit more about that tomorrow when I give my
presentation on the mechanisms in St. Louis.
For the St. Louis work, we took the emissions inventory values that had
been used for the 10 days that Dr. Whitten is going to talk about and I will
speak about briefly tomorrow.
All the mechanisms had the same composition, but each mechanism, of
course, has its own rules for how you take a particular composition and input
it into the mechanism. Each mechanism has its own set of carbon numbers, its
own set of average molecular weights. The Carbon Bond II has a different
treatment than the Demerjian mechanism, and to the extent possible, we took
into account all those factors in putting them in.
I'll show comparisons of the mechanisms against Demerjian's tomorrow.
DUNKER: I guess I'm saying that you can run, for example, the Carbon Bond II
mechanism with the same butane/propene mix.
JEFFRIES: That was not done. The effort was not to make Carbon Bond II
simulate Dodge's mechanism; the effort was to make Carbon Bond II simulate the
St. Louis data.
DUNKER: Right, but I think there are two questions here. One is the
difference in chemical mechanisms, and possibly rate constants and molecular
weights. The second question is the difference in HC reactivities as you go
from the EKMA mechanism, which has a limited number of HC species, to some of
these other mechanisms where you can have other species in there like ethylene
or aromatics.
When you're comparing, you can then change to another mechanism. When
you put in something more realistic in terms of the inventory, it's not
surprising that you might get different 03 values.
MARTINEZ: That's the object.
DIMITRIADES: We will be discussing this tomorrow, the mechanisms.
WHITTEN: Dr. Martinez, I am troubled by the whole basic approach that you've
taken here.
My understanding of EKMA is that it is a model where the meteorology is
held completely fixed and only the emissions, the concentrations, and the
chemistry are allowed to vary on different points of the diagram. So I am
troubled that you've taken different HC and NOX concentrations, which, to my
mind, are not changing because of the emissions. Emissions from day to day
are usually quite constant.
MARTINEZ: That's not true.
WHITTEN: The difference in the HC and NOX concentrations that you see in the
atmosphere is due primarily to the differences in meteorology and micro-
meteorology from day to day. You might have poor mixing in the morning and
get a pocket of HC one day that doesn't come down that same street the next
day, even though the meteorology, generally, is somewhat the same. So you see
large fluctuations in HC and NOX concentrations, but these fluctuations are
not the same kind of fluctuations that the model is intended to take into
account.
These are meteorological fluctuations, not emissions fluctuations.
MARTINEZ: I disagree on several counts. First of all, when you say that the
meteorology is held fixed on a diagram produced, how do you think that when
you produce a diagram you have to change HC and NOX to—
WHITTEN: That's all that's changed.
MARTINEZ: But, in reality, how does that change, in—
WHITTEN: But the model is designed to have fixed meteorology on every single
point of that diagram. The only thing that changes—
MARTINEZ: All I'm doing is changing the HC and the NOX, except that I'm using
atmospheric measurements of HC and NOX and you're estimating HC and NOX.
WHITTEN: I think there are fluctuations due to instrument calibration,
micrometeorology, and other meteorology. These are not fluctuations in
emissions that you're looking at.
I think that you're not really testing the model. You're testing
something else.
MARTINEZ: I'm testing it in a different way.
TRIJONIS: I have studied this problem in Los Angeles. I have to agree with
Dr. Whitten that there is definitely a correlation between meteorology and the
morning HC/NOX ratio.
Low ratio days tend to be winter days; high ratio days tend to be summer
days. There are also other things. High ratio days tend to be days of
greater carryover.
There is a correlation between the meteorology and the ratio.
MARTINEZ: No, if there weren't, we wouldn't have any problems.
It's a different way of looking at EKMA. You can consider EKMA as a
transfer function. It's an engineering point of view with which I've
approached the model.
I instinctively dislike using, for example, a single ratio, such as a
median ratio, to characterize a whole distribution of NMOC/NOX ratios. That
was one point that led me to think along these lines.
The sensitivity to the HC/NOX ratio has been shown to be very large,
which is only to be expected from the shape of the contours and the way the
ratio is plotted. Any rather small change in the ratio will give you large
changes in the 03.
To produce an O3 isopleth diagram, you fix everything except the HC and
the NOX, and then you postulate a lot of HC's and NOX to give you points on
that diagram through which you draw your O3 isopleths. That is basically the
same thing I was doing, except that I didn't draw any isopleths, and instead
of postulating HC's and NOX, I used measured HC and NOX.
[AUTHOR'S NOTE: Upon further reflection, I agree with Dr. McRae that it may
be possible to obtain smaller error bounds for the predictions by using day-
specific meteorology.]
3. PREDICTING OZONE FREQUENCY DISTRIBUTIONS FROM OZONE ISOPLETH DIAGRAMS
AND AIR QUALITY DATA
Harvey E. Jeffries
Environmental Sciences and Engineering
University of North Carolina
Chapel Hill, NC 27514
and
Glenn T. Johnson
School of Mathematics and Physics
Macquarie University
North Ryde, New South Wales, 2113
Australia
ABSTRACT
Air quality data are used to derive an ozone isopleth surface that
predicts the observed ozone frequency distribution from the observed joint
distributions of hydrocarbons and oxides of nitrogen. The method uses a
seven-parameter mathematical description of the ozone isopleth surfaces. The
process begins with parameter values that describe the standard Dodge
chemistry EKMA surface or a standard carbon-bond chemistry EKMA surface, and
uses a nonlinear convergence technique to find the parameters for a specific
city. Once a surface is found, the precursor distributions can be modified
and the surface used to predict the new ozone distribution. The method has
been applied to St. Louis, MO Regional Air Pollution Study data.
INTRODUCTION
The U.S. Environmental Protection Agency (EPA) has suggested four levels
of ozone (03) modeling analysis that could be applied to urban areas (Federal
Register, 1979). These are photochemical dispersion models, simplified
trajectory models, the standard Empirical Kinetic Modeling Approach (EKMA),
and city-specific EKMA. The last three are based on the concept of generating
a maximum O3 surface as a function of initial nonmethane hydrocarbon (NMHC)
and oxides of nitrogen (NOX) concentrations. A particular surface is created
by performing simulations with a photochemical mechanism under a fixed set of
meteorological conditions together with fixed emissions characteristics and
boundary conditions.
Although each of the assumptions used in choosing the 03 surface can be
justified on the basis of available knowledge or expediency, taken together
they do not seem to constitute a solid foundation on which to build control
strategies. Jeffries et al. (1981) showed that different sets of assumptions,
all justifiable, led to 03 surfaces which for a given scenario (in St. Louis)
required hydrocarbon (HC) controls that varied from 15 to 80%. The two most
significant factors in this variation were the choice of a representation of
meteorological conditions for a particular day and the choice of a
photochemical mechanism.
Despite the difficulties inherent in these simple modeling analyses, they
have been pursued on the hypothesis that when model predictions are not in
agreement with observations, a procedure (EKMA) is available to correct the
application of the 03 surface to ambient conditions, and control strategies
can then be evaluated. Jeffries et al. (1981) showed, however, that this
technique can produce results in which the degree of control calculated is
correlated with the degree of absolute prediction of the maximum 03 in the
base case.
An alternative approach, first introduced by Post (1979), is to determine
an 03 surface that relates the observed distribution of precursor
concentrations with the observed distribution of maximum 03 concentrations.
That is, an 03 isopleth diagram is used not to predict the concentration of 03
from the concentration of precursors, but to predict the relative frequency of
occurrence of 03 concentrations. Such a surface does not imply a single set
of meteorological conditions, but embodies a probability distribution of
conditions specific to the period and location of the data collection. It
does not attempt to predict the maximum 03 concentration measured on a
particular day, but rather the surface, together with the precursor
distribution, represents a statistical transformation of the precursors into
measured 03. That such a surface might exist was suggested by the work of
Post (1979). Holton and Jeffries (1979) found an easy method for manipulating
the surfaces.
As part of the Sydney Oxidant Study, Post (1979) had obtained HC, NOX,
and O3 data along 16 trajectories. When the initial HC and NOX concentrations
were plotted on a standard Dodge isopleth diagram, Post observed that if the
03 values of the isopleth lines were multiplied by 0.6, then the diagram would
be nearly absolute. Further, when the more than 800 half-hourly averaged
pairs of NMHC and NOX from 6 to 10 a.m. were plotted on this scaled diagram,
the relative frequencies of predicted 03 concentrations agreed well with the
observed frequencies of 03 obtained by the five-station 03 monitoring network
operated by the State Pollution Control Commission.
Holton and Jeffries (1979) noted a relatively simple mathematical
formulation for maximum 03 surfaces as functions of precursor concentrations.
This formulation is based on a rotation of the HC and NOX axes through an
angle (90 - θ), where θ is the angle between the ridge line and the NMHC
axis. The ridge line is that line through the points on each isopleth
representing the minimum sum of [NMHC] and [NOX]. If the new axes are called
D and L, where D is positive to the right of the ridge line and L is positive
along the ridge line (Figure 3-1), then

    L = [NMHC] cos θ + [NOX] sin θ                                  (3-1)

    D = [NMHC] sin θ - [NOX] cos θ                                  (3-2)
Holton and Jeffries (1979) found that for concentrations less than about 2 ppm
the ridge line points lie on a straight line. They also found that an
adequate representation of a particular O3 surface was a power function of the
form:

    [O3] = cL^n                                                     (3-3)
[Figure 3-1. The rotated axes D and L relative to the ridge line of an O3
isopleth surface.]
Equation (3-3) describes the O3 concentration
up the ridge line (D = 0), with an elliptic cross-section for D > 0 and an
exponential cross-section for D < 0. Holton (1981) suggested a more general
form:

    [O3] = cL^n [1 - (D/(L tan θ))^g]^h     for D > 0

and

    [O3] = cL^n exp[-q(-D/L)^r]             for D < 0               (3-4)
It was decided to extend the work of Post by fitting a surface of the form of
Equation (3-4) to the precursor and maximum O3 concentrations forming part of
the St. Louis data base.
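For concreteness, Equations (3-1) through (3-4) can be collected into a
single function. The sketch below is an illustration of the functional form
only, not code from the study; it assumes theta is expressed in radians.

from math import cos, sin, tan, exp

def o3_surface(nmhc, nox, c, n, theta, g, h, q, r):
    """Maximum O3 (ppm) for morning [NMHC] and [NOX] via Equation (3-4)."""
    L = nmhc * cos(theta) + nox * sin(theta)   # along the ridge line (3-1)
    D = nmhc * sin(theta) - nox * cos(theta)   # signed offset from it (3-2)
    if L <= 0.0:
        return 0.0
    if D >= 0.0:  # right of the ridge line: elliptic cross-section
        bracket = 1.0 - (D / (L * tan(theta))) ** g
        return c * L ** n * max(bracket, 0.0) ** h
    return c * L ** n * exp(-q * (-D / L) ** r)  # left side: exponential

Called with the first row of Table 3-2, for example, o3_surface(1.0, 0.15,
0.307, 0.657, 0.198, 1.60, 0.500, 9.0, 1.6) would give the standard-surface
estimate for [NMHC] = 1.0 ppmC and [NOX] = 0.15 ppm.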
GENERAL PROCEDURE
The functional form Equation (3-4) for the O3 surface involves seven
parameters: c, n, θ, g, h, q, and r. Suppose

    (Hij, Nij),    i = 1,...,m sites,  j = 1,...,n days

is a set of measured HC and NOX precursor pairs and that

    Zj = max {Zij},    i = 1,...,m sites
is the maximum of the measured O3 concentrations on day j. Then if

    Z*j = max {f(Hij, Nij; c, n, θ, g, h, q, r)},   i = 1,...,m     (3-5)

is the maximum of all the O3 concentrations predicted by Equation (3-4) for
each day, the seven parameters can be chosen so that the frequency
distribution of Z*j, the predicted values, matches as closely as possible the
frequency distribution of Zj, the measured regional O3 maximum, over all days
j = 1,...,n.
The algorithm used to find the values of the parameters was a
modification of the nonlinear least-squares estimation of Marquardt (1963).
Rather than minimizing the differences in concentrations on the same day,
    Σj [Z*j - Zj]²,

the value of

    Σi [Fi(Z*) - Fi(Z)]²

was minimized, where Fi(Z) is the percentage frequency of concentrations in
class i, with an interval width of 0.01 ppm O3 (Table 3-1).
The following iterative procedure was adopted:
(1) Choose initial estimates for the parameters which specify the O3
surface.
(2) Use this surface to predict a maximum O3 concentration for each
day of the period, given all of the precursor pairs measured
during the morning period of that day.
(3) Check the predicted maximum O3 frequency distribution against the
observed distribution.
(4) Modify the parameters and return to step 2.
Because of the large number of precursor pairs involved, it was more
efficient to note for each day the particular precursor pair that generated
maximum O3 and to retain that set of pairs through several iterations before
returning to the complete precursor set.
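As a sketch of the quantity being minimized, reusing the hypothetical
o3_surface function given earlier (an illustration only; a generic
least-squares minimizer would stand in here for the Marquardt algorithm):

def objective(params, daily_pairs, observed_freq, width=0.01):
    """Sum of squared differences between predicted and observed percentage
    frequencies of daily regional O3 maxima (classes of width 0.01 ppm)."""
    c, n, theta, g, h, q, r = params
    nclass = len(observed_freq)
    pred = [0.0] * nclass
    for pairs in daily_pairs:  # all precursor pairs for one morning
        z = max(o3_surface(hc, nox, c, n, theta, g, h, q, r)
                for hc, nox in pairs)  # Z*_j of Equation (3-5)
        i = min(int(z / width), nclass - 1)  # frequency class of Z*_j
        pred[i] += 100.0 / len(daily_pairs)  # percentage frequency F_i(Z*)
    return sum((p - o) ** 2 for p, o in zip(pred, observed_freq))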
It is possible that several different surfaces will generate a maximum 03
frequency distribution similar to that observed. The assumption was made that
the final surface should have a shape similar to those generated by
simulations using chemical mechanisms derived from smog chamber studies and
kinetics models. For this reason the initial approximation to the St. Louis
surface was chosen to be the standard Dodge isopleth surface generated by the
default conditions in the Ozone Isopleth Plotting Package (OZIPP) computer
program (Figure 3-2). It was, therefore, necessary to select parameter values
for the functional form Equation (3-4) which reproduced the Dodge surface.
The values that give a close approximation are shown in the first row of
Table 3-2. The fitted isopleth surface is shown in Figure 3-3.
TABLE 3-2. PARAMETERS FOR O3 ISOPLETH SURFACE EQUATION*

                      Ridgeline              Right          Left
Surface            c       n       θ       g       h       q      r
Standard Dodge   0.307   0.657   0.198   1.60    0.500    9.0    1.6
St. Louis Dodge  0.143   0.668   0.192   1.54    0.529    8.78   1.6
CBII (Med React) 0.50    0.640   0.56    1.70    0.56     5.4    1.5
St. Louis CBII   0.140   0.639   0.562   1.70    0.562    5.4    1.5

* [O3] = cL^n [1 - (D/(L tan θ))^g]^h for D > 0; [O3] = cL^n exp[-q(-D/L)^r]
for D < 0; L = [NMHC] cos θ + [NOX] sin θ; D = [NMHC] sin θ - [NOX] cos θ.
[Figure 3-2. The standard Dodge isopleth surface generated by the OZIPP
default conditions.]
[Figure 3-3. The fitted approximation to the standard Dodge isopleth surface
(first row of Table 3-2).]
TABLE 3-1. REGIONAL MAXIMUM O3

Class   Concentration (ppm)   Observed   Model Expectation
  0         0.00-0.01             0              0
  1         0.01-0.02             0              0
  2         0.02-0.03             1              2
  3         0.03-0.04             1              3
  4         0.04-0.05             4              3
  5         0.05-0.06            15             11
  6         0.06-0.07            19             20
  7         0.07-0.08            22             22
  8         0.08-0.09            29             31
  9         0.09-0.10            15             19
 10         0.10-0.11            26             26
 11         0.11-0.12            26             20
 12         0.12-0.13            15             15
 13         0.13-0.14            10             15
 14         0.14-0.15            14             15
 15         0.15-0.16            11              8
 16         0.16-0.17            10              6
 17         0.17-0.18             6              4
 18         0.18-0.19             2              5
 19         0.19-0.20             4              4
 20         0.20-0.21             3              4
 21         0.21-0.22             0              1
 22         0.22-0.23             3              0
 23         0.23-0.24             2              3
 24         0.24-0.25             1              2
 25         0.25-0.26             2              1
 26         0.26-0.27             0              0
 27         0.27-0.28             1              1
 28         0.28-0.29             0              1

Total: 242 days.
APPLICATION TO ST. LOUIS
The Regional Air Pollution Study (RAPS) was conceived by EPA early in
1970 to provide a rational scientific basis for the management of air quality.
In the program 25 regional air monitoring stations (RAMS) were placed
throughout the study region (Figure 3-4). The stations, numbered from
101 to 125, were thought to be located where they would not be unduly
influenced by any one source or group of sources. The network was operated
for two years, 1975-1976.
The RAPS data base was obtained from EPA. It contained data on 731 days.
The 2-year average O3 concentration was 0.077 ppm. The O3 standard of
0.12 ppm was exceeded on 105 of the days (14%). The daily maximum O3 occurred
at station 122 on 181 days (25%) and at station 124 on 110 days (15%). These
stations were approximately 50 km north and south of the center city. Four
stations, 122, 124, 109, and 118, were the sites for the daily maximum O3 on
62% of the days.
For the purposes of this study, the period June through September of each
year was selected. Analysis of HC, NOX, and O3 data suggested that this
period was a good approximation to a "smog season"; 244 days were included in
this period. In choosing the regional O3 maximum for each day (i.e., a single
value per day), only hourly averages between 1200 and 2100 LST were selected.
Two days were omitted because no station recorded a maximum above 0.005 ppm;
precursors for these 2 days were also excluded.
[Figure 3-4. The regional air monitoring stations network. Circles denote
radius in km from the Jefferson Arch Memorial in downtown St. Louis.]
Precursor data were pairs of NMHC and NOX data for each of the
25 stations for each of the five hourly periods from 7 a.m. to noon LST.
There were 18,790 such pairs. The joint distribution is shown in Figure 3-5.
One worrisome feature of the test data set was the presence of negative
NMHC values. Because the NMHC concentrations were calculated by subtraction
of methane from total HC, it was assumed that the negative values represented
very small concentrations. The complete 2-year data set was analyzed to
check this.
The negative concentrations were found at the same time as low NOX
values. In 67% of the negative concentration cases, the NOX value was less
than 0.01 ppm. Also, they were present more often later in the morning in the
summer months when photochemical reaction is more likely. The month of most
frequent occurrence was July (13%) and the month of least frequent occurrence
was December (5%). Finally, they were more often obtained from stations away
from emissions sources, rather than those in the city region. These results
justified the assumption of very small concentrations.
Although setting these NMHC values to zero presented no problem, it did
raise the question of what to do about precursor pairs having very low
concentrations. Twenty-four percent had NMHC concentrations less than
0.01 ppm. To include all precursor pairs in the analysis in the manner of
Post would distort the O3 surface derived. This is a primary reason for the
rather different procedure that was followed, that is, the selection of the
highest potential O3-forming precursor pair for each day.
[Figure 3-5. Joint NOX/NMHC distribution for June to September: (a) using
increments of 0.1 ppmC for NMHC and increments of 0.01 ppm for NOX, and
(b) using increments of 0.02 ppmC for NMHC and increments of 0.002 ppm for
NOX.]
The procedure was applied to determine a St. Louis 03 surface that would
predict the frequency distribution of the regional maximum hourly average O3
over the June through September periods of 1975 and 1976. The observed O3
frequency distribution for this period is given in Table 3-1. The Dodge
fitted surface was used as the source of the starting values for the
parameters (row one, Table 3-2).
The parameters for the best fitting St. Louis 03 surface are given in the
second row of Table 3-2. The surface is shown in Figure 3-6. The precursor
pairs selected in the fit are shown in Figure 3-7. The predicted and observed
frequency distributions and the cumulative distributions are shown in
Figures 3-8 and 3-9.
On only 10 of the 242 days did the chosen precursor pair give a negative
value of D (i.e., to the left of the ridge line). Thus the shape of the
surface there was not well determined by the data. The major change made to
the standard OZIPP/EKMA surface was to move the isopleths farther apart, that
is, to lower the parameter c. This fitted surface is quite similar to that
obtained empirically by Post for Sydney. This O3 surface is able to reproduce
the bimodal shape of the 03 frequency distribution (Figure 3-8) and to match
the cumulative frequency distribution (Figure 3-9).
[Figure 3-6. The fitted St. Louis O3 surface (second row of Table 3-2).]
[Figure 3-7. The precursor pairs selected in the fit of the St. Louis
surface.]
[Figure 3-8. Predicted and observed O3 frequency distributions.]
[Figure 3-9. Predicted and observed cumulative O3 frequency distributions.]
A chi-square goodness-of-fit test was applied to see whether the
differences between the observed and predicted frequencies were significant.
The classes were grouped so that only one group had an expected frequency less
than five, resulting in 17 groups. Since seven parameters were used in the
model, the number of degrees of freedom was nine. Now

    χ² = 13.0 < χ²(9, 0.90),
and the null hypothesis, that the observed frequencies conform to the expected
model frequencies, was not rejected.
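As an arithmetic sketch of the test (assuming the grouped observed and
expected counts are already in hand; this is an illustration, not the
authors' code):

def chi_square(observed, expected):
    """Pearson chi-square statistic over grouped frequency classes."""
    # expected counts are positive after the grouping described above
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 17 groups less 7 fitted parameters less 1 leaves 9 degrees of freedom;
# the statistic (13.0 here) is compared with the 90th-percentile critical
# value of chi-square on 9 degrees of freedom.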
Because the 242 precursor pairs chosen by the final surface fit were
retained, it was possible to analyze their characteristics. More than 87% of
these pairs came from the period from 7 to 9 a.m. (Table 3-3) and nearly 85%
were provided by the seven innermost stations (Table 3-4).
PARAMETER SENSITIVITY
To test how well each parameter was defined by the data, each was varied
by 20%, with the others fixed, and the value of the sum of squares noted. The
data in Table 3-5 show that the parameter c is the best defined of all the
parameters, followed by θ and n. Parameter c is the amount of O3 produced by
1 ppm of reacting material and is strongly influenced by local dilution.
Holton's (1981) study suggests that most chemical mechanisms have a similar
value of n, approximately 0.65. This parameter is related to the competition
TABLE 3-3. PRECURSOR PAIRS SELECTED FROM MORNING TIME PERIODS*

Time Period     No Control     80% HC Reduction
0700-0800          62.0              29.0
0800-0900          25.2              29.3
0900-1000           7.0              20.2
1000-1100           2.5              11.2
1100-1200           3.3               9.5

* Percent of 242.
TABLE 3-4. PRECURSOR PAIRS SELECTED FROM 25 STATIONS*

Station     No Control     80% HC Reduction
101             8.7              15.3
102            11.2               3.3
103             4.5               1.2
104            13.2              21.1
105            18.2              15.3
106             9.9               7.4
107            19.0              10.3
Others         15.3              26.1

* Percent of 242.
TABLE 3-5. EFFECT OF PARAMETER VARIATION ON SUMS OF SQUARES* OF DIFFERENCE
BETWEEN OBSERVATIONS AND PREDICTIONS

Relative Change      c       n       θ      g      h      q      r
+0.20             154.3    53.6   91.5   49.1   41.3   27.0   28.0
-0.20             187.2   128.4   97.3   48.5   55.3   28.0   26.6

* Best fit surface had a sum of squares of 27.0.
131
-------
3. PREDICTING OZONE FREQUENCY DISTRIBUTIONS Jeffries and Johnson
for the hydroxyl (OH) radical among the aldehydes produced and their parent HC
compounds. Holton also showed that θ is related to HC reactivity and light
intensity; increasing either increases θ. The lack of sensitivity of the
parameters q and r is due to only 10 of the precursor pairs giving negative D
in Equation (3-2). The reduction of r by 20% actually improved the model fit,
but the improvement of less than 2% in the sum of squares for a change of 20%
in the parameter was not accepted because of the constraint that the surface
shape should be determined by the likely behavior of a chemical mechanism such
as the standard Dodge surface.
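The sensitivity check itself reduces to a small loop over the parameters,
reusing the hypothetical objective function sketched earlier (an
illustration only):

def sensitivity(best_params, daily_pairs, observed_freq):
    """Sum of squares when each parameter in turn is varied by +/-20%."""
    results = {}
    names = ("c", "n", "theta", "g", "h", "q", "r")
    for i, name in enumerate(names):
        for rel in (0.20, -0.20):
            trial = list(best_params)
            trial[i] *= 1.0 + rel  # vary one parameter, hold the others fixed
            results[(name, rel)] = objective(trial, daily_pairs, observed_freq)
    return results  # compare each entry against the best-fit value of 27.0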
CONTROL STRATEGIES
Air pollution models have been developed primarily to evaluate
alternative control strategies. One of the advantages of the approach pursued
in this study is the ease with which different strategies can be tested. It
was assumed that a reduction in emissions levels would be reflected in the
same reduction in morning precursor concentrations. Thus a given reduction was
applied to all 18,790 precursor pairs in the original data set and, from the
fitted maximum O3 surface, the future frequency distribution of O3 for the
summer period was estimated for each of the reductions. The predicted
distributions are given in Table 3-6 and are compared in Figures 3-10
and 3-11.
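The strategy test amounts to a uniform scaling of the precursor pairs
followed by re-evaluation of the fitted surface. A sketch under the same
assumptions as the earlier fragments (not the study's code):

def control_distribution(daily_pairs, params, hc_frac, nox_frac):
    """Daily regional O3 maxima after scaling all precursor pairs, e.g.,
    hc_frac = 0.40 and nox_frac = 1.00 for 60% NMHC control."""
    c, n, theta, g, h, q, r = params
    maxima = []
    for pairs in daily_pairs:
        scaled = [(hc * hc_frac, nox * nox_frac) for hc, nox in pairs]
        maxima.append(max(o3_surface(hc, nox, c, n, theta, g, h, q, r)
                          for hc, nox in scaled))
    return maxima  # tabulate cumulative percentages to reproduce Table 3-6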
TABLE 3-6. CONTROL PREDICTIONS (STANDARD EKMA)
CUMULATIVE PERCENTAGE OF REGIONAL DAILY MAXIMA

NMHC:        1.00   0.80   0.60   0.40   0.20   1.00   0.60   0.40
NOX:         1.00   1.00   1.00   1.00   1.00   0.80   0.80   0.80

Ozone (ppm)
.01           0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0
.02           0.0    0.4    0.4    1.2    5.8    0.0    0.4    1.2
.03           0.8    1.2    2.1    2.5   17.8    1.7    1.7    2.9
.04           2.1    2.5    2.9    6.2   43.0    2.1    4.1    7.0
.05           3.3    4.1    7.4   17.8   69.8    5.0   11.2   20.2
.06           7.9   10.7   18.2   33.5   85.5   10.7   22.3   36.4
.07          16.1   21.5   30.6   50.0   91.3   22.3   36.4   50.8
.08          25.2   31.8   43.0   64.5   95.0   34.7   47.1   65.7
.09          38.0   43.8   52.9   75.2   97.9   44.6   59.9   76.4
.10          45.9   53.7   65.3   83.1   99.2   57.0   69.8   83.9
.11          56.6   63.2   73.1   90.1  100.0   64.9   78.1   90.1
.12          64.9   70.2   81.0   94.2          71.9   85.1   94.6
.13          74.1   77.7   86.4   96.7          78.1   89.7   95.5
.14          77.3   83.9   89.3   97.9          85.5   90.5   97.5
.15          83.5   87.2   91.3   98.3          87.6   93.8   97.5
.16          86.8   89.7   95.0   98.8          89.7   95.5   98.3
.17          89.3   92.1   95.9   99.2          92.6   97.5   99.6
.18          90.9   93.8   97.5  100.0          94.2   97.5   99.6
.19          93.0   95.5   97.5                 95.5   97.5  100.0
.20          94.6   96.7   97.5                 96.7   99.2
.21          96.3   97.5   99.2                 96.7   99.2
.22          96.7   97.5   99.2                 97.9   99.6
.23          96.7   98.8   99.2                 98.8   99.6
.24          97.9   99.2   99.6                 99.2  100.0
.25          98.8   99.2   99.6                 99.2
.26          99.2   99.6  100.0                 99.6
.27          99.2   99.6                       100.0
.28          99.6   99.6
.29          99.6  100.0
.30         100.0
[Figure 3-10. Predicted O3 distributions under the control strategies.]
[Figure 3-11. Cumulative predicted O3 distributions under the control
strategies.]
In the calculations the whole precursor-pair data set was used. It was
not assumed that the precursor pairs that generated regional maximum 03 under
a control strategy calculation were the same as those under either no control
or an alternative control.
Given two points representing precursor pairs of similar concentrations,
the procedure for selection will choose the point that is closer to the ridge
line. Under different control options all of the precursor points are moved
relative to the ridge line, and hence the set of selected points will change
slightly (see Tables 3-3 and 3-4). This facility in the procedure tends to
make it more conservative than other simple modeling exercises (EKMA, for
example).
The model suggested that for the St. Louis region NMHC control, rather
than NOX control, was the appropriate path to follow without considering the
question of relative costs. This is also in agreement with the findings of
Post (1979) for Sydney. The model also indicated that under 60% NMHC control
a standard of 0.12 ppm would be exceeded on about 6% of the days in the
4-month period (i.e., 7 days); under 80% control a standard of 0.12 ppm would
be met virtually all the time. The joint HC-NOX distribution for 80%
reduction is shown in Figure 3-12 for comparison with that for no control in
Figure 3-7.
[Figure 3-12. The precursor pairs selected under 80% HC reduction.]
[Figure 3-13. The fitted St. Louis surface obtained with the Carbon Bond II
surface as starting conditions (fourth row of Table 3-2).]
MECHANISM SENSITIVITY
It has already been noted that in this application the maximum 03 surface
was specified poorly to the left of the ridge lines. This section of
surfaces, generated by simulations with chemical mechanisms under fixed
meteorological conditions, is also the least well defined, that is, it is the
section most sensitive to changes in chemical mechanisms (Jeffries et al.,
1981). For this reason, part of the study was repeated using a different
initial surface.
Holton (1981) supplied parameter values which give a maximum O3 surface
generated using the Carbon Bond II mechanism used currently in the Systems
Applications, Inc. urban airshed grid model. The surface resulted from
simulations under summer-solstice light and medium HC reactivity. The
parameter values are given in row three of Table 3-2.
Using the same data base, the iterative procedure described previously
was followed to determine a surface with the same general shape as this
initial surface but which produced a similar O3 frequency distribution to that
observed. The surface is shown in Figure 3-13 and the parameters are given in
row four of Table 3-2. Note that the ridge line parameters are quite similar
to those obtained with the Dodge surface as starting conditions, but the
left-side parameters are quite different. The O3 frequency distribution was
not quite as good a fit (Figures 3-14 and 3-15) as that achieved using the
standard Dodge surface as an initial surface. The initial Carbon Bond II
TABLE 3-7. CUMULATIVE PERCENTAGE OF REGIONAL DAILY MAXIMA
CONTROL PREDICTIONS (CARBON BOND II MECHANISM)

NMHC:        1.00   0.60   0.40   0.20   0.40
NOX:         1.00   1.00   1.00   1.00   0.80

Ozone (ppm)
.01           0.0    0.0    0.0    0.0    0.0
.02           0.0    0.4    0.8    2.1    0.8
.03           0.8    2.1    2.1   13.2    2.9
.04           2.1    2.9    5.0   37.2    7.0
.05           2.9    8.7   16.5   66.6   20.7
.06           9.5   17.8   33.5   83.5   34.7
.07          18.2   30.2   48.3   90.1   49.2
.08          26.4   43.8   63.2   95.0   66.9
.09          39.7   55.0   74.4   98.0   77.3
.10          49.2   66.1   83.5   98.8   88.5
.11          60.7   75.2   90.5   99.6   90.9
.12          69.4   82.2   95.0  100.0   94.6
.13          75.6   88.8   96.7          95.9
.14          81.8   89.7   97.9          97.5
.15          86.8   93.0   98.3          97.5
.16          89.3   95.0   98.8          98.8
.17          90.9   97.1   99.6          99.6
.18          93.0   97.5  100.0         100.0
.19          94.6   97.5
.20          96.3   99.2
.21          96.7   99.2
.22          97.1   99.6
.23          98.3   99.6
.24          99.2  100.0
.25          99.2
.26          99.6
.27          99.6
.28         100.0
[Figure 3-14. Predicted and observed O3 frequency distributions, Carbon Bond
II starting surface.]
[Figure 3-15. Predicted and observed cumulative O3 distributions, Carbon Bond
II starting surface.]
surface, however, was substantially different from the Dodge surface. A
carbon-bond surface of lower reactivity or lower light intensity might have
generated a better fit.
Some of the control strategies tested with the previous surface were
retested (Table 3-7). The maximum O3 distributions predicted by the two
surfaces for controlled conditions are very similar (Figures 3-16 and 3-17).
This is somewhat surprising since, as HC concentrations are reduced, more
precursor points move across the ridge to the region of negative D, the region
where the two surfaces are most different. The reason is that different
precursor pairs are responsible for the maximum 03 under the control strategy
than under standard conditions. Once precursor points move across the ridge
line into the region of negative D, other points with less NOX will be likely
to produce more 03 and will be selected to replace them. Thus there is a
decrease in the NOX concentrations of the precursor points chosen as the HC
control is increased (Table 3-8). The tendency of the procedure is to choose
points closer to the ridge line, because this is where maximum 03 production
occurs, and near the origin at the ridge line the two surfaces are nearly
identical.
DISCUSSION
There are three assumptions in this technique for evaluating emissions
control strategies in relation to the production of 03 in urban atmospheres.
The first is that there exists a simple surface which embodies the probability
[Figure 3-16. Predicted O3 distributions under control: Dodge-based and
Carbon Bond II-based surfaces.]
[Figure 3-17. Predicted cumulative O3 distributions under control: Dodge-based
and Carbon Bond II-based surfaces.]
TABLE 3-8. MAXIMUM O3 PRECURSOR PAIRS
(CUMULATIVE PERCENTAGE)

                  Carbon Bond II           Standard EKMA
             1.00 NMHC   .20 NMHC     1.00 NMHC   .20 NMHC
NOX (ppm)    1.00 NOX    1.00 NOX     1.00 NOX    1.00 NOX
.01             0.0         2.9          0.0         4.5
.02             1.7        11.6          1.7        14.1
.03             4.1        31.0          4.5        33.9
.04             9.9        53.7         11.6        56.6
.05            18.2        63.2         19.4        67.4
.06            26.0        70.2         27.7        74.0
.07            35.1        78.1         36.0        81.8
.08            43.0        83.1         43.8        86.4
.09            50.4        86.8         52.1        89.7
.10            57.4        89.3         59.1        91.3
.11            64.5        93.4         64.9        94.6
.12            71.1        96.7         71.1        97.5
.13            75.6        97.9         75.6        98.8
.14            78.5        97.9         78.5        98.8
.15            81.4        98.3         81.4        99.2
distribution of summertime conditions in a given region and which produces a
distribution of maximum 03 as a function of morning NMHC and NOX
concentrations. This implies that it will be valid to use the surface to
determine O3 distributions for a variety of precursor distributions. The
second is that it is possible to determine the general shape of the surface
from simulations using photochemical mechanisms derived from smog chamber
studies. The third is that parameters can be found which produce an
acceptable fit to air quality data from the region and which maintain the
surface shape.
In this revised application of the Post method the procedure operated in
a stable and consistent manner. Once the surface was determined it was a
simple matter to determine the effects of a number of control strategies.
The reasonableness of the assumptions used can be established by applying
the procedure to different regions, by checking the conclusions against more
complex photochemical models, and by applying the procedure to different
summertime periods in the same region where there have been changes in
precursor concentration levels. The technique has given a good description in
two large cities: Sydney, Australia, and St. Louis, MO.
CONCLUSIONS
Maximum 03 surfaces can be constructed from air quality data and
kinetic-mechanism-derived surfaces to evaluate emissions control strategies.
Nonmethane hydrocarbon emissions control is the most effective strategy
to follow for the St. Louis region. Using 1975 and 1976 as a baseline, a
reduction of 70 to 80% NMHC is predicted to achieve the 0.12-ppm O3 standard.
REFERENCES
Federal Register. 1979. Data collection for 1982 Ozone Implementation Plan
Revisions. 44:65667.
Holton, G.A. 1981. Mathematical Properties of Ozone Precursor Relationships
in Photochemical Mechanisms. Ph.D. Dissertation, University of North Carolina,
Chapel Hill, NC.
Holton, G.A., and H.E. Jeffries. 1979. Mathematical analysis of ozone
isopleths. Conference Ozone/Oxidants Interactions with the Total Environment,
Air Pollution Control Association, Houston, TX.
Jeffries, H.E., K.G. Sexton, and C.N. Salmi. 1981. Effects of Chemistry and
Meteorology on Ozone Control Calculations Using Simple Trajectory Models and
the EKMA Procedure. EPA-450/4-81-034, U.S. Environmental Protection Agency,
Research Triangle Park, NC (November).
Marquardt, D.W. 1963. An algorithm for least-squares estimation of nonlinear
parameters. J. Soc. Indust. Appl. Math., 11:431.
Post, K. 1979. Precursor distributions, ozone formation and control strategy
options for Sydney. Atmos. Environ., 13:783.
WORKSHOP COMMENTARY
ESCHENROEDER: How sensitive is the O3 frequency distribution to the number of
sample days that you choose?
JEFFRIES: I don't know. We didn't have the opportunity to go back through
and try dropping out half the days and a third of the days. Keith Post
dropped out half the days and got the same answers.
ESCHENROEDER: He had how many, 800?
JEFFRIES: He had 2 years of summertime data, 6 months. He halved the
distribution; in other words, half the HC and NOX, and half the O3. He
calculated the frequency distribution for each half and got the same answers
as before. The distributions for the whole year are, of course, different
from the distributions for just the smog season.
ESCHENROEDER: I was thinking that given a certain interval a sparser sample
would give the same results.
JEFFRIES: We don't know how stable the process would be. It's a question of
how good the statistics might be as the numbers drop below a couple of hundred
points. Post had 800 half-hour precursor pairs, and we started out with
18,000 hourly pairs and reduced that to 242. The problem is, how many pairs
out of 242 could be misplaced before significant errors ensue.
As it turns out, the major difference between a Dodge and a carbon-bond
surface is in one little region down in the corner. Of course, Carbon Bond II
picks a different set of points than does Dodge. Each mechanism is going to
choose its own set of points out of the 18,790 initial pairs; assuming that
they are reasonably close, the method would have chosen points very similar
both for Carbon Bond II and for Dodge. Overlaying the carbon-bond isopleth on
top of the Dodge points, we note the Dodge isopleth contains a few points
counted above, whereas in the carbon-bond isopleth they're counted in the
interval below. It's very subtle. That little difference of curvature can
change the fit to the frequency distribution.
The method does converge. We tricked the convergence method, because we
make it converge on the difference between the intervals, which is a little
unusual in terms of the way the algorithm was derived. The algorithm is
intended to converge the absolute O3 versus the predicted O3, and we're
converging the frequency. So it's a strange operation, and it's not as
sensitive as we like. However, once the values get this close, one can almost
do a better job of fitting by inspection.
ESCHENROEDER: But the appealing thing is that the frequency is really what
you want in the end to formulate the control strategy.
JEFFRIES: Yes. I spent a long time trying to argue with Keith Post over
whether this method worked, or whether it was just coincidence. Keith would
always respond that it must be in the model because the model reproduces the
observed frequency distribution. So if O3 aloft is important, fine, it's
already in the isopleth diagram because the precursors reproduce the frequency
distribution. In fact, it's hard to argue with a model that fits well.
McRAE: Did you look at the distribution of O3 to see how it compares with a
conventional log normal distribution?
JEFFRIES: Yes. It's very log normal except for wintertime. It's log normal
for the whole period, but it's not log normal for this subset that we're
looking at here.
ESCHENROEDER: Didn't you have one log-versus-probability plot for frequency
distribution that showed curvature?
MARTINEZ: That was linear probability.
JEFFRIES: No. I do have a strictly log probability plot fitted with the
points from a log normal model and you can't tell the difference. Chi-square
tests indicate they are the same.
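A chi-square check of the kind Jeffries mentions can be sketched as follows.
This is an illustrative reconstruction under the assumption that the
comparison is between observed O3 bin counts and counts expected from a
fitted log-normal distribution.

    import numpy as np
    from scipy import stats

    def lognormal_chi2(o3, n_bins=10):
        # Fit a two-parameter log-normal (location fixed at zero).
        shape, loc, scale = stats.lognorm.fit(o3, floc=0.0)
        # Equal-count bins from the observed quantiles.
        edges = np.quantile(o3, np.linspace(0.0, 1.0, n_bins + 1))
        observed, _ = np.histogram(o3, bins=edges)
        cdf = stats.lognorm.cdf(edges, shape, loc, scale)
        expected = len(o3) * np.diff(cdf)
        expected *= observed.sum() / expected.sum()  # match totals for the test
        # Two fitted parameters cost two extra degrees of freedom.
        return stats.chisquare(observed, expected, ddof=2)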
McRAE: The second question I have is a little bit more general. Since you're
introducing the whole notion of probability to control strategy design, what
would your recommendations to agencies be about how they would use this
information to design control strategies to meet standards which don't match
this same idea? That is, what does this procedure give you that's different
from the conventional approach to the problem from the control agency point of
view?
JEFFRIES: The current guideline that's out recommends looking at 5
HC-to-NOX ratios for 5 days at each of the sites that you're concerned with,
which is some kind of subset of the whole precursor distribution. Here we are
concerned with the entire precursor distribution. This method allows for the
fact that, unlike the conventional EKMA approach where you work with the one
point and take it over to the limited side, here it's similar to the airshed
model results, in that the airshed model in the future would predict some O3
where it's not predicted today, because that's the optimal point for
production of O3 under the control conditions. With the standard EKMA
approach, you don't know that point because you're not dealing with all of the
data. The complicating feature is that all the data are required, including
the ambient measurements. This raises the issue of how many data are really
necessary. I don't know the answer to that yet.
MEYER: Do you have any feel, based on the work you have already done, of how
applicable this method might be in other cases, say, if the O3 transported
into the area were to change in the future, or if some diurnal emission
pattern were to differ radically?
JEFFRIES: If transported O3 has a significant impact on the generated
isopleth surface, and that somehow goes away in the future, the isopleth would
no longer fit the case. The question would have to be, how sensitive is the
O3 frequency distribution to the transported O3? I don't know how to answer
that. It may be possible to relate this isopleth back to chemical and
meteorological conditions that produce it. I can't say, for example, that if
I make a simple trajectory model run and produce an isopleth from a given
mechanism, with a given amount of dilution and all the other input, the
isopleth is absolutely correct. All I can say is that the isopleth would be
within the bounds that would be produced from such a run. Taking the chemical
model, you can run the model for the impact of O3 aloft and determine the
effect on the isopleth. The problem is that you can't go beyond the initial
data base. Whatever is important in controlling O3 now is captured in the
model, and the assumption is that that's stable across the design period.
MEYER: Do you think there is some way of factoring out the nonmeteorological
inputs?
JEFFRIES: No. One cannot say that the O3 measured at a site is caused by so
much O3 aloft, so much emissions from here, and so much emissions from over
there.
MEYER: Rather than having HC-NOX pairs, how about using a triplet: HC-NOX
plus transported O3?
JEFFRIES: I hadn't thought about that process.
WHITTEN: I'm troubled with this method because I have a hard time feeling
that it is really related to control of emissions in any way, and that it's
going to respond that way under future controls. I feel that the test that
you have here, namely, the shape of the frequency distribution, may not be a
severe enough test. My experience is that no matter what chemical or
meteorological inputs are used, the same general "L"-shaped isopleth always
results. If you then skew them around to fit your data base, as with your
factor, the result would be a similarly shaped curve and a similarly shaped
distribution, and hence a similar O3 distribution.
JEFFRIES: It is not true that you can change the parameters and get a good
fit.
WHITTEN: What I mean is parameters in the chemistry and parameters in the
meteorology.
JEFFRIES: I know pretty much the relationships between the parameters that
fit the surface and the parameters of chemistry. But I know if I change the
rate constant in the chemistry it's going to change the parameter of the
surface.
WHITTEN: And that will alter the result.
JEFFRIES: No. The dominating factor here is meteorology, not chemistry.
WHITTEN: Yes, and my experience is that if you alter the meteorology you
still get a similarly shaped curve.
JEFFRIES: The point here is that there is a number or a set of numbers that
characterize the conditions in St. Louis during this period sufficiently well
that when you put the observed HC-NOX pairs on them, you get the observed O3
distribution out of them. The surface is a model that somehow captures all
the factors that are important in going from initial concentrations to O3.
WHITTEN: My question is, do you feel that if you put in some obviously faulty
meteorology, would you produce a frequency distribution that would definitely
show up as being wrong?
JEFFRIES: Yes, it will not fit. I start with simply a description and modify
it. It's equivalent to taking the Dodge isopleth program, and changing the
mixing heights in it repeatedly until I get an isopleth with satisfactory fit.
WHITTEN: Basically your dilution is a kind of factoring.
JEFFRIES: Yes, the results say that the frequency distribution can be fit
with the observed precursor distribution by simply taking the Dodge isopleth
surface and changing the initial and final mixing heights until a similar
isopleth appears. The relative spacing up the isopleth, for example, is a
function of the competition between the aldehydes produced and the apparent
HC's in the mechanism. That's why the carbon-bond isopleth has a different
curvature on it than does Dodge's. Dodge has different rate constants for OH
reacting with the aldehydes than for OH reacting with apparent organics.
The key factor is, for so much mass shoved in, how much O3 do you get
out? And that's the c factor. And in effect I can control that by simply
controlling dilution. Now, there is some fine tuning that can be done. To
the extent that your mechanism and her mechanism represent approximately the
same chemistry of NOX and HC's, yes, they will both give generally the
same-shaped isopleth, but if you change a particular mechanism, you won't get
an isopleth that will give you the same answers. What I have done is produce
a nonchemical way to look up the parameters that you'd have to solve for by
repeated iterations of the chemical parameters.
I did want to say that the method obviously assumes that there is a
linear relationship between emissions and precursor concentrations; and that
over a long time period that's probably not a bad approximation.
WHITTEN: But as you know, I showed you yesterday, there is a discrepancy --
JEFFRIES: If O3 aloft is important in the formation of O3, it will influence
the frequency distribution observed; since the model already reproduces that,
it's already in the model.
WHITTEN: And if that changes?
JEFFRIES: If that changes, then nothing can be done about it.
WHITTEN: My other questions are, your system goes through and picks up these
HC-NOX pairs which on a given isopleth diagram produced the worst O3
for that particular day; have you ever gone back and looked at your 242 pairs,
just a few of them, to see if the wind was blowing in the direction where the
maximum 03 was produced?
JEFFRIES: There is no effort to relate the O3 produced on that day and the
precursor observed on that day. I don't want to because what is the
probability that a measured precursor pair is going to actually make it to a
station and be measured? I suspect it's quite low.
WHITTEN: I was just wondering if the wind was blowing in the opposite
directions on some of those days.
JEFFRIES: I don't know. What the method does assume is, if you measured that
pair, there is some other pair that has the same concentration you didn't
measure that does make it to a station that you do observe.
WHITTEN: I can agree with that.
JEFFRIES: So it's that kind of statistical relationship. I'm not saying that
the pair that I kept is the pair that made the O3, not at all; that is what
Dr. Martinez does. He says that the pair he kept was validated and he puts it
on the isopleth diagram. The O3 he predicts is compared with the day's O3. I
don't do that. I just say that tells me how often I ought to observe that O3.
MARTINEZ: I did that for the Houston data, and in general the high O3
observed agrees with the wind direction.
JEFFRIES: It's clear here that 62% of the time the O3 occurs either due north
or due south of the city and most of the time the high concentrations are in
the middle of the city.
WHITTEN: You seemed to indicate that many of the maximum pairs that you
observed occurred at the hour of 7 a.m.
JEFFRIES: 7 or 8 a.m. That was under the no-control condition. Under the
control condition they spread out. So do the stations.
WHITTEN: Did you look at any earlier hours?
JEFFRIES: No.
WHITTEN: So it looks to me like the earliest hour you looked at was 6 a.m.,
and that was the most important one. And so I would have a little more
interest as to what about 4 a.m. and 5 a.m.; maybe they would even be better,
so it's hard to say. It's kind of at the edge of your artifact cut-off.
JEFFRIES: That's right. I'm comfortable with the cut-off at noon, as
concentrations drop away dramatically.
WHITTEN: 6 a.m. sounds good until you find out that that happens to be the
most important one, and then it's not so good anymore.
TRIJONIS: Statisticians look at an awful lot of frequency distributions of
aerometric data. Maybe it's nothing to be concerned about, but I remember you
said that there were two dips in the HC data; those are certainly
statistically insignificant from the 242 points, but today you said something
about the O3 being bimodal because it had one dip in the sense that there were
fewer points between 9 and 10 a.m. than there were between 8 and 9 a.m. and
10 and 11 a.m. What concerned me is that I don't think it is bimodal. I
think that's just a statistical error. Would it affect the results? I think
that dip is a minus rather than a plus. I don't think it's the type of thing
you want necessarily to fit.
JEFFRIES: It was in the observed data.
TRIJONIS: But if it's a statistical fluke in the observed data, would that
affect your results?
JEFFRIES: I would not be willing to say that it's a statistical fluke in the
observed data. I think it really is true that during that period there were
fewer days with that concentration.
TRIJONIS: There were, but if you took 5 more years, I don't think you would
see that.
JEFFRIES: That's probably true. And in 5 more years we wouldn't have the
same kinds of fluctuations in the HC data, either.
TRIJONIS: Right. But is there any problem with overfitting in the sense that
you're fitting a dip in the data that just happens to be there? Are you
overfitting statistical flukes rather than the smooth distribution? You
weren't fitting the smooth distribution, you were fitting the random
fluctuations in the distribution.
JEFFRIES: The point is that the precursor distributions in the isopleth
diagram reproduced those random fluctuations, so they're not necessarily
random. And they were in the precursors. What it says is that the precursor
distribution is everything.
TRIJONIS: They were based on 242 kept points. And you'd have 18 in one,
13 in another, and so on. If you had enough data points, you would have had
17, 16, 17, 14, 13, a nice smooth distribution. What disturbs me is the fact
that your method was geared to fit these fluctuations.
JEFFRIES: The method is geared to minimize the sum of the squares of the
differences between the two distributions. Period. That's all it does.
TRIJONIS: Right. But in that sense it's fitting statistical irregularities;
overfitting, in a sense.
JEFFRIES: You cannot say they're statistical irregularities. They are
actually in the observed data.
TRIJONIS: Yes, they're there, but in the observed data it's not a smooth
distribution and you're fitting the outlying points.
Also, I thought there was something to what Dr. Whitten was saying in
that if you took a nonsense isopleth, could you also get it to give good
results? I was wondering if it had something to do with the fitting problem.
JEFFRIES: I've got observed O3 data and I've got observed HC data and I've
got a model and I put the observed HC-NOX data in and I get the observed O3
data out; that's all I can say. The isopleth is an isopleth that can be
related to standard chemical mechanisms and operated under very reasonable
conditions in the functions. There is nothing unusual or odd or weird about
the isopleths.
TRIJONIS: Maybe there's a different answer. You know what data mining
is — when you have a small set of data and you're trying to represent in some
sense a continuous distribution; only you have 242 points. Data mining is
when you overfit your parameters to the particular set of data you have. You
don't think there is any possibility of that type of problem?
GIPSON: The question here is, what is the likelihood of getting a spurious
fit to your frequency distribution, and how would you recognize the spurious
fit, if you got it?
JEFFRIES: Well, we give the typical chi-square test of the two frequency
distributions to see the fit. I suspect if you're allowed to adjust the
isopleth surface enough, you could probably find other surfaces that would
also fit. That's why we decided to start with a surface that looked like a
surface produced by chemical mechanisms, under reasonable conditions, so that
the surface that we get is not obviously weird. It's not a weird isopleth.
Clearly, there are two slightly different isopleths that came out of Carbon
Bond II and out of Dodge. One of them gives a superior fit compared to the
other and you can inspect the data against the lines and see why it gives a
superior fit, and we could easily change a parameter slightly in the Carbon
Bond II fit and make it come out just as good.
GIPSON: The question really is, when any model fails, can you recognize it as
failing? What will you have to do to a chemical model, statistical model, or
whichever, so that it would not reproduce the frequency distribution?
JEFFRIES: Very simple. It doesn't fit the frequency distribution. It would
overpredict.
GIPSON: And how would you recognize it when it didn't fit the frequency
distribution? I believe that's the question.
JEFFRIES: The red area wouldn't be as small. If you put a different isopleth
on here — let's suppose I start with a standard Dodge isopleth; what would
the initial frequency distribution in the standard Dodge isopleth look like?
Its peak would be shifted upwards. There would be a huge difference between
the predicted distribution from the Dodge isopleth and the observed
distribution. So the method continues to modify the isopleth until the two
converge.
TRIJONIS: At how many points do you measure the difference between the two
distributions?
JEFFRIES: Every single point.
TRIJONIS: No.
JEFFRIES: Twenty-eight different places.
TRIJONIS: How many free parameters?
JEFFRIES: Seven.
TRIJONIS: Then there could be an overfitting. In fact, I would think there
would be an overfitting problem. That's why it fits those points.
JEFFRIES: It's not like we're distorting an isopleth to get the fit.
TRIJONIS: I don't quite agree. The difference in the end is small, but, in
your final output, there is an overfitting problem with fitting 7 variables to
28 points.
JEFFRIES: We constrain those points and they can't take on just any values.
We constrain the points to take on only values that are very similar to values
predicted by the chemical mechanism.
TRIJONIS: That's why I said there's not much difference at the end.
JEFFRIES: The parameters don't change very much. The most critical parameter
is the meteorology dilution factor. That's the biggest difference between
standard Dodge for Los Angeles with small mixing height rise and St. Louis.
So you make half the O3 for the same starting material. In effect, you could
simply take the Dodge isopleth, multiply all the answers by half, all the
isopleth numbers by half, and get approximately the same result.
DODGE: When you vary these parameters, are you by any chance altering the
chemistry in the sense that, if there are different chemical mechanisms that
give different predictions, these differences may be dampened by the iterative
process? Does it make the chemistry you start out with less critical?
JEFFRIES: Yes, it does. I can take the final parameters that cause the fit
and go back to the mechanisms and see what kinds of things would have had to
have been true for certain mechanisms to give a surface that looked like this.
The first thing that would have to be true is the dilution has to be bigger
than it is in the standard case. The second thing is, the reactivity may be
slightly different than it was in the standard case, because reactivity
determines the position of the ridge line. So the higher the reactivity, the
larger the angle; the more the ridge line moves over. So if the fit had to
have a higher or a lower angle, that means that the reactivity of the mix was
higher or lower, or the light intensity was larger or smaller. If the
curvature is different, it means that the competition between the aldehydes
and the apparent HC's for OH was different in that mechanism than in some
other mechanism.
So the parameters have specific things in the chemical mechanisms that
control their values. I know that because I can change those things in the
mechanism; in brief, do an isopleth and refit it and then see the value of the
parameters that came out of that change. We have done this. I have many
slides here that show all the things that changed in the chemistry, how the
isopleth changed, how the parameters changed, and so on. It's not like the
parameters are just numbers out of nowhere. They have a physical meaning in
terms of the chemistry and the meteorology, the conditions used to solve for
the O3 in the first place.
DIMITRIADES: Would Drs. Trijonis, Whitten and Killus briefly summarize their
comments, objections, or reservations for the benefit of the report?
TRIJONIS: I'm not sure it would affect the end predictions at all, but just
on a statistical basis I was concerned about the fact that in choosing the
seven free parameters, Dr. Jeffries seemed to be fitting an abnormal
distribution or a set of random fluctuations in the frequency distribution.
It just so happened for this set of data that fewer points fell between
9 and 10 a.m. than between 8 and 9 a.m. or between 10 and 11 a.m. It
concerned me that the method selected one of the seven free parameters to fit
that abnormality in the 28 data points. I don't think it probably affects the
end result. But it's just the statistical question of overfitting to whatever
deviations happened to be in this data set.
WHITTEN: I had three questions. One question was that the hour that seemed
to be the most important seemed to be the earliest hour chosen, and I would
feel more comfortable if that were a peak in a distribution of hours. I would
suggest looking at other hours. A second question was about the reality of
the model. Is it possible to relate the wind direction on the days where the
HC and NOX pair indicated high O3? Was the wind blowing in a direction such
that that station and that particular pair was at least in the direction of
where the maximum was that day? Finally, it's difficult for me to believe
that this methodology relates the control of emissions to the reduction of O3
in the atmosphere. I can see that we have a statistical distribution of NOX
and a statistical distribution of O3, but it's not necessarily establishing a
cause and effect relationship between reducing the concentrations of one or
the other and reducing the O3.
JEFFRIES: The point that we used 6 a.m. as the first value (and 6 a.m. is the
most frequent concentration period) is valid. We should have looked at 5 a.m.
and we didn't. However, my intuition from remembering what the data look
like tells me that 6 a.m. is probably the hour of peak concentration, and
that's what the method tends to zero in on, peak concentration.
Concerning wind direction, the model in no way attempts to relate
concentration on a given day with the O3 produced on that day. So it doesn't
matter where the wind was blowing. The issue is the statistical probability
that the parcel that you hold in the morning is the parcel that you measure in
the afternoon. We are interested in knowing the frequency of occurrence of a
HC-NOX pair and the frequency of occurrence of an O3 value. That is what the
model deals with. For that reason there is no physical cause and effect that
I can point to and say, yes, the air flowed from here to there and, yes, it
made that O3. All I can say is that when you put the measurements onto the
model surface you get the answers that agree with the measured values.
As far as the ability of the model to work in the future under changed
conditions, it's like any other model: it's tied to the data base
that you use. So if something like O3 aloft is significant today and it is
not significant tomorrow, then you're out of luck.
ALTSHULLER: Did I understand that you picked 3 months in each of the 3
years?
JEFFRIES: Four.
ALTSHULLER: Four, okay. Would looking at different time intervals --
2 months, 4 months, 6 months -- aid in answering some of these other questions
that have come up?
WALKER: You said that HC abatement seemed the preferred control strategy, but
the distribution of points in your before and after plots does not seem to
support that. What do you mean when you say the points slip up and so forth,
and work against you on the NOX values? Superficially, it would seem that NOX
abatement would pull you down into the compliance region faster.
JEFFRIES: That's not true when you actually do the calculation on the
surface. The model always tends to choose that precursor pair that Is closest
to the ridge line. In the future, when you apply as much as 80% HC control,
most of the points that you were concerned with now are so-«NOx-limite 1 that
they don't contribute anything at all to the 03 production that you're
interested in. It was points that had higher HC before but lower NOX that
didn't make the O3; they weren't the high O3 pairs today, but in the future
they will be. Thus the method already has a built-in selection to go to lower
NOX anyway, and the fact that you reduce NOX doesn't change things very much.
When you deal with an entire distribution, the issue is this: NOX
control is unusual in that on one side of the diagram it helps you but on the
other side of the diagram it hurts you. Hydrocarbon control either leaves
things the same or reduces the O3, and what happens is that the distributions
are such that there is good probability that reducing NOX only will help you
in some cases and hurt you in some cases, and those tend to offset each other.
However, HC control almost always either doesn't change anything or helps you.
You have to deal with the entire distribution and if you don't, you're not
looking at the whole problem.
KILLUS: Suppose you were to take a HC and NOX distribution in some completely
different city, or perhaps a random HC and NOX distribution, and started
running that through your model, fitting it to the O3. I believe if that then
failed, you'd have a reasonable test of your model. If, however, it
succeeded, you obviously have spurious data.
JEFFRIES: It's obvious it will fail. Suppose it were a uniformly random
distribution; then every O3 concentration is equally likely.
KILLUS: No, not the O3 concentration. I'm talking about HC and NOX
precursors. Assume you take a random HC and NOX precursor set and the real
St. Louis O3 set, and try to see if you can regularize the two. If you feed
that through your model and you still get the same sort of frequency
distribution of O3, you certainly haven't discovered a relationship between HC
and NOX and O3. The same is true if you were to take a HC:NOX ratio from a
completely different city. Essentially what I'm asking is whether there is
any way of assuring that the HC and NOX distribution in St. Louis really
reproduces the O3 in St. Louis?
JEFFRIES: What makes the O3 in St. Louis? Hydrocarbons in Los Angeles? No,
clearly.
KILLUS: Exactly. If your model can reproduce the O3 distribution for HC and
NOX data somewhere else, obviously then the model is incorrect. However, if
the model fails under those circumstances, then I believe our concerns about
spurious data are somewhat alleviated. Would that be a reasonable test?
TRIJONIS: It might be. I'm not sure the end results are affected by it. I
just know that fitting a 7-parameter model with 28 data points shouldn't be
done.
JEFFRIES: If I had done what Post did, that is, take the Dodge isopleth, plot
these points on it and conclude that multiplying the Dodge isopleths by 1/2
yields the right O3 values, I wouldn't be fitting anything.
BUFALINI: Haven't you answered the question by saying that, when applied to
Sydney, Australia, those factors were a little bit different?
JEFFRIES: Yes. The factors are city-specific.
BUFALINI: Therefore, doesn't that answer the question?
KILLUS: Not if the HC and NOX distribution in Sydney, Australia happens to be
the same as the HC and NOX distribution in St. Louis.
JEFFRIES: If you took the HC and NOX distribution I had and shifted all the
points a little bit, it's very clear you get a different frequency
distribution predicted by the model. I did that. That's how I did the
control calculations. So it's very clear that any juggling of the
HC-NOX distribution is reflected instantly in the O3 frequency distribution
predicted by the system. Any modification of the precursor distribution gives
rise to a different frequency distribution. What I have done is to find the
set of parameters that most closely relates to standard mechanisms in a smog
chamber and which also gives the frequency. Taking the precursor
distributions and putting them on that surface gives rise to the O3
distribution.
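The lookup Jeffries describes can be sketched briefly. This is an
illustrative reconstruction, with the fitted surface treated as an arbitrary
callable rather than the actual seven-parameter form.

    import numpy as np

    def o3_frequency(surface, hc, nox, bin_edges):
        # Look up peak O3 on the fitted isopleth surface for every kept
        # HC-NOX pair and bin the results into a frequency distribution.
        # `surface` is any callable (hc, nox) -> peak O3.
        o3 = surface(hc, nox)
        counts, _ = np.histogram(o3, bins=bin_edges)
        return counts / counts.sum()

    # An 80% hydrocarbon control shifts every kept pair; the surface itself
    # is held fixed, so the change shows up directly in the O3 frequencies:
    # freq_controlled = o3_frequency(surface, 0.2 * hc, nox, bin_edges)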
KILLUS: If you took your shifted HC and NOX with control and then
renormalized your isopleth diagram through your fitting procedure, would you
not then again reproduce your present O3 frequency distribution? Would you
get a different isopleth diagram, for example?
JEFFRIES: What do you mean by "renormalize"?
KILLUS: I mean refit your O3 isopleth; the shifting of a frequency
distribution on an isopleth diagram that has been designed to transform a HC
and NOX frequency distribution into an O3 frequency distribution is not the
same as comparing whether one can generate an arbitrary transfer function in
the HC:NOX ratios or HC and NOX distribution and the O3 distribution and
always get the frequency distribution. If you can always get your frequency
distribution of O3 from any HC and NOX distribution, then you don't have a
predictive model.
ESCHENROEDER: The result is right there. It gives a different O3 frequency
distribution.
KILLUS: No, it gives a different O3 frequency distribution with the same O3
isopleth that originally transferred one HC and NOX distribution to another.
I am asking whether any arbitrary isopleth diagram can be so constructed.
ESCHENROEDER: You're saying old O3 frequency with the new HC:NOX data?
KILLUS: Yes.
ESCHENROEDER: That is not going to work. The new data gave a new result, as
we already saw. Am I correct?
JEFFRIES: Yes, sir.
ROMANOVSKY: You raised a very controversial practical issue here. Before
someone reports to Congress that the control of NOX would not be cost
effective, I think we need to remember that there are other environmental
problems associated with this photochemical system. Have you looked at other
environmental pollutants, I suspect not nitric acid, but hopefully NO2, in the
same light?
JEFFRIES: My comment was specifically addressed to whether control of NOX in
this model was of benefit in terms of controlling the O3. That's all I can
say. And I haven't looked at anything else.
4. SIMPLIFIED TRAJECTORY ANALYSIS APPROACH FOR EVALUATING THE OZONE ISOPLETH
PLOTTING PACKAGE/EMPIRICAL KINETIC MODELING APPROACH
Gerald L. Gipson
Edwin L. Meyer
Office of Air Quality Planning and Standards
U.S. Environmental Protection Agency
Research Triangle Park, North Carolina 27711
ABSTRACT
The ability of the trajectory model underlying the Empirical Kinetic
Modeling Approach (EKMA) to predict peak ozone concentrations was examined.
Detailed air quality, emissions, and meteorological information available from
the St. Louis Regional Air Pollution Study was used as input. In addition,
sensitivity of model predictions to varying morning and afternoon mixing
heights input to the model and varying spatial detail in emissions input were
investigated.
Results indicate that the model tends to underpredict peak ozone observed
in St. Louis. However, of the ten cases examined, six agreed with observed
peaks to within ±30% for at least part of the range of input variables examined.
The four cases for which poor agreement with observations was always
obtained were examined to ascertain reasons for the lack of agreement. It
was concluded that the poor agreement probably arises primarily from incorrect
input to the model (e.g., incorrect trajectories, inaccurate mixing height
profiles, etc.). Since the Regional Air Pollution Study data base is likely
to be superior to data bases for most other cities, similar difficulties may
be encountered elsewhere.
INTRODUCTION
The Empirical Kinetic Modeling Approach (EKMA) is a procedure developed
by the U.S. Environmental Protection Agency (EPA) to estimate the reductions
in precursor emissions necessary to achieve the National Ambient Air Quality
Standard (NAAQS) for ozone (O3) (Gipson et al., 1981; Federal Register, 1981).
The approach uses an O3 isopleth diagram which presents peak hourly-average 03
concentrations explicitly as a function of early morning precursor levels.
The positioning of the isopleths is an implicit function of a number of other
factors as well (e.g., meteorology, emissions, and transported pollutants).
The diagrams are developed using the Ozone Isopleth Plotting Package (OZIPP),
which incorporates a simplified trajectory model (Whitten and Hogo, 1978; EPA,
1978).
The objective of this study was to investigate one approach for assessing
the validity of the EKMA procedure. This approach attempts to assess how
accurately the model used to generate the EKMA isopleths is able to predict
peak 03 concentrations using detailed input information. Such an approach
does not directly answer the key question of how accurately the model predicts
changes in peak 03 accompanying a reduction in precursors. However,
successful prediction of base-case conditions provides some confidence that
the model provides a reasonable approximation of physical and chemical
phenomena. Further, if the model can predict base-case conditions accurately
in several cities with varying levels of precursors, one may place additional
confidence in the predicted controls.
This report provides an overview of the study, a brief explanation of the
trajectory model contained in OZIPP, and a description of the data base
employed in the study. It also describes the methodologies employed to derive
the information necessary to apply the model to the urban area under study and
discusses the results of the model application.
Background
The EKMA procedure uses the trajectory model contained in OZIPP to
express maximum hourly average 03 concentrations as a function of
early morning ambient levels of nonmethane organic compounds (NMOC) and total
oxides of nitrogen (NOX). The functional relationship derived using the
trajectory model takes the form of an O3 isopleth diagram. An isopleth
diagram can be tailored to a specific area or situation using city-specific
information on emissions, transport, and dilution. The diagram is designed to
be used with a measured peak hourly 03 concentration and morning NMOC:NOX
ratio to estimate the reductions in emissions necessary to achieve the NAAQS
for 03.
The approach used in this evaluation of EKMA was to employ atmospheric
data to test the trajectory model used to develop the isopleth diagrams. That
is, measured ambient O3 peaks were compared to peak levels of O3 predicted by
the trajectory model. In the analysis described in this report, the model was
applied to the St. Louis metropolitan area, for which an extensive
air quality, meteorological, and emissions data base was compiled under the
Regional Air Pollution Study (RAPS). This data base contained sufficient
information to evaluate the performance of the trajectory model in predicting
peak 03 concentrations, to investigate other facets of model performance, and
to examine potential problem areas in the model's absolute prediction
capabilities.
Description of Trajectory Model and Its Application
The conceptual basis for the trajectory model in OZIPP is similar to a
Lagrangian photochemical simulation model (Figure 4-1). A column of air is
advected by the wind along a specified trajectory. The height of the column
is equal to the mixing depth (i.e., the column extends from the earth's
surface through the mixed layer). Horizontal dimensions are selected such
that concentration gradients are small, and thus the effects of horizontal
exchange of air between the column and its surroundings can be ignored.
Within the column, air is assumed to be uniformly mixed at all times.
Initially, the column contains predetermined levels of pollutants
(primarily NMOC and NOX), which may result from prior emissions and/or
possible transport from upwind areas. As the column moves along the
trajectory, its height increases with the diurnal rise in mixing height. As
the mixed layer grows, air above the column mixes downward into the column
instantaneously, leading to the entrainment of additional pollutants found
above the early morning mixed layer. Also, the column moving along the
trajectory can encounter fresh emissions of precursors, further adding to
Figure 4-1. Conceptual view of trajectory model in OZIPP.
pollutant concentrations. Consequently, pollutant concentrations within the
column are physically increased by emissions and entrainment of pollutants
from aloft, and decreased by the increase in volume of the column caused by
the rise in mixing height (i.e., dilution).
The trajectory model mathematically simulates the changes in pollutant
concentrations that result from the physical processes described above and
from chemical reactions taking place within the column. A chemical kinetic
mechanism incorporated into the model mathematically describes these chemical
reactions. For those reactions affected by sunlight, the rates of reaction
are estimated by theoretical considerations of diurnal variation in solar
radiation. The trajectory model calculates the concentrations of all reactive
species included in the chemical mechanism as a function of time. In
addition, the model determines the maximum 1-h average 03 concentration
occurring during the simulation. This peak 03 concentration, therefore, is a
function of the following factors:
• initial pollutant concentrations
• emissions occurring along the trajectory
• dilution of pollutants caused by rise in mixing height
• entrainment of pollutants from aloft
• chemical reactions affecting pollutant concentrations
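The balance the model integrates can be written compactly. The following is
a schematic sketch of that mass balance, not the OZIPP code itself; the
chemistry, emissions, and mixing-height functions are left as user-supplied
callables.

    import numpy as np
    from scipy.integrate import solve_ivp

    def column_rhs(t, c, chem, emis_flux, h, dhdt, c_aloft):
        # c: species concentrations (ppm) in the well-mixed column.
        # chem(t, c): chemical production/loss rates (ppm/h).
        # emis_flux(t): surface emissions flux (ppm*m/h); h(t), dhdt(t):
        # mixing height (m) and its growth rate (m/h); c_aloft: levels aloft.
        entrainment = (dhdt(t) / h(t)) * (c_aloft - c)  # dilution toward aloft
        return chem(t, c) + emis_flux(t) / h(t) + entrainment

    # Photochemical mechanisms are stiff, so a stiff integrator is needed:
    # c0 = np.array([...])  # initial NMOC, NOX, O3, ...
    # sol = solve_ivp(column_rhs, (8.0, 18.0), c0, method="LSODA",
    #                 args=(chem, emis_flux, h, dhdt, c_aloft))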
RAPS Data Base
An extensive air quality management data base was compiled under RAPS.
This data base contains all the elements necessary to evaluate air quality
simulation models, including air quality, emissions, and meteorological data
sufficiently resolved in space and time to develop input data and provide
measures for evaluating model performance. The data base as it relates to
this study is briefly described below.
The bulk of the air quality data used in the study was collected at
25 regional air monitoring stations (RAMS) spaced concentrically throughout
the study region (Figure 4-2). These stations were located such that they
would not be unduly influenced by any one source or group of pollutant
sources. At each station, hourly average concentrations for the following
pollutants were recorded: 03, nitrogen dioxide (NO2), nitric oxide (NO), NOX,
NMOC, and carbon monoxide (CO). The size of the network, quality of data, and
duration of measurements provide the best available temporal and spatial
resolution of ambient pollutant levels available to evaluate a model such as
the trajectory model incorporated within OZIPP.
The emissions data employed in this study included an hourly resolved
area- and point-source emissions inventory for NOX, CO, and reactive
hydrocarbons (RHC's). The area-source emissions were spatially resolved by
means of the RAPS grid system, which consists of about 2000 variably sized
grids (Haws and Paddock, 1975). Thus, estimates of emissions rates from both
Figure 4-2. RAMS station locations. (Circles denote radius in km from the
Jefferson Arch Memorial in downtown St. Louis.)
area and point sources are available by hour and by grid for any day in 1975
and 1976, the primary period covered by RAPS.
The primary source of meteorological data consisted of continuous
measurements of wind speed, wind direction, and temperature made at each of
the 25 RAMS stations. Additional data were collected from the RAPS Upper Air
Sounding Network (UASN). In this program, radiosonde soundings were made three
times per day, five days per week, at a minimum of two stations. From these
soundings, vertical temperature profiles are available from which mixing
heights can be estimated.
METHODOLOGY FOR DEVELOPMENT OF MODELING DATA
We chose 10 trajectories on 9 different days during 1976 to evaluate the
trajectory model's performance in the prediction of peak 03 concentrations.
We employed several criteria in the selection of days. Because EKMA is
necessarily used for those days with the highest O3 levels, primary interest
focused on predicting the highest O3 levels measured in the region. Further,
we selected a sufficient number of days to ensure that the model's performance
would be evaluated for a number of different atmospheric conditions. We included a
few days with lower O3 concentrations to test for a possible systematic bias
in the model's predictions. These considerations led to the selection of the
10 test cases summarized in Table 4-1.
TABLE 4-1. MODEL TEST CASES

    Date       Julian   RAMS    Time of Peak O3    Peak O3 Concentration
               Day      Site    (LDT)*             (ppm)
    10/01/76   275      102     3:00-4:00 p.m.     0.24
    07/13/76   195      114     4:00-5:00 p.m.     0.22
    06/08/76   160      115     5:00-6:00 p.m.     0.22
    06/07/76   159      122     4:00-5:00 p.m.     0.20
    06/08/76   160      103     2:00-3:00 p.m.     0.19
    08/25/76   238      115     2:00-3:00 p.m.     0.19
    10/02/76   276      115     5:00-6:00 p.m.     0.19
    09/17/76   261      118     1:00-2:00 p.m.     0.15
    07/19/76   201      122     1:00-2:00 p.m.     0.15
    08/08/76   221      125     6:00-7:00 p.m.     0.12

*Local daylight time.
The first step in performing an air quality simulation involves deriving
an air-parcel trajectory corresponding to the time and location of the
observed peak 03 concentration. This trajectory represents the path an air
parcel would have traveled to reach the site of interest at the specified time
(thereby representing the movement of the theoretical column previously
described). Once the column movement is defined, the remaining information
necessary to simulate a test case can then be developed, that is, the initial
conditions, emissions, boundary conditions (O3 aloft), and dilution data
needed. The following section describes the methodologies we used to develop
each of these.
Air Parcel Trajectory
We calculated air-parcel trajectories for each of the test cases from the
minute-by-minute measurements of wind speed and wind direction taken at each
of the 25 RAMS stations. First, we calculated 10-min vector averages of wind
speed and direction for each of the 25 sites. We then averaged the individual
site averages to obtain an overall regional average wind speed and direction
for each 10-min period. We could then use the regional average wind speed and
direction to track a trajectory backwards from the site and time of the
observed peak O3 concentration until 8:00 a.m. Central Daylight Time (CDT).
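A minimal sketch of this back-trajectory construction follows. It is
illustrative only, with hypothetical array inputs standing in for the RAMS
measurements.

    import numpy as np

    def vector_average(speeds, directions_deg):
        # 10-min vector average of wind speed/direction arrays (meteorological
        # convention: direction is where the wind blows FROM).
        theta = np.radians(directions_deg)
        u = -speeds * np.sin(theta)
        v = -speeds * np.cos(theta)
        return u.mean(), v.mean()

    def back_trajectory(u, v, x_end, y_end, dt_s=600.0):
        # u, v: regional-average wind components (m/s), one per 10-min period,
        # in time order; walk backwards from the site/time of the peak.
        xs, ys = [x_end], [y_end]
        for ui, vi in zip(u[::-1], v[::-1]):
            # step upwind: remove the displacement made during this period
            xs.append(xs[-1] - ui * dt_s)
            ys.append(ys[-1] - vi * dt_s)
        return np.array(xs[::-1]), np.array(ys[::-1])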
Figure 4-3 illustrates the back trajectory for the July 19 test case, with the
hourly segments of the trajectory shown along the path.

Figure 4-3. Air-parcel trajectory for July 19 test case.
Initial Concentrations
The initial concentrations of NMOC, NOX, and O3 represent the pollutant
levels that were initially within the theoretical model column at 8:00 a.m.
CDT. They were estimated from the hourly averaged concentrations at the RAMS
stations nearest the trajectory starting point. The first step in the
procedure was to select the three RAMS sites closest to the trajectory
starting point. For these three sites, we computed instantaneous
concentrations corresponding to the simulation starting time by averaging the
hourly pollutant levels for the hour immediately preceding the starting time
and the hour immediately following the starting time. For example, we would
calculate the 8:00 a.m. CDT instantaneous concentration for a pollutant at one
site by averaging the 7:00 a.m. to 8:00 a.m. and the 8:00 a.m. to 9:00 a.m.
hourly levels. We then computed the initial column concentrations as a
weighted average of the three instantaneous levels, with the weighting factors
equal to the square of the reciprocal of the distance between each RAMS site
and the trajectory starting point (i.e., 1/r²). In performing these
calculations, we set any concentration below the minimum detectable limit of
the analyzer equal to the following lower limits: 0.005 ppm for NOX; 0.1 ppmC
for NMOC; and 0.005 ppm for O3.
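A minimal sketch of this estimate, with hypothetical inputs standing in for
the RAMS records:

    import numpy as np

    FLOORS = {"NOX": 0.005, "NMOC": 0.1, "O3": 0.005}  # ppm (ppmC for NMOC)

    def initial_concentration(prev_hour, next_hour, distances_km, species):
        # prev_hour, next_hour: hourly averages bracketing 8:00 a.m. at the
        # three closest RAMS sites; distances_km: site-to-start distances.
        inst = 0.5 * (np.asarray(prev_hour) + np.asarray(next_hour))
        inst = np.maximum(inst, FLOORS[species])   # detection-limit floor
        w = 1.0 / np.asarray(distances_km) ** 2    # inverse-square weights
        return float(np.sum(w * inst) / np.sum(w))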
Emissions
As previously mentioned, the trajectory model also simulates the impact
of precursor emissions occurring after the simulation starting time.
Therefore, we also used hourly emissions rates for NMOC and NOX. The
emissions encountered by the column during each hour are input to the model as
fractions of the initial concentration of NMOC or NOX. We computed the fractions
themselves by comparing the emissions densities encountered by the column of
air during each hour to the pollutant density initially in the column.
From the RAPS emissions inventory, an average emissions density for each
hour of the column trajectory path could be computed. We did this by
(1) summing the hourly point- and area-source emissions occurring within each
grid square encountered by a trajectory segment, (2) dividing the total
emissions in each grid square by the area of that grid square, and
(3) weighting the resulting emissions densities consistently with the
proportion of the trajectory segment in each grid square. For example,
consider the trajectory path shown in Figure 4-4. An emissions density for
the first hour (i.e., 8:00 a.m. to 9:00 a.m. LDT) would be calculated from the
total emissions occurring between 8:00 a.m. and 9:00 a.m. LDT within grid
squares (1) and (2). Since roughly two-thirds of the trajectory segment
between 8:00 a.m. and 9:00 a.m. occurs in grid square (2), the emissions
density in (2) is weighted by a factor of 0.67, whereas the emissions density
in grid square (1) is weighted by a factor of 0.33.

Figure 4-4. Air-parcel trajectory for July 19 test case demonstrating how
fresh precursor emissions are considered.
We calculated the actual fractions input to the OZIPP model for both
organic compounds and NOX using the following expression:

    e_i = Q_i / (H_o · C_o · ρ)                                    (4-1)

where:  e_i = fraction of the initial concentration to be added during hour i
              to represent emissions occurring during hour i
        Q_i = emissions density for hour i (mol/m²)
        H_o = initial morning mixing height as described previously (m)
        C_o = initial pollutant concentration (ppm or ppmC)
        ρ   = density of air (41 mol/m³)
Equation 4-1 represents the ratio of the emissions density at a particular
hour to a hypothetical column density based on the initial column conditions.
In the above expression, the area of the column does not appear. However, the
emissions density term, Q_i, implicitly accounts for it. Hence, one can
vary the impact of fresh emissions simply by changing the size of the grid
squares used in the modeling exercise. As subsequently described, we
performed model sensitivity tests to grid-square size.
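A minimal sketch of this calculation follows. It is illustrative only, and it
assumes C_o is converted from ppm to a mole fraction so that the result is
dimensionless.

    RHO_AIR = 41.0  # density of air (mol/m^3), as in Equation 4-1

    def emission_fraction(densities, path_fractions, h0_m, c0_ppm):
        # densities: emissions density of each grid square crossed during the
        # hour (mol/m^2); path_fractions: fraction of the hour's trajectory
        # segment in each square (e.g., [0.33, 0.67] for Figure 4-4).
        q_i = sum(d * f for d, f in zip(densities, path_fractions))
        # e_i = Q_i / (H_o * C_o * rho), with C_o taken as a mole fraction
        return q_i / (h0_m * (c0_ppm * 1.0e-6) * RHO_AIR)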
Emissions derived from the RAPS inventory were expressed on a mass basis
(e.g., kg). To convert to a molar (or ppm) basis, we used the following
conversion factors: 46 g/mol for NOX and 14.5 g/mol for NMOC. For NOX, the
inventory gives NOX as equivalent NO2, and for NMOC, 1.0 ppmC is assumed
equivalent to CH2.5.
Boundary Conditions
Boundary conditions for the trajectory model include pollutant
concentrations found in the layer above the early morning mixed layer. Gipson
et al. (1981) have described the procedure used to estimate the level of O3
aloft. We averaged hourly 03 concentrations measured between 11:00 a.m. and
1:00 p.m. CDT at upwind, rural-type monitors to obtain an estimate of the
levels aloft (Table 4-2 summarizes the results). We assumed any precursor
pollutants aloft to be negligible.
TABLE 4-2. ESTIMATES OF O3 ALOFT

    Date       Julian Day    O3 Aloft (ppm)
    10/01/76      275            0.06
    07/13/76      195            0.08
    06/08/76      160            0.10
    06/07/76      159            0.11
    08/25/76      238            0.09
    10/02/76      276            0.06
    09/17/76      261            0.06
    07/19/76      201            0.08
    08/08/76      221            0.07
Dilution
Dilution in the trajectory model results from the change in mixing height
throughout the day. In the OZIPP model, the mixing height is assumed to rise
as a function of the time after sunrise in accordance with a "characteristic
curve." This so-called characteristic curve has been derived empirically from
data taken during the RAPS study. The curve is defined by specifying the
8:00 a.m. CDT and maximum afternoon mixing heights. We derived these heights
using information from soundings taken just prior to sunrise, in mid-morning,
and in mid-afternoon. Then we fitted the diurnal mixing height curve between
the 8:00 a.m. and maximum afternoon mixing height. The result was an
"S"-shaped curve in which little or no increase in mixing height occurred in
the early morning, followed by a large increase in mid-morning, and smaller
increases thereafter. Although the shape of the diurnal mixing height curve
remained invariant in these tests, the height itself could be manipulated at
any time by specifying different 8:00 a.m. and/or maximum afternoon mixing
heights. Since there is usually some uncertainty about these morning and
afternoon estimates, we examined the model's sensitivity to different values.
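A minimal sketch of this idea follows. The actual characteristic curve was
fit empirically to RAPS soundings; here a generic S-shaped (smoothstep)
interpolant between the two specified heights stands in for it.

    import numpy as np

    def mixing_height(t_h, h_morning, h_max, t0=8.0, t1=15.0):
        # S-shaped rise from h_morning at t0 (8:00 a.m.) to h_max at t1
        # (assumed mid-afternoon), flat outside that window; t_h in hours.
        s = np.clip((np.asarray(t_h) - t0) / (t1 - t0), 0.0, 1.0)
        s = s * s * (3.0 - 2.0 * s)  # slow early rise, steep mid-morning
        return h_morning + (h_max - h_morning) * s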
Chemical Mechanism
We used the Dodge chemical kinetic mechanism currently incorporated in
OZIPP in all 10 tests (Whitten and Hogo, 1978) and employed observed NO2/NOX
fractions to apportion initial NOX between NO and NO2. We used default
parameters for all other variables (Gipson et al., 1981).
RESULTS AND DISCUSSION
Figure 4-5 presents the results of the 10 tests. The solid line
beginning at the origin represents perfect agreement between observed peak
hourly 03 concentrations (abscissa) and predicted peaks (ordinate). The
broken lines on either side of this 45-degree line represent agreement to
within ±30%. The numbers on the graph represent the Julian day corresponding
to each of the 10 test cases. The series of vertical lines shown in
Figure 4-5 depicts the range within which model predictions varied when mixing
Figure 4-5. Comparison of observed and predicted peak hourly O3
concentrations (observed ozone, ppb, on the abscissa).
heights and/or grid-square size were varied. Table 4-3 shows the range over
which these three variables were varied for each test case. It is difficult
to generalize about the sensitivity of the predicted peak O3 to the three
inputs which were varied. Model sensitivity to these inputs depends to some
extent upon factors which were not varied (i.e., initial precursor
concentrations and O3 aloft). Nevertheless, for the sensitivity studies
performed, the inputs that were varied could usually be ranked (most sensitive
first) as follows:
(1) morning mixing heights
(2) afternoon mixing heights
(3) grid-square size

TABLE 4-3. RANGE OF SENSITIVITY TESTS FOR 10 TEST CASES

    Case   Julian Day   Grid-Square Size (km)   Sunrise Mixing   Maximum Afternoon
                                                Height (m)       Mixing Height (m)
      1      159        5x5, 10x10, 20x20         90-290           1850-1990
      2      160/(103)  5x5, 10x10, 20x20        100-250           1957-2010
      3      160/(115)  5x5, 10x10, 20x20        100-250           1957-2010
      4      195        5x5, 10x10, 20x20          50              1300-1860
      5      201        5x5, 10x10, 20x20        100-250           1500-2310
      6      221        5x5, 10x10, 20x20        100-250           1233-1570
      7      238        5x5, 10x10, 20x20        100-250           1764
      8      261        5x5, 10x10, 20x20        100-250           1720
      9      275        5x5, 10x10, 20x20        100-250            530-850
     10      276        5x5, 10x10, 20x20        100-250           1800
Referring to Figure 4-5, for 6 of the 10 test cases the predicted peak O3 agreed
to within ±30% of the observed values, at least over part of the range of
sensitivity tests. An apparent tendency exists, however, for the model to
underpredict observed peak O3 levels. Also, no one has yet attempted to
compare observed and predicted diurnal precursor concentrations within the
column of air. For 4 days (Days 159, 195, 238, and 276), the trajectory model
grossly underpredicted observed peak O3 for all cases. Examining these 4 days
in further detail reveals several possible explanations for the poor model
performance.
First, the derived trajectories may be grossly incorrect. For example,
on Day 159, upper-air soundings suggest that the winds tend to veer clockwise
with increasing altitude within the mixed layer. Therefore, in at least some
cases, use of surface winds to derive trajectories may result in inaccurate
estimates for trajectories. In the case of Day 159, winds measured above the
surface but still within the mixed layer suggest a trajectory that goes
through areas of greater precursor emissions. While such corrections may
result in better model performance on some days, upper-air data are limited,
especially with regard to sampling time. Using such limited data to derive
continuous trajectories may also introduce large uncertainties into the
trajectory analysis.
Second, in some cases, the methodology employed in the model may be too
rigid to allow accurate simulation of key variables. For example, upper-air
data taken on Day 195 suggest that the mixing height remains low until
mid-to-late morning. This includes a period in which the trajectory passes
through an area with a high emissions density. For this day, the
computerized fitting of the characteristic curve results in the mixing height
rising too soon, thereby diminishing the impact of these fresh emissions.
Thus, a different mixing height growth pattern may be more appropriate for
this day.
Third, in some cases, the available information may be inadequate to
provide reasonable input to the model. This may occur on Day 276, for
example, when there are no upper-air soundings for St. Louis. For Day 276 (a
Saturday), routine sounding data from a station almost 100 km from St. Louis
had to be used to estimate mixing heights.
The foregoing discussion suggests possible explanations for the lack of
agreement between results obtained on many of the days when the model
performed poorly. The problems generally center on uncertainties in data or
the lack of adequate data. Given the comprehensive nature of the St. Louis
RAPS data base, applications of the trajectory model to other cities with less
detailed data will more than likely lead to similar problems. In fact, the
problems encountered with other cities are likely to be even more extensive
than those found in the St. Louis application.
CONCLUSIONS AND RECOMMENDATIONS
This paper has discussed one method for evaluating OZIPP/EKMA. The major
advantage of this method is that it attempts to incorporate physical reality
to the maximum extent consistent with a simple model. Hence, unlike methods
that are statistical or based on comparisons of predicted and observed
frequency distributions, application of the model in this manner is more
likely to reflect cause-effect relationships. The method does not directly
address the question of greatest importance (i.e., how accurately changes in
O3 are predicted). In this respect, it is inferior to comparison with
historical trends or with controls predicted by sophisticated models.
However, these latter two methods have disadvantages as well. In the case of
trend comparisons, uncertainty is associated with historical data, few cities
are available where the method can be evaluated, and in these, only small
changes in precursor levels have been realized. For comparison with
sophisticated models, there is no guarantee that the sophisticated model's
control predictions are accurate. Furthermore, if the trajectory model can be
made to agree with base-case conditions found in cities with widely varying
emissions, some insight into the model's ability to replicate changes in
emissions may be possible.
In addition to its inability to address directly the key question of how
accurately the impact of controls is simulated, the method described in this
paper contains several other disadvantages. The first disadvantage of the
method is that it requires a detailed data base. Second, because of the data
limitations, the model is unlikely to be applied often using the trajectory
analysis approach described here. That is, the model will most often be
applied in an EKMA Level III analysis, in which a trajectory is chosen such
that a city's early morning emissions are directly related to the observed O3
peak (Gipson et al., 1981). The transferability of these findings to a
Level III analysis is uncertain. One possible alternative would be to test
the model's ability to predict observed peak O3 when applied in the Level III
mode. Such comparisons have been conducted in St. Louis and Philadelphia. In
these comparisons, predictions appear to agree reasonably well with
observations (see Appendix 4-1).
The results reported herein are not exceptionally good. The following
recommendations are suggested as possibilities for further evaluation of the
model.
The existing procedure for developing model inputs may be too rigid;
thus, more specific consideration of each day may be warranted.
Estimation of the air-parcel trajectory may be grossly inaccurate in some
cases. Other approaches for estimating trajectories need to be investigated.
One possible method is to use air quality data in addition to wind data to
derive trajectories. This may decrease reliance on uncertain and limited
information, and would be similar to the Level III analysis since air quality
data are used to establish the trajectories in this approach.
The performance of the simple trajectory model described herein could be
compared with more sophisticated trajectory models. Assuming the more
sophisticated models perform better, sensitivity tests could be designed to
develop compromise models that are less data-intensive than the sophisticated
models but perform better than the simple model.
The simple model needs to be tested with other, updated chemical
mechanisms. Mechanisms that are more efficient in producing O3 may produce
better results.
REFERENCES
Federal Register. 1981. 46(14):7182-7192, January 22.
Gipson, G.L., W.P. Freas, R.F. Kelly, and E.L. Meyer. 1981. Guideline For
Use of City-Specific EKMA in Preparing Ozone SIP's. EPA-450/4-80-027, U.S.
Environmental Protection Agency, Research Triangle Park, NC.
Haws, R., and R. Paddock. 1975. The Regional Air Pollution Study (RAPS) Grid
System. EPA-450/3-76-021, U.S. Environmental Protection Agency, Research
Triangle Park, NC.
U.S. Environmental Protection Agency. 1978. Ozone Isopleth Plotting Package
(OZIPP). EPA-600/8-78-014b, U.S. Environmental Protection Agency, Research
Triangle Park, NC.
U.S. Environmental Protection Agency. 1977. Uses, Limitations, and Technical
Basis of Procedures for Quantifying Relationships Between Photochemical
Oxidants and Precursors. EPA-450/2-77-021a, U.S. Environmental Protection
Agency, Research Triangle Park, NC.
Whitten, G.Z., and H. Hogo. 1978. User's Manual for Kinetics Model and Ozone
Isopleth Plotting Package, EPA-600/8-78-014a, U.S. Environmental Protection
Agency, Research Triangle Park, NC.
WORKSHOP COMMENTARY
McRAE: Have you compared the amount of material in the column from the
initial conditions to the amount that is injected through emissions?
Related to that, if you can use one of these models to predict what is
occurring in the atmosphere, how do you change the initial conditions when you
are exercising control strategies?
GIPSON: In some cases, the model predictions obviously will be very sensitive
to initial conditions. That is all tied into this trajectory and where the
starting point is.
As for changing the initial conditions with regard to control strategy, I
think you have to look at where the initial conditions are assumed to come
from.
If they are assumed to be locally generated materials, and, in fact, in
the control strategy you reduce emissions, then you would make some assessment
concerning how the initial conditions, the initial concentrations, might
change.
An equally proportionate reduction would be a reasonable assumption.
LLOYD: What did you assume for initial aldehydes? Did you do any sensitivity
study on those?
GIPSON: No, this is strictly the Dodge mechanism, using the default chemistry
in the OZIPP package.
I think aldehyde was 5%.
McKEE: You may have explained this and I missed it. In your 5-km corridor,
did you assume that the air parcel remained 5 km wide and moved along like
this and picked up sources, or did you assume that each point source diffused
by 10° or 15° and became more diluted and contributed less as time went on to
that 5-km corridor?
GIPSON: In this type of model, you assume there is a 5-km corridor, and no
diffusion.
JEFFRIES: Point sources are treated like area sources?
GIPSON: Yes.
KELLY: Other people have been running the model and have had dilution by a
factor of two. Do you have any explanation for that?
GIPSON: They are starting from a standard diagram and diluting it. This, of
course, implies that the model is overpredicting. However, those conditions
are probably not realistic in the St. Louis situation.
Most of the models that have been applied to St. Louis, I think, have a
tendency to underpredict O3. It's not only this one but also some of the more
sophisticated models, though maybe not to this extent.
KELLY: This is going to have a better estimate of dilution than just the
standard?
GIPSON: We are trying to tie the specific meteorology of the day to the
observed O3. We're trying to model that situation by inputting the best
available information we have, even though it's still a simple modeling
approach.
DODGE: Mr. Gipson, did you try to validate the meteorology by checking CO
levels before you ran the chemistry?
GIPSON: In the original work that we did, we did look at CO levels. However,
we have completely revamped our trajectory analysis approach. I haven't done
that in this set of —
AUST: We did almost the same thing in the Baltimore area and got the same
type of results. We took the EKMA model and then fell back to all the default
values, using the center-city atmospheric effects. We got essentially good
correlation.
Did you get that here? Why wasn't there better correlation here?
GIPSON: Yes, we did it for St. Louis, and we got a much better correlation.
I think the answer is very simple. We're starting off in the urban
corridor with high initial concentrations, and in that type of trajectory
model, you're going to be able to form more O3.
I think the one thing you might want to consider with this
air-parcel-type approach is that when you start off in the city, the air
parcel with high initial concentrations is going to go downwind.
It may miss the monitoring site slightly, but --
AUST: I understand the mechanics of it. Is this part of the analogy of the
model itself or should we be working with the trajectory at all, using this
particular model?
GIPSON: Well, I have some reservations about deriving trajectories from
surface wind measurements. I think there are problems in that.
I don't think some of the problems with underprediction will ever be
solved by substituting a different mechanism. There are some trajectories
that simply miss the entire city, completely. That is a problem.
KILLUS: How does the amount of dilution in these runs compare with the amount
in the standard-EKMA model? Are these much greater dilutions?
GIPSON: That is hard to explain. We have to go back to the standard EKMA and
talk about an exponential growth. The standard EKMA is about 3%. These are
comparable, I think, to about 23%, 23 or 24%. But we changed the mix in our
profiles.
JEFFRIES: The same delta occurs, but it's distributed in time differently.
The standard EKMA model gives you a constant dilution rate.
KILLUS: This wasn't the city-specific EKMA using day-specific mixing?
JEFFRIES: No, it's the same profile. The same profile is used, but
day-specific initial and final input numbers are used. Overall dilution is
exactly the same, theoretically, as what would occur, but the time of
occurrence has been moved around.
Thus, there's less dilution in the morning, more dilution at midday, and
less dilution in the evening.
But he [Mr. Gipson] would put a straight line through what's the sigmoid
shape of this one.
WHITTEN: I'd like to address two points. One is that some of the more
updated versions of chemistry tend to make more O3 from the same amount of
precursors, which would tend to bring your points further up.
In fact, the Dodge chemistry results, explained in my talk, can be
modified with just one small change, which would produce the same effect.
The second point I want to address is that an EKMA version we've been
preparing for the South Coast District to use in their SIP's has trajectories
that go up to 30 h in time. We had to address this problem of a 5-km box
maintaining its size.
What we were coming up with was a kind of reverse funnel, in that the
trajectory started out being very wide and ended up being very narrow.
What this addresses is an averaging of the emissions density. Thirty
hours earlier you may have taken an emissions density over a wide area and
averaged all of the emissions. As you come closer and closer to the
trajectory's destination, the averaging area narrows, so that the local
emissions become the only ones that influence it.
This is an approach that we've taken. It's actually the opposite of what
you're saying.
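A minimal sketch of this reverse-funnel averaging follows. The talk gives only
the 30-h trajectory length and the 5-km box; the linear taper and the 50-km
starting width are assumptions added for illustration.

    def corridor_width_km(hours_before_arrival, total_hours=30.0,
                          start_width=50.0, end_width=5.0):
        """Emissions-averaging width for the reverse funnel: wide far
        upwind, narrowing linearly to the box width at arrival."""
        frac = min(max(hours_before_arrival / total_hours, 0.0), 1.0)
        return end_width + (start_width - end_width) * frac

    # 30 h upwind, emissions are averaged over 50 km; at arrival, 5 km.
    for t in (30, 15, 5, 0):
        print(t, corridor_width_km(t))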
JEFFRIES: It is true, as Dr. Whitten said, that the newer mechanisms are
more reactive for a given amount of precursor material.
They are also more sensitive to the initial source of radicals and to
aldehydes. Lots of things average out between the Dodge and default
mechanisms and the newer mechanisms that supposedly represent events that need
more data.
So changing the mechanism often leads to worse predictions because you
don't know the inputs that those new mechanisms need. If you believe the
numbers that you have, you may end up predicting lower values.
So there is a trade-off in terms of where the uncertainty is.
WHITTEN: Part of that is coming from the fact that the original mechanism has
a default —
JEFFRIES: Right.
WHITTEN: Newer mechanisms are now getting there.
JEFFRIES: I think one has to be very careful when you're talking about the
Dodge mechanism.
When you're talking about the Dodge mechanism in a case like this and
OZIPP, you must carry with that an enormous range of standard default
assumptions and conditions like the 25% to 75%, the 5%, and so forth.
In the newer mechanisms, all those things are adjustable.
Whereas with Dodge, you want to hold all of those constant because any
change in them means the original validation work doesn't hold up.
So when you talk about Dodge, you're also talking about several default
values that aren't modified in a case like this.
TRIJONIS: What do you think is the most likely reason for the
underestimation? Have you given that much thought?
GIPSON: Well, I think there are different considerations on different days.
There's no one single cause.
For example, on Day 238, when there was significant underprediction, the
trajectory just missed the whole city. There's nothing that you're going to
be able to do to make it work. That says something about the control-strategy
design. You can control in the rural areas. I just don't believe the
trajectory is telling us where the O3 is coming from.
You know, it's an error in the input data, basically, or the assumptions
of the simple type model don't hold up under this particular situation.
For some of the other days, I think maybe the results could be improved.
It looks like the characteristic curve for mid-morning mixing on Day 195
overstates the dilution at that particular time of the day. That's a time of
day when the air parcel goes through the city where the emissions are
occurring.
If you dropped it down a little bit on that day you would get greater
concentrations around 10:00 a.m. or 11:00 a.m. You may in fact increase your
O3 prediction.
I think there are different reasons on different days.
JEFFRIES: Can you be specific about Day 159? I predicted it well with the
older approach. Why is Day 159 so poorly predicted in this case, compared to
the way it was before?
GIPSON: The trajectory changed significantly.
JEFFRIES: Where is the trajectory now?
GIPSON: It's still in the same general area.
JEFFRIES: It's further out?
GIPSON: It's moved, well, it's actually further up, too. So it shows the
critical nature of getting the trajectory right.
WAYNE: I wonder if there is a possibility that in some of these cases with
this characteristic curve, you're starting with such low mixing heights that
some substantial amount of precursors is sequestered in the stratified air
above the initial mixing layer, so that when you do your dilution you're
ignoring a proportion of the contaminants that will eventually wind up in that
parcel?
GIPSON: I'm sure. The model is simplistic in the sense that it does assume a
uniform mixing under the mixing layer in the early morning.
WAYNE: And a lot of it above the mixing layer?
GIPSON: Yes. We did ignore precursors above the mixing layer. What we did
here was vary the mixing height at sunrise, which is one of the inputs to the
characteristic curve, across the range of 100 m to 250 m. That is what
accounts for most of the sensitivity.
The emissions differences didn't really have that much of an effect.
However, we have no way of getting at how high the mixing layer is.
JEFFRIES: When you vary the mixing height like that, with that particular
model, you're not only changing the dilution, the total dilution that's going
to take place for the day, but you also change the mass of material that you
start with.
WAYNE: Exactly.
JEFFRIES: There is a balance and it's clear that you can shift things
dramatically by changing the ratio between how much material you start with
and how much you're going to dilute it. So you can shift the distribution
from initial conditions to emissions tremendously by changing that one initial
mixing height up and down.
WAYNE: And the temperature structure doesn't necessarily show you what is
immediately above that morning mixing layer.
JEFFRIES: The model, of course, doesn't take into account point sources that
emit at elevations above the mixing layer. It is a simple model.
GIPSON: There are many more detailed approaches; it's a simplified approach.
DEMERJIAN: One suggestion you might want to consider is putting in layer
averaging rates. My guess is it will raise those numbers.
JEFFRIES: Won't he have to carry layers in the vertical structure, then?
DEMERJIAN: No, what I'm saying is that he can use an integrated average for
the mixing layer height.
JEFFRIES: Okay. The numbers will keep going up as the mixing --
KELLY: Did you say that the mixing is about 20%/h more than the standard
EKMA? You said about 23%?
GIPSON: Standard is 3%/h under the old exponential. These mixing depths
changed on the order of, starting, say, at 100 m in the morning at sunrise, up
to 2000 m on some days in the afternoon.
JEFFRIES: Two hundred seventy-five with the nine hundred.
DEMERJIAN: It might be easier, just to draw it on the board, just the
profiles.
JEFFRIES: The 3% is for the standard Dodge in Los Angeles with the 600 to 7,
whatever. That is only 3%/h.
When you use the exponential shape in these, of course, it's the same
overall dilution. You're still going from 100 to 900. You just get there
differently. So it is the same total dilution on Day 275, for example, you
may start at 100 —
KELLY: There's hardly any dilution at all at that height.
JEFFRIES: That's right, 3%/h. Here it could be 900 m. But individual days
may start out at 100 m and go to 1900 m.
GIPSON: I think that's the exponential shape that was originally used in the
Los Angeles case. Of course, it's only 3%. That's a very small change. But
that's almost, in effect, a straight line.
In fact, if you use the characteristic curve with the same limits, you
get the same answer. The changes are small.
JEFFRIES: I will show you some individual mixing heights calculated by three
or four different ways tomorrow.
DODGE: Are there any more questions?
MEYER: One of the points Mr. Gipson mentioned was that we had a lot of
trouble using surface wind data to come up with reasonable trajectories that
make the model work well.
One thought we had was that, since people are unlikely to have anything
other than surface wind data, it might be better to rely on pollutant
concentration data to infer some kind of trajectory and then just use that.
In effect, that's like what's done in Level III. We assume the scenario
based on the concentration data and simply use the wind data to establish
that, at least, the site observing the peak O3 is downwind from the city.
I would like to hear people's reactions to an approach like that. That
is, not directly using wind data to produce trajectories, but instead, trying
to draw inferences about an approximate trajectory based on the concentration
information that we have.
WALKER: May I respond to that?
Some recent data put together on an episode that occurred just last month
in Houston, TX, where wind data were plotted hourly during the episode,
revealed some very interesting patterns.
The only explanation of the wind vectors that could be made was that the
winds were blowing at each other and going up. The heat island effect created
rising air currents.
I think under these conditions, deciding what the true trajectory is is
extremely difficult.
The synoptic trajectory at that time was quite different from any of the
surface winds. The synoptic wind was from the northeast but the surface wind
was from the southeast and the northwest for a good part of the day.
So during episode conditions where you had low winds, it was very hard to
be sure what your trajectory was.
McKEE: I would like to ask a question about using ambient air concentrations
to trace a trajectory. First of all, what contaminant did you use? Or
contaminants?
MEYER: All we really did was look at the peak O3 concentrations, let's say in
the St. Louis area, and noted what time of day the concentrations occurred.
Then we looked at the surface wind data and reassured ourselves that at least
the wind wasn't blowing in the opposite direction so those peak O3
concentrations weren't upwind from the city.
Then we assumed that the air parcel started in the center of the city and
moved towards where the peak O3 was observed.
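One way to make this inferred-trajectory idea concrete is to compute the
transport direction and speed implied by a straight line from the city center
to the monitor observing the peak; the coordinates and times in this Python
sketch are invented for illustration.

    import math

    def implied_transport(city_xy, monitor_xy, start_hour, peak_hour):
        """Straight-line transport implied by assuming the parcel leaves
        the city center in the morning and arrives at the peak-O3
        monitor. Positions in km; returns (bearing_deg, speed_kmh)."""
        dx = monitor_xy[0] - city_xy[0]
        dy = monitor_xy[1] - city_xy[1]
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 = north
        speed = math.hypot(dx, dy) / (peak_hour - start_hour)
        return bearing, speed

    # A monitor 30 km northeast of center, peak at 1500 after an 0800
    # start, implies transport toward 45 degrees at about 4.3 km/h.
    bearing, speed = implied_transport((0.0, 0.0), (21.2, 21.2), 8, 15)
    # Sanity check: the observed surface wind should not be roughly
    # opposite this bearing.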
JEFFRIES: It's assumed.
McKEE: Yes, that wouldn't work in Houston because, first of all, O3
precursors do not originate in the center of the city to a major extent.
Secondly, the variation of O3 with location within the city is quite
great. We don't have enough monitoring stations to describe the O3 isopleths
accurately over the area of the city at any one particular time, but we have
several reasons for believing that O3 may change rather significantly if you
move crosswind of the trajectory by as little as a mile or two.
We know from having stations 5 or 6 mi apart that one station may show O3
at better than 0.2 ppm. A station crosswind from the prevailing air movement
at that time and distant from the first station by as little as 5 mi may show
O3 in the natural background range of 0.04 or 0.05 ppm.
So, I don't think you could use O3 concentrations to determine what the
trajectory was unless you had a monitoring station on each square mile of a
rectangular grid or some totally impossible number of stations like that.
MEYER: Well, again, I don't think we are talking about applying a fine-tuned
trajectory or anything like that, just an approximation.
McKEE: Well, if you vary 4 to 1 or 5 to 1 over a distance of 5 mi, that's
hardly fine tuning. That's a gross difference.
JEFFRIES: We have two airshed modelers here. I'd like to know how they go
about getting to the problem.
The airshed models specify the wind flow in every single grid square. How do
they
do that? Same data? Not the same data? Some other method?
KILLUS: It is quite typical that airshed models have what seems to be quite
a reasonable pollutant cloud that is somewhat misplaced, by as much as 60° off
what was likely to be the correct trajectory.
In our first EKMA model, for example, we found that the trajectory was
more west of the city, whereas the actual pollutant cloud, as drawn by
observers looking only at the air quality monitoring data, was going more or
less northeast of the city.
The same sort of thing happened out in St. Louis.
JEFFRIES: These are being derived from surface wind data?
KILLUS: These are being derived from surface wind data.
JEFFRIES: Five-layer model?
KILLUS: That's right, three, two, three, four.
So I would actually suggest that in a circumstance like you described in
Houston, if one has the peak O3 at a particular station outside the city, and
knows and observes that 5 mi away from that one has very close to background
values, far from being a circumstance where you are limiting the possibility
of drawing your trajectory from the O3 monitoring data, you have, in fact,
described a very precise trajectory from the O3 monitoring data.
You know that the O3 precursors are being emitted from a fairly narrow
source area, and you know pretty much where the receptor site is.
If one then draws a line between the two, one would draw that trajectory.
And in some cases we recommend it for airshed modeling.
If you see the O3 pollutant cloud being convected 30° off because of
errors in surface monitoring, you might very well just go in and rotate the
wind fields and see how much better you can do this.
Trying to prepare a large wind field from extremely noisy and oftentimes
biased wind data is at present one of the largest single errors in airshed
modeling.
APPENDIX 4-1
SCATTER DIAGRAMS FOR LEVEL III MODEL PERFORMANCE
IN ST. LOUIS AND PHILADELPHIA
[Figure: scatter plot of predicted O3 (ppm) versus observed O3 (ppm); both
axes run from 0.00 to 0.30.]
Figure 4A-1. Predicted versus observed O3 (ppm), Philadelphia EKMA analysis,
1982 SIP.
[Figure: scatter plot of predicted O3 (ppm) versus observed O3 (ppm); both
axes run from 0.00 to 0.30.]
Figure 4A-2. Level III EKMA vs. air quality (St. Louis RAPS).
5. APPLICATION OF EKMA TO THE HOUSTON AREA
Harry M. Walker
Monsanto Company
Alvin, Texas 77511
ABSTRACT
Ozone, nonmethane hydrocarbon, and oxides of nitrogen data gathered
during the summers of 1977 and 1980 have been analyzed as a test of the EKMA
predictive model. Data from six ozone monitoring sites, five oxides of
nitrogen sites, and five nonmethane hydrocarbon sites were used. Predictions
were possible for 114 days, and data were plotted on the standard-EKMA
isopleth set.
Analysis indicated that 53 days should be discarded as an inappropriate
test of EKMA because of adverse meteorology. The validity of EKMA was judged
against the 61 remaining days. EKMA predictions were found valid (± 20%) for
20 days, underpredictive for 5 days, and overpredictive for 36 days.
Special attention was given to the "heatwave" period of June to July,
1980, which produced little ozone. Lack of adequate photochemical initiation
was deemed to be the principal cause of overprediction. An attempt should be
made to incorporate improved initiation factors into EKMA.
The prevailing nonmethane hydrocarbon:oxides of nitrogen ratio in Houston
was found to be 11 to 1. Scenarios regarding the degree and method of
abatement necessary to achieve the ozone standard are discussed. None are
deemed feasible.
INTRODUCTION
The Houston Area Oxidant Study (HAOS) gathered a large amount of
aerometric data in Houston during the summer of 1977. Included were ambient
air data taken at numerous sites for ozone (O3), nonmethane hydrocarbon
(NMHC), and oxides of nitrogen (NOX). SRI International, in its analysis
project completed for HAOS in 1979, carried out a preliminary Empirical Kinetic
Modeling Approach (EKMA) correlation using these data (Ludwig and Martinez,
1979).
In SRI's work, peak O3 values for 54 days in 1977 were predicted using
standard-EKMA isopleths. Nonmethane hydrocarbon and NOX values were obtained
from monitors at five centrally located sites in Houston, TX. Predicted O3
values were compared with the observed maximum O3 values for the day. In
SRI's methodology, only those days were evaluated for which values for both
6:00 a.m. to 9:00 a.m. NMHC and 6:00 a.m. to 9:00 a.m. NOX were available from
at least three central sites. Missing data severely restricted the number of
days that could be studied and, in part, contributed to uncertainties in the
final results. To study the value of EKMA as a predictive tool more
decisively, the EKMA correlation was repeated using the same data from 1977,
but supplemented with recent data available for June and July, 1980. As a
result, 114 days were defined for which an EKMA prediction could be made.
METHODS
Hydrocarbon (HC) and NOX monitors were centrally located (McGregor,
Crawford, Clinton, Mae Drive, Parkhurst, and Aldine) and appear on the average
to sense pollutant concentrations in the central city and West Ship Channel
area. During this study, a large number of 03 monitors were in operation.
With most wind directions, one or more monitors were located downwind of the
core complex. Thus, the availability of data and the location of the monitors
would appear to conform to the concepts of the EKMA methodology, which
attempts to relate 6:00 a.m. to 9:00 a.m. precursors in a core area to the
subsequent O3 maximum in a downwind direction (EPA, 1977).
For the purposes of this study, the robustness criterion was relaxed to
2-2; that is, two or more 6:00 a.m. to 9:00 a.m. NMHC values, plus two or more
6:00 a.m. to 9:00 a.m. NOX values, were required in each average used for any
EKMA prediction. SRI used a 3-3 robustness criterion that severely limited
the number of days for which a prediction was possible. As a result of this
change, plus the addition of the 1980 data, 114 days were found during May
through September, 1977, plus June through July, 1980, for which a prediction
could be made.
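In code, the 2-2 robustness screen reduces to a simple test on the per-site
6:00 a.m. to 9:00 a.m. averages; this Python sketch is an illustration of the
stated criterion, not the author's actual procedure.

    def screen_day(nmhc_sites, nox_sites, min_sites=2):
        """Apply the 2-2 robustness criterion: a day is usable only if
        at least min_sites 6:00-9:00 a.m. site averages are available
        for both NMHC and NOX. Returns (NMHC, NOX) means or None."""
        if len(nmhc_sites) < min_sites or len(nox_sites) < min_sites:
            return None
        return (sum(nmhc_sites) / len(nmhc_sites),
                sum(nox_sites) / len(nox_sites))

    # SRI's stricter 3-3 criterion corresponds to min_sites=3.
    day = screen_day([0.8, 1.1], [0.07, 0.09])   # -> (0.95, 0.08)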
Predictions were made by plotting each day's average 6:00 a.m. to
9:00 a.m. NMHC and NOX values on the standard-EKMA isopleth diagram and then
reading back the predicted daily maximum O3 concentration. The isopleth
curves were interpolated as necessary (see Figure 5-1).
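The read-back step amounts to interpolating a surface of simulated O3 maxima
over the precursor plane. The paper did this by hand on the printed diagram;
the sketch below assumes instead a digitized grid of the standard isopleths
and applies ordinary bilinear interpolation.

    import numpy as np

    def predict_o3(nmhc, nox, nmhc_grid, nox_grid, o3_surface):
        """Read a predicted daily O3 maximum off a digitized isopleth
        diagram. o3_surface[i, j] holds the simulated O3 maximum for
        (nox_grid[i], nmhc_grid[j]); the query point must lie strictly
        inside the grid."""
        j = np.searchsorted(nmhc_grid, nmhc) - 1
        i = np.searchsorted(nox_grid, nox) - 1
        tx = (nmhc - nmhc_grid[j]) / (nmhc_grid[j + 1] - nmhc_grid[j])
        ty = (nox - nox_grid[i]) / (nox_grid[i + 1] - nox_grid[i])
        return ((1 - tx) * (1 - ty) * o3_surface[i, j]
                + tx * (1 - ty) * o3_surface[i, j + 1]
                + (1 - tx) * ty * o3_surface[i + 1, j]
                + tx * ty * o3_surface[i + 1, j + 1])

    # Example: a digitized grid could be queried as
    # predict_o3(0.9, 0.08, nmhc_grid, nox_grid, o3_surface).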
[Figure 5-1. Standard-EKMA isopleth diagram with all 114 predicted days
plotted.]
When predictions were completed for all 114 days, many days quite
evidently had failed to test EKMA, for reasons obvious from the weather
records. To investigate this matter further, the following meteorological
data were tabulated for each day: 6-h wind distance (9:00 a.m. to
3:00 p.m.), 6-h wind direction, 6-h sky cover, and noontime temperature.
All days were discarded if the 6-h wind vector exceeded 50 mi or if the
6-h sky cover exceeded 60%. Cloudy days or days with high wind produce little
O3, and thus cannot be regarded as testing the validity of EKMA.
The exact cut-off levels were fixed based on analysis of the actual data.
Table 5-1 presents the pertinent data supporting the cut-off levels used.
Interestingly, temperature proved not to be a key factor, and hence was not
used.
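Expressed as a filter, the exclusion rule is a one-line test per day; this
sketch applies the stated cut-offs and is checked against two entries from
Table 5-1.

    def is_explained_failure(wind_distance_mi, sky_cover_pct,
                             max_wind=50.0, max_cover=60.0):
        """Flag an explained-failure (E) day: adverse meteorology that
        disqualifies the day as a test of EKMA, using the cut-offs in
        the text (6-h wind vector > 50 mi or 6-h sky cover > 60%)."""
        return wind_distance_mi > max_wind or sky_cover_pct > max_cover

    # From Table 5-1: 5/11 (29 mi, 90% cover) is excluded, while
    # 7/12 (48 mi, 23% cover) remains a decisive day.
    assert is_explained_failure(29, 90) and not is_explained_failure(48, 23)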
Some data available for morning mixing heights appeared to be significant
and would have been used as an exclusion criterion except that values were
available only for a minor portion of the period (Table 5-2).
About half of all predicted days (53) were thus classified as E, or
explained failure, days. Decisive days (non-E days) numbered 61 for this study
(27 for the SRI study). Values for the 61 days are plotted on an EKMA diagram
(Figure 5-2). All 114 days predicted are plotted in Figure 5-1.
TABLE 5-1. CHECKING PREDICTED DAYS WITH HIGH ADVERSE FACTORS

Date       6-h Wind Distance (mi)    6-h Sky Cover (%)
5/1                45                       63
5/11               29                       90
6/1                34                       70
7/9                28                       67
7/12               48                       23
7/14               37*                      60
7/26               32                       60
6/27/80            48                        0

*Rain
TABLE 5-2. O3 VERSUS MIXING HEIGHT*†

Mixing Height    > 250 m    > 100 m    < 100 m
O3 > 0.2            1          5          10
O3 > 0.1            9         14          11
O3 < 0.1           37         13           6

*Days = 5-9/1977.
†Total days with mixing height data = 106.
[Figure 5-2. Standard-EKMA isopleth diagram with the 61 decisive days
plotted.]
RESULTS
Inspection of Figures 5-1 and 5-2 reveals little difference between them.
However, this similarity was expected. The position of each point depends
entirely on the precursor analytical results, not on O3 formation. There
was never reason to expect that the E days, which were excluded from Figure
5-2, would belong to a different subgroup with respect to precursor
concentrations.
The bulk of the points (days) lies in the region of the EKMA grid where
the O3 isopleths are very flat, suggesting that the formation of O3 in Houston
will be relatively insensitive to HC abatement. Real-world results have
confirmed this suggestion; indeed, predicted O3 levels, with the exception of
a few cases, were critically dependent on the NOX concentration, but were
scarcely sensitive to NMHC concentration.
To judge better the validity of EKMA, a cross plot of measured maximum O3
versus predicted O3 was constructed (Figure 5-3). Only the 61 days from
Figure 5-2 were used in plotting Figure 5-3.

Figure 5-3. EKMA O3 predictions versus actual observed O3 values.
The striking disclosure from Figure 5-3 is the random shotgun
distribution of the decisive point group. The 45° line passes through the
upper center of this pattern, but by no means its top. Broadly, EKMA
underpredicted by more than 20% on 5 days (8% of 61), was within ± 20% of
measured results on 20 days (33% of 61), and, for the remaining 36 days (59%
of 61), EKMA overpredicted actual O3 by more than 20%. On 22 of the latter
days, the predicted value was twice or more the actual value. In judging
these results, note that all days for which the documented meteorology
predicted a shortfall in photochemical O3 production (E days) were eliminated
from Figures 5-2 and 5-3.
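The three-way scoring used above can be written out directly; a small sketch
of the plus-or-minus 20% criterion, with an invented example day:

    def classify_prediction(predicted, observed, tol=0.20):
        """Score an EKMA prediction against the observed O3 maximum
        using the +/- 20% criterion of the text."""
        if predicted < observed * (1.0 - tol):
            return "underpredictive"
        if predicted > observed * (1.0 + tol):
            return "overpredictive"
        return "valid"

    # An invented example: a 0.30 ppm prediction on an observed 0.12 ppm
    # day is overpredictive, here by more than a factor of two.
    print(classify_prediction(0.30, 0.12))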
This work does not support the contention that EKMA can supply an
accurate upper-limit value for potential O3. This conclusion is based on the
59% incidence of high (>20%) overprediction. This consistent tendency to
overpredict is the most serious shortcoming of the procedure.
One may argue that the problem may be simply the lack of an O3 monitor at
the precise downwind plume position to find the true maximum. However, a real
geographical miss is unlikely because the core area is very broad and so many
analyzers were running during 1977.
In the course of this work, we noticed that the days that validated EKMA
tended to occur in groups. These groups appeared to coincide with intense O3
episodes. Since O3 episodes are well known to be related to frontal passage,
this possible association was investigated. In two of three cases where
consecutive high O3 days were noted, frontal passage was found to have
occurred not more than 72 h prior. Thus, the theory that stratospheric O3
injection can help initiate O3 episodes received support. Apparently,
initiation by stratospheric O3 or by other factors is required to consistently
induce O3 production at the levels predicted by EKMA. This insight was a
major finding of the study.
The June-July, 1980 period was included in this study partly because it
was a noteworthy summer drought period for Houston. June and July established
new records for the number of consecutive 100°F days in Houston. Little rain
fell during the entire period. Winds were predominantly from the southwest,
and generally lighter than those of the same period during 1977. This period
of high temperatures, few clouds, little rain, and light winds was a
conspicuously low O3 period. On only 24 days was 0.1 ppm exceeded at any site
(3 > 0.2). During the corresponding 1977 period, 0.1 ppm was exceeded on 34
days (8 > 0.2; 1 > 0.3). Yet, pollutant concentrations were higher in 1980
(NMHC 0.84, NOX 0.89) than in 1977 (NMHC 0.84, NOX 0.65) (average of all
robustness 2-2 or better days). Of all 2-2 robustness days, 18% verified EKMA
in 1977, but only 14% in 1980. Apparently, the June-July, 1980 period
represented a period of poor initiation. High temperatures, sunshine, and
pollutant concentrations alone were not sufficient to guarantee a high O3
period.
One additional insight was obtained from this study. The center of all
points on Figure 5-1 is located at about NMHC = 0.9 ppm, NOX = 0.08 ppm,
corresponding to a HC:NOX ratio of 11.25. Attainment of the 0.12 ppm O3
standard by HC abatement would require a level of NMHC = 0.25 ppm or a 72%
reduction (little changed from predictions made in 1973). Abatement by NOX
reduction to the same level would require NOX = 0.02 ppm or a 75% reduction.
These figures are based on averaged 6:00 a.m. to 9:00 a.m. average values, not
on peak values. Should peak values be used, the abatement level for HC's
increases to 77.5% and for NOX to 87.5%. None of these abatement scenarios is
technically or politically possible.
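The percentages quoted follow directly from the stated concentrations; a
short check in Python:

    # All values in ppm, from the center of points on Figure 5-1.
    nmhc, nox = 0.9, 0.08
    print(nmhc / nox)        # HC:NOX ratio = 11.25
    print(1 - 0.25 / nmhc)   # HC abatement to 0.25 ppm: about 72%
    print(1 - 0.02 / nox)    # NOX abatement to 0.02 ppm: 75%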
DISCUSSION
EKMA appears to yield a scattered pattern of predictions that broadly
overlap the range of actual 03 values. Thus, some checking values will be
found in every group predicted. However, many overpredicted values and a few
underpredicted values will also be obtained. Yet, no other simple prediction
scheme for O3 can do any better, and most do worse.
It is the author's opinion that if the meteorological options available
in the city-specific version of EKMA were carefully utilized, the results
would be improved. An initiation factor appears very desirable. Such a
factor would endeavor to quantify everything that bears upon the input of
free-radical initiators to the photochemical process. Background O3,
stratospheric O3, and day-to-day atmospheric recirculation would all be
components.
EKMA does not appear presently to weigh input O3 sufficiently heavily in
the initiation process. Other models that are more cognizant of the
initiation potential of input O3 and other free-radical sources seem to be
more successful in making specific day predictions and in coping with multiday
episodes. However, these models are far more complex and expensive to use.
The practicality of adding such a factor to EKMA without making the
computation much more complex and at the same time creating a need for
additional sophisticated input analytical information is questionable.
However, it should be given some thought, as the simplicity and convenience of
the EKMA procedure justify major effort toward its improvement.
REFERENCES
Ludwig, F.J., and J.R. Martinez. 1979. Aerometric Data Analysis for the
Houston Area Oxidant Study, Final Report, Vol. 2. SRI International, p. 241.
(Available from the Houston Public Library).
U.S. Environmental Protection Agency. 1977. Uses, Limitations, and Technical
Basis of Procedures for Quantifying Relationships Between Photochemical
Oxidants and Precursors. EPA-450/2-77-021a, U.S. Environmental Protection
Agency, Research Triangle Park, NC (November).
WORKSHOP COMMENTARY
JEFFRIES: Could we go back to the slide for decisive days on your isopleth
diagram?
WALKER: Yes. That one?
JEFFRIES: Yes.
May I suggest that you multiply all the O3 isopleth numbers by about a
half and count the number of points above each one to see if the frequency
distribution predicted by your points agrees with the observed frequency
distribution.
WALKER: It would probably help, yes.
JEFFRIES: In effect, that's the method that I describe. You can count the
number of points in each interval and do the proportion against what was
actually observed. Then you can compare the frequency distributions instead
of trying to do an absolute prediction of HC, NOX, and O3, on the same day,
with the diagram.
It is very close. You could probably sit down and do it by hand very
quickly.
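The counting procedure Dr. Jeffries describes can be sketched in a few lines;
the O3 levels and day values below are invented for illustration.

    def exceedance_counts(values, levels):
        """For each O3 level, count the days at or above that level."""
        return [sum(v >= lev for v in values) for lev in levels]

    levels = [0.08, 0.12, 0.16, 0.20]              # ppm
    predicted_o3 = [0.21, 0.15, 0.11, 0.30, 0.09]  # invented values
    observed_o3 = [0.14, 0.12, 0.10, 0.16, 0.08]

    # Comparing these two count vectors compares predicted and observed
    # frequency distributions rather than pairing days one to one.
    print(exceedance_counts(predicted_o3, levels))
    print(exceedance_counts(observed_o3, levels))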
WALKER: Yes, that distribution is not greatly different from the one you
showed for St. Louis, really.
MARTINEZ: I am struck that it was so widely underpredicted. Pointing to the
data, it is pretty clear to me that the HC's were way underestimated. If you
had higher HC on that day you would have, of course, moved things over closer
to the 45° line.
WALKER: Yes. I didn't attempt to critique the HC's. I just took what we
had. That could well be. I wouldn't question that.
The HC and NOX data are problematic. I think there were poor recovery
data in many of the analyzers, which was disappointing. As you say, they are
hard analyzers to run and to get really good results from.
You can always worry about the quality of the data.
KELLY: Maybe this is a good time to say that when we did the bag irradiations
in Houston in 1977, we did get good agreement between predicted O3 in the
bags, using EKMA, and the observed O3 in the bags, using Houston air.
You don't really need initiation under those conditions. I really
question whether you need stratospheric O3 to initiate photochemical O3
formation.
WALKER: I recognize that a lot of people question this thesis, but I say that
the bags are always contaminated with free radicals so they don't really
constitute a test of whether the supply of free radicals is adequate or not.
KELLY: Whether they are contaminated with free radicals or not, you can look
at the production of O3 in bags at low amounts of precursors and see what the
contamination is.
You can look for that with NOX or HC. We don't find that there is an
excess OH radical in those conditions.
WALKER: What you're saying is, it works in the bags, is that right?
KELLY: Yes. In other words, the OH that we would predict in the bags based
on ethylene or NOX loss experiments is comparable to what people measure in
the air using spectrometers, or predict in the air, using models, for the OH
radical. There wasn't any evidence for any excess OH radical.
What I think is happening is that these things occur together. These
stratospheric intrusions in our data, which are traces of stratospheric air,
bear that out. The beryllium-7 level is higher on the back side of a high-
pressure system. Stagnation and photochemical formation of O3 are also higher
on the back side of high-pressure systems.
I'm not saying you're wrong. I'm saying that under conditions where
there is a higher stratospheric input, you don't measure more O3 downwind. I
just don't think the cause is that initial stratospheric O3.
I think the O3 downwind of cities is the result of photochemistry.
WALKER: Yes, it certainly is. The question is whether something is needed to
start the process, not whether it adds as much to the process as what you find
downwind.
6. A COMPARISON OF THE EMPIRICAL KINETIC MODELING APPROACH MODELS WITH
AIR QUALITY SIMULATION MODELS
Gary Z. Whitten
Systems Applications, Inc.
101 Lucas Valley Road
San Rafael, California 94903
ABSTRACT
One method for testing a simple model such as the Empirical Kinetic
Modeling Approach (EKMA) is to compare its performance with that of more
complex models using the same input data. The many differences between EKMA
and a model such as a complex grid model can be used to define a series of
progressively more complex models. The steps in this model series can then be
used to highlight significant factors that define EKMA.
The results of this study show that in the absence of wind shear, the
simplistic dispersion treatment used in the EKMA model typically produces
results that are very similar to those of models with more complete treatments
of dispersion. However, differences in the chemical mechanism, dependence on
hydrocarbon:oxides of nitrogen ratios, entrainment of reactive precursors, and
other factors were all found to often lead to significant differences between
the present Level III EKMA and more complex models. The study also found that
the assumption of a linear relationship between emissions and the initial
hydrocarbon and oxides of nitrogen concentrations does not always hold,
especially in the presence of significant background hydrocarbon levels.
INTRODUCTION
The U.S. Environmental Protection Agency (EPA) has defined three levels
of modeling for use in 1982 State Implementation Plans (SIP's). Level III
consists of a city-specific Empirical Kinetic Modeling Approach (EKMA) model
used in accordance with a guideline document issued in March 1981 (EPA, 1981).
Level II refers to more complex versions of EKMA, and Level I refers to large,
sophisticated Air Quality Simulation Models (AQSM's) such as the SAI Urban
Airshed Model.
In a recent study, we tested EKMA against more sophisticated modeling
approaches using a series of intermediate models that fall between standard
Level III EKMA and large-grid models such as the SAI Urban Airshed Model. The
purpose of our study was not to demonstrate that EKMA models generate
different results from other models or to determine whether such discrepancies
are significant. In general, one can expect different models to generate
different results. Instead, we wanted to find the main reasons for any
differences in results that stem from the specific parts defining each of the
models. To that end, we used the same input data to compare any two models,
and the intermediate models we created were designed to test a minimum number
of model differences between any two models.
The main points we wish to make in this presentation are as follows:
• EKMA has two parts (a trajectory model and a diagram).
• The EKMA trajectory model treats chemistry more accurately than
many other models, but treats dispersion simplistically.
• Neither its accurate chemical treatment nor its simplistic
dispersion treatment has been shown critical to the reasonable
use of the model, except in cases involving wind-shear effects.
• The results of most comparisons involving the use of the diagram
are reasonably close except in the definition of hydrocarbon:
oxides of nitrogen (HC:NOX) ratios and in situations where
background is important.
• Initial concentrations and HC:NOX ratios do not respond linearly
to controls, especially when background is important.
METHODS
We first need to clarify our own generic use of the acronym EKMA. For
our purposes, EKMA is an approach to estimating the changes in maximum ozone
(O3) expected in some urban area resulting from changes in emissions. This
approach implies the use of an O3 isopleth diagram defined by an ordinate and
abscissa that can be related to NOX and HC precursors. Each point on the
diagram can be associated with the simulated 1-h O3 maximum generated by the
execution of a computer-coded trajectory model, using the inputs specified by
the ordinate and abscissa values associated with that point on the diagram.
Our definition of an EKMA trajectory model includes any moving-column model
that treats at least the chemistry within that column, along with some
treatment of dispersion. This generic definition allows us to construct
highly complex versions of EKMA without needing to create names for each new
version.
To a large extent, we have been working on the next generation of EKMA
models by testing the effect of added degrees of sophistication to the
existing Level III EKMA model. The models we have constructed might be called
Level II EKMA models, but we have not been using these models independently
for control-strategy purposes. In general, we have used these models with the
same data base as that of some highly sophisticated or Level I models.
However, we have delivered one of these Level II models to the South Coast Air
Quality Management District (SCAQMD), and they will be using this model as
part of their 1982 SIP.
GRID MODEL COMPARED WITH LEVEL III EKMA MODEL
Most of the work reported here was funded under EPA contract. Table 6-1
presents the most recent results obtained under this contract for Tulsa, OK.
The table compares the standard Level III EKMA model, the same model with the
Carbon Bond II chemical mechanism (CBM), and the SAI Urban Airshed Model. In
this particular comparison, the EKMA models were applied according to the 1982
SIP guidelines except that the design O3 values were chosen to be equal to the
value predicted at the Apache monitoring site by the SAI grid model.
Several reasons exist for presenting these particular data here. The
results shown supersede those published in our final report to EPA covering
the Tulsa study (Whitten and Hogo, 1981). This update represents a cycle we
consider very important to the modeling process. The original EKMA study
TABLE 6-1. LEVEL III EKMA RELATIVE TO SAI AIRSHED MODEL*

  Control         Airshed Model    EKMA Model    EKMA/CBM† Model
HC (%)  NOX (%)     (ppm O3)        (ppm O3)        (ppm O3)
(base case)          0.127          (design)        (design)
  32      7          0.104           0.115           0.095
  49      8          0.096           0.10            0.075
  75      7          0.087           0.07            0.04
  18     -2          0.117           0.123           0.11

*Apache station, 29 July 1977; observed O3 = 0.133.
†Carbon Bond II mechanism.
helped identify the importance of high HC concentrations used aloft in the
early grid-model simulations. Subsequent grid-model simulations employed much
lower HC concentrations aloft and adopted several other modifications. Thus
the simultaneous use of a simple model such as the Level III EKMA trajectory
model along with a more complicated one can help identify areas of the
modeling exercise that significantly affect the results of the complicated
model and may require reconsideration.
Table 6-1 demonstrates the point made earlier, that is, different models
merely generate different results, providing little insight into why the
results may differ. The results from the modified model (i.e., the same model
using the Carbon Bond II chemical mechanism) demonstrate that matching the
chemistry of the EKMA trajectory model and the grid model creates a model that
is much more sensitive to HC control than either of the original models.
Hence, any attempt to create an "improved" EKMA model by updating only the
chemistry may generate EKMA results that are farther away from those of the
more sophisticated models. The standard Level III EKMA model apparently has
several compensating discrepancies when compared with more detailed models
such as this grid model. However, we know of no guarantee that the
discrepancies will consistently compensate for each other.
GRID MODEL COMPARED WITH LEVEL II EKMA MODEL
Table 6-2 presents the latest results for Tulsa obtained using the
trajectory model developed in our studies for Level II EKMA. Whereas we
obtained the results shown in Table 6-1 using the relative isopleth
methodology of EKMA, we did not need this methodology to obtain the results
shown in Table 6-2 because the absolute predictions were fortuitously
identical. We use the term fortuitous because the many remaining
discrepancies between the models have apparently compensated for one another.
Again, there is no guarantee that the compensations will remain, but
considerable effort has been expended to ensure that the remaining
discrepancies between these models are consistently small.
Several discrepancies between these models are worth noting. The grid
model uses steady-state approximations and the Crank-Nicolson integration
scheme to integrate the ordinary differential equations generated by the
TABLE 6-2. LEVEL II EKMA RELATIVE TO SAI AIRSHED MODEL*

  Control         Airshed Model    Level II EKMA Model
HC (%)  NOX (%)     (ppm O3)            (ppm O3)
(base case)          0.156               0.156
  32      7          0.114               0.122
  49      8          0.101               0.100
  75      7          0.088               0.073
  18     -2          0.138               0.132

*Region-simulated maximum, 29 July 1977; station-observed O3 = 0.133.
chemistry, whereas the Level II EKMA model uses the Gear method with no
steady-state approximations. In addition, the grid model follows four cells
in the vertical direction, two within the mixed layer and two above. Mixing
between cells, both vertically and horizontally, is governed by wind speed,
temperature, stability, and surface roughness, all of which vary in space and
time. The Level II trajectory model follows only one cell in the mixed layer;
the concentrations aloft cannot be varied in time and must be specified as
input. Both models, of course, assume instant mixing within one cell.
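For readers unfamiliar with the numerical distinction: the Gear method is the
classic stiff-ODE integrator. A minimal sketch using SciPy's BDF option (a
Gear-type method) on a simplified NO2-NO-O3 photolytic cycle follows; the rate
constants are rough illustrative values, not the Carbon Bond II chemistry.

    from scipy.integrate import solve_ivp

    def rhs(t, y):
        """Simplified NO2-NO-O3 photolytic cycle, concentrations in ppm:
        NO2 + hv -> NO + O3 (O + O2 lumped), NO + O3 -> NO2 + O2."""
        no2, no, o3 = y
        j_no2 = 8.0e-3    # NO2 photolysis rate, 1/s (illustrative)
        k_no_o3 = 0.44    # NO + O3 rate, 1/(ppm s) (illustrative)
        d_no2 = k_no_o3 * no * o3 - j_no2 * no2
        return [d_no2, -d_no2, -d_no2]

    # BDF is a Gear-type stiff method, integrated here without
    # steady-state approximations for any species.
    sol = solve_ivp(rhs, (0.0, 3600.0), [0.05, 0.05, 0.0],
                    method="BDF", rtol=1e-6)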
The grid model treats surface sinks for each species, and these surface
sinks vary spatially. The grid model also treats eddy diffusion between the
four cells. The trajectory model ignores both surface sinks and eddy
diffusion. Further, whereas the grid model treats horizontal dispersion, the
trajectory model ignores it.
Finally, emissions in the grid model can be added into each of the four
cells plus the surface below the first cell; the reactivity of these emissions
can vary in space, level, and time. The reactivity of the emissions added
into the single cell of the trajectory model cannot be varied.
Note that the Level II EKMA model uses the data files prepared for the
grid model, and the grid model identifies the time and place of the maximum O3
concentration in a region. Thus, Table 6-2 is not intended as evidence that
our Level II EKMA model can be used as a replacement for the grid model. In
fact, our Level II EKMA methodology presumes the existence of a validated
grid-model simulation prior to the trajectory simulations. Inputs for the
Level II EKMA trajectory model are obtained from the grid-model files by using
a special trajectory model similar to the SAI Urban Airshed Model. This
special model can be operated backward in time using the surface winds from
any point in space and time defined by the grid-model region. When operated
forward in time, this model generates averaged information, such as hourly
emissions, which can be used directly in the Level II EKMA trajectory model.
At this point it seems appropriate to explain the differences between our
Level II EKMA model and the Level III EKMA trajectory model. Our model
appears to provide results similar to the SAI Urban Airshed Model, when using
the same data files and when wind-shear effects are not important. The
differences between the Level II and Level III models are summarized in the
following discussion.
The chemistry in the Level II model is the same Carbon Bond II mechanism
currently encoded into the SAI Urban Airshed Model. An explicit propylene/
butane mechanism is used in the Level III EKMA model. The starting time and
length of simulation are optional in the Level II model. The maximum single-
simulation time is currently 24 h, and multiple simulations can be linked.
The Level III model starts at 8:00 a.m. and lasts only 10 h.
Mixing heights are input hourly and internally interpolated linearly in
the Level II version, in contrast to the input morning and afternoon heights,
with interpolation based on a special characteristic curve, used in the
Level III model. Reactivity data can be input using different splits for
emissions and initial conditions, background and aloft, in the Level II model.
In the Level III model, the initial concentration reactivity and emission
reactivity must be identical; transported surface and aloft reactivity is
fixed at 10% propylene, 90% butane, and zero aldehydes.
In addition to the regular HC species and O3, the Level II model provides
for entrainment of peroxyacetylnitrate (PAN) and the intermediate carbon-bond
species GLY or BZA. These species have been found to act as important
reservoirs of reactivity.
Finally, the temperature can be input hourly and interpolated linearly in
the Level II model, but is fixed in the Level III model. Only the Level II
model can treat the emissions and chemistry of carbon monoxide (CO).
We found all of the differences between the Level II and Level III models
to be important under some circumstances, but found none of the differences
between the grid model and the Level II model important under most of the
circumstances studied. The most important difference between the Level II and
Level III model tended to be the chemistry; however, we found that most of the
difference could be explained by the ratio of rate constants between
peroxy radical-nitric oxide (R02 + NO) reactions and the hydroxyl-nitrogen
dioxide (OH + NO2) reaction. Updating that ratio in the standard Level III
chemistry to the value of 0.86, the ratio value used in the grid-model
chemistry, typically increased the peak O3 simulated in the Level III EKMA
model by 30%.
Gross differences appear between the information provided by a grid model
and a single-moving-cell model such as the Level II model. The grid model
simulates the entire pollutant field, whereas a trajectory model simulates
only a single Lagrangian air parcel. However, the grid model encompasses the
Level II model, and the results between models can be compared at that point.
The most significant differences in results between the single cell and an
equivalent spot in the grid model seemed to occur when wind shear was
important. Other differences in results can, for the most part, be explained
by spatial variances in reactivity, surface sinks, and aloft concentrations.
We have attributed large relative, but small absolute, differences to the
integration schemes used during nighttime simulations.
ELIMINATION OF AMBIENT HC:NOX RATIOS
The use of the isopleth diagram in the Level III EKMA model requires
obtaining ambient HC:NOX ratios and a design O3 value. The results shown in
Table 6-1, for instance, were obtained using the HC:NOX ratio of 9.1, which
was computed according to the EPA guidelines for use of the Level III EKMA
model. Our research and experience have led to a growing concern regarding
this aspect of EKMA. Hence we propose an alternate, and hopefully improved,
method of using the isopleth phase of EKMA. The new method essentially
substitutes an emissions inventory for the ambient HC:NOX ratio and starts the
simulation early enough so that the overall simulation is dominated as much as
possible by emissions rather than by the initial conditions.
Several concerns regarding the present use of ambient HC:NOX ratios must
be addressed before discussing this method, however. Wide fluctuations are
often seen in ambient measurements, and averaging measurements from nonepisode
days to reduce the statistical noise may not be appropriate for use on episode
days. Although some of the fluctuations may currently be the result of poor
quality instrumentation and techniques, the use of accurate and robust
measurements may still produce large fluctuations. Many fluctuations are
caused by micrometeorological effects stemming from local emissions and by
poor mixing, which can occur during the morning hours. In some cities such as
San Francisco, the episode trajectories may not be in the urban core at
8:00 a.m. when the HC and NOX data are obtained. Such data would clearly be
inappropriate.
A basic premise of EKMA requires that the initial conditions specified by
the axis of the isopleth diagram should be linearly related to the emissions
of precursors. When compared to more sophisticated models, this premise can
be proven false, and the degree of failure can be shown to be a function of
background HC's.
A measure of the trajectory model's accuracy in simulating the design 03
value for the episode conditions used is difficult to obtain. The
fluctuations and uncertain appropriateness of the measured HC and NOX data
obscure this type of validation.
An emission-oriented approach has been developed for constructing and
using isopleth diagrams. This new approach eliminates these concerns and
appears to have several additional advantages over the old isopleth diagrams.
First, although emissions inventories may currently have many errors, the
most recent series of corrections and reestablishments seems to show only
small overall changes in reactivity and total integrated amounts of emissions.
Fluctuations in the inventory-based inputs are therefore small, unlike those
in ambient measurements.
Another advantage of the emission-oriented approach is that whereas the
quality of emissions inventories can improve, the micrometeorological effects
that can influence HC:NOX ratio data cannot. Thus, the emission-oriented
approach offers the opportunity for refinement. In addition, emissions on
episode days do not usually differ significantly from emissions on nonepisode
days, so the averaging of emissions information appears to be more appropriate
than the averaging of HC:NOX ratios.
When the initial conditions of the model are not important, a simulation
started earlier than 8:00 a.m. does not need accurate mixing-height inputs
until 8:00 a.m., because the chemistry of fresh emissions is not significantly
affected by concentration until the sunlight intensity has increased. Also,
the definition of a trajectory path before 8:00 a.m. is not critical unless
the emissions inventory is gridded and extreme fluctuations are found between
alternate grid squares. Presumably, the trajectory path on episode days would
be expected to pass over major emissions before 8:00 a.m.
An additional advantage of the emission-oriented approach is that
linearity between the emissions and the axis of the isopleth diagram is
automatically and directly built into the model.
The 1,1 point on the new-type diagram in Figure 6-1 represents the 03
generated by the trajectory model using the base-case emissions inventory and
model conditions. This point provides an instant and obvious indication of
model validation when this base-case 03 value is compared to the design or
observed 03 value.

[Figure 6-1. New-type (emission-based) ozone isopleth diagram, with the 1,1
point representing the base-case simulation; original figure not recoverable
from the scan.]
The diagonal line through the 1,1 point from the origin takes the place
of the HC:NOX ratio line used in the standard EKMA model. The distance along
this line between the design 03 and the 1,1 point represents the combined
discrepancy between the model and the observations. Two major types of
discrepancies are the overall dilution and the overall emissions total. Both
of these are assumed to be linear when moving up or down this diagonal line.
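Under this linearity assumption, locating the point on the diagonal that
reproduces the design 03 is a one-dimensional root search. A Python sketch,
in which the isopleth surface is a made-up smooth surrogate rather than
actual trajectory-model output:

    # Find the scale factor s such that the isopleth value at (s, s),
    # i.e., along the diagonal through the 1,1 point, equals the design 03.

    def o3(hc, nox):
        # Hypothetical stand-in for the isopleth surface (ppm).
        return 0.4 * hc * nox / (0.5 * hc + 2.0 * nox + 0.05)

    def scale_to_design(design_o3, lo=0.1, hi=10.0, tol=1e-6):
        # Bisection; assumes o3(s, s) is monotone and brackets the design value.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if o3(mid, mid) < design_o3:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    base = o3(1.0, 1.0)   # the 1,1 (base-case) point
    s = scale_to_design(1.2 * base)
    print(f"base 03 = {base:.4f} ppm; diagonal scale factor = {s:.3f}")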
First let us explore in detail some of the problems associated with the
use of HC:NOX data. For Tulsa, the HC:NOX ratio varied between 380 and 0.9 for
the period from July through September at the six monitoring stations. The
minimum single-station variation (recorded at the Tulsa County Health
Department) ranged between a maximum of 31 and a minimum of 1.7 during this
period. The minimum standard deviation also occurred at the same station;
this minimum standard deviation value was 5.7 about a mean ratio of 8.7.
Hence the fluctuations in data can be large. The use of some range of values
for the ratio to bracket the situation eliminates part of the advantage EKMA
might have over previous approaches to control-strategy estimates.
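Station-by-station statistics of this kind are easy to recompute from morning
ratio data; in the Python sketch below the values are invented for
illustration and are not the actual Tulsa measurements:

    import statistics

    # Hypothetical 6:00-9:00 a.m. HC:NOX ratios by monitoring station.
    morning_ratios = {
        "Station A": [8.7, 3.2, 14.1, 31.0, 1.7, 9.8],
        "Station B": [12.4, 0.9, 55.0, 380.0, 6.3, 22.0],
    }

    for station, ratios in morning_ratios.items():
        print(f"{station}: min={min(ratios):.1f} max={max(ratios):.1f} "
              f"mean={statistics.mean(ratios):.1f} "
              f"stdev={statistics.stdev(ratios):.1f}")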
Table 6-3 demonstrates our test of the premise concerning the linear
connection between the 8:00 a.m. concentrations and changes in emissions. The
table presents two comparisons for a Level II trajectory started 22 h earlier,
for which the simulated air originated over the Pacific Ocean and moved inland
over Los Angeles. The simulated air cell arrived at the coast at
approximately 2:00 p.m. on the previous day, and remained under the influence
of the emissions inventory from then on.

TABLE 6-3. CONCENTRATION AT 8:00 A.M. AFTER 22 H OF MODIFIED EKMA SIMULATIONS

Low Background (0.03 ppm RHC)

                   1974 Base        Controlled-Emissions   Percentage Change
  Species          Concentrations   Concentrations         (55% HC and
                   (ppm)            (ppm)                   38% NOX)
  Paraffinic HC    0.74             0.35                   53
  Carbonyls        0.068            0.029                  57
  NO               0.025            0.021                  16
  NO2              0.067            0.037                  45
  NOX              0.092            0.058                  37

High Background (0.18 ppm RHC)

                   1974 Base        Controlled-Emissions   Percentage Change
  Species          Concentrations   Concentrations         (55% HC and
                   (ppm)            (ppm)                   38% NOX)
  Paraffinic HC    0.95             0.46                   52
  Carbonyls        0.12             0.073                  39
  NO               0.006            0.0015                 75
  NO2              0.057            0.0116                 81
  NOX              0.063            0.0131                 79

Note that when a low background of
approximately 0.03 ppmC HC's was assumed, the 55% control of the HC emissions
resulted in a near-linear reduction of the paraffinic HC concentration in the
model at 8:00 a.m. on the second day of the simulation. The carbonyl
concentration was similarly reduced; paraffinic carbon was reduced 53% and
carbonyls 57% from the 55% control of emissions. Oxides of nitrogen were also
fairly linear, with a 37% reduction from a 38% control. However, the response
of NO and NO2 was not consistent; the N02:NOX ratio changed from 0.73 to 0.64
with control.
When a background of 0.18 ppmC HC was assumed, the response of the model
was not as linear with regard to emissions. The 55% reduction in HC emissions
produced a 52% drop in paraffinic carbon, but only a 39% reduction in
carbonyls. Since the latter are highly reactive, the overall reactivity of
the 8:00 a.m. mix actually increased. The 38% NOX control that accompanied
the HC control resulted in a 79% reduction in overall NOX, but the NO2:NOX
ratio showed very little change for this high background case. Note that the
controls should have reduced the 8:00 a.m. HC:NOX ratio, if linearity were
true; but, according to the model used for this test, the 8:00 a.m. HC:NOX
ratio actually increased significantly. The reason for this anomaly stems
from the high HC's assumed for the background air. Hydrocarbons oxidize to
intermediate compounds, which in turn photolyze to radicals that convert the
NOX to nitrates. Hence, the linearity premise breaks down as background HC's
increase in concentration; the initial HC:NOX ratio may not even respond in
the same direction as expected from the emissions controls.
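The percentage changes quoted above can be recomputed directly from Table 6-3;
a short Python check for the low-background case, including the NO2:NOX ratio
shift noted in the text:

    # Base and controlled 8:00 a.m. concentrations (ppm) from Table 6-3,
    # low-background (0.03 ppm RHC) case.
    low_background = {
        "Paraffinic HC": (0.74, 0.35),
        "Carbonyls":     (0.068, 0.029),
        "NO":            (0.025, 0.021),
        "NO2":           (0.067, 0.037),
        "NOX":           (0.092, 0.058),
    }

    for species, (base, ctrl) in low_background.items():
        print(f"{species}: {100.0 * (base - ctrl) / base:.0f}% reduction")

    # NO2:NOX ratio before and after control (0.73 -> 0.64 in the text).
    print(f"NO2:NOX = {0.067 / 0.092:.2f} -> {0.037 / 0.058:.2f}")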
Another element of concern for the HC:NOX ratio is the appropriateness of
urban core data for all episode simulations. According to our analysis of the
grid-model files used for the 1979 SIP in San Francisco, the episode
trajectory was out over the Pacific Ocean at 8:00 a.m.; the grid model used in
this case was the Lawrence Livermore Laboratory Regional Air Quality (LIRAQ)
model. Using the LIRAQ model wind fields and emissions inventory, we obtained
reasonable agreement between the Level III EKMA model and the LIRAQ model.
Table 6-4 demonstrates that this trajectory was completely dominated by the
emissions after 8:00 a.m. Post-8:00 a.m. emissions are normally input to the
EKMA model as a function of the initial conditions, so the emissions inputs
for the simulation results shown in Table 6-4 were each changed to generate
the same concentration contribution from emissions within the model. Hence,
Table 6-4 actually shows total lack of sensitivity to initial conditions for
this trajectory.
TABLE 6-4. VALUES FOR MAXIMUM O3 VERSUS VALUES FOR INITIAL CONDITIONS*

  NOX        HC         Maximum 03
  (ppm)      (ppmC)     (ppm)
  0.003      0.03       0.197
  0.003      0.01       0.196
  0.003      0.003      0.194
  0.001      0.03       0.198
  0.001      0.01       0.196
  0.001      0.003      0.195
  0.0003     0.03       0.197
  0.0003     0.01       0.195
  0.0003     0.003      0.195

*26 July 1973 trajectory No. 1.
An attempt to use EKMA as part of the 1979 SIP had to be abandoned
because the HC:NOX ratios measured in the urban core were so low that no 03
lines intersected the HC:NOX lines. Our application of EKMA, therefore, shows
that the standard city-specific version of EKMA could have been used; however,
the HC:NOX data were inappropriate. We also generated the isopleth diagram(s)
that might have been used in the 1979 SIP. Since the trajectory model used to
generate the points on the diagram was not sensitive to the initial
conditions, a new definition for the HC:NOX ratio appropriate for applying
EKMA was needed. An appropriate ratio was chosen to be whatever initial ratio
was used to compute the emissions input for the trajectory model, so that the
base-case simulation generated the proper concentrations dictated by the
emissions density and mixing heights found in the LIRAQ files. Figures 6-2
and 6-3 present isopleth diagrams generated from two of the cases given in
Table 6-4 for various initial conditions. Note that both isopleth diagrams
are virtually identical even though the "design" HC:NOX ratios to be used are
1 and 100, respectively. Table 6-5 presents the comparison of several LIRAQ
and EKMA control-strategy predictions.
In conclusion, we wish to point out that the close agreement we were able
to obtain among various EKMA models and other AQSM's stems from analysis of
the precursor concentrations as a function of time. As long as the precursor
levels are close, and the same chemical mechanism is used, one can reasonably
expect similar 03 levels to be generated in the models. Figure 6-4 presents
typical results for the models we have studied.
[Figure 6-2. Isopleth diagram generated from one of the initial-condition
cases in Table 6-4; original figure not recoverable from the scan.]
[Figure 6-3. Isopleth diagram generated from a second initial-condition case
in Table 6-4; original figure not recoverable from the scan.]
[TABLE 6-5. Comparison of LIRAQ and Level III EKMA control-strategy
predictions; the rotated table could not be recovered from the scan.]
[Figure 6-4. Comparison of NOX, PAR, ETH, BZA, PAN, and O3 for the SAI
trajectory and CBM/OZIPP models for 29 July 1977. Six panels of concentration
(ppm) versus time (hours, CDT); original plots not recoverable from the scan.]
To emphasize our concern for more accurate estimates of background HC's, we
conclude with one last comparison (Table 6-6) of the SAI Airshed Model and
the latest Level II EKMA
model, using a 30-h trajectory in Los Angeles. These results represent the
maximum 1-h 03 value estimated in the entire Los Angeles basin for a typical
1987 emissions inventory. Low background air was defined to be about
0.03 ppmC HC, 0.005 ppm NO2, and 0.04 ppm 03; medium background, or our best
estimate of "clean" air, was taken to be 0.06 ppmC HC, 0.004 ppm NOX, and
0.06 ppm 03; the high background, or our upper-limit estimate for "clean" air,
was about 0.18 ppmC HC, 0.005 ppm NOX, and 0.10 ppm 03.
TABLE 6-6. SIMULATED MAXIMUM 03*

                          Background
  Model            Low (ppm)   Mid (ppm)   High (ppm)
  SAI Airshed      0.124       0.194       0.273
  Level II EKMA    0.125       0.190       0.295

*Future-year emissions; worst case in South Coast Air Basin; 2-day
simulations.
WORKSHOP COMMENTARY
DERWENT: May I ask about this carryover of NOX? I would think that how well
you describe the behavior of PAN is quite important when you're describing the
carryover. I have the distinct impression from my model that when I've got a
longer irradiation, most of my reactive NOX present is PAN. How well do you
think you are describing it in those various model approaches? Are you
actually using peroxyacetyl radicals, or some surrogate for them, and are you
thermally decomposing the PAN and those sorts of things?
WHITTEN: Yes, PAN decomposition is in the kinetics and the kinetics do follow
the peroxyacetyl-type chemistry as well as we can.
We don't really have good measurements in the atmosphere to add all of
the PAN's together. What measurements we do have indicate that regular PAN,
peroxyacetylnitrate, is by far the largest one.
The chemistry reflects that. The chemistry is very close to the
chemistry of normal PAN. We have no way of really fixing anything any better
because there aren't many data from the atmosphere. But the chemistry itself
is definitely modeled, thermal decomposition effects and effects of
temperature, as well.
Level II EKMA, as we saw there, has a temperature that varies, the same
temperature as in the airshed model. One of the reasons we put that in there
is we had a hard time tracking PAN between a version of EKMA with constant
temperature and the airshed model in which the temperature was variable.
DERWENT: At night the thermal decomposition shuts off, so the important
period is during the afternoon. May I ask about your various grid levels in
the airshed model?
WHITTEN: Yes. There are four in this version.
DERWENT: From your comparisons with EKMA, can you say if the chemistry occurs
preferentially at various levels, or do you have adequate resolution really to
say whether low levels or high levels are more or less important?
WHITTEN: That's an interesting question.
The airshed model set-up defines two levels within the mixing layer and
two levels above the mixing layer. When I made the comparisons, hour by hour,
we took the two layers in the mixing layer and averaged them together and
compared that with the EKMA model results.
There can be some differences between those first two layers, say, in the
morning when mixing isn't very good.
In the afternoon, the averaging is trivial in a few places when the
chemistries are similar, because the concentrations are the same in those few
places. The chemistry is very similar, usually.
What's happening at the third level, the one just above the mixing layer,
is probably something that needs to be improved upon in the airshed model. If
you look at what we see here in this 30-h trajectory, on the first day the
mixing layer is going up, and the chemistry is expanding in these two layers.
Then as the mixing layer comes down in the afternoon of the first day, we trap
the pollutants aloft. We reenter those pollutants on the second day, which
turns out to be very influential in determining what the control strategy is.
Even the airshed model treats the pollutants stored aloft in just one big
layer, not a lot of little layers. How important it is to treat it as a lot
of little layers instead of one big one is a research project that needs to be
looked at.
McRAE: In one of the slides you showed, you compared PAN to EKMA
concentrations on the entrained 03 aloft. You had something like a 50%
increase in peak 03.
I am curious whether you have any feeling about the future of those 03
measurements in EKMA, whether you should give consideration to —
WHITTEN: Yes. That's in our reports. I didn't go into that because of the
time frame today, but that's true.
At Level II we entrained PAN and aromatic intermediates, as well as all
of the various intermediate species in the Carbon Bond II mechanism. This
work was the reason why the precursor reactivities are handled separately
(initial, emissions, aloft, and background) in the particular version of EKMA
that we have. We found that all of these things are important. In fact, we
even found that PAN and CO entrainment related chemistry was responsible for a
5 to 10% effect on 03 maximum. We had to go through and eliminate each one of
these things to get to the point we finally felt confident to say that the
sophisticated dispersion going on in one model didn't seem to be that
important.
I think that is a fairly strong statement to make on that. I wasn't
going to make it until we had gone through and eliminated the possibility that
entraining PAN made a difference and that entraining these other little things
made a difference. We had to eliminate all those things until we got down to
the point where the only real difference between the two models you could
really see were these dispersion factors, for example, eddy diffusion and two
layers versus one layer.
McRAE: I have a couple other quick questions. You mentioned in your talk
that EKMA doesn't include sinks in making these calculations. How important
are sinks to the removal mechanism other than for 03; how important are they
for some of the other species, like PAN?
WHITTEN: PAN is not very sensitive. We did include surface sinks in the
airshed model and we saw about a 5% effect.
As I said, another thing we could put into a version of EKMA is a first
order dilution specific for each species to treat surface sinks.
The way this is handled in the airshed models is as a negative emission.
So I looked at the algorithm, and it turned out to be in reality a
first-order-dilution process. I got that first-order-dilution coefficient and
put that into EKMA and then we got the difference between the two.
The biggest effect was observed at night. Ozone comes down and in the
first layer of the trajectory model is essentially zero at night and yet hangs
up in the second layer. However, over a 2-day period, the 03 concentration,
whether it was low or zero, was dominated by emissions.
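The surface-sink treatment described here reduces to integrating dC/dt = -kC
for each affected species. A minimal Python sketch, with an assumed
coefficient rather than one backed out of the airshed model:

    import math

    def apply_surface_sink(c0_ppm, k_per_hour, hours):
        # Analytic solution of dC/dt = -k*C over the given interval.
        return c0_ppm * math.exp(-k_per_hour * hours)

    # Example: nighttime surface-layer 03 with an assumed k of 0.3 per hour.
    print(f"{apply_surface_sink(0.05, 0.3, 8):.4f} ppm after 8 h")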
McRAE: If you're saying that some of these secondary species are so important
to what we see on the ground, would you care to comment on what kinds of
measurements you need for doing future EKMA predictions?
WHITTEN: That's a very good question. The measurements aren't that
important. What you have to do is be reasonable about what you are putting in
the models. Think of EKMA as a main box within another box aloft. In the box
aloft, you build up a certain equilibrium level of PAN, of aldehydes, and of
the other species that pick up a lot of radicals and take up some activity.
Now, if you just entrain the initial conditions of the box aloft again into
the main box, your main box has to cook those things back up again, using up
that photochemical energy. However, if you entrain PAN already in its
equilibrium concentration, then you're not adding a deficit of radicals, a
deficit of chemistry.
What you need to do is have measurements there of the precursors. The
things you didn't measure can be generated with the model, so you're
entraining a reasonable amount. It's like bringing it up to the same
temperature, so it doesn't make it cold when you put it in.
McRAE: Does that suggest that the typical EKMA run should be extended in time
to longer periods than what it is presently, 2-day periods?
WHITTEN: Well, for example, take San Francisco. At 8:00 o'clock in the
morning it's apparently out over the ocean. And out there the standard EKMA
can start at 8:00 o'clock in the morning, but the standard idea of EKMA in
the middle of the city is inappropriate. This trajectory only takes about 6 h
to get from downtown San Francisco to Livermore where the 03 values are high.
In Los Angeles it can take 30 h. So fixing it at 8:00 o'clock in the
morning isn't necessarily good for all cities. San Francisco and Los Angeles
are two exceptions.
In Tulsa we found that 8:00 o'clock was a pretty good time to be in the
middle of the city. But then you had to have the right HC:NOX ratio.
So, we backed out of the city and started the model up there. We brought
it in over the city and it generated its own HC and NOX, depending on what the
emissions were and whether there was a wind shear.
KILLUS: What is important in measuring pollutants entrained aloft is not so
much that you get every single species exactly correct, but rather that you
capture the overall characteristics of the air. When you go through and make
various estimates of what might be considered background air, in most cases
you do not have anywhere near the appropriate data to give an accurate account
of the HC species. More commonly, we have an estimate as to more or less the
rate at which the air will oxidize NO to NO2, the rate at which it will
convert OH radicals into HO2, and so forth. Given those kinds of estimates,
it is then possible, based on which kinetic mechanism you're using, to select
a reasonable HC species to give the overall reactivity characteristics that
seem correct, and also give a reactivity that does not change too rapidly with
time. We certainly don't want to have the entirety of the air being entrained
to be all olefins, for example, which are very reactive at the outset, then
cool off pretty quickly, or all of it paraffins, which start off very
unreactive and gain in reactivity.
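In the two-species case, this surrogate-selection idea reduces to solving one
linear equation for the mix fraction. In the Python sketch below the rate
constants and the target reactivity are assumed values, not recommendations:

    # Choose an olefin/paraffin split whose carbon-weighted OH reactivity
    # matches an estimated bulk reactivity for the entrained air.
    k_oh = {"propylene": 26.0e3, "butane": 2.5e3}   # assumed, ppm^-1 min^-1
    target_k = 4.9e3                                # assumed bulk reactivity

    # Solve f*k_olefin + (1 - f)*k_paraffin = target_k for f.
    f = (target_k - k_oh["butane"]) / (k_oh["propylene"] - k_oh["butane"])
    print(f"propylene fraction = {f:.0%}, butane fraction = {1 - f:.0%}")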
MEYER: I'm having trouble reconciling the recommendation to initiate the
model with an emissions inventory. The findings Dr. Jeffries had in St. Louis
showed that the later you started the trajectory, i.e., the more up to date
the initial concentrations were, the better the model performed in predicting
peak 03 concentrations.
My question boils down to if you're using emissions you're balancing an
imperfect knowledge of emissions and some difficulty in figuring out when the
trajectory is in the early morning hours versus the difficulty one encounters
in coming up with a characteristic HC:NOX ratio based on the ambient data,
going the way that one currently goes with EKMA.
I don't know, perhaps there could be some discussion about weighing the
advantages and disadvantages of these two different approaches.
WHITTEN: My feeling on the matter is that the measurements of HC to NOX
suffer on two counts: (1) the instrumentation tends to be unreliable, and
(2) it suffers from micromixing and micrometeorology, in that they have small
pockets of HC and NOX that are separated into different amounts in the early
morning, when mixing isn't all that good. Yet, all of this stuff eventually
fuses together in a horizontal sense so that by the time the 10 h passes and
you generate 03, they are all pretty well mixed.
When you're trying to measure these things at 8:00 a.m., or even 9:00
a.m., it would be crucial that you get all these little pockets of high
concentration and low concentration. Even if you had perfect instruments, you
might not have a sensitive control strategy which is so much affected by the
measurements. The next point is that you don't really know about the
nonlinear effects that occur. In fact, there are some instances where there
is some influence of background concentrations. Controlling emissions by 50%
may not necessarily reduce these 6:00 a.m. to 9:00 a.m. HC and NOX
concentrations. The whole premise of EKMA is based on there being at least
some linear
relationship between these. The crucial factor in running an emission-
oriented run, if you start it early in the morning and decide that the initial
concentration is background, is the actual emissions density you're going to
put into the model.
The other parameter you might consider is mixing height. Since it's
early in the morning, there is very little chemistry going on. So, whether
you use a 50-m mixing depth or a 250-m mixing depth before 8:00, your emissions
density is still the only crucial factor until 8:00 a.m. when there is a much
more well defined mixing level. Whether you start out at 50 m or 250 m at say
4:00 a.m., it's not going to make any difference because the amount of the
emissions is what is really important. There is no chemistry going on there.
JEFFRIES: Part of the problem is that there is a difference in the approach
used by a sophisticated modeling group when it manipulates the data using
intuition and a knowledge of what's going on and the approach used by, say,
the Office of Air Quality Planning and Standards (OAQPS), when they are trying
to propose what states are allowed to do or not allowed to do.
If you attempt to come up with a set of rules and regulations for
everyone's use, as I think Dr. Lloyd said, what is the Federal Register going
to say you should do to run this model? It is very difficult for someone who
has never run a model to pick out the mixing height, trajectory, or emissions
for a particular day and manipulate them. So, in some ways, the work we did
was an effort to approximate what a smart local agency or state agency might
do.
There are some bounds on the problem. You're not allowed to tweak and
tune and manipulate each and every day to get the optimum fit for that day.
You've got to choose one description of the meteorology or another. You're
not allowed to mix everything up and see what you can come out with. Now,
what you're doing mainly here is comparing one model against another model.
The other model is a very super-sophisticated model that you've spent a lot of
time and energy on. You didn't just make up that mixing height. You didn't
go to a book and just take a number out and say, that's the mixing height
today.
WHITTEN: But the point is that up until 8:00 o'clock in the morning what the
mixing height is doesn't really make any difference.
JEFFRIES: It does in St. Louis, when you try to run OZIPP and you start out
with 50 m mixing height and run it along a trajectory over the emissions
height—
WHITTEN: That's because it affects emissions concentrations.
JEFFRIES: That's exactly right, and here's what happens. By starting later
and later and later in the day, you're inputting initial conditions.
WHITTEN: Right.
JEFFRIES: You're inputting initial conditions instead of having to generate
them by the correct description of the meteorology, emissions, and trajectory.
The problem is the difference between comparing two models and comparing a
model with the real world.
WHITTEN: We've tried to address real-world versus model issues for the South
Coast District. We're looking into the idea of running a box model of the
entire Western basin, for, say, the first 10 h to generate the initial
conditions, i.e., the average trajectory over an average location for an
episode day, which, for a box model, is the average conditions for that whole
sector of the basin.
JEFFRIES: I think the point is that we do not have adequate descriptions,
simple descriptions, of the mixing, the trajectory, and the emissions, so that
a reasonably simple manual can be written to direct state agencies.
WHITTEN: Maybe you're missing the point I tried to make before. I'm saying
that at 9:00 a.m., 10:00 a.m., whenever, the mixing heights can be 250 m.
Before that it need only be something less than 250 m, and it needs only
the emissions density defined since say 3:00 o'clock in the morning.
JEFFRIES: Yes, so?
WHITTEN: So, what I'm saying is that it's not going to make any difference if
you use 50 m at 3:00 o'clock in the morning or 250 m. The concentration you
end up with at 9:00 o'clock in the morning will be the same, mathematically.
JEFFRIES: How do you do that? If you run a mixing height of 50 m over a
mixing surface, you come out with a very different concentration than if you
run a box that is 250 m over that same emissions surface.
WHITTEN: But when you expand the box to 250 m, you end up with so many
molecules, i.e., you have X number of molecules in the box at 9:00 o'clock in
the morning.
Whether you put those in there at high concentration early in the morning
or at low concentration early in the morning, they are still the same number
of molecules in the box at 9:00 a.m.
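The molecule-conservation argument can be checked numerically: with no
chemistry, the 9:00 a.m. concentration depends only on the emitted mass per
unit area, not on the pre-8:00 a.m. mixing height. The emission flux and
times in this Python sketch are illustrative only:

    # Emit into boxes of different initial depth, then expand both to 250 m.
    E = 5.0e-3           # assumed emission flux, ppm*m per minute
    minutes = 5 * 60     # 4:00 a.m. to 9:00 a.m.
    final_height = 250.0

    for h0 in (50.0, 250.0):
        column = E * minutes          # emitted burden per unit area (ppm*m)
        c_early = column / h0         # concentration while the box is h0 deep
        c_900 = c_early * h0 / final_height   # dilute into the 250-m box
        print(f"start at {h0:.0f} m -> {c_900:.4f} ppm at 9:00 a.m.")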
JEFFRIES: Another problem we're dealing with here is that there are lots of
EKMA models. You have already talked about at least three different EKMA
models.
The models differ in very subtle ways from each other because of critical
differences. There are people here who have the EPA EKMA, and if they attempt
to produce some of the things you've done, they're going to get different
answers.
With EKMA you've got to plug in the initial concentration and an initial
mixing height, that is, a mess. You've got to come up with that somewhere.
WHITTEN: Yes, but Appendix B in the EPA guide has shown a way to handle the
emissions so that the emissions density relates to the concentration that you
get in the model. That would be independent then of the initial
concentration.
JEFFRIES: It's impossible.
WHITTEN: That's what Gerry Gipson of EPA did in St. Louis. We'll talk about
it later. I think the main thing I want to say before I sit down is that I
was out to test something that a lot of people are concerned about in EKMA.
That is, is the simplistic treatment of this box model compared to that of the
more sophisticated model really an important factor? We have yet to find it
to be important, once we know where the key trajectory is.
JEFFRIES: The message I get from you is that as long as everything in the
input data and so forth is the same, the answers are basically the same. The Gear
method and the single-mix-layer method versus multiple layers and
super-sophisticated methods don't matter very much. But the input data is
going to be —
WHITTEN: All the sophisticated data necessary to run the airshed model can be
used to make the EKMA model run better, too.
Secondly, you have to have the airshed model find out where the worst-
case trajectory is. When you run the airshed model in Los Angeles for 1974,
you get maximum 03, say at one spot. When you change the emissions inventory
for 1987, or whatever, it's someplace else.
DODGE: I think maybe this would be a good time to try to discuss the relative
merits of all the models.
DIMITRIADES: Let me offer a few summary comments and see if we can perhaps
start some discussion on everything that we have presented and talked about
today.
You have heard from the speakers about the different methods in existence
for verifying the EKMA model.
John Trijonis spoke on an approach which compares the observed impact of
emissions changes with the predicted impact of emissions changes.
Since this is how we want to use EKMA to predict the impact of an
emissions change, it seems that this approach is the most direct for
validating or verifying EKMA.
On the other hand, there are problems. You have to select a certain
HC:NOX ratio and depend on how well it has been selected. That is an almost
insurmountable problem, and I guess that constitutes a limitation of the
method.
Then we heard about an approach that used a one-to-one comparison of
predictions and observations of absolute 03 air quality.
We use the concentration levels of HC:NOX which we measure or compute
from emissions rates for a given day, and then we also use data on 03
concentrations observed on that same given day. Then we compare these
observed concentrations versus the ones predicted by the model. By this
approach, this comparison constitutes the basis for judging how good the EKMA
model is.
Again, this sounds like a direct method for checking the accuracy of the
model, but we have some problems here, too. It is highly questionable that
the observed concentrations of 03 and HC:NOX for the same day are for the same
air parcel. Because of this uncertainty, the comparison of predictions versus
observations of 03 has questionable validity.
To get around the problems of this one-to-one day-by-day comparison,
investigators have offered an approach that is based on comparison of
predicted frequency distributions versus observed frequency distributions of
03 concentrations. We do circumvent the problems I mentioned earlier, but
there is a question whether this comparison alone is a sufficient criterion to
tell us truly how good the model is.
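The distribution-based comparison can be sketched as a quantile-by-quantile
match of predicted against observed peak 03; both series in this Python
fragment are hypothetical stand-ins for real monitoring and model output:

    observed  = sorted([0.08, 0.11, 0.09, 0.14, 0.19, 0.12, 0.10, 0.16])
    predicted = sorted([0.07, 0.12, 0.10, 0.13, 0.21, 0.11, 0.09, 0.17])

    # Compare the two frequency distributions order statistic by order
    # statistic rather than day by day.
    for q_obs, q_pred in zip(observed, predicted):
        print(f"observed {q_obs:.2f} ppm  vs  predicted {q_pred:.2f} ppm")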
Finally, we heard about an approach based on comparison of EKMA with the
more comprehensive AQSM's which presumably lend themselves better to
verification with real-world data. That comparison is useful in some
respects.
Here at EPA, we need to make a decision on which models to adopt and do
further work on. After discussing the merits and limitations of the models,
we would like to get some response from you.
We want to avoid discussing simply every method in terms of its good
parts and bad parts and letting it go at that. We would like to put the
models in some kind of order of priority or usefulness.
Also, we would like to hear some suggestions about additional research
that is needed to eliminate the weaknesses of those methods. For example, do
we need a new data base? Is the existing data base insufficient for one
reason or another? Are the smog chamber data bases or the ambient air data
bases sufficient? Perhaps a new data base is what we need against which to
verify the EKMA model.
I would like to call to your attention that for the time being when we
talk about EKMA, we're talking about the model, not the chemical mechanism.
The subject of chemical mechanisms, how they compare among themselves, is the
subject of tomorrow's discussions. Right now we would like to concentrate on
those methods that you have heard for verifying EKMA. If at all possible,
we'd like to get some reaction in terms of which one is the one we should
stay away from; which ones we should concentrate on, etc.
Would any one of the speakers like to offer to speak first?
WHITTEN: I have a relatively minor point to bring up first, but it is
something I feel strongly about. I think that a crucial point in EKMA is
relating the HC and NOX, both concentrations and ratios, that you have to use
in the model to the ambient data.
Now I have suggested an alternate method that makes the model relate to
ambient emissions, since that is what you want to control. But that, of
course, also has weaknesses. So what I suggest is that we need data to verify
the HC:NOX ratio more significantly. It may be like a van that moves around
in a 2- to 4-km² area to get an integrated value of the 6:00 a.m. to 9:00 a.m.
HC concentrations.
I think there is so much fluctuation in the 6:00 a.m. to 9:00 a.m. data
we get from fixed monitoring sites, that it makes the whole use of them very
questionable.
That is one of the reasons I went to the emissions idea, because it
becomes much more stable; it only changes when you change the emissions.
I think it's possible, since the Australians were able to, to put HC and
NOX instrumentation in vans and get an integrated concentration in the
downtown area, then find out whether these fluctuations disappear if you
measure HC over a grid square, rather than just a single point. I think
gathering those data is important.
If it turns out that what's being measured across the nation right now at
individual stations correlates extremely well and you do see the same amount
of fluctuations in a van as you do at a single site, maybe it's a realistic
effect.
TRIJONIS: What is a realistic effect?
WHITTEN: These wide fluctuations that you get from day to day. As
Dr. Jeffries and Dr. Martinez observed, very large differences in 6:00 a.m.
to 9:00 a.m. —
TRIJONIS: They aren't spatially homogeneous. I've been doing a study for the
Air Resources Board correlating NMHC:NOX ratios at five or six sites in Los
Angeles, and they don't correlate at all.
WHITTEN: Yes, but those sites are separated by 10 mi or more. I'm talking
about within a 5-mi square, something you would think of as an air parcel
that's going to end up as the maximum 03.
TRIJONIS: There's always the possibility —
WHITTEN: In that square, is it uniform?
TRIJONIS: There's always the possibility of data quality artifacts. The
ratio could be fairly uniform from day to day, and the variation you see be a
data quality problem.
WHITTEN: That's another problem, but eventually you're going to come to a
point where you've got perfect instrumentation.
JEFFRIES: I don't think you can pass it off as a data quality problem.
TRIJONIS: I didn't say it was, but there is a possibility it is.
JEFFRIES: There are a lot of problems with HC instruments, and that's been
true historically. That's one of the problems that was dealt with when the
only data base we have that's worth looking at for testing EKMA was developed.
You have to consider that what was measured back then versus what's measured
now is not necessarily the same thing at all. The Australians had one HC
instrument and one NOX instrument. They still saw the same kind of variations
and distributions. The pollutants are simply not spatially or temporally
homogeneous. The air parcels don't come from the same places, and they pass
over different sources with different characteristics and end up with mixtures
of source contributions. The Australians actually had 30 HC species, 500 to
600 samples, and source reconciliations on grab samples from all over the
place. They can tell you that the composition in this parcel had so much
automobile exhaust, so much evaporant, so much from solvents, etc. —
There is a tremendous variation, depending on where that actual air
parcel came from.
As time goes on and they mix more and more, sure, they'll become more
homogeneous.
TRIJONIS: The model relates to that homogeneous average.
JEFFRIES: In terms of the initial input conditions, to use it in the way that
it has to —
TRIJONIS: Then you take that one little point that's being measured and
you're saying that that's the average.
JEFFRIES: That's a weakness in EKMA. The technique that both Dr. Martinez
and I have used is one which considers the whole distribution. You've got
to go to the whole distribution. You can't talk about a single point. The
single point doesn't represent anything but that one number, that measurement.
What does a mean HC:NOX ratio mean in a case like RAPS data where the
stuff is spread all over the whole axis?
WHITTEN: I showed the same thing in Tulsa. You see various --
JEFFRIES: I'll bet in the airshed model every square is different from every
other square. When you do these comparisons between EKMA and the airshed
model, how do you go from an airshed model number which has all the spatial
numbers you need in the input to a single number that gives you the EKMA?
WHITTEN: That's why we went to the emission-oriented one, so that we would
go back far enough in time so that out in the ocean they're all the same.
Aren"t they?
MARTINEZ: I think that asking for more data at this point simply postpones
the decision on the issue for about 5 or 6 years. It's always a good thing to
do. The more you know, the better off you are. But, we are faced with fairly
short-term decisions, and I disagree with Dr. Whitten on that. We are stuck
with certain kinds of data, good or bad, that's what we have.
That was part of my motivation in going the way I went with the EKMA
approach. I thought, we have data and we have 03 measurements, and we have to
make some decisions in the immediate future. Using the model, perhaps these
data aren't so bad after all. How well can we do? Once we know how well or
how poorly we can do, what can we do about it? You have to consider the
approach that I suggested, which is similar to Dr. Jeffries' from that point
of view.
Then, for the ultimate application, the numbers and distributions are of
no use unless you plan to use them, and that's where the two possible
applications that I discussed come in. Basically, you're trying to improve on
your predictions, and you're doing it within the framework of the available
data.
DIMITRIADES: So what you are saying is, with the data base that is presently
available, this is what you recommend as being the approach?
MARTINEZ: I don't want to say that this is the best possible way in all the
world, no. It's just one way that, considering the imperfections of the data
and the model, allows you to take them into account in your answer.
DIMITRIADES: I don't disagree with that, but we are going to do additional
work in our efforts to get an improved model. In view of this, would you
propose another method if we were to get another data base?
MARTINEZ: Given that we cannot get perfect data and we cannot get a perfect
model, then that approach also applies to any other model and any other data
base you can come up with.
So does Dr. Jeffries', for that matter; and so probably do the others.
What I want to stress is that I do not mean to imply that once we apply EKMA
in this fashion, then that's your answer and then that's it. The EKMA model
gives you an answer that should be checked. It doesn't tell you how the value
you chose gave you the HC and NOX that you put into a model in the first
place. That's for you to decide.
There are other methods that optimize the cost of all these strategies.
That is another factor that you can throw into the pot.
Running airshed models is a time-consuming, rather expensive process,
compared to these simple models. I propose that the simple models be run
first, to look at your strategies.
In light of Dr. Whitten's comparisons, maybe you will not be that far
off. But, if you really want to be sure, then once you've selected your
control strategies, you screen them, pick the ones that seem to work best, and
check them against the big models.
I am troubled by Dr. Whitten's remark that neither the chemistry nor the
dispersion matter very much.
WHITTEN: I didn't say that.
MARTINEZ: What matters?
JEFFRIES: The input.
WHITTEN: No, I didn't say that. I didn't mean to say that.
MARTINEZ: My point is that it seems that both of the simpler models can give
an account of your uncertainties. The complex models come with their own
uncertainties; one leads into the other.
TRIJONIS: I think the strong point of Dr. Martinez's and Dr. Jeffries' work
is the use of a distribution of the ratios rather than a median ratio. I
think it's really not a "validation" to try to validate EKMA in an absolute
sense. It's intended to be used in the end to evaluate emissions changes, so
you want to evaluate it in a relative sense. I think that's where the
strength of Dr. Whitten's and my own work is. We look at historical emissions
changes and see how EKMA performs historically. Dr. Whitten can do that by
seeing how EKMA performs under emissions changes and how a more sophisticated
model does under emissions changes.
So possibly we could combine the two strengths and do future studies
either with historical trend analysis or by comparing EKMA to other types of
models. When those studies are done we should not use the median ratio
approach but rather the distribution-type approach. We might use a normal
distribution of ratios. Using that we could then make comparisons, either
with historical data or with results from a more sophisticated model.
DIMITRIADES: Do you have enough historical data to allow you to arrive at
distributions?
TRIJONIS: The distribution is fairly well known. That would not be a
problem. However, instead of taking one point on the EKMA model and tracing
that through time, you'd take a distribution of points and trace the
distribution through time. I think that could be done.
ESCHENROEDER: So you're saying that the work breaks down into two parts. One
is working on models and their improvement; the other is using these models in
the statistical framework once they are developed to play the distribution
game.
TRIJONIS: Right. I wouldn't go with only the distribution game, in looking
at that in an absolute sense, because that's not at all doing a verification
study. I think Mr. Gipson in one of his first slides made that point.
ESCHENROEDER: Both Dr. Martinez's and Dr. Jeffries' work depends on the model
as a starting point in their scheme, in their logic, and that that is used as
a tool to do a statistical study of different sorts on an existing body of
data.
TRIJONIS: Right.
ESCHENROEDER: What you're saying is we can sharpen the tool but we should
also be trying to apply it as a tool for making decisions in an uncertain
environment because it's always going to be uncertain to some extent. As we
narrow our uncertainties, the model is going to be accordingly narrowed.
However, you should start out with the best model you can on an absolute
basis.
TRIJONIS: Right, but it should be tested under changing emissions conditions,
either by historical comparison or by comparison to changing emissions with
the SAI model.
ESCHENROEDER: Try to get as much dynamic range as you can?
TRIJONIS: Right.
ESCHENROEDER: Even if it's going from one morning to another. I know that's
not turning the emissions up and down, but that's all we have for varying
morning ratios for HC and NOX.
WHITTEN: About the only change in emissions, really, that you have, is
weekday to weekends.
ROMANOVSKY: I wonder about the wisdom of taking distributions as regards
concentrations and the ratio. If you take the distribution of the 03 and
average it out, you find we don't have a big problem, not even in Los Angeles.
TRIJONIS: The distribution of what?
ROMANOVSKY: The distribution of the peak 03 values over a period of a year.
If you take the distribution of 03 and average it out, then you don't have
very much of a problem, not the kind of problem that control agencies have to
address.
I'm wondering if we shouldn't be looking at worst-case scenarios as
opposed to distribution.
TRIJONIS: If you average the entire distribution, you're taking the mean. If
you address the entire distribution, you can very easily address worst-case
situations. You can ask the question, what's the worst first percentile,
fifth percentile, tenth percentile, what's the mean. You should address the
entire distribution, not just the mean of the distribution.
ROMANOVSKY: You should make sure that the data you use in regard to the
product pertain or relate to the appropriate data for the precursors.
TRIJONIS: I guess that's where Dr. Jeffries' model comes in, where it matches
the two.
DIMITRIADES: If you want to combine the two approaches as you suggest, don't
you have to have an adequate data base for 1964 and 1965 that will allow you
to construct the frequency distribution of 03, HC, and NOX at that time?
TRIJONIS: Yes, you could run the model in 1965, essentially, that's right.
DIMITRIADES: You have the data base?
TRIJONIS: Yes, for Los Angeles.
JEFFRIES: Then you'd do it for the next 3-year period, and the next 3-year
period, and the next 3-year period.
DIMITRIADES: Yes, as long as you have the data —
TRIJONIS: I think you'd do it for 1965, and then make predictions of what
happens to that distribution in subsequent years. You only do it once, then
you predict what that distribution is for every subsequent 3-year period based
on the emissions changes.
JEFFRIES: But, there is also an alternative to that. Other things like
meteorology over the 5-year or 3-year period might influence the shape of the
diagram. So, you would fit the frequency distributions for each year and
compare the isopleth diagrams to see if the distributions remain stable or
change, and if they change whether they change in a way that reflects the
underlying changes in the meteorology over a longer time period.
TRIJONIS: How well could EKMA predict, starting in 1965 to 1972, a
50th-percentile 03 level?
I think you would want to start with only 1965 data and your emissions
changes that answer that question. You would be answering a slightly
different question, I think, than the one I was thinking of.
JEFFRIES: The difference is that the isopleth diagram derived from the 1965
data incorporates the meteorology for 1965.
TRIJONIS: Right. Well, then you would incorporate 3 years of meteorology.
It could differ slightly from the 1972 to 1974 meteorology, but that would be
one of the reasons for the disagreement. I don't think you ought to force-fit
them. You ought to see: what you predict the distribution of 03 should be for
1975 to 1978 versus what the actual one is — yes, that could be done, that's
true.
DIMITRIADES: I guess what I was asking was whether quality data are
available. Aren't there questions about the entire HC:NOX data base?
TRIJONIS: No more than for present data. There has been a calibration
problem for NOX for a long time, but that's like a 17% error. It's a constant
type of error.
The HC data and the nonmethane hydrocarbon (NMHC) data for Los Angeles
are essentially worthless. The total hydrocarbon (THC) data are somewhat
better, and all we use is THC, using an empirical formula to calculate NMHC
from that. They aren't as good as, say, the NOX, CO, or 03 data. If you
correlate 03 levels from site to site and then look at what the correlation
goes up to as the sites tend to get closer (the closest sites are within
3-4 mi of each other), and you do the extrapolation to 0.0 mi for 03, you
get a correlation of about 0.85. For NOX and CO you get a correlation of
about 0.75. For THC it goes down to about 0.60 or 0.65. That's the worst.
If you go to NMHC, it goes down to about 0.3. So those data are really
worthless. The THC data are not as good as the 03, NOX, or CO data, but they
contain some information.
DIMITRIADES: I can see the advantages of combining the two approaches
provided we have adequate data to do so.
I have some questions about that approach alone, whether the data for
frequency distributions alone are adequate criteria for judging the
performance of the EKMA model.
TRIJONIS: My one point is that that doesn't change, that doesn't test the
model under changing emissions, which Dr. Romanovsky, Dr. Whitten, and I think
is important.
JEFFRIES: The method is not meant for validating the EKMA model. It is a
different method altogether. There is a method for using ambient data in an
isopleth diagram. It is different.
The isopleth diagram can come from a hundred different places. I just
happened to show one way to get it.
What is implied in the method, though, is that over some time period in
an urban area there is a single isopleth diagram that adequately captures the
essence of the 03 precursor relationship. That's all it says.
So far, in two different cities, over two different time periods, it has
been proven possible to derive a diagram so that that relationship holds.
DIMITRIADES: It sounds like being empirical.
JEFFRIES: In many ways it is. It's an engineering answer in one sense. You
can provide all kinds of kinetic underpinning, but when you come down to it,
it is merely a transfer function for converting precursor distributions into
03 distributions. That is the problem with it.
It doesn't have the kind of cause-effect relationship you can point to in
the model, where you can say, this chemistry does this, and the wind blew it
from here to there, and it made that. It can't do that.
ESCHENROEDER: I think you're selling it short, because embedded in that
technique, as you pointed out, is a model that has a lot of coefficients, B,
C, N, each of which is identifiable with some phenomenological thing that—
JEFFRIES: Absolutely.
ESCHENROEDER: So there's some underlying truth to it in a deterministic
sense, to begin with, or you wouldn't have gotten to the answer as fast as you
did.
JEFFRIES: That's correct. In a sense we didn't have those parameters totally
free as they've been described by Dr. Trijonis; the parameters are quite
restrained.
ESCHENROEDER: Absolutely.
JEFFRIES: We could have done the whole thing with one parameter, C. That's
it. In fact, what you could do with the data he showed on the board is take
the highest point up there and the highest 03 concentration and figure out
what you have to multiply by to get that value to come out right, and that's
it. That's the C. You can then plug it back in and come out with the
frequency distribution. I'm going to take the Houston Area Oxidant Study
(HAOS) data and run through it very quickly and try it out. The weakness I
see in the method is that it's very data-dependent. You have to have HC and
NOX values that are reasonable representations of the frequency distributions
of measured values in the city, and you have to have enough 03 stations to
know that you've got a reasonable representation of the 03. So, in that
sense, the weakness of the method is that it's simply data-dependent.
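The one-parameter fit described here can be sketched in a few lines; the
isopleth function and the precursor and 03 values below are placeholders,
not HAOS data:

    def isopleth(hc, nox):
        # Hypothetical stand-in for an isopleth-diagram lookup (ppm).
        return 0.4 * hc * nox / (0.5 * hc + 2.0 * nox + 0.05)

    # Morning precursor pairs (HC ppmC, NOX ppm) and the highest observed 03.
    precursors = [(0.5, 0.05), (0.9, 0.10), (1.4, 0.12), (2.0, 0.18)]
    observed_max = 0.19   # assumed highest observed 03, ppm

    # C makes the highest prediction match the highest observation ...
    C = observed_max / max(isopleth(hc, nox) for hc, nox in precursors)
    # ... and is then applied to the whole precursor distribution.
    o3_dist = sorted(C * isopleth(hc, nox) for hc, nox in precursors)
    print(f"C = {C:.2f}; 03 distribution: {[f'{v:.3f}' for v in o3_dist]}")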
MARTINEZ: There is a small problem, also, with fitting the whole distribution.
You're really interested in only part of the distribution, and the
least-squares method that considers the whole distribution spends most of its
time fitting the part that doesn't really interest you that much.
JEFFRIES: We found out very quickly that we could take a frequency
distribution printout and an isopleth, and looking at a few points, we could
figure out what the thing was going to be. So you don't have to have it fit
the whole distribution; you can optimize to fit at one end or the other end or
in the middle or wherever you want it.
WHITTEN: I think that part of what Dr. Jeffries says is probably true, that
it's a way of mapping the frequency distribution of concentrations of
precursors to 03 generated. However, I would add -- and this is the thing
that for me is very different -- at constant emissions.
Because, basically, the emissions behind all of these data are constant.
What we are looking for is a model that relates emissions and the control of
those emissions to the generation of 03.
I don't think Dr. Jeffries' model does that. I don't think it relates
fluctuations in the concentration you see due to meteorology or other
effects -- the fact that automobiles are driving around differently one day or
whatever -- to changes in emissions. I don't think that's been demonstrated;
and emissions, which are what we're all trying to get to, are what HC-control
methodology is supposed to address.
258
-------
6. COMPARISON OF EKMA MODELS AND AQSM'S Whitten
JEFFRIES: A clear assumption in the model is that there is a linear
relationship between emissions and concentrations.
WHITTEN: Not in the way you've used the model.
KILLUS: I'd like to point out that this sort of historical trend analysis
Dr. Trijonis has been discussing is one of the very few methodologies we have
that independently verifies our emissions data base. One of the nightmares
modelers have is that they seldom have any sort of control on inputs. What do
you do if, for example, the emissions data given are incorrect?
What does EKMA do, for example, if it is depended on and the control
strategies are totally incorrect? Obviously, all the models and all the
control strategies break down under those circumstances. Historical trend
analyses, on the other hand, are one way of getting an estimate of whether one
can believe that control strategies have been effective, and by implication,
whether there is any likelihood they're going to be effective in the future.
WHITTEN: I'd like to bring up another point. I think something that's been
left out of the central discussion here is the method given in the EPA
guidelines for Level III. That method tries to address, to a certain extent,
the statistical problem in 03 distribution by taking the five, or however
many, highest days and doing a separate city-specific Level III trajectory for
each of those days. It then does controls on each day. I think that is more
related to solving the problem than looking at frequency distributions or
fluctuations and assuming that a poor model will generate a poor distribution.
A good distribution from a poor model seems possible to me.
I would be more comfortable, in terms of Dr. Martinez's problem (i.e., the
problem of working with what we have), sticking with the Level III guidelines
as they are. To me that's a more satisfying methodology for handling the
statistics.
JEFFRIES: The problem with that is that you end up deriving a control for a
point that in the future is no longer the point of concern, and the airshed
model automatically switches and produces more 03 someplace else in the
future.
WHITTEN: But that's a problem you have to live with when you're looking at
trajectory versus isopleth-grid-model sets. If you can't afford a grid model
the next best thing is some sort of trajectory, and you have to live with that
uncertainty.
JEFFRIES: Even so, with the airshed model, how do you relate that to the 03
standard? The 03 standard is site-oriented, and the airshed model is
predicting the concentrations in each grid cell. Do you base your control
strategy on meeting the 03 standard in each individual grid cell, or on
meeting it just at the sites? That's a whole other question.
259
-------
6. COMPARISON OF EKMA MODELS AND AQSM'S Whitten
WHITTEN: That's another workshop.
ESCHENROEDER: We're still at the meteorology of that particular design day.
It may not produce the worst case for the new emissions pattern at a future
time.
CARTER: So far it never has in the airshed model.
What Dr. Whitten was referring to with the city-specific EKMA guidelines,
I think, is that we did look at, say, the 5 highest days that give a high 03
concentration at any particular site and then tried to model one of those
days. I think the EKMA model at least does predict that the control
requirements do not decrease monotonically with 03 values.
But using the design 03 level from the design value approach with some
sort of isopleth diagram may not be appropriate. You do have to consider
day-specific type situations. Now, with the city-specific guidance, we're
doing it in a very brute-force approach. We're considering 5 days. To do it
completely, you would look at all days above the standard.
McRAE: Dr. Romanovsky, I'd like to ask you, and perhaps Dr. Whitten, whether
you would like to comment on the question. Do you have any feeling about
whether the worst days are in fact the toughest ones to control?
Some of the things that are coming out of multiday runs and the kinds of
things that Dr. Whitten has been doing make me very nervous about whether the
levels of control we develop on the basis of the 5 worst days provide any
guarantee of levels below the standard on all other days.
Do you have any feelings about the choice of statistics and whether, in
fact, it will achieve the goal you want to see?
CARTER: I think with a specific model like city-specific EKMA, you can pretty
much predict which day is going to be the toughest to control; it's the one
with the highest background levels.
You don't reduce the background levels at all, or you reduce them only a
moderate amount. That just means the city has to implement greater control.
I think that's being demonstrated with the airshed model, too. Some of
the results we've seen in St. Louis are very day-specific, e.g., the change in
03 with reduction of HC. And, of course, that's going to be influenced by the
boundary. So what does that mean? It means that in terms of the control
strategy developed, the change in boundary conditions is going to be a
critical factor, and not only to the modeling analysis itself.
I don't know if I answered your question, but I can't see it being
appropriate simply to pick a high 03 day or a few high 03 days, do a modeling
analysis, and determine what the controls are.
260
-------
6. COMPARISON OF EKMA MODELS AND AQSM'S Whitten
I think you should look over a wider range of the distribution. How far
you'd have to go, I don't know.
MEYER: One pertinent comment on that, Dr. Carter, is that for many cities
(not Los Angeles, but many cities), if you look at the 5 highest 03 days, the
distribution tends to tail off fairly rapidly after that. There is frequently
a fairly substantial difference for a lot of cities after you've looked at
the 4 or 5 days.
MARTINEZ: One problem with the Level III approach is that new chemical models
can bring in another joker, so to speak. Which model do you choose?
Dr. Jeffries will show you tomorrow that he can play a game in which you
choose the model that gives you the results you want. That is something to be
considered in the whole EKMA selection process. Thus, the trend analysis
perhaps ought also to be done for the other models to see how they do.
My work has shown that they are quite different in their capabilities, so
that you may select one over another, depending on what part of the HC:NOX
plane you are in.
I'm not sure how Dr. Jeffries' work addresses that problem. I suspect
that if you have an isopleth surface for any model, you'll be able to fit it
fairly well.
JEFFRIES: I'll just change all the parameters so that they all look the same
anyway.
KELLY: I'd like to ask a practical question. For most cities where this may
be applied, like Detroit or Cleveland, where they have 03 exceeding the
standard, there haven't been any NMHC measurements for a long time. The
only measurements that have been made are the SIP measurements made last
summer for 3 months with one HC monitor.
JEFFRIES: Missouri has 3 days' total nonmethane hydrocarbon (TNMHC).
KELLY: What do you do in that case? In Detroit we did not find this wide
variation from 6:00 a.m. to 9:00 a.m. I think we found a ratio like 4 or
5 to 1, plus or minus 1. This is at one site, using a very modern NMHC
analyzer, one that measures NMHC chromatographically.
We did find a very stable ratio. If what you say is true and the
micrometeorology influences the ratio, it's going to be used for this whole
city, and it might not be representative.
MEYER: Let me ask, was that ratio integrated over 3 h? I'm wondering if part
of the explanation for this isn't the temporal integration that comes into
play here.
261
-------
6. COMPARISON OF EKMA MODELS AND AQSM'S Whitten
For example, one might expect to find a great deal more variability if
one looks at 30-min average concentrations or hourly average concentrations,
as opposed to 3-h average concentrations. I wonder if it's been people's
experience to find that some of these differences get smoothed out over longer
sampling periods. Does anybody have any experience in that regard?
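The smoothing effect Dr. Meyer raises can be illustrated with a tiny synthetic
example (all numbers are invented, not measurements from any of the studies
discussed):

    import numpy as np

    rng = np.random.default_rng(1)
    hc = rng.lognormal(mean=0.0, sigma=0.8, size=36)    # 5-min HC samples over 3 h
    nox = rng.lognormal(mean=-1.4, sigma=0.6, size=36)  # 5-min NOx samples

    short_term = hc / nox                # instantaneous ratios: wide spread
    integrated = hc.mean() / nox.mean()  # one 3-h integrated ratio

    print(short_term.min(), short_term.max())  # large range of ratios
    print(integrated)                          # single smoothed value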
BUFALINI: I think I'm inclined to agree with you. I'm amazed at the
difference in the HC:NOX ratios that have been presented. I don't recall
seeing that high a ratio when going through some sort of integration, an hour
or longer. One wonders whether there is a puff of HC coming in to give you a
full 100:1 ratio.
The 0.9 seems more reasonable with auto exhaust when someone is
monitoring right behind a tailpipe. I think in general one does not observe
those high ratios over a period of time.
KELLY: The siting criterion we use for the HC and NOX monitors is to be so
many meters off the road, based on traffic, and --
UNIDENTIFIED SPEAKER: Presumably the HC monitors are sited in the summer so
that they avoid any one dominant source; they may be near a road and downwind
of the exhaust, but not on the tailpipe. You're measuring a couple of ppmC
at 6:00 a.m. to 9:00 a.m., or something like that.
JEFFRIES: I think it has a lot to do with the location of large point
sources of HC near where the monitor might be. But with these kinds of
variations, either the automobile HC:NOX ratio is varying greatly or most of
the influence we're seeing is due to point sources of one or the other.
TRIJONIS: There is a more important factor, actually, which is carryover.
The greater the age of the air mass, the more HC:NOx-enriched it is. In
fact, in the afternoon the ratio goes up to 100 or 200 sometimes, because the NOX
go away while the HC's stick around.
So, rather than the specific source area you go to, it's probably the
amount of carryover you have from yesterday that determines the day-to-day
variation in your ratio. If you have more carryover, you have a higher ratio.
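Dr. Trijonis' carryover point follows from simple first-order decay: if NOX is
removed faster than HC, the ratio grows exponentially with the age of the air
mass. A sketch with assumed rates and initial levels (illustrative only):

    import numpy as np

    hc0, nox0 = 1.0, 0.25     # assumed levels at emission (ppmC, ppm)
    k_hc, k_nox = 0.1, 0.5    # assumed loss rates, 1/h; NOx is shorter-lived

    t = np.arange(0.0, 13.0)  # hours of aging
    ratio = (hc0 * np.exp(-k_hc * t)) / (nox0 * np.exp(-k_nox * t))
    # ratio grows as exp((k_nox - k_hc) * t): carried-over air is enriched
    print(ratio[0], ratio[6], ratio[12])   # ~4, ~44, ~490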
KELLY: I don't think that happens in the Detroit area. General Motors
measured HC and NOX in the urban core, using our mobile laboratory and our
upwind HC monitor, as well as two monitors from EPA, one of which was set up
in the city all summer and one of which was set upwind of the city. We
thought the upwind site would show more HC's, but it didn't see any.
Our HC upwind monitor used the total-minus-methane technique which presumably
isn't reliable unless you get up above 0.5 ppmC. But there wasn't any
evidence of transport of HC's into Detroit.
I don't know about Los Angeles or other places where you have a lot of
stagnation, but in the Detroit area, downwind or in the afternoon, your HC
262
-------
6. COMPARISON OF EKMA MODELS AND AQSM'S Whitten
levels fall off dramatically down to tenths of ppmC, and NOX levels fall down
to 5, 10, 20 ppb. However, in the morning you do see a big peak from
6:00 a.m. to 9:00 a.m. That's what you're measuring, and presumably that can
photolyze and form the 03 downwind.
But, my question was that with all this controversy over ratios and using
distributions, well, that may be all right if you have a lot of data, but what
if you have 3 months worth of data, and —
JEFFRIES: Three days.
KELLY: — and you must come up with a ratio to use in this model to predict
controls?
WHITTEN: There is another aspect to this idea of the HC:NOX ratio that
Dr. Bufalini touched on, namely, if you use more and more days to average your
HC:NOX ratio, it zeros in on this nice, solid number. On the other hand,
you're trying to predict a very specific episode date for the 03, so you're
only looking for one especially bad day. So, on the one hand, you're
averaging things over a lot of days to get this HC:NOX ratio that is extremely
critical to your control strategy, and, on the other hand, the whole thing is
an episode day and it is very difficult to get data for that day.
I have a lot of reservations about the current EKMA with the HC and NOX.
That's one of the reasons I went and worked on this emission-oriented
idea where you start upwind from the city where the initial condition is low
and use an emissions inventory to generate the isopleth diagram. The abscissa
in the diagram is relative to emissions, and it doesn't relate to an initial
concentration.
JEFFRIES: You're substituting knowledge of emissions for knowledge of the
HC:NOX ratio.
WHITTEN: Definitely.
KILLUS: But, you have to have the knowledge of the emissions anyway, or the
control strategies couldn't be effective.
JEFFRIES: No, you don't. You have to assume that there is a relationship
between emissions and ambient concentrations, that's all.
KELLY: It doesn't seem to me that emissions can give you HC's and NOX. At
least they can't in the Detroit area. You don't really know what those
emissions are -- the specific HC's -- so you don't know what the ppmC of those
emissions is.
You know tons of organic compounds and tons of NOX; what does that give
you in terms of ratios?
263
-------
6. COMPARISON OF EKMA MODELS AND AQSM'S Whitten
WHITTEN: If you know the tons, then you use that to generate the
concentrations; it's very simple.
JEFFRIES: Assuming splits and —
TRIJONIS: Even if we know that, the ambient ratio is about three to four
times as high as the emissions ratio. Because of the short-lived nature of
NOX, you can measure tailpipe emissions until you get the right emissions
ratio, and 1, 2, or 3 h later it starts going down. Or, if you have
some carryover from the previous day, you have an ambient ratio of about two
to four times the emissions ratio. That's why I distrust just using the
emissions data. Those data might not be representative of the real ambient
ratio and the real ambient reactivity, unless you have something that will
destroy NOX in your model.
WHITTEN: That's the point. You start the model at 4:00 o'clock in the
morning or 2:00 o'clock in the morning, and by 9:00 o'clock in the morning a
lot of NOX are already destroyed. So you do see a higher ratio in the model
already, much higher than the emissions ratio.
I'm just saying that I feel there are certain advantages. They both have
disadvantages, but I think there are certain advantages of the emission-
oriented EKMA over the initial-conditions HC:NOX ambient measurement method.
You can always check the emission-oriented EKMA. If the model predicts
at 9:00 o'clock in the morning that the HC:NOX ratio is such and such and the
concentrations are such and such, and if they are an order of magnitude off,
then you think maybe something's wrong with the model. You have a check on
it. You don't lose that. You don't need those numbers. You don't depend on
HC and NOX measurements.
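The order-of-magnitude check Dr. Whitten describes could look like the
following sketch (names, tolerance, and example numbers are assumptions for
illustration):

    def order_of_magnitude_check(model_hc, model_nox, obs_hc, obs_nox, tol=10.0):
        # Flag the run if modeled 9 a.m. HC, NOx, or their ratio differs
        # from the ambient measurement by more than a factor of `tol`.
        pairs = {"HC": (model_hc, obs_hc),
                 "NOx": (model_nox, obs_nox),
                 "HC:NOx": (model_hc / model_nox, obs_hc / obs_nox)}
        return [name for name, (m, o) in pairs.items() if max(m / o, o / m) > tol]

    # Example: modeled 0.8 ppmC and 0.05 ppm NOx vs. observed 0.6 and 0.003
    print(order_of_magnitude_check(0.8, 0.05, 0.6, 0.003))  # ['NOx', 'HC:NOx']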
264
-------
7. DERIVING EMPIRICAL KINETIC MODELING APPROACH ISOPLETHS FROM
EXPERIMENTAL DATA: THE LOS ANGELES CAPTIVE-AIR STUDY
Daniel Grosjean
Richard Countess
Kochy Fung
Kumaraswamy Ganesan
Alan Lloyd
Fred Lurmann
Environmental Research and Technology, Inc.
2625 Towngate Road
Westlake Village, CA 91361
ABSTRACT
Under the sponsorship of the Coordinating Research Council, Environmental
Research and Technology, Inc. conducted a series of captive-air experiments in
support of photochemical kinetic model validation and Environmental Kinetic
Modeling Approach (EKMA) evaluation. Captive-air experiments are described
which were conducted over approximately a 7-week period from September to
November 1981. Sunlight irradiations were carried out in a large (100-m3) and
four smaller (4-m3) satellite Teflon chambers. The facility was located on
the roof of a building on the campus of the University of Southern California
in Los Angeles. Species measured included nitric oxide, nitrogen dioxide, ozone,
peroxyacetylnitrate, nitric acid, individual hydrocarbons, and individual
aldehydes. Subsequent data analysis is discussed in which the data are
compared with the Ozone Isopleth Plotting Package diagram derived from the
EKMA model constructed for Los Angeles during October 1981. The model
employed both the standard-EKMA chemistry and the Environmental Research and
Technology, Inc. chemical mechanism.
265
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
INTRODUCTION
Control of ambient ozone (03) concentrations in urban and rural areas of
the United States is of continuing concern to regulatory agencies. The fact
that 33 areas of the country have applied for a 5-year extension of the date
for attainment of the National Ambient Air Quality Standard (NAAQS) for 03
(Federal Register, 1981) illustrates the magnitude of the problem. The
reduction in ambient concentrations of 03 requires control of the precursors,
oxides of nitrogen (NOX) and hydrocarbons (HC's), which undergo a complex
series of sunlight-initiated reactions in the atmosphere. The U.S.
Environmental Protection Agency (EPA) developed the Empirical Kinetic Modeling
Approach (EKMA) to calculate the precursor reductions needed to achieve the
NAAQS for 03. EKMA has been described fully elsewhere (Dodge, 1977;
Whitten and Hogo, 1978; EPA, 1981) and was used for many of the 1979 State
Implementation Plan (SIP) submissions.
While EKMA is a significant improvement over the Appendix J approach
(Dimitriades, 1977), it has its own shortcomings (no spatial resolution of
emissions, similar treatment of ground-level and elevated emissions sources,
assumption of fixed reactivity for the HC mix regardless of the geographic
location). The EPA recognized the need for a more sophisticated model for
those areas of the country most heavily impacted by high levels of 03. For
such areas, EPA proposed (Federal Register, 1980) that photochemical air
quality simulation models should be applied, since they treat emissions,
meteorology, chemistry, and their interrelationships in a more realistic way
266
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
than does EKMA. However, largely because of the concern expressed by the
states that the data requirements for these models were substantial, costly,
and would be unavailable for the 1982 SIP revision submittals, EPA decided to
recommend that the city-specific version of EKMA be applied nationally.
Because the EKMA methodology relies heavily on a chemical approach to reducing
ambient levels of 03, it is important that the chemistry and its use in EKMA
be examined in detail before enormous financial resources are committed to
following a control strategy dictated by the results of EKMA.
Several workers have questioned the adequacy of the standard chemical
mechanism used in EKMA to describe atmospheric reactions under a variety of
conditions (Hayes et al., 1980; Carter et al., 1982). Hayes et al. (1980)
evaluated the sensitivity of the EKMA analysis to the chemical mechanism
employed as part of an overall evaluation of EKMA as applied to the control of
03 and nitrogen dioxide (NO2) in the South Coast Air Basin in California.
employed two chemical mechanisms, the standard-EKMA chemistry (Dodge, 1977)
and the carbon-bond chemistry (Whitten, Killus, and Hogo, 1980), and found
significant differences in the control requirements for each case.
Carter et al. (1982), using indoor smog chamber data, assessed the impact
of the chemical mechanism employed and of the initial levels of precursors
(including nitrous acid and aldehydes) on the shape of the 03 isopleths in
EKMA. They concluded that the shapes of the isopleths, and thus the HC
control requirements, are significantly influenced by the aldehyde content and
by the chemical mechanism employed. Other workers have recently obtained
267
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
results based on different chemical mechanisms and have reached similar
conclusions (Jeffries et al., 1981; Martinez et al., 1981).
The objective of this study was to obtain experimental data in a captive-
air facility in the Los Angeles basin. Los Angeles was selected because of
the severity of photochemical smog and because of the number of air monitors
within the basin. Ambient air, captured in the early morning hours (between
6 and 9 a.m.), was subjected to sunlight irradiation in Teflon chambers, and
the maximum 03 formed each day was plotted as a function of initial HC and
NOX. These results are being compared with EKMA isopleths generated
specifically for the Los Angeles area for October 1981, under the EPA
guidelines for EKMA.
This paper describes the experimental data collection and the
methodologies employed for data reduction and for the comparison of
experimental data with the results of EKMA calculations. The experimental
data are currently being reduced and validated, and full results of the
program are expected to be available in early 1982.
EXPERIMENTAL DESIGN
Captive-air experiments were conducted for approximately 7 weeks from
September to November, 1981. The facility was located on the roof of the
Gerontology building on the campus of the University of Southern California
(USC). Sunlight irradiations were carried out in a large (approximately
268
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
100-m3) and four smaller (4-m3) satellite Teflon chambers. All chambers
were inflated with ambient air between 6 and 9 a.m. Initial concentrations
measured included nitric oxide (NO), NO2, 03, and peroxyacetylnitrate (PAN)
(all chambers), individual HC's and individual aldehydes (large chamber), and
total nonmethane hydrocarbon (NMHC) (satellite chambers). Light-molecular-
weight HC's were measured on-site by gas chromatography (GC). Heavier HC's
and aldehydes were measured by GC and by high-performance liquid
chromatography (HPLC), respectively, upon return of the samples to
Environmental Research and Technology's (ERT's) Westlake Village laboratory
according to the methods described below. Aldehydes measured included
formaldehyde, acetaldehyde, benzaldehyde, and any C3 to C7 aliphatic aldehydes
present at detectable concentrations (typically 1 to 2 ppb) in the samples.
Before exposure to sunlight, additional amounts of NOX (50, 100, and
200 ppb, respectively) were injected into three of the four satellite
chambers. This approach provided a cost-effective way to generate four points
on the NOX-NMHC-O3 isopleth diagrams for each run (i.e., four HC:NOX ratios
for each initial HC level). This protocol also provided a built-in daily
control between the large chamber and one smaller satellite chamber to which
no NOX were added. Because of the difference in surface-to-volume ratios, any
significant discrepancy between the results obtained in the large chamber and
those obtained in the smaller satellite chamber may be indicative
of wall effects. Selected dilution experiments were also carried out. After
injection of NOX in the satellite chambers, all black covers were removed and
sunlight irradiations were carried out for approximately 6 h. Measurements
269
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
also were made of temperature, relative humidity, and solar ultraviolet
radiation intensity.
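The four-point design can be written out explicitly; the fill values below are
hypothetical, chosen only to show how one ambient sample plus three NOX spikes
yields four HC:NOX ratios at a single initial HC level:

    ambient_nmhc = 0.75   # ppmC, hypothetical 6-9 a.m. fill
    ambient_nox = 140.0   # ppb, hypothetical

    for spike in (0.0, 50.0, 100.0, 200.0):   # ppb added to satellite chambers
        nox = ambient_nox + spike
        print(f"NMHC = {ambient_nmhc:.2f} ppmC, NOx = {nox:.0f} ppb, "
              f"ratio = {1000.0 * ambient_nmhc / nox:.1f}:1")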
Description of the Chamber Facility
The large outdoor chamber is constructed from 12 panels of FEP Type A
Teflon film, each 30 ft x 48 in x 0.002 in. The panels are heat-sealed
together and the seams are externally reinforced with Mylar tape. Teflon was
selected because of its excellent transmission of actinic ultraviolet light
(295 to 400 nm), its low permeability, and its chemical inertness. The small
chambers were also made of Type A Teflon film.
The large pillow-shaped Teflon chamber is supported by a net and by ropes
running across a steel pipe frame 2 ft above the roof to allow air circulation
under the chamber. The chamber is covered by a black plastic sheet during
filling, injection, and mixing procedures before irradiation. The chamber has
two ports for the introduction of air and reactants and for sampling of
gaseous and particulate products and reactants, as well as for flushing and
dilution. Monitoring instruments are connected to each chamber compartment
via Teflon lines and Pyrex manifolds.
Measurement Methods
Aldehydes were measured by sampling the matrix air with microimpingers
containing an aqueous solution of the reagent 2,4-dinitrophenylhydrazine
270
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
(DNPH) (Grosjean et al., 1980; Fung and Grosjean, 1981). DNPH reacts
specifically with carbonyl compounds to form hydrazone derivatives, which are
extracted by selected organic solvents and separated by HPLC using reversed
phase columns.
These hydrazone derivatives were detected by ultraviolet absorption at
360 nm and quantitated against calibration curves constructed for standard
mixtures prepared from stock solutions. Other quantitative aspects of this
method, including reproducibility and detection limits, have been described
elsewhere (Fung and Grosjean, 1981).
Light-molecular-weight HC's (C2 to C6) were analyzed on-site by flame
ionization gas chromatography (FID-GC) after cryogenic trapping from matrix
air. Quantitative analysis involved daily calibration with a multicomponent
mixture of known composition and calibration against a National Bureau of
Standards (NBS) propane standard. In addition, hydroxyl radical (OH)
concentrations in the chamber were derived from the concentration ratios of
selected paraffins (e.g., n-butane:isobutane, n-butane:neopentane) in samples
collected on an hourly basis. Heavier-molecular-weight HC's (C6 to C12) were
analyzed by FID-GC upon return to ERT's laboratory in passivated steel
canisters pressurized with matrix air. Quantitative analysis involved the use
of a 38-component mixture and of a Scott-Marrin toluene standard. A number of
canisters were also analyzed independently by the Washington State University
method (EPA, 1980).
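The hydroxyl-radical estimate from paraffin pairs rests on the relation
d ln([A]/[B])/dt = -(kA - kB)[OH], so the slope of ln(ratio) against time
yields [OH]. A sketch with typical ~298 K rate constants and an invented ratio
series (not data from this study):

    import numpy as np

    k_nbutane, k_isobutane = 2.4e-12, 2.1e-12  # OH rate constants, cm3/molec/s

    t = np.array([0.0, 1.0, 2.0, 3.0]) * 3600.0     # s, hourly samples
    ratio = np.array([1.000, 0.995, 0.989, 0.984])  # [n-butane]/[isobutane]

    slope = np.polyfit(t, np.log(ratio), 1)[0]      # d ln(ratio)/dt
    oh = -slope / (k_nbutane - k_isobutane)
    print(f"[OH] ~ {oh:.1e} molec/cm3")             # ~5e6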
271
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
Two PAN analyzers built at ERT (modified electron-capture gas
chromatographs with automated sampling loops) were employed for this project.
The instruments were calibrated using a PAN synthesis and calibration unit
also built in our laboratory. This unit consists of a photochemical flow
reactor in which parts-per-billion levels of PAN are prepared by chlorine-
initiated photooxidation of acetaldehyde in the presence of NO2 in air. The
portable unit allows for direct, on-site calibration of PAN analyzers in the
range of concentrations relevant to ambient levels (approximately
2 to 50 ppb).
Interlaboratory Comparison of Methods for Nitrogenous Pollutants
The captive-air study experiment also provided an opportunity to carry
out interlaboratory comparison studies of methods for measuring ambient levels
of nitrogenous pollutants. Thus, the University of Michigan group headed by
Dr. Stedman and the Unisearch Company headed by Dr. Schiff conducted
measurements of NO, NO2, and nitric acid (HNO3) in both ambient and chamber
air. The ERT staff also conducted a limited number of HNO3 measurements,
while PAN and NO2 photolysis rate constant measurements were conducted by ERT
and University of Michigan staff, respectively. Data from all three
Coordinating Research Council-sponsored groups will be shared and used for
modeling where applicable.
272
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
STATUS OF PROGRAM AND EXAMPLES OF PRELIMINARY DATA
The field component of this program was completed on November 13, 1981.
All continuous data including NO, NO2, 03, PAN, individual light-molecular-
weight HC's, temperature, and solar radiation intensity are being validated
and processed in computerized form. Laboratory analyses of aldehydes and
heavier-molecular-weight HC's have been essentially completed, as have the
corresponding post-study calibrations and quality assurance tests.
Figures 7-1 and 7-2 show examples of chromatograms for light- and
heavier-molecular-weight HC's, respectively, while concentrations of both
light and heavier HC's for a typical sample are listed in Table 7-1. Total
HC's are listed by class in Table 7-2. The most abundant paraffins in this
sample included n-butane, isopentane, n-pentane, and isobutane. Toluene and
xylene isomers accounted for a large fraction of the sample aromatic HC
content. The substantial aromatic and low olefinic content measured for this
sample were typical of the HC mix present in ambient Los Angeles air during
the period studied.
Major carbonyl compounds consistently identified in morning and
irradiated air samples included formaldehyde, acetaldehyde, propanal and
acetone, n-butanal, and benzaldehyde. Other as yet unidentified carbonyls
were often detected. Table 7-3 lists carbonyl concentrations for a typical
morning air sample. It is of interest to note, in the context of the above
discussion concerning the importance of initial carbonyl levels on model
273
-------
7. LOS ANGELES CAPTIVE AIR STUDY
Grosjean et al.
[Figure 7-1. Example gas chromatogram of light-molecular-weight HC's in a
morning ambient Los Angeles air sample.]
274
-------
7. LOS ANGELES CAPTIVE AIR STUDY
Grosjean et al
[Figure 7-2. Example gas chromatogram of heavier-molecular-weight HC's in a
morning ambient Los Angeles air sample.]
275
-------
7. LOS ANGELES CAPTIVE AIR STUDY
Grosjean et al.
TABLE 7-1. EXAMPLE OF LIGHT- AND HEAVIER-MOLECULAR-WEIGHT HC
COMPOSITION IN AMBIENT LOS ANGELES AIR
HC
Amount
(µg/m3)
HC
Amount
(µg/m3)
Ethane 13.5
Ethylene 21.8
Acetylene 11.1
Propane 17.3
Propene 10.3
Propyne -
Propadiene
Isobutane 24.2
Butane 53.2
1-butene 2.2
Isobutene 6.6
trans-2-butene 5.9
cis-2-butene
Isopentane 48.4
Pentane 32.1
3-methyl-1-butene 1.8
1,3-butadiene
1-pentene 0.9
Isoprene
trans-2-pentene -
cis-2-pentene
2-methyl-2-butene
2,2-dimethylbutane
Cyclopentene -
Cyclopentane 3.0
2,3-dimethylbutane 3.9
2-methylpentane 14.1
cis-4-methyl-2-pentene
3-methylpentane 11.9
2-methyl-1-pentene 4.2
Hexane 9.3
trans-2-hexene
2-methyl-2-pentene
cis-2-hexene
Methylcyclopentane 10.6
2,2,3-trimethylbutane
2,4-dimethylpentane 2.7
1-methylcyclopentene
Benzene 10.4
Cyclohexane 4.6
2-methylhexane 4.7
2,3-dimethylpentane 7.3
3-methylhexane
Dimethylcyclopentanes
2,2,4-trimethylpentane
Heptane
Methylcyclohexane
Ethylcyclopentane
2,5-dimethylhexane
2,4-dimethylhexane
2,3,4-trimethylpentane
Toluene
2,3-dimethylhexane
2-methylheptane
3-methylheptane
2,2,5-trimethylhexane
Dimethylcyclohexane
Octane
Ethylcyclohexane
Ethylbenzene
p- and m-xylene
Styrene
o-xylene
Nonane
Isopropylbenzene
Propylbenzene
p-ethyltoluene
m-ethyltoluene
1,3,5-trimethylbenzene
o-ethyltoluene
tert-butylbenzene
1,2,4-trimethylbenzene
sec-butylbenzene
1,2,3-trimethylbenzene
Decane
Methylstyrene
1,3-diethylbenzene
1,4-diethylbenzene
1,2-diethylbenzene
Undecane
Dodecane
α-pinene
β-pinene
Limonene
Myrcene
7.3
5.0
5.6
5.0
5.4
3.
2.
138.
0.9
4.8
2.7
2.7
1.0
24.9
109.1
1.7
28.3
1.7
3.7
5.8
2.4
2.7
3.1
9.9
2.2
1.8
1.6
0.8
2.7
2.6
276
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
TABLE 7-2. TOTAL HC'S BY CLASS
HC Class        Amount (µg/m3)    Percent
Alkanes              309.6          42.7
Alkenes               53.7           7.4
Alkynes               11.1           1.5
Aromatics            350.8          48.4
Terpenes               0.0           0.0
Unidentified          27.1            -
TABLE 7-3. EXAMPLE OF INITIAL CARBONYL CONCENTRATIONS
IN LOS ANGELES AMBIENT AIR - 9/30/81
Carbonyl                    Concentration (µg/m3)
Formaldehyde 30
Acetaldehyde 17
Propanal and acetone 9
n-Butanal 4
Benzaldehyde 2
277
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
predictive ability, that levels of aldehydes are often comparable to those of
olefins in morning Los Angeles air.
Listed in Table 7-4 are initial NOX concentrations and the corresponding
03 maxima for 22 irradiations conducted from September 30 to November 13,
1981 in the large Teflon chamber. During the study period, air quality in
the Los Angeles area was substantially better than in previous years (e.g.,
the worst 1980 smog episodes were encountered in early October).
Nevertheless, high initial NOX levels were recorded on a number of days and
included a high value of 660 ppb on October 6, 1981. Not surprisingly,
sunlight irradiation of the corresponding ambient air sample yielded the
highest 03 maximum recorded in this project, 524 ppb.
After the results are plotted, they will be compared with the Ozone
Isopleth Plotting Package (OZIPP) diagram derived from the EKMA model
constructed for Los Angeles during October 1981, using the standard EKMA
chemistry. In addition, an isopleth diagram will be prepared using the
chemical mechanism formulated by Atkinson et al. (1981). The two isopleth
plots will be compared with the experimental data to ascertain which mechanism
performs better in terms of shapes of the isopleths and of control
requirements to attain the NAAQS for 03. Modeling of selected days with data
obtained in the large outdoor chamber also will be carried out using the ERT
chemical mechanism.
278
-------
7. LOS ANGELES CAPTIVE AIR STUDY
Grosjean et al.
TABLE 7-4. PRELIMINARY 03 AND NOX DATA FOR LARGE OUTDOOR CHAMBER

Date        Initial NOX (ppb)    Maximum 03 (ppb)
09/30/81          140                  136
10/02/81          ~60                   90
10/06/81          660                  524
10/14/81          204                  120
10/15/81          253                  140
10/16/81          428                  203
10/19/81          484                  261
10/20/81          696                  325
10/22/81          258                   95
10/23/81          518                  168
10/28/81          206                   96
10/29/81          150                  110
10/30/81          453                  217
11/02/81          403                  194
11/03/81          502                  164
11/04/81          134                  107
11/05/81          218                  163
11/09/81          359                  211
11/10/81          444                  232
11/11/81          381                  179
11/12/81          283                   89
11/13/81          170                   84
CONCLUSION
A protocol has been developed to evaluate the EKMA 03 isopleth diagrams
using the results from captive-air experiments. The field study for the Los
Angeles experiment has been completed, and data analysis is in progress.
Discussion of the possible application of the captive-air approach for
assessing pollutant control measures in Los Angeles and other areas of the
country should await completion of the data validation and modeling tasks.
279
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
ACKNOWLEDGMENTS
Support of the Los Angeles captive-air study by the Coordinating Research
Council, CAPA-17 and CAPA-19 Project Groups, is gratefully acknowledged, as is
the significant technical input provided by members of the CAPA-17 and CAPA-19
Project Groups and by Professor Jack G. Calvert in the design of this project.
Mr. J. Collins, Mr. J. Harrison, and Mr. E. Breitung of ERT had major
responsibility in the field operations at the USC site, and their dedicated
assistance is much appreciated.
REFERENCES
Atkinson, R., A.C. Lloyd, and L. Winges. In press. An updated chemical
mechanism for hydrocarbon/NOx/SOx photooxidations suitable for inclusion in
atmospheric simulation models. Atmos. Environ.
Carter, W.P.L., A. Winer, and J.N. Pitts, Jr. 1982. Effects of kinetic
mechanisms and hydrocarbon composition on oxidant-precursor relationships
predicted by the EKMA isopleth technique. Atmos. Environ., 16:113-120.
Dimitriades, B. 1977. An alternative to the Appendix J method for
calculating oxidant- and NO2-related control requirements. In: Proceedings of
the International Conference on Photochemical Oxidant Pollution and Its
Control. EPA-600/3-77-001, U.S. Environmental Protection Agency, Research
Triangle Park, NC.
Dodge, M.C. 1977. Combined use of modeling techniques and smog chamber data
to derive ozone-precursor relationships. In: Proceedings of the
International Conference on Photochemical Oxidant Pollution and Its Control.
EPA-600/3-77-001, U.S. Environmental Protection Agency, Research Triangle
Park, NC.
Federal Register. 1980. State Implementation Plans: Approval of 1982 ozone
and carbon monoxide plan revisions for areas needing an attainment date
extension. 45:64856-64861 (September 30, 1980).
280
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
Federal Register. 1981. State Implementation Plans: Approval of 1982 ozone
and carbon monoxide plan revisions for areas needing an attainment date
extension. 46:7182-7192 (January 22, 1981).
Fung, K., and D. Grosjean. 1981. Determination of nanogram amounts of
carbonyls as 2,4-dinitrophenylhydrazones by high-performance liquid
chromatography. Anal. Chem., 53:168-171.
Grosjean, D., K. Fung, and R. Atkinson. 1980. Paper No. 80-50.4, 73rd Air
Pollution Control Association Annual Meeting, Montreal, Quebec.
Hayes, S.R., M.A. Yocke, H. Hogo, and J.A. Johnson. 1980. Evaluation of
requirements for the control of ozone and nitrogen dioxide in the South Coast
Air Basin, SAI No. 218-ES80-178, Systems Applications, Inc., San Rafael, CA.
Martinez, J.R., C. Maxwell, H.S. Javitz, and R. Bawel. 1981. Performance
Evaluation of the Empirical Kinetic Modeling Approach (EKMA). 12th
International Technical Meeting, NATO/CCMS, Menlo Park, CA.
U.S. Environmental Protection Agency. 1977. Uses, Limitations and Technical
Basis of Procedures for Quantifying Relationships Between Photochemical
Oxidants and Precursors. EPA 450/2-77-021a, U.S. Environmental Protection
Agency, Research Triangle Park, NC.
U.S. Environmental Protection Agency. 1980. Guidance for the Collection and
Use of Ambient Hydrocarbon Species Data in Development of Ozone Control
Strategies. EPA-450/4-80-08, U.S. Environmental Protection Agency, Research
Triangle Park, NC.
WORKSHOP COMMENTARY
ESCHENROEDER: I have three questions. First, what precautions did you take
to prevent distortion of your alkene sample by reaction with 03, sort of dark
reactions?
LLOYD: As you notice from Table 7-4, most of the 03 concentrations appear to
be relatively low; the olefins were taken directly from the chamber into a
freezing loop and then measured on site. Other samples were taken for
measuring off-site.
ESCHENROEDER: That sampling time for withdrawing that sample was short enough
that there wasn't sufficient time for reaction of alkenes with 03?
281
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
LLOYD: I think that in terms of olefins, there will always be that question.
We took the precautions we could in terms of trapping cryogenically and
sending some of the samples away, in which case some NO was added to get rid
of the 03. I didn't focus heavily on the fact that the olefins were low
in the atmospheric samples because there may be some uncertainty in those
measurements because of the possible reactions with 03 that you mention.
However, for morning samples, 03 concentrations were low.
ESCHENROEDER: You could probably estimate it from the rates, the time
required, and the residence required.
LLOYD: That would be part of the data reduction. I think the major result
from some of the preliminary data is that the aromatics seemed to be very
high, and I think that's the major focus until we see what the olefins are for
the samples.
ESCHENROEDER: What was the highest carbon number on the samples that you
analyzed by chromatography?
LLOYD: I think C-11.
ESCHENROEDER: Finally, could you briefly outline how Don Stedman did his
photodissociation rate measurements?
LLOYD: His standard technique was to measure the concentration of NO2 that
flows through a glass tube and then downstream measure the NO by
chemiluminescence, thus giving the amount of NO2 photolyzed.
McRAE: Have you made any comparisons of the HC composition measured against
the trends of compositions you're getting out of the emissions inventory that
Fred Lurmann compiled?
LLOYD: Not yet. As we saw yesterday, one thing that is lacking in most
areas, and particularly in L.A., with its high photochemical oxidant, is
information on HC composition. We've really got no good handle on how the
character of the atmosphere may be changing. We're looking at control
strategies to take effect 10 years hence.
We should make sure that our chemical mechanism can also reproduce results in
those regimes that may exist in the future as well as those mixes currently
existing.
McRAE: What were the typical levels of gas-phase HNO3?
LLOYD: Those data haven't been reduced yet and I'm a little bit cautious,
because we had three groups measuring the HNO3. Grosjean is most heavily
involved with that. I think from the previous Claremont study you see levels
of HNO3 up to 30 ppb. At times, again, you see levels in that range, 10, 20,
or 30 ppb. These samples were taken further upwind of the USC site, and I
282
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
think some of the levels in the bag are probably higher. Also, we did some
measurements in ambient air, and some of those were low in HNO3.
DIMITRIADES: Let me ask three questions about the chromatographic
analysis of the organics. First, was the identification based on retention
times alone? Second, what was the fraction of the total sample you couldn't
identify or that was of questionable identification? Finally, how did your
HC analysis column perform? How did it perform with the oxygenates; did it
result in sharp peaks or were they spread all over the background?
LLOYD: The oxygenates were reasonably resolved, although in some samples
there wasn't resolution between acetone and propionaldehyde. In general the
resolution is pretty good.
DIMITRIADES: I'm talking about the HC analysis column, not the MBTH
(3-methyl-2-benzothiazolone hydrazone hydrochloride).
LLOYD: I'm not sure.
DIMITRIADES: The other question was what fraction of the total sample was
unidentified?
LLOYD: In the example I gave here, about 95% of the sample was identified.
DIMITRIADES: Was the identification based on retention times?
LLOYD: Mostly retention times, but there was also some mass spectrometry
checks for some specific compounds.
BUFALINI: My original objection to the smog chamber study was that I felt
that it was doubtful that you would get meaningful results down at
concentrations that would produce 0.12 ppm of 03. The main reason for this is
that I felt that chamber contamination effects would be extremely important.
Could you comment on what sort of experiments were performed to make certain
that the smog chamber was performing in a satisfactory fashion in the low end
of the EKMA plots where one would observe 0.12 ppm? I also have another
question about PBZN. I had made a calculation based on toluene that showed
relatively low yields of PBZN from toluene. If one assumes about 100 ppb of
toluene, which is not atypical in Los Angeles, one would observe no more than
probably 1 to 2, maybe 3 ppb of PBZN; have you observed this high a
concentration or have any experiments been done along that line? The yield
really isn't large, and I think the experimental techniques are going to be
severely tested to look for PBZN.
LLOYD: We did do some control and reference runs with propene/NOx and
butane/NOx mixtures to check out how significant some of the dirty chamber
effects would be. We haven't finished all the data reduction on that.
I think that that's still of concern for some of the lesser-reactive
compounds, such as the butane. Chamber contamination effects for propylene
283
-------
7. LOS ANGELES CAPTIVE AIR STUDY Grosjean et al.
seem to be less significant, but there is no question that some effect is
there, and the extent to which it's there will have to be ascertained by doing
the modeling.
BUFALINI: We also found that not only chamber contamination is important, but
if the bag, even a large bag, has a very small hole in it, material gets in it
very readily. This is puzzling, as it would seem that most of the material
would go in the opposite direction. Thus if a bag has even a slight leak in
it, the results would have to be discarded. At least this is true with the
200-liter bags that we've been working with.
LLOYD: As I mentioned, we did have some data earlier than the end of
September, in which we had some problems with the large chamber collapsing too
early, well before the 6-h irradiation time. During the course of the
experiments, leaks would occur now and again, and we would have to change the
Teflon bags. Examining some of the detailed HC's as a function of time may
shed some light on this effect. Regarding PBZN, again it's my understanding
in talking with Dr. Grosjean that the column for PAN would have to be run for
longer times, so I'm not sure that it's fully suitable for PBZN. In looking
at NOX sinks and accounting for HNO3, PAN, and some of the organic nitrates,
one might see some differences in the chromatograms which possibly may be
attributed to PBZN, because I presume that they would also be reduced in the
same way that PAN would be reduced and would actually give interference with
the chemiluminescence instruments. After we complete our data analysis and
calibration, we will know whether some of the differences we noted were real
or just artifacts.
BUFALINI: It was my understanding that you had a PBZN analyzer; I think Las
Vegas is using one of yours right now, isn't it?
LLOYD: There's been some talk about it, and we will probably perform PBZN
measurements in the Las Vegas area in 1982.
DIMITRIADES: We would now like to hear Harvey Jeffries, who has a few things
to say about work in connection with the mechanistic studies; after that, we
may have some more questions on both presentations.
284
-------
8. DESCRIPTION AND COMPARISON OF AVAILABLE CHEMICAL MECHANISMS
Gary Z. Whitten
James P. Killus
Systems Applications, Inc.
101 Lucas Valley Road
San Rafael, California 94903
ABSTRACT
Chemical mechanisms intended for use in air quality simulation models
need to provide a reasonable compromise between chemical realism and
computational efficiency. The type of approach to such a compromise can be
used to define three types of mechanisms: (1) surrogate mechanisms, (2)
lumped-molecule mechanisms, and (3) lumped-structure mechanisms.
A list of criteria can also be defined, and how currently available
mechanisms deal with these criteria is discussed. The criteria are defined
from issues such as the following: the key features of smog chemistry,
hydrocarbon reactivity, carbon balance, validation, usability, and
documentation. In general, the performance of any currently available
mechanism can be improved by updating rate constants, carefully choosing
parameters, or making small modifications to the chemistry so that criteria
associated with the single issue of validation can be satisfied. Therefore,
the choice of mechanism should involve the entire list of issues.
285
-------
8. DESCRIPTION AND COMPARISON OF CHEMICAL MECHANISMS Whitten
INTRODUCTION
Over the past decade the scientific and engineering development of
atmospheric photochemical modeling has progressed at a rapid rate. During
this time the state of the art of smog modeling has advanced from its
tentative academic beginnings to the point that photochemical models are today
a standard tool for regulatory analysis. However, since it is neither
theoretical nor experimental in the classic sense, modeling must be recognized
as a relatively new area of scientific specialization. The criteria for good
scientific modeling are not yet as well established as the criteria we take
for granted when evaluating new theories or experiments. One area of
scientific specialization apparently fostered by the expansion in modeling is
the evaluation of experimental kinetics data for use by the modeling
community. This presentation is, in part, intended to provide a set of
criteria for evaluating whole kinetic mechanisms for use in atmospheric
modeling.
Concurrent with the expansion of knowledge and modeling techniques has
been a general confusion resulting from the plethora of models and kinetic
mechanisms available. Clearly photochemical models are not all the same, not
even in so basic a subsystem as the kinetics package. Different kinetic
mechanisms give different results under similar conditions, although what
"similar" means in this context is often hard to state. Each different
mechanism has a different methodology for speciation of hydrocarbons (HC's),
averaging of ensemble rate constants, and selection of stoichiometric
286
-------
8. DESCRIPTION AND COMPARISON OF CHEMICAL MECHANISMS Whitten
parameters. Thus, if a single investigator tries to compare one mechanism
against another, he is often liable to the accusation of incorrect usage.
Indeed, improper usage is likely, since few mechanisms are documented
sufficiently to assure replication of results. As part of the cycle, the lack
of documentation is another incentive for each investigator to develop his own
mechanism. At least he will be using it "correctly."
In this review, we attempt to identify the important issues in the design
of photochemical kinetic mechanisms for the atmosphere. First, we describe a
general classification of mechanisms which are currently documented, and we
then discuss the important features of smog chemistry as the phenomenon is
presently understood. We then specifically deal with two major issues in the
treatment of smog formation: HC reactivity and carbon balance. Finally, we
address the issues of validation and usage of mechanisms, since these are the
greatest concerns for those interested in applying photochemical models to the
atmosphere.
CURRENT PHOTOCHEMICAL MECHANISMS
Photochemical kinetic mechanisms intended for atmospheric application may
be classified into three types: surrogate mechanisms, lumped-molecule
mechanisms, and lumped-structure mechanisms. Each mechanism differs in the
nature of the simplifications it uses to make the complex reaction sequences
and the multitude of organic compounds involved in urban smog more tractable.
287
-------
8. DESCRIPTION AND COMPARISON OF CHEMICAL MECHANISMS Whitten
All mechanisms tend to be the same in their explicit treatment of the basic
inorganic chemistry.
Surrogate mechanisms are perhaps the easiest to understand. For
laboratory smog chamber experiments involving individual HC species,
mechanisms are available which treat every compound with a significant role in
the chemical process explicitly. In the surrogate-mechanism approach, the
explicit mechanisms for one or more HC's are added together. The complex
urban mix is then assumed to be represented by some blend of the HC's treated
by the explicit chemistry. The obvious difficulty with this approach is the
specification of the surrogate blend. The mechanism used in the present form
of the Empirical Kinetic Modeling Approach (EKMA) is such a surrogate
mechanism, and a 25% propylene/75% butane mix is used to simulate automobile
exhaust. The appropriate mix for nonautomotive emissions has not been
established.
One possible method for adapting the surrogate-mechanism approach to HC
mixes other than automobile exhaust has been offered by the users of the
Lawrence Livermore grid model (known as LIRAQ). In the LIRAQ emissions
inventory, each individual HC species is represented by "propylene/butane
equivalents"—the amount of propylene and butane which might represent a
kilogram of HC emission. This procedure has not been validated, however.
The lumped-molecule approach to kinetic modeling is the most widely used
method of simplifying the smog-chemistry problem. Lumped-molecule mechanisms
288
-------
8. DESCRIPTION AND COMPARISON OF CHEMICAL MECHANISMS Whitten
are based upon the approach of Hecht, Seinfeld, and Dodge (1974), wherein
molecular species are classified as olefins, paraffins, aldehydes, and,
usually, aromatics. The rate constant for each generalized HC class should,
in theory, be an appropriate average rate of the individual compounds
contained in each class. However, in practice, lumped-molecule mechanisms
usually specify some default value which may or may not approximate the
ensemble average.
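The averaging in question is simple in principle; here is a sketch of a
concentration-weighted ensemble rate constant for a lumped paraffin class
(species, ~298 K rate constants, and loadings are illustrative, not taken from
any documented mechanism):

    k_oh = {"n-butane": 2.4e-12, "isopentane": 3.7e-12, "n-pentane": 3.9e-12}
    ppmc = {"n-butane": 0.053, "isopentane": 0.048, "n-pentane": 0.032}

    # ppmC-weighted ensemble average for the lumped class
    k_lumped = sum(k_oh[s] * ppmc[s] for s in k_oh) / sum(ppmc.values())
    print(f"class-average k(OH) ~ {k_lumped:.2e} cm3/molec/s")  # ~3.2e-12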
Lumped-molecule mechanisms in general require noninteger stoichiometric
parameters to describe reaction pathways for the lumped molecular species and
the lumped intermediates generated by the chemistry. Choice of the proper
values for the stoichiometric parameters poses a problem which is initially
related to the averaging of rate constants just described; the stoichiometric
parameters usually relate to the average size and distribution of the lumped
initial species. However, the stoichiometric parameters in lumped-molecule
mechanisms may also affect both reactivity and carbon balance. Specification
of these parameters to satisfy all of these constraints simultaneously can be
difficult.
The lumped-structure approach is an attempt to decouple reactivity and
carbon conservation. The concept owes its inception to the work of Benson
(1968), who calculated molecular reactivity from submolecular components.
Thus, in the lumped-structure approach, carbon structures within the HC
molecule are the lumping category. In an olefin molecule, for example, the
289
-------
8. DESCRIPTION AND COMPARISON OF CHEMICAL MECHANISMS
Whitten
[Rotated table summarizing currently available photochemical kinetic
mechanisms.]
290
-------
8. DESCRIPTION AND COMPARISON OF CHEMICAL MECHANISMS Whitten
chemistry of the olefinic group is treated independently from any paraffinic
side chains.
The lumped-structure approach offers several advantages over the
lumped-molecule approach. Because of similar reaction rate constants, the
averaging problem is minimized. Carbon conservation can be expressly
maintained, and carbon balance during the simulation can even be monitored.
The principal disadvantage is that intramolecular processes such as
decomposition and side-chain activation can be difficult to treat. Also,
because the lumped-structure approach is often less intuitive than the
lumped-molecule approach, the documentation requirements may be higher.
GENERAL REMARKS ON SMOG CHEMISTRY
Ozone (O3) formation in photochemical smog is a complex phenomenon
involving the radical-initiated degradation of HC's and oxides of nitrogen
(NOX). Despite the complexity of the process, its basic features are well
established, and, for the most part, noncontroversial. We will now describe
some of the more basic features of smog chemistry before we analyze how
different kinetic mechanisms treat these basic processes.
O3 Generation
Ozone is the principal oxidant formed in smog. Ozone formation occurs as
a result of a set of linked processes:
NO2 + hv → NO + O,
O + O2 → O3,
O3 + NO → NO2 + O2,
RO2 + NO → NO2 + RO.
The first three reactions above form a photochemical cycle, passing an
oxygen atom from nitrogen dioxide (NO2) to O3 and back again. The last
reaction injects oxygen atoms into the cycle. As nitric oxide (NO) is
converted to NO2, there is less and less room in the nitrogen oxide
"compartment" for an odd oxygen, and the injected oxygen atoms tend to
accumulate as O3. The peroxy radicals (RO2) which inject the oxygen into the
NO/NO2/O3 system are formed in the HC oxidation process.
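Setting the photolytic production of odd oxygen in this cycle equal to its
destruction by NO gives the familiar photostationary-state relation,
[O3] ≈ j(NO2)[NO2]/(k[NO]). The short Python sketch below illustrates the
calculation; the rate values are illustrative assumptions, not numbers taken
from this paper.

    # Photostationary-state O3 implied by the first three reactions above.
    # Illustrative values (assumptions):
    #   j_no2   - NO2 photolysis frequency, roughly 8e-3 s^-1 near midday
    #   k_no_o3 - O3 + NO rate constant, roughly 0.44 ppm^-1 s^-1 at 298 K

    def photostationary_o3(no2_ppm, no_ppm, j_no2=8.0e-3, k_no_o3=0.44):
        """O3 (ppm) from balancing j*[NO2] against k*[O3]*[NO]."""
        return j_no2 * no2_ppm / (k_no_o3 * no_ppm)

    # Example: 0.05 ppm NO2 against 0.02 ppm NO gives roughly 0.045 ppm O3.
    print(photostationary_o3(0.05, 0.02))

The fourth reaction perturbs this balance: each RO2 + NO step converts NO to
NO2 without consuming an O3, which is why O3 accumulates as the HC's oxidize.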
Precursor Decay
Hydrocarbons and NOX are the chemical precursors to photochemical smog.
The oxidation of HC's yields the peroxy radicals which drive O3 formation:
HC + OH → RO2 (+ H2O).
Nitrogen oxides are consumed in a variety of reactions:
OH + NO2 → HNO3,
RO + NO2 → RONO2,
RO2 + NO → RONO2,
Olefin + NO3 → Dinitrates,
Cresol + NO3 → Nitrocresol.
The last two reactions are of interest because they destroy NOX without
consuming free radicals. Thus, these reactions do not inhibit the
smog-formation process in the way that radical termination processes do.
Radical Sinks
Radical-sink processes are important for several reasons. Because RO2
formation from HC is a radical-initiated oxidation process, radical sinks
reduce the rate of HC oxidation and limit the O3 formation rate.
Most radical-sink processes involve the destruction of NOX as well. In
addition to those NOx-radical sink reactions listed above, the
peroxyacetylnitrate (PAN) formation process also deserves mention.
RCO3 + NO2 ⇌ PAN.
Peroxyacetyl nitrates can decompose thermally to yield the RCO3 and NO2
precursors. Thus PAN formation consumes radicals and NOX, whereas PAN
decomposition serves as a source for these species.
At high radical concentrations, generally seen only in combinations of
high precursor concentrations and high HC:NOX ratios, radical-radical
reactions may become significant. The best known of these is the reaction of
HO2 with itself:

HO2 + HO2 → H2O2 + O2.

The product, hydrogen peroxide (H2O2), is an oxidant which has been
observed in urban smog.
Radical Sources
To balance the effects of radical sinks and to provide a radical
concentration level sufficient to catalyze the smog process, sources of
radicals must be present in the system. The most important of these radical
sources is the photolysis of oxygenated HC's (aldehydes, ketones, dicarbonyls,
etc.), for example:
RCHO + hv (+ O2) → RCO3 + HO2.
Another source of radicals from HC precursors comes from the olefin-O3
reaction via several steps:
Olefin + O3 → ... → RO2 + HO2.

Some mechanisms produce radicals from as much as 50% of all olefin-O3
reactions. At this level olefin-O3 chemistry becomes a significant radical
source. More recent evidence suggests that the radical yield from this
process is much less than 50% and that olefin-O3 chemistry is at most a minor
source of radicals.
There are two significant radical sources in the inorganic chemistry of
smog:
HONO + hv → OH + NO,
O3 + hv → O(1D) + O2,
O(1D) + H2O → OH + OH.
Photolysis of HONO may be significant to urban smog. Some evidence
exists that this compound is emitted directly or is formed from heterogeneous
processes from NOX emissions. Small concentrations (1 to 5% of ambient NOX)
have been detected in the urban atmosphere.
Radical formation from O3 photolysis is the dominant known radical source
in the rural environment, where the high background O3 concentration makes
this source much larger
than other sources. For urban smog this source is less important, because O3
concentrations are suppressed in the early morning (when other radical sources
are low). Later in the day, when O3 is high, other radical sources are also
high and the O(1D) radical source has little effect on the overall chemistry.
REACTIVITY CONSIDERATIONS
Given our basic description of smog chemistry, we may identify three
factors which influence the production of O3. The first of these factors is
the radical sources needed to maintain the concentration of hydroxyl (OH)
radicals necessary to initiate the HC oxidation process. The second factor is
a concentration of HC's; the reaction of OH with HC's produces the peroxy
radicals which inject oxygen atoms into the NO/NO2/O3 cycle. The final factor
is the presence of NOX, which react with peroxy radicals and generate O3.
The first two of these factors, radical sources and HC oxidation,
comprise what is generally referred to as "reactivity." The effects of NOX
are more complex. Although it is true that NOX are necessary for the
generation of O3, NO2 also serves as a principal radical sink in smog
chemistry. Thus, increasing NOX serves to reduce the radical concentration,
to reduce the rate of HC oxidation, and thus to slow the production of O3.
Given a particular level of HC, increasing NOX slows the O3 formation rate,
but increases the peak O3 until a particular HC:NOX ratio is reached (the
"ridge line" ratio on an isopleth diagram). Additional increases in NOX
levels further slow O3 production and reduce peak O3, although this reduction
depends greatly on factors such as total allowed reaction time, dilution rate,
and heterogeneous O3 destruction.
Most current chemical kinetic mechanisms treat NOX chemistry in a similar
fashion, since the basic reactions have been studied intensively in laboratory
experiments. Treatment of N2O5 chemistry is an area of uncertainty leading to
minor differences among mechanisms. Major differences, however, exist in the
treatment of HC oxidation behavior.
The oxidation reactivity of the precursor HC mix appears very similar in
most current mechanisms. The principal difference in this mechanism feature
occurs in the HC lumping procedure. For example, most mechanisms lump
ethylene along with the other olefins, yet ethylene has an OH reaction rate
constant of less than one-third that of propylene. If the propylene rate
constant is nevertheless used for all olefins, the reaction chemistry of
olefins may be dominated by an erroneously high rate for the ethylene-OH
reaction. Since ethylene typically comprises 50% of olefins, the error in
overall mechanism performance may be large. The effect of the error is too
much reactivity early in the day and too little reactivity later in the day.
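A small numerical sketch in Python makes the size of this lumping error
concrete. The rate constants below are assumed representative values,
consistent with the roughly one-third ratio cited above, not numbers taken
from this paper.

    # Ensemble-average OH rate constant for a lumped olefin class versus a
    # propylene default. Rate constants (cm3 molec^-1 s^-1) are assumptions.
    K_OH = {"ethylene": 8.5e-12, "propylene": 2.6e-11}

    def lumped_k(mole_fractions, k_table):
        """Mole-fraction-weighted average rate constant for a lumped class."""
        return sum(f * k_table[s] for s, f in mole_fractions.items())

    mix = {"ethylene": 0.5, "propylene": 0.5}   # ethylene ~50% of the olefins
    k_avg = lumped_k(mix, K_OH)
    print(f"ensemble average: {k_avg:.2e}")                        # ~1.7e-11
    print(f"propylene default high by {K_OH['propylene']/k_avg:.2f}x")  # ~1.5x

With ethylene at half the olefin mix, the propylene default overstates the
class-average OH rate constant by roughly 50%, which is the source of the
too-fast-early, too-slow-late behavior described above.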
A more important difference in mechanism behavior results from
assumptions concerning product yields. Subsequent to the initial reaction
step, HC oxidation is dominated by the reactions of oxygenated products:
HC + OH → RO2,
RO2 + NO → NO2 + RO,
RO → HO2 + Aldehyde,
Aldehyde + OH (+ O2) → R'CO3,
R'CO3 + NO → NO2 + CO2 + R'O2.
The R'O2 at the end of this cycle is not the same as the initial RO2; it
contains one less carbon atom. Lumped mechanisms tend to treat all RO2
radicals as being similar. To terminate the reaction chain, some kind of
chain-length parameter is required.
Aldehydes are intensely reactive compounds. They react very rapidly with
OH, and they also photolyze to generate radicals. After a period of time, the
balance of aldehyde production and aldehyde loss will dominate the behavior of
a chemical kinetic mechanism. Current mechanisms are sufficiently dissimilar
in their treatment of aldehyde formation and loss that disparities in
calculations caused by this feature will become significant in less than one
day of simulation time.
CARBON BALANCE
When tested against smog chamber data, a kinetic mechanism may perform
quite well. However, smog chamber experiments typically use only initial
concentrations of precursors, the light source may not vary in time, dilution
and surface losses are fixed, and the duration of the experiment may be as
short as 6 h. The same kinetic mechanism may therefore not perform well for
atmospheric applications which involve continuous emissions, diurnal light,
complex dispersion, widely variable surface losses, and simulation times up to
several days. One factor which can play a role in the quality of atmospheric
modeling is the inherent carbon balance of a kinetic mechanism.
The basic function of a kinetic mechanism in an atmospheric model is to
deplete the precursors chemically into intermediate products which, in turn,
generate O3 and other secondary pollutants. The rest of the model deals with
generating the proper precursor levels from a combination of emissions,
dispersion, and deposition, plus treating the secondary pollutants via
dispersion and deposition. Hence, the improper loss or gain of carbon from a
kinetic mechanism negates much of the effectiveness built into the rest of the
model. The use of a kinetic mechanism validated against smog chamber data but
known to have poor carbon balance adds an extra measure of uncertainty when
such a mechanism is used for atmospheric modeling.
Carbon-balance analysis can be a useful tool to determine the progression
of reactivity over time. To exemplify such a carbon-balance analysis, let us
examine the aldehyde cycle in the Hecht, Seinfeld, and Dodge (HSD) mechanism
(1974).
The aldehyde cycle in the HSD mechanism is governed by the parameter β,
the control parameter for the aldehyde oxidation sequence. In the HSD
mechanism, the initial HC oxidation step produces an RO2 radical that oxidizes
an NO to NO2:

RO2 + NO → NO2 + RO.
The lumped alkoxyl radical then decomposes to form a hydroperoxy radical
and an aldehyde:
RO (+ O2) → Aldehyde + HO2.
The parameter β, "the fraction of aldehydes which is not formaldehyde,"
controls subsequent carbon oxidation:

Aldehyde + OH (+ O2) → β RCO3 + (1 − β) [HO2 + CO].
Subsequent reactions of RCO3 return an aldehyde:

RCO3 + NO (+ O2) → RO2 + NO2 + CO2,
RO2 + NO → RO + NO2,
RO → Aldehyde + HO2.
As long as β is less than 1.0, each of these cycles reduces the
concentration of R moieties and simultaneously creates a carbon monoxide (CO)
or carbon dioxide (CO2). Several successive cycles in turn produce
progressively lesser reductions in R and progressively lesser amounts of CO
and CO2, assuming no outside sources of R moiety.
Hence, the parameter β can be pictured as the ratio in the geometric series

S = α (1 + β + β^2 + ... + β^N),

whose sum becomes

S = α / (1 − β)

as N goes to infinity. The factor α represents the number of alkyl
moieties (R) produced in the initial HC oxidation step (e.g., for butane in
HSD α = 1, for propylene α = 2):
Paraffin + OH → RO2,
Olefin + OH → RO2 + RCHO.
The sum S, therefore, is simply the total number of whole carbon atoms of CO
and CO2 generated per original molecule. If the parameter β is chosen so that
S equals the number of carbon atoms in the molecule, carbon balance is
achieved. For example, Hecht, Seinfeld, and Dodge recommend a β of 0.75 for
butane, which gives an S of 4.0, the correct number for butane. In other
words, if butane is allowed to react completely according to the pathways
described above, four molecules of CO or CO2 will be produced per molecule of
butane when β is 0.75.
For HC mixtures, the setting of a carbon-conservative β is more
difficult. Yet, carbon conservation is ignored if one uses β merely as an
adjustable reactivity parameter or if one attempts to follow the ratio of
formaldehyde to the total aldehydes. One might envision a scheme for
selecting β on the basis of the average molecular weight of the mix, with
perhaps some minor adjustments for the relative reactivities of the HC's
involved.
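The arithmetic behind such a choice is simple enough to sketch in code,
using only the relations given above (S = α/(1 − β), with balance requiring
S to equal the precursor's carbon number). The Python helper names below are
hypothetical.

    # Carbon-conservative beta for the HSD aldehyde cycle described above.

    def carbon_conserving_beta(alpha, n_carbon):
        """beta chosen so that alpha / (1 - beta) equals the carbon number."""
        return 1.0 - alpha / n_carbon

    def series_sum(alpha, beta, n_cycles=200):
        """Partial sum alpha*(1 + beta + beta**2 + ...), as a sanity check."""
        return alpha * sum(beta ** n for n in range(n_cycles))

    beta = carbon_conserving_beta(alpha=1, n_carbon=4)   # butane, alpha = 1
    print(beta)                 # 0.75, matching the HSD recommendation
    print(series_sum(1, beta))  # approaches 4.0 carbons of CO/CO2, as required

For a mixture, one could apply the same formula with the mix's average α and
average carbon number, which is essentially the molecular-weight-based scheme
suggested above.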
We have analyzed several mechanisms for their carbon-balance properties.
In most lumped-molecule mechanisms the carbon-balance properties depend upon
the HC species assumed. Typically we found current mechanisms to be somewhat
carbon negative for paraffins (tending to lose carbon faster than an explicit
mechanism), whereas carbon balance for olefins tended to be somewhat carbon
positive (yielding more carbon in the products than exists in the precursors).
One mechanism that we examined was actually carbon divergent, each aldehyde
cycle yielding 17% more aldehydes than the previous cycle. Under some
conditions such a mechanism would yield highly nonphysical results.
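The divergence compounds geometrically. A two-line check in Python, using
the 17% figure just quoted, shows how quickly the excess carbon becomes
nonphysical:

    # Relative aldehyde carbon after n cycles for a mechanism that gains
    # 17% per aldehyde cycle (the carbon-divergent case described above).
    for n in (5, 10, 20):
        print(n, round(1.17 ** n, 1))   # -> 2.2, 4.8, 23.1 times the start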
VALIDATION OF MECHANISMS
The problem of model validation is not confined to photochemical kinetic
mechanisms or even simulation models in general. The problem of hypothesis
testing is central to the scientific endeavor. No rigorous procedure which
guarantees a validated simulation model is presently available. We will,
however, attempt to describe certain criteria which, if met, will support the
belief that the mechanism in question is a reasonable description of the
underlying physicochemical processes.
Chemical Foundations of Kinetic Mechanisms
Because of the assumptions and extrapolations which must be made in
applications of kinetic models, the fundamental basis for these models should
be sound and robust. The central chemical reactions and reaction rate
constants used in the mechanism should be the most accurate available; where
uncertainty exists, the limits of that uncertainty should not be exceeded.
The mechanism should also conserve mass and avoid the use of arbitrary tuning
parameters whenever possible.
However, a mechanism which violates one of these rules is not necessarily
invalid. Since some uncertainty still exists in the rates and products of the
important chemical reactions in smog formation, we may consider all kinetic
mechanisms to be in error to some degree. However, the behavior of smog
chemistry is insensitive to changes in some reaction rates. In many cases it
is the ratio of competing reactions that determines overall behavior; if the
ratio is correct, the mechanism is valid even though the absolute value of the
rate constants may be incorrect. Even violations in mass conservation may
have an insignificant effect under many circumstances.
The essential point remains, though, that a mechanism which exhibits
fundamental chemical realism is more valid than a mechanism which does not.
An explicit mechanism is, therefore, fundamentally more robust for the
specific HC's for which it was developed. This point provides a methodology
for testing lumped mechanisms—they should be comparable to explicit
mechanisms.
As we have previously noted, all current mechanisms tend to treat
inorganic chemistry similarly. Inorganic smog chemistry is well characterized
by direct kinetic measurements—the first validation of any photochemical
mechanism. The organic chemistry of some organic molecules is also well
characterized. The ability of a lumped mechanism to simulate the known
chemistry of these organic molecules (as reflected in an explicit mechanism)
forms the second validation test of a lumped mechanism. Note especially that
in comparisons between mechanisms, one is not limited to comparison with
observed species; one may compare predictions using computer output from
explicit mechanisms for short-lived intermediates, for example, OH and RO2,
as a further validation check. The greater the degree of correspondence for all
species between the lumped and explicit mechanisms, the more stringent the
validation test and the more sound the construction of the lumped mechanism.
This last point (the use of all species) is also applicable to the validation
of mechanisms using smog chamber data.
Validation Using Smog Chamber Data
The use of smog chamber data is indispensable for the development and
validation of chemical kinetic mechanisms. In our own efforts to simulate
smog chamber experiments, we have developed a number of methodologies for the
improvement of the validation process. These include the following
principles:
• All modeling exercises should attempt to simulate not only the O3-
formation behavior of the smog mixture, but also NOX, HC,
aldehyde, and CO data, within the limits of experimental
uncertainty.
• The experiments used as a validation test of a lumped mechanism
should include a broad spectrum of reactivity. Experiments at
different precursor concentration levels and different HC:NOX
ratios should also be performed.
• For lumped mechanisms, validation using a variety of HC mixes
should be attempted even if their reactivity is known to be
similar. This should certainly include "realistic" mixes such as
auto exhaust.
• If possible, some experiments should be run to completion, that
is, to a point where either HC or NOX are depleted so that a true
03 peak is reached. In an outdoor chamber this may require a
2-day run if the diurnal change in sunlight produces the 03 peak.
The issue of chamber-related effects is controversial, but several points
may be made. First, the successful application of a mechanism to several
different chambers certainly enhances claims for its validity. Second, some
chamber effects are fairly well characterized and thus do not necessarily
detract from the validation of a mechanism. The surface loss of O3 may be
such a chamber-dependent effect; the shape of the light intensity spectrum can
be another chamber effect.
The use of "adjustable" chamber effects in the development of a mechanism
can obscure the fundamental photochemical process and limit the area of its
validity. For example, the use of large, chamber-dependent radical sources in
the application of a mechanism to a smog chamber obscures the validation of
that mechanism with regard to radical source and sink phenomena.
Our own work has involved the study of numerous kinetic mechanisms
validated for several different smog chambers. We believe that smog chamber
effects are such that validation of mechanisms is entirely possible at
urban-level precursor concentrations. At lower concentrations of HC and NOX
(< 0.2 ppmC HC and < 0.05 ppm NOX), chamber contamination becomes a problem
for many facilities, and special methodologies must be used if the smog
process is to be studied under these circumstances.
An analogy between electrical circuits and smog chemistry can be used to
help explain the sizeable O3 concentrations seen from smog chambers when no
precursors are added. The flux of radicals is comparable to the current of
electricity, and O3 is comparable to the voltage or the signal generated. Thus
high-concentration experiments have the robustness of a low-impedance circuit,
and low-concentration experiments are subject to spurious effects even though
the O3 or signal strengths of the two types of experiments may be similar.
MECHANISM USAGE
The application of photochemical kinetic mechanisms to atmospheric
simulations places another set of criteria on mechanism comparison. Ideally,
a mechanism intended for atmospheric applications should be computationally
compact, and straightforward methodologies should exist both for the
speciation of emissions into the generalized species and for the selection of
parameters such as stoichiometric coefficients and photolysis rates. Finally,
if the mechanism is ever to gain general usage, it must be well documented.
Computational compactness is a prime requirement for mechanisms used in
models that include an adequate treatment of atmospheric transport processes.
Such models often reduce the computational overhead by invoking the
steady-state approximation for many of the intermediate chemical species of
the mechanism. To make optimal use of this approximation, the mechanism
should be designed with a minimum of non-steady-state species. Furthermore,
reactions between steady-state species must be held to a minimum, since such
reactions increase the algebraic complexity of the steady-state equations.
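As a concrete illustration of the device (a sketch under assumed values; the
species, rate constants, and concentrations are illustrative, not from this
paper): for a fast radical such as OH, setting d[OH]/dt ≈ 0 replaces a stiff
differential equation with simple algebra.

    # Steady-state elimination of a fast intermediate, here OH.
    # All rate constants and concentrations are illustrative assumptions.

    def steady_state_oh(p_radicals, k_hc, hc, k_no2, no2):
        """[OH] from P = (k_hc*[HC] + k_no2*[NO2]) * [OH], in molec/cm3."""
        return p_radicals / (k_hc * hc + k_no2 * no2)

    # A radical source of 1e7 cm^-3 s^-1 against OH losses to HC and NO2.
    oh = steady_state_oh(1.0e7, k_hc=5.0e-12, hc=2.5e12, k_no2=1.1e-11,
                         no2=1.2e12)
    print(f"[OH]_ss = {oh:.2e} molec/cm3")   # a few times 1e5

If two steady-state species react with each other, each one's loss term
contains the other's unknown concentration, and the algebra becomes a coupled
system; this is the complexity the preceding paragraph warns against.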
Converting a complex mixture of HC's into the special precursor species
of a given mechanism is often the most difficult task confronting the user of
a kinetic mechanism. Although it is relatively easy to specify the HC
categories for simple olefinic or paraffinic compounds, more complicated
compounds defy such easy analysis. Acrolein, for example, has both an
olefinic and an aldehyde functional group. It thus cannot be placed easily
into a lumped-molecule mechanism. Such concerns were an important factor in
the development of the lumped-structure approach. In the Carbon Bond
mechanism, for example, acrolein can be treated as one olefinic group and one
carbonyl group. Another type of example is octene. Most of this molecule is
paraffinic, yet it is technically an olefin. A surrogate-mechanism approach
might consider octene as a blend of propylene and butane; a lumped-molecule
mechanism could do likewise or treat octene as a pure olefin precursor, making
special adjustments to the intermediate parameters; a lumped-structure
mechanism merely treats octene as six paraffinic units and one olefinic unit.
If octene is treated as a pure olefin using a lumped-molecule mechanism, then
the parameter adjustment must also be appropriate to the overall mixture.
It is noteworthy that the concentration of the special olefinic species would
typically be the same in all three types of mechanism for this example.
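A toy speciation table in code form makes the contrast concrete. The group
labels (PAR, OLE, CARB) and their carbon counts are illustrative stand-ins
for a lumped-structure scheme, not the Carbon Bond mechanism's actual
definitions.

    # Lumped-structure speciation of the examples discussed above.
    CARBONS_PER_GROUP = {"PAR": 1, "OLE": 2, "CARB": 1}   # assumed unit sizes

    SPECIATION = {
        "octene":   {"PAR": 6, "OLE": 1},   # six paraffinic units, one olefinic
        "acrolein": {"OLE": 1, "CARB": 1},  # one double bond, one carbonyl
    }

    def carbon_count(molecule):
        """Total carbons implied by a molecule's structural units."""
        return sum(n * CARBONS_PER_GROUP[g]
                   for g, n in SPECIATION[molecule].items())

    print(carbon_count("octene"))    # 8: no carbon lost or gained in lumping
    print(carbon_count("acrolein"))  # 3

Because the units map directly onto carbon atoms, a mass-balance check like
carbon_count is trivial; this is the decoupling of speciation from carbon
conservation that the lumped-structure approach was designed for.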
One feature that is useful in a kinetic mechanism is comparability with
normally observed concentration data. Both surrogate mechanisms and
lumped-structure mechanisms can be compared to total nonmethane hydrocarbon
(TNMHC) measurements. Such a comparison is more difficult for lumped-molecule
mechanisms because chemical reaction tends to alter the average molecular
weights of the HC mix. Similarly, mass-balance calculations during a
simulation are difficult for lumped-molecule mechanisms, but are simple for
surrogate and lumped-structure mechanisms.
If a mechanism is sufficiently detailed to warrant it, certain individual
HC's may be treated explicitly, while those remaining might still be fitted
into the lumped-species concept. Two possible candidates are ethylene and
toluene. Both are significant contributors to ambient HC mixes; each has been
observed to constitute as much as 10% of the total reactive mix. Each is
sufficiently different in reactivity from related compounds to warrant
separate treatment. Each has an OH reaction rate constant that differs from
similar compounds by a factor of three. Separate treatment of these HC's may
then allow comparisons of model predictions to observed data for the purpose
of source reconciliation, providing a unique check on the combined model
performance from emissions, chemistry, and dispersion.
Related to the speciation problem is the problem of lumped-parameter
estimation. Attempts to design mechanisms around this problem run into
conflicting goals. The greater the degree of lumping, the more difficult and
important the parameter estimation problem becomes. Yet lumping is required
to reduce computational requirements. As we have already seen, the
lumped-molecule mechanisms often present difficulties arising from a conflict
between reactivity and carbon conservation. This tends to happen whenever a
parameter is used for two purposes. Generally, therefore, a parameter should
not have multiple effects; but again, the greater the degree of lumping in the
system, the more difficult this task becomes.
Documentation
As a general rule, a flexible mechanism requires extra documentation to
implement that flexibility. The purposes of documentation are several:
• To describe the theoretical basis for the mechanism, and to
demonstrate its consistency with current knowledge.
• To document its validation for smog chamber experiments.
• To describe the methodologies for its use in applications.
A useful part of such a document would be a set of default parameters for
use in those circumstances wherein the detailed knowledge required to
determine such factors rigorously is lacking.
The ultimate test of a mechanism's documentation is fairly simple. An
inadequately documented mechanism is rarely used by anyone, save its
developers, even if the mechanism has been judged by many reviewers to be
the best available.
REFERENCES
Benson, S.W. 1968. Thermochemical Kinetics, Wiley, New York.
Carter, W.P.L., et al. 1979. Computer modeling of smog chamber data:
Progress in validation of a detailed mechanism for the photooxidation of
propene and n-butane in photochemical smog. Int. J. Chem. Kinet., 11:45.
Demerjian, K.L., and K. L. Schere. 1979. Application of a photochemical box
model for 03 air quality in Houston, Texas. In: Air Pollution Control
Association's Proceedings Ozone/Oxidants: Interactions with the Total
Environment II, Houston, TX, October, pp. 329-352.
Derwent, G., and O. Hov. 1982. Computer modeling studies of the impact of
vehicle exhaust emission controls on photochemical air pollution formation in
the United Kingdom. Environ. Sci. Technol.
Dodge, M.C. 1977. Effect of Selected Parameters on Predictions of a
Photochemical Model. EPA-600/3-77-048, U.S. Environmental Protection Agency,
Research Triangle Park, NC.
Duewer, W.H., M.C. MacCracken, and J.J. Walton. 1978. The Livermore
Regional Air Quality Model, II. Verification and sample application in the
San Francisco Bay area. J. Appl. Meteorol., 17:274-311.
Falls, A.H., and J.H. Seinfeld. 1978. Continued development of a kinetic
mechanism for photochemical smog. Environ. Sci. Technol., 12:1398-1406.
Graedel, T.F., L.A. Farrow, and T.H. Weber. 1975. The influence of aerosols
on the chemistry of the troposphere. Int. J. Chem. Kinet., Symposium No. 1,
pp. 581-594.
Hecht, T.A., J.H. Seinfeld, and M.C. Dodge. 1974. Further development of a
generalized mechanism for photochemical smog. Environ. Sci. Technol.,
8:327-339.
Killus, J.P., and G.Z. Whitten. 1981. A New Carbon-Bond Mechanism of Air
Quality Simulation Modeling. SAI No. 81245, Systems Applications, Inc., San
Rafael, CA.
Lloyd, A.C., F. Lurmann, and D. Godden. 1980. Sensitivity tests, data
requirements, and accuracy of a photochemical dispersion model - ELSTAR
(Environmental Lagrangian Simulator of Transport and Atmospheric Reactions).
In: Proceedings of the American Meteorological Society/Air Pollution Control
Association 2nd Joint Conference on Applications of Air Pollution Meteorology,
American Meteorological Society, Boston, MA.
MacCracken, M.C., and G.D. Sauter. 1975. Development of an Air Pollution
Model for the San Francisco Bay Area. UCRL-51920, Vol. 1, Lawrence Livermore
Laboratory, Livermore, CA.
Whitten, G.Z., H. Hogo, and J.P. Killus. 1980. The carbon-bond mechanism:
A condensed kinetic mechanism for photochemical smog. Environ. Sci. Technol.,
14:690.
Whitten, G.Z., J.P. Killus, and H. Hogo. 1980. Modeling of Simulated
Photochemical Smog with Kinetic Mechanisms. Vol. I. Final Report.
EPA-600/3-80-028a, U.S. Environmental Protection Agency, Research Triangle
Park, NC.
WORKSHOP COMMENTARY
DERWENT: I like your separation of the reactions into what they are actually
doing. I'd like to ask if you have considered putting a further class on the
end of your reactions, which consists of those reactions describing the
behavior of the secondary products.
You've talked about O3 generation. What about discussing those reactions
that lead to O3 loss? These might include deposition, but they might also
include reactions with HO2 radicals or something like that. Also, you might
discuss the processes that lead to the loss of PAN and aldehydes, because they
may be treated quite differently by the various reaction mechanisms you're
covering.
WHITTEN: Yes, that is true, though I didn't go into it. I did mention the
idea of aldehyde balance over a period of time, but I didn't really emphasize
any idea of performance because I think that is Dr. Jeffries' main area, which
he will be talking about today.
A mechanism certainly must be able to follow the generation of O3 and the
O3 concentrations that are seen. That includes having proper source and sink
reactions of O3. Certainly the cycle I showed was the basic, simplistic,
single main area of O3 generation. Reactions of HO2 and OH with O3 are surely
well known and need to be accounted for.
I might mention, also, that a significant side issue to having a proper
aldehyde balance, radical source balance, and sink balance is that when you
end up having the right OH concentrations, you get the right HC decay.
All of those things are important criteria to look for before you look at
how well it fits O3. For instance, you may have an O3 mechanism that, when
you put material in a smog chamber, starts with the same amount of material
and makes the same amount of O3, but the HC's don't decay at all or they decay
too far. When used in the atmosphere with continuous emissions, e.g., in a
2-day situation, you would be doubtful that this mechanism would really
perform over a wide spectrum, even though its performance in that initial test
was really quite good. So, I think that HC decay, precursor decay, and
aldehyde balance are all involved and are connected to maintaining these
radical sources and sinks that are secondary to precursors.
DERWENT: Secondary products decay as well.
WHITTEN: Right. Like PAN, for instance. It forms and decomposes and if
there is NO around it reacts with it. Actually, the radical that is formed
reacts with NO, and then you don't get the PAN back. It is very sensitive to
that.
McRAE: Dr. Whitten, in your talk you emphasized the importance of carbon
balance. Could you give me some indication of how much uncertainty, on a
typical day, would be introduced into the O3 production by using one of these
mechanisms?
WHITTEN: Well, you can put together a hypothetical experiment with a
mechanism; just say put a pure aldehyde—
McRAE: I mean for the typical values of the different mechanisms that you
mentioned.
WHITTEN: I'm not quite sure what you're trying to get at, because it's
difficult to define what's typical, and then say what's typically important.
I think the point I was trying to make is that there should be something
involved in the choice of the mechanism concerning how it deals with this
particular problem of carbon balance.
The molecular-species-type generic mechanisms deal with this through the
parameters that they have. I pointed that out generally.
KILLUS: What becomes important in a single day's reaction, in urban
application, is what the level of aldehydes is that you reach given a certain
input of HC's, that is, what the equilibrium level of aldehydes is that you
reach. That tells you what the reactivity of the mixture is.
In a mechanism which does not lose aldehydes fast enough, for example, or
one that loses them too rapidly, that level becomes very different, even in
the course of a day. From our experience operating the Carbon Bond mechanism
for both the urban mixes and the theoretical type simulations, the usual level
may be perhaps 20 to 25% aldehyde in total carbon balance within a relatively
short period of time. By a short period of time I mean, say, 2:00 or
3:00 p.m. in Los Angeles, that is, by the time you get out near the Riverside
area.
For a mechanism that in some extreme cases does not lose aldehydes at
all, or in the most extreme case, one that actually gains aldehydes during
that time, that 25% level becomes unbounded. In the case of a mechanism that
doesn't lose aldehydes, you obviously still lose HC's while the aldehydes
remain there, and so eventually you get a 100% aldehyde smog mixture and no HC
precursors whatsoever.
That would probably take maybe 3 or 4 days, as near as I can tell from
general HC decay. But, you exceed the 20 to 25% level within the course of a
day.
LLOYD: Dr. Whitten, you mentioned carbon bonds and I think also NOX bonds. I
think the Carbon Bond mechanism has been an excellent approach for giving good
fidelity to the chemistry going on and yet making the package usable in an
urban airshed model. However, as you mentioned, we are not faced with those
same constraints for EKMA. I think we already have some mechanisms which
separate out ethylene, toluene, etc. Thus, I would like to hear your comments
on how far you think we can go with looking at more explicit chemistry which
can be used in EKMA, consistent with what we know about the atmosphere.
Do you have any comments on that?
WHITTEN: We have done some EKMA modeling where we split specific compounds
out of the carbon-bond chemistry and looked at those separately for specific
contrasts.
Recently, we looked at eye irritation from toluene in the Las Vegas area.
The reaction is propelled by chlorine. We had the carbon-bond chemistry to
take care of the regular HC stuff and another separate chlorine package
associated with it.
I think this is done quite regularly, and it's something that I would
recommend, that in future versions of EKMA there be some expanded amount of
chemistry, especially since the chemistry package in EKMA seems to work
rather efficiently and is not very limited by the need for a highly condensed
mechanism. It's not like the airshed model, so I think your point is very
well taken. It has been used, and I support it very strongly.
In terms of what you said about NOX balance, I tried to point out that
the NOX chemistry, by and large, comes from the evaluation studies that have
been made in stratospheric chemistry. There is really only a NOX decay
associated with smog chemistry. There is no NOX formation like you get in
the stratosphere from nitrogen molecules. So, the balance is one of nitrogen
decay, which I did discuss as being an important factor.
The carbon balance is a different story because of the possibility that a
mechanism can create or destroy carbon. You can make it balance within the
parameters. There are no parameters associated with the NOX chemistry. That is
explicit. So, there is not really a problem with nitrogen balance because of
the lack of parameters.
LLOYD: When you were talking about the simulations in Los Angeles
particularly, I heard you say and I heard Dr. Killus say on a couple of
occasions, "what's right." How do you define "what's right" when you're
talking about model simulations? What have you got to compare it with?
WHITTEN: That's certainly a good question. When I say "what's right," it
means I have some 2-day information, both in terms of what characterizes the
age, the aldehydes, and the total mix in the atmosphere. We also have some--
LLOYD: You had the ground-level measurements and didn't have the elevated
levels?
WHITTEN: I believe that's true, although I believe there may have been
some—I'll have to check on this—I believe there were some at elevated
levels.
WHITTEN: Certainly, in smog chambers with certain mixes, one could track
these species fairly well. There are even some fairly long-term smog chamber
data; for example, Dr. Jeffries observed more or less what's going on.
More importantly, we compare the aldehyde buildup to explicit mechanisms;
that is, if you look at an explicit mechanism and observe where the aldehyde
concentrations develop, those should be matched by a lumped mechanism. This
explicit mechanism would have all the chemistry, as best we understand it.
Now, if there are errors in the explicit chemistry, certainly there will
be errors in the chemistry derived from it.
I daresay that such derived chemistry is more right than chemistry which
disagrees with the explicit chemistry as we know it. Certainly, though, in
situations where your aldehydes don't go away, it's wrong. So we do have some
bounds on it. If the mechanism ever exceeds those bounds, we know, of course,
we've made a mistake.
WHITTEN: I might add that the radical balance in mechanisms is currently not
always handled the same.
Ken Demerjian's mechanism obtained some of its radicals from the O3-olefin
reactions, more so than other mechanisms, and therefore it behaves differently
for generated O3. Dr. Jeffries will show that it produces differently shaped
curves. We feel that such differences in the shape of the curves are
associated with response to the aldehyde balances. There is uncertainty in
the yield of radicals from the O3-olefin reactions.
LLOYD: Do you feel that there are sufficient smog chamber data available for
mechanism testing, or would you like to see more data?
WHITTEN: I'm not going to say that more total data are needed, but I will
say —
JEFFRIES: There are never enough.
WHITTEN: Right, there are never enough good data. But there is a tremendous
volume of data. We have computer tapes and tapes of data from about 20 smog
chambers. In fact, we don't have time to look at all the points on almost
1,000 runs.
I think there should be a test set put together. A set of smog chamber
runs should be used to look at any potential candidate kinetic mechanism, if
you're going to use a model like EKMA, that gives us sufficient reproduction
of this set of smog chamber runs.
If the mechanism qualifies at that level, you'd want to use it. If the
mechanism can't reproduce that, without assuming some unbelievable chamber
effects or something like that —
DODGE: What set of data would you recommend?
WHITTEN: Well, I think one could use the current Bureau of Mines data, or the
set of data that was put together at the University of California at
Riverside, where they looked at a range of reactivities of high aromatics and
low aromatics.
I think Dr. Jeffries has put together a set of experiments that
could be used. One could have some automobile-exhaust experiments, some fixed
light-evacuable-chamber-type experiments, and some field experiments. I think
this set should probably include a 2-day experiment like the one that
Dr. Jeffries is working on.
It shouldn't all be in one chamber. It shouldn't all be the same type of
thing; there should be some spectrum involved there.
WHITTEN: I think you run the risk of some bias in a set by just saying that
data from this one chamber, this one set of data, is the only one. I think
you need to look at the spectrum of data.
JEFFRIES: The issue here is not that a set of smog chamber data represents
the truth; it's that a set of conditions for which we have some experimental
information represents a good basis for all the mechanisms being compared
against each other as well as against the smog chamber data. What you find
when you do that is how a mechanism performs relative to all the other
mechanisms you run and also relative to this whole wide range of data.
It's very clear that one chamber is not adequate for that purpose, nor is
the kind of experiment typically run in a smog chamber satisfactory for models
that are going to be used in the ambient environment.
McKEE: Did I understand you correctly? Are you suggesting that to obtain the
data for this kind of modeling technique, emissions inventories should be
compiled in terms of the molecular-weight range of the olefins present and the
aromatics and so on, and that once you do all this you would monitor,
presumably continuously, ethylene or toluene or some one constituent as a
surrogate for this whole mix in the emissions inventory? Is that the
approach you're talking about here, Dr. Whitten?
WHITTEN: The approach that I was talking about is a little bit like that, the
idea being that all of these mechanisms have some condensation involved in
them. You lose sight of how well the HC decay is doing and how well your
overall model is going in terms of emissions.
Carbon monoxide is sometimes used for that, but it doesn't really test
the kinetics very much. If your emissions inventory and meteorology give CO
levels that aren't close to what's being measured, you have a feeling that
something's wrong with your meteorology, your model, or your emissions
inventory. But there is really nothing like that now in HC's. We have no
tests on emissions inventory. We don't test the dispersion quality of a model
or how the chemistry is doing in the atmosphere.
McKEE: Well, two things about this occur to me. First, the data requirements
for compiling that type of emissions inventory appear to me to be several
orders of magnitude above what is affordable. I am told that many companies
in the Houston Ship Channel area now are spending over $100,000 a year for
engineering time just to maintain the basic data that go into periodic
emissions inventories which are measured in total hydrocarbons (THC's) without
any breakdown.
If you have to give this kind of a breakdown, it seems to me that the
costs would be more like $100 million instead of $100,000, or some—
WHITTEN: I didn't mean to imply that it had to be done everywhere with equal
vigor. What I meant was that it might be possible to test some of the models
or mechanisms that are currently available with atmospheric tests.
Some areas of the country have very detailed emissions. We heard about
gas chromatographic (GC) measurements of what's in the atmosphere a few
minutes ago.
The Air Resources Board in California has broken source categories down
into about 200 different specific molecules. And each source category—
McKEE: Do all large refineries and petrochemical plants submit an emissions
inventory broken down in these terms?
WHITTEN: The individual pounds and the individual molecules; very, very
detailed.
McKEE: Are those updated every hour or every day or every time a unit
malfunctions or a compressor's lost, or something like that?
Composition will change radically and O3 formation will react to that
change, we think, on the basis of an hour or two if there is a major
malfunction in one large unit.
This brings me to my second point. Your ethylene monitoring will not
give a surrogate for that whole mixture because if a billion-pound-a-year
ethylene plant loses its product compressor, the ethylene concentration in the
air at your monitoring station 3 to 4 mi downwind might suddenly go to 10 or
20 times what it was before, but the other HC constituents in the atmosphere
may not have changed very much. So, I don't see ethylene or any other one
component being a surrogate for your whole emissions inventory on a short-term
basis.
WHITTEN: I think it wouldn't be on a short-term basis, and I think that if
all of this were to be done and it didn't work, you'd have to come up with
this sort of explanation. However, I think that not doing it, even though the
information in some cases may be available, is perhaps forfeiting an
opportunity to test models in a way that they are not being tested.
McKEE: Theoretically I agree with you, but as a practical matter I don't see
that kind of information being available. The proportion of the molecular
weight range of olefins and aromatics and other constituents is going to
change every time a plant gets in a crude oil supply from a different
supplier. They are scrambling for crude oil all over the world and are
getting all sorts of different crude from time to time. Every tanker that
comes in is different, in some cases.
Secondly, every time there is a malfunction or process upset or a change
in catalyst activity, or something like that, the molecular composition of the
emissions from that unit may change rather drastically over a period of a few
hours or a day or two. I don't see how all of this can be kept catalogued and
updated.
WHITTEN: I'm not suggesting that, by any means. I'm suggesting that it be
used where the information is already available, for instance, in the Los
Angeles area. In Los Angeles they do source categories and break down average
emissions into the number of pounds of each particular kind of molecule that
is coming out.
This all gets blended together over a large area, and a lot of these
fluctuations tend to average out. If you get an average concentration of
ethylene and of toluene, which can be monitored, you can look at a lot of the
things Alan Lloyd suggested where we can see toluene every day and aromatics
coming up at between 25 and 30% every day. Then you can have a model that has
20% toluene in it, treated separately.
I think these fluctuations in the Los Angeles Basin aren't necessarily
seen, and that would be a short-term model.
If you wanted to put together a model that would tell what really happens
to smog in Houston when there's an upset in an ethylene plant, that's a
different thing. You would look at that individual plume and study it and put
an upset in your model. That's different from modeling a whole smog problem
in a city where things are mixed together more and all of these fluctuations
have averaged out.
MEYER: Trying to reconcile some of the practical concerns that have been
expressed with the desirability of monitoring certain species, I'm wondering
if it would be useful to do a chromatographic sum of species and monitor the
diurnal patterns of those kinds of readings. You could then test that against
the mechanism?
WHITTEN: I don't know whether or not it's necessary to do that in every city
over a long period of time, but, I do think it would be helpful to do it in
one or two places. We need to get out and use the models to find out what's
wrong with them.
It would have to be in terms of checking out the models that are in use.
Once you have decided that the model works great, you don't need to worry
about those little details. I don't think you really can say over a long
period of time that every city in the country would have to have this much.
DIMITRIADES: Can we get around the problem Dr. McKee mentioned by doing
ambient analyses, rather than by offering some composition of the emissions?
Perhaps we could get the composition information by analysis of ambient data
in the morning or at some time when—
MEYER: Is it the composition that is that important or is it the decay of the
total mix?
WHITTEN: For a model to work properly, if it's a good model and can adjust to
that, it will show what the composition is, whether it's in emissions
inventory or validated through the monitoring data. But, to see if a chemical
mechanism really works over a long period of time, it has to be tested during
the day when the models are really working in the atmosphere.
I think atmospheric testing of these mechanisms and models is important.
We need to have more tests to give us more confidence that they really are
working and can be used more. This is one piece of data which helps check out
the combination of the model all working together. It's a check on the model.
KILLUS: I saw an emissions inventory from the Houston area just last week. I
believe it was prepared before that. It said that the fraction of aromatic
HC's in the Houston area was 4%.
I've seen also a significant amount of GC data from ambient Houston air.
It indicates that the proportion of toluene in the Houston area is at least
5% and more likely 10%.
Given these two pieces of information, which is a very small amount of
information, one conclusion is that the emissions inventory was incorrect and
should have been redone in some fashion. However, going to every single
emitter and having them prepare hourly estimates of what their fractionation
is is probably not necessary. It certainly is necessary to do it at a greater
level of detail than what was done for the emissions inventory project that I
saw.
McKEE: Whether it's necessary or not, it's impossible.
KILLUS: I don't think it is impossible to do on a yearly or bi-yearly basis
in a general fashion. We'd then have it to compare to GC analyses. They
certainly do it in Los Angeles, and it doesn't seem to have bankrupted the
city. I see no reason why something similar to that couldn't be done in the
Houston area.
MARTINEZ: It is clear that the models are going in the direction of greater
fidelity in reproducing not only the O3 but also the HC's.
At some point, the models and the inventories have to be married. If there is
a total unwillingness to come up with inventories that are appropriate for the
models, then you have to make a choice: junk the models or bite the
bullet. It's being done in California and probably will be done elsewhere,
too.
WALKER: In response to your comment on Houston, the effect of the
micrometeorology on which emissions reach any one point is so large
that unless you had a very well located set of many, many samples, I
would think you would have very little chance of ever checking an emissions
inventory against random samples of the atmosphere. Most of the
samplings I am aware of that have been taken in Houston have been taken at
sites predominantly influenced by automobile and vehicle emissions.
Contrarily, I would suspect the emissions inventory you saw was largely
industrial emissions. It probably did not even have the area sources added
in.
KILLUS: At least one of the GC data sets I saw came from the Houston Ship
Channel itself. I don't believe that that's an area dominated by automobile
emissions influence.
WALKER: And that was 5%—
KILLUS: That area showed, in fact, 20% aromatic contribution.
WALKER: It could have been 2 km downwind from a BTX unit. That certainly
doesn't make that a typical sample of the whole area.
KILLUS: I also know that that particular emissions inventory was prepared by
going into the volatile organic compound (VOC) manual and getting four
individual random-source profiles and applying them to the generic area in
Houston. For example, they used a commercial site and an industrial site, not
even bothering to use a chemical-industrial site. It was quite clear that the
emissions inventory was inappropriate.
WALKER: I agree. But I agree with Dr. McKee also. I don't think you can get
them much better very easily.
CARTER: I have a couple of comments, first related to what they were talking
about. No matter how difficult it is, we need to have some idea of what the
HC composition is, just so we can know how to test the representative chemical
models.
Also, you were discussing the validity of smog chamber data and their
performance. One point I don't think was made strongly enough was that for a
model to be valid, it has to not only fit smog chamber data, but also, its
parts have to be consistent with basic laboratory studies, such as kinetic
studies. If it isn't, how can it have any validity? It may just fit, but
the fit could be partly coincidental.
Our knowledge of the chemistry has come to the point that we no longer
have to rely on engineering approaches, at least as applied to smog chambers.
We know most of the major reactions and at least the magnitude of the major
smog chamber effects. So we really can fit the simpler alkanes, olefins, and
aromatics to the smog chamber data without arbitrary adjusting of parameters.
But, even if there is disagreement about whether adjustment of parameters is
arbitrary, at least the known part of it should be consistent with basic
results.
The one problem I have with some of the discussion, especially of the EPA
model and the chemical mechanism that's in the Ozone Isopleth Plotting Package
(OZIPP) computer programs that I saw, is that a number of the individual rate
constants and mechanisms are just plain wrong, based on our current knowledge
of the chemistry. Some of them have the wrong HO2 plus NO rate constant, the
wrong OH plus NO2 rate constant, or the wrong OH plus propylene rate constant,
and yet they're still being used.
An incorrect mechanism may fit smog chamber data, but so would something
perfectly arbitrary, like using those five parameters you're talking about
with the EKMA models. That would be a more honest and straightforward
approach than using an incorrect mechanism.
Concerning the reactivity of compounds, one other type of reactivity was
being considered, and I'm sure you will agree that some types of compounds act
as radical inhibitors in their mechanisms. In particular, we have evidence
that the larger alkenes have a considerable amount of termination involved in
their mechanisms, and if you have termination suppressing radical levels, that
is opposite to the way aromatics behave. That is another aspect of reactivity that
needs to be considered. As you know, I don't agree with your conclusions
about radical sources, so I'll be discussing that today.
WHITTEN: I certainly agree overwhelmingly with your comments. First of all,
I did a very poor job of emphasizing the importance of what I meant to talk
more about, actually the foundation validation part. I had intended to say
more, I just forgot. Thank you for pointing that out.
The mechanism is built on a strong foundation of pieces which are
individually checked out. We have, for instance, the inorganic chemistry
which has been recommended by a group of eminent scientists working in that
area, and you have the formaldehyde which is basic to absolutely every form of
smog. So, you can do a smog chamber experiment with just formaldehyde, and
test that NOX chemistry and the formaldehyde.
Any modern mechanism should be able to do that; it should have the
formaldehyde chemistry built into it. I think they do. If you have higher
aldehydes and other forms of chemistry or just propylene, it should work for
just propylene.
On the other hand, there is another problem. Perhaps some generic
mechanisms actually perform better as you go to a more complex mixture. I
think that when looking at a pure compound, you may have to adjust the
mechanism to account for the fact that it's a pure compound and not a mixture.
Some mechanisms have been developed really to deal with a mixture effect.
Dr. Jeffries has studied some of these effects in his recent smog chamber
experiments.
As you blend these mixtures together, you get some strange effects. When
things start to disappear there's a leveling effect.
CARTER: I understand the curves. For EKMA models, at least, you can get the
detail that you want. Wouldn't it be better to start with a more detailed
model instead of adjusting the mechanism or adjusting the representation of
the mixture using six or eight different HC's and aldehydes? If you know what
the mixture is in the ambient air, especially—
WHITTEN: As Dr. Dimitriades said yesterday, there are things that have a
positive and a negative aspect. When you use a mechanism that accounts for a
wider spectrum of compounds than propylene and butane, instead of an explicit
mechanism for propylene and butane, you may be sacrificing the carbon-balance
effect that this mechanism has. It's a nice clean mechanism for propylene and
butane.
You put your problem in a different place; the emissions inventory or the
reactivities. But, once you've decided that a certain ratio of propylene and
butane actually represents how a mixture reacts in the atmosphere, your model
now is carbon conservative. Your model, within itself, is not going to have
problems. You don't have to worry about it.
JEFFRIES: Are you suggesting a mechanism with 20 or 30 explicit pieces?
CARTER: I don't think it's necessary to have that many, because it's—
JEFFRIES: Well, 5, 10?
CARTER: Actually, I was able to fit the surrogate smog chamber data with just
three: propene, butane, and formaldehyde. Perhaps for something with more
aromatics I would have to add an aromatic and different classes of things, but
there aren't 30 different ways that things react.
JEFFRIES: How do they differ from a regular lumped mechanism with, say, two
different kinds of olefins, two different kinds of aromatics, or three
different kinds of aldehydes?
CARTER: Well, if the details are chemically reasonable and it doesn't have
reactions that have no basis in reality, and the mechanism is sufficiently
flexible—
DIMITRIADES: The comment I want to make is that I think I have an answer to
your remarks, Dr. Carter. The EKMA mechanism uses all kinds of inaccurate
data. There is a good reason for that, but it is involved and needs to be
explained. I would prefer to raise this question in the afternoon general discussion when we
can allow sufficient time. But, for the time being, I would like to ask Dr.
Whitten a question.
You suggested two methods for validating a mechanism; one is based on
explicit chemistry and the other on how well the mechanism fits the
smog chamber data. You didn't say anything about a method based on how well
the mechanism fits ambient data. Am I to take it to mean you don't think
much of a method based on ambient data?
WHITTEN: I touched on that issue by talking in our discussion about having
toluene broken out. I'm in favor of it, but once you get out into the
atmosphere, you have the problems of emissions and dispersion. You need some
way of knowing that emissions and dispersion are okay, so that you can say if
the mechanism works, the model works, and it's the chemistry that makes it
work. Or if it doesn't work, it's the chemistry. I think at the moment there
is uncertainty.
If you don't have the correct meteorology in the model, the correct
emissions, then whether the mechanism works or not, you don't really know. If
the model seems to work, it could be a combination of compensating errors, and
if it doesn't, it could be because — that's a problem you don't have in the
environment of the smog chamber as long as you have your impedance low enough.
ESCHENROEDER: But you get yourself into a logical trap, because you're going
to use the model in that messy environment, ultimately, and take it seriously.
So at some point I agree with Dr. Dimitriades. You have to face up to these
imperfections that you don't have in the laboratory-controlled study.
DIMITRIADES: When you deal with ambient data, you have meteorological factors
and chemical factors all compounded together. Can you delineate all those
effects so that you will be able to concentrate on, say, validating this one
aspect of the model, the chemistry?
Is this feasible? Everybody agrees it's desirable. But can we do it?
Is it possible to use atmospheric data for the purpose of validating the
chemical mechanism, that is, isolating just that aspect of the model?
WHITTEN: I think we're getting pretty close in that we have some pretty
detailed emissions inventories in Los Angeles; we have some detailed
compositional results for Los Angeles and we can follow the composition, the
percentage of aldehydes, and some specific compounds throughout the day.
There may very well be a set of LARPP data, or a set of Los Angeles data
for which you could say, this is part of the validation test, this and the
smog chamber data. Then you ought to be able to test this trajectory that has
been worked over for hours and hours and probably be satisfied with the
emissions inventory and the meteorology.
If the chemistry worked, you should get this result. I think we're
pretty close to this.
WALKER: I have a couple of questions I want to direct to Dr. Whitten on a
slightly different subject. I was pleased at your comment that the
O3-generation process appears to get more efficient as the reaction mixture
dilutes. I've held this view for a long time.
I'd like to ask your thoughts on the hypothesis that really high levels
of 03 will not occur unless the mixture is diluted considerably.
Also, I'd like to ask, are there reactions in these various mechanisms
that would lead to this behavior? Would
you anticipate from the chemistry embodied in these models that you would get
more efficient 03 production as the mixture was diluted?
WHITTEN: One of the things that goes on in the efficiency effect is that
radical-radical reactions go as the square of the radical concentration.
As the radical concentration goes down, these reactions drop out because
their rates depend on that square.

However, they don't seem to be important over a wide range of conditions. A
few years ago they were felt to be much more important. Another example is the
formation of nitrates. The OH + NO2 reaction, and reactions like it, slow down
as the NOX go down to lower and lower concentrations. That reaction is a major
radical sink in the chemistry of these systems.
Thus, you go to low NOX concentrations, and the radicals don't disappear.
The steady-state concentration of radicals can still remain very high, so you
have a lot of radicals to work with in relationship to the amount of
pollutants there are. The cycle rate around can still be very high, so you
get very high efficiency.
One of the ways you can characterize how much O3 you make is to keep this
NO-to-NO2 conversion going until you run out of NOX. The speed at which the
NOX runs out is proportional to the amount of radicals that are around.

The mechanism then says how much O3 you can make at this NO-to-NO2
conversion rate before you run out of NOX.
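To make the square-law point concrete, here is a minimal numerical sketch in
Python. The rate constants are purely illustrative placeholders, not values
from this discussion; the comparison is between a loss channel that is second
order in radicals and one that is first order in radicals:

    # Compare a radical-radical loss channel with a radical-NO2 loss channel
    # as the radical pool shrinks. Rate constants are illustrative only.
    k_rr = 1.0e4   # radical + radical (e.g., HO2 + HO2), ppm^-1 min^-1
    k_rn = 1.6e4   # radical + NO2    (e.g., OH + NO2),   ppm^-1 min^-1
    no2 = 0.05     # ppm, held fixed for the comparison

    for radicals in (1e-3, 1e-4, 1e-5):          # ppm
        rate_rr = k_rr * radicals ** 2           # second order in radicals
        rate_rn = k_rn * radicals * no2          # first order in radicals
        print(f"[R] = {radicals:.0e} ppm: "
              f"R+R {rate_rr:.1e}, R+NO2 {rate_rn:.1e}, "
              f"ratio {rate_rr / rate_rn:.4f}")

The ratio of the two loss rates falls in proportion to the radical
concentration itself, which is one way of seeing why radical-radical
termination drops out at low concentrations while the NOx-related sinks and
the conversion cycle persist.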
The current mechanism in the EKMA model has lots of rate constants that
could be updated. The systems approach, if you want to use that word, could
also be used to greatly improve the performance of EKMA by one little
adjustment in the ratio of the HO2-to-NO conversion to the OH + NO2 reaction;
in other words, this conversion rate to the loss. If you make that
adjustment, it starts performing like a lot of other modern mechanisms.
Now, that's the only adjustment we needed to close the major gap in
performance. We made it equal of the modern ratio without changing all those
other rate constants.
If you change one rate constant indiscriminately and not the others, you
upset the apple cart. You can change everything, or maybe just this one
ratio.
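A rough way to see why that single ratio matters so much, sketched with
hypothetical rate constants and concentrations (none of these numbers are the
actual mechanism values): the number of NO-to-NO2 conversions each radical
accomplishes before it is lost scales directly with the conversion-to-loss
ratio.

    # Chain length per radical ~ (HO2+NO conversion rate) / (OH+NO2 loss rate).
    # All numbers are hypothetical, for illustration only.
    def conversions_per_radical(k_conv, k_loss, no, no2):
        """Approximate NO-to-NO2 conversions per radical before termination."""
        return (k_conv * no) / (k_loss * no2)

    k_conv, k_loss = 1.2e4, 1.6e4                  # ppm^-1 min^-1
    base = conversions_per_radical(k_conv, k_loss, no=0.10, no2=0.05)
    bumped = conversions_per_radical(1.3 * k_conv, k_loss, no=0.10, no2=0.05)
    print(f"base ratio: {base:.2f} conversions per radical")
    print(f"ratio +30%: {bumped:.2f} conversions per radical")

Raising the conversion-to-loss ratio by 30% raises the conversions per
radical, and hence the O3 formed before the NOx runs out, by the same 30%;
that is the sense in which one adjustment of this ratio can shift a
mechanism's overall performance.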
JEFFRIES: But, Dr. Whitten, when you change that ratio, you increase the
reactivity of the system. What you have to do is compensate for that
increased reactivity by readjusting the propylene-butane split so that now
those factors fit the smog chamber data it was used on before. I suspect in
some of the studies you've done with the Carbon Bond mechanism, the numbers
are like 10% and 90% instead of 25% and 75% once you change that rate
constant.
WHITTEN: Either that or as Dr. Dodge has indicated in the original
smog chamber modeling work of the Bureau of Mines, there were some assumptions
as to pollutants coming off the walls, the chamber effects, and—
JEFFRIES: No, that's not significant compared to the O3. You have to get the
ppm of the material; for example, instead of making 0.3 ppm of O3, you make
0.5 ppm of O3. That's what the Carbon Bond mechanism does with 25% and 75%.
If you make Carbon Bond 10% and 90%, it's much more. To match the Dodge
mechanism as you change that—
WHITTEN: I was addressing a point made by Gerald Gipson that if there were a
change in the mechanism so that it looked more like the Carbon Bond mechanism,
it wouldn't fit the Bureau of Mines chamber anymore.
JEFFRIES: The 25%-75%, it might not.
WHITTEN: That's true.
KELLER: What was your answer to his question about absolute dilution? Do you
actually make more O3 by diluting the emissions, or does the efficiency go up
while the absolute amount of O3 doesn't?
WHITTEN: Right. The answer to that is seen in the atmosphere. First of all,
the Appendix J curve originally showed that quite clearly. If you take the
most modern mechanisms now and look at the 03 generated from HC along a
potential maximum, it looks like the Appendix J curve.
What happens at the higher concentrations is that you have a lot of NOX
around and the radicals are held down. The efficiency of converting NO to NO2
before you run out of NOX goes down because there is so much NOX present that
the radicals in proportion are very low in concentration compared to the
pollutants. So the cycle time is relatively slow compared to what it is at
the lower concentrations.
KELLER: But dilution shouldn't change that. The HC:NOX ratio is going to
affect whether the radicals terminate, react with NO2, propagate, or react
with the HC. But dilution, just dilution, per se, shouldn't affect that
composition.
WHITTEN: I guess what Harry Walker said was that possibly dilution was
necessary to generate higher levels of 03.
WALKER: That is a phenomenon everybody is aware of, that the highest levels
of 03 are always 15 to 20 mi out and not at the point where the most
pollutants are.
I know this is part of the analysis phenomenon, but I wonder to what
extent this dilution factor is involved in that.
WHITTEN: Well, the effect of dilution has been shown in smog chamber
experiments, and also with the mechanisms and models, to be less than linear.
Diluting a smog mixture by a factor of two doesn't cut down on the amount of
03 by a factor of two. So, in a sense, dilution makes it more efficient, or
in another sense, it helps.
Certainly, however, you can conduct a smog chamber experiment with a lot
of undiluted materials and produce plenty of O3. So, I don't think dilution
is strictly necessary.
WALKER: Dr. Whitten, I have a second question that concerns the initiation
process. The radical reactions you gave were dependent either on HONO,
aldehydes, or 03.
Now, if you don't have an aged air mass, you have a relatively one-pass
air situation. In our analyses of Houston air, we certainly never found many
aldehydes. I wouldn't think there'd be much HONO to go along with it, as far
as aged air goes. That's been much of the reason I think of background O3 as
being a factor in the initiation process.
WHITTEN: I didn't mention the O3-olefin reaction, which is possible.
WALKER: Yes. Well, it was mentioned in your list. But, the key point is the
dramatic initiation of episodes we have, much more so in Houston than in Los
Angeles, and what starts the process?
I realize higher concentrations can start the process, but is there
anything in the chemistry built into these models by which increased
concentrations per se would become a source of radicals to begin the whole
thing?
WHITTEN: There are inklings of something fishy going on in that area that we
don't understand.
Harvey Jeffries has done some experiments where he spiked the mixture
with 03 or titrated the NO with 03. He then compared the results with those
obtained by simply putting NO and NO2 in the smog chamber at the same ratio.
There was a certain lack of reproducibility between those two things.
The chemistry, as we know it, is very simple and straightforward. It
says there is really no effect between those two processes whether you have
N02 naturally or whether the NO2 comes from NO reacting with 03. Dr. Jeffries
is considering looking at this again.
If you took most of the mechanisms and models we currently have and
spiked them with 03, there is really nothing at the moment that says that it
is going to suddenly take off, other than the effect of the NO2 ratio. It is
not written off but it's not obvious what it is.
JEFFRIES: I would like to return to the point that Dr. Dimitriades was making
about the testing of the mechanisms in the atmosphere. What we are dealing
with here, and what has become pretty clear procedure over the last couple of
years, is in effect a hierarchy of tests. I think that if one were
constructing a mechanism today, as opposed to 1976 or 1979, it would be like
the lawyer with himself for a client if he didn't use the data available on
chemical kinetics in constructing that mechanism to begin with.
There is no reason to believe that a mechanism that has known kinetic
deficiencies is going to be reliable. Even if it does fit the smog chamber
data, it is suspect from the very beginning. So, you begin by constructing a
mechanism that's based on known information at the time.
Next you'd want to see whether or not that mechanism can reproduce the
experiments in which only chemistry is the major driving factor. Often you
separate out all the temperature and light effects and everything else by
going through a chamber in which those things aren't changing. Once you can
do that, you can start looking at behavior in systems where light,
temperature, and humidity are changing and so forth.
Then you begin to find things wrong with the data and the model, and you
fix them, and do the whole thing over again, which is why there is always a
need for more smog chamber data.
In the end, you test the model in the ambient atmosphere, and it either
works or it doesn't. If it doesn't work, you have lots of places to go to
look for the answer. The answer may be that the chemistry in the smog chamber
is not adequate for the kinds of conditions you're seeing in the ambient
environment. If so, you go back to the smog chamber and run those experiments
over again under conditions that are more appropriate for the behavior that
you're looking at.
So, it's a clear hierarchy of applications, testing, recycling, etc.,
etc. There is no one suitable test I can give anyone to determine whether the
model's mechanism's performance in one situation means it's okay. It is a
complicated, long process that involves constant incorporation of new
information.
I think for that reason a lot of people who aren't in the modeling
community get really upset with modelers because they're constantly changing
things. That's the problem with the regulation end of things. How does the
regulator say this model is satisfactory and this model is not?
WHITTEN: I think everything you say is essentially true, but I still think
that there can be some standard tests set up that say if you can't do this,
then you can't play the game. You're going to find some people who say we
can't do that but that's not a good test. That is an adequate response, too.
If you have a good legitimate reason for saying this particular set of
smog chamber data is really bad and you can prove it, because you've got this
other set that's better, that's also still possible.
JEFFRIES: Fine, I agree with that.
CARTER: I'd like to add a third comment. I don't think it is a good idea for
the government to specify the test we have to follow. There may be
disagreement. The people who work on them are the best judges of what types
of tests should be used.
The mechanism I developed reflects the knowledge of the chemistry in
1979. I have also been doing calculations on models, essentially the model of
Atkinson and Lloyd, which represents 1980 and 1981 chemistry. At least over
that period, I found very little change, essentially a 10% difference in the
O3; there wasn't any great change. It looks like the mechanisms are beginning
to converge on the proper chemistry.
Finally, I think I remember your mentioning earlier that there is some
evidence now that there is initial HONO emitted to some extent in the
atmosphere, so there might be some initiation of HONO either being emitted or
being formed by some unknown dark reaction. I don't think it's the only thing
that starts it off, but I think it may possibly contribute.
WHITTEN: Yes. We've added HONO to our models. The airshed model has had it
in it for several months, since we heard about it.
CARTER: Yes, you did.
WHITTEN: It makes some difference, especially on the first day of a 2-day
simulation. But on the second day, you don't see much of an effect because
the aldehydes tend to be higher on the second day; they build up.
The effect is in the first hour of the day when the sun comes up, but
that carries over throughout the day.
KILLUS: I would have to say that I am extremely pessimistic about the notion
of validating chemical mechanisms in the atmosphere.
If we take the various chemical mechanisms available right now and run
them under similar conditions and compare the results, we get differences, 10%
or 20%. In fact, a difference as large as 50% usually means that you've
mispunched a card somewhere and might have put in the wrong rate constant.
If, on the other hand, you look at comparisons of models to the
atmosphere and for that matter, if you look at just general variability in the
atmosphere, what do you see? You see that your HC input data are unknown to
factors of 2 or 3. You see that a shift in wind direction by 10 to 20% can
give you orders-of-magnitude difference in your 03 concentration. You see
differences in mixing depths that cannot be estimated from currently known
meteorology or methodology. Those areas are much more uncertain than the
present chemistry; their uncertainties are far in excess of those in the
chemistry.
I quite agree with Dr. Jeffries that there may very well be differences
in smog chamber phenomena and differences in the chemistry in the atmosphere.
There are certainly differences in the kind of background phenomena that
you see. But, those effects are still extremely small compared to the
enormous meteorological uncertainty and the enormous uncertainty in emissions
inventories and speciation in general, e.g., the carryover phenomenon.
I can't imagine anyone comparing two mechanisms and saying, because this
mechanism seemed to work better in St. Louis or Denver or Sacramento or
Houston or Los Angeles, this mechanism is better and is giving more realistic
chemistry than the other mechanism. It's almost certainly going to be some
other variable in the computation.
-------
9. EFFECTS OF CHEMISTRY AND METEOROLOGY ON OZONE-CONTROL
CALCULATIONS USING SIMPLE TRAJECTORY MODELS AND THE
EMPIRICAL KINETIC MODELING APPROACH PROCEDURE
H.E. Jeffries
K.J. Sexton
C.N. Salmi
[Editor's Note: The text of this presentation has been published previously
and is available as a U.S. Environmental Protection Agency report:
Jeffries, H.E., K.J. Sexton, and C.N. Salmi. 1981. Effects of
Chemistry and Meteorology on Ozone-Control Calculations Using Simple
Trajectory Models and the EKMA procedure. EPA-450/4-81-034, U.S.
Environmental Protection Agency, Research Triangle Park, NC (November).
WORKSHOP COMMENTARY
ESCHENROEDER: I have a couple of comments, Dr. Jeffries, and then a question.
First of all, I question why you insist on exact prediction and how this
affects the percent reduction. It's my understanding that the precursor
concentrations are only used to get the ratio line, not a starting point, on
the EKMA diagram.
It seems to me there is only one way to use it—lay the ratio line
through an observed O3 level and proceed from there, accepting that the air
mass that was at the precursor-measurement point doesn't exactly represent
the air mass in which that O3 level was observed.
I'm not really concerned with that. In fact, I might average precursor
concentrations over several days to get the slope. So I really question
whether that is a concern about EKMA.
Secondly, I think your stress on changes in O3 relative to changes in
precursors, rather than on absolute O3 predictions from a model, is one of
the most important messages of your work. Someone said this morning that all the
models are within 20% of predicting 03. I felt like saying, well, I guess all
the chemists should go on a vacation for a couple of years. However, when you
cast it in the light of Delta 03 per Delta precursor, the situation isn't
nearly so rosy.
I would like to preface my last question with the statement that when I
read your paper on the way here, I almost wished I hadn't come because it
looked like such an insuperable problem.
In view of what you know, how would you advise a control agency with
limited data to proceed? You have, very correctly, taken scenarios available
from the literature, any of which could conceivably be legitimately used by
control agencies in justifying a stiff decision. How should we proceed?
JEFFRIES: The emphasis in Level II is on absolute prediction. That was the
fundamental basis for the funding of our work, the issue of absolute
prediction.
The idea at Level II is to achieve an absolute prediction. That is one
of the primary differences between an isopleth diagram generated at Level III
and an isopleth diagram generated at Level II. In Level II you should be
putting in the best description you have of the chemistry, the best
description you have of the meteorology, the best description you have of the
trajectory, and all of these other factors.
ESCHENROEDER: So you tell the agency to strive for Level II.
JEFFRIES: No. The Agency decided in 1979 that there were four levels and the
second level was the one we were testing. That was a probable recommendation
to the states for calculating SIP's. That level is an absolute prediction
level.
So part of this was to see whether if you go to Level II the method will
give you good answers. Are the answers dependent upon the chemistry and the
meteorology? My answer to these questions is, absolutely.
If you fall back on Level III, that is, if you use the method in a
relative sense, you fall into the trap of the correlation between the
isopleth spacing and the actual HC number obtained where the HC:NOX ratio
line intercepts the O3. The further out that is, the lower the control
requirement. The closer to the origin it is, the higher the control
requirement.
ESCHENROEDER: I see what you mean.
JEFFRIES: Thus, one has difficulty using it in an absolute sense because
you've got to specify the meteorology in the physical modeling conditions
better, and one has difficulty using it in the relative sense because the
degree of control is proportional to absolute prediction.
The better you predict, the closer the control. Still, however,
different mechanisms when predicting correctly give different absolute HC,
Delta 03, and Delta HC values.
If you abandon EKMA as a correction method when your model doesn't give
you the right answer and go to absolute models with absolute predictions, you
are still stuck with the same problem: how does the model say Delta O3 per
Delta HC goes?
Also, different models give you different Delta 03 per Delta HC.
Furthermore, O3 aloft is a dominant factor. It is the fifth highest day of
the year that is the real day for control in St. Louis, not the highest day.
That means the control on that day is the function of how the mechanism treats
that entrained 03. The mechanisms weren't developed to deal with entrained
O3.
We observed that the answer lies somewhere in between 1:1 and 2:1, if you
believe the chemistry. So one of the things to do is to take your control
diagram and draw a line that's 1:1, and use that as one bound, and draw
another line on there that says the 03 changes twice as fast as the HC and use
that as the other bound. Then you can run a lot of models and see where they
cluster, and then make your choices from that range of answers.
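One crude numerical reading of that bounding procedure, using hypothetical
design values (only the 1:1 and 2:1 slopes come from the discussion above;
everything else is illustrative):

    # Bound the HC control requirement by assuming the relative O3 change is
    # s times the relative HC change, with s between 1 (1:1) and 2 (2:1).
    def hc_reduction_needed(o3_observed, o3_target, slope):
        """Fractional HC reduction implied by a proportional O3 response."""
        o3_cut = 1.0 - o3_target / o3_observed    # required fractional O3 change
        return o3_cut / slope                     # implied fractional HC change

    o3_obs, o3_std = 0.24, 0.12                   # ppm, hypothetical values
    for s in (1.0, 2.0):
        cut = hc_reduction_needed(o3_obs, o3_std, s)
        print(f"slope {s:.0f}:1 -> HC reduction {cut:.0%}")

For this example, the two bounding lines bracket the requirement between 25%
(2:1) and 50% (1:1) HC control, and the cluster of model results would then
be judged within that range.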
There is a fair amount of uncertainty in where we're going.
ESCHENROEDER: It sounds like you're saying we don't need more research, we
already have almost too much research, because there are so many different—
JEFFRIES: No. We clearly need models that can model more accurately, that
have data bases to test the models on all the factors that are important in
the atmosphere, and that haven't been used in the smog chamber.
It is possible to put a model together in a smog chamber that can't model
the atmosphere at all, because the factors that control 03 in the smog chamber
on that side of the surface aren't the factors that control 03 in the ambient
environment.
Dilution and emissions are the dominant factors in the ambient
environment. When the lights are turned off, dilution is also the dominant
factor in the smog chamber. Thus, radical sources in the smog chamber that
can change the speed of the radical initiation process are going to control
the smog chamber.
We haven't had the data bases to test the models. The models may be
okay. We don't know. What we do know is that the models are different.
ESCHENROEDER: That is what disturbs me. You have no data base now, you are
saying, on which to choose one or the other model.
JEFFRIES: No, what we have to do is go back to what I said before. You start
out by going through the model and validating all the gas-phase fundamental
kinetics we know.
This is Dr. Carter's point. You go back to the beginning, and you
realize that all the numbers used in the model when it was first put together
aren't the right numbers here. So you have to reconstruct those numbers.
Then you model certain sets of data that you can say, yes, for these
conditions, it works. Then you continue to build that up. What we don't have
are those kinds of conditions to exercise the models against.
Now, even when you get through doing all of that, there may be another
problem. Radical sources in the atmosphere may be totally different from
radical sources in the smog chamber. That's the whole question of the HONO
measurements.
If the model can characterize a smog chamber well under dynamic
conditions of dilution, injection, 03 aloft, and so forth, and you apply that
model to the atmosphere with the best meteorology data you have, and it still
doesn't work, there is something else going on in the atmosphere that you
don't have going on in the smog chamber.
LLOYD: Dr. Jeffries, do you see anything inherent in the chemical mechanisms
which would lead you to believe that when you inject 03, you entrain 03 aloft?
JEFFRIES: What happened here was simply that the mechanisms had different
representations. Some included processes, some didn't. There were different
deletions, different distortions, and different generalizations used to
construct the mechanisms.
If all the mechanisms had been the same, they may have all treated it the
same. No, I don't think there is anything funny about what's going on. I
think we haven't had to deal with that.
LLOYD: What do you mean by treating it differently? We've got 03 reacting
with olefins, 03 reacting with NO. How—
JEFFRIES: Some researchers photolyze the O3, some researchers don't. Some
have it reacting with a lot of things, and some don't.
LLOYD: What do you mean, a lot of things, besides the NO and the 03?
JEFFRIES: The answer to that is get out the mechanisms and look at all the
03 reactions that are in them. All the loss mechanisms for 03, the loss
processes for 03, are not the same in four different mechanisms that we looked
at. That's the issue.
For example, for the question of deposition, all those things are not
there. Whether a model includes or doesn't include deposition will influence
how much of this entrained 03 gets reactive in different places.
It is just a matter of different people making different choices in these
three areas when they put their models together. You can't point to one and
say, that's wrong. It may have been that for the conditions they tested, the
model for what they had was quite adequate. But, that is not the case here.
The smog chamber they tested it on didn't have O3 injected into it throughout
the whole day.
It might turn out that if you had a data base for a smog chamber in which
we injected 03 all day, and you ran a model against it, it might work. Some
mechanisms didn't include processes to account for all of the possible 03
behavior when 03 was going to be entrained into the parcel; others put
different things in that did cause differences.
What I can say is when we ran the models for the same conditions, they
gave different answers. That is because different choices were made in
constructing the model. I'm not saying one of those choices was good or bad.
Different choices were made by different reasonable people.
McRAE: I have two quick questions. Would you care to put your last slide
back on and take a crack at what you think might be the criterion?
One of the things we've found in our work is that simply comparing the
absolute predictions at any given time leads to false interpretations. For
example, if you have a 1-h peak shift, for an extraordinarily well-defined
peak, just that 1-h shift can give you rapid changes in scatter plots that can
give you very meaningless answers.
JEFFRIES: That's why I got away from the scatter plots as fast as I could,
and why the report is full of actual plots of data versus data. The problem
is you get a very voluminous quantity and people like Ned look at it and say,
what does all this mean? Scatter plots are one way to do it, but they have to
have all sorts of hooks attached to them that say, read these with caution.
Many people got upset over the scatter plots for the Bureau of Mines, and
we received a lot of comments on that. As a consequence, I revised a lot of
the text. I'm still not happy with it. But, I agree with you. One of the
reasons a lot of this didn't come out before was that the kinds of data plots
we were doing hadn't been done in some of the previous studies where all they
did was look at the peak 03. It looks pretty good on the basis of peak 03.
When you actually look at the degree of predictions and look at the
precursor predictions, and so forth, they don't look very good. So, I agree
with you.
McRAE: My second question is, have you tried any multiday runs to help
characterize the 03? One of the major conclusions you made was that the
control requirements are critically dependent upon the level of entrained 03.
That is a pretty difficult parameter to specify for the agencies because they
don't have access to that kind of information.
What happens if you do the multiday runs on some of these trajectory
models? Have you looked at the amount of 03 that's left aloft at the end of
those days?
JEFFRIES: No, the model doesn't have the structure to do that. It really
doesn't. The mixing height is always rising and that's it. It may stop and
not go up, but nothing ever happens after that. So they don't even have the
ability to handle the collapse of the mixing height and the entrainment. It's a
respecification problem.
So, if you ask where the 0.12 ppm 03 came from that we entrained on Day
159, well, if you believe all of the other data from St. Louis, it came from
250 mi upwind.
WHITTEN: The OZIPP model has the ability to come back in.
JEFFRIES: No, we weren't using OZIPP. We didn't look at 2-day O3; that is the
short answer to your question.
DERWENT: Can you put that slide back up? I am interested in whether you might
have a criterion which tells you whether the model representation of Delta O3
over Delta HC is, in fact, what happens in the atmosphere.
Could you answer a question? Do you think if the models were made more
complex or more complicated, it would be easier to find a criterion which
would tell you whether that answer was more or less realistic?
Because the models are simplified, does that mean it's more difficult to
find a criterion? If the models were better representations of atmospheric
processes, would that criterion be easier to find?
JEFFRIES: I think the models are going to become more complicated, though one
or two are not. I also think what we'll end up doing is having a range of
complicated models that will allow us to capture the process. Then we're
going to model the model, that is, get a simple model to feed back into the
very complex airshed models.
My intuition tells me that the models are going to get more complicated,
more complicated in the sense that they have a better chance of representing
various pieces and are likely to do a better job. Again, the thing a model
must be able to do is describe reality in some sense, and what we're missing
is, what's reality. We don't know what the 03 is as a function of aromatic
changes from 25% to 45%, da da ta da.
I suspect that to do all these things the models are going to become
complicated. Because, we're going to stick all these things into the model to
generate the behaviors we want to observe.
Look at 03 aloft. Most people could leave out some of the 03 chemistry
that is included in many models because in an urban-type smog chamber
simulation, that is not important. But, for entrainment aloft and low
concentrations in the ambient air, it is important. So, people take it out
because it's not important in that condition. Then they go and apply the
model some place where it is important. It's one of those things where, as
Dr. Whitten said before, you go through and trim. You write all this out and
you run it against your data base, and you say, well, that doesn't make any
difference, I'll take it out. You keep doing this. But, when you apply it
some place where one of those things may make a difference, you have to be
very careful. So, yes, the models are going to get more complicated.
MEYER: I would like to comment a little on that last point. One of the
things we have been doing is attempting to compare the control predictions one
gets with the simple models versus those one gets with the urban airshed
model, using the Carbon Bond II mechanism, and so on.
We've looked at several days, and, generally speaking, the ranges one
gets, the discrepancies one gets between the simple models and the airshed
control predictions, are nowhere near as large as the ranges which Dr.
Jeffries was talking about in some of these calculations. Thus, there may be,
indeed, some promise in doing a number of simulations with the complex models
and perhaps using those as a criterion for how well the simple models perform.
The one major fallacy with that is, you're not entirely sure that the
complex models are predicting what's going to happen with the strategies
correctly, either.
ESCHENROEDER: Also, those comparisons contain implicitly the same sets of
assumptions about mechanism, mixing height, and all that, and they agree very
well, as Dr. Whitten showed yesterday. When you start going to other more
reasonable sets of assumptions, you get much larger disparities than those
comparisons indicate. Because that is a model versus model comparison, and
they come from the same data base.
-------
10. EFFECT OF RADICAL INITIATORS ON EMPIRICAL KINETIC
MODELING APPROACH PREDICTIONS AND VALIDATION
William P.L. Carter
Statewide Air Pollution Research Center
University of California
Riverside, California 92521
ABSTRACT
This paper discusses the impact of uncertainties concerning radical
initiators in smog chambers and urban atmospheres on Empirical Kinetic
Modeling Approach (EKMA) predictions. It presents evidence that both initial
nitrous acid and continuous nitrogen dioxide-dependent radical sources are
important in smog chambers/ and must be taken into account when validating
kinetic mechanisms for use in EKMA calculations. Presently, the extent to
which the nitrogen dioxide-dependent chamber radical source operates in the
open atmosphere is unknown. The paper examines the effect of this uncertainty
on calculated ozone isopleths and predicted EKMA control strategies. Since
ambient nitrous acid recently has been observed in the early morning at sites
in Los Angeles, the paper also examines the effect of varying initial nitrous
acid concentrations on EKMA simulations. In general, increasing radical
initiation results in higher predicted ozone yields at high oxides of
nitrogen:hydrocarbon ratios, while not significantly affecting the predicted
ozone when the oxides of nitrogen:hydrocarbon ratio is low. This increases
the degree of hydrocarbon control predicted in EKMA analyses to be required to
achieve specified ozone reductions.
INTRODUCTION
The Empirical Kinetic Modeling Approach (EKMA) can be a useful technique
for obtaining quantitative ozone (03)-precursor relationships, provided that
its inherent limitations and the ranges of uncertainty in its predictions are
adequately understood. Many of the uncertainties and limitations of this
technique have been discussed in detail elsewhere (EPA, 1977; Dimitriades,
1977; Dodge, 1977; Bilger, 1978; Carter et al., 1982; Hov and Derwent, In
press; Whitten, In press; Jeffries, 1981); in this paper we restrict
discussion to uncertainties concerning radical initiators and the effect of
these uncertainties on EKMA predictions. Recent sensitivity calculations
(Carter et al., 1981; Hov and Derwent, In press; Jeffries, 1981) show that the
results of EKMA calculations differ significantly when one uses different
kinetic mechanisms or different representations of the composition of the
reactive organics, even if one uses the "relative" isopleth analysis technique
(EPA, 1977; Dimitriades, 1977). However, most of these calculations compare
predictions of entirely different kinetic mechanisms or representations of
reactive organics. Thus, the effects of uncertainties in specific mechanistic
features or specific reactivity characteristics of the initial or emitted
species, such as those related to radical initiation, could not be determined.
The major areas of uncertainty related to radical initiation that this
paper discusses concern the "chamber" radical source (Carter et al., 1979;
Carter et al., 1981; Carter et al., In press) and the effects of a known
photoinitiator, nitrous acid (HONO), on EKMA predictions. Many investigators
have long realized that model calculations significantly underpredict rates of
oxides of nitrogen (NOX) oxidation and hydrocarbon (HC) consumption in smog
chamber experiments if the model does not provide for radical sources beyond
those provided for by the known homogeneous reactions of measured (or upper-
limit) levels of known photoinitiators (Carter et al., 1979; Hendry et al.,
1978; Falls and Seinfeld, 1978; Whitten et al., 1979; Whitten et al., 1980).
Modelers disagree about how to account for this discrepancy. Some assume that
the discrepancy results from the presence of initial HONO (formed
heterogeneously when NOX are injected) (Falls and Seinfeld, 1978; Whitten et
al., 1979; Whitten et al., 1980); some assume it results from continuous input
of hydroxyl (OH) (Carter et al., 1979) or hydroperoxyl (HO2) radicals (Hendry
et al., 1978); and some assume a combination of both (Hendry et al., 1978).
These approaches differ significantly. The use of initial HONO alone leads to
a rapidly decreasing radical flux over the time scale of 30 min to 60 min,
whereas a continuous radical source results in a considerably greater total
radical input during a typical 6-h to 10-h smog chamber irradiation. Since
the kinetic mechanisms used in EKMA models are generally validated against
smog chamber data, this discrepancy represents an important uncertainty in
EKMA predictions, particularly since aspects of the mechanism related to
radical initiation and termination can not be tested unambiguously.
In addition to the uncertainties in model validation caused by poorly
characterized chamber radical sources, no one has adequately considered the
possibility that some of the factors causing these sources may operate in the
open atmosphere. Indeed, Harris et al. (In press) have observed up to 8 ppb
HONO in a Los Angeles source area, indicating that initial HONO is a factor in
the ambient air as it is in smog chamber simulations. In addition, recent
data discussed in this paper suggest that at least some of the factors causing
the continuous nitrogen dioxide (NO2)-dependent radical source observed in
smog chambers may also be important in the open atmosphere. This paper
discusses the evidence for the existence of the chamber-dependent radical
source, and the possibility that there are also unknown atmospheric radical
sources. It also examines the effect of initial HONO and of an unspecified
NO2-dependent radical source on EKMA predictions. The results help to
characterize the sensitivity of EKMA predictions to uncertainties concerning
radical sources.
EXPERIMENTAL STUDIES OF RADICAL SOURCES IN ENVIRONMENTAL CHAMBERS
Four different environmental chambers were employed in our studies of
radical sources. The majority of irradiations were carried out in the SAPRC
5800-liter evacuable, thermostatted, Teflon-coated environmental chamber
equipped with a 25-kW solar simulator (Winer et al., 1980). Experiments in
this chamber were performed at a variety of temperatures (282°K to 325°K),
humidities (dry to ~100% RH), total pressures (~350 torr to atmospheric), and
light intensities (k1, the NO2 photodecomposition rate, = 0.25 to 0.5 min^-1).
A more limited set of irradiations was carried out in the SAPRC ~6400-liter
indoor all-Teflon chamber, with irradiation provided by two diametrically
opposed banks of 40 Sylvania 40-W BL lamps, backed by arrays of Alzak-coated
reflectors. All runs in this chamber were performed at ~300°K, but the
humidity was varied from < 10% RH to ~50% RH, and the light intensity was
varied from k1 = 0.1 min^-1 to 0.45 min^-1. A number of irradiations were
also carried out in ~40,000-liter outdoor Teflon-bag chambers with natural
sunlight irradiation (Carter et al., 1981). In these experiments, the
temperature varied from ~320°K to ~340°K, humidity from < 10% RH to ~50% RH,
and light intensity from k1 = 0.25 min^-1 to 0.4 min^-1. Irradiations were
performed in new, newly conditioned, and extensively used bags. Finally, a
few irradiations were carried out using ~100-liter Teflon bags irradiated with
an array of fluorescent lamps yielding an NO2 photolysis rate, k1, of
0.27 min^-1. In this system, the temperature was held at ~300°K, and ultra-
high-purity dry air was used. It should be noted that the ~100-liter, ~6400-liter,
and the ~40,000-liter Teflon chambers employed in this program were all
constructed from the same roll of 2-mil thick, FEP Teflon film. The
characteristics of these chambers and the associated experimental operating
procedures are described in detail elsewhere (Carter et al., In press; Winer
et al., 1980; Carter et al., 1981; Pitts et al., 1979), and thus are not
discussed further here.
The experiments performed consisted of monitoring radical levels in
NOx-air irradiations in which the levels of organics were kept sufficiently
low that their effects on the NOx-air reactions would be minor. In such a
system, the major gas-phase reactions (Carter et al., 1979; Hampson and
Garvin, 1978; Baulch et al., 1980; Atkinson and Lloyd, 1980) are as follows:
NO2 + hv (λ > 295 nm) → NO + O(3P)                         (Reaction 1)
O(3P) + O2 + M → O3 + M                                    (Reaction 2)
NO + O3 → NO2 + O2                                         (Reaction 3)
O(3P) + NO2 → NO + O2                                      (Reaction 4)
O(3P) + NO2 + M → NO3 + M                                  (Reaction 5)
NO + NO + O2 → 2 NO2                                       (Reaction 6)
NO2 + O3 → NO3 + O2                                        (Reaction 7)
NO3 + NO → 2 NO2                                           (Reaction 8)
NO3 + NO2 → NO + NO2 + O2                                  (Reaction 9)
NO3 + NO2 + M → N2O5 + M                                   (Reaction 10)
N2O5 + M → NO2 + NO3 + M                                   (Reaction 11)
NO3 + hv → NO + O2                                         (Reaction 12)
NO3 + hv → NO2 + O(3P)                                     (Reaction 13)
O3 + hv → O2 + O(3P)                                       (Reaction 14)
O3 + hv (λ < 310 nm) → O2(1Δg) + O(1D)                     (Reaction 15)
O(1D) + M → O(3P) + M                                      (Reaction 16)
O(1D) + H2O → 2 OH                                         (Reaction 17)
OH + NO + M → HONO + M                                     (Reaction 18)
HONO + hv → OH + NO                                        (Reaction 19)
OH + NO2 + M → HNO3 + M                                    (Reaction 20)
In these experiments, nitric oxide (NO) levels were kept sufficiently high so
that significant O3 formation could not occur, and thus reactions 7 through 17
were of minor importance. In particular, the OH radical input rate calculated
from the above mechanism is ~4 x 10~^ ppm min^-1 for conditions of a 50% RH
evacuable chamber run where [NO] = [NO2]. This gives rise to predicted OH
radical levels one to two orders of magnitude lower than those observed (see
below).
We also included in the reaction mixture added traces (≲ 10 ppb) of
propene and propane (or propene and n-butane) in order to monitor OH radicals
from their relative rates of decay. The use of two tracers instead of one has
the advantage that OH radical levels can be related only to changes in their
concentration ratios (see below), which can be experimentally measured with
more precision than changes in absolute concentrations. Under the conditions
employed in these runs, reaction with OH radicals is the only significant mode
of consumption of n-butane, and it is the major reaction consuming propene.
However, under some conditions, consumption of propene by reaction with O3 and
O(3P) is non-negligible, and a correction must be made. When [NO] >> [O3], as
is the case for all of the experiments discussed here, OH levels can be
derived using Equation 10-1:
[OH] = (ka - kb)^-1 [ d ln([propane]/[propene])/dt - kc k1 [NO2]/(k3 [NO]) - kd k1 [NO2]/(k2 [O2] [M]) ] ,          (Eq. 10-1)
where ka and kb are, respectively, the rate constants for the reactions of OH
with propene and propane, kc and kd are the rate constants for the reactions of
propene with O3 and O(3P), and k1 through k3 are the rate constants for
reactions 1 through 3. The second and third terms in the above equation are the
corrections for the consumption of propene by reaction with O3 and O(3P) and
are derived (Carter et al., In press) by assuming that O(3P) and O3 are in a
photostationary state. This correction is < 10% when [NO2] ≲ 0.3 ppm and
[NO]/[NO2] > 1, which is the case for most of our experiments.
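The following is a minimal sketch of how Eq. 10-1 might be applied to tracer
data. The rate constants and the synthetic decay curves are placeholders of
realistic magnitude, not the values used in this work, and the O(3P)
correction term is dropped for brevity:

    import numpy as np

    # Infer [OH] from the slope of ln([propane]/[propene]) vs. time (Eq. 10-1),
    # subtracting the photostationary-state correction for propene + O3.
    # Rate constants below are illustrative placeholders (ppm^-1 min^-1).
    k_a = 3.9e4    # OH + propene
    k_b = 1.8e3    # OH + propane
    k_c = 1.5e-2   # O3 + propene

    def oh_from_tracers(t_min, propane, propene, k1, k3, no, no2):
        """Return [OH] in ppm; the O(3P) term of Eq. 10-1 is omitted here."""
        ratio = np.log(np.asarray(propane) / np.asarray(propene))
        slope = np.polyfit(t_min, ratio, 1)[0]     # d ln(ratio) / dt
        o3_pss = k1 * no2 / (k3 * no)              # photostationary-state O3
        return (slope - k_c * o3_pss) / (k_a - k_b)

    # Hypothetical run: propene decays faster than propane, so the ratio grows.
    t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])   # minutes
    propane = 0.013 * np.exp(-1.0e-4 * t)          # slow, dilution-like loss
    propene = 0.010 * np.exp(-1.5e-3 * t)          # faster, OH-driven loss
    oh = oh_from_tracers(t, propane, propene, k1=0.49, k3=25.0, no=0.5, no2=0.1)
    print(f"[OH] ~ {oh:.1e} ppm (~{oh * 2.46e13:.1e} molecule cm^-3)")

With these placeholder numbers the fitted slope is 1.4 x 10^-3 min^-1 and the
result is roughly 9 x 10^5 molecule cm^-3, the magnitude of the OH levels
discussed below.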
The presence of the propene or propane tracers at the < 10-ppb level in
these NOx-air irradiations has a negligible effect on the NOX chemistry and
radical levels, except that they cause a slight increase in the conversion of
NO to NO2 (Carter et al., 1979; Carter et al., In press; Atkinson and Lloyd,
1980). (At the reactant levels employed in these runs, the rate of this
conversion is minor, being generally less than the conversion caused by the
reaction of OH with the carbon monoxide (CO) impurity (0.5 to 4 ppm) that
passes through our air-purification system.) In particular, although the
reaction of propene with O3 and O(3P) and the photolysis of the oxygenated
products formed from the reactions of the tracers can lead to radical
production, these radical sources are minor at the reactant levels employed in
these runs (Carter et al., 1979; Atkinson and Lloyd, 1980).
For this tracer technique to be valid, the observed changes in the
propane:propene ratio must be shown to actually reflect the presence of
radicals, and to not be caused by heterogeneous or other unknown loss
processes for propene in these chambers. (The loss of propane was relatively
slow in these irradiations, and primarily reflects dilution.) Repetitive
pre-irradiation sampling has shown that no significant dark decay of propene
(or of other simple HC's) occurred in our smog chambers, but this result does
not rule out the possibility of light-induced heterogeneous decay. However,
no evidence exists that light-induced heterogeneous decay of propene or other
simple HC's is important in chamber systems, and there are a number of reasons
to believe it is not. In the first place, a large number of OH rate constant
ratios for organics, derived from their rate of decay in smog chamber
irradiations, agree to within experimental uncertainty with values derived
using entirely different, absolute techniques (Atkinson et al., 1979). This
agreement could not be possible if a light-induced heterogeneous loss process
were occurring in chamber systems at sufficiently high rates to account for
the results of our tracer-NOx-air irradiations. In addition, the rates of
decay of the small n-butane impurity observed (in sub-ppb levels) in most of
our irradiations were consistent with radical levels derived from the
propane:propene ratio. Finally, note that if light-induced heterogeneous
decay of propene were important, then all current propene-NOx-air models
(Carter et al., 1979; Hendry et al., 1978; Falls and Seinfeld, 1978; Whitten
et al., 1979; Whitten et al., 1980) would be incorrect, since none have taken
such a process into account.
Several control experiments were performed to further test the validity
of the organic-tracer technique. One such experiment was based on a
suggestion by Killus and Whitten (1981) and consisted of a NOx-air irradiation
in the evacuable chamber (< 10% RH, 0.4 ppm NO, 0.1 ppm NO2) in which the
propene/propane tracers were replaced by 50 ppm CO. The addition of CO causes
the following reactions to become important:
OH + CO (+ O2) → HO2 + CO2                                 (Reaction 21)
HO2 + NO → NO2 + OH                                        (Reaction 22)
These reactions result in no net change in radical levels, but they cause NO
to be converted to NO2 at a rate approximately equal to the rate of reaction
21 (Atkinson et al., 1979). Thus [OH] can be derived, entirely independent of
HC-decay rates, from the following:

-d[NO]/dt ≈ R21 = k21 [OH] [CO]          (Eq. 10-2)

The NO-decay rate observed in this CO-NOx-air irradiation was ~0.74 ppb
min^-1, from which [OH] = 0.9 x 10^6 molecule cm^-3 (Eq. 10-2) can be derived.
This is very similar to the average OH radical level of 1.1 x 10^6 molecule
cm^-3 derived from a 2-h NOx-air irradiation carried out under the same
conditions but with the organic tracers and no
added CO. Thus the two techniques for measuring [OH] give consistent results.
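As a numerical cross-check of Eq. 10-2: the OH + CO rate constant below is an
assumed, illustrative value, while the NO-decay rate and CO level are those of
the run just described.

    # Back out [OH] from the CO-driven NO decay via Eq. 10-2.
    # k21 is illustrative; 1 ppm ~ 2.46e13 molecule cm^-3 at ~300 K and 1 atm.
    k21 = 360.0                # OH + CO, ppm^-1 min^-1 (assumed)
    co = 50.0                  # ppm CO added
    no_decay = 0.74e-3         # observed -d[NO]/dt, ppm min^-1

    oh_ppm = no_decay / (k21 * co)
    print(f"[OH] ~ {oh_ppm * 2.46e13:.1e} molecule cm^-3")   # ~1e6

The result lands within roughly 10% of the 0.9 x 10^6 molecule cm^-3 quoted
above, about what one would expect given the assumed k21.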
An additional test to confirm that rates of change of the HC tracer ratio
reflect radical levels was performed by determining if addition of known
radical inhibitors would suppress these rates. Benzaldehyde was employed as
the radical inhibitor, since it is known to inhibit 03 formation in chamber
irradiations (Gitchell et al., 1974; Atkinson et al., 1980), and the current
mechanism for its NOx-air photooxidation predicts that radicals are not
regenerated following its reaction with OH (Atkinson et al., 1980). Thus a
tracer-NOx-air irradiation was performed in the evacuable chamber (< 10% RH,
T = 303°K, NO = 0.4 ppm, NO2 = 0.1 ppm). Approximately 1 ppm benzaldehyde was
added to the mixture after a 2-h irradiation. The addition of 1 ppm
benzaldehyde suppressed the rate of change of the n-butane:propene ratio by a
factor of ~7 (Figure 10-1). The amount of suppression observed in this and
other experiments using benzaldehyde agrees reasonably well with the amount of
suppression calculated based on the known OH + benzaldehyde (Niki et al.,
1978; Kerr and Shepard, 1981) and OH + NO2 (Baulch et al., 1980; Atkinson and
Lloyd, 1980) rate constants and assumed 100%-efficient inhibition by
benzaldehyde. Thus, these control experiments, as well as others that will be
described in a subsequent publication (Harris et al., In preparation),
validate the tracer technique for monitoring radical levels.
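A rough radical-balance reading of the benzaldehyde experiment, assuming the
unknown radical input Ru is unchanged by the addition and that benzaldehyde
terminates every OH it scavenges; the rate constants and mid-run NOx levels
are illustrative placeholders:

    # Steady-state [OH] ~ Ru / (sum of OH sinks), so the suppression factor is
    # the ratio of total OH sinks after and before the benzaldehyde addition.
    # All numbers are illustrative (ppm^-1 min^-1 and ppm).
    k18 = 7.2e3    # OH + NO
    k20 = 1.6e4    # OH + NO2
    k_b = 1.9e4    # OH + benzaldehyde

    no, no2, benz = 0.2, 0.15, 1.0       # hypothetical mid-run levels, ppm

    sinks_before = k18 * no + k20 * no2
    sinks_after = sinks_before + k_b * benz
    print(f"predicted suppression factor ~ {sinks_after / sinks_before:.1f}")

With these placeholders the predicted factor is ~6, the same order as the
factor of ~7 observed in run EC-623, which is the sense in which the observed
and calculated suppressions agree reasonably well.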
The OH radical levels observed in all of the runs carried out were
significantly higher than expected from the homogeneous gas-phase reactions
listed previously. These higher values can be seen in Figures 10-2 and 10-3,
which show OH radical concentration-time profiles derived from the propene
and propane decay rates observed in a representative standard evacuable
chamber run (Figure 10-2) and from a representative high initial NO2
concentration run (Figure 10-3) (Carter et al., 1982). These concentration-
time profiles are compared with results of model calculations (curve A) using
only the known gas-phase chemistry (where the largest single radical source
[Figure 10-1. Plot of ln([n-butane]/[propene]) against irradiation time (hr)
for the evacuable chamber run EC-623, in which benzaldehyde was injected
during the run.]
[Figure 10-2. OH radical concentrations as a function of irradiation time
(min). Points represent experimental data for EC-457 (not corrected for the
consumption of propene by reaction with O3 or O(3P), which was minor);
[NO]initial = 0.499 ppm, [NO2]initial = 0.115 ppm; [propane]initial =
0.013 ppm, [propene]initial = 0.010 ppm; [HCHO]initial ~ 0.020 ppm;
T = 303°K, RH = 50%; NO2 photolysis rate constant k1 = 0.49 min^-1.
A - model calculations with the homogeneous gas-phase chemistry; B - model
calculations with [HONO]initial = 0.010 ppm; C - model calculations with a
constant OH radical flux of 0.245 ppb min^-1; D - model calculations with
[HONO]initial = 0.010 ppm and a constant OH radical flux of 0.245 ppb
min^-1.]
[Figure 10-3. OH radical concentrations as a function of irradiation time
(min). Points represent experimental data for EC-442 (not corrected for the
consumption of propene by reaction with O3 or O(3P), which was minor);
A - model calculations with the homogeneous gas-phase chemistry; B - model
calculations with [HONO]initial = 0.050 ppm; C - model calculations with a
constant OH radical flux of 0.61 ppb min^-1; D - model calculations with
[HONO]initial = 0.050 ppm and a constant OH radical flux of 0.61 ppb min^-1.]
was formaldehyde photolysis based on the observed initial formaldehyde levels
of 20 ppb and 6 ppb, respectively), and assuming (1) only initially present
HONO at levels adjusted to fit the initial OH radical concentrations (curve
B), (2) a constant radical flux at rates adjusted to fit the final OH radical
levels (curve C), and (3) a combination of both (curve D).
The known radical sources are clearly at least an order of magnitude too
low to account for the observed radical levels in these runs, and assuming
only initial HONO greatly underpredicts radical levels after the first ~15 min
of the run, with initial HONO being, at best, only a minor contributor to the
observed radical source after the first ~30 min of irradiation. On the other
hand, using only a constant radical flux in the calculation results in an
underprediction of the initial OH radical levels, especially in the high
[NO2]/[NO] runs. The best fits to the data are obtained if some contribution
from initial HONO is assumed. However, in terms of the overall input of
radicals during a chamber irradiation (typically > 6 h for smog-simulation
runs), the continuous radical flux is by far the more important factor.
The radical flux required to fit the data for a given run can be
estimated, without carrying out detailed model calculations, from the fact
that radical-initiation and radical-termination processes must balance. Since
the only significant radical-termination processes in this system are the
reactions of OH radicals with NO and NO2 (reactions 18 and 20), and since the
only major known radical initiation process is HONO photolysis (reaction 19),
then:
Ru + k19 [HONO] = k18 [OH] [NO] + k20 [OH] [NO2]    (Eq. 10-3)
where Ru is the radical-initiation rate from all sources other than HONO
photolysis. Since known radical sources (other than HONO photolysis) are
minor in these irradiations (see above), Ru is thus identified with the rate
of radical input from unknown sources. Reactions 18 and 19 are the major
reactions affecting HONO levels, so Eq. 10-3 can be rearranged to yield
Ru = d[HONO]/dt + k20 [OH] [NO2]    (Eq. 10-4)
Furthermore, since the photolytic half-life of HONO in these experiments is
< 15 min (Atkinson et al., 1979), HONO can reasonably be assumed to be in
photostationary state after the first hour. Therefore, the radical-initiation
rates for t > 60 min in these photolyses can be estimated from the equation:
Ru(t > 60 min) = k20 [OH]avg [NO2]avg    (Eq. 10-5)
where k20 is accurately known (Whitten et al., 1980; Atkinson and Lloyd, 1980)
and [OH]avg and [NO2]avg (the average OH radical and NO2 concentrations for
t > 60 min) are experimentally determined. Note that, in general, the OH
radical levels were approximately constant after the first hour of
irradiation.
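To make the Eq. 10-5 arithmetic concrete, the following short Python sketch
(an editorial illustration, not part of the original study) evaluates the
radical flux from assumed chamber averages; the value of k20 and the
ppm-to-molecule conversion factor are representative numbers chosen for the
example, not values taken from this paper.

    # Sketch of the Eq. 10-5 estimate: Ru = k20 [OH]avg [NO2]avg for
    # t > 60 min, assuming HONO is in photostationary state.
    MOLEC_CM3_PER_PPM = 2.46e13  # molecule cm-3 per ppm at ~300 K, 1 atm (assumed)
    K20 = 1.6e4                  # OH + NO2 rate constant, ppm-1 min-1 (assumed)

    def radical_flux_ppb_per_min(oh_avg_cm3, no2_avg_ppm):
        """Radical-initiation rate Ru (ppb min-1) from chamber averages."""
        oh_avg_ppm = oh_avg_cm3 / MOLEC_CM3_PER_PPM
        return K20 * oh_avg_ppm * no2_avg_ppm * 1.0e3   # ppm -> ppb

    # Example with round numbers: [OH]avg = 2e6 cm-3, [NO2]avg = 0.1 ppm
    print(radical_flux_ppb_per_min(2.0e6, 0.1))   # about 0.13 ppb min-1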
The radical-initiation rates derived using Eq. 10-5 were found to depend on
the chamber employed and, to some extent, on the chamber history; they were
proportional to light intensity and increased significantly with temperature,
humidity, and NO2 concentration. On the other hand, these rates were totally
independent of the total pressure and NO levels (Carter et al., In press).
The dependence of this radical flux on [NO2] and humidity (at T = 303°K in
the evacuable chamber) is shown in Figure 10-4. A monotonic increase in the
radical source with increasing [NO2] can be observed, with the rate of
increase depending on humidity. However, the data do not establish whether
the dependence is linear (as indicated by the least-squares regression
lines) or whether, as some of the data points suggest, there is significant
curvature at both high and low [NO2] levels.
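The linearity question is the kind that an ordinary least-squares fit and a
look at its residuals can frame. The sketch below uses made-up points
standing in for the Figure 10-4 data, so only the procedure, not the
numbers, is meaningful.

    import numpy as np

    # Hypothetical (radical source/k1, ppb) vs. average [NO2] (ppm) points;
    # the actual evacuable-chamber data appear in Figure 10-4.
    no2 = np.array([0.1, 0.5, 1.0, 1.5, 2.0])
    src = np.array([0.3, 0.8, 1.4, 1.9, 2.6])

    slope, intercept = np.polyfit(no2, src, 1)    # linear least-squares fit
    residuals = src - (slope * no2 + intercept)   # systematic curvature shows here
    print(f"slope = {slope:.2f} ppb/ppm, intercept = {intercept:.2f} ppb")
    print("residuals:", np.round(residuals, 2))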
The radical flux depended somewhat on chamber conditioning. For the
~40,000-liter outdoor Teflon chambers, NOx-air irradiations were performed
when the chamber was newly constructed, after it was "conditioned" by
irradiation of ~0.5 ppm each of propene and NOX, and after it was extensively
used in a series of NOx-air irradiations of ~25 ppmC aircraft or motor-vehicle
fuels (Carter et al., 1981). The radical flux was found to be non-negligible
even in new chambers, and did not increase measurably after propene-NOx
conditioning, but it did tend to increase (by factors of ~1.5 to ~3.5)
following extensive conditioning by fuel-NOx runs. On the other hand, for the
evacuable chamber, when a NOx-air irradiation was performed immediately after
the chamber was "cleaned" by an overnight evacuated bakeout at ~360°K, the
radical input rates were higher than those following other NOx-air or
Figure 10-4. Plot of (radical source/k1) against the average NO2
concentration (ppm) for t > 60 min in evacuable chamber irradiations at
303°K; • - ~0% RH; o - ~50% RH.
organic-NOx-air irradiations. (The standard procedure between runs in this
chamber is to evacuate the chamber overnight, but at ambient temperature
[Pitts et al., 1979].) Thus, although "dirty-chamber" effects may contribute
somewhat to this radical source under some conditions, the data do not support
the assertion that contamination is the only contributor.
Table 10-1 illustrates the dependence of the radical flux on the chamber
volume and material (at low humidity and T = ~303°K). Since the radical
fluxes were shown (at least for the evacuable and ~6400-liter indoor Teflon
chambers) to be proportional to light intensity, the values shown in the table
are normalized by dividing the radical fluxes by k1 (the NO2 photodissociation
rate constants) for a more direct interchamber comparison. The data shown in
Table 10-1 were taken from typical runs when the chamber was new or newly
conditioned (for the 100-, 6400-, and 40,000-liter Teflon chambers) or when
the chamber was in "standard" condition (i.e., following an overnight,
ambient-temperature evacuation for the evacuable chamber). The normalized
radical flux appears to be somewhat higher in the evacuable chamber than in
the Teflon chamber of similar volume; to what extent this higher flux is
related to the different type of surfaces involved, or to contamination of the
evacuable chamber, remains unclear. For the Teflon chambers, the radical flux
decreased significantly when the volume was increased from ~100 liters to
~6400 liters, but surprisingly did not decrease further (within experimental
variability) when the volume was further increased to ~40,000 liters. This
observation suggests, though it certainly does not prove, that at least some
component of this radical source may be independent of the chamber, and thus
may operate in the open atmosphere.
TABLE 10-1. DEPENDENCE OF RADICAL SOURCE ON CHAMBER*

                    Approximate     k1          [OH]            Radical Source/k1
Chamber             Volume (L)      (min-1)     (10^6 cm-3)     (ppb)

Teflon Bag No. 4    100             0.27        4.3 ± 0.6       1.0
Teflon Bag No. 5    100             0.27        1.4 ± 0.1       0.3
Teflon Bag No. 6    100             0.27        2.1 ± 0.8       0.5
Evacuable           5800            0.49        2.4 ± 0.4       0.3
Indoor Teflon       6000            0.45        0.88 ± 0.4      0.1
Outdoor Teflon      40000           ~0.3        0.5 ± 0.4       0.1

*Initial [NO] = 0.4 ppm, [NO2] = 0.1 ppm; RH < 10%; T = 303°K to 308°K.
Additional indications that the radical flux may not be entirely
heterogeneous in nature come from results of evacuable-chamber experiments
in which the NO2 concentrations were suddenly increased in the middle of the
run, either by injecting NO2 directly or by injecting sufficient O3 to
convert some (though not all) of the NO to NO2. Figure 10-5 gives the
results of two such experiments (initial NO = 0.4 ppm, NO2 = 0.1 ppm,
T = 303°K, ~60% RH). The sudden increases in [NO2] can be seen to cause no
detectable change in the OH radical levels as measured by the decay rates of
the tracers, despite the fact that NO2 is the major radical sink in this
system. The slight increase in the propene-consumption rate by reaction of
propene with O3 or O(3P), which becomes more important as [NO2] and the
[NO2]:[NO] ratio are increased, is much too small to account for this
observation.
Figure 10-5. Plots of ln([propane]/[propene]) against irradiation time (min)
for evacuable chamber runs in which O3 or NO2 was injected during the run.
This result is consistent with the fact that the radical flux increased with
[NO2], but that the sudden increase in [NO2] does not cause even a temporary
suppression of the radical levels is surprising. This lack of radical
suppression suggests that the process in which the radical precursor is
formed from NO2 is quite rapid, more rapid than one might reasonably expect
for a heterogeneous reaction in a 5800-liter chamber.
EFFECT OF AN NO2-DEPENDENT RADICAL SOURCE ON EKMA PREDICTIONS
To obtain an indication of the uncertainty introduced into EKMA
predictions by uncertainties related to the chamber radical source, one may
find it useful to determine how EKMA predictions would be affected if the
unknown, NO2-dependent radical source were assumed to operate in the
atmosphere. Such a determination would also show how EKMA predictions could
change if such a radical source were found to actually operate in the
atmosphere. Further, it would give an indication of how predictions of models
validated against chamber data, but ignoring the chamber radical source, would
differ from those validated with this effect taken into account, since the
former models would require some artificial "internal" radical sources in the
kinetic mechanism to compensate for their deficiency in representing the
chamber radical source in chamber simulations. Note, for instance, that the
original Dodge mechanism (Dodge, 1977) and the various Carbon Bond mechanisms
(Whitten et al., 1979; Whitten et al., 1980; Killus and Whitten, 1981), which
are most frequently employed or proposed for use in EKMA models (EPA, 1977;
Dimitriades, 1977; Dodge, 1977; Bilger, 1978; Carter et al., 1982; Hov and
Derwent, In press; Jeffries, 1981; Killus and Whitten, 1981), all fall into
this category. Thus, these models may be significantly overpredicting radical
levels when applied to the open atmosphere.
Calculation Method and Kinetic Mechanism Employed
The kinetic mechanism and HC representation employed in all calculations
reported here was the SAPRC propene-n-butane-formaldehyde model, which was
validated (Carter et al., 1982) against the SAPRC "surrogate" HC-NOx-air glass
smog chamber data base (Pitts et al., 1976). The model was validated assuming
a continuous chamber radical source that was consistent with results of the
NOx-air irradiations described above. The kinetic mechanism, the HC
representation, and the chamber validation have been described elsewhere
(Carter et al., 1982). The mechanism was recently updated in several minor
details to be consistent with more recent basic laboratory studies and
mechanistic interpretations as described by Atkinson and Lloyd (1980), but
these changes had only minor effects on its O3 predictions.
For the purposes of these calculations, the possibility of including an
atmospheric, NO2-dependent radical source was provided by replacing
reaction 1 (NO2 photolysis) with the following:

NO2 + hv → NO + O(3P) + a OH    (Reaction 1')
where k1' = k1 and a was varied from zero (no radical source) to 10^-3.
Although reaction 1' has no physical meaning, it is a convenient formalism to
relate the radical flux to k1 and [NO2] for computational purposes. The type
of dependence of the radical source on [NO2] implied by reaction 1' (linear
with a zero intercept) is not strictly consistent with our smog chamber data
(see Figure 10-4), but it is a useful enough approximation for comparative
purposes. The value of a = 10^-3 corresponds roughly to the radical fluxes
observed in our larger environmental chambers (Carter et al., In press).
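In computational terms, reaction 1' simply adds an OH production term
proportional to k1[NO2]. The sketch below is an editorial illustration of
that formalism, not an excerpt from the SAPRC model; the numerical values
are assumed.

    # Reaction 1': NO2 + hv -> NO + O(3P) + a*OH, with k1' = k1, so the
    # extra OH source is a * k1 * [NO2] (linear in [NO2], zero intercept).
    K1 = 0.49        # NO2 photolysis rate constant, min-1 (assumed)
    ALPHA = 1.0e-3   # upper end of the range explored; 0 disables the source

    def oh_source_ppm_per_min(no2_ppm):
        """Extra OH production rate (ppm min-1) implied by reaction 1'."""
        return ALPHA * K1 * no2_ppm

    print(oh_source_ppm_per_min(0.1))   # 4.9e-5 ppm min-1, i.e. 0.049 ppb min-1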
The conditions employed for the EKMA simulations discussed here are based
on those used for the standard (or default OZIPP [Whitten and Hogo, 1978])
isopleth calculations, except for the kinetic mechanism and HC representation
(Carter et al., 1982). Full pollutant loading at 7:00 a.m. local standard
time (LST) was assumed, and the simulation was terminated at 6:00 p.m. LST.
A constant dilution rate of 3% h-1 was used. The initial NO:NO2 ratio was
assumed to be 3, and all calculations were done for a latitude of 34.1°N and
a solar declination of 23.5°, which are appropriate for Los Angeles in the
summer. Initial HONO was assumed to be negligible in these calculations.
Photolysis rates were calculated using the actinic irradiances as a function
of zenith angle derived by Peterson (1976) using his "best estimate" surface
albedos. These conditions may not be the most realistic, but they have been
used in previous sensitivity studies (Dodge, 1977; Carter et al., 1982), and
are useful for comparative purposes.
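For reference, those simulation conditions can be collected in a small
configuration block; the structure below is an editorial summary, and
OZIPP's actual input format differs.

    # Editorial summary of the EKMA simulation conditions listed above.
    EKMA_CONDITIONS = {
        "start_lst": "07:00",            # full pollutant loading assumed
        "end_lst": "18:00",              # simulation terminated at 6:00 p.m. LST
        "dilution_rate_per_h": 0.03,     # constant 3% h-1
        "initial_no_to_no2_ratio": 3.0,
        "latitude_deg_n": 34.1,          # Los Angeles in the summer
        "solar_declination_deg": 23.5,
        "initial_hono_ppm": 0.0,         # negligible initial HONO
    }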
Results of Calculations
Figure 10-6 compares isopleths for O3 = 0.12 and 0.3 ppm calculated with
and without the NOx-dependent radical source. Inclusion of this radical
source had no significant effect at high HC:NOx ratios, but significantly
less inhibition by NOx is predicted at low HC:NOx ratios if the
NOx-dependent radical source is assumed. This is reasonable, since at high
HC:NOx ratios, O3 formation is limited by NOx availability, and increasing
the radical initiation makes the O3 maximum occur earlier without strongly
affecting the O3 yield. At low HC:NOx ratios, on the other hand, O3
formation is limited by the length of the day, and thus if more radical
initiation is occurring, more O3 can be formed while light is available.
To determine whether the predicted changes in the isopleths caused by
assuming an NO2-dependent radical source significantly impact the
predictions for HC control strategies, these isopleths were analyzed using
the standard EPA-recommended "relative" isopleth analysis technique (EPA,
1977; Dimitriades, 1977). The technique was used to determine the amount of
HC control required to reduce O3 from 0.3 to 0.12 ppm, assuming that the NOx
levels remain the same. The predictions of this technique are sensitive to
the NMHC:NOx ratio assumed, and since this ratio is highly uncertain even
for the Los Angeles basin (CIT, 1980), the required HC controls were
calculated for a range of HC:NOx ratios from 6 to 12. The results are shown
in Figure 10-7.
Figure 10-6. Calculated O3 isopleths (NMHC concentration, ppmC) at 0.12 ppm
and 0.3 ppm. A - no NO2-dependent radical source; B - radical input rate =
10^-3 k1 [NO2].
Figure 10-7. Predicted percent HC control required to reduce O3 from 0.3 ppm
to 0.12 ppm for a range of NMHC:NOx ratios. A - no NO2-dependent radical
source; B - radical input rate = 10^-3 k1 [NO2].
Regardless of the HC:NOx ratio assumed, the percentage of HC control
predicted to be required is greater if an NO2-dependent radical source is
assumed. In addition, the discrepancy between the models increases as the
NMHC:NOx ratio is decreased.
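The "relative" analysis itself is mechanical once isopleths are available.
One way to code it is sketched below, assuming a hypothetical helper
nmhc_on_isopleth(o3_ppm, nox_ppm) that returns the NMHC value (ppmC) on a
calculated isopleth at a given NOx; neither the helper nor the bisection
details are taken from the EPA procedure documents.

    def design_point(ratio, nmhc_on_isopleth, nox_lo=1.0e-3, nox_hi=1.0):
        """Find where the assumed NMHC:NOx ratio line crosses the 0.30 ppm
        O3 isopleth, by bisection on NOx (bounds are illustrative)."""
        f = lambda nox: ratio * nox - nmhc_on_isopleth(0.30, nox)
        for _ in range(60):
            mid = 0.5 * (nox_lo + nox_hi)
            if f(nox_lo) * f(mid) <= 0.0:
                nox_hi = mid
            else:
                nox_lo = mid
        nox = 0.5 * (nox_lo + nox_hi)
        return ratio * nox, nox

    def percent_hc_control(ratio, nmhc_on_isopleth):
        """HC control (%) to move from the 0.30 ppm isopleth to the
        0.12 ppm isopleth while holding NOx constant."""
        nmhc0, nox0 = design_point(ratio, nmhc_on_isopleth)
        nmhc1 = nmhc_on_isopleth(0.12, nox0)   # same NOx, target isopleth
        return 100.0 * (1.0 - nmhc1 / nmhc0)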
EFFECT OF INITIAL HONO ON EKMA PREDICTIONS
A less speculative source of radicals in ambient air is HONO, a powerful
photoinitiator (Stockwell and Calvert, 1978), which has recently been
observed at levels of up to ~6 ppb in a Los Angeles source area (Harris et
al., 1982). Although the mechanism of formation of HONO in the atmosphere is
unknown, HONO was observed to increase slowly throughout the night. Nitrous
acid may be emitted along with NOx, or it may be formed homogeneously or
heterogeneously from NOx previously emitted. Since the mode of HONO
formation is unknown, and direct ambient HONO measurements in urban source
areas are extremely limited, the amount of initial HONO appropriate for use
in city-specific EKMA calculations is uncertain. To determine the nature and
magnitude of the impact of this uncertainty on EKMA predictions, and to
compare it with the effect of the uncertainties related to the NO2-dependent
radical source, EKMA calculations including 5 ppb and 10 ppb initial HONO
(independent of NOx) were compared with those assuming no initial HONO.
Except as noted, the kinetic mechanism, HC representation and conditions,
and input data assumed were the same as those employed in the calculations
examining the effect of the NO2-dependent radical source discussed above.
These calculations were performed prior to our updating the mechanism based
on the review of Atkinson and Lloyd (1980); but, as mentioned above, these
changes were minor, and thus the mechanism was essentially the same. In these
calculations, no NO2-dependent radical source was assumed (a = 0). In those
calculations where initial HONO was included, it was incorporated as a
component of the initial NOx because it rapidly forms NO upon photolysis.
Thus, for example, a calculation assuming 0.1 ppm initial NOx with 10 ppb
initial HONO used initial NO + NO2 = 90 ppb.
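The NOx bookkeeping in that example can be written out explicitly; the
sketch below is illustrative, with the 3:1 NO:NO2 split taken from the
simulation conditions above.

    def partition_initial_nox(nox_total_ppm, hono_ppm, no_to_no2=3.0):
        """Split total initial NOx into (NO, NO2, HONO); HONO counts toward
        the NOx total because it rapidly forms NO upon photolysis."""
        no_plus_no2 = nox_total_ppm - hono_ppm
        no2 = no_plus_no2 / (1.0 + no_to_no2)
        return no_plus_no2 - no2, no2, hono_ppm

    # 0.1 ppm total NOx with 10 ppb HONO -> NO + NO2 = 90 ppb, as in the text
    print(partition_initial_nox(0.100, 0.010))   # about (0.0675, 0.0225, 0.010)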
Figure 10-8 compares isopleths for [O3] = 0.12 ppm and 0.3 ppm calculated
using 0, 5, and 10 ppb initial HONO. The effect on the isopleths of assuming
initial HONO is similar to that of assuming an NO2-dependent continuous
radical source: the O3 yields are unaffected at low NOx:HC ratios, but
greater O3 formation is predicted at high NOx:HC ratios. The main difference
between assuming initial HONO and assuming an NO2-dependent radical source
is that in the former case, the isopleth lines at high NOx are parallel to
those of the control calculations, rather than diverging (compare Figure
10-8 with Figure 10-6). This difference can be attributed to the fact that
in the initial HONO calculations the amount of HONO added (5 ppb or 10 ppb)
was the same at all NOx levels, and thus the amount of excess initiation in
the calculations was independent of NOx. In the calculations with the
NO2-dependent radical source, the excess initiation increased with NOx. If
calculations were performed in which the initial HONO was assumed to
increase with initial NOx (as is perhaps reasonable to expect for ambient
air conditions), then the isopleths would diverge as observed when the
NO2-dependent radical source is assumed.
Figure 10-8. Calculated O3 isopleths (NMHC concentration, ppmC) at 0.12 ppm
and 0.3 ppm for cases where initial HONO = 0, 5, and 10 ppb.
Thus the effects of these two different types of NOx-dependent radical
sources on the isopleths can be considered quite similar.
Figure 10-9 shows the percent HC control required to reduce O3 from 0.3 ppm
to 0.12 ppm at constant NOx, as a function of NMHC:NOx ratio, calculated
using the "relative" isopleth analysis technique (EPA, 1977; Dimitriades,
1977) and the HONO = 0, 5, and 10 ppb isopleths. Consistent with the results
shown in Figure 10-7, increasing radical initiation increases the predicted
amount of HC control required. (The increased control predicted as a result
of increased initiation is also consistent with the effect of increasing
initiation by increasing the aldehyde content in the representation of the
reactive organics [Carter et al., 1982].) The results of these calculations
indicate that a ±5 ppb uncertainty in the initial HONO assumed in EKMA
calculations can cause an uncertainty of greater than ~±5% in predictions of
required HC control, regardless of the NMHC:NOx ratio used.
CONCLUSIONS
The amount of radical initiation in EKMA model calculations can be increased
either by changing aspects of the kinetic mechanism related to rates of
radical production or by increasing the levels of radical initiation assumed
to be present. Both of these increases tend to affect calculated O3 isopleths
by decreasing the efficiency of NOx in suppressing O3 at high NOx:HC ratios,
while leaving the isopleths unaffected at low NOx:HC ratios.
Figure 10-9. Predicted percent HC control required to reduce O3 from 0.3 ppm
to 0.12 ppm for a range of NMHC:NOx ratios in cases where [HONO]0 = 0 ppb
(squares), 5 ppb (filled circles), and 10 ppb (triangles).
We find that the "relative" technique for analyzing isopleths can be quite
sensitive to this type of change in the isopleths, particularly when it is
used to predict the percent HC control required to achieve a specified O3
reduction without controlling NOx. This sensitivity of the "relative" isopleth analysis
technique contrasts with the results of a previous study conducted by Dodge
(1977), in which this type of analysis was found to be fairly insensitive to
other aspects of the model, such as HC reactivity (i.e., the propene:n-butane
ratio), dilution, or light intensity. However, our results are consistent
with results of other sensitivity studies (Carter et al., 1978; Hov and
Derwent, In press; Jeffries, 1981) in which EKMA predictions were found to be
significantly affected when different kinetic mechanisms were employed.
The sensitivity of EKMA predictions to the kinetic mechanism employed
means that it is essential to utilize a valid mechanism in EKMA models. Thus,
for example, the continued use of the original Dodge mechanism (Dodge, 1977)
cannot be supported, since that mechanism, originally developed in 1974-75
(Durbin et al., 1975), is now known to be incorrect in a number of its details
(Carter et al., 1982; Hov and Derwent, In press; Pitts et al., 1980) and to
give significantly different predictions than more current models (Carter et
al., 1982; Hov and Derwent, In press; Jeffries, 1981; Pitts et al., 1980).
Two necessary conditions for a mechanism to be valid are that it be consistent
with the results of basic kinetic and mechanistic studies and that it be able
to simulate adequately results of smog chamber experiments, while properly
taking chamber effects into account. Until recently, validating aspects of
the mechanisms related to radical initiation has been difficult because of
excess radical sources observed in chamber experiments and disagreement among
modelers on how best to handle them (Carter et al., 1979; Hendry et al., 1978;
Falls and Seinfeld, 1978; Whitten et al., 1979; Whitten et al., 1980; Harris
et al., 1982; Killus and Whitten, 1981). However, the chamber
characterization experiments discussed in this paper and in more detail
elsewhere (Carter et al., In press) have shown that continuous radical
source(s) do exist and provide a method by which their magnitude can be
readily measured. Thus, the radical source no longer must be treated as an
adjustable parameter when models are validated against smog chamber data;
measured values from separate characterization experiments can be used.
Clearly, mechanisms that have not been validated with the chamber radical
source properly taken into account should not be used in EKMA and other
airshed models, and in particular should not be recommended for use in control
strategy planning.
However, even for properly validated kinetic mechanisms, there remain
uncertainties in EKMA calculations related to radical initiation. For
example, it is not presently known to what extent the continuous
NO2-dependent chamber radical source is important in the open atmosphere.
The levels of initial HONO appropriate for use in city-specific airshed
simulations are also unknown, except perhaps for Los Angeles (Harris et al.,
1982). Calculations performed by varying these parameters within their range
of uncertainty show that these factors have non-negligible impacts on
calculated O3 isopleths and
control-strategy predictions. Clearly, additional research is required to
establish characteristic atmospheric levels of known radical initiators such
as HONO and aldehydes, and to determine whether there are other
significant atmospheric radical sources that are not included in current
models.
ACKNOWLEDGMENTS
Helpful discussions and the assistance of Drs. R. Atkinson, A.M. Winer,
and J.N. Pitts, Jr., are gratefully acknowledged. Dr. Atkinson also provided
substantial assistance in supervising the experimental studies of chamber
radical sources, which were conducted by S.M. Aschmann, W.D. Long, M.C. Dodd,
F.R. Burleson, C.G. Smith, and P.S. Ripley. The experimental HONO
measurements discussed in this paper were made by Drs. G.W. Harris, J.J.
Treacy, and D. Perner.
This work was supported primarily by California Air Resources Board
Contract Nos. A8-145-31 and A1-030-32, and in part by NSF Grant ATM-8001634.
REFERENCES
Atkinson, R., W.P.L. Carter, K.R. Darnall, A.M. Winer, and J.N. Pitts, Jr.
1980. Int. J. Chem. Kinet., 12:779.
Atkinson, R., K.R. Darnall, A.C. Lloyd, A.M. Winer, and J.N. Pitts, Jr. 1979.
Adv. Photochem., 11:375.
Atkinson, R., and A.C. Lloyd. 1980. Evaluation of Kinetic and Mechanistic
Data for Modeling of Photochemical Smog. Final Report to EPA Contract No.
68-02-3280, ERT Document No. P-A040, July. In press in J. Phys. Chem. Ref.
Data.
Baulch, D.L., R.A. Cox, R.F. Hampson, Jr., J.A. Kerr, J. Troe, and R.T.
Watson. 1980. J. Phys. Chem. Ref. Data, 9:295.
Bilger, R.W. 1978. Environ. Sci. Technol., 12:937-940.
California Institute of Technology. 1980. Conference on Air Quality Trends
in the South Coast Air Basin, February 21-22.
Carter, W.P.L., R. Atkinson, A.M. Winer, and J.N. Pitts, Jr. 1981. Int.
J. Chem. Kinet., 13:735.
Carter, W.P.L., R. Atkinson, A.M. Winer, and J.N. Pitts, Jr. In press.
An experimental investigation of chamber-dependent radical sources, Int. J.
Chem. Kinet.
Carter, W.P.L., A.C. Lloyd, J.L. Sprung, and J.N. Pitts, Jr. 1979. Int.
J. Chem. Kinet., 11:45.
Carter, W.P.L., P.S. Ripley, C.G. Smith, and J.N. Pitts, Jr. 1981.
Atmospheric Chemistry of Hydrocarbon Fuels. Draft Final Report on Contract
No. F08635-80-C-0036 to Air Force Engineering and Services Center, Tyndall Air
Force Base, FL, November.
Carter, W.P.L., A.M. Winer, and J.N. Pitts, Jr. 1982. Atmos. Environ.,
16:113.
Dimitriades, B. 1977. An alternative to the Appendix J method for
calculating oxidant and NO2-related control requirements, In: Proceedings of
the International Conference on Photochemical Oxidant Pollution and Its
Control. EPA-600/3-77-001, U.S. Environmental Protection Agency, Research
Triangle Park, NC.
Dodge, M.C. 1977. Combined use of modeling techniques and smog chamber data
to derive ozone-precursor relationships, In: Proceedings of the International
Conference on Photochemical Oxidant Pollution and Its Control.
EPA-600/3-77-001, U.S. Environmental Protection Agency, Research Triangle
Park, NC.
Dodge, M.C. 1977. Effect of Selected Parameters on Predictions of a
Photochemical Model. EPA-600/3-77-048, U.S. Environmental Protection Agency,
Research Triangle Park, NC.
Durbin, R.A., T.A. Hecht, and G.Z. Whitten. 1975. Mathematical Modeling of
Simulated Photochemical Smog. EPA-650/4-75-026, U.S. Environmental Protection
Agency, Research Triangle Park, NC.
Falls, A.H., and J.H. Seinfeld. 1978. Environ. Sci. Technol., 12:1398.
Gitchell, R., R. Simonaitis, and J. Heicklen. 1974. J. Air Pollut. Control
Assoc., 24:357.
Hampson, R.F., Jr., and D. Garvin, eds. 1978. Reaction Rate and
Photochemical Data for Atmospheric Chemistry - 1977. National Bureau of
Standards Special Publication 513, May.
Harris, G.W., W.P.L. Carter, A.M. Winer, J.N. Pitts, Jr., U. Platt, and D.
Perner. In press. Observations of nitrous acid in the Los Angeles atmosphere
and implications for predictions of ozone-precursor relationships. Environ.
Sci. Technol.
Harris, G.W., U. Platt, J.J. Treacy, R. Atkinson, W.P.L. Carter, A.M. Winer,
and J.N. Pitts, Jr. In preparation.
Hendry, D.G., A.C. Baldwin, J.R. Barker, and D.M. Golden. 1978. Computer
Modeling of Simulated Photochemical Smog. EPA-600/3-78-059, U.S.
Environmental Protection Agency, Research Triangle Park, NC.
Hov, Ø., and R.G. Derwent. In press. Sensitivity studies of the effects of
model formulation on the evaluation of control strategies for photochemical
air pollution formation in the United Kingdom. J. Air Pollut. Control. Assoc.
Jeffries, H.E., K.J. Sexton, and C.N. Salmi. 1981. Effects of Chemistry and
Meteorology on Ozone-Control Calculations Using Simple Trajectory Models in
the EKMA Procedure. EPA-450/4-81-034, U.S. Environmental Protection Agency,
Research Triangle Park, NC, November.
Kerr, J.A., and D.W. Shepard. 1981. Environ. Sci. Technol.
Killus, J.P., and G.Z. Whitten. 1981. A New Carbon Bond Mechanism for Air
Quality Simulation Modeling. Final Report Contract No. 68-02-3281, U.S.
Environmental Protection Agency, Research Triangle Park, NC.
Killus, J.P., and G.Z. Whitten. 1981. Int. J. Chem. Kinet., 13:1101.
Niki, H., P.D. Maker, C.M. Savage, and L.P. Breitenbach. 1978. J. Phys.
Chem., 82:132.
Peterson, J.T. 1976. Calculated Actinic Fluxes (290-700 nm) for Air
Pollution Photochemistry Application. EPA-600/4-76-025, U.S. Environmental
Protection Agency, Research Triangle Park, NC.
Pitts, J.N., Jr., K.R. Darnall, W.P.L. Carter, A.M. Winer, and R.L. Atkinson.
1979. Mechanisms of Photochemical Reactions in Urban Air. EPA-600/3-79-110,
U.S. Environmental Protection Agency, Research Triangle Park, NC, November.
Pitts, J.N., Jr., A.M. Winer, P.J. Bekowies, G.J. Doyle, J.M. McAfee, and
P.H. Wendschuh. 1976. Development and Smog Chamber Validation of a Synthetic
Hydrocarbon-Oxides of Nitrogen Surrogate for California South Coast Air Basin
Ambient Pollutants. Final Report California Air Resources Board Contract No.
2-377, September.
Pitts, J.N., Jr., A.M. Winer, W.P.L. Carter, G.J. Doyle, R.A. Graham, and
E.G. Tuazon. 1980. Chemical Consequences of Air Quality Standards and of
Control Implementation Programs. Final Report California Air Resources Board
Contract No. A7-175-30, June.
Pitts, J.N., Jr., A.M. Winer, K.R. Darnall, G.J. Doyle, and J.M. McAfee.
1975. Chemical Consequences of Air Quality Standards and of Control
Implementation Programs: Roles of Hydrocarbons, Oxides of Nitrogen, and Aged
Smog in the Production of Photochemical Oxidant. Final Report California Air
Resources Board Contract No. 3-017, July.
Pitts, J.N., Jr., A.M. Winer, K.R. Darnall, G.J. Doyle, and J.M. McAfee.
1976. Chemical Consequences of Air Quality Standards and of Control
Implementation Programs: Roles of Hydrocarbons, Oxides of Nitrogen, and Aged
Smog in the Production of Photochemical Oxidant. Final Report California Air
Resources Board Contract.
Stockwell, W.R., and J.G. Calvert. 1978. J. Photochem., 8:193.
U.S. Environmental Protection Agency. 1977. Uses, Limitations, and Technical
Basis for Procedures for Quantifying Relationships Between Photochemical
Oxidants and Precursors. EPA-450/2-77-021a, U.S. Environmental Protection
Agency, Research Triangle Park, NC.
Whitten, G.Z. In press. Comparison of EKMA with AQSM's. In: Proceedings of
the EKMA Validation Workshop, U.S. Environmental Protection Agency, Research
Triangle Park, NC.
Whitten, G.Z., and H. Hogo. 1978. User's Manual for Kinetics, Model and
Ozone Isopleth Plotting Package. EPA-600/8-78-014a, U.S. Environmental
Protection Agency, Research Triangle Park, NC.
Whitten, G.Z., H. Hogo, M.J. Meldgin, J.P. Killus, and P.J. Bekowies. 1979.
Modeling of Simulated Photochemical Smog with Kinetic Mechanisms, Vol. 1,
Interim Report. EPA-600/3-79-001a, U.S. Environmental Protection Agency,
Research Triangle Park, NC, January.
Whitten, G.Z., J.P. Killus, and H. Hogo. 1980. Modeling of Simulated
Photochemical Smog with Kinetic Mechanisms, Vol. 1, Final Report.
EPA-600/3-80-028a, U.S. Environmental Protection Agency, Research Triangle
Park, NC, February.
Winer, A.M., R.A. Graham, G.J. Doyle, P.J. Bekowies, J.M. McAfee, and J.N.
Pitts, Jr. 1980. Adv. Environ. Sci. Technol., 10:461.
COMMENTS
Herbert McKee
I wanted to make a few comments on the ways in which the Empirical
Kinetic Modeling Approach (EKMA) and other mathematical models are used in the
real world. I realize this is a technical meeting, and I certainly don't
intend to take up any time with our administrative and legal problems at the
state and local program level. However, program administrators in air
pollution control have some serious problems in attempting to meet the O3
standard.
The reason I am bringing these problems up is that their solutions can
only be found at the technical level, that is, at the level where these models
are developed and their limits determined and described.
Our past history in this area is not encouraging. In 1971, the U.S.
Environmental Protection Agency (EPA) decided everyone had to use Appendix J,
and they absolutely guaranteed that everyone would attain the oxidant standard
by 1975. The only trouble with that was that in Houston, Appendix J would
have required 110% reduction in hydrocarbon (HC) emissions. When this was
pointed out as theoretically and practically impossible, EPA reverted to
proportional rollback, and that did not work either. So, then they came up
with modified rollback.
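For readers unfamiliar with the term, a simplified proportional-rollback
calculation (an editorial gloss with illustrative numbers, not a
reconstruction of the Appendix J curve) shows how a required reduction above
100% can arise once background levels approach the standard.

    def rollback_percent(observed_ppm, standard_ppm, background_ppm):
        """Proportional rollback: percent precursor reduction needed to bring
        the observed peak down to the standard, after subtracting an
        uncontrollable background. All inputs here are illustrative."""
        return 100.0 * (observed_ppm - standard_ppm) / (observed_ppm - background_ppm)

    print(rollback_percent(0.30, 0.12, 0.04))   # about 69% control
    print(rollback_percent(0.30, 0.12, 0.14))   # about 113%, impossible in practice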
The end result of all this, modeling aside, was that massive monetary
expenditures were made for HC controls. Dr. Basil Dimitriades and Dr. John
Trijonis disagree on the amount of reduction. I can assure you there have
been some very substantial expenditures for control equipment; I have seen the
hardware and I have watched it operate. I think some of Dr. Trijonis's
figures are based on comparing emissions inventories at two different times
that were assembled by different techniques and are not comparable.
Nevertheless, the easy and obvious things to control HC's have been done.
Further reductions in HC emissions comparable to what the measures already
implemented have accomplished are not possible; any further control efforts
involve the smaller, subtler sources that are harder to find and control.
So, now we are trying to use EKMA to determine how much of a reduction is
required.
Some of the discussions today and some of the things I have heard over
the last 2 or 3 years about EKMA remind me of the young engineer who was told
to derive a very quick and very approximate estimate of the capacity of a tank
sitting out on a tank farm. He went out and paced around it so many paces and
assumed he walked 3 ft at a step, and reached up on the side and figured it
was about so much higher than he could reach. Then he went back in with that
beginning data and divided by pi on his calculator to 14 decimal places and
reported the capacity of the tank as 1,751,489.5 gal.
Now, obviously, his input data did not justify the accuracy which he
tried to attach to the results. I think the same thing is happening here to
some extent. The State Implementation Plan (SIP) requirements, the Clean Air
Act, and EPA regulations presume that control requirements will be established
that will exactly meet the standard by July 1, 1987, and that after that, with
the exception of 3 exceedances every 3 years, the highest ozone (O3) anyone
will record is going to be 0.11999 ppm on the maximum day of each year. I
think our experience with reducing HC's but failing to reduce O3 measurably
over the
last 10 years in Houston indicates there is no assurance this is going to work
out that way.
Here are a few reasons I think apply. First of all, there has been some
discussion about emissions inventories. Emissions inventories can, and
usually do, include only what is expected to be emitted into the atmosphere
under steady-state conditions. I am talking now about stationary sources,
the industrial sources which, by and large, control our Houston situation,
operating under steady-state conditions at or near design capacity.
In reality, operating rates vary. A lot of refineries are now running at
70% capacity, and that makes more than a 30% difference in the emissions
compared to running at 100% capacity. Emissions during start-up and shut-down
differ drastically from what they are both qualitatively and quantitatively at
steady state. In addition, fugitive emissions are either not included in
emissions inventories or are included only by some very rough approximating
techniques whose accuracy is not known. Over the past 3 or 4 years, I have
heard estimates of fugitive emissions expressed as a percentage of process
emissions, ranging from 10% to 200%. So somewhere in that range there are
fugitive emissions from leaking valves, from pipe flanges, pump packings,
accidental spills, and sources of that nature. There are thousands of them in
a typical plant.
Temporary malfunctions are also not in the inventory. No one compiling
an emissions inventory can predict, for example, when a large ethylene plant
might lose electric power to a compressor and possibly lose as much as
1,000,000 lbs/h of feed stock for several hours or something like that. Those
emissions are not in the emissions inventory.
However, we are beginning to accumulate some evidence, although
circumstantial, that some of the O3 episodes in Houston may be related to
accidents of this type. The O3 episodes occur a few hours after a major
emergency, in the areas immediately downwind of whatever plant experienced
it.
Another factor is that HC reactivity varies widely. The modeling efforts
in the past have treated everything alike in terms of tons of nonmethane HC's
(NMHC's). I am glad to see we are getting away from that in terms of the
description of chemical reaction mechanisms here. But, obviously, refinery,
chemical plant, and automobile exhaust emissions vary widely in their
reactivity, pound per pound.
Ambient monitoring is uncertain for some of the same reasons, especially
for NMHC's. When the NMHC standard was first promulgated, the Beckman
instrument was the only one available. Beckman has gone out of business, and
about every year someone tells me something different concerning whether or
not it is now possible to measure NMHC's in the ambient atmosphere with any
accuracy or precision. I get a different answer every time I ask someone who
is presumably an expert in the field.
Monitoring of NOX was not very reliable until recently, in part because
there was no standard for NO. There was only a standard for NO2, and the
Jacobs method for measuring that had to be abandoned. In addition, the
chemiluminescence measurement technique was developed too rapidly. The first
instruments we bought after that method was selected proved to be unstable and
unreliable. We are still trying to replace some of those with more reliable
updated instruments. Our NOX data are not very good, either, partly for that
reason.
Chemical reaction mechanisms may vary as has been discussed here in great
detail. Instead of trying to select the Dodge, the Carbon Bond II, or
whatever other mechanism, it is conceivable that a different reaction
mechanism might be needed for the area downwind of one end of the Houston Ship
Channel, for example. The emissions in this area are largely from refineries.
Perhaps a different reaction mechanism is needed for another portion of the
Ship Channel downwind from the petrochemical operations. A different
mechanism might be needed for the feed stock area with its highly reactive
butylenes and those types of materials. The time of the reaction varies,
depending on the meteorological conditions and what the materials are,
olefins, aromatics, or whatever.
Our O3 episodes are extremely variable, both in time and in space. We
have seen high O3 levels over most of the city simultaneously, but more often
we will see high O3 at one monitoring station. Other monitoring stations
will be much lower. I have even seen two monitoring stations 6 to 8 mi apart,
one of which measured over 0.2 ppm O3 and the other one at practically natural
background, 0.04 ppm, 0.05 ppm, or some 03 level like that.
Obviously, some of the precursor sources in some portion of the city are
involved in that one high reading, but certainly other precursors in the
emissions inventory on a county-wide basis can have nothing to do with what
we measure.
In Houston, O3 occurs without the simultaneous occurrence of the high
irritation and photochemical vegetation damage that have occurred in Los
Angeles at comparable O3 levels. And, O3 can occur with or without haze. I
have seen conditions of 0.2 ppm O3 with a visibility of 2.5 or 3 mi. I have
also seen 0.2 ppm O3 with a visibility of 100 mi, if you get up high enough
in the air to see 100 mi to the horizon.
Another factor is transport. On a lot of days we can account for our O3
based on our own precursors. On other days it looks like half, or
occasionally even more than half, of the O3 we measure over Houston results
from precursors coming into the Houston area from somewhere 50 to 100 mi
away. This is based on measurements made upwind of and inside the city.
I could give lots of other reasons, but, in short, the O3 episodes that
occur in Houston, and I suspect in some other cities where you do not have a
mountain barrier to create a basin effect, are too variable for any single
modeling method to describe accurately, even most of the time.
The principle of modeling is that you are looking at a series of events
which qualitatively are identical, or at least very similar. The purpose of
modeling is to describe accurately the quantitative differences from one
time to another. I think we are dealing with a phenomenon here which
qualitatively is different from one episode to the next; obviously, then,
the quantitative differences described by modeling cannot serve as an
accurate basis for predictions unless you account for the qualitative
differences that caused the episode in the first place.
Can these problems be rectified? I do not think so, at least not all of
them. Certainly our knowledge of chemical reaction mechanisms will be
improved, and I am glad to see the knowledge that is being gained in that
area. But, no emissions inventory will ever include an accurate estimate of
the emergency emissions from upset manufacturing processes or emergency
malfunctions.
So, I think the best approach, at least from the standpoint of the local
control agency, is to change the monitoring approach a little. This is
critical to all we are trying to do and to all the use that is going to be
made of whatever models are ultimately developed.
At the present time, modeling is used as practically the sole determinant
of what control strategy is required and how far it has to go. Sometimes
this is carried out to several decimal places, like the engineer I described
earlier who used pi to 14 decimal places.
We got into a big argument years ago over whether certain control measures
proposed for Houston would reduce HC emissions by 0.1% or by 0.2%. If anyone
can tell me how to measure that difference with an O3-monitoring instrument
anywhere in the city of Houston, I would like to hear it. This kind of
argument certainly misses the main point, both of monitoring and of trying
to control O3 in the first place.
Modeling is not an automatic cure-all. It is not reasonable to plug
numbers into a computer and automatically use the results that the computer
feeds back. That is where I think the major fallacy lies, in the way
modeling is being used now, because with that approach, no judgment or
common sense is involved.
Modeling can be very useful, but to be useful I think modeling should be
used as an aid in exercising judgment and common sense, not as a substitute
for them.
That philosophical point, I think, has a bearing on what you, the
technical experts in developing mathematical models, are going to do or can
do. Present efforts seem to be aimed at eliminating all the sources of error
in these models so that you can develop some perfect model that is going to
be accurate, maybe even to 14 decimal places, and everybody can use it. We
can use it in Houston; our counterparts can use it in Philadelphia or
anywhere else they see the O3 problem. They can use this mathematical model
by plugging in the input data, and the computer will tell them that they
need to reduce HC's by 30.78% or whatever the answer is.
I think this is impossible. If models are to be most useful for their
ultimate application of trying to improve air quality, it would be more
helpful if the experts could define the advantages and the limitations of the
various models available and of the various chemical reaction mechanisms that
might be used in these models, and, thus, make it possible to evaluate the O3
problems of a particular locality.
I do not think an approach will ever work where every urban area that has
to control O3 uses the same model, or even different variations of the same
model.
If the advantages and limitations of a series of models are described, we
can pick out certain ones that seem to be applicable to our problem. As I
said before, we might use one for one end of the Houston Ship Channel and a
different one for the other end of the Channel, and still a different one when
the wind blows.
Or, one might use a different one when there is evidence that part of our O3
precursors are coming in from Texas City or Beaumont or somewhere else. Our
counterparts in Philadelphia and St. Louis or wherever else might pick
different models designed to fit their particular local circumstances to
control their O3 problems.
If this could be done, I think models would be most helpful as an aid in
exercising the judgment and common sense that has to go into control
strategies. Anyhow, this would make it easier and much more practical for the
control agencies to get away from what appears to be the present approach of
using models as a substitute for judgment and not as an aid to it.
WORKSHOP COMMENTARY
ESCHENROEDER: Dr. Jeffries, what kind of sample automobiles do you intend to
use? Will they be new versus in-use or aged ones?
JEFFRIES: We've looked at about 10 to 12 vehicles that Ron Bradow has a long
history on.
They have a series of vehicles that they've continued testing over a
long time, so we have a range from which to choose.
My initial tendency would be to take one or two vehicles and focus on
those until we can understand the behavior and performance of those.
There are some problems with measuring HC emissions from the new
catalyst-equipped vehicles. The HC emissions you do get only occur in the
first 506 sec of the federal test. After that, the catalyst is hot and there
aren't really emissions. So you have to take a sample from the tailpipe
during that period, because after that you don't get very much.
The mixture will depend upon how well we can get everything else working.
If it proves to be simple, yes, we can take samples from a lot of them and we
can cover a wide range.
Some of the cars have been misfueled. That is, they have had leaded gas
put into the catalyst system, and they come out way up on the top of the
diagram. Of course, we would like to look at one of those versus one that's
brand new.
The newest Buick available, a 1981 Buick, with double filtering on the
emissions has an extremely low HC emission and extremely low evaporative
emission.
It doesn't look like a very productive vehicle for our purposes. If all
the cars on the road were like that, we wouldn't have any trouble.
So, a series of cars is available that we can examine. We just haven't
decided exactly what we'll look at and at how many, because it depends upon
how difficult it is to do.
CARTER: Do you have any diesels?
JEFFRIES: Yes, some diesels. I don't know that we have plans at this stage
to look at those. That may be for a second year study.
UNIDENTIFIED SPEAKER: There are gas-based organics on those that are really
in question, though—
JEFFRIES: Ron Bradow had to generate 2.5 kg of diesel particulate for it, so
he's got plenty of diesels and so forth.
LLOYD: Do you have any composition data on the evaporative compared to the
tailpipe?
JEFFRIES: Yes. Ron Bradow has a fair amount of data that he's obtained from
a wide range of vehicles, for approximately 54 compounds I think. I've done
some analyses and looked at the relative groupings and behavior. Winter
versus summer gasoline makes a substantial difference in emissions because the
blend is changed to raise and lower the vapor pressure on it, which greatly
affects the evaporative emissions.
It looks like for all the newer vehicles, evaporative emissions
constitute at least 50% of all emissions. However, the question arises of how
to sample evaporative emissions in this case. Our chamber's big, but we don't
want to park an automobile in it. So, the question is how to sample the
emissions when we can't take the whole shed volume.
KELLY: What about a canister?
JEFFRIES: That's an option we're considering. We've discussed getting an
actual canister, using whole gasoline, and going through an equivalent. The
problem is that it's a process; the car starts out hot and as it cools down
you get very strange emissions behavior.
LLOYD: Are you going to try different brands of gasoline?
JEFFRIES: I think in the beginning we'll freeze the gasoline blend. Then, as we go,
we'll see how well things move. All kinds of options are open to us.
Initially, we would like to have enough of a data base to exercise the
critical factors in the models that are being popped up right now and see how
well they track those kinds of behavior.
But many, many issues are open. We will just have to see how well things
go.
McKEE: You mentioned that some of the vehicles had been misfueled. Were they
misfueled over a period of time or just once or twice?
JEFFRIES: I think this one had been so misfueled that the catalyst had burned
out.
McKEE: Okay. The reason I asked is that if you only run one or two tankfuls
of leaded gas and then return to unleaded gas, the catalyst will recover.
I've confirmed this on my own car as part of an experimental program we did.
JEFFRIES: Sure, we heard about the Houston program using police cars.
McKEE: That was a different experiment. In the experiment of the city
vehicles that I drove—
JEFFRIES: The emissions on the misfueled car, a Ford Mustang, were factors of
5 or higher than on the other cars.
McKEE: In my case, I ran one or two tankfuls of leaded gas and then returned
to unleaded gas. In about three or four more tankfuls of unleaded gas, the
emissions had returned to the prior level when measured as THC's by infrared.
Now I don't know whether the composition of the HC had changed, but to an
I&M type screening instrument, the emissions were the same, essentially the
same.
JEFFRIES: I think the issue of misfueling may be something critical we should
look into pretty early in the program. Considering the California experience
in the last gasoline shortage when small adapters went on sale everywhere, I
think we should find what the effects might be.
TRIJONIS: Concerning the aging of the evaporative control systems, the
Aerosol Research Branch of EPA discovered a lot of deterioration in those, so
your mix can change with the vehicle.
JEFFRIES: Yes, but this is a follow-up to another program that we are now
conducting in which we are looking at a data base for reactivity testing for
photochemical models. We have systematic variation in composition and
complexity of the mix over a wide range. This is on both sides of the chamber
at the same time. We've already produced 30 or 40 dual runs in which the
compositions differed and we've substituted toluene versus xylene.
So, there is a preceding simple data base, if you want to think about it
that way. That data base can exercise the models up to the point where if it
can track all of those changes you then give it the automobile exhaust with
its complexity and see if it can also track those changes.
ROMANOVSKY: How do you propose to deal with the question of the dirty chamber
and leaking chamber?
JEFFRIES: If I'm at levels of 6 ppmC and 0.6 ppm NOX, I don't think I have to
worry about dirty chambers.
We will probably be at a high enough level, and we've got plenty of data
on our chambers, that the modelers won't have any trouble dealing with our
chamber. We've got 153 m3 volume in each side, and we've done a whole range
of systematic tests to examine dirty chamber effects.
We don't expect to do experiments at this point at low levels of HC and
NOx. We may make low levels of O3, but that's because of the HC:NOx ratio
problem.
ROMANOVSKY: Dr. Bufalini, I believe, made the comment earlier that evidently
it is extremely important to have the data at that point. As far as Los
Angeles is concerned, we're probably 50 years away from worrying about getting
under point—
JEFFRIES: In my presentation this afternoon, I will discuss the problems
associated with mechanisms and model predictions and experimental data as you
decrease the HC concentration and drop to lower and lower O3 levels. So you
will see some impact of what happens this afternoon.
ROMANOVSKY: I have a short question for Dr. Lloyd. Did you get a chance to
look at and compare the O3 which you generated in the chamber with the O3
measured by the district monitoring station downwind along the trajectory of
the air mass?
Also, did you compare temperatures inside and outside the chamber?
LLOYD: We are gathering data to do that. We have not gathered all the data
together yet, but that is part of the program.
Concerning temperature effects, a couple of days got pretty warm. In
general, the fall was cooler than expected, consistent with the fact that
the O3 wasn't as high as normal. Thus, temperature didn't turn out to be the
problem that we expected. Some days were actually pretty cool.
ROMANOVSKY: It's a concern I've had that the differences in temperature might
make that approach somewhat defective.
LLOYD: On a couple of the days, the temperature was really thought to be too
high. But most days it was cool; it felt like winter on some of those days.
FINAL DISCUSSION
DIMITRIADES: On the subject of chemical mechanisms, the first presentation
you heard was by Gary Whitten. He described the various mechanisms now in
existence, and pointed out some of the differences and some of the advantages
and disadvantages of the various mechanisms. You then heard Harvey Jeffries
talk about comparisons of the various mechanisms against a set of data.
Finally, Bill Carter made a case for consideration of chamber reactions.
Several questions have been raised, and I'm sure some of them have been
answered. But, some questions remain that I think are important to discuss,
or at least to use as a starting point for the discussion.
I guess the first question I would like to raise is: Would it be
desirable and appropriate, or necessary, to define a standard set of data and
a standard procedure by which you can compare those items? I appreciate the
desire of each modeler to define his own mechanism in terms of not only the
reaction pathways but also in terms of a set of data that suit his mechanism.
But, I wonder if this is right. Should we come up with a standard set of data
against which we can compare the mechanisms?
I would like to go back to the question I raised earlier. If we do need
a set of data against which to compare or judge mechanisms, should this set of
data be smog chamber data, ambient air data, or what? In fact, I'd like to go
a bit further, speaking of smog chamber data. Should it be outdoor chamber
data or indoor smog chamber data? We want to discuss conceptual advantages
and disadvantages as well as the practical ones.
The other question I have is this: suppose we do have a mechanism that is
perfect in the sense that it describes the mechanisms of some individual
organics like propylene, butane, or aldehydes. We want to use this mechanism
on the real atmosphere, which consists of a mixture that is undefinable;
that is, we don't know what the composition of the ambient organics mixture
is. How do we get around this problem? If we don't have the composition of
the ambient mix of organics, what good is it to have a perfectly good
mechanism for individual organic components? In other words, how do we use
this mechanistic knowledge in predicting or making judgments of the real
atmosphere?
I think these are some of the questions that I, personally, would
like to raise. I don't for a minute suggest that these should be the only
ones. But perhaps we can use those as a starting point. So, if you agree, I
would like to start with the first question. Do we need a standard set of
data; do we need a standard procedure by which we should judge the
mechanisms relative to each other, and, of course, with respect to their
absolute performance? If we do need such a set of data, do we have it, or if
we don't, how do we go about getting it?
KILLUS: I would like to address the question of chemical mechanism use.
Dr. Whitten and I began developing the Carbon Bond mechanisms quite a number of years
ago. We are right now, as everyone knows, on Carbon Bond III. From the very
outset, we were pretty much like any other kinetic mechanism development group
in that the mechanisms we developed were for our use, and when other people
tried to use them, they had all the standard problems.
Since that time we have developed something of a methodology for making
the mechanisms usable by others. We generally do not reformulate
the mechanisms in a large sense. Carbon Bond III is the third such mechanism,
and we do it about once every 3 years. Between those times, we allow
ourselves about one reasonable rate constant change such that it can be
changed in the airshed model, but we do not reformulate the chemical
expressions or the product yields at all.
Contrary to what Dr. Jeffries said, we don't have 30 different versions
of the Carbon Bond mechanism at any one time. The difficulty we had with Dr.
Jeffries was that we were sending him computer outputs when we were in the
development phase of the Carbon Bond mechanism. During that development we
had 30 different sets of rate constants, but we nailed it down pretty
carefully right after that.
I would suggest that chemical mechanisms should be documented. One
suggested test of a mechanism for someone other than the developers would be
whether it has adequate documentation. We have tried to do that, and strongly
urge other groups to do it. Then, at least, given a standard set of data on
which to prepare mechanisms, one would be able to have others besides the
developers use the mechanism against that standard data set.
So, I believe that a standard data set is necessary. I also believe that
a standard procedure or procedures of documentation and use are necessary.
Those do not exist right now.
DIMITRIADES: What do you mean by documentation?
KILLUS: Well, we have just written what can't be called a manual, but is
something that's close to it. It has been given at least in its draft form to
several users. Dr. Jeffries sent it to someone in Australia who, about 2 days
after receiving it, was able to prepare a simulation of a smog chamber we had
never seen, using a HC mix we had never seen; in fact, we still don't know
precisely what was done in it. But, the user was
reasonably satisfied. He said the mechanism worked pretty well and the
documentation was such that he could prepare the simulation without—
DIMITRIADES: I presume that includes getting data as well, rate constants?
KILLUS: Of course. In general, despite what Dr. Whitten said about being
able to fix mechanisms when you're allowed to alter the rate constants,
once you've allowed mechanisms to go out into the general user community,
that is no longer true. Chemical rate constants should certainly be
fixed, and that's really what you're talking about when you say a mechanism.
The differences between a Demerjian mechanism and the Cal Tech mechanism
and the ERT mechanism have more to do with rate constants and stoichiometric
yields of organics than any other single feature. In fact, I've demonstrated
that at least some mechanisms can be made to look very much like other
mechanisms if you're allowed to adjust several rate constants.
I took the Cal Tech mechanism about a year and a half ago and prepared an
isopleth diagram that was almost identical to a Carbon Bond mechanism isopleth
diagram; whereas, as Dr. Jeffries showed you, the original isopleth looks
quite different.
DIMITRIADES: To make a judgment on the mechanism, should we have a bunch of
experts, who are familiar with the latest kinetic data information, make a
judgment of whether those rate constants are right or wrong? Or rather
than do that, should we develop this judgment by comparing the mechanistic
predictions against data from a second smog chamber?
KILLUS: Not rather than, but that is another criterion, that is, a mechanism
that can be used by experts to give reasonable smog chamber simulations is one
thing; a mechanism that can be used by experts to give reasonable smog chamber
simulations and one that can also be used by novice modelers to give
reasonable smog chamber simulations is quite another.
At present, the documentation does not exist for mechanisms to undergo
such a test. If you believe that the Carbon Bond mechanism has been
adequately documented, then it is the only one, and even that documentation is
only 4 months old. We can't possibly be sure yet.
DIMITRIADES: Somebody suggested earlier that complexity is a factor that
varies with the mechanism. The mechanisms that we have now are quite a
bit different in terms of number of reactions or complexity. So then, the
complexity, that is, the detailed description of the pathways, as well as the
set of kinetic data used, are the two elements or aspects of the mechanism
which can differ from mechanism to mechanism. My question, then, is whether it
is advisable to use one set of kinetic data for all mechanisms if you just
want to evaluate the pathway-complexity factor for the various mechanisms.
KILLUS: Do you mean rate constants?
DIMITRIADES: Yes.
KILLUS: For inorganic rate constants, the reactions are pretty similar from
mechanism to mechanism. That is very close to the case right now. That is,
if you look at the inorganic reaction rate constants for the Carbon Bond
mechanism, the Cal Tech mechanism, and so forth, they're almost identical.
There may be 5% difference here, 10% difference there, but not a sufficient
difference to perturb the mechanisms.
How you go about formulating your organic steps, however, is very
different. There is no way to regularize the organic components of mechanisms
because they treat different species and use different methods of handling the
HC's.
The methodology I used to take the Cal Tech mechanism and turn it into
sort of a carbon copy of the Carbon Bond mechanism involved a rather
complicated analytical procedure that mapped the organic components of the
Cal Tech mechanism, in their molecular form, onto similar sorts of components
in the Carbon Bond scheme. That is not something that can be easily done.
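A rough sketch of this kind of molecular-to-lumped translation follows. It is
only an illustration of the principle: the group names echo the Carbon Bond
idea of classifying carbon atoms by bond type, but the per-molecule
apportionments are simplified assumptions, not the actual Carbon Bond III
assignments.

    # Hypothetical sketch of molecular-to-carbon-bond lumping; the
    # apportionments below are illustrative assumptions only.
    SPECIES_TO_GROUPS = {
        "propylene":    {"OLE": 1, "PAR": 1},  # one olefinic unit, one paraffinic carbon
        "n-butane":     {"PAR": 4},            # four paraffinic carbons
        "formaldehyde": {"ALD": 1},            # one carbonyl carbon
    }

    def lump(mixture_ppmc):
        """Convert {species: ppmC} into lumped {group: ppmC}."""
        lumped = {}
        for species, ppmc in mixture_ppmc.items():
            groups = SPECIES_TO_GROUPS[species]
            total = sum(groups.values())
            for group, count in groups.items():
                # apportion the species' carbon among its bond groups
                lumped[group] = lumped.get(group, 0.0) + ppmc * count / total
        return lumped

    print(lump({"propylene": 0.3, "n-butane": 0.4, "formaldehyde": 0.05}))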
JEFFRIES: About half way through my project, I thought about what I was doing
and I got really scared. Only half way through. It took me that long to
really understand what I'd gotten into.
I had been dealing with Dr. McRae, I had been dealing with Dr. Demerjian,
I had been dealing with SAI, I'd been dealing with Dr. Dodge, and there is not
very much support for putting somebody else's mechanism up. If you're not
careful, you can really get in trouble. We spent a lot of effort just trying
to figure out whether we had the mechanism or not. When I talked about the
different versions of Carbon Bond, I had different computer print-outs, and as
Dr. Killus said, the problem was that it was undergoing all kinds of change.
The same thing is true of Dr. McRae's mechanism.
There was only one mechanism I was sure I had and that was Dr. Dodge's.
I knew I had hers because I looked at a computer code where it was built in
and had been solid for years, and I could pull it out. Everybody else changed
everything while I was doing the work.
I will have to differ with Dr. Killus on one point though. Although the
manual is out and the mechanism is there, and though it may turn out that
Carbon Bond III is identifiable in the same sense that the Dodge mechanism is
identifiable, I suspect if you ask him for a copy of Carbon Bond III and you
look at one that I've drawn up from this manual, it's not quite the same.
Then you get worried about these little fine details of differences. That is
because he is still tinkering and playing and whatever. So, I agree that
there is a need for documentation or you're going to really get in trouble.
I wouldn't do it over again. It requires too much effort and too much
energy to take something that people have sunk a lot of time and effort into,
in too many different ways, and to test his system and his system and
his system. It's not worth it.
DIMITRIADES: As an afterthought, would you like to have used the same input
information, that is, the same assumption for light intensity in the smog
chamber, the aldehyde concentrations, etc., and see how the mechanisms would
compare with each other?
JEFFRIES: That's not a legitimate test of the mechanism because he made
choices when he did his, he made choices when he did his, and he made choices
when he did his. And the choices are not independent choices. As soon as you
say—
DIMITRIADES: But there's only one light intensity. Of course, I appreciate
the problems. I was just wondering whether use of the same input would be
desirable just for the purpose of judging the relative performance of the
mechanisms, not in an absolute sense, the relative performance. Would that be
useful?
JEFFRIES: It's a nonlinear process, and so many factors are important. It
may take you 3 months of poring through runs and details and integrated rates
to figure out why, under one condition, one mechanism made more 03 than another did.
It's a very complicated process. The first thing you have to know is whether
you've got the right mechanism. Just having the right mechanism isn't enough,
so the other things you have to know are what light intensity he used, what
distribution light intensity he used, and so forth.
BUFALINI: When you did your original model calculations, you compared against
the Bureau of Mines chamber data when you tested Dodge?
JEFFRIES: Yes.
BUFALINI: Carbon Bond II, Demerjian?
JEFFRIES: Yes.
BUFALINI: Is there a sufficiently adequate data base from the Riverside smog
chamber to test the three or four mechanisms to see whether any one of them
would work better?
It seems like when you were comparing those mechanisms some were working
satisfactorily at one concentration, and others were working better at some
higher or lower concentrations.
I would assume that the newer data that Riverside has obtained have less
scatter, but are they sufficient?
JEFFRIES: Each smog chamber has its own kind of uncertainties, and you almost
have to know the operator of a smog chamber on a first-name basis. You
certainly have to be able to call him on the telephone 25 times a week if you
model his runs.
BUFALINI: Okay, that would be my next question then. Is there no end to
getting smog chamber data? You're continuing to get some more. I know there
is the ERT work that's supposed to be testing EKMA. Are these data going to
be satisfactory or what?
JEFFRIES: There has to be a very hand-in-glove cooperation between the guy
who's doing the modeling and the model development and the guy who's doing the
smog chamber runs.
If I sit down and say, these sets of runs are the ones I'm going to do to
test the mechanism, and they're not the ones that are important at all, they
don't separate the effects that are in the mechanism—
BUFALINI: So let me get back to what Dr. Dimitriades said. Maybe the whole
problem is that he measured his light intensity with his technique, and he
measured the aldehydes with his technique, etc. I think the problem with Dr.
Dimitriades' data is that the reproducibility is so bad.
JEFFRIES: There were unmeasured parameters. Dr. Dimitriades, at the time he
did that, used everything he could get his hands on and he did it the best he
could. A lot of things have changed since then.
BUFALINI: Yes, but the new data—
JEFFRIES: That Riverside—
BUFALINI: —they're obtaining are not going to be that good.
JEFFRIES: No, I take it back. That's not quite the same. Is there an
existing data base that's adequate for separating mechanisms? The answer to
that is no, probably not.
BUFALINI: My next question is—
JEFFRIES: Can you run experiments so that you exercise mechanisms? The
answer to that is yes. That's exactly what we're doing. But, you can't
necessarily say that there's a collection of existing data which is
adequate for separating out the mechanisms.
MEYER: Could one come up with a list of, let's say, 8 or 10 different factors
that are important in the different mechanisms, and then check off that list
to make sure a set of experimental data accounts for them? Is there a finite
number of factors one could consider
in designing these experiments to fairly evaluate existing mechanisms?
DIMITRIADES: We know some factors, yes.
JEFFRIES: I didn't say that you couldn't collect together a bunch of smog
chamber runs from various conditions and whatever, that a decent mechanism
ought to be able to model those to some degree. That is possible.
What happens, though, as you discover when you start doing an application
with the new chemistry, is that there is some aspect of the chemistry that
needs exercising and the data base you have doesn't exercise that aspect
enough. So he calls me up and says, can you run an experiment in which you do
this and this and then follow it up with this? And we say okay.
Two weeks later, he gets the numbers back and he now can exercise that
piece of the mechanism. It's that kind of operation that's going on. The
point is, this work is going on now. I mean, EPA is not sitting here doing
nothing. There is a lot of this kind of stuff going on. What happens is—
MEYER: How many different mechanisms would you say are viable?
JEFFRIES: What happens is that I get paid to run experiments; I don't get
paid to spend a whole year documenting them and laying them out in beautiful
details and making it easy for anybody who wants to use it to do it. It turns
out, that's a big effort. You stick your neck out when you do it because what
you discover is, your calibration factor was off a little bit on that run back
there and somebody's gone off and modeled it. You get into all these kinds of
problems.
So, the documentation of both the mechanisms and of the data used is a
real hang-up. What happens is a lot of informal arrangements, whereby the
work gets done, with few formal operations.
TRIJONIS: First, I would just like to reiterate the comment that Jim Killus
made before, that I don't think it's feasible to try to validate the different
mechanisms against ambient data, because of the uncertainties in the ambient
data.
You don't know if the disagreements are caused by the mechanism or by the
meteorology which might not be well known, or by emissions, or what.
I think it might be useful to form a committee or group to select four or
five sets of smog chamber data that would be appropriate for validating
various EKMA mechanisms. They could choose some Bureau of Mines data, the
North Carolina data, the University of Riverside data, whatever. That way the
people who are familiar with the data and with the needs of EKMA validation
studies could select the data sets they think would be most appropriate for
validating EKMA.
Also, in our previous work, we spent a lot of time and effort compiling
fairly careful historical trend data for Los Angeles. I think once we've
compiled this it might be a fairly quick and convenient procedure for people
examining the alternative mechanisms to just do a validation study using that
historical trend data. They could generate a standard isopleth with their
mechanism — I don't know if it's Level I or Level IV, the most simple type of
analysis. It would be a fairly simple calculation to use the historical trend
data in a routine way for validating or for checking out the various
mechanisms, or at least for comparing them, to see which is performing
better.
I see at this point two possibilities. We could spend some time deciding
what the best smog chamber data to use are. That would probably be the most
extensive test. Or, we could do a quick test using some of the historical
trend data that we've compiled in our previous studies.
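To make the suggested quick test concrete, a minimal sketch follows. The
isopleth surface, base-year point, and observed values are all made-up
placeholders standing in for a mechanism-generated surface and the compiled
Los Angeles trend data.

    # Sketch of the quick historical-trend check; the surface and every
    # number here are placeholders, not real Los Angeles data.
    def o3_surface(nmhc, nox):
        # stand-in for a mechanism-generated EKMA isopleth surface (ppm)
        return 0.14 * nmhc**0.6 * nox**0.2

    base_nmhc, base_nox = 1.0, 0.10          # assumed base-year precursor levels
    trends = {1964: (1.00, 1.00), 1970: (0.85, 1.15), 1978: (0.71, 1.35)}
    observed_o3 = {1964: 0.23, 1970: 0.22, 1978: 0.20}   # placeholder observations

    for year, (f_hc, f_nox) in sorted(trends.items()):
        predicted = o3_surface(base_nmhc * f_hc, base_nox * f_nox)
        print(year, round(predicted, 3), observed_o3[year])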
CARTER: What parts of the mechanism should be held constant and what parts—
DIMITRIADES: The reaction pathways?
CARTER: Yes, basically, I mean the parts of the mechanism that should be held
constant are the parts that we have the basic laboratory data about, for
example, most of the inorganic reactions. But, let's not restrict it to the
inorganic reactions.
We know for practically all major HC's, simple HC's anyway, the primary
rate constants for decay. And for formaldehyde, it looks like we've got a
pretty good handle on its beta rate of—
Maybe the panelists should establish what these things are. Assuming
that there is no evidence that something is wrong with the old data, I mean
the basic laboratory data, this is what is used in all mechanisms. The
modeler should not have the freedom to change or to adjust any known rate
constant to fit the data.
DIMITRIADES: That's my concern.
CARTER: —be honest; those are just due to the adjustable parameters. Let's
not try to pretend that you're doing something chemical.
It is disturbing in a way, when you're talking about the Dodge mechanism,
for example, to identify it with a person. Really, there's only one chemistry
that's going on. The problem is that there are a whole bunch of different
compounds. Also, not only are there a whole bunch of different compounds, but
there are for many of the compounds unknown processes involved in the organic
mechanism.
I think the best approach would be to try to validate, try to get a
chemically correct mechanism for these individual HC's. Certainly not all 50
million of them, but at least selected representatives of the different kinds
of chemistry you have, like alkenes or oxygenates, things that are different.
Then, concerning the ambient air data, one could try to look at detailed
mechanisms by combining them and seeing how best to represent the complicated
mixture by what ratio of the compounds. Or alternately, one could go to
lumping schemes that are consistent with basic laboratory data and don't have
things like 03-olefin going to 100% radicals, which we know is not true. We
shouldn't have that degree of freedom.
Another problem is the ongoing study of the individual reactions. The field
is dynamic, and every now and then a new measurement comes along; all of a
sudden another rate constant is known, and that can make somebody's
mechanism invalid because he has the wrong rate constant as a meaningful part
of it.
In a way I don't like the idea of freezing mechanisms, but I can
understand the practical problems. I think the best approach might be to have
some sort of ongoing panel that evaluates these things. For example, if some
rate constant changes by 10% and it's not critical, they should just stick
with the old one. But, say the HO2 + NO reaction changes by an order of
magnitude or so and it's an important reaction. They shouldn't continue on
ahead and use the model which has the wrong rate constant for control
strategies. Nobody would really believe you. If you're going to do that, you
might as well stick with an engineering approach to start with.
Now, as far as complex HC's, that is, the complex mixes in ambient air, I
think it's important to have some idea of what compounds are present in the
ambient air so you know how best to try to represent it in your models.
Ideally, you would have a detailed model with as many as 30 different
compounds and you'd test it against more simplified mixtures using
calculations. But, of course, before that's worth bothering with, you need to
know what is present in the ambient air.
There is another way to approach it, which is more empirical but maybe
more practical. I think these trapped air type smog chamber experiments like
Dr. Lloyd was talking about might be fairly useful provided that there are
enough control experiments to characterize the chamber effects.
The thing is, with those experiments, you shouldn't just take the
isopleths or whatever is derived from them and apply them directly to the
ambient air. You should look carefully at the control experiments and then
try to model those results with your model, using the chamber effects that you
get from your control experiments, whatever kinds of control experiments you
want, like NOx-air irradiations. The control experiments should be appropriate
in kind and number for that purpose.
But then you can use that to validate your representation of the HC
mixture, which then you would apply to the ambient air. With all the problems
with ambient air, as mentioned before, it sounds like an impossible task, at
least presently, to validate that. Probably, at present, it's a waste of
money, really, to spend too much of an effort to try it.
DIMITRIADES: To comment on your last subject, the complexity of the ambient
air mixture: in a couple of experiments that were done recently, automobile
exhaust was irradiated in a chamber, and a synthesized exhaust mixture was
irradiated alongside it. There was a big difference in results between the
real-exhaust and synthesized-exhaust tests, suggesting that the simulation
wasn't correct.
CARTER: Well, there was something you didn't measure that was important.
DIMITRIADES: Right. I guess I would like to invite Alan Lloyd to have CRC
consider an experiment, or a study, in which you could put some ambient air
mix in a bag, and simulate the air mixture in another, and do such experiments
that would either verify or refute whether you can correctly simulate the
ambient air mixture based on present knowledge of the ambient air composition.
MEYER: I think the single finding expressed here today that is of greatest
concern is the dependence of this Delta 03 over Delta HC ratio on the choice
of mechanism.
I would like to suggest that some efforts be made to define compositional
limits, I guess, within which some of these mechanisms are compared. For
example, in the work that we did in St. Louis, we assumed very low aldehyde
fractions, simply because that is what the emissions inventory suggested was
there. I have a real question as to whether or not some of the findings in
that study, particularly considering that some of the mechanisms are
apparently very sensitive to the amount of radical initiators at low HC:NOX
ratios, are an artifact of the fact that we selected very low aldehyde
compositions for some of the mechanisms that explicitly consider that.
I am wondering if the difference perhaps might not be so great if one
were to choose a range of initial aldehyde compositions, let's say, between 5
and 10% of the total NMOC, if that's a realistic range based on atmospheric
observations. I'm wondering if one were to do that, whether one would still
find this dependence of Delta 03 over Delta HC on the different mechanisms, or
whether a lot less difference occurs.
So, I guess one suggestion I have is that some of the research should be
directed toward better characterizing what's out there in the atmosphere so
that we can make more reasonable assumptions in some of the smog chamber
experiments and in some of the modeling.
JEFFRIES: I didn't show you Level III with the same kinds of slides, the same
kind of data. Of course, at Level III we assumed a 2% composition based on
Ken Demerjian's input for initial aldehydes. And, of course, the initial
aldehyde assumption coupled with two other assumptions going between Level II
and Level III cause all Level III simulations to overpredict the 03. But,
they also exhibit quite different Delta 03 versus Delta HC between mechanisms,
days, whatever.
So, the short answer to your question — changing the aldehydes from 2%
to 8% — is not all there is. You're looking at fundamental treatments of
chain length and other factors within the mechanism that are responsible for
the slopes, and it's a choice of the assumptions that were made at the time
the mechanism was constructed.
KILLUS: I'd like to make a point on this Delta 03 over Delta HC. As Dr.
Jeffries mentioned, in St. Louis the Carbon Bond mechanism seemed to have the
largest Delta 03 per Delta HC, that is, purportedly the greatest effectiveness
of HC control in the EKMA model.
This seemed to be because the Carbon Bond mechanism was somewhat more
sensitive to HC's, and, therefore, you get a greater degree of 03 control.
However, that particular thing does not show up in the airshed model. The
airshed model is far from predicting the 30 to 40% control of HC that's
required for St. Louis. In fact, the airshed model is predicting more like
60% or 65%, even 75%, control of total HC.
If you look into it, you discover that probably the reason for this is
that in the airshed model there are background HC's. Not a lot, however. In
fact, when we were looking through and preparing inputs for St. Louis, we very
carefully limited those to fairly small concentrations, about 0.03 to 0.05
ppmC of HC. But, that extra little additional HC is sufficient to give you
enough reactivity so that because the Carbon Bond mechanism is more sensitive
to the HC's it then gives you more 03. Well, if you have a greater background
reactivity, you have to decrease emissions proportionately to make up for
this.
In this particular case, the more complex model wound up giving the same
answer as the simplified model, but for different reasons. If you only have
one of those effects, in the simplified model or not, you can have an
erroneous estimation of the degree of control necessary.
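Killus's proportionality point reduces to simple arithmetic, sketched below
with assumed round numbers: a fixed background raises the cut required of the
controllable emissions.

    # With a fixed background HC, the controllable emissions must be cut
    # by a larger fraction to reach the same total-HC target. All the
    # concentrations here are assumed numbers, not St. Louis data.
    def required_emission_cut(total_hc, target_hc, background_hc):
        """Fractional cut in emitted HC so that emitted + background
        reaches the target total."""
        emitted = total_hc - background_hc
        return 1.0 - (target_hc - background_hc) / emitted

    total, target = 1.0, 0.60    # ppmC, assumed present and target totals
    print(round(required_emission_cut(total, target, 0.00), 3))  # 0.4 with no background
    print(round(required_emission_cut(total, target, 0.05), 3))  # ~0.421 with background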
JEFFRIES: I think any modeler who is going to put his name on his mechanism
is going to be careful enough to make choices that are pretty reasonable in
the beginning. In other words, the driving force for picking out of the
literature the information that could be best justified, and so forth, is
already there. The problem that we had here was that when
we looked around for mechanisms which had been published in the literature and
had some potential for being used, we found in existence some mechanisms that
effort and work had been put into some time ago but hadn't been worked on
since. The Demerjian mechanism hadn't been looked at for about a year or two,
ever since he did the Houston work.
There was a mechanism that people knew about. They knew he'd worked on
it, they knew he'd published it, and that it would potentially be available.
He cooperated by saying, you can have my mechanism. But, he wasn't then going
to go through the mechanism and change all the rate constants and make up all
the other choices and so forth.
So one of the problems we have to deal with is that we're constantly
dealing with evolving, ongoing research projects that are constantly putting
out mechanisms. But, that's one thing. Dealing with what a state agency is
going to pay a contractor to go find a mechanism and put into his model is
another thing.
It's a question that goes back to the documentation issue. Whatever
mechanism is documented and made easy to use is going to be the one used.
MARTINEZ: The moment you have several mechanisms, the question has to come
up: how do they compare? To speak directly to the question that was raised, I
think it is absolutely desirable to have a data base that is used to run all
these mechanisms and to compare them against. You cannot compare a mechanism
developed on the University of California-Riverside data against a mechanism
developed on Bureau of Mines data, using those data. You have to run them
on the same data base so you aren't comparing apples and oranges.
You should come up with a standard data base that may be updated
periodically, and if you have more than one mechanism, then that data base
would be used to compare them.
DIMITRIADES: How far do you go in defining this standard data base? For
example, take the University of California-Riverside chamber data and the
Bureau of Mines data. The data base includes observed rates or yields as well
as light intensities. Should it also include the same kinetic rate constant
values?
MARTINEZ: No, I'm talking about the measurements, the experimental data base.
Each model will have its own rate constants and kinetics. Like Dr. Jeffries
and Dr. Carter say, that varies. But, I'm talking about some chemical
knowledge that everybody agrees on.
The moment you get into lumping, whether it's molecular lumping or Carbon
Bond lumping, you run into a mine field. The choices are open to reasonable
differing assumptions. You don't touch the mechanisms themselves; you just
define the data with which they will be run and compared, I think.
JEFFRIES: You make the data easily and readily available, and the guy who
produced it has to be available to answer questions about it. He has to make
it easy for someone who's developing a mechanism to get hold of the data, put
them into his computer, and model them. He has to have a number to call up
and say, I did this and this number down here just doesn't make any sense;
what do you know about this number?
CARTER: I have another comment. They talk about comparing mechanisms or
models. I would think it would be more useful, instead of comparing the
different models as a whole, to look at the parts of them. Presumably, if
they're all current models, we know the chemistry would be the same in all of
them. We should be looking at how they differ, looking at the individual
assumptions that differ, for example, the radical source question. Obviously,
different models make different assumptions about that.
But, that's not the only one. Take the aromatic mechanisms: there are
different reaction ratios. The point is to break it down to which parts the
differing assumptions are in.
I think we're getting to the point in our knowledge of the chemistry, not
for lumped models, but at least for the molecular models, where the
uncertainties are within a manageable number and a great bulk of the
parameters are known and fixed.
I think it may be more useful, instead of looking at the effect on the
isopleths of one whole mechanism versus another, to see how these mechanisms
differ in these 10 or so different ways. We can then do calculations to see
the effect of these 10 different parameters, and then maybe of these in
combination, since obviously there would be synergistic effects.
It's a much more difficult process, but at least that way you may
understand what the sources of the differences are, which differences are
important, and where you need to do the basic research to find out what's
causing them.
DODGE: Or determine what smog chamber studies should be run in order to
determine if the mechanisms are behaving.
CARTER: Yes, smog chamber studies or basic studies, any sort of study that is
sensitive to that particular uncertainty.
JEFFRIES: I agree with that. A part of that is running the models to find
out what the model does or doesn't do so you know what to do in the smog
chamber and go back and test the model.
DODGE: The models are sensitive to what? Figure out those pieces and then
it's very easy to design smog chamber studies that will determine whether or
not that mechanism is correct and performing that way.
CARTER: That aspect of the mechanism, yes, that element of it.
DIMITRIADES: In other words, you develop some hypothesis concerning the model
performance and then you design the experimental data set to test the
hypothesis?
CARTER: Yes. The fact that reasonably competent modelers are using more than
one model reflects the fact that there are uncertain aspects of the
mechanisms about which they make different assumptions.
Just looking at these different aspects, maybe someone has made a
different assumption about a radical reaction that's negligible in importance
anyway. Then you don't have to bother about that; the modeler might as
well put in something that is at least consistent rather than something
entirely different. But if an aspect is sensitive, he should try to
identify it.
JEFFRIES: The subtle difference, Dr. Dimitriades, is that you use the model
to design the experiment instead of doing the experiment to test the model.
BUFALINI: Is there going to be a consensus of what the sources and the
points—
CARTER: Or whether they even exist.
BUFALINI: Obviously this is a problem, because before you start modeling smog
chamber data, it seems somebody uses HONO, somebody else uses just an OH
source, and others use a combination. There is some talk of perhaps having
formaldehyde come off the chamber. That seems like a good starting point—
CARTER: Dr. Dimitriades, as mentioned before, in the last several months or
year, a very large proportion of the grants we've been working on are
basically aimed at trying to understand this particular problem. This problem
is critical to smog chamber experiments, and it may turn out to have relevance
outside the smog chambers.
BUFALINI: It's always bothered me that you conveniently put down 8 or 10 ppb
of HONO, since the system contained a lot of 03 the day before when you had
used the chamber. It seemed to me that everything should have been oxidized very
nicely if one had HONO. Similarly, it would bother me if someone were to use
a formaldehyde coming off the walls, because the formaldehyde would probably
have been left over from the previous run, I would assume.
CARTER: Also, you can measure the amount of formaldehyde in there.
BUFALINI: The problem is you do measure it, though. When you turn the lights
on in most of these chambers, unless it's an artifact from the system, if you
used atropic acid you will see a little bit of formaldehyde.
CARTER: Well, in our smog chambers, at least now that we've got the technique
down better, you don't generally see enough to matter, certainly not
enough to account for the radical source.
Another thing, you mentioned initial HONO. We do have actual direct
evidence that initial HONO is present; it seems to be formed heterogeneously
in the dark from the NOX that we put in there. We don't know what the
mechanism is.
BUFALINI: This is in the smog chamber?
CARTER: Yes. It seems not to be formed in the bulk — I mean, not homogeneously.
But, once you've got the NOX in, there's a little bit to start with, then if
you wait awhile, it goes up. We've measured it directly.
BUFALINI: Yes, but presumably you get HONO even if you don't put NOX in there
originally. If you just put in an inert compound, I mean a paraffin,
presumably when you turn the lights on that paraffin will disappear.
CARTER: Yes, but that may not necessarily be HONO; that may be the continuous
source. I don't know about that.
DUNKER: I have two comments. One relates to the discussion of documentation
of the mechanisms. I think that any mechanism that's being documented should
include a test case with all the various inputs so that someone can run this
test case on the computer and be sure they've got the right mechanism. I make
this comment because recently I was coding up the Carbon Bond II mechanism and
I looked at tables in the 1979 and 1980 reports. I just happened to also look
at an appendix to Volume II of the 1980 report in which they had a computer
output listing, and I discovered five mistakes in the table in reactions which
do not balance carbon. In the latest report I got from Gary Whitten, dated, I
guess, November 1981, those mistakes apparently have been corrected, as far
as he knows. But anyone who
has coded a Carbon Bond II mechanism from the 1979, 1980 reports, and used
Table 36, should be aware that there are five reactions in there where
products are missing.
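An automated check of the sort that would catch such table errors is easy to
sketch. The species carbon numbers and the sample reactions below are
illustrative, not taken from any Carbon Bond report.

    # Sketch of a carbon-balance check for coded reactions. Species
    # carbon numbers and the example reactions are illustrative only.
    CARBONS = {"PAR": 1, "OLE": 2, "ALD2": 2, "OH": 0, "HO2": 0, "NO": 0, "NO2": 0}

    def carbon(side):
        """Total carbon on one side, given as (yield, species) pairs."""
        return sum(coef * CARBONS[species] for coef, species in side)

    def balances(reactants, products, tol=1e-6):
        return abs(carbon(reactants) - carbon(products)) < tol

    print(balances([(1, "OLE"), (1, "OH")], [(1, "ALD2"), (1, "HO2")]))  # True
    print(balances([(1, "OLE"), (1, "OH")], [(1, "HO2")]))   # False: product carbon missing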
The second comment relates to the sensitivity of mechanisms. I think
techniques certainly have been developed recently — for example, Greg McRae
and John Seinfeld have done it for their mechanisms — where you do a
straightforward sensitivity analysis of the mechanism, looking at reaction
rates and initial concentrations, and you isolate which reactions have rate
constants that strongly affect the species of interest. Those techniques are
published in the literature.
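The flavor of such an analysis can be sketched with a brute-force perturbation
scheme. The two-step toy mechanism below is an illustration, not any of the
published mechanisms, and the published techniques are more refined than
simple finite differences.

    # Brute-force local sensitivity: perturb each rate constant by 10%,
    # rerun, and report the normalized change in the species of interest.
    def run(k1, k2, a0=1.0, dt=0.01, t_end=10.0):
        """Toy mechanism A -> B -> C (forward Euler); returns final C."""
        a, b, c = a0, 0.0, 0.0
        for _ in range(int(t_end / dt)):
            ra, rb = k1 * a, k2 * b
            a -= ra * dt
            b += (ra - rb) * dt
            c += rb * dt
        return c

    base = {"k1": 0.5, "k2": 0.05}
    c0 = run(**base)
    for name in base:
        perturbed = dict(base)
        perturbed[name] *= 1.10
        sensitivity = (run(**perturbed) - c0) / c0 / 0.10   # d(ln C)/d(ln k)
        print(name, round(sensitivity, 3))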
CARTER: One other thing about this problem of documentation and validation of
the model. What might be useful for control purposes and what the funding
agency can do is decree some sort of format for transmitting these things in a
computer-readable manner. Then we could, instead of having to go to the
trouble of hunting it up in the table, just send off for a computer tape. It
wouldn't be that difficult to modify whatever different computer programs we
have as long as there was some well known, standard format for transmitting
this information. It would save an awful lot of labor.
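A minimal sketch of such an interchange format follows, with JSON standing in
for the "computer tape" and a hypothetical schema of names, rate constants,
and product yields.

    # Hypothetical minimal schema for transmitting a mechanism in a
    # machine-readable form; the reactions are illustrative only.
    import json

    mechanism = {
        "name": "demo-mechanism",
        "reactions": [
            {"k": 2.0e4, "reactants": ["NO", "O3"], "products": {"NO2": 1.0, "O2": 1.0}},
            {"k": 8.0e-3, "reactants": ["NO2"], "products": {"NO": 1.0, "O": 1.0}},
        ],
    }

    text = json.dumps(mechanism, indent=2)   # what would go out on the tape
    assert json.loads(text) == mechanism     # read it back and verify, as suggested below
    print(text)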
MARTINEZ: There is no guarantee that the computer tape will have the right
reactions, anyway.
CARTER: The people who develop the mechanism would, as part of their test,
have to put it on the computer tape, use the program that reads it back off
again, and make sure it's the right one.
DIMITRIADES: Let's switch now to the other subject that is equally important.
This is the one related to the complexity of the ambient organic mixture.
I have two questions. One is, is it imperative that we have a smog
chamber data set for auto exhaust mixtures or for ambient mixtures, in
contrast to synthetic mixtures? Do we need one such data set? We do have
one, the Bureau of Mines set. Perhaps we need another more comprehensive,
more complete one. But the question is, do we need a set of exhaust or
ambient air chamber data?
The other question is, again, one that I have already raised. Suppose we
do have mechanisms and models which predict synthetic mixture data very well
in small chambers but do not predict exhaust data or ambient air data; then
what do we do? Do we continue these efforts to further identify and
characterize the ambient mixture in the hope that we have a better
understanding of the composition and that we will be able to better simulate
those mixtures in the smog chamber? Or, do we want perhaps to tune the model
to make the mechanism agree with the smog chamber data on exhaust mixtures or
ambient air mixtures?
KILLUS: I can tell you how we handled a similar problem in at least a
hypothetical circumstance. Joe Bufalini in the early 1970's, I believe, ran
some experiments in which he tried to compare Los Angeles bag samples with
some synthetic laboratory mixtures. He observed that the ratio of conversion
of NO to NO2 versus HC decay varied substantially between these two
situations. He suggested at the time, in fact, that in Los Angeles air there
was a reactive compound that was not measured by their GC column. He
hypothesized that this compound was aldehydes.
I went into that and calculated the amount of aldehydes necessary to give
that difference in reactivity. It turned out to be only 10% of the carbon
mixture, the HC mixture. It's fairly easy for an aged mixture, although this
took place at 8:00 a.m., to have in excess of that. The point is, however,
that if your emissions mix only has 1% aldehydes, it's very difficult to get
up to 10% by 8:00 a.m. We take that as fairly strong evidence that if there
isn't some unknown source of aldehydes in the Los Angeles air, there certainly
would be some unknown source of something that's giving an additional
reactivity boost to the HC mix. And, we have said that the simplest thing for
you to do is just believe that that is aldehydes and add that onto your
emissions inventory.
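The style of back-calculation Killus describes can be sketched as follows; the
rate constants and the assumed 30% reactivity boost are placeholders, not the
numbers from the original analysis.

    # Treat mixture reactivity as a carbon-weighted mean OH rate constant
    # and solve for the aldehyde fraction x reproducing an observed boost:
    #   (1 - x) * K_MIX + x * K_ALD = (1 + boost) * K_MIX
    K_MIX = 3.0e3    # assumed mean rate constant of the measured HC mix (ppm-1 min-1)
    K_ALD = 1.4e4    # assumed mean rate constant of aldehydes

    def aldehyde_fraction(boost):
        return boost * K_MIX / (K_ALD - K_MIX)

    print(round(aldehyde_fraction(0.30), 3))   # ~0.08, on the order of 10% of carbon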
Now, that is concentrating the uncertainty in one assumption — quite an
assumption — but it isn't putting it into the mechanism. It is using the
mechanism to get at what the source of uncertainty in monitoring is. Perhaps
it's not aldehydes. Maybe it's some other HC compound; it may be actually a
radical initiator like HONO. However, you'd need a lot of HONO to boost you
that much, and I don't think that's going to increase your NO to NO2
conversions. But there might be other kinds of oxygenated compounds that
could be responsible for that. Whether or not it's aldehydes in this context
doesn't necessarily matter. What matters is whether or not it's coming from
automobile exhaust, or from an emissions source that you can identify. Also,
it's important to know whether or not you have it in your emissions inventory.
Even if you can't identify where it is in your emissions inventory, you better
have it in the model if you are trying to do an ambient simulation. Otherwise
the best mechanism in the world is not going to give you adequate results.
Trying to adjust the mechanism to give those better results is lying.
JEFFRIES: In fact, if you didn't know what it was, you'd probably put it in
the model as aldehydes anyway.
KILLUS: That's entirely possible, yes.
DIMITRIADES: I think you put the emphasis on a single species like aldehydes
or some of the other ones that we are aware of. I would like to suggest, and
this is a personal feeling based on work I have done in the past, that we
probably deal with the collective effect of several unidentified species. The
reason I think this issue is important is that I doubt that we will
ever get to the point of completely characterizing the ambient air mixture.
Then what?
LLOYD: I wanted to answer your question of whether we need more smog chamber
data. I understood that we've already answered that, but now he's already
started it by—
JEFFRIES: He wants to find out whether he wants to take the money back.
DIMITRIADES: No, I don't mean that. I think it's good to have smog chamber
data with simple mixtures, but I think it's also good to have data with
exhaust mixtures or more real atmosphere-like ambient mixtures. You'll find a
big difference between the two types of data, and I think we need to
understand why.
LLOYD: The previous question focused on the short-term needs. What can we do
now, do we have enough smog chamber data, do we know enough about kinetics to
do something?
I was really addressing that. Suppose we make do with this, what should
we be doing in the future? I agree with what was said earlier. I also think
that your comment is very valid: what is out there as the mix of the HC's
changes and we see the aromatics increasing? I think it would be very pompous
of us to think that we're not going to come up with some surprises. I think
we have to have a research program going ahead to look at the character of the
atmosphere. So we need some measurements out there, some chamber data, we
need some tailpipe emissions testing. But, on the other hand, we do need some
synthetic mixes. If we have areas such as Houston, which have different
mixtures next to refinery plants, we want to make that applicable. Also, in
those research programs, do we want to focus on some of the things that Dick
Derwent has mentioned, some of the intermediates, so we can get a better
handle on that?
But, that's 5 or 10 years ahead. We do have some data now. I have
worked with Dr. Dodge trying to characterize the kinetics data and the
mechanistic data, and those need updating regularly. I think we have some
tools to do something with now and we can improve what's going on. Yet, we do
need further work. I heartily concur that we need both characterization of
the atmosphere and automobile exhaust as well as conducting and refining
ongoing tests.
BUFALINI: Dr. Dimitriades, in keeping with your comments earlier about some
unidentified something--
DIMITRIADES: Some things.
BUFALINI: Some things, okay. Are we spending the proper amount of effort and
funding at the University of North Carolina with this automobile exhaust in
trying to identify the oxygenates? As far as I know, Dr. Jeffries, you're measuring
the formaldehyde and I guess some other aldehydes, but are you using the DNPH
and the total aldehyde techniques? Are some of the higher aldehydes going to
be cross-checked with GC analysis?
I am bringing this up because there is some concern as to whether we're
properly identifying the oxygenates. I would hate for EPA to spend this
effort in doing the work and then say, maybe we missed some oxygenate that was
speeding up the reactions.
JEFFRIES: One of the first things, of course, we want to find out is whether
the real automobile exhaust, unaltered, versus the frozen and reconstituted
automobile exhaust is the same. If they're not the same, then we've got a lot
of work to do to try to figure out why they're not and what's going on. At
least we have them all in 200 cc at that point with a factor of 400
enrichment, and we have GC/MS and all kinds of other capabilities to look at.
That's a leading question, and we don't know where we're going to go with that
yet.
DIMITRIADES: Not the same if you built a dynamometer facility with University
of North Carolina money.
JEFFRIES: That's why I don't even want a dynamometer; that's why I want to come
out to EPA and use the EPA dynamometer.
BUFALINI: It might be cheaper for you to move your smog chamber out here.
JEFFRIES: No way.
MEYER: I wonder if a useful approach in designing a set of test data would be
to look first at the different types of mechanisms that now exist. Perhaps we
could come up with some kind of matrix in which each of the major mechanisms
is listed in the columns, and in the rows a series of questions about how each
mechanism might perform or might be sensitive to various factors.
Then you could go to the developers of the different mechanisms and try
to get the best possible insight into the responses to each of these questions
in the rows, to see what the most significant types of questions are likely to
be. Then you can go about designing your experiments based on the response of
the existing mechanisms.
Hopefully, there wouldn't be too many of these mechanisms, maybe six or
eight. That seems to me like a very logical first step to take.
-------
FOLLOW-UP COMMITTEE RECOMMENDATIONS
Richard Derwent, Alan Eschenroeder,
Gregory McRae, and Alan Lloyd
The authors of this section of the proceedings attended the workshop and
subsequently met as a committee to recommend actions that could be taken by the
U.S. Environmental Protection Agency (EPA) to make the best use of information
presented at the workshop. This section summarizes and describes the
recommendations of the committee.
INTRODUCTION
The Clean Air Act and its subsequent amendments mandate a set of ambient
air standards to serve as goals for air quality planning in the United States.
The Act establishes a mechanism for improving air quality by controlling primary
pollutant emissions. Historically, direct proportionality of pollutant
concentrations in air to pollutant emission levels was assumed for designing
control measures. Unfortunately, the inherently nonlinear nature of the
formation of some pollutants, particularly photochemical oxidants, precludes the
use of such simple approaches.
An important characteristic of oxidants is that they are not emitted by
pollutant sources, but rather, are formed as products of chemical reactions in
the atmosphere. Oxidants such as ozone (03), nitrogen dioxide (N02),
peroxyacetylnitrate (PAN), and hydrogen peroxide (H202) form in urban
atmospheres. These and other pollutants are produced as a result of the action
of sunlight on oxides of nitrogen (NOX) and reactive hydrocarbon (RHC)
emissions. Since oxidants are formed by atmospheric reactions rather than
emitted in measurable quantities, their control is very difficult. The amount
of oxidant formed in any given urban area has a complex dependence on time of
day, meteorological conditions, and the nature of the pollutant sources, making
the design of effective abatement programs an extremely complex undertaking.
Indeed, depending on the initial state of the atmosphere, it is possible to
produce an increase, decrease, or no change at all in oxidant levels from a
simple strategy based on reducing one of the precursor emissions. These
counter-intuitive results highlight the need for a formal methodology capable of
predicting the air quality impact of changes in emissions.
Prediction of the effects of emission changes on ambient air quality
depends on three basic elements of input data (gathered into one structure in
the sketch following this list):
• A chemical kinetic mechanism that describes the rates of atmospheric
chemical reactions as a function of sunlight intensity and the
concentration of the various species present.
• An emission inventory that gives the temporal and spatial distribution
of emissions from significant pollutant sources within the airshed.
• A meteorological description, including wind speed and wind direction at
each location in the airshed as a function of time, the vertical
temperature structure, and radiation intensity.
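A minimal sketch of these three elements gathered into one structure follows;
the field names and granularity are assumptions for illustration, not a
prescription of any Agency format.

    # Hypothetical container for the three input elements listed above.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class ModelInputs:
        # kinetic mechanism: per-reaction rate constant as a function
        # of sunlight intensity
        rate_constants: Dict[str, Callable[[float], float]]
        # emission inventory: (hour, x, y) -> {species: emission rate}
        emissions: Dict[Tuple[int, int, int], Dict[str, float]]
        # meteorology: (hour, x, y) -> (wind speed, wind direction)
        winds: Dict[Tuple[int, int, int], Tuple[float, float]]
        mixing_depths: List[float]    # hourly vertical temperature structure proxy
        radiation: List[float]        # hourly radiation intensity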
A detailed formulation of a system linking all these elements is a difficult
undertaking because it is necessary to maintain a balance between the input data
requirements and the desire for an accurate representation of the underlying
physics and chemistry. Partially in response to these conflicting requirements,
EPA (referred to as the Agency) has identified four basic approaches that can be
used for implementation planning purposes in the United States (Federal
Register, 1979):
• Standard EKMA (EPA Level IV)
• City-specific EKMA (EPA Level III)
• Trajectory-specific EKMA (EPA Level II)
• Photochemical dispersion modeling (EPA Level I)
Each of these approaches uses different levels of detail in the treatment of
emissions, meteorology, and chemistry. Apart from the Level I analysis, all of
the other techniques are based on the Empirical Kinetic Modeling Approach
(EKMA).
The EKMA concept involves the use of a chemical mechanistic model to relate
03 to its precursors, total nonmethane volatile organic compounds (NMVOC) and
NOX. The model and the way it can be used to formulate control strategies are
described by Dodge (1977), Whitten et al. (1980), and by Trijonis and Hunsaker
(1978). Guidelines for the use of city-specific EKMA were published in March
1981 by the Agency (EPA, 1981). This publication was followed by a further
guidance document from the Office of Air Quality Planning and Standards in
December 1981. This latter document provided for the use of alternate chemical
mechanisms in place of the standard Dodge mechanism, and followed the results of studies
that raised questions about the adequacy of the chemical mechanism used in EKMA
(Jeffries et al., 1981; Carter et al., 1982). The conclusions that can be drawn
from comparisons of EKMA with atmospheric observations are so limited that
doubts have been expressed regarding the applicability of EKMA predictions to
implementation planning.
This section presents a series of recommendations that offer some
guidelines on the valid use of the EKMA concept. To begin, it covers the
committee review and perception of the papers presented at the workshop. It
also provides some responses to current Agency needs, including proposed
actions, using currently available information; identifies research planning
goals; and discusses suggested approaches covering a broader spectrum of
information but based upon inputs from the workshop. Finally, it outlines a
plan of action for the Agency to implement the recommendations.
REVIEWS OF PAPERS PRESENTED AT WORKSHOP
This section contains a brief abstract of each paper presented at the
workshop, followed by the committee's comments. The order of the papers follows
that of the workshop agenda.
Trend Analysis of Historical Emissions and Air Quality Data (J. Trijonis)
Abstract
Historical precursor trends are documented in Los Angeles and Houston using
emission data and ambient data for nonmethane hydrocarbons and oxides of
nitrogen. The precursor trends are entered into the standard EKMA model to
predict historical ozone trends. The predicted ozone trends are then compared
to actual ozone trends to test the EKMA model. The Los Angeles analysis covers
the years 1964 to 1978; the Houston analysis covers the years 1974 to 1978.
Review Comments
This study, while simple in concept, is complicated by the large
uncertainty in the emission inventory. Although emission-inventory figures for
the Los Angeles area were stated to have shown decreases of 29% in RHC and
increases of 35% in NOX, corrections of various inventorying errors over this
same time period are at least of this same order; thus, the confidence one can
place in this trend analysis is at best questionable. Evidently, the emission
trends are smaller than the successive corrections imposed by improvements in
methodology. One must, at least, make retrospective corrections in the
inventories compiled by the South Coast Air Quality Management District
consistent with present knowledge before this difficulty can be overcome.
It is not surprising, therefore, that predicted 03 levels tended to
underestimate observed 03 levels by as much as 35% for peak hourly 03 levels at
Azusa. The discrepancy is greatly alleviated by the use of the more robust
statistic, the 95th-percentile 03 level. The choice of different statistical
measures should be investigated further since it might be feasible to design a
more robust criterion for defining compliance with air quality standards. The
disagreement seems to be comparable between the case of precursor-derived
nonmethane hydrocarbon (NMHC):NOX ratios and that of emission-derived NMHC:NOX
ratios. Although attempts were made to isolate the emission "footprints" that
had most influence on each 03 monitoring station and to correct 03 formation for
meteorology, no account was taken of day-to-day differences in initial chemical
composition, mixing-depth composition at the upper boundary, or path differences
over the emission pattern. One important conclusion of the Houston study is
that the calculated emission changes during the period 1974 to 1978 were not
large enough to provide an adequate test of the EKMA model. Despite the
limitations that hampered the results of this work, the sensitivity study was a
valuable demonstration of the need for an accurate knowledge of the NMHC:NOX
emissions ratio and of accurate ambient air quality data. Thus, with
improvements in the areas noted above, the trend analysis approach can be
recommended.
Evaluation of the Empirical Kinetic Modeling Approach As A Predictor of Maximum
Ozone (J. Martinez, C. Maxwell, H.S. Javitz, and R. Bawol)
Abstract
The performance of EKMA when used to estimate maximum ozone levels in an
urban area is evaluated. A quantitative measure of EKMA's ability to predict
maximum ozone is obtained, and conditions are defined under which ozone
estimates for EKMA can achieve specific accuracy levels. The evaluation is
conducted using data for St. Louis, Houston, Philadelphia, Los Angeles, and
Tulsa. For St. Louis, results are presented comparing EKMA performance using
three different chemical models. A Monte Carlo method for using EKMA to predict
the distribution of ozone maxima is also described. Applications of the method
to the analysis and design of ozone control strategies are discussed.
Comments
This study, a statistical analysis of EKMA, presents the first results
on the treatment of uncertainties in practical decision-making. A
central theme of this approach is the probability that the ratio (R) of observed
to estimated 03 lies within the range 0.8 to 1.2. The analysis uses nonmethane
organic compounds (NMOC) and NOX inputs in both standard and city-specific
trajectory models to generate the 03 estimates. The output is represented
graphically as accuracy-probability isopleths on the NMOC-NOX
plane. The Carbon Bond II, Demerjian, and Dodge chemical mechanisms are
compared.
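By way of illustration, the statistic in question can be computed directly
from paired observations and model estimates. The following minimal sketch
(written here in Python, with hypothetical concentration values; it is not
part of the Martinez et al. analysis) computes the empirical probability that
R falls within 0.8 to 1.2:

    # Empirical probability that R = observed/estimated O3 lies in 0.8-1.2.
    # All concentration values below are hypothetical.
    observed  = [180.0, 150.0, 210.0, 120.0, 165.0]   # ppb, observed maxima
    estimated = [200.0, 140.0, 160.0, 130.0, 170.0]   # ppb, model estimates

    ratios = [o / e for o, e in zip(observed, estimated)]
    in_range = sum(1 for r in ratios if 0.8 <= r <= 1.2)
    print("P(0.8 <= R <= 1.2) =", in_range / len(ratios))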
Qualitatively, all three mechanisms give similar shapes for the overprediction
(R < 0.8) and underprediction (R > 1.2) regions, but the Dodge mechanism
performs best of the three in the 0.8 < R < 1.2 range. The use of this
technique for future conditions could generate O3 frequency
distributions from input joint frequency functions for NMOC and NOX.
Another product of this research is the development of a linear
transformation from standard EKMA to the city-specific version. This could
considerably simplify the generation of working data for air quality planning in
specific regions.
The methodology could be improved by extending its use to day-specific
meteorological air quality and emission conditions. Worthy of attention is the
recommendation that this technique could be employed in a decision-tree analysis
for State Implementation Plan (SIP) revisions. Instead of replacing EKMA with
more refined approaches, one outcome might be a branch of the tree that combines
safety margin concepts with risk assessment. Thus, if O3 is underpredicted and
P(R < 0.8) is low (for a case where O3 is not greater than 120 ppb) or if O3 is
overpredicted and P(R > 1.2) is low (for a case where O3 is greater than
120 ppb), the probabilities of falling outside the 0.8 < R < 1.2 regime can be
used with acute response data for O3 pulmonary effects to generate risk
profiles. Then, safety margins on the control strategy can be set by specifying
acceptable risks. This is accomplished by running the distribution of output
uncertainty from the model around midpoint O3 levels given by alternative
control strategies. A combination of model uncertainty and health effects
uncertainty will then generate a profile of risk for each strategy. Backing
away from some nominal control point, we can thus get declining risk profiles
down to a point of acceptability. To apply the approach of Martinez et al. in
this manner, we need to develop a risk module as an added feature.
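To make the risk-module concept concrete, the following sketch (in Python; the
acute-response function, uncertainty level, and O3 midpoints are all
assumptions introduced here for illustration) spreads an assumed model output
uncertainty around the midpoint O3 level of each candidate strategy and
averages an assumed response function over the result:

    import random

    def acute_response(o3_ppb):
        # Hypothetical linear acute-response index above a 120 ppb threshold.
        return max(0.0, (o3_ppb - 120.0) / 100.0)

    def strategy_risk(midpoint_o3, rel_uncertainty=0.2, n=10000, seed=1):
        # Average the response over the assumed model-output distribution.
        rng = random.Random(seed)
        return sum(acute_response(rng.gauss(midpoint_o3,
                                            rel_uncertainty * midpoint_o3))
                   for _ in range(n)) / n

    for midpoint in (160.0, 140.0, 120.0, 100.0):  # candidate strategies, ppb
        print(midpoint, "ppb midpoint -> expected risk index",
              round(strategy_risk(midpoint), 3))

Backing the midpoint away from the standard then yields the declining risk
profile described above.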
Predicting Ozone Frequency Distributions from Ozone Isopleth Diagrams and Air
Quality Data (H. Jeffries and G. Johnson)
Abstract
Air quality data are used to derive an ozone isopleth surface that predicts
the observed frequency distribution, given the observed joint hydrocarbon and
oxides of nitrogen distribution. The method uses a seven-parameter mathematical
description of ozone isopleth surfaces. The process begins with parameter
values that describe the standard Dodge chemistry EKMA surface or a standard
Carbon Bond chemistry EKMA surface, and uses a nonlinear convergence technique
to find the parameters for a specific city. The method is applied to St. Louis,
MO, using Regional Air Pollution Study data, and to Sydney, Australia.
Comments
In addition to the generation of error probability and risk distributions,
the uncertainties of EKMA output can be treated by using a broader statistical
stratum than the worst hour. Jeffries and Johnson, following the work of Post,
have used a technique for curve-fitting the ridgeline region of the EKMA surface
to the observed joint distributions of NMOC and NOX points. Reversing the
process, they generate an O3 frequency distribution for precursor input
distributions shifted by hypothetical emission control.
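The reversal step can be sketched as follows (in Python; the surface below is
a simple placeholder, not the seven-parameter form of Jeffries and Johnson,
and the precursor points are hypothetical):

    # Generate an O3 frequency distribution from a fitted isopleth surface
    # and observed joint (NMOC, NOx) points, then repeat with the precursor
    # distribution shifted by a hypothetical 40% NMOC control.
    def o3_surface(nmoc, nox):
        return 100.0 * (nmoc ** 0.4) * (nox ** 0.2)   # placeholder surface

    observed_pairs = [(0.8, 0.10), (1.2, 0.15), (0.5, 0.08)]  # ppmC, ppm

    base    = sorted(o3_surface(h, n) for h, n in observed_pairs)
    shifted = sorted(o3_surface(0.6 * h, n) for h, n in observed_pairs)

    print("base O3 distribution (ppb):   ", [round(v, 1) for v in base])
    print("shifted O3 distribution (ppb):", [round(v, 1) for v in shifted])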
As discussed above, both model testing and air quality standard setting,
using a reasonable sample of the frequency distribution, produce a more robust
result than previous approaches that were restricted to the worst hour of a
year. The approach preserves the chemical character of a functional mechanism,
fits the observations in the atmosphere, and maximizes predictive success by
using an adequate sample of O3 episodes. Although curve-fitting may be
criticized on the basis that it can give erroneous results when applied beyond
the range of data, the kinetic adjustments to remove smog chamber artifacts are
also nonphysical, as shown by the disparity of results displayed by Jeffries et
al. While the approach described in this paper has many desirable attributes,
there is one major problem with the analysis. In preparing the frequency
distribution, the observed maximum O3 was not matched with its upwind 6 a.m. to
9 a.m. hydrocarbon (HC):NOX ratio. The analysis should be repeated to account
for the transport effects.
Simplified Trajectory Analysis Approach for Evaluating Ozone Isopleth Plotting
Package/Empirical Kinetic Modeling Approach (G. Gipson and E. Meyer)
Abstract
Back air parcel trajectories are calculated from the site observing the
highest hourly ozone concentration in St. Louis and Philadelphia. The EKMA
model is then used to simulate the chemistry that occurs within a uniformly
mixed parcel of air as the parcel moves along the calculated trajectory. The
resulting peak ozone concentration predicted at the site is compared to the
actual observed maximum value.
Comments
This work represents a test of EKMA's validity. For Level II analysis, the
Ozone Isopleth Plotting Package (OZIPP) with the Dodge mechanism is applied to
10 days of Regional Air Monitoring Study (RAMS) data from St. Louis. Level III
comparisons are made between calculated and observed O3 levels for St. Louis and
Philadelphia. In the Level II test, the authors give the OZIPP-type model
detailed input information for an urban environment. This environment has been
characterized by data from an extensive research-level network of sensors of air
quality and meteorology coupled to a carefully compiled emission inventory.
Thus, the Level II test should provide a "best case" situation for success in
EKMA applications using the available guidelines and codes.
It was found that the Level II model gave agreement with observations to
within 30% for 6 of the 10 days. Four days were significantly underpredicted.
Level III gave predictions within 30% for 8 of the 10 days for St. Louis and 23
of the 29 sample days for Philadelphia. These results indicate that the
trajectory inaccuracies with the more refined version of the model generate
anomalous emission inputs that degrade its performance. This result illustrates
two points: (1) that improvements in the trajectory integration procedures are
desirable, and (2) that additional complexity does not guarantee performance
enhancement. In fact, it can generate large errors traceable to the
complexities breaking down as a result of faulty data. These points also
illustrate the value of statistical samples to elucidate performance as opposed
to the selection of one worst-case hour or day.
Application of Empirical Kinetic Modeling Approach to the Houston Area
(H.M. Walker)
Abstract
Data for the Houston area for the summers of 1977 and 1978 are used to test
the EKMA model. Ambient data for ozone, nonmethane hydrocarbon, and oxides of
nitrogen are plotted on the standard-EKMA diagram. Predictions by EKMA were
found to be variable compared with ambient data. Methods for abatement of high
ozone concentrations are discussed.
Comments
Part of the reason for the poor agreement with EKMA predictions is probably
the poor quality of the HC data, as stated by Walker. The uncertainty in this
aspect of the Houston data has been recognized for many years. This study
also points out the importance of characterizing the improved air quality and
the amount of transport of O3 and its precursors into the Houston area.
Comparison of the Empirical Kinetic Modeling Approach with Air Quality
Simulation Models (G. Whitten)
Abstract
A range of models, progressing in complexity from standard EKMA to large
grid models, is used to test various parts of the EKMA trajectory model. As a
rule, the physical EKMA model compares closely to the more complex models; the
chemical mechanism used can produce the most significant differences between
models.
Comments
If the same chemistry, emissions, mixing-layer growth, elevated O3, and
initial conditions are used in two different models, one with a single fully
mixed cell, the other averaging over several fully mixed cells (coupled by
vertical diffusion), Whitten reports that the results do not differ
significantly. Thus, neither the complex chemistry nor the simplified diffusion
formulation is a decisive feature in the applications of trajectory models
reported here. Whitten derives both the NMOC:NOX ratio and the initial
conditions from emission distributions and realistic flow conditions because
linear rollback may not be reliable.
Whitten found that simulations exceeding an interval of one day are
sensitive to background pollutant level assumptions. The importance of
entraining pollutants other than O3 from aloft was stressed. The uncertainty
imposed by poor guesses of initial conditions can easily be replaced or
overwhelmed by the uncertainty introduced by poor guesses of background levels
or emission inventories. The same tradeoff is operative in the dilemma of
whether to use Lagrangian trajectory models such as EKMA or Eulerian grid
models, for example, the Livermore Regional Air Quality (LIRAQ) model.
Deriving Empirical Kinetic Modeling Approach Isopleths from Experimental Data:
The Los Angeles Captive-Air Study (D. Grosjean, R. Countess, K. Fung, K.
Ganesan, A. Lloyd, and F. Lurmann)
Abstract
A captive-air facility in Los Angeles is described. The use of the maximum
ozone levels generated in the main and satellite Teflon bags under
different hydrocarbon/oxides of nitrogen conditions to test EKMA is discussed.
Evaluations of the method for testing different chemical packages in EKMA are
presented.
Comments
If the uncertainties in a chemical mechanism can be shown to outweigh
considerably those in meteorology and influx rate of pollutants, then a
captive-air study, such as the one proposed in this paper, can go a long way
toward improving trajectory model accuracy. This kind of experiment has the
potential to eliminate the uncertainties of initial conditions, synthetic mixes,
and sunlight distribution present in smog chamber laboratory experiments. The
complication for this type of experiment is the difficulty in obtaining data on
maximum O3 formation as a function of initial HC and NOX under the same
conditions of sunlight intensity and temperature for which EKMA is used.
If the influx of pollutants and their transport in the air parcel are
significant, it is necessary to do an airborne experiment such as that of the
Box Model; and (4) the mechanism developed at the California Institute of
Technology. The mechanisms are first evaluated using smog chamber data, and
then the predictions of the various mechanisms are compared against data
collected in the St. Louis Regional Air Pollution Study program. The choice of
mechanisms is found to have a large impact on control-strategy estimates.
Comments
The findings of this report are a major focus of the Agency's concern over
the future application of EKMA for regulatory purposes. This work constitutes a
reasonable simulation of what well-informed state and district staffs might
conclude using various acceptable city-specific EKMA approaches. Four chemical
mechanisms were run for 10 Regional Air Pollution Study (RAPS) data days to test
an OZIPP-like approach for a city-specific EKMA and for a simplified trajectory
model.
A significant statement on the testing of the various chemical mechanisms
was that the "experimental uncertainties in the Bureau of Mines (BOM) data base
are greater than the differences in predictions among three of the four
mechanisms." (The fourth did not fit the BOM results in high NOX.) In addition
to mechanistic uncertainties, large variations in predictability for 6 out of 10
days were also observed in "frozen chemistry carbon monoxide (CO)/HC/NOX" runs
for a set of 30 simple trajectory simulations. Presumably, meteorological or
emission inventory inadequacies generate this class of uncertainty.
Probably the greatest cause of concern is the breadth of the uncertainty
range in HC control requirements generated by the various models and levels of
application. The uncertainties were generated both by differences in the
chemical mechanism and by meteorological factors.
Several recommendations are offered in the report. From a practical
viewpoint, the suggestions presenting an immediate possibility of application are:
(1) that planners should be allowed freedom of choices based on technical
justification, and (2) that the Agency should provide wind-field methodology
improvements in describing trajectories and point sources. Both of these could
be provided with available resources; however, increases in latitude of choices
will involve Regional EPA staffs in a plethora of quasitechnical discretionary
decisions that may require resources beyond those available.
The recommendations in the report lack focus from a research viewpoint in
that there is no indication of specific problems or of the need for specific
solutions. Whether the suggested further studies will address the key issues is
unclear. In other words, since this investigation generates more questions than
answers, what is the likelihood that any more physical and chemical research is
going to reduce uncertainty? Indeed, the continued use of smog chamber chemical
data has revealed experimental artifacts that have not been successfully removed
in atmospheric applications to date.
This study raises the proposition that the time may have come for a change
from an approach based on a hope that more detail in models generated from more
Determine EKMA Uncertainty Statistics by Tests of the Assumptions
and the Results
Evaluation of the performance of air quality models has occupied a
significant portion of the recent air pollution literature. The importance of
such an evaluation cannot be overstated, particularly in view of the necessity
of making control-strategy decisions based on model projections. The essential
question underlying all of these studies is how well the model performs when
applied to simulate past events. There are three basic steps that need to be
undertaken when evaluating the performance of a model:
• Basic assessment of model validity
• Comparison of predictions and observations for particular events
• Analysis of the sensitivity of the predictions to uncertainties in
model components
In most previous studies, emphasis has been given to the second step, and
discussions of model performance have invariably focused on the inevitable
mismatch between predictions and observations. Often it is impossible to
ascertain whether the discrepancies result from errors in input data, such as
emission inventories, in solution procedures, or in the characterization of the
basic physical and chemical processes. While it is important to be able to
separate the relative influences of these effects, the practical problems
associated with obtaining the necessary information virtually preclude a
rigorous and definitive assessment of the formal validity of the overall model
using field data. Nevertheless, comparisons of predictions and observations for
particular events are a crucial component of the model evaluation.
A Three-Step Process for Evaluating EKMA—
1. A thorough evaluation must be made of each component of the EKMA model,
that is, not only of the chemical mechanism but also of the treatment of the
meteorological processes. The basic goals of this work should be a dissection
of the model components, an analysis of their interactions and, finally, a
sensitivity study to determine what features have the most influence on the
model predictions.
2. Smog chamber data serve as an essential means for evaluating the
performance of different reaction mechanisms. While it is highly desirable to
use HC mixtures that replicate those found in urban atmospheres, this approach
makes it extremely difficult to determine, in a systematic way, the explanation
for differences in predictions. The reason for this difficulty is that, in the
case of automobile exhaust or captured-air samples, many of the species present
will not be accurately measured or identified. In the face of this dilemma, a
dual strategy is required. First, the mechanisms must be compared under
conditions for which all the needed composition and irradiation data are
available. A number of research groups have assembled information of this form.
The testing of each mechanism should involve more than simply looking at
concentration-time histories. Every effort should be made to look at the
different modes of initiation, radical production, chain lengths, and mass
conservation. Once these experiments have been completed and satisfactorily
The next step would be to use the various panel or subjective assessments
of uncertainties in a study of model sensitivity to input data. The aim is to
evaluate how the uncertainty in input propagates through the model into the
prediction of ΔO3/ΔHC, for example. A variety of techniques may be suitable for
this analysis. One is systematic perturbation of one independent variable at a
time to evaluate sensitivity derivatives, for example ΔO3/ΔHC, where ΔO3 is a
shift in peak hourly O3 concentration resulting from a perturbation ΔHC in the
HC precursor concentration. Another is a random drawing of values (e.g., HC,
NOX, mixing depth, or temperature) from a statistically representative group of
ensembles. Each sample is then used to derive a Monte Carlo realization of
EKMA-generated O3 concentration. The compilation of the results gives an
expected statistical distribution of O3 prediction uncertainties.
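The two techniques can be sketched as follows (in Python; the peak-O3 function
is a placeholder standing in for an actual EKMA run, and all parameter values
and uncertainty assumptions are illustrative):

    import random

    def peak_o3(hc, nox, mix_depth):
        # Placeholder response surface, not an EKMA mechanism.
        return 120.0 * (hc ** 0.4) * (nox ** 0.2) / (mix_depth ** 0.3)

    hc0, nox0, depth0, d = 1.0, 0.10, 1.0, 0.01

    # (1) One-at-a-time perturbation: the sensitivity derivative dO3/dHC.
    sensitivity = (peak_o3(hc0 + d, nox0, depth0)
                   - peak_o3(hc0, nox0, depth0)) / d
    print("dO3/dHC ~", round(sensitivity, 1), "ppb per ppmC")

    # (2) Monte Carlo: draw inputs from assumed uncertainty distributions
    # and compile the distribution of predicted O3.
    rng = random.Random(0)
    draws = sorted(peak_o3(max(0.01, rng.gauss(hc0, 0.2)),
                           max(0.001, rng.gauss(nox0, 0.02)),
                           max(0.1, rng.gauss(depth0, 0.1)))
                   for _ in range(5000))
    print("median O3:", round(draws[2500], 1), "ppb;",
          "5th-95th percentile:", round(draws[250], 1), "-",
          round(draws[4750], 1), "ppb")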
A second source of uncertainty is generated by artifacts introduced by
imperfect mathematical representation of physical or chemical processes. This
uncertainty is distinct from that introduced by data errors and can be
quantified by relaxing one assumption at a time. This technique does not
necessarily require development of completely valid air quality simulation
models. Specifically, tests can be conducted by analyzing variance between EKMA
output and that of more complex and detailed models. These detailed models can
serve as a standard of comparison, but may require too much data to be useful in
planning applications. The neglect of vertical variations in species
concentration, lateral concentration gradients in plumes, generalization or
empirical parameterization of smog chemistry, and the simplification of
atmospheric HC reactivity into a small number of surrogate HC's are all
assumptions in EKMA models that could be subject to evaluation using the
detailed approaches.
Implement Safety Margins that Reflect Prediction Uncertainties
The Agency is faced with a multitude of different model approaches, and
sufficient data are not available to pick a single "best" technique. For this
reason, the committee recommends a performance-based standard using variable
safety margins.
As presently formulated, EKMA-type models show markedly different control
requirements with different chemical schemes and other assumptions concerning
the fundamental processes involved. It is difficult to appreciate the
significance of these differences in predicted control requirements because the
accuracy of the individual EKMA model predictions has not yet been determined.
The uncertainty/sensitivity studies proposed in the previous
recommendations are aimed toward an evaluation of the probability distribution
of the predicted O3 change for a given amount of HC control; that is, ΔO3/ΔHC,
for a particular EKMA model. With this probability distribution, it is possible
to use statistical confidence testing to decide whether the model-predicted
ΔO3/ΔHC value is in fact different from zero. Hence, this distribution allows
determination of the chances that the reverse outcome may be predicted from the
proposed oxidant control strategy.
The requirements of the oxidant control measures are to reduce O3
improvement, the cost of formulating better models could be set against the
advantages coming from the reduced safety factors in control requirements and,
thus, against the expense of unnecessary precursor controls.
Determine Modeling Time Intervals Needed for EKMA Applications
The time interval prescribed for running a model to generate an EKMA
diagram depends on upper-boundary conditions (O3 and precursors aloft;
entrainment or dilution rate), surface boundary conditions (emissions and
deposition), initial conditions (species profiles at time zero), and
environmental conditions (ultraviolet [UV] insolation rate, temperature, and
humidity). From an examination of the various modeling guidelines and, in
particular, the paper presented by Gipson and Meyer, a number of possible
courses of action are apparent.
Short-Term Approaches—
1. For Level II-type analyses, high priority should be given to improving
the recommended procedures for following parcels of air as they traverse the
airshed. With relatively little effort, some of the currently available
techniques for generating mass consistent wind fields could be extended to
provide improved trajectory integration capability.
2. The guideline procedures should be modified to incorporate caveats
about the basic validity of a trajectory model when the wind field is subjected
to shear. Ample evidence is available to suggest that one of the major sources
of potential error in trajectory model calculations is the neglect of wind
shear.
3. Users of EKMA-type procedures need to examine the relative contribution
of initial conditions and emissions to the concentration inside the air parcel.
Starting the EKMA procedure later in the day or near the center of the city can
lead to situations where the influence of initial conditions on O3 formation is
considerably greater than that from emissions. Since it is not clear how the
initial conditions might change as emissions are lowered in the future, every
effort should be made to minimize the influence of this uncertainty. There are
several ways to minimize this influence. One is to run the model long enough so
that the effect of initial conditions is no longer seen. Typically, the
calculations would have to be performed for a period of time longer than the
characteristic ventilation time. Another approach is to start the trajectory
well away from any emission source so that "background" conditions might apply.
The mass loading into the column is almost entirely derived from emissions as
the parcel traverses the airshed.
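As a rough illustration of the ventilation-time criterion in the first option
above (all numbers assumed, sketched here in Python):

    # Characteristic ventilation time: source-region extent / transport wind.
    city_extent_m = 50.0e3    # m, assumed extent of the urban source region
    wind_speed_ms = 2.5       # m/s, assumed mean transport wind
    hours = city_extent_m / wind_speed_ms / 3600.0
    print("characteristic ventilation time ~", round(hours, 1), "hours")

Under these assumed conditions the simulation would need to run well beyond
roughly 5 to 6 hours before the initial conditions are flushed from the parcel.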
Develop Statistically Robust Methods of Model Applications
Because of the large risk of error in using a single rare event as a basis
of air quality planning, ambient standard setting is moving toward the use of
samples of data days to represent episodes. Thus, applying models to groupings
of data in the same manner becomes desirable.
The straightforward application of this principle is to select a standard
sample of days (say, the top fifth-percentile stratum of the peak O3 events) and
apply the model to each day with the various strategies. Statistics would be
generated for future air quality predictions driven by the same meteorological
variability as that of the set of design days. Another approach would be to
draw randomly a subset of the sample; however, care must be taken to avoid
serious truncation errors.
The significance of truncation errors can be tested by selecting successive
batches of model runs beginning with a relatively large sample of cases. Each
successive batch is either a randomly drawn subset of the last or a stratum of
the last batch rank ordered by peak O3 level. The spread of uncertainty
resulting from these several truncations will provide a numerical experimental
result for general use in assessing uncertainties in future routine calculations
of O3 control strategies.
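A sketch of this batch procedure follows (in Python; the peak values are
synthetic placeholders for a large sample of modeled days):

    import random

    rng = random.Random(42)
    peaks = sorted((rng.lognormvariate(4.8, 0.3) for _ in range(200)),
                   reverse=True)           # synthetic peak O3, rank ordered

    def mean(values):
        return sum(values) / len(values)

    batch = peaks
    while len(batch) >= 10:
        half = len(batch) // 2
        random_subset = rng.sample(batch, half)   # random truncation
        top_stratum = batch[:half]                # rank-ordered truncation
        print(len(batch), "days: random-subset mean",
              round(mean(random_subset), 1), "| top-stratum mean",
              round(mean(top_stratum), 1))
        batch = top_stratum

The spread between the two truncation paths at each step is one numerical
measure of the truncation error discussed above.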
The basic goal of this work should be to assess if the current or proposed
day selection procedures result in a set of reliable conditions for modeling
studies. For example, the day that has the worst air quality may require a
lower level of precursor control than a day with other meteorological conditions.
Finally, if the sample called for under a robust standard is too large for
either of the above approaches to be practical, a Monte Carlo technique could be
applied. The repeated model runs would be driven by meteorological conditions
drawn from the entire sample space required by the standard. This method is
clearly less desirable than an actual representative subset of days, but it will
outperform single calculations of mean or extreme conditions by the
criterion of robustness.
Develop Computer-Aided, Remotely Accessible, User-Interactive Techniques for
EKMA or EKMA-Like Models
One way of maintaining quality assurance of SIP modeling work done by state
and local agencies is to centralize the computer codes. The agencies can gain
access to the code library through remote terminals using an extension of the
UNAMAP system or a new stand-alone system. User-friendly software would be
implemented to minimize resistance of personnel unfamiliar with modeling
approaches.
Because of the growing number of choices that must be made in O3 SIP
modeling, fashioning a workable set of technical guidelines that are
understandable to agency users is a difficult task. A key aspect of the
interactive system would be a need to supply the program with uncertainty
estimates to generate sensitivity results. Default values could be provided for
users who are unable to input error statistics.
Terminal and communications hardware would be standardized to fit the
budgets and needs of a typical user agency. The Agency would maintain
assistance through the annual workshops recommended below and through
publications documenting updates of the codes and, possibly, of a standardized
data base that might be archived centrally.
Research Planning Goals
In addition to the previous programs, the Agency should identify and
implement programs of a longer-term nature to acquire the data necessary to
improve EKMA methodology. The following areas are recommended for additional
study.
Identify Additional Criteria for Guiding Evaluation Protocols
The comparison of model calculations for selected trace-gas species with
the recorded atmospheric observations is the only available means of assessing
the reliability of the models. These comparisons are necessarily limited by the
spatial averaging inherent in the model formulations and by uncertainties in the
ambient concentration data base resulting from instrumental imperfections or
inadequate siting of the instruments. These comparisons are subjective, and
what may pass for good agreement may be fortuitous with inadequate models and an
imperfect ambient data base. Furthermore, reconciling good agreement in one
part of the model with poor agreement elsewhere can be difficult when the
atmosphere, and hence the models, are meant to be interactive.
To go further and evaluate the model's representation of the outcome of
changing precursor emissions on predicted O3 concentrations is to place further
emphasis on the comparisons between models and observations. Whether the
statements from the models about the possible decrease of O3 concentrations
following precursor emission controls realistically describe what will actually
occur in the atmosphere depends on their accuracy and coverage of all relevant
processes.
Model evaluation will require a detailed treatment of model sensitivity to
uncertainties in input data, the removal of the subjective elements of model
comparison with observations, and the identification of critical criteria for
the evaluation of model performance. Broadly, we could envisage the following
framework (Figure 2) in which sensitivity/uncertainty is linked with model
comparison through the probability density distribution of the predicted
ΔO3:Δprecursor ratio.
For each chosen value of the input parameter within its uncertainty
distribution, a given value of the evaluation criterion is predicted in the
standard atmosphere together with a ΔO3:Δprecursor ratio found from an
additional model run with reduced precursor emissions. As the input parameters
are varied throughout their accessible ranges, the probability distributions of
the evaluation criteria and the ΔO3:Δprecursor ratios are plotted out. However,
some of the combinations of input parameters should lead to the prediction of
unphysical values for some of the evaluation criteria and, hence, the
corresponding ΔO3:Δprecursor ratios should accordingly be deleted. The better
the atmospheric determination of the evaluation criteria and the more sensitive
the model is to this parameter, the more unrealistic simulations can be
eliminated from the ΔO3:Δprecursor distribution. A model comparison becomes a model
validation when it restricts the probability distribution of the predicted
outcome.
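The rejection step can be sketched as follows (in Python; the linear
relationships and the observed bounds are placeholders standing in for full
model runs and field data):

    import random

    def model(param, precursor_scale):
        criterion = 50.0 * param                        # e.g., species ratio
        d_o3 = -80.0 * param * (1.0 - precursor_scale)  # O3 change, ppb
        return criterion, d_o3

    rng = random.Random(7)
    validated = []
    for _ in range(2000):
        p = rng.gauss(1.0, 0.3)                         # parameter draw
        criterion, d_o3 = model(p, precursor_scale=0.7) # 30% precursor cut
        if 40.0 <= criterion <= 60.0:                   # assumed obs. bounds
            validated.append(d_o3)

    print(len(validated), "of 2000 realizations retained; dO3 range:",
          round(min(validated), 1), "to", round(max(validated), 1), "ppb")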
Hopefully, the more complicated models, with their better representation of
the atmosphere, would be capable of better performance. That is, the more
complete models would be more readily checked against the atmosphere, so that
the range of valid ΔO3:Δprecursor ratios would be much reduced. The tighter the
probability distribution of the ΔO3:Δprecursor ratios, the less the safety
factor required in the precursor controls.
This model evaluation requires the identification of suitable performance
criteria. These criteria are parameters that are amenable to theoretical
investigation and readily measured in field studies. Species concentrations are
obvious evaluation criteria, but there are also many others that come to mind.
Certain diurnal variations in species concentrations, midweek-weekend
differences, O3 versus temperature correlations, or ratios of species
concentrations may also be suitable for this purpose.
To check all aspects of model performance, attention to species other than
O3 itself is warranted. Model evaluation may well require additional
observations of precursors, other photochemically generated secondary
pollutants, free-radical species, temporary reservoir species, photochemical
degradation and product species, and unreactive species to check emission and
dispersion. In this respect, species such as PAN, hydroxyl radicals, N2O5,
nitric acid (HNO3), H2O2, and sulfate aerosol require investigation or further
Figure 2. Framework in which sensitivity/uncertainty is linked with model
comparison through the probability density distribution of the predicted
ΔO3:Δprecursor ratio. (The figure schematically links the probability
distribution of a model input parameter, model results versus observations
for an evaluation criterion, and the resulting range of validated
ΔO3:Δprecursor values for a given Δprecursor.)
monitoring to test model representations of the underlying atmospheric physics
and chemistry.
Investigate the Possible Application of EKMA to Pollutants Other Than Ozone
Background — Currently, the EKMA model is used for calculating the maximum O3
concentrations predicted from the initial mix of nonmethane hydrocarbons (NMHC)
and NOX under defined meteorological conditions and sunlight intensity.
However, the model also calculates the concentrations of product species other
than O3. Nitrogen dioxide, PAN, and HNO3, for example, play key roles in
influencing the maximum O3 predicted by the chemical model. It is possible,
therefore, that EKMA could also be used for NO2, PAN, and gas-phase HNO3. The
maximum short-term concentrations of these species as a function of initial HC's
and NOX could be calculated. If sulfur dioxide (SO2) is included in the
chemical mechanism, then the production of sulfate by gas-phase processes,
particularly oxidation by the hydroxyl radical, can also be predicted as a
function of HC's and NOX. Note that the prediction of HNO3 will be a maximum
value since it does not take into account any reaction with ammonia (NH3) to
give ammonium nitrate (NH4NO3).
Suggested Approach — The EKMA approach for species other than 03 has
already been explored by several workers (Whitten et al., 1980; Lloyd et al.,
1982). The approach appears to have merit for further study, and possible
applications of the approach to NO2, PAN, HNO3, and sulfate should be explored.
Some comparisons for NO2 may be made with predictions from other commonly used
models with simple chemistries to compare the values predicted by the two
approaches. Using EKMA for the prediction of NO2 permits the incorporation of
realistic chemistry for the oxidation of NO in a relatively simple model.
Identify Critical Measurements of Ambient Volatile Organic Compound Composition
Based on the Current Chemical Kinetic Modeling and Environmental Chamber Needs
Background — Standard EKMA partitions the NMHC into two components,
propylene and n-butane. This choice was made on the basis of the chemical model
developed by Dodge; the proportion of propylene to n-butane was derived by
obtaining the best fits between computed and measured O3 from the BOM smog
chamber studies (Dimitriades, 1972). The BOM studies were carried out with
automobile exhaust representative of the early 1970's. However, significant
changes in automobile emission controls and in gasoline composition have
occurred since the studies. There is also some question about the
representativeness of pure automobile exhaust for urban areas, which may have a
significant component of stationary-source volatile organic compounds (VOC).
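The kind of fit described above can be sketched as follows (in Python; the
surrogate-reactivity model and the chamber values are placeholders introduced
for illustration, not the Dodge mechanism or the BOM data):

    # Choose the propylene carbon fraction of a propylene/n-butane mix that
    # best reproduces a set of chamber O3 maxima (hypothetical values).
    chamber_runs = [(1.0, 0.10, 310.0), (0.5, 0.05, 190.0)]  # ppmC, ppm, ppb

    def modeled_o3(frac_propylene, nmhc, nox):
        reactivity = 2.0 * frac_propylene + 0.5 * (1.0 - frac_propylene)
        return 250.0 * reactivity * (nmhc ** 0.4) * (nox ** 0.1)

    def fit_error(frac):
        return sum((modeled_o3(frac, h, n) - o3) ** 2
                   for h, n, o3 in chamber_runs)

    best_frac = min((f / 100.0 for f in range(101)), key=fit_error)
    print("best-fit propylene carbon fraction:", best_frac)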
Identifying the key VOC compounds and their relative proportions in the
ambient atmosphere of the major urban areas in the United States is important
for two reasons:
• To ensure that the chemical mechanism derived adequately treats the
chemical transformations of the major species. For example, with the
increasing aromatic HC content in gasoline and in urban atmospheres, it
is important that the chemical mechanism adequately treats the
photooxidation of these species.
• To design smog chamber experiments to test and define EKMA for current
and future uses, it is important to identify the specific compounds in
the atmosphere so that reasonable simulations with this mix may be
carried out. Results of these experiments can then be compared with
those of experiments carried out with pure auto exhaust.
Clearly, chemical mechanisms and smog chamber experiments should keep pace and
be consistent with the changing urban mix, for example, increase in aromatic
content and use of alcohol fuels. A design strategy based on a mix obtained in
previous years may not be as effective in reducing 03 when the future mix of
HC's changes.
Suggested Action — Detailed HC data should be collected at the key urban
areas in the United States that have not attained the O3 standard. Diurnal
analyses of ambient HC's should be carried out during the season exhibiting peak
O3 concentrations in the major nonattainment urban areas. These data should be
archived and regularly updated (every 2 to 3 years) to keep pace with possible
changes in the mix.
Oxygenate concentrations are important inputs to the EKMA model.
Therefore, the ambient analysis should also include oxygenated HC data.
Specifically, formaldehyde and some of the higher aldehydes and ketones should
be measured at the same time as the NMHC.
COMMITTEE RECOMMENDATIONS BASED ON WORKSHOP INPUTS
This section of the committee report contains the committee's
recommendations, which are based on information provided formally or during
discussions at the workshop, but which are not necessarily in response to the
Agency's specific requests.
Development of Design Standards for Air Quality Control Strategies
The Empirical Kinetic Modeling Approach is a tool used in the development
of an air quality control strategy to assess the amount of emission reduction
necessary to bring the air quality of a particular region into compliance with
the air quality standard for O3. However, design of an air quality control
strategy is a highly complex process, and the committee feels that a new
approach should be examined involving separate air quality control criteria and
design documents.
An air quality control planning methodology document for photochemical
pollutant strategies would make use of the information already provided in
criteria and control technology documents. It would provide methods and
procedures based on our scientific and technical knowledge for designing air
quality control strategies. Practical alternatives would be offered to local
and regional control agencies, and criteria for selecting appropriate approaches
would be spelled out in terms of resources at the disposal of the particular
agency (e.g., personnel, data, hardware, and software). This information would
include a review of the O3-formation process, control-option availability,
availability of data on emissions, air quality and precursors, estimates of
uncertainties in each of these areas, and an assessment of methods to relate
emissions to air quality based on the reviewed data availability. This work
should also include a study of the methodologies and problems encountered in the
design of control strategies prepared as part of the SIP process. State and
local agencies should be canvassed to document their views and comments on the
planning processes.
This task should be performed in a thorough and unbiased way. The National
Academy of Sciences would appear to be one appropriate vehicle to meet these
goals.
Based upon the information in the planning methodology document, a set of
detailed procedures should be developed and set forth in a control design
document. This document would update existing guidance documents and would
explicitly provide a set of steps to be followed by control agencies in planning
for 03-standard attainment. This design document would be similar to the
guidance provided in the Federal Register for control officials to apply to
various levels of EKMA analysis. Thus, it would specify the required input data
for emissions with given temporal and spatial resolution, the air quality
concentrations for initial conditions of O3 precursors, key intermediates, and
O3 background and transport, the meteorological data requisites, and the
uncertainties bearing on all the input data.
The next phase of the program would be to review the results of the above
procedure. This review should include an assessment of the success of the
various individual procedures and of the final result. The overall plan would
permit the Agency to revise and update the criteria and design documents.
The committee feels that the program outlined above represents a
significant attempt to formalize some current procedures and to provide constant
feedback from the generation of improved basic data to its translation into the
control implementation arena. This link needs substantial improvement over the
current methodology.
Information to Be Supplied From a Research Program
The information required from research planning goals is discussed briefly
above. Some additional research areas and data requirements needed from
research programs to improve the current quality of data for the implementation
of region-specific EKMA are discussed below. Table 1 summarizes the variety
of topics (some of which were discussed at the workshop) meriting additional
research.
Two additional topics that offer considerable scope for future research
are: the incorporation of particulate formation processes and a study of
currently unregulated pollutants. Technically, the most challenging is the
implementation of the aerosol mechanics. The capability to predict the
formation and growth of fine particulates will be an integral element of any
strategy directed at improving visibility in urban areas.
In addition to the species of regulatory interest, most air quality models
also predict the concentration of many other pollutants that have known or
anticipated effects on health and welfare. For example, gas-phase HNO3 can
react with NH3 to form particulate NH4NO3, which in turn can have a major
influence on visibility degradation. One area that deserves special attention
is the feasibility of preferentially abating some of these pollutants as part of
ongoing oxidant and particulate control programs.

TABLE 1. SUMMARY OF AREAS AND QUESTIONS FOR ADDITIONAL RESEARCH

Turbulence
    Entrainment Process at Inversion Base
    Diffusive Transport Under Stable Conditions
    Cost-Effective Closure Models

Objective Analysis Procedures
    Wind-Field Generation in Remote Areas
    Applications of Remote Sensing
    A Priori Generation of Mixing Heights

Surface Removal Processes
    Characterization of Deposition for Different Stabilities
    Surface Affinity Characterization

Point-Source Treatment
    Dispersion Coefficients
    Procedures for Imbedding Plumes in Grid Models
    Plume Rise Calculations in Arbitrarily Stratified Environments

Chemistry
    Improved Representation of Aromatic HC Kinetics and Mechanisms
    Temperature Effects on O3 Formation
    Reactions Involving Natural HC Emissions
In many areas, further model development is hampered more by the paucity of
measurements than by a lack of understanding of the basic physics and chemistry.
Data deficiencies occur in three areas: field measurements needed to verify a
chemically resolved model, source test information required for construction of
emission inventories, and experimental determination of basic chemical data.
These requirements are detailed in Tables 2 and 3. While not strictly a part of
a measurement program, one aspect that is often ignored is a thorough assessment
of the accuracy of the basic data. This consideration is particularly relevant
to the emission information. Unless the emission data have been prepared at a
level consistent with the desired accuracy of the model predictions, there is
little point in using air quality models. Consistency checks need to be applied
to individual sources, source classes, and the region as a whole, and thus
should include: fuel-usage patterns, operating conditions, pollutant ratios,
exhaust composition, and control efficiencies. One useful approach is to
compare the results from top-down and bottom-up estimating procedures. These
methods can provide bounds on the accuracy of emission inventories. A formal
methodology using weighted sensitivity analysis techniques is described by Ditto
et al. (1976). Recently, McRae and Seinfeld (1983) have estimated the error
bounds on emission inventories as part of an overall model performance
evaluation. Uncertainties in mass emission rates and chemical reactivity are
included. This method could be widely applied in future studies.

TABLE 2. SUMMARY OF METEOROLOGICAL MEASUREMENTS NEEDED FOR MODEL EVALUATION

Wind Measurements
    Vertical Shear Distributions
    Flow Patterns Close to Mountains (Upslope Flows)
    Magnitudes of Nocturnal Drainage Flows
    Quantitative Evaluation of Monitoring Site Exposure
    Characterization of the Effects of Surface Roughness

Mixing-Height Distribution
    Increased Spatial and Temporal Resolution of Mixing Height
    Effect of Mixing-Height Distributions Close to Mountains

Solar Radiation
    Detailed Spatial and Temporal Measurements of UV Flux

TABLE 3. SUMMARY OF NEEDED CHEMICAL MEASUREMENTS

Concentration Measurements—General Aspects
    Quantitative Evaluation of Interference Effects
    Detailed Characterization of Monitoring Site Exposure
    Establishment of Bounds on Measurements Resulting from Errors and Averaging
    Improved Resolution of Vertical Concentration Distributions
    Routine Measurements of Certain Noncriteria Pollutants

HC Measurements
    Spatial and Temporal Variations of HC Reactivities
    Characterization of Aldehydes and Natural HC's
    Need for Increased Species Resolution Beyond Total Hydrocarbon-
      Reactive Hydrocarbon-Methane (THC-RHC-CH4)

Background Air Quality
    Values away from Urban Region
    Vertical Profiles of O3
    HC Concentration and Composition
    Concentration of NO, NO2, and O3

Source Profiles and Emission Factors
    Detailed Emission Distributions from Mobile Sources
    Chemical Composition and Solvent Use by Industries
    Extent and Magnitude of Emissions from Gasoline Evaporation
    Industrial Fuel-Usage Patterns
    Improved Characterization of Emissions from Area Sources
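A top-down versus bottom-up consistency check can be sketched as follows (in
Python; the source categories and tonnages are hypothetical):

    # Compare an inventory summed from individual sources (bottom-up) with
    # an independent total, e.g., scaled from fuel-sales statistics.
    bottom_up = {"mobile": 410.0, "solvent use": 180.0,
                 "petroleum": 95.0, "area sources": 60.0}  # tons/day
    top_down_total = 820.0                                 # tons/day

    bottom_up_total = sum(bottom_up.values())
    discrepancy = (top_down_total - bottom_up_total) / top_down_total
    print("bottom-up total:", bottom_up_total, "tons/day")
    print("relative discrepancy:", round(100.0 * discrepancy, 1),
          "% (a rough bound on inventory accuracy)")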
Short-Term Recommendations
Based on the results of a systematic sensitivity analysis of the EKMA model
concept, as well as some alternative approaches, research priorities should be
established for the items raised in Tables 2 and 3.
AN ACTION PLAN FOR THE AGENCY
The papers given at the workshop and the recommendations of the panel both
point to certain active steps that the Agency can follow to (1) resolve the
dilemma posed by EKMA uncertainties and (2) meet deadlines only a few years away
for updating SIP's.
Even with unlimited resources, it is doubtful that a single well-supported
technical approach could be evolved for O3 control strategy planning under the
existing constraints. What then is the best course of action for the Agency?
Following our short-term approaches, it will be necessary to make do with
available techniques, but also to translate an enhanced understanding of their
limitations into a practical policy guiding user agencies. The action items
enumerated below are steps over and above a research program needed to satisfy
the regulatory mandates placed on the Agency.
Assessment of the Present Status
Several of the efforts described were in partial stages of completion as
reported at the December 1981 workshop. The Agency should first undertake an
objective critical review of where the work stands now and what the prospects
are for completing any unfinished tasks. Some of the yardsticks against which
the current work can be measured are found in this report. Of special interest
will be the results of EKMA performance statistical evaluations (both brute
force and Monte Carlo) and the data from vehicle exhaust smog chamber runs.
Careful interpretations of these research outputs placed against the backdrop of the
workshop information will set the stage for the remaining action items in this
plan.
Review of Recently Submitted SIP Updates
An update cycle for SIP's has just been completed. Whatever has been used
in these SIP's for O3 control strategy planning will give the Agency a clear
idea of the level of skill, the quality of data, and the facility with model
applications presented by state, regional, and local air quality agency staffs.
The progress made beyond the previous round of SIP submittals will help
establish requirements to be placed on methodology improvement that are
consistent with expected learning curves of users. An important part of SIP
review will be an evaluation of (1) the extent to which results of earlier SIP
estimates have been observed in air quality improvements, and (2) the revision of
needed reductions in HC's and NOX resulting both from inventory corrections and
model changes.
Establishment of a Hierarchy of Approaches
Following the spirit of the multi-level scheme originally proposed for
EKMA, a set of alternative approaches should be formulated based on the findings
of the two action items discussed above. Various forms of EKMA may or may not
be included in this hierarchy. For each approach the statistics of uncertainty
should be assigned, based, wherever possible, on technically defensible
sensitivity test data. These statistics can be expressed as probability of
error for each method and for each class of input data. Implicit in this action
item is the development of a capability for discriminating causes of error and
for classifying input data sets within prescribed quality categories. Each
alternative within the hierarchy will have clear and objective ground rules,
which govern the assignment of uncertainty statistics.
Assignment of Acceptability Scores to Various Levels of Uncertainty
Corresponding to each distribution of errors from the methodologies will be
an expectation of some potential for harm as a result of underestimating peak O3
levels achieved with a given control strategy. (Conversely, errors in the other
direction generate probabilities of needless social costs incurred for overly
stringent emission-reduction requirements.) Errors of the first kind should be
scored using values based on health risks so that a quantitative assessment of
safety margin can be assigned to each of the alternative approaches.
Probability of exceedance of some ambient standard or number of persons exposed
to levels violating this standard both fall short of being continuous measures
of acceptability. The error penalties or margins of safety to be assessed to
each control agency will, therefore, be set so that each planning technique for
O3 prediction results in the same acceptability index expressed in terms of
probability of some adverse effect. Thus, the methods and data sets that merit
relatively high confidence levels will be penalized less than cruder approaches
to afford the same degree of protection. A cost-benefit module can be added to
the product of this action item directly if that becomes part of Agency policy
needs.
Development of Reviews and Documents
The panel findings recommend the formation of a review group and the
production of certain documents. The Agency could enhance the objective
formulation of goals and selection of techniques by setting up peer-review
mechanisms that formalize some of the evaluations and permit outside feedback to
the Agency. The review group could get comments from the public, industry, and
other government agencies on the proposals generated under the above action
items. It will then set up a framework for the documents dealing with
background information as well as design standards for control strategy.
Focused effort on each document could be handled by a third party like the
National Academy of Sciences, which already has provided EPA support on the
background for criteria documents and the examination of special topics.
Promulgation of Regulations
The background and technical policy built up in the previous action items
will provide a firm foundation for regulatory requirements for the preparation
of O3-attainment SIP's. The regulations themselves could be relatively simple
and incorporate by reference the outputs of the above action items. Support
provided by an active review system and explicit documentation that will have
preceded the rulemaking itself will streamline the process considerably.
Each of the steps enumerated above embodies implicitly the scientific
recommendations given elsewhere in the report. This action plan is offered as a
set of tasks that will build on the present store of research information over a
2- or 3-year period. Simultaneously, it is assumed that ongoing research will
be building a basis for the next cycle of planning methods improvement.
Attainment of perfection of chemical mechanisms and atmospheric models is not
clearly as important here as learning to make decisions in a world of
uncertainty. The sequence of steps traced above, therefore, leads more toward
the goal of reasonable application of imperfect methods than toward one of
complete understanding of chemical or physical processes.
REFERENCES
Carter, W.P., A.M. Winer, and J.N. Pitts, Jr. 1982. Effects of kinetic
mechanisms and hydrocarbon composition on oxidant-precursor relationships
predicted by the EKMA isopleth technique. Atmos. Environ., 16:113-120.
Dimitriades, B. 1972. Effects of hydrocarbon and nitrogen oxides on
photochemical smog formation. Environ. Sci. Technol., 6:253-260.
Ditto, F.H., L.T. Guitierrez, and J.C. Bosch. 1976. Weighted sensitivity
analysis of emissions inventory data. J. Air Pollut. Control Assoc.,
26:875-880.
Dodge, M.C. 1977. Combined use of modeling techniques and smog chamber data to
derive ozone-precursor relationships. In: International Conference on
Photochemical Oxidant Pollution and Its Control. Vol. II. EPA-600/3-77-001b,
U.S. Environmental Protection Agency, Research Triangle Park, NC.
Federal Register. 1979. 44:65669-65670, November 14.
Jeffries, H.E., K.G. Sexton, and C.N. Salmi. 1981. Effects of Chemistry and
Meteorology on Ozone Control Calculations Using Simple Trajectory Models and the
EKMA Procedure. EPA-450/4-81-034, U.S. Environmental Protection Agency,
Research Triangle Park, NC.
Lloyd, A., F. Lurmann, and B. Nitta. 1982. Homogeneous gas phase SO2 oxidation
rates as a function of nonmethane hydrocarbons and nitrogen oxides
concentrations. In: API Publication 4348, March.
McRae, G.J., and J.H. Seinfeld. 1983. Development of a second generation
mathematical model for urban pollution: Part 2 model performance evaluation.
Atmos. Environ., 17:501-523.
Trijonis, J., and D. Hunsaker. 1978. Verification of the Isopleth Method for
Relating Photochemical Oxidant to Precursors. EPA-600/3-78-019, U.S.
Environmental Protection Agency, Research Triangle Park, NC.
U.S. Environmental Protection Agency. 1981. Guidelines for Use of
City-Specific EKMA in Preparing Ozone SIP's. EPA-450/4-80-027, U.S.
Environmental Protection Agency, Research Triangle Park, NC.
Whitten, G.Z., J.P. Killus, and H. Hogo. 1980. Modeling of Simulated
Photochemical Smog with Kinetic Mechanisms. Vol. I. EPA-600/3-80-028a, U.S.
Environmental Protection Agency, Research Triangle Park, NC.