STUDY PLAN
September 27, 1991
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
Office of Air Quality Planning and Standards
Research Triangle Park, North Carolina 27711
100R91119
Project MOHAVE
Study Plan
U.S. Environmental Protection Agency
September 27, 1991
TABLE OF CONTENTS
List of Figures iii
List of Appendices iv
List of Acronyms v
1. Introduction 1
Reason for the Study 1
Goals of the Study 2
Project MOHAVE Organization 2
Study Planning and Review to Date 3
Study Schedule 4
Plan Organization 5
2. Current Knowledge and Available Data 6
Setting 6
Transport Regimes 7
WHITEX 10
SRP Study 11
MPP Emission Modulation Studies 12
3. General Field Study Design 14
Selection of the Intensive Periods 14
Siting of Monitoring Instrumentation 17
4. Tracer 23
Purpose 23
Choice of Tracer 23
Tracer Release 25
PFT Programmable Samplers 26
Tracer Sample Analysis 27
5. Air Quality Monitoring 32
IMPROVE Samplers 32
DRUM Samplers 33
High Volume Dichotomous Samplers 34
Hydrogen Peroxide Measurements 34
Methylchloroform Measurements 35
6. Meteorological Monitoring 37
Background 37
Objectives 37
Field Study Plan 39
Data Collection 40
Data Quality Assurance 40
Data Processing and Analysis 41
7. Optical Monitoring 42
Overview 42
View Monitoring 43
Electro-optical Monitoring 44
Monitoring Locations and Sampling Frequency 44
8. Emission Inventory and Characterization 47
Purpose 47
Review of Existing Data and Inventories 47
MPP Stack Sampling 47
9. Centralized Data Management and Validation 48
Overview 48
Aerosol Sampling (UC-Davis) 49
Transmissometer Data 49
Radar wind profilers and RASS 50
10. Descriptive Data Analysis and Interpretation 52
Goals 52
Descriptive Statistics 52
Extinction Budget 52
Empirical Orthogonal Function Analysis 53
Meteorological Classification 54
11. Attribution 56
Overview 56
Deterministic Meteorological Modeling 57
Transport, Chemical and Deposition Modeling 59
Hybrid and Receptor Modeling 60
Extrapolation of Intensive Study Periods to the Long-Term 61
MPP Emission Modulation Study 62
Framework for Interpreting Results 63
12. Overall Quality Assurance 65
Approach 65
System Audits - Study Planning and Preparation 65
Measurement System and Performance Audits 66
References 68
LIST OF FIGURES
Figure 1. Map of the southwestern U.S. illustrating location of MPP.
Figure 2. Synoptic flow patterns of concern.
a. Dry summer southwesterly flow
b. Summer monsoonal flow
c. Winter storms
Figure 3. Dri Mountain quarterly wind roses from 1990.
Figure 4. Las Vegas moisture climatology
a. Average specific humidity by month: 1951-1980.
b. Average relative humidity by month: 1951-1980.
c. Average percent daytime cloud cover by month: 1921-1950,
1951-1980 and 1921-1980.
Figure 5. Mean monthly dew point at Dri Mountain: 1982-1990
Figure 6. Mean (resultant) wind direction by month at Dri Mountain: 1976-
1990.
Figure 7. Location of air quality and tracer monitoring sites.
Figure 8. Chromatogram of a 20 L sample of ambient air.
Figure 9. Schematic of IMPROVE sampler.
Figure 10. Meteorological modeling grids.
List of Appendices
1 Project MOHAVE Summary Table
2 Conceptual Plan
3 Project MOHAVE Planning Workshop Participant List
4 Scoping analysis of potential MPP impacts at GCNP
5 MPP Emission Modulation Study Plan
6 Colorado State University Regional Atmospheric Modeling System
7 Washington University cooperative agreement proposal: Development of
an Interactive Data Analysis Tool Using the Monte Carlo Model
8 Project MOHAVE Response to NAS WHITEX Comments
9 CMB, TMBR, and DMB Model Formulations
List of Acronyms
ABL Atmospheric Boundary Layer
ARS Air Resource Specialists
BNL Brookhaven National Laboratory
CD4 Deuterated methane
CMB Chemical Mass Balance
DMB Differential Mass Balance
DRUM Davis Rotating-drum Universal-size-cut Monitoring
ECD Electron Capture Detector
EOF Empirical Orthogonal Function
EPA U.S. Environmental Protection Agency
EMSL Environmental Monitoring Systems Laboratory
GC Gas Chromatography
GCNP Grand Canyon National Park
INAA Instrumental Neutron Activation Analysis
LIPM Laser Integrating Plate Method
LQL Lower Quantifiable Limit
LOD Limit of Detection
MOHAVE Measurement of Haze and Visual Effects
MPP Mohave Power Project
NAS National Academy of Sciences
NGS Navajo Generating Station
NOAA National Oceanic and Atmospheric Administration
NPS National Park Service
OAQPS EPA Office of Air Quality Planning and Standards
PDCB Perfluorodimethylcyclobutane
PDCH Perfluorodimethylcyclohexane
PESA Proton Elastic Scattering Analysis
PFT Perfluorocarbon Tracer
PIXE Particle Induced X-ray Emission
PMCP Perfluoromethylcyclopentane
QC Quality Control
RASS Radio Acoustic Sounding Systems
SCE Southern California Edison
SF6 Sulfur hexafluoride
SO2 Sulfur dioxide
SO4 Sulfate
SRP Salt River Project
TMBR Tracer Mass Balance Regression
WHITEX Winter Haze Intensive Tracer Experiment
XRF X-Ray Fluorescence
1. Introduction
Reason for the Study
In 1977, in Section 169A of the Clean Air Act, Congress set as a national
goal, "the prevention of any future, and the remedying of any existing,
impairment of visibility in mandatory Class I Federal areas which results from
manmade air pollution." Section 169A also required EPA to promulgate
regulations to assure reasonable progress toward meeting the national goal for
mandatory Class I areas where visibility is an important air quality related value.
On November 30, 1979, EPA identified 156 areas, including Grand Canyon
National Park (GCNP), where visibility is an important air quality related value.
On December 2, 1980, EPA promulgated the required visibility regulations. In
broad outline, the visibility regulations require the States to coordinate their air
pollution control planning activities with the appropriate Federal Land Managers
to develop a program to assess and remedy visibility impairment from new and
existing sources.
More recently, Congress reaffirmed its desires to address visibility issues
by adding section 169B to the Clean Air Act amendments of 1990. Section 169B
calls for a substantial research program to study regional haze, and requires the
Administrator of EPA to establish a visibility transport commission for the region
affecting the visibility of GCNP.
In January and February, 1987, the National Park Service (NPS), acting
in its capacity as the Federal Land Manager for GCNP, conducted a study known
as the Winter Haze Intensive Tracer Experiment (WHITEX). WHITEX involved
a six-week long intensive monitoring period during which an artificial tracer was
released from the Navajo Generating Station (NGS) northeast of GCNP. National
Park Service analysis of optical, air quality and meteorological data indicated a
significant fraction of the haze in GCNP during this time period was due to
sulfates resulting from NGS emissions (Malm et al, 1989).
Salt River Project (SRP), the operators of Navajo Generating Station,
conducted a study during early 1990. The SRP study also indicated a
contribution of NGS emissions to haze in GCNP, but at a lower frequency of
occurrence. A difference in prevailing meteorological conditions during the
years of the NPS and SRP studies would at least partially account for the
differences in magnitude and frequency of impacts identified by the two studies.
The results and limitations of the NPS and SRP studies are described briefly in
section 2.
Based on these studies and additional evidence presented, EPA has
proposed regulations that would require substantial reduction of sulfur dioxide
emissions from NGS. While NGS has been linked to a portion of the haze at
GCNP, it is generally recognized that a number of other area and point sources
also contribute to haze at GCNP. One potential source is the Mohave Power
Project (MPP), a 1580 Megawatt, coal-fired steam electric power plant located
in Laughlin, Nevada, southwest of GCNP and operated by the Southern
California Edison Company (SCE). Like NGS, MPP has no pollution control
equipment for sulfur dioxide. Congress, desirous of additional information
concerning the sources of visibility impairment in GCNP, added $2.5 million to
the fiscal 1991 appropriation for EPA to conduct "a pollution tracer study at the
Mohave Powerplant". Project MOHAVE (Measurement Of Haze And Visual
Effects) is EPA's response to this congressional mandate.
Goals of the Study
The primary goal of Project MOHAVE is to determine the contribution
of the MPP to haze at GCNP and other mandatory Class I areas where visibility
is an important air quality related value. This implies a quantitative evaluation
of the intensity, spatial extent, frequency, duration and perceptibility of the MPP
contribution. The improvement in visibility that would result from control of
MPP emissions is included in the primary goal. Secondary goals include an
increased knowledge of the role of other sources on haze in GCNP and the
southwestern United States in general. Because knowledge of regional transport
and air quality levels is necessary to separate the effect of MPP from other
sources, meeting the primary goal will result in increased knowledge about the
impacts from other sources.
It is hypothesized that the maximum impacts of MPP on visibility at
GCNP occur during periods with clouds present (to facilitate transformation of
SO2 to sulfate) and wind directions that transport the MPP plume toward GCNP.
The study is designed to test this hypothesis.
Project MOHAVE Organization
The EPA Office of Air Quality Planning and Standards (OAQPS) in
Durham, North Carolina has overall management responsibility for Project
MOHAVE. Robert Bauman is the manager of Project MOHAVE and has
selected staff from the Environmental Monitoring Systems Laboratory (EMSL)
in Las Vegas as the technical advisors. Staff includes Marc Pitchford, a National
Oceanic and Atmospheric Administration (NOAA) employee assigned to EPA and
Dr. Mark Green, a Desert Research Institute (DRI) employee working under a
cooperative agreement with EPA. To obtain advice on the overall direction of the
study, Mr. Bauman has formed a steering committee composed of government
and industry scientists. The steering committee includes:
Dr. Carol Ellis Southern California Edison Company
Dr. William Malm National Park Service
Dr. Peter Mueller Electric Power Research Institute
Marc Pitchford EPA (EMSL-LV)
Dr. William Wilson EPA (AREAL)
Temporary technical advisory panels provided recommendations during a
planning workshop, as discussed later in this section. Coordination committees,
composed of Project MOHAVE participants and their contractors responsible for
various components of the study, will meet on an ad hoc basis to refine and
coordinate in the following areas:
(1) Monitoring
(2) Modeling
(3) Data Management
(4) Data Analysis
These committees will facilitate joint analyses with SCE and other contributing
participants. The participants in Project MOHAVE include Federal agencies,
universities and private companies. A list of the main participants and their areas
of responsibility is given in the summary table presented in Appendix 1.
Study Planning and Review to Date
The first significant planning effort was the formulation of a conceptual
study plan. The conceptual plan outlined the main components of the study and
gave generalized approaches for each aspect of the study. Preliminary monitoring
locations and schedules were also identified. The purpose of the conceptual plan
was to serve as a preliminary planning document to provide a common starting
point for outside review. The conceptual plan was reviewed by (1) the Project
MOHAVE steering committee, (2) members of the Haze in National Parks and
Wilderness Areas Committee of the National Research Council, National
Academy of Sciences, (3) participants in a Project MOHAVE planning workshop
(a group of about 40 experts), and (4) various other individuals. The conceptual
plan underwent several revisions; the most recent version, which led to the
current plan, is presented in Appendix 2.
The Haze in National Parks and Wilderness Areas Committee was briefed
on the conceptual plan on March 14, 1991 at the University of California-Irvine.
Individual members of the Committee asked clarifying questions and made some
suggestions on the conceptual plan. Several of the members made additional
comments at later dates. The Committee as a whole did not comment on the
plan.
During the week of April 23, discussions were held between SCE, DRI,
and EPA in Las Vegas to formulate conceptual models of conditions during which
MPP emissions may be transported to GCNP. This included a review of the
dynamic processes affecting MPP plume transport and dispersion, and the diurnal
and seasonal variation of these processes. Also considered were issues
concerning chemical transformation and deposition, in particular gas-phase and
aqueous phase oxidation and the roles of clouds and H2O2. These discussions and
a summary of the meeting provided by SCE and DRI helped in selecting the
intensive study periods as well as providing insight about the important physical
mechanisms.
A planning workshop was held April 30-May 2 in Denver. Thirty-nine
individuals with expertise in one or more study components attended. A plenary
session was held first during which the conceptual plan was presented. Following
the plenary session, subgroups met to make recommendations on the study
components. The subgroup topic areas were: 1) tracer, 2) air quality
measurements, 3) emissions, 4) deterministic modeling and upper air
measurements, and 5) quality assurance. Another plenary session followed,
during which clarifying questions were asked and different subgroups coordinated
their plans. The subgroups again met to compile recommended study
components; these were presented in a final plenary session. After the workshop,
a small group met to evaluate the recommendations and plan the implementation
of the study. A list of the participants attending the workshop appears in
Appendix 3.
In July 1991, a table summarizing the main components of the study and
the responsible persons for each component, and a map showing expected
monitoring locations were prepared. These were sent out to study participants.
The purposes of the summary table and map were to provide an update on the
plan and to ensure that the Project MOHAVE staff and other study participants
had a mutual understanding of the responsibilities and plans for each study
component. The summary table was updated after review by participants. It is
presented in Appendix 1. More detailed descriptions of the information in the
summary table appear in subsequent sections of this plan.
Study Schedule
The field measurement portion of the study will last for one year, from
September 1991 through August 1992. Intensive monitoring and tracer release
periods are scheduled for January 4-31, 1992 and July 15-August 25, 1992. A
list of milestones and anticipated completion dates for the major operational phases
is given below. Coordination, data review, and planning meetings will be
scheduled as appropriate.
MILESTONE DATE
Deploy year-round monitoring equipment 9/91
Deploy winter intensive equipment 11/91-12/91
Winter intensive study 1/92
Begin data processing 3/92
Preliminary analysis of winter intensive 5/92
Deploy summer intensive equipment 6/92
Summer intensive study 7/92-8/92
End monitoring 9/92
Preliminary analysis of summer intensive 12/92
Receive final monitoring and modeling data 3/93
Draft report 7/93
Final report 12/93
Plan Organization
This plan is composed of 12 sections and 9 appendices. Section 2
discusses current knowledge, including recent tracer studies and data available for
further study. Section 3 provides an overview of the field study design in terms
of monitoring locations and schedules. Section 4 describes the tracer aspects of
the study. Sections 5-7 discuss the air quality, meteorological, and optical
monitoring plans. Emission inventory and source characterization are outlined
in Section 8. In Section 9, data management and validation are discussed.
Section 10 details the descriptive data analysis and interpretation study
components. The methods of attribution to be used appear in Section 11. Section
12 describes the overall quality assurance plan.
2. Current Knowledge and Available Data
Setting
MPP is located at Laughlin, NV, about 125 km south-southeast of Las
Vegas, 350 km northeast of Los Angeles, and 340 km northwest of Phoenix (see
Figure 1). The MPP is a coal-fired, base loaded generating facility with a 153
m high stack. The base of the stack is at 210 m msl. It uses low sulfur (0.6%
by wt.) Arizona coal delivered by slurry pipeline. Its SO2 emission rate averages
about 150 tons/day at full operation (Nelson, 1991).
Figure 1. Map of the southwestern U.S. illustrating location of MPP.
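As a rough consistency check on these figures, the SO2 emission rate and coal sulfur content imply the approximate coal consumption at full load. The short calculation below is an illustrative sketch only; it assumes that essentially all of the coal sulfur is emitted as SO2, a simplification not stated in this plan.

# Rough consistency check: coal consumption implied by the reported SO2
# emission rate and the 0.6% (by weight) coal sulfur content.  Assumes all
# coal sulfur leaves the stack as SO2 (an illustrative simplification).

SO2_RATE_TONS_PER_DAY = 150      # average SO2 emissions at full operation
SULFUR_FRACTION = 0.006          # 0.6% sulfur by weight
SO2_TO_S_MASS_RATIO = 64.06 / 32.06

sulfur_tons_per_day = SO2_RATE_TONS_PER_DAY / SO2_TO_S_MASS_RATIO
coal_tons_per_day = sulfur_tons_per_day / SULFUR_FRACTION

print(f"implied sulfur emitted: {sulfur_tons_per_day:.0f} tons/day")
print(f"implied coal consumption: {coal_tons_per_day:.0f} tons/day")
# Roughly 75 tons/day of sulfur and 12,500 tons/day of coal, a plausible
# order of magnitude for a 1580 MW coal-fired plant.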
The topography in the vicinity of the MPP is complex with sparse
vegetation. A portion of the Colorado River Valley, the Mohave Valley, lies to
the north of the MPP between Davis and Hoover Dams. The Mohave Valley is
bordered on the west by the El Dorado and Newberry Mountains and on the east
by the Black Mountains. Long north/south oriented valleys lie to the east
(Detrital Valley) and west (El Dorado Valley) of these ranges.
The Mohave Valley walls are not symmetric with respect to the valley
axis. Western slopes rise gradually, while eastern slopes rise slowly for the first
few kilometers with steep walls further to the east. The border between Nevada
and Arizona also extends along the valley axis. The bottom of the valley is about
200-300 m msl and the ridges reach 1200 m msl. Toward the west, the Mohave
Valley extends into a high plateau and toward the east into the Detrital Valley
plateau (600 m msl). The Mohave Valley narrows significantly as it approaches
Hoover Dam. At Lake Mead the terrain flattens. The western entrance to GCNP
is at the end of the eastern arm of Lake Mead (180 m msl).
This terrain controls the mesoscale, but not the synoptic scale flow
patterns.
Transport Regimes
Several modeling and measurement studies have been conducted in the
vicinity of the MPP over the past 20 years (Freeman and Egami, 1988; Yamada,
1988; Koracin et al, 1989; White et al, 1989). Results from these studies
provide a conceptual model of pathways by which MPP emissions can reach
GCNP. Figure 2 illustrates the three synoptic flow patterns of greatest
importance: (1) summertime dry southwesterly flow (flow from the southwest
toward the northeast), (2) summertime monsoons, and (3) winter storms.
Both mesoscale and synoptic scale meteorological conditions influence the
movement of the MPP plume. The relative influence from each of these transport
and transformation scales differs from summer (June, July, August) to winter
(December, January, February). Southerly to southwesterly flows are needed to
transport MPP emissions to the GCNP. Spring and fall are transitional periods
that contain mixtures of the summer and winter regimes and are not as well-
differentiated from each other. Figure 3 illustrates the dominant air flow for each
quarter of 1990 as derived from the Dri Mountain wind data.
During the summer, southwesterly, westerly and southerly winds are
common in the vicinity of the MPP. There are two distinct cases: one with dry
air masses and a second with moist monsoon air masses. During the
winter, the most common situation is northerly winds associated with a high
pressure ridge over the Pacific Coast. However, infrequent frontal passages
result in westerly and southwesterly flows on the order of 10% of winter days.
The latter conditions can transport MPP emissions toward GCNP.
Dry, Southwesterly Flow from Southern California and the Pacific
Ocean
The most common occurrence is dry, southwesterly synoptic flow caused
by heating of the Mojave Desert which creates a lower pressure with respect to
incoming air masses. These air masses traverse the Mojave Desert after
entraining pollutants emitted from urban-southern California. These include
pollutants flowing through Tehachapi, Cajon and Banning passes.
This scenario has a high frequency of occurrence during the summer
months. The regimes change daily from decoupled flow during the night with
localized circulation patterns within the Mohave Valley and along the slopes of
Figure 2. Synoptic flow patterns of concern. a) Dry summer southwesterly flow. b) Summer monsoonal flow. c) Winter storms.
Figure 3. Dri Mountain quarterly wind roses from 1990.
the constraining mountains, to coupled flow that is dominated by the synoptic
winds aloft.
Summer Monsoons
During July and August, moist air is frequently transported from the Gulf
of California and/or the Gulf of Mexico in southeasterly to southerly flows.
Synoptic wind speeds vary from 6 to 20 m/s at 6000 m agl. These air masses
traverse northern Mexico, the southern part of Texas, New Mexico, and most of
the state of Arizona. Pollutants emitted from the smelters in Arizona and Mexico
as well as those from Phoenix and Tucson can be entrained into this airmass.
This synoptic flow is driven by a large-scale low over the western part of the
U.S. created by strong surface heating.
Differential heating causes updraft motions on the slopes of the mountains on
both sides of the valley, resulting in chains of clouds developing along the
ridgetops. These clouds may offer a mechanism for rapid oxidation of SO2 to
SO4 if the plume is entrained in them and if oxidants such as H2O2 are present in
sufficient amounts.
The reacted and unreacted emissions could then be carried through the
Mohave Valley by the southerly component of the wind toward Lake Mead, after
which they might be transported toward GCNP by locally channeled flow or
caught up in the monsoonal flow and transported across the plateau to the GCNP.
Summer monsoon episodes are usually 3 to 5 days in duration.
Winter Storms
In general, the synoptic weather patterns are not as favorable for transport
from MPP towards GCNP in winter. The Great Basin and the Colorado Plateau
are frequently dominated by high pressure cells creating a flow that is not
conducive for transport from MPP to GCNP. Southwesterly to westerly flow
occurs mainly during the movement of frontal systems, developing over the
Pacific Ocean from west to east. These storms generally exhibit minimal
warm frontal activity. As a consequence, the southwesterly to westerly flow
needed for transport from MPP to GCNP will occur as the cold front with its
associated trough approaches the Mohave Valley. This weather type can last
from one to three days, can be wet or dry, and typically occurs about ten times
during the winter period.
WHITEX
The Winter Haze Intensive Tracer Experiment (WHITEX), conducted by
the National Park Service, was designed to evaluate the feasibility of attributing
visibility impairment in selected geographical regions to emissions from a single point
source. WHITEX was conducted during a six week period in January and
February 1987. During this time, an artificial tracer, deuterated methane (CD4),
was released from the NGS. Aerosol, optical, tracer and other properties were
measured at Hopi Point, which is in GCNP, and other locations. Synoptic
weather maps indicated a high frequency of high pressure over the area, which
resulted in transport of the NGS plume from the northeast toward GCNP.
Trajectory analysis and deterministic modeling indicated transport from the area
of NGS to Hopi Point during the period with highest sulfate concentrations.
The extinction budget at Hopi Point indicated that sulfate aerosol (and
associated water) contributed two-thirds of the non-Rayleigh light extinction
during WHITEX. Attribution analysis used the Tracer Mass Balance Regression
(TMBR) receptor model and the Differential Mass Balance (DMB) hybrid model.
According to the NPS analyses, NGS contributed substantially to sulfate and light
extinction at Hopi Point.
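As general background on the form of such models, the sketch below shows a tracer mass balance regression in its simplest form: receptor sulfate concentrations are regressed on the collocated tracer concentrations, and the tracer term of the fit is interpreted as the sulfate contributed by the tagged source. The data values are invented and the formulation is deliberately minimal; the actual TMBR and DMB formulations are given in Appendix 9.

# Minimal sketch of a TMBR-style apportionment: regress receptor sulfate on
# the collocated tracer concentration and interpret the tracer coefficient as
# the tagged source's sulfate contribution.  Data are invented for illustration.
import numpy as np

tracer = np.array([0.1, 0.4, 0.9, 1.5, 2.2, 3.0])    # tracer concentration (arbitrary units)
sulfate = np.array([0.6, 0.9, 1.4, 1.9, 2.7, 3.2])   # sulfate concentration (ug/m3)

# Ordinary least squares with an intercept representing all other sources.
X = np.column_stack([np.ones_like(tracer), tracer])
(intercept, slope), *_ = np.linalg.lstsq(X, sulfate, rcond=None)

attributed = slope * tracer                 # sulfate attributed to the tagged source
fraction = attributed.mean() / sulfate.mean()
print(f"slope = {slope:.2f}, intercept = {intercept:.2f} ug/m3")
print(f"average attributed fraction = {fraction:.0%}")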
The WHITEX data analysis methodology, results, and use of the results
were cause for considerable controversy. The Committee on Haze in National
Parks and Wilderness Areas evaluated WHITEX (National Research Council,
1990). The Committee neither fully supported nor fully discredited the WHITEX
report. Based on evaluations of meteorological, photographic, chemical and other
physical evidence, the Committee concluded "at some times during the study
period, NGS contributed significantly to haze in GCNP." However, the
committee also concluded that "WHITEX did not quantitatively determine the
fraction of SO4= aerosol and resultant haze in GCNP that is attributable to NGS
emissions."
A key uncertainty identified by the Committee is the use of TMBR and
DMB to apportion secondary species such as SO4=. Limitations of the regression
analyses noted by the committee are: "(1) satisfactory tracers were not available
for all major sources; (2) the interpretation did not adequately account for the
possible covariance between NGS contributions and those from other coal-fired
power plants in the region; and (3) both models employ inadequate treatment of
sulfur conversion, which is an important controlling factor in the formation of
haze at GCNP." Another limitation noted by the Committee was the lack of
measurements within the canyon (beneath the rim). A more complete review of
the National Research Council WHITEX evaluation is provided in Appendix 8.
SRP Study
The Navajo Generating Station Visibility Study was conducted for the
SRP, the operators of NGS, from January 10 through March 31, 1990. Its
purpose was to address visibility impairment in GCNP during the winter months
and the level of improvement that might be achieved if SO2 emissions from NGS
were reduced. The study was performed to provide input to the rulemaking
process of the EPA regarding NGS SO2 controls (Richards et al, 1990).
Perfluorocarbon tracers were released from each of the three stacks of
NGS. Surface and upper air meteorology, particle and gaseous components, and
tracer measurements were made at many sites. Deterministic modeling was done
to estimate the contribution of NGS and other sources to sulfate levels for two 6
day periods with poor visibility. Various data analysis techniques were used to
examine the relationships among NGS emissions, meteorology, air quality, and
visibility during both episode and non-episode conditions.
The SRP study concluded that NGS emittants were absent from the
vicinity of Hopi Point most of the time. The study estimated that the average
contribution of NGS to fine sulfur at Hopi Point was small, although NGS sulfur
dominated during one 4-hour period. However, it was noted that the frequency
of wind directions transporting the plume toward GCNP was lower than normal
during this time period.
MPP Emission Modulation Studies
The MPP was inoperable for the seven month period June through
December 1985. Using data from the period of shutdown and during operation
of MPP, a study was done by SCE (Murray et al, 1990) to assess the effect of
MPP upon particulate sulfur levels at Spirit Mountain, Meadview, and Hopi
Point. Spirit Mtn. is 20 km northwest of MPP, Meadview 110 km north-
northeast of MPP and Hopi Point 240 km northeast of MPP. Meadview, 5 km
west of the boundary of GCNP, is expected to have the highest particulate sulfur
impact from MPP among the three sites. The study found no statistically
significant difference in sulfate levels at the three sites between operation and
shutdown of MPP. It was suggested that the substantial year to year variability
of sulfate was responsible for not detecting a statistically significant difference.
The 95% confidence bounds for the MPP impact were from less than 11.6% to
less than 21% at Meadview and from less than 3.3% to less than 7.8% at Hopi Point
during favorable transport conditions. The upper limit on average sulfate at
Meadview was estimated to be 15%, which is the level of uncertainty in the
statistical analysis.
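The shape of this problem, a modest source signal against large day-to-day variability, can be illustrated with a simple two-sample comparison of sulfate means during operation and outage periods. The sketch below uses invented data and a Welch-type confidence interval purely to show the form of such a test; it is not the statistical procedure actually used by Murray et al.

# Illustrative comparison of sulfate during MPP operation vs. the 1985 outage.
# All data are invented; the point is that a small difference in means can be
# statistically indistinguishable from zero when day-to-day variability is large.
import numpy as np

rng = np.random.default_rng(0)
operating = rng.lognormal(mean=0.00, sigma=0.5, size=60)   # sulfate, ug/m3
outage    = rng.lognormal(mean=-0.05, sigma=0.5, size=60)  # slightly lower mean

diff = operating.mean() - outage.mean()
se = np.sqrt(operating.var(ddof=1) / operating.size + outage.var(ddof=1) / outage.size)
low, high = diff - 1.96 * se, diff + 1.96 * se

print(f"mean difference = {diff:.2f} ug/m3, 95% CI = ({low:.2f}, {high:.2f})")
# If the confidence interval includes zero, the difference is not statistically
# significant even though the underlying means differ.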
From data presented by Murray, it can be seen that sulfate levels at Spirit
Mountain, generally not affected by MPP, were greater during the outage
compared to non-outage periods, indicating higher background levels during the
outage. However, at Meadview, average sulfate levels were lower during the
outage. Thus, levels at Meadview were lower during the outage even though
regional levels were higher. While suggestive, the number of samples was not
sufficient to prove an impact from MPP. This comparison, done as part of the
scoping process, appears in Appendix 4.
A more sophisticated study of the outage will be conducted under the
Desert and Intermountain Air Transport program at DRI, sponsored by SCE.
Filters from the SCENES program, used in Murray's study, were chemically and
physically analyzed only every third day. Samples for the
intermediate days were archived. The new study will analyze all samples,
including those previously analyzed. A more sophisticated meteorological
classification scheme will also be done. The sulfate levels for the same
meteorological regimes can then be compared for the outage and non-outage
conditions. Other emission modulations of shorter duration (i.e. periods where
only one of the two units at MPP was operating) will also be analyzed.
Deterministic wind field and transport modeling will be done for each of the
meteorological regimes. The modeling will account for variations within each
regime. A detailed compilation of regional SO2 emissions for the control and
outage periods will be done. A draft version of the outage study plan appears in
Appendix 5.
3. General Field Study Design
The duration of the field study will be one year. A complete annual cycle
was considered important for evaluating the overall impact of a source. By
monitoring for an entire year, all the seasons may be studied.
For practical reasons, the year was divided into "intensive" and "non-intensive"
periods. During the intensive periods tracer will be emitted from the MPP stack
and tracer and particulate data will be collected continuously at over 30 sites.
During the non-intensive periods tracer will not be released, the number of
particulate monitoring sites will be scaled back considerably and sampling will be
done only two days per week. Meteorological and optical monitoring will be
done continuously.
Selection of the Intensive Periods
The intensive study periods were selected to be times when the MPP would
be most likely to contribute to haze in GCNP. It is expected
that secondary sulfates formed from oxidation of MPP SO2 emissions make up the
largest portion of the MPP contribution to haze in GCNP. Primary particulate
emissions from MPP contribute to haze nearer to the power plant, but at the
distance of the GCNP, secondary sulfates are expected to dominate. Dry phase
oxidation of SO2 is much slower than aqueous phase oxidation. Thus, cloudy
periods can cause much more rapid conversion of SO2 to sulfate. Aqueous phase
oxidation is on the order of 50-100% per hour if oxidants are present in sufficient
quantity (Lee, 1986).
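Treating the conversion as a pseudo-first-order process puts these rates in perspective: the fraction of SO2 oxidized after t hours is 1 - exp(-kt). The rate constants and transport time in the sketch below are illustrative assumptions consistent with the ranges quoted above, not Project MOHAVE measurements.

# Fraction of SO2 converted to sulfate after t hours for a pseudo-first-order
# rate k (per hour): f = 1 - exp(-k * t).  Rates and transport time are
# illustrative assumptions, chosen to contrast the dry and in-cloud cases.
import math

def fraction_converted(k_per_hour: float, hours: float) -> float:
    return 1.0 - math.exp(-k_per_hour * hours)

transport_time_h = 6.0   # assumed MPP-to-GCNP transport time
for label, k in [("dry gas-phase (~1%/h)", 0.01), ("in-cloud aqueous (~50%/h)", 0.50)]:
    frac = fraction_converted(k, transport_time_h)
    print(f"{label}: {frac:.0%} of SO2 converted in {transport_time_h:.0f} h")
# Dry: about 6% converted; in-cloud: about 95%, which is why cloudy transport
# periods are expected to produce the largest sulfate impacts at GCNP.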
Cloudy periods with wind directions transporting the MPP plume toward
GCNP are the periods when impacts to visibility at GCNP due to MPP would be
most likely to occur. As discussed in Section 2, these conditions may occur
during the summer monsoon and certain winter periods. Calculations of the
potential impact of MPP to haze at GCNP under highly simplified conditions
were done for dry southwesterly and monsoonal summer conditions, and pre-
frontal winter conditions. These calculations indicated a potential for perceptible
visibility impairment at GCNP from MPP emissions for all three cases (see
Appendix 4).
Moisture parameters calculated from long term National Weather Service
data from Las Vegas are shown in Figure 4. Specific humidity, which gives the
amount of water vapor in the air, is highest in August, with July having slightly
less moisture. Average monthly dew point temperature for the years 1982-1990
(Figure 5) at Dri Mountain also showed a peak in August, with slightly lower
values in July. Relative humidity peaks in December and January. Also note that
August has higher relative humidity than July. December and January are the
cloudiest months, with February and March only slightly less cloudy. A
secondary peak in cloudiness occurs in July, with somewhat less cloudiness in
Figure 4. Las Vegas moisture climatology.
(a) Average specific humidity by month: 1951-1980.
(b) Average relative humidity by month: 1951-1980.
(c) Average percent daytime cloud cover by month: 1921-1950, 1951-1980, and 1921-1980.
(d) Average precipitation by month: 1921-1950, 1951-1980, and 1921-1980.
Figure 5. Mean monthly dew point temperature (°C) at Dri Mountain: 1982-1990.

Figure 6. Mean (resultant) wind direction by month at Dri Mountain: 1976-1990.
August. The precipitation data show two distinct peaks: one in July and August,
the other December through February. Of interest is the substantial difference
in average precipitation in November and December between the 1921-1950 and
1951-1980 data. This suggests that year to year variability is large.
The climatology described above suggests a summer intensive period
covering portions of July and August and a winter period that could be any time
between December and February. January and December showed the highest
values for the moisture related parameters, with January's precipitation data being
more consistent than December's. January was chosen for the winter intensive.
August has slightly higher relative and specific humidity than July. Thus, the
summer intensive will be centered on early August.
Even though we are attempting to optimize the study periods for the
specific conditions described above, meteorological conditions are highly variable
from year to year. The most frequent winter flow at MPP is away from GCNP.
However, winter flow is often toward the Joshua Tree Wilderness, another Class
I visibility protected area. In summer, we are likely to experience dry flow from
the southwest a significant portion of the time in addition to the moist monsoonal
flow. Thus, information about other common conditions will also be obtained.
Mean vector (resultant) wind direction each month for the years 1976-1990
at Dri Mountain is shown in Figure 6. Dri Mountain is a pointed hill 150 meters
high and adjacent to the Colorado River a few kilometers north of MPP. The
instrument level is approximately at the same elevation (MSL) as the top of the
MPP stack. However, the plume centerline is generally 400-700 m above stack base
(averaging 663 m), which is 250-550 m above Dri Mountain. The winds
at Dri Mountain would be expected to be influenced more by channeling due to
topographic features than the winds at plume height, particularly during nighttime
and morning hours. Winds at plume height typically have a greater westerly
component (toward GCNP) than Dri Mountain winds. The winds at Dri
Mountain indicate a predominance of northerly winds during November through
February and southerly winds during April through September. March and
October are transitional periods. Three January periods during 1976-1990 had
south to southwest resultant winds, indicating more frequent flow toward GCNP
during these years.
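For reference, the resultant (vector-mean) wind direction plotted in Figure 6 is obtained by averaging the wind as vectors rather than averaging direction angles directly, which avoids the wraparound problem at 0°/360°. The calculation is sketched below with invented observations.

# Vector-mean (resultant) wind direction from speed/direction observations.
# Directions use the meteorological convention (degrees the wind blows FROM).
# The observations below are invented, for illustration only.
import numpy as np

speed = np.array([3.0, 4.5, 2.0, 5.0])             # m/s
direction = np.array([350.0, 10.0, 200.0, 20.0])   # degrees

theta = np.radians(direction)
u = -speed * np.sin(theta)   # eastward component
v = -speed * np.cos(theta)   # northward component

resultant_dir = np.degrees(np.arctan2(-u.mean(), -v.mean())) % 360.0
resultant_speed = np.hypot(u.mean(), v.mean())
print(f"resultant direction = {resultant_dir:.0f} deg, speed = {resultant_speed:.1f} m/s")
# Naively averaging the four direction angles gives 145 deg (southeasterly);
# the vector mean correctly reports a northerly resultant near 7 deg.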
The analysis of humidity, clouds, precipitation, and winds suggests January
4-31, 1992 as the optimal winter intensive period and July 15 to August 25, 1992
as the optimal summer intensive period.
Siting of Monitoring Instrumentation
The aerosol, tracer and optical monitoring network includes three classes
of sites. These are denoted as (1) receptor, (2) other Class I, and (3) background
sites. Air quality and meteorological monitoring are described in more detail
in sections 5 and 6, respectively. The aerosol and tracer monitoring
was designed to provide sampling and analysis every day for many sites during
the intensive periods, and sampling and analysis two days a week during the rest
of the study year. The reduction of monitoring for the non-intensive periods is
necessary due to cost considerations.
The preliminary network of sites is shown in Figure 7. The siting will be
finalized after a monitoring planning meeting to be held in Las Vegas in early
October. A listing of the sites, approximate elevation, instrumentation, and a
brief reason for selecting each site is given in the table below. The receptor sites
(R1-R4) are either within or in very close proximity to GCNP. The other Class
I sites (I1-I6) are in areas that may be impacted by MPP and/or serve as
background sites. Most of the receptor and other Class I sites had some degree
of existing or planned monitoring prior to Project MOHAVE. These sites will
have supplemental monitoring associated with Project MOHAVE and will operate
during the entire study year. The background sites (B1-B21) are intended to
characterize high elevation and low elevation transport into the study area as well
as showing more detailed concentration patterns within the study area. The
background sites will operate only during the intensive periods. The
instrumentation to be used is described in Sections 4-7 and references cited in
those sections.
SITE IDENTIFICATION TABLE

Id No  Name                 Elevation  Particle   Optical  Meteorology
                            (meters)   & Tracer
R1     Meadview             900        1          T,N,P
R2     Long Mesa            1830       4          N
R3     Hopi Point           2160       1          T,N,P
R4     Indian Gardens       1220       1          T,N,P
I1     San Gorgonio         1680       2          T,P
I2     Joshua Tree          1500       2          P
I3     Tonto                730        2          T,P
I4     Sycamore Canyon      2000       2          P
I5     Petrified Forest     1680       2          T,P
I6     Bryce Canyon         2600       2          P
B1     Tehachapi Pass       1240       3
B2     Cajon Pass           1380       3
B3     Baker                280        3
B4     Amboy                190        3
B5     Parker               130        3
B6     Wickenburg           620        3
B7     Las Vegas Wash       370        3
B8     Cottonwood Cove      210        3                   S,U
B9     Yucca                580        3
B10    Dolan Springs        850        3
B11    Truxton              1370       3                   S,U
B12    Seligman             1620       1
B13    Prescott (airport)   1620       3
B14    Overton Beach        370        3
B15    New Harmony          1520       3
B16    Marble Canyon        1220       3
B17    Mt Springs Summit    1680       3
B18    Spirit Mountain      1700       3
B19    Hualapai Mt Park     1980       3
B20    Camp Wood            1980       3
B21    Jacob Lake           2400       3
Explanatory notes:
1 Full IMPROVE samplers. 24-hour samples midnight to midnight, Wednesday
and Saturday during the non-intensive periods. Twice daily samples of aerosol
and tracer will be taken each day during the intensives. Specific hours for the
beginning and end of each daily sampling period will be determined at the
monitoring coordination meeting. DRUM samplers with 4 or 6 hour sampling
periods (only selected samples will be analyzed).
2 Full IMPROVE samplers. 24-hour samples (aerosol and tracer) each day
during the intensives. 24-hour samples midnight to midnight Wednesday and
Saturday during non-intensive periods.
3 IMPROVE channel A and filter pack for SO2. 24-hour samples (aerosol and
tracer) each day during the intensives. No sampling during non-intensives.
4 Long Mesa will only have a DRUM sampler for particle monitoring.
T= transmissometer, N= nephelometer, P= photography, S= surface
meteorology, U= upper air meteorology
Surface and upper air meteorological data will be collected at additional sites
identified in Section 6.
Important considerations in selecting sites include the availability of power
and accessibility. The power requirement imposes strict limitations on siting.
Figure 7. Location of air quality and tracer monitoring sites. (Map legend: points of reference, coal-fired power plants, receptor sites, other Class I sites, background sites, LA Basin pass sites, low elevation transport, high elevation transport.)
Meadview was chosen because it is within about 5 km of GCNP, has existing
monitoring by the Desert Research Institute (DRI), and is at the west end of
GCNP, thus closer to MPP than other areas of GCNP. Long Mesa is also at the
edge of GCNP and the location of another DRI monitoring site. Hopi Point and
Indian Gardens are existing NPS monitoring sites within GCNP. Joshua Tree and
Sycamore Canyon are potentially impacted by MPP. The remaining Class I sites
will help characterize transport into the area.
The Tehachapi Pass site (Bl) is located in a pass between the San Joaquin
Valley and Mojave Desert and is intended to monitor the exchange of air between
these areas. The San Joaquin Valley is a large source of SO2. The Cajon Pass
site (B2) is located between the Los Angeles Basin and Mojave Desert and is a
major exit pathway for Los Angeles Basin air. Sites B3-B6, I2, and I3 are low
elevation southern boundary sites. These locations form an arc to characterize
the sulfur flow into the main study area from the southwest to southeast.
Locations B17-B20 form a second arc to the south of GCNP. These sites
are located on terrain that rises 900 to 1200 meters above the surrounding area.
The sites should be in the middle of the mixed layer during the summer intensive
and frequently above the mixing layer during the winter intensive. Measurements
from these sites when tracer is absent, coupled with the nearby low-elevation
southern sites B7-B13, should characterize the sulfur flux from the southwest
through southeast exclusive of MPP sulfur into the receptor area. At other times,
tracer from MPP may be present at these sites. In conjunction with the low
elevation southern sites, these sites will help determine vertical distributions of
sulfur and tracer.
Sites B7-B13 are located in possible MPP transport corridors between the
southern boundary sites and GCNP. These locations will indicate if the emissions
from MPP are transported toward GCNP in a narrow cone or more widely
dispersed air mass, in addition to identifying the most common transport corridor
from MPP to GCNP.
The MPP plume usually travels to the south along the Colorado River in
the winter. It is suspected that the plume may sometimes leave the river area in
an eastward direction through a gap in the mountains near site B9. The high
elevation sites B18-B20 along with B10-B12 should be able to verify if MPP
emissions are being transported from the area of B9 to the east or northeast in a
low-level surface layer or more dispersed in a deeper layer.
Locations B7, B8, B10, and B14 are placed in an attempt to isolate
emissions of MPP, the Reid Gardner power plant, and Las Vegas as they are
mixed over Lake Mead on their way to the western end of GCNP (Meadview).
Under stagnation conditions, the high elevation sites B17 and B18 should
characterize the cleaner regional air above the mixing layer.
The northern boundary sites B15, B16, B21, and I6 are located to
characterize flow into the region from the north. These sites will help identify
the effects of the Wasatch Front urban and industrial sources, the NGS, and other
coal-fired powerplants to the north and east. Site B15 is very close to Zion
National Park. Emissions from MPP are likely to be transported toward this site
often during the summer. Locations B15 and B16 serve as low elevation sites. I6 and
B21 are high elevation sites.
4. Tracer
Purpose
During the intensive study periods, an artificial tracer will be released
from the stack of the MPP and monitored at the same 31 locations as the air
quality monitoring. There are several reasons for releasing tracer. Tracer
monitoring data will identify the general transport patterns for the MPP plume.
Knowing where the plume goes is critical to begin to understand the larger
question of the MPP's visibility effects. To fully resolve the plume position and
extent, a very extensive monitoring network, including aircraft measurements
would be required. This is beyond the resources of the study. The 31
monitoring locations should provide the approximate location of the plume,
although its horizontal and vertical extent will be uncertain.
Different artificial tracers will be released from the Los Angeles Basin and
San Joaquin Valley during one-half or more of the summer intensive. These
additional tracers will be released to gain insight into the transport of emissions
from these large source areas into the Project MOHAVE study area. They will
help identify the interaction between MPP and southern California emissions and
provide dilution ratios for southern California emittants.
The tracer will be used to provide a check of deterministic modeling
results. A transport model, using wind fields generated by a dynamic
meteorological model, will predict plume locations. The tracer data will be
compared to the transport model predictions to evaluate the model performance.
The concentrations of the tracer will be used to evaluate dispersion of the plume
predicted by the models as well as location. The dynamic meteorological and
transport models are discussed in section 11.
The tracer will also be used for receptor and hybrid modeling purposes.
The tracer will serve as a unique "tag" for the MPP. The receptor and hybrid
modeling is described in section 11.
Choice of Tracer
Ideally, a tracer should closely mimic the species of interest for receptor
modeling and chemical transformations; in this instance SO2 and its conversion
to SO4, and deposition of the sulfate particles. This would suggest using isotopes
of sulfur or oxygen. However, the large amounts of these materials that would
be required are not available; to produce them would require resources greater
than those available for this study.
Among the potential tracer materials are deuterated methane (CD4),
various perfluorocarbons (PFT's), sulfur hexafluoride (SF6), and particulate rare
earth oxides. CD4, PFT's, and SF6 are conservative tracers; thus conversion
of SO2 to SO4 and deposition of SO2 and SO4 can not be directly simulated. It
has been suggested that nonconservative rare earth particle tracers be used
because of their potential to mimic sulfate particles. However, sulfate particles
are not directly emitted in significant quantities; rather they are typically formed
during transport at rates which vary with meteorologic and other atmospheric
conditions. Thus some variable proportion of the rare earth particles will have
deposited before the sulfates are formed. Additionally the deposition of SO2
occurs more rapidly than either sulfate or rare earth particles. A combination of
conservative and particulate tracers could yield more insight into the fate of
MPP emissions than could be obtained using a single tracer. However, the additional
expense associated with using an additional class of tracer is beyond the
resources of Project MOHAVE.
SF6 has been used in many short range experiments. Although the cost
per kilogram is low compared to other conservative gaseous tracers, the
background concentration is much higher, which more than offsets the decreased
unit cost. SF6 is not practical for the spatial scale of the study region.
CD4, used in WHITEX, has low background values and is detectable at
very low concentrations, so small amounts of this tracer are sufficient. Though
the cost per unit mass is high, the total cost of tracer material is less than the cost
of PFT's. However, the sample analysis cost is very high ($800-$1000/sample),
compared to about $20/sample for PFT's. The lower cost of PFT analysis
permits the analysis of many more samples for the available budget. For CD4,
the strategy would be to analyze only a subset of all possible samples. Analysis of all samples
allows a more thorough evaluation of the deterministic modeling. Different PFTs
can be released from other sources of interest and analyzed from the same sample
for virtually the same low analytical cost.
The SRP tracer study, which used PFT's, apparently had some major
problems with the tracer portion of the study. Collocated samplers showed near
zero correlation. Apparently this was at least partially due to the fact that many
samples were near the limit of detection. Other tracer studies have also had
apparent quality control problems, for example, contaminated samples. It is
imperative to have a rigorous quality control program for the tracer components
of the study. The quality control methods to be used for the Project MOHAVE
tracer study are described later in this section. There is no fundamental reason
that would prohibit PFT's or other tracers from giving reliable, quantitative
results.
Project MOHAVE will use perfluorocarbon tracers. The tracer to be used
to track the MPP plume is ortho-cis perfluorodimethylcyclohexane (ocPDCH).
The tracer material to be released is ortho (o) PDCH, 45% of which is ocPDCH.
Perfluoromethylcyclopentane (PMCP) will be used to tag the Los Angeles Basin.
Perfluorotrimethylcyclohexane (PTCH) will be used to track emissions from the
San Joaquin Valley. The ambient background of ortho-cis PDCH is very low,
0.3 parts per quadrillion (ppq) (Dietz, 1987). The SRP study used PDCH and
other PFT's but analyzed for total PDCH, not individual isomers. The
background of total PDCH is 22 ppq. PMCP background concentrations are 3.3
ppq; PTCH background is 0.3 ppq. Brookhaven National Laboratory (BNL) will
do the tracer analysis for Project MOHAVE. In addition to analyzing isomers,
BNL pre-concentrates the sample; thus much greater sensitivity is achieved
compared to the SRP analysis methodology (Dietz, 1991).
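The benefit of isomer-specific analysis can be illustrated with a quick signal-to-background comparison, using the backgrounds quoted above and the expected receptor concentration given at the end of this section (about 4 fL/L of ocPDCH at Long Mesa). Treating fL/L and parts per quadrillion by volume as equivalent is our simplification in the sketch below.

# Signal-to-background comparison for the MPP tracer: isomer-specific ocPDCH
# analysis vs. a total-PDCH measurement.  The ~4 ppq expected signal is taken
# from the expected-concentration table at the end of this section.

EXPECTED_OCPDCH_SIGNAL_PPQ = 4.0    # expected crosswind-average peak at Long Mesa
BACKGROUND_OCPDCH_PPQ = 0.3         # ambient ortho-cis PDCH background
BACKGROUND_TOTAL_PDCH_PPQ = 22.0    # ambient total (all-isomer) PDCH background

print(f"signal / ocPDCH background     = {EXPECTED_OCPDCH_SIGNAL_PPQ / BACKGROUND_OCPDCH_PPQ:.0f}x")
print(f"signal / total PDCH background = {EXPECTED_OCPDCH_SIGNAL_PPQ / BACKGROUND_TOTAL_PDCH_PPQ:.2f}x")
# The isomer-specific measurement sees a signal roughly 13 times its background,
# whereas a total-PDCH measurement would have to resolve a few-ppq increment on
# top of a 22 ppq background; this is why isomer analysis and pre-concentration
# matter at these distances.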
Tracer Release
Tracer can be released at a constant emission rate or at a constant ratio of
tracer to SO2. Variation of tracer to SO2 ratios was a complicating factor in the
WHITEX receptor modeling analysis. If released at a constant rate, SO2 emission
rate variations would complicate the receptor modeling, requiring adjustment of
the ratio of tracer to sulfur dioxide concentration. This requires knowledge of
plume age. However, for use in deterministic modeling, it is more desirable to
have a constant tracer emission rate, to simplify the dispersion calculations. If
a constant release rate were used, the deterministic model would be used to give
the plume age necessary to adjust the tracer to sulfur dioxide emission rates in the
receptor modeling. The MPP is a base loaded unit. It typically operates at either
full capacity, 1/2 capacity (one unit down) or is down. Tracer will be released
at a rate proportional to the SO2 emissions if a practical approach to do it can be
devised. If not, then the tracer release rate will track the status of the power
generation units with full, one-half or zero tracer emissions, corresponding to
two, one, or zero units operating. This will more closely preserve the ratio of
tracer to SO2 emissions than a constant tracer release rate. Good coordination
between MPP operators and the tracer release personnel will be expected. Tracer
release from the San Joaquin Valley and Los Angeles Basin will be at a constant
rate.
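The fallback strategy of tracking unit status can be expressed as a simple lookup of release rate by number of operating units, as sketched below. The full-load rate is the planned 0.14 kg/h value from the release table at the end of this section; the control logic itself is an illustrative sketch, not the actual release system design.

# Sketch of the fallback tracer release strategy: scale the release rate with
# the number of MPP units operating so the tracer:SO2 emission ratio is
# approximately preserved.  Illustrative only; not the actual release control.

FULL_LOAD_RELEASE_KG_PER_H = 0.14   # planned oPDCH release rate with both units at full load

def tracer_release_rate(units_operating: int) -> float:
    """Tracer release rate (kg/h) for 0, 1, or 2 operating units."""
    if units_operating not in (0, 1, 2):
        raise ValueError("MPP has two generating units")
    return FULL_LOAD_RELEASE_KG_PER_H * units_operating / 2

for units in (2, 1, 0):
    print(f"{units} unit(s) operating -> release {tracer_release_rate(units):.2f} kg/h")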
Release Equipment
The perfluorocarbon tracer liquids are very similar in viscosity to silicone
fluids, but are quite dense (densities from 1.7 to 1.8 g/mL liq.). Large release
rates, tens of kilograms per hour, have been accomplished with (1) atomizers
spraying directly into the air, or (2) by vaporizing a PFT liquid stream, diluting
with air below the PFT dewpoint at the exit, and emitting the diluted stream into
the air or other fluid (such as the flue gases going up a power plant stack).
For low release rates, tenths of kilograms per hour, such as will be needed
for Project MOHAVE and as was used in METREX in 1984 (Draxler, 1985), the
tracer can be released by evaporation using the METREX-designed equipment.
The release unit has only two moving parts: a squirrel cage fan motor and a
metering pump. The tracer flows in a closed circuit from the reservoir through
the peristaltic pump rollers (the tubing is compressed to move the liquid) directly
into the airstream on the surface of a heated disk. The disk and heater are
located in a cylindrical mixing chamber. The heater, adjustable up to 600 W,
maintains the temperature of the disk above the tracer's boiling point. The
system's electronics control the duration of release and the duration that the
system is off. Times for each on-off cycle can be set in tens, whole units, and
tenths of an hour. A small strip chart recorder notes when the pump and
heater are on. The pump rate is preset on a calibrated dial. The airflow should
be sufficient to ensure all the vapor is diluted below the saturation mixing ratio
for the expected ambient temperature without blowing the tracer drops off the
heater before they vaporize.
Three release units were built by the NOAA Air Resources Laboratory in
Silver Spring, MD, and now reside at their laboratory in Idaho Falls. The system
was designed to handle release rates of the magnitude needed for Project
MOHAVE. However, substantial design changes may be made by NOAA in
consultation with Brookhaven in order to ensure reliable operation (including
accurate release rates and constancy of release).
PFT Programmable Samplers
Each site will be equipped with a programmable Brookhaven atmospheric
tracer sampler (BATS). The sampler was initially developed by BNL and was
commercially manufactured by the Gilian Instrument Corporation (West Caldwell,
New Jersey). The unit consists of two sections: the lid, containing the sample
tubes, and the base, containing the power control. The entire unit is housed in
a weather-resistant 36 cm x 25 cm x 20 cm container and weighs approximately
7 kg. Power is supplied by an internal rechargeable nominal 8-VDC battery for
operation at remote locations, or by a charger where 115-VAC is available. For
Project MOHAVE, each unit must be run on a charger in order to collect the full
twenty-three (23) 36- or 72-L air samples.
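The 36- and 72-L sample volumes are consistent with the sampler's maximum selectable flow rate and 12- or 24-hour sample durations; the pairing of durations to volumes in the short check below is our inference.

# Consistency check: sample volume = flow rate x duration.  At the maximum
# selectable flow of 50 mL/min, a 12-hour sample collects 36 L and a 24-hour
# sample collects 72 L, matching the 36- or 72-L volumes quoted above.
FLOW_ML_PER_MIN = 50

for hours in (12, 24):
    litres = FLOW_ML_PER_MIN * 60 * hours / 1000
    print(f"{hours}-hour sample at {FLOW_ML_PER_MIN} mL/min -> {litres:.0f} L")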
The BATS removable lid holds 23 stainless steel sampling tubes, each
packed with approximately 150 mg of Ambersorb adsorbent. The Ambersorb
adsorbs the tracers from the sample air flowing through the tube. Breakthrough
of the perfluorocarbon tracer gases is less than 0.1 %. The tracer gases remain
adsorbed until extreme heat is applied to the tube to desorb the tracer at analysis
time. The sample air flow is directed consecutively through the adsorbent tubes
by means of a multiple port switching valve which is controlled by the base.
Since the lid is removable and interchangeable, multiple lids can be used on a
single base.
The BATS base contains a DOE-Environmental Measurement Laboratory
constant mass flow pumping system (Latner, 1986) which draws sampler air
through each tube. The flow rate is selected by setting an internal switch to draw
either 10, 20, 30, 40, or 50 mL/min of air; the switch controls the on-off cycling
rate of the pump over a 1-min period. A constant flow rate through each sample
tube in the lid is regulated by a pressure sensing circuit located at the outlet side
of the pump. The circuit is an integrator that supplies a voltage ramp to the
pump motor, rising or falling as indicated by the outlet pressure. A flashing
light-emitting-diode (LED), mounted on the BATS base control panel, gives a
visual indication that the pumping system is operating properly. This pumping
system has proved to be more reliable than the originally installed pump, but
consumes more power. Programmable controls are also placed on the base
control panel which are used to control the number of samples, the sample
duration, and to control either single or multiple sample start and stop times of
a 7-day period. Two liquid crystal displays (LCDs), also mounted on the control
panel, show the clock time, day of the week, and current tube number. A digital
printer and integrated circuit memory module (Lagomarsino, 1989) record the
start time, the day of the week, and the tube number for each sample. The BATS
base controls are also used to assist in automated analyses when the lid is coupled
to a gas chromatograph (GC).
For analysis, the perfluorocarbon tracers, retained on the Ambersorb
adsorbent in the BATS tubes from the sampled air, are desorbed by resistance
heating of the stainless steel tubes to 460°C. Current from the BNL gas
chromatograph system (16.3 Amps AC) is supplied from a low voltage
transformer (1.55 VAC at the lid jacks) through the Scanivalve solenoid
assembly. The assembly consists of a 24-position rotary solenoid having two
power decks capable of handling 20 amps. Twenty-three leads are wired to the
power deck, each connected to the adsorbent tube floating clamp at one end of
the respective tubes. The clamp at one end must float to allow for thermal
expansion of the tube on heating (~ 0.8mm). A set screw secures the collar on
the tube within the clamp; a similar set screw on a common aluminum rail
secures the other end. Polyurethane rubber tubing (1/8-inch OD by 1/16-inch ID)
is expanded over the 1/8-inch OD stainless steel adsorbent tubes and wire
clamped to secure; the other end is attached to the Scanivalve 1/16-inch
protrusions.
Tracer sample analysis
Tracer sample analysis will be done with a gas chromatography system.
The gas chromatograph system is composed of a gas chromatograph, data
handling devices, gas standards, and a BATS. The Varian 6000 gas
chromatograph consists of a series of specially designed traps, catalysts, columns,
and an electron capture detector (ECD). The data handling system consists of an
analogue electronic filter on the ECD electrometer output connected to a Nelson
Analytical 300 Chromatograph system comprised of a Model 7653 Intelligent
Interface and an IBM PC/AT with an ink jet printer and Nelson 2600
Chromatography Software. Brookhaven has also written extended software for
further data processing and GC calibrations.
Analysis of a sample occurs when the sample is automatically thermally
desorbed from the BATS sample tube. The sample is passed through a precut
column and a Pd catalyst bed before being reconcentrated in an in-situ Florisil
trap. Once the trap is thermally desorbed, the sample again passes through the
same catalyst bed, another Pd catalyst bed, and then through a permeation dryer.
The sample is then passed into the main column where it is separated into the
various perfluorocarbon constituents and then ultimately into the ECD for
detection. Further details on the analytical system are given in Dietz (1987).
Release rates and expected crosswind average peak centerline
concentrations at the Long Mesa and Hopi Point receptor locations are shown in
the table below.
Summary of Expected Tracer Concentration

Tracer Release Site     PFT      Rate, kg/h   Receptor Site   Expected PFT Conc. (d), fL/L
MPP                     ocPDCH   0.14 (a)     Long Mesa       4 ± 2
San Joaquin Valley      PTCH     0.14 (b)     Hopi Point      2 ± 1
Los Angeles Basin       PMCP     0.50 (c)     Hopi Point      14 ± 5

(a) 100 kg for 30 days in January 1992 and 170 kg for 50 days in July-August 1992
(b) 70 kg for 21 days in July 1992
(c) 250 kg for 21 days in July 1992
(d) Crosswind average peak centerline concentration
fL = femtoliter = 10⁻¹⁵ L
At receptor sites, 12-hour tracer samples of 36 liters (L) of air will be
collected. All other sites will sample 72 L over a 24-hour period. The
following table shows relevant information regarding the amounts of tracer
expected, backgrounds, levels of detection, and signal to background ratios for
the GC analysis. It can be seen that the limit of detection (LOD) and uncertainty
are very small compared to background, except for PTCH, which has a limit of
detection of about 16% of background and uncertainty of 50% of background.
Thus, for the MPP and Los Angeles Basin tracers (ocPDCH and PMCP), even
an additional tracer concentration of a fraction of background can be reliably
quantified. For all 3 tracers, the uncertainty of the amount of tracer above
background (signal-background) is small for expected crosswind plume centerline
concentrations.
A sample chromatogram for a 20 L sample of ambient (background) air is shown
in Figure 8. Background levels of the tracers used (ocPDCH, PMCP, and
PTCH) can be clearly distinguished and quantified. PMCH, which is not being
released, can be used as a reference.
Figure 8. Chromatogram of 20 L sample of ambient air. (a) Elution time up to 6.5 minutes.
(b) Elution time 6.5 to 13 minutes.
For 12-hour (36 L) samples

                                               ocPDCH          PMCP            PTCH
Background, fL/L                               0.3             3.3             ≈0.1
Area of response, counts/fL                    360             160             300
Receptor concentration, fL/L                   4               14              2
Quantity in 36 L (sample and background), fL   155             623             72
Quantity in 36 L background, fL                10.8            119             3.6
Limit of detection, fL                         0.05            0.12            0.6
Limit of detection, counts                     20              20              180
Uncertainty (3 limits of detection), counts    60              60              500
Counts in 36 L (sample and background)         55,800 ± 60     99,680 ± 60     21,600 ± 500
Counts in 36 L background                      3,888 ± 60      19,040 ± 60     1,080 ± 500
Signal to background                           14.35 ± 0.22    5.24 ± 0.017    20.00 ± 9.27
                                               (± 1.5%)        (± 0.3%)        (± 46.4%)
Signal - background, counts                    51,912 ± 85     80,640 ± 85     20,520 ± 707
                                               (± 0.16%)       (± 0.11%)       (± 3.4%)

fL = femtoliter = 10⁻¹⁵ L
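The Python sketch below reproduces the arithmetic of the table above for the ocPDCH column; the function and its name are illustrative only and are not part of the Brookhaven analysis software.

def expected_counts(conc_fL_per_L, background_fL_per_L, volume_L, response_counts_per_fL):
    # Quantity of tracer collected (sample plus background) and background alone, in fL
    total_fL = (conc_fL_per_L + background_fL_per_L) * volume_L
    background_fL = background_fL_per_L * volume_L
    # Convert quantities to detector counts using the area-of-response factor
    total_counts = total_fL * response_counts_per_fL
    background_counts = background_fL * response_counts_per_fL
    return total_counts, background_counts, total_counts / background_counts

# ocPDCH: 4 fL/L above a 0.3 fL/L background, 36 L sample, 360 counts/fL gives
# about 55,700 total counts, 3,888 background counts, and a signal-to-background
# ratio near 14.4 (the table rounds the collected quantity to 155 fL, giving 55,800)
print(expected_counts(4.0, 0.3, 36.0, 360.0))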
Tracer Quality Control
Rates of air flow through the sampler are checked before and after the
sampler is sent to the field monitoring site. This is to determine the total quantity
of air sampled each sampling period. Adjustments are made to compensate for
altitude and temperature differences. Three additional PFTs that are not released
are used as a cross-check of the sampling volume. The concentration of these
PFTs is essentially constant, so the quantity of air sampled can also be calculated
from the amount of these tracers collected.
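A minimal Python sketch of this volume cross-check is given below; the background concentration in the example is illustrative rather than a measured value.

def sampled_volume_from_pft(amount_collected_fL, ambient_background_fL_per_L):
    # With an essentially constant ambient background, the volume of air drawn
    # through the tube follows directly from the quantity of that PFT collected.
    return amount_collected_fL / ambient_background_fL_per_L

# Example: 118.8 fL collected of a PFT with a 3.3 fL/L background implies ~36 L sampled
print(sampled_volume_from_pft(118.8, 3.3))  # 36.0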
The sample analysis is done at 460°C. This is 50°C above the
temperature needed to desorb all PFTs. After analysis, the sample tubes are
"baked out" at 510° C to remove any remaining traces of PFTs. Before sending
the tubes out to the monitoring sites, every fourth tube is analyzed. At this time,
the tubes should have zero tracer. They are analyzed down to 30-50 counts,
which is about 1% of background. If a tube shows zero signal, it has not sampled
air, because otherwise the ambient background would have been detected. The samplers
will be programmed to collect 20 or 21 samples; tubes 22 and 23 should be zero.
5. Air Quality Monitoring
Purpose
Air quality monitoring for Project MOHAVE has many applications. The
extinction budget analysis requires data for all the major particle components (e.g.
sulfate, organic and elemental carbon, crustal, and liquid water as estimated by
relative humidity) by particle size to be used in conjunction with optical data
(scattering and extinction coefficients). The hybrid and receptor models need
particle and gaseous sulfur concentrations and particulate trace elements as
endemic tracers (such as arsenic for smelters) in addition to measurements of
artificial tracer. Oxidants, especially hydrogen peroxide, should be monitored to
assess the potential oxidant limitations of SO2 to sulfate conversion. The air
quality monitoring network will document the regional distribution of particulate
and SO2 and establish boundary conditions for the study area; used along with
wind field information, these data will identify transport of pollutants into the area.
Eigenvector analysis of the pollutant fields will identify common patterns and may
associate specific sources with those patterns. These data will also provide for a
check of the deterministic modeling results.
IMPROVE Samplers
The IMPROVE sampler consists of four independent filter modules and
a common controller, as shown in Figure 9. Each module has its own inlet, PM-
2.5 or PM-10 sizing device, flow rate measurement system, flow controller, and
pump. In the three PM-2.5 modules, the airstream passes through a cyclone that
removes particles larger than 2.5 μm in diameter. The airstream then passes
through a filter, which collects all the fine particles. In the PM-10 module, the
inlet prevents particles larger than 10 μm from being sampled.
Channel A collects fine particles (<2.5 μm) on a Teflon filter and
provides total fine mass, elemental analysis (H and Na-Pb), and absorption.
Particle Induced X-Ray Emission (PIXE) analysis gives the concentration of the
elements Na-Pb; hydrogen is obtained by Proton Elastic Scattering Analysis
(PESA). Absorption is determined by the Laser Integrating Plate Method
(LIPM).
Channel B uses a fine nylon filter behind a nitrate denuder for ion
chromatography analysis (Cl-, NO2-, NO3-, and SO42-). Channel C is used to
obtain organic and elemental carbon from a fine quartz filter. A thermal/optical
carbon analyzer which makes use of the preferential oxidation of organic and
elemental carbon compounds at different temperatures is used. Channel D
measures PM-10 total mass on a Teflon filter and SO2 with an impregnated quartz
filter. More detailed descriptions of the IMPROVE samplers, analysis
methodologies, and protocol appear in Pitchford and Joseph (1990), and Eldred
et al. (1988). The location of sites and monitoring schedules for IMPROVE
samplers is shown in Section 3.
Figure 9. Schematic of IMPROVE sampler.
DRUM Samplers
DRUM (Davis Rotating-drum Universal-size-cut Monitoring) samplers will
be used at six locations. The DRUM particulate samplers partition the aerosol
into eight size ranges. This provides critical information to relate aerosol to
extinction because of the strong relationship between particle size and light
scattering. PIXE and PESA analysis is done to determine the concentration of
each element (H and Na-Pb) by size range. The size distribution, hence the light
scattering efficiencies, for different particulate component species can be inferred,
if sufficient material is collected (e.g., see Cahill et al., 1987). The DRUM
sampler is described by Raabe et al. (1988).
Six DRUM samplers will be deployed. The sampling time will be either
four or six hours. The receptor sites will have DRUM sampling for the entire
study; the remaining samplers will be placed at other locations of interest yet to
be identified. Among the possible sites are Tehachapi Pass, Cajon Pass, and Spirit
Mountain. Analysis of all samples is beyond the resources of Project MOHAVE.
All samples will be archived; analysis of samples will be done for selected
periods of interest.
High Volume Dichotomous Samplers
High volume (300 L/minute) dichotomous samplers will be used to
improve the trace metal data base for receptor modeling with endemic tracers.
Aerosols in the size ranges 0.05-2.5 μm and 2.5-20 μm will be collected on
Teflon filters. Instrumental Neutron Activation Analysis (INAA) and X-Ray
Fluorescence (XRF) will be done on the samples. Three samplers will be used.
One sampler will be equipped with a trap to collect semi-volatile organics. The
locations have not been decided yet; one will characterize background and one
will be near the mouth of the Grand Canyon.
Scanning Electron Microscopy (SEM) will be used to characterize
individual particle morphology and elemental composition. In addition,
Computer-Controlled SEM (CCSEM) analysis can be used to increase the
numbers of particles analyzed and eliminate possible human operator microscopy
bias. CCSEM data can then be used in data analysis approaches that require
quantitative composition as a function of particle size and shape distributions.
Mie Theory calculations to determine extinction budgets can use these data. Unlike
other data sets, CCSEM data allow for direct analysis of the questions of aerosol
mixture (i.e., the extent to which component species are constant in all particles
in an individual sample). Receptor models can be based upon an endemic tracer
approach using CCSEM data, or they can use the individual particle characterization
information to aid in resolving issues raised by other attribution approaches.
Hydrogen Peroxide Measurements
Hydrogen peroxide (H2O2) is likely to have a significant role in the
formation of sulfate in the study region when clouds are present. Aqueous phase
conversion of SO2 to SO42- is critically dependent upon hydrogen peroxide (H2O2)
and ozone (O3) (Penkett et al., 1979; Calvert et al., 1985). Hydrogen peroxide
is thought to be the leading oxidant of dissolved SO2 in the eastern United States,
where the pH of atmospheric water is generally below 4.5 (Heikes et al., 1987).
In the desert southwest, where the pH of atmospheric water is typically higher,
ozone may also be important in the aqueous phase oxidation of SO2. Saxena and
Seigneur (1986) also identify O2 catalyzed by Fe3+ and Mn2+ as an important
aqueous phase oxidant of SO2. Hydrogen peroxide reaction rates with dissolved
SO2 are typically 50-100% per hour (Lee et al., 1986); thus the presence of
clouds with sufficient H2O2 present can result in rapid sulfate formation. The
amount of hydrogen peroxide available for oxidizing SO2 may be limited,
especially during winter, when photochemical generation of H2O2 is low (Calvert
et al., 1985; Kleinman, 1986).
The NAS review of WHITEX noted that H2O2 measurements were not
made; the NAS used values measured in Tennessee (about the same latitude as
GCNP) to estimate potential sulfate formation. Members of the Committee on
Haze in National Parks and Wilderness Areas suggested that Project MOHAVE
make some measurements of hydrogen peroxide. If measurements of hydrogen
peroxide show sufficient amounts to convert all the SO2 to sulfate, we can likely
conclude the atmosphere is not oxidant limited. However, showing that molar
quantities of hydrogen peroxide are less than those of sulfur dioxide does not necessarily
indicate oxidant-limited conditions. Ozone effects may be significant if the pH
is adequately high. Heikes et al. found that SO2 concentrations were a factor of
3-5 greater than H2O2 concentrations in the surface layer, but above the surface
layer H2O2 concentrations were twice the SO2 concentrations. Even with aircraft
vertical profile measurements, Heikes et al. concluded that the hydrogen peroxide
measurements were ambiguous in determining if the atmosphere was oxidant
limited. Their near cloud observations suggested that physical-dynamical
processes may be as or more important than a simple molar comparison of SO2
to H2O2 at ground or cloud level.
It is not possible for Project MOHAVE to fully characterize the temporal
and spatial distribution of atmospheric hydrogen peroxide necessary to
conclusively determine oxidant limitations. However, limited measurements may
provide some insight into the potential for hydrogen peroxide oxidation of SO2.
As in the NAS report, sulfate concentrations may be compared to H2O2
concentrations to see if sufficient H2O2 existed to account for the measured sulfate
values. Project MOHAVE will make a limited number of hydrogen peroxide
measurements. The SRP NGS study made hydrogen peroxide measurements
during the winter of 1990. These measurements may be used to estimate H2O2
for the winter intensive.
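The molar comparison described above is essentially a unit conversion; the Python sketch below illustrates it, assuming a molar density of air of 40.9 mol/m3 (25°C, 1 atm) and using example values rather than measured data.

AIR_MOLAR_DENSITY = 40.9      # mol of air per m3, assumed 25 C and 1 atm
SULFATE_MOLAR_MASS = 96.06    # g/mol

def h2o2_nmol_per_m3(h2o2_ppbv):
    # 1 ppbv corresponds to 1 nmol of H2O2 per mol of air
    return h2o2_ppbv * AIR_MOLAR_DENSITY

def sulfate_nmol_per_m3(sulfate_ug_per_m3):
    # Convert a mass concentration of sulfate to a molar concentration
    return sulfate_ug_per_m3 / SULFATE_MOLAR_MASS * 1000.0

# Example: 1 ppbv H2O2 (~41 nmol/m3) compared with 2 ug/m3 sulfate (~21 nmol/m3)
print(h2o2_nmol_per_m3(1.0), sulfate_nmol_per_m3(2.0))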
Methylchloroform Measurements
Methylchloroform has been identified as a tracer of weekday emissions
from the Los Angeles Basin (White et al., 1990). Miller et al. (1990) found that
methylchloroform levels at Spirit Mountain are correlated with particulate light
scattering, with the majority of hazy conditions having elevated methylchloroform
levels. Methylchloroform measurements, in conjunction with meteorological data
and modeling, can aid in identifying periods when air previously in the Los
Angeles Basin is in the study area. However, a limitation of methylchloroform
as a Los Angeles Basin tracer is that the emissions are primarily weekday
emissions, with weekend emissions being much lower. Thus weekend emissions
from the Los Angeles Basin might not be tracked using this tracer and the
absence of methylchloroform does not necessarily indicate an absence of air from
the Los Angeles Basin.
Desert Research Institute will measure methylchloroform at Spirit
Mountain, Meadview, and Long Mesa. These data will be investigated for use
in identifying the presence of air previously in the Los Angeles Basin. During
the summer intensive the release of perfluorocarbon tracers from the Los Angeles
Basin and San Joaquin Valley should provide a check on the utility of
methylchloroform as a Los Angeles Basin tracer.
6. Meteorological Monitoring
Background
Meteorological monitoring is necessary to characterize the speed,
direction, and depth of transport in the region and for model initiation and
validation. The existing National Weather Service (NWS) surface and upper air
monitoring sites are insufficient to characterize the complex meteorological setting
of the study area. In addition, NWS upper air measurements (rawinsondes) are
taken only twice per day. Thus, they may not capture important small time scale
meteorological changes and because they provide nearly instantaneous
measurements, they may not be representative of average conditions.
The Wave Propagation Laboratory (WPL) of the National Oceanic and
Atmospheric Administration (NOAA) will provide much of the meteorological
measurement data for Project MOHAVE. Air Resource Specialists (ARS), the
optical monitoring contractor, will provide surface meteorological data at the four
receptor sites. WPL has a unique capability of providing continuous wind and
temperature profiles in the atmospheric boundary layer (ABL) using wind
profiling radars with Radio Acoustic Sounding Systems (RASS). The radars
transmit 915 MHz signals and receive back-scattered signals from the atmosphere.
With three antennas, usually two tilted and one vertical, the three components of
the wind can be measured using the Doppler effect. The best results are obtained
when the winds are averaged over about one hour. The RASS component uses
the Bragg scatter of radar waves from vertically propagating acoustic waves to
measure the sound speed. Because the sound speed depends upon air temperature,
temperature profiles can be derived. Usually the instrument is configured to
provide one 5-minute averaged temperature profile each hour. The backscattered
intensities received by the wind profiler in the form of signal-to-noise ratios can
also qualitatively indicate mixing depths. The advantage of the wind
profiler/RASS instruments over rawinsondes is that they provide continuous
profiles in time.
Objectives
The wind profiler/RASS data consist of wind profiles, nominally to 2.5 km, and
temperature profiles to almost 600 m. These data are necessary to characterize
the speed, direction, and depth of material transport in the region and also
necessary for model initiation and validation. The primary objective is to
measure the transport of material from the MPP to GCNP. Also, it is important
to characterize the flow from major urban areas in the region (e.g., South Coast
Basin, Las Vegas, Phoenix/Tucson) and to separate this flow from flow
containing the MPP emissions. There are two other major power plants nearby,
the Reid Gardner Plant near Overton, Nevada, to the northwest of the Grand
Canyon and the NGS near Page, Arizona to the northeast. It is also desirable to
determine the frequency of transport from these sources.
There are several ancillary problems which relate to the potential transport
paths which the MPP plume may take into the Grand Canyon region. An indirect
path is along the Colorado River to the north and then over Lake Mead. Because
the lake is lower in elevation than the surrounding terrain and the ABL over the
lake is usually more stable than that over the surrounding land due to the
relatively cool water, the pollution may pool and collect in the Lake Mead Basin.
Also, material from other sources (e.g. Las Vegas/Henderson or Reid Gardner)
may collect in the same basin. A change in wind may transport this material into
the lower portion of the Grand Canyon near Meadview. It is therefore important
to monitor the winds and stability in the Lake Mead area, both over the lake and
near the western entrance to the Grand Canyon.
With southwesterly flow near the surface, the material from the MPP may
be transported more directly toward the Grand Canyon region over the high
plateaus to the northeast of the plant. This path also requires meteorological
monitoring.
Surface meteorology will be monitored at all the wind profiler sites so that
the lower gates of the profilers can be compared with surface parameters. Also,
NOAA/WPL will provide at least four surface pressure sites. Gaynor et al.
(1991) have shown that winds calculated from surface pressure gradients can be
used as surrogates for transport winds. The pressure array will allow calculations
of mesoscale transport winds which can be compared with and be adjunct to
profiler winds.
Another contribution from NOAA/WPL will be the wind and temperature
data from profiler/RASS operations performed as part of the South Coast Air
Basin study beginning in July, 1992, and continuing through the summer intensive
period. Among the tentative locations for these instruments are the Cajon,
Banning and Tehachapi passes. These data will be useful adjuncts to Project
MOHAVE by providing upstream information on potential transport from the Los
Angeles Basin and San Joaquin Valley into the MPP region. Starting in
February 1992, wind profiler data from sites on the Mogollon Rim (central
Arizona) will be available. These wind profiler sites will help characterize
periods of flow from the southeast into the study area.
NOAA/WPL will also provide tethersonde and airsonde profiles for short
periods and at critical locations during the winter and summer intensives. These
profiles will be measured at transport, drainage, or pooling locations that will not
have regular continuous measurements. Because of the limited height range of
the tethersonde, the preferred locations for these profiles will be in regions with
shallow boundary layers. One general area of this type is the Lake Mead basin
where a relatively shallow ABL compared to the surrounding desert may persist
well into the morning due to the cool water surface and to the nocturnal drainage
of cool air into the basin.
Field Study Plan
A wind profiler with RASS will operate in close proximity to the MPP
from September 1991 through September 1992. This location will be
supplemented with the DRI operation of an AeroVironment Doppler sodar for a
quality control (QC) check of the profiler and to supplement the profiler with
detailed low level winds. Another wind profiler with RASS will be located at
Truxton to monitor the possibility of direct southwest to northeast transport from
the plant. The Truxton site is in open terrain; this allows the data from this site
to reflect the general flow patterns over the entire study area. It will also
measure the south and southeast summer monsoonal flow from which directions
material may be transported from Phoenix and Tucson or from smelters to the
south and southeast. A doppler sodar will also operate most of the study period
at Meadview.
In support of the winter intensive study, two additional wind profilers will
be operated from mid-November 1991 through late-January 1992. The site
locations will be the following:
1) South of MPP in the vicinity of Needles, which is usually downwind of
MPP during the winter.
2) At Temple Bar, on the south shore of Lake Mead, about 30 km west of
GCNP. This site will help characterize low level flow over Lake Mead,
which may vary significantly from the flow at higher levels.
During the winter intensive period, NOAA/WPL will intermittently
operate a tethersonde and/or radiosonde to supplement the upper air data. The
locations may be at Cottonwood Cove to monitor wind and stability in the upper
Mohave Valley, or near Lake Mead to monitor the meteorology in the Lake Mead
Basin.
From July 1992 through September 1992, a supplemental profiler will
operate at Cottonwood Cove (Lake Mohave) in support of the summer intensive
experiment. The plume is typically transported past this site, especially during
night and morning hours, and may exit the Colorado River valley near this site
during the late morning and afternoon. An additional wind profiler will be
located at Meadview. The sodar at Meadview will be moved to Temple Bar to
measure low level flow above Lake Mead. The combination of doppler sodar at
lake level, combined with a wind profiler at Meadview, 500 meters above lake
level, will provide a vertical profile extending to about 3 km above lake level.
NOAA/WPL will likely participate in the South Coast Air Basin Study which will
occur during the same period as the MOHAVE summer intensive. WPL will
have six profilers operating in the South Coast Basin. One or two of those will
be on the east (desert) side of the Tehachapi, Cajon, or Banning Passes.
Combining data from the South Coast profilers with data from the profilers
deployed for the MOHAVE summer intensive will provide a rare opportunity to
continuously monitor the winds from Southern California to the Grand Canyon.
Data Collection
All the profilers will provide hourly consensus averaged winds in two
modes: a high range resolution mode, usually about 60 to 100 m, and a low
range resolution mode, usually 200 to 400 m. Minimum heights of around 150
m and maximum ranges of about 2.5 km are expected. During the more moist
summer monsoon period, much higher ranges may be expected.
The RASS temperature profiles are measured once per hour representing
5 minute consensus averaged profiles. The minimum range is about 150 m; the
maximum range expected under dry desert conditions is nominally 600 m. The
Doppler sodar at Overton or Temple Bar will provide a minimum range of about
50 m and a maximum range of about 600 m with about a 50 m range resolution.
The surface meteorological data associated with the profilers will probably
represent 5 minute averages of wind, temperature, and relative humidity measured
about 3 m above the ground. The locations measuring surface pressure will also
have temperature and relative humidity instrumentation.
Where phone lines are available, all profilers, including those with RASS,
and the sodar will be interrogated by phone once per day and the ASCII files sent
to a hub work station located at NOAA/WPL in Boulder, Colorado. This
validation level zero data will also be available at each site from printer paper and
on the hard disks of each controlling PC. The surface meteorological data
collected at the profiler sites will also be sent over the same phone lines to the
hub. The pressure sites, unless co-located with the profilers, may not have phone
line capabilities depending on the feasibility of installing lines.
Data Quality Assurance
Wind profilers and Doppler sodars identical to those to be deployed for
Project MOHAVE are periodically tested and compared at NOAA's Boulder
Atmospheric Observatory which includes a 300 m meteorological tower. The
RASS derived temperatures are also compared to thermometers on the tower. All
instrumentation will have been previously tested in other field studies prior to
deployment. The collocation of a Doppler sodar at the MPP with a wind profiler
will provide a continuous field quality assurance check on both the profiler and
the sodar.
All the data that are recorded and printed out at each site and sent over
phone lines to the hub in Boulder will be level zero. The field programs on each
control computer for the radar/RASS and sodar provide consensus averaging
which is equivalent to on-line, real-time sorting of data according to consistency
criteria. The wind profiler/RASS and sodar data will be screened by an
automated editor (Wuertz and Weber, 1989) after each 24 hour collection period.
These data will in turn be inspected by qualified staff and flagged if required. The
resulting ASCII files of winds and temperatures, along with graphical displays,
will then be available for quick dissemination by diskette or by electronic
transfer.
The in situ surface meteorological data will require similar inspection and
will be averaged into one hour blocks. These data will be available for similar
dissemination.
Data Processing and Analysis
The senior scientific staff at NOAA/WPL will cooperate closely with the
modelers to ensure that level one data are readily available to them in a useful
form. NOAA/WPL scientific staff will take leadership in analyzing wind
profiler/RASS, sodar, tethersonde, radiosonde, and surface meteorological data
to gain insight into the often complex transport processes in the project region.
This effort will require the use of various types of data from project collaborators
outside of NOAA. The surface pressure array may be very critical in extending
the understanding of material transport over a larger area than that covered by
upper air wind measurements.
7. Optical Monitoring
Overview
The optical monitoring plan for project MOHAVE consists of two
fundamental aspects:
1) View Monitoring
View monitoring documents the visual impairment of specific unique
vistas under various air quality conditions. View monitoring is primarily
accomplished with 35mm color slide photography and 8mm color time-
lapse photography. Color slides provide high resolution documentation of
the visible effects of uniform and layered hazes on the vista. Digitization
of the slides can be done to yield relative radiance fields that can be used
to calculate color contrast, average landscape contrast, visual range,
modulation depth, equivalent contrast, and just noticeable change. In
addition, slides of extremely clean days can be used as the basic input to
present visual air quality scenarios. 8mm time-lapse photography captures
the important spatial and temporal patterns of visibility events that allow
for a more in-depth understanding of visual air quality.
2) Electro-Optical Monitoring
Electro-optical monitoring measures the basic electro-optical properties of
the ambient atmosphere and aerosols, independent of specific vista
characteristics. Monitoring will include measurements of the ambient
atmospheric extinction coefficient (bext), and its scattering (bscat) and
absorption (babs) components. Primary operational monitoring techniques
include the transmissometer (bext), nephelometer (bscat), and filter
absorption (babs). Temperature and relative humidity measurements, taken
simultaneously with electro-optical measurements, are mandatory to infer
visibility effects associated with chemical and physical interactions
between water vapor, liquid water, and aerosols.
Project MOHAVE will incorporate current state-of-the-art monitoring
instrumentation, operating and quality assurance procedures, and data collection,
reduction, editing, and reporting protocols that have been developed for the
IMPROVE monitoring program (ARS, 1990a; ARS 1990b).
View Monitoring
Equipment
Automatic 35mm and 8mm camera systems will be an integral part of the
optical monitoring for project MOHAVE. The spatial and temporal variations in
visual air quality captured by these systems will be used to:
Document how vistas appear under varied conditions;
Qualitatively record the frequency that various conditions occur;
e.g. incidence of uniform haze, layered haze, plumes, and
meteorology;
Provide a quality assurance reference for collocated electro-optical
measurements;
Serve as a backup method to estimate the electro-optical properties
of the atmosphere (if appropriate teleradiometric targets are in
view);
Support the calculation of advanced visibility indices;
Support computer imaging studies;
Provide quality media for visually presenting program goals,
objectives, and results to study participants, decision makers, and
the public.
Systems based on the following cameras will be used:
35 mm cameras: Olympus OM series
Contax 136 and 167
Canon EOS series
8mm time-lapse: Minolta 601 series
Standard operating procedures developed for the IMPROVE monitoring
program will be followed (ARS, 1990a).
Monitoring Locations and Sampling Frequency
The 35mm camera systems will be located at all receptor sites and other
Class I Area sites. The 8mm time-lapse systems will be located at Meadview and
various scenic view points along the south rim of the Grand Canyon. During
non-intensive periods, only 35mm cameras will operate, taking three exposures
daily at 0900, 1200, and 1500 hrs.
During intensive monitoring periods, 35mm cameras at the receptor sites
and at GCNP will take nine exposures daily from 0800-1600 hrs. Time-lapse
photography will take 1 frame per minute from 0800-1600 hrs daily. Additional
view monitoring locations will be added as the study progresses.
Electro-optical Monitoring
Extinction Measurements
The Optec, Inc. LPV-2 long path transmissometer will be the primary
instrument used to measure bext for Project MOHAVE. The transmissometer
incorporates a light detector (receiver) at one end of a specific atmospheric sight
path. The receiver directly measures the illuminance of a constant output light
source (transmitter) located at the opposite end of the path. Calibration of the
transmissometer accurately determines the inherent output of the transmitter. The
transmission of the sight path can then be calculated:

    T = Ir / It

where

    T  = transmission of sight path r
    Ir = illuminance measured by receiver at distance r
    It = calibration illuminance of transmitter

By measuring the exact length of the sight path, the average atmospheric
extinction coefficient of the path can be calculated:

    bext = -ln(T) / r

where

    bext = average extinction coefficient of sight path r
    T    = transmission of sight path r
    r    = length of sight path r
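The two relations above combine directly into a single calculation; the Python sketch below computes a path-averaged extinction coefficient from illustrative transmissometer values and is not a prescribed reduction routine.

import math

def extinction_from_transmissometer(i_received, i_calibration, path_length_km):
    # Transmission of the sight path, followed by the Beer-Lambert inversion for bext (km^-1)
    transmission = i_received / i_calibration
    return -math.log(transmission) / path_length_km

# Example: a 10 km sight path transmitting 70% of the source light gives
# bext of roughly 0.036 km^-1 (about 36 Mm^-1)
print(extinction_from_transmissometer(0.70, 1.0, 10.0))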
During the past ten years, transmissometers have been developed, tested,
and deployed in the IMPROVE monitoring network, National Park Service
IMPROVE protocol sites, and various other monitoring programs. They have
become the accepted method for reliably making continuous, precise, accurate bext
measurements. Standard operating and data reporting procedures developed for
the IMPROVE program will be followed (ARS, 1990b).
Scattering Measurements
Integrating nephelometers will be used to measure bscat. The integrating
nephelometer measures bscat by directly measuring the light scattered by aerosols
and gases in an enclosed sample volume. The scattered radiation is integrated
over a large range of scattering angles. Since the total light scattered out of a
sight path is the same as the reduction of light along the sight path due to
scattering, a properly calibrated integrating nephelometer gives a direct
measurement of bscat.
Nephelometer measurements have been the subject of considerable controversy
because of the modification of the ambient aerosol as it passes through the
sampling train and optical chamber. The instrument heats the air thus lowering
the relative humidity environment of the aerosols. This leads to an
underestimation of ambient bscat. Extreme efforts have been made to operate
nephelometers as close to ambient temperatures as possible. The best results have
been a heating of approximately 1.5°C. This is approximately a 10% change
in relative humidity, which can lead to underestimation of the ambient bscat
measurement. In addition, nephelometers underestimate the scattering by coarse
particles (>2.5 μm in diameter). As with the transmissometers, standard
protocols developed for the IMPROVE program will be followed (ARS, 1990b).
Absorption Measurements
Where collocated transmissometers and nephelometers are collecting data,
babs will be estimated by subtracting bscat from bext. The term babs will also be
estimated by absorption measurements from channel A filters collected by the
aerosol monitoring network. These babs measurements will be average values for
the collection period of each filter. Data from these measurements will be
available only for periods when aerosol measurements are taken.
Temperature and Relative Humidity Measurements
Accurate air temperature and relative humidity data are critical to establish
the relationship between ambient aerosols and visibility effects. Small changes
in relative humidity, especially above 70%, can dramatically affect aerosol size
and optical characteristics. Rotronic Instrument Corporation Model MP-100F
sensors will be used in Project MOHAVE. The MP-100F combines a 100 ohm
platinum temperature sensor with an enhanced hygroscopic polymer film humidity
sensor to provide an integrated air temperature/relative humidity device that will
maintain a 2% relative humidity measurement accuracy over the range of 0-100%
relative humidity. These sensors will be operated with every transmissometer and
nephelometer in the Project MOHAVE network.
Monitoring Locations and Sampling Frequency
Transmissometers and nephelometers will operate continuously through the
year of Project MOHAVE at various locations. Data from instruments
specifically installed for Project MOHAVE as well as data from other existing
networks will be collected for inclusion in the Project MOHAVE data base.
The monitoring sites and their sponsoring networks are listed in
the table below. Data will be collected and archived as hourly averaged values
for the entire monitoring year.
Transmissometer and Nephelometer Monitoring Locations in the Southwest

Site                        Sponsoring Network
Bandelier NM                NPS
Big Bend NP                 IMPROVE
Bryce Canyon NP             SRP
Canyonlands NP              IMPROVE
Chiricahua NM               NPS
Grand Canyon NP
  south rim                 IMPROVE/SRP
  in-canyon                 NPS
Long Mesa                   SCE
Meadview                    MOHAVE/SCE
Mesa Verde NP               IMPROVE
Page, Arizona               SRP
Petrified Forest NP         NPS
San Gorgonio W              IMPROVE
Spirit Mt., Nevada          SCE
Tonto NM                    IMPROVE
Guadalupe Mts. NP           NPS

Total                       13 transmissometers, 6 nephelometers
8. Emission Inventory and Characterization
Purpose
Emission inventory and source characterization are necessary for the
deterministic and receptor modeling. Receptor models need source
characterization for the main sources of interest. This involves compiling a ratio
of elements that uniquely identifies a source and can be monitored at the receptor
sites. The emission inventory is used to supply input to the deterministic
modeling. The emission inventory consists of quantifying the emission rates of
substances of interest from all sources that may be reasonably expected to impact
the study area. For Project MOHAVE, sulfur dioxide emissions are of the
greatest interest. The SO2 emissions from MPP will be modeled with the
transport and chemical models described in Section 11. The level of modeling of
other sources is still being investigated. Project MOHAVE intends to include
transport and first-order chemical modeling of other significant sources of SO2,
including the southern San Joaquin Valley, the Los Angeles Basin, other
powerplants, and copper smelters within the domain of the meteorological
modeling area. The source profiling will also detail the primary particle
emissions in order to assess whether primary particles contribute significantly to
extinction.
Review of Existing Data and Inventories
The emission inventory used in the SRP NGS study (Systems Applications
International, 1991) will be reviewed. State air pollution agencies will be
consulted about emission data, especially regarding any changes for the main
sources of SO2. The power output of the MPP will be used to determine the
emissions from the MPP. The operational status of other large SO2 sources will
also be checked and emission rates adjusted if necessary before the modeling
analysis.
MPP Stack Sampling
Stack sampling will be done to determine the composition and quantity of
MPP emissions, which are needed for the receptor, hybrid and deterministic
modeling analyses. This component of the study has not yet been planned. The
study plan will be updated when details of the stack sampling are known.
9. Centralized Data Management and Validation
Overview
EPA/EMSL in Las Vegas will be the data managers for Project
MOHAVE. Information will be obtained from the following sources:
SOURCE DATA TYPE
NOAA/WPL Surface and Upper Air Meteorology
Brookhaven National Laboratory Tracer concentrations
UC-Davis Aerosol and SO2
Air Resource Specialists Optical and Surface Meteorology
EPA-RTP/AREAL Aerosol
National Meteorological Center Surface and Upper Air Meteorology
Colorado State University Meteorological Modeling
CAPITA- Washington University Monte Carlo Modeling
The data to be collected are described in more detail in Sections 3-7. A data
management and validation plan will be developed by the data management
coordination committee. A sketch of the expected elements of the plan to be
developed is presented in the remainder of this section.
Two levels of validation (Levels 1 and 2) will be systematically applied.
Level 1 (univariate) validation involves checking the data for outliers, rates of
change, proper indication of time and location of data, etc. In Level 2
(multivariate) validation, consistencies among variables and the appropriateness
of spatial and temporal patterns are investigated. For example, the light
scattering (bscat) measured by a nephelometer should be less than the total
extinction (bext) obtained by a transmissometer. Level 3 validation occurs during
the data analysis. If data inconsistencies are found, the documentation regarding
the questionable observation is examined for correctable errors (e.g. transcription
errors). Uncorrectable, suspect data are flagged, but not removed from the data
set. Data known to be incorrect and not recoverable are removed from the data
set.
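As a simple illustration of a Level 2 (multivariate) consistency check of the kind described above, the Python sketch below flags records where nephelometer scattering exceeds transmissometer extinction; the record layout and field names are hypothetical.

def flag_bscat_exceeds_bext(records):
    # records: list of dicts with 'bscat' and 'bext' in the same units (e.g. Mm^-1)
    flagged = []
    for rec in records:
        if rec.get("bscat") is not None and rec.get("bext") is not None:
            if rec["bscat"] > rec["bext"]:
                # Suspect data are flagged rather than removed from the data set
                flagged.append(dict(rec, flag="bscat_exceeds_bext"))
    return flagged

# Example: the second record is physically inconsistent and would be flagged
print(flag_bscat_exceeds_bext([{"bscat": 20.0, "bext": 30.0},
                               {"bscat": 35.0, "bext": 30.0}]))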
Each group responsible for collecting data will perform at least Level 1
validation. UC-Davis will do a partial Level 2 validation of the aerosol data.
The data managers at EPA/EMSL Las Vegas are responsible for the Level 2
validation. Systematic procedures and protocol for the Level 2 validation will be
developed and fully documented prior to releasing the data. Level 1 protocols
utilized by organizations responsible for each data subset will also be
documented. A computerized listing of the data will be prepared. Level 2 data
will be distributed to data analysts and other interested parties. At the end of the
study, all data will be assembled and documented. A brief discussion of
validation conducted by some of the participants appears in the following
subsections.
Aerosol Sampling (UC-Davis)
A number of the measured or derived parameters are interrelated. This
allows data intercomparisons as a method to evaluate system performance and
check for outliers. The intercomparisons made are listed below:
(1) Fine sulfur vs. fine sulfate
(2) Fine sulfur vs. PM-10 sulfur
(3) Fine hydrogen vs. fine mass
(4) PM-10 hydrogen vs. PM-10 mass
(5) Sum of fine components vs. fine mass
(6) Sum of PM-10 components vs. PM-10 mass
(7) Elemental carbon vs. optical absorption
(8) Organic carbon vs. nonsulfate hydrogen
(9) Fine mass vs. extinction
(10) PM-10 mass vs. extinction
(11) Fine mass components vs. extinction
(12) PM-10 mass components vs. extinction
Details of the quality assurance and data validation are given in Pitchford and
Joseph (1990).
Transmissometer Data
The transmissometer data are subjected to three levels of validation. In the
first level, validity codes reflecting transmissometer instrument operation are
added to the raw transmissometer data files. In the second level, data and
validity codes are checked for inconsistencies using a screening program. The
bext data are adjusted for lamp drift of 2% per 500 hours of lamp-on time.
Validity codes are added to all data. The third level consists of two steps:
(1) Calculation of uncertainty values for all data; and
(2) Identification of bext values affected by weather.
Validity codes for bext include:
0 = valid
1 = Invalid: Site operator error
2 = Invalid: System malfunctioned or removed
3 = Valid: Data reduced from alternate logger
4 = Weather: Relative Humidity > 90%
5 = bext > maximum threshold
6 = Δbext > delta threshold
7 = bext uncertainty > threshold
8 = Missing: Data acquisition error
9 = Invalid: bext below Rayleigh
A = Invalid: misalignment
L = Invalid: Defective lamp
S = Invalid: Suspect data
W = Invalid: Unclean optics
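A minimal Python sketch of how the lamp-drift adjustment and the relative-humidity weather code above might be applied follows; the linear form and sign of the lamp-decay correction are assumptions, not the documented ARS screening procedure.

import math

def reduce_bext(i_received, i_calibration, lamp_hours, path_km, relative_humidity):
    # Adjust the calibration illuminance for 2% lamp decay per 500 lamp-on hours (assumed linear)
    i_cal_adjusted = i_calibration * (1.0 - 0.02 * lamp_hours / 500.0)
    bext = -math.log(i_received / i_cal_adjusted) / path_km
    # Assign the weather validity code when relative humidity exceeds 90%
    code = "4" if relative_humidity > 90.0 else "0"
    return bext, code

print(reduce_bext(0.70, 1.0, 250.0, 10.0, 95.0))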
Radar wind profilers and RASS
Real-time processing consists of a Doppler spectra peak picking routine
which searches for spectra peaks beginning from the highest level of good signals
to the lowest gate. As the routine searches for peaks in a downward direction,
it requires consistency from gate-to-gate. If a peak shifts beyond a given
threshold between gates, that peak is rejected. To help eliminate ground clutter,
the algorithm also rejects peaks near zero velocity if a secondary peak away from
zero is available. After the peaks, or first moments, for each individual radial
are chosen in this way, a consensus averaging is performed. This technique
requires at least 50% of the points on each gate of each radial for a 55 minute
period (5 minutes for RASS temperature) to fall within a bin of 2 m/s in width
before the individual points in the bin are averaged. If less than 50% of the
points fall within the bin, the radial component is flagged as bad and is not
available for that period. A similar technique is used for the RASS derived
temperatures with a bin threshold of 1°C.
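A minimal Python sketch of the consensus-averaging step for a single range gate is shown below, using the 2 m/s bin width and 50% acceptance criterion described above; the implementation details are illustrative rather than the operational NOAA code.

def consensus_average(values, window=2.0, min_fraction=0.5):
    # Find the window of fixed width containing the most radial-velocity estimates
    # for one range gate over the averaging period; average them if enough agree.
    if not values:
        return None
    vals = sorted(values)
    best = []
    for v in vals:
        subset = [u for u in vals if v <= u <= v + window]
        if len(subset) > len(best):
            best = subset
    if len(best) < min_fraction * len(vals):
        return None   # flagged bad: no consensus for this gate and period
    return sum(best) / len(best)

# Example: eleven estimates with one outlier; the consensus is about 5.1 m/s
print(consensus_average([4.9, 5.0, 5.1, 5.2, 5.0, 5.3, 4.8, 5.1, 5.2, 5.0, 12.4]))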
The normal post-processing quality assurance procedures consist of
applying a time/height editor, normally referred to as the Weber/Wuertz editor
(Weber and Wuertz, 1989), to each 24 hour period of one hour averaged profiler
wind data or 5 minute averaged temperature data (one 5 minute average provided
each hour). The editor assesses the neighborhood of each point for consistency
in both speed and direction (or temperature), allowing for a larger tolerance for
direction differences at lighter wind speeds. The tolerances are adjustable and
depend on the prevailing meteorology during a particular experiment. The
neighborhood size is also adjustable, but usually the eight adjacent points are
chosen, if available. This editor has proven to be very powerful in eliminating
outlier points. The results of this processing provide the Level 1 data.
NOAA/WPL is experimenting with applying a more sophisticated form of
this editor on the radial moments before performing an hourly average. The test
data are from the 1990 San Joaquin Valley Air Quality Study. The technique
requires considerable processing. The decision to use this technique for Project
MOHAVE depends on the quality of the data, which in turn depends on site
characteristics.
The post-processed, Level 1 data will be compared with optically tracked
rawinsonde (airsonde) wind and temperature profiles measured at each location.
Several rawinsonde profiles will be available at each of the wind profiler locations
representing different stability and meteorological conditions at each site.
10. Descriptive Data Analysis and Interpretation
Goals
A large quantity of data will be collected in support of Project MOHAVE.
The descriptive data analysis and interpretation component of the study is
intended to summarize the main features of the data as well as especially
interesting cases, and offer physical explanations whenever possible. In contrast
to the attribution analyses described in Section 11, this section will organize the
data in a manner that will allow inference of effects from different sources, but
will generally not be sufficiently quantitative to permit source apportionment.
Descriptive Statistics
Descriptive statistics will include calculation of means, standard
deviations, skewness, and extreme values of the variables. In addition, time
series of the data will be presented. These will include time series of the
extinction coefficient (bext), tracer, sulfate, nitrate, organics, light-absorbing
carbon, fine soil, and various trace elements and meteorological variables, for
example. Correlations between variables will also be calculated.
Extinction Budget
Light extinction is caused by scattering and absorption by particles and
gases. In general, particle scattering is the primary component of extinction,
although in remote areas of the southwest, scattering by gases that compose the
atmosphere (Rayleigh scattering) is a significant fraction on the clearest days.
Black carbon (from diesel engines, forest fires, etc.) is the principal agent of
particle absorption, and is occasionally an important contributor to haze in the
study region. NO2 is the only common gaseous pollutant that absorbs in the
visible portion of the spectrum and is not likely to be a significant contributor to
haze in GCNP.
The extinction budget analysis involves determining the contribution to
extinction by all the major aerosol components. There are two fundamentally
different approaches to estimate the extinction budget. A statistical approach uses
multivariate analysis to explain the optical parameter (bext or bscat) by a linear
combination of the components. These components are the concentrations of the
pollutant species (e.g., crustal, sulfate, nitrates, elemental and organic carbon,
etc.) multiplied by best-fit determined coefficients interpreted as extinction
efficiencies. The hygroscopic particle species (e.g., sulfate and nitrate) include
a function of relative humidity to incorporate the effects of water upon the
extinction efficiencies of these species.
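A schematic of the statistical approach in Python is given below; the hygroscopic growth function, the 10 Mm^-1 Rayleigh value, and the variable names are illustrative assumptions rather than the Project MOHAVE specification.

import numpy as np

def extinction_budget(bext, sulfate, nitrate, org_c, elem_c, soil, rh, rayleigh=10.0):
    # rh as a fraction (0-1); an assumed hygroscopic growth factor for sulfate and nitrate
    f_rh = 1.0 / (1.0 - np.clip(rh, 0.0, 0.95))
    X = np.column_stack([sulfate * f_rh, nitrate * f_rh, org_c, elem_c, soil])
    # Least-squares fit of the non-Rayleigh extinction to the component concentrations;
    # the coefficients are interpreted as extinction efficiencies
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(bext) - rayleigh, rcond=None)
    return dict(zip(["sulfate", "nitrate", "organic_c", "elemental_c", "soil"], coeffs))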
An externally mixed aerosol (i.e. separate aerosol components are not
contained within the same particles; for example, sulfate-coated crustal particles
would not constitute an external aerosol mixture) is implicitly assumed by the
statistical approach for extinction budget analysis. The extent to which this
assumption is true, and the implication of it being violated, are hard to estimate
in any individual situation. In general, the greatest impact of non-external
mixtures is thought to be associated with an interpretation of how changes in
aerosol composition would affect atmospheric optics. In other words, there is
increased uncertainty associated with the prediction of how visibility will respond
to changes in emissions caused by violation of this assumption.
In addition to the concern about implied assumptions, any use of
multivariate statistics carries with it the concerns caused by use of possibly highly
covariant independent parameters, and the use of measured parameters with large
differences in relative measurement uncertainty. Both of these concerns can
result in biased results. However, there are standard approaches to detect and
minimize the impacts of these concerns.
The other approach to estimating extinction efficiencies for the various
aerosol components is by first principle calculations (Mie Theory). These
calculations require as input certain particulate characteristics such as the
distribution of particle size, shape, and indices of refraction. Generally, the size
distribution can be estimated from size segregating sampler measurements. For
Project MOHAVE, this will be done with the DRUM impactors for some
components, such as sulfur and crustal components, but not for others, such as
organic carbon and nitrate species. A functional relationship between water and
the hygroscopic particles must be assumed to estimate its effects on particle size.
Particle shape is generally assumed to be spherical, and the refractive indices are
assumed to be the same as the bulk indices for the various measured particle
chemical components. Assumptions must also be made concerning the nature of
the aerosol mixture (i.e., external, internal, or some combination) in order to
calculate the extinction efficiencies of the components. The extent to which these
deficiencies and assumptions affect the calculated extinction is unknown.
In spite of the uncertainties discussed above, extinction budget analysis
done by the two approaches generally results in similar extinction efficiencies.
Since Project MOHAVE will use both Mie Theory and statistical approaches, the
results can be intercompared for consistency, and reconciled with extinction
efficiency values from the literature to arrive at best estimates of the extinction
budget.
Empirical Orthogonal Function Analysis
Empirical orthogonal function (EOF) and possibly other types of eigenvector
analysis will be done to help summarize the data and gain insights into possible
physical mechanisms at work. When working with large amounts of data, EOF
analysis is especially useful by effectively reducing the dimensionality of the data
set. A large number of observations at many locations can be reduced into a
reasonable number of spatial patterns (eigenvectors), with a time series associated
with each eigenvector showing the time variability of each pattern.
The EOF analysis is purely a statistical technique that attempts to account
for most of the variability in a data set by a few eigenvectors. Although no
physics is explicitly included in the analysis, the data set represented by the
eigenvectors is certainly affected by physical processes. EOF analysis in
conjunction with sound physical reasoning, including knowledge of meteorological
conditions, location of emission sources, etc. can help in the formation of
hypotheses and provide a qualitative check of receptor and deterministic modeling
results.
The variables for which EOF analysis is likely to be done are sulfate, SO2,
tracer, elemental carbon, organic carbon, fine soil, and certain trace elements.
EOF analysis of the vector wind field may be done as well. EOF analyses of the
modeled output wind and concentration fields will be investigated as a method to
help organize the large amount of model output. Additional EOF analysis using
two (or more) parameters such as sulfate and tracer can be used to identify
common jointly occurring patterns of more than one parameter.
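One standard way to compute such eigenvectors is a singular value decomposition of the anomaly matrix, sketched below in Python; the function and argument names are illustrative.

import numpy as np

def eof_analysis(data, n_modes=3):
    # data: array of shape (times, sites), e.g. 24-hour sulfate at the monitoring network
    data = np.asarray(data, dtype=float)
    anomalies = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt[:n_modes]                       # spatial patterns (eigenvectors)
    amplitudes = u[:, :n_modes] * s[:n_modes] # time series associated with each pattern
    variance_fraction = s[:n_modes] ** 2 / np.sum(s ** 2)
    return eofs, amplitudes, variance_fraction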
Meteorological Classification
A meteorological classification scheme will be developed and applied to
the study years and several previous years. The scheme will classify days into
types on the basis of similarity of meteorological parameters. There are several
reasons for doing a classification. One reason is to compare the frequency of
each weather pattern during the study year with other years to determine how
representative the study year is. Each pattern is likely to have transport of
visibility affecting pollutants from different areas; the relative frequency of
patterns for the study year compared to long-term averages can help put the
impacts during the study year into perspective. It also provides a logical method
of stratifying the data of the study year into a manageable number of patterns.
Averaged spatial patterns of sulfate, etc. along with the variation within each
pattern can reveal the main pathways for transport of both hazy and clear air into
the study area. Contributions from individual source areas may also be inferred
from the concentration fields associated with each pattern.
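The plan does not prescribe a particular classification algorithm; as one possible illustration, the Python sketch below groups days by k-means clustering of standardized meteorological parameters, with all names and parameter choices being assumptions.

import numpy as np

def classify_days(features, n_types=6, n_iter=50, seed=0):
    # features: array of shape (days, parameters), e.g. transport wind components,
    # stability, and moisture for each day
    rng = np.random.default_rng(seed)
    x = np.asarray(features, dtype=float)
    x = (x - x.mean(axis=0)) / x.std(axis=0)
    centers = x[rng.choice(len(x), size=n_types, replace=False)]
    for _ in range(n_iter):
        distances = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        for j in range(n_types):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels   # one weather-type label per day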
The meteorological classification scheme can aid in the interpretation of
the EOF analyses. The time series of the EOF analyses indicate the times a
particular eigenvector is significant. By determining the corresponding
meteorological pattern most commonly associated with each eigenvector, it is
easier to interpret the physical factors associated with the eigenvectors.
The classification scheme will also be used to study the MPP outage of
June-December 1985. Sulfate concentration levels and spatial patterns associated
with each weather pattern will be compared for the outage year and other years
with SCENES data. This will help put bounds on the contribution of MPP to
regional sulfate levels.
Of critical importance in the classification scheme are surface and upper
air wind speed and direction, atmospheric moisture and thermal stratification.
The wind data are necessary to account for the transport and dispersion properties
of the flow. Moisture is necessary to determine the potential for aqueous phase
oxidation of SO2 to sulfate and washout. Thermal stratification is needed to know
if pollutant emissions are likely to remain trapped in basins or are mixed through
a deeper layer of the atmosphere.
11. Attribution
Overview
The attribution analysis will be done using a variety of analytical tools;
particularly deterministic, hybrid, and receptor models; and the MPP emission
modulation study. Deterministic modeling is an approach that attempts to
explicitly account for physical and chemical processes transporting and acting on
emissions from a source. Receptor models use measurements made at the area
of concern (receptors) along with characterization of the emissions from sources
potentially affecting the receptors. The contribution of each source to
concentrations at the receptors is determined statistically through multivariate
analysis techniques which link the sources to the measured concentrations.
Hybrid models use a combination of deterministic and receptor modeling
techniques. Apportionment implies determining the concentration of sulfate at the
receptor areas resulting from MPP and other sources. The apportionment of
secondary aerosols such as sulfate is a complex problem as noted by the National
Research Council (1990) and others. Transport, dispersion, deposition, and
transformation must be accounted for.
Results from the extinction budget analysis will be used in conjunction
with the sulfate attribution to determine the fractional contribution of MPP sulfate
to the extinction coefficient. The effect of primary particles will also be
considered. The next step is to evaluate the perceptibility of the contribution of
MPP to the extinction coefficient. Finally, the question of the effect of reducing
emissions from MPP upon visibility will be addressed. Each of the main study
components used in the attribution analysis is described in the following sub-
sections.
The results from the various models and analyses will be compared and
reconciled. Reconciliation is a critical component of the analysis. If results from
a particular model or analysis cannot be reconciled with other analyses, the
results will not be used. The range of uncertainty in each calculation and the
reconciled results or consensus will be estimated.
A major area of concern that has been expressed by some is the
possibility for misuse of tracer data for source apportionment. Specifically,
two issues have been raised: (1) that any tracer level above background
measured at the receptor sites will be interpreted as attribution of visibility
impairment by tracer sources; (2) that tracer data will be incorporated into
analyses inappropriately by repeated regression analysis with whatever
parameters and formulisms are needed until a statistically significant
relationship is found, though no physical relationship is evident.
Project MOHAVE planners do not interpret the appearance of any
tracer above background as a sufficient criterion to indicate visibility
impairment and will contest any who make such a claim. To demonstrate
good faith and concern for appropriate scientific methods, Project MOHAVE
56
-------
will arrange for tracer data to be withheld from all who have any role in
attribution analysis until such time as physically meaningful empirical source
attribution formulisms have been developed based upon other Project
MOHAVE data. This will be done to promote the development of physically
reasonable models prior to the availability of tracer data, and to avoid the
appearance of forcing the models to fit preconceived notions.
Deterministic Meteorological Modeling
Deterministic meteorological modeling is based on fundamental physical
conservation relationships. These relationships include conservation equations for
momentum, temperature, mass, and the three phases of water. Meteorological
modeling for Project MOHAVE will be done by Colorado State University using
the Regional Atmospheric Modeling System (RAMS). A brief overview of
RAMS is given in Appendix 6. Additional information about RAMS appears in
Pielke et al. (1990). A dedicated super workstation (IBM RISC) will be used
for the modeling. The model will provide detailed wind and turbulence fields and
a prediction of cloud height and location. Cloud predictions will be checked
against satellite photographs.
The meteorological domain for the simulations will cover the southwestern
United States. To obtain better terrain resolution near MPP, a telescoping nested
grid will be used. In a nested grid approach, the larger scale results provide the
boundary conditions for input into a finer scale modeling domain. The entire one
year study period will be modeled. For selected cases from the intensive study
periods, modeling with much finer resolution will be done. The preliminary grids
to be used for the analysis are shown in Figure 10. Grids 1 and 2 will be used
for the year-long study; the case studies will also use grids 3, 4, and 5. The
horizontal and vertical number of grid points, and the horizontal grid spacing for
each grid are shown below.
Grid # of grid points Spacing (km)
Grid 1 x=100 y= 60 z=44 32
Grid 2 x=104 y= 72 z=44 8
Grid 3 x=144 y=144 z=44 2
Grid 4 x= 80 y=80 z=44 0.5
Grid 5 x= 80 y=80 z=44 0.5
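As a quick consistency check on the telescoping grids, the horizontal extent implied by
each grid in the table above is (number of points - 1) times the spacing; the short
sketch below (Python) performs only that arithmetic.

# Grid point counts and spacings from the table above; extent = (n - 1) * spacing.
grids = {                      # (nx, ny, spacing_km)
    "Grid 1": (100, 60, 32.0),
    "Grid 2": (104, 72, 8.0),
    "Grid 3": (144, 144, 2.0),
    "Grid 4": (80, 80, 0.5),
    "Grid 5": (80, 80, 0.5),
}
for name, (nx, ny, dx) in grids.items():
    print(f"{name}: {(nx - 1) * dx:7.1f} km x {(ny - 1) * dx:7.1f} km")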
57
-------
Figure 10. Meteorological modeling grids.
-------
The model is initialized every 12 hours using the analysis field supplied
by the National Meteorological Center's Nested Grid Model. The Nested Grid
Model uses surface and upper air data to generate initial fields of variables such
as pressure, temperature, moisture and wind. The CSU RAMS modeling takes
this initial field and generates mesoscale fields for the next 12 hours before being
re-initialized.
The use of data from the wind profilers will be investigated for use in
four-dimensional data assimilation. In this mode, the measured data are used to
adjust, or "nudge," the model results. Data from the profiler site at Truxton are
the most likely to be used for nudging. The Truxton site is in relatively open
terrain and more likely to be representative of general flow in the study area than
the other sites, which are expected to be considerably influenced by local terrain
features. Data from the radar wind profilers not used in the nudging process will
be used to evaluate the performance of the model. The evaluation will be done
for every hour having modeled and wind profiler data. A quantitative comparison
between the predicted and observed winds will result in "figures of merit" for
model predictions as a function of meteorological conditions. The wind and
turbulence fields obtained from the deterministic meteorological model will
provide the necessary input to calculate the transport and dispersion of MPP and
other emissions of interest.
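The "nudging" idea can be sketched as a Newtonian relaxation of the modeled wind toward
the observed wind near the profiler; in the Python sketch below the relaxation time
scale, spatial weighting, and values are illustrative assumptions rather than project
specifications.

import numpy as np

def nudge_wind(u_model, u_obs, weight, dt, tau=3600.0):
    # u_model: modeled wind component field; u_obs: observation interpolated to
    # the same points; weight: 0-1 spatial influence of the observation;
    # dt: model time step (s); tau: relaxation time scale (s)
    return u_model + weight * (dt / tau) * (u_obs - u_model)

# Hypothetical column of modeled winds nudged toward a profiler value of 8 m/s,
# with the largest weight at the profiler level.
u = np.array([2.0, 4.0, 6.0, 6.0, 5.0])
w = np.array([0.2, 0.6, 1.0, 0.6, 0.2])
print(nudge_wind(u, 8.0, w, dt=120.0))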
Transport, Chemical and Deposition Modeling
Once the wind fields have been determined, a model is needed to account
for transport, chemical transformation and deposition. The transport model to be
used is a version of the CAPITA Monte Carlo model currently being developed
for EPA under a cooperative agreement with Washington University in St. Louis.
A copy of the cooperative agreement proposal, which describes the modeling
approach in more detail, appears as Appendix 7. The model is being developed
specifically for visibility related studies. Evaluation and calibration of the model
is being done using data sets such as IMPROVE, SCENES and NESCAUM.
Modifications to the model to fully utilize the wind, turbulence, and moisture
field supplied by the meteorological model may be necessary.
In the modeling approach, simulated pollutant quanta (particles) are
"emitted" from each source. These quanta are moved in fixed time increments
using wind fields supplied by the meteorological model. During transport the
pollutant quanta are subject to chemical transformation and removal. The
dispersion is achieved by imposing a randomized perturbation to the trajectory at
each time step. Transformation and removal are also imposed as stochastic
events at each time step. The result of the Monte Carlo simulation is a large
number (on the order of 10^4-10^6) of pollutant "particles" dispersed geographically for every time
step of the simulation. The model is considered a Monte Carlo simulation
because of the probabilistic treatment of transport, transformation, and removal.
59
-------
The model will make use of the turbulence field generated by the
meteorological model to perturb the trajectory. The moisture fields given from
the meteorological model will be used to select between wet (heterogeneous) and
dry (homogeneous) conversion rates in the SO2 to sulfate transformation
parameterization. Details of the modeling methodology are still to be determined.
The model is based on that described by Patterson et al. (1981).
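The following sketch (Python, with entirely hypothetical rates, scales, and particle
counts) illustrates one time step of the general approach described above: advection by
the modeled wind, a randomized turbulent perturbation, and stochastic conversion and
removal. The actual CAPITA Monte Carlo formulation is that described in Appendix 7 and
Patterson et al. (1981), not this sketch.

import numpy as np

rng = np.random.default_rng(0)

def step_quanta(x, y, is_sulfate, u, v, dt, sigma_turb,
                k_wet, k_dry, in_cloud, k_removal):
    # advection by the wind plus a randomized turbulent perturbation
    x = x + u * dt + rng.normal(0.0, sigma_turb * np.sqrt(dt), size=x.size)
    y = y + v * dt + rng.normal(0.0, sigma_turb * np.sqrt(dt), size=y.size)
    # stochastic SO2 -> sulfate conversion: wet rate in cloud, dry rate otherwise
    k_conv = np.where(in_cloud, k_wet, k_dry)
    converts = (~is_sulfate) & (rng.random(x.size) < k_conv * dt)
    is_sulfate = is_sulfate | converts
    # stochastic removal (deposition); removed quanta are dropped
    alive = rng.random(x.size) >= k_removal * dt
    return x[alive], y[alive], is_sulfate[alive]

# Hypothetical release of 10,000 SO2 quanta advected for one hour, cloud-free.
n = 10_000
x = np.zeros(n); y = np.zeros(n); sulf = np.zeros(n, dtype=bool)
x, y, sulf = step_quanta(x, y, sulf, u=5.0, v=1.0, dt=3600.0,
                         sigma_turb=0.5, k_wet=3e-4, k_dry=3e-6,
                         in_cloud=np.zeros(n, dtype=bool), k_removal=1e-5)
print(f"{sulf.mean():.3%} converted to sulfate; {x.size} quanta remain")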
Hybrid and Receptor Modeling
Measurements of endemic and artificial tracers will be used to estimate the
transport and dispersion of MPP and other sources. The transport can be verified
by checking with trajectories given by the meteorological model. The
transformation and deposition of SO2 and sulfate are also necessary; these may
be parameterized based upon such information as solar radiation, moisture
(especially clouds), oxidant availability, and vertical mixing. The hybrid models
will use tracer measurements to account for transport and dispersion, and
parameterizations to account for deposition and transformation.
Versions of the chemical mass balance (CMB), differential mass balance
(DMB), and possibly the tracer mass balance regression (TMBR) models will be
used. CMB uses the relative ratios of natural or man-made tracers at the sources
and receptor locations to apportion primary species for each measurement period.
CMB will be used to apportion primary species using the high volume
dichotomous sampler data. DMB, a hybrid model, uses trace material to establish
dispersion factors and calculates the effects of deposition and oxidation. TMBR
uses the variation of trace material over time to estimate primary or secondary
aerosol contributions from each source. In its original formulation, TMBR
requires a constant tracer to SO2 emission ratio. However, Project MOHAVE
will use a tracer emission rate that will be only approximately proportional to SO2
emissions. TMBR may be used in an exploratory mode to investigate the effect
of departure from model assumptions. Any use of the results must acknowledge
and quantify the effects of departure from the model assumptions.
In DMB and TMBR it is assumed that each source has a uniquely emitted
tracer associated with it. The best available emissions and source characterization
data will be used to identify unique tracers for each significant source. If unique
tracers are not available, CMB may be applied first to partition the ambient
species concentrations into components attributable to the various groups of
sources.
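For illustration, the sketch below (Python) shows the core least-squares step of a
simple, unweighted chemical mass balance using hypothetical source profiles and a
hypothetical ambient vector; it is not the CMB implementation that will be applied to
the high volume dichotomous sampler data.

import numpy as np

def cmb(ambient, profiles):
    # ambient:  (n_species,) measured concentrations at the receptor
    # profiles: (n_species, n_sources) mass fraction of each species per unit
    #           mass emitted by each source
    contrib, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
    return contrib                                 # estimated source contributions

# Hypothetical profiles (rows: Se, As, fine soil, elemental carbon) for a
# coal-fired plant and a smelter, and a hypothetical ambient vector.
profiles = np.array([[0.004,  0.0001],
                     [0.0002, 0.006],
                     [0.30,   0.05],
                     [0.02,   0.01]])
ambient = np.array([0.0008, 0.0009, 0.075, 0.006])
print("estimated source contributions:", cmb(ambient, profiles))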
The NAS review of WHITEX noted a number of concerns about the use
of DMB and TMBR. A summary of the NAS WHITEX comments and the steps
that will be taken by Project MOHAVE to help resolve these issues appears as
Appendix 8. The DMB and TMBR models will be modified to help alleviate
concerns in the NAS review of WHITEX by taking advantage of the more
detailed meteorological fields generated by the meteorological modeling that is a
part of Project MOHAVE. For example, the effect of moisture upon conversion
60
-------
rates for both DMB and TMBR will utilize the moisture fields generated by the
meteorological model.
As mentioned above, the implementation of DMB will differ from
WHITEX by incorporating physical processes in a more robust manner. In
Project MOHAVE, the effect of moisture on sulfate formation will be treated
more rigorously than in the WHITEX study. In addition to surface moisture
measurements, the deterministic meteorological model will give calculations of
moisture at many vertical levels. This information will include prediction of
clouds, which can be compared to satellite photos and surface observations.
Rather than scaling linearly with surface relative humidity, the probability of
plume-cloud interaction will be estimated and used to assign an SO2-to-sulfate
conversion rate. Sulfate formation occurs rapidly in clouds and much more
slowly in their absence. Different conversion rates based upon whether or not
clouds are present should more appropriately account for the effect of moisture
on conversion rates.
In DMB, as currently formulated, deposition and conversion rates are
constant; the assumed rates are multiplied by plume age to give sulfate
concentrations and deposition loss. Trajectory calculations are used to give plume
age. Dispersion is accounted for by ratioing ambient trace material
concentrations attributable to a source by known trace material release rates.
In Project MOHAVE variable conversion rates will be used, such as described
above. With the deterministic meteorological modeling wind fields, reliable
plume age calculations should be possible.
The equations for, and assumptions used in, CMB, DMB, and TMBR as
presently formulated are given in Appendix 9.
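The structure of a DMB-style estimate can be sketched as follows (Python): a
tracer-derived dispersion factor, a plume age from the trajectory or wind-field
calculations, and first-order wet or dry conversion and deposition applied over that
age. The rate constants and input values are illustrative assumptions only; the actual
equations are those referenced above.

import math

def dmb_sulfate(tracer_conc, tracer_release_rate, so2_emission_rate,
                plume_age_hr, in_cloud, k_wet=0.10, k_dry=0.01, k_dep=0.02):
    # tracer-derived dispersion factor (ambient tracer / tracer release rate)
    dispersion = tracer_conc / tracer_release_rate
    k_conv = k_wet if in_cloud else k_dry          # per-hour conversion rate
    frac_converted = 1.0 - math.exp(-k_conv * plume_age_hr)
    frac_not_deposited = math.exp(-k_dep * plume_age_hr)
    return dispersion * so2_emission_rate * frac_converted * frac_not_deposited

# Hypothetical inputs: a 12-hour-old plume that passed through cloud.
print(dmb_sulfate(tracer_conc=5.0, tracer_release_rate=1.0,
                  so2_emission_rate=400.0, plume_age_hr=12.0, in_cloud=True))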
Extrapolation of Intensive Study Periods to the Long-Term
To determine longer term impacts to visibility at GCNP, it is necessary
to extrapolate from results of the intensive study periods. This will be a two-step
process; the first step will relate the entire 12 month study period to the intensive
period, while the second will extrapolate from the 12 month period to a multi-
year period. The first step involves application of source-oriented and hybrid
models, which will be developed and evaluated with intensive period data, to the
meteorology and air quality data for the entire study period. In the second step
the relative frequency of long-term meteorological patterns will be compared with
those of the study period.
Deterministic models will be evaluated and calibrated using the more
complete data of the intensive periods. The resulting models will then be run
with data from the entire study period. For all modeling analyses a portion of the
data may be withheld in order to independently test the models.
During the intensive study periods, hybrid modeling will use the artificial
tracer results for MPP and any other sources tagged with artificial tracers.
Hybrid models will also use endemic tracers for the remaining significant sources.
61
-------
Results from hybrid models based on endemic tracers will be compared to results
of the same models using artificial tracers to evaluate the utility of endemic
tracers. If successful, models using endemic tracers will then be applied to the
entire study period (covering a complete annual cycle) and used in conjunction
with the deterministic modeling analysis.
The representativeness of the study year to longer term average conditions
will be studied. It should be acknowledged that significant year to year variability
in meteorological conditions occurs and that the likelihood of any given year
being "typical" is not high. The frequency of occurrence of each meteorological
regime identified in the meteorological classification process described in Section
10 will be compared for the study year and other years for which data are
available. Where they exist, optical and air quality measurements from previous
years will be compared to the study year measurements. The frequency of
occurrence of each pattern for the study period and longer term average can then
be compared to put the study year into perspective.
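The regime-frequency comparison can be illustrated with the short sketch below
(Python), in which hypothetical study-period impacts within each meteorological regime
are reweighted by hypothetical long-term regime frequencies; regime names and all values
are placeholders.

study_freq    = {"SW dry": 0.40, "monsoon": 0.25, "winter storm": 0.15, "other": 0.20}
longterm_freq = {"SW dry": 0.45, "monsoon": 0.20, "winter storm": 0.15, "other": 0.20}
mean_impact   = {"SW dry": 0.8,  "monsoon": 2.1,  "winter storm": 1.5,  "other": 0.5}

study_mean = sum(study_freq[r] * mean_impact[r] for r in mean_impact)
longterm_mean = sum(longterm_freq[r] * mean_impact[r] for r in mean_impact)
print(f"study-period mean impact: {study_mean:.2f}")
print(f"frequency-adjusted long-term estimate: {longterm_mean:.2f}")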
MPP Emission Modulation Study
The MPP was inoperative for a seven month period from June to
December 1985. This presents a unique opportunity for investigating the effects
of MPP. The MPP emission modulation study, discussed in Section 2 and
Appendix 5 is a potentially powerful receptor approach to estimate the extent of
MPP contributions to downwind sulfate levels. The analysis will be conducted
by using a meteorological classification scheme to control for year-to-year
variations in meteorology, and comparing measured sulfate at Spirit Mountain,
Meadview and Hopi Point for periods of varying MPP and other SO2 emissions.
The study will include the following elements:
Independent statistical analysis of the experiment.
Chemical analysis of all filters. (Quality assurance will be evaluated
through comparison of current results to past data through regression and
time series analysis).
Classification of the synoptic and mesoscale weather patterns
(meteorological regimes) affecting transport of the MPP plume.
Deterministic wind field, transport, and dispersion modeling for each of
the meteorological regimes.
A detailed compilation of regional SO2 emissions data for the control and
outage periods to allow an accounting for variation in SO2 emission
patterns.
62
-------
All data manipulation will be performed in the "blind" to avoid charges
of bias or data selection. Results of the study will be used along with the
modeling and other analyses to estimate the effect of MPP on visibility at GCNP.
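The within-pattern comparison at a receptor site can be sketched as below (Python, with
hypothetical records); the actual analysis will be the formal, independent statistical
analysis listed above.

from collections import defaultdict
from statistics import mean

# (weather_pattern, mpp_operating, sulfate_ug_m3) -- hypothetical records
records = [
    ("SW dry", True, 1.4), ("SW dry", False, 1.0), ("SW dry", True, 1.6),
    ("monsoon", True, 2.3), ("monsoon", False, 1.2), ("monsoon", False, 1.1),
]

groups = defaultdict(list)
for pattern, operating, sulfate in records:
    groups[(pattern, operating)].append(sulfate)

for pattern in sorted({p for p, _ in groups}):
    on = mean(groups.get((pattern, True), [float("nan")]))
    off = mean(groups.get((pattern, False), [float("nan")]))
    print(f"{pattern}: operating {on:.2f}, outage {off:.2f}, difference {on - off:.2f}")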
Framework for Interpreting Results
In a complex program such as this, a sound plan for compilation of results
is as important as the collection of high quality and representative data and the
performance of appropriate interpretive analysis. Development of an approach
to organize the results from this program helps to focus attention and resources
on critical steps for the entire program and communicate those ideas to others.
Just as it is inappropriate for worst case results to receive primary
attention, it is also inappropriate to dwell on average or typical conditions,
especially for an instantaneous effect such as visibility. The 12 month study
period with hourly deterministic model results requires some method for
summarizing the results of the study that avoids these pitfalls. A preliminary
conceptual framework for summarizing the results of Project MOHAVE is shown
in the table on the following page. The key idea is the stratification of time
periods based upon the locations, with respect to GCNP, of MPP emissions and
those of other significant sources, such as those in southern California. These
stratifications would be based upon the modeling studies. Another stratification is
whether the plumes have undergone wet or dry chemistry (based upon modeling
results and observations).
If useful, other stratifications could be developed. The frequency of each
condition, the average and standard deviation of the percent sulfate from MPP,
the percent of extinction from MPP, and a measure of the perceptibility of the
MPP impact are estimated for the study period.
Extrapolation to a long-term average will be done through the use of a
meteorological classification scheme as previously described. This type of
approach provides an efficient manner of presenting the magnitude and frequency
of estimated MPP impacts on GCNP over a long-term period that could be used
to evaluate the significance of existing impairment.
63
-------
Conceptual Framework for Summarizing Project MOHAVE Results

GCNP Impact &                  Frequency        % Sulfate      % Extinction   Measure of
Condition                      (Deterministic   (Reconciled    (Extinction    Perception
                               Model)           Models)        Budget)

No MPP in GCNP
MPP & SCA Dry
MPP & SCA Wet
MPP & Other Sources Dry
MPP & Other Sources Wet
MPP Alone Dry
MPP Alone Wet
Other Appropriate Categories

SCA refers to the urban and industrial areas of southern California.
64
-------
12. Overall Quality Assurance
Approach
An independent quality assurance audit will be done by ENSR. The major
emphasis of independent quality assurance in Project MOHAVE will be upon
verifying the adequacy of the participants' measurement procedures and quality
control procedures, and upon identifying problems and making them known to
project management. Although routine audits will play a role, major emphasis
will be placed upon the efforts of senior scientists in examining methods and
procedures in depth. This approach will be followed because fatal flaws in
experiments emerge not from incorrect application of procedures by operators at
individual sites or laboratories, but rather from incomplete procedures,
inadequately tested methods, deficient quality control tests, or insufficient follow-
up of problems.
System Audits - Study Planning and Preparation
Senior auditors will review study design documents to ensure that all
measurements are being planned to produce data with known precision and
accuracy. The auditors will verify that adequate communications exist between
measurement and data analysis groups to ensure that measurements will meet data
analysis requirements for precision, accuracy, detection limits, and temporal
resolution. Quality control components of the measurements will include:
Determination of baseline or background concentrations and their
variability.
Tests for sampler contamination.
Adequate and precise measurement of aerosol and tracer sampler volume
and time.
Blank, replicate, and collocated samples.
Assessment of lower quantifiable limits (LQL), and determination of
measurement uncertainty at or near the LQL.
Regular calibrations and calibration checks, traceable to standard reference
materials.
Procedures for collecting QC test data and for calculating and reporting
precision and accuracy.
65
-------
Periodic QC summary reports by each participant.
Documented data validation procedures.
Verification of comparability among groups performing similar
measurements.
A senior auditor will visit each measurement group, laboratory and data
management and analysis group prior to the intensive field studies to verify that
adequate progress is being made toward beginning measurements on schedule and
within acceptable quality limits. A thorough review of written procedures will
be part of this evaluation, including a review of all standard operating procedures.
Issues to be addressed include:
Availability of equipment and supplies.
Manpower availability.
Readiness of written procedures and data collection protocols.
Adequate sample ID and sample tracking system.
Thoroughness of method evaluation tests.
Understanding of QC procedures and adequacy of protocols for collecting
QC test data.
Testing of software used for data management, data validation, and data
analysis.
Measurement System and Performance Audits
Audits of the field sites, the laboratories, and the data management and
analysis center will be conducted once during the study, probably at the beginning
of the winter intensive measurement period. System audits will verify that the
items described in the system audits section are being applied. Performance
audits will include:
Field sites - Instrument calibration checks, leak-checks on aerosol and
tracer samplers, and on the tracer injection system.
Laboratories - Relabeling of existing samples by ENSR and reanalysis by
the study laboratories to verify precision and reproducibility. Submittal
of prepared samples of known concentration, where needed. If the
66
-------
laboratory already participates in a regular intercomparison program or
if it uses standards directly traceable to NIST, then a system audit will
verify this, and no additional samples will be prepared.
Data management - Manual calculation of derived concentrations and
uncertainties.
Data analysis - Manual data traceability tests to verify pre-analysis
processing.
Based on audit results and discussions with project management, the
auditors will identify problems which have the potential to jeopardize data quality.
They will provide immediate feedback to operational personnel and will provide
letter reports following the audits. Corrective action request forms, to be
completed by operational personnel and returned to the auditor, will verify that
problems have been addressed. Throughout the study, the auditors will review
the participants' QC summary reports.
67
-------
References
ARS, 1990a: Visibility monitoring and data analysis using automatic
camera systems: standard operating and quality control procedures document. Air
Resource Specialists, Inc., Fort Collins, CO.
ARS, 1990b: Standard operating procedures for monitoring ambient
atmospheric extinction and scattering coefficients. Air Resource Specialists, Inc.,
Fort Collins, CO.
Cahill, T.A., P.J. Feeney, R.A. Eldred and W.A. Malm, 1987:
Size/time/composition data at Grand Canyon National Park and the role of
ultrafine sulfur particles. Transactions TR-10; Visibility Protection: Research and
Policy Aspects (P.S. Bhardwaja, ed.). Air Pollution Control Association,
Pittsburgh, PA, pp. 657-667.
Calvert, J.G., A.L. Lazrus, G.L. Kok, E.G. Heikes, J.G. Walega, J.
Lind and C.A. Cantrell, 1985: Chemical mechanisms of acid generation in the
troposphere. Nature, 317, 27-35.
Dietz, R.N., 1987: Perfluorocarbon tracer technology. From "Regional
and long-range transport of air pollution", Lectures of a course held at the Joint
Research Center, Ispra, Italy, September 15-19, 1986, S. Sandroni, ed., pp. 215-
247, Elsevier Science Publishers, Amsterdam.
Dietz, R.N., 1991: Personal communication.
Draxler, R.R., 1985: One year of tracer dispersion experiments over
Washington, D.C., Atmos. Environ., 21, 69-77.
Eldred, R.A., T.A. Cahill, M. Pitchford and W.C. Malm, 1988:
IMPROVE - a new remote area particulate monitoring system for visibility studies.
Proceedings of the 81st annual meeting of APCA, June 19-24, Dallas, TX, 88-
54.3.
Freeman, D. and R. Egami, 1988: Dispersion modeling at Mohave
Generating Station. Report no. DRI-8525-F1.0 prepared for Southern California
Edison Co., Rosemead CA, February 1988.
Gaynor, J.E., D.E. Wolfe and Y. Mori, 1991: The effects of horizontal
pressure gradients and terrain in the transport of pollution in the Grand Canyon
region.
68
-------
Heikes, E.G., G.L. Kok, J.G. Walega and A.L. Lazrus, 1987: H2O2, O3
and SO2 Measurements in the Lower Troposphere Over the Eastern United States
During Fall. J. Geophys. Res., 92, 915-931.
Koracin, D., T. Yamada, B. Grisogono, T.E. Hoffer, D.P. Rogers and
J. Lukas, 1989: Atmospheric boundary layer in Mohave Valley. Presented at
AWMA/EPA specialty conference "Visibility and fine particles", October 15-19,
1989, Estes Park CO.
Lagomarsino, R.J., T.J. Weber, N. Latner, M. Polito, N. Chiu and I.
Haskel, 1989: Ground-level air sampling systems. In "Across North America
Tracer Experiment (ANATEX)", Vol. 1, R.R. Draxler and J.L. Heffter, ed.,
NOAA Tech. Mem. ERL ARL-167, Silver Spring, MD, January 1989, pp. 13-
18.
Latner, N., 1986: Tethered Air Pump System, Report EML-456, U.S.
Dept. of Energy Environmental Measurements Laboratory, New York NY.
Lee, Y.-N., J. Shen, P.J. Klotz, S.E. Schwartz and L. Newman, 1986:
Kinetics of hydrogen peroxide-sulfur(IV) reaction in rainwater collected at a
northeastern U.S. site. J. Geophys. Res., 91, 13264-13274.
Malm, W., K. Gebhart, D. Latimer, T. Cahill, R. Pielke and J. Watson,
1989: National Park Service report on the winter haze intensive tracer
experiment.
Murray, L.C., R.J. Farber, M. Zeldin and W.H. White, 1990: Using
statistical analysis to evaluate modulation in SO2 emissions. In Visibility and Fine
Particles, C.V. Matthai, ed. AWMA, Pittsburgh, PA, pp. 923-934.
National Research Council, 1990: Haze in the Grand Canyon - An
evaluation of the Winter Haze Intensive Tracer Experiment. Prepared by the
Committee on Haze in National Parks and Wilderness areas. National Academy
Press, Washington D.C.
Nelson, L. R., 1991: Personal communication. May 8, 1991.
Patterson, D.E., R. B. Husar, W.E. Wilson and L.F. Smith, (1981):
Monte Carlo simulation of daily regional sulfur distribution - comparison with
SURE data and visibility observations during August 1977. J. Appl. Meteor., 20,
404-420.
Penkett, S.A., B.M.R. Jones, K.A. Brice, and A.E.J. Eggleton, 1979:
The importance of atmospheric ozone and hydrogen peroxide in oxidizing sulfur
69
-------
dioxide in cloud and rain water. Atmos. Environ., 13, 1615-1632.
Pielke, R.A., W.A. Lyons, R.T. McNider, M.D. Moran, D.A. Moon,
R.A. Stacker, R.L. Walko, and M. Uliasz, 1990: Regional and mesoscale
meteorological modeling as applied to air quality studies. Proc. of 18th
NATO/CCMS Int. Tech. Meeting on Air Pollution Dispersion Modeling and Its
Application, 13-17 May 1990, Vancouver, British Columbia.
Pitchford, M. and D. Joseph, 1990: IMPROVE Progress Report. Report
EPA-450/4-90-008, U.S. Environmental Protection Agency, Office of Air Quality
Planning and Standards, Research Triangle Park, NC, May 1990.
Raabe, O.G., D.A. Braaten, R.L. Axelbaum, S.V. Teague and T.A.
Cahill, 1988: Calibration studies of the DRUM impactor. J. Aerosol Sci., 19,
183-195.
Richards, L.W., C.L. Blanchard, D.L. Blumenthal, 1991. Navajo
Generating Station Visibility Study: Executive Summary (Draft number 2).
Sonoma Technology Inc. report STI-90200-1124-FRD2, April 16, 1991.
Prepared for Salt River Project, Phoenix, AZ.
Systems Applications International, 1991. Deterministic modeling in the
Navajo Generating Station Visibility Study (Draft Final Report). Prepared by
Systems Applications International, San Rafael, CA, January 17, 1991. Report
SYSAPP-91/004b. Prepared for Salt River Project, Phoenix, AZ.
Saxena, P. and Seigneur, C., 1986: On the oxidation of SO2 to sulfate in
atmospheric aerosols. Atmos. Environ., 21, 807-812.
White, W., D.P. Rogers, T.E. Hoffer and J. Lukas, 1989: 1986 Mohave
Generating Station plume intensive study. Final report prepared for Southern
California Edison Co., February, 1990.
Yamada, T., 1988: Preliminary simulations of wind, turbulence and
tetroon trajectories. Interim report prepared for Desert Research Institute, Reno,
NV, December 1988.
Wuertz, D.B., and B.L. Weber, 1989: Editing wind profiler
measurements. NOAA technical report ERL 438-WPL 62, U.S. Government
Printing Office, 78 pp.
70
-------
Appendix 1
Project MOHAVE Update Summary - September 16, 1991
Study
Component
Description of Study Component
Responsible Party
Schedule
Overall - 12 months starting in Sept. '91
Intensives - Two 4 to 6 week intensives, (1) January '92, (2) July and
August '92.
Emissions
Conduct review of available emission data and compile inventory.
Do source profiling of MPP during a portion of each intensive period.
Continuous source sampling at MPP: SO2, NOx and particulate
concentrations (plus frequent particle composition).*
Bruce Polkowsky,
OAQPS
Italics used to indicate unfunded study components.
1- 1
-------
Study
Component
Description of Study Component
Responsible Party
Deterministic
Modeling
Deterministic meteorological modeling (wind, turbulence and moisture
fields) for each day of the 12 month period with domain and resolution as
feasible.
Apply a Lagrangian Monte Carlo transport model with some chemistry
included to transport, disperse and chemically transform the plume using
the output from the meteorological model.
Full chemistry deterministic modeling (RADM, with enhanced particle
treatment, in-cloud processes and optics) for about 20 to 30 selected days.
Expected input includes ammonia gas, particulate ammonium ion, and
hydrogen peroxide measurements at a few locations during intensives.
Would employ the output from Pielke's modeling.
Roger Pielke, CSU
William Wilson, RTP
Jason Ching, NOAA-
RTP
Tracer
Continuous in-stack release of perfluorocarbon tracer during each of two
intensive periods. Sampling at 31 sites on 12-hour (at 4 receptor sites)
and 24-hour (all other sites) sampling schedule (see attached map for
sampling locations). The Hopi Point and Meadview sites will each have
two additional collocated samplers. A 21 day pre-release sampling pre-
test at all sites to establish background levels and for QA.
Additional perfluorocarbon tracers released to tag the Los Angeles Basin
and San Joaquin Valley. *
Increased time resolution of tracer data: 6 hours at receptor sites, 12
hours at other sites.
Russell Dietz
(overall), Brookhaven
Nat'l Lab and
Ray Dickson (tracer
release), NOAA Idaho
Falls
Dietz & Dickson
Dietz & Dickson
Italics used to indicate unfunded study components.
1-2
-------
Study
Component
Description of Study Component
Responsible Party
Meteorologic
monitoring
12 month period: Continuous vertical wind profiling over the 12 month
period at MPP plant site and Truxton using radar wind profilers. A Radio
Acoustic Sounding System (RASS) will also be deployed at the plant site
to give boundary layer vertical temperature structure. A doppler sodar
will be deployed at Meadview most of the study period to measure wind
profiles. Additional instrumentation to include at least 4 surface
meteorology stations with pressure sensors (temperature & relative
humidity also) to examine response of locally channeled flow to larger
scale pressure gradients (November 1991-August 1992).
Surface meteorological stations at the 4 receptor sites measuring wind
speed, wind direction, temperature, relative humidity and solar radiation.
Intensives: Two additional radar wind profilers will be operated during
the winter intensive near Needles and Temple Bar. Two additional
profilers during the summer intensive will be located at Meadview and
Cottonwood Cove. The Meadview wind profiler will replace the sodar,
which will be moved to Temple Bar for the summer intensive. Radar
wind profiler data from Los Angeles Basin and western Mojave Desert
sites will also be available for the summer intensive. Surface
meteorological stations at radar wind profiler sites measuring winds,
temperature and relative humidity. Some special studies using
tethersondes and/or radiosondes may be done at locations of interest.
John Gaynor, NOAA
Boulder
John Molenar, Air
Resource Specialists
John Gaynor
1- 3
-------
Study
Component
Description of Study Component
Responsible Party
Air quality
monitoring
Particle Monitoring: Full IMPROVE samplers at the 4 receptor, 4
IMPROVE sites and 2 IMPROVE protocol sites.* IMPROVE channel A at
all 21 remaining sites. See attached map for site names and locations
(total of 31 sites). DRUM sampling in 8 size ranges at the receptor and 2
additional sites, with 4 or 6 hour resolution. Selected DRUM sampler filters
to be analyzed.
Intensives: two 12 hour samples per day, every day at receptor sites; 24
hour samples every day at all other sites. Tracer and SO2 sampling at all
sites following the particle sampling schedule. H2O2, NH4 and NH3
monitoring periodically during intensives.
Hi vol dichotomous samplers and annular denuder samplers at 3 sites for
high sensitivity particulate analysis necessary for CMB modeling. More
details will be available soon.
Remainder of study year: 24 hour particle and SO2 sampling with
IMPROVE samplers every Wednesday and Saturday at receptor,
IMPROVE and IMPROVE protocol sites.
Increase number of particle monitoring sites for non-intensive periods.**
Bob Eldred, UC-Davis
* The IMPROVE sampler has 4 channels. Channel A collects fine particles (<2.5 µm) on a teflon
filter and provides total fine mass, elemental analysis (H and Na-Pb), organic and elemental carbon and
absorption. Channel B uses a fine nylon filter for ions (Cl-, NO2-, NO3- and SO42-). Channel C is used to
obtain organic and elemental carbon from a fine quartz filter. Channel D measures PM-10 total mass on
a teflon filter and SO2 with an impregnated quartz filter.
1-4
-------
Study
Component
Description of Study Component
Responsible Party
Optical
monitoring
Continuous monitoring for the entire period. Nephelometers at all receptor
sites and a transmissometer added at Meadview, in addition to ones
already at IMPROVE sites.
Airborne lidar aerosol mapping several weeks during the intensive
periods. *
John Molenar, Air
Resource Specialists
Jim McElroy, EMSL-
Las Vegas
Data
Interpretation
Analysis of historic meteorologic data to optimize timing of intensive
periods. Analysis of MPP emission modulation (1985 shut-down).
Eigenvector Analysis
DMB modeling
CMB modeling with high sensitivity particulate data.
Extinction Budget
Reconciliation of results from receptor & deterministic modeling and
eigenvector analysis, extinction budget, trajectory analyses, etc. Overall
summary of results.
Mark Green, DRI
Mark Green
Bill Malm, NPS
Robert Stevens, RTP
Marc Pitchford,
EMSL- Las Vegas
Marc Pitchford
Italics used to indicate unfunded study components.
1-5
-------
Study
Component
Description of Study Component
Responsible Party
Quality
Assurance
Each component of the study is responsible for QA on its portion of the
study.
Overall QA audit covering all portions of the study to be done by
independent reviewer.
Charles McDade
1 -6
-------
Appendix 2
Project MOHAVE1 Conceptual Plan
Introduction
This plan documents the thoughts and intentions of those who are
preparing to determine the contributions by the Mohave Power Project (MPP) to
haze in Grand Canyon National Park (GCNP). Its purpose is to provide a
vehicle to obtain review and comment by various interested parties at an early
point in the planning process when adjustments are more easily accommodated.
This conceptual plan is designed to provide overall guidance to the technical
experts who are responsible for developing the more detailed study plan.
The first part of this paper contains information on the study background,
objectives, and an overview of the approach. This is followed, in the second
part, by an expanded discussion of the approach which contains information on
the visibility attribution process, use of artificial tracers, ambient monitoring, and
data interpretation and models.
Background
The 1991 fiscal year budget for the United States Environmental
Protection Agency (EPA) includes a Congressional "add-on" at the level of $2.5
million for a 2-year effort titled "Pollution tracer study at the Mohave
Powerplant". Discussion has revealed that congressional intent was to have EPA
perform a study to assess MPP's contribution to visibility impairment in GCNP.
Members of congress have demonstrated an interest in visibility impairment in
the Federal Class I Areas (i.e., national parks and wilderness areas meeting
certain requirements); and in particular an interest in GCNP impairment by large
point sources of SO2. For many this interest was intensified by the results of the
1987 Winter Haze Intensive Tracer Experiment (WHITEX) conducted by the
National Park Service (NPS).
1 While Mohave is the name of a coal-fired power plant in
Nevada, Project MOHAVE contains an acronym for
Measurement Of Haze and Visual Effects.
2-1
-------
WHITEX involved a six-week long intensive monitoring study during
which an artificial tracer was released from the Navajo Generating Station
(NGS)2. NPS analysis of optical, air quality, and meteorological data indicated
that a significant fraction of the winter hazy periods in GCNP were due in large
part to sulfates resulting from NGS emissions. EPA used these results as the
basis for proposing additional emission controls at NGS. The WHITEX data
analysis methodology, results, and use of the results were cause for considerable
controversy.
In an attempt to resolve the technical issues raised by WHITEX, the
National Research Council of the National Academy of Sciences (NAS) was
requested to consider the relative importance of human derived and natural
emissions that contribute to visibility reduction. The Council established a
Committee on Haze in National Parks and Wilderness Areas. One task of the
committee was to evaluate WHITEX. Their report neither wholly endorsed nor
discredited the NPS WHITEX findings, though it did provide an illuminating
discussion of the technical issues. In an effort to avoid some of the controversy
of WHITEX and to take advantage of the expertise assembled by NAS, Project
MOHAVE has requested the opportunity to discuss this conceptual plan with the
committee. The committee is scheduled to be briefed on this effort in early
Spring 1991.
Salt River Project (SRP), the operators of the NGS, in an attempt to
resolve their doubts concerning WHITEX, supported a more extensive tracer
study in the winter of 1990. Though only preliminary results of this study are
now available, they also appear to indicate the presence of NGS emissions in
GCNP during haze, though at a lower frequency of occurrence.
It is the goal of the planners of Project MOHAVE to take advantage of
the best and most successful aspects of the WHITEX and SRP studies, and to
address the issues raised by the NAS WHITEX review to the maximum extent
possible, and to use and extend information previously obtained by numerous
efforts.
Previous air quality studies in the region containing the desert southwest
(including SCENES, VIEW, VISTA, WRAQ and RESOLVE) provide a great
deal of background information useful to the planning of this project. Prevailing
2 NGS is a 2250 MW(e) coal-fired powerplant located near Page,
Arizona, approximately 25 km northeast of GCNP.
2-2
-------
southwest winds, especially in the summer, carry MPP emissions toward GCNP.
They also carry emissions from the southern California urban/industrial area
towards GCNP. There is considerable evidence that southern California is the
dominant source area of pollutant haze for GCNP. A major technical challenge
for Project MOHAVE is to separate the influence of MPP from that of southern
California and other regional influences.
The most important man-made pollutant species responsible for GCNP
haze are particulate sulfates. These are generally formed in the atmosphere by
chemical conversion of gaseous SO2, which is emitted by combustion of fuel
containing sulfur. Other particulate components important to GCNP haze,
organics and crustal species, are from natural and man-made sources. GCNP
visibility levels are often so good that light scattering by air molecules (Rayleigh
scattering) is also a significant contributor to the extinction coefficient.
MPP's most significant potential contribution to GCNP haze is by
emissions of SO2 that are converted to sulfates. Other sources contributing to
particulate sulfate are southern California (primarily by oil refineries), other coal-
fired power plants (e.g., Reid Gardner north of Las Vegas, NV and NGS near
Page, AZ), copper smelters in southern Arizona, New Mexico, northern Mexico
and Utah and oil refineries in Texas and the Monterrey area of Mexico. Other
sources which may influence GCNP visibility are large urban areas (e.g., Las
Vegas, NV, Phoenix/Tucson, AZ and the Wasatch Front in Utah) and wildfires.
These sources are expected to be more dominated by organic and elemental
carbon pollutants than sulfate.
Objectives
This conceptual plan considers two related objectives: (1) to determine the
MPP contribution to GCNP haze and (2) to determine the relative contributions
of the major pollution emission sources (including MPP) affecting GCNP haze.
For both objectives, determining the contribution to GCNP haze implies a
quantitative evaluation of intensity, spatial extent, frequency, and duration. The
intensity of haze contributed by a source includes both an absolute physical
measure of haze (e.g., contribution to the extinction coefficient) and its
perceptibility (e.g., scenic element contrast change, or change in modulation
transfer function). A part of both objectives is an assessment of the changes in
visibility at GCNP that would be expected if MPP emissions were changed.
2-3
-------
The first objective implies determining the contributions to GCNP haze
by two source categories: MPP and a composite of all non-MPP sources. The
second objective expands upon the first objective. Instead of concentrating on
one source's impact, it calls for simultaneous assessment of all the important
sources of haze for GCNP. There is no doubt that a study designed to meet the
first objective would also address other sources to some extent. However, this
would be incidental to the first objective, unlike the second objective where it is
the primary focus.
A program designed to meet the second objective is beyond the resources
presently available for this effort. Unless additional support becomes available
Project MOHAVE will be designed to meet the first objective and to prepare a
foundation for further investigation of the impact of regional haze in GCNP.
Approach Overview
The EPA Office of Air Quality Planning and Standards (OAQPS) has
overall management responsibility for Project MOHAVE. Robert Bauman, the
OAQPS Project Leader, has selected a Project Steering Committee to advise him
on the overall direction of the study. Several technical advisory panels are being
constituted to provide recommendations at a greater level of detail. Experts
selected for the technical advisory panels will provide the primary means for
Project MOHAVE to incorporate insights gleaned from earlier investigations.
Figure 1 indicates the program's management/advisory structure.
Project MOHAVE will use sophisticated deterministic and receptor
modeling to identify the MPP influence. During two intensive study periods
(four to six weeks each), a unique tracer material will be combined with the MPP
emissions in concentrations sufficient to be detected hundreds of kilometers away.
The tracer will provide a check of the deterministic modeling results and provides
a unique signature for the MPP plume, for use in receptor modeling. The main
emphasis will be on deterministic modeling, with secondary emphasis given to
receptor modeling. It is not prudent to implicitly trust the results of either
modeling approach alone; thus both approaches will be tried. If results of the
two approaches are substantially different, an in-depth investigation into the
reasons for the differences and an evaluation of the results will be done before
any conclusions are reached regarding MPP's impact.
The intensive periods will be selected to optimize the chances of
establishing the maximum contribution to GCNP impairment by MPP.
2-4
-------
Figure 1. Project MOHAVE management/advisory structure. The Project Manager (R. Bauman,
EPA) is supported by a Quality Assurance Group and a Steering Committee. Technical panels,
responsible for technical design and study oversight, cover Tracer Design and Release
(tracer selection, release mechanisms, in-stack monitoring, release protocol), Field Data
Collection (equipment selection, site selection, ambient monitoring, optical monitoring,
meteorology), and Modeling and Data Analysis (data processing, filter analysis, dispersion
modeling, trajectory analysis, receptor modeling, extinction budget, attribution analysis,
perception & effects).
-------
Tentatively these would be the summer monsoon season and the mid-winter storm
season. Both periods have the possibility of transporting MPP emissions to
GCNP decoupled from southern California emissions, and sufficient moisture for
possible liquid phase conversion of SO2 to sulfate (much faster conversion than
the alternative gas phase reactions). These conditions are intermittent even
during the periods of their greatest frequency. Thus the intensive periods will
also include more typical summer and winter conditions where MPP influence in
GCNP is not expected to be as great.
To further ensure that data from the intensive periods can be interpreted
in terms of longer term typical conditions, the overall study period will be 12 to
15 months. During the non-intensive periods of the study, air quality and
meteorology measurements will be made at numerous locations throughout the
study area. Intensive study period data will be used to evaluate source-oriented
deterministic models using augmented upper air meteorological data and receptor
models based upon endemic tracers. The deterministic models will then be
applied to the entire study period. If successful, receptor models using endemic
tracers will also be applied using data collected for the entire study period.
Finally, the study period results will be extrapolated to the long-term by
comparison with and if necessary adjustment to climatological characteristics of
importance.
A tentative schedule for Project MOHAVE calls for field measurements
to start in July 1991 and continue until September of 1992. The winter intensive
period will be in January 1992, with the summer intensive period from mid-July
to late August 1992. Data interpretation and report preparation is anticipated to
continue for approximately 18 months after the end of the field monitoring
program.
Approach
The attribution of impacts from MPP and other sources will ultimately be
derived from an extinction budget by air pollutant species. The majority of the
MPP impact is expected to be from secondary sulfate particles. Measurements
of the particle components such as sulfate, nitrates, carbon and crustal species are
related to optical measurements by statistical and first principle approaches to
produce the extinction budget.
It is expected that the contribution of sulfate particles from MPP and other
sources will be estimated primarily from deterministic modeling. Receptor
2-5
-------
modeling will also be done for this purpose, providing a check of the
deterministic modeling analyses. The results for the two types of models will be
compared for consistency (model reconciliation). Eigenvector analysis will also
be done to support results from the modeling studies. During the two intensive
periods, an artificial tracer will be injected into the MPP plume. The tracer
provides a check of the transport and dispersion calculated by the deterministic
modeling. It also provides a unique signature of the MPP plume for use in
receptor modeling. To estimate impacts for the remainder of the study period,
deterministic modeling will be performed and receptor modeling using endemic
tracers will be investigated.
Substantial monitoring will be required to support the extinction budget
and modeling studies. This will include ground based and airborne meteorologic,
air quality and optical measurements and remote sensing of vertical wind and
temperature profiles. Expanded descriptions of the main components of the study
follow.
Tracer
During the intensive study periods an artificial tracer will be released
continuously either through the stack at MPP or by balloon at plume height in the
immediate vicinity of the power plant. A stack release would give more
confidence that the plume and tracer are well mixed and is the preferred method.
However balloon release of tracers has been routinely done (NOAA, Idaho Falls)
and is a feasible alternative. For objective 2, different artificial tracers would be
released at other sources or source areas to tag their emissions more precisely
than through the use of endemic tracers. Other sources to tag may include the
San Joaquin Valley (Tehachapi Pass), the Los Angeles Basin (Cajon Pass), Las
Vegas, Reid Gardner Powerplant, Navajo Powerplant and copper smelters.
Tracer can be released at a constant emission rate or at a constant ratio
of tracer to SO2. Variation of tracer to SO2 ratios was a complicating factor in
the WHITEX receptor modeling analysis. If released at a constant rate, SO2
emission rate variations would complicate the receptor modeling, requiring
adjustment of the ratio of tracer to sulfur dioxide concentration. This requires
knowledge of plume age. However, for use in deterministic modeling, it is more
desirable to have a constant tracer emission rate, to simplify the dispersion
calculations. Also, the deterministic model can give the plume age necessary to
adjust the tracer to sulfur dioxide emission rates in the receptor modeling.
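The adjustment can be illustrated with the short sketch below (Python): the SO2
attributable to the tagged source at a receptor is estimated from the measured tracer
and the tracer-to-SO2 emission ratio at the release time implied by the plume age. The
emission history and all numbers are hypothetical.

def so2_from_tracer(tracer_conc, sample_hour, plume_age_hr,
                    tracer_rate, so2_rate_by_hour):
    release_hour = int(sample_hour - plume_age_hr)   # from trajectory or model
    so2_rate = so2_rate_by_hour[release_hour]        # SO2 emissions at release time
    return tracer_conc * (so2_rate / tracer_rate)

# Hypothetical constant tracer release (0.5 unit/hr) and varying SO2 emissions (kg/hr).
so2_history = {h: 380.0 + 40.0 * (h % 24 > 6) for h in range(0, 48)}
print(so2_from_tracer(tracer_conc=3.0e-12, sample_hour=36, plume_age_hr=9.0,
                      tracer_rate=0.5, so2_rate_by_hour=so2_history))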
Ideally, a tracer should closely mimic the species of interest for receptor
modeling and chemical transformations; in this instance SO2 and its conversion
2-6
-------
to SO4 and deposition of the sulfate particles. This would suggest using isotopes
of sulfur or oxygen. However, the large amounts of tracer necessary may not
be available and to produce them would require resources greater than those
available for this study. For studying transport and dispersion patterns, a
conservative tracer is desirable.
Among the potential tracer materials are deuterated methane (CD4),
various perfluorocarbons (PFT's) and particulate rare earth oxides. CD4 and
PFT's are conservative tracers; thus conversion of SO2 to SO4 and deposition of
SO2 and SO4 must be accounted for. It has been suggested that non-conservative
rare earth particle tracers be used because of their potential to mimic sulfate
particles. However, sulfate particles are not directly emitted in significant
quantities; rather they are typically formed after considerable transport time
which varies with meteorologic conditions. Thus some variable proportion of the
rare earth particles will have deposited before the sulfates are formed.
Additionally the deposition of SO2 occurs more rapidly than either sulfate or rare
earth particles. Issues such as these must be further investigated before any
decision regarding the use of rare earth tracers is made.
CD4 has low background values and is detectable at very low
concentrations, so small amounts of this tracer are sufficient. Though the cost
per unit mass is high, the total cost of tracer material is expected to be much less
than the cost of PFT's. However, the sample analysis cost is very high ($800-
$1000/sample), compared to about $20/sample for PFT's. Thus, it may not be
feasible to analyze all samples. If CD4 were used, samples would be selected for
analysis based on air quality and meteorologic data.
The lower analysis costs for PFT's makes it possible to analyze many, if
not all of the samples. More information can be obtained regarding the plume
position and spatial extent. This would allow a more thorough evaluation of the
deterministic modeling. In addition, regression analyses with the receptor models
and other statistical analyses would be based on a larger number of samples than
if CD4 were used. With the availability of various PFTs, release times can be
staggered such that the age of the samples can be estimated from the sample as
well as from trajectory analyses. Alternately, different PFTs could be released
from different sources, as previously discussed and the deterministic model
results used to estimate plume age.
The SRP tracer study, which used PFT's, apparently had some major
problems with the tracer portion of the study. Collocated samplers showed near
zero correlation. Four different PFT's were used. The analyses for the first two
2-7
-------
PFT's were apparently of better quality than for the third and fourth. The South
Coast Air Quality Study (SCAQS) is said to have shown high variability of
collocated samples near the detection limits while at the higher concentrations
variations of a factor of two were common. There is no theoretical reason that
prohibits the use of PFT's or other tracer materials to give quantitative,
consistent data. However the pitfalls associated with past experiments demand
careful attention and a quality assurance program that monitors the tracer data
during the collection process. These issues must be resolved before selection of
a tracer approach. A quality assurance plan for tracer release and monitoring,
including collection/analysis of background and collocated samples will be
developed.
Monitoring
Project MOHAVE field measurements are designed to meet the data
requirements discussed in the Data Interpretation and Modeling Section, below.
The extinction budget analysis requires data for all of the major particle
components (e.g., sulfates, organic and elemental carbon, crustal, and liquid
water as estimated from relative humidity) by particle size and concurrent optical
parameters (e.g., extinction and scattering coefficient). The attribution analysis
requires data for tracer, particle and gaseous sulfur concentrations, particulate
trace elements as endemic tracers (e.g., arsenic for smelters and selenium for
coal burning), and meteorology (e.g., surface and upper air winds, temperature,
and humidity). Additional monitoring of endemic tracers (e.g.,
methylchloroform for southern California) for non-MPP sources will be
conducted to the extent that the resources will allow. Table 1 summarizes the
measurements that are anticipated for this program.
To aid in the presentation, monitoring locations have been categorized into
several types. Receptor sites are in or near (representative of) GCNP.
Monitoring at receptor sites must be capable of supporting extinction budget and
attribution analysis. Gradient sites are designed to produce data for attribution
analysis. They include sites between sources of interest and GCNP, upwind and
background monitoring locations. Upper air meteorological monitoring locations
are selected to improve the spatial resolution of the National Weather Service
network and to provide vertical wind and temperature profiles in critical areas for
input to the deterministic models. Finally aircraft are needed to make tracer and
pollutant measurements ranging from near the source to the most distant areas of
the study region, and to evaluate vertical distributions.
2-8
-------
Table 1. List of the optical variables, aerosol species, meteorological variables and
measurement methodologies proposed for the monitoring sites.

Measurement Type                        Location                  Method                       Frequency

Optical
  bscat                                 A, B, C*                  Nephelometer                 Continuous
  bext                                  A                         Transmissometer              Continuous
  View                                  A                         Photographic                 Hourly

Particulate Matter
 Fine Particles
  Mass                                  A, B, C                   IMPROVE/SCICAS               12 Hours
  Ions                                  A                         IMPROVE/SCICAS               12 Hours
  Nitrate                               A                         IMPROVE/SCICAS               12 Hours
  Elemental & Organic Carbon            A                         IMPROVE/SCICAS               12 Hours
  Trace Elements (includes sulfur)      A, B, C                   IMPROVE/SCICAS/SFU           12 Hours
  Size Segregating Trace Elements       A                         DRUM                         12 Hours
 Large Particles                        A                         IMPROVE/SCICAS               12 Hours

Gases
  SO2                                   A, B, C                   K2CO3 Impregnated Filter     12 Hours
  Tracer                                A, B, C*                                               12 Hours
  Methylchloroform                      A, B, C                                                12 Hours

Meteorological - Surface
  Wind Speed & Direction                A, B                                                   Continuous
  Temperature, Relative Humidity        A, B                                                   Continuous

Meteorological - Upper Air
  Wind Speed & Direction                Laughlin, Pierce's Ferry  RADAR Profiler               Continuous
  Temperature                           Laughlin, Pierce's Ferry  RASS                         Continuous
  Cloud Height and Vertical             Laughlin, Pierce's Ferry  Ceilometer or Upward         Continuous
    Pollutant Distribution                                          Looking LIDAR

A = Receptor sites, B = Gradient sites, C = Aircraft, * = method and frequency may be different for
aircraft
2-8a
-------
The selection of monitoring locations has an influence on the utility of the
data. Table 2 and Figure 2 indicate a preliminary list of monitoring locations
appropriate for meeting the first objective. An expanded investigation of the
impact of all sources of visibility impairment would require additional gradient
sites and perhaps additional artificial and endemic tracer measurement capabilities
at all sites. Final site selection by the appropriate advisory panel will be
influenced by results of simple trajectory analyses run on two years of data
(anticipated in March 1991) and by practical considerations (e.g., available power,
access, and security).
To the maximum degree possible, existing monitoring sites within the
study area will be incorporated into the monitoring program. In some cases this
would involve providing supplemental equipment or modifying procedures to
make data collected at these sites consistent with the other sites in the program.
Meteorology data from existing sources (i.e., National Weather Service surface,
upper air, and satellite measurements) will be incorporated into the project data
base. To the extent that they exist, records of wildfires and prescribed burning
and other intermittent source activities will be documented.
Data Interpretation and Modeling
Extinction Budget:
Light extinction is caused by scattering and absorption by particles and
gases. In general, particle scattering is the principal component of extinction,
though in the remote Southwest, scattering by gases that make up the atmosphere
(also known as Rayleigh scattering) is a significant fraction on the best air quality
days. Black carbon (from diesel engines, forest fires, etc.) is the primary agent
of particle absorption, and is occasionally an important cause of haze in the study
area. NO2 is the only common gaseous pollutant that absorbs in the visible
portion of the spectrum. It is not expected to play a significant role in Project
MOHAVE.
The extinction budget analysis involves determining the contribution to
extinction by all of the major contributing components. This can be done
statistically using multivariate analysis to explain the optical parameter (bext or
bscat) by a linear combination of the components. These components are the
concentrations of the pollutant species multiplied by best-fit determined
coefficients interpreted as extinction efficiencies. The hygroscopic particle
species (e.g., sulfate and nitrate) include a function of relative humidity to
incorporate the effects of water upon the extinction efficiencies of these species.
Alternatively, first-principles calculations (Mie theory) of the extinction coefficients
Table 2. Possible monitoring sites for intensive and entire study periods by site type.

Site Type | Entire Study | Intensive Only
Receptor | Pierce's Ferry, Meadview*, Hopi Point*, Indian Gardens*, Phantom Ranch*, Long Mesa* | Peach Springs
Gradient | W. Lake Mead, Cottonwood Cove*, Spirit Mtn.*, Overton, Needles, Mojave Desert | Additional sites along Colorado River, southern California, and northern Arizona
Upper Air Meteorology | Laughlin, Pierce's Ferry | Peach Springs, Mojave Desert
Aircraft | | Near stack, upwind, along plume, across plume, vertical distribution

* existing NPS and SCE air quality monitoring sites
2-9a
-------
Figure 2. Existing and potential monitoring sites. [Map legend: existing monitoring sites, potential monitoring sites, points of reference. Locations shown include Tehachapi Pass, Cajon Pass, Los Angeles, Spirit Mountain, MPP, Eastern Mojave Desert sites (1-3), Needles, Central Arizona sites (1-3), Parker Dam, Phoenix, and Winslow.]
-------
can be done if sufficient particulate characteristics are known (e.g., size
distributions). This program will use both procedures and will reconcile the
results with literature values of extinction efficiencies.
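For illustration, a minimal sketch of the statistical form of the extinction budget is given below (Python, synthetic data): measured bext is regressed on the species concentrations, with an assumed relative humidity growth factor applied to the hygroscopic species, and the fitted coefficients play the role of extinction efficiencies. The f(RH) form and the numbers are placeholders, not project specifications.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    sulfate, nitrate, organics, soil = rng.gamma(2.0, 0.5, size=(4, n))  # ug/m3, synthetic
    rh = rng.uniform(0.2, 0.9, n)
    f_rh = 1.0 / (1.0 - rh)                       # assumed hygroscopic growth curve

    # Synthetic "measured" extinction: efficiencies of 3, 3, 4, 1 m2/g plus Rayleigh.
    b_ext = (3.0 * sulfate * f_rh + 3.0 * nitrate * f_rh + 4.0 * organics
             + 1.0 * soil + 11.0 + rng.normal(0.0, 1.0, n))   # Mm-1

    X = np.column_stack([sulfate * f_rh, nitrate * f_rh, organics, soil, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, b_ext, rcond=None)
    print("fitted extinction efficiencies (m2/g):", np.round(coef[:4], 2))
    print("intercept (Rayleigh plus residual, Mm-1):", round(float(coef[4]), 1))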
Attribution analysis:
Attribution analyses will be done using both source-oriented deterministic
models and receptor models. Source-oriented deterministic models transport
emissions from the source and can account for physical processes en route,
including chemical transformation, dispersion and deposition. Receptor models
use measurements made at the area of concern (receptors) along with
characterization of the emissions from sources potentially affecting the receptor.
The contribution of each source to concentrations at the receptors is determined
statistically through multivariate analysis techniques which link the sources to the
measured concentrations.
Deterministic meteorological and dispersion modeling provides a source-
receptor pollution apportionment procedure which is based on fundamental
physical conservation relationships. These relationships include conservation
equations for velocity, temperature, mass and the three phases of water. The
model will provide detailed wind and turbulence fields and a prediction of cloud
height and location. Cloud predictions will be checked against satellite
photographs and ceilometer measurements, where available.
For Project MOHAVE, deterministic models will be run in an analysis
mode using assimilation of the observed data for the entire period of the project.
Data assimilation means that measured data will be incorporated into the
modeling. The models' utility is to fill in areas between data locations making
use of the fundamental physical relationships governing the atmosphere. The
incorporation of data assimilation into the deterministic model offers an effective
methodology to achieve the best estimate of meteorological transport and
dispersion.
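One common form of data assimilation is Newtonian relaxation ("nudging") of the model fields toward observations; the sketch below (Python, placeholder values) illustrates the idea only, and the actual assimilation scheme will be chosen by the modeling group.

    # Relax a model wind component toward an observed value; 'weight' is the
    # spatial influence of the observation (0-1) and 'tau' a relaxation time (s).
    def nudge(u_model, u_obs, weight, tau, dt):
        return u_model + dt * weight * (u_obs - u_model) / tau

    u = 5.0                      # m/s, model wind at a grid point (hypothetical)
    for _ in range(24):          # one day of hourly steps
        u = nudge(u, u_obs=8.0, weight=0.8, tau=3600.0, dt=3600.0)
    print(round(u, 2))           # drifts toward the observed 8 m/s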
The meteorological domain for the simulations will cover the southwestern
United States with horizontal grid intervals on the order of 10 km. To obtain
better terrain resolution near MPP, a telescoping nested grid will be used. In a
nested grid approach, the larger-scale results provide the boundary conditions
for input into a finer scale modeling domain. In Figure 2, the rectangle bounded
by dashed lines "nested" within the larger area demonstrates the concept of a
nested grid approach. Horizontal grid intervals in the smallest domain may be
approximately 500 m. For this smaller grid interval, non-hydrostatic models are
generally more appropriate than hydrostatic models.
The wind and turbulence fields obtained from the deterministic
meteorological model provide the necessary input to calculate the transport and
dispersion of the MPP plume. Using this input, a Lagrangian model will be used
to transport, disperse and chemically transform the plume. The first step is to
transport and disperse the plume; the model results will be compared to the tracer
data to evaluate the model. The next step is to incorporate simple chemistry to
calculate sulfate concentrations and any other species of interest. The predicted
location of emissions from other major sources within the study area will also be
identified.
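The sketch below (Python) illustrates the Lagrangian particle idea with idealized, assumed winds, dispersion, and conversion rate; it is only a schematic, not the transport model that will be applied.

    import numpy as np

    rng = np.random.default_rng(1)
    n, dt, nsteps = 1000, 3600.0, 12          # particles, timestep (s), hours
    x = np.zeros((n, 2))                      # all particles released at the stack (m)
    so2_frac = np.ones(n)                     # fraction of each particle still SO2
    u, v = 5.0, 1.0                           # m/s mean transport wind (assumed)
    sigma_turb = 1.0                          # m/s random dispersion velocity (assumed)
    k = 0.01 / 3600.0                         # 1%/h SO2-to-sulfate conversion (assumed)

    for _ in range(nsteps):
        x += dt * (np.array([u, v]) + sigma_turb * rng.standard_normal((n, 2)))
        so2_frac *= np.exp(-k * dt)

    print("mean downwind distance (km):", round(x[:, 0].mean() / 1e3, 1))
    print("fraction converted to sulfate:", round(1.0 - so2_frac.mean(), 3))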
Using complex chemical modeling (e.g., RADM) and explicit inclusion of
all the major pollutant sources is very resource intensive and is beyond the scope
of objective 1. These analyses may be done as part of objective 2, depending on
the level of additional resources.
The use of a deterministic model can also assist in the design of the field
program by indicating where instrumentation should be sited and aircraft cross-
sections flown so as to optimize the spatial representativeness of the
measurements. Also, since the model is based on fundamental concepts, it
provides a scientific framework to interpret the data. The same meteorological
model simulations can also be used with a wide range of emission inventories in
order to assess potential emission control scenarios.
The use of receptor models in apportioning primary particles has been
done routinely; however, using receptor models to apportion secondary aerosol,
as in WHITEX, is more controversial. As in WHITEX, the receptor models
used will include the tracer mass balance regression (TMBR) and differential
mass balance (DMB) models. These models were designed to estimate the
portion of sulfate due to the MPP and the other sources. The results of this
receptor modeling will be evaluated in light of the deterministic modeling results.
Project MOHAVE will address concerns raised by WHITEX review by making
additional measurements and more complete source characterizations. More
information on particle size distribution, use of endemic or artificial tracers for
other sources, and upper air humidity and cloud height measurements can reduce
the uncertainties involved with the use of these models to apportion secondary
aerosol. In TMBR and DMB it is assumed that each source has a uniquely
emitted tracer associated with it. If not, other methods such as chemical mass
balance (CMB) may be first applied to partition the ambient species
concentrations into components attributable to the various groups of sources.
TMBR, DMB and CMB are described in detail in the WHITEX report.
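The sketch below (Python, synthetic numbers) illustrates the tracer-scaled regression idea that underlies TMBR: receptor sulfate is regressed on the MPP tracer concentration, and the slope apportions the MPP share. The actual TMBR and DMB formulations are those documented in the WHITEX report.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100
    tracer = rng.gamma(2.0, 1.0, n)            # PFT at the receptor (arbitrary units)
    background = rng.gamma(2.0, 0.3, n)        # sulfate from other sources, ug/m3
    sulfate = 0.4 * tracer + background        # synthetic "truth": 0.4 ug/m3 per tracer unit

    X = np.column_stack([tracer, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, sulfate, rcond=None)
    slope, intercept = coef
    mpp_sulfate = slope * tracer               # apportioned MPP sulfate per sample
    print("estimated mean MPP share of sulfate:",
          round(float(mpp_sulfate.mean() / sulfate.mean()), 2))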
Eigenvector analysis, e.g. empirical orthogonal function (EOF) analysis,
principal components analysis and factor analysis, will also be done to investigate
impacts by specific sources or source areas. The eigenvector analysis results can
be used to qualitatively check the deterministic and receptor modeling analyses.
Eigenvector analysis shows commonly occurring spatial patterns and their
variation in time. The main patterns may be associated with specific sources or
source areas. By examining the time series (time variation) of each eigenvector,
it can be determined which times a particular source area contributes to
concentrations at each site. Meteorological information, such as wind speed and
direction and humidity, and its temporal patterns, along with source information,
provides a physical basis for interpreting and supporting the results of the
eigenvector analyses.
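The sketch below (Python, synthetic data) shows one way an EOF decomposition of a time-by-site concentration matrix can be carried out with a singular value decomposition; the scaling and any rotation used in the project's analysis are left to the technical panel.

    import numpy as np

    rng = np.random.default_rng(3)
    C = rng.gamma(2.0, 0.5, (120, 8))          # 120 sampling periods x 8 sites (synthetic)
    A = C - C.mean(axis=0)                     # remove site means
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    explained = s**2 / np.sum(s**2)            # variance fraction per EOF
    eofs = Vt                                  # spatial patterns (one per row)
    amplitudes = U * s                         # time series of each pattern
    print("variance explained by the first two EOFs:", np.round(explained[:2], 2))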
The Modeling and Data Analysis technical panel will make specific
recommendations concerning the modeling and data analysis approaches to be
used.
Extrapolation of intensive study periods to the long-term:
To determine longer term impacts to visibility at GCNP, it is necessary
to extrapolate from results of the intensive study periods. This will be a two-step
process; the first step will relate the entire 12-15 month study period to the
intensive period, while the second will extrapolate from the 12-15 month period
to a multi-year period. The first step involves application of source-oriented and
receptor models, which are developed and evaluated with intensive period data,
to the meteorology and air quality data for the entire study period. In the second
step, the relative frequency of long-term meteorological patterns will be compared
with those during the study period and qualitative adjustments made if necessary.
Source-oriented models will be evaluated and calibrated using the more
complete data of the intensive periods. The resulting models will then be run on
data from the entire study period. For all modeling analyses a portion of the data
may be withheld in order to independently test the models.
During the intensive study periods, receptor modeling will use the
artificial tracer results to apportion sulfate due to MPP and any other sources
tagged with artificial tracers. Receptor models will also use endemic tracers to
apportion remaining significant sources. Results from receptor models based on
endemic tracers will be compared to results of the same models using artificial
tracers to evaluate the utility of endemic tracers. If successful, endemic tracer
models will then be applied to the entire study period to apportion sources over
a complete annual cycle and used in conjunction with the deterministic modeling
analysis.
The representativeness of the study year to longer term average conditions
will be studied. It should be acknowledged that significant year to year variability
in meteorological conditions occurs and that the likelihood of any given year
being "typical" is not high. The frequency of occurrence of conditions associated
with impacts from each source such as wind speed and direction, humidity, etc.
can be compared for the study year and other years for which data are available.
Where they exist, optical and air quality measurements from previous years will
be compared to the study year measurements. A meteorological classification
scheme that uses criteria affecting visibility may be developed. The frequency
of occurrence of each pattern for the study period and longer term average can
then be compared to put the study year into perspective.
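The comparison might be tabulated as in the sketch below (Python); the class names and frequencies are hypothetical placeholders for whatever classification scheme is adopted.

    study_year = {"SW dry": 120, "monsoonal": 90, "winter storm": 40, "other": 115}  # days
    long_term  = {"SW dry": 0.35, "monsoonal": 0.22, "winter storm": 0.13, "other": 0.30}

    n = sum(study_year.values())
    for cls, days in study_year.items():
        print(f"{cls:13s} study year {days / n:.2f}  long-term {long_term[cls]:.2f}")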
Framework for Summarizing Results:
In a complex program such as this, a sound plan for compilation of results
is as important as the collection of high quality and representative data and the
performance of appropriate interpretive analysis. Development of an approach
to organize the results from this program helps to focus attention and resources
on critical steps for the entire program and communicate those ideas to others.
Just as it is inappropriate for worst case results to receive primary
attention, it is also inappropriate to dwell on average or typical conditions,
especially for an instantaneous effect such as visibility. The 12-15 month study
period with hourly deterministic model results requires some method for
summarizing the results of the study that avoids these pitfalls. A preliminary
conceptual framework for summarizing the results of Project MOHAVE is shown
in Table 3. The key idea is the stratification of time periods based upon the
locations with respect to GCNP of MPP emissions and those of other significant
sources, such as from southern California. These would be based upon the
modeling studies. Another stratification is whether the plume(s) has undergone
wet or dry chemistry (based upon modeling results and observations). If useful,
other stratifications could be developed. The frequency of each condition, the
average and standard deviation of the % sulfate from MPP, the % of extinction
2-13
-------
Table 3 - Conceptual Framework for Summarizing Project MOHAVE Results

GCNP Impact & Condition | Frequency (Deterministic Model) | % Sulfate (Reconciled Models) | % Extinction (Extinction Budget) | Measure of Perception
No MPP in GCNP | | | |
MPP & SCA Dry | | | |
MPP & SCA Wet | | | |
MPP & Other Sources Dry | | | |
MPP & Other Sources Wet | | | |
MPP Alone Dry | | | |
MPP Alone Wet | | | |
Other Appropriate Categories | | | |

SCA refers to the urban and industrial areas of southern California.
2-13a
-------
from MPP, and a measure of the perceptibility of the MPP impact are estimated for
the study period.
Stratification of conditions is expected to not only aid in summarization,
but to reduce the uncertainty levels for some of the receptor model results by
restricting the variation of parameters assumed to be constant (e.g., chemical
conversion rate). Extrapolation to a long-term average may be done through the
use of a meteorological classification scheme as previously described. This type
of approach provides an efficient means of presenting the magnitude and
frequency of the estimated impact of MPP emissions on GCNP over a long-term
period that could be used to evaluate the significance of existing impairment.
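The summarization itself is a simple grouping operation, sketched below (Python) with hypothetical records; the real inputs would be the reconciled model results for each sampling period.

    import statistics
    from collections import defaultdict

    periods = [                                   # (condition, % sulfate from MPP), hypothetical
        ("MPP alone, dry", 12.0), ("MPP alone, dry", 9.0),
        ("MPP & SCA, wet", 25.0), ("No MPP in GCNP", 0.0),
        ("MPP & SCA, wet", 31.0), ("MPP alone, dry", 15.0),
    ]

    groups = defaultdict(list)
    for condition, pct in periods:
        groups[condition].append(pct)

    for condition, values in groups.items():
        print(f"{condition:16s} freq {len(values) / len(periods):.2f}  "
              f"mean %sulfate {statistics.mean(values):.1f}  sd {statistics.pstdev(values):.1f}")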
2-14
-------
Appendix 3
PARTICIPANT LIST
Project MOHAVE Planning Workshop
April 29 - May 2, 1991
Denver, Colorado
Robert Bauman
U.S. Environmental Protection Agency
OAQPS/TSD (MD-14)
Research Triangle Park, NC 27711
(919) 541-5629
Donald Blumenthal
Sonoma Technology
5510 Skylane Drive, Suite 101
Santa Rosa, CA 95403
(707) 527-9372
Jason Ching
U.S. Environmental Protection Agency
AREAL (MD-80)
Research Triangle Park, NC 27711
(919) 541-4801
Russell Dietz
Brookhaven National Lab
Building 426
Upton, NY 11973
(516) 282-3059
FAX (516) 282-2887
Ray Dickson
NOAA
1750 Foote Drive
Idaho Falls, ID 83402
(208) 526-2328
David L. Dietrich
Air Resource Specialists, Inc.
1901 Sharp Point Drive, Suite E
Fort Collins, CO 80525
(303) 484-7941
Robert Eldred
Crocker Nuclear Laboratory
University of California at Davis
Davis, CA 95616
(916) 752-1120
FAX (916) 752-1124
Rob Farber
Southern California Edison Co.
2244 Walnut Grove Road
Rosemead, CA 91770
(818) 302-9693
John Gaynor
NOAA
325 Broadway R/E WP7
Boulder, CO 80303
(303) 497-6436
Mark Green
Desert Research Institute (EMSL-LV)
P.O. Box 93478
Las Vegas, NV 89193-3478
(702) 798-2182
FAX (702) 798-2692
Ronald Henry
Civil & Environmental Engineering
University of Southern California
KAP 224E
3620 S. Vermont Avenue
Los Angeles, CA 90089-2231
(213) 740-0596
Thomas Hoffer
Desert Research Institute
P.O. Box 60220
Reno, NV 60220
(702) 677-3193
Hari Iyer
Colorado State University
Foothills Campus
Fort Collins, CO 80523
(303)491-6769
3-1
-------
Jonathan Kahl
Department of Geosciences
University of Wisconsin-Milwaukee
P.O. Box 413
Milwaukee, WI 53201
(414) 229-4561
Darko Koracin
Desert Research Institute
P.O. Box 60220
Reno, NV 60220
(702) 677-3193
Doug Latimer
Latimer & Associates
2769 Iris Avenue, Suite 117
Boulder, CO 80304
(303) 440-3332
William Malm
National Park Service
Colorado State University
Fort Collins, CO 80523
(303) 491-8292
FAX (303) 491-8598
Stan Marsh
Southern California Edison Co.
2244 Walnut Grove Road
Rosemead, CA 91770
(818) 302-9711
Sharon McCarthy
Sigma Research Group
234 Littleton Road, Suite 2E
Westford, MA 01886
(508) 692-0330
Charles McDade
ENSR Consulting & Engineering
1220 Avenida Acaso
Camarillo, CA 93012
(805) 388-3775
FAX (805) 388-3577
Janet Metsa
JCM Environmental
5 Pine Circle
Houghton, MI 49931
(906) 482-5665
Vince Mirabella
Southern California Edison Co.
2244 Walnut Grove Road
Rosemead, CA 91770
(818) 302-9748
John Molenar
Air Research Specialists
1901 Sharp Point Dr., Suite E
Fort Collins, CO 80525
(303) 484-7941
Gene Mroz
Los Alamos National Laboratory
P.O. Box 1663, MSJ-514
Los Alamos, NM 87545
(505) 667-7758
FAX (505) 665-5688
Peter Mueller
EPRI
P.O. Box 10412
Palo Alto, CA 94303
(415) 855-2586
William Neff
NOAA
325 Broadway R/E WP7
Boulder, CO 80303
(303) 497-6265
John Ondov
Department of Chemistry
University of Maryland
College Park, MD 20742
(301) 405-1859
FAX (301) 314-9121
Roger Pielke
Colorado State University
Department of Atmospheric Science
Foothills Campus
Fort Collins, CO 80523
(303) 491-8293
Marc Pitchford
EPA-EMSL-LV
P.O. Box 93478
Las Vegas, NV 89193-3478
(702) 798-2363
FAX (702) 798-2692
3-2
-------
Bruce Polkowsky
U.S. Environmental Protection Agency
OAQPS/AQMD (MD-12)
Research Triangle Park, NC 27711
(919) 541-5532
Pradeep Saxena
EPRI
P.O. Box 10412
Palo Alto, CA 94303
(415) 855-2591
Nelson Seaman
Pennsylvania State University
503 Walker Building
University Park, PA 16802
(814) 863-1583
Chris Shaver
National Park Service
Air Quality Division
P.O. Box 25287
Denver, CO 80225
(303) 969-2075
Jim Sisler
Colorado State University
CIRA Foothills Campus
Fort Collins, CO 80523
(303) 491-8406
Jim Southerland
U.S. Environmental Protection Agency
AQMD/TSD (MD-14)
Research Triangle Park, NC 27711
(919) 541-5523
Gene Start
NOAA
1750 Foote Drive
Idaho Falls, ID 83402
(208) 526-2328
Bob Stevens
U.S. Environmental Protection Agency
AREAL (MD-47)
Research Triangle Park, NC 27711
(919) 541-3156
Ivar Tombach
AeroVironment
222 E. Huntington Drive
P.O. Box 5013
Monrovia, CA 91017
(818) 357-9983
John Vimont
National Park Service
Air Quality Division
P.O. Box 25287
Denver, CO 80225
(303) 969-2077
3-3
-------
Appendix 4
GROSS APPROXIMATION OF MOHAVE IMPACT TO GRAND CANYON NP
VISIBILITY USING HIGHLY SIMPLIFIED ASSUMPTIONS
Assume:
1) Concentration of SO2 at long distances from the source is approximately:

   [SO2] = Q / (u x h x D tan θ)

   Q = SO2 source strength = 150 tons per day = 1.58 kg s-1 = 1.58 x 10^9 µg s-1
   u = average wind speed in the mixed layer
   h = depth of mixed layer
   D = distance from source
   θ = lateral plume dispersion in degrees
   D tan θ = width of plume at distance D
   θ typically 5-15°
   distance to GCNP is 120 km
   Thus plume width at 120 km = 21 km for θ = 10°, 32 km for θ = 15°
   sulfate is (NH4)2SO4
2) Determine the incremental sulfate concentration that is noticeable:
   Assume a change in bext of 10% is noticeable
   Scattering efficiency of (NH4)2SO4 is 5 m2 g-1
   For Rayleigh conditions, bext = 11 Mm-1; the noticeable change is:
      1.1 Mm-1 / 5 m2 g-1 = 0.22 µg m-3
   For average conditions, bext = 25 Mm-1; the noticeable change is:
      2.5 Mm-1 / 5 m2 g-1 = 0.50 µg m-3
4-1
-------
CASE 1: prefrontal winter conditions, cloudy
   u = 20 m s-1 (conservatively high)
   h = 3000 m
   θ = 10°
   [SO2] = 1.58 x 10^9 µg s-1 / [(2 x 10^1 m s-1)(3 x 10^3 m)(2.1 x 10^4 m)] = 1.2 µg m-3 SO2
         = 2.6 µg m-3 (NH4)2SO4 at 100% conversion
         = 1.3 µg m-3 (NH4)2SO4 at 50% conversion
CASE 2: Typical summer afternoon conditions, a) cloudy, b) dry
   u = 6 m s-1 (August average over 18 years at China Lake, 10,000 feet MSL)
   h = 4000 m
   θ = 15°
   [SO2] = 1.58 x 10^9 µg s-1 / [(6 m s-1)(4 x 10^3 m)(3.2 x 10^4 m)] = 2.1 µg m-3 SO2
         = 4.2 µg m-3 sulfate at 100% conversion
   a) cloudy: if SO2 contacts cloud, assume 50% conversion; sulfate = 2.1 µg m-3
   b) dry: assume 3.5% hr-1 conversion
      transport time = 120 km / (6 m s-1) = 5.6 hours, giving 19% conversion = 0.8 µg m-3 sulfate
4-2
-------
CASE 3: weak pre-frontal winter conditions, cloudy
   u = 6 m s-1
   h = 1500 m
   θ = 15°
   [SO2] = 1.58 x 10^9 µg s-1 / [(6 m s-1)(1.5 x 10^3 m)(3.2 x 10^4 m)] = 5.5 µg m-3 SO2
   sulfate = 11.3 µg m-3 at 100% conversion = 5.7 µg m-3 at 50% conversion
For the conditions considered, the plume would range from marginally noticeable to quite
noticeable using this very simple methodology. The results indicate that further
consideration is justified; the potential for impact cannot be dismissed without additional
study.
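The case arithmetic can be reproduced directly from the stated assumptions, as in the short Python sketch below.

    import math

    Q = 1.58e9                  # ug/s SO2 (150 tons per day)
    D = 120e3                   # m, distance to GCNP
    MW_RATIO = 132.0 / 64.0     # (NH4)2SO4 mass per unit SO2 mass

    def so2_conc(u, h, theta_deg):
        # Box-model SO2 concentration (ug/m3) at distance D.
        width = D * math.tan(math.radians(theta_deg))
        return Q / (u * h * width)

    for name, u, h, theta in [("Case 1", 20.0, 3000.0, 10.0),
                              ("Case 2", 6.0, 4000.0, 15.0),
                              ("Case 3", 6.0, 1500.0, 15.0)]:
        so2 = so2_conc(u, h, theta)
        print(f"{name}: SO2 {so2:.1f} ug/m3, "
              f"sulfate at 100% conversion {so2 * MW_RATIO:.1f} ug/m3")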
4-3
-------
COMPARISON OF AVERAGE SULFUR PARTICULATE GRADIENT
BETWEEN SPIRIT MOUNTAIN AND MEADVIEW FOR MPP OUTAGE
AND NON-OUTAGE CONDITIONS
The difference between average particulate sulfur concentrations at Spirit Mountain and
Meadview was compared for outage and non-outage conditions. The gradients were compared
for all wind directions and for wind directions transporting the plume toward the site. Data
are from Murray et al. (1989).
All wind directions
Site | MPP status | S (µg m-3) | n
Meadview | off | 0.363 | 36
Meadview | on | 0.383 | 454
Spirit Mtn | off | 0.410 | 54
Spirit Mtn | on | 0.385 | 442

Wind direction < ±90° from MPP to site
Site | MPP status | S (µg m-3) | n
Meadview | off | 0.363 | 36
Meadview | on | 0.379 | 358
Spirit Mtn | off | 0.508 | 25
Spirit Mtn | on | 0.439 | 242

For all wind directions:    Spirit(on) - Meadview(on) = 0.002 µg m-3
                            Spirit(off) - Meadview(off) = 0.047 µg m-3
                            Difference in gradient (off - on) = 0.045 µg m-3
For wind direction < ±90°:  Spirit(on) - Meadview(on) = 0.060 µg m-3
                            Spirit(off) - Meadview(off) = 0.145 µg m-3
                            Difference in gradient (off - on) = 0.085 µg m-3
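The gradient arithmetic for the favorable wind directions can be reproduced as follows (Python), using the tabulated values from Murray et al. (1989).

    meadview   = {"off": 0.363, "on": 0.379}     # S, ug/m3, wind within +/-90 deg of MPP-to-site
    spirit_mtn = {"off": 0.508, "on": 0.439}

    grad_on  = spirit_mtn["on"]  - meadview["on"]
    grad_off = spirit_mtn["off"] - meadview["off"]
    print(f"gradient, MPP on:  {grad_on:.3f} ug/m3")
    print(f"gradient, MPP off: {grad_off:.3f} ug/m3")
    print(f"difference:        {grad_off - grad_on:.3f} ug/m3")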
It can be seen that the average gradient in particulate sulfur between Spirit Mountain and
Meadview is greater when MPP is not operating, particularly for wind directions favorable for
transport from MPP. It is hypothesized that the gradient is small when MPP is operating
because increased dilution of the southern California sulfur between Spirit Mountain and
Meadview is balanced by formation of particulate sulfur in the MPP plume. During outage
conditions, this does not happen; thus the gradient between Spirit Mountain and Meadview is
increased.
The difference in gradient of 0.085 µg m-3 particulate sulfur corresponds to about
0.34 µg m-3 sulfate as ammonium bisulfate. Assuming a mass scattering efficiency of 5 m2 g-1,
this would add an average of 1.7 Mm-1 to the extinction coefficient. This increase would be
expected to be marginally perceptible on very clear days (bsp < 17 Mm-1) and imperceptible on
other days. However, these estimates are for concentrations averaged over many days, while
visibility is likely to vary significantly over the course of a day, and between days.
It should be emphasized that the data base used is very limited, and no conclusions regarding
the impact of MPP can be drawn from it. However, the difference in gradients
when MPP is off compared to on suggests that the hypothesis put forth above may be correct.
4-5
-------
Appendix 5
August 26, 1991
MOHAVE OUTAGE STUDY-A PLAN SYNOPSIS
Prepared by:
Desert and Intermountain Air Transport Program
The appropriation by Congress of 2.5 million dollars to EPA to conduct a source
apportionment study on the Mohave Power Project (MPP) has generated widespread
interest in the analysis of the data obtained during a seven-month period in 1985 when the
MPP was inoperative. This outage period represents the ultimate experiment: the plant
was turned off and the effects can be examined. It serves as a baseline for assessing the
impact of the MPP on visibility degradation in the Grand Canyon National Park (GCNP).
The data base can also be used in other statistical analyses that utilize the fluctuating power
plant load as a sort parameter.
The initial study, published by Murray et al. (1989), showed that the sulfate
concentration at Meadview during the outage was not significantly lower than that observed
during similar periods in other years. The study set an upper bound of less than 15% on the
sulfate at Meadview attributable to MPP. The power of this analysis was limited by its
rudimentary treatment of inter-annual meteorological differences, and the paper's impact is
limited by the fact that the authors did not consider the daily power plant load during the
control periods.
The SCENES data base was utilized in the analysis of the outage period with respect to
similar periods in other years. That program was designed and implemented to acquire high
quality data for studies of visibility degradation.
The outage study used the 24-hour average particulate samples at three sites, one
background and two receptor, with respect to impact from MPP. Chemical and physical
analyses of the filters were carried out only on every third day. The samples for the two
intermediate days were archived.
A re-examination of the outage and other periods of reduced power plant output compared
to periods when the plant operates at or near capacity during the SCENES program is
envisioned. The new study will incorporate the data used in the original analysis but would
also embody the following elements:
Independent statistical analysis of the experiment
Chemical analysis of all the filters. (Quality assurance will be evaluated through
comparison of current results to past data through regression and time series
analysis.)
5-1
-------
Classification of the synoptic weather patterns affecting transport from MPP to
GCNP.
Deterministic modelling of the wind flow patterns associated with each of the
meteorological regimes.
A detailed compilation of regional SO2 emissions data for the control and outage
periods. (Changes in emission patterns must be included in the final analysis.)
These elements will be described in the following sections.
Statistical Analyses-Dr. Paul Switzer of Stanford University will serve as the independent
statistician. He has a history of involvement in physical measurement processes. He will:
Be responsible for the overall experiment design after consultation with the principal
scientists (Hoffer, White and Koracin), the participants in the original study and
familiarization with the existing data base. (The written experiment design document
would be a cooperative effort.)
Be responsible for the specification of the techniques used to handle the data and
the statistical tests that will be applied. All data manipulations can be performed in
the "blind". This procedure has been used hi the past within the meteorological
community to evaluate the results of weather modification experiments and has
proven effective hi eliminating cries of bias and data selection.
Be responsible for sample handling procedures (if the samples are assigned random
numbers), data stratification, application of statistical tests and reporting of the
results.
Participate in the redesign of the meteorological classification scheme, assisting with
the number of synoptic categories needed for stratification and in defining the
variability limits within categories when the wind field is applied in the deterministic
modelling effort. The statistician in consultation with the principal scientists, will set
the limits on the meteorological data stratification.
Participate with the individuals who have contributed substantially to the project in
the preparation and submission of a research paper to a peer reviewed journal.
Chemical Analysis-All the filters including those already analyzed will be analyzed using
XRF. The contractor will perform the analysis and report the information after the sample
date has been replaced by a random number supplied by the statistician or his agent.
Sample random numbers would be attached to the filters by the following procedure:
A list of dates versus random numbers would be prepared by the statistician.
5-2
-------
The sample ID numbers corresponding to the dates would be used to generate a list
of sample ID versus random number. The sample ID would be replaced by the
random number using the following procedure:
Two individuals not associated with the project would travel to Oregon (NEA)
to handle the samples within the contractor's facility.
The first individual would place the random number associated with the ID
number on the sample container.
The second individual would check to be certain that the two numbers were
correct before removing the ID number.
The primary element of interest to this study is sulfur. Selenium, arsenic and the other trace
elements are of secondary importance. These elements are stable, so the quality of the
sample should not have degraded with time. The filter analyses will be performed through
the external contractor who performed the original analysis.
Sulfur The sulfate concentration will be determined by measuring elemental sulfur
using XRF at an intermediate protocol, Protocol 5.
Arsenic, Selenium and Other Trace Elements The trace element concentrations will
be obtained from the XRF data. If at the end of the experiment it becomes essential
to use additional elements as tracers, arsenic and selenium could be determined
using neutron activation. Arsenic has a short half-life and will be counted by the
contractors. The long half-life of selenium as well as other long half-life elements
will be counted at DRI to lower the overall costs of sample analysis.
Meteorological Classification-The sampling period was a 24-hr day, from midnight to
midnight, starting in June 1985. Some adjustments in data handling will be made for the
data taken on 8 and 16 hour increments prior to 1985. The meteorological conditions
prevailing during each sampling period will be classified using the meteorological
classification scheme developed by Farber et al. (1989) with some modification to
incorporate more surface data and a probability of the occurrence of cloud, based upon
surface observations and upper air observations. All sample days will be included in the
computer calculations of the classification probabilities. As a part of the classification, a
parameter quantifying the strength of the synoptic flow, such as geostrophic wind, height
gradient or vorticity, will be tabulated. Statistical analysis of the strength parameter will be
used to define limits on the wind speed and direction parameters used in deterministic
modelling.
Deterministic Modelling--A minimum of two meteorological models with the appropriate
grid spacing (telescoping grid starting at 1 km) will be exercised for each of the
meteorological classifications. The strength of the synoptic flow determined from the
classification analysis will serve as an input to the model. The wind speed and direction
parameters and their variance will be fixed prior to running the models. At the present
time, the addition of a chemical module to the meteorological model is not contemplated.
However, should a good chemical module become available it would be exercised along with
the meteorological model.
The transport modelling will be used to assign nominal MPP impacts at Meadview. A
potential dosage (concentration x time) will be calculated from the simulated dispersion and
duration of the plume at Meadview. Calculations will be performed for all classifications
and synoptic strengths, yielding a nominal MPP impact corresponding to each sampling
interval. These daily nominal impacts will serve as input variables to the statistical analysis,
inputs that incorporate all relevant meteorological information in a physically correct way.
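The dosage calculation is sketched below (Python); the hourly concentrations are hypothetical model output used only to show the bookkeeping.

    hourly_conc = [0.0, 0.0, 1.2, 3.4, 2.8, 0.9, 0.0, 0.0]   # ug/m3, one 8-h example
    dt_hours = 1.0

    dosage = sum(c * dt_hours for c in hourly_conc)           # ug h m-3
    duration = sum(dt_hours for c in hourly_conc if c > 0)    # hours plume present
    print(f"potential dosage: {dosage:.1f} ug h m-3 over {duration:.0f} h")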
SO2 Emissions--A subcontract will be awarded to an outside contractor specializing in
emissions inventory following a competitive solicitation. The firm will inventory the regional
SO2 emissions and report the results by month and subregion. The inventory will be
compiled for all types of sources for the period of the study, and will be used as a guide to
regional changes in the background SO2/SO4 concentrations.
Project Personnel--The project would be undertaken by the Desert and Intermountain Air
Transport Program (DMAT) under the sponsorship of Southern California Edison Company
(SCE). The project manager will be Dr. Thomas Hoffer, the coordinating scientist Dr.
Warren White, the statistician Dr. Paul Switzer and the deterministic modellers Drs. Leif
Enger, Darko Koracin and David Rogers. The synoptic classification will be undertaken by
a team comprised of Dr. David Rogers, Dr. Mark Green, Dr. Rob Farber and Sara Pryor.
Summary-A reanalysis of data collected during the MPP outage is proposed to refine and
strengthen the bound on MPP's contribution to haze in the GCNP. The experiment, as
proposed, will strive to eliminate bias in the application of data stratification and statistical
analysis.
The project schedule calls for an immediate start with a spring 1992 completion date.
5-4
-------
Appendix 6
THE CSU RAMS
INTRODUCTION
The numerical atmospheric models developed independently under the direction of
William R. Cotton and Roger A. Pielke have recently been combined into the CSU Re-
gional Atmospheric Modelling System (RAMS). Development of many of the physical mod-
ules has been accomplished over the past 15 years and has involved over 50 man years
of effort. RAMS is a general and flexible modelling system rather than a single purpose
model. For example, current research using RAMS includes atmospheric scales ranging
from large eddy simulations (scales of roughly 100 m) to mesoscale simulations of convective
systems (scales of roughly 100 km). This paper will discuss the options available in RAMS, the engineering
aspects of the system and how the flexibility is attained.
RAMS OPTIONS
RAMS is a merging of basically three models that were designed to simulate different
atmospheric circulations. These were a non-hydrostatic cloud model (Tripoli and Cotton,
1982) and two hydrostatic mesoscale models (Tremback et al., 1985 and Mahrer and Pielke,
1977). The capability of RAMS was recently augmented with the implementation of 2-way
interactive grid nesting. Because of this, the modelling system contains many options for
various physical and numerical processes. These options are listed below.
The following options are currently available in configuring a model:
1. Basic equations:
Option 1 Non-hydrostatic time-split compressible (Tripoli and Cotton, 1980)
Option 2 Hydrostatic incompressible or compressible (Tremback et al., 1985)
2. Dimensionality: 1, 2, or 3 spatial dimensions
3. Vertical coordinate:
Option 1 Standard cartesian
Option 2 Sigma-z
4. Horizontal coordinate:
Option 1 Standard cartesian
Option 2 Polar stereographic
5. Grid Structure:
Arakawa-C grid stagger
Unlimited number of nested grids
Unlimited number of levels of nesting
Ability to add and subtract nests
Moveable nests
6-1
-------
6. Finite differencing:
Option 1 leapfrog on long timestep, forward-backward on small timestep, 2nd or 4th
order flux conservative advection.
Option 2 forward-backward time split, 2nd or 6th order flux conservative advection
(Tremback et al., 1987)
7. Turbulence closure:
Option 1 Smagorinsky-type eddy viscosity with Ri dependence
Option 2 Level 2.5 type closure using eddy viscosity as a function of a prognostic
turbulent kinetic energy
Option 3 O'Brien profile function in a convective boundary layer (Mahrer and Pielke,
1977); local exchange coefficient in a stable boundary layer (McNider, 1981).
8. Condensation
Option 1 Grid points fully saturated or unsaturated
Option 2 No condensation
9. Cloud microphysics
Option 1 Warm rain conversion and accretion of cloud water (rc) to raindrops (rr),
evaporation and sedimentation (Tripoli and Cotton, 1980)
Option 2 Option 1 plus specified nucleation of ice crystals (ri), conversion, nucleation
and accretion of graupel (rg), growth of ice crystals (ri), evaporation, melting
and sedimentation (see Cotton et al., 1982)
Option 3 Option 1 plus Option 2 plus predicted nucleation and sink of crystal
concentration (Ni), conversion and growth of aggregates (ra), melting, evaporation
and sedimentation. The nucleation model includes: sorption/deposition, contact
nucleation by Brownian collision plus thermophoresis plus diffusiophoresis, and
secondary ice crystal production by the rime-splinter mechanism (Cotton et al., 1986).
Option 4 No precipitation processes
10. Radiation:
Option 1 Shortwave radiation model including molecular scattering, absorption of
clear air (Yamamoto, 1962), ozone absorption (Lacis and Hansen, 1974) and
reflectance, transmittance and absorptance of a cloud layer (Stephens, 1978),
clear-cloudy mixed layer approach (Stephens, 1977). (See Chen and Cotton
1983, 1987.)
Option 2 Shortwave radiation model described by Mahrer and Pielke (1977) which
includes the effects of forward Rayleigh scattering (Atwater and Brown, 1974),
absorption by water vapor (McDonald, 1960), and terrain slope (Kondrat'yev,
1969).
Option 3 Longwave radiation model including emissivity of a clear atmosphere (Rodgers,
1967), emissivity of cloud layer (Stephens, 1978), and emissivity of "clear and
cloudy" mixed layer (Herman and Goody, 1976)
6-2
-------
Option 4 Longwave radiation model described by Mahrer and Pielke (1977) includ-
ing emissivities of water vapor (Jacobs et al., 1974) and carbon dioxide (Kon-
drat'yev, 1969) and the computationally efficient technique of Sasamori (1972).
Option 5 No radiation
11. Transport and diffusion modules:
Option 1 Semi-stochastic particle model for point and line sources of pollution (Mc-
Nider, 1981)
12. Lower boundary:
Option 1 Surface layer similarity theory based on Louis (1979) as a function of spec-
ified surface roughness over land and predicted sea surface roughness based on
Garratt and Brost (1981).
Option 2 Surface layer temperature and moisture fluxes are diagnosed as a function
of the ground surface temperature derived from a surface energy balance (Mahrer
and Pielke, 1977). The energy balance includes longwave and shortwave radiative
fluxes, latent and sensible heat fluxes, and conduction from below the surface.
To include the latter effect, a multi-level prognostic soil temperature model is
computed.
Option 3 Modified form of Option 2 with prognostic surface equations (Tremback
and Kessler, 1985)
Option 4 Same as Option 2, except vegetation parameterizations are included (Mc-
Cumber and Pielke, 1981; McCumber, 1980)
13. Upper boundary conditions:
Option 1 Rigid lid (non-hydrostatic only)
Option 2 Rayleigh Friction layer plus Option 1-4
Option 3 Prognostic surface pressure (hydrostatic only)
Option 4 Material surface top (hydrostatic only) (Mahrer and Pielke, 1977)
Option 5 Gravity wave radiation condition (Klemp and Durran, 1983)
14. Lateral boundary conditions:
Option 1 Klemp and Wilhelmson (1978a,b) radiative boundary conditions
Option 2 Orlanski (1976) radiative boundary conditions
Option 3 Klemp and Lilly (1978) radiative boundary condition
Option 4 Option 1, 2 or 3 coupled with Mesoscale Compensation Region (MCR)
described by Tripoli and Cotton (1982) with fixed conditions at MCR boundary
Option 5 The sponge boundary condition of Perkey and Kreitzberg (1976) when
large scale data is available from objectively analyzed data fields or a larger scale
model run. This condition includes a viscous region and the introduction of the
large scale fields into the model computations near the lateral boundaries.
15. Initialization
Option 1 Horizontally homogeneous.
Option 2 Option 1 plus variations to force cloud initiation.
Option 3 NMC data and/or soundings objectively analyzed on isentropic surface
and interpolated to the model grid.
Option 4 NMC data interpolated to the model grid.
As one can see, RAMS is quite a versatile modelling system. RAMS has been applied
to the simulation of the following weather phenomena.
1. Towering cumuli and their modification
2. Mature tropical and mid-latitude cumulonimbi
3. Dry mountain slope and valley circulations
4. Orographic cloud formation
5. Marine stratocumulus clouds
6. Sea breeze circulations
7. Mountain wave flow
8. Large eddy simulation of power plant plume dispersal
9. Large eddy simulation of convective boundary layer
10. Urban circulations
11. Lake effect storms
12. Tropical and mid-latitude convective systems
ENGINEERING ASPECTS
Because of the large number of options in RAMS, the structuring of the code needs to
be carefully considered. This section will discuss various aspects of the code structure of
the system.
Pre-processor The code of RAMS is written in as close to the FORTRAN 77 standard
as possible. However, with a program as large as this, the FORTRAN standard is lacking in
several features such as global PARAMETER and COMMON statements and conditional
compilation. To remedy these insufficiencies, the RAMS code takes advantage of a pre-
processor written as part of the RAMS package. This pre-processor itself is written in the
77 standard so that the package as a whole is highly portable. It takes full advantage of the
character features of FORTRAN and has executed successfully on a number of machines
including a VAX, CRAY-1, CRAY-X-MP, and CYBER 205 without modification. Some of
the features of the pre-processor are described below:
6-4
-------
1) By including a character in the first column of a line of code, that line can be "acti-
vated" or "eliminated" from the compile file. This allows for conditional compilation
of single lines or entire sections of code.
2) A pre-processor variable can be set to a value. This variable can then be used in
other expressions including a pre-processor IF or block IF to conditionally set other
pre-processor variables. These variables also can be converted to FORTRAN PA-
RAMETER statements which can be inserted anywhere in the rest of the code.
3) A group of statements can be delineated as a "global" which then can be inserted
anywhere in the code. This is very useful for groups of COMMON and PARAMETER
statements.
4) DO loops can be constructed in a DO/ENDDO syntax, eliminating the need for
statement labels on the DO loops.
Two-way interactive grid nesting The use of grid nesting allows a wider range of mo-
tion scales to be modeled simultaneously and interactively. It can greatly ease the limi-
tations of unnested simulations in which a compromise must be reached between covering
an adequately large spatial domain and obtaining sufficient resolution of a particular local
phenomenon. With nesting, RAMS can now feasibly model mesoscale circulations in a
large domain where low resolution is adequate, and at the same time resolve the large eddy
structure within a cumulus cloud in a subdomain of the simulation.
Nesting in RAMS is set up such that the same model code for each physical process
such as advection is used for each grid. This makes it easy for any desired number of grids
to be used without having to duplicate code for each one. Also, it is easy to add or remove
a nested grid in time, and to change its size or location. There is still the flexibility of
choosing many model options independently for different grids.
RAMS has adopted the two-way interactive nesting procedure described in Clark and
Farley (1984). This algorithm is the means by which the different nested grids communicate
with each other. The process of advancing coarse grid A and fine nested grid B forward in
time one step begins with advancing grid A alone as if it contained no nest within. The
computed fields from A are then interpolated tri-quadratically to the boundary points of B.
The interior of B is then updated under the influence of its interpolated boundary values.
Finally, the field values of A in the region where B exists are replaced by local averages from
the fields of B. An increase in efficiency over the Clark and Farley method was implemented
by allowing a coarse grid to be run at a longer timestep than a fine grid.
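The communication pattern can be illustrated on a one-dimensional advection problem, as in the Python sketch below; this is only a schematic of the two-way nesting idea, not the RAMS implementation.

    import numpy as np

    nx_c, ratio = 60, 3                 # coarse points, nesting ratio
    i0, i1 = 20, 40                     # coarse-index span of the nest
    dx_c, u, dt = 1.0, 1.0, 0.3         # Courant number 0.3 on the coarse grid
    dx_f = dx_c / ratio
    nx_f = (i1 - i0) * ratio

    qc = np.exp(-0.5 * ((np.arange(nx_c) * dx_c - 10.0) / 2.0) ** 2)   # initial blob
    qf = np.interp(i0 * dx_c + (np.arange(nx_f) + 0.5) * dx_f,
                   np.arange(nx_c) * dx_c, qc)

    def upwind(q, c):
        # One upwind advection step with Courant number c (u > 0).
        out = q.copy()
        out[1:] -= c * (q[1:] - q[:-1])
        return out

    for _ in range(50):
        qc = upwind(qc, u * dt / dx_c)                  # 1) advance the coarse grid A
        qf[0] = np.interp(i0 * dx_c + 0.5 * dx_f,       # 2) interpolate A to B's boundary
                          np.arange(nx_c) * dx_c, qc)
        for _ in range(ratio):                          # 3) advance the fine grid B
            qf = upwind(qf, u * (dt / ratio) / dx_f)
        qc[i0:i1] = qf.reshape(i1 - i0, ratio).mean(1)  # 4) replace A by averages of B
    print("coarse-grid maximum after nesting:", round(float(qc.max()), 3))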
The following options are available with nesting in RAMS:
1) There is no imposed limit (only a practical one) to the number of nested grids which
can be used.
2) When two grids B and C are nested within grid A, they may be either independent
(occupying different space) or C may be nested within B.
-------
3) The increase in spatial resolution of a nested grid may be any integer multiple of its
parent grid resolution. Moreover, this multiple may be specified independently for the
three coordinate directions.
4) A nested grid may, but need not, start from the ground and extend to the model
domain top.
5) A nested grid may be added or removed at any time during a simulation.
6) A nested grid can travel horizontally at a prescribed velocity.
I/O structure For those machines with limited central memory and a "non-virtual"
operating system or for efficiency on virtual systems, RAMS is constructed with a disk I/O
scheme. When the scheme is operating, a subset of the model's three-dimensional variables
will reside in central memory at any one time. Computations then can be performed with
this subset. When these computations are finished, a new subset of three-dimensional
variables are requested and computations performed with these. The RAMS structure,
thus, is dependent on this I/O scheme and consists of a series of calls to the I/O scheme
and to the routines which do the calculations.
Modularity For flexibility, RAMS is written to be as modular as possible. Each individual
physical parameterization or numerical process is put in a separate subroutine so that the
routines can easily be replaced for different options or with new developments.
Computational routines The routines that do the actual computations for the model
are written so that the implementor of a new or replacement routine does not need to
be concerned with most of the details of the rest of the model computations. All three-
dimensional variables are "passed" to the subroutines through the call statement with other
variables passed through COMMON. The implementor then has the flexibility to structure
his routine in whatever manner he wishes to produce the desired result. This concept will
also make the implementation of routines from other models and programs easier with less
modification required.
Analysis routines A set of subroutines has been developed for analyzing and plotting a
variety of quantities from fields output from RAMS. This greatly facilitates the interpreta-
tion and understanding of modeled atmospheric phenomena. The quantities diagnosed by
these routines include vorticity, divergence, streamfunction, energy, momentum flux, most
variances and covariances, and layer averaged quantities.
REFERENCES
Atwater, M.A. and P.S. Brown, Jr., 1974: Numerical calculation of the latitudinal variation
of solar radiation for an atmosphere of varying opacity. J. Appl. Meteor., 13, 289-297.
Chen, C. and W.R. Cotton, 1983: A one-dimensional simulation of the stratocumulus-
capped mixed layer. Bound.-Layer Meteor., 25, 289-321.
Chen, C. and W.R. Cotton, 1987: The physics of the marine stratocumulus-capped mixed
layer. J. Atmos. Sci., 44, 2951-2977.
6-6
-------
Clark, T.L., and R.D. Farley, 1984: Severe downslope windstorm calculations in two and
three spatial dimensions using anelastic interactive grid nesting: A possible mecha-
nism for gustiness. J. Atmos. Sci., 41, 329-350.
Cotton, W.R., M.A. Stephens, T. Nehrkorn, and G.J. Tripoli, 1982: The Colorado State
University three-dimensional cloud/mesoscale model 1982. Part II: An ice phase
parameterization. J. de Rech. Atmos., 16, 295-320.
Cotton, W.R., G.J. Tripoli, R.M. Rauber, and E.A. Mulvihill, 1986: Numerical simulation
of the effects of varying ice crystal nucleation rates and aggregation processes on
orographic snowfall. J. Climate Appl. Meteor., 25, 1658-1680.
Garratt, J.R., and R.A. Brost, 1981: Radiative cooling effects within and above the noctur-
nal boundary layer. J. Atmos. Sci., 38, 2730-2746.
Herman, G. and R. Goody, 1976: Formation and persistence of summertime arctic stratus
clouds. J. Atmos. Sci., 33, 1537-1553.
Jacobs, C.A., J.P. Pandolfo, and M.A. Atwater, 1974: A description of a general three dimen-
sional numerical simulation model of a coupled air-water and/or air-land boundary
layer. IFYGL final report, CEM Report No. 5131-509a.
Klemp, J.B. and D.R. Durran, 1983: An upper boundary condition permitting internal
gravity wave radiation in numerical mesoscale models. Mon. Wea. Rev., 111, 430-
444.
Klemp, J.B. and D.K. Lilly, 1978: Numerical simulation of hydrostatic mountain waves. J.
Atmos. Sci., 35, 78-107.
Klemp, J.B. and R.B. Wilhelmson, 1978a: The simulation of three-dimensional convective
storm dynamics. J. Atmos. Sci., 35, 1070-1096.
Klemp, J.B. and R.B. Wilhelmson, 1978b: Simulations of right- and left-moving storms
produced through storm splitting. J. Atmos. Sci., 35, 1097-1110.
Kondrat'yev, J., 1969: Radiation in the Atmosphere. Academic Press, New York, 912 pp.
Lacis, A.A., and J. Hansen, 1974: A parameterization for the absorption of solar radiation
in earth's atmosphere. J. Atmos. Sci., 31, 118-133.
Louis, J.F., 1979: A parametric model of vertical eddy fluxes in the atmosphere. Bound.-
Layer Meteor., 17, 187-202.
Mahrer, Y. and R.A. Pielke, 1977: A numerical study of the airflow over irregular terrain.
Beitrage zur Physik der Atmosphare, 50, 98-113.
6-7
-------
McCumber, M.D., 1980: A numerical simulation of the influence of heat and moisture fluxes
upon mesoscale circulation. Ph.D. dissertation, Dept. of Environmental Science, Uni-
versity of Virginia.
McCumber, M.C. and R.A. Pielke, 1981: Simulation of the effects of surface fluxes of heat
and moisture in a mesoscale numerical model. Part I: Soil layer. J. Geophys. Res.,
86, 9929-9938.
McDonald, J.E., 1960: Direct absorption of solar radiation by atmospheric water vapor. J.
Meteor., 17, 319-328.
McNider, R.T., 1981: Investigation of the impact of topographic circulations on the trans-
port and dispersion of air pollutants. Ph.D. dissertation, University of Virginia, Char-
lottesville, VA 22903.
Orlanski, I., 1976: A simple boundary condition for unbounded hyperbolic flows. J. Comput.
Phys., 21, 251-269.
Perkey, D.J. and C.W. Kreitzberg, 1976: A time-dependent lateral boundary scheme for
limited-area primitive equation models. Mon. Wea. Rev., 104, 744-755.
Rodgers, C.D., 1967: The use of emissivity in atmospheric radiation calculations. Quart. J.
Roy. Meteor. Soc., 93, 43-54.
Sasamori, T., 1972: A linear harmonic analysis of atmospheric motion with radiative dissi-
pation. J. Meteor. Soc. Japan, 50, 505-518.
Stephens, G.L., 1977: The transfer of radiation in cloudy atmospheres. Ph.D. Thesis, Mete-
orology Department, University of Melbourne.
Stephens, G.L., 1978: Radiation profiles in extended water clouds. I: Theory. J.
Atmos. Sci., 35, 2111-2122.
Tremback, C.J. and R. Kessler, 1985: A surface temperature and moisture parameteriza-
tion for use in mesoscale numerical models. Preprints, 7th Conference on Numerical
Weather Prediction, 17-20 June 1985, Montreal, Canada, AMS.
Tremback, C.J., G.J. Tripoli, and W.R. Cotton, 1985: A regional scale atmospheric nu-
merical model including explicit moist physics and a hydrostatic time-split scheme.
Preprints, 7th Conference on Numerical Weather Prediction, June 17-20, 1985, Mon-
treal, Quebec, AMS.
Tremback, C.J., J. Powell, W.R. Cotton, and R.A. Pielke, 1987: The forward in time
upstream advection scheme: Extension to higher orders. Mon. Wea. Rev., 115, 540-
555.
6-8
-------
Tripoli, G.J. and W.R. Cotton, 1980: A numerical investigation of several factors contribut-
ing to the observed variable intensity of deep convection over South Florida. J. Appl.
Meteor., 19, 1037-1063.
Tripoli, G.J., and W.R. Cotton, 1982: The Colorado State University three-dimensional
cloud/mesoscale model - 1982. Part I: General theoretical framework and sensitivity
experiments. J. de Rech. Atmos., 16, 185-220.
Yamamoto, G., 1962: Direct absorption of solar radiation by atmospheric water vapor,
carbon dioxide and molecular oxygen. J. Atmos. Sci., 19, 182-188.
6-9
-------
Appendix 7
1. DEVELOPMENT OF AN INTERACTIVE DATA
ANALYSIS TOOL USING THE MONTE CARLO
MODEL
1.1 INTRODUCTION
The 1990 Clean Air Act explicitly recognizes the existence of long range transport of
air pollution. Several provisions of this significant new law require regulatory actions
that involve multi-state regions, dictated by regional-scale air pollution. For instance,
the Act requires the establishment of Transport Commissions over the next five years.
These commissions will be charged with policy development for "airsheds" on a
regional scale involving several neighboring states.
The work of such commissions, and many other provisions of the 1990 law, will require
technical input on the nature and scope of regional air pollution. Typical
questions may be: What is the region of influence of specific sources? What are the
major source regions contributing to a given receptor? How would certain emission
reduction scenarios affect ambient pollution levels?
In the past, the answers to such questions have been obtained either from intensive
monitoring and measurement campaigns or from prognostic regional models. Intensive
measurement programs are expensive and generally provide answers applicable only to
the measurement domain. Prognostic models, on the other hand, are in general rather
unreliable. Hence, for the effective implementation of the new law, new approaches
and tools are needed. The PC diagnostic Monte Carlo model proposed for this project
will provide such policy-oriented data analysis and interpretation.
Over the past two decades, much scientific knowledge has been gathered about the
nature and scope of regional air pollution. In fact, it can be stated that the main
causes and physico-chemical processes that characterize regional air pollution are
reasonably well understood.
The CAPITA diagnostic Monte Carlo model encapsulates and describes much of the
knowledge about regional sulfur patterns. It was developed in the early 1980s for the
analysis and interpretation of regional sulfur and visibility data. Its application to
other areas is illustrated in Section 1.6.
The initial version of the CAPITA model served primarily as a research tool. In the
1990s there will be a strong need for operational, easy-to-use, well-calibrated models
that can aid the implementation of the complex new Act. The proposed PC Monte
Carlo model is intended to be a tool to aid policy-related decision making.
7-1
-------
1.2 PURPOSE
The purpose of this task is to develop an interactive and physics-based data analysis
tool for the analysis and interpretation of visibility related data. The data analysis tool
is to aid policy oriented and scientific decision making. The results of the work should
be directly applicable to the implementation of the 1990 Clean Air Act.
1.3 GOALS
The project has the following specific goals:
a. Implement a personal computer-based version of the CAPITA Monte Carlo
regional model (PCMC)
b. Re-examine the calibration of the diagnostic model using more recent high
quality aerosol, gaseous, precipitation chemistry, and visibility related data sets.
c. Develop interfaces to meteorological transport data produced by other, more
elaborate meteorological models.
d. Present the results suitable for answering policy questions, such as those posed
by the new Clean Air Transport Commissions.
e. Develop an interactive graphic user interface that will:
- aid the operation of the model by non-programmers
- facilitate the graphic display of results
- present physical entities, such as spatial concentration maps, time charts, and
frequency distributions
f. As much as possible, use off-the-shelf robust software building blocks in the
creation of the interactive PC Monte Carlo model.
1.4 SCOPE
The scope of this task will include the porting, testing, and re-calibration of the
CAPITA Monte Carlo model on a PC platform. It also involves building user
interfaces for the input/output of model data.
In this task, the main scientific/regulatory application of the PCMC model will be to
visibility. This work will not involve significant new research in atmospheric
processes, policy analysis, or other fields. Rather, it will use the available knowledge in
these areas and package it into generally usable PC tools.
1.5 EXPECTED RESULTS
The result of this task will be an interactive data analysis and presentation package that
will allow the simulation modeling of visibility-related atmospheric processes. The
model will be packaged as a tool. As a tool, it should be usable by policy analysts
within and outside the government as well as by the research community.
1.6 THE CAPITA MONTE CARLO MODEL
The proposed interactive data analysis tool will utilize the CAPITA Monte Carlo
regional atmospheric transport/transformation/removal model. The model principles
and some of its applications are described by Patterson et al. (1981). The following
description states the concept of the model and illustrates some of its past applications
relevant to this work.
In the Monte Carlo modelling approach, simulated pollutant quanta (particles) are
"emitted" in accordance with an emission inventory. These quanta are moved in fixed
time increments using the interpolated measured wind fields. During their transport,
the pollutant quanta may be subject to chemical transformations or removal. The
transport as well as the transformations are somewhat randomized; hence the name
Monte Carlo method. The method is also referred to as the Direct Simulation method
since the physico-chemical processes are simulated as discrete events rather than
obtained from the solution of differential equations. The result of a Monte Carlo
simulation is a large number of pollutant "particles" dispersed
geographically for every time step of the simulation.
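To make the direct-simulation idea concrete, the sketch below (in C++, the language proposed for the PCMC) advances a single pollutant quantum through one fixed time step. The data structure, rate values, and the constant-wind stub are invented for this illustration and are not part of the CAPITA code.

```cpp
#include <cstdlib>
#include <cmath>

// Hypothetical state of one pollutant quantum ("particle").
struct Quantum {
    double x, y;    // horizontal position (km)
    double so2;     // remaining primary (SO2) mass
    double so4;     // secondary (SO4) mass formed en route
    bool   alive;   // false once the quantum has been removed
};

// Stand-in for interpolation of the measured wind field to (x, y) at time t.
void windAt(double /*x*/, double /*y*/, double /*t*/, double& u, double& v) {
    u = 20.0;  // km/h, constant westerly component for this sketch
    v = 5.0;   // km/h
}

// Uniform random number in [0, 1).
double unitRand() { return static_cast<double>(std::rand()) / RAND_MAX; }

// Advance one quantum through a single fixed time increment dt (hours),
// using first-order conversion (kConv) and removal (kRemove) rates in 1/h.
void stepQuantum(Quantum& q, double t, double dt, double kConv, double kRemove)
{
    if (!q.alive) return;

    // Transport: move with the interpolated wind, plus a small random
    // displacement standing in for horizontal dispersion.
    double u, v;
    windAt(q.x, q.y, t, u, v);
    const double jitterKm = 2.0;
    q.x += u * dt + jitterKm * (2.0 * unitRand() - 1.0);
    q.y += v * dt + jitterKm * (2.0 * unitRand() - 1.0);

    // Transformation: a random draw decides whether the primary mass of this
    // quantum converts to the secondary species during this step.
    if (unitRand() < 1.0 - std::exp(-kConv * dt)) {
        q.so4 += q.so2;
        q.so2 = 0.0;
    }

    // Removal: another draw decides whether the quantum is deposited.
    if (unitRand() < 1.0 - std::exp(-kRemove * dt))
        q.alive = false;
}

int main() {
    Quantum q = {0.0, 0.0, 1.0, 0.0, true};
    for (int step = 0; step < 8; ++step)   // eight 3-hour steps = 24 h
        stepQuantum(q, step * 3.0, 3.0, 0.01, 0.02);
    return 0;
}
```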
7-3
-------
Table 1. Summary of CAPITA regional model
Model Type: Monte Carlo (Lagrangian, and Eulerian in the limit of a large number of quanta)
Receptor Grid: 52 x 60 grids (variable for PCMC)
Grid Resolution: 127 x 127 km at 60 degrees north latitude (one-third of the U.S. National Meteorological Center grid spacing)
Model Domain: North America (variable for PCMC)
Model Output: Fields of particles every 3 h, each particle representing a mass of emitted pollutant remaining in the atmosphere as each possible chemical species; converted to fields of daily SO2 and SO4 concentrations and dry and wet deposition at all grid points
Input requirements:
  Emissions: Seasonal, surface and tall-stack SO2 emission grids
  Winds: 0000 and 1200 GMT rawinsonde wind profiles at 130 sites; in PCMC, externally generated wind fields are accepted
  Precipitation: 3-hourly observations of precipitation of three intensities at surface synoptic sites
  Cloud cover: Gridded from 3-h surface synoptic sites
  Dewpoint: Gridded from 3-h surface synoptic sites
  Mix heights: Climatological maximum afternoon mixing heights by season, from the work of Holzworth (1972) and Portelli (1977)
Emissions: 3-h SO2 emissions, released in the mixed layer by day and in either the 150-450 m or the 0-150 m layer at night; 1% primary SO4
Transport: Inverse-distance-squared weighting. Upper-air rawinsonde winds are interpolated in space into 11 layers (0-150, 150-450, 450-750, 750-1050, 1050-1350, 1350-1650, 1650-1950, 1950-2250, 2250-2850, 2850-3450, 3450-5250 m above ground). Wind for each layer is the vector average, and winds are linearly interpolated from 12 h to 3 h, using seasonal diurnal interpolation factors at each height to reflect nighttime jets and midday drag from convective mixing.
Precipitation: 3-h grids of the space-time average probability of encountering precipitation are used to scale local wet removal rates as a fraction of the maximum rate.
Mixed Layer: Climatological average, which varies with geographic location and season of year. Representative peak afternoon values are 800 m (winter), 1200 m (spring, fall), and 1350 m (summer); fixed at 150 m at night.
Horizontal Dispersion: Lateral displacement by veering of layers overnight; "eddy diffusion" K = 2000 m²/s by day, 100 m²/s at night.
Vertical Dispersion: Instantaneously mixed throughout the mixed layer during the day (0900-1800 LST); no vertical mixing at night.
SO2 Transformation Rate: Varies seasonally, diurnally, and locally. The "dry" part is proportional to solar radiation, a function of latitude/season, time of day, and local total sky cover; the "wet" part is proportional to the local surface dewpoint.
Dry Deposition Rate: Zero above the local mixed height and above the 150 m night surface layer; varies with stomatal density and opening.
Wet Removal Rate: Zero above the local mixed height; within the mixing layer, (precipitation probability over a grid during a specified time) x (wet removal rate constant, 100%/h for SO4 and 10%/h for SO2).
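The transport entry above describes inverse-distance-squared weighting of the rawinsonde winds; the fragment below sketches that weighting for one layer and one grid point. The station structure and function name are hypothetical.

```cpp
#include <cmath>
#include <vector>

struct StationWind {
    double x, y;  // station location (km)
    double u, v;  // layer-average wind components (m/s)
};

// Inverse-distance-squared interpolation of station winds to one grid point.
void interpolateWind(const std::vector<StationWind>& stations,
                     double gx, double gy, double& u, double& v)
{
    double sumW = 0.0, sumU = 0.0, sumV = 0.0;
    for (const StationWind& s : stations) {
        double dx = gx - s.x, dy = gy - s.y;
        double d2 = dx * dx + dy * dy;
        if (d2 < 1e-6) {          // grid point coincides with a station
            u = s.u; v = s.v; return;
        }
        double w = 1.0 / d2;      // weight falls off as 1 / distance^2
        sumW += w;
        sumU += w * s.u;
        sumV += w * s.v;
    }
    u = sumU / sumW;
    v = sumV / sumW;
}
```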
7-4
-------
In the PC implementation, some of the above model parameters will be changed. The
changes will incorporate better physico-chemical knowledge, better computational
performance and a more general user interface. In what follows, the application of the
Monte Carlo model in different domains is illustrated. We consider that the
illustrations below demonstrate the potential applications of the new PC-based model.
Receptor Modeling and Back Trajectory Analysis. The simplest application of the
model is for showing back trajectories leading to a specific receptor site. The approach
is illustrated below as applied to the analysis and interpretation of the measurements in
the VISTTA program (Macias et al., 1981).
Fig. 1. (a) Map of the portion of the southwestern U.S. of interest in this study. Some of the major emission sources are
indicated. (b), (c) and (d) Intercomparison of calculated air mass histories. Each figure shows two estimates for the history
of air sampled at Page at 11:00 MST on the indicated date. Single heavy tracks are back trajectories derived by
meteorologists from measurements of upper-air winds (MRI, 1980). Multiple light tracks are back trajectories computed
by the CAPITA Monte Carlo model from adjusted midday surface winds, taking dispersion into account (Patterson et al.,
1980). The two estimates, based on independent manipulation of independent data sets, agree satisfactorily for the three
differing transport regimes shown here, and for 16 of the 18 days considered.
7-5
-------
Multiple Plume Modeling. The model was also applied to the modeling of single and
multiple plume dispersion. The figure below indicates the model usage for the
visualization of multiple plumes in the Southwestern U.S. (Macias et al., 1981).
Fig. 2. Calculated plumes from potential source areas, from the CAPITA Monte
Carlo model. Each figure shows the approximate extent, at 11:00 MST on the
indicated date, of material released during the preceding 48 h (A, C) or 24 h (B).
The three rows cover the three characteristic time periods identified by Macias et
al. (1981). (A) 27-30 June: Air which had stagnated over southern California for
several days moved into the Page area during the latter half of this period. (B) 3-6
July: Shifting southerly winds brought material to the vicinity of Page from
wildfires north of Phoenix and smelters southeast of Tucson. (C) 8-11 July: A shift
to more southerly winds during the latter half of this period diminished the impact
of southern California on Page.
7-6
-------
Regional Transmission Modeling. The most extensive use of the CAPITA model has
been the regional modeling of sulfates and extinction coefficients over the eastern U.S.
(Patterson et al., 1981). In that application, the daily patterns of sulfate aerosol and
extinction coefficient were simulated as shown in the figure below.
Fig. 3. Daily maps of midday extinction coefficient (b_ext) corrected to 60% RH (first column), 24-h average SURE SO4
(second column), modeled 24-h distribution of emitted sulfur quanta (third column), and unmodified noon
surface wind field overlaid with the sea-level pressure (last column) for 1-6 August 1977.
7-7
-------
The daily patterns of measured SO4, modeled SO4, the visual range-derived extinction
coefficient b_ext, and air residence time are shown in Figure 4.
Fig. 4. Daily spatial averages within the SURE region of SURE sulfate (thin line), b_ext
(thick line), and model sulfate (dashed line). The model sulfate scale assumes an 1100 m scale height.
The dotted trace is proportional to the number of conservative quanta from a uniform emission
grid remaining within the eastern United States, which defines a regional residence time
for the airmass.
7-8
-------
Global Pollution Modeling (Patterson and Husar, 1981).
Fig. 5. (a) Emission field of trajectory origins; (b) sample wind field grid.
Fig. 5. Seasonal maps of vertical burden arising from 1974 850-mbar winds and 5-day residence time. Shadings represent
the sum of trajectory endpoints (puff arrivals per NMC grid square) weighted by the decay factor exp(-t/τ).
Fig. 3. Seasonal maps of vertical burden arising from 1974 850-mbar winds and 10-day residence time. Shadings represent
the sum of trajectory endpoints (puff arrivals per NMC grid square) weighted by the decay factor exp(-t/τ).
7-9
-------
Retrospective Modeling. In this application, the model transfer matrices along with
historical emission trends were used to reconstruct the SO2 concentration trend in New
York City Central Park for the period 1900-1980 (Husar et al., 1984) (Figure 6).
Figure 6. Estimated SO2 concentration trend for New York City, Central Park (upper and lower estimates).
1.7 APPROACH AND IMPLEMENTATION
This section states the approach and implementation of the proposed goals stated in
section 1.3.
1.7.1 PC Based Version of the Monte Carlo Model (PCMC)
The new model implementation will operate on standard IBM compatible personal
computers. While the model kernel will retain its features, the model will be
completely wrapped into a graphic user interface. It will utilize the readily available
Microsoft Windows graphic operating environment.
The model will be implemented using object-oriented programming techniques in
the C++ language.
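A minimal sketch of how the model kernel might be separated from the Windows user interface under such an object-oriented design is shown below; the class and method names are purely illustrative and do not describe any existing CAPITA code.

```cpp
// Hypothetical C++ interface separating the Monte Carlo kernel from the GUI.
class MonteCarloKernel {
public:
    virtual ~MonteCarloKernel() {}

    // Load emissions and meteorological inputs for the simulation period.
    virtual void loadEmissions(const char* emissionFile) = 0;
    virtual void loadWinds(const char* windFile) = 0;

    // Advance the simulation by one fixed time increment.
    virtual void step(double dtHours) = 0;

    // Return the concentration of a species at a grid cell for display.
    virtual double concentration(int species, int ix, int iy) const = 0;
};
```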
7-10
-------
1.7.2 Re-Calibration of the Model
Following the implementation, the PCMC model will be tested and its constants re-
evaluated using more recent high quality aerosol databases. The candidate data sets
include IMPROVE, by the National Park Service; SCENES, a western U.S. research
consortium; and the particle network of NESCAUM (Northeast States for Coordinated
Air Use Management). These data sets will allow a more precise evaluation of the
transformation and removal rate constants.
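One simple way such a re-evaluation could proceed, sketched below under the assumption that the model can be rerun with trial rate constants, is to scan a range of conversion rates and keep the value that minimizes the squared error against the observed sulfate. The function names and the trivial model stub are placeholders, not the actual calibration procedure.

```cpp
#include <vector>
#include <limits>

// Trivial stand-in for a model run with a trial SO2->SO4 conversion rate (1/h):
// pretend three monitoring sites whose predicted sulfate scales with the rate.
std::vector<double> runModel(double kConv) {
    return std::vector<double>{100.0 * kConv, 80.0 * kConv, 120.0 * kConv};
}

// Choose the conversion rate that best reproduces the observations.
// The observation vector must have one entry per modeled site.
double calibrateConversionRate(const std::vector<double>& observedSulfate)
{
    double bestK = 0.0;
    double bestErr = std::numeric_limits<double>::max();
    for (double k = 0.001; k <= 0.05; k += 0.001) {
        std::vector<double> predicted = runModel(k);
        double err = 0.0;
        for (std::size_t i = 0; i < observedSulfate.size(); ++i) {
            double d = predicted[i] - observedSulfate[i];
            err += d * d;
        }
        if (err < bestErr) { bestErr = err; bestK = k; }
    }
    return bestK;
}
```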
1.7.3 Development of Interfaces for Externally Generated Wind Fields
The previous model used NWS (National Weather Service) upper air and surface
observations to derive the transport wind field. The gridded, x,y,z dependent wind
vectors were generated by the Monte Carlo model itself.
In the PCMC model the above wind generation facilities will be preserved. In
addition, "hooks" will be provided to allow the use of externally generated model
winds. Candidate wind grids include the NWS 100 km mesh predictive model that is
available operationally. Another wind data source may be the MM4 mesoscale model
by NCAR/Penn State.
The use of these external wind fields will not eliminate the need to use the surface
meteorological observations. Such input will provide estimates of solar radiation,
precipitation events, relative humidity, and other variables required by the PCMC.
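One way such a "hook" could look in code is sketched below: an abstract wind source that the transport code queries, with an adapter serving externally generated 3-h fields. The structures and names are assumptions made for the illustration, not a specification of the PCMC interface.

```cpp
#include <vector>

// Hypothetical gridded wind field supplied by an external model for one
// analysis time: u and v components on the PCMC grid.
struct GriddedWinds {
    int nx, ny, nz;
    std::vector<double> u;  // size nx*ny*nz, m/s
    std::vector<double> v;  // size nx*ny*nz, m/s
};

// "Hook" through which the transport code obtains winds, whether from the
// internal rawinsonde-based interpolation or from an external model.
class WindSource {
public:
    virtual ~WindSource() {}
    virtual GriddedWinds windsAt(double hoursSinceStart) const = 0;
};

// Adapter that serves externally generated fields stored at 3-h intervals.
class ExternalWindSource : public WindSource {
public:
    explicit ExternalWindSource(const std::vector<GriddedWinds>& fields)
        : fields_(fields) {}
    GriddedWinds windsAt(double hoursSinceStart) const {
        int idx = static_cast<int>(hoursSinceStart / 3.0);
        if (idx < 0) idx = 0;
        if (idx >= static_cast<int>(fields_.size()))
            idx = static_cast<int>(fields_.size()) - 1;
        return fields_[idx];
    }
private:
    std::vector<GriddedWinds> fields_;
};
```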
1.7.4 Present the PCMC Model Output Suitable for Policy Analysis
Unlike the first, research oriented version of the model, the PCMC will be oriented
toward application in regulatory or other decision making. Hence, the output of the
model will have to be tailored to answer questions relevant to the regulatory function of
EPA or other agencies.
The first such regulatory activity under the 1990 Clean Air Act is the formation of
Transport Commissions. The charge of such commissions is to evaluate the regional
(inter-state) aspects of air pollution. As its first application, the PCMC will provide the
Transport Commissions with a tool to examine regional air pollution transport from
and to alternative source and receptor areas.
1.7.5 Interactive Graphic User Interface
The PCMC will be a Windows-based program. Its operation is accomplished by menu
selections and point-and-click queries. Programming knowledge will not be required.
An example application of the graphic user interface is outlined below for illustration purposes:
7-11
-------
Suppose a member of the interstate Transport Commission wishes to evaluate the
potential impacts of various sources on a given receptor region. The following user
actions would be required:
- Select the pollutant of concern. The emission field for that pollutant is automatically
displayed on a map.
- Zoom in on the map and point to a specific location of interest. The program
would automatically display a pie chart of the relative contributions of various
sources to that location.
- Point to specific sources on the map. This would be a query to retrieve all the
characteristics of that source, including the emissions.
- From a menu, ask for forward trajectories. The program would automatically
draw the trajectories for the previously selected receptor location for a year,
month, day, or hour as specified by the user.
- Select "Show Monitoring Data" from a menu. The program would retrieve and
display the available monitoring data for the previously specified location and
pollutant.
- Select "Show Model Data" from a menu. The program would instantly calculate
the model concentration pattern for the selected location and overlay it on the
measured data. This would give the commission member a feel for the model
performance.
The above illustration is but a small sample of the possible implementations of a user-
friendly data interpretation model.
Since the data will be presented in physical units on maps and charts, and the user
actions will be intuitive, the training and instruction time will be small compared to
that required by current modeling and data analysis software. Hypertext-based context-
sensitive help will also be available to aid the user.
1.7.6 Use of Robust Software Building Blocks.
The PCMC will utilize modern, object-oriented software building principles. It will be
object oriented in principle as well as in implementation. It will make use of "software
ICs" (integrated circuits) that are generic, robust, and suitable for integration into
larger software applications.
These software building blocks will include dynamic link libraries (DLLs), embedded
objects, message-based communication among objects, and software construction sets
(such as ToolBook by Asymetrix Corporation and the Voyager data browser by Lantern
Corporation).
A key feature of the object-oriented approach is that most modules will be reusable. This
will reduce the complexity, size, and maintenance cost of the software.
7-12
-------
1.8 DELIVERABLES
The main deliverable of this project will be a PC-based regional model based on the
Monte Carlo principle. The model will be packaged as a policy analysis tool, including
a tutorial as well as on-line and hard copy documentation. The PCMC will be made
available and distributable without royalty or other legal constraints.
7-13
-------
Appendix 8
NAS review of Whitex - Limitations & Suggested Improvements
The Committee on Haze in National Parks and Wilderness Areas
of the National Research Council, National Academy of Sciences prepared
a report entitled "Haze in the Grand Canyon: An Evaluation of the
Winter Haze Intensive Tracer Experiment" (WHITEX). The WHITEX
experiment studied the effect of the Navajo Generating Station (NGS)
upon visibility in Grand Canyon National Park (GCNP). The
Environmental Protection Agency (EPA) is in the planning stage of a
study (named Project MOHAVE) to determine the effects of the Mohave
Generating Station on visibility in GCNP.
The NAS report on WHITEX noted a number of limitations in the
study and offered some suggestions for how the study could have been improved.
The purpose of this document is to identify how EPA intends to improve
upon the limitations of WHITEX noted by the NAS and to incorporate
the NAS suggestions into the Project MOHAVE study plan. NAS
comments (paraphrased) on the WHITEX study that may be applicable
to Project MOHAVE are listed below. The comments are followed by
responses describing how Project MOHAVE intends to consider these issues.
p.4 (Executive Summary) Committee identifies problems in the multiple
linear regression analysis (DMB and TMBR):
1) Satisfactory tracers are not available for all major sources;
Response:
Project MOHAVE will attempt to identify tracers for all major sources,
source types and source areas. For example, certain halocarbons may be
used as tracers for the Los Angeles Basin. Also, sulfur to selenium ratios
may be significantly different for different coal-fired powerplants. However,
it is acknowledged that all major sources may not have satisfactory tracers
identified. This lack of complete source profiling often occurs in receptor
modeling and does not necessarily preclude the use of receptor modeling to
obtain quantitative results. However, uncertainties in source profiles need
to be incorporated into the error analysis.
2) Interpretation did not account for possible covariance between
Navajo and other coal-fired powerplants in the area;
Response:
Trajectory analyses using the wind fields from the dynamic
meteorological model will allow determination of times that the MPP plume
and plumes from other sources are jointly present at receptor locations. This
will facilitate consideration of covariance of impacts from MPP and other
sources.
3) Both models treat sulfur conversion inadequately.
Response:
The exact methodologies of treating sulfur conversion in the receptor
models has not yet been determined. Rather than scaling tracer by ambient
surface relative humidity in TMBR, as in WHITEX, other methods will be
considered. For example, data may be stratified into "wet" and "dry"
conditions and the model run separately for each subset of data. Similarly for
the DMB analysis, instead of assuming constant conversion rates for the
entire data set, subsets of the data may be grouped, w{th Constant rates over
each group. It is acknowledged that some uncertainty in sulfur conversion is
unavoidable; however, with the use of deterministic modeling, checked by
tracer and sulfate data, along with receptor modeling, reasonable,
quantitative estimates of sulfate contributions from each source may be
obtained.
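As a loose illustration of the stratification idea (not the analysis Project MOHAVE has committed to), the fragment below splits sampling periods into "wet" and "dry" subsets on a relative-humidity threshold so that a separate, constant conversion rate could be fit to each subset. The structure and the threshold value are invented for the example.

```cpp
#include <vector>

struct SamplePeriod {
    double relHumidity;  // percent
    double tracer;       // tracer concentration
    double sulfate;      // measured sulfate
};

// Split sampling periods into "wet" and "dry" subsets so that a separate,
// constant conversion rate can be fit to each subset.
void stratify(const std::vector<SamplePeriod>& all,
              std::vector<SamplePeriod>& wet,
              std::vector<SamplePeriod>& dry,
              double rhThreshold = 70.0)
{
    for (const SamplePeriod& s : all) {
        if (s.relHumidity >= rhThreshold)
            wet.push_back(s);
        else
            dry.push_back(s);
    }
}
```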
p.4 WHITEX did not quantitatively determine the fraction of SO4 aerosol and
resultant haze in GCNP attributable to NGS.
8-2
-------
Response:
As discussed above and in response to other comments, with
measurements, deterministic and receptor modeling, and model reconciliation,
quantitative apportionment of sulfate at GCNP can be done within identified
error bounds. After sulfate has been apportioned, statistical and first-principles
approaches can be used to attribute extinction.
p.4 WHITEX did not adequately quantify the sensitivity of the analysis to
departure from model assumptions, nor did it establish an objective and
quantitative rationale for selecting among various statistical models.
see response to 2nd comment on page 26.
p. 4 The conceptual framework for DMB involved physically unrealistic
simplifications for which the effect on quantitative assessments was not
addressed.
Response:
As discussed elsewhere in the responses, more physically realistic
assumptions will be made wherever possible. However some simplifications
will remain, as in all modeling studies. The effect of variations in
assumptions can be studied to some extent with sensitivity analysis. Also,
comparison to deterministic models (which also contain simplifications) may
help determine the effect of simplifications upon quantitative assessments.
p.4 The data base contained weaknesses; especially important was the lack
of measurements below the rim and the paucity of background measurements
(particularly SO4).
Response:
The conceptual plan for Project MOHAVE calls for monitoring below
the rim of the Grand Canyon and increased background monitoring compared
to WHITEX, including SO4 and tracer. It should be recognized that the
number of feasible monitoring sites is limited due to power requirements and
the inaccessibility of some areas.
p. 4 The background measurements were inadequately incorporated into the
data analyses (in particular, SO4).
Response:
The NAS comments emphasize that not enough sampling sites were
located in the vicinity of GCNP (p. 25). In addition, tracer was not measured
at many locations and only a small subset of tracer data were analyzed.
Project MOHAVE will have more sampling sites in the vicinity of GCNP and
operation over a 12-15 month period, compared to 6 weeks for WHITEX.
This includes more sulfate sample analysis and far more sample analysis of
tracer. However, due to accessibility problems and power requirements, the
number of feasible sampling sites in the vicinity of GCNP is limited.
Thus, the actual number and location of sites may be less than ideal.
p.20 Literature does not demonstrate that MLR can successfully apportion
secondary species among several source types; therefore it is not advisable to
rely solely on such models for the success of a major field experiment.
Response:
Project MOHAVE is emphasizing the use of deterministic models rather
than MLR for apportionment of secondary species. The analysis will also use
receptor models and eigenvector analysis as a check of the deterministic
models.
p.21 Deterministic met. modeling did not reproduce the diurnal fluctuation
in wind flow observed at Page.
see response to next comment
p.21 The met. data and deterministic meteorological modeling do not allow
quantification of the contribution that NGS might have made to haze at
GCNP. The deterministic modeling cannot pinpoint the location of the NGS
plume nor its entrainment into the canyon. The model uses a grid size of 5
km; hence it cannot reproduce the complex topography of GCNP nor the
associated small scale meteorological effects, such as gravity flows. Thus the
meteorological studies provide only qualitative evidence of transport.
Response:
Project MOHAVE will more thoroughly model MPP using increased
meteorological data and greater resolution of topography. Modeling will be
done for the entire 12-15 month study period. Wind profilers will provide a
much increased meteorological data base compared to the WHITEX study.
Model grid size will be 500 m at key areas, allowing greater topographic
resolution and improved representation of small scale flows. It should be
understood that it is impossible to exactly model wind fields; of particular
difficulty is flow in highly complex terrain such as the study area. Monitoring
and modeling of moisture and chemical transformations will allow for a
reasonable quantification of MPP impacts to haze at GCNP.
p. 24 No tracer was used in WHITEX to evaluate urban emissions; therefore
the fraction of haze attributable to these sources is impossible to calculate.
Response:
Tracers for urban areas will be investigated. For example, certain
halocarbons have been identified as tracers for the Los Angeles Basin. Other
urban areas, particularly Las Vegas, will also be investigated for endemic
tracers. In addition, the deterministic modeling will identify the time periods
when emissions from urban areas are in the Grand Canyon area.
p. 24 The source profile for powerplants was based on limited aircraft
measurements of NGS emissions downwind from the stacks. The copper
smelter profile was based on old and uncertain data from the literature.
Response:
The planners of Project MOHAVE are aware of the critical nature of
accurate source profiles for use in receptor models. All available data for
powerplant emissions in the region will be used to generate powerplant source
profiles. The most recent data for smelter emissions will be used. Resource
limitations preclude significant field efforts to document source profiles of all
important sulfur sources with the potential to impact the GCNP area.
p. 24 Variabilities and uncertainties in NGS CD4 emission rates led to
substantial uncertainties in the day to day relationship between CD4 and NGS
sulfur emissions.
Response:
Unlike the WHITEX study, which used tracer data mainly for receptor
modeling, Project MOHAVE will also use tracer for estimating plume dilution
factors. For this purpose, a constant tracer release rate is desirable. With
variations in MPP load, this will result in variations in tracer to sulfur ratios.
For use in receptor modeling, as in WHITEX, tracer concentrations need to
be scaled to the sulfur emissions, which requires plume age. The more
sophisticated meteorological modeling to be done for Project MOHAVE will
give a better calculation of plume age than the simple trajectory models used
in WHITEX.
p. 24 At Hopi Point, CD4 concentrations were determined for only 36
samples, an undesirably small data set for the types and large numbers of
statistical analyses performed on the data.
Response:
The WHITEX study analyzed a small number of samples of CD4
because of the very high analysis costs. It is expected that perfluorocarbons
will be used for Project MOHAVE, for which the analysis costs are not
prohibitive. The tracer sample size will be many times the size for WHITEX,
allowing for a sufficiently large data set for use in statistical analyses.
p. 24 The ratio of SO2 to CD4 in the stack was not analyzed.
Response:
Project MOHAVE intends to analyze some stack samples for the SO2 to
tracer ratio.
8-6
-------
p. 24 The report provides little documentation of procedures and quality
assurance for the sampling and analysis of ambient CD4.
Response:
The participants in Project MOHAVE are acutely aware of quality
assurance problems with some past tracer experiments. The skepticism
regarding the quantitative use of tracers requires not only careful quality
assurance, but also detailed documentation of the procedures and quality
assurance performed. Project MOHAVE reports will provide detailed
documentation of quality assurance for tracer and other data collection.
p. 25 Without data from more stations, the effect of NGS emissions is
difficult to differentiate from other sources in the region.
Response:
Project MOHAVE expects to have data from additional stations in the
area of GCNP compared to WHITEX. Perhaps more significantly, the
deterministic modeling will help differentiate impacts from MPP and other
sources.
p. 26 WHITEX design did not provide the data necessary to quantify the
effects of departures from the statistical assumptions made.
Response:
see response to comment #6, page 4.
p. 26 SO4 contribution attributed to NGS depends strongly on the model
chosen, the tracers included in the model, and the criteria by which the model
is fit to the data. To establish a more rational basis for quantitative
attribution, more attention must be given to alternative formulations of TMBR
and DMB and the criteria for selecting among them. However, even if these
criteria were adequately considered, the statistical results would most likely
remain non-robust in the sense that source attributions generated by the
various statistical models would probably still differ substantially from one
another. One difficulty is that the number of plausible alternative models is
substantial relative to the number of samples for which CD4 data are
available. As the number of models increase, so does the likelihood that one
of them will test significant merely by chance.
Response:
Model formulations will be done based on theoretical considerations.
Sensitivity analysis of varying model assumptions within reasonable ranges
will be done to determine the bounds of possible results. It is possible that
different receptor (and deterministic) models will yield significantly different
results. Reconciliation of model results will be done at this point. Many
more samples of tracer will be available compared to the WHITEX study, thus
decreasing the likelihood that a model will test significant by chance.
p. 26 WHITEX assumed SO4 yields from NGS and smelter emissions were
proportional to ambient relative humidity. This is a simple and indirect
assumption, which scales intermittent processes along the entire trajectory at
cloud level directly to a continuous variable measured at ground level.
Response:
In Project MOHAVE, the effect of moisture upon sulfate formation will
be treated more rigorously than done in the WHITEX report. In addition to
surface humidity measurements, the deterministic meteorological model will
give estimates of humidity at many vertical levels. This information will
include prediction of clouds, which can be compared to satellite observations.
Rather than scaling linearly with relative humidity, a determination will be
made whether or not the plume is in contact with clouds. Sulfate formation
is thought to occur rapidly in clouds and quite slowly outside them,
particularly in winter. Thus, stratification of data into "wet" and "dry"
categories seems appropriate.
p. 31 Given the overriding importance of the RH scaling factor, the
committee believes that the sensitivity of results to alternative assumptions
should have been explored in formulating the models used for the TMBR and
DMB analyses. The NPS WHITEX report assumes that the contributions of
background sources, such as other power plants and urban areas, were
unaffected by RH. The committee believes the report should have considered
the possibility that yields from other sources were also affected by RH.
8-8
-------
Response:
It is likely that contributions from other sources are affected by relative
humidity. This will be considered in the analysis.
p. 31 The DMB analyses are dependent on unique "plume ages", the validity
of which is questionable. Plume ages were estimated only for NGS and not
for other sources.
Response:
Plume ages will be estimated for MPP and a variety of other sources
using wind fields generated by the dynamic meteorological model; this should
provide reasonable estimates of plume ages.
p. 31 DMB is based upon linear models for the oxidation of SO2 to SO4 and
for the deposition of SO2 and SO4. In reality, both processes are likely to
occur at rates that can vary greatly in space and time.
Response:
see response to comment 3, page 4.
p. 32 Nonuniformities in conversion and deposition rates lead to variabilities
in the relationship between SO4 concentrations measured at the receptor sites
and tracer concentrations used in the regression analyses. Because these
nonuniformities were not taken into account in the DMB formulation, the
DMB results are of questionable applicability.
Response:
see response to comment 3, page 4.
p. 32 Possible covariance of impacts from NGS and other coal-fired power
plants makes it difficult to statistically distinguish the relative effects of NGS
and other plants.
Response:
Trajectory analyses using the wind fields from the dynamic
meteorological model will allow determination of times that the MPP plume
and plumes from other sources are jointly present at receptor locations. This
will facilitate consideration of covariance of impacts from MPP and other
sources.
8-9
-------
p. 35 No H2O2 measurements were made at or near GCNP during WHITEX.
Response:
Measurements of H2O2 will be made in the study area, under varying
conditions.
8-10
-------
Appendix 9
Association International Specialty Conference "Visibility and Fine Particles," October 1989, Estes Park, CO.
SURVEY OF A VARIETY OF RECEPTOR
MODELING TECHNIQUES
William C. Malm
National Park Service, Air Quality Division
Cooperative Institute for Research in the Atmosphere
Colorado State University
Ft. Collins, CO 80523
Hari K. Iyer
Department of Statistics
Colorado State University
Fort Collins, CO 80523
John Watson
EEEC, Desert Research Institute
Reno, NV 89506
Douglas A. Latimer
Latimer & Associates
P.O. Box 4127
Boulder, CO 80306-4127
Abstract
The chemical mass balance (CMB) formalism has been used on a semi-routine
basis to apportion emissions to mass concentrations measured at specific receptor sites.
Recently, two other techniques, differential mass balance (DMB) and tracer mass
balance regression (TMBR), have been used to apportion secondary aerosols to
sources and source types at a variety of receptor areas. CMB uses known source
and receptor measured tracer profiles (gradients in tracer concentration at one
point in time) to apportion sources at one point in time. DMB uses gradients in
trace elements across space, while TMBR uses changes in tracers across time to
achieve apportionment of primary as well as secondary aerosol species. Assump-
tions and limitations of each approach will be addressed and a unified formalism
building on strengths of all three approaches will be presented.
9-1
-------
SURVEY OF A VARIETY OF RECEPTOR
MODELING TECHNIQUES
Introduction
Receptor modeling approaches rely on known physical and chemical charac-
teristics of gases and particles at receptors and sources to attribute aerosols to a
source or source type. Historically, the CMB formalism has been used to appor-
tion primary particles. This formalism uses known relationships between emitted
tracers and an assumption that the various tracer profiles stay constant as material
is transported from source to receptor. These tracer profiles are then used to
apportion primary species for each time period that a measurement is made at a
receptor site. Other common types of models include principal component anal-
ysis (PCA) and multiple linear regression (MLR). Explanations of these models
are given by Watson,1,2,3 Chow,4 and Hopke.5 All these models are special cases
of a General Mass Balance (GMB) model which is deterministic in nature. A
regression model similar to MLR is derivable from the GMB equations and will
be referred to as the TMBR model. The TMBR model incorporates changes in
tracer material over time to apportion both primary and secondary aerosols. Fi-
nally, the DMB model, a special case of GMB and referred to here as a receptor
oriented model, is really a hybrid model in that it relies on tracer material to
establish atmospheric dispersion characteristics but deterministically accounts for
deposition and oxidation. Stevens and Lewis,6 Lewis and Stevens,7 and Dzubay
et al.8 have used models similar to the TMBR and GMB to create a hybrid model
which they have used for source apportionment.
General Mass Balance Equations
Each special case of GMB has its own set of limiting assumptions and special
requirements for solution. The assumptions that need to be satisfied for the
mathematical model to be valid will be apparent during the process of derivation
of the model equations. Nevertheless, the assumptions will be explicitly stated
after the derivations of the model equations have been explained. The statistical
aspects of the estimation of the fractional contribution by a given source and the
calculation of the associated uncertainties will also be presented.
9-2
-------
Notational Conventions
The following notation will be used throughout.
Total number of species under consideration = m.
Total number of sources under consideration = n.
Total number of sampling periods = s.
The subscript i will be used for indexing the species, j for sources, and k for
sampling periods.
$C^0_{ijk}$ = concentration of aerosol species i at source j corresponding to
sampling period k.
$C_{ijk}$ = concentration of aerosol species i at the receptor, attributable
to source j, corresponding to sampling period k.
$t_{jk}$ = travel time for the air mass from source j to the receptor, cor-
responding to sampling period k.
$r_{ijk}$ = a factor that accounts for deposition of aerosol species i from
source j, for sampling period k.
$r^*_{ijk}$ = a factor that accounts for the formation of aerosol species i from
a parent species i' emitted by source j, as well as its deposition
during transport, for sampling period k.
$d_{jk}$ = a factor that accounts for dispersion of the aerosol mixture from
source j during sampling period k, as the mixture travels from
the source to the receptor.
Whenever a subscript i denotes a secondary aerosol species, the subscript
i' will denote the corresponding parent aerosol species. For instance, if i denotes
SO4 then i' will stand for SO2.
Model Equations
It follows from the definitions that for primary aerosol species
$$C_{ijk} = C^0_{ijk}\, r_{ijk}\, d_{jk}, \qquad (1)$$
and for secondary aerosol components we have
$$C_{ijk} = C^0_{ijk}\, r_{ijk}\, d_{jk} + C^0_{i'jk}\, r^*_{ijk}\, d_{jk}. \qquad (2)$$
The quantities $r_{ijk}$ are a function of deposition rates and transport time, while
the $r^*_{ijk}$ are functions of deposition and transport times as well as conversion rates.
9 -3
-------
Simple functional forms for $r_{ijk}$ and $r^*_{ijk}$ can be derived if it is assumed that
chemical conversion and deposition are governed by first-order mechanisms and that
conversion and deposition rates are constant in space over some finite increment
in time.
Let $X(t)$ denote the mass, at time $t$ after emission, of a species $i$ in a unit
volume of aerosol mixture. Ignoring dispersion temporarily, assume
$$\frac{dX}{dt} = -(K_c + K_d)\,X, \qquad (3)$$
which, when solved, yields
$$X(t) = X(0)\,\exp\!\bigl(-(K_c + K_d)\,t\bigr), \qquad (4)$$
where $X(0)$ is the mass at time 0 in a unit volume of aerosol mixture, i.e., the
concentration of the species at the source. The quantities $K_c$ and $K_d$ are the con-
version and deposition rates, respectively, for the species under consideration. The
conversion and deposition rates have been assumed to remain constant through-
out the transport path in space and time. If $d(t)$ denotes the dispersion factor
corresponding to $t$ time units after emission of the aerosol mixture, then
$$X(t) = X(0)\,\exp\!\bigl(-(K_c + K_d)\,t\bigr)\,d(t).$$
-------
If now the dispersion factor $d(t)$ is taken into account for the secondary species as well,
it becomes evident that the factor accounting for the formation
of the secondary aerosol species from its parent species, as well as its deposition
during transport, is of the form
$$r^*(t) = \frac{K_c}{K_c + K_d - K_d^{\,s}}\Bigl[\exp\!\bigl(-K_d^{\,s}\,t\bigr) - \exp\!\bigl(-(K_c + K_d)\,t\bigr)\Bigr],$$
where $K_c$ and $K_d$ are the conversion and deposition rates of the parent species and
$K_d^{\,s}$ is the deposition rate of the secondary species.
Based on the above arguments, when the conversion and deposition rates of the
various species remain constant throughout the duration of transport from the
source to the receptor,
$$r_{ijk} = \exp\!\bigl(-(K_c(i,j,k) + K_d(i,j,k))\,t_{jk}\bigr) \qquad (11)$$
and
$$r^*_{ijk} = \frac{K_c(i',j,k)}{K_c(i',j,k) + K_d(i',j,k) - K_d(i,j,k)}\Bigl\{\exp\!\bigl(-K_d(i,j,k)\,t_{jk}\bigr) - \exp\!\bigl(-(K_c(i',j,k) + K_d(i',j,k))\,t_{jk}\bigr)\Bigr\} \qquad (12)$$
where
$K_c(i,j,k)$ = conversion rate of species i from source j to its secondary form,
during sampling period k;
$K_d(i,j,k)$ = deposition rate of species i from source j during sampling period k.
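To make the use of Equations (11) and (12) concrete, the short functions below evaluate the two factors for given conversion and deposition rates and a travel time; the numerical values in the final comment are purely illustrative.

```cpp
#include <cmath>

// Deposition factor for a primary species, Eq. (11):
// r = exp(-(Kc + Kd) * t)
double primaryFactor(double kc, double kd, double t)
{
    return std::exp(-(kc + kd) * t);
}

// Formation-plus-deposition factor for a secondary species, Eq. (12):
// r* = Kc' / (Kc' + Kd' - Kd) * [exp(-Kd t) - exp(-(Kc' + Kd') t)],
// where the primed rates refer to the parent species and Kd to the
// secondary species itself. (Assumes Kc' + Kd' != Kd.)
double secondaryFactor(double kcParent, double kdParent, double kdSecondary,
                       double t)
{
    double denom = kcParent + kdParent - kdSecondary;
    return kcParent / denom *
           (std::exp(-kdSecondary * t) - std::exp(-(kcParent + kdParent) * t));
}

// Example: Kc' = 0.01/h, Kd' = 0.02/h, Kd = 0.01/h, 24 h of transport gives
// primaryFactor(0.01, 0.02, 24.0) of roughly 0.49, and secondaryFactor(...)
// gives the fraction of the parent emission arriving as the secondary species.
```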
Let $C_{ik}$ = concentration of aerosol component i at the receptor during sampling
period k. Since the concentration of aerosol component i at the receptor is the sum
of the concentrations attributable to the various sources, the mass balance equation
becomes
$$C_{ik} = \sum_{j=1}^{n} C_{ijk} \qquad (13)$$
for each sampling period $k = 1, 2, \ldots, s$. From this basic equation various special
cases can be derived.
CMB Model
The first special case of the GMB equations to be examined is the Chemical
Mass Balance formalism.
9-5
-------
Model Equations
Suppose our list of aerosol components includes only material that is nonre-
active and maintains relative ratios between the various species as material is trans-
ported from source to receptor. In this case the $K_c(i,j,k)$ are all zero and the $K_d(i,j,k)$
are the same for all elements i. Their common value is denoted by $K_d(j,k)$, in-
dicating the nondependence on i. This implies that the quantities $r_{ijk}$ do not
depend on i. Then
$$\frac{C_{ijk}}{\sum_{i=1}^{m} C_{ijk}} = \frac{C^0_{ijk}}{\sum_{i=1}^{m} C^0_{ijk}}, \qquad (14)$$
which implies that the signature for source j at the source equals the signature
for source j as perceived at the receptor.
Let $S_{jk} = \sum_{i=1}^{m} C_{ijk}$. The quantity $S_{jk}$ is the concentration of the aerosol
mixture at the receptor during sampling period k that is attributable to source j.
The fraction $a_{ijk}$ defined by
$$a_{ijk} = \frac{C_{ijk}}{S_{jk}} \qquad (15)$$
is then the fraction of species i in the aerosol mixture at the receptor attributable to
source j during sampling period k. Assuming Equation (14) is valid, the numbers
$a_{ijk}$ for $i = 1, 2, \ldots, m$ represent the source signature for source j for sampling
period k. From Equations (13) and (15) it follows that the set of equations in
(16) below also holds:
$$C_{ik} = \sum_{j=1}^{n} a_{ijk}\, S_{jk}. \qquad (16)$$
If the $a_{ijk}$ for all the sources affecting the receptor site are known, then (16)
is a system of linear simultaneous equations in the n unknowns $S_{1k}, S_{2k}, \ldots, S_{nk}$, for
each of the sampling periods $k = 1, 2, \ldots, s$. These are in fact the chemical mass
balance equations. The rank of the system of equations for each k must be equal
to n in order to uniquely solve these equations. In particular, the number of
chemical species (equations) must be greater than or equal to the number of sources (unknowns).
Solutions to the CMB equations that have been used are: 1) a tracer solution;
2) a linear programming solution; 3) an ordinary weighted least squares solution
with or without an intercept; 4) a ridge regression weighted least squares solution
with or without an intercept; and 5) an effective variance least squares solution
with or without an intercept. An estimate of the uncertainty associated with the
source contributions is an integral part of several of these solution methods.
9-6
-------
Weighted linear least squares solutions are preferable to the tracer and linear
programming solutions because: 1) theoretically they yield the most likely solution
to the CMB equations, provided model assumptions are met; 2) they can make
use of all available chemical measurements, not just the so-called tracer species;
and 3) they are capable of analytically estimating the uncertainty of the source con-
tributions.
CMB software in current use9 applies the effective variance solution developed
and tested by Watson11 because this solution: 1) provides realistic estimates of the
uncertainties of the source contributions (owing to its incorporation of both source
profile and receptor data uncertainties); and 2) gives greater influence to chemical
species with higher precisions in both the source and receptor measurements
than to species with lower precisions. The effective variance solution is derived10
by minimizing the weighted sums of the squares of the differences between the
measured and calculated values of $C_{ik}$ and $a_{ij}$. The solution algorithm is an itera-
tive procedure which calculates a new set of $S_{jk}$ based on the $S_{jk}$ estimated from
the previous iteration.
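A minimal sketch of the underlying weighted least squares step is given below for a two-source case; it is not the CMB software itself, and a full effective variance solution would recompute the weights from both profile and measurement uncertainties on each iteration. All names are illustrative.

```cpp
#include <vector>

// One chemical species: measured concentration, source profile fractions for
// two source types, and an effective variance used as the weight.
struct Species {
    double c;        // ambient concentration
    double a1, a2;   // fraction of the species in sources 1 and 2
    double var;      // effective variance of the measurement
};

// Weighted least squares estimate of the two source contributions S1, S2
// from C_i = a_i1 * S1 + a_i2 * S2, solved via the 2x2 normal equations.
bool solveTwoSourceCMB(const std::vector<Species>& sp, double& s1, double& s2)
{
    double m11 = 0, m12 = 0, m22 = 0, b1 = 0, b2 = 0;
    for (const Species& s : sp) {
        double w = 1.0 / s.var;
        m11 += w * s.a1 * s.a1;
        m12 += w * s.a1 * s.a2;
        m22 += w * s.a2 * s.a2;
        b1  += w * s.a1 * s.c;
        b2  += w * s.a2 * s.c;
    }
    double det = m11 * m22 - m12 * m12;
    if (det == 0.0) return false;   // collinear profiles: no unique solution
    s1 = (m22 * b1 - m12 * b2) / det;
    s2 = (m11 * b2 - m12 * b1) / det;
    return true;
}
```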
Watson12 found that individual sources with similar source profiles would yield
unreliable values if included in the same chemical mass balance. Henry13 proposed
a quantitative method of identifying this interference between similar source
compositions, which is known as "collinearity." He uses the singular value de-
composition to define an "estimable space" into which resolvable sources should lie.
The sources which do not fall into this estimable space are collinear, or too similar
to be resolved from the sources which do lie within the estimable space.
Williamson and Dubose14 claimed that ridge regression reduces collinear-
ities. Henry13 tested the ridge regression solution with respect to the separation
of urban and continental dust and found that the bias resulted in physically un-
realistic negative values for several of the $a_{ij}$. The ridge regression solution has
not been used in the CMB since these tests were published.
CMB Model Assumptions
The CMB model assumptions are:
1. Compositions of source emissions are constant over the period of ambient
and source sampling.
2. Chemical species do not react with each other, i.e., they add linearly.
3. All sources with a potential for significantly contributing to the receptor
have been identified and have had their emissions characterized.
4. The sources' compositions are linearly independent of each other.
5. The number of sources or source categories is less than or equal to the
number of chemical species.
6. Measurement uncertainties are random, uncorrelated, and normally dis-
tributed.
Effects of Deviations from CMB Model Assumptions
Assumptions 1 through 6 for the CMB model are fairly restrictive and will
never be totally complied with in actual practice. Fortunately, the CMB model can
tolerate reasonable deviations from these assumptions, though these deviations
increase the stated uncertainties of the source contribution estimates.
The CMB model has been subjected to a number of tests to determine its abil-
ity to tolerate deviations from model assumptions.3,12,13,16,17,18,19,20,21,22
These studies all point to the same basic conclusions regarding deviations from
the above-stated assumptions.
With regard to Assumption 1, source compositions, as seen at the receptor, are
known to vary substantially among sources, and even within a single source over
an extended period of time. These variations are both systematic and random
and are caused by three phenomena: 1) transformation and deposition between
the emission point and the receptor; 2) differences in fuel type and operating
processes between similar sources or the same source in time; and 3) uncertain-
ties or differences between the source profile measurement methods. Evaluation
studies have generally compared CMB results from several tests using randomly
perturbed input data and from substitutions of different source profiles for the
same source type. The general conclusions drawn from these tests are:
The error in the estimated source contributions due to biases in all of the
elements of a source profile is in direct proportion to the magnitude of the
biases.
For random errors, the magnitude of the source contribution errors decreases
as the number of components increases.
.9-8
-------
The most recent and systematic tests are those of Javitz22 which apply to a
simple four-source urban airshed and a complex ten-source urban airshed. These
tests, with 17 commonly measured chemical species, showed that primary mobile,
geological, coal-fired power plant, and vegetative burning source types can be
apportioned with uncertainties of approximately 30% when coefficients of variation
in the source profiles are as high as 50%. This performance was demonstrated even
without the presence of unique "tracer" species such as selenium for coal-fired
power plants or soluble potassium for vegetative burning. In a complex urban
airshed, which added residual oil combustion, marine aerosol, steel production,
lead smelting, municipal incineration, and a continental background aerosol, it was
found that the geological, coal-fired power plant, and background source profiles
were collinear with the measured species. At coefficients of variation in the source
profiles as low as 25%, average absolute errors were on the order of 60%, 50%, and
130% for the geological, coal-burning, and background sources, respectively. All
other sources were apportioned with average absolute errors of approximately 30%
even when coefficients of variation in the source profiles reached 50%. Once again,
these tests were performed with commonly measured chemical species, and results
would improve with a greater number of species which are specifically emitted by
the different source types.
With regard to the nonlinear summation of species, Assumption 2, no studies
have been performed to evaluate deviations from this assumption. While these
deviations are generally assumed to be small, conversion of gases to particles
and reactions between particles are not inherently linear processes. This assump-
tion is especially applicable to the end products of photochemical reactions and
their apportionment to the sources of the precursors. Further model evaluation is
necessary to determine the tolerance of the CMB model to deviations from this
assumption. The current practice is to apportion the primary material which has
not changed between source and receptor. The remaining quantities of reactive
species such as ammonium, nitrate, sulfate, and elemental carbon are then appor-
tioned to chemical compounds rather than directly to sources. While this approach
is not as satisfying as a direct apportionment, it at least separates primary from
secondary emitters, and the types of compounds apportioned give some insight
into the chemical pathways which formed them. As chemical reaction mecha-
nisms and rates, deposition velocities, atmospheric equilibrium, and methods to
estimate transport and aging time become better developed, it may be possible
to produce "fractionated" source profiles which will allow this direct attribution
of reactive species to sources. Such apportionment will require measurements of
gaseous as well as particulate species at receptor sites.
A major challenge to the application of the CMB is the identification of the
primary contributing sources for inclusion in the model, Assumption 3. Watson12
systematically increased the number of sources contributing to his simulated data
from four to eight contributors while solving the CMB equations assuming only
four sources. He also included more sources in the least squares solutions than
those which were actually contributors, with the following results:
Underestimating the number of sources had little effect on the calculated
source contributions if the prominent species contributed by the missing
sources were excluded from the solution.
When the number of sources was underestimated, and when prominent
species of the omitted sources were included in the calculation of source
contributions, the contributions of sources with properties in common with
the omitted sources were overestimated.
When source types actually present were excluded from the solution, ratios
of calculated to measured concentrations were often outside of the 0.5 to 2.0
range, and the sum of the source contributions was much less than the total
measured mass. The low calculated/measured ratios indicated which source
compositions should be included.
When the number of sources was overestimated, the sources not actually
present yielded contributions less than their standard errors if their source
profiles were significantly distinct from those of other sources. The over-
specification of sources decreased the standard errors of the source contri-
bution estimates.
Recent research suggests that Assumption 3 should be restated to specify that
source contributions above detection limits should be included in the CMB. At
this time, however, it is not yet possible to determine the "detection limit" of a
source contribution at a receptor since this is a complicated and unknown function
of the other source contributions, the source composition uncertainties and the
uncertainties of the receptor measurements. Additional model testing is needed
to define this "detection limit."
The linear independence of source compositions required by Assumption 4
has become a subject of considerable interest since the publication of Henry's13
singular value decomposition (SVD) analysis. As previously noted, this analysis
provides quantitative measures of collinearity and the sensitivity of CMB results
to specific receptor concentrations. These measures can be calculated analytically
in each application. Henry13 also proposed an optimal linear combination of source
contributions that have been determined to be collinear.
9-10
-------
Other "regression diagnostics" have been summarized by Belsley23 and have
been applied to the CMB by DeCesar.19,20 Kim and Henry24 show that most of
these diagnostics are useless because they are based on the assumption of zero
uncertainty in the source profiles. They demonstrate, through the examination of
randomly perturbed model input data, that the values for these diagnostics vary
substantially with typical random changes in the source profiles.
Tests performed on simulated data with obviously collinear source composi-
tions typically result in positive and negative values for the collinear source types
as well as large standard errors on the collinear source contribution estimates. Un-
less the source compositions are nearly identical, the sum of these large positive
and negative values very closely approximates the sum of the true contributions.
With most commonly measured species (e.g., ions, elements and carbon) and
source types (e.g., motor vehicle, geological, residual oil, sea salt, steel production,
wood burning and various industrial processes), from five to seven sources are
linearly independent of each other in most cases.22
Gordon15 found instabilities in the ordinary weighted least square solutions to
the CMB equations when species presumed to be "unique" to a certain source
type were removed from the solution. Using simulated data with known pertur-
bations ranging from 0 to 20 percent, Watson12 found: "In the presence of likely
uncertainties, sources such as urban dust and continental background dust cannot
be adequately resolved by least squares fitting, even though their compositions are
not identical. Several nearly unique ratios must exist for good separation."
With regard to Assumption 5, the true number of individual sources contribut-
ing to receptor concentrations is generally much larger than the number of species
that can be measured. It is therefore necessary to group sources into source types
of similar compositions so that this assumption is met. For the most commonly
measured species, meeting Assumption 4 practically defines these groupings.
With respect to Assumption 6 (the randomness, normality, and the uncorre-
lated nature of measurement uncertainties), there are no results available from
verification or evaluation studies. Every least squares solution to the CMB equa-
tions requires this assumption, as demonstrated by the derivation of Watson.11 In
reality, very little is known about the distribution of errors for the source compo-
sitions and the ambient concentrations. If anything, the distribution probably fol-
lows a log-normal rather than a normal distribution. Ambient concentrations can
never be negative, and a normal distribution allows for a substantial proportion
of negative values, while a log-normal distribution allows no negative values. For
small errors (e.g., less than 20%), the actual distribution may not be important,
but for large errors, it probably is important. A symmetric distribution becomes
less probable as the coefficient of variation of the measurement increases. This
is one of the most important assumptions of the solution method that requires
testing.
Model Input and Output Data
The chemical mass balance modeling procedure requires: 1) identification of
the contributing source types; 2) selection of chemical species to be included;
3) estimation of the fraction of each of the chemical species which is contained
in each source (i.e., the source compositions); 4) estimation of the uncertainty
in both ambient concentrations and source compositions; 5) solution of the
chemical mass balance equations; and 6) validation and reconciliation. Each of
these steps requires different types of data.
Emissions inventories are examined to determine the types of sources which
are most likely to influence a receptor. Principal components analysis applied to
a time series of chemical measurements is also a useful method of determining the
number and types of sources. After these sources have been identified, profiles
acquired from similar sources25 (references that identify most of the available source profiles) are
examined to select the chemical species to be measured. Watson12 demonstrates
that the more species measured, the better the precision of the CMB apportion-
ment.
The ambient concentrations of these species, C_i, and their fractional amounts
in each source-type emission, F_{ij}, are the measured quantities which serve as
CMB model input data. These values require uncertainty estimates, \sigma_{C_i} and \sigma_{F_{ij}},
which are also input data. Input data uncertainties are used both to weight the
importance of input data values in the solution and to calculate the uncertainties
of the source contributions. The output consists of: 1) the source contribution
estimates (S_j) for each source type; 2) the standard errors of these source contri-
bution estimates; and 3) the amount contributed by each source type to each
chemical species.
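To make the data flow above concrete, the following is a minimal sketch (in Python) of a
weighted least squares solution to the CMB equations, using an effective-variance style
weighting in the spirit of Watson et al.11 It is illustrative only and is not the operational
CMB software; the function name and the two-source, three-species numbers are hypothetical.

    import numpy as np

    def cmb_weighted_least_squares(C, sigma_C, F, sigma_F, n_iter=20):
        """Sketch of an effective-variance weighted least squares CMB solution.

        C       : (n_species,) ambient concentrations
        sigma_C : (n_species,) uncertainties of the ambient concentrations
        F       : (n_species, n_sources) source composition fractions
        sigma_F : (n_species, n_sources) uncertainties of the source compositions
        Returns the source contribution estimates S_j and their standard errors.
        """
        n_species, n_sources = F.shape
        S = np.zeros(n_sources)                       # initial contribution estimates
        for _ in range(n_iter):
            # Effective variance of each species combines ambient and profile
            # uncertainties, weighted by the current contribution estimates.
            V_eff = sigma_C**2 + sigma_F**2 @ S**2    # (n_species,)
            W = np.diag(1.0 / V_eff)
            A = F.T @ W @ F                           # weighted normal equations
            b = F.T @ W @ C
            S = np.linalg.solve(A, b)
        cov_S = np.linalg.inv(A)                      # approximate covariance of S
        return S, np.sqrt(np.diag(cov_S))

    # Hypothetical two-source, three-species example (illustrative numbers only).
    F = np.array([[0.10, 0.02],
                  [0.01, 0.20],
                  [0.05, 0.05]])
    sigma_F = 0.10 * F
    C = np.array([1.2, 2.1, 0.9])
    sigma_C = 0.10 * C
    S, se_S = cmb_weighted_least_squares(C, sigma_C, F, sigma_F)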
TMBR Model
The TMBR model is a multiple regression based model which may be used to
apportion an aerosol species of interest measured at a receptor site to the various
contributing sources. The actual regression analysis may be performed using the
method of ordinary least squares. However, since the independent variables in
this model are ambient concentrations of various aerosol components which are
measured with error, the method of Orthogonal Distance Regression (ODR) is ex-
pected to give better estimates of the source contributions. A detailed theoretical
discussion of the method of ODR may be found in the book by Fuller (1987).26
Model Equations
In this section it is shown that, under appropriate assumptions, the general
mass balance model can be reduced to a simpler linear model. Let aerosol com-
ponent i = 1 be a secondary aerosol with i^* = 2 denoting the corresponding
parent species. It is of interest to determine the fractional contribution to the
ambient concentrations of this secondary aerosol component by a distinguished
source which will be denoted by the subscript j = 1. We will also assume that
aerosol species i_1 is a tracer for this distinguished source. Let sources j = 2 through
j = n_2 have an associated tracer species i_2, sources j = n_2 + 1 through j = n_3 have
an associated tracer species i_3, etc., and sources j = n_{h-1} + 1 through j = n_h have
an associated tracer i_h. Sources j = n_h + 1 through j = n may be unknown sources
or may be known sources with tracers that are not measured at the receptor. For
the sake of uniformity of notation we let n_1 = 1. Thus the n sources have been
partitioned into h + 1 groups, each of the first h groups of sources being associated
with a unique tracer species or with a fraction of some reference species that has
been calculated using CMB or some other appropriate model.
In general, for 1 \le u \le h and n_{u-1} + 1 \le j \le n_u, we have

    C_{i_u jk} = c_{i_u jk} r_{i_u jk} d_{jk}.

Therefore,

    C_{1jk} = \beta_{i_u jk} C_{i_u jk}    (18)

where \beta_{i_u jk} is defined as

    \beta_{i_u jk} = C_{1jk} / C_{i_u jk}.    (19)

For n_h + 1 \le j \le n, let

    \beta_{0k} = \sum_{j=n_h+1}^{n} C_{1jk}.    (20)

Assuming that \beta_{i_u jk} takes a common value \beta_{i_u k} for all sources j in group u,
the general mass balance equation then reduces to the equation

    C_{1k} = \beta_{0k} + \sum_{u=1}^{h} \beta_{i_u k} C_{i_u k}    (21)

for each sampling period k = 1, 2, ..., s.
If the quantities \beta_{0k} and \beta_{i_u k} are all independent of k for each u,
\beta_{0k} = \beta_0 and \beta_{i_u k} = \beta_{i_u}, then the above set of equations reduces to

    C_{1k} = \beta_0 + \sum_{u=1}^{h} \beta_{i_u} C_{i_u k}.    (22)
The quantities C_{i_u k} are the ambient concentrations of the tracer species i_1, i_2, ..., i_h
and are assumed known. The quantities C_{1k} are the ambient concentrations of
the aerosol species being apportioned and are also assumed known. We thus have
a set of s linear equations in the h + 1 unknowns \beta_0, \beta_{i_1}, \beta_{i_2}, ..., \beta_{i_h}. If the system of
equations has rank h + 1, then these unknown beta coefficients may be obtained
by solving the above system of linear equations. The apportionment of the species
of interest to the various groups of sources is then carried out by calculating the
individual terms of the equations above.
In certain instances it is known that the beta coefficients will differ significantly
from one time period to another. In such cases it may be possible to determine,
based on physical and chemical reasons, a function of the field measurements, the
sampling period, and the source, which we denote by \phi_{i_u k}, such that it is more
reasonable to assume that the quantities \beta_{i_u k}/\phi_{i_u k} are constant for all sampling
periods rather than the quantities \beta_{i_u k}. In such cases we define \gamma_{i_u} = \beta_{i_u k}/\phi_{i_u k}. For
uniformity of notation we define \gamma_0 to be equal to \beta_0. This results in the system
of linear equations

    C_{1k} = \gamma_0 + \sum_{u=1}^{h} \gamma_{i_u} C_{i_u k} \phi_{i_u k}.    (23)

We may refer to this set of equations as the TMBR model. Again, if this set
of equations has rank h + 1, then we may solve for the gamma coefficients and
consequently calculate the individual terms of the equations. This will yield the
apportionment we seek. Note that if we take \phi_{i_u k} = 1 then this set of equations
reduces to the set of equations in (22).
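As a concrete illustration of how the system in (23) might be solved in practice, the sketch
below (Python, hypothetical data) builds the design matrix from the tracer concentrations
C_{i_u k} and the linearizing factors \phi_{i_u k} and estimates the gamma coefficients by ordinary
least squares; as noted above, ODR would be preferred when the regressors themselves carry
measurement error. The function and variable names are illustrative assumptions of this
sketch rather than part of the study design.

    import numpy as np

    def tmbr_ols(C1, C_tracers, phi=None):
        """Sketch: estimate the gamma coefficients of Equation (23) by OLS.

        C1        : (s,) ambient concentrations of the species being apportioned
        C_tracers : (s, h) ambient tracer concentrations C_{i_u k}
        phi       : (s, h) linearizing factors phi_{i_u k}; defaults to 1 (Equation (22))
        Returns gamma_0, the gamma_{i_u}, and the per-period contribution of each group.
        """
        s, h = C_tracers.shape
        if phi is None:
            phi = np.ones((s, h))
        X = np.column_stack([np.ones(s), C_tracers * phi])  # regressors: 1, C*phi
        gamma, *_ = np.linalg.lstsq(X, C1, rcond=None)      # requires rank h + 1
        contributions = X[:, 1:] * gamma[1:]                # gamma_{i_u} C_{i_u k} phi_{i_u k}
        return gamma[0], gamma[1:], contributions

    # Hypothetical example: two tracer groups, five sampling periods.
    rng = np.random.default_rng(0)
    C_tr = rng.uniform(1.0, 5.0, size=(5, 2))
    C1 = 0.5 + 0.8 * C_tr[:, 0] + 0.3 * C_tr[:, 1]
    gamma0, gammas, contrib = tmbr_ols(C1, C_tr)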
Tracer Mass Balance (TMB) Model
This is a special case of the TMBR model and is obtained by partitioning
the sources contributing a particular secondary aerosol species (say species i = 1,
with associated parent species designated as species i^* = 2) into two groups rather
than h + 1 groups. That is, we take h = 1 in the TMBR model. The two groups
are: (i) a distinguished source labeled j = 1 with associated tracer species i_1,
and (ii) all other sources. In this case, the TMBR model reduces to

    C_{1k} = \beta_{0k} + \beta_{i_1 k} C_{i_1 k}.    (24)
As before, if we assume that the beta coefficients are independent of the sampling
period, then the TMB model equations further reduce to

    C_{1k} = \beta_0 + \beta_{i_1} C_{i_1 k}.    (25)
If the quantities C_{1k} and C_{i_1 k} are known, and if the set of linear equations in (25)
has rank 2, then we can solve for the unknown beta coefficients and consequently
carry out the apportionment of species 1 by computing the individual terms of
the above equations.
In certain instances it is known that the beta coefficients will differ significantly
from one time period to another. In such cases it may be possible to determine,
based on physical and chemical reasons, a function of the field measurements, the
sampling period, and the source, which we denote by \phi_{i_1 k}, such that it is more
reasonable to assume that the quantities \beta_{i_1 k}/\phi_{i_1 k} are constant for all sampling
periods rather than the quantities \beta_{i_1 k}. In such cases we define \gamma_{i_1} = \beta_{i_1 k}/\phi_{i_1 k}. For
uniformity of notation we define \gamma_0 to be equal to \beta_0. This results in the system
of linear equations

    C_{1k} = \gamma_0 + \gamma_{i_1} C_{i_1 k} \phi_{i_1 k}.    (26)

We may refer to the above system of equations as the TMB model. Again, if
this set of equations has rank 2, then we may solve for the gamma coefficients and
consequently calculate the individual terms of the equations. This will yield the
apportionment we seek.
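Because the TMB model is simply the h = 1 case, the same least squares machinery applies
directly; a minimal sketch with hypothetical numbers is shown below, where the fitted
\beta_{i_1} C_{i_1 k} term gives the per-period contribution of the distinguished source and \beta_0
absorbs all other sources.

    import numpy as np

    # TMB = TMBR with a single tracer group (h = 1); hypothetical numbers.
    rng = np.random.default_rng(1)
    C_i1 = rng.uniform(0.5, 2.0, size=10)              # tracer concentrations, 10 periods
    C1 = 0.2 + 1.5 * C_i1 + rng.normal(0.0, 0.05, 10)  # species being apportioned
    X = np.column_stack([np.ones_like(C_i1), C_i1])    # regressors: 1, C_{i_1 k}
    (beta0, beta1), *_ = np.linalg.lstsq(X, C1, rcond=None)
    fraction_from_source_1 = beta1 * C_i1 / C1         # per-period fractional contribution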
A Special Case
The simplest versions of the TMBR model use \phi_{i_u k} = 1 for all time periods
and source groups. However, if K_c or K_d are dependent on other variables such as
solar radiation, the concentrations of key atmospheric chemicals, and so forth, it may
be possible to choose a form of \phi that will linearize the TMBR model.
In apportioning a secondary aerosol, the constant \beta_{i_u jk} derived from the GMB
model had the form

    \beta_{i_u jk} = \frac{c_{i^* jk}}{c_{i_u jk}} \cdot \frac{r^*_{ijk}}{r_{i_u jk}}    (27)

with

    r^*_{ijk} = \frac{K_c(i^*, j, k)}{K_c(i^*, j, k) + K_d(i^*, j, k) - K_d(i, j, k)}
                \left\{ \exp(-K_d(i, j, k) t_{jk}) - \exp(-[K_c(i^*, j, k) + K_d(i^*, j, k)] t_{jk}) \right\}    (28)

and

    r_{i_u jk} = \exp(-[K_c(i_u, j, k) + K_d(i_u, j, k)] t_{jk}).    (29)
If the species i_u does not convert and its deposition rate is the same as that of
the secondary aerosol species i being apportioned, then

    r_{i_u jk} = \exp(-K_d(i, j, k) t_{jk})    (30)

so that the ratio r^*_{ijk}/r_{i_u jk} reduces to K_c(i^*, j, k) t_{jk} after using the approximation

    \exp(x) \approx 1 + x  (when x is sufficiently small).    (31)
The full infinite series expansion for exp(x) is given by

    \exp(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots

and a first-order approximation has been used in (31). It is possible to use higher-
order approximations of exp(x) in these derivations, but this is not pursued here.
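For the reader's convenience, the reduction above can be written out explicitly; this is just
Equations (28), (30), and (31) combined, using the shorthand K_c = K_c(i^*, j, k),
K_d^* = K_d(i^*, j, k), K_d = K_d(i, j, k), and t = t_{jk}:

    \frac{r^*_{ijk}}{r_{i_u jk}}
      = \frac{K_c}{K_c + K_d^* - K_d} \cdot
        \frac{\exp(-K_d t) - \exp(-(K_c + K_d^*) t)}{\exp(-K_d t)}
      = \frac{K_c}{K_c + K_d^* - K_d}
        \left[ 1 - \exp(-(K_c + K_d^* - K_d) t) \right]
      \approx \frac{K_c}{K_c + K_d^* - K_d} (K_c + K_d^* - K_d) t
      = K_c t.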
As an example of the above approximation, consider a case where K_c(i^*, j, k) is
proportional to RH_{i_u k} with proportionality constant B_{i^* j}. Then the ratio r^*_{ijk}/r_{i_u jk}
is equal to B_{i^* j} t_{jk} RH_{i_u k}, which gives

    \beta_{i_u jk} = \frac{c_{i^* jk}}{c_{i_u jk}} B_{i^* j} t_{jk} RH_{i_u k}.    (32)
Defining

    \gamma_{i_u k} = \beta_{i_u k} / RH_{i_u k}    (33)

and assuming that the \gamma_{i_u k} are constant for all sampling periods rather than the
quantities \beta_{i_u k} suggests the use of RH_{i_u k} as a linearizing factor in the TMBR model
equation (23).
The use of RH as a linearization parameter does not necessarily imply that
the RH dependence of K_c is grounded in some basic chemical process. Rather, in
the case of SO2 to SO4 oxidation, RH may be thought of as a surrogate variable
depicting the amount of time that SO2 spends in contact with clouds, where ox-
idation is accelerated. Therefore, assuming RH_{i_u k} = RH_k, the TMBR model for
the SO2-SO4 system becomes

    C_{SO4,k} = \gamma_0 + \sum_{u=1}^{h} \gamma_{i_u} C_{i_u k} RH_k.    (34)
The quantities C_{SO4,k}, C_{i_u k}, and RH_k (that is, C_{1k}, C_{i_u k}, and \phi_{i_u k}) are
all observed with error. We shall denote the true values by \tilde{C}_{1k}, \tilde{C}_{i_u k}, and \tilde{\phi}_{i_u k} and
the observed values by the quantities C_{1k}, C_{i_u k}, and \phi_{i_u k}. We then assume that

    C_{1k} = \tilde{C}_{1k} + \epsilon_{C_{1k}},
    C_{i_u k} = \tilde{C}_{i_u k} + \epsilon_{C_{i_u k}},
    \phi_{i_u k} = \tilde{\phi}_{i_u k} + \epsilon_{\phi_{i_u k}}.

The quantity \epsilon_{C_{1k}} is a random error with mean 0 and standard deviation \sigma_{C_{1k}}.
The quantity \epsilon_{C_{i_u k}} is a random error with mean 0 and standard deviation \sigma_{C_{i_u k}}.
Likewise, the quantity \epsilon_{\phi_{i_u k}} is a random error with mean 0 and standard deviation
\sigma_{\phi_{i_u k}}. The method of ODR can then be used to estimate the \gamma coefficients and
the corresponding source-group contributions and fractional contributions.
From this we obtain the estimated fractional contribution \hat{F}_{uk} of species 1 by
source group u for sampling period k, as well as the estimated fractional contribution
\hat{F}_u by source group u for the entire sampling period.
To calculate the uncertainties to be associated with these estimates we may
use the following procedure. We construct several (say, 100) synthetic data sets
by perturbing the estimates of the true values C_{1k}, C_{i_u k}, and \phi_{i_u k} using Gaussian
random deviates with mean zero and standard deviations equal to the respective
measurement uncertainties. Each such synthetic data set is subjected to an ODR
analysis to obtain estimates of the contributions and fractional contributions of the
various source groups to the receptor, as explained above. This procedure results
in a whole collection of estimates (say, 100) for the various quantities of interest.
The root mean square error is then calculated for each quantity of interest using
the collection of estimates obtained from the perturbed synthetic data sets and using
the initial estimates obtained from the actual data set as if they were the true
values. This root mean square error associated with a given quantity of interest
is used to quantify the uncertainty associated with that quantity. Recall that if \theta
represents the true value of a quantity and \hat{\theta}_q represents the estimate of \theta obtained
from the q-th synthetic data set, then the root mean square error is calculated by

    \mathrm{RMSE} = \sqrt{ \frac{1}{Q} \sum_{q=1}^{Q} (\hat{\theta}_q - \theta)^2 }

where Q is the number of synthetic data sets.
Alternatively, we may quantify the uncertainty associated with a given estimate
using confidence intervals, but we do not discuss that approach here.
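A minimal Python sketch of this synthetic-data procedure is given below. The fitting routine
fit_model and the dictionary-based bookkeeping are hypothetical placeholders for whatever
ODR implementation is actually used; only the perturbation-and-RMSE logic mirrors the
description above.

    import numpy as np

    def perturbation_rmse(data, sigma, fit_model, n_sets=100, seed=0):
        """Sketch: quantify output uncertainty by perturbing inputs with Gaussian noise.

        data      : dict of measured arrays (e.g., C1, tracer concentrations, phi)
        sigma     : dict of matching standard deviations for each measured array
        fit_model : function mapping a data dict to a 1-D array of quantities of interest
        Returns the estimates from the unperturbed data and their RMSE across n_sets
        synthetic data sets, with the unperturbed estimates treated as the true values.
        """
        rng = np.random.default_rng(seed)
        theta_hat = np.asarray(fit_model(data))            # estimates from actual data
        sq_err = np.zeros_like(theta_hat, dtype=float)
        for _ in range(n_sets):
            perturbed = {k: v + rng.normal(0.0, sigma[k], size=np.shape(v))
                         for k, v in data.items()}
            theta_q = np.asarray(fit_model(perturbed))     # estimates from synthetic set q
            sq_err += (theta_q - theta_hat) ** 2
        return theta_hat, np.sqrt(sq_err / n_sets)         # root mean square errors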
Second Approach. In this section we discuss an approximate method of
calculating the uncertainties associated with the model outputs. The concentra-
tions C^*_{i_u k} of species 1 (the secondary species of interest) associated with each trace
element i_u for each time period may be calculated by multiplying the measured
values A_{i_u k} = C_{i_u k} \phi_{i_u k} for each trace element by the respective estimated regres-
sion coefficients, as follows. (\hat{\gamma}_0 is simply the estimated intercept, representing
the estimated contribution from all sources not explicitly accounted for by any of
the reference species used in the TMBR model.)
    C^*_{i_u k} = \hat{\gamma}_{i_u} A_{i_u k}    (35)

The uncertainties for each of these concentrations C^*_{i_u k} may be calculated by

    \sigma^2_{C^*_{i_u k}} = \hat{\gamma}_{i_u}^2 \sigma^2_{A_{i_u k}} + A_{i_u k}^2 \sigma^2_{\hat{\gamma}_{i_u}}.    (36)

The quantities \sigma_{A_{i_u k}} are the uncertainties in the measured values A_{i_u k} and are
assumed to be known. In the special case discussed in the previous section, these
uncertainties are part of the WHITEX data base. The quantities \sigma_{\hat{\gamma}_{i_u}} may be
obtained as outputs from the regression packages that are used. Errors in A_{i_u k}
and the estimated regression coefficients have been assumed to be independent in
the calculation of Equation (36).
The total calculated amount of species 1 for each time period, C^*_{1k}, is the sum
of the C^*_{i_u k}'s over all the reference aerosol species i_u and the intercept \hat{\gamma}_0:

    C^*_{1k} = \hat{\gamma}_0 + \sum_{u=1}^{h} C^*_{i_u k}.    (37)

The uncertainty associated with the total calculated concentration of species 1 for
each time period is

    \sigma^2_{C^*_{1k}} = \sigma^2_{\hat{\gamma}_0} + \sum_{u=1}^{h} \sigma^2_{C^*_{i_u k}}    (38)
assuming the covariance terms arising in the derivation are negligible.
The estimated fraction \hat{F}_{uk} of species 1 from each source group for any given time
period is equal to the amount of species 1 associated with the trace element divided
by the total calculated concentration of species 1:

    \hat{F}_{uk} = C^*_{i_u k} / C^*_{1k}.    (39)

The uncertainty for each of these fractions is

    \sigma^2_{\hat{F}_{uk}} = \hat{F}_{uk}^2 \left[ \frac{\sigma^2_{C^*_{i_u k}}}{(C^*_{i_u k})^2} + \frac{\sigma^2_{C^*_{1k}}}{(C^*_{1k})^2} \right].    (40)
The mean fraction \hat{F}_u of species 1 attributed to each source type is estimated by
the mean species 1 concentration \bar{C}^*_u for that source type divided by the mean
total calculated concentration of species 1, \bar{C}^*, as follows:

    \hat{F}_u = \bar{C}^*_u / \bar{C}^*    (41)
where

    \bar{C}^*_u = \frac{1}{s} \sum_{k=1}^{s} C^*_{i_u k}    (42)

and

    \bar{C}^* = \frac{1}{s} \sum_{k=1}^{s} C^*_{1k}.    (43)

The uncertainties for \bar{C}^*_u and \bar{C}^* are calculated by

    \sigma^2_{\bar{C}^*_u} = \frac{1}{s^2} \sum_{k=1}^{s} \sigma^2_{C^*_{i_u k}}    (44)

and

    \sigma^2_{\bar{C}^*} = \frac{1}{s^2} \sum_{k=1}^{s} \sigma^2_{C^*_{1k}}.    (45)

The uncertainties associated with the mean fractions \hat{F}_u are calculated by

    \sigma^2_{\hat{F}_u} = \hat{F}_u^2 \left[ \frac{\sigma^2_{\bar{C}^*_u}}{(\bar{C}^*_u)^2} + \frac{\sigma^2_{\bar{C}^*}}{(\bar{C}^*)^2} \right].    (46)
The uncertainty formulas are all derived using propagation of error methods
and assuming the covariances between various terms occurring in the derivation
are negligible. These assumptions will not be true in practice and so the usefulness
of the above approximations will depend upon how severely the assumptions used
in the above derivations are violated.
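The bookkeeping in Equations (35) through (46) can be collected into a short routine such as
the Python sketch below (illustrative only); covariances are neglected exactly as in the
derivation above, and the argument names are assumptions of this sketch rather than
quantities defined in the study plan.

    import numpy as np

    def tmbr_error_propagation(gamma0, sigma_gamma0, gamma, sigma_gamma, A, sigma_A):
        """Sketch of Equations (35)-(46): contributions, fractions, and uncertainties.

        gamma0, sigma_gamma0 : estimated intercept and its uncertainty
        gamma, sigma_gamma   : (h,) estimated coefficients and their uncertainties
        A, sigma_A           : (s, h) measured A_{i_u k} = C_{i_u k} phi_{i_u k} and uncertainties
        """
        C_star = gamma * A                                          # Eq. (35)
        var_C_star = gamma**2 * sigma_A**2 + A**2 * sigma_gamma**2  # Eq. (36)
        C1_calc = gamma0 + C_star.sum(axis=1)                       # Eq. (37)
        var_C1 = sigma_gamma0**2 + var_C_star.sum(axis=1)           # Eq. (38)
        F = C_star / C1_calc[:, None]                               # Eq. (39)
        var_F = F**2 * (var_C_star / C_star**2 +
                        (var_C1 / C1_calc**2)[:, None])             # Eq. (40)
        s = A.shape[0]
        C_bar_u = C_star.mean(axis=0)                               # Eq. (42)
        C_bar = C1_calc.mean()                                      # Eq. (43)
        var_C_bar_u = var_C_star.sum(axis=0) / s**2                 # Eq. (44)
        var_C_bar = var_C1.sum() / s**2                             # Eq. (45)
        F_mean = C_bar_u / C_bar                                    # Eq. (41)
        var_F_mean = F_mean**2 * (var_C_bar_u / C_bar_u**2 +
                                  var_C_bar / C_bar**2)             # Eq. (46)
        return F, np.sqrt(var_F), F_mean, np.sqrt(var_F_mean)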
Model Assumptions
The TMBR model assumptions are:
- The chemical species used as tracers in the model are assumed to be uniquely
  emitted by non-overlapping groups of sources. In particular, none of the
  species other than the tracer associated with the source of interest can be
  emitted by another source, unless there is an independent method, such as
  CMB, to partition the ambient species concentrations into components at-
  tributable to the various groups of sources.
- The composition of source emissions is constant over the period of ambient
  sampling.
- Deposition and conversion are constant from one sampling period to the
  next for each subgroup u.
- Measurement errors are random, uncorrelated, and normally distributed.

For the special case where K_c was assumed to be proportional to RH, the addi-
tional assumptions are:
- Exponential forms of deposition and conversion can be represented by first-
  order approximations.
- The RH at the receptor site is indicative of the amount of time that air
  parcels spend in contact with clouds and therefore can be used as an indi-
  cator of the oxidation rate.
Potential Deviations from Assumptions
It is highly unlikely that deposition and conversion are constant in space and
time and in many cases one can expect source profiles to change over the course
of the study. These assumptions are implicit in the assumption that the background
and fractionation coefficients are time independent. Whether or not a lineariza-
tion scheme is appropriate can be examined through goodness-of-fit tests of the
proposed model and possibly by direct experimental verification. The uniqueness
of tracer species can be assessed by source testing and by releasing unique tracers
from sources of interest.
Deviation from any of the assumptions will increase the calculated uncertainty
in the final apportionments. The extent to which the inflation of uncertainty
occurs will depend on how variable the regression coefficients are. Research into
the effect of deviation from assumptions on apportionments is needed.
Model Inputs
The model requires the following quantities as inputs:
- The ambient concentrations of the aerosol species being apportioned.
- The ambient concentrations of the reference tracer species.
- Relative humidity at the receptor for each of the sampling periods, when
  RH is used as the linearizing factor.
- The uncertainties in the above quantities, when ODR rather than OLS is
  used to estimate the \gamma coefficients.
Model Outputs
The model outputs include:
- Estimates of the actual amount of the contribution and the fractional con-
  tribution of the aerosol species of interest by the source or source type of
  interest to the receptor, along with the associated uncertainty estimates.
- Estimates of the average amount and the average fractional amount of the
  aerosol species of interest contributed by each source or source type of in-
  terest, along with the associated uncertainty estimates.
Differential Mass Balance (DMB) Model
The DMB model is a receptor model combined with elements of a deterministic
model. In this approach, dispersion is accounted for by ratioing ambient trace ma-
terial concentrations attributed to a source to known trace material release rates,
while deposition and conversion are explicitly calculated. The name "Differential
Mass Balance" refers to the use of differences in trace material concentrations to
account for dispersion.
Model Equations
Suppose a particular source is of interest and we wish to determine the frac-
tional contribution of some aerosol species to the receptor by that source. We
shall designate the aerosol species of interest by the subscript i and the source
of interest by j. If species i is a secondary species, then the corresponding parent
species will be denoted by the subscript i^*. For example, if SO4 is of interest, then
i stands for SO4 and i^* stands for SO2. We are then interested in the quantity
C_{ijk} for each of the sampling periods. We have, from Equation (2), that

    C_{ijk} = c_{ijk} r_{ijk} d_{jk} + c_{i^* jk} r^*_{ijk} d_{jk}.    (47)

If i represents a primary species, then r^*_{ijk} is zero for all k. If i represents a
secondary aerosol species that is not emitted as a primary aerosol, the quantity
c_{ijk} is zero for all k. Therefore, the above equation simplifies to

    C_{ijk} = c_{ijk} r_{ijk} d_{jk}    (48)
when i is a primary species and

    C_{ijk} = c_{i^* jk} r^*_{ijk} d_{jk}    (49)

when i is a secondary species. A characteristic feature of DMB model applications
is that the dispersion factor d_{jk} is determined based on field measurements. If a
unique tracer is available for source j, then d_{jk} may be calculated based on this
unique tracer. It can also be calculated based on a reference aerosol species that
may not be a unique tracer for source j, by first calculating the amount of this
reference species contributed to the receptor by the source of interest. The chemical
mass balance model may be applied for this purpose. Other approaches are also
possible.
The following discussion assumes that a unique tracer is available for the source
j of interest. This source will be referred to as St. The tracer material may be a
naturally emitted primary aerosol species or may be introduced artificially. The
tracer aerosol species is denoted by the subscript i_0. Therefore Equation (48) becomes

    C_{i_0 jk} = c_{i_0 jk} r_{i_0 jk} d_{jk}.    (50)
Dividing the quantity C_{ijk} by the quantity C_{i_0 jk}, we get

    \frac{C_{ijk}}{C_{i_0 jk}} = \frac{c_{ijk}}{c_{i_0 jk}} \cdot \frac{r_{ijk}}{r_{i_0 jk}}    (51)

when species i is a primary aerosol and

    \frac{C_{ijk}}{C_{i_0 jk}} = \frac{c_{i^* jk}}{c_{i_0 jk}} \cdot \frac{r^*_{ijk}}{r_{i_0 jk}}    (52)

when species i is a secondary aerosol. It follows from this that

    C_{ijk} = \frac{c_{ijk}}{c_{i_0 jk}} \cdot \frac{r_{ijk}}{r_{i_0 jk}} \, C_{i_0 jk}    (53)

for primary aerosols i and

    C_{ijk} = \frac{c_{i^* jk}}{c_{i_0 jk}} \cdot \frac{r^*_{ijk}}{r_{i_0 jk}} \, C_{i_0 jk}    (54)

for secondary aerosols.
Since aerosol component i_0 is a tracer for source j, the quantity C_{i_0 jk} is the
same as the quantity C_{i_0 k}, which is the ambient concentration of species i_0 at the
receptor and can be measured. If, furthermore, the quantities K_d(i, j, k) and K_c(i, j, k)
are known when species i is primary, or K_c(i^*, j, k), K_d(i^*, j, k), and K_d(i, j, k)
are known when species i is secondary, and if in addition K_d(i_0, j, k), K_c(i_0, j, k),
and t_{jk}, as well as the ratio c_{i^* jk}/c_{i_0 jk}, are known, then the contribution of the
source of interest to the concentrations of the species of interest at the receptor
can, in principle, be calculated.
If T represents a unique nonconverting, non-depositing tracer for source j = 1,
then for a species that is directly emitted by source j = 1, Equation (53) for
primary aerosols reduces to

    C_{i1k} = \frac{c_{i,1,k}}{c_{T,1,k}} r_{i,1,k} C_{T,k}.    (55)

If the ratio c_{i,1,k}/c_{T,1,k} is known, the form of r_{i,1,k} is

    r_{i,1,k} = \exp(-[K_c(i, 1, k) + K_d(i, 1, k)] t_{1k}).

For a species that is not directly emitted, but is a secondary species which is
absent at the source, the equation for the DMB reduces to

    C_{i1k} = \frac{c_{i^*,1,k}}{c_{T,1,k}} r^*_{i,1,k} C_{T,k}.    (56)

The ratio c_{i^*,1,k}/c_{T,1,k} is assumed known, and the form of r^*_{i,1,k} in this case is

    r^*_{i,1,k} = \frac{K_c(i^*, 1, k)}{K_c(i^*, 1, k) + K_d(i^*, 1, k) - K_d(i, 1, k)}
                  \left\{ \exp(-K_d(i, 1, k) t_{1k}) - \exp(-[K_c(i^*, 1, k) + K_d(i^*, 1, k)] t_{1k}) \right\}

where

K_c(i, 1, k) = conversion rate of species i from source 1 to its secondary form
during sampling period k, and

K_d(i, 1, k) = deposition rate of species i from source 1 during sampling period k.
Considering a specific example for SO4 and SO2, Equations (55) and (56) become

    C_{SO4,1,k} = \frac{c_{SO2,1,k}}{c_{T,1,k}} r^*_{SO4,1,k} C_{T,k}    (57)

and

    C_{SO2,1,k} = \frac{c_{SO2,1,k}}{c_{T,1,k}} r_{SO2,1,k} C_{T,k}    (58)

where

    r^*_{SO4,1,k} = \frac{K_c(SO2, 1, k)}{K_c(SO2, 1, k) + K_d(SO2, 1, k) - K_d(SO4, 1, k)}
                    \left\{ \exp(-K_d(SO4, 1, k) t_{1k}) - \exp(-[K_c(SO2, 1, k) + K_d(SO2, 1, k)] t_{1k}) \right\}    (59)

and

    r_{SO2,1,k} = \exp(-[K_c(SO2, 1, k) + K_d(SO2, 1, k)] t_{1k}).    (60)
From now on we shall use the notation K_c = K_c(SO2, 1, k), K_d^1 = K_d(SO2, 1, k),
and K_d^2 = K_d(SO4, 1, k).
The regression model suggested by Equation (63) may then be fitted and the adequacy
of the fit judged by the resulting R^2 value and the closeness of the beta coefficients
to one. C_{SO4,St,k} refers to the total contribution of SO4 by source group St to
the receptor. If the chosen parameter combination results in a high R^2 value and
beta values that are not significantly different from one, then the chosen parameter
values v_1, v_2, K_c may be judged as being consistent with the observed data. The
best possible value of R^2 obtained by varying the values of v_1, v_2, and K_c over the
entire range of values suggested in the literature may be denoted by R^2_{opt}. The
values v_1 = v_{1,opt}, v_2 = v_{2,opt}, and K_c = K_{c,opt} which result in the best R^2 may
be used to calculate the daily St contributions to SO4 and SO2 at the receptor. By
calculating the ratio of the total St contribution over the entire sampling period to
the total ambient concentrations over the same period, we can calculate the fractional
SO4 and SO2 contributions by St during the experimental period.
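A minimal Python sketch of the DMB factors in Equations (59) and (60) is given below; the
conversion of deposition velocities (cm/s) to first-order rates assumes a well-mixed layer of
depth H, and the numerical values of H, K_c, v_1, v_2, and the transport time are illustrative
placeholders, not recommendations.

    import numpy as np

    def dmb_r_factors(Kc, Kd_SO2, Kd_SO4, t):
        """Sketch of Equations (59) and (60): converted and remaining fractions
        after transport time t (hours); rate constants in 1/hr."""
        denom = Kc + Kd_SO2 - Kd_SO4
        r_star_SO4 = (Kc / denom) * (np.exp(-Kd_SO4 * t) - np.exp(-(Kc + Kd_SO2) * t))
        r_SO2 = np.exp(-(Kc + Kd_SO2) * t)
        return r_star_SO4, r_SO2

    def dmb_contributions(C_T, emis_ratio_SO2_to_T, Kc, Kd_SO2, Kd_SO4, t):
        """Predicted St contributions to SO4 and SO2 at the receptor (Eqs. (57), (58))."""
        r_star_SO4, r_SO2 = dmb_r_factors(Kc, Kd_SO2, Kd_SO4, t)
        return (emis_ratio_SO2_to_T * r_star_SO4 * C_T,
                emis_ratio_SO2_to_T * r_SO2 * C_T)

    # Illustrative values: 10-hour transport, Kc = 1 percent/hr, deposition velocities
    # v1 = 0.5 cm/s (SO2) and v2 = 0.1 cm/s (SO4), assumed mixing height H = 1000 m.
    H = 1000.0
    def to_rate(v_cm_s):
        # Deposition velocity (cm/s) -> first-order loss rate (1/hr) for a layer of depth H (m).
        return (v_cm_s / 100.0) * 3600.0 / H
    r_star, r = dmb_r_factors(Kc=0.01, Kd_SO2=to_rate(0.5), Kd_SO4=to_rate(0.1), t=10.0)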
Uncertainty Calculations
Uncertainties in the final results are primarily due to three sources:
- Uncertainties in the model parameters such as K_c, K_d^1, and K_d^2.
- Uncertainties in the measured values (including the transport time t_{jk}).
- Uncertainties in the extent to which the model assumptions are violated.
Uncertainties in the Model Parameters. The model parameters in
question are K_c, K_d^1, and K_d^2, which are not known. Suppose a review of the
literature suggests deposition velocities v_1 for SO2 ranging from l_1 to u_1 cm/sec
and v_2 for SO4 ranging from l_2 to u_2 cm/sec. In addition, suppose the sulfur
dioxide oxidation rates vary from K_c = l_c to K_c = u_c percent per hour.
Clearly, not all combinations of values of v_1, v_2, and K_c are physically possible.
To judge which combinations of these parameters are reasonable, the following
procedure may be adopted. A grid of values for v_1, v_2, and K_c may be chosen by
taking all possible combinations of these parameters resulting from
- v_1 = l_1 to u_1 in increments of \delta_1,
- v_2 = l_2 to u_2 in increments of \delta_2,
- K_c = l_c to u_c in increments of \delta_c.
To decide whether a particular combination of values of v_1, v_2, and K_c is rea-
sonable, the regression model suggested by Equation (63) can be exercised and the
adequacy of the fit judged by the closeness of the beta values to one and the re-
sulting R^2. The best possible value of R^2 for beta values close to one over the range
of these parameters is denoted by R^2_{opt}. A value R^2_0 less than R^2_{opt} but close to
it is chosen, based on subjective judgement, as a criterion value for judging the
reasonableness of various combinations of the parameter values. Parameter com-
binations resulting in an R^2 equal to R^2_0 or greater may be considered reasonable.
The set of all such parameter combinations will be denoted by the symbol A. St
contributions can be calculated for each of the parameter combinations in the set
A. This will result in a whole range of values for the daily St contributions and
the overall average St contributions. The mean and the standard deviation of this
range of values (as well as the minimum and the maximum values) may be calcu-
lated to assess the uncertainty in the estimated St contributions due to imprecise
knowledge of the model parameters. The measured values of the concentrations of
species are assumed to be exact in these calculations.
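The screening procedure just described might be organized as in the Python sketch below;
fit_r2 and predict_contrib stand in for the Equation (63) regression and the DMB prediction
step, and are assumptions of this sketch, not specified components of the study.

    import numpy as np
    from itertools import product

    def parameter_uncertainty(grid_v1, grid_v2, grid_Kc, fit_r2, predict_contrib, r2_criterion):
        """Sketch: retain parameter combinations with R^2 >= criterion (the set A)
        and summarize the spread of the implied daily St contributions.

        fit_r2          : function (v1, v2, Kc) -> R^2 of the regression fit
        predict_contrib : function (v1, v2, Kc) -> array of daily St contributions
        """
        accepted = [(v1, v2, Kc)
                    for v1, v2, Kc in product(grid_v1, grid_v2, grid_Kc)
                    if fit_r2(v1, v2, Kc) >= r2_criterion]
        contribs = np.array([predict_contrib(*p) for p in accepted])
        return {"n_accepted": len(accepted),
                "mean": contribs.mean(axis=0),
                "std": contribs.std(axis=0, ddof=1),
                "min": contribs.min(axis=0),
                "max": contribs.max(axis=0)}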
Uncertainties in the Measured Values. To assess the effect of errors in
measurements on the estimated St contributions to SO4 and SO2 at the receptor,
the values of v_1, v_2, and K_c are fixed at their optimum values obtained as explained
in the previous subsection. The measured values used in the calculations are: (1)
the ambient tracer concentration, C_{T,k}; (2) the ambient SO4 concentration, C_{SO4,k};
(3) the ambient SO2 concentration, C_{SO2,k}; (4) the relative humidity, RH_k, at the
receptor; and (5) the transport time, t_{St,k}, for the aerosol mixture from St to arrive at
the receptor. Suppose each of these measurements has associated with it a
standard deviation characterizing the uncertainty in the respective measurement.
We generate a number of synthetic data sets (one hundred is sufficient for most
purposes) on the computer by perturbing the measured values using random Gaus-
sian deviates with zero means and standard deviations associated with each of the
measured values. For each synthetic data set thus generated, the daily St contri-
butions to SO4 and SO2 at the receptor as well as the average contributions over
the entire sampling period are calculated. The range of values thus obtained for
each of these quantities gives an indication of the uncertainty that would be due
to imprecise measurements alone. The results are reported in the form of means
and standard deviations of each of the quantities of interest calculated from the
synthetic data sets. Throughout this exercise the model parameters, viz. the
conversion and deposition parameters, are kept constant at their optimum
values.
Uncertainties in the Extent to which the Model Assumptions are
Violated. Uncertainties in the reported results arising from violations of the model
assumptions can be evaluated by conducting extensive sensitivity studies involving
various perturbations of the model assumptions themselves.
Overall Uncertainties. Since the first two categories of uncertainties are
expected to be "independent", the total uncertainty due to these two sources may
be characterized by the effective total standard deviation
    \sigma_{Total} = ( \sigma_1^2 + \sigma_2^2 )^{1/2}

where \sigma_1 and \sigma_2 are the standard deviations characterizing the uncertainties due
to the model parameters and the measured values, respectively.
Conclusion
A set of deterministic general mass balance (GMB) equations describing how
primary and secondary aerosols and gases are transported and transformed as
they pass through the atmosphere was formulated. From the GMB equations it
is possible, with a variety of limiting assumptions, to derive the chemical mass
balance equations, the differential mass balance equations, and the tracer mass
balance regression model. Derivation of these receptor modeling approaches from a
first-principles model allows for an examination of the model assumptions and of
deviations from those assumptions. With the assumptions identified, it is possible
to make a better determination of how to incorporate measurement uncertainty
and how to estimate the model uncertainty associated with an imperfect knowledge
of model parameters.
References
1. J.G. Watson, Overview of receptor model principles. JAPCA, 34, 620, 1984.
2. J.G. Watson, J.C. Chow, D.L. Freeman, R.T. Egami, P. Roberts and R.
Countess, Model and Data Base Description for California's Level I PM10
Assessment Package. DRI Document 8066-002.1D1, Draft Report, Prepared
for the California Air Resources Board, Sacramento, CA, 1987.
3. J.G. Watson, J.C. Chow and N.F. Robinson, Western States Acid
Deposition Project Phase I: Volume 4: An Evaluation of Ambient Aerosol
Chemistry in the Western United States. Prepared for the Western States
Acid Deposition Project by Systems Applications, Inc., San Rafael, CA,
SYSAPP-87/064, 1987.
4. J.C. Chow, Development of a Composite Modeling Approach to Assess Air
Pollution Source/Receptor Relationships. Doctor of Science Dissertation,
Harvard University, Cambridge, MA, 1985.
5. P.K. Hopke, Receptor modeling in environmental chemistry. Chemical Anal-
ysis, 76, John Wiley & Sons, New York, NY, 1985.
6. R.K. Stevens and C.W. Lewis, Hybrid receptor modeling. In: Extended Ab-
stracts for the Fifth Joint Conference on Applications of Air Pollution Me-
teorology with APCA, November 18-21, 1986, Chapel Hill, NC. Published
by the American Meteorological Society, Boston, Massachusetts, 1987.
7. C.W. Lewis and R.K. Stevens, Hybrid receptor model for secondary sulfate
from an SO2 point source. Atmos. Environ., 19(6):917-924, 1985.
8. T. Dzubay, R.K. Stevens, G.E. Gordon, I. Olmez, A.E. Sheffield, W.J.
Courtney, A composite receptor method applied to Philadelphia aerosol.
Environ. Sci. & Technol., 22, 1, 1988.
9. J.G. Watson, ed., Transactions, Receptor Models in Air Resources Management,
Air and Waste Management Assoc., Pittsburgh, PA, 1989.
10. H.I. Britt and R.H. Luecke, The estimation of parameters in nonlin-
ear, implicit models. Technometrics, 15, 233, 1973.
11. J.G. Watson, J.A. Cooper and J.J. Huntzicker, The effective variance weight-
ing for least squares calculations applied to the mass balance receptor model.
Atmos. Environ., 18, 1347, 1984.
12. J.G. Watson, Chemical Element Balance Receptor Model Methodology for
Assessing the Sources of Fine and Total Particulate Matter. Ph.D. Disser-
tation, University Microfilms International, Ann Arbor, MI, 1979.
13. R.C. Henry, Stability analysis of receptor models that use least squares
fitting. Receptor Models Applied to Contemporary Air Pollution Problems,
Air Pollution Control Association, Pittsburgh, PA, 1982.
14. H.J. Williamson and D.A. DuBose, Receptor model technical series,
volume III: User's manual for chemical mass balance model. EPA-450/4-83-
014, U.S. Environmental Protection Agency, Research Triangle Park, NC,
1983.
15. G.E. Gordon, W.H. Zoller, G.S. Kowalczyk and S.H. Rheingrover, Composi-
tion of source components needed for aerosol receptor models. Atmospheric
Aerosol: Source/Air Quality Relationships. Edited by E.S. Macias and P.K.
Hopke, American Chemical Society Symposium Series #167, Washington,
D.C., 1981.
16. L.A. Currie, R.W. Gerlach, C.W. Lewis, W.D. Balfour, J.A. Cooper, S.L.
Dattner, R.T. DeCesar, G.E. Gordon, S.L. Heisler, P.K. Hopke, J.J. Shah,
G.D. Thurston and H.J. Williamson, Interlaboratory comparison of source
apportionment procedures: results for simulated data sets. Atmos. Envi-
ron., 18, 1517, 1984.
17. T.G. Dzubay, R.K. Stevens, W.D. Balfour, H.J. Williamson, J.A. Cooper,
J.E. Core, R.T. DeCesar, E.R. Crutcher, S.L. Dattner, B.L. Davis, S.L.
Heisler, J.J. Shah, P.K. Hopke and D.L. Johnson, Interlaboratory compar-
ison of receptor model results for Houston aerosol. Atmos. Environ., 18,
1555, 1984.
18. J.G. Watson and N.F. Robinson, A method to determine accuracy and
precision required of receptor model measurements. Quality Assurance in
Air Pollution Measurements, Air Pollution Control Association, Pittsburgh,
PA, 1984.
19. R.T. DeCesar, S.A. Edgerton, M.A.K. Khalil and R.A. Rasmussen, Sensi-
tivity analysis of mass balance receptor modeling: methyl chloride as an
indicator of wood smoke. Chemosphere, 14, 1495, 1985.
20. R.T. DeCesar, S.A. Edgerton, M.A. Khalil and R.A. Rasmussen, A tool for
designing receptor model studies to apportion source impacts with specified
precisions. Receptor Methods for Source Apportionment: Real World Issues
and Applications, Air Pollution Control Association, Pittsburgh, PA, 1986.
21. H.S. Javitz, J.G. Watson, J.P. Guertin and P.K. Mueller, Results of a re-
ceptor modeling feasibility study. JAPCA, 38, 661, 1988.
22. H.S. Javitz, J.G. Watson and N. Robinson, Performance of the chemical
mass balance model with simulated local-scale aerosols. Atmos. Environ.,
22, 2309, 1988.
23. D.A. Belsley, E.D. Kuh and R.E. Welsch, Regression Diagnostics: Identify-
ing Influential Data and Sources of Collinearity. John Wiley and Sons, New
York, NY, 1980.
24. B. Kim, and R.C. Henry, Analysis of multicollinearity indicators and in-
fluential species for chemical mass balance receptor model, Transactions,
Receptor Models in Air Resources Management, J.G. Watson, ed., Air and
Waste Management Assoc., Pittsburgh, PA, 1989.
25. J.C. Chow, and J.G. Watson, Summary of particulate data bases for recep-
tor modeling in the United States, Transactions, Receptor Models in Air
Resources Management, J.G. Watson, ed., Air and Waste Management As-
soc., Pittsburgh, PA, 1989.
26. W.A. Fuller, Measurement Error Models, John Wiley and Sons, New York,
New York, 1987.
-------
Project MOHAVE Summary
12/2/91

Purpose and Objectives:
Purpose: Respond to Congressional mandate for "Mohave Power Plant tracer study."
Study Objectives: Estimate frequency and magnitude of any perceptible impact of
Mohave Power Plant on visibility at Class I areas; estimate impacts of other sources
upon visibility in the southwest; develop and evaluate tools for subsequent regional
haze analyses.

Approach:
Detailed intensive study periods nested within a year-long study period. Results and
conclusions to be based upon evaluation and reconciliation of multiple analysis
approaches.

Schedule:
Field Study: September 1991 - November 1992. Winter Intensive: January 1992 (30 days).
Summer Intensive: July - August 1992 (50 days). Draft Report: July 1993. Final Report:
December 1993.

Tracer:
Continuous stack release of perfluorocarbon tracer during intensives. Monitoring with
35 samplers at 31 sites. Release of different tracers from southern California (Los
Angeles Basin and San Joaquin Valley) during the summer intensive.

Emissions:
Continuous SOx and NOx stack monitoring during intensives. Detailed source profiling
using daily samples during intensives.

Air Quality Monitoring:
Full IMPROVE samplers at 10 sites; IMPROVE channel A + SO2 at 21 sites during
intensives (12- and 24-hour sampling). Sampling two days per week at 10 sites with
IMPROVE samplers during non-intensives. DRUM sampling (8 size ranges, 6-hour
resolution) at six sites during intensives. Sampling with medium volume particle
samplers at three sites during intensives. Hydrogen peroxide sampling for a portion of
the summer intensive.

Optical Monitoring:
Continuous monitoring for the entire study period. Nephelometers at all receptor sites
and a transmissometer at Meadview, in addition to those already at IMPROVE sites.
Time-lapse photography at several sites.

Meteorological Monitoring:
Continuous vertical wind profiling for 12 months at two sites using radar wind profilers.
Two additional profilers during intensives. Surface meteorology at all wind profiler
sites and receptor sites. Doppler sodar at two sites. RASS temperature profiling at two
sites.

Deterministic Modeling:
Deterministic meteorological modeling (wind, turbulence, moisture, etc.) every day for
the 12-month period. Calculation of influence functions. Detailed chemistry modeling
(RADM, RPM) for selected cases. Monte Carlo transport modeling with linear chemistry
every day for the 12-month period.

Data Interpretation:
Statistical study of historical sulfur concentrations and plant output. Spatial pattern
(eigenvector) analysis. Hybrid receptor modeling utilizing artificial and endemic tracer
data. Calculation of extinction budget. Reconciliation of modeling results. Source
apportionment.

Quality Assurance:
QA audit by independent reviewer covering all portions of the study.

Potential SCE Contributions:
Upper air monitoring, particle monitoring (endemic tracers), chemical modeling, tracer
release, data analysis, aircraft measurements, stack sampling, and data base management.
-------
[Map: Monitoring Locations. The map of the southwestern U.S. distinguishes points of
reference, coal-fired power plants, receptor sites, other Class I sites, background sites,
and LA Basin pass sites, and indicates low- and high-elevation transport paths. Sites
shown include Indian Gardens, Hopi Point, Truxton, Dolan Springs, Tehachapi Summit,
Baker, Mohave, Hualapai Mountain Park, Seligman, Yucca, Sycamore Canyon, Needles, Amboy,
Petrified Forest, Cajon Summit, Camp Wood, Prescott (airport), San Gorgonio, Angeles,
Joshua Tree, Wickenburg, Tonto, and Phoenix.]
Meteorological Modeling Grids

Grid   Number of Points (x, y, z)   Spacing (km)
  1         100 x  60 x 44              32
  2         104 x  72 x 44               8
  3         144 x 144 x 44               2
  4          80 x  80 x 44               0.5
  5          80 x  80 x 44               0.5
For more information, contact Mark Green at (702) 798-2182.
-------
[Calendar: December-February planning calendar for the winter intensive, showing return
of tracer samplers to BNL, tracer sample analysis, distribution of background tracer
data, readiness assessments, shipment of tracer material and tracer samplers, deployment
of tracer and particulate samplers, and the intensive sampling period. Shaded days
represent intensive sampling days.]
-------
Schedule
Project MOHAVE Winter Intensive (schedule as of 12/17/91)

DATE: ACTIVITY (ORGANIZATION)

September - October: Begin year-round particulate monitoring at receptor and other
Class I area sites; begin year-round optical monitoring (UC-Davis; Air Resource
Specialists)
November - December: Install radar wind profilers/RASS (NOAA-Boulder)
11/20 - 11/26: Deploy tracer samplers for background test (UC-Davis)
12/4 - 12/9 or 12/10: Pick up tracer samplers, return to Brookhaven (UC-Davis)
12/14 - 12/18: Analyze tracer samples (Brookhaven)
12/18: Distribute background tracer data (Brookhaven)
12/20, 12/23: Assess readiness for field program; if OK, then the following schedule
will hold. If major problems exist, re-evaluate study. (EPA, Brookhaven, NOAA-Idaho
Falls, UC-Davis)
12/27: Ship tracer material to NOAA-Idaho Falls (Brookhaven)
1/3: Ship tracer samplers to Lake Mead for winter intensive (Brookhaven)
1/6 - 1/11: Deploy tracer and particulate samplers (UC-Davis)
1/11 7am MST: Start tracer sampling (UC-Davis)
1/14 7am MST: Start particulate sampling (UC-Davis)
1/14 7am MST - 2/13 7am MST: Release tracer (NOAA-Idaho Falls)
2/16 7am MST: Stop particulate sampling (UC-Davis)
2/20 7am MST: Stop tracer sampling, except at Meadview and Hopi Point (UC-Davis)
------- |