Note to: I/M Stakeholders and other readers

        Attached is a copy of a guidance document outlining three
alternative inspection and maintenance (I/M) program evaluation
methodologies that are available for use by the states as they
work to meet the Clean Air Act's biennial I/M program evaluation
requirement.  This guidance is being released in compliance with
the schedule included in the January 9, 1998 I/M rule amendment
revising the requirements for I/M program evaluations to allow for
sound alternatives to the original, IM240-based approach called
for by the 1992 I/M rule.  The approved methods are:

   •  Sierra Research Method: This method relies on the statistical
      sampling and analysis of state I/M program data, modeling
      data, and correlation to a base I/M program that can be used
      as a benchmark for effectiveness.  While this method does
      ultimately require correlation to the IM240, states are not
      required to perform their own IM240 tests on any of their
      in-use fleet.  States can obtain the necessary data via contract
      or from some other source able to provide paired data
      correlating the chosen I/M test to the IM240.

   •  NYTEST (VMAS) Method: This method relies on less costly test
      equipment which has been found to reliably simulate the IM240
      test for purposes of performing program evaluation
      measurements.

   •  RG240 Method: Like the NYTEST method above, the RG240 method
      relies upon a lower-cost emissions measurement system which
      is capable of simulating the IM240 for purposes of program
      evaluation.

These three alternatives were among four proposed to EPA as part
of a stakeholder process begun in August 1997.  The fourth
method--a remote sensing (RSD) based fleet characterization approach--has
not been approved as of this guidance, due to inconsistencies in
the reported results EPA has reviewed thus far.  Nevertheless,  we
believe that the RSD approach warrants further investigation,
which we will pursue in the coming year.

   An electronic version of this guidance has been posted on the
EPA web site, under the Office of Mobile Sources (OMS) section.
The OMS area may be accessed by setting your browser to the
following web address: http://www.epa.gov/OMSWWW.

   We hope that you find this information helpful.  If you have any
questions concerning this document, please contact Dr. Jim Lindner
of the EPA staff at (734) 214-4558 or via e-mail at
lindner.jim@epa.gov.

United States Environmental Protection Agency
Air and Radiation
EPA420-S-98-015
October 1998

Inspection and Maintenance (I/M) Program Effectiveness Methodologies

    Inspection and Maintenance (I/M) Program Effectiveness Evaluation Methodologies
Introduction:

       Section 182(c)(3)(C) of the Clean Air Act (CAA) requires that all states subject to
enhanced I/M "...biennially prepare a report to the Administrator which assesses the emissions
reductions achieved by the program required under this paragraph based on data collected during
inspection and repair of vehicles.  The methods used to assess the emission reductions shall be
those established by the Administrator." Based upon this authority, on November 5, 1992, EPA
published in the Federal Register minimum, biennial program evaluation requirements for
enhanced I/M programs under section 51.353 of 40 CFR Part 51, subpart S (henceforth referred
to as the "1992 I/M rule"). As a cornerstone of this evaluation, states were required to perform
an analysis based upon test data from "...a representative, random sample, taken at the time of
initial inspection (before repair), of at least 0.1 percent of the vehicles subject to inspection in a
given year. Such vehicles shall receive a state administered or monitored IM240 mass emissions
test or equivalent..." The 1992 I/M rule also initially included the requirement that alternatives
to the IM240, to be considered equivalent, would have to be mass emissions transient tests (also
known as METTs).

       As required by the Clean Air Act, EPA's 1992 I/M rule established a minimum
performance standard for enhanced I/M programs. While certain elements of the performance
standard were mandated by the Act — for example, test frequency, network design, and vehicle
coverage — EPA was also allowed to use its discretion with regard to other elements, such as test
type. As a result, the performance standard promulgated in the 1992 rule was based upon an
annual, centralized, test-only program including IM240 tailpipe testing as well as purge and
pressure evaporative system testing. It is important to note that states were not required to adopt
this performance standard program; instead, states simply had to  demonstrate that the programs
they did adopt would get the same level of emission reduction as would the performance
standard program under similar conditions.  For the purpose of developing and  submitting the
required I/M State Implementation Plans (SIP), states were to include a modeling demonstration
showing that the program met the performance standard using EPA's MOBILE emission factor
model. One of the goals of the program evaluation requirement was to
determine the extent to which emission reductions projected for the program in the SIP were
actually being achieved in the real world.

       Although the 1992 I/M rule allowed states a certain amount of flexibility when it came to
things like testing frequency and vehicle model year coverage, as a practical matter, EPA
believed that it was unlikely that states would be able to design an enhanced I/M program that
would meet the performance standard without including a significant fraction of test-only IM240
testing. Given the assumption that all enhanced I/M programs would include some amount of
IM240 testing (and therefore the necessary equipment and expertise would be readily available),
requiring additional IM240 testing for evaluation purposes on 0.1 percent of the subject vehicle
population did not seem an unreasonable or unduly burdensome requirement at the time the I/M
rule was adopted.

       As states began dealing with the practical and political realities of getting I/M programs
started, however, it became obvious that more flexibility was desired than was provided by the
1992 I/M rule. For example, some areas required to implement enhanced I/M were able to meet
their attainment or other CAA goals with substantially less stringent I/M programs than would be
required under the original I/M rule. Therefore, in 1995 and 1996, EPA revised its I/M rule to
include alternative enhanced I/M performance standards aimed at providing greater flexibility to
those areas that did not need substantial emission reductions from I/M to meet their CAA goals.
EPA also revisited its credit calculations for the non-METT, alternative test known as the
Acceleration Simulation Mode (ASM) test and incorporated these revised credits in its MOBILE
model. Furthermore, in 1995, with passage of the National Highway System Designation Act
(NHSDA), Congress afforded I/M states even more flexibility by barring EPA from automatically
discounting I/M programs on the basis of network design (i.e., centralized vs. decentralized).

       The greater flexibility provided by the above changes led a significant number of
enhanced I/M areas to  develop and implement programs that did not include any IM240 or
equivalent METT testing. As a result, the program  evaluation requirement calling for 0.1
percent IM240/METT testing that seemed reasonable back in 1992 suddenly became a more
significant burden for those states using ASM or some other non-METT-based alternative for
their regular I/M program.  In effect, those states would be required to purchase, lease, or
otherwise acquire IM240 testing equipment for the sole purpose of meeting the program
evaluation requirement — a requirement which, while it served to quality-control the credit
levels previously claimed by the state, did not itself produce any emission reductions. In addition,
states with test-and-repair programs faced a significantly more difficult time obtaining random
vehicles for testing than was contemplated at the time of the 1992 I/M rule, with greater owner
inconvenience and/or government costs. The states complained, EPA listened, and in January
1998 (63 FR 1362, January 9, 1998), the Agency  finalized revisions to its program evaluation
requirements, removing the provision that alternative program evaluation methodologies must be
based upon mass emission, transient testing.  The previous METT requirement was replaced with
the stipulation that alternative methodologies be simply "sound," "approved by EPA," and
"capable of providing accurate information about the overall effectiveness of an I/M program."

        The January 1998 Federal Register notice included a schedule for determining and
assessing candidate, alternative program evaluation methodologies, which is outlined below:

       August 11, 1997: Stakeholders' meeting held by EPA for states, contractors, vendors, and
all interested parties for the purpose of seeking input regarding which alternative methods to
investigate.

       September 15, 1997: Candidate methodologies identified for further investigation.

       May 31, 1998: Data gathering on candidate methodologies completed.

       October 15, 1998: Data review and analysis completed.

       October 31, 1998: Policy memo and guidance on approved program evaluation
methodologies released.

EPA identified four alternative program evaluation methodologies as part of its stakeholder
process. These methods were: 1) The Sierra  Research method, a method that relies on state I/M
program data, modeling data, and correlation to a base I/M program that can be used as a
benchmark for effectiveness; 2) The VMAS method, a low-cost method for measuring exhaust
flow for the purpose of converting concentration measurements  into mass emissions
measurements; 3) The  California Analytical Bench method, a low-cost method using one type of
repair-grade IM240 (also known as RG240); and 4) The RSD fleet characterization method,
which relies on remote sensing (RSD) data. As part of its evaluation, EPA reviewed the
available data and literature on the respective methods, including whatever additional

information was provided by the proponents of the various methods. As a result of this review,
EPA has concluded that several of these methods show sufficient promise to warrant additional
investigation, which the Agency intends to pursue in the coming months. It is likely, therefore,
that today's guidance will be supplemented as developments warrant.

       This document represents EPA's findings and guidance to date with regard to the four
alternatives identified above. Specifically, EPA is currently approving the first three of these
alternatives as sound methodologies capable of providing accurate information about the overall
effectiveness of I/M programs, while reserving judgement on the RSD fleet characterization
method, which requires further study.  Approval of these alternative methodologies is for the
purpose of program evaluation only. Specifically, this guidance does not address the separate
issue of assigning I/M program credit to alternative tests, which is a wholly different analysis.
Furthermore, it is important to note  that today's guidance is not intended to be exhaustive, nor
does it represent EPA's "last word" on acceptable alternatives.  Should states or other interested
stakeholders have methodologies they wish to explore which are not addressed by today's
guidance, EPA encourages those stakeholders to propose such alternatives for our review and
consideration.  However, states should be aware that under the revised I/M rule an alternative
method must be approved by the Administrator as sound prior to its incorporation in a SIP
submittal.

       Lastly, today's guidance also addresses an issue faced by all attempts to evaluate  I/M
program effectiveness — namely, the need to establish a basis for comparison. The most  reliable
and direct approach for determining the emission reductions achieved by an I/M program would
be to start by characterizing the pre-program level of fleet emissions and then comparing this to
the post-program emissions level. The difference between these two measurements equals the
decrease in emissions that can be attributed to the program (i.e., the I/M program's
effectiveness).  Unfortunately, because many areas required to implement enhanced
I/M had pre-existing basic I/M programs, it is not always possible for program evaluators to
establish an accurate, non-I/M baseline. Today's guidance therefore begins by suggesting
methods for determining the magnitude of an I/M program's effectiveness when it is not  possible
to clearly establish a non-I/M baseline.
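
   To make the arithmetic concrete, the minimal sketch below (in Python, with invented
emission factors) carries out the direct before-and-after comparison just described; both
numbers are assumptions for illustration only.

    # Direct before/after comparison (toy numbers; both EFs are assumptions)
    pre_program_ef = 2.8    # fleet average HC, g/mi, measured before I/M starts
    post_program_ef = 2.3   # fleet average HC, g/mi, measured after I/M starts
    effectiveness = pre_program_ef - post_program_ef   # 0.5 g/mi attributable to I/M
    print(f"Emission reduction attributable to I/M: {effectiveness:.1f} g/mi")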


Section 1:  Estimating Baseline Fleet Emissions

   The determination of the actual  emission reductions associated with a given enhanced I/M
program is frequently complicated at the outset due to the difficulty or impossibility of
establishing a no-I/M baseline for comparison. Specifically, most enhanced I/M program areas
have pre-existing, basic I/M programs which will have already lowered average fleet emissions
below what would be expected under a no-I/M scenario. This, in turn, will lead to understating
the impact of the new I/M program. As a result, direct measurement of I/M program
effectiveness becomes essentially impossible once any I/M program has begun operation, since
the opportunity to define the no-I/M case through direct measurement will have been
permanently precluded. Under such circumstances, the best one can hope for is a reasonable
basis for estimating what the no-I/M emission levels would have been, since direct measurement
is no longer possible. Of course, the uncertainty inherent in any estimation methodology will
ripple throughout all subsequent numbers based upon a comparison to that initial estimate.
Therefore, it is important to choose  estimation methods carefully.

   EPA is aware of two possible methods states could potentially use for addressing the
need to estimate baseline fleet emissions.  The first approach precludes the need for a no-I/M
baseline by comparing the program under evaluation to a benchmark program taken to be the
reference standard for effectiveness; the second would rely upon RSD measurements to provide
a basis for estimating pre-I/M fleet emissions.  The latter is not yet something EPA can support
until RSD issues raised later in this guidance have been resolved.

Section 1.1: Using Benchmark Comparison When a No-I/M Baseline is No Longer Possible

   One method for estimating program effectiveness when it is not possible to establish a
no-I/M baseline is to compare the program-in-question to an existing IM240 program which can
serve as a benchmark for effectiveness. Although arguments can be made regarding the choice
of which (if any) IM240 program should serve as a benchmark, EPA recommends using the
Phoenix, Arizona program, since it is one of the most mature EVI240 programs in operation, and
most closely resembles EPA's recommended program design.  The comparison of the
program-in-question to the benchmark program is facilitated through use of the MOBILE model.
An effort has been made to remove the impact of the model's assumptions on the final
comparison of effectiveness by using MOBILE as a common denominator that essentially
cancels itself out as part of the overall analysis. The following steps are provided to illustrate the
general benchmark comparison concept in the context of the Arizona program (a worked sketch
with hypothetical numbers follows the list):

  (1) Calculate the projected effectiveness of the Arizona benchmark program by modeling the
   Arizona program using the MOBILE model and all the Arizona local area parameters (LAP)
   for fleet distribution, fuel type, average temperature, etc., then compare the resulting
   projected emission factor (EF) to the actual EF established as a result of the Arizona program
   evaluation. The difference between the modeled EF and the  actual EF equals the
   effectiveness of the Arizona program, which, in turn, establishes the upper boundary of what
   can be accomplished in an IM240 program.
  (2) Calculate the projected relative effectiveness of the program-in-question by modeling that
   program using the area-in-question's LAP for fleet distribution, fuel, temperature, etc. and
   compare this modeled EF to the "actual" EF derived through one of the alternative program
   evaluation methodologies described elsewhere in this guidance.
  (3) Correct for the impact of LAP differences between the program-in-question and the
   Arizona program by comparing the modeled EFs for the benchmark program using first the
   Arizona LAP and then again, using the area-in-question's LAP. There is no  need for a
   no-I/M baseline using this method since  program effectiveness is being derived as a percent
   of the program effectiveness of the benchmark program.
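
   The sketch below (Python) illustrates one plausible way to combine these three steps;
every emission factor in it is an invented placeholder, and the exact arithmetic a state
would use is the sort of detail to be settled in EPA's forthcoming technical guidance.

    # Benchmark comparison sketch -- all EF values (g/mi) are invented placeholders;
    # real values would come from MOBILE runs and from the program evaluations.

    # Step (1): benchmark (Arizona) program, modeled with Arizona LAP vs. measured
    ef_az_modeled = 2.00     # MOBILE-projected EF for the AZ benchmark program (assumed)
    ef_az_actual = 1.80      # EF measured by the AZ program evaluation (assumed)
    az_effect = ef_az_modeled - ef_az_actual      # per step (1): 0.20 g/mi

    # Step (2): program-in-question, modeled with its own LAP vs. measured
    ef_pq_modeled = 2.10     # MOBILE-projected EF for the program-in-question (assumed)
    ef_pq_actual = 2.00      # EF from an approved alternative evaluation method (assumed)
    pq_effect = ef_pq_modeled - ef_pq_actual      # per step (2): 0.10 g/mi

    # Step (3): LAP correction -- model the benchmark program under each area's LAP
    # and use the ratio to put the two programs on a common footing
    ef_bench_az_lap = 2.00   # benchmark program modeled with AZ LAP (assumed)
    ef_bench_pq_lap = 2.20   # benchmark program modeled with the area's LAP (assumed)
    lap_ratio = ef_bench_az_lap / ef_bench_pq_lap

    # Effectiveness expressed as a percent of the benchmark's effectiveness
    relative_effectiveness = (pq_effect * lap_ratio) / az_effect
    print(f"Program-in-question: {relative_effectiveness:.0%} of benchmark effectiveness")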

Section 1.2: Using RSD to Estimate a No-I/M Baseline

   Theoretically, the program methodology described in Section 5 below may also be used to
obtain fleet average baseline data, provided the data are collected prior to the start of the I/M
program. In the event that the area-in-question already has a pre-existing  basic I/M program, this
method would likely underestimate the benefits of the new program for the same reasons
discussed above. However, it may be possible to use a RSD-based fleet characterization
approach to estimate a no-I/M baseline by measuring the fleet  emissions from an otherwise
comparable area without an I/M program as  a no-I/M surrogate (once outstanding issues are
resolved).

Section 2:     Sierra Research Methodology for Estimating Program Benefits1

       The methodology developed by Sierra Research under contract to EPA provides two
options for performing I/M program evaluations.  The first assumes the current operating
program is the IM240 or some other, equivalent METT.  The second method assumes the current
operating program is an ASM or Idle test and utilizes a METT sample for correlation purposes.
If a state has access to correlation data generated by another source using comparable testing
conditions and specifications to those used by the program to be evaluated, then this METT
correlation sample need not be generated by the state performing the program evaluation (i.e.,
paired correlation data can be "borrowed" from an outside source).

       EPA has contracted two independent peer reviews of this work since its completion to
address both the statistical validity of the method as well as the concept of using "in-program"
data to quantify the effectiveness of an I/M program.  Although EPA has not received the final
peer review on the statistical method developed by Sierra, preliminary discussions with the
reviewer indicate the statistical theory behind the method is correct.2  The second review raised a
number of operational issues with the Sierra method with regard to sampling bias, stratification
of vehicle technology groups, sample recruitment, and the uncertainties surrounding the use of
the MOBILE model for both predicting the emission reductions of a benchmark program and
adjusting the measured emissions of the program under evaluation to account for differences
between it and the benchmark program outlined in Section 1.1 above.3 Some of the issues
identified by Wenzel and Sawyer in this second review are inherently difficult to solve when
using "in-program" data for program evaluation;  however, EPA will address these issues to the
extent they can be  addressed in the forthcoming technical guidance on program evaluations. As
a result of the operational shortcomings inherent in an evaluation based on "in-program" data,
the second review recommends the use of RSD as a program evaluation tool.  EPA's response to
this last issue is addressed below in Section 5.

Section 2.1: IM240 Method

        In an IM240 program, the Sierra Research method requires a random sample of a
prescribed number of vehicles.  On 1981 and newer model year vehicles, tailpipe emissions are
measured using the IM240, while on pre-1981 vehicles idle emissions are measured and then
converted to IM240-equivalents using existing correlation equations cited in the Sierra Research
report entitled "Development of a Proposed Procedure for Determining the Equivalency of
Alternative Inspection and Maintenance Programs." Once the data are collected, a model year
based weighted average is determined for the fleet and adjusted to correct for the local  variables
such as fuel differences between the program-in-question and the benchmark, Arizona program
discussed above. The program evaluation is then made by a direct comparison between the state
IM240 program and the weighted average IM240 emissions from the benchmark program.
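
   As a rough illustration of the weighting step, the sketch below (Python) computes a
model-year-weighted fleet average from a handful of invented sample results and an assumed
fleet mix; a real evaluation would draw both from the program's own data and registration
records.

    from collections import defaultdict

    # Model-year-weighted fleet average (toy data; results and mix are assumptions)
    sample = [  # (model year, IM240 HC result in g/mi) for sampled vehicles
        (1984, 3.1), (1984, 2.7), (1990, 1.2), (1990, 0.9), (1996, 0.3),
    ]
    fleet_share = {1984: 0.20, 1990: 0.45, 1996: 0.35}  # assumed model year mix

    # Average within each model year, then weight each year's average by its
    # share of the subject fleet
    by_year = defaultdict(list)
    for year, hc in sample:
        by_year[year].append(hc)
    fleet_avg = sum(fleet_share[y] * sum(v) / len(v) for y, v in by_year.items())
    print(f"Model-year-weighted fleet average HC: {fleet_avg:.2f} g/mi")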

       The methods used to obtain a suitable sample may include, but are not limited to, the
following: Vehicles may be obtained by allowing a computer to randomly select vehicles at a
test-only inspection facility; by randomly denying registration renewal to a sample of vehicles
until they have been submitted for program evaluation testing; or by using law enforcement
officers to select random vehicle samples for evaluation at a demographically balanced number
of roadside locations. Regardless of the recruitment option selected, however,  evaluation testing
must be done in a manner that prevents the personnel involved in the official inspection and
repair cycle from knowing whether a vehicle has been selected for such testing until the
inspection and repair cycle has been completed.
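
   A computerized random selection of the kind described above might look like the sketch
below (Python); the 0.1 percent rate echoes the 1992 rule's sampling floor, and the
function shown is hypothetical.

    import random

    SELECTION_RATE = 0.001   # 0.1 percent, the 1992 I/M rule's sampling floor

    def selected_for_evaluation() -> bool:
        """Decide, per arriving vehicle, whether it joins the random sample.
        The result must be withheld from inspection and repair personnel until
        the vehicle's inspection and repair cycle is complete."""
        return random.random() < SELECTION_RATE

    # e.g., flag a vehicle at check-in
    flagged = selected_for_evaluation()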

Section 2.2: ASM/Idle Method

       In an ASM or Idle program, the Sierra Research method requires development of a
correlation between the IM240 and the program's ASM or Idle test on a prescribed number of
1967 and later model year vehicles.  Once this correlation has been established, the program can
be evaluated by recruiting a prescribed number of 1967 and later model year vehicles from the
fleet and testing them per the ASM or  Idle test procedure.  The number of vehicles needed for
the evaluation will be dependent on the variability of the alternative test.4

       The IM240-to-ASM/Idle correlations are then used to convert  the ASM or Idle
measurements to IM240-equivalents for each vehicle. At this point, the emission measurements
will be treated in the same manner as the IM240 program emissions measured in Section 2.1
above. That is, a model year based weighted average is determined for the fleet and adjusted to
correct for the LAP differences between the program-in-question and  the benchmark, Arizona
program. The program evaluation is then made by a direct comparison between the state ASM
or Idle program and the weighted average IM240 emissions from the benchmark program.
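
   As a hedged illustration of the correlation step, the sketch below (Python) fits a
simple linear relationship from invented paired ASM/IM240 data and uses it to convert ASM
readings into IM240-equivalents. The linear form and the data pairs are placeholders only;
the actual correlation equations are those cited in the Sierra report above.

    # Fit an assumed linear IM240-vs-ASM correlation from paired test data
    pairs = [(120.0, 0.6), (450.0, 1.9), (80.0, 0.5), (300.0, 1.4)]  # (ASM ppm, IM240 g/mi), invented

    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in pairs)
             / sum((x - mean_x) ** 2 for x, _ in pairs))
    intercept = mean_y - slope * mean_x

    # Convert a fleet sample of ASM readings into IM240-equivalent g/mi
    asm_readings = [100.0, 250.0, 400.0]
    im240_equivalents = [slope * x + intercept for x in asm_readings]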

        Recruitment options and testing provisions used under the ASM/Idle Method should be
the same as those described above for the IM240 Method.

1 Radian International performed a similar analysis for the California Bureau of Automotive Repair. This report was
titled "Methodology for Estimating California Fleet FTP Emissions" and was dated December 3, 1997. However,
because this method focused on the use of ASM road-side pullover testing, which is unique to California, it was not
believed this would be a feasible program evaluation methodology for other states to implement.

2 Private conversation, Dr. J. Lindner, US EPA, with Dr. E. Rothman, Director, Center for Statistical Consultation
and Research, University of Michigan.

3 "Review of Sierra Research Report 'Development of a Proposed Procedure for Determining the Equivalency of
Alternative Inspection and Maintenance Programs'", T. Wenzel & Dr. R. Sawyer, Lawrence Berkeley National
Laboratory, September 23, 1998.

4 EPA will work with states desiring to use this method to determine the appropriate sample size needed for this step
of the evaluation in each state.

Section 3:    Estimating Program Benefits Using VMAS Data

       VMAS is a patented technology developed by Sensors Inc. of  Saline, Michigan and is
currently being pilot tested by the New York Department of Environmental Conservation (NY
DEC). It is used in conjunction with a BAR97 analysis system that consists of an internal
tailpipe probe, gas analyzers, and computer. The unit provides second-by-second as well as
cumulative tailpipe mass emissions measurements over a defined test  period. On a
second-by-second basis, VMAS measures tailpipe exhaust flow, receives raw pollutant
concentrations from the BAR97 system, and computes real-time mass emissions from the
tailpipe flow and concentration data. The test period may be steady-state, transient, unloaded or
loaded; however, the only mode studied to date has been the transient IM240 cycle. When used
in this fashion, i.e., VMAS/BAR97 with an IM240 drive cycle run on a BAR97-certified
dynamometer, the I/M test is referred to as the NYTEST.
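
   The mass computation VMAS performs can be illustrated with the sketch below (Python):
second-by-second mass equals exhaust flow times pollutant concentration times the
pollutant's gas density, summed over the cycle and divided by distance. The traces, the CO
density figure, and the nominal cycle distance are stated assumptions, not VMAS internals.

    # Second-by-second mass emissions from flow and concentration (toy traces)
    CO_DENSITY_G_PER_L = 1.25      # approximate density of CO gas (assumed conditions)

    flow_l_per_s = [8.0, 9.5, 12.0, 15.5]           # hypothetical exhaust flow, L/s
    co_volume_frac = [0.002, 0.004, 0.006, 0.003]   # hypothetical CO volume fraction

    grams_co = sum(q * c * CO_DENSITY_G_PER_L
                   for q, c in zip(flow_l_per_s, co_volume_frac))

    CYCLE_MILES = 1.96   # nominal IM240 drive cycle distance, miles (assumed)
    print(f"CO mass emissions: {grams_co / CYCLE_MILES:.3f} g/mi")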

       Although VMAS technology has the potential to be applied to I/M emission tests other
than the NYTEST, to date EPA has only focused on evaluating the system as used in the
NYTEST procedure.  NYTEST/IM240 correlation data have been collected to date on 99
vehicles and the results are very encouraging.  Furthermore, NY DEC is in the process of
collecting data on an additional 5,100 vehicles.5 Based upon these preliminary findings, EPA is
currently approving the VMAS system operated according to the NYTEST protocol as
equivalent to the IM240, with regard to its use as a program evaluation measurement tool. States
choosing to use the VMAS/NYTEST protocol for program evaluation purposes should be aware,
however, that this approval may be withdrawn should subsequent data collection and analysis
dictate.  The random sample recruitment and benchmark comparison provisions discussed
previously also apply to this alternative program evaluation methodology.

        Other states and/or vendors may submit VMAS-based correlation data to EPA using
configurations other than those specified for the NYTEST discussed above. However, prior to
embarking on a test program, such states should have their test and analysis plan reviewed by
EPA.

5 At this time, EPA has not yet verified the credit levels associated with using NYTEST, but will work with NY
DEC to establish the appropriate level of credit based on the results of NY's test program.

Section 4:     Estimating Program Benefits Using RG240 Data

       There are currently no standard equipment specifications for RG240 technology.
Therefore, when the term "RG" is used, it does not unequivocally define a specific analyzer,
dynamometer,  and flow measurement  system. The RG240 system referred to here is that
developed by California Analytical, Inc. and utilizes analyzer and flow measurement technology
similar to that specified in EPA's IM240 and Evap Technical Guidance. The dynamometer used
must be equivalent to the BAR97 dynamometer used for the ASM to be considered an RG240
system for the purposes of this guidance.

        The paired IM240/RG240 correlation data submitted to EPA thus far have been limited to
a 100-vehicle pilot study performed by Rhode Island.  To supplement these data, EPA is currently
working to obtain correlation data on an additional 2,000 IM240/RG240 vehicles, including
paired Federal  Test Procedure (FTP) data on 200 of these vehicles. The collection and analysis
of this data will be performed in FY99. Based upon its preliminary findings, however, EPA is
currently approving the specified RG240 configuration as equivalent to the IM240, with regard
to its use as a program evaluation measurement tool. States choosing to use this RG240
approach for program evaluation purposes should be aware, however, that this approval may be
withdrawn should subsequent data collection and analysis dictate.  The random  sample
recruitment and benchmark comparison provisions discussed previously also apply to this
alternative program evaluation methodology.

       Other states and/or vendors may submit RG240 correlation data to EPA using RG240
configurations other than the one identified above. However, prior to embarking on a test
program, they should have their test and analysis plan  reviewed by EPA.

Section 5:        Estimating Program Benefits Using RSD Data

       There has been much discussion within the I/M community over the accuracy and utility
of RSD technology in I/M programs. Improvements in site selection criteria and measurement
technology have led to RSD gaining acceptance by EPA with regard to the technology's use as a
clean-screening methodology. Interest has also been shown in RSD's potential as a program
evaluation tool, and today's guidance focuses on this particular RSD application. As is the case
with most alternative testing methodologies, if RSD-measured fleet average emissions can be
correlated back to the IM240, then this method (albeit non-METT) will be as valid a program
effectiveness methodology as the IM240-based methods previously advocated. The question
remains, however, whether or not such a correlation is possible.

       In evaluating this approach, EPA looked at two different I/M program evaluations which
have been performed using RSD technology by separate researchers.6,7 The details of these
studies with regard to site selection, data collection, etc. may be found in the literature; however,
future EPA program evaluation guidance (which will be issued should EPA find RSD methods to
be sound for evaluation purposes) will also provide more detailed information on these topics
since these  are critical to ensuring that RSD-measured fleet average emissions can be correlated
with IM240 or other METT data.

       The first study (Stedman, et al.) used RSD-based measurements to evaluate Denver's
current IM240-based I/M program which has been in operation since 1995. The study concluded
that the program was achieving a 4-7% carbon monoxide (CO) benefit, while no hydrocarbon
(HC) or oxides of nitrogen  (NOx) benefits were detected.  These values are well below the
benefits predicted by the MOBILE model for an IM240-based program, which is typically
projected to yield benefits in the range of 12-16% HC, 23-29% CO, and 7-11% NOx (assuming
national defaults for vehicle age distribution, fuels, and average temperature).  In considering
these conclusions, however, it is important to note that the Denver area also had a pre-existing
basic I/M program in place, which operated from 1981 until it was replaced by the
IM240 program in 1995. Furthermore, the Denver program is unusual among I/M programs, in
that it was designed primarily as a CO control measure, as opposed to ozone control.

       The second study (Rogers, et al.) examined the Atlanta program, and — in sharp contrast
to the Stedman study — concluded that Atlanta's hybrid idle-based I/M program achieved
emission reductions greater than those predicted by MOBILE for light-duty cars (109% of the
projected benefit), while falling somewhat below the MOBILE prediction for light-duty trucks (72% of
the projected benefit). It seems improbable that a centralized IM240 program would fare so
much worse than predicted relative to a hybrid idle test program, regardless of the geographic
and fleet characteristic differences  between the Atlanta and Denver areas. Furthermore, the
disparity cannot be written off as the result of Denver's having had a pre-existing basic program,
since Atlanta, too, has operated a basic I/M program since 1982.  Therefore, the reasons for this
range in RSD-calculated program evaluation results must be investigated further before EPA can
make a more definitive  statement regarding the viability of RSD  as a sound program evaluation
method. EPA is investigating this issue with the authors of both  studies and will include the
results of this effort in any subsequent program evaluation guidance  relating to RSD that EPA
may publish in the future.

        States and/or vendors that wish to do so are invited to submit RSD-based data to EPA in
support of this program evaluation methodology.  However, prior to embarking on a test
program, they should have their test and analysis plan reviewed by EPA. Due to the
uncertainties which remain, EPA is not approving RSD fleet characterization as a sound program
evaluation method as part of today's guidance, though it is still possible that we may approve
this method at some future date.

6 Stedman, Bishop, and Slott, Environmental Science & Technology, 31(3), 927, 1997.

7 Rogers, DeHart-Davis, and Lorang, Atmospheric Environment, submitted for publication.


Section 6: Program Evaluation Data Format Specifications

   Though states are free to submit program evaluation data to EPA in whatever format is most
convenient for them, we do have a recommended data format which is included below for states'
consideration:

Vehicle Data Items

   State
   City
   VIN
   Fuel Type (Gas or Diesel)
   Make
   Model Year
   GVWR
   Curbweight

Test Data For Gram/Mile Tests

   Test Purpose (Baseline, Correlation, Program Evaluation)
   Submitter's Test Identification (optional)
   Test Procedure (IM240, RG240, NYTEST)
   Test Date
   Odometer
   Total Hydrocarbon Emissions (HC) (expressed in grams/mile)
   Carbon Monoxide Emissions (CO) (expressed in grams/mile)
   Carbon Dioxide Emissions (CO2) (expressed in grams/mile)
   Oxides of Nitrogen Emissions, corrected for humidity (expressed in grams/mile)
   -OR, for NYTEST,
   Nitric Oxide Emissions (NO), corrected for humidity (expressed in grams/mile)

Test Data For Concentration Tests

   Test Purpose (Baseline, Correlation, Program Evaluation)
   Submitter's Test Identification (optional)
   Test Procedure (ASM-5015, ASM-2525, RSD, IDLE)
   Test Date
   Odometer
   Total Hydrocarbon Emissions (HC) (expressed in concentration units)
   Carbon Monoxide Emissions (CO) (expressed in concentration units)
   Carbon Dioxide Emissions (CO2) (expressed in concentration units)
   Nitric Oxide Emissions (NO), corrected for humidity (expressed in concentration units)

EPA further recommends that these data items be submitted as either 1) tab-delimited ASCII text
files or 2) .DBF files.  Any widely used media may be used, such as 3.5-inch diskette,
CD-ROM, "ZIP", or "JAZ" disk. There should be one file for the vehicle data items and one or two
files for test data items depending on the test procedures used. The following field naming and
data types are recommended:

Vehicle Data Items:

STATE (Character, length 2)
CITY (Character, length 20)
VIN (Character, length 17)
FUELTYPE (Character, length 4, containing either GAS or DIES)
MAKE (Character, length 12)
MODEL_YR (Numeric, length 4)
GVWR (Numeric, length 6)
CURBWEIGHT (Numeric, length 6)

GM/Mile Test Data Items:

PURPOSE (Character, length 10, containing BASELINE, CORRELATE, or PROGEVAL)
TEST_ID (Character, length 12)
TEST_PROC (Character, length 5, containing IM240, RG240, or NYTST)
TEST_DATE (Date, or Character length 6 containing MMDDYY)
ODOMETER (Numeric, length 6)
THC (Numeric, length 7 with 3 decimals)
CO (Numeric, length 8 with 3 decimals)
CO2 (Numeric, length 8 with 3 decimals)
NOX or NO (Numeric, length 7 with 3 decimals)

Concentration Test Data Items:

PURPOSE (Character, length 10, containing BASELINE, CORRELATE, or PROGEVAL)
TEST_ID (Character, length 12)
TEST_PROC (Character, length 5, containing ASM50, ASM25, RSD, or IDLE)
TEST_DATE (Date, or Character length 6 containing MMDDYY)
ODOMETER (Numeric, length 6)
C_THC (Expressed as parts per million, Numeric, length 4)
C_CO (Expressed as percentage, Numeric, length 6 with 2 decimals)
C_CO2 (Expressed as percentage, Numeric, length 6 with 2 decimals)
C_NO (Expressed as parts per million, Numeric, length 4)
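
   For states assembling a submission, the sketch below (Python) writes the vehicle data
items as a tab-delimited ASCII file in the recommended layout; the sample row, the file
name, and the inclusion of a header line are illustrative assumptions.

    import csv

    fields = ["STATE", "CITY", "VIN", "FUELTYPE", "MAKE",
              "MODEL_YR", "GVWR", "CURBWEIGHT"]
    rows = [
        # one invented record for illustration
        ["AZ", "Phoenix", "1HGCM82633A004352", "GAS", "HONDA", 1996, 4200, 2800],
    ]
    with open("vehicles.txt", "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(fields)    # header line (an assumption; not required above)
        writer.writerows(rows)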