Guidance on Biennial Performance Evaluation Requirements for Enhanced Vehicle Inspection and Maintenance (I/M) Programs

Transportation and Climate Division
Office of Transportation and Air Quality
U.S. Environmental Protection Agency

EPA-420-B-20-040
June 2020

1. Purpose of Guidance
The Environmental Protection Agency (EPA) is providing this guidance to clarify how the
biennial performance evaluation requirements can be met for states with mandatory enhanced
vehicle emission inspection and maintenance (I/M) programs. Biennial performance evaluations
are a necessary element of an I/M program that is required by the Clean Air Act (CAA) to be
operated at the enhanced performance level. Performance evaluations allow the effectiveness
and emission reduction benefits of an enhanced I/M program to be quantified every two years.
This guidance reaffirms and expounds upon EPA's previous program evaluation guidance,
including the 2004 document entitled, Guidance on Use of Remote Sensing for Evaluation of I/M
Program Performance (EPA 420-B-04-010, July 2004).1 This new guidance is a supplement to,
not a replacement for, the previous guidance; as such, only a few relevant portions are repeated
here. It adds strategies that more specifically address the prevalence of Onboard Diagnostic
(OBD) testing in today's I/M programs.
The 2004 guidance mainly covers methods for conducting performance evaluations using out-of-
program data (e.g., data obtained from remote sensing devices (RSD)).2 It should also be noted
that due to the variabilities and biases of the various program evaluation methodologies, an
evaluation based on multiple methods, including both out-of-program data and in-program
data, will provide a more accurate estimate of overall program performance than relying
on any one method alone. To this end, this guidance also outlines a strategy to quantify an I/M
program's effectiveness using in-program data in conjunction with mobile source emission factor
modeling.
In support of efforts to increase compliance, EPA developed this guidance to help state and local
governments meet their CAA and regulatory requirements in enhanced I/M program areas. This
guidance also provides options and reflects the latest technologies and practices in use by
enhanced I/M programs across the United States. Finally, this guidance was written as a result of
a 2018 audit by the Office of Inspector General (OIG) regarding EPA oversight of enhanced I/M
programs entitled, Collecting Additional Performance Data from States Could Help EPA Better
Assess the Effectiveness of Vehicle Inspection and Maintenance Programs (OPE-FY17-0018).3
1	nepis.epa.gov/Exe/ZyPdf.cgi?Dockey=P1002J6C.pdf
2	Such data can also be collected to satisfy the on-road testing requirements for enhanced I/M programs. For more
information, see Guidance for On-Road Testing Requirements for Enhanced Vehicle Inspection and Maintenance
(I/M) Programs (EPA-420-B-20-020, March 2020). See nepis.epa.gov/Exe/ZyPDF.cgi?Dockey=P100YQX8.pdf.
3	This report was released on September 25, 2018, and is available at: www.epa.gov/office-inspector-general/report-collecting-additional-performance-data-states-could-help-epa-better.
2. What are the Clean Air Act and regulatory requirements?
The 1990 Amendments to the CAA required I/M programs for certain areas across the country
based upon various criteria, such as air quality status, population, and/or geographic location.
The CAA established two performance levels of I/M programs: "basic" I/M for ozone
nonattainment areas classified as moderate, and "enhanced" I/M. Pursuant to CAA sections 182,
184 and 187, enhanced I/M programs are mandated in the following areas:
•	All serious or worse ozone nonattainment areas that had a 1980 urban population of
200,000 or more;
•	Metropolitan statistical areas with a 1990 population of 100,000 or more in the Ozone
Transport Region (regardless of their air quality classification); and
•	All moderate or worse CO nonattainment areas with a design value greater than 12.7 parts
per million (ppm) at the time of classification that had a 1980 urban population of 200,000
or more.
One of the obligations of an enhanced I/M program is to conduct a performance evaluation every
two years. Among other things, section 182(c)(3)(C) of the CAA requires that all states subject
to enhanced I/M shall:
...biennially prepare a report to the Administrator which assesses the emission reductions
achieved by the program required under this paragraph based on data collected during
inspection and repair of vehicles. The methods used to assess the emission reductions
shall be those established by the Administrator.
In 1992, EPA promulgated the original I/M rule at 40 CFR 51 Subpart S, and EPA has since
amended the rule several times. The I/M rule establishes the technical, procedural and
administrative requirements to be met by basic and enhanced I/M programs. Within the I/M
rule, section 51.353, Network type and program evaluation, establishes the requirements for a
biennial program evaluation:
(c) Program evaluation. Enhanced I/M programs shall include an ongoing evaluation to
quantify the emission reduction benefits of the program, and to determine if the program
is meeting the requirements of the Clean Air Act and this subpart.
(1)	The State shall report the results of the program evaluation on a biennial
basis, starting two years after the initial start date of mandatory testing as
required in §51.373 of this subpart.
(2)	The evaluation shall be considered in establishing actual emission reductions
achieved from I/M for the purposes of satisfying the requirements of sections
182(g)(1) and 182(g)(2) of the Clean Air Act, relating to reductions in emissions
and compliance demonstration.
-2-

-------
(3)	The evaluation program shall consist, at a minimum, of those items described
in paragraph (b)(1) of this section and program evaluation data using a sound
evaluation methodology, as approved by EPA, and evaporative system checks,
specified in §51.357(a)(9) and (10) of this subpart, for model years subject to
those evaporative system test procedures. The test data shall be obtained from a
representative, random sample, taken at the time of initial inspection (before
repair) on a minimum of 0.1 percent of the vehicles subject to inspection in a
given year. Such vehicles shall receive a State administered or monitored test, as
specified in this paragraph (c)(3), prior to the performance of I/M-triggered
repairs during the inspection cycle under consideration.
(4)	The program evaluation test data shall be submitted to EPA and shall be
capable of providing accurate information about the overall effectiveness of an
I/M program, such evaluation to begin no later than 1 year after program start-up.
(5)	Areas that qualify for and choose to implement an OTR low enhanced I/M
program, as established in §51.351(h), and that claim in their SIP less emission
reduction credit than the basic performance standard for one or more pollutants,
are exempt from the requirements of paragraphs (c)(1) through (c)(4) of this
section. The reports required under §51.366 of this part shall be sufficient in
these areas to satisfy the requirements of the Clean Air Act for program reporting.
Section 51.353(c)(3) cross-references to section 51.353(b)(1) for the minimum program
evaluation items. However, due to an error over the course of several I/M rule amendments,
paragraph (b)(1) was not included and is listed as (b) "Reserved." EPA has committed to
remove this cross-reference the next time the rule is opened for more substantial revisions.
This issue is discussed further in Section 3 of this document below. States
conducting I/M program performance evaluations should consult this and the previous guidance
for the appropriate methods and elements.
In addition, the reporting section of the I/M rule requires qualitative program evaluations of all
I/M programs (both basic and enhanced) biennially. Section 51.366(e) states that all I/M
programs shall submit to EPA, by July of every other year, biennial reports addressing:
(1)	Any changes made in program design, funding, personnel levels, procedures,
regulations, and legal authority, with detailed discussion and evaluation of the
impact on the program of all such changes; and
(2)	Any weaknesses or problems identified in the program within the two-year
reporting period, what steps have already been taken to correct those problems,
the results of those steps, and any future efforts planned.
Many states with enhanced I/M programs choose to include these additional biennial qualitative
program reporting elements with their quantitative biennial performance evaluations.
3. What OIG recommendation is addressed by this guidance?
The OIG report included a finding that several states with enhanced I/M programs were not
conducting biennial program evaluations that included estimates of the emission reduction
benefits of the program, as prescribed by regulation.4 As a result, the OIG highlighted that this
hampered EPA's ability to properly assess the effectiveness of these I/M programs:
When states do not conduct program evaluations, the EPA and states do not have
empirical evidence to determine whether the inspection and maintenance program is
achieving its projected emission reductions. This lessens the EPA's assurance that the
programs are achieving the anticipated emission reductions and air quality
improvements projected for those programs. Further, in the absence of these reports,
deficiencies in the program can go unidentified and uncorrected.5
The OIG also noted that, due to a missing reference, the regulation for I/M program evaluations
caused confusion for some states:
Paragraph 40 CFR § 51.353(c)(3) describes the program evaluation requirement and
includes a cross reference to another paragraph for a description of the minimum
program items. However, the referenced paragraph was marked "reserved" and
provided no additional information.6
In addition, the OIG indicated some confusion regarding EPA's program evaluation guidance.
Some states did not realize the 2004 guidance could be used in conjunction with OBD testing or
for I/M programs that no longer conduct tailpipe testing.7
As a result of these and other findings, the OIG audit made several recommendations to EPA's
Office of Air and Radiation (OAR) for assuring consistent and effective implementation of
enhanced I/M programs. The complete list of the OIG's recommendations may be found in
Appendix A of the OIG report. Recommendation #3 of the OIG's 2018 report addressed
mandatory biennial performance evaluations in enhanced I/M areas, and OAR responded:
Recommendation 3: Revise the vehicle inspection and maintenance rule to remove the
cross reference to Title 40, § 51.353(b)(1) of the Code of Federal Regulations, and
provide defined evaluation methodology guidance to enable states to quantify emission
reductions.
Response 3: OAR agrees with this recommendation and - as noted by OIG in its draft
report - intends to direct EPA's Office of Transportation and Air Quality (OTAQ) to
revise the I/M rule to remove the reference the next time the rule is revised for more
substantial revisions. Additionally, and in the interim, OAR will direct OTAQ to issue
guidance to clarify this provision as well as that enhanced I/M programs that are not
already using some other approved program evaluation methodology should be using the
OTAQ guidance document issued in July 2004, Guidance on Use of Remote Sensing for
Evaluation of I/M Program Performance (EPA420-B-04-010).8
4	Collecting Additional Performance Data from States Could Help EPA Better Assess the Effectiveness of Vehicle Inspection and Maintenance Programs (OPE-FY17-0018); pg. 11.
5	Ibid. pg. 12.
6	Ibid. pg. 11.
7	Ibid. pg. 12.
8	Ibid. pg. 24.
To satisfy this recommendation by clearing up some of the confusion that states with enhanced
I/M programs may have regarding the methods and requirements for biennial performance
evaluations, OTAQ worked closely with the EPA Regional Offices to develop this guidance. In
the next two sections, this document briefly outlines example methods to evaluate the
emission benefits of an OBD-based program using either RSD data (Section 4) or mobile
source modeling (Section 5).
4. Performance Evaluations of an OBD-based I/M program using methods from the 2004
Guidance
As part of its response to the 2018 OIG audit report, EPA reaffirms that the 2004 guidance9 may
be used to evaluate the effectiveness and emissions benefits of an I/M program that conducts
OBD testing.
The 2004 guidance details three methods on how to use independent or out-of-program data (i.e.,
data collected via an on-road testing regimen employing RSD and/or roadside pullovers)10 to
perform I/M program evaluations. These three methods are:
•	Step Change Method,
•	Comprehensive Method, and
•	Reference Analysis Method.
9	Guidance on Use of Remote Sensing for Evaluation of I/M Program Performance (EPA 420-B-04-010, July 2004); nepis.epa.gov/Exe/ZyPdf.cgi?Dockey=P1002J6C.pdf.
10	For the purposes of this document, out-of-program data collected either by RSD or roadside pullovers will generally be referred to as "RSD data".
The Step Change Method is useful for evaluating the short-term impacts of a program during the
brief window after a new I/M program is implemented or after changes to an existing program.
Using the RSD data collected on a fleet of vehicles in an I/M program area, as described in the
2004 guidance, the fleet is then divided into two sub-fleets, based on whether individual vehicles
have been tested under the current I/M program or not. The emissions of the two sub-fleets are
then compared. After accounting for differences in vehicle type and age, the difference in the
emissions of the tested fleet and the untested fleet is the estimated benefit of the current I/M
program in reducing emissions.
The Comprehensive Method is similar to the Step Change Method in that it can evaluate the
short-term benefits (e.g., for a single inspection cycle) of an I/M program through comparison
with RSD data from two sub-fleets in the same I/M program area. However, the Comprehensive
Method involves comparing RSD data from the sub-fleet measured prior to initial I/M testing
with the sub-fleet measured during or after final I/M testing. The difference in average measured
emissions between the sub-fleets yields the benefit of the I/M program for that inspection period.
Sub-fleets can be further divided by vehicle type and/or age (model year or model year groups).
The average emissions for each vehicle type or age group can then be weighted by the
corresponding percentage of these groups in the I/M fleet, so that the RSD-sampled fleet's
estimated benefits reflect the composition of the I/M-tested fleet.
The third method described in the 2004 guidance, the Reference Analysis Method, estimates the
benefits of an I/M program on a vehicle fleet by comparing the emissions of a fleet subject to
I/M with estimated fleet emissions if no I/M program were in place. The Reference Method
involves comparing RSD data from vehicles registered in an I/M program area to vehicles from a
non-I/M program (or reference) area. The difference in total fleet emissions between the I/M
program area and the untested reference area represents the emission reductions benefit of the
I/M program.
States with enhanced I/M programs can continue to refer to EPA's 2004 guidance for applying
the above methods. In addition, this guidance will elaborate on how the Comprehensive Method
in the 2004 guidance could also be used for states with OBD testing wishing to conduct a
biennial I/M program performance evaluation using RSD data.
As noted in the OIG report, some states did not realize the 2004 guidance could be used in
conjunction with OBD testing. Some of the confusion might be due to the fact that the 2004
guidance does not specifically mention OBD testing. OBD testing does not yield emission
measurements, but rather verifies the operation of a vehicle's emission control system. Also, the
2004 guidance indicates that the collected RSD data is compared to actual emissions obtained
from an in-program tailpipe test. Indeed, on page 44, the 2004 guidance does say:
The Comprehensive Method differs from other remote sensing methods, in that it
explicitly compares emissions reductions of the I/M tested fleet as measured by the
program and as measured independently by remote sensing.
However, instead of using in-program data for this comparison, RSD data can also be used for
determining the post I/M-tested fleet's average emissions with this method. RSD data can be
used for comparing vehicle sub-fleets both before and after their I/M program test to determine
the net emissions benefit. Since a pre-inspection test RSD-sampled fleet is compared to a post-
inspection RSD-sampled fleet, it is irrelevant what type of inspection is conducted by the I/M
program. Thus, by analyzing the emission measurements of vehicles sampled prior to, and after,
their regularly-scheduled OBD test, the emissions benefit of the OBD-tested fleet can be
estimated. Several states are currently using the Comprehensive Method in developing their I/M
programs' biennial performance evaluations.
Using RSD for program evaluations is a highly complex process. Agencies wishing to conduct
these analyses should familiarize themselves with the 2004 guidance and develop a minimum
level of expertise with the Comprehensive Method procedures, found on pages 45-47 of the 2004
guidance, to ensure reliable data are collected and analyses performed. Below are some
additional considerations when conducting a performance evaluation of an OBD I/M program
using the Comprehensive Method:
•	Pre-Inspection Repairs - Since OBD testing has become prevalent, initial fail rates have
trended downward. This is partially due to the newer vehicles entering the fleet being
cleaner and emission controls being more durable. Motorist education and awareness of
OBD testing is also a contributing factor. One of the benefits of the OBD system is that
it allows the motorist to know beforehand that the vehicle would fail the I/M test when
the malfunction indicator light (MIL) is illuminated. When an inspection is coming due, a
motorist who sees a lit MIL is likely to get the vehicle repaired prior to the inspection.
An advantage of the Comprehensive Method is that it allows the effect of pre-test repairs
on average emissions to be estimated. The 2004 guidance addresses how to handle the
possible bias from pre-inspection repairs when developing a program evaluation by
eliminating vehicles from the RSD-sampled fleet that are within a month of their
scheduled inspection:
To minimize the effect of pre-test repairs on baseline emissions, remote sensing
measurements made within a month before a scheduled I/M test can be excluded
from the analysis (i.e., remote sensing measurements from 1 to 3 months prior to
the initial I/M test can be compared with remote sensing measurements from 0 to
3 months after the final I/M test).11
This paragraph also demonstrates that the emission measurements from prior to the initial
test, and after the final test, can both be obtained from RSD data, which is particularly
applicable to performance evaluation analyses on I/M programs with OBD testing.
•	Analysis using RSD data from vehicles that fail an initial test but pass a subsequent test -
To allow the limited amount of RSD-sampled data to best capture the I/M program's
effectiveness, the post-test average emissions may be estimated from only vehicles that
fail their initial test but pass a subsequent test. Per the Comprehensive Method, vehicles
in the RSD-sampled fleet can be further categorized into several groups, based on the
results of their I/M test(s): 1) vehicles that pass their initial I/M test; 2) vehicles that fail
their initial test but pass a subsequent test; 3) vehicles that fail their initial test and do not
receive a subsequent I/M test; and 4) vehicles that fail their initial test and fail a
subsequent I/M test.12 Given that the benefit of an I/M program comes from the repair of
vehicles as a result of inspection, only vehicles in category #2 (those that fail their
initial test but pass a subsequent test) receive a benefit from the I/M program. Therefore,
agencies conducting a performance evaluation of an OBD-tested fleet may wish to focus
their post-inspection RSD fleet data analysis only on this category #2 group. The other
three test-outcome categories may be eliminated from the calculation of post-OBD
inspection emission averages. A sketch following this list illustrates both this category
filter and the pre-test exclusion window discussed above.
11	Guidance on Use of Remote Sensing for Evaluation of I/M Program Performance (EPA 420-B-04-010, July
2004); pg. 47.
12	Ibid. pg. 46.
5. Program Evaluation Using Mobile Source Modeling
This section outlines how mobile source modeling can be used to perform a biennial
performance evaluation for an I/M program. EPA's Motor Vehicle Emission Simulator
(MOVES) is a state-of-the-science emission modeling system that estimates emissions for
mobile sources at the national, county, and project levels for criteria air pollutants, greenhouse
gases, and air toxics.13,14 MOVES includes the capability of modeling the essential elements of
an I/M program.
For a program evaluation, MOVES can be used to determine the benefits of an I/M program by
comparing the emissions of the current I/M program to a no-I/M scenario. This result is
analogous to that achieved with the Reference Method described in the 2004 guidance. In fact,
the 2004 guidance references mobile source emission models to be used in conjunction with the
Reference Method in performing an I/M program evaluation:
Finally, emission factor modeling output from MOBILE or another model that predicts
emissions of the inspected and non-inspected fleets can then be used to compare with
real-world differences in inspected and non-inspected fleets measured by the remote
sensing data.15
And
RSD emission differences in inspected and reference fleets can be compared to the
differences predicted by EPA mobile models to determine an I/M program effectiveness
rating.16
To use MOVES to perform a biennial program evaluation, two model runs are needed:
1. I/M run using actual program details (the "Actual I/M run"): This run should include all
the relevant local inputs used for state implementation plan (SIP) demonstration
modeling, along with the actual I/M program details. See the Inspection and
Maintenance Programs section of the most current MOVES Technical Guidance for a list
of relevant I/M program inputs and detailed instructions on how to create an I/M input
table.17 However, this run may differ from runs conducted for SIP purposes. For the
purposes of the biennial program evaluation, a MOVES I/M input known as the
Compliance Factor should reflect the actual program performance, as reported by the I/M
program in its annual reports (pursuant to 40 CFR 51.366) for the years covered by the
corresponding performance evaluation period or from data derived from the program's
on-road testing regimen, rather than projected program data. See Section 5.2 Compliance
Factor Parameters and Calculation for further information.
2. No-I/M program run: This run should be functionally the same as the I/M run described
above (i.e., the RunSpec and Input Database should be the same), but the "No I/M
Program" box must be checked within the I/M tab of the MOVES County Data Manager.
Checking the "No I/M Program" check box will clear all existing I/M data from the input
database and set the database to a no-I/M status.
13	For more detailed information on MOVES, visit: www.epa.gov/moves.
14	With the exception of California (which uses the EMFAC model developed specifically for that state).
15	Guidance on Use of Remote Sensing for Evaluation of I/M Program Performance (EPA 420-B-04-010, July 2004); nepis.epa.gov/Exe/ZyPdf.cgi?Dockey=P1002J6C.pdf; pg. 52.
16	Ibid. pg. 53.
17	As of the date of release of this guidance, the current version of the MOVES Technical Guidance is MOVES2014, MOVES2014a, and MOVES2014b Technical Guidance: Using MOVES to Prepare Emission Inventories for State Implementation Plans and Transportation Conformity (EPA-420-B-18-039, August 2018), https://nepis.epa.gov/Exe/ZyPDF.cgi?Dockey=P100V7EY.pdf. Check the latest MOVES model webpage for updated guidance on the latest version of MOVES: https://www.epa.gov/moves/latest-version-motor-vehicle-emission-simulator-moves.
The net difference in emissions between the No-I/M run and the Actual I/M run is the emissions
benefit of the I/M program for the purposes of the biennial program evaluation. As with
performance standard modeling, this estimated actual program benefit can be compared to the
target benefit of the I/M program established in the SIP.
5.1 Background on the Use of Modeling for Program Evaluation
EPA believes that the use of MOVES for program evaluations is a viable option that is consistent
with I/M program implementation to date, including the continued reliance on OBD technology.
The provisions for biennial performance evaluations were added to the I/M rule at 40 CFR 51
Subpart S in 1998 at the beginning of the transition to OBD testing in I/M programs. At the
time, EPA's mobile source modeling system was MOBILE, but the model did not account for the
benefit of OBD I/M testing until the release of MOBILE6.0 in 2002. In the
proposal for the 1998 amendment to the I/M rule, EPA stated that:
The program effectiveness evaluation does not itself produce emission reductions.
Rather, the program evaluation is intended to confirm that emission reductions projected
by modeling and claimed in the states' implementation plans have been achieved in
actual practice.18
Thus, if emission reduction projections are made using modeling, it is appropriate to assess these
projections using the latest available emissions model based on actual data (obtained from RSD
or I/M program test results).
Similarly, prior to the release of the 2004 guidance which detailed the use of RSD data in
program evaluations, EPA had issued the 1998 document entitled, Inspection and Maintenance
(I/M) Program Effectiveness Methodologies (EPA420-S-98-015, October 1998).19 This
document outlined three alternative I/M program evaluation methodologies. One of those
methods relied on modeling data as an element of a performance evaluation. At the time, the
I/M program data was obtained from tailpipe testing and then correlated to a benchmark program
to determine program effectiveness. Today, however, I/M program testing is characterized by
OBD data. Over the years, with the advancements in mobile source emissions modeling,
including the incorporation of various sources of benchmark data, EPA's model effectively
correlates OBD data to fleet emissions while accounting for local I/M program variables. Thus,
EPA's mobile source modeling system uses data obtained from the I/M program-tested fleet in
estimating the actual emission reductions of the I/M program.
18	62 FR 48196, September 19, 1997.
19	https://nepis.epa.gov/Exe/ZyPDF.cgi?Dockey=P1008F3N.PDF.
5.2 Compliance Factor Parameters and Calculation
MOVES uses a single "compliance factor" to account for a given program's compliance rate,
waiver rate, and an adjustment to account for an I/M program that may not cover an entire source
type because the program only applies to certain weight classes. The compliance factor is entered
in MOVES as a number from 0 to 100 and represents the percentage of vehicles within a source
type that receive the benefits of the program.
The general equation for calculating the compliance factor is:

Compliance Factor = % Compliance Rate x (100% - Waiver Rate) x Regulatory Class Coverage Adjustment
The compliance factor parameters are detailed in the Inspection and Maintenance Programs
section of the most current MOVES Technical Guidance.20 Descriptions of the compliance factor
parameters are provided below:
•	Compliance rate - the percentage of vehicles in the fleet covered by the I/M program that
complete the I/M program and receive either a certificate of compliance or a waiver.
Alternatively, the compliance rate can be determined using out-of-program data
collected from the enhanced I/M program's required on-road testing regimen, such as
RSD sampling. The sampled fleet can be compared to the state's vehicle registration
and emission inspection databases to estimate the percentage of the sampled fleet that
completed the I/M program during the inspection cycle covered by the evaluation
period (a matching sketch follows this list).
•	Waiver rate - the percentage of initially failed vehicles receiving a waiver. The waiver
rate should be calculated as the number of initially failed (OBD or tailpipe tested)
vehicles receiving a waiver divided by the total number of vehicles initially failing the
tailpipe or OBD test types. For more information on obtaining these vehicle counts from
actual program test data, see Section 5 of Guidance on Vehicle Inspection and
Maintenance (I/M) Test Data Statistics as Part of Annual I/M Reporting Requirements.21
The waiver rate can therefore be calculated by the equation:

Waiver Rate (%) = (# Waivered Vehicles / (# Initial OBD Failed Veh. + # Initial Tailpipe Failed Veh.)) x 100%
•	Regulatory Class Coverage Adjustment - I/M programs entered in MOVES can only be
applied by source type. However, this may be inconsistent with state I/M program
regulations, which define program coverage by vehicle weight class. Since MOVES
source types are a composite of several vehicle weight classes, applying I/M benefits to
the entire MOVES source type may be inappropriate. The MOVES Technical Guidance
contains a table of regulatory class coverage adjustments to account for this discrepancy.
The adjustments are percentages of VMT by the various regulatory weight classes within
a source type.
For the "Actual I/M run" described above, the compliance factor parameters (i.e. compliance rate
and waiver rate) should reflect the actual program performance for the years covered by the
corresponding performance evaluation period or from data derived from the program's on-road
testing regimen, rather than projected program data. Because a biennial performance evaluation
covers the span of two years, the compliance factor used in the MOVES "Actual I/M run" should
be a weighted average based on the total number of unique vehicles tested22 in the respective
years covered by the biennial program evaluation. See the Appendix for an example of a
compliance factor calculation.
6. Who can I contact for more information on this guidance?
For questions concerning a particular state or I/M program area, contact the I/M coordinator
through the primary Mobile Source Contact at your EPA Regional Office. A listing of Regional
Mobile Source Contacts is available at: www.epa.gov/transportation-air-pollution-and-climate-change/office-transportation-and-air-quality-contacts-topic.
General questions about this guidance can be directed to Joe Winkelmann at EPA's Office of
Transportation and Air Quality: winkelmann.joseph@epa.gov.
Additional information regarding vehicle emission I/M programs can be found on EPA's website
at: www.epa.gov/state-and-local-transportation/vehicle-emissions-inspection-and-maintenance.
Another good resource for I/M programs is the EPA-supported National OBD Clearinghouse:
www.obdclearinghouse.com/.
For more information on EPA's MOVES mobile source emissions modeling, visit:
www.epa.gov/moves.
7. Does this guidance create any new requirements?
No. This guidance is based on CAA requirements and existing associated regulations, and does
not create any new requirements. The CAA and EPA's I/M rule at 40 CFR Part 51, Subpart S
contain legally binding requirements. This document is not a substitute for those provisions or
regulations, nor is it a regulation itself. Thus, it does not impose legally binding requirements on
EPA, states, or the regulated community, and may not apply to a particular situation based upon
the circumstances. EPA retains the discretion to consider and adopt approaches on a case-by-
case basis that may differ from this guidance but still comply with the statute and applicable
regulations. This guidance may be revised periodically without an opportunity for public
comment.
APPENDIX: Sample Compliance Factor Calculation
Example 1
For this example, for a given year covered by the biennial performance evaluation, we will
assume that the actual compliance rate is 96%, the actual waiver rate is 8%, and that the program
covers gasoline passenger cars (source type 21), passenger trucks (source type 31), and light
commercial trucks (source type 32), for vehicles less than 8501 pounds gross vehicle weight
rating (GVWR). First, the user must determine the regulatory class coverage adjustment using
the information in the regulatory class coverage adjustments table in the MOVES Technical
Guidance. For source type 21, the regulatory class adjustment is 100%, as all passenger cars are
less than 8501 pounds GVWR. For source type 31, the regulatory class coverage adjustment is
98% (meaning that 98% of the vehicles in source type 31 are less than 8501 pounds GVWR).
For source type 32, the regulatory class coverage adjustment is 93%.
Given the conditions above, the calculation of compliance factor for passenger cars, source type
21, using the general equation for finding the compliance factor is as follows:
Compliance Factor = % Compliance Rate x (100% - Waiver Rate) x Regulatory Class Coverage Adjustment

88.32% = 96% x (100% - 8%) x 100%
Thus, for rows in the IM Coverage table input for source type 21, the value for the compliance
factor column should be "88.32".
A similar sample calculation for rows in the IM Coverage table input for source type 31, yields a
value for the compliance factor column of "86.5536":
Compliance Factor = % Compliance Rate x (100% - Waiver Rate) x Regulatory Class Coverage Adjustment

86.5536% = 96% x (100% - 8%) x 98%
Lastly, a sample calculation for rows in the IM Coverage table input for source type 32, yields a
value for the compliance factor column of "82.1376":
Compliance Factor = % Compliance Rate x (100% - Waiver Rate) x Regulatory Class Coverage Adjustment

82.1376% = 96% x (100% - 8%) x 93%
Example 2
This example details the weighting calculation for the compliance factor to be used in the
MOVES "Actual I/M run" to represent the two years covered by the biennial performance
evaluation. The following data was collected and calculated for the two years covered by the
biennial performance evaluation for source type 31:

	Compliance Factor	Number of unique vehicles tested
Year 1	93%	1,600,000
Year 2	91%	1,540,000
Total	-	3,140,000
The calculation of a weighted compliance factor is as follows:

Weighted Compliance Factor = (Year 1 Compliance Factor x (Year 1 unique vehicles / Total unique vehicles tested (Year 1 + Year 2))) + (Year 2 Compliance Factor x (Year 2 unique vehicles / Total unique vehicles tested (Year 1 + Year 2)))

92.0191% = (93% x (1,600,000 / 3,140,000)) + (91% x (1,540,000 / 3,140,000))
         = (93% x 50.9554%) + (91% x 49.0446%)
Thus, for the MOVES "Actual I/M run" rows in the IM Coverage table input for source type 31,
the value for the compliance factor column should be "92.0191". A similar calculation should be
performed for each source type covered by the I/M program.