United States Environmental Protection Agency
Office of Air and Radiation (ANR-443)
Washington, DC 20460
I/M Network Type:
Effects On Emission Reductions, Cost, and Convenience

Technical Information Document
By
Eugene J. Tierney
I/M Network Type:
Effects On Emission Reductions, Cost, and Convenience
EXECUTIVE SUMMARY
The Clean Air Act Amendments (CAAA) of 1990 require
that EPA revise and republish Inspection and Maintenance
(I/M) guidance in the Federal Register addressing a variety
of issues including network type, i.e., whether the program
is centralized or decentralized.

Nearly every State that must operate an I/M program
will need to obtain legislative authority to meet revised
guidelines. This need provides an opportunity to reassess
the effectiveness of current I/M program designs and make
changes that will lead to improved air quality over the next
decade. Network type is the most obvious and influential
factor at work in an I/M program. This report attempts to
analyze how emission reduction effectiveness, cost, and
convenience vary by network type.
Major factors that influence the emission reductions in
an I/M program are test procedures, analyzer accuracy,
quality control, inspector competence and honesty, and
quality assurance. One of the fundamental features that
distinguishes centralized programs from decentralized is the
number of analyzers, stations, and inspectors. Decentralized
programs range from hundreds to thousands of stations,
analyzers and inspectors. By contrast, large centralized
programs have 10-20 stations in a given urban area with 3-5
testing lanes each. Consequently, maintaining a high level
of quality assurance and quality control over analyzers and
inspectors is very difficult and costly in decentralized
programs while readily available and easily implemented in a
centralized system. The results of a wide array of program
audits, studies, and other analyses by EPA and local
governments show that decentralized programs suffer from
improper testing by inspectors, less accurate analyzers, and
from more variation in test procedure performance. The
magnitude of the differences is difficult to accurately
quantify, but evidence indicates that decentralized programs
may be 20-40% less effective than centralized.
Obtaining additional emission reductions from in-use
motor vehicles is essential to meeting the goals for enhanced
I/M programs in the CAAA of 1990. To achieve this, I/M
programs need to adopt more sophisticated procedures and
equipment to better identify high emitting vehicles. This
would result in higher costs, which may make it difficult to
justify low volume decentralized stations, and longer test
times, implying the need for more lanes and stations in a
centralized system. Further, alternatives such as loaded
transient testing may not be practical in decentralized
programs. Thus, centralized programs, which are already more
effective, may also have greater potential for improving
effectiveness.
The cost of I/M programs is also important. Two
components of cost are addressed in this report: the cost to
the motorist and the cost of program oversight. There is a
distinct difference in cost between centralized and
decentralized programs; centralized programs are
substantially cheaper, and the gap is widening. In most
cases centralized programs have had decreasing or stable
costs while decentralized programs have gotten more expensive
due to improved equipment and the demands by inspectors for
higher compensation. This trend is likely to continue given
the additional inspection requirements imposed by the CAAA of
1990.
Convenience is named as the major advantage of
decentralized programs. There are many more inspection
locations, and waiting lines are generally non-existent.
Vehicles which are identified as high emitters can be
repaired by the testing facility, which is also authorized to
issue a compliance or waiver certificate after retest.
Centralized testing facilities, on the other hand, can be
visited without appointment, but the motorist may encounter
waiting lines during peak usage periods. Vehicles which fail
the inspection must be taken elsewhere for repair and
returned to a testing facility afterwards.
This report discusses the results of a consumer survey
on decentralized program experiences and also discusses
inspection facility siting and reported waiting time in
centralized programs. Individuals' experiences in
decentralized programs vary, depending on whether they seek
an inspection without an appointment, and whether the nearest
licensed facility is also capable of completing complex
engine repairs. Experiences in centralized networks also
vary; most vehicles will not need repair, and one well-timed
inspection visit will suffice. On the other hand,
inadequately sized networks can produce serious problems for
consumers.
In summary, the majority of centralized programs have
been found to be more effective at reducing emissions at a
lower cost than decentralized programs. Improvements to
decentralized programs are needed to assure that objective,
accurate inspections are performed. These improvements are
likely to be costly, and desired levels of sophistication may
be beyond the capabilities of most private garages. Consumer
convenience continues to be an important factor in network
design and public acceptance.
Table of Contents

Executive Summary
List of Figures
List of Tables
1.0 Introduction
2.0 Background
3.0 Emission Reductions from I/M Programs
    3.1 Formal Test Procedures
    3.2 Emission Analyzer Accuracy and Quality Control
    3.3 Emission Testing Objectivity and Accuracy
        3.3.1 Background
        3.3.2 Supporting Evidence
            3.3.2.1 Failure Rate Data
            3.3.2.2 Overt Audit Data
            3.3.2.3 Covert Audit Data
            3.3.2.4 Anecdotal Evidence
            3.3.2.5 Waiver Rate Data
            3.3.2.6 Test Record Data
        3.3.3 Conclusions
    3.4 Visual and Functional Inspections of Emission Controls
        3.4.1 Background
        3.4.2 Supporting Evidence
        3.4.3 Conclusions
4.0 Program Costs
    4.1 Inspection Costs
    4.2 Repair Costs
    4.3 Conclusions
5.0 Convenience
6.0 Future Considerations Affecting the Comparison of Network Types
7.0 Predicted Emission Reductions From MOBILE4 and an Additional
    Assumption Regarding Waivers
8.0 Conclusions
List of Figures

3-1 Vehicles Passing California I/M After Extended Preconditioning
3-2 Old and New Emission Test Failure Rates in I/M Programs
3-3 Failure Rates in Manual and Computerized Stations in New Hampshire
3-4 Fraction of Covert Audits Finding Improper Tests
3-5 Overall Tampering Rates in Select I/M Programs
3-6 Catalyst and Inlet Tampering Rates in Select I/M Programs
3-7 Tampering Rates in Decentralized and Centralized Programs
3-8 Aftermarket Catalyst Usage in Anti-Tampering Programs
3-9 Tampering Rates in I/M and Non-I/M Areas in California
3-10 Frequency of Proper Tampering Tests
4-1 Cost Per Vehicle of I/M Programs
4-2 Cost Per Vehicle By Network Type
4-3 Cost By Program Age
4-4 Nationwide Inspection and Oversight Cost of I/M
5-1 Average Daily Waiting Times in Illinois' I/M Program
7-1 Benefits From Various Potential Changes to I/M Programs
List of Tables

2-1 Currently Operating or Scheduled I/M Programs
3-1 Test Procedures Currently Used in I/M Programs
3-2 Error of Commission Rates Using High Speed Preconditioning
3-3 Change in Failure Rates From First Idle to Second Idle
3-4 Model Year Coverage of Anti-Tampering Inspections
3-5 Gas Audit Failure of Emission Analyzers By Program
3-6 Early Emission Test Failure Rates in I/M Programs
3-7 Waiver Rates in I/M Programs in 1989
3-8 Model Year Switching Between Initial and Retests
3-9 Vehicles With Dilution Levels Below 8%
1.0 INTRODUCTION
At any point in time, a certain percentage of vehicles on the
road are emitting pollutants in excess of their design standards
due to repairable causes. Motor vehicle emissions inspection and
maintenance (I/M) programs employ a short screening test to
identify high emitters and a retest after repairs to confirm their
effectiveness in reducing emissions. It is the mandatory initial
screening test and the retest after repairs that differentiate I/M
from a public information campaign about motor vehicle maintenance
or a program to train automotive mechanics.
Where and how those tests are conducted has been one of the
fundamental choices in designing an I/M program. This choice is
generally made by the elected officials who establish the necessary
authorizing legislation. Two basic types of I/M systems exist: 1)
inspection and retest at high volume, test-only lanes (a
centralized network), and 2) inspection and retest at privately-
owned, licensed facilities (a decentralized network). A
combination of centralized and decentralized inspections is also
found, the latter usually being for retests only.
This report discusses how the choice of network design can
affect the quality, cost, and convenience of I/M inspections. This
report examines operating results from inspection programs across
the United States to compare the effects of network design. Each
section will present available information and attempt to come to a
conclusion regarding the relative merits of each network option.
The discussion of network choice is particularly important in
light of the Clean Air Act Amendments (CAAA) of 1990, which introduce
significant changes for I/M programs. The CAAA require
centralized testing in enhanced areas, unless the State can
demonstrate that decentralized testing is equally effective. The
phrasing of this requirement implies a desire on the part of
Congress for getting the most out of I/M, while not wishing to
close out the decentralized option altogether. This report lays
the groundwork for discussing what may be necessary to make
decentralized testing as effective as centralized testing.
2.0 BACKGROUND
The first emission I/M program was established in the State of
New Jersey with mandatory inspection and voluntary repair in 1972.
In 1974, repairs became mandatory for vehicles which failed the
inspection test. New Jersey added the emission inspection to its
existing, centralized safety inspection network. The States of
Oregon and Arizona followed suit by establishing centralized
inspection programs in 1975 and 1976. Oregon, like New Jersey,
established a State-operated network, while Arizona was the first
State to implement a contractor-operated network.
Following passage of the Clean Air Act Amendments of 1977,
which mandated I/M for areas with long term air quality problems,
other State and local governments established inspection programs.
Table 2-1 illustrates the choices which have been made regarding
network design in the United States as of January, 1991.
Table 2-1
Currently Operating or Scheduled I/M Programs

Centralized, Contractor Operated (11):
    Cleveland1, Connecticut, Arizona, Illinois, Florida3, Louisville,
    Maryland, Minnesota3, Nashville, Washington, Wisconsin

Centralized, State/Local Operated (6):
    D.C., Delaware, Indiana, Memphis, New Jersey, Oregon

Decentralized, Computerized Analyzers (15):
    Anchorage, California, Colorado, Dallas/El Paso, Fairbanks, Georgia,
    Massachusetts, Michigan, Missouri, Nevada, New Hampshire, New Mexico,
    New York, Pennsylvania, Virginia

Decentralized, Manual Analyzers (6):
    Davis Co., UT2, Idaho, N. Carolina2, Rhode Island3, Provo, UT,
    Salt Lake City

Decentralized, Parameter Inspection (3):
    Houston, Louisiana, N. Kentucky, Ohio, Oklahoma

1 Texas and Ohio are counted as one program each but are listed in two
  columns.
2 Committed to switching to decentralized computerized analyzers.
3 Scheduled to begin in 1991.
Decentralized networks are more abundant due to a variety of
factors. In seventeen areas, the emission inspection requirement
was simply added to a pre-existing decentralized safety inspection
network. Another ten areas implemented decentralized systems
because they were perceived to be less costly and more convenient
to the vehicle owner.
Those areas choosing to implement centralized systems cited
improved quality control and avoidance of conflict of interest as
the main reasons. Eleven areas elected to have private
contractors own and operate the inspection network so as to avoid
the initial capital investment as well as the ongoing
responsibility for staffing and maintenance. Following New Jersey
and Oregon's lead, four States chose to use State-owned centralized
facilities and government employees to operate the test system.
The U.S. Environmental Protection Agency (EPA) did not attempt
to specify network choice when it first established I/M policy in
1978. EPA assumed that decentralized programs could be as
effective as centralized programs, if well designed and operated.
At the time that the policy was developed, there were no
decentralized I/M programs to observe. EPA did, however,
anticipate the increased need for oversight in a decentralized
network, and the 1978 policy required additional quality assurance
activities for such systems.
In 1984, EPA initiated I/M program audits as a part of the
National Air Audit System. The audit procedures were developed
jointly by EPA, the State and Territorial Air Pollution Program
Administrators (STAPPA) , and the Association of Local Air Pollution
Control Officials (ALAPCO). Since the audit program's inception,
EPA has fielded audit teams on 96 different occasions totaling 323
days of on-site visits to assess program adequacy. Audit teams are
composed of trained EPA staff specializing in I/M program
evaluation. Every currently operating I/M program has been audited
at least once and many have been audited several times. The major
elements of I/M audits include:
1) overt visits to test stations to check for measurement
instrument accuracy, to observe testing, to assess
quality control and quality assurance procedures
instituted by the program, and to review records kept in
the station;
2) covert visits to test stations to obtain objective data
on inspector performance;
3) a review of records kept by the program, including the
history of station and inspector performance (e.g.,
failure rates and waiver rates), enforcement actions
taken against stations and inspectors found to be
violating regulations, and similar documents;
4) analysis of program operating statistics, including
enforcement rates, failure rates, waiver rates, and
similar information; and,
5) entrance and exit interviews with I/M program officials
and a written report describing the audit findings and
EPA recommendations on correcting any problems found.
Further detail on audit procedures can be found in the National Air
Audit System Guidance.1
EPA has pursued several other means of monitoring I/M program
performance. Roadside emission and tampering surveys2 are
conducted in I/M areas throughout the country each year. These
surveys provide information on how well programs are correcting
tampering and bringing about tailpipe emission reductions. EPA
requests certain I/M programs to submit raw test data for in-depth
analysis. These data are analyzed in a variety of ways to
determine whether programs are operating adequately.3 Studies
conducted by individual States and other organizations also provide
information on program effectiveness.4,5,6 In preparing this
report, all of these sources were considered in the discussion of
network design impacts.
3.0 EMISSION REDUCTIONS FROM I/M PROGRAMS
From an environmental perspective, the most critical aspect in
evaluating an I/M program is the emission reduction benefit it
achieves. There are three major factors related to network type
that must be considered in making such an assessment: the
prescribed test procedures, instrumentation and quality control,
and actual test performance (including the issuance of waivers).
3.1 Formal Test Procedures
In order to assure that air quality benefits are achieved, it
is necessary to assure that high emitters are identified by the
emission test and are properly repaired. The Federal Test
Procedure (FTP), which is used to certify that new vehicle designs
meet emission standards, is not practical for use in the field,
because it is very expensive and time consuming. Short emission
tests were developed for I/M programs to allow rapid screening of
in-use vehicles for emission performance. It is desirable,
however, that an I/M test be able to predict about the same
pass/fail outcome that the FTP would, and especially that it not
fail a vehicle which could not benefit from repair.
Generically, there are three short tests currently in use in
I/M programs: idle, two-speed and loaded/idle. Which programs use
which test is shown in Table 3-1. There are some variations
between how programs define and carry out these tests, but the
basic approach is the same for each test. There is no direct
relationship between network type and test type, although loaded
testing is currently only done in centralized programs.
The use of preconditioning, or preparation of the vehicle for
testing, is another test-related variable illustrated in Table 3-1.
Preconditioning is performed to assure that the vehicle is at
normal operating temperature, and that any adverse effect of
extended idling is eliminated. Some I/M programs utilize a period
of high speed operation (2500 rpm for up to 30 seconds) to
precondition vehicles; some operate the vehicle on a chassis
dynamometer prior to conducting the idle test; others do no
preconditioning. Most decentralized programs, especially those
employing computerized analyzers, conduct high speed
preconditioning. To reduce costs and test time, many centralized
programs did not adopt preconditioning. As with loaded testing,
only centralized programs are doing loaded preconditioning.
The importance of properly preconditioning a vehicle for the
short test has become increasingly apparent to EPA during the last
few years due to concerns about false failures of newer model
vehicles. As a result, the merits of high speed preconditioning
and loaded preconditioning have been studied to assess their
effectiveness. Two recent EPA studies7 gathered data related to
this issue, one which recruited vehicles which failed the short
test in Maryland's centralized I/M program, and one which recruited
vehicles from Michigan's decentralized I/M program. In both
studies, vehicles that had failed their regularly scheduled I/M
test were tested on the full FTP and carefully examined for
anything that needed repair. Both studies revealed the existence
of errors of commission (i.e., failed vehicles which had low
emissions and no apparent defects but nevertheless failed the I/M
short test) in these programs which utilize 15-30 seconds of high
speed preconditioning. The results are shown in Table 3-2. Since
owners of incorrectly failed vehicles are subjected to unnecessary
inconvenience and repair expense, the elimination of incorrect
failures is a priority. To address the need for improved test
procedures, EPA issued a technical report detailing recommended
alternative short test procedures in December, 1990.8 The
procedures include three variations of the idle test, two
variations of the two speed idle test, and a loaded test procedure.
In all but the loaded test procedure and the idle test procedure
with loaded preconditioning, a second-chance test which includes
three minutes of high speed preconditioning is recommended upon
failure of the initial test.

Table 3-1
Test Procedures Currently Used in I/M Programs

Program          Network Type1   Test Type2    Preconditioning
D.C.             CG/D            Idle          None
Delaware         CG              Idle          None
Indiana          CG              Two Speed     High Speed
Memphis          CG              Idle          None
New Jersey       CG              Idle          None
Oregon           CG              Two Speed     High Speed
Arizona          CC              Loaded/Idle   Loaded
Connecticut      CC              Idle          Loaded
Florida          CC              Idle          Loaded
Illinois         CC              Two Speed     High Speed
Louisville       CC              Idle          High Speed
Maryland         CC              Idle          High Speed
Minnesota        CC              Idle          Loaded
Nashville        CC              Idle          None
Washington       CC              Two Speed     High Speed
Wisconsin        CC              Idle          Loaded
Alaska           D               Two Speed     High Speed
California       D               Two Speed     High Speed
Colorado         D               Two Speed     High Speed
Dallas           D               Idle          High Speed
El Paso          D               Idle          High Speed
Georgia          D               Idle          High Speed
Idaho            D               Idle          None
Massachusetts    D               Idle          High Speed
Michigan         D               Idle          High Speed
Missouri         D               Idle          High Speed
Nevada           D               Two Speed     High Speed
New Hampshire    D               Idle          High Speed
North Carolina   D               Idle          High Speed
New Jersey       D               Idle          High Speed
New Mexico       D               Idle          High Speed
New York         D               Idle          High Speed
Pennsylvania     D               Idle          High Speed
Rhode Island     D               Idle          None
Utah             D               Two Speed     High Speed
Virginia         D               Idle          High Speed

1 CG = Centralized government, CC = Centralized contractor, D = Decentralized
2 Idle = pass/fail determined only from idle emission readings
  Two Speed = pass/fail determined from both idle and 2500 rpm readings
  Loaded/Idle = pass/fail determined from loaded and idle readings
Table 3-2
Error of Commission Rates Using High Speed Preconditioning*

             Total Number of        Number of            Incorrect
Program      Vehicles Recruited     Incorrect Failures   Failure Rate
Michigan     237                    70                   30%
Maryland     178                    55                   33%

* The Michigan study used FTPs to verify whether each I/M failure was
  correct. The Maryland study gave vehicles a second short test and
  counted as incorrect failures the vehicles that passed. No FTPs were
  conducted on these vehicles, but they were assumed to be passes.
Twenty-four of the thirty-five programs currently operating
use high speed preconditioning. Seven programs do no formal
preconditioning. Three centralized programs utilize loaded
preconditioning and Florida and Minnesota will soon begin such
testing.
EPA has not done a definitive study of loaded preconditioning
or no preconditioning programs but there is some revealing evidence
in the data from the programs that use loaded preconditioning. In
these programs, the idle test is repeated on initially failed
vehicles after 30 or more seconds of loaded operation. The change
in pass/fail status is shown in Table 3-3. Although these data
are not accompanied by FTP results, the in-depth studies in
Maryland and Michigan indicate that vehicles which pass I/M
cutpoints after loaded operation are more likely to be low
emitters. The seven programs that use no formal preconditioning
may be experiencing failure patterns similar to those on the first
idle of the loaded preconditioning programs, i.e., they may have a
large number of incorrect failures.
Table 3-3
Change in Failure Rates From First Idle to Second Idle

                Failed First    Failed Both
Model Years     Idle Test       Tests          Delta*
Pre-1981        46%             21%            -54%
Post-1980       52%             39%            -25%

* Delta is the percent of unpreconditioned failures eliminated by loaded
  preconditioning.
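The delta column follows directly from the two failure-rate columns; the
short sketch below (in Python, purely illustrative) reproduces that
arithmetic using the values in Table 3-3.

    # Delta: percent of unpreconditioned (first-idle) failures eliminated
    # once the vehicle is retested after loaded preconditioning.
    def delta(failed_first_idle, failed_both):
        return -100.0 * (failed_first_idle - failed_both) / failed_first_idle

    print(round(delta(0.46, 0.21)))  # pre-1981 vehicles:  -54 (percent)
    print(round(delta(0.52, 0.39)))  # post-1980 vehicles: -25 (percent)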
Fifteen to thirty seconds of 2500 rpm operation does not seem
adequate to precondition all vehicles. No preconditioning at all
is probably worse. Thirty to ninety seconds of loaded operation
seems to work well but may only be feasible in a centralized
network because the purchase and installation of a chassis
dynamometer is considered to be beyond the financial capability of
most private garages. The Motor Vehicle Manufacturers Association
has endorsed three minutes of 2500 rpm operation as adequate to
precondition recent model year vehicles, and EPA's analysis shows
that extended unloaded preconditioning reduces incorrect failures.
This type of preconditioning is feasible in a decentralized
network, and several programs that have recently switched to BAR 90
type analyzers are currently pursuing this approach. Figure 3-1
shows the results of second chance testing on 42,555 vehicles
covering 1968 and newer model years in California using 3 minutes
of 2500 rpm preconditioning on vehicles that fail the initial test.
The data show that 37% of the vehicles that fail the initial test
pass after extended preconditioning.
Based on this evidence, improving preconditioning is a high
priority. The relative cost of loaded preconditioning and extended
high speed preconditioning is an important question. Loaded
preconditioning requires the installation of dynamometers. In
centralized programs, that cost can be spread over tens of
thousands of tests. The typical decentralized station inspects about
1000 vehicles per year. Loaded preconditioning seems to accomplish
the task of readying the vehicle for testing in much less time than
high speed preconditioning, i.e., 30 seconds vs. 180 seconds. In a
decentralized station, the time factor is important in terms of
wage labor. In centralized programs, minimizing test time is
essential to keeping throughput high and test costs low.
Figure 3-1
Vehicles Passing California I/M After Extended Preconditioning
[Figure: bar chart comparing the number of vehicles tested, the initial
failures, and the initial failures that pass after extended
preconditioning]
Loaded mode operation is definitely necessary if an I/M
jurisdiction wishes to test for emissions of oxides of nitrogen
(NOx) . There has been little attention focused on NOx in recent
years. Only Los Angeles, California experiences violations of the
ambient air quality standard for nitrogen dioxide, and emission
control planners elsewhere have historically concentrated on
hydrocarbon-only control strategies for reducing ozone levels.
There has been growing interest lately, however, in a strategy
which combines HC and NOx controls to reduce the potential for
ozone formation. As discussed above, loaded mode testing may be
practical from a cost standpoint only in centralized networks.
Tampering checks may also provide NOx emission reductions. Section
3.4 addresses such tampering checks.
Transient loaded mode operation may also be essential for
implementing tests that better identify vehicles that should get
repairs. The steady-state idle, two speed and loaded tests
currently used in I/M programs identify only 30-50% of the vehicles
that need repair. EPA has developed a prototype short transient
test that identifies essentially all vehicles that need repair.
This procedure is being evaluated in the centralized I/M program in
Indiana. Results thus far show that the test accurately identifies
vehicles that need repair, while minimizing the number of falsely
failed vehicles.
A transient loaded test is a more complicated, expensive and
time consuming test. It involves the use of a more sophisticated
dynamometer than those in use in most loaded testing programs.
More sophisticated sampling and measurement equipment is also
involved. The test has many advantages, however, over the current
tests. First, by accurately predicting FTP results it identifies
more vehicles for repair, which should lead to greater emission
reductions from the program and better cost effectiveness. It also
allows for NOx testing and a functional test of the evaporative
control system. Evaporative emissions represent a very large
portion of the total hydrocarbon emissions from motor vehicles.
Having an effective functional test to identify vehicles that need
evaporative system repair is essential to reducing in-use
emissions. Transient testing may also allow the elimination of
other tampering checks frequently performed in I/M programs. It is
accurate enough to not require this "double check."
Anti-tampering inspections are another part of the formal test
procedure in I/M programs. As Table 2-1 shows, some programs only
inspect for the presence and proper connection of emission control
devices (parameter inspection), while others do both emission tests
and tampering checks. Table 3-4 shows the current tampering test
requirements in I/M programs. Centralized programs typically do no
anti-tampering checks or fewer than decentralized programs. Again,
the concern for rapid throughput in centralized lanes has been an
important factor in this choice. Recently, centralized programs
have been adding anti-tampering checks to their emission only
programs. Maryland, Arizona, and New Jersey are examples of this.
Their experiences show that anti-tampering inspections can be done
quickly if limited to the items that are not in the engine
compartment: the presence of a catalyst, the integrity of the fuel
inlet restrictor, and a test for leaded fuel deposits in the
tailpipe. Inspection of underhood components is possible in both
centralized and decentralized networks, but only two centralized
programs have so far adopted such inspections. Decentralized
programs, in which throughput is less of a consideration, have more
readily included underhood checks in their regulations.
To conclude, the choice of network type affects the relative
costs of improvements in preconditioning and test procedures.
Decentralized programs spread the equipment cost across as few as 3
or 4 tests per day. Centralized programs have shied away from
preconditioning and anti-tampering checks in the past because of
lack of clear need (at the time of original program adoption) and
the impact on throughput, not because of the hardware cost. Anti-
tampering checks are currently more prevalent in decentralizea
programs (the effectiveness of decentralized anti-tampering checks
is an important factor which will be discussed in a later section).
Centralized networks are probably the best choice for loaded mode
emission testing for NOx control and for improved identification of
vehicles needing repair. An alternative that has not yet been
tried but might be practical from a cost perspective would be a
limited-participation decentralized network, in which a small
number of stations would be licensed to insure a high volume of
business at each.
Table 3-4
Model Year Coverage of Anti-Tampering Inspections
(oldest model year checked is listed)

[Table listing, for each program, the network type (C = Centralized,
D = Decentralized, A = Anti-tampering only) and the oldest model year
checked for the catalytic converter, fuel inlet restrictor, lead test,
air pump, PCV valve, and evaporative canister. Centralized programs:
Arizona, Connecticut, D.C., Delaware, Florida, Illinois, Indiana,
Louisville, Maryland, Memphis, Minnesota, Nashville, New Jersey, Oregon,
Washington, and Wisconsin (Florida and Minnesota not currently
operating). Decentralized programs: Anchorage, Fairbanks, California,
Colorado, Dallas/El Paso, Georgia, Idaho, Massachusetts, Michigan,
Missouri, Nevada, New Hampshire, New Mexico, New York, North Carolina,
Pennsylvania, Rhode Island, Davis Co. (UT), Provo, Salt Lake City, and
Virginia. Anti-tampering-only programs: Louisiana, N. Kentucky, Ohio,
Oklahoma, and Houston. The individual model year entries are garbled in
this copy and are not reproduced.]
3.2 Emission Analyzer Accuracy and Quality Control
I/M tests are conducted with non-dispersive infrared analyzers
which can measure hydrocarbons (HC) (as hexane), carbon monoxide
(CO), and carbon dioxide (CO2), although not all equipment includes
the latter capability. The concentration of pollutants in vehicle
exhaust is measured by placing a steel probe, attached to the
analyzer by a sample hose, in the exhaust pipe and drawing a sample
of exhaust into the instrument. The concentration in the exhaust
stream is compared to the I/M program's pass/fail cutpoints (1.2%
CO and 220 ppm HC for the majority of the fleet).
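The pass/fail decision applied to those readings is a simple threshold
comparison. The sketch below (Python, purely illustrative) assumes the
typical cutpoints quoted above; actual cutpoints vary by program and
model year.

    CO_CUTPOINT_PCT = 1.2   # percent CO, typical cutpoint cited above
    HC_CUTPOINT_PPM = 220   # ppm HC (as hexane), typical cutpoint cited above

    def idle_test_result(co_pct, hc_ppm):
        # Pass only if both readings are at or below the cutpoints.
        if co_pct <= CO_CUTPOINT_PCT and hc_ppm <= HC_CUTPOINT_PPM:
            return "pass"
        return "fail"

    print(idle_test_result(0.4, 150))   # pass
    print(idle_test_result(2.5, 300))   # fail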
Accurate measurement capability is essential for several
reasons. First, although cutpoints are high enough that small
errors do not critically affect emission reductions, any
significant error affects equity and public confidence in the
testing process. Naturally, public support of the program is
critical to the ultimate success of the effort. Second, leaks in
the analyzer dilute the emission sample and could lead to false
passes of high emitting vehicles. Leaks can quickly and easily
become large enough to cause large amounts of dilution. This
results in a loss in emission reduction benefits to the program.
Finally, bad data obscure what is happening in the program.
Tracking the performance of vehicles in the fleet over time is one
indicator of the success of an I/M program. Inaccurate readings
could mislead planners and decision makers.
The accuracy of the reading depends on periodic calibration
checks and adjustments, regular maintenance of filters and water
traps, and detection and repair of leaks in the sampling system.
Each I/M program has requirements for maintaining the analyzers and
also has an independent audit program in which analyzers are
checked on a periodic basis. EPA recommends that program
administrators use a +5%/-7% tolerance when auditing inspection
equipment. That is, if an analyzer reads a known concentration of
gas more than 5 percent too high or 7 percent too low during an
audit, then it should be taken out of service until it can be
adjusted or repaired.
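As an illustration, the tolerance check amounts to comparing the analyzer
reading against the labeled concentration of the audit gas; the sketch
below (Python, with a hypothetical function name) is one way to express
it.

    def analyzer_passes_audit(reading, labeled_concentration):
        # Within tolerance if no more than 5% high and no more than 7% low.
        error = (reading - labeled_concentration) / labeled_concentration
        return -0.07 <= error <= 0.05

    # Example audit with a bottle labeled 300 ppm HC:
    print(analyzer_passes_audit(312, 300))  # about 4% high -> True, stays in service
    print(analyzer_passes_audit(275, 300))  # about 8% low  -> False, out of service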
EPA audits of operating I/M programs also include independent
instrument checks with a gas bottle that EPA names and ships to the
site from its Motor Vehicle Emission Laboratory in Ann Arbor,
Michigan. Table 3-5 presents the findings from EPA audits
regarding the accuracy of different programs' instrumentation.
The findings show mixed results depending upon program type.
Centralized contractor-run programs show very strong quality
control. Centralized government-run programs and decentralized
programs are comparatively weak.
Table 3-5
Gas Audit Failure of Emission Analyzers By Program

                 Network   Number of          Rate of HC   Rate of CO
Program          Type1     Analyzers Tested   Failures     Failures
D.C.             CG         13                23%          15%
Delaware         CG          9                67%          22%
Indiana          CG         24                29%           8%
Medford          CG          4                50%           0%
Memphis          CG          9                44%          33%
  Subtotal       CG         59                37%          15%
Connecticut      CC         32                 0%           0%
Illinois         CC         47                 0%           0%
Louisville       CC          6                 0%           0%
Maryland         CC         11                 0%           0%
Washington       CC         12                25%           8%
Wisconsin        CC         11                18%           9%
  Subtotal       CC        119                 4%           2%
California       D          22                 9%          14%
Colorado         D          21                29%          14%
Georgia          D          21                14%          10%
New Hampshire    D          14                29%          21%
North Carolina   D          17                35%          65%
Pennsylvania     D          11                27%          18%
  Subtotal       D         106                23%          23%

1 CG = Centralized government, CC = Centralized contractor, D = Decentralized
Minimum quality control requirements are specified in the
Emission Performance Warranty Regulations promulgated pursuant to
Section 207(b) of the Clean Air Act Amendments of 1977. Among
other things, these regulations require weekly gas span and leak
checks. Centralized programs go far beyond these minimum
requirements, typically conducting these checks two or more times
per day. The best centralized programs, in terms of quality
control results, conduct leak checks and recalibrate equipment on
an hourly basis. Frequent multi-point calibrations, daily
preventative maintenance, and careful monitoring of equipment
performance also characterize these quality control programs.
While such activities are possible in a centralized program with a
limited number of analyzers and the economies of scale of
purchasing large quantities of high pressure calibration gas,
decentralized stations simply could not afford comparable quality
control. Also, the low average test volume in decentralized
stations limits the number of vehicles affected by an equipment
error. As a result, decentralized programs all require stations to
conduct these checks only weekly. Some quality control practices,
such as monthly multi-point calibrations, are never done in
decentralized programs.
Centralized government-run programs tend to have problems with
calibration gas quality. In fact, most of the quality control
failures in these programs can be traced to gas problems. This is
also true of the two centralized contractor-run programs that had
quality control problems. Finally, limited operating budgets in
centralized government-run programs usually result in less frequent
quality control checks and less technical expertise available to
maintain test equipment, when compared with centralized contractor-
run programs.
Decentralized inspection facilities suffer from both problems
to one degree or another. Each individual station usually is
allowed to purchase gas on the open market and has no way of
knowing if the gas is accurately labeled unless the State operates
a gas naming program. The program manager also purchases gas for
auditing station analyzers. These audits serve as the only tool
for ensuring gas accuracy in decentralized programs. However, with
a few exceptions, quality assurance audits occur only two to four
times per year, which limits their effectiveness. Computerized
analyzers used in decentralized programs are programmed to lock out
from testing if the weekly calibration check and leak check by the
station owner are not conducted and passed. In theory, this
ensures a nominal level of instrument accuracy. Before
computerization, these quality control checks depended upon the
operator to remember to do them and do them correctly. However,
EPA auditors frequently report finding computerized analyzers with
leaking sample systems, even though the analyzer has recently
passed a leak check conducted by the owner. The leak check
mechanism is easily defeated (thus saving repair costs and analyzer
down time) by temporarily removing the probe or the entire sample
line and tightly sealing the system to pass the check. A simpler
approach is to kink the sample line to accomplish the same
objective: excluding the most susceptible portions of the sample
system from the leak check. EPA auditors have observed this
happening during overt station audits. At this point, no one has
devised a solution to this problem that could not be defeated.
To conclude, quality control in centralized programs can more
easily meet the highest standards achievable given current
technology since economies of scale allow the use of better
practices. Decentralized programs may always have to settle for
lower quality control and the associated burden of extensive
quality assurance, and less emission reduction benefit due to the
number of stations and inspectors involved.
3.3 Emission Testing Objectivity and Accuracy
3.3.1 Background
Apart from the design considerations related to test procedure
choice and instrument quality control, two crucial factors in the
effectiveness of the inspection process are whether the test is
actually conducted and whether it is done correctly. Even the best
design is defeated by failure to address these factors. Obviously,
simple error or inspector incompetence is a problem, but more
significantly, malfeasance is a problem, as well. Thus, the term
"improper testing" will be used in this report to refer to a
deliberate failure by inspectors to "fail" vehicles that exceed
idle emission cutpoints or which violate anti-tampering rules,
either at initial test or at retest. The degree of machine control
of the testing process and the level of human supervision of the
inspector influence the degree to which improper testing is
possible. Obviously, decentralized programs do not allow for the
same level of human supervision of the testing process as
centralized programs. Overt quality assurance visits occur, at
most, once per month and rarely involve the observation of the
testing process. Even if testing were observed during these
visits, only the most inept inspectors would deliberately perform
the test incorrectly while being observed by the State auditor.
In the early 1980's, almost all decentralized I/M programs
employed manual emission analyzers, and the inspector/mechanic was
responsible for every step of the process: selecting the proper
cutpoints based on a vehicle's model year and type, recording the
results from an analog (needle on dial) display onto a test form,
and making the pass/fail decision. In short there was no machine
control of the inspection process, so the inspector's proficiency
and honesty were key to objective and accurate tests.
Across the board, decentralized manual analyzer programs were
reporting operating statistics which did not agree with those
reported by either centralized programs or decentralized programs
using computerized analyzers, or with the available information
about in-use emission performance. In most cases, the reported
failure rate was only 20 to 40 percent of what would be expected.
Table 3-6 shows failure rates from all programs reporting data from
the early 1980's.
Table 3-6 might be hastily interpreted to mean that the
manual programs were bad and the computerized and centralized
programs were good. The reported failure rate, however, is only one
indicator of I/M program performance. It would be rash to conclude
that a program is succeeding or failing based only on that
statistic. There are several reasons why this statistic is not a
completely reliable indicator. The failure rate in a program is
defined as the number of vehicles that fail an initial test. In
order to accurately calculate this statistic, it must be possible
to distinguish initial tests from retests. The computerized
analyzers used in decentralized programs include a prompt that asks
whether the test is initial or a retest. In some cases, inspectors
are not aware that a vehicle has been inspected before at another
station and code a retest as an initial test. Another more common
problem is that the inspector simply chooses initial test (the
default prompt) for any test. This confuses actual failure rates.
Some centralized programs have the same kind of prompt but require
the owner to present a document (e.g., vehicle registration or test
form) which is marked when a test is performed. This reduces the
problem but EPA audits have found instances where coding is not
accurate. Many centralized systems use a computerized record call
up system, based on the license plate or VIN, that automatically
accesses the pre-existing vehicle record in the database. Thus, it
is nearly always clear which is the initial test and which is the
retest. It is possible to sort out the miscoding problem in any
database that includes reliable vehicle identification information
but this dramatically complicates data analysis, and I/M programs
have not systematically pursued this. Another reason why reported
initial test failure rates are not reliable indicators is the
problem of false failures discussed previously. The quality of
preconditioning and the sophistication of the test procedures vary
from program to program. Inadequate preconditioning or less
sophisticated test procedures may result in exaggerated failure
rates. Most importantly, even if the reported failure rate on the
initial inspections were entirely accurate, there could still be
problems with retests. In a manual program, the quickest approach
to completing any test is to make up a passing reading, so it is
not surprising that few failures were ever reported. In a
computerized program the analyzer must at least at some point be
turned on, vehicle identification data entered, and the test cycle
allowed to run its course or no compliance document will be
printed. Since most cars will pass their first test with no
special tricks, the least-time approach for a dishonest inspector
may be to do a reasonably normal first inspection on all vehicles,
producing a respectably high failure rate report, and devote
special attention only to the retest of the failed vehicles. So,
failure rates were useful in identifying the severe problems that
existed in manual I/M programs, but they are not useful in drawing
finer distinctions between better run programs.
EPA examined in more depth six possible causes of the low
failure rates in the manual programs [see EPA-AA-TSS-IM-87-1].
This study analyzed and discussed roadside idle survey data,
reported I/M program data, and data collected during I/M program
audits. EPA concluded that five of the explanations - quality
control practices, fleet maintenance, fleet mix, differing emission
standards, anticipatory maintenance, and pre-inspection repair -
did not sufficiently explain low reported failure rates. By
process of elimination, EPA concluded that the major problem
contributing to low reported failure rates in decentralized, manual
I/M programs was improper inspections by test station personnel.
Anecdotes and observations reinforced this explanation.
Table 3-6
Early Emission Test Failure Rates in I/M Programs1

                              Reported    Expected2
                                 %           %        Ratio3

Centralized
  Arizona                      20.2        36.8        .55
  Connecticut                  17.2        33.0        .52
  Delaware                     13.7         7.7       1.00
  Louisville                   15.7        16.2        .97
  Maryland                     14.6        14.0       1.00
  Memphis, TN                   8.1         3.7       1.00
  Nashville, TN                24.5        25.4        .97
  New Jersey                   26.1        27.8        .94
  Oregon                       24.0        38.3        .63
  Washington, D.C.             18.4        13.4       1.00
  Washington                   19.0        28.1        .68
  Wisconsin                    15.3        19.3        .79
  Average                                               .85

Decentralized Computerized Analyzers
  Fairbanks, Alaska            19.4        22.7        .85
  Anchorage, Alaska            15.7        24.7        .63
  California                   27.7        28.7        .96
  Michigan                     15.8        12.9       1.00
  New York4                     5.1        33.4        .15
  Pennsylvania                 17.6        19.5        .90
  Average                                               .75

Decentralized Manual Analyzers
  Georgia                       6.6        25.0        .26
  Idaho                         9.8        16.9        .58
  Missouri                      6.7        20.5        .33
  North Carolina                5.6        21.1        .27
  Clark Co., Nevada             9.5        29.4        .32
  Washoe Co., Nevada           11.0        29.4        .37
  Davis Co., Utah               8.7        21.3        .41
  Salt Lake Co., Utah          10.0        21.3        .47
  Virginia                      2.3        15.6        .15
  Average                                               .35

1 1983-1985 data, for all model years, including light-duty trucks.
2 Expected rates are based on data from the Louisville I/M program for
  1988. They vary by area due mainly to differences in cutpoints.
3 Values greater than 1.00 are reported as 1.00.
4 New York's analyzers are only partially computerized.
In part because of the poor experience of the manual programs
that had started earlier and because of encouragement from EPA, the
next round of six decentralized programs that started in the mid-
1980s chose to require computerized analyzers. By the beginning of
1989, four of the previously manual programs had required
inspection stations to purchase and use new computerized analyzers
to reduce improper testing. Two more programs are scheduled to
switch in the near future. Thus, most decentralized programs now
utilize computerized analyzers to control the testing process. The
timely question is whether decentralized programs using
computerized analyzers in fact get as much emission reduction as a
comparable centralized program, as Table 3-6 might suggest could be
the case. The next section examines the evidence that is
available, including that from the newer computerized centralized
programs. The remainder of this section discusses hypothetical
problems as a useful preliminary to an examination of the evidence
with some comparison to centralized programs.
The new computerized analyzers reduce many of the errors
likely with the use of manual equipment by reducing the number and
difficulty of the decisions an inspector has to make. The
inspector enters vehicle identification information (make, model
year, vehicle type, etc.), and based on this, the computer
automatically selects appropriate emission standards, performs the
test, makes the pass/fail decision, prints the compliance document,
and stores the results on magnetic recording media. However, much
of the process still relies on the inspector to correctly conduct
the manual steps of the process: determine vehicle information,
key in information correctly, properly precondition the vehicle,
properly insert the probe in the tailpipe for the duration of the
test, maintain idle speed at normal levels, etc. Thus, improper
testing is still possible, and is not limited to cases of simple
human error.
There are a variety of motivations that can lead an inspector
to intentionally perform an improper test. First and foremost,
some inspectors may not be interested in deriving revenue from the
repair of vehicles but are primarily in the business to profit from
testing. In many cases, I/M tests are performed by service
stations or other outlets that have test analyzers but do not stock
a full range of emission parts and do not employ fully qualified
engine mechanics. The service such an outlet offers may be to
provide as many customers with a certificate of compliance with as
little hassle as possible. In addition to improper testing on the
initial test, improper testing on the retest of a failed vehicle
also occurs. One example of the motivation for this is when a
mechanic attempts a repair but fails to resolve the excess emission
problem. It puts the mechanic in a difficult position to tell the
customer that the vehicle still failed after the agreed upon
repairs were performed. To save face, the mechanic falsifies the
results and the vehicle owner believes the problem was corrected.
Finally, some inspectors may not hold the I/M program in high
regard or may doubt its technical effectiveness, and may want to
help out customers by passing them since they perceive no harm in
doing so.
The mid-1980s experience in manual decentralized programs
amply demonstrated that many licensed inspectors were willing to
report test results they had not actually obtained, for the reasons
given in the previous paragraph or others. To determine whether
this willingness still exists and the degree to which it is
affecting computerized programs, it is useful to enumerate the ways
in which it is possible to conduct a computerized inspection
improperly. This will suggest the type of evidence one might
expect to be able to see. The most basic form of deliberate
improper testing is no test at all, i.e. the inspector provides the
customer with a compliance document without inspecting the vehicle.
The accounting systems in most programs require that the inspector
somehow produce an electronic record of an inspection for each
certificate or sticker issued. This electronic record can be
produced by testing a "clean" vehicle to achieve a passing score
but entering the required identification information as if it were
the subject vehicle.
Another approach is to intentionally enter erroneous vehicle
identification information to make it easier for a vehicle to pass.
All I/M programs have looser emission standards for older
technology vehicles and most have looser standards for trucks
compared with cars. Model year input to the computerized analyzer
governs automatic selection of emission standards used by the
system to decide pass/fail outcome. Thus, by entering an older
model year or truck designation into the computer, the system
automatically selects looser standards. The compliance document
will then show the improper model year, but may never be examined
closely enough to be questioned.
The decentralized computerized analyzer requires a minimum
amount of CO2 in the sample stream in order to consider a test
valid. Most programs use cutpoints of 4-6%, well below the 10-14%
CO2 found in most vehicles' exhaust. One way to improperly test is
to partially remove the probe from the tailpipe such that the
sample is diluted enough to pass the HC and CO standards but not
enough to fail the CO2 check.
Similarly, computerized analyzers allow engine speed during
the idle test to range between 300-1600 RPM. Improper testing may
be accomplished by raising the engine speed above normal during the
idle test. This usually lowers apparent emission levels leading to
a passing result. To EPA's knowledge, no I/M agency tries to
detect such actions on a routine basis.
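Taken together, the CO2 floor and the allowed engine speed window amount
to a fairly loose validity screen. The sketch below (Python, purely
illustrative; the 6% floor is the upper end of the 4-6% range cited
above) shows why the loopholes described in the last two paragraphs
survive it.

    CO2_MINIMUM_PCT = 6.0          # programs use floors of 4-6%; true exhaust is 10-14%
    RPM_MIN, RPM_MAX = 300, 1600   # engine speed allowed during the idle test

    def sample_is_valid(co2_pct, engine_rpm):
        # Reject a test only if the sample is badly diluted or the idle
        # speed is far outside the normal range.
        dilution_ok = co2_pct >= CO2_MINIMUM_PCT
        rpm_ok = RPM_MIN <= engine_rpm <= RPM_MAX
        return dilution_ok and rpm_ok

    # A partially withdrawn probe that dilutes CO2 to 7% still passes this
    # screen even though HC and CO readings are roughly halved; so does a
    # fast idle of 1500 rpm.
    print(sample_is_valid(7.0, 750))    # True  (diluted but not caught)
    print(sample_is_valid(3.0, 750))    # False (clearly diluted)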
Another technique is to make temporary vehicle alterations,
e.g., introduce vacuum leaks or adjust idle, to get the vehicle to
pass and then readjust the vehicle after the test. This type of
improper testing is nearly impossible to detect.
Finally, vehicle owners can obtain inspection documents on the
black market in some programs. A major element of quality
assurance in a decentralized program is to determine whether all
stickers or compliance certificates can be accounted for during the
periodic audits of stations. A routine problem is the loss or
theft of certificates or stickers, despite regulatory requirements
that such documents be kept secure. Recently, Texas, New York, and
other I/M programs have experienced problems with counterfeit I/M
stickers.
As with the comparison of decentralized and centralized
programs, the failure rates in Table 3-6 should not be taken to be
reliable indications of which centralized programs were working the
best. The low failure rate programs may have been doing a better
job at getting cars fixed one year so they do not appear as
failures the next, and in recording only one test on each vehicle
as the initial test. A closer look at hypothetical possibilities
and at the supporting evidence is necessary.
Improper testing is also possible in centralized networks,
with some reason to suspect that such problems could be more common
in government-run systems than in contractor-run systems. The lane
capacity in some government-run programs was not designed or
upgraded over time to handle the growing volume of vehicles
demanding inspection. Also, the test equipment used is essentially
identical to the equipment used in decentralized computerized
programs. (This has been a cost-based decision, not one inherent
to government-run stations.) Thus, all of the pitfalls associated
with the use of those analyzers are possible. Finally, because
enough aspects of government-run programs continue to be manual
operations, they are subject to both error and malfeasance, making
close personal supervision essential. As a result of these
problems, vehicles have, at times, been waved through without
emission testing in government-run lanes as a result of routine
problems like equipment breakdown, and sloppy testing practices
have been observed. Through the audit process, EPA and the State
and local jurisdictions have made some progress in resolving these
problems. It should be noted here that intentional violation of
program policy or regulations by individual inspectors (i.e.,
deliberate improper testing) is not evident in these programs.
Shortcuts have been observed on visual inspections, but
infrequently. On occasion inspectors have been caught stealing or
selling certificates or accepting payments, but the supervision
typical of centralized programs generally prevents this.
Centralized contractor-run programs can be expected to suffer
few if any problems with improper testing for several reasons. The
level of machine control in these programs is such that the
inspector has almost no influence over the test outcome. In fully
automated systems, the inspector only enters a license plate number
to call up a pre-existing record that contains the other vehicle
identification information used to make the pass/fail decision.
Individual inspectors are held accountable for certificates of
compliance just as a cashier is held responsible for a balanced
cash drawer. The actual test process is completely computerized.
In addition to machine control of the inspection process, the level
of supervision in centralized programs is very high. The
contractor is under pressure and scrutiny from both the public and
the government agencies responsible for oversight. This pressure
has led to high levels of quality control and quality assurance in
every aspect of the inspection system.
Two other problems that may affect emission benefits of a
program regardless of network type are readjustment after repair
and test shopping. Readjustment after getting a "repair" and
passing the test probably happens to some extent in all programs.
A survey9 of mechanics in Arizona showed a significant percentage
admitting to having made such readjustments. This problem stems from
the fact that poor quality repairs may sacrifice driveability in
order to reduce emissions to pass the test. Readjustment occurs
after obtaining a passing reading to improve driveability. The
improvement may be real or imagined. Some owners still have a
1970s mind set when it comes to emission controls and will not
believe that a readjustment is not necessary. This problem is
somewhat limited to older technology vehicles since computerized
vehicles are not adjustable, per se.
Test shopping occurs when an individual fails an emission test
at one I/M test station (either centralized or decentralized) and
goes to another station to get another initial test. EPA auditors
hear complaints from inspectors in decentralized programs that
sometimes they fail a vehicle but the owner refuses repair and
finds another station to get a passing result. In some cases these
complaints have been verified by follow-up investigations conducted
by program officials. Test shopping can result in a small number
of dishonest garages "inspecting" a disproportionately large number
of cars that should be failed and repaired. However, detecting or
preventing this type of problem has only been systematically
pursued in one decentralized program. In a centralized program,
the main effect of test shopping is that cars with variable
emission levels sometimes pass on a second or third test.
3.3.2 Supporting Evidence
The foregoing background discussion described what can go wrong with testing in each program type, and why it is reasonable to suppose a special problem may exist in decentralized programs. However, the closest possible scrutiny is appropriate given the stakes involved: air quality benefits and basic network choice. Ideally, an in-depth field study of the issue would be useful to quantify the extent of each of these problems and measure their impact on the emission reduction benefits of the program. However, such a study would require years of investigation and cost millions of dollars. Further, the results may not be clear cut because of the difficulty of observing I/M program operations without affecting the normal behavior of those observed and the difficulty of obtaining a vehicle or station sample that is not biased by the non-participation of owners and mechanics who know they have evaded program requirements.
Short of doing such a study, there are a variety of sources of
information that shed light on the extent of improper testing in
I/M programs. One major source is the audit program EPA initiated
in 1984. Another source is the auditing conducted by programs
themselves. Data analyses by both programs and EPA have also provided information. Complaints and anecdotal information from
consumers and others involved in test programs are also useful.
Finally, EPA has conducted some in-depth studies of testing
programs that contribute additional data on improper testing.
3.3.2.1 Failure Rate Data
It was expected that switching from manual analyzers to
computerized analyzers would solve the problem of low reported
failure rates on the initial test, and that appears to have been
the case. Figure 3-2 shows failure rate data from I/M programs in
the 1987-1988 time frame.
Figure 3-2 also compares failure rates from the 1985-1986 time frame with those from the 1987-1988 time frame. The data
indicate that centralized program failure rates have decreased in
most cases. This is expected as more new technology vehicles,
which fail less often, enter the fleet and as the program
effectively repairs existing vehicles in the fleet. Some
centralized programs, Wisconsin and Louisville, for example, do not
show this trend because they regularly increase the stringency of
their cutpoints to maintain high levels of emission reduction
benefits. The decentralized manual analyzer programs show little
change or small increases in failure rates in this time period.
The increases may result from increased pressure on these programs
to perform. It is clear that the failure rates in these programs
remain lower than expected.
A good example of this is the New Hampshire program which, at
the time, used manual analyzers in about 20% of the stations and
computerized analyzers in the rest. Figure 3-3 shows the failure
rate data from each segment of the program, with the manual
analyzer stations reporting a failure rate approximately 25% that
of the computerized stations.
Figure 3-2
Old and New Emission Test Failure Rates in I/M Programs*
[Bar chart comparing old and recent failure rates for centralized programs (AZ, CT, DC, MD, NJ, WA, WI, NC, VA), decentralized manual programs, decentralized programs recently switching to computerized analyzers (NV, GA), and decentralized programs with computerized analyzers from inception; failure rates are shown on a scale of 0 to 30 percent.]
* The recent data is from 1987 - 1988 and the old data is from 1985 - 1986.
Figure 3-3
Failure Rates in Manual and Computerized Stations in New Hampshire*
[Bar chart comparing failure rates in the two segments of the program; computerized analyzer stations reported a 19% failure rate, while manual analyzer stations reported a rate roughly one quarter as high.]
* About 20% of the stations in the New Hampshire program used manual analyzers while the rest used computerized equipment.
The decentralized computerized programs now report high
failure rates, in the range of what would be expected based on the
emission standards and vehicle coverage of these programs. Thus,
the operating data from decentralized computerized programs would
suggest that more initial inspections are being performed properly
than was the case using manual analyzers. As discussed previously,
there are problems with relying solely on failure rate as an
indicator. Increased failure rates are certainly a precondition to
successful emission reductions, but not sufficient. The central
factor is whether the final test on each vehicle is performed
objectively.
3.3.2.2 Overt Audit Data
Quality assurance efforts in decentralized I/M programs always
include overt audits of licensed inspection stations, typically on
a quarterly basis or more often in systems using manual analyzers.
Overt audits generally consist of collecting data from recent tests
(either magnetic media or manually completed test forms), checking
analyzer accuracy, and observing inspections in progress. These
audits serve to maintain a presence of program management and
control in the station. With computerized analyzers, software
prompts inspectors through each step of the procedure, so overt
audits of the test stations rarely find problems with inspectors
not knowing how to correctly perform an emission test. Of course,
overt audits still find problems unrelated to the emission
analyzer: missing stickers, certificate security problems, disarray
in record-keeping, and similar administrative details that could lead to a finding of improper testing. Overt audits of decentralized I/M programs by EPA generally involve observations at only a small fraction of the licensed test stations and no attempt is made to obtain a statistically accurate sample. As a result, the following discussion will speak in general terms about overt audit findings without citing specific rates. Nevertheless, EPA believes that the findings from these station visits are fairly representative.
Manual operations observed during overt audits of
decentralized programs are done incorrectly so often that this is
almost always identified as a major problem. This is true of both
anti-tampering checks, discussed in more detail in Section 3.4, and
emission tests. Inspectors fail to conduct the emission test
properly by eliminating important steps, such as preconditioning or
engine restart, or by not strictly following criteria, such as when
to take the sample and at what speed to precondition. The properly
conducted test is the exception rather than the rule when it comes
to manual operations. For example, the most recent audit of a
manual program by EPA included overtly observing 10 different
inspectors test vehicles. Only two inspectors followed procedures
completely. Computerized analyzers prevent some but not all of
these problems. Sloppiness is also observed in steps such as
properly warming up the vehicle, avoiding excessive idle, and
properly inserting the probe in the tailpipe.
Overt audits find evidence of improper testing based on review
of paperwork and test records. Auditors find that stickers or
certificates have been issued or are missing but no test record is
available to document proper issuance. Inspectors doing improper
inspections issue the certificate or sticker and then at a
convenient time enter the data into the computer and probe a
passing vehicle to create both the magnetic and paper test record.
In programs in which stations are required to keep detailed repair
records, suspicious information has been found. For example, a
station might charge the same exact amount of money for repairing
and retesting most vehicles that come to it. This is an extremely
unlikely occurrence. Another example is where the station
supposedly documents the same repair on different vehicles again
and again. The chance that any one repair is the one actually needed in so many cases makes the records dubious.
Centralized programs tend to vary considerably with regard to
manual operations. In some cases, the only manual operations are
verification of vehicle identification, probe insertion, and
pressing a button. In other cases, especially where anti-tampering
checks are conducted, manual operations can be significant. In
these cases, more variation has been observed in the consistency of
testing. This is especially a problem in programs that do
uncontrolled high speed preconditioning. Subjective judgment must
be used by the inspector in instructing the motorist to raise
engine speed without the use of a tachometer. Centralized
inspectors can also get sloppy with probe insertion, although the
carbon dioxide check prevents excessive dilution of the sample.
The evidence from EPA and State overt audits indicates that
some degree of improper testing occurs in every decentralized
program, whether manual or computerized. The findings in
centralized programs indicate that high levels of quality control
are usually achieved, especially in the contractor-run systems.
3.3.2.3 Covert Audit Data
In addition to overt audits, many programs and EPA's audit
program have initiated covert audits. These audits involve sending
an auditor to a station with an unmarked vehicle, usually set to
fail the emission test or the anti-tampering check. The auditor
requests an inspection of the vehicle without revealing his or her
true identity. Similarly, some programs have initiated remote
observation of suspect inspection stations. Auditors use
binoculars to watch inspectors working and look for improper
practices. If problems are found in either of these approaches,
enforcement actions are taken against stations. These audits serve
to objectively assess the honesty and competence of licensed
inspectors.
The findings from covert audits in 19 decentralized programs
are shown in Figure 3-4 . Note that the sample sizes vary
considerably. The small samples represent covert audits conducted
during an EPA audit. I/M program quality assurance staff and EPA
auditors formed teams to conduct the audits. The large samples
represent data on covert audits conducted by local I/M program
staff. The stations visited are randomly selected, but the small
samples may not be statistically significant (no attempt was made
to obtain a statistically valid sample). Nevertheless, the samples
reported by California, North Carolina, New York, and Utah
represent large fractions of the station population and are likely
to be representative. In any case, the results indicate a
consistent pattern: that improper testing is found during covert
audits in which the inspector does not know that his or her
performance is being evaluated by an auditor. Improper testing has
always been found to one degree or another whenever EPA audits have
included covert efforts. The problems discussed in Section 3.3.1
are those encountered during these audits. As a result of the
ability of covert audits to identify problems in I/M programs,
covert auditing is being given a higher priority during EPA audits
and EPA is encouraging programs to conduct such audits on a routine
basis. It should be clear, however, that these are covert audits
on the initial test. EPA is concerned that there does not appear
to be a practical means by which covert audits can be done on
retests. Covert audits have other limitations, as well. Covert
vehicles and auditors must be changed frequently to avoid
recognition. Experience indicates that inspectors are getting wise
to covert audits and perform perfect inspections for strangers.
Thus, the effectiveness of covert audits as a long-term quality
assurance tool is questionable.
Covert audits have not typically been done in centralized
programs. EPA included covert audits of the centralized and
decentralized stations in the hybrid New Jersey program because
overt audits showed problems with quality control. In 2 out of 8
State-run stations covertly audited in 1989, the vehicle was not
failed for not having a catalyst, as it should have been. The same
vehicle was used for covert audits of the licensed decentralized stations in New Jersey, and 6 of the stations audited did not fail the vehicle for the missing catalyst.
Figure 3-4
Fraction of Covert Audits Finding Improper Tests* in Decentralized Programs
[Bar chart showing the fraction of covert audits that found improper tests in 19 decentralized programs (Virginia, Michigan, New Hampshire, Northern Kentucky, New York, New Jersey, Houston, North Carolina, El Paso, Nevada, Salt Lake City, Colorado, Provo UT, Georgia, Dallas, California, Davis County UT, Tulsa, and New Mexico) plus an overall figure; sample sizes shown range from 36 to 11,746.]
* Improper tests includes the full range of problems, including failure to conduct the test and use of improper procedures
3.3.2.4 Anecdotal Evidence
Anecdotal evidence and complaints derived from decentralized program managers and auditors, consumers, inspectors, and others are also primary sources. Reports of improper testing cover the full range of possibilities. Common anecdotes include failure to test the vehicle, probing another vehicle, raising the engine speed, and not fully inserting probes into tailpipes. EPA and I/M program management are often alerted to improper testing at a particular I/M station by a concerned citizen. Another source of complaints and anecdotes is inspectors themselves. Inspectors complain about people who are test shopping, a practice which hurts the honest
inspector. Inspectors have explained to covert auditors how they
can get around the test program requirements. Forthcoming
inspectors also tell of customers who try to bribe them or convince
them to falsify results. When the inspector refuses, the customer
simply announces that another station will surely comply with such
a request. Sometimes, sharp inspectors have noticed vehicles they
recently failed showing a current sticker but which are still in
violation.
EPA has not been privy to similar anecdotal evidence in
centralized programs. It is not clear whether that is because it
does not exist or that because of the tighter control, EPA auditors
are simply not given that information.
3.3.2.5 Waiver Rate Data
Most I/M programs issue waivers to vehicles that fail a retest
but have met certain minimum requirements. Programs vary widely in
what they require motorists to do before qualifying for a waiver,
but the most common requirement is to spend a minimum amount of
money on emission-related repairs. Table 3-7 shows the percentage
of failed vehicles that get waived in programs that allow waivers
and the minimum cost expenditure required.
A waiver represents lost emission reductions to the I/M
program, so high waiver rates mean substantial numbers of vehicles
that are high emitters are not getting adequate repair.
Conversely, a truly low waiver rate indicates that maximum emission
reduction benefits are being obtained. However, improper testing
is, in a sense, a form of unofficial waiver that may be an
alternative to the legitimate waiver system employed by programs.
A low reported waiver rate by itself is therefore ambiguous with respect to program success.
Waiver rates tend to vary by program type to some degree.
Centralized programs typically have higher rates and manual
analyzer decentralized programs typically have lower rates. The
centralized programs in Arizona, Delaware, Illinois, Indiana, and Wisconsin probably represent reasonable waiver rates. This is not to say that these limits are acceptable, but all of these programs have good to excellent enforcement. Enforcement is important because there is no need to apply for a waiver if there is no threat from driving without an I/M sticker. These programs have established substantial procedural requirements for receiving a waiver, and try hard to limit waivers to only those vehicles that have met the requirements, in most cases. Given this, the waiver rates seen in these programs might be what one should expect from other programs.
Table 3-7
Waiver Rates in I/M Programs in 1989
(percent of initially failed vehicles)

                            Pre-1981 Vehicles             Post-1980 Vehicles
Program                     Waiver Rate1   Cost Limit2,3  Waiver Rate   Cost Limit
Decentralized Manual
  Davis Co., UT             13%            $60            7%            $150
  Idaho                     7%             $15            26%           $30
  North Carolina            0%             $50            0%            $50
  Provo, UT                 3%             $15            4%            $100
  Salt Lake City, UT        4%             $15            2%            $100
Decentralized Computerized
  Alaska                    1%             $150           1%            $150
  California                29%            $50            9%            $175-$300
  Colorado                  2%             $50            1%            $200
  Dallas/El Paso, TX        na             $250           na            $250
  Georgia                   14%            $50            12%           $50
  Massachusetts             na             $100           na            $100
  Michigan                  10%            $74            9%            $74
  Missouri                  11%            L              14%           L
  Nevada                    4%             $200           2%            $200
  New Hampshire             na             $50            1%            $50
  New York                  na             L              na            L
  Pennsylvania              2%             $50            1%            $50
  Virginia                  8%             $60-$175       3%            $200
Centralized Government
  Delaware                  3%             $75            1%            $200
  Indiana                   10%            $100           13%           $100
  Memphis                   1%             $50            2%            $50
Centralized Contractor
  Arizona                   12%            $200           12%           $300
  Connecticut               5%             $40            4%            $40
  Illinois                  11%            L              11%           L
  Louisville                17%            $35            12%           $30-$200
  Maryland                  20%            $75            19%           $75
  Seattle, WA               21%            $50            22%           $150
  Spokane, WA               9%             $50            9%            $150
  Wisconsin                 12%            $55            9%            $55

1 na = data not available
2 Some programs vary cost limits by model years and by pollutant failure; thus, some of the limits listed here are only typical of the model year group. Some programs do not have a set cost limit but required specific repairs, indicated by the letter L.
3 Except in Alaska and Utah, cost waivers are not given for tampering.
In decentralized programs, low waiver rates occur when it is
inconvenient for either the motorist to obtain or the inspector to
issue a waiver. In Colorado, an appointment must be made with a State representative to visit the station when the vehicle is present to issue the waiver. In North Carolina, the motorist must
go to a State office to get the waiver. Alternative ways of
avoiding an objective test become very attractive when this level
of inconvenience is involved. Similarly, if the mechanic has to do
extra paperwork, e.g., in Pennsylvania, the improper testing
alternative is again more attractive. Effective January 1, 1990,
California switched to a centralized waiver processing system. The data shown in Table 3-7 reflect the combination of easy-to-obtain waivers and aggressive enforcement against stations in California, which together lead to waiver rates that would be considered typical given the cost limits.
Centralized programs are able to control the issuance of
waivers to a high degree, but end up with higher waiver rates than
are found in decentralized programs. This paradox results from the
fact that it is usually very difficult to escape the test
requirement in centralized programs. Lower rates are sometimes
found when the enforcement system is not adequate and motorists
faced with tough requirements simply choose to operate in a non-
complying mode, e.g., Connecticut. Another factor that may be
operative in some centralized programs (e.g., Seattle and Spokane,
Washington) is that the program's geographic coverage is so
constrained that it is relatively easy for an otherwise subject
motorist to provide an address, for the purpose of vehicle registration, that places it outside the I/M area.
Thus, the potential for waiver control is very high in
centralized programs that have high quality enforcement programs
and adequate geographic coverage. High cost limits and other
rigorously enforced qualification requirements are needed to keep
waiver rates below 10% of failed vehicles. Waiver control in
decentralized programs may not accomplish much since the
alternative of avoiding an objective test is readily available.
3.3.2.6 Test Record Data
In an ongoing effort to identify techniques for finding and
correcting operating problems, EPA has begun analyzing computerized
test records from a variety of decentralized programs. The
analyses involve looking at test histories of vehicles, especially
comparing initial tests and retests. At this point, the analysis
is still underway and the results discussed in this section
represent a small part of the complete project. It is often
difficult to calculate similar statistics because programs do not all record and store the same data. Thus, some of the analyses that follow were done on some programs but could not be done on
others.
Three programs were analyzed to assess the rate of switches in model year entry between the initial test and a retest. An incorrect model year entry could mean that the wrong cutpoints are selected for a vehicle, since the computer automatically selects cutpoints based on model year. In decentralized programs, audits have found cases where the model year entry on an initial test was changed to an older year, yielding a less stringent cutpoint, and resulting in a test pass decision. Table 3-8 shows that a small amount of model year switching between initial and retest occurred in all three programs.
Naturally, some level of error is to be expected in any
program. However, higher rates of switching were found in the two
decentralized programs as compared with the centralized program.
There is no reason an inspector must wait for a retest to
input an incorrect model year, so Table 3-8 may not reflect the
true problem. The incidence of incorrect model year entries on
first tests was therefore also analyzed for Pennsylvania by
comparing the vehicle identification number with the model year
entry. Only 1981 and later vehicles were analyzed because these
are easily distinguished by the standardized 17 character VIN that
went into effect for that model year. The data revealed that 6.4%
of the 1981 and later vehicle records had model year entries that
were pre-1981. This means that looser emission standards were
being applied for these tests.
Table 3-8
Model Year Switching Between Initial Tests and Retests

Program          Network Type   Sample Size   Number Switched   Percent Switched
Connecticut      C              1949          20                1.0
New Hampshire    D              3632          67                1.8
Pennsylvania     D              4958          156               3.1

(C = centralized, D = decentralized)
Another data element analyzed was the CO + CO2 scores for initial test failures and retest passes on the same vehicles. Test histories of vehicles failing the initial test were constructed and the dilution levels were compared between initial and retest in New Hampshire and Wisconsin. (Pennsylvania was not included because test time is not recorded, making it difficult to determine the actual final test.) These I/M programs accept as valid any CO+CO2 measurement above 6%. A vehicle with no exhaust leaks and no air injection system has a CO+CO2 level of about 14%. Table 3-9 shows the fraction of vehicles in the sample that were below 8% CO+CO2 on the initial test and the retest. On the initial test about 12% of the vehicles in Wisconsin and about 10% of the vehicles in New Hampshire scored below 8%. On the retest, the number of vehicles scoring below 8% nearly doubled in New Hampshire to 19% of vehicles, while the number was essentially unchanged in Wisconsin. This analysis, while in no way conclusive, is consistent with what audits and anecdotes are telling us about intentional dilution in decentralized programs.
The time that elapses between an initial test on a vehicle and a retest is another aspect that EPA is investigating. An analysis
of New Hampshire data indicates that about half of all failed
vehicles get retests within ten minutes of the initial test. The
overall average time between initial and retests is only 11
minutes. Presumably, if an inspector fails a vehicle, and then
does diagnosis and repair on that vehicle, one might expect a
longer amount of time to have elapsed. Covert audit experience
indicates that inspectors in decentralized stations have performed
multiple tests on a vehicle that initially fails in an attempt to
get a passing score without any diagnosis or repair.
Table 3-9
Vehicles With Dilution Levels Below 8%

Program          Sample Size   Initial Test Failures Below 8%   Retest Passes Below 8%
New Hampshire    390           10%                              19%
Wisconsin        503           12%                              11%
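A minimal sketch of the dilution and elapsed-time screens described above follows. The record fields, time format, and sample values are hypothetical; only the 6% validity floor and the 8% screening threshold come from the discussion above.

# Illustrative sketch of the dilution and elapsed-time checks described above.
# Record fields and sample values are hypothetical; thresholds follow the text.
from datetime import datetime

MIN_VALID_PCT = 6.0   # programs accept as valid any CO+CO2 reading above 6%
DILUTE_PCT = 8.0      # readings below 8% are treated as suspiciously dilute

def dilute(record):
    """True if the sample was accepted as valid but is still suspiciously dilute."""
    co_plus_co2 = record["co_pct"] + record["co2_pct"]
    return MIN_VALID_PCT < co_plus_co2 < DILUTE_PCT

def minutes_between(initial, retest):
    """Elapsed minutes between an initial test and its retest."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(retest["time"], fmt) - datetime.strptime(initial["time"], fmt)
    return delta.total_seconds() / 60.0

initial = {"co_pct": 2.4, "co2_pct": 11.8, "time": "1989-06-01 10:05"}   # failing, undiluted
retest  = {"co_pct": 0.2, "co2_pct": 7.1,  "time": "1989-06-01 10:13"}   # passing, dilute

print("retest suspiciously dilute:", dilute(retest))               # True (7.3% CO+CO2)
print("minutes between tests:", minutes_between(initial, retest))  # 8.0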
3.3.3 Conclusions
The available evidence shows that objectivity and quality of
testing and the accuracy of instrumentation differ by program type.
It was previously found that decentralized programs using manual
analyzers suffered from severe quality control problems both in
testing and instrumentation. At this point, only one manual
program has not committed to switching to computerized analyzers.
Decentralized programs using computerized analyzers represent
a substantial improvement over manual systems. Analyzer quality
control is better, but EPA audits still fail about 20-25% of the
analyzers checked. The gross level of errors made by inspectors
and the influence of incompetent inspectors are far less because
software controls some aspects of the test and determines some of
the major decisions about the outcome.
Computerized decentralized programs seem to have substantial
failure rates, much closer than before to what we would expect
based on the emission standards being used and the vehicle mix
being tested. Nevertheless, we observe improper testing during
audits, and program records describe in detail cases discovered by
program auditors. Improper retests are certainly problematic since these are vehicles that have already been found to need repair. Thus, improper testing of these vehicles directly impacts the emission benefit of the program.
At this point, the information available on improper testing
in decentralized programs is sufficient to conclude that a problem
definitely exists. Where waivers are cheap and convenient, the
waiver rate is typically about 10-20% in both centralized and
decentralized programs. Improper testing is a cheap and convenient
alternative in the decentralized programs where waivers are not
readily available, and for some vehicles easier than a true repair.
It may be optimistic to think that more than 60-80% of high
emitting vehicles are actually repaired in any decentralized
computerized analyzer program. The actual percentage may be more
or less, but it is difficult or maybe impossible to accurately
determine. Detecting improper testing is extremely difficult
because of the ways in which it occurs. It is relatively easy to
catch the gross violators using covert vehicles set to fail the
test, as EPA and State experience shows. But, some stations simply
will not test vehicles for other than regular customers. Cautious
inspectors may not do an improper initial test for strangers.
Doing covert audits at new car dealerships presents formidable
problems; usually, only regular customers who purchased a vehicle
from the dealer would get tested at the dealership. Conducting an
evaluation of improper retesting would require purchase of repairs
and subsequent retesting, driving the cost of quality assurance
higher. Given these problems, it may be too difficult for most
State programs to adequately quantify the incidence of improper
testing.
The question arises about what can be done to deal with the
vulnerability of decentralized computerized programs to improper
testing. It is unlikely that test equipment can be developed that
will completely prevent improper testing, at least not at a
reasonable cost. Improvements are being made in the form of the "BAR-90" equipment and software, but programs that have adopted the updated equipment have experienced costs about 2-3 times the price of "BAR-84" equipment, which cost about $6,000 - $7,000. Nevertheless, even
these improvements will not prevent improper testing in
decentralized computerized programs.
Another way to address the problem is through more aggressive
quality assurance and enforcement. More intensive covert and overt
auditing and more sophisticated data analysis will enhance
identification of problem stations and inspectors. Obtaining
funding for additional auditors, covert vehicles, computers,
programmers, analysts, judges, etc. will have to be given priority even in the face of tight government budgets.
Streamlined administrative procedures and broader legal
authority for suspending or revoking station licenses and imposing
fines will also help rid programs of problem stations and inspectors. Most decentralized programs face major obstacles in trying to get rid of problem stations once they have identified them. Administrative procedure requirements saddle program managers with difficult and expensive barriers. Convincing judges to impose significant penalties or to suspend stations for a substantial amount of time is a considerable problem. Furthermore,
permanently barring specific individuals from a program is more
difficult. Decentralized programs will have to put more financial
and good will resources into this aspect of quality assurance.
Some, but not all, centralized government-run programs have
suffered from improper testing although for different reasons than
decentralized programs. Unlike the situation in decentralized
programs, it has been easier to gather sufficient evidence to
quantify the emission reduction loss from poorly run programs.
Ultimately, the problems found in these programs can be resolved
and high quality results can be obtained. The solution is better
management and equipment, more computer controls and expanded test
capacity. EPA is working with the remaining problem government-run
programs to upgrade their systems to achieve better results.
Contractor-run centralized I/M programs do not seem to suffer
from serious improper testing problems as far as we can tell. The
efficient management and thorough computer control of these
programs eliminate nearly all opportunities for improper testing.
Even with all the possible improvements, decentralized
programs will have a more uncertain measure of their own
effectiveness than centralized programs, due to the greater
possibility of continuing but invisible test irregularities.
3.4 Visual and Functional Inspections of Emission Control Components
3.4.1 Background
EPA provides State Implementation Plan emission reduction
credits for the inspection of certain emission control components.
The inspection involves determining whether the emission control
device is present and properly connected. The components for which
credit is available are the catalytic converter, air injection
system, PCV valve, evaporative canister, EGR valve and gas cap.
Additional credit is given for misfueling checks: requiring
catalyst replacement if the fuel inlet restrictor is tampered or if
there is evidence of lead deposits in the tailpipe.
Implementation of anti-tampering inspections has varied widely
among I/M programs, as can be seen in Table 3-4. Of course, anti-
tampering inspections are a completely manual operation. They
depend upon the skill and the persistence of the inspector to find
components and make the determination that they are properly
connected. While this is fairly easy for the catalyst and
misfueling checks, finding underhood components can be difficult.
The configuration of components varies from make to make and even
among models from the same manufacturer. Thus, inspector training
is an essential component of an anti-tampering program.
With the cooperation of State and local governments, EPA has
conducted roadside tampering surveys of motor vehicles in cities
throughout the country each year for the past 10 years. These
surveys provide information about national tampering rates and
trends in tampering behavior. The surveys also provide EPA with
the basic information needed to estimate the emission reduction
potential of a program. In addition, they provide information on
local tampering rates.
There are two components to the emission benefit attributed to
I/M programs that relate to tampering. Anti-tampering inspections
are supposed to identify and correct existing tampering when the
program starts. Also, tampering programs are supposed to deter new
intentional tampering both on existing vehicles as well as new
vehicles entering the fleet. As it turns out, emission testing
programs have the latter effect as well. Because of the difficulty
in determining whether a vehicle actually exhibits tampering, the
emission reduction benefit includes an assumption that not all
tampering will be identified and corrected. In the past, this
concept applied equally to centralized and decentralized programs.
Until 1984, when California and Houston, Texas began
comprehensive anti-tampering programs, Oregon was the only program
that inspected the complete range of emission control components
(see Table 3-4 for details). A few other decentralized programs were doing comprehensive emission control checks, but only as part of pre-existing safety inspection programs, and no emission reduction credit was being claimed. Texas and California were formal programs, and the design of those systems was based on the experience in Oregon.
Improper anti-tampering testing is as much of a problem in I/M programs as improper emission testing, if not more, since the process is completely manual. The same opportunity exists for
irregularities as with tailpipe testing, with no greater ability by
the program to detect them. Decentralized inspectors have many of
the same motives for improperly inspecting a vehicle for tampering
as they do for omitting or improperly conducting an emission test.
Centralized programs are also subject to improper anti-
tampering checks. Unlike emission tests, the tampering check is
completely manual and relies on the honesty, attention to detail,
and competency of the inspector to be performed correctly.
Centralized programs may benefit from the presence of on-site
supervision, the importance to the contractor or agency of
maintaining a public image of being accurate and impartial, and by
the opportunity for inspectors to become more familiar with
underhood geometries due to their constant exposure. At this
point, Oregon is the only centralized program that has been
conducting comprehensive tampering checks long enough to be fairly
evaluated, in terms of fleet-wide tampering rates. Nevertheless,
observations of tampering checks in the centralized lanes in
Arizona and New Jersey provide additional information about
potential effectiveness and will be discussed where appropriate.
3.4.2 Supporting Evidence
Tampering surveys conducted by EPA are the main source of
tampering rate information. Comparing programs is difficult since
the model year and emission control component coverage varies
widely among programs. Even a perfect program would not eliminate
all tampering since inspections are spread over one or two years
(in biennial programs) and the fleet is constantly changing.
Immigration of vehicles from outside the program area is one source
of "new" tampering. Also, as vehicles age the likelihood of either
passive or intentional tampering increases. Thus, an ongoing
program is needed to control these problems, and we expect to see
at least low levels of tampering whenever a survey is conducted.
Figure 3-5
Overall Tampering Rates in Select I/M Programs*
[Bar chart of overall tampering rates grouped by program type: centralized I/M + ATP, decentralized I/M + ATP, decentralized ATP only, and no program; the highest rate shown is 24.3%.]
* The rates shown here are for catalyst, inlet, air, PCV and evaporative system on 1980 - 1984 vehicles. Programs listed inspect these components and had been doing so for at least two years at the time of the survey.
Figure 3-5 lists all programs that were operating for at least two years at the time of the survey and inspecting at least the 1980 - 1984 model years for catalyst, inlet, air injection system, PCV and evaporative canister. The analysis was limited to these model years and components to establish a fair basis for comparing results. Two no-program areas are included for reference and the rates are all from 1987 and 1988 surveys. Decentralized anti-tampering-only programs still show high overall tampering rates and appear to be at most about 65% effective (Dallas) compared to the extremes of San Antonio and the Oregon sites. Decentralized I/M + anti-tampering programs have lower tampering rates (except for El Paso) but not as low as in the Oregon cities.
Given the model years and survey years involved, it may be that the I/M tailpipe test requirement has played a role in deterring tampering in the combined programs, rather than the tampering check being successful in getting tampering fixed once it has occurred. At this point, the Oregon program is the only one that serves to represent centralized I/M anti-tampering effectiveness. The low rate in Portland may also reflect deterrence more than detection but that cannot be said of Medford since the program there started much later, in 1986. When the
analysis is limited to catalyst and inlet tampering, decentralized
programs appear to be more effective at finding and fixing this
tampering, as shown in Figure 3-6. This may reflect the lower
skill and effort required to detect these types of tampering, or
the higher cost and therefore profit associated with repair.
Figure 3-6
Catalyst and Inlet Tampering Rates in Select I/M Programs*
[Bar chart of catalyst and inlet tampering rates grouped by program type: centralized I/M + ATP, decentralized I/M + ATP, decentralized ATP only, and no program; the highest rate shown is 14.0%.]
* The rates shown here are for catalyst and inlet tampering on 1980 - 1984 model year vehicles only.
Figure 3-7
Tampering Rates in Decentralized and Centralized Programs*
[Bar chart comparing decentralized and centralized tampering rates by component: PCV, catalyst, inlet, air system, fuel switching, and evaporative canister.]
* The rates shown here are for 1975 - 1983 model year vehicles only in the 1987, 1988 and 1989 tampering surveys.
Tampering rates may be an indicator of the effectiveness of emission testing, with or without tampering checks. Figure 3-7 shows the combined tampering rates by component from the surveys in
1987-1989 for model years 1975-1983. The tampering rates are lower
in the centralized I/M programs for catalyst, inlet and fuel
switching. Underhood tampering shows a less dramatic difference
but is still lower in centralized programs, despite the fact that
only two of the seven centralized programs represented in the
survey do underhood tampering checks. Only one decentralized
program represented here does no tampering checks at all.
Figure 3-8
Aftermarket Catalyst Usage in Anti-Tampering Programs*
[Bar chart of the fraction of surveyed vehicles with aftermarket catalysts: the centralized programs (Phoenix 5.6%, Medford 5.4%, Tucson 4.5%) show the highest rates, while the decentralized and no-program areas (Tulsa, Oklahoma City, El Paso, San Antonio, Covington, Dallas, New Orleans) range from about 1.5% down to 0.2%.]
* All of the operating programs listed here started in January 1986 except Phoenix, Tucson, and Oklahoma City which started in January 1987. Sample sizes ranged from roughly 259 to nearly 500 vehicles per area.
Another indicator of the effectiveness of anti-tampering programs is the frequency with which aftermarket catalysts are found during tampering surveys, i.e., evidence of actual replacement of catalysts. Since aftermarket catalysts are much cheaper than original equipment manufacturer parts, one would expect them to be the replacement of choice in all programs. Figure 3-8 shows the findings for aftermarket catalysts from areas with catalyst inspections that started operation after 1984. Programs that started earlier mostly have been causing owners not to remove their original catalysts, not making them replace ones already removed. Also, prior to 1985, anyone wanting to replace a catalyst would not have been able to buy an aftermarket catalyst. The three centralized programs show relatively high rates of aftermarket catalyst installation. On the other hand, the decentralized programs show relatively low rates of aftermarket catalyst usage, in some cases no different than non-I/M areas.
Figure 3-9
Tampering Rates in I/M and Non-I/M Areas in California
[Bar chart comparing tampering rates for I/M and non-I/M vehicles in California by model year group (pre-1975, 1975-1979, 1980-1983) and overall, on a 0-60% scale.]
A variety of sources of data on decentralized anti-tampering programs are available. California has done extensive review and evaluation10 of its decentralized biennial program, which started in the major urban areas around the State in 1984. The study used a variety of techniques including roadside tampering surveys. One of the many important findings of this study was that roadside tampering rates for the items checked in the I/M test did not differ substantially between the vehicles that had already been subject to I/M and those that had not. It should be noted that California uses a broader definition of the term "tampering" for both its survey and I/M checklist than that used by EPA; thus, its overall rates are not comparable to EPA's national survey rates. These results are illustrated in Figure 3-9.
In addition to the survey data, audits of decentralized anti-tampering programs find improper inspections. Covert investigations continually find that inspectors fail to check for components, fail to fail tampered vehicles, and sometimes fail to do the inspection at all. California's covert auditing work
indicates that licensed inspectors neglect to fail tampered
vehicles in the majority of cases. Figure 3-10 shows the results
by component. During EPA overt audits in decentralized programs,
inspectors have been asked by auditors to demonstrate an
inspection, and are frequently unable to do the check correctly,
either neglecting to check for one or more components or improperly
identifying components.
Figure 3-10
Frequency of Proper Tampering Tests in California's Covert Audit Program
[Bar chart comparing the full effectiveness level with the observed effectiveness level for each component checked: inlet, evaporative canister, PCV, air system, and catalyst, on a 0-100% scale.]
In contrast to the apparent ineffectiveness of decentralized
tampering programs, the success of the centralized Oregon program
can be seen during audits of the program. The inspection system is
well run and the inspectors, employees of the Department of
Environmental Quality, are highly trained and perform the
inspections diligently. By contrast, EPA's observations of the
centralized lanes in New Jersey have found that inspectors sometimes neglect to check for the presence of catalysts, the only component they are supposed to check. Management oversight and motivation of inspectors is not adequate in that program. New Jersey has historically low tampering rates, so roadside surveys are of only limited use in evaluating program effectiveness there.
Arizona, Wisconsin, and Maryland also recently started anti-tampering inspections, the first contractor-run systems to do so.
3.4.3 Conclusions
Based on the evidence that decentralized anti-tampering
inspections were not resulting in tampered emission controls
getting repaired to the degree expected, EPA reduced the emission
reduction benefit (in MOBILE4, for purposes of approving post-1987
SIPs) associated with them by 50%, unless a specific program
demonstrates better performance. Centralized tampering programs,
except in New Jersey, seem to be working about as well as expected,
but additional survey work needs to be done in centralized programs
that have recently started anti-tampering inspections.
The potential for effective anti-tampering checks in
centralized and decentralized programs is influenced by basic
constraints. In the decentralized systems, there is less concern
about throughput, but inspection quality seems inherently limited
by the difficulties of imposing high levels of quality control. In
centralized systems the time it takes to conduct the inspection is
a major constraint, leading most programs that have started anti-tampering checks to do only the catalyst, inlet and sometimes the air system. It should be noted that these three checks obtain most of the potential HC benefit and 85% of the potential CO benefit of visual anti-tampering checks. Furthermore, the remaining available checks
are all for underhood components which require considerable
expertise and care in inspection. Given this, limited but
effective centralized checks may result in greater emission
reduction benefits than the comprehensive but largely ineffective
decentralized checks. It is not known if decentralized program
performance could be significantly improved by limiting the
tampering check to the easily found items and focusing enforcement
and educational resources on those items.
The magnitude of training differs sharply between centralized
and decentralized programs. Depending on the size of the system,
the number of licensed inspectors in a decentralized program can range from a few hundred to over 10,000. Centralized programs range from a few dozen to a couple of hundred inspectors. So, just the physical magnitude of the training requirements for anti-tampering inspections can be daunting in a decentralized system, which partly explains why EPA audit findings show that inspector proficiency is an ongoing problem in decentralized programs.
4.0 PROGRAM COSTS
4.1 Inspection Costs
Just as there is a wide range of I/M program designs, there is
also a wide range of program costs. In every case, there are
expenditures related to establishing inspection sites, purchasing
equipment, labor associated with conducting the inspection, and
program oversight. But the actual level of expenditure seems to be
most related to the age of the inspection network, and to the
centralized/decentralized choice.
To compare the full cost of different I/M designs, EPA
collected and analyzed data from as many I/M programs as could
provide four basic pieces of information for calendar year 1989:
I/M program agency budget, number of initial tests, the fee for
each test, and the portion of the test fee returned to the State or
local government. Using these parameters, EPA calculated an
estimated cost per vehicle in 33 I/M programs currently operating
around the country. The results are displayed in Figure 4-1.
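The report does not spell out the exact formula used to combine these four inputs. One plausible reading, sketched below with purely illustrative numbers, is to add the motorist's test fee to the portion of agency oversight spending that is not already recovered through the share of each fee returned to government.

# Hypothetical per-vehicle cost calculation from the four inputs listed above.
# The formula is an assumption for illustration; all figures are invented.

def cost_per_vehicle(agency_budget, initial_tests, test_fee, fee_share_to_govt):
    """Fee paid by the motorist plus oversight spending not recovered from fees."""
    fee_revenue_to_govt = initial_tests * fee_share_to_govt
    unrecovered_oversight = max(agency_budget - fee_revenue_to_govt, 0.0)
    return test_fee + unrecovered_oversight / initial_tests

# Illustrative program: 1,000,000 initial tests at a $10.00 fee, $1.00 of each fee
# returned to the State, and a $2,500,000 annual oversight budget.
print(cost_per_vehicle(2_500_000, 1_000_000, 10.00, 1.00))  # 11.5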
Decentralized computerized programs have the highest costs,
averaging about $17.70 per vehicle, as shown in Figure 4-2.
Removing the two highest cost areas (Alaska and California) reduces
the average to $13.41. These two programs are much more expensive
due to larger fees retained by the two States for aggressive
program enforcement, higher labor and material costs, in Alaska
especially, and a more involved and complicated test in California.
Decentralized programs with manual inspections incur lower costs at
an average of $11.60. Most decentralized programs cap the test
fee, which may not represent the full cost to the station or,
eventually, the public. Centralized contractor-run programs
average $8.42 per vehicle, and centralized government-run programs
claim the lowest costs at $7.46 per vehicle (the latter figure is
somewhat uncertain because detailed budget information is not often
available).
Figure 4-1
Cost Per Vehicle of I/M Programs
[Bar chart of estimated per-vehicle cost for the 33 operating I/M programs (Anchorage, Fairbanks, California, Nevada, New York, El Paso, New Mexico, Massachusetts, New Hampshire, Virginia, North Carolina, Indiana, Idaho, Michigan, Wisconsin, Georgia, Connecticut, Colorado, Salt Lake City, Provo, Washington, Missouri, Davis, Pennsylvania, D.C., Illinois, Maryland, Arizona, Oregon, Nashville, Louisville, Memphis, and New Jersey), distinguishing centralized from decentralized programs; per-vehicle costs extend up to about $32.]
Figure 4-2
Cost Per Vehicle By Network Type
[Bar chart of average cost per vehicle by network type: decentralized computerized ($17.70), decentralized manual ($11.60), centralized contractor ($8.42), and centralized government ($7.46).]
There is also a relationship between the age of a program and
its inspection fees. Older programs that added I/M to existing
safety systems have amortized the infrastructure costs and tend to
utilize old technology in the inspection process. These programs
come out cheapest, but at a price. They have neither the capacity
nor equipment capability that matches the demand created by a
growing population of vehicles.
The middle aged programs generally came on line after
computerization was in full swing and tend to be more
sophisticated. In this category, centralized and decentralized systems tend to experience similar costs - around ten dollars.
In the newest programs, and in those where a transition is
occurring from manual to computerized analyzers, a divergence
appears. The increasing sophistication and growing expertise in
operating centralized testing networks, along with growing
competition among centralized contractors, has tended to keep costs
stable, or even decreasing in some cases. At the same time, the
increase in mechanic labor costs and the requirement that garages
purchase new, more sophisticated testing equipment has caused the
fees in upgraded decentralized programs to rise significantly. A
comparison of the average fees for programs of different ages is
shown in Figure 4-3.
Figure 4-3
Cost By Program Age
[Bar chart of average cost per vehicle by program age and type: upgraded decentralized, middle aged, new centralized contractor, and old programs; the highest averages shown are about $15 and $13.]
The ability of "new" centralized programs to provide
inspections at a lower cost is well illustrated by the following
example. The cost of inspections in Arizona's centralized,
contractor-run program recently dropped from $7.50 to $5.40. The
decrease resulted from the competitive bidding process that ensued
when Arizona issued a new RFP for the program. The decrease
occurred despite substantial improvements in the quality, quantity,
and range of equipment and services called for in the new contract. The changes include all new test stations and test equipment; expansion of the network to include an additional station and 14 additional lanes; expanded test station hours on Saturdays; and three lanes for testing heavy duty vehicles rather than two. Finally, the contractor built and staffed a new referee station, and all of the other State referee stations were upgraded
with new equipment. The open market process associated with
contractor operated systems has forced suppliers to innovate
technically, allowing these reductions in cost.
If we assume that all I/M programs in the country were centralized and that the inspection and oversight costs would be the same as the average of current centralized programs, the national cost of I/M would be about $230 million less than the cost in 1989. Figure 4-4 shows the total current cost of I/M to be about $570 million. If all programs were central, at today's average cost per vehicle in centralized programs, the national cost of I/M would be $340 million. It may be the case that the per vehicle cost in some decentralized programs, such as the one in California, would be higher than the current national average even if switched to centralized testing. Thus, the potential savings may not be as high as $230 million but likely would be substantial.
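The national comparison above is a simple roll-up of per-program costs. The sketch below mirrors that arithmetic with a hypothetical three-program example; the vehicle counts, per-vehicle costs, and assumed centralized average are invented and are not the figures behind the $570 million and $340 million estimates.

# Back-of-the-envelope roll-up mirroring the national comparison above.
# All program data and the assumed centralized average are hypothetical.

programs = [
    # (initial tests per year, current cost per vehicle)
    (1_500_000, 17.70),
    (2_000_000,  8.42),
    (1_200_000, 11.60),
]
ASSUMED_CENTRAL_AVG = 8.00  # assumed per-vehicle cost if every program were centralized

current = sum(tests * cost for tests, cost in programs)
all_central = sum(tests * ASSUMED_CENTRAL_AVG for tests, _ in programs)

print(f"current cost:        ${current:,.0f}")
print(f"if all centralized:  ${all_central:,.0f}")
print(f"potential savings:   ${current - all_central:,.0f}")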
Figure 4-4
Nationwide Inspection and Oversight Cost of I/M Currently and if All Programs Were Centralized
[Bar chart comparing the current national cost with the cost if all programs were centralized, split into centralized and decentralized shares, on a $0 to $600,000,000 scale.]
4.2 Repair Costs
Repair expenditures are also a legitimate cost of I/M. But,
regardless of the inspection network, the repairs will be performed
in the same way - either by the vehicle owner or by a professional
mechanic. Any difference in the cost of similar repairs should be
attributable to the relative efficiency of different mechanics or to differences in shop labor rates, rather than where the initial
test was conducted.
Repair cost information is collected only sporadically in decentralized I/M programs, and is unreliable. Generally, programs do not require cost data to be entered into the record unless the vehicle is to get a waiver. Only a few centralized programs collect cost data. These programs generally require motorists coming in for a retest to provide cost information. Thus, while some reliable repair cost data exists for centralized programs, an analysis of the difference between centralized and decentralized repair costs is not possible.
It may be that total repair costs are higher in centralized programs, since improper testing in decentralized programs allows some vehicle owners to avoid needed repairs. One should bear in mind, however, that decentralized programs put more vehicle owners in a situation in which they may be persuaded to obtain repairs and maintenance services they do not need or would otherwise have purchased elsewhere. In a 1987 public opinion survey, sixteen
percent of motorists living in four decentralized program areas
reported that while their vehicle was being tested, service
technicians recommended or suggested other services such as tune-
ups, brakes, or tires. Forty-three percent of the motorists who
had services recommended believed that they were not really needed.
4.3 Conclusions
Inspection and oversight costs of I/M programs differ widely
among programs. Decentralized programs are more expensive than
centralized programs, in nearly all cases. The average cost of
decentralized computerized analyzer programs is about double the
cost of centralized contractor systems. The national cost of I/M
could be substantially lower, on the order of $200 million less, if
all programs were centralized.
5.0 CONVENIENCE
The relative convenience of decentralized vs. centralized I/M
is an issue that should concern policy makers because inconvenience
and lost time can be the most negative impact of an I/M program on
the public, and it is important that the program be accepted by the
public. The factors influencing the convenience or inconvenience
of an I/M program are station location, hours of operation,
required number of visits, waiting time, and certainty of service.
Decentralized programs typically offer numerous test stations
scattered throughout the program area and are open to the public
during the week and on weekends. These features form the basis for
providing a convenient inspection network. Theoretically, I/M
tests can be combined with other planned vehicle services, or a
vehicle owner can simply drop into the corner garage for a quick
test. In practice, the situation is not as simple. According to
survey work done in decentralized programs11, the majority of the
public experience one or more "inconveniences" in getting their
vehicles tested. About 60 percent of the vehicle owners surveyed
reported waiting anywhere from 25 to 50 minutes to be tested.
Thirty four percent had left their vehicles at a station as long as
a day for testing. Some of these may have chosen to do so in order
to have other services performed, however. About 20 percent of
vehicle owners surveyed were turned away by the first decentralized
station they visited for a test. Also 20 percent reported being
told to make an appointment and return at a later date. (The
responses total more than 100 percent, because some motorists
surveyed had had multiple experiences with I/M inspections.)
In a decentralized program, all vehicles must get service at a
licensed station, not just those which will need repair.
Decentralized I/M does not guarantee that a vehicle owner will be
able to get a quick emission test on demand at the first station
visited.
Centralized test networks appear less convenient because there
are a limited number of test stations operating during fewer hours
than at a typical service station. Further, centralized test
stations are not as conveniently located as service stations.
Nevertheless, centralized testing has been shown to be reasonably
convenient when the network is well designed. A good example is
the Milwaukee, Wisconsin system which imposes an average travel
distance of 5-6 miles on vehicle owners and a maximum of 10-15
miles. The Wisconsin program is also a good example of providing
rapid service. About 98 percent of all vehicle owners wait less
than 15 minutes for a test. In the busiest month in Wisconsin,
only 4 percent of the vehicle owners have to wait more than 15
minutes and maximum waiting time is 30 minutes. In other
centralized programs, average waiting times are generally
comparable to the Wisconsin experience. Maximum waiting times vary, however, due to the rush at the end of the month for those who wait till the last minute to get tested. Figure 5-1 shows the daily and overall average waiting times in Illinois. Towards the end of the month, the deadline for compliance, waiting times increase, but overall Illinois achieves an average waiting time of under ten minutes. So, good network design means short waiting times and short driving distances for vehicle owners.
Figure 5-1
Average Daily Waiting Times in Illinois' I/M Program
[Line chart of average waiting time (in minutes) by operating day over the month of August, on a 0-30 minute scale.]
Some centralized I/M programs have experienced problems with
waiting times. In one instance, the problem related to enforcement
of the program which was very uneven. On occasion the police would
suddenly start to enforce the testing requirement and long queues
would develop of people who neglected to get tested when they were
scheduled. This has happened in a decentralized program, too. In another case, population growth was not matched by expansion of testing facilities. As the area grew, the program administration failed to install new lanes and test facilities and eventually a crisis developed. It is obvious that good management and adequate funding of the program are needed to avoid these kinds of problems. For the majority of centralized I/M areas where that is the case, however, waiting times are not a problem.
For some motorists whose vehicles fail the initial inspection,
a centralized program may, in fact, be considerably less convenient
than a decentralized program, because they will need to return to
an inspection facility for a retest following repairs. But this
inconvenience is limited to the portion of the population that
fails the initial inspection. At present, this portion is up to
about 20 percent. In the survey mentioned previously, a much greater
percentage of the respondents in decentralized programs needed to
make two trips just to accomplish an initial inspection. This is
in part because the type of automotive service facility that can
provide on-demand inspections most readily (retail gasoline
stations) in a decentralized program is also most likely to lack
the repair expertise and parts inventory to repair many vehicles.
This will become more likely in the 1990s, and as a result the
"extra" trip for a repair in a centralized program would often
occur in a decentralized program anyway.
The biggest potential convenience problem in a centralized
program is where owners are "ping-ponged" between failing tests and
ineffective repairs. On-site State advisors and steps to improve
repair industry performance can help, and well-run centralized
programs take such steps. For example, Wisconsin monitors
repair facility performance and visits shops that are having
trouble repairing vehicles so that they pass. The visits include
assisting the mechanic with proper retest procedures, calibrating
any test equipment the shop may own, and providing referrals on
where additional training can be obtained.
6.0 FUTURE CONSIDERATIONS AFFECTING THE COMPARISON
A number of future developments may affect the relative
performance and cost of different network types. For the most part,
these developments cannot, at this time, be subjected to the kind of
evidentiary examination presented above. What follows is a
discussion of the potential outcomes based on known constraints.
Biennial inspections are becoming an increasingly attractive
alternative to annual inspections. The largest portion of the cost
of I/M is in the inspection process. Thus, reducing inspection
frequency will cut the overall cost of I/M, although it may
increase the per test cost. EPA has found that the loss in
emission reduction benefits is less than the savings. Thus, in a
typical switch, about 10% of the benefits will be lost, but at
least 20% of the dollar cost will be averted. Owner
inconvenience will be reduced by essentially 50%, since a test is
required only half as often.
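
To make the arithmetic of this trade-off concrete, the short sketch
below (Python) applies the percentages quoted above to a hypothetical
baseline; the baseline benefit and cost figures are placeholders, not
program data.

    # Rough illustration of the annual-to-biennial trade-off described above.
    # The baseline figures are hypothetical placeholders, for illustration only.

    annual_benefit = 100.0            # assumed annual VOC reduction (arbitrary units)
    annual_cost = 1_000_000.0         # assumed annual program cost (hypothetical)

    BENEFIT_LOSS = 0.10               # about 10% of benefits lost (per the text)
    COST_SAVINGS = 0.20               # at least 20% of dollar cost averted (per the text)

    biennial_benefit = annual_benefit * (1 - BENEFIT_LOSS)
    biennial_cost = annual_cost * (1 - COST_SAVINGS)
    tests_per_owner_per_year = 0.5    # one test every two years

    print(f"Benefit retained:  {biennial_benefit:.1f} (90% of annual)")
    print(f"Cost after switch: ${biennial_cost:,.0f} (80% or less of annual)")
    print(f"Owner trips/year:  {tests_per_owner_per_year} (half as many)")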
As a result of switching to biennial inspections, some
existing centralized networks will have extra capacity that can be
used to absorb growth, provide shorter lines in peak periods, or to
allow a longer inspection process. However, in the short run,
there could be an increase in the per test cost in some networks
unless excess existing inspection facilities or lanes are closed.
Biennial inspections in decentralized programs mean fewer tests per
year per station and analyzer. This means that overhead costs
(training, staff, equipment, etc.) must be spread over fewer tests,
unless a sufficient number of stations drop out. In order to
maintain profitability, an increase in the test fee will likely be
required.
The CAAA of 1990 require EPA to establish a performance
standard for enhanced I/M programs using an annual program as a
model. This means that testing on a biennial basis will require
programs to make up for the associated emission reduction loss.
Given the reduced effectiveness of decentralized programs,
achieving enough reductions will be more difficult, if not
impossible.
Another option being considered by some and pursued by at
least two programs is exempting new vehicles until they reach three
or four years of age. The most recent three model years represent
about 22% of the vehicle fleet. However, because they have had
little time to develop problems, inspecting them produces very
little emission reduction. Exempting these vehicles would have
effects similar to those of biennial testing, reducing test volume
and revenue, and the same kinds of network-specific impacts would
follow. It should be noted, however, that
exempting new vehicles can cause owners to miss the opportunity to
exercise the 2 year/24,000 mile emission performance warranty
provided under Section 207 of the Clean Air Act.
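
As a rough illustration of the test-volume effect, the sketch below
applies the 22% fleet-share figure cited above to a hypothetical annual
test volume; the fleet size is an assumed placeholder, not data from any
operating program.

    # Illustrative only: effect of exempting the newest three model years
    # on annual test volume, using the ~22% fleet share cited in the text.

    fleet_size = 1_000_000          # hypothetical number of subject vehicles
    new_vehicle_share = 0.22        # newest three model years (per the text)

    tests_before = fleet_size                        # annual testing, all vehicles
    tests_after = fleet_size * (1 - new_vehicle_share)

    print(f"Tests per year before exemption: {tests_before:,.0f}")
    print(f"Tests per year after exemption:  {tests_after:,.0f}")
    print(f"Reduction in test volume:        {new_vehicle_share:.0%}")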
Another major change needed in all I/M programs in the next
few years will be adoption of advanced test procedures, improved
preconditioning methods, better emission sampling algorithms, more
sophisticated computer systems, and more extensive data recording.
All of these improvements are needed as a result of refinements in
our understanding of existing vehicle technology and what we expect
in terms of future technology. The need for preconditioning was
discussed in detail in section 3.1. The adaptability of I/M
programs to changing conditions becomes a major issue in light of
these needs.
Most centralized programs use mainframe or minicomputers to
operate the analyzers and record data. These systems can easily be
reprogrammed whenever a change in the inspection process is
desirable. Such a change can be debugged and implemented quickly
with a minimum of expense and difficulty. The existing
computerized emission analyzers in decentralized I/M programs are
for the most part hardware programmed or closed-architecture
systems, with computer chips rather than software governing
operation of the analyzer. Thus, making changes to these analyzers
requires installation of new computer chips, a more costly
proposition than a simple software update.
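
The difference in adaptability can be pictured with a minimal
configuration sketch. In a software-driven system, cutpoints and test
timing live in a data file that a central computer can revise; in a
hardware-programmed analyzer, the equivalent values are fixed in chips.
The file name, field names, and values below are hypothetical, chosen
only to illustrate the idea.

    # Minimal sketch: a software-driven analyzer loads its cutpoints and
    # test parameters from a configuration file, so a procedure change is a
    # file update rather than a chip replacement. All names and values here
    # are hypothetical illustrations, not an actual program specification.

    import json

    DEFAULT_PARAMS = {
        "idle_hc_cutpoint_ppm": 220,     # hypothetical pass/fail limits
        "idle_co_cutpoint_pct": 1.2,
        "precondition_seconds": 180,
        "second_chance_test": True,
    }

    def load_test_parameters(path="inspection_params.json"):
        """Return the current test parameters, falling back to defaults."""
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return DEFAULT_PARAMS

    params = load_test_parameters()
    print("Active HC cutpoint:", params["idle_hc_cutpoint_ppm"], "ppm")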
The latest developments in decentralized analyzers call for
the use of an open-architecture system that will allow
reprogramming; however, these analyzers cost $12,000-$15,000 for
the basic machine as compared to $6,000-$8,000 for the existing
technology. This additional expense will add to the pressure for
higher test fees in decentralized programs. The existing
analyzers in most programs are aging, although still good for
repair work if properly maintained and calibrated regularly. Many
decentralized programs will be faced with the choice of replacing
computerized analyzers to meet new testing requirements, or making
a switch to centralized testing.
On the newest fuel-injected vehicles, current short tests
leave room for improvement. As the fuel-injected portion of the
fleet grows, improving the testing of these vehicles becomes
correspondingly more important. However,
improved test procedures may require the use of steady-state or
transient dynamometers. At this point, the use of either type of
dynamometer is most feasible in centralized programs where the cost
can be spread over many tests. Use in a decentralized program
would likely result in fewer test locations.
Another testing frontier relates to the use of "on-board
diagnostics" or OBD. Starting in 1981, some motor vehicles were
manufactured with computers that monitor engine performance during
vehicle operation, detect any malfunctions in the system, and store
diagnostic information in the computer memory. Usually, the
motorist is alerted through the use of a malfunction indicator
light on the dashboard of the vehicle. OBD is currently required
for all new vehicles in California, and EPA is developing
regulations to standardize the systems at the federal level, as
required by the CAAA of 1990. I/M programs will be able to
perform OBD checks once these vehicles are in use. OBD has great
potential to enhance repair effectiveness and provide an
alternative or an add-on to emission testing. In the near term,
decentralized I/M programs may suffer from improper testing of OBD
since checking for the malfunction light, while simple, is
essentially a manual process. Even in the long term when the
vehicle computer will "talk" directly with the analyzer computer,
there will be ways to defeat a decentralized test, for example, by
connecting the analyzer to a known "clean" vehicle instead of the
subject vehicle, as is done currently with the emission test.
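
The following illustrative sketch suggests what the analyzer side of an
automated OBD check might look like. The functions read_mil_status and
read_stored_codes are hypothetical placeholders standing in for whatever
standardized vehicle data link the forthcoming federal regulations
define; they are not an existing interface.

    # Hypothetical sketch of an automated OBD check from the analyzer side.
    # The vehicle "link" is simulated here with a plain dictionary; a real
    # system would query the vehicle computer over a standardized data link.

    def read_mil_status(vehicle_link):
        # Placeholder: is the malfunction indicator light commanded on?
        return vehicle_link.get("mil_on", False)

    def read_stored_codes(vehicle_link):
        # Placeholder: retrieve any stored diagnostic trouble codes.
        return vehicle_link.get("codes", [])

    def obd_check(vehicle_link):
        """Fail the vehicle if the malfunction light is on or if any
        diagnostic trouble codes are stored in the vehicle computer."""
        mil_on = read_mil_status(vehicle_link)
        codes = read_stored_codes(vehicle_link)
        return (not mil_on and not codes), codes

    # Example with a simulated vehicle record
    passed, codes = obd_check({"mil_on": True, "codes": ["oxygen sensor fault"]})
    print("OBD check result:", "pass" if passed else "fail", codes)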
7.0 PREDICTED EMISSION REDUCTIONS FROM MOBILE4 AND AN ADDITIONAL
ASSUMPTION REGARDING WAIVERS
The emission reduction impacts of various potential
I/M programs were analyzed for their relative effects and to assess
the difference between centralized and decentralized networks. The
most recent version of EPA's mobile source emission model, MOBILE4,
was used for the mechanics of the calculation. MOBILE4 assumes
that anti-tampering inspections in decentralized programs are only
half as effective as in centralized programs; it assumes emission testing is
equally effective in either program type. For the purpose of this
analysis, an additional "what if" assumption regarding the outcomes
of a higher waiver limit was made, as explained below.
Figure 7-1 serves mainly to put the known and possible
differences between centralized and decentralized programs into
perspective with the emission reduction effects that are possible
via other program changes. The first bar in each of the charts in
Figure 7-1 shows the VOC emission reduction benefit from a typical
emission test-only I/M program. While no two programs are exactly
alike, a typical program was taken to be one which covered all
model years, all light duty vehicles and light duty trucks, failed
20% of the pre-1981 vehicles, used an idle test, and conducted
annual inspections. The waiver rate is assumed to be 15%.
The next bar shows the level of benefit this program design
would achieve if it switched to biennial inspections. The third
bar shows the benefit from a typical program with biennial
inspections that exempts vehicles up to four years old. The next
bar shows the impact of reducing waivers to 5% of failed vehicles
by increasing the cost limits and tightening procedures. For the
purposes of this bar, decentralized programs are assumed to achieve
no additional benefit on the theory that improper testing would be
substituted when waivers are constrained. Other judgmental
estimates can be interpolated visually. The next two bars show the
benefit from adding catalyst and misfueling checks. The
differential impact is based on the assumptions built into MOBILE4.
The assumptions for decentralized programs are that detection is
50% of centralized, but deterrence is assumed to be equal to that of a
centralized program. The addition of an improved emission test
(i.e., a transient test) would not be feasible in a decentralized
program, so no additional benefit is credited there. Finally,
underhood checks are added and the differential impact is once
again based on the assumptions built into MOBILE4. The figure
shows the benefit in 1990, 1995 and 2000. These scenarios do not
take into consideration any loss in benefit associated with
improper emission testing in decentralized programs (except as
noted in the reduced waiver scenario). MOBILE4.1, which will be
used to construct the base inventories required by the CAAA of 1990,
will do so. Benefits for carbon monoxide reductions are not shown
but are similar except that tampering checks are not as important.
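
The scenario logic just described can be restated as a small
calculation sketch. This is not MOBILE4 itself: the baseline benefit
value and the even split of tampering-check credit between detection
and deterrence are assumptions made only for illustration, while the
50% decentralized detection factor restates the assumption cited above.

    # Illustrative restatement of the scenario assumptions in this section.
    # Not MOBILE4: the baseline benefit and the detection/deterrence split
    # are hypothetical placeholders.

    baseline_voc_gpm = 0.30           # hypothetical centralized benefit, g/mi

    def tampering_check_benefit(centralized_benefit, decentralized=False,
                                detection_share=0.5, deterrence_share=0.5):
        """Benefit of an add-on tampering check. Per the text, decentralized
        programs get half credit for detection but full credit for deterrence."""
        detection = centralized_benefit * detection_share
        deterrence = centralized_benefit * deterrence_share
        if decentralized:
            detection *= 0.5
        return detection + deterrence

    centralized_total = baseline_voc_gpm + tampering_check_benefit(0.10)
    decentralized_total = baseline_voc_gpm + tampering_check_benefit(0.10,
                                                                     decentralized=True)

    print(f"Centralized (with tampering checks):   {centralized_total:.3f} g/mi")
    print(f"Decentralized (with tampering checks): {decentralized_total:.3f} g/mi")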
Figure 7-1
Benefits From Various Potential Changes to I/M Programs
[Three charts showing grams per mile VOC reductions in 1990, 1995, and
2000 (scale 0.000 to 0.600), with centralized and decentralized bars for
each scenario: Typical Program; Switch to Biennial; Exempt New Vehicles;
Reduce Waivers; Add Catalyst & Inlet Test; Add Lead Test; Add Better
Emission Test; Add Underhood Checks.]
8.0 CONCLUSIONS
The first I/M programs, established voluntarily by three
States in the 1970s, were set up as centralized networks.
Decentralized networks appeared in response to the requirements of
the 1977 Clean Air Act Amendments and currently comprise the
majority of I/M programs in the United States. The early thinking
was that decentralized inspection would be less costly and more
convenient to the vehicle owner. It was acknowledged that more
quality assurance would be required to ensure equivalent emission
reductions, but it was also assumed, at least officially, that
success could be achieved with a reasonable level of resources.
This report discusses the relative effects of centralized and
decentralized I/M programs. It focuses on three key issues:
emission reduction effectiveness, cost, and convenience. It
presents information derived from EPA testing programs, EPA and
State audits of I/M programs, and analyses of I/M operating data.
Recent studies have found that vehicles require adequate
preconditioning before a test to assure that they are at normal
operating temperature, and that any adverse effect of extended
idling is eliminated. A period of loaded operation on a chassis
dynamometer has been found to be most effective. Most
decentralized programs, especially those requiring the use of
computerized analyzers, perform unloaded, high-speed preconditioning.
This can work if I/M programs extend the length of the
preconditioning, as EPA has recommended. A chassis dynamometer
would allow a shorter period, but purchasing and installing one is
considered beyond the financial capability of most private repair
facilities. Some centralized programs have avoided preconditioning
because it was not thought essential and
to keep costs and test time as low as possible. The trend now,
however, is to provide loaded preconditioning and a second chance
test to vehicles which fail an initial idle test.
Centralized programs typically do few or no anti-tampering
inspections. Decentralized programs typically require
comprehensive checks on at least some portion of the vehicle
population. The effectiveness of decentralized tampering
inspections is highly suspect, however. As with preconditioning,
centralized programs are starting to add anti-tampering checks to
the normal test routine.
EPA audit findings show that centralized contractor-run
programs have very high levels of instrument quality control.
Centralized government-run systems and computerized decentralized
programs are comparatively weak. The quality of the calibration gas
that is purchased, the frequency with which checks are performed,
the easy opportunity to defeat the checks, and the less
sophisticated instrument technology are to blame. Manual
decentralized programs have, with a few exceptions, had
unacceptable levels of quality control. This has led to most
manual programs changing over to computerized analyzers.
The available evidence shows that objectivity and quality of
testing - the keys to emission reduction effectiveness - differ
greatly by program type. It was previously found that
decentralized programs using manual analyzers had a very high rate
of inspectors conducting tests improperly, either intentionally or
inadvertently. For the most part, inspectors were passing vehicles
which should have failed and been repaired. The use of
computerized analyzers in decentralized programs has reduced the
level of inadvertent improper testing. Correspondingly, the
initial test failure rates have risen substantially in programs
that have computerized. Audits have found, however, that improper
testing, both on the initial test and the retest, sometimes occurs
despite the use of computers.
Current inspection costs per vehicle were determined for a
number of operating programs. The earliest decentralized programs
did, in fact, charge lower inspection fees, because State
legislatures imposed fee caps to protect vehicle owners. The trend
has reversed, however, since requirements for computerized
analyzers have been imposed. Sophisticated, high-throughput
centralized systems are now "outbidding" the local garage, much as
franchise muffler and tune-up shops have overtaken their respective
aspects of the repair business. The trend occurring in programs
which are new, revised, or reauthorizing is for increasing fees in
decentralized programs and decreasing fees among competing
contractors. Decentralized computerized programs now have the
highest costs, averaging $17.70 per vehicle. Centralized
contractor-run programs average $8.42 per vehicle. Centralized
government-run systems claim the lowest cost at $7.46 per vehicle,
on average.
The factors influencing the convenience or inconvenience of an
I/M program include station location, hours of operation, waiting
time, required number of visits, and certainty of service.
Decentralized programs offer stations located conveniently to most
motorists. However, a significant number of respondents to a
recent survey reported being turned away without an inspection, or
having to wait 25-50 minutes for an inspection. Because there is a
growing scarcity of qualified mechanics in the automotive
aftermarket, and since many gas stations have converted repair bays
into convenience stores, it has become increasingly difficult to
obtain "on demand" automotive services. Centralized programs have
a limited number of facilities, and may experience long lines if
the system is not well designed. Most new programs, however, have
taken steps to minimize the travel distance and the waiting time,
such that centralized programs are as convenient as, or more
convenient than, decentralized systems.
The overall conclusion is that centralized I/M will usually
offer greater emission reduction benefits than decentralized I/M,
unless the decentralized program makes special efforts that may
border on the unreasonable. It has been shown that this greater
benefit can be achieved at a lower cost and with limited
inconvenience to the motorist. These advantages also dovetail with
trends in I/M technology, which all point in the direction of
increased sophistication, leading to higher cost unless economies
of scale can be achieved. There is a growing need to assure that
all of the emission reduction potential of I/M programs is achieved
in actual operation. Quality is a must if I/M is to play its part
in achieving the ambient air quality standards.
References
1 National Air Audit System Guidance Manual for FY 1988 and
FY 1989, USEPA, Office of Air Quality Planning and Standards,
EPA-450/2-88-002, February 1988.
2 Motor Vehicle Tampering Surveys, 1984-1988, USEPA, Office of
Mobile Sources.
3 A Discussion of Possible Causes of Low Failure Rates in
Decentralized I/M Programs, USEPA, Office of Mobile Sources,
EPA-AA-TSS-I/M-87-1, January 1987.
4 Evaluation of the California Smog Check Program, Executive
Summary and Technical Appendixes, California I/M Review
Committee, April 1987.
5 Report of the Motor Vehicle Emissions Study Commission, State
of Florida, March 1988.
6 Study of the Effectiveness of State Motor Vehicle Inspection
Programs, Final Report, USDOT, National Highway Traffic Safety
Administration, August 1989.
7 I/M Test Variability, EPA-AA-TSS-I/M-87-2, Larry C. Landman,
April 1987.
8 Recommended I/M Short Test Procedures For the 1990s: Six
Alternatives, USEPA, Office of Air and Radiation,
EPA-AA-TSS-I/M-90-3, March 1990.
9 Vehicle Emission Inspection Program Study, Behavior Research
Center, Inc., Phoenix, AZ, 1989.
10 Evaluation of the California Smog Check Program, California I/M
Review Committee, April 1987.
11 Attitudes Toward and Experience with Decentralized Emission
Testing Programs, Riter Research, Inc., September 1987.