U.S. Environmental Protection Agency
OFFICE OF INSPECTOR GENERAL
Catalyst for Improving the Environment
Audit Report
Using the Program Assessment
Rating Tool as a
Management Control Process
Report No. 2007-P-00033
September 12, 2007

-------
Report Contributors:	Rae Donaldson
Bettye Bell-Daniel
Gloria Taylor-Upshaw
Dwayne Crawford
Steve Burbank
Patrick Gilbride
Abbreviations
EPA	U.S. Environmental Protection Agency
OIG	Office of Inspector General
OMB	Office of Management and Budget
PART	Program Assessment Rating Tool

-------
U.S. Environmental Protection Agency
Office of Inspector General
At a Glance
2007-P-00033
September 12, 2007
Catalyst for Improving the Environment
Why We Did This Review
To examine management
controls, we reviewed the
U.S. Environmental Protection
Agency's (EPA's)
performance using the Office
of Management and Budget's
Program Assessment Rating
Tool (PART). We specifically
sought to determine (1) how
EPA scored overall, and (2) if
there are areas that require
management attention.
Background
PART is a diagnostic tool
designed to assess the
management and performance
of Federal programs. It is
used to evaluate a program's
overall effectiveness and drive
a focus on program results.
PART examines performance
in four programmatic areas:
1.	Program Purpose and
Design
2.	Strategic Planning
3.	Program Management
4.	Program Results/
Accountability
For further information,
contact our Office of
Congressional and Public
Liaison at (202) 566-2391.
To view the full report,
click on the following link:
www.epa.gov/oig/reports/2007/
20070912-2007-P-00033.pdf
Using the Program Assessment Rating Tool
as a Management Control Process
What We Found
PART is a good diagnostic tool and management control process to assess
program performance and focus on achieving results. However, as currently
designed, programs can be rated "adequate" with a PART score of just 50 percent.
As a result, EPA programs with low scores in the Program Results/Accountability
section are receiving overall passing or adequate scores. This heightens the risk
that actual program results may not be achieved, and detracts from PART's
overall focus on program results.
Currently, EPA does not have a management control organizational element with
overall responsibility for conducting program evaluations. Also, EPA has not
allocated sufficient resources to conduct evaluations on a broad scale. PART
results show that for nearly 60 percent of its programs, EPA did not conduct
independent evaluations of sufficient scope and quality on a regular basis to
evaluate program effectiveness and support program improvements. With the
difficulty EPA faces in measuring results, coupled with the absence of regular
program evaluations, there is a heightened risk that programs may not be
achieving their intended results.
What We Recommend
We recommend that the Office of Management and Budget (OMB) modify the
Performance Improvement Initiative criteria to provide an ongoing incentive for
program managers to raise Program Results/Accountability PART scores. We
also recommend that OMB increase the transparency of PART results scores to
demonstrate the relationship between results scores and the overall PART ratings.
OMB provided oral and written comments on an earlier discussion draft of the
report. Their comments were incorporated into this report. OMB did not provide
a written response to the official draft report.
We recommend that the EPA Deputy Administrator increase the use of program
evaluation to improve program performance by establishing policy/procedures
requiring program evaluations of EPA's programs. We also recommend that the
Deputy Administrator designate a senior Agency official responsible for
conducting and supporting program evaluations, and allocate sufficient
funds/resources to conduct systematic evaluations on a regular basis. On
August 23, 2007, EPA responded that it agreed with the recommendations.

-------
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
WASHINGTON, D.C. 20460
OFFICE OF
INSPECTOR GENERAL
September 12, 2007
MEMORANDUM
SUBJECT:	Using the Program Assessment Rating Tool as a
Management Control Process
Report No. 2007-P-00033
FROM:	Melissa M. Heist
Assistant Inspector General for Audit
TO:	Marcus Peacock
Deputy Administrator
Robert Shea
Counselor to the Deputy Director for Management
Office of Management and Budget
This is our audit report on using the Program Assessment Rating Tool as a management control
process. This report contains findings that describe the problems the Office of Inspector General
(OIG) has identified and corrective actions the OIG recommends. This report represents the
opinion of the OIG and does not necessarily represent the final EPA position. Final
determinations on matters in this report will be made by EPA managers in accordance with
established audit resolution procedures.
The estimated cost of this project, calculated by multiplying the project's staff days by the
applicable daily full cost billing rates in effect at the time, is $684,025.
Action Required
In accordance with EPA Manual 2750, the Deputy Administrator is required to provide a written
response to this report within 90 calendar days. The Deputy Administrator should coordinate his
response with the Office of Management and Budget. The Deputy Administrator should include
a corrective action plan for agreed-upon actions, including milestone dates. We have no
objections to the further release of this report to the public. This report will be available at
http://www.epa.gov/oig.
If you or your staff have any questions, please contact me at 202-566-0899 or
Heist.Melissa@epa.gov, or Patrick Gilbride at 303-312-6969 or Gilbride.Patrick@epa.gov.

-------
Using the Program Assessment Rating Tool
as a Management Control Process
Table of Contents
Purpose		1
Background		1
Noteworthy Achievements		1
PART Scoring System Needs Improvement		2
Program Evaluation Could Help Improve Performance		6
Barriers to Conducting Evaluation Noted		9
Recommendations		10
Agency and OMB Responses		10
Status of Recommendations and Potential Monetary Benefits		12
Appendices
A Scope and Methodology		13
B Agency Response		14
C Distribution		18

-------
Purpose
As part of our examination of management controls, we reviewed the U.S. Environmental
Protection Agency's (EPA's) performance using the Office of Management and Budget's
(OMB's) Program Assessment Rating Tool (PART) process. We specifically sought to
determine (1) how EPA scored overall and (2) if there are areas that require management
attention. Details on our scope and methodology are in Appendix A.
Background
PART is a diagnostic tool used to assess the performance of Federal programs and drive
improvements in program performance. Once completed, PART reviews help inform budget
decisions and identify other actions to improve results. Agencies are held accountable for
implementing PART followup actions, also known as improvement plans, for each of their
programs. PART is designed to provide a consistent approach to assessing and rating programs
across the Federal Government. PART assessments review overall program effectiveness, from
how well a program is designed to how well it is implemented and what results it achieves.
PART is central to the Administration's Budget and Performance Integration Initiative (now
known as the Performance Improvement Initiative), as its purpose is to drive a sustained focus on
results. The initiative rates various aspects of Government performance using a color-coded
scale of green, yellow, and red to indicate both progress and performance. The program rating
indicates how well a program is performing so that the public can see how effectively tax dollars
are used. To earn a high PART rating, a program must use performance data to manage and
justify its resource requests based on the performance it expects to achieve.
PART assessments are conducted as collaborative efforts involving both Agency personnel and
OMB examiners. Within EPA, the Office of the Chief Financial Officer is the lead coordinator,
with each program office responsible for developing and submitting supporting documentation to
respond to the PART questions.
The methodology used to rate program performance should demonstrate the correlation between
the four areas assessed:
1.	Program Purpose and Design
2.	Strategic Planning
3.	Program Management
4.	Program Results/Accountability
Noteworthy Achievements
To date, 51 EPA programs have been assessed using the PART process. EPA has received
relatively high scores in the first three PART categories - Program Purpose and Design,
Strategic Planning, and Program Management (see Figure 1). Further, in areas where EPA has
scored low, program improvement plans have been developed to help raise the ratings in the
future. PART results show Agency progress in developing performance measures as
1

-------
demonstrated by the reduction from 17 programs to 3 programs with ratings of Results Not
Demonstrated.
Figure 1
Scores for First Three PART Categories
Category                           Average Score
1. Program Purpose and Design      91%
2. Strategic Planning              69%
3. Program Management              82%
Source: OIG analysis of EPA and OMB data
According to the Office of the Chief Financial Officer, EPA has developed four strategic plans
under the Government Performance and Results Act that show progress in developing outcome-
based long-term measures, including baseline and target information. EPA's strategic
measurement framework of goals and objectives serves as the basis for the Agency's annual
performance plans and budgets. According to the Office of the Chief Financial Officer, the
Agency has made considerable progress in integrating PART measures into budget documents.
Over 60 percent of the measures in EPA's Fiscal Year 2008 Annual Plan/Budget are PART
annual measures. The Agency's performance and accountability reports close the feedback loop
by examining 4-year trend data of performance results against the planned targets identified in
the annual plan/budget. This information is used to improve performance measures, adjust
program strategies, and inform the next round of Agency planning, priority setting, and
budgeting. Since its inception in 2002, the PART process, including its directives on followup
actions, has increased the Agency's attention and capacity to develop and improve performance
measures and to examine results.
PART Scoring System Needs Improvement
PART is a good diagnostic tool and management control process to assess program performance
and drive a focus on results. However, as currently designed, the overall score needed to receive
a passing or "adequate" PART rating is set at 50 percent. As a result, many EPA programs are
scoring low or "ineffective" in the Program Results/Accountability category of the PART yet
receiving overall passing or adequate ratings. This heightens the risk that programs may not be
achieving desired results. It also detracts from PART's overall purpose of focusing on improved
performance and program results.
While many factors contribute to program results, programs with a well-defined purpose and
design and a carefully planned strategy (including goals, measures, and targets), coupled with
good program management, should be able to demonstrate results and accountability for
resources expended to achieve those results. In reviewing the individual category scores for
EPA's PART ratings, we found that while programs generally scored well in the first three
categories (see Figure 1), only 24 percent of EPA's programs received adequate or passing
scores for the Program Results/Accountability category.
2

-------
The answers to questions in each of the four PART categories result in a numerical score for that
category ranging from 0 to 100 (with 100 being the best score). To ensure program results are
viewed as a priority, a weighted scoring system is used and Program Results/Accountability is
weighted as 50 percent of the overall score (see Figure 2).
Figure 2
Weighted Scoring
Category                              Weight
1. Program Purpose and Design         20%
2. Strategic Planning                 10%
3. Program Management                 20%
4. Program Results/Accountability     50%
Source: OMB
Numerical scores for each category are then combined into one overall score and translated into
a qualitative rating, as shown in Figure 3:
Figure 3
Qualitative Ratings and Scores
Rating                        Overall Score
Effective                     85-100%
Moderately Effective          70-84%
Adequate                      50-69%
Ineffective                   0-49%
Results Not Demonstrated      N/A
Source: OMB
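To make the scoring mechanics concrete, the sketch below (an illustration prepared for this discussion, not an official OMB calculation tool) combines the four category scores using the Figure 2 weights and maps the result to the Figure 3 rating bands. The sample inputs are the EPA category averages cited in this report (91, 69, and 82 percent for the first three categories and 38 percent for Program Results/Accountability), treated here as if they belonged to a single hypothetical program.

```python
def overall_part_score(purpose_design, strategic_planning,
                       program_management, results_accountability):
    """Combine the four PART category scores (each 0-100) using the weights in Figure 2."""
    return (0.20 * purpose_design +
            0.10 * strategic_planning +
            0.20 * program_management +
            0.50 * results_accountability)

def qualitative_rating(overall_score):
    """Map an overall score to the rating bands in Figure 3.
    (Results Not Demonstrated is assigned separately when acceptable measures,
    baselines, or performance data are lacking.)"""
    if overall_score >= 85:
        return "Effective"
    if overall_score >= 70:
        return "Moderately Effective"
    if overall_score >= 50:
        return "Adequate"
    return "Ineffective"

# Hypothetical program scoring at EPA's reported category averages.
score = overall_part_score(91, 69, 82, 38)
print(score, qualitative_rating(score))  # 60.5 Adequate
```

Under these weights, a hypothetical program at the Agency-average score in every category would rate Adequate overall even though its Program Results/Accountability score is only 38 percent.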
A rating of Results Not Demonstrated is given when programs do not have acceptable long-term
and annual performance measures, or when they lack baselines and performance data. Of the
51 EPA programs assessed, 17 were initially rated as Results Not Demonstrated. After program
improvements were made, only 3 programs remain with an overall rating of Results Not
Demonstrated.
As shown in Figures 4 and 5, with 50 percent needed to achieve a passing or adequate rating,
nearly 90 percent of EPA's programs received moderately effective or adequate ratings.
However, raising the passing percentage just 10 percentage points, to 60 percent, would cause
over half of EPA's programs to receive an ineffective rating. This shows that a number of EPA
programs are just reaching the adequate mark. Nearly 80 percent of the programs would receive
an ineffective rating if the passing percentage was raised to 70 percent.
3

-------
Figure 4
Rates of Passing Based on Passing Scores
[Bar chart showing the percentage of EPA programs that would receive a passing rating at threshold scores of 50, 60, and 70 percent; chart not reproduced.]
Source: OIG analysis of EPA and OMB data
Figure 5
EPA PART Program Ratings
[Bar chart showing the number of EPA programs receiving each overall PART rating; chart not reproduced.]
Source: OIG analysis of EPA and OMB data
As noted, EPA programs generally scored well in Program Purpose and Design, Strategic
Planning, and Program Management, but did not score well in Program Results/Accountability.
Programs with a well-defined purpose and design and a carefully planned strategy (including
goals, measures, and targets), coupled with good program management, should be able to
demonstrate results. However, only 24 percent of EPA's programs received "adequate" or above
scores in the Program Results/Accountability category. The average score in this category was
38 percent. Three programs were able to receive an overall adequate rating with a Program
Results/Accountability score of only 16 percent.
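The weighting arithmetic in Figure 2 explains how this can happen. The brief sketch below uses hypothetical category scores (illustrative only, not taken from any specific EPA assessment) to show that strong scores in the first three categories can push the overall score past the 50 percent adequate threshold even when Program Results/Accountability is only 16 percent.

```python
# Hypothetical category scores (illustrative only): strong on purpose/design,
# planning, and management; 16 percent on Program Results/Accountability.
weights = {"purpose_design": 0.20, "strategic_planning": 0.10,
           "program_management": 0.20, "results_accountability": 0.50}
scores = {"purpose_design": 95, "strategic_planning": 75,
          "program_management": 90, "results_accountability": 16}

overall = sum(weights[k] * scores[k] for k in weights)
print(overall)  # 52.5 -- above the 50 percent cutoff, so this program would rate "adequate" overall
```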
4

-------
Figure 6
Overall Average PART Scores by Category
[Chart showing EPA's overall average PART score in each of the four categories; chart not reproduced.]
Source: OIG analysis of EPA and OMB data
In total, only 12 of the 51 programs received a Program Results/Accountability score above
50 percent, yet nearly 90 percent of EPA's programs received overall ratings of adequate or
above. Consequently, PART may not provide an incentive to strive for high performance and
program results. Further, the ability to receive an adequate overall score while scoring low in
Program Results/Accountability detracts from PART's purpose of maintaining a focus on results.
We reviewed the PART questions in the Program Results/Accountability section to identify
those that were most often answered "no." As shown in Figure 7 below, 41 percent (21 out of
51) of the programs assessed reported that independent evaluations indicating whether the
program is effective and achieving results were not performed.
Figure 7
PART Questions Most Often Answered "NO" for Program Results/Accountability
•	Do independent evaluations of sufficient scope and quality indicate that the program is effective and achieving results? (21 programs, 41%)
•	Does the program demonstrate improved efficiencies or cost effectiveness in achieving program goals each year? (13 programs, 25%)
•	Has the program demonstrated adequate progress in achieving its long-term performance goals? (10 programs, 19%)
Source: OIG analysis of EPA and OMB data
5

-------
We also reviewed the responses to the questions in the Strategic Planning category and identified
the following questions that were answered "no" most often:
Figure 8
PART Questions Most Often Answered "NO" for Strategic Planning
•	Are independent evaluations of sufficient scope and quality conducted on a regular basis or as needed to support program improvements and evaluate effectiveness and relevance to the problem, interest, or need? (30 programs, 59%)
•	Are budget requests explicitly tied to accomplishment of the annual and long-term performance goals, and the resource needs presented in a complete and transparent manner in the program's budget? (22 programs, 43%)
•	Does the program have ambitious targets and timeframes for its long-term measures? (20 programs, 39%)
•	Does the program have baselines and ambitious targets for annual measures? (16 programs, 31%)
•	Do all partners (including grantees, sub-grantees, contractors, cost-sharing partners, and other government partners) commit to and work toward the annual and/or long-term goals of the program? (16 programs, 31%)
Source: OIG analysis of EPA and OMB data


Given EPA's reliance on States and local governments for program implementation and
reporting performance, we reviewed the responses to questions in the Program Management
section and found that one question was most often answered "no":
Figure 9
PART Question Most Often Answered "NO" for Program Management
•	Does the agency regularly collect timely and credible performance information, including information from key program partners, and use it to manage the program and improve performance? (21 programs, 41%)
Source: OIG analysis of EPA and OMB data
The management control processes for an organization should include performance measures
and targets for all programs, as well as the means to collect and monitor performance against
expected/planned performance, to demonstrate results in relation to the resources expended.
While EPA has made progress in developing measures, including short- and long-term targets as
well as baselines from which to measure, the Agency needs to continue its efforts to gain
increased commitment from its program partners to work toward these goals and provide
assistance in gathering and reporting needed performance information.
Program Evaluation Could Help Improve Performance
One tool that could assist EPA in designing, developing, and gathering program performance
information is program evaluation. PART results show that for nearly 60 percent of its
6

-------
programs, EPA did not conduct independent evaluations of sufficient scope and quality on a
regular basis or as needed to evaluate program effectiveness and support program improvements.
Program evaluation results provide management with vital information on how well a program is
designed and functioning to meet its intended objectives and goals (See Figure 8). Evaluation
results can be used to track program progress toward achieving objectives and goals and can also
be used to identify potential program improvements. Currently, EPA does not have a
management control organizational element with overall responsibility for conducting program
evaluations. Also, sufficient resources have not been allocated to conduct evaluations on a broad
scale. As a result, management does not always have information needed on program
performance.
While there are many similar definitions for program evaluation, OMB defines it as "an
assessment, through objective measurement and systematic analysis, of the manner and extent to
which Federal programs achieve intended objectives." There are several types of evaluations
that can be used during the development and execution of a program:
Development
•	Needs Assessment: An examination and systematic appraisal of the nature and
scope of the issue or problem to be addressed.
•	Formative Evaluation: An examination and assessment of the likely success of a
proposed program design or program activity to address a problem, generally
conducted during planning or early in the implementation of a program.
Execution
•	Process Evaluation: This form of evaluation assesses the extent to which a
program is operating as it was intended. It typically assesses program activities'
conformance to statutory and regulatory requirements, program design, and
professional standards or customer expectations.
•	Outcome Evaluation: This form of evaluation assesses the extent to which a
program achieves its outcome-oriented objectives. It focuses on outputs and
outcomes (including unintended effects) to judge program effectiveness but may
also assess program process to understand how outcomes are produced.
•	Impact Evaluation: Impact evaluation is a form of outcome evaluation that
assesses the net effect of a program by comparing program outcomes with an
estimate of what would have happened in the absence of the program. This form of
evaluation is employed when external factors are known to influence the program's
outcomes, in order to isolate the program's contribution to achievement of its
objectives.
•	Cost Benefit/Cost Effectiveness Analysis: These analyses compare a program's
outputs or outcomes with the costs (resources expended) to produce them. When
applied to existing programs, they are also considered a form of program
evaluation. Cost-effectiveness analysis assesses the cost of meeting a single goal or
objective and can be used to identify the least costly alternative for meeting that
goal. Cost-benefit analysis aims to identify all relevant costs and benefits, usually
expressed in dollar terms.
7

-------
PART contains questions on the Agency's use of program evaluation in both the Strategic
Planning and Program Results/Accountability sections. Our review of the Agency's 51 PART
assessments found that the two questions related to program evaluation were answered "no" by
more programs than any other PART questions (see Figure 10).
Figure 10
PART Questions Most Often Answered "NO"
•	Strategic Planning: Are independent evaluations of sufficient scope and quality conducted on a regular basis or as needed to support program improvements and evaluate effectiveness and relevance to the problem, interest, or need? (30 programs, 59%)
•	Program Results/Accountability: Do independent evaluations of sufficient scope and quality indicate that the program is effective and achieving results? (21 programs, 41%)
Source: OIG analysis of OMB data
Until 1995, EPA maintained a program evaluation staff within the Office of Policy, Planning,
and Evaluation. This office, including approximately 40 staff, conducted evaluations of Agency
programs generally at the request of the Deputy Administrator. In 1995, the Agency underwent
a reorganization that resulted in the office being disbanded.
In 2000, the Office of Policy, Economics, and Innovation established the Evaluation Support
Division. The Agency regards the division as its center of expertise for program evaluation.
However, the division has very limited staffing (six full-time equivalents). Therefore, it does not
currently have the capacity to conduct systematic and regular evaluations of EPA's programs.
Rather, the division views itself as a capacity builder, assisting Agency programs in developing
expertise that will enable program staff to conduct their own evaluations by:
•	Providing leadership in fostering the use of program evaluations.
•	Providing training in developing and refining performance measures.
•	Assisting EPA program and regional offices in building capacity to conduct
program evaluations.
•	Conducting a limited number of evaluations of innovation projects or programs
upon request.
•	Funding a limited number of evaluations for program offices.
Currently, the Evaluation Support Division does not routinely perform evaluations of Agency
programs. Evaluations that are taking place are initiated by EPA program managers and are
generally conducted on an ad-hoc basis. We did not identify any systematic evaluation plans for
program offices. We did find that EPA includes upcoming evaluations in its Strategic Plan and
summarizes results in its Performance Accountability Report in accordance with the Government
Performance and Results Act.
8

-------
Barriers to Conducting Evaluation Noted
To better understand the reasons why EPA is not regularly conducting evaluations, we requested
that the Director of EPA's Evaluation Support Division provide us with views on the barriers
EPA faces to conducting evaluation on a broader scale. The Director provided the following:
Funding Limitations - The Office of Policy, Economics, and Innovation estimates that,
Agency-wide, EPA spends about $1 million per year, or 0.01 to 0.03 percent of its
budget, on program evaluations. Other Federal agencies and private organizations
considered leaders in program evaluation designated more funds for evaluation through
various means:
•	Expected set-aside per project - Gates Foundation (15 percent), U.S. Agency for
International Development (10 percent), European Union (8 percent).
•	Statutory set-aside - U.S. Department of Health and Human Services budgets
approximately 1 percent of its total budget, or about $300 million, for evaluation.
•	Separate evaluation budget - U.S. Department of Education budgets $550 million,
or approximately 1 percent of its total budget, for evaluation.
Lack of Internal Expertise - EPA needs more staff with the ability to oversee and
manage independent, high-quality evaluations that produce evidence of effectiveness
and/or guide decisions to improve effectiveness and results.
Lack of External Expertise - Currently, there is not a large community of
knowledgeable and experienced evaluators for environmental programs.
Complexity of Measuring Long-Term Environmental Outcomes - Evaluations of
environmental outcomes often require multi-year time horizons to determine a program's
impact. Also, environmental programs present added challenges to measuring
effectiveness in light of the need to link program outcomes to long-term changes in the
environment and human health.
Current Need for Strategic Investment - Given limited resources, EPA must be
"strategic" in selecting programs to evaluate and deciding when evaluations should be
scheduled. Currently, EPA invests in program evaluation primarily as a means to
identify solutions to identified problems.
Insufficient Data/Performance Measurement Information - The need for consistency
across jurisdictions (e.g., States, tribes, and localities) adds complexity for data access
and data quality. EPA's reliance on partners for data on program performance makes this
a major challenge.
Evaluation Partnerships - Evaluation capacity at the Federal level often depends on the
willingness of State and local agencies and other grantees to participate in evaluations
and follow program evaluation protocols and standards. Many partners also face
resource limitations, making it difficult to engage them in conducting evaluations.
9

-------
While EPA has made progress in developing program performance measures, as demonstrated
by the reduction in programs rated as Results Not Demonstrated, challenges remain. The OIG,
the Government Accountability Office, and others have reported on the difficulties EPA faces in
measuring and demonstrating program results. Establishing a management control
organizational element with overall responsibility for conducting and supporting program
evaluations on a systematic and regular basis would reinforce and complement ongoing Agency
planning, budgeting, and accountability efforts in measuring and demonstrating performance
results. To accomplish this, EPA will need to establish accountability for conducting evaluations
and invest the resources needed to carry them out.
Recommendations
We recommend that the Office of Management and Budget:
1.	Modify the Performance Improvement Initiative criteria to provide an ongoing incentive
for program managers to raise Program Results/Accountability PART scores. This can
be accomplished by designating a target percentage of programs that must achieve
adequate or above scores in the PART results section in order to achieve a green rating.
2.	Increase the transparency of PART results scores to demonstrate the relationship between
results scores and the overall PART ratings.
We recommend that the EPA Deputy Administrator work with the Office of Policy, Economics,
and Innovation to place a stronger emphasis on the use of program evaluation to improve
program performance by:
3.	Establishing policy/procedures requiring program evaluations of EPA's programs.
4.	Designating a senior Agency official responsible for conducting and supporting program
evaluations and developing a strategy to address the barriers EPA faces to conducting
program evaluations.
5.	Allocating sufficient funds/resources to conduct systematic program evaluations on a
regular basis.
Agency and OMB Responses
The Agency agreed with our recommendations regarding program evaluation and has initiated
actions to strengthen evaluation capability within EPA. The Office of the Administrator has
proposed a reorganization of the Office of Policy, Economics, and Innovation that is designed to
provide a more robust evaluation capability. As part of this reorganization, the Deputy
Administrator envisions that the Office of Policy, Economics, and Innovation's Associate
Administrator will take on a more explicit role in developing evaluation policy and guidance.
Further, the Deputy envisions that his office will continue to improve its support to Headquarters
and regional offices in implementing strategic investments in evaluation activities. The Agency
recognizes that funding is critical to evaluation. However, it believes that developing a
performance management culture where there is a substantial source of program evaluation
10

-------
expertise internal and external to the Agency is currently the most important step to building a
robust program evaluation capability. The full Agency response is in Appendix B.
The OMB provided oral and written comments on an earlier discussion draft of the report. OMB
comments were incorporated into this report. However, OMB did not provide a written response
to the official draft report.
11

-------
Status of Recommendations and
Potential Monetary Benefits
RECOMMENDATIONS	POTENTIAL MONETARY BENEFITS (in $000s)

Rec. No. 1 (Page 10)	Action Official: Office of Management and Budget
Modify the Performance Improvement Initiative criteria to provide an ongoing incentive for program
managers to raise Program Results/Accountability PART scores. This can be accomplished by
designating a target percentage of programs that must achieve adequate or above scores in the
PART results section in order to achieve a green rating.

Rec. No. 2 (Page 10)	Action Official: Office of Management and Budget
Increase the transparency of PART results scores to demonstrate the relationship between results
scores and the overall PART ratings.

Rec. No. 3 (Page 10)	Action Official: EPA Deputy Administrator
Work with the Office of Policy, Economics, and Innovation to place a stronger emphasis on the use
of program evaluation to improve program performance by establishing policy/procedures requiring
program evaluations of EPA's programs.

Rec. No. 4 (Page 10)	Action Official: EPA Deputy Administrator
Work with the Office of Policy, Economics, and Innovation to place a stronger emphasis on the use
of program evaluation to improve program performance by designating a senior Agency official
responsible for conducting and supporting program evaluations and developing a strategy to address
the barriers EPA faces to conducting program evaluations.

Rec. No. 5 (Page 10)	Action Official: EPA Deputy Administrator
Work with the Office of Policy, Economics, and Innovation to place a stronger emphasis on the use
of program evaluation to improve program performance by allocating sufficient funds/resources to
conduct systematic program evaluations on a regular basis.

Status codes: O = recommendation is open with agreed-to corrective actions pending;
C = recommendation is closed with all agreed-to actions completed;
U = recommendation is undecided with resolution efforts in progress.
12

-------
Appendix A
Scope and Methodology
To accomplish our objectives, we compiled and analyzed EPA's PART assessments for the
51 programs that have undergone assessments from 2003 through 2006. We reviewed the
detailed assessment information publicly available from OMB in support of the ratings. We
reviewed the scores achieved from the assessments and analyzed the scores for each of the four
programmatic categories assessed. We discussed the PART process with the EPA PART
Coordinator within EPA's Office of the Chief Financial Officer. We also met with EPA officials
in the Office of Policy, Economics, and Innovation's Evaluation Support Division (ESD), and analyzed EPA processes for
conducting program evaluations for Agency programs and operations. We did not verify the
evaluation funding budget figures provided by ESD for other Federal and private concerns.
We provided EPA and OMB with discussion draft reports and met with EPA and OMB officials
to discuss the results. We conducted our field work from December 2006 through May 2007.
We reviewed prior EPA OIG and Government Accountability Office reports on EPA's
evaluation efforts. We did not identify any prior audit work related specifically to the PART
rating process.
We conducted our work in accordance with generally accepted government auditing standards,
except that we did not review the automated controls over data contained in the OMB PART
reporting system, as it was not necessary to accomplish our objectives. The standards require
that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a
reasonable basis for our findings and conclusions based on our audit objectives. We believe that
the evidence obtained provides a reasonable basis for our findings and conclusions based on our
audit objectives.
13

-------
Appendix B
Agency Response
MEMORANDUM
Subject: Comments on the OIG's Audit Report "Using the Program Assessment Rating
Tool as a Management Control Process"
From:	Brian Mannix
Associate Administrator
Office of Policy, Economics and Innovation
Lyons Gray
Chief Financial Officer
Office of the Chief Financial Officer
To:	Melissa Heist
Assistant Inspector General
Thank you for the opportunity to provide comments on the Office of the Inspector General's Audit
Report "Using the Program Assessment Rating Tool as a Management Control Process,"
(Assignment No. 2007-000520, July 13, 2007). We will respond to the recommendations
concerning program evaluation that are made in the report. However, we will defer to the Office of
Management and Budget to respond to the report's recommendations regarding potential
modifications to the Budget and Performance Integration Initiative, now known as the Performance
Improvement Initiative, and the management of the Program Assessment Rating Tool (PART).
As you know, the Environmental Protection Agency (EPA) is committed to working through EPA
management and staff to improve how we use performance information to drive results throughout
the Agency. The Agency recognizes that program evaluation is a critical component of this effort.
We agree with the recommendations that you have made in the report regarding program
evaluation. However, we would like to relay a few observations about the Agency's program
evaluation activities. The Office of the Administrator has proposed a reorganization of the Office
of Policy, Economics, and Innovation (OPEI) that is designed to provide increased support to the
Agency priorities, including the development of a more robust evaluation capability. As part of this
reorganization, Deputy Administrator Marcus Peacock expects OPEI's Associate Administrator to
take on a more explicit leadership role in developing program evaluation policy and guidelines.
Also, the Deputy Administrator expects OPEI to continue to improve its support to other
Headquarters and Regional Offices in implementing their strategic investments in program
evaluation activities.
More specific comments related to needed changes to factual content and report text are in the
attached appendix. Thank you again for the opportunity to comment on this report.
cc: Marcus Peacock
14

-------
APPENDIX:
Comments Related to Changes in Factual Content and Other
Recommended Changes
Following are specific corrections that we would like to see in the final report:
In the discussion of PART, we recommend the following changes:
•	Background, Page 1 (paragraph 2)
Amend first sentence: "PART is central to the Administration's Budget and
Performance Integration Initiative, now known as the Performance Improvement
Initiative,..."
•	Noteworthy Achievements, Page 1 (paragraph 1)
Add new last sentence: "PART results show Agency progress in developing measures
(e.g., 17 of the 51 programs were initially evaluated as Results Not Demonstrated;
only 3 still have that designation.)"
•	PART Scoring System Needs Improvement, Page 6 (last paragraph)
Add new sentences after sentence 1: "The management control processes for an
organization should include performance measures and targets for all programs, as well
as the means to collect and monitor performance against expected/planned performance
to demonstrate results in relation to the resources expended. Consistent with the
discussion on page 2, EPA has made progress in developing measures, including
short- and long-term targets, as well as baselines from which to measure, as part of
the Agency's strategic and annual planning, budgeting and accountability processes.
This progress is underscored by improved PART results."
Delete: "The results of EPA's PART reviews demonstrate that EPA needs to continue
its efforts to develop short- and long-term targets, as well as establish baselines from
which to measures. They also show that EPA needs to gain increased commitment from
its program partners to work toward these goals and provide assistance in gathering and
reporting needed performance information once developed."
In the "At A Glance" and in the main report on Pages 9, 10, and 11, OIG's discussion of the
Agency's need to establish a management control organizational element with overall
responsibility for conducting program evaluation on a systematic and regular basis should be
modified to recognize the necessary role of the AAs and RAs in determining a strategic approach
to program evaluation at the Agency. We recommend the following changes:
At a Glance (last sentence)
Change: "We also recommend that the Deputy Administrator designate a senior Agency
official responsible for conducting program evaluations..." To: "We also recommend that
the Deputy Administrator designate a senior Agency official responsible for supporting
and conducting program evaluations."
15

-------
•	Barriers to Conducting Evaluation Noted, Page 9 (first full paragraph) Change first
sentence: "Measuring and demonstrating program performance and results has been a
long-standing challenge for EPA." To: "While EPA has made progress in measuring
and demonstrating program performance and results, challenges remain."
Delete third sentence: "As mentioned earlier, an organization's management control
process should include performance measures and targets for all programs, as well as the
means to collect and monitor performance against expected/planned performance to
demonstrate results in relation to the resources expended."
Change fourth sentence: "Establishing a management control organizational element with
overall responsibility for conducting program evaluation on a systematic and regular
basis could assist EPA in meeting this challenge." To:
"Establishing a management control organizational element with overall responsibility
for supporting and conducting program evaluation on a systematic and regular basis
would reinforce and complement ongoing Agency planning, budgeting and
accountability efforts in measuring and demonstrating performance results."
•	Recommendation # 4, Page 10
Change: "Designating a senior Agency official responsible for conducting program
evaluations and developing a strategy to address the barriers EPA faces to conducting
program evaluations." To: "Designating a senior Agency official responsible for
supporting and conducting program evaluations and developing a strategy to address the
barriers EPA faces to conducting program evaluations."
Status Table Recommendation # 4, Page 11
Change: "Work with the Office of Policy, Economics, and Innovation to place a stronger
emphasis on the use of the program evaluation to improve program performance by
designating a senior Agency official responsible for conducting program evaluations and
developing a strategy to address the barriers EPA faces to conducting program
evaluations." To: "Work with the Office of Policy, Economics, and Innovation to place a
stronger emphasis on the use of the program evaluation to improve program performance
by designating a senior Agency official responsible for supporting and conducting
program evaluations and developing a strategy to address the barriers EPA faces to
conducting program evaluations."
On Page 7 there is a list of several types of evaluations that can be used during the development
and execution of a program. The list in the current draft of the report does not sufficiently
recognize the types of program evaluation. We recommend that this list be clarified by
acknowledging the difference between Ex-Ante and Ex-Post program evaluation, and that the list
of ex-post program evaluation be expanded to include the four distinct types that are defined by
the U.S. Government Accountability Office (see GAO-05-739SP, May 2005,
http://www.gao.gov/new.items/d05739sp.pdf). We recommend the following change to the list:
16

-------
Ex Ante Program Evaluation
•	Needs Assessment: An examination and systematic appraisal of the nature and scope of the
issue or problem to be addressed.
•	Formative Evaluation: An examination and assessment of the likely success of a proposed
program design or program activity to address a problem, generally conducted during
planning or early in the implementation of a program.
Ex Post Program Evaluation
•	Process (or Implementation) Evaluation: An assessment of the extent to which a program
is operating as it was intended. It typically assesses program activities' conformance to
statutory and regulatory requirements, program design, and professional standards or
customer expectations.
•	Outcome Evaluation: An assessment of the extent to which a program achieves its
outcome-oriented objectives. It focuses on outputs and outcomes (including unintended
effects) to judge program effectiveness but may also assess program process to understand
how outcomes are produced.
•	Impact Evaluation: A high-level form of outcome evaluation that assesses the net effect of a
program by comparing program outcomes with an estimate of what would have happened in
the absence of the program. This form of evaluation is employed when external factors are
known to influence the program's outcomes, in order to isolate the program's contribution to
achievement of its objectives.
•	Cost-Benefit and Cost-Effectiveness Analyses: These analyses compare a program's
outputs or outcomes with the costs (resources expended) to produce them. When applied to
existing programs, they are also considered a form of program evaluation. Cost-effectiveness
analysis assesses the cost of meeting a single goal or objective and can be used to identify the
least costly alternative for meeting that goal. Cost-benefit analysis aims to identify all
relevant costs and benefits, usually expressed in dollar terms.
In the section "Barriers to Conducting Evaluation Noted" on Page 8, OIG lists the barriers to
conducting program evaluation, which is based largely on input received from the Director of the
Evaluation Support Division (ESD). However, as part of OIG's review, the auditors discussed
evaluation with personnel from Government Accountability Office (GAO), National Science
Foundation and Department of Defense to obtain information regarding the program evaluation
function at other Federal agencies and then compared EPA's program evaluation function with
other Federal agencies. The auditors also reviewed GAO reports about challenges to program
evaluation in federal agencies. The discussion of the barriers should reflect that the information
the ESD Director provided was consistent with information from these other sources. And it
should be noted that the data regarding other Agencies' budgets for evaluation were based on
information from presentations made by the other organizations' staff, but does not include
additional research to independently verify the other organizations' numbers. Also, the Agency
recognizes that funding is critical to program evaluation. However, the Agency believes that
developing a performance management culture where we have a substantial source of program
evaluation expertise internal and external to EPA is currently the most important step to building
a robust program evaluation capability.
17

-------
Appendix C
Distribution
Office of the Administrator
Deputy Administrator
Agency Followup Official (the CFO)
Agency Followup Coordinator
Associate Administrator, Office of Policy, Economics, and Innovation
Office of General Counsel
Associate Administrator for Congressional and Intergovernmental Relations
Associate Administrator for Public Affairs
Acting Inspector General
18

-------