United States Environmental Protection Agency
Region 5
Water Division
230 S. Dearborn Street
Chicago, Illinois 60604
June 1982
905R82107
Monitoring and Evaluation Guidebook
Purpose
Congress has assigned EPA the task of implementing the Clean Water Act,
the Safe Drinking Water Act and other environmental legislation. The
EPA Regional Offices that carry out the programs in the various parts
of the country are responsible and accountable for achieving the objec-
tives established in legislation. Whether the Region or another agency
(through delegation) is sharing work responsibilities or managing the
day-to-day activities of a program, the Region is ultimately responsible
to Congress and the public for the effects of the programs on environmen-
tal and public health protection.
The management system for Region V Water Division programs is aimed at
achieving the results foreseen in the enabling legislation. The manage-
ment system consists of:
1. identifying and prioritizing program and management objectives;
2. planning work so as to direct resources toward priority problems;
3. implementing the work planned;
4. monitoring and evaluating program activities and results; and
5. feeding back findings from monitoring and evaluation into planning
and decisionmaking in order to make program improvements.
Monitoring and evaluation are important parts of the management system
because they measure whether activities were properly carried out and
whether the desired results were achieved. They are also important
for examining key issues relating to the water programs. Personnel from
throughout the Division will participate in the development of key
questions for monitoring and evaluation. The Division Director will
track whether the priority objectives of the individual programs are
attained, and assess the combined effects of related programs. Branches
will provide information to the Division Director and will gather the
information they need to effectively manage their programs. One part
of the Branches' monitoring and evaluation work will be the review of
State programs; however, efforts will certainly not be limited to this
area. States, as the Region's partners in program management, will par-
ticipate in the planning and implementation of monitoring and evaluation
work. States have an important role in monitoring and evaluation, both
in helping to assess overall programs and the media and in collecting the
information they need to manage State programs.
The information needs of all involved parties can only be met through a
well-planned, coordinated monitoring and evaluation program. The purpose
of this Guidebook is to help all Water Division personnel plan and carry
out a monitoring and evaluation program of this nature. Since each moni-
toring and evaluation project will be unique, this Guidebook does not
prescribe every step to be taken. Rather, the Guidebook provides general
procedures and concepts that can be tailored to your specific needs. This
book will be updated and expanded in the future, as necessary, to provide
worthwhile assistance in the planning and implementation of monitoring and
evaluation work.
The first major section of the Guidebook covers the annual or more frequent
preparation of monitoring and evaluation work plans—the documents that
generally describe the work to be undertaken. This involves:
1. specifying program and management objectives, and
prioritizing them;
2. identifying key questions about the extent to which
priority objectives are being achieved or about other
issues that do not fit neatly under any one objective; and
3. selecting an appropriate approach to answering
the key questions (including such things as monitoring,
evaluation, cost-benefit analysis, reporting systems
and audits) and appointing a team leader to organize the
development and implementation of a Plan of Study for each
monitoring and evaluation project.
The second major section of the Guidebook deals with the preparation and
implementation of a Plan of Study for a monitoring or evaluation project—
the specific plan you will use to try to answer your key questions. The
first part of this section focuses on planning, including identifying what
information you need, where you are going to get it and what you are going
to do with it. This work will normally take place in the summer months, as
program plans are negotiated with States and performance standards are
negotiated with staff for the upcoming fiscal year. This second section
also covers implementation of your Plan of Study, including
gathering information, analyzing findings and preparing reports.
Implementation of monitoring and evaluation plans will involve individuals
throughout the Water Division, as well as other Divisions and the States.
Because most Division employees will have a role in monitoring and evalua-
tion work, it is important that you understand the principles and concepts
presented in the Monitoring and Evaluation Strategy (November 1981) and in
this Guidebook.
One final note: the words monitoring and evaluation each have a specific
meaning. Monitoring is the review or tracking of program operations to
ensure that resources are being applied according to legal and administra-
tive requirements and that program objectives are addressed. Evaluation is
the appraisal of information to determine whether program objectives are
being achieved. Evaluation attempts to establish if a causal relationship
exists between program operations and results by isolating program effects
from other factors. Evaluation focuses on environmental impacts, both
planned and unanticipated, and also attempts to ascertain if more effective/
efficient alternatives can be used to produce the desired results. (More
information on these ideas is provided in Tab D). As you go out and answer
your key questions, you will probably do some monitoring and some evaluation.
Do not let the terminology obfuscate your fundamental objectives—the
principles that apply to what we are trying to do are very basic, including
such things as gathering timely, accurate information and trying to get
a balanced, objective picture of the situation being assessed.
CONTENTS
Purpose
PREPARING ANNUAL MONITORING AND EVALUATION WORK PLANS
A Involving States
B Specifying Objectives, Measures and Standards
C Determining Key Questions
D Selecting Approaches to Answering Key Questions
IMPLEMENTING MONITORING AND/OR EVALUATION PROJECTS
E Planning
F Gathering Information
G Analyzing Findings
H Preparing Reports
ADDITIONAL INFORMATION
I Water Division Monitoring and Evaluation Strategy
J Information on Evaluation Methodologies
K References
PREPARING ANNUAL MONITORING AND
EVALUATION WORK PLANS
The first steps in program monitoring and evaluation are identifying
the objectives toward which program management will be directed and
determining key questions about progress toward these objectives and
other high priority issues. You then need to make some preliminary
decisions about how you will approach trying to answer your key questions
and about who will coordinate the work. The States, of course, should be
involved in these planning efforts. This Section of the Guidebook dis-
cusses these considerations and the work to be done as part of the annual
program planning process.
Involving States
Extensive involvement of our State counterparts is important in all steps
in the planning and implementation of monitoring and evaluation work. Over
the past several years, the Region has delegated major parts of the water
media programs to State agencies, which now have the authority and the
responsibility to routinely administer these functions. When a delegation
agreement is fully implemented, the relationship between the Water Division
and agencies which have assumed programs should be that of a partnership.
As co-managers of the water programs, the States share with the Region the
responsibility for program operations and results, and need feedback on
the efficiency and effectiveness of the programs. State agencies are also
directly influenced by Congressional decisions, not only because they
are grant recipients, but because they were assigned specific roles in
environmental statutes. They, therefore, share our concerns about accurately
representing the programs to legislators. States also require information
gathered through monitoring and evaluation in their dealings with State legis-
lators, State budget offices and the public. Since States carry out the
day-to-day operations of programs, they know what data can easily be made
available and have an important role in collecting this information. For
all these reasons, States should have direct input into the identification
of key questions, information requirements and the means through which
information will be collected.
Specifying Objectives, Measures and Standards
Specifying comprehensive water media objectives that guide program managers
is a critical element of program management, planning and evaluation.
Because of the complexity associated with joint Federal and State manage-
ment responsibilities and the extremely general nature of the legislative
goals contained in the Clean Water Act, formulating objectives and associated
measures of progress toward these objectives is a demanding management respon-
sibility. Ideally, a list of objectives should be developed through extensive
collaboration among States, the Region and Headquarters to ensure a balanced
presentation. Program objectives should describe the desired impacts of the
water programs on the public and the environment. These objectives can be
derived from the goals and required outputs in the Clean Water Act and the
Safe Drinking Water Act. Program objectives may also be derived from other
legislative statements, expressions made by citizens or the media, or from
local, State or Federal program personnel. Management objectives describe
the management initiatives and program activities EPA and the States employ
to achieve program objectives. Basically, program objectives describe what
we are trying to accomplish, and management objectives describe how we intend
to do this.
Objectives are important in management because they can be used as bench-
marks toward which we direct our efforts and against which we can compare
actual performance. They provide a relative indication of whether work
was completed as planned, and whether or not this led to the desired results.
Because the objectives will serve several purposes, communication and agree-
ment on objectives and measures between Federal and State managers is
essential.
The objectives can be of a quantitative or a qualitative nature, since we
may want to know not only what was done but how well it was done. They
should be specific enough so that they are useful as a basis for comparison,
and they should be in sufficient number to cover the intended and unintended
effects of program operations. The goal of achieving fishable/swimmable
waters, for example, is too vague and all-encompassing to be useful to program
managers. "To ensure that the highest priority treatment facilities are
constructed with available construction grants funds," an objective derived from
the general goal, is much more useful, but needs to be complemented by other
objectives covering other parts of the water quality programs.
Where you are assessing the extent to which an objective has been achieved, you
will often need to have one or more performance measures. The measures would
ideally cover all aspects of a given objective including quality, quantity
and/or timeliness. They serve the purpose of transforming an abstract concept
into a framework more useful to evaluators. For example, the objective "to
assure safe drinking water at public water supplies" can be measured by track-
ing the percent of water supplies in compliance with all applicable standards.
A measure for the objective "to achieve the best attainable water quality"
might be the percent of stream segments meeting water quality standards.
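To make the arithmetic behind such a measure concrete, here is a minimal
sketch in Python that computes the percent of stream segments meeting water
quality standards. The segment records and field names are invented for
illustration only.

    # Minimal sketch: computing the performance measure "percent of
    # stream segments meeting water quality standards."
    # Segment records and field names are illustrative assumptions.
    segments = [
        {"id": "IL-01", "meets_standards": True},
        {"id": "IL-02", "meets_standards": False},
        {"id": "WI-07", "meets_standards": True},
    ]

    def percent_meeting_standards(records):
        """Return the share of segments meeting standards, as a percent."""
        if not records:
            raise ValueError("no segments to evaluate")
        meeting = sum(1 for s in records if s["meets_standards"])
        return 100.0 * meeting / len(records)

    print(f"{percent_meeting_standards(segments):.1f}% of segments meet standards")

The same pattern applies to any ratio-type measure, such as the percent of
water supplies in compliance with all applicable standards.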
In some cases, performance standards or comparison standards will be useful
in judging the extent to which an objective has been achieved. Such a
standard provides a yardstick against which actual performance can be
compared to make a judgment about the relative value or success of that
part of the program.
Performance standards are generally negotiated with Headquarters, the
States and/or the Corps prior to the beginning of a fiscal year. The
standards are based on what has been accomplished in the past and/or
what is happening in other Regions or States, and they represent an
acceptable level of progress to be made. Your assessment then starts
with a determination of whether or not the standard was achieved.
Comparison standards may be appropriate in some cases to assess a program
activity or result to determine if it represents excellent performance,
satisfactory performance or unsatisfactory performance. Where perform-
ance standards have not been negotiated, comparison standards can be
established (based on sound technological or management considerations).
The point is that the determination of a program's adequacy is based on
objective, pre-determined criteria, rather than on the feelings of the
evaluator.
Comparison standards need not always relate to the performance of an
individual or office. Judgments about a particular stream, for example,
can be made through comparisons with:
- historical chemical and biological data on the
same stream segment;
- data from other portions of the same stream (e.g.,
upstream/downstream of a point-source discharge, as sketched below);
- data from other streams in the same region, with
similar habitat characteristics; or
- baseline data on natural watersheds drawn from literature.
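As a minimal numerical sketch of the upstream/downstream comparison above,
the fragment below contrasts mean dissolved-oxygen readings on the two sides
of a discharge. The readings are invented, and a real assessment would apply
accepted statistical tests rather than a bare difference of means.

    # Minimal sketch: comparing invented dissolved-oxygen readings (mg/L)
    # upstream and downstream of a point-source discharge.
    upstream = [8.1, 7.9, 8.4, 8.0]
    downstream = [6.2, 5.9, 6.5, 6.1]

    def mean(values):
        return sum(values) / len(values)

    print(f"mean upstream DO:   {mean(upstream):.2f} mg/L")
    print(f"mean downstream DO: {mean(downstream):.2f} mg/L")
    print(f"drop across the discharge: {mean(upstream) - mean(downstream):.2f} mg/L")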
Developing standards will not be essential for all your objectives, but
they may be helpful in some cases. Whenever performance standards or
comparison standards are used, keep in mind the findings and recommenda-
tions of the evaluation will be influenced by the standard employed. To
the extent that the standard is poor, the results of evaluation will be
ambiguous or misleading. Development of valid standards is, therefore,
an important aspect of planning monitoring and evaluation work.
Determining Key Questions
The development of key questions about the operations and effects of the
water media programs is an important initial step in the preparation of
annual monitoring and evaluation plans. Such questions are important
because they define the scope and direction of your monitoring and evalua-
tion work—these are the things you will try to find out in your monitor-
ing and evaluation projects. An essential part of the monitoring and
evaluation work will relate to the extent to which a program or management
objective is being achieved. Key questions of this nature would be appro-
priate for management reviews of delegated or non-delegated programs. Key
questions could relate to cross-cutting issues or questions raised by OMB,
Congress (e.g., the GAO report on municipal compliance) or Regional or
State decisionmakers (e.g., Illinois' challenge of AWT review procedures).
Questions could also address special issues related to impending changes
in water media legislation, regulations or policies or from managers'
need to know about developing problems, new policy implementation or
best operating practices.
The Water Division Guidance for Preparing Monitoring and Evaluation Work
Plans (November 1981) suggested some general ways of identifying your
most important questions on program operations and effects. Some of
these procedures, including those outlined below, can help to highlight
areas to be addressed in a monitoring and evaluation project.
1. Look over lists of the program and management objectives and other
documents such as the Water Media Environmental Assessment and
Strategy. Decide what questions you as a program manager want to
address about the extent to which priority objectives are being
achieved.
2. Consult State counterparts to get their input on key questions that
you and the States should try to answer about program results,
efficiency and effectiveness.
3. Determine what information is needed for reporting to Headquarters,
based on the national management and evaluation systems, the Adminis-
trator's accountability system, guidances and past experience. Make
sure you get this information as part of a monitoring and evaluation
effort. Solicit input from Headquarters (if appropriate) on key
questions they think we should address.
4. Look at the delegation agreements (if applicable) and determine what
information collection is called for in the agreements. Carefully
consider each of these information requirements and decide if it
fulfills a priority information need. Any items that are not
essential should be dropped; the rest of the information should
be collected in the monitoring and evaluation projects.
5. Look at the program plans in the program grant applications, the
State/EPA Agreements, and any other documents that might include
performance standards. Determine what information should be
collected to measure progress toward the standards.
Here are some examples of key questions you might identify relating
to program or management objectives:
General Question: Is the program achieving the original intent of the
enabling legislation and/or regulations?
Specific Question: Are beneficial uses being attained on stream segments
where control measures are in place?

General Question: How does program performance compare to past performance?
What is the trend over time?
Specific Question: Is compliance (with permit effluent limitations)
improving at municipal facilities over time?

General Question: How does this program compare to other programs or to
similar programs in other areas?
Specific Question: How does municipal compliance compare to industrial
compliance? What accounts for the differences?

General Question: How does this program compare to a theoretical program
which could exist instead of this program?
Specific Question: Would the existing public water supply regulatory
program be more effective if it were supplemented with a grants program?

General Question: Does the program direct its resources to the most
significant problems first?
Specific Question: Are facilities on the municipal compliance priority
list the first to receive funding under the State's project priority list?
Undoubtedly, many of your key questions will relate to program or manage-
ment objectives, since they basically address the question: "Are we accom-
plishing what we are supposed to be accomplishing?" These types of concerns
should be coupled with monitoring of program activities to form the basis
for reviews of State and Federal program operations. However, focusing
entirely on accepted program and management objectives may cause you to miss
some questions which could or should be addressed in a monitoring and evalua-
tion work plan. Be open to special issues or problems that merit your atten-
tion.
Special issues could relate to cross-cutting programs, questions, problems,
successes or accomplishments that do not fit exactly into one of your objec-
tive statements but, nevertheless, are important. Such issues could be raised
by program managers, Congressional or public inquiries, newspaper or magazine
articles, audit reports or environmental groups.
Outlined below are some examples of issue-related key questions that could be
worthy of attention:
1. What are the unintended consequences (side-effects) of carrying
out this program?
2. Do GAO reports on the program accurately portray the situation in
Region V?
3. What changes would be appropriate to the program's legislation or
regulations?
4. Are there patterns of audit exceptions that indicate a significant
problem?
5. What municipal point source control measures have worked the best in
Region V?
6. Is there a relationship between municipal compliance with permit
effluent limitations and compliance with the drinking water
regulations?
7. Are sludge disposal practices having an effect on ground water?
8. Are fixed water monitoring stations providing useful information
for programmatic decisionmaking?
9. Assume water monitoring data indicates a pattern of lower than
expected water quality downstream from a certain type of treatment
plant. Does this indicate a problem with that type of plant?
Addressing issues such as those suggested above can provide managers with
important information about what is going right and what is going wrong
in the implementation of the water media programs.
After you have identified a list of key questions, you will probably need
to prioritize them with other Division managers and the Division Director
and eliminate those that are the least important. Some of the remaining
questions may need to be adjusted later in the process of developing
monitoring and evaluation work plans, due to resource constraints or other
considerations. Identifying important, answerable key questions is an
important first step, since finding answers to these questions will be the
basis for your monitoring and evaluation activities.
Selecting Approaches to Answering Key Questions
After program and management objectives, priorities and key questions have
been established, you next need to select approaches to answering your key
questions and to identify team leaders for the monitoring and evaluation
projects to be undertaken.
Depending on the information needed, any one of a number of approaches may
be suitable. A carefully balanced approach that strives to incorporate the
appropriate mix of qualitative/quantitative information should be sought.
Quantitative, compliance and input information can be most readily derived
from monitoring, reporting systems, cost analyses and audits. Causal,
output and outcome information should be acquired from evaluation. If
you determine that the needed information can be best obtained through
evaluation, you should then decide if the information can be obtained as
part of a continuing evaluation of objective attainment or through a
special one-time evaluation project.
Questions dealing with the number of POTWs complying with permits or O&M costs
associated with specific treatment technologies can be answered through
monitoring or cost analyses, respectively. Questions such as "Are
appropriate treatment technologies used in new plants?" or "Can a relation-
ship be established between municipal compliance/non-compliance and water
quality?" can only be answered through evaluation.
The use of appropriate treatment technologies can be evaluated through
continuing assessment of program performance against objectives/performance
measures. The relationship between compliance/non-compliance and water
quality would be examined in a special one-time evaluation since more
variables must be examined and a more rigorous approach would be required
than is possible in a continuing evaluation of objective attainment.
Monitoring, as described in OMB Circular A-102, is the Agency's tool for
ascertaining whether use of federal resources meets legal and administrative
guidelines and leads toward accomplishing program objectives. Information
on program efficiency and level of effort (obligations, outlays, pre-construc-
tion lags and completions, etc., in the construction grants program) is
gathered through monitoring. The basic focus of monitoring is on inputs,
i.e., whether human and financial resources are applied toward achieving
objectives. These program inputs, however, are not automatically related to
program outputs or effectiveness.
Evaluation provides information on whether programs or projects
"achieve their objectives or produce other significant effects"
(OMB Circular A-117). Evaluation deals with the short- and long-term
program outputs and effects, and builds on monitoring data in assessing
whether programs meet their management and environmental objectives.
Evaluation provides the basis for ascertaining whether:
- programs and resources applied are producing desired
results;
- other results are being produced by the program and
whether these results are desirable or undesirable; and
- there are more efficient/effective alternatives for
solving problems that the program addresses.
The example at the end of this section illustrates the progression of
steps discussed thus far leading to the selection of an appropriate
information collection approach.
The final step in the program planning process for monitoring and evaluation
is the selection of a team leader for each key question or for a group of
related key questions. The team leader is responsible for planning the
monitoring and/or evaluation projects, assuring that plans are carried
out and completing the project within the appropriate timeframe. Other
individuals are selected to work on monitoring and evaluation projects,
as appropriate, as plans to answer key questions are developed in more
detail.
[The example referred to above appeared on pages D-3 and D-4 as a two-page
table tracing the progression from program objectives and key questions to
the selection of an information collection approach. The table is not
legible in this copy; only scattered fragments survive.]
IMPLEMENTING MONITORING AND EVALUATION PROJECTS
Every successful monitoring and evaluation project, i.e., one that answers
the key question(s), includes four essential functions: planning, information
collection, analysis and reporting. This section of the Guidebook describes
each of these four essential functions and the activities associated with
them.
Planning
The success of each monitoring and evaluation project is directly related to
the attention devoted to planning each project. Evidence of complete and
careful project planning that will ensure that key questions are answered is
provided in the deliverable (output) of the planning function—the Plan of
Study.
A well-prepared Plan of Study is important for three reasons. First, it will
be used as a checklist to direct activities of the monitoring and evaluation
team, so that you obtain the information when it is required for decision-
making. Second, the Plan of Study documents your approach to the monitoring
and evaluation work and your methodologies. This will be important if the
findings of the study are questioned at some later time, or if the study is
to be reproduced in another area or at another time. Third, it can become
an integral part of your monitoring or evaluation project reporting because
it summarizes why and how the project was undertaken. It is an invaluable
tool to ensure managers agree on your objectives and methods before you begin.
The key components of a Plan of Study are:
1. Purpose of the study
2. Scope of the study
3. Specific information needed to answer key questions
a. What information is needed?
b. When is it needed?
c. How and where will you get it?
4. Analysis (of findings) to be done
5. Reports
a. When will reports be prepared?
b. What will be the scope of the reports?
c. Who will review draft reports?
6. Chart of tasks and milestones
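One way to keep these components visible to the whole team is to hold them
in a simple structured record. The sketch below is a hypothetical skeleton
only; every field name and entry is invented for illustration.

    # Hypothetical skeleton of a Plan of Study held as a structured record.
    # All field names and entries are invented for illustration.
    plan_of_study = {
        "purpose": "Assess municipal compliance with permit effluent limits",
        "scope": "Region V municipal facilities, FY 1983",
        "information_needed": [
            {"what": "self-monitoring data", "when": "Q1", "source": "States"},
        ],
        "analysis": "Compare compliance rates across States and over time",
        "reports": {"draft_due": "1983-06-01", "reviewers": ["Branch Chiefs"]},
        "milestones": [("information collection complete", "1983-03-31")],
    }

    for component, content in plan_of_study.items():
        print(f"{component}: {content}")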
You should strive to develop a simple, realistic study plan that will provide
the information needed to answer key questions in a timely manner. You should
also strive to ensure objectivity and balance in your Plan of Study. This
is fundamental to all of the activities in project planning. To produce a
Plan of Study, carry out the following tasks, each of which is discussed below:
1. Reassessing initial key questions and identifying related questions
2. Clarifying schedules, deadlines and budget restrictions
3. Identifying information requirements and sources
4. Identifying team members
5. Preparing information collection plans
6. Planning information analysis
7. Planning report preparation
8. Training team members
9. Planning logistics
Task 1 - Reassessing Key Questions
The first step in planning your work is to reassess the initial key
question. This is particularly important if time has elapsed since
key questions were identified in the annual planning process—make sure
the question is still important and pertinent. The source of this feed-
back is the manager(s) who will use the monitoring or evaluation informa-
tion and findings. It is essential that the team leader and the study
user share a common understanding of the nature and scope of the key
question to be studied. To help identify managers' information needs
and perception of the key question(s), the team leader can use such
questions as:
- What is the manager's perception of the problem?
Is there dissatisfaction with effectiveness or consequences
of the program? With the lack of a program?
- What uses are to be made of the information to be collected?
- When is the final report needed? Will interim reports be needed?
Discussion with the study user is essential in focusing your monitoring
or evaluation project on the most relevant issue(s).
You should also consider related questions you might want to address.
Try to assess the evaluability of the questions—can they be answered
within your time and resource constraints? It is extremely important
to remember that evaluation is a relatively "expensive" technique
because it requires a significant investment of human resources. You should
therefore ascertain whether evaluation findings will be used by managers and
whether there are alternative methods for obtaining the information.
Another important initial step is to review all the background available in-
house on the situation being assessed. This will put your project in its
proper context, help you identify related questions and help you to plan
your work.
Task 2 - Clarifying Schedules, Deadlines and Budget Restrictions
To carry out a study that will be completed when the answers to key
questions are needed, you will need to establish deadlines and schedules
to be followed by all involved in the project. The monitoring and evaluation
work will only be worthwhile if your answers are available when they are needed
for planning and decisionmaking. Timing considerations as well as budget
restrictions will impact the scope of your study, including the number of
people involved, the amount of information collected, the number of trips
taken and the level of detail of analysis and reporting. Establishing timing
and budget constraints early in the planning process will help set the limits
for your Plan of Study.
Task 3 - Identifying Information Requirements and Sources
After planning the scope and timing of your study, you must identify the
precise information needed to answer key questions and related questions.
Many of your information requirements can be drawn directly from performance
measures. The measure "percent of stream segments in compliance with water
quality standards," for example, would require data on the number of seg-
ments being counted, the standards for each of these segments and current
water monitoring data for low-flow periods for these segments. Other
information requirements may not be so easily recognizable, but they
nevertheless must be identified. Consultation with the States will be help-
ful, as will discussion with other individuals in the Region, e.g., Regional
water monitoring staff, the sludge coordinator or the O&M coordinator.
Try to identify information requirements so that key questions can be
answered in the most efficient manner possible. In some cases, information
you need may not be available, or may be obtained only at excessive cost.
You then need to find a different approach or measure for answering your key
question, or to consider revisions to your initial question. Consider-
ation of existing sources of data and estimation of the work involved in
getting other information are very important as you identify your
information requirements.
A number of existing data sources throughout the Region can serve as a
useful starting point for planning the most cost-effective means of
answering key questions. These sources may also be helpful in identifying
a design for your evaluation, or in identifying important related issues
or questions. A list of potential sources of data includes the following
programmatic areas.
1. Water Compliance Programs
a. Special studies such as fish studies to identify potential
hot spots
b. Sampling inspections (especially toxics and biomonitoring)
c. Special Studies
d. Self-monitoring data
e. Compliance tracking records
2. Water Quality Programs
a. Waste Load Allocation, AST, AWT, Water Quality Standards, Nonpoint
Source Programs
1. Intensive surveys
2. 305(b) reports
3. 208 plans
4. Intensive survey data
5. Ambient monitoring data
b. Environmental Impact Statements (EIS)
1. Intensive surveys
2. Biological surveys
3. Ambient data
4. Special Surveys (e.g., septic snooper)
c. Clean Lakes Program
1. Remote sensing laboratory surveys
2. Biological surveys, etc.
d. Wetlands Dredge and Fill Program
1. Monitoring of bottom sediments
2. Water monitoring
e. Construction Grants Program
1. CGMS
2. Facilities plans
3. Drinking Water/Ground Water Programs
a. Ground Water Monitoring Data
b. Sanitary Survey Reports
c. Investigations of suspected spills or contamination incidents
d. Water system compliance data
4. Water Monitoring Programs
a. Exposure/Risk Fate Assessment Dilution Studies
b. Hot Spot Studies
c. Operation & Maintenance (O&M) Inspections
d. STORET
e. Fixed Station Monitoring Network
f. Reach File
5. Hazardous Waste Programs
a. Hazardous Waste Site Investigations
1. Intensive surveys
b. PCB Investigations
1. Special remedial and mitigation feasibility studies
6. Radiation Program
a. Special Studies in Response to Accidental Spills or Other
Emergencies
7. Great Lakes National Program Office
a. Whole Lake Fish Monitoring for Toxic Substances
b. Harbor & River Mouth Fish Monitoring for Toxic Substances
c. Sediment Surveys for Toxic Substances in Harbors
d. Water Intake Monitoring
e. Water Intake Verification Studies
f. Supplementary Storm Event Tributary Monitoring to Improve Estimate
of Loading to the Lakes
g. Atmospheric Loading Network
h. Periodic Intensive Studies of each of the Great Lakes
i. Special Studies
8. U.S. Geological Survey
a. NAWDEX
b. NASQAN
c. WATSTORE
9. Financial, cost accounting and operational management information

The Assessment of the Region V Water Monitoring Activities
(EPA-905/4-81-001) can be consulted for further information
about what water quality data is available.
Be ingenious in finding sources of information. Patterns of audit
exceptions, newspaper and magazine articles, public participation records,
environmental groups, State studies and many other sources will be helpful
to you. The more effective you are in identifying information requirements
that are complete yet relatively easy to fulfill, the less resource inten-
sive your monitoring and evaluation work will be.
Task 4 - Identifying Team Members
After identifying your information requirements, you will have a good
idea of the types of individuals needed to conduct the study. As a first
step in selecting team members, the team leader should go over the Plan
of Study (as developed to this point) with appropriate program managers.
This review will serve as a check on the plans you have developed, to
assure you have correctly interpreted questions and are moving in the
right direction. You can then jointly agree on and arrange for the
people to be on project teams.
In almost all cases, information collection will require the coordinated
efforts of two or more individuals. If, for example, you will need a
highly-trained engineer, you must make arrangements to have such a
person on the project team. Try to put together a monitoring and evalu-
ation team whose members have a mix of skills appropriate to the inter-
disciplinary nature of environmental problems. A team could consist
of personnel from inside and outside the program in the Water Division,
other Divisions, the States, local levels or outside the Government
(when this is possible).
Identifying team members early is important so they can schedule their
time to get the project done within deadlines. It will also allow them
to participate in planning information collection activities. Not only
do team efforts help reduce the time required for information collection,
but they help to control the possible biases of any one person doing all
the work. Using teams allows you to tap a wide range of expertise. Also,
different individuals with different backgrounds will contribute to a
more balanced assessment. By specifying each individual's role and
responsibilities, you will minimize potential problems and give everyone
clear direction regarding what is expected of him or her.
Task 5 - Preparing Information Collection Plans
Information collection plans are shaped by three considerations:
- the need to answer key questions;
- the analyses necessary to answer these questions; and
- the need to collect information that provides the most complete,
objective picture of the program or issue.
An information collection plan describes what information will be obtained,
and how and where it will be obtained. A good plan employs a variety of
information collection methods to gather information from a variety of
sources. This variety is essential to a balanced picture of the issue or
program being studied because it compensates for the limitations of a
single method or source and allows you to compare methods/sources to see
if they support the same set of findings and conclusions.
The structure of each plan should utilize internal checks and balances
that include:
- using research evaluation designs and techniques that
ensure objective and representative information is collected
and analyzed;
- using teams whose members have a mix of skills appropriate
to the interdisciplinary nature of environmental problems
and include personnel from inside and outside the program
and inside and outside the government (when possible);
- gathering many types of information, including both qualitative
(discussion, observation) and quantitative (statistical, ambient
monitoring, financial research, etc.) information;
- sampling program operations in a variety of settings (rural/
suburban/urban, R&D sites, serious/minimal environmental problems,
etc.);
- collecting information in a variety of ways (existing analyses,
direct observation, case studies, discussions, file searches,
ADP systems, etc.);
- contacting a variety of information sources (local/State/Federal
program administrators, elected officials, members of the public,
private groups with program knowledge, etc.); and
- employing analytic techniques that compare and contrast qualitative
and quantitative information, information from various sources and
sites, etc.
In addition to specifying these elements, the information collection plan
should identify the individuals who will perform the study, the precise
responsibilities of each individual, and the milestones and deadlines for
information collection activities. This planning must take into account
budget restrictions and the timeframes in which answers to key questions
are needed for program decisions. (If, for example, information is needed
for a hearing, information must be collected in sufficient time to allow
for analysis and report preparation before the hearing.)
When information needs and collection methods are clearly identified,
attention must be directed to where information can be collected. In some
cases, considerable information will be available in the Regional Office.
For example, existing sources of water quality data should be carefully
examined before additional data collection is initiated. Arrangements with
the Water Monitoring Coordinator (Water Division/Planning and Standards Section),
other Divisions and the States should be made as soon as possible, so that
the necessary water quality data will be available when it is needed. In
some cases, information collection may also extend to visits to State Agencies
for file searches, discussions and observations.
Some studies may require teams to conduct site visits. There are several
approaches to selecting sites. The first approach, after identifying all
possible sites and categorizing them by size, compliance status or other
characteristic of interest for the study, is to select sites at random for
a visit. The second approach is to use purposive sampling, i.e., sampling
with a specific purpose that relates to the study objectives, questions
and issues. The following illustrate purposive sampling.
A "representative snapshop of reality" study aims to present an overview
of the program that reflects the Regional profile in as representative
a manner as possible.
A "suspected problems" study aims to determine whether and to what extent
a particular program suffers from some suspected operational problems.
An "early warning" study might place even more emphasis on visiting sites
with suspected problems in order to identify emerging problems that must
be corrected. These sites might include the most problematic sites, but
might also include older or more fully-implemented sites that could reason-
ably be expected to show more advanced symptoms of the problems.
A "best practices" study would logically select those sites that have
been identified as being particularly effective in delivering services.
In addition, this type of study might profit from several visits to
"average practices" and "worst practices" sites to gain a perspective
on the excellent practices seen elsewhere.
Sites where no program activities are funded or R&D sites might also
provide useful information. Visits to sites where there are no water
programs in operation may illustrate the true extent of need and can be
used as a basis for comparison with areas where programs are being
implemented. Visits to R&D sites might be helpful in assessing best
practices. A combination of techniques might be used to select the types
of sites to visit. For example, some sites might be selected randomly in
order to provide an overview of representative sites. But the remaining
sites might be selected purposively from the largest, most visible or most
problematic sites, depending on the specific nature of the suspected problems.
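The mixed selection just described can be sketched in a few lines: draw a
small random sample for a representative overview, then add the most
problematic of the remaining sites purposively. All site names, fields and
counts below are invented for illustration.

    # Minimal sketch of mixed site selection: a random draw for a
    # representative overview plus a purposive pick of the worst
    # remaining sites. All site data is invented.
    import random

    sites = [
        {"name": "Plant A", "size": "large", "violations": 7},
        {"name": "Plant B", "size": "small", "violations": 0},
        {"name": "Plant C", "size": "large", "violations": 3},
        {"name": "Plant D", "size": "small", "violations": 9},
        {"name": "Plant E", "size": "small", "violations": 1},
    ]

    random.seed(1982)                      # fixed seed so the draw can be repeated
    random_pick = random.sample(sites, 2)  # random component

    remaining = [s for s in sites if s not in random_pick]
    purposive_pick = sorted(remaining,     # purposive component: worst two
                            key=lambda s: s["violations"], reverse=True)[:2]

    for s in random_pick + purposive_pick:
        print(s["name"], "violations:", s["violations"])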
Selecting types of respondents at various sites is a relatively
straightforward process of identifying all relevant groups and sub-
groups (Federal/State/local program staff, professionals in relevant
disciplines, members of the public, interest groups, treatment plant
operators, etc.) and allocating a number of discussions to each type.
Where you are selecting specific individuals in each category to contact
for discussions (such as members of the public, etc.), you should ensure
that you use random selection procedures.
Statistical analysis will frequently allow you to draw conclusions and
generalizations about the subject being studied. If you intend to do
such analysis, you must consider accepted statistical procedures in
determining the size and make-up of the sample you include in your study.
By using variables such as the size of the total population and the size of
your sample, in conjunction with the normal distribution curve, you can
determine with a great deal of reliability if your findings would apply
to other, unsampled parts of the population. It is important, therefore,
that you consider statistical principles in deciding what portion of a
population you will study in your monitoring/evaluation project.
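As one concrete illustration of these principles, the sketch below computes
a required sample size for estimating a proportion, using the familiar
normal-curve formula with a finite population correction. The population
size, confidence level and margin of error are assumptions chosen only for
the example.

    # Minimal sketch: required sample size for estimating a proportion,
    # using the normal-curve (Cochran) formula with a finite population
    # correction. All numerical inputs are assumptions for illustration.
    z = 1.96    # normal-curve value for 95 percent confidence
    p = 0.5     # assumed proportion; 0.5 gives the most conservative size
    e = 0.05    # acceptable margin of error (plus or minus 5 points)
    N = 400     # assumed total population, e.g., facilities in the Region

    n0 = (z ** 2) * p * (1 - p) / e ** 2   # size for an unlimited population
    n = n0 / (1 + (n0 - 1) / N)            # finite population correction
    print(f"sample about {round(n)} of the {N} facilities")  # about 196 here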
Task 5.1 - Preparing Discussion Guides
Discussions will frequently be an important part of your information
collection activities. They could be with State personnel, water supply
and sewage treatment plant operators, local government officials, concerned
citizens, etc., and could take place as individual discussions, telephone
conversations or group conferences. The use of in-person discussions,
telephone discussions or group discussions will depend upon the type of
information needed and time and resource constraints. The personal
contact during a one-to-one discussion allows for more personal rapport
than during group discussions and a better understanding of the respondent's
reactions to various issues. On the other hand, telephone discussions
facilitate the speed with which needed information can be obtained and
are much cheaper than in-person discussions. Finally, a group discussion
offers the advantage of obtaining several different perspectives simultane-
ously, but lack of privacy may inhibit some respondents and the discussion
may be dominated by a few vocal participants. In all cases, discussions
should be carried out according to a plan to assure you get the information
you need. A discussion guide is the instrument to be used for planning
and successfully carrying out your conversations.
The discussion guide should outline the flow of the overall discussion to
logically progress from topic to topic and assure all the important ques-
tions are answered. Important questions should be specifically included,
along with space to record responses, to assure all team members come away
with equivalent information. A well-structured discussion guide will also
facilitate summarizing and batching findings from a number of discussions.
The guide should not be designed as a questionnaire to be administered
verbatim to respondents in a formal interview. Instead, you should use it
as an outline to structure the discussion and capture the appropriate
responses.
Not every monitoring and evaluation project will require formal discussion
guides. The need to conduct discussions informally in a variety of settings
(e.g., at sewage treatment plants, on river banks) may preclude the use of
any paper-and-pencil materials. In such cases, the team leader must ensure
that team members are well trained and record information discussed imme-
diately after the discussions. The use of discussion guides should be an
explicit decision, not an automatic reflex.
Step One: Planning the Discussion Guides
Discussion guides will, of necessity, be as varied and unique as moni-
toring and evaluation subjects. Although there are no hard and fast rules
governing their construction, the guides should be concise, comprehensive,
orderly, functional and flexible. They must structure discussions so you
will get all the information required for your study.
Discussion guides should be comprehensive and cover all information needed
to answer key questions. Since different types of respondents can best
speak to different issues, you may need separate guides targeted to each
type of respondent. For example, State program managers
can reasonably discuss the burden of Federal paperwork requirements, a
topic on which local citizens benefitting from a new sewage treatment
plant probably have no knowledge. On the other hand, local citizens can
express their satisfaction/dissatisfaction with the plant better than
program managers.
Guides should be orderly and follow a logical series of concerns with a
carefully-planned progression. The flow of this discussion can be aided
by some simple rules. First, start with several open-ended, non-threatening
issues that allow the team member and the respondent to establish rapport
and create an atmosphere of trust. This early confidence sets the proper
tone for the rest of the discussion.
Then, as the discussion progresses, group related items together. It is
confusing for both team members and respondents if the discussion skips
from topic-to-topic. All items relating to one particular issue should be
covered before moving to another subject. For example, all concerns about
quality assurance should be addressed before exploring data management.
Threatening issues should generally be raised near the end of the discussion,
after the needed rapport and trust have been firmly established. Raising
these issues too early could seriously jeopardize the discussion. At the
conclusion of all discussions, include one last, open-ended question such as,
"Is there anything else you would like to tell me about these issues?"
Guides should be functional. Just as the guide must be well-organized to
help direct the respondent's thoughts, it must also be structured to assist
the team members to conduct the discussion. For example, since team members
need to introduce themselves and their purpose, a helpful discussion guide
includes an introductory statement with this information. These statements
will probably not be read verbatim, but their presence will help ensure a
relatively consistent and clear explanation to all respondents. Spaces should
be set aside in the guide for answers to the most important questions, to
simplify summarizing and coding and assure critical information is collected.
It is also helpful if guides contain suggested transition statements to
move the discussion from one area to the next. For example, "That's
all the questions about Federal paperwork. Now I would like to ask about
the attitudes of local citizens" aids the flow of the discussion by helping
the team member signal a new topic.
Also, if certain topics are to be omitted for certain respondents, the
guide should contain clear, easily identifiable instructions for the team
member to "Omit issues 7 and 8 for citizens" for example. Certain responses
need to be probed further ("If respondent agrees, find out why"). Including
these types of instructions on the guides in a clearly recognizable and
consistently formatted manner will greatly facilitate the proper use of
guides.
Above all, guides must remain flexible. Content, formatting and instruc-
tions should not lead to an overly-structured, inflexible instrument. Your
discussion guides should allow the flexibility to pursue unexpected findings
and to capture anecdotes, observations, hunches or insights. In practice,
this flexibility can be maintained by including a substantial number of
open-ended items with follow-up probes and by allowing respondents' answers
to influence the order and intensity of the discussion. It also helps to
designate certain portions of the guide for team members to record their
own perceptions. Including a specific portion helps to legitimize the need
for this type of information and encourages team members to pay extra
attention to the subtleties of the situation.
In certain cases, you may decide that confidentiality of responses will be
necessary. Without assurance of confidentiality, some respondents might
refuse to participate or may alter their responses in unknown ways. To be
sure that respondents are aware of the confidentiality of their responses,
all guides should contain an introductory statement declaring that no infor-
mation will be associated with the respondent by name, if he/she prefers
to remain anonymous.
To further ensure confidentiality, discussions should be held in private
and names, specific titles, organizations and other identifiers should not
appear on guides where respondents prefer to remain anonymous. However,
generic titles such as "citizen," "State agency manager," and "plant
operator" are necessary information for later analysis. All completed
guides containing confidential information should be destroyed following
the preparation of the final report and completion of the project.
Step Two: Preparing the Questions
Once the general outline of your guide has been developed, the next step
is to prepare the individual questions. Preparing the most effective and
efficient questions is a critical and difficult task requiring expertise,
experience and a full knowledge of the subject at hand.
There is a common progression in discussions conducted to gather infor-
mation. You should be aware of this progression, and plan questions
and remarks to make the discussion go smoothly. You may want to begin
by setting the stage for a productive discussion. Some initial remarks
about non-threatening topics can be used to put the respondent at ease.
The attitudes of both parties will, after all, have a great deal to do
with the working relationship and the progress made. State the purpose
of the study and the goals of this particular discussion to help elimi-
nate the respondent's anxiety.
Once the stage is set, you move toward collecting the information. There
are many approaches to conducting a discussion. What is suggested here
is a three-phase approach. Three kinds of questions are recommended:
search questions, planned information questions, and interpretive questions.
The sequence moves from exploration, to concrete facts, to judgment.
Search Questions - Search questions, to some extent, allow the respondent
to lead the discussion. They are open-ended, that is, they cannot be
answered with a yes or no. Often they are questions about opinion, e.g.,
"What are your impressions of the effectiveness of water pollution control
programs?" These kinds of questions also tend to relax respondents. If
a problem is uncovered that will require more factual data, the factual
questions should normally be held until the search questions have all been
asked. This first phase is not the time to pin your respondent down; this
is the time when you convey your willingness to approach the situation
with an open mind, and really listen to what the other person has to say.
Open-ended questions have many advantages.
The respondent can tell his/her own story in an unconfining manner and can
emphasize the importance of various aspects of the issues. Unanticipated
information can surface during discussions of related issues. The skilled
team member can pursue interesting leads and can capture the flavor, as
well as the content, of the responses.
There are, however, problems with using a large number of open-ended
questions. First, they require careful probing and thus necessitate more
experienced team members and/or more extensive team training. Second,
recording answers can be unwieldy and much of the richness of the discus-
sions can be lost. Finally, they can be difficult and time-consuming to
analyze, sometimes requiring you to sift through many pages of completed
guides.
Planned Information Questions - The second phase of the discussion should
concentrate on factual information. The questions should be phrased to
elicit short factual answers, and should have a logical progression. At
this point, you want to control the discussion. By using closed-ended
questions, you can often solicit the information you need in a timely manner.
Try to cut off long stories. Be alert to red herrings, subjects brought up
to avoid the questions asked. If the other individual has forgotten a point
and now wishes to expand upon it, keep track of the direction of the conver-
sation and get back to your agenda. Suggest that the topic can be pursued
when the factual information has been collected. When information is not
available, ask if it can be supplied and by whom. Keep a list of the docu-
ments requested and promised.
Interpretive Questions - This is an important phase of data collection.
In this phase, you solicit other people's interpretation or evaluation of
the situation. Questions can be phrased, "Based upon these facts, how do
you feel the situation stacks up? Where do you see the most need for
improvement?" The other person's judgment could be very accurate or revealing.
Hybrid questions that are a combination of open-ended and closed-ended
formats are often useful for this purpose. For example, "Are the figures
cited in GAO's report accurate for your State? How do you explain them?"
is a hybrid question in which the respondent provides a definite answer, and
then elaborates with additional information.
You will probably use all three types of questions, but in varying propor-
tions, in your discussion guide. The purpose and scope of a particular
monitoring and evaluation project will determine the type of information
needed and the kinds of questions you will ask. All of your questions,
whether search, planned information or interpretive, should be simple, direct
and unbiased. Questions should be simple enough that team members
can phrase them easily and respondents can understand them readily.
Questions should also be short, contain only a single issue (not "What
do you think of the program and staff here?") and engage the respondent's
interest.
Task 6 - Planning Information Analysis
After you have identified the information you will get and how you will get
it, you then need to consider what you are going to do with it. You will
need to lend meaning to your findings by analyzing, interpreting and drawing
appropriate conclusions. There are two important reasons to
anticipate the analysis to be done and include it in your Plan of Study.
First, by thinking through the analysis to be done, you double-check the
information requirements you have identified. Have you provided for all
the information you need for the analyses you want to do? Second, the
plans developed can be used to guide the activities of team members
in interpreting and analyzing findings.
The analysis you plan could be comparisons designed to shed light on
the relative value or success of the situation under consideration.
You could, for example, plan to compare your findings to other empiri-
cal information or to a theoretical situation to aid you in formu-
lating conclusions. The evaluation methodologies in Tab J may give
you some additional ideas about the use of comparisons.
An important tool for your analytical work may be triangulation. This
involves using several methods or approaches for answering a key question
as described under Preparing Information Collection Plans. By blending
and integrating a variety of methods you can capture a more complete
picture of the subject being studied. Using different methods also helps
assure that your findings actually reflect program results, not the method-
ology chosen. This approach contributes to a balanced, objective assessment
and, if the findings from different methods support each other, lends
credibility to your conclusions.
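To make the idea concrete, the short sketch below (written in Python purely
for illustration; the methods and findings named are hypothetical) groups the
findings that several methods produced for the same key question, so that
corroboration across lines of evidence is visible:

    # Illustrative sketch of triangulation: grouping the findings that several
    # information collection methods produced for the same key question.
    # The method names and findings below are hypothetical.

    findings = {
        "discussion guides":  "operator training is inadequate",
        "file audit":         "operator training is inadequate",
        "direct observation": "operator training is inadequate",
        "water monitoring":   "effluent violations persist",
    }

    def corroboration(findings):
        """Group methods by the finding they support."""
        support = {}
        for method, finding in findings.items():
            support.setdefault(finding, []).append(method)
        return support

    for finding, methods in corroboration(findings).items():
        label = "corroborated" if len(methods) > 1 else "single line of evidence"
        print(f"{finding} ({label}): {', '.join(methods)}")

Findings supported by more than one method would be reported with greater
confidence than a finding resting on a single line of evidence.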
In some instances, specific case studies can be used to draw inferences
about a program or situation. Case studies involve in-depth analysis of
an individual case, as opposed to collecting a limited amount of informa-
tion about a large number of cases. For the Region V water programs, case
studies might involve studying one treatment plant or examining the combined
efforts of water programs in one basin.
The primary advantage of the case study approach is the level of detail
in which a situation can be studied. All the interrelationships, problems,
successes and effects can be brought out to achieve a thorough understanding
of what is going on. The most significant drawback to using the case study
approach is the lack of information on whether your case study is repre-
sentative of other similar situations. However, in planning your informa-
tion collection activities, you should remember that generalization is
not only based on the number of units observed. More important are the
kinds of units observed, that is, the range of characteristics of the
units examined and the range of conditions under which observation occurred.
The four criteria you should keep in mind when attempting to generalize
from single-case studies are:
1. the range of attributes included in the case study;
2. the number of similarities between the case study and
large groups of interest;
3. the lack of unique features in the case study; and
4. the relevance of the attributes previously identified.
Another analytical technique that may be useful is cost-benefit analysis.
This is a form of analysis in which dollar values are assigned to both
the costs and the benefits of a program or a part of a program. Com-
paring these two dollar figures may give you some insights into the
situation being assessed.
The cost-benefit analysis approach has the advantage of combining several
evaluation criteria, which in an analysis would otherwise be expressed in
different units, into one unit of outcome—the dollar. If an analysis
shows that the dollar benefits would exceed the dollar costs, the project
is presumably worthwhile. Alternatives can be readily ranked on a
scale showing the ratio between costs and benefits, or on a scale showing
value of benefits after the costs have been subtracted. A significant
problem with this approach is the lack of precision involved in assign-
ing dollar values to costs and benefits, particularly to benefits such as
improved water quality or the protection of public health. For this
reason, it is often best to use cost-benefit analysis as a supplement to
other forms of analysis, rather than as the sole method on which you
base your conclusions.
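As an illustration of how alternatives might be ranked, the hedged sketch
below (Python; the alternatives and dollar figures are invented for
illustration only) ranks hypothetical alternatives by benefit-cost ratio and
shows the net benefit for each:

    # Hypothetical sketch: ranking program alternatives by benefit-cost ratio.
    # All dollar figures are invented for illustration only.

    alternatives = [
        # (name, cost in dollars, benefits in dollars)
        ("Alternative A", 1200000, 1500000),
        ("Alternative B",  800000, 1100000),
        ("Alternative C", 2000000, 1900000),
    ]

    # Sort so the alternative with the highest benefit-cost ratio comes first.
    ranked = sorted(alternatives, key=lambda a: a[2] / a[1], reverse=True)
    for name, cost, benefit in ranked:
        print(f"{name}: ratio = {benefit / cost:.2f}, "
              f"net benefit = ${benefit - cost:,}")

A ratio above 1.0 (or a positive net benefit) suggests a presumably worthwhile
alternative, subject to the valuation caveats discussed above.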
The analysis you perform will be a critical part of your monitoring and
evaluation work, since it will point you to your conclusions (and possibly
recommendations) about the situation being assessed. It should, therefore,
be carefully thought out as you plan your monitoring and evaluation work.
Task 7 - Planning Report Preparation
Reports to be prepared (and any briefings to be presented) are other
important parts of your Plan of Study. Your report will be the deliverable
product resulting from your information collection and analysis, and the
quality of your reports will have much to do with the extent to which
your conclusions and recommendations have an effect.
The primary concerns for reporting, with respect to the Plan of Study, are
when you will prepare reports, what they will cover, who will prepare them,
who will review them and to whom they will be sent. You should also antici-
pate, however, special needs for reports and/or briefings. For example, it
may be appropriate in some cases to include maps or photographs. Have you
provided for these in your planning? Plans for reports and briefings to
be prepared, along with a chart of tasks and milestones, will be the last
part of your Plan of Study.
Task 8 - Training Team Members
Before implementing your specific monitoring and evaluation Plan of Study,
you should make sure that all members of the monitoring and evaluation team
have the background information and training they need. Training should
provide everyone with a complete understanding of the program or part of a
program being assessed, the purpose and approach of the study, and the
specific responsibilities of each individual. Let the team members know
how their work will fit into the total assessment. This will make team
members more comfortable with their work and contribute to a better monitor-
ing and evaluation project.
In all cases, training should provide detailed knowledge of and practice
with information collection instruments, especially discussion guides.
Training could also include background on the setting for work and on contacts
with whom members will be dealing, or on political or other special consider-
ations that could come into play. To ensure everyone is aware of pertinent
details, training must also cover the logistics of the monitoring and evalua-
tion work. Training may, in some cases, be needed on the fundamentals of
monitoring and evaluation.
Task 9 - Planning Logistics
The final concern in preparing your Plan of Study is attention to detailed
arrangements for the work to be done. If you will be making a site visit,
you will need to make phone calls and send out letters to set up the visit
well in advance of the actual trip. If you will have a contractor helping
you with data collection, you will need to prepare a procurement request
well in advance of the time the work is to be done. Considering logistics
as early as possible will help you assure you get the information you need
and meet the deadlines you have established.
Gathering Information
When you have completed all the planning for your monitoring or evaluation
project, the next step is to gather the information you need. Information
must be collected in accordance with your Plan of Study. It is essential
in this stage that every attempt be made to control biases and collect
objective information.
Conducting Site Visits
Much of your information collection work will be away from the Regional
Office. Site visits should be carefully planned and carried out to effi-
ciently get the information you need with a minimal amount of intrusion.
As part of your site visits, in virtually all studies, observation can be a
useful method of information collection. Relevant data can be collected
in an unobtrusive manner to supplement and complement other information.
(To enhance the reliability of your observations, record them as soon as
possible.) A walk through a treatment plant could, for example, point to
important information about how the plant is managed and operated.
There are three principal stages in most site visits, each with a distinct
objective: initial discussions, actual information collection (including
observation, file searches, discussions, etc.) and exit discussions.
Setting the Stage - Initial discussions should be with individuals at the
highest appropriate management level. During this stage the primary purpose
is to explain the objective(s) of the visit and to set the tone for the site
visit. If you are acquainted, it may not take much time to establish an
informal and positive tone of collaboration. You should review and clarify
the purpose of the project. Stress the objective of working together to
identify and solve problems. Explain that problem solutions that have
been implemented at this site may be very helpful to others. Review the plan
for the whole visit, the areas to be explored and the people to be contacted
for further discussions. Solicit and answer questions. If possible, provide
the managers with a list of requested factual information and documents.
Explain that you may discover additional information is needed as the visit
proceeds. Explain how the information will be used, and emphasize that you
plan an exit discussion that will provide an opportunity for them to clarify
or correct your findings before the final report is prepared.
The attitude of the individuals with whom you are dealing will not be
difficult to pick up in this initial discussion. You may already know
their reputations and have a pretty good idea of what to expect. If you
expect a negative attitude, you can plan how you intend to handle it. Some
effort on your part to change the situation could result in a change of
attitude. Sometimes just being willing to listen to their problems will
relax the mood and allow them to accept your presence. In any case, the
discussions with the managers will have a lot to do with how much coopera-
tion you receive. This discussion is critical and worth preparing for carefully. Part of
that preparation is to examine your own attitude toward the program and
the people with whom you will be meeting.
Collecting Information - After the opening discussions, the next step is to
carry out detailed discussions, perform file audits, review work that has
been done or do whatever else is necessary to collect data. This work should
conform to the information collection plan. The quality of the information
you collect will determine the accuracy and quality of your findings and
analysis so exercise great care in this work.
Throughout the information collection stage, clear and accurate notes
should be taken to record information for analysis and reporting. When
a group of individuals is involved in a site visit, the team members
should check their impressions as information gathering progresses to see
if everyone is getting equivalent information and to agree on areas needing
special attention. All of the individuals on the team should take notes
that are complete enough so that someone who was not there could use them
as an analytical tool. Remember that all of the information will have
to be synthesized when you get back to the office.
Summarizing and Concluding - The final steps in the site visit are
to summarize and clarify findings and preliminary conclusions in an exit
discussion. This discussion can be critical to the overall effect of
the visit. Present your findings objectively, and give managers the
opportunity to respond to what you have said. This could involve, for
example, using questions such as: "This is what I have in my notes; does
it sound correct?" "Based upon what you have said, it sounds to me like . . .
Is that a fair statement?" Follow with questions that summarize the
factual information; questions can be worded to draw conclusions, e.g.,
"It sounds to me like what is needed is . . . Would you agree?"
Try to maintain a positive spirit of mutual problem solving when you report
your general findings to the managers. Start with areas where your findings
can be affirmative and complimentary. Solicit their feedback on potential
recommendations.
Explain what will happen next. Will they have an opportunity to comment
on the draft report? When? Ask for their assessment of your work. This
allows both groups to be evaluated, and can make it easier for everyone
to accept. Finally, express appreciation for the cooperation received,
and your opinion that the site visit has been a successful joint effort.
Conducting Discussions
There is no guaranteed formula that will result in a productive discussion.
The basic skills involved are listening and asking the right questions.
There are, however, some hints that can be offered to help to make your
discussions more productive. Some of the "Do's and Don'ts" of conducting
discussions are outlined below.
PREPARING FOR THE DISCUSSION
- DO obtain all necessary clearances and permissions, in the proper sequence,
before contacting any respondents. This may require assistance from State
and/or local officials.
- DO familiarize yourself completely with the general objectives, priority
issues, specific concerns and reporting formats of the study. This will
prevent a stiff, formal style and will allow a more casual, conversational
approach.
- DON'T contact the wrong respondent.
- DO schedule the discussion at the mutual convenience of both you and the
respondent.
- DO find a private location with no distractions.
BEGINNING THE DISCUSSION
- DO create an atmosphere that is relaxed, positive and friendly. Put
the respondent at ease by your sincere interest and cooperative attitude.
Be polite and courteous and use a natural, conversational tone of voice.
- DO summarize briefly the purpose of the study. Answer all questions
asked, and do not lie or withhold information from a respondent.
- DO explain how the site and respondent were selected and the methods used
to gather information (e.g., discussions, observations). Emphasize the
importance of obtaining the respondent's personal perspective on the
issues.
- DO estimate how much time is required. Almost all in-person discussions
should involve one hour or less, and telephone discussions should probably
be completed within 30 minutes.
- DON'T coerce any respondent into participating in the study. Although
gentle persuasion is sometimes appropriate, all participation must be
strictly voluntary.
- DON'T deceive or harm a respondent in any way. This includes misrepre-
senting the study or needlessly raising expectations.
ASKING AND LISTENING
- DO pay particular attention to the first few issues. Getting off on the
right foot creates a pattern for the entire discussion.
- DO record all background information required. Use these easy-to-answer
issues to involve the respondent in a give-and-take discussion.
- DO speak slowly. A deliberate, unhurried pace sets a relaxing tone and
allows the respondent to understand the full scope of an issue and
prepare a full reply.
- DON'T skip issues. If at all possible, cover every issue appropriate
for the respondent.
- DO use transition statements to preface certain issues and to overcome
possible resistance. Use phrases like "I know we talked about this
before, but..." to soften the impact of certain issues.
- DON'T make the respondent feel uncomfortable if he/she cannot respond
to all issues. Be willing to occasionally accept a "Don't Know"
response.
- DON'T grill the respondent by demanding more information. Remember
that the respondent is being consulted, not cross-examined.
- DO listen very carefully to the respondent's answers. Your undivided
attention (e.g., eye contact, body language, voice tone, facial expres-
sions) is the greatest compliment and the greatest inducement to further
participation.
- DON'T interrupt the respondent. If you must speak, keep your comments
brief. You should speak no more than 10-20% of the time.
- DO keep the discussion from taking irrelevant directions or dwelling
on one topic too long. Retain enough flexibility to pursue interesting
leads yet still keep the respondent directed to the issues.
- DON'T let a respondent "con" you. Challenge superficial or self-serving
answers. On those occasions when a constructive confrontation is needed,
focus on specific answers and not on personal judgments.
- DON'T show emotional reactions to the answers you receive. Keep your
opinions to yourself. Avoid judgmental statements and/or leading ques-
tions that reveal pre-judgment.
PROBING
- DO watch non-verbal behavior. Try to be aware of clues to important
points. Often it will be clear when tension or apprehension is high.
Most times tension is not helpful to what you are trying to accomplish.
When you detect it, try to back off and change to less threatening areas
until you can see that the respondent is less tense.
- DO probe deeper into partial, ambiguous or contradictory responses.
Such probing may lead the respondent to elaborate or contradict
the response or to illustrate the limits of his/her knowledge.
- DO use silence as a technique for eliciting more information. Do not
be afraid of silence, but strive for "facilitating pauses" not
"embarrassing lulls".
- DON'T be too quick to assume you understand the response. It sometimes
helps to act naive and allow the respondent to explain further. This
technique is especially helpful on complex issues.
- DO repeat or rephrase a question if you need more information or
if you think you were misunderstood. When rephrasing, take great care
to retain the original meaning of the issue.
- DO make neutral comments asking for more information. "Could you tell
me more about that?" and "Could you be more specific?" are two examples.
- DON'T make leading comments such as "Don't you mean..." and "Wouldn't
you say..." Allow the respondent, and only the respondent to provide
information.
- DO ask for examples, specifics, documents, etc. to "flesh out" the
answers. Besides stimulating further information, this material will
assist later in writing the report.
RECORDING ANSWERS
- DO record the answers as the discussion progresses. Memory is not
sufficient for a long conversation.
- DO write legibly, since others will have to read and understand your
information.
- DO take the time needed to record answers properly. If necessary, say
"That's an interesting point. Could you say it again so I can make sure
I've got it down?"
- DON'T allow awkward silences while you write. Write as the respondent
answers, use common abbreviations, cross out instead of erasing and
use a "telegraphic" style of omitting extra words. Learn to ask about
the next issue while you record the last.
- DON'T edit, summarize or paraphrase unnecessarily. Record the respondent's
answers as nearly verbatim as possible. Place quotation marks around verbatim
words, phrases or sentences that may provide useful quotes for the report.
- DO summarize a response to ensure you've heard it correctly or to occasion-
ally indicate to the respondent that you are faithfully recording his/her
answers.
- DO be consistent in your recording, providing the same quality and quantity
of information for each issue and respondent.
ENDING THE DISCUSSION
- DO thank the respondent for the time and effort involved in participating
in the study.
- DON'T become so fatigued that you jeopardize the accuracy of the infor-
mation. Vary tasks to break the routine of numerous discussions.
- DO conduct the last discussion as thoroughly and as conscientiously
as the first. Maintain a consistently high level of professionalism
throughout all the discussions.
- DO review each recorded response immediately after the discussion, while
it is still fresh in your mind. Fill in omissions of which you are certain,
correct errors, rewrite illegible information. Return and complete any
discussions if necessary.
- DO begin each discussion with a fresh, unbiased perspective. Avoid
patterns or bias from previous discussions.
Analyzing Findings
When all information collection has been completed, you need to assemble the
information from all sources in preparation for analysis and interpretation:
background research, site visits, water monitoring, observation and all other
areas specified in your information collection plan. Balance and objectivity
in your analysis methods and procedures are particularly important at
this stage in a monitoring or evaluation project. The need for balanced,
objective analysis and interpretation is underscored because no single piece
of information and no single line of evidence, in and of itself, is suffi-
cient in assessing a program and its results. Every line of evidence, whether
qualitative or quantitative, is imperfect, flawed or biased in some way.
Your information collection plan should ensure that you have many lines of
evidence about program operations or results that can be seen as pieces of
a large puzzle. Analysis should be conducted to allow all the pieces of
information to fit together to form a pattern. The pattern, or its inter-
pretation, will indicate the answer(s) to the key question and related
questions. To the extent that the same effect or conclusion is repeated
in all the lines of evidence, more confidence can be placed in the findings.
To carry out this approach, you should begin by "sorting" the information
obtained by type, i.e., background research, qualitative information
(discussion and observation as well as team members' opinions), quantita-
tive information (file audits, computer systems) etc.
In some cases, it is appropriate to verify the information as an addi-
tional check on the accuracy of findings before proceeding with analysis.
Verification involves considerations such as:
Is the information collected through discussions and other
means consistent with other records against which it can be
cross-checked? Does it contradict previous information or
information which is generally known? Look closely at findings
away from the norm and try to determine whether they represent
an anomaly or a mistake.
Is the information plausible? Does it make sense?
Is the information objective or is it based on the interpreta-
tions of individuals? (This is not to imply that qualitative
information and even opinions are not important; just make
sure facts are separated from opinions when information is
analyzed.)
When verification, if appropriate, is complete, more rigorous analysis of
the qualitative and quantitative lines of evidence should begin. Within
each type, the information should be sorted further; completed discussion
guides, for example, can be sorted by type of respondent, location, etc.
Where closed-ended or easily categorized questions were used, responses to
the same question should be counted across respondent types and locations.
Counting could be accomplished by hand or, in some cases, by computer.
Counting responses might be appropriate, for example, for indicating that
75% of small community treatment plant managers stated that they had
problems finding competent operators. Where questions are more complex and
do not permit simple counting, it may be appropriate to establish some
general categories that provide general descriptions of the more open-ended
responses. Considerable attention must be devoted to ensuring that responses
are not inappropriately categorized.
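Where counting is done by computer, a tally along the following lines may
be helpful. This is a minimal sketch (Python; the respondent types, locations
and answers are hypothetical), keyed to the 75% example above:

    # A minimal sketch of counting closed-ended responses by respondent type
    # and location. All respondent types, locations and answers are hypothetical.
    from collections import Counter

    responses = [
        # (respondent type, location, answer to "Problems finding operators?")
        ("plant manager", "small community", "yes"),
        ("plant manager", "small community", "yes"),
        ("plant manager", "small community", "no"),
        ("plant manager", "large city", "no"),
        ("State agency manager", "statewide", "yes"),
    ]

    tally = Counter(responses)
    for (rtype, location, answer), count in sorted(tally.items()):
        print(f"{rtype} / {location}: {answer} x {count}")

    # Percent of small community plant managers answering "yes":
    group = [r for r in responses
             if r[0] == "plant manager" and r[1] == "small community"]
    yes = sum(1 for r in group if r[2] == "yes")
    print(f"{100 * yes / len(group):.0f}% reported problems finding operators")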
Organization of your findings is sometimes called data reduction; it
involves putting together diverse information components to extract
main issues and themes and to make sure the data are suitable for further
analysis and for presentation in reports. After completing these steps
you can clearly and accurately describe your findings. In addition to
your descriptions, however, it is important for you to analyze and inter-
pret the findings and to reach valid conclusions.
Analyses can take various forms. Most frequently they will explain certain
findings and highlight their implications. A good first step in your
analysis is to look for trends and patterns in data sets and consistencies
and inconsistencies among data sets. Examine information that was verified
and found to be away from the norm; determine why these cases are different.
Statistical packages such as SPSS can be useful in organizing and interpret-
ing information you have collected. As part of your analysis you should
relate information you have collected to your initial key question and to
measures, performance standards and comparison standards where appropriate.
You may want to do other comparisons as well; this could include comparing
your findings with other, similar situations or with the same situation in
a different period of time. Most of the formal analysis to be done should
be described in your Plan of Study.
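For a simple trend analysis of the kind described, a sketch such as the
following may suffice (Python; the quarters and compliance percentages are
invented for illustration):

    # Hedged sketch: examining a simple trend by comparing the same measure
    # across successive periods. The compliance percentages are invented.

    compliance_by_quarter = {
        "1981 Q3": 62.0,
        "1981 Q4": 64.5,
        "1982 Q1": 68.0,
        "1982 Q2": 71.5,
    }

    quarters = list(compliance_by_quarter)
    for prev, curr in zip(quarters, quarters[1:]):
        change = compliance_by_quarter[curr] - compliance_by_quarter[prev]
        print(f"{prev} -> {curr}: {change:+.1f} percentage points")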
Outlined below are some additional suggestions that may help in your
analysis.
Look for linkages or possible cause and effect relationships
between various findings. Try to determine if these relationships
always exist, or if they only occur under certain conditions. What
are the conditions under which this happens? Why?
Draw preliminary conclusions and test them. Do they hold true in
all cases you can consider? Is there any negative evidence?
Consider what other informed individuals think of your findings.
Have they observed the same things and drawn the same conclusions?
In some cases, it may be appropriate to consider implications for improving
program efficiency or effectiveness. Suggestions could be drawn directly
from your analysis or possibly from other sources, e.g., States, other Regions
or other treatment facilities facing similar problems. Recommendations
should only be offered after they have been carefully considered and found
to be feasible.
Preparing Reports
The final step of every monitoring or evaluation project is the clear, concise
and coherent presentation of findings and conclusions. This step can be accom-
plished by preparing a short written report and, in certain instances, providing
an oral briefing to Division and/or State program managers.
Written Reports
The first way in which monitoring and evaluation findings are presented is
via a written report. A report is a tangible product of monitoring and
evaluation activities for distribution, review, action and future reference.
As Division or program priorities shift or as staffs change, the report will
remain as a respected examination of certain issues of interest.
Monitoring and evaluation reports should be as concise as possible, and prepared
as rapidly as possible. They should be informative and convey findings
accurately and completely. The report should include:
1. purpose of the study (how did it originate, why?);
2. scope of issues examined (which issues, why?);
3. methods used to gather information, specifics of the review
(who was contacted, where, when?);
4. background (program origins and history, activities, funding etc.);
5. major findings;
6. comparisons (trends, differences);
7. interpretations (relationships, possible causes and consequences);
8. conclusions; and
9. recommendations (where appropriate).
Monitoring and evaluation reports should be understandable to both experienced
top managers and new staff members. Toward this end, reports should avoid
involved discussions of obscure issues, complex statistics, technical language
(where it is used, it should be carefully defined) or jargon. Commonly under-
stood terms, straight-forward discussions and a broad perspective on the issues
and operations of the program(s) being assessed should be reflected in monitoring
and evaluation reports. Supporting technical material should be presented in
appendices.
All significant findings disclosed as a result of monitoring and evaluation
activities, even those which may be unpopular with certain Regional or State
personnel, should be objectively discussed in the written report. Since
human judgments are an inherent part of monitoring and evaluation processes,
findings will never be totally free of the preferences or philosophies of the
individuals sponsoring or conducting the evaluation. Nonetheless, the clear
presentation of assumptions and reasons for stated conclusions and
recommendations will enable managers to more effectively deal with areas
of subjectivity. Any opinions included should clearly be labeled as such.
They should not, however, be deleted from reports where they contribute
to understanding the situation.
The report must be equally clear and forthright about the limitations of
the study. Even with good planning, things will go wrong during the
course of the study. These things are often attributable to factors such
as funding or field conditions outside your control. If these limitations
have the potential for altering study results in any significant way, they
should be identified in the report and, if possible, their potential
impact described. While you could not possibly list all conditions or
situations to which study results are not applicable, findings should
be sufficiently qualified to help readers avoid drawing improper inferences.
The report should contain enough information about monitoring or evaluation
scope and procedures for the users to understand what was done and why.
While analyzing and reporting results you may often recognize factors that
should have been examined but were not, thus indicating the need for further
study. The report should identify and explain those issues and questions
that need further study and consideration.
Circulate the draft to all team members for their comments. To ensure
that reports are informative, understandable and objective, you may also
choose to circulate draft reports and obtain comments from Regional and
State program managers and other individuals responsible for the program
or activity being monitored or evaluated. Careful review and verifica-
tion of the accuracy of all facts contained in the report provides an
important element of quality control and will enhance the credibility of
the findings, conclusions and recommendations presented in the final
report.
Briefings
The second way in which monitoring and evaluation findings can be presented
is through personal briefings of top Regional and State program managers.
Briefings and informal question and answer sessions may be useful to
clarify certain aspects of the written report or to answer questions raised
in the report but not answered.
Oral briefings should also be informative, understandable and objective.
The use of well-prepared visual aids such as charts, videotapes, photographs
and slides often can improve a presentation and better illustrate monitoring
and evaluation findings.
Follow-Up
The Division Director and/or top State program managers may sometimes ask
the staff that performed the monitoring and evaluation work to undertake
several additional tasks. These can include preparing materials for future
monitoring and evaluation activities, developing policy options for
consideration, obtaining further information on certain findings or
suggesting additions or changes to current legislation or regulations.
Follow up could also include tracking recommendations accepted or made
by managers to ensure that they are implemented. The team leader is also
responsible for distribution of the final report but only after agree-
ment with the Division Director and/or State management regarding appro-
priate distribution.
A final step in following up your monitoring and evaluation work may
often be an assessment of your project. The primary concern should
be: did you supply timely, accurate, useful information to the manager(s)
for whom the study was intended? If the project did fulfill this
fundamental requirement, further evaluation of your work may be appro-
priate to help the Division learn from and improve on monitoring and
evaluation work. Outlined below are some questions you might want to
explore with respect to your study.
- Were the reasons for the study found to be valid? Were the
cause, scope, and intensity of the original problem or issue
redefined as part of the study or as a result of the study?
Why did it need attention at this time? Was full consideration
given to the expressed needs of all potential users of the study?
- Were any special problems, either conceptual or practical,
encountered in using input, process, output, efficiency, or
effectiveness measures? Were valid standards for comparisons
used? Was it necessary to employ surrogate measures and what
was the rationale for their choice? What other quantifiable
or intangible consequences were measured and how?
- Were information collection instruments sufficient under the
circumstances?
- Are findings significant and practically important? Do they
answer the questions posed at the beginning of the study?
- Were uncertainties resulting from problems with data identified
and properly considered? Compared to other studies or evidence,
do data and conclusions agree? If not, why not? If so, how so?
- Were the lessons learned identified? Can suggestions be made
for immediate improvements?
- To what extent can your findings and conclusions be generalized
to apply to other settings within which the program takes place
or may take place? What should and should not be done in the
future in other locations or in similar programs? Are these
conclusions based on demonstrated causal relationships? Are
reasons for program weaknesses indicated?
- Have recommendations been developed for alternatives to be
analyzed and compared?
- What is still left to be studied? What new questions were
raised that require further research? Which areas of research
still need further exploration? What research methods need to
be developed or improved in order to make future appraisals
more authoritative?
EPA Region V, November 10, 1981
WATER DIVISION MONITORING AND EVALUATION STRATEGY
INTRODUCTION
The effectiveness of many environmental programs, all of which consume resources,
is increasingly being challenged by congressional legislators, State and local
government officials, industrial leaders, and concerned taxpayers. A great deal
of attention has been directed toward the water media programs, particularly
those established by the Clean Water Act of 1972 under which large amounts of
resources have been expended. The General Accounting Office, for example, in
its November 14, 1980 study entitled "Costly Wastewater Treatment Plants Fail
to Perform as Expected," concluded that the lack of compliance with effluent
limitations "may not only have an adverse impact on the Nation's ability to meet
its clean water goals, but may also represent the potential waste of tens of
millions of dollars in Federal, State, and local moneys." In another case, a
report prepared by the House Subcommittee on Oversight and Review (December
1980) concluded that Advanced Waste Treatment (AWT) projects are not cost-
effective, and recommended that a moratorium be placed on the funding of such
projects. As economic pressures increase and demands for resources far exceed
the resources available, this trend of questioning can be expected to continue
and intensify. Water media program managers must recognize that they are
accountable not only for how resources are applied, but also for what is
accomplished with these resources.
To respond to the concerns of Congress, OMB, and others, and to improve decision-
making at all levels, water media managers must have timely, accurate information
about the activities and the results of program operations. They need to
systematically track how resources are used and measure the environmental impacts
of the programs. This can best be accomplished through a comprehensive, carefully-
planned program of monitoring and evaluation, focusing on the following areas
of program performance:
1. Appropriateness - Is the program directed toward solving the most important
water quality problems?
2. Adequacy - Is the program making a significant contribution to correcting
the overall problem?
3. Effectiveness - Is the program achieving its environmental goals and
objectives? Can changes be attributed to the program?
4. Efficiency - Are there more efficient means through which the desired
results could be achieved?
5. Unintended Consequences - Are there effects of program operations other
than attainment of pre-established objectives? Are these desirable or
undesirable?
PURPOSE
The purpose of this Strategy is to describe the Region's approach to monitoring
and evaluating water media program operations and accomplishments. The Water
Division's monitoring and evaluation activities will be directed toward achieving
the following objectives:
1. to collect information on the social, economic, and environmental effects
of programs, as well as their inputs and activities, for use in program
management;
2. to measure the extent to which valid program and management objectives
are achieved;
3. to establish a cause and effect relationship between water media activities
and progress toward program and management objectives;
4. to provide continuing feedback on the validity and relevance of program
and management objectives and measures of progress;
5. to assure that resources are distributed in a manner consistent with the
nature, extent, and location of environmental needs;
6. to identify and implement changes where program performance can be
improved;
7. to foster integrated approaches to management by examining how the various
programs interact; and
8. to strengthen the management and performance of both delegated and non-
delegated water programs.
Included as part of the monitoring and evaluation program will be overview
activities associated with program grants, delegation agreements, State
assumption of primacy, and State/EPA Agreements. The need for monitoring and
evaluation has also been recognized by EPA Headquarters, as exemplified in
the recently developed Construction Grants and Water Quality Management
Program Management and Evaluation Systems. EPA Region V will participate in
the data collection required by Headquarters' systems, but will also go beyond
these requirements to cover the drinking water and ground water protection
programs and to evaluate the environmental impacts of all the programs for
which the Division is accountable.
DEFINITIONS
To facilitate a common understanding of the key monitoring and evaluation terms,
the following definitions will be used:
1. goal - a legislatively enacted desired social or environmental condition,
e.g., to improve water quality or protect public health.
2. objective - an operational, specific target toward which program activities
are directed, e.g., to achieve maximum municipal compliance with permit
effluent limitations. Objectives are desired results which, when achieved,
represent progress toward goals. They are what managers use to carry out general
national goals.
3. activity - work done to accomplish an objective. Activities are distinguished
from objectives by the fact that activities consume time and resources, whereas
objectives do not.
4. resource - personnel, funds, materials, and facilities necessary to
support the performance of an activity.
5. performance measure - a gauge used to determine the extent to which a goal
or objective has been attained, an activity performed, or a resource expended.
Comprehensive measures should ideally incorporate both the qualitative and
quantitative aspects of program performance. An example of a performance
measure for the program objective "to achieve maximum municipal compliance
with permit effluent limitations" might be the percent of facilities which
substantially comply with their NPDES permits.
6. performance standard - the level of attainment that will be regarded as
the critical or threshold value above which the objective may be regarded as
having been attained, the activity performed, or the resource expended as planned.
A standard for the above-mentioned measure, indicating outstanding performance,
might be that 80% of the plants built substantially comply with their permits.
(A brief computational sketch of this measure and standard follows these
definitions.)
7. monitoring - the review or tracking of program operations to ensure that
resources are being applied according to legal and administrative requirements
and that program objectives are addressed.
8. evaluation - the appraisal of information to determine whether program
objectives are being achieved. Evaluation attempts to establish if a causal
relationship exists between program operations and results by isolating
program effects from other factors. Evaluation focuses on environmental impacts,
both planned and unanticipated, and also attempts to ascertain if more
effective/efficient alternatives can be used to produce the desired results.
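The sketch below (Python; the facility data are hypothetical) illustrates how
the performance measure from definition 5 could be computed and compared to
the performance standard from definition 6:

    # Minimal sketch applying the permit-compliance performance measure
    # (definition 5) and the 80% performance standard (definition 6).
    # The facility data are hypothetical.

    PERFORMANCE_STANDARD = 80.0  # percent in substantial compliance

    facilities = [
        ("Plant 1", True), ("Plant 2", True), ("Plant 3", False),
        ("Plant 4", True), ("Plant 5", True),
    ]

    measure = (100.0 * sum(1 for _, complies in facilities if complies)
               / len(facilities))
    status = "attained" if measure >= PERFORMANCE_STANDARD else "not attained"
    print(f"{measure:.0f}% in substantial compliance; standard {status}")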
GOALS, OBJECTIVES, AND PERFORMANCE MEASURES
The fundamental bases for planning the implementation and evaluation of water
media programs are the legislative goals and the objectives and performance
measures. The goals represent the desired results of program activities and
expenditures, and objectives and measures are used by decisionmakers to manage
toward these goals. An important test of program effectiveness is the extent
to which valid objectives are achieved.
The objectives and performance measures should reflect potential impacts on
the environment and community in addition to management priorities, such
as dollars obligated in the Construction Grants Program. Although management
objectives and measures may be useful for explaining certain aspects of a
program's success or lack of it, they may say little about the extent to which
citizens and the environment are affected. Program objectives, such as the
desired environmental effects and economic impacts, must be considered. It
is important, therefore, that complete lists of management and program objectives
and measures be prepared for each program.
The objectives and measures should encompass media-wide issues as well as
each of the following programs:
a. Water pollution control programs (including planning, water quality standards,
wasteload allocations, permits, nonpoint source management, and the water
quality inventory)
b. Planning, design, and construction of municipal waste treatment facilities
c. Water enforcement
d. Public water system supervision programs
e. Underground injection control programs
f. General ground water protection programs
g. Wetlands dredge and fill programs
ROLES
Management of the water media programs requires the participation of EPA
Headquarters, Regional Offices, and States. It is important to recognize,
however, that the levels of concern are not the same for each of these
levels of management. The information requirements of each level of
government structure should reflect the management role of that level.
EPA Headquarters' responsibilities associated with management of the water
programs include:
a. setting national program objectives and measures of achievement;
b. developing policies and regulations to achieve legislative goals
and national program objectives;
c. managing the Agency budget;
d. providing technical assistance, where appropriate; and
e. monitoring expenditure of Agency funds and evaluating program
efficiency and effectiveness for feedback to the Administrator, OMB,
and Congress.
The information needs of Headquarters should be based on these responsibilities.
The focus should be on national trends and the overall success (or failures)
of the programs, rather than on specific projects or the day-to-day management
of the programs. Some information on financial obligations will always have
to be maintained in detail, but the main thrust of Headquarters' information
collection efforts should be toward evaluating program outcomes and overall
progress toward national objectives.
The management responsibilities of Regional Offices include:
a. managing non-delegated program operations, and delegating authority for
water programs (that can legally be delegated) to States willing and
able to assume such authority;
b. providing technical assistance to States and municipalities, where
appropriate;
c. managing the Regional operating budget and administering program grants;
d. participating in the development of National objectives and measures,
policies and regulations;
e. monitoring delegated and non-delegated program activities and outputs;
and
f. evaluating the efficiency and effectiveness of delegated and non-delegated
programs, and providing feedback to National, Regional, and State program
managers.
Region V Water Division personnel will participate in the collection of
information for EPA Headquarters, but will, in addition, collect information
required by their roles as Regional program managers. For delegated pro-
grams, the Division's monitoring and evaluation activities will focus on
the end results of State efforts and State progress toward program and
management objectives. Where programs have not been delegated, the Division
has the dual role of collecting information on the overall effectiveness of
the programs, as well as information needed for day-to-day program management.
The primary State responsibilities associated with the management of water
programs include:
a. managing the State's own programs, delegated water programs, and
individual projects to meet program and management objectives;
b. participating in the development of National program policies and
regulations;
c. participating in the development of National and Regional objectives
and performance measures;
d. applying National and Regional policies and objectives while, at the
same time, accommodating State and local environmental needs and
priorities;
e. managing State and Federal resources, including program grants; and
f. monitoring and evaluating State program operations to provide information
to State and Federal program managers.
The State's primary information collection efforts should be directed toward
obtaining the information needed by State decisionmakers for efficient and
effective program management. As full and equal partners with EPA in the
management of water programs, the States also have a vested interest in
the Agency's endeavors to evaluate those programs. The Water Division will
strongly encourage the Region V States to participate in its monitoring and
evaluation activities to the extent State resources, willingness, and
capability allow.
MONITORING AND EVALUATION APPROACHES
The processes of monitoring and evaluating water media programs in Region V
will include:
- monitoring program activities and expenditures;
- evaluating the extent to which objectives are being achieved on a
continuing basis; and
- conducting special one-time evaluation projects.
Monitoring Program Accomplishments
Program monitoring is an on-going effort to track the use of resources, the
performance of activities and the achievement of performance standards. It
also can provide decisionmakers with information on program trends, problems
and effects. Monitoring essentially involves obtaining data on actual program
performance and comparing this to planned program performance. Management
objectives, performance measures, and performance standards will suggest much
of the data needed to track program performance; other items that need to
be monitored may be included in State delegation agreements. It is important
to note, however, that not all delegated activities need to be monitored in
detail once a track record has been established by the State. Delegation agreements
should be changed so that only information that is useful and meaningful for
program management is collected.
A number of mechanisms can be used to obtain data on actual performance including
reports, reviews of files and records, interviews, and data processing systems
(e.g. CGMS). In comparing actual performance to plans, deviations from the
performance standards should be carefully examined. Attempts should be made
to determine if the original standard was inappropriate, or if standards were
not achieved for some valid reason. When program performance seems to be deficient
in a particular area, a more in-depth assessment may be appropriate to seek
out the reasons things happened as they did, and look for ways performance
could be improved.
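A minimal sketch of this comparison of actual to planned performance follows
(Python; the activities, figures and 10% tolerance are hypothetical and would
in practice be set by the program manager):

    # Illustrative sketch: comparing actual program performance to planned
    # performance and flagging deviations for closer examination. The
    # activities, figures and tolerance are all hypothetical.

    planned = {"permits issued": 120, "inspections": 200, "grants awarded": 45}
    actual  = {"permits issued": 118, "inspections": 150, "grants awarded": 47}

    TOLERANCE = 0.10  # flag deviations greater than 10% of plan

    for activity, plan in planned.items():
        deviation = (actual[activity] - plan) / plan
        flag = " <-- examine further" if abs(deviation) > TOLERANCE else ""
        print(f"{activity}: planned {plan}, actual {actual[activity]} "
              f"({deviation:+.0%}){flag}")

Flagged deviations would then be examined to determine whether the original
standard was inappropriate or whether performance was deficient.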
Evaluating Program Effects
In order to determine if objectives are being achieved, and if the program
is responsible for changes that occur, decisionmakers must go beyond monitoring
program activities and outputs to examining the program effects. Although
some information on program effects may be collected in the monitoring program,
additional information collection and analysis will be needed.
In some cases the attainment of an objective can be tracked directly. If, for
example, an objective of the drinking water program is to reduce the number
of waterborne disease outbreaks, the number of outbreaks can be tracked to
determine the extent to which the objective is achieved. In other cases
surrogate measures can be used to indicate the extent to which objectives
are being met. The percent of stream segments in compliance with water
quality standards could, for example, be considered a measure of the extent
to which beneficial uses have been achieved. Where a direct or surrogate
measure is available, the extent to which important objectives are being
achieved can be tracked on a continuing basis. This tracking can provide
information on significant trends, and achievements in a particular area can
be compared to other similar areas, or to a level of achievement that was
forecasted at an earlier time. This can provide important information to
decisionmakers with a rather limited investment of resources, although it is
not the most rigorous form of evaluation in that it does not isolate program
effects from other factors.
Where State and/or Regional decisionmakers have key questions or important
information needs which cannot be satisfied through continuous tracking of
objective attainment, special evaluation projects will be planned and implemented.
Such studies might be appropriate to examine the short and long term effects
of individual programs, the combined effects of interacting programs, or other
special issues. Special studies will use accepted research design principles
to measure program effects, isolate these effects from other impacts on the
environment, and link them to the accomplishment of program and management
objectives. An example of a study of this nature would be an in-depth examination
of municipal compliance with discharge permits: What is a meaningful definition
of compliance? What percent of facilities are in compliance? What are the effects
of compliance/non-compliance on water quality?
There are a number of valid evaluation methodologies which can be used, including
randomized experiments, time series designs, and before and after comparisons.
The suitability of these designs for conducting the evaluations will be
carefully examined on a case-by-case basis. Before attributing apparent
effects to the program being evaluated, evaluators should look for other
possible explanations for these effects. If other possible explanations
exist, the evaluators should estimate the effects of outside influences,
or at least identify them when presenting the evaluation findings. The
evaluations should also, to the extent possible, identify
unintended consequences and, if appropriate, look for more efficient means of
achieving the desired program results.
MONITORING AND EVALUATION WORK PLANS
To assure that decisionmakers have the information needed for effective program
management, a workable plan for information collection and analysis must be
developed and implemented. This overall plan might include existing information
collection mechanisms, but in total it would represent an organized, comprehensive
picture of all the information collection and analysis to be undertaken. The
resulting system should provide valid, timely, and reliable information and utilize
all available sources of data.
The planning of water media monitoring and evaluation activities will be
accomplished each year through the preparation of monitoring and evaluation
work plans. These work plans should specify, at a minimum, each of the following
(an illustrative sketch follows the list):
a. The purpose of the monitoring and evaluation program
b. The information needed by decisionmakers
1. What management information is needed?
2. What water quality information is needed?
3. What is the unit of observation (e.g., river basin, county)
for data collection?
4. When is the information needed?
c. The data collection procedures
1. What data can be collected to satisfy the information needs?
2. What mechanisms will be used to collect the data?
3. What are the sources of the data?
d. The data analysis procedures
1. Who will be responsible for various parts of the evaluation?
2. What evaluation methodology will be used?
e. Procedures for providing and receiving feedback
1. When will evaluation reports be prepared?
2. To whom will the reports be sent?
3. How will the Region receive feedback on the appropriateness of the
evaluations and on the recommendations made by the Region?
f. Chart of tasks and milestones
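For illustration only, the required elements can be sketched as a simple data
structure; the element names mirror items a through f above, and every entry
is a hypothetical placeholder rather than prescribed content:

    # Purely illustrative outline of a monitoring and evaluation work
    # plan as a simple data structure (all values are hypothetical).

    work_plan = {
        "purpose": "Assess efficiency and effectiveness of the program",
        "information_needs": {
            "management_information": "permit issuance workload",
            "water_quality_information": "stream segment compliance",
            "unit_of_observation": "river basin",
            "needed_by": "end of fiscal year",
        },
        "data_collection": {
            "mechanisms": ["reports", "file reviews", "interviews"],
            "sources": ["State agencies", "existing information systems"],
        },
        "data_analysis": {
            "responsible_party": "program evaluation staff",
            "methodology": "before vs. after program comparison",
        },
        "feedback": {
            "reports_prepared": "annually",
            "recipients": ["Division Director", "Branch Chiefs"],
        },
        "tasks_and_milestones": [
            ("prepare work plan", "first quarter"),
            ("collect data", "second and third quarters"),
            ("report findings", "fourth quarter"),
        ],
    }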
The water media managers will prepare monitoring and evaluation plans covering
program monitoring and continuous tracking of progress toward objectives
for their own areas, based on their most important needs; the program evaluation
staff will provide support to the program managers in this process. The
program evaluation staff and the Program Branches will also prepare plans
for the special evaluations requested by top managers.
A key step in preparing the monitoring and evaluation work plans will be the
precise identification of decisionmakers' needs. Using the program and
management objectives as guides, the water media managers must determine
the most important questions they have regarding the efficiency and
effectiveness of the programs they administer. These will include questions
of both a quantitative and a qualitative nature, e.g., "How many permits have
been written?" and "Are these permits based on valid water quality standards?"
The next step is to ascertain what data could be collected to fulfill these
information needs.
Identifying the sources from which data can be obtained is another important
step in the preparation of monitoring and evaluation work plans. Management
information and information on the quality of the water in lakes and streams
will be needed to measure the efficiency and effectiveness of water programs,
and much of this data may not be readily available. The Water Division
can, however, look to States, other Federal agencies, and other Divisions
within the Region for assistance in accessing data. Managers should also
look carefully at existing sources of data such as monitoring activities,
audit reports, and existing information systems before additional data
collection is attempted.
In preparing work plans, all information requirements must be judged for their
reasonableness and appropriateness, and the data must be assessed to assure it
is reasonably available. Data requirements that are not essential should be
eliminated so that the highest priority needs can be satisfied within resource
limitations.
Data collection activities should be organized and prioritized to provide
decisionmakers with continuous information on the adequacy of program operations
and also to support legislative, budget and policy reviews within appropriate
timeframes.
IMPLEMENTING THE MONITORING AND EVALUATION STRATEGY
As a general rule, monitoring and evaluation efforts should attempt to satisfy
Headquarters' information needs, while also meeting the needs of decisionmakers
at the State and Regional levels. They should assess the efficiency and
effectiveness of State and Regional operations, looking especially at the
programs' environmental impacts. The desired result of monitoring and evaluation
efforts should be improved program management at all levels, and ultimately,
more effective water media programs.
More specific guidance on the planning and implementation of monitoring and
evaluation activities to be undertaken by the water media program managers is
provided in the accompanying "Guidance for Developing Monitoring and
Evaluation Work Plans." The key steps involved in the preparation of the
work plans, such as identifying valid objectives, specifying comprehensive
measures, and identifying useful sources of data are discussed in the
Guidance.
Information on Evaluation Methodologies
Outlined below is some information on accepted evaluation methodologies.
As mentioned in Tab D, you should not expect to be able to use these
methodologies, particularly controlled experimentation, in their purest
form. Try instead to understand the intentions of these methodologies (e.g., to
single out the program's effects, to make an objective judgment) and to
carry those intentions over to your attempts to answer key questions.
1. Before vs. after program comparison
This methodology compares program results from the same jurisdiction
measured at two points in time: immediately before the program was
implemented and at some appropriate time after implementation. This
approach is perhaps the simplest and cheapest type of evaluation.
It identifies changes brought about by the program as differences
between the values of the evaluation criteria measured before and an
appropriate period after the program's introduction. Of the first
four designs described, it is probably the most common, but it is the
least capable of separating the effects of program activities from other
influences.
The steps in this evaluation design are:
- Identify relevant objectives and corresponding evaluation criteria.
- Obtain the values of these criteria as they existed before the
program's introduction and for the period since introduction.
- Compare the "before" and "after" program data to estimate changes
brought about by the program.
- Look for other plausible explanations for the changes. If there
are any, estimate their effect on the data or at least identify
them when presenting the findings. (This last step is often
neglected; it is, however, vital to making this design credible.)
This design is often the only type that is practical when available
time and personnel are limited. It is most appropriate when the
program is short in duration and narrow in scope. Such circumstances
make it less likely that nonprogram factors that might also
affect the evaluation criteria will occur during the period encompassed
by the evaluation. It is also appropriate where the conditions
measured have been fairly stable over time (and are not, for example,
likely to be distorted by seasonal changes), and where there is
reason to believe such stability will continue during the evaluation
period. Otherwise, "before" vs. "after" program comparisons using
such data may reflect short-term fluctuations rather than program-
related changes.
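A minimal sketch of this comparison, assuming hypothetical criterion values:

    # Illustrative before-vs.-after program comparison. The criterion
    # values are hypothetical readings taken before and an appropriate
    # period after the program's introduction.

    from statistics import mean

    before = [58.0, 61.5, 59.2, 60.1]
    after = [66.3, 64.8, 67.1, 65.5]

    change = mean(after) - mean(before)
    print(f"Mean before: {mean(before):.1f}")
    print(f"Mean after:  {mean(after):.1f}")
    print(f"Apparent change brought about by the program: {change:+.1f}")
    # Caution: the difference may reflect seasonal or other nonprogram
    # influences; plausible alternative explanations must be examined
    # before the change is attributed to the program.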
2. Time trend projections of pre-program data vs. actual post-program
data
This design compares actual post-program data on the evaluation criteria
with projections for the criteria based on data from previous years.
Changes caused by the program are identified as the differences between
actual present-day conditions and the conditions projected to exist had
the program not been instituted.
The steps in this evaluation methodology are:
- Identify relevant objectives and corresponding evaluation criteria.
- Obtain data on each of the criteria at several intervals prior to
the program and after implementation.
- Using statistical methods and the data from pre-program years, make
projections of the values of the criteria to the end of the time
period covered by the evaluation (e.g., water quality modeling).
- Compare the actual data with the projected estimates to determine
the amount of change resulting from the program.
- Look for plausible explanations for changes in criteria other
than those resulting from the program. If there are any,
estimate their effect on the data or at least identify them
when presenting the findings.
This design is useful where there appears to be an underlying trend
(upward or downward) over a period of time that would seem likely to
have continued if the new program had not been introduced. However,
if data for prior years are too unstable, statistical projections may
not be meaningful. Or, if there is strong judgmental evidence that
underlying conditions have changed in very recent years, data on prior
years should probably not be utilized.
For example, statistics on compliance with water quality standards may
rise and fall in individual years, but when many years are considered
together a trend may be apparent around which the annual statistics vary.
A simple comparison of data from one "before" year to post-program data
may be influenced by extremes and thereby be misleading. Projecting this
trend to the time of the evaluation and comparing it with post-program
results will indicate whether or not the trend has been altered
by the new program.
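A minimal sketch of the projection and comparison, assuming hypothetical
annual statistics and a simple least-squares trend line (an actual evaluation
might instead use water quality modeling, as noted above):

    # Illustrative trend projection (hypothetical annual statistics).
    # A least-squares line is fitted to pre-program years and projected
    # into the post-program period for comparison with actual values.
    # Requires Python 3.10 or later for statistics.linear_regression.

    from statistics import linear_regression

    pre_years = [1975, 1976, 1977, 1978, 1979]
    pre_values = [52.0, 54.5, 53.8, 56.2, 57.0]   # e.g., percent compliance

    slope, intercept = linear_regression(pre_years, pre_values)

    post_actual = {1980: 63.5, 1981: 65.2}        # observed after the program

    for year, actual in post_actual.items():
        projected = slope * year + intercept
        print(f"{year}: projected {projected:.1f}, actual {actual:.1f}, "
              f"difference {actual - projected:+.1f}")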
3. Comparisons with jurisdictions not involved in the program
This methodology compares data from the jurisdiction where the program
is operating with data from other jurisdictions where the program is
not operating. The steps in this methodology are:
- Identify relevant objectives and corresponding evaluation criteria.
- Identify other similar jurisdictions where the program is not
operating.
- Obtain data on each of the criteria in each of the jurisdictions
from before implementation of the program to the time of evaluation.
- Compare the changes in the values of the criteria for jurisdictions
where the program does not exist with those from the jurisdiction
where the program is operating. Compare both the rate and the
amount of change.
- Look for plausible explanations for changes in criteria other than
the program. If there are any, estimate their effect on the data or
at least identify them when presenting the findings.
This design provides some protection against attributing change to a specific
program when external factors that may affect environmental impacts
are actually responsible for bringing about the change. Without random
assignment of groups (as called for in Evaluation Design Number 4), the areas
being compared may in reality be significantly different. Nevertheless, even
if comparison groups differ in important characteristics, the information
on relative apparent program effects for the different groups is still
likely to be useful to public officials.
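A minimal sketch of the comparison, assuming hypothetical jurisdictions and
criterion values:

    # Illustrative comparison with jurisdictions not involved in the
    # program. Names and values are hypothetical; each pair is
    # (criterion value before the program period, value after).

    jurisdictions = {
        "Program jurisdiction": (50.0, 62.0),
        "Comparison area A": (51.0, 54.0),
        "Comparison area B": (49.5, 52.5),
    }

    for name, (before, after) in jurisdictions.items():
        amount = after - before
        rate = amount / before * 100
        print(f"{name}: change {amount:+.1f} in amount, {rate:+.1f}% in rate")
    # The excess of the program jurisdiction's change over the changes
    # in the comparison areas is the apparent program effect, subject
    # to the caution that the areas may differ in other respects.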
4. Controlled experimentation
This methodology compares pre-selected, similar groups, some of which
are served and some of which are not (or are served in different ways).
The critical aspect is that the comparison groups are pre-assigned
before program implementation so that the groups are as similar as
possible except for the program.
This evaluation design is by far the most "powerful." It assesses the
effectiveness of a program by systematically comparing specific changes
in two or more carefully separated groups: ones in which the program
is operating and others in which it is not. This design can also be
used to try variations of a program to determine which is most
effective, thereby resulting in more groups to be compared.
The basic evaluation design consists of the following steps:
- Identify relevant objectives and corresponding evaluation criteria.
- Select the groups to be compared, i.e., the "control" and the
"experimental" groups. It is vital to select the groups randomly
from a universe of candidates that have similar characteristics with
regard to their likelihood of being effectively "treated" by the program.
- Measure the pre-program performance of each group using the
evaluation criteria.
- Apply the program to the experimental but not the control group.
- Monitor the operation of the experiment to see if any actions occur
that might distort the findings (such as the behavior of program
operators toward one of the groups). If appropriate and possible,
such behavior should be adjusted, or if not, at least identified
and its impact on the eventual findings explicitly estimated.
- Measure the post-program performance of each group against the
evaluation criteria.
- Compare the pre- vs. post-changes in the evaluation criteria for
the groups.
- Look for plausible explanations for differences between the two
groups due to factors other than the program. The randomization
called for in the second step above protects against this to a
great extent. Nevertheless, there remains the small possibility
that some event, perhaps occurring during the experiment, by
chance affects one group and not the other in some special way.
Obviously, while this is the most rigorous approach to evaluation, it is
also the most time-consuming and resource-intensive.
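A minimal sketch of the random assignment and comparison steps, assuming
hypothetical groups and criterion values:

    # Illustrative random assignment followed by a pre- vs. post-program
    # comparison. The group names and criterion values are hypothetical.

    import random

    candidates = ["Basin 1", "Basin 2", "Basin 3",
                  "Basin 4", "Basin 5", "Basin 6"]
    random.shuffle(candidates)                    # random assignment
    experimental = candidates[:3]
    control = candidates[3:]
    print("Experimental group:", experimental)
    print("Control group:     ", control)

    # After the experiment, compare pre- vs. post-changes for the groups
    # (mean criterion values below are assumed for the sketch).
    pre = {"experimental": 55.0, "control": 54.0}
    post = {"experimental": 64.0, "control": 56.0}
    effect = ((post["experimental"] - pre["experimental"])
              - (post["control"] - pre["control"]))
    print(f"Estimated program effect: {effect:+.1f}")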
5. Comparisons of planned vs. actual performance
This methodology compares actual post-program data to targets set
earlier, either before program implementation or at any time since
implementation.
The steps in this evaluation approach are:
- Identify relevant objectives and corresponding evaluation criteria.
- Set specific goals or targets for these criteria for specific time
periods.
- Obtain data after the time period on actual performance.
- Compare the actual performance to the targets.
- Look for plausible explanations for changes in the criteria other
than the program. If there are any, estimate their effect on the
criteria or at least identify them when presenting the findings.
This methodology requires that appropriate, realistic targets (performance
standards) be established for specific achievements over specific time
periods. Actual performance is then compared against the targets that
were set. This design, like Design Number 1, provides no direct means
of indicating to what extent the change in the values of the effectiveness
criteria can be attributed solely to the new program. As with the
other evaluation designs, evaluators should explicitly look for plausible
explanations other than the program as to why the targets
have been met, exceeded, or not met.
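A minimal sketch of the comparison, assuming hypothetical targets and
performance figures:

    # Illustrative planned-vs.-actual comparison using hypothetical
    # targets (performance standards) and actual performance figures.

    targets = {"Permits issued": 120, "Inspections completed": 300}
    actuals = {"Permits issued": 104, "Inspections completed": 315}

    for measure, target in targets.items():
        actual = actuals[measure]
        pct = actual / target * 100
        status = "target met" if actual >= target else "target not met"
        print(f"{measure}: target {target}, actual {actual} "
              f"({pct:.0f}% of target, {status})")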
References
Much of the material in this guidebook has been drawn from the following
sources, which can be consulted for additional information.
Bland, James K. "The Relationship Between Biological Integrity and
Water Quality Standards."

Hendricks, Michael. Service Delivery Assessment. U.S. Department of
Health and Human Services, Office of the Inspector General.

U.S. Environmental Protection Agency, Office of Drinking Water.
Overview of the Evaluation Program.

Hatry, Harry, Louis Blair, Donald Fisk, and Wayne Kimmel. Program
Analysis for State and Local Governments. Washington, D.C.: The Urban
Institute, 1976.

Hatry, Harry, Richard Winnie, and Donald Fisk. Practical Program
Evaluation for State and Local Government Officials. Washington, D.C.:
The Urban Institute, 1973.

Jick, Todd D. "Mixing Qualitative and Quantitative Methods:
Triangulation in Action." Administrative Science Quarterly,
December 1979, pp. 602-611.

Kennedy, Mary M. "Generalizing from Single Case Studies."
Evaluation Quarterly, November 1979, pp. 661-678.

Soderstrom, John. Evaluation: A Primer for Region V Personnel.
November 1980.

Miles, Matthew B. "Qualitative Data as an Attractive Nuisance:
The Problem of Analysis." Administrative Science Quarterly,
December 1979, pp. 590-602.

Weiss, Carol. Evaluation Research. Englewood Cliffs, N.J.:
Prentice-Hall, 1972.