EPA/233/B-98/003
OP-235-B-98-003
United States Environmental Protection Agency
Office of Policy
Washington, DC 20460

Hearing the Voice of the Customer

Customer Feedback and
Customer Satisfaction
Measurement Guidelines

November 1998
-------
Hearing the Voice of the Customer

Customer Service Feedback Guidelines

November 1998
"My vision is that EPA will be a model for all regulatory
agencies by fully integrating customer satisfaction measures
into our strategic planning, budgeting and decision making
while recognizing the diversity of our customers and the need
for balancing sometime competing and conflicting interests.
Above all, we will strengthen our ability to listen to the voice of
our customers so that we can identify their need and act upon
them."
Carol M. Browner
March 23, 1994
-------
Acknowledgments
These Guidelines are the product of many people's efforts. After assessing the state of
customer satisfaction survey work across the Agency and coordinating a three-year plan for
surveys across the country, customer service staff determined that more people needed to
understand how to obtain actionable feedback from customers. As a first step, in December 1997,
the Customer Service Steering Committee (CSSC) formed the Feedback & Measurement Work
Group to help plan the best way to accomplish the goal. In February 1998, the Customer Service
Program (CSP) sponsored a workshop attended by nearly twenty people from program offices
and regions. At that workshop, the CSP contractor (Macro International) facilitated a process
designed to explain and exemplify what customer satisfaction measurement entailed, and to
produce an outline of the Guidelines' contents.
Members of the Work Group reviewed two drafts. Representatives of several federal and
state agencies and an internal expert panel commented on the third draft. The work group
accepted and approved the next draft, prepared by customer service staff, for publication by the
Customer Service Steering Committee in October 1998. Everyone who actively participated
made this document possible.
Work Group Members
Michael Binder, Office of the Inspector General
Charlotte Cottrill, Office of Research & Development
Judi Doucette, Office of the Chief Financial Officer
William Garetz, Office of Policy
Elizabeth Harris, Office of Solid Waste & Emergency Response
Beth Means, Office of Administration and Resource Management
Wayne Naylor, Region 3
Arnold Ondarza, Region 6
Nan Parry, Office of Research & Development
Caren Rothstein, Office of Prevention, Pesticides and Toxic Substances
Stan Siegel, Region 2
Lawrence Teller, Region 3
Betty Winter, Region 4
External Reviewers
Terry Bergerson, National Park Service
Dan Bius, North Carolina Department of Environment and Natural Resources
Gary Machlis, National Park Service
Nancy Manley, Vermont Department of Environmental Compliance
Lance Miller, New Jersey Department of Environmental Protection
Tom Roberts, Social Security Administration
Internal Expert Panel
Barry Nussbaum, Chairman, Office of Policy
Charlotte Cottrill, Office of Research & Development
Steve Burkett, Region 8
Kevin Rosseel, Office of Air & Radiation
-------
Table of Contents
Item Page
ACKNOWLEDGMENTS ii
PREFACE v
INTRODUCTION 1
Why is customer feedback necessary? 1
Who can use the Guidelines? 1
What's in these Guidelines for managers? 1
Why have Guidelines for customer feedback? 2
ORD and its customers 2
How are the Guidelines organized? 3
A note about conducting customer feedback 3
PLAN THE CUSTOMER FEEDBACK PROJECT 4
Who should conduct a customer feedback initiative? 4
How ready is your organization for customer feedback? 5
What kinds of customer feedback are already occurring? 5
What are the core questions to ask for customer feedback? 6
How often should we ask customers for feedback? 6
How long should feedback take? 7
Who are our customers and what services and products do we supply them? 7
Customer Service in permitting 9
Why establish quality control procedures? 10
OARM 10
OIG - Serving many customers 11
Develop a written plan for the customer survey (checklists) 12
The plan (checklist) 12
Establish the purposes of customer feedback (checklist) 13
CONSTRUCT DATA COLLECTION PROCEDURES 14
What is the "best" approach for assessing customer satisfaction? 14
Continuous assessment 14
Decide on data collection method 14
Comparison of feedback methods (table) 16
The sample 18
Determining the sample size 18
Develop the questions 19
On developing questions 21
Construct the questionnaire 22
Mail surveys 23
Focus groups 23
Telephone surveys 25
Other methods of obtaining feedback 26
Pretest 26
-------
Table of Contents (continued)
Item Page
Contingency for non-response 27
OMB clearance 28
Additional resources 28
Effective questions (checklist) 29
Choose an approach (checklist) 29
CONDUCT DATA COLLECTION 30
Focus Groups 30
Mail Surveys 31
Telephone Data Collection 32
Electronic feedback 34
ANALYZE THE DATA 35
Data Clean-up 35
Types of Data and Analyses 36
Analysis: An Example 36
Driver Analysis 41
Presenting the Data 42
Formulate Recommendations Based on the Data 42
Presenting Recommendations -Using graphics 43
On developing recommendations 43
ACT ON THE RESULTS 44
Is this the beginning or the end of the process? 44
How do you decide what to do with the feedback you receive? 44
How good is good enough? 44
How do we know what to work on first? 45
SUGGESTED READING 48
FACT SHEETS
I Who Are EPA's Customers
II Internal Control Procedures
III Sampling - The Basics
IV Sampling - More on Sample Size
V Sampling - More Advanced Topics
VI OMB Clearance
VII Unit of Analysis
VIII Examples of Graphs
IX Survey Software and Corporate Pulse Information
-------
Preface
Serving the public, stakeholders and partners is nothing new to the Environmental Protection
Agency. Communicating with them and listening to their ideas is part of the way that everyone at
the agency does the job of protecting public health and the natural environment. What is new to
many is the term customer. But when you think about it, we all have customers, including one
another.
Our customer base is very large and varied, so it is necessary for us to use many ways and every
opportunity we recognize to hear the voices of our customers. We have forums, workshops,
conferences, training sessions, and meetings of all sizes (from one-on-one interviews with CEOs,
mayors, tribal leaders, governors, etc. to Federal Advisory Committee Act group sessions and
community wide exchanges around a Superfund problem or an environmental protection
opportunity). We use informal sessions, focus groups, surveys, comment cards, Internet feedback
screens and more to hear what customers think of our services. Speakers answer questions and
listen to comments following speeches, and officials use interactive media opportunities such as
radio and television talk shows to reach the public. Hot lines, dockets, visitor centers and libraries
seek customer comments. We work with partners in pollution prevention and our co-regulators
in state, tribal, local and other federal agencies collaboratively to plan activities. We actively seek
input to our rules, regulations and decisions. Top officials meet regularly with industry sector,
environmental and other constituency groups.
What is different since President Clinton signed Executive Order 12862 in September 1993 is that
we hold ourselves accountable for providing service that rivals the best in the private sector. We
have a set of standards against which we measure ourselves, the Six Principles of Customer
Service:
1. Be helpful! Listen to your customers.
2. Respond to all phone calls by the end of the next business day.
3. Respond to all correspondence within 10 business days.
4. Make clear, timely, accurate information accessible.
5. Work collaboratively with partners to improve all products and services.
6. Involve customers and use their ideas and input.
They apply to the work of anyone at the EPA, whether a manager making multibillion dollar
decisions, or a brand-new summer hire. We also have sets of process standards for permitting,
pesticides regulation, partnership programs, public access, state, tribal and local grants,
enforcement inspections and compliance assistance, research grants, and rulemaking. Cross-
agency groups under the national Customer Service Program (CSP) developed all of the
standards.
"Hearing the Voice of the Customer" is designed to help individuals and organizations decide.
whether and how to gather customer feedback. All of us have a need to hear that we are doing
the right things right, and we also need constructive criticism so we can do an even better job.
We need to learn from customers what their expectations are and how well we are satisfying
-------
them (meeting or exceeding those expectations). Formal and informal feedback from our
customers can also provide some sound ideas for transforming an organization.
Gathering feedback may take only a minute or two of listening to an unsolicited comment, or it
may require an extensive nationwide survey. This document is not a cookbook on how to do
feedback and customer satisfaction measurement, but it does provide an array of techniques to
help you to effectively seek and then use what you hear from your customers. The process and
tools presented in this document will help to bring the voice of the customer further into EPA's
work, enabling us to improve processes, products and services in ways that customers will
recognize and value.
To simplify the processes required of federal agencies for doing voluntary customer feedback
activities and formal surveys, the CSP obtained a generic Information Collection Request (ICR)
from the Office of Management and Budget. A fact sheet to help you navigate the process is part
of the Guidelines. The CSP also has software to assist those who wish to construct
questionnaires and analyze results from respondents.
To further enhance the ability of everyone at EPA to provide outstanding customer service, the
Customer Service Program, with help from many regional and headquarters staff members and a
contractor, has developed an introductory customer service workshop called "Forging the Links."
Its purpose is to clarify the links between providing great service and achieving our mission, and
the links between those of us who are direct service providers and our external customers. The
workshop also underscores the important links between people within the agency as customers
and suppliers for each other. A series of highly interactive follow-up skills courses is also
available through a network of EPA trainers, and the CSP also has video programs to lend.
In trying to find better ways to provide world class customer service, the CSP has benchmarked
with other federal agencies and with several corporations. Findings have been helpful in
developing and implementing the overall CSP. Benchmarking against the best in the federal
government and listening to the voice of the customer were a large part of the EPA's first
National Customer Service Conference, hosted by Region 6 from April 14-16, 1998. A
proceedings document will soon be available.
This Guidelines document is an important piece of the new picture that is being drawn each day as
EPA gets prepared for the next century. Using the suggestions and steps outlined in this
document will help you to hear the voice of the customer and work to implement the kinds of
changes in products, processes and services that will enable EPA to be an agency that provides
world class customer service.
Don't assume you know.... continuously ask what your customers want.
Skip this step and you'll get it wrong.
Al Gore
-------
INTRODUCTION
What's in these Guidelines for Managers?
Customer feedback is not fluff, and customer
satisfaction measurement is not mystifying. These
Guidelines were developed so more people across
the agency will understand the value of customer
feedback. We hope this document will help you
feel comfortable with the concept, the processes,
and with your capacity to perform or manage
customer feedback activities and measure customer
satisfaction.
This document provides information about
collecting and receiving feedback from EPA's
customers. Using these Guidelines will improve the
agency's ability to effectively collect, receive, and
use feedback from EPA's customers, both within
and outside the agency.
Why is customer feedback necessary?
Learning how we can better .serve our customers
can help all of us to provide better environmental
and public health protection. Feedback—which
refers to input on needs, expectations, and
experiences—from EPA's customers enables us to
measure whether the Agency is increasing its ability
to satisfy customers. The bottom line is that
finding out what customers think about what we do
and how we do it will help us to make
improvements in our products and services, the
kinds of changes that customers will notice and
value.
Finally, all federal agencies are required by the
Government Performance and Results Act (GPRA)
to measure customer satisfaction and make changes
to improve service and satisfaction.
Who can use the Guidelines?
The Guidelines focus on obtaining feedback from
EPA customers on their needs and experiences with
How many times have you discovered
you were missing a critical piece of
information so you could not:
• Figure out why service delivery was
inefficient?
• Understand why customers seemed
dissatisfied?
• Answer an inquiry about your program's
accomplishments or weak spots?
• Make a strong case for additional budget
dollars?
• Be sure you made the right decision
about which action would make the most
significant program improvement?
You may have missed an opportunity
because you lacked timely and reliable
information. Armed with the right
information, you could have made more
informed decisions, eliminated a bottleneck,
understood your customers' problems,
documented your resources, protected the
program from profiteers, or known which
changes would produce the biggest payoff.
As a program manager, you are probably
inundated with data and statistics, and may
not even recognize a customer focused
information deficit. Yet you may find you
don't have data-based answers to policy and
operational questions when you need them,
despite huge investments in data collection
and reporting.
These Guidelines are about getting the
customer generated information you need
quickly and at a relatively modest cost. This
"information," is something more than the
data typically produced by management
information systems. This is a collection of
facts and logical conclusions which answer
the types of questions like those above. By
learning and using a variety of strategies for
obtaining customer satisfaction information,
you can better address specific problems,
gain insight into what's happening in your
program, and determine what directions you
should be taking.
-------
EPA products, processes, and services. The
Guidelines are for:
• Policy makers as they determine
improvements for EPA's products and
services;
• Program managers who seek
information from EPA's customers;
• States, tribes, local entities, and other EPA
partners interested in assessing customers'
satisfaction with services and products they
provide; and
• Project officers who monitor government
contractors conducting customer feedback
projects for EPA.
Why have Guidelines for customer
feedback?
These Guidelines are designed to help you
perform your work. By following them, you
will have a clear road map. The Guidelines
will enable you to conduct customer feedback
with less labor intensity, trouble and personal
concern. By having a set of Guidelines that
everyone can follow, EPA will have a
consistent approach to customer feedback.
The Guidelines can benefit staff responsible for
planning and conducting customer feedback
activities by helping them lead or do the work,
understand the importance of obtaining
management and employee buy-in for
conducting customer feedback inquiries, act on
lessons learned, and respond to staff concerns
such as fears about extra work, change, or
how to begin customer feedback.
The Guidelines can benefit managers because
they outline what is necessary to obtain and
use feedback that can assist them to improve
decisions about changing products, processes
and services. The Guidelines can benefit the
agency because following a uniform set of
ORD & Its Customers
The Office of Research & Development
(ORD) will use Hearing the Voice of the
Customer to provide models that can be
adopted and used immediately, and to provide
a structure for systematically capturing and
reporting on customer feedback on a more
regular basis than some of the ad hoc
approaches we now employ.
ORD has two broad categories of customers:
1) EPA program offices and regions, and 2)
external customers for research results and/or
information concerning environmental science
and engineering. The second group is diverse
and includes other Federal agencies, State,
Tribal, local and international governments,
university researchers, private industry, and
the public.
From the EPA community, ORD receives
feedback from Research Coordination Teams
(work groups of ORD, Program Office and
Regional representatives who guide ORD's
planning process); from the Research
Coordination Council (a senior level steering
committee of program office, Regional, and
ORD executives); during on-site program
reviews at our Laboratories; from ORD
Regional Scientists, who are detailed from 1-2
years to EPA Regions; and from staff-to-staff
contacts between researchers and EPA
customers.
Two Federal Advisory Committees - the
Science Advisory Board and the Board of
Scientific Counselors - provide advice and
guidance on research and organizational
issues to ORD. The members of both
committees come from outside EPA, so that
they provide independent feedback to ORD
on its products, plans, and priorities.
The breadth of ORD's customers - from the
EPA staff person drafting a new regulation to
the scientist in Asia interested in the impact of
airborne particulate matter on the respiratory
system - provides a challenge in designing
feedback systems.
-------
principles and procedures will help EPA institute a consistent approach to customer feedback,
build a repository of information about customers to track developments and improvements in
customer service, and establish information about who has been contacted, which will help in
subsequent customer feedback activities.
How are the Guidelines organized?
The Guidelines begin with an introduction and progress through a five-step model that can be
applied successfully to obtain customer feedback. The Guidelines contain discussion and
checklists at the back of each section to help you organize and facilitate your customer feedback
projects.

The five steps are:

• PLAN the customer feedback project
• CONSTRUCT the data collection procedures
• CONDUCT data collection
• ANALYZE the data
• ACT on the results
At the end of the Guidelines are several Fact Sheets that provide additional help. They are
referenced throughout the text.
A note about conducting customer feedback activities
The purpose of these Guidelines is to help EPA staff conduct customer feedback activities in a
systematic, scientific manner. The principles and practices in this document are sound, but there
are many other informal ways to listen to your customers. Some of them may provide you with
more valuable information than you will ever get from a "statistically solid" formal survey.
The most obvious way to get feedback is to talk to your customers. It may be a casual
conversation while you are providing a service or product, attending a meeting, or sharing
information. You might find valuable feedback in a complaint that is jam-packed with good
information about what needs fixing. You may hear a small or huge suggestion about how to
make things easier for the customer or the agency. Much of this kind of feedback is unsolicited,
so you have to be sensitive to it. You need to know when to stop and listen; recognize the
chances to learn from your customers, use them, and remember to take notes! You have
opportunities every day - every time a customer contacts you - to get feedback. This "gut level"
customer reaction can be the strongest indicator of satisfaction. When you pay attention to their
comments, customers will notice that you are listening to them and that you care about what they
say. That builds trust between you, and trust in EPA.
-------
PLAN THE CUSTOMER FEEDBACK PROJECT
Who should conduct a customer feedback initiative?
Customer feedback is valuable for everyone, and everyone can easily ask his or her customer for
direct feedback about their needs and how things are going. In fact, EPA staff and managers
have many opportunities to interact with customers. Among the most common are face-to-face
meetings, telephone calls, public meetings and other events, and written correspondence. You can
find perspectives containing feedback in newsletters and other informational materials, videos,
Web site messages or electronic mail, newspapers and interactive radio and television talk shows
and news. Many customer interactions provide an immediate opportunity to hear from customers
how well EPA is satisfying their needs.
For EPA offices that wish to track and analyze customer feedback over time, organizing your
efforts is important. A critical question to ask yourself is whether you, as the initiator of a
customer measurement project, have the ability to act on the data yourself, or whether others
(potentially states, tribes or local agencies with delegated programs, other external partners, or
other offices and regions) will be critical to the process. For an EPA unit or branch to seek
feedback, the decision to proceed may be made within the group. For larger, more complex,
more resource intensive customer studies that have broader impact, more coordination may be
needed at the Division, Office, or even Regional or Assistant Administrator level.
If other EPA staff or managers will be involved or affected, you should include them in the
planning stages as early as possible. It is important that all interested or potentially affected
individuals support the decision to obtain feedback, and are willing and able to act on the
feedback they receive. They may have their own very constructive ideas about what the research
objectives and methodology might need to be. So you can be responsive and act on the feedback
you get, work things out early. Front-end coordination can avoid potential roadblocks such as
fear of extra work that may develop from customer suggestions, fear of possible negative
management reactions or reprisals based on customer criticisms, politically incorrect results, or
unrealistic customer expectations about EPA capabilities.
How ready is your organization for customer feedback?
As you begin to plan customer feedback activities, consider how ready your organization is for
customer feedback by asking these questions:
-------
• Do staff members understand why the organization needs customer feedback?
• Do staff members and managers sincerely intend to pay attention to customer feedback and
act on it?
• Are key managers committed to taking action based on customers' input?
• Have staff members directly participated in defining the need for customer feedback and in
identifying the approaches to use for obtaining customer feedback?
• Have managers, employees, and other users of customer feedback information expressed their
needs, issues, concerns, and objectives?
• Is there managerial and employee buy-in and ownership?
• Are there any possible barriers—such as concerns about change, extra work, adverse
findings—to using customer feedback successfully?
• If there are barriers, are there identified methods to overcome them?
If you answered these questions "yes," your organization is clearly ready for customer feedback.
If you answered "no" to some questions, you might consider what you can do to prepare your
organization to obtain and use customer feedback. Simply put, the more ready your organization
is for customer feedback, the more meaningful and successful the activity will be, which in turn
means that EPA will be more responsive to customers' needs and preferences.
If your organization is not fully ready for customer feedback, you should not necessarily halt your
customer feedback activities. Instead, just understand that you will probably face some challenges
in getting the work done, getting managers to pay attention to findings, and assuring customers
that your organization is committed to implementing the changes they may want. You may need
to start slowly, collecting and documenting unsolicited feedback and informal opportunities to
gather customer input. You can make some positive changes based on that feedback, and build
a case for performing broader and more formal information collections to verify and expand the
anecdotal information you gathered.
What kinds of customer feedback are already occurring?
Before proceeding with a new customer feedback activity, check with EPA's Customer Service Program
in the Office of Policy to see what recent work has been conducted. You should also check with
delegated program representatives (for certain customer feedback functions). This will enable
you to see if anyone else has collected the same or similar information that you can use, possibly
avoiding unnecessary duplication, saving time and money, and making best use of previously
gathered data.
-------
What are the core questions to ask for customer feedback?
It is important to have some core questions that are always used by those doing customer
feedback. Core questions represent broad levels of understanding and impressions about
expectations, EPA responsiveness, and customer satisfaction. By using core questions, EPA can
compare and aggregate customer feedback information, both across the agency and over time.
The following are core questions that customer feedback should incorporate:
• Overall, how satisfied are you with the services and products you have received from EPA?
  1   2   3   4   5   6    (1 = not at all, 6 = very)

• How courteously did EPA staff treat you?
  1   2   3   4   5   6    (1 = not at all, 6 = very)

• How satisfied are you with the communications you have received from EPA?
  1   2   3   4   5   6    (1 = not at all, 6 = very)

• How fully did EPA respond to your needs?
  1   2   3   4   5   6    (1 = not at all, 6 = very)
How often should we ask customers for feedback?
Many organizations find it useful to contact their customers once a year to get an overall measure
of satisfaction. Other types of feedback, such as follow-up telephone calls or comment cards,
provide immediate information at the point of contact with customers. When organizations need
targeted customer information, most find it useful to conduct multiple studies each year.
As a rule, EPA does not want to overburden our customers, so take care to:
• avoid feedback activities that duplicate work already conducted;
• organize customer feedback projects to avoid contacting the same customer repeatedly; and
• seek consent from customers to participate in feedback projects, especially those that are
lengthy or where customers have been contacted previously.
So, there is no standard answer to the question about how often to ask for feedback. The
-------
frequency of customer feedback will depend on several factors.
• Were the findings of previous customer feedback studies positive or negative? If EPA took
action in response to concerns customers raised, has there been enough time to see whether
those actions have been effective in improving customer satisfaction?
• Considering the issue(s) involved in the feedback activity, how often does it make sense to ask
customers' opinions?
• Can we distinguish annual versus ongoing information needs and obtain feedback accordingly?
• Is there a way to match feedback with EPA-to-customer transactions? Can we ask customers
at the end of a call if the information provided was useful? Is there any follow-up with them
later to see if they used the product provided?
• Has some critical event occurred for which customer feedback would be important? (e.g.,
was the office reorganized to speed customer service or product delivery?)
• Are any changes in programs anticipated, which call for surveying customers both before and
after the change?
How long should a feedback activity take?
Obviously, many variables can affect the time it takes to complete a feedback effort. A few of
these variables might include the type and method of feedback selected, the number of
respondents, and the extent to which those responsible for the survey project are prepared to plan
and act on the results. It is likely that many individuals, including the customer, will have
expectations about how long the effort will last, and when results may become available.
Therefore, it is important to carefully plan the schedule of a feedback effort. On the following
page is an example of the timetable of one feedback survey:
Who are our customers and what services and products do we supply to them?
A customer is someone who directly relies on a provider for a product or service. Customers are
defined based on the service or product they receive. Customers:
• Have a direct relationship with EPA, including interactions through a contractor that
represents the agency.
• Receive one or more services or products from EPA.
-------
CUSTOMER FEEDBACK SURVEY -- PROJECT TIMETABLE

DELIVERABLE                                        TIME FRAME

Project Planning and Design                        June (2-3 meetings)

Design Survey Instrument
 - focus groups                                    6/22 & 6/29
 - internal draft of questionnaire                 7/7
 - 1st draft to survey team                        7/10 - 7/12
 - mark-up meeting                                 7/12
 - 2nd draft to survey team                        7/17 - 7/20
 - revised draft sent to field                     7/25
 - final version sent for approval                 8/4
 - final approval from agency                      8/7

Data Collection
 - field testing                                   8/14 & 8/15
 - revisions (if necessary)                        8/16 - 8/18
 - phoning                                         8/21 - 9/15

Analysis and Report
 - analysis                                        9/15 - 10/15
 - report                                          10/17
 - briefing charts                                 10/31

Process Improvement Workshops
 - coordinating committee                          11/1 & 11/2
 - executive board                                 11/8 & 11/9
 - notes to coordinating committee                 11/16
 - notes to executive board                        11/27

Performance Standards and Process
Improvement Implementation
 - action teams                                    start 12/10
-------
• Rely on EPA for a work product
or for specialized expertise.
• Are directly affected by the
actions of EPA.
• May receive financial assistance,
such as grants.
• Include those for whom we carry
out a mandate or mission, such
as Congress and the Office of
Management and Budget.
• Include EPA employees as
internal customers of each other.
Relationships and transactions
among EPA staff are essential for
delivering consistent, excellent
service to external customers.

Stakeholders are individuals
whose primary relationship with
EPA is characterized by having
an interest in our work and
policies; someone who may
interact with the agency for
another person or group; or
someone who influences our
future direction (including
financial resources). Clients are
individuals and organizations with
a dependent relationship to the
agency.
Customer Service & Permitting
While identifying many customer groups interested in the
permitting process is possible, there are only two major groups:
interested and impacted parties, and permit applicants.
Interested and impacted parties are those individuals, interest
groups, communities, states, or tribes that raise a concern or
have comments regarding the permit action. Permit applicants
are the entities that are seeking approval from EPA or a
delegated authority to conduct a regulated activity. In addition,
relationships between the governmental entities involved in the
permitting process (EPA Headquarters, Regional Offices and
Delegated Authorities) are also important.
In permitting programs, receiving and effectively using feedback
from customers results in actions that are more acceptable and
supported by interested and impacted parties, permit applicants,
and regulators. Interested and impacted parties are individuals
or groups that raise a concern or have comments regarding a
permit action. When the permitting authority effectively listens
and responds, the interested and impacted parties and permit
applicants generally feel better served by government. Also,
through the information from the feedback, permitting agencies
can more effectively plan and allocate resources to address
issues that, in turn, more directly relate to customer concerns.
Experience has shown that permitting actions often benefit from
customer input, particularly about site-specific conditions that
technical staff alone cannot provide. Effective customer service
in the long-run saves resources by promoting more efficient
permitting decisions.
The Customer Service in Permitting (CSiP) Workgroup is a
continuation of the Agency's efforts to improve the permitting
processes. Early efforts in customer service focused on setting
standards and developing surveys to obtain feedback from
permitting customers. CSiP members recognized that since
most permitting occurs at state, tribal and local levels, efforts
to encourage customer service at those levels are also needed.
The CSiP provides an opportunity for Headquarters and
Regional staff to work with state representatives in developing
the necessary tools to receive effective feedback and to deliver
customer service in permitting. The CSiP's mission is to promote
high quality customer service in EPA permitting. This includes
permits issued by EPA or by delegated authorities at state, tribal
and local levels. To accomplish this mission, the Workgroup
uses customer feedback to improve permitting activities by
using the feedback to:
• measure standards of customer service;
• increase the skills and abilities of individuals involved in the
permitting process; and
• create a culture that values customer service.
-------
Feedback from stakeholders and clients is also
necessary and valuable for specific activities of
the organization. However, it is important to
know when the individual who is giving
feedback is trying to influence your decisions or
is very dependent upon you and maintaining
goodwill in your relationship.
Before beginning a customer measurement
project, it is important to be clear about which
customers and which products and services are
the focus. Fact Sheet I lists some of the most
common products and services, and the
customers for each.
Why establish quality control procedures in
customer feedback activities?
Developing and applying good internal control
procedures is a sound business practice and
helps assure the quality, reliability, and integrity
of information used for decision making. The
standards and techniques of quality control
should apply to data collection, administration
of data collection activities, analysis, and
reporting of results from customer feedback.
Controls vary: they may be as simple as
limiting access to raw, customer-specific data
and separating the data collection,
administration, and presentation duties from
the affected action officials, or as thorough as
performing independent quality assurance
reviews. The purpose of internal controls is to
provide reasonable assurance that the objectives
of customer feedback will be accomplished in a
reliable and cost-effective way. For a
description of specific control standards and
techniques, see Fact Sheet II.
OARM
The Office of Administration and Resources
Management (OARM) is responsible for
providing a wide range of services to internal
EPA customers. Just a few of these many
services include telephone, voice mail, and e-
mail services, personnel transactions and
retirement counseling, contracts and grants
management, shuttle bus and parking services,
printing and mail delivery, office moves, and
safety and health services.
Improving customer service is one of OARM's
highest priorities. Senior managers and all of
OARM staff members are charged with
improving the quality and timeliness of services
and service delivery so EPA employees can
accomplish the business of the Agency efficiently
and effectively.
To measure progress in improving service,
OARM began measuring customer satisfaction
through service-specific transaction surveys and
through an annual OARM-wide customer
satisfaction survey. Results of these surveys
provide managers with current and detailed
information on how well OARM is meeting
customer needs, what is most important to
customers, and where making changes and
improvements is most critical. Feedback
enables OARM to respond directly to customer
needs and suggestions when "quick fixes" are
possible, to target long range improvements
through Customer Service Improvement Plans,
and to track customer satisfaction over time.
If you don't know where you are going, then it
does not matter which way you go.
Lewis Carroll
-------
OIG - Serving Many Customers
Customers of the Office of the Inspector
General (OIG) cut across EPA programs
and their customers. The OIG is unique in
its statutory mandate (the Inspector General
Act of 1978), which requires it to be
organizationally independent to ensure its
objectivity and impartiality and to prevent
interference in the conduct of its work. The
President appoints the Inspector General
without regard to political affiliation, and the
Inspector General reports directly to Congress.
To prevent and detect possible fraud, waste,
and mismanagement, and promote
economy, efficiency and effectiveness in
EPA's programs and operations, the OIG
conducts audits and investigations. The
OIG also performs evaluative, consulting,
and advisory services to reduce risks,
improve accountability, and ensure financial
integrity. Although organizationally
independent, the OIG is part of the EPA
management team, dedicated to the Agency's
environmental mission.
While the OIG is the Agency's fiscal and
operational watchdog, it is also the
Agency's consulting partner for
collaborative problem solving and
recommending sound business practices.
The OIG is in the management and
enforcement service business. But this
unique role creates unique customer
relationships, often with disparate
expectations from a variety of customers
with frequently different points of view.
The OIG is both independent and
collaborative, part of the EPA team, yet
independently reporting to Congress. So
how does the OIG know how well it is
serving its customers when different
customers value different things? By
working very hard to improve modes and
means of communications.
The OIG maintains frequent two-way
communications through personal contact
and correspondence with both key Agency
managers and key staff members of
Congressional Committees. The OIG also
works directly with other federal, state and
public auditing and law enforcement
organizations.
We understand that the critical nature of our
work may provoke other than positive
responses, but we want our customers to
realize that ultimately we share the same
objectives that they do. We act as agents of change
and strive for constructive solutions. Our
challenge is to use customer surveys to tell
us what is important to our customers in the
context of our mission, and measure how
well we are achieving the attributes of our
mission. "Hearing the Voice of the
Customer" will give us the process to obtain
the most relevant information possible that
can influence the OIG success as valued
agents of change.
Specifically, we will begin seeking customer
feedback from several sources following
each major audit, investigation and
assistance project. We will be measuring
attributes of OIG products/services and
staff. Agency officials may not like all of our
findings, but they can still strongly agree that
our work is relevant and accurate, and that
our staff is professional and encourages
constructive communications, as these are
the attributes needed for agents of change.
(See sample survey with Fact Sheet IX.)
Before we start talking, let us decide
what we are talking about.
Socrates
-------
Develop a written plan for the customer survey (checklist)

__ purposes of the activity
__ quality control procedures
__ ways findings will be used
__ identify the target group
__ methods of data collection
__ timing for data collection
__ analysis plan
__ tools for carrying out feedback activity
     __ discussion topics
     __ survey instrument
     __ database
__ anticipated products
     __ tables and graphs
     __ text that interprets findings
     __ slides
     __ specific conclusions
     __ recommended actions

THE PLAN (checklist)

Get ready:
__ see what feedback you already have
__ decide which core questions to ask
__ decide frequency for customer feedback
__ define the target customer population
__ identify services supplied to customers
__ establish purposes of customer feedback (see next checklist)
__ decide whether to do the activity or contract out
__ develop written plan
__ determine resources needed
__ obtain agreement to proceed (if needed)
-------
Establish the purposes of customer feedback (checklist)

__ Determine how the findings will be used:
     - as a key customer/performance measure?
     - to revise products, processes, or services?
     - to inform planning, decision making, and resource allocation?
     - to provide reward or recognition?
     - to help validate standards, specifications, and measures?
__ Determine who will use the findings:
     - Who else is interested in the findings?
     - How much time are they able to give to learning the findings?
     - What formats will work best (briefings, written reports)?
-------
CONSTRUCT DATA COLLECTION PROCEDURES
What is the "best" approach for assessing customer satisfaction?
There is no one best approach for assessing customer satisfaction. What will work best for any
particular EPA program area will depend on the kind of product or service provided, the kinds of
customers served, how many customers are served, the longevity and frequency of customer-supplier
interactions, and what you intend to do with the results. Two very different approaches both produce
meaningful and useful findings:
• Continuous assessment methods — Methods to obtain feedback from the individual customer at
the time of product or service delivery (or shortly afterwards).
• Periodic survey approaches — Methods that obtain feedback from groups of customers at
periodic intervals after service or product delivery. They provide an occasional snapshot of
customer experiences and expectations.
Understanding customers' expectations and satisfaction requires multiple inputs from customers. It is
like peeling away layers of an onion — each layer reveals yet another deeper layer, closer to the core.
Both types of methods are helpful for obtaining customer feedback to assess EPA's overall
accomplishments, degree of success, and areas for improvement.
Continuous Assessment
These Guidelines focus on methods for obtaining customer feedback periodically, but it is very
important to remember that you can adopt continuous assessment as a standard method for obtaining
customer satisfaction information. Some ways to include continuous assessment in your work
include:
• inserting a feedback card in every copy (or every nth copy) of any published report sent out; and
• making a follow-up phone call to every customer (or to every fifth, or twelfth, or nth customer)
within one or two days of interacting with that customer.
The information you obtain from continuous assessment can provide valuable and timely insight into
the experiences that your customers have had with EPA.
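As a minimal sketch of the every-nth-customer idea above, the Python fragment below selects
every fifth customer from a contact log for a follow-up call; the log and the interval are
hypothetical and should be chosen to fit your own contact volume and workload.

    # Minimal sketch (hypothetical data): pick every nth customer from a
    # contact log for a follow-up call within a day or two of the contact.
    customers = [f"customer-{i}" for i in range(1, 101)]  # stand-in contact log

    n = 5  # choose n to balance workload against how much feedback you need
    follow_up = customers[n - 1::n]  # every nth customer, starting with the nth

    print(f"{len(follow_up)} of {len(customers)} customers flagged for a call")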
Decide on Data Collection Method
Before considering systematic methods for collecting data, remember that informal methods for
obtaining information from customers clearly produce information that is valuable. Everyone at EPA
-------
needs to recognize and use these everyday opportunities for customer feedback. Use this information
to complement the more systematic forms of gathering feedback discussed here. (See previous
discussion, page 4.)
Many more formal methods can be used to collect customer feedback data. Methods
frequently used to gather customer feedback include: focus groups; a mail-back postcard
included among materials sent to EPA customers; a mail survey; a telephone survey; a publication
evaluation form included at the back of every copy; and a printed or in-person survey (which might
include computer-assisted personal interviews or an intercept survey in which you ask every nth
customer attending a function or visiting a facility to participate). Electronic mail will become an
increasingly important means for collecting customer feedback as more people gain access to
the Internet.
When you decide which method to use, you should consider several factors, such as the types and
number of questions to ask. The decision will also be affected by available resources to gather
customer feedback, how fast decision makers need to have the information, and how "representative"
the findings need to be. The response rate — the number of customers who actually answer questions
divided by the number contacted for information — is also an important consideration because it will
affect the way you can use findings. (For example, if 300 of 500 customers contacted complete a
questionnaire, the response rate is 60 percent.) A summary of different methods appears in the table
below.
When selecting a method for obtaining customer feedback, recognize that you need different kinds of
information for different methods of obtaining customer feedback. If, for example, you choose a mail
or phone survey, you will need an accurate name, address, and/or telephone number. At times it may
also be critical to know which EPA programs or services the customer sought or received, as well as
any demographic information available.
Note that several different practices can affect the ratings of various data collection methods:
• Focus groups, telephone and in-person surveys require trained staff to conduct proper interviews
and prevent interviewer bias.
• Focus groups, telephone and in-person surveys provide EPA with the opportunity to show
through direct personal contact that the agency takes customer feedback seriously.
• Telephone surveys can more readily accommodate differences in language and literacy levels than
can mail surveys, but they cannot accommodate lengthy questionnaires or visuals.
• Some people are difficult to reach by telephone or do not have one, and many who do are
reluctant to participate or simply will not participate in telephone interviews.
-------
[Table: Comparison of feedback methods. The original table rated each feedback method
(e.g., mail survey, telephone survey, in-person survey, focus group) as high, moderate, or
limited on factors including: ability to encourage customers to participate; ability to provide
instructions or explanation to the customer; whether the customer must initiate contact;
the respondent's perception of anonymity; the opportunity to probe and ask follow-up
questions; and the need for an accurate list of telephone numbers or addresses.]
-------
• Mail surveys can be longer, since respondents can work at their own pace, but they have the
longest response time and may not reach the intended target.
• Mail surveys allow no interviewer bias to creep in, but they offer little ability to probe or ask
complex questions, and should there be any ambiguity in questions, it cannot be clarified.
• The amount of follow-up can dramatically influence costs, timeliness, and the ability to
generalize results. Mail surveys, for example, may have several follow-up mailings to
customers who do not initially respond. Customers who initially decline to participate in a
telephone survey may be assigned to a special staff member who is charged with trying to
convince the customer to answer the questions. An advance letter can increase participation
and response rates for mail and telephone surveys. It can also allay customers' concerns
about such matters as: how they were selected, why they have been selected to participate
again (if applicable), anonymity, how long it will take them to answer the questions, and how
findings will be used. (See the sample advance letter following.)
Sample Letter

EPA

Mr. John Doe
Alpha, Beta, and Gamma Co., Inc.

Dear Mr. Doe:

I am writing to let you know that your name has been selected at random to participate in a survey
about business owners' experiences with EPA. Your experiences can help shape our future direction.

We at EPA will take findings from the survey into consideration as we develop our plans for the next
decade. We are committed to incorporating customer viewpoints and recommendations into our
strategic planning, budgeting, and decision making, while recognizing the need for balancing sometimes
competing interests.

I realize that we may have contacted you before. We have worked to respond to customer concerns,
so it is very important to hear from you again. Your responses will be kept confidential.

You will receive the survey in the next few days. It will take less than 10 minutes for you to complete.
I urge you to consider the questions carefully and let us know how we can better serve you. In the
meantime, if you have any questions, please call 1-800-xxx-xxxx to speak with a staff member on the
survey team (or someone at YYY Consulting, the firm conducting the survey for EPA).

I thank you in advance for your time and consideration.

Sincerely,

Title of highest possible EPA person
-------
The Sample
If the number of customers of interest is relatively small, not more than 50, each could be
contacted to obtain feedback. This is the census approach. In many cases, EPA services or
products are provided to a large group of customers, one too large for a census approach. In
such cases, a sampling approach is needed, and two options are possible: (1) a judgment sample,
in which you consciously select the customers that you will contact from the entire group of
customers served, and (2) a probabilistic sample, in which customers you will contact are picked
randomly from the entire group of customers served during the period of interest (e.g., the past
year).
In most cases, it is better to rely on a probabilistic sample than a judgment sample. Judgment
samples may be biased because of the way customers are selected for the study. If a sample is
biased, it is impossible to draw inferences about the entire group of customers served. As long as
the response rate is high enough, probabilistic samples are not biased, so inferences can be made
about the entire group of customers that the selected ones represent.
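To make the distinction concrete, here is a minimal sketch of drawing a probabilistic sample in
Python; the customer list and the sample size are hypothetical. A judgment sample, by contrast,
would simply be whatever subset the analyst hand-picked.

    # Minimal sketch (hypothetical data): a probabilistic sample gives every
    # customer served during the period an equal chance of being selected.
    import random

    customer_list = [f"customer-{i}" for i in range(1, 1201)]  # all customers served

    random.seed(42)  # fixed seed only so the example draw is reproducible
    sample = random.sample(customer_list, k=200)

    print(f"Selected {len(sample)} of {len(customer_list)} customers at random")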
Determining the Sample Size
If you choose to conduct a mail, telephone, or in-person survey, you will need to decide the
number of people who will be selected to participate. To determine this number — the sample
size — several factors should be considered, such as the total number of customers served, the
intended use of the results, available resources, and time.
»• The larger the percentage sampled, the more certain you can be that the feedback obtained
will be representative of the results that you would have obtained if you had contacted and
gotten feedback from every customer.
> The smaller the percentage sampled, the greater the likelihood that feedback from those in the
sample will differ significantly from those in the full list of customers.
The relationship between sample size and accuracy of findings is due to sampling error, a
measurement that indicates the extent to which the sample of customers is different from the
entire group of customers under study. In a news article that reports the President's approval
rating as 62 percent, plus or minus 5 percent, the "plus-or-minus" value is the sampling error.
To decide the size of the sample, you can either:
»• Determine the largest sample size that you can afford and calculate the associated sampling
error; OR
»• Determine the maximum sampling error that is acceptable and then select the sample size that
will produce that level of error.
-------
The sampling error — which is the difference between a measure based on findings from a sample
and the measure that would be obtained if the entire group of customers served were surveyed —
can be estimated through a confidence interval. A confidence interval specifies a range of values
within which the true measure is found. Typically, survey results rely on a 95 percent confidence
interval, but lower levels are acceptable, depending on how you plan to use the findings.
Popular media reports rarely stipulate confidence intervals, but they are implied. Using the
President's popularity rating as an example, the unstated premise is that the analyst is 95 percent
certain that the President's popularity is between 57 and 67 percent; that is, 62 percent, plus or
minus 5 percentage points, the likely error.
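The arithmetic behind that "plus or minus" figure can be sketched with the standard
simple-random-sample formulas for a proportion; the numbers below (a 62 percent rating
measured on a sample of 370) are illustrative only, not taken from any EPA survey.

    # Minimal sketch: standard formulas linking sample size and sampling
    # error for a proportion, at the 95 percent confidence level.
    import math

    Z_95 = 1.96  # z-value for a 95 percent confidence interval

    def margin_of_error(p, n):
        """Half-width of the confidence interval for proportion p, sample size n."""
        return Z_95 * math.sqrt(p * (1 - p) / n)

    def required_sample_size(p, moe):
        """Sample size needed to hold the sampling error to +/- moe."""
        return math.ceil(Z_95**2 * p * (1 - p) / moe**2)

    e = margin_of_error(0.62, 370)
    print(f"62% +/- {100 * e:.1f} percentage points")      # about +/- 4.9

    # Worst case (p = 0.5) sample size for +/- 5 percentage points:
    print(required_sample_size(0.5, 0.05), "respondents")  # 385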
One last point to consider in determining the sample size is the kinds of comparisons that you will
want to make with survey findings. Many times, analysts are interested in comparing ways that
different customers react to various services. These comparisons may involve large- vs. small-
sized businesses, the general public vs. educators, and so forth. If these comparisons are a critical
portion of the analysis, you must plan for them in the sample design so that enough of each
customer type is surveyed to make the findings meaningful. See Fact Sheets III, IV and V for
further information on sampling.
Develop the Questions
In deciding the questions to ask customers, it is a good idea to keep two principles in mind: (1)
make sure that the questions and answers address your objectives and (2) set limits on the length
of the survey instrument.
Many sources are available to help develop questions for surveys. These include software
packages such as Corporate Pulse (which is available to EPA staff through the Customer Service
Program), prior surveys sponsored by EPA and other agencies, journal articles, and item banks
maintained by some universities and survey organizations. When possible, it is better to use a
previously tested and validated question, rather than one newly created for the current survey.
Survey questions are generally of two types: open-ended and closed-ended. In open-ended
questions, the customer creates his or her own answers. The following are examples of open-
ended questions:
• Do you have any suggestions for improving service? [IF YES] What are they?
• How could EPA be more responsive to your concerns?
• Could you please describe the most satisfying experience you've had with EPA?
Closed-ended questions limit the responses a customer can provide. They may include yes/no
answers, categories of responses, rank-ordered responses, or scales. The following are examples
of each type:
-------
[Examples of closed-ended question formats appeared here: yes/no items, response categories
(e.g., urban, suburban, rural), rank-ordered responses, and 1-to-6 rating scales.]
With closed-ended questions, it is relatively easy to record and analyze responses, and you will
not receive irrelevant or unintelligible responses. However, you risk "missing the boat." To
illustrate, suppose you ask the closed-ended question, "What was the main reason for your
visit?," giving several possible answers, and 30 percent of your respondents mark "other."
Drawing valid conclusions about why customers visited would be hard, as the tabulation sketch
below shows. If you decide to use closed-ended questions, pretest them to identify all the
likeliest responses to your questions.
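As a minimal sketch (with invented response counts), tabulating such an item shows how a
large "other" share hides the main reasons for customer visits:

    # Minimal sketch (hypothetical data): tabulating a closed-ended item
    # in which 30 percent of respondents marked "other."
    from collections import Counter

    answers = (["permit question"] * 40 + ["publication request"] * 30
               + ["other"] * 30)

    counts = Counter(answers)
    total = sum(counts.values())
    for reason, n in counts.most_common():
        print(f"{reason}: {100 * n / total:.0f}%")
    # When "other" rivals the largest category, the item says little about
    # why customers visited; pretesting surfaces the options that were missing.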
In developing questions and answers for closed-ended items, the advisability of including response
options such as "don't know" and "no opinion" should be carefully considered. While customers
should not be forced into providing responses when they really do not have answers, it is better to
find ways to encourage a response than to let customers default to a neutral position. In mail
surveys, this encouragement can be accomplished through instructions; in telephone and in-person
surveys, it can be fostered by not offering "don't know" and "no opinion" as response options.
On the other hand, including "not applicable" as a response is important in mail surveys, so that
customers are able to indicate this when they have not had a particular experience. In asking
-------
questions about a past event, consider giving
a "don't remember" option. Keep the survey
to a reasonable length by asking only the
questions you need to address the issues of
concern that prompted your survey; leave
out the "nice to know" questions.
Recognize that open-ended questions will
provide a richness of data that can
complicate analysis. Reducing responses to
a few categories that can be coded, entered
into a data base, and analyzed can be
difficult. It is probably best to use a mix of
questions, both closed and open, in most
customer feedback questionnaires.
If you are planning an ongoing or
periodically repeated survey, identify a few
key program goals that are unlikely to
change very soon, and focus your questions
on them. Develop questions that will
indicate how well customers think the goals
are being met. These key questions need not
be elaborate or profound, but should be very
basic to your program. To effectively
compare results over time, you need to use
essentially the same core questions in your
survey on each iteration. You will need to
avoid making any major changes to these key
questions, whether in wording, scaling, or
placement, so be sure to ask the right
questions from the beginning.
Questions that are relevant to customers
should be developed to fulfill the purposes
and the objectives of the specific customer
feedback activity being conducted. Although
this may seem obvious, it is important to
remember throughout the development of
questions.
On Developing Questions:
Whatever type of feedback method you
choose, allow plenty of time and resources
for developing your questions. This process
involves several cycles of writing, testing
(using actual customers served), and
rewriting. Remember, what you are
looking for is actionable information for
managers, so ask yourself, "What action
could I take with this kind of answer?"
Questions that are too vague can mean
different things to different people. The
best thing to do is be specific. For
example, you may have identified an issue
such as, "What do our customers think of
our new application form?" That's pretty
bland. Go on to questions like:
What do our customers like or dislike about
our new application form?
Do our customers find the instructions on
the new application form helpful or
confusing?
Is the new form easier or harder to
understand than the old form?
Do customers spend less time filling out the
application?
Do front line staff spend less time
answering questions about the form?
Specific questions will be more likely to
give you information you can act on to
improve your program. Asking front-line
staff for their input is always worthwhile
as well.
Be particularly wary of questions that may be interesting to ask, but that may only add time
and cost while not producing useful information. Such questions include:
1. Extraneous questions that do not address the stipulated purposes and objectives of the
feedback activity.
2. Questions that are subject to misinterpretation. These may have vague words, use unfamiliar
jargon, or could be understood differently by different types of customers.
3. Double-barreled questions that embed more than one item, such as "On a scale of 1 to 6,
please indicate how clear and useful the materials are." The customer may have one opinion
about clarity and another about usefulness, but is not given an opportunity to distinguish
between them in his or her response.
4. Questions that may upset some respondents. Questions that customers may perceive as
intrusive, such as those about household income, are best worded neutrally (for example, by
asking whether the customer's household income falls above or below a certain level) and
placed at the end of the survey.
5. Questions on matters that customers may consider to be sensitive or offensive, especially about
cultural, ethnic, gender, and socioeconomic considerations.
6. Questions that do not elicit responses that point to specific remedying actions.
If you do not ask the right questions in the right way, relatively soon after the service experience,
feedback will not be as useful as it might have been. Also, remember that to effectively compare
results over time you will need to avoid making major changes to key questions, whether in
wording, scale, or order in the questionnaire.
There is no single correct scale to use. However, there are several important issues to consider:
• Whenever possible, the same scale should be used throughout a given questionnaire to help
ensure that different responses within a questionnaire can be validly compared.

• Different survey efforts within an organization should use the same scale. To this end, we
recommend that when using the core questions described above, you consistently use the
same scale of one to six (1-6).
Construct the Questionnaire
No matter what method you use to collect data, all questionnaires follow a similar format:

• Introduction — sets forth the purpose of the survey and guides the customer through the
questions

• Customer experience — establishes the customer's ability to answer various parts of the
questionnaire

• Measurement — asks the person surveyed to characterize his or her experiences, needs and
desires as an EPA customer

• Customer information — gathers data that will be used to classify respondents
The methods used to construct the questionnaire are different, depending on the mode of data
collection that will be used to obtain customer feedback. In the next sections we present methods
for constructing questionnaires for focus groups, mail surveys, and telephone surveys — the most
frequently used forms of data collection in periodic surveys to obtain customer feedback.
Focus Groups. As knowledge about customer surveys has expanded and entered the public
domain, more and more people claim to be conducting "focus groups." It is important to
distinguish between focus groups — which are based on scientific procedures and understanding
of human interactions — and more casual discussions among people who share a common
interest or concern. Both approaches provide potentially useful information, but analysts should
recognize the difference between data from focus groups and data from more informal gatherings
(See pages 27-28.)
The key instrument for a focus group is the Moderator's Guide. This is a series of questions,
probes, and discussion topics that are arrayed in a logical order. The moderator uses the Guide to
elicit opinions and experiences from participants, and to ensure that discussions stay focused as
much as possible on the critical issues around which the group was formed. A sample
moderator's guide outline appears below.

Typically, a Moderator's Guide is organized as follows:
• Introductions by moderator and participants
• Review of ground rules, such as:
  1. You have been asked to participate because your opinions are important
  2. The conversation needs to flow through the moderator
  3. Notice of any video taping, audio taping, and observers
  4. There are no right or wrong answers
• Brief explanation of the focus group topic

Mail Surveys. The mail survey has to do everything you would do if you were with the customer.
It has to be visually appealing, have a pleasant tone, and be crystal clear. The survey instrument
is under the direct control of the customer. Its physical look will affect the customer's willingness
to respond; the clarity of the instructions and questions will affect the customer's ability to
interpret their meaning correctly.

Single-page questionnaires and comment cards should be attractive and easy to read. Longer
questionnaires should be printed in booklet form, on 11" x 17" paper that is folded in half and
stapled in the middle to produce a standard 8 1/2" x 11" page. The cover should be visually
appealing and use a logo or other graphic design to interest the customer; no questions should
appear on the cover. Use of color ink and high-quality paper will add only minor costs to the
survey, but can substantially improve response rates and reduce the cost of follow-up
correspondence and telephone work by staff or contractors. The cover should give the title of the
survey activity and indicate who is conducting the work. For its surveys, the Social Security
Administration uses brightly colored paper, desk-top publishing to allow more flexibility in
design, and larger print to accommodate the needs of its elderly and disabled customers.
Survey questions should be presented in a logical sequence. Many survey experts believe that the
first question on the survey, more than any other, will determine whether your customer
completes or discards the questionnaire. Starting with a fairly simple question is a good idea
because it suggests to the customer that completing the survey will be neither difficult nor time-
consuming. It is also advisable to ask a fairly interesting question to gain the customer's interest.
The next set of questions should focus on matters that the customer is most likely to judge as
useful or salient. This continues the process of drawing the customer in so that he or she becomes
engaged with thinking about the questions being asked and becomes invested in completing the
survey. Grouping questions together that share common themes makes sense because the
customer then focuses on that particular area of inquiry. To the extent practical, group questions
together that have similar types of response options. For example, questions that have yes/no
responses should be together and questions that have scale responses should be together.
The order of questions should also mirror the thought processes that customers are likely to
follow. For example, questions about particular experiences with on-site inspections should
precede questions about suggestions to improve those inspections.
The final set of questions should center on those most likely to be sensitive or offensive. These
may include questions about personal characteristics (race, age, income) and unsuitable behaviors.
The final page of the booklet should not have any survey questions. Instead, it should invite the
customer's comments or suggestions about anything raised in the survey or other issues and
concerns important to the respondent. It should also indicate the address for returning the
questionnaire (in case the survey gets separated from the reply envelope) and, when possible, a
toll-free number set up exclusively to receive inquiries about the survey.
Telephone Surveys. Because customers have no questionnaire in front of them during a
telephone survey, concerns about visual appeal are not applicable for this form of data collection.
Issues regarding ordering and clarity of questions are important, and the same principles apply as
with mail surveys.
The difference between mail and telephone surveys is that spoken language is very different from
written language, and customers must be able to respond to questions based only on the
information they hear. So, it is critical that you ensure that your interviewers speak clearly and
are well trained. Additionally, the interviewer acts as an intermediary between the customer and
the questions posed. With this in mind, the following principles apply to telephone surveys:
• The introduction the customer hears will probably determine whether the interview is
conducted or the customer hangs up. The introduction should be concise, state the purpose
of the call, estimate the length of the call, and assure confidentiality. This is a sample:
Hello, my name is [fill in], and I'm with the Environmental Protection Agency [or XXX
Consulting]. We're conducting a survey of people who have received materials from the
EPA to learn about their experiences and opinions. Let me assure you that this is not a
sales call, and that we will keep all information about you and your responses private.
We will use the information you provide only to help improve EPA's services. The
survey will take less than 15 minutes to complete and is purely voluntary. Is this a
convenient time, or would you like to set up a better time for me to call you back?
• Because customers will rely on verbal cues and instructions, rather than written ones,
questions should have a limited number of responses (about three or four).

• Because customers will rely on verbal questions, each question should be relatively short.

• Avoid questions that ask the customer to look up information or check with others.

• In constructing the questionnaire, be sure to read the questions aloud to others to see if they
sound clear and are understandable. Remember, what works for the written word does not
always work for the spoken.

• Complex skip patterns and branching are easily accommodated through computer-assisted
telephone interviewing (CATI) systems. Skip patterns occur when a particular answer to one
question means the respondent is not asked certain questions that would otherwise follow;
branching occurs when a particular answer to one question leads to a series of questions that
are customized to that particular answer.

• Rank-order questions are subject to error in telephone interviews in a way that they are not
for mail or in-person surveys. Rather than asking a customer to rank-order a list of, say, eight
items, it is better to ask that person questions in a series of pairs ("Which is more important to
you, X or Y?") or break up the list into a series of separate scaled items ("On a scale of 1 to 6,
where 1 is extremely important and 6 is not at all important, how do you feel about X? On a
scale of 1 to 6, how do you feel about Y? How about Z?").

• When changing subjects, telephone surveys should cue the customer with transitional
language. Statements such as, "Now, I'd like to turn to your experiences with ..."
accomplish this shift.

• Instructions for the interviewer must be perfectly clear, and the same format should be used
throughout the survey. For example, interviewer instructions are typically written inside
brackets, in all capital letters.

• For a sizable telephone survey (of, say, more than 50 people), the use of CATI should be
considered. For large studies, CATI will be more cost-effective and produce more reliable
information. (A small sketch of CATI-style skip logic follows this list.)
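The skip and branching rules that CATI systems automate can be pictured as a small routing
table. The following is a minimal, hypothetical Python sketch (not any particular CATI
product's interface): each question maps every allowed answer to the question that should be
asked next.

    # Minimal sketch of CATI-style skip and branching logic (hypothetical
    # example). Each question maps each allowed answer to the next question;
    # None means "continue in questionnaire order."
    QUESTIONS = {
        "Q1": ("Did you contact EPA by telephone in the past year? (yes/no)",
               {"yes": "Q2", "no": "Q4"}),   # skip pattern: "no" skips Q2-Q3
        "Q2": ("Was your call answered within one minute? (yes/no)",
               {"yes": "Q4", "no": "Q3"}),   # branching: "no" adds a follow-up
        "Q3": ("About how many minutes did you wait?", None),
        "Q4": ("Overall, how satisfied were you, on a scale of 1 to 6?", None),
    }
    ORDER = ["Q1", "Q2", "Q3", "Q4"]

    def interview(answers):
        """Walk the questionnaire, applying skip/branch rules to canned answers."""
        asked, current = [], "Q1"
        while current is not None:
            text, routing = QUESTIONS[current]
            response = answers[current]       # a live system would prompt here
            asked.append((current, response))
            if routing:                       # follow the branch for this answer
                current = routing[response]
            else:                             # otherwise take the next in order
                nxt = ORDER.index(current) + 1
                current = ORDER[nxt] if nxt < len(ORDER) else None
        return asked

    print(interview({"Q1": "no", "Q4": "5"}))   # Q2 and Q3 are skipped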
Other Methods for Obtaining Feedback. While many customer feedback activities at EPA are
likely to rely on focus groups, mail surveys, and telephone surveys, remember there are other
methods for obtaining customer feedback. These include prepaid postcards attached to materials
EPA distributes, asking recipients to complete a couple of questions and return the card. They
also include questions asked of Internet users, in-person interviews that make use of a semi-
structured list of questions, continuous feedback obtained by calling every fifth, tenth or nth
customer a few days after product or service delivery, and any other formal or informal
opportunity for listening to customers.
Pretest
A pretest is a small-scale trial of the instrument and data collection methods. Conducting a
pretest is extremely important because the results will provide opportunities for refining the
instrument and methods before the comprehensive data collection activity begins.
It may seem that a pretest is unnecessary if a survey has been carefully researched and designed.
However, even the best plans cannot anticipate all real-world circumstances.
Results from a pretest can tell the analyst:
• whether the flow of questions is logical and orderly,
• whether questions seem relevant and appropriate to the customers,
• whether customers were able to easily understand and respond to questions,
• whether response categories are adequate, and
• whether questions truly reflect the issue that is intended to be measured.
A pretest is helpful for cost projections, and also provides information about actual burden (that
is, the amount of time to complete the survey), which is essential for Office of Management &
Budget (OMB) clearance (required for federal agencies, their contractors, and cooperative
agreement partners performing surveys of direct benefit to the sponsoring agency). A pretest
that involves more than nine people who are not federal employees also requires OMB clearance.
One of the best ways to conduct a pretest is to randomly select individuals from the target group
of customers served, have them complete the survey according to the method planned for the
overall effort, and then participate in a focus group session to review their opinions. If, for
example, you intend to conduct a telephone survey, customers should be recruited, come to a
central location where they can be interviewed by telephone, then meet as a group to go over the
draft questionnaire and their experiences in answering the questions. Those who are involved in
the pretest should not be included in the sample selected for the actual survey.
Contingency for Non-Response
Occasionally, regardless of planning, there will be times when response rates are simply too low
for you to make inferences and recommend action. In these cases, it is important to have a
contingency plan for non-response. The plan will need to include the potential additional steps
you will take to increase the level of participant response. Some potential steps include:
• Reminder Calls or Postcards — If these steps were not included in the original survey plan,
they should be considered if the response is low. If they were included in the original plan, it
may be advantageous for you to repeat them.
• Follow-Up Contact with Non-Respondents — You may need to make telephone calls or
other types of personal contact to non-respondents to identify the reasons for their non-
response. You may want to learn if they understood the intent of the survey and the
questions, if the questions were relevant to them, and if there were specific factors that caused
their reluctance to respond.
• Improve Contact Information — It may be that many addresses or phone numbers of the
target group are incorrect or out-of-date. Improving this information would very likely
improve the response rate. Places to check include the Internet, credit bureaus, and business
directories.
• Revision of Survey Instrument — In some instances, some of the survey questions may
make respondents feel uncomfortable or unable to respond, so you may need to revise the
instrument. NOTE: If you change the survey instrument significantly, you may not be able to
compare the results received before the change with those received after the change. You will
need to carefully consider the trade off of response rate vs. data validity.
Some of these steps may require a great deal of effort, time, and money. The group or individual
in charge of the survey will need to carefully consider the various options. If the response rate
remains too low, you may need to wait for a better time and a different customer base, or may
wish to rely on direct conversations with customers.
OMB Clearance
Under the Paperwork Reduction Act of 1995, the U.S. Office of Management and Budget must
approve any federally-sponsored collection of information that asks the same question of more
than 9 non-federal respondents. Typically referred to as "OMB Clearance," the process is an
exacting one and demands strict adherence to OMB requirements. For example, if a customer
feedback activity is subject to OMB clearance, the cover of the data collection instrument must
contain standard language and the date on which the clearance expires.
EPA has obtained OMB approval of a generic Information Collection Request (ICR) to conduct
customer satisfaction work. Under this authority, the clearance process is streamlined and the
time for clearance is reduced from as long as 6 months to between 10 and 15 days. This generic
ICR is available only for strictly voluntary collections of opinions from customers who have
experience with the existing product or service that is the subject of each particular feedback
instrument.
Fact Sheet VI explains the streamlined process and provides several examples of cleared survey
instruments. You may request the fact sheet as a separate electronic document from Patricia
Bonner, Director of EPA's Customer Service Program (Mail Code 2161). You may also send
her survey instruments for quick review to ensure that questions are worded to address customer
satisfaction issues, not focused on program or outreach effectiveness. In some cases, another
information collection request may be more appropriate to use than the customer service generic
clearance mechanism.
Proposed EPA survey packages should be sent for final review to Barbara Willis of the
Regulatory Information Division (2137), at Headquarters. She will check the package for
compliance with OMB regulations regarding use of the generic clearance for customer satisfaction
surveys and review the burden placed on the public, state officials, tribes, and other non-federal
government customers. She will forward to OMB all survey instruments and the required
clearance package.
See Fact Sheet VI for more information about specific procedures to follow, forms to complete,
and general information about EPA Customer Feedback OMB Clearance. The EPA personnel
listed below may be able to provide additional information:

Barbara Willis
202-260-9453
202-260-9322 (fax)
willis.barbara@epamail.epa.gov

Pat Bonner
202-260-0599
202-260-4968 (fax)
bonner.patricia@epamail.epa.gov

Additional Resources
The EPA Customer Service Program collects copies of survey instruments, reports, and resulting
plans. These materials are a resource for other EPA offices and staff who want to learn more
about their customers.
Checklist for effective questions:
• use short statements or questions
• use simple words
• avoid jargon
• be clear and easy to understand
• arrange questions in a logical order
• use appropriate response options
CONDUCT DATA COLLECTION

Whatever methods you choose for collecting data, adequate planning, training, quality control,
and supervisory practices are essential to ensure that the data collected meet certain standards,
namely that the information is:
• timely
• accurate
• efficient
• useful
• reliable
• valid
Focus Groups
A focus group project typically involves several steps, as discussed below.
To recruit participants, you will need to compose an effective recruitment script. Use this tool to
create dialogue between the person recruiting participants and the candidate, and to qualify
potential participants on factors such as age, socioeconomic status, and race/ethnicity.
Then invite individuals who meet the requirements to participate in the group. You should
recruit about twelve qualified participants for each focus group; allowing for last-minute changes
of plans and illness, the moderator should expect that about nine will attend.
Several practices can maximize the efficiency of the recruitment process:
• Well before the group meets, mail a letter to participants that confirms the date, time, and
location of the group and states whether the respondents will be paid for participating. The
letter thanks the participants, gives directions to the focus group facility, and repeats the
general objectives of the focus group.

• Additionally, you may decide to provide transportation to the focus group facility for those
participants who need this service.

• On the day of the focus group (or the previous day, if the group is scheduled for the
morning), make a follow-up telephone call to the participants to remind them to attend.
Running a successful focus group also requires arranging logistical matters, such as:
• arranging for focus group facilities
• providing video and audio taping equipment or people assigned as recorders
• providing a video hookup between the room where the focus group will meet and the room
where you (or others) will observe the focus group (if this is part of the design)
• coordinating participants' schedules
During the focus group, it is a good idea to use both a moderator and an assistant to conduct the
session. The moderator will pose questions to elicit candid opinions from the participants, keep
the discussion moving, cover all topics in the discussion guide, recognize when participants bring
up valuable new information, and steer the discussion in that direction if warranted. The assistant
supports the moderator as needed, takes notes, and handles logistics.
Mail Surveys
In setting up data collection procedures for a mail survey, a good database is important. The
database should contain, for each customer, a unique identification number, the customer's
characteristics relevant for the sample selection (such as geographic location, size of business, or
date of last contact with EPA), name and address, mail-out date(s), and the date the response is
received. This database is a tracking system.
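As one illustration, the sketch below sets up such a tracking system with Python's built-in
sqlite3 module. The table layout and field names are assumptions chosen for the example, not a
prescribed EPA design.

    import sqlite3

    # Minimal sketch of a mail-survey tracking database (field names are
    # illustrative assumptions).
    con = sqlite3.connect(":memory:")
    con.execute("""
        CREATE TABLE customers (
            survey_id     INTEGER PRIMARY KEY,  -- unique identification number
            name          TEXT,
            address       TEXT,
            region        TEXT,                 -- sample-selection characteristic
            last_contact  TEXT,                 -- date of last contact with EPA
            wave1_mailed  TEXT,                 -- mail-out date(s)
            wave2_mailed  TEXT,
            returned      TEXT,                 -- date the response was received
            undeliverable INTEGER DEFAULT 0
        )""")
    con.execute("INSERT INTO customers (survey_id, name, address, region, wave1_mailed)"
                " VALUES (101, 'A. Smith', '123 Main St', 'Region 3', '1998-11-02')")

    # Record that a completed questionnaire came back.
    con.execute("UPDATE customers SET returned = '1998-11-20' WHERE survey_id = 101")

    # About three weeks after the first mailing, list non-respondents so a
    # second copy of the questionnaire can be sent to them.
    rows = con.execute("""SELECT survey_id, name, address FROM customers
                          WHERE returned IS NULL AND undeliverable = 0""")
    print(rows.fetchall())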
A mail survey typically involves several separate mailings, each of which is called a "wave." Send
out each wave of a mail survey on the same date:
• If you use an advance letter, mail all advance letters to customers on the same day.
• About a week later, mail the first questionnaire to all customers. Attach a label with the
unique identification number to each questionnaire. Include a letter in the package that
refers to the advance letter, asks for cooperation, and (when possible) provides a toll-free
number for customers to call if they have questions. The package should also contain a
prepaid, pre-addressed envelope for the customer to use to return the completed survey.
• As completed questionnaires come in, record their return in the tracking system. Similarly,
as undeliverable questionnaires come back (e.g., the customer has moved and left no
forwarding address or the address is incorrect), note that they were undeliverable in the
tracking system.
• About three weeks after mailing the first questionnaire, send out the second copy to all
those who have not yet responded. The letter in this packet should note the importance of
the study and ask customers to respond. The second copy of the questionnaire should be a
different color from the first version. This distinguishes between the two copies, sends a
signal to customers, and aids efforts to track responses.
The following often help improve response rates:
• the advance letter (if used) should be on official letterhead, with a signature or title that is
meaningful to the customer
• any signed correspondence should use a real signature, rather than a rubber stamp
(scanning in the signature can work well for many letters)
• use a "live" stamp (if possible), rather than metered or prepaid postage, to send out the
survey
• use "address correction requested" to get information on customers whose surveys cannot
be delivered, then use the corrected information in the next mail-out
• use a large enough envelope so that the survey booklet does not have to be folded
• establish, when possible, a toll-free number for the duration of the data collection period,
and encourage customers to call with questions or comments
• allow respondents to fax back the completed survey
• if the budget permits, send out a third mailing via certified mail or using an overnight
delivery service (this is a last resort and may produce only minimal results)
Data from mail surveys must be key-entered or scanned. It is usually most cost-efficient to wait
until you have a sizable batch of completed surveys before beginning data entry procedures. Be
sure to do a periodic quality check to uncover data entry errors.
Telephone Surveys
Whether using computer-assisted telephone interviewing (CATI) technology or a traditional
paper-based technique, you must train telephone interviewers specifically on the study's
questionnaire and data collection procedures. The following are topics to cover during interviewer
training sessions:
• Background and scope of the survey. A project leader gives interviewers general
information about the background and scope of the project. She/he explains the types of
information to be collected and the ways in which that information will be used.

• Review of the questionnaire. A person responsible for data collection goes through the
questionnaire and leads an item-by-item discussion.

• Dealing with uncooperative respondents. Experienced staff lead discussions about ways
to start off the interview right, enlist cooperation, build rapport, and minimize break-offs
and non-responses. The interviewers will also review strategies for ways to manage
challenging situations.

• Answering customers' questions. Some frequent questions are:
  How was I selected?
  What is the survey about?
  Who is conducting the survey?
  Who wants to know these answers?
  How will the information be used?
  How long will this take?
  Will I be identified?
  How do I know you are who you say you are?

• Quality control procedures. Project leaders monitor matters such as posing questions
accurately, tone, courteousness, and responsiveness to customers' concerns throughout the
survey, and review these procedures with interviewers. Telephone interviews for any
sizable study are usually conducted using Computer-Assisted Telephone Interviewing
(CATI) technology. CATI systems use computers to facilitate the interviews, which is a
vast improvement over traditional paper-based systems because CATI:
  - greatly reduces the possibility of mistakes
  - ensures accurate recording of the survey response
  - instantly establishes a tracking system and a record of each call
  - provides significant improvements in quality control and efficiency, and
  - allows complex branching and skip patterns
When using CATI, the computer automatically handles tasks such as controlling pace,
organizing which questions are to be asked and which are to be skipped, rejecting invalid or
unlikely responses, and recording closed-ended and open-ended responses. This frees up
the interviewer to focus on smooth delivery and good interviewing skills. It also eliminates
the need to enter data after the survey is completed. The net result is a higher quality
interview and more reliable information.
Electronic Feedback
As access to the Internet spreads, electronic communication will become an important method for
gathering customer input. You can easily collect feedback by asking Web page visitors a few
questions, inviting grantees to complete comment forms and submit them electronically, and
through on-line discussions in "chat rooms."
E-mail surveys are one of the fastest and least intrusive means for gathering customer
feedback. Up to 50% of the responses are received within 24 hours. E-mail surveys are also
cheaper to conduct, since there are no interviewer, printing, or distribution costs. In addition,
the survey will usually reach the right individual, rather than being intercepted and routed to
another person. However, respondents are not anonymous.
ANALYZE THE DATA
Throughout the customer feedback activity, the framework for analyzing findings should be
established and modified. An analysis plan is a useful tool for organizing the data analysis. The
analysis plan should specify how your organization will analyze the survey responses to produce
the desired products. The plan is helpful for making sure that the data you collect will answer the
overarching questions being posed, for ensuring that you do not gather extraneous data, and for
setting forth expectations about the kinds of information that will result from the customer
feedback activity.
You should include two important items in the analysis plan: (1) the designation of dependent and
independent variables and (2) the stipulation of the unit of analysis. A dependent variable is the
phenomenon you are investigating. For EPA's feedback activities, the dependent variable will
likely be the degree of customer satisfaction with a specific product or service. Independent
variables help explain the observed level of the dependent variable, and may include such factors
as differences in the nature of the product or service (e.g., customers were consistently more satisfied
with one service than with another), frequency and type of interaction, and customer differences
(e.g., educators, students, local planners and small business owners using the same service). The
unit of analysis is what you are studying. In customer feedback surveys at EPA, the unit of
analysis will, in most cases, be the individual person served. When you use continuous feedback
methods, the unit of analysis will generally be the individual customer transaction. For further
discussion of unit of analysis, see Fact Sheet VII.
Data Clean-up
Once you have set up the database and entered all data, you must review the data and prepare them
for analysis. This may entail a broad set of activities, such as deleting cases that left all answers
blank on a mail survey and coding open-ended responses into categories. Generally, this is the
time to run a set of frequencies to show the number of responses of each kind to each question
(the number of yes's and no's to a yes/no question) and the total number of responses of all kinds
to each question. This quick analysis gives you a rough check on the completeness and accuracy
of your data (the total number of responses to any one question cannot exceed the total number of
respondents and rarely will differ greatly from the total number of responses for each of the other
questions). Frequencies also flag out-of-range values (i.e., responses to one question that are so
different from responses to similar questions that you doubt their accuracy).
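A minimal sketch of these clean-up checks, assuming the responses have been loaded into a
pandas DataFrame with one column per question (the column names and data are hypothetical):

    import pandas as pd

    # Hypothetical raw responses: q1-q3 are 1-to-6 satisfaction scores.
    df = pd.DataFrame({
        "q1": [5, 4, None, 6, 2, 9],   # 9 is out of range; case 3 is all blank
        "q2": [4, 4, None, 5, 3, 4],
        "q3": [6, 5, None, 6, 1, 5],
    })

    # Delete cases that left every answer blank.
    df = df.dropna(how="all")

    # Run frequencies: the number of responses of each kind to each question,
    # plus the total number of responses of all kinds.
    for col in df.columns:
        print(df[col].value_counts(dropna=False))
        print("total responses:", df[col].notna().sum())

    # Flag values outside the 1-to-6 scale for follow-up.
    print("out of range:", df[(df < 1) | (df > 6)].stack().to_dict())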
Types of Data and Analyses
Data from focus groups tend to be qualitative in nature. Analysts may tabulate data from focus
groups, such as "X percent of the participants expressed satisfaction." You should treat these se
numbers cautiously and not generalize them to the full set of customers because (1) focus groups
usually have only a relatively small number of participants and (2) participants may have been
recruited because they had specific experiences or characteristics. You may review transcripts
from focus groups to detect patterns and inconsistencies or you may apply more rigorous content
analysis.
For mail and telephone surveys, you can produce a variety of statistics:
1. Descriptions of central tendencies, such as the mean, median, or mode (i.e., the average
value, the middle value (half are larger and half are smaller), or the most frequently
occurring value).
2. Other descriptive statistics, such as frequencies, percentiles, and percentages. In customer
satisfaction surveys, the most commonly reported result is of this kind: the percentage of
respondents who expressed satisfaction with a specific aspect of their interaction with EPA.
3. Cross-tabulations that array independent variables against the dependent variable (for
example, type of customer displayed against a summary measure of customer satisfaction,
like the percentage of customers of each type who reported being satisfied with the product
or service they received).
4. Multi-variate statistics—such as factor analysis, analysis of variance, and regression
analysis—to determine the relationship between and among selected variables.
5. Chi-square, z scores, t-tests, and other statistics to determine statistical significance.
6. Time-series and trend analyses to determine long-term changes, seasonal, and cyclical
patterns in the data.
In most cases, products developed using items 2 and 3 above will meet all the needs and
expectations of the EPA program or project conducting feedback.
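As a sketch of items 2 and 3, assuming survey responses in a pandas DataFrame with a
satisfaction score on the 1-to-6 scale and a customer-type field (names and data are hypothetical):

    import pandas as pd

    # Hypothetical responses: 1-to-6 satisfaction scores by customer type.
    df = pd.DataFrame({
        "customer_type": ["educator", "educator", "advocate", "advocate", "educator"],
        "satisfaction":  [5, 6, 3, 2, 4],
    })

    # Item 2: percentage of respondents expressing satisfaction (a 5 or a 6).
    print(f"{(df['satisfaction'] >= 5).mean():.0%} satisfied")

    # Item 3: cross-tabulate customer type against a satisfied/not summary.
    df["satisfied"] = df["satisfaction"] >= 5
    print(pd.crosstab(df["customer_type"], df["satisfied"], normalize="index"))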
Analysis: An Example
The following is a simple example of how you might analyze data from customer feedback.
Suppose an EPA group has distributed several thousand copies of the ABC Booklet, and because
you want to know how satisfied customers are with the booklet, you asked a sample of 450
customers this question:
On a scale of 1 to 6, where 1 represents "highly dissatisfied" and 6 represents "highly
satisfied," how would you rate your satisfaction with the ABC booklet you received from
EPA?
The most straightforward way to analyze the responses is to provide the average score, which in
this case is 3.5. Although an average score is a very important piece of information, there is a lot
more you can do with the data from your customers. It is often useful to begin with a
frequency distribution where you determine the number and percentage of respondents who gave
each score between 1 and 6. Here is one way to present that distribution:
Customer Satisfaction with the ABC Booklet (n = 450)

    Score                      Number    Percent of those expressing an opinion
    1 — highly dissatisfied       42       11
    2                             27        7
    3                            122       31
    4                            132       34
    5                             38        9
    6 — highly satisfied          32        8
    Total:                       393      100

    don't remember receiving the ABC Booklet:   22 (5 percent of 450)
    don't know/no opinion:                      35 (8 percent of 450)
This example points out several items you need to consider. First, of the 450 customers asked this
question, 22 did not remember receiving the booklet and 35 said they had no opinion or did not
know how they would rate their satisfaction with the booklet. In the example provided above, the
information about those who do not remember or have no opinion is presented outside the table
because the analyst decided that it was more important to focus attention on those who did have
opinions to express. Thus, the percentages of those with opinions are based on the 393 respondents
who expressed opinions. If it is important to determine the percentage of customers who don't
remember or who have no opinion about the booklet, you would calculate those figures using
450—the total number who were asked the question—as the denominator. By including the
sample size in the table (the information that "n = 450"), readers can do these calculations, should
they be interested.
Second, the information presented may be at too great a level of detail for many audience
members. The difference between a "2" and a "3" rating, for example, may not be meaningful for
them. Thus, you may find it useful to collapse the information into some smaller number of
categories. One possibility is to create three categories: dissatisfied, neutral, and satisfied. Scores
of 1 and 2, 3 and 4, and 5 and 6 might be collapsed to create the three categories, reported as follows:
Customer Satisfaction with the ABC Booklet (n = 450)

    Rating         Number    Percent of those expressing an opinion
    dissatisfied      69        18
    neutral          254        65
    satisfied         70        18
    Total:           393       101*

    * Total is greater than 100 due to rounding

    don't remember receiving the ABC Booklet:   22 (5 percent of 450)
    don't know/no opinion:                      35 (8 percent of 450)
Note that the information can now be grasped much more immediately. It is reasonable to ask: If
you will eventually collapse responses, why does the question posed to customers have six possible
answers? Research has shown that people answering survey questions prefer to have a fairly wide
range of responses because they don't like to feel "forced" into a limited set of options. In
addition, analysts may have different approaches to collapsing categories.
The responsibility for reducing information to a manageable amount falls to the analyst. It is the
analyst's task to identify sensible ways to collapse categories and to present these decisions to the
audience (often as a footnote or technical appendix).
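To illustrate, here is one way an analyst might perform that collapse with pandas; the counts are
those from the table above, and this is a sketch of one sensible grouping rather than the only
correct one:

    import pandas as pd

    # Rebuild the 393 opinion-expressing responses from the table above.
    counts = {1: 42, 2: 27, 3: 122, 4: 132, 5: 38, 6: 32}
    scores = pd.Series([s for s, n in counts.items() for _ in range(n)])

    # Collapse: 1-2 -> dissatisfied, 3-4 -> neutral, 5-6 -> satisfied.
    rating = pd.cut(scores, bins=[0, 2, 4, 6],
                    labels=["dissatisfied", "neutral", "satisfied"])

    summary = rating.value_counts().sort_index()
    print(summary)                                # 69, 254, 70
    print((summary / len(scores) * 100).round())  # 18, 65, 18 (sums to 101)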
Third, as discussed in the next section, you should consider how to present the data. Although
these tables are simple and easy to interpret, compare them to a chart that summarizes the
information instantly.
[Chart: Customer Satisfaction with the ABC Booklet]
Fourth, the analysis you anticipated during the planning phase of the customer feedback activity
should guide you in deciding whether you need to do "subgroup analysis." Subgroup analysis examines
whether different kinds of customers have different kinds of responses. Suppose you want to
examine whether educators and representatives of advocacy organizations have the same or
different opinions about the ABC booklet. You could collapse categories and sort respondents by
their status as educators or advocates (to be sure, some respondents may be both educators and
advocates, but for simplicity, let us assume you had customers indicate their primary role), then
present the findings:
Selected Customers' Satisfaction with the ABC Booklet (n = 450)

                        Educators              Advocates
    Rating           Number   Percent       Number   Percent
    dissatisfied        27       17            17       16
    neutral             94       60            78       73
    satisfied           35       22            12       11
    Total              156       99*          107      100

    * Total is less than 100 due to rounding
This table provides important information to the audience, but you might want to present it using
charts for the two separate groups. You could also perform a statistical test to see if the two
groups differ statistically in their satisfaction with the ABC Booklet.
[Charts: Educators' and Advocates' Satisfaction with the ABC Booklet]
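The statistical test mentioned above could, for example, be a chi-square test of independence on
the subgroup counts; a sketch using scipy:

    from scipy.stats import chi2_contingency

    # Rows: educators, advocates. Columns: dissatisfied, neutral, satisfied.
    table = [[27, 94, 35],
             [17, 78, 12]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
    # A small p-value (conventionally below 0.05) would suggest educators and
    # advocates differ in their satisfaction with the ABC Booklet.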
The fifth item to consider is the adequacy of your findings. Be sure you know how strong your
findings are before formulating recommendations. Many factors affect adequacy, such as the sample size,
response rate, and objectivity of questions posed—plus the way you will use the findings. With a
sufficient sample size, a good response rate (more than 75 percent for mail and telephone surveys,
for example), and questions that are not biased, you can use the information with confidence.
OMB requires an 80% response rate for survey results to be considered statistically valid.
However, when less than 80% of those sampled return questionnaires in a customer feedback and
satisfaction measurement activity, the information gathered should still be used to improve
customer service. Do not ignore the findings.
Let's say that in the above example, there was an additional group of people — small business
owners — who were your customers, and that a total of 17 small business owners responded to
your survey. This is a small enough number that the sampling error for this one group of
customers may be quite high. Nevertheless, pay attention to the results.
Even if they do not adequately represent the larger group of small business owners who were your
customers, you can still:
• Decide whether the findings are suggestive (rather than definitive). Should your office pay
attention to the concerns suggested by these findings?

• Compare the findings to other similar data. Are small business owners generally pleased or
displeased with other EPA products?

• Compare the findings to information EPA gets from continuous feedback methods. If you call
small business owners after providing a service or product, what do they have to say in those
conversations? How do the continuous feedback findings compare with the results of this survey?

• Discuss the findings with colleagues. Have they gotten similar reports? Is there a pattern
emerging about small business owners' level of satisfaction with EPA products?

• Raise the findings with program managers, being careful to note that this might be an area that
requires attention to improve customers' satisfaction with EPA.

• Investigate the findings further. Should you use this as a starting point for more in-depth
discussions with small business owners? Conduct focus groups to see how products could
produce higher levels of satisfaction?
One final comment on this example. EPA has a large number of programs and offices, some of
which may have customer bases much smaller than the thousands used in the example. If your
customer base is quite small, you first must decide whether a statistical sample and quantitative
survey is still viable because other techniques may be more suited for your purposes. If you decide
to go ahead with a quantitative survey, recognize that the analyses you conduct should be
carefully considered and constructed. If, for instance, you have 500 customers and survey 100 of
them, you can perform the same analyses as in the example above, but you should examine the
frequency distribution first. In an extreme case, let's assume that 10 of your 100 respondents gave
a score of "0," 60 gave a score of "3," and 30 gave a score of "6." Although the average score of
3.6 may be close to the average of 3.5 in the example, the distribution of responses is very
different.

    "A reasonable probability is the only certainty."
                                          E. W. Howe
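A quick arithmetic check of that extreme case, using the counts given above, shows how a
frequency distribution exposes what the mean hides:

    from statistics import mean

    # Counts from the extreme case above: 10 zeros, 60 threes, 30 sixes.
    responses = [0] * 10 + [3] * 60 + [6] * 30
    print(mean(responses))  # 3.6, close to the 3.5 average in the ABC example
    # Yet no respondent chose 1, 2, 4, or 5, and a tenth of them sit at the
    # bottom of the scale -- facts the mean alone would conceal.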
Sixth, you need to consider how past responses compare with the new responses, and to ensure
that you can compare the most current results with those you expect to receive from future
questionnaires. This is time-series or trend analysis, and it is vital to being able to measure change.
Driver Analysis
An analytical approach that is very useful in customer research is driver analysis. Driver analysis
identifies the service or services that most significantly affect respondents' satisfaction. This type
of analysis provides decision makers with a tool to prioritize findings, which is important because
customer feedback efforts often yield more information than an organization can deal with. Also,
managers often do not have enough resources to adequately address all aspects of customer
service that receive low satisfaction ratings. Driver analysis enables the study team to identify
which areas deserve the highest levels of attention.
As an example, let us assume that an EPA program is assessing three ways of providing
information: by telephone, by mail, and through published materials. Analysis of customer
feedback can identify which of these methods results in the highest level of respondent
satisfaction. This is the delivery system that most strongly "drives" satisfaction with the program's
products and services. When you identify the method that significantly affects satisfaction,
additional analysis can determine which factor within that method most significantly affects
satisfaction. Continuing with the example, let us assume that you identify "information received by
telephone" as the method producing highest satisfaction. Digging down another level, you can use
driver analysis to identify the factor that most affects the respondent's opinion. Such factors may
include one or more of the following: the accuracy of the information, the courtesy shown by the
employee, or the accessibility of the correct person to answer the question. Identifying the driver
in this way greatly enhances a manager's ability to set priorities for improvement efforts.
Two primary analytical techniques, stated importance and derived importance, are used in driver
analysis:
Stated importance uses respondents' answers to specific questions regarding the importance of
the services. Simply ask the respondent to rank or rate items on a prescribed scale (such as a scale
from 1 to 6) according to their importance.
Derived importance uses multi-variate analysis to identify the most important factors affecting
satisfaction. In short, the overall level of satisfaction with the organization is compared to the
levels of satisfaction with particular products or services received. Driver analysis will identify the
degree to which variation in the overall level of satisfaction is explained by the variation in the
product or service received. Those individual products or services that most adequately explain
the variation in overall satisfaction are the drivers.
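One common way to compute derived importance is to regress overall satisfaction on the
individual service ratings and compare standardized coefficients. The sketch below illustrates
the idea on simulated data; the variable names and values are assumptions, not EPA results:

    import numpy as np

    # Simulated ratings, one row per respondent, all on a 1-to-6 scale:
    # overall satisfaction plus satisfaction with three delivery methods.
    rng = np.random.default_rng(0)
    phone = rng.integers(1, 7, 200)
    mail = rng.integers(1, 7, 200)
    materials = rng.integers(1, 7, 200)
    overall = 0.6 * phone + 0.2 * mail + 0.1 * materials + rng.normal(0, 0.5, 200)

    # Standardize so the coefficients are comparable, then fit least squares.
    X = np.column_stack([phone, mail, materials]).astype(float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    y = (overall - overall.mean()) / overall.std()

    coefs, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)
    for name, beta in zip(["telephone", "mail", "materials"], coefs[1:]):
        print(f"{name}: standardized coefficient = {beta:.2f}")
    # The delivery method with the largest coefficient is the "driver."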
Presenting the Data
One critical activity is to remove all identifying information from the data. To ensure credibility
and confidentiality, you should never present findings that could be used to identify a specific
customer. A typical practice is to strip names, addresses, and telephone numbers from the
analytical database and keep them in a separate file that includes the unique identification number
assigned during the data collection activity. If ever warranted, you can link the identifying
information back to the customer feedback through the identification numbers.
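A sketch of that separation, assuming responses arrive as records with the identifying fields
shown (all field names are hypothetical):

    import csv

    # Hypothetical raw records mixing identifiers and survey answers.
    records = [
        {"survey_id": 101, "name": "A. Smith", "phone": "555-0101", "q1": 5, "q2": 4},
        {"survey_id": 102, "name": "B. Jones", "phone": "555-0102", "q1": 2, "q2": 3},
    ]
    PII = ("name", "phone")

    # Analytical file: responses keyed only by the identification number.
    with open("responses.csv", "w", newline="") as f:
        fields = [k for k in records[0] if k not in PII]
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows({k: r[k] for k in fields} for r in records)

    # Separate, restricted file linking identifiers to the same numbers,
    # consulted only if re-contacting a customer is ever warranted.
    with open("identifiers.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=("survey_id",) + PII)
        writer.writeheader()
        writer.writerows({k: r[k] for k in ("survey_id",) + PII} for r in records)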
Most people are interested in the "bottom line," presented as succinctly and clearly as possible.
Therefore, it may be best to present the data reflecting survey results in simple, straightforward
ways to most EPA audiences and save the mathematical details for an appendix or supplementary
briefing. Many audience members want a brief summary of the study's findings. Two pages of
text, with key findings presented as bullets, are usually sufficient.
Graphic representations of data are powerful displays of findings. It is very easy for audiences to
grasp information presented in bar graphs, pie charts, and similar designs. The rapid growth of
low-cost color printers means that these displays can be easily produced in color, adding to their
ease of understanding. Examples of graphs are presented in Fact Sheet VIII.
Formulating Recommendations Based on the Data
Customer feedback may suggest many potential improvements or enhancements to consider.
Narrowing down the list to those that will have the most direct effects on overall customer
satisfaction is the ideal. Most organizations will have limited staff and other resources, so practical
considerations must guide their choices. Usually, three to five targeted improvements are
sufficient. Sometimes, a single improvement can present a significant challenge, and focusing on it
can have a major impact.
Each organization will consider its own capacity for action. However, it is important to do
something or customers may feel that their input was not valued and the effort they expended to
respond was wasted. They may place even less trust in the surveying agency.
Recognize too, that not everyone will be ready for the feedback results. Presenting them can raise
sensitive issues for some individuals. Some people may feel threatened by anything but glowing
results and may become defensive or emotional. Some may question the credibility of the findings,
especially if they build logically to recommendations for changes that affect them.
To get buy-in and use the results to influence
change, results must be honest, and presented in a
constructive way that emphasizes the positives.
Results, findings and recommendations should be
presented as opportunities for improvement. If
the survey cannot be used to influence change or
improvement, it did not meet its objective, no
matter how carefully the whole feedback activity
was conducted.
Presenting Recommendations - Using Graphics
First, remember, at least 70% of the message is
visual, so take advantage of how people take in
information. Use the right visuals to
communicate your message. You can:
• emphasize main numerical facts;
• uncover facts, trends, comparisons, and relationships that might be overlooked in text or tables;
• summarize, group, or segment (stratify) data; and
• add variety and interest to text, tables, and briefings.
Use Pie charts to display components or parts of a whole. Use Line charts to show independent
or cumulative values when:
• your data cover a long period of time,
• several series are compared on one chart,
• you want to show change, not quantity,
• you want to exhibit trends, or
• you want to show relationships.
Use Column charts when you have only a few data points to plot, or when the series fluctuates
sharply. Do not use column charts for comparing several data sets, for showing data with many
plottings, or to show many components. Finally, use Picture graphs to demonstrate concepts or
ideas. (See Fact Sheet VIII for examples of graphics.)
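For instance, a pie chart (parts of a whole) and a column chart of the collapsed ABC Booklet
ratings each take only a few lines; the sketch below assumes the matplotlib library is available:

    import matplotlib.pyplot as plt

    # Collapsed ABC Booklet ratings from the analysis example.
    labels = ["dissatisfied", "neutral", "satisfied"]
    counts = [69, 254, 70]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.pie(counts, labels=labels, autopct="%1.0f%%")  # parts of a whole
    ax1.set_title("Satisfaction with the ABC Booklet")
    ax2.bar(labels, counts)                            # the same data as columns
    ax2.set_ylabel("Number of customers")
    fig.tight_layout()
    fig.savefig("abc_booklet_satisfaction.png")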
On Developing Recommendations
Whether you should develop
recommendations depends on the purpose
of the feedback activity, the significance of
the issues, the quality and significance of
the findings, and your audience. Your
original purpose should be action-oriented;
answers to your issue questions should
naturally lead to ideas for actions that
would improve program effectiveness.
If you develop and make
recommendations, they should be feasible,
supported by the findings (which are in turn
supported by the data), and stated
unambiguously. Providing a list of options
for achieving a recommended improvement
can increase the likelihood that it will be
implemented. Another critical, although
sometimes subtle, consideration in
developing recommendations is the
"political climate." It's a fact of life that
some recommendations, no matter how
well you support them, will not be accepted
by those in authority due to factors beyond
your control. Just be aware of these
factors so that you can develop an
alternative recommendation or recognize
that your recommendation may not be
implemented until the climate changes or
until others have helped "tip the scale."
(from "Practical Evaluation for Public
Managers", Office of the Inspector General,
Department of Health & Human Services,
November 1994.)
ACT ON THE RESULTS

Is this the beginning or the end of the process?

When your efforts to collect customer data appear to be coming to a close, your real work may
just be starting! If this is the first time your organization has collected and analyzed customer
data in a systematic way, you are probably discovering a whole new world of information.
Depending on the feedback method you have chosen, you have probably created a baseline of
information that characterizes how your customers evaluate your products and services. You may
wish to repeat the same process again in a year, or in whatever period of time makes sense in your
situation.
Customers who respond to you expect you not only to act on their feedback, but also to tell
them what you have done. Whenever possible, you should build in some way to let them know.
To be cost-effective, the EPA program that conducted the feedback activity will want to make the
best use of the information. Therefore, this next stage of the process is vitally important to the
success of the final phase: action planning and implementation.
How do you decide what to do with the feedback you receive?
Once you receive and analyze the feedback, most people will be anxious to know the results. How
did we do? What's the bottom line? Work hard to avoid giving answers that over-simplify the
feedback you have received. Depending on the methodology you have used, you may have an
average score or rating to report, but chances are that you will have far more information that can
provide a wealth of insights about how your customers view the products and services they have
received from your organization.
How good is good enough?
That is a very hard question to answer. In fact, the only real way to answer it is to say "it
depends." For example, is an average score of 4.9 on a 6-point scale a good score? If last year's
average score was 2.5, indeed you may have reason to celebrate—and for more than one reason!
For one thing, your score nearly doubled. Even better, it leaped from the dissatisfied range to the
middle of the satisfied range. However, you may want to look deeper: How does the customer
rate other service providers who provide similar services? Is that organization getting ratings
above or below the 4.9? And what about the distribution of ratings—are some customers still
rating you below a 3.0 while some are rating you above a 5.5? If so, are the more positive ratings
obscuring the negative ones? If so, you still may have customers out there who are sharply critical
of the products and services you provide.
Setting acceptable goals for customer satisfaction ratings is a decision that each EPA organization
must make for itself. Keep in mind, however, that leading service organizations tend to:
• Target overall satisfaction scores at the upper end of the scale. On a 6-point scale, that should
be a 5.0, and in very competitive situations it may even be at the 5.5 level or higher.
• View any less-than-satisfied ratings as being unacceptable because they indicate an
opportunity for dissatisfied customers to quickly convey their dissatisfaction to others by word
of mouth. In the long run, that can undermine your efforts to achieve a reputation for service
and product excellence.
How do we know what to work on first?
Many organizations are overwhelmed with the amount of information they receive from customers.
This is especially true if a survey instrument is lengthy, or if there is a large volume of open-ended
comments and ideas. Decision makers, particularly at more senior executive levels, are likely to
ask the questions: What do we do first? What improvements will yield the best improvement in
overall customer satisfaction? What improvement or enhancement investments are worth making?
During the planning phase, you, your colleagues, and managers will have identified potential
methods and procedures for acting on the results of customer feedback activities. The following
are some ideas to consider.
Recovery. Be prepared to hear from customers who report a negative experience with EPA. Set
up a quick alert and response mechanism for any such case. (That may require a
special question that asks if the respondent is willing to be identified and contacted for follow-up.)
A quick response is a very positive way to convert a negative impression into a positive one for the
customer.
Report. Even if the primary means for action is an oral briefing, having written documentation for
others to read and refer to is a good idea. It also creates a historical record for tracking changes
over time.
Most people who will review information about customer feedback want to see graphics and
summary tables. Reports may include an executive summary, a description of the study objectives
and data collection methods, a comprehensive investigation of findings (illustrated with graphs and
tables), and conclusions and recommendations. To keep the report a reasonable length,
supplementary material can be presented in appendices.
Brief. Action-planning workshops get management's attention. Gather decision makers together
and go over the findings with a verbal presentation. Software graphics packages can help make
the briefing interesting and informative. Conducting a dry run before your presentation helps with
timing, pacing, and finding out how well you can verbally communicate your written findings.
Hard-copy handouts give participants a tangible reminder of the information conveyed.
Prioritize. It is likely that customer feedback will provide a wealth of information. Try to package
the information so that it leads the audience or reader to a series of practical action steps that fit
logically together. Acting on results may be more successful if several smaller action plans are
developed that contain three to five next steps, rather than one large plan that may appear
overwhelming.
Communicate. In addition to briefing management, it is a good idea to communicate results to
others. Sending a thank you letter to focus group participants and customers who completed the
survey is important. The letter should note what EPA learned and what will be done with the
findings. EPA employees are often eager to learn what customers have said, so results should be
summarized and distributed widely.
Improve. There is no reason to elicit customer feedback unless you will use the information to
improve EPA's processes, services, or products. Recognize that some employees may be excited
about possible changes, but others may feel threatened and be highly resistant. The best way to
use customer feedback may be to develop and define action plans. Action plans are most likely to
be successful when "owners" of each issue:
• are identified and included,
• help assess their activities and customers' feedback,
• participate in review and strategy sessions, and
• have an opportunity to discuss concerns and shortcomings in a nonthreatening, non-
confrontational environment.
Enhance. Sometimes, customers are satisfied, but want the agency to expand or further improve
what it offers. This is an opportunity to enhance products or services.
Reward. Conducting customer feedback activities can be exciting and worthwhile; the process can
also be exhausting and threatening. Be certain that you recognize the efforts of staff and
customers who made the activity possible and reward them for their involvement. Rewards can
take the form of public acknowledgment, mention in performance reviews, and attention to
findings.
Plan. Use the immediacy of the customer feedback activity to see what worked well and what
could be improved for the next similar activity. Identify aspects that facilitated or impeded
achieving the project's objectives, including features of processes followed for planning, data
collection, analysis, and development of findings.
Feed Results into the Strategic Plan and GPRA Goals and Planning Activities. In addition to
EPA's performance metrics, recent management initiatives, including the President's directives on
strategic planning, reinvention, and customer service improvement, and the Government
Performance and Results Act (GPRA), suggest that customer data be included in performance
data. To address these needs, quantitative data from surveys and trend data accumulated from
ongoing feedback mechanisms may be most useful. Focus group and other qualitative data can be
used to clarify customers' views.
As government agencies go about "reinventing" programs to meet customers' needs and
expectations and to comply with the requirements of GPRA, managers will need to develop
customer-based performance goals and indicators to assess progress. The basic way to do this is to
get input directly from customers.
"It ain't so ranch the things we don't know that
gets us in trouble. It's the things we know
i that ain't so;'1
Suggested Reading
Alreck, Pamela L. and Robert B. Settle. The Survey Research Handbook: Guidelines and
Strategies for Conducting a Survey. Irwin Professional Publishers, 1994.
Description:
Without technical buzzwords or statistical jargon, this book provides the methods and
guidelines for conducting practical, economical surveys from start to finish.
Dutka, Alan. AMA Handbook for Customer Satisfaction: A Complete Guide to Research,
Planning & Implementation. NTC Publishing Group, 1995.
Description:
Covers planning customer satisfaction activities, designing questionnaires, conducting surveys,
analyzing the results, applying the results, and maintaining customer satisfaction (Booknews, Inc.,
2/1/96).
Environmental Protection Agency. Survey Management Handbook, Volumes I (November 1983)
and II (December 1984).
Description:
Volume I focuses on survey design principles and ways to productively apply them in planning
and managing a contract survey related to regulatory decisionmaking. Volume II focuses on the
conduct and management of EPA-sponsored surveys. Both volumes contain good lists of
recommended additional reading.
Gerson, Richard F., Ph.D. Measuring Customer Satisfaction. Crisp Publications, 1993.
Description:
Provides a definition of customer satisfaction and warns of the dangers associated with poor
service or quality. The author describes research methods and includes sample forms and
questions. The book also explains analysis techniques and notes the importance of measuring
employees' satisfaction.
Green, Samuel B., Neil J. Salkind, Theresa M. Akey, Theresa M. Jones, and Sam Green. Using
SPSS for Windows: Analyzing and Understanding Data. Prentice Hall, 1997.
Description:
Offers both the beginning and advanced individual a complete introduction to SPSS. In two
parts, coverage proceeds from an introduction to how to use the program to advanced information
on the specific SPSS techniques that are available. Special features of this book include a high
level of readability and a class tested text, examples using screen shots and step-by-step procedures
for successful completion of data analysis, tips to help the user in both learning SPSS and making it
even easier to use, sidebars featuring material that is particularly interesting and important to
understanding the analytic technique under discussion, and guidance in the selection and
application of statistical techniques and interpretation, as well as documenting and communicating
results.
Hayes, Bob E. Measuring Customer Satisfaction: Survey Design, Use, and Statistical Methods.
ASQC Quality Press, 1998.
Description:
Provides detailed information about how to construct, evaluate, and use questionnaires.
Clearly presents the scientific methodology used to construct questionnaires utilizing the author's
systematic approach. Important scientific principles are presented in simple, understandable terms.
Both the qualitative and quantitative aspects of questionnaire design and evaluation are included.
Hill, Nigel. Handbook of Customer Satisfaction Measurement. Gower Publishing, 1996.
Description:
This book was written for customer service professionals, not statisticians. Using
worked examples and real-life case studies, this guide takes the reader step by step through the entire
process, from formulating objectives at the outset to implementing any necessary action at the end.
Among the topics covered are questionnaire design, sampling, interviewing skills, data analysis,
and reporting.
Hurlburt, Russell T. Comprehending Behavioral Statistics. Brooks/Cole Publishing Company,
1994.
Description:
A textbook that provides the same material found in most introductory statistics texts, but goes
beyond the standard by teaching students how to estimate statistics before computations are
performed. The optional ESTAT software helps students build this skill by allowing them to learn
to make accurate "eyeball-estimates". These estimation techniques are provided for both
descriptive and inferential statistics. Alternatively, students can learn estimation from information
in the book alone (Booknews, Inc., Portland, OR).
Kessler, Sheila. Measuring and Managing Customer Satisfaction: Going for the Gold. American
Society for Quality, 1996.
Description:
Includes chapters on topics such as: Problems and Opportunities with Current Customer
Satisfaction Measurement, Selecting Your Tools, CSS Data Analysis, Tools for Gathering Data,
and Tools for Designing, Analyzing, and Synthesizing Data.
McDaniel, Carl. Marketing Research Essentials. South-Western College Publishing, 1998.
Description:
Provides key chapters on: the concept of measurement and attitude scales; questionnaire
design; data processing, basic data analysis, and statistical testing of differences; and correlation
and regression analysis.
Fact Sheets
I Who are EPA's customers?
II Internal control procedures
III Sampling - the basics
IV Sampling - more on sample size
V Sampling - more advanced topics
VI OMB clearance
VII Unit of Analysis
VIII Examples of graphs
IX Survey Software & Corporate Pulse™ information
Fact Sheet I: Who are EPA's customers?
The following table can help you get started in identifying your particular customers and the
products or services they receive from you. Remember, some of you also work with
individuals who can be customers, stakeholders or clients, depending on the specific
interaction.
Customer: Service(s)/Product(s) received

Regulated industries, such as manufacturers and power companies:
    permits/compliance enforcement, public meetings, hearings, regulations, facility inspection

Agriculture:
    information, referrals, guidance and access, environmental cleanup

Small businesses, such as dry cleaners, printers, and developers:
    guidance, grants, environmental impact statements, standards

Consultants:
    information

Local governments:
    outreach, assistance, funds, information, brownfields, guidance, grants, program support,
    enforcement, coordination of resource efforts

States:
    information, direct implementation, grants, technical assistance, guidance, program support,
    enforcement, coordination of resource efforts

Tribes:
    guidance, information, technical assistance, grants, enforcement support

Grant applicants:
    applications, information

Public interest groups:
    funds, information, opportunities for input into decisions

Community-based groups, including environmental justice organizations:
    advice and assistance, data and information, grants, program support, training and job
    development, technology transfer

The public:
    information, site clean-up, Freedom of Information Act requests, hot line complaints,
    environmental education, access to environmental decision making, opportunities for
    involvement

Congress:
    information, responses, reports, action, regulations, program implementation

Program offices (one office to another at HQ):
    technical assistance, compliance assistance, research and development support

EPA employees:
    human resources support, facilities, financial management, training, information technology,
    audits, evaluation

EPA Regional program offices:
    policy guidance, money support, regulations

Other federal agencies:
    referrals, information, support, access to data, IAGs, site cleanup, FIS

International/global:
    technology and information transfer, standards, training, conferences, studies, monitoring,
    collaboration
Fact Sheet II: EPA Internal Control Procedures
Internal Controls Ensure the Integrity of Survey Data and Results.
The U.S. General Accounting Office and OMB have issued Internal Control Standards
that apply to all operations and administrative functions in assuring the quality, reliability and
integrity of information used for decision making. These standards and techniques, integrated
throughout a number of laws and requirements, apply to the collection, administration and
reporting of results from customer surveys and other forms of data used for purposes of
performance measurement, verification, planning and management action. Developing internal
control procedures is exercising good business practice and important in our role as stewards of
public trust. What constitutes an effective control system varies with program circumstances.
While controls may be as routine as second party reviews or limiting access to the data, they
should be generally applied to provide reasonable assurance that the objectives of customer
surveys will be reliably and cost effectively accomplished. Any audits, evaluations or verifications
of the data from customer surveys will usually start with an examination of the system of internal
controls.
Summary of Specific Control Standards and Techniques
Management must provide reasonable assurance and a supportive attitude that assets
(information) are safeguarded against waste, loss, unauthorized use, and misapplication; and that
supporting documentation be clear and available for examination. Management controls should
be logical, applicable, reasonably complete, efficient and effective in accomplishing management
objectives. Managers and employees must have professional and personal integrity and are
obligated to support the ethics program and maintain a level of competence that allows them to
accomplish their assigned duties. Managers should ensure that appropriate authority,
responsibility, and accountability are defined and delegated and that an appropriate organizational
structure is established to effectively carry out program responsibilities. Key duties and
responsibilities in authorizing, processing, recording, and reviewing official information and
transactions should be separated among individuals so that individuals do not exceed or abuse
their assigned authorities. Access to assets and records should be secured and limited to authorized
personnel, with custody assigned and maintained. All program operations, obligations, and costs
should comply with applicable laws and regulations, and resources should be used efficiently and
effectively and only as duly authorized.
Fact Sheet III: Sampling - the basics
If you have decided to use a survey approach for obtaining customer feedback, you need to
determine what sample size to use. This Fact Sheet first discusses sample sizes, sampling error,
and confidence intervals—all of which factor into decisions about the sample size. It then
presents a table for you to use in determining what sample size to use -- and tells you step by step
how to make use of that table. This Fact Sheet then describes how to go about randomly
selecting that number of customers from the total list of customers you have served during the
time period to be covered by the survey.
What Kinds of Sample Sizes Are We Talking About?
Before we give specific guidelines on how to choose the sample size, it will be useful to set some
general expectations. National public opinion polls like the Gallup Poll and the Roper Poll
typically use sample sizes in the range of 1,350 to 1,800. These polls use fairly large sample sizes
to obtain a result that represents the entire adult U.S. population with a sampling error on the
order of plus or minus 2.5% to 3%. Such small levels of sampling error are needed because the
polls often address matters of national importance. The decisions made, based in part on the
results of these national polls, may be far-reaching, long-lasting, and affect millions of people.
The surveys you will be conducting to obtain customer feedback will be of a very different nature.
The target group whose opinions you need will be much smaller: it will probably be the people
who have come to you and your colleagues in one specific program area within EPA, within a
limited time (e.g., during one year) to request certain products or services. We are therefore
talking about a target group of maybe as many as 500 to 1,000 people (few EPA programs
directly serve more customers than that) and in some cases 50 people or fewer. Furthermore,
although the decisions that will be affected by customer feedback are important, they will
probably not be far-reaching and long-lasting. The scope of decisions to be made in most cases
will be, for example:
• Should we change a process to reflect customer comments?
• Should we revise some of our written products?
• Should we provide a half day of customer feedback training to each staff member?
Even in the worst case—we make the wrong decision about whether our products need to be
revised and whether the staff members need further training—we will (if we continue to obtain
feedback from our customers at least once each year) discover our error soon enough and be able
to correct it, without incurring excessive or irreparable damage in the meantime.
Based on these considerations, it is reasonable to have higher sampling errors than those
associated with national surveys like the Gallup poll. We can feel comfortable with sampling
errors of 5 percent or even 10 percent.
Additionally, for getting feedback from EPA customers, we have relatively small target groups
who were served by a specific program during the time period of immediate interest. For this
reason, it is reasonable for you to use a much smaller sample size than is used in the Gallup and
Roper polls, which seek to accurately capture the opinions of millions of people.
Sampling Error
"Sampling error" is normally presented as a percentage with a plus or minus sign in front of it.
For example, the sampling error in one particular situation may be ± 3.5 percent. That means that
the true value of a given measure for the entire population—that is, the whole target group you
are getting feedback from—is the value obtained from your sample of customers, plus or minus
3.5 percent. If, for example, 62.4 percent of your sampled customers are satisfied, the actual
percentage of satisfied customers lies within the range between 58.9 percent (62.4 percent - 3.5
percent) and 65.9 percent (62.4 percent + 3.5 percent).
But that is not quite true. In fact, there is no range of reasonable size that we can identify for
which we can be certain that the true value for the full list of customers lies in that range.
Why is that so?
Because there's always the possibility of very flukey, very unlikely circumstances occurring --
with the result that the characteristics of the customers in the sample are very different from the
characteristics of the customers not in the sample. In such circumstances, the true value for all
customers will be very different from the value obtained from the customers in the sample
surveyed. The only way to get around this statistical fact is to specify "how certain we want to
be" that the true value does, in fact, fall with a specific range around the value we obtain from the
sample. This degree of certainly we are looking for is known as the "confidence level."
Confidence Level
The "confidence level" indicates how confident we want to be that the true value lies within a
specific range.
There is no one confidence level that is the "right" one to use. There are many different possible
confidence levels, and only you can decide which confidence level is appropriate for your survey.
Much of the work in the area of public opinion surveys uses the 95 percent confidence level. That
means that if you determine the sampling error using the 95 percent confidence level, you can be
95 percent certain that the true value for all your customers will lie within a specific percentage
band (one equal to the size of the sampling error) around the result you obtain from the sample of
customers you contact.
Another confidence level commonly used is the 90 percent confidence level. With a 90 percent
confidence level, you can be confident that 9 times out of 10, the true value falls within the range
defined by the value obtained from your sample of customers, plus or minus the sampling error.
Some analysts use 80 percent confidence intervals.
To decide what confidence level to use, you might want to think of a scale running from 80 to 95,
where 95 represents a high level of confidence and 80 represents a lower level of confidence.
Decide which confidence level to use based on the way in which your results will be used, how
products and services may be affected by the results, and the frequency with which you will
collect additional information to confirm or revise your findings.
Determining the Sample Size
Now that we have established appropriate expectations with regard to sampling error and sample
size, we will provide you with some guidance on selecting your sample size. Please recognize
that there are several factors to consider in determining the sample size. The information
provided here is intended to help get you started. Please refer as well to the additional
information provided in Fact Sheets IV and V. If you wish, you may also consult a statistician
within your Office at EPA. A list of EPA statisticians showing the EPA Office in which each of
these statisticians is located can be obtained from the Office of the Chief Statistician of EPA
within EPA's Center for Environmental Information and Statistics by calling 202-260-5244.
Number in Target Group    Sampling Error    Confidence Level    Sample Size

1000                      ±5                80                  141
1000                      ±5                90                  214
1000                      ±5                95                  278
500                       ±5                80                  124
500                       ±5                90                  176
500                       ±5                95                  218
200                       ±5                80                  90
200                       ±5                90                  116
200                       ±5                95                  132
100                       ±5                80                  62
100                       ±5                90                  74
100                       ±5                95                  80
50                        ±5                80                  39
50                        ±5                90                  43
50                        ±5                95                  45
1000                      ±10               80                  39
1000                      ±10               90                  64
1000                      ±10               95                  88
500                       ±10               80                  38
500                       ±10               90                  60
500                       ±10               95                  81
200                       ±10               80                  34
200                       ±10               90                  51
200                       ±10               95                  66
100                       ±10               80                  29
100                       ±10               90                  41
100                       ±10               95                  50
50                        ±10               80                  23
50                        ±10               90                  29
50                        ±10               95                  34
The above table is appropriate for simple random sampling (SRS), which is a sampling
procedure based on sampling without replacement. Simple random sampling is the most
commonly used sampling procedure. The table is based on the approximate formula given in Fact
Sheet IV. This approximate formula includes an adjustment comparable to the finite population
correction factor for each combination of target population and sample size.
The precise formula that can be used instead of this approximate formula is also given in Fact
Sheet IV. For a discussion of the finite population correction factor, see Fact Sheet IV. For a
discussion of the meaning and significance of sampling without replacement (as contrasted with
sampling with replacement), see the discussion of this matter in the last section of Fact Sheet V.
The procedure described below in this Fact Sheet for randomly selecting a sample from the full
list of customers served in a specific period of time is simple random sampling and is therefore
consistent with the above table.
Here's How to Use the Above Table
The instructions that follow assume that the unit of analysis for the survey will be the "person
served." (See Fact Sheet VII for a discussion of "Unit of Analysis.")
(1) Identify the number of persons you have served in the time period of interest. Find that
number in the column labeled "Number in Target Group."
(2) Select the confidence level that you consider to be the most appropriate given the magnitude
of the decisions that will be made based (in part) on the results obtained from the survey:
o If the decisions to be made using the survey results will be far-reaching, long-lasting
and/or costly, use the 95% confidence level.
o If the decisions to be made using the survey results will be less far-reaching, less long-
lasting or less costly, use the 90% confidence level.
o If the decisions to be made using the survey results will have more limited consequences,
mostly in the short-term (e.g., in the next 6-12 months) and the cost implications of the
decisions will be moderate, you may use the 80% confidence level.
(3) Select the level of sampling error you consider to be acceptable given the magnitude of the
decisions that will be made using the results obtained from the sample.
o For most EPA customer satisfaction surveys, a sampling error of ±10% should be
acceptable.
o In cases where the decisions to be made based (in part) on the survey results are of such a
nature that a smaller level of sampling error is needed, a sampling error of ±5% can be
used instead.
(4) Read off the corresponding sample size.
(a) If the total number of customers served falls between two of the values shown above
in the column "Number in Target Group," you can use interpolation to obtain an initial
estimate of the appropriate sample size.
(b) You can then use the approximate formula for determining sample size presented in
Fact Sheet IV to obtain a much better estimate of the sample size needed.
(c) You can stop here and make use of the approximate value for the sample size obtained
in step (4)(b) immediately above. Alternatively, you can, if you wish, now make use of
the trial and error approach presented in Fact Sheet IV or, even better, the combined
approach, also presented in Fact Sheet IV, to calculate the precise value for the sample
size needed.
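For example, suppose you served 300 customers and choose the 90% confidence level with a
sampling error of ±10%. Interpolating between the table rows for 200 customers (sample size 51)
and 500 customers (sample size 60) suggests an initial estimate of about 54; the approximate
formula in Fact Sheet IV then refines this to roughly 55.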
Here's How to Randomly Select a Sample of Customers Once You Have
Determined What Sample Size to Use
Once you have determined the appropriate sample size to use, the next step is to randomly select
that number of customers from the total number served in the time period of interest. Here is a
procedure you can use to make that random selection:
(1) Make a complete list of all the persons served in the period of interest for which you already
have (or can obtain, with a reasonable expenditure of effort) the needed contact information (i.e.,
name, plus address or phone number). Put the customers in alphabetical order so that duplicate
names are easy to spot. Eliminate any duplicate names.
(2) Once all duplicate names have been eliminated (so that each name appears only once), starting
at the top of the list, number each name. The result is the "master list" of customers served. The
number next to each name is that person's "customer number."
(3) Here is a computer based approach for selecting a sample of customers from the master list:
(a) You will use spreadsheet software (like Lotus 1-2-3 or Excel) to carry out the
remaining steps of this procedure. Before you begin to make use of any particular
spreadsheet software, first make sure that it has a "randomize" function. Not all
spreadsheets do.
(b) Enter the customer numbers in numerical order into the spreadsheet, one number per
row. Place each of these numbers in the second column of the spreadsheet, leaving the
first column in each row blank. The result will be a spreadsheet with the number of rows
equal to the number of customers and with the rows having the numbers "1", "2", "3", and
so on (up to the total number of customers served), with these numbers in the second
column of each row.
(c) Use the "randomize" function on the second column of the spreadsheet. The numbers
in the second column are now in random order.
(d) Enter numbers into the first column of each row. Enter the number "1" into this
column in the first row, enter "2" into this column in the second row, and so on. These
new numbers are the row labels.
(e) Mark off the number of rows corresponding to the sample size chosen above. For
example, if the sample size is 65, mark off the first 65 rows.
(f) The numbers appearing in the second column of the rows marked off in step (e) above
are the customer numbers corresponding to the customers to be included in the sample.
For each of these customer numbers, read off the name of the customer appearing next to
this number on the master list prepared in step (2) above and place it in a new list. This
new list is the list of customers selected for inclusion in the sample -- the people you will
contact during the survey and ask to respond to the survey questions.
(g) If, due to a lower than expected response rate, the number of customers from whom
responses are received is less than the desired sample size, and all reasonable followup
efforts have already been made to increase the response rate, go back to the spreadsheet
and mark off the additional number of rows needed to reach the desired sample size. The
numbers appearing in the second column of these additional rows are the customer
numbers for the additional customers to be added to the sample.
For an equivalent procedure that does not make use of a computer or a computer spreadsheet, see
the last section of Fact Sheet V.
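For those comfortable with a little programming, here is a minimal Python sketch of the same
selection procedure (the customer names are hypothetical; the "reserve" list plays the role of the
additional rows used in step (3)(g)):

    import random

    # Steps (1)-(2): a de-duplicated, alphabetized master list of customers served
    master_list = sorted(set(["A. Alvarez", "B. Baker", "C. Chen", "D. Diaz", "E. Evans"]))

    sample_size = 3
    shuffled = master_list[:]          # work on a copy; keep the master list intact
    random.shuffle(shuffled)           # the equivalent of the spreadsheet "randomize" step
    sample = shuffled[:sample_size]    # steps (e)-(f): mark off the first rows
    reserve = shuffled[sample_size:]   # step (g): draw from here if responses fall short

    print(sample)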
Fact Sheet IV: Sampling - more on sample size
The Effect of the Response Rate on Sample Size
The initial sample size is the number of customers you attempt to contact and obtain a response
from during the survey. The final sample size is the actual number of customers for which
responses were received during the survey. The response rate is the percentage of customers
included in the initial sample for which a usable response was received. The response rate will vary
depending on the kinds of customers being contacted, the kind of product or service received, the
kinds of questions asked in the survey, and so on.
Since the response rate is almost always less than 100%, the total number of customers from
whom responses are received will almost always be less than the number of customers initially
selected to be part of the sample. The table in Fact Sheet III shows the approximate sampling
error associated with the final sample size. Since a certain final sample size is needed (which
considers only the customers from whom responses were received), the number of customers
included in the initial sample (the initial sample size) must always be greater than the desired final
sample size.
For periodic surveys that reiterate in whole or in part questions asked in the previous iteration of
the same survey (in order to determine to what extent customer satisfaction has changed in the
intervening period, due to changes in service provision), the response rate for the next iteration of
the survey can be estimated by using the response rate actually observed in the previous
iteration(s) of that same survey. Where a particular survey is being conducted for the first time, it
would be reasonable to assume a response rate of, say, 85% when determining how many
customers to select for the initial sample. If the estimate of response rate turns out to be too high,
then more customers can be added to the sample later, using the procedure described in step (3)
(g) of the procedures presented on pages 6-7 of Fact Sheet III for selecting the sample of
customers to be contacted during the survey.
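For example (with assumed numbers): if the desired final sample size is 60 and you assume an
85% response rate, you would select 60 / 0.85, or approximately 71, customers for the initial
sample.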
Note, however, that it is better to achieve the desired final sample size by having a higher response
rate and a smaller total number of customers selected to be in the sample than through a lower
response rate and a higher number of customers selected to be in the sample. The reason for this is
non-response bias. Non-response bias is encountered if the customers who did not respond to the
survey are significantly different from those who did respond. Non-response may be due to the
inability of those conducting the survey to reach a specific customer in the sample (e.g., because
his or her telephone number has changed), or may be due to the unwillingness of that customer to
participate in the survey at all, or to answer one or more questions in the survey. Because some
customers contacted will answer some questions but not others, the degree of non-response
encountered will vary from question to question on the survey questionnaire.
Non-response bias is one source of the overall bias in the survey results resulting from the fact that
those surveyed are not representative of those in the target group that we are seeking to
characterize. Another source of such bias is the use of a poorly chosen or poorly constructed master
list from which we randomly select the sample of people to be surveyed. One of the best-known
examples of such bias is a national poll of likely voters that was conducted by the Literary Digest
in 1936, a few days before the presidential election that year. The poll showed that Alf Landon
would win the election. In fact, as became clear a few days later, Franklin Roosevelt won the
election by a landslide. The reason for the erroneous polling results was bias. The poll relied
primarily on lists of telephone subscribers, and because 1936 fell during the depths of the Great
Depression, many voters could not afford phone service. It turned out that those voters
who could not afford phones were much more likely to vote for Franklin Roosevelt than were
those who did have phones.
While this particular case gives an unusually dramatic example of bias, any level of non-response
(like any serious systematic errors in preparing the master list of people to be contacted) poses
potentially serious problems. Furthermore, the magnitude of these problems will generally not be
known because we in general do not know if and how the non-respondents differ from those who
did respond. After all, we were never able to gather any information about them in our survey that
could be used to see if and how they differ.
For this reason, non-response should always be kept to the lowest level achievable. This is
accomplished through active followup with those customers in the sample from whom we were not
at first able to get a response. Only after all reasonable followup efforts have been made should a
shortfall in the number of customers responding (compared with the desired final sample size) be
made up by selecting additional customers to be part of the sample.
An Adjustment Factor
The values for the sampling error shown in the table presented in Fact Sheet III are approximate.
One reason why they are approximate is that they do not take into account a factor that, if
considered, would result in lower values. We will now provide you with an adjustment factor that
you may use to account for this additional factor and, in so doing, obtain a more precise value for
the sampling error:
An adjustment factor to reflect that the sample result was greater than or less than 50%
One significant complication associated with the calculation of sampling error is that the sampling
error varies markedly with the magnitude of the sampling result obtained. By sampling result, we
mean, for example, the percentage of customers in the sample who say they are satisfied with the
product or service they received. All else being equal, the largest sampling error is associated with
a degree of satisfaction of exactly 50%. Any higher or lower level of satisfaction will result in a
lower level of sampling error. The lowest level of sampling error is associated with a level of
satisfaction of 100% or 0%.
Here are the specific values of this correction factor that should be used for various specific values
of the sample result:
The Sample Result
(i.e., the percentage of customers in the sample
who said they were satisfied with the product
or service received)                                Correction Factor

99%                                                 0.20
98%                                                 0.28
95%                                                 0.44
90%                                                 0.60
80%                                                 0.80
70%                                                 0.92
60%                                                 0.98
50%                                                 1.00 (i.e., no correction)
40%                                                 0.98
30%                                                 0.92
20%                                                 0.80
10%                                                 0.60
5%                                                  0.44
2%                                                  0.28
1%                                                  0.20
Thus, if the sample result shows that 90% of the customers in the sample were satisfied with the
product or service they received, then the associated sampling error is obtained by multiplying 0.60
times the sampling error shown in the standard tables (including the table provided in Fact Sheet
III). So if the sampling error shown in the table is ±10% for the sample size used, then the actual
sampling error is really only ±6% (= ±10% x 0.60).
If the sample result shows that 80% of the customers were satisfied, and the sampling error
obtained from the table was ±10%, the actual sampling error associated with that sample result
would be ±8% (= ±10% x 0.80). These are rather significant adjustments.
Since the levels of satisfaction likely to be obtained for most EPA products and services are likely
to be in the range of 80 to 90% or more, it is highly advisable to take this adjustment factor into
consideration: (1) when estimating the sampling error that will result from use of a specific sample
size, and (2) when determining the actual sampling error associated with a given sample result after
the sampling process has been completed and the results have been obtained.
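The factors in the table above are consistent with the quantity 2 x the square root of [p x (1 - p)],
where p is the sample result expressed as a decimal fraction. The short Python sketch below
assumes that relationship in order to adjust a table value; the figures used are those from the first
example above:

    import math

    def correction_factor(p):
        # reproduces the table above: 1.00 at p = 0.50, 0.60 at p = 0.90, etc.
        return 2 * math.sqrt(p * (1 - p))

    table_error = 0.10                        # sampling error from the standard table
    adjusted = table_error * correction_factor(0.90)
    print(round(adjusted, 3))                 # 0.06, i.e., plus or minus 6%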
There is a major implication of the fact that the sampling error varies with the sample result.
Since the sample result varies from question to question asked in the survey, there is no one level
of sampling error associated with the survey as a whole. Instead, there will be a different level of
sampling error for each result obtained (i.e., a different sampling error for the response to each
question). If the degree of satisfaction obtained from the customers sampled is close to 50% on
one question and close to 100% on another, the sampling error for the second will be much smaller
than (possibly much less than half of) the sampling error for the first. The plus or minus figure
given should therefore be different for each result reported (i.e., it should be different for each
question for which the response is shown). It is common practice, however, for only one level of
sampling error to be shown: this may either be (1) the largest sampling error associated with any of
the results reported or (2) the sampling error that would be obtained in the worst possible case,
i.e., if the result had been a level of satisfaction of 50%.
In presenting the results for customer satisfaction surveys conducted at EPA, those preparing the
results may either conform to this common practice or they may give question-specific sampling
errors, as they prefer. The latter can be accomplished by simply presenting a plus or minus figure
after each sample result shown.
For example:
Question on the Survey for Which
the Result Is Being Reported          Degree of Satisfaction Reported

Question 1                            83% ± 8%
Question 2                            91% ± 6%
Question 3                            78% ± 9%
Question 4                            87% ± 8%
Question 5                            94% ± 5%
Precise Formula for Calculating the Sampling Error
Here is an alternative approach for (1) estimating the sampling error that will occur in a planned
sampling survey or (2) calculating the actual sampling error associated with a specific result in a
survey that has already been completed. Instead of obtaining values of the sampling error from a
table (like that included in Fact Sheet III) and then applying the adjustment factor presented above
in the previous section (and if necessary also applying the second adjustment factor presented in
the next section below), simply calculate the sampling error directly from the precise formula.
Here is the precise formula for calculating the sampling error:
The sampling error = Z x the square root of [ ((p x q) / n) x ((N - n) / (N - 1)) ]

where   p = the sample result (i.e., the percentage of customers who were
            satisfied with the product or service they received)
        q = 1 - p
        n = the sample size
        N = the total number of customers served
        Z = a constant coefficient (i.e., multiplier) associated with
            the confidence level that is being used. (This must be
            looked up in a table in a statistics book.) Each of these
            constants is known as the Z-score for that confidence level.
Here are the coefficients (i.e., Z-scores) for the three confidence
levels that have been suggested for use in these Guidelines:
For the 95% confidence level, Z = 1.960
For the 90% confidence level, Z = 1.645
For the 80% confidence level, Z = 1.282
The precise formula presented above is based on the simple random sampling (SRS) procedure, in
which the sample is drawn using the sampling without replacement procedure. Simple random
sampling is the most commonly used sampling procedure and is the procedure recommended in
these Guidelines for use in customer satisfaction surveys conducted by EPA. It is the procedure
reflected in the table presented in Fact Sheet III, in the sample selection procedure presented in
Fact Sheet III, and it is assumed in all other discussions of sample selection in the Guidelines, in Fact
Sheets III and IV, and in all but the last section of Fact Sheet V. For further discussion of this
topic, see the last section of Fact Sheet V.
The above formula will give the exact size of the sampling error for any combination of: number
of customers served, sample size, sample result and confidence level. Using this formula
automatically takes into account and reflects the differences in the magnitude of the sampling error
due to differences in the sample result (which was discussed in the previous section of this Fact
Sheet) and also automatically includes the finite population correction factor, which is discussed in
the next section of this Fact Sheet.
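For readers who would rather compute than read tables, here is a minimal Python sketch of the
precise formula; the example numbers are hypothetical:

    import math

    def sampling_error(p, n, N, z):
        # precise formula for simple random sampling without replacement:
        # p = sample result (decimal fraction), n = sample size,
        # N = total customers served, z = Z-score for the confidence level
        q = 1 - p
        return z * math.sqrt((p * q / n) * ((N - n) / (N - 1)))

    # Hypothetical example: 90% satisfied, sample of 50 out of 200 served, 95% confidence
    print(round(sampling_error(0.90, 50, 200, 1.960), 3))   # 0.072, about plus or minus 7.2%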
Another Adjustment Factor
The table presented in Fact Sheet III reflects both the sample size (n) and the total number of
customers served (N) in determining the sampling error for any given confidence level selected.
You may come across reference books on statistics or sampling procedures that present tables in
which the sampling errors are shown for various different sample sizes but in which no
consideration is given to the total number of customers served. In such cases, to get the actual
sampling error, it is necessary to multiply the sampling error given in such tables by an additional
factor known as the finite population correction factor.
The finite population correction factor
The standard sample survey techniques were developed for use in situations where there is a very
large number of people in the pool of those from whom the sample is to be drawn. This is true, for
example, of surveys of national public opinion. The standard formulas and tables used are therefore
predicated on sampling from a very large pool, one that is, in practical terms, "as good as infinite"
and is treated by statisticians as though it were infinite.
When the number of people in the target group from which the sample is to be drawn is much
smaller, a correction factor (one known as the finite population correction factor) should be used
to correct for this circumstance. The finite population correction factor can always be used (its use
never gives an incorrect result), but it is generally not needed if the sample size chosen is less than
about one-tenth (10%) the size of the target group from which the sample is to be selected.
If the sample size of customers to be contacted is greater than 10% of the total number of
customers served, then the finite population correction factor should be used in calculating the size
of the sampling error. These circumstances will apply in a large percentage of customer
satisfaction surveys conducted by EPA. Luckily, use of the finite population correction factor
always results in a lower sampling error than would have been obtained without its use. Therefore,
if you are satisfied with the magnitude of the sampling error calculated for a specific survey
without using the finite population correction factor, then there is no need to use it for that survey,
unless you want to know exactly how much lower the true sampling error is.
The finite population correction factor (FPCF) can be calculated using the following formula:

    FPCF = the square root of [ (N - n) / (N - 1) ]

where:  N = the total number of customers served
        n = the number of customers in the sample (i.e., the sample size)
The corrected sampling error is obtained by multiplying the finite population correction factor and
the sampling error obtained from a standard table that considered only sample size and confidence
level (and did not consider the size of the target group (i.e., the population) from which the sample
is to be drawn). Because of the way the finite population correction factor is calculated, the
adjustment factor varies with the sample size as a fraction of the size of the target population from
which the sample is to be drawn. See the following table:
Sample Size as a Fraction (Percentage)         Approximate Value of the
of the Size of the Target Population (n/N)     Finite Population Correction Factor

10%                                            0.95
20%                                            0.89
40%                                            0.77
50%                                            0.71
60%                                            0.63
70%                                            0.55
75%                                            0.50
As can be seen from the above table, if the sample size is approximately 10% of the size of the
target group (i.e., the total number of customers served, from which the sample is to be drawn),
then the correction factor is approximately 0.95 -- thus, when using a sample size that is 10% of
the total number of customers, the sampling error will be reduced to 95% of what it otherwise
would have been (e.g., the sampling error would be reduced from ±10% to ±9.5%).

If the sample size is 20% of the size of the target group, then the correction factor is approximately
0.89 -- thus, when using a sample size that is 20% of the total number of customers served, the
sampling error will be reduced to 89% of what it otherwise would have been (e.g., the sampling
error would be reduced from ±10% to ±8.9%).

If the sample size is 50% of the size of the target group, then the adjustment factor will be
approximately 0.71 -- thus, when using a sample size that is 50% of the total number of customers
served, the sampling error will be reduced to 71% of what it otherwise would have been (e.g., the
sampling error would be reduced from ±10% to ±7.1%).
There is a general rule of thumb used by many statisticians: the finite population correction factor
should be applied whenever the sample size is 10% or more of the size of the target group from
which the sample is to be drawn.
Note, however, that use of the finite population correction factor always gives a more accurate
value for the sampling error than would be obtained by not using it. You should therefore never be
reluctant to use it. It is just that there are certain circumstances (i.e., when the sample size
obtained from a standard table is less than 10% of the size of the target population) when it is
possible to disregard it (i.e., not apply it) without there being an undue adverse effect on the
estimated size of the sampling error.
Note also that the last element in the precise formula for calculating sampling error given in the
previous section of this Fact Sheet is the finite population correction factor. Use of that precise
formula therefore will ensure that the finite population correction factor is automatically taken into
account when determining the size of the sampling error.
One final technical note: The reason why the second column in the table presented above in this
section is labeled the approximate value of the finite population correction factor rather than the
exact value is that the following approximation was used to calculate the value shown in the
second column that corresponds to each value in the first column:
Instead of using the precise formula for the finite population correction factor:

    FPCF = the square root of [ (N - n) / (N - 1) ]

The following approximate formula was used:

    FPCF (approx.) = the square root of [ (N - n) / N ]
For most values of N (the size of the target population), the difference between the true value
obtained from the precise formula and the approximate value obtained from the approximate
formula is very small.
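For instance, with N = 200 and n = 50, the precise formula gives the square root of 150/199
(about 0.868), while the approximate formula gives the square root of 150/200 (about 0.866) -- a
negligible difference.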
A Trial and Error Procedure and An Approximate Formula for Determining Sample
Size
A Trial and Error Procedure
The precise formula given above can be used directly to determine the level of sampling error for
any combination of confidence level, number of customers served, and sample size. That same
formula can also be used to determine sample size when the desired confidence level, the desired
maximum level of sampling error and the number of customers served are known. It just cannot be
solved directly to obtain sample size in such situations. This is so because sample size (n) appears in
two different places in the equation, and the form of the equation is such that it is not possible to
rearrange the equation so that it can be used to solve directly for sample size. Instead, you must
use the precise formula as shown to determine the needed sample size. You can do so as follows:
(1) Begin by guessing what the needed value of the sample size is. (Any guess will do as a
starting point, although the closer to the true value your guess turns out to be, the sooner
you will be finished.)
(2) Use that value of the sampling size (i.e., your initial guess) to solve the precise formula
equation for sampling error.
(3)(a) If the value of sampling error you obtain from the formula is less than the
maximum level of sampling error you are willing to accept, then you should decrease your
guess as to the corresponding value of the sample size and solve the equation again.
(3)(b) If the value of sampling error you obtain from the formula is greater than the
maximum level of sampling error you are willing to accept, then you should increase your
guess as to the corresponding value of the sample size and solve the equation again.
(4) Continue steps (3)(a) and (3)(b) above until you arrive at the appropriate sample size.
That will be the smallest value of the sample size that, when plugged into the precise formula
along with the number of customers served and the Z-score corresponding to the
confidence level you have selected, gives a sampling error that equals (or is slightly less
than) the level of sampling error you have set as the maximum you are willing to accept.
The Approximate Formula for Determining Sample Size
The trial and error approach described above will always give you the best possible value for
sample size. However, the process for arriving at that value can be rather tedious. For this reason,
an approximate formula has been developed that will give you a reasonable value for the sample
size that is close to the one you would get from the above trial and error procedure. This
approximate formula needs only to be solved once -- no repeated calculations are needed. The
resulting value will, however, in most cases, be a larger sample size than what you would get from
the trial and error procedure. That is, the approximate formula will give you a larger sample size
than that actually needed to achieve your target level of sampling error.
Here is the approximate formula:

    n = (N x Z²) / ( [4 x (N - 1) x E²] + Z² )
Where:
n = sample size
N = number of customers served (from which the sample is to be drawn)
E = the maximum acceptable level of sampling error, expressed as a decimal
fraction (e.g., 5% = 0.05)
Z = the Z-score corresponding to the confidence level selected
(this can be obtained from most standard statistics references,
including most basic statistics textbooks). The Z-scores for the
80%, 90% and 95% confidence levels are given above in this Fact Sheet in
conjunction with the precise formula.
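For example: with N = 500 customers served, a maximum acceptable sampling error of E = 0.10,
and the 90% confidence level (Z = 1.645, so Z² = 2.706), the formula gives
n = (500 x 2.706) / ([4 x 499 x 0.01] + 2.706) = 1353 / 22.67, or about 60 -- which matches the
corresponding entry in the table in Fact Sheet III.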
A Combined Approach
You can, if you wish, make use of both the approximate formula and the trial and error approach
given above. Begin by using the approximate formula to get an approximate value for the sample
size. Then use this approximate value as your first guess for sample size in the trial and error
approach, and proceed from there with the trial and error approach as described above.
This combined approach will allow you to come up with the lowest possible sample size with the
least amount of effort.
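Here is a minimal Python sketch of the combined approach. It assumes the worst-case sample
result (p = 50%) that underlies the approximate formula; supply a different p to see how much
smaller the sample can be when results are expected to be far from 50%:

    import math

    def sampling_error(p, n, N, z):
        # precise formula (simple random sampling without replacement)
        return z * math.sqrt((p * (1 - p) / n) * ((N - n) / (N - 1)))

    def sample_size(N, E, z, p=0.50):
        # Step 1: the approximate formula provides the starting guess
        n = math.ceil(N * z**2 / (4 * (N - 1) * E**2 + z**2))
        # Step 2: trial and error -- walk the guess down while the
        # resulting sampling error stays within the acceptable maximum E
        while n > 1 and sampling_error(p, n - 1, N, z) <= E:
            n -= 1
        return n

    print(sample_size(1000, 0.05, 1.960))   # 278, matching the table in Fact Sheet III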
Why is So Much Attention Given to Sample Size?
Much of Fact Sheet III and all of this Fact Sheet have been devoted to considerations related to
sample size. Why, you might ask, do people spend so much time worrying about sample size?
The reason is: if a larger sample size is used than is really needed, you will have incurred a greater
cost in conducting the survey and you will have imposed a greater response burden on your
customers than was needed.
o The extra, unneeded costs alone can be quite considerable. For each extra customer in
the sample, additional time has to be spent: conducting the telephone interview (we are
here assuming for clarity that a telephone survey was conducted), following up with those
who did not answer when originally called, following up with those who did not initially
agree to participate, and so on. It also means more data to be recorded and analyzed.
o The extra burden on your customers in terms of time spent responding can also be quite
large when the total time spent by all customers surveyed, taken together, is considered.
If the sample size used turns out to be greater than was needed, then the extra cost incurred and
the extra burden imposed were wasted.
On the other hand, if too small a sample size is used, then you may end up with so much
uncertainty about the true degree of satisfaction of your customers (because the sampling error
was so large) that you do not learn much from the survey. You were uncertain about their degree
of satisfaction before (that's why you decided to conduct the survey) and you may now find that
your level of uncertainty afterward is not much reduced. In this case, the whole cost of conducting
the survey may prove to have been wasted.
Keep in mind that any wasted time and dollars associated with conducting surveys using sample
sizes that were too large or too small are time and dollars that could otherwise have been used to
improve the products or services you provide to your customers. So the effort you spend helping
to ensure that you use the most appropriate sample size will help maximize the time and dollars
you will have left for improving customer service in the ways your survey has shown to be needed.
In conclusion, sample size is very important - you want it to be large enough to give meaningful
and useful results, but not so large that you incur unneeded extra expense or unduly burden your
customers with the time needed to respond. What this adds up to is that when you conduct a
customer satisfaction survey, you should use the smallest possible sample size that will give you
results of sufficient precision to be meaningful and useful to you. And greater precision ~ which
is another way of saying a lower level of uncertainty — comes from a smaller level of sampling
error. The smaller the sampling error the greater the precision.
So you want to choose the smallest possible sample size that will give you a level of sampling
error that you can live with. By that we mean that the results will be precise enough to give you
the degree of certainty you need about: (1) what the true current level of satisfaction of your
customers is, and (2) how their degree of satisfaction has been changing over time -- as a result of
your continuing efforts to improve your products and services.
Fact Sheet V: Sampling - more advanced topics
This Fact Sheet addresses the following more advanced topic in the area of sampling:
o Stratified sampling - what it is, when to use it, how to do it.
This Fact Sheet also provides:
o A crosswalk between the customized terminology with regard to customer feedback sampling
used in these Guidelines and Fact Sheets and the more general statistical terminology used by
survey statisticians.
o A discussion of some other kinds of errors (beyond sampling error and non-response bias) that
will be encountered in sampling.
o A discussion of sampling without replacement and how this differs from sampling with
replacement. This section also contains a description of how to randomly select a sample
without using a computer or a computer spreadsheet.
Stratified Sampling
In the section of the Guidelines describing how to analyze the data obtained from a sample of
customers, an example is given on pages 39-40 of a simple procedure that can be used to determine
if the degree of satisfaction of your customers varies among different kinds of customers. The
procedure presented there is simple and useful and will be quite satisfactory for use in conjunction
with most customer satisfaction surveys conducted at EPA.
There is, however, one aspect of that procedure that should be noted: the sample results obtained for
the different specific kinds of customers (e.g., educators and advocates in the example on pages 39-
40) are not as precise as the results obtained for all kinds of customers in the sample, considered all
at once. The reason for this is that only a portion of the sample is relevant to each specific kind of
customer. Therefore the sampling error determined for the entire sample does not apply to each
specific kind of customer. Instead, in effect, we have a smaller sample size for each specific kind of
customer, and there will therefore be higher sampling error for each specific kind of customer, when
considered separately.
Again, for most customer satisfaction surveys conducted by EPA, the results obtained from the
procedure shown on pages 39-40 will still be quite satisfactory and the increased sampling error
associated with each specific kind of customer should not be of concern. The conclusions drawn from
those results will still be meaningful and useful.
There may, however, be some situations in which it is so important to track the degree of satisfaction
of one or more specific kinds of customers that you decide that the sampling error for each specific
kind of customer of concern must be kept within specified limits. In such cases, a specialized
procedure known as stratified sampling may be used.
-------
The basic principle underlying stratified sampling is very simple. For each subgroup of customers for
which it is important to know their degree of satisfaction with a specified level of precision (i.e., with
a known maximum level of sampling error), the sample size for that subgroup should be determined
separately and a random sample of those customers selected separately. This can be done by
applying the procedures presented in the Guidelines and in Fact Sheets III and IV separately for
each of these subgroups of customers. It is as though you are no longer conducting one survey but
instead are conducting two or three (or more) surveys simultaneously, one for each subgroup of
customers who are so important that their degree of satisfaction must be known and tracked with a
known maximum level of sampling error for that specific subgroup.
The results are then analyzed separately for each of these different subgroups, again using the
methods of analysis that are presented in the Guidelines. The results of these analyses are then used
to track separately the degree of satisfaction of each of these subgroups of customers.
The results from each of these subgroups may then also be combined to give an overall result for all
the subgroups surveyed taken together. They may be combined by using the following formula:
P_all = (f1 x p1) + (f2 x p2) + (f3 x p3) + . . .
Where the equation continues for as many subgroups of customers as were used in the survey
and where:
P_all = the sample result for all customers served
p1 = the sample result for the first subgroup of customers (i.e., the percentage of responding
customers in the first subgroup who reported being satisfied)
f1 = the fraction (percentage) of all customers served who fall in the first subgroup of
customers
p2 = the sample result for the second subgroup of customers
f2 = the fraction (percentage) of all customers served who fall in the second subgroup of
customers
etc.
For the above formula to give an accurate result for all customers, every customer must be included
in one (and only one) of the subgroups. If, for example, there are five different kinds of customers
and only two kinds are so important that they have to be tracked separately with a known level of
sampling error, then the remaining three kinds of customers can be included in a third subgroup
consisting of the remaining three kinds of customers grouped together.
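
For readers who would like to check the combining formula on a computer, here is a minimal
illustrative sketch in Python. The subgroup names and figures are hypothetical; only the formula
itself comes from the text above.

    # Hypothetical figures: (fraction of all customers served, sample result)
    # for each subgroup of customers.
    subgroups = {
        "educators": (0.30, 0.80),    # 30% of customers served; 80% satisfied
        "advocates": (0.20, 0.65),
        "all others": (0.50, 0.72),
    }

    # P_all = (f1 x p1) + (f2 x p2) + (f3 x p3) + . . .
    p_all = sum(f * p for f, p in subgroups.values())
    print(f"Overall sample result: {p_all:.1%}")       # prints 73.0%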
V-2
-------
There is one further consideration. Since you have set the sample size separately for each of the
subgroups in order to get a known level of sampling error for each of those subgroups, you will
know the level of sampling error for each subgroup, but you do not know what the level of sampling
error is for any sample results obtained (using the formula given above) for all the customers taken
together. There is, however, a second formula that can be used to determine this:
E_all = the square root of the following sum:

sum = (N1^2 / N^2) x ((N1 - n1) / (N1 - 1)) x (E1^2 / n1)
    + (N2^2 / N^2) x ((N2 - n2) / (N2 - 1)) x (E2^2 / n2)
    + etc.
Where:
E_all = the sampling error for sample results applicable to all customers taken together
N = the total number of customers served
E1 = the sampling error for the first subgroup of customers
N1 = the total number of customers served for the first subgroup of customers
n1 = the sample size for the first subgroup of customers
E2 = the sampling error for the second subgroup of customers
N2 = the total number of customers served for the second subgroup of customers
n2 = the sample size for the second subgroup of customers
etc. (for each additional subgroup of customers served)
Note that what is referred to as sampling error in this section on stratified sampling is actually the
standard error since it was values obtained from the sample that were used in computing it. The
standard error is used by statisticians to estimate the sampling error. (See the next section of this
Fact Sheet for further clarification of this point.)
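
Here is a minimal illustrative sketch, in Python, of the second formula. The subgroup counts, sample
sizes and sampling errors are hypothetical, chosen only so that the subgroup totals add up to a
population of 534 customers served.

    import math

    N = 534   # total number of customers served (hypothetical)

    # For each subgroup: (N_i customers served, n_i sample size,
    # E_i sampling error for that subgroup)
    subgroups = [
        (160, 60, 0.05),
        (107, 50, 0.06),
        (267, 70, 0.05),
    ]

    total = 0.0
    for N_i, n_i, E_i in subgroups:
        total += (N_i ** 2 / N ** 2) * ((N_i - n_i) / (N_i - 1)) * (E_i ** 2 / n_i)

    E_all = math.sqrt(total)   # sampling error for all customers taken together
    print(round(E_all, 4))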
V-3
-------
A Guide to the Applicable Statistical Terminology
In the discussion of surveys and sampling in the Guidelines and Fact Sheets, we have used
terminology customized to the special circumstances of surveys conducted by EPA to assess
customer satisfaction and have modified some other statistical terminology to make it easier for non-
statisticians to understand. Some users of the Guidelines may, however, wish to consult textbooks,
reference books or journal articles on one or more aspects of sampling. For that reason, we here
provide a crosswalk between the terminology used here and the more general terminology used in
general works and articles on sampling procedures.
Each numbered entry below pairs the terminology used in these Guidelines and Fact Sheets with the
more general terminology used by sampling statisticians.

1. Guidelines and Fact Sheets: "the customers served in a specific period of time" (for which
   information about their level of satisfaction is being sought), or "the customers served"
   Sampling statisticians: "the target population," "the population," "the universe," or "the
   target group"

2. Guidelines and Fact Sheets: "the sample of customers" or "the customers in the sample"
   Sampling statisticians: "the sample"

3. Guidelines and Fact Sheets: "the sample result" (shown as the percentage of customers who
   responded who said that they were satisfied in response to a specific question about some
   aspect of the product or service they received)
   Sampling statisticians: "the sample proportion"
V-4
-------
4. Guidelines and Fact Sheets: the "master list" of customers served
   Sampling statisticians: "the sampling frame"

5. Guidelines and Fact Sheets: "the sampling error"
   Sampling statisticians: "the sampling error" (when calculated using values obtained from the
   full target population, which is not possible under normal circumstances because it would be
   too costly since it would require a census of the target population; and it is to keep costs to a
   reasonable level that we are using a sampling procedure in the first place)
   or
   "the standard error" (when calculated using values obtained from the sample, which are what
   we must use in most cases; this is so since the corresponding values for the entire target
   population are not known and cannot be learned without going to great additional expense,
   and going to such expense would defeat the purpose of using a sampling procedure rather
   than a census). "The standard error" is also known as "the standard error of the mean."

6. Guidelines and Fact Sheets: "the unit of analysis"
   Sampling statisticians: "the unit of analysis" or "the sampling unit"

7. Guidelines and Fact Sheets: "the 95% confidence level"
   Sampling statisticians: "a 95% confidence level"
   By tradition, the article "a" is used, which suggests that there is more than one kind of 95%
   confidence level. In fact there is only one kind of 95% confidence level, so "the" is the more
   appropriate article, and "the" is used in these Guidelines to improve clarity for the sake of
   the non-statisticians seeking to make use of them.
V-5
-------
Other kinds of error experienced in sampling surveys
We have so far limited our discussion of error encountered in sampling surveys to three kinds of
error: (1) sampling error, (2) non-response bias, and (3) the bias associated with use of a poorly
chosen master list of the persons in the target group to be sampled. Each of these has been discussed
earlier: sampling error was discussed in Fact Sheet III; non-response bias and the bias associated
with use of a poorly chosen master list were discussed in the first section of this Fact Sheet. We will
now describe briefly two other kinds of error that occur in sampling surveys:
o Reporting error. This is the kind of error that results when the customer misunderstands
the question asked and therefore gives an incorrect answer. It can also occur if the customer
misunderstands how to use an interval scale (by thinking, for example, that a high number on
the scale means "highly unsatisfied" when in fact it means "highly satisfied"). It can also
occur when a customer purposely gives a wrong answer.
This kind of error can be kept to a minimum by pre-testing the questionnaire on
people who are similar to those who will be surveyed. Such pre-testing can help
identify: (1) questions that are confusing, (2) instructions on how to respond (e.g.,
how to use the interval scale to answer a question) that are unclear, and (3)
questions that customers may find so intrusive, threatening or offensive that they
may prompt some customers to give a false answer.
o Recording error. This is error on the part of the staff conducting the survey. Even if the
customer has responded correctly, the survey staff may misunderstand or mis-record what the
customer said (in a telephone or in-person survey) or may misread or mis-record what was
written (in a mail survey).
It should, however, be noted that, unlike sampling error, which results from the use of a probabilistic
sampling procedure and is an inevitable consequence of using such a procedure, the additional kinds
of error identified above will be encountered whenever customers are contacted and asked to
respond to specific questions. These kinds of error would, for example, still occur even if a full
census were conducted of all customers served. In other words, use of a census will eliminate
sampling error, but reporting error and recording error will still occur. Similarly, non-response bias
will occur in the results obtained from a census just as much as in those obtained from a sampling
survey. That's what the recent controversy about the year 2000 Census of the U.S. population has
been all about. The concern is that past censuses have had a non-response bias that works to the
disadvantage of the poor, because those with low incomes or no incomes are disproportionately less
likely to participate than those who are more well off economically.
Should the sample be selected with or without replacement?
There is one basic decision that must be made before selecting a random sample of customers from
the larger total number of customers served. That decision is: will the selection be made with
replacement or without replacement?
V-6
-------
In order to explain what this means, we will describe an alternative procedure for selecting a random
sample of customers that tracks closely with that presented in Fact Sheet III. This alternative
procedure will, however, be a simpler one for which it is easier to follow the implications of each
step. In particular, the process we are about to describe will parallel perfectly the procedures in Fact
Sheet III, but the procedure will be described not in terms of entering customer numbers into a
spreadsheet on a computer, but rather in terms of putting the customer numbers on slips of paper,
putting these slips into a box, and then pulling these slips from the box. This simpler alternative
approach will make it clearer what the difference is between sampling with replacement and without
replacement.
Here is the alternative approach for selecting a random sample of customers:
Begin by carrying out the actions called for in steps (1) and (2) of the procedure that begins on page
5 of Fact Sheet III. After having compiled the master list of persons served described in step (2),
take the following additional steps:
(3) Here is the alternative approach for selecting a sample of customers from the master list:
(a) For each of the customers on the master list, place the number corresponding to that
customer on a slip of paper, one customer number per slip of paper. Then fold each slip of
paper in a uniform way. All slips of paper used should be identical in every way.
(b) Put the folded slips of paper into a box, and shake the box sufficiently that the slips of
paper have been well mixed within the box.
(c) Have someone begin to remove slips of paper from the box one at a time. While this is
being done, the box should be held or positioned in such a way that the person removing the
slips of paper cannot see the slips of paper that he or she is picking from.
(d) Have a sheet of paper (a recording sheet) ready with numbers running from 1 to a number
at least twice the size of the sample size that has been chosen. For example, if the sample
size chosen was 65, have numbers on the sheet of paper running up to 130. This sheet will be
used to record the outcome of the selection process.
(e) As each slip of paper is picked from the box, unfold the slip of paper, read the customer
number on it, and record that number in order on the recording sheet. Then set that slip of
paper aside (or throw it away). The customer number on the first slip picked should be
recorded next to the number 1 on the recording sheet. The customer number on the second
slip of paper should be recorded next to the number 2 on the recording sheet, and so on.
Continue this process until a slip of paper and a corresponding customer number have been
picked for each number on the recording sheet.
(f) Determine the number of customers to be included in the initial sample by taking into
consideration both the desired sample size and the anticipated response rate. Thus, for
example, if the desired sample size is 65 and the expected response rate is 85%, the number
V-7
-------
of customers to be included in the initial sample should be 65/85% = 76.47. The size of the
initial sample should therefore be 77 (since it is always prudent to round fractions up to the
next whole number in situations where a certain minimum level must be achieved).
(g) The initial sample will then consist of the customers whose customer numbers are
recorded next to positions 1 through 77 on the recording sheet. Carry out the survey using
these 77 customers.
(h) If the response rate in the survey is 85% or greater, then nothing further need be done.
No further use will need to be made of the recording sheet. If, however, the response rate
turns out to be less than 85% and all reasonable followup actions have been taken to increase
the response rate and it is still less than 85%, then a determination should be made as to how
many additional customers need to be added to the sample to bring the number of customers
responding up to 65.
If for example, the number of customers so far responding is 62, then 3 more
responding customers are needed. Since the response rate so far has been 62/77 =
80.5%, the minimum additional number of customers who need to be added to the
sample is 3/80.5% = 3.73, or 4 after rounding up. However, since there is no
guarantee that the response rate for the next set of customers contacted will be
identical to that of the customers already contacted, it might be prudent to add not 4
but 5 or 6 additional customers to the sample to ensure that a third survey cycle will
not be needed. In this case, the number of additional customers that it is decided to
add to the sample is 6.
(i) Starting at the point where you left off in taking customer numbers from the recording
sheet in step (g) above (position 77 in the example used here), take the customer numbers
appearing in the next 6 positions on the list (i.e., those in positions 78 through 83), and add
the customers' names corresponding to these customer numbers to the sample.
The above procedure is an example of sampling without replacement. This expression is used
because the procedure called for slips of paper to be picked from the box, and after each slip of
paper was picked, it was not replaced (i.e., put back) in the box.
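
A computer can carry out the same selection without any slips of paper. The following minimal
sketch, in Python, parallels steps (d) through (g) above; the figures repeat the example in the text
(65 desired responses, an 85% expected response rate, 534 customers served), and random.sample
draws without replacement, just as setting each picked slip aside does.

    import math
    import random

    customers_served = 534
    desired_sample_size = 65
    expected_response_rate = 0.85

    # Step (f): round the initial sample size up, as the text advises.
    initial_sample_size = math.ceil(desired_sample_size / expected_response_rate)  # 77

    # Steps (d) and (e): the recording sheet is one random ordering, drawn
    # without replacement, of twice as many customer numbers as the chosen
    # sample size (130 here).
    recording_sheet = random.sample(range(1, customers_served + 1),
                                    2 * desired_sample_size)

    # Step (g): the initial sample is the first 77 positions; the rest are
    # drawn on, in order, if the response rate falls short of 85%.
    initial_sample = recording_sheet[:initial_sample_size]
    reserve = recording_sheet[initial_sample_size:]
    print(sorted(initial_sample))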
The alternative procedure would have been to sample with replacement. Using the same physical
arrangements described above, sampling with replacement would have entailed picking the first slip
of paper from the box, unfolding it, reading the customer number off of that slip, then folding the slip
back up as it was, putting it back in the box, and shaking the box up again, so that all the slips in the
box (including the one already removed and put back (i.e., replaced) into it) are once again
fully mixed. A second slip of paper is then picked from the box, unfolded, the number is read off, the
slip is folded again and put back in the box, and the box is shaken again. This procedure continues
until the same number of slips has been taken from the box as before or, to be more precise, until the
same number of picks have been made from the box (in this case, 130).
V-8
-------
One of the obvious implications of this new procedure just described (which calls for sampling with
replacement) is that it is very possible for a single slip of paper to be picked from the box more than
once. In fact it could be picked three times or more. If that happens, then that same number is
recorded a second time (and if necessary a third time, a fourth time, etc.) on the recording sheet.
Let's say for example that the slip with customer number 278 was the fourth slip picked and was also
the 36th slip picked. Then customer number 278 will appear on the recording sheet both at position 4 and
at position 36. Slips continue to be picked until the recording sheet has again been filled (i.e., a total
of 130 picks are made from the box), with the customer numbers in the first 77 positions again forming
the initial sample.
Now, you may well ask, should an additional slip be picked to make up for the duplicate picking of
customer number 278? The answer is "no." Customer 278 has been picked twice, so he or she now
counts as two customers. Does this mean that customer 278 will be contacted two different times
during the survey and on the second occasion be asked to respond a second time to the survey
questions? The answer again is "no." Customer 278 will be contacted only once, but his or her
response will be used twice in computing the survey results, as though two different customers had
responded in exactly the same way to the survey questionnaire (which does in fact sometimes
happen).
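
The parallel sketch for sampling with replacement, again in Python and again with hypothetical
figures, uses random.choices, which can return the same customer number more than once, just as a
replaced slip can be picked again. The count kept for each customer shows how many times his or her
response should be used when computing the survey results.

    import random
    from collections import Counter

    customers_served = 534
    picks = 77

    # random.choices samples WITH replacement, so duplicates are possible.
    sample = random.choices(range(1, customers_served + 1), k=picks)

    times_picked = Counter(sample)   # how many times each customer was picked
    for customer, times in sorted(times_picked.items()):
        if times > 1:
            print(f"Customer {customer}: picked {times} times; count his or her "
                  f"response {times} times when tabulating the results.")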
We have just described a situation in which one slip was picked twice. It's clearly possible for a
single slip to be picked three times or four times or more. Similarly, it's possible for two different
slips to be picked two or more times each. When the total number of customers served is relatively
low and the sample size is relatively large in comparison, multiple picks of two or more slips will be a
fairly common occurrence.
The initial reaction of most people to a description of sampling with replacement is generally rather
negative. Why would anyone ever do it that way? Putting the slip of paper back in the box,
knowing full well that it may be picked again, seems bizarre and unreasonable to them.
So why is this procedure used? The answer is that the mathematics that result from using the
sampling with replacement procedure are much simpler than those that result from use of the
sampling without replacement procedure.
Why is this so? Because the odds of any one slip being picked are the same throughout the sampling
with replacement procedure. If, for example, there were a total of 534 customers served (and
therefore 534 slips of paper in the box) from which the 77 customers to be included in the sample are
to be selected, then the odds that any one customer will be selected on the first pick are 1 in 534, the
odds of any one customer being selected in round two are 1 in 534, the odds of being selected on the
third round are 1 in 534, and so on. The odds of being picked never change from the beginning to the
end of the process of picking 77 slips of paper from the box (and thereby picking 77 customers to be
surveyed from the total of 534 customers served).
With the sampling without replacement procedure, however, the odds of any one customer being
picked are constantly changing from the beginning of the process to the end. On the first pick, every
customer's odds of having his or her slip picked are 1 in 534. But then, after one slip has been picked,
the customer number on that slip has been recorded, and that slip has been set aside, everyone's odds
V-9
-------
of being picked on the second pick are changed. For the customer already picked, his or her odds of
being picked again are now zero. That slip has been set aside so there is no further chance of being
picked on that or any further round of picks. But for the remaining customers, the chances of being
picked are now 1 in 533. The odds are lower since there is now one fewer slip in the box: only 533
instead of the 534 that were there before the first pick was made. The odds of being picked on the
third round then go down to 1 in 532. The odds continue to decrease in this same way throughout
the process of picking 77 slips. Because of these changing probabilities, the resulting mathematics
get rather messy.
In summary, we have two different procedures for selecting a random sample of customers from the
total number of customers served. Sampling without replacement is the process that seems most
reasonable to non-statisticians, but it results in very messy mathematics that make calculations much
more difficult. Sampling with replacement seems bizarre to most non-statisticians (especially
counting a customer's response twice if that customer gets picked twice), but results in much simpler
mathematics.
Luckily, statisticians have adopted the sampling without replacement procedure as the one generally
used for selecting the sample in most surveys. Statisticians refer to this procedure as Simple Random
Sampling (SRS), but it might instead be called the standard approach for random sampling. The
standard stratified sampling procedure described above in the first section of this Fact Sheet is a
slightly more complex variant of Simple Random Sampling. Stratified sampling as described above
is also based on sampling without replacement.
In these Guidelines, we strongly recommend use of sampling without replacement because it is a
procedure that EPA employees who are not statisticians are much more likely to feel comfortable
with and is the procedure most commonly used by statisticians as well. If, however, any unit of EPA
determines that it has a compelling reason to use a sampling with replacement procedure instead, it
should feel free to do so as long as those who will conduct the survey understand the implications of
doing so, will make use of the appropriate formulas (some of which will differ from those presented
in these Guidelines and Fact Sheets), and are willing to double or triple count customers (those that
were selected more than once) when calculating the survey results.
V-10
-------
Fact Sheet VI: How to Obtain Clearance for EPA Customer Satisfaction Surveys
QUESTION 1: WHO CAN USE THE CUSTOMER SERVICE ICR?
OMB's Resource Manual for Customer Surveys (dated October 1993) and other relevant guidance documents state
that the generic clearance shall be used for "strictly voluntary collections of opinion information from clients that
have experience with the program that is the subject of each data collection" and precludes this option for use:
o by regulatory agencies to survey regulated entities;¹
o in any situation where a respondent may perceive that a response will result in risks to his interests
through potential penalties or loss of benefits;
o for collecting factual information (other than simple identifying information, where needed);
or
o for collecting data from the general public.
QUESTION 2: HOW DO I RECEIVE APPROVAL FOR MY SURVEY, IF IT MEETS THE CONDITIONS
OUTLINED ABOVE?
Below are the instructions for submitting your survey for clearance:
Prior to initiating the survey, sponsoring programs must seek final approval from OMB. To obtain approval,
sponsoring programs must submit a clearance package consisting of a memorandum and a copy of the survey
instrument through the Regulatory Information Division (RID). The memorandum will be addressed from the
program or office director to the RID Desk Officer, Office of Policy (2136). The memorandum must address the
following:²
o Survey title, identification of survey originator (office, point of contact, phone number);
o Description and intended purpose of the survey as it relates to EPA customers;
o Methodology and use of anticipated results;
o Collection schedule, follow-up plans;
o Costs and burden to the Agency and respondents, and the number of respondents.
The memorandum will vary in length and detail, depending on the complexity of the survey. IPS staff, experienced
with the requirements of the Paperwork Reduction Act, will review each submission to ensure that it meets the
requirements of the PRA and any conditions of the generic approval, and may reject any proposed customer survey
that does not meet the criteria above. On methodological issues, the program shall solicit the views of Agency
statistical experts through EPA's Statistical Policy Branch or program office to make any final determinations as to
the statistical validity of the customer survey.
¹EPA interprets this to preclude any EPA surveys conducting fact finding for the purposes of regulatory
development or enforcement.
²For customer feedback forms and short questionnaires, a one-page memorandum should be sufficient.
Mail or telephone surveys making use of statistical sampling must include the statistician's name and phone
number, a brief description of the sample design, precision requirements, and pretests/pilot tests.
-------
QUESTION 3: HOW LONG WILL THE PROCESS TAKE?
Following review within RID, RID will submit surveys and attached materials to OMB for a 10 working-
day review.
WHAT ELSE SHOULD I KNOW?
Sponsoring organizations within the EPA should maintain records according to each survey schedule. In
general, survey results should be maintained for three years or until after follow-up has been performed.
Sponsoring offices are encouraged to provide feedback to RID on the success of their surveys (through a
memo or summary report), which can be shared with fledgling customer survey programs within other parts of the
EPA. Feedback might include such things as:
1) response rates, follow-up strategies, important lessons related to survey design and implementation;
2) general trends established from analysis of data;
3) changes to the organization as a result of the survey;
4) points of contact for questions about the survey.
EXAMPLE OF BURDEN STATEMENT FOR FORMS OR SURVEYS
The OMB Control Number and expiration date must appear on the front page of an OMB-approved form
or survey, or on the first screen viewed by the respondent for an on-line application. The rest of the burden
statement must be included somewhere on the form, questionnaire or other collection of information, or in the
instructions for such collection. Also include the following information:
Explain the reasons the information is planned to be and/or has been collected, and the way such
information is planned to be and/or has been used to further the proper performance of the functions of the
agency. State whether responses to the collection of information are voluntary, required to obtain or retain a
benefit (citing authority), or mandatory (citing authority), and the nature and extent of confidentiality to be
provided, if any (citing authority).
The following information must appear on the first page of the survey:
Form Approved OMB Control No. 2090-0019. Approval expires 10/31/99.
Public reporting burden for this collection of information is estimated to average eleven (11) minutes per response,
including the time for reviewing instructions, gathering information, and completing and reviewing the collection
of information. Send comments on the Agency's need for this information, the accuracy of the provided burden
estimates, and any suggestions for reducing the burden, including the use of automated collection techniques to the
Director, OPPE Regulatory Information Division, United States Environmental Protection Agency (Mail Code
2137), 401 M Street, SW, Washington, D.C. 20460; and to the Office of Information & Regulatory Affairs, Office
of Management & Budget, 725 17th Street, NW, Washington, DC 20503, Attention: Desk Officer for EPA.
Include the EPA ICR number and the OMB control number in any correspondence.
CUSTOMER SERVICE EXECUTIVE ORDER (12862) REQUIREMENTS
o Identify customers who are or should be receiving EPA service
VI-2
-------
o Survey customers for the kind/quality of services they want, their level of satisfaction with the services,
and whether standards are set for what matters to them
o Develop, post and implement standards
o Measure results against them
o Report annually to customers on progress toward achieving standards
o Integrate customer service standards, measurement and tracking with reinvention, planning,
budgeting(GPRA), operating plans, regulations and guidelines, training and personnel classification and
evaluation
o Recognize employees for meeting and exceeding customer service standards
o Benchmark customer service performance against the best in business
o Survey front-line employees on barriers to, and ideas for, matching the best in business
o Provide customers with choices in sources of service and methods
o Make information, services and complaints systems easily available
o Address customer complaints
o Develop cross-media (within an Agency) and cross-Agency programs to serve shared customer groups
o Take advantage of new technologies to better serve customers
Following are examples of successful applications to OMB.
UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
OFFICE OF ADMINISTRATION AND RESOURCES MANAGEMENT
CINCINNATI, OHIO 45268
May 22, 1997
MEMORANDUM
SUBJECT: Request for OMB Approval for Customer Feedback Survey
FROM: William Henderson, Director
Office of Administration & Resources Management
TO: Barbara Willis
RID Desk Officer
Regulatory Information Division
The Office of Administration and Resources Management, Cincinnati, is planning to conduct a
Customer Survey on the effectiveness of meeting customer needs in accessibility to the
VI-3
-------
Agency's publications through the centralized publications clearinghouse, the National Center for Environmental
Publications and Information (NCEPI). The survey will also establish document format preferences as the Agency
moves towards an "electronic first" environment. We will use the results of the survey to improve the way the
Agency disseminates its documents. A copy of the survey is attached.
More than 31,000 monthly requests for publications are received through the NCEPI, through phone, fax,
postal, or Internet services. The survey will be conducted in three phases, aimed at three distinct target
audience groups, each phase lasting 30 days and limited to 500 respondents. The first phase will involve the
end customer who orders through more conventional methods including phone, fax, or mail; the second will
target a selected mailing list audience;
and the third will target the Internet ordering customers. The burden of the survey on the respondents is small.
There are only five questions and the Agency has prepaid return postage. We estimate that it will take the
respondent approximately 10 minutes to complete the survey.
The Office of Administration and Resources Management will track responses, use the information to prepare a
report summarizing the findings, and make recommendations on how information should be disseminated and the
format which best meets the end users' needs. Costs will be minimized by using in-house resources to prepare the
report and in-house printing of survey results. A copy of the survey is attached.
If you have any questions or concerns about the survey, please contact Deborah McNealley of my staff at
513/569-7986.
Attachment
To our customers:
The National Center for Environmental Publications and Information (NCEPI) has taken great strides
towards improving accessibility to the U.S. Environmental Protection Agency's publications since our start-up in
1991. EPA publications are more centralized and offered through various, user-friendly avenues including the new
toll-free number (1-800/490-9198), fax (489-8695), and on-line ordering on the Internet at
http://www.epa.gov/ncepihom/index.html.
You may have utilized one, or all of these access points when ordering publications. Or, you may use our
more traditional ordering mechanism, the mailing address at U.S. EPA/NCEPI, P.O. Box 42419, Cincinnati, Ohio
45242-2419. Please take a moment to let us know which services you currently use, those services you will try in
the future, and services NCEPI might consider in addressing your changing needs.
Thank you in advance for your time.
ACCESS TO EPA'S INFORMATION
1. How often do you order EPA publications from NCEPI?
_Frequently _Occasionally _Rarely _This is my first order
2. When ordering EPA publications from NCEPI, how do you place your orders?
_ 1-800 _Fax _ U.S. Postal Service (or private carrier) _ Internet
3. What method of ordering EPA publications would you prefer when placing future orders?
4. On this order and past orders, was the service timely and was the order filled correctly?
Timely _Yes _No (If no, how long did it take to receive your document?)
Correctly _Yes _No (If no, please outline the problem.)
5. Which of the following media would best serve your future needs for receiving EPA publications?
_Hardcopy print _On-line viewing via Internet in full text
VI-4
-------
_On-line viewing via Internet accompanied by a hardcopy print
_EPA publications on diskette _EPA publications on CD-ROM
6. I normally receive EPA publications from: _NCEPI _Federal Depository Library
_Government Printing Office (GPO) _National Technical Information Service (NTIS)
_Other (please specify)
OPTIONAL
NAME: Address:
TITLE/COMPANY NAME CITY STATE ZIP
PHONE
E-MAIL
COMMENTS:
(Please return your survey to NCEPI within 3 days of receipt)
Request for Approval of Information Collection Activity
Background
In 1991, EPA, through the Office of Administration and Resources Management (OARM), Cincinnati,
implemented a publications distribution service, the National Center for Environmental Publications and
Information (NCEPI). The service was designed to streamline distribution operations, eliminate inefficiencies and
duplication of effort, and improve public access to its information. Now, OARM would like to survey the end users
of the information to receive feedback on how effective the current distribution methods are in meeting customer
needs and ways in which we can improve our service.
Survey Purpose
The purpose of this survey is to obtain feedback from current users of our services to ensure that the needs
of all end users are being met and identify preferences as to how the end user can best access the Agency's
publications, whether it be electronically over the Internet, or a hard copy publication.
Survey Methodology
OARM has identified three target audiences to be surveyed over a 90-day period to ensure that a cross section of
EPA's customers is given the opportunity to comment. We will send a survey to up to 500 customers who make
requests for publications by phone, fax, or the U.S. Postal Service over the first 30 days. Historically, this target
audience is representative of the general public or concerned citizen. The next 30 days will target the audience
who has requested to be placed on one of our mailing lists. This group will be more representative of the
research community, business/industry, educational institutions and organizations. The third phase of the survey
will obtain feedback from the newest customers, those who access the EPA Publications Home Page. Surveys will
be mailed or e-mailed back to OARM Cincinnati.
OARM will print 1000 copies of the survey. As we cannot establish controls on how many customers will
respond to the Internet survey vehicle, we will limit the time frame in which the survey is made available to 30 days.
We estimate that the survey will take 10 minutes to complete and return to EPA. It is anticipated that
approximately 500 respondents will return the survey. Based on this assumption, we estimate that the user burden
VI-5
-------
will be a total of 83 hours. We expect the Agency's burden will be a total of 20 hours for the review and data entry
of the information and tabulation of the results. A complete break out of the burden associated with the task is
listed below:
Respondents' Burden
Number of Respondents 500
Hours per Response 10 minutes x 500 = 83 hours
Total Burden
HOURS: 83
Agency Burden
EPA Staff 20 hours
Total burden
HOURS: 20
U.S. ENVIRONMENTAL PROTECTION AGENCY
REGION I
OFFICE OF ENVIRONMENTAL MEASUREMENT & EVALUATION
60 WESTVIEW STREET, LEXINGTON, MA 02173-3185
MEMORANDUM
DATE: June 12, 1997
SUBJECT: Request for OMB Approval of Customer Feedback Survey
FROM: Carol Wood, Manager
Ecosystems Assessment Branch
TO: Barbara Willis, RID Desk Officer
Regulatory Information Division
Office of Policy, Planning and Evaluation
EPA's Region 1, New England Office is preparing to distribute copies of the 1997 State of the New England
Environment Report. In order to learn whether the report is clear, easy to read and provides information that our
customers need, we are preparing a customer feedback survey to include with the report. A copy of the survey form
is attached.
Approximately 12,000 copies of the report will be distributed to EPA personnel, citizens, local, state and federal
offices outside the EPA Region 1 Office, with the survey form as an insert. We expect to receive approximately
3,000 responses. Region I will create a database to track survey form responses. The information will be used to
prepare a report which will summarize the findings and make recommendations on how to improve the next State
of the New England Environment report and our other outreach activities.
We will be receiving the reports from the Government Printing Office by June 24 and hope to receive approval for
the customer survey form in time to have the forms ready to include in the mailings.
If you have any questions or concerns about this request, please contact Diane Switzer at 617-860-4377 or me at
617-860-4316.
VI-5
-------
Attachments
Request for Approval of Information Collection Activity
I. Background
The 1997 State of the New England Environment Report is an outreach tool, designed to inform the public on
environmental conditions, using indicators that have been selected in the National and Regional processes as we
begin focusing more on environmental results. We discuss topics of concern to the public and EPA, signs of
improvement or degradation, and what EPA and our partners are doing to improve conditions. The purpose of this
outreach activity is to provide clear and concise information to the public that meets their informational needs and
allows them to better understand what we are doing to improve and protect the environment and public health. The
discussion topics are selected based upon regional priorities and what we think the public wants to know.
II. Survey Purpose and Description
The State of the New England Environment Workgroup is planning to conduct a customer feedback survey in the
form of a "Reader's Evaluation Form," to evaluate whether we are providing the public with the information they
want and need in a way that is easy to read and use. The results will be used to improve the report's content,
readability and use.
The evaluation consists of six questions. The first question will evaluate the report's readability. The second
question evaluates how well we do in communicating information the public wants to know. The third question
evaluates how the information is useful to the reader. The fourth and fifth questions evaluate the information needs
of the reader that we are not meeting. The sixth question evaluates whether the report is something the public
wants to receive.
III. Survey Methodology and Use of Results
The potential target audience for the evaluation forms consists of approximately 12,000 citizens, businesses and
government personnel (local, state and national). EPA Region I plans to distribute the forms as inserts to copies of
the 1997 State of the New England Environment Report. Through this effort, we anticipate that approximately
3,000 readers will respond. We estimate that it will take a respondent approximately five minutes to complete an
evaluation form.
EPA Region I will create a database to track evaluation form responses. The information will be used to prepare a
report which will summarize the findings and make recommendations to the State of the New England
Environment Workgroup and Regional managers on how to improve the readability, use and content of this report
and other similar outreach activities.
IV. Respondents' Burden
Number of Respondents 3,000
Minutes per Response 5 minutes x 3,000 = 15,000 minutes = 250 hours
Cost per Hour $11.00*
Total Burden: 250 Hours; $2,750
* Based on Federal/State/Local Employment & Payroll averages as presented in the 1996 Statistical Abstract of the
United States
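
The burden arithmetic in these clearance packages follows one pattern: respondents multiplied by
minutes per response, converted to hours and, where a labor rate is given, to dollars. Here is a minimal
illustrative sketch in Python that reproduces the Region I figures above.

    respondents = 3000
    minutes_per_response = 5
    cost_per_hour = 11.00   # the Federal/State/Local payroll average cited above

    total_hours = respondents * minutes_per_response / 60   # 250 hours
    total_cost = total_hours * cost_per_hour                # $2,750.00
    print(f"Total burden: {total_hours:.0f} hours; ${total_cost:,.2f}")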
V. Agency Burden
EPA Staff Time 100 hours
VI-6
-------
Cost per Hour $36.00
Total Burden 100 hours; $3,600.00
YOUR COMMENTS, PLEASE . . .
We would like to know if the 1997 State of the New England Environment Report provides you with useful
information. Your responses to the following questions will help us meet your needs.
1. a. Is this report easy to read and understand? Yes_ No_
b. What would make the report easier to read and use?
2. Please rate the report as to how informative the discussions within each of the sections are, with 1 = not
informative and 5 = very informative.
Report Section                              Not Informative      Very Informative
a. New England Ecosystems                   1    2    3    4    5
b. Public Health & Our Environment          1    2    3    4    5
c. Economic Opportunities                   1    2    3    4    5
d. Recreational Resources                   1    2    3    4    5
e. Environmental Education & Outreach       1    2    3    4    5
f. New Directions                           1    2    3    4    5
3. In which areas is the report helpful to you? School Work Home
Leisure Time Local Community General Knowledge Other
4. What topic(s) would you like to see in future reports?
5. We welcome any other comments you have about this report:
6. Would you like to receive a copy of future reports? Yes No
If "Yes," please provide your mailing address:
Name
Organization
Address
Town/City State
Zip Code__ County
Please fold in half with EPA's return address on the outside, staple/tape shut, and mail.
Thank you for your response! Environmental Protection Agency Region I, New England Office
June 18, 1997
MEMORANDUM
SUBJECT: Review of Customer Satisfaction Questionnaire,
ICR No. 1711.01 (OMB 2090-0019)
FROM: Barbara N. Willis
Regulatory Information Division (2136)
To: Chris Wolz
Natural Resources, OIRA
VI-7
-------
As a condition of OMB approval for the generic ICR, EPA agreed to submit each specific questionnaire covered by
this clearance to OMB for review. Therefore I am forwarding for your review Region I's survey for the "1997 State of the New
England Environment Report". The purpose of this survey is to evaluate whether Region I is providing the public with the
information they want and need in a way that is easy to read and use. The results will be used to improve the report's content,
readability and use.
Your comments and suggestions would be much appreciated. Thank you for your cooperation in this matter. If you
have any questions, please contact me at (202) 260-9453.
Attachments
January 25, 1996
MEMORANDUM
SUBJECT: Request for approval of a questionnaire titled
"What Do You Think About This Report?"
FROM: Barry Burgan
National 305(b) Coordinator
US EPA (4503F)
TO: Matt Leopard, Director
Paperwork Clearance Officer
Regulatory Management Division (2136)
Office of Policy, Planning, and Evaluation
EPA constantly seeks to improve the content and presentation of information in the National Water Quality
Inventory Report to Congress. Readers who have experience in the program are asked to respond voluntarily to six
questions, and offer opinions, comments and suggestions that will help EPA tailor the content and presentation of future
reports to the readers' needs.
The questions are on a single, two-sided sheet of paper at the end of the report, designed to be easily removable,
folded, and mailed. The reader would fill out one side, fold the sheet, stamp it, and mail it to the address printed on the other
side (see attached sample). Three thousand (3,000) copies of the report will be published. Approximately 90 responses (3%) to
the questionnaire are expected. It should take no more than 15 minutes to complete each sheet; at $30/hour, the burden is
equivalent to $7.82/sheet (which includes a 32-cent postage stamp). With a total time of 22.5 hours for all responses
(equivalent to $675), the total burden for all respondents would be $703.80, postage included.
The responses would be reviewed at EPA, and are not for public distribution. They will be used to improve the
quality of future reports, and satisfy the needs of respondents and other readers.
If you need any further information, please feel free to call me (260-7060) or George Doumani (260-3666); Fax. 260-
1977.
Attachment
What Do You Think About This Report?
EPA constantly seeks to improve the content and presentation of information in the National Water Quality Inventory
Report to Congress. Your response to the following questions will help EPA tailor the content and presentation of future
reports to address your needs. Please pull out this page and return your comments to the address on the reverse. Thank you
for taking the time to respond.
Yes No
VI-8
-------
1. Are there additional topics that you would like to see covered
in this document? □ □
Please list topics:
2. Are there topics that should be removed from this document? □ □
Please list topics:
3. Was the organization of the report adequate? □ □
How could the organization be improved?
4. In general, were the figures and graphics easy to understand? □ □
Which figures were most effective at conveying information to you?
5. Were there any figures that were difficult to understand? □ □
Please list figures:
6. Do you have any other suggestions for improving the content □ □
and presentation of information in this Report to Congress?
December 1, 1995
MEMORANDUM
SUBJECT: Review of Customer Satisfaction Questionnaire,
ICR No. 1711.01 (OMB 2090-0019)
FROM: Barbara N. Willis
Information Policy Branch (2136)
TO: Tim Hunt
Natural Resources, OIRA
VI-9
-------
As a condition of OMB approval for the generic ICR, EPA agreed to submit each specific questionnaire covered by
this clearance to OMB for review. Therefore I am forwarding for your review the "User Survey for OSW's Catalog of
Hazardous and Solid Waste Publications". The purpose of the questionnaire is to obtain feedback from callers about the
usefulness of the catalog, how they would like to receive the catalog and other OSW documents, and what types of documents
they would like OSW to develop. This survey will be available on both paper and electronically (through EPA's Public
Access server on the Internet). This survey had previously been cleared for use under the USEPA Total Quality Management
ICR (OMB 2010-0023) but that ICR has expired. I have attached the memorandum from the program and a copy of the survey
instrument.
Your comments and suggestions would be much appreciated. Thank you for your cooperation in this matter. If you
have any questions, please contact me at (202) 260-9453.
Attachments
November 22, 1995
MEMORANDUM
SUBJECT: Request for Approval of Information Collection
Activity: User Survey for OSW's "Catalog of Hazardous
and Solid Waste Publications"
FROM: Loretta Marzetti, Director
Communications, Information, and Resources Management
Division, OSW
TO: Barbara Willis, OPPE
The Office of Solid Waste would like to include a user survey in its "Catalog of Hazardous and Solid Waste
Publications: Eighth Edition." This survey would request feedback from customers on the usefulness of the catalog, how they
would like to receive OSW publications, and what types of documents they would like OSW to develop.
We are respectfully requesting OMB's approval of the survey for OSW's catalog under the Customer Service ICR.
Information collected through this survey will be used to revise the catalog and develop new publications, which will improve
our customer service as directed by Executive Order 12862. More detailed information on the proposed user survey is
attached.
If you have any questions or concerns, please contact Carie VanHook at 703-308-7891. Thank you.
Attachments
Request for Approval of Information Collection Activity
Background
OSW's "Catalog of Hazardous and Solid Waste Publications" provides our customers with a comprehensive list of
publicly available OSW documents. The catalog is organized in sections by title, subject area, and document number, and is
available on both paper and electronically (through EPA's Public Access Server on the Internet). OSW would like to include a
user survey with the paper and electronic versions of the catalog to obtain feedback from our customers on the catalog and
OSW's solid and hazardous waste publications.
Survey Purpose
VI-10
-------
The purpose of the survey is to obtain feedback from callers about the usefulness of the catalog, how they would like
to receive the catalog and other OSW documents, and what types of documents they would like OSW to develop. OSW will
use this information to improve the catalog for the next edition and to identify the needs for new OSW documents.
Survey Methodology
OSW plans to include the user survey with the "Catalog of Hazardous and Solid Waste Publications: Eighth
Edition." Users of the paper catalog will be able to detach the survey, fill it out, and mail it back to EPA. Users of the
electronic catalog will be able to print out the survey, fill it out, and mail it back to EPA, or fill out the survey
electronically and return it to EPA via e-mail.
OSW will print 10,000 copies of the catalog and make it available electronically on EPA's public access server on the
Internet. We estimate that the survey will take 5-10 minutes to complete and return to EPA. It is anticipated that 500 total
users will return the survey. Based on this assumption, we estimate that the user burden will be a total of 83 hours. We expect
the Agency burden to be approximately 30 hours for the review of comments and development of recommendations. The
RCRA Docket contractor burden is approximately 40 hours to collect the surveys and tabulate the results. A complete break-
out of the burden associated with the task is listed below.
The information gathered from this information collection activity will enable OSW to:
•  improve OSW's "Catalog of Solid and Hazardous Waste Publications."
• identify the types of documents that our customers would like to have developed.
• provide access to the catalog and other documents through other media, such as on a computer disk or via Internet.
Respondents' Burden
Number of Respondents 500
Hours per Response 10 minutes x 500 = 83 hours
Cost per Hour $36.65
Total Burden
Hours: 83
Cost: $3,041.95
Agency Burden
EPA Staff 20 hours @ 36.65 per hour = $ 733.00
Contractor Staff 40 hours @ 36.65 per hour = $ 1466.00
Hours: 60
Cost: $2,199
Catalog of Hazardous and Solid Waste Publications:
Eighth Edition
User Survey
EPA is interested in learning how useful you found the Catalog of Hazardous and Solid Waste Publications: Eighth Edition. Please take a few
minutes to answer the following questions. Your voluntary input will help EPA continue to improve this publication. If you are completing a hard copy
of this survey, return the form by folding it, stapling or taping the bottom closed, stamping it, and mailing it. If you are completing an electronic copy of
this survey, return the form via e-mail to RCRA-Docket@epamail.epa.gov.
VI-11
-------
1. With what type of organization are you affiliated?
□ Law firm
□ Consulting company
□ Media
□ Industry
□ Local government
□ State government
□ Federal government
□ School (K-12)
□ College/University
□ Environmental group
□ Community group
□ Other (please specify)

2. Approximately how many documents have you ordered from the catalog (all editions)?
□ 0  □ 1-10  □ 11-50  □ 51-100  □ over 100

3. From which source do you most often order?
□ EPA's Office of Solid Waste (OSW)
□ National Technical Information Service (NTIS)
□ Government Printing Office (GPO)

4. How would you rate the overall usefulness of the catalog?
□ Very good  □ Good  □ Satisfactory  □ Poor  □ Very poor

5. How would you rate the organization of the catalog
(in sections by title, subject area, and document number)?
□ Very good  □ Good  □ Satisfactory  □ Poor  □ Very poor

6. If you are not satisfied with the organization, what alternative arrangement would you recommend?

7. How would you rate the clarity of the document ordering instructions in the catalog?
□ Very good  □ Good  □ Satisfactory  □ Poor  □ Very poor

8. If you felt the ordering instructions were unclear, how would you recommend improving them?

9. How would you prefer to receive or access the catalog?
□ As a printed publication  □ On computer disk
□ Via the Internet  □ Via an electronic bulletin board

10. How would you prefer to receive or access the documents you order through the catalog?
□ As a printed publication  □ On computer disk
□ Via the Internet  □ Via an electronic bulletin board

11. What new documents (related to hazardous or solid waste) would you like EPA to develop
and make available through the catalog?

12. Please provide any additional comments on the catalog, or on the availability and
distribution of EPA's hazardous and solid waste documents in general.

13. Please provide your name and phone number so that we can contact you if we have any
questions about your responses (optional).
Name:
Telephone number:
VI-12
-------
MEMORANDUM
SUBJECT: Submittal of Customer Satisfaction Survey for Expedited
OMB Review
FROM: Michael B. Cook, Director
TO: Matt Leopard, RID Desk Officer
Office of Policy, Planning and Evaluation (2136)
Attached is a clearance package for an Office of Water Customer Satisfaction Survey as authorized under Executive
Order 12862, "Setting Customer Service Standards." This particular survey is designed to assess State opinion on the current
level of satisfaction and desired improvements to the Agency's Water grant process. This voluntary survey focuses on three of
the primary water quality management grants under the Clean Water Act, Sections 106, 319 and 604(b).
We are requesting an expedited review of this survey instrument in order to comply with the rather tight schedule that
is mandated under the Executive Order. We anticipate initiating the survey no later than mid-November. I am requesting
your assistance in coordinating this review.
Please contact Jane Ephremides of my staff (260-5835), or Don Brady in the Office of Wetlands, Oceans and
Watersheds (260-7074) if you have any questions.
Attachment
cc:
Bob Wayland
Abby Pirnie
Don Brady
CLEARANCE INFORMATION COLLECTION REQUEST FOR THE 1994 CUSTOMER SATISFACTION SURVEY
Identification of Information Collection
Executive Order 12862 requires Agencies to "survey customers to determine the kind and quality of services they
want and their level of satisfaction with existing services". This survey, will be conducted by customer satisfaction survey
professionals at the request of the Environmental Protection Agency's Office of Wastewater Management' Resource
Management and Evaluation Staff and the Office of Wetlands, Oceans and Watersheds Assessment and Watershed Protectio
Division. Tim Icke, Program Analyst, will be the point of contact at OWOW's Assessment and Watershed Protection
Division. He can be reached at (202)-260-2640.
Short Characterization of the Survey
The 1994 Customer Satisfaction Survey will solicit opinions from members of the grants community within the
States. The data collection is authorized by Executive Order Number 12862, "Setting Customer Service Standards," which
requires all federal executive departments and agencies that provide significant services directly to the public to carry out the
principles of the National Performance Review.
As a result of the Executive Order, the Office of Water is assessing its operations and procedures in order to provide
service to the public that matches or exceeds the best service available in the private sector. The Customer Satisfaction Survey
focuses on three of the grants, those under Sections 106, 319, and 604(b) of the Clean Water Act. The survey is intended to
determine the customers' current level of satisfaction and desired improvements in these three grants programs. In the water
program,
VI-13
-------
there are 11 sources of financial assistance available to assist the States and territories in achieving the mandates of the Clean
Water Act. The questions focus on respondents' opinions and perceptions of services rendered.
Collection Methodology
Using a pretested telephone questionnaire, EPA will survey State water quality managers, grants administration
managers, and the program managers for Sections 106, 319, and 604(b) in each of the 57 States and territories. EPA estimates
that the number of respondents will vary considerably from State to State. Using a conservative estimate, the highest possible
burden will be 5 respondents per state. The survey instrument is a 15-minute, voluntary telephone questionnaire covering
approximately 30 questions. There are four open-ended questions. For those customers who request an opportunity to respond
at greater length, follow-up calls will be scheduled. Since these conversations are voluntary, will vary greatly, and will affect a
small percentage of respondents, the follow-up calls are not considered burdens under the definition of the Information
Collection Request.
This one-time-only information collection will involve approximately 285 voluntary respondents, of which 70% are
anticipated to complete the telephone survey. The survey will require approximately 50 hours at a total cost to the respondents of
$1,444. Exhibit I-a, Respondent Burden and Costs, provides a detailed description of the unit burden and costs to respondents for
this collection. The average burden per response is 15 minutes.
State grant program authorities are the only respondent group that will be affected by this survey, and by definition they
are not small governmental jurisdictions.
Use of Survey Results
The results of the Customer Satisfaction Survey will be summarized in a report or accompanying briefing document.
EPA intends to use the information gathered by the survey to identify tools to improve the grants management process by reducing
paperwork, focusing on results while maintaining accountability, and responding to State environmental priorities. The
fundamental purpose of the customer satisfaction survey is to assess States' satisfaction with the grant process and existing
services. The survey will help EPA:
• Identify potential changes that States would like to see in the administrative management of Sections 106, 319, and
604 (b) grant programs;
• Assess the three grant programs' potential to enhance or retard States' adoption of the watershed protection
approach; and
• Understand States' level of satisfaction/dissatisfaction with the three grant programs.
Collection Schedule and Follow-up Plans
EPA seeks to minimize the amount of data collected through a one-time-only data gathering effort while at the same time
gathering enough information for an effective Customer Satisfaction Survey. The survey will help Headquarters establish a
benchmark to compare EPA's customer service performance with that of other federal agencies and private sector businesses. In
the future, this information will help to provide customers with choices in both the sources of service and the means of delivery; to
make information, services, and complaint systems easily accessible; and to provide a means to address customer complaints.
Costs and Burden to the Agency and Respondents, and Number of Respondents
The total burden for EPA Regional and State grants program authorities is a function of the number of grants managers,
auditors, and program managers for Sections 106, 319 and 604(b) of the Clean Water Act in each State and Interstate Agency and
the number of open-ended questions. Exhibits I-a and I-b give detailed descriptions of the individual reporting and record
keeping requirements associated with the survey. Burden estimates are based on EPA data from the Regions and Headquarters.
VI-14
-------
Exhibit I-a summarizes the State respondents' burden and costs as respondents to the voluntary telephone survey. The
total respondent burden associated with the Customer Satisfaction Survey is 50 hours (200 respondents at 15 minutes per call) and
the total respondent cost is $1,444, which equates to a cost per respondent of $7.22. This estimate assumes that the average hourly
labor cost for state employees is $28.96, comparable to a GS-9, Step 10 salary.
The Agency's burden and cost arise from contacting appropriate regional Program Officers, and from reviewing,
analyzing, and processing the data. The total annual Agency burden associated with the Customer Satisfaction Survey is 100
hours. This assumes that the average hourly labor cost of federal employees is $28.96, equal to a GS-9, Step 10 salary. The total
annual Agency cost resulting from survey reporting and record keeping is $2,896.
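The arithmetic behind these estimates can be checked directly. Below is a minimal sketch, in Python, using only figures stated in this request; the small gap between the computed respondent cost ($1,448) and the stated $1,444 is the rounding that the exhibits themselves flag.

    # Worked check of the burden and cost figures cited above; all inputs are
    # taken from the text, and small divergences from the stated totals are
    # the rounding the exhibits note ("numbers may not add due to rounding").

    GS9_STEP10_SALARY = 37_651    # annual GS-9, Step 10 salary, per footnote
    BENEFITS_FACTOR = 1.6         # benefits multiplier, June 1992 ICR Handbook
    WORK_HOURS_PER_YEAR = 2_080

    hourly_rate = GS9_STEP10_SALARY * BENEFITS_FACTOR / WORK_HOURS_PER_YEAR
    print(f"Hourly labor cost: ${hourly_rate:.2f}")            # -> $28.96

    # Respondent side: 285 contacted, 70% expected to complete, 15 min each.
    responses = round(285 * 0.70)                              # -> 200
    respondent_hours = responses * 0.25                        # -> 50 hours
    print(f"Respondent cost: ${respondent_hours * hourly_rate:,.0f}")  # ~$1,448

    # Agency side: 60 hours to review the first draft, 40 to approve the final.
    agency_hours = 60 + 40                                     # -> 100 hours
    print(f"Agency cost: ${agency_hours * hourly_rate:,.0f}")  # -> $2,896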
Exhibit I-a
Respondent Burden and Costs

Regulation Requirements: Survey reporting requirements (one-time only) -- Respond to telephone Customer Satisfaction Survey

(A) Total # Respondents (1): 285
(B) # Responses (2): 200
(C) Composite hrs/respondent: 0.25
(D) Total hours, (B)*(C): 50
(F) Hourly labor cost (3): $28.96
(G) Total Cost, (D)*(F): $1,444

Total Burden and Costs for all affected Respondents (4): 50 hours; $1,444

(1) Respondents include State grants managers, auditors, and program managers for Sections 106, 319, and 604(b) in each of the 57 States and
Territories.
(2) Assumes approximately five calls to each State and Territory and a 70% response rate.
(3) Hourly labor cost equals the annual salary for a GS-9, Step 10 ($37,651), times 1.6 (the benefits multiplication factor listed in the June 1992 ICR
Handbook), divided by 2,080 work hours per year.
(4) Numbers may not add due to rounding.
VI-15
-------
Exhibit I-b
Agency Burden and Costs (As Users of Data)

Recordkeeping Requirements (Ongoing):

Agency reviews 1st draft of report --
(A) Total no. of respondents: N/A
(B) No. of responses: N/A
(C) Composite hours per respondent: N/A
(D) Total hours, (B)*(C): 60
(F) Hourly labor cost (1): $28.96
(G) Total Costs, (D)*(F): $1,738

Agency approves final draft of report --
(A) Total no. of respondents: N/A
(B) No. of responses: N/A
(C) Composite hours per respondent: N/A
(D) Total hours, (B)*(C): 40
(F) Hourly labor cost (1): $28.96
(G) Total Costs, (D)*(F): $1,158

Total Agency Burden and Costs (2): 100 hours; $2,896

(1) Hourly labor cost equals the annual salary for a GS-9, Step 10 ($37,651), times 1.6 (the benefits multiplication factor listed in the June 1992 ICR
Handbook), divided by 2,080 work hours per year.
(2) Numbers may not add due to rounding.
Draft - October 25, 1994
1994 CUSTOMER SATISFACTION SURVEY
HOW ARE WE DOING?
Grant Administration: Grant Administration Staff
Only a sample of the several versions of the surveys for a series of grants is presented.
Hello, may I please speak with (NAME FROM FACE SHEET)?
RESPONDENT AVAILABLE
RESPONDENT NOT AVAILABLE (SCHEDULE A CALL BACK)
Hello, my name is ________ of Abt Associates. We are conducting a customer
satisfaction study for the Environmental Protection Agency (EPA) about three Office of Water program management grant
programs. The study is voluntary and the answers that you give will be kept strictly confidential.
VI-16
-------
1. Are you familiar with the Section 106 grant program that funds the management of state water quality programs?
Yes 1
No (SKIP TO QUESTION 14) 2
DK/REF (SKIP TO QUESTION 14) 8
2. I have some questions about the FY 94 grant cycle. How satisfied are you with the level of reporting burden under
Section 106? Are you...
Very satisfied (SKIP TO QUESTION 4) 1
Satisfied (SKIP TO QUESTION 4) 2
Dissatisfied 3
Very dissatisfied 4
3. What are the one or two most important changes you would like to see in Section 106 reporting requirements?
4. Do you think EPA made good use of the FY 94 Section 106 data you reported to them?
Yes 1
No 2
Grants Administrators 1
5. Were any of the reports created by your state in complying with Section 106 requirements for FY 94 useful for other state
purposes such as state budgeting or accounting?
Yes 1
No 2
6. How satisfied are you with the opportunity offered by EPA to file Section 106 FY 94 reports electronically? Are you...
Very satisfied (SKIP TO QUESTION 8) 1
Satisfied (SKIP TO QUESTION 8) 2
Dissatisfied 3
Very dissatisfied 4
7. What are the one or two most important changes you would like to see in Section 106 electronic reporting scope or
procedures?
8. How satisfied were you with the length of time it took EPA to respond to requests for information on grant administration
and reporting for FY 94 Section 106 grants? Were you...
Very satisfied 1
Satisfied 2
Dissatisfied 3
Very dissatisfied, or 4
Did you not make any requests for information 5
8A. How satisfied are you with the length of time it took to obtain the EPA approvals required at various stages of
administration of FY 94 Section 106 grants? Were you...
VI-17
-------
Very satisfied 1
Satisfied 2
Dissatisfied 3
Very dissatisfied, or 4
Did you not need any EPA approvals 5
9. How satisfied are you with EPA's requirements for the close-out or rollover of the Section 106 grant fund? Are you...
Very satisfied 1
Satisfied 2
Dissatisfied 3
Very dissatisfied 4
Grants Administrators 2
10. What are the one or two changes you would most like EPA to make in its Section 106
reporting requirements?
11. Overall, how satisfied are you with EPA's FY '95 Section 106 grant programs? Are you...
Very satisfied 1
Satisfied 2
Dissatisfied 3
Very dissatisfied 4
12. What is the one most important change you would like to see made to the Section 106 grant program?
Grants Administrators 3
REPEAT FOR SECTION 319 AND 604(B)
INSERT THE FOLLOWING AFTER QUESTION 1 FOR 604(b)
1A. In meeting the reporting requirements for the Section 106, 319 and 604(b) programs for FY 94, did
your state ever have to submit the same report to different grant programs?
Yes 1
No 2
40. Please compare your state's experience with the Section 106, 319, and 604(b) programs for FY 95 with that of other grant
programs administered by the EPA Office of Water.
CLOSING: Thank you very much for your time. Analysts working on the project may contact
VI-18
-------
you later for further detail or clarification of the information you've given. Is there
a best time of day or day of the week to reach you?
Thanks again. Goodbye.
VI-19
-------
Fact Sheet VII: Unit of Analysis
This Fact Sheet compares three alternative "units of analysis" that might be used for customer
feedback activities at EPA and recommends that one of them, the "person served," be used as the
unit of analysis in most surveys of customer satisfaction conducted by EPA. It also recommends
that another, different unit of analysis, the individual "customer transaction," be used as the unit
of analysis for most activities that rely on continuous feedback to track the level of customer
satisfaction and how it is changing over time.
The unit of analysis selected for the collection of customer feedback information is important for
a number of reasons: it affects the size of the list from which the sample needs to be drawn, and
therefore the decision as to sample size; it affects what kinds of things will be included on that
list; it affects what is asked of each person contacted; and it affects how the responses of those in
the sample (i.e., those contacted) are analyzed.
There are three principal alternative units of analysis that might be used for any given customer
feedback activity at EPA:
(1) The unit of analysis is the "customer transaction." (This is explained in the discussion
that follows below.)
(2) The unit of analysis is the "person served."
(3) The unit of analysis is "the organization served" in each case where the person served
was acting on behalf of an organization.
To clarify the differences among these three possible units of analysis, let's look at the implications
of choosing one of these units of analysis as compared with each of the others.
To facilitate this comparison, let's assume that the customer feedback method that has been
selected for use is a telephone survey. Once the unit of analysis to be used is selected, the next
steps are to determine what sample size to use, then to randomly select that number of specific
people (or organizations) to be called.
Keep in mind that any customer feedback activity should seek feedback from customers on their
satisfaction with products and services received in a certain specific period of time. For clarity,
we will here assume that the period of time of interest is a specific calendar year.
In examining all the customers served in a specific (hypothetical) EPA program area for the year
of interest, we discover that a total of 236 different people were served (by being provided a
product or service). On closer examination we discover that there were a total of 377 customer
transactions. The reason for the difference in these two numbers is that some customers, after
obtaining one product or service, called back later in the year to request another product or
service. A few then called back a third time, and so on. We here refer to each occasion on which
a specific person called to obtain a single specific product or service as a "customer transaction."
-------
Comparison of "Person Served" vs. "Customer Transaction" as the Unit of
Analysis
If the unit of analysis is the person served, then we will use 236 as the total number of
people/things to be characterized and this will be the basis for choosing the sample size. If the
unit of analysis is the customer transaction, then we will use 377 as the total number of
people/things to be characterized and this will be the basis for choosing the sample size.
If we decide to use the person served (of which there are 236) as the unit of analysis, and we
choose to use a sample size of 40, then we need to make a random selection of 40 persons served.
So at this point we will put together a list of all 236 persons served. Note that each person served
will appear on this list only once, no matter how many times he/she called during the year to
obtain a product or service. As a consequence, when 40 names are randomly picked from this
list, each person served has the same chance of being picked as every other person served, no
matter how many times he/she called during the year to obtain a product or service. Finally, when
the persons randomly selected are called, they will be asked about all of their experiences during
that year as customers of that EPA program area.
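To make the mechanics concrete, here is a minimal sketch of this frame-building and selection step. The transaction log, names, and products are hypothetical illustrations, not an actual EPA system.

    import random

    # Hypothetical log of the year's transactions: one entry per transaction
    # (in the example above there would be 377 entries covering 236 people).
    transactions = [
        ("A. Smith", "Catalog", "1998-02-03"),
        ("A. Smith", "Fact Sheet", "1998-07-19"),
        ("B. Jones", "Catalog", "1998-04-11"),
        # ... one tuple per transaction ...
    ]

    # "Person served" frame: each person appears exactly once, no matter how
    # many times he or she called during the year.
    persons_served = sorted({name for name, _, _ in transactions})

    # Draw the sample (40 in the example above); every person served has the
    # same chance of selection.
    sample = random.sample(persons_served, k=min(40, len(persons_served)))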
If instead, we decide to use the customer transaction (of which there are 377) as the unit of
analysis, then all else being equal, we will need a larger sample size, because we now have more
things (transactions) to sample from. Let's say we now, as a consequence, choose to use a sample
size of 70. To randomly select 70 of these 377 customer transactions, we will need to make a
different, longer list of things to pick from. This time the list will contain 377 items, and each
item in the list will be a customer's name plus the one product or service obtained in a single
transaction. (Each item in the list may also include the date on which that product or service was
obtained -- this will be desirable in those program areas where we find some individuals obtaining
the same product or service more than once during a single year.)
Customers who obtained more than one product or service during the year will appear on the list
more than once, and those customers appearing on the list twice (because they obtained two
products or services during the year) will have twice as great a chance of being randomly selected
to be part of the sample as those customers who only obtained one product or service during the
year. Furthermore, what will be picked is not just the name of a customer but the name of the
customer plus the specific product or service he/she obtained in a specific transaction during the
year (plus, if needed, the date it was obtained). Finally, when those picked are called, the
questions they are asked will focus specifically on that one transaction, i.e. they will be asked to
limit their response/comments to how satisfied they were with that one particular product or
service (obtained on that date), how courteously they were treated when obtaining that one
product or service (on that date), etc.
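Under this design the sampling frame is the transaction log itself, so a customer with two transactions has two chances of selection. Continuing the hypothetical log from the previous sketch:

    import random

    # The frame is now the transaction log itself: one entry per transaction
    # (377 in the example above), each carrying name, product, and date.
    transactions = [
        ("A. Smith", "Catalog", "1998-02-03"),
        ("A. Smith", "Fact Sheet", "1998-07-19"),
        ("B. Jones", "Catalog", "1998-04-11"),
        # ... one tuple per transaction ...
    ]

    # Draw whole transactions (70 in the example above). A. Smith, with two
    # transactions, is twice as likely as B. Jones to enter the sample.
    sample = random.sample(transactions, k=min(70, len(transactions)))

    # Each interview is limited to the one product, service, and date drawn.
    for name, product, date in sample:
        print(f"Ask {name} only about the {product} obtained on {date}.")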
Because of the greater complexity associated with its use, it is expected that most EPA program
areas will not use the customer transaction as the unit of analysis for their customer feedback
VII-2
-------
surveys. At the same time, when there is reason to believe that the degree of customer
satisfaction varies greatly from one product or service to another provided by the same EPA
program, it may be decided that it is appropriate in that case to use the customer transaction as
the unit of analysis. If so, it is important to remember, when contacting each customer included in
the sample, to ask the customer to limit his/her comments to that one particular product or service
obtained in the transaction for which he/she was selected even if he/she obtained two or more
products or services during the year.
Note that useful product- and service-specific customer satisfaction data can also be obtained by
using the "person served" as the unit of analysis. This can be accomplished as follows: use the
total number of "persons served" as the basis for choosing the sample size. Next, use the list of
"persons served" as the basis for randomly selecting the specific persons to be contacted. Then,
when calling each person selected: first, ask him/her to identify all of the various products or
services he/she received during the year; second, ask about his/her overall satisfaction with those
products and services; and finally, ask about his/her degree of satisfaction with each individual
product and/or service received. The analysis of the results obtained can then be used to
characterize the overall degree of satisfaction of the 236 customers as a whole, and will also
provide useful information about differences in degree of satisfaction with specific products and
services.
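As a hypothetical illustration of that analysis step, each interview can be recorded as a set of (product, rating) pairs and tallied per product as well as overall:

    from collections import defaultdict
    from statistics import mean

    # Hypothetical interview results: each sampled person rates every product
    # or service he or she received during the year, on a 1-10 scale.
    ratings = [
        ("Catalog", 8), ("Catalog", 9), ("Fact Sheet", 6),
        ("Hotline", 4), ("Fact Sheet", 7), ("Hotline", 5),
    ]

    by_product = defaultdict(list)
    for product, rating in ratings:
        by_product[product].append(rating)

    # Product-specific means alongside the overall figure.
    for product, scores in sorted(by_product.items()):
        print(f"{product}: mean {mean(scores):.1f} (n={len(scores)})")
    print(f"Overall: mean {mean(r for _, r in ratings):.1f}")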
Comparison of "Person Served" vs. "Organization Served" as the Unit of
Analysis
For this same example (236 persons served in 377 customer transactions), there is yet another
possible unit of analysis: the organization served (rather than the person served) for customers
that were acting on behalf of an organization. In this (hypothetical) case, of 236 persons served,
96 were acting on their own behalf, and 140 were acting on behalf of an organization.
Furthermore, there were several cases where more than one person served was acting on behalf of
the same organization. For example: 7 different persons called to obtain products or services on
behalf of the XYZ Corporation, 5 different persons called to request products or services for the
ABC law firm, and 3 different people called requesting products or services for the LMN
environmental group. We find, on further examination, that the 140 persons served who were
acting on behalf of an organization were acting on behalf of a total of 63 different organizations.
With these facts in mind, the EPA program area conducting the survey may decide that it wants to
know how satisfied each of these organizations as a whole was with the products and services it
obtained. In this case, depending on how it is decided to approach those not acting on behalf of
an organization (i.e., persons acting as "members of the general public"), we could end up with a
total of 159 "customers" (persons and organizations) — the total of the number of
organizations served (63) plus the total number of persons from the general public served (96).
Or we could instead treat the 96 members of the general public as one group and the 63
organizations served as a second group, and sample separately from each of these two groups. In
that case, we would again have a total of 159 "people or things" to sample from (63 organizations
VII-3
-------
in one group plus the 96 members of the general public treated as one group), but we would
approach the sample selection process differently.
Let's say that the program area seeking customer feedback decides to use the second approach:
we then have a total of 159 people and things to sample from, separated into two different
groups. We will need to sample separately from the group consisting of the 63 organizations
served and from the group consisting of the 96 members of the general public who were served.
Let's begin our discussion of how this sample selection process can be conducted with the group
consisting of the 96 members of the general public. We want to select a sample of these 96
persons to contact in our survey. In this case, all else being equal, with only 96 persons to sample
from, we can use a smaller sample size than we used earlier when the unit of analysis was "people
served" (of which there were 236) or "customer transactions" (of which there were 377). Let's
say it is decided to select a sample of 25 from this group. The most straight-forward method for
selecting these 25 would be to create a list of the 96 members of the general public and then
randomly select 25 names from that list. We then contact each of these 25 people in our phone
survey.
Let's now address the group consisting of the 63 organizations. We want to select from these 63
a sample to be contacted in our phone survey. Again, all else being equal, with only 63 things to
sample from, we can once again use a smaller sample size than we used earlier when the unit of
analysis was "people served" or "customer transactions." Furthermore, we can also use a smaller
sample size than we used for the group consisting of the 96 members of the general public. Let's
say it is decided to use a sample size of 20. We now have to select 20 organizations to contact in
our survey. Once again, the most straight-forward method for selecting these 20 would be to
create a list of the 63 organizations served and then randomly select 20 organizations from that
list.
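Treating the two groups as separate strata, the selection can be sketched as follows (group sizes are from the example above; the names are hypothetical placeholders):

    import random

    # Two separate frames, per the second approach described above.
    general_public = [f"Person {i}" for i in range(1, 97)]   # 96 individuals
    organizations = [f"Org {i}" for i in range(1, 64)]       # 63 organizations

    # Sample each stratum independently.
    public_sample = random.sample(general_public, k=25)
    org_sample = random.sample(organizations, k=20)

    # For each sampled organization we must still decide which of the people
    # who called on its behalf to interview -- the complication discussed next.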
But we now have another problem. We need to decide who to call at each of these organizations.
For those organizations where only one person called during the year to obtain a product or
service on behalf of that organization, there is no problem -- that is the person who will be called.
But for organizations included in the sample for which more than one person obtained a product
or service on behalf of that organization, a decision has to be made -- will all of these persons be
called during the phone survey? If not all, then how will those to be called be selected and how
many will be selected for each organization?
As you can see, using the "organization served" as the unit of analysis results in a number of
complexities. There are further complexities that arise in analyzing the results obtained from
using such an approach. For this reason, those at EPA responsible for obtaining customer
feedback should in general not use the "organization served" as the unit of analysis (unless of
course they have a compelling reason to do so).
Conclusion
VII-4
-------
We conclude that, in most cases, for reasons of simplicity and convenience alone, the preferred
unit of analysis for obtaining customer feedback by means of surveys at EPA will be the "person
served." Experience with customer surveys elsewhere has shown that using the "person served"
as the unit of analysis gives meaningful and very useful results. Since surveys based on "person
served" are the easiest to design and carry out, EPA programs undertaking customer surveys are
encouraged to use "persons served" as the unit of analysis for all of their customer feedback
activities, except when there is a compelling reason to do otherwise. Furthermore, adopting
"person served" as the unit of analysis for most customer feedback surveys at EPA will maximize
the comparability across different program areas and over time of the results obtained from these
surveys.
Please note that the above conclusion applies only to customer satisfaction surveys (periodic
surveys). In any case where a continuous feedback approach is to be used (like a comment card
included in each copy of a publication sent out or a follow-up phone call to every nth customer two
days after a product or service has been provided), then the unit of analysis will instead normally
be the specific "customer transaction" (the transaction in which the product or service was
provided) about which the feedback is being sought.
VII-5
-------
Fact Sheet VIII: Examples of Graphs for Presenting
Customer Feedback Results
Once data have been collected, think hard about how you want to present them. Often, we focus lots
of attention on collecting feedback and performing complex analysis, and forget that we have to
market the findings if we are going to help bring about change. The form you select for presentation
can make or break all the previous work. Results need to be communicated clearly to the appropriate
people before an organization can begin learning from its customers.
[Figure: bar graph of Overall Satisfaction responses, grouped as 1-3 (Very Dissatisfied), 4-5 (Dissatisfied), 6-8 (Acceptable but Room for Improvement), and 9-10 (High Quality)]
There are many variables to consider when presenting data, such as the nature and level of the
audience, the reasons the feedback was collected and how it will be used, and the nature of the data
itself. Some of the more common forms are listed below, with a brief explanation of the unique use
of each presentation.
A very basic bar graph can be used to convey the percentage of the population that responds within
a given range. For example, the graph above indicates that 1.2% of the respondents rated their
Overall Satisfaction as one, two or three on a scale of one to ten, 15.5% rated Overall Satisfaction
as four or five, 46.1% as six, seven or eight, and 37.2% as nine or ten. Note that these groupings
of 1-3, 4-5, 6-8 and 9-10 are somewhat arbitrary, and can be changed to suit the needs of your
project. Additionally, the labels 'Very Dissatisfied', 'Dissatisfied', 'Acceptable, but Room for
Improvement' and 'High Quality' are also subject to change according to individual needs.
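For readers producing such graphs by computer, a minimal sketch using the percentages above follows; it assumes the matplotlib plotting library, though any charting package would serve.

    import matplotlib.pyplot as plt

    # Grouped Overall Satisfaction ratings from the example above (percent of
    # respondents); both groupings and labels can be tailored to a project.
    labels = ["1-3\nVery\nDissatisfied", "4-5\nDissatisfied",
              "6-8\nAcceptable but\nRoom for Improvement", "9-10\nHigh Quality"]
    percents = [1.2, 15.5, 46.1, 37.2]

    fig, ax = plt.subplots()
    ax.bar(labels, percents)
    ax.set_ylabel("Percent of respondents")
    ax.set_title("Overall Satisfaction")
    for i, p in enumerate(percents):
        ax.annotate(f"{p}%", (i, p), ha="center", va="bottom")  # label each bar
    plt.tight_layout()
    plt.show()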
VIII-1
-------
[Figure: bar graph of mean Overall Satisfaction by Region, with the number of respondents (e.g., 87, 107) noted to the right of each bar]
It is often useful to organize responses by customer segments that are meaningful to the survey
audience. In the case above, the mean, or average, Overall Satisfaction ratings are organized by
geographic region. Note also that the base, or number of respondents in that region, is noted to the
right of each bar. This can be important in judging the relative validity of the information.
[Figure: bar graph of mean Overall Satisfaction, on a scale of 1 to 10, by customer segment: grants totaling less than $50,000 a year, between $50,000 and $500,000 a year, and more than $500,000 a year; and interaction with XYZ for less than one year, between one and three years, and more than three years]
Responses can also be organized by other types of segments. In the case above, respondents
answered questions about their length of use and the amount of grant money they had received.
Note that the number of respondents in each category appears to the right of each bar.
VIII-2
-------
[Figure: bar graph of mean satisfaction ratings, on a scale of 1 to 10, for Contract staff, Technical staff, Administrative staff, and Billing/Accounting staff]
A bar graph can also be used to identify the mean, or average, response to various services received.
This is useful for comparing levels of satisfaction across the services offered.
[Figure: grouped bar graph of Quality of Health Care Received ratings, from 0-3 up to 9-10 (best), for Adults Surveyed (n=72) and Children Surveyed (n=79)]
A slightly more complex graph allows the comparison of responses between two segments of
a population. In the example above, 62% of the children surveyed considered the Quality of the
Health Care they received to be nine or ten, on a scale of one to ten. In general, children tended to
rate the Quality of Health Care higher, while adults tended to give lower ratings.
VIII-3
-------
[Figure: segmented bar chart, "Getting the Care You Need When You Need It," comparing Adults Surveyed and Children Surveyed, with each bar divided into segments such as Never, Usually, and Always (0-100%)]
Another way to compare two populations is to use segmented bar charts, as shown above. The
graph above indicates that children surveyed were more likely to feel they received the care they
needed when they needed it.
[Figure: quadrant chart, "Priority Issues for Building Customer Loyalty" (Action Matrix), plotting Importance (low to high) against Satisfaction (low to high) for Contracts Management, Grants Management, Information Clearinghouse, Interactions with Agency Staff, Advocacy, and Distribution of Reports]
If driver analysis is being performed, a useful way to present the results is in a quadrant chart, as
in the example above. By comparing the levels of satisfaction with the levels of importance, we can
prioritize results. In the example above, the services listed in the upper right quadrant are those that
were very important to the customer and were rated as providing high levels of satisfaction. These
services, Information Clearinghouse and Interactions with Agency Staff, are identified as areas where
the organization is meeting or exceeding the customer's needs. In contrast, the upper left quadrant
identifies services that are very important to the customer but are rated as providing low satisfaction.
It is these services, Contracts Management and Grants Management, that require immediate attention.
The lower right quadrant identifies services that provide high levels of satisfaction, but are not
important to the customer. In the example above, no services were found to be in the lower left
VIII-4
-------
quadrant. This quadrant identifies services that are not important to the customer and provide low
levels of satisfaction.
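A sketch of how such an action matrix might be drawn follows, again assuming matplotlib; the satisfaction and importance scores here are hypothetical stand-ins for real driver-analysis results.

    import matplotlib.pyplot as plt

    # Hypothetical driver-analysis results: (satisfaction, importance) per
    # service, echoing the action matrix described above.
    services = {
        "Contracts Management": (3.0, 8.5),
        "Grants Management": (3.5, 8.0),
        "Information Clearinghouse": (8.0, 8.5),
        "Interactions with Agency Staff": (8.5, 9.0),
        "Advocacy": (7.5, 3.0),
        "Distribution of Reports": (8.0, 2.5),
    }

    fig, ax = plt.subplots()
    for name, (sat, imp) in services.items():
        ax.scatter(sat, imp)
        ax.annotate(name, (sat, imp), fontsize=8)
    ax.axhline(5, linestyle="--")   # split importance into high/low
    ax.axvline(5, linestyle="--")   # split satisfaction into high/low
    ax.set_xlabel("Satisfaction (low to high)")
    ax.set_ylabel("Importance (low to high)")
    ax.set_title("Priority Issues for Building Customer Loyalty")
    plt.show()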
[Figure: quadrant chart, "Drivers of Satisfaction," with Importance (high/low) plotted against Satisfaction (high/low); the quadrants are labeled Promote (high importance, high satisfaction), Improve (high importance, low satisfaction), Maintain (low importance, high satisfaction), and Commit Minimal Resources (low importance, low satisfaction)]
Another example of a chart that relates the results of driver analysis appears above. In this case, subjective
labels have been applied to the areas of the chart according to the needs of the project.
[Figure: pie chart of responses to "Compared to other government or government-like grant or funding processes, would you say that your experience was...": The Best? 10.0%; Better? 31.7%; About the Same? 31.7%; Worse? 10.0%; The Worst? 1.6%; Don't Know/No Answer 15.0%]
VIII-5
-------
A pie chart is another method useful for relating the proportions of a population that responded in
a particular way to a question. In the example on the previous page, most respondents felt that the
funding process was About the Same as, or Better than, others they had experienced.
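A sketch of that pie chart, using the stated response shares (matplotlib assumed):

    import matplotlib.pyplot as plt

    # Response shares from the example above (they sum to 100%).
    labels = ["The Best?", "Better?", "About the Same?", "Worse?",
              "The Worst?", "Don't Know/No Answer"]
    shares = [10.0, 31.7, 31.7, 10.0, 1.6, 15.0]

    fig, ax = plt.subplots()
    ax.pie(shares, labels=labels, autopct="%.1f%%")
    ax.set_title("Experience compared to other grant or funding processes")
    plt.show()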
Examples of Customer Remarks Concerning Billing Needs
• "Billing is sketchy and difficult to understand."
• "We are running approximately 6 months behind on billing."
• "I have had problems with billing, and would like XYZ to reassess the way they are billing; timeliness and accuracy."
• "Poorly itemized billing."
• "Billing report really hard to understand, very inconsistent."
• "More prompt billing, so that I can delete them off the records."
• "Billing is a twilight zone."
Open End responses are usually organized according to subject matter. In the example above,
comments that refer to problems with a Billing Service are grouped together. This is a very effective
way to communicate comments from customers to the audience. Following is an example of Open
End responses organized and grouped together. The actual statement by the customer is not
listed, but the number of customers who felt a certain way is clearly communicated.
What Products or Services Do XYZ Customers Want to See Offered?
(# of responses)
• Information Distribution 34
  - Internet 9
  - Mailings 15
  - Other 10
• Improved/Clarified Policies 20 (supplies, photographic printing, microfilm, signs, library, etc.)
• Improved Customer Service 7
• Informing Customers of Existing Services 8
One additional type of chart which can be useful for presentations is the trend or run chart,
which is used to identify meaningful changes from year to year, or between feedback activities.
VIII-6
-------
Such charts are used to monitor progress and portray improvement. A time series chart not
only can show trends, it can portray relationships. With time series, changes and relationships
in two or more items that would otherwise appear on different scales (apples and oranges) can
be compared if the net change from one point to another is expressed as a percentage.
[Figure: run chart of quarterly Customer Satisfaction scores (0-100) for Courtesy, Timeliness, Accuracy, and Accessibility, 1st Qtr through 4th Qtr]
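A sketch of the percentage-change technique described above, with hypothetical quarterly values for two items measured on different scales (matplotlib assumed):

    import matplotlib.pyplot as plt

    # Hypothetical quarterly mean ratings for two items on different scales.
    quarters = ["1st Qtr", "2nd Qtr", "3rd Qtr", "4th Qtr"]
    series = {
        "Courtesy (1-10 scale)": [7.0, 7.4, 7.9, 8.1],
        "Timeliness (% on time)": [62.0, 60.0, 71.0, 75.0],
    }

    fig, ax = plt.subplots()
    for name, values in series.items():
        base = values[0]
        # Express each quarter as percent change from the first quarter, so
        # the two differently scaled items share one axis.
        pct_change = [100.0 * (v - base) / base for v in values]
        ax.plot(quarters, pct_change, marker="o", label=name)
    ax.set_ylabel("Percent change from 1st Qtr")
    ax.set_title("Customer Satisfaction Trends")
    ax.legend()
    plt.show()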
VIII-7
-------
Fact Sheet IX: Survey Software Information
There are numerous computer software packages available; prices range from several hundred to
many thousands of dollars. Software is also on the market for e-mail and Internet surveys. In selecting survey
software, you may wish to consider the dimensions of the survey(s) that you are planning to perform. There are
three elements in the software packages that the EPA Customer Service Program considered: the survey
form, the database and other information needed to administer the survey, and the reporting features of an
effective survey.
The cost effectiveness of purchasing a scanner depends upon the survey software being used, the
length and number of surveys, and the size of the sample being surveyed. The software vendors can
recommend scanners that are appropriate for their software and expected use.
Below are the names of four vendors that EPA has either considered using or has used for survey
activities within the last several years. The EPA Customer Service Program can provide support in the use of the
Corporate Pulse Software within EPA. More details about Corporate Pulse and a sample survey follow.
Esther H. Larson,
NCS Federal Government Marketing
4301 Wilson BLVD, Suite 200, Arlington, VA 22203
http://www.ncs.com/ncscorp/govt/federal.htm
Phone: 1-800-359-9333, or 703-284-5810, Fax: 703-284-5819
Software: NCS Design Expert, NCS Survey, NCS Viewpoint, Scantools
Steve Hehl
Vitality Alliance
55 North University Avenue, Suite 225, Provo, Utah 84601
Phone: 1-800-772-9478, or 801-373-2233, Fax: 801-373-8884
Software: Corporate Pulse
Colleen C. Thoresen
Auto Data Systems
6111 Blue Circle Drive, Minnetonka, MN 55343-9108
http://www.autodata.com
Phone: 1-800-662-2192, or 612-938-4710, Fax:612-938-4693
Software: AutoData Survey, AutoData Survey+, Autodata Pro
Raosoft, Inc.
6645 NE Windermere Road, Seattle, Washington 98115-7942
http://www.raosoft.com/
Phone: 206-525-4025, or in Wash. DC 703-742-5295, Fax: 206-525-4947
Software: Raosoft SURVEY, with numerous options for tailoring surveys.
IX - 1
-------
CORPORATE PULSE
What is Corporate Pulse?
Corporate Pulse is survey software. The EPA Customer Service Program (CSP) researched software
capabilities and selected this package because it best met the projected needs of the Agency.
Demonstration discs that guide the user through the program's general capabilities may be borrowed from the
CSP (202-260-9144). EPA staff may use the CSP copy after receiving general training. Programs and
Regions planning extensive survey work may wish to purchase discounted site licenses rather than use the
CSP copy.
What can Corporate Pulse do?
Corporate Pulse software has three capabilities: survey construction, survey administration, and analysis.
Survey Construction
• Library of tested questions, including topics such as customer service, leadership, team orientation,
and communication
• Demographics - select from pre-defined ones or develop your own
• Scannable form creation
• Response Scales - select from predefined ones or create your own
• Page layout
• Spelling check
• Autoreversing on negative questions
• Open-ended questions
• 360 degree profile surveys
Survey Administration
• Participant pool database
• User-definable field names
• Standards database-file importing and exporting
• Up to 32,000 records per pool
• Simple random, stratified random, and cluster samples
• Produce distribution lists and mailing labels
• Track all surveys, administrations and reports
• Scan, import or manually enter survey response data
• Schedule, budget and track survey projects
• Create survey sample based on user-defined parameters (see the sample-size sketch following these lists), including:
• Expected percentage of surveys to be returned
• Allowable margin of error
• Confidence level
Survey Analysis
• Types of reports (frequency distribution, descriptive mean, favorable/unfavorable, 360 degree
profile, trend analysis - automatic or manual)
• Reports on entire survey or selected questions
• Reports on all 360 degree profile participants or selected participants
• Demographic cuts and comparisons
• Professional quality reports
• Analysis options (mean score, mode, rank, standard deviation, margin of error, confidence level, 360
degree profile options)
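The three sampling parameters listed under Survey Administration combine in a standard way. The sketch below shows the textbook calculation for a proportion with a finite population correction, inflated for expected nonresponse; it illustrates the kind of computation such software performs and is not Corporate Pulse's actual algorithm.

    import math

    def sample_size(population: int, margin_of_error: float = 0.05,
                    confidence_z: float = 1.96, p: float = 0.5,
                    expected_return_rate: float = 1.0) -> int:
        """Sample size for a proportion, with finite population correction,
        inflated by the expected return rate (standard textbook formula)."""
        n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
        n = n0 / (1 + (n0 - 1) / population)        # finite population correction
        return math.ceil(n / expected_return_rate)  # oversample for nonresponse

    # Example: 236 persons served, 5% margin of error, 95% confidence,
    # 70% of surveys expected to be returned.
    print(sample_size(236, margin_of_error=0.05, expected_return_rate=0.70))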
A copy of a survey that the OIG developed using Corporate Pulse follows.
IX - 2
------- |