United States
Environmental Protection Agency
Washington, DC 20460
May 1999
OP-233-B-99-002
Office of Policy
Hearing the Voice of the Customer
Customer Feedback and Customer Satisfaction Measurement Guidelines
Revised May 1999
Hearing the Voice of the
Customer
Customer Feedback
and
Customer Satisfaction
Measurement
Guidelines
Revised edition
May 1999
"My vision \s that EPA will be a model for all regulatory
agencies by fully integrating customer satisfaction measures
into our strategic planning, budgeting and decision making
while recognizing the diversity of our customers and the need
for balancing sometime competing and conflicting interests.
Above all, we will strengthen our ability to listen to the voice of
our customers so that we can identify their need and act upon
them." ;
Carol M. Browner
HOW THESE GUIDELINES WERE DEVELOPED
These Guidelines are the product of many people's efforts. After assessing the state of customer
satisfaction survey work across the agency and coordinating a 3-year plan for surveys across the
country, customer service staff determined that more people needed to understand how to obtain
actionable feedback from customers. As a first step, in December 1997, the Customer Service
Steering Committee (CSSC) formed the Feedback and Measurement Work Group to help plan
the best way to accomplish the goal. In February 1998, the Customer Service Program (CSP)
sponsored a workshop attended by nearly 20 people from program offices and regions. At that
workshop, the CSP contractor (Macro International Inc.) facilitated a process designed to explain
and exemplify what customer satisfaction measurement entailed and to produce an outline of the
Guidelines contents.
Members of the Work Group reviewed two drafts. Representatives of several Federal and State
agencies and an internal expert panel commented on the third draft. The Work Group
accepted and approved the next draft, prepared by customer service staff, for publication by the
Customer Service Steering Committee in October 1998. Everyone who actively participated
made this document possible.
WORK GROUP MEMBERS
Michael Binder, Office of the Inspector
General
Charlotte Cottrill, Office of Research and
Development
Judi Doucette, Office of the Chief Financial
Officer
William Garetz, Office of Policy
Elizabeth Harris, Office of Solid Waste and
Emergency Response
Beth Means, Office of Administration and
Resource Management
Wayne Naylor, Region 3
Arnold Ondarza, Region 6
Nan Parry, Office of Research and
Development
Caren Rothstein, Office of Prevention,
Pesticides and Toxic Substances
Stan Siegel, Region 2
Lawrence Teller, Region 3
Betty Winter, Region 4
EXTERNAL REVIEWERS
Terry Bergerson, National Park Service
Dan Bius, North Carolina Department of
Environment and Natural Resources
Gary Machlis, National Park Service
Nancy Manley, Vermont Department of
Environmental Compliance
Lance Miller, New Jersey Department of
Environmental Protection
Tom Roberts, Social Security Administration
INTERNAL EXPERT PANEL
Barry Nussbaum, Chairman, Office of Policy
Development
Charlotte Cottrill, Office of Research and
Development
Steve Burkett, Region 8
Kevin Rosseel, Office of Air and Radiation
PREFACE
Serving the public, stakeholders, and partners is nothing new to the Environmental Protection
Agency (EPA). Communicating with them and listening to their ideas is part of the way that
everyone at the agency does the job of protecting public health and the natural environment. What
is new to many is the term "customer." But when you think about it, we all have customers,
including one another.
Our customer base is very large and varied, so it is necessary for us to use many ways and every
opportunity we recognize to hear the voices of our customers. We have forums, workshops,
conferences, training sessions, and meetings of all sizes (from one-on-one interviews with CEOs,
mayors, tribal leaders, governors, etc., to Federal Advisory Committee Act group sessions and
communitywide exchanges around a Superfund problem or an environmental protection
opportunity). We use informal sessions, focus groups, surveys, comment cards, Internet feedback
screens, and more to hear what customers think of our services. Speakers answer questions and
listen to comments following speeches, and officials use interactive media opportunities such as
radio and television talk shows to reach the public. Hotlines, dockets, visitor centers, and libraries
seek customer comments. We work with partners in pollution prevention and our coregulators in
State, tribal, local, and other Federal agencies collaboratively to plan activities. We actively seek
input to our rules, regulations, and decisions. Top officials meet regularly with industry sector,
environmental, and other constituency groups.
What is different since President Clinton signed Executive Order 12862 in September 1993 is that
we hold ourselves accountable for providing service that rivals the best in the private sector. We
have a set of standards against which we measure ourselves, the Six Principles of Customer
Service:
1. Be helpful! Listen to your customers.
2. Respond to all phone calls by the end of the next business day.
3. Respond to all correspondence within 10 business days.
4. Make clear, timely, accurate information accessible.
5. Work collaboratively with partners to improve all products and services.
6. Involve customers and use their ideas and input.
They apply to the work of anyone at the EPA, whether a manager making billion-dollar decisions
or a brand-new summer hire. We also have sets of process standards for permitting; pesticides
regulation; partnership programs; public access; State, tribal, and local grants; enforcement
inspections and compliance assistance; research grants; and rule making. Cross-agency groups
under the national Customer Service Program (CSP) developed all of the standards.
Hearing the Voice of the Customer is designed to help individuals and organizations decide
whether and how to gather customer feedback. All of us have a need to hear that we are doing
the right things right, and we also need constructive criticism so we can do an even better job.
We need to learn from customers what their expectations are and how well we are satisfying them
(meeting or exceeding those expectations). Formal and informal feedback from our customers
can also provide some sound ideas for transforming an organization.
Gathering feedback may take only a minute or two of listening to an unsolicited comment, or it
may require an extensive nationwide survey. This document is not a cookbook on how to do
feedback and customer satisfaction measurement, but it does provide an array of techniques to
help you to effectively seek and then use what you hear from your customers. The process and
tools presented in this document will help to bring the voice of the customer further into EPA's
work, enabling us to improve processes, products, and services in ways that customers will
recognize and value.
To simplify the processes required of Federal agencies for doing voluntary customer feedback
activities and formal surveys, the CSP obtained a generic Information Collection Request (ICR)
from the Office of Management and Budget (OMB). A Factsheet to help you navigate the
process is part of the Guidelines. The CSP also has software to assist those who wish to
construct questionnaires and analyze results from respondents.
To further enhance everyone's ability at EPA to provide outstanding customer service, the
Customer Service Program, with help from many regional and headquarters staff members and a
contractor, has developed an introductory customer service workshop called Forging the Links.
Its purpose is to clarify the links between providing great service and achieving our mission, and
the links between those of us who are direct service providers and our external customers. The
workshop also underscores the important links between people within the agency as customers
and suppliers for each other. A series of highly interactive followup skills courses is also
available through a network of EPA trainers, and the CSP also has video programs to lend.
In trying to find better ways to provide world-class customer service, the CSP has benchmarked
with other Federal agencies and with several corporations. Findings have been helpful in
developing and implementing the overall CSP. Benchmarking against the best in the Federal
Government and listening to the voice of the customer were a large part of the EPA's first
National Customer Service Conference, hosted by Region 6 from April 14 through 16, 1998.
Proceedings of the event are available from the Customer Service Program and on the program
website at www.epa.gov/customerservice/conference.htm.
This Guidelines document is an important piece of the new picture that is being drawn each day as
EPA gets prepared for the next century. Using the suggestions and steps outlined in this
document will help you to hear the voice of the customer and work to implement the kinds of
changes in products, processes and services that will enable EPA to be an agency that provides
world-class customer service.
Don't assume you know ... continuously ask
what your customers want. Skip this step and
you'll get it wrong.
Al Gore
TABLE OF CONTENTS
PAGE NUMBER
How These Guidelines Were Developed ii
Preface iii
Introduction 1
Why is customer feedback necessary? 1
Who can use the Guidelines? 2
Why have Guidelines for customer feedback? 2
How are the Guidelines organized? 2
A note about conducting customer feedback activities 3
Plan the Customer Feedback Project 4
Who should conduct a customer feedback initiative? 4
How ready is your organization for customer feedback? 5
What kinds of customer feedback are already occurring? 6
What are the core questions to ask for customer feedback? 6
How often should we ask customers for feedback? 7
How long should feedback activity take? 8
Who are our customers and with what services and products do we supply them? 10
Why establish quality control procedures in customer feedback activities? 11
Develop a written plan for the customer survey (checklist) 13
The plan (checklist) 13
Construct Data Collection Procedures 14
What is the "best" approach for assessing customer satisfaction? 14
Continuous assessment 14
Decide on data collection method 15
The sample 17
Determining the sample size 18
Develop the questions 19
Construct the questionnaire 22
Pretest 26
OMB clearance 26
Additional resources 28
Contingency for nonresponse 28
Effective questions (checklist) 28
Choose an approach (checklist) 29
Conduct Data Collection 30
Focus groups 30
Mail surveys 31
Telephone surveys 32
Electronic feedback 34
Analyze the Data 36
Data cleanup 36
Types of data and analyses 37
Analysis: An example 39
Driver analysis 43
Presenting the data 45
Formulating recommendations based on the data 46
Presenting recommendations—Using graphics 46
On developing recommendations 47
Act on the Results 48
Is this the beginning or the end of the process? 48
How do you decide what to do with the feedback you receive? 48
How good is good enough? 48
How do we know what to work on first? 49
Suggested Reading 52
Factsheets
Factsheet I Who are EPA's Customers? I-1
Factsheet II EPA internal control procedures II-1
Factsheet III Sampling—The basics III-1
Factsheet IV Sampling—More on sample size IV-1
Factsheet V Sampling—More advanced topics V-1
Factsheet VI How to obtain clearance for EPA Customer Satisfaction Surveys VI-1
Factsheet VII Unit of analysis VII-1
Factsheet VIII Examples of graphs for presenting customer feedback results VIII-1
Factsheet IX Survey software information IX-1
Tables
Customer Feedback Survey—project timetable 8
Comparison of feedback methods 16
Statistical Techniques 36
Driver Analysis 44
INTRODUCTION
Customer feedback is not fluff, and customer
satisfaction measurement is not mystifying.
These Guidelines were developed so more
people across the agency will understand the
value of customer feedback. We hope this
document will help you feel comfortable with
the concept, the processes, and with your
capacity to perform or manage customer
feedback activities and measure customer
satisfaction.
This document provides information about
collecting and receiving feedback from EPA's
customers. Using these Guidelines will
improve the agency's ability to effectively
collect, receive, and use feedback from EPA's
customers, both within and outside the
agency.
WHY IS CUSTOMER FEEDBACK
NECESSARY?
Learning how we can better serve our
customers can help all of us to provide better
environmental and public health protection.
Feedback—which refers to input on needs,
expectations, and experiences—from EPA's
customers enables us to measure whether the
agency is increasing its ability to satisfy
customers. The bottom line is that finding
out what customers think about what we do
and how we do it will help us to make
improvements in our products and services,
the kinds of changes that customers will
notice and value.
What's in these Guidelines for managers?
How many times have you discovered you were
missing a critical piece of information so you could not
• Figure out why service-delivery was inefficient?
• Understand why customers seemed dissatisfied?
• Answer an inquiry about your program's
accomplishments or weak spots?
• Make a strong case for additional budget dollars?
• Be sure you made the right decision about which
action would make the most significant program
improvement?
You may have missed an opportunity because you
lacked timely and reliable information. Armed with the
right information, you could have made more informed
decisions, eliminated a bottleneck, understood your
customers' problems, documented your resources,
protected the program from profiteers, or known which
changes would produce the biggest payoff.
As a program manager, you are probably inundated
with data and statistics, and may not even recognize a
customer-focused information deficit. Yet you may find
you don't have data-based answers to policy and
operational questions when you need them, despite
huge investments in data collection and reporting.
These Guidelines are about getting the customer-
generated information you need quickly and at a
relatively modest cost. This information is something
more than the data typically produced by management
information systems. This is a collection of facts and
logical conclusions that answer questions like those above. By learning and using a
variety of strategies for obtaining customer satisfaction
information, you can better address specific problems,
gain insight into what's happening in your program,
and determine what directions you should be taking.
Finally, all Federal agencies are required by the Government Performance and Results Act
(GPRA) to measure customer satisfaction and make changes to improve service and satisfaction.
WHO CAN USE THE GUIDELINES?
The Guidelines focus on obtaining feedback from EPA customers on their needs and experiences
with EPA products, processes, and services.
The Guidelines are intended for
• Policy makers as they determine improvements for EPA's products and services and program
managers who seek information from EPA's customers
• States, tribes, local entities, and other EPA partners interested in assessing customers'
satisfaction with services and products they provide
• Project officers who monitor Government contractors conducting customer feedback projects
for EPA.
WHY HAVE GUIDELINES FOR CUSTOMER FEEDBACK?
These Guidelines are designed to help you perform your work. By following them, you will
have a clear road map. The Guidelines will enable you to conduct customer feedback with less
labor, trouble, and personal concern. By having a set of Guidelines that everyone can
follow, EPA will have a consistent approach to customer feedback.
The Guidelines can benefit those responsible for planning and conducting customer feedback
activities by helping them lead or do the work; understand the importance of obtaining
management and employee buy-in for conducting customer feedback inquiries; act on lessons
learned; and respond to staff concerns such as fears about extra work, change, or how to begin
customer feedback.
The Guidelines can benefit managers because they outline what is necessary to obtain and use
feedback that can help them to improve decisions about changing products, processes, and
services. The Guidelines can benefit the agency because following a uniform set of principles
and procedures will help EPA institute a consistent approach to customer feedback; build a
repository of information about customers to track developments and improvements in customer
service; and establish information about who has been contacted, which will help in subsequent
customer feedback activities.
HOW ARE THE GUIDELINES ORGANIZED?
The Guidelines begin with an introduction and progress through a five-step model that can be
applied successfully to obtain customer feedback. The Guidelines contain discussion and
checklists at the back of each section to help you organize and facilitate your customer feedback
projects.
The five steps are
1. PLAN the customer feedback project
2. CONSTRUCT the data collection procedures
3. CONDUCT data collection
4. ANALYZE the data
5. ACT on the results.
At the end of the Guidelines are several Factsheets that provide
additional help. They are referenced throughout the text.
A note about conducting customer feedback activities
The purpose of these Guidelines is to help EPA staff conduct customer feedback activities in a
systematic, scientific manner. The principles and practices in this document are sound, but there
are many other informal ways to listen to your customers. Some of them may provide you with
more valuable information than you will ever get from a statistically solid formal survey.
The most obvious way to get feedback is to talk to your customers. It may be a casual
conversation while you are providing a service or product, attending a meeting, or sharing
information. You might find valuable feedback in a complaint that provides good information
about what needs fixing. You may hear a small or large suggestion about how to make things
easier for the customer or the agency. Much of this kind of feedback is unsolicited, so you have
to be sensitive to it. You need to know when to stop and listen; recognize the chances to learn
from your customers, use them, and remember to take notes! You have opportunities every
day—every time a customer contacts you—to get feedback. This "gut-level" customer reaction
can be the strongest indicator of satisfaction. When you pay attention to their comments,
customers will notice that you are listening to them and that you care about what they say. That
builds trust between you, and trust in EPA.
PLAN THE CUSTOMER FEEDBACK PROJECT
WHO SHOULD CONDUCT A CUSTOMER FEEDBACK
INITIATIVE?
Customer feedback is valuable for everyone, and everyone can
easily ask his or her customer for direct feedback about their needs
and how things are going. In fact, EPA staff and managers have
many opportunities to interact with customers. Among the most
common are face-to-face meetings, telephone calls, public meetings
and other events, and written correspondence. You can find
perspectives containing feedback in newsletters and other
informational materials, videos, Web site messages or electronic
mail, newspapers, and interactive radio and television talk shows and news. Many customer
interactions provide an immediate opportunity to hear from customers how well EPA is
satisfying their needs.
For EPA offices that wish to track and analyze customer feedback over time, organizing your
efforts is important. A critical question to ask yourself is whether you, as the initiator of a
customer measurement project, have the ability to act on the data yourself, or whether others
(potentially States, tribes, or local
agencies with delegated programs;
other external partners; or other
offices and regions) will be critical
to the process. For an EPA unit or
branch to seek feedback, the
decision to proceed may be made
within the group. For larger, more
complex, more resource-intensive
customer studies that have broader
impact, more coordination may be
needed at the Division, Office, or
even regional or assistant
administrator level.
If other EPA staff or managers
will be involved or affected, you
should include them in the
planning stages as early as
possible. It is important that all
interested or potentially affected
individuals support the decision to
obtain feedback and are willing
and able to act on the feedback they receive.
Establish the purposes of customer feedback
Define the feedback objectives
• What do I want to accomplish with this feedback?
• Why am I conducting this feedback activity?
Determine how the findings will be used
• What will we do with the findings?
• Will they be used
As a key business performance indicator?
To revise, correct, or improve a process?
To identify customer needs and expectations?
As a management tool for customer relationships?
To inform planning, decision making, and resource allocation?
To reward, recognize, or compensate employees?
To help validate standards, specifications, and measures?
Determine who will use the findings
• Who else is interested in the findings?
• How much time are they able to give to learning about the
findings?
• How would they prefer to learn about the findings—in briefings,
written reports, graphics, action plans?
They may have their own very constructive ideas about what the research
objectives and methodology might need to be. So that you can be responsive and act on the
feedback you get, work things out early. Front-end coordination can avoid potential roadblocks
such as fear of extra work that may develop from customer suggestions, fear of possible negative
management reactions or reprisals based on customer criticisms, politically incorrect results, or
unrealistic customer expectations about EPA capabilities.
HOW READY IS YOUR ORGANIZATION FOR CUSTOMER FEEDBACK?
Do staff members understand why the organization needs customer feedback? As you begin to
plan customer feedback activities, consider how ready your organization is for customer
feedback by asking these questions:
• Do staff members and managers sincerely intend to pay attention to customer feedback and
act on it?
• Are key managers committed to taking action based on customers' input?
• Have staff members directly participated in defining the need for customer feedback and in
identifying the approaches to use for obtaining customer feedback?
• Have managers, employees, and other users of customer feedback information expressed
their needs, issues, concerns, and objectives? !
• Is there managerial and employee buy-in and ownership?
• Are there any possible barriers—such as concerns about change, extra work, and adverse
findings—to using customer feedback successfully?
• If there are barriers, are there identified methods to overcome them?
If you answered these questions "yes," your organization is clearly ready for customer feedback.
If you answered "no" to some questions, you might consider what you can do to prepare your
organization to obtain and use customer feedback. Simply put, the more ready your organization
is for customer feedback, the more meaningful and successful the activity will be, which in turn
means that EPA will be more responsive to customers' needs and preferences.
If your organization is not fully ready for customer feedback, you should not necessarily halt
your customer feedback activities. Instead, just understand that you will probably face some
challenges in getting the work done, getting managers to pay attention to findings, and assuring
customers that your organization is committed to implementing the changes they may want. You
may need to start slowly, collecting and documenting unsolicited feedback and informal
opportunities to gather customer input. You can make some positive changes based on that
feedback, and build a case for performing broader and more formal information collections to
verify and expand the anecdotal information you gathered.
WHAT KINDS OF CUSTOMER FEEDBACK ARE ALREADY OCCURRING?
Before proceeding with a new customer feedback activity, check with EPA's Customer Service
Program in the Office of Policy to see what recent work has been conducted. You should also check with
delegated program representatives (for certain customer feedback functions). This will enable
you to see if anyone else has collected the same or similar information that you can use, possibly
avoiding unnecessary duplication, saving time and money, and making best use of previously
gathered data.
WHAT ARE THE CORE QUESTIONS TO ASK FOR CUSTOMER FEEDBACK?
It is important to have some core questions that are always used by those doing customer
feedback. Core questions represent broad levels of understanding and impressions about
expectations, EPA responsiveness, and customer satisfaction. By using core questions, EPA can
compare and aggregate customer feedback information, both across the agency and over time.
When using the questions provided below, individual programs, regions or labs may find it
useful to substitute their own organization's name for "EPA." For example, one question might
be, "How courteously did the Hotline staff treat you?"
The following are core questions that customer feedback should incorporate:
Overall, how satisfied are you with the services and products you have received from EPA?
1  2  3  4  5  6  Don't know/Not Applicable
(1 = Very dissatisfied, 6 = Very satisfied)

How courteously did EPA staff treat you?
1  2  3  4  5  6  Don't know/Not Applicable
(1 = Very dissatisfied, 6 = Very satisfied)

How satisfied are you with the communications you have received from EPA?
1  2  3  4  5  6  Don't know/Not Applicable
(1 = Very dissatisfied, 6 = Very satisfied)

How fully did EPA respond to your needs?
1  2  3  4  5  6  Don't know/Not Applicable
(1 = Very dissatisfied, 6 = Very satisfied)
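Because the same core questions and 1-to-6 scale are used agencywide, answers can be pooled and compared across offices and over time. As a minimal sketch in Python (the function name and response values are illustrative only, not part of the Guidelines):

```python
# Summarize answers to one core question on the 1-6 satisfaction scale.
# "Don't know/Not Applicable" answers are stored as None and excluded
# from the mean, but counted so they can be reported separately.

def summarize_core_question(responses):
    """responses: list of ints from 1-6, or None for Don't know/Not Applicable."""
    rated = [r for r in responses if r is not None]
    return {
        "total_responses": len(responses),
        "dont_know": responses.count(None),
        "mean_satisfaction": round(sum(rated) / len(rated), 2) if rated else None,
    }

# Hypothetical answers to "Overall, how satisfied are you ... ?"
print(summarize_core_question([6, 5, 4, None, 6, 2, 5, None, 4]))
# {'total_responses': 9, 'dont_know': 2, 'mean_satisfaction': 4.57}
```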
HOW OFTEN SHOULD WE ASK CUSTOMERS FOR FEEDBACK?
Many organizations find it useful to contact their customers once a year to get an overall measure
of satisfaction. Other types of feedback, such as follow-up telephone calls or comment cards,
provide immediate information at the point of contact with customers. When organizations need
targeted customer information, most find it useful to conduct multiple studies each year.
As a rule, EPA does not want to overburden our customers, so take care to
• Avoid feedback activities that duplicate work already conducted
• Organize customer feedback projects to avoid contacting the same customer repeatedly
• Seek consent from customers to participate in feedback projects, especially those that are
lengthy or where customers have been contacted previously.
So, there is no standard answer to the question about how often to ask for feedback. The
frequency of customer feedback will depend on several factors:
• Were the findings of previous customer feedback studies positive or negative? If EPA took
action in response to concerns customers raised, has there been enough time to see whether
those actions have been effective in improving customer satisfaction?
• Considering the issue(s) involved in the feedback activity, how often does it make sense to
ask customers' opinions?
• Can we distinguish annual versus ongoing information needs and obtain feedback
accordingly?
• Is there a way to match feedback with EPA-to-customer transactions? Can we ask customers
at the end of a call if the information provided was useful? Is there any follow-up with them
later to see if they used the product provided?
• Has some critical event occurred for which customer feedback would be important? (e.g.,
was the office reorganized to speed customer service or product delivery?)
• Are any changes in programs anticipated that call for surveying customers both before and
after the change?
HOW LONG SHOULD FEEDBACK ACTIVITY TAKE?
Obviously, many variables can affect the time it takes to complete a feedback effort. A few of
these variables might include the type and method of feedback selected, the number of
respondents, and the extent to which those responsible for the survey project are prepared to plan
and act on the results. It is likely that many individuals, including the customer, will have
expectations about how long the effort will last, and when results may become available.
Therefore, it is important to carefully plan the schedule of a feedback effort. On the following
page is an example of the timetable of one feedback survey.
Customer Feedback Survey—project timetable

Deliverable: Time frame

Project Planning and Design: Weeks 1-2 (2-3 meetings)
Design Survey Instrument
- Focus groups: Weeks 4-5
- Internal draft of questionnaire: Week 6
- 1st draft to survey team: Week 7
- Markup meeting: Week 7
- 2nd draft to survey team: Week 8
- Revised draft sent to field: Week 9
- Final version sent for approval: Week 10
- Final approval from agency: Week 12
Data Collection
- Field testing: Week 13
- Revisions (if necessary): Week 14
- Phoning: Weeks 14-16
Analysis and Report
- Analysis: Weeks 17-21
- Report: Week 21
- Briefing charts: Week 23
Process Improvement Workshops
- Coordinating committee: Week 23
- Executive board: Week 24
- Notes to coordinating committee: Week 25
- Notes to executive board: Week 26
Performance Standards and Process Improvement Implementation
- Action teams: Week 29
"Numbers are a poor surrogate for imagination, intuition,
judgement, critical thinking, creativity, and leaps of faith."
Bob Lutz, former Chrysler Corporation Executive
WHO ARE OUR CUSTOMERS
AND WITH WHAT SERVICES
AND PRODUCTS DO WE
SUPPLY THEM?
A customer is someone who directly
relies on a provider for a product or
service. Customers are defined
based on the service or product they
receive. Customers
• Have a direct relationship with EPA, including interactions through a contractor that
represents the agency
• Receive one or more services or products from EPA
• Rely on EPA for a work product or for specialized expertise
• Are directly affected by the actions of EPA
• May receive financial assistance, such as grants
• Include those for whom we carry out a mandate or mission, such as Congress and the Office
of Management and Budget
• Include EPA employees as internal customers of each other.
Relationships and transactions among EPA staff are essential for delivering consistent,
excellent service to external customers.
Customer service and permitting
While identifying many customer groups interested in the permitting
process is possible, there are only two major groups: interested and
impacted parties, and permit applicants. Interested and impacted
parties are those individuals, interest groups, communities, States,
or tribes that raise a concern or have comments regarding the
permit action. Permit applicants are the entities that are seeking
approval from EPA or a delegated authority to conduct a regulated
activity. In addition, relationships between the governmental entities
involved in the permitting process (EPA Headquarters, Regional
Offices, and delegated authorities) are also important.
In permitting programs, receiving and effectively using feedback
from customers results in actions that are more acceptable and
supported by interested and impacted parties, permit applicants,
and regulators. Interested and impacted parties are individuals or
groups that raise a concern or have comments regarding a permit
action. When the permitting authority effectively listens and
responds, the interested and impacted parties and permit applicants
generally feel better served by Government. Also, through the
information from the feedback, permitting agencies can more
effectively plan and allocate resources to address issues that, in
turn, more directly relate to customer concerns. Experience has
shown that permitting actions often benefit from customer input,
particularly about site-specific conditions that technical staff alone
cannot provide. Effective customer service in the long run saves
resources by promoting more efficient permitting decisions.
The Customer Service in Permitting (CSiP) Workgroup is a
continuation of the agency's efforts to improve the permitting
processes. Early efforts in customer service focused on setting
standards and developing surveys to obtain feedback from
permitting customers. CSiP members recognized that since most
permitting occurs at State, tribal, and local levels, efforts to
encourage customer service at those levels are also needed. The
CSiP provides an opportunity for Headquarters and Regional staff
to work with State representatives in developing the necessary tools
to receive effective feedback and to deliver customer service in
permitting. The CSiP's mission is to promote high quality customer
service in EPA permitting. This includes permits issued by EPA or
by delegated authorities at State, tribal, and local levels. To
accomplish this mission, the Workgroup uses customer feedback to improve permitting
activities, specifically to:
• Measure standards of customer service
• Increase the skills and abilities of individuals involved in the
permitting process
• Create a culture that values customer service.
Stakeholders are those with an interest in EPA's work and policies; someone who may interact
with the agency for another person or group; or someone who influences our future direction
(including financial resources). Clients are individuals and organizations with a dependent
relationship to the agency.
Feedback from stakeholders and clients is also necessary and valuable for specific activities of
the organization. However, it is important to know when the individual who is giving feedback
is trying to influence your decisions or is very dependent upon you and on maintaining goodwill
in your relationship.

Before beginning a customer measurement project, it is important to be clear about which
customers and which products and services are the focus.
WHY ESTABLISH QUALITY CONTROL PROCEDURES IN CUSTOMER FEEDBACK
ACTIVITIES?
Developing and applying good internal control procedures is a sound business practice and helps
assure the quality, reliability, and integrity of information used for decision making. The
standards and techniques of quality control should apply to data collection, administration of data
collection activities, analysis, and reporting of results from customer feedback.
"If you don't care where you're!going,
then it does not matter which way
you go." '
Lewis Carroll
or
"If you don't know where you're
going, you won't know when you get
there." '•
Yogi Berra
Controls vary and may be as simple as merely limiting access to raw, customer-specific data;
separating the data collection, administrative, and presentation duties from the affected action
officials; or as thorough as performing independent quality assurance reviews. The purpose of
internal controls is to provide
reasonable assurance that the
objectives of customer feedback
will be accomplished in a reliable
and cost-effective way. For a
description of specific control
standards and techniques, see
Factsheet II.
"Before we start
talking, let us
decide what we
are talking
about."
Socrates
OIG—Serving many customers
Customers of the Office of the Inspector General (OIG) cut across
EPA programs and their customers. The OIG is unique in that its
statutory mandate (the Inspector General Act of 1978) requires it to
be organizationally independent to ensure its objectivity and
impartiality and to prevent interference in the conduct of its work. The President
appoints the Inspector General, without regard to political affiliation,
and the Inspector General reports directly to Congress.
To prevent and detect possible fraud, waste, and mismanagement
and to promote economy, efficiency, and effectiveness in EPA's
programs and operations, the OIG conducts audits and
investigations. The OIG also performs evaluative, consulting and
advisory services to reduce risks, improve accountability, and ensure
financial integrity. Although organizationally independent, the OIG
is part of the EPA management team, dedicated to the agency's
environmental mission.
While the OIG is the agency's fiscal and operational watchdog, it is
also the agency's consulting partner for collaborative problem
solving and recommending sound business practices. The OIG is in
the management and enforcement service business, but this role
creates unique customer relationships, often with disparate
expectations from a variety of customers with frequently different
points of view.
The OIG is both independent and collaborative, part of the EPA
team, yet independently reports to Congress. So how does the OIG
know how well it is serving its customers when different customers
value different things? By working very hard to improve modes and
means of communications.
The OIG maintains frequent two-way communications through
personal contact and correspondence with both key agency
managers and key staff members of Congressional Committees.
The OIG also works directly with other Federal, State, and public
auditing and law enforcement organizations.
We understand that while the critical nature of our work may provoke
other than positive responses, we want our customers to realize that
ultimately we share the same objectives that they do. We act as
agents of change and strive for constructive solutions. Our
challenge is to use customer surveys to tell us what is important to
our customers in the context of our mission, and measure how well
we are achieving the attributes of our mission. Hearing the Voice of
the Customer will give us the process to obtain the most relevant
information possible that can influence the OIG success as valued
agents of change.
Specifically, we will begin seeking customer feedback from several
sources following each major audit, investigation, and assistance
project. We will be measuring attributes of OIG products/services
and staff. Agency officials may not like all of our findings, but they
can still strongly agree that our work is relevant and accurate, and
that our staff is professional and encourages constructive
communications, as these are the attributes needed for agents of
change. (See sample survey with Factsheet VIII.)
Develop a written plan for the customer survey (checklist)

Purposes of the activity
Quality control procedures
Ways findings will be used
Identify the target group
Methods of data collection
Timing for data collection
Analysis plan
Tools for carrying out feedback activity
  Discussion topics
  Survey instrument
  Database
Anticipated products
  Tables and graphs
  Text that interprets findings
  Slides
  Specific conclusions
  Recommended actions
The plan (checklist)

Get ready
See what feedback you already have
Decide which core questions to ask
Decide frequency for customer feedback
Define the target customer population
Identify services supplied to customers
Establish purposes of customer feedback (see next checklist)
Decide whether to do the activity or contract out
Develop written plan
Determine resources needed
Obtain agreement to proceed (if needed)
CONSTRUCT DATA COLLECTION PROCEDURES
WHAT IS THE "BEST" APPROACH FOR ASSESSING
CUSTOMER SATISFACTION?
There is no one best approach for assessing customer satisfaction.
What will work best for any particular EPA program area will
depend on the kind of product or service provided, the kinds of
customers served, how many customers are served, the longevity
and frequency of customer/supplier interactions, and what you
intend to do with the results. Two very different approaches both
produce meaningful and useful findings:
Continuous assessment methods—Methods to obtain feedback
from the individual customer at the time of product or service delivery (or shortly
afterwards).
Periodic survey approaches—Methods that obtain feedback from groups of customers at
periodic intervals after service or product delivery. They provide an occasional snapshot of
customer experiences and expectations.
Understanding customers' expectations and satisfaction requires multiple inputs from customers.
It is like peeling away layers of an onion—each layer reveals yet another deeper layer, closer to
the core. Both types of methods are helpful for obtaining customer feedback to assess
EPA's overall accomplishments, degree of success, and areas for improvement.
Continuous assessment
These Guidelines focus on methods for obtaining customer feedback periodically, but it is very
important to remember that you can adopt continuous assessment as a standard method for
obtaining customer satisfaction information. Some ways to include continuous assessment in
your work include
• Inserting a feedback card in every copy (or every nth copy) of any published report sent out
• Making a followup phone call to every customer (or to every fifth, or twelfth, or nth
customer) within 1 or 2 days of interacting with that customer.
The information you obtain from continuous assessment can provide valuable and timely insight
into the experiences that your customers have had with EPA.
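A minimal sketch of the "every nth customer" idea above, in Python. The contact list and interval are hypothetical; starting from a random offset keeps the same list positions from being chosen every time:

```python
# Systematic selection for continuous assessment: choose every nth customer
# for a followup call within 1 or 2 days of the interaction.
import random

def select_for_followup(customers, n):
    """Return every nth customer, starting from a random offset in [0, n)."""
    start = random.randrange(n)
    return customers[start::n]

contacts = [f"customer-{i}" for i in range(1, 101)]  # hypothetical contact list
print(select_for_followup(contacts, 12))             # roughly every twelfth customer
```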
Decide on data collection method
Before considering systematic methods for collecting data, remember that informal methods for
obtaining information from customers clearly produce valuable information. Everyone at
EPA needs to recognize and use these everyday opportunities for customer feedback. Use this
information to complement the more systematic forms of gathering feedback discussed here (see
previous discussion, page 4).
Many different, more formal methods can be used to collect customer feedback data. Methods
frequently used to gather customer feedback include focus groups, a mail-back postcard that is
included among materials sent to EPA customers, a mail survey, a telephone survey, a
publication evaluation form included at the back of every copy, and a printed or in-person survey
(which might include computer-assisted personal interviews or an intercept survey when you ask
every nth customer attending a function or visiting a facility to participate). Electronic mail will
become an increasingly important means for collecting customer feedback as more people
gain access to the Internet.
When you decide which method to use, you should consider several factors, such as the types
and number of questions to ask. The decision will also be affected by available resources to
gather customer feedback, how fast decision makers need to have the information, and how
representative the findings need to be. The response rate—the number of customers who
actually answer questions divided by the number contacted for information (for example, 150
completed responses from 500 customers contacted is a 30 percent response rate)—is also an
important consideration because it will affect the way you can use findings. A summary of
different methods appears in the table below.
When selecting a method for obtaining customer feedback, recognize that you need different
kinds of information for different methods of obtaining customer feedback. If, for example, you
choose a mail or phone survey, you will need an accurate name, address, and/or telephone
number. At times it may also be critical to know which EPA programs or services the customer
sought or received, as well as any demographic information available.
Note that several different practices can affect the ratings of various data collection methods:
• Focus groups and telephone and in-person surveys require trained staff to conduct proper
interviews and prevent interviewer bias.
• Focus groups and telephone and in-person surveys provide EPA with the opportunity to show
through direct personal contact that the agency takes customer feedback seriously.
• Telephone surveys can more readily accommodate differences in language and literacy levels
than can mail surveys, but they cannot accommodate lengthy questionnaires or visuals.
• Some people are difficult to reach by telephone or do not have one, and many who do are
reluctant to participate or simply will not participate in telephone interviews.
[Table: Comparison of feedback methods. The original rotated table rates each feedback method
(low, moderate, or high) on criteria including: convenience for the customer to complete; ability
to encourage the customer to participate; ability to provide instructions or explanation to the
customer; whether the customer must initiate the feedback; the respondent's perception of
anonymity; the opportunity to probe and ask "why" questions; the need for an accurate list of
telephone numbers or addresses; whether "branching" and skip patterns are allowed; response
rates; and the extent of data cleanup required.]
• Mail surveys can be longer, since respondents can work at their own pace, but they have the
longest response time and may not reach the intended target.
• Mail surveys allow no interviewer bias to enter in, but they offer little ability to probe or ask
complex questions, and should there be any ambiguity in questions, it cannot be clarified.
• The amount of followup can dramatically influence costs, timeliness, and the ability to
generalize results. Mail surveys, for example, may have several followup mailings to
customers who do not initially respond. Customers who initially decline to participate in a
telephone survey may be assigned to a special staff member who is charged with trying to
convince the customer to answer the questions. An advance letter can increase participation
and response rates for mail and telephone surveys. It can also allay customers' concerns
about such matters as how they were selected, why they have been selected to participate
again (if applicable), anonymity, how long it will take them to answer the questions, and how
findings will be used. (See sample advance letter following.)
The sample
If the number of customers of interest is relatively small, not more than 50, each could be
contacted to obtain feedback. This is the census approach. In many cases, EPA services or
products are provided to a large group of customers, one too large for a census approach. In
such cases, a sampling approach is needed, and two options are possible: 1) a judgment sample,
in which you consciously select the customers that you will contact from the entire group of
customers served, and 2) a probabilistic sample, in which the customers you will contact are
picked randomly from the entire group of customers served during the period of interest (e.g.,
the past year).
Sample letter
[on EPA letterhead]
Mr. John Doe
Alpha, Beta, and Gamma Co., Inc.
555 Main Street
Anywhere, USA 12345

Dear Mr. Doe:

I am writing to let you know that your name has been selected at random to
participate in a survey about business owners' experiences with EPA. You
are one of a small group of people we are contacting. Your feedback about
your experiences can help shape our future direction.

We at EPA will take findings from the survey into consideration as we develop
our plans for the next decade. We are committed to incorporating customer
viewpoints and recommendations into our strategic planning, budgeting, and
decision making while recognizing the need for balancing sometimes
competing and conflicting interests.

I realize that we may have contacted you before to answer similar questions.
We are tracking our efforts to respond to customer concerns, so it is very
important to hear from you again. Your responses will be reported only in
aggregate form.

You should receive the survey in the next few days. It will take less than 10
minutes for you to complete. I urge you to consider the questions carefully
and let us know how we can better serve you. In the meantime, if you have
any questions, please call 1-800-xxx-xxxx to speak with a staff member on
EPA's survey team (or someone at YYY Consulting, the firm conducting the
survey for EPA).

I thank you in advance for your time and consideration.

Sincerely,

Name and Title of highest possible EPA person
In most cases, it is better to rely on a probabilistic sample than a judgment sample. Judgment
samples may be biased because of the way customers are selected for the study. If a sample is
biased, it is impossible to draw inferences about the entire group of customers served. As long as
the response rate is high enough, probabilistic samples are not biased, so inferences can be made
about the entire group of customers that the selected ones represent.
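A minimal sketch of drawing a probabilistic (simple random) sample in Python, assuming a hypothetical list of the customers served during the period of interest:

```python
# Draw a simple random (probabilistic) sample: every customer served has an
# equal chance of selection, so findings can be generalized to the whole group.
import random

customers_served = [f"customer-{i}" for i in range(1, 1201)]  # hypothetical list
sample = random.sample(customers_served, k=300)               # sample 300 of 1,200
print(len(sample), sample[:3])
```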
Determining the sample size
If you choose to conduct a mail, telephone, or in-person survey, you will need to decide the
number of people who will be selected to participate. To determine this number—the sample
size—several factors should be considered, such as the total number of customers served, the
intended use of the results, available resources, and time.
• The larger the percentage sampled, the more certain you can be that the feedback obtained
will be representative of the results that you would have obtained if you had contacted and
gotten feedback from every customer.
• The smaller the percentage sampled, the greater the likelihood that feedback from those in the
sample will differ significantly from those in the full list of customers.
The relationship between sample size and accuracy of findings is due to sampling error, a
measurement that indicates the extent to which the sample of customers is different from the
entire group of customers under study. In a news article that reports a politician's approval
rating as 62 percent, plus or minus 5 percent, the "plus-or-minus" value is the sampling error.
To decide the size of the sample, you can either
• Determine the largest sample size that you can afford and calculate the associated sampling
error
• Determine the maximum sampling error that is acceptable and then select the sample size
that will produce that level of error.
The sampling error can be estimated through a confidence interval. A confidence interval
specifies a range of values within which the true measure is likely to fall. Typically, survey results rely
on a 95 percent confidence interval, but lower levels are acceptable, depending on how you plan
to use the findings. Popular media reports rarely stipulate confidence intervals, but they are
implied. Using a politician's popularity rating as an example, the unstated premise is that the
analyst is 95 percent certain that the politician's popularity is between 57 and 67 percent; that is,
62 percent, plus or minus 5 percentage points.
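The "plus-or-minus" value follows from the standard margin-of-error formula for a proportion. A sketch in Python, assuming a 95 percent confidence level (z of about 1.96); the 62 percent figure comes from the example above, while the response count of 380 is hypothetical:

```python
# Margin of error for a proportion, and the sample size needed to reach a
# target margin (p = 0.5 is the most conservative assumption).
import math

def margin_of_error(p, n, z=1.96):
    """Approximate sampling error for proportion p from n responses (95% CI)."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size_for(error, p=0.5, z=1.96):
    """Smallest n whose margin of error is at most `error`."""
    return math.ceil((z / error) ** 2 * p * (1 - p))

print(round(margin_of_error(0.62, 380), 3))  # 0.049 -> about plus or minus 5 points
print(sample_size_for(0.05))                 # 385 responses for +/- 5 points
```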
One last point to consider in determining the sample size is the kinds of comparisons that you
will want to make with survey findings. Many times, analysts are interested in comparing ways
that different customers react to various services. These comparisons may involve large- versus
small-sized businesses, the general public versus educators, and so forth. If these comparisons
are a critical portion of the analysis, you must plan for them in the sample design so that enough
of each customer type is surveyed to make the findings meaningful. See Factsheets III, IV, and
V for further information on sampling.
Develop the questions
In deciding the questions
to ask customers, it is a
good idea to keep two
principles in mind: 1)
make sure that the
questions and answers
address your objectives,
and 2) set limits on the
length of the survey
instrument.
Many sources are available
to help develop questions
for surveys. These include
software packages such as
Corporate Pulse (which is
available to EPA staff
through the Customer
Service Program), prior
surveys sponsored by EPA
and other agencies, journal
articles, and item banks
maintained by some
universities and survey
organizations. When
possible, it is better to use
a previously tested and
validated question rather
than one newly created for the current survey.
Examples of closed-ended questions

Yes/no
In the past 6 months, have you contacted the XYZ office?
1 Yes   2 No

Categories
In what kind of community is your business located? Would you say
it's...
1 Urban   2 Suburban   3 Rural

Rank order
Of the following items, which three are most important to you?
Please indicate with a "1" for the most important, a "2" for the next
most important, and a "3" for the third most important.
__ Clean air
__ Clean water
__ Hazardous waste disposal
__ A minimum level of Government regulation
__ Lower taxes

Scale
Please rate your satisfaction with the service you received, using a
scale of 1 to 6. "6" means you are very satisfied, and "1" means you
are very dissatisfied.
1   2   3   4   5   6
Survey questions are generally of two types: open-ended and closed-ended. In open-ended
questions, the customer creates his or her own answers. The following are examples of open-
ended questions:
• Do you have any suggestions for improving service? [IF YES], What are they?
• How could EPA be more responsive to your concerns?
• Could you please describe the most satisfying experience you've had with EPA?
Closed-ended questions limit the responses a customer can provide. They may include yes/no
answers, categories of responses, rank-ordered responses, or scales. Examples of each type
appear in the box above.
With closed-ended questions, it is relatively easy to record and analyze responses, and you will
not receive irrelevant or unintelligible responses. However, you risk "missing the boat." To
illustrate, suppose you ask the closed-ended question, "What was the main reason for your
visit?" giving several possible answers, and 30 percent of your respondents mark "other."
Drawing valid conclusions about why customers visited would be hard. If you decide to use
closed-ended questions, pretest them to identify all the likeliest responses to your questions.
In developing questions
and answers for closed-
ended items, the
advisability of including
response options such as
"don't know" and "no
opinion" should be
carefully considered.
While customers should
not be forced into
providing responses when
they really do not have
answers, it is better to find
ways to encourage a
response than to let
customers default to a
neutral position. In mail
surveys, this
encouragement can be
accomplished through
instructions; in telephone
and in-person surveys, it
can be fostered by not
On developing questions
Whatever type of feedback method you choose, allow plenty of time and
resources for developing your questions. This process involves several
cycles of writing, testing (using actual customers served), and rewriting.
Remember, what you are looking for is actionable information for
managers, so ask yourself, "What action could I take with this kind of
answer?"

Questions that are too vague can mean different things to different people.
The best thing to do is be specific. For example, you may have identified an
issue such as, "What do our customers think of our new application form?"
That's pretty bland. Go on to questions like
• What do our customers like or dislike about our new application form?
• Do our customers find the instructions on the new application form
helpful or confusing?
• Is the new form easier or harder to understand than the old form?
• Do customers spend less time filling out the application?
• Do front line staff spend less time answering questions about the form?
Specific questions will be more likely to give you information you can act on
to improve your program. And asking front line staff for their input is always
a good thing to do, as well.
20
-------
CONSTRUCT DATA COLLECTION PROCEDURES
offering "don't know" and "no opinion" as response options. On the other hand, including "not
applicable" as a response is important in mail surveys, so maf: customers are able to indicate this
when they have not had a particular experience. In asking questions about a past event, consider
giving a "don't remember" option. Bleep the survey to a reasonable length by asking only the
questions you need to address the issues of concern that prompted your survey; leave out the
"nice-to-know" questions.
Recognize that open-ended questions will provide a richness of data that can complicate analysis.
Reducing responses to a few categories that can be coded, entered into a database, and analyzed
can be difficult. It is probably best to use a mix of questions, both closed and open, in most
customer feedback questionnaires.
If you are planning an ongoing or periodically repeated survey, identify a few key program goals
that are unlikely to change very soon, and focus your questions on them. Develop questions that
will indicate how well customers think the goals are being met. These key questions need not be
elaborate or profound, but should be very basic to your program. To effectively compare results
over time, you need to use essentially the same core questions in your survey upon each iteration. You will need to avoid making any major changes to these key questions, whether in wording, scaling, or placement, so be sure to ask the right questions from the beginning.
Questions that are relevant to customers should be developed to fulfill the purposes and the objectives of the specific customer feedback activity being conducted. Although this may seem obvious, it is important to remember throughout the development of questions. Be particularly wary of questions that may be interesting to ask, but that only add time and cost while not producing useful information. Such questions include

• Extraneous questions that do not address the stipulated purposes and objectives of the feedback activity.

• Questions that are subject to misinterpretation. These may use vague words or unfamiliar jargon, or could be understood differently by different types of customers.

• Double-barreled questions that embed more than one item, such as "On a scale of 1 to 6, please indicate how clear and useful the materials are." The customer may have one opinion about clarity and another about usefulness, but is not given an opportunity to distinguish between them in his or her response.

• Questions that may upset some respondents. Questions that customers may perceive to be intrusive, such as those about household income, are best worded neutrally (for example, by asking whether the customer's household income falls above or below a certain level) and placed at the end of the survey.
• Questions on matters that customers may consider sensitive or offensive, especially about cultural, ethnic, gender, and socioeconomic considerations.

• Questions that do not elicit responses that point to specific remedying actions.
If you do not ask the right questions in the right way relatively soon after the service experience,
feedback will not be as useful as it might have been. Also, remember that to effectively compare
results over time you will need to avoid making major changes to key questions, whether in
wording, scale, or order in the questionnaire.
There is no single correct scale to use. However, there are several important issues to consider:

• Whenever possible, the same scale should be used throughout a given questionnaire to help ensure that different responses within a questionnaire can be validly compared.

• Different survey efforts within an organization should use the same scale. To this end, we recommend that when using the core questions described above, you consistently use the same scale of one to six (1-6).
Construct the questionnaire
No matter what method you use to collect data, all questionnaires follow a similar format:

• Introduction—sets forth the purpose of the survey and guides the customer through the questions

• Customer experience—establishes the customer's level of knowledge regarding various parts of the questionnaire

• Measurement—asks the person surveyed to characterize his or her experiences, needs, and desires as an EPA customer

• Customer information—gathers data that will be used to classify respondents.
The methods used to construct the questionnaire are different, depending on the mode of data collection that will be used to obtain customer feedback. The next sections present methods for constructing questionnaires for focus groups, mail surveys, and telephone surveys—the most frequently used forms of data collection in periodic surveys to obtain customer feedback.
Focus groups. As knowledge about customer surveys has expanded and entered the public domain, more people claim to be conducting focus groups. It is important to distinguish between focus groups—which are based on scientific procedures and understanding of human interactions—and more casual discussions among people who share a common interest or concern. Both approaches provide potentially useful information, but analysts should recognize the difference between data from focus groups and data from more informal gatherings. (See the discussion of focus groups under Conduct Data Collection.)
The key instrument for a focus
group is the moderator's guide.
This is a series of questions,
probes, and discussion topics that
are arrayed in a logical order. The
moderator uses the guide to elicit
opinions and experiences from
participants, and to ensure that
discussions stay focused as much
as possible on the critical issues
around which the group was
formed. A sample moderator's guide appears below.
Typically, a moderator's guide is organized as follows:

• Introductions by moderator and participants

• Review of ground rules, such as
  1. You have been asked here to offer your views and opinions; everyone's participation is important; the conversation does not need to flow through the moderator, although the moderator will manage the group
  2. Speak one at a time (avoid side conversations)
  3. Note videotaping, audiotaping, and observers (as applicable)
  4. There are no right or wrong answers; consensus is not required
  5. It is okay to be critical; if you don't like something, say so
  6. All answers are confidential, so feel free to speak your mind

• Brief explanation of the focus group purpose and introduction of the topic

• Definitions

• Questions, probes, discussion topics

• Closing and thanks.

Mail surveys. The mail survey has to do everything you would do if you were with the customer. It has to be visually appealing, have a pleasant tone, and be clear. The survey instrument is under the direct control of the customer. Its physical look will affect the customer's willingness to respond; the clarity of the instructions and questions will affect the customer's ability to interpret their meaning correctly.

Single-page questionnaires and comment cards should be attractive and easy to read. Longer questionnaires should be printed in booklet form, on 11" x 17" paper that is folded in half and stapled in the middle to produce a standard 8 1/2" x 11" page. The cover should be visually appealing and use a logo or other graphic design to interest the customer, and no questions should appear on the cover. Use of color ink and high-quality paper will add only minor costs to the survey, but can substantially improve response rates and reduce the cost of followup correspondence and telephone work by staff or contractors. The cover should give the title of the survey activity and indicate who is conducting the work. For its surveys, the Social Security Administration uses brightly colored paper, desktop publishing to allow more flexibility in design, and larger print to accommodate the needs of its elderly and disabled customers.

Survey questions should be presented in a logical sequence. Many survey experts believe that
the first question on the survey, more than any other, will determine whether your customer
completes or discards the questionnaire. Starting with a fairly simple question is a good idea because it suggests to the customer that completing the survey will be neither difficult nor time-consuming. It is also advisable to ask a fairly interesting question to gain the customer's interest.
The next set of questions should focus on matters that the customer is most likely to judge as
useful or salient. This continues the process of drawing the customer in so that he or she
becomes engaged with thinking about the questions being asked and becomes invested in
completing the survey. Grouping questions together that share common themes makes sense
because the customer then focuses on that particular area of inquiry. To the extent practical,
group questions together that have similar types of response options. For example, questions that
have yes/no responses should be together and questions that have scale responses should be
together.
The order of questions should also mirror the thought processes that customers are likely to
follow. For example, questions about particular experiences with onsite inspections should
precede questions about suggestions to improve those inspections.
The final set of questions should center on those most likely to be sensitive or offensive. These
may include questions about personal characteristics (e.g., race, age, income) and unsuitable
behaviors.
The final page of the booklet should not have any survey questions. Instead, it should invite the customer's comments or suggestions about anything raised in the survey or other issues and concerns important to the respondent. It should also indicate the address for returning the questionnaire (in case the survey gets separated from the reply envelope) and, when possible, a toll-free number set up exclusively to receive inquiries about the survey.
Telephone surveys. Because customers have no questionnaire in front of them during a
telephone survey, concerns about visual appeal are not applicable for this form of data collection.
Issues regarding ordering and clarity of questions are important, and the same principles apply as
with mail surveys.
The difference between mail and telephone surveys is that spoken language is very different from
written language, and customers must be able to respond to questions based only on the
information they hear. So it is critical that you ensure that your interviewers speak clearly and
are well trained. Additionally, the interviewer acts as an intermediary between the customer and
the questions posed. With this in mind, the following principles apply to telephone surveys:
• The introduction the customer hears will probably determine whether the interview is
conducted or the customer hangs up. The introduction should be concise, state the purpose of
the call, estimate the length of the call, and assure confidentiality. This is a sample:
Hello, my name is [fill in], and I'm with the Environmental Protection Agency [or XXX Consulting]. We're conducting a survey of people who have received materials from the EPA to learn about their experiences and opinions. Let me assure you that this is not a sales call, and that we will keep all information about you and your responses private. We will use the information you provide only to help improve EPA's services. The survey will take less than 15 minutes to complete and is purely voluntary. Is this a convenient time, or would you like to set up a better time for me to call you back?
• Because customers will rely on verbal cues and instructions, rather than written ones, questions should have a limited number of responses (about three or four).

• Because customers will rely on verbal questions, each question should be relatively short.

• Avoid questions that ask the customer to look up information or check with others.

• In constructing the questionnaire, be sure to read the questions aloud to others to see if they sound clear and are understandable. Remember, what works for the written word does not always work for the spoken word.

• Complex skip patterns and branching are easily accommodated through computer-assisted telephone interviewing (CATI) systems. Skip patterns occur when a particular answer to one question means the respondent is not asked certain questions that would otherwise follow; branching occurs when a particular answer to one question leads to a series of questions that are customized to that particular answer. (A sketch of this logic appears after this list.)

• Rank-order questions are subject to error in telephone interviews in a way that they are not for mail or in-person surveys. Rather than asking a customer to rank-order a list of, say, eight items, it is better to ask that person questions in a series of pairs ("Which is more important to you, X or Y?") or break up the list into a series of separate scaled items ("On a scale of 1 to 6, where 1 is extremely important and 6 is not at all important, how do you feel about X? On a scale of 1 to 6, how do you feel about Y? How about Z?").

• When changing subjects, telephone surveys should cue the customer with transitional language. Statements such as, "Now, I'd like to turn to your experiences with..." accomplish this shift.

• Instructions for the interviewer must be perfectly clear, and the same format should be used throughout the survey. For example, interviewer instructions are typically written inside brackets, in all capital letters.

• For a sizable telephone survey (of, say, more than 50 people), consider using CATI. For large studies, CATI will be more cost-effective and produce more reliable information.
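To make the skip-pattern and branching logic concrete, here is a minimal sketch in Python of the kind of flow a CATI system automates. The questions, wording, and answer codes are invented for illustration and are not from any actual EPA instrument.

    # Minimal sketch of skip-pattern and branching logic of the kind a CATI
    # system automates. Questions and answer options are hypothetical.

    def ask(question, options):
        """Prompt until the respondent gives one of the allowed answers."""
        answer = None
        while answer not in options:
            answer = input(f"{question} {options}: ").strip().lower()
        return answer

    def interview():
        responses = {"contacted": ask(
            "In the past 6 months, have you contacted the XYZ office?",
            ["yes", "no"])}
        if responses["contacted"] == "no":
            # Skip pattern: non-contacts skip the entire follow-up block.
            return responses
        # Branching: the answer here selects a customized series of questions.
        responses["channel"] = ask("Did you contact us by phone or by mail?",
                                   ["phone", "mail"])
        if responses["channel"] == "phone":
            responses["answered_promptly"] = ask(
                "Was your call answered promptly?", ["yes", "no"])
        else:
            responses["timely_reply"] = ask(
                "Did you receive a written reply within 2 weeks?", ["yes", "no"])
        return responses

    print(interview())

A real CATI package would also handle call scheduling, response recording, and validation; the point here is only how one answer can suppress or select later questions.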
Other methods for obtaining feedback. While many customer feedback activities at EPA are
likely to rely on focus groups, mail surveys, and telephone surveys, remember there are other
methods for obtaining customer feedback. These include prepaid postcards attached to materials
EPA distributes, asking recipients to complete a couple of questions and return the card. They
also include questions asked of Internet users; in-person interviews that make use of a
semistructured list of questions; continuous feedback obtained by calling every fifth, tenth, or nth customer a few days after product or service delivery; and any other formal or informal opportunity for listening to customers.
Pretest
A pretest is a small-scale trial of the instrument
and data collection methods. Conducting a
pretest is extremely important because the
results will provide opportunities for refining the
instrument and methods before the
comprehensive data collection activity begins.
It may seem that a pretest is unnecessary if a
survey has been carefully researched and
designed. However, even the best plans cannot
anticipate all real-world circumstances.
Results from a pretest can tell the analyst

• Whether the flow of questions is logical and orderly
• Whether questions seem relevant and appropriate to the customers
• If customers were able to easily understand and respond to questions
• If response categories are adequate
• Whether questions truly reflect the issue that is intended to be measured.
One of the best ways to conduct a pretest is to randomly select individuals from the target group
of customers served, have them complete the survey according to the method planned for the
overall effort, and then participate in a focus group session to review their opinions. If, for
example, you intend to conduct a telephone survey, customers should be recruited, come to a
central location where they can be interviewed by telephone, then meet as a group to go over the
draft questionnaire and their experiences in answering the questions. Those who are involved in the pretest should not be included in the sample selected for the actual survey.

A pretest is helpful for cost projections, and also provides information about actual burden (that is, the amount of time to complete the survey), which is essential for Office of Management and Budget clearance (required for Federal agencies, their contractors, and cooperative agreement partners performing surveys of direct benefit to the sponsoring agency). A pretest that involves more than nine people who are not Federal employees also requires OMB clearance.
OMB clearance
Under the Paperwork Reduction Act of 1995, the U.S. Office of Management and Budget must
approve any federally-sponsored collection of information that asks the same question of more
than 9 nonfederal respondents. Typically referred to as OMB Clearance, the process is an
exacting one and demands strict adherence to OMB requirements. For example, if a customer
feedback activity is subject to OMB clearance, the cover of the data collection instrument must
contain standard language and the date on which the clearance expires.
EPA has obtained OMB approval of a generic Information Collection Request (ICR) to conduct customer satisfaction work. Under this authority, the clearance process is streamlined and the time for clearance is reduced from as long as 6 months to between 10 and 15 days. This generic ICR is available only for strictly voluntary collections of opinions from customers who have experience with the existing product or service that is the subject of each particular feedback instrument.
Factsheet VI explains the streamlined process and provides several examples of cleared survey
instruments. You may request the factsheet as a separate electronic document from Patricia
Bonner, Director of EPA's Customer Service Program (Mail Code 2161). You may also send her
survey instruments for quick review to ensure that questions are worded to address customer satisfaction issues, not focused on program or outreach effectiveness. In some cases, another information collection request may be more appropriate to use than the customer service generic clearance mechanism.
Proposed EPA survey packages should be sent for final review to Barbara Willis of the
Regulatory Information Division (2137), at Headquarters. She will check the package for
compliance with OMB regulations regarding use of the generic clearance for customer satisfaction
surveys and review the burden placed on the public, State officials, tribes, and other nonfederal
government customers. She will forward to OMB all survey instruments and the required clearance package.
See Factsheet VI for more information about specific procedures to follow, forms to complete,
and general information about EPA Customer Feedback OMB Clearance. EPA personnel listed
below may be able to provide additional information:
Barbara Willis
202-260-9453
202-260-9322 (fax)
Barbara Willis (EPA internal e-mail)
willis.barbara@epa.gov
Pat Bonner
202-260-0599
202-260-4968 (fax)
Patricia Bonner (EPA internal e-mail)
bonner.patricia@epa.gov
Effective questions checklist

__ Use short statements or questions
__ Use simple words
__ Avoid jargon
__ Be clear and easy to understand
__ Arrange questions in logical order
__ Use appropriate response choices (include all possible answers and minimize overlap among the answers)
__ Do not use double negatives
__ Be upbeat and interesting
__ Write to the appropriate reading level (9th grade or less for the general public; several word-processing software packages incorporate a feature that determines reading level)
__ Use questions pretested in other surveys whenever possible
__ Leave out the questions that are "nice to know" but not vital to the success of the program/product/service

Occasionally, regardless of planning, there will be times when response rates are simply too low for you to make inferences and recommend action. In these cases, it is important to have a contingency plan for nonresponse. The plan will need to include the potential additional steps you will take to increase the level of participant response. Some potential steps include

• Reminder calls or postcards. If these steps were not included in the original survey plan, they should be considered if the response is low. If they were included in the original plan, it may be advantageous for you to repeat them.
• Followup contact with nonrespondents. You may need to make telephone calls or other
types of personal contact to nonrespondents to identify the reasons for their nonresponse.
You may want to learn if they understood the intent of the survey and the questions, if the
questions were relevant to them, and if there were specific factors that caused their reluctance
to respond.
• Improve contact information. It may be that many addresses or phone numbers of the target
group are incorrect or out-of-date. Improving this information would very likely improve the
response rate. Places to check include the Internet, credit bureaus, and business directories.
• Revision of survey instrument. In some instances, some of the survey questions may make
respondents feel uncomfortable or unable to respond, so you may need to revise the
instrument. NOTE: If you change the survey instrument significantly, you may not be able to
compare the results received before the change with those received after the change. You
will need to carefully consider the tradeoff of response rate versus data validity.
Some of these steps may require a great deal of effort, time, and money. The group or individual
in charge of the survey will need to carefully consider the various options. If the response rate
remains too low, you may need to wait for a better time and a different customer base, or may
wish to rely on direct conversations with customers.
Construct checklist

__ Design the sample
__ Decide method for collecting data
__ Choose an approach
__ Develop the questions
__ Construct the questionnaire
__ Pretest
__ Prepare OMB clearance package
CONDUCT DATA COLLECTION

[Process diagram: Plan → Construct → Conduct → Analyze → Act]
Whatever methods you choose for collecting data, adequate
planning, training, quality control, and supervisory practices are
essential to ensure that the data collected meet certain standards,
namely that the information is
• Timely
• Accurate
• Efficient
• Useful
• Reliable
• Valid.
FOCUS GROUPS
A focus group project typically involves several steps, as discussed below.
To recruit participants, you will need to compose an effective recruitment script. Use this tool to
create dialogue between the person recruiting participants and the candidate and to qualify
potential participants considering factors such as age, socioeconomic status, and race/ethnicity.
Then you will invite individuals who meet requirements to participate in the group. You should
recruit about twelve qualified participants for each focus group; allowing for last-minute change
of plans and illness, the moderator should expect that about nine will attend.
Several practices can maximize the efficiency of the recruitment process:
• Well before the group meets, mail a letter to participants that confirms the date, time, and
location of the group and states whether the respondents will be paid for participating. The
letter thanks the participants, gives directions to the focus group facility, and repeats the
general objectives of the focus group.
• Additionally, you may decide to provide transportation to the focus group facility for those
participants who need this service.
• On the day of the focus group (or the previous day, if the group is scheduled for the morning), make a follow-up telephone call to the participants to remind them to attend.
Running a successful focus group also requires arranging logistical matters, such as
• Arranging for focus group facilities
• Providing videotaping and audio taping equipment or people assigned as recorders
• Providing a video hookup between the room where the focus group will meet and the room
where you (or others) will observe the focus group (if this is part of the design)
• Coordinating participants' schedules.
During the focus group, it is a good idea to use both a moderator and an assistant to conduct the
session. The moderator will pose questions to elicit candid opinions from the participants, keep
the discussion moving, cover all topics in the discussion guide, recognize when participants bring
up valuable new information, and steer the discussion in that direction if warranted. The
assistant supports the moderator as needed, takes notes, and handles logistics.
MAIL SURVEYS
In setting up data collection procedures for a mail survey, a good database is important. The database should contain, for each customer, a unique identification number, the customer's characteristics relevant for the sample selection (such as geographic location, size of business, or date of last contact with EPA), name and address, mailout date(s), and the date the response is received. This database is a tracking system.
A mail survey typically involves several separate mailings, each of which is called a "wave." Send out each wave of a mail survey on the same date:
• If you use an advance letter, mail all advance letters to customers on the same day.
• About a week later, mail the first questionnaire to all customers. Attach a label with the
unique identification number to each questionnaire. Include a letter in the package that refers
to the advance letter, asks for cooperation, and (when possible) provides a toll-free number
for customers to call if they have questions. The package should also contain a prepaid, pre-
addressed envelope for the customer to use to return the completed survey.
• As completed questionnaires come in, record their return in the tracking system. Similarly,
as undeliverable questionnaires come back (e.g., the customer has moved and left no
forwarding address or the address is incorrect), note that they were undeliverable in the
tracking system.
• About 3 weeks after mailing the first questionnaire, send out the second copy to all those who
have not yet responded. The letter in this packet should note the importance of the study and
ask customers to respond. The second copy of the questionnaire should be a different color
from the first version. This distinguishes between the two copies, sends a signal to
customers, and aids efforts to track responses.
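The wave timetable just described is easy to compute mechanically. The following sketch, with an invented start date, derives the mailing date for each wave.

    # Minimal sketch of the wave schedule described above; the start date
    # is an invented example.
    from datetime import date, timedelta

    advance_letter = date(1999, 5, 3)
    first_questionnaire = advance_letter + timedelta(weeks=1)       # about a week later
    second_questionnaire = first_questionnaire + timedelta(weeks=3)  # about 3 weeks later

    for label, day in [("Advance letter", advance_letter),
                       ("First questionnaire", first_questionnaire),
                       ("Second questionnaire to nonrespondents", second_questionnaire)]:
        print(f"{label}: {day:%B %d, %Y}")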
The following often help improve response rates:
• The advance letter (if used) should be on official letterhead, with a signature or title that is
meaningful to the customer
• Any signed correspondence should use a real signature rather than a rubber stamp (scanning in the signature can work well for many letters)
• Use a "live" stamp (if possible), rather than metered or prepaid postage, to send out the
survey
• Use "address correction requested" to get information on customers whose surveys cannot be
delivered, then use the corrected information in the next mailout
• Use a large enough envelope so that the survey booklet does not have to be folded
• Establish, when possible, a toll-free number for the duration of the data collection period, and
encourage customers to call with questions or comments
• Allow respondents to fax back the completed survey
• If the budget permits, send out a third mailing via certified mail or using an overnight
delivery service (this is a last resort and may produce only minimal results).
Data from mail surveys must be key-entered or scanned. It is usually most cost-efficient to wait
until you have a sizable batch of completed surveys before beginning data entry procedures. Be
sure to do a periodic quality check to uncover data-entry errors.
TELEPHONE SURVEYS
Whether using computer-assisted telephone interviewing technology (CATI) or a traditional
paper-based technique, you must train telephone interviewers specifically on the study's
questionnaire and data collection procedures. The following are topics to cover during
interviewer training sessions:
• Background and scope of the survey. A project leader gives interviewers general
information about the background and scope of the project. She/he explains the types of
information to be collected and the ways in which that information will be used.
• Review of the questionnaire. A person responsible for data collection goes through the questionnaire and leads an item-by-item discussion.

• Dealing with uncooperative respondents. Experienced staff lead discussions about ways to start off the interview right, enlist cooperation, build rapport, and minimize breakoffs and nonresponses. The interviewers also review strategies for managing challenging situations.

• Answering customers' questions. Some frequent questions are
  - How was I selected?
  - What is the survey about?
  - Who is conducting the survey?
  - Who wants to know these answers?
  - How will the information be used?
  - How long will this take?
  - Will I be identified?
  - How do I know you are who you say you are?
• Quality control procedures. Project leaders monitor matters such as posing questions accurately, tone, courteousness, and responsiveness to customers' concerns throughout the survey, and they review these procedures with interviewers. Telephone interviews for any sizable study are usually conducted using CATI technology. CATI systems use computers to facilitate the interviews, a vast improvement over traditional paper-based systems because CATI
  - Greatly reduces the possibility of mistakes
  - Ensures accurate recording of the survey response
  - Instantly establishes a tracking system and a record of each call
  - Provides significant improvements in quality control and efficiency
  - Allows complex branching and skip patterns.
When using CATI, the computer automatically handles tasks such as controlling pace,
organizing which questions are to be asked and which are to be skipped, rejecting invalid or
unlikely responses, and recording closed-ended and open-ended responses. This enables the
interviewer to focus on smooth delivery and good interviewing skills. It also eliminates the
need to enter data after the survey is completed. The net result is a higher-quality interview
and more reliable information.
ELECTRONIC FEEDBACK
Internet surveys use a web-based form that the user completes online at a designated web
address. The survey manager should only consider this method of data collection if the potential
respondents have access to the Internet.
To administer an Internet survey, the survey manager must have a method of contacting the
people selected for the sample, preferably via e-mail addresses. After compiling the sample list,
the survey manager then sends an e-mail alert that will lead potential respondents to the survey
website. Upon entering the website, respondents can then log in and take the survey. Internet
surveys have several advantages:
• The Internet survey is interactive, like a telephone survey, allowing programmed skip
patterns and links to more detailed survey instructions. Unlike a telephone survey,
respondents can see what they are answering.
• Respondents can complete the questionnaire at a time convenient to them.
• There are no calling or mailing costs associated with Internet surveys.
E-mail surveys are one of the fastest and least intrusive means for gathering customer feedback.
Up to 50 percent of the responses are received within 24 hours. They are also cheaper to conduct
since you pay no interviewers or printing and distribution costs. In addition, the survey will
definitely get to the right individual; it will usually not be intercepted and routed to another
person. However, there are also some disadvantages. For example, it is difficult to format an e-
mail message to be clear and concise. It is also likely that respondents will have no perception of
anonymity. Finally, as e-mail use increases, people are becoming less patient with the many
messages that can be received.
Online Focus Groups
Online focus group research is an exciting new application of online conferencing. Traditional
focus groups require:
• The rental of a physical facility, transportation for participants, snacks, recording facilities for
transcription purposes, and time spent in setup and cleanup
• The recruitment of participants from the immediate local area
• Travel costs for moderators who must be located at the same site as the participants
Online focus groups overcome many of these limitations. Features of online focus groups
include:
• The ability to restrict access to pre-authorized participants
• Automatic production of instant word-for-word transcripts

• Use of online fill-in survey forms without leaving the focus group

• Use of online participant profiles filled out in advance (reduces the need for "get acquainted" activities)

• Elaborate electronic moderator discussion controls

• Display (with no action needed by participants) of discussion materials such as PowerPoint slides, Excel charts and spreadsheets, concept papers and other text materials, photographs and other visuals, live websites and their contents, live pictures from web cameras, and even streaming audio and video

• The ability to continue discussion on a split screen while viewing materials such as those described above.
Conduct checklist

__ Choose an approach
__ Design the sample
__ Decide method for collecting the data
__ Develop the questions
__ Construct the questionnaire
__ Conduct a pretest
__ Prepare OMB clearance package

"Statistics are no substitute for judgment."
Henry Clay
ANALYZE THE DATA

[Process diagram: Plan → Construct → Conduct → Analyze → Act]
Throughout the customer feedback activity, the framework for
analyzing findings should be established and modified. An analysis
plan is a useful tool for organizing the data analysis. The analysis
plan should specify how your organization will analyze the survey
responses to produce the desired products. The plan is helpful for
making sure that the data you collect will answer the overarching
questions being posed, for ensuring that you do not gather
extraneous data, and for setting forth expectations about the kinds
of information that will result from the customer feedback activity.
You should include two important items in the analysis plan: 1) the
designation of dependent and independent variables, and 2) the
stipulation of the unit of analysis. A dependent variable is the phenomenon you are
investigating. For EPA's feedback activities, the dependent variable will likely be the degree of
customer satisfaction with a specific product or service. Independent variables help explain the
observed level of the dependent variable, and may include factors such as differences in the
nature of the product or service (e.g., customers were consistently more satisfied with one service
than with another), frequency and type of interaction, and customer differences (e.g., educators,
students, local planners, and small business owners using the same service). The unit of analysis
is what you are studying. In customer feedback surveys at EPA, the unit of analysis will, in most
cases, be the individual person served. When you use continuous feedback methods, the unit of
analysis will generally be the individual customer transaction. For further discussion of unit of
analysis, see Factsheet VII.
DATA CLEANUP
Once you have set up the database and entered all data, you must review the data and prepare
data for analysis. This may entail a broad set of activities, such as deleting cases that left all
answers blank on a mail survey and coding open-ended responses into categories. Generally, this
is the time to run a set of frequencies to show the number of responses of each kind to each question (the number of yeses and noes to a yes/no question) and the total number of responses of all kinds to each question. This quick analysis gives you a rough check on the completeness and accuracy of your data (the total number of responses to any one question cannot exceed the total number of respondents and rarely will differ greatly from the total number of responses for each of the other questions). Frequencies also flag out-of-range values (i.e., responses to one question that are so different from responses to similar questions that you doubt their accuracy).
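As a minimal sketch of these cleanup steps, the following Python fragment (using the pandas library) drops all-blank cases, runs frequencies, and flags out-of-range values. The column names, the 1-6 scale, and the data are invented for illustration.

    # Minimal data-cleanup sketch; data and column names are invented.
    import pandas as pd

    # NaN marks a blank answer; 9 stands in for an out-of-range entry.
    df = pd.DataFrame({
        "q1": [5, 3, None, 9, 4],
        "q2": [4, None, None, 5, 4],
    })
    question_cols = ["q1", "q2"]

    # Delete cases that left every answer blank.
    df = df.dropna(subset=question_cols, how="all")

    # Frequencies: the number of responses of each kind to each question.
    for col in question_cols:
        print(df[col].value_counts(dropna=False))

    # Flag out-of-range values on the 1-6 scale.
    valid = set(range(1, 7))
    for col in question_cols:
        bad = df[df[col].notna() & ~df[col].isin(valid)]
        if not bad.empty:
            print(f"{col}: {len(bad)} out-of-range response(s)")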
TYPES OF DATA AND ANALYSES
Data from focus groups tend to be qualitative in nature. Analysts may tabulate data from focus
groups, such as "x percent of the participants expressed satisfaction." You should treat these
numbers cautiously and not generalize them to the full set of customers because 1) focus groups
usually have only a relatively small number of participants, and 2) participants may have been
recruited because they had specific experiences or characteristics. You may review transcripts
from focus groups to detect patterns and inconsistencies or you may apply more rigorous content
analysis.
For quantitative data, you can produce a variety of statistics:
• Descriptions of central tendencies, such as the mean, median, or mode (i.e., the average
value, the middle value (half are larger and half are smaller), or the most frequently occurring
value).
• Other descriptive statistics, such as frequencies, percentiles, and percentages. In customer
satisfaction surveys, the most commonly reported result is of this kind: the percentage of
respondents who expressed satisfaction with a specific aspect of their interaction with EPA.
• Cross-tabulations that array independent variables against the dependent variable (for
example, type of customer displayed against a summary measure of customer satisfaction,
like the percentage of customers of each type who reported being satisfied with the product or
service they received).
• Multivariate statistics—such as factor analysis, analysis of variance, and regression
analysis—to determine the relationship between and among selected variables.
• Chi-square, z scores, t-tests, and other statistics to determine statistical significance.
• Time-series and trend analyses to determine long-term changes and seasonal and cyclical patterns in the data.
The following table describes the statistical techniques most likely to meet the needs and expectations of the EPA program or project conducting feedback:

Mean
Use: To determine the average response (the sum of all scores divided by the number of respondents).
Example: The mean rating for overall satisfaction is 8.4.

Median
Use: To identify the middle response (when responses are listed in numerical order, the middle response if there is an odd number of respondents, or the average of the two middle responses if there is an even number).
Example: The median score for overall satisfaction is 9.

Frequencies
Use: To summarize the distribution of responses.
Example: 67% of respondents rate overall satisfaction a 9 or a 10.

Cross-tabulations
Use: To summarize the distribution of responses by another variable.
Example: 78% of Maryland respondents rate overall satisfaction a 9 or a 10, compared to 60% of Virginia respondents.

T-test
Use: To test for statistically significant differences between two independent groups.
Example: Maryland respondents are significantly more satisfied overall than Virginia respondents.

ANOVA (analysis of variance)
Use: To test for statistically significant differences among three or more independent groups.
Example: Overall satisfaction differs significantly among Maryland respondents, Virginia respondents, and D.C. respondents.

Correlation
Use: To determine how much responses to one question predict responses to another question (measures the strength of the relationship between variables).
Example: Of all aspects of the office, satisfaction with cleanliness best predicts overall satisfaction. (Respondents who are satisfied with cleanliness tend to be satisfied overall, and respondents who are dissatisfied with cleanliness tend to be dissatisfied overall.)

Regression
Use: To analyze the effects of a relationship among responses to two or more questions (measures the effects of one or more variables on another variable).
Example: As satisfaction with cleanliness decreases, overall satisfaction decreases.
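To show how several of these techniques look in practice, here is a minimal sketch using pandas and SciPy. The data, state codes, and 1-10 scale are invented; the calls simply mirror the mean, median, frequencies, cross-tabulation, t-test, and ANOVA entries above.

    # Minimal sketch of techniques from the table; data are invented.
    import pandas as pd
    from scipy import stats

    df = pd.DataFrame({
        "state":   ["MD", "MD", "MD", "VA", "VA", "VA", "DC", "DC"],
        "overall": [9, 10, 8, 6, 7, 9, 8, 5],   # 1-10 overall satisfaction
    })

    print(df["overall"].mean())                         # mean
    print(df["overall"].median())                       # median
    print(df["overall"].value_counts(normalize=True))   # frequencies

    # Cross-tabulation: share of high scores (9 or 10) by state.
    df["high"] = df["overall"] >= 9
    print(pd.crosstab(df["state"], df["high"], normalize="index"))

    # t-test between two independent groups.
    md = df.loc[df["state"] == "MD", "overall"]
    va = df.loc[df["state"] == "VA", "overall"]
    print(stats.ttest_ind(md, va))

    # ANOVA across three or more independent groups.
    dc = df.loc[df["state"] == "DC", "overall"]
    print(stats.f_oneway(md, va, dc))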
ANALYSIS: AN EXAMPLE

The following is a simple example of how you might analyze data from customer feedback. Suppose an EPA group has distributed several thousand copies of the ABC Booklet, and because you want to know how satisfied customers are with the booklet, you asked 450 respondents to a survey that included:

    On a scale of 1 to 6, where 1 represents "highly dissatisfied" and 6 represents "highly satisfied," how would you rate your satisfaction with the ABC booklet you received from EPA?

If one were to tabulate all the scores, the average score would be 3.5. Although an average score is a very important piece of information, there is a lot more you can do with the data from your customers. It is often useful to begin with a frequency distribution, where you determine the number and percentage of respondents who gave each score between 1 and 6. Here is one way to present that distribution:
Customer satisfaction with the ABC Booklet (n = 450)

Score                      Number    Percent of those expressing an opinion
1 - Highly dissatisfied        42        11
2                              27         7
3                             122        31
4                             132        34
5                              38        10
6 - Highly satisfied           32         8
Total                         393       100

Don't remember:  22 (5 percent of 450)
Don't know:      35 (8 percent of 450)
This example points out several items you need to consider. First, of the 450 customers responding to this survey question, 22 did not remember receiving the booklet and 35 said they had no opinion or did not know how they would rate their satisfaction with the booklet. In the example provided above, the information about those who do not remember or have no opinion is presented outside the table because the analyst decided that it was more important to focus attention on those who did have opinions to express. Thus, the percentages of those with opinions are based on the 393 respondents who expressed opinions. If it is important to determine the percentage of customers who don't remember or who have no opinion about the booklet, you
would calculate those figures using 450—the total number who were asked the question—as the
denominator. By including the sample size in the table (the information that n = 450), readers
can do these calculations, should they be interested.
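A small worked fragment may make the two denominators explicit; the counts are taken from the table above.

    # The two denominators at work in the table: percentages of those
    # expressing an opinion use 393; percentages of all customers asked use 450.
    n_asked = 450
    dont_remember, dont_know = 22, 35
    n_opinions = n_asked - dont_remember - dont_know      # 393

    print(round(32 / n_opinions * 100))       # score of 6: 8 percent of opinions
    print(round(dont_know / n_asked * 100))   # don't know: 8 percent of 450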
Second, the information presented may be at too great a level of detail for many audience members. The difference between a 2 and a 3 rating, for example, may not be meaningful for them. Thus, you may find it useful to collapse the information into some smaller number of categories. One possibility is to create three categories: dissatisfied, neutral, and satisfied. Scores of 1 to 2, 3 to 4, and 5 to 6 might be collapsed to create the three categories and then reported:

Customer satisfaction with the ABC Booklet (n = 450)

Rating          Number    Percent of those expressing an opinion
Dissatisfied        69        18
Neutral            254        65
Satisfied           70        18
Total              393       101*

* Total is greater than 100 due to rounding.

Don't remember receiving the ABC Booklet:  22 (5 percent of 450)
Don't know/no opinion:                     35 (8 percent of 450)
Note that the information can now be grasped much more immediately. It is reasonable to ask: If
you will eventually collapse responses, why does the question posed to customers have six
possible answers? Research has shown that people answering survey questions prefer to have a
fairly wide range of responses because they don't like to feel forced into a limited set of options.
In addition, analysts may have different approaches
to collapsing categories.
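Collapsing a 1-6 scale into the three categories can be done mechanically. Here is a minimal sketch using pandas; the scores are invented, and the cut points simply implement the 1-2, 3-4, 5-6 grouping described above.

    # Minimal sketch of collapsing a 1-6 scale into three categories.
    import pandas as pd

    scores = pd.Series([1, 3, 4, 6, 2, 3, 4, 4, 5, 3])   # invented responses

    # Bins (0,2], (2,4], (4,6] implement the 1-2, 3-4, 5-6 grouping.
    rating = pd.cut(scores, bins=[0, 2, 4, 6],
                    labels=["Dissatisfied", "Neutral", "Satisfied"])
    print(rating.value_counts())
    print((rating.value_counts(normalize=True) * 100).round())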
[Pie chart: Customer satisfaction with the ABC Booklet]
The responsibility for reducing information to a
manageable amount falls to the analyst. It is the
analyst's task to identify sensible ways to collapse
categories and to present these decisions to the
audience (often as a footnote or technical appendix).
Third, as discussed in the next section, you should
consider how to present the data. Although these
tables are simple and easy to interpret, compare them
to a chart that summarizes the information instantly.
Fourth, the analysis you anticipated during the planning phase of the customer feedback activity should guide you in deciding whether you need to do subgroup analysis. Subgroup analysis examines whether different kinds of customers have different kinds of responses. Suppose you want to examine whether educators and representatives of advocacy organizations have the same or different opinions about the ABC Booklet. You could collapse categories and sort respondents by their status as educators or advocates (to be sure, some respondents may be both educators and advocates, but for simplicity, let us assume you had customers indicate their primary role), then present the findings:
Selected customers' satisfaction with the ABC Booklet (n = 450)

                    Educators            Advocates
Rating          Number   Percent     Number   Percent
Dissatisfied        27        17         17        16
Neutral             94        60         78        73
Satisfied           35        22         12        11
Total              156        99*       107       100

* Total is less than 100 due to rounding.
This table provides important information to the audience, but you might want to present it using charts for the two separate groups. You could also perform a statistical test to see if the two groups differ statistically in their satisfaction with the ABC Booklet.
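One such test is a chi-square test of independence. The sketch below applies it to the educator and advocate counts from the table above; whether chi-square is the right test for your data is a judgment for your analyst.

    # Minimal sketch of a chi-square test on the subgroup counts above.
    from scipy.stats import chi2_contingency

    #           Dissatisfied  Neutral  Satisfied
    counts = [[27, 94, 35],    # educators
              [17, 78, 12]]    # advocates

    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
    # A p-value below a chosen threshold (say, 0.05) would suggest the two
    # groups' satisfaction distributions differ by more than chance.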
[Pie charts: Selected customers' satisfaction with the ABC Booklet, for Educators and Advocates, with segments for Dissatisfied, Neutral, and Satisfied]
The fifth item to consider is the adequacy of your findings. Be sure you know how strong your findings are before formulating recommendations. Many factors affect adequacy, such as the sample size, response rate, and objectivity of the questions posed—plus the way you will use the findings. With a sufficient sample size, a good response rate (more than 75 percent for mail and telephone surveys, for example), and questions that are not biased, you can use the information with confidence. OMB requires an 80 percent response rate for survey results to be considered statistically valid. However, when less than 80 percent of those sampled return questionnaires in a customer feedback and satisfaction measurement activity, the
information gathered should still be used to improve customer service. Do not ignore the
findings.
Let's say that in the above example, there was an additional group of people — small business
owners — who were your customers, and that a total of 17 small-business owners responded to
your survey. This is a small enough number that the sampling error for this one group of
customers may be quite high. Nevertheless, pay attention to the results.
Even if they do not adequately represent the larger group of small-business owners who were your customers, you can still

• Decide whether the findings are suggestive (rather than definitive). Should your office pay attention to the concerns suggested by these findings?

• Compare the findings to other similar data. Are small-business owners generally pleased or displeased with other EPA products?

• Compare the findings to information EPA gets from continuous feedback methods. If you call small-business owners after providing a service or product, what do they have to say in those conversations? How do the continuous feedback findings compare with the results of this survey?

• Discuss the findings with colleagues. Have they gotten similar reports? Is there a pattern emerging about small-business owners' level of satisfaction with EPA products?

• Raise the findings with program managers, being careful to note that this might be an area that requires attention to improve customers' satisfaction with EPA.

• Investigate the findings further. Should you use this as a starting point for more in-depth discussions with small-business owners? Should you conduct focus groups to see how products could produce higher levels of satisfaction?

One final comment on this example: EPA has a large number of programs and offices, some of which may have customer bases much smaller than the thousands used in the example. If your customer base is quite small, you first must decide whether a statistical sample and quantitative survey is still viable, because other techniques may be more suited for your purposes. If you decide to go ahead with a quantitative survey, recognize that the analyses you conduct should be carefully considered and constructed. If, for instance, you have 500 customers and survey 100 of them, you can perform the same analyses as in the example above, but you should examine the frequency distribution first. In an extreme case, let's assume that 10 of your 100 respondents gave a score of 0, 60 gave a score of 3, and 30 gave a score of 6. Although the average score of 3.6 may be close to the average of 3.5 in the example, the distribution of responses is very different.

"A reasonable probability is the only certainty."
E. W. Howe
Sixth, you need to consider how past responses compare with the new responses, and to ensure that you can compare the most current results with those you expect from future questionnaires. This is time-series or trend analysis and is vital to being able to measure change.
DRIVER ANALYSIS
An analytical approach that is very useful in customer research is driver analysis. Driver
analysis identifies the service or services that most significantly affect respondents' satisfaction.
This type of analysis provides decision makers with a tool to prioritize findings, which is
important because customer feedback efforts often yield more information than an organization
can deal with. Also, managers often do not have enough resources to adequately address all
aspects of customer service that receive low satisfaction ratings. Driver analysis enables the
study team to identify which areas deserve the highest levels of attention.
As an example, let us assume that an EPA program is assessing three ways of providing
information: by telephone, by mail, and through published materials. Analysis of customer
feedback can identify which of these methods results in the highest level of respondent
satisfaction. This is the delivery system that most strongly drives satisfaction with the program's
products and services. When you identify the method that significantly affects satisfaction,
additional analysis can determine which factor within that method most significantly affects
satisfaction. Continuing with the example, let us assume that you identify information received
by telephone as the method producing highest satisfaction. You can also use driver analysis to
identify the factor that most affects the respondent's opinion. Such factors may include one or
more of the following: the accuracy of the information, the courtesy shown by the employee, or
the accessibility of the correct person to answer the question. Identifying the driver in this way
greatly enhances a manager's ability to set priorities for improvement efforts.
Two primary analytical techniques, stated importance and derived importance, are used in driver
analysis:
Stated importance uses respondents' answers to specific questions regarding the importance of
the services. Simply ask the respondent to rank or rate items on a prescribed scale (such as a
scale from 1 to 6) according to their importance.
Derived importance uses multivariate analysis to identify the most important factors affecting
satisfaction. In short, the overall level of satisfaction with the organization is compared to the
levels of satisfaction with particular products or services received. Driver analysis will identify
the degree to which variation in the overall level of satisfaction is explained by the variation in
the product or service received. Those individual products or services that most adequately
explain the variation in overall satisfaction are the drivers.
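As a minimal sketch of derived importance, the following Python fragment regresses overall satisfaction on satisfaction with three hypothetical factors using the statsmodels library. The data and variable names are invented; a real analysis would need many more respondents and careful model checking.

    # Minimal sketch of derived importance via multiple regression.
    # Data and variable names are invented for illustration.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.DataFrame({
        "overall":  [5, 6, 4, 3, 6, 5, 2, 4],   # overall satisfaction, 1-6
        "accuracy": [5, 6, 4, 3, 6, 5, 3, 4],   # accuracy of the information
        "courtesy": [4, 6, 5, 2, 5, 5, 2, 3],   # courtesy of the employee
        "access":   [5, 5, 3, 4, 6, 4, 2, 5],   # access to the right person
    })

    X = sm.add_constant(df[["accuracy", "courtesy", "access"]])
    result = sm.OLS(df["overall"], X).fit()
    print(result.summary())
    # The factors that explain the most variation in overall satisfaction
    # (the largest, most significant coefficients) are the "drivers."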
The following matrix illustrates a useful method of comparing importance data, such as from driver analysis, with satisfaction data. When the results from a question are plotted according to the levels of importance and satisfaction, some helpful inferences can be drawn.
Importance (vertical axis, low to high) plotted against satisfaction (horizontal axis, low to high) yields four quadrants:

Quadrant 1 (important to customers, low satisfaction): Worse performance; failing to meet customer expectations.

Quadrant 2 (less important to customers, low satisfaction): Worse performance, but low customer expectations matched well to poor performance. If importance increases, performance will become an issue.

Quadrant 3 (important to customers, high satisfaction): Better performance; customer expectations being addressed and met.

Quadrant 4 (less important to customers, high satisfaction): Better performance, clearly exceeding customer expectations. Efforts may be unrecognized and priorities misplaced; two options: A. Redirect efforts at issues in quadrant 2, allowing these items to move to quadrant 3. B. Make these issues important to customers to leverage unrecognized performance, moving them to quadrant 1.
PRESENTING THE DATA
One critical activity is to remove all identifying information from the data. To ensure credibility and confidentiality, you should never present findings that could be used to identify a specific customer. A typical practice is to strip names, addresses, and telephone numbers from the analytical database and keep them in a separate file that includes the unique identification number assigned during the data collection activity. If ever warranted, you can link the file containing identifying information with customer feedback through the identification numbers.
Most people are interested in the "bottom line," presented as succinctly and clearly as possible.
Therefore, it may be best to present the data reflecting survey results in simple, straightforward
ways to most EPA audiences and save the mathematical details for an appendix or supplementary
briefing. Many audience members want a brief summary of the study's findings. Two pages of
text, with key findings presented as bullets, are usually sufficient.
Graphic representations of data are powerful displays of findings. It is very easy for audiences to
grasp information presented in bar graphs, pie charts, and similar designs. The rapid growth of
low-cost color printers means that these displays can be easily produced in color, adding to their
ease of understanding. Examples of graphs are presented in Factsheet VIII.
FORMULATING RECOMMENDATIONS BASED ON THE DATA
Customer feedback may suggest many potential improvements or enhancements to consider.
Narrowing down the list to those that will have the most direct effects on overall customer
satisfaction is the ideal. Most organizations will have limited staff and other resources, so
practical considerations must guide their choices. Usually, three to five targeted improvements
are sufficient. Sometimes, a single improvement can present a significant challenge, and focusing on it can have a major impact.
Each organization will consider its own capacity for action. However, it is important to do
something or customers may feel that their input was not valued and the effort they expended to
respond was wasted. They may place even less trust in the surveying agency.
Recognize too that not everyone will be ready for the feedback results. Presenting them can raise
sensitive issues for some individuals. Some people may feel threatened by anything but glowing
results, or become defensive or emotional. Some may question the credibility of the findings,
especially if they build logically to recommendations for changes that affect them.
To get buy-in and use the results to influence change, results must be honest, and presented in a
constructive way that emphasizes the positives. Results, findings, and recommendations should
be presented as opportunities for improvement. If the survey cannot be used to influence change or improvement, it did not meet its objective, no matter how carefully the whole feedback activity was conducted.
PRESENTING RECOMMENDATIONS—USING GRAPHICS
First, remember, at least 70 percent of the message is visual, so take advantage of how people
take in information. Use the right visuals to communicate your message. You can
• Emphasize main numerical facts
• Uncover facts, trends, comparisons and relationships that might be overlooked in text or
table
• Summarize, group or segment (stratify) data
• Add variety and interest to text, tables, and briefings.
It's best to use pie charts to display components or parts of a whole. Use line charts to show
independent or cumulative values when
• Your data cover a long period of time
• Several series are compared on one chart
• You want to show change, not quantity
• You want to exhibit trends
• You want to show relationships
• The plot or the series fluctuates sharply.
Do not use column charts for comparing several data sets, for showing data with many plottings,
or for showing many components. Finally, use picture graphs to demonstrate concepts or ideas.
(See Factsheet VIII for examples of graphs.)
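As a simple illustration of matching chart type to message, the following Python sketch (using
the widely available matplotlib library, with made-up numbers) draws a pie chart for parts of a
whole and a line chart for a trend over time:

import matplotlib.pyplot as plt

# Hypothetical satisfaction figures, for illustration only.
labels = ["Very satisfied", "Satisfied", "Dissatisfied"]
shares = [46, 38, 16]                 # parts of a whole -> pie chart
years = [1995, 1996, 1997, 1998]
pct_satisfied = [71, 76, 80, 84]      # change over time -> line chart

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.pie(shares, labels=labels, autopct="%1.0f%%")
ax1.set_title("Overall satisfaction (parts of a whole)")
ax2.plot(years, pct_satisfied, marker="o")
ax2.set_xticks(years)
ax2.set_ylabel("Percent satisfied")
ax2.set_title("Satisfaction trend over time")
plt.tight_layout()
plt.savefig("satisfaction_charts.png")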
ON DEVELOPING RECOMMENDATIONS
Whether you should develop recommendations depends on the purpose of the feedback activity,
the significance of the issues, the quality and significance of the findings, and your audience.
Your original purpose should be action-oriented; answers to your issue questions should
naturally lead to ideas for actions that would improve program effectiveness.
If you develop and make recommendations, they
should be feasible, supported by the findings (which
are in turn supported by the data), and stated
unambiguously. Providing a list of options for
achieving a recommended improvement can
increase the likelihood that it will be implemented.
"Measuring fuzzy conclusions to three
decimal places is akin to putting a
caliper on a dust bunny. "
Kerry Patterson
The Balancing Act
46
-------
ANALYZE THE DATA
Another critical, although sometimes subtle, consideration in developing recommendations is the
political climate. It's a fact of life that some recommendations, no matter how well you support
them, will not be accepted by those in authority due to factors beyond your control. Just be
aware of these factors so that you can develop an alternative recommendation or recognize that
your recommendation may not be implemented until the climate changes or until others have
helped tip the scale (from Practical Evaluation for Public Managers, Office of the Inspector
General, Department of Health and Human Services, November 1994).
47
-------
ACT ON THE RESULTS
ACT ON THE RESULTS
IS THIS THE BEGINNING OR THE END OF THE PROCESS?
When the efforts to collect customer data appear to be coming to a
close, your real work may just be starting! If this is the first time
your organization has collected and analyzed customer data in a
systematic way, you are probably discovering a whole new world of
information. Depending on the feedback method you have chosen,
you have probably created a baseline of information that
characterizes how your customers evaluate your products and
services. You may wish to repeat the same process again in a year,
or in whatever period of time makes sense in your situation.
Customers who respond to you expect you not only to act on
their feedback, but also to tell them what you have done. Whenever
possible, you should build in some way to let them know. To be
cost-effective, the EPA program that conducted the feedback activity will want to make the best use of the
information. Therefore, this next stage of the process is vitally important to the success of the
final phase—action planning and implementation.
HOW DO YOU DECIDE WHAT TO DO WITH THE FEEDBACK YOU RECEIVE?
Once you receive and analyze the feedback, most people will be anxious to know the results.
How did we do? What's the bottom line? Work hard to avoid giving answers that oversimplify
the feedback you have received. Depending on the methodology you have used, you may have an
average score or rating to report, but chances are that you will have far more information that can
provide a wealth of insights about how your customers view the products and services they have
received from your organization.
HOW GOOD IS GOOD ENOUGH?
That is a very hard question to answer. In fact, the only real way to answer it is to say that it
depends. For example, is an average score of 4.9 on a 6-point scale a good score? If last year's
average score was 2.5, indeed you may have reason to celebrate—and for more than one reason!
For one thing, your score nearly doubled. Even better, it leaped from the dissatisfied range to the
middle of the satisfied range. However, you may want to look deeper: How does the customer
rate other service providers who provide similar services? Is that organization getting ratings
above or below the 4.9? And what about the distribution of ratings—are some customers still
rating you below a 3.0 while some are rating you above a 5.5? If so, are the more positive ratings
obscuring the negative ones? If so, you still may have customers out there who are sharply
critical of the products and services you provide.
48
-------
ACT ON THE RESULTS
Setting acceptable goals for customer satisfaction ratings is a decision that each EPA organization
must make for itself. Keep in mind, however, that leading service organizations tend to:
• Target overall satisfaction scores at the upper end of the scale. On a 6-point scale, that should
be a 5.0, and in very competitive situations it may even be at the 5.5 level or higher.
• View any less-than-satisfied ratings as being unacceptable, because they indicate an
opportunity for dissatisfied customers to quickly convey their dissatisfaction to others by
word of mouth. In the long run, that can undermine your efforts to achieve a reputation for
service and product excellence.
HOW DO WE KNOW WHAT TO WORK ON FIRST?
Many organizations are overwhelmed with the amount of customer information they receive. This
is especially true if a survey instrument is lengthy, or if there are many open-ended comments and
ideas. Decision makers, particularly at senior executive levels, are likely to ask these questions:
What do we do first? Which improvements will yield the greatest gain in overall customer
satisfaction? What improvement or enhancement investments are worth making?
During the planning phase, you, your colleagues, and managers will have identified potential
methods and procedures for acting on the results of customer feedback activities. The following
are some ideas to consider.
Recover. Be prepared to hear from customers who report a negative experience with EPA. Set
up a quick alert and response mechanism so you can respond in such cases. (That may require a
special question that asks if the respondent is willing to be identified and contacted for follow-up.)
A quick response is a very positive way to convert a negative impression into a positive one for
the customer.
Report. Even if the primary means for action is an oral briefing, having written documentation for
others to read and refer to is a good idea. It also creates a historical record for tracking changes
over time. Most people who will review information about customer feedback want to see
graphics and summary tables. Reports may include an executive summary, a description of the
study objectives and data collection methods, a comprehensive investigation of findings
(illustrated with graphs and tables), and conclusions and recommendations. To keep the report a
reasonable length, supplementary material can be presented in appendices.
Brief. Action planning workshops get management's attention. Gather decision makers together
and go over the findings with a verbal presentation. Software graphics packages can help make
the briefing interesting and informative. Conducting a dry run before your presentation helps with
timing, pacing, and finding out how well you can verbally communicate your written findings.
Hard-copy handouts give participants a tangible reminder of the information conveyed.
Prioritize. It is likely that customer feedback will provide a wealth of information. Try to
package the information so that it leads the audience or reader to a series of practical action steps
that fit logically together. Acting on results may be more successful if several smaller action plans
are developed that contain three to five next steps, rather than one large plan that may appear
overwhelming.
Communicate. In addition to briefing management, it is a good idea to communicate results to
others. Sending a thank-you letter to focus group participants and customers who completed the
survey is important. The letter should note what EPA learned and what will be done with the
findings. EPA employees are often eager to learn what customers have said, so results should be
summarized and distributed widely.
"' ' '
Improve. There is no reason to elicit customer feedback unless you will use the information to
improve EPA's processes, services, or products. Recognize that some employees may be excited
about possible changes, but others may feel threatened and be highly resistant. The best way to
use customer feedback may be to develop and define action plans. Action plans are most likely to
be successful when owners of each issue:
• Are identified and included
• Help assess their activities and customers' feedback
• Participate in review and strategy sessions
• Have an opportunity to discuss concerns and shortcomings in a nonthreatening,
nonconfrontational environment.
Enhance. Sometimes, customers are satisfied, but want the agency to expand or further improve
what it offers. This is an opportunity to enhance products or services.
Reward. Conducting customer feedback activities can be exciting and worthwhile; the process
can also be exhausting and threatening. Be certain that you recognize the efforts of staff and
customers who made the activity possible and reward them for their involvement. Rewards can
take the form of public acknowledgment, mention in performance reviews, and attention to
findings.
Plan. Use the immediacy of the customer feedback activity to see what worked well and what
could be improved for the next similar activity. Identify aspects that facilitated or impeded
achieving the project's objectives, including features of the processes followed for planning, data
collection, analysis, and development of findings.
Feed results into the strategic plan and GPRA goals and planning activities. In addition to
EPA's performance metrics, recent management initiatives, including the President's directives on
strategic planning, reinvention, and customer service improvement, and the Government
Performance and Results Act (GPRA), suggest that customer data be included in performance
data. To address these needs, quantitative data from surveys and trend data accumulated from
ongoing feedback mechanisms may be most useful. Focus group and other qualitative data can be
used to clarify customers' views.
As Government agencies go about reinventing programs to meet customers' needs and
expectations and to comply with the requirements of GPRA, managers will need to develop
customer-based performance goals and indicators to assess progress. The basic way to do this is
to get input directly from customers.
Act Checklist
_ Recover
_ Report
_ Brief
_ Prioritize
_ Communicate conclusions and recommendations
_ Improve
_ Enhance
_ Reward
_ Plan
_ Feed results into the Strategic Plan, GPRA goals
and annual planning activities
"It ain't so much the things we don't
know that gets us in trouble. It's the
things we know that ain't so."
Artemus Ward
SUGGESTED READING
Alreck, Pamela L., and Robert B. Settle. The Survey Research Handbook: Guidelines and
Strategies for Conducting a Survey. Irwin Professional Publishers, 1994. Description:
Without technical buzzwords or statistical jargon, this book provides the methods and
guidelines for conducting practical, economical surveys from start to finish.
Dutka, Alan. AMA Handbook for Customer Satisfaction: A Complete Guide to Research,
Planning, and Implementation. NTC Publishing Group, 1995. Description: Covers planning
customer satisfaction activities, designing questionnaires, conducting surveys, analyzing the
results, applying the results, and maintaining customer satisfaction (Booknews, Inc., 2/1/96).
Environmental Protection Agency, Survey Management Handbook, Volumes I (November 1983)
and II (December 1984). Description: Volume I focuses on survey design principles and
ways to productively apply them in planning and managing a contract survey related to
regulatory decision making. Volume II focuses on the conduct and management of EPA-
sponsored surveys. Contains good lists of recommended additional reading.
Gerson, Richard F., Ph.D. Measuring Customer Satisfaction. Crisp Publications, 1993.
Description: Provides a definition of customer satisfaction and warns of the dangers
associated with poor service or quality. The author describes research methods and includes
sample forms and questions. The book also explains analysis techniques and notes the
importance of measuring employees' satisfaction.
Green, Samuel B., Neil J. Salkind, Theresa M. Akey, Theresa M. Jones, and Sam Green. Using
SPSS for Windows: Analyzing and Understanding Data. Prentice Hall, 1997. Description:
Offers both the beginning and advanced individual a complete introduction to SPSS. In two
parts, coverage proceeds from an introduction to how to use the program to advanced
information on the specific SPSS techniques that are available. Special features of this book
include a high level of readability and a class-tested text, examples using screen shots and
step-by-step procedures for successful completion of data analysis, tips to help the user in
both learning SPSS and making it even easier to use, sidebars featuring material that is
particularly interesting and important to understanding the analytic technique under
discussion, and guidance in the selection and application of statistical techniques and
interpretation, as well as documenting and communicating results.
Hayes, Bob E. Measuring Customer Satisfaction: Survey Design, Use, and Statistical Methods.
ASQC Quality Press, 1998. Description: Provides detailed information about how to
construct, evaluate, and use questionnaires. Clearly presents the scientific methodology used
to construct questionnaires utilizing the author's systematic approach. Important scientific
principles are presented in simple, understandable terms. Both the qualitative and
quantitative aspects of questionnaire design and evaluation are included.
Hill, Nigel. Handbook of Customer Satisfaction Measurement. Gower Publishing, 1996.
Description: This book was written for customer service professionals, not statisticians.
Using work examples and real-life case studies, this guide takes the reader step by step
through the entire process, from formulating objectives at the outset to implementing any
necessary action at the end. Among the topics covered are questionnaire design, sampling,
interviewing skills, data analysis, and reporting.
Hurlburt, Russell T. Comprehending Behavioral Statistics. Brooks/Cole Publishing Company,
1994. Description: A textbook that provides the same material found in most introductory
statistics texts, but goes beyond the standard by teaching students how to estimate statistics
before computations are performed. The optional ESTAT software helps students build this
skill by allowing them to learn to make accurate "eyeball estimates." These estimation
techniques are provided for both descriptive and inferential statistics. Alternatively, students
can learn estimation from information in the book alone. (Annotation: Booknews, Inc.,
Portland, OR.)
Kessler, Sheila. Measuring and Managing Customer Satisfaction: Going for the Gold.
American Society for Quality, 1996. Description: Includes chapters on topics such as
Problems and Opportunities with Current Customer Satisfaction Measurement, Selecting
Your Tools; CSS Data Analysis; Tools for Gathering Data; and Tools for Designing,
Analyzing, and Synthesizing Data.
McDaniel, Carl. Marketing Research Essentials. South-Western College Publishing, 1998.
Description: Provides key chapters on the concept of measurement and attitude scales;
questionnaire design; data processing; basic data analysis; statistical testing of
differences; and correlation and regression analysis.
FACT SHEETS
I    Who are EPA's customers?
II   Internal control procedures
III  Sampling - the basics
IV   Sampling - more on sample size
V    Sampling - more advanced topics
VI   OMB clearance
VII  Unit of Analysis
VIII Examples of graphs
IX   Survey software information
WHO ARE EPA's CUSTOMERS?
FACTSHEET I
The following table can help you get started in identifying your particular customers and the
products or services they receive from you. Remember, some of you also work with
individuals who can be customers, stakeholders or clients, depending on the specific
interaction.
Customer                              Service/Product
Regulated industries, such as
manufacturers and power
companies
Agriculture
Small businesses, such as
dry cleaners, printers, and
developers
Consultants
Local governments
States and Tribal
Governments
Grant applicants
Public interest groups
• permits/compliance enforcement
• public meetings
• hearings
• regulations
• facility inspection
• information
• referrals
• guidance and access
• environmental cleanup
• guidance
• grants
• environmental impact statements
• standards
• information
• outreach
• assistance
• funds
• information
• brownfields
• guidance
• grants
• program support
• enforcement
• coordination of resource efforts
• guidance
• information
• technical assistance
• grants
• enforcement support
• applications
• information
• funds
• information
• opportunities for input into decisions
Customer
Service/Product
Community-based groups,
including environmental
justice organizations
• advice and assistance
• data and information
• grants
• program support
• training and job development
• technology transfer
The public
• information
• site clean-up
• Freedom of Information Act requests
• hot line complaints
• environmental education
• access to environmental decision making
• opportunities for involvement
Congress
• information
• responses
• reports
• action
• regulations
• program implementation
Program offices (one office to
another at HQ)
• technical assistance
• compliance assistance
• research and development support
EPA employees
• human resources support
• facilities
• financial management
• training
• information technology
• audits
• evaluation
EPA Regional program
offices
• policy guidance
• money support
• regulations
Other federal agencies
• referrals
• information
• support
• access to data
• IAGs
• site cleanup
• FIS
International/global
• technology and information transfer
• standards
• training
• conferences
• studies
• monitoring
• collaboration
EPA INTERNAL CONTROL PROCEDURES
FACTSHEET II
INTERNAL CONTROLS ENSURE THE INTEGRITY OF SURVEY DATA AND RESULTS
The U.S. General Accounting Office and OMB have issued internal control standards that apply
to all operations and administrative functions in assuring the quality, reliability, and integrity of
information used for decision making. These standards and techniques, integrated throughout a
number of laws and requirements, apply to the collection, administration and reporting of results
from customer surveys and other forms of data used for purposes of performance measurement,
verification, planning, and management action. Developing internal control procedures is
exercising good business practice and important in our role as stewards of public trust. What
constitutes an effective control system varies with program circumstances. While controls may
be as routine as second-party reviews or limiting access to the data, they should be generally
applied to provide reasonable assurance that the objectives of customer surveys will be reliably
and cost effectively accomplished. Any audits, evaluations, or verifications of the data from
customer surveys will usually start with an examination of the system of internal controls.
SUMMARY OF SPECIFIC CONTROL STANDARDS AND TECHNIQUES
Management must provide reasonable assurance and a supportive attitude that assets
(information) are safeguarded against waste, loss, unauthorized use, and misapplication, and that
supporting documentation be clear and available for examination. Management controls should
be logical, applicable, reasonably complete, efficient, and effective in accomplishing
management objectives. Managers and employees must have professional and personal integrity
and are obligated to support the ethics program and maintain a level of competence that allows
them to accomplish their assigned duties. Managers should ensure that appropriate authority,
responsibility, and accountability are defined and delegated and that an appropriate
organizational structure is established to effectively carry out program responsibilities. Key
duties and responsibilities in authorizing, processing, recording, and reviewing official
information and transactions should be separated among individuals so that individuals do not
exceed or abuse their assigned authorities. Access to assets and records should be secured and
limited to authorized personnel, with custody assigned and maintained. All program operations,
obligations, and costs should be in compliance with applicable laws and regulations, and
resources should be used efficiently and effectively and be duly authorized.
SAMPLING—THE BASICS
FACTSHEET III
If you have decided to use a survey approach for obtaining customer feedback, you need to
determine what sample size to use. This Factsheet first discusses sample sizes, sampling error,
and confidence intervals—all of which factor into decisions about the sample size. It then
presents a table for you to use in determining what sample size to use—and tells you step by step
how to make use of that table. This Factsheet then describes how to go about randomly selecting
that number of customers from the total list of customers you have served during the time period
to be covered by the survey.
WHAT KINDS OF SAMPLE SIZES ARE WE TALKING ABOUT?
Before we give specific guidelines on how to choose the sample size, it will be useful to set some
general expectations. National public opinion polls like the Gallup Poll and the Roper Poll
typically use sample sizes in the range of 1,350 to 1,800. These polls use fairly large sample
sizes to obtain a result that represents the entire adult U.S. population with a sampling error on
the order of plus or minus 2.5 percent to 3 percent. Such small levels of sampling error are
needed because the polls often address matters of national importance. The decisions made,
based in part on the results of these national polls, may be far-reaching, long-lasting, and affect
millions of people.
The surveys you will be conducting to obtain customer feedback will be of a very different
nature. The target group whose opinions you need will be much smaller: It will probably be the
people who have come to you and your colleagues in one specific program area within EPA,
within a limited time (e.g., during one year) to request certain products or services. We are
therefore talking about a target group of maybe as many as 500 to 1,000 people (few EPA
programs directly serve more customers than that) and in some cases 50 people or fewer.
Furthermore, although the decisions that will be affected by customer feedback are important,
they will probably not be far-reaching and long-lasting. The scope of decisions to be made in
most cases will be, for example:
• Should we change a process to reflect customer comments?
• Should we revise some of our written products?
• Should we provide a half day of customer feedback training to each staff member?
Even in the worst case—we make the wrong decision about whether our products need to be
revised and whether the staff members need further training—we will (if we continue to obtain
feedback from our customers at least once each year) discover our error soon enough and be able
to correct it, without incurring excessive or irreparable damage in the meantime.
Based on these considerations, it is reasonable to accept higher sampling errors than those
associated with national surveys like the Gallup Poll. We can feel comfortable with sampling
errors of 5 percent or even 10 percent.
Additionally, for getting feedback from EPA customers, we have relatively small target groups
who were served by a specific program during the time period of immediate interest. For this
reason, it is reasonable for you to use a much smaller sample size than is used in the Gallup and
Roper Polls, which seek to accurately capture the opinions of millions of people.
Sampling error
"Sampling error" is normally presented as a percentage with a plus or minus sign in front of it.
For example, the sampling error in one particular situation may be ± 3.5 percent. That means
that the true value of a given measure for the entire population—that is, the whole target group
you are getting feedback from—is the value obtained from your sample of customers, plus or
minus 3.5 percent. If, for example, 62.4 percent of your sampled customers are satisfied, the
actual percentage of satisfied customers lies within the range between 58.9 percent (62.4 percent -
3.5 percent) and 65.9 percent (62.4 percent + 3.5 percent).
But that is not quite true. In fact, there is no range of reasonable size that we can identify for
which we can be certain that the true value for the full list of customers lies in that range.
Why is that so?
Because there's always the possibility of very unlikely circumstances occurring—with the result
that the characteristics of the customers in the sample are very different from the characteristics
of the customers not in the sample. In such circumstances, the true value for all customers will
be very different from the value obtained from the customers in the sample surveyed. The only
way to get around this statistical fact is to specify "how certain we want to be" that the true value
does, in fact, fall within a specific range around the value we obtain from the sample. This degree
of certainty we are looking for is known as the "confidence level."
Confidence level
The confidence level indicates how confident we want to be that the true value lies within a
specific range.
There is no one confidence level that is the right one to use. There are many different possible
confidence levels, and only you can decide which confidence level is appropriate for your
survey.
Much of the work in the area of public opinion surveys uses the 95 percent confidence level.
That means that if you determine the sampling error using the 95 percent confidence level, you
can be 95 percent certain that the true value for all your customers will lie within a specific
percentage band (one equal to the size of the sampling error) around the result you obtain from
the sample of customers you contact.
Another confidence level commonly used is the 90 percent confidence level. With a 90 percent
confidence interval, you can be confident that 9 times out of 10, the true value falls within the
value obtained from your sample of customers, plus or minus the sampling error. Some analysts
use 80 percent confidence intervals.
To decide what confidence level to use, you might want to think of a scale running from 80 to
95, where 95 represents a high level of confidence and 80 represents a lower level of confidence.
Decide which confidence level to use based on the way in which your results will be used, how
products and services may be affected by the results, and the frequency with which you will
collect additional information to confirm or revise your findings.
Determining the sample size
Now that we have established appropriate expectations with regard to sampling error and sample
size, we will provide you with some guidance on selecting your sample size. Please recognize
that there are several factors to consider in determining the sample size. The information
provided here is intended to help get you started. Please refer as well to the additional
information provided in Factsheets III, IV, and V. If you wish, you may also consult a
statistician within your Office at EPA. A list of EPA statisticians showing the EPA Office in
which each of these statisticians is located can be obtained from the Office of the Chief
Statistician of EPA within EPA's Center for Environmental Information and Statistics by calling
202-260-5244.
Number in        Sampling    Confidence    Sample
target group     error       level         size
1000             ±5          80            141
1000             ±5          90            214
1000             ±5          95            278
500              ±5          80            124
500              ±5          90            176
500              ±5          95            218
200              ±5          80            90
200              ±5          90            116
200              ±5          95            132
100              ±5          80            62
100              ±5          90            74
100              ±5          95            80
[The table continues with the following combinations: target groups of 50 at a sampling error of
±5, and target groups of 1,000, 500, 200, 100, and 50 at a sampling error of ±10, each at the 80,
90, and 95 percent confidence levels. The sample-size column for these rows is not legible in
this copy; a short script like the one below can regenerate the corresponding values.]
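If you need a sample size for a combination not shown (including the rows whose sample-size
values are missing above), the following illustrative Python sketch can regenerate it. It uses the
approximate sample-size formula presented in Factsheet IV and reproduces the legible rows of
the table to within one unit of rounding:

import math

Z = {80: 1.282, 90: 1.645, 95: 1.960}  # Z-scores (see Factsheet IV)

def sample_size(N, E, conf):
    # Approximate sample size for a target group of N customers, a maximum
    # sampling error E (a decimal fraction, e.g. 0.05 for ±5 percent), and a
    # confidence level conf (80, 90, or 95 percent).
    z = Z[conf]
    return math.ceil(N * z ** 2 / (4 * (N - 1) * E ** 2 + z ** 2))

for N in (1000, 500, 200, 100, 50):
    for E in (0.05, 0.10):
        for conf in (80, 90, 95):
            print(f"{N:5d}  ±{int(E * 100)}  {conf}  {sample_size(N, E, conf)}")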
The procedure described below in this Factsheet for randomly selecting a sample from the full
list of customers served in a specific period of time is simple random sampling and is therefore
consistent with the above table.
Here's how to use the above table
The instructions that follow assume that the unit of analysis for the survey will be the "person
served." (See Factsheet VII for a discussion of "Unit of Analysis.")
1) Identify the number of persons you have served in the time period of interest. Find that
number in the column labeled "Number in Target Group."
2) Select the confidence level that you consider to be the most appropriate given the magnitude
of the decisions that will be made based (in part) on the results obtained from the survey:
• If the decisions to be made using the survey results will be far-reaching, long-lasting
and/or costly, use the 95 percent confidence level
• If the decisions to be made using the survey results will be less far-reaching, less long-
lasting or less costly, use the 90 percent confidence level
• If the decisions to be made using the survey results will have more limited consequences,
mostly in the short-term (e.g., in the next 6 to 12 months) and the cost implications of the
decisions will be moderate, you may use the 80 percent confidence level.
3) Select the level of sampling error you consider to be acceptable given the magnitude of the
decisions that will be made using the results obtained from the sample.
• For most EPA customer satisfaction surveys, a sampling error of ±10 percent should be
acceptable.
• In cases where the decision to be made based (in part) on the survey results is of such a
nature that a smaller level of sampling error is needed, a sampling error of ±5 percent can
be used instead.
4) Read off the corresponding sample size.
a) If the total number of customers served falls between two of the values shown above in
the column "Number in Target Group," you can use interpolation to obtain an initial
estimate of the appropriate sample size.
b) You can then use the approximate formula for determining sample size presented in
Factsheet IV to obtain a much better estimate of the sample size needed.
You can stop here and make use of the approximate value for the sample size obtained in
step 4) b) immediately above. Alternatively, you can, if you wish, now make use of the
trial and error approach presented in Factsheet IV or, even better, the combined
approach, also presented in Factsheet IV, to calculate the precise value for the sample
size needed.
Here's how to randomly select a sample of customers once you have determined
what sample size to use
Once you have determined the appropriate sample size to use, the next step is to randomly select
that number of customers from the total number served in the time period of interest. Here is a
procedure you can use to make that random selection:
1) Make a complete list of all the persons served in the period of interest for which you already
have (or can obtain, with a reasonable expenditure of effort) the needed contact information
(i.e., name, plus address or phone number). Put the customers in alphabetical order to make
duplicate names easy to identify. Eliminate any duplicate names.
2) Once all duplicate names have been eliminated (so that each name appears only once),
starting at the top of the list, number each name. The result is the master list of customers
served. The number next to each name is that person's customer number.
3) Here is a computer-based approach for selecting a sample of customers from the master list:
a) You will use spreadsheet software (like Lotus 1-2-3 or Microsoft Excel) to carry out the
remaining steps of this procedure. Before you begin to make use of any particular
spreadsheet software, first make sure that it has a "randomize" function. Not all
spreadsheets do.
b) Enter the customer numbers in numerical order into the spreadsheet, one number per row.
Place each of these numbers in the second column of the spreadsheet, leaving the first
column in each row blank. The result will be a spreadsheet with the number of rows equal
to the number of customers and with the rows having the numbers 1, 2, 3, and so on (up
to the total number of customers served), with these numbers in the second column of
each row.
c) Use the randomize function on the second column of the spreadsheet. The numbers in the
second column are now in random order.
d) Enter numbers into the first column of each row. Enter the number 1 into this column in
the first row, enter 2 into this column in the second row, and so on. These new numbers
are the row labels.
e) Mark off the number of rows corresponding to the sample size chosen above. For
example, if the sample size is 65, mark off the first 65 rows.
f) The numbers appearing in the second column of the rows marked off in step e) above are
the customer numbers corresponding to the customers to be included in the sample. For
each of these customer numbers, read off the name of the customer appearing next to this
number on the master list prepared in step 2) above and place it in a new list. This new
list is the list of customers selected for inclusion in the sample—the people you will
contact during the survey and ask to respond to the survey questions.
g) If, due to a lower than expected response rate, the number of customers from whom
responses are received is less than the desired sample size, and all reasonable followup
efforts have already been made to increase the response rate, go back to the spreadsheet
and mark off the additional number of rows needed to reach the desired sample size. The
numbers appearing in the second column of these additional rows are the customer
numbers for the additional customers to be added to the sample.
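If you prefer a short script to a spreadsheet, the following illustrative Python sketch (the file
names are hypothetical) accomplishes the same random selection as steps a) through f) above,
and makes step g) easy as well:

import random

# Hypothetical master list: one deduplicated customer name per line;
# line position (starting at 1) is the customer number.
with open("master_list.txt") as f:
    customers = [line.strip() for line in f if line.strip()]

sample_size = 65  # determined from the table or formulas above

shuffled = customers[:]            # equivalent of the spreadsheet randomize step
random.shuffle(shuffled)
selected = shuffled[:sample_size]  # step e): mark off the first rows

# Step g): if responses fall short after all reasonable follow up, extend the
# sample with the next rows, e.g. shuffled[sample_size:sample_size + 10].

with open("sample_list.txt", "w") as f:
    f.write("\n".join(selected))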
For an equivalent procedure that does not make use of a computer or a computer spreadsheet, see
the last section of Factsheet V.
SAMPLING—MORE ON SAMPLE SIZE
FACTSHEET IV
THE EFFECT OF THE RESPONSE RATE ON SAMPLE SIZE
The initial sample size is the number of customers you attempt to contact and obtain a response
from during the survey. The final sample size is the actual number of customers for which
responses were received during the survey. The response rate is the percentage of customers
included in the initial sample for which a usable response was received. The response rate will
vary depending on the kinds of customers being contacted, the kind of product or service
received, the kinds of questions asked in the survey, and so on.
Since the response rate is almost always less than 100 percent, the total number of customers
from whom responses are received will almost always be less than the number of customers
initially selected to be part of the sample. The table in Factsheet III shows the approximate
sampling error associated with the final sample size. Since a certain final sample size is needed
(which considers only the customers from whom responses were received), the number of
customers included in the initial sample (the initial sample size) must always be greater than the
desired final sample size.
For periodic surveys that reiterate in whole or in part questions asked in the previous iteration of
the same survey (in order to determine to what extent customer satisfaction has changed in the
intervening period, due to changes in service provision), the response rate for the next iteration of
the survey can be estimated by using the response rate actually observed in the previous
iteration(s) of that same survey. Where a particular survey is being conducted for the first time,
it would be reasonable to assume a response rate of, say, 85 percent when determining how many
customers to select for the initial sample. If the estimate of response rate turns out to be too
high, then more customers can be added to the sample later, using the procedure described in step
3 g) of the procedures presented on pages 6-7 of Factsheet III for selecting the sample of
customers to be contacted during the survey.
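The arithmetic for setting the initial sample size is simple division; here is an illustrative sketch
with hypothetical numbers:

import math

desired_final_n = 132          # e.g., from the table in Factsheet III
assumed_response_rate = 0.85   # the first-time assumption suggested above

initial_n = math.ceil(desired_final_n / assumed_response_rate)
print(initial_n)  # 156 customers should be selected initially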
Note, however, that it is better to achieve the desired final sample size by having a higher
response rate and a smaller total number of customers selected to be in the sample than through a
lower response rate and a higher number of customers selected to be in the sample. The reason
for this is nonresponse bias. Nonresponse bias is encountered if the customers who did not
respond to the survey are significantly different from those who did respond. Nonresponse may
be due to the inability of those conducting the survey to reach a specific customer in the sample,
(e.g., because his or her telephone number has changed), or may be due to the unwillingness of
that customer to participate in the survey at all, or to answer one or more questions in the survey.
Because some customers contacted will answer some questions but not others, the degree of
nonresponse encountered will vary from question to question on the survey questionnaire.
Nonresponse bias is one source of the overall bias in the survey results resulting from the fact
that those surveyed are not representative of those in the target group that we are seeking to
characterize. Another source of such bias is use of a poorly chosen or poorly constructed master
list from which we randomly select the sample of people to be surveyed. One of the best-known
examples of such bias is a national poll of likely voters that was conducted by the Literary Digest
in 1936, a few days before the presidential election that year. The poll showed that Alf Landon
would win the election. In fact, as became clear a few days later, Franklin Roosevelt won the
election by a landslide. The reason for the erroneous polling results was bias. The poll was
conducted relying primarily on lists of telephone subscribers, and with 1936 at the lowest point
of the Great Depression, many voters could not afford phone service. It turned out that those
voters who could not afford phones were much more likely to vote for Franklin Roosevelt than
were those who did have phones.
While this particular case gives an unusually dramatic example of bias, any level of nonresponse
(like any serious systematic errors in preparing the master list of people to be contacted) poses
potentially serious problems. Furthermore, the magnitude of these problems will generally not
be known because we in general do not know if and how the nonrespondents differ from those
who did respond. After all, we were never able to gather any information about them in our
survey that could be used to see if and how they differ.
For this reason, nonresponse should always be kept to the lowest level achievable. This is
accomplished through active follow up with those customers in the sample from whom we were
not at first able to get a response. Only after all reasonable follow up efforts have been made
should a shortfall in the number of customers responding (compared with the desired final
sample size) be made up by selecting additional customers to be part of the sample.
An adjustment factor
The values for the sampling error shown in the table presented in Factsheet III are approximate.
One reason why they are approximate is that they do not take into account a factor that, if
considered, would result in lower values. We will now provide you with an adjustment factor
that you may use to account for this additional factor and, in so doing, obtain a more precise
value for the sampling error:
An adjustment factor to reflect that the sample result was greater than or less than 50 percent
One significant complication associated with the calculation of sampling error is that the
sampling error varies markedly with the magnitude of the sampling result obtained. By sampling
result, we mean, for example, the percentage of customers in the sample who say they are
satisfied with the product or service they received. All else being equal, the largest sampling
error is associated with a degree of satisfaction of exactly 50 percent. Any higher or lower level
of satisfaction will result in a lower level of sampling error. The lowest level of sampling error is
associated with a level of satisfaction of 100 percent or 0 percent.
Here are the specific values of this correction factor that should be used for various specific
values of the sample result:
The sample result
(i.e., the percentage of customers
in the sample who said they were
satisfied with the product or
service received ) Correction factor
99 percent 0.20
98 percent 0.28
95 percent 0.44
90 percent 0.60
80 percent 0.80
70 percent 0.92
60 percent 0.98
50 percent 1.00 (i.e., no correction)
40 percent 0.98
30 percent 0.92
20 percent 0.80
10 percent 0.60
5 percent 0.44
2 percent 0.28
1 percent 0.20
Thus, if the sample result shows that 90 percent of the customers in the sample were satisfied
with the product or service they received, then the associated sampling error is obtained by
multiplying 0.60 times the sampling error shown in the standard tables (including the table
provided in Factsheet III). So if the sampling error shown in the table is ±10 percent for the
sample size used, then the actual sampling error is really only ±6 percent (= ±10 percent x 0.60).
If the sample result shows that 80 percent of the customers were satisfied, and the sampling error
obtained from the table was ±10 percent, the actual sampling error associated with that specific
sample result would be ±8 percent (= ±10 percent x 0.80). These are rather significant
adjustments.
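The correction factors in the table above are consistent with the quantity 2 times the square root
of (p x q), where p is the sample result expressed as a decimal fraction and q = 1 - p. Here is an
illustrative sketch for computing the adjusted sampling error:

import math

def adjusted_error(table_error, p):
    # table_error: sampling error from a standard table (e.g., 10.0 for ±10
    # percent); p: sample result as a decimal (e.g., 0.90 for 90 percent).
    # The factor 2 * sqrt(p * (1 - p)) reproduces the table above: 0.60 at
    # 90 percent, 0.80 at 80 percent, 1.00 at 50 percent, and so on.
    return table_error * 2 * math.sqrt(p * (1 - p))

print(adjusted_error(10.0, 0.90))  # about 6.0, i.e., ±6 percent
print(adjusted_error(10.0, 0.80))  # about 8.0, i.e., ±8 percent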
Since the levels of satisfaction likely to be obtained for most EPA products and services are
likely to be in the range of 80 to 90 percent or more, it is highly advisable to take this adjustment
factor into consideration: 1) when estimating the sampling error that will result from use of a
specific sample size, and 2) when determining the actual sampling error associated with a given
sample result after the sampling process has been completed and the results have been obtained.
There is a major implication of the fact that the sampling error varies with the sample result.
Since the sample result varies from question to question asked in the survey, there is no one level
of sampling error associated with the survey as a whole. Instead, there will be a different level of
sampling error for each result obtained (i.e., a different sampling error for the response to each
question). If the degree of satisfaction obtained from the customers sampled is close to 50
percent on one question and close to 100 percent on another, the sampling error for the second
will be much lower than (possibly much less than half of) the sampling error for the first. The
plus or minus figure given should therefore be different for each result reported (i.e., it should be
different for each question for which the response is shown). It is common practice, however, for
only one level of sampling error to be shown: this may either be 1) the largest sampling error
associated with any of the results reported, or 2) the sampling error that would be obtained in the
worst possible case, i.e., if the result had been a level of satisfaction of 50 percent.
In presenting the results for customer satisfaction surveys conducted at EPA, those preparing the
results may either conform to this common practice or they may give question-specific sampling
errors, as they prefer. The latter can be accomplished by simply presenting a plus or minus
figure after each sample result shown.
For example:

Question on the survey for which       Degree of satisfaction
the result is being reported           reported
Question 1                             83 percent ± 8 percent
Question 2                             91 percent ± 6 percent
Question 3                             78 percent ± 9 percent
Question 4                             87 percent ± 8 percent
Question 5                             94 percent ± 5 percent
Precise formula for calculating the sampling error
Here is an alternative approach for 1) estimating the sampling error that will occur in a planned
sampling survey or 2) calculating the actual sampling error associated with a specific result in a
survey that has already been completed. Instead of obtaining values of the sampling error from a
table (like that included in Factsheet III) and then applying the adjustment factor presented
above in the previous section (and, if necessary, also applying the second adjustment factor
presented in the next section below), simply calculate the sampling error directly from the
precise formula.
Here is the precise formula for calculating the sampling error:

The sampling error = Z times the square root of [ (p x q / n) x ((N - n) / (N - 1)) ]

where p = the sample result (i.e., the percentage of customers who
were satisfied with the product or service they received)
q = 100 percent minus p (i.e., the percentage who were not)
n = the sample size
N = the total number of customers served
Z = a constant coefficient (i.e., multiplier) associated with the
confidence level that is being used. (This must be looked up in a
table in a statistics book.) Each of these constants is known as
the Z-score for that confidence level.

Here are the coefficients (i.e., Z-scores) for the three confidence
levels that have been suggested for use in these Guidelines:
For the 95 percent confidence level, Z = 1.960
For the 90 percent confidence level, Z = 1.645
For the 80 percent confidence level, Z = 1.282

The precise formula presented above is based on the simple random sampling (SRS) procedure,
in which the sample is drawn using the sampling without replacement procedure. Simple
random sampling is the most commonly used sampling procedure and is the procedure
recommended in these Guidelines for use in customer satisfaction surveys conducted by EPA. It
is the procedure reflected in the table presented in Factsheet III, in the sample selection
procedure presented in Factsheet III, and is assumed in all other discussion of sample selection
in the Guidelines, in Factsheets III and IV, and in all but the last section of Factsheet V. For
further discussion of this topic, see the last section of Factsheet V.
The above formula will give the exact size of the sampling error for any combination of number
of customers served, sample size, sample result and confidence level. Using this formula
automatically takes into account and reflects the differences in the magnitude of the sampling
error due to differences in the sample result (which was discussed in the previous section of this
Factsheet) and also automatically includes the finite population correction factor, which is
discussed in the next section of this Factsheet.
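As an illustration, the precise formula translates directly into a short Python function (with the
sample result expressed as a decimal fraction):

import math

Z = {80: 1.282, 90: 1.645, 95: 1.960}  # Z-scores given above

def sampling_error(p, n, N, conf):
    # Exact sampling error for simple random sampling without replacement:
    # p = sample result (decimal), n = sample size, N = total customers
    # served, conf = confidence level (80, 90, or 95 percent).
    q = 1 - p
    return Z[conf] * math.sqrt((p * q / n) * ((N - n) / (N - 1)))

# 90 percent satisfied, sample of 116 from 200 customers, 90 percent confidence:
print(sampling_error(0.90, 116, 200, 90))  # about 0.030, i.e., roughly ±3 percent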
Another adjustment factor
The table presented in Factsheet III reflects both the sample size (n) and the total number of
customers served (N) in determining the sampling error for any given confidence level selected.
You may come across reference books on statistics or sampling procedures that present tables in
which the sampling errors are shown for various different sample sizes but in which no
consideration is given to the total number of customers served. In such cases, to get the actual
sampling error, it is necessary to multiply the sampling error given in such tables by an
additional factor known as the finite population correction factor.
The finite population correction factor
The standard sample survey techniques were developed for use in situations where there is a very
large number of people in the pool of those from whom the sample is to be drawn. This is true,
for example, of surveys of national public opinion. The standard formulas and tables used are
therefore predicated on sampling from a very large pool, one that is, in practical terms, "as good
as infinite" and is treated by statisticians as though it were infinite.
When the number of people in the target group from which the sample is to be drawn is much
smaller, a correction factor (one known as the finite population correction factor) should be used
to correct for this circumstance. The finite population correction factor can always be used (its
use never gives an incorrect result), but it is generally not needed if the sample size chosen is less
than about one-tenth (10 percent) the size of the target group from which the sample is to be
selected.
If the sample size of customers to be contacted is greater than 10 percent of the total number of
customers served, then the finite population correction factor should be used in calculating the
size of the sampling error. These circumstances will apply in a large percentage of customer
satisfaction surveys conducted by EPA. Luckily, use of the finite population correction factor
always results in a lower sampling error than would have been obtained without its use.
Therefore, if you are satisfied with the magnitude of the sampling error calculated for a specific
survey without using the finite population correction factor, then there is no need to use it for that
survey, unless you want to know exactly how much lower the true sampling error is.
The finite population correction factor (FPCF) can be calculated using the following formula:

FPCF = the square root of [ (N - n) / (N - 1) ]

where: N = the total number of customers served
n = the number of customers in the sample (i.e., the sample size)

The corrected sampling error is obtained by multiplying the finite
population correction factor by the sampling error obtained from a standard table that
considered only sample size and confidence level (and did not consider the size of the target
group [i.e., the population] from which the sample is to be drawn). Because of the way the finite
population correction factor is calculated, the adjustment factor varies with the sample size as a
fraction of the size of the target population from which the sample is to be drawn. See the
following table:
Sample size as a fraction (percentage)     Approximate value of the
of the size of the target population       finite population correction
(= n/N)                                    factor
10 percent                                 0.95
20 percent                                 0.89
40 percent                                 0.77
50 percent                                 0.71
60 percent                                 0.63
70 percent                                 0.55
75 percent                                 0.50
As can be seen from the above table, if the sample size is approximately 10 percent of the size of
the target group (i.e., the total number of customers served, from which the sample is to be
drawn), then the correction factor is approximately 0.95—thus, when using a sample size that is
10 percent of the total number of customers, the sampling error will be reduced to 95 percent of
what it otherwise would have been (e.g., the sampling error would be reduced from ±10 percent
to ±9.5 percent).
If the sample size is 20 percent of the size of the target group, then the correction factor is
approximately 0.89—thus, when using a sample size that is 20 percent of the total number of
customers served, the sampling error will be reduced to 89 percent of what it otherwise would
have been (e.g., the sampling error would be reduced from ±10 percent to ±8.9 percent).
If the sample size is 50 percent of the size of the target group, then the adjustment factor will be
approximately 0.71—thus, when using a sample size that is 50 percent of the total number of
customers served, the sampling error will be reduced to 71 percent of what it otherwise would
have been (e.g., the sampling error would be reduced from ±10 percent to ±7.1 percent).
There is a general rule of thumb used by many statisticians: The finite population correction
factor should be applied whenever the sample size is 10 percent or more of the size of the target
group from which the sample is to be drawn.
Note, however, that use of the finite population correction factor always gives a more accurate
value for the sampling error than would be obtained by not using it. You should therefore never
be reluctant to use it. It is just that there are certain circumstances (i.e., when the sample size
obtained from a standard table is less than 10 percent of the size of the target population) when it
is possible to disregard it (i.e., not apply it) without there being an undue adverse effect on the
estimated size of the sampling error.
Note also that the last element in the precise formula for calculating sampling error given in the
previous section of this Factsheet is the finite population correction factor. Use of that precise
formula therefore will ensure that the finite population correction factor is automatically taken
into account when determining the size of the sampling error.
One final technical note: The reason why the second column in the table presented above in this
section is labeled the approximate value of the finite population correction factor rather than the
exact value is that the following approximation was used to calculate the value shown in the
second column that corresponds to each value in the first column:
Instead of using the precise formula for the finite population correction factor:

FPCF = the square root of [ (N - n) / (N - 1) ]

the following approximate formula was used:

FPCF (approx.) = the square root of [ (N - n) / N ]

For most values of N (the size of the target population), the difference between the true value
obtained from the precise formula and the approximate value obtained from the approximate
formula is very small.
A trial-and-error procedure and an approximate formula for determining sample
size
A trial-and-error procedure
The precise formula given above can be used directly to determine the level of sampling error
for any combination of confidence level, number of customers served, and sample size. That
same formula can also be used to determine sample size when the desired confidence level, the
desired maximum level of sampling error and the number of customers served are known. It just
cannot be solved directly to obtain sample size in such situations. This is so because sample size
(n) appears in two different places in the equation, and the form of the equation is such that it is not
possible to rearrange the equation so that it can be used to solve directly for sample size. Instead,
you must use the precise formula as shown to determine the needed sample size. You can do so
as follows:
1) Begin by guessing what the needed value of the sample size is. (Any guess will do as a
starting point, although the closer to the true value your guess turns out to be, the sooner you
will be finished.)
2) Use that value of the sample size (i.e., your initial guess) to solve the precise formula
equation for sampling error.
3) a) If the value of sampling error you obtain from the formula is less than the maximum
level of sampling error you are willing to accept, then you should decrease your guess as
to the corresponding value of the sample size and solve the equation again.
3) b) If the value of sampling error you obtain from the formula is greater than the maximum
level of sampling error you are willing to accept, then you should increase your guess as
to the corresponding value of the sample size and solve the equation again.
4) Continue steps 3) a) and 3) b) above until you arrive at the appropriate sample size. That will
be the largest value of the sample size that, when plugged into the precise formula along with
the number of customers of served and the Z-score corresponding to the confidence level you
have selected, gives you the highest possible value of sampling .error, i.e., one that equals (or
is slightly less than) the level of sampling error you have set as the maximum you are willing
to accept. ;
The approximate formula for determining sample size
The trial-and-error approach described above will always give you the best possible value for
sample size. However, the process for arriving at that value can be rather tedious. For this
reason, an approximate formula has been developed that will give you a reasonable value for the
sampling error that is close to the one you would get from the above trial-and-error procedure.
This approximate formula needs only to be solved once—no repeated calculations are needed.
The resulting value will, however, in most cases, be a larger sample size than what you would get
from the trial-and-error procedure. That
is, the approximate formula will give you
a larger sample size than that actually
needed to achieve your target level of
sampling error.
Here is the approximate formula:
A combined approach
You can, if you wish, make use of both
the approximate formula and the trial-
and-error approach given above. Begin
by using the approximate formula to get
an approximate value for the sample size.
Then use this approximate value as your
first guess for sample size in the trial-and-
error approach, and proceed from there
with the trial-and-error approach as
described above.
NxZ2
n =
[4x(N-1)xE2]
Where
n = sample size
N = number of customers served (from which the
sample is to be drawn)
E = the maximum acceptable level of sampling
error, expressed as a decimal fraction (e.g., 5
percent = 0.05)
Z = the Z-score corresponding to the confidence
level selected (this can be obtained from most
standard statistics references, including most
basic statistics textbooks). The Z-scores for
the 80 percent, 90 percent and 95 percent
confidence levels are given above in this
Factsheet in conjunction with the precise
formula.
FACTSHEET IV
IV-9
-------
SAMPLING—MORE; ON SAMPLE SIZE
This combined approach will allow you to come up with the lowest possible sample size with the
least amount of effort.
WHY is so MUCH ATTENTION GIVEN TO SAMPLE SIZE?
Much of Factsheet III and all of this Factsheet have been devoted to considerations related to
sample size. Why, you might ask, do people spend so much time worrying about sample size?
The reason is that if a larger sample size is used than is really needed, you will have incurred a
greater cost in conducting the survey and you will have imposed a greater response burden on
your customers than was needed.
* The extra, unneeded costs alone can be quite considerable. For each extra customer in the
sample, additional time has to be spent: conducting the telephone interview (we are here
assuming for clarity that a telephone survey was conducted), following up with those who did
not answer when originally called, following up with those who did not initially agree to
participate, and so on. It also means more data to be recorded and analyzed.
• The extra burden on your customers in terms of time spent responding can also be quite large
when the total tune spent by all customers surveyed, taken together, is considered.
If the sample size used turns out to be greater than was needed, then the extra cost incurred, and
the extra burden imposed were wasted.
! i
On the other hand, if too small a sample size is used, then you may end up with so much
uncertainty about the true degree of satisfaction of your customers (because the sampling error
was so large) that you do not learn much from the survey. You were uncertain about their degree
of satisfaction before (that's why you decided to conduct the survey) and you may now find that
your level of uncertainty afterward is not much reduced. In this case, the whole cost of
conducting the survey may prove to have been wasted.
i
Keep in mind that any wasted time and dollars associated with conducting surveys using sample
sizes that were too large or too small are time and dollars that could otherwise have been used to
improve the products or services you provide to your customers. So the effort you spend helping
to ensure that you use the most appropriate sample size will help maximize the time and dollars
you will have left for improving customer service in the ways your survey has shown to be
needed.
In conclusion, sample size is very important—you want it to be large enough to give meaningful
and useful results, but not so large that you incur unneeded extra expense or unduly burden your
customers with the time needed to respond. What this adds up to is that when you conduct a
customer satisfaction survey, you should use the smallest possible sample size that will give you
results of sufficient precision to be meaningful and useful to you. And greater precision—which
i _ \ i j t
FACTSHEET IV j\Mo
-------
SAMPLING—MORE ON SAMPLE SIZE
is another way of saying a lower level of uncertainty—comes from a smaller level of sampling
error. The smaller the sampling error the greater the precision.
So you want to choose the smallest possible sample size that will give you a level of sampling
error that you can live with. By that we mean that the results will be precise enough to give you
the degree of certainty you need about 1) what the true current level of satisfaction of your
customers is, and 2) how their degree of satisfaction has been changing over time—as a result of
your continuing efforts to improve your products and services.
FACTSHEETlV IV-11
-------
-------
SAMPLING—MORE ADVANCED TOPICS
FACTSHEETv
This Factsheet addresses the following more advanced topic in the area of sampling:
• Stratified sampling—what it is, when to use it, how to do it.
This Factsheet also provides :
• A crosswalk between the customized terminology with regard to customer feedback sampling
used in these Guidelines and Factsheets and the more general statistical terminology used by
survey statisticians.
• A discussion of some other kinds of errors (beyond sampling error and nonresponse bias) that
will be encountered in sampling. >
• A discussion of sampling without replacement and how this differs from sampling with
replacement. This section also contains a description of how to randomly select a sample
without using a computer or a computer spreadsheet.
STRATIFIED SAMPLING
In the section of the Guidelines describing how to analyze the data obtained from a sample of
customers, an example is given on pages 39-40 of a simple procedure that can be used to
determine if the degree of satisfaction of your customers varies among different kinds of
customers. The procedure presented there is simple and useful and will be quite satisfactory for
use in conjunction with most customer satisfaction surveys conducted at EPA.
There is, however, one aspect of that procedure that should bfe noted: The sample results
obtained for the different specific kinds of customers (e.g., educators and advocates in the
example on pages 39-40) are not as precise as the results obtained for all kinds of customers in
the sample, considered all at once. The reason for this is that only a portion of the sample is
relevant to each specific kind of customer. Therefore the sampling error determined for the
entire sample does not apply to each specific kind of customer. Instead, in effect, we have a
smaller sample size for each specific kind of customer, and there will therefore be higher
sampling error for each specific kind of customer, when considered separately.
Again, for most customer satisfaction surveys conducted by EPA, the results obtained from the
procedure shown on pages 39-40 will still be quite satisfactory and the increased sampling error
associated with each specific kind of customer should not be of concern. The results obtained
from using those results will still be meaningful and useful.
There may, however, be some situations in which it is so important to track the degree of
satisfaction of one or more specific kinds of customers that ypu decide that the sampling error for
FACTSHEET V V-1
-------
SAMPLING—MORE ADVANCED TOPICS
each specific kind of customer of concern must be kept within specified limits. In such cases, a
specialized procedure known as stratified sampling may be used.
The basic principle underlying stratified sampling is very simple. For each subgroup of
customers for which it is important to know their degree of satisfaction with a specified level of
precision (i.e., with a known maximum level of sampling error), the sample size for that
subgroup should be determined separately and a random sample of those customers selected
separately. This can be done by applying the procedures presented in the Guidelines and in
Factsheets III and IV separately for each of these subgroups of customers. It is as though you
are no longer conducting one survey but instead are conducting two or three (or more) surveys
simultaneously, one for each subgroup of customers who are so important that their degree of
satisfaction must be known and tracked with a known maximum level of sampling error for that
specific subgroup.
The results are then analyzed separately for each of these different subgroups, again using the
methods of analysis that are presented in the Guidelines. The results of these analyses are then
used to track separately the degree of satisfaction of each of these subgroups of customers.
The results from each of these subgroups may then also be combined to give an overall result for
all the subgroups surveyed taken together. They may be combined by using the following
formula:
P,ii = (fi x Pi) + (f2 x p^ + (f, x p3) + ....
Where the equation continues for as many subgroups of customers as were used in the survey
and where
pin = the sample result for all customers served
p, = the sample result for the first subgroup of customers (i.e., the number of customers
in the first subgroup who reported being satisfied)
f] = the fraction (percentage) of all customers served who fall in the first subgroup of
customers
p2 = the sample result for the second subgroup of customers
f2 = the fraction (percentage) of all customers served who fall in the second subgroup of
customers, etc.
For the above formula to give an accurate result for all customers, every customer must be
included in one (and only one) of the subgroups. If, for example, there are five different kinds
of customers and only two kinds are so important that they have to be tracked separately with a
FACTSHEETV V-2
-------
SAMPLING—MORE ADVANCED TOPICS
known level of sampling error, then the remaining three kinds of customers can be included in a
third subgroup consisting of the remiaining three kinds of customers grouped together.
l
There is one further consideration. SJince you have set the sample size separately for each of the
subgroups in order to get a known level of sampling error for each of those subgroups, you will
know the level of sampling error for each subgroup, but you do not know what the level of
sampling error is for any sample results obtained (using the formula given above) for all the
customers taken together. There is, however, a second formula that can be used to determine this:
Eall = the square root of the following sum
sum =
N,2
X
N2
N22
Nr H! Ej
X
Nrl n,
N2- n2 E22
x x
N2 N2-l n2
+ etc.
Where
E^ = the sampling error for sample results applicable to all customers taken together
N = the total number of customers served
E! = the sampling error for the first subgroup of customers
N! = the total number of customers served for the first subgroup of customers
H! = the sample size for the first subgroup of customers
E2 = the sampling error for the second subgroup of customers
N2 = the total number of customers served for the second subgroup of customers
n2 = the sample size for the second subgroup of customers
etc. (for each additional subgroup of customers served)
Note that what is referred to as sampling error in this section on stratified sampling is actually
the standard error since it was values obtained from the sample that were used in computing it.
The standard error is used by statisticians to estimate the sampling error. (See the next section of
this Factsheet for further clarification of this point.) \
FACTSHEET V V-3
-------
SAMPLING—MORE ADVANCED TOPICS
A GUIDE TO THE APPLICABLE STATISTICAL TERMINOLOGY
In the discussion of surveys and sampling in the Guidelines and Factsheets, we have used
terminology customized to the special circumstances of surveys conducted by EPA to assess
customer satisfaction and have modified some other statistical terminology to make it easier for
nonstatistician to understand. Some users of the Guidelines may, however, wish to consult text
books, reference books or journal articles on one or more aspects of sampling. For that reason,
we here provide a crosswalk between the terminology used here and the more general
terminology used in general works and articles on sampling procedures.
Terminology used in these Guidelines
and Factsheets
The more, general terminology
used by sampling statisticians
1. "The customers served in a
specific period of time" (for
which information about their
level of satisfaction is being
sought)
or
'The customers served"
1. "The target population"
or
"The population"
or
"The universe"
or
'The target group':
2. "The sample of customers"
2. "The sample"
or
"The customers in the sample"
FACJSHEETV
V-4
,| ;;;,
-------
"The sample result" (shown as
the percentage of customers who
responded who said that they
were satisfied in response to a
specific question about some
aspect of the product or
service they received)
SAMPLING—MORE ADVANCED TOPICS
3. "The sample proportion"
4. The "master list" of customers served
4. "The sampling frame"
5. "The sampling error"
values obtained from the full target
6. "The unit of analysis"
5. "The sampling error" (when calculated using
population—which is not possible under
normal circumstances because it would be
too costly since it would require a census of
the target population. And it is to keep costs
to a reaspnable level that we are using a
sampling procedure hi the first place.)
or
"The standard error" (when calculated using
values obtained from the sample—which are
what w^ must use in most cases. This is so
since the corresponding values for the entire
target population are not known and cannot
be learned without going to great additional
expense: Going to such expense would
defeat the purpose of using a sampling
procedure rather than a census). "The
standard error" is also known as "the
standard error of the mean."
6. "The unit of analysis"
or
"The sampling unit"
FACTSHEETV
V-5
-------
7. "The 95% confidence level"
SAMPLING—MORE ADVANCED TOPICS
7. "A 95% confidence level"
By tradition, the article "a" is used, which
suggests that there is more than one kind of
95% confidence level. In fact there is only
one kind of 95% confidence level, so "the"
is the more appropriate article, and "the" is
used in these Guidelines to improve clarity
for the sake of the nonstatisticians seeking to
make use of them.
OTHER KINDS OF ERROR EXPERIENCED IN SAMPLING SURVEYS
We have so far limited our discussion of error encountered in sampling surveys to three kinds of
error: 1) sampling error, 2) nonresponse bias, and 3) the bias associated with use of a poorly
chosen master list of the persons in the target group to be sampled. Each of these has been
discussed earlier: sampling error was discussed in Factsheet III; nonresponse bias and the bias
associated with use of a poorly chosen master list were discussed in the first section of this
Factsheet. We will now describe briefly two other kinds of error that occur in sampling surveys:
• Reporting error. This is the kind of error that results when the customer misunderstands the
question asked and therefore gives an incorrect answer. It can also occur if the customer
misunderstands how to use an interval scale (by thinking, for example, that a high number on
the scale means "highly unsatisfied" when in fact it means "highly satisfied"). It can also
occur when a customer purposely gives a wrong answer.
This kind of error can be kept to a rnuiimum by pretesting the questionnaire on people who
are similar to those who will be surveyed. Such pretesting can help identify: 1) questions that
are confusing, 2) clear instructions on how to respond (e.g., how to use the interval scale to
answer a question) to questions that are confusing, and 3) questions that customers may find
so intrusive, threatening or offensive that they may prompt some customers to give a false
answer.
i j
* Recording error. This is error on the part of the staff conducting the survey. Even if the
customer has responded correctly, the survey staff may misunderstand or misrecord what the
customer said (in a telephone or in-person survey) or may misread or misrecord what was
written (in a mail survey).
It should, however, be noted that, unlike sampling error, which results from the use of a
probabilistic sampling procedure and is an inevitable consequence of using such a procedure, the
additional kinds of error identified above will be encountered whenever customers are contacted
and asked to respond to specific questions. These kinds of errors would, for example, still occur
even if a full census were conducted of all customers served. In other words, use of a census will
eliminate sampling error, but reporting error and recording error will still occur. Similarly,
FACTSHEETV
V-6
-------
SAMPLING—MORE ADVANCED TOPICS
nonresponse bias will occur in the results obtained from a census just as much as in those
obtained from a sampling survey. Tliat's what the recent controversy about the Year 2000
Census of the U.S. population has been all about. The concern is that past censuses have had a
nonresponse bias that works to the disadvantage of the poor, because those with low incomes or
no incomes are disproportionately less likely to participate than those who are more well off
economically.
Should the sample be selected with or without replacement?
There is one basic decision that must be made before selecting a random sample of customers
from the larger total number of customers served. That decision is, will the selection be made
with replacement or -without replacement! .
In order to explain what this means, we will describe an alternative procedure for selecting a
random sample of customers that tracks closely with that presented in Factsheet III. This
alternative procedure will, however, be a simpler one for which it is easier to follow the
implications of each step. In particular, the process we are about to describe will parallel
perfectly the procedures in Factsheet III, but the procedure will be described not in terms of
entering customer numbers into a spreadsheet on a computer, but rather hi terms of putting the
customer numbers on slips of paper, putting these slips into a box, and then pulling these slips
from the box. This simpler alternative approach will make it clearer what the difference is
between sampling with replacement and without replacement.
Here is the alternative approach for selecting a random sample of customers:
Begin by carrying out the actions called for in steps 1) and 2) of the procedure that begins on
page 4 of Factsheet III. After having compiled the master list of persons served described in
step 2), take the following additional steps:
Here is the alternative approach for selecting a sample of customers from the master list:
a) For each of the customers on the master list, place the number corresponding to that customer
on a slip of paper, one customer number per slip of papen Then fold each slip of paper in a
uniform way. All slips of paper used should be identical in every way.
b) Put the folded slips of paper into a box, and shake the box sufficiently that the slips of paper
have been well mixed within the box.
c) Have someone begin to remove slips of paper from the box one at a time. While this is being
done, the box should be held or positioned in such a way that the person removing the slips
of paper cannot see the slips of paper that he or she is picking from.
d) Have a sheet of paper (a recording sheet) ready with numbers running from 1 to a number at
least twice the size of the sample size that has been chosen. For example, if the sample size
FACTSHEET V • V-7
-------
SAMPLING—MORE ADVANCED TOF»ICS
chosen was 65, have numbers on the sheet of paper running up 130. This sheet will be used to
record the outcome of the selection process.
e) As each slip of paper is picked from the box, unfold the slip of paper, read the customer
number on it, and record that number in order on the recording sheet. Then set that slip of
paper aside (or throw it away). The customer number on the first slip picked should be
recorded next to the number 1 on the recording sheet. The customer number on the second
slip of paper should be recorded next to the number 2 on the recording sheet, and so on.
Continue this process until a slip of paper and a corresponding customer number has been
picked for each number on the recording sheet.
f) Determine the number of customers to be included in the initial sample by taking into
consideration both the desired sample size and the anticipated response rate. Thus, for
example, if the desired sample size is 65 and the expected response rate is 85 percent, the
number of customers to be included in the initial sample should be 65/85 percent = 76.47.
The size of the initial sample should therefore be 77 (since it is always prudent to round
fractions up to the next whole number in situations where a certain minimum level must be
achieved).
g) The initial sample will then consist of the customers whose customer numbers are recorded
next to positions 1 through 77 on the recording sheet. Carry out the survey using these 77
customers.
!
h) If the response rate in the survey is 85 percent or greater, then nothing further need be done.
No further use will need to be made of the recording sheet. If however, the response rate
turns out to be less than 85 percent and all reasonable follow-up actions have been taken to
increase the response rate and it is still less than 85 percent, then a determination should be
made as to how many additional customers need to be added to the sample to bring the
number of customers responding up to 65.
If for example, the number of customers so far responding is 62, then three more responding
customers are needed. Since the response rate so far has been 62/77 = 80.5 percent, the
minimum additional number of customers who need to be added to the sample is 3/80.5
percent = 3.73 or 4 after rounding up. However, since the there is no guarantee that the
response rate for the next set of customers contacted will be identical to that of the customers
already contacted, it might be prudent to add not 4 but 5 or 6 additional customers to the
sample to ensure that a third survey cycle will not be needed. In this case, the number of
additional customers that it is decided to add to the sample is 6.
i) Starting at the point where you left off in taking customer numbers from the recording sheet
in step g) above (position 77 in the sample used here), take the customer numbers appearing
in the next 6 positions on the list (i.e., those in positions 78 though 83), and add the
customers' names corresponding to these customer numbers to the sample.
FACTSHEETV
V-8
-------
SAMPLING—MORE ADVANCED TOPICS
The above procedure is an example of sampling -without replacement. This expression is used
because the procedure used called for slips of paper to be picked from the box and after each slip
of paper was picked, it was not replaced (i.e., put back) in the box.
The alternative procedure would have been to sample with replacement. Using the same
physical arrangements described above, sampling with replacement would have entailed picking
the first slip of paper from the box, unfolding it, reading the customer number off of that slip,
then folding the slip back up as it was and putting it back in the box, shaking the box up again, so
that all the slips in the box (including the one already removed and put back in into it (i.e.,
replaced into it) are once again fully mixed. A second slip of paper is then picked from the box,
unfolded, the number is read off, the slip is folded again and put back in the box and the box is
shaken again. This procedure continues until the same number of slips has been taken from the
box as before or, to be more precise, until the same number of picks have been made from the
box (in this case, 130).
One of the obvious implications of this new procedure just described (which calls for sampling
with replacement) is that it is very possible for a single slip of paper to be picked from the box
more than once. In fact it could be picked three times or more. If that happens, then that same
number is recorded a second time (and if necessary a third tune, a fourth time, etc.) on the
recording sheet. Let's say for example that the slip with customer number 278 was the fourth
slip picked and was also the 36th picked. Then customer number 278 will appear on the
recording sheet both at position 4 and at position 36. Slips continue to be picked a total of 77
times as before (i.e., a total of 77 picks are made from the box).
Now, you may well ask, should an additional slip be picked to make up for the duplicate picking
of customer number 278? The answer is no. Customer 278 has been picked twice, so he or she
now counts as two customers. Does this mean that customer 278 will be contacted two different
times during the survey and on the second occasion be asked to respond a second time to the
survey questions? The answer again is no. Customer 278 will be contacted only once, but his or
her response will be used twice in computing the survey results, as though two different
customers had responded in exactly the same way to the survey questionnaire (which does in fact
sometimes happen). . -!
We have just described a situation in which one slip was picked twice. It's clearly possible for a
single slip to be picked three times or four times or more. Similarly, it's possible for two
different slips to be picked two or more times each. When the total number of customers served
is relatively low and the sample size is relatively large in comparison, multiple picks of two or
more slips will be a fairly common occurrence. ;
The initial reaction of most people to a description of sampling -with replacement is generally
rather negative. Why would anyone ever do it that way? Putting the slip of paper back in the
box, knowing full well that it may be picked again, seems bizarre and unreasonable to them.
FACTSHEET V V-9
-------
SAMPLING—MORE ADVANCED TOPICS
So why is this procedure used? The answer is that the mathematics that result from using the
sampling with replacement procedure are much simpler than those that result from use of the
sampling without replacement procedure.
Why is this so? Because the odds of any one slip being picked are the same throughout the
sampling with replacement procedure. If for example, there were a total of 534 customers served
(and therefore 534 slips of paper in the box) from which the 77 customers to be included in the
sample are to be selected, then the odds that any one customer will be selected on the first pick is
1 in 534, the odds of any one customer being selected in round two are 1 in 534, the odds of
being selected on the third round are 1 in 534 and so on. The odds of being picked never change
from the beginning to the end of the process of picking 77 slips of paper form the box (and
thereby picking 77 customers to be surveyed from the total of 534 customers served).
With the sampling without replacement procedure, however, the odds of any one customer being
picked are constantly changing from the beginning of the process to the end. On the first pick,
every customer's odds of having his or her slip picked is 1 in 534. But then, after one slip has
been picked, the customer number on that slip has been recorded, and that slip has been set aside,
everyone's odds of being picked on the second pick are changed. For the customer already
picked, his or her odds of being picked again are now zero. That slip has been set aside so there
is no further chance of being picked on that or any further round of picks. But for the remaining
customers, the chances of being picked are now 1 in 533. The odds are lower since there is now
one fewer slip m the box—only 533 instead of the 534 that were there before the first pick was
made. The odds of being picked on the third round then go down to 1 in 532. The odds
continue to decrease hi this same way throughout the process of picking 77 slips. Because of
these changing probabilities, the resulting mathematics get rather complicated.
In summary, we have two different procedures for selecting a random sample of customers from
the total number of customers served. Sampling without replacement is the process that seems
most reasonable to nonstatisticians, but it results in very complicated mathematics that make
calculations much more difficult. Sampling with replacement seems bizarre to most
nonstatisticians (especially counting a customer's response twice if that customer gets picked
twice), but results in much simpler mathematics.
Luckily, statisticians have adopted the sampling without replacement procedure as the one
generally used for selecting the sample in most surveys. Statisticians refer to this procedure as
Simple Random Sampling (SRS), but it might instead be called the standard approach for
random sampling. The standard stratified sampling procedure described above in the first
section of this Factsheet is a slightly more complex variant of Simple Random Sampling.
Stratified sampling as described above is also based on sampling without replacement.
In these Guidelines, we strongly recommend use of sampling without replacement because it is a
procedure that EPA employees who are not statisticians are much more likely to feel comfortable
with and is the procedure most commonly used by statisticians as well. If, however, any unit of
EPA determines that it has a compelling reason to use a sampling with replacement procedure
instead, it should feel free to do so as long as those who will conduct the survey understand the
FACTSHEETV
V-10
-------
SAMPLING—MORE ADVANCED TOPICS
implications of doing so, will make use of the appropriate formulas (some of which will differ
from those presented in these Guidelines and Factsheets), and are willing to double or triple
count customers (those that were that were selected more than once) when calculating the survey
results.
FACTSHEETV V-11
-------
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
FACTSHEET VI
QUESTION 1: WHO CAN USE THE CUSTOMER SERVICE ICR?
OMB's Resource Manual for Customer Surveys (dated October 1993) and other relevant
guidance documents state that the generic clearance shall be used for "strictly voluntary
collections of opinion information from clients that have experience with the program that is the
subject of each data collection" and precludes this option for use:
• By regulatory agencies to survey regulated entities1 ;
• In any situation where a respondent may perceive that a response will result in risks to his
interests through potential penalties of loss of benefits
• For collecting factual information (other than simple identifying information, where
needed)
• For collecting data from the general public.
QUESTION 2: How DO I RECEIVE APPROVAL FOR MY SURVEY, IF IT MEETS THE
CONDITIONS OUTLINED ABOVE?
Below are the instructions for submitting your survey for clearance:
Prior to initiating the survey, sponsoring programs must seek final approval from OMB. To
obtain approval, sponsoring programs must submit a clearance package consisting of a
memorandum and a copy of the survey instrument through Regulatory Information Division
(RID). The memorandum will be addressed from the program or office director to the RED Desk
Officer at Office of Policy, (2136) and must address the following2:
• Survey title, identification of survey originator (office, pqint of contact, phone number)
• Description and intended purpose of the survey as it relates to EPA customers
• Methodology and use of anticipated results
• Collection schedule, followup plsms !
• Costs and burden to the agency and respondents, and the number of respondents.
EPA interprets this to preclude any EPA surveys conducted for fact-finding for the purposes of regulatory
development or enforcement.
2For customer feedback forms and short questionnaires, a one-page memorandum should be sufficient.
Mail or telephone surveys making use of statistical sampling must include statistician's name/phone, and a brief
design, precision requirements, and pretests/pilot tests.
FACTSHEET VI : ' ', VM
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
The memorandum will vary in length and detail, depending on the complexity of the survey. IPS
staff, experience with the requirements of the Paperwork Reduction Act (TPRA), will review each
submission to ensure that it meets the requirements of the PRA and any conditions of the generic
approval, and may reject any proposed customer survey that does not meet the criteria above. In
the methodological issues, the program shall solicit agency statistical experts through EPA's
Statistical Policy Branch or program office to make any final determinations as to the statistical
validity of the customer survey.
QUESTION 3: HOW LONG WILL THE PROCESS TAKE?
Following review within RID, RID will submit surveys and attached materials to OMB for a 10-
working-day review.
What Else Should I Know?
i
Sponsoring organizations within the EPA should maintain records according to each survey
schedule. In general, survey results should be maintained for 3 years or until after followup has
been performed.
Sponsoring offices are encourage to provide feedback to RID on the success of their surveys
(through a memo or summary report) that can be shared with fledgling customer survey
programs within other parts of the EPA. Feedback might include such things as
'i I
• Response rates, followup strategies, important lessons related to survey design and
implementation
• General trends established from analysis of data
i •
• Changes to the organization as a result of the survey
• Points of contact for questions about the survey.
Example of burden statement for forms or survey
i
i
The OMB Control Number and expiration date muslrappear on the front page of an OMB-
approved form or survey, or on the first screen viewed by the respondent for an online
application. The rest of the burden statement must be included somewhere on the form,
questionnaire, or other collection of information, or in the instructions for such collection. Also
include the following information:
Explain the reasons the information is planned to be and/or has been collected, and the way such
information is planned to be and /or has been used to further the proper performance of the
FACTSHEETVIV
-------
How TO OBTAIN CLEARANCE FOR EF'A CUSTOMER SATISFACTION SURVEYS
functions of the agency. State whether responses to the collection of information are voluntary,
required to obtain or retain a benefit (citing authority), or mandatory (citing authority), and the
nature and extent of confidentiality to be provided, if any (citing authority).
FACTSHEETVI ; VU3
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
The following information must appear on the first page of the survey:
," i 'i
Form Approved OMB Control No. 2090-0019. Approval expires 10/31/99.
Public reporting burden for this collection of information is estimated to average eleven (11)
minutes per response, including the time for reviewing instructions, gathering information, and
completing and reviewing the collection of mformation. Send comments on the agency's need
for this information, the accuracy of the provided burden estimates, and any suggestions for
reducing the burden, including the use of automated collection techniques to the Director, OPPE
Regulatory Information Division, United States Environmental Protection Agency (Mail Code
2137), 401 M Street SW, Washington, DC 20460; and to the Office of Information and
Regulatory Affairs, Office of Management and Budget, 725 17th Street NW, Washington, DC
20503, Attention: Desk Officer for EPA. Include the EPA ICR number and the OMB control
number in any correspondence.
Customer service executive order (12862) requirements
• Identify customers who are or should be receiving EPA service.
• Survey customers for the kind/quality of services they want, their level of satisfaction with
the services, and whether standards are set for what matters to them.
' ' ' ' • t i ; ;
I I
• Develop, post, and implement standards.
i fc! , OtL, , i
• Measure results against them.
• Report annually to customers on progress toward achieving standards.
* Integrate customer service standards, measurement and tracking with reinvention, planning,
budgeting (GPRA), operating plans, regulations and guidelines, training, and personnel
classification and evaluation.
• Recognize employees for meeting and exceeding customer service standards.
• Benchmark customer service performance against the best in business.
• Survey front-line employees on barriers to, and ideas for, matching the best in business.
• Provide customers with choices in sources of service and methods.
• Make information, services, and complaints systems easily available.
• Address customer complaints.
FACTSHEET VI vi-4
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
Develop cross-media (within an agency) and cross-agency programs to serve shared customer
groups.
Take advantage of new technologies to better serve customers.
FACTSHEET VI
-------
How TO OBTAIN CLEARANCE F:OR EPA CUSTOMER SATISFACTION SURVEYS
Following are examples of successful applications to OMB.
UNITED STATES ENVIRONMENTAL PROTECTION AGENJCY
OFFICE OF ADMINISTRATION AND RESOURCES MANAGEMENT
CINCINNATI, OHIO 45268
MEMORANDUM
DATE: May 22,1997
"', ' , \ \
SUBJECT: Request for OMB Approval for Customer Feedback Survey
FROM: 'William Henderson, Director
Office of Administration and Resources Management
i f -",11
,: ",!! , 'll'ifvt'l'" ,1
TO: Barbara Willis
EPD Desk Officer
Regulatory Information Division
• • . • I « pt ! j
The Office administration and Resources Management, Cincinnati, is planning to conduct a Customer
Survey on the effectiveness of meeting customer needs in accessibility to the agency's publications
through the centralized publications clearinghouse, the National Center for Environmental Publications
and Information (NCEPI). The survey will also establish document format preferences as the agency
moves towards an "electronic first" environment. We will use the results of the survey to improve the
way the agency disseminates its documents. A'copy of the survey is attached.
More than 31,000 monthly requests for publications are received through the NCEPI, through phone, fax,
postal, or Internet services. The survey will be forwarded to three distinct target audience groups, each
lasting 30 days and limited to 500 respondents per phase. The first phase will involve the end customer
who orders through more conventional methods including phone, fax, or mail; the second will target a
selected mailing list audience' and the third will target the Internet ordering customers. The burden of the
survey on the respondents is small. There are only five questions and the agency has prepaid return
postage. We estimate that it will take the respondent approximately 10 minutes to complete the survey.
:| I
The Office of Administration and Resources will track responses, use the information to prepare a report
summarizing the findings and make recommendations on how information should be disseminated and
the format which meets the end users needs. Costs will be minimized by using in-house resources to
prepare the report and in-house printing of survey results. A copy of the survey is attached.
' ' ' ' ' ! ' I , i
If you have any questions or concerns about the survey, please contact Deborah McNealley of my staff at
513/569-7986.
Attachment
FACTSHEETVJ Vl-6
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
To our customers: :
The National Center for Environmental Publications and information (NCEPI), has taken great
strides towards improving accessibility to the U.S. Environmental Protection Agency's
publications since our startup in 1991. EPA publications are more centralized and offered
through various user-friendly avenues including the new 1-800/490-9198 toll-free number, fax
(489-8695), and the online ordering on the Internet at http://222.epa.gov/ncepihom/index.hmil.
You may have utilized one or all of these access points when ordering publications. Or, you may
use our more traditional ordering mechanism, the mailing address at U.S. EPA/NCEPI, P.O. Box
42419, Cincinnati, Ohio 45242-2419. Please take a moment to let us know which services you
currently use, those services you will try in the future, and services NCEPI might consider in
addressing your changing needs.
Thank you in advance for your time. -
f
^ 1 *
fj*
FACTSHEETVI ; Vl-7
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
ACCESS TO EPA'S INFORMATION
1. How often do you order EPA publications from NCEPI?
_JFrequently ...Occasionally _Rarely _This is my first order
2. When ordering EPA publications from NCEPI, how do you place your orders?
1-800 Fax. _U.S. Postal Service (or private carrier) _Internet
3. What method of ordering EPA publications would you prefer when placing future
orders?
4. On this order and past orders, was the service timely and was the order filled
correctly?"
_Timely _Yes _No (if no, how long did it take to receive your document?)
_Correctly _Yes _No(if no, please outline the problem )
.w..-.. .....
5. Which of the following media would best service your future needs for receiving EPA
publications?
_Hard-copy print On line viewing via Internet in full text
_0nline viewing via Internet accompanied by;"a harcl-copy print
_EPA Publications on diskette _EPA Publications on CD ROM
i,,ii •,• ;,,ii,, >i,ii,,i
6. I normally receive EPA publications from:
__NCEPI Federal Depository Library
_Govemment Printing Officer (GPO) ^National Technical Information Service (NTIS)
^.Other (please specify)
Tfc ,: ; ;: * : OPTIONAL
Name: Address:
Title/Company Name City State Zip.
Phone ^___ E-Mail__
Comments:
:l:
(Please return your survey to NCEPI within 3 days of receipt
)
FACTSHEETVI Vl-8
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
REQUEST FOR APPROVAL OF INFORMATION COLLECTION ACTIVITY
Background
In 1991, EPA, through the Office of Administration and Resources Management (OARM),
Cincinnati, EPA implemented a publications distribution service, the National Center for
Environmental Publications and information (NCEPI). The service was designed to streamline
distribution operations, eliminate inefficiencies and duplication of effort, and improve public
access to its information. Now, OARM would like to survey the end users of the information to
receive feedback on how effective the current distribution methods are in meeting the customer
needs and how we can improve our service.
«a.
:
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
Respondents's burden
Number of respondents 500
Hours per response 10 minutes x 500 = 83 hours
Total burden
HOURS: 83
Agency burden
EPA staff 20 hours
Total burden
HOURS: 20
•»-*(;,
''ti.
..,:-: '
|i "i"i , '
, ,if!"J iblliw'.ijri'jlNllli ,
l
FACTSHEETVI VI-10
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
U.S. ENVIRONMENTAL PROTECTION AGENCY
REGION I
OFFICE OF ENVIRONMENTAL MEASUREMENT AND EVALUATION
60 WESTVIEW STREET, LEXINGTON, MA 02173-3185
MEMORANDUM
DATE: June 12,1997
SUBJECT: Request for OMB Approval of Customer Feedback Survey
FROM: Carol Wood, Manager ;
Ecosystems Assessment Branch
jgf — -,
TO: Barbara Willis, RID Desk Officer \ w_ '~ _
Regulatory Information Division ' _
Office of Policy, Planning and Evaluation "* ~~"
EPA's Region 1, New England Office is preparing to distribute copies of the 1997 State of the
New England Environment Report, la order to learn whether the report is clear, easy to read, and
provides information that our customers ne§4, we are preparing a customer feedback survey to .
include with the report. A copy of the survey form is attached.
Approximately 12,000 copies of the report will be distributed to EPA personnel, citizens, and
local, State, and Federal offices put side the EPA Region 1 Office, with the survey form as an
insert. We expect to receive approximately 3,000 responses. Region I will create a database to
track survey form responses. The information will be used to prepare a report which will
summarize the findings and make recommendations on how to improve the next State of the New
England Environment report and our other outreach activities.
We will be receiving the reports from the Government Printing Office by June 24 and hope to
receive approval for the customer survey form and have the forms ready to include in the
mailings.
If you have any questions or concerns about this request, please contact Diane Switzer at 617-
860-4377 or me at 617-860-4316. ';
Attachments
FACTSHEETVI , VI-11
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
REQUEST FOR APPROVAL OF INFORMATION COLLECTION ACTIVITY
Background
The 1997 State of the New England Environment Report is an outreach tool, designed to inform
the public on environmental conditions, using indicators that have been selected in the National
and Regional processes as we begin focusing more on environmental results. We discuss topics
of concern to the public and EPA, signs of improvement or degradation, and what EPA and our
partners are doing to improve conditions. The purpose of this outreach activity is to provide
clear and concise information to the public that meets their informational needs and allows them
to better understand what we are doing to improve and protect the environment and public health.
The discussion topics are selected based upon regional priorities and what we think the public
wants to know.
Survey purpose and description
!'id '.,,,. ,'••!'!'!. •"
The State of the New England Environment Workgroup is planning to conduct a customer
feedback survey in the form of a "Reader's Evaluation Form" to evaluate whether we are
providing the public with the information they want an3 need in a way that is easy to read and
use. The results will be used to improve the reports content, readability, and use.
, „ „!,«, .!!|]n .i n'i, i , i. „!« , ,i
The evaluation consists of six questions. The first question will evaluate the reports' readability.
The second question evaluates how well we do in communicating information the public wants
to know. The third question evaluates how the information is useful to the reader. The fourth
and fifth question evaluate the information needs of the reader that we are not meeting. The sixth
question evaluates whether the report is something the public wants to receive.
• ' ' j^SmStf lifii ..... • • i i '
Survey methodology and use of results
I!!!;i:;:i!!=!!1 ::,", ,;":::: "is1 • • : :
The potential target audience for the evaluation forms consists of approximately 12,000 citizens,
businesses, and Government personnel (local, State, and national). EPA Region I plans to
distribute the forms as inserts to copies of the 1997 State of the New England Environment
Report. Through this effort, we anticipate that approximately 3,000 readers will respond. We
estimate that it will take a respondent approximately 5 minutes to complete and evaluation form.
EPA Region I will create a database to track evaluation form responses. The information will be
used to prepare a report which will summarize the findings and make recommendations to the
State of the New England Environment Workgroup and regional managers on how to improve
the readability, use, and content of this report and other similar outreach activities.
FACTSHEETVI Vl-12
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
Respondents' burden
Number of respondents 3,000
Minutes per response 5 minutes x 3,000 = 15,000 minutes = 250 hours
Cost per hour $11.00*
Total burden: 250 hours; $2,750
i
* Based on Federal/State/Local Employment and Payroll averages as presented in the 1996
Statistical Abstract of the United States
Agency burden
EPA staff time 100 hours :
Cost per hour $36.00
Total burden: 100 hours; $3,600.00
FACTSHEETVI VI-13
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
Your Comments, Please...
We would like to know if the 1997 State of the New England Environment Report provides
you with use useful information. Your responses to the following questions will help us meet
your needs.
1. a. Is this report easy to read and understand? Yes_No_
b. What would make the report easier to read and use?
2. Please rate the report as to how informative the discussions within each of the sections are,
with 1 = not informative and 5 = very informative.
Report Section Not Informative Very Informative
a. New England Ecosystems 1 2 345
b. Public Health and Our Environment 1 2 3 4 5
c. Economic Opportunities 1 2 3 4 5
d. Recreational Resources 1 2 345
e. Environmental Education and Outreach 1 2 £, 4 5
£ New Directions 1 2 3 4 5
! i1 Ji!';1;.!* 'V'"
3. In which areas is the report helpful to you? School Work Home
Leisure Time Local Community General Knowledge Other
4. What topic(s) would you like to see in future reports?
5. We welcome any other comments you have about this report:
6. Would you like to receive a copy of future reports? Yes No
If "Yes," please provide your mailrng address:
'.•':'' "' i i . '
Name
Organization,
Address
Town/City State
Zip Code County
'! i
Please fold,in half with EPA's return address on the outside, staple/tape shut, and mail.
Thank you for your response!
FACTSHEETVI vi-14
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
ENVIRONMENTAL PROTECTION AGENCY
REGION I, NEW ENGLAND OFFICE
MEMORANDUM
DATE: June 18, 1997 '•
SUBJECT: Review of Customer Satisfaction Questionnaire
ICRNo. 1711.01 (OMB 2090-0019)
FROM: Barbara N.Willis ;
Regulatory Information Division (2136)
i j&s^y
TO: Chris Wolz . !
Natural Resources, OIRA i
As a condition of OMB approval for the generic ICR, EPA agreed to submit each specific
questionnaire covered by this clearance to OMB for review. Therefore I am forwarding for your
review Region I "1997 State of the New England Environmental Report." The purpose of this
survey is to evaluate whether Region I is providing the JDublic with the information they want and
need in a way that is easy to read and use. The results will be used to improve the reports'
content, readability, and use. :
Your comments and suggestions would be much appreciated, Thank you for your cooperation in
this matter. If you have any questions, please contact me at (202) 260-9453.
Attachments
FACTSHEETVI VI-15
-------
HQWTO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
MEMORANDUM
DATE: January 25,1996
SUBJECT: Request for approval of a questionnaire titled
"What Do You Think About This Report?"
FROM: Barry Burgan
National 305(b) Coordinator
USEPA(4503F)
TO: Matt Leopard, Director
"Paperwork Clearance Officer
Regulatory Management Division (2136)
Office of Policy, Planning, and Evaluation
,' l
EPA constantly seeks to improve the content and presentation of information in the National
Water Quality Inventory Report to Congress. Readers who have experience in the program
are asked to respond voluntarily to six questions, and offer opinions, comments and suggestions
that will help EPA tailor the content and presentation of future reports to the readers' needs.
The questions are on a single, two-sided sheet or paper at the end of the report, designed to be
easily removable, folded, and mailed. The reader would fill out one side, fold the sheet, stamp it,
and mail it to the address printed on the other side (see attached sample). Three thousand (3,000)
copies of the report will be published. Approximately 90 responses (3 percent) to the
questionnaire are expected. It should take no more than 15 minutes to complete each sheet; at
S30/hour, the burden is equivalent to $7.82/sheet (which includes a 32-cent postage stamp).
With a total time of 22,5 hours for all responses (equivalent to $675), the total burden for all
respondents would be $703.80, postage included.
il '• "HlUf ' ,j
-' J' , ' •• ' i i
The responses would be reviewed at EPA, and are not for public distribution. They will be used
to improve the quality of future reports, and satisfy the needs of respondents and other readers.
, ,'•:' , , , • ;,, : i ] ' .'
If you need any further information, please feel free to call me (260-7060) or George Doumani
(260-3666); Fax: 260-1977.
Attachment
FACTSHEETVI
Vl-16
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
WHAT DO YOU THINK ABOUT THIS REPORT?
EPA constantly seeks to improve the content and presentation of information in the National
Water Quality Inventory Report to Congress. Your response to the following questions will help
EPA tailor the content and presentation of future reports to address your needs. Please pull out
this page and return your comments to the address on the reverse. Thank you for taking the tune
to respond.
Yes No
1. Are there additional topics that you would like to see covered
In this document? D D
Please list topics:
2. Are there topics that should be removed from this document? D D
Please list topics: :
!-, >£•,>
3. Was the organization of the report adequate? T^ ^ D D
How could the organization be improved? •
In general, were the figures and graphics easy to understand? D D
Which figures were most effective at conveying information to you?
Were there any figures that were difficult to understand? : D D
Please list figures: .
Do you have any other suggestions for improving the content and presentation
of information in this Report to Congress? D D
FACTSHEETVI VI-17
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
MEMORANDUM
!! !
DATE: December 1,1995
SUBJECT: Review of Customer Satisfaction Questionnaire,
iCR No. 1711.01 (OMB 2090-0019
FROM: Barbara N.Willis
Information Policy Branch (2136)
TO: Tim Hunt
Natural Resources, OIRA
As a condition" of OMB approval for the generic ICR, EPA agreed to submit each specific
questionnaire covered by this clearance to OMB for review. Therefore t am forwarding for your
review the "User Survey for OSW'sCatalog of Hazardous and S6iUd^as7erPublications." The
purpose of the questionnaire is to obtain feedback from callers about the usefulness of the
catalog, how they would like to receive the catalog and other OSW documents, and what types of
documents they would like OSW to develop. This survey will be available on both paper and
electronically (through EPA's Public Access server on the Internet). This survey had previously
been cleared for use under the USEPA Total Quality Management ICR (OMB 2010-0023) but
that ICR has expired. I have attached the memorandum from the program and a copy of the
survey instrument.
I » ; :
, , ll ... I'
Your comments and suggestion would be much appreciated. Thank you for your cooperation in
this matter. If you have any questions, please contact me at (202) 260-945 3
,: ' i " i „. "!f ' ' i
Attachments
,i • "'.. I
:i i"' , •• , I ' • '.•':!
FACTSHEETVI VI-18
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
MEMORANDUM
DATE: November 22,1995
SUBJECT: Request for Approval of Information Collection Activity: User Survey for
OSW's
"Catalog of Hazardous and Solid Waste Publications"
FROM: Loretta Marzetti, Director
Communications, Information, and Resources; Management Division, OSW
TO: Barbara Willis, OPPE '. ' '
The Office of Solid Waste would like to include a user survey in its "Catalog of Hazardous and
Solid Waste Publications: Eighth Edition." This survey would request feedback from customers
on the usefulness of the catalog, how they would like to receive OSW publications, and what
types of documents they would like OSW to develop. v V
We are respectfully requesting OMB's approval of the survey for OSW's catalog under the
Customer Service ICR. Information collected through this survey will be used to revise the
catalog and develop new publications, which will improve our customer service as directed by
Executive Order 12862. More detailed information M. the proposed user survey is attached.
If you have any questions or concerns, please contact Carie VanHook at 703-308-7891. Thank
you.
Attachments
FACTSHEETVI VI-19
-------
HOW TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
REQUEST FOR APPROVAL OF INFORMATION COLLECTION ACTIVITY
BACKGROUND
Background
OSW's "Catalog of Hazardous and Solid Waste Publications" provides our customers with a
comprehensive list of publicly available OSW documents. The catalog is organized in sections
by title, subject area, and document number, and is available on both paper and electronically
(through EPA's Public Access Server on the Internet). OSW would like to include a user survey
with the paper and electronic versions of the catalog to obtain feedback from our customers on
the catalog and OSW's solid and hazardous waste publications.
" '. '...»,-„. : i
,» /•• i t
Survey purpose
;: ••-.. ,. • . - v/ • • <",', ' i •••
The purpose of the survey is to obtain feedback from callers about the usefulness of the catalog,
how they would like to receive the catalog and other OSW documents, and what types of
documents they would like OSW to develop. OSW will use this information to improve the
catalog for the next edition and to identify the needs for new OSW documents.
:''",., • 1*
Survey methodology
."' ',:: i , \ ••'..'. ' .i
i i j " 'i
OSW plans to include the user survey with the "Catalog of Hazardous and Solid Waste
Publications: Eighth Edition." Users of the paper catalog will be able to detach the survey, fill it
out, and mail it back to EPA. Users of the, electronic catalog will be able to print out the survey,
fill it out, and mail it back to EPA or fill out the survey electronically and return it to EPA via e-
mail,
OSW will print 10,000copies of the catalog and make it available electronically on EPA's public
access server on the Internet. We estimate that the survey will take 5 to 10 minutes to complete
and return to EPA. It is anticipated that 500 total users will return the survey. Based on this
assumption, we estimate that the user burden will be a total of 83 hours. We expect the agency
burden to be approximately 30 hours for the review of comments and development of
recommendation. The RCRA Docket contractor burden is approximately 40 hours to collect the
surveys and tabulate the results. A complete break-out of the burden associated with the task is
listed below.
FACTSHEETVI
VI-2Q
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
The information gathered from this information collection activity will benefit the OSW to
1. Improve OSW's "Catalog of Solid and Hazardous Waste Publications"
2. Identify the types of documents that our customers would like to have developed
3. Provide access to the catalog and other documents through other media, such as on a
computer disk or via Internet.
Respondents' Burden
Number of respondents
Hours per response
Cost per hour
Total burden
Hours: 83
Cost: $3,041.95
Agency burden
EPA staff
Contractor staff
Hours: 60
Cost: $2,199
500
10 minutes x 500 = 83 hours
$36.65
20 hours @||6.6lperhouf'=$ 733.00
40 houjs @ f |S.p5 per hour = $1,466.00
,,**!£,
,^ "fsrif
"'aisk "fi*'f"
FACTSHEET VI
VI-21
-------
'I
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
CATALOG OF HAZARDOUS AND SOLID WASTE PUBLICATIONS: EIGHTH
EDITION
USER SURVEY
EPA is interested in learning how useful you found the Catalog of Catalog of Hazardous and
Solid Waste Publications: Eighth Edition. Please take a few minutes to answer the following
questions. Your voluntary input will help EPA continue to improve this publication. If you are
completing a hardcopy of this survey, return the form by folding it, stapling or taping the bottom
closed, stamping it, and mailing it. If you are completing an electronic copy of this survey,
return the form via e-mail to RARA-docket.epamail.epa.gov.
5.
With what type of organization are you 4.
affiliated?
n Law firm
o Consulting company
o Media
n Industry
n Local government
n State government
n Federal Government
n School (K-12)
o College/University
o Environmental group
n Community group
n Other (please specify):
I" V. "l" ......... I
6.
..... ..... .......... "
Approximately how many documents
have you orSered from the catalog (all
editions)?
D 0 a 1-10 a 11-50 a 51-100 n Over 100
From which source do you most often
order?
o EPA's Office of Solid Waste (OSW)
a National Technical Information Service
(BUS)
n Government Printing Office (GPO)
7.
How would you rate the overall
usefulness of the catalog?
n Very good a Good a Satisfactory
n Poor n Very poor
V',i||','; 'ill "" ' i
How would you rate the
•K*. .,ir~»'' J
organization of the catalog (In
sections by title, subject area, and
document number)?
n Very good n Good a Satisfactory
a Poor a Very poor
If you are not satisfied with the
organization, what alternative
arrangement would you
recommend?
How woukj you rate the clarity of
the document ordering instructions
in the catalog?
n Very good n Good n Satisfactory
n Poor n Very poor
FACTSHEETVI
VI-22
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
8. If you felt the ordering instructions were 12. Please provide any additional
unclear, how would you recommend comments on the catalog, or on the
improving them? availability and distribution of EPA's
hazardous and solid waste documents
in general.
9. How would you prefer to receive or access
the catalog?
13. Please provide your name and phone
n Printed publication n On computer disk n number so that we can contact you if
Via the Internet n Via an electronic bulletin we have any questions about your
board responses (optional).
10. How would you prefer to receive or access Name: I
the documents you order through the Telephone Number:
catalog?
n As a printed publication n On computer V
disk n Via the Internet n Via an electronic 11 ^
bulletin board , ^
11. What new documents (related
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
MEMORANDUM
• " " • " . .; j
SUBJECT: Subinittal of Customer Satisfaction Survey for Expedited OMB Review
FROM: Michael B. Cook, Director
TO:
Matt Leopard, RID Desk Officer
Office of Policy, Planning and Evaluation (2136)
Attached is a clearance package for an Office of Water Customer Satisfaction Survey as
authorized under Executive Order 12862, Setting Customer Service Standards. This particular
survey is designed to assess State opinion on the current level of satisfaction and desired
improvements'to the agency's water grant process. This voluntary survey focuses on three of the
primary water quality management grants under the Clean Water Act, Sections 106, 319, and 604
(b). "1, ' '
We are requesting an expedited review or this survey instrument in order to comply with the
rather tight schedule that is mandated under the Executive Order. We anticipate initiating the
survey no later than mid-November. I am requesting your assistance" in coordinating this review.
...;.• ,. •*•$•;" If ' ' • " •'" i • •;! j . , ,
Please contact Jane Ephremides of my staff £260-583frj, or Don Brady in the Office of Wetlands,
Oceans and Watersheds (260-7074) if you have any questions.
Attachment . . ' "^ . _ .... _. : t
cc: Bob Wayland
Abby Pirnie
Don Brady
, •: '" Jiiil Si $ ' • i
FACTSHEETVI
VI-24
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
CLEARANCE INFORMATION COLLECTION REQUEST FOR 1994 THE
CUSTOMER SATISFACTION SURVEY
Identification of information collection
Executive Order 12862 requires agencies to "survey customers to determine the kind and quality
of services they want and their level of satisfaction with existing services." This survey will be
conducted by customer satisfaction survey professionals at the request of the Environmental
Protection Agency's Office of Wastewater Management' Resource Management and Evaluation
Staff and the Office of Wetlands, Oceans and Watersheds (OWOW) Assessment and Watershed
Protection Division. Tim Icke, Program Analyst, will be the point of contact at OWOW's
Assessment and Watershed Protection Division. He can be reached at (202)-260-2640.
Short characterization of the survey
The 1994 Customer Satisfaction Survey will solicit opinions from members of the grants
community within the States. The data collection is authorized by Executive Order Number
12862, Setting Customer Service Standards, which requires all Federal executive departments
and agencies that provide significant services directly to the public to carry out the principles of
the National Performance Review.
As a result of the Executive Order, the Office of Water is assessing its operations and procedures
in order to provide service to the public that matches of exceeds the best service available in the
private sector. The Customer Satisfaction Survey on three of the grants, those under Sections
106, 319, and 604(b) of the Clean Water Act. The survey is intended to determine the
customers' current level of satisfaction aid desired improvements in these three grants programs.
In the water program, there are 11 soiirces of financial assistance available to assist the States and
territories in achieving the mandates of the Clean Water Act. The questions focus on
respondents opinions and perceptions of services rendered.
Collection methodology
Using a pretested telephone questionnaire, EPA will survey State water quality managers, grants
administration managers, and the program managers for Sections 106, 319, and 604(b) in each of
the 57 States and territories. EPA estimates that the number of respondents will vary
considerably from State to State. Using a conservative estimate, the highest possible burden will
be five respondents per State. The survey instrument is a 15-minute, voluntary telephone
questionnaire covering approximately 30 questions. There are four open-ended questions. For
those customers that request an opportunity to respond at greater length, followup calls will be
scheduled. Since these conversations are voluntary, will varyigreatly, and will affect a small
percentage of respondents, the foliowup calls are not considered burdens under the definition of
the Information Request.
FACTSHEET VI VI-25
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
This one-time-only information collection will involve approximately 285 voluntary respondents
of which 70 percent are anticipated to complete the telephone survey. The survey will require
approximately 50 hours at a total cost to the respondents of $1444. Exhibit I-a, Respondent
Burden and Costs, provides a detailed description of the unit burden and costs to respondents for
this collection. The average burden per response is 15 minutes.
State grant program authorities are the only respondent group that will be affected by this survey,
and by definition they are not small governmental jurisdictions.
Use of survey results
The results of the Customer Satisfaction Survey will be summarized in a report or accompanying
briefing document. EPA intends to use the information gathered by the survey to identify tools
to improve the" grants management process by reducing paperwork, focusing on results while
maintaining accountability, and responding to State environmental priorities. The fundamental
purpose of the customer satisfaction survey is to assess States' satisfaction with the grant process
and existing services. The survey will help EPA
Identify potential changes that States would like to see in the administrative management of
Sections 106, 319, and 604 (b) grant programs
Assess the three grant programs' potential to enhance/retard States' adoption with the
watershed protection approach
Understand States' level of satisfaction/dissatisfaction with the three grant programs.
Collection schedule and followup plans
I II • . '. . . •! | '
:; i l , • ••: . •.•: * i: :
EPA seeks to muirmize the amount of data collected through a one-time-only data-gathering
effort while at the same time gathering enough information for an effective Customer
Satisfaction Survey. The survey will help Headquarters establish a benchmark to compare
EPA's customer service performance with that of other Federal agencies and private-sector
businesses. In the future, this information will help to provide customers with choices in both
the sources of service and the means of delivery; to make information, services, and complaint
systems easily accessible; and to provide a means to address customer complaints.
, , ( •
Costs and burden to the agency and respondents, and number of respondents
The total burden for EPA Regional and State grants program authorities is a function of the
number of grants managers, auditors, and program managers for Sections 106, 319, and 604(b) of
the Clean Water Act in each State and Interstate Agency and the number of open-ended
questions. Exhibits 1-a and 1-b give detailed descriptions of the individual reporting and
recordkeeping requirements associated with the survey. Burden estimates are based on EPA data
from the Regions and Headquarters.
FACTSHEETVI
VI-26
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
Exhibit 1-a summarizes the State Respondents' burden and costs as respondents to the voluntary
telephone survey. The total respondent burden associated with the Customer Satisfaction Survey
is 50 hours (200 respondents at 15 minutes per call) and the tptal respondent cost is $1,444,
which equates to a cost per respondent of $7.22. This estimate assumes that the average hourly
labor cost for State employees is $28.96, comparable to a GS9, Step 10 salary.
The agency's burden and cost arises from contacting appropriate regional program officers, and
from reviewing, analyzing, and processing the data. The total annual agency burden associated
with the customer Satisfaction Survey is 100 hours. This assumes that the average hourly labor
cost of Federal employees is $28.96, equal to a GS-9, Step 10 salary. The total annual agency
cost resulting from survey reporting and recordkeeping resulting from the customer survey is
$2,896.
FACTSHEET VI VI-27
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
Exhibit l-a
Respondent Burden and Costs
Regulation
requirements
Survey reporting
requirements
(one-time-only)
Respond to
telephone
Customer
Satisfaction '
Survey
Total burden and
costs for all
affected
Respondents1 *
(A)
Total ho. of
respondents3
285
(B)
No. of
responses4
200
©
Composite
hours per
respondent
0.25
(D)
Total hours
(B) * ©
50
50
•:•»••
Hourly
Viktor-
Cost5
$28.96
(G)
Total
costs
(D)*(F)
$1,444
$1,444
' ,V
'lit
n
5!
I
,;:! If;'*,: 5
slffiJ""
Respondents include State grants, managers, auditors, and program managers for Sections 106, 319, and
604{b) in each of the 57 States and territories.
Assumes approximately five calls to each State and Territory and assumes 70 percent response rate.
5Hourly labor cost equals the annual salary for GS-9 step 10 (37,651) times 1.6 (the benefits multiplication
factor as listed in the June 1992 ICR Handbook) and divided by 2,080 of work hours per year).
FACTSHEETVI
VI-28
„; I
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
Exhibit 1-b
Agency Burden and Costs (as users of data)
Regulation
requirements
Recordkeeping
requirements
(ongoing)
Agency reviews
1st draft of
report
Agency
approves final
draft of report
Total agency
burden and
costs:6
(A>
Total no. of
respondents
N/A
N/A
(B),
No. of
responses
N/A
N/A
©,
Composite
_ hours per
respondent
N/A
N/A
(D)
Total
hours
(B)*©
60
40
(F)
Hourly
labor
. cost
$28.96
$28.96
100
' ''- ''•-
'-^- -':r
(G)
Total costs
(D)*(F)
$1,738
$1,158
$2,896
"*f Xv
•*»•*
"" V
Numbers may not add due to rounding.
FACTSHEETVI
VI-29
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
DRAFT-OCTOBER 25,1994
1994 CUSTOMER SATISFACTION SURVEY
HOW ARE WE DOING?
GRANT ADMINISTRATION: GRANT ADMINISTRATION STAFF
Only a sample of the several versions of the surveys for a series of grants is presented.
Hello, may I please speak with (NAME FROM FACE SHEET)?
', • :"" • • .1 i . .
RESPONDENT AVAILABLE
RESPONDENT NOT AVAILABLE (SCHEDULE A CALL-BACK)
Hello, my name is.
. of ABC Associates. We are conducting a customer
satisfaction study for the Environmental Protection Agency (EPA) about three Office of Water
program management grant programs. The study is voluntary and the answers that you give will
be ifeept strictly confidential.
1. Are you familiar with the Section 106 grant program that funds the management of State
water quality programs?
Yes 1
No (SKIP TO QUESTION 14) 2
DO/REF (SKIP TO QUESTION 14) . 8
2. I have some questions about the Fiscal Year (FY) 1994 grant cycle. How satisfied are you
with the level of reporting burden under Section 106? Are you...
Very satisfied (SKIP TO (QUESTION 4) 1
Satisfied (SKIF* TO QUESTION 4) 2
Dissatisfied ... 3
Very dissatisfied 4
3. What are the one or two most important changes you would like to see in Section 106
reporting requirements?
FACTSHEETVI
Vl-30
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
4. Do you think EPA made good use of the FY 1994 section 106 data you reported to them?
Yes 1
No 2
Grants Administrators 1
5. Were any of the reports created by your state in complying with S ection 106 requirements for
FY 1994 useful for other State purposes such as State budgeting or accounting?
Yes i
No 2
6. How satisfied are you with the opportunity offered by EPA to file Section 106 FY 1994
reports electronically? Are you...
Very satisfied (SKIP TO QUESTION 8) . .^....'._ 1
Satisfied (SKIP TO QUESTION 8) ; T 2
Dissatisfied .>.. 3
Very dissatisfied '. 4
7. What are the one or two most important changes you. would like to see in Section 106
electronic reporting scope or procedures?
How satisfied were you with the length of time it took EPA to respond to requests for
information on grant administration and.reporting for FY 1994 Section 106 grants? Were
you...
Very satisfied ..., .'.. 1
Satisfied 2
Dissatisfied 3
Very dissatisfied .. vv 4
You did not make any requests for information 5
FACTSHEETVI Vl-31
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
8. How satisfied are you with the length of time it took to obtain the EPA approvals required at
various states of administration of FY 1994 Section 106 grants? Were you...
Very satisfied 1
Satisfied .2
Dissatisfied .. 3
Very dissatisfied 4
You did not need any EPA approvals .." 5
•£, „• , ' - , • I ;i- : I' • i
9. How satisfied are you with EPA's requirements for the close-out or rollover of the Section
106 grant fund? Are you...
Very satisfied 1
Satisfied ;. ,...". 2
Dissatisfied 3
Very dissatisfied 4
Grants Administrators .2
10. What are the one or two changes you would most like EPA to make in its Section 106
reporting requirements?
•'' '::"; •••••• ' • v "* t • "
i
,..,). , • |
i ' '.'!., I
11. Overall, how satisfied are you with EPA's FY 1995 Section 106 grant programs? Are you...
Very satisfied 1
Satisfied 2
Dissatisfied t."."" ^ '.. :..,..,..'. .3
Very dissatisfied 4
12. What is the one most important change you would like to see made to the Section 106 grant
program?
Grants Administrators .. 3
FACTSHEETVI
VI-32
."<«' • ...'Sfftfr: V,, i '»' ,1 il.l|i:i,<,, 11'«hi! ,'.: i;< 11 '»!",!, i!
-------
How TO OBTAIN CLEARANCE FOR EPA CUSTOMER SATISFACTION SURVEYS
REPEAT FOR SECTION 319 AND 604(B)
INSERT THE FOLLOWING AFTER QUESTION 1 FOR 605(b)
12a. In meeting the reporting requirements for the Section 106, 319, and 604(b) programs for
FY 1994, did your State ever have to submit the same report the different grant programs?
Yes 1
No 2
13. Please compare your State's experience with the section 106, 319, and 604b programs for
FY 1995 with that of other grant programs administered by the EPA Office of Water.
Closing: Thank you very much for your time. Analysts working on the project may contact
you later for further detail or clarification of the information you've given. Is there a
best time of day or day of the week to reach you? -,
Thanks again, Goodbye.
FACTSHEET VI VI-33
-------
• i , '•
-------
UNIT OF ANALYSIS
FACTSHEETVH
This Factsheet compares three alternative units of analysis that might be used for customer
feedback activities at EPA and recommends that one of them, the person served, be used as the
unit of analysis in most surveys of customer satisfaction conducted by EPA. It also recommends
that another, different unit of analysis, the individual customer transaction, be used as the unit of
analysis for most activities that rely on continuous feedback to track the level of customer
satisfaction and how it is changing over time.
The unit of analysis selected for the collection of customer feedback information is important for
a number of reasons: It affects the size of the list from which1 the sample needs to be drawn and
therefore affects the decision as to sample size, it affects what kinds of things will be included on
that list, it affects what is asked of each person contacted, and it affects how the responses of
those in the sample (i.e., those contacted) are analyzed.
There are three principal alternative units of analysis that might be used for any given customer
feedback activity at EPA:
1) The unit of analysis is the customer transaction. (This is explained in the discussion that
follows below.)
2) The unit of analysis is the person served.
3) The unit of analysis is the organization served in each case where the person served was
acting on behalf of an organization.
To clarify the differences among these three possible units of analysis, let's look at the
implications of choosing one of these units of analysis as compared with each of the others.
To facilitate this comparison, let's assume that the customer feedback method that has been
selected for use is a telephone survey. Once the unit of analysis to be used is selected, the next
steps are to determine what sample size to use, then to randomly select that number of specific
people (or organizations) to be called.
Keep in mind that any customer feedback activity should seek feedback from customers on their
satisfaction with products and services received in a certain specific period of time. For clarity,
we will here assume that the period of time of interest is a specific calendar year.
In examining all the customers served in a specific (hypothetical) EPA program area for the year
of interest, we discover that a total of 236 different people were served (by being provided a
product or service). On closer examination we discover that a there were a total of 377 customer
transactions. The reason for the difference in these two numbers is that some customers, after
obtaining one product or service, called back later in the year to request another product or
FACTSHEET VII Vll-1
-------
UNIT OF ANALYSIS
service. A few then called back a third time, and so on. We here refer to each occasion on which
a specific person called to obtain a single specific product or service as a customer transaction.
COMPARISON OF PERSON SERVED VERSUS CUSTOMER TRANSACTION AS THE
UNIT OF ANALYSIS ' ' ' ' ' ' ""
,, •• , . ";:,' " :: " ' '.',•:','.. ! I
If the unit of analysis is the person served, then we will use 236 as the total number of
people/things to be characterized and this will be the basis for choosing the sample size. If the
unit of analysis is the customer transaction, then we will use 377 as the total number of
people/things to be characterized and this will be the basis for choosing the sample size.
If we decide to use the person served (of which there are 236) as the unit of analysis, and we
choose to use a sample size of 40, then we need to make a random selection of 40 persons served.
So at this point we will put together a list of all 236 persons served. Note that each person served
will appear on this list only once, no matter how many times he/she called during the year to
obtain a product or service. As a consequence, when 40 names are randomly picked from this
list, each person served has the same chance of being picked as every other person served, no
matter how many times he/she called during the year to obtain a product or service. Finally,
when the persons randomly selected are called, they will be asked about all of their experiences
during that year as customers of that EPA program area.
If instead we decide to use the customer transaction (of which there are 377) as the unit of
analysis, then all else being equal, we will need a larger sample size because we now have more
things (transactions) to sample from. Let's say we now, as a consequence, choose to use a
sample size of 70. To randomly select 70 of these 377 customer transactions, we will need to
make a different, longer list of things to pick from. This time the list will contain 377 items, and
each item in the list will be a customer's name plus the one product or service obtained in a
single transaction. (Each item in the list may also include the date on which that product or
service was obtained—this will be desirable in those program areas where we find some
individuals obtaining the same product or service more than once during a single year.)
Customers who obtained more than one product or service during the year will appear on the list
more than once, and those customers appearing on the list twice (because they obtained two
products or services during the year) will have twice as great a chance of being randomly
selected to be part of the sample as those customers who only obtained one product or service
during the year. Furthermore, what will be picked is not just the name of a customer but the
name of the customer plus the specific product or service he/she obtained in a specific transaction
during the year (plus, if needed, the date it was obtained). Finally, when those picked are called,
the questions they are asked will focus specifically on that one transaction, i.e., they will be
asked to limit their response/comments to how satisfied they were with that one particular
product or service (obtained on that date), how courteously they were treated when obtaining that
one product or service (on that date), etc.
FACTSHEETVH vii-2
-------
UNIT OF ANALYSIS
Because of the greater complexity associated with its use, it is expected that most EPA program
areas will not use the customer transaction as the unit of analysis for their customer feedback
surveys. At the same time, when there is reason to believe that the degree of customer
satisfaction varies greatly from one product or service to another provided by the same EPA
program, it may be decided that it is appropriate in that case to use the customer transaction as
the unit of analysis. If so, it is important to remember, when contacting each customer included
in the sample, to ask the customer to limit his/her comments to that one particular product or
service obtained in the transaction for which he/she was selected even if he/she obtained two or
more products or services during the year.
Note that useful product- and service-specific customer satisfaction data can also be obtained by
using the person served as the unit of analysis. This can be accomplished as follows: Use the
total number of persons served as the basis for choosing the sample size. Next, use the list of
persons served as the basis for randomly selecting the specific persons to be contacted. Then,
when calling each person selected, first, ask him/her to identify all of the various products or
services he/she received during the year; second, ask about his/her overall satisfaction with those
products and services; and finally, ask about his/her degree of satisfaction with each individual
product and/or service received. The analysis of the results obtained can then be used to
characterize the overall degree of satisfaction of the 236 customers as a whole, and will also
provide useful information about differences in degree of satisfaction with specific products and
services.
COMPARISON OF PERSON SERVED VERSUS ORGANIZATION SERVED AS THE UNIT
OF ANALYSIS
For this same example (236 persons served in 377 customer transactions), there is yet another
possible unit of analysis: the organization served (rather than the person served) for customers
that were acting on behalf of an organization. In this (hypothetical) case, of 236 persons served,
96 were acting on their own behalf, and 140 were acting on behalf of an organization.
Furthermore, there were several cases where more than one person served was acting on behalf
of the same organization. For example; seven different persons called to obtain products or
services on behalf of the XYZ corporation, five different persons called to request products or
services for the ABC law firm, and three different people called requesting products or services
for the LMN environmental group. We find, on further examination, that the 140 persons served
who were acting on behalf of a organization were acting on behalf of a total of 63 different
organizations.
With these facts in mind, the EPA program area conducting the survey may decide that it wants
to know how satisfied each of these organizations as a whole1 was with the products and services
it obtained. In this case, depending on how it is decided to approach those not acting on behalf
of an organization (i.e., persons acting as members of the general public), we could end up with a
total of 159 total customers (persons and organizations)—the total of the number of
organizations served (63) plus the total number of persons from the general public served (96).
Or we could instead treat the 96 members of the general public as one group and the 63
FACTSHEET VII
-------
UNIT OF ANALYSIS
organizations served as a second group, and sample separately from each of these two groups. In
that case, we would again have a total of 159 people or things to sample from (63 organizations
In one group plus the 96 members of the general public treated as one group), but we would
approach the sample selection process differently.
• i ' - i i •
Let's say that the program area seeking customer feedback decides to use the second approach:
We then have a total of 159 people and things to sample from, separated into two different
groups. We will need to sample separately from the group consisting of the 63 organizations
served and from the group consisting of the 96 members of the general public who were served.
Let's begin our discussion of how this sample selection process can be conducted with the group
consisting of the 96 members of the general public. We want to select a sample of these 96
persons to contact in our survey. In this case, all else being equal, with only 96 persons to
sample from, we can use a smaller sample size than we used earlier when the unit of analysis was
people served (of which there were 236) or customer transactions (of which there were 377).
Let's say it is decided to select a sample of 25 from this group. The most straightforward method
for s,eleqting these 25 would be to create a list of the 96 members of the general public and men
randomly select 25 names from that list. We then contact each of these 25 people in our phone
survey.
Let's now address the group consisting of the 63 organizations. We want to select from these 63
a sample to be contacted in our phone survey. Again, all else being equal, with only 63 things to
sample from, we can once again use a smaller sample size than we used earlier when the unit of
analysis was people served or customer transactions. Furthermore, we can also use a smaller
sample size than we used for the group consisting of the 96 members of the general public. Let's
say it is decided to use a sample size of 20. We now have to select 20 organizations to contact in
our survey. Once again, the most straightforward method for selecting these 20 would be to
create a list of the 63 organizations served and then randomly select 20 organizations from that
list.
„, : , • • • / ' i - •
But we now have another problem. We need to decide who to call at each of these organizations.
For those organizations where only one person called during the year to obtain a product or
service on behalf of that organization, there is no problem—that is the person who will be called.
But for organizations included in the sample for which more than one person obtained a product
or service on behalf of that organization, a decision has to be made—will all of these persons be
called during the phone survey? If not all, then how will those to be called be selected and how
many will be selected for each organization?
I
As you can see, using the organization served as the unit of analysis results in a number of
complexities. There are further complexities that arise in analyzing the results obtained from
using such an approach. For this reason, those at EPA responsible for obtaining customer
feedback should in general not use the organization served as the unit of analysis (unless of
course they have a compelling reason to do so).
FACTSHEETVH Vll-4
!,i la, i I .Hi id!,
-------
UNIT OF ANALYSIS
CONCLUSION
We conclude that, in most cases, for' reasons of simplicity and convenience alone, the preferred
unit of analysis for obtaining customer feedback by means pf surveys at EPA will be the person
served. Experience with customer surveys elsewhere has shown that using the person served as
the unit of analysis gives meaningful and very useful results. Since surveys based on person
served are the easiest to design and carry out, EPA program? undertaking customer surveys are
encouraged to use persons served as the unit of analysis for all of their customer feedback
activities, except when there is a compelling reason to do otherwise. Furthermore, adopting
person served as the unit of analysis for most customer feedback surveys at EPA will maximize
the comparability across different program areas and over time of the results obtained from these
surveys. !
Please note that the above conclusion applies only to customer satisfaction surveys (periodic
surveys). In any case where a continuous feedback approach is to be used (like a comment card
included in each copy of a publication sent out or a followup phone call to each nth customer 2
days after a product or service has been provided), then the unit of analysis will instead normally
be the specific customer transaction (the transaction in which the product or service was
provided) about which the feedback is being sought.
FACTSHEET VII VII-5
-------
-------
EXAMPLES OF GRAPHS FOR PRESENTING CUSTOMER FEEDBACK RESULTS
FACTSHEET VIII
Once data have been collected, think hard about how you want to present them. Often, we focus
a lot of attention on collecting feedback and performing complex analysis and forget that we
have to market the findings if we are going to help bring about change. The form you select for
presentation can make or break all the previous work. Results need to be communicated clearly
to the appropriate people before an organization can begin learning from its customers
There are many variables to consider when presenting data, such as the nature and level of the
audience, the reasons the feedback was collected and how it will be used, and the nature of the
data itself. Some of the more common forms are listed below, with a brief explanation of the
unique use of each presentation.
A very basic bar graph can be used to
convey the percentage of the population
that responds within a given range. For
example, the graph above indicates that
1.2 percent of the respondents rated their
overall satisfaction as 1, 2, or 3 on a scale
of 1 to 10; 15.5 percent rated overall
satisfaction as 4 or 5; 46.1 percent as 6, 7,
or 8; and 37.2 percent as 9 or 10. Note
that these groupings of 1—3, 4—5, 6—8 and
9-10 are somewhat arbitrary, and can be
changed to suit the needs of your project.
Additionally, the labels very dissatisfied,
dissatisfied, acceptable, but room for
improvement and high quality are also
subject to change according to individual
needs.
It is often useful to organize responses by
customer segments that are meaningful to
the survey audience. In the case above,
the mean, or average, overall satisfaction
ratings are organized by geographic
regions. Note also that the base, or
number of respondents in that region, is
noted to the right of each bar. This can
be important to identify the relative
validity of the information.
Overall Satisfaction
Overall Satisfaction by Region
FACTSHEET VIII
Vlll-1
-------
EXAMPLES OF GRAPHS FOR PRESENTING CUSTOMER FEEDBACK RESULTS
Responses can also be organized by
other types of segments. In the case
above, respondents answered questions
about their length of use, and the amount
of grant money they had received. Note
that the numbers of respondent in each
category is to the right of each bar.
Overall Satisfaction
Among those who received grants
totaling less than $50,000 a year
Among those who received grants
totaling more than $500,000 a year
Among those who receive grants totaling
between $50.000 and $500.00 a year
Among those who have been interacting
with XYZ (or less lhan 1 year
Among those who have been Interacting
wild XYZ between 1 and 3 years
Among those who have been inlaractlng
with XYZ for more than 3 years
Contract staff
Technical itaff
Adminl.traSvo staff
BiWoo/accounting staff
127
A bar graph can also be used to identify
the mean, or average, response to
various services received. This is
useful to compare the levels of
satisfaction between services offered.
2 4 ( 9
MMH •itfefacUon rating
10
A slightly more complex graph can
allow the comparison of responses
between two segments of a population.
In the example above, 62 percent of the
children surveyed considered the
quality of the health care rney received
to be 9 or 10, on a scale of 1 to 10. In
general, it appears that more children
rated the quality of health care as
higher, while more adults provided
lower ratings.
Quality of Health Care Received
Worst
0-3
9-10
Best
D Adults surveyed (n=72)
n Children surveyed (n=79)
FACTSHEETVIH
Vlll-2
•iiSli-i, ,, . IS.; •.;• ;, ,'! ;,|',:.h i ibiiiifi.;-,: til IE: i
'St.!!
-------
EXAMPLES OF GRAPHS FOR PRESENTING CUSTOMER FEEDBACK RESULTS
!GottIng tiw cars you need when you lw«d it
Adults surveyed
Children surveyed
0%
20%
40%
60%
80% 100%
D Utually E3Alw»y» [
Priority Issues for Building
Customer Loyally
Action matrix
Another way to compare two populations is
to use segmented bar charts, as shown
above. The graph above indicated that
children surveyed were more likely to feel
they received the care they needed when
they needed it.
If driver analysis is being performed, a
useful way to present the results is in a
quadrant chart, as in the example above.
By comparing the levels of satisfaction with
the levels of importance, we can prioritize
results. In the example above, the services
listed in the upper right quadrant are those
that were very important to the customer
and were rated as providing high levels of
satisfaction. These services, information
clearinghouse and interactions with agency,
staff 2XQ identified as areas where the
organization is meeting or exceeding the
customers needs. In contrast, the upper left
quadrant identifies services that are very
important to the customer, but are rated as
providing low satisfaction. It is these
services, contracts management and grants
management, that require immediate
attention. The lower right quadrant identifies services that provide high levels of satisfaction, but
are not important to the customer. In the example above, no services were found to be in the
lower left quadrant. This quadrant identifies services that are not important to the customer and
provide low levels of satisfaction.
^Contracts iTESiagement
>Qants '
rrBnaQement
^Information clearinghouse
>lnteractions vu'th agency
staff
>Advocacy
> Distribution of reports
Low
Satisfaction
High
Another example of a chart that related the
results of driver analysis is above. In this
case, subjective labels have been applied to
the areas of the chart, according to the needs
of the project
Drivers of Satisfaction
H
Importance
Lo
V
Promote
Maintain
Improve
Commit minimal resources
FACTSHEET VIII
VIII-3
-------
EXAMPLES OF GRAPHS FOR PRESENTING CUSTOMER FEEDBACK RESULTS
A. pie chart is another method useful for relating
the proportions of a population that responded in
a particular way to a question. In the example on
the previous page, the majority of the
respondents clearly felt that the funding process
was about the same as with others.
Compared to other Government or Government-like grant or funding
proceeeee would you »«y Hut your experience wu...
Examples of Customer Remarks
Concerning Billing Needs
* "Biting is sketch/ and difficult to understand.*
* "We are running approximately 6 months behind
on biMng ,"
»"l hava had problems with billing, and would like
XYZ to reassess the way they are billing:
timeliness and accuracy,'
»"Poorly tombed billing,"
»"Billing report really hard to understand, very
Inconsistent."
• "Mora prompt billing, so that I can delete them off the
records."
* *8Hng is a (wffljht zone,"
About the MUTW?
31.7%
Dontknow/
15.0%
What Product* or
- Memel
-MaXcgs
- O8»r
do XYZ euatonww want to ••• offend?
9
15
10
* ImprovedMarilted poDdes
• Improved cuatomer service
* Mbtrnlng customers of existing services
Open-ended responses are usually organized
according to subject matter. In the example
above, comments that refer to problems with a
billing service are grouped together. This is a
very effective way to communicate comments
from customers to the audience. Following is
an example of open-ended responses being
organized and grouped together. The actual
statement by the customer is not listed, but the
numbers of customers who felt a certain way
is clearly communicated.
FACTSHEETVHI
Vlll-4
', ii ,,,!iu, ;,: ,.,, . ' if" .|
-------
EXAMPLES OF GRAPHS FOR PRESENTING CUSTOMER FEEDBACK RESULTS
One additional type of chart which can be
useful for presentations is the trend or run
chart which is used to identify meaningful
changes from year to year, or between
feedback activities. Such charts are used
to monitor progress and portray
improvement. A time series chart not only
can show trends, it can portray
relationships. With time series, change
and relationships in two or more items can
be compared that would otherwise appear
on different scales (apples and oranges) if
the net change from one point to another is
defined as a percentage.
Customer Satisfaction
istQtr
2nd Qtr
Courtesy
Timeliness
3rd Qtr
T
4th Qtr
Accuracy
Accessibility
FACTSHEET VIII
VIII-5
-------
-------
SURVEY SOFTWARE INFORMATION
FACTSHEETlX
There are numerous computer software packages available. The prices range from several
hundred to many thousands of dollars., Software is also on the market for e-mail and Internet
surveys. In selecting survey software,, you may wish to consider the dimensions of the survey(s)
that you are planning to perform. There are three elements in the software packages that EPA
Customer Service Program considered. These are the survey form, the database, and other
information needed to administer the survey, and the reporting features of an effective survey.
The cost-effectiveness of purchasing a scanner depends upon the survey software that is being
used, size of the length and number of surveys, and the size of the sample being surveyed. The
software vendors can recommend scanners that are appropriate for their software and expected.
Below are the names of four vendors that EPA has either considered using or has used for survey
activities within the last several years. The EPA Customer Service Program can provide support
in the use of the Corporate Pulse Software within EPA. More details about Corporate Pulse and a
sample survey follow.
Esther H. Larson
NCS Federal Government Marketing
4301 Wilson Boulevard, Suite 200
Arlington, VA 22203
http://www.ncs.com/ncscorp/govt/
federal.htm
Phone: 1-800-359-9333 or 703-284-5810,
Fax: 703-284-5819
Software: NCS Design Expert, NCS Survey,
NCS Viewpoint, Scantools
Steve Hehl
Vitality Alliance
55 North University Avenue, Suite 225
Provo, UT 84601
Phone:1-800-772-9478, or 801-373-2233
Fax: 801-373-8884
Software: Corporate Pulse
Colleen C. Thoresen
Auto Data Systems
6111 Blue Circle Drive
Minnetonka, MN 55343-9108
http://www.autodata.com
Phone: 1-800-662-2192, or 612-938-4710,
Fax: 612-938-4693
Software: ^AutoData Survey, AutoData
Survey+, AutoData Pro
Raosoft, Inc.
6645 NE Windermere Road
Seattle WA, 98115-7942
http://www.raosoft.com/
Phone: 206-525-4025, or 703-742-5295
(Washington, DC location)
Fax: 206-525-4947
Software: jRaosoft SURVEY, with numerous
options for tailoring surveys
FACTSHEET IX
IX-1
-------
SURVEY SOFTWARE INFORMATION
CORPORATE PULSE
WHAT is CORPORATE PULSE?
Corporate Pulse is survey software. The EPA Customer Service Program (CSP) researched
software capabilities and selected this package because it best met the projected needs of the
agency. Demonstration discs that guide the user through the program's general capabilities may
be borrowed from the CSP (202-260-9144). EPA staff may use the CSP copy after receiving
general training. Programs and regions planning extensive survey work may wish to purchase
discounted site licenses rather than use the CSP copy.
WHAT CAN CORPORATE PULSE DO?
Corporate Pulse software has three capabilities: survey construction, survey administration, and
analysis.
Survey construction
• Library of tested questions—including topics such as customer service, leadership, team
orientation, communication
• Demographics—select from predefined ones or develop your own
• Scannable for creation
• Response scales—select from predefined ones or create your own
• Page layout
• Spelling check
• Autoreversing on negative questions
• Open-ended questions
• 360-degree profile surveys
Survey administration
• Participant pool database
• User-definable field names
• Standards database-file importing and exporting
• Up to 32,000 records per pool
• Simple random, stratified random, and cluster samples
• Produce distribution lists and mailing labels
• Track ail surveys, administrations, and reports
• Scan, import, or manually enter survey response data
• Schedule, budget, and track survey projects
• Create survey sample based on user-defined parameters including expected percentage of survey
to be returned
• Allowable margin of error
• Confidence level
Survey analysis
• Types of reports (frequency distribution, descriptive mean, favorable/unfavorable, 360-degree
profile, trend analysis—automatic or manual
• Reports on entire survey or selected questions
• Reports on all 360-degree profile participants or selected participants
• Demographic cuts and comparisons
• Professional quality reports
FACTSHEET IX
IX-2
-------
SURVEY SOFTWARE INFORMATION
A copy of the OIG-developed survey that the OIG develop using Corporate Pulse follows:
EPA Office of Inspector General Customer Survey on Audit
Products/Services To Auditees and Agency Managers 1998
Please help us to serve you better by taking about 10 minutes to answer the following questions.
We value your opinions and request that you please return the completed questionnaire within one
week. Just fold, staple and drop in the mail (it's preaddressed) or fax to (202) 260-1896.
1. Name and phone Repo)1 no.
Location (circle and specify)
1.HQ/NPM/office 2. Ftegion/div/office 3. Other
2. Please specify the OIG Audit product/service on which you ;are basing your responses (fill
circle).
1. Financial audit 6. Projects/assistance
2. Performance audit 7. Training/presentations
3. Contract audit 8. Testimony
4. Assistance(grant)audit 9. Other
5. Special review/comments
3. I am familiar with the IG Act, mission and role of the OIG
I.Yes 2. A little 3. No
1. Strongly disagree 4. Somewhat agree
2. Disagree .5. Agree
3. Somewhat disagree 6. Strongly agree
FACTSHEETlX |X-3
-------
SURVEY SOFTWARE INFORMATION
OIG audit products/services
4. Are factually accurate and consistent with available information 123456
5. Are objective and balanced (recognize agency assistance and corrective
action) 123456
6. Address relevant or significant issues 1 23456
7. Are useful for decisions, actions, and improvements 123456
8. Contain recommendations that are practical and appropriate 1 23456
9. Are clear, logical, and understandable 123456
10. Are timely 123456
11. Are responsive to agency needs or requests for assistance 123456
12, Contribute to the agency's strategic goals 123456
OIG audit staff
13. Are professional and courteous 123456
14. Are knowledgeable about the programs and/or issues involved 123456
15. Clearly communicate purpose, process, progress, issues, results,
and recommendations
123456
123456
123456
16. Seek and consider input, comments, and clarification on issues
17. Encourage a constructive working relationship
Suggestions and comments
18. How would you improve the audit process, products, or results? (continue on back if needed)
19. In what program or issue areas can OIG audit products best serve EPA? (continue on back if
needed)
20. If you do not agree that the audit results or products add value, why not? (continue on back if
needed)
Please provide additional comments about any of your responses on the back or attach additional
pages. For further information or to discuss comments and results, call (202)260-9684.
Thank You!
FACTSHEET IX
"&U.S. GOVERNMENT PRINTING OFFICE: 1999 -721-515/94313
IX-4
:: ::..::.: fc
------- |