United States
Environmental Protection
Agency
EPA-230-R-00-001
March 2000
Customer Service
in Permitting
A Toolkit for Regions, States, Tribes,
and Local Permitting Authorities
This document is intended solely as guidance. It is not binding on EPA or any other person.
States, Tribes, and local permitting authorities are encouraged to consider and use the material
in this Toolkit, but they are free to act at variance with it.
Introduction
About this Toolkit: Customer
Service in Permitting
Permits are a cornerstone of environmental protection because they spell out how regulated
communities must comply with environmental laws. EPA and its regulatory partners (states, tribes,
and local governments) issue permits to control facilities' emissions into the air and water, and to
ensure safe management of hazardous waste.
Many people have their first encounter with an environmental agency through the permitting
process. That makes customer service a particularly important part of permitting. Our customers
include the general public; individuals who may be affected by the permit decision; state, tribal, and
local governments authorized by EPA to issue permits; and permittees that need permits to operate.
Early in 1999, EPA announced its "Next Generation in Permitting," an action plan to move
permitting towards measuring performance, while providing regulated parties with more flexibility
in how they meet standards. This action plan is part of EPA's broad-based agenda for reinventing
environmental protection, and it builds on the work of EPA pilot programs that have tested
innovative approaches to permitting, including Project XL. The action plan and other permitting
reform efforts show that EPA is committed to:
• Strengthening the role of the public in important decisions
• Focusing on results instead of procedures
• Reducing unnecessary burden
As part of our plan to reinvent environmental permitting, EPA committed to preparing a
user-friendly toolkit of customer service processes and techniques for use by EPA and its partners.
This toolkit, which is also part of an Agency-wide Customer Service Strategy, is the result.
Through a workgroup that included representatives from EPA Headquarters, Regions, and states, EPA has
endorsed four broad customer service standards for environmental permitting. These standards are
discussed in Chapter 3 of this toolkit:
• We will prepare permits that are clear, fair, appropriate, enforceable, and effective.
• Our staff will be knowledgeable, responsive, cooperative, and available.
• We will work with representatives of permitting authorities to continually improve
permitting processes and services.
• We will make our permit decisions within the time frames that are established for the type
of permit being requested.
These permitting standards have been adopted by EPA. Other permitting authorities are encouraged
to adopt these or similar permitting standards designed for their organization. This toolkit is
intended to help staff at EPA, as well as at other permitting authorities, meet these customer service
standards. Specifically, it offers tools for obtaining customer feedback and using it to improve
permitting processes to meet customer needs. After this introduction, the Toolkit is organized to
provide information on:
• A Customer Service Orientation - what EPA's customer service program is and what this
toolkit is about
• Partners in Permitting Service Delivery — who the permit process participants are and how
they interrelate as customers and suppliers
• Permitting Standards — setting standards by which we can gauge good customer service
• Feedback and Measurement — planning for and gathering of feedback
• Responding to Feedback — analyzing and acting upon feedback data
• Maintaining Good Customer Service — creating the cultural change necessary to keep
customers satisfied, permit after permit
• Customer Service in Action — examples of what other EPA regional offices and states have
done
As with any toolkit, readers of Customer Service in Permitting will need to decide for themselves
which tools are most appropriate for their particular circumstances.
Acknowledgments
This document was conceived and prepared by the national Customer Service in Permitting
Workgroup established by the U.S. Environmental Protection Agency's Region 2 Regional
Administrator Jeanne Fox. She served as co-lead of the permitting core process under the EPA
Customer Service Steering Committee with Timothy Fields, Acting Assistant Administrator for Solid
Waste and Emergency Response.
Membership and level of participation in the workgroup included, at various times,
representatives from EPA Headquarters, EPA Regions 1, 2, 4, 5, 6, and 8, and more than 20 states.
We want to thank all contributors to this effort, but, in particular, acknowledge the following core
members of the workgroup without whose dedication and expertise this document could not have
been produced:
Stanley Siegel, EPA Region 2, Division of Planning & Protection (Workgroup Chair)
Andrew Bellina, EPA Region 2, Division of Planning & Protection
Patricia Bonner, EPA HQ, Office of Policy
Steve Burkett, EPA Region 8
Patricia (Tricia) Buzzell, EPA HQ, Office of Solid Waste & Emergency Response
Lance Miller, EPA and State of New Jersey
Vernon Myers, EPA HQ, Office of Solid Waste & Emergency Response
Joel Salter, EPA HQ, Office of Water
Leo Stander, EPA HQ, Office of Air Quality Planning & Standards
Peg Anthony, Macro International, Inc.
Table of Contents
Introduction

Chapter 1: A Customer Service Orientation
    What Is EPA's Customer Service Program?
    So How Will Things Be Different Now?
    Who Should Use this Toolkit - and Why?
    What Are Our Services?
    Who Are Our Permitting Customers?
    How Is Quality Related to Services Provided?

Chapter 2: Partners in Permitting Service Delivery
    What Is the Permit Process?
    Who Are the Permit Process Participants?
    Who Are the Permit Service Providers?
    What Is the Relationship Between the Customers and Suppliers?
    What Are the Services Being Delivered and How Are They Evaluated?
    What Do I Do With The Following Tables?

Chapter 3: Permitting Standards
    What Are Customer Service Standards?
    What Are EPA's Permitting Customer Service Standards?
    How Might a Permit Service Provider Fulfill These Standards?

Chapter 4: Feedback and Measurement
    Why Do I Need Customer Feedback?
    Why Should I Set Customer Service Standards and Goals?
    How Do I Design a Feedback & Measurement System?
    Plan the Customer Feedback Procedure
        How Ready Is My Organization for Customer Feedback?
        What Kinds of Customer Feedback Are Already Occurring?
        What Are the Core Questions I Should Ask of My Customers?
        How Often Should We Ask Customers for Feedback?
        How Long Should Feedback Activity Take?
        Why Should I Establish Quality Control Procedures?
    Construct Data Collection Procedures
        What Is the Best Approach for Assessing Customer Satisfaction?
        Continuous Assessment
        Decide on Data Collection Method
        The Sample
        Determine the Sample Size
        Develop the Questions
        Construct the Questionnaire
        Focus Groups
        Telephone Surveys
        Pretest
        Contingency for Non-response
        OMB Clearance (EPA Only)
        Model Survey Instruments
        Additional Resources
    Conduct Data Collection
        Focus Groups
        Mail Surveys
        Telephone Surveys
        Electronic Feedback

Chapter 5: Responding to Feedback
    Analyze the Data
        Data Clean-up
        Types of Data and Analyses
        Analysis: An Example
        Driver Analysis
        Presenting the Data
        Making Recommendations Based on the Data
        Presenting Recommendations - Using Graphics
    Act on the Results
        Is This the Beginning or the End of the Process?
        How Do You Decide What to Do with the Feedback You Receive?
        How Good Is Good Enough?
        How Do We Know What to Work on First?

Chapter 6: Maintaining Good Customer Service

Chapter 7: Customer Service in Action
    Introduction
    Permit Assistance/Information Center
    Stakeholders' Early Participation in the Permit Process
    Plain Language
    Permit Processing/Issuance Timeframes
    Customer Satisfaction Surveys
    Incentive/Award/Reward Programs

Appendix A: Internal Control Procedures
Appendix B: Sampling
Appendix C: OMB Clearance
Appendix D: Model Surveys
Chapter 1
A Customer Service
Orientation
What is this toolkit about? Who is it for and why should it be used? These
questions and other general questions about service to customers are addressed
in this first chapter of the Customer Service in Permitting Toolkit.
What is EPA's Customer Service Program?
On September 11, 1993, President Clinton issued Executive Order 12862, "Setting Customer Service
Standards," which directs federal agencies to provide service to all customers that matches the best in
business.
The order led EPA to adopt the following vision for the Agency's customer service efforts:
As we achieve our mission of protecting public health and the environment, EPA people are
becoming customer-focused, our products and services customer-driven, and our customers
satisfied.
EPA's Customer Service Strategy includes seven elements:
• Setting customer service standards
• Publicizing the standards
• Establishing measures and tracking systems and providing measurement assistance
• Building staff capacity by making training and information available
• Involving staff in the development of customer service programs
• Benchmarking EPA against world class service
• Providing managers assistance and actionable information
How Will Things Be Different Now?
As a customer-focused agency, we hold ourselves accountable for providing service that rivals the
best in the private sector. We have a set of standards against which we measure ourselves, the Six
Principles of Customer Service:
1) Be helpful! Listen to your customers.
2) Respond to all phone calls by the end of the next business day.
3) Respond to all correspondence within 10 business days.
4) Make clear, timely, accurate information accessible.
5) Work collaboratively with partners to improve products and services.
6) Involve customers and use their ideas and input.
To implement these principles, EPA identified eight core processes that are most important to our
customers:
• Permitting
• Enforcement and Compliance Assistance
• State, Tribal and Local Grants
• Partnership Programs
• Public Access
• Pesticides Registration
• Research Grants
• Rulemaking
The workgroup prepared this Permitting Toolkit to help you improve customer service within the
permitting core process.
Who Should Use This Toolkit - and Why?
This toolkit is not just for EPA. Many non-federal environmental agencies helped us make it a
resource for any permitting authority — state, tribal, local, or federal. It can help all of us to
provide better environmental and public health protection.
Like any toolkit, it is not a mandate, but a set of tools that all permitting programs can use. The
toolkit describes how you can gather reliable feedback from your customers, and how some agencies
have redesigned their permitting programs to meet customer needs. Asking customers about their
needs, expectations, and experiences enables us to measure whether they are satisfied with our
services. And customer satisfaction can save resources in the long run by avoiding unnecessary and
time-consuming confrontation.
The bottom line? Finding out customer opinions about what we do and how we do it will help us
improve our products and services. Customers will notice and value these improvements, learning
that government agencies can be more responsive to customer needs.
What Are Our Services?
Permits describe what facilities must do to meet environmental health and safety
standards. In many cases, EPA authorizes or delegates state, tribal, and local governments to issue
permits. (We will refer to both of these in this Toolkit as "delegating.") In jurisdictions where
delegation has not occurred, EPA issues permits directly. Permits control facility emissions into the
air (e.g., permits to modify or construct, and operating permits), protect surface and groundwater
(e.g., national pollutant discharge elimination, storm water runoff, underground waste injection) and
ensure the safe management of hazardous waste (e.g., treatment, storage and disposal).
Who are Our Permitting Customers?
EPA's Hearing the Voice of the Customer - Customer Feedback and Customer Satisfaction
Measurement Guidelines define a customer as someone who directly relies on a provider for a
product or service. Customers are defined on the basis of the service or product they receive. In
permitting, a customer may:
• have a direct relationship with the permitting authority
• receive one or more services or products from the permitting authority
• be directly affected by the actions of the permitting authority
• be an employee of the permitting authority, acting as an internal customer
While it is possible to identify and label many customer groups that are interested in the permitting
process, this toolkit focuses on two major groups — "interested and impacted parties" and "permit
applicants." Interested and impacted parties are those individuals, interest groups, communities,
states, or tribes that raise a concern or have comments regarding the permit action. Permit applicants
are seeking approval from EPA or a delegated authority to conduct a regulated activity.
This toolkit describes the relationships among the interested and impacted parties, the permit
applicants, and the permitting authorities. We also briefly discuss internal relationships among the
permitting authorities (EPA headquarters, regional offices and delegated authorities).
Table 1.1 outlines the relationship between the customers and their service providers in the
permitting process.
Table 1.1
Customers and Their Service Provider

    Customer                                        Service Provider
    EPA Regional Office                             EPA Headquarters
    Delegated State, Tribal or Local Government     EPA Regional Office
    Permit Applicant                                EPA Regional Office or Delegated
                                                    State, Tribal or Local Government
    Interested and Impacted Parties                 EPA Regional Office or Delegated
                                                    State, Tribal or Local Government
Because these customers often have differing and conflicting needs, delivering customer service in
permitting becomes a complicated matter. However, as stated in the "Blair House Papers" (January
1997), an agency desiring to be a customer-driven organization must continuously ask its customers
what they want. Experience shows we can identify the needs of the permitting customers and work
to satisfy those needs while still carrying out our mission to protect human health and to safeguard
the natural environment.
How is Quality Related to Services Provided?
Figure 1.1 shows that customer satisfaction depends both on the services we deliver and the way we
deliver them. Obviously, we should deliver services in a "right way" rather than in a "wrong way."
But, perhaps you haven't considered that the services you provide may not be the ones your
customers want. Imagine the reservoir of potential services that go untapped because we are out of
touch with our customers, spending unnecessary time doing the wrong things, or doing the right
things in a wrong way. A customer-focused organization should know whether its services meet
customer needs, and take advantage of opportunities to invest in newer and better services.
Figure 1.1 Services and Delivery Model

    Services (what's done)     Delivery (how it's done)     Customers
    Right things               Done right                   Satisfied
    Right things               Done wrong                   Irate
    Wrong things               Done right                   Bewildered
    Wrong things               Done wrong                   Disenchanted

    Whenever we do the wrong things, or do the right things in a wrong way, we
    miss opportunities to do more important things because we are out of sync
    with our customers.
Chapter 2
Partners in Permitting
Service Delivery
To provide outstanding customer service, you must understand the dynamics of
customers, suppliers and services. This chapter identifies the who of the permitting
process — those providing services and those receiving them — and the what of the
permitting process — the services being delivered.
What is the Permit Process?
The permit process encompasses all the steps in making permitting decisions, including how you
write guidance or regulations, obtain and review information, oversee state and local partners, seek
public input and make decisions. The ultimate product is official approval or denial for a given
permit or permitting program.
Who are the Permit Process Participants?
• EPA Headquarters
• EPA's 10 Regional offices
• State, tribal, or local government
• Permit applicant
• Interested and impacted parties
Who are the Permit Service Providers?
The permit service providers are EPA Headquarters, EPA Regions, and a delegated governmental
permitting authority, where applicable. Permit service providers either make the permit decision,
or provide services to the decision-making organization.
What is the Relationship Between the Customers and Suppliers?
The relationship between the permit service providers and their customers is shown in Figure 2.1.
The solid lines of this figure represent services provided to the customers. The dotted lines indicate
opportunities for obtaining customer feedback on those services.
What are the Services being Delivered and How are they Evaluated?
Tables 2.1 through 2.4 show the types of customer feedback you should seek for each service you
provide. Don't forget to ask your customers about additional services they need that you're not
delivering.
There are two special relationships shown in Figure 2.1. First is the direct feedback from delegated
states to EPA Headquarters through the Environmental Council of the States (ECOS) and other state
organizations. This relationship, which bypasses the Regions, often deals with higher level, non-
permit-specific issues. Second is the community-based relationship between the permit applicant
and the interested and impacted parties. We envision a relationship characterized by the Community
Based Environmental Protection (CBEP) program, where all interested parties develop a joint
environmental vision and processes to achieve it.
What Do I Do With The Following Tables?
Look at the table that represents your customer-supplier situation and the services being provided.
Then look at the delivery feedback needed to assess whether you are asking for the appropriate
customer feedback, i.e., whether you are customer-focused. The feedback surveys of Chapter 4
should help you get this important input from your customers.
You can also "turn" these tables. When you are a customer, you should have certain expectations
for the services delivered to you. If your expectations are not being met, perhaps you can start a
dialogue with your service provider.
Figure 2.1
Relationships, Services and Feedback

    [Diagram summary] EPA Headquarters supplies the Regional Offices with FTE
    and $, regulations, policy, guidance, initiatives, interpretations, and
    program reviews/evaluations; feedback returns through surveys and
    interviews, and directly from delegated states through ECOS and other
    state organizations. The Regional Offices supply delegated authorities
    with resources, policy, guidance, initiatives, interpretations, program
    reviews/evaluations, and delegations; feedback again returns through
    surveys and interviews. Either the Regional Office or the delegated
    authority supplies permit applicants and interested and impacted parties
    with regulations, policies, guidance, forms/forums, interpretations, a
    timely permit process, and access to information. In the original figure,
    solid lines represent services and dotted lines represent feedback.
Table 2.1
Delivery Feedback Needed for Services Delivered
Service Provider: EPA Headquarters
Service Customer: EPA Regional Offices

Resources: Was the process used to allocate resources to the Regions fair and reasonable? Do
Regions want more input to the allocation process?

Regulations: Do Regions have the regulatory tools to carry out their permit programs? Seek input
on regulatory reform needed.

Policy: Do Regions have the policies needed to carry out their permit programs? Are existing
policies clear and useful?

Guidance: Do Regions have the necessary guidance to carry out their permit programs? Is
existing guidance clear and useful?

Initiatives: What initiatives do the Regions need to carry out the permit programs? Which
initiatives are counterproductive, ignored or unwanted? Seek input on initiatives' usefulness.

Interpretations: Do Regions receive timely and accurate interpretations? Assess quality of
interpretations to determine whether you are meeting regional needs and expectations.

Program Reviews and Evaluations: Do Regions receive timely and accurate program reviews and
evaluations? Are they useful? Assess whether you are meeting the Regions' needs and expectations.
Table 2.2
Delivery Feedback Needed for Services Delivered
Service Provider: EPA Regional Offices
Service Customer: Delegated Authorities (State/Tribal/Local Governments)

Resources: Was the process used by EPA to allocate resources fair and reasonable? Do delegated
authorities want more input to the allocation process? Is NEPPS providing flexibility for the
delegated authorities to allocate resources where needed?

Regulations: Are the federal regulatory tools appropriate and adequate to carry out the programs?
Seek input on regulatory reform needed.

Policy: Do delegated authorities have the national or regional policies they need to carry out
their permit programs? Are existing policies clear and useful?

Guidance: Do delegated authorities have the guidance they need to carry out their permit
programs? Is existing guidance clear and useful?

Initiatives: What initiatives do the delegated authorities need to carry out the permit programs?
Which initiatives are counterproductive, ignored or unwanted?

Interpretations: Do delegated authorities receive timely and accurate interpretations? Assess
quality of interpretations to determine whether you are meeting delegated authorities' needs and
expectations.

Delegation or Authorization of Programs: Are delegation/authorization application instructions
clear? Is the processing of delegations/authorizations timely? Seek input on experiences with
delegations/authorizations.

Program Reviews and Evaluations (Oversight): Do delegated authorities receive timely and
accurate program reviews and evaluations? Are they useful? Do they meet the needs and
expectations of delegated authorities?
Table 2.3
Delivery Feedback Needed for Services Delivered
Service Provider: Permit Authority (EPA Regional Office or Delegated Authority)
Service Customer: Permit Applicant

Regulations: Are regulations understandable and reasonable? Do they meet the permit applicant's
needs? Seek input on regulatory reform needed and forward it to EPA Headquarters, as
appropriate. Such feedback may lead to industry-wide or individual facility reinvention
activities, such as the Common Sense Initiative (CSI), ECOS Innovations Agreement, or the XL
program.

Policy: Are program policies understandable, reasonable and supportive of permit applicants'
needs?

Guidance: Is permit guidance understandable, reasonable and supportive of permit applicants'
needs?

Forms: Are permit application forms and instructions understandable, reasonable and supportive
of permit applicants' needs?

Interpretations: Do permit applicants receive timely and accurate interpretations that meet their
needs?

Timely Permit Process: Are permit decisions timely? Assess permit applicants' needs for
timeliness.
Table 2.4
Delivery Feedback Needed for Services Delivered
Service Provider: Permit Authority (EPA Regional Office or Delegated Authority)
Service Customer: Interested and Impacted Parties

Regulations: Are regulations understandable and reasonable? Do they meet the needs of interested
and impacted parties? Seek input on regulatory reform needed and forward it to EPA
Headquarters.

Policy: Are program policies understandable and reasonable? Do they meet the needs of
interested and impacted parties?

Guidance: Is permit guidance understandable and reasonable? Does it meet the needs of
interested and impacted parties?

Opportunities for Involvement: Do these parties have ample opportunity to be involved in the
permitting process? What additional involvement do they need?

Forums: Are forums for public input and interaction reasonable and supportive of these
customers' needs? What additional forums do they need?

Access to Information: Do these customers have access to the information they need to review
and comment on the permit applications?

Interpretations: Do these customers receive timely and accurate interpretations? Are the
interpretations meeting their needs and expectations?
Chapter 3
Permitting Standards
EPA has established customer service standards for its own permitting work that
can serve as a guide to other permitting organizations. This chapter deals with
permitting standards - why it is important to have them and measure them, and
how permit service providers can fulfill them.
What Are Customer Service Standards?
EPA's customer service standards establish a yardstick by which the Agency will measure itself.
EPA has organized these standards into nine groups: Universal; Public Access; Partnership;
State/Tribal/Local Program Grants; Pesticide Registration; Enforcement Inspections and Compliance
Assistance; Rulemaking; Research Grants; and Permitting. EPA created the Customer Service in
Permitting workgroup to develop the tools needed to meet the customer service standards for
permitting, including ways we can obtain feedback from our permitting customers. Much of our
work is summarized in this toolkit.
What are EPA's Permitting Customer Service Standards?
We took draft standards developed by the Agency and worked them into this final set of standards:
We will prepare permits that are clear, fair, appropriate, enforceable, and effective.
Our staff will be knowledgeable, responsive, cooperative, and available.
We will work with representatives of permitting authorities to continually improve
permitting processes and services.
We will make our permit decisions within the time frames that are established for the
type of permit being requested.
We believe these standards should lead to world-class customer service. However, we encourage
other permit-issuing organizations to modify these standards as needed to fit their particular situation
and customer expectations.
How Might a Permit Service Provider Fulfill These Standards?
Each of the permit service providers listed in Chapter 2 has a role in meeting the four permitting
customer service standards. Table 3.1 illustrates some activities that might fulfill these standards.
It is important to ask customers what they want before deciding how your organization will meet the
standards. That will be the focus of Chapter 4, Feedback & Measurement.
Table 3.1
Fulfillment of Permitting Customer Service Standards by Service Providers
We will prepare permits that are clear, fair, appropriate, enforceable, and
effective.
Headquarters
• promulgate permit regulations that are understandable, written in plain English, and
workable
• write guidance materials that clarify the intent behind the permitting requirements, and
the rationale leading to final regulatory decisions
• make interpretations on a case-by-case basis whenever questions arise
• stress consistency in application to take the guesswork out of implementation
• seek feedback from customers on whether Headquarters' services are clear, fair,
appropriate and effective
Regions
• write guidance in plain English
• tailor training programs to meet the needs of delegated permitting authorities
• make permit oversight reviews productive and to the point, taking into account
circumstances specific to the permit; do not nit-pick
• seek feedback from customers on whether the Region's services are clear, fair,
appropriate and effective
State/Tribal/Local (and Region, if Region is directly implementing a permit program)
• write permits that reflect the unique nature of the permit applicant's situation
• seek input from the permit applicant and other interested and impacted parties
• make permit decisions after carefully weighing all of the input from the permit applicant
and the interested and impacted parties
• seek feedback from customers on whether the permitter's services are clear, fair,
appropriate and effective
Our staff will be knowledgeable, responsive, cooperative, and available.
Headquarters
• prepare training programs covering all appropriate rules and regulations for regions,
states and regulated community
• respond to inquiries from permit writers in a timely fashion and in the appropriate level of
detail
• respond to correspondence, hotline inquiries, etc., in a timely fashion at the appropriate
level of detail
• maintain records on policy memoranda that are used to interpret regulations, thus
ensuring that permit writers have consistent information
• seek feedback on whether Headquarters' staff is knowledgeable, responsive, cooperative,
and available
• to the extent practicable, maintain key policy memoranda on the Internet to facilitate
access by Regions and delegated authorities
Regions
• develop and implement yearly training programs for permit writers
• train staff subject matter experts where necessary
• seek feedback on whether Regional staff are knowledgeable, responsive, cooperative, and
available
State/Tribal/Local
• develop yearly training programs for permit writers
• train staff subject matter experts where necessary
• seek feedback on whether their staff are knowledgeable, responsive, cooperative, and
available
We will work with representatives of permitting authorities to continually
improve permitting processes and services.
Headquarters
• identify and remove barriers to improved service
• involve regional and state permit writers in the drafting of regulations
• use formal and informal mechanisms to identify permitters' needs for guidance, training,
interpretations, new regulations, etc.
• seek customer feedback on whether Headquarters' efforts are actually improving
permitting processes and services
Regions
• involve regional permit writers in the drafting of regulations
• use formal and informal mechanisms to identify delegated programs' needs for guidance,
training, interpretations, new regulations, etc.
• identify and remove barriers to improved service
• seek customer feedback on whether the Region's efforts are actually improving
permitting processes and services
State/Tribal/Local
• identify and remove barriers to improved customer service
• participate through appropriate organizations in the development of EPA regulations
• use formal and informal mechanisms to identify the regulated community's needs for
guidance, training, interpretations, new regulations, etc.
• seek customer feedback on whether efforts are actually improving permitting processes
and services
We will make our permit decisions within the time frames that are established
for the type of permit being requested.
Headquarters
• strive for timely decisions, regulations, guidance, interpretations, training, etc.
• seek customer feedback on the timeliness of decisions and services
Regions
• notify states or applicants of expected response dates to submittals and meet or beat those
timeframes; if you can't meet the timeframe, notify the state or applicant prior to the original
due date and set a revised date for the Agency response
• seek customer feedback on the timeliness of decisions and services
State/Tribal/Local
• notify applicants of expected response dates to permit applications and meet or beat those
timeframes; if you can't meet the timeframe, notify the applicant prior to the original due
date and set a revised date for the Agency response
• seek customer feedback on the timeliness of decisions and services
Chapter 4
Feedback and
Measurement
One of the basic tenets of customer service is to ask customers what they want,
or you'll likely get it wrong. This chapter introduces the concepts and
procedures that will help your organization develop an effective system to
identify customer needs and measure satisfaction.
Why Do I Need Customer Feedback?
Your customers' ideas can help you provide better environmental and public health protection. By
obtaining feedback from your customers, you can measure progress toward your customer
satisfaction goals. Ultimately, customer feedback will help you improve the way your organization
provides products and services to its customers.
What services do permitting customers usually want? We've found that applicants usually seek
clear, timely, and fair permits. Communities want permits that protect their environment.
Permitting authorities rely on others to supply resources, expertise, guidance, and support in an
effective manner. Your customers may have additional, or different, needs.
How do you know if your customers are getting the services they deserve? It has long been a basic
tenet of customer service that you must ask your customers what they want, or you will likely get
it wrong. If you assume you know what your customers want, they may perceive you as arrogant
or you may spend resources on program changes that they don't want or need. Meanwhile, you may
not notice a very simple, inexpensive, and straightforward need — because you didn't ask.
Also, asking your customers what they want promotes your agency's image as a more customer-
focused organization. It shows you care about their opinions and that you are truly interested in
serving them better. In addition, reaching out to your customers gives them a sense of ownership of the
program, generally resulting in more support for decisions reached.
You may even need to revise or expand the standards you set at the beginning of your customer
service effort. You may find out the customer service goals you initially set are too high or too low.
That is okay — feedback is, by definition, meant to be a dynamic process and the feedback loop is
a crucial element of achieving excellence in customer service.
Why Should I Set Customer Service Standards and Goals?
As with any major project, you should set goals to identify where you're going and how you'll know
when you get there. When you are seeking excellence in customer service, you'll have to initially
make some assumptions about your customers' needs so you can start asking them questions.
These assumptions will show up in your initial goals and standards. While EPA has developed four
standards for Customer Service in Permitting, as described earlier, they should not be construed as
the only right ones. Your organization should set standards that best fit your mission.
You should set quantifiable and measurable goals for each standard. These goals will help you
design the feedback and measurement system.
How Do I Design a Feedback and Measurement System?
This toolkit describes three phases for developing and implementing a successful feedback and
measurement system. (Additional phases needed to analyze and act on the feedback are presented
in Chapter 5.) You can find more detailed information in "Hearing the Voice of the Customer -
Customer Feedback and Customer Satisfaction Measurement Guidelines" OP-235-B-98-003,
November 1998. This document may also be found on the EPA web site at:
www.epa.gov/customerservice/guide.htm.
The three development and implementation phases are:
• Plan the customer feedback project
• Construct the data collection procedures
• Conduct data collection
THE PLAN checklist
get ready
see what feedback you already have
decide which core questions to ask
decide frequency for customer feedback
define the target customer population
identify services supplied to customers
establish purposes of customer feedback
decide whether to contract out
develop written plan
determine resources needed
obtain agreement to proceed (if needed)
PLAN THE CUSTOMER FEEDBACK PROCEDURE
How Ready Is My Organization for Customer Feedback?
As you begin to plan, consider how ready your organization is for customer feedback by asking these
questions:
• Do staff understand why the organization needs customer feedback?
• Do staff members and managers sincerely intend to pay attention to customer feedback and
act on it?
• Are key managers committed to taking action based on customer input?
• Have staff members directly participated in defining the need for customer feedback and in
identifying the approaches to use for obtaining it?
• Have managers, employees, and other users of customer feedback information expressed
their needs, issues, concerns, and objectives?
• Is there managerial and employee buy-in and ownership?
• Are there few or no barriers — such as concerns about change, extra work, adverse findings
— to using customer feedback successfully?
• If there are barriers, are there identified methods to overcome them?
If you answered "yes" to these questions, your organization is clearly ready for customer feedback.
If you answered "no" to some questions, you might consider what you can do to prepare your
organization to obtain and use customer feedback.
If your organization is not fully ready for customer feedback, you should not necessarily halt your
feedback activities. Instead, just understand that you will probably face some challenges in getting
the work done, getting managers to pay attention to findings, and assuring customers that your
organization is committed to implementing the changes they want. You may need to start slowly,
collecting unsolicited feedback and using informal opportunities to gather customer input. You can
make some positive changes based on that feedback, and build a case for seeking broader and more
formal customer feedback to verify and expand the anecdotal information you gathered.
What Kinds of Customer Feedback Are Already Occurring?
Before proceeding with a new customer feedback activity, check with EPA's Customer Service
Program in the Office of Policy to see whether any similar work has been conducted recently. You
also should check with delegated program representatives (for certain customer feedback functions).
By obtaining information from other sources, you may avoid unnecessary duplication, save time and
money, and make the best use of previously gathered data.
What are the Core Questions I Should Ask of My Customers?
We believe it is important to have some core questions that are always used by those doing customer
feedback within an organization. Core questions represent broad levels of understanding and
impressions about expectations, organizational responsiveness, and customer satisfaction. By using
core questions, we can compare and aggregate customer feedback information, both across the
agency and over time.
The following are the core questions that we endorse:
> Overall, how satisfied are you with the way the permitting process was managed?

      1      2      3      4      5      6
  not at all                            very

> How courteously did our organization staff treat you?

      1      2      3      4      5      6
  not at all                            very

> How satisfied are you with the quality and timeliness of the communications you have
  received from our organization?

      1      2      3      4      5      6
  not at all                            very

> How fully did our organization respond to your needs for guidance, information, or
  technical support under the permit process?

      1      2      3      4      5      6
  not at all                            very
How Often Should We Ask Customers for Feedback?
Many organizations contact their customers once a year to get an overall measure of satisfaction.
Other types of feedback, such as follow-up telephone calls or comment cards, provide immediate
information at the point of contact. When organizations need targeted customer information, most
find it useful to conduct multiple studies each year.
As a rule, a customer service program does not want to overburden its customers, so take care to:
• Avoid activities that duplicate work already conducted
• Avoid contacting the same customer repeatedly
• Seek consent from customers to participate in feedback projects, especially those that are
lengthy or where customers have been contacted previously.
There is no standard answer to how often you should ask for feedback. The frequency of customer
feedback will depend on several factors:
• Were the findings of previous customer feedback studies positive or negative? If your
organization took action in response to concerns customers raised, has there been enough
time to see whether those actions have been effective?
• Considering the issue(s) involved in the feedback activity, how often does it make sense to
solicit customers' opinions?
• Can you distinguish annual versus ongoing information needs?
• Is there a way to collect feedback during your everyday customer transactions? Is there a
way to match feedback with your organization-to-customer transactions? Can you ask
customers at the end of a call if the information provided was useful? Is there any follow-up
with them later to see if they used the product provided?
• Has some critical event occurred for which customer feedback would be important? (e.g.,
was the office reorganized to speed customer service or product delivery?)
• Do you anticipate program changes that may require surveying customers both before and
after the change?
How Long Should Feedback Activity Take?
Obviously, many variables can affect the time it takes to complete a feedback effort. These variables
include the type and method of feedback, number of respondents, and the extent to which your
organization is prepared to plan and act on the results. Many people, including the customer, will
have expectations about how long the effort will last and when results may become available.
Therefore, you should carefully plan the schedule of a feedback effort. Figure 4.1 is an example of
the timetable for one feedback survey:
Figure 4.1
CUSTOMER FEEDBACK SURVEY - PROJECT TIMETABLE

    DELIVERABLE                             TIME FRAME
    Project Planning and Design             June
    Design Survey Instrument                (2-3 meetings)
    - focus groups                          6/22 & 6/29
    - internal draft of questionnaire       7/7
    - 1st draft to survey team              7/10-7/12
    - markup meeting                        7/12
    - 2nd draft to survey team              7/17-7/20
    - revised draft sent to field           7/25
    - final version sent for approval       8/4
    - final approval from agency            8/7
    Data Collection
    - field testing                         8/14 & 8/15
    - revisions (if necessary)              8/16-8/18
    - phoning                               8/21-9/15
    Analysis and Report
    - analysis                              9/15-10/15
    - report                                10/17
    - briefing charts                       10/31
    Process Improvement Workshops
    - coordinating committee                11/1 & 11/2
    - executive board                       11/8 & 11/9
    - notes to coordinating committee       11/16
    - notes to executive board              11/27
    Performance Standards and Process
    Improvement Implementation
    - action teams                          start 12/10
Why Should I Establish Quality Control Procedures?
Developing and applying good internal control procedures helps ensure the quality, reliability, and
integrity of information used for decision making. You should apply quality control standards and
techniques to data collection, analysis, and reporting of results.
Controls may be as simple as limiting access to raw, customer-specific data and separating the
data collection, administrative, and presentation duties from the affected action officials.
Alternatively, controls can be as thorough as performing independent quality assurance reviews.
Internal controls should provide reasonable assurance that your customer feedback objectives will
be accomplished reliably and cost-effectively. See Appendix A for a description of specific control
standards and techniques that apply to EPA.
Figure 4.2
✓ Checklist
Establishing the Purposes of Customer Feedback
Define the feedback objectives
• What do I want to accomplish with this feedback?
Determine how the findings will be used
• What will we do with the findings?
• Will they be used:
- as a key business performance indicator?
- to revise, correct, or improve a process?
CONSTRUCT DATA COLLECTION PROCEDURES
What Is the "Best" Approach for Assessing Customer Satisfaction?
Your best approach will depend on the kind of product or service you provided, the kinds of
customers you served, how many are served, the longevity and frequency of customer-supplier
interactions, and what you intend to do with the results. Two very different approaches can produce
useful findings:
• Continuous assessment methods — methods to obtain feedback from the individual customer at
the time of product or service delivery (or shortly afterwards).
• Periodic survey approaches — methods that obtain feedback from groups of customers at
periodic intervals after service or product delivery. They provide an occasional snapshot of
experiences and expectations.
You need multiple inputs from customers to understand their expectations and satisfaction. It is like
peeling away layers of an onion - each layer reveals yet another deeper layer. Both approaches can
help you assess your organization's overall accomplishments and areas for improvement.
Continuous Assessment
While this Toolkit focuses on methods for obtaining customer feedback periodically, you also can
adopt continuous assessment as a standard method for obtaining customer satisfaction information.
Include continuous assessment by:
• enclosing a feedback card with every draft and/or final permit issued
• enclosing a feedback card with every public comment response mailed
• making a follow-up phone call to every customer (or to every fifth, or twelfth, or nth
customer) within one or two days of contact; a simple way of picking every nth customer is
sketched below
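The following minimal Python sketch illustrates the "every nth customer" idea, assuming the
contact log is a simple list (the names and numbers here are hypothetical, not from the toolkit):

    def every_nth(customers, n):
        """Return every nth customer (the nth, 2nth, 3nth, ...) for follow-up calls."""
        return customers[n - 1::n]

    contacts = [f"Caller {i}" for i in range(1, 61)]  # hypothetical contact log
    follow_up = every_nth(contacts, 12)               # every twelfth caller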
Decide on Data Collection Method
Informal methods for obtaining information from customers clearly produce valuable information.
Everyone needs to use these everyday opportunities for customer feedback. Use this information
to complement the more systematic forms of gathering feedback discussed earlier.
Formal methods frequently used to gather customer feedback include focus groups; mail-back
postcards included with materials you send to your customers; mail surveys; telephone surveys;
publication evaluation forms; printed or in-person surveys (possibly including computer-assisted
personal interviews or an intercept survey through which you ask every nth customer attending a
function or visiting a facility to participate). Electronic mail will become an important means for
collecting customer feedback as more people gain access to the Internet.
When picking a method, you should consider several factors, such as the types and number of
questions you will ask. Your decision also will be affected by available resources, how fast decision
makers need to have the information, and how representative the findings need to be. The response
rate—the number of customers who actually answer questions divided by the number contacted for
information—is also an important consideration because it will affect the way you can use findings.
A summary of different methods appears in the table below.
If you choose a mail or phone survey, you will need an accurate name, address and/or telephone
number. At times you'll need to know which EPA programs or services the customer sought or
received, as well as any demographic information available.
Note that several different practices can affect the ratings of various data collection methods:
• Focus groups, telephone and in-person surveys require trained staff to conduct proper interviews
and prevent interviewer bias. They can also demonstrate through direct personal contact that the
agency takes customer feedback seriously.
• Telephone surveys can more readily accommodate differences in language and literacy levels
than can mail surveys, but they cannot accommodate lengthy questionnaires or visuals. However,
some people do not have a telephone, and many who do will refuse to participate in telephone
interviews.
• Mail surveys can be longer, since respondents can work at their own pace, but they have the
longest response time and may not reach the intended target.
• Mail surveys allow no interviewer bias to creep in, but they offer little ability to probe or ask
complex questions, and may generate ambiguous answers.
• Your advance and follow-up efforts can dramatically improve costs, timeliness, and your ability
to generalize results. Mail surveys can follow up with customers who do not respond initially.
An advance letter can increase participation and response rates for mail and telephone surveys.
Such letters also allay customers' concerns about such matters as: how they were selected, why
they have been selected to participate again (if applicable), anonymity, how long it will take
them to answer the questions, and how findings will be used. (See the sample advance letter on
the following page.)
[on your organization's letterhead]
Mr. John Doe
Alpha, Beta, and Gamma Co., Inc.
555 Main Street
Anywhere, USA 12345
Dear Mr. Doe:
I am writing to let you know that your name has been selected at random to participate in a survey about
business owners' experiences with our agency. You are one of a small group of people we are
contacting. Your feedback can help shape our future direction.
We at [XXX Agency] will take findings from the survey into consideration as we develop our plans for
the next decade. We are committed to incorporating customer viewpoints and recommendations into our
strategic planning, budgeting, and decision-making while recognizing the need for balancing sometimes
competing and conflicting interests.
I realize that we may have contacted you before to answer similar questions. We are tracking our efforts
to respond to customer concerns, so it is very important to hear from you again. You will not be
identified personally.
You should receive the survey in the next few days. It will take less than 10 minutes for you to complete.
I urge you to consider the questions carefully and let us know how we can better serve you. Meanwhile,
if you have any questions, please call 1-800-xxx-xxxx to speak with a staff member on XXX Agency's
survey team (or someone at YYY Consulting, the firm conducting the survey for us).
I thank you in advance for your time and consideration.
Sincerely,
Name
Title of highest possible agency person
    [Table: comparison of data collection methods (mail-out surveys, telephone
    surveys, in-person surveys, and focus groups) rated on factors such as
    convenience for the customer to complete.]
The Sample
If the number of customers of interest is relatively small, not more than 50, you could contact each
customer to obtain feedback. This is the census approach. In many cases, you provide services or
products to a large group of customers, perhaps too large for a census approach. In such cases, a
sampling approach is needed, and two options are possible: (1) a judgment sample, in which you
consciously select the customers that you will contact from the entire group served, or (2) a
probabilistic sample, in which customers are picked randomly from the entire group served during
the period of interest (e.g., the past year).
In most cases, it is better to rely on the probabilistic approach. Judgment samples may be biased
because of the way customers are selected. If a sample is biased, it is impossible to draw inferences
about the entire group of customers served. As long as the response rate is high enough,
probabilistic samples are not biased, so inferences can be made about the group as a whole.
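To make the probabilistic option concrete, a simple random sample can be drawn from the full
customer list in a few lines of Python. This is our own illustrative sketch; the list contents
and sample size are hypothetical:

    import random

    # Hypothetical master list of every customer served during the period of interest
    customers = [f"Permit applicant {i}" for i in range(1, 501)]

    random.seed(42)                        # fixed seed makes the draw repeatable
    sample = random.sample(customers, 80)  # each customer is equally likely to be chosen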
Determining the Sample Size
If you choose to conduct a mail, telephone, or in-person survey, you will need to decide the number
of people selected to participate. To determine this number — the sample size — you should
consider several factors, such as the total number of customers served, the intended use of the results,
available resources, and time.
The larger the percentage sampled, the more certain you can be that the feedback represents the
results you would have obtained if you had surveyed every customer. The smaller the percentage
sampled, the greater the likelihood that the feedback will differ significantly from the results
for the entire customer list.
The relationship between sample size and accuracy of findings is due to sampling error, which
indicates the extent to which the sample is different from the entire group under study. In a news
article that reports the President's approval rating as 62 percent, plus or minus 5 percent, the
"plus-or-minus" value is the sampling error.
To decide the size of the sample, you can either:

• Determine the largest sample size that you can afford and calculate the associated sampling
error, or
• Determine the maximum sampling error that is acceptable and then select the sample size
that will produce that level of error.
The sampling error can be estimated through a confidence interval, which specifies a range of values
within which the true measure is found. Typically, survey results rely on a 95 percent confidence
interval, but lower levels are acceptable, depending on how you plan to use the findings. Popular
media reports rarely stipulate confidence intervals, but they are implied. Again using the President's
popularity rating as an example, the unstated premise is that the analyst is 95 percent certain that the
President's popularity is between 57 and 67 percent; that is, 62 percent, plus or minus 5 percentage
points.
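To reproduce this arithmetic, the sketch below applies the standard normal-approximation formula
for the sampling error of a proportion. It is an illustration under textbook assumptions (simple
random sampling, a large population, and no finite population correction; see Appendix B for a
fuller treatment of sampling), and the function names are our own:

    import math

    Z_95 = 1.96  # z-score corresponding to a 95 percent confidence interval

    def margin_of_error(p, n, z=Z_95):
        """Sampling error for an observed proportion p from a sample of size n."""
        return z * math.sqrt(p * (1 - p) / n)

    def sample_size_needed(max_error, p=0.5, z=Z_95):
        """Smallest n whose margin of error does not exceed max_error.
        p = 0.5 is the conservative worst case."""
        return math.ceil((z / max_error) ** 2 * p * (1 - p))

    # The approval-rating example from the text: 62 percent, plus or minus 5 points
    print(round(margin_of_error(0.62, 362), 3))  # -> 0.05 with 362 responses
    print(sample_size_needed(0.05))              # -> 385 responses in the worst case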
One last thing to consider in determining the sample size is the kinds of comparisons you will make
with survey findings. Many times, analysts are interested in comparing ways that different
customers react to various services. These comparisons may involve large vs. small businesses, the
general public vs. educators, and so forth. If these comparisons are a critical portion of the analysis,
you must plan for them in the sample design so that enough of each customer type is surveyed to
make the findings meaningful. See Appendix B for additional information on sampling
considerations.
Develop the Questions
✓ Checklist for effective questions:

_ use short, value-neutral statements or questions
_ use simple words
_ avoid jargon
_ be clear and easy to understand
_ arrange questions in logical order
_ use appropriate response choices (include all possible answers and
  minimize overlap among the answers)
_ do not use double negatives
_ be upbeat and interesting
_ write to the appropriate reading level (9th grade or less for the general
  public; several word processing software packages incorporate a feature
  that determines reading level, and a simple version is sketched after
  this checklist)
_ use questions pretested in other surveys whenever possible
_ leave out the "nice to know" program/product/service questions not
  vital to success
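As a rough illustration of how such a reading-level feature works, the sketch below scripts the
widely used Flesch-Kincaid grade-level formula in Python. This is our own example, not a tool
from the toolkit, and the syllable counter is a crude heuristic:

    import re

    def count_syllables(word):
        """Rough syllable count: runs of vowels, minus a silent trailing 'e'."""
        groups = re.findall(r"[aeiouy]+", word.lower())
        count = len(groups)
        if word.lower().endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def flesch_kincaid_grade(text):
        """Grade level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * len(words) / len(sentences)
                + 11.8 * syllables / len(words) - 15.59)

    question = ("Overall, how satisfied are you with the way "
                "the permitting process was managed?")
    print(round(flesch_kincaid_grade(question), 1))  # aim for about 9 or less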
In choosing questions, you should keep two principles in mind: (1) make sure that the questions and
answers address your objectives and (2) set limits on the length of the survey instrument.
Many sources are available to help develop questions for surveys. These include software packages
such as Corporate Pulse (which is available to EPA staff through the Customer Service Program),
prior surveys sponsored by EPA and other agencies, journal articles, and item banks maintained by
some universities and survey organizations. When possible, it is better to use a previously tested and
validated question, rather than one newly created for the current survey.
Survey questions are generally of two types: open-ended and closed-ended. In open-ended
questions, the customer creates his or her own answers. The following are examples of open-ended
questions:
• Do you have any suggestions for improving service? [IF YES] What are they?
• How could our organization be more responsive to your concerns?
• Could you please describe the most satisfying experience you've had with our organization?
Closed-ended questions limit the responses a customer can provide. They may include yes/no
answers, categories of responses, rank-ordered responses, or scales. The following are examples
of each type:
Yes/no
In the past 6 months, have you contacted the XYZ office?
1) yes 2) no
Categories
In what kind of community is your business located? Would you say it's...
1) urban 2) suburban 3) rural
Rank order
Of the following items, which 3 are most important to you? Please indicate with a "1" for the most
important, a "2" for the next most important, and a "3" for the third most important.
_ clean air
_ clean water
_ hazardous waste disposal
_ a minimum level of government regulation
_ lower taxes
Scale
Please rate your satisfaction with the service you received, using a scale of 1 to 6. "6" means
you are very satisfied, and "1" means you are very dissatisfied.
1      2      3      4      5      6
With closed-ended questions, it is relatively easy to record and analyze responses, and you will not
receive irrelevant or unintelligible responses. However, you risk "missing the boat." To illustrate,
suppose you ask the closed-ended question "What was the main reason for your visit?", giving
several possible answers, and 30 percent of your respondents mark "other." Drawing valid
conclusions about why customers visited would then be hard. If you decide to use closed-ended
questions, pretest them to identify all the likeliest responses to your questions.
In developing closed-ended questions, you should carefully consider the advisability of including
response options such as "don't know" and "no opinion." While customers should not be forced into
providing responses when they really do not have answers, it is better to find ways to encourage a
response than to let customers default to a neutral position. In mail surveys, you can provide this
encouragement through instructions; in telephone and in-person surveys, you can encourage
responses by not offering "don't know" and "no opinion" as response options.
On the other hand, including "not applicable" as a response is important in mail surveys, so that
customers are able to indicate this when they have not had a particular experience. In asking
questions about a past event, consider giving a "don't remember" option. Keep the survey to a
reasonable length by asking only the questions needed to address the issues that prompted your
survey; leave out the "nice to know" questions.
Recognize that open-ended questions will provide a richness of data that can complicate analysis.
Reducing responses to a few categories that can be coded, entered into a database, and analyzed can
be difficult. It is probably best to use a mix of closed and open questions in most customer
feedback questionnaires.
If you are planning an ongoing or periodically repeated survey, identify a few key program goals that
are unlikely to change very soon, and focus your questions on them. Develop questions that will
indicate how well customers think the goals are being met. These key questions need not be
elaborate or profound, but should be very basic. To effectively compare results over time, you need
to use essentially the same core questions in your survey on each iteration. You will need to avoid
making any major changes to these key questions, whether in wording, scaling, or placement, so be
sure to ask the right questions from the beginning.
Make sure your questions are relevant to your customers. Although this may seem obvious, it is
important to remember. Be particularly wary of questions that may be interesting to ask, but may
only add time and cost while not producing useful information. Such questions include:
• Extraneous questions that do not address the stipulated purposes and objectives of the
feedback activity.
• Questions that are subject to misinterpretation. These may have vague words, use unfamiliar
jargon, or could be understood differently by different types of customers.
• Double-barreled questions that embed more than one item, such as "On a scale of 1 to 6,
please indicate how clear and useful the materials are." The customer may have one opinion
about clarity and another about usefulness, but is not given an opportunity to distinguish
between them.
• Questions that may upset some respondents. Questions that may seem intrusive, such as
household income, are best when worded neutrally (such as by asking whether the
customer's household income falls above or below a certain level) and placed at the end of
the survey.
• Questions on potentially sensitive or offensive matters, especially about cultural, ethnic,
gender, and socioeconomic considerations.
• Questions that do not elicit responses pointing to specific remedies.
If you don't ask the right questions in the right way, relatively soon after the service experience,
feedback will not be as useful as it might have been. Also, remember that to compare results over
time you should avoid making major changes to key questions, whether in wording, scale, or order
in the questionnaire.
There is no single correct scale to use. However:
• Whenever possible, the same scale should be used throughout a given questionnaire to help
ensure that different responses within a questionnaire can be compared validly.
• Different survey efforts within an organization should use the same scale. To this end, for
national consistency, we recommend that when using the core questions described above, you
consistently use the same scale of one to six (1-6) whenever possible.
Construct the Questionnaire
No matter what method you use to collect data, all questionnaires follow a similar format:
• Introduction — sets forth the purpose of the survey and guides the customer through the
questions
• Experience — establishes the customer's ability to answer various parts of the questionnaire
• Measurement — asks the person surveyed to characterize his or her experiences, needs, and
desires as your customer
• Customer information — gathers data that will be used to classify respondents
Survey questions should be presented in a logical sequence. Many survey experts believe that the
first question on the survey, more than any other, will determine whether your customer completes
or discards the questionnaire. Starting with a fairly simple question is a good idea because it
suggests that completing the survey will be neither difficult nor time-consuming. It is also advisable
to open with a fairly interesting question that engages the customer.
The next set of questions should focus on matters that the customer is most likely to judge as useful
or salient. This continues the process of drawing the customer in so that he or she becomes engaged
with thinking about the questions being asked and becomes invested in completing the survey.
Grouping questions together that share common themes makes sense because the customer then
focuses on that area of inquiry. Group questions with similar response options. For example,
questions that have yes/no responses should be together and questions that have scale responses
should be together.
The order of questions should also mirror the thought processes that customers are likely to follow.
For example, questions about experiences with on-site inspections should precede questions about
suggestions to improve those inspections.
The final set of questions should center on those most likely to be sensitive or offensive. These may
include questions about personal characteristics (race, age, income) and unsuitable behaviors.
The final page of the booklet should not have any survey questions. Instead, it should invite the
customer's comments or suggestions about anything raised in the survey or other issues and
concerns. It should also indicate the address for returning the questionnaire (in case the survey gets
separated from the reply envelope) and, when possible, a toll-free number set up exclusively to
receive survey inquiries.
Mail Surveys. The mail survey has to do everything you would do if you were with the customer.
It has to be visually appealing, have a pleasant tone, and be clear while being concise and to the
point. The survey instrument is under the direct control of the customer. Its physical look will affect
the customer's willingness to respond; the clarity of the instructions and questions will affect the
customer's ability to interpret their meaning.
Single-page questionnaires and comment cards should be attractive and easy to read. Longer
questionnaires should be printed in booklet form, on 11" x 17" paper that is folded in half and
stapled in the middle to produce a standard 8 1/2" x 11" page. The cover should be visually appealing
and use a logo or other graphic design to interest the customer; no questions should appear on the
cover. Use of color ink and high-quality paper will add only minor costs to the survey, but can
substantially improve response rates and reduce the cost of follow-up correspondence and
telephoning by staff or contractors. The cover should give the title of the survey activity and indicate
who is conducting it. Note that the Social Security Administration uses brightly colored paper, desk-top
publishing to allow more flexibility in design, and larger print to accommodate the needs of its
elderly and disabled customers.
The methods used to construct the questionnaire differ with the mode of data collection. The
sections below turn to focus groups and telephone surveys which, with mail surveys, are the most
frequently used forms of data collection in periodic surveys.
Focus Groups. As knowledge about customer surveys has expanded and entered the public domain,
more and more people claim to be conducting "focus groups." It is important to distinguish between
focus groups grounded in scientific procedures and an understanding of human interaction, and
casual discussions among people who share a common interest or concern. Both approaches provide
potentially useful information, but analysts should recognize the difference.
The key instrument for a focus group is the Moderator's Guide. This is a series of questions, probes,
and discussion topics arrayed in a logical order. The moderator uses the Guide to elicit opinions and
experiences from participants, and to ensure that discussions stay focused as much as possible on
the critical issues.
Typically, a Moderator's Guide is organized as follows:
□ Introductions by moderator and participants
□ Review of ground rules, such as
  • You have been asked here to offer your views and opinions; everyone's participation is
    important; the conversation does not need to flow through the moderator, although the
    moderator will manage the group
  • Speak one at a time (avoid side conversations)
  • Note video taping, audio taping, and observers (as applicable)
  • There are no right or wrong answers; consensus is not required
  • Okay to be critical; if you don't like something, say so
  • All answers are confidential, so speak your mind
□ Brief explanation of the focus group purpose and introduction of the topic
□ Definitions
□ Questions, probes, discussion topics
□ Closing and thanks
Telephone Surveys. Because customers have no questionnaire in front of them during a telephone
survey, visual appeal is not an issue. However, ordering, clarity, and conciseness of questions are
important. It's also important to place the call at appropriate times (e.g., not at dinner time).
Additionally, the interviewer acts as an intermediary between the customer and the questions posed.
With this in mind, the following principles apply to telephone surveys:
• The introduction the customer hears will probably determine whether the customer hangs up.
The introduction should be concise, state the purpose of the call, estimate its length, and assure
confidentiality. This is a sample:
Hello, my name is [fill in], and I'm with the XXX Agency [or YYY Consulting]. We're
conducting a survey of people who have received materials from the XXX Agency to learn
about their experiences and opinions. Let me assure you that this is not a sales call, and that
we will keep all information about you and your responses private. We will use the
information you provide only to help improve the XXX Agency's services. The survey will take
less than 15 minutes to complete and is purely voluntary. Is this a convenient time, or should
I call you back later?
• Because customers will rely on verbal cues and instructions, rather than written ones, questions
should have a limited number of responses (about three or four).
• Each question should be relatively short.
• Avoid questions that ask the customer to look up information or check with others.
• Avoid use of leading questions.
• Be sure to read the questions aloud to others to see if they make sense. Remember, what works
for the written word does not always work for the spoken.
• Complex skip patterns and branching are easily accommodated through computer-assisted
telephone interviewing (CATI) systems. Skip patterns occur when a particular answer to one
question means the respondent is not asked certain questions that would otherwise follow;
branching occurs when a particular answer to one question leads to a series of questions
customized to that particular answer. (A short illustration follows this list.)
• Rank-order questions are subject to error in telephone interviews in a way that they are not for
mail or in-person surveys. Rather than asking a customer to rank-order a list of, say, eight items,
it is better to ask that person questions in a series of pairs ("Which is more important to you, X
or Y?"). Alternatively, you can break up the list into a series of separate scaled items ("On a
scale of 1 to 6, where 1 is extremely important and 6 is not at all important, how do you feel
about X?")
• When changing subjects, telephone surveys should cue the customer with transitional language.
To accomplish this shift, use statements such as, "Now, I'd like to turn to your experiences
with . . . ."
• Instructions for the interviewer must be perfectly clear, and the same format should be used
throughout the survey. For example, interviewer instructions are typically enclosed by brackets,
in all capital letters.
• For a sizable telephone survey (of, say, more than 50 people), the use of CATI should be
considered. For large studies, CATI will be more cost-effective and produce more reliable
information.
Pretest
A pretest is a small-scale trial of the instrument and data collection methods. Conducting a pretest
is extremely important because the results will allow you to refine the instrument and methods
before the comprehensive data collection activity begins. It may seem that a pretest is unnecessary
if a survey has been carefully researched and designed. However, even the best plans cannot
anticipate all real-world circumstances.
Results from a pretest can tell the analyst:
► whether the flow of questions is logical and orderly
► whether questions seem relevant and appropriate to the customers
► whether customers were able to easily understand and respond to questions
► whether response categories are adequate
► whether questions truly reflect the issue that is intended to be measured
A pretest is helpful for cost projections, and also provides information about actual burden (that is,
the amount of time to complete the survey). This information is essential for Office of Management
& Budget (OMB) clearance (required for federal agencies, their contractors and cooperative
agreement partners performing surveys of direct benefit to the sponsoring agency). A pretest that
includes more than nine people who are not federal employees also requires OMB clearance.
One of the best ways to conduct a pretest is to randomly select individuals from the target group,
have them complete the survey, and then conduct a focus group session to review their opinions.
If, for example, you intend to conduct a telephone survey, you should recruit customers, bring them
to a central location where they can be interviewed by telephone, then meet as a group to go over
the draft questionnaire and their experiences in answering the questions. Pretest participants should
not be selected for the actual survey.
Contingency for Non-Response
Occasionally, regardless of planning, there will be times when response rates are simply too low for
you to make inferences and recommend action. In these cases, it is important to have a contingency
plan for non-response. The plan will need to include the additional steps you may need to boost the
level of responses. Some potential steps:
• Reminder Calls or Postcards — If you did not include these steps in the original survey plan,
you should consider them if the response is low. If you did include them in the original plan, it
may be advantageous for you to repeat them.
• Follow-Up Contact with Non-Respondents — You may need to make telephone calls or other
types of personal contact to non-respondents to identify the reasons for their non-response. Find
out if they understood the intent of the survey and the questions, if the questions were relevant,
and if there were specific factors that persuaded them not to respond.
• Improve Contact Information — It may be that many addresses or phone numbers of the target
group are incorrect or out-of-date. Updating this information would very likely improve the
response rate. Places to check include the Internet, credit bureaus, and business directories.
• Revise Survey Instrument—Some questions may make respondents feel uncomfortable, so you
may need to revise the instrument. NOTE: If you change the survey instrument significantly,
you may not be able to compare the results received before the change with those received after
the change. You will need to weigh carefully the trade-off between response rate and data validity.
Some of these steps may require a great deal of effort, time, and money. The group or individual
in charge of the survey will need to carefully consider the various options. If the response rate
remains too low, you may need to wait for a better time and customer base, or to rely on direct
conversations with customers.
OMB Clearance (EPA Only)
Under the Paperwork Reduction Act of 1995, the U.S. Office of Management and Budget must
approve any federally-sponsored collection of information that asks the same question of more than
nine non-federal respondents. Typically referred to as "OMB Clearance," the process is an exacting
one and demands strict adherence to OMB requirements. For example, if a customer feedback
activity is subject to OMB clearance, the cover of the data collection instrument must contain
standard language and the date on which the clearance expires.
EPA has obtained OMB approval of a generic Information Collection Request (ICR) to conduct
customer satisfaction work. Under this authority, the clearance process is streamlined and the time
for clearance is reduced from as long as 6 months to between 10 and 15 days. This generic ICR is
available only for strictly voluntary collections of opinions from customers who have experience
with the existing product or service.
Appendix C explains the streamlined process and provides several examples. You may request the
fact sheet as a separate electronic document from Patricia Bonner, Director of EPA's Customer
Service Program (Mail Code 2161). EPA offices may also send her survey instruments for quick
review to ensure that questions are properly worded to address customer satisfaction issues. In some
cases, another information collection request may be more appropriate to use than the generic
clearance mechanism.
Proposed EPA survey packages should be sent for final review to Barbara Willis of the Regulatory
Information Division at EPA Headquarters (Mail Code 2137). She will check the package for
compliance with OMB regulations regarding the generic clearance and review the burden placed on
the public, state officials, tribes, and other non-federal government customers. She will forward to
OMB all survey instruments and the required clearance package.
See Appendix C for more information about specific procedures to follow, forms to complete, and
general information about EPA Customer Feedback OMB Clearance. EPA personnel listed below
may be able to provide additional information:
Barbara Willis
202-260-9453
202-260-9322 (fax)
willis.barbara@epamail.epa.gov
Pat Bonner
202-260-0599
202-260-4968 (fax)
bonner.patricia@epamail.epa.gov
Model Survey Instruments
Appendix D contains three model surveys prepared by the workgroup that you may use as is, or
modify as desired. The three customer groups targeted are permit applicants, citizens, and
delegated authorities. Please note that while these are similar to previously developed permitting
surveys with OMB clearance, those approvals have expired; EPA offices must obtain new OMB
clearance to use them. These approvals would be available under the streamlined process described
above.
Additional Resources
The EPA Customer Service Program collects copies of survey instruments, reports, and resulting
plans. These materials are a resource for other EPA offices and staff who want to learn more about
their customers.
CONDUCT DATA COLLECTION
No matter what data collection methods you choose, you need adequate planning, training, quality
control, and supervisory practices to ensure that the information collected is:
• timely
• accurate
• efficient
• parsimonious
• reliable
• valid
• cogent
Focus Groups
A focus group project typically involves several steps, as discussed below.
To recruit participants, you will need to compose an effective recruitment script. Use this tool to
create dialogue between the person recruiting participants and the candidate, and to qualify potential
participants, considering factors such as age, socioeconomic status, and race/ethnicity. Then you
will invite individuals who meet requirements to participate in the group. You should recruit about
twelve qualified participants for each focus group. Allowing for last-minute changes of plans and
illness, the moderator should expect about nine to attend.
Several practices can maximize the efficiency of the recruitment process:
• Well before the group meets, mail a letter to participants that confirms the date, time, and
location of the group and states whether the respondents will be paid for participating. The letter
should thank the participants, give directions to the focus group facility (including a map), and
repeat the general objectives of the group.
• Also, you may decide to provide transportation to the focus group facility for those who need
it.
• On the day of the focus group (or the previous day, if the group is scheduled for the morning),
make a follow-up telephone call to the participants to remind them to attend.
Running a successful focus group also requires:
• arranging for focus group facilities
• providing video and audio taping equipment or people assigned as recorders
• providing a video hookup between the room where the focus group will meet and the room
where you (or others) will observe the focus group (if this is part of the design)
• coordinating participants' schedules
During the focus group, it is a good idea to use both a moderator and an assistant to conduct the
session. The moderator will pose questions to elicit candid opinions from the participants, keep the
discussion moving, cover all topics in the discussion guide, recognize when participants bring up
valuable new information, and steer the discussion in that direction if warranted. The assistant
supports the moderator as needed, takes notes, and handles logistics.
Mail Surveys
In setting up data collection procedures for a mail survey, a good database is important. The
database should contain, for each customer, a unique identification number, the customer's
characteristics relevant for the sample selection (such as geographic location, size of business, or
date of last contact with your organization), name and address, mail-out date(s), and the date the
response is received. This database is a tracking system.
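As one way to set up such a tracking system, the sketch below uses Python's built-in sqlite3
module. The field names are hypothetical stand-ins for the elements listed above; adapt them to
the customer characteristics your sample design actually uses.

    import sqlite3

    conn = sqlite3.connect("survey_tracking.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS customers (
            survey_id      INTEGER PRIMARY KEY,  -- unique identification number
            name           TEXT,
            address        TEXT,
            region         TEXT,                 -- sample-selection characteristic
            business_size  TEXT,                 -- sample-selection characteristic
            last_contact   TEXT,                 -- date of last contact
            wave1_mailed   TEXT,                 -- mail-out dates, one per wave
            wave2_mailed   TEXT,
            response_date  TEXT,                 -- date completed survey received
            undeliverable  INTEGER DEFAULT 0     -- flag for returned mail
        )
    """)
    conn.commit()

Recording each wave's mail-out date and every returned or undeliverable questionnaire in this
table is what lets you target later mailings at non-respondents only.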
A mail survey typically includes several mailings, each of which experts call a "wave." Send out
each wave on the same date:
• If you use an advance letter, mail all of them to customers on the same day.
• About a week later, mail the first questionnaire to all customers. Attach a label with the unique
identification number to each questionnaire. A cover letter should refer to the advance letter, ask
for cooperation, and (when possible) provide a toll-free number for customers with questions.
The package should also contain a prepaid, pre-addressed envelope for returning the completed
survey.
• As completed questionnaires come in, record them in the tracking system. As undeliverable
questionnaires come back (e.g., the customer has moved and left no forwarding address or the
address is incorrect), note this in the tracking system.
• About three weeks after mailing the first questionnaire, send out the second copy to all those
who have not yet responded. A cover letter should note the importance of the study and ask
customers to respond. The second copy of the questionnaire should be a different color from the
first version. This distinguishes between the two copies, sends a signal to customers, and aids
efforts to track responses.
The following often help improve response rates:
• the advance letter (if used) should be on official letterhead, with a signature or title that is
meaningful to the customer
• any signed correspondence should use a real signature, rather than a rubber stamp (scanning in
the signature can work well for many letters)
• use a "live" stamp (if possible), rather than metered or prepaid postage, to send out the survey
• use "address correction requested" to get information on customers whose surveys cannot be
delivered, then use the corrected information in the next mail-out
• use a large enough envelope so that the survey booklet does not have to be folded
• establish, when possible, a toll-free number for the duration of the data collection period, and
encourage customers to call with questions or comments
• allow respondents to fax back the completed survey
• if the budget permits, send out a third mailing via certified mail or using an overnight delivery
service (this is a last resort and may produce only minimal results)
Data from mail surveys must be key-entered or scanned. It is usually most cost-efficient to wait until
you have a sizable batch of completed surveys before beginning data entry procedures. Be sure to
do a periodic quality check to uncover data entry errors.
Telephone Surveys
Whether using computer-assisted telephone interviewing (CATI) technology or a traditional paper-
based technique, you must train telephone interviewers specifically on the study's questionnaire and
data collection procedures. You should cover the following topics during interviewer training
sessions:
• Background and scope of the survey. A project leader gives interviewers general information
about the background and scope of the project. She/he explains the types of information to be
collected and how that information will be used.
• Review of the questionnaire. A person responsible for data collection goes through the
questionnaire and leads an item-by-item discussion.
• Dealing with uncooperative respondents. Experienced staff lead discussions about ways to
start off the interview right, enlist cooperation, build rapport, and minimize break-offs and non-
responses. The interviewers will also review strategies for ways to manage challenging
situations.
• Answering customers' questions. Some frequent questions:
  ► How was I selected?
  ► What is the survey about?
  ► Who is conducting it?
  ► Who wants to know these answers?
  ► How will the information be used?
  ► How long will this take?
  ► Will I be identified?
  ► How do I know you are not some con?
• Quality control procedures. Project leaders review quality control procedures with
interviewers and, throughout the survey, monitor such matters as accurate posing of questions,
tone, courtesy, and responsiveness to customers' concerns. Telephone interviews for any sizable
study are usually conducted using CATI technology. CATI systems:
  ► greatly reduce the possibility of mistakes
  ► ensure accurate recording of the survey response
  ► instantly establish a tracking system and a record of each call
  ► provide significant improvements in quality control and efficiency, and allow complex
    branching and skip patterns
Electronic Feedback
As access to the Internet spreads, electronic communication will become an important method for
gathering customer input. You can easily collect feedback by asking Web page visitors a few
questions, inviting grantees to complete comment forms and submit them electronically, and through
on-line discussions in "chat rooms."
E-mail surveys are one of the fastest and least intrusive means of gathering customer feedback. Up
to 50 percent of the responses are received within 24 hours. They are also cheaper to conduct, since
you pay no interviewer, printing, or distribution costs. In addition, since e-mail is generally not
routed through others, the survey will reach the intended individual. However, respondents are not
anonymous.
Chapter 5
Responding to Feedback
This chapter offers methods to analyze and present the data received under a
feedback and measurement effort so that meaningful results can be determined
and acted upon.
Now that you've collected customer feedback, you need to understand the data and respond
appropriately to what the data is telling you. So:
• ANALYZE the data
• ACT on the results
ANALYZE THE DATA
Figure 4.2 in Chapter 4 described the steps you should take for establishing the purposes of customer
feedback. These steps included preparing an analysis plan and identifying the products your
feedback project would produce. However, you can modify your framework for analyzing findings
at any time. Your analysis plan should specify how your organization will analyze the survey
responses to produce the desired products. This plan ensures that your data will answer the
overarching questions being posed, and that you do not gather extraneous data. It also sets
expectations about the kinds of information that will result from customer feedback.
Your analysis plan should: (1) designate the dependent and independent variables and (2) identify
the unit of analysis. Dependent variables are the phenomena you are investigating. In this case,
the dependent variable probably will be the degree of customer satisfaction with a specific product
or service. Independent variables help explain the dependent variable data you collect. For example,
they may include differences in the product or service provided (e.g., customers were consistently
more satisfied with one service than with another), variety in the frequency and type of interaction,
or differences among customers (e.g., educators, students, local planners and small business owners).
The unit of analysis is what you are studying. In customer feedback surveys, the unit of analysis
will, in most cases, be the individual person served. When you use continuous feedback methods,
the unit of analysis will generally be the individual customer transaction.
Data Clean-up
Once you have set up the database and entered all data, you must review the data and prepare it for
analysis. This may involve several activities, such as deleting cases that left all answers blank on
a mail survey and coding open-ended responses into categories. Generally, this is the time to run
a set of frequencies to show the number of yes and no responses to each question and the total
number of responses of all kinds to each question. This quick analysis gives you a rough check on
the completeness and accuracy of your data (the total number of responses to any one question
cannot exceed the total number of respondents and will rarely differ greatly from the total responses
for each of the other questions). Frequencies flag out-of-range values (i.e., responses to one question
that are so different from responses to similar questions that you doubt their accuracy).
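If your responses are in a spreadsheet or database, this clean-up step is easy to script. The
sketch below uses the pandas library; the file and column names are hypothetical.

    import pandas as pd

    df = pd.read_csv("survey_responses.csv")  # hypothetical response file

    # Delete cases where the respondent left every question blank.
    question_cols = [c for c in df.columns if c.startswith("q")]
    df = df.dropna(subset=question_cols, how="all")

    # Run frequencies: counts of each response to each question,
    # including blanks, as a rough completeness and accuracy check.
    for col in question_cols:
        print(df[col].value_counts(dropna=False), "\n")

    # Flag out-of-range values on a 1-to-6 scale item (hypothetical column).
    scale = df["q3_satisfaction"]
    bad = df[scale.notna() & ~scale.between(1, 6)]
    print(len(bad), "out-of-range responses to q3_satisfaction")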
Types of Data and Analyses
Data from focus groups tend to be qualitative in nature. Analysts may tabulate data from focus
groups, such as "X percent of the participants expressed satisfaction." You should treat these
numbers cautiously and not generalize them to the full set of customers because focus groups usually
have a small number of participants who may have been recruited because they had specific
experiences or characteristics. You may review transcripts from focus groups to detect patterns and
inconsistencies or you may apply more rigorous content analysis.
For mail and telephone surveys, you can produce a variety of statistics:
• Descriptions of central tendencies, such as the mean (the average value), median (the middle
value - half are larger and half are smaller), or mode (the most frequently occurring value).
• Other descriptive statistics, such as frequencies, percentiles, and percentages. In customer
satisfaction surveys, the most commonly reported result is the percentage of respondents who
expressed satisfaction with a specific aspect of their interaction with your organization.
• Cross-tabulations that array independent variables against the dependent variable (for
example, type of customer displayed against a summary measure of customer satisfaction,
such as the percentage of customers of each type who reported being satisfied with the
product or service they received).
• Multi-variate statistics—such as factor analysis, analysis of variance, and regression
analysis—to determine the relationship between and among selected variables.
• Chi-square, z scores, t-tests, and other statistics to determine statistical significance.
• Time-series and trend analyses to determine long-term changes, seasonal, and cyclical
patterns in the data.
In most cases, focusing on bullet items 2 and 3 above will meet all your needs and expectations.
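Continuing the hypothetical pandas sketch from the data clean-up discussion, the code below
produces exactly those two items: the percentage of respondents expressing satisfaction, and a
cross-tabulation of customer type against a summary satisfaction measure. The column names
are again invented.

    import pandas as pd

    df = pd.read_csv("survey_responses.csv")
    opinions = df["q3_satisfaction"].dropna()

    # Central tendency and the most commonly reported result:
    # the percentage of respondents expressing satisfaction (a 5 or 6).
    print("mean score:", round(opinions.mean(), 1))
    print("percent satisfied:", round((opinions >= 5).mean() * 100))

    # Cross-tabulation: share of each customer type reporting satisfaction.
    df["satisfied"] = df["q3_satisfaction"] >= 5
    print(pd.crosstab(df["customer_type"], df["satisfied"], normalize="index"))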
Analysis: An Example
The following example demonstrates how you might analyze data from customer feedback. Suppose
your organization has distributed several thousand copies of the ABC Booklet, and you asked a
sample of 450 customers this question:
On a scale of 1 to 6, where 1 represents "highly dissatisfied" and 6 represents "highly
satisfied," how would you rate your satisfaction with the ABC booklet you received from our
organization?
The most straightforward way to analyze the responses is to provide the average score, which in this
case is 3.5. Although an average score is a very important piece of information, you can do a lot
more with the data from your customers. You might begin with a frequency distribution, where you
determine the number and percentage of respondents who gave each score between 1 and 6. Here
is one way to present that distribution:
Customer Satisfaction with the ABC Booklet (n = 450)

    Score                        Number    Percent of those
                                           expressing an opinion
    1 — highly dissatisfied        42            11
    2                              27             7
    3                             122            31
    4                             132            34
    5                              38             9
    6 — highly satisfied           32             8
    Total                         393           100

    don't remember receiving the ABC Booklet:  22 (5 percent of 450)
    don't know/no opinion:                     35 (8 percent of 450)
This example points out several items you need to consider.
First, of the 450 customers asked this question, 22 did not remember receiving the booklet and 35
said they had no opinion or did not know. We presented this information outside the table because
the analyst decided it was more important to focus attention on those who did have opinions. Thus,
the percentages of those with opinions are based on the 393 respondents who expressed them. If it
is important to determine the percentage of customers who don't remember or who have no opinion,
you would calculate those figures using a denominator of 450—the total number who were asked
the question. By including the sample size in the table ("n = 450"), readers can do these
calculations, should they be interested.
Second, the information presented may be at too great a level of detail for many audience members.
The difference between a "2" and a "3" rating, for example, may not be meaningful to them. Thus,
you may want to collapse the information into a smaller number of categories. One possibility is to
create three categories: dissatisfied, neutral, and satisfied. Scores of 1 to 2, 3 to 4, and 5 to 6 might
be collapsed to create three categories, reported as follows:
Customer Satisfaction with the ABC Booklet (n = 450)

    Rating           Number    Percent of those
                               expressing an opinion
    dissatisfied        69            18
    neutral            254            65
    satisfied           70            18
    Total              393           101*

    * Total is greater than 100 due to rounding

    don't remember receiving the ABC Booklet:  22 (5 percent of 450)
    don't know/no opinion:                     35 (8 percent of 450)
Note that the information can now be grasped much more immediately. It is reasonable to ask: If
you will eventually collapse responses, why give customers six possible answers? Research has
shown that people prefer to have a fairly wide range of responses because they don't like to be
"forced" into a Procrustean set of options. In addition, analysts may have different approaches to
collapsing categories.
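For the analyst, collapsing is a one-line recoding. The sketch below reproduces the collapsed
table with pandas; the hard-coded scores simply recreate the 393 opinions from the first table.

    import pandas as pd

    # Recreate the 393 recorded opinions from the frequency table above.
    scores = pd.Series([1]*42 + [2]*27 + [3]*122 + [4]*132 + [5]*38 + [6]*32)

    # Collapse 1-2, 3-4, and 5-6 into three labeled categories.
    rating = pd.cut(scores, bins=[0, 2, 4, 6],
                    labels=["dissatisfied", "neutral", "satisfied"])

    counts = rating.value_counts().reindex(["dissatisfied", "neutral", "satisfied"])
    percents = (counts / counts.sum() * 100).round().astype(int)
    print(pd.DataFrame({"Number": counts, "Percent": percents}))
    # Number: 69, 254, 70; Percent: 18, 65, 18 — matching the table above.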
The responsibility for making information manageable and understandable falls to the analyst. It
is the analyst's task to identify sensible ways to collapse categories and to present these decisions
to the audience (often as a footnote or technical appendix).
Third, as discussed in the next section, you should consider how to present the data. Although these
tables are simple and easy to interpret, compare them to a chart that summarizes the information
instantly.
[Chart: Customer Satisfaction with the ABC Booklet]
Fourth, the analysis you anticipated during the planning phase should guide whether you need to
do "subgroup analysis," which examines whether different kinds of customers have different kinds
of responses. Suppose you want to examine whether educators and representatives of advocacy
organizations have the same or different opinions about the ABC booklet. You could collapse
categories and sort respondents by their status as educators or advocates (to be sure, some
respondents may be both educators and advocates, but for simplicity, let's assume you had customers
indicate their primary role). You might present the findings this way:
[Pie charts: Selected Customers' Satisfaction with the ABC Booklet, showing the dissatisfied,
neutral, and satisfied shares for Educators and Advocates]

Selected Customers' Satisfaction with the ABC Booklet (n = 450)

    Rating           Educators            Advocates
                     Number   Percent     Number   Percent
    dissatisfied        27       17          17       16
    neutral             94       60          78       73
    satisfied           35       22          12       11
    Total              156       99*        107      100

    * Total is less than 100 due to rounding
This table provides important information, but you might want to present it using charts for the two
separate groups. You could also perform a statistical test to see if the two groups differ statistically
in their satisfaction with the ABC Booklet.
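One reasonable choice of test (among several) is a chi-square test of independence on the counts
in the table above. The sketch below uses the scipy library and is an illustration only.

    from scipy.stats import chi2_contingency

    # Counts from the table above: dissatisfied, neutral, satisfied.
    table = [[27, 94, 35],   # educators
             [17, 78, 12]]   # advocates
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
    # A p-value below 0.05 would suggest the two groups' satisfaction differs
    # by more than sampling error alone would explain.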
Fifth, consider the adequacy of your findings. Be sure you know how strong your findings are before
formulating recommendations. Many factors affect adequacy, such as the sample size, response rate,
and objectivity of the questions posed—as well as the way you will use the findings. With a sufficient
sample size, a good response rate (more than 75 percent for mail and telephone surveys, for
example), and questions that are not biased, you can use the information with confidence. OMB
requires an 80 percent response rate for survey results to be considered statistically valid. However,
even when fewer than 80 percent of those sampled return questionnaires, the information gathered
should still be used to improve customer service. Do not ignore the findings.
Let's say that in the above example, an additional 17 small business owners responded to your
survey. This small number may make the sampling error for this group quite high. Nevertheless,
pay attention to the results.
Even if they do not adequately represent the larger group of small business owners who were your
customers, you can still:
► Decide whether the findings are suggestive (rather than definitive). Should your office pay
attention to the concerns suggested by these findings?
► Compare the findings to other similar data. Are small business owners generally pleased or
displeased with other organization products?
► Compare the findings to information your organization gets from continuous feedback methods.
If you call small business owners after providing a service or product, what do they have to say
in those conversations? How do the continuous feedback findings compare with the results of
this survey?
► Discuss the findings with colleagues. Have they gotten similar reports? Is there a pattern
emerging about small business owners' level of satisfaction with your organization's products?
► Raise the findings with program managers, being careful to note that this might be an area that
requires attention to improve customers' satisfaction with your organization.
► Investigate the findings further. Should you use this as a starting point for more in-depth
discussions with small business owners? Should you conduct focus groups to see how
products could produce higher levels of satisfaction?
One final comment on this example. Your organization may have customer bases much smaller than
the thousands used in the example. If your customer base is quite small, you should decide whether
a statistical sample and quantitative survey is viable, since other techniques may be more suited for
your purposes. If you decide to go ahead with a quantitative survey, recognize that the analyses you
conduct should be carefully considered and constructed. If, for instance, you have 500 customers
and survey 100 of them, you can perform the same analyses as in the example above, but you should
examine the frequency distribution first. In an extreme case, let's assume that 10 of your 100
respondents gave a score of "1," 60 gave a score of "3," and 30 gave a score of "6." Although the
average score of 3.7 is close to the average of 3.5 in the example, the distribution of responses
is very different.
Sixth, you need to consider how past responses compare with the new responses, and ensure that you
can compare the most current results with those you expect from future questionnaires. This time-
series or trend analysis is vital to measuring change.
Driver Analysis
One useful approach in customer research is driver analysis, which identifies the service or services
that most significantly affect respondents' satisfaction. Driver analysis can help you prioritize
findings, which is important because customer feedback efforts often yield more information than
an organization can deal with. Also, managers often don't have enough resources to adequately
improve all areas receiving low satisfaction ratings. Driver analysis enables you to identify which
areas deserve the highest levels of attention.
As an example, let's assume that you are assessing three ways of providing information: by
telephone, by mail, and through published materials. By analyzing customer feedback, you can
identify which method results in the highest satisfaction rates. This is the delivery system that most
strongly "drives" satisfaction with the program's information services. When you identify the
method that significantly affects satisfaction, additional analysis can determine which factor within
that method most significantly affects satisfaction. Continuing with the example, let's assume that
you identify "information received by telephone" as the method producing highest satisfaction.
Digging down another level, you can use driver analysis to identify the factor that most affects the
respondent's opinion. Such factors may include: the accuracy of the information, the courtesy
shown by the employee, or the accessibility of the correct person to answer the question. Identifying
the driver in this way greatly enhances a manager's ability to set priorities for improvements.
You will use two primary analytical techniques, stated importance and derived importance, in driver
analysis:
Stated importance uses respondents' answers to specific questions regarding the importance of the
services. Simply ask the respondent to rank or rate items on a prescribed scale (as from 1 to 6)
according to their importance.
Derived importance uses multi-variate analysis to identify the most important factors affecting
satisfaction. In short, the overall level of satisfaction with the organization is compared to
satisfaction levels with particular products or services received. Driver analysis will identify the degree
to which variation in the overall level of satisfaction is explained by the variation in the product or
service received. Those individual products or services that most adequately explain the variation
in overall satisfaction are the drivers.
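As a sketch of derived importance, the code below uses ordinary least squares regression, one
common way to carry out this multi-variate analysis. Overall satisfaction is regressed on
satisfaction with each delivery method from the earlier example; the column names are invented.

    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("survey_responses.csv")
    services = ["phone_satisfaction", "mail_satisfaction", "pubs_satisfaction"]

    # Regress overall satisfaction on satisfaction with each service.
    X = sm.add_constant(df[services])
    model = sm.OLS(df["overall_satisfaction"], X, missing="drop").fit()

    # With all items on the same 1-to-6 scale, the largest coefficients
    # mark the services that most strongly "drive" overall satisfaction.
    print(model.params.drop("const").sort_values(ascending=False))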
Presenting the Data
Before presenting the data, you must remove all identifying information. To ensure credibility and
confidentiality, you should never present findings that could be used to identify a specific customer.
Typically, you would strip names, addresses, and telephone numbers from the analytical database
and keep them in a separate file that includes the unique identification number assigned during data
collection. If ever warranted, you can link the file with identifying information through the
identification numbers.
Most people want the "bottom line," presented as succinctly and clearly as possible. Therefore,
consider presenting your survey results in simple, straightforward ways to most audiences, saving
the mathematical details for an appendix or supplementary briefing. Many audience members want
a brief summary of the study's findings. Two pages of text, with key findings presented as bullets,
are usually sufficient.
Color bar graphs, pie charts and other illustrations can display your findings in a powerful way,
making it easy for your audience to grasp information.
Making Recommendations Based on the Data
Your customers may suggest many potential improvements or enhancements to your permitting
program. You probably should reduce the list to those that will most directly affect overall customer
satisfaction. Most organizations have limited staff and other resources, so practical considerations
must guide their choices. Usually, three to five targeted improvements are sufficient. Sometimes,
a single improvement can have a major impact.
You will have to consider your organization's own capacity for action. However, it is important to
do something. Otherwise, customers may feel that they wasted time providing input that you didn't
value.
Recognize too, that not everyone will be ready for the feedback results. Presenting them can raise
sensitive issues within your organization. Some co-workers may feel threatened by anything but
glowing results. Some may question the credibility of the findings, especially if they build logically
to recommendations for changes that affect them.
To get buy-in and use the results to influence change, results must be honest, and presented in a
constructive way that emphasizes the positives. Results, findings and recommendations should be
presented as opportunities for improvement. If the survey cannot be used to influence change or
improvement, it did not meet its objective, no matter how carefully the feedback activity was
conducted.
Presenting Recommendations - Using Graphics
Remember, at least 70 percent of your message is visual, so take advantage of how people absorb
information. Use the right visuals to communicate your message. Graphics can:
• emphasize main numerical facts
• uncover facts, trends, comparisons, and relationships that might be overlooked in text
or tables
• summarize, group, or segment (stratify) data
• add variety and interest to text, tables, and briefings
It's best to use pie charts to display components or parts of a whole. Use line charts to show
independent or cumulative values when:
• your data cover a long period of time and several series are compared on one chart
• you want to show change, not quantity
• you want to exhibit trends
• you want to show relationships
Do not use column charts for comparing several data sets, for showing data with many plottings, or
for showing many components. Finally, use picture graphs to demonstrate concepts or ideas.
ACT ON THE RESULTS
Is this the beginning or the end of the process?
When your efforts to collect customer data appear to be coming to a close, your real work may just
be starting! If this is the first time your organization has collected and analyzed customer data
systematically, you are probably discovering a whole new world of information. Depending on the
feedback method you have chosen, you may have created a baseline of information that characterizes
how your customers evaluate your products and services. You may wish to repeat the same process
again, to measure improvements against the baseline.
Customers will expect you to not only act on their feedback, but also to tell them what you have
done. At the same time, your organization will want to make the best use of the information it paid
to collect. Therefore, this next stage of the process is vitally important to the success of the final
phase: action planning and implementation.
How do you decide what to do with the feedback you receive?
Once you receive and analyze the feedback, most people will be anxious to know the results. How
did we do? What's the bottom line? Work hard to avoid giving answers that over-simplify the
feedback you have received. Depending on the methodology you used, you may have an average
score or rating to report. However, your information probably will provide a wealth of additional
insights about how your customers view the products and services they have received from your
organization.
How good is good enough?
That is a very hard question to answer. In fact, the only real way to answer it is to say "it depends."
For example, is an average score of 4.9 on a 6-point scale a good score? If last year's average score
was 2.5, you may have reason to celebrate. For one thing, your score nearly doubled. Even better,
it leaped from the dissatisfied range to the middle of the satisfied range. However, you may want
to look deeper: how do your customers rate others who provide similar services? Is that organization
getting ratings above or below the 4.9? And what about the distribution of ratings—are some
customers still rating you below a 3.0 while some are rating you above a 5.5? Are the more positive
ratings obscuring the negative ones? If so, you still may have customers out there who are sharply
critical of your products and services.
Setting acceptable goals for customer satisfaction ratings is a decision that each organization must
make for itself. Keep in mind, however, that leading service organizations:
• Target overall satisfaction scores at the upper end of the scale. On a 6-point scale, that should
be a 5.0, and in very competitive environs it may even be 5.5 or higher.
• View any less-than-satisfied ratings as being unacceptable because they indicate an opportunity
for dissatisfied customers to quickly convey their dissatisfaction by word of mouth. In the long
run, that can undermine your efforts to achieve a reputation for service and product excellence.
How do we know what to work on first?
Many organizations are overwhelmed with the amount of information they receive from customers.
This is especially true if a survey instrument is lengthy, or if the results contain many open-ended
comments and ideas. Decision makers, particularly at more senior executive levels, are likely to ask:
What do we do first? What changes will yield the greatest improvement in overall customer
satisfaction? What investments are worth making?
During the planning phase, you, your colleagues, and managers will have identified potential
methods and procedures for acting on the results of customer feedback activities.
Recovery. Be prepared to hear from customers who report a negative experience with your
organization. Set up a quick alert and response mechanism for any such case. (That may require
a special question asking whether the respondent is willing to be identified and contacted for follow-
up.) A quick response is a very positive way to convert a negative impression into a positive one.
Report. Even if the primary means for action is an oral briefing, having written documentation for
others to read and refer to is a good idea. It also creates a historical record for tracking changes over
time. Most people will want to see graphics and summary tables. Reports may include an executive
summary, a description of the study objectives and data collection methods, a comprehensive
investigation of findings (illustrated with graphs and tables), and conclusions and recommendations.
To keep the report at a reasonable length, you can present supplementary material in appendices.
Brief. Action planning workshops get management's attention. Gather decision makers together
and go over the findings with a verbal presentation. Software graphics packages can help make the
briefing interesting and informative. Conducting a dry run before your presentation helps with
timing, pacing, and finding out how well you can verbally communicate your written findings.
Hard-copy handouts give participants a tangible reminder of the information conveyed.
Prioritize. Try to package the information so that it leads the audience or reader to a series of
practical actions that fit logically. Acting on results may be more successful if several smaller action
plans are developed that contain three to five steps, rather than one large plan that may appear
overwhelming.
Communicate. In addition to briefing management, it is a good idea to communicate results to
others. A thank-you letter to focus group participants and customers should note what your
organization learned and what will be done with the findings. Your employees are often eager to
learn what customers have said, so results should be summarized and distributed widely.
Improve. There is no reason to elicit customer feedback unless you will use the information to
improve your organization's processes, services, or products. Recognize that some employees may
be excited about possible changes, while others may feel threatened and resistant. The best way to use customer
feedback may be to develop and define action plans. They are most likely to be successful when
"owners" of each issue:
• are identified and included
• help assess their activities and customers' feedback
• participate in review and strategy sessions
• have an opportunity to discuss concerns and shortcomings in nonthreatening, non-
confrontational environs.
Enhance. Sometimes customers are satisfied, but want the agency to do more. This must be seen
as an opportunity to enhance products or services.
Reward. Conducting customer feedback activities can be exciting and worthwhile; the process can
also be exhausting and threatening. Be certain that you recognize and reward the efforts of staff and
customers who made the activity possible. Rewards can take the form of public acknowledgment,
mention in performance reviews, and attention to findings.
Plan. Use customer feedback to see what worked well and what could be improved the next time
around. Identify aspects that facilitated or impeded achieving the project's objectives, including
aspects of planning, data collection, analysis, and development of findings.
Feed Results into the Strategic Plan and GPRA Goals and Planning Activities. Recent
management initiatives, including the President's directives on strategic planning, reinvention, and
customer service improvement, and the Government Performance and Results Act (GPRA), suggest
that customer data be included in performance data. To address these needs, quantitative data from
surveys and trend data accumulated from on-going feedback mechanisms may be most useful. You
can use focus groups and other qualitative data to clarify customers' views.
As government agencies go about "reinventing" programs to meet customers' needs and
expectations and to comply with the requirements of GPRA, managers will need to develop
customer-based performance goals and indicators to assess progress. The basic way to do this is to
get input directly from customers.
Chapter 6
Maintaining Good
Customer Service
This chapter of the Toolkit discusses some of the ways an organization can keep
its customer service program effective and viable.
Maintaining an effective and viable customer service program requires you to pay attention to
several factors. You can't achieve good customer service by implementing a one-time initiative.
It requires you to institutionalize the basic tenets and values of customer service throughout your
organization — and to regularly reinforce them. Some of the key elements that we recommend:
• Establish a customer-driven organizational culture. Make sure
that the benefits of customer service are understood and appreciated
throughout your organization, that all staff feel empowered to practice
and improve customer service, and that customer service
accountability, recognition, and award systems are in place. Your
organization should have a customer-oriented mission, vision, and
organizational values. Your organization's commitment needs to be
visible and obvious to your staff. You can accomplish this through
posters, placing customer service on meeting agendas, regular LAN or
e-mail messages, and similar channels. You may need to assess staff
attitudes and assumptions about customer service within your
organization and take appropriate corrective steps.
• Maintain management support and commitment. Management
must commit the necessary resources to customer service. The
organization's strategic planning documents, such as a permit
program's annual operating plan, should reflect a commitment to
customer service. Your organization's policies and practices should
be flexible, adaptive, and resilient.
• Know the processes and tools of customer service. Managers and
staff need the skills to implement customer service programs and
maintain or improve customer satisfaction. Below is a service skills
matrix, developed by and included here with the permission of Macro
International, Inc. of Calverton, Md., to define the desired skills.
Training programs can help your staff understand why customer
service is important and how to achieve customer satisfaction in your
organization. A major telecommunications company has reported
that customer satisfaction rose from 75 percent to 93 percent after its
staff received customer service training. EPA has developed a
training program that includes a 2-1/2 hour introductory workshop
and six additional modules tailored to specific situations. EPA staff
are conducting these workshops around the country. The
introductory workshop, entitled "Forging the Links," encourages
participants to unleash their creativity to enhance EPA customer
service. The six additional modules are entitled: "The Leader in Each
of Us;" "Moving From Conflict to Collaboration;" "Proactive
Listening;" "Clarifying Customer Expectations;" "Resolving
Customer Dissatisfaction;" and "Influencing for Win-Win
Outcomes." These modules help EPA staff develop and use the
personal skills they need to provide outstanding customer service.
You can obtain more information on this training by contacting
Patricia Bonner at EPA Headquarters, telephone number (202) 260-
0599.
• Establish accountability for customer satisfaction. Managers and
staff need to understand and be held accountable for implementing
customer service programs. The most direct way to accomplish this
is through performance standards and performance appraisals.
• Keep in touch with your customers. Regularly request, evaluate,
and respond to your customers' needs. This means obtaining
feedback through any of the approaches described earlier in this
toolkit. Respond as appropriate and keep them informed of
improvements designed to meet their needs. A Web page or
newsletter is an excellent vehicle for reaching your customers with
such information.
• Institute appropriate incentives, particularly recognition and/or
award programs. Staff excellence in customer service needs to be
promoted, recognized, and reinforced. Incentives can range from
letters of appreciation to cash award programs. We recommend the
book, "1001 Ways to Reward Employees," by Bob Nelson, as an
information source for creative incentive programs.
Service Skills Matrix

SKILLS, KNOWLEDGE & ABILITIES

HANDLING CUSTOMER INTERACTIONS
• Ability to define customers
• Knowledge of what customers expect
• Skills in providing responsive service to customers
• Knowledge of how behaviors trigger productive and unproductive customer interactions
• Skills in reducing stress in tense customer interactions
• Skills for resolving difficulties with angry customers
• Customer problem-solving skills
• Ability to respond to verbal attacks
• Skills in saying 'no' to customers in a productive way

COACHING & MOTIVATING CUSTOMER SERVICE BEHAVIORS
• Ability to create and communicate specific, measurable, customer-focused performance
• Ability to establish customer service performance standards for employees
• Skills in delivering effective, constructive feedback to employees about their customer service skills
• Ability to identify and evaluate performance problems, and to determine how and when to take corrective action
• Ability to reinforce desired staff behaviors with customers

MANAGING CUSTOMER SERVICE IMPROVEMENT
• Knowledge of what drives customer "loyalty" in the context of public organizations
• Skills in analyzing customer transactions (cycles of service)
• Ability to gather, analyze, and respond to customer feedback
• Ability to use customer feedback to improve service processes
• Ability to establish "service recoveries"
• Ability to determine appropriate levels of service delivery empowerment for subordinate employees

[In the original, the matrix also contains columns of check marks indicating to whom each skill applies; the rotated column headings, the check marks, and a final continuation page are illegible in this copy and are omitted here.]
Chapter 7
Customer Service
in Action
This chapter of the Toolkit offers some examples of what
environmental agencies are doing to improve customer satisfaction.
Introduction
This Toolkit presents an approach to developing world class customer service in permitting
programs. While we have described a process, and hopefully given you the tools to identify and
evaluate your customers' needs, we cannot prescribe the right solutions for your organization.
Nevertheless, we believe it is useful to offer some real-world examples of what organizations like
yours are doing. This chapter may inspire you as to what's possible, and give you some leads on
whom you may contact for further information.
This is by no means an exhaustive account of what is happening across the country and it may not
be completely up-to-date, but we think it reflects some of the more interesting and innovative efforts
we've learned about during our work.
Permit Assistance/Information Center
States offer a variety of assistance to permit applicants and citizens who want help with permit
information and application processing. The most common approach was establishing either a
permit information center or a designated coordinator to serve as a single point of access to the
regulatory agency.
Permit assistance offered by one or more states includes:
• Answering general phone inquiries and referring, as needed, to the appropriate program
permitting staff [Maryland, Missouri, North Carolina]
• Helping the permit applicant identify the types of permits, certifications, or approvals it needs
[North Carolina, Oklahoma, Vermont]
• Coordinating pre-application meetings between the permit applicant and the program offices
[Maryland, Missouri]
• Helping the permit applicant evaluate alternative permitting processes [Vermont]
• Serving as a repository for all permit information; distributing guidance materials, permit
assistance directory, and application forms for easy distribution to the public [North
Carolina]
• Establishing a tracking system that provides the public/applicant with fast information on the
status of permit applications [Maryland]
Stakeholders' Early Participation in the Permit Process
Citizens often believe agencies make major decisions on a permit long before they seek public input,
or that the public doesn't have enough time to properly evaluate a permit application.
They may raise issues at the draft permit stage that could have been more readily, and less
expensively, resolved earlier in the process. To respond to these concerns, some organizations are
soliciting earlier stakeholder involvement than typically occurs.
For example, EPA Region 2 implemented a Permit Complaint System Policy with enhanced
stakeholder communication procedures. The policy includes enhanced public access to permit
information, including Internet access, mailings to stakeholders on a permit-specific basis, early
identification of issues, and expedited complaint resolution earlier in the permit process than
prescribed under regulation.
Among states, Connecticut and Maine established an early public involvement program to seek input
from the permit applicant and public at the beginning of the permit process. Under Connecticut's
Early Public Participation Program, a public notice informs citizens that the agency has received
a completed permit application. Citizens may request a public meeting to discuss the proposed
application before any decision is made by the agency. In Maine's case, a team of project managers
meets with potential applicants at the early stages of the permitting process. This pre-application
meeting allows both parties to identify any applicable regulations, and even analyze and resolve
potential issues before the official application is filed with the regulatory agency.
While Connecticut and Maine seek all stakeholders' input on the permit application process, South
Dakota seeks input from the public and the regulated community on proposed new regulations. The
goal is to seek early input from all stakeholders for applicability, practicality and enforceability
before rules are promulgated.
Plain Language
The federal government is working to improve the clarity of regulatory language in its letters and
communications by using plain English. In a similar effort, South Dakota has a "Plain English"
program underway. Under this program, South Dakota's written and oral communications are made
easier to understand by avoiding the usual bureaucratic and technical jargon. Through this program, even
enforcement letters are written in simple language so that violators better understand what needs to
be done. South Dakota believes using plain English helps small business owners better understand
what the agency is doing, what it requires, and what services it offers. It may even boost
compliance.
Permit Processing/Issuance Timeframes
Currently, most permit applicants don't know when an agency will make a decision on their permit.
Such unpredictability may create the perception that the state is not responsive to the needs of the
business, which, in turn, creates frustration on both sides.
Consistent with one of EPA's aforementioned customer service standards for permitting, several
states have passed laws requiring their agencies to establish time frames for processing permit
applications and issuing certain final permits. Maine, Maryland, Hawaii, and Vermont have set
either a "guaranteed" or a targeted "goal" timeline on their permit decision processes. The "goal"
takes into account a number of factors, including complexity of the permitting process, extent of
public involvement, permit backlogs, or lack of program resources.
For most major permits, Maryland requires agencies to establish money-back guarantees of permit
application fees if they fail to issue permit decisions by the dates promised. In Hawaii, the permit
is automatically approved if the agency fails to act within the prescribed time.
Other states, including Oklahoma and Connecticut, have set permitting timeframes without
legislation. They have established permitting timeline goals for regulated communities. The states
say this approach has reduced permit processing timeframes and, consequently, improved customer
satisfaction.
To measure the agency's success in meeting the established timelines, Vermont's legislature
required the Vermont Department of Environmental Conservation to generate annual reports on its
performance and on ways to correct problems.
Maryland requires its program offices to generate success reports on adherence to permit processing
timeframes.
Customer Satisfaction Surveys
Several states are conducting surveys to assess their customers' satisfaction. Maine, Oklahoma, and
Delaware use mail surveys to assess satisfaction with their staff's courtesy, timeliness, knowledge,
and quality of service. Missouri, North Carolina and Vermont surveys measure satisfaction with the
permitting process. These agencies include a customer survey with each individual permit.
While the above-mentioned states primarily survey their external customers, Wisconsin also
conducts internal surveys of its permitting staff for continuous quality improvement of its processes.
They gather information on how and what data are used for permitting decisions. Wisconsin
believes that understanding staff needs will enable the state to focus on specific issues,
which should improve service to its external customers.
While most of these states conduct surveys annually, Vermont experienced poor response rates
and has decided to revamp its process. One option may be to conduct a more intensive effort
less frequently.
Incentive/Award/Reward Programs
EPA Region 6 has implemented an innovative and effective employee recognition system for
excellence in customer service. It includes initiatives such as Peer Recognition Certificates with
time-off awards, a Monthly Customer Service Awards program, the Regional Administrator's
Annual Customer Service Award, the Division Director Customer Service Awards, and a Mystery
Caller awards program to improve voice mail messages.
Delaware implemented monthly Customer Service Awards to recognize staff for providing
outstanding service to internal and external customers. Monthly award recipients are selected by
customer service representatives from each of the division's sections.
APPENDICES
A: Internal Control Procedures
B: Sampling
• The Basics
• More on Sample Size
C: OMB Clearance
D: Model Surveys
• Applicant Permitting
• Citizens Permitting
• Delegated Authority Permitting
Appendix A: EPA Internal Control Procedures
Internal Controls Ensure the Integrity of Survey Data and Results
The U.S. General Accounting Office and OMB have issued Internal Control Standards that apply
to all operations and administrative functions, thus ensuring the quality, reliability and integrity of
information we use for decision making. These standards and techniques, found in a number of laws
and requirements, apply to the collection, administration and reporting of customer survey results
and other forms of data used for performance measurement, verification, planning and management
action. Developing internal control procedures is good business practice and important in our role
as public servants. What constitutes an effective control system varies with your circumstances.
While controls may be as routine as second party reviews or limiting access to the data, they should
provide reasonable assurance that you will reliably and cost-effectively accomplish the survey's
objectives. Any audits, evaluations or verifications of your data will usually start with an
examination of internal control systems.
Some Specific Control Standards and Techniques
Management must provide reasonable assurance and a supportive attitude to protect assets
(information) against waste, loss, unauthorized use, and misapplication. Supporting documentation
should be clear and available for examination. Management controls should be logical, applicable,
reasonably complete, efficient and effective in accomplishing management objectives. Managers
and employees must have professional and personal integrity and are obligated to support the ethics
program and maintain a level of competence that allows them to accomplish their assigned duties.
Managers should ensure that appropriate authority, responsibility, and accountability are defined and
delegated and that an appropriate organizational structure is established to effectively carry out
program responsibilities. Managers should spread key duties and responsibilities in authorizing,
processing, recording, and reviewing official information to avoid concentration of power and to
discourage abuse of authority. Access to assets and records should be secured and limited to
authorized personnel, with custody assigned and maintained. All program operations, obligations
and costs should be in compliance with applicable laws and regulations; resources should be used
efficiently and duly authorized.
Appendix B(i): Sampling - The Basics
If you have decided to use a survey approach for obtaining customer feedback, you need to
determine what sample size to use. This Appendix first discusses sample sizes, sampling error, and
confidence intervals — all of which factor into decisions about the sample size. It then presents a
table for determining sample sizes — and tells you step by step how to use the table. You will also
learn how to randomly select customers for the survey.
What Kinds of Sample Sizes Are We Talking About?
Before we give specific guidelines on how to choose the sample size, we should set some general
expectations. National public opinion polls like the Gallup Poll and the Roper Poll typically use
sample sizes of 1,350 to 1,800. These polls use fairly large sample sizes to obtain a result that
represents the entire adult U.S. population, with a sampling error of plus or minus 2.5 to 3 percent.
Such small sampling errors are needed because the polls often address matters of national
importance. The decisions made, based in part on the results of these national polls, may be far-
reaching, long-lasting, and affect millions of people.
Your surveys to obtain customer feedback will be very different. The target group whose opinions
you need will be much smaller: the people who have come to one specific program within a limited
time (e.g., during one year) to request certain products or services. Your target group may be as
large as 500 to 1,000 people (few EPA programs directly serve more customers than that) or as small
as 50 people or fewer. Furthermore, although the decisions based on customer feedback are
important, they will probably not be far-reaching and long-lasting. In most cases, you will be asking
these types of questions:
• Should we change a process to reflect customer comments?
• Should we revise some of our written products?
• Should we provide a half day of customer feedback training to each staff member?
Even if we make the wrong decision based on a customer survey, we should discover our error soon
enough and be able to correct it without irreparable damage.
Based on these considerations, we can tolerate higher sampling errors than those associated with
national surveys like the Gallup poll. We can feel comfortable with sampling errors of 5 percent or
even 10 percent.
Additionally, you have relatively small target groups who were served by a specific program during
a defined time period. For this reason, you can use a much smaller sample size than is used in the
Gallup and Roper polls, which seek to accurately capture the opinions of millions of people.
Sampling Error
"Sampling error" is normally presented as a percentage with a plus or minus sign in front of it. For
example, the sampling error in one particular situation may be ± 3.5 percent. That means that the
true value of a given measure for the entire population—that is, the whole target group you are
getting feedback from—is the value obtained from your sample of customers, plus or minus 3.5
percent. If, for example, 62.4 percent of your sampled customers are satisfied, the actual percentage
of satisfied customers lies between 58.9 percent (62.4 percent - 3.5 percent) and 65.9 percent (62.4
percent + 3.5 percent).
But that is not quite true. In fact, there is no reasonable sample size through which we can be certain
that we will obtain the true value for all customers.
Why is that so?
The characteristics of the customers in the sample may occasionally be very different from the
characteristics of the customers not in the sample. In such circumstances, the true value for all
customers will be very different from the value obtained from the customers surveyed. The only
way to get around this statistical fact is to specify "how certain we want to be" that the true value
does, in fact, fall within a specific range. This degree of certainty is known as the "confidence level."
Confidence Level
The "confidence level" indicates how confident we want to be that the true value lies within a
specific range.
No one confidence level is the "right" one to use. There are many different possible confidence
levels, and only you can decide which confidence level is appropriate for your survey.
Many public opinion surveys use the 95 percent confidence level. That means that you can be 95
percent certain that the true value for all your customers will lie within a specific percentage band
(one equal to the size of the sampling error) around the result you obtain from your sample.
Another confidence level commonly used is the 90 percent confidence level. With a 90 percent
confidence level, you can be confident that 9 times out of 10, the true value falls within the value
obtained from your sample of customers, plus or minus the sampling error. Some analysts use 80
percent confidence intervals.
To decide what confidence level to use, you might want to think of a scale running from 80 to 95,
where 95 represents a high level of confidence and 80 represents a lower level of confidence. Decide
which confidence level to use based on the way you will use your results, how products and services
may be affected by the results, and the frequency with which you will collect additional information
to confirm or revise your findings.
Determining the Sample Size
Now that we have established appropriate expectations with regard to sampling error and sample
size, we will provide you with some guidance on selecting your sample size. Please recognize that
there are several factors to consider in determining the sample size. The information provided here
is intended to help get you started. Please refer as well to the additional information provided in
Appendix B(i) and Appendix B(ii). If you wish, you may also consult a statistician at EPA. A list
of EPA statisticians showing the EPA Office in which each of these statisticians is located can be
obtained from the Office of the Chief Statistician of EPA within EPA's Center for Environmental
Information and Statistics by calling 202-260-5244.
Number in Target Group | Sampling Error | Confidence Level | Sample Size
1000 | ±5% | 80% | 141
1000 | ±5% | 90% | 214
1000 | ±5% | 95% | 278
500 | ±5% | 80% | 124
500 | ±5% | 90% | 176
500 | ±5% | 95% | 218
200 | ±5% | 80% | 90
200 | ±5% | 90% | 116
200 | ±5% | 95% | 132
100 | ±5% | 80% | 62
100 | ±5% | 90% | 74
100 | ±5% | 95% | 80
50 | ±5% | 80% | 39
50 | ±5% | 90% | 43
50 | ±5% | 95% | 45
1000 | ±10% | 80% | 39
1000 | ±10% | 90% | 64
1000 | ±10% | 95% | 88
500 | ±10% | 80% | 38
500 | ±10% | 90% | 60
500 | ±10% | 95% | 81
200 | ±10% | 80% | 34
200 | ±10% | 90% | 51
200 | ±10% | 95% | 66
100 | ±10% | 80% | 29
100 | ±10% | 90% | 41
100 | ±10% | 95% | 50
50 | ±10% | 80% | 23
50 | ±10% | 90% | 29
50 | ±10% | 95% | 34
The above table is appropriate for simple random sampling (SRS), which is a sampling procedure
based on sampling without replacement. Simple random sampling is the most commonly used
sampling procedure. The table is based on the approximate formula given in Appendix B(ii). This
approximate formula includes an adjustment comparable to the finite population correction factor
for each combination of target population and sample size.
The precise formula that can be used instead of this approximate formula is also given in Appendix
B(ii). For a discussion of the finite population correction factor, see Appendix B(ii).
The procedure described below for randomly selecting a sample from the full list of customers
served in a specific period of time is simple random sampling and is therefore consistent with the
above table.
Here's How to Use the Above Table
The instructions that follow assume that your survey's unit of analysis will be the "person served."
(1) Identify the number of persons you have served in a given time period. Find that number in the
column labeled "Number in Target Group."
(2) Select the confidence level that you consider to be the most appropriate given the magnitude of
the decisions that will be made:
> If the decisions to be made using the survey results will be far-reaching, long-lasting, and/or
costly, use the 95 percent confidence level.
> If the decisions to be made using the survey results will be less far-reaching, less long-
lasting, or less costly, use the 90 percent confidence level.
> If the decisions to be made using the survey results will have more limited consequences,
mostly in the short term (e.g., in the next 6-12 months), and the cost implications of the
decisions will be moderate, you may use the 80 percent confidence level.
(3) Select an appropriate sampling error:
> For most EPA customer satisfaction surveys, a sampling error of ±10% should be acceptable.
> In cases where the decisions will require greater certainty and precision, a sampling error
of ±5% can be used instead.
(4) Read off the corresponding sample size.
> If the total number of customers falls between two of the values shown above in the column
"Number in Target Group," you can use interpolation to obtain an initial estimate of the
appropriate sample size (a sketch of this interpolation follows these steps).
> You can then use the approximate formula for determining sample size presented in
Appendix B(ii) to obtain a much better estimate of the sample size needed.
> You can stop here and use the approximate sample size obtained in the previous step. Or,
if you wish, you can use the trial and error approach described in Appendix B(ii), or even
better, the combined approach, also presented in Appendix B(ii), to calculate the precise
value for the sample size needed.
Here's How to Randomly Select a Sample of Customers Once You Have Determined
What Sample Size to Use
Once you have determined the appropriate sample size, you need to randomly select that number of
customers from the total number served in the time period of interest. Here is a procedure you can
use to make that random selection:
(1) Make a complete list of all the persons served in the period of interest for which you already
have (or can obtain, with a reasonable expenditure of effort) the needed contact information (i.e.,
name, plus address or phone number). Put the customers in alphabetical order. Eliminate
duplicates (so that each name appears only once).
(2) Starting at the top of the list, number each name. The result is the "master list" of customers
served. The number next to each name is that person's "customer number."
(3) Here is a computer-based approach for selecting a sample of customers from the master list:
(a) You will use spreadsheet software (such as Lotus 1-2-3 or Excel) to carry out the remaining
steps of this procedure. Before you begin using any particular spreadsheet software, make
sure it has a "randomize" function. Not all spreadsheets do.
(b) Enter the customer numbers in the second column of the spreadsheet, placing the numbers
in numerical order, one number per row. Leave the first column blank. You should have a
spreadsheet with the number of rows equal to the number of customers. In the second
column you should have the numbers "1", "2", "3", and so on (up to the total number of
customers served).
(c) Use the "randomize" function on the second column of the spreadsheet. The numbers in the
second column are now in random order.
(d) Enter numbers into the first column of each row. Enter the number "1" into this column in
the first row, enter "2" in the second row, and so on. These new numbers are the row labels.
(e) Mark off the number of rows corresponding to the sample size chosen above. For example,
if the sample size is 65, mark off the first 65 rows.
(f) The numbers appearing in the second column are the customer numbers for your sample.
Using the master list prepared in step (2) above, read off the names next to these customer
numbers. Place these names in a new list. These are the people you will contact to answer
the survey questions.
(g) Later, if you have a low response rate to your survey despite reasonable follow-up efforts,
go back to the spreadsheet and mark off the additional number of rows needed to reach the
desired sample size. The numbers appearing in the second column of these additional rows
are the customer numbers for the additional customers to be added to the sample.
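If you would rather script the selection than use a spreadsheet, here is a minimal Python sketch of the same procedure; the file name "customers.txt" and the sample size of 65 are illustrative assumptions:

    # The master list (de-duplicated and alphabetized, one name per line)
    # is assumed to be in "customers.txt"; the sample size of 65 stands in
    # for the value chosen from the table in this Appendix.
    import random

    with open("customers.txt") as f:
        master_list = [line.strip() for line in f if line.strip()]

    sample_size = 65
    shuffled = master_list[:]          # copy, so the master list keeps its order
    random.shuffle(shuffled)           # plays the role of the "randomize" function

    sample = shuffled[:sample_size]    # the customers to contact
    reserve = shuffled[sample_size:]   # draw on these later if the response rate
                                       # is low, as in step (3)(g) above
    for name in sample:
        print(name)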
Appendix B(ii): Sampling - More on Sample Size
The Effect of the Response Rate on Sample Size
The initial sample size is the number of customers you attempt to contact during the survey. The
final sample size is the actual number of customers for which responses were received. The response
rate is the percentage of customers included in the initial sample for which a usable response was
received. The response rate will vary depending on the kinds of customers being contacted, the kind
of product or service received, the kinds of questions asked in the survey, and so on.
The response rate is almost always less than 100 percent. The table in Appendix B(i) shows the
approximate sampling error associated with the final sample size. You should always include a
greater number of customers in the initial sample to achieve the desired final sample size.
For periodic surveys that ask the same or similar questions (in order to measure changes in customer
satisfaction) you can estimate response rates based on rates in previous surveys. If you are
conducting your survey for the first time, you could assume a response rate of, say, 85 percent when
determining how many customers to select for the initial sample. If fewer than 85 percent respond,
you can add more customers to the sample later, using the procedure described in step (3)(g) of
Appendix B(i).
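For example, here is a minimal Python sketch of that adjustment; the desired final sample of 65 and the assumed 85 percent response rate are illustrative:

    # Inflate the initial sample to allow for non-response.
    import math

    desired_final_sample = 65        # from the table in Appendix B(i)
    assumed_response_rate = 0.85     # an assumption; use past surveys if available

    initial_sample = math.ceil(desired_final_sample / assumed_response_rate)
    print(initial_sample)            # 77 customers to contact initially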
Note, however, that it's better to have a high response rate among a smaller number of customers
in the sample. The reason for this is non-response bias. Non-response bias is encountered if the
customers who did not respond to the survey are significantly different from those who did respond.
Non-response may be due to your inability to reach a specific customer (e.g., because his or her
telephone number has changed), or may be due to the customer's unwillingness to participate in the
survey at all, or to answer one or more questions in the survey. Because some customers will answer
some questions but not others, the degree of non-response will vary from question to question on the
survey questionnaire.
Non-response bias is one source of the overall bias in the survey results. Another source of bias is
using a poorly chosen or poorly constructed master list from which we randomly select the sample
of people to be surveyed. One of the best known examples of such bias is a classic national poll of
likely voters that was conducted by the Literary Digest in 1936, a few days before the presidential
election that year. The poll showed that Alf Landon would win the election. In fact, Franklin
Roosevelt won the election by a landslide. The reason for the erroneous polling results was bias.
The poll was conducted relying primarily on lists of telephone subscribers. In 1936, during the
Great Depression, many voters could not afford phone service, and it turned out that voters without
phones were much more likely to vote for Franklin Roosevelt than those who had them.
While this particular case gives an unusually dramatic example of bias, any level of non-response
(like any serious systematic errors in preparing the master list) poses potentially serious problems.
Furthermore, we won't know the magnitude of these problems because we probably won't know
how the non-respondents differ from those who did respond. After all, we were never able to gather
any information about them in our survey.
For this reason, you should keep non-response rates to the lowest achievable levels. You can do this
by following up with customers who didn't respond to your first attempt. Only after you've made
all reasonable follow-up efforts should you make up the shortfall by selecting additional customers.
An Adjustment Factor
The sampling errors shown in the table presented in Appendix B(i) are approximate. They do not
take into account a factor that, if considered, would result in lower values. We will now provide you
with an adjustment factor that you may use to account for this additional factor and, in so doing,
obtain a more precise value for the sampling error:
An adjustment factor to reflect that the sample result was greater than or less than 50 percent
One significant complication is that the sampling error varies markedly with the magnitude of the
sampling result obtained. By sampling result, we mean, for example, the percentage of customers
in the sample who say they are satisfied with the product or service they received. All else being
equal, the largest sampling error is associated with a degree of satisfaction of exactly 50 percent.
Any higher or lower level of satisfaction will result in a lower sampling error. The lowest sampling
error is associated with a level of satisfaction of 100 percent or 0 percent.
Here are the specific correction factors you should use for various sample results:
The Sample Result (e.g., the percentage of customers in the sample who said they were satisfied with the product or service received) | Correction Factor
99% | 0.20
98% | 0.28
95% | 0.44
90% | 0.60
80% | 0.80
70% | 0.92
60% | 0.98
50% | 1.00 (i.e., no correction)
40% | 0.98
30% | 0.92
20% | 0.80
10% | 0.60
5% | 0.44
2% | 0.28
1% | 0.20
Thus, if the survey shows that 90 percent of the customers sampled were satisfied with the product
or service they received, then the associated sampling error is obtained by multiplying 0.60 times
the sampling error shown in the standard tables (including the table provided in Appendix B(i)). So
if the sampling error shown in the table is ±10% for the sample size used, then the actual sampling
error is really only ±6% (= ±10% x 0.60).
If the sample result shows that 80% of the customers were satisfied, and the sampling error obtained
from the table was ±10%, the actual sampling error associated with that sample result would be
±8% (= ±10% x 0.80). These are rather significant adjustments.
Since the levels of satisfaction for most EPA products and services are likely to be in the range
of 80 to 90 percent or more, we advise you to take this adjustment factor into consideration: (1)
when calculating sample sizes, and (2) when determining the actual sampling error associated with
a given survey result.
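If you would rather compute the correction factor than read it from the table, the factors above are consistent with the expression 2 × √(p × q); this is our reading of the table, derived from the precise formula presented later in this Appendix, not a formula stated in the original. A minimal Python sketch:

    # Correction factor for a sample result p, expressed as a decimal fraction.
    # Equivalent to sqrt(p*q) for the observed result divided by sqrt(0.25),
    # the worst case at p = 50 percent.
    import math

    def result_correction_factor(p):
        return 2.0 * math.sqrt(p * (1.0 - p))

    table_error = 0.10                 # sampling error from the standard table
    p = 0.90                           # 90 percent of sampled customers satisfied
    adjusted = table_error * result_correction_factor(p)
    print(round(adjusted, 3))          # 0.06, i.e., +/-6 percent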
There is a major implication of this variation in sampling error. Since the sample result varies from
question to question, there is no one level of sampling error associated with the survey as a whole.
Instead, you will have a different sampling error for the response to each question. If
customer satisfaction is close to 50 percent on one question and close to 100 percent on another, the
sampling error for the second will be much lower than (possibly less than half of) the sampling error
for the first. The plus or minus figure given should therefore be different for each result reported
(i.e., it should be different for each question for which the response is shown). It is common
practice, however, for only one level of sampling error to be shown: this may either be (1) the largest
sampling error associated with any of the results reported or (2) the sampling error that would be
obtained in the worst possible case, i.e., if the result had been a level of satisfaction of 50%.
In presenting the results for customer satisfaction surveys conducted at EPA, those preparing the
results may either conform to this common practice or present question-specific sampling errors, as
they prefer. The latter can be accomplished by simply stating a plus or minus figure after each
sample result shown.
For example:
The Question for Which the Result Is Being Reported | The Degree of Satisfaction Reported
Question 1 | 83% ± 8%
Question 2 | 91% ± 6%
Question 3 | 78% ± 9%
Question 4 | 87% ± 8%
Question 5 | 94% ± 5%
Precise Formula for Calculating the Sampling Error
Here is an alternative approach for (1) estimating the sampling error that will occur in a planned
survey or (2) calculating the actual sampling error associated with a specific result in a completed
survey. Instead of obtaining values of the sampling error from a table (like that included in
Appendix B(i)) and then applying the necessary adjustment factor(s) presented above in the previous
section, simply calculate the sampling error directly from the precise formula.
Here is the precise formula for calculating the sampling error:
    sampling error = Z × √[ ((p × q) / n) × ((N − n) / (N − 1)) ]

where p = the sample result (i.e., the percentage of customers who
were satisfied with the product or service they received)
q = 1 − p
n = the sample size
N = the total number of customers served
Z = a constant coefficient (i.e., multiplier) associated with
the confidence level used. (This can be looked up in a
table in a statistics book.) Each of these constants is
known as the Z-score for that confidence level.

Here are the coefficients (i.e., Z-scores) for the three
confidence levels:
For the 95% confidence level, Z = 1.96
For the 90% confidence level, Z = 1.645
For the 80% confidence level, Z = 1.282
The precise formula presented above is based on the simple random sampling (SRS) procedure, in
which the sample is drawn using the sampling without replacement procedure. Simple random
sampling is the most commonly used sampling procedure for use in customer satisfaction surveys
conducted by EPA. It is the procedure reflected in the table and the discussion of sample selection
presented in Appendix B(i).
The above formula will give the exact size of the sampling error for any combination of number of
customers served, sample size, sample result and confidence level. Using this formula automatically
takes into account the differences in the magnitude of the sampling error due to differences in the
sample result (which was discussed in the previous section of this Appendix) and also automatically
includes the finite population correction factor, which is discussed in the next section of this
Appendix.
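For readers who would rather compute than look up, here is a minimal Python sketch of the precise formula; the example values (90 percent satisfied in a sample of 60 drawn from 200 customers served) are illustrative:

    # Precise sampling error for simple random sampling without replacement.
    import math

    Z_SCORES = {80: 1.282, 90: 1.645, 95: 1.96}   # from the previous section

    def sampling_error(p, n, N, confidence=95):
        """p = sample result (decimal), n = sample size, N = customers served."""
        z = Z_SCORES[confidence]
        return z * math.sqrt((p * (1 - p) / n) * ((N - n) / (N - 1)))

    print(round(sampling_error(0.90, 60, 200, confidence=90), 3))   # about 0.053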
Another Adjustment Factor
The table presented in Appendix B(i) reflects both the sample size (n) and the total number of
customers served (N) in determining the sampling error for any given confidence level you select.
If you consult reference books on statistics or sampling procedures, you may find tables that show
sampling errors for various sample sizes without factoring in the total number of customers served.
In such cases, to get the actual sampling error, you must multiply the sampling error given by an
additional factor known as the finite population correction factor.
The standard sampling techniques were developed for sampling a very large number of people. This
is true, for example, of surveys of national public opinion. The standard formulas and tables used
are therefore predicated on sampling from a very large pool, one that is, in practical terms, "as good
as infinite" and is treated by statisticians as though it were infinite.
When the number of people in the target group is much smaller, you should use a correction factor
(one known as the finite population correction factor) to correct for this circumstance. The finite
population correction factor can always be used (its use never gives an incorrect result), but you
generally don't need it if your sample size is less than about 10 percent of your target group.
If the sample size of customers to be contacted is greater than 10 percent of the total number of
customers served, then you should use the finite population correction factor in calculating the
sampling error. These circumstances will apply in a large percentage of customer satisfaction
surveys conducted by EPA. Luckily, the finite population correction factor always results in a lower
sampling error. Therefore, if you are satisfied with the sampling error calculated without using the
finite population correction factor, then there is no need to use it for that survey, unless you want to
know exactly how much lower the true sampling error is.
The finite population correction factor (FPCF) can be calculated using the following formula:
FPCF = the square root of (N-n)
(N-l)
where: N = the total number of customers served
n = the number of customers in the sample (i.e., the sample size)
The corrected sampling error is obtained by multiplying the finite population correction factor by
the sampling error obtained from a standard table that considered only sample size and confidence
level (and did not consider the size of the target group (i.e., the total population)). The adjustment
factor becomes more significant as you increase the percentage of the target population you sample.
See the following table:
Sample Size as a Percentage of the Target Population (n/N) | Approximate Value of the Finite Population Correction Factor
10% | 0.95
20% | 0.89
40% | 0.77
50% | 0.71
60% | 0.63
70% | 0.55
75% | 0.50
As can be seen from the above table, if the sample size is approximately 10 percent of the target
group, then the correction factor is approximately 0.95. Thus, when you sample 10 percent of your
customers, the sampling error will be reduced to 95 percent of what it otherwise would have been
(i.e., the sampling error would be reduced from ±10 percent to ±9.5 percent).
If you sample 20 percent of the target group, then the correction factor is approximately 0.89 —
reducing the sampling error to 89 percent of what it otherwise would have been (e.g., the sampling
error would be reduced from ±10 percent to ±8.9 percent).
If you sample 50 percent of the target group, then the adjustment factor will be approximately 0.71
— reducing the sampling error to 71 percent of what it otherwise would have been (e.g., the
sampling error would be reduced from ±10 percent to ±7.1 percent).
There is a general rule of thumb used by many statisticians: the finite population correction factor
should be applied whenever the sample size is 10 percent or more of the target group from which
the sample is to be drawn.
Note, however, that using the finite population correction factor will always give you a more
accurate value for the sampling error. Therefore, you should never be reluctant to use it. Under
certain circumstances (i.e., when the sample size is less than 10 percent of the target population), you
can disregard it (i.e., not apply it) without unduly affecting the estimated size of the sampling error.
Note also that the last element in the precise formula for calculating sampling error given in the
previous section of this Appendix is the finite population correction factor. By using that precise
formula, you can ensure that the finite population correction factor is automatically taken into
account when determining the size of the sampling error.
One final technical note: you may have noticed the second column in the table above is labeled the
approximate value of the finite population correction factor. The factors are not exact because we
used the following approximation to calculate the values shown in the second column:
Instead of using the precise formula for the finite population correction factor:

    FPCF = √[ (N − n) / (N − 1) ]

the following approximate formula was used:

    FPCF ≈ √[ (N − n) / N ]
For most values of N (the size of the target population), the difference between the true value
obtained from the precise formula and the approximate value obtained from the approximate formula
is very small.
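A minimal Python sketch comparing the two forms, for illustrative values of N and n:

    # Precise versus approximate finite population correction factor.
    import math

    def fpcf_precise(N, n):
        return math.sqrt((N - n) / (N - 1))

    def fpcf_approx(N, n):
        return math.sqrt((N - n) / N)    # equivalently sqrt(1 - n/N)

    N, n = 200, 60                       # 30 percent of the target group sampled
    print(round(fpcf_precise(N, n), 4))  # about 0.8388
    print(round(fpcf_approx(N, n), 4))   # about 0.8367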
A Trial and Error Procedure and An Approximate Formula for Determining Sample
Size
You can use the precise formula given above to determine the sampling error for any combination
of confidence level, number of customers served, and sample size. You can use that same formula
to determine sample size when you know the desired confidence level, the desired maximum level
of sampling error and the number of customers served. Unfortunately, you can't use it directly to
obtain the sample size in such situations. This is because the sample size (n) appears in two different
places in the equation, and the equation can't be rearranged to solve directly for sample size. Instead,
you must use the precise formula indirectly to determine the needed sample size. You can do so as
follows:
(1) Begin by guessing what the needed value of the sample size is. (Any guess will do as a
starting point, although the closer to the true value your guess turns out to be, the sooner you will
be finished.)
(2) Use that value of the sample size (i.e., your initial guess) to solve the precise formula
equation for sampling error.
(3)(a) If the value of sampling error you obtain from the formula is less than the maximum
sampling error you are willing to accept, then you should decrease your sample size and solve
the equation again.
(3)(b) If the value of sampling error you obtain from the formula is greater than the maximum
sampling error you are willing to accept, then you should increase your sample size and solve
the equation again.
(4) Continue steps (3)(a) and (3)(b) above until you arrive at the appropriate sample size for the
sampling error you are willing to accept.
The Approximate Formula for Determining Sample Size
The trial and error approach described above will always give you the best possible value for sample
size. However, the process for arriving at that value can be rather tedious. For this reason, you may
want to use an approximate formula that will give you a sampling error close to the one you would
get from the trial-and-error procedure. This approximate formula needs only to be solved once - no
repeated calculations are needed. However, you will, in most cases, obtain a larger sample size than
you would get from the trial and error procedure. That is, the approximate formula will give you a
larger sample size than you need to achieve your target sampling error.
Here is the approximate formula:
NxZ2
[4x(N-l)xE2] + [Z2]
Where:
n = sample size
N = number of customers served (from which the sample is to be drawn)
E = the maximum acceptable level of sampling error, expressed as a decimal
fraction (e.g., 5% = 0.05)
Z = the Z-score corresponding to the confidence level selected
(this can be obtained from most standard statistics references,
including most basic statistics textbooks). The Z-scores for the
80%, 90% and 95% confidence levels are given above in this Appendix in
conjunction with the precise formula.
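Here is a minimal Python sketch of the approximate formula; the example values were chosen to reproduce one row of the table in Appendix B(i):

    # Approximate sample size; rounding up is conservative.
    import math

    def approx_sample_size(N, E, Z):
        """N = customers served, E = max sampling error (decimal), Z = Z-score."""
        return math.ceil((N * Z**2) / (4 * (N - 1) * E**2 + Z**2))

    # 200 customers served, +/-10 percent error, 90 percent confidence.
    print(approx_sample_size(200, 0.10, 1.645))   # 51, matching the table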
A Combined Approach
You can, if you wish, make use of both the approximate formula and the trial and error approach
given above. Begin by using the approximate formula to get an approximate value for the sample
size. Then use this approximate value as your first guess for sample size in the trial and error
approach, and proceed from there with the trial and error approach as above.
This combined approach will allow you to come up with the lowest possible sample size with the
least amount of effort.
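Here is a minimal Python sketch of the combined approach; like the approximate formula, it assumes the worst-case sample result of 50 percent, and the example values are illustrative:

    # Combined approach: approximate formula first, then trial and error.
    import math

    def precise_error(n, N, Z, p=0.5):
        return Z * math.sqrt((p * (1 - p) / n) * ((N - n) / (N - 1)))

    def combined_sample_size(N, E, Z):
        # Step 1: initial guess from the approximate formula.
        n = math.ceil((N * Z**2) / (4 * (N - 1) * E**2 + Z**2))
        # Step 2: shrink n while a smaller sample still meets the error target...
        while n > 1 and precise_error(n - 1, N, Z) <= E:
            n -= 1
        # ...or grow it if the initial guess does not meet the target.
        while n < N and precise_error(n, N, Z) > E:
            n += 1
        return n

    print(combined_sample_size(200, 0.10, 1.645))   # 51 for this example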
Why is So Much Attention Given to Sample Size?
Much of Appendix B(i) and all of this Appendix have been devoted to considerations related to
sample size. Why, you might ask, do people spend so much time worrying about sample size?
If you use a larger sample size than you need, you will incur greater costs and impose a greater
burden on your customers than needed.
• The extra costs alone can be quite considerable. Let's consider a hypothetical telephone
survey. For each extra customer in the sample, you have to spend additional time conducting
the telephone interview, following up with those who did not answer the first time, following
up with those who did not initially agree to participate, and so on. It also means more data
to be recorded and analyzed.
• The extra burden on your customers' time also can be quite large when you add up the total
time spent by all customers surveyed.
If the sample size used turns out to be greater than was needed, then the extra cost incurred and the
extra burden imposed were wasted.
On the other hand, if you use too small a sample size, then you may have greater uncertainty about
the true satisfaction of your customers (because the sampling error was so large). You were
uncertain about their degree of satisfaction before (that's why you decided to conduct the survey)
and you may now find that your level of uncertainty afterward is not much reduced. In this case, the
whole cost of conducting the survey may prove to have been wasted.
Keep in mind that any wasted time and dollars could otherwise have been used to improve the
products or services you provide to your customers. So you want to choose the smallest possible
sample size that will give you a level of sampling error that you can live with. The results should
be precise enough to give you the degree of certainty you need about: (1) the true current level of
satisfaction of your customers and (2) how their degree of satisfaction has been changing over time
— as a result of your continuing efforts to improve products and services.
Appendix C: How to Obtain Clearance for EPA Customer Surveys
QUESTION 1: WHO CAN USE THE CUSTOMER SERVICE ICR?
According to OMB's Resource Manual for Customer Surveys (dated October 1993) and other
relevant guidance documents, the generic clearance shall be used for "strictly voluntary collections
of opinion information from clients that have experience with the program that is the subject of each
data collection."¹ The generic clearance may not be used:
• by regulatory agencies to survey regulated entities in any situation where a respondent may
perceive that a response will result in risks to his or her interests through potential penalties
or loss of benefits
• for collecting factual information (other than simple identifying information, where needed)
• for collecting data from the general public
QUESTION 2: HOW DO I OBTAIN APPROVAL FOR MY SURVEY, IF IT MEETS THE
CONDITIONS OUTLINED ABOVE?
Prior to initiating the survey, sponsoring programs must seek final approval from OMB. To obtain
approval, sponsoring programs must submit a clearance package consisting of a memorandum and
a copy of the survey instrument through the Regulatory Information Division (RID). The memorandum
should be addressed from the program or office director to the RID Desk Officer, Office of Policy
(Mail Code 2136). The memorandum must address the following:²
• Survey title, identification of survey originator (office, point of contact, phone number)
• Description and intended purpose of the survey as it relates to EPA customers
• Methodology and use of anticipated results
• Collection schedule, follow-up plans
• Costs and burden to the Agency and respondents, and the number of respondents
The memorandum will vary in length and detail, depending on the complexity of the survey. RID
staff will review each submission to ensure that it meets the requirements of the Paperwork
Reduction Act and any conditions of the generic approval. They may reject any proposed customer
survey that does not meet the criteria above. On methodological issues, the program shall solicit
¹EPA interprets this to preclude any EPA surveys conducted for fact finding for the purposes of regulatory
development or enforcement.
²For customer feedback forms and short questionnaires, a one-page memorandum should be sufficient.
Mail or telephone surveys making use of statistical sampling must include the statistician's name and phone
number, a brief description of the sampling design, precision requirements, and any pretests/pilot tests.
Agency statistical experts through EPA's Statistical Policy Branch or program office to make any
final determinations as to the statistical validity of the customer survey.
QUESTION 3: HOW LONG WILL THE PROCESS TAKE?
Following its review, RID will submit surveys and attached materials to OMB for a 10-working-day
review.
QUESTION 4: WHAT ELSE SHOULD I KNOW?
Sponsoring organizations within the EPA should maintain records according to each survey
schedule. In general, survey results should be maintained for three years or until after follow-up has
been performed.
Sponsoring offices are encouraged to provide feedback to RID on the success of their surveys
(through a memo or summary report). That information may be shared with fledgling customer
survey programs within other parts of the EPA. Feedback might include:
1) response rates, follow-up strategies, important lessons related to survey design and
implementation
2) general trends established from analysis of data
3) changes to the organization as a result of the survey
4) points of contact for questions about the survey
EXAMPLE OF BURDEN STATEMENT FOR FORMS OR SURVEY
The OMB Control Number and expiration date must appear on the front page of an OMB-approved
form or survey, or on the first screen viewed by the respondent for an on-line application. The rest
of the burden statement must be included somewhere on the form, questionnaire or other collection
of information, or in the instructions for such collection.
Explain the reasons the information is planned to be and/or has been collected, and the way such
information is planned to be and/or has been used to further the proper performance of the functions
of the agency. (See the requirements of Executive Order 12862 below for ideas.) State whether
responses are voluntary, required to obtain or retain a benefit (citing authority), or mandatory (citing
authority), and the nature and extent of confidentiality to be provided, if any (citing authority).
The following information must appear on the first page of the survey:
Form Approved OMB Control No. xxxx-xxxx. Approval expires MM/DD/YY.
Public reporting burden for this collection of information is estimated to average X minutes per
response, including the time for reviewing instructions, gathering information, and completing and
reviewing the collection of information. Send comments on the Agency's need for this information,
the accuracy of the provided burden estimates, and any suggestions for reducing the burden,
including the use of automated collection techniques to the Director, OPPE Regulatory Information
Division, United States Environmental Protection Agency (Mail Code 2137), 401 M Street, SW,
Washington, D.C. 20460; and to the Office of Information & Regulatory Affairs, Office of
Management & Budget, 725 17th Street, NW, Washington, D.C. 20503, Attention: Desk Officer for
EPA. Include the EPA ICR number and the OMB control number in any correspondence.
CUSTOMER SERVICE EXECUTIVE ORDER (12862) REQUIREMENTS
• Identify customers who are or should be receiving EPA service
• Survey customers for the kind/quality of services they want, their level of satisfaction with the
services, and whether standards are set for what matters to them
• Develop, post and implement standards
• Measure results against them
• Report annually to customers on progress toward achieving standards
• Integrate customer service standards, measurement and tracking with reinvention, planning,
budgeting (GPRA), operating plans, regulations and guidelines, training and personnel
classification and evaluation
• Recognize employees for meeting and exceeding customer service standards
• Benchmark customer service performance against the best in business
• Survey front-line employees on barriers to, and ideas for, matching the best in business
• Provide customers with choices in sources of service and methods
• Make information, services and complaints systems easily available
• Address customer complaints
• Develop cross-media (within agency) and cross-Agency programs to serve shared customer
groups
• Take advantage of new technology to better serve customers
C-3
-------
Following are examples of successful applications to OMB.
Sample #1
U.S. ENVIRONMENTAL PROTECTION AGENCY
REGION I
OFFICE OF ENVIRONMENTAL MEASUREMENT & EVALUATION
60 WESTVIEW STREET, LEXINGTON, MA 02173-3185
MEMORANDUM
DATE: June 12, 1997
SUBJECT: Request for OMB Approval of Customer Feedback Survey
FROM: Carol Wood, Manager
Ecosystems Assessment Branch
TO: Barbara Willis, RID Desk Officer
Regulatory Information Division
Office of Policy, Planning and Evaluation
EPA's Region 1, New England Office is preparing to distribute copies of the 1997 State of the New England Environment
Report. In order to learn whether the report is clear, easy to read, and provides information that our customers need, we are
preparing a customer feedback survey to include with the report. A copy of the survey form is attached.
Approximately 12,000 copies of the report will be distributed to EPA personnel, citizens, and local, state, and federal offices
outside the EPA Region 1 Office, with the survey form as an insert. We expect to receive approximately 3,000 responses.
Region I will create a database to track survey form responses. The information will be used to prepare a report which will
summarize the findings and make recommendations on how to improve the next State of the New England Environment
Report and our other outreach activities.
We will be receiving the reports from the Government Printing Office by June 24, and we hope to receive approval for the
customer survey form by then so that the forms are ready to include in the mailings.
If you have any questions or concerns about this request, please contact Diane Switzer at 617-860-4377 or me at 617-860-
4316.
Attachments
C-4
-------
Sample #1 - Continued
Request for Approval of Information Collection Activity
I. Background
The 1997 State of the New England Environment Report is an outreach tool, designed to inform the public on environmental
conditions, using indicators that have been selected in the National and Regional processes as we begin focusing more on
environmental results. We discuss topics of concern to the public and EPA, signs of improvement or degradation, and what
EPA and our partners are doing to improve conditions. The purpose of this outreach activity is to provide clear and concise
information to the public that meets their informational needs and allows them to better understand what we are doing to
improve and protect the environment and public health. The discussion topics are selected based upon regional priorities
and what we think the public wants to know.
II. Survey Purpose and Description
The State of the New England Environment Workgroup is planning to conduct a customer feedback survey in the form of a
"Reader's Evaluation Form," to evaluate whether we are providing the public with the information they want and need in a
way that is easy to read and use. The results will be used to improve the report's content, readability and use.
The evaluation consists of six questions. The first question evaluates the report's readability. The second question
evaluates how well we communicate information the public wants to know. The third question evaluates how the
information is useful to the reader. The fourth and fifth questions evaluate the information needs of the reader that we are
not meeting. The sixth question evaluates whether the report is something the public wants to receive.
III. Survey Methodology and Use of Results
The potential target audience for the evaluation forms consists of approximately 12,000 citizens, businesses and government
personnel (local, state and national). EPA Region I plans to distribute the forms as inserts to copies of the 1997 State of the
New England Environment Report. Through this effort, we anticipate that approximately 3,000 readers will respond. We
estimate that it will take a respondent approximately five minutes to complete an evaluation form.
EPA Region I will create a database to track evaluation form responses. The information will be used to prepare a report
which will summarize the findings and make recommendations to the State of the New England Environment Workgroup
and Regional managers on how to improve the readability, use and content of this report and other similar outreach
activities.
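To make the tracking step concrete, the sketch below shows one minimal way a database of evaluation-form responses might be tallied into a response rate and per-section averages. It is purely illustrative: the field names and summary logic are assumptions, not a description of Region I's actual system.

# Illustrative sketch only -- field names and logic are hypothetical.
from dataclasses import dataclass, field

@dataclass
class EvaluationResponse:
    easy_to_read: bool                                   # Question 1a (Yes/No)
    section_ratings: dict = field(default_factory=dict)  # Question 2: section -> rating 1..5

def summarize(responses, forms_distributed=12_000):
    """Return the response rate and the average rating per report section."""
    rate = len(responses) / forms_distributed
    totals, counts = {}, {}
    for r in responses:
        for section, score in r.section_ratings.items():
            totals[section] = totals.get(section, 0) + score
            counts[section] = counts.get(section, 0) + 1
    return rate, {s: totals[s] / counts[s] for s in totals}

# Two hypothetical responses out of 12,000 distributed forms:
sample = [EvaluationResponse(True, {"New England Ecosystems": 4}),
          EvaluationResponse(False, {"New England Ecosystems": 5})]
print(summarize(sample))   # (0.000166..., {'New England Ecosystems': 4.5})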
IV. Respondents' Burden
Number of Respondents: 3,000
Minutes per Response: 5 (5 minutes x 3,000 responses = 15,000 minutes = 250 hours)
Cost per Hour: $11.00*
Total Burden: 250 hours; $2,750
* Based on Federal/State/Local Employment & Payroll averages as presented in the 1996 Statistical Abstract of the United
States
V. Agency Burden
EPA Staff Time: 100 hours
Cost per Hour: $36.00
Total Burden: 100 hours; $3,600
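The burden figures in Sections IV and V follow from simple arithmetic. As a check (illustrative only; all figures are taken from the text above, with Python used purely as a calculator):

respondents = 3_000
minutes_per_response = 5
respondent_hours = respondents * minutes_per_response / 60   # 15,000 minutes = 250 hours
respondent_cost = respondent_hours * 11.00                   # 250 hours x $11.00 = $2,750
agency_cost = 100 * 36.00                                    # 100 hours x $36.00 = $3,600
print(respondent_hours, respondent_cost, agency_cost)        # 250.0 2750.0 3600.0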
C-5
-------
Sample #1 - Continued
YOUR COMMENTS, PLEASE...
We would like to know if the 1997 State of the New England Environment Report provides you with useful
information. Your responses to the following questions will help us meet your needs.
1. a. Is this report easy to read and understand?  Yes __  No __
b. What would make the report easier to read and use?
2. Please rate the report as to how informative the discussions within each of the sections are, with 1 = not informative and 5
= very informative.
Report Section                                Not Informative      Very Informative
a. New England Ecosystems                        1    2    3    4    5
b. Public Health & Our Environment               1    2    3    4    5
c. Economic Opportunities                        1    2    3    4    5
d. Recreational Resources                        1    2    3    4    5
e. Environmental Education & Outreach            1    2    3    4    5
f. New Directions                                1    2    3    4    5
3. In which areas is the report helpful to you?  __School  __Work  __Home
__Leisure Time  __Local Community  __General Knowledge  __Other
4. What topic(s) would you like to see in future reports?
5. We welcome any other comments you have about this report:
6. Would you like to receive a copy of future reports?  __Yes  __No
If "Yes," please provide your mailing address:
Name: ______________________
Organization: ______________________
Address: ______________________
Town/City: ____________  State: ____  Zip Code: ________  County: ____________
Please fold in half with EPA's return address on the outside, staple/tape shut, and mail.
Thank you for your response! Environmental Protection Agency Region I, New England Office
C-6
-------
Sample #1 - Continued
June 18, 1997
MEMORANDUM
SUBJECT: Review of Customer Satisfaction Questionnaire,
ICR No. 1711.01 (OMB 2090-0019)
FROM: Barbara N. Willis
Regulatory Information Division (2136)
TO: Chris Wolz
Natural Resources, OIRA
As a condition of OMB approval for the generic ICR, EPA agreed to submit each specific questionnaire covered by this
clearance to OMB for review. Therefore, I am forwarding for your review Region I's questionnaire for the "1997 State of
the New England Environment Report." The purpose of this survey is to evaluate whether Region I is providing the public
with the information they want and need in a way that is easy to read and use. The results will be used to improve the
report's content, readability and use.
Your comments and suggestions would be much appreciated. Thank you for your cooperation in this matter. If you have
any questions, please contact me at (202) 260-9453.
Attachments
Sample # 2
MEMORANDUM
SUBJECT: Submittal of Customer Satisfaction Survey for Expedited OMB Review
FROM: Michael B. Cook, Director
TO: Matt Leopard, RID Desk Officer
Office of Policy, Planning and Evaluation (2136)
Attached is a clearance package for an Office of Water Customer Satisfaction Survey as authorized under
Executive Order 12862, "Setting Customer Service Standards." This particular survey is designed to assess state
opinion on the current level of satisfaction and desired improvements to the Agency's Water grant process. This
voluntary survey focuses on three of the primary water quality management grants under the Clean Water Act:
those under Sections 106, 319, and 604(b).
We are requesting an expedited review for this survey instrument in order to comply with the rather tight
schedule mandated under the Executive Order. We anticipate initiating the survey no later than mid-November. I am
requesting your assistance in coordinating this review.
Please contact Jane Ephremides of my staff (260-5835), or Don Brady in the Office of Wetlands, Oceans and
Watersheds (260-7074) if you have any questions.
Attachment
cc: Bob Wayland
Abby Pirnie
Don Brady
C-7
-------
Sample #2 - continued
CLEARANCE INFORMATION COLLECTION REQUEST FOR THE 1994 CUSTOMER SATISFACTION SURVEY
Identification of Information Collection
Executive Order 12862 requires Agencies to "survey customers to determine the kind and quality of services they want
and their level of satisfaction with existing services." This survey will be conducted by customer satisfaction survey
professionals at the request of the Environmental Protection Agency's Office of Wastewater Management's Resource
Management and Evaluation Staff and the Office of Wetlands, Oceans and Watersheds' Assessment and Watershed
Protection Division. Tim Icke, Program Analyst, will be the point of contact at OWOW's Assessment and Watershed
Protection Division. He can be reached at (202) 260-2640.
Short Characterization of the Survey
The 1994 Customer Satisfaction Survey will solicit opinions from members of the grants community within the States.
The data collection is authorized by Executive Order 12862, "Setting Customer Service Standards," which
requires all federal executive departments and agencies that provide significant services directly to the public to carry
out the principles of the National Performance Review.
As a result of the Executive Order, the Office of Water is assessing its operations and procedures in order to provide
service to the public that matches or exceeds the best service available in the private sector. In the water program, there
are 11 sources of financial assistance available to assist the states and territories in achieving the mandates of the Clean
Water Act. The Customer Satisfaction Survey applies to three of these grants: those under Sections 106, 319, and
604(b) of the Clean Water Act. The survey is intended to determine the customers' current level of satisfaction and
desired improvements in these three grant programs. The questions focus on respondents' opinions and perceptions of
services rendered.
Collection Methodology
Using a pretested telephone questionnaire, EPA will survey State water quality managers, grants administration
managers, and the program managers for Sections 106, 319, and 604(b) in each of the 57 States and territories. EPA
estimates that the number of respondents will vary considerably from state to state. Using a conservative estimate, the
highest possible burden will be 5 respondents per state. The survey instrument is a 15-minute, voluntary telephone
questionnaire covering approximately 30 questions, four of which are open-ended. For those customers who
request an opportunity to respond at greater length, follow-up calls will be scheduled. Since these conversations are
voluntary, will vary greatly, and will affect a small percentage of respondents, the follow-up calls are not considered
burdens under the definition of the Information Collection Request.
This one-time-only information collection will involve approximately 285 voluntary respondents, of whom 70% are
anticipated to complete the telephone survey. The survey will require approximately 50 hours at a total cost to the
respondents of $1,448. Exhibit 1-a, Respondent Burden and Costs, provides a detailed description of the unit burden
and costs to respondents for this collection. The average burden per response is 15 minutes.
State grant program authorities are the only respondent group that will be affected by this survey, and by definition
they are not small governmental jurisdictions.
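As a cross-check, the respondent figures quoted above are internally consistent (a worked calculation only; the values are taken from the text, with Python used purely as a calculator):

contacted = 285                        # potential respondents (up to 5 per State/Territory)
responses = 200                        # approximately 70% of 285 (199.5, rounded to 200)
hours = responses * 15 / 60            # 15 minutes per call -> 50 hours
cost = hours * 28.96                   # GS-9, Step 10 hourly rate (see Exhibit 1-a)
print(hours, round(cost), round(cost / responses, 2))   # 50.0 1448 7.24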
Use of Survey Results
The results of the Customer Satisfaction Survey will be summarized in a report or accompanying briefing document.
EPA intends to use the information gathered by the survey to identify tools to improve the grants management process
by reducing paperwork, focusing on results while maintaining accountability, and responding to State environmental
priorities. The fundamental purpose of the customer satisfaction survey is to assess states' satisfaction with the grant
process and existing services. The survey will help EPA:
• Identify potential changes that states would like to see in the administrative management of the Sections 106,
319, and 604(b) grant programs;
• Assess the three grant programs' potential to enhance or retard state adoption of the watershed protection
approach; and
• Understand states' level of satisfaction or dissatisfaction with the three grant programs.
C-8
-------
Sample #2 - continued
Collection Schedule and Follow-up Plans
EPA seeks to minimize the amount of data collected through a one-time-only data gathering effort while at the same
time gathering enough information for an effective Customer Satisfaction Survey. The survey will help Headquarters
establish a benchmark to compare EPA's customer service performance with that of other federal agencies and private
sector businesses. In the future, this information will help to provide customers with choices in both the sources of
service and the means of delivery; to make information, services, and complaint systems easily accessible; and to
provide a means to address customer complaints.
Costs and Burden to the Agency and Respondents, and Number of Respondents
The total burden for EPA Regional and State grants program authorities is a function of the number of grants
managers, auditors, and program managers for Sections 106, 319, and 604(b) of the Clean Water Act in each state and
interstate agency, and of the number of open-ended questions. Exhibits 1-a and 1-b give detailed descriptions of the
individual reporting and recordkeeping requirements associated with the survey. Burden estimates are based on EPA
data from the Regions and Headquarters.
Exhibit 1-a summarizes the state respondents' burden and costs as respondents to the voluntary telephone survey. The
total respondent burden associated with the Customer Satisfaction Survey is 50 hours (200 respondents at 15 minutes
per call) and the total respondent cost is $1,448, which equates to a cost per respondent of $7.24. This estimate
assumes that the average hourly labor cost for state employees is $28.96, comparable to a GS-9, Step 10 salary.
The Agency's burden and cost arise from contacting appropriate regional program officers, and from reviewing,
analyzing, and processing the data. The total annual Agency burden associated with the Customer Satisfaction Survey
is 100 hours. This assumes that the average hourly labor cost of federal employees is $28.96, equal to a GS-9, Step 10
salary. The total annual Agency cost resulting from survey reporting and recordkeeping is $2,896.
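The $28.96 hourly rate used in both exhibits is derived in their footnotes. As a worked check (figures from the footnotes; Python used purely as a calculator):

salary = 37_651                        # GS-9, Step 10 annual salary
hourly = salary * 1.6 / 2_080          # benefits factor 1.6; 2,080 work hours per year
print(round(hourly, 2))                # 28.96
# Agency side: 60 review hours + 40 approval hours (Exhibit 1-b)
print(round(60 * hourly), round(40 * hourly), round(100 * hourly))   # 1738 1158 2896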
C-9
-------
Exhibit 1-a
Respondent Burden and Costs

                                 (A)           (B)          (C)              (D)           (E)            (F)
Regulation Requirements          Total #       #            Composite        Total Hours   Hourly         Total Cost
                                 Respondents 1 Responses 2  hrs/respondent   (B)*(C)       labor costs 3  (D)*(E)

Survey reporting requirements
(one-time only): respond to
telephone Customer
Satisfaction Survey              285           200          0.25             50            $28.96         $1,448

Total Burden and Costs for
all affected Respondents: 4                                                  50                           $1,448

1 Respondents include State grants managers, auditors, and program managers for
Sections 106, 319, and 604(b) in each of the 57 States and Territories.
2 Assumes approximately five calls to each State and Territory and a 70% response rate.
3 Hourly labor cost equals the annual salary for a GS-9, Step 10 ($37,651) times 1.6 (the
benefits multiplication factor listed in the June 1992 ICR Handbook), divided by 2,080
work hours per year.
4 Numbers may not add due to rounding.
C-10
-------
Exhibit 1-b
Agency Burden and Costs (As Users of Data)

                                 (A)           (B)          (C)              (D)           (E)            (F)
Regulation Requirements          Total no. of  No. of       Composite        Total hours   Hourly labor   Total Costs
                                 respondents   responses    hours per        (B)*(C)       cost 1         (D)*(E)
                                                            respondent

Recordkeeping requirements
(ongoing):
Agency reviews 1st draft
of report                        N/A           N/A          N/A              60            $28.96         $1,738
Agency approves final
draft of report                  N/A           N/A          N/A              40            $28.96         $1,158

Total Agency Burden and
Costs: 2                                                                     100                          $2,896

1 Hourly labor cost equals the annual salary for a GS-9, Step 10 ($37,651) times 1.6 (the benefits multiplication factor listed in the June 1992
ICR Handbook), divided by 2,080 work hours per year.
2 Numbers may not add due to rounding.
C-11
-------
Sample # 2 - continued
Draft - October 25, 1994
1994 CUSTOMER SATISFACTION SURVEY
HOW ARE WE DOING?
Grant Administration: Grant Administration Staff
Only a sample of the several versions of the surveys for a series of grants is presented.
Hello, may I please speak with (NAME FROM FACE SHEET)?
RESPONDENT AVAILABLE
RESPONDENT NOT AVAILABLE (SCHEDULE A CALL BACK)
Hello, my name is __________ of Abt Associates. We are conducting a customer
satisfaction study for the Environmental Protection Agency (EPA) about three Office of Water program management
grant programs. The study is voluntary and the answers that you give will be kept strictly confidential.
1) Are you familiar with the Section 106 grant program that funds the management of state water quality
programs?
YES ...................................................... 1
NO (SKIP TO QUESTION 14) .................. 2
DO/REF (SKIP TO QUESTION 14) ............ 3
2) I have some questions about the FY 94 grant cycle. How satisfied are you with the level of reporting
burden under Section 106? Are you...
Very satisfied (SKIP TO QUESTION 4) ...... 1
Satisfied (SKIP TO QUESTION 4) ............. 2
Dissatisfied ............................................. 3
Very dissatisfied ...................................... 4
3) What are the one or two most important changes you would like to see in Section 106 reporting
requirements?
4) Do you think EPA made good use of the FY 94 Section 106 data you reported to them?
Yes ......................................................... 1
No .......................................................... 2
5) Were any of the reports created by your state in complying with Section 106 requirements for FY 94 useful
for other state purposes such as state budgeting or accounting?
Yes ......................................................... 1
No .......................................................... 2
6) How satisfied are you with the opportunity offered by EPA to file Section 106 FY 94 reports electronically?
Are you...
Very satisfied (SKIP TO QUESTION 8) ...... 1
Satisfied (SKIP TO QUESTION 8) ............. 2
Dissatisfied ............................................. 3
Very dissatisfied ...................................... 4
C-12
-------
7) What are the one or two most important changes you would like to see in Section 106 electronic reporting
scope or procedures?
8) How satisfied were you with the length of time it took EPA to respond to requests for information on grant
administration and reporting for FY 94 Section 106 grants? Were you...
Very satisfied .......................................... 1
Satisfied .................................................. 2
Dissatisfied ............................................. 3
Very dissatisfied, or ................................. 4
Did you not make any requests for information ... 5
9) How satisfied are you with the length of time it took to obtain the EPA approvals required at various stages
of administration of FY 94 Section 106 grants? Were you...
Very satisfied .......................................... 1
Satisfied .................................................. 2
Dissatisfied ............................................. 3
Very dissatisfied, or ................................. 4
Did you not need any EPA approvals ......... 5
10) How satisfied are you with EPA's requirements for the close-out or rollover of the Section 106 grant fund?
Are you...
Very satisfied .......................................... 1
Satisfied .................................................. 2
Dissatisfied ............................................. 3
Very dissatisfied ...................................... 4
11) What are the one or two changes you would most like EPA to make in its Section 106
reporting requirements?
12) Overall, how satisfied are you with EPA's FY '95 Section 106 grant programs? Are you...
Very satisfied .......................................... 1
Satisfied .................................................. 2
Dissatisfied ............................................. 3
13)
-------
-------
OMB Control No. XXXX-XXX
U.S. ENVIRONMENTAL PROTECTION AGENCY
APPLICANT PERMITTING SURVEY
Introduction:
The attached survey is a follow-up to your recent permit application (or modification request)
with the US EPA. We are interested in improving our permitting system, and we recognize that
to do so, we need your frank input.
Problem areas that are identified will be followed up with focus groups to obtain more specific
insights. While participation in this survey is voluntary, we encourage you to take this
opportunity to help us improve the quality of our permitting processes.1
Instructions:
Please complete this survey by circling your answers and returning it in the postage-paid
envelope provided. Most of the questions in this survey ask that you rate some aspect of US
EPA's performance on the following scale: "1" means you are very dissatisfied, and "6"
means you are very satisfied. If a question does not apply to your interaction with US EPA,
please skip it and go on to the next question.
This survey is estimated to take an average of 10 minutes to complete.
1 The public reporting and recordkeeping burden for this collection of information is estimated to
average 15 minutes per response annually. Burden means the total time, effort, or financial resources
expended by persons to generate, maintain, retain, or disclose or provide information to or for a Federal
Agency. This includes the time needed to review instructions; develop, acquire, install, and utilize
technology and systems for the purposes of collecting, validating, and verifying information,
processing and maintaining information, and disclosing and providing information; adjust the existing
ways to comply with any previously applicable instructions and requirements; train personnel to be
able to respond to a collection of information; search data sources; complete and review the collection
of information; and transmit or otherwise disclose the information. An agency may not conduct or
sponsor, and a person is not required to respond to, a collection of information unless it displays a
currently valid OMB control number.
Send comments on the Agency's need for this information, the accuracy of the provided burden estimates, and
any suggested methods for minimizing respondent burden, including through the use of automated
collection techniques, to the Director, OPPE Regulatory Information Division, U.S. Environmental
Protection Agency (2136), 401 M Street, S.W., Washington, D.C. 20460. Include the OMB control number in
any correspondence. Do not send the completed survey to this address.
D-1
-------
U.S. ENVIRONMENTAL PROTECTION AGENCY
APPLICANT PERMITTING SURVEY
Very Dissatisfied                                             Very Satisfied
  1        2        3        4        5        6
1) Please identify to which permit media program your response to this survey applies: (you may check
more than one, as appropriate, or, if your responses would appreciably differ for different program areas,
please reproduce this form and submit one for each program area.)
a) Air ( )
b) Water ( )
c) Hazardous Waste ( )
d) Other ( ). Please specify:
2) Pre-application meeting/discussion: These questions cover the pre-application discussion or meeting
(i.e., a phone call or meeting) with US EPA to discuss the application process before you submitted the
application.
a) How satisfied are you with the availability of US EPA staff
   responding to your pre-application questions?                          1  2  3  4  5  6
b) How satisfied are you with the assistance provided by US EPA
   staff during the pre-application meeting/discussion?                   1  2  3  4  5  6
c) How satisfied are you with the usefulness of the information
   provided to you through the pre-application
   meeting/discussion?                                                    1  2  3  4  5  6
d) How satisfied are you that the US EPA staff provided
   suggestions or information to help minimize the overall
   permitting burden (e.g., using pollution prevention
   opportunities to reduce emissions or identifying future needs
   now to minimize the need for modifications later)?                     1  2  3  4  5  6
3) Permit Application Review and Determination: These questions cover the time period from the
submission of your permit application to US EPA's decision to either issue or deny the permit.
a) How satisfied are you with the clarity of the permit application
   forms?                                                                 1  2  3  4  5  6
b) How satisfied are you with the clarity of the accompanying
   instructions or guidance?                                              1  2  3  4  5  6
c) How satisfied are you with US EPA's timeliness in notifying
   you that your application was complete?                                1  2  3  4  5  6
D-2
-------
d) If you received any requests for supplemental
   information by the US EPA, how satisfied are you in the
   following areas?
   1) clarity                                                             1  2  3  4  5  6
   2) timeliness of US EPA's request                                      1  2  3  4  5  6
   3) relevance                                                           1  2  3  4  5  6
e) How satisfied are you with US EPA's timeliness in
   determining the issuance or denial of your permit?                     1  2  3  4  5  6
f) How satisfied are you with the clarity of the final permit
   decision?                                                              1  2  3  4  5  6
4) Overall satisfaction: These questions cover your overall level of satisfaction with the manner in which
the permit process was handled by US EPA.
a) Overall, how satisfied are you with the way the permitting
   process was managed?                                                   1  2  3  4  5  6
b) Overall, how satisfied are you that the US EPA permitting
   staff treated you in a courteous manner?                               1  2  3  4  5  6
c) Overall, how satisfied are you with the quality and
   timeliness of the communications you have received from
   US EPA?                                                                1  2  3  4  5  6
d) Overall, how satisfied are you that the US EPA permitting
   staff respond to your needs for guidance, information, or
   technical support during the permit process?                           1  2  3  4  5  6
5) Would you like someone with the US EPA to contact you regarding this survey?
__ Yes   Please complete question 6
__ No    Please complete question 6 (optional). Your responses will be used by US EPA for
informational purposes only.
6) Please provide the following information:
Name: ______________________
Organization: ______________________
Address: ______________________
Town/City: ____________  State: ____  Zip Code: ________
Telephone Number: (____) ____________
7) Please provide any other comments you would like us to consider:
Thank you for taking the time to complete this survey.
D-3
-------
OMB Control No. XXXX-XXX
U.S. ENVIRONMENTAL PROTECTION AGENCY
CITIZENS PERMITTING SURVEY
Introduction:
The attached survey is a follow-up to your recent participation in a US EPA permitting action, through
public comment or attendance at a US EPA hearing or meeting. We are interested in improving our
permitting processes, and we recognize that to do so, we need your frank input.
Problem areas that are identified through the use of this survey will be followed up with focus groups
to obtain more specific insights. While participation in this survey is voluntary, we encourage you to
take this opportunity to help us improve the quality of our permitting process.1
Instructions:
Please complete this survey by circling your answers and returning it in the postage-paid envelope
provided. Most of the questions in this survey ask that you rate some aspect of US EPA's performance
on the following scale: "1" means you are very dissatisfied, and "6" means you are very satisfied.
If a question is not applicable to your interaction with EPA, please skip it and go on to the next question.
This survey is estimated to take an average of 10 minutes to complete.
1 The public reporting and recordkeeping burden for this collection of information is estimated to
average 10 minutes per response annually. Burden means the total time, effort, or financial resources
expended by persons to generate, maintain, retain, or disclose or provide information to or for a Federal
Agency. This includes the time needed to review instructions; develop, acquire, install, and utilize
technology and systems for the purposes of collecting, validating, and verifying information,
processing and maintaining information, and disclosing and providing information; adjust the existing
ways to comply with any previously applicable instructions and requirements; train personnel to be
able to respond to a collection of information; search data sources; complete and review the collection
of information; and transmit or otherwise disclose the information. An agency may not conduct or
sponsor, and a person is not required to respond to, a collection of information unless it displays a
currently valid OMB control number.
Send comments on the Agency's need for this information, the accuracy of the provided burden estimates, and
any suggested methods for minimizing respondent burden, including through the use of automated
collection techniques, to the Director, OPPE Regulatory Information Division, U.S. Environmental Protection
Agency (2136), 401 M Street, S.W., Washington, D.C. 20460. Include the OMB control number in any
correspondence. Do not send the completed survey to this address.
D-4
-------
U.S. ENVIRONMENTAL PROTECTION AGENCY
CITIZENS PERMITTING SURVEY
Very Dissatisfied                                             Very Satisfied
  1        2        3        4        5        6
1) Please identify to which permit media program your response to this survey applies: (you may check
more than one, as appropriate, or, if your responses would appreciably differ for different program areas,
please reproduce this form and submit one for each program area.)
a) Air ( )
b) Water ( )
c) Hazardous Waste ( )
d) Other ( ). Please specify:
2) How satisfied are you with US EPA's notification to provide the
   public with the opportunity to comment in a timely manner?            1  2  3  4  5  6
3) How satisfied are you with US EPA's method of notification to
   provide the public with the opportunity to comment in a
   convenient manner (e.g., by newspaper, radio, direct mailing or
   other preferred means)?                                               1  2  3  4  5  6
4) How satisfied are you that US EPA provided clear and concise
   information about the permit process and application?                 1  2  3  4  5  6
5) How satisfied are you with the suitability of the public
   hearing/meeting location?                                             1  2  3  4  5  6
6) How satisfied are you with the suitability of the public
   hearing/meeting time?                                                 1  2  3  4  5  6
7) How satisfied are you that US EPA provided you opportunities
   to present your comments?                                             1  2  3  4  5  6
8) How satisfied are you that US EPA presented the criteria that
   will be used for the permit decision?                                 1  2  3  4  5  6
9) How satisfied are you with the quality of US EPA's response to
   your written or oral comment?                                         1  2  3  4  5  6
10) Overall satisfaction: These questions cover your overall level of satisfaction with the manner in which
the permit process was handled by US EPA.
a) Overall, how satisfied are you with the way the permitting
   process was managed?                                                  1  2  3  4  5  6
b) Overall, how satisfied are you that the US EPA permitting
   staff treated you in a courteous manner?                              1  2  3  4  5  6
c) Overall, how satisfied are you with the quality and
   timeliness of the communications you have received from
   US EPA?                                                               1  2  3  4  5  6
D-5
SAMPLE - Not Approved by OMB
-------
d) Overall, how satisfied are you that the US EPA permitting
   staff respond to your needs for guidance, information, or
   technical support during the permit process?                          1  2  3  4  5  6
11) Would you like someone with the US EPA to contact you regarding this survey?
__ Yes   Please complete question 12
__ No    Please complete question 12 (optional). Your responses will be used by US EPA for
informational purposes only.
12) Please provide the following information:
Name: ______________________
Organization: ______________________
Address: ______________________
Town/City: ____________  State: ____  Zip Code: ________
Telephone Number: (____) ____________
13) Please provide any other comments you would like us to consider, including changes to the
permitting process:
14) Please put an "x" next to the line in each category that best describes you:
__ I live near the facility/facilities requesting the permit(s).
__ I am employed by the facility requesting the permit.
__ I am employed by the local or state government in which the facility is located.
__ I am a member of a local environmental or community group.
__ I am a member of a regional or national environmental or community organization.
__ Other (Please specify) ____________
Thank you for taking the time to complete this survey.
D-6
SAMPLE - Not Approved by OMB
-------
OMB Control No. xxxx-xxxx
U.S. ENVIRONMENTAL PROTECTION AGENCY
DELEGATED AUTHORITY PERMITTING SURVEY
Introduction:
The attached survey is being distributed to state, tribal and/or local governments that have been
delegated or authorized to administer a permitting program under federal statutes administered by US
EPA. We are interested in improving our permitting delegation/authorization and evaluation processes,
and we recognize that to do so, we need your frank input.
Problem areas that are identified will be followed up with focus groups so that we may obtain more
specific insights. Focus groups will include state and regional personnel and other stakeholders as
appropriate. While participation in this survey is voluntary, we encourage you to take this opportunity
to help us work better for all of our stakeholders.1
Instructions:
Please complete this survey by circling your answers and returning it in the postage-paid envelope
provided. Most of the questions in this survey ask that you rate some aspect of US EPA's performance
on the following scale: "1" means you are very dissatisfied, and "6" means you are very satisfied.
If a question does not apply to your interaction with US EPA, please skip it and go on to the next
question.
This survey is estimated to take an average of 15 minutes to complete.
1 The public reporting and recordkeeping burden for this collection of information is estimated to
average 10 minutes per response annually. Burden means the total time, effort, or financial resources
expended by persons to generate, maintain, retain, or disclose or provide information to or for a Federal
Agency. This includes the time needed to review instructions; develop, acquire, install, and utilize
technology and systems for the purposes of collecting, validating, and verifying information,
processing and maintaining information, and disclosing and providing information; adjust the existing
ways to comply with any previously applicable instructions and requirements; train personnel to be
able to respond to a collection of information; search data sources; complete and review the collection
of information; and transmit or otherwise disclose the information. An agency may not conduct or
sponsor, and a person is not required to respond to, a collection of information unless it displays a
currently valid OMB control number.
Send comments on the Agency's need for this information, the accuracy of the provided burden estimates, and
any suggested methods for minimizing respondent burden, including through the use of automated collection
techniques, to the Director, OPPE Regulatory Information Division, U.S. Environmental Protection Agency
(2136), 401 M Street, S.W., Washington, D.C. 20460. Include the OMB control number in any correspondence.
Do not send the completed survey to this address.
D-7
-------
DELEGATED AUTHORITY SURVEY
Very Dissatisfied                                             Very Satisfied
  1        2        3        4        5        6
1) Please identify to which permit media program your response to this survey applies: (you
may check more than one, as appropriate, or, if your responses would appreciably differ for
different program areas, please reproduce this form and submit one for each program area.)
a) Air ( )
b) Water ( )
c) Hazardous Waste ( )
d) Other ( ). Please specify:
2) Planning & Priority-Setting: These questions cover your opinion of our planning and
priority-setting prior to finalizing each annual permits program work plan or performance
agreement. Please rate our performance on the following criteria:
a) How satisfied are you with US EPA conducting joint
   planning and priority-setting with you as an equal partner:           1  2  3  4  5  6
b) How satisfied are you with US EPA addressing
   state/tribal/local needs and specific circumstances in final
   agreements:                                                           1  2  3  4  5  6
c) How satisfied are you with US EPA identifying areas where
   Federal technical assistance is needed:                               1  2  3  4  5  6
d) How satisfied are you with US EPA's developing clear
   program elements (i.e., those that are mandatory for the
   delegated/authorized program):                                        1  2  3  4  5  6
e) How satisfied are you with US EPA's developing a permit
   delegation/authorization that is aligned with your state, tribal
   or local environmental strategic plan:                                1  2  3  4  5  6
3) Technical Assistance: These questions ask for your opinion of our performance in providing
technical assistance to you, based on the following criteria:
a) How satisfied are you with the quality of the training we
   have provided to you:                                                 1  2  3  4  5  6
b) How satisfied are you with the accuracy of our answers to
   your technical questions:                                             1  2  3  4  5  6
c) How satisfied are you with the timeliness of our answers to
   your technical questions:                                             1  2  3  4  5  6
d) How satisfied are you with our willingness to work with you
   creatively to solve difficult permitting problems:                    1  2  3  4  5  6
D-8
SAMPLE - Not Approved by OMB
-------
4) Oversight: These questions ask for your opinion of the Federal oversight conducted on
individual permits:
a) How satisfied are you that US EPA is conducting oversight
   consistent with the agreement, if an agreement is in place:           1  2  3  4  5  6
b) How satisfied are you that US EPA is providing an
   appropriate amount of oversight:                                      1  2  3  4  5  6
c) How satisfied are you with the clarity of the comments that
   were made:                                                            1  2  3  4  5  6
d) How satisfied are you with the appropriateness of the
   comments:                                                             1  2  3  4  5  6
e) How satisfied are you with the timeliness of the comments:            1  2  3  4  5  6
f) How satisfied are you with US EPA's ability to provide
   comments which were helpful in maintaining or improving
   the quality of permit decisions:                                      1  2  3  4  5  6
g) How satisfied are you with the resolution of issues that we
   have raised:                                                          1  2  3  4  5  6
h) If you were to recommend an improvement in the agreement, it would be that:
__ We should review fewer permits on a real-time basis
__ We should review more permits on a real-time basis
__ We should review only specific types of permits
   (If yes, which types?) ____________
__ We should conduct permit reviews to help implement new programs
__ We should review permits only when requested by the authorized entity or
   other stakeholder
5) Permitting Program Evaluation: These questions concern our evaluation of your
permitting program. Please rate our performance on the following:
a) How satisfied are you that the delegation/authorization
   agreement meets your agency's permitting needs:                       1  2  3  4  5  6
b) How satisfied are you with our evaluation of your
   delegated/authorized permitting program in terms of:
   1. Accuracy:                                                          1  2  3  4  5  6
   2. Identification of accomplishments:                                 1  2  3  4  5  6
   3. Identification of opportunities for improvement:                   1  2  3  4  5  6
   4. Resolution of disagreements:                                       1  2  3  4  5  6
   5. Suggestions for program improvements:                              1  2  3  4  5  6
   6. Your overall performance:                                          1  2  3  4  5  6
D-9
SAMPLE - Not Approved by OMB
-------
6) Keeping you informed: This question covers our communication with you regarding real or
potential changes to permitting regulations, process or delegations/authorizations, and
providing you the opportunity to comment on them. Please rate our performance on the
following criteria:
a) How satisfied are you with the timeliness of US EPA's
   notification of the opportunity to comment:                           1  2  3  4  5  6
b) How satisfied are you with the convenience of the method of
   comment (e.g., in writing, through meetings, etc.):                   1  2  3  4  5  6
c) How satisfied are you with US EPA providing information
   about real or potential changes to the permitting relationship
   in an understandable manner:                                          1  2  3  4  5  6
d) How satisfied are you with our responsiveness to your
   comments:                                                             1  2  3  4  5  6
7) Trust and Fairness: These questions concern your opinion of how well we manage the
delegation of authority:
a) How satisfied are you with the timeliness of US EPA's
   responsiveness to your request for delegation/authorization:          1  2  3  4  5  6
b) How satisfied are you with US EPA providing valuable
   review and comments on your request for
   delegation/authorization:                                             1  2  3  4  5  6
8) If delegation/authorization occurred in the last 3 years, how
   satisfied are you with the delegation process overall:                1  2  3  4  5  6
9) Please provide the following information:
Name: ______________________
Organization: ______________________
Address: ______________________
Town/City: ____________  State: ____  Zip Code: ________
Telephone Number: (____) ____________
10) Please provide any other comments you would like us to consider:
Thank you for taking the time to complete this survey.
D-10
SAMPLE - Not Approved by OMB
-------
6") Keeping you informed: This question covers our communication with you regarding real or
potential changes to permitting regulations, process or delegations/ authorizations, and
providing you the opportunity to comment on them. Please rate our performance on the
following criteria:
a) How satisfied are you with the timeliness of US EPA's
notification of the opportunity to comment:
b) How satisfied are you with the convenience of the method of
comment (e.g., in writing, through meetings, etc.):
c) How satisfied are you with US EPA providing information
about real or potential changes to the permitting relationship
in an understandable manner:
d) How satisfied are you with our responsiveness to your
comments:
123456
123456
123456
123456
7) Trust and Fairness: These questions concern your opinion of how well we manage the
delegation of authority:
a) How satisfied are you with the timeliness of US EPA's
responsiveness to your request for delegation/authorization:
b) How satisfied are you with US EPA providing valuable
review and comments on your request for
delegation/authorization:
123456
123456
8) If delegation/authorization occurred in the last 3 years, How
satisfied are you with the delegation process overall:
9) Please provide the following information:
Name:
Organization:
Address:
Town/City: _
Zip Code:
State
Telephone Number:( )
123456
10) Please provide any other comments you would like us to consider:.
Thank you for taking the time to complete this survey.
D- 11
SAMPLE - Not Approved by OMB
-------
-------
-------
7
------- |