-------
EPA'S MANAGEMENT SYSTEM FOR
ENVIRONMENTAL DATA QUALITY
TOTAL QUALITY MANAGEMENT (TQM) is the process whereby an
organization, led by senior management, commits to focusing on quality as a first
priority in every activity. TQM implementation creates a culture in which everyone
in the organization shares the responsibility for improving the quality of products
and services, and for "doing the right thing, the right way, the first time."
EPA's QUALITY ASSURANCE (QA) program for environmental data
operations is based firmly on the principles of Total Quality Management. Quality
assurance is the process of management review and oversight at the planning,
implementation, and completion stages of an environmental data operation to
assure that the data provided by a line operation to data users are of the quality
needed and claimed. The TQM concepts which the Agency's QA program has put
into practice include the following:
* customer-supplier relationships, especially a clear statement of the
customer's (data user's) needs;
* establishment of measures of performance for supplier implementation and
customer evaluation;
* process analysis through techniques such as process flow diagramming; and
* employee development, involvement, and recognition.
QA is not identical to QUALITY CONTROL (QC), which is an aspect of the
implementation phase of an environmental data operation. QC includes those
activities required during data collection to produce the data quality desired and
to document the quality of the collected data (e.g., sample spikes and blanks).
At EPA, quality assurance is a management system based upon the proven
management philosophy of Total Quality Management. The primary responsibility
for implementing QA belongs to the line managers of EPA organizations which are
involved in the collection or use of environmental data, whether in Headquarters,
Regions, or Research and Development Laboratories. EPA managers at all levels
benefit from a program which succeeds in bringing the Agency's environmental
data operations into alignment with its decision-making needs - "the right thing, the
right way, the first time."
-------
CONTENTS
Agenda i
Executive Summary iii
Biographies of Speakers v
PRESENTATIONS
Introduction by Nancy Wentworth 1
Welcoming Remarks by Robert E. Layton 3
Keynote Address by Joe D. Winkle 5
EPA's Environmental Monitoring Management Council
by Ramona Trovato 9
Harmonization of QA Requirements Across the Government
Introduction by Nancy Wentworth 19
Integrity in Science by Adil E. Shamoo 23
Journey Toward Quality: The Federal Express Story
by Gilbert Mook 33
PANEL DISCUSSIONS
Good Automated Laboratory Practices 43
Hazardous Waste Site Remediation 63
Ecological Monitoring 89
Discussion Summaries 107
GROUP SESSIONS
National Program Offices 109
Regional Offices 113
Office of Research and Development 115
QUALITY ASSURANCE MANAGER OF THE YEAR AWARD
Introduction by Nancy Wentworth 119
Acceptance Speech by Marty Brossman 123
-------
AGENDA
Monday, April 22
1:00 pm - Call To Order, Introduction
Nancy Wentworth, Director, Quality Assurance Management Staff
1:10 pm - Welcome
Robert E. Layton, Regional Administrator, Region 6
1:30 pm - Keynote Address
Joe Winkle, Deputy Regional Administrator, Region 6
2:15 pm - Good Automated Laboratory Practices (panel discussion)
Kaye Mathews, National Enforcement Investigation Center (MODERATOR)
Joan Fisk, Office of Emergency and Remedial Response
Jeff Worthington, TechLaw, Inc.
Rick Johnson, Office of Information Resources Management
3:15 pm - EPA's Environmental Monitoring Management Council (EMMC)
Ramona Trovato, Executive Secretary of the EMMC Policy Council,
Analysis and Evaluation Division
4:00 pm - Common Interest Group Discussions,
Introduction and Planning by Nancy Wentworth
Tuesday, April 23
8:30 am - Harmonization of QA Requirements Across the Government
Introduction by Nancy Wentworth
9:00 am - Hazardous Waste Site Remediation (panel discussion)
Gary Johnson, Quality Assurance Management Staff (CHAIR)
Marcia Davies, U.S. Army Corps of Engineers
John Edkins, Naval Energy and Environmental Support Activity
Duane Geuder, Office of Emergency and Remedial Response
Tom Morris, Martin Marietta Energy Systems, Oak Ridge National Lab
11:00 am - Ecological Monitoring (panel discussion)
Robert Graves, Environmental Monitoring Systems Laboratory (CHAIR)
Adriana y Cantillo, National Oceanic and Atmospheric Administration
James Andreasen, U.S. Fish and Wildlife Service
Thomas Cuffney, U.S. Geological Survey
2:00 pm - Discussion Groups - Harmonizing QA Across Government
3:30 pm - Summary of Discussion Groups
4:00 pm - Integrity in Research
Adil Shamoo, Editor in Chief, Accountability in Research
5:00 pm - Adjourn
6:30 pm - Banquet
- Journey Toward Quality: The Federal Express Story
Gilbert Mook, Vice President for Properties and
Facilities, Federal Express, Inc.
1990 Malcolm Baldrige National Quality Award Winner
- Presentation of the Quality Assurance Manager of the Year Award
Wednesday, April 24
Training Sessions:
QUALITY AUDITS FOR IMPROVED PERFORMANCE
Dennis Arter, Columbia Quality, Inc.
A one-day condensed version of the ASQC course
COMMUNICATION SKILLS
Dr. Linne Bourget, Positive Management Communications Systems
A half-day seminar on communication skills for positive power
and influence, relationships, and organizational change
TRAIN-THE-TRAINER SEMINAR
Mary Ann Pierce, JWK International Corp.
A half-day seminar on how to use QA training tools effectively
QUALITY CIRCLES
RADM Frank Collins, Frank Collins Associates
A half-day seminar on how to organize, train, implement, and
nurture Quality Circles in your organization
QA CAREER MANAGEMENT SEMINAR
Joanne Jorz, Conceptual Systems, Inc.
A half-day seminar designed to help QA professionals develop
and manage their careers
Thursday, April 25
Common Interest Group Sessions
Friday, April 26
8:30 am - Common Interest Group Sessions Wrap-up
10:00 am - Summary Reports from Common Interest Groups
11:30 am - Closing
12:00 noon - Adjourn
-------
MEETING HIGHLIGHTS
Nancy Wentworth, Director of the EPA Quality Assurance Management
Staff, welcomed attendees to the meeting, touched upon the theme
"Data Quality Across the Government," and stressed the need to work
together with a common goal to meet environmental challenges that
lay ahead.
Robert Layton, Regional Administrator of Region 6, described the
concept of Total Quality Management (TQM) and emphasized the
importance of providing a uniform approach to assuring data
quality.
Joe Winkle, Deputy Regional Administrator, Region 6, discussed the
benefits of standardizing quality assurance requirements in
Federal, state and private sectors.
Ramona Trovato, Executive Secretary for the Environmental
Monitoring Management Council's policy council, described the
Environmental Monitoring Management Council: how it was formed,
its structure, and its functions.
Adil Shamoo, Editor of Accountability in Research: Policies and
Quality Assurance, discussed integrity of research: problems and
their causes, the role of quality control and quality assurance,
and ideas for improving data integrity.
Gilbert Mook, Vice President for Properties and Facilities, Federal
Express, Inc., talked about the Malcolm Baldrige National Quality
Award: the goals and philosophies of Federal Express, their
commitment to the customer, and their quality of service.
PANEL DISCUSSION SUMMARIES:
Kaye Mathews, QA Manager, National Enforcement Investigation
Center; Jeff Worthington, Quality Assurance Director for TechLaw;
Joan Fisk, Deputy Chief, Analytical Operations Branch, Office of
Emergency and Remedial Response; and Rick Johnson, a member of
EPA's Scientific Staff in the Office of Information Resources
Management, participated in a panel on "Good Automated Laboratory
Practices: Recommendations for Ensuring Data Integrity in
Automated Laboratory Guidance." The discussion focused on the
trend toward computer automation in the laboratory.
Gary Johnson, a Quality Assurance Management Staff member; Thomas
Morris, QA Manager for the Hazardous Waste Remedial Actions
Program; Duane Geuder, QA Manager/Program Analyst in the Office of
Emergency and Remedial Response; Marcia Davies, HTW Branch Chief of
the Missouri River Division of the Army Corps of Engineers; and
John Edkins, QA Manager for the Navy Installation Restoration
Program, participated in a panel discussion on the National
Consensus Standard that is being developed through the American
Society for Quality Control (ASQC). Panelists focused on the
structure of the proposed standard and ways to effectively
implement the standard process.
Robert Graves, Acting Coordinator for the Environmental Monitoring
and Assessment Program; James Andreasen, from the U.S. Fish and
Wildlife Service in the Division of Environmental Contaminants;
Adriana y Cantillo, Manager of the National Oceanic and
Atmospheric Administration (NOAA) National Status and Trends
Program/Quality Assurance Management Program/Quality Assurance
Program; and Thomas Cuffney, an Ecologist in the Water Resources
Division of the U.S. Geological Survey, participated in a panel
that focused on harmonization of ecological monitoring. Panelists
discussed the monitoring process in their respective agencies.
-------
BIOGRAPHIES OF SPEAKERS
Nancy Wentworth is the Director of EPA's Quality Assurance
Management Staff. Prior to working for QAMS, she was a manager in
the Office of Drinking Water for seven years. In 1988, she was
awarded the USEPA Bronze Medal for Commendable Service; in 1989 and
1990, she was the recipient of the USEPA Special Service Award.
Ms. Wentworth has a B.A. in Civil Engineering and a Master's degree
in Sanitary Engineering.
Robert Layton has served as the Regional Administrator of EPA's
Region 6 since February 1987. He initiated a Value Engineering
Program in the Hazardous Waste Management Division, and negotiated
the first EPA Superfund program agreements with the Navajo Tribe.
Mr. Layton is the Vice Chairman of the Policy Committee for the
Galveston Bay Estuary Program and Co-chairman of the Policy
Committee for the Gulf of Mexico Program. He is a member of the
National Council of Engineering Examiners, and was selected
Engineer of the Year for 1976-1977.
Joe Winkle has been the Deputy Regional Administrator of EPA Region
6 since April 1988. From October 1982 to 1988, he served as the
Director of the Disaster Assistance Programs for the Federal
Emergency Management Agency (FEMA). Prior to joining the national
office of FEMA, Mr. Winkle was the Acting Regional Director and
Deputy Regional Director for FEMA Region 6 in Denton, Texas. He
served as Regional Director of the Federal Disaster Assistance
Administration (FDAA) from 1973 to 1979 and has been the Federal
Coordinating Officer for more than 50 Presidentially-declared major
disasters.
Ramona Trovato is the Executive Secretary for the Environmental
Monitoring Management Council's policy council and a division
director in the Office of Water. She is a chemist who has worked
for EPA for many years: in the Region 3 Central Lab in Annapolis,
on the Quality Assurance Management Staff, and as a liaison between
Headquarters and the regional offices.
Adil Shamoo has been a professor of Biological Chemistry at the
Maryland School of Medicine since 1979. He is the editor of the
journal, Accountability in Research, and the textbook, Principles
of Research Data Audit. In 1988, Dr. Shamoo organized the First
International Conference on Scientific Data Audit, Policies and
Quality Assurance.
Gilbert Mook is the Vice President of Properties and Facilities for
Federal Express Corporation. He joined the company in 1983 as
Director of Space Operations and Advanced Programs, was named Vice
President of Satellite & Video Systems in 1985, and assumed his
current position in 1988. Federal Express is the first winner of
the Malcolm Baldrige National Quality Award in the service category.
Kaye Mathews serves as EPA's Quality Assurance Manager for the
National Enforcement Investigation Center in Denver, Colorado. She
provides evidentiary consultation to EPA's Contract Laboratory
Program and oversees the contract Evidence Audit Team's involvement
in the CLP.
Joan Fisk, Deputy Chief, Analytical Operations Branch, Office of
Emergency and Remedial Response, has been involved with the
Superfund Contract Laboratory Program since July 1983. With a B.S.
in Chemistry from the University of Bridgeport, Ms. Fisk has worked
as an Analytical Chemist throughout her entire career. She is
presently a member of EPA's Environmental Monitoring Management
Council's panel on Method Standardization and chairs the
"Interagency Work Group on Data Authority."
Jeff Worthington is the Quality Assurance Director for TechLaw,
Inc. in Denver, Colorado. He also serves as the Technical Programs
Coordinator for TechLaw's Contract Evidence Audit Team (CEAT)
contract to EPA's National Enforcement Investigations Center.
Rick Johnson is on EPA's Scientific System Staff in the Office of
Information Resources Management. He is currently directing the
development of EPA's Good Automated Laboratory Practices, of which
he is the original author, and the integration of ORD's Data
Quality Objectives with the agency's System Design and Development
Guidance to establish a methodology for sharing environmental data
of documented quality.
Gary Johnson serves on EPA's Quality Assurance Management Staff.
With a B.S. in Nuclear Engineering, he was a nuclear engineer for
Duke Power Company and an environmental engineer for EPA's Office
of Research & Development. In his eleven years in Quality
Assurance, Mr. Johnson has received two EPA Bronze Medals for
outstanding contributions. He is a member of the American Society
for Quality Control (ASQC), and presently serves as an Associate
Regional Councilor for the ASQC.
Marcia Davies, the HTW Chemistry Branch Chief of the Missouri River
Division of the Army Corps of Engineers, has a Ph.D. in
Analytical/Inorganic Chemistry. Her branch of the Army Corps of
Engineers is responsible for the continuing development of the
USACE Chemistry Data Quality Management Program and national
oversight of its implementation.
John Edkins is currently employed by the U.S. Navy at Port Hueneme,
California as a hydrologist, and is the Quality Assurance Manager
for the Navy Installation Restoration Program. He has an M.A. in
Geology and is registered as a professional geologist. A Section
Vice-chairman of the American Society for Testing and Materials
subcommittee D-18.21 on Ground Water and Vadose Zone
Investigations, Mr. Edkins is also a member of the Interagency Ad
Hoc Committee for Quality Assurance in Environmental Measurements.
Duane Geuder has an M.S. in Radiation Biology/Biophysics. He has
worked as a chemist in the Marine Corps, as an oceanographer for
the U.S. Navy Oceanographic Office, and most recently as a Quality
Assurance Manager for EPA's Superfund. He has more than 25 years
of experience in environmental sampling and analysis and related QC
and QA activity. Mr. Geuder is a member of the Ad Hoc Panel on
QA/QC services under the Environmental Monitoring Management
Council, and belongs to the Association of Official Analytical
Chemists.
Thomas Morris is currently employed by Martin Marietta Energy
Systems, Inc. as the Quality Assurance Manager for the Hazardous
Waste Remedial Actions Program. A member of both the Working Group
and the Policy/Steering Committee for QA Harmonization, Mr. Morris
has eight years of experience in Total Quality Management concepts.
Robert Graves has worked for EPA's Environmental Monitoring Systems
Laboratory in Cincinnati since October 1978. Prior to that, he was
employed by the U.S. Treasury's Bureau of Alcohol, Tobacco, and
Firearms to examine and analyze evidence from various law
enforcement agencies. Mr. Graves has an M.S. in Chemistry and an
M.B.A. in Finance.
Adriana y Cantillo currently works for the NOAA/National Ocean
Service (NOS)/Office of Oceanography and Marine Assessment/Coastal
and Estuarine Assessments Branch. She is the Manager of the NOAA
National Status and Trends Program/Quality Assurance Management
Program/Quality Assurance Program, and the Coordinator of NOAA
activities of the multiagency Ocean Dumping Ban Act Research and
Monitoring Program. With a Ph.D. in Chemistry, Dr. Cantillo is a
member of EPA's Methods Integration Work Group, the NOS
representative to the National Ocean Pollution Policy Board, and
the NOS representative to the Working Group developing the Federal
Plan for Ocean Pollution Research, Development and Monitoring:
Fiscal Years 1991-1995.
James Andreasen has 18 years of experience, in both research and
operations, evaluating the effects of environmental contaminants on
fish and wildlife populations and water quality. He has a Ph.D. in
Zoology/Natural History and is employed by the U.S. Fish and
Wildlife Service in the Division of Environmental Contaminants. As
the technical advisor for the Service on the Department of Interior
Irrigation Water Quality Program, Dr. Andreasen is responsible for
pulling together various elements that contribute to harmonization
of field methods and for formulating standard operating procedures
for the operational contaminant program.
Thomas Cuffney is an Ecologist in the Water Resources Division of
the U.S. Geological Survey. He serves on the Technical Issues
Committee of the North American Benthological Society and the 1992
Program Committee of the Society of Environmental Toxicology and
Chemistry. He has a Ph.D. in Biology and seven years' experience in
aquatic ecology and environmental toxicology.
-------
INTRODUCTION
Nancy Wentworth
Director
EPA Quality Assurance Management Staff
I'd like to welcome you to the 11th Annual National Meeting on
managing environmental data quality. The theme of this meeting is
"Data Quality Across the Government."
Today is Earth Day. There are probably a fair number of people
here in the audience who participated in the first Earth Day 21
years ago. It was something that made a mark on my life and
created a commitment to protecting and preserving the environment.
I think that's what we're here to consider this week. There are
people from a wide diversity of agencies and their supporting
contractor communities, and we need to work together. We need to
work efficiently, because we have many things that we must do and
we do not have unlimited resources. So efficiency is our only hope
in the long term of being able to meet the challenges that are
before us in the environment.
One of the things that I have most enjoyed in my six years with the
Quality Assurance Management Staff (QAMS) is the opportunity to
learn and to grow, and to try to build continuous improvement into
EPA's Quality Assurance Program. I look at this meeting as a
significant opportunity for all of us to come to a much greater
understanding of our mutual concerns, our mutual problems, and our
mutual goals. Your gift to QAMS this week is your knowledge. The
QAMS staff who are here are going to be listening and probing and
trying to gain as much as they can. Please share your experiences
with us. This is what we need to hear so that we can do our jobs
better.
We value the diversity of the audience that's here. We have people
from all backgrounds, from all geographical areas of the country,
and from all of the programs that EPA is concerned with, either
directly or indirectly. That is very important to us. How we are
going to succeed in the long term is to bring all of these views
and visions together and move forward as a group. We can't afford
to go in separate directions; we must work together. I think we
have a tremendous goal ahead of us this week and I look forward to
achieving it.
-------
WELCOME
Robert E. Layton
Regional Administrator
EPA Region 6
Welcome to Region 6 of EPA and to Dallas, Texas. You know that
Texas is the largest state in the Union. Now don't tell me Alaska
is. I know it covers more area, but when all the ice melts, Texas
is still the largest.
Texans are proud of their quality as well as their quantity, and we
always like to think of Texas as being the biggest and the best.
I sound, perhaps, like a couple of fellas in this story about a
Texan and a man from Arkansas who got on a train in Texarkana to
go to El Paso. The Texan began bragging about the size of the
state and he said, "You know, we could ride all day on this train
and still be in Texas." And the man from Arkansas quietly replied,
"Well, don't feel bad sir, we've got slow trains in Arkansas, too."
Of course, they weren't speaking from a uniform standard of data.
The man from Arkansas wasn't looking at the train from the same
standpoint as the Texan. And that is likely to occur sometimes in
quality assurance (QA) management—we may not all be looking at it
from the same perspective.
The theme for this year's meeting is "Management of Environmental
Data Quality Across the Government." All of you who are EPA QA
Managers should take great pride in your contribution to providing
quality data for this Agency. For all of you who are here
representing other agencies, we welcome your interest and your
participation, and we know you are as dedicated to helping provide
quality data as we at EPA are. We look forward to continuing a
dialogue that will result in strengthening all of our programs.
Perhaps this week could be the beginning of a process that could
change the way government does business in the field of data
quality management. I challenge you to begin a communication with
one another which transcends agency lines. You have a unique
opportunity to lay a foundation for a uniform approach to data
quality throughout the government.
If you can begin to devise methods for providing a uniform approach
to assuring data quality, you will provide an invaluable service to
not only your particular agency, but to the entire structure of
government as well—Federal, state, and local. Your ability to
communicate, compromise, and come to a consensus on QA and data
gathering would certainly be within the framework of what
Administrator Reilly calls Total Quality Management (TQM).
Implementing the ideal of TQM is both realistic and beneficial, and
it is mandated in EPA. And so I charge you this week to seek out
each other. Discuss common values and standards as they relate to
data quality. Become involved in creating new and excellent
approaches for improving data quality across the lines of Federal
agencies. Find similarities and build on them. Look closely at
differences among existing QA programs and see if they can be
reconciled. Good luck in this worthy endeavor.
-------
KEYNOTE ADDRESS
Joe D. Winkle
Deputy Regional Administrator
EPA Region 6
It's a pleasure to be here today and to address such a
distinguished group of people that has such an important mission as
it pertains to the job of environmental protection and science in
general. F. Henry Habicht, our Deputy Administrator of EPA, in an
article back in November 1989 in The Quality Manager, said that
"few things are more important to our success as an Agency than
credible, usable data. Indeed, virtually every regulatory decision,
research activity, and budgetary action taken by this Agency is
based, in significant part, on environmental data. Our Agency is
in the business of making decisions—often difficult decisions. In
order to achieve our shared mission, our actions must be supported
by reliable environmental data."
Habicht went on to say that Administrator Reilly intended to pursue
top priority initiatives to ensure that the Agency had access to
the best data from all significant sources. To that end, our
Administrator and his Deputy have established an internal task
force to explore the merits of creating a National Center for
Environmental Statistics. However, the National Center will only
be as good as the data it receives from the various agencies.
Quality of interagency programs and projects across the government
is always a critical issue, particularly in Region 6 where we work
with many Federal, state, and local agencies. All are involved in
environmental data operations. We have numerous facilities under
our jurisdiction; sometimes we feel there are more than we can
comfortably work with. Standardization of QA requirements will
lessen the number of documents that must be written and reviewed,
resulting in savings of time and money. Each Agency has different
requirements for the generation and documentation of environmental
data; therefore, it would be prudent and feasible to have a uniform
approach for producing such information. Each time we standardize
and streamline we save ourselves untold work.
It is imperative that we address cost-saving issues in these days
of budget crunch. Oversight of interagency programs and projects
becomes simplified with harmonized data quality assurance
standards. The framework for QA in data operations would be common
across agencies, thus eliminating confusion. There could be no
question about which set of standards would apply, as all would be
the same. Newer, more efficient techniques would allow us to spend
our time working toward constructive solutions to our problems,
instead of discussing the style and scope of QA efforts which may
have resulted from a variety of approaches to assuring data
quality, none of which may be suitable for decision making. This
additional time saved in having a uniform approach to data quality
would allow us to launch into new and even more challenging areas.
However, providing a single set of standards applicable across the
board for programs and assuring the quality of environmental data
will be a formidable task. Standard definitions are a must. You
must more clearly define what is being sought. At times those with
whom you interface do not clearly understand what is being stated.
Since environmental data is at the core of so much of what we do,
decisions are only as good as the data on which they are based.
Our effectiveness as regulators and enforcers depends on our
credibility. Without credibility, the regulated community and the
public will be reluctant to accept the decisions and actions of any
governing or regulating agency. We must, therefore, ensure the
quality of our data and thus the quality of the foundations on
which our decisions are based.
Perhaps you can build communication this week which will become a
cornerstone in building the data and enable us to make sound and
defensible environmental decisions. An important example of this
team communication concept took place in a recent meeting in
Washington between the Office of Water and the Office of
Enforcement. In this meeting the participants sought to identify
areas in which there were differences. The intent was to achieve
a smoother operation. The participants included members from the
Office of Regional Counsel, Office Directors from the Office of
Water, and the Office of Enforcement, as well as Water Management
Division Directors. The consensus was that good communication and
smoother relationships would lead to rapid solutions to problems
and get the job done in a timely and efficient manner.
We are hoping that you QA Managers can begin to fashion a national
standard for environmental data quality programs. You could formulate
QA principles and policies which would enhance cooperation and
improve the quality of environmental data in the Federal, state and
private sectors. As you begin discussions this week, there could
evolve a common goal—to provide a framework that would accomplish
our varied but important missions in addressing the complicated
task of a harmonized approach to data quality. You can make an
effort to implement Mr. Reilly's mandate to become involved in TQM,
for you have the unique opportunity of applying that TQM principle
to a vital task. As each of you provides your own special
expertise, and as you discuss and compromise for the common good,
you will exercise the very principles of TQM.
The key task you will face this week is to communicate on what the
Federal environmental mission really is and how you, as QA
Managers, can best support it. Through joint efforts you can lay
the foundation for harmonization of QA requirements: planning,
implementing and reviewing data collection programs. A closer look
at data raises some important questions. When a manager is making
a decision based on environmental data, that manager must ask: "Do
these data meet the needs? Do they help me make the decision
I'm facing? Can I live with this level of uncertainty? Are these
data legally defensible?" Much of our data must survive the close
scrutiny of court cases.
These are only a few of the many questions which arise when
assessing data as they relate to an environmental decision. One
very important question is often raised when resampling is
required: Was the resampling required as a result of too wide a
variance in the data? Or, was it required as a result of unclear
objectives to begin with? Exact standards, clearly defined
objectives, uniform procedures and precise testing—in short, a
good job of upfront planning—will eliminate a great deal of
resampling.
We at EPA use the term data quality objectives (DQO's). As you
know, this term refers to a planning process which determines
upfront how good the data need to be to support Agency decisions.
So many times, we shoot first, then aim. We have to reload, aim,
and then shoot—a procedure we should have used to begin with.
All data have a level of uncertainty. We need to know that the
level of uncertainty is acceptable. During the planning stage we
must ensure that the technical insight of the laboratory analyst
who is testing the sample is compatible with the needs of the
decision-makers who will use the data. This communication is
essential between two cultures: management and technical. DQO's
are a cross-cultural communication that provide decision-makers
with the ability to manage the level of quality in data that is
needed to make the important regulatory decisions they face. The
astronomical costs involved in cleaning up the environment dictate
that trustworthy data be assured.
All these problems could be resolved with a standard, universally-
accepted approach to data quality assurance. Our tools for
acquiring data, and the data itself, must be above reproach, for we
are the regulators. Just as the police force should be uniform in
the manner in which they enforce the laws, we should be uniform in
the manner in which we make regulatory decisions. To gain
confidence from the public and from those being regulated, the data
used in making those decisions must be appropriate.
One of the challenges of the DQO process is that you must strike a
balance. Too much quality is as poor an idea as not enough. There
should be a balance, and the emphasis must be placed upon arriving
at that balance.
So we set before you the task to begin communications which will
ensure uniform Federal standards for environmental data quality.
Provide us this week with a foundation for meeting this goal; give
us the building blocks for creating a standard, uniform
management system for environmental data quality which would begin
with how to plan it; then go on to say who will do it, what methods
will be used, and most importantly, what standard will be applied
in evaluating. This is the heart of data generation.
Recent budget constraints make it mandatory for us to seek better
methods for doing our job. You QA Managers are at the forefront in
providing assistance in doing just that—better, more productive QA
management methods, derived from universally-acceptable standards.
Your challenge is to begin creating that harmony. I am confident
that you will excel at this task; your challenge is great, but it
can be achieved. We look forward to the results of your
deliberations during this conference.
-------
EPA'S ENVIRONMENTAL MONITORING MANAGEMENT COUNCIL
Ramona Trovato
EPA Analysis and Evaluation Division
The Environmental Monitoring Management Council (EMMC) was formed
by Deputy Administrator F. Henry Habicht in FY 1990. The EMMC was
formed because of three events. One was that a white paper was
developed on long term development of analytical methods and short
term analytical method development that recommended a need for a
senior management group to address monitoring methods issues. Then
there was a report called the Section 518 Report that looked at the
availability, adequacy and comparability of the Agency's analytical
test methods under section 304(h). The report's authors were then directed to look
further than just the water methods and to look at the other Agency
methods and to make recommendations. One of the recommendations
they made was to set up an environmental monitoring management
council.
That happened in 1988. Then in 1989 Region 3 got a new Regional
Administrator, Ted Erickson, and he sent a letter to the
Administrator saying he thought it would be a good idea to set up
a group like this, since the Agency does have some problems with
comparability across programs in its methods and in its QA and QC.
As a result, the Deputy Administrator established the EMMC in FY
'90. The EMMC's purpose is to recommend coordinated Agency-wide
policies concerning environmental monitoring issues. Our charter covers seven
areas that EMMC is supposed to look at: 1) coordinate the Agency-
wide environmental methods research and development needs; 2)
foster consistency and simplicity in methods across media and
programs; 3) coordinate short and long term strategic planning and
implementation of methods development needs and promote adoption of
new technology and instrumentation; 4) coordinate development of
QA/QC guidelines as they apply to specific methods; 5) evaluate the
feasibility and advisability of a national environmental laboratory
accreditation program; and then, other activities that influence
environmental monitoring.
The EMMC can focus on any issues that affect environmental
monitoring which they feel are important. The bottom line is that
they're supposed to recommend coordinated Agency-wide policies
concerning environmental monitoring issues. All of us have
recognized for a long time that we need somebody who's looking
across the programs. And I think this is the first time we've had
a coordinated effort where we got some senior managers involved in
looking at the issues associated with environmental methods
development and the QA/QC in environmental laboratory
accreditation.
I should mention that we're not just inside folks. On the ad hoc
panels we have folks from states and from other Federal agencies.
In one instance we're going to set up a Federal Advisory Committee
so we can have folks from industries come in and talk to us and
provide us with advice.
Figure 1 shows how the EMMC is structured. The Deputy
Administrator is a tie breaker if the Policy Council has issues
they can't decide on their own. The Annual Reports also go to the
Deputy Administrator. We're in the process of finalizing the first
year's Annual Report which we'll be talking about at the next
Policy Council Meeting later this week.
Figure 1
Structure of the Environmental Monitoring Management Council
Deputy Administrator
  Policy Council
    Steering Committee
      Ad Hoc Panels: QA Services; Methods Integration; Automated Methods
      Compendium; Analytical Methods and Regulation Development; National
      Laboratory Accreditation
The Policy Council identifies the big issues of concern that need
to be coordinated across programs. It meets at least twice a year;
in this first year I think it met four times. The Steering
Committee oversees the investigation of specific monitoring issues.
It meets at least four times a year; it met more this year.
The ad hoc panels are the people who really get out and do the leg
work. They're the ones who identify the issues, come up with the
options, and make recommendations to the Steering Committee which
are then elevated up the chain and implemented. They meet as often
as they need to meet.
On the Policy Council, Erich Bretthauer, the Assistant
Administrator for the Office of Research and Development, is one
co-chair; Edwin (Ted) Erickson, who helped get this ball rolling,
is another co-chair. I'm the
executive secretary. Then we have members—one member from each
office. Members include the Deputy Assistant Administrators and
the Deputy Regional Administrator from the lead region for research
and development. The first year it was John Wise; recently it's
been Jack McGraw. This representative changes as the lead region
changes—every two years.
The Steering Committee reports to the Policy Council and they're
the ones who make sure that things on the ad hoc panel are moving
along in the direction they ought to be moving. The Director of
the Office of Modeling, Monitoring Systems, and Quality Assurance
is the chair of the Steering Committee. That was Rick Lindhurst
until he recently moved upstairs to work directly for Erich on some
other projects. Now Jack Puzack's acting in the meantime. David
Friedman, whom many of you may know from his days in the Office of
Solid Waste, is the executive secretary for the Steering Committee.
The members comprise Division Directors from headquarters
and the Office of Research and Development (ORD), the Environmental
Services Division (ESD) Directors from the lead region for ESD, and
the ESD Director from the lead region for research and development.
So we have on both of these committees good representation from
ORD, the regions, and the headquarters program offices, which is
what we were trying to achieve. The regional folks know where the
problems are because they're trying to implement the regulations.
The program offices write the regulations and need feedback from
the regions. ORD folks have to go out and come up with whatever
the program offices think they need. We were trying to get
everybody represented on these committees so we could move ahead
and no one would feel left out.
There were five ad hoc panels established about a year ago which
started meeting maybe nine months ago. There was one on QA
Services which was supposed to look at what the Agency's needs were
in terms of QA services, what it cost, and how we could fund it.
There was a Methods Integration panel which was supposed to look
across the Agency and see where there were opportunities for
consolidating methods. I think this panel's an exciting one
because, having been in a lab for 13 years where I had to do one
method for drinking water and one method for the National Pollutant
Discharge Elimination System (NPDES) and another method for the
Resource Conservation and Recovery Act (RCRA), I was excited about
the idea of doing one method for the same analyte in the same
matrix. So I have high hopes for this group. But they also have
a tough problem, because they have to make sure that they're going
to get the detection limits they need, the precision and accuracy
that they need, and still be able to meet the regulatory
requirements of that program.
The Automated Methods Compendium panel was set up so that we could
have a single database that would list all of the Agency's methods.
The Analytical Methods and Regulation Development panel was
supposed to figure out a way to get analytical methods and QA more
visible in the regulation development process. You have to have a
good idea of how our regulation development process works and where
you can plug things in in order to make a change. And I think this
group found out how to do that.
The last panel was set up to look at National Laboratory
Accreditation. First, is it feasible—can we do it—and second, is
it advisable—should we do it?
The QA Services panel is co-chaired by Tom Hadd from the Office of
Research Program Management and William Hunt from the Office of Air
Quality Planning and Standards. We tried to have two folks co-
chairing each ad hoc panel. The co-chairs had to come from
different organizations, and there were three main organizations:
ORD, the regions, and the headquarters program offices. We would
pick two of the three to get this covered, and get people who were
interested in the area, so that we could get balanced representation
in the chairs and things didn't go too much one way or the other.
The charter of the QA Services group states that they will address
the issue of sustaining adequate funding for QA services and
research provided by ORD. The QA Services panel did a QA Service
and Research Needs Survey to identify needs; they're in the process
of trying to quantify those needs and decide how much money we're
spending now so they can get a good idea of how much more we need.
They're also trying to find a way, or make recommendations on a
way, to fund those needs in the future. QA services includes such
things as training and QA research methods. Everybody has a
slightly different idea of what we need and what they're willing to
pay for. So this one's turning out to be very difficult.
The next step will be to develop a Final Report and Recommendations
for the FY 1993 Budget Proposal. We are trying to get this budget
initiative together and are working hard to figure out how we can
pay for it and how we can have a stable base.
I think the most important thing that's come out of this panel so
far is the recognition that when it comes to funding QA services
and methods development activities, they always take a back seat to
everything else. It's the first time I think that has been
recognized by folks up the chain as a problem. So at least we've
made headway that far, even if we're not able to get a budget
initiative going this year. We hope we'll be ready to have a
budget initiative next year for that one, so that we have money
actually set aside and earmarked for QA activities and QA research
and development.
The Methods Integration panel is chaired by Larry Reed from the
Office of Emergency and Remedial Response and Ort Villa from Region
3, Environmental Services Division. This is the group that is
trying to figure out how to consolidate methods. Their charter is
to evaluate the feasibility of standardizing analytical methods
across media and programs. They have two basic activities that
they need to address. One is how to consolidate existing methods
or integrate existing methods; the other is how to get a handle on
all the new methods that are getting developed in such a way that
you don't keep proliferating lots of methods that are slightly
different but do the same thing.
They've begun by addressing existing methods and they're trying to
find opportunities where they can consolidate those methods with a
minimum amount of difficulty. There are three method integration
pilot projects underway. They began with volatile organics, metals
digestions, ICAP—those are in process and they feel that they will
be done by the end of this fiscal year in September. The other two
that they have just decided to start looking at are semi-volatiles
and microwave digestion. They feel that microwave digestion will
fit in nicely with metals digestion and they're hoping to have
those two done by September '91. This group is trying to figure
out what the best way is for it to be organized to address these
issues, because this is a big area and there's a lot of work to
begin.
On this panel, we have a lot of other Federal agencies represented
because many other groups use EPA's methods. So we have the
National Oceanic and Atmospheric Administration, the Department of
Energy, the Department of Defense, and a number of others because
our methods are not just for our use but for other agencies' use
too.
There are some other issues: How do we format the method? Should
we come up with a whole new methods manual in addition to all the
already existing methods manuals that we have? How do we go about
making them legal, that is, including them in the Federal Register?
And when we figure out how to do that, do we make them supplement
or supplant existing methods? There are lots of different strongly
held opinions on how we should go about doing this. So those
meetings are very interesting. And if anybody has two cents they'd
like to throw in, I would encourage you to talk to David Friedman,
who is heading up the work group that's dealing with this issue.
The Automated Methods Compendium panel is co-chaired by Fred
Haeberer from the Quality Assurance Management Staff and Bill
Telliard from the Office of Water. This is, I feel, one of the
success stories for EMMC in the first year. What the EMMC decided
to do was to adopt the existing List of Lists system that the
Office of Water had been putting together over the years as the
Agency-wide tracking system, and to include methods from Superfund
and RCRA and Air and other programs that they didn't already
include in their database.
Each of the other programs was going to kick in some money to help
fund this and keep it going. It has gone out in a read-only pilot
form to Region 3 and its states. I think about 36 copies of this
went out to Region 3. Another 22 got distributed to folks around
the Agency that Bill and Fred knew were interested. They're
expecting feedback by the end of this month on how things went and
then, based on that, they'll modify the existing system and make it
available for distribution. We thought it would be a good idea to
have this in EPA's library in headquarters so that the reg writer
folks could have access to it, and then also have it in the
regional lab libraries, the regional libraries, NEIC, and the three
Environmental Monitoring Systems Laboratory (EMSL) libraries.
We'll look at the results from the pilot study and make any
changes. So far the feedback I've heard from people who are using
it is that it does address which program the methods support, what
the detection limits are, and what type of method it is, and that
it's been very useful to those folks so far.
The Analytical Methods and Regulation Development panel started out
as the Quality Assurance and Regulation Development Project at one
of these national QA meetings. Anyway, the Regional QA Managers
decided that they didn't feel QA was getting enough attention in
the regulation development process, so they started an initiative
with Carol Wood of Region 1 as their leader, figuring out how they
could get QA better highlighted in the regulation development
process. When EMMC got started, and they were talking about
analytical methods in the regulation development process, we
volunteered Carol and said, "She's already leading this effort; why
don't we just let her continue on as one of the ad hoc panel co-
chairs." The other co-chair is Maggie Thielen from the Office of
Regional Operations and State/Local Relations. Their charter is to
develop a system for factoring methods development and validation
concerns into the Agency's regulation development process.
The panel was recently successful in getting the regulation
development steering committee to adopt their recommendations,
which are reflected in the Start Action Request, the document that
begins all regulation writing at EPA. There would be a question
that asks, "Does this rule require environmental measurements?" And if the
answer is yes, then the work group chair must go out and find
somebody who knows something about environmental measurements and
include them on the work group to make sure that whatever they come
up with is implementable.
In addition, Jim Weaver, who's from the Office of Regional
Operations and State/Local Relations, and is the steering committee
representative from that office, will contact the programs in the
region and the ESD in the lead region to make sure that we get
representation from the regions on the group.
Finally, once it comes to workgroup closure, the chair of the
Regulatory Steering Committee, who is, I think, Tom Kelly from the
Office of Policy Planning and Evaluation (OPPE), will ask the
question, "Is this rule implementable from the perspective of
environmental measurements?" If the answer is no, it doesn't go to
red border; it stops until they can work it out. If it's yes, it
can go on to red border, which is the final step within EPA in
getting something ready to go into the Federal Register.
This was a big step, because it's very difficult to get something
included in the guidance to reg writers. So we feel that this
one's quite a success story, too. They're going to try it out for
a year, and when the year's over, evaluate how well it worked,
whether they should change it, or whether they should leave it
alone.
We felt strongly that the national program office QA Officers
needed to play a strong role in this, because they're the folks who
are going to know what regs are coming forward and are already
being worked on. We wanted them to be our first line of defense to
notify others in case they feel that's not being addressed.
I think one of the driving forces was not whether there were court
challenges or not, but how well the regional folks felt they were
getting methods in time and appropriate QA and proper quality
control (QC) requirements in time to carry out the regulations as
they came down the pike. In the spirit of TQM, this will be one
incremental step forward in improving our whole process.
The last panel is the National Environmental Laboratory
Accreditation panel, which I co-chair with Jim Finger, the ESD
Director in Region 4. We were charged with two things: first, to
decide if a uniform National Environmental Laboratory Accreditation
program for labs performing environmental testing procedures was
feasible, and second, if it was advisable. After a number of
meetings and a lot of disagreements, we agreed that National
Environmental Laboratory Accreditation is probably a good idea and
that it would benefit the programs. We thought that if we decided
we wanted to do it, we could do it.
What we didn't decide is whether we should do it. We didn't decide
that because we felt what we needed was a Federal Advisory
Committee. The reason this all started was that EPA's Deputy
Administrator F. Henry Habicht got a call from the private lab
industry saying we really need national environmental lab
accreditation. We felt very strongly that we ought to hear from
all the users of environmental lab data and not just from EPA. So
our ad hoc panel did include one state representative.
Initially, we had some folks from the lab community and were told
by general counsel that we were not allowed to do that. If we
wanted to talk to the lab community or any of the users of
laboratory data such as API, we had to set up a Federal Advisory
Committee. So I'd say for about the last five or six months we've
been in the process of trying to figure out how to set up a Federal
Advisory Committee and get it going. That committee will consist
of representatives from the states, from EPA and other Federal
agencies, from the laboratory industry, laboratory associations,
and trade associations to make recommendations to the Deputy
Administrator on what they think would be valuable to them in terms
of a National Environmental Lab Accreditation Program.
We felt that one of the keys to having a National Environmental Lab
Accreditation Program was getting the states on board. Without the
states, it seems to me we'll just be layering another bureaucratic
program on top of a lot of already existing programs. That's why
we felt very strongly that we had to have a lot of state
representation on this Federal Advisory Committee. If we can't get
reciprocity, which seems to be one of the big problems, then I'm
not sure we're ever going to be successful with National
Environmental Lab Accreditation. We did invite the National
Institute for Standards and Technology (NIST) to come and talk to
us about their lab accreditation programs. They felt very strongly
that if they were going to set up a program it would have to be
centralized; they would have to have complete control of the
program from NIST in Maryland. Based on the existing programs
that we already have—almost every state has some kind of a lab
accreditation program—we felt that wouldn't work at all. So the
NIST model doesn't look good for what we want to do.
We're hoping to have our first Federal Advisory Committee meeting
in June. Our goals within the next year are to identify the
critical elements and design of a national environmental lab
accreditation program, identify the benefits to all the users of
laboratory data, and provide options on how it could be funded and
how it should be managed. So that's what we hope to do in the next
year on National Environmental Lab Accreditation.
We have talked to the Health Care Financing Administration, which
has recently set up a program where they have to certify medical
laboratories, and we invited them to come and tell us what they
were doing and how well it was working. They declined and said it
was too soon to tell and that they would have something to talk to
us about later; so we're going to try them again, since it's now
later.
The Clinical Lab Improvement Act was passed by Congress in 1988,
and they have to set up a clinical lab accreditation program. They
proposed the requirements for that about a year ago in the Federal
Register, and that drew something in excess of 20,000 comments.
court order they have, just recently I think, re-proposed because
of the extent of the comments they got. But they do have some very
good and interesting techniques in there for monitoring performance
of the lab.
Some of the states have said they want a Federal regulation that
says you have to participate in this. And some of the folks we've
talked to have said they want a voluntary program. So one of the
things we need to recommend is whether it's going to be Federally
mandated or not. What the private lab community is telling us is
that if we set up a national program, they will participate because
they'd rather be part of a national program than subject to each of
the individual states' programs.
Jeanne Hankins of the Office of Solid Waste has agreed to serve as
our executive director of the Federal Advisory Committee and to
handle these lab accreditation issues for us. I'm looking forward
to having her join us, because this is a big issue and I haven't
been able to give it as much of my time as I would have liked. We
welcome her to this project.
Just to sum things up: the two big achievements, I think, are the
Environmental Monitoring Methods Index (EMMI), which is the
automated database that Bill Telliard and Fred Haeberer were
working on, and the QA Services group, which is still struggling to
convince program offices and others how much is really being spent,
how much needs to be spent, and how we should go about funding QA
needs and methods development needs in the long term. That's all
I have to say about it.
-------
HARMONIZATION OF QA REQUIREMENTS ACROSS THE GOVERNMENT:
INTRODUCTION
Nancy Wentworth
EPA Quality Assurance Management Staff
We're going to be talking about the differences and similarities
between QA programs across the government. We have representatives
from the Army, the Navy, the Department of Energy (DOE), EPA, U.S.
Geological Survey (USGS), and the National Oceanic and Atmospheric
Administration (NOAA)—a wide range of people dealing with many
different programs and many different concerns.
One of the things I've enjoyed most in QAMS is the ability to meet
people from across the Agency and across the government. All of us
in QAMS enjoy the opportunity we have to learn from other people.
Harmonization of QA requirements is one of the areas that we have
learned a tremendous amount about over the last few years. We've
talked to many people across the Agency and, as part of our
responsibility for oversight and review, we've looked in detail at
a number of programs and found that there are many common concerns
manifested across the country. We have found that the exchange of
information has provided us with a tremendous resource.
One of the major things we have recognized is that the guidance
that we have available to the QA community within EPA is outdated.
It's in need of revision to better reflect the QA program that we
are now operating. Many of you are aware that the QAMS-005/80
guidance is approaching 11 years old. The QA program in the Agency
when that was written was not the program that it is today. There
have been many strides forward and we need to have our guidance
accurately reflect the current program. One of the things that is
a priority for me as the Director of QAMS is to bring our guidance
up to date with the program we're operating now, and with the program
that we see as needed across EPA and all of the environmental
programs.
We've learned a lot from you. And one of the things we've learned
is that we need your help if we're going to be able to revise the
guidance and improve it and make it usable across the Agency. We
realize that the guidance has to be readable; it has to be easy for
people to translate into their day-to-day environmental monitoring
operations. It can't be written in hard tech engineering dialect
if it's going to be used in laboratories where people are used to
the chemistry dialect. It has to be in a common dialect that we
all understand.
We're trying to look at the issue from a Total Quality Management
(TQM) perspective—not as a narrow EPA-only issue, but from the
perspective of the wide spectrum of environmental monitoring operations
that go on across the government. It seems appropriate, if we are going to
improve our guidance, that we consider all of the needs of the
other monitoring programs we run that affect other agencies in the
government. As long as revisions are needed, it seems appropriate
to incorporate the concerns and requirements of other agencies and
to harmonize the requirements. I think that's an important thing
to recognize. We're trying not to have EPA stand alone, but to
have EPA stand with the other government agencies—to have a
logical, reasonable, usable program across the board. I think that
will help significantly to eliminate a lot of the problems and
concerns that you share.
A few years ago when we saw TQM rising as an important way of doing
business, we made a point of talking to the QA managers of other
environmental programs across the government. What we found was
very similar to what was in our own Agency—that the same guidance
was being interpreted differently across the country and in
different applications. It became clear that there needed to be an
effort to take some of the differences out of the program so that
we could operate more efficiently.
The vehicle that presented itself to us as a way of doing this was
the American Society for Quality Control (ASQC) Energy Division,
which has an Environmental Waste Management Committee with
representatives from EPA, DOE, Department of Defense (DOD), and
contractor communities. This committee provided an opportunity for
a small number of people to sit down and begin to work through the
issues and concerns that their agencies had. Through this vehicle
we were able to begin discussions on harmonizing a standard.
The EQA-1, Quality Assurance Program Requirements for Environmental
Programs, is a draft that has been prepared through this ASQC
committee to begin the discussion and the widespread comment within
the community to try to develop this consensus standard. We have
spoken about this document at a number of meetings outside of EPA—
ASQC meetings and other technical meetings—and the document has
been widely distributed.
We are seeking your comments. For this effort to succeed we must
have your input on how QA programs can be made more common across
the government. We'd like to get a wide range of comments from the
QA community, both from within EPA and across the DOE and DOD
organizations and the contractor community. We're interested in
hearing from all of you.
We have assembled two panels to talk about the differences and
similarities in QA programs across government agencies. I think
this is a valuable opportunity for us to exchange ideas. It's
important for us to talk about our differences or similarities in
programs, the differences in our needs, and the differences in our
management and operating circumstances.
For those of you who are not EPA employees, we need to know what
your views are on the benefits of harmonizing QA requirements, the
savings that you can see in that kind of a shift, and the types of
concerns you have that are not, as you read it, addressed in the
EQA-1. This is an interagency activity that is unprecedented. We
are committed to improving EPA's QA program, EPA's guidance across
the country, and EPA's clarity in explaining what people are
expected to do to meet the EPA requirements. We are hoping that
this effort will respond to a lot of those points.
We're also looking at this as an important team effort within EPA.
I do not want any of you to think that what's presented to you is
a done deal. We need support in improving our QA program. We have
made a proposal that we think responds to the
concerns that have been voiced to us over the years. We need to
hear back from you whether your concerns have truly been addressed
and dealt with. With that information, we can revise and modify the
document you have, and make it clearer and more user friendly, so
that it can work for EPA.
After the panel discussions, we will be breaking into discussion
groups to talk about the issue of Harmonizing QA Across the
Government. The purpose of that discussion is to talk about the
process of changing our approach to QA and making it more
consistent across government and the processes that you, as
individuals with QA responsibilities in different organizations,
will have to go through to change your program. We'd like to know
what support you need from us, and what support you need from
within your own organization to help implement change.
I know from experience that change at EPA is often difficult,
because people don't want to do things a new way or a different
way. What we're trying to do now is begin both the continued
refinement of EQA-1, or a new QA guidance system for EPA, and the
process of people looking at it and beginning to assimilate it
into their operations. So I think we'll have some interesting
discussions from a process perspective, not from a technical
standards perspective.
I'm particularly looking forward to hearing how other agencies are
handling QA issues, what they view as their major differences and
major similarities with EPA's program, and the results of the
discussions on what kinds of process things we need to consider as
we look at implementing changes in our QA program.
21
-------
INTEGRITY IN SCIENCE
Adil E. Shamoo
Editor in Chief,
Accountability in Research: Policies and Quality Assurance
I want to thank the organizers for giving me this opportunity to
share my views with you. Some of what I am about to say has been
said, but it's going to be from a different perspective. I came into
this area 10 years ago, when a lot of people were suffering on both
sides, both individual investigators and the field in general, over
issues involving the integrity of research.
There are two key areas that we are trying to change in order to
ensure the integrity of research: policy and individual
investigators. Both areas are important; however, policy changes,
I believe, can have quicker and farther-reaching consequences.
Therefore, we need to address policy makers and emphasize to them
that these policies can foster integrity in research or breed
fraud, misconduct, and sloppy work. And that's basically what we
are dealing with here; that's quality assurance: to prevent and
reduce sloppy work.
To influence individual investigators without changes in policies
is nearly impossible. However, even if policies are appropriate,
changes in individual investigators will be slow. Are the current
concerns regarding the integrity of research data an indication of:
(a) the decline in the ethical values and conduct of research by
investigators; (b) a new awareness of an old problem that is
produced by a small and negligible number of sociopaths and
deviants of our society; or (c) an old problem that an increasingly
larger enterprise makes appear bigger? What are we talking
about? Are we talking about fraud, misconduct, careless practices
or error, or all of them?
In order to answer these questions, an historical perspective of
the problem is needed. Researchers at the turn of the 20th century
were no longer monks on top of a mountain. After the industrial
revolution and specifically after the first and second World Wars,
the research enterprise grew rapidly. In the United States alone
there are now one million research scientists, two and a half
million participants, and a budget exceeding 160 billion dollars
annually. So it is a big business.
Just a few examples to illustrate some of the irregularities in the
research area in the 20th century: first, the famous story of the
Piltdown Man. Piltdown is a village near London, where in 1908 fake
skull bones were found. The bones were made to appear thousands of
years old and made to look as if the skull was a mixture of a
monkey and a man, thus purportedly proving that man came directly from apes.
Charles Blinderman—and this is the last paragraph in his book The
Piltdown Inquest (1986)—said something which is true at all times.
He said that anyone conversant with Piltdown and history will
readily, if not eagerly, agree that many of the researchers shaped
reality to their heart's desire. Protecting their theories, their
careers, their reputations, all of which they lugged into the pit
with them, because that's where it was found, in the pit, in the
Piltdown. This story lasted for 40 years. There were over one
thousand research papers and Ph.D. theses written in support of the
fake skull bones.
An example now of an error that affects national policy: A 1985
report on Social Security statistics, presented by Martin
Feldstein, who was a chief economic adviser to the President, said
that an attempt to replicate Feldstein's construction of Social
Security wealth revealed that his series was incorrect. Feldstein
has acknowledged that a computer programming error was made in
incorporating the benefit provisions of the 1956 amendments to the
Social Security Act. As a result of this error, his Social Security
wealth series grew rapidly after 1957. By 1974 the series was 37
percent larger than the correct value. Our national policy was based
on those erroneous numbers.
There are numerous other alleged and proven cases of errors,
misconduct, and fraud. We have been deluged recently with
investigations, reports, and Congressional hearings regarding
misconduct in science. What are the issues of data integrity?
They are: 1) falsification of data; 2) plagiarism, which is the
hardest one; 3) suppression and selection of data; 4) misuse of
privileged information; and 5) what you are concerned with—poor
data quality.
What are the causes of these problems? In my view the causes are
numerous, but they can be categorized into two broad areas:
institutional in nature and generic in nature. The initial
institutional response to the problems of data integrity is like
that of a sick patient being told for the first time he has cancer:
denial, defensiveness, combativeness, and then, of course,
reluctance. However, due to public pressures, research now has new
regulations—the National Science Foundation (NSF) has them, the
National Institutes of Health (NIH) has them, EPA has them, and the
Food and Drug Administration has them. But none of the regulations or their
basic foundation acknowledges the very generic nature of the
problem. Thus, there is no development and formulation of policies
and procedures to prevent, deter, deal with, and remedy the problem
on a continuing basis.
In his 1981 book, The Strategy of Social Regulation: Decision
Frameworks for Policy, Lave summarized our society's
dilemma with regulation: Americans can live with social
regulation despite its cost and disruption, and can't live without
it, because of the strong public desire to curb the worst abuses of
an industrial economy. That's what you're dealing with; if all
industry were perfect, EPA would not exist. Neither of the extreme
choices—putting more resources into the regulatory agencies and
increasing promulgation of rules versus eliminating the laws and
agencies—is viable. Americans have no choice but to learn to
accomplish these social goals with less controversy and greater
efficiency.
Sociologists such as Emile Durkheim, of the late 19th and early 20th
centuries, and those who applied his theories to the sociology of
science in the past 50 years, such as Merton and Zuckerman, tell us
that deviant behavior is due to "breakdown in values," whether in
the community at large or a subgroup, such as the scientific
community. Furthermore, sociologists deduced that "social control"
was the guardian of norms and values in science.
In 1977 Zuckerman stated:
Social control in science depends partly on scientists
internalizing moral and cognitive norms in the course of
their professional socialization and partly on social
mechanisms for the detection of deviant behavior and the
exercise of sanctions when it is detected....They must
also provide for the detection of deviant behavior and
for the exercise of sanctions when it occurs. In science
the institutionalized requirement that new contributions
be reproducible is the cornerstone of the system of
social control. It has two functions, deterrence and
detection.
Zuckerman then recognizes the potential weakness in her argument when
she states:
Critics of the social organization of science contend
that in all fields insufficient incentives are provided
for regulation, and, of course, as long as
reproducibility of scientific results remains an ideal
not often realized in practice, it cannot serve as a
deterrent to the cooking of data.
This is exactly one of the reasons why dependence on social control
does not work. More importantly, the cost of reproducing large and
complex research is, in current-day economics, financially
prohibitive. It is also undesirable for society to wait a very
long time, such as 5 to 10 years, for a study to be repeated when the
research results address an important public health issue or the security
of our society. As a matter of fact, the majority of research, by
its very nature, is not contestable, and so won't be reproduced.
Not too many research findings fall in the category of cold fusion
where everyone wants to repeat those experiments.
Social controls also fall apart where research scientists promote
a product, or a drug in which they themselves have large financial
interests. We are all under certain kinds of pressures to cut
corners. Be aware of them. You in the environmental field
acknowledged these problems years ago. In the past 20 years you
have developed a framework for how to deal with the issue of the
integrity of data. Furthermore, your policies continue to keep up
with the changes in the field and the profession.
We in the research field, especially in the sciences, have a long
way to go since we barely admit there is a problem. This audience
is easy to convince that these generic problems exist in the
research field; moreover, that these types of problems exist in any
complex and large enterprise. Remember, research and development
is a $160 billion business per year.
The generic problems are really three. One is conflict of
interest. To me, that's the most important one. It exists in
every aspect of what you do and what research and development does.
Second is complexity and third is remoteness. Research is no
longer conducted by a certain individual in a small laboratory, but
rather in a large and complex group with differing expertise.
Therefore, laboratory directors rely on numerous portions of
research data from different areas. Furthermore, the project
leader may physically be at a remote location from the research
laboratory, just like the people in Washington, as well as
disconnected from the daily bench research.
I want to go back to the conflict of interest issue, which I think
is even more important and critical in research than in any other
system. For example, in the financial world one can
compartmentalize and isolate several functions that have inherent
conflicts of interest. However, in research those who collect,
analyze, and manipulate the data are one and the same.
Furthermore, in research it's not only undesirable to
compartmentalize, but rather nearly impossible. In a fertile
research environment you want all those involved in research to
know and have access to the data on a daily basis. Accessible
research data are crucial for each subsequent step in research.
Judge Learned Hand described conflict of interest in 1939, in the
Yale Law Journal:
Our convictions, our outlook, the whole makeup of our
thinking, which we cannot help bringing to the decision
of every question, is the creature of our past, and into
our past have been woven all sorts of frustrated
ambitions with their envies and hopes of preferment, with
their corruptions, which, long since forgotten, still
determine our conclusions. A wise man is one exempt from
the handicap of such a past. He is a runner stripped
for the race; he can weigh the conflicting factors of
his problems without always finding himself in one scale
or the other.
Let me give you a recent example of the insensitivity of the
research community to the issue of conflict of interest, or the
perceived conflict of interest, because it is equally important.
NIH appointed a committee due to Congressional pressure to
investigate allegations against Dr. David Baltimore, a Nobel
Laureate. The Magazine section of the Washington Post last Sunday
was about Walter Stewart and Ned Feder, two of my colleagues at
NIH who are called "fraud busters," and who have been after it for
four years. Let me note that I am not here to address the merits of
these allegations.
The original committee appointed by NIH consisted of three people.
One member was a former post-doctoral fellow and had co-authored 14
papers with Dr. Baltimore. None of these papers, of course, was the
one in question. The second member co-authored a textbook with Dr.
Baltimore. Because of Congressional pressures the composition was,
of course, changed. As all of you know, a recent draft report by
NIH has pointed a finger of misconduct at the principal author of
the paper, and the paper has been withdrawn by Dr. Baltimore
himself. Rough estimates of the amount of questionable research,
that is, sloppy work, range from 7 to 12 percent. These
estimates, as one would predict, have a tremendous immediate
economic impact.
In other words, we have close to $10 to $15 billion a year spent on
questionable research. For each dollar spent on research, we have an
additional $10 to $30 of impact on the economy in the long range.
One can see the huge annual economic impact of shoddy research on
our economy. These numbers on the economic impact of continued
questionable research are separate from its moral and ethical impact
on our society. In this century, errors, deceptions, and fraud have
either lasted too long, as in the case of the Piltdown Man, which
lasted 40 years, or have had tragic consequences, as in the case of
the Challenger disaster. It is apparent that our society cannot
fully depend on the current system of extremely diffuse and
ill-defined accountability.
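To make that arithmetic concrete, the two ranges quoted above can be combined in a rough back-of-the-envelope calculation; the combined range below is only an illustration of the scale being described, not a figure from the talk.

    # Back-of-the-envelope sketch using the figures quoted above.
    # The combined range is illustrative only, not a figure from the talk.
    questionable_low, questionable_high = 10e9, 15e9   # $10-15 billion per year of questionable research
    multiplier_low, multiplier_high = 10, 30           # long-range economic impact per research dollar
    low_impact = questionable_low * multiplier_low     # about $100 billion
    high_impact = questionable_high * multiplier_high  # about $450 billion
    print(low_impact, high_impact)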
The primary source of all the problems with integrity of data is
the conflict of interest. The other two are complexity and
remoteness. There is little you could do about them, because that
is the very nature of our huge enterprise. The conflict of
interest you could do something about.
On May 6-7, I am presenting a paper at Georgetown on Ethical Issues
in Research. I have taken statistics covering 10 years, showing that
industry advisory councils are the top group at most Federal
agencies. And I discovered that for the same merit priority score,
members of these advisory groups have twice the probability of
getting funding. These are the people who write the programs, set
the direction of the program, and decide where the money ought to
go. They can't be objective, because the statistics over 10 years
indicate they're getting twice the money for the same priority score.
It's the same thing with industry. There is no way to be objective
unless they are Mr. Spock of Star Trek, who is pre-programmed to be
objective all the time, right? People don't behave that way. I will
not behave that way. No one does. We all have conflicts of interest.
And we've got to avoid them to have an objective opinion. Try to get
people with independent sources of judgment, who don't care what the
outcome is because it doesn't affect them whatsoever. None of us is
devoid of all these connections.
There is a strong argument for having the most expert people be the
reviewers of a project. You have two issues: one is expertise; the
second is the fact that if reviewers have a conflict of interest
they will exercise censorship. A great deal of creative work might
be hampered, and you've got to weigh the two.
I review grants for Muscular Dystrophy and the American Heart
Association; I serve on the policy committee and peer review
committee of several organizations; I've even been on advisory
panels for NIH and NSF. I would say that 80 to 90 percent of the
proposals I have reviewed are not directly in my area. I don't know
these people. I don't eat and drink and meet them all the time in
meetings and know their wives and their children. I am much more
objective when I don't know the person. I believe I am a competent
enough scientist to be able to spend an extra two hours to evaluate
a project not directly in my own very, very narrow field of expertise. And I
think I would be much more objective than the expert, because the
joke is, in science, that you can nickel and dime any proposal to
death when you are an expert.
The American Heart Association uses huge councils. They review all
the research grants, and 99 percent of them are not in their area.
And they do a darn good job of evaluating proposals and awarding funding.
They've had tremendous success in funding excellent people for the
past 20 years.
I want now to turn my attention to the issue of QA and peer review
within the environmental data quality. I have spoken at length
with about 10 people from within and outside EPA. Let me first
define my terms the way I have developed them for the past 10 years
from their use and the literature in all available fields of
finance, research, science, health care, quality assurance, and
policy analysis. By the way, the field of finance is ahead of us
in terms of accountability and data audit. That may not say much,
given the S&L crisis; but the S&L crisis was basically a failure of
accountability, rather than a lack of the science of accountability
within the financial field.
First, what do I mean by quality control? QC concerns itself with
ensuring the quality of products and services during the numerous
production or service steps in order to guarantee compliance with
the originally agreed-upon specifications. Most of what I know
from people—what they call QA is really QC. The specs are already
predetermined and you're trying to follow them to the letter. QA
concerns itself with reviewing the quality of products and services
after the products or services have been used by the consumer for
a certain length of time.
What I perceive as the value of QA in research and development comes
down to three things, all related to R&D, and everything else
emanates from these three: evaluation of the current R&D activities,
actual performance of R&D, and selection of future R&D projects.
That's basically what you do most of the time. Selection of future
R&D projects helps you to evaluate where you should go next.
What are the generalized methods of achieving QA? This is, again,
not necessarily only for EPA.
One is an internal QA/QC unit, which in my view should be kept to a
minimum number of staff. Another is guidelines on conflict of interest.
Training and education—that's so important in QA to all of our
personnel, starting from top management. Remember, all of us were
trained. But we got our college degree, or master's degree, or our
Ph.D. without any concern for QA. So we are almost too late. The
new generation is where it's important; as for us, we have to be
retrained and retrained. Retraining is harder; it's just like
language, which is easier to teach a child when he's less than six
years old.
Other methods include the use of QA/QC consultants, use of outside independent data auditors,
implementation of outside QA data auditors' recommendations,
retraining and education of internal QA/QC staff, use of
independent external QA data auditors whenever a new event occurs—
that is, you have a new project, or a new direction in the company
or in the agency.
I want to give you my definition of what a data audit is. I have
read the entire 3,000-page document of the Food and Drug
Administration on Good Laboratory Practices. They have no
definition of what data audit is. None of these documents have a
definition of the auditor either—they just go around it. A data
audit is the systematic process by which objective evidence is
obtained and evaluated as to assertions about research data and
their value, to determine the degree of correspondence between those
assertions and values and established or predetermined criteria,
which can then be communicated to interested parties. This is work
that took us two years. Steven Loeb is a financial auditor, one of
the top in the country, and an editor-in-chief of Accountancy in
Public Policy. We searched the literature from the 1930's till now
on how the term data audit is used.
John Lawrence, the Director of the Institute of Quality Assurance
in Canada, said (this is going to appear in the next issue of
Accountability in Research, which is devoted to environmental data
quality assurance): "The three legs of a quality assurance
program: a quality assurance management plan, quality control, and
quality assessment."
Here is how he defines his terms. Stringent requirements mean
that the entire measurement process, from initial site selection
through to data interpretation and archiving, must be thoroughly
quality controlled. Rigorous QA procedures and protocols must be
an integral part of all monitoring programs if reliable, traceable
and compatible data are to be generated. The integrity of samples
must be ensured throughout collection, handling, and analysis.
Then he continues to define quality assessment. "Quality
assessment is administered under the quality assurance management
plan, preferably by a neutral third party." The whole thread of
what I've been talking about is having a neutral third party—a
source of an independent judgment, I call it. It has a management
and a scientific component. The managerial component, through
interagency comparison studies and audits, provides the necessary
information and advice to managers on the overall credibility of
data, the suitability of field and laboratory protocols, and the
effectiveness of internal QC.
Based on these definitions and my rudimentary understanding of the
EPA program, I would like to make a few comments, first about my
understanding and appreciation of the EPA peer review program. It
reviews, primarily, reports; no QA program probably is in it, and
no data audit is in it. It is not a focused responsibility. I'm
trying to instigate you to think and get angry and ask questions,
because I think you need an outsider to say these things. Conflict
of interest issues are not clear.
The reviewers may not be neutral on the subject or the outcome.
How truly independent are they? That wasn't clear to me. In-house
peer review: is it truly objective? That is, once you know the
staff, you know everybody in the program. Peer review may coincide
with the end of the program. It could, therefore, be too late, and
too wasteful. It has no value—the program is finished. I guess
it will help in subsequent projects which are similar.
Peer review should not be a substitute for a QA program. How
binding are the peer review opinions? Otherwise, what are they
there for? Is there a follow up? Is it really a single event
only? How thorough could a peer review be when you spend just a few
days examining the data? Is there truly a critical review of the
data, of the raw data or original data?
I would like to make a few comments on the QA program within EPA.
One: Is there a philosophical commitment of QA by everyone, the
entire EPA? Is there a QA agency culture that is similar to
corporate cultures? Is there such a thing? Because I believe some
pharmaceutical companies, especially the modern ones, have a true
QA corporate culture within them. Two: There seems to be no clear
concept of evaluating data quality, or of why it should be done.
Three: Does it truly audit
raw data? I was not able to decipher that from all the documents
I read, or from all of the 10 people I've talked to. Four: How
are all QA programs integrated? Five: QA training of scientists.
All these university contractors have zero QA training. They not
only don't believe in it, they think it's a pile of junk. Could
scientists be trained as QA officers? Six: QA, in order to be
effective today, should have access to all data, including industry
confidential data.
The way I envision QA is after the fact: to look back, peer review,
and assure the data after some time has elapsed since the services
or products were delivered. Again, you have to look at the literature and how it
was used by all these other fields. Once you know the specs and
you've put them down in writing, and you're telling all the
subordinates how to do it, that is no longer a QA program; it's a
QC program, because you know what to expect from them. QA is when you
evaluate the QC program after two or three years; you change the QC
program; you make suggestions. The procedures may be wrong; new
methodologies may be introduced; new science may be introduced—
it's much more global in nature. That's how I view QA, rather than
as primarily a check list approach.
This is based on the literature of the past 40 years. And part of
my job in writing and urging a lot of people to write on this
subject is to have a basis for sharing information. The Journal is
one, the book is another, the conference that I organized three
years ago was another; the next one is in Rome, and another is going
to be in Washington next year. This is the basis on which we share
information. How many books or articles written on the EPA QA
program, for example, have been read by everyone? Not a lot, and
that's important. I'm inviting you to write your views, to
disagree—that's what academic life and intellect are all about.
I will end by giving some bold, overall strategic suggestions.
One: Scientists and QA personnel cannot run away from the
implications of the data for policy and regulation. There is no such
thing as pure science without an impact on other parts of our
society. I have heard a lot of "we're only interested in the
science." Both you and the scientists should become involved in
policy and regulation.
Two: There should be one overall QA program, with peer review as a
component of it. QA programs are a permanent fixture, and peer
review is only a temporary event to augment an ongoing QA program.
Three: QA programs should be augmented by truly independent
outside reviewers who perform a truly scientific data audit.
Four: Discovery laboratories. Exploratory laboratories that
test-market new ideas and are nonprogrammatic, not the basis of any
paper or policy but purely the private notebook of a scientist,
should not be the subject of any QA or data audit. This is to
protect the creative process.
In the 1990's our society will demand greater and greater
accountability for the conduct of each professional group's actions.
Society will not give any professional group an unchecked license to
do what it wishes on the principle that it knows best. It will not
do this for the military, doctors, accountants, scientists, or even
QA personnel.
32
-------
JOURNEY TOWARD QUALITY: THE FEDERAL EXPRESS STORY
Gilbert Mook
Vice President, Properties and Facilities
Federal Express, Inc.
It's my pleasure to represent Federal Express, especially to be out
preaching the quality gospel. We have an opportunity to go out and
talk to a number of groups about our quality program, and I've had
the opportunity to meet with a lot of different organizations, both
government and non-government, and I've seen some interesting
things going on—a lot of positive activity.
One of the things we found is that no matter what business you're
in, chances are that you have a shelf full of "how to" books.
Everyone wants to know the secret of success, and we can always go
to a host of self-appointed quality gurus who are willing to let
you in on their secrets. Unfortunately, what works for one company
or one organization doesn't necessarily work for another. And
that's why many organizations, including Federal Express, employ
many different techniques and methods in their ongoing efforts to
define and refine their quality process.
And so, when people come to us and say, "What did you guys do to
win the Baldrige Quality Award and how can we incorporate the
special secret into our quality process," unfortunately we don't
have a one-size-fits-all answer. What we can do, perhaps, is go
over three fundamentals that have guided us.
First of all, customer satisfaction starts with employee
satisfaction. Second, service quality has to be measured, and
third, customer satisfaction is everyone's job. These points are
the basis of what I'm going to talk about with you this evening.
Our focus at Federal Express on quality began the day that we
began. For although we were really a maverick at the time in the
distribution industry, our objective then, as it is now, was to
provide time-definite delivery for high-priority documents and
packages. Fred Smith, our founder and CEO, was the first to apply the
hub-and-spoke concept to distribution practices. At the time
there were no precedents to guide us because this was a unique
approach; consequently, we had to create practices and processes as
needs arose. Though many of these processes have changed and our
operations have expanded over the years, the focus has always been
on delivering quality service.
When we began operations in 1973 we shipped eight packages on our
first official night of operation. From these humble beginnings,
we've become the world's largest air express transportation
company, delivering over 1.6 million items to 127 countries around
the world each working day. Our fleet today consists of more than
400 aircraft, which include Boeing 747s, DC-10s, 727s, F-27s, and
Cessna 208s. On the ground we have more than 35,000 computer- and
radio-equipped vehicles. Also, we have 94,000 employees to support
our worldwide operation.
To expand so far and so fast, we've had to continually reassess our
policies, our procedures, our logistics, everything. In fact, our
focus has always been on 100 percent customer satisfaction. From
the beginning we've been guided by a simple but profound three-word
corporate philosophy: People, Service, Profit. And this is as
relevant today as it was nearly two decades ago. The philosophy
guides the setting of our annual corporate goals, and we have one
goal for each of these three elements.
Our measurable people goal is the continuous improvement of our
management leadership index score, which we track through our
annual Survey Feedback Action program. I'm going to tell you a
little more about that in a few minutes.
Our service standard is 100 percent customer satisfaction, and we
needed a way to guide our efforts toward it. In order to do that, we've created a unique
measurement system of service quality indicators. This index
measures our weekly, monthly, and yearly progress toward achieving
our 100 percent goal.
Our profit goal, much like any other company's goals, is
fundamental to our long-term viability. If you don't make a
profit, you can't sustain growth.
To sum up our People Service Profit philosophy, we believe that if
you place your people first they will, in turn, deliver an
impeccable level of service, which is demanded today by our
customers, and that profit will be the consequence.
The essence of our people-first policy is that customer
satisfaction begins with employee satisfaction. Let me share with
you some of the methods that we use to demonstrate our commitment
to a people-first philosophy. One of our most important programs
is our annual Survey Feedback Action (SFA) program. SFA has been
a part of our quality process for the last 11 years. The survey
gives people the chance to express their feelings about their
managers, their service, and about pay and benefits.
Once a year every employee within every work group anonymously
fills out this survey. A portion of the survey includes a series
of statements concerning the immediate manager's leadership
abilities. One such statement may say, "My manager asks for my
ideas about work." Or another: "I can tell my manager what I
think." Or: "My manager tells me when I do a good job."
In each case, the person filling out the form may respond either
favorably or unfavorably. While the individual responses are kept
confidential, the overall results of the survey are passed on to
each manager, who then must meet with their group to develop an
action plan for resolving any concerns that were identified in the
survey. So the survey gives useful information regarding
individual managerial strengths and weaknesses.
Just as importantly, all work group results are integrated into an
overall corporate leadership score. These scores are then used to
diagnose the corporate-wide leadership problems and in addition
become part of management's overall objectives. And here's the
rub: this is how they get your attention, because these scores
are then tied to incentive compensation for both managers and
professionals throughout the corporation. In fact, if the
company-wide leadership score isn't as high as it was the year
before, no one in management receives a bonus. That's incentive.
So the Survey Feedback Action program encourages strong, even-handed
leadership and open two-way communication.
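The talk does not spell out the scoring rule, so the sketch below simply assumes a percent-favorable score per work group, a plain average across groups for the corporate index, and the no-improvement, no-bonus rule described above; the group names and numbers are hypothetical.

    # Hypothetical sketch of rolling survey responses up into a leadership index.
    # The scoring rule (percent favorable, averaged across groups) is an assumption.
    def group_score(responses):
        """Percent of favorable (1) versus unfavorable (0) responses in one work group."""
        return 100.0 * sum(responses) / len(responses)

    def corporate_score(scores):
        """Simple average of the work-group scores."""
        return sum(scores) / len(scores)

    groups = {"station_a": [1, 1, 0, 1, 1], "station_b": [1, 0, 1, 1]}
    this_year = corporate_score([group_score(r) for r in groups.values()])
    last_year = 74.0                      # hypothetical prior-year index
    bonus_paid = this_year >= last_year   # per the talk: no improvement, no management bonus
    print(this_year, bonus_paid)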
Another people program is our Guaranteed Fair Treatment (GFT)
Program. The aim of the GFT process is to maintain a fair
environment in which everyone who has a grievance or a concern
about his or her job, or who feels that he or she may have been
mistreated for whatever reason, can go through and have these
concerns addressed through the management chain. A management team
meets weekly to review GFT cases that have not been resolved and
have progressed up through the three-step internal process to the
final stage, which we call the appeals board. What we've found
over the years is that this is not the fastest way to address an
issue, but we think it is the fairest.
Both the Survey Feedback Action program and the Guaranteed Fair
Treatment program promote open communications. By creating this
environment we have found that people are more apt to take part, to
make suggestions for improvement, to question decisions, and to
surface concerns. We work hard at keeping these lines of
communication open within and between divisions, departments,
management, and the front line worker.
For example, we have an 8:30 a.m. operations meeting every day and
this serves as an example of some effective cross-divisional
communication. At these meetings representatives from our various
divisions around the world come together to discuss
major operational problems encountered during the previous 24
hours. We run essentially an entire war plan every 24 hours. So
people are gathered around the table or participating via worldwide
conference call—these people will determine who is going to
solve each problem that's been identified, how it's going to be
done, and all these action plans must be developed and implemented
within 24 hours.
This continuous check on quality of service enables us to find ways
to reduce failures. It's also another way to communicate
effectively.
One of the things we've found over the years is that gone are the
days when senior management was able to sit down and face the
entire work force to discuss problems. Today we've gone to the
high-tech solution. We now have the technology to share information
quickly and effectively via our satellite-link television network,
FX-TV. New information is quickly and effectively shared through
live phone-in question-and-answer sessions between top officers of
the company and employees. This front line feedback is vital to the
quality process. Today all major presentations are broadcast over
the FX-TV network for employee viewing. Effective communication is
critical to keeping everyone aligned with our goals.
Training is also fundamental to the success of our quality process.
Employees must know what's expected of them and be given the proper
training in order for them to be effective in their jobs. At
Federal Express all customer contact people receive extensive
training before they assume their jobs and deal with customers.
For example, our call center agents are given six weeks of
intensive training before they'll ever take the first call. Senior
agents provide one-on-one coaching to help the trainees become
familiar with the computer terminal and its functional screens.
Every six months couriers and service agents and customer service
representatives must participate in a job knowledge testing
program. These tests are online and can be taken at any one of our
over 25,000 terminals around the world. After the tests are
completed, the computer tallies the score and stores it in the
employee's training record. Within 24 hours the agent or courier
receives the pass/fail results of his test. Along with the test
results, each person receives a personalized prescription that
targets areas requiring review. It gives them a list of resources,
training materials, and interactive video lessons to help them get
back up to speed.
We're very serious about training. If, for example, a customer
service agent were not to pass the test, he or she would be
relieved from duty for eight hours of remedial training. This
training can be obtained through interactive video lessons. By
going through this training, employees would progress at their own
rate until they're ready to take the tests again. So all our
training directly supports the continuous improvement in the
quality of our service.
We also have a variety of reward programs that encourage people to
work toward providing the highest possible quality of service. Pay
for performance is written into everybody's job description from
myself as a senior officer, down to the courier that you may meet
in your workplace. A manager can bestow what we call a Bravo Zulu
Award, which is U.S. Navy jargon for "well done." This can be
bestowed on the spot. This commendation is given for clearly going
above and beyond one's job responsibilities, and can bring with it
monetary reward in the form of a check, or a non-cash award, such
as tickets to a dinner or theater tickets. For the non-management
employee we have what we call the Golden Falcon Award, which is a
recognition of service above and beyond. Teamwork is applauded
through a monthly Circle of Excellence Award given to the top
performing work unit or station in the field.
We've found that when people have the autonomy to make decisions
that affect their performance and its outcome, and they have their
ideas listened to and acted upon, they have greater ownership in
their job. I'll give you an example: We didn't just create a tool
like the Cosmos Super Tracker and hand it out to our couriers for
them to use. We asked for their input. We asked them to help us
design it and work with us to make it better before we even rolled
it out for the first time. By providing real time information
through our Cosmos Tracking System, we are proactively trying to
reduce customer dissatisfaction.
One of the things we've discovered is that for a customer, not
having information about a package is equally frustrating, if not
more so, than receiving a late package. According to studies that
we've made, 70 percent of customers with complaints don't complain;
they just go away—often for good. To avoid this consequence, we
believe that you must quantitatively measure service quality.
Many of us who have studied quality have read of W. Edwards Deming.
He said that you cannot manage that which you can't measure. For
many years we measured our service levels by measuring our success
rate, the percentage of on-time deliveries. And, indeed, we found
out that 99 percent looked pretty good. But in 1985 we realized
that if we really wanted a true picture of our performance, we
needed to begin measuring our performance by our customers' standards
and perspectives.
We initiated customer satisfaction studies that afforded an in-
depth look at the way our customers perceived our service. We set
about interviewing, quarterly, a sample of our customers who
shipped with us—some exclusively, some with our competitors. They
were each asked to participate in a 20-minute interview covering
about 50 areas. Two years into these studies we realized that
while we were becoming more aware of customer needs, we were
missing the mark because we were falling prey to the law of large
numbers. One of the things we found is that even a 99.1 percent
success rate translates into 2.5 million failures a year.
That's 2.5 million customers who may go away.
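Back-solving from the two figures just quoted gives a sense of the volumes involved; the implied shipment count below is derived from the talk's numbers, not stated in it.

    # Rough check of the "law of large numbers" point.
    failure_rate = 1 - 0.991            # a 99.1% success rate means 0.9% of shipments fail
    annual_failures = 2_500_000         # figure quoted in the talk
    implied_volume = annual_failures / failure_rate
    print(round(implied_volume))        # roughly 278 million shipments a year (implied, not quoted)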
By 1987 we decided we needed a more rigorous method to measure the
quality of our performance. So we turned our measuring stick
upside down. We stopped measuring the percentage of our success
and began measuring the number of actual failures. Our tracking
system provided the data base. From this we developed our Service
Quality Indicators. We call it SQI.
SQI is a service measurement index which tells us exactly how much
improvement we've made in reducing our errors across 12 critical
categories of service. Here's how the system works: Weights are
assigned to each component, from one to ten points per failure,
according to how badly these failures would frustrate a customer.
For example, a late package is weighted at one point, and this,
indeed, is less upsetting than a damaged package, which is weighted
at 10 points. The nature of the failure is just as important as the
total number committed.
What we've done is to go out and measure all the things that annoy
our customers and then measure ourselves against that. A missed
pickup is a pretty big hit. That's when someone calls and says,
"Hello, I have a package." And no one shows up. That's an
egregious error. We also have lost packages. What could be worse?
Damaged packages. So these points are assigned to each one of
these sins as we go through.
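A weighted index of this kind is easy to sketch. In the fragment below, only the one-point late-package and ten-point damaged-package weights come from the talk; the remaining category weights and all of the failure counts are hypothetical.

    # Sketch of a weighted service-failure index in the spirit of SQI.
    # Only the late-package (1) and damaged-package (10) weights are from the talk;
    # the other weights and the counts below are hypothetical.
    WEIGHTS = {
        "late_package": 1,
        "missed_pickup": 10,
        "lost_package": 10,
        "damaged_package": 10,
    }

    def sqi_points(failures):
        """Total points for a period: failures in each category times that category's weight."""
        return sum(WEIGHTS[category] * count for category, count in failures.items())

    week = {"late_package": 1200, "missed_pickup": 3, "lost_package": 2, "damaged_package": 5}
    print(sqi_points(week))   # lower is better; tracked week to week against the 100 percent goal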
One of the things we found is that often a customer may not even be
aware that we've failed. For example, suppose we guaranteed a
priority one service by 10:30 the following morning. If we were to
deliver that package at 10:31, our system that counts the delivery
knows it's a failure, even though the customer may not even notice
it. Why would we go to the trouble of docking ourselves for
packages delivered just one minute late? The answer's quite
simple. One hundred percent customer satisfaction is our goal and
nothing less.
All of these efforts, which involve the setting of clear goals, the
creation of a people-first environment, the cross divisional
communication, the extensive use of technology, training, rewards
and recognition systems, and sophisticated measurement systems,
have been encouraged and developed so that people are prepared to
deliver quality service.
Part of improving our process has been the development of Quality
Action Teams (QAT's). And through the years we've seen more and
more people from all areas participating in these QAT's. The
critical success factor is that no one comes closer to the
expertise about a particular job than the person who is doing the
job. We now have a process of measuring quality efforts, and once
a quarter each division goes through an internal selection process
in which they choose their best quality success stories. Teams
chosen are considered to be the best of the best. We have a
ceremony; team members present their quality success stories to
management; and some of the results have been astounding from all
levels.
One of the things that I've had a difficult time preaching to
people about quality is that every person in every job is an expert
about his particular area, and it's those little things that add
up. Those are the real expressions of quality, as opposed to
coming in with some sweeping managerial changes.
Last year, for example, we had a QAT from our Memphis superhub.
They developed a recycling plan. We are now recycling such things
as steel, batteries, plastic, wood, pallets, tires, waste oil, and
paper, and the results in one year—just in this group—are savings
of over $200,000. Now this program is company-wide and
we're recycling all over the world.
We talked about our 12 service quality indicators. Well, we formed
what we call 12 root cause teams to enhance the quality process,
each of which looks at one of these service quality indicators. And
although each one of these teams is led by a corporate officer,
there's really no star quarterback. As with all the quality action
teams, every member is equally empowered regardless of what his
position may be in the organization. So everyone, especially our
frontliners, has the option, the time, and the power, to deal with
customer problems. That empowerment translates into better service
and more satisfied customers.
We formed quality action teams across divisions to improve
responsiveness to our internal relationships. When analyzing the
way people rely on one another throughout our operation, one of the
things that we found is that everyone's job is to support someone
else. That someone else may very well be their internal customer.
So, if you can do a good job in providing service to your internal
customer, when that service reaches the external customer,
satisfaction is built in at every step and it then becomes a
permanent fixture.
What models did we use at Federal Express? We had a quality guru
organization come in and give us a list of procedures. Basically
it was a language. It seems to me that the most important thing is
to get everybody talking the same language so that they can
communicate. And that's all it was. Our corporate quality
structure is an Executive Quality Board and a Quality Advisory
Board.
The key thing in getting the quality program going is that it has
got to be bought into and demonstrated by top management. It's
not something that you can send down and say, institute this
quality program. Because if the people don't see the top guys
doing it, they're not going to buy it. And that, to me, is worth
a thousand books on quality. You've got to preach it if you want
to get other people to buy into it.
Everybody wants their organization to run better. Everybody wants
to have a more profitable organization. And one of the things that
we found—and this is an example of why management would buy into
this thing—is that we spend an inordinate amount of money ensuring
100 percent customer satisfaction. We charter extra airplanes, we
run people overtime, and it costs a lot of money. The cost of
quality is very high. If you find something to change while going
through the design phase, it costs maybe 10
dollars. But if you change something once it's operational and out
in the field, it costs you 100 dollars or more.
The real truth is not that we were blessed with any more wisdom
than anybody else, but when it gets right down to it, quality was
a tool that we thought we could use to make our company more
profitable. And that's what gets people's attention.
Management puts the procedures in place. I am the quality disciple
in my division. We do have some quality professionals but I don't
have any in my division. And the reason I don't is because it's
always been my theory that as soon as you have somebody else to
come in and perform the task, management then assumes that they're
off the hook to be accountable. So we don't have any in our
organization. Everybody's got to do it. And everybody's measured
by it. Not measured by the number of events, but rather by the
performance of their jobs and it's got to continually improve.
People talk about the quality process—that's a scary word.
A quality process can be a group of people who run around with
clipboards checking whether you are using the right language. Our quality
philosophy is that everybody buys into the fact that if I
communicate with somebody, and if we have a customer supplier
alignment, then I know what's expected of me.
We've got a form that we fill out. In one column is "How do we
think we're doing in providing service to this customer?" The
other side, the side the customer fills out, asks, "How is this guy
really doing?" And guess what? The numbers aren't the same. So
the idea of the customer-supplier alignment is to get his
perception and your perception to be the same. And then you make
a contract and you write it up. And then you are measured by
meeting the elements of that contract. And it works. It gets
people's attention.
Our well-communicated focus on customer satisfaction saved the day
for one of our customers. A fellow by the name of Michael Davidson
of Atlanta was unable to meet the cutoff time of his business
service center, so he ran to the Atlanta Hartsfield airport with 150
packages that needed to go out. They happened to be his company
payroll and he was very interested in getting it out. He came
charging in during what was the busiest time of the shift for
the young man who was accountable for getting that aircraft out and
loaded. His name was Mr. Augustus. Realizing the time constraints
on this one guy, he shanghaied a service agent and a couple of
pilots and together they worked to code the package so they could
get the correct sort designators and get the freight on its way.
All this was done, by the way, without jeopardizing the on-time
departure of that particular Newark to Atlanta flight that Mr.
Augustus was assigned to get out. Thanks to his dedication in
getting this job done, he did earn the Golden Falcon Award that we
talked about earlier. For Mr. Augustus and all of us at Federal
Express, quality is an objective that leads to 100 percent customer
satisfaction.
Driven by this quality-focused mind-set, we then applied for the
Malcolm Baldrige Quality Award, even though we were struggling
through a very challenging year. We had just integrated with
Flying Tiger Line and had opened up a whole new segment of our
business. Going for the Award and winning it has been quite an
educational experience for all of us at Federal Express. And
naturally, we're all proud of being the first service company to be
named as a recipient. But we're really only as good as our latest
pickup. So whatever loyalty we have earned in the past must be
preserved, because loyalty in a competitive marketplace is a very
fragile commodity.
We found that the real value of the Malcolm Baldrige Award is the
opportunity to rigorously evaluate our own company. Our quality
processes have improved as a result of going through this.
Applying for the Malcolm Baldrige Quality Award will do the same for
any company that's willing to undertake the challenge. One of the
other things that concerned us upon receiving this award was that
we would run the risk of everybody saying that we have now solved
the quality problem and let's go back to the regular way we've been
screwing up the business for years. It's in many ways a mixed
blessing.
With all this in mind, we do keep a watchful eye on our Service
Quality Indicators and an open mind as we look into the future. We
in industry today are fighting for our existence in a global
marketplace where our competitors are ahead in the quality arena.
We can go through a long list of industries that are no longer with
us or no longer viable because of this. And so, while some may see
the Malcolm Baldrige Award as a culmination, it certainly is not to
us—it's only just another landmark on our journey toward achieving
our objective of 100 percent customer satisfaction.
Fortunately, it's not a journey that we at Federal Express take
alone. An example is this meeting here today. Today countless
corporations have begun to align themselves with suppliers who
share their penchant for quality. These organizations realize that
quality performance is becoming a necessity for survival. The
quality improvement process at Federal Express is aimed at having
all 94,000 employees believe that our goal of 100 percent customer
satisfaction will always be the key to our continued prosperity.
Thank you very much.
41
-------
GOOD AUTOMATED LABORATORY PRACTICES
PANEL DISCUSSION
Moderator:
Kaye Mathews
EPA National Enforcement Investigation Center
Panelists:
Jeff Worthington
TechLaw Inc.
Joan Fisk
EPA Office of Emergency and Remedial Response
Rick Johnson
EPA Office of Information Resources Management
Kaye Mathews
We will be focusing our attention on the trend toward computer
automation in the laboratory and EPA's progress toward addressing
these trends with its draft guidance "Good Automated Laboratory
Practices: Recommendations for Ensuring Data Integrity in
Automated Laboratory Operations."
Joan Fisk
I'll be talking about Good Automated Laboratory Practices Guidance:
A Perspective for Superfund Data Collection Activities. By way of
background for those of you who aren't familiar with Superfund's
Analytical Operations Branch, we have quite a few responsibilities
related to analytical services under our cognizance. We provide
environmental service assistance teams to each region. These are
contractors within each region who do things such as data review,
analysis of samples, and reviewing QA plans.
We also are the coordinators for developing guidance for data
review or data usability. We are the leads generally for Superfund
methods development in coordination with the Office of Research and
Development. We also maintain an extensive database called the
Contract Laboratory Program (CLP) Analytical Results Database (CARD).
We also provide large routine analytical services through the CLP
and special analytical services.
We are very much involved with quality assurance (QA) oversight and
quality control (QC) programs for Superfund. We work with Duane
Geuder, Superfund's QA Officer, and our QA Coordinator, Jim Baron.
We are involved in the QA oversight of the CLP and are also trying
to get into other areas where analyses are being done outside the
43
-------
CLP. We're in the process of making every effort we can to put
performance evaluation (PE) materials into the regions for all
these uses.
We've seen this Good Automated Laboratory Practices (GALP) as being
very important in looking beyond the traditional in our QA
oversight because of the changes that have happened in the industry
over the past few years. In the last few years, the laboratory
community has become heavily computerized. There are many reasons
for this. The market pressures with all the environmental
legislation—the Comprehensive Environmental Response,
Compensation, and Liability Act (CERCLA) or Superfund, the Resource
Conservation and Recovery Act (RCRA), the Clean Water Act, Clean
Air Act, and Safe Drinking Water Act—are such that there's an
enormous amount of analytical work to be done out there.
Fortunately the technology has marched along with the need so that
the capability is there for the laboratories to be automated and
therefore able to get their jobs done.
However, along with all the benefits that come with automation, we
have gotten a new set of problems to deal with. We have many
things that affect or compromise data integrity. In the old days
we used to worry about things as simple as manual transcription
errors. Now we have to worry about computer errors. We're talking
about things such as data entry errors, computers talking to each
other, storage of data, and electronic transfer of data whether it
be on magnetic media like diskettes or over the telephone lines.
These come on top of the sources of error we already had, such as
sampling errors, analytical errors, and operator errors.
In addition, we've had another problem, which is that alleged fraud
is compromising the integrity of our data. It's become much easier
to cheat in the laboratory community because of all these neat
things you can do with computers. We find that there have been
instances where laboratories have manually edited their data just
to meet the contract requirements and there was no technical
justification for doing this. All this has happened because it has
gotten much easier.
We believe that the GALP guidance will promote data integrity
within the analytical laboratory community. We think it will
assist us in ensuring the quality of data. It will not guarantee
it, but we believe that following the guidance will give us a
better chance at success.
It's important that the laboratory not look at the document at face
value. They can use it as a foundation for data management/
automation practices, but it's important that they look at it as a
minimum set of requirements of things they must address. This does
not mean that there are not other things that need to be addressed.
They have to establish within their own laboratory the systems that
have to be in place in order to meet the requirements they set up
44
-------
for themselves. It's important that you have a system in place to
know when it's working and when it's not. You have to have checks
in place. And you also have to be able to have a system where
you're going to correct your problems when you find that you
haven't met success.
In connection with the fraud issue, we believe that if you
institute some of the things in the GALP document, such as some of
the ideas on security or audit trails, it's going to be more
difficult to commit fraud. It may be at the point where it's
easier to do it right the first time, or cheaper to do it over
again than it is to go through manipulations with the data just to
make it look right. While this is not going to prevent fraud, it
will make it more difficult.
The laboratory community is familiar with this document. Rick
Johnson gave a talk at our annual data management caucus last July
and the audience was very enthusiastic. The lab community has
been anxiously awaiting this document ever since. We plan on
having another data management caucus this upcoming December. We'd
like to think there will be people using it by then, and we'll be
able to have some success stories related to it by the lab
community. We do plan on providing the GALP guidance to our
community when it's available for release. I think it probably has
to undergo a revision before that.
We have included some data management requirements and tenets of
the GALP in our existing CLP contracts and I believe that our
special analytical contracts have also included some of these GALP
tenets. We have put in hardware and software requirements; we have
added personnel that were not ever listed in our contracts before;
and we've clearly defined some of the security levels that are
absolutely necessary. We've required something that's essential to
us in looking at data, and that's the audit trail. We think that
with these security things in place it may make fraud more
difficult. As it makes it more difficult to access the data
system, you can eliminate the number of people that have access and
limit the potential for people who are going to do bad stuff to
your data.
Also, as far as the audit trail goes—if it's required that you
identify where you've made a change, qualify the data as having
been changed, and sign off on the change, you may be more reluctant
to go in and make that change if you do not have a technical
justification. So we do think that these things will impede,
though not prevent, fraud. We have added some items to our
repertoire that reflect data management issues. We think we can
improve on it, and we are going to be requesting assistance from
Rick in helping us perfect our audit process so that we would be
good at doing the QA oversight job on data management practices of
our community.
45
-------
I consider the GALP guidance as a great breakthrough for EPA. It
was not written specifically for Superfund, but it does provide a
basis for providing data of integrity from computerized systems
that are critical for Superfund decision-making. I believe it is
in the mutual benefit of the laboratory community and the EPA
clients to take advantage of the GALP in our mutual striving to
improve data quality and integrity.
Jeff Worthington
I'll be providing a testimonial to some of the types of
observations we've seen in laboratories over the last five or six
years. First, I want to give you some information about where our
experience and background came into this type of observation.
TechLaw provides support to the Federal government primarily as the
Contract Evidence Audit Team (CEAT) contract to EPA's National
Enforcement Investigations Center. We have been doing this from
1980 to the present. We have conducted 850 evidence audits (field
and lab). We also participated in the environmental survey as well
as the Love Canal habitability study. I think probably in some of
the study areas is where we first saw, for instance, the use of
electronic data transfer.
Besides going in to audit laboratories that are providing data to
EPA, we've also assisted in litigation support in over 500 cases.
The type of support sometimes entails helping prepare samples and
sample evidence for trials for EPA or Department of Justice
attorneys. In addition, we've conducted audits of PRP search
reports that have been developed by other contractors. TechLaw
also has some technical enforcement support, which is along the
lines of litigation support as provided under the CEAT contract.
With the Department of Energy we've recently spent some time
designing document management systems and preparing technical and
evidential audits. We've also worked as expert witnesses for the
Department of Justice in assembling environmental data.
One of the trends that we're seeing in laboratories is increased
use of laboratory information management systems (LIMS). In 1985,
when we were looking at laboratories, many of the laboratory
directors would try to drag you off to the side and say, "Don't you
want to see my nice new toy, my new LIMS?" And we'd say, "No, we
want to see your papers." And over time we've started to see the
paper disappear into the LIMS and to understand that the evidence
is starting to enter into computer systems.
The second trend that we've observed is the use of electronic data
transfer systems by laboratories. They're doing this for several
reasons. The first reason is to facilitate transfer of data.
These can be used for quick turnaround, and it's a good marketing
tool. Some of EPA's programs and other clients are asking for this
type of service as a deliverable.
46
-------
The third trend is the replacement of handwritten records with
direct computer data entry. This started first, probably, with the
analysis area. Many people who are most comfortable with
computers—the analysts sitting at their GC or GCMS—began to
think, "Why do I need this log book; why don't I just start putting
this information directly into the computer?"
You sometimes see that same activity in the preparation area and
sometimes in sample receiving. But when you look at information
generated by a laboratory, both on handwritten records and
information inside a LIMS, sometimes the information disagrees.
This is for a variety of reasons. Sometimes the LIMS is not
designed as a QC or QA tool; it's designed just to track
information within the laboratory. For that reason people use it
to say, "Was this thing prepared or was this thing analyzed?"
Often those records conflict, because maybe the laboratory manager
went to the LIMS two days after samples were prepared and then
punched in all the samples that were analyzed, and maybe used
different initials and names. Often when you're preparing
evidence, you see that the information that's summarized in those
systems doesn't always agree with the handwritten records. That's
something that needs to be reconciled. It's not necessarily a
problem for the evidence.
Also, there's a lack of written procedures for the use of software
and hardware systems, both in laboratories and in field operations.
There's seldom any life cycle documentation of software
development. I think that problem is universal across all
businesses, not just EPA and its contractors. Often you will not
find life cycle documentation for the software. Many times
there is no clear definition of the responsibilities for the
functions related to software and hardware systems.
There is also no check on data accuracy. That depends very much on
the type of business, but often in laboratories and field
operations there may not be anybody who actually checks to verify
that the information is accurate.
We could almost think of the evidence as actually moving into
computers—the papers are disappearing in some cases and that's
where the evidence is. There's more and more reliance on
computers, and not just in the laboratory. And with future
cleanups being the thing of the 1990's, maybe we'll see more use of
electronic data transfer to transfer information into the field and
people's decisions will rely on the data that is transferred.
People are addressing these concerns now.
In conclusion, there are several things I'd like to touch on.
First of all, guidance is needed for this area. The GALP is an
example of a guidance document that covers the laboratory areas.
I would also suggest that other areas such as field or sampling
and analysis plans—anywhere there's a computer that's being used,
47
-------
either in the collection and generation of data or by any other
contractor—need to address many of the same types of issues.
Secondly, if you're in the process of having a computer replace
your handwritten records now, or you're working with somebody who
does, print out your computer record in a timely manner. For
instance, if it's an analysis activity, have them print out their
GC log, look through it, verify that it's accurate, and then sign
and date it.
Lastly, as Rick Johnson presents the GALP, I'd like to ask the
audience to consider both computer software and hardware guidance
for potential inclusion in any QA program or project plans in the
future. It's being addressed in the EQA-1 draft document now and
we strongly support that.
Rick Johnson
Those of you who aren't particularly computer literate, hang on,
because the problem that I'm going to describe is not a bunch of
technical issues so much as an overall set of procedures and
understandings in management—practices in the laboratory to make
things better than they currently are in a number of labs. My
purpose is to describe our program for ensuring the adequacy and
integrity of computer resident data. What I'll do is give you a
feel for the basis of it—where we got into it, why we got into it,
what it's all about, and where we think we are with it today.
I'm on what's called the Scientific Systems Staff. The Scientific
Systems Staff is in the Office of Information Resources Management,
one level below the Administrator. There are basically five
program offices that report to the Administrator. There are
several others, and then all of the Regional Administrators. Our
mission is to help the Agency out in the area of information
technology, anything from providing the resources for hardware and
software development to overseeing contractor's work in this area
to developing and managing all the Agency's information assets to
promoting data sharing and integration. This last one turns out to
be something that's important to all of us here at hand, because
this is what Good Automated Laboratory Practices are all about—the
issue of data sharing and data integration.
Why did we ever start on this in the first place? About two and a
half years ago we started becoming clearly aware that there was a
rise in automation in the laboratories. Problems were beginning to
surface. The auditors that audit for the GLP program and also in
the CLP programs were beginning to wonder what to do with all of
the automation that was coming into the laboratory. There was no
uniform set of EPA principles to guide laboratories as they
automated. People were making expenditures on hardware and software
that clearly were not meeting perceived needs that were developing
as the Agency moved forward in its information management policies.
48
-------
In addition to all this, there were a number of activities going on
in the way of development of requirements for information
management and information dissemination.
Collectively all of these things looked as if this was something we
wanted to check into and see if there was some genuine need for the
assistance of our office in this area. What we did is put together
a program to identify as much information as we could in various
areas to be able to assess whether or not we needed to go any
further with it.
It was a several-pronged program. The first was to go out, and
through a combination of site visits and a questionnaire survey
that was sent out to about 200 labs, assess what the state of
automation was in laboratories, and if there was a problem,
determine if we needed to get involved with it.
Secondly, we wanted to examine all the existing procedures that
were around and not go about re-inventing the wheel. Automation
has been in the banking arena for a number of years; perhaps there
are some lessons there that we could learn. Perhaps we could also
find out things from the clinical laboratory industry. Forensic
toxicology and stuff like that have been around for a number of
years, and they've put together a set of principles and practices
in their laboratory and are also automating. Why not go to them
and see what they've done, and find out if there are lessons to be
learned from there?
Third, we wanted to look at state-of-the-art hardware and software
technology as it applied to laboratories. Perhaps there were some
readily available fixes there. There are some things that can be
done...capitalized on...utilized. Perhaps there's software in the
labs that vendors are selling that could do a much better job, and
maybe there are some hardware fixes, simple things that could make
a lot of sense.
Since the Good Laboratory Practices (GLP's) have been around for a
number of years and the computer is already under the aegis of the
GLP's, why not look into the GLP's and see, in fact, if there
aren't some lessons to be learned that we could capitalize on as
well. They've withstood the test of time; they've been under
review by the scientific community for some 10 years and were
finalized about two years ago by EPA. They'd been in place at the
Food and Drug Administration for a long time. Why not look into
those and see if there are things that make sense and apply to the
computer when it's put into the laboratory?
An important consideration was the fact that there were existing
requirements already on the books that we should probably be
incorporating into whatever guidance we come out with.
Finally, there were a number of requirements underway that could
49
-------
also affect laboratories—the electronic data transmission
standard, some of the system design and development guidance, the
computer security act, and a number of other things.
These were basically the areas we charted out. We had people with
different capabilities looking at each one of these things over the
last couple of years. And what we found out, first of all, was
that the state in the laboratories, regarding their use of the
computer, was not as good as we would like to see it. Physical
security was typically lacking. You could easily walk into a
laboratory in many cases without any ID—without any checks or
balances.
System access was not protected. People could get on the system,
log into it, and there were no passwords in place or voice
recognition. I think this was the case in about 50 percent of the
labs.
Probably one of the biggest areas where problems occur is in data
verification. We are moving from the area of laboratory notebooks
and people are beginning to key in on the computer all their
information—setting aside the fact that it's also now being
electronically transcribed from instrumentation. There were very
few verification procedures in place in a number of the labs.
There was no double check, for example, to see that data were
right, and no blind entry. Generally, if there was one problem
across laboratories that most directly threatened data integrity,
it was probably verification.
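To make the idea of verification by blind second entry concrete, here is a
minimal sketch in Python of what such a check might look like; the record
fields, the sample names, and the verify_blind_entry function are hypothetical
illustrations of mine, not drawn from the GALP text or from any EPA system.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Entry:
        """One keying of a result, with the operator who typed it."""
        sample_id: str
        analyte: str
        value: float
        operator: str
        keyed_at: datetime

    def verify_blind_entry(first: Entry, second: Entry, tolerance: float = 0.0) -> bool:
        """Accept a value only when two independent keyings agree.

        The second operator never sees the first entry (blind keying), so a
        transcription error by either person shows up as a mismatch that has
        to be resolved before the value goes into the database.
        """
        if first.operator == second.operator:
            raise ValueError("blind verification needs two different operators")
        same_item = (first.sample_id, first.analyte) == (second.sample_id, second.analyte)
        return same_item and abs(first.value - second.value) <= tolerance

    # A transposition error (13.47 vs. 134.7) is caught instead of slipping through.
    a = Entry("MX-1042", "lead", 13.47, "jdoe", datetime(1991, 4, 22, 9, 5))
    b = Entry("MX-1042", "lead", 134.7, "rsmith", datetime(1991, 4, 22, 9, 12))
    print(verify_blind_entry(a, b))   # False -> flag the result for review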
Documentation was sketchy. A number of the labs that we went into
had no idea what version of the software was used to create what
data sets. In a number of cases they had none of the versions of
the software available, yet they were relying on this data to make
decisions and provide the Agency with environmental information.
And then there were a host of other things—anything from the lack
of competent staff available to the adequacy of the staff. In some
cases we saw a couple of labs where people were working 70- and 80-
hour work weeks. I don't consider that to be adequate resources
for a lab, but some people may.
When we looked at automated financial systems we learned a number
of things. They've been in place for about 10 years. One of the
first things they do is perform a security risk assessment. They
look into the environment of their operation and determine where
various breaches to the potential integrity of the data are,
analyze them, and come up with an overall schematic that lays out
where the problems are. Then they go through what's called a risk
management program, where they effectively respond to each one of
the security needs that they found out about. Some of the
components of that system were access management programs: varying
the password; requiring people to make periodic changes to it; when
50
-------
somebody leaves the organization, their password is automatically
dropped; verification procedures; double entry; and sometimes blind
keying in the second entry. Audit trails were standard, as was
hard copy retention.
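As a rough illustration of those access-management controls (periodic password
changes, and dropping access the moment somebody leaves), here is a small
hypothetical Python sketch; the AccessManager class and the 90-day policy value
are illustrative assumptions of mine, not features of any system studied here.

    from datetime import date, timedelta

    PASSWORD_MAX_AGE = timedelta(days=90)   # illustrative policy: periodic change required

    class AccessManager:
        """Toy model of the access controls described for automated financial systems."""

        def __init__(self):
            self.accounts = {}   # user -> {"password_set": date, "active": bool}

        def add_user(self, user, today):
            self.accounts[user] = {"password_set": today, "active": True}

        def change_password(self, user, today):
            self.accounts[user]["password_set"] = today

        def deactivate(self, user):
            # When somebody leaves the organization, their access is dropped at once.
            self.accounts[user]["active"] = False

        def may_log_in(self, user, today):
            acct = self.accounts.get(user)
            if acct is None or not acct["active"]:
                return False
            # Refuse the login once the password is older than the policy allows.
            return today - acct["password_set"] <= PASSWORD_MAX_AGE

    mgr = AccessManager()
    mgr.add_user("analyst1", date(1991, 1, 2))
    print(mgr.may_log_in("analyst1", date(1991, 3, 1)))   # True: password still fresh
    print(mgr.may_log_in("analyst1", date(1991, 6, 1)))   # False: change overdue
    mgr.deactivate("analyst1")
    print(mgr.may_log_in("analyst1", date(1991, 3, 1)))   # False: account dropped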
When we looked at hardware and software technology we found that
generally there is no single guarantee that ensures data integrity
in either hardware or software areas. And surprisingly enough,
there were no established software standards for data integrity in
laboratories. There are a number of laboratory information
management vendors selling a lot of different software, yet there
were no standards in place. Some LIM systems, for example, had
password protection, some of them didn't. Some had certain data
backup and data recovery features, others didn't.
Vendors can customize software to meet whatever needs you want; the
problem has been that nobody's ever had a clear understanding of
what was needed or what is needed. So when people went in to
determine what it was they needed they came back to the software
vendors and had a prescription that in many cases was probably good
for what their perceived needs were, but that lacked the
requirements for an information system. I blame that in part on us
as much as those folks that are dealing with us, because we never
really had in one place a common compendium that lists out all of
the requirements and understandings of what we feel constitutes
good management practices in the laboratory to ensure data
integrity.
I'd say the software manufacturing business is in bad need of some
overall standardization. And I think we really found it out,
particularly in the laboratory environment. There've been
countless investments made in off-the-shelf LIM systems
that many companies have been sold a bill of goods on.
Some other interesting things we found out were that some of the
advancing technology that some of you bump into now in the grocery
stores—optical scanners—can be adapted to automation in the
laboratory and ease some of the problems in our transcription
errors. Also, magnetic ink readers have become standardized across
the entire banking industry, virtually worldwide. And "smart
cards"—they're like the strip of information on the back of your
banking card that can hold an entire portfolio of information and
can be updated as you move from various points in the lab to
others. That's not out of the question at all any more.
We looked at our GLP regulations that are in place now and found
out a couple of things. One of the GLP's that was in the toxics
program and the pesticide program, which had been adopted by other
EPA programs, allows for raw data to be just about anything you
want it to be. "Raw data may include...computer printouts,
magnetic media...and recorded data from automated instruments" (40
CFR 160.3 [FIFRA] and 792.3 [TSCA]).
51
-------
Wherever the data are first recorded, that is considered the raw
data, but there is no requirement that the raw data have to be
recorded in any one form or the other. Laboratories have the
latitude to choose what media they use and how they go about it,
but once they do it and decide on it, that is raw data. Then,
depending on the medium that it's retained on, there are certain
requirements regarding the retention of that raw data.
The issue of what constitutes the raw data for the laboratory comes
down to this: wherever the data are first recorded and however
they're first recorded, that is considered the raw data. That raw
data then must respect the data change requirement, the data entry
requirement, and the data verification requirements which are
spelled out in the GALP. Where the raw data reside somewhere other
than the computer, the computer copy should not be relied on as the
raw data. The person who was in charge of the machine when the raw
data were entered, along with the identification of that machine,
should be attached to the raw data. There are all sorts of
specifications for acceptance
testing of software and hardware configurations that are required.
Finally, the testing facility encompasses those operational
units that are being used or have been used to conduct studies,
e.g., the computer. Therefore, if the computer is being used in
the conduct of a study, it also must come under the GLP
requirements.
A couple of other things—the GLP requirements apply to automated
systems. There are also requirements for the documentation of
personnel qualifications, oversight of QA, and standard operating
procedures (SOP's).
Regarding existing requirements, we had a couple of them in place.
EPA's Electronic Data Transmission Standard now makes it possible
for people to report data electronically to the Agency, with the
standard in place to allow people to do that, and with an
understanding of what needs to be in each one of the transmission
records. We also have a number of things going on in the
Information Resource Management Policy related to such things as
system design and development to documentation.
There were a number of requirements under development at the time
we put this together. The Federal Electronic Reporting Standards
were being pushed into all the Federal agencies and they must have
a general standard for the electronic transmission of data. EPA
beat Congress to this by about a year. The Federal Computer
Security Act specifies various levels of security that must be in
place for certain types of data as defined within the Act itself.
Finally, we are continually involved in changing our System Design
and Development Guidance.
What all this meant collectively to us was that we probably needed
52
-------
to do something. It looked as if there was clearly a reason to do
it. There were a number of things under development, and we needed
to bring it all together. We decided that we needed a registry of
principles—what we subsequently called the Good Automated
Laboratory Practices—and then to help folks to implement the GALP,
implementation guidance as well. We wanted to not just lay this
out to people and tell them that they should do it, but to give
them some feel for what we expect them to do, and some
understanding of why it is that they should be doing it. We also
wanted compliance guidance for the auditors to enable them to
determine that they are, in fact, following specifications.
Recognizing all this and putting together all this information
between late '89 and early '90 and collecting it all, we finally
came up with the draft Good Automated Laboratory Practices. Then
we went one step ahead and put into it Implementation Guidance.
Figure 1 shows you relationships to various principles within the
GALP. The bottom line here is that nothing's new. Everything
that's in the GALP is already there in one place or the other. The
Federal Computer Security Act is dealt with; EPA's Information
Resource Management (IRM) Policy is there; the Good Laboratory
Practices are incorporated in the GALP; some of the statutory
requirements from several different programs are incorporated.
Retention requirements that the Federal government mandates for all
Federal agencies, and the types of media, are all addressed
there as well, and others such as the electronic data transmission
standard are also included.
The Quality Assurance Unit has oversight over such things as the
standard operating procedures (SOP's), the operation and maintenance
of the computer system, and security. In addition, the Responsible
Person (RP) has to report to management that everything is going
according to plan. Or, management can assume the role of the RP if
they want.
Figure 2 shows some of the different areas of the GALP and the
different citations to the different statutes so that you can cross
reference back and forth. This includes areas dealing with
personnel and qualifications, personnel training, what laboratory
management should do, the role the RP should have in the
laboratory, what the QA Unit should do and what operational roles
it should have, the facilities and what governs them, the equipment,
security specifications, standard operating procedures, software
requirements, data entry, raw data definition, and records and
archiving. Again, these all relate back to any one of the
different statutes I showed you earlier.
Section 7.14 covers comprehensive testing periodically done on the
system to ensure that, with any of the changes that have been done
with the system over time, the system is still fully capable. I
think we specify that in the GALP at least once every two years, as
I recall.
53
-------
Under section 7.10, data entry, there are two areas: integrity of
data and data verification. Under integrity are three separate
requirements: tracking person—that's who entered the data, or who
was handling the machine at the time the machine entered the data
into the system; the equipment that was being used, the
identification information on the equipment, and the time and date
that it was entered; and data change. This is one of the 82
principles in the GALP and it relates to changing data in the
information system.
There's nothing new in this requirement; it's in the GLP's. It's
been in fundamental accounting principles for a number of years.
What this does is let somebody know that the data have been
changed, who changed it, when they changed it, and why they changed
it. This is very important. It seems, on the surface, like
something anybody would want to be able to do and have in place.
A number of software vendors have sold software to laboratories,
advertising that they have an audit trail in the system. In many
cases it does not meet one or the other of these requirements. It
doesn't, for example, tell the auditor or somebody going back
through it that the data had been changed. It does preserve the
original data; it dumps it off onto a tape somewhere and writes the
new data in. But there's no indication that the data were changed.
In other cases, it doesn't tell who changed it, why they changed
it, or when it was changed. But it may show that it has been
changed. The full complement of requirements here is only met
right now by about three (that I know of) commercial software
vendors. A number of labs on their own have built systems to meet
these specifications.
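To show what a data-change record meeting all of these points might look like,
here is a minimal, hypothetical Python sketch; the field names and the
append-only history list are my own illustration of the principle (never
obscure the original value; record who, when, and why), not code taken from
the GALP or from any vendor's LIMS.

    from datetime import datetime

    class AuditedValue:
        """Keeps every version of a value so the original entry is never obscured."""

        def __init__(self, value, entered_by, equipment_id):
            # Original entry: the value, the person entering it, the instrument, and the time.
            self.history = [{
                "value": value,
                "who": entered_by,
                "equipment": equipment_id,
                "when": datetime.now(),
                "reason": "initial entry",
            }]

        def change(self, new_value, changed_by, reason):
            # A change without a documented reason is refused outright.
            if not reason.strip():
                raise ValueError("a reason for the change is required")
            self.history.append({
                "value": new_value,
                "who": changed_by,
                "equipment": self.history[0]["equipment"],
                "when": datetime.now(),
                "reason": reason,
            })

        @property
        def current(self):
            return self.history[-1]["value"]

        @property
        def original(self):
            return self.history[0]["value"]

    result = AuditedValue(134.7, entered_by="jdoe", equipment_id="GC-07")
    result.change(144.7, changed_by="labmgr", reason="dilution factor corrected")
    print(result.original, result.current)          # 134.7 144.7
    for rec in result.history:                      # full who/when/why trail
        print(rec["who"], rec["when"], rec["reason"], rec["value"])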
So in an effort to help people, rather than just telling people
what to do, the guidance is formatted like this: it shows the icon
at the top, gives you a specific statement of the individual
requirement, attempts to explain what the requirement is, why it is
wanted, gives you an example of it, and then it shows a coding in
here for who is suggested to take on the role of managing that
particular area of the GALP.
For example, with regard to the data change requirement, please
look at Figure 3. Here you'll see the data change requirement
restated, an explanation of what this is all about, what is meant
by it, an example, and who's responsible for it, and there's an
underlying principle- called audit. There are six basic principles
behind the GALP and six basic operational roles that are assumed in
the guidance.
Finally, a special consideration—where we can, we've tried to show
a picture. If the picture's right, it's worth a thousand words.
For example, Figure 4 shows a picture of what comprises an adequate
audit trail according to the GALP principles. You'll notice that
54
-------
there are additional notes as well that can refer you back to
various types of documents and various EPA requirements that are
already in place. In this case you have the Toxic Substances
Control Act (TSCA) and Federal Insecticide, Fungicide, and
Rodenticide Act (FIFRA) regulations that are already in place and
specifically require that. And you have a number of background
assessment documents we've done leading up to the GALP that are
also referenced. This requirement is already in place in the
clinical industry; it's already in place in the financial industry;
and it's a requirement in our GLP's as well.
Where we are with it today: the document was sent out for review
and comment in early February. We asked that the comments be back
by the end of March; we're still waiting for some offices. What we
intend to do over the next several months is look at the comments.
I've hired a couple of folks to take a careful analysis of the
comments and go out and make some laboratory visits and quality
checks on them. Following the completion of that, we will come out
with a paper that shows each comment, how we've addressed it, what
we consider about it, what we're doing with it, and where changes
to the GALP are indicated. It may be out by early December.
We struggled with whether or not to require this of all the
Agency's programs via Administrative Order, or to package it as
recommendations to Agency programs to adopt, or call it guidance
for Agency programs to adopt. We spent about a month and a half
back and forth considering those two options. It was deemed at the
time that the merits of the individual GALP principles would stand
for themselves and that the EPA programs could then, on the merit
of each individual principle, adopt them within their own
frameworks.
You'll see in a number of cases for a given principle that it may
say a "shall" or a "will" or in some cases it speaks in very
general terms. Those are specifically written that way. If
somebody's going to adopt an Agency program, they're going to adopt
one of the GALP principles. Then the language in there is very
specific to that effect. If they're going to adopt a principle... one place.
Then we go on to the next site. And we can put all of the site by
site planning in one place and separate the lab standard procedures
and the field standard procedures and incorporate them by
referencing them—get them separated from the issue of planning.
I want to make a point, which is emphasizing existing terms and
55
-------
another 20 or so programs in the Agency that have already adopted
many others. And there are a number of programs that have a mish-
mash of them. If we went out with requirements, one of the
problems we would have is that they would be viewed as a
duplication by those folks who already have them established. They
already have them on the books; why do they have to do them again?
There also are individual interpretations in some of the different
areas that different programs have for their statutes, whose
integrity we felt had to be maintained. So at this point we
are packaging them as guidance or recommendations, providing ways
in which someone can achieve compliance with them in the document
and giving examples of how one can achieve compliance in a number
of cases where special considerations can come into play. But
again, it is going to be up at this point to the individual EPA
programs to decide which of the elements they want to adopt and
which ones they don't.
The document, by the way, has not just been distributed to EPA.
It has gone to about 250 organizations inside and outside of EPA. I
know that the U.S. Department of Agriculture, U.S. Department of
Energy, the Food and Drug Administration, and the associated people
in those organizations have all been given copies of it and were
involved in the program from the very onset.
56
-------
Figure 1
Existing requirements and guidance incorporated in the GALP:
Computer Security Act of 1987
Statutory Requirements for Environmental Programs: Superfund; Resource
Conservation and Recovery Act; Clean Water Act; Safe Drinking Water Act;
others
EPA IRM Policy: EPA System Design and Development Guidance; EPA's Operations
and Maintenance Manual; EPA Information Security Manual; EPA's Data Standards
for Electronic Transmission of Laboratory Measurement Results; Findings of
EPA's Electronic Reporting Standards Work Group; National Archives and
Records Administration's Electronic Records Management Regulations
EPA's Good Laboratory Practices: Federal Insecticide, Fungicide, and
Rodenticide Act GLP (40 CFR Part 160, August 1989); Toxic Substances Control
Act GLP (40 CFR Part 792, August 17, 1989)
57
-------
Figure 2
APPENDIX A: INVENTORY OF COMPLIANCE DOCUMENTATION

RECORD | PURPOSE | SUBSECTION | REFERENCE

Organization and Personnel
Personnel Records | Ensure competency of personnel | 7.1 | FIFRA GLPs 160.29; TSCA GLPs 792.29
Quality Assurance Inspection Reports | Ensure QA oversight | 7.4 | FIFRA GLPs 160.35; TSCA GLPs 792.35

Facility
Environmental Specifications | Ensure against data loss from environmental threat | 7.5 | FIFRA GLPs 160.43; TSCA GLPs 792.43

Equipment
Hardware Description | Identify hardware in use | 7.6, 7.12 | FIFRA GLPs 160.61; TSCA GLPs 792.61
Acceptance Testing | Ensure operational integrity of hardware | 7.8, 7.12 | EPA Information Security Manual for Personal Computers; System Design and Development Guidance
Maintenance Records | Ensure ongoing operational integrity of hardware | 7.6, 7.12 | FIFRA GLPs 160.63; TSCA GLPs 792.63

Laboratory Operations
Security Risk Assessment | Identify security risks | 7.7 | Computer Security Act
Standard Operating Procedures | Ensure consistent use of system | 7.8 | FIFRA GLPs 160.81; TSCA GLPs 792.81
• Security Procedures | Ensure data integrity is secured | 7.8 | Computer Security Act
• Raw Data Definition | Define "computer-resident" raw data | 7.8 | FIFRA GLPs 160.3; TSCA GLPs 792.3
58
-------
Figure 2 cont.
APPENDIX A: INVENTORY OF COMPLIANCE DOCUMENTATION

RECORD | PURPOSE | SUBSECTION | REFERENCE

• Procedures for data analysis, processing | Ensure consistent use of system | 7.8 | FIFRA GLPs 160.81, 160.107; TSCA GLPs 792.81, 792.107
• Procedures for data storage and removal | Ensure consistent use of system | 7.8 | FIFRA GLPs 160.81; TSCA GLPs 792.81
• Procedures for backup/recovery | Ensure consistent use of system | 7.8 | EPA Information Security Manual for Personal Computers
• Procedures for maintenance of computer system hardware | Ensure consistent use of system | 7.8 | FIFRA GLPs 160.63; TSCA GLPs 792.63

Standard Operating Procedures
• Procedures for Electronic Reporting | Ensure consistent use of system | 7.8 | Transmission Standards; Electronic Reporting Standards Workgroup
• SOPs at bench/workstation | Ensure consistent use of system | 7.8 | FIFRA GLPs 160.81(c); TSCA GLPs 792.81(c)
• Historical Files | Provide historical record of previous procedures in use | 7.8 | FIFRA GLPs 160.81(d); TSCA GLPs 792.81(d)

Software Documentation
Description | Identify software in use | 7.9 | FIFRA GLPs 160.61; TSCA GLPs 792.61
Life Cycle Documentation | Ensure operational integrity of software | 7.9 | Computer Security Act; System Design and Development Guidance
• Design Document/Functional Specifications | Ensure operational integrity of software | 7.9 | see above
59
-------
Figure 2 cont.
APPENDIX A: INVENTORY OF COMPLIANCE DOCUMENTATION

RECORD | PURPOSE | SUBSECTION | REFERENCE

Life Cycle Documentation
• Acceptance Testing | Ensure operational integrity of software | 7.9 | EPA Information Security Manual for Personal Computers; see above
• Change Control Procedures | Ensure operational integrity of software | 7.9 | see above
• Procedures for Reporting/Resolving Software Problems | Ensure operational integrity of software | 7.9 | see above
• Historical File (version numbers) | Ensure reconstruction of reported data | 7.9 | FIFRA GLPs 160.81; TSCA GLPs 792.81

Operations Records/Logs
Back-up/Recovery Logs | Protection from data loss | 7.12 | EPA Information Security Manual for Personal Computers
Software Acceptance Test Records | Ensure operational integrity of software | 7.12 | System Design and Development Guidance
Software Maintenance (Change Control) Records | Ensure ongoing integrity of software | 7.12 | see above
60
-------
Figure 3
7.10 Data Entry
1) Integrity of Data
3) Data Change
When a laboratory uses an automated data collection system in the conduct of a
study, the laboratory shall ensure integrity of the computer-resident data col-
lected, analyzed, processed, or maintained on the system. The laboratory shall
ensure that in automated data collection systems:
3) Any change in automated data entries shall not obscure the original entry,
shall indicate the reason for change, shall be dated, and shall identify the individual
making the change.
EXPLANATION
When data in the system is changed after initial entry, an audit trail
must exist which indicates the new value entered, the old value, a
reason for change, date of change, and person who entered the
change.
EXAMPLE
This normally requires storing all the values needed in the record
changed or an audit trail file and keeping them permanently so that
the history of any data record can always be reconstructed. Audit
Trail reports may be required and, if any electronic data is purged,
the reports may have to be kept permanently on microfiche or
microfilm.
Responsibility: Responsible Person
Principle: 3. Audit
SPECIAL
CONSIDERATION:
Laboratories may consider adopting the policy by which only one
individual may be authorized to change data, rather than implement-
ing a system that records the name of any and all individuals making
data changes.
61
-------
Figure 4
7.10 Data Entry
Integrity of Data
3) Data Change
[Diagram: an original entry (134.7) is recorded along with the name of the
person entering the data and the date of entry. When the value is changed
(to 144.7), the change process records the name of the person making the
change, the date of the change, and the reason for the change, and the audit
trail preserves both the original data and the changed data.]
Notes...
For additional guidance, see: FIFRA GLPs 40 CFR 160.130(e); TSCA GLPs 40 CFR
792.130(e); Automated Laboratory Standards: Evaluation of Good Laboratory
Practices for EPA Programs, Draft (June 1990); Automated Laboratory Standards:
Evaluation of the Standards and Procedures Used in Automated Clinical
Laboratories, Draft (May 1990); and Automated Laboratory Standards: Evaluation
of the Use of Automated Financial System Procedures (June 1990).
62
-------
HAZARDOUS WASTE SITE REMEDIATION
PANEL DISCUSSION
Moderator:
Gary Johnson
EPA Quality Assurance Management Staff
Panelists:
Thomas Morris
Martin Marietta Energy Systems, Inc.
Duane Geuder
EPA
Marcia Davies
U.S. Army
John Edkins
U.S. Navy
Gary Johnson
I'd like to give you a little background on the proposed National
Consensus Standard that is being developed through ASQC. Then we
will have presentations by our panelists on their view of the
issues that need to be addressed in order to make an acceptable
National Consensus Standard work for their organizations. We'll
tell you a little about the origin of our efforts on this.
This effort occurred because of concerns expressed over QA
requirements for hazardous waste management clean-up activities.
We found that there were multiple sets of requirements out there.
There was EPA guidance in several forms. The now infamous QAMS-
005/80, which has been out there since December 1980, has not been
revised since the "interim guidelines" were published.
Other organizations have other sets. Probably the most common set
of requirements that has been applied to these activities has been
NQA-1. This was a set of QA requirements developed primarily for
nuclear facilities. In fact "N" means "nuclear" in NQA. Its
application to environmental concerns has been somewhat successful
in some areas, but not without some difficulty. There were not
only these different requirements, but also the fact that those in
the regulated community often had to respond to multiple
requirements—both EPA and NQA—since they were not fully
compatible. And this has led to duplication of planning
documentation; it's led to rework; and both have led to added time
and cost, which neither we in government nor the regulated
63
-------
community can afford to allow to continue.
Harmonization began as an effort of the ASQC Energy Division. We
began this effort just a little over a year ago, in the winter of
1990. The participants included EPA, DOE, DOD, Nuclear Regulatory
Commission (NRC), various EPA and DOE contractors, and private
consultants. This has been solely a voluntary effort by a group of
your peers—fellow quality professionals who recognized that a need
existed to bring more consistency and standardization into the way
we go about QA in environmental work. Our purpose as we began this
effort was to harmonize the current multiple QA requirements into
a single set for environmental programs.
No current standard exists for environmental programs. A standard
should provide a clear statement of what QA elements are needed,
allow flexibility on how and by whom requirements are implemented,
enable more consistency by everyone doing the same things, and
enable use of the good work already done to harmonize QA
requirements.
When you think of a standard, probably you're thinking in terms of
a performance standard. What we're talking about is a requirement
standard. In the Agency we do not have a current requirement
standard; we have EPA Order 5360.1. That is not a requirement
standard. We have guidance, such as the now defunct 004 and 005.
Those are guidelines, not requirement standards. In fact, there is
no requirement in EPA right now that anyone should implement a QA
project plan. All the contract regulations say is that you have to
prepare it and get it approved—they do not say you have to
implement it.
We need a clear statement—a good solid foundation—for quality
assurance in our Agency. What we've heard over the last 10 years
is that you've had a lot of frustrations in trying to deal with
carrying out and institutionalizing a QA program in your own
organization. Our recommendation that went to the ASQC group was
that a standard is a pretty good idea—that perhaps we can provide
this clear statement of the things that we will do in a QA program.
But at the same time, we emphasized that flexibility had to be
provided to the organizations implementing this standard to
determine how the requirements would be implemented and by whom.
Everyone recognized from the very outset that we could not write a
prescriptive requirements document that would have any prayer of
working in EPA—given the diversity of our programs—nor in any
other Federal agency engaged in environmental work. But we felt
that if we could get everybody at least doing the same things and
talking about the same things, then we could begin to bring more
consistency and ultimately improve the way that we're carrying out
QA activities in our respective organizations.
We also recognized in this group that a lot of good work had
64
-------
already been done. We have to give credit to the NQA Committee of
the American Society for Mechanical Engineers (ASME), because there
are some really good things in NQA-1 that our committee felt were
appropriate to include. And so our philosophy was, "steal
shamelessly" because there are a lot of good ideas out there. And
so we solicited ideas from whatever sources we could and we haven't
stopped soliciting those ideas.
Now, to briefly describe the structure of the proposed standard.
It has three parts. Part A deals with management systems. The
intent here is to define what you need for an effective QA program
in terms of the structure and framework. In other words, what do
you need to be able to carry out day to day activities on specific
technical projects? Part B, characterization of environmental
processes and conditions, is primarily environmental work—
environmental monitoring, sampling and analysis activities. Part
C deals with design, construction, and operation of environmental
engineering systems. This part deals with the technologies that we
use in pollution control, remedial design, remedial action, and
Superfund. Part C covers an area that the Agency has not
completely addressed, and so we felt there was an opportunity
there. In particular, we found that a lot of the work
that had been done by DOE and DOD provided us with a wealth of
information which, through the standard, we could create good,
concise, clear statements of what would be needed.
Figure 1 shows the sections of Parts A, B, and C.
To quickly show you how Part A is constructed, visualize this as an
umbrella under which everything is done. The management commitment
and organization, the first statement in that standard, says that
management at all levels is responsible for quality. That is an
essential statement; it's an essential part of this entire process.
The QA program would document the management systems that you are
employing. So that's really nothing new there.
As for personnel training and qualifications, we all recognize (and
certainly from viewing the world from a TQM standpoint) that human
resources are our strongest asset. We need to make sure that those
concerns are addressed and that people who are doing work that
affects the quality of the results, and therefore the decisions
that you're making, have the necessary skills to carry out that
work most effectively.
Regarding management assessment, it's absolutely essential that
management participate in the process and that they periodically
examine the effectiveness of that process.
65
-------
FIGURE 1
Part A Management Systems
1 Management commitment and organization
2 QA program
3 Personnel training and qualification
4 Management assessment
5 Procurement of services and items
6 Documents and records
7 Use of computer hardware and software
8 Operation of analytical facilities and labs
9 Quality improvement
Part A requirements apply to both parts B and C.
Part B Characterization of Environmental Processes and Conditions
1 Planning and scoping
2 Design of data collection operations
3 Implementation of planned operations
4 Quality assessment and response
5 Assessment of data usability
Part C Design, Construction, and Operation of Environmental
Engineering Systems
1 Planning
2 Design of environmental engineering systems
3 Implementation of environmental engineering systems design
4 Inspection and acceptance testing
5 Operation of environmental engineering systems
6 Quality assessment and response
66
-------
In the procurement of services and items, we have to make sure that
the services we get from subcontractors meet our criteria, or that
the equipment that we purchase to monitor polluted streams or air
meets our needs.
We need to do a better job in the Agency of maintaining documents
and records. One of the most frequent comments that I've heard
from Superfund has been the difficulty in maintaining satisfactory
documentation on given sites, largely because, in many cases, we
don't have a process in place to do that. So we can learn from
what others have done here.
We're depending more and more on computerized acquisition systems
and storage systems for our data. And so it's appropriate that we
address that issue in our requirements standard for QA pertaining
to the use of computer hardware and software.
The operation of analytical facilities and laboratories—it's
important that these labs practice good automated laboratory
practices, that this be an integral part of their operations. It
was felt that highlighting this was very important, because so much
of what we do depends on the products of those operations.
Last under Part A is quality improvement. We can always learn from
our experiences and next time do it a little better.
In Part B we followed a very simple axiom: plan, implement,
assess. The first two requirements relate to the planning that
goes into designing the data collection activities. The
requirements are consistent with the data quality objectives
process that we have been using in EPA for several years now.
Then an interesting thing: implementing what you've planned.
That's perhaps an area where we haven't been as persistent as we
should be.
The next item is assessment. This refers not only to assessment of
the work that's being done in process, in responding to it where
responses are appropriate, but also the assessment of the usability
of the results—recognizing that even imperfect data sets can be
used for some decisions if we understand the limitations on the use
of those data sets.
Part C deals with engineering systems. Again, the same approach:
plan, implement, assess. Plan the design of environmental
engineering systems that may be required for a Superfund remedy;
design those systems; implement the system's design, and make sure
that the components of the technology that are being constructed
pass inspection and acceptance-testing requirements. You need the
assurance that this system is going to work once it's actually put
out there in the field. To make sure we have thought out the
operation of these systems, sufficient guidance should be provided
67
-------
in the form of operating manuals and the like for the successful
operation of these systems.
Last is quality assessment and response—to make sure that the
systems do, in fact, perform as intended, and if they do not, that
appropriate response actions to correct the situation be performed.
I'm sure you are probably thinking, what's the impact on EPA?
Well, for the large part, most of your programs are going to be
unaffected. Because as Nancy said, over the last number of years,
we've heard a lot from you, and our responsibility has been to
try to carry those concerns to the harmonization committee. I
think that when you look through this document, you'll find that it
really doesn't change what you've been doing very much, because the
essential elements are there.
Part C does add something new, because it deals with issues
pertaining, particularly in Superfund, to remedial design and
remedial action that we have not effectively addressed as an
agency. But there's still ample time to do so.
Our key interest in this is to be able to provide a foundation, a
basis upon which new guidance for a variety of issues such as QA
management plans, project plans, and audits can be developed and
given to you and to the regulating community at large to use.
The current status: we believe that ultimately we can achieve an
acceptable national consensus standard. But the emphasis is on the
word consensus, and consensus requires your participation.
We have also presented this document to the second annual Hazardous
Waste Conference in Las Vegas, which was sponsored by the ASQC
Energy Division because of the particular community that would be
directly impacted by this and that has a special interest in it.
So the document has, at least to this date, gotten somewhat limited
distribution. We have gotten a number of comments back in and I
appreciate that very much.
Within ASQC the standard setting process has begun. Two weeks ago
in Hollywood, Florida, the Energy Division council voted to report
the standard out of committee and to request that the ASQC
standards committee list this standard and report it to the
American National Standards Institute as a proposed national
consensus standard for environmental programs.
I've got to say a couple of words here about why that's important—
it takes this out of the political arena so that no one
organization feels that any particular organization is trying to
enforce its own agenda. I can assure you that we've had some candid
and lively discussions from the policy group and the work group
that put together this initial draft. We've tried to accommodate
everyone's concerns.
68
-------
The pleasant thing about it all is that the group coalesced very
quickly. We found that the concerns were the same. You get
quality professionals together and you find out that you've pretty
much got the same set of problems.
This week some of our colleagues in the committee are presenting
this standard to the ASME NQA committee which is meeting in
Williamsburg, Virginia. We're looking for their support and
comments on it and also to express our appreciation to that group
for the effort that they've put into the NQA standards through
ASME, which provided us with valuable information.
The next step, as we said all along, is getting input from you.
Please review it and let us have your comments. Sometime later
this summer we hope to publish a revised draft in the Federal
Register to allow for a formal public comment period. This will be
used as input to the ASQC standard-setting process. And we're
hoping that in the not-too-distant future ASQC will issue this
standard which may be at that point known as ASQC Standard E-4.
We're hoping that the standard will be accepted and endorsed by the
participating agencies, by EPA, if it meets your needs.
In the meantime we need your help, my fellow EPA folks, to help
pull together revised guidance that more closely reflects your
needs and your concerns right now for where our quality program
needs to be for the remainder of this decade.
For the purpose of our panel discussion we're going to assume that
we can reach an acceptable national consensus standard. We're
going to ask the panel to address what must be done, or what issues
must be addressed to make the national consensus standard work in
the organization. Through their discussion we hope to be able to
bring some of these key issues to the forefront so that we'll know
what areas we're going to have to identify in the months ahead as
we address this.
Marcia Davies
I would like to talk to you about the structure of the Army Corps
of Engineers and how the Corps views QA. The Corps since the
1940's has always had the same structure. We have a headquarters
in Washington whose responsibility is policy, interface with
Congress, and interface with the headquarters of other government
agencies in Washington—trouble shooting and resolving differences
that can't be resolved elsewhere. Beneath headquarters we have
divisions that are geographic. There are about a dozen of these
around the country. Each division has several districts and each
district has area offices. As you go down that structure, the
geographic responsibility gets smaller and smaller and smaller.
What are the purposes of all these? The worker bees of the Corps
are in the districts. The districts are the direct overseers of
the contractors. The districts do the work in-house. The dam
builders for civil works and so forth are at the district level.
The area offices which are under the districts are construction
representatives. And those are broken down to small enough
geographic areas so everywhere that the Corps has a construction
project going on, there is a construction representative on-site
that reports to a nearby office. This was put together for the
Corps' traditional civil works and military construction missions
over the years.
The sole purpose of the division is quality assurance. The
division oversees the work of the districts—makes sure all their
contracts are legal, and that they're not doing under one
particular contracting mechanism what should be done under another
one. There's a lot of technical support at the division level:
legal support, personnel support and so forth. And each division
services several districts and many area offices.
I feel that the traditional structure of the Corps lends itself
very well to the TQM concept. The principle of the Corps is one-
step-up review. If the districts have a contractor working, the
districts are supposed to have the personnel to review the work of
the contractor, with general division oversight. If the districts
are working in-house, then the divisions are supposed to directly
oversee, technical and otherwise, what the districts are doing.
Within that structure, there are special offices that are called
design centers or centers of technical expertise. The division
that I belong to, the Missouri River Division, is designated as a
mandatory center of technical expertise for environmental works.
And so we have special missions where we oversee all of the
districts for some aspects of their work.
To the Corps, QA is a government function. Contractors do QC and
the government does QA. It is rather rare for the Corps to
contract out its QA function. That's almost always done in-house
by government employees. Sometimes we get into semantics
differences with the folks in the regions or in the states in terms
of what they think we should be doing and what we're actually
doing.
The Corps entered into hazardous waste remediation in 1982 to
support EPA in Superfund, primarily in the area of design and
construction since the Corps is an engineering outfit—but an
engineering outfit which had been traditionally devoted to design
and construction. Since that time, the Corps' missions and
requests for assistance have expanded quite a bit, so that the
Corps districts currently manage remediation problems for EPA, DOE,
and the Department of the Army. The Corps manages the entire
formerly-used defense sites program for DOD. We do some work for
the Navy, the Air Force, the General Services Administration, the
National Guard, and the Department of Transportation—whoever comes
knocking at the door saying, "HELP" is what the Corps often gets
involved in.
We strongly support any harmonization effort, because working for
that many different Federal agencies is very difficult under the
current set of circumstances. The Army Corps of Engineers recently
had an internal reorganization effort. The principles under which
we're slowly reorganizing internally say that the people who are
responsible for the projects and programs—the managers responsible
for the schedules and for communicating with the outside agencies
and so forth—are in a different reporting line from the people who
are responsible for the technical adequacy of the products. And so
for each project, sometime down the road in the Army Corps of
Engineers—I'd say within the next six months in some districts—
there will be a project manager and a technical manager. The
technical manager will coordinate all of the internal QA efforts
for that particular project. That will be in the environmental
arena as well as in military construction and all of the other
areas in which the Corps operates.
The Corps QA program has all of the traditional structural
elements, specific requirements for each project as well as general
guidance requirements. We write QA program plans; we like to call
them chemical data acquisition plans because of what our QA focus
is, and we think that says better what they are.
Internal review—lab validation. We do lab validations; we do
them out of my office. We use audit samples and we generate them
ourselves within the Corps. We have our own audit sample system.
And in our audit sample system we have real-world soil samples, and
a large percentage of the CLP labs do not pass that sediment
sample on the first run. We've taught them a few things about
extracting. And we do not run quarterly blinds. Instead, for every
project we take government splits. The Corps has nine division
laboratories and those government split samples go to our own
laboratories and they either analyze those in-house, or with
intensively managed commercial contracts that they hold. The news
is there aren't many matrix effects out there.
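
As a rough illustration of that split-sample check, the following
sketch compares a contractor result against the government split
using the relative percent difference; the 30 percent acceptance
limit and the example results are assumed for illustration only,
not Corps procedure.

    # Illustrative sketch only: compare contractor results against government
    # split-sample results using relative percent difference (RPD). The 30%
    # limit and the example numbers are assumptions, not Corps requirements.

    def relative_percent_difference(contractor_result, split_result):
        """RPD between two measurements of the same analyte, in percent."""
        average = (contractor_result + split_result) / 2.0
        if average == 0:
            return 0.0
        return abs(contractor_result - split_result) / average * 100.0

    def flag_disagreements(paired_results, rpd_limit=30.0):
        """Return the analytes whose paired results differ by more than the limit."""
        return [analyte for analyte, (contractor, split) in paired_results.items()
                if relative_percent_difference(contractor, split) > rpd_limit]

    # Hypothetical paired results (mg/kg) for one sediment sample.
    paired = {
        "lead":    (145.0, 152.0),
        "arsenic": (12.0, 25.0),   # large disagreement: possible lab or matrix problem
    }
    print(flag_disagreements(paired))   # ['arsenic']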
So we've learned a lot of things and we feel that we have a good
chemistry program going. The problems are essentially ones of
recognition: if it ain't the CLP, it ain't what the regulators
want. Is it CLP-equivalent? Very frequently. But it's not the CLP.
We like to standardize on SW 846; we like to require all of the
internal QC that has to this date been optional in SW 846. We like
to see that reported. Probably 90 percent of our work is never
going to court—maybe more than 90 percent of our work is never
going to court.
It's quite clear who did it, when they did it, and what they've got
to do about it.
I'm pointing up some problems in implementation that I see. I
think the main problem that we see is lack of recognition that
there are a lot of different ways to do things and they're probably
mutually acceptable. And they all work well.
Now speaking mostly to the investigation phase. Given all the
things that we have in place, including geotechnical guidance,
almost everything that we have we have ripped off from EPA, because
we started in this arena in 1982 with a Memorandum of Understanding
with EPA. We have very few wheels that we've invented that weren't
already somewhere mentioned or suggested in existing EPA guidance,
including the split sample concept, which as far as I know, we're
about the only ones who implement.
Because we've been a design construction organization for a long
time, we have in place a number of things with respect to specs or
guide specs for building slurry walls on contaminated projects, and
landfill covers that are compliant with RCRA requirements,
that are helpful to our field people, who are overseeing projects
but don't really have the time or the technical ability sometimes
to sit down and go through all of the legalese that's in the
guidance in the Federal Register. So we tend to develop pieces of
guidance that translate those things into field directives for our
folks. And we have been doing this on the design and construction
side for quite a while. I think that we may have something
important to contribute in that area as an organization to that
part of this new standard.
A couple of the specific problems that we have are, for example, on
studies: Who reviews? Who trusts whose reviews? When have we
reviewed enough? How many agencies do we have to review? How long
does it take, and how much money do we spend on this? I'm not
downplaying review; review keeps us alive—it's a very important
part of the activities of the office from which I come. But some
of the projects we review, the Department of the Army reviews. We
pay the states and the regions to review; the contractors review.
And you've got these massive numbers of people in essence all
saying the same thing, and I would bet you a dollar that if you and
I sat down next to one another and you're a chemical professional in
this area and we review the same document for chemistry, we
wouldn't come up with the same set of comments.
So I think that the review process needs a lot of work. And I
think that it needs to be addressed in the standard in greater
detail and I think that we need to figure out a way to get on with
this process. DQO's are wonderful things; I endorse them heartily.
I felt that the little two volume set on DQO's that came out of EPA
really turned the lights on. That was great. But they're not
implemented. You can use data quality objectives all you want, but
you're going to get to a regional Federal facility project manager
who's going to say, "You've got to do it over, guys. This isn't the
CLP and that site's on the NPL list." Those decisions are not
being made by quality professionals. If we could talk to the
quality professionals I suspect we wouldn't come up with the same
decision.
There are things like that which I feel the standard needs to
address—and address in greater detail in terms of what's happening
and who does what. How much uncertainty are we willing to accept
in data? Well, we're not willing to accept bad data. So we want
the contractor to accept the responsibility for the analytical, and
if most of it is flagged, he's going to do it over. And so we
don't have contracts that allow them to flag it and give it back to
us as acceptable. Consequently, we've run into problems because
our data's not flagged. And it's not flagged because we didn't
accept the flags, whether it's SW 846 or whatever type of
analytical method that they're using.
I think we need to pay a lot of attention to DQOs on all the
analytical levels and that we need data validation guidance for
level one, level two, level three, level four—the concept of level
four needs to be expanded to include more things than the CLP. I
think we need to figure out a way to do this work cheaply, smartly,
on schedule, by doing a whole lot more interagency cooperating than
we do now. The Corps would welcome the opportunity to do that.
And I think that we need to address redundancy of effort among
Federal agencies.
I want to say that the brightest light in the past year, and the
thing that I think will help us a great deal as it gets down the
road, are the actions of the Environmental Monitoring Management
Council. I was excited when I heard about that and I think it's
wonderful. And that's about all I have to say.
John Edkins
I'm going to agree with almost everything that Marcia said and then
act as a ground wire for some rudimentary problems that we have in
the Navy—perhaps a general agreement for a harmonization effort
for the same reasons that Marcia described and a plea for help
because of the particular resources problem that we have. I'm not
a QA professional; I'm a geologist. I've been doing this for about
four years, so I have had to deal with some very basic problems in
starting with this, and through my friends at the Quality Assurance
Management Staff, gain a basic perspective.
I want to say something about our structure for environmental
compliance. The Naval Facilities Engineering Command in
Alexandria, Virginia, responds directly to the Chief of Naval
Operations. Then that structure is broken down into eight
engineering field divisions, which is the operating arm for cleanup
of Navy shore facilities. Then the Naval Facilities Engineering
Command has several other smaller support organizations, namely
ours, where we were formulated originally as a small
multidisciplinary group. As far as compliance with CERCLA is
concerned, somewhere around 1980 all of the original Navy property
assessment was done out of our office. When the engineering field
divisions plunged into the problem of investigating the environment
with multitudinous drill holes and collecting environmental data,
and QA became an issue, our command office then turned back to us
and said, all right, give us QA, because you're the
multidisciplinary group.
That's how we happened to fall into this picture. In the Navy, we
tend to buy everything. When I got into
QA in late 1987 the first thing that I attempted to do was bring to
the attention of my management the need to restructure everything
that the Navy does to make it sound, look, act, and taste like EPA
guidance. I realized that we were not going to get accepted on a
local level unless we had those key program elements and names in
our program.
We now have some 2,000-plus sites. They're small sites. We
identify the individual pesticide rinse aid area and a fuel spill
as individual sites in our program language, although we do have
the large landfill with the multi-types of waste present. A common
Navy site will be a PCB waste discharge area. Typically one Navy
base, a Naval air station for example, will have from 12 to 15
sites identified of that nature. To give you an idea of our
resources, our Navy engineers that act as remedial project
managers—the people that drive the contractors—may have as many
as three Naval installations and perhaps upwards of 50 sites for
which they have responsibility. These people typically will be one
or two years out of graduate school and just be becoming integrated
into the process by the time they are hired away by our more well-
to-do industrial brethren.
So my thesis today is one of a procurement-based problem and also
the idea that there is indeed nothing new under the sun. I took
Gary seriously about barriers to harmonization, and I could only
really come up with one barrier. It was communication, without a
doubt. We have a multi-disciplinary industry and terminology from
many disciplines. We've got toxicologists trying to talk to
engineers who are trying to talk to bio-ecologists, and now I find
that we are trying to talk to procurement specialists about what
all of this is that we're trying to buy in terms of environmental
data and QA services for environmental data.
We have a lack of general industrial standards or standard
operating procedures. Many of the procedures that we use don't
have very strict source referencing requirements for contractors.
Many times in the Navy we have seen instances where EPA guidance
documents have been used in lieu of good contract standards or good
contract specifications. In other words, a statement of work will
say, take these guidance documents and those regulations and
comply, comply, comply. A contractor is out there performing his
own interpretation of guidance, which can sometimes be ambiguous
and does lack such small critical elements as the source
referencing requirements. Guidance tends not to identify by whom
individual or specific actions are to be taken. When you get in a
procurement milieu you find out very quickly that that becomes
important.
The fact that we are trying to turn a lot of good ideas in Federal
guidance into a procurement system, and then trying to adapt to
sometimes changing or evolving guidance and revising our
procurement system to address that, becomes a problem for us and
creates a lot of redundancy and waste.
Certainly Federal guidance has a lot to be said for it. It does tend
to standardize and provide a lot of good elements for us. The
ideas of redundancy and streamlining have been recognized and these
guidances are continuously improved. The very concept of DQO's I
heartily applaud—the think before you leap approach. That was a
badly needed concept early in the industry when "characterize the
site" meant go out and get data on just about everything and then
figure out later what you were going to do with it. Also, on the
very idea of risk-based decision making I heartily approve.
The concept of harmonization itself—the standardization of
national programs for environmental clean-up and compliance—is
something that's going to benefit everyone. We have the same
structural problem that Marcia alluded to in that our Navy field
divisions are roughly comparable to EPA regions in size, number,
and geographic coverage, but they don't coincide. So we'll have one
engineering field division with possibly as many as three different
EPA regions that they are responding to, not to mention state and
local governments.
The work that's been done by QAMS, by the DOD-sponsored annual QA
meetings and the ad hoc committee on quality and environmental
measurements, and by the ASQC Energy Division has resulted in the EQA-1
effort. All of these things are definite progress, and I want to
not focus too much on those but go on to something that I'm very
interested in, which is the individual method standards, because
I'm a firm believer in building blocks.
Standards for data collection and interpretation include EPA
guidance and standard operating procedures (SOP's), state standards
and SOP's, and contractor and other agency in-house SOP's. Some
other people that I have talked to in these meetings have indicated
a joy in collecting SOP's—we call this piracy—but actually many
of these procedures the government's paid for probably 16 times, so
we ought to be able to pirate them, especially in the contractor
area. Looking for all sources of standards, I sifted through EPA
and came up with a batch several years ago and then turned around
and plugged those into ASTM Committee D18 Soil and Rock, for which
I'm an active member. I got sucked into that process with a great
deal of joy, mixing myself with industrial interests as well as
other government agencies and local government agencies.
I'd like to focus just a little on D18.21 standards. These are
consensus standards, and my initial charge was to develop field QA
to correspond with lab QA, and it turned out that we got into a lot
of different things. But just to mention a few: surface and
borehole geophysics and vadose zone monitoring; everybody's
favorite, "how to put in a monitoring well"; soil sampling
practices; pump testing for hydraulic purposes; ground-water
monitoring well design and construction; and monitoring well
maintenance, rehabilitation, and decommissioning.
We typically decommissioned monitoring wells by cutting them off at
ground level with a backhoe or bulldozer; those wells become lost
in the records and also become conduits into lower subsurface
layers, a real problem. The American Society for Testing and
Materials (ASTM) is looking at this. This one, which I'm
particularly involved with, I think is very important because it
gets into some of the aspects of DQO's: how to design an experiment
for a site and how to interpret the data. These are very important
issues and I wanted to raise our involvement in the building blocks
aspect of it.
I'm going to talk about what more we can do. We need to recognize
and focus on QA as a procurement-based problem. We also need an
increased recognition by people of data as the primary product. We
need more model contracts requirements documents and standards for
contractor action, right down to the building block level. We also
need to look at some long-standing technical terms and determine
how we might clarify existing program language.
QA/QC seemed a very simple concept, at least originally. As I
said, I'm a geologist, not a QA expert. It took me a while to
figure it out, because when you ask people what QA/QC is, you get
as many different answers as people you ask. QA in terms of
procurement is inspection and acceptance. As for the in-house
problem, it seems that a motor company can have a staff that
decides what it wants to sell to its perceived customers and sets
up a series of specifications, and then another engineering staff
will design those into tools and set up a manufacturing line. QC
has a responsibility for making sure that the product that comes
off the line conforms somehow to the specifications that were set
up by QA. As soon as the QA expert leaves the motor company and
goes to work for the Federal government, there gets to be a very
sharp line between the two. It's a procurement line; it's a
contractual line; it's a no conflict line—buying products on
behalf of the American people.
I did some checking around on definitions and there are a couple of
things that were interesting. QA implies inspection and acceptance
on behalf of the government, confirming that a product conforms to
contract specifications, whereas QC is ensuring that it conforms to
specifications. But in both cases we have the implicit
specification already there. We presume that we have
specifications in both QA and QC. Also, the interesting thing here
too is that it's very clear that QA is a government responsibility
and QC is a contractor responsibility.
I did some reading on inspection and acceptance in the Federal
Acquisition Regulation. They basically defined higher level
contract quality requirements as those which apply to critical and
complex procurement, and the definitions of complex and critical
were given. Complex means that the product has quality
characteristics not wholly visible in the end item. Critical means
that the failure of the item could injure personnel or jeopardize
a vital agency mission. I looked at those and said, That looks
like data to me. It's very hard to say that I've got a high
quality piece of data when I look at it. And it is critical, too,
because we have human lives that are at risk in this environmental
cleanup business. The thing about a higher level contract
requirement is that it puts an increased onus on the government to
not use contractor inspection of product. It puts an increased
onus on the government to have a non-conflict inspection and
acceptance process.
In wrestling with the problems of QA, I ran into the fact that we
didn't have any standards. In talking to contracts people, they
presumed that we had standards and specifications for what we were
trying to buy and I said, no, we don't have any. And they said,
that's not QA. And I said, well, what is it? Well, that's
acquisition support—to develop those specifications and standards.
One of my co-workers, Barbara Johnson, helped me with this one. I
think that this appears in EQA 1 under the heading of Quality
Improvement. It says that you look at what's wrong with what's
coming off the assembly line and decide how to re-specify it so
that you continuously improve.
You've got to have all of these things happen in order to get
quality data in a new industry. Yet we've got a procurement system
that's already written up in the Federal Acquisition Regulation that
we have to communicate with whoever thinks QA is just
inspection/acceptance. So we've got to look at these issues and
determine how to communicate with these people and get them on
board to our issues and on our side.
I want to emphasize one last thing: the acquisition process has
nothing to do with the people who are generating the data. The
Navy buys everything—we buy QA program plans; we buy QA project
plans; we buy the whole business. The contractors are over here;
they're producing. We're over here figuring out, well, we'll get
their product in here and we'll inspect it.
I wanted to touch on the recognition of data as a primary product,
and I argued this in our Naval Facilities Engineering Command. I
have people that want to refer to this as construction, and I said,
That's fine, but you don't know where to clean up; you don't know
to what level to clean up; and you can't verify that you indeed
cleaned the area up without data. How do you reassure the neighbor
across the border line that you got rid of a part per billion of
benzene out of the ground when it's an invisible situation? Data
supports your decision. You stand or die by data. We all know
this, but it's something that I have to keep hammering away at in
my own shop.
We need an increased focus on these contracts requirements
documents, individual methods standards and SOP's, and model work
statements, which could be almost in any form for every step of the
process, for planning documents, for reporting, for execution of
work for feasibility studies, for remedial designs, you name it.
We need standardized planning and reporting formats. In a way, a
lot of our problem is format. We can provide the same information
with the same substance, but we may call it something slightly
different, or label it in a slightly different order, because we're
buying these things so quickly. One engineer out there, procuring a
million dollars' worth of contract services in a very short time,
provides that to a region, and the region looks at it and says, no,
that's not acceptable, that's not what we're asking for, you don't
have a QA program plan here, or whatever it's called. And so we're
required to redo the effort or revise it. That's very expensive.
Referencing is something that I think we need to have in guidance
documents, in our model work statements, and in our contracts
requirements. We all track information. We buy a lot of redundant
information in the Navy. I'm sure you've all seen that problem
before.
It was very complex for me to try to convert a very good piece of
EPA guidance into a streamlined procured work plan for the Navy.
One of the things that I noticed was that the QA project plan is
very similar to something that we designed also. The project plan
guidance, QAMS-005/80, consists largely of standard operating procedures.
When I looked at some of the things that had to do with sites and
the objectives for measurement, I had to sweep all of these things
that were site-specific over into one place, because, as I said,
often we have 15 sites on one Navy base, but that usually will be
one contractor as well, and that contractor will use the same
procedures for drilling wells on all sites. What we have here is
site one, with all of its conditions, its data quality objectives
and scoping, and its risk-associated elements put in one place.
Then we go on to the next site. And we can put all of the site by
site planning in one place and separate the lab standard procedures
and the field standard procedures and incorporate them by
referencing them—get them separated from the issue of planning.
I want to make a point about emphasizing existing terms: trying to
pull in as many existing terms as we can to define program
language, in terms that other people in the industry might
recognize. There's nothing inherently wrong with acronyms in
themselves, but they tend to get a life of their own if we don't
pull them back to long-standing technical terms.
DQO is one of my favorite acronyms because I really like the
concept. However, when we get on a small site scale, I think a lot
of people would recognize these sorts of things: hypothesis
development, hypothesis testing, experimental design, and errors
and controls. We need to go for all of these and we need to use
these words to tie together these concepts.
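
To make the connection concrete, here is a minimal sketch of how
DQO-style error limits translate into an experimental design
decision, namely how many samples to take; the normal approximation
and the numbers are assumptions chosen only for illustration.

    # Illustrative sketch only: turn DQO-style error limits (false positive
    # rate alpha, false negative rate beta) into an approximate sample count
    # for a one-sided test of a site mean against an action level. The
    # variability (sigma) and the difference that matters (delta) are assumed.
    from math import ceil
    from statistics import NormalDist

    def samples_needed(sigma, delta, alpha=0.05, beta=0.20):
        """Normal-approximation sample size for detecting an exceedance of
        'delta' above the action level with power 1 - beta at level alpha."""
        z_alpha = NormalDist().inv_cdf(1 - alpha)
        z_beta = NormalDist().inv_cdf(1 - beta)
        return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

    # Example: results vary with a standard deviation of 20 mg/kg, and an
    # exceedance of 10 mg/kg above the action level is what we must detect.
    print(samples_needed(sigma=20.0, delta=10.0))   # roughly 25 samples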
I've now defined QA as government responsibility in a procurement-
related system, with planning being something different from
standard procedures. We had a contractor provide a QA plan which
named these program plans and project plans, but since it was a
plan to have QA, it very much looked like a plan to plan the
planning plans, and we had no actual planning there at all. We can spend a
lot of money on these kinds of documents. So I'm advocating a
terminology problem where we need to tie these back together
somehow against the Federal Acquisition Regulation and the problems
of procurement, and that will help. We can keep these terms if we
give them a little more clarification and definition.
From the Navy perspective we are looking at standard procedures;
we're looking at building blocks; and we're looking at procurement
tools. We have a need to provide engineering field divisions with
things that buy good services just by the plug-in sort of factor,
so we have a very baseline sort of attitude on this. Any guidance
that comes out that will be useful to us will help name those
relationships, especially for contracts and government
responsibilities. QA is procurement to us.
Duane Geuder
My major concern in being almost last on a panel like this is that
almost everything has generally been said. Fortunately I'm not
last, so I still have a chance. For any number of issues that I
address, approximately half of them have already been addressed
either by Nancy or Gary or John or Marcia.
What I want to start out with is to give credit to Gary and anyone
else involved in the generation of this document because I think
it's a quantum leap forward. Obviously a lot of time has gone into
it. I hope we can, in fact, get it to a point of implementation.
Two things jumped out at me from the document: first, the advocacy
of plan, implement, and assess; and second, the fostering of a no-fault
attitude. Many times the QA community is looked upon as the
policeman and we have difficulty doing our job because people think
we're looking for fault as opposed to trying to improve the
process. The plan, implement, and assess point hit home, at least
in Superfund: we're pretty good at implementing, but we really
haven't done a good job in the planning and assessment.
With the fraud issue in the CLP, or the fraud issue in the lab
community, what I'm hoping is that the diplomacy among the Federal
agencies will continue and that that diplomacy will pervade not
just the Federal agencies but all of the organizations within the
Federal agencies, because we in Superfund and elsewhere have to
deal with numerous organizations within Superfund that all support
the project. And likewise, I hope that implementation of this
guidance will minimize and reduce any opportunity and inclination
for fraud in the lab community or elsewhere.
Anyway, I understood my mission in a slightly different way from
Gary. I think it's not terribly off track, but I was attempting to
answer two questions: What could a single QA standard mean? and
what are the issues for implementation? I think we mostly dwelled
on the second, but the first, which was addressed by Robert Layton,
was the question of terms, definitions and acronyms which the Navy
is so fond of—EPA and Superfund are enamored with acronyms. When
I first started with EPA, it took me three weeks just to learn the
acronyms and that was in water enforcement. Superfund has another
order of magnitude of acronyms. At any rate, this standard
guidance would certainly help to standardize our language, our
terms, and our definitions.
The guidance will provide a standard process that will ultimately
facilitate review and audit by EPA or by Superfund or Federal
facilities or contractors and most importantly, Potentially
Responsible Parties (PRP's). Again, more acronyms. By definition
it will eliminate barriers, both in its development and in its
application. The barriers among the agencies are already falling;
I hope that the barriers within the agencies will tumble in a
similar manner. The buy-in by the agencies and the sub units will,
in fact, eliminate the barriers. If each entity has a buy-in, some
ownership of the product, which is what Gary has emphasized, then
certainly it can't fail. It'll provide a yardstick, especially in
Superfund, for cost recovery.
We need some kind of yardstick, some measure of what the PRP's or
any other entities should be doing at a remedial site—what is an
appropriate level and how many samples, because we go after cost
recovery. If EPA says you should do more than is necessary, then
we're not going to get all our costs. If EPA says less, then the
project won't be done properly. So it definitely provides a
yardstick for cost recovery, and in some instances we can go for
triple damages.
In that instance we definitely need a very viable, documentable
yardstick to measure the activities. You can't get away from it.
Clearly, resource implications come out from this. Increased
resources are unlikely; therefore we're going to have to do things
smarter with the resources we have and reallocate those resources.
It's been mentioned by Nancy, Gary, and others that we're going to
need tools and guidances. But they're going to have to be
customized; they're going to have to be user-friendly. Hopefully
they'll be automated. We at headquarters have heard from the
regions numerous times that there's too much guidance—we've heard
this in some of our audits and reviews. Therefore, whatever
guidance is forthcoming, we'd better be careful to keep it terse,
to the point, and as user-friendly as possible.
Last, but not least, this will provide a significant improvement to
EPA's QA program and, I hope, to those of the other Federal
agencies. Significant improvement, because it emphasizes a cradle-
to-grave approach where you do QA through the upfront planning, you
do it through implementation, and then at the tail end you do the
proper assessment to make sure everything has worked and to see
what you can do to improve it.
These are the issues for implementation. Resources—we can't get
away from it. I mentioned it already. I can see that if we have
a full fledged audit or review process within Superfund, the
resource implications are dramatic. We currently don't do a lot of
field and on-site auditing. If we are to do so, significant
resources are going to have to be shifted from some other activity
if we're not going to get additional resources. The other thing:
we plan two or three years in advance, so we are now providing
input to the '93 budget process; I've done so already. If we're
going to change, or get additional resources, we won't see anything
until '94 at best, based on this particular process.
Nancy mentioned that there's always resistance to change. This
will require dramatic change, especially in where we have our QA
resources applied presently. We've emphasized the analytical
component; we've minimized the upfront planning and scoping; and
we've minimized the review and audit process. Marcia mentioned
that data quality objectives (DQO's) are great, and they are. But
within Superfund and within EPA there's a DQO syndrome. It carries
a bad taste, unfortunately, with too many of the engineers because
of some of the problems in its infancy with implementation.
We also have a QA syndrome that we have to deal with, where the
engineers and a lot of the data users are not QA advocates; they
want to get out with their shovel and remediate the site. So we
have those two syndromes to deal with.
We also have the issue of true implementation. We can write
program plans and project plans until they come out our ears; if
they are not fully and accurately implemented, they're of minimal
value, except as an exercise and profit for the contractors. We
need more than lip service and we need true upper level management
commitment to this guidance. Once it's in place, management has to
understand they're going to be wed to this and it's going to cost.
Others have mentioned education and training. I put it as
enlightenment education and training, especially for the engineers
and the data users that we have to deal with in the QA community,
so that they understand that QA is not a necessary evil, a hurdle,
a barrier, but a useful management tool and a mandatory management
tool.
Last, but not least, are analytical facilities and labs. I think
we need to develop among the Federal agencies a minimum standard
that the labs will apply to all data. If the user needs more,
the DQOs will get it. If you can accept the bare minimum then it's
taken care of within all of the laboratory community.
Tied in with that, we need to do a lot more in addressing
performance-based results as opposed to method-based results. We
often encumber ourselves by being wedded to particular methods that
don't always work. The CLP is a good example. We have very rigid
requirements that can't work in every instance. There has to be
some flexibility.
There are two related projects going on in Superfund presently that
will dovetail very well with this standard. One is called DAS for
those who love the acronyms; it's tied in with the long-term
contracting in Superfund and it is the Delivery of Analytical
Services. That component was left out of the strategy; it's
currently being addressed. Clearly the chemistry in the analytical
services required by Superfund will be a major component of this
standard.
The other is that we presently have an ongoing review of the
Superfund QA/QC to look at the program in its entirety, to find out
where there are shortcomings and where there can be improvements
made. This is being done in a TQM approach, so we have regional as
well as other input.
Thomas Morris
In the Hazardous Waste Remedial Actions Program (HAZRAP) we have
lots of customers. It's difficult because we've got the DOD
people, and even within DOD, we have the Army, the Navy, the
National Guard, and then we've got EPA and DOE. Everybody has a
different agenda so we have a pretty difficult time of it. We also
have all the subcontractors who do work for us that we oversee and
do project management for. I've worked in about four different
areas—all within Martin Marietta—and I fight the same battles,
time and time again, so maybe I can shed some light on areas that
will help in implementation of quality standards.
There are five keys to effective implementation of any quality
standard: understand cultural resistance to change; demonstrate
top management commitment; provide adequate education and training;
be willing to define the overall objectives and process
requirements at a global level; and ensure availability of
electronic user-friendly implementation techniques, tools, and
processes to help project team participants.
Respect for QA I'll talk about first. I expect that you all have
about the same level of respect from your project managers and your
upper level managers that I have, and I don't know how we get
through that. I've struggled with it for several years. I think
it's changing slowly. One of the problems was that they put QA
people into QA positions when they had nowhere else to put them.
There are some exceptions to that—we're probably those exceptions.
We have different ways we put things, and I think managers have
trouble understanding it sometimes.
To understand cultural resistance to change, I think you have to
analyze and pay attention to the basis for resistance. You've got
to actually look back in time and say, The managers that are in
power, what's their real heritage? What year did they grow up in?
I think you have to get to the point of understanding what their
real motivational drive is all about.
Our personnel people have done a lot of activities over the past
four years or so, such as using Myers-Briggs and those kinds of
tools. I had the benefit of knowing the guy in training who does
those and he's plotted all of our senior managers and they're all
power hungry. In seriousness, I point that out. It's real and
you have to learn to understand that it's there. I think part of
the reason why managers have difficulty with QA people is that
we're threatening to those managers. In many cases, of course, we
really do understand what we're talking about and it's not easy for
them to accept that.
Break down barriers that were established as a result of ego and
insecurity. That's tied to the real bottom-line needs. I think we
need to start using some psychology-oriented people to better
understand what drives people. You've got to deal with it.
Empower the people doing the work—that's embodied in TQM—and
listen to the people who actually do the work.
Involve mid-level managers. Even when people get on board with TQM
and even when the top managers speak the right words, whether they
practice it or not, we seem to somehow miss the mid-level managers.
They finally find out about it somewhere down the road, by hearsay,
and that's threatening for them because they haven't had a part in
it. The real commitment is when you say and do. And that's a pet
peeve I have about presentations. People get up and say what needs
to be done and never get down to actually doing it.
Encourage change for the better; emphasize continuous improvement.
Demonstrate top management commitment. Practice values
implementation. There's a lot of politics involved in decisions,
and sooner or later politics is going to drive you down. I don't
think you're going to get anywhere until you treat people like
people and listen to what they say and let them listen to what you
have to say.
Reward those willing to take risks and accept accountability.
Nobody wants to accept accountability. I understand why, too;
there's risks associated with it. There's legal liabilities
associated with it. We have major problems getting people—whether
it's EPA or whether it's the states—to accept accountability.
Say, okay, I'll put my signature down and I'll accept the
accountability for whatever method you're proposing, or accept a
plan that's been submitted. And you've got to reward the people
who take the risks and if they fail, they have to suffer some of
the consequences for that, but on the other hand, you've got to
take into account that they're willing to try and make some
decisions to move forward.
Make the QA function a key step in the upward mobility chain. I
think one of the things that will really demonstrate top management
commitment is when they start having the QA function be a place
where "fast trackers have to move through." Actually, once people
get into it, they do develop an understanding of it. But when
they're out on the side and you're bugging them about management
systems that need to be in place, it's just a pain for them. You
put them in that position and let them go through that and then let
them take it out and implement it. If they actually learn it and
you put them in a project position, then all of a sudden you have
it infused within where the work actually takes place.
Integrate methodologies for assuring quality into a defined line
organization procedural system. Within HAZRAP for our QA program
we're getting all of our procedures in place for doing business.
With our procedures manual, we've got a section called
"Administration" and of course it's got all of the "how do you hire
and interview people and how do you orient them," and that sort of
thing. But in the project management system section we've got not
only the things about "how do you plan a project and how do you use
a team concept and how do you estimate your costs and schedule it
and go through what procurement acquisition strategy you're going
to do," but also things such as quality management and DQO's, "how
do you do readiness reviews, how do you document non-conformances,
how do you resolve problems and get to root cause, how do you apply
lessons learned." Those kinds of things have always been pushed
off on the QA people, but they're really a part of doing the
project.
And so, we're trying to move as much of QA into the project as we
can, because document control, and all of the things that go along
with document control, is not a QA function but a records
management function. That's probably not even a project function.
We don't have document control in our project management section;
we've got it over in a section called documentation; and we let
people who understand all the myriad of regulations associated with
keeping documents for 75 years and microfiching take care of it.
That's not a QA responsibility.
QA is an integral part of a structured and disciplined project
management system, not something that some QA people do off to the
side. I think you have to adequately staff and fund a QA function
and train the project participants, depending on the level that you
involve QA into the project. You need to say, we're going to have
a QA function, we're going to recognize them as a viable entity,
and we're going to fund that operation because we believe those are
the kinds of management practices that need to be taken if we're
going to do work, regardless of what kind of work it is.
Or, if you're going to let project participants be the people who
document your non-conformances and do your problem reporting and
problem resolution, then you've got to fund training the project
managers to do those things. QA people have a better understanding
of that, but nothing says you can't train project managers to get
to what caused the problem and document it.
Respect the QA professionals' credibility and input equally. I
daresay that if it's like it is in some of the organizations and
programs that I've worked in, the QA person gets called in when
there's a problem, or when there's something that has to be
resolved or when there's been a non-conformance, and you've got to
fill out a form and the project person doesn't know how to fill it
out because he's never been trained to fill it out and never
respected the fact that you needed the form in the first place.
Provide responsibility, accountability, and authority for function-
specific decisions. Our documentation doesn't yet give us specific
enough responsibility, accountability, and authority for the things
that QA people are responsible for, or for the things that the
program people are responsible for.
Provide adequate education and training. I think we have to teach
management and project personnel what QA is, not from the
standpoint of the QA requirements, but from the basic practical
elements of QA. This is what document control is all about; it's
about making sure that the right work plan's out in the field. I
think you ought to teach the QA professionals a little more about
basic project management principles so we'll have a better
understanding of what the project people are facing. They're
facing a lot of problems, too, with controlling costs and schedule,
and overhead rates.
Teach everybody the basics of regulations, terminology, and
statistics. We've got instances where states require absolutely
knowing that there's not one part per billion anywhere on a site,
and we'll have other states who actually will say, you can do a
statistically designed, random-sampling approach for anything. You
can give me a 95 percent confidence level, or a 99 percent
confidence level, that the contaminant is below a certain limit,
and then we'll accept it. I don't think we use nearly enough
statistics.
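
As one small example of the kind of statistical statement being
described, the sketch below computes a one-sided 95 percent upper
confidence limit on a site mean from a random sample; the
concentrations and the t value are assumed example inputs, not a
prescribed method.

    # Illustrative sketch only: a one-sided 95% upper confidence limit (UCL)
    # on the mean concentration from a random sample. The data and the t value
    # are made-up examples; a real project would pull t from tables for the
    # actual sample size and confidence level.
    from math import sqrt
    from statistics import mean, stdev

    def one_sided_ucl(data, t_value):
        """Sample mean plus t times the standard error of the mean."""
        return mean(data) + t_value * stdev(data) / sqrt(len(data))

    # Ten hypothetical soil results in parts per billion; t(0.95, 9 df) is
    # about 1.833.
    concentrations = [3.1, 0.8, 2.4, 5.0, 1.2, 0.6, 4.3, 2.2, 1.9, 3.5]
    print(round(one_sided_ucl(concentrations, t_value=1.833), 2))
    # Compare the printed UCL against the cleanup limit for the site.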
Teach DQO's. I'm a strong supporter of DQO's. We've got several
hundred projects but we probably had one or two DQO's that were
actually used. Yet it's a wonderful process. We're having
difficulty in teaching our contractors about what DQO's are. We
have to sit with them in the hotel room—they've got a contract to
do the DQO's for us—and we have to say, here's some examples of
how DQO's need to be done. Then they feed it back to us. It's
reality.
I think we need to teach assessment and oversight to everybody,
because one of the things we're moving toward is not only
independent assessment such as the QA people typically do, and
audits and field surveillance, but we're also trying to teach the
project people to do self assessments all the way along where
they're looking at themselves, even if it's just for a small piece
of the activity.
I think the training courses need to be structured 25 percent
lecture, 75 percent practical. I think we need a lot of DQO
training and that training needs to be set so that even if it's a
simple example, you can work through the forms and fill in the
blanks and learn by doing.
And last, staff the training function with personnel qualified to
teach the subjects. We typically have training departments and
they're staffed with trainers and you can give them the slides and
they can go through the slides, but if they don't understand the
material that they're teaching, then they have no credibility with
you. The audience probably has more knowledge than the trainer
before they even begin to take the course.
Be willing to define the overall objective and process requirements
at a global level. Set the program structure up in the top, but
formulate the procedures from within the organization. We
formulated our list of procedures for doing business in HAZRAP at
the staff and department level, but now that we're assigning the
preparation of these procedures for doing business, we've got
secretaries doing the ones on secretarial work; we've got project
managers doing the one on DQO's and quality planning—not the QA
person, but the project people. We've got the people who actually
do the work doing it. Now we have that separate bit of oversight
in assuring that once they get a procedure together the
implementation of it does fulfill the requirements that have to be
fulfilled. But we try to get it written at the project level.
Drive for consistency, at least in approach. Maybe there are
slightly different requirements in CERCLA versus RCRA for the kinds
of information that need to be in a QA program plan, but the
general approach that you go about in doing a program plan should
be standardized. Trying to strive for consistency is a difficult
thing to do. But the only way that you can eat an elephant is one
bite at a time. It would be nice, though, if we had a consistent
approach for eating the elephant, so we started maybe up here instead
of back here all the time. But on the other hand, I think the
harmonization effort is starting to try and set up at a global
level, and from then begin to build all the parts.
Build any graded application into the procedures themselves. If
there's a need for a graded approach for a small site versus a
large site, or a site that may be either heavily contaminated or not as
critical, you can try to build that into the procedure and have a
procedure that has a graded approach in it, rather than saying that
we have this QA program for this kind of site, and we have this
kind of QA program for this kind of site, because controlling a
document is controlling a document. If you're really talking about
QA principles, at a minimum coordinate the building of the pieces
of the program to maximize the strengths of each part.
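
One way to picture building the graded application into the
procedure itself is a single procedure that looks up its
requirements from a site-category table, along the lines of the
sketch below; the categories and numbers are invented for
illustration only.

    # Illustrative sketch only: one procedure with a graded approach built in.
    # The site categories, review counts, and split frequencies are invented.
    GRADED_REQUIREMENTS = {
        "small_low_risk": {"independent_reviews": 1, "split_frequency": 0.05,
                           "field_audit": False},
        "large_complex":  {"independent_reviews": 2, "split_frequency": 0.10,
                           "field_audit": True},
        "critical_npl":   {"independent_reviews": 3, "split_frequency": 0.10,
                           "field_audit": True},
    }

    def requirements_for(site_category):
        """Same procedure for every site; only the looked-up values change."""
        return GRADED_REQUIREMENTS[site_category]

    print(requirements_for("large_complex"))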
There's great work that we face in all of these areas. CERCLA has
a work plan, a sampling analysis plan, a field sampling plan, QA
program plans, and health and safety plans. The Underground
Storage Tanks (UST) program has things such as initial response
measures and site characterizations. RCRA, CERCLA, UST, and NPDES
all overlap, and many of the elements are similar enough in nature
that I think if we could get people together and use the best parts
of them, we could come up with generic statements or generic work
plans.
As a matter of fact, we have a generic statement of work. We may
use it internally, but we have a generic statement of work that you
can pull the pieces from. It's on a computer and you can go in and
fill in the blanks. As you go through it you pick the things that
are applicable and pull them out, and the computer goes in and
through some WordPerfect manipulation puts together a statement of
work in a couple of hours instead of a couple of weeks. We're also
working on a generic work plan that leads to ensuring availability
of electronic user-friendly implementation techniques, tools, and
processes to help project participants. Our next step in this
generic work plan is to build an expert system where the screen
asks a series of project related questions about a specific project
in the language that a project manager can understand, and then
obtains his or her answers to those questions. For example, what
kind of a site is this? Is there water contamination? Is there
soil contamination? Is there both? As the manager answers, the
expert system automatically says okay, because there's soil
contamination, these are the potential applicable standards that go
with it. The project manager gets the particular words that need to
be in the statement of work from a database that already exists.
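
In the same spirit as that fill-in-the-blanks system, the sketch
below assembles a draft statement of work from canned clauses based
on yes/no answers; the questions and clause text are placeholders,
not the actual HAZRAP system.

    # Illustrative sketch only: assemble a draft statement of work from stored
    # text blocks, driven by yes/no answers to project questions. The questions
    # and clauses are placeholders standing in for a real clause database.
    CLAUSES = {
        "soil":  "Contractor shall collect and analyze soil samples per the "
                 "applicable soil standards listed in Appendix A.",
        "water": "Contractor shall install monitoring wells and analyze ground "
                 "water per the applicable water standards listed in Appendix B.",
    }

    QUESTIONS = [
        ("Is there soil contamination?", "soil"),
        ("Is there water contamination?", "water"),
    ]

    def build_statement_of_work(answers):
        """answers maps each question to True or False."""
        parts = ["STATEMENT OF WORK (draft)"]
        for question, clause_key in QUESTIONS:
            if answers.get(question):
                parts.append(CLAUSES[clause_key])
        return "\n\n".join(parts)

    print(build_statement_of_work({"Is there soil contamination?": True,
                                   "Is there water contamination?": False}))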
Structure software to respond transparently to project manager
language. We're working toward generic QA program plans, generic
statements of work, generic work plans, and health and safety
plans. Share lessons learned with everyone. I don't think we
share lessons learned enough, and we ought to share them in a user-
friendly lessons-learned system of some type. One of our agenda
items is to work with our contractors. We have 11 major general
order contractors who work with us. We'd like to get a lessons-
learned system that they're willing to share with one another.
They all face the same things; they-just face them in a different
area of the country.
We have one region that tells us we've got to use a certain kind of
decon procedure that calls for specific decontamination fluids to
be used, and we have other regions that say you can use whatever is
applicable including steam. So we run into the same kinds of
problems that have been mentioned in some of the other
presentations. Those are going to happen unless we work together
and we stand to benefit, because you are our customers too. We
have to deal with the regions and the states and get decisions
made, so I know it would benefit us. We want to work with you,
because we want to be in this business for a long time too.
Appreciate it.
-------
ECOLOGICAL MONITORING
PANEL DISCUSSION
Chair:
Robert Graves
EPA
Panelists:
James K. Andreasen
Fish and Wildlife Service
Adriana Y. Cantillo
National Oceanic and Atmospheric Administration
Thomas F. Cuffney
U.S. Geological Survey, Water Resources Division
Robert Graves
I am the Acting QA Coordinator for EMAP, which stands for
Environmental Monitoring and Assessment Program. This talk is not
going to be about EMAP per se, but rather about ecological
monitoring. Nancy had asked for my talk to cover a few different
objectives. One is to define what ecological monitoring is; two is
to describe why harmonization/standardization of QA is important in
ecological monitoring; three is to give a brief overview of EPA's
QA program; four is to link EPA's QA program to ecological
monitoring; and five is to link EMAP's QA structure to EPA's QA
requirements.
To give you a little background: from my perspective, EPA has two
broad objectives: one is to protect human health and the other is
to protect the environment. To achieve these objectives, EPA works
within the area of risk assessment and risk management. The Agency
has in the past focused mainly on risk assessment as it applies to
human health. That is, we regulate contaminants for the most part,
to protect the health of human beings.
However, we are now, I believe, re-emphasizing that other objective
we have within the Agency, and that is to protect the health of the
environment. I think we're beginning to realize that our
biological resources sustain our existence, and that to protect our
own health, we have to protect the biosphere in which we live.
Accordingly, ecological monitoring research within the Agency is
starting to get more emphasis than it had in the past, and that is
one of the reasons I believe that EMAP has developed. EMAP is
probably the Agency's largest ecological monitoring program.
89
-------
Let me begin by giving you a brief definition of what ecological
monitoring is. It's the measurement of abiotic and biotic factors
in the environment to assess current conditions, that is, status,
and to identify and warn of changes in biological resources, which
is trends.
What are the QA objectives of any ecological monitoring program?
One is to ensure that the data generated are of sufficient quality
to meet program needs. Two is to ensure that the procedures and
processes used are such that they will produce the desired results.
Three is to ensure that all procedures and processes and data are
sufficiently documented. Fourth, which I think is the crux of
harmonization, is to ensure that data generated in one program,
say, EPA's, are defined well enough that they can be validly compared
to those data generated in other programs, that is, programs such
as the National Oceanic and Atmospheric Administration's (NOAA)
Status and Trends, U.S. Fish and Wildlife Service, U.S. Geological
Survey (USGS), and a lot of other Federal agencies and states and
other research efforts.
The purpose behind harmonization, I think, is to have data
comparability. If we can have ecological monitoring programs that
give us data comparability, we can then put all the data together
and make some real integrated assessments.
Harmonization/standardization, the way I look at it, touches on
various methods. One is sampling methods. The one thing I think
is true with ecological monitoring, and it's probably true of other
programs as well, is that when I speak of methods, I don't
necessarily think of the laboratory methods. I think the biggest
crux of ecological monitoring is to harmonize sampling methods.
For example, when you go out in the field, how do you collect your
samples so that the way EPA collects its samples is compatible with
the way the Geological Survey collects its samples? If we're
looking at fish tissue, for example, are we looking at the same
tissue? Are we looking at the edible portion of the fish, are we
looking at the whole fish, are we looking at livers? How do we
take the fillets if we're looking at the edible portion; is it skin
on or skin off? All these factors will have a great impact on
whether we can pool that data in the end. That, I think, is more
important than the analytical techniques used.
The analytical techniques are comparable—and you'll hear Adriana
talk about NOAA's program where they don't really specify
standardized methods; what they do instead is specify performance
criteria for the laboratory so that laboratories can use any
laboratory methods they want as long as they generate data of
a certain precision and accuracy. I think that's legitimate. But
if we don't have sampling protocols that are standardized, we'll
still never be able to pool that data in the end, and I think
that's where we need to focus a lot of our attention—not only
sampling methods in that sense, but also things like statistical
90
-------
methods. How do we design our protocols? Are we using randomized
samples or non-randomized samples? Again, I think we need to get
our statisticians involved upfront to see if we take our samples in
different fashions and whether or not we'll be able to integrate
that data together in the end.
QA/QC techniques are another big issue. There are certain QC
techniques that are mandatory, such as blanks or calibration
standards. I think that from a laboratory perspective, if all the
Federal agencies could agree on a certain QC program, it would make
it a lot more cost beneficial for us to run samples rather than
what happens right now. EPA is probably one of the worst offenders
in this realm. No matter what program you're working for, it has
completely different QC requirements. And most of these
requirements are not requirements because they're really necessary
for that particular program, but they are requirements that some
manager puts in because that's their particular bias. Therefore,
what happens in the laboratory is that they've got to shift QC
techniques as they do samples for the drinking water program, shift
again as they do samples for the NPDES program, and then shift a
third time when they do samples for Superfund, for RCRA, or for any
of the others.
Method Detection Limits (MDL) determinations—I think that's
another important thing. For a lot of our methods, we do list
MDL's, but the way that we calculate these MDL's varies from method
to method in some cases; therefore you can't get a good feel for
whether method A is really comparable to method B, or at least goes
down to the same levels.
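For reference, one widely used convention (EPA's replicate-based
procedure in 40 CFR Part 136, Appendix B) computes the MDL as the
sample standard deviation of low-level spiked replicates multiplied
by a one-sided Student's t value at 99% confidence; other programs
use different multipliers or numbers of replicates, which is exactly
the comparability problem described above. A brief sketch with
hypothetical replicate values:

    # Sketch of a replicate-based MDL calculation (99% one-sided Student's t);
    # the seven replicate results below are hypothetical.
    from statistics import stdev

    replicates = [1.9, 2.1, 2.0, 2.3, 1.8, 2.2, 2.0]  # low-level spikes, ug/L
    t_99 = 3.143  # one-sided t value, 6 degrees of freedom, 99% confidence

    mdl = t_99 * stdev(replicates)
    print(f"MDL = {mdl:.2f} ug/L")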
Definitions are another issue; I think that was mentioned the other
day. Laboratory certification on a national scale was another one.
So are indicator selection with respect to ecological monitoring and
computer hardware/software standardization, which I think was
addressed when they were talking about GALP's.
If we achieve this, what will this buy us? What it will do is buy
us integrated programs; it'll give us an integrated database. We
can then take inputs from either the Federal agencies that are
doing ecological monitoring, whether it be EPA, NOAA, the Bureau of
Land Management (BLM), U.S. Department of Agriculture (USDA), or
any other agency, or any of the state programs that are doing
ecological monitoring, and other international programs, and feed
it into a common database that will allow us to do integrated
assessment reports.
Let's take a quick look at the QA components within EPA. The way
I look at what QAMS has dictated to the rest of us within the
Agency is that we have an umbrella document called a quality
assurance program plan (QAPP). A QAPP more or less sets the
general guidance of how we're going to operate a program. Under
that we have various other documents that we're mandated to put
91
-------
together, such as data quality objectives (DQO's), quality
assurance project plans (QAPjP's), audits, and audit reports.
Why do we require these particular programs? The rationale is that
the QA program establishes the principles and guidance. The DQO's
establish what the customer's needs are, and I think it's important
that we go back and determine who our clients are and exactly what
it is that they expect out of these programs. The QAPjP then
establishes the process for satisfying customer needs. The audits,
in turn, evaluate that process to improve it and to judge its
applicability to satisfy customer needs. Finally, the reports are
the product to meet customer needs.
How does this all tie in with respect to ecological monitoring
programs? What steps does one go through when one does the
planning for an ecological monitoring program? This is the
scenario that I came up with, and I tried to tie that into the
various elements of what's required under EPA.
First, you have the initial preparation where you define the
objectives of the program. Basically, that is when you're asking
questions such as what data are required, what is the end use of
data, and what is the total allowable error? That is the
vernacular of EPA's DQO's. Then you always have that thing called
resource allocation, which is the budget—that seems to be the
roadblock in most of our programs. We never have enough resources
to really do what it is that we plan to do. That's a reality check
and we have to pare back so that we can do our program within our
prescribed budget.
The system design is where you do your research plans, your QA
project plans, your design plans, data processing (you have your
computer hardware/software needs), and field operations, which
describes your QA project plans and your logistics plans.
Analytical laboratory operations from the QA perspective are
described in the QA project plans. You have data reduction and
analysis and validation, and again, the procedure for that should
be described in your QA project plan. You have your audit reports
and your corrective action memos. System certification—you go out
and do pilot studies and demonstration projects. Finally, you
conduct your full scale study which is implementation. Then you
have your data reporting, which is your final report.
How does this process fit into EMAP, which is the Agency's
ecological monitoring assessment program? And when I say the
Agency's ecological monitoring assessment program, I think I should
step back, because that is not quite true. EPA is making a
concerted effort to make EMAP not an EPA program but a Federal
program. We're looking for partners out there in the Federal
sector to work with us on EMAP. And I think we're doing a fairly
good job at working with the other Federal agencies.
92
-------
We did a pilot study this spring in the Virginian Province, which is
an estuarine system, and we worked very closely with NOAA's Status and
Trends program. We're working closely with the BLM when it comes
to arid lands, and we work with the USDA Forest Service with
respect to the forest monitoring effort that was done on the East
Coast this last year. We are also forming partnerships with other
Federal agencies.
The EMAP process of going through this is that first we do a pilot
study. The pilot study entails designing your study, conducting
your study, and of course, interpreting your results. If it was
successful, then you move on to a demonstration. If it's not, you
go back and do a pilot study again. A pilot study is a very small
study where you test all your processes and all your hypotheses to
determine how you should design your study and choose the
indicators to tell you what it is that you want.
A demonstration project, on the other hand, is a large-scale pilot
study done on an eco-regional basis. It allows you to test things
such as spatial variability to make sure that it isn't so great
that it swamps out the variability of your measurements. And again
you go through the same scenario, and if it's successful you move
on to implementation, which is where you take it on a national
scale.
One of the things, though, with implementation is that we're never
quite satisfied. We don't implement it and just stick with it. We
have this thing called continuous improvement, and we keep
recycling back. So even though you're in the implementation phase,
you've got to continually reassess what it is you're doing, relook
at your indicators, see if it's telling you what you want it to
tell you, and redesign on a year-to-year basis.
How does that fit in with the QA techniques that are mandated by
EPA? The QA tools—DQO's, QAPjP's, and audits—are required in
each stage of the process, at the pilot stage, the demonstration
stage, and the implementation stage. The difference is that the
content of these various documents differs depending on which stage
of the project you're in. For example, let's take the QA project
plan. You may at the pilot stage make it more extensive than it is
at the implementation stage, because at the pilot stage what you
also want to do is to test all those QC techniques out there to
determine what QC techniques are needed to control those parameters
that you're measuring. That way, when you go to the implementation
stage you can do the most cost-effective QC on a national basis.
When you're doing basically thousands of samples, you don't have to
be doing all that same QA/QC that you did at the pilot stage when
you were trying to learn and assess.
If we achieve our goals, we'll end up with a national integrated
ecological monitoring program where many different Federal agencies
will be contributing to the EMAP database. You'll have EPA
93
-------
contributing, but you'll also have the Fish and Wildlife Service,
USGS, BLM, NOAA, the states, local governments, and other
research efforts that can all be feeding into that.
If we're really successful—maybe another decade out—we'll end up
with an international integrated ecological monitoring program.
EMAP has already made some strides in that direction. We've talked
to Brazil and they do have the rain forests on one of the EMAP
grids; we've also been over to Australia with the arid-lands people
over there, and Australia is very interested in the EMAP concept and
in becoming part of the same grid system. So we are moving toward an
international ecological monitoring program.
In closing, I'd like to reiterate what Robert Layton, the Regional
Administrator for Region 6, said yesterday. He charged this group
to develop data collection, processing, and reporting techniques that
transcend agency boundaries. I think that's really what
harmonization is all about, at least on the ecological monitoring
scale. We need to develop these types of techniques that transcend
agency boundaries so that data from NOAA and the Fish and
Wildlife Service can all be pooled and put into a common database,
and we can really get around to doing an integrated data
assessment. Thank you.
Adriana Y. Cantillo
I'm here to describe briefly the National Oceanic and
Atmospheric Administration (NOAA) National Status and Trends
Program, which has just started its sixth year of sampling. The QA
function of status and trends is an integral part of the program
and was included in the program plan from its inception. I must
add that this was done to the chagrin of the potential contractors
because we're dealing with a community in the marine sciences field
that is not used to QA, and initially the comments were: we don't
need to do QA, we know what we're doing. But now they have decided
that this is an excellent idea, and in presentations that I have
seen, they make a point of saying that they participate in the Status and
Trends QA Program. So they're actually publicizing the fact that
they take part in it.
The Status and Trends Program measures the current status of and
any changes over time in the environmental health of the estuarine
and coastal waters of the United States. It has six major pieces—
benthic surveillance project, mussel watch project, biological
surveys, the QA program, the specimen bank, and historical trends
assessment. The major pieces that I'm going to discuss are benthic
surveillance, mussel watch, and the QA program.
If anybody's interested in further information on the program, feel
free to drop me a line. We have almost 100 publications on results
and interpretations from the program—we'll be happy to get a list
of publications to you.
94
-------
Benthic surveillance collects sediments and bottom-dwelling fish
biennially from around 75 sites in the United States. All
analysis is done by NOAA's National Marine Fisheries Service. The
sampling sites are determined by where the fish are. So over time
we may have to go to different places in an estuary. The fish are
collected by trawling, so we are dependent upon where the fish
happen to be on that particular day. We are also looking at the
incidence of fish disease and tumors.
Mussel watch is a continuation of the 1970's Mussel Watch Study
that was conducted by EPA and other groups. In this case sediments
and bivalves (mussels/oysters) are sampled around the country
yearly at about 220 sites. Texas A & M University and Battelle are
the contractors for this study. The sampling sites are governed
simply by where the mussel or oyster beds are. We collect natural
specimens so where they live is where we go pick them up. We don't
have, again, any choice in where the sampling sediment sites are.
The QA program will document all the sampling protocols and
analytical procedures used in the Status and Trends Program. This
becomes very important because we have to pool samples
together to get enough material to analyze. And so it's very
important that the various contractors know that they have to
prepare composite samples of X animals and what part of the animal
they're supposed to collect and be consistent about it over time.
We also want to reduce variations inside a particular laboratory
and between laboratories. We want to use the QA program to
eventually compare our data to the data generated by other
programs.
In methodology, we're quite different from EPA. We do not specify
any analytical methodology. The laboratories are free to use
whatever method works. This frees the laboratory—especially the
contractor laboratories—to use analytical instrumentation that
they may have on hand and have the expertise to use, expertise
that perhaps is not common to everybody. This is quite
all right, as long as they can produce results in the
intercomparison exercises that are similar to everybody else's.
This is very much appreciated by the contractors and it eliminates
a lot of headaches for us.
We require the use of standard reference materials and control
samples. We contract with the National Research Council of Canada,
and with the National Institute of Standards and Technology (NIST)
in the preparation of control materials and standard reference
materials (SRM's). Some of the materials that have been prepared
for Status and Trends have become SRM's.
All the methodology and sampling protocols are documented. We're
in the process of doing that now. It's difficult because the
contractors don't seem to want to provide the level of detail
that we want for the documents we're preparing. We
95
-------
do want to put on paper the level of detail, down to the part
number of the plastic ware used, because we do not know whether
10 years in the future that may be something that will be
critical.
So we're doing a very detailed documentation of methodology. And
we're looking at the addition of all the results of intercomparison
exercises into the national Status and Trends database, so that
anybody looking at our Status and Trends data will have available
the results of intercomparison exercises, things such as detection
limits, what the precision was, how the laboratories compare with
each other, and how their accuracy was compared with SRM's. This
will all be part of the electronic database.
The most important part of the QA program is that each year there
is an intercomparison exercise that takes place and the NOAA
contractors, including the National Marine Fisheries Service, are
required to participate. The organic intercomparison exercise is
prepared and run by the National Institute of Standards and
Technology (NIST), and the inorganic by the National Research
Council of Canada (NRC). NOAA does not do anything in the
laboratory; we do not have a lab in the office; we contract this to
NIST and NRC. So they act as an independent sort of judge or
evaluator of how our contractors are doing.
Sample types in the past have included freeze-dried sediments and
extracted sediments (so that you take out the variable of the
extraction) and homogenized frozen tissues. They now want to explore
the possibility of tissue samples from matrices such as mussels,
other bivalves, and fish, and also sediments from clean areas and
contaminated areas. The way this works is that every year at the
beginning of the year, the labs get the samples from NRC and NIST;
they analyze the results; and then they send back the results to
the originating organizations. They, in turn, get all the results
together, and every year they have a meeting with all the
laboratories and the results are presented.
Our core laboratories have been participating in the program since
it began five years ago. They are doing very well in the analysis
of the intercomparison exercise samples. We just opened up the QA
program to other laboratories; this is a way of showing that
participating in these exercises has a beneficial side effect. Our
core labs do very well compared to NRC, and the new labs still have
some work to do.
Every year there is a meeting in the late fall or winter once the
results of these intercomparison exercises are sent back to NRC and
NIST, where the laboratories that have participated and would like
to go to the workshop can attend. The purpose of the QA workshop
is to show everybody's results and discuss what problems there may
be in common and what the probable solutions are. It's almost like
a teaching situation. During that workshop the laboratories and
96
-------
NIST or NRC decide what kind of samples they want to do next. So
the choice of sample for the next intercomparison exercise is, most
of the time, the choice of the laboratories.
In other words, they may say, we're having problems with analyzing
a real clean sediment, or we have analyzed clean sediments before
but we're not sure how we would do with a real contaminated
sediment, so we'd like to do that. Or, we would really like to
test and see what our extraction is like. Can you send us
something that is not extracted so we can add that into the degree
of difficulty?
It changes from year to year and that's why it's not possible to
show you an improvement of one lab over time. The samples change
and the degree of difficulty changes as the labs get better and
better.
In future developments, some of the EMAP labs are now going to take
part in our QA program, especially the ones that will be working in
the coastal area. We also are going to open up the intercomparison
exercises to non-NOAA contractor labs, or any Federal or state
government lab that would like to participate.
The response to this has been overwhelming. We have about 30 labs
already that are going to participate in the inorganic
intercomparison exercise. The organic exercise is completely full
and we have a waiting list of about 25 labs. A lot of the comments
from the field are that we're doing such and such analysis but
we're not sure whether we're doing it right. So the response has
been very good.
We haven't been able to open it up to more labs simply because of
funds. It's very expensive to do this, but probably slowly we'll
be opening this up. We have found that for marine environmental
samples there isn't that much QA available. The marine chemists
are an independent breed and a lot of them have become chemists
because they need to determine something in the marine environment.
So they don't see the need for QA and don't like to do it. But
very slowly the point is getting across.
We're also beginning to work with international groups. Right now
we have the International Mussel Watch that is scheduled to begin
in the Caribbean, probably this year or next, and they will be part
of an intercomparison exercise since we also do Mussel Watch.
Slowly we're spreading and trying to get the word out. If anyone
would like to participate in the trace metal intercomparison
exercise, please give me a call.
James K. Andreasen
The mission of the Fish and Wildlife Service is to conserve,
protect, and enhance the fish and wildlife resources of our country
97
-------
and their habitats. I want to briefly describe an ongoing
monitoring program that we've had, tell you a little about some of
the results of that program, and then discuss a new effort that we
have started.
During all this I'd like you to consider how this all interacts
with the QA requirements. Consider the complexity of the
environment, the number of different species that are out there
that should be monitored in order to say something about the health
of the environment. Think about how diverse those habitats are and
the difference between the life history of each of those critters
we're trying to collect, different collecting methods that are
needed, storage techniques, and everything else that goes along
with that. Maybe you'll get some idea of why we think there is a
tremendous need for having harmonization within environmental
sampling.
Like EPA, we're a strongly decentralized organization. We have
seven geographical regions in the country plus a separate research
division. That adds another level of complexity on top of the whole
thing.
The Service has responsibility under several Federal laws and
international treaties to manage migratory birds, threatened and
endangered species, anadromous fish, and certain marine mammals and
lands that are under the control of the Service, such as our
National Wildlife Refuges and National Fish Hatcheries. Currently
there are about 475 wildlife refuges around the country and this
consists of some 89 million acres of land. We are in the process
of developing a monitoring program for each one of those sites.
These are some of the resources that we're charged with protecting:
migratory birds, endangered species, and brown pelicans. We have
our various wildlife refuges and they're all being assaulted by
various kinds of environmental impacts. All you have to do is open
up any newspaper and you'll see that there are daily new problems
coming along that are going to affect one of those species. A lot
of things we found in our previous monitoring efforts that showed
the impact of environmental contaminants on natural resources were
from our irrigation drainwater program which we've had functioning
throughout the west. We saw impacts from the Alaskan oil spill,
not only the critters being killed by the oil directly, but the
secondary effects on bald eagles from eating those oiled birds.
We're looking at a lot of these things, not strictly for the direct
impact on the critters, but rather the fact that a lot of these
birds and other fish eat smaller things that are in the
environment, so we're concerned about the impact on the birds
consuming contaminated prey in some circumstances.
I want to describe briefly a program that was originally called the
National Pesticide Monitoring Program (NPMP). It was set up by
Congress in 1967, right after Rachel Carson's book hit the stands.
98
-------
Originally there were many agencies involved in the project.
Gradually, over the years, most of them dropped out. The Fish and
Wildlife Service kept the thing going and we're still collecting
fish—not primarily pesticides any more—but we're looking at the
whole suite of environmental contaminants similar to the list that
the National Marine Fisheries Service is using for the Mussel Watch
Program. We're looking at that same suite of contaminants still.
We're going for pesticides, PCB's, and a suite of metals.
When we originally set up the stations for this program, we tried
to select areas that would serve as fixed stations over time and
that would integrate a large watershed area for the fish
part of the program. We originally also looked at whole-body
starlings, and we looked at duck wings. Those programs, because of
the extreme migratory nature of birds, were not as successful as
the fish program, so we've dropped those in the last few years; but
we are going to reinstitute sampling work for
birds. We've tried to pick sites so we could integrate a lot of
watershed areas but yet not be impacted by a point source.
This is what the network ended up looking like. There are
presently 112 stations in the program. Samples are collected on a
two-year cycle, and we analyze these samples in-house or at the
Columbia National Fisheries Contaminant Research Lab in Columbia,
Missouri. It takes about two years to get this many samples
analyzed. Here again we can only sample the species that are
there, but we tried to establish criteria for each one of the
different kinds of sites. It was cold water, warm water; we tried
to look at a predator species and another species that was more
tied to the bottom.
Over the years we've decided that we would only sample fish for
whole body residues. We don't do fillets for this program because
we're more interested in what the consumer organisms are eating and
the impact on their health, not so much on human health, although
this data set has been used in several publications to try to get
some idea of what the impact might be on human health.
We look at composite samples. Our directions specify that at each
station we would collect two samples of a bottom dwelling fish and
one sample of a predatory species. There are five fish in each
composite sample, and they are to be selected for uniform
size. We established the criteria: the fish weren't supposed to be
flopped around on the bottom of the boat where they would pick up
a lot of oil and grease. We wrapped them, froze them
immediately, and then shipped them off for analysis.
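The collection criteria just described (two bottom-dwelling
composites and one predator composite per station, five fish per
composite, selected for uniform size) lend themselves to a simple
consistency check before shipment. The sketch below is only
illustrative; the 25 percent length-spread tolerance and the data
values are assumptions, not Service requirements.

    # Hypothetical field check against the composite criteria described above.
    # The 25% length-spread tolerance is an illustrative assumption.

    def composite_ok(lengths_mm, fish_per_composite=5, max_spread=0.25):
        """True if the composite has the required number of fish and the
        length spread is within the assumed uniform-size tolerance."""
        if len(lengths_mm) != fish_per_composite:
            return False
        spread = (max(lengths_mm) - min(lengths_mm)) / min(lengths_mm)
        return spread <= max_spread

    station = {
        "bottom dweller composite 1": [410, 395, 430, 402, 418],  # lengths in mm
        "bottom dweller composite 2": [388, 401, 395, 410, 399],
        "predator composite":         [350, 365, 348, 372, 480],  # one oversized fish
    }

    for name, lengths in station.items():
        print(name, "-", "OK" if composite_ok(lengths) else "re-check before shipping")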
All the samples in the field are collected by our own biologists.
We have about 60 field stations around the country and 110 or so
biologists that work in the program. They do things other than
just collect these samples. So we are collecting in-house; we do
the analysis in-house also. Common carp is one of the species that
99
-------
we've collected at almost all of the sites in the country. They're
an introduced species, but they're now ubiquitous. We also collect
largemouth bass.
Over the years, looking at the number of stations we've collected
from and the number of samples, we have well over 3,500 individual
composite samples of fish in this program, and cumulatively we've
collected at over 1,000 locations.
One of the problems with QA in the program has been consistency over
time: analytical techniques have improved over the
years. When we first started, people didn't know there were PCB's
in the environment; therefore our early organochlorine residue
data also included PCB's. We were later able to separate that out.
Same thing with toxaphene—we didn't know about toxaphene
originally. Toxaphene has been detected at stations in watersheds
where it was never used, indicating aerial transport of the
chemical.
Generally, over the program, there has been a decline in the
concentration of some of these things in the environment. Isomers
of DDT and dieldrin show a good decline over the years. The same
thing with PCB's. We've seen a pretty good decline in PCB's.
The declines are not quite as dramatic, but as the technology has
improved, we're more sensitive to what's out there.
Some of the elements, though, don't show this same kind of a
decline. We haven't seen a decrease in selenium concentrations
across the country; its residue seems to be staying about the same.
The program has been very successful in showing declines of some
elements and of some compounds. There's been a statistically
significant decline in DDT, PCB's, and dieldrin between 1976 and
1986; this indicates that the ban on these chemicals has been
effective in removing them from the environment.
The data also indicate that there's a geographic spread of PCB's
and toxaphene in the environment. Toxaphene was never applied
around the Great Lakes, yet we're finding significant residues of
toxaphene in Great Lake fishes. I think this is primarily due to
aerial transport, something that we hadn't considered before. PCB
residues continue to be a problem in parts of the country. In
Hudson River fish, concentrations are greater than 10 parts per
million, while PCB residues in other parts of the heavily
industrialized northeast are somewhere between 2 and 5 parts per
million. The data set has pointed out the need to do more
intensive studies in some areas where residues have been high over
the years. We've gone back in and done more intensive sampling.
In order for a monitoring program to remain effective, it has to be
able to adapt to new and previously undetected compounds. It has to
be able to respond to new locations that are needed, and we need to
be able to adapt to new management perspectives for what kind of
100
-------
data are needed. With that in mind, the Fish and Wildlife Service
has designed a new program that we're calling Biomonitoring of
Environmental Status and Trends, or the BEST program. We have
completed a design document. This has gone out for review across
the country. BEST is going to be the overall umbrella for our
program which will be site specific sampling, it'll be probability
based, but within fixed stations. We're hoping to be able to
integrate all these different things into one program that will
answer the questions that are being asked of us by Congress.
The Fish and Wildlife Service has also signed Memoranda of
Understanding (MOU) with EPA for the EMAP program. We're involved
with EPA on several other programs that involve monitoring and
doing assessments for nonpoint source pollution through our
irrigation drainwater quality program. We're partners with USGS in
their National Water Quality Assessment (NAWQA) program, which
you'll hear about next. And we see a need for having harmonization
of these collection programs across all the different programs in
the government.
We're hoping that there's a new day dawning for the environment.
Through all these various programs we will have a greater concern
for our resources.
Thomas F. Cuffney
It's a nice feeling to be able to come and visit with EPA and give
a presentation. But I must confess that I was a little uneasy to
come here and address this group. For one thing, I am not a
Quality Assurance Officer, though I am and have been involved with
our branch QA staff, particularly Dave Erdman, Bill Shampine, and Tom
Maloney. In addition, I'm not a chemist. And beyond that, I'm not
an engineer. So the question comes up, what am I? Well, I'm an
ecologist. I am one, as I understand it, of only two people with
that job description currently in the U.S. Geological Survey
(USGS).
I'm also going to be talking about something I think is very
different from what has been talked about in terms of QA here—QA
applied to biological programs, specifically programs that do not
involve chemical measurements. In addition, I come from an
agency which has no regulatory responsibility and which neither
owns nor manages any lands. This actually has a lot of advantages
associated with it.
I do come from an agency that has a long history of providing high
quality and long term data and interpretations for its customers.
My function in the National Water Quality Assessment (NAWQA)
program is the development of protocols for the ecological surveys.
These QA programs that I'm going to talk about are interesting in
the fact that they're being driven by the science and not by any
type of regulatory requirements. I put this talk together before
101
-------
I had an opportunity to read the harmonization document. I ask
that you focus on the intent and content of this talk relative to
the harmonization document and not on the vocabulary.
The objectives of NAWQA are to provide nationally consistent
descriptions of current water quality conditions, to define long-
term trends in water quality and finally, to identify, describe,
and explain, to the extent possible, the major natural and human
factors affecting water quality conditions and trends. So three
themes are running through this: description, trends, and cause
and effect studies.
There are three large scale components of NAWQA: surface waters,
ground waters, and biology. The surface water and ground water
components each have chemistry and hydrology involved in them. The
biology is basically tissue surveys, which most people here are
probably comfortable with, and also the ecological surveys.
NAWQA is a large scale, long term program. We have 60 basins
across the United States, including Alaska and Hawaii, which are
being studied. At any one time, 20 basins will be under intensive
study. It will take 10 years to complete
an entire cycle involving all 60 study units. And each basin in
there is a study unit.
We're dealing with a hierarchy of water quality issues within the
NAWQA program. The study unit, or basin, level is the
level at which the data collection occurs. Data collected at that
level will be used to look at issues at a variety of levels
increasing in spatial content: local water quality issues,
regional water quality issues, and national water quality issues.
So in terms of QA, we need to be producing data which can be
applied across those different levels. In addition, looking at
harmonization across Federal agencies, it will be nice if we can
provide information compatible with other programs that want to use
this information at different levels.
The goal of the ecological surveys is to characterize the
distribution and relative abundance of biological communities.
We're looking at three community types. One is the benthic
invertebrates, which I am most involved with; the second one is the
algae; and the third is the fish communities. In looking at the
distribution and relative abundance of these communities in terms
of water quality parameters, there are
three elements involved—distribution, which implies spatial
characterization of communities; relative abundance, which implies
enumeration; and the biological communities, which imply taxonomy.
The QA/QC needs for benthic invertebrates are that we need a
consistency in field collections. If we're going to compile data
across regions, across basins, we need to be dealing with
consistent field collection methods. We need consistent lab
102
-------
processing—in other words, sorting. A lot of what goes into
processing these samples is physically removing organisms from a
matrix of organic material and sediment. We need accuracy and
consistency in identifications, not only at the time that the
samples are identified, but also because the science of taxonomy is
constantly changing. What is listed as species A this year may be
species B next year. You have to keep on top of those changes if
you're going to have data which are truly timeless. We also need
accuracy and consistency in the enumeration, in the actual
processing or counting of these invertebrates as they are
collected.
The way that we are trying to get to these objectives within the
NAWQA program is to develop consistent approaches for field and
laboratory processing. There are at least five parameters listed
for QA for our field collections. The first item is standard
protocols. That's what I'm involved with at this point, with
developing sampling protocols which can be used across the nation
in streams of different sizes. We typically use the term protocol
and don't use SOP's, though they pretty much serve the same
function. I want to make a point about harmonization across
Federal agencies and that is that we should be more concerned with
the intent and content and less with the vocabulary of what we're
using. It's probably very difficult to get another agency to
change what they're calling an item. It's probably far easier to
get them to include extra material in a protocol that would satisfy
your requirements for an SOP.
We're also involved with formal training through our Denver
Training Center, where we will train the project personnel who will
be collecting the samples. Training involves more than just
teaching these individuals how to go out and physically collect the
sample. We need to train these individuals in the philosophy
behind why the sample is being collected, and also in some of the
theories about the operation of streams so that we can obtain from
them feedback in terms of what's going on in the field and better
design the program as we go along.
Documentation, of course, is a key issue. We take copious field
notes in the survey. The problem with field notes is they're
seldom seen again. We have standard data forms which we are using,
which also tend to get lost. This lack of accessibility for field
notes is a problem.
Under our new computerization scheme, which is our National Water
Information System II, we are working toward incorporating
documentation into the database itself, even to the point where
we're discussing the possibility of scanning field notes into the
database so that if you were working with this data and you needed
to call up that information, you could access it. In this current
day and age, if it's not in the computer, it doesn't get used. And
we need to work toward better computerization of our support
103
-------
information for the data that actually goes in the database.
Another important QA issue here is our interstudy unit
communication. We have regional biology teams which serve as
advisory boards for each of the study units; we have a Fish and
Wildlife person who will be part of that biology team. In addition
to the biology teams advising the study units, there will be study
unit liaison committees, which will be made up of people from
Federal, state, and local governments, as well as university people
who will help the projects develop their work plan.
Sample processing QA/QC for these benthic invertebrate samples
becomes somewhat tied up with the whole issue of contracting. What
we are looking to achieve is standardized contracts that build in
QA/QC checks that are provided by the contractor and can be used by
all the study units.
We also are working toward implementing a contractor evaluation or
certification process so that we can identify laboratories which
can provide us the services that we need and also develop a
formalized process by which we can weed out those organizations
which bid on a project, but do not have the capability to provide
us with the services we need.
We're also looking at regionalizing contract laboratories so that
contract laboratories in a specific region, with specific
expertise in the taxonomy, will be receiving the samples. This will
help us, I think, both in terms of logistically controlling the
contracting process and overseeing the laboratory, and in
ensuring that these people are familiar with the fauna, which
varies considerably across the country.
We're instituting USGS QA/QC laboratory oversight, which will
involve actually putting together a laboratory with people to
monitor contractor compliance so that we can see how well these
contractors are doing in processing the samples and ensure that
we're getting accurate and consistent information. We're also
working on the computerization of taxonomy, with QA/QC checks and
sample tracking, so that as information is entered into the
computer, it will tell us that a name is valid, or flag the entry
to tell us that the name is not valid, that the authority
associated with the name is not valid, or that the species may not
even occur in that region of the country. This is an important
check for us. This, along with the QA/QC checks and sample
tracking, will be in our National Water Information System, too.
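A minimal sketch of the kind of name-and-range check described above
is given below. The taxa, regions, and data structure are
hypothetical stand-ins, not the NODC species codes or the actual
National Water Information System logic.

    # Hypothetical sketch of an automated taxonomy check: flag names that are
    # not in the reference list or that are reported outside their known range.

    VALID_TAXA = {
        "Hydropsyche betteni": {"Northeast", "Southeast", "Midwest"},
        "Baetis tricaudatus": {"West", "Midwest"},
    }

    def check_identification(name, region):
        """Return 'accepted' or a flag explaining why the record is suspect."""
        regions = VALID_TAXA.get(name)
        if regions is None:
            return f"FLAG: '{name}' is not a valid name in the reference taxonomy"
        if region not in regions:
            return f"FLAG: '{name}' is not known to occur in the {region} region"
        return "accepted"

    print(check_identification("Hydropsyche betteni", "Midwest"))   # accepted
    print(check_identification("Baetis tricaudatus", "Southeast"))  # out-of-range flag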
We're also trying to develop a standardized taxonomy database
employing the NOAA National Oceanic Data Center Species Codes.
There's an interesting story about harmonization here. This body
of taxonomic information is maintained by NOAA. It contains
information on everything from antelope down to protozoa. The
104
-------
information related to marine taxonomy is very good and very well
maintained. The information related to fresh water biology is not
very well maintained. As you can imagine, NOAA does not have as
much interest in the fresh side as the marine.
But the USGS and EPA are both interested in using this database and
incorporating it into their databases to keep track of the
taxonomy. So the USGS is funding a NOAA person half-time to update
this list. EPA is also cooperating with that.
Another way to ensure quality is to deposit voucher specimens in
outside collections. The voucher specimens are physical organisms
which represent the name that you're attaching to them. And you
put the specimen out there in an outside laboratory, a museum, or
university somewhere, where other people who are specialists in
this field can look at that specimen and decide whether you've
named it right and inform you of any current changes that have
occurred in the nomenclature. I think that this is a very
important component in getting credibility for your
identifications.
In addition, we will be curating study collections within the USGS,
which will be available both to the survey personnel and to people
outside of the survey who wish to look at these.
When you put this all together, what you have is the biological QA
program of NAWQA. NAWQA is making a big effort to
ensure the quality and timelessness of this biological information
so that when we meet again, 20 or 30 years from now, that
information will be just as good then as it is now. I think
there's room for a lot of harmonization and a lot of interagency
cooperation in developing these types of protocols.
105
-------
DISCUSSION SUMMARIES
After hearing the preceding panel discussions, participants at the
meeting met in small groups to discuss how they felt about
implementing a national consensus standard for EPA and other
Federal agencies with environmental programs. Each group then
chose a spokesperson to report on the nature of the discussion.
Quality Assurance Management Staff member Gary Johnson summarized
the break-out discussion sessions.
Question 1: What are the indicators for success in this effort?
What benefits could your organization anticipate by implementing a
new quality framework/standard?
Group 1 determined the Indicators of Success to be: 1. General
acceptance (stated in writing, use of other agencies' data, outside
group accepts and uses data) and 2. Reduced litigation.
The benefits of harmonization were categorized into four sections:
1. Cost/Schedule/Performance (improved allocation of resources,
increased efficiency, reduced frustration level of multi-agency
requirements)
2. Cost/Schedule (less repetition of work, less time and money
spent, reduced time for inter-agency negotiations)
3. Cost/Performance (large data base)
4. Performance (common language, better quality products,
consistent products, defined set of compliance requirements, better
staff buy-in, clear definition of objectives across agencies,
enhanced assessments).
Group 2 concluded that Indicators of Success were: 1. Interagency
Acceptance/Recognition (at all levels, regulator accepts product,
regulated community accepts guidelines and standards) and 2.
Elimination of redo's (realization that Accrual of Benefits =
Indication of Success).
Benefits of harmonization were seen as: 1. Data Comparability
through application of standardized criteria (review, scoring,
index values); 2. Reduced Redundancy/Paperwork (project and
program); and 3. Translation to Management/Management Buy-in (data
quality = data utility).
A data quality "hierarchy" was discussed in the following order:
Data (measurements and numbers); Information; Interpretation;
Knowledge; and Decision-making. Panelists also determined that
NIST has a role to play in harmonization.
Question 2: What are the roadblocks to implementation of a new
standard in your organization? What actions/changes will need to
107
-------
take place within your organization in order to implement a new
standard?
Group 3 regarded the roadblocks as the absence of a common
language, displacement of old guidance, resistance to change, and
"not invented here." Suggested actions included educating affected
users, keeping open communications, and tailoring marketing
strategies.
Group 4 discussed six roadblocks to implementing new standards: 1.
Changing how people think—priority changes; 2. Ineffective
communications; 3. Lack of Standard Data Comparability; 4. Fixed
budget needs a good DQO, but lacks funds; 5. Management takes
holistic stance; and 6. Resistance to change. Panelists agreed
that several actions/changes needed to take place: the addition of
training programs, "selling" the idea (P.R. "benefits" and
payoffs, marketing, cost savings), more effective use of
communications, general acceptance of change, and having states
"buy in" on the idea.
Question 3: What factors within your organization will facilitate
acceptance of a new standard? What information/interaction from
other groups or agencies would help to pave the way?
Group 5 agreed that the degree of acceptance is proportional to the
perceived benefits, that users/implementers must be brought into
the decision-making process, that the standard must be a practical
tool for the user/implementer, and that senior management must be
involved in a substantive manner.
Panelists talked about the need for "consensus" from agencies other
than EPA, the need for a mechanism to establish true interagency
harmonization, and the requisite for a consensus to set performance
criteria for environmental monitoring.
Other observations included: 1. the document needs to be written
in plain English, not technical jargon; 2. there must be
sufficient specificity to minimize the range of interpretation; and
3. enforcement and litigation efforts must not be restricted.
108
-------
NATIONAL PROGRAM OFFICES
From a summary by Marty Brossman
Quality Assurance Officer, Office of Water
The National Program Office Sessions included: the joint session
with Regional Offices on the EMMC, a session on Quality Assurance
Program Plans—guidance and implementation, and a session on
Qualifying Data for Specific Uses. A tentative panel was proposed
on TQM—concept and implementation, but time constraints precluded
this session. The issues to be discussed in this session are of
importance to the continuing success of TQM and QA in the Agency;
accordingly, they are briefly summarized in discussion of the
sessions.
Quality Assurance Program Plans (QAPP)
The QAPP is recognized as an important management level document at
each Office, Regional and Laboratory level of EPA. It describes
the QA policy, roles, responsibilities, and plans for
implementation of the Agency and specific Office, Regional, and
Laboratory programs. The QAPP, as signed off by a top management
official, the QA manager, and with concurrence by the Quality
Assurance Management Staff (QAMS) of the Agency, is equivalent to
an EPA contract. Meeting that contract effectively is important to
the success of a QA program.
This session was designed to address issues of QAPP development and
implementation to improve effectiveness. The difficulties in
developing an effective QA Program Plan were addressed in detail.
They included: the need to gain command of widely diverse studies
and programs; management's lack of acceptance of its responsibility
for the quality and defensibility of its data; the isolation of
some program offices from the data acquisition tasks imposed on
Regions and States; and a lack of understanding of the need to
define the customer and data use.
Two effective tools to reinforce the QA program were discussed.
The utilization of Management System Reviews (MSRs) has proven a
useful tool in evaluating a QAPP in place. Shortfalls become
apparent when performance is evaluated against specific commitments
and responsibilities. In addition, the QA Annual Report is also a
useful tool. This report, prepared by the QA Officer as a part of
his QA Program Plan, provides a vehicle to discuss accomplishments
and shortfalls with management, and gain agreement for the next
year's plan.
While action items were not developed as a result of this session,
opportunities were described to provide recommendations to improve
the QA Plan development and implementation process in the new
guidance being developed by QAMS.
109
-------
Qualifying Data for Specific Uses
This session was designated as an "umbrella" session to cover a
wide range of issues related to data qualification and use. Issues
raised by Larry Keith of Radian Corporation and Wendy Blake-Coleman,
Immediate Staff, Office of Water, brought forth extensive
discussion.
Larry pointed out that with the commonly used Limit of Detection
(LOD) or Method Detection Limit (MDL) set at 3 Standard Deviations
(SD), there is a 50% probability of a false negative detection, but
less than 1% probability of a false positive detection. He
suggested that whenever false negatives were important to the
Agency, the LOD or MDL should be set at 6 SD, where the probability
of false negative and false positive detection is equally low (less
than 1%). Because of the high reliability of detection assignments
at 6 SD, it was suggested that this be called the Reliable
Detection Level.
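Assuming normally distributed measurement error (an assumption of
this sketch, not a statement from the presentation), these figures
can be reproduced directly from the normal distribution: with the
detection decision made at 3 SD above the blank, a blank exceeds the
limit well under 1% of the time, an analyte truly present at 3 SD is
missed 50% of the time, and one present at 6 SD (the suggested
Reliable Detection Level) is missed well under 1% of the time.

    # Reproduce the quoted false-positive / false-negative rates, assuming
    # normally distributed measurement error (an illustrative assumption).
    from math import erf, sqrt

    def norm_cdf(z):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    decision_limit = 3.0  # detection decision made at 3 SD above the blank mean

    false_positive = 1.0 - norm_cdf(decision_limit)   # blank reported as a hit
    miss_at_3sd = norm_cdf(decision_limit - 3.0)      # analyte truly at 3 SD
    miss_at_6sd = norm_cdf(decision_limit - 6.0)      # analyte truly at 6 SD (RDL)

    print(f"false positive rate:    {false_positive:.2%}")  # about 0.13%
    print(f"false negative at 3 SD: {miss_at_3sd:.0%}")     # 50%
    print(f"false negative at 6 SD: {miss_at_6sd:.2%}")     # about 0.13%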
Mr. Keith also suggested that a distinct difference be made between
laboratories reporting data and users/requesters taking laboratory
data and presenting it in final form. He recommended that
laboratories report all data unless requested not to do so by the
user/requester of the data. He further pointed out the potential
liability of the sometimes current practice of reporting "Not
Detected" (ND) rather than "less than MDL or LOD" when an analyte
is detected, but less than the MDL or LOD. Mr. Keith therefore
suggested that "ND" should only be used when there is no measurable
value and not when the value is between 0 and 3 SD. Technical
issues of this type are extremely important to the Agency since
many toxic materials are harmful at levels below our ability to
detect them. Accordingly, agreement on detectability definitions and
related issues directly impacts the Agency's ability to regulate
and defend its controls.
Wendy Blake-Coleman addressed a range of data issues, some impacted
by the Office of Water's recent reorganization. As a preface, she
described the reorganization and the new locations of the Office's
major environmental data bases. She also described the new role of
the Policy and Resources Staff in data base and information
resources management. Issues being addressed include the STORET
Modernization program, integration of GIS into the data systems,
and development of QA data and usability guidance. The concept of
minimum data set was then discussed with particular reference to
the ground water data base. The ensuing discussion centered on
whether the data elements were primarily descriptive and not
quantitative. The range of issues addressed was so broad that
individuals decided to follow up specific areas of their interest
with other discussions.
110
-------
Total Quality Management
Time demands precluded follow-through on this session. Some of the
issues planned for discussion in follow-up sessions, however, are
summarized here.
The current Agency and National focus on Total Quality Management
provides great potential for progress in the QA and TQM programs—
if managers understand the direct relationship. TQM has developed
from within the QA/QC field represented by such professional
organizations as The American Society for Quality Control (ASQC).
Principles formerly applied to customer requirements in hardware
and data have now been applied to other goods and services. Thus,
it is apparent to many that managers cannot "buy-in" to a TQM
program without having automatically endorsed the QA program. We
should explore opportunities to market the QA program to Agency
managers as a good example of TQM in practice.
111
-------
REGIONAL OFFICES
From a summary by Dale Bates
Chief, Environmental Services Support Branch
This afternoon session focused on drinking water regulations, field
quality assurance and quality control, the Environmental Monitoring
& Assessment Program, statistical support needs, and Superfund's
Enforcement Survey. Highlights of. the discussions are presented
below:
Drinking Water Regulations
Al Havinga and Herb Brass briefed the group on the following
topics:
- Draft corrections to the January 1, 1991 notice in the Federal
Register that will be published in July.
- Regional concerns on regulatory issues.
- Participants in the workgroups on lab issues (with representation
from Regional Offices, Environmental Monitoring & Support
Laboratory - Cincinnati, and Office of Ground Water & Drinking
Water (OGWDW).
- OGWDW will provide the Office of Regional Operations (ORO) with
a listing of current workgroups.
- Comments on the new Chapter 5, Laboratory Certification Manual
should be forwarded to Nancy Wentworth by May 1.
Nate Malof explained the Privatization & Cooperative Research &
Development Agreements (CRADAs).
Field QA/QC
Llew Williams identified and discussed issues of concern and
training on Field QA/QC.
Environmental Monitoring & Assessment Program
Marcus Kantz identified the need for coordination of activities
with Regions.
Bob Graves volunteered to encourage EMAP personnel to enhance
communications.
Statistical Support Needs
Kent Kitebingham discussed the need for expertise and training in
Regions, and the need to develop strategies with positive impacts.
Superfund Enforcement Survey
Region 2 revealed significant resources utilization in support of
the Superfund Enforcement Program.
113
-------
OFFICE OF RESEARCH AND DEVELOPMENT
From a summary by Allan Batterman
Quality Assurance Manager
Environmental Research Laboratory
Quality Assurance in Modeling Projects
This session focused on methods used by QAMS and modelers to
address QA in modeling projects. A draft document was presented
for review, and panelists edited the document for general ORD input
and approval as a guidance document for modeling QA/QC. The key
question for review was: How is modeling QA/QC addressed in Agency
model development and use? Five presentations were given, with
discussions of each following.
Definitions of Modeling (overhead transparency) provided two
acceptable definitions of modeling: 1. Experimental data gathering
that provides information to test hypotheses, with computer
programs implementing models of environmental compartments,
processes, and water relationships; 2. A mathematical equation or
set of equations that describes relationships between variables of
interest. QA of Models (overhead transparency) suggested that
modelers consider, up front and in writing, the points that are
necessary to assure validity of the model. Panelists determined
that the validation of a model is easier when a written plan with
guidelines and expectations is developed beforehand.
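To make the second definition and the written-plan recommendation
concrete, the following is a minimal sketch in Python (a
hypothetical illustration constructed for this summary, not
material presented at the session); the decay equation, parameter
values, observations, and acceptance limit are all assumptions
chosen only to show the pattern of fixing a validation criterion in
writing before comparing model output to data.

    import math

    # Written plan, fixed before any comparison is run (values are illustrative only).
    VALIDATION_PLAN = {
        "purpose": "Predict contaminant concentration over time in one compartment",
        "acceptance_criterion": "RMSE between predicted and observed <= 0.5 mg/L",
        "rmse_limit_mg_per_l": 0.5,
    }

    def predicted_concentration(c0_mg_per_l, decay_rate_per_day, t_days):
        """Model in the second sense above: C(t) = C0 * exp(-k * t)."""
        return c0_mg_per_l * math.exp(-decay_rate_per_day * t_days)

    def rmse(predicted, observed):
        """Root-mean-square error between paired predictions and observations."""
        return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                         / len(observed))

    # Hypothetical observations: (time in days, measured concentration in mg/L).
    observations = [(0, 10.2), (5, 6.0), (10, 3.8), (20, 1.4)]

    predictions = [predicted_concentration(10.0, 0.10, t) for t, _ in observations]
    error = rmse(predictions, [c for _, c in observations])

    # The model is judged only against the criterion written down in advance.
    print(f"RMSE = {error:.2f} mg/L; "
          f"acceptable = {error <= VALIDATION_PLAN['rmse_limit_mg_per_l']}")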
Purpose of Modeling Document (overhead transparency) addressed
topics such as providing documentation of procedures,
applicability, and reliability. Points of discussion were: making
sure creativity is not stifled when implementing QA into model
development; changing the QA approach to appropriately reflect a
changed model; asking whether the QA is being done merely to
satisfy the QA officer and whether it is suitable for the scope of
the project; the
importance of stating the intended purpose/objective of the study
up front; realizing that endpoints are descriptive of a process and
interpolation may be necessary; understanding that suitable
modeling for risk assessment may require specific QA; and the
importance of research planning.
Modeling Project (overhead transparency) discussed the process of
developing a concept, validating and establishing an existing
model, applying the model and verifying it, and using the model as
part of data analysis or interpretation. Panelists raised several
issues: documenting the accountability, that is, determining
whether the modeling project should proceed; the necessity of avoiding bad
models by deciding the purpose of the modeling study and evaluating
and examining all possible variables of the model; and that QA
should apply to the computer issues of the model, including the
computer codes. The U.S. Geological Survey determined that
115
-------
modeling should be treated differently from other investigative processes in
the QA arena because creativity is required and examples of good
case histories are needed.
The last presentation, Computer Code (overhead transparency),
focused on evaluating models, suggesting corrective action if
necessary, and testing computer models. Panelists determined that
the computer code, that is, how the computer manipulates the data
into subsets, is subject to human error and must not be assumed to
function properly at all times. Error checks must be made, and an
acceptance range established within which sample values should
fall. In addition, all errors must be documented through the use of
"acceptance reports." Acceptance reports identify which data fall
out of range so that calibration can be adjusted when values run
higher or lower than expected.
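A minimal sketch of such an error check and acceptance report
follows (an illustrative Python example; the function name, sample
identifiers, and acceptance limits are hypothetical rather than an
EPA-specified format):

    from dataclasses import dataclass

    @dataclass
    class AcceptanceRange:
        low: float   # lowest acceptable value
        high: float  # highest acceptable value

    def build_acceptance_report(sample_values, accept):
        """List which samples fall inside or outside the acceptance range."""
        report = {"accepted": [], "out_of_range": []}
        for sample_id, value in sample_values:
            if accept.low <= value <= accept.high:
                report["accepted"].append((sample_id, value))
            else:
                report["out_of_range"].append((sample_id, value))
        return report

    # Hypothetical calibration-check samples: (sample id, measured value).
    samples = [("S-01", 4.8), ("S-02", 5.1), ("S-03", 9.7), ("S-04", 0.2)]
    report = build_acceptance_report(samples, AcceptanceRange(low=1.0, high=8.0))

    print("Accepted:", report["accepted"])
    print("Out of range (document and review calibration):", report["out_of_range"])

Filed with the project records, such a report documents which values
triggered review, so calibration adjustments are recorded rather
than assumed.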
Members of the panel also agreed that models taken from scientific
documents and publications may not have been through QA; therefore,
it was important to rely on the professional judgment of project
personnel to determine that procedures were addressed properly. If
the model did not work, new variables would have to be studied,
reassessments made, and corrective actions taken.
QA PLAN
(Documentation)
Types of Information Considered
A. Project Description/Resources
- scope and goals
- project personnel
B. Model Description
- model equations
- purpose/relevance
- limits/parameters
C. Data Quality
- calibration/parameterization
- types/quality of data needed
- evaluation of data
- source/acceptance/format/data uncertainty
D. Model Uncertainty
- sources of uncertainty
- assumptions
- sensitivity analysis
- comparisons
E. Computer Program
- code verification
116
-------
- records of code versions
- programming documentation
- user documentation
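As an illustration of the "code verification" item under E above,
the following is a minimal sketch (a hypothetical Python example,
not an ORD procedure): the coded implementation of a simple decay
model is checked against a case with a known analytical answer
before its output is trusted; the model, step size, and tolerance
are assumptions chosen only for illustration.

    import math

    def simulate_decay_euler(c0, k, t_end, dt):
        """Explicit-Euler solution of dC/dt = -k * C from t = 0 to t_end."""
        c = c0
        steps = int(round(t_end / dt))
        for _ in range(steps):
            c = c - k * c * dt
        return c

    def verify_against_analytical(c0=10.0, k=0.1, t_end=10.0, dt=0.001, tol=0.01):
        """Code verification: compare the coded result to C(t) = C0 * exp(-k * t)."""
        numerical = simulate_decay_euler(c0, k, t_end, dt)
        analytical = c0 * math.exp(-k * t_end)
        assert abs(numerical - analytical) < tol, (numerical, analytical)
        return numerical, analytical

    # Record the verified case alongside the code version for the QA file.
    print(verify_against_analytical())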
Privatization of Quality Assurance Reference Materials Using the
Federal Technology Transfer Act (FTTA) Cooperative Research and
Development Agreements (CRADAs)
The focus of this session was on EPA-certified QA reference
materials: what is available, how to make them accessible on a
continual basis, and where to obtain them. The discussion also
included EPA's oversight role, if materials would be EPA-traceable,
and questions about accreditation.
The goals of the session were to: 1. Minimize disruptions in
existing operations; 2. Reduce ORD expenditures while assuring a
dependable supply, timely development of new products, high quality,
and a reliable distribution system; 3. Encourage competition
through accreditation; and 4. Maintain a long-term program to meet
Agency needs.
EPA's oversight roles were determined to be: 1. If a wholesaler
does not produce a compound, EPA will furnish it to the wholesaler,
who will distribute it; 2. Performing two analyses in-house;
3. Analyzing and verifying each lot before sale; and 4. Stability
testing all "EPA certified samples."
$12 million, 7 Years - What Have We Learned? (Applying DQOs to a
National Survey)
The purpose of the session was to examine a case study for
potential areas of improvement in applying the DQO process; the
outcome was to get recommendations for successfully applying the
DQO process to large surveys.
Lora Johnson, QA Manager, National Pesticides Survey, discussed the
survey results and two approaches for evaluating the success of the
survey in achieving DQOs. Dean Neptune then led a discussion among
participants on the difficulties of implementing the DQO process
for large surveys.
117
-------
QUALITY ASSURANCE MANAGER OF THE YEAR AWARD
Introduction by Nancy Wentworth
We have a significant honor to bestow, the Quality Assurance
Manager of the Year Award. For those of you who are new to the EPA
community, we have, for the last three years, presented an award to
an individual from EPA's QA community who has been nominated by one
of his or her peers, supervisors, friends, or someone who has seen
the value of the work. This year we received nine nominations. We
had individual nominations and group nominations, or a number of
people nominated as a group, I should say. We invited three senior
managers from the Agency to serve as the award review panel. This
is what we consider to be an Agency award and therefore we sought
advice on the selection from senior managers.
The reviewers were Richard Guimond, the Office Director for the
Office of Radiation Programs and in transition to the Deputy
Assistant Administrator in the Office of Solid Waste and Emergency
Response; Tim Oppelt, the laboratory director at the Risk Reduction
Engineering Lab in Cincinnati; and Bill Rice, the Deputy Regional
Administrator in Region 7 in Kansas City. They were, as a group,
very impressed with the nominations that we received from across
the agency.
I'd like to take a moment and credit the people who were nominated
and give you a little background on who nominated them and what
their accomplishments were that warranted the nomination. Then I
will make the presentation of the award.
First, Jeanne Hankins, who was the quality assurance manager in the
Office of Solid Waste and is now on a one-year assignment on the
EMMC Laboratory Accreditation Panel, was nominated by the Office
Director of the Office of Solid Waste, Sylvia Lowrance, for her
work in Chapter One of SW 846, which is the methods and field
manual for the Solid Waste Program. She was also credited with her
work in the formation of the RCRA Advisory Committee for
Environmental Data, the RACED as it's known, which has been a real
affirmative step toward improving the relationships with the
regions and improving communication within regions on issues
relating to the implementation of the solid waste law.
Second, Rick Johnson from the Office of Information Resource
Management, was nominated by Jeff Worthington from TechLaw, for his
work on the Good Automated Lab Practices program.
Third, Dr. Henry Kahn was nominated by Ramona Trovato, Office of
Water Regulations and Standards, for his work in Effluent
Guidelines in the application of statistical methods to the
development of industrial water pollution control regulations, and
his work in the
statistical design of the national sewage sludge survey and the
119
-------
dioxin and pulp and paper mill effluent surveys.
Fourth, Brenda Grizinski and Don Sandifor were nominated by Bill
Fairless, the Environmental Services Division Director in Region 7,
for their combined effort in the development and implementation of
a statistically-based sampling procedure for use in dioxin cleanup
sites. This is a project which many of you have heard about, which
was an application of the DQO process in Region 7 at the request of
Region 7 and which has resulted in very significant savings in
waste cleanup at a dioxin-contaminated site in the region.
Fifth, a group of individuals from the Office of Underground
Storage Tanks in Region 6 were nominated by Bob Layton, the
Regional Administrator. These people were William Ray Lindale,
Mike Scoggins, John Sernaro, Herb Sharo, Jim Duck, and Audrey
Lincoln. They were nominated for their use of TQM principles in the
development of the regional program and of the state programs for
underground storage tanks. They set up quality action teams; they
worked with each of the states to develop an underground storage
tank program that was best suited for their legislation, their
politics, and their environmental circumstances.
Also from Region 6, Mary Ann LeBarr and Judith Black were nominated
by Bob Layton, the Regional Administrator, for their work in
implementation of field and lab quality auditing programs in the
Superfund program in the region.
Seventh, Bob Graves was nominated by John Winter, the quality
assurance division director at the Environmental Monitoring
Systems Lab in Cincinnati, for his work in the development of the
QA program for the Ecological Monitoring and
Assessment Program. This is the largest ecological monitoring
program that the Agency has ever undertaken and is an extensive
program requiring coordination of quality programming among and
between about five to seven of EPA's ORD laboratories and a number
of other agencies including NOAA, Fish and Wildlife, and USGS, so
it's been a significant effort on his part to bring all of the
quality community together for the common goal of the EMAP program.
Next, Marty Brossman of the Office of Water Regulations and
Standards was nominated by two of his QA compatriots, Barry
Towns, the QA manager in Region 10 and Jerry McKenna, who is now
the lab chief in Region 2, both of whom are previous winners of
this award. Marty was nominated for his work on customizing QA
documentation for Office of Water Regulations and Standards
projects, for his work as a regional QA liaison, taking special
time to work with the regions on QA issues, for his work in the
development of data quality objectives in their regions in the
water programs, for looking into the issue of quality in
computerized data bases and his work in the design and
implementation of the National Dioxin Survey and National
120
-------
Bioaccumulation Survey.
And the last nominee was Elizabeth Leovey who is in the Office of
Pesticide Programs. Elizabeth was nominated by Susan Wayland, the
Deputy Office Director in the Office of Pesticide Programs; Michael
Cook, the Office Director in the Office of Drinking Water; and Gene
Briskin, the director of the National Pesticide Survey. She was
nominated for her work in the National Pesticide Survey, including
encouraging and setting up a pilot survey to see if the planned
activities would work, the invitation for management system review
to see if the processes they had in place for the survey would
yield the data they needed for their decisions, and for working to
assure that the follow-up field and analytic QA programs would,
indeed, meet the needs that they had defined at the beginning of
the survey. I think we owe a round of applause to all of these
people.
The winner of this award receives a check and a plaque. And there
is a historical plaque with all of the award winners, and the
individual who wins will also have his or her name engraved on the
plaque outside my office in Washington. The winner: 1990
Quality Assurance Manager of the Year, Martin W. Brossman, in
recognition of outstanding accomplishment in the field of quality
assurance planning and management.
121
-------
ACCEPTANCE SPEECH BY MARTY BROSSMAN
QA Manager of the Year
I'm just flabbergasted and thrilled. And I think you know this is
the real highlight of my work for EPA. So I might as well stop
now. But I do feel very strongly, and I know all of you I've
worked with know that I mean this, that this award is an award I
share with QAMS, who've been my guiding light through all these
difficult years and with, primarily, the regional QA Officers
who've made it all possible for me to see things happen. That
combination is really the reason I was able to do anything. And
I'm inspired by the type of people that we've been able to work
with over the years: this has been a tough game, to convince
management that this was worthwhile and necessary. The spirit and
the competence of the people that I've been able to work with, QAMS
and particularly all my regional pals who have helped me carry out
the kind of things that we wanted to see happen in the real world,
are really the recipients of this and I thank you all. Thank you.
123
------- |