United States Environmental Protection Agency
Office of Toxic Substances
TSCA Assistance Office
Washington, DC 20460
1986
EPA 230R86900

Explaining
Environmental
Risk
-------
Explaining
Environmental
Risk
Some Notes on
Environmental Risk
Communication
by
Peter M. Sandman
November 1986
-------
Contents
"Important if True" 1
Dealing with the Media 4
1. Environmental risk is not a big story 4
2. Politics is more newsworthy than science 5
3. Reporters cover viewpoints, not "truths" 6
4. The risk story is simplified to a dichotomy 8
5. Reporters try to personalize the risk story 9
6. Claims of risk are usually more newsworthy than claims of safety 10
7. Reporters do their jobs with limited expertise and time 12
Dealing with the Public 14
1. Risk perception is a lot more than mortality statistics 14
2. Moral categories mean more than risk data 16
3. Policy decisions are seen as either risky or safe 17
4. Equity and control issues underlie most risk controversies 18
5. Risk decisions are better when the public shares the power 20
6. Explaining risk information is difficult but not impossible
if the motivation is there 21
7. Risk communication is easier when emotions are seen as legitimate 23
Selected Bibliography 26
-------
'Important If True'
In colonial times newspaper "correspondents" were nothing
more than acquaintances of the publisher, writing home from
their travels. Unable to confirm or disconfirm their reports,
cautious publishers often printed them under the headline
"Important If True."
"Explaining Environmental Risk" should be read in the
spirit of this caution. While I have leaned heavily on the risk
communication research literature where I could, many
questions haven't been thoroughly studied, and here I have
relied on my experience, my sense of other people's
experience, and, frankly, my biases. If your experience and
biases suggest different answers, try them. If you want to
stick more closely to research findings, check the sources
listed at the end.
Why are so many risk assessment and risk management
people beginning to take an interest in risk communication?
There are two answers, I think, one entirely admirable and
the other more open to question. The good news is that
experts and managers are coming to recognize that how
people perceive a risk determines how they respond to it,
which in turn sets the context for public policy. It is hard to
have decent policies when the public ignores serious risks
and recoils in terror from less serious ones. The task of risk
communication, then, isn't just conveying information,
though that alone is a challenge; it is to alert people when
they ought to be alerted and reassure them when they ought
to be reassured. If your job is directing the cleanup at
chemical spills, or running a right-to-know program, or siting
new waste facilities—in fact, if your job has anything to do
with setting or administering or following environmental
regulations—explaining environmental risk is an important
piece of your job. And it's probably a piece for which you
have had little training.
The more questionable reason for the growing interest in
risk communication is the hope in some quarters that
communicating about the environment can somehow replace
managing it or regulating it aggressively. This is a common
dilemma for communication specialists—advocates of bad
policies sometimes imagine that they can get away with
anything if they sell it cleverly enough, while advocates of
good policies sometimes imagine that they don't have to sell
at all. At a January 1986 national conference on risk
communication (co-sponsored by the Conservation
Foundation, the National Science Foundation, the
-------
Environmental Protection Agency, and other organizations),
the sessions on how to alert people to serious risks were
sparsely attended, while overflow crowds pondered ways of
calming people down. People sometimes need to be calmed
down—but the ultimate goal of risk communication should
be rational alertness, not passive trust.
If a public that views risk with rational alertness strikes
you as a desirable outcome, "Explaining Environmental Risk"
should help. This is neither a theoretical treatise nor a
nitty-gritty cookbook; along with the practical suggestions for
effective communication, I have tried to explain why some
strategies work and others fail, so that you can build on this
understanding to design your own strategies.
Though I hate to admit it, risk communication is a simpler
field than risk assessment or risk management. It just isn't
that hard to understand how journalists and nontechnical
publics think about risk. But it is crucial to understand, and
not mastering the rudiments of risk communication has led a
lot of smart people to make a lot of foolish mistakes. With
apologies to busy readers, I have therefore resisted the urge
to produce an executive summary or a list of
recommendations. Technicians can get by on cookbooks,
perhaps, but decision-makers need to understand.
Much depends, in fact, on whether you think risk
communication is a job that can safely be left to
"technicians" (public relations staff, community affairs
officers) or whether—as I am convinced—you believe it must
become an integral part of risk management. Although I hope
public information people will find some value in what I
have to say, my main goal is for environmental protection
commissioners and plant managers to read it ... not merely
pass it along to the public information office.
The temptation to pass it along to the public information
office—and then forget it—is almost overwhelming, I know.
It's not just that decision-makers are busy people. It's not
even that decision-makers don't realize how greatly their
success depends on dealing effectively with the media and
the public. It's more that they wish it weren't so, that dealing
with the media and the public seems in so many ways the
least pleasant, least controllable, least fair part of their work.
Most risk managers, I suspect, spend a good deal of time
hoping the media and the public will go away and leave
them to do their jobs in peace.
But since they won't, the next best thing is to understand
better why they won't, how they are likely to react to what
you have to say, and what you might want to say differently
next time. I hope "Explaining Environmental Risk" will help.
-------
Four on-going research projects have added greatly to my
understanding of risk communication. They are: (1)
"Environmental Risk Reporting" and "Risk Communication
for Environmental News Sources" (with David B. Sachsman,
Michael Greenberg, Audrey R. Gotsch, Mayme Jurkat, and
Michael Gochfeld), both funded by the National Science
Foundation Industry/University Cooperative Center for
Research on Hazardous and Toxic Substances; (2) "Getting to
Maybe: Building Toward Community-Developer Negotiations
on New Hazardous Waste Facilities" (with Jim Lanard and
Emilie Schmeidler), funded by the Fund for New Jersey; (3)
"Manual and Conference for DEP Risk Communication" (with
Caron Chess and B.J. Hance), funded by the New Jersey Spill
Fund, New Jersey Department of Environmental Protection;
and (4) "Radon Risk Communication Symposium and
Recommendations" and "Radon Knowledge, Attitudes, and
Behavior in New Jersey" (with Neil Weinstein), both funded
by the New Jersey Department of Environmental Protection.
Of course my colleagues and funders on these projects are
not responsible for my speculations in this report.
Several organizations have invited me to address them on
strategies of risk communication, providing an opportunity to
develop the ideas expressed in this report and test them on
thoughtful and experienced audiences. I am grateful
especially to the National Governors' Association, the New
Jersey Hazardous Waste Facilities Siting Commission, the
Council of Scientific Society Presidents, the Institute for
Environmental Studies of the University of North Carolina,
and the Air Pollution Control Association.
Peter M. Sandman is Professor of Environmental Journalism
at Cook College, Rutgers University, New Brunswick, NJ, and
Director of the Environmental Communication Research
Program of the New Jersey Agricultural Experiment Station.
Preparation of this report was funded by the Office of Toxic
Substances of the United States Environmental Protection
Agency as part of the Agency's effort to obtain diverse views
on risk communication. Publication of this document does
not signify that the contents necessarily reflect the views and
policies of the Agency.
-------
Dealing With The Media
1. Environmental risk is not a big story. The mass media are
not especially interested in environmental risk. Reporters do
care whether or not an environmental situation is risky;
that's what makes it newsworthy. But once the possibility of
hazard is established—that is, once someone asserts the risk
on the record—the focus turns to other matters: how did the
problem happen, who is responsible for cleaning it up, how
much will it cost, etc. Assessing the extent of the risk strikes
most journalists as an academic exercise. The reporter's job is
news, not education; events, not issues or principles. And
the news is the risky thing that has happened, not the
difficult determination of how risky it actually is.
In an emergency, of course, the extent of the acute risk is
the core of the story; radio reporters in particular want to
know first and foremost whether to tell listeners to stay
indoors, to evacuate, not to drink the water, etc. But the
media don't especially want to know the ins-and-outs of risk
assessment, the details of how great the risk is likely to be,
how sure the experts are, or how they found out. If the story
is important enough, these technical details merit a
follow-up, a sidebar on the third or fourth day—but few
stories are important enough.
The typical news story on environmental risk, in other
words, touches on risk itself, while it dwells on more
newsworthy matters. In 1985 newspaper editors in New
Jersey were asked to submit examples of their best reporting
on environmental risk, and the articles were analyzed
paragraph by paragraph. Only 32 percent of the paragraphs
dealt at all with risk. Nearly half of the risk paragraphs,
moreover, focused on whether a substance assumed to be
risky was or was not present (e.g. is there dioxin in the
landfill), leaving only 17 percent of the paragraphs that dealt
directly with riskiness itself (e.g. how hazardous is dioxin).
In a parallel study, reporters were asked to specify which
information they would need most urgently in covering an
environmental risk emergency. Most reporters chose the basic
risk information, saving the details for a possible second-day
story. What happened, how it happened, who's to blame, and
what the authorities are doing about it all command more
journalistic attention than toxicity during an environmental
crisis.
-------
The nature of the crisis determines how much stress the
media put on risk as opposed to other issues. Reporters
know, for example, that a chemical spill is a risk story, and
at the scene of a spill they will keep asking about toxic
effects even after they are told the chemical is benign and
inert. A fire story, on the other hand, automatically raises
questions about how the fire started, how much damage was
done, who turned in the alarm, and the like; many reporters
won't realize unless told that a fire in a battery factory or a
supermarket warehouse is a toxic event. But even when
reporters understand that environmental risk is a key element
of the crisis, their appetite for risk information is strong but
easily sated; they want to know badly, but they don't want to
know much.
And when there is no crisis? The extent of a chronic risk is
newsworthy only when events make it so—for example,
when a court battle or a regulatory action hinges on a
disputed risk assessment. Sources wishing to "sell" a chronic
risk story to the media must therefore work to make it
newsworthy. Give it a news peg—that is, make something
happen that reporters can cover. Make it interesting. Build
the case for its importance. Provide a prop worth focusing a
camera on. But expect only partial success; reporters flock to
the scene of a crisis, but they have to be seduced into
covering chronic risk.
Among the greatest environmental risks in New Jersey is
indoor radon contamination. Because it is new and serious, it
received considerable media attention in 1985 and early
1986. Then the coverage began to slip. The easy news pegs
were over: the discovery of the problem, the first home in the
state with a super-high reading, the passage of radon
legislation. With no "radon industry" to fight back, the
conflict that journalism feeds on has been conspicuously
missing from the radon story. Radon is more a health
problem and a housing problem than an environmental
controversy, and its coverage is correspondingly muted. And
radon at least has the "advantage" of cancer, the disease we
love to hate. Imagine its low visibility if it gave people
emphysema instead.
2. Politics is more newsworthy than science. The media's
reluctance to focus on risk for more than a paragraph or two
might be less of a problem if that paragraph or two were a
careful summary of the scientific evidence. It seldom is. In
fact, the media are especially disinclined to cover the science
of risk. Most of the paragraphs devoted to risk in the New
Jersey study consisted of unsupported opinion—someone
asserting or denying the risk without documentation. Only
4.2 percent of the paragraphs (24 percent of the risk
paragraphs) took an intermediate or mixed or tentative
-------
position on the extent of the risk. And only a handful of the
articles told readers what standard (if any) existed for the
hazard in question, much less the status of research and
technical debate surrounding the standard.
The media's focus on the politics of risk rather than the
science of risk is most visible in the sources relied upon in
risk coverage. In the New Jersey study, 57 percent of the
sources cited were government, with state government (22
percent) leading the pack. Industry captured 15 percent of
the paragraphs; individual citizens and advocacy groups
were cited in 7 percent each. Uninvolved experts such as
academics—those least likely to have an axe to grind, most
likely to have an intermediate opinion and a technical basis
for it—were cited in only 6 percent of the paragraphs. Of
course sources from government, industry, and
environmental groups may also have scientific rationales for
their judgments, and "experts" are not always neutral. Still, it
is important that the media get their risk information from
people who are directly involved in the news event; only
occasionally do they seek out uninvolved experts for
guidance on the extent of the risk.
Trying to interest journalists in the abstract issues of
environmental risk assessment is even tougher than trying to
get them to cover chronic risk: abstract issues are not the
meat of journalism. Yet the public needs to understand
abstractions like the uncertainty of risk assessments, the
impossibility of zero risk, the debatable assumptions
underlying dose-response curves and animal tests. Where
possible, it helps to embed some of these concepts in your
comments on hot breaking stories—though reporters and
editors will do their best to weed them out. When there is no
breaking story, try to sell your favorite reporter on a feature
on the fight over how conservative risk assessment ought to
be. Emphasize that the problem underlies many of the stories
he or she is covering. But understand why you will have
only partial success, why the science of risk is inevitably less
newsworthy than the politics of risk.
3. Reporters cover viewpoints, not "truths." Journalism, like
science, attempts to be objective, but the two fields define
the term very differently. For science, objectivity is
tentativeness and adherence to evidence in the search for
truth. For journalism, on the other hand, objectivity is
balance. In the epistemology of journalism, there is no truth
(or at least no way to determine truth): there are only
conflicting claims, to be covered as fairly as possible, thus
tossing the hot potato of truth into the lap of the audience.
Imagine a scale from 0 to 10 of all possible positions on an
issue. Typically, reporters give short shrift to 0, 1, 9, and 10;
-------
these views are too extreme to be credible, and are covered
as "oddball" if they are covered at all. (You may think some
pretty extreme viewpoints get respectful media attention—
but you haven't met the people reporters decide
not to quote.) Reporters also pay relatively little attention to
4, 5, and 6. These positions are too wishy-washy to make
good copy; how do you build a story out of "further research
is needed?" And sources with intermediate positions are
unlikely to be heavily involved in the issue, certainly
unlikely to seek media attention. Most of the news, then,
consists of 2's and 3's and 7's and 8's, in alternating
paragraphs if the issue is hot, otherwise in separate stories as
each side creates and dominates its own news events.
Objectivity to the journalist thus means giving both sides
their chance, and reporting accurately what they had to say.
It does not mean filling in the uninteresting middle, and it
certainly does not mean figuring out who is right. Journalists
who insist on trying to figure out who is right are encouraged
to become columnists ... or to leave.
If a risk story is developing and you have a perspective
that you feel has not been well covered, don't wait to be
called. You won't be. And you don't need to wait. Reporters
are busy chasing after the sources they have to talk to, and
listening to the sources who want to talk to them. If you're in
the former category—if you're safety manager at a plant that
just experienced an uncontrolled release, for example—
reporters will find their way to you, like it or not.
Otherwise, rather than suffer in silence, become one of the
relatively few experts who keep newsroom telephone
numbers in their rolodex. You will find reporters amazingly
willing to listen, to put you in their rolodexes, to cover your
point of view along with all the others. Insofar as you can,
try to be a 3 or a 7—that is, a credible exponent of an
identifiable viewpoint. Don't let yourself be pushed to a
position that is not yours, of course, but recognize that
journalism doesn't trust 0's and 10's, and has little use for
5's.
In deciding whether to brave the considerable risks of
media exposure, bear in mind that the story will be covered,
whether or not you arrange to be included. News items are
allotted media attention to the extent that journalists see
them as important and interesting. Then the search begins for
information to fill the vacuum—preferably new, solid,
comprehensible information that reflects an identifiable point
of view, but if there's not enough of that to fill the time or
space that the story "deserves." reporters will scrounge tor
angles to make up the difference. The result can be an
enlightening feature on the problems of technical prediction,
but it's more likely to be a "color story"—the fears of
-------
bystanders, the views of ideologues, the speculations of
spokespeople, the history of mismanagement. Environmental
risk stories often turn into political stories in part because
political content is more readily available than technical
content. Experienced sources work at filling the vacuum.
Although journalists tend not to believe in
Truth-with-a-capital-T, they believe fervently in facts. Never
lie to a reporter. Never guess. If you don't know, say you
don't know. (But expect reporters to ask why you don't
know.) If you don't know but can find out later, do so, and
get back to the reporter as soon as possible, remembering that
journalistic deadlines are measured in minutes, not months.
If you know but can't tell, say you can't tell, and explain
why. If you know but can't manage to say it in English, find
someone who can. Reporters do not expect you to be neutral;
in fact, they assume that you probably have an axe to grind,
and prefer that you grind it visibly. They do expect you to
grind it with integrity.
4. The risk story is simplified to a dichotomy. The media
see environmental risk as a dichotomy; either the situation is
hazardous or it is safe. This is in part because journalism
dichotomizes all issues into sides to be balanced. But there
are other reasons for dichotomizing risk. (1) It is difficult to
find space for complex, nuanced, intermediate positions in a
typical news story, say 40 seconds on television or 15 short
paragraphs in a newspaper. (2) Virtually everyone outside his
or her own field prefers simplicity to complexity, precision
to approximation, and certainty to tentativeness. As Senator
Edmund Muskie complained to an aide when the experts
kept qualifying their testimony "on the other hand": "Find
me an expert with one hand." (3) Most of the "bottom lines"
of journalism are dichotomies— the chemical release is either
legal or illegal, people either evacuate or stay, the incinerator
is either built or not built. Like risk managers, the general
public is usually asked to make yes-or-no decisions, and
journalists are not wrong to want to offer information in that
form.
Reporters are accustomed to the fact that technical sources
invariably hedge, that nothing is ever "proved." They see this
as a kind of slipperiness. Someone can always be found to
advocate a discredited position (the tobacco industry has
plenty of experts); no one wants to go too far out on a limb
in case new evidence points in a different direction;
researchers in particular like to leave the issue open so they
can justify more research. Pinning down evasive sources is a
finely honed journalistic skill. In terms of our 0-to-10 scale,
reporters spend a fair amount of time trying to get 5-ish
sources to make clear-cut 3 or 7 statements.
-------
Sources, especially technical sources, greatly resent the
pressure from journalists to dichotomize and simplify. The
dichotomization of risk distorts the reality that nothing is
absolutely safe or absolutely dangerous, and polarizes
"more-or-less" disagreements into "yes-or-no" conflicts. And
oversimplification of any sort can mislead the audience and
damage the reputation of the source. But recognize that
journalists must simplify what they cover. If you refuse to
simplify what you say, the reporter will try to do the job for
you (at great risk to accuracy) or will turn to a more
cooperative source.
The most qualified person to simplify your views is you.
Decide in advance what your main points are, and stress
them consistently and repetitively, even if you have to hook
them onto your answers to irrelevant questions. Leave out
the technical qualifiers that your colleagues might insist on
but the general public doesn't need to know (but leave in the
qualifiers that really affect the bottom line). Stay away from
jargon, and explain the technical terms you can't avoid.
Check to make sure the reporter understands what you are
saying; if the reporter looks glassy-eyed or starts frantically
taking down every word, back up and start over.
When you explain the significance of a toxic substance to
reporters, try to avoid the "is it there or not" dichotomy,
which can so easily alarm people about tiny concentrations.
On the other hand, don't expect reporters to sit still for a
dissertation on uncertainty in dose-response curves. Your
best bet, when you can, is to specify the amount involved,
then set it against some standard of comparison, ideally a
government exposure standard. This is still a dichotomy, of
course; it leaves the misimpression that exposures just under
the standard are perfectly safe while exposures just over are
deadly. But as dichotomies go, "over or under" is preferable
to "there or not."
If you want to fight the journalistic tendency to
dichotomize risk, fight it explicitly, asserting that the issue is
not "risky or not" but "how risky." Recognizing that
intermediate positions on risk are intrinsically less dramatic
and more complex than extreme positions, work especially
hard to come up with simple, clear, interesting ways to
express the middle view. Even so, expect reporters to insist
on knowing "which side" you come down on with respect to
the underlying policy dichotomy.
5. Reporters try to personalize the risk story. Perhaps
nothing about media coverage of environmental risk so
irritates technical sources as the media's tendency to
personalize. "Have you stopped drinking it yourself?"
"Would you let your family live there?" Such questions fly in
-------
the face of the source's technical training to keep oneself out
of one's research, and they confuse the evidentiary
requirements of policy decisions with the looser ones of
personal choices. But for reporters, questions that personalize
are the best questions. They do what editors are constantly
asking reporters to do: bring dead issues to life, make the
abstract concrete, focus on real people facing real decisions.
Personalizing also forces the source to dichotomize, to make
the same "yea" or "nay" decision the reader or viewer must
make.
In a sense, experts and policy-makers work at a different
level of analysis than reporters and the public. As an EPA
study on the ethylene dibromide controversy noted, the
agency wanted to talk about "macro-risk" (how many deaths
will result from EDB contamination), while reporters kept
asking about "micro-risk" (is it okay to eat the cake mix). The
connections between macro-risk and micro-risk are difficult
to draw. But for the individual citizen (faced with a cake
mix, not a regulatory proposal), micro-risk is the issue, and
reporters are not off-base in pushing technical sources to
trace the connections. This is what personalizing questions
are designed to do.
Knowing that reporters will inevitably ask personalizing
questions, be prepared with answers. It is often possible to
answer with both one's personal views and one's policy
recommendations, and then to explain the difference if there
is one. Or come with colleagues whose personal views are
different, thus dramatizing the uncertainty of the data. If you
are not willing (or not permitted) to acknowledge your own
views, plan out some other way to personalize the risk, such
as anecdotes, metaphors, or specific advice on the individual
micro-risk level.
6. Claims of risk are usually more newsworthy than claims
of safety. On our 0-to-10 scale of risk assertions, the 3's and
7's share the bulk of the coverage, but they don't share it
equally. Risk assertions receive considerably more media
attention than risk denials. Sometimes, in fact, the denials
get even less coverage than the intermediate position, and
reporters wind up "balancing" strong assertions of risk with
bland statements that the degree of risk is unknown. In the
New Jersey study, the proportions were 58 percent "risky,"
18 percent "not risky," and 24 percent mixed or
intermediate.
This is not bias, at least not as journalism understands
bias. It is built into the concept of newsworthiness. If there
were no allegation of risk, there would be no story. That
something here might be risky is thus the core of the story;
having covered it, the media give rather less attention to the
counterbalancing notion that it might not be risky.
-------
Other factors contribute to the tilt toward alarming news.
One is the reporter's desire to "build" the story, to come back
with something that editors will want to showcase.
(Reporters are much more interested in selling stories than in
"selling newspapers.") Another factor is the journalist's
preference for simple, graphic language, for "dump" rather
than "land emplacement." Risks sound riskier in simple
language than in technical jargon. The factor closest to
outright bias—but still distinguishable in the minds of
journalists—is the media's traditional skepticism toward
those in authority. Most news is about powerful people, but
along with the advantage of access government and industry
must endure the disadvantage of suspicion. Environmental
groups, by contrast, receive less attention from the media, but
the attention is more consistently friendly.
On the other hand, the media are often and justly criticized
for being too slow to alert the public to new environmental
hazards. Considering that we rely largely on journalism as an
"early warning system" for social problems on the horizon,
this is a serious criticism. To gain a journalistic hearing, the
first source to assert a particular risk must be reasonably
credible, highly committed, and very lucky or very skilled.
Almost invariably, new technologies start out with
sweetheart coverage. The environmental controversy comes
later, and only after the controversy is on the media agenda
(and the technology is perhaps too deeply embedded to be
dislodged) does the risky side of the argument catch up and
pull ahead. This may be the worst of all possible patterns: to
fail to warn us about risks when it's early enough to make a
societal go/no-go decision, then to frighten us deeply about
risks after the decision has been made.
The principal exception to this pattern is emergencies. On
a chronic risk story, the risk is the story. But a genuine
emergency is by definition a big story; freed from the need to
build the story, the reporter—especially the local reporter—
may try to prevent panic instead. The President's
Commission on the Accident at Three Mile Island conducted
a content analysis of network, wire service, and major
newspaper coverage during the first week of the 1979
accident. The Commission's expectations of sensationalism
were not confirmed. Of media passages that were clearly
either alarming or reassuring in thrust, 60 percent were
reassuring. If you stick to the technical issues, eliminating
passages about inadequate flow of information and general
expressions of fearfulness from local citizens, the
preponderance of reassuring over alarming statements
becomes 73 percent to 27 percent.
It didn't seem that way at the time, of course. The
information that something previously assumed to be safe
may or may not be hazardous naturally strikes people as
-------
alarming, almost regardless of the amount of attention paid to
the two sides; imagine reading this evening that scientists
disagree over whether your favorite food is carcinogenic.
Thus, sociologist Allan Mazur has found that public
fearfulness about risky new technologies is proportional to
the amount of coverage, not to its character. Media coverage
of environmental risk alerts the public to risks it was
otherwise unaware of, and thus increases the level of alarm
even when it is balanced.
None of this is a rationale for avoiding the media. Even
balanced media coverage may not reliably lead to balanced
public opinion, but balanced coverage is preferable to
unbalanced coverage. And the coverage is most likely to be
balanced when sources on all sides are actively trying to get
covered. People with knowledge and opinions to share
perform a public service when they share them. What can
you do to alert people to the risks of a new technology before
it is too late? What can you do to redress the alarming
imbalance once the media have begun to overdramatize the
risks? Energetic public relations will help with both tasks,
though in both cases you will be working against the grain.
7. Reporters do their jobs with limited expertise and time.
At all but the largest media, reporters covering environmental
risk are not likely to have any special preparation for the
assignment. Specialized environmental reporters are more the
exception than the rule. Reporters covering an environmental
emergency, for example, are mostly general-assignment
reporters or police reporters, sent to the scene (or the phones)
without time to scan the morgue, much less a technical
handbook. And reporters tend to be science-phobic in the
first place; the typical college journalism major takes only
two science courses, and chooses those two carefully in an
effort to avoid rigor. Though there are many exceptions, the
average reporter approaches a technical story with
trepidation (often hidden by professional bravado), expecting
not to understand.
It doesn't help that the average reporter covers and writes
two to three stories a day. Here too there are exceptions, but
most journalists are in a great hurry most of the time. They
must make deadline not just on this story, but quite often on
the story they will be covering after this one. Their goal,
reasonably, is not to find out all that is known, but just to
find out enough to write the story. Even if they knew more,
they would not have the space or airtime to report more, nor
do they believe their readers or viewers would have the
interest or patience to absorb more.
Note also that irrespective of what journalistic superstars
earn, the average reporter at a small daily newspaper takes
home perhaps $13,000-$18,000 a year. Considering their
-------
incomes, journalists are shockingly competent and dedicated,
but there are limits to how much competence and dedication
a salary in the teens can purchase.
If the idea appeals to you, by all means offer to teach local
journalists the basics of your field—but don't expect general
assignment reporters to find much time (or much
stomach) for technical training they will use only a few times
a year. A beat reporter who covers your issue full-time (if
you are lucky enough to have one) is a much better candidate
for technical training.
Better still, train yourself (and your colleagues and staff) in
dealing with the media. Hiring effective public information
specialists also helps, but reporters much prefer to talk to the
people in charge and the people in the know. Especially
during an emergency, press calls often go to the boss and the
expert instead of the press office, so the boss and the expert
should know how to talk to reporters. The annals of risk
communication are full of stories of corporate managers and
agency bureaucrats who shot themselves in the foot—and
permanently damaged their organizations—because they
hadn't the least idea of how to deal with the media. Even the
best communication skills can't rescue a technical disaster, of
course; who wants to handle the PR at Chernobyl or Bhopal?
But inadequate communication skills can create a disaster
that needn't have been.
And adequate communication skills are not so hard to
develop. All it takes is a little understanding of how the
media work, a little training in dealing with reporters, and a
little experience to smooth out the rough edges. Why, then,
do so many managers, bureaucrats, and technical experts
avoid all contact with the media? Because it's risky.
Reporters don't always understand what you're telling them;
they don't always share your goals and values; they don't
always handle their jobs the way you want them to. In all
these ways and many others, reporters may be different from
the people you usually work with. And so working with
reporters may sound like something less than an unalloyed
pleasure.
Pleasure or not, the risks of ducking the media are far
greater than the risks of working with them. Every news story
about environmental risk is a collaboration between the
journalists working on the story and the sources they talk to.
There's not too much you can do to change the nature of
journalism or the performance of journalists. But you can
understand them and figure out how to deal with them. By
improving your own performance as a source, you can bring
about a real improvement in media coverage of
environmental risk.
-------
Dealing With The Public
1. Risk perception is a lot more than mortality statistics. If
death rates are the only thing you care about, then the public
is afraid of the wrong risks. That is, public fears are not well
correlated with expert assessments or mortality statistics.
This is often seen as a perceptual distortion on the part of
the public, but a more useful way to see it is as an oversim-
plification on the part of many experts and policy-makers. In
other words, the concept of "risk" means a lot more than
mortality statistics.
Virtually everyone would rather drive home from a party
on the highway than walk home on deserted streets. Even if
we do not miscalculate the relative statistical likelihood of a
fatal mugging versus a fatal car crash, the possibility of
getting mugged strikes us as an outrage, while we accept the
possibility of an auto accident as voluntary and largely
controllable through good driving. (Eighty-five percent of all
drivers consider themselves better than average.) Similarly, a
household product, however carcinogenic, seems a lot less
risky than a high-tech hazardous waste treatment
facility—the former is familiar and under one's own control,
while the latter is exotic and controlled by others.
Risk perception experts (especially psychologists Paul
Slovic, Sarah Lichtenstein, and Baruch Fischhoff) have spent
years studying how people interpret risk. The following list
identifies some of the characteristics other than mortality that
factor into our working definitions of risk. Remember, these
are not distortions of risk; they are part of what we mean by
the term.
Less Risky                        More Risky
Voluntary                         Involuntary
Familiar                          Unfamiliar
Controllable                      Uncontrollable
Controlled by self                Controlled by others
Fair                              Unfair
Not memorable                     Memorable
Not dread                         Dread
Chronic                           Acute
Diffuse in time and space         Focused in time and space
Not fatal                         Fatal
Immediate                         Delayed
Natural                           Artificial
Individual mitigation possible    Individual mitigation impossible
Detectable                        Undetectable
-------
The very same risk—as experts see these things—will be
understood quite differently by the lay public
depending on where it stands on the dimensions listed
above. Some thirty percent of the homes in northern New
Jersey, for example, have enough radon seeping into their
basements to pose more than a one-in-a-hundred lifetime risk
of lung cancer, according to estimates by the U.S.
Environmental Protection Agency and the State Departments
of Health and Environmental Protection. But despite
considerable media attention (at least in the beginning), only
five percent of North Jersey homeowners have arranged to
monitor their homes for radon, and even among these few
the level of distress is modest—compared, say, to the
reaction when dioxin is discovered in a landfill, objectively a
much smaller health risk. State officials were initially
concerned about a radon panic, but apathy has turned out to
be the bigger problem.
The source of the radon in New Jersey homes is geological
uranium; it has been there since time immemorial, and no
one is to blame. But three New Jersey communities—
Montclair, Glen Ridge, and West
Orange—have faced a different radon problem: landfill that
incorporated radioactive industrial wastes. Though their
home readings were no higher than in many homes on
natural hotspots, citizens in the three communities were
outraged and fearful, and they successfully demanded that
the government spend hundreds of thousands of dollars per
home to clean up the landfill. The state's proposal to dilute
the soil nearly to background levels and then dispose of it in
an abandoned quarry in the rural community of Vernon has
provoked New Jersey's largest environmental demonstrations
in years, with thousands of residents swearing civil
disobedience sooner than let the trucks go through. In nearby
communities threatened by naturally occurring radon,
meanwhile, the concern is minimal.
It doesn't help to wish that people would confine their
definitions of risk to the mortality statistics. They won't.
Mortality statistics are important, of course, and policy-
makers understandably prefer to focus on the risks
that are really killing people, rather than the risks that are
frightening or angering people because they are involuntary,
unfamiliar, uncontrollable, etc. But successful risk
communication begins with the realization that risk perception
is predictable, that the public overreacts to certain sorts of
risks and ignores others, that you can know in advance
whether the communication problem will be panic or apathy.
And since these differences between risks are real and
relevant, it helps to put them on the table. Merely
acknowledging that a risk seems especially fearful because it
-------
is unfamiliar or unfair will help. Doing something to remedy
the unfamiliarity or unfairness will help even more.
Just to make things more complicated, risk perception is
not linear, not for anybody. That is, you can't just multiply
how probable a risk is by how harmful it is to get how badly
people want to prevent it. (If you could, there would be no
insurance industry and no gambling industry.) In general,
people will pay more to protect against low-probability loss
than to pursue low-probability gain—but if the price is low
enough to be dismissed as negligible, even an infinitesimal
chance at a big payoff looks good.
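A worked example may make the non-linearity concrete; the
numbers here are invented, purely for illustration. Suppose a
one-in-a-thousand chance of a $100,000 loss:

   expected loss = 0.001 × $100,000 = $100

Many homeowners will gladly pay a $150 premium to insure
against it, half again the expected loss. The same people may
buy a $1 lottery ticket carrying a one-in-ten-million chance
at $1,000,000:

   expected payoff = 0.0000001 × $1,000,000 = $0.10

Each purchase contradicts the probability-times-harm
arithmetic, and in opposite directions, which is why the
insurance industry and the gambling industry can both thrive.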
Risk judgments are also very responsive to verbal cues.
Doctors, for example, are much more likely to prescribe a
new medication that saves 30 percent of its patients than one
that loses 70 percent of them. A pollutant or an accident that
will eventually give cancer to 10,000 people sounds very
serious, but one that will add less than one tenth of one
percent to the national cancer rate sounds almost negligible.
There is in fact no "neutral" way to present risk data, only
ways that are alarming or reassuring in varying degrees.
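Framing effects are easy to verify with arithmetic. Using
assumed round figures (a population of 240 million and a
lifetime cancer incidence near 30 percent, both stipulated
here for illustration), the 10,000-case example works out as
follows:

   baseline lifetime cases = 240,000,000 × 0.30 = 72,000,000
   added share = 10,000 ÷ 72,000,000 ≈ 0.014 percent

"Ten thousand cancers" and "about one-hundredth of one
percent more cancer" thus describe the same datum; the first
framing alarms, the second reassures.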
Finally, people's perception of risk is greatly influenced by
the social context. Our responses to new risks, in fact, are
largely predictable based on our enduring values and social
relationships. Do we like or dislike, trust or distrust the
people or institutions whose decisions are putting us at risk?
Do our friends and neighbors consider the risks tolerable or
intolerable? Are they enduring higher risks than ours, or
escaping with lower ones? All these factors, though they are
irrelevant to the mortality statistics, are intrinsic parts of
what we mean by risk.
2. Moral categories mean more than risk data. The public
is far from sure that risk is the real issue in the first place.
Over the past several decades our society has reached near-
consensus that pollution is morally wrong—not just harmful
or dangerous, not just worth preventing where practical, but
wrong. To many ears it now sounds callous, if not immoral,
to assert that cleaning up a river or catching a midnight
dumper isn't worth the expense, that the cost outweighs the
risk, that there are cheaper ways to save lives. The police do
not always catch child molesters, but they know not to argue
that an occasional molested child is an "acceptable risk."
Government agencies build their own traps when they
promulgate policy (and public relations) in the language of
morality, depicting food additives or chemical wastes or
polluted water as evils against which they vow to protect the
innocent public. It is not at all obvious which environmental
"insults" (another term with moral overtones) a society
should reject on moral grounds and which it should assess
-------
strictly in terms of impact. But an agency that presents itself
and its mission in moral terms should expect to be held to its
stance. And an agency that wishes to deal with
environmental risk in terms of costs-and-benefits instead of
good-and-evil should proceed gently and cautiously, aware
that it is tramping on holy ground.
Nor is morality the only principled basis for questioning
the costs-and-benefits premises of risk assessment. Just as the
moralist challenges the rightness of trading off certain risks
against costs or benefits, the humanist challenges the
coherence of the tradeoffs. How, the humanist asks, can
anyone make sense of a standard that tries to put a cash
value on human life? Or, indeed, of a standard that assumes
that a hundred widely scattered deaths per year are
equivalent to a one-in-a-hundred chance of obliterating a
community of 10,000?
Similarly, the political critique of the premises of risk
assessment begins by noting that "the greatest good for the
greatest number" has always been a convenient rationale for
the oppression of minorities. Democratic theory asserts that
individuals and groups should be free to bargain for their
own interests, and should be protected from the tyranny of
the majority. There is nothing unreasonable about the
suggestion that equitable distribution of risks and
benefits—and of the power to allocate risks and benefits—is
often more important than the minimization of total risk or
the maximization of total benefit. It may be efficient to dump
every environmental indignity on the same already degraded
community, but it is not fair.
3. Policy decisions are seen as either risky or safe. Like
the media, the public tends to dichotomize risk. Either the
risk is seen as very frightening, in which case the response is
some mix of fear, anger, panic, and paralysis; or the risk is
dismissed as trivial, in which case the response is apathy.
In their personal lives, people do not necessarily dichoto-
mize risk. Most of us are quite capable of understanding that
the picnic might or might not be rained out, that the boss
might or might not get angry, even that smoking might or
might not give us lung cancer. Of course quantified
probabilistic statements are genuinely hard to understand,
especially when the probabilities are small, the units are
unfamiliar, and the experts disagree. But beyond these
perplexities lies another issue of enormous importance to
risk communication. While people may (with difficulty)
master a probabilistic risk statement that concerns what they
should do to protect themselves, they are bound to resist
probabilistic risk statements that concern what others
(government, say) should do to protect them. On my own
-------
behalf, I may choose to tolerate a risk or to protect against it,
but for you to decide that my risk is tolerable is itself
intolerable. Quantitative risk assessments, risk-benefit
calculations, risk-cost ratios, and risk-risk comparisons are all
hard to hear when we bear the risk and someone else makes
the decision.
4. Equity and control issues underlie most risk
controversies. Trust and credibility are often cited as the key
problems of risk communication. Certainly few people trust
government and industry to protect them from environmental
risk. This is just as true of the passive, apparently apathetic
public as it is of the activist, visibly angry public. The former
is simply more fatalistic, more prone to denial, more
completely drowned in undiscriminating chemophobia. The
activist public, in other words, distrusts others to protect its
interests and thus chooses to protect its own. The far larger
passive public is passive not because it believes others will
protect its interests, but because it doubts it can protect its
own. Both publics listen to the reassurances of government
and industry—if they listen at all—with considerable
suspicion.
But to say that trust is the problem here is to assume that
the goal is a passive public that doesn't mind being passive.
If the goal is an actively concerned public, then the problem
isn't that people are distrustful, but rather that government
and industry demand to be trusted. Translate the question of
trust into the underlying issue of control. Who decides what
is to be done?
Any environmental risk controversy has two levels. The
substantive issue is what to do; the process issue is who
decides. So long as people feel disempowered on the process
issue, they are understandably unbending on the substantive
issue, in much the same way as a child forced to go to bed
protests the injustice of bedtime coercion without
considering whether he or she is sleepy. It isn't just that
people oppose any decision they view as involuntary and
unfair, regardless of its wisdom; because the equity and
control issues come first, people typically never even ask
themselves whether they agree on the merits. Outraged at the
coercion, they simply dig in their heels. It is hardly
coincidental that risks the public tends to overestimate
generally raise serious issues of equity and control, while
most of the widely underestimated risks (smoking, fat in the
diet, insufficient exercise, driving without a seatbelt) are
individual choices.
Specialists in negotiation and conflict resolution have long
understood this relationship between substantive issues and
the process issues of equity and control. Consider for
-------
example a community chosen by the state government to
"host" a hazardous waste incinerator. Justly offended at this
infringement of local autonomy, the community prepares to
litigate, frantically collecting ammunition on the
unacceptability of the site. Both their anger and the legal
process itself encourage community members to overestimate
the risk of the proposed facility, to resist any argument that
some package of mitigation, compensation, and incentives
might actually yield a net gain in the community's health
and safety, as well as its prosperity.
In interviews with community members faced with such a
situation, the control issue tends to overshadow the risk
assessment. But when citizens are asked to hypothesize a de
facto community veto and envision a negotiation with the
site developer, they become quite creative in designing an
agreement they might want to sign: emissions offsets,
stipulated penalties, bonding against a decline in property
values, etc. It is still too early to tell whether a negotiated
hazardous waste treatment facility is feasible. But thinking
about such a negotiation becomes possible for community
members only when they feel empowered—that is, when the
issue of outside coercion has been satisfactorily addressed.
On this dimension people's response to information is not
much different from their response to persuasion. We tend to
learn for a reason—either we're curious, or we're committed
to a point of view and looking for ammunition, or we're
faced with a pending decision and looking for guidance.
These three motivations account for most
information-seeking and most learning—and none of them
exerts much influence when an individual citizen is offered
information about, say, a Superfund clean-up plan. A few
stalwart souls will read out of curiosity, though it won't take
much technical detail to put a stop to that. Activists will
scour the plan for evidence to support their position or for
evidence that their position wasn't properly considered.
(Activists know what they think and believe they can make a
difference.) And those charged with litigating, funding, or
implementing the plan study it in order to do their jobs.
And the general public? Why learn if you feel powerless to
do anything about what you have learned? On the other
hand, when the public has felt it was exercising real
influence on a decision—the ASARCO smelter in Tacoma
comes to mind—it has shown a surprising ability to master
the technical details, including risk assessment details.
Not that every citizen wants to play a pivotal role in
environmental decisions. We have our own lives to lead, and
we would prefer to trust the authorities. If the issue is
unimportant enough we often decide to trust the authorities
despite our reservations; if the crisis is urgent enough we
-------
may feel we have no choice but to trust the authorities, again
despite our reservations. The gravest problems of risk
communication tend to arise when citizens determine that
the issue is important, that the authorities cannot be trusted,
and that they themselves are powerless. Then comes the
backlash of outrage.
5. Risk decisions are better when the public shares the
power. People learn more and assess what they learn more
carefully if they exercise some real control over the ultimate
decision. But this sort of power-sharing is, of course,
enormously difficult for policy-makers, for a wide range of
political, legal, professional, and psychological reasons.
Interestingly, corporate officials may sometimes find
power-sharing less unpalatable than government officials.
Corporations have a bottom line to nurture, and when all else
fails they may see the wisdom of sharing power in the
interests of profit. But government officials have no profit to
compensate for the loss of power, so they may find it harder
to share.
"Public participation," as usually practiced, is not a
satisfactory substitute for power-sharing. To be sure, telling
the public what you're doing is better than not telling the
public what you're doing. Seeking "input" and "feedback" is
better still. But most public participation is too little too late:
"After years of effort, summarized in this 300-page report, we
have reached the following conclusions... Now what do you
folks think?" At this point it is hard enough for the agency to
take the input seriously, and harder still for the public to
believe it will be taken seriously. There is little power-
sharing in the ''decide-announce-defend" tradition of public
participation.
The solution is obvious, though difficult to implement.
Consultations with the public on risk management should
begin early in the process and continue throughout. This
means an agency must be willing to tell the public about a
risk before it has done its homework—before the experts
have assessed the risk thoroughly, before all the policy
options have been articulated, way before the policy
decisions have been made. There are dangers to this strategy:
people will ask the agency what it proposes to do about the
problem, and the agency will have to say it isn't sure yet. But
on balance an agency is better off explaining why it doesn't
yet have all the answers than explaining why it didn't share
them years ago. In fact, not having all the answers can be
made into an asset, a demonstration of real openness to
public input. The goal, after all, is to enlist the rationality of
the citizenry, so that citizens and experts are working
together to figure out how great the risk is and what to do
about it.
-------
Of course no responsible agency will go public without
any answers. What's important is to propose options X, Y,
and Z tentatively, with genuine openness to V and W, and to
community comments that may eliminate Z. A list of options
and alternatives—and a fair and open procedure for
comparing them and adding new ones—is far more
conducive to real power-sharing than a "draft" decision.
This sort of genuine public participation is the moral right
of the citizenry. It is also sound policy. Undeterred by
conventional wisdom, lay people often have good ideas that
experts can adapt to the situation at hand; at a minimum,
lay people are the experts on what frightens them and what
would reassure them. When citizens participate in a risk
management decision, moreover, they are far more likely to
accept it, for at least three reasons: (1) They have instituted
changes that make it objectively more acceptable; (2) They
have got past the process issue of control and mastered the
technical data on risk; that is, they have learned why the
experts consider it acceptable; and (3) They have been heard
and not excluded, and so can appreciate the legitimacy of the
decision even if they continue to dislike the decision itself.
6. Explaining risk information is difficult but not
impossible, if the motivation is there. High school teachers
have long marveled that a student who couldn't make sense
of Dickens's A Tale of Two Cities had no trouble with Hot
Rod's far more complex instructions on how to adjust one's
sparkplugs for a fast start on a rainy day. Motivation makes
the difference. When people have a reason to learn, they
learn.
It is still possible for communicators to make the learning
easier or harder—and scientists and bureaucrats have
acquired a fairly consistent reputation for making it harder.
At Three Mile Island, for example, the level of technical
jargon was actually higher when the experts were talking to
the public and the news media than when they were talking
to each other. The transcripts of urgent telephone
conversations between nuclear engineers were usually
simpler to understand than the transcripts of news
conferences. To be sure, jargon is a genuine tool of
professional communication, conveying meaning (to those
with the requisite training) precisely and concisely. But it
also serves as a tool to avoid communication with outsiders,
and as a sort of membership badge, a sign of the status
difference between the professional and everyone else.
Like any piece of professional socialization, the tendency
to mystify outsiders becomes automatic, habitual more than
malevolent. It's hard for a layperson to get a straight answer
from an expert even when nothing much is at stake. When a
potentially serious risk is at stake, when people are
frightened or angry or exhausted, when the experts aren't
sure what the answers are, when the search for a scapegoat is
at hand, effective communication is a lot to expect.
In many risk communication interactions, in short, the
public doesn't really want to understand (because it feels
powerless and resentful) and the experts don't really want to
be understood (because they prefer to hold onto their
information monopoly). The public finds it convenient to
blame the experts for obfuscation, and the experts find it
convenient to blame the public for obtuseness. These
motivational issues are probably more important than the
traditional concerns of clarity in determining whether real
knowledge will pass from expert to public.
Within the traditional concerns of clarity, the major issue
is simplification. Even assuming a public that wants to
understand and an expert who wants to be understood, risk
information must still be simplified.
Insofar as possible, of course, it is wise to simplify
language rather than content. That is, take the extra words to
make hard ideas clear. Unfortunately, neither the expert
source nor the lay audience is usually willing to dedicate the
time needed to convey complex information a step at a time.
So inevitably simplification becomes a matter of deciding
what information to leave out. Experts are famous for their
conviction that no information may be left out; unable to tell
all, they often wind up telling nothing.
In fact, there are three standard rules of thumb for
popularizing technical content. (1) Tell people what you
have determined they ought to know—the answers to the
questions they are asking, the instructions for coping with
the crisis, whatever. This requires thinking through your
information goals and your audience's information needs,
then resolutely keeping the stress where you have decided it
should be. (2) Add what people must know in order to
understand and feel that they understand the information—
whatever context or background is needed to
prevent confusion or misunderstanding. The key here is to
imagine where the audience is likely to go off-track, then
provide the information that will prevent the error. (3) Add
enough qualifiers and structural guidelines to prepare people
for what you are not telling them, so additional information
later will not leave them feeling unprepared or misled. Partly
this is just a matter of sounding tentative; partly it is
constructing a scaffolding of basic points on which people
can hang the new details as they come in. Applying these
three rules isn't easy, but it is a lot easier than trying to tell
everything you know.
The hardest part of simplifying risk information is explaining
the risk itself. This is hard not only because risk assessments
are intrinsically complex and uncertain, but also
because audiences cling tenaciously to their safe-or-
dangerous dichotomy. One path out of dichotomous
thinking is the tradeoff: especially risk-benefit, but also
risk-cost or risk-risk. But there is solid evidence that
lay people resist this way of thinking; trading risks against
benefits is especially offensive when the risks raise moral
issues and the "victims" are not the ones making the choice.
Another alternative to dichotomy is the risk comparison: X is
more dangerous than Y and less dangerous than Z. But as we
have already noted, risk means a lot more than mortality
statistics, and comparing an involuntary risk like nuclear
power to a voluntary one like smoking invariably irritates
more than it enlightens—as does any risk comparison that
ignores the distinctions listed at the start of this section.
The final alternative to dichotomy is to provide the actual data
on deaths or illnesses or probability of occurrence or
whatever. This must be done carefully, with explicit
acknowledgement of uncertainty, of moral issues, and of
non-statistical factors like voluntariness that profoundly
affect our sense of risk. Graphs and charts will help; people
understand pictorial representations of probability far better
than quantitative ones.
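To make the idea of a pictorial representation concrete, here is a
minimal sketch, in Python and with purely hypothetical numbers, of an
"icon array": one simple way to show a probability like "3 in 100" as a
grid of marked and unmarked figures rather than as a bare statistic.
The function name and figures are illustrative assumptions, not part of
the original discussion.

```python
# Illustrative sketch only: an "icon array" renders a probability such as
# "affected out of total" as a grid of individuals, so a reader can see at
# a glance how rare or common an outcome is. Numbers here are hypothetical.
def icon_array(affected, total, per_row=10):
    """Print a grid: 'X' marks affected individuals, 'o' marks the rest."""
    icons = ["X"] * affected + ["o"] * (total - affected)
    for start in range(0, total, per_row):
        print(" ".join(icons[start:start + per_row]))

# A hypothetical risk of 3 in 100:
icon_array(affected=3, total=100)
```

The grid format trades precision for immediacy, which is exactly the
point of preferring pictures of probability to raw numbers.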
Don't expect too much. People can understand risk
tradeoffs, risk comparisons, and risk probabilities when they
are carefully explained. But usually people don't really want
to understand. Those who are frightened, angry, and
powerless will resist the information that their risk is
modest; those who are optimistic and overconfident will
resist the information that their risk is substantial. Over the
long haul, risk communication has more to do with fear,
anger, powerlessness, optimism and overconfidence than
with finding ways to simplify complex information.
7. Risk communication is easier when emotions are seen as
legitimate. It follows from what we have been saying that an
important aspect of risk communication is finding ways to
address the feelings of the audience. Unfortunately, experts
and bureaucrats find this difficult to do. Many have spent
years learning to ignore feelings, their own and everyone
else's; whether they are scientists interpreting data or
managers setting policy, they are deeply committed to doing
their jobs without emotion.
At an even deeper level, scientists and bureaucrats have
had to learn to ignore the individual, to recognize that good
science and good policy must deal in averages and
probabilities. This becomes most obvious when a few people
feel threatened by a generally desirable action, such as the
siting of a hazardous waste facility. Experts who are
confident that the risk is small and the facility needed may
well try to sympathize with the target community—but their
training tells them playing the odds is a good bet, somebody
has to take the risk, the decision is rational, and that's the
end of the matter.
Thus the most common sources of risk information are
people who are professionally inclined to ignore feelings.
And how do people respond when their feelings are ignored?
They escalate—yell louder, cry harder, listen less—which in
turn stiffens the experts, which further provokes the
audience. The inevitable result is the classic drama of
stereotypes in conflict: the cold scientist or bureaucrat versus
the hysterical citizen.
Breaking this self-defeating cycle is mostly a matter of
explicitly acknowledging the feeling (and the legitimacy of
the feeling) before trying to explain anything substantive—
because any effort to explain substance first will
be experienced by people as just another way of not noticing
how they feel. The trick, in other words, is to separate the
feeling from the substance, and respond to the feeling first. "I
can tell you're angry about this" won't eliminate the anger—
nor should it—but it will eliminate the need to insist
on the anger, and will thus free energy to focus on the issue
instead. "A lot of people would be angry about this" and "in
your position I would be angry about this" are even more
empathic remarks, legitimating the anger without labeling the
citizen. All three responses are far more useful than
pretending that the anger isn't there or, worse yet,
demanding that it disappear. Techniques of this sort are
standard practice in many professional contexts, from police
crisis intervention to family counseling. Training is available;
risk communicators need not reinvent the wheel.
It helps to realize that experts and bureaucrats—their
preferences notwithstanding—have feelings too. In a public
controversy over risk, they are likely to have very strong
feelings indeed. After all, they consider themselves moral
people, yet they may be accused of "selling out" community
health or safety or environmental protection. They consider
themselves competent professionals, yet they may be accused
of egregious technical errors. They very likely pride
themselves on putting science or public service ahead of
personal ambition, yet they may be accused of not caring.
They chose their careers expecting if not gratitude at least a
calm working environment and the trust and respect of the
community. Instead they are at the center of a maelstrom of
community distrust, perhaps even community hatred. It
hurts.
The pain can easily transform into a kind of icy paternal-
ism, an "I'm-going-to-help-you-even-if-you-don't-know-what's-
good-for-you" attitude. This of course triggers even more
distrust, even stronger displays of anger and fear. Risk
communication stands a better chance of working when both
sets of feelings—the expert's and the community's—are on
the table.
Feelings are not usually the core issue in risk communica-
tion controversies. The core issue is usually control, and the
way control affects how people define risk and how they
approach information about risk. But the stereotypical
conflict between the icy expert and the hysterical citizen is
nonetheless emblematic of the overall problem. The expert
has most of the "rational" resources—expertise, of course;
stature; formal control of the ultimate decision. Neither a
direct beneficiary nor a potential victim, the expert can
afford to assess the situation coldly. Indeed, the expert dare
not assess the situation in any other way. The concerned
citizen, meanwhile, has mainly the resources of passion—
genuine outrage; depth of commitment; willingness
to endure personal sacrifice; community solidarity; informal
political power. To generate the energy needed to stop the
technical juggernaut, the citizen must assess the situation
hotly.
A fundamental premise of "Explaining Environmental
Risk" is that risk understanding and risk decision-making
will improve when control is democratized. We will know
this is happening when citizens begin approaching risk
issues more coolly, and experts more warmly.
Selected Bibliography
Covello, Vincent T., "The Perception of Technological
Risks: A Literature Review," Technological Forecasting and
Social Change, 1983, pp. 285-297.
Covello, Vincent T., Detlof von Winterfeldt, and Paul
Slovic, "Communicating Scientific Information about Health
and Environmental Risks: Problems and Opportunities from a
Social and Behavioral Perspective," in V. Covello, A.
Moghissi, and V.R.R. Uppuluri, Uncertainties in Risk
Assessment and Risk Management (New York: Plenum
Press, 1986), in press.
Fischhoff, Baruch, "Protocols for Environmental Reporting:
What to Ask the Experts," The Journalist (Foundation for
American Communications), Winter 1985, pp. 11-15.
Klaidman, Stephen, "Health Risk Reporting," Institute for
Health Policy Analysis, Georgetown University Medical
Center, Washington, DC, 1985.
Mazur, Allan, "Media Coverage and Public Opinion on
Scientific Controversies," Journal of Communication, 1981,
pp. 106-115.
Mazur, Allan, "Bias in Risk-Benefit Analysis," Technology
in Society, 1985, pp. 25-30.
Nelkin, Dorothy, Science in the Streets (New York:
Twentieth Century Fund, 1984).
President's Commission on the Accident at Three Mile
Island, Report of the Public's Right to Information Task Force
(Washington, DC: U.S. Government Printing Office, 1979).
Ruckelshaus, William, "Risk in a Free Society," Risk
Analysis, September 1984, pp. 157-163.
Sandman, Peter M., "Getting to Maybe: Some
Communications Aspects of Hazardous Waste Facility
Siting," Seton Hall Legislative Journal, Spring 1986.
Sandman, Peter M., David B. Sachsman, Michael
Greenberg, Mayme Jurkat, Audrey R. Gotsch, and Michael
Gochfeld, "Environmental Risk Reporting in New Jersey
Newspapers," Environmental Risk Reporting Project,
Department of Journalism and Mass Media, Rutgers
University, January 1986.
Sharlin, Harold I., "EDB: A Case Study in the
Communication of Health Risk," Office of Policy Analysis,
U.S. Environmental Protection Agency, January 1985.
Slovic, Paul, "Informing and Educating the Public About
Risk," Decision Research Report 85-5, November 1984.
Slovic, Paul, Baruch Fischhoff, and Sarah Lichtenstein,
"Facts and Fears: Understanding Perceived Risk," in R.C.
Schwing and W.A. Albers, eds., Societal Risk Assessment:
How Safe Is Safe Enough? (New York: Plenum, 1980), pp.
181-216.
Weinstein, Neil D., and Peter M. Sandman,
"Recommendations for a Radon Risk Communication
Program," Office of Science and Research, New Jersey
Department of Environmental Protection, November 1985.