WHAT IS THE RISK OF THE IMPOSSIBLE?
Daniel M. Kammen†, Alexander I. Shlyakhter* and Richard Wilson*
†Woodrow Wilson School of Public and International Affairs
Princeton University, Princeton NJ 08544
*Department of Physics
Harvard University, Cambridge, MA 02138
We examine the progression from risk identification to resolution through action or inaction on the part of investigators and policy makers. The cases considered are ones where heated scientific and policy disagreement over the importance - and even the existence - of the risk agent persists or persisted throughout much of the process. Examples include cancer risks from extremely low doses of radiation and cancer from extremely low frequency electromagnetic fields. We explore the decision making process followed in these case studies, and discuss the extent to which caution on the part of policy makers may be considered reasonable or unreasonable. Each of these cases illustrates distinct aspects of the interplay between technical and societal responses to risk that are summarized by: (1) a ban on the perceived risk agent; (2) implementation of the best available technology; and (3) risk assessment and communication. We draw a number of conclusions that could facilitate a societal dialogue, including greater public participation in the scientific and policy evaluation stages, and the generation of improved guidelines and protocols to manage technological options and risks.
1. INTRODUCTION: THE SOCIAL CONTEXT OF RISK
How are discoveries or technological advances that often conflict with current understanding evaluated in terms of scientific merit, and then in terms of potential risk or benefit to society? How are the risks of low-probability but high-consequence events evaluated in terms of policy action and resource allocation? Who decides which fears are justified, and what levels of voluntary or involuntary risk deserve attention? And finally, how efficient is the process of scientific "peer review" at filtering out incorrect, or worse yet, fraudulent claims, while carefully scrutinizing but at the same time nurturing "long-shot/large-payoff" research? The answers to these questions, while formally in the realm of science and policy research, often boil down to ideological decision making, where the acceptability of a risk or the potential payoff is a question on which different stakeholders hold radically divergent views.
In this article we will discuss some reports of a risk or hazard in situations where others claim that a hazard is impossible. Whether or not a claim seems nonsensical, possible, or sensible depends upon the knowledge and experience of the perceiver. On the other hand, what is initially thought to be stupid or inconsequential may turn out to be a valid concern. In fact, it is just this scenario, where the fears or claims expressed by a few are denigrated and deemed unfounded until later discovered to have been all too true that causes substantial, and warranted, backlash against the scientific community.
We will examine a number of extreme cases, those where a risk perceived by one person or group is seen, and often severely criticized, by another as an impossibility. The goal of this work is to highlight the role that risk assessment and communication must play, and the critical need for transparency within the scientific process if the benefits of technological innovations are to be exploited while potential dangers are minimized.
As individuals and society as a whole contemplate the dangers and risks inherent in life, one of the most important lessons learned is that of caution. We must be wary of situations where there is uncertainty, and avoid commitments in matters that are not sufficiently understood. An excess of caution, however, can lead to societal stagnation. One situation where this is likely to arise occurs when the public perception and the opinion of "experts" differ, or when the expenses incurred to avoid a perceived risk are vast compared to the benefits. In both of these cases, the root cause is frequently a lack of information and communication within the "expert" community and between the experts and the wider public. A dialogue between these groups can highlight the areas where further research is necessary. A second critical aspect of this exchange is to identify the questions raised by technological advances that are, in fact, inherently social or political, in which case we must seek resolution in that domain.
In many cultures -- particularly in technologically complex societies -- individuals intentionally or implicitly entrust their fate to the overall capability of society (largely through its professional experts) to determine the best course of events in areas that they as individuals do not understand, or feel that they have no power to influence. For this process to work, the experts must earn and continue to be trusted. A tension arises when there is disagreement between experts, between experts and the media, and between experts and the general public. Some members of society might argue that a certain perceived danger is unreal; that a feared chain of events is impossible; or that the reservations expressed by others are unwarranted. To address this common dilemma there is a pressing need for an open review process, involving risk investigation, public discussion to alleviate confrontation, and finally action stemming from consensus. We begin by exploring the form and credibility of evidence relating to newly identified scientific and technological risks.
2. POSITIVE AND NEGATIVE EVIDENCE ABOUT RISK
When a new risk is recognized or claimed, it should obviously be understood as much as possible within existing time and resource constraints before action is taken to mitigate it. The process of trying to understand risk begins with the hazard -- real or postulated. This hazard is generally associated with a condition perceived previously to be benign, or with a claim that a new discovery, invention, or manufacturing process presents a health hazard or an environmental risk. The problem for the risk assessor or policy maker is that immediate mitigation action is demanded and therefore an immediate answer is usually required. There is almost always less time than is desirable for an extensive and detailed investigation. In such cases the initial estimate of risk is invariably strongly influenced by pre-existing prejudice and biases.
We illustrate the way in which belief is influenced by prior knowledge and belief by considering the following three declarative statements:
"I saw a dog running down 5th Avenue"
"I saw a lion running down 5th Avenue"
"I saw a stegosaurus running down 5th Avenue"
Little evidence is needed to be convinced that a dog did, in fact, run down 5th Avenue: dogs are common in contemporary urban society which is the assumed context of the statement. We might demand a bit more evidence if the speaker were claiming damages for a car accident caused by the postulated dog.
However, before believing the second statement, most of us would require much more auxiliary evidence. Sufficient auxiliary evidence might be that a circus train had crashed and released its animals or that a lion had escaped from the Bronx Zoo. The geographical context of belief is apparent; if "down 5th Avenue" is replaced by "along the Serengeti plain of Tanzania" it is likely that belief would be universal.
Most people, however, would steadfastly insist that seeing a stegosaurus is absolutely impossible. The author of the above statement would be considered mistaken, drunk, or a crack-pot.
This example was chosen to be a situation where the prejudices and biases of a professional (expert) are nearly the same as those of a member of the general public. It illustrates the minimal requirements for belief and subsequent action, as well as the relationship between risk assessment and the Bayesian, or personalist, perspective1. In other cases the prejudices of the public and of the expert seem to differ. It is with these troublesome cases that this paper is concerned.
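The Bayesian, or personalist, reading of these three statements can be made concrete with a toy Bayes'-rule calculation. All numbers below are invented for illustration: assumed prior probabilities for each sighting, and assumed chances that such a report would be made whether or not the event occurred. The point is that, given the same report, the posterior belief is driven almost entirely by the prior.

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule: probability the event occurred, given the report."""
    joint_true = prior * p_report_if_true
    joint_false = (1 - prior) * p_report_if_false
    return joint_true / (joint_true + joint_false)

# Purely hypothetical priors for "an X ran down 5th Avenue":
priors = {"dog": 0.5, "lion": 1e-6, "stegosaurus": 1e-15}

# Assume a sincere-sounding report is 90% likely if the event happened
# and 1% likely otherwise (mistake, joke, hoax).
for animal, prior in priors.items():
    print(f"{animal}: posterior = {posterior(prior, 0.9, 0.01):.2g}")
```

With these assumptions the dog report is believed outright, the lion report remains doubtful pending auxiliary evidence such as the circus-train crash, and the stegosaurus report stays effectively at zero, matching the intuitions described above.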
In this paper we examine a number of cases where there is, or was, heated disagreement between scientists, or between scientists and the public. In these cases there exists no consensus over the importance -- and even the existence -- of the hazard or magnitude of the risk. The most frequently divisive and contentious cases are those where decisions must be made based on information generally considered insufficient to follow "the scientific method:" cases dominated by "negative evidence" 2,3.
In an editorial entitled "Can evidence ever be inconclusive?"4, John Maddox, editor of Nature, notes a similar problem at the level of scientific reporting and review. Maddox notes that:
"If an article with a fine declarative title should run into trouble with its referees on the grounds that the text does not fully support the conclusion, but the work itself is inherently interesting, there is every likelihood that the author will seek to keep the original title, prefacing it with 'Evidence for ...'. The misunderstanding seems to be that the weasel words betoken evidence that falls short of proof."
We explore the decision making process followed in these case studies, and discuss the extent to which caution may be considered reasonable or unreasonable. We then draw a number of conclusions concerning the interaction of science and public policy, and potential paths through the labyrinth that proceeds from the initial investigation, to gathering information, to implementation of policy that could facilitate a societal dialogue. We conclude with a discussion of recommendations to improve guidelines and protocols to manage technological options and risks.
3. EXAMPLES OF SEEMINGLY IMPOSSIBLE RISKS
We consider here a variety of situations where one group in society has declared that a certain hazard or risk is impossible and another segment has considered it to be possible. By comparing the debate in several extreme situations we gain insight into the management of newly claimed or identified risks.
Risk of a nuclear warhead explosion detonating the atmosphere.
Risk of a nuclear power plant exploding like a bomb.
Risk of cancer in USA from exposure to radiation from Chernobyl.
Risk of cancer from exposure to a chemical carcinogen at extremely low exposure levels.
Risk of death from non-cancerous causes stemming from exposure to low levels of air pollution.
Risk of global environmental change resulting from the Iraqi ignition of oil wells in Kuwait.
Risk of flooding New York City because of sea-level rise caused by global warming.
Risk of being killed by an asteroid impact.
Claims of safe, cheap, and limitless energy from "cold" nuclear fusion.
What most of these examples have in common is that at one time or another some "expert" or "experts" have pronounced the postulated causal chain of events to be impossible on a time scale meaningful for contemporary society. In the cases where the possibility of the event is acknowledged, the probability is considered so vanishingly small as to be unimportant in the face of more pressing priorities. In fact, each of these risks has at some point been termed "impossible" by many members of the scientific community or of the public. Nonetheless, others, often poorly qualified technically to evaluate the risk, or arguing from an ideological rather than scientific perspective, have continued the study of the postulate, and often urged extensive societal action. By examining and comparing the scientific basis for these situations and, most importantly, the manner in which the associated investigation was presented to the public, we hope to illuminate the process by which individuals and society interact to make decisions concerning risks.
We begin with discussions of three distinct case studies in nuclear risk because of the extreme, and frequently ideological, positions taken both in support of and in attacks on the nuclear industry.
Risk of a nuclear bomb explosion detonating the atmosphere
During the initial discussions of the feasibility of the fission and fusion bombs in 1942, Edward Teller put forward the idea that the unprecedented high temperatures in the explosion of such bombs might initiate a fusion chain reaction of the hydrogen in the earth's oceans or its atmosphere and burn up the world. A chronicler of the meeting reports that:
The deuterium-deuterium reaction was thoroughly familiar, ... nevertheless Oppenheimer stared at the blackboard in wild surprise, and the other faces in the room, including Teller's, successively caught the same look. The heat buildup Teller had calculated was not only enough to maintain the deuterium reaction but also another reaction between its reaction products and nitrogen. ... Oppenheimer saw it, with or without a deuterium wrapper, setting afire the atmosphere of the entire planet. ... As soon as it became clear that none of them could refute the implications in Teller's heat study, Oppenheimer suspended the sessions. He decided that [Arthur H.] Compton ought to know that the program he was directing seemed pointed toward igniting the air and the ocean5.
Oppenheimer took the threat very seriously and immediately met with Arthur Compton, the director of the "Metallurgical Project," as the atomic weapons program was then code-named. Both recalled the discussion that culminated in the statement that "it was better to accept the slavery of the Nazis than to run a chance of drawing the final curtain on mankind"6,7. Accounts of the discussions of the day by Oppenheimer, Bethe, and Teller all suggest that "impossibility of igniting the atmosphere was ... assured by science and common sense"8 soon after the initial meeting. In any case, a team of physicists at Los Alamos directed by Edward Teller proceeded to demonstrate that even with "extravagant assumptions" it is impossible to initiate and sustain an atmospheric or oceanic thermonuclear chain reaction with nuclear warheads of any technologically conceivable yield9.
Despite the clear scientific evidence that nuclear detonation of the atmosphere was impossible, claims of this threat surface periodically, notably in 1959, 1975, and again in 19927,10,11. An examination of the reasons that this argument, seemingly put to rest for good, continues to surface provides an excellent window on the strengths and weaknesses of our means to evaluate radical claims of risk.
"Data" from 1959 has been cited10 as the principal evidence that a thermonuclear explosion has a non-zero probability of detonating the atmosphere. The problem is more one of imprecise language than of imprecise science. In her written account of an interview with Compton, Pearl S. Buck stated that:
If after calculation, [Compton] said, it were proved that the chances were more than approximately three in one million that the Earth would be vaporized by the explosion, [Compton] would not proceed with the [bomb] project. Calculations proved the figure slightly less -- the project continued7.
Compton's knowledge of the physics of thermonuclear detonation6 and his familiarity with the results of Konopinski, et al.9 make it fairly certain that either Buck misunderstood or misquoted Compton, or that Compton misspoke12. A likely scenario is that in describing the risk as impossible, Compton or Buck used the euphemism 'one in a million', which was somehow transmuted to 'three in a million'13. At the time "one in a million" was regarded as a vanishingly small probability. At present, concentrations of one part per million are routinely the subject of regulatory interest by the Environmental Protection Agency and other health and environmental agencies.
With the scientific literature unchanging in its pronouncement that an atmospheric chain reaction was impossible, the language in Buck's interview would hardly seem sufficient evidence to claim anew a threat of an 'ultimate catastrophe.' Nevertheless, that claim was made by H. C. Dudley10, and covered by The New York Times14. Once again Bethe12 demonstrated that "the claim is nonsense", and a further detailed calculation was performed that not only proved that "neither the atmosphere nor the oceans of the Earth may be made to undergo propagating nuclear detonation under any circumstances" (emphasis added)15, but also resolved a literary but not technical ambiguity in the initial Konopinski et al. paper9.
A second piece of "evidence" for the possibility that a nuclear explosion could detonate the atmosphere was derived from the wording of the report9. Its summary statement concludes that:
... even if bombs of the required volume (i.e. greater than 1,000 cubic meters) are employed, energy transfer from electrons to light quanta by Compton scattering will provide a further safety factor and will make a chain reaction in air impossible.
In discussing the complexity of the reaction process, and the ever-present possibility for new discoveries, the authors employed standard scientific conservatism by noting that, "further research on the subject [is] highly desirable." Dudley10 and Commoner11 both interpreted this last phrase to imply a significant risk of an atmospheric chain reaction.
In contrast to the debate in 1975 that evolved in the scientific and popular press, the most recent claim of a non-zero probability of nuclear detonation of the atmosphere11 was evaluated for publication and rejected through the process of peer review at several scientific journals. As long as no new material or interpretive methods are brought to bear on an issue, this is entirely the correct approach and function of the review process.
The clear lesson from this drama is that of the critical need for accurate risk communication. Regardless of how the 'three in a million' statement initially arrived in print, it is the responsibility of whomever generated the statement to follow up any imprecise reporting in the scientific or popular media and to ensure that an accurate version is available for discussion and debate.
Risk of a nuclear power plant exploding like a bomb
Perhaps no other industry suffers more from public concern and fear, whether deserved or not, than the nuclear power "industry"16,17. We include in the definition of "industry" all participants: manufacturers (General Electric, Westinghouse, Combustion Engineering, Babcock and Wilcox), utility companies, bankers, regulators (specifically the Nuclear Regulatory Commission and state Public Utility Commissions), and many nuclear research scientists. One argument that was widely discussed in the 1960's and early 1970's was the possibility that a power plant could, and might, explode like an atomic bomb. The argument continued that, being larger than a bomb, the detonation would cause an explosion dwarfing the damage caused by the Hiroshima or Nagasaki warheads.
In the discussion of this potential risk, statements by the nuclear power "industry" that this sequence of events is impossible are, according to this view, clearly not to be trusted. Since these statements are not trusted, it follows that the existing nuclear "industry" is also not trusted to competently manage a critical and potentially dangerous technology, or to provide complete and accurate information and accounting of its operation. An indication of this distrust is that in surveys of public perception of risks, nuclear power is listed as the largest perceived risk18 (although it is important to note that the explosion of power plants is not generally perceived as the critical risk it was portrayed to be two decades ago). On the other hand, according to professionals it is one of the lowest in terms of morbidity and mortality per kW-hour19.
Much has been written, and much can be written, on the reasons for the problems of the nuclear power industry in the United States. It is notable that unlike most industries, technological improvements such as those in reactor safety since the important study of 197520 appear to have had little or no influence on the discussion of the cost/benefit analysis of nuclear power. Based on current reactor designs, notably those employed in new reactors in Asia and Europe21, the calculated probability of even a catastrophic accident of any sort becomes a vanishingly small to nonexistent risk. This, however, does not seem to influence the public perception. Some analysts17 argue that the industry has arrogantly failed to confront, let alone cooperate with or co-opt, public education and involvement, and has thereby done itself and the nation a great disservice. Others argue that the nuclear industry has made more attempts to involve the public than any industry in history. Whichever is true (and probably both are), a significant fraction of the nuclear industry, particularly the utility companies and their bankers, have given up. The manufacturers, however, are still receiving federal funds for development, although this appears to be directed more to potential overseas, Third World, orders than to potential markets in the United States.
Most scientists and industry analysts would agree with Morgan17 that, "... the United States nuclear power industry is dead. Its rebirth will take more than increasing energy supply pressures, public relations, and a little fine tuning." No one, however, has a practical solution.
We offer two points. First, while it was certainly vital to examine the technical potential and public fear of an exploding nuclear plant, a continued focus on this possibility ignores the root of well-founded public concern over nuclear power. Public opposition to nuclear power today reflects a broad and well-informed set of concerns: small radiation releases from plants; long-term waste storage; and the health and environmental mess that surrounds Hanford, Rocky Flats, and other mismanaged legacies of the Cold War. Thus while the public has moved on to consider a range of risks, the nuclear industry continues to be fixated on the simplest of misconceptions. In a 1990 analysis titled Technological Risk, for example, H. W. Lewis still comments that, "Much of the argument against nuclear power has risk aversion as its basis -- though the possibility of a major accident is extremely low ...."22.
Secondly, the fear of nuclear power is likely to be in part driven by an associated menace -- a fear that the very existence of nuclear power encourages the proliferation of nuclear weapons. Some analysts16,23,24 have suggested that the worldwide nature of the "industry" enables it to take a positive role in the prevention of any such proliferation. So far this is a role that it has not assumed.
As an example, a nuclear industry that is truly responsive not only to the concerns of the public, but also to the areas in which the public does not yet see an important risk, might concern itself with the current climate of nuclear proliferation. There is a broad public sentiment that since the collapse of the Soviet Union, proliferation is not a significant hazard25. While this may be changing in the face of the confrontation over North Korea, a number of analysts of the nuclear industry have argued that the breakup of the USSR has increased the risk of the diversion of nuclear fuel and of dual-use technologies26.
Risk of cancer in USA from exposure to radiation from Chernobyl
Immediately after the Chernobyl accident, radioactivity was measured in Europe and soon thereafter in countries as far removed as China, Kuwait, Japan and the United States. The presence of any radioactivity was interpreted by many people as an imminent hazard and there was widespread fear.
One of the authors (RW) spent two hours on the telephone in May 1986 with a worried mother who did not want her daughter to travel to Brussels and, "fly through that radioactive cloud." The fear still persists today and, like many fears, is used for political purposes. What is clear is that 100 or more thyroid cancers in children in Belarus and the Ukraine can be attributed to the Chernobyl disaster, and fears persist that an increase of U. S. cancers will also result.
A long-running debate over the impact of the Chernobyl accident on residents of the U. S. provides a study in scientific and pseudo-scientific exchange. Gould26 noticed that 33.06% of the 1986 deaths in the U. S. occurred during the months of May to August, compared to 31.97% in the same quarter of earlier years. He claimed that the difference, about 1.1% of the U. S. death rate, was due to Chernobyl. Even if true, most experts agree that it might have any of a number of causes, but Gould chose to suggest iodine releases from the Chernobyl nuclear power plant.
The deposition of iodine and of other radionuclides around the world from the Chernobyl release has been carefully measured. From the deposition and the known half-life the dose can be calculated. The average first-year dose in the United States was about 1.3 mRem, compared with 60 mRem average in Italy and 40 Rem for the 24,000 persons between 3 and 15 km from the power plant (excluding Pripyat)28. The difference between the 1986 summer mortality fraction in the US (33.06%) and the 1985 figure (31.97%) amounts to a relative increase of about 3%. If this increase were due to radioactivity from Chernobyl, and assuming linearity between mortality and dose, it would imply a mortality increase of 415% (5.2 times the natural rate) in Italy and an increase of 2770 times the natural rate in the area immediately downwind of the Chernobyl power plant. This argument by itself should have been enough to discredit the whole discussion. However, this straightforward argument was not sufficient to stop the Wall Street Journal from dignifying Gould's claim with a column, asserting that he had caused a "scientific" controversy29.
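The inconsistency can be checked with a few lines of arithmetic. The sketch below is ours, using the doses quoted above and an assumed ~3% relative U. S. mortality increase; the precise multipliers depend on which dose estimates are assumed, but any linear extrapolation from the claimed U. S. effect predicts an enormous, and plainly unobserved, excess mortality in Italy and near the plant.

```python
# First-year doses in mRem, as quoted in the text (1 Rem = 1000 mRem).
dose_us = 1.3               # average, United States
dose_italy = 60.0           # average, Italy
dose_near_plant = 40_000.0  # 3-15 km from the plant, excluding Pripyat

us_claimed_increase = 0.03  # ~3% relative mortality increase claimed for the U.S.

def scaled_increase(dose_mrem):
    """Relative mortality increase implied by linear scaling with dose."""
    return us_claimed_increase * dose_mrem / dose_us

print(f"Italy:      {scaled_increase(dose_italy):.0%} excess mortality")
print(f"Near plant: {scaled_increase(dose_near_plant):.0f} times the natural rate")
```

Since nothing remotely like a doubling of mortality was seen in Italy, let alone a many-hundred-fold increase near Chernobyl, the claimed U. S. effect fails on its own terms.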
The News Tribune newspaper in Seattle did better. It examined an important aspect of this claim -- that cancers in the state of Washington were caused by Chernobyl -- and clearly echoed the above point that the change in mortality and the exposure to radioactive fallout were inconsistent. Dr. Patricia Starzyk of the Washington State Department of Social and Health Services29 noted that on further analysis the Washington State mortality rose only 2% in summer 1986, not 9% as was alleged. This was not an unusual increase. Moreover, she identified five traditional medical causes for summer increases: infectious disease, arteriosclerosis, chronic lung disease, suicide, and diabetes.
An even more direct, and what should have been terminal, refutation of Dr. Gould's claim came from a Los Angeles Times reporter29 who noted that Gould had used an incomplete set of numbers. The 33.06% that Gould had stated as the fraction of U. S. deaths between May and August 1986 was, in fact, incorrect. The more accurate percentage, 32.2%, is "identical to the data for the summer of 1984, and consistent with normal seasonal mortality patterns. The 1985 rate was 31.6%"30. These facts seem to have had a surprisingly small impact, presumably because a penchant for belief exists in the community. This topic will be taken up again in the section on Cold Nuclear Fusion.
Yet another study31 found no effect in Canada, although the effect on Canada should have been similar to that on the U. S. if Gould's claim were correct. In Canada deaths from infectious diseases remained steady, while death rates among 25-34 year-olds and among infants actually decreased.
The refutation of Dr. Gould's claim, summarized by Shihab-Eldin, Shlyakhter and Wilson (1992), clearly explains why there has been no discussion in the scientific community of cancers in the United States caused by radiation from Chernobyl. However, in a recent article in the Nation32, Gould reiterated his "findings" of effects from Chernobyl on the United States and claimed that they have not been challenged! When their accuracy was directly examined33, Gould responded with further alleged "proofs" of the large health effects of the minuscule doses of radiation, without responding directly to the challenge. In many ways, this progression of assertions of risk mirrors the history of the thermonuclear atmospheric ignition debate. While it is vital to encourage the pursuit of maverick views and theories, this must occur in an atmosphere where every author, scientific or popular, is obliged to address specific questions and criticisms, not simply to raise new hypotheses in lieu of careful analysis.
It is clear that unjustified claims will not help anyone actually exposed to appreciable levels of radiation. Nor will such claims help officials in the United States to protect public health in this country. In both cases the pressure from an unduly alarmed public might divert funds that could be better used for risk reduction elsewhere. Nonetheless, such claims persist. Is the effect truly impossible, as the professional experts claim, or have the critics perceived something the experts have not, unintentionally or intentionally, noticed? Given the apparent consensus among the scientific community, one logical effort would have been to announce in numerous forums to the public that while the Chernobyl accident was a human tragedy with widespread impacts throughout Europe and the former Soviet Union, its direct impact was regional rather than global.
Risk of cancer from low levels of a carcinogen
It has long been apparent that agents vary in their propensity for harm, e.g., Paracelsus' often quoted remark, "the dose makes the poison". However, this does not imply a particular dose-response relationship. Historically, it was considered that at low enough doses, below some ill-defined threshold, all agents posed no threat. This began to change after X-rays came into medical use. In the 1920's scientists began a search for a dose below which there was no cancerous effect. As studies were done at lower and lower doses, and effects were still found, it was postulated that there might be a linear dose-response relationship. This found its way into public policy with the International Commission on Radiation Protection (ICRP), which suggested in 1927 that "no dose should be given without expectation of some benefit." This policy was specifically applied to cancer treatment, where mutagenic effects were observed for a wide range of irradiation levels in experiments on in vitro cells.
A physicist, Crowther (1924), first proposed a theory by which radiation could cause cancer with a linear dose response relationship. He proposed that an ionized cell becomes a cancerous cell. The probability of a person developing cancer then becomes the probability of one of his cells being ionized. Since many hundreds of cells are ionized each minute by cosmic radiation, this effect is millions of times too large. Modifications must be made in the elementary idea. Only a small fraction of ionized cells remains to cause a cancer; others are "repaired" or are otherwise prevented from posing a cancer risk.
A number of biologists, who frequently work with "cooperative" phenomena, promptly argued that the biological repair mechanism acts effectively up to a threshold above which it is overwhelmed. This then gives a non-linear dose-response relationship. Even now, there are biologists who regularly argue that a completely linear dose-response relationship is impossible because of this repair mechanism. While this is now generally considered to be an extreme view, it does illustrate a difference between some biologists and physicists in their prejudices and perceptions of causal agents.
This becomes an interesting, and at times heated, argument. A physicist, generally trained to examine individual rather than cooperative phenomena, might then ask: why should the repair mechanism have a threshold? Why could it not instead merely be incompletely effective, such that only 99.9999% of all altered cells are repaired? This would still leave one altered cell in a million to cause cancer.
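The physicist's objection is easy to quantify. The figures below are placeholders chosen only to show the structure of the argument, not measured ionization rates: even a repair mechanism that fixes all but one ionized cell in a million leaves a residual effect that scales linearly with dose.

```python
ionized_cells_per_minute = 300   # hypothetical background ionization rate
minutes_per_year = 365 * 24 * 60
repair_failure_fraction = 1e-6   # 99.9999% of ionized cells are repaired

unrepaired_per_year = (ionized_cells_per_minute * minutes_per_year
                       * repair_failure_fraction)
print(f"{unrepaired_per_year:.0f} unrepaired cells per year")

# Doubling the dose doubles the unrepaired count: the response stays
# linear no matter how effective (short of perfect) the repair is.
print(f"{2 * unrepaired_per_year:.0f} at twice the dose")
```

In this toy picture the dose-response curve has no threshold at all; the repair mechanism merely rescales its slope.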
This argument has political overtones and consequences. If there is really a threshold, regulation can become simple: merely keep exposures and doses below the threshold. For example, if there is a threshold for lung cancers caused by asbestos at or about the present occupational (OSHA) standard of 0.2 fibers/cc, why should we spend anything to remove asbestos when the average exposure is far less than this standard34? If there is no threshold, as is presently commonly assumed for regulatory purposes and by many scientists, then one must face the question: "what is an acceptable risk?" For the case of asbestos, a detailed analysis of the risk categories and a priority ranking of remediation options is presented by D'Agostino and Wilson34.
Many scientists, particularly physicians, have tried to resolve this public policy question by arguing that there is an "effective threshold", a level below which there is no significant increase of risk, appealing to commonplace experience. This is emphasized by the observation of Crump, et al.35 that regardless of whether or not there is a biological threshold in the abstract, there is unlikely to be a threshold in a human population for any radiation induced cancer for which there is also a natural incidence.
This important political distinction has led many scientists to try to prove the existence of a threshold for the pollutant caused by the industry with which they are concerned. However it is inherently impossible to prove the existence of an absolute threshold -- a level below which nothing occurs. Some scientists have tried to argue that there may be linearity, with no threshold, for carcinogens; this is apparently not true, however, for non-cancer end points.
In the absence of a rigorous proof of a threshold effect, regulators have been forced into defining an effective threshold or "No Observed Effect Level (NOEL)". But it has recently been realized that the argument of Crump, et al.35 applies very widely. Many years ago it was suggested that air pollution causes adverse effects on health at ambient levels, perhaps with a linear dose response36-38. This idea was roundly attacked as being impossible, unscientific and contrary to sound public policy. In a forthcoming paper, Crawford, Evans and Wilson39 discuss how the argument of Crump, et al.35 applies not only to carcinogens but to a dozen non-carcinogenic endpoints. This suggests that low dose linearity, once regarded as impossible, is not only possible, but usual.
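The additivity-to-background argument of Crump, et al.35 can be illustrated numerically. In the sketch below the dose-response curve and the background dose are purely hypothetical (a logistic curve is chosen only for illustration); the point is that any smooth curve, however threshold-like in appearance, yields an excess risk that is linear in added dose when the agent adds to an existing background exposure:

```python
import math

def response(dose):
    """A hypothetical smooth, threshold-like dose-response curve
    (logistic form, chosen purely for illustration)."""
    return 1.0 / (1.0 + math.exp(-(dose - 5.0)))

background = 1.0  # hypothetical background-equivalent dose

def excess_risk(added_dose):
    """Excess risk when the agent adds to the existing background dose."""
    return response(background + added_dose) - response(background)

# Slope of the response curve at the background point (finite difference).
eps = 1e-6
slope = (response(background + eps) - response(background)) / eps

# For small added doses the excess risk approaches slope * dose,
# i.e. low-dose linearity, despite the curve's threshold-like shape.
for d in (0.01, 0.001, 0.0001):
    ratio = excess_risk(d) / (slope * d)
    print(f"dose={d}: excess risk / linear prediction = {ratio:.4f}")
```

As the added dose shrinks, the ratio tends to 1, which is the content of the low-dose-linearity claim in the text.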
On the other hand, there is some evidence that populations exposed to low levels of natural and man-made chemicals may have decreased cancer rates, i.e. relative risks less than one40. (The Relative Risk, RR, is defined as the ratio of risk in the exposed population to that in a comparable control population.) Thus, RR > 1 indicates an elevated risk, RR = 1 indicates no difference between the subject and control populations, and RR < 1 indicates a reduced risk in the exposed group.
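The definition of relative risk can be stated compactly in code; the case counts below are hypothetical and serve only to show the arithmetic:

```python
def relative_risk(cases_exposed, n_exposed, cases_control, n_control):
    """Relative risk: incidence in the exposed group divided by
    incidence in a comparable control group."""
    risk_exposed = cases_exposed / n_exposed
    risk_control = cases_control / n_control
    return risk_exposed / risk_control

# Hypothetical counts, for illustration only:
rr = relative_risk(8, 10_000, 10, 10_000)
print(f"RR = {rr:.2f}")  # RR < 1 here: reduced risk in the exposed group
```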
Clearly this is a difficult scientific question that is still not fully resolved. The present optimal solution would be first to openly acknowledge the deficiencies in current understanding, and then to address in a risk or policy framework those issues which cannot be conclusively argued scientifically. This combination of open discussion of the status of the scientific research and review process provides the only sensible basis for public health and environmental policy making.
Risk of rapid sea-level rise from global warming flooding coastal cities
In many respects, sea-level rise is the most concrete result of projected global warming. During the heat wave of 1988, described as the "greenhouse summer"41, dramatic stories circulated that a doubling of atmospheric CO2 concentrations might rapidly melt the polar ice, which in turn might lead to a sudden (over the course of a very few years) increase in sea-level of as much as 10 meters, flooding many low-lying areas of the world, and most of the principal population centers. As such, the risk rapidly captured mass media attention, and inspired cartoonists to show ocean waves lapping at the chest of the Statue of Liberty while in the background water covers half of the buildings of New York City41.
Scientific investigation of the dynamics of carbon dioxide-temperature and temperature-ice interactions led to the conclusion that such rapid and large magnitude sea-level increases would not result from a doubling of the atmospheric concentration of CO2; such increases are inconsistent with current understanding of the physical climate system. Nevertheless, the image of a catastrophic sudden increase has already entered the public imagination. The present best estimate for the sea-level rise in the "Business as Usual" scenario is far less than 10 meters, but is also very uncertain: 66 ± 57 cm by the year 210042. This uncertainty does not even preclude a fall in sea-level43,44, although a rise is by far the more probable outcome. This, of course, does not prove that sea-level rise would not be an ecological disaster. Significant sea-level rise will inundate and displace wetlands and lowlands, with other potential impacts including increased coastal erosion, drowning of some barrier reefs, intrusion of salt water into coastal ground water supplies, altered tidal ranges in rivers and bays, and altered sediment deposition patterns45.
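The remark that the uncertainty "does not even preclude a fall in sea-level" can be made quantitative. If the stated spread of the 66 cm ± 57 cm estimate is read as one standard deviation of a normal distribution (an assumption for illustration; the source does not specify the distribution), the implied probability of a net fall is roughly one in eight:

```python
import math

def normal_cdf(x, mean, sd):
    """Cumulative probability of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

mean_rise_cm, sd_cm = 66.0, 57.0   # central estimate and spread from the text
p_fall = normal_cdf(0.0, mean_rise_cm, sd_cm)
print(f"P(sea-level falls by 2100) ~ {p_fall:.0%}")
```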
The rapid entry of sea-level rise into the public awareness of global change also presents an important opportunity to establish new policies with significant environmental benefits. In a recent analysis of long-term environmental changes due to an eightfold increase in the atmospheric CO2 concentration, Manabe and Stouffer46 found that potentially catastrophic modification of ocean circulation patterns and a sea-level rise of four to 10 meters are possible over a time scale of three to five centuries. While the long time scale of the change might be taken as a sure sign that sea-level rise will receive low political priority, public perception appears to differ. Current research indicates that the public, and particularly the younger generations, are exceedingly sensitive to environmental degradation, even if it is expected to occur over a duration of many generations25. The optimistic but realistic conclusion is that if an open and accurate dialog can be maintained between the research and wider public communities on the topic of climatic change, support for a new, inter-generational set of environmental policies could become reality.
Risk of global environmental change: the Kuwait oil fires
In January 1991 it became clear that President Saddam Hussein of Iraq was prepared to blow up and set on fire the Kuwait oil wells. King Hussein of Jordan, quoting one of his scientists (1991), argued that this could have global environmental impacts and was itself a reason for not engaging in military activities in Kuwait. This was echoed by Professor Carl Sagan of Cornell University47. Most scientists scoffed and the advice was ignored; Saddam Hussein set fire to the oil wells.
Were the scientists correct in raising the fear or was the global change impossible? Sagan's argument was that it was likely that the wells would burn and generate a great deal of particulate matter, therefore obscuring the sun, and causing a global cooling. This was similar to the argument about "nuclear winter" raised by SCOPE48 and discussed by Turco, et al.49.
There was a prompt scientific reaction to the suggestion. It was pointed out that even if (as eventually occurred) almost all the wells were set alight, this would only result in a 5% increase in the global rate of consumption of fossil fuels. A global, rather than a regional effect therefore seemed unlikely. In this case, scientists were already familiar with the necessary types of calculations because of prior research on the claims of a nuclear winter resulting from a nuclear war. Most scientists had, correctly or not, discounted the calculations of Turco, et al.49 for a number of reasons, including the failure to consider the inhomogeneous nature of the atmosphere. Complete extinction of sunlight at one geographical location can be compensated partially by air circulating from a neighboring region where the absorptive losses of sunlight are less.
A flurry of scientific investigation followed the ignition of the wells, including airborne pollutant measurements and computer simulations. While global environmental models were not sufficient to provide an iron-clad answer to the possibility of a global impact, several conclusions quickly emerged: the smoke was not as black as initially feared; particulate levels were less than those suggested by initial estimates; vertical transport was minimal; and the atmospheric residence time of combustion products was far shorter than initially suggested50. These observations led a broad spectrum of scientists to conclude that all indications were that the oil fires did not pose, and never did pose, a global threat. However, the temperature in Kuwait during summer 1991 was reported to be many degrees less than usual, and this difference has often been attributed to the blanketing of the sun caused by the fires50. The consensus was that while regional effects were expected and observed, no global impact should occur.
Was the absence of a global effect due to chance? If the fires had occurred under different weather conditions would there have been a different result? If Saddam Hussein had succeeded in filling the trenches with oil and setting them alight to make a fire several times larger (although only for a limited duration), would that have produced a global effect? These and other matters were discussed at several meetings51, where the vast majority of scientists answered 'no.' Nevertheless, there remained a lingering doubt that unfavorable circumstances could have produced a much bigger effect, and that the predictions of Hussein and Sagan must be carefully evaluated and guarded against.
In this case, there was a rapid scientific investigative response to a potential crisis that was followed almost as quickly by discussion and consensus through the media and open scientific conferences52.
Risk of death due to asteroid impact
Most people would agree that there is a small but finite risk of being killed by an aircraft falling from the sky. One reads about such accidents in the newspaper. But few people think that it is an important risk, and many are surprised to hear that the risk of a bystander being killed by an aircraft crash is four times the threshold risk of one chance in a million at which the Environmental Protection Agency contemplates regulation (the probability is 4.2 x 10-6 per lifetime53). But who has heard of a fatality due to an asteroid impact?
The possibility that the dinosaurs were destroyed by the impact of an asteroid was suggested by Alvarez et al.54. After the verification of a number of auxiliary predictions, the theory has been accepted by a majority of paleontologists.
Other scientists began to explore the probabilities55 of collision and impact. The most complete study of the probability is that of Chapman and Morrison56,57. They conclude that asteroids and comets with the explosive energies of the largest nuclear bombs strike the earth every century, but since they usually land in uninhabited areas they pose a trivial threat compared to more common natural disasters. But much more massive objects of diameter about 2 km, although small compared with the impactor discussed by Alvarez et al.54, have a one in a million chance per year of impacting the earth. Such an event might disrupt the ecosphere considerably and cause extensive indirect human mortality, in addition to the direct effect of impact and explosion. The risk to an individual is about 1 in a million (10-6) per year, or 1 in 20,000 (5 X 10-5) per 50 year lifetime. This is 50 times larger than typical risks in which the EPA does take an interest, and over 250 times larger when the EPA's conservative risk standards are taken into account. If a risk this large comes to the attention of an EPA official, it would be certain to receive regulatory attention58.
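The arithmetic behind the comparison with regulatory thresholds is straightforward, and can be laid out as follows (using only the figures quoted above):

```python
annual_risk    = 1e-6   # chance per person per year of a ~2 km impactor (Chapman and Morrison)
lifetime_years = 50
epa_threshold  = 1e-6   # per-lifetime risk level at which EPA contemplates regulation

lifetime_risk = annual_risk * lifetime_years   # 5e-5, i.e. about 1 in 20,000
ratio = lifetime_risk / epa_threshold

print(f"lifetime risk = {lifetime_risk:.0e} "
      f"({ratio:.0f}x the EPA regulatory threshold)")
```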
It is self-evident that the public are not particularly concerned by the risk of asteroid impact(2). A fascinating parallel story exists for the risk and perception of risk of fatalities from the fiery re-entry and return to the Earth's surface of an artificial satellite. While the case of several recent Soviet craft and the Skylab module received considerable attention, other more improbable satellite risks have been examined59. In 1990 and 1991 the Galileo and Ulysses space probes were launched. Preceding the launch was a detailed risk assessment estimating the chance of catastrophe. Among the problems considered was the possible perturbation of the orbit by asteroid impact as the space probe came back to swing round the earth to be accelerated toward the outer planets60. While the overall probability was deemed acceptably small, it was the product of three probabilities; (a) asteroid impact; (b) impact such that the perturbed orbit intersected the earth; and (c) damage to the spacecraft that would knock out all radio equipment so that the orbit could not be corrected again by signals from the ground.
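The Galileo/Ulysses assessment rested on the product of three independent component probabilities. The values below are entirely hypothetical placeholders (the actual figures from the assessment are not given in the text); the sketch shows only why the combined probability is so small:

```python
# Hypothetical component probabilities, for illustration only; the actual
# figures from the Galileo/Ulysses risk assessment60 are not quoted here.
p_asteroid_impact   = 1e-4   # (a) probe struck by an asteroid
p_orbit_intersects  = 1e-3   # (b) perturbed orbit intersects the Earth
p_radio_knocked_out = 1e-2   # (c) all radio equipment disabled, so no correction

# Assuming independence, the combined probability is the product:
p_catastrophe = p_asteroid_impact * p_orbit_intersects * p_radio_knocked_out
print(f"combined probability ~ {p_catastrophe:.0e}")
```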
It seems too remote -- yet the calculated probability is similar to or larger than some risks that are of regulatory concern. One reason for this lack of concern may be that this is a natural disaster and is felt to be unpreventable. This disaster, however, can be addressed55, and thus raises issues of resource priority. An astronomical sky-survey system or deep space early warning radar system could detect such asteroids and measure their trajectories. If one were on a collision course with the Earth, we could send up a rocket to intercept it and "nudge" the asteroid into a different trajectory. This is within present day technology, although the capability sounds most like a post-Cold War employment ploy for veterans of the SDI ("Star Wars") program. It would be expensive, but the American public seem willing to spend almost unlimited sums on disposal of high level nuclear waste.
The obvious implication from the case of "asteroid defense" is that even when there is a means of reducing risk which is technically possible, a social benefit/cost decision must be made. It is important for the public to address explicitly, and to decide upon, the importance of addressing each risk or risk category: in this way control of the process remains with society as a whole.
Claim of safe, cheap, and limitless energy from "cold" nuclear fusion
We conclude this list of "impossible risks" with a case of the inverse: not the impossibility of a risk, but a disputed claim of a potential benefit. The now infamous case is that of "cold nuclear fusion": electrochemically catalyzed fusion in the solid state, as opposed to fusion at high temperature and pressure in the plasma state. The promise of cold fusion, that of essentially boundless fusion energy from an inexpensive and technically simple table-top apparatus rather than from vastly complex and expensive tokamak reactors, is a holy grail of epic proportions: "salvation in a test tube"61. While the hypothesis and the experiments performed by Pons and Fleischmann, and several other scientists in pursuit of cold fusion, were long-shots, one can argue that they were no more fanciful than a number of experiments, some yielding wonderful discoveries, performed by enterprising researchers in a variety of fields. What makes the story of cold fusion remarkable is the combination of substandard scientific method, efforts to avoid and subvert any sort of peer review, reliance on belief over proof, and an atmosphere of secrecy (perhaps aggravated by overzealous legal advice) that sustained cold fusion in the face of evidence that the claims were impossible and wrong.
While the pursuit of cold fusion was rife with errors of omission and shoddy research, a critical feature of the saga is that the initial announcement of the "effect", and many subsequent claims of "positive" findings, were made through the medium of the press, with peer-reviewed scientific articles always promised as follow-up rather than appearing first. The justification offered was that rapid announcement was necessary to secure rightful credit for a discovery in the face of competitors pursuing very similar experiments. The absence of evidence available for scientific examination is reflected in the story of the famous "gamma ray" spectrum. One of the cornerstones of evidence for cold fusion shown on television and at press conferences was a spectrum, supposedly showing a peak caused by gamma rays from neutron capture in deuterium. The spectrum that was circulated for examination had been digitized from a television monitor by scientists frustrated by the lack of available material. It was quickly demonstrated that the critical peaks in the spectrum were at the incorrect energy and were too narrow to be the result of any sort of fusion process (Taubes61, p. 310 - 312).
According to the published analysis of Taubes61, a major impetus for a rushed news conference was inter-university competition. After university-level discussions broke down, Pons and Fleischmann of the University of Utah believed that Steve Jones at Brigham Young University was very close to making a similar announcement, and at that moment had more data to present than they did61. Further clouding the picture was the fact that Jones had reviewed, and rejected, a funding proposal submitted by Pons and Fleischmann to the U.S. Office of Naval Research and then directed to the Office of Advanced Energy Projects. Both groups pointed to sketchy evidence that they had been working on the problem for many years, and were driven by the need to solidify this "evidence" with a publication61.
In the process of moving from initial tentative data, to a press conference, Pons and Fleischmann moved rapidly from informing the University of Utah administration that they, "needed another 18 months to verify the initial experiments,"61 to the claim at the March 23, 1989 press conference that, "... in a few short years [we] should be able to build a fully operational nuclear fusion reactor that could produce electric power."
The ensuing demand for further confirmation, and the inability of the researchers to produce even the data described at the initial press conference, demonstrated that regardless of the veracity of their claim, they were far from ready to present any meaningful claims of cold fusion. The next several months witnessed a massive effort by many research groups to corroborate the initial findings.
Despite the lack of data to support the claim of cold fusion, and statements by a variety of scientists that the claims were "preposterous" and "a case of self-deception" (G. Chapline of Lawrence Livermore Laboratory, as quoted by Taubes61), cold fusion seemed to have a life of its own. The interest in cold fusion, particularly the fact that two groups at the University of Utah and at Brigham Young University were both hard at work on the problem, and the constant series of articles published by The Wall Street Journal, suggested that the effect must be real, with only some details to be ironed out. At the height of the cold fusion frenzy, no fewer than eleven laboratories claimed to have evidence for cold fusion. These included two groups at Texas A & M University; two groups at Los Alamos National Laboratory; and one each at the Bhabha Atomic Research Centre; Stanford University; Case Western Reserve University; the Institute of Petroleum in Mexico; Oak Ridge National Laboratory; the University of Florida; and an independent laboratory at the University of Utah62. It is noteworthy that only a small minority of the groups claiming to observe signs of cold fusion were electrochemistry laboratories.
After the first signs that there were difficulties with the claims of Pons and Fleischmann and Jones, research groups began to retract their confirmatory claims or to quietly drop the matter63. True to Irving Langmuir's 1953 description of "pathological science", the number of supporters of cold fusion rose to roughly half of the active research groups, and then dropped almost to oblivion61.
Cold fusion research has not been wholly abandoned: a conference on the subject was held as recently as 199264, and reports of substantial research and development grants in Japan continue to appear today62,63. Each recent claim, however, is met by an equally strident counter-attack.
As an example of the occasional but ongoing debate in the peer-reviewed and general science literature, an article proposing several theoretical mechanisms for cold fusion by Fleischmann, Pons, and Preparata64 recently appeared in the peer-reviewed Il Nuovo Cimento, and was quickly cited by the editor of Nature as an example of just how authors all too often over-hype their results or claims4.
Equally intriguing is the case of a recent article in the semi-popular magazine Technology Review, in which one of the early proponents of cold fusion, Edmund Storms, claims that "the basis for skepticism is dwindling"65. In contrast to the case of atmospheric ignition, where further publication without further positive evidence was rejected through peer review, Storms claims there to be new evidence "of energy-releasing nuclear reactions at room temperature pour[ing] in from labs around the world." At the same time, physicist Paul Lindsay sees the opposite trend in the experimental record over the last few years. He notes that the accumulated negative evidence since the initial Pons and Fleischmann claim makes the publication in Technology Review "embarrassing ... it reads just like nothing has happened in five years"66. However, the above does not preclude the existence of small, non-equilibrium effects producing charged particles, which are of no importance to overall energy production.
4. EVALUATION OF THE EXAMPLES: GUILTY UNTIL PROVEN INNOCENT
As we ponder these examples, and focus on the situation where professional "experts" perceive a risk that the public does not, we find that we are in disagreement with much of the conventional wisdom concerning societal response to risk. Conventional wisdom holds that: the public are primarily concerned with the involuntary nature of the risk67; the public are more concerned with risks involving many people at once than with risks that affect people singly68; and that there are some risks which impose a feeling of "dread"69. Although these are no doubt contributory, we believe that a more important theme is the distrust of the professional expert, and, by extension, distrust of the process of identifying and dealing with risks. Again, this is a situation that can only be attributed to the actions or inactions of experts, and can best be addressed through a rigorous effort of scientific and risk communication and involvement with the general public3.
One prejudice held by most experts is that if a phenomenon does not fit with existing scientific understanding, it requires more, rather than less, evidence to prove its reality. The conflict naturally arises between the excitement of the discovery of the moment and the necessity of frequently slow research to verify the initial claims. The cases above suggest that this is an area where the public do not understand the scientific process. Many frustrated scientists have expressed the view that the public must take the time to 'do their homework' and learn; we try here to avoid such a simplistic response.
In the Anglo-Saxon legal system it is usual to argue that a person is innocent until proven guilty. If we apply this concept to a new technology we realize that it is all too easy to say that there is no risk if we have not found one. Experience tells us that this is not always true; from a human standpoint it is the novel or the unknown that must demonstrate its benevolence to us. The opposite concept, that a new technology should be considered guilty unless proven innocent, however, faces problems of implementation. To prove that something is harmless requires a search for negative evidence, and is always compromised by the qualifier: "to the extent of our tests, we have found no harmful aspects of this technology." In fact, it may be virtually impossible to prove that something is safe; DDT, CFC's and a host of chemicals were all assumed to be safe -- and tested as such -- until further tests proved otherwise.
The dilemma reduces to a simple question: How much negative evidence do we need to discount a particular suspected risk3? A corollary to this is the policy question: Who is to decide on the evidence required? The cases above suggest that public disagreement with the cutoff selected by professional scientists is often a major cause of dissatisfaction with the scientific research which has been carried out on the public behalf.
Increased emphasis on communicating with the public70 is certainly an important step in decisions about risks. Equally important, however, is the prior stage: the need to involve the public, the ultimate consumer of 'public-interest science'70, in the process of determining what constitutes an acceptable risk, and what is an acceptable quantity of negative evidence. The obvious benefit of such an open dialogue over the risks of life is that the frequently adversarial relationship between the public and public policy makers will be reduced, and that the perception of the particular decision as a fait accompli will be removed. In the next section we examine a case still very much in popular and scientific debate where a suspected risk is in conflict with currently understood biology and physics: a case where negative evidence is central.
5. NEGATIVE EVIDENCE AND CLAIMS OF HEALTH RISKS FROM ELECTROMAGNETIC FIELDS
Since the discovery of electricity there have been suggestions that electromagnetic fields have extraordinary effects on people. The hypnotic effects Mesmer described in the 18th century were originally called animal magnetism. At that time curative, or at least benign, effects were claimed. But more recently harmful effects have been claimed. A very large number of such claims have now been made, and many of them are contradictory or nebulous enough that it is certain that some are wrong. Some of the claims now seem to everyone to be obviously incorrect. For example, James Thurber72 described how his grandmother insisted on covering every unused electricity outlet in her house to make sure that the electricity did not leak out. As described in the story, "The car we had to push,"72 Thurber's mother moved from empty light fixture to light fixture around their home screwing in a light bulb to determine if the switch was on. She was then able to relax with the knowledge that she had stopped this "costly and dangerous leakage." Few scientists now would accept this as plausible.
That biological systems are very sensitive to electromagnetic fields and currents has been known for two centuries. Galvani noticed the twitching of a frog's leg as he connected two electrodes to it. This led him to use frogs' legs as the active part of an instrument to detect electric currents -- the galvanometer. While silica, artificial fiber, or spring-mounted wire coils are now used in galvanometers, the effects of electricity and magnetism on tissue at various levels now account for a substantial component of biological research. Moreover, directly visible effects of magnetic fields have been known for nearly a century. When one puts one's head into the gap of a cyclotron magnet, with its 18,000 Gauss field, and moves one's head around, one sees flashes of light on the retina. These have also been recorded in time-varying magnetic fields at intensities above about 70 Gauss and frequencies of 50 Hz and above. These visual sensations (magnetophosphenes) are caused by induced currents in the retina. But below 70 Gauss it is not generally accepted that these visual sensations occur.
Sharks and rays have specialized sense organs (the ampullae described by Lorenzini73) which detect electric fields in the water and are used to orient their swimming; some bacteria have magnetite particles to orient themselves; as do some fish. These organs are sensitive to the Earth's magnetic field, not to fields orders of magnitude smaller than the omnipresent natural background as would be necessary to match some of the published claims.
There are evident dangers of currents, some of which can be easily avoided. A cartoon in England in 1935 showed an old lady asking a tram (trolley) conductor if it was dangerous to stand on the metal trolley track, "Not unless you put your other foot on the overhead wire, mum."
The fact that electromagnetic fields are commonplace in modern society and the degree of public concern, however, fully justify a careful examination of the potential for harmful health effects.
6. ELECTROMAGNETIC FIELDS: THE NEW CONCERNS
The present concern that low intensity electromagnetic fields are hazardous arises primarily from the report in 1980 by Wertheimer and Leeper74 that the incidence of childhood leukemia near Denver is associated with the presence of power lines, and, by inference, with electromagnetic fields of very low intensity, roughly three milligauss. This postulate was immediately linked to an earlier suggestion that electric and magnetic fields of this low intensity can produce effects on cells -- in particular on the rate of calcium efflux from chicken brains. It seemed that these observations were mutually supportive. This has led to a great deal of public concern, and has been used to stop, delay, or reroute power line projects.
One measure of the significance of a potential ELF health risk is the recent estimate that places the expense of the problem to date at over one billion dollars75. Yet the scientific community as a whole has not accepted the premise that weak electromagnetic fields cause leukemia or any other health hazard. The public policy issue is: to what extent should society accept the claims of a limited group of scientists, fund research to clarify the matters further, or make expensive modifications to industrial infrastructure to reduce the claimed harmful exposure? Further, does common prudence suggest we avoid electric and magnetic fields, and if so, how?
In making their initial postulate, Wertheimer and Leeper attributed the phenomenon (elevated cancer rates) to what they regarded as the most reasonable causative agent: strong electromagnetic fields. The causal agent that a person considers most likely is a subjective matter, and is strongly connected with a person's prior experience. As the debate over ELF has made very evident, most physical scientists and many cellular biologists consider the attribution of leukemia to low intensity electromagnetic fields implausible, to say the least. However, many epidemiologists76,77,78 have found recent epidemiological data convincing. Let us discuss some of the preconceptions of a physicist.
The Background of Physical Law
The laws of electricity and magnetism, clearly enunciated over 100 years ago by James Clerk Maxwell, describe a wealth of precise physical measurements. They must clearly be used as a guide to the phenomena we might expect. In particular, the force law describes the effect of an electric field, E, and a magnetic field, B, on a particle of charge e moving with velocity v:
F = e [E + (1/c)(v x B)],
where F is the force, E is the electric field and c is the velocity of light. All symbols in boldface are vectors, e.g. the electric field, E. The first term on the right-hand side describes the force on a charge, e, in the electric field, E. The second term, the magnetic part of the Lorentz force, describes the force experienced by a moving charge in a magnetic field. The crucial fact is that any forces on ions, on membranes, or on other components of biological systems due to electric or magnetic fields are forces on charges. Further, the forces due to magnetic fields are lower than those due to electric fields of comparable magnitude by the factor (v/c). Because velocities in biological systems are generally quite small compared to that of light, (v/c) is frequently less than 1/100,000. Consequently, the effects of magnetic fields are expected to be small. Another way to consider the equation is to recognize that for the electric and magnetic field terms to have the same units, that of force, the relative contribution of the magnetic field must be reduced by (v/c).
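The size of the (v/c) suppression can be made concrete. The biological speed used below is an assumed, generous figure for ionic transport in tissue (not a value from the text); even so, the suppression factor falls many orders of magnitude below the 1/100,000 bound quoted above:

```python
c = 3.0e8    # speed of light, m/s
v = 1.0e-3   # assumed, generous ionic transport speed in tissue, m/s

# For fields of comparable magnitude, the magnetic force on a moving
# charge is smaller than the electric force by the factor v/c.
suppression = v / c
print(f"v/c ~ {suppression:.0e}")
```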
One of the important features of electric fields is that the field inside a conductor is greatly reduced from the field outside. The human body is a conductor, and static electric fields outside the body produce essentially no electric field inside it. This is demonstrated regularly in physics lectures: the lecturer charges himself up to a potential of 100,000 volts but feels nothing; the only visible effect is that his hair (outside the body boundary) stands on end. This argument does not apply to fish in water, where there is strong direct coupling between the conductive water and the conductive tissue of the fish. Of course, if the same lecturer were standing on a wet floor an electric current would flow through him, and touching even 100 volts can be fatal (if the power supply can provide enough current to electrocute him).
A magnetic field changing with time inside a body can produce an electric field. This is described by one of Maxwell's equations,
curl E + (1/c) ∂B/∂t = 0,

where "curl E" means the rotation of the electric field around the magnetic field lines, and ∂B/∂t is a partial derivative, in this case the time rate of change of the magnetic field. From this equation we see that a time varying magnetic field, such as a typical 60 Hz field, can produce an electric field at a cell inside a partially conducting body.
Another physical law critical to an understanding of the extremely low frequency (ELF) electromagnetic field debate (ELF fields are commonly defined as those with frequencies below 300 Hz) is the Second Law of Thermodynamics, which is usually discussed for systems in equilibrium. It describes a number of fundamental features of the macroscopic behavior of the random motions of atoms and molecules. In any body at non-zero temperature there are unavoidable fluctuations, or jostling, termed Brownian motion. The statistical mechanics governing these motions is well described by Boltzmann's law, and in particular the equipartition theorem, which states that every degree of freedom carries (1/2)kT of energy in the inherent fluctuations. Here k is Boltzmann's constant and T the absolute temperature.
These laws apply both in classical mechanics and in the domain of quantum mechanics. Only in exceptional cases can biological systems detect signals that are smaller than the thermal fluctuations of the system, i.e. smaller than (1/2)kT. If the energy is larger than (1/2)kT, it is more straightforward to imagine biological systems capable of detecting it. Such considerations tell us that the energy in a magnetic field varies as B². If we accept that any effect on biological systems, like all electromagnetic interactions, conserves parity, then any effect must vary at low fields as an even power of B. Thus, an effect that is just detectable in the presence of the Earth's magnetic field of 1/2 gauss would be 10,000 times too small to be detectable at five milligauss, a typical field intensity ascribed to ELF fields from power lines at a distance of several hundred feet.
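The factor of 10,000 follows directly from the B² scaling; as a one-line check, using the field values quoted above:

```python
B_earth = 0.5    # Earth's magnetic field, gauss
B_line = 5e-3    # typical power-line ELF field at several hundred feet, gauss

# A parity-conserving effect varies at low fields as B^2, so the relative
# effect size falls as the square of the ratio of the field strengths.
ratio = (B_earth / B_line) ** 2
print(f"{ratio:.0f}")   # prints 10000
```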
Another way of expressing the limits of detection is in terms of Nyquist's theorem which states that the mean square of the noise voltage is given by the equation:
⟨v²⟩ = 4kTR df,
where again T is the absolute temperature, R is the resistance between the places the voltage is measured, such as the membrane surfaces of a cell, and df is the frequency interval over which the measurements are made. A critical point in the analysis is that the voltage across a membrane or cell is the electric field multiplied by the linear dimension of the membrane. It is this voltage that is to be compared with the "noise" signal characteristic of biological systems. The resistance in the formula is proportional to the linear dimension, so the noise voltage is proportional to the square root of the linear dimension, while the signal voltage is proportional to the linear dimension itself. Hence, if the linear dimension of the cell, or other biological detecting device, is increased, the signal to noise ratio improves.
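As an illustrative sketch of the magnitudes involved, the Nyquist formula can be evaluated for assumed cell parameters; the membrane resistance and bandwidth below are hypothetical round numbers, not measured values:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0          # body temperature, K
R = 1e8            # assumed membrane resistance, ohms (hypothetical)
df = 100.0         # assumed measurement bandwidth spanning 60 Hz, Hz

# rms Johnson-Nyquist noise voltage from <v^2> = 4 k T R df
v_noise = math.sqrt(4 * k * T * R * df)
print(f"rms noise voltage ~ {v_noise:.1e} V")
```

Under these assumptions the noise floor is of order ten microvolts; any candidate ELF signal voltage across the cell must be compared against a floor of this order.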
The physical parameters of the system have been well studied by Adair79. He and other physicists have, however, been accused of myopically rejecting biological data that conflict with their claims. Suppose we envisage a biological detection device several centimeters in size, large enough to detect fields smaller than the Earth's magnetic field. If such a device existed, it would then appear physically plausible that biological systems could detect extremely low frequency electromagnetic fields of the intensities implicated by proponents of the ELF-cancer link. Thus, while a physicist may be predisposed to reject the possibility that a body can be sensitive to a field smaller than the natural background fluctuations, a biologist thinking about the system in its entirety may not be. More than one biologist has commented that "it would be surprising if such man-made fields do not have some effect"55. Can a biologically plausible mechanism be proposed? One possibility might be a mechanism whereby all signals were averaged over a long time -- say a year. Under the proper conditions signal averaging over long times could increase the biological impact of very weak fields. But it is unclear that any real biological system could average over such a long period. One limit is set by 1/f (flicker) noise, which occurs in both biological and non-biological systems. Covering an appreciable part of the ELF frequency domain (0 to 300 Hz), moreover, implies far shorter averaging times of a second or less.
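The averaging argument can be made quantitative under the standard assumption that averaging N independent samples improves the signal-to-noise ratio by the square root of N; the numbers below are illustrative only:

```python
import math

f = 60.0           # power-line frequency, Hz
t_year = 3.15e7    # averaging time of one year, s
t_short = 1.0      # the ~1 s limit suggested by 1/f noise, s

# With roughly f independent samples per second, N = f * t, and the
# signal-to-noise ratio improves as sqrt(N).
gain_year = math.sqrt(f * t_year)
gain_short = math.sqrt(f * t_short)
print(f"one year: ~{gain_year:.0f}x; one second: ~{gain_short:.0f}x")
```

A year of integration would gain a factor of tens of thousands, while one second gains less than a factor of ten, which is why the case for detectability hinges on whether any biological system can actually integrate over such long times.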
If one postulates biological systems that are tightly electrically coupled and non-dispersive over scales of many centimeters, then advocates of biological sensitivity to extremely low frequency electromagnetic fields would be advised to explore these systems. It is difficult to take seriously the arguments that ELF fields cause cancer when, to date, no such systems have been discovered in humans. Until such systems are identified, it remains prudent for physical scientists to reject the ELF-cancer link hypothesis.
7. SCIENTIFIC PROCEDURE AND PROTOCOL: OPTIMISTIC SKEPTICISM
It is a common statement among scientists that "when sound laboratory data do not fit a theory, it is time to look for a new theory." There is, of course, disagreement on what constitutes "sound," and biophysicists and practitioners of the sub-field of bioelectromagnetics are no exception. In particular, is an experiment that has not been replicated a sound experiment? Moreover, when the apparent consequences of a piece of data seem to demand revision of the most cherished of scientific laws, it is vital to do two things:
(1) to look very carefully at the data to see whether they are as sound as first suggested and believed; and
(2) to lay out the apparent inconsistencies as clearly as possible, to see whether, under closer examination, they might vanish.
A characteristic of research on a possible ELF-cancer link has been an inability to follow this scientific method. The reasons range from financial and resource constraints to the reluctance of one investigator to repeat the experiments of another. This situation, coupled with the reasonable tendency to attribute any novel finding to the most probable cause, is a recipe for trouble: scientists pursuing a unique line of investigation and presenting data that fit an evolving "model" without the needed level of independent confirmation and peer review. Is that happening here?
The role of peer review in the ELF debate, and in the cases we have considered generally, is illustrated by the interest of the renowned physicist J. J. Thomson in telepathy. In his 1899 address to the British Association for the Advancement of Science, Thomson speculated that electromagnetic fields with wavelengths between the ultraviolet and X-rays are the carriers of information between people, and hence the physical mechanism of telepathy (Thomson, 1899). Although considered by most of the scientific community to be without merit, Thomson's proposal received a round of debate and reasonable scrutiny. The situation in the early 1990's with regard to the ELF-cancer link is not dissimilar: speculation exists that non-linear and poorly understood phenomena might evade the limits seemingly set by conventional physical theory. Such speculations, regarding telepathy, ELF, or cold fusion, are an important part of the scientific process, and deserve scrutiny. It is the responsibility of those who propose such ideas, however, to provide timely and sufficient information so that an efficient assessment can be made. In many cases this last step has been shamefully abused in the rush for prestige, acclaim, and notoriety.
A second important aspect of the ELF debate is that much of the data is epidemiological, a field where quantitative measures of data quality are by necessity less rigorous than those used in some classical areas of experimental research. The consumers of the results, such as non-epidemiological scientists attempting to correlate laboratory and epidemiological results, need to use their own common sense in evaluating the reported evidence and estimating how safe is safe enough. As Feinstein80 put it "if war is too important to be left to military leaders ... the interpretation of epidemiologic data cannot be relegated exclusively to epidemiologists."
An Application: Addressing the Public's Concerns about Electromagnetic Fields
In response to the growing mass of literature and debate on the topic of ELF and cancer, an Oak Ridge Associated Universities panel was charged with reviewing the present scientific and policy situation for the Committee on Interagency Radiation Research and Policy Coordination. Strangely, the otherwise excellent report81 failed to address the central public policy issue. David Savitz82 evaluated the work of the committee, whose:
... response appears to address the multifaceted question, Is there persuasive evidence that electric and magnetic fields are a major cause of clinically adverse health effects, with a biological understanding of the processes involved ... and a firm basis for risk assessment? The answer they provided is the correct, obvious, and even obligatory one: "No."
But this does not guide us in resolving a potential risk which is obviously not understood at present. Savitz continues:
A separate but related question is, "Given the present state of knowledge and hypotheses concerning health effects of electric and magnetic fields, what priority should be assigned to further research on this issue?"82
An assessment of the priority that a potential risk should be accorded, in light of minimal evidence and the wider milieu of scientific and technical costs and benefits, is, of course, more difficult to provide. It is, however, just this potential involuntary risk that is of paramount importance for public health and of public concern. Any scientific investigation that fails to address these clear public concerns fails the first criterion of 'public interest' science and can hardly be considered a good 'buy' for the taxpayers.
In the case of electromagnetic fields it is clear that this test has not been met. There is as yet no simple experiment on the biological effects of low intensity magnetic fields that can be repeated in every laboratory. Organizations which award contracts for laboratory research into the potential of electromagnetic fields to affect ionic transport rates, mRNA synthesis, and melatonin production have failed to require researchers to systematically explore the pertinent parameter regimes and to repeat and confirm the results of other groups. With this first step incomplete, it goes without saying that the second step, that of communicating the information clearly to the wider scientific and public audiences, has not been accomplished. The final stage, that of discounting or acting to mitigate the ELF risk, thus remains a distant goal. There has been considerable public discussion of a policy of "Prudent Avoidance"83. But if the cause of an effect is not established as probable, or even as possible, but remains merely unclear, how do we know what to prudently avoid?
8. DISCUSSION: LESSONS LEARNED
One of the lessons that can be learned from these studies, and then applied to a wide variety of assessments of health, technological, and environmental risks, is that analysts must begin with, and then go beyond, an accurate assessment and report of the risks. The risks and the social and economic context in which they arise must be clearly communicated, and so must the strengths and weaknesses of the analysis performed. This usually requires an interdisciplinary approach. Finally, the interaction between "expert" and the concerned public must be redirected into a cooperative, not an antagonistic, one.
In each of the case studies outlined above, the most efficient path from risk identification to action involved scientific dialogue among "experts" and discussion with and between segments of the general public. This dialogue must include listening to the public as well as talking to the public. As with any product, science and risk analysis conducted for the public good must involve the eventual consumer in the feedback loop. We next present two examples of state public health programs and individuals that exemplify the needed avenues of cooperative interaction.
Dr. David Brown of the Department of Health of the State of Connecticut consistently works to examine any cancer or disease cluster brought to the attention of the department, however implausible it may initially appear. Once a claim is reported and the investigation initiated, he rightly follows the study to scientific completion, regardless of the stage at which the concerned citizen(s) have reached a conclusion with which they are comfortable84. The methodological advantage of this protocol is that no degree of political-technological collusion can be claimed, and the reports produced by the state Department of Health clearly demonstrate a consistent standard of veracity and impartiality.
Dr. Raymond R. Neutra, chief of the California Department of Health Services' Special Epidemiological Studies Program, behaved similarly when confronted by parents and teachers from an elementary school in Fresno, California, with claims of a cancer cluster caused by exposure to extremely low frequency electromagnetic fields from transmission lines85. In responding to the claims of a cancer risk, Dr. Neutra:
1. Held a series of meetings with the concerned parents and teachers where the current state of information concerning the health effects of ELF was presented.
2. Explained to the community the steps considered to be methodologically correct for identifying a statistically significant area of elevated risk.
3. Involved concerned teachers in the school by providing them with gaussmeters and asking them to record their own series of field measurements at the school. The measurements could then be used in subsequent analysis and compared to the exposure levels recorded in other international studies of the risks of ELF exposure.
There is no resolution yet at hand in the Fresno school situation. It is therefore too early to tell whether open dialogue and the involvement of concerned sectors of the community in the investigative process will be a complete success. What is clear, however, is that the principal frustration of the parents, that of neglect by the authorities who are paid to safeguard their health and their children's health, has been directly addressed.
The title of this essay, "What is the risk of the impossible?", should have a very simple answer: zero. The fact is, however, that in many cases a risk is perceived by some people as impossible, and hence safe, while others see a very definite, non-zero risk. We have considered several cases where society has had to deal with risks that were later classified as nonexistent. There is no universally applicable approach for treating such situations. Each case must be carefully and individually analyzed. The public should be informed about the problem and the research efforts, but it should not be unduly alarmed by speculative publications in the media that do not at least aspire to the ideals of peer and open review. Unfortunately, in the real world, "... it is very hard to prove cause and effect relationships in science, particularly when dealing with subtle hazards ... [particularly when] filing speculative claims can be financially rewarding ... even if they only sometimes pay off"86. This presents a problem that our society must solve in order to let people enjoy the results of technological progress without being frightened by the risks of the impossible.
A number of unifying methodological and policy lessons emerged from the cases of exceedingly improbable, or impossible risk evaluated here:
The process of peer review, in scientific circles, or in the interaction of experts and the wider public, must be integral to the evaluation of claims of risk.
The failure of the Soviet Union to properly disclose information regarding the accident at Chernobyl, and the attempt of cold fusion researchers to "publish" their claims in the popular press both illustrate the need for open evaluation of claims and supporting evidence. In this sense, risk assessment can be likened to a free-market economy: the invisible hand of open exchange, 'trading ideas and facts on their merits' is the most efficient mechanism for policy making.
Fluidity of opinion, and negotiating position must be maintained.
In the debate over policy, the degree to which opinions can change is crucial. While an initial discrepancy between the perception of an expert and the perception of the public is probably usual, one would hope that this would diminish as time progresses. Each of these two groups will watch the other to see whether its perception has changed, perhaps as the result of a public education campaign or perhaps in light of new data. As debates, particularly over technologically complex issues, continue, it is critical to distinguish between ideological and scientific opinions. Resistance to novel ideas, or an inability to alter one's position, can degrade discussion from valuable interchange to emotional opinion, to stonewalling or cover-up -- which is notably unproductive.
Curiously enough, the criteria that the Nobel Prize-winning chemist Irving Langmuir proposed to identify pathological, or fraudulent, science apply equally well to identifying impossible risks or untenable policy assertions87:
1. The maximum effect is observed by a process of barely detectable intensity, and the effect is largely independent of the intensity of the apparent causal agent.
2. The effect remains close to the limit of detectability.
3. Claims of great measurement accuracy, or of profoundness, persist in the face of mounting evidence to the contrary.
4. Theories are put forward that fail the test of being the simplest explanation for the available information.
5. Criticisms are met by ad hoc explanations: the proponents "always have an answer -- always."
As an example of the application of these criteria to evaluating claims and counter-claims in science and technology, let us return to the case of cold fusion. Fleischmann, Pons, and Preparata recently published a new defense of cold fusion under the title "Possible theories of cold-fusion"; the abstract of their paper reads:
We review some of the key facts in the phenomenology of Pd-hydrides usually referred to as 'cold fusion'. We conclude that all theoretical attempts that concentrate only on few-body interactions, both electromagnetic and nuclear, are probably insufficient to explain such phenomena. On the other hand we find good indications that theories describing collective, coherent interactions among elementary constituents leading to macroscopic quantum-mechanical effects belong to the class of possible theories of those phenomena64.
Irrespective of the eventual outcome of scientific investigation of cold fusion, this new abstract appears to violate Langmuir's warnings. Fleischmann, Pons, and Preparata (FPP) appear to have discounted any need to square their results with the simplest possible set of assumptions and models, namely those involving current electromagnetic and nuclear theory.
We must, however, be cautious in applying Langmuir's criteria too early in scientific enquiries. While most spurious experiments do indeed follow Langmuir's criteria, it is likely that many correct experimental results also follow them in the early stages, because of confusion and a lack of complete understanding. The crucial feature is how long the scientists remain confused.
It is critical to note that FPP may indeed end up being correct in their assertion that cold fusion is a real phenomenon or set of effects. Breakthroughs in understanding, by definition, involve ground-breaking insight that is beyond the scope of contemporary models or understanding. As part of the scientific process, however, their approach is troublesome. In an effort to continue the interest and investigation of cold fusion, they have apparently invoked an increasingly complex set of theories that build upon a set of experimental results that have themselves been shown by the research community to be invalid.
At the level of science policy, FPP's claims of "possible theories of cold fusion" highlight the problem of negative evidence with which we began this study. Despite the diverse set of experiments and experimenters who failed to reproduce the results of Pons and Fleischmann, a collection of negative evidence frequently fails to amount to the "smoking gun" of proof that is so critical to decision making when science enters the political or legal arena.
Risk communication must be more than presenting evidence; it must be an interactive process.
Research and risk analysis that relates to the general public must involve the general public in establishing the agenda and in setting health and safety goals. The inherent fear of new scientific and technological advances places the burden of proof squarely on the technical expert. While this is formally acknowledged in such procedures as the testing of new drugs for adverse side effects, and the testing of automobiles for dangerous defects, these steps are the effect, not the cause. Pronouncing something to be "safe" is akin to determining that the risk posed is below an agreed-to threshold. Thus, decisions of safety are based on negative evidence. The public must be integral to the process of establishing these standards.
A number of people have provided fascinating perspectives on the examples we explore here, and have reacted to earlier versions of this manuscript3. In particular, we would like to thank H. Berg, K. Foster, J. Holdren, D. Kahneman, W. Kempton, J. Merritt, M. G. Morgan, M. Levy, R. Pound and F. von Hippel for comments and criticisms. This work was partially supported by the Department of Energy through the Northeast Regional Center for Global Environmental Change and by the Air Force Office of Sponsored Research.
1. Morgan, M. G.; Henrion, M. Uncertainty. Cambridge, UK: Cambridge University Press; 1990.
2. Putnam. H. The Performance of William James, Bull. Amer. Acad. Arts Sci., xlvl, xx - yy; 1992.
3. Kammen, D. M.; Wilson, R. (1993) The Science and Policy of Risk, Science, 260, 1963.
4. Maddox, J. Can evidence ever be inconclusive? Nature 369: 97; 1994.
5. Davis, N. P. Lawrence and Oppenheimer. New York, NY: Simon and Schuster: p. 129-132; 1968.
6. Compton, A. H. Atomic Quest. Oxford, UK: Oxford University Press; 1956.
7. Buck, P. S. The bomb -- the end of the world? The American Weekly, March 8, p. 9; 1959.
8. Rhodes, R. The making of the atomic bomb. New York, NY: Simon and Schuster; p. 579, 665; 1986.
9. Konopinski, E. J.; Marvin, C.; Teller, E. Ignition of the atmosphere with nuclear bombs. Los Alamos National Laboratory Report, LA-602; declassified 1973; 1946.
10. Dudley, H. C. The ultimate catastrophe, Bull. Atomic Scientists, p. 21 - 24; November, 1975.
11. Commoner, F. Understanding the atmospheric ignition problem, unpublished manuscript; 1992.
12. Bethe, H. A. Ultimate catastrophe?, Bull. Atomic Scientists, p. 36, 38; June, 1976.
13. von Hippel, F.; Williams, R. H. Taxes credulity, Bull. Atomic Scientists, p. 2; January, 1976.
14. Sullivan, W. Experts doubt view that atom blast could end all life, The New York Times, November 23, p. 50; 1975.
15. Weaver, T. A.; Wood, L. Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres, Phys. Rev. A, 20: 316-328; 1979.
16. Wilson, R. The future of nuclear energy. Talk at meeting of Global Climate Change, December 1992.
17. Morgan, M. G. What would it take to revitalize nuclear power in the United States? Environment, 35:7-32; 1993.
18. Slovic P. Talk to American Physical Society, Washington meeting. 1993.
19. IAEA (International Atomic Energy Agency), Electricity and the environment. Proc. of the senior experts symposium, Helsinki, Finland; 1991.
20. Rasmussen, N. et al. Reactor safety study. Report to the U. S. Atomic Energy Commission; 1975.
21. Häfele, W. Energy from nuclear power, Sci. Amer., Special issue, "Energy for Planet Earth", 95-106; September, 1990.
22. Lewis, H. W. Technological risk. New York, NY: W. W. Norton Publ. p. 24; 1990.
23. Socolow, R. Talk at Princeton Plasma Physics Lab., October, 1992.
24. Kay, D. Colloquium at the Physics Department, Harvard University, March, 1992.
25. Kempton, W. Will public environmental concern lead to action on global warming? Ann. Rev. Energy Env., 18: 217-245; 1993.
26. Berkhout, F.; Feiveson, H. Securing nuclear materials in a changing world. Ann. Rev. Energy Env., 18: 631-665; 1993.
27. Gould, J. M. Significant U. S. mortality increases in the summer of 1986. Undated and unsigned article, widely quoted and circulated; 1988.
28. IAEA (International Atomic Energy Agency). USSR State Committee on the Utilization of Atomic Energy, The Accident at the Chernobyl Nuclear Power Plant and its Consequences. Information compiled for the IAEA Experts Meeting, 25-29 August 1986, Vienna, Moscow, 1986.
29. Starzyk, P. Letter to Wall Street Journal; 1987.
30. Steinbrook, R. 20,000 - 40,000 Chernobyl deaths disputed, Los Angeles Times, Feb. 27, 1988.
31. USCEA (US Council for Energy Awareness). Brancker, A. Quoted in: US Council for Energy Awareness 231. Washington, DC: USCEA; 1988.
32. Gould, J. M. (1993) Chernobyl -- The Hidden Tragedy, Nation, March 15, p. 331-334; 1993.
33. Wilson R.; Shlyakhter, A.I. Nuclear Fallout. Nation, May 31, 722:752; 1993.
34. D'Agostino, R.; Wilson R. Asbestos: the hazard, the risk and public policy. Foster K.R., Bernstein, D. E.; Huber P.W. eds. In Phantom Risk - Scientific Inference and the Law, Cambridge, MA: MIT Press; 1993.
35. Crump, K. S., Hoel, D. G., Lampley, C. H., Peto, R. Fundamental carcinogenic processes and their implications for low dose risk assessment. Cancer Res., 36: 2973-2979; 1976.
36. Lave, L.; Seskin, E. Health and Air Pollution. Swed. J. Econ. 73: 76-95; 1971.
37. Wilson, R., Colome, S.D., Spengler, J.D., Wilson, D.G. Health effects of fossil fuel burning: assessment and mitigation. Boston, MA: Ballinger Publishing Co.; 1981.
38. Evans, J., Ozkaynak, H., Wilson, R. The use of models in public health risk analysis. J. Energy Env. 1: 1-20; 1982.
39. Crawford M., Evans, J.; Wilson, R. Low dose linearity: the rule or the exception? Abstract, Society of Risk Analysis, Annual Conference; also submitted to Human and Ecological Risk Assessment (Sept. 1994).
40. Cook, R. R. Responses in Humans to Low Level Exposures. Talk presented at the BELLE Second Annual Conference on New Perspectives on Dose-Response relationships and low level exposures, Arlington, VA, April 26 - 27; 1993.
41. Schneider, S. H. Global warming. New York, NY: Random House New York, p. 147; 1989.
42. Oerlemans, J. A. A projection of future sea-level, Climatic Change 15: 151-174; 1989.
43. Shlyakhter, A. I.; Kammen, D. M. Sea-level rise or fall? Nature 357: 25; 1992.
44. Warrick, R. A.; Oerlemans J. Sea Level Rise. In: Scientific Assessment of Climate Change, Report prepared for Intergovernmental Panel on Climate Change (Working Group I), (WMO and UNEP, Geneva, Switzerland), 261-285; 1990.
45. Manabe, S.; Stouffer, R. J. Century-scale effects of increased CO2 on the ocean-atmosphere system. Nature 362: 215-217; 1993.
47. Sagan, C. Kuwaiti fires and nuclear winter. Science 254: 1434; 1991.
48. SCOPE. Environmental consequences of nuclear war: SCOPE 28. New York, NY: John Wiley and Sons; 1986.
49. Turco, R. P.; Toon, O. B.; Ackerman, T. P.; Pollack, J. B.; Sagan, C. Nuclear winter -- physics and physical mechanisms. Ann. Rev. Earth Plan. Sci., 19: 383-422; 1991.
50. Hobbs, P. V.; Radke, L. F. Airborne studies of the smoke from the Kuwait oil fires. Science 246: 987-991; 1992.
51. WMO (World Meteorological Organization). Report on the effects of the Kuwait oil fires. Geneva, Switz., May, 1992
52. Wilson, R. Comparison of large disasters. Talk presented at the Energy and Ecology Conference, Washington, DC, June, 1992a.
53. Goldstein, D., Demak, M.; Wartenberg, D. Risk to groundlings of death due to airplane accidents: a risk communication tool. Risk Analysis 12: 339-341; 1992.
54. Alvarez, L. W., Alvarez, W., Asaro, F., Michel, H. V. Extraterrestrial cause for the Cretaceous-Tertiary extinction. Science 208: 1095-1108; 1980.
55. Purcell, E. M. Private communication; 1990.
56. Chapman, C. R.; Morrison, D. Impacts on the Earth by asteroids and comets: assessing the hazard. Nature 367: 33-39, 1994.
57. Morrison, D. and the Spaceguard Workshop. The Spaceguard Survey: Report of the NASA International Near-Earth-Object Detection Workshop (Jet Propulsion Laboratory/NASA Solar System Exploration Division Report), January 25, 1992.
58. Travis, C. et al., Cancer risk management: an overview of regulatory decision making. Env. Sci. Tech. 21: 415; 1987.
59. Morrison, D. Personal communication. NASA Space Science Division Reference SS:245-1, January 7, 1994.
60. Mitchell, R. T., et al., Galileo: Earth avoidance study. D5580 Rev. A. (Jet Propulsion Laboratory; California Institute of Technology, Pasadena, CA). 1988.
61. Taubes, G. Bad science: the short life and weird times of cold fusion. New York: Random House, p. 343; 1993.
62. Newsweek; Cold Fusion. April 14, 1989.
63. Swinbanks, D. Cold fusion produces heat but not papers. Nature 359: 765; 1992.
64. Williams, R. Cold fusion misconception. Chemical & Engineering News 71: 4; 1993.
62. Normile, D. Japanese add fuel to cold fusion debate. R & D Magazine 35: 29; 1993.
63. Hadfield, P. Lukewarm reception for Japanese cold fusion. New Scientist 136: 10; 1992.
64. Fleischmann, M., Pons, S., Preparata, G. Possible theories of cold-fusion. Il Nuovo Cimento A, Nuclei, Particles and Fields 107: 143-156; 1994.
65. Storms, E. Warming up to cold fusion. Technology Review, May/June, 19-29, 1994.
66. Science. Cold fusion reproduced -- on paper. Science 264: 71; 1994.
67. Starr, C. Social benefit v. technological risk. Science 165: 1231, 1969.
68. Wilson, R. Examples in risk-benefit analysis. Chem. Tech. 5: 604-607; 1975.
69. Slovic, P., Fischhoff, B., Lichtenstein, S. Perceived risk: psychological factors and social implications. Proc. Royal Soc., London A 376: 17-34; 1981.
70. Morgan, M. G.; Fischhoff, B.; Bostrom, A.; Lave, L.; Atman, C. J. Communicating risk to the public. Environ. Sci. Technol. 26: 2048-2056, 1992.
71. von Hippel, F. Citizen scientist. New York: Simon & Schuster; 1991.
72. Thurber, J. T. Thurber carnival. New York, NY: Simon & Schuster; p. 185; 1982.
73. Kirschvink, J. L. Constraints on the biological effects of extremely low frequency electromagnetic fields. Phys. Rev. A., 46: 2178-2184; 1992.
74. Weitheimer, N.; Leeper, E. Electrical wiring configurations and child cancer. Amer. J. Epidemiol. 109: 273-84; 1979.
75. Florig, H. K. Containing the costs of the EMF problem. Science 257: 468; 1992.
76. Ahlbom, E.; Feychting, M.; Koskenvuo, M.; Olsen, J.H.; Pukkala, E.; Schulgen, G.; Verkasalo, P. Electromagnetic fields and childhood cancer (letter). Lancet 342: 1295; 1993.
77. Thériault, G.; Goldberg, M.; Miller, A.B.; Armstrong, B.; Guénel, P.; Deadman, J.; Imbernon, E.; To, T.; Chevalier, A.; Cyr, D.; Wall C. Cancer risks associated with occupational exposure to magnetic fields among electric utility workers in Ontario and Quebec, Canada, and France: 1970-1989. Amer. J. Epidemiol. 139: 550; 1994.
78. Floderus, B.; Persson, T.; Stenlund, C.; Wennberg, A.; Öst, Å.; Knave, B. Occupational exposure to electromagnetic fields in relation to leukemia and brain tumors: a case-control study in Sweden. Cancer Causes and Control 4: 465; 1993.
79. Adair, R. K. Constraints on the biological effects of weak extremely-low frequency electromagnetic fields. Phys. Rev. A., 43: 1039-1048; 1991.
80. Feinstein, A. R. Scientific standards in epidemiologic studies of the menace of daily life. Science 242: 1257-1263; 1988.
81. CIRRPC; Health effects of low-frequency electric and magnetic fields. Prepared by Oak Ridge Associated Universities Panel for the Committee on Interagency Radiation Research and Policy Coordination, ORAU 92/F8. June, 1992.
82. Savitz, D. A. Health effects of low-frequency electric and magnetic fields. Env. Sci. Technol. 27: 52-54, 1993.
83. Morgan, M. G. Prudent avoidance, more study of EMF's. Issues Sci. Tech. 6: 18; 1990.
84. Brown, D. Risk analysis in environmental health. Lecture at the course on risk assessment in Harvard University School of Public Health, September, 1992.
84. Brodeur, P. The cancer at Slater school. The New Yorker, p. 86-119; December 7, 1992.
86. Foster, K. R. The Health effects of low-level electromagnetic fields: phantom or not-so-phantom risk? Health Physics 62: 429-436, 1992.
87. Langmuir, I. The collected works of Irving Langmuir. Suits, C. G.; Way, H. E. eds., New York, NY: Pergamon Press; 1960.
88. Kammen, D. M.; Shlyakhter, A. I.; Wilson, R. What is the risk of the impossible? Center for Domestic and Comparative Policy Studies Report, 93-6, Woodrow Wilson School, Princeton University; 1993.
1. To whom correspondence should be addressed at: 444 Robertson Hall, Woodrow Wilson School of Public and International Affairs, Princeton University, Princeton NJ 08544 USA.
2. This perception may have temporarily changed in the summer of 1994, when newspapers described a comet impacting the planet Jupiter.