The identification of atomic power with nature was a constant motif in promotional literature. Brochures featured illustrations of plants in pastoral settings, surrounded by butterflies, trees, and birds.38 Power plants were designed to blend into park-like landscapes, with power lines buried underground, in marked contrast to the ugly slag heaps and smokestacks associated with the industrial age; their aesthetics were meant to convey harmony with their environmental surroundings. According to Glenn Seaborg, nuclear plants were “as close to an extension of nature as any human enterprise.”39 Aligning atomic power with a benign view of nature was premised on the notion that while the atomic age may have been historically new, atomic power itself was ageless, as was the radiation it emitted. “Radiation is not new to the world; it is not new to Met-Ed,” one company brochure explained. “Radiation is part of our natural environment. We have always lived in its presence. Natural radiation … comes from cosmic rays reaching earth from outer space and naturally radioactive substances present in commonplace materials in our bodies.”40 This emphasis on the ubiquity of radiation was meant to efface its toxicity. One Met-Ed brochure even urged readers to take a personal radiation inventory questionnaire to estimate how much radiation they harbored internally. Throughout its safety reassurances, the industry oscillated, sometimes emphasizing that radiation could be safely contained inside the walls of power plants, while at other times insisting on its ubiquity. “There is not a nook or cranny of the ocean, the earth, or space … that is free of radiation,” one industry brochure proclaimed. “There is not a living thing upon our earth that has not been subjected to radiation throughout its existence … there is not a single ancestor of any of us that has not been subjected to radiation throughout his or her lifetime.”41 As one proposed Met-Ed radio spot reassured listeners: “You’re exposed to more radiation on one sunny afternoon at the beach than by living next door to a nuclear power plant for a whole year.”42 Turn-of-the-century scientists had looked upon radium as possessing magical properties, but by the mid-1960s, industry promoters were taking pains to convince Americans that radiation was everywhere and ordinary.
FIGURE 1.2. “Radioactivity. It’s Been in the Family for Generations.” Advertisement for Investor Owned Electric Light and Power Company. Reprinted from Life magazine, October 13, 1972.
A CRIME AGAINST THE FUTURE: THE RADIATION SCARES OF THE 1950s AND 1960s
Industry efforts to construct radiation as ordinary were countered by a growing public fear of atomic testing. Testing was endemic to the Cold War era. Between 1945 and 1976, the military tested 588 nuclear and thermonuclear weapons, nearly a third of them above ground.43 In the years between 1946 and the signing of the Limited Test Ban Treaty in August 1963, the AEC oversaw multiple test series, including Operation Crossroads (July 1946), Operation Greenhouse (May 1951), the Ivy Mike nuclear test (November 1952), Castle Bravo (March 1954), Operation Teapot (1955), Operation Redwing (May 1956), Operation Plumbbob (May 1957), Operation Hardtack I and II (April–October 1958), Operation Argus (August 1958), and Operation Dominic (May 1962). Anthropologist Joseph Masco writes of this period: “Nuclear devices were exploded on towers, dropped from planes, suspended from balloons, floated on barges, placed in craters, buried in shafts and tunnels, launched from submarines, shot from cannons, and loaded into increasingly powerful missiles.”44 These tests were conducted in either the Pacific Proving Grounds (an umbrella term that referred to sites in the Marshall Islands and other parts of the Pacific) or the Nevada Test Site (NTS), a 1,350-square-mile range located ninety miles north of Las Vegas.45 Nuclear testing thus established a connection between the water-bound atolls of the Pacific Ocean and the arid desert of the US Southwest. Both sites were remote and relatively depopulated. But they were not empty, and residents of the Pacific atolls and the US Southwest bore a share of atomic danger disproportionate to their populations.
One problem was that fallout from testing did not follow a predictable path. Between 1951 and 1962, fallout drifted from test sites over one hundred times. In 1951, scientists detected radioactive fallout in snowfall as far away as Rochester, New York. The same year, the AEC began receiving letters reporting that fallout was disrupting weather patterns around the world.46 In March 1953, stockmen in Utah blamed nuclear testing in neighboring Nevada for the deaths of over one thousand ewes and lambs. The following year, the Castle Bravo explosion spread radioactive ash over seven thousand square miles of the Pacific, exposing over 250 Marshall Islanders to radiation poisoning. A Japanese fishing boat was in the plume’s path, and twenty-three fishermen suffered radiation illness. One of them, Aikichi Kuboyama, died. In 1954 and 1955, radioactive rain fell in Troy, New York, and Chicago, two cities located well over a thousand miles away from the NTS. As oceanographers and earth scientists began monitoring these radiation releases, they confronted an epistemological dilemma. Because they initiated surveillance of radiation levels after weapons testing had commenced, they had no way of knowing what constituted a baseline radiation level. Scientists thus found themselves tracking a planetary radiological experiment that was already well underway by the time their observations began.47
Strontium-90, an isotope that mimics calcium and can lodge in the bones, posed a special problem. In 1957, traces of Strontium-90 were detected in wheat and milk, suggesting that fallout had entered the food chain through cows grazing on exposed pasture. A government study published in June 1959 found that in some parts of the country, the Strontium-90 content in milk approached the proposed maximum permissible dose.48 This finding was troubling not only because exposure to Strontium-90 increased the risk of bone cancer, but because the isotope had a half-life of twenty-eight years (a half-life is the time it takes for half of a given quantity of an isotope to decay), thus posing a long-term danger. Throughout the late 1950s, Congress received letters from thousands of citizens worried about the milk supply, scientists warned that radiation could cause leukemia and blood disorders, citizens’ groups conducted local studies on radiation exposure, and the Saturday Evening Post named radioactive fallout “the silent killer.”49 At precisely the moment that promoters of civilian atomic power were honing their claims about its ordinariness, radiation was being transformed into a symbol of slow death.
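The arithmetic behind that long-term danger is simple exponential decay; the worked figure below is an editorial illustration rather than a calculation drawn from the sources cited here. With a half-life $T_{1/2}$ of twenty-eight years, the fraction of Strontium-90 remaining after $t$ years is

\[
\frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/T_{1/2}},
\qquad\text{so}\qquad
\frac{N(100)}{N_0} = \left(\frac{1}{2}\right)^{100/28} \approx 0.08.
\]

Roughly 8 percent of the isotope deposited by a late-1950s test would therefore still be radioactive a century later, which is the sense in which contaminated milk represented not a passing scare but a lingering hazard.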
A moniker like “the silent killer” offers a clue to why this transformation occurred: radiation defied all modes of sensory perception. Sociologist Kai Erikson cites radiation as one example of “a new species of trouble”—forms of toxicity and contamination symptomatic of late industrial modernity that inflict harm surreptitiously. “They penetrate human tissue indirectly,” Erikson writes, “rather than wound the surfaces by assaults of a more straightforward kind.”50 Invisible and silent, radiation could penetrate the body, lie dormant for decades, and then violently return to the surface, producing premature death. As we will see, radiation’s invisibility in particular would pose a thorny phenomenological challenge at Three Mile Island: In the absence of visual cues, how could the boundary between safety and danger be determined? The invisibility of radiation placed a heavy burden on technological fixes, such as Geiger counters, that might fill the void.
The scare was also amplified by a lack of scientific consensus about whether there was such a thing as a safe threshold below which radiation did no harm.51 By mid-century, there was no doubt that high levels of radiation exposure could produce toxicity, illness, and death, but the effects of low-level radiation remained elusive. As the National Academy of Sciences explained in 1960, “many aspects of the [radiation] problem are too little understood to permit more than tentative conclusions,” including whether there was “a radiation threshold.”52 Some scientists contended that below a certain level, the effects of radiation were so negligible as to pose virtually no health risk, meaning that there was what they called a permissible dose of radiation. But others advanced what was called the linear no-threshold dose hypothesis, according to which there was a directly proportional relation between radiation and risk all the way down to the lowest levels. The public thus encountered confusing and contradictory reports. For its part, the AEC consistently reassured residents living near the NTS that the amount of radiation emitted from fallout was slight and did not add significantly to what it called “normal background radiation,” that is, radiation emitted from earth, rocks, and the sun. When confronted with the findings of geneticists who warned that any radiation exposure could do harm, the AEC minimized the risk by framing it in the broadest possible statistical terms. At one point, AEC chairman Lewis Strauss explained that radiation exposure from testing “would be only a small fraction of the exposure that individuals receive from natural sources and medical x-rays during their lives.”53 “It’s not dangerous,” echoed anchorman Walter Cronkite in reference to a radioactive dust cloud during a televised Nevada test explosion in 1953.54 But several studies conducted in the mid-1950s by scientific bodies like the National Academy of Sciences and the UN Scientific Committee on the Effects of Atomic Radiation broke with the commission, warning that even low-level radiation from fallout could be dangerous.55 Thus throughout the 1950s, reassurances from the AEC often appeared alongside more somber assessments. Atomic physicists who felt confident that radiation levels could be controlled often clashed with biologists and geneticists who were more alarmed by the potential danger. And international bodies like the International Commission on Radiological Protection tended to be more conservative than their US counterparts when making recommendations about what constituted an acceptable maximum dosage of radiation.56 Confusion surrounding low-level radiation illuminated both the considerable cultural and social authority of postwar science and the limits of that same authority as radioactive fallout filtered into the environment.
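Stated schematically (the formalization below is an editorial gloss, not notation used in the period’s reports), the disagreement concerned the shape of the dose-response curve. The threshold, or permissible dose, position held that excess risk $R$ vanished below some dose $D_0$, while the linear no-threshold hypothesis held that risk stayed proportional to dose all the way down to zero:

\[
R_{\text{threshold}}(D) =
\begin{cases}
0 & D \le D_0,\\
k\,(D - D_0) & D > D_0,
\end{cases}
\qquad
R_{\text{linear}}(D) = k\,D \quad \text{for all } D \ge 0.
\]

Under the first model, fallout that added only marginally to background radiation carried essentially no risk; under the second, even a tiny added dose implied some added harm once multiplied across millions of exposed people. The same measurements could therefore support either reassurance or alarm, depending on which curve one assumed.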
This debate about radiation thresholds registered a historically new way of thinking about the human body and its relationship to the environment. This encompassed a proto-ecological awareness of the body’s permeability by its outside, and it prompted the question of whether there was such a thing as a permissible toxic load that this body could bear. That question had its origins in the theory of homeostasis, first popularized in 1932 by physiologist Walter Cannon, which posited the idea of the body as a self-regulating system.57 As historian Linda Nash argues, industrial toxicologists drew on the model of homeostasis to develop the concept of “biologic thresholds—that is, the assertion that there is always a level of exposure below which the body can absorb and adjust to pollutants without sustaining permanent harm.”58 The permissible dose theory of radiation reflected an influential method of risk assessment according to which the threat of disease or injury from chemical toxicity was correlated to the amount of exposure. This toxicological principle was captured in the formulation “the dose makes the poison.”59 Emerging out of the debate in the 1950s was what scholars have called the ecological body.60 This body was defined not simply by its more porous relationship to its environment, but also by the presumption that some kind of chemical and toxic load was both inevitable and sustainable. Haunting the ecological body, though, was a series of tricky, elusive questions about dosage: At what point might toxic exposures push the homeostatic system beyond its tipping point? When might the dose of any toxin become a poison? And when might somatic resilience and adaptation give way to vulnerability, decomposition, and death?
The realization that radioactive isotopes from fallout could infiltrate bodily organs and tissue—not unlike the way they seeped into land and oceans—established an intimate association between somatic and planetary risk.61 The mounting concern in the late 1950s over the presence of Strontium-90 in milk provides the most salient example. Milk is a unique food in two senses. First, it is what is called an indicator commodity, in that it is one of the first places where radioactive isotopes are detected (contaminated milk is often a tip-off that other staples have been tainted as well).62 Second, milk is a deeply symbolic commodity because of its primal associations with motherhood, nursing, and infancy. Fallout could travel a circuitous but recognizable path through the food chain, from an ostensibly remote test site to a farm pasture where cattle grazed, then from the body of a cow into a bottle of milk, then from a bottle of milk onto a kitchen table, and then from a kitchen table into children’s bodies. In 1960, the Committee for Nuclear Information collected baby teeth—the final stop on the chain—in order to gauge children’s exposure to Strontium-90.63 In a St. Louis living room, women volunteers gathered around card tables, meticulously sorting through the tens of thousands of baby teeth donated to the committee.64 The presence of a mobile, radioactive isotope in milk and baby teeth suggested that despite the trappings of domestic tranquility, the atomic age contained destructive elements that could not be purged from either the homes or the bodies of American civilians. A sanctified, feminized domestic realm was at the heart of Cold War ideology, and the radiation threat undermined that ideology by suggesting that weapons testing might constitute a graver threat to Americans than any Communist enemy.
The story of Strontium-90 is also significant because of the figure at its center: the child.65 While postwar scientists argued about a permissible dose, all agreed that babies and young children were especially vulnerable to radiological injury. This consensus suggested that the amount of the dose alone was insufficient for assessing risk; the timing of radiological exposure also mattered. Still-growing bones, organs, and tissue were more susceptible to the absorption of radioactive isotopes like Strontium-90, and children’s life spans meant that they simply had more time than adults to process radiation’s cumulative effects. One of the first suspected civilian casualties of nuclear fallout in the United States was a young boy named Martin Laird, who had been three years old when testing began seventy miles away from his home in Carson City, Nevada. He died of leukemia four years later. His mother was convinced that fallout from testing had caused his death. “We are forgotten guinea pigs,” Martha Laird would later charge at a congressional hearing in Las Vegas. “We were feeding our children and families poisons from those bombs.” Nevada Republican senator George Malone accused her of peddling “Communist-inspired scare stories.”66 But a number of studies conducted in the late 1950s and early 1960s found that children living near the NTS had been exposed to Iodine-131, another radioactive isotope that can lodge in the thyroid gland and cause thyroid cancer.67 These findings filtered into magazines ranging from the New Republic to the more middlebrow McCall’s Magazine, where articles appeared with titles like “Our Irradiated Children” and “Radioactivity Is Poisoning Your Children.” In 1958, a critic of the AEC appeared on Edward Murrow’s nationally syndicated television show and recounted the story of Martin Laird’s leukemia.68 In 1961, a Public Health Service official in Albany, New York, received a worried phone call from a mother who had heard in a news report that there had been a tenfold increase in airborne radioactivity near her home. Her first question for the official was whether it was “safe to send her children to school.”69 By the late 1950s, the specter of white, middle-class children endangered by an invisible threat hung over atomic weapons testing. This undermined the culture of dissociation by implicitly evoking the terror of Hiroshima and Nagasaki. The utter decimation of those cities meant that there were very few photographs of fractured, gutted buildings—the images often associated with European carpet-bombed cities like London and Dresden. Instead, as historian John Dower has observed, images of mothers and children served as visual proxies for Japanese cities where there were no buildings left to photograph. Indeed, “the victimized Japanese mother and child became perhaps the most familiar symbol of the horrors of nuclear war.”70 The smiling children featured in an advertisement from the Committee for a Sane Nuclear Policy (SANE) appeared thousands of miles away—both spatially and emotionally—from the Japanese mother and child at Hiroshima in 1945. But the potential for radiological injury merged the biological fates of children otherwise divided by geography, race, class, and circumstance.
FIGURE 1.3. “Your Child’s Teeth Contain Strontium 90,” SANE advertisement, 1963. Copyright held by SANE, Inc. Courtesy of Swarthmore College Peace Collection.
Further fueling such fears were the somatic and genetic injuries associated with radiation exposure. The most serious threat was cancer. Radioactive isotopes like Strontium-90 and Iodine-131 could cause bone and thyroid cancers, and by the 1950s, researchers were observing elevated rates of leukemia among Hiroshima and Nagasaki survivors, as well as among children who had been exposed to X-rays during infancy. The cancer threat was thus a constitutive feature of the radiation scare, and the radiation scare, in turn, arose amid a constellation of mid-century fears surrounding cancer. As oncologist and writer Siddhartha Mukherjee has argued, cancer occupied a unique place in the postwar American cultural and social imaginary.71 At a time when new wonder drugs like penicillin were saving lives, cancer—its causes, treatments, and cures—remained a stubbornly enigmatic puzzle. As breakthroughs in immunology and bacteriology lengthened the lives of many Americans and as infant and child mortality rates fell, the persistence of cancer and its maddening refusal to be definitively cured once and for all illuminated the limits of modern medicine. At the same time, researchers of the 1950s were discovering that exposure to certain industrial pollutants, namely carcinogens, posed serious cancer risks. During the postwar years, in other words, cancer came to play an anomalous and even defiant role in modern epidemiology. It eluded any single cure or treatment and was a hallmark of (rather than something eradicated by) industrial modernity. The specter of a young child struck down by radiation-induced leukemia amplified the singular horror of cancer, while cancer research brought into relief the scientific debate about the differences between low and high levels of radiation exposure. After all, in the field of oncology, radiation was a double-edged sword: the same radiation that could cause cancer could also, in controlled and targeted doses, help treat it. Radiation was both carcinogen and weapon in the growing arsenal of cancer treatment, a paradox that captured its dual identification with death and rebirth, sickness and health.