Reflections on the thermonuclear ignition at Lawrence Livermore Laboratory

On December 13, 2022, Secretary of Energy Jennifer Granholm announced a historic scientific achievement; President Biden would call it a “BFD,” she said. Eight days before, the National Ignition Facility (NIF) at the Department’s Lawrence Livermore National Laboratory had aimed its 192 lasers at a single tiny capsule of deuterium and tritium and achieved “ignition”; that meant the nuclear fusion reactions had released more energy than was in the laser light that “ignited” the pellet. Net positive energy from fusion reactions had never before been accomplished in a laboratory. The explosion at Livermore on December 5, 2022 released 3.15 megajoules, equivalent to about three sticks of dynamite.

Several details in this blog are from the December 13, 2022 Livermore technical panel video.

Thermonuclear fusion, which powers the sun and stars, was first accomplished on Earth on November 1, 1952, when the United States exploded a 10.5-megaton thermonuclear blast, code-named “Mike”; it was about 14 billion times greater than the laboratory explosion at Livermore. The Mike explosion evaporated the island of Elugelab (part of Enewetak Atoll in the Marshall Islands) where the test was done. The 2022 Livermore experiment shattered the pencil-eraser-sized gold cavity called a “hohlraum”; inside it was a little diamond shell containing the deuterium-tritium (D-T) fuel. Science magazine noted that the “diamond capsule turned out to be the key.”

The laser fusion dream was born at Livermore 60 years ago. The 1954 multi-megaton BRAVO test at Bikini Atoll had made nuclear weapons testing infamous, raining intense radioactive fallout on Rongelap Atoll (among others) and on a Japanese fishing boat, the Daigo Fukuryū Maru – the Lucky Dragon No. 5. Global calls for an end to nuclear testing, and even nuclear disarmament, became loud and insistent. A U.S.-Soviet nuclear test moratorium followed in the late 1950s. Laser fusion raised the possibility of smaller, “clean” bombs without radioactive fallout. The radioactivity in fallout is almost entirely due to the fission products that result from the splitting of uranium and plutonium nuclei in atomic and thermonuclear weapons.

John Nuckolls, a former director of the Livermore Laboratory, described the laser effort as being located in the “hectoton group [a hundred tons of TNT equivalent] instead of the megaton group.” “I heard,” he reminisced in 2022, on the 70th anniversary of the founding of the laboratory in September 1952 (minutes 38 and 39 of the recording), that the object was to make “a fusion explosion without a fission explosion; pure fusion they called it.”

Though there would be some induced radioactivity in soil and building materials and carbon-14 in the air, smaller pure fusion bombs would be much more usable; indeed, it was difficult even to imagine militarily sensible targets for multi-megaton bombs. Pure fusion bombs would also make it possible to occupy the bombed territory without the kinds of radiation exposure and controversy about cancer that came to dog the nuclear weapons program, including among the U.S. troops who occupied Hiroshima and Nagasaki in the wake of the atomic destruction of those cities. The problem of marching troops into ground zero would be much simplified.

A new twist was added to Livermore’s laser fusion program in the 1990s, when nuclear testing was stopped. The National Ignition Facility would allow the study of the physics of thermonuclear explosions, without having to explode bombs in the Nevada desert, to keep the US nuclear arsenal “safe, secure, and effective,” as Livermore’s Director Kim Budil put it, celebrating the achievement on December 13, 2022 (minute 21 of the event video). It was part of an elaborate and costly “Stockpile Stewardship Program.” Scientists on the technical panel that day also mentioned weapons modernization as a goal. To their credit, the primacy of the weapons functions was clearly noted.

As the years wore on and the National Ignition Facility’s complexity delayed the achievement of ignition, there was more talk of the device being useful as an energy source and as an instrument for the study of the stars. The funds kept flowing.

Despite the achievement of ignition, the Livermore approach, known technically as “indirect drive” inertial confinement fusion because of the use of the hohlraum, has little chance of becoming a viable, economical power source. It needs to go from one explosion a day to perhaps ten per second. Even with much better lasers, energy output per unit of energy input must increase by 500 times or more to account for the energy to make the laser light and have significant energy left over to power something else, like a light bulb. These two factors alone require a combined performance improvement by hundreds of millions of times.
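
A rough back-of-the-envelope calculation, laid out as a short Python sketch, shows how these two requirements multiply; the repetition-rate and gain figures are the ones cited above, and the rest is arithmetic:

```python
# Rough scale-up arithmetic for indirect-drive laser fusion as a power source,
# using the figures cited above (illustrative only).

seconds_per_day = 24 * 3600              # 86,400 seconds
rep_rate_factor = 10 * seconds_per_day   # from ~1 shot per day to ~10 shots per second
gain_factor = 500                        # needed increase in energy out per unit of energy in

combined = rep_rate_factor * gain_factor
print(f"Repetition-rate factor: {rep_rate_factor:,}")   # 864,000
print(f"Combined improvement:   {combined:,}")          # 432,000,000 -- hundreds of millions
```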

The huge performance increase must be accomplished with less-than-ideal materials from a physics point of view. Gold hohlraums must be replaced by lead ones (the most likely cheap material), each perfectly positioned to yield a blast; diamond shells, which took months of labor to make, will not be affordable.

Then there are housekeeping practicalities. Each explosion will shatter the lead hohlraum, creating debris, shrapnel, and lead dust. The lasers must withstand the heat of hundreds of millions of shots per year (there are 31.5 million seconds in a year) without being damaged by shrapnel. The lead debris and dust must be cleaned up safely, recovered, and (one hopes) recycled. All that without even speaking of how the energy of the explosions – mostly in neutrons – will be captured and converted to electricity.

Finally, there is the tritium problem. The Livermore shot used only about four percent of the D-T fuel. Even with improvement, there will be plenty of unused tritium, some of which will lodge in the plentiful metal surfaces and form metal hydrides – since, chemically, tritium is simply hydrogen. Most of the unused tritium must be recovered; the consumed and lost tritium must be entirely produced economically inside the device itself (from the more difficult D-D reactions or lithium-6 + neutron reactions). On the market, tritium costs around $30,000 per gram or more, hundreds of times more than gold per unit weight. The tritium cost element alone could render any D-T fusion approach to electricity production uneconomical if a significant fraction of the tritium had to be procured commercially.

While its prospects as an energy source are dim, the National Ignition Facility could play a significant part in reviving the old dream of pure fusion nuclear weapons. Being the size of three football fields, it is far too large to be a weapon; but its physics and engineering insights can play a part in the creation of pure fusion weapons.

Smaller fusion devices, like “magnetized target fusion,” which is being studied at Los Alamos, could play a role. The concept was invented in 1951 by the famed Soviet scientist and human rights activist Andrei Sakharov, who led the development of the Soviet thermonuclear bomb; it was brought to Los Alamos during the post-Cold War heyday of US-Russian collaboration in the 1990s. Research on it continues at Los Alamos, though US-Russian collaboration has fallen by the wayside.

IEER’s 1998 report, Dangerous Thermonuclear Quest (by Hisham Zerriffi and myself), analyzed the various fusion programs in the “Stockpile Stewardship Program”; we concluded that one result of the combination of programs could be the development of pure fusion weapons. The whole idea of “inertial confinement fusion,” of which laser fusion is one variant, was that the plutonium trigger that ignites fusion in thermonuclear bombs would be replaced by a non-nuclear energy source. Until December 5, 2022 that was only theory; for decades it seemed a distant dream. No longer. The reality is here.

The possible weapons angle also makes the question of whether the explosions at NIF comply with the Comprehensive Test Ban Treaty more immediate. The treaty bans all nuclear explosions, with no exceptions. In response to an October 28, 1999 letter from then-Senator Tom Harkin, the Department of Energy flatly stated that “NIF experiments are not nuclear explosions”; but the experiments then were far from achieving ignition. The Department avoided answering whether planned experiments with energy release of ten pounds of TNT equivalent would be explosions. (The correspondence is available in Rule of Power or Rule of Law?, Appendix B, pp. 161-168). Would three sticks of dynamite (about 1.6 pounds of TNT equivalent) be an explosion? What about the much larger ones releasing hundreds of megajoules (hundreds of sticks of dynamite) of energy?

John Nuckolls, who once led the Livermore laser fusion program and later led the whole laboratory, conceptualized fusion ignition without plutonium even before lasers were invented. He recounted that in 1957 Edward Teller, who led the founding of the Livermore lab and the development of thermonuclear weapons, had proposed using megaton-size underground nuclear explosions in a huge cavity to generate electricity. That idea inspired Nuckolls to examine the prospect of fission-free inertial confinement fusion; he had no hesitation in calling the phenomena, including those from laser-driven fusion, “explosions” in his 1998 recapitulation of the early history of the inertial confinement fusion program.

The matter of Comprehensive Test Ban Treaty compliance remains unresolved. It may be moot from a U.S. domestic point of view since the U.S. Senate rejected ratification of the Treaty in 1999. But the Treaty still matters a great deal; indeed, it is central to the global non-proliferation regime.

The insights from ignition at NIF may well help other fusion energy programs, a matter I will discuss another day. For now, the celebration of the immense scientific achievement must surely be tempered by reflection on the serious nuclear weapons and proliferation questions that are now more real and pressing than they were before December 5, 2022.

Will clean energy impoverish the poor or help create a path to energy justice? (Part I)

Millions of low-income families already face crushingly high burdens of energy costs. High energy cost burdens as a percentage of income are a principal cause of financial distress, creating conflicts between paying utility bills or rent and buying medicines and food. The consequences include ill health, insecurity, and, in many cases, evictions and mortgage foreclosures. Six percent of income is considered the maximum affordable amount for utility bills (1); for low-income households, energy burdens are often 10 to 15 percent, and as high as 30 percent, and sometimes more, for the lowest income groups.

What is the magnitude of the gap between actual energy bills and affordable energy even before extreme weather events exacerbated by climate change make it worse?

I summed it all up in a paper on energy burdens I prepared for the Just Solutions Collective here, co-published with the Climate and Clean Energy Equity Fund. The objective was to elevate the need to address energy burden, as well as provide budget numbers for advocacy groups while Congress was considering a $3.5 trillion social infrastructure and energy transition bill in 2021. The plan bit the political dust; but of course, the need to create a clean energy transition that is equitable and economical for all remains.

In round numbers, filling the gap between the actual energy burdens of low- and moderate-income households (defined as those with incomes below 200 percent of the federal poverty level) and an affordable six percent of household income would take about $36 to $40 billion per year. Federal assistance is typically about a tenth of that amount; state assistance programs complement the federal money, leaving an annual residual gap of roughly $30 billion. This is not a large sum in the federal scheme of things; it amounts to about half a percent of the anticipated federal budgets in the early to mid-2020s. Considering that it would greatly alleviate financial stress and suffering among more than 30 million families, it can even be considered small.
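
A minimal arithmetic sketch of these round numbers; the approximate federal budget figure is my assumption, while the other numbers come from the paragraph above:

```python
# Back-of-the-envelope sketch of the affordability gap described above.
# The approximate federal budget is an assumption; other figures are from the text.

gap_total = 38e9                        # midpoint of the $36 to $40 billion annual gap
federal_assistance = 0.1 * gap_total    # federal help is "typically about a tenth of that amount"
residual_gap = 30e9                     # annual gap left after federal and state assistance
federal_budget = 6e12                   # assumed federal budget, early to mid-2020s (~$6 trillion)

print(f"Federal assistance: ${federal_assistance / 1e9:.0f} billion per year")
print(f"Residual gap as a share of the federal budget: {residual_gap / federal_budget:.1%}")  # ~0.5%
```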

About five percent of households that now receive utility bill assistance nevertheless lose their homes each year as a result of the financial stresses of choosing between paying for medicines, food, utility bills, or rent. That is roughly 300,000 families losing their homes every year. Three-fourths move in with family or friends, creating new stresses, crowding, and, as the pandemic has shown, health risks. One-fourth become homeless or wind up in public shelters. The costs and stresses on families that lose their homes are devastating. But there are costs for non-low-income households as well. The cost of added emergency room visits to hospitals alone can amount to roughly $20,000 per homelessness event; expenses on shelters add to these costs (2). Ill health and loss of productivity have also been noted as consequences of homelessness (3).

Low-cost solar energy and wind energy are widely held to promise not only clean energy but also lower-cost energy and millions of jobs. On average. But averages can hide a lot of sins. The task is to ensure that the averages also reflect changes for the better in the experience of low- and moderate-income households; the lives of rural as well as urban people; those who have heating systems that use fossil gas or propane or fuel oil — and those who now sometimes rely on gas stoves and ovens and portable kerosene heaters to keep warm.

The economic opportunities of the transition to clean energy will come via investment in distributed solar energy, storage, smart grids and appliances, weatherization, and efficient electrification of homes and transportation. Without the capital to invest, and dependent on landlords’ decisions about investments in efficient heating or weatherization, families of modest means face the prospect that a bad situation today could get worse because of the transition to clean energy. The California Public Utilities Commission explicitly recognized this prospect in 2021. It said that better-off households that can afford electric vehicles, smart appliances, solar and storage, and other technologies will be able “to shift load and take advantage of potential structural billing benefits that follow….” The report added that this “often results in a cost shift toward the lower-income and otherwise vulnerable customers.” (4)

While it is not often so bluntly said, an inequitable energy transition could wind up making more people homeless. Moreover, without integrating energy equity, the energy transition will be slower, less effective, and likely fall short of the rapid change needed to minimize rapidly mounting climate harm. There are therefore compelling reasons to integrate equity into the energy transition design.

The first step is to make the resources available for energy assistance; the second is to reduce the need for assistance by investments that will reduce energy cost and emissions at the same time, including for the lowest income households.

Currently, only about a fifth of the households eligible for assistance actually get it (5), which means that the vast majority of eligible households either do not apply or are turned down when they do. State data indicate that not applying is the more frequent cause, but refusals are also an important reason for not receiving assistance. In Maryland, which has a typical participation rate, 133,389 households applied for LIHEAP assistance in 2019 out of an eligible population of about 400,000; in other words, two-thirds of those eligible did not apply. Of those who applied, about one-third were denied (6). The obstacles are huge: documentation requirements are cumbersome; lack of access to broadband makes applying even more difficult since the application then has to be done in person (obviously impossible during much of the pandemic). Language is often a barrier, as is adequate information more generally. And the stigma attached to assistance prevents many people from applying. Getting from 20 percent participation to near-100-percent enrollment will itself take effort and investment. A three- or four-year federal grant program totaling about $50 billion could accomplish this – at least, the resources would not be an obstacle.
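
The Maryland participation arithmetic can be laid out as a short sketch; the figures are those given above and in note 6:

```python
# Maryland LIHEAP participation arithmetic, using the figures cited above and in note 6.

eligible = 400_000        # approximate eligible Maryland households, 2019
applied = 133_389         # households that actually applied
denied_fraction = 1 / 3   # roughly one-third of applicants were denied

received = applied * (1 - denied_fraction)
print(f"Share of eligible households that applied:          {applied / eligible:.0%}")   # ~33%
print(f"Share of eligible households that received a grant: {received / eligible:.0%}")  # ~22%
```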

The energy transition provides the opportunity to make investments that reduce energy bills and emissions at the same time. Solar is a good example. Most low-income people are renters, and even those who own their homes often face obstacles such as lack of access to low-cost credit or roofs unsuitable for installation. But suitable policies, like green banks creating loan loss reserves, can open up the field for all low- and moderate-income households to qualify for cheaper electricity from community solar plants.

I’ll cover the technical aspects of integrating all households into the energy transition and making the grid more robust in an era of climate change in a future post. Suffice it to say here that investments that benefit society, reduce emissions, reduce bills and reduce assistance requirements are possible with the right policies and vigorous implementation. The starting point surely has to be to recognize the severity of the energy burden problem today for tens of millions of families. That awareness is needed to merge the political determination for achieving a climate-friendly energy system with the tenacity to achieve social and economic equity. (Prepared for the Just Solutions Collective).

1. Housing affordability, including utility bills, is defined by the federal government as 30 percent of household income or less (HUD Affordability Guide); the six percent figure is a fifth of the affordable expenditure on housing and a common energy affordability metric (Colton 2021; ACEEE 2020). State assistance programs, like the one in New Jersey, also use this as the affordability metric.

2. See IEER’s report (pages 87 to 92) on energy justice in Maryland for a detailed discussion, including costs of added emergency room visits and shelter.

3. Caroline Julia von Wurden, “The Impact of Homelessness on Economic Competitiveness,” American Security Project, May 1, 2018.

4. CPUC 2021, download here, p. 6

5. Custom report generated by Arjun Makhijani for 2015-200, inclusive, from the LIHEAP Data Warehouse; includes recipients of heating and cooling assistance. Heating alone is about 2.5 percent lower.

6. Office of Home Energy Programs, Electric Universal Service Program (EUSP): Proposed Operations Plan for Fiscal Year 2021, May 2020, Table 5 for data on applications and actual grants. Item 556 in the Public Service Commission Docket for Case 8903. LIHEAP Home Energy Databook FY 2017 for state eligible population (386,361), Table B-2, extrapolated to approximately 400,000 for 2019. Download at https://liheappm.acf.hhs.gov/notebooks

The electric grid in a time of climate disasters: communities show the way

Three major electricity grid disasters in just over a year exemplify the havoc that climate extremes are causing and what needs to be done about it: (i) the howling winds of the August 2020 derecho, when hundreds of thousands lost power in Iowa and Illinois; (ii) the February 2021 breakdown of much of the Texas grid in the maelstrom of a winter polar vortex; and (iii) the failure of all eight transmission lines supplying a major city, New Orleans, and a near-total blackout in the midst of the miseries of Hurricane Ida, one of the most intense hurricanes on record to hit Louisiana, on August 29, 2021. Such catastrophes often compound the miseries of those who already suffer disproportionately from economic, social, and racial injustices, much as the Covid-19 pandemic has also done.

In addition to these debacles, there has also been a startlingly different kind of failure: the periodic, deliberate shutdowns of sections of the grid to prevent fires starting at transmission towers in fire-vulnerable areas in California, which is in the midst of a deep, historic drought. The reason? Transmission-tower-related fires have caused death and destruction in the state on more than one occasion. Deliberate black-outs and periodic misery or death and destruction by grid-triggered fire — those are the current electrical choices in large parts of the country’s most populous state.

A large part of the vulnerability arises from the very nature of centralized grid infrastructure. Fossil gas power plants compete for fuel with heating buildings in the grip of extreme cold; nuclear power plants must actually be shut down for safety reasons if there is no grid electricity supplying them, as was the case with the Waterford 3 plant in Louisiana during Hurricane Ida. Large power plants can deliver no electrons when transmission lines are down and distribution lines lie prone in flooded streets, tangled in fallen trees. Homes and businesses that have emergency generators depend on gas stations functioning, but no electricity means no power at the fuel pump.

The Blue Lake Rancheria Tribal Nation in northern California showed the way technically and socially in mid-October 2019 when PG&E, the largest investor-owned utility in California (and also the United States), preemptively shut off power to millions of people to reduce fire risk from its facilities. The Tribal Nation’s microgrid (ironically helped by PG&E), with 1,500 solar panels and battery storage, became a haven for people and, as it turns out, a savior of fish: “As one of the only gas stations in the county with power, the reservation provided diesel to United Indian Health Services to refrigerate their medications and to the Mad River Fish Hatchery to keep their fish alive. The local newspaper used a hotel conference room to put out the next day’s paper. Area residents stopped by to charge their cell phones,” according to Jefferson Public Radio. An estimated ten thousand neighbors of the Nation came for gasoline and other supplies on a single day.

Hurricane Ida holds the same message. Some largely African American and Vietnamese American communities near Lake Pontchartrain had opposed a new fossil gas-fired electricity generating plant proposed by the utility Entergy; they wanted local solar and storage instead. The New Orleans City Council rejected their idea. The communities suffered the pollution from the plant when it operated and the blackout when it failed during Ida. In contrast, a microgrid on the roof of a new mixed housing development (29 affordable and 21 market-rate apartments), conceived after Hurricane Katrina, provided at least some electricity for its residents amidst near-total gloom.

Microgrids are much more than emergency power supplies. They operate efficiently with the grid, exporting and importing power from it in normal times; they also provide resilience by automatically “islanding” — disconnecting from the grid — during outages. During such times they function as small, very local, isolated grids – hence the term. This local electricity system provides power for essential services during emergencies — shelter, fuel, refrigeration for food and medicine….

The supply chain failures during the pandemic and recent massive grid failures have shown that market “efficiency” defined by profit in normal times alone is economically disastrous, even deadly. It does not provide the safety, security, and health that our climate predicament demands. Distributed renewable energy resources, including microgrids with solar and battery storage, and preferably also seasonal thermal storage of heat and cold for emergency shelter spaces, must be in the center of simultaneous mitigation and adaptation: reducing carbon dioxide emissions while enabling communities to be more secure during the weather upheavals that have become more frequent and intense.

Congress and the Biden administration are creating a vast infrastructure program that will include substantial funding for climate and energy. Grid resilience, with distributed solar energy and storage as its primary foundations, must be at the center of the investment strategy; microgrids, including public purpose microgrids, must be at the core. Fortunately, the administration’s Solar Futures study, issued in September 2021, recognizes the potential of both solar energy and microgrids in building resilience.

Lessons from repeated grid failures since Hurricane Sandy devastated much of the northeastern coast of the United States in 2012 show that communities often have more wisdom than large profit-centered utilities trying to hold on to a failing and dangerous business-as-usual model. The infrastructure program should fund and empower communities, especially those that already suffer disproportionate harms and vulnerabilities, to help us all to a safer place in a time of climate disasters.

This blog post was written for the Just Solutions Collective.

From Pearl Harbor to Hiroshima

For decades, there has been an argument in the United States every August 6 and 9, the anniversaries of the atomic bombings of Hiroshima and Nagasaki.

One side says the Japanese militarists were brutal and determined to fight to the end; there is substantial evidence for this. The United States captured Saipan and Tinian in the summer of 1944, about a year before Japan surrendered. The critical importance of this event, sometimes called the Pacific D-Day, is evidenced by the fact that Prime Minister (and General) Hideki Tojo, who led Japan to war with the United States after he was appointed to that office in late 1941, resigned right after the American victory in Saipan. The strategic reality was that Japan was now within range of U.S. B-29 bombers. The war could only conclude with a U.S. victory on U.S. terms. (Tinian was the base from which the B-29 bombers that destroyed Hiroshima and Nagasaki took off.) That a U.S. victory was inevitable was devastatingly demonstrated, months before Hiroshima, by the fire bombing of Tokyo on March 10, 1945, with hundreds of B-29 bombers participating in the air raid. Yet Japan did not surrender in late 1944 or in March 1945. Even after the atomic bombings, there remained Japanese military leaders who wanted to fight on. Each August 6 we also hear that the bombings ended the war, thereby saving half a million (or more) American lives by obviating the need for an invasion of the Japanese homeland, which would have been fiercely defended.

The other, “revisionist” side says that the atomic bombings were unjustified because the war was essentially over. By mid-July 1945, the United States had cable traffic in hand indicating a strong Japanese faction in favor of surrender if their emperor was allowed to remain on the throne (the very emperor who had given the go-ahead for the war and who, in the end, was allowed to remain, but only after the bombs had been used). U.S. generals’ upper estimate of U.S. military deaths in an invasion of Japan was less than 50,000 — a huge number but far smaller than common post-war claims. Who would know better how to estimate casualties than the U.S. military leaders who had been in the thick of the brutal war in the Pacific, where the casualty rate was infamously high? (For reference, total U.S. combat deaths in World War II, in the European and Pacific theaters combined, were about 300,000; of these, Pacific theater deaths were about 100,000 – though of course, the invasion of Japan was avoided, unlike the European theater, where the fighting went on till Germany was occupied and physically vanquished, with Hitler dead. An additional 100,000 or so died of non-combat causes.) General Groves, who directed the Manhattan Project, said that it was conducted on the basis that “Russia was our enemy.” The bombings were really a message to the Soviets about the shape of the post-war world and who would run it. These assertions also are substantially true.

Yet, the truths expressed by both sides taken together are not large enough to account for a complex reality; much that is essential does not enter either picture. Those who say the bombings ended the war almost never acknowledge the major role that the Soviet declaration of war on Japan on August 8, 1945 played in the Japanese surrender. For them, the bombings did it. Yet, U.S. military leaders and President Truman himself believed that a Soviet declaration of war on Japan would be decisive. Japanese deliberations that led up to the surrender, including on the day of the Nagasaki bombing and just after the Soviet declaration of war on Japan, indicate the centrality of the Soviet entry. The Soviets had already occupied much of Eastern Europe; Japan did not want Soviet occupation. If a choice had to be made, the Japanese would rather surrender to the Americans.

Neither argument takes into account that the purpose of the Manhattan Project was to prevent Hitler and the Nazis from getting a monopoly of the bomb; yet Germany was explicitly de-targeted on May 5, 1943, two years before the end of the war in Europe. Too little attention is paid to the fact that the bomb project was accelerated in December 1944 when there was definitive evidence that Germany did not have a viable bomb project. Only one scientist, Joseph Rotblat, left Los Alamos then. A quarter of a century later, Richard Feynman, one of the many brilliant physicists at Los Alamos who stayed till the end, regretted that he did not reflect that the purpose of the project had been accomplished when Germany was defeated. (“I simply didn’t think, okay?” he said.)

Then there is the fact of the timing of the use of the bombs. The Allies, including the Soviets, had agreed in July 1945 that the Soviet Union would declare war on Japan on August 15, 1945. (The Soviets accelerated their entry by a week after the bombing of Hiroshima.) The invasion of Japan was not due to start until November 1, 1945. Moreover, U.S. military leaders knew that the atomic bombings might put the lives of U.S. prisoners of war at risk. So why not wait a few days to see if the Soviet entry would trigger a surrender? But they did not.

The bombings of Hiroshima and Nagasaki have shaped our world ever since and kept us on the edge of utter catastrophe for decades; they need to be understood on a much larger historical canvas, deserving of such a critical event in human history. That canvas should include the US-Japanese competition in the Pacific at the start of the twentieth century (friendly, or at least accommodating, at first) as the U.S. extended its reach beyond its shores and Japan decided to become an imperialist power in the East, conquering Korea and defeating imperial Russia in the 1905 war; the U.S. decision to re-locate the Pacific fleet to Pearl Harbor from San Diego in 1940 despite the misgivings of naval leaders that it would be a target for Japan; the Japanese bombing of Pearl Harbor; the early atomic targeting of Japanese forces and the de-targeting of Germany in 1943 (in an April 1945 briefing paper for the newly installed President Truman, Groves wrote “The target is and was always expected to be Japan“; italics added); and U.S. post-war aims that started taking shape soon after World War II began in Europe. I tried to bring these aspects together in a speech I gave in Santa Fe in 2012. It was entitled From Pearl Harbor to Hiroshima. Despite the end of the Cold War three decades ago, nuclear dangers are rising. This August, when we again remember the bombings that proclaimed the start of the nuclear age to the world, it might be worth an hour of your time.

Getting to a 100% renewable electricity sector

There is not much argument now about whether the United States can get to a 100% carbon-free electricity sector in the next 15 years or so. But many still believe that nuclear energy will be needed for the job as a complement to wind and solar. Their arguments center on the following:

1. It will take too much storage to make up for the variability of solar and wind. Therefore, dispatchable sources are necessary; in particular, nuclear is necessary.
2. Solar and wind take up large areas of land.

It doesn’t take a very elaborate technical analysis to conclude that solar + wind + battery storage is a critical but partial answer to the variability of wind and solar. But the claim that solar and wind therefore need nuclear as a complement derives from a failure to examine the full array of technologies already at our disposal. I’ve spent a few years looking at this issue in depth. The analysis shows that a 100% renewable electricity system based on solar and wind would be economical and also be more reliable and resilient in the face of climate extremes than the electricity system we have today, where centralized nuclear, coal, and fossil gas (aka “natural gas”) plants are the mainstays of supply. It would also be a lot cleaner.

You can find most details in a 2016 300+ page IEER report (Prosperous, Renewable Maryland). I’ll give a sketch in this blog and add some new details. We (IEER staff) did hour-by-hour modeling that included the supply needed to electrify transportation and space and water heating, so that CO2 emissions could be reduced from those sectors as well. Efficiency is an excellent starting point, making the system more economical; we factored that in first. We downloaded hourly solar and wind data (onshore and offshore) for various locations and combined them to get an hourly primary supply picture, along with the small amount of existing hydroelectric power (about 2%) in the state. We obtained utility demand data for Maryland and made hourly estimates of individual major components, including space heating, air-conditioning, water heating, and clothes washing. Our modeling of lighting took seasonal variation into account. Then we added battery storage. Here is what we found:

1. The system worked best when solar and wind generation were roughly balanced on an annual basis – wind supplies some electricity at night, when, by definition, there is no solar. Wind is also more plentiful in the winter, making it a good seasonal complement to solar.
2. Solar, wind and hydro and a small amount of industrial combined heat and power (using renewable hydrogen) met all of the load for 68% of the hours of the year, and varying amounts of load for the rest of the hours. In the lowest supply hours, the fraction of the load that was not met considerably exceeded 50%.
3. Adding just 5.5 hours of battery storage (measured in hours of average hourly load) increased the hours of meeting all the demand from 68% to 96%. Much, much better, but still not good enough. Trying to increase battery capacity to meet the rest of the load makes it quickly apparent that the needed capacity spirals to huge, impractical numbers. (A simplified sketch of this hour-by-hour bookkeeping follows below.)
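
Here is a much simplified sketch of the hour-by-hour bookkeeping referred to in item 3. The supply and demand numbers are placeholders, not the Maryland data, and the real IEER model was far more detailed:

```python
# Minimal sketch of an hour-by-hour supply-demand balance with battery storage.
# The supply and demand series below are hypothetical; the actual analysis used
# measured hourly solar, wind, and utility load data.

def hours_fully_met(supply_mw, demand_mw, battery_mwh, efficiency=0.9):
    """Count hours in which demand is met by generation plus battery discharge."""
    stored = 0.0
    met = 0
    for s, d in zip(supply_mw, demand_mw):
        if s >= d:
            # Surplus hour: charge the battery with whatever fits.
            stored = min(battery_mwh, stored + (s - d) * efficiency)
            met += 1
        else:
            shortfall = d - s
            if stored >= shortfall:
                stored -= shortfall
                met += 1
            else:
                stored = 0.0  # battery covers only part of the shortfall; hour not fully met
    return met

# Hypothetical two-day example (MW, hourly): solar peaks midday, some wind at night.
supply = [6, 6, 7, 7, 5, 4, 8, 12, 15, 17, 18, 18, 17, 15, 12, 9, 6, 5, 5, 6, 6, 7, 7, 6] * 2
demand = [8] * 48
print(hours_fully_met(supply, demand, battery_mwh=44))  # 44 MWh ~ 5.5 hours of 8 MW average load
```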

That is where the “smart grid” comes in. In a smart grid, communications go from one way — from the consumer, who flips a switch “on” or “off,” to the utility, which supplies the electrons instantly — to two ways: supply and demand can “talk” to each other, enabling what is known as “demand response” – the ability to adjust load to supply, if needed. The Federal Energy Regulatory Commission (FERC) has recognized demand response as a resource for the electricity grid (FERC Order 2222).

During hours when the solar-wind-battery combination does not meet the load fully, the smart grid can work with smart appliances (already on the market) to defer the unmet load to some other hour of the day for consumers who have chosen that option; they would get paid for signing up. FERC Order 2222 recognizes the technical equivalence of demand response, from the grid operation standpoint, to increasing the output of an electric power plant.

How would it work in everyday life? The hours when the solar-wind-battery combination does not meet load generally occur on days when there are other hours with surplus supply; this means demand can be deferred to some other time within a day when surplus solar and wind are available. Here are two examples.

Washing clothes: You load the machine and press “Start.” It happens to be a time when the grid has no available solar-wind-battery supply. For a rebate on your electricity bill, you have pre-selected the option of deferring your clothes washing to some other time of that day, should the need arise. Most of the time, the washing machine will start right away; but a small number of times, when supply is short, your clothes washing will be deferred automatically to a time within a 24-hour window when there is excess supply. You get an additional rebate on your electricity bill when the grid actually defers your demand. But suppose you are in a real hurry this time. You’ll have an override button, and your clothes washer will come on right away; you’ll pay more for that wash. (I’ll soon tell you where that electricity will come from.) If you don’t sign up at all for demand response, your clothes will always be washed when you press “Start” and you’ll always pay more when the grid has the most demand relative to supply. That is how peak electricity pricing works even today; it will be the same thing but in a new context, more nuanced, and more widespread. (That is also why renters, especially low-income renters, need to be protected during the transition to a smart grid: they don’t control what appliances they have. More on that in a future blog.)

Charging an electric car (by the time the grid is fully renewable, most cars will be electric): You plug it into your home charger; you need 50 more miles of charge as soon as possible. The car starts charging and will be done in, say, two hours, independent of the state of the grid. You would pay more for that kind of on-demand charging, though, in a well-designed system, it will still be cheaper per mile than gasoline. But say you plug in at 10 p.m. and don’t need the 50 miles of added charge until commuting time at 7 a.m. the next morning. Your car will still charge for a total of two hours, but at times within the nine-hour window when there is surplus supply; as a result, you’ll have a cheaper commute. The potential for demand response is even greater when it is aggregated across consumers — something a smart grid would also enable. Allowing aggregation of demand response and other distributed resources is a principal feature of FERC Order 2222.
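
A minimal sketch of how such a charging window might be scheduled; the hourly surplus forecast and the number of charging hours are hypothetical, and a real aggregator would work from actual grid signals:

```python
# Minimal sketch of deferrable EV charging within a plug-in window.
# The surplus forecast and hours of charge needed are hypothetical.

def schedule_charging(surplus_forecast, hours_needed):
    """Pick the hours with the largest forecast surplus within the plug-in window."""
    ranked = sorted(range(len(surplus_forecast)),
                    key=lambda h: surplus_forecast[h], reverse=True)
    return sorted(ranked[:hours_needed])

# Nine-hour window, 10 p.m. to 7 a.m.; hypothetical forecast surplus (MW) for each hour.
surplus = [120, 80, 30, 0, 0, 60, 200, 250, 90]
print(schedule_charging(surplus, hours_needed=2))  # charge during the two highest-surplus hours
```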

Given the attractiveness and flexibility of demand response, most people will sign up to save money. But what happens when people override their choices? That will be figured into the grid reliability calculation. The grid depends even now on what is called a “diversity factor” — not everyone cooks or bathes at the same time. Not all refrigerator compressors come on at the same time; neither do the electrical heating elements that keep freezers defrosted. Similarly, only some people will want to override a preset choice at any particular time. The grid won’t care which ones they are; a diversity factor, with a safety margin, will be built in.
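
The diversity-factor point can be illustrated with a simple probability sketch: if each enrolled customer overrides independently and only occasionally, the chance that a large share of them override at the same moment is tiny. The participation and override numbers below are hypothetical:

```python
# Illustration of the diversity factor: probability that many independent
# customers override demand response at the same time (hypothetical numbers).
from math import comb

def prob_at_least(n, p, k):
    """Probability that at least k of n independent customers override, each with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_customers = 1000
p_override = 0.05                    # each customer overrides 5% of the time (assumed)
threshold = int(0.10 * n_customers)  # what if 10% or more were to override at once?
print(f"{prob_at_least(n_customers, p_override, threshold):.2e}")  # a vanishingly small number
```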

Because some surplus electricity is available at some hour of essentially every day, a good bit of the demand that batteries cannot meet is now met by renewable generation but not at the exact time when the “Start” button is pressed. (Of course some things are not suitable for demand response, notably lighting; that is built into the design of demand response.)

These examples demonstrate a crucial fact: demand response reduces the cost of electricity to consumers because they get paid for contributing to grid resources. If the investment is only on the generation side, as for instance in peaking generation, then consumers pay all that added cost for resources that are used only a tiny fraction of the time.

Even after demand response, there may be a tiny balance of the annual load left unmet for a small fraction of the year’s hours. It was between one and two percent in the Maryland example we analyzed. The technology to meet that demand is also available.

Probably the best way would be to use “vehicle-to-grid” (V2G) technology. With EVs, the same plug that charges the vehicle can also, with the appropriate technology and software, feed electricity to the grid — that is the “to-grid” part of V2G. It has been developed and tested; it works. But can it meet the load? Is it comparable to nuclear plants?

Take long-term parking lots at airports. With suitable equipment, each could function as a sizable power plant. For example, the Thurgood Marshall Baltimore-Washington International Airport has more than 10,000 long-term parking spots. If three-quarters full, the cars parked and plugged in could supply the power equivalent of a large nuclear reactor for a short time (which is the nature of the residual peak load need). The daily lots could supply more. And that is just one airport! You park and set the minimum charge you must have when you return. And you get a discount on your parking. If the grid actually uses your car battery, you get paid to park your car. An overview number: the combined horsepower of today’s vehicles is more than 50 times the combined capacity of all electric generation stations. Do we really need new nuclear, which makes plutonium just to boil water and consumes an enormous amount of water, in a world with security problems and climate extremes? Here is a recent article on that topic by M.V. Ramana and me, done for the Environmental Working Group. Rather, shouldn’t we be thinking of saying bye-bye, as mindfully as possible, to the era of “Atoms for Peace”?
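
A rough sketch of the parking-lot arithmetic; the per-vehicle discharge rate is the key assumption, so a range is shown rather than a single value:

```python
# Rough aggregate-power sketch for vehicle-to-grid (V2G) at a long-term airport parking lot.
# The per-vehicle discharge rate is an assumption and is the sensitive parameter:
# roughly 10 kW is typical of today's bidirectional Level 2 chargers, while rates of
# 100 kW or more would require DC fast-charge-class equipment.

parking_spots = 10_000        # long-term spots at BWI, per the text
fraction_plugged_in = 0.75    # "if three-quarters full"

for per_car_kw in (10, 50, 150):
    aggregate_mw = parking_spots * fraction_plugged_in * per_car_kw / 1000
    print(f"{per_car_kw:>3} kW per car -> {aggregate_mw:>7,.0f} MW aggregate")
```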

There are many other advantages of V2G. For example, when most or all heating is electrified efficiently and solar and wind are the main energy supply, peak demand will tend to occur on cold, windless winter nights. That is when school buses are parked. School districts could make money by signing their electric buses up for V2G. Ditto for many other vehicles. Lawn care companies could make money by lending their battery-powered mowers to the grid at night.

Another way to meet residual peak demand would be to make hydrogen when there is surplus solar and wind supply. Hydrogen production using electricity (by electrolysis of water) is a form of energy storage; it can be done at power stations. (Large power stations today don’t produce hydrogen, but they do use it to cool the electric generators, which enables operation at the maximum possible efficiency. It is stored on site; it is a familiar material in the electric power business.) When needed, the hydrogen provides the fuel for light-duty fuel cells of the type now used in fuel cell vehicles. This is the approach we modeled because it was simpler to do so.

Every single technology needed is available now to enable a transition to a resilient, reliable, democratized, economical, and 100% renewable, clean grid in 15 years. I would go so far as to say that demand response (including V2G) is the present-day equivalent of the post-1973 energy crisis understanding that energy efficiency could do the same things as energy supply, only more cleanly and cheaply. (For a personal account of that, see my tribute to the pioneer of energy efficiency policy, Dave Freeman, who passed away last year; he understood the role of efficiency well before 1973.) Unfortunately, much modeling today does not include demand response integrally.

Now, a note on land area: Fossil and nuclear power plants need fuel, which means more land every year for mining that fuel. Nuclear fuel is compact by the time it reaches the reactor; but every ton of reactor fuel requires on the order of a thousand tons of uranium ore (give or take, depending on the quality of the ore). There are hundreds of millions of tons of uranium mining and milling wastes in the United States, mainly on the Colorado Plateau, from nuclear weapons and nuclear power, despite the fact that the United States has been importing most of its nuclear power uranium requirements for decades — creating wastes in other countries like Canada, Australia, and Kazakhstan. Coal, petroleum and fossil gas also use land for fuel production and pipelines, not to speak of the area of flattened mountaintops, contaminated streams, and coal ash ponds.

Consider also this statistic on land use: today, about 30 million acres of agricultural land are devoted to the production of ethanol from corn, mostly to supply 10% of automotive fuel. This is far more land than wind and solar would require. Here is a thought experiment. Suppose all the electricity the United States uses were supplied by ground-mounted solar. It would take roughly 10 million acres, or one-third of the land now used for corn ethanol. Further, the construction footprint of that solar – the steel to hold up the panels and the concrete footings – would be at most 200,000 acres and probably much less. (The construction footprint of wind is also small.) Almost all of the rest of the area could be used to grow food, graze sheep, or return to native grasses to enrich the soil and put carbon back in the soil.
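
A rough check of the thought experiment; the land-use intensity of utility-scale solar is the labeled assumption, and the answer scales directly with it:

```python
# Rough check of the ground-mounted solar thought experiment. The land-use
# intensity is an assumption: roughly 0.4 GWh per acre per year, i.e., on the
# order of 6 to 8 acres per megawatt at a ~25% capacity factor.

us_electricity_twh = 4000     # approximate annual U.S. electricity use, TWh
gwh_per_acre_year = 0.4       # assumed annual output of utility-scale solar per acre

acres_needed = us_electricity_twh * 1000 / gwh_per_acre_year
print(f"Roughly {acres_needed / 1e6:.0f} million acres")       # ~10 million acres
print("Compare: ~30 million acres now devoted to corn ethanol")
```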

Of course, we would not build a renewable energy system with only ground-mounted solar; far from it. A balanced renewable energy system will have onshore and offshore wind; it will have rooftop solar (for solar on new homes, see my 2020 report Gold on the Roof), urban ground-mounted solar, solar in parking lots, and solar on brownfields. Whenever solar is constructed on farmland, the area could be used to help join the food and energy systems so as to make them both healthier, more resilient, and more sustainable, while economically strengthening family farms and ranches. (See my 2021 report Exploring Farming and Solar Synergies)

Does renewable energy have environmental impacts? Yes. The scales of mining and construction impacts of large power systems are roughly comparable — they all require a large amount of construction-intensive investment, though specific materials and impacts depend on the technology. Where solar and wind shine (so to speak) is that they need no fuel. Rather, Mother Nature provides free fuel; we invest in the technology to harness it. To minimize the impact, whatever the energy system, it’s best to conserve energy and use it efficiently. We’ll be even better off and reduce mining impacts if we put in place facilities to recycle solar panels and batteries at the start of our renewable energy journey. That too, I’m saving for another blog. This one is already long enough.

A note of thanks to the Town Creek Foundation, which funded IEER’s Maryland work in its entirety for several years; the foundation made its last grants and closed its doors at the end of 2019.

Nuclear weapons ban treaty is now international law

Today, January 22, 2021, is a historic day. The Treaty on the Prohibition of Nuclear Weapons enters into force, three months after the 50th country, Honduras, ratified it. Nuclear weapons are now illegal under international law in every aspect. Possession is illegal; manufacture is illegal; use is illegal; threatening to use is illegal; transfer is illegal; aiding and abetting any of these things is illegal.

I salute the International Physicians for the Prevention of Nuclear War, where the idea for this treaty originated; its antecedents go back to the 1990s, when many non-governmental organizations, including IPPNW, created a mock treaty to ban nuclear weapons. The International Campaign to Abolish Nuclear Weapons (ICAN) was formed to bring the idea to fruition; it won the Nobel Peace Prize for doing so.

It is a comprehensive treaty; no nuclear weapon states have signed it. There are nine: the United States, Russia (the successor nuclear state of the Soviet Union), Britain, France, China, Israel, India, Pakistan, and North Korea (though Israel does not confirm or deny possessing these weapons). The first five are parties to the 1970 Nuclear Non-Proliferation Treaty, whose Article VI requires them to negotiate “in good faith” to achieve nuclear disarmament. In the 1990s, the World Court interpreted this to mean actually achieving nuclear disarmament in all its aspects. Both the good faith and the achievement have been sorely lacking.

Avoiding further humanitarian catastrophes like those that the manufacture, testing, and use of nuclear weapons have already created is at the heart of the treaty; so, of course, is preventing the true apocalypse of a nuclear war. Countless families have suffered since the first chain reaction was achieved at the University of Chicago on December 2, 1942. IPPNW and IEER documented those disasters, so far as public information would allow, in three volumes published in the 1990s: Radioactive Heaven and Earth, on testing; Plutonium: Deadly Gold of the Nuclear Age; and Nuclear Wastelands (published by MIT Press). Health and environmental harm was at the center of the first nuclear weapons treaty — the 1963 treaty that banned nuclear testing in the atmosphere, undersea, and in space.

But much remains in obscurity. Radioactive waste problems continue to fester. The fact that uranium was mined in many non-nuclear weapons states, leaving behind ill-health and radioactive waste, has hardly registered on the global political scene. Nuclear testing was done largely on indigenous and colonial lands. At the 2014 conference in Vienna, one of three that led up to the treaty, I argued that every nuclear weapons state has first of all harmed its own people without informed consent. That fact remains almost as obscure as it was in July 1945, when the first nuclear weapons test lit the New Mexico sky with the most ominous light the world had seen to that point; worse was to come. Indeed, the families irradiated by the intense fallout from that “Trinity test” and their descendants, organized as the Tularosa Basin Downwinders Consortium, are still struggling for recognition and compensation.

In view of this history, it seems appropriate to ask the states parties to the TPNW to set up a Global Truth Commission on Nuclear Weapons under the auspices of the United Nations. Bringing to light the awful truths of the poisoning of the Earth and the lawlessness that has accompanied it (“A secret operation not subject to laws” one high U.S. government official said in 1989) may help bring some justice to those who have suffered. At the same time, it may help mobilize the public in the nuclear weapons states and their allies — who live under a malignant “nuclear umbrella” whose use would destroy them and everyone else — to demand an end to something that has been immoral since its creation and is now also unequivocally illegal.

Public Transit – an element of economic and environmental justice

Even before the COVID-19 pandemic, in 2017, about 50 million U.S. households were under such economic stress that they could not cover an unexpected $400 expense, like the breakdown of a car or a sudden health problem, without borrowing; many could not cover it at all. As is well-recognized, the pandemic has exacerbated these problems, which fall disproportionately hard on Black, Indigenous, and Latinx communities. Public transit is a case in point, one of the most important, in fact.

Low-income households routinely face impossible economic choices – pay the utilities or the rent? Buy food or medicines? In the tightest of situations, these essentials come to be regarded as temporarily discretionary. Pay the rent in the winter instead of the utility bills, since utilities are less likely to be cut off. Pay the utilities in the spring, when the accumulated bills have grown so large that they compete with the rent. But most transportation expenses, whether for a personal vehicle or public transit or both, are not flexible even temporarily. Most importantly, transportation is needed to get to work; the expense is unavoidable on a daily basis, or the other bills stand no chance of being paid.

In 2016, households in the lowest income quintile spent almost 30% of their income on transportation. At the same time, public transit systems around the country routinely operate in fiscal distress, trying to raise revenue by fare increases. Transit workers, who are so essential to the functioning of life in cities, have had to struggle hard for everything from safe buses to adequate time for bathroom breaks. Then came the pandemic, the collapse of ridership and revenue, and inadequate federal support. At the same time, workers who kept the system going — grocery store workers, medical personnel, and, not least, transit workers themselves — were declared essential; they now faced added risk not only for themselves, but also for their families.

Beyond the issues of affordability and safety is the central stark fact that transportation is the single largest contributor to U.S. greenhouse gas emissions: 36% of carbon dioxide emissions from fossil fuel burning and 31% of total net greenhouse gas emissions in 2018, according to the Environmental Protection Agency. Public transit has rightly been seen as a principal tool in reducing these emissions, along with making that transit emissions-free.

Even before the pandemic, cities around the world had begun experimenting with making public transit free. In September 2018, Dunkirk, France made public transportation free. Weekday ridership increased over 60%; on weekends it more than doubled. In January 2020, the Kansas City Council voted unanimously to make public transit free. In February, Luxembourg became the first country, albeit small, to make public transit free. Lest one think that public transit is only for the densest cities like New York, Luxembourg’s population density is only 627 people per square mile compared to New York City’s more than 10,000. Even Montgomery County, Maryland, which has large swaths of rural land in an agricultural reserve, has a population density more than three times that of Luxembourg.

Increasing ridership must be combined with zero emissions. Electric buses are now coming into widespread use. Powered by solar and wind energy — now the cheapest new electricity sources — revived public transit can be a central instrument in making air cleaner, household finances more secure, and society more equitable. Commerce in cities could be stimulated. In Dunkirk, some people have gotten rid of their cars altogether.

Electricity, water, and sewage are considered essential public utilities in cities, necessary to keep them functioning. Public transit easily fits the criterion of an essential service. The climate crisis, the severe and constant strain on the family budgets of tens of millions of U.S. households, exacerbated by the pandemic, and the need for clean, breathable air all point in one direction: public transit in cities and close suburban areas should be declared a public utility. And it should be made free. When coupled with pedestrian- and bicycle-friendly cities, which have become more common during the pandemic, a revolution in transportation can be one of the anchors of economic vibrancy married to economic and environmental justice.

Indeed, there is a strong argument that the concept should be extended to rural areas, where water supply, sewage connections, and transportation can pose significant economic, health, and environmental challenges. The heavy toll of the lack of wastewater services was covered in a shocking November 30, 2020 article in the New Yorker. Similar problems, including lack of affordable transportation options, are widespread on tribal lands.

Where will the money come from? Political determination is needed to see the sources. Real estate, notably commercial real estate along dense public transit corridors, such as the Washington, D.C. area Metro system, increases in value due to the availability of transit. Yet the benefit of that does not accrue to those who ride transit or to transit workers. Rather, it goes into private pockets and, to a lesser extent, into general tax revenues. Taxing real estate along transit corridors in proportion to the increase in value is one source. Second, transit riders subsidize private car ridership. Jammed in at rush hour, transit riders relieve congestion on the roads and reduce pollution in the bargain; yet they pay more at those very times. A congestion charge on cars is among the options. Then there is the wealth tax, put on the political map by Senator Elizabeth Warren during her presidential campaign. A simple 1% wealth tax on the wealthiest 0.1% of households would raise about $200 billion a year, in round numbers. That tax would affect only one out of seventy millionaires. It’s not as if the rich will have less money; it’s just that their wealth will grow a little less rapidly. Bill Gates set up his foundation in 2000; he has given away billions. Yet he is almost twice as rich today as he was two decades ago.
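
The wealth-tax arithmetic behind that round number, with the combined wealth of the top 0.1% of households as the labeled assumption:

```python
# Arithmetic behind the wealth-tax estimate above. The combined wealth of the
# top 0.1% of households is an assumption, on the order of $20 trillion.

top_tenth_percent_wealth = 20e12   # assumed combined wealth of the wealthiest 0.1% of households
tax_rate = 0.01                    # a simple 1% annual wealth tax

revenue = top_tenth_percent_wealth * tax_rate
print(f"About ${revenue / 1e9:.0f} billion per year")   # ~$200 billion, in round numbers
```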

Transportation, and within that public transit, is a big piece of the big environmental, economic, climate, and justice puzzle; it is a far larger one than is often recognized. By the same token, the role of public transit in accelerating solutions — needed as much for climate as for justice — is often underappreciated. That should change; the sooner, the better.

On January 22, 2021, nuclear weapons will be illegal under international law

In 2017, the United Nations General Assembly convened a conference to consider a treaty on a complete ban on nuclear weapons — including their manufacture, possession, use, transfer, and testing. On July 7, 2017, 122 countries voted to adopt the Treaty on the Prohibition of Nuclear Weapons, with one abstention and one vote against. The treaty required ratification by 50 countries to enter into force. That target was reached on October 24, 2020, when Honduras ratified the treaty, just a single day after Jamaica and Nauru had done so.

On January 22, 2021 — 90 days after the fiftieth ratification — the nuclear ban treaty will enter into force. From that day onwards, all aspects of nuclear weapons will be illegal under international law. Nuclear weapons will join the other infamous weapons of mass destruction — chemical and biological weapons — in being illegal. One of the most salient aspects of the nuclear weapons ban treaty is that its motivating factors included not only “the catastrophic humanitarian consequences that would result from any use of nuclear weapons,” but also the vast and lasting damage to human health and the environment caused by nuclear weapons production and testing, with disproportionate impacts on women and children.

Nuclear weapons, the treaty says, “…pose grave implications for human survival, the environment, socioeconomic development, the global economy, food security and the health of current and future generations, and have a disproportionate impact on women and girls, including as a result of ionizing radiation.” It also notes the devastating impact that nuclear weapons testing has had on indigenous peoples.

I am happy to report that IEER’s work had a role in some aspects of the treaty, notably regarding the humanitarian consequences of nuclear weapons production and testing. Our partnership with the 1985 Nobel Peace Prize winner, International Physicians for the Prevention of Nuclear War (IPPNW), in the late 1980s and the 1990s resulted in three detailed books on the health and environmental impacts of nuclear weapons production and testing. Two of them, Radioactive Heaven and Earth (1991), on testing, and Plutonium: Deadly Gold of the Nuclear Age (1992), can be downloaded free. The third, Nuclear Wastelands: A Global Guide to Nuclear Weapons Production and Its Health and Environmental Effects (1995), was published by MIT Press. IPPNW was the organization that initiated the International Campaign to Abolish Nuclear Weapons (ICAN) in 2006, which led to the treaty and a Nobel Peace Prize for ICAN.

IEER was also present at the official December 2014 conference in Vienna, Austria, on the humanitarian impacts of nuclear weapons production and testing, where I made a presentation, “Assessing the Harm from Nuclear Weapons Production and Testing”; I also made a presentation at the 2017 conference at which 122 countries voted for the final treaty text.

IEER has also played a leading role in calling attention to the disproportionate impact of ionizing radiation on children, with a greater impact on female children, and on women. Our 2006 report, Science for the Vulnerable, was the first to explore these impacts in detail, based in part on the scientific findings to that effect of the U.S. National Academies and the United States Environmental Protection Agency. Our work in this area continues.

Not a single nuclear weapon state signed the treaty in 2017. That remains the case to this day. Yet, there is a proverbial silver lining to that dark cloud. The treaty text was determined essentially by countries that do not have nuclear weapons and don’t want them. That has given us a clean treaty text; it bans all aspects of nuclear weapons, period. Had the treaty been negotiated by the nuclear weapon states, it would likely have been weak, or full of loopholes, or both. The 1970 Nuclear Non-Proliferation Treaty (NPT), which has been ratified by the first five nuclear weapon states, contains a commitment, in Article VI, to negotiate nuclear disarmament in “good faith” — a commodity that has been in especially short supply. It was largely on the expectation that the nuclear weapon states would disarm that the other parties agreed not to acquire nuclear weapons. The failure of the nuclear weapon states, the five that have ratified the NPT and the four that have not, to chart a clear path to complete nuclear weapons elimination was one of the motivating forces for the creation of the Treaty on the Prohibition of Nuclear Weapons.

On January 22, 2021, nuclear weapons will be illegal under international law. There will remain the large task of charting a path to give practical effect to that law. In my view, it will be the same path that will also produce a more peaceful, equitable, and democratic world on a much broader front. Nuclear weapons are, after all, the most violent and inequitable expression of much broader violence, inequity, and ecological destruction in the world. IEER spelled out some elements of that in 1998 in articles on achieving the enduring elimination of nuclear weapons in a special issue of our newsletter, Science for Democratic Action. It is noteworthy, then, that many of the countries that have ratified the treaty and have led the way to making nuclear weapons illegal are also among the ones most threatened by the devastation of climate disruption due to human activities.

The Nagasaki atomic bombing – why the rush?

Nagasaki was destroyed by a plutonium atomic bomb seventy-five years ago, on August 9, 1945. Called “Fat Man,” it was the same design that had been tested in the New Mexico desert less than a month before, spreading intense radioactive fallout over a wide area. A day earlier, on August 8, 1945, the Soviet Union had declared war on Japan, having been neutral until then. Japanese wartime leaders had been divided about surrender for weeks; now, with Japan facing a two-front war, the debate became more urgent and intense.

Susan Southard’s account of the debate in the Japanese councils of war notes that “the news of the second atomic bombing had no apparent impact on their deliberations [on August 9], which, according to notes from their meeting continued throughout the day and with no further mention of Nagasaki.” The specter of occupation by the Soviets and the hopelessness of the two-front war seem to have decided the Emperor of Japan that very night to signal a surrender to the United States, which duly happened on August 15, 1945.

This much seems well-supported by the facts; despite that, the controversy rages. What is even less debated is the timing of the use of the bombs and the targeting of Japan. In a blog post three days ago, I recounted, once again, that the decision not to target Germany and, instead, to orient the bomb to the Pacific theater had been taken on May 5, 1943, more than two years before Hiroshima and Nagasaki were destroyed.

The bombs were used as soon as they were ready and the weather permitted — August 6 and 9. On August 9, Nagasaki was not even the intended target; it was Kokura, but there were too many clouds over that city. So Nagasaki, a secondary target that had already been bombed with more mundane explosives, had the atomic misfortune instead. The main targets were cities that had deliberately been spared conventional bombing; the object was to measure the impact of the atomic bomb with as much scientific precision as possible. Prior destruction would confound those measurements.

The rush was not related to the anticipated large loss of lives of U.S. troops in an invasion (tens of thousands in the military’s estimates from June 1945) because that D-day was not until November 1, nearly three months after the atomic bombings. President Truman, General Marshall, and other military leaders believed that a Soviet entry into the war would cause Japan to capitulate. At the mid-July Potsdam conference, the Soviets had agreed to declare war on Japan on August 15. Following the Hiroshima bombing, Stalin accelerated that declaration by a week. He got the message the U.S. sent about the shape of the post-war world; he was going to have his say.

Why not wait until a few days after August 15 to bomb Hiroshima? Why not wait until a few days after August 8, when the Soviets actually entered the war, to use the second bomb? Why persist in the hurried schedule?

Two principal reasons can explain the rush. The first imperative was to justify the use of vast resources on the Manhattan Project. If the bombs were not used and shown to be important, even decisive, in ending the war, there would be endless investigations. President Truman, as a senator in 1944, had already threatened investigations when he was frustrated in his attempts to find out where all the money was going. Jimmy Byrnes, director of FDR’s Office of War Mobilization (and later Truman’s Secretary of State), had warned FDR in February 1945 that he had better show that the money spent on the project was actually contributing to the war effort.

The Manhattan Project had had a very high priority claim on wartime resources; for example, welders were sent from shipyards in San Francisco to Hanford to build the plutonium production plants there. Were the bombs not used, there could well have been a reasonable argument that the Project actually cost the lives of U.S. soldiers and sailors.

The second was to announce the shape of the post-war world, most of all to the Soviet Union. Groves frankly acknowledged that when he said after the war: “There was never from about two weeks from the time I took charge of this Project any illusion on my part but that Russia was our enemy, and the Project was conducted on that basis. I didn’t go along with the attitude of the country as a whole that Russia was a gallant ally….Of course, that was so reported to the President.” (as quoted by Martin Sherwin in his book A World Destroyed, Vintage Books paperback, 1987, p. 62).

The United States and Britain had kept the fact of the bomb project from their Soviet wartime ally. Churchill had explicitly spurned a plea from the great Danish physicist Niels Bohr in 1944 that the Soviets, as allies, should be informed. In any case, Stalin was well informed; he already knew of the bomb project through his spy network. With Truman’s hint at Potsdam about the successful atom bomb test, Stalin accelerated the Soviet bomb effort. With Hiroshima, he accelerated the Soviet declaration of war on Japan. He would have his say.

The die was cast. Having been started, at the urging of Einstein and others, as an effort to deter Hitler from blackmailing the world with atom bombs, the project over time ceased being about Germany – most definitively so by early December 1944, by which time the Manhattan Project spy mission, Alsos, had determined that Germany did not have a viable bomb project.

At that point, the vast plutonium separation plants at Hanford, Washington, operated by DuPont, had not yet been started. None of the tens of millions of gallons of highly radioactive waste from plutonium separation that still haunt Eastern Washington State had been created. At that moment, when the bomb project was accelerated instead of being stopped (a logical step had it still been about the Nazis), it had definitively become about money and power — wartime money and postwar power. That is the secret in open view about the timing of the bombing of Hiroshima and, even more so, about the destruction of Nagasaki 75 years ago.

PS: Many years ago, I had the privilege of interviewing Walter Hooke, a marine veteran who was among the US troops that occupied Nagasaki. And here is my Hiroshima blog post from three days ago. These are vignettes of the history. For an overview, please refer to my 2012 talk in Santa Fe, From Pearl Harbor to Hiroshima; it’s about one hour.

When was the decision to use the atom bomb made?

When was the decision made to use atom bombs on Hiroshima and Nagasaki? Was it one decision — or several that made their use inexorable and inevitable? What were the forums in which those decisions were made? When was Japan targeted? And Germany? Seventy-five years after those cities were obliterated, these remain insistent questions.

The first step, the establishment of the Uranium Committee in October 1939, after President Roosevelt read Einstein’s letter urging a bomb project, was not really a decision to use the bomb. It was a scientific exploration with a more or less deterrent aim: to beat the Nazis to the bomb.

Vannevar Bush (no relation to the two presidents to come), who headed the National Defense Research Committee in the White House, slow-walked the project until well into 1941. Only $10 million was spent in the first two years. He reported directly to FDR and was the central decider in weapons development during World War II.

Bush was an electrical engineer, an inventor, Vice-President of the Massachusetts Institute of Technology and, from 1939 onward, President of the Carnegie Institution. He came to the White House job in 1940 determined to bring the full force of U.S. science, including that in academia, to bear on the development of weapons in this war, not the next. He knew that the generals considered academic scientists to be eggheads who, at best, would develop weapons for the next war — or, in the words of Harvey Bundy, confidant of Secretary of War Henry Stimson, the military “would naturally have the feeling that these damn scientists weren’t very practical men; they were visionaries.” Bush was determined to prove them wrong.

At first, the atom bomb project seemed too speculative for his purpose; who knew if it would work at all? But in July 1941 the British MAUD (Military Application of Uranium Detonation) Committee concluded that a modest amount of uranium-235 would be sufficient to produce a massive atomic explosion. Now the bomb was not speculative, though it was not yet a reality. Such was the certainty that, three years later, the uranium bomb was used directly on Hiroshima without ever being tested. (The July 1945 Trinity test was for the trickier plutonium implosion design.) Now the argument that the U.S. should beat the Nazis to the nuclear punch could be joined to Bush’s ambition to put science in the service of U.S. weapons to be used in World War II and for global power after it (now more politely called “national security”).

Bush briefed FDR on October 9, 1941, recommending an all-out effort to make the bomb. Given his position and his ambition that the weapons whose development he was overseeing should be used in World War II, the first decision to use the bomb was made, in spirit and at least arguably, on that date, almost two months before the U.S. formally entered the war upon the bombing of Pearl Harbor. Of course, the bomb had to be built in time. But the course was set.

By May 1943, what came to be known as the Manhattan Project was firmly established. Leading U.S. physicists, including immigrants who had fled Europe, met in 1942 at the University of California, Berkeley for what should be the most famous summer study ever. They too concluded a uranium bomb would work. A nuclear chain reaction, the explosive heart of the bomb, was demonstrated at the University of Chicago on December 2, 1942. Hanford, Washington, had been selected as the site to make plutonium on an industrial scale. Los Alamos, high on an isolated New Mexican mesa, was selected as the place where scientists would design the bomb. It was a region that Robert Oppenheimer knew well.

On May 5, 1943, the Military Policy Committee, consisting of five men and chaired by Bush, met to review progress. It was, in effect, the executive committee of the Manhattan Project. James Conant, President of Harvard, was Vice-Chair. General Groves, who oversaw and coordinated the massive project on the ground, was a member, as were two other military men: General Styer, an expert in logistics, and Admiral Purnell, Deputy Chief of Naval Operations for Materiel. The Committee did not include a single general or admiral in charge of actually prosecuting the war that was raging. In fact, almost to a man, the generals and admirals in the field did not know about the Manhattan Project.

It was on that fateful day, two years before the end of the war in Europe, that the Military Policy Committee decided that Germany would NOT be targeted; the target selected was the Japanese fleet stationed at the Pacific island of Truk. Manhattan Project scientists other than Bush and Conant continued to labor under the idea that the bomb would be used to deter Germany or perhaps be used on it. Bush had decided not to inform them. When I interviewed several leading ones who were still alive in 1995, including Glenn Seaborg and Hans Bethe, none knew of the May 5, 1943 decision, even though the fact had been public, in fine print, for decades.

May 5, 1943 brought the first specific decision to use the bomb; it was also the first targeting decision. Germany was not targeted because it might reverse-engineer the bomb if it were a dud. Japan was thought to be less likely to do so. The targeting of the fleet at Truk rather than Tokyo was an added precaution — a dud would sink and be hard to recover. From May 5, 1943 onward, the use of the bomb was all about the Pacific theater and, in 1944, Japan itself.

On September 18, 1944, FDR and Churchill agreed that “when a ‘bomb’ is finally available, it might perhaps, after mature consideration, be used against the Japanese, who should be warned that this bombardment will be repeated until they surrender.” The aide-memoire makes no mention of Germany. In that same time frame, logistical preparations were made in the Pacific theater to use the bomb on Japan. There were none in the European theater.

By December 1944, the Manhattan Project spy mission, Alsos, had enough information to conclude that Germany’s efforts “to develop a bomb were still in the experimental stages,” as Groves wrote in his memoir, Now It Can Be Told. The war was coming to a close; the Soviets were well into Eastern Europe. Paris had been liberated months before. Joseph Rotblat, a scientist at Los Alamos, decided his job was done; there was no German bomb threat. He quit. He was the only one.

The Project itself was accelerated so the bomb could be used before the war in the Pacific ended. Others, notably General Groves, joined Bush in his determination that weapons developed during the war should be used in the war. A principal motivation was to justify the use of immense resources for the atom bomb project; that meant showing that the bomb had played a big role in ending the war and saving the lives of U.S. armed forces personnel by preventing an invasion.

The last decisions were made in May 1945; they were to pick target cities in Japan. While some scientists opposed the use of the bomb on cities, their pleas did not reach President Truman. Jimmy Byrnes, soon to be his Secretary of State, explicitly rejected a similar plea from Leo Szilard, who had first conceptualized the chain reaction years before it was experimentally achieved. At the top, there was no serious consideration of whether the bombs should be used. It was just a question of when.

The answer: as soon as the bombs were ready and the weather permitted — those dates were August 6 and August 9, 1945. The invasion of Kyushu was not due until November 1, 1945. The determination to use the bomb in World War II was realized by its early use. The role of the bombings in ending the war has increasingly come into question, with strong evidence pointing to the decisive role of the Soviet entry into the war on August 8, 1945. The Soviets had been neutral with respect to Japan before then. Japanese rulers, observing Eastern Europe, did not want to be occupied by the Soviets. So the Japanese submitted to the United States’ demands.

It is worth noting that, on April 23, 1945, as part of briefing materials for the newly installed President Truman, Groves wrote: “The target is and was always expected to be Japan” (italics added). The decision in the direction of Japan and away from Germany had been made two years before, on May 5, 1943. Groves knew because he was a part of it. Finally, there is much evidence that the use of the bombs was, in significant measure, a message to the Soviets about who would run the post-war world. Bush too had achieved his objective — the bomb was used in World War II and the U.S. had announced itself as the preeminent global power.
