Every anniversary of the atomic bombings of Hiroshima and Nagasaki, two schools of thought square off. One says the bombings were not necessary to end the war; the Japanese were close to surrender anyway. The other says remember Pearl Harbor, and remember the Japanese militarists’ determination to fight to the end. But many questions remain unasked in this framework. Why was the U.S. Pacific fleet moved to Pearl Harbor in 1940? Why did Japan bomb it? When were Japanese forces, rather than Germany, first targeted for the atomic bomb? The answers may surprise you. They are in a talk I gave in Santa Fe in 2012; see the video of it below. It runs about an hour. Links to some historical documents and additional information are provided below.
Pearl Harbor was not a “sneak attack” in the sense that it was a total surprise to the United States. The Pacific fleet had been moved there in June 1940 to assert U.S. power in the Pacific. Admiral Kimmel, in charge of the fleet, noted in February 1941, nearly ten months before the attack, that he felt that “a surprise attack (submarine, air, or combined) on Pearl Harbor is a possibility.” So the bombing of Pearl Harbor in December 1941 was not a surprise in any military sense. The fleet’s vulnerability had been anticipated.
Japan depended on the U.S. for 80 percent of its oil imports in 1940, making it very vulnerable to the oil embargo that began on August 1, 1941. It had to decide: pursue empire and seize Indonesian oil, or give up its attempt to conquer China and other parts of Asia.
In 1944, as a senator, Harry Truman had been frustrated and upset that he had not been allowed to send a personal military representative to the Hanford Site to determine whether the large expenditures there were wasteful or not. He gave in for the moment and agreed not to investigate, but warned Secretary Stimson in March 1944 that “[t]he responsibility … for any waste or improper action which might otherwise be avoided rests squarely upon the War Department.” Truman did not know about the Manhattan Project until after he became President in April 1945, upon the death of President Roosevelt.
James Byrnes, as director of the Office of War Mobilization, advised President Roosevelt in a March 3, 1945 memorandum that there should be an independent scientific investigation “to justify continuance of the project.” He warned that “if the project proves a failure, it will then be subjected to relentless investigation and criticism.”
The concept that half a million lives might be saved by an early end to the war was mentioned in a note to Henry Stimson by an “economist” friend (my guess is that it was his cousin Alfred Loomis, a wealthy Wall Street banker and amateur physicist) as an argument in favor of a conditional surrender policy. The main purpose of the suggestion of conditional surrender was to end the war before the Soviets entered, and thereby keep markets in Asia other than Formosa and Korea for the British and the Americans. Formosa and Korea were proposed to be left to Japan as part of the early end to the war.
Stimson forwarded the letter to General Marshall for evaluation by his staff, whose analysis the general sent him on June 7, 1945. The General Staff considered the proposal “acceptable from the military standpoint, but its implementation on the terms suggested is considered doubtful.” Overall, the staff analysis leaned in the direction of terms that add up to unconditional surrender. It rejected out of hand the suggestion that the invasion of Japan would cost half a million American lives, stating that the estimate “under our present plan of campaign, is considered entirely too high” (emphasis in the original). General Marshall’s cover note to Stimson stated that he was in “general agreement” with the analysis.
The war plan itself had three scenarios. Estimated deaths ranged from 25,000 to 46,000; estimated injuries from 105,000 to 170,000, plus 2,500 to 4,000 missing in action. A few days later, on June 18, 1945, General MacArthur clarified that the “estimate was derived from the casualty rates in Normandy and Okinawa, the highest our forces have sustained…. The estimate is purely academic and routine and was made for planning alone. I do not anticipate such a high rate of loss.”
Published on August 05, 2015 by IEER Administrator in News
(Aerial view of Sellafield, Cumbria. Photo credit: Simon Ledingham)
So, as a practical matter, most of the $100 billion must be chalked up as the cost of Britain’s nuclear bombs, since that was the only (arguably) “useful” output. That is roughly $600 million per operational bomb in Sellafield cleanup costs alone. Then add the remediation of all the other bomb-related sites and the costs of setting up and running the nuclear bomb complex.
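The per-bomb figure is back-of-the-envelope arithmetic; the sketch below makes the division explicit, assuming a stockpile of roughly 165 operational warheads (an illustrative number for this calculation, not an official count):

```python
# Back-of-the-envelope cost per bomb. The warhead count is an
# illustrative assumption, not an official figure.
sellafield_cleanup_usd = 100e9   # ~$100 billion cleanup estimate
operational_warheads = 165       # assumed stockpile size

cost_per_bomb = sellafield_cleanup_usd / operational_warheads
print(f"${cost_per_bomb / 1e6:.0f} million per bomb")  # roughly $600 million
```

Any stockpile in the low hundreds yields the same order of magnitude: several hundred million dollars per bomb in cleanup costs alone.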
In the United States, cumulative costs to the year 1996 were about $5.5 trillion (1995 dollars), including not only the bombs but also the delivery systems, personnel, etc. The total has been increasing by tens of billions of dollars each year since. Cleanup will total hundreds of billions of dollars. And, according to current plans, many sites will be left with significant amounts of contamination. (For an accounting of the U.S. program, see Atomic Audit, Brookings Institution, 1998, ed. Stephen Schwartz. I was one of the authors. It’s available from IEER’s website: http://ieer.org/resource/books/atomic-audit/)
In the meantime, Fukushima continues to be an emergency without end – vast amounts of radioactivity, including strontium-90 in the groundwater, evidence of leaks into the sea, the prospect of contaminated seafood. Strontium-90, being a calcium analog, bioaccumulates in the food chain. It is likely to be a seaside nightmare for decades. (Listen to Arjun discuss the ongoing radioactivity leaks in an interview with Living on Earth radio: http://www.loe.org/shows/segments.html?programID=13-P13-00030&segmentID=4)
“The difficulties we face at Fukushima Daiichi are on par with the difficulties we faced in the wake of World War II. Tepco [the Tokyo Electric Power Company] needs more assistance from others in Japan, me included. We cannot force everything on Tepco; that’s probably not going to solve the problem.”
So nuclear power has gone from a promise of “too cheap to meter” to a disaster like the horrific post-war rubble in Japan. And the biggest bang for the buck that was supposed to be the bomb has become endless bucks for the bang. A sad reminder as we approach the anniversaries of the bombings of Hiroshima and Nagasaki.
Published on July 29, 2013 by Arjun Makhijani, Ph.D in News
The last few months have seen some definite signs that commercial nuclear power is not the wave of the future but a way of boiling water that might be seen as a twentieth-century folly. Four commercial nuclear reactors have been shut permanently, ostensibly for different reasons, but economics underlies them all.
Crystal River in Florida came first, in early February 2013. It had been shut since 2009. Like many other pressurized water reactors, it had to have a premature replacement of its steam generators, the huge heat exchangers where the hot reactor water (“primary water”) heats up water in the secondary circuit to make the steam that drives the turbine-generator set. The outer layer of the containment structure cracked during the replacement. Duke Energy, the owner, determined it was too costly to fix the problem. See Duke’s press release at http://www.duke-energy.com/news/releases/2013020501.asp
The 556-megawatt Kewaunee reactor in Wisconsin came next, in early May, unable to compete with cheap natural gas and falling electricity prices. Indeed, electricity consumption in the United States is declining even as the economy recovers from the Great Recession, due in part to the increasing efficiency of electricity use. There doesn’t appear to be enough money in the reserve fund for decommissioning at present – see the New York Times article at http://www.nytimes.com/2013/05/08/business/energy-environment/kewaunee-nuclear-power-plant-shuts-down.html.
San Onofre, with two reactors, came next. Both had been down since early 2012, when excessive wear of steam generator tubes and leaks of primary water were discovered. The steam generators were new, but contrary to the company’s claims, it turned out that the new ones were not copies of the original licensed design. A long, contentious process followed; prospects for a green light to restart faded. The blame game between the supplier of the steam generators, Mitsubishi, and the majority owner, Southern California Edison, grew intense (and it continues). Announcing the decision to close the plant, SCE President Ron Litzinger said: “Looking ahead, we think that our decision to retire the units will eliminate uncertainty and facilitate orderly planning for California’s energy future.” (See the Los Angeles Times article at http://www.latimes.com/local/lanow/la-me-ln-edison-closing-san-onofre-nuclear-plant-20130607,0,7920425.story).
Nuclear plants were supposed to create certainty, reliability, predictability, 24/7 operation. But in the last few years, this has given way to a new reality: nuclear reactors are 24/7 until they become 0/365 with little or no notice. The above are just four examples. Before the Fukushima disaster, Japan had 54 reactors. Four were irretrievably damaged by the accident. In the 15 months that followed, the other 50 were progressively shut down or remained shut. In the last year, only two have been restarted. It will be a contentious process before any more of them can be restarted; it is possible none will be. Many in Japan assume they won’t be, and the country is installing solar power at rapid rates – 1.5 gigawatts in the first quarter of 2013 alone – equal to about one and a half reactors in peak power output. About 6 gigawatts of solar capacity would be required to generate as much electricity as one typical power reactor. Capacity comparable to that will likely be installed in Japan this year.
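The peak-power versus annual-generation comparison comes down to capacity factors. A minimal sketch, assuming a 1-gigawatt reactor at roughly a 90% capacity factor and solar photovoltaics in Japan at roughly 15% (both rounded, illustrative figures):

```python
# Capacity-factor arithmetic behind the solar-vs-reactor comparison.
# Assumed values: ~90% capacity factor for a typical 1 GW reactor,
# ~15% for solar PV in Japan. Both are illustrative assumptions.
reactor_gw = 1.0
cf_nuclear = 0.90
cf_solar = 0.15

annual_twh_reactor = reactor_gw * cf_nuclear * 8760 / 1000  # ~7.9 TWh/year
solar_gw_needed = reactor_gw * cf_nuclear / cf_solar        # peak GW of PV

print(f"{solar_gw_needed:.0f} GW of solar matches one reactor's output")
```

With these assumptions the ratio is 0.90/0.15 = 6, matching the roughly 6 gigawatts cited above.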
Finally, Germany prematurely shut eight reactors following Fukushima, consolidating and accelerating the post-Chernobyl process of phasing out nuclear power altogether (the end date is now set for 2022).
But officialdom in the United States still clings to the idea that we need nuclear power. So reliable, so baseload, so twentieth century (oops, wrong century).
Published on June 20, 2013 by Arjun Makhijani, Ph.D in News
Statement of Arjun Makhijani for the March 2013 conference commemorating the Fukushima accident. To be read by Helen Caldicott.
I appreciate that my friend Helen Caldicott, one of the two people who inspired my book Carbon-Free and Nuclear-Free (the other was S. David Freeman), has agreed to read a brief statement from me on this second anniversary of the Fukushima disaster. I wanted to share two of the new things I have learned as I have watched the consequences of Fukushima unfold.
First, the Japanese government proposed to allow doses as high as 2 rem (20 millisieverts) per year to school children, claiming that the risk was low or at least tolerable. An exposure at this level over five years – 10 rem in all — to a girl, starting at age five, would create a cancer incidence risk of about 3 percent, using the [age- and gender-specific] risk estimates in the National Academies BEIR VII report.
Now imagine that you are a parent in Japan trying to decide whether to send your daughter to such a school. Roughly thirty of every hundred girls would eventually develop cancer at some point in their lives; about three of those would be attributable to Fukushima school exposure, according to the risk numbers. But no one would know whether their daughter’s cancer was attributable to the exposure at school, and neither would the Japanese government’s radiation bureaucrats. Why is it difficult to understand that while the risk attributable to school contamination would be one in thirty, the proportion of parents stricken with guilt and doubt would be closer to one in three? Would you ever forgive yourself if you made the decision to send your daughter to that school? Or your son, though the risk attributable to Fukushima exposure would be less than that experienced by girls?
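The one-in-thirty versus one-in-three contrast is simple addition; the sketch below makes it explicit, assuming a roughly 1-in-30 attributable incidence risk from 10 rem (the BEIR VII-based figure above) and a roughly 30% baseline lifetime cancer incidence (both rounded, illustrative inputs):

```python
# Illustrative arithmetic behind the one-in-thirty vs. one-in-three contrast.
# Assumed, rounded inputs: ~1-in-30 attributable cancer incidence risk from
# 10 rem, and ~30% baseline lifetime cancer incidence.
attributable_risk = 1 / 30   # risk added by the school exposure
baseline_risk = 0.30         # lifetime cancer incidence without the exposure

total_risk = baseline_risk + attributable_risk
print(f"attributable risk: 1 in {1 / attributable_risk:.0f}")
print(f"risk of any cancer: 1 in {1 / total_risk:.0f}")
```

Because no individual cancer can be traced to its cause, every family whose daughter falls ill (roughly one in three) bears the doubt, even though only one case in ten of those is attributable to the exposure.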
Indeed, due to the long latency period of most cancers, you would be fearful even if no cancer had as yet appeared. The Pentagon understood this when a Joint Chiefs of Staff Task Force evaluated the extensive contamination produced by the July 1946 underwater nuclear bomb test (Test Baker) at Bikini for its usefulness in war. Here is a quote from their 1947 report:
“Of the survivors in the contaminated areas, some would be doomed by radiation sickness in hours, some in days, some in years. But these areas, irregular in size and shape, as wind and topography might form them, would have no visible boundaries. No survivor could be certain he was not among the doomed, and so, added to every terror of the moment, thousands would be stricken with a fear of death and the uncertainty of the time of its arrival.”
Compare this for yourself with the aftermath of Fukushima and the plight of the parents.
Second, nuclear power’s conceit was that it is a 24/7 electricity supply. Since Fukushima, over sixty of the world’s light water power reactors have been prematurely shut for a variety of reasons, though just four reactors were stricken by the accident: 52 in Japan, 8 in Germany, and several in the U.S. Even if some are eventually restarted, nuclear power has shown a unique ability to go from 24/7 power supply to 0/365 essentially overnight for long periods – hardly a convincing claim of reliability.
We can do better than making plutonium just to boil water or polluting the Earth with fossil fuel use. When I finished Carbon-Free and Nuclear-Free in 2007, I estimated it would take about forty years to get to an affordable, fully renewable energy system in the United States. Today, I think it can be done in twenty-five to thirty years. Are we up to the challenge? Finally, I truly regret I cannot be there to publicly thank and honor my friend Helen for inspiring Carbon-Free and Nuclear-Free, which you can download free from ieer.org, also thanks to her. I wish you a very productive conference.
On February 6, 2013, Dan Stout, the Tennessee Valley Authority’s senior manager for its Small Modular Reactor project, gave a colloquium at the University of Tennessee in Knoxville. Much of the talk was just nuclear boosterism. For instance, he claimed that “nuclear power was tested hard in 2011. It remains safe, reliable and affordable.”
There was no mention of the fact that post-Fukushima, about 60 of the world’s light water reactors were closed for one reason or another, mainly in Japan and Germany, taking them from 24/7 power generation to 0/365. He said nothing of the enormous social, economic, and environmental dislocation and loss that has afflicted the Fukushima region and beyond since that time. And there was nothing of the Nuclear Regulatory Commission’s Task Force report of July 2011 that found US regulations seriously lacking in a number of respects (http://pbadupws.nrc.gov/docs/ML1118/ML111861807.pdf). But there was some refreshing candor about Small Modular Reactors (SMRs) mixed with this sales talk. His talk is archived at http://188.8.131.52/UTK/Viewer/?peid=fa73ded60b7b46698e9adc0732101a76
SMRs are supposed to overcome the loss of economies of scale by using assembly-line mass-manufacturing techniques like those used for cars and passenger aircraft. The site setup would be standardized, and setup and commissioning on site would be relatively quick (36 months). “Economies of replication” would thus replace the “economies of scale” that were one of the principal reasons reactors got larger as time went on.
But there is a chicken-and-egg problem here, to use a cliché. You’ve got to have a lot of orders before you can set up your assembly line and produce cheap reactors, but you have to have your reactors certified and demonstrated before you get the nuclear masses lining up to order them, given the risks involved. There are no such orders yet; no assembly line is in sight.
So for now, SMRs would be “cobbled together” in existing facilities. “Does the federal government want to help?” he asked rhetorically. “I don’t know,” he went on. “We’re going to find out. I am not going to make your electric bills all triple because of this project. That’s just …TVA won’t do that.” (Italics added.)
So for the foreseeable future, without massive subsidies, nuclear power using SMRs will be the same as nuclear power with the present Atomic Leviathans – expensive and in need of government subsidies. But you have to hand it to Mr. Stout for one thing. Unlike Georgia Power and South Carolina Electric and Gas, two utilities building the new large behemoth variety (the AP1000), he is not willing to saddle TVA ratepayers with the cost of yet another nuclear gamble. TVA has been there and done that, and is still doing it with large reactors. A large part of TVA’s indebtedness from the 1970s and early 1980s was due to the mid-stream cancellation of costly and unneeded reactors. No, prudently for TVA, Mr. Stout wants the taxpayers to take the risk.
And the federal government has obliged by committing up to half of the $452 million proposed for SMRs to Babcock & Wilcox, designer of the mPower, a 180-megawatt reactor, and to TVA, to advance design and certification. B&W had spent over $200 million on the project as of the middle of last year. Where will it lead, other than to the “cobbled together” machine? Specifically, what will it take to get an assembly line? Here is how Mr. Stout explained it:
So the concept is that you gotta have an assembly line cranking out repeatable parts, achieving a standardized vision of lots of mPower reactors. That creates the nth-of-a-kind plant that has the efficiency in cost. I’m building Unit One. I don’t want to pay for B&W’s factory with automation to crank out repeatable parts. So that creates a contracting challenge… So as you scratch your head and puzzle over how this works, remember the math won’t work on one unit. In fact our unit is most likely going to be, I’m going to use the word “cobbled together”, it’s going to be manufactured within existing facilities. But if B&W can get an order backlog of a hundred SMRs and they are going to start delivering them in China and India etc., then they’ll be able to go get the financing and build the building and have new stuff put in place to crank out these parts in a more automated manner. So as long as the design is the same this should all work. The devil is going to be in the details and in the oversight and in the inspections. [emphasis added]
The end of this decade is the target for the commissioning of the first mPower reactor (the most advanced SMR in planning and design and the only one with substantial federal government support so far). It would take some years after that – well into the 2020s – to fully prove the design and, if needed, debug it. It stretches credulity that China and India, which, along with Russia, are the main centers of nuclear power construction today, would put in orders totaling a hundred reactors much before 2020. Indeed, if they were that attracted to SMRs, why would they not pay the license fees and set up the assembly line themselves? Most notably, China, where 28 reactors are under construction (http://www.world-nuclear.org/info/inf63.html), already has a much better supply chain than the United States. So the government subsidy to B&W and TVA would likely pave the way for an assembly line in China! Unless…
The federal government orders a hundred reactors or entices utilities to do so with massive subsidies. We are talking about putting tens of billions of taxpayer dollars at risk – at a time when the air is full of talk of furloughs for teachers, air traffic controllers, and civilian Pentagon employees, among others, and of cutbacks in the training of military personnel.
What happens if a common design or manufacturing problem is discovered, as it was with the Dreamliner batteries? How is a mass manufacturer of reactors, whose internals become radioactive after commissioning, going to recall them or their major components? No word on this from Mr. Stout or, as far as I am aware, from any of the SMR enthusiasts. For safety issues, see the testimony of Dr. Ed Lyman of the Union of Concerned Scientists, at http://www.ucsusa.org/assets/documents/nuclear_power/lyman-appropriations-subcom-7-14-11.pdf.
IEER and Physicians for Social Responsibility wrote an SMR fact sheet in 2010, outlining such concerns (http://ieer.org/resource/factsheets/small-modular-reactors-solution/). Those concerns have only deepened with time. SMRs are highly unlikely to provide the industry the nuclear renaissance that has so far eluded it. Indeed, by the time the first ones are being commissioned, solar photovoltaics, smart grids, and a host of other distributed electricity technologies will be more economical; wind-generated electricity already is. No one has a perfect crystal ball in the energy business, but mine tells me that SMRs will not solve our CO2 woes. They are likely to be economically obsolete before this decade is done – and the end of this decade is the target date for the first mPower reactor to come on line. So why go there, to a time when there are going to be only costs and little prospect of benefits?
This sentence was corrected on April 3, 2013. The original text “And the federal government has obliged by committing $452 million to Babcock & Wilcox, designer of the mPower, 180 megawatt reactor, and the TVA to advance design and certification.” is incorrect.
Published on February 26, 2013 by Arjun Makhijani, Ph.D in News
Three decades after the Nuclear Regulatory Commission first promulgated rules for the disposal of low-level waste, it is proposing to massively relax them. The original rules were none too strict. As you can see from the calculation in the comments I made yesterday on the proposed rule, the existing rules would allow contamination of groundwater at hundreds of times the drinking water limit in the case of disposal of graphite made radioactive by use in a reactor. (This is a Department of Energy calculation, used in my comments as an illustration of what is now allowed under NRC rules.) Instead of making the rules better and more rational, and instead of filling the health protection gaps, a massive relaxation is proposed. For instance, the allowable pollution from plutonium, uranium, thorium, strontium-90, radioactive iodine, and many other radioactive materials from low-level waste disposal would be allowed to increase greatly, because the dose limits for individual human organs are to be eliminated under the proposal. The proposed rule also does not define the term “member of the public.” So far, in calculating compliance, infants and children have not been explicitly taken into account.
Take a look at the proposed rules. Read my comments, and also read my letter to Allison Macfarlane, the new Chairman of the Nuclear Regulatory Commission. I had a very interesting meeting with her in November and I hope she will follow up to get children into the picture and, as a scientist, to straighten out the NRC bureaucracy on fixing at least egregious scientific errors.
I am going to write a bit more broadly on science and democracy this year.
Happy New Year
The political temperature between Japan and China is rising again over the Senkaku/Diaoyu islands. Once more oil appears to be a principal issue – as it was in the period leading up to the Japanese attack on Pearl Harbor. The road to Pearl Harbor, and from there to the atomic bombings of Hiroshima and Nagasaki that have shaped so much of the world ever since, needs to be clearly illuminated, now more than at any time since the end of World War II. The question of whether Japan should consider developing its own nuclear weapons is moving into the political discourse and even gaining some acceptability. The newly formed right-wing Japan Restoration Party, headed by Shintaro Ishihara, the former Governor of Tokyo who resigned to run for national office, won 61 of the 480 seats in the lower house of Japan’s Parliament (the Diet). Mr. Ishihara has “suggested there is a need for Japan to arm itself with nuclear weapons, expand the military and revise the pacifist constitution,” according to news reports. See more: http://www.theprovince.com/news/Nationalists+take+power+Japan+fire+warning+shot+China/7707292/story.html#ixzz2FLjApjLp
On August 4, 2012, I gave a talk in Santa Fe, New Mexico, on the history of US-Japanese relations that led to rising tensions and the bombing of Pearl Harbor, and on the events from that time until the use of the atom bombs on Japan. More than 67 years after those bombings, few know that Japanese forces were designated the preferred target for the atom bombs on May 5, 1943, long before the bombs were built and well before anyone knew when the war would end. In fact, Germany was explicitly de-targeted on that same date by the Military Policy Committee. Watch a video of the talk here.
This speech takes a perspective that differs in many ways from those common in US discourse about the bombings. One side discusses only the evidence that the bombings were unjustified; the other points to Japanese militarism and the intensity of the violence in the Pacific Theater of World War II to justify the use of the bombs. I sought to affirm the truths in both arguments but added much that has been missing. So I would particularly welcome your comments on this speech and blog post. If you think you’ve learned something new, we encourage you to ask radio and television stations to use this material. It was broadcast on KEXP in Seattle shortly after the anniversary of Pearl Harbor earlier this month.
The Heinrich Böll Foundation has created a new site to inform the public about Germany’s historic energy transition, or “Energiewende”: http://www.EnergyTransition.de. The aim is to reduce greenhouse gas emissions by 80 percent by 2050. A renewable electricity sector is a principal part of this goal. Germany already produces 26% of its electricity from renewables and is set to exceed its target of 40% by 2020. As evidence of severe climate disruption mounts, the German Energiewende provides some hope. This kind of transformation was a gleam in my eye when I finished Carbon-Free and Nuclear-Free in 2007. It was my hope for the United States. I still hope that action by states, corporations, individuals, and even the federal government (through CO2 rules, efficiency standards, etc.) will get us to a fully renewable energy sector by 2050 in the United States and worldwide.
Published on November 29, 2012 by Arjun Makhijani, Ph.D in Energy Systems
On November 20, President Obama announced funding to develop small modular reactors. The US went from small reactors to large ones to get economies of scale: power reactor generating capacity goes up faster than material requirements and costs, one of the sources of economies of scale. Despite that, large reactors are very expensive, and the so-called nuclear renaissance is sputtering. So now the new nuclear nirvana is a return to small reactors. It is unlikely to work. Government should not still be subsidizing nuclear power 55 years after the first “commercial” nuclear power plant came on line in 1957. As it did the first time, the government is rushing in without due consideration of many of the problems.
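The "capacity rises faster than cost" point is the classic power-law scaling of process plants. A minimal sketch, assuming the common six-tenths cost exponent (an illustrative assumption, not a figure specific to any reactor design):

```python
# Why reactors grew: under an assumed power-law scaling with the classic
# "six-tenths rule" exponent, capital cost rises as capacity**0.6, so the
# cost per kilowatt falls as plants get bigger.
def relative_cost(capacity_mw, base_mw=180, exponent=0.6):
    """Cost relative to a base-size plant, under the assumed scaling law."""
    return (capacity_mw / base_mw) ** exponent

for mw in (180, 1100):
    cost = relative_cost(mw)
    per_kw = cost / (mw / 180)
    print(f"{mw} MW plant: total cost x{cost:.2f}, cost per kW x{per_kw:.2f}")
```

Under this assumption, a 1,100 MW plant costs about three times as much as a 180 MW plant but delivers about six times the power, so its cost per kilowatt is roughly half. This is the scale economy that SMRs must replace with economies of replication.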
A decade ago, concern about climate disruption focused mainly on mitigation: how could the world drastically reduce greenhouse gas emissions to curb the severity and frequency of extreme weather events? With global treaty efforts in tatters and Washington in gridlock, however, the focus began to shift to adaptation: how can the damage from climate change be reduced?
Even a cursory look at the destruction wrought by Hurricane Sandy – a waterlogged landscape, natural gas explosions, devastating fires, shortages of food, water, and gasoline, and vast areas without electricity — makes it clear that we must do both.
Thoroughly revamping the country’s century-old electrical infrastructure is a critical starting point. We need a system that is much more resistant to damage and recovers more quickly. One way to accomplish both goals was illustrated at Japan’s Tohoku-Fukushi University after last year’s devastating tsunami. The university’s electric power generation system consists of local natural gas-fired generators, fuel cells, solar photovoltaics, and storage batteries. Because of this microgrid, essential facilities, including the water plant, elevators, and lighting, kept functioning even as much of the rest of the larger grid was swept away. That allowed vital nursing facilities and clinic and laboratory equipment to keep running. (Learn more about the Tohoku-Fukushi microgrid and about other microgrid examples at the Lawrence Berkeley Lab website.)
(Courtesy of DOE/NREL. Credit: Connie Komomua.)
Normally, a microgrid functions as part of a larger regional or national system. Electricity is generated, stored, and supplied locally. At the same time, power is exchanged with the rest of the grid to reduce costs and maintain a high level of reliability and performance. In an emergency, however, a microgrid will cut itself off automatically from the stricken network. Instead, it goes into “island” mode, continuing to supply local customers’ essential needs. That would prevent problems like the one during Hurricane Sandy, when an explosion at a single substation caused a massive blackout in Lower Manhattan. Of course, microgrids cannot protect specific locations from flooding or damage; that is a different kind of problem. But with a system of interconnected microgrids, much of the essential equipment in Lower Manhattan that was out of the reach of flooding would have kept operating.
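The automatic switch into island mode can be pictured as a simple control decision. The sketch below is purely illustrative, with an assumed frequency threshold; real microgrid controllers (governed by interconnection standards such as IEEE 1547) monitor many more signals and manage resynchronization carefully:

```python
# A minimal, illustrative sketch of microgrid "islanding" logic.
# The 0.5 Hz threshold is an assumed value for this example only.
NOMINAL_HZ = 60.0
TOLERANCE_HZ = 0.5  # assumed band for declaring the wider grid healthy

class Microgrid:
    def __init__(self):
        self.islanded = False

    def update(self, grid_frequency_hz):
        """Island when the wider grid fails; reconnect when it recovers."""
        grid_healthy = abs(grid_frequency_hz - NOMINAL_HZ) <= TOLERANCE_HZ
        self.islanded = not grid_healthy
        return "island mode" if self.islanded else "grid-connected"

mg = Microgrid()
print(mg.update(60.0))  # healthy grid: exchange power with the wider system
print(mg.update(0.0))   # grid lost: serve essential local loads only
```

The key design point is that the decision is local and automatic: no operator at a distant control center needs to act for the microgrid to keep its essential loads energized.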
Putting microgrids at the core of the transformation of the electrical system will end total dependence on a vulnerable, overly centralized system. The replacement will be a distributed, intelligent system whose essential parts are much more likely to function without disruption during extreme events. In addition, a system based on microgrids is well matched to greatly increasing the efficiency of electricity use. The higher the efficiency of use, the larger the number of functions a microgrid in island mode can supply. Higher efficiency also means that a much larger part of the economy can keep functioning at any given level of power. Buildings that are well insulated will stay warm longer without the heating system running; food will be preserved much longer without power in highly efficient refrigerators. Crucially, this technology, built for adapting to climate disruption, will also mitigate it by helping to reduce greenhouse gas emissions.
As they consider how to protect the region from extreme storms and floods, Governors Christie and Cuomo and Mayor Bloomberg should appoint a task force to create a roadmap for building a distributed, resilient, efficient, and intelligent grid in New York City, Long Island, and on the Jersey shore. Such a project could be the core of the infrastructural transformation that is needed all along the Gulf and Atlantic Coasts. Interconnected microgrid networks can enable people and the economy to flourish in the new normal of more frequent and more violent weather events.
Arjun Makhijani is senior engineer and president of the Institute for Energy and Environmental Research; he has consulted with electric utilities and several agencies of the United Nations on energy issues.
Published on November 05, 2012 by Arjun Makhijani, Ph.D in Energy Systems