Book review of Heinberg’s “Afterburn: Society Beyond Fossil Fuels”

Preface. This book has 15 essays Heinberg wrote from 2011 to 2014, many of them available for free online. These are some of my Kindle notes on the parts that interested me, so to you they will be disjointed and perhaps not what you would have chosen as important — but they give you an idea of what a great writer Heinberg is and will hopefully inspire you to buy his book.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Heinberg, R. 2015. Afterburn: Society Beyond Fossil Fuels. New Society Publishers.

The most obvious criticism that could be leveled at the book “The Party’s Over”, which came out in 2005, is the simple observation that, as of 2014, world oil production is increasing, not declining. However, the following passage points to just how accurate the leading peakists were in forecasting trends: “Colin Campbell estimates that extraction of conventional oil will peak before 2010; however, because more unconventional oil—including oil sands, heavy oil, and oil shale—will be produced during the coming decade, the total production of fossil-fuel liquids (conventional plus unconventional) will peak several years later. According to Jean Laherrère, that may happen as late as 2015.”

In “The Party’s Over,” I also summarized Colin Campbell’s view that “the next decade will be a ‘plateau’ period, in which recurring economic recessions will result in lowered energy demand, which will in turn temporarily mask the underlying depletion trend.”

Economics 101 tells us that supply of and demand for a commodity like oil (which happens to be our primary energy source) must converge at the current market price, but no economist can guarantee that the price will be affordable to society. High oil prices are sand in the gears of the economy. As the oil industry is forced to spend ever more money to access ever-lower-quality resources, the result is a general trend toward economic stagnation. None of the peak oil deniers warned us about this.

Peakists within the oil industry are usually technical staff (usually geologists, seldom economists, and never PR professionals) and are only free to speak out on the subject once they’ve retired. The industry has two big reasons to hate peak oil. First, company stock prices are tied to the value of booked oil reserves; if the public (and government regulators) were to become convinced that those reserves were problematic, the companies’ ability to raise money would be seriously compromised—and oil companies need to raise lots of money these days to find and produce ever-lower-quality resources. It’s thus in the interest of companies to maintain an impression of (at least potential) abundance.

The problem is hidden from view by gross oil and natural gas production numbers that look and feel just fine—good enough to crow about. President Obama did plenty of crowing in his 2014 State of the Union address, where he touted “More oil produced at home than we buy from the rest of the world—the first time that’s happened in nearly 20 years.” It’s true: US crude oil production increased from about 5 million barrels per day (mb/d) to nearly 7.75 mb/d from 2009 through 2013, with imports still over 7.5 mb/d. And American natural gas production has been at an all-time high. Energy problem? What energy problem?

We’ll never run out of any fossil fuel, in the sense of extracting every last molecule of coal, oil, or gas. Long before we get to that point, we will confront the dreaded double line in the diagram, labeled “energy in equals energy out.” At that stage, it will cost as much energy to find, pump, transport, and process a barrel of oil as the oil’s refined products will yield when burned in even the most perfectly efficient engine (I use oil merely as the most apt example; the same principle applies for coal, natural gas, or any other fossil fuel). As we approach the energy break-even point, we can expect the requirement for ever-higher levels of investment in exploration and production on the part of the petroleum industry; we can therefore anticipate higher prices for finished fuels. Incidentally, we can also expect more environmental risk and damage from the process of fuel “production” (i.e., extraction and processing), because we will be drilling deeper and going to the ends of the Earth to find the last remaining deposits, and we will be burning ever-dirtier fuels. Right now that’s exactly what is happening.

Unless oil prices remain at current stratospheric levels, significant expansion of tar sands operations may be uneconomic.

Lower energy profits from unconventional oil inevitably show up in the financials of oil companies. Between 1998 and 2005, the industry invested $1.5 trillion in exploration and production, and this investment yielded 8.6 million barrels per day in additional world oil production. Between 2005 and 2013, the industry spent $4 trillion on E&P, yet this more-than-doubled investment produced only 4 mb/d in added production.

 

It gets worse: all net new production during the 2005–13 period was from unconventional sources (primarily tight oil from the United States and tar sands from Canada); of the $4 trillion spent since 2005, it took $350 billion to achieve a bump in their production. Subtracting unconventionals from the total, world oil production actually fell by about a million barrels a day during these years. That means the oil industry spent more than $3.5 trillion to achieve a decline in overall conventional production.
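The collapse in capital efficiency described above is easy to quantify. A back-of-the-envelope sketch, using only the round figures quoted in the review (not precise industry data):

```python
# Toy arithmetic from the investment figures quoted above: $1.5 trillion
# bought 8.6 mb/d of new production in 1998-2005, while $4 trillion
# bought only 4 mb/d in 2005-2013.

def cost_per_added_bpd(capex_trillions, added_mbd):
    """Capital spent per barrel-per-day of new production capacity."""
    return (capex_trillions * 1e12) / (added_mbd * 1e6)

early = cost_per_added_bpd(1.5, 8.6)  # 1998-2005
late = cost_per_added_bpd(4.0, 4.0)   # 2005-2013

print(f"1998-2005: ${early:,.0f} per added barrel/day")
print(f"2005-2013: ${late:,.0f} per added barrel/day")
print(f"Cost ratio: {late / early:.1f}x")  # roughly 5.7x
```

On these numbers, each new barrel per day of capacity cost nearly six times as much in the later period — the financial face of declining resource quality.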

Daniel L. Davis described the situation in a recent article in the Financial Times: “The 2013 [World Energy Outlook, published by the International Energy Agency] has the oil industry’s upstream [capital expenditure] rising by nearly 180% since 2000, but the global oil supply (adjusted for energy content) by only 14%. The most straightforward interpretation of this data is that the economics of oil have become completely dislocated from historic norms since 2000 (and especially since 2005), with the industry investing at exponentially higher rates for increasingly small incremental yields of energy.”

The costs of oil exploration and production are currently rising at about 10.9% per year, according to Steve Kopits of the energy analytics firm Douglas-Westwood. This is squeezing the industry’s profit margins, since it’s getting ever harder to pass these costs on to consumers. In 2010, The Economist magazine discussed rising costs of energy production, musing that “the direction of change seems clear. If the world were a giant company, its return on capital would be falling.”
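Kopits’s 10.9% annual cost inflation compounds quickly. The implied doubling time (my arithmetic, not a figure from the book) follows from the standard compound-growth formula:

```python
import math

# Compound growth: costs double when (1 + rate)**t = 2,
# i.e. t = ln(2) / ln(1 + rate).
rate = 0.109  # ~10.9%/year cost inflation (Kopits figure quoted above)
doubling_years = math.log(2) / math.log(1 + rate)
print(f"E&P costs double roughly every {doubling_years:.1f} years")  # ~6.7
```

In other words, if the trend held, exploration and production costs would double roughly every seven years.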

The critical relationship between energy production and the energy cost of extraction is now deteriorating so rapidly that the economy as we have known it for more than two centuries is beginning to unravel.

The average energy profit ratio (a.k.a. energy returned on energy invested, or EROEI) for US oil production has fallen from 100:1 to 10:1, and the downward trend is accelerating as more and more oil comes from tight deposits (shale) and deepwater. Canada’s prospects are perhaps even more dismal than those of the United States: the tar sands of Alberta have an EROEI that ranges from 3.2:1 to 5:1. A 5-to-1 profit ratio might be spectacular in the financial world, but in energy terms this is alarming. Everything we do in industrial societies—education, health care, research, manufacturing, transportation—uses energy. Unless our investment of energy in producing more energy yields an average profit ratio of roughly 10:1 or more, it may not be possible to maintain an industrial (as opposed to an agrarian) mode of societal organization over the long run.
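The alarm over a 5:1 ratio becomes clearer when each ratio is converted into the share of gross energy actually left over for society, which is (EROEI − 1)/EROEI — a quantity that falls off a cliff at low ratios. A small sketch using the ratios quoted above:

```python
# Net energy share: the fraction of gross energy output left after the
# energy cost of extraction is paid back. Ratios below are those quoted
# in the text (100:1 and 10:1 for US oil, 3.2:1 to 5:1 for tar sands).

def net_energy_share(eroei):
    """Fraction of gross energy output left after extraction costs."""
    return (eroei - 1) / eroei

for label, eroei in [("Early US oil", 100), ("US oil today", 10),
                     ("Tar sands (high)", 5), ("Tar sands (low)", 3.2)]:
    print(f"{label:17s} EROEI {eroei:>5}:1 -> {net_energy_share(eroei):.0%} net")
```

Going from 100:1 to 10:1 only drops the net share from 99% to 90%, but the slide from 10:1 toward 3:1 eats rapidly into the surplus that runs everything else in society — which is why the trend, not just the current number, is what alarms Heinberg.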

If our economy runs on energy and our energy prospects are gloomy, how is it that the economy is recovering? The simplest answer is, it’s not—except as measured by a few misleading gross statistics.

Unemployment statistics don’t include people who’ve given up looking for work. Labor force participation rates are at the lowest level in 35 years.

Claims of economic recovery fixate primarily on one number: gross domestic product, or GDP. Is any society able to expand its debt endlessly? If there were indeed limits to a country’s ability to perpetually grow GDP by increasing its total debt (government plus private), a warning sign would likely come in the form of a trend toward diminishing GDP returns on each new unit of credit created. Bingo: that’s exactly what we’ve been seeing in the United States in recent years. Back in the 1960s, each dollar of increase in total US debt was reflected in nearly a dollar of rise in GDP. By 2000, each new dollar of debt corresponded with only 20 cents of GDP growth. The trend line looked set to reach zero by about 2015.
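The diminishing-returns trend in debt can be sketched with a naive two-point extrapolation from the figures quoted above: roughly $1.00 of GDP per new debt dollar in the 1960s (taken here as 1960, an assumption) and roughly $0.20 by 2000. The book’s “about 2015” zero-crossing presumably fits a fuller dataset; this toy version only illustrates the shape of the trend:

```python
# Two-point linear extrapolation of marginal GDP growth per dollar of
# new total US debt. Anchor years and values are the review's round
# numbers; the exact zero-crossing depends on the anchors chosen.

def marginal_gdp_per_debt(year, y0=1960, v0=1.00, y1=2000, v1=0.20):
    """Linear interpolation/extrapolation between the two quoted points."""
    slope = (v1 - v0) / (y1 - y0)  # change in GDP-per-debt-dollar per year
    return v0 + slope * (year - y0)

slope = (0.20 - 1.00) / (2000 - 1960)
zero_year = 1960 + (0.0 - 1.00) / slope
print(f"Two-point trend line hits zero around {zero_year:.0f}")  # ~2010
```

Whichever anchors one picks, the line reaches zero in the 2010s — the point at which, on this trend, new debt would buy no growth at all.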

We won’t quickly and easily switch to electric cars. For that to happen, the economy would have to keep growing, so that more and more people could afford to buy new (and more costly) automobiles. A more likely scenario: as fuel gets increasingly expensive the economy will falter, rendering the transition to electric cars too little, too late.

Most nations have concluded that nuclear power is too costly and risky, and supplies of uranium, the predominant fuel for nuclear power, are limited anyway. Thorium, breeder, fusion, and other nuclear alternatives may hold theoretical promise, but there is virtually no hope that we can resolve the remaining myriad practical challenges, commercialize the technologies, and deploy tens of thousands of new power plants within just a few decades.

 

Many economists and politicians don’t buy the assertion that energy is at the core of our species-wide survival challenge. They think the game of human success-or-failure revolves around money, military power, or technological advancement. If we toggle prices, taxes, and interest rates; maintain proper trade rules; invest in technology research and development (R&D); and discourage military challenges to the current international order, then growth can continue indefinitely and everything will be fine. Climate change and resource depletion are peripheral problems that can be dealt with through pricing mechanisms or regulations.

Some policy wonks buy “it’s all about energy” but are jittery about “renewables are the future” and won’t go anywhere near “growth is over.” A few of these folks like to think of themselves as environmentalists (sometimes calling themselves “bright green”)—including the Breakthrough Institute and writers like Stewart Brand and Mark Lynas. A majority of government officials are effectively in the same camp, viewing nuclear power, natural gas, carbon capture and storage (“clean coal”), and further technological innovation as pathways to solving the climate crisis without any need to curtail economic growth.

Other environment-friendly folks buy “it’s all about energy” and “renewables are the future” but still remain allergic to the notion that “growth is over.” They say we can transition to 100% renewable power with no sacrifice in terms of economic growth, comfort, or convenience. Stanford professor Mark Jacobson and Amory Lovins of Rocky Mountain Institute are leaders of this chorus. Theirs is a reassuring message, but if it doesn’t happen to be factually true (and there are many energy experts who argue persuasively that it isn’t), then it’s of limited helpfulness because it fails to recommend the kinds or degrees of change in energy usage that are essential to a successful transition.

The general public tends to listen to one or another of these groups, all of which agree that the climate and energy challenge of the 21st century can be met without sacrificing economic growth. This widespread aversion to the “growth is over” conclusion is entirely understandable: during the last century, the economies of industrial nations were engineered to require continual growth in order to produce jobs, returns on investments, and increasing tax revenues to fund government services.

 

Anyone who questions whether growth can continue is deeply subversive. Nearly everyone has an incentive to ignore or avoid it. It’s not only objectionable to economic conservatives; it is also abhorrent to many progressives who believe economies must continue to grow so that the working class can get a larger piece of the proverbial pie, and the “underdeveloped” world can improve standards of living. But ignoring uncomfortable facts seldom makes them go away. Often it just makes matters worse. Back in the 1970s, when environmental limits were first becoming apparent, catastrophe could have been averted with only a relatively small course correction—a gradual tapering of growth and a slow decline in fossil fuel reliance. Now, only a “cold turkey” approach will suffice. If a critical majority of people couldn’t be persuaded then of the need for a gentle course correction, can they now be talked into undertaking deliberate change on a scale and at a speed that might be nearly as traumatic as the climate collision we’re trying to avoid? To be sure, there are those who do accept the message that “growth is over”: most are hard-core environmentalists or energy experts. But this is a tiny and poorly organized demographic. If public relations consists of the management of information flowing from an organization to the public, then it surely helps to start with an organization wealthy enough to be able to afford to mount a serious public relations campaign.

All animals and plants deal with temporary energy subsidies in basically the same way: the pattern is easy to see in the behavior of songbirds visiting the feeder outside my office window. They eat all the seed I’ve put out for them until the feeder is empty. They don’t save some for later or discuss the possible impacts of their current rate of consumption. Yes, we humans have language and therefore the theoretical ability to comprehend the likely results of our current collective behavior and alter it accordingly. We exercise this ability in small ways, where the costs of behavior change are relatively trivial—enacting safety standards for new automobiles, for example. But where changing our behavior might entail a significant loss of competitive advantage or an end to economic growth, we tend to act like finches.

 

Some business-friendly folks with political connections soon became alarmed at both the policy implications of—and the likely short-term economic fallout from—the way climate science was developing, and decided to do everything they could to question, denigrate, and deny the climate change hypothesis. Their effort succeeded: Especially in the United States, belief in climate change now aligns fairly closely with political affiliation. Most elected Democrats agree that the issue is real and important, and most of their Republican counterparts are skeptical. Lacking bipartisan support, legislative climate policy has languished. From a policy standpoint, climate change is effectively an energy issue, since reducing carbon emissions will require a nearly complete revamping of our energy systems. Energy is, by definition, humanity’s most basic source of power, and since politics is a contest over power (albeit social power), it should not be surprising that energy is politically contested. A politician’s most basic tools are power and persuasion, and the ability to frame issues. And the tactics of political argument inevitably range well beyond logic and critical thinking. Therefore politicians can and often do make it harder for people to understand energy issues than would be the case if accurate, unbiased information were freely available. So here is the reason for the paradox stated in the first paragraph: As energy issues become more critically important to society’s economic and ecological survival, they become more politically contested; and as a result, they tend to become obscured by a fog of exaggeration, half-truth, omission, and outright prevarication.

Who is right? Well, this should be easy to determine. Just ignore the foaming rhetoric and focus on research findings. But in reality that’s not easy at all, because research is itself often politicized. Studies can be designed from the outset to give results that are friendly to the preconceptions and prejudices of one partisan group or another. For example, there are studies that appear to show that the oil and natural gas production technique known as hydraulic fracturing (or “fracking”) is safe for the environment. With research in hand, industry representatives calmly inform us that there have been no confirmed instances of fracking fluids contaminating water tables. The implication: environmentalists who complain about the dangers of fracking simply don’t know what they’re talking about.

 

Renewable energy is just as contentious. Mark Jacobson, professor of environmental engineering at Stanford University, has coauthored a series of reports and scientific papers arguing that solar, wind, and hydropower could provide 100% of world energy by 2030. Clearly, Jacobson’s work supports Politician B’s political narrative by showing that the climate problem can be solved with little or no economic sacrifice.

If Jacobson is right, then it is only the fossil fuel companies and their supporters that stand in the way of a solution to our environmental (and economic) problems. The Sierra Club and prominent Hollywood stars have latched onto Jacobson’s work and promote it enthusiastically. However, Jacobson’s publications have provoked thoughtful criticism, some of it from supporters of renewable energy, who argue that his “100 percent renewables by 2030” scenario ignores hidden costs, land use and environmental problems, and grid limits. Jacobson has replied to his critics, well, energetically.

Here’s a corollary to my thesis: Political prejudices tend to blind us to facts that fail to fit any conventional political agendas. All political narratives need a villain and a (potential) happy ending. While Politicians A and B might point to different villains (government bureaucrats and regulators on one hand, oil companies on the other), they both envision the same happy ending: economic growth, though it is to be achieved by contrasting means. If a fact doesn’t fit one of these two narratives, the offended politician tends to ignore it (or attempt to deny it). If it doesn’t fit either narrative, nearly everyone ignores it. Here’s a fact that apparently fails to comfortably fit into either political narrative: The energy and financial returns on fossil fuel extraction are declining—fast.

The top five oil majors (ExxonMobil, BP, Shell, Chevron, Total) have seen their aggregate production fall by more than 25% over the past 12 years—but it’s not for lack of effort. Drilling rates have doubled. Rates of capital investment in exploration and production have likewise doubled. Oil prices have quadrupled. Yet actual global rates of production for regular crude oil have flattened, and all new production has come from expensive unconventional sources such as tar sands, tight oil, and deepwater oil. The fossil fuel industry hates to admit to facts like this that investors find scary—especially now, as the industry needs investors to pony up ever-larger bets to pay for ever-more-extreme production projects.

 

For the past few years, high oil prices have provided the incentive for small, highly leveraged, and risk-friendly companies to go after some of the last, worst oil and gas production prospects in North America—formations known to geologists as “source rocks,” which require operators to use horizontal drilling and fracking technology to free up trapped hydrocarbons. The ratio of energy returned to energy invested in producing shale gas and tight oil from these formations is minimal. While US oil and gas production rates have temporarily spiked, all signs indicate that this will be a brief boom.

During the 1930s, the US-based National Association of Manufacturers enlisted a team of advertisers, marketers, and psychologists to formulate a strategy to counter government efforts to plan and manage the economy in the wake of the Depression. They proposed a massive, ongoing ad campaign to equate consumerism with “The American Way.” Progress would henceforth be framed entirely in economic terms, as the fruit of manufacturers’ ingenuity. Americans were to be referred to in public discourse (newspapers, magazines, radio) as consumers, and were to be reminded at every opportunity of their duty to contribute to the economy by purchasing factory-made products, as directed by increasingly sophisticated and ubiquitous advertising cues.

Veblen asserted in his widely cited book The Theory of the Leisure Class that there exists a fundamental split in society between those who work and those who exploit the work of others; as societies evolve, the latter come to constitute a “leisure class” that engages in “conspicuous consumption.” Veblen saw mass production as a way to universalize the trappings of leisure so the owning class could engage workers in an endless pursuit of status symbols, thus deflecting workers’ attention from society’s increasingly unequal distribution of wealth and their own political impotence.

As the critics have insisted all along, consumerism as a system cannot continue indefinitely; it contains the seeds of its own demise. And the natural constraints to consumerism—fossil fuel limits, environmental sink limits (leading to climate change, ocean acidification, and other pollution dilemmas), and debt limits—appear to be well within sight. While there may be short-term ways of pushing back against these limits (unconventional oil and gas, geoengineering, quantitative easing), there is no way around them.

 

Consumerism is inherently doomed. But since consumerism now effectively is the economy (70% of US GDP comes from consumer spending), when it goes down the economy goes too. A train wreck is foreseeable. No one knows exactly when the impact will occur or precisely how bad it will be. But it is possible to say with some confidence that this wreck will manifest itself as an economic depression accompanied by a series of worsening environmental disasters and possibly wars and revolutions. This should be news to nobody by now, as recent government and UN reports spin out the scenarios in ever grimmer detail: rising sea levels, waves of environmental refugees, droughts, floods, famines, and collapsing economies. Indeed, looking at what’s happened since the start of the global economic crisis in 2007, it’s likely the impact has already commenced—though it is happening in agonizingly slow motion as the system fights to maintain itself.

World conventional crude oil production has been flat-to-declining since about 2005. Declines of output from the world’s supergiant oilfields will steepen in the years ahead. Petroleum is essential to the world economy and there is no ready and sufficient substitute. The potential consequences of peak oil include prolonged economic crisis and resource wars.

Other unconventionals, like extra-heavy oil in Venezuela and kerogen (also known as “oil shale,” and not to be confused with shale oil) in the American West, will be even slower and more expensive to produce.

Why no collapse yet? Governments and central banks have inserted fingers in financial levees. Most notably, the Federal Reserve rushed to keep crisis at bay by purchasing tens of billions of dollars in US Treasury bonds each month, year after year, using money created out of thin air at the moment of purchase.

Virtually all of the Fed’s money has stayed within financial circles; that’s a big reason why the richest Americans have gotten much richer in the past few years, while most regular folks are treading water at best.

What has the too-big-to-fail, too-greedy-not-to financial system done with the Fed’s trillions in free money? Blown another stock market bubble and piled up more leveraged bets. No one knows when the latest bubble will pop, but when it does the ensuing crisis may be much worse than that of 2008. Will central banks then be able to jam more fingers into the leaky levee? Will they have enough fingers?

ExxonMobil is inviting you to take your place in a fossil-fueled 21st century. But I would argue that Exxon’s vision of the future is actually just a forward projection from our collective rearview mirror. Despite its hi-tech gadgetry, the oil industry is a relic of the days of the Beverly Hillbillies. This fossil-fueled sitcom of a world that we all find ourselves trapped within may on the surface appear to be characterized by smiley-faced happy motoring, but at its core it is monstrous and grotesque. It is a zombie energy economy.

 

Oil and gas are finite resources, so it was clear from the start that, as we extracted and burned them, we were in effect stealing from the future. In the early days, the quantities of these fuels available seemed so enormous that depletion posed only a theoretical limit to consumption. We knew we would eventually empty the tanks of Earth’s hydrocarbon reserves, but that was a problem for our great-great-grandkids to worry about.

In a few years we will look back on late 20th-century America as a time and place of advertising-stoked consumption that was completely out of proportion to what Nature can sustainably provide. I suspect we will think of those times—with a combination of longing and regret—as a lost golden age of abundance, but also an era of foolishness and greed that put the entire world at risk.

Making the best of our new circumstances will mean finding happiness in designing higher-quality products that can be reused, repaired, and recycled almost endlessly and finding fulfillment in human relationships and cultural activities rather than mindless shopping. Fortunately, we know from recent cross-cultural psychological studies that there is little correlation between levels of consumption and levels of happiness. That tells us that life can in fact be better without fossil fuels. So whether we view these as hard times or as times of…

Nations could, in principle, forestall social collapse by providing the bare essentials of existence (food, water, housing, medical care, family planning, education, employment for those able to work, and public safety) universally and in a way that could be sustained for some time, while paying for this by deliberately shrinking other features of society—starting with military and financial sectors—and by taxing the wealthy. The cost of covering the basics for everyone is still within the means of most nations. Providing human necessities would not remove all the fundamental problems now converging (climate change, resource depletion, and the need for fundamental economic reforms), but it would provide a platform of social stability and equity to give the world time to grapple with deeper, existential challenges. Unfortunately, many governments are averse to this course of action. And if they did provide universal safety nets, ongoing economic contraction might still result in conflict, though in this instance it might arise from groups opposed to the perceived failures of “big government.” Further, even in the best instance, safety nets can only buy time. The capacity of governments to maintain flows of money and goods will erode. Thus it will increasingly be up to households and communities to provide the basics for themselves while reducing their dependence upon, and vulnerability to, centralized systems of financial and governmental power. This will set up a fundamental contradiction. When the government tries to provide people the basics, power is centralized—but as the capacity of the government wanes, it can feel threatened by people trying to provide the basics for themselves and act to discourage or even criminalize them.

Theorists on both the far left and far right of the political spectrum have advocated for the decentralization of food, finance, education, and other basic societal support systems for decades. Some efforts toward decentralization (such as the local food movement) have led to the development of niche markets.

The decentralized provision of basic necessities is not likely to flow from a utopian vision of a perfect or even improved society (as have some social movements of the past). It will emerge instead from iterative human responses to a daunting and worsening set of environmental and economic problems, and it will in many instances be impeded and opposed by politicians, bankers, and industrialists. It is this contest between traditional power elites and growing masses of disenfranchised poor and formerly middle-class people attempting to provide the necessities of life for themselves in the context of a shrinking economy that is shaping up to be the fight of the century.

When Civilizations Decline

In his benchmark 1988 book The Collapse of Complex Societies, archaeologist Joseph Tainter explained the rise and demise of civilizations in terms of complexity. He used the word complexity to refer to “the size of a society, the number and distinctiveness of its parts, the variety of specialized social roles that it incorporates, the number of distinct social personalities present, and the variety of mechanisms for organizing these into a coherent, functioning whole.”

 

Civilizations are complex societies organized around cities; they obtain their food from agriculture (field crops), use writing and mathematics, and maintain full-time division of labor. They are centralized, with people and resources constantly flowing from the hinterlands toward urban hubs.

Thousands of cultures have flourished throughout the human past, but there have only been about 24 civilizations. And all—except our current global industrial civilization (so far)—have ultimately collapsed.

Tainter describes the growth of civilization as a process of investing societal resources in the development of ever-greater complexity in order to solve problems. For example, in village-based tribal societies an arms race between tribes can erupt, requiring each village to become more centralized and complexly organized in order to fend off attacks. But complexity costs energy. As Tainter puts it, “More complex societies are costlier to maintain than simpler ones and require higher support levels per capita.” Since available energy and resources are limited, a point therefore comes when increasing investments become too costly and yield declining marginal returns. Even the maintenance of existing levels of complexity costs too much (citizens may experience this as onerous levels of taxation), and a general simplification and decentralization of society ensues—a process colloquially referred to as collapse.

During such times societies typically see sharply declining population levels, and the survivors experience severe hardship. Elites lose their grip on power. Domestic revolutions and foreign wars erupt. People flee cities and establish new, smaller communities in the hinterlands. Governments fall and new sets of power relations emerge. It is frightening to think about what collapse would mean for our current global civilization.

 

Nevertheless, as we are about to see, there are good reasons for concluding that our civilization is reaching the limits of centralization and complexity, that marginal returns on investments in complexity are declining, and that simplification and decentralization are inevitable. Thinking in terms of simplification, contraction, and decentralization is more accurate and helpful, and probably less scary, than contemplating collapse. It also opens avenues for foreseeing, reshaping, and even harnessing inevitable social processes so as to minimize hardship and maximize possible benefits.

Some of the effects of declining energy will be nonlinear and unpredictable, and could lead to a general collapse of civilization. Economic contraction will not be as gradual and orderly as economic expansion has been. Such effects may include an uncontrollable and catastrophic unwinding of the global system of credit, finance, and trade, or the dramatic expansion of warfare as a result of heightened competition for energy resources or the protection of trade privileges.

Further stimulus spending would require another massive round of government borrowing, and that would face strong domestic political headwinds as well as resistance from the financial community (in the form of credit downgrades, which would make further borrowing more expensive).

Without increasing and affordable energy flows a genuine economic recovery (meaning a return to growth in manufacturing and trade) may not be possible.

The evidence for the efficacy of austerity as a path to increased economic health is spotty at best in “normal” economic times. Under current circumstances, there is overwhelming evidence that it leads to declining economic performance as well as social unraveling. In nations where the austerity prescription has been most vigorously applied (Ireland, Greece, Spain, Italy, and Portugal), contraction has continued or even accelerated, and popular protest is on the rise.

Austerity is having similar effects in states, counties, and cities in the United States. State and local governments cut roughly half a million jobs during 2009–10; had they kept hiring at their previous pace to keep up with population growth, they would instead have added a half-million jobs. Meanwhile, due to low tax revenues, local governments are allowing paved roads to turn to gravel, closing libraries and parks, and laying off public employees.

It’s not hard to recognize a self-reinforcing feedback loop at work here. A shrinking economy means declining tax revenues, which make it harder for governments to repay debt. In order to avoid a credit downgrade, governments must cut spending. This shrinks the economy further, eventually resulting in credit downgrades anyway. That in turn raises the cost of borrowing. So government must cut spending even further to remain credit-worthy. The need for social spending explodes as unemployment, homelessness, and malnutrition increase, while the availability of social services declines. The only apparent way out of this death spiral is a revival of rapid economic growth. But if the premise above is correct, that is a mere pipedream.
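The feedback loop Heinberg describes can be sketched as a toy simulation. Every parameter below is hypothetical, chosen only to make the dynamic visible, not to model any real economy:

```python
# Toy model of the austerity death spiral: a shrinking economy lowers
# tax revenue, revenue shortfalls force spending cuts, and the cuts
# shrink the economy further. All parameters are illustrative only.
def austerity_spiral(gdp=100.0, tax_rate=0.3, years=10, multiplier=0.5):
    path = [gdp]
    prev_revenue = gdp * tax_rate
    for _ in range(years):
        revenue = gdp * tax_rate
        cut = max(0.0, prev_revenue - revenue)  # spending tracks falling revenue
        gdp = gdp * 0.99 - multiplier * cut     # contraction plus drag from cuts
        prev_revenue = revenue
        path.append(gdp)
    return path

path = austerity_spiral()
# GDP falls every year; in this toy world growth never resumes on its own.
```

However crude, the sketch shows why cutting spending in a contracting economy tends to deepen rather than cure the contraction.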

Centralized provision of the basics. In this scenario, nations directly provide jobs and basic necessities to the general public while deliberately simplifying, downsizing, or eliminating expendable features of society such as the financial sector and the military, and taxing those who can afford it—wealthy individuals, banks, and larger businesses—at higher rates. This is the path outlined at the start of the essay; at this point it is appropriate to add a bit more detail.

In many cases, centralized provision of basic necessities is relatively cheap and efficient. For example, since the beginning of the current financial crisis the US government has mainly gone about creating jobs by channeling tax breaks and stimulus spending to the private sector. But this has turned out to be an extremely costly and inefficient way of providing jobs, far more of which could be called into existence (per dollar spent) by direct government hiring. Similarly, the new US federal policy of increasing the public’s access to health care by requiring individuals to purchase private medical insurance is more costly than simply providing a universal government-run health insurance program, as every other industrial nation does. If Britain’s experience during and immediately after World War II is any guide, then better access to higher-quality food could be ensured with a government-run rationing program than through a fully privatized food system. And government banks could arguably provide a more reliable public service than private banks, which funnel enormous streams of unearned income to bankers and investors.

If all this sounds like an argument for utopian socialism, read on—it’s not. But there are indeed real benefits to be reaped from government provision of necessities, and it would be foolish to ignore them. A parallel line of reasoning goes like this.

 

Immediately after natural disasters or huge industrial accidents, the people impacted typically turn to the state for aid. As the global climate chaotically changes, and as the hunt for ever-lower-grade fossil energy sources forces companies to drill deeper and in more sensitive areas, we will undoubtedly see worsening weather crises, environmental degradation and pollution, and industrial accidents such as oil spills. Inevitably, more and more families and communities will be relying upon state-provided aid for disaster relief. Many people would be tempted to view an expansion of state support services with alarm as the ballooning of the powers of an already bloated central government. There may well be substance to this fear, depending on how the strategy is pursued. But it is important to remember that the economy as a whole, in this scenario, would be contracting—and would continue to contract—due to resource limits.

In any case, it’s hard to say how long this strategy could be maintained in the face of declining energy supplies. Eventually, central authorities’ ability to operate and repair the infrastructure necessary to continue supporting such services will erode.

If central governments seek to maintain complexity at the expense of more dispersed governmental nodes (city, county, and state governments), conflict between communities and sputtering national or global power hubs is likely. Communities may begin to withdraw streams of support from central authorities—and not only governmental authorities, but financial and corporate ones as well.

If communities have to contend with declining tax revenues, competition from larger governments, and predatory mega-corporations and banks, then nonprofit organizations—which support tens of thousands of local charity efforts—face perhaps even greater challenges. The current philanthropic model rests entirely upon assumed economic growth: foundation grants come from returns on the foundation’s investments (in the stock market and elsewhere). As economic growth slows and reverses, the world of nonprofit organizations will shake and crumble, and the casualties will include tens of thousands of social services agencies, educational programs, and environmental protection organizations . . . as well as countless symphony orchestras, dance ensembles, museums, and on and on. If national government loses its grip, if local governments are pinched simultaneously from above and below, and if nonprofit organizations are starved for funding, where will the means come from to support local communities with the social and cultural services they need?

Local movements to support localization—however benign their motives—may be perceived by national authorities as a threat.

Complications

Scenarios are not forecasts; they are planning tools. As prophecies, they’re not much more reliable than dreams. What really happens in the years ahead will be shaped as much by “black swan” events as by trends in resource depletion or credit markets. We know that environmental impacts from climate change will intensify, but we don’t know exactly where, when, or how severely those impacts will manifest; meanwhile, there is always the possibility of a massive environmental disaster not caused by human activity (such as an earthquake or volcanic eruption) occurring in such a location or on such a scale as to substantially alter the course of world events. Wars are also impossible to predict in terms of intensity and outcome, yet we know that geopolitical tensions are building.

The success of governments in navigating the transitions ahead may depend on measurable qualities and characteristics of governance itself. In this regard, there could be useful clues to be gleaned from the World Governance Index, which assesses governments according to criteria of peace and security, rule of law, human rights and participation, sustainable development, and human development. For 2011, the United States ranked number 32 (and falling: it was number 28 in 2008)—behind Uruguay, Estonia, and Portugal but ahead of China (number 140) and Russia (number 148).

One wonders how many big-government centralists of the left, right, or center—who often see the stability of the state, the status of their own careers, and the ultimate good of the people as being virtually identical—are likely to embrace such a prescription.

History teaches us at least as much as scenario exercises can. The convergence of debt bubbles, economic contraction, and extreme inequality is hardly unique to our historical moment. A particularly instructive and fateful previous instance occurred in France in the late 18th century. The result then was the French Revolution, which rid the common people of the burden of supporting an arrogant, entrenched aristocracy, while giving birth to ideals of liberty, equality, and universal brotherhood. However, the revolution also brought with it war, despotism, mass executions—and an utter failure to address underlying economic problems. So often, as happened then, nations suffering under economic contraction double down on militarism rather than downsizing their armies so as to free up resources. They go to war, hoping thereby both to win spoils and to give mobs of angry young men a target for their frustrations other than their own government. The gambit seldom succeeds; Napoleon made it work for a while, but not long. France and (most of) its people did survive the tumult. But then, at the dawn of the 19th century, Europe was on the cusp of another revolution—the fossil-fueled Industrial Revolution—and decades of economic growth shimmered on the horizon. Today we are just starting our long slide down the decline side of the fossil fuel supply curve.

The world supply of uranium is limited, and shortages are likely by mid-century even with no major expansion of power plants. And atomic power plants are tied to nuclear weapons proliferation.

None of this daunts Techno-Anthropocene proponents, who say new nuclear technology has the potential to fulfill the promises originally made for the current fleet of atomic power plants. The centerpiece of this new technology is the integral fast reactor (IFR). Unlike light water reactors (which comprise the vast majority of nuclear power plants in service today), IFRs would use sodium as a coolant. The IFR nuclear reaction features fast neutrons, and it more thoroughly consumes radioactive fuel, leaving less waste. Indeed, IFRs could use current radioactive waste as fuel. Also, they are alleged to offer greater operational safety and less risk of weapons proliferation.

Fast-reactor technology is highly problematic. Earlier versions of the fast breeder reactor (of which the IFR is a variant) were commercial failures and safety disasters. Proponents of the integral fast reactor, say the critics, overlook its exorbitant development and deployment costs and continued proliferation risks. The IFR would in theory only “transmute,” rather than eliminate, radioactive waste. Moreover, the technology is decades away from widespread implementation, and its use of liquid sodium as a coolant can lead to fires and explosions.

David Biello, writing in Scientific American, concludes that, “To date, fast neutron reactors have consumed six decades and $100 billion of global effort but remain ‘wishful thinking.’”

But we don’t have the luxury of limitless investment capital, and we don’t have decades in which to work out the bugs and build out this complex, unproven technology.

Degrading topsoil in order to produce enough grain to feed ten billion people? Just build millions of hydroponic greenhouses (that need lots of energy for their construction and operation). As we mine deeper deposits of metals and minerals and refine lower-grade ores, we’ll require more energy.

Governments are probably incapable of leading a strategic retreat in our war on nature, as they are systemically hooked on economic growth. But there may be another path forward. Perhaps citizens and communities can initiate a change of direction.

Wes Jackson of the Land Institute in Salina, Kansas, has spent the past four decades breeding perennial grain crops (he points out that our current annual grains are responsible for the vast bulk of soil erosion, to the tune of 25 billion tons per year).

Population Media Center is working to ensure we don’t get to ten billion humans by enlisting creative artists in countries with high population growth rates (which are usually also among the world’s poorest nations) to produce radio and television soap operas featuring strong female characters who successfully confront issues related to family planning. This strategy has been shown to be the most cost-effective and humane means of reducing high birth rates in these nations.

It’s hard to convince people to voluntarily reduce consumption and curb reproduction. That’s not because humans are unusually pushy, greedy creatures; all living organisms tend to maximize their population size and rate of collective energy use. Inject a colony of bacteria into a suitable growth medium in a petri dish and watch what happens. Hummingbirds, mice, leopards, oarfish, redwood trees, or giraffes: in each instance the principle remains inviolate—every species maximizes population and energy consumption within nature’s limits. Systems ecologist Howard T. Odum called this rule the Maximum Power Principle: throughout nature, “system designs develop and prevail that maximize power intake, energy transformation, and those uses that reinforce production and efficiency.”
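The petri-dish example follows the familiar logistic growth curve, which a few lines of code can sketch. The parameters here are arbitrary, chosen only to illustrate growth leveling off at a resource limit:

```python
# Logistic growth sketch: a population expands nearly exponentially
# until the carrying capacity of its environment caps it.
K = 1_000_000   # carrying capacity set by the growth medium (hypothetical)
r = 0.5         # per-step growth rate (hypothetical)

def simulate(n0=100, steps=50):
    n = n0
    history = [n]
    for _ in range(steps):
        n += r * n * (1 - n / K)  # growth slows as the limit approaches
        history.append(n)
    return history

pops = simulate()
# Early steps show near-exponential growth; the curve then flattens at K.
```

The bacteria do not choose restraint; the medium imposes it, which is Odum’s point about the Maximum Power Principle.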

In many countries, including the US, government efforts to forestall or head off uprisings appear to be taking the forms of criminalization of dissent, the militarization of police, and a massive expansion of surveillance using an array of new electronic spy technologies. At the same time, intelligence agencies are now able to employ up-to-date sociological and psychological research to infiltrate, co-opt, misdirect, and manipulate popular movements aimed at achieving economic redistribution. However, these military, police, public relations, and intelligence efforts require massive funding as well as functioning grid, fuel, and transport infrastructures. Further, their effectiveness is limited if and when the nation’s level of economic pain becomes too intense, widespread, or prolonged.

A second source of conflict consists of increasing competition over access to depleting resources, including oil, water, and minerals. Among the wealthiest nations, oil is likely to be the object of the most intensive struggle, since oil is essential for nearly all transport and trade. The race for oil began in the early 20th century and has shaped the politics and geopolitics of the Middle East and Central Asia; now that race is expanding to include the Arctic and deep oceans, such as the South China Sea.

Resource conflicts occur not just between nations but also within societies: witness the ongoing insurgencies in the Niger Delta, where oil revenue fuels rampant political corruption while drilling leads to environmental ravages felt primarily by the Ogoni ethnic group; see also the political infighting in fracking country here in the United States, where ecological impacts put ever-greater strains on the social fabric.

Lastly, climate change, water scarcity, high oil prices, vanishing credit, and the leveling off of per-hectare productivity and the amount of arable land are all combining to create the conditions for a historic food crisis, which will impact the poor first and most forcibly. High food prices breed social instability—whether in 18th-century France or 21st-century Egypt. As today’s high prices rise further, social instability could spread, leading to demonstrations, riots, insurgencies, and revolutions.

In the current context, a continuing source of concern must be the large number of nuclear weapons now scattered among nine nations. While these weapons primarily exist as a deterrent to military aggression, and while the end of the Cold War has arguably reduced the likelihood of a massive release of them in an apocalyptic fury, it is still possible to imagine several scenarios in which a nuclear detonation could occur as a result of accident, aggression, preemption, or retaliation. We are in a race—but it’s not just an arms race; indeed, it may end up being an arms race in reverse.

We can only hope that historical momentum can maintain the Great Peace until industrial nations are sufficiently bankrupt that they cannot afford to mount foreign wars on any substantial scale.

 

In his recent and important book Carbon Democracy: Political Power in the Age of Oil, Timothy Mitchell argues that modern democracy owes a lot to coal. Not only did coal fuel the railroads, which knitted large regions together, but striking coal miners were able to bring nations to a standstill, so their demands for unions, pensions, and better working conditions played a significant role in the creation of the modern welfare state. It was no mere whim that led Margaret Thatcher to crush the coal industry in Britain; she saw its demise as the indispensable precondition to neoliberalism’s triumph. Coal was replaced, as a primary energy source, by oil. Mitchell suggests that oil offered industrial countries a path to reducing internal political pressures. Its production relied less on working-class miners and more upon university-trained geologists and engineers. Also, oil is traded globally, so that its production is influenced more by geopolitics and less by local labor strikes. “Politicians saw the control of oil overseas as a means of weakening democratic forces at home,” according to Mitchell, and so it is no accident that by the late 20th century the welfare state was in retreat and oil wars in the Middle East had become almost routine. The problem of “excess democracy,” which reliance upon coal inevitably brought with it, has been successfully resolved, not surprisingly by still more teams of university-trained experts—economists, public relations professionals, war planners, political consultants, marketers, and pollsters. We have organized our political life around a new organism—“the economy”—which is expected to grow in perpetuity, or, more practically, as long as the supply of oil continues to increase.

Andrew Nikiforuk also explores the suppression of democratic urges under an energy regime dominated by oil in his brilliant book The Energy of Slaves: Oil and the New Servitude. The energy in oil effectively replaces human labor; as a result, each North American enjoys the services of roughly 150 “energy slaves.” But, according to Nikiforuk, that means that burning oil makes us slave masters—and slave masters all tend to mimic the same attitudes and behaviors, including contempt, arrogance, and impunity.

As power addicts, we become both less sociable and easier to manipulate. In the early 21st century, carbon democracy is still ebbing, but so is the global oil regime hatched in the late 20th century. Domestic US oil production based on hydraulic fracturing (“fracking”) reduces the relative dominance of the Middle East petro-states, but to the advantage of Wall Street—which supplies the creative financing for speculative and marginally profitable domestic drilling. America’s oil wars have largely failed to establish and maintain the kind of order in the Middle East and Central Asia that was sought. High oil prices send dollars cascading toward energy producers but starve the economy as a whole, and this eventually reduces petroleum demand.

Governance systems appear to be incapable of solving or even seriously addressing looming financial, environmental, and resource issues, and “democracy” persists primarily in a highly diluted solution whose primary constituents are money, hype, and expert-driven opinion management. In short, the 20th-century governance system is itself fracturing. So what comes next?




Rare: The High-Stakes Race to Satisfy Our Need for the Scarcest Metals on Earth by Keith Veronese

Preface. Capitalism believes there’s a solution for everything due to Man’s Inventive Brain, but when it comes to getting metals out of the earth, there are some very serious limitations. In parts per billion of Earth’s crust, there are only 4 parts platinum, 20 parts silver, and less than 1 part for many important metals. Yet they are essential for cars, wind turbines, electronics, military weapons, oil refining, and dozens of other uses listed below.

China controls 97% of rare earth metal production. Uh-oh.


Our civilization is far more dependent than I’d realized on very rare elements, which are extremely scarce and are being dissipated because so few are recycled (recycling them is almost impossible anyway: the cost is too high, and many elements are hard to separate from one another).

So in addition to peak oil, add in peak metals to the great tidal wave of collapse on the horizon.

What follows are my kindle notes.


***

Keith Veronese. 2015. Rare: The High-Stakes Race to Satisfy Our Need for the Scarcest Metals on Earth. Prometheus books.

Scientifically, metals are known for a common set of properties. Almost all metals conduct electricity and heat—very useful properties in the world of electronics. Most metals can be easily bent and molded into intricate shapes. As a nice bonus, most metals are resistant to all but the most extreme chemical reactions in the outside environment, and this stability increases their usefulness.

A very apparent exception to this stability, however, is the rusting of iron, a natural process that occurs as iron is exposed to oxygen and water over time in junkyards, barns, and elsewhere.

Is a particular metal hard to find because there is a limited amount, because it is simply difficult to retrieve, or because technological demand outpaces supply? The difficulty of acquisition is usually due to a combination of all of these reasons.

Parts per billion

4          Platinum, a scarce, precious metal, exists in four parts per billion of Earth’s crust—only four out of a billion atoms within the crust are platinum. This is an extremely small amount. To put the amount of platinum on Earth in an easier-to-visualize light, imagine if one took all the platinum mined in the past several decades and melted it down; the amount of molten platinum would barely fill the average home swimming pool.
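One way to make a parts-per-billion figure concrete is to convert it to grams per tonne of rock. The book counts atoms, so treating 4 ppb as a mass fraction is a simplifying assumption, good only for a rough illustration:

```python
# Convert a parts-per-billion mass fraction (an assumption; the text
# counts atoms) into grams of platinum per tonne of average crust.
ppb_platinum = 4
grams_per_tonne = ppb_platinum * 1e-9 * 1e6  # one tonne = 1e6 grams
# About 0.004 g of platinum per tonne of ordinary rock; commercial
# platinum ores run on the order of grams per tonne, roughly a
# thousand times richer, which is why minable deposits matter so much.
```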

20        Silver, a metal many use on a daily basis to eat with, exists at only a 20-parts-per-billion value—20 out of every billion atoms on the planet are silver.

1          Osmium, rhenium, iridium, ruthenium, and even gold exist in smaller quantities, much less than one part per billion, while some are available in such small concentrations that no valid measurement exists.

On the extreme end of the scarcity spectrum is the metal promethium. The metal is named for the Greek Titan Prometheus, a mythological trickster who is known for stealing fire from the gods. Scientists first isolated promethium in 1963 after decades of speculation about the metal. Promethium is one of the rarest elements on Earth and would be very useful if available in substantial amounts. If enough existed on the planet, promethium could be used to power atomic batteries that would continue to work for decades at a time. Estimates suggest there is just over a pound of promethium within the crust of the entire planet. When the density of the metal is accounted for, this is just enough of the metal to fill the palm of a kindergartner’s hand.
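The palm-sized claim is easy to sanity-check. The density figure below is my own approximation, not a number from the book:

```python
# Volume of one pound of promethium metal (density is approximate).
pound_g = 453.6          # grams in one pound
density_g_cm3 = 7.26     # approximate density of promethium metal
volume_cm3 = pound_g / density_g_cm3
# Roughly 62 cubic centimeters: a lump that would indeed fit in a small palm.
```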

This special attraction to iron explains why so many prized metals are hard to find. Earth’s molten core is estimated to be composed of up to 90% iron, and over billions of years these iron-loving elements have sunk out of the crust, moving ever closer to the planet’s iron core. This drive toward the core depletes the amount of these metals available in Earth’s crust, and it poses a problem for mining efforts: it prevents the formation of concentrated deposits that would be useful to mine, leaving the metals spread through the crust in sparse, diffuse amounts.

The mass of Earth is approximately 5.98 × 10²⁴ kilograms. There is absolutely no easy (or useful) way to put a number of this magnitude into a reasonable context. I mean, it’s the entire Earth. I could say something silly, like the mass of the planet is equal to 65 quadrillion Nimitz-class aircraft carriers, each of which weighs 92 million kilograms apiece. This comparison might as well be an alien number, as it lends no concept of magnitude.
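The aircraft-carrier comparison does check out, using the figures given in the passage:

```python
# Reproducing the aircraft-carrier comparison from the text.
earth_mass_kg = 5.98e24
carrier_mass_kg = 92e6           # Nimitz-class mass, per the text
carriers = earth_mass_kg / carrier_mass_kg
# About 6.5e16, i.e. roughly 65 quadrillion carriers.
```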

The overwhelming majority of Earth’s crust is made of hydrogen and oxygen. The only metals present in large amounts within the crust are aluminum and iron, with the latter also dominating the planetary core. These four elements make up about 90% of the mass of the crust, with silicon, nickel, magnesium, sulfur, and calcium rounding out another 9% of the planet’s mass.

Making up the remaining 1% are the 100+ elements in the periodic table, including a number of quite useful, but very rare, metals.

What is easier to understand are reports of the ages and proportions of metals and other elements that reside on the surface of the planet and just below. At the moment, Earth’s crust is the only portion of the planet that can be easily mined by humans.

Deposits of rare metals, including gold, are found under the surface of the planet’s oceans, but these deposits are rarely mined for a number of reasons. These metals often lie within deposits of sulfides, solid conjugations of metal and the element sulfur that occur at the mouth of hydrothermal vents. While technology exists that allows for the mining of deep-sea sulfide deposits, extremely expensive remotely operated vehicles are often necessary to recover the metals. Additionally, oceanic mining is a politically charged issue, as the ownership of underwater deposits can be easily contested. As technology advances, underwater mining for rare metals and other elements will become more popular, but, for the moment, due to cost and safety reasons, we are restricted to the ground beneath our feet that covers about one-third of the planet.

Earth’s crust varies in thickness from 25 to 50 kilometers along the continents, and so far, humankind has been unable to penetrate the full extent of the layer. The crust is thickest in the middle of the continents and slowly becomes thinner the closer one comes to the ocean. So what does it take to dig through the outer crust of our planet? It takes a massive budget, a long timescale, and the backing of a superpower, and even this might not be enough to reach the deepest depth. Over the course of two decades during the Cold War, the Soviet Union meticulously drilled to a depth of 12 kilometers into the crust of northwest Russia’s Kola Peninsula. No, this was not part of a supervillain-inspired plan to artificially create volcanoes but was rather an engineering expedition born out of the scientific head-butting that was common during the Cold War. The goal of this bizarre plot? To carve out a part of the already thin crust north of the Arctic Circle to see just how far humans could dig and to see exactly how the makeup of the outer layer of the planet would change.

Work on the Kola Superdeep Borehole began in 1970, with two decades of drilling leaving a 12-kilometer-deep hole in the Baltic crust, a phenomenal depth, yet it penetrated but a third of the crust’s estimated thickness. As they tore through the crust in the name of science and national pride, the team repeatedly encountered problems due to high temperatures. While you may feel cooler than ground-level temperatures in a basement home theater room or during a visit to a local cavern, as we drill deep into the surface, the temperature increases 15 degrees Fahrenheit for every 1.5 kilometers. At the depths reached during the Kola Borehole expeditions, temperatures well over 200 degrees Fahrenheit are expected. The extremely hot temperatures and increased pressure led to a series of expensive mechanical problems, and the project was abandoned.

The Kola Superdeep Borehole is the inspiration for the late 1980s and 1990s urban legend of a Soviet mission to drill a “Well to Hell,” with the California-based Trinity Broadcasting Network reporting the high temperatures encountered during drilling as literal evidence for the existence of hell. The Soviet engineers failed to reach hell, and they also failed to dig deep enough to locate rare earth metal reserves. At the moment, we simply lack the technology to breach our planet’s crust. The Kola Borehole failed to reach even the midpoint of the crust, with at least twenty more kilometers of drilling to go at the time the project was shut down in 1992. Although Earth’s crust holds a considerable amount of desirable metals, if the metals are not in accessible, concentrated deposits, it is usually not worth the cost for a corporation to retrieve them.

The composition of metals within the planet’s crust is not uniform, unfortunately, further dividing the world’s continents into “haves” and “have nots” when it comes to in-demand metals.

Copper is very hard to isolate from the crust in a pure form. Bronze, a combination of copper with tin, was sufficient for our ancestors to make weapons and tools, but purer forms of copper and other metals are necessary for the varied number of modern uses. Copper is found within the mineral chalcopyrite. To isolate pure copper from chalcopyrite calls for a work-intensive process that involves crushing a large mass of chalcopyrite, smelting the mineral, removing sulfur, a gaseous infusion, and electrolysis before 99% pure, usable copper is obtained. Aluminum, a metal so common it is used to make disposable containers for soft drinks, undergoes a similar process before a form that meets standards for industrial use is obtained.

ROCKS INTO SMARTPHONES. The use of exotic metals has become commonplace to improve the performance of existing consumer goods. The piece of aluminum used as part of a capacitor within a smartphone is exchanged for a sliver of tantalum in order to keep up with processor demands, creating an enormous market for the rare metal. Rhodium, ruthenium, palladium, tellurium, rhenium, osmium, and iridium join the extremely well-known platinum and gold as some of the rarest metals on the planet that find regular uses in the medical industry. These rare metals also play interesting roles in protecting the environment. A great example is the use of platinum, palladium, and rhodium in catalytic converters, a key component in every automobile built and sold in the United States since the 1970s. Each converter contains a little over five grams of platinum, palladium, or rhodium, but this meager amount acts as a catalyst that converts carbon monoxide and unburned hydrocarbons into carbon dioxide, water vapor, and other less harmful emissions for hundreds of thousands of miles, with the metal unchanged throughout the process. An extremely recent and highly relevant example of a little-known metal that jumped to the forefront of demand is tantalum. Tantalum is in almost every smartphone, with a sliver in each of the nearly one billion smartphones sold worldwide each year.

Europium is used to create the color red in liquid-crystal televisions and monitors, with no other chemical able to reproduce the color reliably. As copper communication wires are replaced with fiber-optic cable, erbium is used to coat fiber-optic cable to increase the efficiency and speed of information transfer, and the permanently magnetic properties of neodymium lead to its extensive use in headphones, speakers, microphones, hard drives, and electric car batteries.

Conflict metals share a number of parallels with a much sought-after and contested resource: oil. These metals may serve as the catalyst for a number of political and even military conflicts in the coming centuries. All our heavy metal elements, to which many of the rare metals belong, were born out of supernovas occurring over the past several billion years. These metals, if not recycled or repurposed, are finite resources. Inside the stories of these rare metals are human trials and political conflicts. In the past decade, the Congo has been ravaged by tribal wars to obtain tantalum, tungsten, and tin, with over five million people dying at the crossroads of supply and demand. Afghanistan and regions near the Chinese border are wellsprings for technologically viable rare metals due to the disproportionate spread of these high-demand metals in the planet’s crust. In an interesting move, the United States tasked geologists with estimating available resources of rare metals during recent military actions in Afghanistan. California, specifically the Mountain Pass Mine within San Bernardino County, was a leading supplier of rare earth metals in North America well into the 1990s. Mountain Pass, however, was shut down in the early 2000s after a variety of environmental concerns outweighed the savings of acquiring the rare earth metals mined there rather than from overseas sources. Since the metals rarely form concentrated deposits, the places in the world that are home to highly concentrated deposits of in-demand metals become the target of corporations and governments.

The amounts of europium, neodymium, ytterbium, holmium, and lanthanum in Earth’s crust are roughly the same as the amounts of copper, zinc, nickel, or cobalt. Simply put, the majority of the 17 are not rare; they are spread throughout the planet in reasonable amounts. The metals are in high demand and inordinately difficult to extract and process, and it is from a combination of these factors that the 17 derive their rarity.

RARE VERSUS DIFFICULT TO ACQUIRE. While the 17 metals may be distributed throughout the planet, finding an extractable quantity is a challenge. The elements are spread so well that they appear in very small, trace quantities—a gram here, a milligram there—in deposits and are rarely, if ever, found in a pure form. Extracting and accumulating useful, high-purity quantities of these 17 metals is what lends them the “rare earth” name. To obtain enough of any one of these 17 to secure a pure sample, enormous quantities of ore must be sifted through and chemically separated through a series of complex, expensive, and waste-creating processes. The basics of chemical reactions act as a spanner in the works throughout processing, as the desired metal is lost through side reactions along the way. Small losses in multiple steps add up quickly, further decreasing the amount of metal available for use. Why expend so much effort to discover and refine these 17 rare metals? Many of them are necessary to fabricate modern electronics, metals woven into our everyday lives and used by brilliant scientists and engineers to fix problems and make electronics more efficient at the microscopic level. Think of the 17 rare earth metals like vitamins—you may not need a large amount of any one of them to survive, but you do need to meet a regular quota of each one. If not, your near future might resemble that of a passenger traveling in steerage from Europe to the New World as you develop scurvy from lack of vitamin C. Yes, we can substitute one of the rare metals for a similarly behaving one on a case-by-case basis, but we need every metal from lanthanum to lutetium, and in sufficient amounts, if we want the remainder of the twenty-first and the upcoming twenty-second centuries to enjoy the progress we benefited from in the twentieth.

What is it about these 17 metals that makes them useful? Reasons vary, but the 15 elements from lanthanum to lutetium, huddled for shelter under the main body of the periodic table, have a subatomic level of similarity—the 15 can hide electrons better than the rest of the elements on the periodic table.

When a new electron is added to its set (one electron for each element after lanthanum), another set of electrons is left unprotected from the positive pull of protons in the nucleus.

The extra “tug” from protons in the nucleus does not play a role as long as the atom is neutral, but should an electron become dislodged (as often occurs with metals) and an ion is formed, the ion will be smaller in size than normal due to the extra pull. When metals form bonds with other atoms and elements, they often do so as ions, with this break from the norm giving the rare earth metals some of their interesting properties. Because of this phenomenon, ions of the rare earth metals from lanthanum to lutetium grow smaller in diameter from left to right across the row. This is the reverse of typical trends seen in the periodic table, as ions of elements typically become larger across the row. The electrons traveling along their unique path also bestow on the elements interesting magnetic abilities, properties that make rare earth metals particularly sought after for use in electronics and a variety of military applications.

Minerals contain a variety of elements, with multiple metals often found in a single mineral deposit. Rocks with a consistently high concentration of a given metal, like magnetite, which has a large amount of iron, are commonly traded.

Mineral deposits differ in the amount of usable metal they contain, with the concentration of metal, ease of extraction, and rarity playing a role in determining how mining operations proceed. Metals are found in a variety of purities, interwoven in a matrix of other materials and often with other similar metals. Aluminum is found within bauxite deposits, tantalum and niobium are found with the coveted ore coltan, while cerium, lanthanum, praseodymium, and neodymium are found in the crystalline mineral monazite. Recovering a sample from the ground through hours of digging and manual labor is just the first step—before any of these metals can be used, an extensive process of purification is necessary, since high levels of purity are essential for efficient use. Five species of minerals dominate our concern in the hunt for rare earth metals: columbite, tantalite, monazite, xenotime, and bastnäsite. We can further reduce this to four species, since columbite and tantalite are often found together in the ore coltan. Coltan ore contains large deposits of tantalum and niobium, two of the most sought-after rare metals. Central Africa is home to large deposits of coltan, but the fractured nature of the nations in the region and opposing factions have taken the lives of thousands and disrupted countless more as rival groups swoop in to make money off of legal and illegal mining operations in the region. Raw monazite, xenotime, and bastnäsite are relatively inexpensive. You can buy a rock of the red-and-caramel-colored minerals on any one of a number of websites, with a fingertip-sized piece of monazite or bastnäsite available for the price of a steak dinner at a truck stop diner. Unlike the concentrated deposits of tantalum and niobium in coltan, samples of monazite, xenotime, and bastnäsite hold small amounts of multiple rare earth metals within them.

Sizable deposits of monazite, xenotime, and bastnäsite are found in North America.

Searching for rare earth metals in monazite brings with it a major problem with the ore—most samples are radioactive. The naturally radioactive metal thorium is a large component of monazite, with the fear of environmental damage, additional economic cost, and employee health concerns acting as barriers to monazite mining operations. Once a sufficient quantity of any one of these minerals is obtained, there is a long road to tread before the desired metals are pulled from the rocks. Eighteen steps are necessary before monazite can begin to be purified into individual rare earth metals, while bastnäsite requires 24. Some of these steps are simple—crushing and subsequent heating of the raw mineral ore—while others are large-scale chemical reactions requiring highly trained professionals.

The minerals hold tiny amounts of several different rare metals within them. Until recently, carrying out mining operations solely to garner rare earth metals was considered much too expensive. But if the rare earth metals were a useful by-product of other mining and processing efforts, then so much the better. A great example of this phenomenon is carbonatite, a rock of interest but one less prized than coltan, bastnäsite, xenotime, or monazite. Carbonatite is sought for its rich copper content, with the added bonus of small amounts of rare earth metals that can be teased out as the mineral is broken down.

The light rare earth elements (LREEs) are lanthanum, cerium, praseodymium, neodymium, and samarium, while europium, gadolinium, terbium, dysprosium, holmium, erbium, thulium, ytterbium, lutetium, and yttrium make up the heavy rare earth elements (HREEs). As a general rule, an HREE is harder to find in substantial usable quantities than an LREE, making the heavy rare earth elements more valuable.

Overall, elements that have lower atomic masses (in day-to-day language, these elements weigh less per atom) are more abundant than elements with higher atomic masses. Hydrogen atoms (a proton and an electron, for an atomic mass of just over one) and helium atoms (two protons, two electrons, and two neutrons for an atomic mass of four) are two of the most abundant in the universe, while elements at the other end of the periodic table with larger masses like gold (79 protons, 79 electrons, and an average of 118 neutrons for an atomic mass of just under 197) are far less abundant. This trailing phenomenon across the periodic table is part of the answer as to why there are fewer of the heavy rare earths on and within the planet (as well as the rest of the universe) than there are light rare earth elements.
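The proton-and-neutron bookkeeping above is easy to verify: an element's mass number is simply protons plus neutrons, since electrons contribute almost nothing to the total. A minimal sketch:

```python
def mass_number(protons: int, neutrons: int) -> int:
    """Mass number = protons + neutrons; each electron weighs only ~1/1836 of a proton."""
    return protons + neutrons

# The three examples from the passage
print(mass_number(1, 0))     # hydrogen-1 -> 1
print(mass_number(2, 2))     # helium-4 -> 4
print(mass_number(79, 118))  # gold-197 -> 197
```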

At the moment, 90% of the world’s current supply of rare industrial metals originates from two countries. The export of raw supplies from these countries is increasingly coming under fire, with the countries championing a movement to convince corporations to move away from the quick monetary gain that exporting raw materials offers and toward making a profit by exporting finished consumer electronics. At present, we are seeing the beginning of territorial wars over a far more common resource, fresh water, in the United States and elsewhere in the world. If governments are experiencing difficulties sharing and parceling out water, as we see in ongoing disputes between Alabama, Georgia, and Florida over the Apalachicola-Chattahoochee-Flint River and Alabama-Coosa-Tallapoosa River basins, the quarrels possible over rights to desperately needed metals between non-civil or even warring nations could be frightening.

In the 1990s, a number of successful Chinese mining operations began, with their rich supply of high-quality rare earths flooding the global market and driving prices down to near-record lows.

China’s population is consuming rare earth metals at an astonishing rate. By the year 2016, the population of China is projected to consume one hundred and thirty thousand tons of rare earth metals a year, a number equivalent to the entire planet’s consumption at the beginning of this decade.

China holds one-third of the planet’s rare earth supply, but a vast number of mining and refining operations ongoing within its borders allow China to account for roughly 97% of the available rare earth metals market at any given time. Yes, other countries have rare earth metal resources, but they lack the infrastructure or means to put them to use. The addition of politics into the equation places China in an enviable position of power should a nation or group of nations interfere with the country’s interests on any level. Unhappy with the Japanese presence in the South China Sea? Prohibit exports to Japan.

Military weaponry relies on the same rare-metal components as consumer goods, further indebting a sovereign nation to its suppliers.

A neodymium magnet motor can outwork an iron-based magnet motor more than twice its size—but these benefits are not without a substantial price. Rare earth magnet components often cost ten or more times the price of their less efficient, more common counterparts, and any disruption in supply will only lead to a widening of the price gap. When faced with a long-term drop in the supply of rare earth metals, manufacturers will be forced to choose between passing the costs on to the consumer, risking market share in the process, or selecting cheaper, older parts and manufacturing methods—the same ones many of the rare earth metals helped replace—which would lead to inferior products and eliminate a number of technological advances.

There are over 30 pounds of rare earth metals inside each Toyota Prius that comes off a production line, with most of that mass split between rare earth components essential to the motors and the rechargeable battery. Of these 30 pounds, 10 to 15 are lanthanum, used as the metal component of nickel-metal hydride (NiMH) batteries. As the first generation of hybrid automobiles reaches the end of its lifetime, owners will be forced to replace their battery or move on to a different car, with both alternatives bringing an uptick in rare earth metal consumption.

The amount of rare earth metals needed to create a state-of-the-art wind turbine dwarfs that needed for an electric car, with 500 pounds of rare earth metals needed to outfit the motors and other interior components of a single energy-generating wind turbine.

Each of the 17 rare earth metals exhibits similar basic chemical and physical properties, with these similarities providing quite the challenge when it comes to separating them from one another in raw mineral ore. If you heat a mineral sample containing several of the rare earth metals to extremely high temperatures, it becomes difficult, if not impossible, to differentiate and physically separate each one because they share similar melting points. The rare earth elements are intricately bound to one another along with abundant elements like carbon and oxygen, making it impossible for industrious at-home refiners and large corporations to pick up a hundred pounds of raw mineral rocks and chip away for hours to separate the elements as one could do, in theory, with gold. Instead, concentrated acids and bases are needed to extract the individual elements, with chemists trying thousands of combinations before settling on the proper method to separate and purify a rare earth metal like cerium, a metal needed for use in pollution-eliminating catalytic converters, from a sample of bastnäsite or monazite.

Beryllium is an element now deemed vital to US national security due to its inclusion in next-generation fighter jets and drones.

Gadolinium is used to create the memory-storage components of hard drives.

Despite the eventual separation into praseodymium and neodymium, the use of didymium continues to evolve. Oil refineries use the mixture of the two elements as a catalyst in petroleum cracking, a heat-intensive process necessary to break down the carbon–carbon bonds present in extremely large molecules en route to producing octane for use in gasoline.

A myriad of weapons systems used by the United States and a handful of other countries rely on rare earth metals to operate. Neodymium and its neighbor on the periodic table, samarium, are relied on to manufacture critical components of smart bombs and precision-guided missiles; ytterbium, terbium, and europium are used to create lasers that seek out mines on land and under water; and other rare earth elements are needed to build the motors and actuators used in Predator drones and various electronics like jamming devices.

Each element from position 84 to the end of the periodic table at 118 is radioactive, and of these 35 elements, only 12 are available in large enough quantities to be useful to humans.

Deep in the interior of nuclear power plants the fuel rods are arranged in arrays within a cooling pool to maximize safety. The goal is to allow the heat generated from the billions of neutron additions to safely flow through the water—without the liquid, the heat created as a result of reactions ongoing within fuel rods would quickly overrun any containment units and lead to a meltdown. Water is chosen as the mediating material due to its ability to take on a substantial quantity of heat before evaporating.

Uranium fuel poses an ever-present danger during the reprocessing period since, once uranium and plutonium are separated from their metal housings and dissolved in acid, it is still theoretically possible (although extremely unlikely) for them to gather in localized hot spots within the processing tanks and reach dangerous critical mass. Even if the economic hurdles and safety issues are overcome, the inherent nature of reprocessing sites and the substantial quantity of nuclear fuel within their walls could leave them vulnerable to direct attacks from terrorist groups or the theft of still-fissionable nuclear material. It would be foolish to think an attack making use of nuclear material en route for reprocessing would not be devastating. Even if the attackers failed to turn stolen spent fuel into a high-power nuclear weapon, threats will forever loom from less scientifically advanced attacks stemming from the addition of radioactive waste into an existing explosive device or a strike on a nuclear reprocessing facility that would turn the entire site into an unconventional dirty bomb. Such an attack could exact minimal physical damage and still render the surrounding area unfit for habitation for many years. The psychological toll would be unlike any disaster seen in the Western Hemisphere, with hundreds of billions of dollars necessary to decontaminate and clean the area and tremendous upheaval as several generations would find their lives and homes severely impacted in a single attack. These fears are not merely the creation of a post-9/11 think tank but are a hypothetical plague that has occupied the highest office in the land for six decades. 
Presidents Gerald Ford and Jimmy Carter halted reprocessing of plutonium and spent nuclear fuel during their terms in office in an effort to stop the spread of national nuclear weapons programs and clandestine attempts to secure a nuclear device across the globe—a fear bolstered by ongoing tensions in India and Iran during the late 1970s.

President Ronald Reagan lifted this ban during his tenure, only to have his successor, George H. W. Bush, prevent New York’s Long Island Power Authority from teaming with the French government–owned corporation Cogema to process reactor fuel. President William J. Clinton followed Bush’s lead, while President George W. Bush went on to embrace nuclear reprocessing by forming the sixteen-country Global Nuclear Energy Partnership and encouraging private corporations to develop new reprocessing technology. This trend of “stop-start” policy on the matter reversed once again with President Barack Obama, who signaled what appears to be the death knell for commercial nuclear reprocessing in the United States, at least for the first half of the twenty-first century. Fiscal concerns informed his decision to cancel plans to build a large-scale nuclear reprocessing facility in 2009 and a South Carolina reprocessing site in 2014. At the moment, the United States does not reprocess reactor fuel previously used to generate power for public consumption; it instead chooses to focus recycling efforts on radioactive materials created in the course of scientific research. Regardless of one’s personal political views, the reticence of five presidents to pursue nuclear reprocessing—Ford, Carter, G. H. W. Bush, Clinton, and Obama—should be a sign to those championing its cause. Financial issues aside, concentrating large amounts of nuclear material in one area, no matter how secure, with hundreds, if not thousands, of workers coming in contact with the material makes the site ripe for thievery and attack. Acquisition of radioactive material by clandestine individuals is not isolated to action movie plots and Tom Clancy novels but is a plausible threat.
A dirty bomb has yet to be detonated anywhere in the world, thankfully confining these radiological weapons to movies and novels, wherein the bombs play the role of an all-too-abundant plot device and source of melodrama. The most feeble of dirty bombs needs only a sufficient source of radioactive waste and an explosive device to disperse the waste in order to render a location unfit for years.

Almost every step of a reprocessing effort creates additional radioactive waste. Liters upon liters of strong acids and harsh carcinogenic solvents are used en route to reclaiming metallic uranium and plutonium that can be used in a new way. This “new” waste created in the dissolving stages contains only a fraction of the radioactivity in a sample of reactor-grade uranium, but nevertheless, the radioactive waste must be locked away until its radioactivity decays naturally over time.

In the process it is possible to create considerable quantities of waste.

A metric ton of fuel rod waste contains four to five kilograms of recoverable rare metals, making the effort worthwhile in dire circumstances.

If you are devious and looking for a way to swindle people out of gold, tungsten sounds really great at this point, right? One big problem lies in the path for any would-be gold counterfeiter—tungsten metal is grayish-white, a very different hue than traditional yellow gold. A visual problem such as this can be rectified with willpower and a drill, leading gold-adulterers to hide tungsten metal within solid-gold objects to create a passable fake. Reports of precious metal traders learning they were scammed by keen counterfeits—one-kilogram gold bars drilled out and filled with tungsten prior to the transaction—are popping up in China, Australia, and New York City, a sordid trend brought about in recent years by the astronomical run-up in the price of gold. The gold removed from the bar enters the pocket of the driller, while the bar is passed along to an uninformed buyer at its normal face value. Tales of tungsten bars coated with twenty-four-karat gold also swirl, with purchasers learning of their exceptional misfortune when the top layer peels away like the gold foil covering a chocolate bar.
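The swindle works because tungsten's density sits within about half a percent of gold's (handbook values of roughly 19.25 versus 19.32 g/cm³), so a drilled-and-refilled bar passes a simple weighing. A back-of-the-envelope check, using a hypothetical one-kilogram bar as the example:

```python
# Approximate handbook densities in g/cm^3
GOLD_DENSITY = 19.32
TUNGSTEN_DENSITY = 19.25

bar_volume = 1000 / GOLD_DENSITY           # volume of a genuine 1 kg gold bar, in cm^3
fake_mass = bar_volume * TUNGSTEN_DENSITY  # a same-sized bar cast entirely from tungsten

print(f"Same-size all-tungsten bar: {fake_mass:.1f} g instead of 1000 g")
# Only a few grams light -- a gap under 0.5%, easy to miss on an ordinary trade scale
```

A partially drilled bar, which keeps most of its gold, shrinks the discrepancy even further, which is why detection falls to ultrasonic or x-ray fluorescence testing rather than the scale.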

The cost of melting down zinc and a smidgen of copper (pennies went from being made almost entirely of copper until 1982 to less than 3% copper currently), parceling it out into discs, stamping the visage of our 16th president on the face, and trucking rolls of the coin from the mint averages two cents for every penny created. In this case, the seigniorage is a net loss for the Treasury Department, as the department loses a little less than a cent on each newly minted penny, and the net loss continues with the nickel, with eleven cents’ worth of materials, wages, and machine upkeep going into creating each one.
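Using the round figures quoted above (about two cents to produce a penny and eleven cents to produce a nickel), the mint's per-coin seigniorage is simple subtraction:

```python
# Face values and estimated production costs in dollars, per the passage's round figures
coins = {
    "penny":  {"face": 0.01, "cost": 0.02},
    "nickel": {"face": 0.05, "cost": 0.11},
}

for name, c in coins.items():
    seigniorage = c["face"] - c["cost"]  # negative means the Treasury loses money
    print(f"{name}: seigniorage of {seigniorage * 100:+.0f} cents per coin")
```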

Some gold-plated tungsten items are sold openly as fakes, improving upon the techniques used in sordid deals involving counterfeit bars. These commercially manufactured and advertised “fake” tungsten-core coins are currently seen as a blight by the coin-collecting and gold-trading community, but someone with an ultrasonic or x-ray fluorescence detector could always use one of these elaborately produced plated coins to test the device in question. If you are a pessimist, the fake coins may turn out to be useful if you lack the financial assets to hoard gold and live your life prepping for an imminent worldwide financial collapse or natural disaster. Gold is desired foremost among precious metals due to historical and traditional sentiment. In a rebooted world where those bargaining for goods lack any sort of detection devices, the look and feel of gold may be all you need. Corporations and nations seek out rare and scarce metals for their value and their ability to improve human life.

Thallium became so popular as a murder weapon that the chemical earned the name “inheritance powder” in the dawn of the Industrial Revolution due to the metal’s dubious link to convenient deaths benefiting wealthy heirs. When used for ill intent, thallium is dosed not as a spoonful of metal shavings but in the form of crystalline thallium sulfate. By itself, thallium metal will not dissolve readily in water, making it difficult to hide this form of the poison in a drink. On the other hand, thallium sulfate retains the poisonous characteristics of thallium while behaving similarly to table salt, sodium chloride, bestowing upon the substance a crystalline appearance at room temperature while making the chemical far more concealable. This form is still quite potent, as less than a single gram of thallium sulfate is enough to kill an adult. Availability, potency, and concealability combine to make thallium sulfate an excellent murder weapon. Prior to 1972, thallium sulfate sat on the shelves of supermarkets across the United States as the main ingredient in commercial rat killers. Thallium ends life by forcing the body to shut down as it takes the place of potassium in any number of the body’s cellular reactions and physiochemical processes. Once ingested, the poisonous compound thallium sulfate dissolves, separating the thallium atoms and allowing the metal to enter the bloodstream. The body then begins to incorporate thallium into molecular-level events needed to maintain proper working order, and that’s where trouble begins. Thallium atoms are remarkably similar in size to potassium atoms, and this is a problem for the human body. Potassium is a vital part of energy-manufacturing mechanisms and a gatekeeper for a number of cellular channels. Due to the similarity between the size and charge of thallium and potassium ions, the body confuses the metals and allows thallium to substitute for potassium.
Unfortunately, this substitution is a deadly one, leading to a shutdown of a number of delicate submicroscopic events that brings about death in a handful of weeks. Erosion of fingernails and hair loss are two prominent late-stage flags denoting thallium poisoning, with the first signs of hair loss showing as soon as a week after consumption of the poison.  If you are poisoned with thallium and do not die from acute kidney failure or its complications within a few weeks, your way of life will likely be changed forever, thanks to recurring dates with a dialysis machine.

Swiss scientists studying the exhumed body of Palestinian leader Yasser Arafat in November of 2012 found nearly 20 times the baseline amount of polonium in his bones, along with traces of the radioactive element in his clothes and the soil where he was laid to rest. Arafat died in 2004 from what was described as a stroke by his attending physician after a bout with the flu characterized by vomiting—a symptom that plagued Litvinenko immediately after his poisoning. The discovery of such a large concentration of polonium has changed the way historians and political scientists view Arafat’s death, with this finding fostering a growing movement to paint it as murder by an unknown culprit. This is not the first intimation of foul play surrounding Arafat’s death: his former adviser Bassam Abu Sharif publicly accused Israeli intelligence operatives of poisoning the Palestinian figurehead’s medicine and placing thallium in his food and drinking water.

The title “wonder drug” is thrown around frequently in the pharmaceutical world, but a small-molecule drug that can effectively treat lung, ovarian, bladder, cervical, and testicular cancer with fewer side effects than radiotherapy? The integration of platinum atoms into a small molecule yields a tool effective at treating a wide variety of cancers. Cis-diamminedichloroplatinum(II), which moonlights as the much-easier-to-say trade name cisplatin, is a simple molecule at the forefront of cancer treatment starring a single atom of platinum at its core. Structurally, cisplatin features chlorine, nitrogen, and hydrogen oriented at ninety-degree angles around the platinum core. Making cisplatin is not difficult; the reaction requires only four steps, with the difficulty of the synthesis on par with a typical lab session from an undergraduate student’s sophomore year. The high cost of the platinum materials, however, keeps the metal out of the teaching labs of even the wealthiest universities due to perceived waste and the thought that a devious lab student might run off with a bottle of platinum tetrachloride in the hope of purifying the platinum metal within. The discovery of cisplatin’s important role in the war on cancer came about as many great scientific achievements do—by complete accident. In a 1965 study of Escherichia coli—the fecal-matter component and model bacterium most often used by researchers—a trio of Michigan State University scientists observing the impact of electrical fields on bacteria noted that their cell samples quit replicating, an outcome that failed to correlate with their experimental logic. Like all good scientists, the researchers went into detective mode and began mentally dissecting every part of their experimental setup.
Their in-depth look revealed that the platinum metal used in the electrodes to create their experimental electrical fields was slowly leaching into the bacteria’s growth medium, inadvertently dosing the bacteria with platinum and causing the E. coli to grow to phenomenal sizes and bypass the life checkpoints that would trigger a fission process to create new cells. While the trio did not come across any interesting happenings when they placed their precious E. coli in a variety of electrical fields, they did discover that platinum could prevent bacteria from reproducing. The finding was warmly received by the medical world and led to the incorporation of cisplatin in cancer treatment by the end of the next decade. Cisplatin brings about apoptosis in cancer cells shortly after reacting with the cell’s DNA. Once bound to DNA, the information-carrying molecule becomes cross-linked and thus unable to divide—a step necessary for the cell to undergo its form of reproduction: fission. If tumor cells cannot reproduce, the runaway train of unbounded growth is halted. Cisplatin’s effect on DNA can also have another cancer-fighting effect—the wholesale destruction of cancer cells. A cell can attempt to repair DNA it determines can no longer divide; when those repair efforts fail—thanks to the presence of cisplatin—the cell starts its own self-destruction sequence, apoptosis, resulting in the destruction of the tumor cell. If apoptosis can be successfully triggered in enough cancer cells, the tumor will begin to shrink. Patients given cisplatin and two other drugs making use of similar platinum chemistry to achieve the same result—carboplatin and oxaliplatin—experience fewer side effects than those treated with radioactive materials, making the pharmaceutical a great option since it gained approval from the Food and Drug Administration in 1978.
The popularity of platinum in cancer treatment led medical researchers to investigate the possibility of antitumor properties in rhodium and ruthenium, metals often used in conjunction with platinum in catalytic converters, but with little success due to unforeseen toxic effects not observed with cisplatin.

Tantalum is a highly corrosion-resistant metal used to increase the efficiency of capacitors—a useful application that has allowed mobile devices to shrink in size or increase in processing power at a rapid pace in the past decade. Tantalum is found alongside the metals tin and tungsten,

Sadly, tantalum mining funded rebel factions during the Second Congo War (1998–2003), the bloodiest war since World War II, with five million people killed as a result of the fighting.

In a grim nod to the current strife surrounding tantalum, the metal’s name comes from the disturbing tale of the Greek mythological figure Tantalus. Tantalus’s life was awful—he lived in the deepest corner of the underworld, Tartarus, where he cut up and cooked his son Pelops as a sacrifice to the gods. His sins did not end there, however, as Tantalus forced the gods to unwittingly commit cannibalism by dining on Pelops’s appendages. To punish Tantalus for this gruesome gesture, the gods condemned him to a state of perpetual longing and temptation by placing him in a crystalline pool of water near a beautiful tree with low-hanging fruit. Whenever Tantalus raised his hands to grasp a piece of fruit, the delicate branches would move just out of reach; whenever he dipped down for a drink, the water pulled back from his cupped hand. Mythological lore finishes this image of eternal temptation by suspending a massive stone above Tantalus. He was condemned to a world of immense desires constantly within view but of which he was forever unable to partake, leaving him to perpetually starve against a backdrop of plenty.

Coal naturally contains uranium—one to four parts per million. This is not a lot of uranium, but it is a quantifiable amount of the radioactive material nonetheless. A heavy-duty train car like the BNSF Railway Rotary Open Top Hopper can carry a hundred tons of coal, and a hundred similar cars linked together carry a total of just over ten thousand tons. This run-of-the-mill train sounds a good bit more ominous after a quick calculation using the uranium content of coal in parts per million. After a few minutes of number crunching, a sensationalist could claim that the bituminous coal train is carrying between 20 and 80 pounds of uranium, and this hypothetical individual, in the midst of making a hysteria-inducing statement, would be correct. Although the movement of 80 pounds of uranium across the heartland of the United States resembles a plot point from a spy movie, black helicopters filled with FBI and Homeland Security agents will not be descending on the trains of North America anytime soon, because the uranium is safely split among millions of pieces of coal spread throughout the train. This is the same dispersal pattern we see with the distribution of rare earth metals in rocks and quarries. During World War II the United States and Germany did not destroy their coal mines to get a small allowance of uranium for the building of nuclear bombs—the coal by itself is far more valuable. Instead, these countries looked to well-known deposits featuring high concentrations of uranium to build their stockpiles.
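The number crunching above is easy to reproduce. A minimal sketch, using only the figures stated in the passage (100 cars of 100 tons each, 1 to 4 ppm uranium by mass):

```python
# Back-of-the-envelope check: uranium carried by a 100-car coal train.
cars = 100
tons_per_car = 100
pounds_per_ton = 2000
total_pounds = cars * tons_per_car * pounds_per_ton  # 20,000,000 lb of coal

# At 1-4 parts per million by mass:
for ppm in (1, 4):
    uranium_lb = total_pounds * ppm / 1_000_000
    print(f"{ppm} ppm -> {uranium_lb:.0f} lb of uranium")
# 1 ppm -> 20 lb of uranium
# 4 ppm -> 80 lb of uranium
```

The 20-to-80-pound range quoted in the text falls straight out of the ppm definition: one part per million of twenty million pounds is twenty pounds.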

Concentrated deposits of metals—often the only deposits worth mining—are created over millions of years.

The majority of the rare earth metals, including two of the most useful, niobium and tantalum, are found in igneous rock, leading to several theories that place the origin of rocks containing these metals in the slow release of rare earth element–rich magma from chambers deep below the surface of the earth. The formation could have taken place underground as small portions of magma exited the chamber and cooled slowly, or as the magma pushed through the surface and became the lava flows often associated with volcanic activity.

China’s available supply of rare metals rivals the material wealth of the oil underneath the sands of Saudi Arabia and the Middle East. A crippling share of the planet’s supply of rare earth metals lies in China—the United States Geological Survey estimates that more than 96% of the available supply of these metals sits within its borders, leaving the rest of the world to scrape for what remains beneath their own soil or to rely on Chinese-manufactured products.

The minerals containing tantalum, niobium, and other rare metals likely accumulated over the course of a four-hundred-million-year span in the Middle Proterozoic period,

While we will never truly know how such substantial quantities of varied metals gathered in this section of Inner Mongolia, a number of theories are bandied about by geologists.

The shuffling of Earth’s tectonic plates and the movement of lava during the periods of geologic tumult that characterized formation of our planet’s landmasses is central to the most prominent theories, with the possibility that the movement of magma could have triggered hydrothermal vents that pelted the earth at Bayan Obo with metals brought from deep below the surface. The rare metals present at Bayan Obo, and throughout the world, are found in the repeating, organized forms of familiar chemical compounds. These molecules typically consist of two atoms of the metal joined by three atoms of oxygen, with variations of the number of metal and oxygen atoms present. This odd couple forms a very stable type of chemical compound, the oxide. Thanks to this combination of metal and oxygen, the molecules are readily taken into mineral deposits. This stroke of luck is not without its own problems, however: the metals must be separated from oxygen before we can use them.

Despite its vast mineral wealth, Bayan Obo is far from the only reason China rose to dominate the rare earth markets during the first decade of the 21st century. Selling at astonishingly low prices is the clever move that made China the undisputed source for rare earths. By taking advantage of the abundant supply at Bayan Obo, Chinese producers of these metals all but ran the previous corporate leaders in the United States and Australia from the world market. Within a decade and a half this economic plan guided countries and corporations to the cheap and available supply of Bayan Obo, soon putting each at the mercy of China’s economic and political policies. It was a brilliant yet simple tactic, yielding the kind of sea change normally brought about only through the devastation of a war, but in this case without a single shot being fired. This brand of economic policy is convincing foreign corporations in Japan and the United States to open manufacturing plants and offices within China’s borders in the hope of securing favor and a continuous supply of the rare metals they rely on in manufacturing. Corporations willing to make the jump into China’s metal market are also positioning themselves wisely in the event that China radically increases export taxes on its metal supply, an ever-looming possibility that could destabilize market sectors overnight.

Will we see a day when the dependence on China for rare earth metals ceases? Not likely. The supply of rare earth metals could last several decades if not longer if China exercises wisdom in domestic and foreign economic policy. The rest of the world has little recourse in the face of price increases, as any cache of commercially viable rare metals would likely cost more to retrieve than those sold by corporations inside China. Even if countries drew the political ire of China or simply decided to forge their own path by exploring and making use of a newly found untapped deposit of metals within their borders, it could take well over a decade and phenomenal expense before a semblance of self-sufficiency is actually achieved.

North America has a few rare earth metal mining sites, with the crown jewel being the oft-maligned Mountain Pass site deep in California’s Mojave Desert.  The Mountain Pass site looks nothing like the series of caves and tunnels often associated with coal or gold mining. Molycorp’s prize, a gem tucked in the middle of the California sprawl and seventy-five miles from the nearest city, is more rock quarry than classical mine, with this hole in the face of the earth growing larger, one transit ring at a time as rocks containing mineral ore are transferred from the bottom to the surface and then to processing plants.

Mountain Pass performed well as the United States’ key source of rare metals well into the late 1990s, when two factors led to the closure of the site. China’s meteoric rise as a rare earth manufacturer came at the expense of Mountain Pass’s supply. Chinese corporations flooded the market with inexpensive rare earth metals, softening the international market for rare earths to the extent that it was no longer cost effective to maintain Mountain Pass.

Mountain Pass came under intense public scrutiny in 1997 after a series of environmental incidents. Chief among these problems were seven spills that sent a total of three hundred thousand gallons of radioactive waste from Mountain Pass across the Mojave Desert. Cleanup of these spills cost Chevron 185 million dollars, sending the United States’ most fruitful rare earth metal mine into a death spiral.

The mine stayed dormant until the price of rare earths increased in the past decade, when Chevron sold the mine to Molycorp, which spent an estimated 500 million dollars to resume operations. A risky move, but one with an underlying sense of wisdom if Mountain Pass could return to its former glory. To say that keeping a corporation, its workers, and its shareholders afloat in the rare earth mining industry is an arduous task would be an enormous understatement. Mining is a difficult if not damned industry, one where profit margins are eternally slim and political events can change the world stage in a handful of days, if not overnight. Before Molycorp and other mining entities can earn a single dollar, they must find and acquire a mineral-rich site, tear the prized rocks from the crust of the earth, and then carry out 30-plus refining steps to isolate a single rare earth metal. The financial markets of the world fluctuate the entire time, with minor changes bringing about a sea change in the mining world as commodity prices swing wildly.

For example, what if the state-owned corporate entities of China are encouraged by the nation’s government to limit exports to North America and Europe? Prices soar the next morning, quickly eating up every kilogram a company has in its reserves. But what about the opposite scenario—a private mining corporation announces the discovery of an unexplored cache of bastnäsite in Scotland? Prices plummet, and corporations across the world are forced to limit mining and processing efforts to ensure a market glut years in the future will not kill the industry.

Gold, platinum, tantalum, and several other rare and valuable metals are used in small quantities in smartphones and computers, but the employee skill sets and time necessary to obtain and refine these metals often makes metal-specific recycling efforts cost prohibitive.

Why are jewelry-grade precious metals used in electronics? It’s a simple answer—using the metals makes your electronics faster, more stable, and longer lasting. For example, gold is a spectacular conductor. As an added benefit, the noble metal doesn’t corrode, so gold-plated electronics do not experience a drop-off in efficiency over time. Gold is plated on HDMI cables and a plethora of computer parts in a very thin layer—a thickness commonly between three and fifteen micrometers (there are a thousand micrometers in a millimeter, if it has been a while since you’ve darkened the halls of a chemistry or physics department). This very thin, very light superficial coating—thinner than a flimsy plastic grocery store bag—is enough to enhance the efficiency of signal transfer, making it worthwhile to use gold over cheaper metals with similar behavior, like copper or aluminum.
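To get a feel for how little gold that plating actually involves, here is a rough sketch. The 1 cm² contact area is an illustrative assumption, not a figure from the text; gold’s density (19.3 g/cm³) is the standard value:

```python
# Rough estimate of the gold mass in a plated contact.
GOLD_DENSITY = 19.3   # g/cm^3, standard value for gold
area_cm2 = 1.0        # assumed plated area, for illustration only

for thickness_um in (3, 15):           # the 3-15 micrometer range from the text
    thickness_cm = thickness_um * 1e-4  # 1 micrometer = 1e-4 cm
    mass_mg = GOLD_DENSITY * area_cm2 * thickness_cm * 1000
    print(f"{thickness_um} um layer over 1 cm^2: ~{mass_mg:.1f} mg of gold")
```

Even at the thick end of the range, a square centimeter of plating carries well under a tenth of a gram of gold, which is why the performance gain is worth the material cost.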

The amateur scientists looking to recover gold and platinum from computer parts are not too different from the elderly men and women clad in socks and sandals who wander along beaches combing the sands with a small shovel and metal detector in hand. There is one major difference between these two groups of treasure seekers, however. Those performing at-home recycling and recovery from computer parts know where their treasure lies; it’s just a matter of performing a series of chemical reactions to retrieve the desired precious metals.

A number of companies sell precious metal recycling and refining kits on the Internet, with prices starting as low as seventy dollars, provided the amateur recycler already owns a supply of protective equipment and personally manages chemical waste disposal. More expensive kits make use of relatively safer electrolysis reactions—similar to the hair-removal method touted in pop-up kiosks at shopping malls. This slightly safer method brings with it a much higher price tag, with retail starter kits beginning in the $600 range before rising to several thousand dollars. This high price is the cost of doing business for someone with time and (literally) tons of discarded computer equipment to refine,

While the “scorched-earth” hobbyist approaches used by Ron and Anthony are dangerous, the Third World equivalent is disturbingly post-apocalyptic. Venturing into mountains of discarded monitors, desktop towers, and refrigerators, children and teenagers fight over sun-and-rain-exposed electronic parts in search of any metals—

Once electronic waste is deposited in the landfills of poor villages, the waste will not stay there for long. Locals in Accra and numerous small towns spread across India and China learned of the possibilities for parts from abandoned computer monitors, televisions, and towers and, like the hobbyists mentioned earlier, took up efforts to retrieve the precious components. In a society where economic prosperity and annual average incomes are measured in the hundreds and not tens of thousands of dollars, the few dollars one might make during a twelve-hour foray through massive piles of rubbish is well worth the effort and risk. The electronics wastelands littered throughout developing countries could not exist, however, without complicit partners in the destination countries. How do these relationships begin?

TOOLS OF THE POOR

Those who choose to make a living by retrieving electronic waste from dumps, tearing the equipment down, and refining the rare metals found within are exposed to many of the same hazards as our hypothetical hobbyists, but on a much larger scale. While inquisitive First World hobbyists like Anthony and Ron refine scrap for fun in their spare time, a recycler in the developing world performs the same work for 12 to 14 hours a day and with minimal protective equipment, due to the prohibitive cost of respirators, gloves, and goggles. They carry out these activities in an even more dangerous environment as well, exposing themselves to the physical hazards of landfills before the first step of metal recovery begins. Their tools are often crude. Workers place the metals in clay kilns or stone bowls and heat them over campfires. Heating the refuse loosens the solder present on many electronic parts—solder that is typically made of lead and tin. Children huddle over the fire as the scraps are heated to the point where the solder liquefies and a desired component can be pulled away for further processing. The cathode-ray tubes in older computer monitors—an item not even contemplated for recovery by First World hobbyists because of the danger and minimal reward—are boons for profit-seeking recyclers in the developing world. Tube monitors contain large amounts of lead dust—as much as seven pounds of lead in some models—and at the end of these fragile tubes is a coveted coil of copper. While copper is not the most precious of metals, it is valuable due to its many applications, turning the acquisition of one of these intact copper coils into a windfall for a working recycler. Smashing a monitor to retrieve the coil often involves shattering the lead-filled cathode-ray tube, doing a phenomenal amount of environmental damage while covering the worker with millions of lead particulates.
What is done with the unwanted scrap after the useful parts are plucked out is another problem altogether. In many situations, unwanted pieces are gathered into a burn pile and turned to ash, emitting harmful pollutants into the atmosphere. What remains in solid form is often deposited in waterways—Mother Nature’s trashcan—and coastal areas. There is rarely a municipal waste system in place to recover the unwanted scraps in these villages, and years of workers dumping broken and burnt leftovers into local streams have contaminated the soil and local water supply. Drinking water is already trucked into the recycling village of Guiyu from a nearby town due to an abundance of careless dumping. Cleaning the water system would likely be too costly and a losing battle if the landfill recyclers are unwilling to change their ways. The physiological impact of recycling electronic waste has been best studied among the inhabitants of China’s Guiyu village. Academic studies show children in Guiyu to have elevated levels of lead in their blood, leading to a decrease in IQ along with an increase in urinary tract infections and a sixfold rise in miscarriages.6 Many of the young workers flocking to the landfills feel compelled to sift through the electronic waste in order to provide for their elders under China’s one-child policy, a policy placing an undue financial burden on the current generation. In addition to complications from lead exposure, hydrocarbons released into the air during the burning of waste have led to an uptick in chronic obstructive pulmonary disease and other respiratory problems, as well as permanent eye damage. Fixing the long-term electronic waste problem in these villages is a complicated and costly proposition. Apart from a generation of children poisoned and possibly lost, this is a relatively new revenue source, with the oldest of the children involved just now entering their thirties.
The area of Guiyu was once known for its rice production, but a decade of pollution stemming from electronic waste dumping and refining has rendered the area unfit for agriculture.

Tantalum is particularly coveted for its use in electronics. The metal is stable up to 300 degrees Fahrenheit, a temperature well within the range of most industrial or commercial uses of the element. It works as an amazing capacitor, allowing for the size of hardware to become smaller—an evergreen trend in the world of consumer electronics. Tantalum is also useful for its acoustic properties, with filters made with the metal placed in smartphone handsets to increase audio clarity by reducing the number of extraneous frequencies. The metal can also be used to make armor-piercing projectiles. A run-of-the-mill smartphone has a little over 40 milligrams of tantalum—a piece roughly half the size of a steel BB gun pellet when one accounts for the variation in density between the metals.

Ammonium nitrate is a small molecule used as a fertilizer that can also be incorporated into explosive devices. Karzai enacted the ban in the hope of making it more difficult for the Taliban and other groups to fashion the homemade explosive devices used to kill NATO troops stationed in the region. Once denied access to ammonium nitrate, farmers in Afghanistan saw an astonishing drop-off in crop yields, yet they received little to no help from the Afghan government to transition away from ammonium nitrate after the ban. Farmers harvesting a nine-hundred-pound prune yield the previous year saw their yields plummet to one hundred and fifty pounds after Karzai’s ban. A drop in yields of as little as 5 or 10% would be very damaging to a developed country’s financial bottom line, but in a country in which 36% of the people live at or below the poverty line, the absence of ammonium nitrate is downright devastating. Farmers either had to raise the price of their produce or make the move to illegal opium farming to make a living. The allure of opium is, pardon the pun, intoxicating. Raw opium sells for several hundred dollars per pound, and with a probable harvest of roughly fifty pounds of poppies per acre, the attraction is strong for even the most pious of farmers.
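The scale of both shocks is worth making explicit. A quick sketch using the passage’s figures; the $300-per-pound opium price is an assumed midpoint of “several hundred dollars,” not a number from the text:

```python
# The prune-yield collapse after the ammonium nitrate ban.
before_lb, after_lb = 900, 150
drop = (before_lb - after_lb) / before_lb
print(f"Yield fell by {drop:.0%}")  # an 83% collapse, far beyond the 5-10% case

# Why opium tempts: assumed $300/lb (midpoint guess) times ~50 lb per acre.
price_per_lb = 300
yield_lb_per_acre = 50
print(f"~${price_per_lb * yield_lb_per_acre:,} of raw opium per acre")
```

An 83% yield collapse, set against a potential five-figure per-acre opium revenue, makes the economic pull described above concrete.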

While farmers suffered, the Taliban simply turned to a source not subject to Karzai’s ban to construct explosives: potassium chlorate, a chemical used in textile mills across the region. In addition, national and local government efforts to reduce environmental damage continually ran afoul of the Afghan people, including an environmentally conscious ban on the use of brick kilns and an effort to limit automobile traffic in the populous city of Mazar-e Sharif.  While their intentions were no doubt noble, the actions were shortsighted and resulted in decreased income for the vast numbers of the less well-off living and working in the city. These are excellent examples of the troubles such a developing country faces as it tries to advance its economy and infrastructure while at the same time doing minimal damage to the environment, a problem that continues to plague Afghanistan as the country tries to make the most of its vast resources. And when government mandates fail or a situation is in need of an immediate response, there is little money available to develop a solution. Erosion and deforestation are blights on the already parched earth of Afghanistan, turning more and more useful acreage into the desert that already covers the majority of the country. A 2012 initiative through Afghanistan’s National Environment Protection Agency set aside six million dollars to fight climate change and erosion, an embarrassingly small sum to dedicate to preserving the farmland that provides the livelihood for 79% of the country’s people.

A weak electrical system plagues the country as it lurches into the third decade of the twenty-first century. Blackouts limit access to electricity in a significant portion of the country to a mere one to two hours a day, putting modern necessities like refrigeration out of reach. Industrial efforts are also stymied by breakdowns in the electrical system, with money lost and manufacturing forced to halt due to frequent electrical outages.

Nine years into the United States’ war in Afghanistan, the Pentagon released the results of a US Geological Survey operation carried out to observe and catalog the potential rare earth resources in Afghanistan. The fabled 2010 report—already bolstered by rumors of a Pentagon memorandum christening Afghanistan the “Saudi Arabia of Lithium”—revealed a treasure trove of previously unknown mineral resources including gold, iron, and rare earth metals. Early speculation placed a one-trillion-dollar value on the accessible deposits, while separate estimates made by Chinese and Indian interests dwarf that figure, placing the mineral wealth of Afghanistan closer to three trillion dollars. There is a substantial problem, however—Afghanistan lacks the modern mining technology needed to tackle retrieval efforts.

The wealth reported in 2010 is likely a continuation of the work carried out by the US Geological Survey Mineral Resources Project, which aided members of Afghanistan’s sister group, the Afghanistan Geological Survey, from 2004 to 2007, to help the country’s government determine a workable baseline of their mineral wealth.18 While cynicism often reigns when we look at North American incursion into Afghanistan, this may not have been a solely profit-minded gesture, as the USGS also teamed up with the Afghan government to assess earthquake hazards as well as to catalog oil and gas resources in the country during the same time period.

The United States cannot produce useful quantities of eight of the 17 elements commonly labeled as rare earth metals—terbium, dysprosium, holmium, europium, erbium, thulium, ytterbium, and lutetium—because they simply do not exist within our borders.

According to the US Department of Defense, high-purity beryllium is necessary to “support the defense needs of the United States during a protracted conflict,” but procuring a supply is not easy. Making a case for the defense industry’s reliance on beryllium is easy. No fewer than five US fighter craft, including the F-35 Joint Strike Fighter that will be employed by the United States, Japan, Israel, Italy, and five other countries over the next several decades, rely on beryllium to decrease the mass of their frames in order to allow the nimble movements that make the planes even more deadly. Copper-beryllium alloys are a crucial component of electrical systems within manned craft and drones, along with x-ray and radar equipment used to identify bombs, guided missiles, and improvised explosive devices (IEDs). The metal also has a use far removed from such high-tech applications. Mirrors are fashioned out of beryllium and used in the visual and optical systems of tanks because the metal makes the mirrors resistant to vibrational distortion. High-purity beryllium is worth just under half a million dollars per ton when produced domestically, with Kazakhstan and Germany supplying the only significant amounts to the United States through import. In 2008 the Department of Defense approved the construction of a high-purity beryllium production plant in Ohio after coming to the conclusion that commercial domestic manufacturers could not supply enough of the processed metal for defense applications nor did sufficient foreign suppliers exist. While the plant in Ohio is owned by a private corporation, Materion, the Department of Defense is apportioned two-thirds of the plant’s annual output.

Lanthanum is the key component of nickel-metal hydride, with each Toyota Prius on the road requiring twenty pounds of lanthanum in addition to two pounds of neodymium. Like many of the rare earth metals, lanthanum is not as rare as the description would suggest; it is the separation and extraction of lanthanum that complicates matters and thereby results in the metal’s relative scarcity. With the Nissan Leaf and Tesla Motors’ Roadster becoming trendy choices for new car buyers, the need for lanthanum will remain and no doubt grow in the foreseeable future. The metal will become even more relevant as automobile manufacturers push the limits of battery storage, an effort that will require significantly more lanthanum for each car rolling off the assembly line.

In liquid fuel reactors, energetic uranium compounds are mixed directly with water, with no separation between nuclear fuel and coolant. Liquid fuel reactors can make use of lesser-quality uranium and appear to be safer at first glance because the plants do not need to operate under high pressure to prevent water from evaporating. On the downside, they pose an even larger contamination and waste storage problem than conventional solid fuel reactors. Since there is no separation between the cooling waters and uranium, much more waste is produced, waste that, in theory, must be stored for tens of thousands of years in geological repositories before the murky waters no longer pose a danger.

Thorium power plants would need constant maintenance and a highly skilled set of workers on around-the-clock watch to oversee energy production. This is not to say solid fuel nuclear power plants are worry-free, but the solid fuel plant is the comfortable dinner-and-a-movie alternative to taking a high-maintenance individual out for a night on the town. Why would molten salt plants need constant observation? Thorium molten salt reactors create xenon gas, a neutron-absorbing contaminant (a reactor “poison”) that must be monitored and removed to maintain safe and efficient energy generation. Because of this by-product, a thorium molten salt reactor would not succeed with just a technician overseeing a thoroughly automated plant but would require a squad of highly educated and dedicated engineers analyzing data and making changes around the clock. Luckily, most of the world’s current power plant employees are quite educated, but the act of retraining each and every worker is a substantial barrier that prevents the switch to thorium fuel plants in North America.

No country currently possesses a functional thorium plant, but China is on the inside track thanks to an aggressive strategy that aims to begin electricity generation by the second half of this decade. India is committed to generating energy using thorium as well, aiming to make use of their own extensive thorium reserves to meet 30% of their energy needs by 2050.

NEODYMIUM AND NIOBIUM

Neodymium—one of the two elements derived from Carl Gustaf Mosander’s incorrect, but accepted, discovery of didymium in 1841—is the basis of the most widely used permanent magnets, with the rare earth metal being found in hard drives and wind turbines as well as in lower-tech conveniences like the button clasp of a purse. Alongside neodymium, magnets made with the metal niobium are becoming increasingly necessary in recreational items, safety implements, electronics, and the tiny speakers contained in a three-hundred-dollar pair of headphones

Niobe is known as the daughter of Tantalus (for whom the rare metal element tantalum is named). Like her father, she is a thorn in the side of the gods.

Niobe is lucky in one part of life—she is the mother of 50 boys and 50 girls, and she takes a considerable amount of pride in this fact. Her pride is too much for Apollo and Artemis, the divine twins whose mother, Leto, bore only a single boy and girl; when Niobe gloats in their midst, Apollo and Artemis slay all 100 of Niobe’s offspring. Mass murder is not enough to quench the godly anger in this bummer of a story, as Apollo and Artemis take the scenario one step further and turn Niobe into stone.

Niobium, a metal typically used to make extremely strong magnets, is also quite stable and has the added bonus of mild hypoallergenic properties—a boon to the medical world in which niobium became an obvious choice for use in implantable devices, specifically pacemakers.

Magnetism and electricity go hand in hand in modern life—magnetic fields affect electrical fields and vice versa. This connection is used to create superconducting magnets, which run electrical current through metal coils to generate the strongest magnetic fields possible with our current understanding of technology. Winding the coil from a superconducting metal such as niobium, rather than an ordinary conductor, turns the basic run-of-the-mill electromagnet into a superconducting one.

Greenland has long been hypothesized to hold rich resources of rare earth metals, but attempts at commercial mining have been halted because uranium is commonly discovered during excavations of rare earth metals. When Greenland’s parliament overturned legislation banning the extraction of uranium, it also freed up the country for mining of its treasure troves of rare earth metals.

While pearls can be grown and harvested in a few short years, polymetallic nodules grow a mere half an inch in diameter over the course of a million years—not exactly the timetable we see with renewable resources. Once the last manganese nodule is harvested and refined, that will be the end of underwater rare metal mining.
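The growth rate implied by that half-inch-per-million-years figure underscores how non-renewable these nodules are. A quick unit conversion:

```python
# Nodule growth implied by 0.5 inch of diameter per million years.
INCH_TO_M = 0.0254  # meters per inch
growth_m_per_year = 0.5 * INCH_TO_M / 1_000_000
print(f"~{growth_m_per_year * 1e9:.1f} nanometers of diameter per year")
# ~12.7 nanometers per year
```

Roughly a dozen nanometers a year, a few dozen atoms of thickness, which is why a harvested nodule is gone for any human timescale.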

When nodule mining becomes a reality, the process will build upon the existing foundation put in place through the underwater mining of diamonds. The De Beers Corporation currently operates five full-time vessels for this purpose, with all five dedicated to sifting through shallow sediment beds off the coast of the African country of Namibia. The company found underwater operations far more efficient than above-ground mining efforts, as a fifty-man crew armed with state-of-the-art technology can match the output of three thousand traditional mine workers. Two methods used for underwater diamond mining are directly applicable to retrieving manganese nodules from the ocean floor. Drilling directly into the seabed is one retrieval option, penetrating deep below the floor to bring up broken-up rock, sediment, and nodules through alien-looking, mile-long tubes. Once the debris is brought to the hull of a mining ship, chemical and physical processes are used to sift through the cargo, with any undesired rock and sediment returned to the ocean floor. The second method shuns drilling and instead uses a combination of conveyor belts and hydraulic tubes to cover larger areas than are accessible by drilling.

 




Small Nations Have Big Plans for Nuclear Energy

  • Estonia inks MOU with Moltex for work on a Molten Salt Reactor
  • Romania inks MOU with NuScale for work on LWR type SMR
  • Ukraine plans consortium for work on Holtec SMR
  • Czech PM details plans to push for revised tender on new reactors
  • South Africa takes steps to reopen effort to secure nuclear reactors

Five small nations are moving ahead with plans to develop their own nuclear power stations. Two of the efforts involve U.S. developers of small modular reactors (SMRs). Here is a roundup of recent news items.

Estonia to Study Siting of Moltex Advanced Reactor

(WNN) Fermi Energia of Estonia has selected Moltex Energy as its preferred technology for its plans to establish carbon-free energy production in the Baltic region. Moltex Energy said this week that the two companies had signed a Memorandum of Understanding (MOU) which states their intention to work together, including a feasibility study for the siting of a Moltex advanced reactor and the development of a suitable licensing regime.

Moltex Conceptual Design

In its statement, Moltex Energy noted that Estonia generates the majority of its power from oil shale, but that this fossil fuel capacity will have been mostly retired by 2030.

Wind power in the Baltic provides some potential, but the country needs an alternative, reliable power source if it is to remain self-sufficient in energy, it said.

Estonia’s neighbors Latvia, Lithuania and Finland are all net importers of electricity. The intent of the MOU is to create a source of clean and safe power generation in Estonia which would represent an improvement in energy security for the whole region.

Simon Newton, business development director at Moltex, said: “Estonia is a vibrant, entrepreneurial and forward-looking economy and is the perfect place to benefit from the Moltex Stable Salt Reactor technology.” (video)

Kalev Kallemets, CEO of Fermi Energia, said:

“Our ambition is to deploy the first fourth generation small modular reactor in the EU, here in Estonia, by the early 2030s. We are delighted to be working closely with Moltex Energy on this vital project. It is important for Estonia to have its own source of clean, cheap energy and Moltex’s innovative technology has huge potential for us.”

UK-based Moltex Energy announced in July last year that it will build a demonstration SSR-W (Stable Salt Reactor – Wasteburner) at the Point Lepreau nuclear power plant site in Canada under an agreement signed with the New Brunswick Energy Solutions Corporation and NB Power. The firm is also pursuing market opportunities in the UK.

Moltex Energy’s SSR is a conceptual UK reactor design with no pumps (only small impellers in the secondary salt bath); it relies on convection from static vertical fuel tubes in the core to convey heat to the steam generators.

The fuel assemblies are arranged at the center of a tank half filled with the coolant salt which transfers heat away from the fuel assemblies to the peripheral steam generators, essentially by convection. Core temperature is 500-600°C, at atmospheric pressure.

How the Moltex Reactor Works

Conceptual Design of Moltex Fuel Assembly

The fuel is in the salt and is held in vented tubes.  The tubes are bundled into fuel assemblies similar to those in a conventional PWR. These are held in the support structure which forms the reactor modules. (Technical papers)(PDF files)

The tank is filled with a safe molten salt coolant, which is not pressurized like gas or water coolants in today’s power reactors and not violently reactive with air and water like sodium in today’s Fast Breeder reactors.

A second similar coolant salt system takes heat from the primary coolant salt to a patented GridReserve energy storage system.

GridReserve is a collection of molten salt storage tanks that stores gigawatt-scale thermal energy when it is not needed for electricity production. When demand goes up, say when renewable output drops, the plant can draw heat from both the reactor and the storage tanks to produce electricity. This is just like a concentrated solar power plant and uses the same solar salt, turning a 1 GW reactor into a 3 GW peaking plant.

Energy Flows in a Moltex GridReserve System

The GridReserve system appears to be a form of “load following,” not from the reactor itself, but from the stored heat in the secondary salt loop. This approach removes the burden of managing the reactor for this purpose.
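The 1 GW to 3 GW claim is just a daily energy balance: heat banked off-peak is discharged alongside the reactor’s own output during the peak. A minimal sketch, assuming an eight-hour peak window and ignoring storage and conversion losses (both assumptions are mine, not Moltex’s):

```python
# Back-of-envelope energy balance for the 1 GW -> 3 GW peaking claim.
# The 8-hour peak window is an illustrative assumption; losses are ignored.

REACTOR_GW = 1.0                    # continuous reactor output
PEAK_HOURS = 8                      # assumed daily peak window
OFF_PEAK_HOURS = 24 - PEAK_HOURS    # 16 hours of charging the salt tanks

# Off-peak: all reactor heat goes into the GridReserve salt tanks.
stored_gwh = REACTOR_GW * OFF_PEAK_HOURS        # 16 GWh banked

# Peak: reactor output plus the banked heat, discharged over the window.
peak_output_gw = REACTOR_GW + stored_gwh / PEAK_HOURS

print(peak_output_gw)  # prints 3.0
```

In practice turbine sizing and round-trip thermal losses would shave this figure, but the arithmetic shows why a modest reactor plus cheap salt tanks can behave like a much larger peaking plant.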

Refuelling is simple: fuel assemblies are moved sideways out of the core and replaced with fresh ones, giving a near on-line refuelling process.

The entire construction is simple, with no high-pressure systems, few moving parts, and no pressure vessel requiring specialist foundries. The reactor is continuously cooled by natural air flow, giving complete security against overheating in an accident. See this video for a “fly through” of the design.

The firm claims on its website that multiple versions of Stable Salt Reactors are possible. The first being developed now is a “waste burner.” This uses fuel produced by a new, low cost and very simple process from spent conventional reactor fuel.

Reduction in the radioactive life of the majority of that spent fuel from hundreds of thousands of years to just a few hundred years will effectively clean up a large part of the hazardous residue of the first nuclear era.

A second-generation Stable Salt Reactor design will be able to breed new nuclear fuel from depleted uranium and thorium. The firm also proposes to develop a graphite-moderated option to use conventional enriched uranium as fuel.

Romania to Explore Use of a NuScale SMR

(WNN) An agreement was inked this week between US small modular reactor (SMR) developer NuScale Power and Romanian energy company Societatea Nationala Nuclearelectrica SA (SNN SA) to explore the use of SMRs in Romania.

The two companies have signed a memorandum of understanding (MOU) covering the exchange of business and technical information on NuScale’s nuclear technology, with the goal of evaluating the development, licensing and construction of a NuScale SMR for a “potential similar long-term solution” in Romania.

John Hopkins, NuScale Power chairman and CEO, said the company was looking forward to collaborating with SNN SA “to determine what role NuScale’s technology can play in Romania’s energy future.”

SNN SA, also referred to as Nuclearelectrica, operates two Canadian-supplied CANDU units at Cernavoda that currently generate up to 20% of Romania’s electricity. The CANDU reactors use natural uranium and heavy water to achieve criticality; no enriched fuel is needed to run them. The company’s CEO, Cosmin Ghita, said:

“As the only nuclear power provider in Romania, we see great potential in SMRs because of the clean, safe, and affordable power they provide.”

Romania has been in negotiations with China since 2016 for development of two new CANDU-type nuclear reactors. Work already completed on CANDU Units 3 & 4 would be a springboard for their completion by CGN.

What NuScale Would Bring to Romania

NuScale’s SMR technology features the self-contained NuScale Power Module, with a gross capacity of 200 MWt (60 MWe) per module. Based on pressurized water reactor (PWR) technology, the scalable design can be used in power plants of up to 12 individual modules (two “six packs”), for a total of 720 MWe.

The technology is currently undergoing design certification review by the US Nuclear Regulatory Commission. The Utah Associated Municipal Power Systems is planning the development of a 12-module plant at a site at the Idaho National Laboratory, with deployment expected in the mid-2020s.

NuScale has released information on the cost-competitive nature of its SMR. The firm said on its website that the estimated construction cost for the first NuScale plant is about $3 billion, which works out, in round numbers, to $4,400/kW. The firm also said that total construction time, to mechanical completion but not commissioning, would be 54 months. In July 2018 the firm released information saying that it was working on further cost savings, with a target cost of $4,200/kW.

By comparison, CGN’s cost estimate for completion of the partially built twin CANDUs, at 720 MWe each, would come in at $5,070/kW.
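The per-kilowatt figures in these comparisons are simple ratios of capital cost to net capacity. A minimal sketch using the article’s round numbers (straight division lands near, not exactly on, the quoted figure, since the published numbers are themselves rounded):

```python
def cost_per_kw(capital_usd: float, capacity_mwe: float) -> float:
    """Overnight capital cost per kilowatt of net electrical capacity."""
    return capital_usd / (capacity_mwe * 1_000)

# 12 NuScale modules x 60 MWe each, ~$3 billion estimated construction cost.
nuscale = cost_per_kw(3.0e9, 12 * 60)
print(round(nuscale))  # prints 4167, in the neighborhood of the quoted $4,400/kW
```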

NuScale has also signed MOUs to explore the deployment of its SMR technology in Canada and Jordan. All of these agreements are highly conceptual and don’t involve, at this stage, any significant financial commitments.

History of New CANDUs for Cernavoda

According to the World Nuclear Association, in September 2014 China General Nuclear (CGN) submitted an offer to build the two units and was accepted as a qualified investor. In October of that year SNN designated CGN as the ‘selected investor’ for the project, and a letter of intent to proceed was signed by all parties. In November 2015 the two companies signed a further agreement for the development, construction, operation, and decommissioning of Cernavoda 3&4.

CGN is reported to hold a 51% equity position in the project. The state nuclear power corporation Societatea Nationala Nuclearelectrica (SNN) said the cost is €7.2 billion ($7.7 billion) for two 720 MW units.

In January 2016 the government concluded talks with CGN on the major areas of support and commitment associated with the project, including electricity market reform, tariff mechanisms, electricity sales, state guarantees, financial incentive policies, and continuity of those policies.

Construction is expected to resume at both units. Work had begun on unit 3 as part of a larger five-reactor expansion project, but only unit 2 was completed, in 2001. Preliminary work on units 3 & 4 started thereafter and then stopped. World Nuclear News has not updated its 2018 report to indicate that restart of work on Units 3 & 4 has taken place. Completion dates are said to be in the early 2020s.

The new reactors, Units 3 & 4, will be updated versions of the Candu 6, but not the full EC6 version, since the concrete structures are already built. Unit 3 is reported to be 53% complete and Unit 4 30% complete. These updated numbers indicate some continuing level of construction activity, since in 2017 WNN reported completion figures of 15% and 14%. The units will have an operating lifetime of 30 years with the possibility of a 25-year extension. Some 1,000 tonnes of heavy water have been produced and are in storage.

Holtec’s SMR-160 Attracts Attention in Ukraine

(WNN & wires) Holtec International has made progress with its work on an SMR-160 system through agreements with Energoatom and Exelon Generation announced during the winter meeting of the Holtec Advisory Council for SMR-160, held in February 2019 in Jupiter, Florida.

Holtec is privately held and keeps details of its development efforts closely guarded, so the news about the meeting of its advisory committee represents a rare look at progress on the 160 MW SMR.

The SMR-160 reactor is under review by the Canadian Nuclear Safety Commission and is in Phase 1 (pre-licensing review) of the three-phase evaluation cycle. The SMR field in Canada has become highly competitive, with nine other reactor vendors also in process for similar reviews. Two SMR developers have completed the Phase 1 process.

The State Nuclear Regulatory Inspectorate of Ukraine, the country’s nuclear regulatory authority, is expected to coordinate its regulatory assessment of the SMR-160 under a collaborative arrangement with its Canadian counterpart.

Energoatom President Yury Nedashkovsky announced plans to establish a consortium with Holtec and Ukraine’s national nuclear consultant, State Scientific and Technical Centre for Nuclear and Radiation Safety (SSTC-NRS). It will explore the environmental and technical feasibility of qualifying a ‘generic’ SMR-160 system that can be built and operated at any candidate site in the country.

A formal announcement of the adoption of the terms of engagement for the consortium is expected later this year.

At the same meeting, Holtec signed a memorandum of understanding with Exelon Generation, adding Exelon to the SMR-160 team, which currently includes SNC-Lavalin and Mitsubishi Electric.

Chris Mudrick, Exelon Generation senior vice president, Northeast Operations, said in the Holtec statement:

“As the largest nuclear operator in the United States, Exelon Generation is pleased to partner with Holtec to develop an operating model for the SMR-160. This project is a great example of how innovation and new technologies are bringing our industry together and driving the future of nuclear power.”

Under the terms of the MoU, Exelon Generation plans to support SMR-160’s market acceptance, develop a generic deployment schedule and staffing plan, and assist in improving its operability and maintainability features.

As SMR-160s are built around the globe, Exelon Generation could provide reactor operating services to customers that lack an established nuclear industrial infrastructure. This approach may facilitate entry into markets in small countries that otherwise might not be ready to adopt SMRs as part of their energy mix.

Holtec describes the SMR-160 as a “passive, intrinsically safe, secure and economical” small modular reactor that has the flexibility to be used in remote locations, in areas with limited water supplies or land, and in unique industrial applications where traditional larger reactors are not practical.

Advisory Committee Member Profiles

The meeting was led by the incoming advisory committee chairman, Michael Rencheck, CEO of Bruce Power, Canada, and attended by invited industry experts from several leading organizations, including Bruce Power, Energoatom, Entergy, Exelon Generation, Southern, Talen Energy, NEI, SNC-Lavalin, Mitsubishi Electric, and several major suppliers.  (Membership list and bios)

Czech PM Calls for Nuclear Expansion with State Controls

Reuters reports that Czech Prime Minister Andrej Babis has outlined the government’s plan to build a number of nuclear reactors, saying the state should control construction so it can halt the expansion should power prices fail to support the project. His statement clearly indicates the government still isn’t ready to take two key steps:

  • Establish a guarantee and basis for a rate floor for pricing the electricity from the reactors in order to attract investors
  • Buy out the minority institutional investors in CEZ, the majority state-owned electric utility, to cut off the prospect of lawsuits that might interfere with the project.

Even so, the government expects to sign a contract with majority state-owned CEZ to build one or more new reactors at Dukovany, with a tender toward the end of 2020 and a supplier chosen by 2024.

Babis said the government would not provide CEZ an unlimited state guarantee and that the utility would cover any extra costs not generated by the state regulators.

“The basic aim of the state should be to take control of construction of new nuclear capacity,” Babis told Reuters. “The state would get such control by signing a contract with CEZ on construction.”

The government has been considering how to fund a multi-billion-dollar expansion of CEZ’s nuclear power plants, before some units reach the end of their lifetime. Efforts to complete a previous tender for up to five new reactors, including several at Temelin, worth up to $25 billion, collapsed in 2014 when the government informed bidders it would not offer rate guarantees for electricity sold by the nuclear power stations.

South Africa Puts Its Toe Back in Nuclear Waters

(WNN) South Africa must consider nuclear as a clean energy source that can be part of its electricity generation mix, Energy Minister Jeff Radebe said in a speech at a business awards ceremony.

His statement comes as the South African government struggles to find a way to eliminate brownouts, ensure reliable electrical power for its heavy industry, and keep rates affordable for a struggling economy. Poverty is widespread in South Africa, which also has entrenched high unemployment.

The country also needs to find a way to make procurement of nuclear energy credible. The previous administration, led by President Jacob Zuma, inked a secret deal with Russia for eight 1200 MW VVER PWR type reactors with a price tag of just over $43 billion.

Not only could South Africa not afford the project, even with 50% financing from Rosatom, but the deal, when reported in the news media, generated a political firestorm. Charges of nepotism and corruption were also made as Zuma hired relatives to run key parts of the project.

The Energy and Finance ministries were caught by surprise by the news of the Russian deal. Eskom, the state-owned electric utility, said it didn’t have the funds to cover South Africa’s 50% cost share.

A public outcry over the lack of transparency in Zuma’s dealings with Rosatom led to cancellation of the project, coincident with the end of his term in office.

The new administration hasn’t yet updated its Integrated Resource Plan (IRP) to expand the role of nuclear energy in meeting the nation’s need for electricity, but recent comments by Radebe seem to indicate he’s heading in that direction.

“As a developing economy, plagued by high poverty and unemployment levels, the issue of reliable and affordable energy is critical,” Radebe said.

“We have to consider nuclear, and despite its high capital costs, we have not lost sight of the fact that this is a clean energy source that can contribute optimally for electricity generation,” the minister said.

Sustainable energy planning requires a “holistic approach” to planning for future energy needs, ensuring environmental and climate change issues, together with social development and economic growth, are all considered in a balanced manner, he said.

“We have come to realize that achieving these objectives simultaneously is no easy task as it entails juggling competing and often conflicting objectives. During the energy planning process, we therefore cannot discriminate against or favor any particular energy carriers,” he said.

The country cannot ignore its abundant coal reserves and the “relatively low” price of coal, he said, but this is “counter-balanced” by coal’s high carbon content, which is internalized through policy options including emissions reduction targets and the introduction of carbon taxes.

The Portfolio Committee on Energy, which provides parliamentary oversight for the work of South Africa’s energy department, in November said the IRP should make it explicit that both coal and nuclear will remain important elements of the country’s energy mix.

~ Other Nuclear News ~

Testing Complete for China’s Hualong One Fuel

(WNN) Long-term irradiation testing of China National Nuclear Corporation’s (CNNC) CF3 pressurized water reactor (PWR) fuel has been completed.

Four sets of CF3 fuel assemblies, which are designed for use in the Hualong One reactor, were loaded into Qinshan II unit 2 – a Chinese-designed CNP-600 PWR – in July 2014.

The assemblies have undergone poolside inspections during each fuelling cycle, CNNC said. Inspection results show that the performance of the design has met internationally accepted standards.

According to World Nuclear Association information, CF3 fuel assemblies are being manufactured at CNNC’s main PWR fuel fabrication plant at Yibin in Sichuan province, using fuel pellets from Kazakhstan’s Ulba Metallurgical Plant.

Hualong One reactors are currently under construction at Fuqing and Fangchenggang. Fuqing 5 and 6 are expected to start up in 2019 and 2020, as are Fangchenggang 3 and 4. The Hualong One promoted on the international market is called the HPR1000, two of which are under construction at Karachi in Pakistan.

The significance of the successful tests is that China is now on the road to being self-sufficient in fuel for its new Hualong One reactors, which are intended for export. Deals have been set in motion in the UK and Argentina, and two units are nearing completion in Pakistan.

Finland’s Next Nuclear Reactor, the Russian-Backed Fennovoima Project, Likely to be Delayed Beyond 2024

Reuters reports that a Finnish-Russian consortium’s plan to build a nuclear reactor in western Finland by 2024 is likely to be delayed, perhaps by as long as four years, as more time is needed to secure licenses, its chairman said.

“Normally when a plane departs late it arrives late. 2024 would be extremely ambitious if not unrealistic,” the consortium’s chairman, Esa Harmala, told Reuters.

Finland’s nuclear regulator STUK told Reuters it would make a decision on a license to start construction of the reactor, named Hanhikivi 1, in 2020, depending on getting the required documents from its owners, including Rosatom.

The Fennovoima consortium said in 2017 that it would submit the documents in 2018. Since then it has said it would get a permit in 2019, a year later than originally planned. STUK most recently said the documents would be submitted by July 2019.

That means the consortium may not get STUK approval until 2020 and would struggle to meet its target to start the plant in 2024.

The Fennovoima consortium includes Russia’s state nuclear company Rosatom, whose involvement has raised concerns in Finland about Russia’s influence in the country. A spokesman for Rosatom declined to comment on the delay in response to a Reuters inquiry, saying only that the company is still working toward a 2024 startup date.

The Finnish parliament approved the project to build the 1.2 gigawatt (GW) reactor, which is expected to cost 6.5 billion-7 billion euros ($7.5 billion-$8 billion), to boost domestic energy production.

# # #



It Sounds Crazy, But Fukushima, Chernobyl, And Three Mile Island Show Why Nuclear Is Inherently Safe

Fukushima was a public health catastrophe, just not one caused by radiation.


After a tsunami struck the Fukushima Daiichi nuclear plant in Japan eight years ago today, triggering the meltdowns of three reactors, many believed it would result in a public health catastrophe.

“By now close to one million people have died of causes linked to the Chernobyl disaster,” wrote Helen Caldicott, an Australian medical doctor, in The New York Times. Fukushima could “far exceed Chernobyl in terms of the effects on public health.”

Many pro-nuclear people came to believe that the accident was proof that the dominant form of nuclear reactor, which is cooled by water, is fatally flawed. They called for radically different kinds of reactors to make the technology “inherently safe.”

But now, eight years after Fukushima, the best-available science clearly shows that Caldicott’s estimate of the number of people killed by nuclear accidents was off by one million. Radiation from Chernobyl will kill, at most, 200 people, while the radiation from Fukushima and Three Mile Island will kill zero people.

In other words, the main lesson that should be drawn from the worst nuclear accidents is that nuclear energy has always been inherently safe.

The Shocking Truth

The truth about nuclear power’s safety is so shocking that it’s worth taking a closer look at the worst accidents, starting with the worst of the worst: Chernobyl.

The nuclear plant is in Ukraine, which in 1986, the year of the accident, was a Soviet republic. Operators lost control of an unauthorized experiment that resulted in the reactor catching fire.

There was no containment dome, and the fire spewed out radioactive particulate matter, which went all over the world, leading many to conclude that Chernobyl is not just the worst nuclear accident in history but is also the worst nuclear accident possible.

Twenty-eight firefighters died after putting out the Chernobyl fire. While the death of any firefighter is tragic, it’s worth putting that number in perspective. Eighty-six firefighters died in the U.S. in 2018, and 343 firefighters died during the September 11, 2001 terrorist attacks.

Since the Chernobyl accident, 19 first responders have died, according to the United Nations, for “various reasons” including tuberculosis, cirrhosis of the liver, heart attacks, and trauma. The U.N. concluded that “the assignment of radiation as the cause of death has become less clear.”

What about cancer? By 2065 there may be 16,000 thyroid cancers; to date there have been 6,000. Since thyroid cancer has a mortality rate of just one percent — it is an easy cancer to treat — expected deaths may be 160.
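The 160 figure follows directly from the two numbers just given:

```python
# Expected Chernobyl thyroid-cancer deaths, as computed in the text.
projected_cases_by_2065 = 16_000
mortality_rate = 0.01            # thyroid cancer is highly treatable

expected_deaths = projected_cases_by_2065 * mortality_rate
print(expected_deaths)  # prints 160.0
```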

The World Health Organization claims on its web site that Chernobyl could result in the premature deaths of 4,000 people, but according to Dr. Geraldine Thomas, who started and runs the Chernobyl Tissue Bank, that number is based on a disproven methodology.

“That WHO number is based on LNT,” she explained, using the acronym for the “linear no-threshold” method of extrapolating deaths from radiation.

LNT assumes that there is no threshold below which radiation is safe, but that assumption has been discredited over recent decades by multiple sources of data.

Support for the idea that radiation is harmless at low levels comes from the fact that people who live in places with higher background radiation, like Colorado, do not suffer elevated rates of cancer.

In fact, residents of Colorado, where radiation is higher because of high concentrations of uranium in the ground, enjoy some of the lowest cancer rates in the U.S.

Even relatively high doses of radiation cause far less harm than most people think. Careful, large, and long-term studies of survivors of the atomic bombings of Hiroshima and Nagasaki offer a compelling demonstration.

Cancer rates were just 10 percent higher among atomic blast survivors, most of whom never got cancer. Even those who received a dose 1,000 times higher than today’s safety limit saw their lives cut short by an average of 16 months.

But didn’t the Japanese government recently award a financial settlement to the family of a Fukushima worker who claimed his cancer was from the accident?

It did, but for reasons that were clearly political, and having to do with the Japanese government’s consensus-based, conflict-averse style, as well as lingering guilt felt by elite policymakers toward Fukushima workers and residents, who felt doubly aggrieved by the tsunami and meltdowns.

The worker’s cancer was highly unlikely to have come from Fukushima because, once again, the level of radiation workers received was far lower than the ones received by the Hiroshima/Nagasaki cohort that saw (modestly) higher cancer rates.

What about Three Mile Island? After the accident in 1979, Time Magazine ran a cover story that superimposed a glowing headline, “Nuclear Nightmare,” over an image of the plant. Nightmare? More like a dream. What other major industrial technology can suffer a catastrophic failure and not kill anyone?

Remember when the Deepwater Horizon oil drilling rig caught on fire and killed 11 people? Four months later, a Pacific Gas & Electric natural gas pipeline exploded just south of San Francisco and killed eight people sleeping in their beds. And that was just one year, 2010.

The worst energy accident of all time was the 1975 collapse of the Banqiao hydroelectric dam in China, which killed between 170,000 and 230,000 people.

Nuclear’s worst accidents show that the technology has always been safe for the same, inherent reason that it has always had such a small environmental impact: the high energy density of its fuel.

Splitting atoms to create heat, rather than splitting chemical bonds through fire, requires tiny amounts of fuel. A single Coke can of uranium can provide enough energy for an entire high-energy life.

When the worst occurs, and the fuel melts, the amount of particulate matter that escapes from the plant is insignificant in contrast to both the fiery explosions of fossil fuels and the daily emission of particulate matter from fossil- and biomass-burning homes, cars, and power plants, which kill seven million people a year.

Thanks to nuclear’s inherent safety, the best-available science shows that nuclear has saved at least two million lives to date by preventing the burning of biomass and fossil fuels. Replacing, or not building, nuclear plants thus results in more death.

In that sense, Fukushima did result in a public health catastrophe. Only it wasn’t one created by the tiny amounts of radiation that escaped from the plant.

Anxiety Displacement and Panic

The Japanese government, in the view of Chernobyl expert Geraldine Thomas and other radiation experts, contributed to the widespread view of radiation as a super-potent toxin by failing to return residents to Fukushima prefecture after the accident and by reducing radiation in soil and water to unnecessarily low levels.

The problem started with an over-evacuation. Sixty thousand people were evacuated, but only 30,000 have returned. While some amount of temporary evacuation might have been justified, there was simply never any reason for such a large and long-term evacuation.

About 2,000 people died from the evacuation, while others who were displaced suffered from loneliness, depression, suicide, bullying at school, and anxiety.

“With hindsight, we can say the evacuation was a mistake,” said Philip Thomas, a professor of risk management at the University of Bristol and leader of a recent research project on nuclear accidents. “We would have recommended that nobody be evacuated.”

Beyond the evacuation was the government’s massively exaggerated clean-up of the soil. To give you a sense of how exaggerated the clean-up was, consider that the Colorado plateau was and is more (naturally) radioactive than most of Fukushima after the accident.

“There are areas of the world that are more radioactive than Colorado and the inhabitants there do not show increased rates of cancer,” notes Dr. Thomas. And whereas radiation levels at Fukushima decline rapidly, “those areas stay high over a lifetime as the radiation is not the result of contamination but of natural background radiation.”

Even residents living in the areas with the highest levels of soil contamination were unaffected by the radiation, according to a major study of nearly 8,000 residents in the two to three years since the accident.

In 2017, while visiting Fukushima for the second time, I lost my cool over this issue. Jet-lagged and hungry, and witnessing the ridiculous and expensive bull-dozing of the region’s fertile topsoil into green plastic bags, I started grilling a scientist with the ministry of the environment.

Why were they destroying Fukushima’s precious topsoil in order to reduce radiation levels that were already far below any that posed a danger? Why was the government spending billions trying to do the same thing with the water near the plant itself? Was nobody in Japan familiar with mainstream radiation health science?

At first the government scientist responded by simply repeating the official line — they were remediating the top soil to remove the radiation from the accident.

I decided to force the issue. I repeated my question. My translator told me that the expert didn’t understand my question. I started arguing with my translator.

Then, at that moment, the government scientist started talking again. I could tell by the tone of his voice that he was saying something different.

“Every scientist and radiation expert in the world who comes here says the same thing,” he said. “We know we don’t need to reduce radiation levels for public health. We’re doing it because the people want us to.”

The truth of the matter had been acknowledged, and the tension that had hung between us had finally broken. “Arigato gozaimasu!” I said, genuinely grateful for the man’s honesty.

The man’s face was sad when he explained the situation, but he was also calmer. The mania behind his insistence that the “contaminated” topsoil had required “cleaning” had evaporated.

And I wasn’t mad anymore either, just relieved. I understood his dilemma. He had only been the repeating official dogma because his job, and the larger culture and politics, required him to.

Such has been the treatment of radiation fears by scientists and government officials, not just in Japan, for over 60 years.

There is no evidence that low levels of radiation hurt people, but rather than be blunt about that, scientists have, in the past, shaded the truth often out of a misguided sense of erring on the side of caution, but thereby allowing widespread misunderstanding of radiation to persist.

We also now know that when societies don’t use nuclear, they mostly use fossil fuels, not renewables. After Fukushima, Japan closed its nuclear plants and saw deadly air pollution skyrocket.

The biggest losers, as per usual, are the most vulnerable: those with respiratory diseases, such as emphysema and asthma, children, the elderly, the sick, and the poor, who tend to live in the most polluted areas of cities.

It’s also clear that people displace anxieties about other things onto nuclear accidents. We know from in-depth qualitative research conducted in the 1970s that young people in the early part of that decade were displacing fears of nuclear bombs onto nuclear plants.

Nuclear plants are viewed as little bombs and nuclear accidents are viewed as little atomic explosions, complete with fall-out and the dread of contamination.

It is impossible to view the Japanese public’s panicked overreaction to Fukushima and not see it as partly motivated by the horror of having seen 15,897 citizens instantly killed, and another 2,533 gone missing, after a tsunami hammered the region.

The sociologist Kyle Cleveland argues persuasively that Fukushima was a “moral panic,” in that the panic was motivated by a desire by the Japanese news media and public for revenge against an industrial and technical elite viewed as uncaring, arrogant, and corrupt.

Seeing Opportunity In Fear

After Fukushima, investors poured millions into so-called “advanced nuclear” start-up companies proposing to use chemicals, metals, or gases instead of water for cooling the uranium or thorium fuels in nuclear plants.

Often, they inadvertently reinforced the worst of the public’s fears. It’s one thing when anti-nuclear activists fear-monger about Fukushima, it’s quite another when supposedly pro-nuclear advocates do so.

Worse, the notion that one could look at the design of a nuclear plant and declare it safer than existing nuclear plants is transcience at best, pseudoscience at worst.

To compare the relative safety of different kinds of nuclear reactors one would need decades of operational data, which don’t exist for non-existent designs. And even then, one would likely need a lot more accidents and deaths to tease out any kind of correlation.

When pressed as to supposed safety advantages, advocates of radical innovation in nuclear often slip into claiming that this or that design will be far cheaper than today’s designs.

But the cheapest nuclear is the kind that humans have the most experience building, operating, and regulating. Slow, conservative, and incremental innovation is what has made nuclear plants cheaper, while radical innovation has made it more expensive.

Was anything better for the U.S. nuclear industry than Three Mile Island? Not a single nuclear industry executive would have said so at the time. But in the decades since, many of them came to believe precisely that.

In response to Three Mile Island, the nuclear industry stepped up training, checklists, and better oversight. The result was that nuclear plants in the U.S. went from operating at 55 percent to over 90 percent of the time.

Anti-nuclear activists have long claimed that there is a trade-off between nuclear safety and economics when it comes to the operation of plants, when in reality the opposite is the case. With improved performance came far higher income from electricity sales.

Might Japanese nuclear leaders look back on Fukushima the same way one day? That depends on what they do now.

To date, Japanese leaders have tried to make amends to the public for the Fukushima accident, but they’ve done so in ways that have reinforced the view of radiation as a super-potent toxin, and without building any greater trust in the technology.

For decades, nuclear leaders in Japan and the U.S. reinforced the notion that nuclear is an inherently dangerous technology, but one that they could control. When it became clear that they couldn’t control it, the public understandably assumed that they had been put in danger.

The truth is, in part, more reassuring. The radiant particulate matter that escapes from the worst nuclear accidents isn’t all that dangerous because there isn’t all that much of it.

But another lesson is that humans are never in absolute control of our technologies. If we were, then nobody would die from exploding natural gas pipelines, plane crashes, or collapsed hydroelectric dams.

The question is not how humans can gain absolute mastery, since that’s impossible, but rather which machines, on balance, deliver the most good with the least harm. On that metric, nuclear power has always been, inherently, the safest way to power civilization.

” readability=”425.200437693″>

Fukushima was a public health catastrophe, just not one caused by radiation.


After a tsunami struck the Fukushima Daiichi nuclear plant in Japan eight years ago today, triggering the meltdowns of three reactors, many believed it would result in a public health catastrophe.

“By now close to one million people have died of causes linked to the Chernobyl disaster,” wrote Helen Caldicott, an Australian medical doctor, in The New York Times. Fukushima could “far exceed Chernobyl in terms of the effects on public health.”

Many pro-nuclear people came to believe that the accident was proof that the dominant form of nuclear reactor, which is cooled by water, is fatally flawed. They called for radically different kinds of reactors to make the technology “inherently safe.”

But now, eight years after Fukushima, the best-available science clearly shows that Caldicott’s estimate of the number of people killed by nuclear accidents was off by one million. Radiation from Chernobyl will kill, at most, 200 people, while the radiation from Fukushima and Three Mile Island will kill zero people.

In other words, the main lesson that should be drawn from the worst nuclear accidents is that nuclear energy has always been inherently safe.

The Shocking Truth

The truth about nuclear power’s safety is so shocking that it’s worth taking a closer look at the worst accidents, starting with the worst of the worst: Chernobyl.

The plant is in Ukraine, which in 1986, the year of the accident, was a Soviet republic. Operators lost control of an unauthorized experiment, and the reactor caught fire.

There was no containment dome, and the fire spewed out radioactive particulate matter, which went all over the world, leading many to conclude that Chernobyl is not just the worst nuclear accident in history but is also the worst nuclear accident possible.

Twenty-eight firefighters died after putting out the Chernobyl fire. While the death of any firefighter is tragic, it’s worth putting that number in perspective. Eighty-six firefighters died in the U.S. in 2018, and 343 firefighters died during the September 11, 2001 terrorist attacks.

Since the Chernobyl accident, 19 first responders have died, according to the United Nations, for “various reasons” including tuberculosis, cirrhosis of the liver, heart attacks, and trauma. The U.N. concluded that “the assignment of radiation as the cause of death has become less clear.”

What about cancer? By 2065 there may be 16,000 thyroid cancers; to date there have been 6,000. Since thyroid cancer has a mortality rate of just one percent — it is an easy cancer to treat — expected deaths may be 160.
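The expected-death figure above is simple arithmetic on the numbers the article cites. A minimal sketch (using the article's figures; the ~1 percent mortality rate is the article's assumption, not independent data):

```python
# Worked arithmetic from the paragraph above: projected thyroid cancers
# multiplied by the cited ~1% mortality rate (thyroid cancer is highly
# treatable, hence the low rate).
projected_thyroid_cancers_by_2065 = 16_000
mortality_rate = 0.01

expected_deaths = projected_thyroid_cancers_by_2065 * mortality_rate
print(expected_deaths)  # → 160.0
```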

The World Health Organization claims on its web site that Chernobyl could result in the premature deaths of 4,000 people, but according to Dr. Geraldine Thomas, who started and runs the Chernobyl Tissue Bank, that number is based on a disproven methodology.

“That WHO number is based on LNT,” she explained, using the acronym for the “linear no-threshold” method of extrapolating deaths from radiation.

LNT assumes that there is no threshold below which radiation is safe, but that assumption has been discredited over recent decades by multiple sources of data.
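To make the extrapolation concrete, here is a minimal sketch of how an LNT-style estimate works. The ~5 percent fatal-cancer-per-sievert coefficient is an ICRP-style assumption, and the collective-dose figure is illustrative only, chosen to show how a WHO-scale death estimate falls out of the linear model:

```python
# Sketch of the linear no-threshold (LNT) extrapolation: predicted excess
# deaths scale linearly with collective dose, with no threshold below
# which radiation is assumed safe. The coefficient and dose figure are
# assumptions for illustration, not measured values.
RISK_PER_PERSON_SV = 0.05  # ICRP-style fatal-cancer risk (~5% per sievert)

def lnt_excess_deaths(collective_dose_person_sv: float) -> float:
    """Excess fatal cancers predicted by LNT for a given collective dose."""
    return collective_dose_person_sv * RISK_PER_PERSON_SV

# Under LNT, 80,000 person-sieverts spread across a large population
# predicts ~4,000 deaths, however tiny each individual's dose is.
print(lnt_excess_deaths(80_000))  # → 4000.0
```

The criticism in the text is precisely of this linearity: the model assigns deaths to arbitrarily small doses spread over arbitrarily many people.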

Support for the idea that radiation is harmless at low levels comes from the fact that people who live in places with higher background radiation, like Colorado, do not suffer elevated rates of cancer.

In fact, residents of Colorado, where radiation is higher because of high concentrations of uranium in the ground, enjoy some of the lowest cancer rates in the U.S.

Even relatively high doses of radiation cause far less harm than most people think. Careful, large, and long-term studies of survivors of the atomic bombings of Hiroshima and Nagasaki offer a compelling demonstration.

Cancer rates were just 10 percent higher among atomic blast survivors, most of whom never got cancer. Even those who received a dose 1,000 times higher than today’s safety limit saw their lives cut short by an average of 16 months.

But didn’t the Japanese government recently award a financial settlement to the family of a Fukushima worker who claimed his cancer was from the accident?

It did, but for reasons that were clearly political: the Japanese government’s consensus-based, conflict-averse style, and the lingering guilt felt by elite policymakers toward Fukushima workers and residents, who felt doubly aggrieved by the tsunami and the meltdowns.

The worker’s cancer was highly unlikely to have come from Fukushima because, once again, the level of radiation workers received was far lower than the ones received by the Hiroshima/Nagasaki cohort that saw (modestly) higher cancer rates.

What about Three Mile Island? After the accident in 1979, Time Magazine ran a cover story that superimposed a glowing headline, “Nuclear Nightmare,” over an image of the plant. Nightmare? More like a dream. What other major industrial technology can suffer a catastrophic failure and not kill anyone?

Remember when the Deepwater Horizon oil drilling rig caught on fire and killed 11 people? Four months later, a Pacific Gas & Electric natural gas pipeline exploded just south of San Francisco and killed eight people sleeping in their beds. And that was just one year, 2010.

The worst energy accident of all time was the 1975 collapse of the Banqiao hydroelectric dam in China. It collapsed and killed between 170,000 and 230,000 people.

Nuclear’s worst accidents show that the technology has always been safe for the same, inherent reason that it has always had such a small environmental impact: the high energy density of its fuel.

Splitting atoms to create heat, rather than splitting chemical bonds through fire, requires tiny amounts of fuel. A single Coke can of uranium can provide enough energy for an entire high-energy life.

When the worst occurs, and the fuel melts, the amount of particulate matter that escapes from the plant is insignificant in contrast to both the fiery explosions of fossil fuels and the daily emission of particulate matter from fossil- and biomass-burning homes, cars, and power plants, which kill seven million people a year.

Thanks to nuclear’s inherent safety, the best-available science shows that nuclear has saved at least two million lives to date by preventing the burning of biomass and fossil fuels. Replacing, or not building, nuclear plants, thus results in more death.

In that sense, Fukushima did result in a public health catastrophe. Only it wasn’t one created by the tiny amounts of radiation that escaped from the plant.

Anxiety Displacement and Panic

The Japanese government, in the view of Chernobyl expert Geraldine Thomas and other radiation experts, contributed to the widespread view of radiation as a super-potent toxin by failing to return residents to Fukushima prefecture after the accident, and by reducing radiation in soil and water to unnecessarily low levels.

The problem started with an over-evacuation. Sixty thousand people were evacuated, but only 30,000 have returned. While some amount of temporary evacuation might have been justified, there was never any reason for such a large and long-term evacuation.

About 2,000 people died from the evacuation, while others who were displaced suffered from loneliness, depression, suicide, bullying at school, and anxiety.

“With hindsight, we can say the evacuation was a mistake,” said Philip Thomas, a professor of risk management at the University of Bristol and leader of a recent research project on nuclear accidents. “We would have recommended that nobody be evacuated.”

Beyond the evacuation was the government’s massively exaggerated clean-up of the soil. To give you a sense of how exaggerated the clean-up was, consider that the Colorado plateau was and is more (naturally) radioactive than most of Fukushima after the accident.

“There are areas of the world that are more radioactive than Colorado and the inhabitants there do not show increased rates of cancer,” notes Dr. Thomas. And whereas radiation levels at Fukushima decline rapidly, “those areas stay high over a lifetime as the radiation is not the result of contamination but of natural background radiation.”

Even residents living in the areas with the highest levels of soil contamination were unaffected by the radiation, according to a major study of nearly 8,000 residents in the two to three years since the accident.

In 2017, while visiting Fukushima for the second time, I lost my cool over this issue. Jet-lagged and hungry, and witnessing the ridiculous and expensive bulldozing of the region’s fertile topsoil into green plastic bags, I started grilling a scientist with the Ministry of the Environment.

Why were they destroying Fukushima’s precious topsoil in order to reduce radiation levels that were already far below anything dangerous? Why was the government spending billions trying to do the same thing with water near the plant itself? Was nobody in Japan familiar with mainstream radiation health science?

At first the government scientist responded by simply repeating the official line: they were remediating the topsoil to remove the radiation from the accident.

I decided to force the issue. I repeated my question. My translator told me that the expert didn’t understand my question. I started arguing with my translator.

Then the government scientist started talking again. I could tell by the tone of his voice that he was saying something different.

“Every scientist and radiation expert in the world who comes here says the same thing,” he said. “We know we don’t need to reduce radiation levels for public health. We’re doing it because the people want us to.”

The truth of the matter had been acknowledged, and the tension that had hung between us had finally broken. “Arigato gozaimasu!” I said, genuinely grateful for the man’s honesty.

The man’s face was sad when he explained the situation, but he was also calmer. The mania behind his insistence that the “contaminated” topsoil had required “cleaning” had evaporated.

And I wasn’t mad anymore either, just relieved. I understood his dilemma. He had only been repeating the official dogma because his job, and the larger culture and politics, required him to.

Such has been the treatment of radiation fears by scientists and government officials, not just in Japan, for over 60 years.

There is no evidence that low levels of radiation hurt people. But rather than be blunt about that, scientists have in the past shaded the truth, often out of a misguided sense of erring on the side of caution, thereby allowing widespread misunderstanding of radiation to persist.

We also now know that when societies don’t use nuclear, they mostly use fossil fuels, not renewables. After Fukushima, Japan closed its nuclear plants and saw deadly air pollution skyrocket.

The biggest losers, as per usual, are the most vulnerable: those with respiratory diseases, such as emphysema and asthma, children, the elderly, the sick, and the poor, who tend to live in the most polluted areas of cities.

It’s also clear that people displace anxieties about other things onto nuclear accidents. We know from in-depth qualitative research conducted in the 1970s that young people in the early part of that decade were displacing fears of nuclear bombs onto nuclear plants.

Nuclear plants are viewed as little bombs and nuclear accidents are viewed as little atomic explosions, complete with fall-out and the dread of contamination.

It is impossible to view the Japanese public’s panicked overreaction to Fukushima and not see it as partly motivated by the horror of having seen 15,897 citizens instantly killed, and another 2,533 gone missing, after a tsunami hammered the region.

The sociologist Kyle Cleveland argues persuasively that Fukushima was a “moral panic,” in that the panic was motivated by a desire by the Japanese news media and public for revenge against an industrial and technical elite viewed as uncaring, arrogant, and corrupt.

Seeing Opportunity In Fear

After Fukushima, investors poured millions into so-called “advanced nuclear” start-up companies proposing to use chemicals, metals, or gases instead of water for cooling the uranium or thorium fuels in nuclear plants.

Often, they inadvertently reinforced the worst of the public’s fears. It’s one thing when anti-nuclear activists fear-monger about Fukushima, it’s quite another when supposedly pro-nuclear advocates do so.

Worse, the notion that one could look at the design of a nuclear plant and declare it safer than existing nuclear plants is trans-science at best, pseudoscience at worst.

To compare the relative safety of different kinds of nuclear reactors one would need decades of operational data, which don’t exist for non-existent designs. And even then, one would likely need a lot more accidents and deaths to tease out any kind of correlation.

When pressed as to supposed safety advantages, advocates of radical innovation in nuclear often slip into claiming that this or that design will be far cheaper than today’s designs.

But the cheapest nuclear is the kind that humans have the most experience building, operating, and regulating. Slow, conservative, and incremental innovation is what has made nuclear plants cheaper, while radical innovation has made them more expensive.

Was anything better for the U.S. nuclear industry than Three Mile Island? Not a single nuclear industry executive would have said so at the time. But in the decades since, many of them came to believe precisely that.

In response to Three Mile Island, the nuclear industry stepped up training, checklists, and better oversight. The result was that nuclear plants in the U.S. went from operating at 55 percent to over 90 percent of the time.

Anti-nuclear activists have long claimed that there is a trade-off between nuclear safety and economics when it comes to the operation of plants, when in reality the opposite is the case. With improved performance came far higher income from electricity sales.
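The safety-economics link is simple arithmetic: a plant's output, and hence its electricity sales, scale directly with the share of the time it runs. A back-of-envelope sketch, using a hypothetical 1,000 MW plant (plant size is an assumption for illustration; the 55 and 90 percent figures come from the text):

```python
# Why capacity factor (the share of hours a plant actually runs) drives
# plant income: generation, and hence sales, scale linearly with uptime,
# while the plant's fixed costs stay largely unchanged.
HOURS_PER_YEAR = 8760

def annual_mwh(capacity_mw: float, capacity_factor: float) -> float:
    """Electricity generated per year at a given capacity factor."""
    return capacity_mw * HOURS_PER_YEAR * capacity_factor

mwh_at_55 = annual_mwh(1000, 0.55)  # hypothetical 1,000 MW plant, pre-TMI
mwh_at_90 = annual_mwh(1000, 0.90)  # same plant after operational reforms

# Going from a 55% to a 90% capacity factor lifts output, and hence
# revenue, by roughly 64%.
print(round(mwh_at_90 / mwh_at_55, 2))  # → 1.64
```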

Might Japanese nuclear leaders look back on Fukushima the same way one day? That depends on what they do now.

To date, Japanese leaders have tried to make amends to the public for the Fukushima accident, but they’ve done so in ways that have reinforced the view of radiation as a super-potent toxin, and without building any greater trust in the technology.

For decades, nuclear leaders in Japan and the U.S. reinforced the notion that nuclear is an inherently dangerous technology, but one that they could control. When it became clear that they couldn’t control it, the public understandably assumed that they had been put in danger.

The truth is, in part, more reassuring. The radioactive particulate matter that escapes from the worst nuclear accidents isn’t all that dangerous, because there isn’t all that much of it.

But another lesson is that humans are never in absolute control of our technologies. If we were, then nobody would die from exploding natural gas pipelines, plane crashes, or collapsed hydroelectric dams.

The question is not how humans can gain absolute mastery, since that’s impossible, but rather which machines, on balance, deliver the most good with the least harm. On that metric, nuclear power has always been, inherently, the safest way to power civilization.