Long-term Mitigation: Fusion

A month ago, I was working on a series of blogs about the long-term impacts of and solutions for climate change. I got sidetracked and decided to follow two dramatic events as they unfolded. The first was the tax legislation that has now been passed in different forms in both the House and Senate; it is predicted to increase the deficit by 1.5 trillion dollars over the next ten years. The second was the White House’s formal approval of a detailed, congressionally-mandated report about the impacts of climate change on the US. Given the details of the predicted damage that climate change can inflict, I believe that the two decisions were contradictory and that the lawmakers’ actions are irrational.

The last blog (November 7, 2017) in the previous series, “Long Term Solutions: Energy,” was meant to segue into a focus on an ultimate solution, fusion:

Fusion power is a form of power generation in which energy is generated by using fusion reactions to produce heat for electricity generation. Fusion reactions fuse two lighter atomic nuclei to form a heavier nucleus, releasing energy. Devices designed to harness this energy are known as fusion reactors.

The fusion reaction normally takes place in a plasma of deuterium and tritium heated to millions of degrees. In stars, gravity contains these fuels. Outside of a star, the most researched way to confine the plasma at these temperatures is to use magnetic fields. The major challenge in realising fusion power is to engineer a system that can confine the plasma long enough at high enough temperature and density.

As a source of power, nuclear fusion has several theoretical advantages over fission. These advantages include reduced radioactivity in operation and as waste, ample fuel supplies, and increased safety. However, controlled fusion has proven to be extremely difficult to produce in a practical and economical manner. Research into fusion reactors began in the 1940s, but as of 2017, no design has produced more fusion energy than the energy needed to initiate the reaction, meaning all existing designs have a negative energy balance.[1]

Over the years, fusion researchers have investigated various confinement concepts. The early emphasis was on three main systems: z-pinch, stellarator and magnetic mirror. The current leading designs are the tokamak and inertial confinement (ICF) by laser. Both designs are being built at very large scales, most notably the ITER tokamak in France, and the National Ignition Facility laser in the USA. Researchers are also studying other designs that may offer cheaper approaches. Among these alternatives there is increasing interest in magnetized target fusion and inertial electrostatic confinement.

Stars are the energy generators of the universe. By definition, they all generate their energy through fusion in their cores. The most important source of this energy is hydrogen. All the hydrogen in the universe was created primordially in its first few minutes of formation (the Big Bang). The initial distribution of elements was roughly 75% hydrogen and 25% helium. So cosmologically, hydrogen is the primary energy source in the universe. It can be converted into other forms of energy by way of fusion reactions. Stars are defined by their ability to sustain fusion reactions: gravitational contraction raises their core temperatures above the ignition temperature of the fusion reaction. Star masses are limited to a range from about 0.1 to 100 solar masses (one solar mass being the mass of our sun). The upper limit arises because hydrogen “burning” accelerates sharply with the mass of the star, making the lifetimes of the biggest stars shorter as they grow; the lower limit is the smallest mass whose gravitational contraction is strong enough to ignite fusion of ordinary hydrogen in the core.

There are also objects called brown dwarfs that are smaller than 0.1 solar masses and thus unable to fuse normal hydrogen, but bigger than giant planets such as Jupiter; they can get some energy through fusion of deuterium (an isotope of hydrogen) and lithium:

Brown dwarfs are objects which have a size between that of a giant planet like Jupiter and that of a small star. In fact, most astronomers would classify any object with between 15 times the mass of Jupiter and 75 times the mass of Jupiter to be a brown dwarf. Given that range of masses, the object would not have been able to sustain the fusion of hydrogen like a regular star; thus, many scientists have dubbed brown dwarfs as “failed stars”.

The ultimate solution to our energy problems is to learn how to use fusion as our source of energy. Since immediately after the Second World War, we have known how to use fusion in a destructive capacity (hydrogen bombs) and have been earnestly trying to learn how to use it for peaceful applications such as generating electrical power. It is difficult. To start with, if we want to imitate our sun we have to create temperatures on the order of 100 million degrees Celsius. Before we can do that, we have to learn how to create or find materials that can remain stable at such temperatures: all the materials that we know of will completely decompose under those circumstances. Figure 1 illustrates the facilities engaged in this research and their progress. We are now closer than we have ever been to getting more energy out than we put in (ignition in the graph), but we are not there yet.

Figure 1 – Fusion experimental facilities and the plasma conditions they have reached

The vertical axis in Figure 1 represents a quantity called the triple product. The same site where I found the figure explains this quantity:

The triple product is a figure of merit used for fusion plasmas, closely related to the Lawson Criterion. It specifies that successful fusion will be achieved when the product of three quantities – n, the particle density of the plasma; τ, the confinement time; and T, the temperature – reaches a certain value. Above this value of the triple product, the fusion energy released exceeds the energy required to produce and confine the plasma. For deuterium-tritium fusion this value is about: nτT ≥ 5×10²¹ m⁻³ s keV. JET has reached values of nτT of over 10²¹ m⁻³ s keV.

In other words, the Joint European Torus (JET), located at Culham near Oxford, England, is more than 1/5th of the way there. The horizontal axis represents temperature on the Kelvin scale.

K = °C + 273
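To make the criterion concrete, here is a minimal sketch in Python. The threshold and the JET value are taken from the quote above; the example plasma parameters and the 100-million-degree figure are illustrative assumptions, not data from any specific machine.

```python
# Minimal sketch of the Lawson triple-product criterion for D-T fusion.
# Threshold and JET value come from the quoted JET explanation above;
# the "example" plasma parameters are hypothetical illustrations.

DT_THRESHOLD = 5e21          # required n*tau*T, in m^-3 * s * keV
JET_BEST     = 1e21          # value JET has exceeded, same units

def triple_product(density_m3, confinement_s, temperature_keV):
    """n * tau * T for a plasma, in m^-3 * s * keV."""
    return density_m3 * confinement_s * temperature_keV

def celsius_to_kelvin(t_celsius):
    """The conversion used for the horizontal axis: K = degrees C + 273."""
    return t_celsius + 273.0

# Hypothetical plasma: density 1e20 m^-3, confinement 3 s, temperature 15 keV.
example = triple_product(1e20, 3.0, 15.0)
print(f"example triple product: {example:.2e} (need {DT_THRESHOLD:.0e})")
print(f"meets ignition criterion? {example >= DT_THRESHOLD}")
print(f"JET is about {JET_BEST / DT_THRESHOLD:.0%} of the way there")
print(f"100 million degrees C is about {celsius_to_kelvin(1e8):.3e} K")
```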

In an interview with Scientific American, John Holdren, President Obama’s science adviser, summarized the history as well as the present state of the technology:

John Holdren has heard the old joke a million times: fusion energy is 30 years away—and always will be. Despite the broken promises, Holdren, who early in his career worked as a physicist on fusion power, believes passionately that fusion research has been worth the billions spent over the past few decades—and that the work should continue. In December, Scientific American talked with Holdren, outgoing director of the federal Office of Science and Technology Policy, to discuss the Obama administration’s science legacy. An edited excerpt of his thoughts on the U.S.’s energy investments follows.

Scientific American: Have we been investing enough in research on energy technologies?

John Holdren: I think that we should be spending in the range of three to four times as much on energy research and development overall as we’ve been spending. Every major study of energy R&D in relation to the magnitude of the challenges, the size of the opportunities and the important possibilities that we’re not pursuing for lack of money concludes that we should be spending much more.

But we have national labs that are devoted—

I’m counting what the national labs are doing in the federal government’s effort. We just need to be doing more—and that’s true right across the board. We need to be doing more on advanced biofuels. We need to be doing more on carbon capture and sequestration. We need to be doing more on advanced nuclear technologies. We need to be doing more on fusion, for heaven’s sake.

Fusion? Really?

Fusion is not going to generate a kilowatt-hour before 2050, in my judgment, but—

Hasn’t fusion been 30 years away for the past 30 years?

It’s actually worse than that. I started working on fusion in 1966. I did my master’s thesis at M.I.T. in plasma physics, and at that time people thought we’d have fusion by 1980. It was only 14 years away. By 1980 it was 20 years away. By 2000 it was 35 years away. But if you look at the pace of progress in fusion over most of that period, it’s been faster than Moore’s law in terms of the performance of the devices—and it would be nice to have a cleaner, safer, less proliferation-prone version of nuclear energy than fission.

My position is not that we know fusion will emerge as an attractive energy source by 2050 or 2075 but that it’s worth putting some money on the bet because we don’t have all that many essentially inexhaustible energy options. There are the renewables. There are efficient breeder reactors, which have many rather unattractive characteristics in terms of requiring what amounts to a plutonium economy—at least with current technology—and trafficking in large quantities of weapon-usable materials.

The other thing that’s kind of an interesting side note is if we ever are going to go to the stars, the only propulsion that’s going to get us there is fusion.

Are we talking warp drive?

No, I’m talking about going to the stars at some substantial fraction of the speed of light.

When will we know if fusion is going to work?

The reason we should stick with ITER [a fusion project based in France] is that it is the only current hope for producing a burning plasma, and until we can understand and master the physics of a burning plasma—a plasma that is generating enough fusion energy to sustain its temperature and density—we will not know whether fusion can ever be managed as a practical energy source, either for terrestrial power generation or for space propulsion. I’m fine with taking a hard look at fusion every five years and deciding whether it’s still worth a candle, but for the time being I think it is.

We know now that we can satisfy most of our needs and avert some of the predicted disaster if we use sustainable sources of electricity. If we can figure it out, fusion seems to be a good bet to solidify that trend.


Irrationality and the Future

Climate change is all about future impact; the aspects that we see and deal with right now are limited to the early warning signs of the process. The fact that the issues are global means that mitigating their impacts requires even more time than it might otherwise. Seventy percent of Earth is covered by oceans, which absorb a significant fraction of the additional heat and human-emitted greenhouse gases; the effects of climate change will therefore continue even after strong mitigation accomplishments such as a full global energy transition to non-carbon fuels. Many of the driving forces of climate change – such as deep-ocean temperature – are slow processes that need a long time to equilibrate (the IPCC labels the warming that continues after the world stabilizes its concentration of greenhouse gases “committed warming”). Action to stabilize atmospheric greenhouse gas concentrations needs to start now in order to affect temperatures in the future.

Yet people hate to pay now for promised benefits down the line (think about education!). The term economists use is “discounting the future.” There are debates as to the rate at which people discount it but there is overall consensus that the phenomenon exists. In a sense, the hatred of “pay now to prevent future losses” systems contradicts one of the main biases in human irrationality: loss aversion. But as long as the losses are predicted to materialize in the future, we would much rather not think about them at all, thank you!

Tversky and Kahneman (see my previous two blogs) also noticed some of the flaws inherent in asking people to make predictions. Wikipedia summarizes some important aspects of their thinking:

Kahneman and Tversky[1][2] found that human judgment is generally optimistic due to overconfidence and insufficient consideration of distributional information about outcomes. Therefore, people tend to underestimate the costs, completion times, and risks of planned actions, whereas they tend to overestimate the benefits of those same actions. Such error is caused by actors taking an “inside view“, where focus is on the constituents of the specific planned action instead of on the actual outcomes of similar ventures that have already been completed.

Kahneman and Tversky concluded that disregard of distributional information, i.e. risk, is perhaps the major source of error in forecasting. On that basis they recommended that forecasters “should therefore make every effort to frame the forecasting problem so as to facilitate utilizing all the distributional information that is available”.[2]:416 Using distributional information from previous ventures similar to the one being forecast is called taking an “outside view“. Reference class forecasting is a method for taking an outside view on planned actions.

Reference class forecasting for a specific project involves the following three steps:

1. Identify a reference class of past, similar projects.

2. Establish a probability distribution for the selected reference class for the parameter that is being forecast.

3. Compare the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
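To make the three steps concrete, here is a toy sketch in Python. All of the project numbers are hypothetical; the point is only to show how an “outside view” forecast is read off the distribution of a reference class rather than off the planner’s own optimistic estimate.

```python
# Toy sketch of reference class forecasting (all numbers hypothetical).
# Step 1: a reference class of past, similar projects (their cost overruns, in %).
# Step 2: the probability distribution of that parameter for the class.
# Step 3: compare the new project with that distribution.

import statistics

past_overruns_pct = [5, 12, 20, 25, 30, 35, 40, 55, 70, 110]     # step 1 (hypothetical)

median_overrun = statistics.median(past_overruns_pct)             # step 2
p80_overrun = sorted(past_overruns_pct)[int(0.8 * len(past_overruns_pct))]

inside_view_estimate_pct = 0   # the planner's optimistic "no overrun" forecast

# Step 3: the outside view says the most likely outcome is near the class median,
# and a prudent budget covers something like the 80th percentile of the class.
print(f"inside view: {inside_view_estimate_pct}% overrun")
print(f"outside view (median of reference class): {median_overrun}% overrun")
print(f"outside view (80th percentile): {p80_overrun}% overrun")
```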

Their argument about our innate optimism and natural disregard for risk (often termed the “planning fallacy”) closely resembles the “just world” hypothesis that I discussed in last week’s blog: we think of the world as operating in a rational manner in spite of abundant evidence that humans are not very rational creatures. You can get another window into Kahneman’s thinking via Morgan Housel’s interview with him on the topic:

Morgan Housel: When the evidence is so clear that the recent past is probably not going to predict the future, why is the tendency so strong to keep assuming that it will?

Dr. Kahneman: Well, people are inferring a sort of causal model from what they see, so when they see a certain pattern, they feel there is a force that is causing that pattern to, that pattern of behavior and behavior in the market and to continue. And then if the force is there, you see the extending. This is really a very, very natural way to think. When you see a movement in one direction, you are extrapolating that movement. You are anticipating where things will go and you are anticipating usually in a fairly linear fashion, and this is the way that people think about the future generally.

This sort of thinking about future impact carries over into how time biases play out in behavioral economics.

The first example here is from a paper by Benartzi and Thaler that introduces the concept of “Myopic Loss Aversion” in order to try to explain the puzzle of why stocks outperform bonds so strongly in long-term investments:

Myopic Loss Aversion and the Equity Premium Puzzle

Shlomo Benartzi, Richard H. Thaler

NBER Working Paper No. 4369
Issued in May 1993
NBER Program(s): AP

The equity premium puzzle, first documented by Mehra and Prescott, refers to the empirical fact that stocks have greatly outperformed bonds over the last century. As Mehra and Prescott point out, it appears difficult to explain the magnitude of the equity premium within the usual economics paradigm because the level of risk aversion necessary to justify such a large premium is implausibly large. We offer a new explanation based on Kahneman and Tversky’s ‘prospect theory’. The explanation has two components. First, investors are assumed to be ‘loss averse’ meaning they are distinctly more sensitive to losses than to gains. Second, investors are assumed to evaluate their portfolios frequently, even if they have long-term investment goals such as saving for retirement or managing a pension plan. We dub this combination ‘myopic loss aversion’. Using simulations we find that the size of the equity premium is consistent with the previously estimated parameters of prospect theory if investors evaluate their portfolios annually. That is, investors appear to choose portfolios as if they were operating with a time horizon of about one year. The same approach is then used to study the size effect. Preliminary results suggest that myopic loss aversion may also have some explanatory power for this anomaly

The key here is that frequently checking the portfolio’s performance effectively converts a long-term investment into a series of short-term evaluations, so loss aversion kicks in over the short horizon.
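A small simulation makes the point. The return parameters below are illustrative, roughly stock-like assumptions of my own, not figures from Benartzi and Thaler’s paper: an investor who checks every year sees losses far more often than one who only looks at the outcome after 30 years, so annual evaluation plus loss aversion makes stocks feel riskier than their long-run record suggests.

```python
# Sketch of "myopic loss aversion": how often a loss-averse investor *sees* a loss
# depends on how often the portfolio is evaluated. Return parameters are
# illustrative assumptions, not figures from Benartzi and Thaler's paper.

import random

random.seed(0)
MEAN, STDEV = 0.07, 0.17        # assumed annual stock return: 7% +/- 17%
YEARS, TRIALS = 30, 10_000

annual_losses = 0
horizon_losses = 0
for _ in range(TRIALS):
    wealth = 1.0
    for _ in range(YEARS):
        r = random.gauss(MEAN, STDEV)
        if r < 0:
            annual_losses += 1          # a "painful" year for the annual checker
        wealth *= 1.0 + r
    if wealth < 1.0:
        horizon_losses += 1             # a loss only if the 30-year outcome is down

print(f"chance a given annual check shows a loss: {annual_losses / (TRIALS * YEARS):.0%}")
print(f"chance the 30-year outcome is a loss:     {horizon_losses / TRIALS:.0%}")
```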

Cheng and He tried a direct experimental approach to the issue. Here I am including the abstract, with its summary of the project’s conclusions, and the experimental setup that was used to derive these conclusions:

Deciding for Future Selves Reduces Loss Aversion

Qiqi Cheng and Guibing He

Abstract

In this paper, we present an incentivized experiment to investigate the degree of loss aversion when people make decisions for their current selves and future selves under risk. We find that when participants make decisions for their future selves, they are less loss averse compared to when they make decisions for their current selves. This finding is consistent with the interpretation of loss aversion as a bias in decision-making driven by emotions, which are reduced when making decisions for future selves. Our findings endorsed the external validity of previous studies on the impact of emotion on loss aversion in a real world decision-making environment.

Tasks and Procedure

To measure the willingness to choose the risky prospect, we follow Holt and Laury (2002, 2005) decision task by asking participants to make a series of binary choices for 20 pairs of options (Table 1). The first option (Option A, the safe option) in each pair is always RMB 10 (10 Chinese Yuan) with certainty. The second option (Option B, the risky option) holds the potential outcomes constant at RMB 18 or 1 for each pair but changes the probabilities of winning for each decision, which creates a scale of increasing expected values. Because expected values in early decisions favor Option A while the expected values in later decisions favor Option B, an individual should initially choose Option A and then switch to Option B. Therefore, there will be a ‘switch point,’ which reflects a participant’s willingness to choose a risky prospect. The participants are told that each of their 20 decisions in the table has the same chance of being selected and their payment for the experiment will be determined by their decisions.
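To see how the switch point arises, here is a sketch of the kind of choice list described above. The exact probability grid (5%, 10%, …, 100%) is my own assumption for illustration; the paper’s Table 1 may differ. Option A pays RMB 10 for sure; Option B pays RMB 18 with probability p and RMB 1 otherwise, with p increasing across the 20 decisions. A risk-neutral decision maker switches where Option B’s expected value first exceeds 10; switching later indicates risk aversion, earlier indicates risk seeking.

```python
# Sketch of a Holt-and-Laury-style choice list as described above.
# The probability grid (5%, 10%, ..., 100%) is an assumed illustration;
# the original paper's Table 1 may differ.

SAFE = 10.0                      # Option A: RMB 10 with certainty
HIGH, LOW = 18.0, 1.0            # Option B: RMB 18 or RMB 1

def expected_value_B(p_win):
    """Expected payout of the risky option for a given winning probability."""
    return p_win * HIGH + (1.0 - p_win) * LOW

for decision in range(1, 21):
    p = decision * 0.05          # probability of winning RMB 18
    ev_b = expected_value_B(p)
    choice = "B (risky)" if ev_b > SAFE else "A (safe)"
    print(f"decision {decision:2d}: p = {p:.2f}, EV(B) = {ev_b:5.2f}, risk-neutral choice: {choice}")

# The risk-neutral switch point is where EV(B) first exceeds 10 (p just above 9/17).
```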

The conclusions from these discussions are clear: our biases are highly sensitive to the time horizon over which we are trying to make predictions, and thus directly affect mitigation efforts that have to start in the present. Climate change is an existential issue, so we don’t have the luxury of waiting until the damage is imminent; once we get there, it will be too late to avoid it.


Collective Irrationality and Individual Biases: Climate Change II

Last week I discussed some issues in the psychology of judgment and decision making; I feel that they need some clarification and expansion.

I looked at how highly educated Democrats and Republicans diverge sharply in their opinions about the extent to which human actions contribute to climate change. I explained the phenomenon through the concept of “following the herd.” That is, since we are unwilling/unable to learn all the details of a complicated issue such as climate change, we choose instead to follow the opinions of the people that we trust – in this case the leadership of the political parties. But we don’t ask ourselves how those people in power form their own opinions.

The second unsettled issue that came out of last week’s blog is how we apply “loss aversion” to climate change. In other words, there is a strong probability that if climate change is left unchecked we will lose big money in attempts to mitigate the damage it causes, yet many of us choose to ignore that prospect.

Some of our understanding of this dichotomy traces back to the 19th Century American philosopher and psychologist William James, who pioneered biological psychology – the mind-body phenomenon:

One of the more enduring ideas in psychology, dating back to the time of William James a little more than a century ago, is the notion that human behavior is not the product of a single process, but rather reflects the interaction of different specialized subsystems. These systems, the idea goes, usually interact seamlessly to determine behavior, but at times they may compete. The end result is that the brain sometimes argues with itself, as these distinct systems come to different conclusions about what we should do.

The major distinction responsible for these internal disagreements is the one between automatic and controlled processes. System 1 is generally automatic, affective and heuristic-based, which means that it relies on mental “shortcuts.” It quickly proposes intuitive answers to problems as they arise. System 2, which corresponds closely with controlled processes, is slow, effortful, conscious, rule-based and also can be employed to monitor the quality of the answer provided by System 1. If it’s convinced that our intuition is wrong, then it’s capable of correcting or overriding the automatic judgments

In the 1960s two Israeli psychologists, Daniel Kahneman and Amos Tversky, started an intellectual journey to establish the boundaries of human rationality. They had to account for the fact that humans evolved from other species – species known for their instinctive survival reflexes rather than their rationality. Kahneman and Tversky expanded upon the distinction made above. They reasoned that human thinking is based on two brain activities, located in different parts of the brain: the automatic and the rational brains, which Kahneman labeled Intuition (System 1) and Reasoning (System 2):

Daniel Kahneman provided further interpretation by differentiating the two styles of processing more, calling them intuition and reasoning in 2003. Intuition (or system 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning was based on formed habits and very difficult to change or manipulate. Reasoning (or system 2) was slower and much more volatile, being subject to conscious judgments and attitudes.[8]

Kahneman and Tversky’s efforts quickly spread to many other areas including economics and health care. Daniel Kahneman was recognized with the 2002 Nobel Prize in Economics (Amos Tversky passed away in 1996) and this year’s Economics Prize was given to Richard Thaler, one of the earliest practitioners to apply this work to the field of economics. Both Daniel Kahneman and Richard Thaler have written books on their efforts, targeted to the general public. As part of that general public, I read the books cover to cover. Kahneman’s book is called Thinking, Fast and Slow (Farrar, Straus and Giroux – 2011) and Thaler’s book with Cass Sunstein is titled Nudge (Yale University Press – 2008). Both books are best-sellers. There’s also a new book by Michael Lewis about Tversky and Kahneman’s careers together: The Undoing Project (W.W. Norton – 2017). I searched all three books for direct references to climate change. Thaler and Sunstein’s book has a small chapter on environmental issues that I will summarize toward the end of this blog. Lewis’ book doesn’t have a searchable index and it’s been a few months since I read it, so it is difficult to recall specifics. Kahneman’s book has a searchable index but I didn’t find any directly relevant references there either.

I was fortunate to come across the transcript of a talk that Daniel Kahneman gave on April 18, 2017 to the Council on Foreign Relations. The meeting was presided over by Alan Murray, Chief Content Officer at Time magazine. At the conclusion of his talk Kahneman agreed to answer questions from the audience. Two of the questions directly referred to climate change:

Q: Hi. I’m Jack Rosenthal, retired from The New York Times. I wonder if you’d be willing to talk a bit about the undoing idea and whether it’s relevant in the extreme to things like climate denial.

KAHNEMAN: Well, I mean, the undoing idea, the Undoing Project, was something that I—well, it’s the name of a book that Michael Lewis wrote about Amos Tversky and me. But it originally was a project that I engaged in primarily. I’m trying to think about how do people construct alternatives to reality.

And my particular, my interest in this was prompted by tragedy in my family. A nephew in the Israeli air force was killed. And I was very struck by the fact that people kept saying “if only.” And that—and that “if only” has rules to it. We don’t just complete “if only” in any—every which way. There are certain things that you use. So I was interested in counterfactuals. And this is the Undoing Project. Climate denial, I think, is not necessarily related to the Undoing Project. It’s very powerful, clearly. You know, the anchors of the psychology of climate denial is elementary. It’s very basic. And it’s going to be extremely difficult to overcome.

MURRAY: When you say it’s elementary, can you elaborate a little bit?

KAHNEMAN: Well, the whether people believe or do not believe is one issue. And people believe in climate and climate change or don’t believe in climate change not because of the scientific evidence. And we really ought to get rid of the idea that scientific evidence has much to do with people’s beliefs.

MURRAY: Is that a general comment, or in the case of climate?

KAHNEMAN: Yeah, it’s a general comment.

MURRAY: (Laughs.)

KAHNEMAN: I think it’s a general comment. I mean, there is—the correlation between attitude to gay marriage and belief in climate change is just too high to be explained by, you know.

MURRAY: Science.

KAHNEMAN: —by science. So clearly—and clearly what is people’s beliefs about climate change and about other things are primarily determined by socialization. They’re determined—we believe in things that people that we trust and love believe in. And that, by the way, is certainly true of my belief in climate change. I believe in climate change because I believe that, you know, if the National Academy says there’s climate change, but…

MURRAY: They’re your people.

KAHNEMAN: They’re my people.

MURRAY: (Laughs.)

KAHNEMAN: But other people—you know, they’re not everybody’s people. And so this, I think—that’s a very basic part of it. Where do beliefs come from? And the other part of it is that climate change is really the kind of threat for which—that we as humans have not evolved to cope with. It’s too distant. It’s too remote. It just is not the kind of urgent mobilizing thing. If there were a meteor, you know, coming to earth, even in 50 years, it would be completely differently. And that would be—people, you know, could imagine that. It would be concrete. It would be specific. You could mobilize humanity against the meteor. Climate change is different. And it’s much, much harder, I think.

MURRAY: Yes, sir, right here.

Q: Nise Aghwa (ph) of Pace University. Even if you believe in evidence-based science, frequently, whether it’s in medicine, finance, or economics, the power of the tests are so weak that you have to rely on System 1, on your intuition, to make a decision. How do you bridge that gap?

KAHNEMAN: Well, you know, if a decision must be made, you’re going to make it on the best way—you know, in the best way possible. And under some time pressure, there’s no time for deliberation. You just must do, you know, what you can do. That happens a lot.

If there is time to reflect, then in many situations, even when the evidence is incomplete, reflection might pay off. But this is very specific. As I was saying earlier, there are domains where we can trust our intuitions, and there are domains where we really shouldn’t. And one of the problems is that we don’t know subjectively which is which. I mean, this is where some science and some knowledge has to come in from the outside.

MURRAY: But it did sound like you were saying earlier that the intuition works better in areas where you have a great deal of expertise..

KAHNEMAN: Yes.

MURRAY: —and expertise.

KAHNEMAN: But we have powerful intuitions in other areas as well. And that’s the problem. The real problem—and we mentioned overconfidence earlier—is that our subjective confidence is not a very good indication of accuracy. I mean, that’s just empirically. When you look at the correlation between subjective confidence and accuracy, it is not sufficiently hard. And that creates a problem.

Kahneman clearly says that he doesn’t think people make up their minds about life based on science and facts – and that is especially true when it comes to climate change. He acknowledges how intuition and reason each play parts in the way we make decisions – sometimes to our own detriment.

The environmental chapter in Thaler and Sunstein’s book Nudge, “Saving the Planet,” looks at climate change as well as how we can shape our own minds and others’. The authors explore the possibility that a few well-thought-out nudges and better choice architecture might reduce greenhouse gas emissions. There is a separate chapter devoted to choice architecture itself. Here is the key paragraph that explains the concept:

If you indirectly influence the choices other people make, you are a choice architect. And since the choices you are influencing are going to be made by humans, you will want your architecture to reflect a good understanding of how humans behave. In particular, you will want to ensure that the Automatic System doesn’t get all confused.

The book emphasizes well-designed free choices, an approach the authors call Libertarian Paternalism, as contrasted with regulation (command and control) – the prevailing approach to environmental government activities. Thaler and Sunstein mention Garrett Hardin’s article, “The Tragedy of the Commons” (see explanations and examples in the July 2, 2012 blog), which points out that people don’t get feedback on the environmental harm that they inflict. They say that governments need to align incentives. They discuss two kinds of incentives: taxes (negative – we want to avoid them) and cap-and-trade (see the November 10, 2015 blog) (positive – we want to maximize profits). The book offers some pointers on how to account for the fact that the players are humans: redistribute the revenues from either cap-and-trade or carbon taxes, and provide feedback to consumers about the damage that polluters impose. One example the authors use to illustrate the effectiveness of such nudges is the mandatory messages about the risks of cigarette smoking. They also recommend trying to find ways to incorporate personal energy audits into the choices that people make.


Collective Irrationality and Individual Biases: Climate Change

Last week’s blog looked at the connections between the latest effort to rewrite our tax code and the detailed accounting of the resources we will need to compensate for the increasing damage that climate change will inflict on us in a business-as-usual scenario. This kind of “dynamic scoring” is a political manifestation of a cognitive bias that we call “loss aversion.” This phenomenon has become one of the pillars of the psychology of judgment and decision making, as well as the foundation for the area of behavioral economics. In short, we are driven by fear: we put much more effort into averting losses than into maximizing possible gains.

With climate change and tax policy, however, we seem to do the reverse.

Well, psychology seems to have an answer to this too.

Here is a segment from the concluding chapter of my book, Climate Change: The Fork at the End of Now, which was published (Momentum Press) in 2011.

I was asking the following question:

Why do we tend to underestimate risks relating to natural hazards when a catastrophic event has not occurred for a long time? If the catastrophic events are preventable, can this lead to catastrophic inaction?

I tried to answer it this way:

My wife, an experimental psychologist and now the dean of research at my college, pointed out that social psychology has a possible explanation for inaction in the face of dire threats, mediated by a strong need to believe that we live in a “just world,” a belief deeply held by many individuals that the world is a rational, predictable, and just place. The “just world” hypothesis also posits that people believe that beneficiaries deserve their benefits and victims their suffering.7 The “just world” concept has some similarity to rational choice theory, which underlies current analysis of microeconomics and other social behavior. Rationality in this context is the result of balancing costs and benefits to maximize personal advantage. It underlies much of economic modeling, including that of stock markets, where it goes by the name “efficient market hypothesis,” which states that the existing share price incorporates and reflects all relevant information. The need for such frameworks emerges from attempts to make the social sciences behave like physical sciences with good predictive powers. Physics is not much different. A branch of physics called statistical mechanics, which is responsible for most of the principles discussed in Chapter 5 (conservation of energy, entropy, etc.), incorporates the basic premise that if nature has many options for action and we do not have any reason to prefer one option over another, then we assume that the probability of taking any action is equal to the probability of taking any other. For large systems, this assumption works beautifully and enables us to predict macroscopic phenomena to a high degree of accuracy. In economics, a growing area of research is dedicated to the study of exceptions to the rational choice theory, which has shown that humans are not very rational creatures. This area, behavioral economics, includes major contributions by psychologists.

Right now, instead of trying to construct policies that will minimize our losses, we are just trying to present those possible losses as nonexistent. We are trying to pretend that the overwhelming science that predicts those losses for business-as-usual scenarios is “junk science” and that climate change is a conspiracy that scientists have created so they can get grants for research.

I too am guilty of cognitive bias when it comes to climate change.

A few days ago, a distinguished physicist from another institution was visiting my department. He is very interested in environmental issues and, along with two other physicists, is in the process of publishing a general education textbook, “Science of the Earth, Climate and Energy.”

During dinner he took a table napkin and drew curves similar to those shown in Figure 1 and asked me for my opinion. I had never seen such a graph before and it went against almost everything I knew, so I tried to dismiss it. The dinner was friendly so we let it go.

A few days later, an article in The New York Times backed him up:

Figure 1 – Extent of agreement that human actions have contributed to climate change among Republicans and Democrats in the US (NYT)

The article looked at other tendencies in the attitudes of the two parties based on educational level and they showed much less disparity:

On most other issues, education had little effect. Americans’ views on terrorism, immigration, taxes on the wealthiest, and the state of health care in the United States did not change appreciably by education for Democrats and Republicans.

Only a handful of issues had a shape like the one for climate change, in which higher education corresponded with higher agreement among Democrats and lower agreement among Republicans.

So what distinguishes these issues, climate change in particular?

First, climate change is a relatively new and technically complicated issue. On these kinds of matters, many Americans don’t necessarily have their own views, so they look to adopt those of political elites. And when it comes to climate change, conservative elites are deeply skeptical.

This can trigger what social scientists call a polarization effect, as described by John Zaller, a political scientist at the University of California, Los Angeles, in his 1992 book about mass opinion. When political elites disagree, their views tend to be adopted first by higher-educated partisans on both sides, who become more divided as they acquire more information.

It may be easier to think about in terms of simple partisanship. Most Americans know what party they belong to, but they can’t be expected to know the details of every issue, so they tend to adopt the views of the leaders of the party they already identify with.

For comparison, here’s the layout of voter turnout in the 2016 elections:

Figure 2 – Voter turnout and preference in the 2016 election by education

In behavioral economics, the NYT’s explanation of the diverging attitudes with regard to climate change is called “Following the Herd” (Chapter 3 in Nudge by Richard Thaler and Cass Sunstein).

I will expand on this in the next blog.


Dynamic Scoring: Taxes and Climate Change

Our government’s executive and legislative branches are in the midst of discussing two important issues: tax breaks and climate change. Well, in truth, the only real discussion going on has to do with the tax legislation. Climate change is only being addressed indirectly – the Trump administration officially approved the congressionally-mandated climate change report (discussed previously in the August 15, 2017 blog) that was compiled by scientists from 13 government agencies. It now becomes an “official” document, in spite of the fact that the White House claims the President never read it. Meanwhile, right now in Bonn, Germany, the 23rd Conference of the Parties (COP23) to the United Nations Framework Convention on Climate Change (UNFCCC) is taking place. One hundred ninety-five nations are there to discuss implementation of the 2015 Paris Agreement. As we all remember, President Trump announced in June his intention to withdraw the US from this agreement. Nevertheless, the US is a full participant in this meeting. Syria has just announced that it will be joining the agreement, making the US the only country in the world to back out.

Let me now come back to taxes, starting with Forbes magazine’s “Tax Reform for Dummies”:

You see, they were planning to repeal Obamacare using something called the “budget reconciliation process,” and pay attention, because this becomes relevant with tax reform. Using this process, Congress can pass a bill that’s attached to a fiscal year budget so long as:

  1. The bill directly impacts revenue or spending, and
  2. The bill does not increase the budget deficit after the end of the ten-year budget window (the so-called “Byrd Rule”).

More importantly, using the reconciliation process, Republicans can pass a bill with only a simple majority in the Senate (51 votes), rather than the standard 60. And as mentioned above, the GOP currently holds 52 seats in the Senate, meaning it could have pushed through its signature legislation without a SINGLE VOTE from a Democrat, which is particularly handy considering that vote was never coming.

Well, the Republicans in Congress were able to pass this resolution with a simple majority, specifying that, with dynamic scoring, the bill would not increase the budget deficit beyond the specified 10-year window. Within that window, they set the limit for this accounting at an added deficit of no more than $1.5 trillion (1,500 billion to those of us who need help with big numbers). Here is an explanation of dynamic scoring:

Tax, spending, and regulatory policies can affect incomes, employment, and other broad measures of economic activity. Dynamic analysis accounts for those macroeconomic impacts, while dynamic scoring uses dynamic analysis in estimating the budgetary impact of proposed policy changes.
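A toy numerical sketch of the difference may help; every number below is made up purely for illustration and is not an official estimate. Static scoring counts only the mechanical revenue loss of a tax cut over the ten-year window, while dynamic scoring credits back some revenue from the extra growth the cut is assumed to induce.

```python
# Toy sketch of static vs. dynamic scoring of a tax cut over a 10-year budget
# window. Every number here is a made-up illustration, not an official estimate.

BASE_REVENUE = 3_400            # federal revenue in year 0, $ billions (assumed)
BASELINE_GROWTH = 0.02          # assumed annual economic growth without the cut
EXTRA_GROWTH = 0.003            # assumed extra growth credited to the tax cut
CUT_SHARE = 0.05                # the cut removes 5% of revenue each year (assumed)

static_cost = 0.0
dynamic_cost = 0.0
for year in range(1, 11):
    baseline = BASE_REVENUE * (1 + BASELINE_GROWTH) ** year
    boosted  = BASE_REVENUE * (1 + BASELINE_GROWTH + EXTRA_GROWTH) ** year
    static_cost  += baseline * CUT_SHARE                         # mechanical revenue loss
    dynamic_cost += baseline * CUT_SHARE - (boosted - baseline)  # loss minus growth feedback

print(f"static 10-year cost:  ${static_cost:,.0f} billion")
print(f"dynamic 10-year cost: ${dynamic_cost:,.0f} billion")
```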

To give you an understanding of that timeline and budget, here’s a graph of our national deficit:

Figure 1 – Post-WWII budget deficit in the US

The Republican rationale for dynamic scoring of tax cuts is that “tax cuts pay for themselves”:

Ted Cruz got at a similar idea, referencing the tax plan he unveiled Thursday: “[I]t costs, with dynamic scoring, less than $1 trillion. Those are the hard numbers. And every single income decile sees a double-digit increase in after-tax income. … Growth is the answer. And as Reagan demonstrated, if we cut taxes, we can bring back growth.”

Tax cuts can boost economic growth. But the operative word there is “can.” It’s by no means an automatic or perfect relationship.

We know, we know. No one likes a fact check with a non-firm answer. So let’s dig further into this idea.

There’s a simple logic behind the idea that cutting taxes boosts growth: Cutting taxes gives people more money to spend as they like, which can boost economic growth.

Many — but by no means all— economists believe there’s a relationship between cuts and growth. In a 2012 survey of top economists, the University of Chicago’s Booth School of Business found that 35 percent thought cutting taxes would boost economic growth. A roughly equal share, 35 percent, were uncertain. Only 8 percent disagreed or strongly disagreed.

But in practice, it’s not always clear that tax cuts themselves automatically boost the economy, according to a recent study.

“[I]t is by no means obvious, on an ex ante basis, that tax rate cuts will ultimately lead to a larger economy,” as the Brookings Institution’s William Gale and Andrew Samwick wrote in a 2014 paper. Well-designed tax policy can increase growth, they wrote, but to do so, tax cuts have to come alongside spending cuts.

And even then, it can’t just be any spending cuts — it has to be cuts to “unproductive” spending.

“I want to be clear — one can write down models where taxes generate big effects,” Gale told NPR. But models are not the real world, he added. “The empirical evidence is quite different from the modeling results, and the empirical evidence is much weaker.”

President Reagan’s tax cut, the one Senator Cruz refers to, took place in 1981, in the middle of a serious recession. But it came on the heels of a post-war deficit. The tax cut did not pay for itself.

We can now return to the executive summary that precedes the recently-approved government Climate Science Special Report. I am condensing it into its main findings, each of which receives a detailed discussion in the full 500-page report:

  • Global annually averaged surface air temperature has increased by about 1.8°F (1.0°C) over the last 115 years (1901–2016). This period is now the warmest in the history of modern civilization.
  • It is extremely likely that human activities, especially emissions of greenhouse gases, are the dominant cause of the observed warming since the mid-20th century.
  • Thousands of studies conducted by researchers around the world have documented changes in surface, atmospheric, and oceanic temperatures; melting glaciers; diminishing snow cover; shrinking sea ice; rising sea levels; ocean acidification; and increasing atmospheric water vapor.
  • Global average sea level has risen by about 7–8 inches since 1900, with almost half (about 3 inches) of that rise occurring since 1993. The incidence of daily tidal flooding is accelerating in more than 25 Atlantic and Gulf Coast cities in the United States.
  • Global average sea levels are expected to continue to rise—by at least several inches in the next 15 years and by 1–4 feet by 2100. A rise of as much as 8 feet by 2100 cannot be ruled out.
  • Heavy rainfall is increasing in intensity and frequency across the United States and globally and is expected to continue to increase.
  • Heatwaves have become more frequent in the United States since the 1960s, while extreme cold temperatures and cold waves are less frequent; over the next few decades (2021–2050), annual average temperatures are expected to rise by about 2.5°F for the United States, relative to the recent past (average from 1976–2005), under all plausible future climate scenarios.
  • The incidence of large forest fires in the western United States and Alaska has increased since the early 1980s and is projected to further increase.
  • Annual trends toward earlier spring melt and reduced snowpack are already affecting water resources in the western United States. Chronic, long-duration hydrological drought is increasingly possible before the end of this century.
  • The magnitude of climate change beyond the next few decades will depend primarily on the amount of greenhouse gases (especially carbon dioxide) emitted globally. Without major reductions in emissions, the increase in annual average global temperature relative to preindustrial times could reach 9°F (5°C) or more by the end of this century. With significant reductions in emissions, the increase in annual average global temperature could be limited to 3.6°F (2°C) or less.

We constantly worry about what kind of damage we in the US will experience from these impacts under the prevailing business-as-usual scenario. Fortunately, a detailed paper in Science (Science 356, 1362 (2017)) gives us some answers:

Estimating economic damage from climate change in the United States

Solomon Hsiang, Robert Kopp, Amir Jina, James Rising, Michael Delgado, Shashank Mohan, D. J. Rasmussen, Robert Muir-Wood, Paul Wilson, Michael Oppenheimer, Kate Larsen, and Trevor Houser

Estimates of climate change damage are central to the design of climate policies. Here, we develop a flexible architecture for computing damages that integrates climate science, econometric analyses, and process models. We use this approach to construct spatially explicit, probabilistic, and empirically derived estimates of economic damage in the United States from climate change. The combined value of market and nonmarket damage across analyzed sectors—agriculture, crime, coastal storms, energy, human mortality, and labor—increases quadratically in global mean temperature, costing roughly 1.2% of gross domestic product per +1°C on average. Importantly, risk is distributed unequally across locations, generating a large transfer of value northward and westward that increases economic inequality. By the late 21st century, the poorest third of counties are projected to experience damages between 2 and 20% of county income (90% chance) under business-as-usual emissions (Representative Concentration Pathway 8.5).

Figure 2 – Direct damage in various sectors as a function of rising temperature since the 1980s

The paper’s abstract above indicates a negative impact of 1.2% of GDP for each 1°C (1.8°F) rise. The current GDP of the US is around $18 trillion, so 1.2% of that per 1°C amounts to $216 billion. If we take the recent “typical” growth rate of the economy as 2%, that growth amounts to $360 billion/year. The loss for 1°C of warming therefore amounts to 60% of this “typical” growth.
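The arithmetic can be reproduced directly; the following short sketch just restates the back-of-the-envelope calculation above, using the GDP, damage, and growth figures as given in the text.

```python
# Reproducing the back-of-the-envelope arithmetic above (values as stated in the text).

gdp_trillion = 18.0          # current US GDP, in trillions of dollars
damage_per_degC = 0.012      # 1.2% of GDP per +1 degree C (Hsiang et al.)
growth_rate = 0.02           # "typical" 2% annual growth

damage_billion = gdp_trillion * 1_000 * damage_per_degC      # $216 billion per +1 degree C
growth_billion = gdp_trillion * 1_000 * growth_rate          # $360 billion per year

print(f"damage per +1 degree C: ${damage_billion:.0f} billion")
print(f"typical annual growth:  ${growth_billion:.0f} billion")
print(f"damage as share of growth: {damage_billion / growth_billion:.0%}")   # 60%
```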

Accounting through dynamic scoring should count losses as well as gains – in this case, those resulting from climate change. Next week I will expand on this topic.


Long-term Solutions: Energy

The last two blogs focused on the Netherlands’ leading role in showing the rest of the world strategies for living on an increasingly inhospitable planet, where the terrain is becoming uninhabitable for both humans and agricultural crops and the oceans are consuming ever larger masses of land through sea-level rise. The timing of all of this depends on decisions that we make today. The transition into this bleak future is at work even now, with large land areas in places such as India and Africa already having been declared uninhabitable, resulting in large-scale migration to more hospitable terrain.

Those blogs described attempts to adapt to this reality, not through migration (there are fewer and fewer places to go) but by living and growing crops in isolation from the physical environment, using indoor agriculture and floating houses.

All of this requires us to realize that there are limits to our ability to adapt to a hostile environment; we need to be convinced to completely decarbonize our energy sources. The next few blogs will focus on the issue of energy (with the usual disclaimer for interruptions due to unforeseen events), including a description of where we are in learning how to use the ultimate energy source that all the stars in the universe use – fusion energy. Since we still don’t know how to use fusion energy for peaceful purposes, today’s blog will instead look at various forms of solar energy. Solar energy supply is intermittent, and we need to learn how to adapt our usage to accommodate that flux (see the October 21, 2014 blog and the ones that follow, which include active exchanges with Joe Morgan).

Eduardo Porter’s article in The New York Times drew my attention to recent scientific activity and debate on this issue:

Fisticuffs Over the Route to a Clean-Energy Future

By Eduardo Porter

Could the entire American economy run on renewable energy alone?

This may seem like an irrelevant question, given that both the White House and Congress are controlled by a party that rejects the scientific consensus about human-driven climate change. But the proposition that it could, long a dream of an environmental movement as wary of nuclear energy as it is of fossil fuels, has been gaining ground among policy makers committed to reducing the nation’s carbon footprint. Democrats in both the United States Senate and in the California Assembly have proposed legislation this year calling for a full transition to renewable energy sources.

They are relying on what looks like a watertight scholarly analysis to support their call: the work of a prominent energy systems engineer from Stanford University, Mark Z. Jacobson. With three co-authors, he published a widely heralded article two years ago asserting that it would be eminently feasible to power the American economy by midcentury almost entirely with energy from the wind, the sun and water. What’s more, it would be cheaper than running it on fossil fuels.

Jacobson et al. wrote a related article that was published in PNAS, which was then thoroughly critiqued on the same platform. For more on the exchange, I invite you to check the original references.

Low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes

Mark Z. Jacobson, Mark A. Delucchi, Mary A. Cameron, and Bethany A. Frew

Introduction:

Worldwide, the development of wind, water, and solar (WWS) energy is expanding rapidly because it is sustainable, clean, safe, widely available, and, in many cases, already economical. However, utilities and grid operators often argue that today’s power systems cannot accommodate significant variable wind and solar supplies without failure (1). Several studies have addressed some of the grid reliability issues with high WWS penetrations (2–21), but no study has analyzed a system that provides the maximum possible long-term environmental and social benefits, namely supplying all energy end uses with only WWS power (no natural gas, biofuels, or nuclear power), with no load loss at reasonable cost. This paper fills this gap. It describes the ability of WWS installations, determined consistently over each of the 48 contiguous United States (CONUS) and with wind and solar power output predicted in time and space with a 3D climate/weather model, accounting for extreme variability, to provide time-dependent load reliably and at low cost when combined with storage and demand response (DR) for the period 2050–2055, when a 100% WWS world may exist.

Conclusions:

The 2050 delivered social (business plus health and climate) cost of all WWS including grid integration (electricity and heat generation, long-distance transmission, storage, and H2) to power all energy sectors of CONUS is ∼11.37 (8.5–15.4) ¢/kWh in 2013 dollars (Table 2). This social cost is not directly comparable with the future conventional electricity cost, which does not integrate transportation, heating/cooling, or industry energy costs. However, subtracting the costs of H2 used in transportation and industry, transmission of electricity producing hydrogen, and UTES (used for thermal loads) gives a rough WWS electric system cost of ∼10.6 (8.25–14.1) ¢/kWh. This cost is lower than the projected social (business plus externality) cost of electricity in a conventional CONUS grid in 2050 of 27.6 (17.2–54.4) ¢/kWh, where 10.6 (8.73–13.4) ¢/kWh is the business cost and ∼17.0 (8.5–41) ¢/kWh is the 2050 health and climate cost, all in 2013 dollars (22). Thus, whereas the 2050 business costs of WWS and conventional electricity are similar, the social (overall) cost of WWS is 40% that of conventional electricity. Because WWS requires zero fuel cost, whereas conventional fuel costs rise over time, long-term WWS costs should stay less than conventional fuel costs. In sum, an all-sector WWS energy economy can run with no load loss over at least 6 y, at low cost. As discussed in SI Appendix, Section S1.L, this zero load loss exceeds electric-utility industry standards for reliability. The key elements are as follows: (i) UTES to store heat and electricity converted to heat; (ii) PCM-CSP to store heat for later electricity use; (iii) pumped hydropower to store electricity for later use; (iv) H2 to convert electricity to motion and heat; (v) ice and water to convert electricity to later cooling or heating; (vi) hydropower as last-resort electricity storage; and (vii) DR. These results hold over a wide range of conditions (e.g., storage charge/discharge rates, capacities, and efficiencies; long-distance transmission need; hours of DR; quantity of solar thermal) (SI Appendix, Table S3 and Figs. S7–S19), suggesting that this approach can lead to low-cost, reliable, 100% WWS systems many places worldwide.

Response to Jacobson et al.:

More than one arrow in the quiver: Why “100% renewables” misses the mark

John E. Bistline and Geoffrey J. Blanford

Jacobson et al. (1) aim to demonstrate that an all renewable energy system is technically feasible. Not only are the study’s conclusions based on strong assumptions and key methodological oversights, but its framing also omits the essential notion of trade-offs. A far more relevant question is how renewable energy technologies relate to the broader set of options for meeting long-term societal goals like managing climate change. Even if the goal were to maximize the deployment of renewable energy (and not decarbonization more generally), Jacobson et al. still fail to provide a satisfactory analysis by glossing over fundamental implications of the technical and economic dimensions of intermittency. We briefly highlight two prominent examples, and then return to the question of framing.

First, the paper’s “no load loss” assertion is predicated on the large-scale availability of energy storage, demand response, and unconstrained transmission to handle periods of supply surpluses and shortfalls. Its assumptions about the cost and reliability of intertemporal demand flexibility within and across sectors, as well as the electrification of end-use demand, are particularly aggressive. The potential scale and scope of these novel technologies remain highly uncertain and speculative, and the narrow confidence intervals presented in Jacobson et al. (1) do not reflect the full range of possible outcomes. Second, the paper does not account for the regional provision of resource adequacy (i.e., market clearing with spatial heterogeneity) in its reliability results—indeed, the analysis is conducted at the national level. Although geographic smoothing can ameliorate some balancing challenges, seasonal and diurnal variability of wind and solar output cannot be managed through offsetting spatial variability alone. Moreover, these effects require data that reflect renewable output simultaneously with load in each hour of a given year, yet Jacobson et al. (1) use different, nonsynchronous datasets. Consequently, their analysis preserves neither joint temporal nor spatial variability between intermittent resources and demand, which are among the main drivers of decreasing returns to scale for renewable energy (2).

There is an emerging literature on integrated modeling of long-term capacity planning with high-temporal-resolution operational detail that effectively incorporates such economic drivers (e.g., ref. 3). By contrast, Jacobson et al. (1) use a “grid integration model” in which investment and energy system transformations are not subject to economic considerations. The resulting renewable-dominated capacity mix is inconsistent with the wide range of optimal deep decarbonization pathways projected in model inter-comparison exercises (e.g., refs. 4 and 5), in which the contribution of renewable energy is traded off in economic terms against other low-carbon options.

Jacobson et al. (1) underscore how balancing and fleet flexibility will be important elements of power system design, and that electrification of other demand sectors is a promising option. However, the study underestimates many of the technical challenges associated with the world it envisions, and fails to establish an appropriate economic context. Every low-carbon energy technology presents unique technical, economic, and legal challenges. Evaluating these trade-offs within a consistent decision framework is essential. Such analyses consistently demonstrate that a broad research, development, and deployment portfolio across supply- and demand-side technologies is the best way to ensure a safe, reliable, affordable, and environmentally responsible future energy system.

More recently, Miara et al. provided a different analysis of the changes needed in the future US power supply (Nature Climate Change 7, 793 (2017)). Their analysis incorporates power plants’ requirements for cooling water in a steadily warming climate.

Climate and water resource change impacts and adaptation potential for US power supply

Ariel Miara, Jordan E. Macknick, Charles J. Vörösmarty, Vincent C. Tidwell, Robin Newmark & Balazs Fekete

Abstract:

Power plants that require cooling currently (2015) provide 85% of electricity generation in the United States [1,2]. These facilities need large volumes of water and sufficiently cool temperatures for optimal operations, and projected climate conditions may lower their potential power output and affect reliability [3–11]. We evaluate the performance of 1,080 thermoelectric plants across the contiguous US under future climates (2035–2064) and their collective performance at 19 North American Electric Reliability Corporation (NERC) sub-regions [12]. Joint consideration of engineering interactions with climate, hydrology and environmental regulations reveals the region-specific performance of energy systems and the need for regional energy security and climate–water adaptation strategies. Despite climate–water constraints on individual plants, the current power supply infrastructure shows potential for adaptation to future climates by capitalizing on the size of regional power systems, grid configuration and improvements in thermal efficiencies. Without placing climate–water impacts on individual plants in a broader power systems context, vulnerability assessments that aim to support adaptation and resilience strategies misgauge the extent to which regional energy systems are vulnerable. Climate–water impacts can lower thermoelectric reserve margins, a measure of systems-level reliability, highlighting the need to integrate climate–water constraints on thermoelectric power supply into energy planning, risk assessments, and system reliability management.

Just as I was about to finish writing this (late afternoon on Friday), my electronic device dinged with the news that the Trump administration had officially approved the recent US Climate Science Report – the result of the Congress-mandated National Climate Assessment. When the draft of this report came out this year, many people, including myself, were doubtful that the administration would release it, since it put the responsibility for climate change squarely on humankind and was in complete agreement with almost all the other credible information about climate change that has accumulated since the IPCC’s inception more than 20 years ago. I wrote about this draft report earlier (August 15, 2017). I was already using it as the most up-to-date reference in my class and I will refer to it in my next blog, straying from my intent to focus my attention on future energy needs.


Long-term Adaptations III – Following the Netherlands: Sea Level Rise

One of the most pressing problems jeopardizing our planet’s capacity to sustainably support (human and other) life is sea level rise. This comes as a direct consequence of the rise in global temperature: the accelerated melting of land-based ice combined with the expansion of ocean-based water.

The IPCC’s recent projections for business-as-usual scenarios include a sea level rise of 1 meter by the end of the century (April 25, 2017). But a paper in Nature made the case that the IPCC report significantly underestimated Antarctica’s contributions to the projected global sea level rise, stating that the estimate should be raised by a factor of 5.

Here is what NOAA has to say about it:

With continued ocean and atmospheric warming, sea levels will likely rise for many centuries at rates higher than that of the current century. In the United States, almost 40 percent of the population lives in relatively high-population-density coastal areas, where sea level plays a role in flooding, shoreline erosion, and hazards from storms. Globally, eight of the world’s 10 largest cities are near a coast, according to the U.N. Atlas of the Oceans.

Global sea level has been rising over the past century, and the rate has increased in recent decades. In 2014, global sea level was 2.6 inches above the 1993 average—the highest annual average in the satellite record (1993-present). Sea level continues to rise at a rate of about one-eighth of an inch per year.

Higher sea levels mean that deadly and destructive storm surges push farther inland than they once did, which also means more frequent nuisance flooding. Disruptive and expensive, nuisance flooding is estimated to be from 300 percent to 900 percent more frequent within U.S. coastal communities than it was just 50 years ago.
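
To put the quoted rate in context, here is a back-of-the-envelope extrapolation (my own illustration, not NOAA’s): holding the current rate of about one-eighth of an inch per year constant gives only about a quarter of a meter by 2100, well below the ~1 meter business-as-usual projection mentioned above, which is why those projections assume the rate will accelerate.

# Back-of-the-envelope: what the current linear rate alone would give by 2100.
INCHES_TO_CM = 2.54

rate_inches_per_year = 1.0 / 8.0     # NOAA's quoted current rate
years = 2100 - 2014                  # measured from the 2014 value in the quote
linear_rise_cm = rate_inches_per_year * years * INCHES_TO_CM

print(f"Linear extrapolation, 2014-2100: {linear_rise_cm:.0f} cm")   # ~27 cm
# Reaching ~1 m (100 cm) by 2100 therefore requires the rate to accelerate substantially.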

The real estate database company Zillow produced a report estimating the cost of the damage:

  • Coastal homes are threatened by the prospect of rising sea levels, a new report from Zillow says.
  • More than $900 billion worth of U.S. residential real estate could be lost by a 6-foot rise in sea levels, the report says.
  • Such a rise in water levels, projected to become a reality by 2100, could destroy nearly 2 million homes.

The Netherlands doesn’t need to wait until the end of the century to experience the impact. A lot of that has to do with its geography:

The European part of the country can be split into two areas: the low and flat lands in the west and north, and the higher lands with minor hills in the east and south. The former, including the reclaimed polders and river deltas, make up about half of its surface area and are less than 1 metre (3.3 ft) above sea level, much of it actually below sea level. An extensive range of seawalls and coastal dunes protect the Netherlands from the sea, and levees and dikes along the rivers protect against river flooding. The rest of the country is mostly flat; only in the extreme south of the country does the land rise to any significant extent, in the foothills of the Ardennes mountains. This is where Vaalserberg is located, the highest point on the European part of the Netherlands at 322.7 metres (1,059 ft) above sea level. The highest point of the entire country is Mount Scenery (887 metres or 2,910 ft), which is located outside the European part of the Netherlands, on the island of Saba.

The country has done a lot over the last 60 years to protect itself from flooding:

As an old proverb says: God created the world but the Dutch created the Netherlands. Over 30% of the Netherlands lies below sea level. Flooding by sea and river water has claimed many victims, as in the 1953 flood disaster, during which 1,850 people drowned.
The struggle against the water has been both defensive, as manifested by the many dikes and dams, and offensive, as shown by the many land reclamation works dating from as early as the 14th century.

The Delta Works (the closing of the sea inlets) are a masterpiece of engineering, but the Maeslant Kering (1997) is another example of hydraulic ingenuity. Three great Dutch hydraulic works, among them a steam pumping station, are already on the UNESCO World Heritage list.

The Dutch Delta is too precious not to take the necessary flood prevention measures. Flooding of the large rivers that flow into the North Sea must be prevented. It may even be necessary to re-inundate some polders at high tide and to restore old side gullies of the rivers to create overflow capacity.

At the end of 2011, a new-style Delta Plan, the Delta Programme, was approved by the Dutch Senate. The object of the Delta Programme is to protect our country against high water and keep our freshwater supply up to standard, now and in the future.

Figure 1 shows a map of the various projects.

Figure 1 Dutch Delta waterworks

Figure 2 shows a timeline of the projects:

Figure 2 Construction timeline

The same Wikipedia reference that provided the timeline discusses both the current status and the impact of climate change on future construction in the area:

The original plan was completed by the Europoortkering which required the construction of the Maeslantkering in the Nieuwe Waterweg between Maassluis and Hoek van Holland and the Hartelkering in the Hartel Canal near Spijkenisse. The works were declared finished after almost fifty years in 1997. In reality the works were finished on 24 August 2010 with the official opening of the last strengthened and raised retaining wall near the city of Harlingen, Netherlands.

Due to climate change and relative sea-level rise, the dikes will eventually have to be made higher and wider. This is a long term uphill battle against the sea. The needed level of flood protection and the resulting costs are a recurring subject of debate. Currently, reinforcement of the dike revetments along the Oosterschelde and Westerschelde is underway. The revetments have proven to be insufficient and need to be replaced. This work started in 1996 and should be finished in 2015. In that period the Ministry of Public Works and Water Management in cooperation with the waterboards will have reinforced over 400 km of dikes.[1]

In September 2008, the Delta Commission, chaired by Dutch politician Cees Veerman, advised in a report that the Netherlands would need a massive new building program to strengthen the country’s water defenses against the anticipated effects of global warming for the next 190 years. The plans included drawing up worst-case scenarios for evacuations and called for more than €100 billion, or $144 billion, in new spending through the year 2100 on measures such as broadening coastal dunes and strengthening sea and river dikes.

The commission said the country must plan for a rise in the North Sea of 1.3 meters by 2100 and 4 meters by 2200.[2]

Attempts to adapt to such a reality are not confined to the Netherlands, nor are they limited to blocking the rising water. One new option is living in floating structures (some of which have so far experienced more success than others):

Are the Floating Houses of the Netherlands a Solution Against the Rising Seas?

Figure 3 Floating homes in IJburg, Amsterdam.

The technology used to build houses on water is not really new. Whatever can be built on land can also be built on water. The only difference between a house on land and a floating house is that the houses on water have concrete “tubs” on the bottom, which are submerged by half a story and act as counter-weight. To prevent them from floating out to sea, they are anchored to the lakebed by mooring poles.
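
A minimal sketch of the physics involved (the footprint and mass below are my own illustrative assumptions, not figures from the article): by Archimedes’ principle, the tub sinks until the displaced water weighs as much as the whole structure, which is what puts the foundation roughly half a story below the waterline.

# Archimedes' principle for a floating concrete "tub" foundation.
# All dimensions and masses are illustrative assumptions.
WATER_DENSITY = 1000.0        # kg per cubic meter (fresh water)

total_mass_kg = 120_000.0     # assumed mass of house plus concrete tub
tub_length_m = 10.0           # assumed footprint
tub_width_m = 8.0

# The tub sinks until the displaced water weighs as much as the structure.
draft_m = total_mass_kg / (WATER_DENSITY * tub_length_m * tub_width_m)
print(f"Draft: {draft_m:.2f} m")   # ~1.5 m, roughly half a story below the waterline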

As sea levels are rising globally, many cities around the world are under threat from water. Some areas are projected to disappear completely in the next few decades. Therefore, designing houses to float may, in some instances, be safer than building on land and risking frequent floods. “In a country that’s threatened by water, I’d rather be in a floating house; when the water comes, [it] moves up with the flood and floats,” Olthuis says. He believes that water shouldn’t be considered an obstacle, but rather a new ingredient in the recipe for the city.

Floating houses are not only safer and cheaper, but more sustainable as well. Because such a house could more readily be adapted to existing needs by changing function, or even moving to a whole new location where it can serve as something else, the durability of the building is much improved. Olthuis compares this to a second-hand car: “By having floating buildings, you’re no longer fixed to one location. You can move within the city, or you can move to another city, and let them be used and used again.”

When waters rise, these flood-proof houses rise right with them

Float House

Figure 4 Morphosis Float House in New Orleans

The Float House is an ambitious, affordable housing project based in New Orleans. The city still hasn’t fully recovered from Hurricane Katrina, and these pre-fabricated, flood-proof homes designed by Morphosis Architects attempt to ensure the region can withstand the next Katrina-scale event.

During a flood, the foundation of the house acts as a raft, rising with the water. The FLOAT House has the ability to rise up to 12 feet on guide posts, which are secured via two concrete pile caps that extend 45 feet into the earth for added stability.

The house is also designed to be as self-sufficient as it is flood-proof. Cisterns within the structure store rainwater collected from the roof, and the water is then filtered and stored until needed. Solar panels on the roof generate all of the house’s power. Electrical systems in the home store and convert this energy as needed.

Floating houses could be key to housing in post climate change world:

Is this liveable yacht concept the answer to housing amongst rising sea levels?

Innovative company Arkup thinks so, and will launch the 100 percent solar-powered, electric and self-sustaining floating homes at a Florida boat show next month.

Built to withstand hurricane strength winds, the luxury houseboat has four bedrooms, 4.5 bathrooms, collects and purifies rainwater and is self-sustaining enough to be considered off the grid.

Figure 5 Example of a proposed floating house

Next week I will focus on some of the energy alternatives that we will require to adapt to these situations.


Long-term Adaptations II – Following the Netherlands: Food and Habitability

Figure 1 – Indoor lattice growing setting from National Geographic magazine article

The photograph above resembles the one I included in last week’s blog. Both show the process of growing crops in a glass enclosure – except that Matt Damon’s preparation of Martian ground for farming was fiction while this is real.

This week I’ll look at long-term adaptation efforts we can take in light of the climate-related doomsday predicted to take place in the foreseeable future.

The two indicators I’ll concentrate on today are heat death and the end of food (see last week’s blog – October 17, 2017). I already gave heat death a quantitative treatment in my October 10th blog, when I talked about wet-bulb temperatures higher than 35°C (95°F). An “obvious” way to adapt to both of these dire problems is to follow Matt Damon’s example, growing our food and living our lives isolated from the outside environment. Up to a point, we can do it; we have the technology and it will not be that expensive to implement. Indeed, we only have to globally mimic the steps already being taken by a small rich European country: the Netherlands. Table 1 shows a few of the Netherlands’ important indicators.

Table 1 – The Netherlands, important indicators (Wikipedia)

Indicator Value Rank
Population (2017) 17,116,000
Population density 413.3/km² 30
GDP per capita (nominal, 2017) $44,654 13
Area 41,543 km² 66

I have good friends in the Netherlands and I like to visit the country as often as I can. Since almost half of the nation’s landmass is located at or below sea level, its residents have made and are continuing to make heroic efforts to protect themselves from flooding – both by the ocean and the large rivers that pass through the country on their way to the ocean. Almost every discussion throughout the world that concerns itself with adaptation to rising sea levels includes Dutch guests being invited to share their experiences. While I was already familiar with some of the strategies they use for flood abatement, I did not know about their efforts in greenhouse agriculture. When I got my September 2017 issue of National Geographic with an article on those efforts, I was stunned with admiration. All the beautiful photographs below come from that great article.

Figure 2 – Bird’s-eye view of greenhouses in the Netherlands

Figure 3 – Chicken growing facility

Figure 4 – A rotary milking machine

Figure 5 – Growing tomatoes

The effort is not only delightful to watch, it is also very efficient. Here is some statistical evidence of this success:

Table 2 – Total water footprint of tomato production (gallons per pound – 2010)

Country Footprint
Netherlands 1.1
US 15.2
Global Average 25.6
China 34.0

Table 2 shows the water footprints of tomato production in the Netherlands, the US, and China, together with the global average. The numbers are stunning; this is one of the best demonstrations of the concept of virtual water that I’ve seen. I introduced the concept of virtual water here several years ago (November 18, 2014):

The two main inventories that are included in most analyses are energy and water. Both inventories suffer from the same broad range of values as measured in different facilities. I just recently returned from a conference in Iceland in which I presented our work on water stress (“The Many Faces of Water Use” by Gurasees Chawla and Micha Tomkiewicz; 6th International Conference on Climate Change, Reykjavik, Iceland (2014)). One of the issues that we discussed there was the concept of virtual water:  the “sum of the water footprints of the process steps taken to produce the product” – an idea that constitutes part of the LCA analysis of products. For instance, the typical virtual water of fruits is 1,000 m3/ton. To remind us all, the weight of pure water in these units is 1ton/1m3, so the weight of the virtual water is 1,000 times the weight of the product itself. Where did the extra water go? For the most part, it either became waste water or evaporated.
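
To connect that definition with the tomato numbers above, here is a minimal unit-conversion sketch (my own illustration; the figures are taken from Table 2): converting gallons per pound into the cubic meters per metric ton used in the quoted passage shows that even the least efficient producer sits well below the ~1,000 m3/ton cited as typical for fruit, and that Dutch greenhouse tomatoes come in at only about 9 m3/ton.

# Convert Table 2's footprints (US gallons of water per pound of tomatoes)
# into the m^3 per metric ton used for virtual water in the quoted passage.
M3_PER_GALLON = 0.003785          # cubic meters per US gallon
POUNDS_PER_TON = 1000 / 0.4536    # pounds per metric ton (~2,205)

footprints_gal_per_lb = {
    "Netherlands": 1.1,
    "US": 15.2,
    "Global Average": 25.6,
    "China": 34.0,
}

for country, gal_per_lb in footprints_gal_per_lb.items():
    m3_per_ton = gal_per_lb * M3_PER_GALLON * POUNDS_PER_TON
    print(f"{country:15s} {m3_per_ton:6.0f} m^3/ton")
# Netherlands ~9, US ~127, Global Average ~214, China ~284 m^3/ton.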

Here’s one example of the concept of virtual water, as given in an article in a Mexican science magazine: when the US imports strawberries from Mexico, the imported strawberries conceptually “carry” with them all the virtual fresh water that is so greatly needed in both countries (Camps, Salvador Penische and Patricia Ávila García, Revista Mexicana de Ciencias 3:1579 (2012)); this equates to the US “stealing” water from the Mexicans. As a result, some are blaming the US for the depletion of Mexican water, and charging our country with immoral economic capitalism.

Yes, they are blaming the US, but in fact they should blame themselves: by not charging the Mexican strawberry growers anything for the water they use, they are subsidizing those growers so that they can better compete against the American growers.

Meanwhile, when proper water management is used (mainly wastewater treatment), the virtual water can be reduced by 90%.

Table 3 – The Netherlands’ yield of various agricultural products and its rank in comparison to the rest of the world – 2014

Produce Tons per square mile Rank
Carrots 17,144 5
Chilies and green peppers 80,890 1
Cucumbers 210,065 1
Onions 13,037 6
Pears 11,582 2
Potatoes 13,036 6

 

Table 4 – Rank in various categories of tomato production

Rank Category Quantity
1 Yield 144,352 tons/square mile
22 Total production 992,080 tons
95 Area harvested 6.9 square miles

 

Table 5 – Changes in vegetable growing practices from 2003-2014

Vegetable Production +28%
Energy used -6%
Pesticides used -9%
Fertilizers used -29%

The Netherlands’ efforts in this area are recognized throughout the world, and delegations from many developing countries visit to learn from the Dutch so they can implement the same methods back home. A global map in the article demonstrates the reach of these efforts.

This shift to enclosed agricultural methods and similarly isolated living structures as a response to increasingly uninhabitable lands can only be effective if the structures are cooled to the temperatures necessary for people and crops to continue thriving. This will, in turn, require the use of cooling devices and energy sources that will not amplify the damage. Cooling devices have their own limits, as I will explain in future blogs.


Long-Term Adaptations

Figure 1 – Scene from “The Martian”

I was considering using a more descriptive title for the coming series of blogs, inspired by a recent article in the New York Times (NYT) called, “How to Survive the Apocalypse?” The article starts as follows:

President Trump threatens to “totally destroy North Korea.” Another hurricane lashes out. A second monster earthquake jolts Mexico. Terrorists strike in London. And that’s just this past week or so.

Yes, the world is clearly coming to an end. But is there anything you can do to prepare?

Prudence won out and I decided on a much better-defined option. My last 7 blogs (August 29 – October 10, 2017) focused on the doomsday that business-as-usual scenarios are projected to bring about. Based on early signs already visible today, this catastrophe will come from human changes to the physical environment; the world is expected to reach full-force uninhabitability toward the end of the century.

My school’s Faculty Day in May featured a panel presentation, “The Role of Science in the Anthropocene.” The audience’s questions mirrored those posed in the NYT article: is there anything we can do to prepare? Or are we doomed – in which case we should just try to enjoy our last days as happily as we can?

Almost every day now brings new calamities that originate from our changes to the physical environment. The most recent are the deadly fires throughout California. Similarly extensive fires have recently been recorded around the Mediterranean Sea and Australia. I am not alone in discussing the US government’s ineptitude in dealing with the physical calamities of climate change. I got sick of being depressed and decided to try to answer the question, “what can we do to prepare?”

The NYT piece was half-satirical and focused on how individuals could start to prepare their own disaster tool kits for local threats. My take will be more serious, collective, and global. The transition to doomsday that I am interested in is different from singular apocalyptic doomsday events such as a nuclear holocaust, the eruption of a super volcano, or the collision of a large asteroid. While we can obviously try to adapt to or mitigate most of those terrible occurrences, none of them is foreseen to impact us the way that climate change based on a business-as-usual scenario will. The eventual uninhabitability that will come from continuing our present changes to the chemistry of the atmosphere will occur gradually – within the span of a few human generations. We can still mitigate it considerably and at the same time try to develop technologies that will help us adapt.

In one of my earliest blogs, I defined the physics of sustainability (January 28, 2013):

I define sustainability as the condition that we have to develop here to flourish until we develop the technology for extraterrestrial travel that will allow us to move to another planet once we ruin our own.

In my opinion, the conditions to achieve this are very “straightforward.” They have to be able to answer two “simple” questions:

  • For how long? – Forever! To repeat President Obama’s language: “We must act, knowing that today’s victories will be only partial, and that it will be up to those who stand here in four years, and forty years, and four hundred years hence to advance the timeless spirit once conferred to us in a spare Philadelphia hall.”

  • How to do it? – To achieve the sustainable objectives on this time scale, we will have to establish equilibrium with the physical environment and at the same time maximize individual opportunities for everybody on this planet

This will take time. A better definition would incorporate the options of attempting to establish livability wherever we can, including the soon-to-be-uninhabitable planet Earth.

How can we adapt to an uninhabitable planet? This is basically the same question as, “is there anything you can do to prepare?”

Well, what came to my mind was the movie, “The Martian” – with Matt Damon and Jessica Chastain – about an astronaut who gets stuck on Mars and tries to adapt. Figure 1 shows an example of his efforts. The movie was released toward the end of 2015 and received a Golden Globe Award for Best Picture in the category of Musical or Comedy. As Matt Damon said while receiving the award, it certainly was not a musical and the only funny thing about it was the category in which it was placed. It was a very serious, excellent movie. It described in great detail the work that the astronaut had to put into trying to survive on the uninhabitable surface of Mars.

David Wallace-Wells’ New York Magazine piece, which I discussed in my September 12, 2017 and subsequent blogs, presented decent qualitative descriptions of the main indicators of uninhabitability brought about by climate change:

  • Heat death
  • The end of food
  • Climate plagues
  • Unbreathable air
  • Perpetual war
  • Permanent economic collapse
  • Poisoned oceans

The end of the world as we know it will happen in phases. Some areas will become uninhabitable before others and that shift will likely be the main trigger for an indicator such as perpetual war. In a business-as-usual scenario, the transition will soon engulf the entire planet. How do we adapt to this kind of an environment? Perhaps we’ll simply have to take a page from Matt Damon and learn how to live happily while isolated from the environment. In the next few blogs I will go into more detail.


Doomsday: Local Timelines

The last few blogs focused on the ultimate consequences of continuing to make “progress” by relentlessly using the physical environment to serve humanity as if it were a limitless resource. I tried to make the case that such efforts (business-as-usual scenarios) push the planet outside the window of habitability, with no place to go. The now fully-anticipated result is global doomsday. The expected timeline is uncertain but can be counted within a few hundred years. For people such as myself, who grew up exposed to thousands of years of history, these prospects are unacceptable – particularly because it is still within our power to prevent or at least considerably postpone this impact. Such an apocalyptic scenario obviously will not take place instantaneously on a global scale. It will start with a slowly expanding set of local environments turning from habitable to uninhabitable and their residents fleeing to friendlier places. Such a migration creates millions of environmental refugees and affects us all. This is not speculation about an unknown future. It is already happening. This is the main reason that national security organizations are trying to understand the changes in the physical environment and how they affect its ability to support humans (see the “Global Trends 2035” blog, May 23, 2017).

In this blog I will try to quantify the criteria for local doomsdays with some concrete examples, starting with the concept of Wet Bulb temperature:

Wet Bulb Temperature – Twb

The Wet Bulb temperature is the adiabatic saturation temperature.

Wet Bulb temperature can be measured by using a thermometer with the bulb wrapped in wet muslin. The adiabatic evaporation of water from the bulb has a cooling effect, so the thermometer indicates a “wet bulb temperature” lower than the “dry bulb temperature” of the air.

The rate of evaporation from the wet bandage on the bulb, and the temperature difference between the dry bulb and wet bulb, depend on the humidity of the air. The evaporation from the wet muslin is reduced when the air contains more water vapor.

The Wet Bulb temperature always lies between the Dry Bulb temperature and the Dew Point. For the wet bulb, there is a dynamic equilibrium between heat gained because the wet bulb is cooler than the surrounding air and heat lost to evaporation. The wet bulb temperature is therefore the lowest temperature an object can reach through evaporative cooling, assuming good air flow and a constant ambient air temperature.
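
For readers who want a number rather than a definition, here is a minimal sketch (the example inputs are mine) using the empirical fit published by Stull (2011), which approximates the wet-bulb temperature from the dry-bulb temperature and relative humidity at sea-level pressure:

import math

def wet_bulb_stull(t_c, rh_pct):
    """Approximate wet-bulb temperature (deg C) from dry-bulb temperature (deg C)
    and relative humidity (%), using Stull's (2011) empirical fit
    (valid roughly for humidities above 5% at sea-level pressure)."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Example: a 40 deg C day at 50% relative humidity
print(f"{wet_bulb_stull(40.0, 50.0):.1f} deg C")   # ~31 deg C, still below 35 deg C

By this fit, the 35°C wet-bulb threshold discussed below corresponds to roughly 45°C of dry-bulb heat at 50% relative humidity.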

Steven Sherwood and Matthew Huber wrote a paper, “An adaptability limit to climate change due to heat stress,” about the connections between Wet-Bulb temperatures and local limits of habitability. It was published in the Proceedings of the National Academy of Sciences (PNAS). Here is the abstract and some of their conclusions:

Abstract

Despite the uncertainty in future climate-change impacts, it is often assumed that humans would be able to adapt to any possible warming. Here we argue that heat stress imposes a robust upper limit to such adaptation. Peak heat stress, quantified by the wet-bulb temperature TW, is surprisingly similar across diverse climates today. TW never exceeds 31 °C. Any exceedance of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning. One implication is that recent estimates of the costs of unmitigated climate change are too low unless the range of possible warming can somehow be narrowed. Heat stress also may help explain trends in the mammalian fossil record.

Conclusions:

We conclude that a global-mean warming of roughly 7 °C would create small zones where metabolic heat dissipation would for the first time become impossible, calling into question their suitability for human habitation. A warming of 11–12 °C would expand these zones to encompass most of today’s human population. This likely overestimates what could practically be tolerated: Our limit applies to a person out of the sun, in gale-force winds, doused with water, wearing no clothing, and not working. A global-mean warming of only 3–4 °C would in some locations halve the margin of safety (difference between TW max and 35 °C) that now leaves room for additional burdens or limitations to cooling. Considering the impacts of heat stress that occur already, this would certainly be unpleasant and costly if not debilitating. More detailed heat stress studies incorporating physiological response characteristics and adaptations would be necessary to investigate this.

If warmings of 10 °C were really to occur in the next three centuries, the area of land likely rendered uninhabitable by heat stress would dwarf that affected by rising sea level. Heat stress thus deserves more attention as a climate-change impact.

The onset of TW max > 35 °C represents a well-defined reference point where devastating impacts on society seem assured even with adaptation efforts. This reference point contrasts with assumptions now used in integrated assessment models. Warmings of 10 °C and above already occur in these models for some realizations of the future (33). The damages caused by 10 °C of warming are typically reckoned at 10–30% of world GDP (33, 34), roughly equivalent to a recession to economic conditions of roughly two decades earlier in time. While undesirable, this is hardly on par with a likely near-halving of habitable land, indicating that current assessments are underestimating the seriousness of climate change.

The paper by Eun-Soon Im et al. in Science Advances, 2017, 3(8), “Deadly heat waves projected in the densely populated agricultural regions of South Asia,” describes the evolving situation in South Asia. Here are a few key paragraphs from that paper that describe the main conclusions. They include the physiological criteria of the concept of hyperthermia, as mentioned in the previous paper:

Abstract

The risk associated with any climate change impact reflects intensity of natural hazard and level of human vulnerability. Previous work has shown that a wet-bulb temperature of 35°C can be considered an upper limit on human survivability. On the basis of an ensemble of high-resolution climate change simulations, we project that extremes of wet-bulb temperature in South Asia are likely to approach and, in a few locations, exceed this critical threshold by the late 21st century under the business-as-usual scenario of future greenhouse gas emissions. The most intense hazard from extreme future heat waves is concentrated around densely populated agricultural regions of the Ganges and Indus river basins. Climate change, without mitigation, presents a serious and unique risk in South Asia, a region inhabited by about one-fifth of the global human population, due to an unprecedented combination of severe natural hazard and acute vulnerability

INTRODUCTION

The risk of human illness and mortality increases in hot and humid weather associated with heat waves. Sherwood and Huber (1) proposed the concept of a human survivability threshold based on wet-bulb temperature (TW). TW is defined as the temperature that an air parcel would attain if cooled at constant pressure by evaporating water within it until saturation. It is a combined measure of temperature [that is, dry-bulb temperature (T)] and humidity (Q) that is always less than or equal to T. High values of TW imply hot and humid conditions and vice versa. The increase in TW reduces the differential between human body skin temperature and the inner temperature of the human body, which reduces the human body’s ability to cool itself (2). Because normal human body temperature is maintained within a very narrow limit of ±1°C (3), disruption of the body’s ability to regulate temperature can immediately impair physical and cognitive functions (4). If ambient air TW exceeds 35°C (typical human body skin temperature under warm conditions), metabolic heat can no longer be dissipated. Human exposure to TW of around 35°C for even a few hours will result in death even for the fittest of humans under shaded, well-ventilated conditions (1). While TW well below 35°C can pose dangerous conditions for most humans, 35°C can be considered an upper limit on human survivability in a natural (not air-conditioned) environment. Here, we consider maximum daily TW values averaged over a 6-hour window (TWmax), which is considered the maximum duration fit humans can survive at 35°C.

IMPACTS OF CLIMATE CHANGE

To study the potential impacts of climate change on human health due to extreme TW in South Asia, we apply the Massachusetts Institute of Technology Regional Climate Model (MRCM) (24) forced at the lateral and sea surface boundaries by output from three simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) coupled Atmosphere-Ocean Global Climate Model (AOGCM) experiments (25). By conducting high-resolution simulations, we include detailed representations of topography and coastlines as well as detailed physical processes related to the land surface and atmospheric physics, which are lacking in coarser-resolution AOGCM simulations (26). On the basis of our comparison of MRCM simulations driven by three AOGCMs for the historical period 1976–2005 (HIST) against reanalysis and in situ observational data, MRCM shows reasonable performance in capturing the climatological and geographical features of mean and extreme TW over South Asia. Furthermore, the mean biases of MRCM simulations are statistically corrected at the daily time scale to enhance the reliability of future projections (see Materials and Methods). We project the potential impacts of future climate change toward the end of century (2071–2100), assuming two GHG concentration scenarios based on the RCP trajectories (27): RCP4.5 and RCP8.5. RCP8.5 represents a BAU scenario resulting in a global CMIP5 ensemble average surface temperature increase of approximately 4.5°C. RCP4.5 includes moderate mitigation resulting in approximately 2.25°C average warming, slightly higher than what has been pledged by the 2015 United Nations Conference on Climate Change (COP21).

On the basis of the simulation results, TWmax is projected to exceed the survivability threshold at a few locations in the Chota Nagpur Plateau, northeastern India, and Bangladesh and projected to approach the 35°C threshold under the RCP8.5 scenario by the end of the century over most of South Asia, including the Ganges river valley, northeastern India, Bangladesh, the eastern coast of India, Chota Nagpur Plateau, northern Sri Lanka, and the Indus valley of Pakistan (Fig. 2). Under the RCP4.5 scenario, no regions are projected to exceed 35°C; however, vast regions of South Asia are projected to experience episodes exceeding 31°C, which is considered extremely dangerous for most humans (see the Supplementary Materials). Less severe conditions, in general, are projected for the Deccan Plateau in India, the Himalayas, and western mountain ranges in Pakistan.

Many urban population centers in South Asia are projected to experience heat waves characterized by TWmax well beyond 31°C under RCP8.5 (Fig. 2). For example, in Lucknow (Uttar Pradesh) and Patna (Bihar), which have respective current metro populations of 2.9 and 2.2 million, TW reaches and exceeds the survivability threshold. In most locations, the 25-year annual TWmax event in the present climate, for instance, is projected to become approximately an every year occurrence under RCP8.5 and a 2-year event under RCP4.5 (Fig. 2 and fig. S1). In addition to the increase in TWmax under global warming, the urban heat island effect may increase the risk level of extreme heat, measured in terms of temperature, for high-density urban population exposure to poor living conditions. However, Shastri et al. (28) found that urban heat island intensity over many Indian urban centers is lower than in non-urban regions along the urban boundary during daytime in the pre-monsoon summer because of the relatively low vegetation cover in non-urban areas.
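
As a side note, the TWmax diagnostic the paper defines – the largest 6-hour running mean of wet-bulb temperature within a day – is easy to compute once hourly TW values are available. Here is a minimal sketch with synthetic hourly data (my own, purely illustrative):

import math

def twmax(hourly_tw, window=6):
    """Largest running mean over `window` consecutive hourly wet-bulb values,
    i.e., the TWmax diagnostic described in the quoted paper."""
    means = [sum(hourly_tw[i:i + window]) / window
             for i in range(len(hourly_tw) - window + 1)]
    return max(means)

# Synthetic diurnal cycle of wet-bulb temperature (deg C), peaking at 34 in early afternoon
hourly = [28 + 6 * max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in range(24)]
print(f"TWmax = {twmax(hourly):.1f} deg C")   # ~33.4, approaching the 35 deg C threshold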

We all live “locally,” as do our friends and families. It is not surprising that we are most interested in the projections for climate change in our local environments. Uncertainties in long-term projections for local environments are much greater than those regarding global projections. However, our ability to mitigate and adapt to local environmental changes reflects our skills in making decisions with regard to other, global risks.

Some localities publish projections based on the highest-resolution global simulations they can find, while others conduct dedicated local simulations. Any simulation, local or global, depends on assumptions about how we will run our lives; usually we follow the IPCC’s practice and base forecasts on scenarios similar to the ones it publishes.

If we live in big cities, websites like “Climate Central” are good sources to start with. Some of its guidelines are given below, along with a global map of the cities that the site covers:

Summers around the world are already warmer than they used to be, and they’re going to get dramatically hotter by century’s end if carbon pollution continues to rise. That problem will be felt most acutely in cities.

The world’s rapidly growing population coupled with the urban heat island effect — which can make cities up to 14°F (7.8°C) warmer than their leafy, rural counterparts —  add up to a recipe for dangerous and potentially deadly heat.

Currently, about 54 percent of the world’s population lives in cities, and by 2050 the urban population is expected to grow by 2.5 billion people. As those cities get hotter, weather patterns may shift and make extreme heat even more common. That will in turn threaten public health and the economy.

Figure 1

Here’s a short, personal, account of the current circumstances in Phoenix, Arizona and its future prospects:

Sorry to put such a fine point on this, but even without climate change, Phoenix, Arizona, is already pretty uninhabitable. Don’t get me wrong, I spend a fair amount of time there, and I love it—particularly in the fall and winter—but without air-conditioning and refrigeration, it would be unlivable as is. Even with those modern conveniences, the hottest months take their toll on my feeble Southern Californian body and brain. The historical average number of days per year in Phoenix that hit 100 degrees is a mind-bending 92. But that number is rapidly rising as climate change bears down on America’s fifth-largest city.

“It’s currently the fastest warming big city in the US,” meteorologist and former Arizonan Eric Holthaus told me in an email. A study from Climate Central last year projects that Phoenix’s summer weather will be on average three to five degrees hotter by 2050. Meanwhile, that average number of 100-degree days will have skyrocketed by almost 40, to 132, according to another 2016 Climate Central study. (For reference, over a comparable period, New York City is expected to go from two to 15 100-degree days.)

I live in New York City. Periodically (usually following new IPCC reports) the city compiles an up-to-date report on how climate change will impact the city, based on the best available information. The information is published in a full dedicated issue of the “Annals of the New York Academy of Sciences” for everyone to see.

The press has picked this up:

Climate change will come to New York the same way water boils around one of those mythical frogs. The city will be the same old New York in 2050; people won’t be frying eggs on the manhole covers in the summer, or riding gondolas around Times Square. But by then winter will have fewer than 50 days of freezing cold, instead of the average of 72 that was the norm in the late 20th century. There will be less shivering agony on train platforms, plus less salting and shoveling. But the summers will be harsher—half a dozen heat waves instead of the usual two, and those heat waves will be even longer and more sweltering than usual; there will be twice as many plus-90-degree days as there once were. In 2006, a brutal summer led to 140 people dying of heat-related causes; it’s safe to say that that sort of death toll will be routine by 2050.

All of that is according to the estimates in the 2013 Climate Risk Information report from the New York City Panel on Climate Change (I’m using that report’s low or median estimates). As one of America’s wealthiest and most liberal big cities—where even some prominent Republicans are staunch climate hawks—it’s not surprising that New York would commission a report like that, or take other steps toward fighting the effects of climate change. But even with all the resources of the five boroughs, that’s a tall order.

Next week I will leave aside the depressing topic of imminent doomsdays and try to address our options for mitigating or at least postponing them.
