Local Attributions

Myles Allen, one of the pioneers of the emerging science of attributing extreme climate events to human influence, featured prominently in a Scientific American piece earlier this month:

But the radio voice added that it would be “impossible to attribute this particular event [floods in southern England] to past emissions of greenhouse gases,” said Allen in a commentary published in Nature shortly thereafter.

In 2003, that was the predominant view in the scientific community: While climate change surely has a significant effect on the weather, there was no way to determine its exact influence on any individual event. There are just too many other factors affecting the weather, including all sorts of natural climate variations.

His hunch held true. Nearly 15 years later, extreme event attribution not only is possible, but is one of the most rapidly expanding subfields of climate science.

The Bulletin of the American Meteorological Society now issues a special report each year assessing the impact of climate change on the previous year’s extreme events. Interest in the field has grown so much that the National Academy of Sciences released an in-depth report last year evaluating the current state of the science and providing recommendations for its improvement.

In 2004, he and Oxford colleague Daithi Stone and Peter Stott of the Met Office co-authored a report that is widely regarded as the world’s first extreme event attribution study. The paper, which examined the contribution of climate change to a severe European heat wave in 2003—an event which may have caused tens of thousands of deaths across the continent—concluded that “it is very likely that human influence has at least doubled the risk of a heat wave exceeding this threshold magnitude.”

The breakthrough paper took the existing science a step further. Using a climate model, the researchers compared simulations accounting for climate change with scenarios in which human-caused global warming did not exist. They found that the influence of climate change roughly doubled the risk of an individual heat wave. The key to the breakthrough was framing the question in the right way—not asking whether climate change “caused” the event, but how much it might have affected the risk of it occurring at all.
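To make that probabilistic framing concrete, here is a minimal sketch (with entirely made-up numbers, not those of the 2004 study) of how a risk ratio and the related "fraction of attributable risk" (FAR) can be computed from two model ensembles, one with and one without human influence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical summer-mean temperature anomalies (degrees C) from two model
# ensembles: one with all forcings included, one with anthropogenic forcings removed.
with_human = rng.normal(loc=0.4, scale=1.2, size=10_000)
natural_only = rng.normal(loc=0.0, scale=1.2, size=10_000)

threshold = 2.3  # heat-wave threshold (hypothetical)

# Probability of exceeding the threshold in each "world."
p1 = np.mean(with_human > threshold)
p0 = np.mean(natural_only > threshold)

risk_ratio = p1 / p0   # how many times more likely the event became
far = 1.0 - p0 / p1    # fraction of attributable risk

print(f"P(event | with human influence)  = {p1:.3f}")
print(f"P(event | natural forcings only) = {p0:.3f}")
print(f"Risk ratio = {risk_ratio:.1f}, FAR = {far:.2f}")
```

With these toy numbers, the event comes out roughly twice as likely in the ensemble that includes human influence, which is the kind of statement quoted above.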

It is much easier to determine how humans have contributed to global climate change than to a specific local weather event. Local attribution requires an in-depth examination of the driving forces at a particular location and time, as well as a description of how that local system fits within the global structure.

Figure 1 – Average yearly global temperature with (empty circles) and without (full circles) El Niño Southern Oscillations

Figure 1 describes average global temperature changes from 1880 until now. It also distinguishes between years that experienced El Niño Southern Oscillations and years that didn’t. On a global scale, the trend in both categories follows the same pattern; that is not necessarily true on the local scale. Indeed, many of the papers that try to describe weather on a local scale incorporate the estimated extent of the El Niño impact. The figure emphasizes that, global or local, one of the best ways to determine attribution for a particular weather event is to identify a multi-year pattern in which one can distinguish the presence or absence of a particular driving force.
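As a rough illustration of that idea, the sketch below uses a made-up temperature record and a made-up El Niño flag (purely for demonstration) and fits a separate linear trend to El Niño years and to all other years; similar slopes in the two subsets point to a driving force, here the long-term warming, that acts independently of the El Niño distinction.

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(1880, 2018)
# Made-up anomalies: a common warming trend, a fixed El Nino boost, and noise.
el_nino = rng.random(years.size) < 0.3      # flag El Nino years (made up)
trend = 0.008 * (years - years[0])          # degrees C of warming per year (made up)
anomaly = trend + 0.15 * el_nino + rng.normal(0, 0.1, years.size)

# Fit a linear trend to each subset separately and compare the slopes.
for label, mask in [("El Nino years", el_nino), ("other years  ", ~el_nino)]:
    slope, intercept = np.polyfit(years[mask], anomaly[mask], 1)
    print(f"{label}: trend = {slope * 100:.2f} degrees C per century")
```

With this made-up record, both subsets show essentially the same warming trend, which is the pattern Figure 1 displays on the global scale.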

I discussed Figure 2 in an earlier blog (October 3, 2017) in the context of global attribution; as we will see below, the same approach is currently the main tool being used to determine local weather event attributions. Figure 2 shows the measured global temperature record superimposed on simulations run with and without human influence.

Such a combination of superimposing observational data with and without human influence is now the cornerstone in determining human contributions to any extreme weather event.

Figure 2 – Global warming attributions – simulation of 20th century global mean temperatures (with and without human influences) compared to observations (NASA)

Figure 3 shows the driving forces included in climate simulations, as taken from the most recent National Climate Assessment Report (August 15, 2017 blog). For the first time in the history of the report, this edition includes a full chapter on climate event attribution (Chapter 3).

The human attributions are included among sections of the gray box labeled “climate forcing agents.”

Once the simulations fit the measured results, as they do in Figure 2, one can choose particular sections of interest within that gray box. In Figure 2, this is done using “human influence,” a broad term that includes CO2, non-CO2 greenhouse gases, aerosols, and land use.

Figure 3 – Simplified conceptual modeling framework for the climate system as implemented in many climate models (CSSR, 4th National Climate Assessment, Chapter 2)

The PRECIS model is a great example of a relatively simple model that has been adapted to determine specific contributions to weather and climate events:

PRECIS is a regional climate model (RCM) ported to run on a Linux PC with a simple user interface, so that experiments can easily be set up over any region of the globe. PRECIS is designed for researchers (with a focus on developing countries) to construct high-resolution climate change scenarios for their region of interest. These scenarios can be used in impact, vulnerability and adaptation studies, and to aid in the preparation of National Communications, as required under Articles 4.1 and 4.8 of the United Nations Framework Convention on Climate Change (UNFCCC).[1]

The American Meteorological Society is now publishing yearly reports covering such events. Figure 4 summarizes those that took place in 2016, as shown in its January 2018 supplementary bulletin.

Figure 4 – Location and types of events analyzed in the special supplement to the Bulletin of the American Meteorological Society, Vol. 99, No. 1, January 2018

Each event constitutes a chapter in this report. Here are two such events (in bold) and the authors’ conclusions as to their attributions.

The 2016 extreme warmth across Asia would not have been possible without climate change. The 2015/16 El Niño also contributed to regional warm extremes over Southeast Asia and the Maritime Continent.

Conclusions. All of the risk of the extremely high temperatures over Asia in 2016 can be attributed to anthropogenic warming. In addition, the ENSO condition made the extreme warmth two times more likely to occur. It is found that anthropogenic warming contributed to raising the level of event probability almost everywhere, although the 2015/16 El Niño contributed to a regional increase of warm events over the Maritime Continent, the Philippines, and Southeast Asia, but had little significant contribution elsewhere in Asia.

The record temperature of April 2016 in Thailand would not have occurred without the influence of both anthropogenic forcing and El Niño, which also increased the likelihood of low rainfall.

Conclusions. Our analysis demonstrates that anthropogenic climate change results in a clear shift of the April temperature distribution toward warmer conditions and a more moderate, albeit distinct, shift of the rainfall distribution toward drier Aprils in Thailand. The synergy between anthropogenic forcings and a strong El Niño was crucial to the breaking of the temperature record in 2016, which our results suggest would not have occurred if one of these factors were absent. Rainfall as low as in 2016 is found to be extremely rare in La Niña years. The joint probability for hot and dry events similar to April 2016 is found to be relatively small (best estimate of about 1%), which implies that in addition to the drivers examined here, other possible causes could have also played a role, like moisture availability and transport (especially in the context of the prolonged drought), atmospheric circulation patterns, and the effect of other non-ENSO modes of unforced variability.

The attributions described here are mostly determined via a mixed analysis of direct observations and computer simulations. Such techniques are common in science. But some of my colleagues believe that the use of computer simulations to address issues such as anthropogenic climate change is inappropriate. I will address an example taken from cosmology of how computers have been used successfully to describe physical phenomena.


Fake News: Attributions

Figure 1 – Snow in the Sahara

If you do a Google image search for “Snow in the Sahara,” you will get a screen full of images similar to this one. What would bring you to search for that particular term? I read The New York Times every morning and a couple of weeks ago I came across a story about just that phenomenon. For me, the photograph raised memories from close to 50 years ago when I first came to the US to do my postdoctoral education. I grew up in Israel, a small country with a Mediterranean climate where snow was a relatively rare occurrence. On my first visit to the US, my family and I drove from the east coast to California. We took Interstate 40 through New Mexico and Arizona, passing through the Mojave Desert. When I looked out the window I saw the unbelievable sight of a snow-covered terrain with the odd-looking Joshua trees emerging throughout.

Figure 2 – The Mojave Desert in winter

The photograph in the NYT triggered in me a similar wonder on a global scale. Going through the article I got some explanations:

“The Sahara is as large as the United States, and there are very few weather stations,” he added. “So it’s ridiculous to say that this is the first, second, third time it snowed, as nobody would know how many times it has snowed in the past unless they were there.”

Rein Haarsma, a climate researcher at the Royal Netherlands Meteorological Institute, cautioned against ascribing the white-capped dunes to changing temperatures because of pollution.

“It’s rare, but it’s not that rare,” Mr. Haarsma said in an interview. “There is exceptional weather at all places, and this did not happen because of climate change.”

The snow fell in the Sahara at altitudes of more than 3,000 feet, where temperatures are low anyway. But Mr. Haarsma said cold air blowing in from the North Atlantic was responsible.

Those icy blasts usually sweep into Scandinavia and other parts of Europe, Mr. Haarsma explained, but in this case, high-pressure systems over the Continent had diverted the weather much farther south.

Mr. Haarsma offered a mechanism to explain why we see snow in the Sahara: he claimed it is not due to climate change but instead caused by cold air blowing down from the North Atlantic. He did not, however, explain why we are now seeing so much cold air from the North Atlantic reach as far south as the Sahara. Well, snow in the Sahara is not the only so-called unique extreme weather we are seeing.

I live in New York City and we have just experienced an extreme cold event that started around Christmas and lasted until mid-January with a few short breaks. The freeze affected the entire northeastern US, prompting President Trump to tweet:

In the East, it could be the COLDEST New Year’s Eve on record. Perhaps we could use a little bit of that good old Global Warming that our Country, but not other countries, was going to pay TRILLIONS OF DOLLARS to protect against. Bundle up!

4:01 PM – 28 Dec 2017

He got more than 60,000 retweets and more than 200,000 likes, along with countless comments and replies. Such is the power of social media these days.

Erik Ortiz wrote about this event in “Why climate change may be to blame for dangerous cold blanketing eastern U.S.”:

A study published last year in the journal WIRES Climate Change, however, lays out how the warming Arctic and melting ice appear to be linked to cold weather being driven farther south.

“Very recent research does suggest that persistent winter cold spells (as well as the western drought, heatwaves, prolonged storminess) are related to rapid Arctic warming, which is, in turn, caused mainly by human-caused climate change,” Jennifer Francis, a climate scientist at Rutgers University and one of the study’s authors, said in an email.

But researchers have said the loss of sea ice and increased snow cover in northern Asia is helping to weaken the polar vortex.

In addition, “abnormally” warm ocean temperatures off the West Coast are causing the jet stream over North America — which moves from west to east and follows the boundaries between hot and cold air — to “bulge” northward, Francis said.

That scenario is what has caused a lack of storms so far this winter in California and Alaska’s unusually warm and record-breaking temperatures, she added.

Meanwhile, the wrinkled jet stream as it travels east is also being pushed farther south, according to her research.

Mr. Ortiz included a picture from NOAA (National Oceanic and Atmospheric Administration) that shows the schematic shift between a strong jet stream – which more or less confines the extreme cold air to the Arctic – and the global-warming-caused weaker jet stream, which allows the air to push further south to regions including the northeastern US and the Sahara Desert (Figure 3).

Figure 3 – The shifting jet stream (NOAA)

Figure 4 – Change of temperature in the Arctic vs. globally, 1880–2010s

Figure 4 summarizes the Arctic temperature changes as compared to the average global temperature changes from 1880 to the 2010s.

Bloomberg published a series of three articles on “How a Melting Arctic Changes Everything,” in which it tried to summarize the global changes resulting from the warming Arctic:

Part I – The Bare Arctic

Part II – The Political Arctic

Part III – The Economic Arctic

There is almost universal agreement that the accelerated temperature increase in the Arctic is the result of climate change, amplified by changes in the region’s surface reflectivity (albedo) as ice and snow melt. In other words, while Mr. Haarsma was basically right about the mechanism of the snow falling in the Sahara Desert, he was wrong about its attribution.

It is much more difficult to attribute localized, specific events to human activities than it is to attribute climate change (multiyear weather trends) on a global scale. But most people’s perception of blame derives from specific extreme events in specific locations (i.e., we understand concrete examples best), so understanding where we stand in our efforts to connect local events to global culpability is of the utmost importance. Next week I will try to show an analysis of a specific case.


Fake News

I am an old guy; my bucket list is modest but I am trying to check off the remaining items while I can still fully enjoy them. There’s only one that remains out of my reach: I want to travel to Mars. Not only would I like to see planet Earth from space with my own eyes but, as a cosmology teacher, it would give me joy to experience a fraction of the distances that the field studies. I am realistic enough to know that it won’t happen. That is, after all, one of the attractions of a bucket list.

Well, according to Elon Musk I might yet get my chance. Granted, a ticket will cost me $200,000.

Meanwhile, some people might get an even better deal. Stephen Hawking just announced that he’s willing to pay for people’s tickets, although in this case the trip is to Venus. There is only one caveat: the voyagers must “qualify” as climate change deniers. Unfortunately, that takes me out of the running. There is a reason for Hawking’s publicity stunt. There are strong correlations between the conditions one would find on Venus and those we can expect to find on Earth should climate change reach its logical conclusion according to business as usual projections (June 25, 2012). If only I could agree with denier logic that carbon dioxide is a benign gas and all of its links to climate change are “fake news” that scientists fabricate to get grant money from the government, I’d be well on my way toward fulfilling my space travel goals. After all, there probably wouldn’t be that much difference between the experiences of visiting Mars or Venus.

Fake news is a popular topic these days. It’s an excellent cover for ignorance – it means that anyone can make or dismiss an argument, whether or not they have supporting facts or reproducible observations to back up their claim.

Deniers use the label of fake news to delegitimize climate change in the eyes of the public, especially when it comes to mitigation efforts that require voter support:

“Global Warming: Fake News from the Start” by Tim Ball and Tom Harris

President Donald Trump announced the U.S. withdrawal from the Paris Agreement on climate change because it is a bad deal for America. He could have made the decision simply because the science is false, but most of the public have been brainwashed into believing it is correct and wouldn’t understand the reason.

Canadian Prime Minister Justin Trudeau, and indeed the leaders of many western democracies, though thankfully not the U.S., support the Agreement and are completely unaware of the gross deficiencies in the science. If they did, they wouldn’t be forcing a carbon dioxide (CO2) tax, on their citizens.

Trudeau and other leaders show how little they know, or how little they assume the public know, by calling it a ‘carbon tax.’ But CO2 is a gas, while carbon is a solid. By calling the gas carbon, Trudeau and others encourage people to think of it as something ‘dirty’, like graphite or soot, which really are carbon. Calling CO2 by its proper name would help the public remember that it is actually an invisible, odorless gas essential to plant photosynthesis.

…CO2 is not a pollutant…the entire claim of anthropogenic global warming (AGW) was built on falsehoods and spread with fake news.

…In 1988 Wirth was in a position to jump start the climate alarm. He worked with colleagues on the Senate Energy and Natural Resources Committee to organize a June 23, 1988 hearing where Dr. James Hansen, then the head of the Goddard Institute for Space Studies (GISS), was to testify…Specifically, Hansen told the committee,

“Global warming has reached a level such that we can ascribe with a high degree of confidence a cause and effect relationship between the greenhouse effect and observed warming…It is already happening now…The greenhouse effect has been detected and it is changing our climate now…We already reached the point where the greenhouse effect is important.”

…More than any other event, that single hearing before the Energy and Natural Resources Committee publicly initiated the climate scare, the biggest deception in history. It created an unholy alliance between a bureaucrat and a politician that was bolstered by the U.N. and the popular press leading to the hoax being accepted in governments, industry boardrooms, schools, and churches across the world.

Trump must now end America’s participation in the fake science and the fake news of man-made global warming. To do this, he must withdraw the U.S. from further involvement with all U.N. global warming programs, especially the IPCC as well as the agency that now directs it—the United Nations Framework Convention on Climate Change. Only then will the U.S. have a chance to fully develop its hydrocarbon resources to achieve the president’s goal of global energy dominance.

The Ball and Harris piece claims that people who believe in climate change have been duped by fake news. Their key point is that the notion of carbon dioxide being the chemical largely responsible for climate change is utterly false. Their reasoning is twofold. The first is semantic – they take exception to calling carbon dioxide “carbon.” This is largely an issue of units. Most scientific publications use this shorthand mainly because carbon dioxide is not the only anthropogenic greenhouse gas; other gases, such as methane, are often expressed in terms of equivalent units of carbon dioxide. Any transition between the units of “carbon” and “carbon dioxide” involves multiplication by 44/12, or about 3.7, which is the ratio of the molecular weight of carbon dioxide to the atomic weight of carbon. I will remind those of you to whom the last sentence is a foreign language that some prerequisites – especially in vocabulary and basic math – are necessary when we describe the details of the physical environment. Ball and Harris’ second objection is that carbon dioxide is not a “pollutant” but an “invisible, odorless gas essential to plant photosynthesis.” While the latter statement is true, it does not represent the whole truth. My Edward Teller quote from last week directly addressed this issue. Essentially, carbon dioxide is a (measurable) pollutant because of its optical absorption properties, i.e. it is responsible for a great deal of the observed climate change because it preferentially absorbs the longer (infrared) wavelengths of the electromagnetic spectrum.
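For readers who want to see that unit arithmetic spelled out, here is a minimal sketch of the carbon-to-carbon-dioxide conversion mentioned above (the 10-gigatonne figure is purely illustrative):

```python
# Ratio of the molecular weight of CO2 (12 + 2*16 = 44) to the atomic weight of carbon (12).
C_TO_CO2 = 44.0 / 12.0   # about 3.67: multiply a mass of carbon to get the mass of CO2
CO2_TO_C = 12.0 / 44.0   # the inverse conversion

# Illustrative example: 10 gigatonnes of carbon correspond to about 36.7 Gt of CO2.
gt_carbon = 10.0
print(f"{gt_carbon} GtC = {gt_carbon * C_TO_CO2:.1f} Gt CO2")
```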

Figure 1 – Atmospheric carbon dioxide concentration over the last 400,000 years

Figure 1 shows the atmospheric concentrations of carbon dioxide over the last 400,000 years. Again, the famous hockey-stick curve shows up as the nearly perpendicular rise after the industrial revolution at the far right of the graph. Last week’s blog showed that the carbon dioxide added to the atmosphere between the industrial revolution and the 1950s contained no carbon-14, which indicates the ancient origins of fossil fuels.

Furthermore, one of my earliest blogs (June 25, 2012) illustrates the carbon cycle (again – much of it in the form of carbon dioxide): where it’s going and where it’s coming from. Photosynthesis and respiration are part of it. Without the anthropogenic contributions (burning fossil fuels, land use change, cement production, etc.), emission and sequestration balance, which explains the long, approximately constant concentration shown in Figure 1. Once we factor in the anthropogenic contributions and how they impact atmospheric absorption properties, we start to see the changing energy balance with the sun and hence the change of climate.

Ball and Harris’ piece at least presented a mechanism of internal logic that can be refuted. Many are satisfied with simply branding something fake news. Unfortunately, social media makes spreading this sort of false information a relatively painless process. Even Google is facing flak for its role in spreading fake news:

“How Climate Change Deniers Rise to the Top in Google Searches” by Hiroko Tabuchi

Groups that reject established climate science can use the search engine’s advertising business to their advantage, gaming the system to find a mass platform for false or misleading claims.

Type the words “climate change” into Google and you could get an unexpected result: advertisements that call global warming a hoax. “Scientists blast climate alarm,” said one that appeared at the top of the search results page during a recent search, pointing to a website, DefyCCC, that asserted: “Nothing has been studied better and found more harmless than anthropogenic CO2 release.”

Not everyone who uses Google will see climate denial ads in their search results. Google’s algorithms use search history and other data to tailor ads to the individual, something that is helping to create a highly partisan internet.

It seems that even parts of the internet that we consider to be neutral are starting to reflect the increasingly combative political climate and the related problem of fake news.


“Natural” or “Anthropogenic”? – Climate Change

Last week’s blog looked at various methods of distinguishing “natural” vs. “artificial” vanilla. I used this as a jumping off point to facilitate answering the much more important question of how we distinguish anthropogenic climate change from “natural” climate change (i.e. that which took place way before humans had the capacity to inflict any changes on the physical environment of the planet).

Deniers’ most common argument is that they agree that the climate is changing but that it has been doing so since long before humans were around. More specifically, they claim that carbon dioxide couldn’t be a cause of such changes in the here and now because it is a “natural,” “harmless” compound.

Tom Curtis summarized the issue and enumerated the problems with such reasoning in his July 25, 2012 post on Skeptical Science. Skeptical Science is a blog for which I have the utmost respect (July 13, 2013) and which published one of my own guest posts. Here is a condensed outline of the data Curtis provided:

There are ten main lines of evidence to be considered:

1. The start of the growth in CO2 concentration coincides with the start of the industrial revolution, hence anthropogenic;

2. Increase in CO2 concentration over the long term almost exactly correlates with cumulative anthropogenic emissions, hence anthropogenic;

3. Annual CO2 concentration growth is less than annual CO2 emissions, hence anthropogenic;

4. Declining C14 ratio indicates the source is very old, hence fossil fuel or volcanic (ie, not oceanic outgassing or a recent biological source);

5. Declining C13 ratio indicates a biological source, hence not volcanic;

6. Declining O2 concentration indicates combustion, hence not volcanic;

7. Partial pressure of CO2 in the ocean is increasing, hence not oceanic outgassing;

8. Measured CO2 emissions from all (surface and beneath the sea) volcanoes are one-hundredth of anthropogenic CO2 emissions; hence not volcanic;

9. Known changes in biomass too small by a factor of 10, hence not deforestation; and

10. Known changes of CO2 concentration with temperature are too small by a factor of 10, hence not ocean outgassing.

Figure 1 – Anthropogenic and total atmospheric carbon dioxide concentrations and the 14C isotopic decline (for more details see my Oct 3, 2017 blog on attributions)

Figure 1 demonstrates the changes in the first four items on this list, which I have also described in earlier blogs. The decline of 14C is the common denominator with the characterization of “natural” vanilla from last week’s blog. This important metric, however, becomes far less useful after the end of WWII because of the atmospheric contamination from the nuclear testing that took place at that time (As stated in the caption for Figure 1: “After 1955 the decreasing 14C trend ends due to the overwhelming effect of bomb 14C input into the atmosphere”).

Yet even the limited decline of 14C from 1900 to 1950 is telling: that period constitutes the start of the significant anthropogenic contributions to the atmospheric concentrations of carbon dioxide.

These important results present a strong argument that most of the increase in the atmospheric concentrations of carbon dioxide comes from humans burning fossil fuels. Deniers say that this is not sufficient grounds for associating the increased carbon dioxide concentration with climate change.

To make the argument that carbon dioxide is the main greenhouse gas responsible for climate change, I will quote one of the most famous scientists of the 20th century. Far from being thought of as a climate change–centered scientist, Edward Teller is instead known for creating the hydrogen bomb after the Second World War.

But Teller associated carbon dioxide with the global climate when he made a speech at the celebration of the centennial of the American oil industry in 1959:

Ladies and gentlemen, I am to talk to you about energy in the future. I will start by telling you why I believe that the energy resources of the past must be supplemented. First of all, these energy resources will run short as we use more and more of the fossil fuels. But I would […] like to mention another reason why we probably have to look for additional fuel supplies. And this, strangely, is the question of contaminating the atmosphere. [….] Whenever you burn conventional fuel, you create carbon dioxide. [….] The carbon dioxide is invisible, it is transparent, you can’t smell it, it is not dangerous to health, so why should one worry about it?

Carbon dioxide has a strange property. It transmits visible light but it absorbs the infrared radiation which is emitted from the earth. Its presence in the atmosphere causes a greenhouse effect [….] It has been calculated that a temperature rise corresponding to a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York. All the coastal cities would be covered, and since a considerable percentage of the human race lives in coastal regions, I think that this chemical contamination is more serious than most people tend to believe.

This connection is a simple physical property of carbon dioxide that falls under the scientific discipline called “spectroscopy” (December 10, 2012). It underlies one of the most important parameters characterizing climate change, shown in Figure 2: the “climate sensitivity.”

Figure 2 – Projected temperature increase as a function of projected carbon dioxide increase (from the 4th IPCC report, AR4; December 10, 2012 blog)

Radiative forcing due to doubled CO2

CO2 climate sensitivity has a component directly due to radiative forcing by CO2, and a further contribution arising from climate feedbacks, both positive and negative. “Without any feedbacks, a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1 °C global warming, which is easy to calculate and is undisputed. The remaining uncertainty is due entirely to feedbacks in the system, namely, the water vapor feedback, the ice-albedo feedback, the cloud feedback, and the lapse rate feedback”;[14] addition of these feedbacks leads to a value of the sensitivity to CO2 doubling of approximately 3 °C ± 1.5 °C, which corresponds to a value of λ of 0.8 K/(W/m2).
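As a rough check on the numbers quoted above, here is a minimal sketch that combines the commonly used logarithmic approximation for CO2 forcing (my addition; it is not part of the quoted text) with the sensitivity values mentioned there:

```python
import math

# Radiative forcing for raising CO2 from a pre-industrial 280 ppm to 560 ppm,
# using the common approximation delta_F = 5.35 * ln(C / C0), in W/m^2.
C0, C = 280.0, 560.0
delta_F = 5.35 * math.log(C / C0)   # about 3.7 W/m^2 for a doubling

# The no-feedback value below is simply 1 C divided by 3.7 W/m^2, for comparison.
lambda_no_feedback = 0.27           # K per (W/m^2): yields roughly 1 C with no feedbacks
lambda_with_feedbacks = 0.8         # K per (W/m^2), the value quoted above

print(f"Forcing for doubled CO2:   {delta_F:.1f} W/m^2")
print(f"Warming with no feedbacks: {delta_F * lambda_no_feedback:.1f} C")
print(f"Warming with feedbacks:    {delta_F * lambda_with_feedbacks:.1f} C")
```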

In the next blog I will look at how climate change deniers self-justify labelling this as “fake news,” thus relieving themselves of the burden of engaging with (or even considering) the associated science.


“Natural” or “Anthropogenic”? – Vanilla Extract and Climate Change

Figure 1 – My Christmas present

Happy New Year!!! I got a yummy Christmas present from a family member that doubled as a hint to try to improve my cooking: homemade vanilla extract. They got the vanilla beans from Honduras and immersed them in vodka!

As a good giftee, I will use the extract in some of my food preparation (I like the flavor). The gift also reminded me of a time a few years back when a gentleman from our neighborhood came to my school to ask for advice on how he could distinguish a “natural” vanilla flavor from an artificial one. Neither I, nor any of my colleagues, could come up with satisfactory answers but we were intrigued with the differentiation he sought.

My own interest in this distinction is a bit broader: I want to know how to convince all of you that the global climate change that we are experiencing is “artificial,” as in man-made or anthropogenic, and not “natural.”

I get constant complaints from many good friends for naming carbon dioxide and methane – the latter the main component of natural gas – as pollutants that cause climate change. This is especially true given how often I blame climate change for the ultimate slaughter of our children and grandchildren should we continue our business-as-usual living practices. After all, humans and most other living organisms exhale carbon dioxide, and we and many animals around us naturally emit flatulence with methane as an important component.

Well, I will try to start our new year with the much more pleasant vanilla flavor and follow that up next time by utilizing a common technique for distinguishing natural from artificial vanilla to parallel the differences between natural and artificial origins of climate change. We will find that in both cases a learning curve is required to follow some of the language used.

The gentleman who came to us seeking help distinguishing the source of the vanilla was mainly interested in money. According to him, many people who sell the extract and claim it to be natural are cheating and actually sell the much cheaper artificial variety at a bumped-up price. Our interest in recognizing that we are the instigators of most of the climate change that we are experiencing lies in understanding that it is also up to us to stop that momentum.

The identification of the origin of vanilla is, in essence, a legal matter, as Forbes’ Eustacia Huen writes in “What Manufacturers Really Mean By Natural And Artificial Flavors”:

According to the U.S. Food and Drug Administration’s (FDA) Code of Federal Regulations (Title 21), the term “natural flavor” essentially has an edible source (i.e. animals and vegetables). Artificial flavors, on the other hand, have an inedible source, which means you can be eating anything from petroleum to paper pulp processed to create the chemicals that flavor your food. For example, Japanese researcher Mayu Yamamoto discovered a way to extract vanillin, the compound responsible for the smell and flavor of vanilla, from cow poop in 2006, as reported by Business Insider.

Gary Reineccius’ “What is the difference between artificial and natural flavors?” in Scientific American defines it somewhat differently:

Natural and artificial flavors are defined for the consumer in the Code of Federal Regulations. A key line from this definition is the following: ” a natural flavor is the essential oil, oleoresin, essence or extractive, protein hydrolysate, distillate, or any product of roasting, heating or enzymolysis, which contains the flavoring constituents derived from a spice, fruit or fruit juice, vegetable or vegetable juice, edible yeast, herb, bark, bud, root, leaf or similar plant material, meat, seafood, poultry, eggs, dairy products, or fermentation products thereof, whose significant function in food is flavoring rather than nutritional.” Artificial flavors are those that are made from components that do not meet this definition.

The question at hand, however, appears to be less a matter of legal definition than the “real” or practical difference between these two types of flavorings. There is little substantive difference in the chemical compositions of natural and artificial flavorings. They are both made in a laboratory by a trained professional, a “flavorist,” who blends appropriate chemicals together in the right proportions. The flavorist uses “natural” chemicals to make natural flavorings and “synthetic” chemicals to make artificial flavorings. The flavorist creating an artificial flavoring must use the same chemicals in his formulation as would be used to make a natural flavoring, however. Otherwise, the flavoring will not have the desired flavor. The distinction in flavorings–natural versus artificial–comes from the source of these identical chemicals and may be likened to saying that an apple sold in a gas station is artificial and one sold from a fruit stand is natural.

So is there truly a difference between natural and artificial flavorings? Yes. Artificial flavorings are simpler in composition and potentially safer because only safety-tested components are utilized. Another difference between natural and artificial flavorings is cost. The search for “natural” sources of chemicals often requires that a manufacturer go to great lengths to obtain a given chemical. Natural coconut flavorings, for example, depend on a chemical called massoya lactone. Massoya lactone comes from the bark of the Massoya tree, which grows in Malaysia. Collecting this natural chemical kills the tree because harvesters must remove the bark and extract it to obtain the lactone. Furthermore, the process is costly. This pure natural chemical is identical to the version made in an organic chemists laboratory, yet it is much more expensive than the synthetic alternative. Consumers pay a lot for natural flavorings. But these are in fact no better in quality, nor are they safer, than their cost-effective artificial counterparts.

As we can read in the last two paragraphs of the Scientific American piece, in terms of functionality (i.e. flavor) there is not much difference between the “artificial” and the “natural”; if anything, the synthetic variety is considerably purer.

Chemical & Engineering News (C&EN) tackles the full complexity of the issue in Melody M. Bomgardner’s “The problem with vanilla: After vowing to go natural, food brands face a shortage of the favored flavor.” The essence of this article is easily summed up as follows:

In brief:

Vanilla is perhaps the world’s most popular flavor, but less than 1% of it comes from a fully natural source, the vanilla orchid. In 2015, a host of big food brands, led by Nestlé, vowed to use only natural flavors in products marketed in the U.S.—just as a shortage of natural vanilla was emerging. In the following pages, C&EN explains how flavor firms are working to supply.

However, we need to delve in further.

In Réunion, output of vanilla soared thanks to the Albius method, and orchid cultivation expanded to nearby Madagascar. Today, about 80% of the world’s natural vanilla comes from smallholder farms in Madagascar. There, locals continue to pollinate orchids by hand and cure the beans in the traditional fashion.

It didn’t take long for vanilla demand to exceed supply from the farms of Madagascar. In the 1800s and 1900s, chemists took over from botanists to expand supply of the flavor. Vanillin, the main flavor component of cured vanilla beans, was synthesized variously from pine bark, clove oil, rice bran, and lignin.

Rhône-Poulenc, now Solvay, commercialized a pure petrochemical route in the 1970s. In recent years, of the roughly 18,000 metric tons of vanilla flavor produced annually, about 85% is vanillin synthesized from the petrochemical precursor guaiacol. Most of the rest is from lignin.

But the traditional vanilla bean is starting to enjoy a renaissance, thanks to consumer demand for all-natural foods and beverages. Last year, a string of giant food companies, including General Mills, Hershey’s, Kellogg’s, and Nestlé, vowed to eliminate artificial flavors and other additives from many foods sold in the U.S.

Figure 2 – The various ways to Vanillin

There is a problem, however: World production of natural vanilla is tiny and has been falling in recent years. Less than 1% of vanilla flavor comes from actual vanilla orchids. With demand on the upswing, trade in the coveted flavor is out of balance.

Flavor companies are working feverishly to find additional sources of natural vanillin and launch initiatives to boost the quality and quantity of bean-derived vanilla. Suppliers such as Symrise, International Flavors & Fragrances (IFF), Solvay, and Borregaard are using their expertise along the full spectrum of natural to synthetic to help food makers arrive at the best vanilla flavor for each product.

Food makers, meanwhile, are confronting skyrocketing costs for natural vanilla, reformulation challenges, complicated labeling laws, and difficult questions about what is “natural.”

Although consumer disdain for artificial ingredients has been building for years, credit —or blame—for last year’s wave of “all natural” announcements goes to Nestlé, which in February 2015 was the first major brand to announce plans to eliminate artificial additives from chocolate candy sold in the  U.S. The announcement upended the massmarket chocolate industry practice of adding synthetic vanillin to counter the bitterness of cocoa.

For a big food firm, however, switching to natural vanilla is akin to squeezing an elephant into a Volkswagen. While the ink was drying on those all-natural announcements last year, output of Madagascar vanilla beans had plummeted to 1,100 metric tons, about half the normal harvest. That, along with rising demand, caused prices to more than double to roughly $225 per kg by the middle of last year, according to Mintec, a raw material price-tracking firm.

Cured vanilla beans contain only 2% of extractable vanilla flavor, meaning prices for pure vanilla reached an eye-popping $11,000 per kg. The industry is closely watching this year’s harvest, hoping to see vanilla costs eventually return to pre-2012 levels of about $25 per kg for beans or $1,250 for vanilla.

The clear distinction that we can make is between the petroleum-based guaiacol route and all the other routes, which originate from living plants.

The distinction between the “natural” and the “synthetic” vanilla will follow the same rationale that I covered in the October 3, 2017 blog, where I described the human attributions to climate change:

14C is a radioisotope of carbon with atoms that contain 6 protons and 8 neutrons, as compared to the more abundant isotope of carbon (12C), which contains 6 protons and 6 neutrons. The radioisotope is unstable and slowly converts to nitrogen (14N) by converting one neutron to one proton. The conversion rate is measured through a parameter called the half-life. It means that if we start with a certain amount of the material, half of it will convert into 14N in that period of time. The half-life of 14C is 5,730 years. The natural abundance of this isotope in the atmosphere is about one atom in a trillion. Plants that grow by photosynthesis of carbon dioxide in the atmosphere end up with the same relative abundance of this carbon isotope as the atmosphere.

All the routes to the “natural” vanillin in Figure 2, except for the petroleum route, originate from photosynthetic plants – within the plants’ lifetimes. The plants photosynthesize atmospheric carbon dioxide and have lifetimes considerably shorter than 5,730 years (the half-life of the carbon-14 they absorbed from the atmosphere), so we expect the vanilla extracted through any of the “natural” routes to have the same C-14 concentration as the atmosphere.

The petroleum route is based on petroleum that was formed via decay of plants and digestion by anaerobic (no oxygen) bacteria, millions of years ago. That’s a much longer time than the half-life of C-14. So we expect the vanilla that we get from this route to have no C-14. Measuring C-14 in the extract is a quick and easy job for laboratories that are equipped to do so.
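The arithmetic behind those expectations is simple exponential decay; here is a minimal sketch (the ages are illustrative):

```python
HALF_LIFE_C14 = 5_730.0  # years

def c14_fraction_remaining(age_years: float) -> float:
    """Fraction of the original carbon-14 left after age_years of decay."""
    return 0.5 ** (age_years / HALF_LIFE_C14)

# A vanilla orchid lives only a few years, so its carbon is essentially "fresh".
print(f"5-year-old plant material:  {c14_fraction_remaining(5):.4f}")
# Petroleum formed tens of millions of years ago retains effectively no 14C
# (the result underflows to zero in ordinary floating point).
print(f"50-million-year-old carbon: {c14_fraction_remaining(5e7):.2e}")
```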

A different method for the differentiation is based on additives that are present within “natural” vanilla and are absent in the synthetic vanilla. One of them is described below:

The production of vanilla beans is quite expensive, since it is a very labor intensive process and harvesting takes place 2 to 3 years after planting. This drives the price of natural vanilla extract to about three to five times higher than artificial vanilla preparations. Due to quality, price concerns and economically motivated frauds, it is important to differentiate between natural and artificial forms of vanilla extracts. Apart from vanillin, natural vanilla extracts have 4-hydroxybenzaldehyde, which is absent in artificial vanilla flavorings. This compound can be used as a marker ion to rapidly differentiate between natural and artificial vanilla preparations.[2]

This method is more general and can use a wider variety of analytical instruments compared to the radioactivity measurements, provided that the assumed additives are present.

Next week I will return to the October 3, 2017 blog about attribution to solidify the analogy between vanilla and the anthropogenic origins of climate change.


Invasive Species, Collective Suicide, and Self-Inflicted Genocide

Recently, the NYT published some answers to questions posed by readers: “Are We an Invasive Species? Plus, Other Burning Climate Questions.” One of these questions was whether or not humans count as an invasive species:

Are we an invasive species? 

By Livia Albeck-Ripka

In a high-rise in Malaysia’s capital in 1999, a group of scientists — convened by the International Union for Conservation of Nature to designate “100 of the World’s Worst Invasive Alien Species” — asked themselves this very question.

“Every single person in the room agreed, humans are the worst invasive species,” said Daniel Simberloff, a professor of environmental science at the University of Tennessee, who was at the meeting. “But the I.U.C.N. disagreed.” Homo sapiens, which has colonized and deforested much of the Earth and pumped enough carbon dioxide into the atmosphere to change its climate, was excluded from the list.

While the meaning of “invasive” has been the subject of some debate, the generally accepted definition is a plant, animal or other organism that, aided by humans, has recently moved to a place where it is nonnative, to detrimental effect. Humans don’t fit that definition, said Piero Genovesi, chairman of the I.U.C.N.’s invasive species group, because they are the ones doing the moving. A more useful way to think about ourselves, Dr. Genovesi said, is as the drivers of every problem conservation tries to remedy.

The piece reminded me of something I read in 2011 by a science writer for the Smithsonian, which addressed the same question in a somewhat clearer way:

Are Humans an Invasive Species?

Sarah Zielinski

Let’s start with the definition of an invasive species. It turns out, it’s not so simple. The legal definition in the United States is “an alien species whose introduction does or is likely to cause economic or environmental harm or harm to human health.” The International Union for Conservation of Nature (IUCN), which developed the list of the 100 world’s worst from which our invasive mammals piece originated, defines them as “animals, plants or other organisms introduced by man into places out of their natural range of distribution, where they become established and disperse, generating a negative impact on the local ecosystem and species.” And a 2004 paper in Diversity and Distributions that examines the terminology of invasiveness notes that there is a lack of consensus on this topic and lists five dominant definitions for ‘invasive,’ the most popular of which is “widespread that have adverse effects on the invaded habitat.”

Despite the lack of a single definition, however, we can pull from these definitions some general aspects of an invasive species and apply those to Homo sapiens.

1) An invasive species is widespread: Humans, which can be found on every continent, floating on every ocean and even circling the skies above certainly meet this aspect of invasiveness.

2) An invasive species has to be a non-native: Humans had colonized every continent but Antarctica by about 15,000 years ago. Sure, we’ve done some rearranging of populations since then and had an explosion in population size, but we’re a native species.

3) An invasive species is introduced to a new habitat: Humans move themselves; there is no outside entity facilitating their spread.

4) An invasive species had adverse effects on its new habitat and/or on human health: Humans meet this part of the definition in too many ways to count.

Verdict: We’re not an invasive species, though we’re certainly doing harm to the world around us. If you think about it, all of the harm done by invasive species is by definition our collective faults; some kind of human action led to that species being in a new place where it then causes some harm. And so I’m not at all astonished to find people arguing that we’re the worst invasive species of them all.

I fully agree with Ms. Zielinski: based on our fully human-centric definition of invasive species, the term cannot apply to us while we are on this planet. We can, however, become an invasive species via our space explorations and are taking some precautions to minimize that risk. A good example of this is the Cassini spacecraft’s deliberate crash into Saturn’s atmosphere in September 2017 to prevent contamination of any future efforts to find life in space. All the damage that humans are causing to the physical environment does not “justify” labeling us as foreign invaders. This planet is our home. It is also the only home of any other life form that we know. We are all “competing” for dominance here. These events are much better framed as collective suicides or self-inflicted genocides.

Figure 1 – Fundamental questions of Astrobiology

I am teaching Cosmology as an advanced General Education course at Brooklyn College. The syllabus covers our attempts to find extraterrestrial life, an area known as Astrobiology. Figure 1 shows the various areas of inquiry that are being considered as part of this effort. Closer inspection of this figure reveals that environmental issues play a key role. For instance, “what is life’s future on Earth and beyond?” and “Future Environmental Change.” In the latter category we see “Catastrophes” – but this entry only refers to collisions of various kinds with space objects, not with the self-destruction of Earth’s native life forms. I have covered self-destruction in specific blogs such as “Nuclear Winter” (July 9, 2013) as well as throughout the CCF blog (July 9, 2013 estimated the energy release of climate change as a multiple of that produced by the Hiroshima bombing).


Saving the World through the Pursuit of Self Interest: Part 2

Part 1 of this segment, which I published on April 25, 2017, focused on the March for Science – an event that took place on Earth Day and addressed the Trump administration’s attitude of climate change denial. This follow-up seeks my students’ input.

The 2017 Fall semester is over and my students are preparing for their final exam. I have been teaching my Climate Change course using the TBL (Team Based Learning) system. I have 6 groups in my class, each composed of roughly 7 students that have been studying together throughout the semester.

As an extra credit assignment I have challenged the groups to produce a collective paper addressing, “what can we do to save the world?” I view this question as synonymous with the objective of this whole blog. They will post their answers as comments for this post and all of you can be the judges.


Long-term Mitigation: Fusion

A month ago, I was working on a series of blogs about the long-term impacts of and solutions for climate change. I got sidetracked and decided to follow two dramatic events as they unfolded: the first was the tax legislation that has now been passed in different forms in both the House and Senate. It is predicted to increase the deficit by 1.5 trillion dollars over the next ten years. I also looked at the White House’s formal approval of a detailed, congressionally-mandated report about the impacts of climate change on the US. Given the details of the predicted damage that climate change can inflict, I believe that the two decisions were contradictory and that the lawmakers’ actions are irrational.

The last blog (November 7, 2017) in the previous series, “Long Term Solutions: Energy,” was meant to segue into a focus on an ultimate solution, fusion:

Fusion power is a form of power generation in which energy is generated by using fusion reactions to produce heat for electricity generation. Fusion reactions fuse two lighter atomic nuclei to form a heavier nucleus, releasing energy. Devices designed to harness this energy are known as fusion reactors.

The fusion reaction normally takes place in a plasma of deuterium and tritium heated to millions of degrees. In stars, gravity contains these fuels. Outside of a star, the most researched way to confine the plasma at these temperatures is to use magnetic fields. The major challenge in realising fusion power is to engineer a system that can confine the plasma long enough at high enough temperature and density.

As a source of power, nuclear fusion has several theoretical advantages over fission. These advantages include reduced radioactivity in operation and as waste, ample fuel supplies, and increased safety. However, controlled fusion has proven to be extremely difficult to produce in a practical and economical manner. Research into fusion reactors began in the 1940s, but as of 2017[update], no design has produced more fusion energy than the energy needed to initiate the reaction, meaning all existing designs have a negative energy balance.[1]

Over the years, fusion researchers have investigated various confinement concepts. The early emphasis was on three main systems: z-pinch, stellarator and magnetic mirror. The current leading designs are the tokamak and inertial confinement (ICF) by laser. Both designs are being built at very large scales, most notably the ITER tokamak in France, and the National Ignition Facility laser in the USA. Researchers are also studying other designs that may offer cheaper approaches. Among these alternatives there is increasing interest in magnetized target fusion and inertial electrostatic confinement.

Stars are the energy generators of the universe. By definition, they all generate their energy through fusion in their cores. The most important source of this energy is hydrogen. All the hydrogen in the universe was created primordially in its first few minutes of formation (the big bang). The initial distribution of elements was roughly 75% hydrogen and 25% helium. So cosmologically, hydrogen is the primary energy source in the universe. It can be converted into other forms of energy by way of fusion reactions. Stars are defined by their ability to sustain fusion reactions: their gravitational contraction raises their core temperatures above the ignition point of fusion. Star masses range from about 0.1 to about 100 solar masses (one solar mass is the mass of our sun). The upper limit exists because hydrogen “burning” accelerates sharply with the mass of the star, making the lifetimes of the biggest stars very short, while the lower limit is the minimum mass whose gravitational contraction is strong enough to ignite fusion of ordinary hydrogen in the core.

There are smaller objects called Brown Dwarfs that are smaller than 0.1 solar masses and thus unable to fuse normal hydrogen but bigger than big planets such as Jupiter; they can get some energy through fusion of deuterium (an isotope of hydrogen) and lithium:

Brown dwarfs are objects which have a size between that of a giant planet like Jupiter and that of a small star. In fact, most astronomers would classify any object with between 15 times the mass of Jupiter and 75 times the mass of Jupiter to be a brown dwarf. Given that range of masses, the object would not have been able to sustain the fusion of hydrogen like a regular star; thus, many scientists have dubbed brown dwarfs as “failed stars”.

The ultimate solution to our energy problems is to learn how to use fusion as our source of energy. Since immediately after the Second World War, we have known how to use fusion in a destructive capacity (hydrogen bombs) and have been earnestly trying to learn how to use it for peaceful applications such as converting it into electrical power. It is difficult. To start with, if we want to imitate our sun we have to create temperatures on the order of 100 million degrees Celsius. Before we can do that, we have to learn how to create or find materials that can be stable at such temperatures: all the materials that we know of will completely decompose in those circumstances. Figure 1 illustrates the facilities engaging in this research and their progress. We are now closer than we have ever been to maintaining a positive balance between energy input and energy output (ignition in the graph) but we are not there yet.

Figure 1 – Fusion experimental facilities and the plasma conditions they have reached

The vertical axis in Figure 1 represents a quantity called the triple product. The same site where I found the figure explains this quantity:

The triple product is a figure of merit used for fusion plasmas, closely related to the Lawson Criteria. It specifies that successful fusion will be achieved when the product of the three quantities – n, the particle density of a plasma; τ, the confinement time; and T, the temperature – reaches a certain value. Above this value of the triple product, the fusion energy released exceeds the energy required to produce and confine the plasma. For deuterium-tritium fusion this value is about: nτT ≥ 5×10²¹ m⁻³ s keV. JET has reached values of nτT of over 10²¹ m⁻³ s keV.

In other words, the Joint European Torus (JET), located at Culham near Oxford, England, is more than one-fifth of the way to the required value. The horizontal axis represents temperature on the Kelvin scale:

T(K) = T(°C) + 273.15
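To put the arithmetic together, here is a minimal sketch (my own illustration, not part of the quoted material) that checks how far a given plasma is from the deuterium-tritium threshold and converts a Celsius temperature to kelvin. The threshold and the JET figure come from the quote above; everything else is assumed for illustration.

```python
# Minimal sketch: how far is a given plasma from the D-T triple-product threshold?
# The threshold (5e21 m^-3 s keV) and the JET value (over 1e21) come from the quote above.

DT_THRESHOLD = 5e21  # n * tau * T required for deuterium-tritium fusion, in m^-3 s keV


def kelvin_from_celsius(temp_c: float) -> float:
    """Convert a temperature from degrees Celsius to kelvin."""
    return temp_c + 273.15


def fraction_of_threshold(triple_product: float) -> float:
    """Fraction of the D-T threshold reached by a given n*tau*T value."""
    return triple_product / DT_THRESHOLD


if __name__ == "__main__":
    jet_value = 1e21  # roughly the value JET has exceeded, per the quote
    print(f"JET is at about {fraction_of_threshold(jet_value):.0%} of the D-T threshold")
    print(f"100 million degrees Celsius is about {kelvin_from_celsius(1e8):.4g} K")
```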

In an interview with Scientific American, John Holdren, President Obama’s science adviser, summarized the history as well as the present state of the technology:

John Holdren has heard the old joke a million times: fusion energy is 30 years away—and always will be. Despite the broken promises, Holdren, who early in his career worked as a physicist on fusion power, believes passionately that fusion research has been worth the billions spent over the past few decades—and that the work should continue. In December, Scientific American talked with Holdren, outgoing director of the federal Office of Science and Technology Policy, to discuss the Obama administration’s science legacy. An edited excerpt of his thoughts on the U.S.’s energy investments follows.

Scientific American: Have we been investing enough in research on energy technologies?

John Holdren: I think that we should be spending in the range of three to four times as much on energy research and development overall as we’ve been spending. Every major study of energy R&D in relation to the magnitude of the challenges, the size of the opportunities and the important possibilities that we’re not pursuing for lack of money concludes that we should be spending much more.

But we have national labs that are devoted—

I’m counting what the national labs are doing in the federal government’s effort. We just need to be doing more—and that’s true right across the board. We need to be doing more on advanced biofuels. We need to be doing more on carbon capture and sequestration. We need to be doing more on advanced nuclear technologies. We need to be doing more on fusion, for heaven’s sake.

Fusion? Really?

Fusion is not going to generate a kilowatt-hour before 2050, in my judgment, but—

Hasn’t fusion been 30 years away for the past 30 years?

It’s actually worse than that. I started working on fusion in 1966. I did my master’s thesis at M.I.T. in plasma physics, and at that time people thought we’d have fusion by 1980. It was only 14 years away. By 1980 it was 20 years away. By 2000 it was 35 years away. But if you look at the pace of progress in fusion over most of that period, it’s been faster than Moore’s law in terms of the performance of the devices—and it would be nice to have a cleaner, safer, less proliferation-prone version of nuclear energy than fission.

My position is not that we know fusion will emerge as an attractive energy source by 2050 or 2075 but that it’s worth putting some money on the bet because we don’t have all that many essentially inexhaustible energy options. There are the renewables. There are efficient breeder reactors, which have many rather unattractive characteristics in terms of requiring what amounts to a plutonium economy—at least with current technology—and trafficking in large quantities of weapon-usable materials.

The other thing that’s kind of an interesting side note is if we ever are going to go to the stars, the only propulsion that’s going to get us there is fusion.

Are we talking warp drive?

No, I’m talking about going to the stars at some substantial fraction of the speed of light.

When will we know if fusion is going to work?

The reason we should stick with ITER [a fusion project based in France] is that it is the only current hope for producing a burning plasma, and until we can understand and master the physics of a burning plasma—a plasma that is generating enough fusion energy to sustain its temperature and density—we will not know whether fusion can ever be managed as a practical energy source, either for terrestrial power generation or for space propulsion. I’m fine with taking a hard look at fusion every five years and deciding whether it’s still worth a candle, but for the time being I think it is.

We now know that we can satisfy most of our energy needs and avert some of the predicted disasters by using sustainable sources of electricity. If we can figure it out, fusion seems to be a good bet to solidify that trend.


Irrationality and the Future

Climate change is all about future impact; what we see and deal with right now is limited to the early warning signs of the process. The global nature of the problem means that mitigating its impacts requires even more time than it otherwise would. Seventy percent of Earth is covered by oceans, which absorb a significant fraction of the additional heat and of the human-emitted greenhouse gases; the effects of climate change will therefore continue even after strong mitigation accomplishments such as a full global energy transition to non-carbon fuels. Many of the driving forces of climate change – such as deep ocean temperature – are slow processes that need a long time to equilibrate (the IPCC labels the warming that continues after the world stabilizes its atmospheric concentrations of greenhouse gases “committed warming”). Action to stabilize atmospheric greenhouse gas concentrations needs to start now in order to affect temperatures in the future.

Yet people hate to pay now for promised benefits down the line (think about education!). The term economists use is “discounting the future.” There are debates about the rate at which people discount it, but there is broad consensus that the phenomenon exists. In a sense, this aversion to “pay now to prevent future losses” schemes sits oddly beside one of the main biases in human irrationality: loss aversion. But as long as the losses are predicted to materialize in the future, we would much rather not think about them at all, thank you!
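To see how strongly discounting works against the pay-now argument, here is a minimal sketch of standard exponential discounting; the damage figure, the time horizon, and the discount rates are hypothetical numbers chosen only for illustration.

```python
# Minimal sketch of exponential discounting: the present value of a future cost
# shrinks rapidly as the discount rate or the time horizon grows.


def present_value(future_cost: float, annual_rate: float, years: float) -> float:
    """Value today of a cost incurred `years` from now, at a constant annual discount rate."""
    return future_cost / (1.0 + annual_rate) ** years


if __name__ == "__main__":
    damage = 1_000_000_000.0  # hypothetical climate damage of $1 billion, 50 years from now
    for rate in (0.01, 0.03, 0.07):
        pv = present_value(damage, rate, 50)
        print(f"discount rate {rate:.0%}: present value = ${pv:,.0f}")
```

At a 7% discount rate, a billion-dollar loss 50 years out is “worth” only a few tens of millions of dollars today, which helps explain why paying now to prevent future losses is such a hard sell.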

Tversky and Kahneman (see my previous two blogs) also noticed some of the flaws inherent in asking people to make predictions. Wikipedia summarizes some important aspects of their thinking:

Kahneman and Tversky[1][2] found that human judgment is generally optimistic due to overconfidence and insufficient consideration of distributional information about outcomes. Therefore, people tend to underestimate the costs, completion times, and risks of planned actions, whereas they tend to overestimate the benefits of those same actions. Such error is caused by actors taking an “inside view”, where focus is on the constituents of the specific planned action instead of on the actual outcomes of similar ventures that have already been completed.

Kahneman and Tversky concluded that disregard of distributional information, i.e. risk, is perhaps the major source of error in forecasting. On that basis they recommended that forecasters “should therefore make every effort to frame the forecasting problem so as to facilitate utilizing all the distributional information that is available”.[2]:416 Using distributional information from previous ventures similar to the one being forecast is called taking an “outside view”. Reference class forecasting is a method for taking an outside view on planned actions.

Reference class forecasting for a specific project involves the following three steps:

1. Identify a reference class of past, similar projects.

2. Establish a probability distribution for the selected reference class for the parameter that is being forecast.

3. Compare the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
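As a concrete (and entirely hypothetical) illustration of the three steps above, the following sketch adjusts an “inside view” cost estimate by the distribution of cost overruns observed in a reference class of past projects; the numbers are made up.

```python
# Minimal sketch of reference class forecasting: correct an inside-view estimate
# with the distribution of outcomes from similar past projects (the outside view).
import statistics


def reference_class_forecast(inside_view_estimate: float, past_overruns: list) -> float:
    """Scale an inside-view estimate by the typical overrun seen in the reference class."""
    # Step 1: the reference class is the list of past overrun ratios (actual / forecast).
    # Step 2: summarize its distribution; here we simply take the median.
    typical_overrun = statistics.median(past_overruns)
    # Step 3: position the current project within that distribution.
    return inside_view_estimate * typical_overrun


if __name__ == "__main__":
    past_overruns = [1.1, 1.3, 1.4, 1.6, 2.0]  # hypothetical overruns of five similar projects
    # An inside-view budget of 100 becomes a more realistic 140 once the outside view is applied.
    print(reference_class_forecast(100.0, past_overruns))
```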

Their argument about our innate optimism and natural disregard for risk (often termed the “planning fallacy”) closely resembles the “just world” hypothesis that I discussed in last week’s blog: we think of the world as operating in a rational manner in spite of abundant evidence that humans are not very rational creatures. You can see another window into Kahneman’s thinking via Morgan Housel’s interview with him on the topic:

Morgan Housel: When the evidence is so clear that the recent past is probably not going to predict the future, why is the tendency so strong to keep assuming that it will?

Dr. Kahneman: Well, people are inferring a sort of causal model from what they see, so when they see a certain pattern, they feel there is a force that is causing that pattern to, that pattern of behavior and behavior in the market and to continue. And then if the force is there, you see the extending. This is really a very, very natural way to think. When you see a movement in one direction, you are extrapolating that movement. You are anticipating where things will go and you are anticipating usually in a fairly linear fashion, and this is the way that people think about the future generally.

This sort of thinking about future impact carries over directly into behavioral economics, which studies how time horizons interact with our biases.

The first example here is from a paper by Benartzi and Thaler that introduces the concept of “Myopic Loss Aversion” in order to try to explain the puzzle of why stocks outperform bonds so strongly in long-term investments:

Myopic Loss Aversion and the Equity Premium Puzzle

Shlomo Benartzi, Richard H. Thaler

NBER Working Paper No. 4369
Issued in May 1993
NBER Program(s): AP

The equity premium puzzle, first documented by Mehra and Prescott, refers to the empirical fact that stocks have greatly outperformed bonds over the last century. As Mehra and Prescott point out, it appears difficult to explain the magnitude of the equity premium within the usual economics paradigm because the level of risk aversion necessary to justify such a large premium is implausibly large. We offer a new explanation based on Kahneman and Tversky’s ‘prospect theory’. The explanation has two components. First, investors are assumed to be ‘loss averse’ meaning they are distinctly more sensitive to losses than to gains. Second, investors are assumed to evaluate their portfolios frequently, even if they have long-term investment goals such as saving for retirement or managing a pension plan. We dub this combination ‘myopic loss aversion’. Using simulations we find that the size of the equity premium is consistent with the previously estimated parameters of prospect theory if investors evaluate their portfolios annually. That is, investors appear to choose portfolios as if they were operating with a time horizon of about one year. The same approach is then used to study the size effect. Preliminary results suggest that myopic loss aversion may also have some explanatory power for this anomaly

The key here is that checking the portfolio’s performance frequently effectively converts a long-term investment into a sequence of short-term gambles, so loss aversion is triggered at every evaluation rather than once at the distant horizon.
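The abstract mentions simulations; the sketch below is a much simpler stand-in for them, not a reproduction of Benartzi and Thaler’s calibration. It assumes hypothetical stock-like returns and the loss-aversion weighting from prospect theory, and simply shows that the average “experienced” value per year rises when the portfolio is evaluated less often, because losses show up less frequently.

```python
# Minimal sketch of myopic loss aversion: losses are weighted more heavily than gains,
# so checking a volatile portfolio often makes it feel worse than checking it rarely.
import random

LOSS_AVERSION = 2.25  # losses weighted ~2.25x more than gains (prospect-theory estimate)


def prospect_value(total_return: float) -> float:
    """Gains count at face value; losses are amplified by the loss-aversion coefficient."""
    return total_return if total_return >= 0 else LOSS_AVERSION * total_return


def experienced_value_per_year(annual_returns, evaluation_years):
    """Average prospect value per year when the portfolio is evaluated every `evaluation_years`."""
    chunks = [annual_returns[i:i + evaluation_years]
              for i in range(0, len(annual_returns), evaluation_years)]
    total = sum(prospect_value(sum(chunk)) for chunk in chunks)
    return total / len(annual_returns)


if __name__ == "__main__":
    random.seed(0)
    # Hypothetical returns: positive on average (7%) but volatile (20% standard deviation).
    returns = [random.gauss(0.07, 0.20) for _ in range(1000)]
    for horizon in (1, 5, 20):
        print(f"evaluate every {horizon:2d} year(s): experienced value per year = "
              f"{experienced_value_per_year(returns, horizon):+.3f}")
```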

Cheng and He tried a direct experimental approach to the issue. Here I am including the abstract, with its summary of the project’s conclusions, and the experimental setup that was used to derive these conclusions:

Deciding for Future Selves Reduces Loss Aversion

Qiqi Cheng and Guibing He

Abstract

In this paper, we present an incentivized experiment to investigate the degree of loss aversion when people make decisions for their current selves and future selves under risk. We find that when participants make decisions for their future selves, they are less loss averse compared to when they make decisions for their current selves. This finding is consistent with the interpretation of loss aversion as a bias in decision-making driven by emotions, which are reduced when making decisions for future selves. Our findings endorsed the external validity of previous studies on the impact of emotion on loss aversion in a real world decision-making environment.

Tasks and Procedure

To measure the willingness to choose the risky prospect, we follow Holt and Laury (2002, 2005) decision task by asking participants to make a series of binary choices for 20 pairs of options (Table 1). The first option (Option A, the safe option) in each pair is always RMB 10 (10 Chinese Yuan) with certainty. The second option (Option B, the risky option) holds the potential outcomes constant at RMB 18 or 1 for each pair but changes the probabilities of winning for each decision, which creates a scale of increasing expected values. Because expected values in early decisions favor Option A while the expected values in later decisions favor Option B, an individual should initially choose Option A and then switch to Option B. Therefore, there will be a ‘switch point,’ which reflects a participant’s willingness to choose a risky prospect. The participants are told that each of their 20 decisions in the table has the same chance of being selected and their payment for the experiment will be determined by their decisions.
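To make the setup concrete, here is a small sketch of that kind of choice list and of how the switch point is read off a participant’s choices. It is my own reconstruction under assumptions: the payoffs (10 versus 18 or 1) come from the quoted description, while the exact probability schedule used by Cheng and He is assumed here to rise in equal steps.

```python
# Minimal sketch of a Holt-and-Laury-style choice list and switch-point detection.
# Option A: a certain 10; Option B: 18 or 1, with the winning probability rising row by row.

SAFE_PAYOFF = 10.0
RISKY_HIGH, RISKY_LOW = 18.0, 1.0


def expected_value_risky(p_win: float) -> float:
    """Expected value of the risky option for a given probability of the high outcome."""
    return p_win * RISKY_HIGH + (1.0 - p_win) * RISKY_LOW


def build_choice_list(n_rows: int = 20):
    """Return (p_win, EV of safe option, EV of risky option) for each row of the list."""
    return [(i / n_rows, SAFE_PAYOFF, expected_value_risky(i / n_rows))
            for i in range(1, n_rows + 1)]


def switch_point(choices):
    """Index of the first row where the participant chose the risky option ('B');
    a later switch point means less willingness to take the risky prospect."""
    return next((i for i, choice in enumerate(choices) if choice == "B"), len(choices))


if __name__ == "__main__":
    for p_win, ev_safe, ev_risky in build_choice_list():
        print(f"p(win)={p_win:.2f}  EV(safe)={ev_safe:.2f}  EV(risky)={ev_risky:.2f}")
    # A participant who picks the safe option 12 times and then switches:
    print("switch point:", switch_point(["A"] * 12 + ["B"] * 8))
```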

The conclusions from these discussions are clear – our biases are highly sensitive to the length of time over which we are trying to make predictions, and they therefore directly affect mitigation efforts that have to start in the present. Climate change is an existential issue, so we don’t have the luxury of waiting until the damage is imminent; by then it will be too late to avoid the worst of it.


Collective Irrationality and Individual Biases: Climate Change II

Last week I discussed some issues in the psychology of judgment and decision making; I feel that they need some clarification and expansion.

I looked at how highly educated Democrats and Republicans diverge sharply in their opinions about the extent to which human actions contribute to climate change. I explained the phenomenon through the concept of “following the herd.” That is, since we are unwilling/unable to learn all the details of a complicated issue such as climate change, we choose instead to follow the opinions of the people that we trust – in this case the leadership of the political parties. But we don’t ask ourselves how those people in power form their own opinions.

The second unsettled issue that came out of last week’s blog is how we apply “loss aversion” to climate change. In other words, there is a strong probability that if climate change is left unchecked we will lose big money in attempts to mitigate the damage it causes – yet many of us choose to ignore that prospect.

Some of our understanding of this dichotomy traces back to the 19th Century American philosopher and psychologist William James, who pioneered biological psychology – the mind-body phenomenon:

One of the more enduring ideas in psychology, dating back to the time of William James a little more than a century ago, is the notion that human behavior is not the product of a single process, but rather reflects the interaction of different specialized subsystems. These systems, the idea goes, usually interact seamlessly to determine behavior, but at times they may compete. The end result is that the brain sometimes argues with itself, as these distinct systems come to different conclusions about what we should do.

The major distinction responsible for these internal disagreements is the one between automatic and controlled processes. System 1 is generally automatic, affective and heuristic-based, which means that it relies on mental “shortcuts.” It quickly proposes intuitive answers to problems as they arise. System 2, which corresponds closely with controlled processes, is slow, effortful, conscious, rule-based and also can be employed to monitor the quality of the answer provided by System 1. If it’s convinced that our intuition is wrong, then it’s capable of correcting or overriding the automatic judgments

In the 1960s two Israeli psychologists, Daniel Kahneman and Amos Tversky, started an intellectual journey to establish the boundaries of human rationality. They had to account for the fact that humans evolved from species known for their instinctive survival reflexes rather than for their rationality. Kahneman and Tversky expanded upon the distinction made above, reasoning that human thinking rests on two kinds of brain activity, located in different parts of the brain: the automatic and the rational, which Kahneman labeled Intuition (System 1) and Reasoning (System 2):

Daniel Kahneman provided further interpretation by differentiating the two styles of processing more, calling them intuition and reasoning in 2003. Intuition (or system 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning was based on formed habits and very difficult to change or manipulate. Reasoning (or system 2) was slower and much more volatile, being subject to conscious judgments and attitudes.[8]

Kahneman and Tversky’s efforts quickly spread to many other areas, including economics and health care. Daniel Kahneman was recognized with the 2002 Nobel Prize in Economics (Amos Tversky passed away in 1996), and this year’s Economics Prize went to Richard Thaler, one of the earliest practitioners to apply this work to the field of economics. Both Daniel Kahneman and Richard Thaler have written books about their efforts, targeted at the general public. As part of that general public, I read the books cover to cover. Kahneman’s book is called Thinking, Fast and Slow (Farrar, Straus and Giroux – 2011) and Thaler’s book with Cass Sunstein is titled Nudge (Yale University Press – 2008). Both books are best-sellers. There’s also a new book by Michael Lewis about Tversky and Kahneman’s careers together: The Undoing Project (W.W. Norton – 2017). I searched all three books for direct references to climate change. Thaler and Sunstein’s book has a small chapter on environmental issues that I will summarize toward the end of the blog. Lewis’ book doesn’t have a searchable index, and it has been a few months since I read it, so it is difficult to recall specifics. Kahneman’s book has a searchable index but I didn’t find any directly relevant references there either.

I was fortunate to come across the transcript of a talk that Daniel Kahneman gave on April 18, 2017 to the Council on Foreign Relations. The meeting was presided over by Alan Murray, Chief Content Officer at Time magazine. At the conclusion of his talk Kahneman agreed to answer questions from the audience. Two of the questions directly referred to climate change:

Q: Hi. I’m Jack Rosenthal, retired from The New York Times. I wonder if you’d be willing to talk a bit about the undoing idea and whether it’s relevant in the extreme to things like climate denial.

KAHNEMAN: Well, I mean, the undoing idea, the Undoing Project, was something that I—well, it’s the name of a book that Michael Lewis wrote about Amos Tversky and me. But it originally was a project that I engaged in primarily. I’m trying to think about how do people construct alternatives to reality.

And my particular, my interest in this was prompted by tragedy in my family. A nephew in the Israeli air force was killed. And I was very struck by the fact that people kept saying “if only.” And that—and that “if only” has rules to it. We don’t just complete “if only” in any—every which way. There are certain things that you use. So I was interested in counterfactuals. And this is the Undoing Project. Climate denial, I think, is not necessarily related to the Undoing Project. It’s very powerful, clearly. You know, the anchors of the psychology of climate denial is elementary. It’s very basic. And it’s going to be extremely difficult to overcome.

MURRAY: When you say it’s elementary, can you elaborate a little bit?

KAHNEMAN: Well, the whether people believe or do not believe is one issue. And people believe in climate and climate change or don’t believe in climate change not because of the scientific evidence. And we really ought to get rid of the idea that scientific evidence has much to do with people’s beliefs.

MURRAY: Is that a general comment, or in the case of climate?

KAHNEMAN: Yeah, it’s a general comment.

MURRAY: (Laughs.)

KAHNEMAN: I think it’s a general comment. I mean, there is—the correlation between attitude to gay marriage and belief in climate change is just too high to be explained by, you know.

MURRAY: Science.

KAHNEMAN: —by science. So clearly—and clearly what is people’s beliefs about climate change and about other things are primarily determined by socialization. They’re determined—we believe in things that people that we trust and love believe in. And that, by the way, is certainly true of my belief in climate change. I believe in climate change because I believe that, you know, if the National Academy says there’s climate change, but…

MURRAY: They’re your people.

KAHNEMAN: They’re my people.

MURRAY: (Laughs.)

KAHNEMAN: But other people—you know, they’re not everybody’s people. And so this, I think—that’s a very basic part of it. Where do beliefs come from? And the other part of it is that climate change is really the kind of threat for which—that we as humans have not evolved to cope with. It’s too distant. It’s too remote. It just is not the kind of urgent mobilizing thing. If there were a meteor, you know, coming to earth, even in 50 years, it would be completely differently. And that would be—people, you know, could imagine that. It would be concrete. It would be specific. You could mobilize humanity against the meteor. Climate change is different. And it’s much, much harder, I think.

MURRAY: Yes, sir, right here.

Q: Nise Aghwa (ph) of Pace University. Even if you believe in evidence-based science, frequently, whether it’s in medicine, finance, or economics, the power of the tests are so weak that you have to rely on System 1, on your intuition, to make a decision. How do you bridge that gap?

KAHNEMAN: Well, you know, if a decision must be made, you’re going to make it on the best way—you know, in the best way possible. And under some time pressure, there’s no time for deliberation. You just must do, you know, what you can do. That happens a lot.

If there is time to reflect, then in many situations, even when the evidence is incomplete, reflection might pay off. But this is very specific. As I was saying earlier, there are domains where we can trust our intuitions, and there are domains where we really shouldn’t. And one of the problems is that we don’t know subjectively which is which. I mean, this is where some science and some knowledge has to come in from the outside.

MURRAY: But it did sound like you were saying earlier that the intuition works better in areas where you have a great deal of expertise—

KAHNEMAN: Yes.

MURRAY: —and expertise.

KAHNEMAN: But we have powerful intuitions in other areas as well. And that’s the problem. The real problem—and we mentioned overconfidence earlier—is that our subjective confidence is not a very good indication of accuracy. I mean, that’s just empirically. When you look at the correlation between subjective confidence and accuracy, it is not sufficiently hard. And that creates a problem.

Kahneman clearly says that he doesn’t think people make up their minds about life based on science and facts – and that is especially true when it comes to climate change. He acknowledges how intuition and reason each play parts in the way we make decisions – sometimes to our own detriment.

The environmental chapter in Thaler and Sunstein’s book Nudge, “Saving the Planet,” looks at climate change as well as at how we can shape our own minds and others’. The authors explore the possibility that a few well-thought-out nudges and better choice architecture might reduce greenhouse gas emissions. There is a separate chapter devoted to the concept of choice architecture as well. Here is the key paragraph that explains the concept:

If you indirectly influence the choices other people make, you are a choice architect. And since the choices you are influencing are going to be made by humans, you will want your architecture to reflect a good understanding of how humans behave. In particular, you will want to ensure that the Automatic System doesn’t get all confused.

The book emphasizes well-designed free choices, a perspective the authors call libertarian paternalism, as contrasted with regulation (command and control), the prevailing approach to environmental policy. Thaler and Sunstein mention Garrett Hardin’s article “The Tragedy of the Commons” (see explanations and examples in the July 2, 2012 blog), which points out that people don’t get feedback on the environmental harm that they inflict. They argue that governments need to align incentives, and they discuss two kinds: taxes (negative – we want to avoid them) and cap-and-trade (see the November 10, 2015 blog) (positive – we want to maximize profits). The book offers some pointers on how to account for the fact that the players are humans: redistribute the revenues from either cap-and-trade or carbon taxes, and give consumers feedback about the damage that polluters impose. One example the authors use to illustrate the effectiveness of such nudges is the mandatory messages about the risks of cigarette smoking. They also recommend finding ways to incorporate personal energy audits into the choices that people make.
