Fake News

I am an old guy; my bucket list is modest but I am trying to check off the remaining items while I can still fully enjoy them. Only one item remains out of my reach: I want to travel to Mars. Not only would I like to see planet Earth from space with my own eyes but, as a cosmology teacher, it would give me joy to experience a fraction of the distances that the field studies. I am realistic enough to know that it won’t happen. That is, after all, one of the attractions of a bucket list.

Well, according to Elon Musk I might yet get my chance. Granted, a ticket will cost me $200,000.

Meanwhile, some people might get an even better deal. Stephen Hawking just announced that he’s willing to pay for people’s tickets, although in this case the trip is to Venus. There is only one caveat: the voyagers must “qualify” as climate change deniers. Unfortunately, that takes me out of the running. There is a reason for Hawking’s publicity stunt. There are strong parallels between the conditions one would find on Venus and those we can expect to find on Earth should climate change reach its logical conclusion under business-as-usual projections (June 25, 2012). If only I could agree with denier logic that carbon dioxide is a benign gas and all of its links to climate change are “fake news” that scientists fabricate to get grant money from the government, I’d be well on my way toward fulfilling my space travel goals. After all, there probably wouldn’t be that much difference between the experiences of visiting Mars or Venus.

Fake news is a popular topic these days. The label is an excellent cover for ignorance: it lets anyone make or dismiss an argument whether or not they have supporting facts or reproducible observations to back up their claim.

Deniers use the label of fake news to delegitimize climate change in the eyes of the public, especially when it comes to mitigation efforts that require voter support:

“Global Warming: Fake News from the Start” by Tim Ball and Tom Harris

President Donald Trump announced the U.S. withdrawal from the Paris Agreement on climate change because it is a bad deal for America. He could have made the decision simply because the science is false, but most of the public have been brainwashed into believing it is correct and wouldn’t understand the reason.

Canadian Prime Minister Justin Trudeau, and indeed the leaders of many western democracies, though thankfully not the U.S., support the Agreement and are completely unaware of the gross deficiencies in the science. If they did, they wouldn’t be forcing a carbon dioxide (CO2) tax, on their citizens.

Trudeau and other leaders show how little they know, or how little they assume the public know, by calling it a ‘carbon tax.’ But CO2 is a gas, while carbon is a solid. By calling the gas carbon, Trudeau and others encourage people to think of it as something ‘dirty’, like graphite or soot, which really are carbon. Calling CO2 by its proper name would help the public remember that it is actually an invisible, odorless gas essential to plant photosynthesis.

…CO2 is not a pollutant…the entire claim of anthropogenic global warming (AGW) was built on falsehoods and spread with fake news.

…In 1988 Wirth was in a position to jump start the climate alarm. He worked with colleagues on the Senate Energy and Natural Resources Committee to organize a June 23, 1988 hearing where Dr. James Hansen, then the head of the Goddard Institute for Space Studies (GISS), was to testify…Specifically, Hansen told the committee,

“Global warming has reached a level such that we can ascribe with a high degree of confidence a cause and effect relationship between the greenhouse effect and observed warming…It is already happening now…The greenhouse effect has been detected and it is changing our climate now…We already reached the point where the greenhouse effect is important.”

…More than any other event, that single hearing before the Energy and Natural Resources Committee publicly initiated the climate scare, the biggest deception in history. It created an unholy alliance between a bureaucrat and a politician that was bolstered by the U.N. and the popular press leading to the hoax being accepted in governments, industry boardrooms, schools, and churches across the world.

Trump must now end America’s participation in the fake science and the fake news of man-made global warming. To do this, he must withdraw the U.S. from further involvement with all U.N. global warming programs, especially the IPCC as well as the agency that now directs it—the United Nations Framework Convention on Climate Change. Only then will the U.S. have a chance to fully develop its hydrocarbon resources to achieve the president’s goal of global energy dominance.

The Ball and Harris piece claims that people who believe in climate change have been duped by fake news. Their key point is that the notion of carbon dioxide being the chemical largely responsible for climate change is utterly false. Their reasoning is twofold. The first objection is semantic – they take exception to calling carbon dioxide “carbon.” This is largely an issue of units. Most scientific publications use this shorthand mainly because carbon dioxide is not the only anthropogenic greenhouse gas; other gases, such as methane, are often expressed in terms of carbon dioxide equivalents. Any transition between units of “carbon” and units of “carbon dioxide” involves multiplication by 44/12, or about 3.7, which is the ratio of the molecular weight of carbon dioxide to the atomic weight of carbon. I will remind those of you to whom the last sentence is a foreign language that some prerequisites – especially in vocabulary and basic math – are necessary when we describe the details of the physical environment. Ball and Harris’ second objection is that carbon dioxide is not a “pollutant” but an “invisible, odorless gas essential to plant photosynthesis.” While the latter statement is true, it does not represent the whole truth. My Edward Teller quote from last week directly addressed this issue. Carbon dioxide is a (measurable) pollutant because of its optical absorption properties: it is responsible for a great deal of the observed climate change because it preferentially absorbs the longer, infrared wavelengths of the electromagnetic spectrum.
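For readers who want to see the arithmetic, here is a minimal sketch of that unit conversion; the 10 GtC figure in the example is just an illustrative round number, not a measured value.

```python
# A minimal sketch of the "carbon" vs. "carbon dioxide" unit conversion.
# The factor 44/12 (about 3.7) is the ratio of the molecular weight of CO2
# to the atomic weight of carbon.

C_TO_CO2 = 44.0 / 12.0  # ~3.67

def carbon_to_co2(mass_in_carbon_units):
    """Convert a mass expressed in units of carbon to units of CO2."""
    return mass_in_carbon_units * C_TO_CO2

def co2_to_carbon(mass_in_co2_units):
    """Convert a mass expressed in units of CO2 to units of carbon."""
    return mass_in_co2_units / C_TO_CO2

# Illustrative round number: 10 GtC of emissions corresponds to ~36.7 GtCO2.
print(carbon_to_co2(10.0))
```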

Figure 1 – Atmospheric carbon dioxide concentration over the last 400,000 years

Figure 1 shows the atmospheric concentration of carbon dioxide over the last 400,000 years. Again, the famous hockey-stick shape shows up in the near-vertical rise after the industrial revolution, at the far right of the graph. As last week’s blog showed, the carbon dioxide added to the atmosphere between the industrial revolution and the 1950s contained no carbon-14, which points to the ancient, fossil-fuel origin of the added gas.

Furthermore, one of my earliest blogs (June 25, 2012) illustrates the carbon cycle (again – much of it in the form of carbon dioxide): where it’s going and where it’s coming from. Photosynthesis and respiration are part of it. Without the anthropogenic contributions (burning fossil fuels, land use change, cement production, etc.), emissions and sequestration balance out, which explains the long steady state (approximately constant concentration) shown in Figure 1. Once we factor in the anthropogenic contributions and how they change the atmosphere’s absorption properties, we start to see a shifting energy balance with the sun and hence a changing climate.

Ball and Harris’ piece at least presented an internal logic that can be refuted. Many people are satisfied with simply branding something fake news. Unfortunately, social media makes spreading this sort of false information a relatively painless process. Even Google is facing flak for its role in spreading fake news:

“How Climate Change Deniers Rise to the Top in Google Searches” by Hiroko Tabuchi

Groups that reject established climate science can use the search engine’s advertising business to their advantage, gaming the system to find a mass platform for false or misleading claims.

Type the words “climate change” into Google and you could get an unexpected result: advertisements that call global warming a hoax. “Scientists blast climate alarm,” said one that appeared at the top of the search results page during a recent search, pointing to a website, DefyCCC, that asserted: “Nothing has been studied better and found more harmless than anthropogenic CO2 release.”

Not everyone who uses Google will see climate denial ads in their search results. Google’s algorithms use search history and other data to tailor ads to the individual, something that is helping to create a highly partisan internet.

It seems that even parts of the internet that we consider to be neutral are starting to reflect the increasingly combative political climate and the related problem of fake news.


“Natural” or “Anthropogenic”? – Climate Change

Last week’s blog looked at various methods of distinguishing “natural” vs. “artificial” vanilla. I used this as a jumping off point to facilitate answering the much more important question of how we distinguish anthropogenic climate change from “natural” climate change (i.e. that which took place way before humans had the capacity to inflict any changes on the physical environment of the planet).

Deniers’ most common argument is that, while the climate is indeed changing, it has been doing so since long before humans were around. More specifically, they claim that carbon dioxide couldn’t be a cause of such changes in the here and now because it is a “natural,” “harmless” compound.

Tom Curtis summarized the issue and enumerated the problems with such reasoning in his July 25, 2012 post on Skeptical Science. Skeptical Science is a blog for which I have the utmost respect (July 13, 2013) and which published one of my own guest posts. Here is a condensed outline of the data Curtis provided:

There are ten main lines of evidence to be considered:

1. The start of the growth in CO2 concentration coincides with the start of the industrial revolution, hence anthropogenic;

2. Increase in CO2 concentration over the long term almost exactly correlates with cumulative anthropogenic emissions, hence anthropogenic;

3. Annual CO2 concentration growth is less than annual CO2 emissions, hence anthropogenic (a rough check appears after this list);

4. Declining C14 ratio indicates the source is very old, hence fossil fuel or volcanic (ie, not oceanic outgassing or a recent biological source);

5. Declining C13 ratio indicates a biological source, hence not volcanic;

6. Declining O2 concentration indicates combustion, hence not volcanic;

7. Partial pressure of CO2 in the ocean is increasing, hence not oceanic outgassing;

8. Measured CO2 emissions from all (surface and beneath the sea) volcanoes are one-hundredth of anthropogenic CO2 emissions; hence not volcanic;

9. Known changes in biomass are too small by a factor of 10, hence not deforestation; and

10. Known changes of CO2 concentration with temperature are too small by a factor of 10, hence not ocean outgassing.
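Item 3 can be checked with a rough back-of-the-envelope calculation. The sketch below assumes a conversion factor of roughly 2.1 GtC per ppm of atmospheric CO2 and uses illustrative round numbers for emissions and concentration growth rather than measured data:

```python
# A rough check of item 3: annual emissions exceed the annual growth of the
# atmospheric concentration, so the atmosphere cannot be the source of the
# added carbon. The conversion factor and the inputs are illustrative.

GTC_PER_PPM = 2.1  # approximate gigatonnes of carbon per ppm of atmospheric CO2

annual_emissions_gtc = 10.0  # illustrative anthropogenic emissions (GtC per year)
annual_growth_ppm = 2.4      # illustrative observed concentration growth (ppm per year)

growth_gtc = annual_growth_ppm * GTC_PER_PPM
airborne_fraction = growth_gtc / annual_emissions_gtc

print(f"Atmospheric increase: {growth_gtc:.1f} GtC per year")
print(f"Airborne fraction:    {airborne_fraction:.0%}")  # roughly half; the rest is absorbed
```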

Figure 1 – Anthropogenic and total atmospheric carbon dioxide concentrations and the 14C isotopic decline (for more details see my Oct 3, 2017 blog on attributions)

Figure 1 demonstrates the changes in the first four items on this list, which I have also described in earlier blogs. The decline of 14C is the common denominator with the characterization of “natural” vanilla from last week’s blog. This important metric, however, becomes far less useful after the end of WWII because of the atmospheric contamination from the nuclear testing that took place at that time (As stated in the caption for Figure 1: “After 1955 the decreasing 14C trend ends due to the overwhelming effect of bomb 14C input into the atmosphere”).

Yet even the limited decline of 14C from 1900 to 1950 is telling: that period constitutes the start of the significant anthropogenic contributions to the atmospheric concentrations of carbon dioxide.

These important results present a strong argument that most of the increase in the atmospheric concentration of carbon dioxide comes from humans burning fossil fuels. Deniers say that this is not sufficient grounds for associating the increased carbon dioxide concentration with climate change.

To make the argument that carbon dioxide is the main greenhouse gas responsible for climate change, I will quote one of the most famous scientists of the 20th century. Far from being thought of as a climate change–centered scientist, Edward Teller is instead known for creating the hydrogen bomb after the Second World War.

But Teller associated carbon dioxide with the global climate when he made a speech at the celebration of the centennial of the American oil industry in 1959:

Ladies and gentlemen, I am to talk to you about energy in the future. I will start by telling you why I believe that the energy resources of the past must be supplemented. First of all, these energy resources will run short as we use more and more of the fossil fuels. But I would […] like to mention another reason why we probably have to look for additional fuel supplies. And this, strangely, is the question of contaminating the atmosphere. [….] Whenever you burn conventional fuel, you create carbon dioxide. [….] The carbon dioxide is invisible, it is transparent, you can’t smell it, it is not dangerous to health, so why should one worry about it?

Carbon dioxide has a strange property. It transmits visible light but it absorbs the infrared radiation which is emitted from the earth. Its presence in the atmosphere causes a greenhouse effect [….] It has been calculated that a temperature rise corresponding to a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York. All the coastal cities would be covered, and since a considerable percentage of the human race lives in coastal regions, I think that this chemical contamination is more serious than most people tend to believe.

This connection is a simple physical property of carbon dioxide that falls under the scientific discipline called “spectroscopy” (December 10, 2012). The strength of the connection is captured by one of the most important parameters that characterize climate change, shown in Figure 2; we call it “climate sensitivity.”

Figure 2 – Projected equilibrium global mean temperature increase as a function of projected carbon dioxide increase (from the 4th IPCC report – AR4; see the December 10, 2012 blog)

Radiative forcing due to doubled CO2

CO2 climate sensitivity has a component directly due to radiative forcing by CO2, and a further contribution arising from climate feedbacks, both positive and negative. “Without any feedbacks, a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1 °C global warming, which is easy to calculate and is undisputed. The remaining uncertainty is due entirely to feedbacks in the system, namely, the water vapor feedback, the ice-albedo feedback, the cloud feedback, and the lapse rate feedback”;[14] addition of these feedbacks leads to a value of the sensitivity to CO2 doubling of approximately 3 °C ± 1.5 °C, which corresponds to a value of λ of 0.8 K/(W/m2).
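The quoted numbers hang together through one simple relation, ΔT = λ·ΔF. Here is a minimal sketch; the logarithmic forcing formula ΔF = 5.35 ln(C/C0) W/m² is a standard simplified approximation that I am adding myself – it does not appear in the quote:

```python
import math

# A minimal sketch of the numbers in the quote above. The logarithmic forcing
# approximation dF = 5.35 * ln(C/C0) W/m^2 is a standard simplified formula
# (it is not part of the quote); it reproduces the ~3.7 W/m^2 forcing for a
# doubling of CO2.

def radiative_forcing(concentration_ratio):
    """Approximate CO2 radiative forcing in W/m^2 for a ratio C/C0."""
    return 5.35 * math.log(concentration_ratio)

def equilibrium_warming(concentration_ratio, sensitivity=0.8):
    """Equilibrium warming in K, with lambda (sensitivity) in K/(W/m^2)."""
    return sensitivity * radiative_forcing(concentration_ratio)

forcing_2x = radiative_forcing(2.0)
print(f"Forcing for doubled CO2:  {forcing_2x:.2f} W/m^2")            # ~3.7
print(f"No-feedback warming:      {0.27 * forcing_2x:.1f} K")         # lambda ~0.27 K/(W/m^2)
print(f"With-feedback warming:    {equilibrium_warming(2.0):.1f} K")  # lambda ~0.8 K/(W/m^2)
```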

In the next blog I will look at how climate change deniers self-justify labelling this as “fake news,” thus relieving themselves of the burden of engaging with (or even considering) the associated science.


“Natural” or “Anthropogenic”? – Vanilla Extract and Climate Change

Figure 1 – My Christmas present

Happy New Year!!! I got a yummy Christmas present from a family member that doubled as a hint to try to improve my cooking: homemade vanilla extract. They got the vanilla beans from Honduras and immersed them in vodka!

As a good giftee, I will use the extract in some of my food preparation (I like the flavor). The gift also reminded me of a time a few years back when a gentleman from our neighborhood came to my school to ask for advice on how he could distinguish a “natural” vanilla flavor from an artificial one. Neither I nor any of my colleagues could come up with a satisfactory answer, but we were intrigued by the differentiation he sought.

My own interest in this distinction is a bit broader: I want to know how to convince all of you that the global climate change that we are experiencing is “artificial,” as in man-made or anthropogenic, and not “natural.”

I get constant complaints from many good friends for naming carbon dioxide and methane – the latter the main component of natural gas – as pollutants that cause climate change. This is especially true given how often I blame climate change for the ultimate slaughter of our children and grandchildren should we continue our business-as-usual living practices. After all, humans and most other living organisms exhale carbon dioxide, and we and many of the animals around us naturally emit flatulence with methane as an important component.

Well, I will try to start our new year with the much more pleasant vanilla flavor and follow that up next time by utilizing a common technique for distinguishing natural from artificial vanilla to parallel the differences between natural and artificial origins of climate change. We will find that in both cases a learning curve is required to follow some of the language used.

The main interest of the gentleman who came to us seeking help in distinguishing the source of the vanilla was money. According to him, many people who sell the extract and claim it to be natural are cheating, actually selling the much cheaper artificial variety at a bumped-up price. Our own stake in recognizing that we are the instigators of most of the climate change we are experiencing is understanding that it is also up to us to stop that momentum.

The identification of the origin of vanilla flavor is, in essence, a legal matter, as Forbes’ Eustacia Huen writes in “What Manufacturers Really Mean By Natural And Artificial Flavors”:

According to the U.S. Food and Drug Administration’s (FDA) Code of Federal Regulations (Title 21), the term “natural flavor” essentially has an edible source (i.e. animals and vegetables). Artificial flavors, on the other hand, have an inedible source, which means you can be eating anything from petroleum to paper pulp processed to create the chemicals that flavor your food. For example, Japanese researcher Mayu Yamamoto discovered a way to extract vanillin, the compound responsible for the smell and flavor of vanilla, from cow poop in 2006, as reported by Business Insider.

Gary Reineccius’ piece in Scientific American, “What is the difference between artificial and natural flavors?,” defines it somewhat differently:

Natural and artificial flavors are defined for the consumer in the Code of Federal Regulations. A key line from this definition is the following: ” a natural flavor is the essential oil, oleoresin, essence or extractive, protein hydrolysate, distillate, or any product of roasting, heating or enzymolysis, which contains the flavoring constituents derived from a spice, fruit or fruit juice, vegetable or vegetable juice, edible yeast, herb, bark, bud, root, leaf or similar plant material, meat, seafood, poultry, eggs, dairy products, or fermentation products thereof, whose significant function in food is flavoring rather than nutritional.” Artificial flavors are those that are made from components that do not meet this definition.

The question at hand, however, appears to be less a matter of legal definition than the “real” or practical difference between these two types of flavorings. There is little substantive difference in the chemical compositions of natural and artificial flavorings. They are both made in a laboratory by a trained professional, a “flavorist,” who blends appropriate chemicals together in the right proportions. The flavorist uses “natural” chemicals to make natural flavorings and “synthetic” chemicals to make artificial flavorings. The flavorist creating an artificial flavoring must use the same chemicals in his formulation as would be used to make a natural flavoring, however. Otherwise, the flavoring will not have the desired flavor. The distinction in flavorings–natural versus artificial–comes from the source of these identical chemicals and may be likened to saying that an apple sold in a gas station is artificial and one sold from a fruit stand is natural.

So is there truly a difference between natural and artificial flavorings? Yes. Artificial flavorings are simpler in composition and potentially safer because only safety-tested components are utilized. Another difference between natural and artificial flavorings is cost. The search for “natural” sources of chemicals often requires that a manufacturer go to great lengths to obtain a given chemical. Natural coconut flavorings, for example, depend on a chemical called massoya lactone. Massoya lactone comes from the bark of the Massoya tree, which grows in Malaysia. Collecting this natural chemical kills the tree because harvesters must remove the bark and extract it to obtain the lactone. Furthermore, the process is costly. This pure natural chemical is identical to the version made in an organic chemists laboratory, yet it is much more expensive than the synthetic alternative. Consumers pay a lot for natural flavorings. But these are in fact no better in quality, nor are they safer, than their cost-effective artificial counterparts.

As we can read in the last two paragraphs of the Scientific American piece, in terms of functionality (i.e. flavor) there is not much difference between the “artificial” and the “natural”; if anything, the synthetic variety is considerably purer.

Chemical & Engineering News (CEN) tackles the full complexity of the issue in Melody M. Bomgardner’s, “The problem with vanilla: After vowing to go natural, food brands face a shortage of the favored Flavor.” The essence of this article is easily summed up as follows:

In brief:

Vanilla is perhaps the world’s most popular flavor, but less than 1% of it comes from a fully natural source, the vanilla orchid. In 2015, a host of big food brands, led by Nestlé, vowed to use only natural flavors in products marketed in the U.S.—just as a shortage of natural vanilla was emerging. In the following pages, C&EN explains how flavor firms are working to supply.

However, we need to delve deeper into the article:

In Réunion, output of vanilla soared thanks to the Albius method, and orchid cultivation expanded to nearby Madagascar. Today, about 80% of the world’s natural vanilla comes from smallholder farms in Madagascar. There, locals continue to pollinate orchids by hand and cure the beans in the traditional fashion.

It didn’t take long for vanilla demand to exceed supply from the farms of Madagascar. In the 1800s and 1900s, chemists took over from botanists to expand supply of the flavor. Vanillin, the main flavor component of cured vanilla beans, was synthesized variously from pine bark, clove oil, rice bran, and lignin.

Rhône-Poulenc, now Solvay, commercialized a pure petrochemical route in the 1970s. In recent years, of the roughly 18,000 metric tons of vanilla flavor produced annually, about 85% is vanillin synthesized from the petrochemical precursor guaiacol. Most of the rest is from lignin.

But the traditional vanilla bean is starting to enjoy a renaissance, thanks to consumer demand for all-natural foods and beverages. Last year, a string of giant food companies, including General Mills, Hershey’s, Kellogg’s, and Nestlé, vowed to eliminate artificial flavors and other additives from many foods sold in the U.S.

Figure 2 – The various ways to Vanillin

There is a problem, however: World production of natural vanilla is tiny and has been falling in recent years. Less than 1% of vanilla flavor comes from actual vanilla orchids. With demand on the upswing, trade in the coveted flavor is out of balance.

Flavor companies are working feverishly to find additional sources of natural vanillin and launch initiatives to boost the quality and quantity of bean-derived vanilla. Suppliers such as Symrise, International Flavors & Fragrances (IFF), Solvay, and Borregaard are using their expertise along the full spectrum of natural to synthetic to help food makers arrive at the best vanilla flavor for each product.

Food makers, meanwhile, are confronting skyrocketing costs for natural vanilla, reformulation challenges, complicated labeling laws, and difficult questions about what is “natural.”

Although consumer disdain for artificial ingredients has been building for years, credit – or blame – for last year’s wave of “all natural” announcements goes to Nestlé, which in February 2015 was the first major brand to announce plans to eliminate artificial additives from chocolate candy sold in the U.S. The announcement upended the mass-market chocolate industry practice of adding synthetic vanillin to counter the bitterness of cocoa.

For a big food firm, however, switching to natural vanilla is akin to squeezing an elephant into a Volkswagen. While the ink was drying on those all-natural announcements last year, output of Madagascar vanilla beans had plummeted to 1,100 metric tons, about half the normal harvest. That, along with rising demand, caused prices to more than double to roughly $225 per kg by the middle of last year, according to Mintec, a raw material price-tracking firm.

Cured vanilla beans contain only 2% of extractable vanilla flavor, meaning prices for pure vanilla reached an eye-popping $11,000 per kg. The industry is closely watching this year’s harvest, hoping to see vanilla costs eventually return to pre-2012 levels of about $25 per kg for beans or $1,250 for vanilla.
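The quoted prices are internally consistent. A quick check, assuming nothing beyond the 2% extractable-flavor figure given in the article:

```python
# Quick consistency check of the quoted prices: if cured beans contain ~2%
# extractable vanilla flavor, the implied price of pure flavor is the bean
# price divided by 0.02.

extractable_fraction = 0.02

print(225.0 / extractable_fraction)  # ~11,250 USD/kg, i.e. the "eye-popping" figure
print(25.0 / extractable_fraction)   # ~1,250 USD/kg at pre-2012 bean prices
```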

The clear distinction we can make is between the petroleum-based guaiacol route and all the other routes, which originate from living plants.

The distinction between “natural” and “synthetic” vanilla follows the same rationale that I covered in the October 3, 2017 blog, where I described the attribution of climate change to human activity:

14C is a radioisotope of carbon whose atoms contain 6 protons and 8 neutrons, as compared to the more abundant isotope of carbon (12C), which contains 6 protons and 6 neutrons. The radioisotope is unstable and slowly converts to nitrogen (14N) by converting one neutron into one proton. The conversion rate is measured through a parameter called the half-life: if we start with a certain amount of the material, half of it will convert into 14N in that period of time. The half-life of 14C is 5,730 years. The natural abundance of this isotope in the atmosphere is about one atom in a trillion. Plants that grow by photosynthesizing atmospheric carbon dioxide end up with the same relative abundance of the carbon isotope.

All the routes to “natural” vanillin in Figure 2, except for the petroleum route, originate from photosynthetic plants and take place within the plants’ lifetimes. The plants photosynthesize atmospheric carbon dioxide and have lifetimes considerably shorter than 5,730 years (the half-life of the carbon-14 they take in from the atmosphere), so we expect the vanillin extracted through any of these routes to have roughly the same C-14 concentration as the atmosphere.

The petroleum route is based on petroleum that was formed by the decay of plants and their digestion by anaerobic (oxygen-free) bacteria millions of years ago – far longer than the half-life of C-14. So we expect the vanillin that we get from this route to contain essentially no C-14. Measuring C-14 in an extract is a quick and easy job for laboratories equipped to do so.
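Here is a minimal sketch of the decay arithmetic behind this test; the ages in the example are illustrative:

```python
# A minimal sketch of the 14C test: the fraction of an initial 14C inventory
# remaining after a time t follows N(t)/N0 = (1/2) ** (t / half_life).
# The ages below are illustrative.

HALF_LIFE_C14 = 5730.0  # years

def c14_fraction_remaining(age_years):
    return 0.5 ** (age_years / HALF_LIFE_C14)

print(f"{c14_fraction_remaining(0):.3f}")      # 1.000 -> fresh plant material
print(f"{c14_fraction_remaining(50):.3f}")     # 0.994 -> a plant's lifetime barely matters
print(f"{c14_fraction_remaining(100e6):.3f}")  # 0.000 -> petroleum-derived vanillin has no 14C
```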

A different method of differentiation is based on minor components that are present in “natural” vanilla but absent from synthetic vanilla. One of them is described below:

The production of vanilla beans is quite expensive, since it is a very labor intensive process and harvesting takes place 2 to 3 years after planting. This drives the price of natural vanilla extract to about three to five times higher than artificial vanilla preparations. Due to quality, price concerns and economically motivated frauds, it is important to differentiate between natural and artificial forms of vanilla extracts. Apart from vanillin, natural vanilla extracts have 4-hydroxybenzaldehyde, which is absent in artificial vanilla flavorings. This compound can be used as a marker ion to rapidly differentiate between natural and artificial vanilla preparations.[2]

This method is more general and can use a wider variety of analytical instruments than the radioactivity measurement can, provided that the expected marker compounds are present.

Next week I will return to the October 3, 2017 blog about attribution to solidify the analogy between vanilla and the anthropogenic origins of climate change.


Invasive Species, Collective Suicide, and Self-Inflicted Genocide

Recently, the NYT published some answers to questions posed by readers: “Are We an Invasive Species? Plus, Other Burning Climate Questions.” One of these questions was whether or not humans count as an invasive species:

Are we an invasive species? 

By Livia Albeck-Ripka

In a high-rise in Malaysia’s capital in 1999, a group of scientists — convened by the International Union for Conservation of Nature to designate “100 of the World’s Worst Invasive Alien Species” — asked themselves this very question.

“Every single person in the room agreed, humans are the worst invasive species,” said Daniel Simberloff, a professor of environmental science at the University of Tennessee, who was at the meeting. “But the I.U.C.N. disagreed.” Homo sapiens, which has colonized and deforested much of the Earth and pumped enough carbon dioxide into the atmosphere to change its climate, was excluded from the list.

While the meaning of “invasive” has been the subject of some debate, the generally accepted definition is a plant, animal or other organism that, aided by humans, has recently moved to a place where it is nonnative, to detrimental effect. Humans don’t fit that definition, said Piero Genovesi, chairman of the I.U.C.N.’s invasive species group, because they are the ones doing the moving. A more useful way to think about ourselves, Dr. Genovesi said, is as the drivers of every problem conservation tries to remedy.

The piece reminded me of something I read in 2011 by a science writer for the Smithsonian, which addressed the same question in a somewhat clearer way:

Are Humans an Invasive Species?

Sarah Zielinski

Let’s start with the definition of an invasive species. It turns out, it’s not so simple. The legal definition in the United States is “an alien species whose introduction does or is likely to cause economic or environmental harm or harm to human health.” The International Union for Conservation of Nature (IUCN), which developed the list of the 100 world’s worst from which our invasive mammals piece originated, defines them as “animals, plants or other organisms introduced by man into places out of their natural range of distribution, where they become established and disperse, generating a negative impact on the local ecosystem and species.” And a 2004 paper in Diversity and Distributions that examines the terminology of invasiveness notes that there is a lack of consensus on this topic and lists five dominant definitions for ‘invasive,’ the most popular of which is “widespread that have adverse effects on the invaded habitat.”

Despite the lack of a single definition, however, we can pull from these definitions some general aspects of an invasive species and apply those to Homo sapiens.

1) An invasive species is widespread: Humans, which can be found on every continent, floating on every ocean and even circling the skies above certainly meet this aspect of invasiveness.

2) An invasive species has to be a non-native: Humans had colonized every continent but Antarctica by about 15,000 years ago. Sure, we’ve done some rearranging of populations since then and had an explosion in population size, but we’re a native species.

3) An invasive species is introduced to a new habitat: Humans move themselves; there is no outside entity facilitating their spread.

4) An invasive species had adverse effects on its new habitat and/or on human health: Humans meet this part of the definition in too many ways to count.

Verdict: We’re not an invasive species, though we’re certainly doing harm to the world around us. If you think about it, all of the harm done by invasive species is by definition our collective faults; some kind of human action led to that species being in a new place where it then causes some harm. And so I’m not at all astonished to find people arguing that we’re the worst invasive species of them all.

I fully agree with Ms. Zielinski: based on our human-centric definition of invasive species, the term cannot apply to us while we are on this planet. We could, however, become an invasive species through our space explorations, and we are taking some precautions to minimize that risk. A good example is the Cassini spacecraft’s deliberate crash into Saturn’s atmosphere in September 2017 to prevent contamination of any future efforts to find life in space. All the damage that humans are causing to the physical environment does not “justify” labeling us as foreign invaders. This planet is our home. It is also the only home of every other life form that we know. We are all “competing” for dominance here. These trends are much better framed as collective suicide or self-inflicted genocide.

Figure 1 – Fundamental questions of Astrobiology

I am teaching Cosmology as an advanced General Education course at Brooklyn College. The syllabus covers our attempts to find extraterrestrial life, an area known as Astrobiology. Figure 1 shows the various areas of inquiry that are being considered as part of this effort. Closer inspection of the figure reveals that environmental issues play a key role – for instance, “what is life’s future on Earth and beyond?” and “Future Environmental Change.” In the latter category we see “Catastrophes,” but this entry refers only to collisions of various kinds with space objects, not to the self-destruction of Earth’s native life forms. I have covered self-destruction in specific blogs such as “Nuclear Winter” (July 9, 2013) as well as throughout the CCF blog (the July 9, 2013 post estimated the energy release of climate change as a multiple of that produced by the Hiroshima bombing).


Saving the World through the Pursuit of Self Interest: Part 2

Part 1 of this segment, which I published on April 25, 2017, focused on the March for Science – an event that took place on Earth Day and addressed the Trump administration’s attitude of climate change denial. This follow-up seeks my students’ input.

The fall 2017 semester is over and my students are preparing for their final exam. I have been teaching my Climate Change course using the TBL (Team Based Learning) system. I have 6 groups in my class, each composed of roughly 7 students who have been studying together throughout the semester.

As an extra credit assignment I have challenged the groups to produce a collective paper addressing, “what can we do to save the world?” I view this question as synonymous with the objective of this whole blog. They will post their answers as comments for this post and all of you can be the judges.


Long-term Mitigation: Fusion

A month ago, I was working on a series of blogs about the long-term impacts of and solutions for climate change. I got sidetracked and decided to follow two dramatic events as they unfolded. The first was the tax legislation that has now been passed in different forms by both the House and Senate; it is predicted to increase the deficit by 1.5 trillion dollars over the next ten years. The second was the White House’s formal approval of a detailed, congressionally mandated report about the impacts of climate change on the US. Given the details of the predicted damage that climate change can inflict, I believe that the two decisions were contradictory and that the lawmakers’ actions were irrational.

The last blog (November 7, 2017) in the previous series, “Long Term Solutions: Energy,” was meant to segue into a focus on an ultimate solution – fusion:

Fusion power is a form of power generation in which energy is generated by using fusion reactions to produce heat for electricity generation. Fusion reactions fuse two lighter atomic nuclei to form a heavier nucleus, releasing energy. Devices designed to harness this energy are known as fusion reactors.

The fusion reaction normally takes place in a plasma of deuterium and tritium heated to millions of degrees. In stars, gravity contains these fuels. Outside of a star, the most researched way to confine the plasma at these temperatures is to use magnetic fields. The major challenge in realising fusion power is to engineer a system that can confine the plasma long enough at high enough temperature and density.

As a source of power, nuclear fusion has several theoretical advantages over fission. These advantages include reduced radioactivity in operation and as waste, ample fuel supplies, and increased safety. However, controlled fusion has proven to be extremely difficult to produce in a practical and economical manner. Research into fusion reactors began in the 1940s, but as of 2017, no design has produced more fusion energy than the energy needed to initiate the reaction, meaning all existing designs have a negative energy balance.[1]

Over the years, fusion researchers have investigated various confinement concepts. The early emphasis was on three main systems: z-pinch, stellarator and magnetic mirror. The current leading designs are the tokamak and inertial confinement (ICF) by laser. Both designs are being built at very large scales, most notably the ITER tokamak in France, and the National Ignition Facility laser in the USA. Researchers are also studying other designs that may offer cheaper approaches. Among these alternatives there is increasing interest in magnetized target fusion and inertial electrostatic confinement.

Stars are the energy generators of the universe. By definition, they all generate their energy through fusion in their cores. The most important fuel for this process is hydrogen. All the hydrogen in the universe was created primordially, in the first few minutes after the big bang; the initial distribution of elements was roughly 75% hydrogen and 25% helium. So, cosmologically, hydrogen is the primary energy source in the universe; it can be converted into other forms of energy by way of fusion reactions. Stars are defined by their ability to sustain fusion reactions: their gravitational contraction raises their core temperatures above the ignition temperature of the fusion reaction. Star masses range from about 0.1 to about 100 solar masses (one solar mass is the mass of our sun). The upper limit exists because hydrogen “burning” accelerates sharply with a star’s mass, making the most massive stars short-lived; the lower limit is the smallest mass whose gravity is strong enough to ignite fusion of ordinary hydrogen in the core.

There are also objects called brown dwarfs, smaller than 0.1 solar masses and thus unable to fuse normal hydrogen, but bigger than giant planets such as Jupiter; they can get some energy through fusion of deuterium (an isotope of hydrogen) and lithium:

Brown dwarfs are objects which have a size between that of a giant planet like Jupiter and that of a small star. In fact, most astronomers would classify any object with between 15 times the mass of Jupiter and 75 times the mass of Jupiter to be a brown dwarf. Given that range of masses, the object would not have been able to sustain the fusion of hydrogen like a regular star; thus, many scientists have dubbed brown dwarfs as “failed stars”.

The ultimate solution to our energy problems is to learn how to use fusion as our source of energy. Since immediately after the Second World War, we have known how to use fusion in a destructive capacity (hydrogen bombs) and have been earnestly trying to learn how to use it for peaceful applications such as generating electrical power. It is difficult. To start with, if we want to imitate our sun we have to create temperatures on the order of 100 million degrees Celsius. We also have to learn how to confine matter at such temperatures: every material we know of would completely decompose under those conditions, which is why the leading designs rely on magnetic fields or lasers for confinement. Figure 1 illustrates the facilities engaged in this research and their progress. We are now closer than we have ever been to maintaining a positive balance between energy input and energy output (“ignition” in the graph) but we are not there yet.

Figure 1 – Fusion experimental facilities and the plasma conditions they have reached

The vertical axis in Figure 1 represents a quantity called the triple product. The same site where I found the figure explains this quantity:

The triple product is a figure of merit used for fusion plasmas, closely related to the Lawson criterion. It specifies that successful fusion will be achieved when the product of three quantities – n, the particle density of a plasma; τ, the confinement time; and T, the temperature – reaches a certain value. Above this value of the triple product, the fusion energy released exceeds the energy required to produce and confine the plasma. For deuterium-tritium fusion this value is about nτT ≥ 5×10²¹ m⁻³ s keV. JET has reached values of nτT of over 10²¹ m⁻³ s keV.

In other words, the Joint European Torus (JET), located near Dorchester, England, is more than 1/5 of the way there. The horizontal axis represents temperature on the Kelvin scale:

K = °C + 273
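To make the quoted criterion concrete, here is a minimal sketch with illustrative plasma parameters of roughly the right order of magnitude for a large tokamak – these are not actual JET measurements – along with the approximate conversion from keV to kelvin:

```python
# A minimal sketch of the triple-product criterion quoted above, using
# illustrative plasma parameters (not actual JET measurements).

LAWSON_DT = 5e21          # required n*tau*T for D-T fusion, in m^-3 s keV (from the quote)
KELVIN_PER_KEV = 1.16e7   # 1 keV corresponds to roughly 11.6 million kelvin

def triple_product(density_m3, confinement_s, temperature_kev):
    return density_m3 * confinement_s * temperature_kev

# Numbers of roughly the right order of magnitude for a large tokamak:
n, tau, T = 1e20, 1.0, 10.0  # particles per m^3, seconds, keV
ntT = triple_product(n, tau, T)

print(f"n*tau*T = {ntT:.1e} m^-3 s keV")
print(f"Fraction of the D-T requirement: {ntT / LAWSON_DT:.0%}")  # ~20%, i.e. ~1/5 of the way
print(f"{T:.0f} keV is about {T * KELVIN_PER_KEV:.1e} K")
```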

In an interview with Scientific American, John Holdren, President Obama’s science adviser, summarized the history as well as the present state of the technology:

John Holdren has heard the old joke a million times: fusion energy is 30 years away—and always will be. Despite the broken promises, Holdren, who early in his career worked as a physicist on fusion power, believes passionately that fusion research has been worth the billions spent over the past few decades—and that the work should continue. In December, Scientific American talked with Holdren, outgoing director of the federal Office of Science and Technology Policy, to discuss the Obama administration’s science legacy. An edited excerpt of his thoughts on the U.S.’s energy investments follows.

Scientific American: Have we been investing enough in research on energy technologies?

John Holdren: I think that we should be spending in the range of three to four times as much on energy research and development overall as we’ve been spending. Every major study of energy R&D in relation to the magnitude of the challenges, the size of the opportunities and the important possibilities that we’re not pursuing for lack of money concludes that we should be spending much more.

But we have national labs that are devoted—

I’m counting what the national labs are doing in the federal government’s effort. We just need to be doing more—and that’s true right across the board. We need to be doing more on advanced biofuels. We need to be doing more on carbon capture and sequestration. We need to be doing more on advanced nuclear technologies. We need to be doing more on fusion, for heaven’s sake.

Fusion? Really?

Fusion is not going to generate a kilowatt-hour before 2050, in my judgment, but—

Hasn’t fusion been 30 years away for the past 30 years?

It’s actually worse than that. I started working on fusion in 1966. I did my master’s thesis at M.I.T. in plasma physics, and at that time people thought we’d have fusion by 1980. It was only 14 years away. By 1980 it was 20 years away. By 2000 it was 35 years away. But if you look at the pace of progress in fusion over most of that period, it’s been faster than Moore’s law in terms of the performance of the devices—and it would be nice to have a cleaner, safer, less proliferation-prone version of nuclear energy than fission.

My position is not that we know fusion will emerge as an attractive energy source by 2050 or 2075 but that it’s worth putting some money on the bet because we don’t have all that many essentially inexhaustible energy options. There are the renewables. There are efficient breeder reactors, which have many rather unattractive characteristics in terms of requiring what amounts to a plutonium economy—at least with current technology—and trafficking in large quantities of weapon-usable materials.

The other thing that’s kind of an interesting side note is if we ever are going to go to the stars, the only propulsion that’s going to get us there is fusion.

Are we talking warp drive?

No, I’m talking about going to the stars at some substantial fraction of the speed of light.

When will we know if fusion is going to work?

The reason we should stick with ITER [a fusion project based in France] is that it is the only current hope for producing a burning plasma, and until we can understand and master the physics of a burning plasma—a plasma that is generating enough fusion energy to sustain its temperature and density—we will not know whether fusion can ever be managed as a practical energy source, either for terrestrial power generation or for space propulsion. I’m fine with taking a hard look at fusion every five years and deciding whether it’s still worth a candle, but for the time being I think it is.

We know now that we can satisfy most of our needs and avert some of the predicted disaster if we use sustainable sources of electricity. If we can figure it out, fusion seems to be a good bet to solidify that trend.


Irrationality and the Future

Climate change is all about future impact; the effects we see and deal with right now are limited to early warning signs of the process. The global nature of the issues means that mitigating their impacts requires even more time than it otherwise might. Seventy percent of Earth is covered by oceans, which absorb a significant fraction of the additional heat and of the human-emitted greenhouse gases; as a result, the effects of climate change will continue even after strong mitigation accomplishments such as a full global energy transition to non-carbon fuels. Many of the driving forces of climate change – such as deep-ocean temperature – are slow processes that need a long time to equilibrate (the IPCC calls the warming that continues after the world stabilizes its concentration of greenhouse gases “committed warming”). Action to stabilize atmospheric greenhouse gas concentrations needs to start now in order to affect temperatures in the future.

Yet people hate to pay now for promised benefits down the line (think about education!). The term economists use is “discounting the future.” There are debates as to the rate at which people discount it, but there is overall consensus that the phenomenon exists. In a sense, our aversion to “pay now to prevent future losses” arrangements contradicts one of the main biases in human irrationality: loss aversion. But as long as the losses are predicted to materialize in the future, we would much rather not think about them at all, thank you!
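For readers unfamiliar with the term, here is a minimal sketch of how discounting works; the discount rates and the 50-year horizon are arbitrary illustrations, not estimates of climate damages:

```python
# A minimal sketch of "discounting the future": a cost or benefit that arrives
# t years from now is valued today at amount / (1 + r)**t, where r is the
# discount rate. The rates and the 50-year horizon are arbitrary illustrations.

def present_value(amount, years, rate):
    return amount / (1.0 + rate) ** years

future_damage = 100.0  # an arbitrary future climate damage, in any currency unit
for rate in (0.01, 0.03, 0.07):
    print(f"discount rate {rate:.0%}: feels like {present_value(future_damage, 50, rate):.1f} today")
# At 1% the damage still "feels" like ~61; at 7% it shrinks to ~3.4 -- which is
# why the choice of discount rate dominates arguments about paying now.
```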

Tversky and Kahneman (see my previous two blogs) also noticed some of the flaws inherent in asking people to make predictions. Wikipedia summarizes some important aspects of their thinking:

Kahneman and Tversky[1][2] found that human judgment is generally optimistic due to overconfidence and insufficient consideration of distributional information about outcomes. Therefore, people tend to underestimate the costs, completion times, and risks of planned actions, whereas they tend to overestimate the benefits of those same actions. Such error is caused by actors taking an “inside view“, where focus is on the constituents of the specific planned action instead of on the actual outcomes of similar ventures that have already been completed.

Kahneman and Tversky concluded that disregard of distributional information, i.e. risk, is perhaps the major source of error in forecasting. On that basis they recommended that forecasters “should therefore make every effort to frame the forecasting problem so as to facilitate utilizing all the distributional information that is available”.[2]:416 Using distributional information from previous ventures similar to the one being forecast is called taking an “outside view“. Reference class forecasting is a method for taking an outside view on planned actions.

Reference class forecasting for a specific project involves the following three steps:

1. Identify a reference class of past, similar projects.

2. Establish a probability distribution for the selected reference class for the parameter that is being forecast.

3. Compare the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
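As a concrete, entirely made-up illustration of those three steps, consider forecasting a project’s cost from the cost overruns of a reference class of past projects:

```python
import statistics

# A made-up illustration of the three steps above, forecasting a project's
# cost from the cost overruns of a hypothetical reference class.

# Step 1: a reference class of past, similar projects
# (each number is actual cost divided by forecast cost).
reference_overruns = [1.1, 1.4, 0.9, 1.8, 1.3, 2.2, 1.0, 1.5]

# Step 2: a (crude) probability distribution for the reference class.
median_overrun = statistics.median(reference_overruns)
p80_overrun = sorted(reference_overruns)[int(0.8 * len(reference_overruns))]

# Step 3: compare the specific project with that distribution.
inside_view_estimate = 100.0  # the planner's own "inside view" cost estimate
print(f"Outside-view median forecast:          {inside_view_estimate * median_overrun:.0f}")
print(f"Outside-view 80th-percentile forecast: {inside_view_estimate * p80_overrun:.0f}")
```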

Their argument about our innate optimism and natural disregard for risk (often termed the “planning fallacy”) closely resembles the “just world” hypothesis that I discussed in last week’s blog: we think of the world as operating in a rational manner despite abundant evidence that humans are not very rational creatures. You can see another window into Kahneman’s thinking via Morgan Housel’s interview with him on the topic:

Morgan Housel: When the evidence is so clear that the recent past is probably not going to predict the future, why is the tendency so strong to keep assuming that it will?

Dr. Kahneman: Well, people are inferring a sort of causal model from what they see, so when they see a certain pattern, they feel there is a force that is causing that pattern to, that pattern of behavior and behavior in the market and to continue. And then if the force is there, you see the extending. This is really a very, very natural way to think. When you see a movement in one direction, you are extrapolating that movement. You are anticipating where things will go and you are anticipating usually in a fairly linear fashion, and this is the way that people think about the future generally.

This sort of thinking about the future carries over into how time-related biases play out in behavioral economics.

The first example here is from a paper by Benartzi and Thaler that introduces the concept of “Myopic Loss Aversion” in order to try to explain the puzzle of why stocks outperform bonds so strongly in long-term investments:

Myopic Loss Aversion and the Equity Premium Puzzle

Shlomo Benartzi, Richard H. Thaler

NBER Working Paper No. 4369
Issued in May 1993
NBER Program(s): AP

The equity premium puzzle, first documented by Mehra and Prescott, refers to the empirical fact that stocks have greatly outperformed bonds over the last century. As Mehra and Prescott point out, it appears difficult to explain the magnitude of the equity premium within the usual economics paradigm because the level of risk aversion necessary to justify such a large premium is implausibly large. We offer a new explanation based on Kahneman and Tversky’s ‘prospect theory’. The explanation has two components. First, investors are assumed to be ‘loss averse’ meaning they are distinctly more sensitive to losses than to gains. Second, investors are assumed to evaluate their portfolios frequently, even if they have long-term investment goals such as saving for retirement or managing a pension plan. We dub this combination ‘myopic loss aversion’. Using simulations we find that the size of the equity premium is consistent with the previously estimated parameters of prospect theory if investors evaluate their portfolios annually. That is, investors appear to choose portfolios as if they were operating with a time horizon of about one year. The same approach is then used to study the size effect. Preliminary results suggest that myopic loss aversion may also have some explanatory power for this anomaly

The key here is that frequent, short-term checking of the portfolio’s performance effectively converts a long-term investment into a series of short-term evaluations, each of which can register a painful loss.
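The mechanism is easy to see in a toy simulation. The sketch below uses made-up return parameters and a crude stand-in for prospect-theory value (a loss simply “hurts” 2.25 times as much as a gain feels good); it illustrates the idea rather than reproducing the paper’s model:

```python
import random

# A toy simulation of the Benartzi-Thaler idea: the more often a loss-averse
# investor checks a risky portfolio, the more often a loss is experienced.
# The return parameters and loss-aversion coefficient are illustrative only.

random.seed(0)
MEAN, SD = 0.07, 0.15  # illustrative annual return and volatility of stocks
LOSS_AVERSION = 2.25   # losses weighted ~2.25 times as heavily as gains

def prob_of_loss(horizon_years, trials=20_000):
    """Probability that wealth is below its starting value at the check-in."""
    losses = 0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(horizon_years):
            wealth *= 1.0 + random.gauss(MEAN, SD)
        losses += wealth < 1.0
    return losses / trials

for horizon in (1, 5, 20):
    p = prob_of_loss(horizon)
    felt_value = (1 - p) - LOSS_AVERSION * p  # crude "felt" value of the gamble
    print(f"{horizon:2d}-year check-in: P(loss) ~ {p:.0%}, felt value ~ {felt_value:+.2f}")
```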

Cheng and He tried a direct experimental approach to the issue. Here I am including the abstract, with its summary of the project’s conclusions, and the experimental setup that was used to derive these conclusions:

Deciding for Future Selves Reduces Loss Aversion

Qiqi Cheng and Guibing He

Abstract

In this paper, we present an incentivized experiment to investigate the degree of loss aversion when people make decisions for their current selves and future selves under risk. We find that when participants make decisions for their future selves, they are less loss averse compared to when they make decisions for their current selves. This finding is consistent with the interpretation of loss aversion as a bias in decision-making driven by emotions, which are reduced when making decisions for future selves. Our findings endorsed the external validity of previous studies on the impact of emotion on loss aversion in a real world decision-making environment.

Tasks and Procedure

To measure the willingness to choose the risky prospect, we follow Holt and Laury (2002, 2005) decision task by asking participants to make a series of binary choices for 20 pairs of options (Table 1). The first option (Option A, the safe option) in each pair is always RMB 10 (10 Chinese Yuan) with certainty. The second option (Option B, the risky option) holds the potential outcomes constant at RMB 18 or 1 for each pair but changes the probabilities of winning for each decision, which creates a scale of increasing expected values. Because expected values in early decisions favor Option A while the expected values in later decisions favor Option B, an individual should initially choose Option A and then switch to Option B. Therefore, there will be a ‘switch point,’ which reflects a participant’s willingness to choose a risky prospect. The participants are told that each of their 20 decisions in the table has the same chance of being selected and their payment for the experiment will be determined by their decisions.
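Here is a minimal sketch of the expected-value logic behind the “switch point” described above; the probability grid (5%, 10%, …, 100%) is my assumption for illustration and may differ from the paper’s actual table:

```python
# A minimal sketch of the expected-value logic behind the "switch point."
# The probability grid (5%, 10%, ..., 100%) is an assumption for illustration;
# the paper's actual table may differ.

SAFE = 10.0            # Option A: RMB 10 with certainty
HIGH, LOW = 18.0, 1.0  # Option B: RMB 18 with probability p, RMB 1 otherwise

for decision in range(1, 21):
    p_win = decision / 20.0
    ev_risky = p_win * HIGH + (1 - p_win) * LOW
    choice = "B" if ev_risky > SAFE else "A"
    print(f"decision {decision:2d}: p(win) = {p_win:.2f}, EV(B) = {ev_risky:5.2f} -> risk-neutral choice {choice}")
# A risk-neutral decision maker switches to Option B around p ~ 0.53; switching
# later than that indicates risk aversion (amplified, the authors argue, by loss aversion).
```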

The conclusions from these discussions are clear: our biases are highly sensitive to the time horizon over which we are trying to make predictions, and they therefore directly affect mitigation efforts that have to start in the present. Climate change is an existential issue, so we don’t have the luxury of waiting until the damage is imminent; by then it will be too late to avoid.


Collective Irrationality and Individual Biases: Climate Change II

Last week I discussed some issues in the psychology of judgment and decision making; I feel that they need some clarification and expansion.

I looked at how highly educated Democrats and Republicans diverge sharply in their opinions about the extent to which human actions contribute to climate change. I explained the phenomenon through the concept of “following the herd.” That is, since we are unwilling/unable to learn all the details of a complicated issue such as climate change, we choose instead to follow the opinions of the people that we trust – in this case the leadership of the political parties. But we don’t ask ourselves how those people in power form their own opinions.

The second unsettled issue that came out of last week’s blog is how we apply “loss aversion” to climate change. In other words, there is a strong probability that if climate change is left unchecked we will lose a great deal of money in attempts to mitigate the damage it causes, yet many of us choose to ignore that prospect.

Some of our understanding of this dichotomy traces back to the 19th-century American philosopher and psychologist William James, who pioneered biological psychology and the study of the mind-body connection:

One of the more enduring ideas in psychology, dating back to the time of William James a little more than a century ago, is the notion that human behavior is not the product of a single process, but rather reflects the interaction of different specialized subsystems. These systems, the idea goes, usually interact seamlessly to determine behavior, but at times they may compete. The end result is that the brain sometimes argues with itself, as these distinct systems come to different conclusions about what we should do.

The major distinction responsible for these internal disagreements is the one between automatic and controlled processes. System 1 is generally automatic, affective and heuristic-based, which means that it relies on mental “shortcuts.” It quickly proposes intuitive answers to problems as they arise. System 2, which corresponds closely with controlled processes, is slow, effortful, conscious, rule-based and also can be employed to monitor the quality of the answer provided by System 1. If it’s convinced that our intuition is wrong, then it’s capable of correcting or overriding the automatic judgments.

In the 1960s two Israeli psychologists, Daniel Kahneman and Amos Tversky, started an intellectual journey to establish the boundaries of human rationality. They had to account for the fact that humans evolved from other species that are known for their instinctive survival reflexes rather than their rationality. Kahneman and Tversky expanded upon the distinction made above. They reasoned that human thinking rests on two kinds of brain activity, located in different parts of the brain: the automatic and the rational, which Kahneman labeled Intuition (System 1) and Reasoning (System 2):

Daniel Kahneman provided further interpretation by differentiating the two styles of processing more, calling them intuition and reasoning in 2003. Intuition (or system 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning was based on formed habits and very difficult to change or manipulate. Reasoning (or system 2) was slower and much more volatile, being subject to conscious judgments and attitudes.[8]

Kahneman and Tversky’s efforts quickly spread to many other areas, including economics and health care. Daniel Kahneman was recognized with the 2002 Nobel Prize in Economics (Amos Tversky passed away in 1996), and this year’s Economics Prize was given to Richard Thaler, one of the earliest practitioners to apply this work to the field of economics. Both Daniel Kahneman and Richard Thaler have written books on their efforts aimed at the general public. As part of that general public, I read the books cover to cover. Kahneman’s book is called Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011) and Thaler’s book with Cass Sunstein is titled Nudge (Yale University Press, 2008). Both books are best-sellers. There’s also a new book by Michael Lewis about Tversky and Kahneman’s careers together: The Undoing Project (W.W. Norton, 2017). I searched all three books for direct references to climate change. Thaler and Sunstein’s book has a small chapter on environmental issues that I will summarize toward the end of the blog. Lewis’ book doesn’t have a searchable index, and since it has been a few months since I read it, it is difficult to recall specifics. Kahneman’s book has a searchable index, but I didn’t find any directly relevant references there either.

I was fortunate to come across the transcript of a talk that Daniel Kahneman gave on April 18, 2017 to the Council on Foreign Relations. The meeting was presided over by Alan Murray, Chief Content Officer at Time magazine. At the conclusion of his talk Kahneman agreed to answer questions from the audience. Two of the questions directly referred to climate change:

Q: Hi. I’m Jack Rosenthal, retired from The New York Times. I wonder if you’d be willing to talk a bit about the undoing idea and whether it’s relevant in the extreme to things like climate denial.

KAHNEMAN: Well, I mean, the undoing idea, the Undoing Project, was something that I—well, it’s the name of a book that Michael Lewis wrote about Amos Tversky and me. But it originally was a project that I engaged in primarily. I’m trying to think about how do people construct alternatives to reality.

And my particular, my interest in this was prompted by tragedy in my family. A nephew in the Israeli air force was killed. And I was very struck by the fact that people kept saying “if only.” And that—and that “if only” has rules to it. We don’t just complete “if only” in any—every which way. There are certain things that you use. So I was interested in counterfactuals. And this is the Undoing Project. Climate denial, I think, is not necessarily related to the Undoing Project. It’s very powerful, clearly. You know, the anchors of the psychology of climate denial is elementary. It’s very basic. And it’s going to be extremely difficult to overcome.

MURRAY: When you say it’s elementary, can you elaborate a little bit?

KAHNEMAN: Well, the whether people believe or do not believe is one issue. And people believe in climate and climate change or don’t believe in climate change not because of the scientific evidence. And we really ought to get rid of the idea that scientific evidence has much to do with people’s beliefs.

MURRAY: Is that a general comment, or in the case of climate?

KAHNEMAN: Yeah, it’s a general comment.

MURRAY: (Laughs.)

KAHNEMAN: I think it’s a general comment. I mean, there is—the correlation between attitude to gay marriage and belief in climate change is just too high to be explained by, you know.

MURRAY: Science.

KAHNEMAN: —by science. So clearly—and clearly what is people’s beliefs about climate change and about other things are primarily determined by socialization. They’re determined—we believe in things that people that we trust and love believe in. And that, by the way, is certainly true of my belief in climate change. I believe in climate change because I believe that, you know, if the National Academy says there’s climate change, but…

MURRAY: They’re your people.

KAHNEMAN: They’re my people.

MURRAY: (Laughs.)

KAHNEMAN: But other people—you know, they’re not everybody’s people. And so this, I think—that’s a very basic part of it. Where do beliefs come from? And the other part of it is that climate change is really the kind of threat for which—that we as humans have not evolved to cope with. It’s too distant. It’s too remote. It just is not the kind of urgent mobilizing thing. If there were a meteor, you know, coming to earth, even in 50 years, it would be completely differently. And that would be—people, you know, could imagine that. It would be concrete. It would be specific. You could mobilize humanity against the meteor. Climate change is different. And it’s much, much harder, I think.

MURRAY: Yes, sir, right here.

Q: Nise Aghwa (ph) of Pace University. Even if you believe in evidence-based science, frequently, whether it’s in medicine, finance, or economics, the power of the tests are so weak that you have to rely on System 1, on your intuition, to make a decision. How do you bridge that gap?

KAHNEMAN: Well, you know, if a decision must be made, you’re going to make it on the best way—you know, in the best way possible. And under some time pressure, there’s no time for deliberation. You just must do, you know, what you can do. That happens a lot.

If there is time to reflect, then in many situations, even when the evidence is incomplete, reflection might pay off. But this is very specific. As I was saying earlier, there are domains where we can trust our intuitions, and there are domains where we really shouldn’t. And one of the problems is that we don’t know subjectively which is which. I mean, this is where some science and some knowledge has to come in from the outside.

MURRAY: But it did sound like you were saying earlier that the intuition works better in areas where you have a great deal of expertise…

KAHNEMAN: Yes.

MURRAY: —and expertise.

KAHNEMAN: But we have powerful intuitions in other areas as well. And that’s the problem. The real problem—and we mentioned overconfidence earlier—is that our subjective confidence is not a very good indication of accuracy. I mean, that’s just empirically. When you look at the correlation between subjective confidence and accuracy, it is not sufficiently hard. And that creates a problem.

Kahneman clearly says that he doesn’t think people make up their minds about life based on science and facts – and that is especially true when it comes to climate change. He acknowledges how intuition and reason each play parts in the way we make decisions – sometimes to our own detriment.

The environmental chapter in Thaler and Sunstein’s book Nudge, “Saving the Planet,” looks at climate change as well as at how we can shape our own minds and others’. The authors explore the possibility that a few well-thought-out nudges and better choice architecture might reduce greenhouse gas emissions. There is a separate chapter about what choice architecture means as well. Here is the key paragraph that explains the concept:

If you indirectly influence the choices other people make, you are a choice architect. And since the choices you are influencing are going to be made by humans, you will want your architecture to reflect a good understanding of how humans behave. In particular, you will want to ensure that the Automatic System doesn’t get all confused.

The book emphasizes well-designed free choices, a perspective the authors call libertarian paternalism, as contrasted with regulation (command and control), the prevailing approach to government environmental activity. Thaler and Sunstein mention Garrett Hardin’s article, “The Tragedy of the Commons” (see explanations and examples in the July 2, 2012 blog), which points out that people don’t get feedback on the environmental harm that they inflict. They say that governments need to align incentives. They discuss two kinds of incentives: taxes, a negative incentive that we want to avoid, and cap-and-trade (see the November 10, 2015 blog), a positive one in which we want to maximize profits. The book offers some pointers on how to account for the fact that the players are humans: redistribute the revenues that come from cap-and-trade or carbon taxes, and provide feedback to consumers about the damage that polluters impose. One example the authors use to illustrate the effectiveness of such nudges is the mandatory messaging about the risks of cigarette smoking. They also recommend trying to find ways to incorporate personal energy audits into the choices that people make.


Collective Irrationality and Individual Biases: Climate Change

Last week’s blog looked at the connections between the latest effort to rewrite our tax code and the necessary detailed accounting of the resources we will need to compensate for the increasing damage that climate change will inflict on us in a business-as-usual scenario. This kind of “dynamic scoring” is a political manifestation of a cognitive bias that we call “loss aversion.” This phenomenon has become one of the pillars of the psychology of judgement and decision making, as well as the foundation for the area of behavioral economics. In short, we are driven by fear: we put much more effort into averting losses than maximizing possible gains.

With climate change and tax policy, however, we seem to do the reverse.

Well, psychology seems to have an answer to this too.

Here is a segment from the concluding chapter of my book, Climate Change: The Fork at the End of Now, which was published by Momentum Press in 2011.

In it, I asked the following question:

Why do we tend to underestimate risks relating to natural hazards when a catastrophic event has not occurred for a long time? If the catastrophic events are preventable, can this lead to catastrophic inaction?

I tried to answer it this way:

My wife, an experimental psychologist and now the dean of research at my college, pointed out that social psychology has a possible explanation for inaction in the face of dire threats, mediated by a strong need to believe that we live in a “just world,” a belief deeply held by many individuals that the world is a rational, predictable, and just place. The “just world” hypothesis also posits that people believe that beneficiaries deserve their benefits and victims their suffering.7 The “just world” concept has some similarity to rational choice theory, which underlies current analysis of microeconomics and other social behavior. Rationality in this context is the result of balancing costs and benefits to maximize personal advantage. It underlies much of economic modeling, including that of stock markets, where it goes by the name “efficient market hypothesis,” which states that the existing share price incorporates and reflects all relevant information. The need for such frameworks emerges from attempts to make the social sciences behave like physical sciences with good predictive powers. Physics is not much different. A branch of physics called statistical mechanics, which is responsible for most of the principles discussed in Chapter 5 (conservation of energy, entropy, etc.), incorporates the basic premise that if nature has many options for action and we do not have any reason to prefer one option over another, then we assume that the probability of taking any action is equal to the probability of taking any other. For large systems, this assumption works beautifully and enables us to predict macroscopic phenomena to a high degree of accuracy. In economics, a growing area of research is dedicated to the study of exceptions to the rational choice theory, which has shown that humans are not very rational creatures. This area, behavioral economics, includes major contributions by psychologists.

Right now, instead of trying to construct policies that will minimize our losses, we are just trying to present those possible losses as nonexistent. We are trying to pretend that the overwhelming science that predicts those losses for business-as-usual scenarios is “junk science” and that climate change is a conspiracy that scientists have created so they can get grants for research.

I too am guilty of cognitive bias when it comes to climate change.

A few days ago, a distinguished physicist from another institution was visiting my department. He is very interested in environmental issues and, along with two other physicists, is in the process of publishing a general education textbook, “Science of the Earth, Climate and Energy.”

During dinner he took a table napkin and drew curves similar to those shown in Figure 1 and asked me for my opinion. I had never seen such a graph before and it went against almost everything I knew, so I tried to dismiss it. The dinner was friendly so we let it go.

A few days later, an article in The New York Times backed him up:

Figure 1 Extent of agreement that human actions have contributed to climate change, among Republicans and Democrats in the US (NYT).

The article also looked at how the two parties’ attitudes on other issues vary with education level; those showed much less disparity:

On most other issues, education had little effect. Americans’ views on terrorism, immigration, taxes on the wealthiest, and the state of health care in the United States did not change appreciably by education for Democrats and Republicans.

Only a handful of issues had a shape like the one for climate change, in which higher education corresponded with higher agreement among Democrats and lower agreement among Republicans.

So what distinguishes these issues, climate change in particular?

First, climate change is a relatively new and technically complicated issue. On these kinds of matters, many Americans don’t necessarily have their own views, so they look to adopt those of political elites. And when it comes to climate change, conservative elites are deeply skeptical.

This can trigger what social scientists call a polarization effect, as described by John Zaller, a political scientist at the University of California, Los Angeles, in his 1992 book about mass opinion. When political elites disagree, their views tend to be adopted first by higher-educated partisans on both sides, who become more divided as they acquire more information.

It may be easier to think about in terms of simple partisanship. Most Americans know what party they belong to, but they can’t be expected to know the details of every issue, so they tend to adopt the views of the leaders of the party they already identify with.

For comparison, here’s the breakdown of voter turnout in the 2016 election:

Figure 2 Voter turnout and preference in the 2016 election, by education

In behavioral economics, the mechanism behind the NYT’s explanation of these diverging attitudes toward climate change is called “following the herd” (Chapter 3 of Nudge by Richard Thaler and Cass Sunstein).

I will expand on this in the next blog.


Dynamic Scoring: Taxes and Climate Change

Our government’s executive and legislative branches are in the midst of discussing two important issues: tax breaks and climate change. Well, in truth, the only real discussion going on has to do with the tax legislation; climate change is only being addressed indirectly. The Trump administration officially approved the congressionally mandated climate change report (discussed previously in the August 15, 2017 blog) that was compiled by scientists from 13 government agencies. It is now an “official” document, even though the White House claims that the President never read it. Meanwhile, the 23rd Conference of the Parties (COP23) to the United Nations Framework Convention on Climate Change (UNFCCC) is taking place right now in Bonn, Germany. One hundred ninety-five nations are there to discuss implementation of the 2015 Paris Agreement. As we all remember, President Trump announced in June his intention to withdraw the US from this agreement; nevertheless, the US is a full participant in this meeting. Syria has just announced that it will be joining the agreement, making the US the only country in the world to back out.

Let me now come back to taxes, starting with Forbes magazine’s “Tax Reform for Dummies”:

You see, they were planning to repeal Obamacare using something called the “budget reconciliation process,” and pay attention, because this becomes relevant with tax reform. Using this process, Congress can pass a bill that’s attached to a fiscal year budget so long as:

  1. The bill directly impacts revenue or spending, and
  2. The bill does not increase the budget deficit after the end of the ten-year budget window (the so-called “Byrd Rule”).

More importantly, using the reconciliation process, Republicans can pass a bill with only a simple majority in the Senate (51 votes), rather than the standard 60. And as mentioned above, the GOP currently holds 52 seats in the Senate, meaning it could have pushed through its signature legislation without a SINGLE VOTE from a Democrat, which is particularly handy considering that vote was never coming.

Well, the Republicans in Congress were able to pass this resolution with a simple majority, specifying that, using dynamic scoring, they will not increase the budget deficit after the specified 10-year period. They set the limit for this accounting at a deficit increase no larger than $1.5 trillion (1,500 billion, for those of us who need help with big numbers). Here is an explanation of dynamic scoring:

Tax, spending, and regulatory policies can affect incomes, employment, and other broad measures of economic activity. Dynamic analysis accounts for those macroeconomic impacts, while dynamic scoring uses dynamic analysis in estimating the budgetary impact of proposed policy changes.
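As a toy illustration of what the definition above means, here is a comparison of a static and a dynamic score for a hypothetical tax cut; the numbers (a $150 billion/year cut, an 18% federal revenue share, and the assumed growth boosts) are my own illustrative assumptions, not CBO or Joint Committee on Taxation estimates:

    # Toy static vs. dynamic score of a hypothetical tax cut (all numbers assumed).

    ANNUAL_CUT = 150e9   # direct revenue loss per year ($150 billion)
    YEARS = 10           # the budget-reconciliation window
    GDP = 18e12          # rough size of the US economy, in dollars
    TAX_SHARE = 0.18     # assumed share of GDP collected as federal revenue

    def ten_year_score(extra_growth=0.0):
        """10-year addition to the deficit, given the extra annual GDP growth
        attributed to the tax cut (extra_growth = 0.0 reproduces a static score)."""
        added_deficit = 0.0
        for year in range(1, YEARS + 1):
            extra_gdp = GDP * ((1 + extra_growth) ** year - 1)
            feedback_revenue = extra_gdp * TAX_SHARE
            added_deficit += ANNUAL_CUT - feedback_revenue
        return added_deficit

    print(f"static score:             ${ten_year_score(0.0) / 1e12:.2f} trillion")
    print(f"dynamic score (+0.2%/yr): ${ten_year_score(0.002) / 1e12:.2f} trillion")
    print(f"dynamic score (+0.5%/yr): ${ten_year_score(0.005) / 1e12:.2f} trillion")

The static score of this hypothetical cut is $1.5 trillion over ten years; the dynamic score shrinks as the assumed growth effect rises, which is exactly why the choice of that assumption matters so much.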

To give you an understanding of that timeline and budget, here’s a graph of our national deficit:

Figure 1 Post WWII budget deficit in the US

The Republican rationale for applying dynamic scoring to tax cuts is that “tax cuts pay for themselves”:

Ted Cruz got at a similar idea, referencing the tax plan he unveiled Thursday: “[I]t costs, with dynamic scoring, less than $1 trillion. Those are the hard numbers. And every single income decile sees a double-digit increase in after-tax income. … Growth is the answer. And as Reagan demonstrated, if we cut taxes, we can bring back growth.”

Tax cuts can boost economic growth. But the operative word there is “can.” It’s by no means an automatic or perfect relationship.

We know, we know. No one likes a fact check with a non-firm answer. So let’s dig further into this idea.

There’s a simple logic behind the idea that cutting taxes boosts growth: Cutting taxes gives people more money to spend as they like, which can boost economic growth.

Many — but by no means all— economists believe there’s a relationship between cuts and growth. In a 2012 survey of top economists, the University of Chicago’s Booth School of Business found that 35 percent thought cutting taxes would boost economic growth. A roughly equal share, 35 percent, were uncertain. Only 8 percent disagreed or strongly disagreed.

But in practice, it’s not always clear that tax cuts themselves automatically boost the economy, according to a recent study.

“[I]t is by no means obvious, on an ex ante basis, that tax rate cuts will ultimately lead to a larger economy,” as the Brookings Institution’s William Gale and Andrew Samwick wrote in a 2014 paper. Well-designed tax policy can increase growth, they wrote, but to do so, tax cuts have to come alongside spending cuts.

And even then, it can’t just be any spending cuts — it has to be cuts to “unproductive” spending.

“I want to be clear — one can write down models where taxes generate big effects,” Gale told NPR. But models are not the real world, he added. “The empirical evidence is quite different from the modeling results, and the empirical evidence is much weaker.”

President Reagan’s tax cut that Senator Cruz is referring to took place in 1981, in the middle of a serious recession, and it came on the heels of a post-war deficit. The tax cut did not pay for itself.

We can now return to the executive summary that precedes the recently approved government Climate Science Special Report. I am condensing it into its main findings, each of which receives a detailed discussion in the full 500-page report:

  • Global annually averaged surface air temperature has increased by about 1.8°F (1.0°C) over the last 115 years (1901–2016). This period is now the warmest in the history of modern civilization.
  • It is extremely likely that human activities, especially emissions of greenhouse gases, are the dominant cause of the observed warming since the mid-20th century.
  • Thousands of studies conducted by researchers around the world have documented changes in surface, atmospheric, and oceanic temperatures; melting glaciers; diminishing snow cover; shrinking sea ice; rising sea levels; ocean acidification; and increasing atmospheric water vapor.
  • Global average sea level has risen by about 7–8 inches since 1900, with almost half (about 3 inches) of that rise occurring since 1993. The incidence of daily tidal flooding is accelerating in more than 25 Atlantic and Gulf Coast cities in the United States.
  • Global average sea levels are expected to continue to rise—by at least several inches in the next 15 years and by 1–4 feet by 2100. A rise of as much as 8 feet by 2100 cannot be ruled out.
  • Heavy rainfall is increasing in intensity and frequency across the United States and globally and is expected to continue to increase.
  • Heatwaves have become more frequent in the United States since the 1960s, while extreme cold temperatures and cold waves are less frequent; over the next few decades (2021–2050), annual average temperatures are expected to rise by about 2.5°F for the United States, relative to the recent past (average from 1976–2005), under all plausible future climate scenarios.
  • The incidence of large forest fires in the western United States and Alaska has increased since the early 1980s and is projected to further increase.
  • Annual trends toward earlier spring melt and reduced snowpack are already affecting water resources in the western United States. Chronic, long-duration hydrological drought is increasingly possible before the end of this century.
  • The magnitude of climate change beyond the next few decades will depend primarily on the amount of greenhouse gases (especially carbon dioxide) emitted globally. Without major reductions in emissions, the increase in annual average global temperature relative to preindustrial times could reach 9°F (5°C) or more by the end of this century. With significant reductions in emissions, the increase in annual average global temperature could be limited to 3.6°F (2°C) or less.

We constantly worry about what kind of damage we in the US can expect from these impacts under the prevailing business-as-usual scenario. Fortunately, a detailed paper in Science (Science 356, 1362 (2017)) gives us some answers:

Estimating economic damage from climate change in the United States

Solomon Hsiang, Robert Kopp, Amir Jina, James Rising, Michael Delgado, Shashank Mohan, D. J. Rasmussen, Robert Muir-Wood, Paul Wilson, Michael Oppenheimer, Kate Larsen, and Trevor Houser

Estimates of climate change damage are central to the design of climate policies. Here, we develop a flexible architecture for computing damages that integrates climate science, econometric analyses, and process models. We use this approach to construct spatially explicit, probabilistic, and empirically derived estimates of economic damage in the United States from climate change. The combined value of market and nonmarket damage across analyzed sectors—agriculture, crime, coastal storms, energy, human mortality, and labor—increases quadratically in global mean temperature, costing roughly 1.2% of gross domestic product per +1°C on average. Importantly, risk is distributed unequally across locations, generating a large transfer of value northward and westward that increases economic inequality. By the late 21st century, the poorest third of counties are projected to experience damages between 2 and 20% of county income (90% chance) under business-as-usual emissions (Representative Concentration Pathway 8.5).

Figure 2 Direct damage in various sectors as a function of rising temperature since the 1980s

The paper’s abstract above indicates a negative impact of 1.2% of GDP for each 1°C (1.8°F) rise. The current GDP of the US is around $18 trillion, so 1.2% of that per 1°C amounts to about $216 billion. If we take the recent “typical” growth rate of the economy to be 2%, that growth amounts to $360 billion/year. The loss for 1°C of warming therefore amounts to 60% of this “typical” annual growth.
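The same arithmetic in a few lines of Python (the GDP and growth figures are the rough values used above, not precise government statistics):

    # Rough damage arithmetic based on the Hsiang et al. central estimate.
    GDP = 18e12               # approximate US GDP, in dollars
    DAMAGE_PER_DEG_C = 0.012  # ~1.2% of GDP lost per +1°C (the paper's average)
    GROWTH_RATE = 0.02        # assumed "typical" annual growth of ~2%

    damage_per_degree = GDP * DAMAGE_PER_DEG_C   # ~$216 billion per +1°C
    typical_growth = GDP * GROWTH_RATE           # ~$360 billion per year

    print(f"damage per +1°C:  ${damage_per_degree / 1e9:.0f} billion")
    print(f"typical growth:   ${typical_growth / 1e9:.0f} billion/year")
    print(f"damage / growth:  {damage_per_degree / typical_growth:.0%}")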

Accounting through dynamic scoring should count losses as well as gains – in this case, those resulting from climate change. Next week I will expand on this topic.
