Markers for the Global Energy Transition

Last week I talked about Dieter Helm’s book, in which he portrays a future where oil companies go broke and fossil fuel prices collapse because supply (via fracking and horizontal drilling) is practically infinite. Growing awareness of climate change has produced strong public pressure to reduce our carbon footprints, and we are experiencing major shifts in how we source and use energy, including the growing conversion to electric power. The energy industry, however, was (and still is) structured around the now-outdated perception that energy supply is finite and shrinking, and electrical distribution continues to rely on archaic structures. Helm presents a rosy future for the US because we can adapt to this post-peak-oil reality. But his prediction does not factor in our election of President Trump, who is bent on returning us to the yester-world in terms of our treatment of the physical environment.

As I mentioned in last week’s blog, I agree with some of Helm’s observations and disagree with others. I do feel, however, that it is easy to make predictions about a far future that many of us will not live to see (nor will we be held accountable for our predictions). Helm’s book suffers from that sort of glib certainty and I am aware that I have to some extent shared in the same practice throughout the five years I have been writing this blog.

Helm’s book triggered in me a strong desire to change my ways so I will be starting a series about the recent past. My jumping-off point is the IPAT identity that I have repeatedly referenced here (the first reference was from November 26, 2012). Below is a much more recent mention of this important identity:

There is a useful identity that correlates the environmental impacts (greenhouse gases, in Governor Romney’s statement) with the other indicators. The equation is known as the IPAT equation (or I=PAT), which stands for Impact = Population x Affluence x Technology. The equation was proposed independently by two research teams: one consisting of Paul R. Ehrlich and John Holdren (now President Obama’s Science Adviser), the other led by Barry Commoner (P.R. Ehrlich and J.P. Holdren, Bulletin of the Atomic Scientists 28:16 (1972); B. Commoner, Bulletin of the Atomic Scientists 28:42 (1972)).

The identity takes the following form:

Impact = Population x Affluence x Technology

Almost all of the future scenarios for climate change make separate estimates of the indicators in this equation. The factor-of-15 difference in GDP/Person (the measure of affluence) between the average Chinese citizen and the average American makes it clear that the Chinese and the rest of the developing world will do everything they can to “even the score” with the developed world. The global challenge is how to do this while minimizing the environmental impact.

The Population and Affluence terms are self-explanatory. The Impact term in this case refers to emissions of carbon dioxide. The Technology term consists of the three terms combined below:

Technology = (Energy/GDP) x (Fossil/Energy) x (CO2/Fossil)

The first term in this equation is the energy intensity – how much energy we need to generate a unit of GDP (Gross Domestic Product, used here as the affluence metric). The second term is the fraction of total energy that is generated from fossil fuels. The last term reflects the carbon intensity of the fossil fuel mix, which depends on the kind of fossil fuel being used (coal, natural gas, or oil).
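The identity is just multiplication, so it can be sketched in a few lines of code. The numbers below are illustrative placeholders of my own invention, not values from any table or figure in this blog:

```python
# IPAT/Kaya-style decomposition of CO2 emissions:
# Impact = Population x (GDP/Population) x (Energy/GDP) x (Fossil/Energy) x (CO2/Fossil)
# All input numbers below are illustrative placeholders, not measured values.

def co2_emissions(population, gdp_per_capita, energy_intensity,
                  fossil_fraction, carbon_intensity):
    """Return annual CO2 emissions (kg/year) implied by the IPAT factors.

    population        : people
    gdp_per_capita    : $ per person (Affluence)
    energy_intensity  : MJ of primary energy per $ of GDP (Energy/GDP)
    fossil_fraction   : share of primary energy from fossil fuels (Fossil/Energy)
    carbon_intensity  : kg CO2 per MJ of fossil energy (CO2/Fossil)
    """
    gdp = population * gdp_per_capita
    energy = gdp * energy_intensity
    fossil_energy = energy * fossil_fraction
    return fossil_energy * carbon_intensity

# Hypothetical country: 100 million people, $10,000/person,
# 5 MJ/$, 80% fossil energy, 0.07 kg CO2/MJ
emissions = co2_emissions(100e6, 10e3, 5.0, 0.8, 0.07)
print(f"{emissions / 1e12:.2f} billion tonnes CO2/year")  # 0.28 billion tonnes CO2/year
```

Because the intermediate quantities (GDP, total energy, fossil energy) cancel when the factors are multiplied out, improving any single factor – say, halving the fossil fraction – cuts the final impact proportionally.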

The actual decomposition of global CO2 emissions is shown in Figure 2, which clearly demonstrates that for at least the last decade, Affluence has been the dominant contributor to emissions.

What I referred to in that blog as Figure 2 I am reposting here as Figure 1.

Figure 1 – Decomposition of the change in total global CO2 emissions from fossil fuel combustion by decade (January 24, 2017 blog) (IPCC report analyzing the IPAT identity)

In past blogs I referred to the various indicators in the identity in global terms. Here, and in the next few blogs I will instead cite individual countries. There are two main reasons for doing so. First, our global system is made up of sovereign countries, each of which can (for the most part) enforce actions only within its own borders. The second reason is that once I focus on individual countries I can emphasize those that are more/less successful at transitioning their energy away from carbon-based sources.

Table 1 includes population and GDP/Capita in current US$ for 12 countries. As Figure 1 shows, these are the two main indicators that drive carbon dioxide emissions. The population information came from the United Nations population review and the GDP/Capita came from the World Bank database.

The current populations of the 10 most populous countries add up to more than half of the global total, so they serve as a fair proxy for the global picture.

Table 1 – Population and GDP/Capita of the twelve countries that I will use to analyze the global energy transition (in order of current population)

I have added Ethiopia and the Democratic Republic of Congo because the UN is projecting that given their current growth rates they will be among the 10 most populous countries by 2050. According to World Bank classification, this table includes one high-income (the US), three low-income (Bangladesh, Ethiopia, and DR Congo), four lower-middle income (India, Indonesia, Pakistan, and Nigeria) and four upper-middle income economies (China, Brazil, Russia, and Mexico). In other words, it represents the full range of global income distribution.

The table includes each of the twelve countries’ current GDP/Capita, growth rate, and population as well as their projected populations for 2030 and 2050.

Next week I will focus on trends in primary energy use, emphasizing alternative energy and the resulting change in carbon footprints that these newly diversified economies generate. I will follow it with a compilation of electricity use and the primary energy sources being used to generate that electricity.

Posted in Anthropocene, Anthropogenic, Climate Change, Sustainability

Back to the Global Energy Transition

A short while back, I got an email from a Dutch friend’s brother, who had just finished reading my book on climate change. I have taken out any personal comments but am including his thoughts on renewable energy as well as his summary of his own background: 

Although I think it’s necessary to develop fusion reactors, there is less and less support for doing so, and I’m not so sure that the global decline in forest area can be stopped. I’m pretty sure it’s possible to enhance efficiencies on all levels but I’m not so sure it’s wise to subsidize the current renewable technologies until the time these technologies become competitive with fossil fuels. Wouldn’t such a strategy consume too much subsidy money?

Did you happen to read the book “Burn Out” (2017), written by Dieter Helm? He is convinced that low fossil fuel prices are a direct consequence of less energy-intensive growth in China, combined with a gas surplus coming from fracking in the States and with renewables whose nearly zero marginal cost undermines the spot wholesale markets. He is also very optimistic that climate change will get solved if we encourage entrepreneurs to develop new technologies. I’m very interested to hear your view on this technological development approach.

By the time that I got around to his email, my fall semester was over and I found myself with some available extra time. I promised him I would read Dieter Helm’s book, Burn Out: The End Game of Fossil Fuels (Yale University Press, 2017) and get in touch with him as soon as I finished. Now that I have, I’m sharing my thoughts.

Here are Helm’s main credentials, as taken from his website:

  • Professor Dieter Helm is an economist specializing in utilities, infrastructure, regulation and the environment, and concentrating on the energy, water, communications and transport sectors primarily in Britain and Europe. He is a Professor at the University of Oxford and Fellow of New College, Oxford.

  • In December 2015, Dieter was reappointed as Independent Chair of the Natural Capital Committee.

  • Dieter’s recent books include: Burn Out: The Endgame for Fossil Fuels, The Carbon Crunch: Revised and Updated, and Natural Capital: Valuing the Planet. These books were published by Yale University Press.

  • Nature in the Balance, edited with Cameron Hepburn, was published in early 2014 by Oxford University Press.

  • During 2011, Dieter assisted the European Commission in preparing the Energy Roadmap 2050, serving both as a special advisor to the European Commissioner for Energy and as Chairman of the Ad Hoc Advisory Group on the Roadmap. He also assisted the Polish government in their presidency of the European Union Council.

The Preface to Burn Out starts as follows:

When you read this, the oil price could be anywhere between $20 and $100 a barrel. It could even be outside these boundaries. Although it will matter a lot to the companies, traders and consumers, it will not tell you very much about the price in the medium-to-longer term. The fact that the price was $147 in 2008 and $27 in early 2016 just tells you that it is volatile.

He is absolutely right. I’m including recent oil and natural gas prices as taken from Bloomberg below.

Crude Oil and Natural Gas


JPY is Japanese Yen. Bbl, kl, and MMBtu are all measurement units. Bbl stands for barrel: one barrel equals 42 US gallons or 35 UK (imperial) gallons. One barrel of crude oil contains roughly as much energy as 5,604 cubic feet of natural gas. One kiloliter (kl) equals about 6.2898 bbl (i.e., one bbl ≈ 0.159 kl). One MMBtu is equal to 1 million BTU (British Thermal Units). One standard cubic foot of natural gas yields ≈ 1030 BTU.
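For readers who want to check these conversions, here is the arithmetic in a few lines of Python. The 5,604 cubic-feet figure is an energy equivalence; I assume roughly 5.8 MMBtu per barrel of crude, a common approximation that lands close to the quoted number:

```python
# Unit conversions for the oil and gas price table, using the figures in the text.
US_GALLONS_PER_BBL = 42
BBL_PER_KILOLITER = 6.2898         # i.e., 1 bbl is about 0.159 kl
BTU_PER_CUBIC_FOOT_GAS = 1030      # typical heat content of natural gas
MMBTU_PER_BBL_CRUDE = 5.8          # assumed energy content of a barrel of crude

# Energy-equivalent natural gas per barrel of crude:
cubic_feet_per_bbl = MMBTU_PER_BBL_CRUDE * 1e6 / BTU_PER_CUBIC_FOOT_GAS
print(f"1 bbl crude is energy-equivalent to about {cubic_feet_per_bbl:.0f} cubic feet of gas")

# Barrels to kiloliters:
print(f"1 bbl is about {1 / BBL_PER_KILOLITER:.4f} kl")
```

The first print comes out near 5,600 cubic feet, consistent with the equivalence quoted above; the exact figure depends on the assumed heat content of the crude.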

I am not going to get into the book’s conclusions and analyses but I think that my friend’s brother was a bit premature in giving up on fusion and subsidies for sustainable energy sources. We just need to make sure that at the same time, we keep subsidizing the fossil fuel industries and allowing them to amortize – pay off – their oil reserves.

Here are my main takeaways from Helm’s book:

  • Traditional energy companies are going broke
    • The concept of peak oil has been disproven; according to Shell, demand will peter out before supply does
    • Oil/gas reserves are basically infinite (and thus the prices are never going to reflect decreasing supply)
    • There are increasingly strong social pressures to limit the ability of oil companies to tap reserves
    • People would rather transition to electric power and the way we generate electricity is changing
  • The electric car revolution is both pushing and reflecting the alternative energy revolution
  • As supporting evidence of this shift, he cites the declining profits of current oil producers in the Middle East, Venezuela, Russia, etc. – with the notable exception of the United States. The year of publication of the book is listed as 2017 but he doesn’t mention President Trump or his contributions to the energy transition we are experiencing. I assume this means the book was written before the 2016 election and published early last year.

My overall perspective is that the author is writing this as a Big Oil man predicting the downfall of the oil industry. But it is conspicuously lacking any mention of the industry’s intensive efforts to prevent/delay its collapse by helping to finance climate change deniers.

I am in full agreement with the author’s conclusion that with the advances in fracking and horizontal drilling, peak oil doesn’t exist anymore. I also think he is right that the global economy is shifting to other forms of producing electricity.

One of the first things that I teach my students is that electricity is not a primary energy source – it is a product. Even when we shift to electric power (e.g. electric cars), it is critical that we are aware of the primary energy sources (e.g. solar, wind, natural gas, etc.) that give us that power.
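A toy calculation makes the point: the effective carbon footprint of a kilowatt-hour depends entirely on the primary-energy mix behind it. The emission factors and mixes below are rough illustrative values, not data for any actual grid:

```python
# Effective carbon intensity of electricity depends on the primary-energy mix.
# Emission factors (kg CO2 per kWh generated) are rough illustrative values.
EMISSION_FACTORS = {"coal": 0.95, "natural_gas": 0.45, "solar": 0.0, "wind": 0.0}

def grid_intensity(mix):
    """kg CO2 per kWh for a generation mix given as {source: share}."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(share * EMISSION_FACTORS[src] for src, share in mix.items())

# Two hypothetical grids feeding the same electric car:
coal_heavy = {"coal": 0.6, "natural_gas": 0.3, "solar": 0.05, "wind": 0.05}
wind_heavy = {"coal": 0.1, "natural_gas": 0.3, "solar": 0.2, "wind": 0.4}
print(round(grid_intensity(coal_heavy), 3))  # 0.705 kg CO2/kWh
print(round(grid_intensity(wind_heavy), 3))  # 0.23 kg CO2/kWh
```

The same electric car charged from the second, wind-heavy mix is responsible for roughly a third of the emissions per mile – which is why the primary sources behind the plug matter as much as the switch to electricity itself.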

My practical conclusion from reading the book is that I want to use this blog as a platform – at least twice a year – to analyze our progress in the energy transition.

I will start this process in next week’s blog.

Posted in administration, Anthropocene, Climate Change, Sustainability, Trump

Use of Computer Simulations in Climate Change Mitigation and Adaptation

Figure 2 in last week’s blog demonstrated our most important tool for identifying human contributions to climate change on both a local and global scale. The figure, taken from the NASA site, shows simulated 20th century global mean temperatures (with and without human influences) compared to observed temperatures. This comparison determines the validity of the simulation and of its conclusions regarding the driving forces behind a particular event:

Computer simulations reproduce the behavior of a system using a mathematical model. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry and biology, human systems in economics, psychology, social science, and engineering. Simulation of a system is represented as the running of the system’s model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.[1]

In other words, this tool is used everywhere in science, to such an extent that it is unimaginable that any branch of science – including the social sciences – could survive today without it. These mathematical models are designed to reproduce the behavior of real systems. How good a model is depends on how well its assumed driving forces reproduce the actual measured results of the simulated system. This approach works especially well for simulating complex systems with many individual components and driving forces. As such, it is perfectly suited to simulating climate change. Figure 3 in last week’s blog showed an example of the framework of the forces assumed to drive climate and weather.

By definition, mathematical models are products of human efforts, which means that they are subjective: they can be skewed by the unscrupulous to agree with preconceived opinions and misrepresented as “real science.” Climate change is obviously fertile ground for this kind of misbehavior, especially because the consensus among the scientific community implies real consequences for all of our futures. Cases of intentional cheating in science are not confined to climate change; in most cases those creating the false information do so while seeking career advancement, promotion or fame. The general public is not usually privy to these shameful activities and usually lacks the prerequisites to understand either what is being done or why it matters. Climate change is different because it affects all of us – the public needs to understand both.

Some scientists object to other scientists’ attempts to warn us about the consequences of climate change. They claim that predicting consequences equates to predicting the future, a field outside the mandate of science (May 7, 2012 blog):

The letter was signed by 49 former NASA employees, including seven Apollo astronauts and two former directors of NASA’s Johnson Space Center, calling on NASA to move away from climate model predictions and to limit its stance to what can be “empirically proven.” The letter specifically targets James Hansen, Director of NASA’s Goddard Institute for Space Studies (GISS). (Hansen and GISS have been acting as “the canary in the coal mine,” warning for years about the consequences of relying on fossil fuels as our main source of energy.) The letter states, “We believe the claims by NASA and GISS, that man-made carbon dioxide is having a catastrophic impact on global climate change are not substantiated.” Their stated reasons include that “NASA is relying too heavily on complex climate models that have proven scientifically inadequate in predicting climate only one or two decades in advance” and that “There’s a concern that if it turns out that CO2 is not a major cause of climate change, NASA will have put the reputation of NASA, NASA’s current and former employees, and even the very reputation of science itself at risk of public ridicule and distrust.” This is backwards; it’s the letter that should be held up to public ridicule.

If the senior NASA employees had had their way, Figure 2 probably wouldn’t exist and I would have had a much harder time demonstrating how we can show the degree to which we are responsible for weather events and what we can do to modify our collective behavior. NASA’s fear of looking bad in the case that some of its predictions turn out somewhat differently than expected should not take precedence over our chances to decrease the odds of local disasters.

Not long after I started this blog (December 10, 2012), I described a situation in which I gave a talk to another campus’ physics department and got embroiled in a useless argument about whether or not trying to measure human contributions to climate change counted as “real” science – as compared to investigating the physical properties of a specific semiconducting material. At the end of this chat I was asked if I had heard Freeman Dyson’s take on the topic. I have.

Freeman Dyson is one of the most distinguished and recognized physicists, with many major accomplishments in a wide range of topics:

Freeman Dyson FRS is an English-born American theoretical physicist and mathematician. He is known for his work in quantum electrodynamics, solid-state physics, astronomy and nuclear engineering. Born: December 15, 1923 (age 94), Crowthorne, United Kingdom. Awards: Templeton Prize, Wolf Prize in Physics, Hughes Medal.

Education: Bates College, University of Cambridge, Cornell University, Winchester College

In 2009 David Biello published an interview with him in the Scientific American blog. Biello’s take on Dyson’s points prompted the rather unsubtle title, “Freeman Dyson and the irresistible urge to be contrary about climate change”:

At the luncheon put on by the Cato Institute, when the talk turned to climate change Dyson started out sounding as if the whole thing was overblown, noting that the prospect of global warming is a problem that should be taken seriously. But he also said that no one should be alarmed about it yet.

Then he outlined his main criticism: Too much of the science of climate change relies on computer models, he argued, and those models are crude mathematical approximations of the real world. After all, a simple cloud—small in scale, big in climate effects, the product of evaporation and condensation, all of which it is difficult to create equations for—eludes the most sophisticated climate models.

So climate modelers turn to what they call parameters or, as Dyson likes to call them: “fudge factors.” These are approximations, such as the average cloudiness of a particular spot at a particular time, that are then applied globally. With the help of about 100 of these parameters, models can now closely match the world’s present day climate, Dyson says. These models then, like the one developed at Princeton University where Dyson is a professor emeritus, are “useful for understanding climate but not for predicting climate.”

That’s too much of a temptation for scientists working on the problem, however. “If you live with models for 10 to 20 years, you start to believe in them,” Dyson said, witness the implosion of the financial markets after over-reliance on quantitative models. He characterized this over-reliance as a disease infecting everything from physics to biology: “A model is such a fascinating toy that you fall in love with your creation.”

Ultimately, “every model has to be compared to the real world and, if you can’t do that, then don’t believe the model.” And he noted that the real world has been through some significant climate changes before: witness a lush Sahara Desert thousands of years ago or the forests that once covered Greenland.

Of course, models have been tested against the real world (both today’s and eons ago’s) and many of Dyson’s other objections have been rebutted elsewhere. He also did not address the real world impacts already observed: ice melt, sea level rise, ocean acidification and more. His main concern seems to be that worrying about climate change distracts from more important problems such as poverty and infectious disease. Many might note that poverty (the inundation of Bangladesh) and infectious disease (improved conditions for transmission) are also problems exacerbated by climate change.

But Dyson’s purpose seems to be to throw out “heretical” ideas that can then spur further debate. (As even he would admit, his heresies are a little more grounded in the real world when he’s talking about nuclear weapons. Before discussing climate change, he told a roomful of people who probably want to put former President Ronald Reagan on Mount Rushmore that the Great Communicator blew a real chance to rid the world of nuclear weapons in 1986 because he was too attached to the “Star Wars” missile defense program.) As he said: “I know a lot about nuclear weapons and nothing about climate change.”

“I like to express heretical opinions,” Dyson said, with an impish gleam in his eye. “They might even happen to be true.”

I am sure that Dyson himself has used extensive computer modeling of various sorts, opening him up to the same kind of possible abuses that he describes. He is also more than familiar with the ways in which scientists work to minimize the impact of such manipulations.

I will let you draw your own conclusions.

Posted in Anthropocene, Anthropogenic, Climate Change, Sustainability

Local Attributions

Myles Allen, one of the pioneers of the emerging science of attributing extreme climate events to human influence, featured prominently in a Scientific American piece earlier this month:

But the radio voice added that it would be “impossible to attribute this particular event [floods in southern England] to past emissions of greenhouse gases,” said Allen in a commentary published in Nature shortly thereafter.

In 2003, that was the predominant view in the scientific community: While climate change surely has a significant effect on the weather, there was no way to determine its exact influence on any individual event. There are just too many other factors affecting the weather, including all sorts of natural climate variations.

His hunch held true. Nearly 15 years later, extreme event attribution not only is possible, but is one of the most rapidly expanding subfields of climate science.

The Bulletin of the American Meteorological Society now issues a special report each year assessing the impact of climate change on the previous year’s extreme events. Interest in the field has grown so much that the National Academy of Sciences released an in-depth report last year evaluating the current state of the science and providing recommendations for its improvement.

In 2004, he, Oxford colleague Daithi Stone, and Peter Stott of the Met Office co-authored a report that is widely regarded as the world’s first extreme event attribution study. The paper, which examined the contribution of climate change to a severe European heat wave in 2003 – an event that may have caused tens of thousands of deaths across the continent – concluded that “it is very likely that human influence has at least doubled the risk of a heat wave exceeding this threshold magnitude.”

The breakthrough paper took the existing science a step further. Using a climate model, the researchers compared simulations accounting for climate change with scenarios in which human-caused global warming did not exist. They found that the influence of climate change roughly doubled the risk of an individual heat wave. The key to the breakthrough was framing the question in the right way—not asking whether climate change “caused” the event, but how much it might have affected the risk of it occurring at all.
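That risk framing reduces to a short calculation: compare how often the extreme threshold is exceeded in model runs with and without human forcing. The ensemble counts below are invented for illustration; they are not numbers from the 2004 paper:

```python
# Probability-ratio framing used in event attribution studies:
# compare exceedance frequencies in "factual" simulations (with human forcing)
# and "counterfactual" simulations (without it). Counts here are invented.

def probability_ratio(exceed_with, n_with, exceed_without, n_without):
    """Risk ratio RR = P1/P0 and fraction of attributable risk FAR = 1 - P0/P1."""
    p1 = exceed_with / n_with        # event probability with human influence
    p0 = exceed_without / n_without  # probability in the counterfactual world
    rr = p1 / p0
    far = 1 - p0 / p1
    return rr, far

# Hypothetical ensembles: 100 exceedances in 1000 factual runs,
# 50 exceedances in 1000 counterfactual runs
rr, far = probability_ratio(100, 1000, 50, 1000)
print(f"risk ratio = {rr:.1f}, FAR = {far:.2f}")  # risk ratio = 2.0, FAR = 0.50
```

A FAR of 0.5 corresponds to the paper’s “at least doubled the risk” statement: in that framing, half of the event’s current probability is attributable to human influence.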

It is much easier to determine how humans have contributed to global climate change than to a specific event. Local weather event attribution requires an in-depth examination of the driving forces at a particular location and time, as well as a description of how that local system fits within the global structure.

Figure 1 – Average yearly global temperature with (empty circles) and without (full circles) El Niño Southern Oscillations

Figure 1 describes average global temperature changes from 1880 until now. It also distinguishes between years that experienced El Niño Southern Oscillations and years that didn’t. On a global scale, the trend in both categories follows the same pattern. This is obviously not necessarily true on the local scale. Indeed, many of the papers that try to describe weather on a local scale incorporate the estimated extent of the El Niño impact. The figure emphasizes that – global or local – one of the best ways to determine attribution for a particular weather event is to try to identify a multi-year pattern in which one can distinguish the presence or absence of a particular driving force.
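The pattern-matching idea behind Figure 1 can be mimicked with synthetic data: fit separate linear trends to El Niño and non-El Niño years and check whether the slopes agree. Everything below is simulated; it is not the GISS temperature record:

```python
# Toy version of Figure 1's comparison: do El Niño years and the other years
# share the same warming trend? The data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2018)
# Synthetic anomaly: 0.008 °C/yr trend + noise, with El Niño years ~0.1 °C warmer
el_nino = rng.random(years.size) < 0.3
anomaly = 0.008 * (years - 1880) + 0.1 * el_nino + rng.normal(0, 0.05, years.size)

slope_en = np.polyfit(years[el_nino], anomaly[el_nino], 1)[0]
slope_other = np.polyfit(years[~el_nino], anomaly[~el_nino], 1)[0]
print(f"El Niño years trend: {slope_en:.4f} °C/yr")
print(f"other years trend:   {slope_other:.4f} °C/yr")
# Both slopes recover the built-in 0.008 °C/yr: the underlying trend is the
# same whether or not the El Niño "driving force" is present.
```

On this toy series the two subsets recover the same built-in warming rate, which is the sense in which “the trend in both categories follows the same pattern” in the real record.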

I discussed Figure 2 in an earlier blog (October 3, 2017) in the context of global attribution and, as we will see below, it is currently the main tool being used to determine local weather event attributions. Figure 2 clearly shows the measured and calculated anomaly between temperature with human influence and that without.

Such a combination of superimposing observational data with and without human influence is now the cornerstone in determining human contributions to any extreme weather event.

Figure 2 – Global warming attributions – simulation of 20th century global mean temperatures (with and without human influences) compared to observations (NASA)

Figure 3 shows the driving forces included in climate simulations, as taken from the most recent National Climate Assessment Report (August 15, 2017 blog). For the first time in the history of the report, this edition includes a full chapter on climate event attribution (Chapter 3).

The human attributions are included among sections of the gray box labeled “climate forcing agents.”

Once the simulations fit the measured results, as they do in Figure 2, one can choose particular sections of interest within that gray box. In Figure 2, this is done using “human influence,” a broad term that includes CO2 and non-CO2 greenhouse gases, aerosols, and land use.

Figure 3 – Simplified conceptual modeling framework for the climate system as implemented in many climate models (CSSR, 4th National Climate Assessment, Chapter 2)

The PRECIS model is a great example of a relatively simple model that has been adapted to determine specific contributions to weather and climate events:

PRECIS is a regional climate model (RCM) ported to run on a Linux PC with a simple user interface, so that experiments can easily be set up over any region of the globe. PRECIS is designed for researchers (with a focus on developing countries) to construct high-resolution climate change scenarios for their region of interest. These scenarios can be used in impact, vulnerability and adaptation studies, and to aid in the preparation of National Communications, as required under Articles 4.1 and 4.8 of the United Nations Framework Convention on Climate Change (UNFCCC).[1]

The American Meteorological Society is now publishing yearly reports covering such events. Figure 4 summarizes those that took place in 2016, as shown in its January 2018 supplementary bulletin.

Figure 4 – Location and types of events analyzed in the special supplement to the Bulletin of the American Meteorological Society, Vol. 99, No. 1, January 2018

Each event constitutes a chapter in this report. Here are two such events (in bold) and the authors’ conclusions as to their attributions.

The 2016 extreme warmth across Asia would not have been possible without climate change. The 2015/16 El Niño also contributed to regional warm extremes over Southeast Asia and the Maritime Continent.

Conclusions. All of the risk of the extremely high temperatures over Asia in 2016 can be attributed to anthropogenic warming. In addition, the ENSO condition made the extreme warmth two times more likely to occur. It is found that anthropogenic warming contributed to raising the level of event probability almost everywhere, although the 2015/16 El Niño contributed to a regional increase of warm events over the Maritime Continent, the Philippines, and Southeast Asia, but had little significant contribution elsewhere in Asia.

The record temperature of April 2016 in Thailand would not have occurred without the influence of both anthropogenic forcing and El Niño, which also increased the likelihood of low rainfall.

Conclusions. Our analysis demonstrates that anthropogenic climate change results in a clear shift of the April temperature distribution toward warmer conditions and a more moderate, albeit distinct, shift of the rainfall distribution toward drier Aprils in Thailand. The synergy between anthropogenic forcings and a strong El Niño was crucial to the breaking of the temperature record in 2016, which our results suggest would not have occurred if one of these factors were absent. Rainfall as low as in 2016 is found to be extremely rare in La Niña years. The joint probability for hot and dry events similar to April 2016 is found to be relatively small (best estimate of about 1%), which implies that in addition to the drivers examined here, other possible causes could have also played a role, like moisture availability and transport (especially in the context of the prolonged drought), atmospheric circulation patterns, and the effect of other non-ENSO modes of unforced variability.

The attributions described here are mostly determined via a mixed analysis of direct observations and computer simulations. Such techniques are common in science. But some of my colleagues believe that using computer simulations to address issues such as anthropogenic climate change is inappropriate. Next, I will address an example taken from cosmology of how computer simulations have been used successfully to describe physical phenomena.

Posted in Anthropocene, Anthropogenic, Climate Change, UN, UNFCCC

Fake News: Attributions

Figure 1 – Snow in the Sahara

If you do a Google image search for “Snow in the Sahara,” you will get a screen full of images similar to this one. What would bring you to search for that particular term? I read The New York Times every morning and a couple of weeks ago I came across a story about just that phenomenon. For me, the photograph raised memories from close to 50 years ago when I first came to the US to do my postdoctoral education. I grew up in Israel, a small country with a Mediterranean climate where snow was a relatively rare occurrence. On my first visit to the US, my family and I drove from the east coast to California. We took Interstate 40 through New Mexico and Arizona, passing through the Mojave Desert. When I looked out the window I saw the unbelievable sight of a snow-covered terrain with the odd-looking Joshua trees emerging throughout.

Figure 2 – The Mojave Desert in winter

The photograph in the NYT triggered in me a similar wonder on a global scale. Going through the article, I found some explanations:

“The Sahara is as large as the United States, and there are very few weather stations,” he added. “So it’s ridiculous to say that this is the first, second, third time it snowed, as nobody would know how many times it has snowed in the past unless they were there.”

Rein Haarsma, a climate researcher at the Royal Netherlands Meteorological Institute, cautioned against ascribing the white-capped dunes to changing temperatures because of pollution.

“It’s rare, but it’s not that rare,” Mr. Haarsma said in an interview. “There is exceptional weather at all places, and this did not happen because of climate change.”

The snow fell in the Sahara at altitudes of more than 3,000 feet, where temperatures are low anyway. But Mr. Haarsma said cold air blowing in from the North Atlantic was responsible.

Those icy blasts usually sweep into Scandinavia and other parts of Europe, Mr. Haarsma explained, but in this case, high-pressure systems over the Continent had diverted the weather much farther south.

Mr. Haarsma offered a mechanism to explain why we see snow in the Sahara: he claimed it is not due to climate change but instead caused by cold air blowing down from the North Atlantic. He did not, however, explain why we are now seeing so much cold air from the North Atlantic reach as far south as the Sahara. Well, snow in the Sahara is not the only so-called unique extreme weather we are seeing.

I live in New York City and we have just experienced an extreme cold event that started around Christmas and lasted until mid-January with a few short breaks. The freeze affected the entire northeastern US, prompting President Trump to tweet:

In the East, it could be the COLDEST New Year’s Eve on record. Perhaps we could use a little bit of that good old Global Warming that our Country, but not other countries, was going to pay TRILLIONS OF DOLLARS to protect against. Bundle up!

4:01 PM – 28 Dec 2017

He got more than 60,000 retweets and more than 200,000 likes, along with countless comments and replies. Such is the power of social media these days.

Erik Ortiz wrote about this event in “Why climate change may be to blame for dangerous cold blanketing eastern U.S.”:

A study published last year in the journal WIRES Climate Change, however, lays out how the warming Arctic and melting ice appear to be linked to cold weather being driven farther south.

“Very recent research does suggest that persistent winter cold spells (as well as the western drought, heatwaves, prolonged storminess) are related to rapid Arctic warming, which is, in turn, caused mainly by human-caused climate change,” Jennifer Francis, a climate scientist at Rutgers University and one of the study’s authors, said in an email.

But researchers have said the loss of sea ice and increased snow cover in northern Asia is helping to weaken the polar vortex.

In addition, “abnormally” warm ocean temperatures off the West Coast are causing the jet stream over North America — which moves from west to east and follows the boundaries between hot and cold air — to “bulge” northward, Francis said.

That scenario is what has caused a lack of storms so far this winter in California and Alaska’s unusually warm and record-breaking temperatures, she added.

Meanwhile, the wrinkled jet stream as it travels east is also being pushed farther south, according to her research.

Mr. Ortiz included a picture from NOAA (National Oceanic and Atmospheric Administration) that shows the schematic shift between a strong jet stream – which more or less confines the extreme cold air to the Arctic – and the global-warming-caused weaker jet stream, which allows the air to push further south to regions including the northeastern US and the Sahara Desert (Figure 3).

Figure 3 – The shifting jet stream (NOAA)

Figure 4 – Change of temperature in the Arctic vs. globally, 1880-2010s

Figure 4 summarizes the Arctic temperature changes as compared to the average global temperature changes from 1880 to the 2010s.

Bloomberg published a series of three articles on “How a Melting Arctic Changes Everything,” in which it tried to summarize the global changes resulting from the warming Arctic:

Part I – The Bare Arctic

 Part II – The Political Arctic

Part III – The Economic Arctic

There is almost universal agreement that the accelerated temperature increase in the Arctic is the result of climate change by way of changes in the region’s ground reflectivity. In other words, while Mr. Haarsma was basically right about the mechanism of the snow falling in the Sahara Desert, he was wrong about its attribution.

It is much more difficult to attribute localized, specific events to human activities than it is to attribute global-scale climate change (multiyear weather patterns). But most people’s perception of blame derives from specific extreme events in specific locations (i.e., we understand concrete examples best), so understanding where we stand in our efforts to create such culpability maps is of the utmost importance. Next week I will try to show an analysis of a specific case.


Fake News

I am an old guy; my bucket list is modest but I am trying to check off the remaining items while I can still fully enjoy them. Only one remains out of my reach: I want to travel to Mars. Not only would I like to see planet Earth from space with my own eyes but, as a cosmology teacher, it would give me joy to experience even a fraction of the distances that the field studies. I am realistic enough to know that it won’t happen. That is, after all, one of the attractions of a bucket list.

Well, according to Elon Musk I might yet get my chance. Granted, a ticket will cost me $200,000.

Meanwhile, some people might get an even better deal. Stephen Hawking just announced that he’s willing to pay for people’s tickets, although in this case the trip is to Venus. There is only one caveat: the voyagers must “qualify” as climate change deniers. Unfortunately, that takes me out of the running. There is a reason for Hawking’s publicity stunt. There are strong correlations between the conditions one would find on Venus and those we can expect to find on Earth should climate change reach its logical conclusion according to business as usual projections (June 25, 2012). If only I could agree with denier logic that carbon dioxide is a benign gas and all of its links to climate change are “fake news” that scientists fabricate to get grant money from the government, I’d be well on my way toward fulfilling my space travel goals. After all, there probably wouldn’t be that much difference between the experiences of visiting Mars or Venus.

Fake news is a popular topic these days. It’s an excellent cover for ignorance – it means that anyone can make or dismiss an argument – whether or not they have supporting facts or reproducible observations to follow up their claim.

Deniers use the label of fake news to delegitimize climate change in the eyes of the public, especially when it comes to mitigation efforts that require voter support:

“Global Warming: Fake News from the Start” by Tim Ball and Tom Harris

President Donald Trump announced the U.S. withdrawal from the Paris Agreement on climate change because it is a bad deal for America. He could have made the decision simply because the science is false, but most of the public have been brainwashed into believing it is correct and wouldn’t understand the reason.

Canadian Prime Minister Justin Trudeau, and indeed the leaders of many western democracies, though thankfully not the U.S., support the Agreement and are completely unaware of the gross deficiencies in the science. If they did, they wouldn’t be forcing a carbon dioxide (CO2) tax, on their citizens.

Trudeau and other leaders show how little they know, or how little they assume the public know, by calling it a ‘carbon tax.’ But CO2 is a gas, while carbon is a solid. By calling the gas carbon, Trudeau and others encourage people to think of it as something ‘dirty’, like graphite or soot, which really are carbon. Calling CO2 by its proper name would help the public remember that it is actually an invisible, odorless gas essential to plant photosynthesis.

…CO2 is not a pollutant…the entire claim of anthropogenic global warming (AGW) was built on falsehoods and spread with fake news.

…In 1988 Wirth was in a position to jump start the climate alarm. He worked with colleagues on the Senate Energy and Natural Resources Committee to organize a June 23, 1988 hearing where Dr. James Hansen, then the head of the Goddard Institute for Space Studies (GISS), was to testify…Specifically, Hansen told the committee,

“Global warming has reached a level such that we can ascribe with a high degree of confidence a cause and effect relationship between the greenhouse effect and observed warming…It is already happening now…The greenhouse effect has been detected and it is changing our climate now…We already reached the point where the greenhouse effect is important.”

…More than any other event, that single hearing before the Energy and Natural Resources Committee publicly initiated the climate scare, the biggest deception in history. It created an unholy alliance between a bureaucrat and a politician that was bolstered by the U.N. and the popular press leading to the hoax being accepted in governments, industry boardrooms, schools, and churches across the world.

Trump must now end America’s participation in the fake science and the fake news of man-made global warming. To do this, he must withdraw the U.S. from further involvement with all U.N. global warming programs, especially the IPCC as well as the agency that now directs it—the United Nations Framework Convention on Climate Change. Only then will the U.S. have a chance to fully develop its hydrocarbon resources to achieve the president’s goal of global energy dominance.

The Ball and Harris piece claims that people who believe in climate change have been duped by fake news. Their key point is that the notion of carbon dioxide being the chemical largely responsible for climate change is utterly false. Their reasoning is twofold. The first objection is semantic – they take exception to calling carbon dioxide “carbon.” This is largely an issue of units. Most scientific publications use this association mainly because carbon dioxide is not the only anthropogenic greenhouse gas; other gases, such as methane, are often expressed in terms of units of carbon dioxide. Any conversion between the units of “carbon” and “carbon dioxide” involves multiplication by 44/12, or roughly 3.7, which is the ratio of the molecular weight of carbon dioxide to the atomic weight of carbon. I will remind those of you to whom the last sentence is a foreign language that some prerequisites – especially in vocabulary and basic math – are necessary when we describe the details of the physical environment. Ball and Harris’ second objection is that carbon dioxide is not a “pollutant” but an “invisible, odorless gas essential to plant photosynthesis.” While the latter statement is true, it does not represent the whole truth. My Edward Teller quote from last week directly addressed this issue. Essentially, carbon dioxide is a (measurable) pollutant because of its optical absorption properties, i.e. it is responsible for a great deal of the observed climate change because of its preferential absorption of the longer wavelengths of the electromagnetic spectrum.
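For readers who prefer to see the unit conversion spelled out, here is a minimal sketch in code; the 10-gigatonne emission figure in the example is a round illustrative number, not a measured value:

```python
# Converting between a mass of carbon (C) and a mass of carbon dioxide (CO2).
# The factor is the ratio of the molecular weight of CO2 (44) to the
# atomic weight of C (12): 44/12, or roughly 3.7.

C_TO_CO2 = 44.0 / 12.0

def carbon_to_co2(mass_c):
    """Mass of CO2 that contains the given mass of carbon."""
    return mass_c * C_TO_CO2

def co2_to_carbon(mass_co2):
    """Mass of carbon contained in the given mass of CO2."""
    return mass_co2 / C_TO_CO2

# Example: an emission reported as 10 gigatonnes of carbon is the same
# emission reported as ~36.7 gigatonnes of carbon dioxide.
print(round(carbon_to_co2(10), 1))  # 36.7
```

This is why the same emission inventory can appear with two different-looking totals depending on whether it is reported in units of carbon or of carbon dioxide.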

Figure 1 – Atmospheric carbon dioxide concentration over the last 400,000 years

Figure 1 shows the atmospheric concentrations of carbon dioxide over the last 400,000 years. Again, the famous hockey-stick curve shows up in the near-vertical rise after the industrial revolution at the far right of the graph. Last week’s blog presented data showing that the carbon dioxide added to the atmosphere between the industrial revolution and the 1950s contained no carbon-14, indicating the ancient origins of fossil fuels.

Furthermore, one of my earliest blogs (June 25, 2012) illustrates the carbon cycle (again – much of it in the form of carbon dioxide): where it’s going and where it’s coming from. Photosynthesis and respiration are part of it. Without the anthropogenic contributions (burning fossil fuels, land use change, cement production, etc.), emission and sequestration balance out, explaining the long steady state (approximately constant concentration) shown in Figure 1. Once we factor in the anthropogenic contributions and how they impact atmospheric absorption properties, we start to see the changing energy balance with the sun and hence the change of climate.

Ball and Harris’ piece at least presented a mechanism of internal logic that can be refuted. Many are satisfied with simply branding something fake news. Unfortunately, social media makes spreading this sort of false information a relatively painless process. Even Google is facing flak for its role in spreading fake news:

“How Climate Change Deniers Rise to the Top in Google Searches” by Hiroko Tabuchi

Groups that reject established climate science can use the search engine’s advertising business to their advantage, gaming the system to find a mass platform for false or misleading claims.

Type the words “climate change” into Google and you could get an unexpected result: advertisements that call global warming a hoax. “Scientists blast climate alarm,” said one that appeared at the top of the search results page during a recent search, pointing to a website, DefyCCC, that asserted: “Nothing has been studied better and found more harmless than anthropogenic CO2 release.”

Not everyone who uses Google will see climate denial ads in their search results. Google’s algorithms use search history and other data to tailor ads to the individual, something that is helping to create a highly partisan internet.

It seems that even parts of the internet that we consider to be neutral are starting to reflect the increasingly combative political climate and the related problem of fake news.


“Natural” or “Anthropogenic”? – Climate Change

Last week’s blog looked at various methods of distinguishing “natural” vs. “artificial” vanilla. I used this as a jumping-off point to facilitate answering the much more important question of how we distinguish anthropogenic climate change from “natural” climate change (i.e. that which took place long before humans had the capacity to inflict any changes on the physical environment of the planet).

Deniers’ most common argument is that they agree that the climate is changing but that it has been doing so since long before humans were around. More specifically, they claim that carbon dioxide couldn’t be a cause of such changes in the here and now because it is a “natural,” “harmless” compound.

Tom Curtis summarized the issue and enumerated the problems with such reasoning in his July 25, 2012 post on Skeptical Science. Skeptical Science is a blog for which I have the utmost respect (July 13, 2013) and which published one of my own guest posts. Here is a condensed outline of the data Curtis provided:

There are ten main lines of evidence to be considered:

1. The start of the growth in CO2 concentration coincides with the start of the industrial revolution, hence anthropogenic;

2. Increase in CO2 concentration over the long term almost exactly correlates with cumulative anthropogenic emissions, hence anthropogenic;

3. Annual CO2 concentration growth is less than Annual CO2 emissions, hence anthropogenic;

4. Declining C14 ratio indicates the source is very old, hence fossil fuel or volcanic (ie, not oceanic outgassing or a recent biological source);

5. Declining C13 ratio indicates a biological source, hence not volcanic;

6. Declining O2 concentration indicate combustion, hence not volcanic;

7. Partial pressure of CO2 in the ocean is increasing, hence not oceanic outgassing;

8. Measured CO2 emissions from all (surface and beneath the sea) volcanoes are one-hundredth of anthropogenic CO2 emissions; hence not volcanic;

9. Known changes in biomass too small by a factor of 10, hence not deforestation; and

10. Known changes of CO2 concentration with temperature are too small by a factor of 10, hence not ocean outgassing.
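Point 3 on Curtis’ list – that the atmosphere gains less CO2 each year than we emit – lends itself to a back-of-the-envelope check. The sketch below uses illustrative round numbers that I am assuming for the example (they are not Curtis’ figures):

```python
# If the atmosphere retains less CO2 than humans emit, the natural world
# must be a net absorber of CO2 -- it cannot be the source of the increase.

emissions = 36.0        # annual anthropogenic emissions, Gt CO2 (round number)
ppm_rise = 2.3          # assumed annual rise in atmospheric concentration, ppm
GT_CO2_PER_PPM = 7.8    # ~7.8 Gt of CO2 raises the concentration by ~1 ppm

retained = ppm_rise * GT_CO2_PER_PPM   # CO2 that stayed in the atmosphere
airborne_fraction = retained / emissions

# Roughly half of what we emit stays airborne; the rest is taken up by
# oceans and vegetation, so natural sinks exceed natural sources.
print(round(airborne_fraction, 2))  # 0.5
```

Whatever the exact numbers in a given year, the logic is the same: as long as the retained amount is smaller than the emitted amount, nature is mopping up part of our emissions, not adding to them.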

Figure 1 – Anthropogenic and total atmospheric carbon dioxide concentrations and the 14C isotopic decline (for more details see my Oct 3, 2017 blog on attributions)

Figure 1 demonstrates the changes in the first four items on this list, which I have also described in earlier blogs. The decline of 14C is the common denominator with the characterization of “natural” vanilla from last week’s blog. This important metric, however, becomes far less useful after the end of WWII because of the atmospheric contamination from the nuclear testing that took place at that time (As stated in the caption for Figure 1: “After 1955 the decreasing 14C trend ends due to the overwhelming effect of bomb 14C input into the atmosphere”).

Yet even the limited decline of 14C from 1900 to 1950 is telling: that period constitutes the start of the significant anthropogenic contributions to the atmospheric concentrations of carbon dioxide.

These important results present a strong argument that most of the increase in the atmospheric concentrations of carbon dioxide comes from humans burning fossil fuels. Deniers say that this is not sufficient grounds for associating the increased carbon dioxide concentration with climate change.

To make the argument that carbon dioxide is the main greenhouse gas responsible for climate change, I will quote one of the most famous scientists of the 20th century. Far from being thought of as a climate change–centered scientist, Edward Teller is instead known for creating the hydrogen bomb after the Second World War.

But Teller associated carbon dioxide with the global climate when he made a speech at the celebration of the centennial of the American oil industry in 1959:

Ladies and gentlemen, I am to talk to you about energy in the future. I will start by telling you why I believe that the energy resources of the past must be supplemented. First of all, these energy resources will run short as we use more and more of the fossil fuels. But I would […] like to mention another reason why we probably have to look for additional fuel supplies. And this, strangely, is the question of contaminating the atmosphere. [….] Whenever you burn conventional fuel, you create carbon dioxide. [….] The carbon dioxide is invisible, it is transparent, you can’t smell it, it is not dangerous to health, so why should one worry about it?

Carbon dioxide has a strange property. It transmits visible light but it absorbs the infrared radiation which is emitted from the earth. Its presence in the atmosphere causes a greenhouse effect [….] It has been calculated that a temperature rise corresponding to a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York. All the coastal cities would be covered, and since a considerable percentage of the human race lives in coastal regions, I think that this chemical contamination is more serious than most people tend to believe.

This connection is a simple physical property of carbon dioxide that falls under the scientific discipline called “spectroscopy” (December 10, 2012). The connection is one of the most important parameters that characterizes climate change, as shown in Figure 2; we call it “climate sensitivity.”

Figure 2 – Projected equilibrium global mean temperature increase as a function of projected carbon dioxide increase (from the 4th IPCC report – AR4; see my December 10, 2012 blog)

Radiative forcing due to doubled CO2

CO2 climate sensitivity has a component directly due to radiative forcing by CO2, and a further contribution arising from climate feedbacks, both positive and negative. “Without any feedbacks, a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1 °C global warming, which is easy to calculate and is undisputed. The remaining uncertainty is due entirely to feedbacks in the system, namely, the water vapor feedback, the ice-albedo feedback, the cloud feedback, and the lapse rate feedback”;[14] addition of these feedbacks leads to a value of the sensitivity to CO2 doubling of approximately 3 °C ± 1.5 °C, which corresponds to a value of λ of 0.8 K/(W/m2).
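The numbers in the excerpt are internally consistent, which is easy to verify: multiplying the sensitivity parameter λ by the forcing from doubled CO2 reproduces the ~3 °C best estimate. A minimal sketch using only the values quoted above:

```python
# Equilibrium warming dT = lambda * F, using the values quoted above.
LAMBDA = 0.8          # sensitivity parameter with feedbacks, K per (W/m^2)
F_DOUBLED_CO2 = 3.7   # radiative forcing from doubling CO2, W/m^2

delta_t = LAMBDA * F_DOUBLED_CO2
print(round(delta_t, 1))  # 3.0 -- matching the ~3 C best estimate

# Without feedbacks, the quote implies a sensitivity of 1/3.7 ~ 0.27
# K/(W/m^2), which recovers the undisputed ~1 C of direct warming.
print(round(0.27 * F_DOUBLED_CO2, 1))  # 1.0
```

The gap between 1 °C and ~3 °C is exactly the feedback contribution the quote describes; the ±1.5 °C uncertainty lives entirely in that second term.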

In the next blog I will look at how climate change deniers self-justify labelling this as “fake news,” thus relieving themselves of the burden of engaging with (or even considering) the associated science.


“Natural” or “Anthropogenic”? – Vanilla Extract and Climate Change

Figure 1 – My Christmas present

Happy New Year!!! I got a yummy Christmas present from a family member that doubled as a hint to try to improve my cooking: homemade vanilla extract. They got the vanilla beans from Honduras and immersed them in vodka!

As a good giftee, I will use the extract in some of my food preparation (I like the flavor). The gift also reminded me of a time a few years back when a gentleman from our neighborhood came to my school to ask for advice on how he could distinguish a “natural” vanilla flavor from an artificial one. Neither I nor any of my colleagues could come up with satisfactory answers, but we were intrigued by the differentiation he sought.

My own interest in this distinction is a bit broader: I want to know how to convince all of you that the global climate change we are experiencing is “artificial,” as in man-made or anthropogenic, and not “natural.”

I get constant complaints from many good friends for naming carbon dioxide and methane – the main components of natural gas – as pollutants that cause climate change. This is especially true given how often I blame climate change for the ultimate slaughter of our children and grandchildren should we continue our business-as-usual living practices. After all, humans and most other living organisms exhale carbon dioxide, and we and many animals around us naturally emit flatulence with methane as an important component.

Well, I will try to start our new year with the much more pleasant vanilla flavor and follow that up next time by utilizing a common technique for distinguishing natural from artificial vanilla to parallel the differences between natural and artificial origins of climate change. We will find that in both cases a learning curve is required to follow some of the language used.

The main interest of the gentleman who came to us seeking help in distinguishing the source of the vanilla was money. According to him, many people who sell the extract and claim it to be natural are cheating, actually selling the much cheaper artificial variety at a bumped-up price. Our interest in recognizing that we are the instigators of most of the climate change that we are experiencing lies in understanding that it is up to us to stop that forward momentum.

The identification of the origin of vanilla is, in essence, a legal matter, as Forbes’ Eustacia Huen writes in “What Manufacturers Really Mean By Natural And Artificial Flavors”:

According to the U.S. Food and Drug Administration’s (FDA) Code of Federal Regulations (Title 21), the term “natural flavor” essentially has an edible source (i.e. animals and vegetables). Artificial flavors, on the other hand, have an inedible source, which means you can be eating anything from petroleum to paper pulp processed to create the chemicals that flavor your food. For example, Japanese researcher Mayu Yamamoto discovered a way to extract vanillin, the compound responsible for the smell and flavor of vanilla, from cow poop in 2006, as reported by Business Insider.

Gary Reineccius’ “What is the difference between artificial and natural flavors?” in Scientific American defines it somewhat differently:

Natural and artificial flavors are defined for the consumer in the Code of Federal Regulations. A key line from this definition is the following: ” a natural flavor is the essential oil, oleoresin, essence or extractive, protein hydrolysate, distillate, or any product of roasting, heating or enzymolysis, which contains the flavoring constituents derived from a spice, fruit or fruit juice, vegetable or vegetable juice, edible yeast, herb, bark, bud, root, leaf or similar plant material, meat, seafood, poultry, eggs, dairy products, or fermentation products thereof, whose significant function in food is flavoring rather than nutritional.” Artificial flavors are those that are made from components that do not meet this definition.

The question at hand, however, appears to be less a matter of legal definition than the “real” or practical difference between these two types of flavorings. There is little substantive difference in the chemical compositions of natural and artificial flavorings. They are both made in a laboratory by a trained professional, a “flavorist,” who blends appropriate chemicals together in the right proportions. The flavorist uses “natural” chemicals to make natural flavorings and “synthetic” chemicals to make artificial flavorings. The flavorist creating an artificial flavoring must use the same chemicals in his formulation as would be used to make a natural flavoring, however. Otherwise, the flavoring will not have the desired flavor. The distinction in flavorings–natural versus artificial–comes from the source of these identical chemicals and may be likened to saying that an apple sold in a gas station is artificial and one sold from a fruit stand is natural.

So is there truly a difference between natural and artificial flavorings? Yes. Artificial flavorings are simpler in composition and potentially safer because only safety-tested components are utilized. Another difference between natural and artificial flavorings is cost. The search for “natural” sources of chemicals often requires that a manufacturer go to great lengths to obtain a given chemical. Natural coconut flavorings, for example, depend on a chemical called massoya lactone. Massoya lactone comes from the bark of the Massoya tree, which grows in Malaysia. Collecting this natural chemical kills the tree because harvesters must remove the bark and extract it to obtain the lactone. Furthermore, the process is costly. This pure natural chemical is identical to the version made in an organic chemists laboratory, yet it is much more expensive than the synthetic alternative. Consumers pay a lot for natural flavorings. But these are in fact no better in quality, nor are they safer, than their cost-effective artificial counterparts.

As we can read in the last two paragraphs of the Scientific American piece, in terms of functionality (i.e. flavor) there is not much difference between the “artificial” and the “natural”; if anything, the synthetic variety is considerably purer.

Chemical & Engineering News (C&EN) tackles the full complexity of the issue in Melody M. Bomgardner’s “The problem with vanilla: After vowing to go natural, food brands face a shortage of the favored flavor.” The essence of this article is easily summed up as follows:

In brief:

Vanilla is perhaps the world’s most popular flavor, but less than 1% of it comes from a fully natural source, the vanilla orchid. In 2015, a host of big food brands, led by Nestlé, vowed to use only natural flavors in products marketed in the U.S.—just as a shortage of natural vanilla was emerging. In the following pages, C&EN explains how flavor firms are working to supply.

However, we need to delve further.

In Réunion, output of vanilla soared thanks to the Albius method, and orchid cultivation expanded to nearby Madagascar. Today, about 80% of the world’s natural vanilla comes from smallholder farms in Madagascar. There, locals continue to pollinate orchids by hand and cure the beans in the traditional fashion.

It didn’t take long for vanilla demand to exceed supply from the farms of Madagascar. In the 1800s and 1900s, chemists took over from botanists to expand supply of the flavor. Vanillin, the main flavor component of cured vanilla beans, was synthesized variously from pine bark, clove oil, rice bran, and lignin.

Rhône-Poulenc, now Solvay, commercialized a pure petrochemical route in the 1970s. In recent years, of the roughly 18,000 metric tons of vanilla flavor produced annually, about 85% is vanillin synthesized from the petrochemical precursor guaiacol. Most of the rest is from lignin.

But the traditional vanilla bean is starting to enjoy a renaissance, thanks to consumer demand for all-natural foods and beverages. Last year, a string of giant food companies, including General Mills, Hershey’s, Kellogg’s, and Nestlé, vowed to eliminate artificial flavors and other additives from many foods sold in the U.S.

Figure 2 – The various ways to vanillin

There is a problem, however: World production of natural vanilla is tiny and has been falling in recent years. Less than 1% of vanilla flavor comes from actual vanilla orchids. With demand on the upswing, trade in the coveted flavor is out of balance.

Flavor companies are working feverishly to find additional sources of natural vanillin and launch initiatives to boost the quality and quantity of bean-derived vanilla. Suppliers such as Symrise, International Flavors & Fragrances (IFF), Solvay, and Borregaard are using their expertise along the full spectrum of natural to synthetic to help food makers arrive at the best vanilla flavor for each product.

Food makers, meanwhile, are confronting skyrocketing costs for natural vanilla, reformulation challenges, complicated labeling laws, and difficult questions about what is “natural.”

Although consumer disdain for artificial ingredients has been building for years, credit – or blame – for last year’s wave of “all natural” announcements goes to Nestlé, which in February 2015 was the first major brand to announce plans to eliminate artificial additives from chocolate candy sold in the U.S. The announcement upended the mass-market chocolate industry practice of adding synthetic vanillin to counter the bitterness of cocoa.

For a big food firm, however, switching to natural vanilla is akin to squeezing an elephant into a Volkswagen. While the ink was drying on those all-natural announcements last year, output of Madagascar vanilla beans had plummeted to 1,100 metric tons, about half the normal harvest. That, along with rising demand, caused prices to more than double to roughly $225 per kg by the middle of last year, according to Mintec, a raw material price-tracking firm.

Cured vanilla beans contain only 2% of extractable vanilla flavor, meaning prices for pure vanilla reached an eye-popping $11,000 per kg. The industry is closely watching this year’s harvest, hoping to see vanilla costs eventually return to pre-2012 levels of about $25 per kg for beans or $1,250 for vanilla.

The clear distinction we can make is between the petroleum-based guaiacol route and all the other routes, which originate from living plants.

The distinction between the “natural” and the “synthetic” vanilla follows the same rationale that I covered in my October 3, 2017 blog, where I described the human contributions to climate change:

14C is a radioisotope of carbon whose atoms contain 6 protons and 8 neutrons, as compared to the more abundant isotope of carbon (12C), which contains 6 protons and 6 neutrons. The radioisotope is unstable and slowly converts to nitrogen (14N) by converting one neutron into one proton. The conversion rate is measured through a parameter called the half-life: if we start with a certain amount of the material, half of it will convert into 14N in that period of time. The half-life of 14C is 5,730 years. The natural abundance of this isotope in the atmosphere is about one atom in a trillion. Plants that grow by photosynthesis of atmospheric carbon dioxide end up with the same relative abundance of the carbon isotope as the atmosphere.

All the routes to the “natural” vanillin in Figure 2, except for the petroleum route, originate from photosynthetic plants – within the plants’ lifetimes. The plants photosynthesize atmospheric carbon dioxide, and their lifetimes are considerably shorter than 5,730 years (the half-life of the carbon-14 they absorbed from the atmosphere), so we expect the vanillin extracted through any of the “natural” routes to have the same C-14 concentration as the atmosphere.

The petroleum route is based on petroleum that was formed millions of years ago via the decay of plants and digestion by anaerobic (oxygen-free) bacteria. That’s a much longer time than the half-life of C-14, so we expect the vanillin that we get from this route to have essentially no C-14. Measuring C-14 in the extract is a quick and easy job for laboratories that are equipped to do so.
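The decay logic above can be sketched in a few lines: the fraction of C-14 remaining after a time t is 0.5 raised to the power t divided by the half-life. This is an illustrative calculation (the function name and sample ages are mine), not a lab protocol:

```python
HALF_LIFE_C14 = 5730.0  # half-life of carbon-14, in years

def c14_fraction_remaining(age_years, half_life=HALF_LIFE_C14):
    """Fraction of the original C-14 left after age_years of decay."""
    return 0.5 ** (age_years / half_life)

# Vanillin from a recently harvested plant: essentially the atmospheric level.
print(round(c14_fraction_remaining(5), 4))   # ~0.9994
# Vanillin from petroleum formed tens of millions of years ago:
# the fraction underflows to zero, i.e., no detectable C-14.
print(c14_fraction_remaining(50_000_000))
```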

A different differentiation method is based on compounds that are present in “natural” vanilla extract but absent from synthetic vanillin. One of them is described below:

The production of vanilla beans is quite expensive, since it is a very labor intensive process and harvesting takes place 2 to 3 years after planting. This drives the price of natural vanilla extract to about three to five times higher than artificial vanilla preparations. Due to quality, price concerns and economically motivated frauds, it is important to differentiate between natural and artificial forms of vanilla extracts. Apart from vanillin, natural vanilla extracts have 4-hydroxybenzaldehyde, which is absent in artificial vanilla flavorings. This compound can be used as a marker ion to rapidly differentiate between natural and artificial vanilla preparations.[2]

This method is more general and can use a wider variety of analytical instruments compared to the radioactivity measurements, provided that the marker compounds are present.

Next week I will return to the October 3, 2017 blog about attribution to solidify the analogy between vanilla and the anthropogenic origins of climate change.


Invasive Species, Collective Suicide, and Self-Inflicted Genocide

Recently, the NYT published some answers to questions posed by readers: “Are We an Invasive Species? Plus, Other Burning Climate Questions.” One of these questions was whether or not humans count as an invasive species:

Are we an invasive species? 

By Livia Albeck-Ripka

In a high-rise in Malaysia’s capital in 1999, a group of scientists — convened by the International Union for Conservation of Nature to designate “100 of the World’s Worst Invasive Alien Species” — asked themselves this very question.

“Every single person in the room agreed, humans are the worst invasive species,” said Daniel Simberloff, a professor of environmental science at the University of Tennessee, who was at the meeting. “But the I.U.C.N. disagreed.” Homo sapiens, which has colonized and deforested much of the Earth and pumped enough carbon dioxide into the atmosphere to change its climate, was excluded from the list.

While the meaning of “invasive” has been the subject of some debate, the generally accepted definition is a plant, animal or other organism that, aided by humans, has recently moved to a place where it is nonnative, to detrimental effect. Humans don’t fit that definition, said Piero Genovesi, chairman of the I.U.C.N.’s invasive species group, because they are the ones doing the moving. A more useful way to think about ourselves, Dr. Genovesi said, is as the drivers of every problem conservation tries to remedy.

The piece reminded me of something I read in 2011 by a science writer for the Smithsonian, which addressed the same question in a somewhat clearer way:

Are Humans an Invasive Species?

Sarah Zielinski

Let’s start with the definition of an invasive species. It turns out, it’s not so simple. The legal definition in the United States is “an alien species whose introduction does or is likely to cause economic or environmental harm or harm to human health.” The International Union for Conservation of Nature (IUCN), which developed the list of the 100 world’s worst from which our invasive mammals piece originated, defines them as “animals, plants or other organisms introduced by man into places out of their natural range of distribution, where they become established and disperse, generating a negative impact on the local ecosystem and species.” And a 2004 paper in Diversity and Distributions that examines the terminology of invasiveness notes that there is a lack of consensus on this topic and lists five dominant definitions for ‘invasive,’ the most popular of which is “widespread species that have adverse effects on the invaded habitat.”

Despite the lack of a single definition, however, we can pull from these definitions some general aspects of an invasive species and apply those to Homo sapiens.

1) An invasive species is widespread: Humans, which can be found on every continent, floating on every ocean and even circling the skies above certainly meet this aspect of invasiveness.

2) An invasive species has to be a non-native: Humans had colonized every continent but Antarctica by about 15,000 years ago. Sure, we’ve done some rearranging of populations since then and had an explosion in population size, but we’re a native species.

3) An invasive species is introduced to a new habitat: Humans move themselves; there is no outside entity facilitating their spread.

4) An invasive species has adverse effects on its new habitat and/or on human health: Humans meet this part of the definition in too many ways to count.

Verdict: We’re not an invasive species, though we’re certainly doing harm to the world around us. If you think about it, all of the harm done by invasive species is by definition our collective faults; some kind of human action led to that species being in a new place where it then causes some harm. And so I’m not at all astonished to find people arguing that we’re the worst invasive species of them all.

I fully agree with Ms. Zielinski: based on our fully human-centric definition of invasive species, the term cannot apply to us while we are on this planet. We can, however, become an invasive species via our space explorations and are taking some precautions to minimize that risk. A good example of this is the Cassini spacecraft’s deliberate crash into Saturn’s atmosphere in September 2017 to prevent contamination of any future efforts to find life in space. All the damage that humans are causing to the physical environment does not “justify” labeling us as foreign invaders. This planet is our home. It is also the only home of any other life form that we know. We are all “competing” for dominance here. These events are much better framed as collective suicides or self-inflicted genocides.

Figure 1 – Fundamental questions of Astrobiology

I am teaching Cosmology as an advanced General Education course at Brooklyn College. The syllabus covers our attempts to find extraterrestrial life, an area known as Astrobiology. Figure 1 shows the various areas of inquiry that are being considered as part of this effort. Closer inspection of this figure reveals that environmental issues play a key role. For instance, “what is life’s future on Earth and beyond?” and “Future Environmental Change.” In the latter category we see “Catastrophes” – but this entry only refers to collisions of various kinds with space objects, not to the self-destruction of Earth’s native life forms. I have covered self-destruction in specific blogs such as “Nuclear Winter” (July 9, 2013), in which I estimated the energy release of climate change as a multiple of that produced by the Hiroshima bombing, as well as throughout the CCF blog.


Saving the World through the Pursuit of Self Interest: Part 2

Part 1 of this segment, which I published on April 25, 2017, focused on the March for Science – an event that took place on Earth Day and addressed the Trump administration’s attitude of climate change denial. This follow-up seeks my students’ input.

The 2017 Fall semester is over and my students are preparing for their final exam. I have been teaching my Climate Change course using the TBL (Team Based Learning) system. I have 6 groups in my class, each composed of roughly 7 students who have been studying together throughout the semester.

As an extra credit assignment I have challenged the groups to produce a collective paper addressing, “what can we do to save the world?” I view this question as synonymous with the objective of this whole blog. They will post their answers as comments for this post and all of you can be the judges.
