Infrastructure Timing

They did it! Today, after a long slog of debate, the US Senate has finally passed a bipartisan version of the infrastructure bill that came out of the American Jobs Plan the Biden administration proposed in April.

For more information on the original proposal, see my April 6, 2021 blog.

In Tables 1-3, I have highlighted the entries that can be associated with climate change, which I will speak to specifically in future blogs. Of course, the whole effort addresses a multitude of overlapping issues; none of the items can be exclusively associated with one of the entries of my earlier Venn diagram. Rounding up from the sum of the estimated costs for all of the entries, we come to $1.9 trillion. With the addition of the $400 billion for in-home care, we reach the plan’s quoted $2.3 trillion. The sum of the highlighted, climate-related items comes to $1.35 trillion, roughly 70% of the $1.9 trillion total.
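
As a quick sanity check of that arithmetic, here is a minimal sketch; the dollar figures are the ones quoted above, and nothing else is assumed:

```python
# Sanity check of the plan's headline numbers (all figures quoted above).
core_total = 1.9e12        # rounded sum of the Tables 1-3 entries, in dollars
in_home_care = 0.4e12      # additional in-home care component
climate_related = 1.35e12  # sum of the highlighted, climate-related items

print(f"Plan total: ${(core_total + in_home_care) / 1e12:.1f} trillion")   # -> $2.3 trillion
print(f"Climate share of core items: {climate_related / core_total:.0%}")  # -> 71%
```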

The New York Times published an article about the relationship between the American Jobs Plan and the new infrastructure plan, which also looked at the changes between the original and the current versions of the plan and included the graphic below.


Figure 1

At 2,702 pages, this infrastructure bill is the longest such bill in American history. It was released Sunday evening (August 1st).

As we can see from Figure 1, while roughly half of the original bill tackled climate change, the new version has eliminated nearly all of those measures. That said, we don’t have to despair just yet. The Biden administration seems to be repackaging the proposals separately, aiming to pass as many as possible through the evenly divided but intensely partisan Congress. R&D and Manufacturing was a significant category of the climate-related legislation in the old proposal. In June, the US Senate passed a separate major bill, the United States Innovation and Competition Act of 2021 (see my June 15, 2021 blog, “Sputnik and China”), almost unanimously. Although this particular legislation was designed to counter China, an in-depth examination shows that many of the proposed activities will relate to climate change. The same, I’d argue, is true of the new infrastructure proposal, even in its current, gutted form: as long as this climate action-dedicated executive branch is in charge of coordinating these activities, that priority will be visible during implementation.

This is not necessarily true for the other important theme of the original proposal: the social element. Some have labeled that element, which mainly focused on equity in its many forms, “soft” infrastructure. The Republicans (and some key Democrats) refuse to cooperate on these measures. They argue that tackling these issues will inject too much money into the market at once and trigger uncontrollable inflation.

The Democrats are planning to pursue a $3 trillion independent effort that they hope to be able to pass on their own, through a process known as “budget reconciliation.” The danger here is that the House Democrats are threatening not to even bring either of the bills I mentioned above to a vote, even though the Senate has reached bipartisan agreements and the White House has endorsed the compromises. In other words, so far, none of the legislative activities is secured.

In the context of mitigation of and adaptation to anthropogenic climate change, these activities still lack an examination of the impacts that will come from our energy transition away from fossil fuel dependence.

Figure 2 shows the outline of US energy use in 2019, as constructed by Visual Capitalist, based on data from the Lawrence Livermore National Laboratory. The units of energy used throughout are quads. For reference, here’s an approximate conversion:

1 quad = 1×10¹⁵ BTU ≈ 1.055×10¹⁸ joules ≈ the energy in 8 billion gallons of gasoline
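
To sanity-check that last conversion, here is a minimal sketch. The ~120 MJ of chemical energy per gallon of gasoline is my own assumption (a commonly cited approximate value), not a number from the figure:

```python
# Convert 1 quad to joules and to an equivalent volume of gasoline.
BTU_PER_QUAD = 1e15
JOULES_PER_BTU = 1055.0       # 1 BTU ≈ 1,055 J
JOULES_PER_GALLON = 1.2e8     # ≈120 MJ per gallon of gasoline (assumed)

joules_per_quad = BTU_PER_QUAD * JOULES_PER_BTU
gallons = joules_per_quad / JOULES_PER_GALLON
print(f"1 quad ≈ {joules_per_quad:.2e} J ≈ {gallons / 1e9:.1f} billion gallons")
# -> 1 quad ≈ 1.06e+18 J ≈ 8.8 billion gallons (about the 8 billion quoted above)
```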

On the left-hand side of Figure 2, we have the energy sources, 80% of which are fossil fuels. To decarbonize the system, we need to eliminate all fossil fuels from the input, and/or capture the carbon dioxide in the air, which we are not currently doing in any significant way.


Figure 2 – Composite energy use in the US, 2019

Breaking down the distribution of the energy above, we find that:

  • 37 quads of the input go to the production of electricity

  • 28.2 quads go to petroleum use in transportation

  • 24.2 quads of the energy that goes into electricity production are lost as waste heat (marked as “rejected energy”), an amount that depends on the temperature at which the energy conversion takes place

In a previous blog (October 22, 2019), I discussed the limits of the efficiency of producing electricity from primary energy sources. These limits are fundamental and are an expression of the Second Law of Thermodynamics, a key aspect of physics with an unwieldy name. This law applies not only to electricity but to any process that involves the conversion of heat to work (i.e., in physics, anything useful). Figure 2 describes everything useful that we get out of energy, lumping it all into “energy services.” Out of approximately 100 quads of energy input, we get a mere 32.7 quads of energy services. The total efficiency of this process is 32.7%, an efficiency that is critically dependent on the temperature of the conversion.

We have three ways to decrease the carbonization of our energy use:

  1. Decrease the amount of energy services that we use
  2. Decrease the amount of fossil fuels that we use and replace them with sustainable energy sources (defined here as primary energy that does not emit carbon dioxide) as much as possible
  3. Increase the efficiency of the conversion by increasing the temperature of the conversion (see the sketch below)
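
On the third point: the Second Law caps the efficiency of any heat-to-work conversion at the Carnot limit, η = 1 − T_cold/T_hot (temperatures in kelvin). The sketch below is a generic illustration of that limit with example temperatures of my choosing, not a model of any particular power plant:

```python
# Carnot limit on converting heat to work: eta = 1 - T_cold / T_hot.
# This is the Second-Law ceiling; real plants fall well below it.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of input heat convertible to work (temperatures in K)."""
    return 1.0 - t_cold_k / t_hot_k

T_COLD = 300.0  # ~27°C heat sink (ambient air or cooling water)
for t_hot in (500.0, 800.0, 1100.0):  # illustrative conversion temperatures
    print(f"T_hot = {t_hot:6.0f} K -> Carnot limit = {carnot_efficiency(t_hot, T_COLD):.0%}")
# Raising the conversion temperature raises the ceiling: 40%, 62%, 73%
```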

As we saw in an earlier blog (October 15, 2019), the most important strategy taking place in the energy transition right now is the shift to electricity from direct use of primary energy sources. Three of the four components of the energy services shown in Figure 2 already have major electrical contributions. The only sector with minimal electrical contributions is transportation. Presently (2019), transportation uses 28.2 quads of energy, 22.3 quads of which are wasted, mostly as exhaust heat. President Biden set a goal for 50% of new cars to be electric by 2030. However, electric cars are only as low carbon as the sources of their electricity (see my March 19, 2019 blog for a description of the shift to electric cars). So, before we can create a carbon-free economy, we’ll need carbon-free electricity.

Figure 2 includes the industrial and commercial sectors in its breakdown of energy services. Table 1 gives a list of examples of energy use in the industrial sector:

Table 1 Examples of industrial sector energy use


A revolution is now taking place, aiming not only to decarbonize this sector but to make it fully sustainable, part of the “circular economy.” Due to the Second Law of Thermodynamics that I mentioned earlier, the energy part of this sector cannot be reused, but the chemical part can. A piece in The New York Times provides a good summary, with a few examples:

For the past few years, a number of start-ups have begun developing products that aim to fold in carbon dioxide captured from smokestacks and other sources of pollution, in an attempt to reach a new level of environmentally friendly manufacturing: one in which greenhouse-gas molecules are not only kept out of the atmosphere but also repurposed. This undertaking, usually characterized as carbon utilization, goes well beyond flooring — to plastics, jet fuels, diesel, chemicals, building materials, diamonds, even fish food.

Advocates of carbon utilization, or carbontech, as it’s also known, want to remake many of the things we commonly use today. But with one crucial difference: No emissions would have been added to the environment through their fabrication. Carbontech sees a future where the things we buy might be similar in their chemistries and uses but different in their manufacture and environmental impact. You might wake in the morning on a mattress made from recycled CO2 and grab sneakers and a yoga mat made from CO2-derived materials. You might drive your car — with parts made from smokestack CO2 — over roads made from CO2-cured concrete. And at day’s end, you might sip carbontech vodka while making dinner with food grown in a greenhouse enriched by recycled CO2. Many of these items would most likely be more expensive to the consumer than their usual counterparts, in part because they often need significant amounts of energy to make. But the hurdles to making them are no longer insurmountable.

As for the commercial sector, most energy use relates to the day-to-day running of a wide variety of buildings in the following categories:

The top five energy-consuming building categories used about half of the energy consumed by all commercial buildings in 2012, and they include the following types of buildings:

  • Mercantile and service (15% of total energy consumed by commercial buildings)
    • Malls and stores
    • Car dealerships
    • Dry cleaners
    • Gas stations
  • Office (14% of consumption)
    • Professional and government offices
    • Banks
  • Education (10% of consumption)
    • Elementary, middle, and high schools
    • Colleges
  • Health care (8% of consumption)
    • Hospitals
    • Medical offices
  • Lodging (6% of consumption)
    • Hotels
    • Dormitories
    • Nursing homes


It will be interesting to see how we do in decarbonizing these sectors, whether via legislative intervention, advances in technology, commercial competition, or a combination of these factors. Here’s hoping the House can keep the forward momentum of the Senate’s new legislation.

Posted in Biden, Climate Change, Economics, Energy

Heat Death Indicators

I have been spending most of my evenings watching the Tokyo Olympics. One of the most frequent questions directed to athletes who have performed outdoors is how they handle the heat. Right now, Tokyo is having highs of 90°F, with 75% humidity. That humidity is on the high side for most US cities but the temperature itself is common. Using the heat index chart from last week’s blog, we see that Tokyo’s heat index comes out to 109°F—just on the dividing line between extreme caution and danger. Danger means a high likelihood of sunstroke, muscle cramps, and/or heat exhaustion. Unlike some of us, though, most of the athletes don’t have the option to stay home and turn on the air conditioning to escape the heat. The organizers of the Olympics have tried to mitigate the heat for some competitors: they sprayed water on the sand for beach volleyball and moved the marathon race about 500 miles north of Tokyo. However, this is not much consolation to tennis players or other track and field participants who still have to compete outdoors within city limits.
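
For readers who prefer a formula to a chart: heat index charts like the one I referenced are typically generated from the National Weather Service’s Rothfusz regression. A minimal sketch (omitting the NWS’s low- and high-humidity edge adjustments) reproduces Tokyo’s number:

```python
# NWS heat index via the Rothfusz regression, valid roughly for T >= 80°F.
def heat_index_f(t_f: float, rh_pct: float) -> float:
    """Apparent temperature (°F) from air temperature (°F) and relative humidity (%)."""
    t, r = t_f, rh_pct
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)

print(f"Tokyo (90°F, 75% RH): heat index ≈ {heat_index_f(90, 75):.0f}°F")  # -> 109°F
```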

Additionally, these conditions are considerably harsher than those of the last Olympics that took place in Tokyo—in 1964. The average temperature for those games was around 70°F. However, part of the reason for this drastic difference is that they were held in October and not July-August. Even so, at the time, the average high temperature in July-August hovered around 80°F.

We already knew that climate change is a serious threat to the Winter Olympics but it is becoming increasingly obvious that its effects will extend to the summer games: for safety’s sake, organizers will either need to shift them to a later date or move them to the Southern Hemisphere.

Olympics aside, what about the rest of us? What dangers do we face if forced to be outside under such extreme heat conditions? National Geographic had the most straightforward description that I have encountered:

The human body has evolved to shed heat in two main ways: Blood vessels swell, carrying heat to the skin so it can radiate away, and sweat erupts onto the skin, cooling it by evaporation. When those mechanisms fail, we die. It sounds straightforward; it’s actually a complex, cascading collapse.

As a heatstroke victim’s internal temperature rises, the heart and lungs work ever harder to keep dilated vessels full. A point comes when the heart cannot keep up. Blood pressure drops, inducing dizziness, stumbling, and the slurring of speech. Salt levels decline and muscles cramp. Confused, even delirious, many victims don’t realize they need immediate help.

With blood rushing to overheated skin, organs receive less flow, triggering a range of reactions that break down cells. Some victims succumb with an internal temperature of just 104 degrees Fahrenheit (40 degrees Celsius); others can withstand 107 degrees for several hours. The prognosis is usually worse for the very young and for the elderly. Even healthy older people are at a distinct disadvantage: Sweat glands shrink with age, and many common medications dull the senses. Victims often don’t feel thirsty enough to drink. Sweating stops being an option, because the body has no moisture left to spare. Instead, sometimes it shivers.

Even the fittest, heat-acclimated person will die after a few hours’ exposure to a 95° “wet bulb” reading, a combined measure of temperature and humidity that takes into consideration the chilling effect of evaporation. At this point, the air is so hot and humid it no longer can absorb human sweat. Taking a long walk in these conditions, to say nothing of harvesting tomatoes or filling a highway pothole, could be fatal. Climate models predict that wet-bulb temperatures in South Asia and parts of the Middle East will, in roughly 50 years, regularly exceed that critical benchmark.

By then, according to a startling 2020 study in Proceedings of the National Academy of Sciences, a third of the world’s population could be living in places—in Africa, Asia, South America, and Australia—that feel like today’s Sahara, where the average high temperature in summer now tops 104°F. Billions of people will face a stark choice: Migrate to cooler climates, or stay and adapt. Retreating inside air-conditioned spaces is one obvious work-around—but air-conditioning itself, in its current form, contributes to warming the planet, and it’s unaffordable to many of the people who need it most. The problem of extreme heat is mortally entangled with larger social problems, including access to housing, to water, and to health care. You might say it’s a problem from hell.
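
The 95°F (35°C) wet-bulb threshold mentioned above can be estimated from ordinary temperature and humidity readings. Here is a minimal sketch using the Stull (2011) empirical approximation (valid roughly for relative humidities of 5–99% at sea-level pressure); the example conditions are my own, chosen to show where the threshold is crossed:

```python
import math

# Wet-bulb temperature from the Stull (2011) empirical fit.
# Inputs: air temperature in °C, relative humidity in %.
def wet_bulb_c(t_c: float, rh_pct: float) -> float:
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct) - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Air at 40°C (104°F): the lethal 35°C (95°F) wet-bulb mark is crossed
# at roughly 70% relative humidity.
for rh in (50, 65, 80):
    print(f"40°C at {rh}% RH -> wet bulb ≈ {wet_bulb_c(40, rh):.1f}°C")
# -> 30.9°C, 34.0°C, 36.7°C
```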

The Mayo Clinic provides information about how heat affects us in ways we might not have considered.  Indeed, we are seeing the deadly consequences:

Extreme heat causes many times more workplace injuries than official records capture, and those injuries are concentrated among the poorest workers, new research suggests, the latest evidence of how climate change worsens inequality.

Hotter days don’t just mean more cases of heat stroke, but also injuries from falling, being struck by vehicles or mishandling machinery, the data show, leading to an additional 20,000 workplace injuries each year in California alone. The data suggest that heat increases workplace injuries by making it harder to concentrate.

“Most people still associate climate risk with sea-level rise, hurricanes and wildfires,” said R. Jisung Park, a professor of public policy at the University of California, Los Angeles and the lead author of the study. “Heat is only beginning to creep into the consciousness as something that is immediately damaging.”

The findings follow record-breaking heat waves across the Western United States and British Columbia in recent weeks that have killed an estimated 800 people, made wildfires worse, triggered blackouts and even killed hundreds of millions of marine animals.

The graphic below provides some suggestions for how we can stay safer during heat waves:

While some of these seem relatively obvious, others are not necessarily as intuitive. For instance, in addition to the advice to build up a gradual heat tolerance, the part about how much water to drink (and keep drinking) is vital: don’t wait until you’re thirsty! According to the Mayo Clinic:

Thirst isn’t a helpful indicator of hydration.

In fact, when you’re thirsty, you could already be dehydrated, having lost as much as 1 to 2 percent of your body’s water content. And with that kind of water loss, you may start to experience cognitive impairments — like stress, agitation and forgetfulness, to name a few.

As we’ve discussed before, climate change means that the frequency and intensity of extreme weather will only get worse if we continue business as usual:

The study, conducted by international scientists with the World Weather Attribution group, found that the climate crisis was responsible for boosting peak temperatures by 3.6 degrees Fahrenheit (2 degrees Celsius). It also made the extreme temperatures at least 150 times more likely to occur. The research also shows that the heat wave was a 1-in-1,000-year event in our current climate, which means it’s still a rarity. But consider that it would’ve been a 1-in-150,000-year event in the pre-industrial era.

If the world heats up by another 0.8 degrees Celsius, breaching the Paris Agreement’s goal of limiting warming to 2 degrees Celsius (3.6 degrees Fahrenheit) above pre-industrial temperatures, events like this will become almost commonplace, occurring every five to 10 years.

The report itself isn’t peer-reviewed yet, but it relies on peer-reviewed techniques that have been repeatedly used for snap analyses, including recent heat waves in Siberia and Australia, and later peer-reviewed. That gives scientists confidence in the disturbing results.

The IPCC continues to warn about the future:

Millions of people worldwide are in for a disastrous future of hunger, drought and disease, according to a draft report from the United Nations’ Intergovernmental Panel on Climate Change, which was leaked to the media this week.

“Climate change will fundamentally reshape life on Earth in the coming decades, even if humans can tame planet-warming greenhouse gas emissions,” according to Agence France-Presse, which obtained the report draft.

The report warns of a series of thresholds beyond which recovery from climate breakdown may become impossible, The Guardian said. The report warns: “Life on Earth can recover from a drastic climate shift by evolving into new species and creating new ecosystems… humans cannot.

“The worst is yet to come, affecting our children’s and grandchildren’s lives much more than our own.”

But we are not just talking about the future; we’re suffering these extreme effects now. This is from just a month ago:

VANCOUVER/PORTLAND, June 30 (Reuters) – A heatwave that smashed all-time high temperature records in western Canada and the U.S. Northwest has left a rising death toll in its wake as officials brace for more sizzling weather and the threat of wildfires.

The worst of the heat had passed by Wednesday, but the state of Oregon reported 63 deaths linked to the heatwave. Multnomah County, which includes Portland, reported 45 of those deaths since Friday, with the county Medical Examiner citing hyperthermia as the preliminary cause.

By comparison all of Oregon had only 12 deaths from hyperthermia from 2017 to 2019, the statement said. Across the state, hospitals reported a surge of hundreds of visits in recent days due to heat-related illness, the Oregon Health Authority said.

In British Columbia, at least 486 sudden deaths were reported over five days, nearly three times the usual number that would occur in the province over that period, the B.C. Coroners Service said Wednesday.

Our only real remedy is to change the business-as-usual scenario we have been following—quickly! Next week, I’ll look at the timing necessary to accomplish such a transition.

Posted in Climate Change, Extreme Weather

Heat Deaths and Cold Deaths

We have been seeing a slew of catastrophes throughout the world that roughly coincided with the beginning of summer in the Northern Hemisphere (June 20th). Almost all of them have been either partially caused or worsened by climate change. These disasters include record temperatures (Death Valley, California reached an alarming 130°F!), which have led to uncontrollable fires on the West Coast, whose resulting smog has impacted the entire country. Meanwhile, Arizona, Western Europe, China, and India have also experienced deadly floods. I follow the NYT Weather Report daily, tracking the temperatures and precipitation rates in many of the world’s largest cities (see my August 18, 2020 blog). Las Vegas, in particular, has stood out. I can’t remember a single day since the beginning of the summer when the city, which has a population of around 670,000, has had a high under three digits. In fact, it often exceeds 110°F. Even at night, its low temperature is almost always the highest on the US list, exceeding 85°F. Other cities on this list, such as Phoenix and Tucson, Arizona; Boise, Idaho; Salt Lake City, Utah; and Reno, Nevada have mirrored Las Vegas’ record heat. At times this summer, cities throughout the West Coast of the US and western Canada—including, shockingly, in Washington and Oregon—took the lead in scorching temperatures, as years-long droughts gave way to major wildfires. Foreign cities, such as Damascus, Baghdad, Tehran, and Riyadh, matched the American records.

Sonya Landau, the editor of this blog, wrote a guest post from Tucson, Arizona, about some aspects of life under these conditions (June 22, 2021). She wrote that Tucson’s low humidity helps modulate the extremely high temperatures and makes life more bearable. Well, Figure 1 shows that, even with the relatively low humidity of 40%, once you reach a temperature above 100°F, the heat index reaches 110°F and you start to approach an extreme danger zone (see my July 3, 2018 blog for a more thorough explanation).

Figure 1 – Heat Index (July 3, 2018 blog)

You can alleviate the heat by running into an air-conditioned room, but, as Sonya wrote in her blog, many either have to work outside and don’t have that option or simply cannot afford air conditioning. Additionally, in many places, the electrical grid cannot handle the extra power strain of air conditioning. As a result, many die of heatstroke. Tucson is not alone in its suffering; nor are humans the only ones affected. In addition to the massive impact on the human population, heat waves have resulted in massive wildlife deaths.

Meanwhile, Bjorn Lomborg wrote a series of articles claiming that life is not so bad and that a greater decrease in cold deaths compensates for the increase in heat deaths. I wrote about Mr. Lomborg’s attitude to climate change and environmental issues in earlier blogs (see my September 3, 2012 blog, “Three Shades of Deniers”). For a time, I also used his most famous book, The Skeptical Environmentalist, in my climate change classes. I appreciated his skeptical input as long as it was based on facts, but that connection to the facts has become increasingly shaky. He started to post about the correlations between heat deaths and cold deaths four years ago on his own website:

“Heat-death hysteria: the wrong reason to fight climate change”

2017-07-10

Politically tinged coverage of summer temperatures offers a lot of heat but not much light. “Deadly heat waves becoming more common due to climate change,” declares CNN. “Extreme heat waves will change how we live. We’re not ready,” warns TIME. Some stories are more sensationalist than others, but there is a common theme: Dangerous heat waves will increase in frequency and ferocity because of global warming. This isn’t fake news. In fact, it’s perfectly true. But these stories reveal a peculiar blind spot in the media’s climate reporting. While “deadly,” “killer,” “extreme” heat waves gain a lot of coverage, relatively scant attention is given in winter to much, much more lethal cold temperatures.

He repeated the same arguments over the years in other publications:

WSJ: “An Overheated Climate Alarm: The White House launches a scary campaign about deadly heat. Guess what: Cold kills more people”

New York Post: “More people die of cold: Media’s heat-death climate obsession leads to lousy fixes”

USA Today: “Climate Change: science, media spread fears, ignore possible positives”

This month, he expanded upon this point:

A widely reported recent study found that higher temperatures are now responsible for about 100,000 of those heat deaths. But the study’s authors ignored cold deaths. A landmark study in Lancet shows that across every region climate change has brought a greater reduction in cold deaths over the past few decades than it has caused additional heat deaths. On average, it has avoided upwards of twice as many deaths, resulting in perhaps 200,000 fewer cold deaths each year.

These entries all refer to “recent studies,” which trace back to a single 2015 publication in The Lancet. Below is a summary of the methodology and findings of that paper:

Methods

We collected data for 384 locations in Australia, Brazil, Canada, China, Italy, Japan, South Korea, Spain, Sweden, Taiwan, Thailand, UK, and USA. We fitted a standard time-series Poisson model for each location, controlling for trends and day of the week. We estimated temperature–mortality associations with a distributed lag non-linear model with 21 days of lag, and then pooled them in a multivariate metaregression that included country indicators and temperature average and range. We calculated attributable deaths for heat and cold, defined as temperatures above and below the optimum temperature, which corresponded to the point of minimum mortality, and for moderate and extreme temperatures, defined using cutoffs at the 2·5th and 97·5th temperature percentiles.

Findings

We analysed 74 225 200 deaths in various periods between 1985 and 2012. In total, 7·71% (95% empirical CI 7·43–7·91) of mortality was attributable to non-optimum temperature in the selected countries within the study period, with substantial differences between countries, ranging from 3·37% (3·06 to 3·63) in Thailand to 11·00% (9·29 to 12·47) in China. The temperature percentile of minimum mortality varied from roughly the 60th percentile in tropical areas to about the 80–90th percentile in temperate regions. More temperature-attributable deaths were caused by cold (7·29%, 7·02–7·49) than by heat (0·42%, 0·39–0·44). Extreme cold and hot temperatures were responsible for 0·86% (0·84–0·87) of total mortality.
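
To make the Methods paragraph a bit more concrete, below is a drastically simplified sketch of the first-stage idea: a time-series Poisson regression of daily death counts on temperature, with trend and day-of-week controls. It uses synthetic data and a simple quadratic in temperature instead of the study’s distributed lag non-linear model, so it illustrates the approach rather than reproducing it:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days = 3650
day = np.arange(n_days)
temp = 15 + 10 * np.sin(2 * np.pi * day / 365.25) + rng.normal(0, 3, n_days)

# Synthetic truth: mortality is U-shaped in temperature, minimal near 20°C.
deaths = rng.poisson(np.exp(np.log(50) + 0.0015 * (temp - 20) ** 2))

# Controls: linear trend and day-of-week dummies (first day as reference).
dow = np.eye(7)[day % 7][:, 1:]
X = sm.add_constant(np.column_stack([temp, temp ** 2, day / n_days, dow]))

fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
b_t, b_t2 = fit.params[1], fit.params[2]
print(f"Estimated minimum-mortality temperature: {-b_t / (2 * b_t2):.1f}°C")
# This minimum is the analogue of the study's 'optimum temperature';
# deaths attributable to cold (heat) come from days below (above) it.
```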

Ten other scientists analyzed Lomborg’s claims, concluding that their scientific credibility was ‘low’ to ‘very low’:

SUMMARY

On 4 April 2016 the US Global Change Research Program released a comprehensive overview of the impact of climate change on American public health. In an op-ed in the Wall Street Journal, Bjorn Lomborg criticizes the report as unbalanced. Ten scientists analyzed the article and found that Lomborg had reached his conclusions through cherry-picking from a small subset of the evidence, misrepresenting the results of existing studies, and relying on flawed reasoning.

Below are the opinions of two of the reviewers cited in this publication. One of these, who was a co-author of the original Lancet report, claims that Lomborg’s interpretation of the report is misleading. The other reviewer says that Lomborg is falsely conflating heat and cold deaths with seasonal variations of summer and winter:

Antonio Gasparrini, Senior Lecturer, London School of Hygiene and Tropical Medicine:

My review is limited to the part of the article that describes the results of the study published in The Lancet, which I first-authored.

The interpretation provided in the article is misleading, as our study is meant to provide evidence on past/current relationships between temperature and health, and not to assess changes in the future. In addition, the study does not offer a global assessment, and it is limited to a set of countries not representative of the global population.

Kristie Ebi, Professor, University of Washington:

Mr. Lomborg is confusing seasonal mortality with temperature-related mortality. It is true that mortality is higher during winter than summer. However, it does not follow that winter mortality is temperature-dependent (which summer mortality is). Dave Mills and I reviewed the evidence and concluded that only a small proportion of winter mortality is likely associated with temperature. A growing number of publications are exploring associations between weather and winter mortality, with differences in methods and results. The country with the strongest association between winter mortality and temperature is England, which appears in other publications to be at least partly due to cold housing. Winter mortality is lower in northern European countries.

As always, there is a grain of truth in Lomborg’s argument. Below is a statistical evaluation of weather fatalities in 2017, compiled by NOAA (National Oceanic and Atmospheric Administration) and published by Weather Underground:


Figure 2 – Weather-related deaths in the U.S. in 2017 (red bars), for the past 10 years (blue bars), and for the last 30 years (yellow bars), according to NOAA. Heat-related deaths dominate.

Extreme heat and extreme cold both kill hundreds of people each year in the U.S., but determining a death toll for each is a process subject to large errors. In fact, two major U.S. government agencies that track heat and cold deaths–NOAA and the CDC–differ sharply in their answer to the question of which is the bigger killer. One reasonable take on the literature is that extreme heat and extreme cold are both likely responsible for at least 1300 deaths per year in the U.S. In cities containing 1/3 of the U.S. population, a warming climate is expected to increase the number of extreme temperature deaths by 3900 – 9300 per year by 2090, at a cost of $60 – $140 billion per year. However, acclimatization or other adaptation efforts, such as increased use of air conditioning, may cut these numbers by more than one-half.

NOAA’s take: heat is the bigger killer

NOAA’s official source of weather-related deaths, a monthly publication called Storm Data, is heavily skewed toward heat-related deaths. Over the 30-year period 1988 – 2017, NOAA classified an average of 134 deaths per year as being heat-related, and just 30 per year as cold-related—more than a factor-of-four difference. According to a 2005 paper in the Bulletin of the American Meteorological Society, “Heat Mortality Versus Cold Mortality: A Study of Conflicting Databases in the United States,” Storm Data is often based on media reports and tends to be biased towards media/public awareness of an event.

CDC’s take: cold is the bigger killer

In contrast, the CDC’s National Center for Health Statistics Compressed Mortality Database, which is based on death certificates, indicates the reverse—about twice as many people die of “excessive cold” conditions in a given year than of “excessive heat.” According to a 2014 study by the CDC, approximately 1,300 deaths per year from 2006 to 2010 were coded as resulting from extreme cold exposure, and 670 deaths per year from extreme heat. However, both of these numbers are likely to be underestimated. According to the 2016 study, The Impacts of Climate Change on Human Health in the United States, “It is generally accepted that direct attribution underestimates the number of people who die from temperature extremes.” For example, during the 1995 Chicago heat wave, only 465 death certificates had heat as a contributing cause, while excess mortality figures showed that close to 700 people died as a result of the heat.

I will further explore this issue in next week’s blog. It seems that a great part of this discussion is a matter of language and clarifying certain terms. As the second reviewer of Lomborg’s article pointed out, many of The Lancet’s and Lomborg’s discussions are based on the association of cold with winter and heat with summer.

Figure 2 lists many causes of death, including floods, tornadoes, and hurricanes, which are triggered or amplified by the extreme heat caused by climate change. However, it doesn’t even mention death by fire, making it much harder to compare with earlier data:

Between 1998 and 1999, the World Health Organization revised the international codes used to classify causes of death. As a result, data from earlier than 1999 cannot easily be compared with data from 1999 and later.

Unsurprisingly, many mentions of extreme heat in the news are accompanied by warnings that worse is yet to come. Newsrooms around the world are paying special attention to any new local temperature records. In the next blog, I will go into some quantitative detail about the impact extreme heat has on all of us.

Stay tuned.

Posted in Climate Change, Extreme Weather

Breaking With Business as Usual

My last three blogs focused on our collective attempts to limit anthropogenic global warming to an increase of 1.5°C in global temperature or, failing that, no more than 2°C.

The series of blogs started with a detailed road map recently outlined by the IEA (International Energy Agency) and continued with my own detailed description of the “business as usual” situation, characterized by a constant acceleration in the accumulation of atmospheric greenhouse gases. The trend, which started about 60 years ago and is calculated from the measured increase in the atmospheric concentrations of these gases, continues to prevail now.

But where did these 1.5–2°C targets come from? The IPCC (Intergovernmental Panel on Climate Change) has long been aware of this accelerating accumulation of greenhouse gases, and these targets were set almost immediately after the Paris Agreement was signed (see the December 14, 2015 blog). The organizers of the Paris meeting requested that the IPCC write a detailed report to justify those targets. The corresponding SR15 report came out in 2018. This blog is focused on some key conclusions and data from that report.

I have included surveys of many aspects of the IPCC reports through my nine years running this blog. Every report (at least those I have read) starts with a section titled, “Summary for Policy Makers” (the October 14, 2014 blog analyzes one of the earlier reports), usually referred to as SPM. The SR15 is no exception. The two paragraphs below are taken from the preamble to this section:

The IPCC accepted the invitation in April 2016, deciding to prepare this Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty.

This Summary for Policymakers (SPM) presents the key findings of the Special Report, based on the assessment of the available scientific, technical and socio-economic literature relevant to global warming of 1.5°C and for the comparison between global warming of 1.5°C and 2°C above pre-industrial levels. The level of confidence associated with each key finding is reported using the IPCC calibrated language.  The underlying scientific basis of each key finding is indicated by references provided to chapter elements. In the SPM, knowledge gaps are identified associated with the underlying chapters of the Report.

The first two high points of the SPM state the following:

A.2.1. Anthropogenic emissions (including greenhouse gases, aerosols and their precursors) up to the present are unlikely to cause further warming of more than 0.5°C over the next two to three decades (high confidence) or on a century time scale (medium confidence). {1.2.4, Figure 1.5}

A.2.2. Reaching and sustaining net zero global anthropogenic CO2 emissions and declining net non-CO2 radiative forcing would halt anthropogenic global warming on multi-decadal times scales (high confidence). The maximum temperature reached is then determined by cumulative net global anthropogenic CO2 emissions up to the time of net zero CO2 emissions (high confidence) and the level of non-CO2 radiative forcing in the decades prior to the time that maximum temperatures are reached (medium confidence). On longer time scales, sustained net negative global anthropogenic CO2 emissions and/or further reductions in non-CO2 radiative forcing may still be required to prevent further warming due to Earth system feedbacks and to reverse ocean acidification (medium confidence) and will be required to minimize sea level rise (high confidence). {Cross-Chapter Box 2 in Chapter 1, 1.2.3, 1.2.4, Figure 1.4, 2.2.1, 2.2.2, 3.4.4.8, 3.4.5.1, 3.6.3.2}

The SPM includes some key figures, among them: Figure 1 below and Figure 3 from last week’s blog, which summarizes risks posed by rising temperature.  Last week’s blog also included the definition of radiative forcing, as shown in Figure 1.

Figure 1

Panel a: Observed monthly global mean surface temperature (GMST, grey line up to 2017, from the HadCRUT4, GISTEMP, Cowtan–Way, and NOAA datasets) change and estimated anthropogenic global warming (solid orange line up to 2017, with orange shading indicating assessed likely range). Orange dashed arrow and horizontal orange error bar show respectively the central estimate and […]

(You can find a fuller explanation of Figure 1 in the original report, part of the downloadable 630-page high-resolution pdf).

The report contains five more detailed chapters and an explanatory appendix.

Below, I cite a few introductory paragraphs and a key figure from the first chapter:

Human-induced warming reached approximately 1°C (likely between 0.8°C and 1.2°C) above pre-industrial levels in 2017, increasing at 0.2°C (likely between 0.1°C and 0.3°C) per decade (high confidence). Global warming is defined in this report as an increase in combined surface air and sea surface temperatures averaged over the globe and over a 30-year period. Unless otherwise specified, warming is expressed relative to the period 1850–1900, used as an approximation of pre-industrial temperatures in AR5. For periods shorter than 30 years, warming refers to the estimated average temperature over the 30 years centered on that shorter period, accounting for the impact of any temperature fluctuations or trend within those 30 years.

Warming greater than the global average has already been experienced in many regions and seasons, with higher average warming over land than over the ocean (high confidence). Most land regions are experiencing greater warming than the global average, while most ocean regions are warming at a slower rate. Depending on the temperature dataset considered, 20–40% of the global human population live in regions that, by the decade 2006–2015, had already experienced warming of more than 1.5°C above pre-industrial in at least one season (medium confidence). {1.2.1, 1.2.2}

Past emissions alone are unlikely to raise global-mean temperature to 1.5°C above pre-industrial levels (medium confidence), but past emissions do commit to other changes, such as further sea level rise (high confidence). If all anthropogenic emissions (including aerosol-related) were reduced to zero immediately, any further warming beyond the 1°C already experienced would likely be less than 0.5°C over the next two to three decades (high confidence), and likely less than 0.5°C on a century time scale (medium confidence), due to the opposing effects of different climate processes and drivers. A warming greater than 1.5°C is therefore not geophysically unavoidable: whether it will occur depends on future rates of emission reductions.  {About committed warming}

This report defines ‘warming’, unless otherwise qualified, as an increase in multi-decade global mean surface temperature (GMST) above pre-industrial levels. Specifically, warming at a given point in time is defined as the global average of combined land surface air and sea surface temperatures for a 30-year period centered on that time, expressed relative to the reference period 1850–1900 (adopted for consistency with Box SPM.1 Figure 1 of IPCC (2014a))

Figure 2

Figure 3 shows another key finding from Chapter 3 of the report. It demonstrates the range of local warming throughout the globe as the global temperature increases from 1.5°C to 2°C above pre-industrial levels. As you can see, a significant fraction of the world would become unlivable, especially in the latter scenario.

Figure 3 (taken from Chapter 3) – Projected changes in the number of hot days (NHD; 10% warmest days) at 1.5°C (left) and at 2°C (middle) of global warming compared to the pre-industrial period (1861–1880), and the difference between 1.5°C and 2°C of warming (right).

In the next blog (or two), I will try to analyze the concept of what is “hot enough to be unlivable.” Clearly, however, in order to retain livable conditions, we will need to veer significantly from the business-as-usual scenario ASAP–not in the future but now–by following the guidelines of the IEA (International Energy Agency) report. The upcoming COP26 meeting (the “Conference of the Parties” global climate summit, with almost every country on Earth participating) in Glasgow, UK, scheduled to run from October 31 through November 12, 2021, is a good opportunity for that. The governments of the European Union and the US recognize this, and they recently came up with detailed plans on how to achieve these goals. (Europe’s plan is here; see all four April blogs this year for the US plan.)

Unfortunately, both plans at this point are non-binding proposals. President Biden augmented his proposals with some executive orders but they were mostly drafted to counter President Trump’s previous policies that had erased prior mitigation efforts.

The relative success of the 2015 Paris agreement was driven to a large degree by the almost unanimous cooperation of the developing countries. One key element in this was a set of commitments from developed countries to support developing countries in the mitigation process through the Green Climate Fund (see the March 2, 2021 blog). So far, however, we have heard very little in the way of declarations from the EU or the US on this issue.

Stay tuned.

Posted in Anthropocene, Anthropogenic, Biden, Climate Change, IPCC, Sustainability, US

Business as Usual: Part 2

The Connection Between Carbon Concentration and Temperature

Last week, I used the Scripps Institution of Oceanography and NOAA’s recent measurements of the global carbon dioxide concentration (as measured at Mauna Loa, Hawaii) to calculate the acceleration in carbon dioxide atmospheric accumulation from 1959 to 2020. The start of these measurements approximately corresponds with the suggested start of the Anthropocene (see my October 30, 2018 blog), when human activities started to become the dominant force behind the changes in atmospheric composition. The results show that over this 60-year period, the acceleration of the accumulation of carbon dioxide in the atmosphere has been approximately constant. Given this consistency, we have defined this acceleration as the business-as-usual scenario, from which we have calculated future levels. Under the same scenario, calculations show that the concentration of carbon dioxide will reach 560 ppmv by the year 2069. This is only 48 years from today and 110 years after we started measuring, in 1959.

The value of 560 ppmv CO2 represents a doubling of the preindustrial concentration of 280 ppmv. We call the projected effect that this doubling will have on global temperature “climate sensitivity”:

Climate sensitivity is a measure of how much the Earth’s climate will cool or warm after a change in the climate system, for instance, how much it will warm for doubling in carbon dioxide (CO2) concentrations.[1] In technical terms, climate sensitivity is the average change in the Earth’s surface temperature in response to changes in radiative forcing, the difference between incoming and outgoing energy on Earth.[2] Climate sensitivity is a key measure in climate science,[3] and a focus area for climate scientists, who want to understand the ultimate consequences of anthropogenic climate change.

Wikipedia also defines radiative forcing:

Radiative forcing is generally defined as the imbalance between incoming and outgoing radiation at the top of the atmosphere.[5] Radiative forcing is measured in Watts per square meter (W/m2), the average imbalance in energy per second for each square meter of the Earth’s surface.[6]
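
A widely used simplified expression from the literature (Myhre et al., 1998) ties these two definitions together: the radiative forcing from CO2 is approximately ΔF = 5.35 × ln(C/C0) W/m². The sketch below shows why a doubling corresponds to roughly 3.7 W/m² and how different assumed sensitivities map onto that forcing; the mapping is illustrative, not taken from the IPCC figures discussed here:

```python
import math

# Simplified CO2 radiative forcing (Myhre et al., 1998): dF = 5.35 * ln(C/C0) W/m².
def co2_forcing_wm2(c_ppmv: float, c0_ppmv: float = 280.0) -> float:
    return 5.35 * math.log(c_ppmv / c0_ppmv)

dF = co2_forcing_wm2(560.0)                 # forcing at doubled CO2
print(f"Forcing at 2xCO2: {dF:.2f} W/m²")   # -> 3.71 W/m²

# Equilibrium warming = lambda * dF. The lambda implied by an equilibrium
# climate sensitivity (ECS) in the IPCC AR5 range of 1.5-4.5°C per doubling:
for ecs in (1.5, 3.0, 4.5):
    print(f"ECS {ecs}°C -> lambda ≈ {ecs / dF:.2f} °C per W/m²")
```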

As the Wikipedia site makes clear, it is incredibly difficult to estimate climate sensitivity. I wrote about this a few years ago (July 10, 2018). One of the key difficulties in measurement has to do with the fact that most (2/3) of the impacts of the greenhouse gases are not directly connected to blocking infrared radiation (heat); rather, they affect the phenomenon indirectly, through various feedback mechanisms. As I explained in my earlier blog, the main feedback mechanism comes from accelerated water evaporation from our oceans. While we mostly talk about CO2, water vapor is a powerful greenhouse gas in its own right.

Below are Wikipedia’s aggregations of the IPCC’s predictions, accumulated from various reports throughout the years:

Historical estimates of climate sensitivity from the IPCC assessments. The first three reports gave a qualitative likely range, while the fourth and fifth assessment report formally quantified the uncertainty. The dark blue range is judged as being more than 66% likely.

Figure 1

The most recent IPCC report summarizes the state of knowledge on this issue:

The 2013 IPCC Fifth Assessment Report reverted to the earlier range of 1.5 to 4.5 °C (2.7 to 8.1 °F) (with high confidence), because some estimates using industrial-age data came out low. (See the next section for details.)[18] The report also stated that ECS is extremely unlikely to be less than 1 °C (1.8 °F) (high confidence), and is very unlikely to be greater than 6 °C (11 °F) (medium confidence). These values were estimated by combining the available data with expert judgement.[53]

When the IPCC began to produce its IPCC Sixth Assessment Report many climate models began to show higher climate sensitivity. The estimates for Equilibrium Climate Sensitivity changed from 3.2 °C to 3.7 °C and the estimates for the Transient climate response from 1.8 °C to 2.0 °C. This is probably due to better understanding of the role of clouds and aerosols.[60]

We continue to learn that uncertainty leads to denial. We can see similar problems with today’s groups who refuse to get the approved COVID-19 vaccines. Unfortunately, mass refusal to get vaccinated means we cannot reach herd immunity, a key part of what makes vaccines effective in the long run. Both the US and Brazil are seeing correlations between anti-vaccine sentiments and presumed (sometimes “manufactured”) uncertainty about the vaccines’ effectiveness and side effects.

With COVID-19, we can see the effects of this uncertainty play out in real time. Since climate change unfolds over a longer timeframe, it is more difficult to show direct correlations between global anthropogenic carbon emissions and the resulting increase in local temperatures worldwide. We need all the help we can get in communicating that addressing climate change is just as urgent as slowing the pandemic.

Figure 2 shows various organizations’ current instrumental measurements of rising global temperatures.

Figure 2

Wikipedia describes the methodology:

The temperature data for the record come from measurements from land stations and ships. On land, temperature sensors are kept in a Stevenson screen or a maximum minimum temperature system (MMTS). The sea record consists of surface ships taking sea temperature measurements from engine inlets or buckets. The land and marine records can be compared.[31] Land and sea measurement and instrument calibration is the responsibility of national meteorological services. Standardization of methods is organized through the World Meteorological Organization (and formerly through its predecessor, the International Meteorological Organization).[32]

Figure 3 describes the IPCC’s take on the global risks associated with rising temperatures. This particular IPCC report also presents a summary of the reasoning for the Paris agreement (see the December 14, 2015 blog). Next week, I’ll look at this report in more depth. The colors in all of the categories in Figure 3 show major changes above 1.5°C (2.7°F).

Figure 3 – Risks posed by rising temperatures

The majority of people don’t live in areas with “average” temperatures. At this point, most people have moved to cities, which can act as heat islands where temperatures climb well above the measured “average.” (See the July 23, 2019 blog on this issue.) I’ll describe how these facts impact individuals’ health and mortality in future blogs.

Posted in Anthropocene, Anthropogenic, Climate Change, Sustainability

Heat Dome: Business as Usual

The “heat dome” has been at the top of the news in recent days, starting almost immediately after the official start of summer on Sunday, June 20th. Sonya Landau, the editor of this blog, wrote a beautiful and timely guest post (June 22nd), reporting from under the dome in Tucson, Arizona. While it has resulted in dangerous heat in areas already known for their hot summers, this enormous dome has also had disastrous consequences in historically cooler regions like the Pacific Northwest. Press outlets have explained the heat dome as a result of high pressure in the atmosphere that blocks the dissipation of heat. Well, that certainly fits the description of the effects of the greenhouse gases that now cover the world. Wikipedia gives us a basic definition:

A greenhouse gas (GHG or GhG) is a gas that absorbs and emits radiant energy within the thermal infrared range, causing the greenhouse effect.

Last Monday, an Op-Ed in The New York Times, “That Heat Dome? Yeah, It’s Climate Change,” provided a more technical explanation of the heat dome in the northwestern US and Canada:

The heat wave afflicting the Pacific Northwest is characterized by what is known as an omega block pattern, because of the shape the sharply curving jet stream makes, like the Greek letter omega (Ω). This omega curve is part of a pattern of pronounced north-south wiggles made by the jet stream as it traverses the Northern Hemisphere. It is an example of a phenomenon known as wave resonance, which scientists (including one of us) have shown is increasingly favored by the considerable warming of the Arctic.

The Op-Ed warned that this heat dome is huge and that similar phenomena will only get worse and more frequent unless we do something to slow them down. A recent report by the IEA (see last week’s blog) enumerated some of the steps we can take to make that happen. An article from December 2019 quotes some experts who argue that climate change is accelerating and we are all living under its threat:

Climate change and its effects are accelerating, with climate related disasters piling up, season after season.

“Things are getting worse,” said Petteri Taalas, Secretary General of the World Meteorological Organization, which on Tuesday issued its annual state of the global climate report, concluding a decade of what it called exceptional global heat. “It’s more urgent than ever to proceed with mitigation.”

But reducing greenhouse gas emissions to fight climate change will require drastic measures, Dr. Taalas said. “The only solution is to get rid of fossil fuels in power production, industry and transportation,” he said.

I thought it was time to quantify the concept.

NOAA (National Oceanic and Atmospheric Administration) and the Scripps Institution recently published the most up-to-date data of the atmospheric carbon dioxide concentrations measured at the Mauna Loa Observatory in Hawaii (see December 1, 2015 blog). These data are shown in Figure 1.

Figure 1

Back in 2015, I looked at a curve similar to Figure 1 and explained the yearly oscillations:

However, [the figure] also shows something else: superimposed on the steady CO2 concentrations in Mauna Loa are very regular oscillations. Such oscillations are absent from similar measurements taken in Antarctica and they vary with the latitude of the site. These oscillations directly exemplify the yearly cycle and the difference between each site’s carbon dioxide source and its sink. The fluctuations represent the full complexities of the carbon cycle as it equilibrates between the land, the air, and the ocean. The yearly cycles shown in Figure 3 were attributed to the large percentage of trees in the northern hemisphere that shed their leaves in the fall and regrow them in the spring. The corresponding result is that less carbon dioxide is captured during the winter than in the summer.

Figure 2 shows the updated parts of the curve (post-2015).


Figure 2

A simple inspection of Figure 1 makes it clear that the atmospheric carbon dioxide concentration not only increases with time but accelerates. Fortunately, NOAA’s report also includes a table of the yearly growth rates of the concentration. These data are shown in Figure 3, from which we can see a linear increase in these rates: a constant acceleration.


Figure 3

One-dimensional motion with constant acceleration is described by:

(1)       X(t) = X0 + V0*t + ½*A*t²

where X is the displacement, V0 is the initial velocity, A is the acceleration, and t is time.

Velocity is defined as:

(2)       V = V0 + A*t

The straight line in Figure 3, which was fitted to the yearly rates, has a slope of 0.028 ppmv/year² and an intercept of 0.75 ppmv/year (ppmv, parts per million by volume, is a common unit for concentrations of greenhouse gases in the air). In the language of Equation 2, the slope is the acceleration A and the intercept is the initial velocity V0.

We can put these parameter values into Equation 1 to calculate when, assuming constant acceleration values, the concentration of carbon dioxide will double to 560 ppmv, compared to the pre-Industrial Revolution concentration of 280 ppmv:

0.014*t² + 0.75*t + 310 = 560

Solving for t, we get 110 years after the start of Figure 1 in 1959, which puts us in 2069. That’s beyond my lifetime but well within the expected lifetime of my children and grandchildren. It’s also way before the definition of “Now” in my book.
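
For anyone who wants to reproduce the arithmetic, here is a minimal sketch (note that the ½*A term in Equation 1 is 0.028/2 = 0.014):

```python
import numpy as np

# Solve 0.014*t² + 0.75*t + 310 = 560 for t: constant-acceleration growth
# of the CO2 concentration reaching double the pre-industrial 280 ppmv.
HALF_A = 0.028 / 2   # ½*A, ppmv/year² (slope of Figure 3)
V0 = 0.75            # ppmv/year (intercept of Figure 3)
X0 = 310.0           # ppmv, starting concentration used above
TARGET = 560.0       # ppmv

t = np.roots([HALF_A, V0, X0 - TARGET]).max()  # keep the positive root
print(f"t ≈ {t:.1f} years after 1959, i.e., around {1959 + round(t)}")
# -> t ≈ 109.5 years, i.e., around 2069
```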

The change in global air temperature that corresponds to doubling the carbon dioxide concentration relative to preindustrial conditions is called “climate sensitivity.” This is probably the most important parameter connecting the atmospheric carbon concentration to global temperature.

Of course, the timing estimates in this blog are completely independent of any computer simulations, modeling, or scenario building. They rely completely on experimental results over the last 60 years, which include, among other developments, the worst pandemic in the last 100 years. The data show that the acceleration in the concentration of carbon dioxide over this period has been constant and we can assume that it will stay that way over the next 50 years unless we change global policies in significant ways.

Next week, I’ll discuss the significance of reaching the climate sensitivity point in terms of temperature and livability. Meanwhile, you can scan through an earlier blog (July 10, 2018) on the role of feedback in estimating atmospheric concentrations of carbon dioxide and the expected temperature changes.

Posted in Anthropocene, Climate Change, Extreme Weather

The IEA, Heat, and Net Zero

Summer has officially started. Over the last week or so, I’ve been keeping track of which large US cities have experienced temperatures above 100°F, according to the New York Times weather report (see the August 18, 2020 blog for descriptions of these reports). American cities that have consistently shown temperatures above 110°F include Las Vegas, Phoenix, and Tucson, all of which are in the Southwest. A few other areas across the country, most shockingly in the Pacific Northwest, have been fluctuating between 100°F and 110°F. These extreme conditions regularly cause severe droughts, water shortages, and fires. The press has been covering the matter in depth.

Sonya Landau, who edits this blog and wrote last week’s guest post, lives in Tucson, Arizona. She was kind enough to share an account of some of the consequences of this intense heat. All of this serves as an urgent reminder that we are already feeling the effects of climate change and we have to do something about it—not in the future but now. The Paris Agreement, which we have just rejoined, set an objective to limit human-caused warming to no more than 1.5°C (2.7°F) above pre-industrial conditions by 2050. A recent report by the International Energy Agency (IEA) laid out a roadmap to achieve this objective. The IEA is probably the most qualified global agency to analyze our global energy use and its likely consequences:

The International Energy Agency (IEA; French: Agence internationale de l’énergie) is a Paris-based autonomous intergovernmental organisation established in the framework of the Organisation for Economic Co-operation and Development (OECD) in 1974 in the wake of the 1973 oil crisis.[1] The IEA was initially dedicated to responding to physical disruptions in the supply of oil, as well as serving as an information source on statistics about the international oil market and other energy sectors. It is best known for the publication of its annual World Energy Outlook.[2]

In the decades since, its role has expanded to cover the entire global energy system, encompassing traditional energy sources such as oil, gas, and coal as well as cleaner and faster growing ones such as solar PV, wind power and biofuels. As the Covid-19 pandemic set off a global health and economic crisis in early 2020, the IEA called on governments to ensure that their economic recovery plans focus on clean energy investments in order to create the conditions for a sustainable recovery and long-term structural decline in carbon emissions.[3]

Today the IEA acts as a policy adviser to its member states, as well as major emerging economies such as Brazil, China, India, Indonesia and South Africa to support energy security and advance the clean energy transition worldwide. The Agency’s mandate has broadened to focus on providing analysis, data, policy recommendations and solutions to help countries ensure secure, affordable and sustainable energy for all. In particular, it has focused on supporting global efforts to accelerate the clean energy transition and mitigate climate change. The IEA has a broad role in promoting rational energy policies and multinational energy technology co-operation with a view to reaching net zero emissions.[4]

The IEA’s current Executive Director is Fatih Birol, who took office in late 2015 and began his second term four years later.

In response to the growing number of pledges by countries and companies around the world to limit their emissions to net zero by 2050 or soon after (a timeline that, critics note, is not fully compatible with the 1.5 °C target, which would require net zero by the mid-2030s), the IEA announced in January 2021 that it would produce a roadmap for the global energy sector to reach net zero by 2050. The IEA said the report, due to be published on May 18, 2021, would try to map out a pathway in line with preventing global temperatures from rising above 1.5 °C – something that environmental groups, investors and companies had been urging the IEA to do.[5] All IEA member countries have signed the Paris Agreement, which strives to limit warming to 1.5 °C, and two thirds of IEA member governments have made commitments to emission neutrality in 2050.

The agency’s official site has descriptions of its energy-related reports.

Last month, the IEA posted a new report, “Net Zero by 2050: A Roadmap for the Global Energy Sector.” I found this report to be so important that I have decided to dedicate the second half of both my courses next semester to the topic.

I took the opening figure for this blog, “Decarbonization Targets for the largest US Utilities,” from the Visual Capitalist site. It is very much in keeping with the spirit of the IEA’s official report. I admire this site’s climate change-related graphics and have shared them here before (see the 2021 blogs on January 5th, March 30th, and April 6th). They do a great job of illustrating massive amounts of data in a way that is more accessible to non-scientists, but the resulting graphics can be very large and crowded. The one at the top of this blog is a good example. Don’t worry if you cannot read everything; the original copy on their site lets you enlarge the figure to your liking. The figure demonstrates the current progress on decarbonizing the most important component of the needed energy transition: electrical utilities.

The IEA report is long (224 pages), with impressive graphics of its own. The agency’s press release describes its focus:

The world has a viable pathway to building a global energy sector with net-zero emissions in 2050, but it is narrow and requires an unprecedented transformation of how energy is produced, transported and used globally, the International Energy Agency said in a landmark special report released today.

Climate pledges by governments to date – even if fully achieved – would fall well short of what is required to bring global energy-related carbon dioxide (CO2) emissions to net zero by 2050 and give the world an even chance of limiting the global temperature rise to 1.5 °C, according to the new report, Net Zero by 2050: a Roadmap for the Global Energy Sector.

The report is the world’s first comprehensive study of how to transition to a net zero energy system by 2050 while ensuring stable and affordable energy supplies, providing universal energy access, and enabling robust economic growth. It sets out a cost-effective and economically productive pathway, resulting in a clean, dynamic and resilient energy economy dominated by renewables like solar and wind instead of fossil fuels. The report also examines key uncertainties, such as the roles of bioenergy, carbon capture and behavioural changes in reaching net zero.

“Our Roadmap shows the priority actions that are needed today to ensure the opportunity of net-zero emissions by 2050 – narrow but still achievable – is not lost. The scale and speed of the efforts demanded by this critical and formidable goal – our best chance of tackling climate change and limiting global warming to 1.5 °C – make this perhaps the greatest challenge humankind has ever faced,” said Fatih Birol, the IEA Executive Director. “The IEA’s pathway to this brighter future brings a historic surge in clean energy investment that creates millions of new jobs and lifts global economic growth. Moving the world onto that pathway requires strong and credible policy actions from governments, underpinned by much greater international cooperation.”

Figure 1 shows the present situation. The roadmap includes similar figures at five-year intervals, all judged against the 2050 net-zero deadline.

Figure 1 – Where we stand now: promises as of 2021

For clarity, the interior circles show the individual contributions to the total CO2 emissions:

Blue: Power – 13.5 Gt (G stands for billion)
Green: Industry – 8.5 Gt
Yellow: Transportation – 7.2 Gt
Violet: Buildings – 2.9 Gt
White: Other – 1.9 Gt

The total comes to the equivalent of 33.9 billion tons of carbon dioxide.
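
As a quick sanity check, here is a short sketch that sums the sector values listed above. The individual entries add up to 34.0 Gt, which matches the quoted 33.9 Gt within the rounding of each entry:

```python
# Sector contributions to CO2 emissions (Gt), from the list above
sectors = {
    "Power": 13.5,
    "Industry": 8.5,
    "Transportation": 7.2,
    "Buildings": 2.9,
    "Other": 1.9,
}

total = sum(sectors.values())
print(f"Total: {total:.1f} Gt CO2")  # 34.0 Gt, vs. the quoted 33.9 Gt
for name, gt in sectors.items():
    print(f"{name:>15}: {gt:5.1f} Gt  ({gt / total:5.1%})")
```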

Figure 2 shows the proposed dynamics for the rest of the first half of the 21st century.

Figure 2 – Key milestones on the pathway to net zero

In the near future, I will turn my focus to the roles that energy companies and governments can/should play in the energy transition, as well as the importance of timing. I have discussed all of these topics numerous times in the more than 9 years since I began this blog but they require much sharper focus now that the IEA has published its newest report. I will also try to better define the concept of a business-as-usual scenario to illustrate the consequences if we fail to follow our commitments to drive carbon emissions down to zero by mid-century. As always, I want to stress that this is already a problem and we have to start working on adaptation and mitigation now.


Guest Blog From Sonya Landau: Heat and COVID Disparities

Walking outside in southern Arizona right now is akin to walking into a giant oven. Waves of heat waft toward you from all sides the moment you set foot out the door. We always joke, “but it’s a dry heat,” and it’s true: the humidity is so low that standing in the shade of a telephone pole feels several degrees cooler. The problem is, a few degrees doesn’t mean much once it’s over 105°F, much less over 110°F. Summer (June especially) is always ridiculously hot here—not for the faint of heart—but the last few years have gotten to absurd levels. This year is especially bad and last week, we suffered through a massive heat wave, part of the “heat dome” that has been affecting half of the country:

Tucson hit a high Tuesday of 115 degrees, shattering a record that stood for 125 years.

The mark was just two degrees below Tucson’s highest temperature ever recorded of 117 on June 26, 1990.

The National Weather Service said this was the fourth consecutive day in Tucson of record highs stretching from Saturday at 110, Sunday at 112, Monday at 112 to Tuesday’s 115.

Phoenix fared even worse:

The high temperature in Phoenix on Thursday hit 118, breaking previous records by 4 degrees, according to an update from the National Weather Service.

If you ask Arizonans what they do to beat the heat, their first answer will almost always be to stay inside an air-conditioned—or at least evaporative cooler-cooled—building (we call them “swamp” coolers here). Those I know who leave the house voluntarily usually choose to do so either in the very early morning or after the sun sets. As we have seen with the COVID-19 pandemic, however, that sort of answer implies a certain amount of privilege. Over the last year and a half, while much of the country switched to a remote work model, this was not an option for essential workers. There are just certain jobs you have to do in person.

The same is true for those who work outside in this heat and—though the risk is different—a lot of the same systems and structures that drove the socioeconomic and racial disparities in the COVID-19 crisis also apply here.

Social Factors

We know that a disproportionate number of black and brown people have been infected with and have died from COVID-19. There are several factors that have contributed to this. The CDC lists the following:

  • Neighborhood and physical environment: There is evidence that people in racial and ethnic minority groups are more likely to live in areas with high rates of new COVID-19 infections (incidence).2,6
  • Housing: Crowded living conditions and unstable housing contribute to transmission of infectious diseases and can hinder COVID-19 prevention strategies like hygiene measures, self-isolation, or self-quarantine. Increasingly disproportionate unemployment rates for some racial and ethnic minority groups during the COVID-19 pandemic8 may lead to greater risk of eviction and homelessness or sharing housing. In some cultures, it is common for family members of many generations to live in one household.
  • Occupation: Racial and ethnic minority groups are disproportionately represented in essential work settings such as healthcare facilities, farms, factories, warehouses, food processing, accommodation and food services, retail services, grocery stores, and public transportation.19,20,21,22 Some people who work in these settings have more chances to be exposed to COVID-19 because of several factors. These include close contact with the public or other workers, not being able to work from home, and needing to work when sick because they do not have paid sick days.23
  • Education, income, and wealth gaps: Inequities in access to high-quality education can lead to lower high school completion rates and barriers to college entrance. This may limit future job options and lead to lower paying or less stable jobs. People with limited job options likely have less flexibility to leave jobs that may put them at higher risk of exposure to COVID-19.

Heat, like COVID, is most dangerous to those who are repeatedly exposed without protection—we saw what the lack of PPE did to so many of the front-line workers.

Over the past 30 years, heat waves have killed more people than all other weather-related natural disasters combined.

… Heat waves are most likely to affect people who work or live outdoors. The Bureau of Labor Statistics estimates that half of all U.S. workers have jobs that require them to be outdoors. Of those, 6.1 million are in construction, 4.6 million are in logistics, and 2.1 million are in agriculture.17 In July 2018, a United States Postal Service worker died on the job when the temperature hit 117 F.

OSHA also includes, “… baggage handlers, electrical power transmission and control workers, and landscaping and yard maintenance workers.” We can add miners, warehouse workers, and service/utility providers. These are all examples of manual labor—jobs that are traditionally done by people on the lower end of the socioeconomic scale, often including minorities and those with less education.

Legislation on Heat

As it stands, there is no federal standard in place to protect workers from dangerous heat conditions. I was pleasantly surprised to hear that there is a bill in the works to help rectify this problem and mitigate some of the danger. The Asunción Valdivia Heat Illness and Fatality Prevention Act was introduced in the Senate in October 2020 and in the House in March of this year. It is named after a farmworker who died after working for 10 hours straight (no water or breaks provided), picking grapes in 105°F heat.

Heat-related illnesses can cause heat cramps, organ damage, heat exhaustion, stroke, and even death. Between 1992 and 2017, heat stress injuries killed 815 U.S. workers and seriously injured more than 70,000. Climate change is making the problem worse. According to the National Oceanic and Atmospheric Administration, the last seven years have been the hottest on record, with 2020 coming in only second to 2016. Farmworkers and construction workers suffer the highest incidence of heat illness. And no matter what the weather is outside, workers in factories, commercial kitchens, and other workplaces, including ones where workers must wear personal protective equipment (PPE), can face dangerously high heat conditions all year round.

The Asunción Valdivia Heat Illness and Fatality Prevention Act will protect workers against occupational exposure to excessive heat by: 

  • Requiring the Occupational Safety and Health Administration (OSHA) to establish an enforceable standard to protect workers in high-heat environments with measures like paid breaks in cool spaces, access to water, limitations on time exposed to heat, and emergency response for workers with heat-related illness; and

  • Directing employers to provide training for their employees on the risk factors that can lead to heat illness, and guidance on the proper procedures for responding to symptoms.

I like that it gives OSHA the power to enforce these safety measures but first, we’ll have to see if this bill makes any progress. According to an article in Civil Eats:

This is far from a new idea. In fact, it was first suggested in 1972 by the National Institute for Occupational Safety & Health, which was charged with recommending health and safety standards at the time. The recommendation was revised in 1986 and again in 2016, but has yet to be heeded.

Meanwhile, it’s not just those who are actively working in the heat that are suffering. Air conditioning is expensive but at temperatures like these, it’s dangerous to live somewhere without cooling amenities. For those without housing, it can be especially lethal:

Being homeless in an era of mega-heat waves is particularly deadly, as homeless people represented half of last year’s record 323 heat-related deaths across the Phoenix area. The homeless population has grown during the pandemic, and activists are now worried that an expiring eviction moratorium will mean others will lose their homes at the height of summer.

People who live in crowded or poorly insulated trailers or apartments or who cannot afford A/C are also at higher risk for serious heat exposure. This subset especially includes poor and elderly people, a disproportionate percentage of whom are people of color. Many of the same factors that the CDC credits with contributing to high COVID-19 mortality among minorities are equally relevant here, including housing; occupation; and education, income, and wealth gaps.

As in several other cities, there are a handful of “cooling centers” in Tucson and Phoenix, where people can go for “Indoor heat relief, water, snacks, and supplies during posted hours.” Again, this is a good start but seems like a small gesture given the vast number of people who are either homeless or whose homes are not sufficiently cooled (especially given that many of these people don’t know of these facilities).

Vaccines

Looking again at the parallels with COVID-19, we can see the thoroughly asymmetrical way in which the vaccine has been rolled out: richer countries with access to and control over vaccine production have relatively high vaccination rates, while poorer countries have been left in the lurch:

Although more than 700 million vaccine doses have been administered globally, richer countries have received more than 87 per cent, and low-income countries just 0.2 per cent. 

Here’s a visual representation:


In order to truly put the pandemic behind us, we need to ensure that the majority of the world gets vaccinated (sooner, rather than later) but that doesn’t quite seem to be happening. Even COVID-19 Vaccines Global Access (COVAX), an enterprise aimed at providing equitable access to COVID-19 vaccines, has so far been unable to realize its big goals.

COVAX, managed by Gavi, along with the Coalition for Epidemic Preparedness Innovations and WHO, was designed to stand on two legs: one for HICs [high-income countries], which would pay for their own vaccines, and the other for 92 lower-income countries, whose doses would be financed by donor aid.

… even with full financing, the COVAX roll-out has moved much more slowly than that in high-income countries (HICs). Speaker after speaker at the summit lamented the gross inequity in access to vaccines. “Today, ten countries have administered 75% of all COVID-19 vaccines, but, in poor countries, health workers and people with underlying conditions cannot access them. This is not only manifestly unjust, it is also self-defeating”, UN secretary general António Guterres told the gathering. “COVAX has delivered over 72 million doses to 125 countries. But that is far less than 172 million it should have delivered by now.” Of the 2.1 billion COVID-19 vaccine doses administered worldwide so far, COVAX has been responsible for less than 4%.

Next Steps

Both COVID and heat exposure reflect the far-reaching health effects of existing economic, socioeconomic, and racial disparities. These are all symptoms of systemic inequities—locally and globally; there is no easy, quick fix for things like poverty, homelessness, overcrowding, or a lack of medical infrastructure but we can make changes to more specific factors like work conditions, education, and access to healthcare.

The first step is always awareness: educating the world about these problems. From there, it takes a lot of teamwork and pressure to put any meaningful protections into place. The new legislation for heat safety standards gives me hope. But hope isn’t enough; thoughts and prayers don’t fix anything. We—especially those of us who benefit from existing racial and socioeconomic imbalances—need to take action, to keep advocating for more (enforceable) legislation and systemic changes.

I have been Micha’s editor and helped run this blog since the beginning. I’m excited to have the chance to contribute to Climate Change Fork again. My last blog was also about Tucson.


Sputnik and China: US Response to Tech Rivalry


Sputnik

Back in April, I outlined President Biden’s new American Jobs Plan. Granted, the $2.3 trillion plan was more of a wish list than a proposal; given the 50-50 split in the Senate and the narrow majority in the House, it has no chance of being approved as written. However, the proposal was important in how it outlined the priorities of the new administration. In that blog, I color-coded the sections that I consider to be climate change-related, whose budget totaled $1.35 trillion. Below are the two key paragraphs from that blog:

In Tables 1-3, I have highlighted the entries that can be associated with climate change, and which I will speak to specifically in future blogs. Of course, the whole effort addresses a multitude of overlapping issues; none of the items can be exclusively associated with one of the entries of my earlier Venn diagram. Rounding up from the sum of the estimated costs for all of the entries, we come to $1.9 trillion. With the addition of the $400 billion for in-home care, we reach the plan’s quoted $2.3 trillion. The sum of the highlighted, climate-related items comes to $1.35 trillion, 70% of the total cost.

At this stage, the plan is a proposal, not a legislative commitment. In this sense, it is similar to the Paris Agreement (great intentions but little enforcement/implementation power). Right now, the size and the scope of the plan make it vulnerable to cherry-picking by both supporters and opponents. Serious questions remain in terms of timing (the plan is supposed to take the rest of the decade) and payment (suggested distribution over the next 15 years). The soonest, most important commitment in terms of timing is the promise of carbon-free electric power delivery by 2035.

The next logical step was to wait for this wish list to be parceled into smaller, more manageable sections—a strategy that would give each piece a much better chance of passing through into law. About a week ago, we saw the first significant progress in this process with the United States Innovation and Competition Act of 2021:

Senate Overwhelmingly Passes Bill to Bolster Competitiveness With China
The wide margin of support reflected a sense of urgency among lawmakers in both parties about shoring up the technological and industrial capacity of the United States to counter Beijing.

WASHINGTON — The Senate overwhelmingly passed legislation on Tuesday that would pour nearly a quarter-trillion dollars over the next five years into scientific research and development to bolster competitiveness against China.

The measure, the core of which was a collaboration between [Senate Majority Leader Chuck] Schumer and Senator Todd Young, Republican of Indiana, would prop up semiconductor makers by providing $52 billion in emergency subsidies with few restrictions. That subsidy program will send a lifeline to the industry during a global chip shortage that shut auto plants and rippled through the global supply chain.

The bill would sink hundreds of billions more into scientific research and development pipelines in the United States, create grants and foster agreements between private companies and research universities to encourage breakthroughs in new technology.

This has not passed into law yet. The proposed law still has to be approved by the House with a simple majority. I sincerely hope that the House does not erect obstacles to this happening and that the president signs it.

As we see in the NYT piece above, the legislation’s success in the Senate was largely a result of its being framed as a measure to protect the country’s security and industry from China’s success. I thought back to 1957. The Soviet Union managed to launch Sputnik, the first satellite, into space as I was finishing high school in Israel. The US Senate reacted with new legislation, which was presented similarly:

On October 4, 1957, the Soviet Union shocked the people of the United States by successfully launching the first earth-orbiting satellite, Sputnik. During the Cold War, Americans until that moment had felt protected by their technological superiority. Suddenly the nation found itself lagging behind the Russians in the Space Race, and Americans worried that their educational system was not producing enough scientists and engineers. Sometimes, however, a shock to the system can open political opportunities.

On the day Sputnik first orbited the earth, the chief clerk of the Senate’s Education and Labor Committee, Stewart McClure, sent a memo to his chairman, Alabama Democrat Lister Hill, reminding him that during the last three Congresses the Senate had passed legislation for federal funding of education, but that all of those bills had died in the House. Perhaps if they called the education bill a defense bill they might get it enacted. Senator Hill—a former Democratic whip and a savvy legislative tactician—seized upon the idea, which led to the National Defense Education Act.

There had been strong resistance to federal aid to education, but as public opinion demanded government action in the wake of Sputnik, the Senate once again moved ahead with its education bill. Knowing that opponents in the House remained resistant, Senator Hill conferred with another Alabama Democrat, Representative Carl Elliott, who chaired the House subcommittee on education. Meeting in Montgomery, they devised a strategy for getting the NDEA enacted. They framed the debate around the question of whether federal funds should go to students as grants, as the Senate preferred, or as loans. Opponents in the House denounced the notion of grants as “socialist.” When the House prevailed on loans, the rest of the Senate’s version of the bill swooped through to passage. In fact, the grants versus loans debate had been a ploy. Senator Hill and Representative Elliott realized that having voted the same bill down repeatedly in the past, the House had to have something on which it could win.

The National Defense Education Act of 1958 became one of the most successful legislative initiatives in higher education. It established the legitimacy of federal funding of higher education and made substantial funds available for low-cost student loans, boosting public and private colleges and universities. Although aimed primarily at education in science, mathematics, and foreign languages, the act also helped expand college libraries and other services for all students. The funding began in 1958 and was increased over the next several years. The results were conspicuous: in 1960 there were 3.6 million students in college, and by 1970 there were 7.5 million. Many of them got their college education only because of the availability of NDEA loans, thanks to Sputnik and to Senator Hill’s readiness to seize the moment.

The 2021 Innovation and Competition Act has the potential to have a similarly strong impact on both education and research and development (R&D), globally. Once the bill passes into law, I will elaborate on these possibilities, including how they may benefit research aimed at facilitating climate change mitigation and adaptation.

One of the main factors that facilitated such rare bipartisan support for this legislation is the feeling (and the fear) that allowing China to increase its R&D efforts while we decrease our own constitutes a real danger to our national security.

Tables 1 and 2 below use data from recent OECD (Organization for Economic Co-operation and Development) compilations to explore this issue (I am using the Wikipedia source for Table 1 because it also includes data for countries that are not members of the OECD). Table 1 shows the 10 largest spenders on R&D, along with their spending as a percentage of GDP and per capita.

Table 1 – R&D expenditures, as % of GDP and per capita, of the 10 largest spenders in 2019

| Country | Expenditures on R&D (billions of US$, PPP) | % of GDP (PPP) | Expenditure on R&D per capita (US$, PPP) |
|---------|--------------------------------------------|----------------|------------------------------------------|
| US | 613 | 3.1 | 1,866 |
| China | 515 | 2.2 | 368 |
| Japan | 173 | 3.2 | 1,375 |
| Germany | 132 | 3.2 | 1,586 |
| South Korea | 100 | 4.6 | 1,935 |
| France | 64 | 2.2 | 944 |
| India | 59 | 0.65 | 43.4 |
| UK | 52 | 1.8 | 762 |
| Taiwan | 43 | 3.5 | 1,822 |
| Russia | 39 | 1.0 | 263 |

Table 1 shows the expenditure in terms of PPP (Purchasing Power Parity). For comparison, the 2019 GDP of China, measured in nominal US$, was $14.3 trillion; in PPP-adjusted currency, that amounts to $23.5 trillion—larger than that of the US for the same year (by definition, the US nominal and PPP exchange rates are the same). In other words, in relative terms, China is spending significantly more on R&D than the US does.

Table 2 steps back to look at the US and China’s changes in expenditures over the last 10 years.

Table 2 – R&D expenditures of the US and China as a percentage of GDP

| Year | China | US |
|------|-------|-----|
| 2009 | 1.665 | 2.813 |
| 2010 | 1.714 | 2.735 |
| 2011 | 1.780 | 2.765 |
| 2012 | 1.912 | 2.682 |
| 2013 | 1.998 | 2.712 |
| 2014 | 2.022 | 2.721 |
| 2015 | 2.057 | 2.719 |
| 2016 | 2.100 | 2.788 |
| 2017 | 2.116 | 2.847 |
| 2018 | 2.141 | 2.947 |
| 2019 | 2.235 | 3.07 |

Looking at both tables, one might be tempted to question the assertion that China is outpacing the US in dedicating resources to R&D. However, the data are all given as percentages of the countries’ GDP. In nominal US$, the GDP of China for the period 2009–2019 increased from $5.1 to $14.3 trillion, a growth of 180%. The US GDP over the same period went from $14.4 to $21.4 trillion, an increase of 49%. That means that even though China’s share of GDP spent on R&D grew only modestly in Table 2, the economy underneath it nearly tripled: in absolute terms, China’s R&D spending roughly quadrupled over the decade, while that of the US grew by about 60%.
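
A short sketch makes this comparison explicit. It combines the Table 2 shares with the nominal GDP figures quoted above to estimate each country’s absolute R&D spending; treat the outputs as rough, back-of-the-envelope numbers:

```python
# Rough estimate of absolute R&D spending: (share of GDP) x (nominal GDP).
# Shares (%) are from Table 2; GDP figures (trillions of nominal US$) are
# the ones quoted in the paragraph above, as (2009, 2019) pairs.
countries = {
    "China": {"gdp": (5.1, 14.3), "rd_share": (1.665, 2.235)},
    "US":    {"gdp": (14.4, 21.4), "rd_share": (2.813, 3.07)},
}

for name, d in countries.items():
    start = d["rd_share"][0] / 100 * d["gdp"][0] * 1000  # billions, 2009
    end   = d["rd_share"][1] / 100 * d["gdp"][1] * 1000  # billions, 2019
    print(f"{name}: ${start:.0f}B (2009) -> ${end:.0f}B (2019), "
          f"+{(end - start) / start:.0%}")
# China: ~$85B -> ~$320B (+276%); US: ~$405B -> ~$657B (+62%)
```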

Interestingly, we can also see from Table 2 that US R&D expenditures, as a share of GDP, declined from 2009 through the early 2010s and only recovered around 2016, after which they increased significantly through 2019, under the Trump administration. That said, we’d need to do some additional research to find out whether this increase was comprehensive or went almost entirely toward military research.

Once the new legislation is approved by the House and signed by the president, I’ll delve more into its possible impact on expenditures for climate change mitigation and adaptation.


Electricity Through Fusion: Hope vs. Reality

I am finishing writing this blog on D-Day, Sunday, June 6th. This commemorates the day the Allied forces invaded Normandy on their way to liberating the rest of Western Europe from the Nazi menace. On April 13, 1945, they reached Farsleben—near the city of Magdeburg, in what’s now the German state of Saxony-Anhalt—and liberated me and my family. Today also happens to be an election day in this state. Many eyes around the world are focused on this election as an early indicator of the strength of the right-wing, anti-immigration party, AfD, ahead of the national German elections in September.

Due to pandemic precautions, this year’s celebrations of D-Day will be muted, similarly to last year’s. I hope that you will use the occasion to go to my last pre-pandemic D-Day entry (June 11, 2019) for my take on this important day. By the time that you read this blog, two days from now, we will all know the results of the German election in Saxony-Anhalt. Meanwhile, I am lighting a memorial candle for all the soldiers who took part in the invasion and hoping that the election results will not remind me too much of those from the early 1930s.

That aside, this blog post is about science. It will be the last blog in this series about factoring in electricity while calculating our energy use and the resulting carbon emissions.

I teach the same two general education courses almost every semester. One is focused on climate change on Earth; the other explores cosmology: what do we know about our universe and how do we know it? I emphasize the connection between the two topics. By definition, the combination of the two covers a lot of territory. Stars are generally organized into galaxies. Rough estimates say there are about two trillion galaxies in total, with a “typical” number of more than a billion stars per galaxy.

Stars are defined in terms of conditions in their core that facilitate the fusion of hydrogen to helium, a process that releases an enormous amount of energy. In other words, fusion runs the universe, including our existence. Our experience with fusion on Earth has mainly related to our use of our sun’s energy—which facilitates life here. We have also witnessed fusion’s destructive potential in the form of the hydrogen bomb. However, since the end of WWII, there has been an effort to recruit its enormous energy potential for peaceful and productive objectives to support human activities on this planet.

These activities have accelerated since we started to realize that our current main sources of energy are changing the chemistry of the atmosphere in a way that—in the not-so-distant future—will raise the temperature of this planet to unlivable conditions. We can look at the inhospitable surface temperature of our close neighbor Venus (around 425°C ≈ 800°F) for a reminder of the most extreme possibilities. The only remedy is to replace our “dirty” energy sources with non-polluting ones. The ultimate hope in this transition is fusion energy.

Several years ago, I discussed the basic physics of fusion and the results of our attempts to utilize it in our energy transition (December 12, 2017). I strongly recommend that everyone reading through this blog revisit that earlier post on this topic (sorry about the extended homework in this blog). Meanwhile, I am borrowing one paragraph from that entry for those of you who don’t want to do all that work:

The ultimate solution to our energy problems is to learn how to use fusion as our source of energy. Since immediately after the Second World War, we have known how to use fusion in a destructive capacity (hydrogen bombs) and have been earnestly trying to learn how to use it for peaceful applications such as converting it into electrical power. It is difficult. To start with, if we want to imitate our sun, we have to create temperatures on the order of 100 million degrees Celsius. Before we can do that, we have to learn how to create or find materials that can be stable at such temperatures: all the materials that we know of will completely decompose in those circumstances. Figure 1 illustrates the facilities engaging in this research and their progress. We are now closer than we have ever been to maintaining a positive balance between energy input and energy output (ignition in the graph) but we are not there yet.

Today, I want to look at the largest coordinated global effort to learn how to fit fusion into the economy in a way that is both efficient and affordable. These qualities will help ensure its place as an important component in our energy transition. There is a facility dedicated to this study, called ITER (International Thermonuclear Experimental Reactor), which means “the way” in Latin. I mentioned ITER in the 2017 blog when I examined the rate of progress of energy transition efforts.

I am bringing all of this up so I can underline the importance of both knowing that electricity needs to be calculated as a secondary energy source (including the multiple alternative names that we discussed in the last two blogs: Scope 2, heat rate, and, here, the distinction between Q-physics and Q-engineering) and understanding how to make those calculations. Grant Hill wrote an insightful piece a few weeks ago for WHYY, a Philadelphia educational channel of PBS, that addresses how crucial communication can be. I have emphasized some pieces that I find especially important:

A fusion experiment promised to be the next step in solving humanity’s energy crisis. It’s a big claim to live up to

It’s called the International Thermonuclear Experimental Reactor, ITER for short. Still under construction in the south of France, the huge doughnut-shaped test reactor is often labeled the world’s largest science experiment, and the next step in the journey to fusion energy.  (The Latin word “iter” means “way” or “journey.”)

ITER is a product of diplomacy. In the 1980s, two longtime foes, the United States and the former Soviet Union, came together with a common mission: to harness this seemingly magical power source for something other than mutual annihilation. By then, the idea of working together was a more novel concept than fusion power itself.

ITER’s design wasn’t finalized until 2001, but its approval carried a big punch: a promise to create 10 times more energy than it consumed. At that point, 35 countries had joined the project to split the cost (and benefits) of such an achievement. The final price tag is still up for debate. If you ask ITER, the bill will run around $25 billion. The U.S. Department of Energy puts it at nearly $65 billion.

ITER’s made a bold promise: to produce 10 times the amount of energy it consumed. Even though ITER was only a test reactor that would never actually connect to the grid and produce electricity, such a result would be a record-smashing number for fusion reactors compared to its predecessor, a reactor called JET in the U.K. That one couldn’t even break-even — meaning it produced less power than it consumed.

ITER’s remarkable energy-production upgrade is thanks to the reactor’s scaled-up design. When it comes to doughnut-shaped fusion reactors called tokamaks, such as ITER and JET, size is a limiting factor. It is ITER’s enormous magnitude, nearly 240 feet tall and weighing 23,000 tons, that allows the organization to make such big claims. And the ITER organization has done so, frequently touting its 10-times power gain number, often called the Q ratio.

But when [science journalist Steven B.] Krivit needed to double-check some figures for a book he was writing, a closer look at the Q ratio ITER promised revealed something concerning, he said.

“I assumed that everybody knew the rate of power that went into these reactors. But the scientists that I spoke to said, ‘Well, actually, we don’t measure the rate of power that goes into the fusion reactors.’ And I’m going, `What are you talking about?’” Krivit said. “We all thought that the rate of power that you talked about from the JET reactor was a comparison of the power coming out versus the power coming in. And they said, ‘No.’ That power ratio doesn’t compare the rate of power coming out versus power coming in. It only compares the ratio of the power that’s used to heat the fuel versus the thermal power that’s produced by the fuel.”

In reality, the Q ratio only speaks to what happens deep inside the reactor when fusion occurs, not the total amount of energy it takes to run the whole operation, or the actual usable electricity the fusion reaction could produce.

According to [ITER physicist Mark] Henderson, physicists and non-physicists think of Q differently.

“For me to put in 50 megawatts of power, I need to pull from the grid about 150 megawatts. And so an engineer would say, ‘Well, Mark, wait a minute, you know, you’re pulling 150 megawatts from the grid, you convert it to 50 megawatts, it gives me 500 megawatts out, but then those 500 megawatts are going to heat water that turns a turbine that then generates electricity that would put back to the grid roughly 150 megawatts,’” he said. “So from an engineer’s perspective, if ITER was a [commercial] fusion reactor, it would be giving a Q-engineering roughly of about one.”

A Q-physics ratio of 10, but a Q-engineering ratio of one. Henderson doubted whether actual engineers who work on fusion sites correctly understood this difference, let alone the public. But it’s an important difference, nonetheless.

“That means the ITER reactor is effectively a zero-power reactor,” Krivit said. “If the design works as expected, we’re going to get the same rate of power coming out as the power going in. Zero extra power that would be available to use for any practical purposes.”
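
To make the distinction concrete, here is a minimal sketch using the round numbers from Henderson’s example above; the ~150 MW of recovered electricity reflects the roughly one-third thermal-to-electric conversion he describes:

```python
# Numbers from Henderson's example above (all in megawatts)
grid_power_in = 150.0   # drawn from the grid to run the heating systems
heating_power = 50.0    # actually delivered to heat the plasma
fusion_power  = 500.0   # thermal power produced by the fusing fuel
electric_out  = 150.0   # electricity a turbine would recover (~1/3 efficiency)

q_physics = fusion_power / heating_power      # the headline figure: 10
q_engineering = electric_out / grid_power_in  # what the grid would see: ~1

print(f"Q-physics:     {q_physics:.0f}")
print(f"Q-engineering: {q_engineering:.0f}  (effectively a zero-power plant)")
```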

Since we are starting to pay not only for our electricity use but also for the carbon emissions that result from that electricity’s production, it is high time that we include secondary energy sourcing in our educational objectives. We need to know both the monetary and environmental costs of the electricity that shapes the world we live in.


*Update: it looks like my candle worked—the right-wing, anti-immigration party, AfD, lost resoundingly in the German state’s election!
