Dynamic Scoring: Taxes and Climate Change

Our government’s executive and legislative branches are in the midst of discussing two important issues: tax breaks and climate change. In truth, though, the only real discussion going on has to do with the tax legislation. Climate change is only being addressed indirectly: the Trump administration officially approved the congressionally mandated climate change report (discussed previously in the August 15, 2017 blog) that was compiled by scientists from 13 government agencies. It is now an “official” document, in spite of the fact that the White House claims the President never read it. Meanwhile, the 23rd Conference of the Parties (COP23) to the United Nations Framework Convention on Climate Change (UNFCCC) is taking place right now in Bonn, Germany. One hundred ninety-five nations are there to discuss implementation of the 2015 Paris Agreement. As we all remember, President Trump announced in June his intention to withdraw the US from this agreement. Nevertheless, the US is a full participant in the meeting. Syria has just announced that it will be joining the agreement, which would leave the US as the only country in the world outside it.

Let me now come back to taxes, starting with Forbes magazine’s “Tax Reform for Dummies”:

You see, they were planning to repeal Obamacare using something called the “budget reconciliation process,” and pay attention, because this becomes relevant with tax reform. Using this process, Congress can pass a bill that’s attached to a fiscal year budget so long as:

  1. The bill directly impacts revenue or spending, and
  2. The bill does not increase the budget deficit after the end of the ten-year budget window (the so-called “Byrd Rule”).

More importantly, using the reconciliation process, Republicans can pass a bill with only a simple majority in the Senate (51 votes), rather than the standard 60. And as mentioned above, the GOP currently holds 52 seats in the Senate, meaning it could have pushed through its signature legislation without a SINGLE VOTE from a Democrat, which is particularly handy considering that vote was never coming.

Well, the Republicans in Congress were able to pass this resolution with a simple majority, specifying that by using dynamic scoring, they will not increase the budget deficit after the specified 10-year period. They set the limit for this accounting at a deficit no larger than $1.5 trillion ($1,500 billion, for those of us who need help with big numbers). Here is an explanation of dynamic scoring:

Tax, spending, and regulatory policies can affect incomes, employment, and other broad measures of economic activity. Dynamic analysis accounts for those macroeconomic impacts, while dynamic scoring uses dynamic analysis in estimating the budgetary impact of proposed policy changes.
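
To make the mechanics concrete, here is a deliberately oversimplified sketch in Python of the difference between static and dynamic scoring. The $200 billion annual cut and the 25% growth-feedback offset are invented numbers for illustration only; real scorekeepers such as the Joint Committee on Taxation use full macroeconomic models.

```python
# Toy comparison of static vs. dynamic scoring of a tax cut.
# All numbers here are invented for illustration; real scorekeepers
# (e.g., the Joint Committee on Taxation) use full macroeconomic models.

def ten_year_cost(annual_cut_billions, growth_feedback=0.0, years=10):
    """Total revenue loss over the budget window, in billions.

    Static scoring uses growth_feedback = 0; dynamic scoring credits
    back some fraction of each year's cut as revenue recouped from the
    extra economic growth the cut is assumed to generate."""
    return sum(annual_cut_billions * (1.0 - growth_feedback)
               for _ in range(years))

static = ten_year_cost(200)                         # no growth feedback
dynamic = ten_year_cost(200, growth_feedback=0.25)  # assumed 25% offset

print(f"Static score:  ${static:,.0f} billion over 10 years")   # $2,000 billion
print(f"Dynamic score: ${dynamic:,.0f} billion over 10 years")  # $1,500 billion
```

Under the assumed feedback, a cut that scores at $2 trillion statically comes in at $1.5 trillion dynamically – exactly the kind of difference that matters when a reconciliation bill must fit under a $1.5 trillion cap.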

To give you an understanding of that timeline and budget, here’s a graph of our national deficit:

Figure 1 Post-WWII budget deficit in the US

The Republican rationale for dynamic scoring of tax cuts is that “tax cuts pay for themselves”:

Ted Cruz got at a similar idea, referencing the tax plan he unveiled Thursday: “[I]t costs, with dynamic scoring, less than $1 trillion. Those are the hard numbers. And every single income decile sees a double-digit increase in after-tax income. … Growth is the answer. And as Reagan demonstrated, if we cut taxes, we can bring back growth.”

Tax cuts can boost economic growth. But the operative word there is “can.” It’s by no means an automatic or perfect relationship.

We know, we know. No one likes a fact check with a non-firm answer. So let’s dig further into this idea.

There’s a simple logic behind the idea that cutting taxes boosts growth: Cutting taxes gives people more money to spend as they like, which can boost economic growth.

Many — but by no means all — economists believe there’s a relationship between cuts and growth. In a 2012 survey of top economists, the University of Chicago’s Booth School of Business found that 35 percent thought cutting taxes would boost economic growth. A roughly equal share, 35 percent, were uncertain. Only 8 percent disagreed or strongly disagreed.

But in practice, it’s not always clear that tax cuts themselves automatically boost the economy, according to a recent study.

“[I]t is by no means obvious, on an ex ante basis, that tax rate cuts will ultimately lead to a larger economy,” as the Brookings Institution’s William Gale and Andrew Samwick wrote in a 2014 paper. Well-designed tax policy can increase growth, they wrote, but to do so, tax cuts have to come alongside spending cuts.

And even then, it can’t just be any spending cuts — it has to be cuts to “unproductive” spending.

“I want to be clear — one can write down models where taxes generate big effects,” Gale told NPR. But models are not the real world, he added. “The empirical evidence is quite different from the modeling results, and the empirical evidence is much weaker.”

President Reagan’s tax cut, the one Senator Cruz is referring to, took place in 1981, in the middle of a serious recession. It also came on the heels of the post-war deficit shown in Figure 1. The tax cut did not pay for itself.

We can now return to the executive summary that precedes the recently approved government Climate Science Special Report. I am condensing it into its main findings, each of which receives a detailed discussion in the full 500-page report:

  • Global annually averaged surface air temperature has increased by about 1.8°F (1.0°C) over the last 115 years (1901–2016). This period is now the warmest in the history of modern civilization.
  • It is extremely likely that human activities, especially emissions of greenhouse gases, are the dominant cause of the observed warming since the mid-20th century.
  • Thousands of studies conducted by researchers around the world have documented changes in surface, atmospheric, and oceanic temperatures; melting glaciers; diminishing snow cover; shrinking sea ice; rising sea levels; ocean acidification; and increasing atmospheric water vapor.
  • Global average sea level has risen by about 7–8 inches since 1900, with almost half (about 3 inches) of that rise occurring since 1993. The incidence of daily tidal flooding is accelerating in more than 25 Atlantic and Gulf Coast cities in the United States.
  • Global average sea levels are expected to continue to rise—by at least several inches in the next 15 years and by 1–4 feet by 2100. A rise of as much as 8 feet by 2100 cannot be ruled out.
  • Heavy rainfall is increasing in intensity and frequency across the United States and globally and is expected to continue to increase.
  • Heatwaves have become more frequent in the United States since the 1960s, while extreme cold temperatures and cold waves are less frequent; over the next few decades (2021–2050), annual average temperatures are expected to rise by about 2.5°F for the United States, relative to the recent past (average from 1976–2005), under all plausible future climate scenarios.
  • The incidence of large forest fires in the western United States and Alaska has increased since the early 1980s and is projected to further increase.
  • Annual trends toward earlier spring melt and reduced snowpack are already affecting water resources in the western United States. Chronic, long-duration hydrological drought is increasingly possible before the end of this century.
  • The magnitude of climate change beyond the next few decades will depend primarily on the amount of greenhouse gases (especially carbon dioxide) emitted globally. Without major reductions in emissions, the increase in annual average global temperature relative to preindustrial times could reach 9°F (5°C) or more by the end of this century. With significant reductions in emissions, the increase in annual average global temperature could be limited to 3.6°F (2°C) or less.

We constantly worry about what kind of damage we in the US are about to experience from these impacts, given the prevailing business-as-usual scenario. Fortunately, a detailed paper in Science magazine (Science 356, 1362 (2017)) gives us some answers:

Estimating economic damage from climate change in the United States

Solomon Hsiang, Robert Kopp, Amir Jina, James Rising, Michael Delgado, Shashank Mohan, D. J. Rasmussen, Robert Muir-Wood, Paul Wilson, Michael Oppenheimer, Kate Larsen, and Trevor Houser

Estimates of climate change damage are central to the design of climate policies. Here, we develop a flexible architecture for computing damages that integrates climate science, econometric analyses, and process models. We use this approach to construct spatially explicit, probabilistic, and empirically derived estimates of economic damage in the United States from climate change. The combined value of market and nonmarket damage across analyzed sectors—agriculture, crime, coastal storms, energy, human mortality, and labor—increases quadratically in global mean temperature, costing roughly 1.2% of gross domestic product per +1°C on average. Importantly, risk is distributed unequally across locations, generating a large transfer of value northward and westward that increases economic inequality. By the late 21st century, the poorest third of counties are projected to experience damages between 2 and 20% of county income (90% chance) under business-as-usual emissions (Representative Concentration Pathway 8.5).

Figure 2 Direct damage in various sectors as a function of rising temperature since the 1980s

The paper’s abstract above indicates a negative impact of 1.2% of GDP for each 1°C (1.8°F) rise. The current GDP of the US is around $18 trillion, so 1.2% of that per 1°C amounts to $216 billion. If we take the recent “typical” growth rate of the economy as 2%, it amounts to $360 billion/year. The loss from 1°C of warming thus amounts to 60% of this “typical” growth.
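
For readers who want to check the arithmetic, here is the same back-of-the-envelope calculation spelled out (the GDP and growth figures are the rounded values used above):

```python
# The back-of-the-envelope arithmetic above, spelled out.
gdp_billions = 18_000.0        # approximate US GDP (~$18 trillion)
damage_per_degree = 0.012      # 1.2% of GDP per +1 degree C (Hsiang et al.)
growth_rate = 0.02             # "typical" 2% annual growth

damage = gdp_billions * damage_per_degree   # $ billions per +1 degree C
growth = gdp_billions * growth_rate         # $ billions per year

print(f"Damage per +1 C:       ${damage:,.0f} billion")       # $216 billion
print(f"Typical annual growth: ${growth:,.0f} billion/year")  # $360 billion
print(f"Damage/growth ratio:   {damage / growth:.0%}")        # 60%
```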

Accounting through dynamic scoring should count losses as well as gains – in this case, those resulting from climate change. Next week I will expand on this topic.


Long-term Solutions: Energy

The last two blogs focused on the Netherlands’ leading role in showing the rest of the world strategies for living on an increasingly inhospitable planet, where the terrain is becoming uninhabitable for both humans and agricultural crops and the oceans are consuming ever larger swaths of land through sea-level rise. The timing of all of this depends on decisions that we make today. The transition into this bleak future is at work even now: large land areas in places such as India and Africa have already been declared uninhabitable, resulting in large-scale migration to more hospitable terrain.

Those blogs examined attempts to adapt to this reality, not through migration (there are fewer and fewer places to go) but by living and growing crops in isolation from the physical environment, using indoor agriculture and floating houses.

All of this requires us to realize that there are limits to our ability to adapt to a hostile environment; we need to be convinced to completely decarbonize our energy sources. The next few blogs will focus on the issue of energy (with the usual disclaimer about interruptions due to unforeseen events), including a description of where we stand in learning how to use the ultimate energy source that all the stars in the universe use: fusion. Since we still don’t know how to harness fusion energy for peaceful purposes, today’s blog will instead look at various forms of solar energy. The solar energy supply is intermittent, and we need to learn how to adapt our usage to accommodate that flux (see the October 21, 2014 and following blogs, which include active exchanges with Joe Morgan).

Eduardo Porter’s article in The New York Times drew my attention to recent scientific activity and debate on this issue:

Fisticuffs Over the Route to a Clean-Energy Future

By Eduardo Porter

Could the entire American economy run on renewable energy alone?

This may seem like an irrelevant question, given that both the White House and Congress are controlled by a party that rejects the scientific consensus about human-driven climate change. But the proposition that it could, long a dream of an environmental movement as wary of nuclear energy as it is of fossil fuels, has been gaining ground among policy makers committed to reducing the nation’s carbon footprint. Democrats in both the United States Senate and in the California Assembly have proposed legislation this year calling for a full transition to renewable energy sources.

They are relying on what looks like a watertight scholarly analysis to support their call: the work of a prominent energy systems engineer from Stanford University, Mark Z. Jacobson. With three co-authors, he published a widely heralded article two years ago asserting that it would be eminently feasible to power the American economy by midcentury almost entirely with energy from the wind, the sun and water. What’s more, it would be cheaper than running it on fossil fuels.

Jacobson et al. wrote a related article that was published in PNAS; it was then thoroughly critiqued on the same platform. For more on the exchange, I invite you to check the original references.

Low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes

Mark Z. Jacobson, Mark A. Delucchi, Mary A. Cameron, and Bethany A. Frew

Introduction:

Worldwide, the development of wind, water, and solar (WWS) energy is expanding rapidly because it is sustainable, clean, safe, widely available, and, in many cases, already economical. However, utilities and grid operators often argue that today’s power systems cannot accommodate significant variable wind and solar supplies without failure (1). Several studies have addressed some of the grid reliability issues with high WWS penetrations (2–21), but no study has analyzed a system that provides the maximum possible long-term environmental and social benefits, namely supplying all energy end uses with only WWS power (no natural gas, biofuels, or nuclear power), with no load loss at reasonable cost. This paper fills this gap. It describes the ability of WWS installations, determined consistently over each of the 48 contiguous United States (CONUS) and with wind and solar power output predicted in time and space with a 3D climate/weather model, accounting for extreme variability, to provide time-dependent load reliably and at low cost when combined with storage and demand response (DR) for the period 2050–2055, when a 100% WWS world may exist.

Conclusions:

The 2050 delivered social (business plus health and climate) cost of all WWS including grid integration (electricity and heat generation, long-distance transmission, storage, and H2) to power all energy sectors of CONUS is ∼11.37 (8.5–15.4) ¢/kWh in 2013 dollars (Table 2). This social cost is not directly comparable with the future conventional electricity cost, which does not integrate transportation, heating/cooling, or industry energy costs. However, subtracting the costs of H2 used in transportation and industry, transmission of electricity producing hydrogen, and UTES (used for thermal loads) gives a rough WWS electric system cost of ∼10.6 (8.25–14.1) ¢/kWh. This cost is lower than the projected social (business plus externality) cost of electricity in a conventional CONUS grid in 2050 of 27.6 (17.2–54.4) ¢/kWh, where 10.6 (8.73–13.4) ¢/kWh is the business cost and ∼17.0 (8.5–41) ¢/kWh is the 2050 health and climate cost, all in 2013 dollars (22). Thus, whereas the 2050 business costs of WWS and conventional electricity are similar, the social (overall) cost of WWS is 40% that of conventional electricity. Because WWS requires zero fuel cost, whereas conventional fuel costs rise over time, long-term WWS costs should stay less than conventional fuel costs. In sum, an all-sector WWS energy economy can run with no load loss over at least 6 y, at low cost. As discussed in SI Appendix, Section S1.L, this zero load loss exceeds electric-utility industry standards for reliability. The key elements are as follows: (i) UTES to store heat and electricity converted to heat; (ii) PCM-CSP to store heat for later electricity use; (iii) pumped hydropower to store electricity for later use; (iv) H2 to convert electricity to motion and heat; (v) ice and water to convert electricity to later cooling or heating; (vi) hydropower as last-resort electricity storage; and (vii) DR. These results hold over a wide range of conditions (e.g., storage charge/discharge rates, capacities, and efficiencies; long-distance transmission need; hours of DR; quantity of solar thermal) (SI Appendix, Table S3 and Figs. S7–S19), suggesting that this approach can lead to low-cost, reliable, 100% WWS systems in many places worldwide.
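
To give a flavor of the hour-by-hour balancing problem at the center of this exchange, here is a toy sketch. It is emphatically not the Jacobson et al. grid integration model: the load, wind, and solar profiles below are invented sine waves and the storage parameters are made up. It only illustrates the bookkeeping by which one counts “lost load” hours when intermittent supply plus storage must meet demand.

```python
import math

# Toy hour-by-hour balance of an all-renewable system with storage.
# Load, wind, and solar profiles are invented sine waves and the storage
# parameters are made up; the Jacobson et al. model instead uses weather
# simulations at 30-second resolution. This only shows the bookkeeping.

HOURS = 24 * 7  # one illustrative week
load  = [100 + 30 * math.sin(2 * math.pi * h / 24) for h in range(HOURS)]
solar = [max(0.0, 120 * math.sin(math.pi * ((h % 24) - 6) / 12)) for h in range(HOURS)]
wind  = [60 + 40 * math.sin(2 * math.pi * h / 50) for h in range(HOURS)]

storage, STORAGE_MAX, CHARGE_EFF = 200.0, 400.0, 0.85  # MWh; losses on charging
lost_load_hours = 0

for h in range(HOURS):
    net = solar[h] + wind[h] - load[h]      # surplus (+) or shortfall (-)
    if net >= 0:
        storage = min(STORAGE_MAX, storage + net * CHARGE_EFF)
    else:
        draw = min(storage, -net)           # discharge to cover the shortfall
        storage -= draw
        if draw < -net:
            lost_load_hours += 1            # demand response would act here

print(f"Hours with lost load: {lost_load_hours} of {HOURS}")
```

As the response below argues, the answer such an exercise produces depends entirely on the assumed profiles, storage sizes, and demand flexibility – which is precisely the point of contention.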

Response to Jacobson et al.:

More than one arrow in the quiver: Why “100% renewables” misses the mark

John E. Bistline and Geoffrey J. Blanford

Jacobson et al. (1) aim to demonstrate that an all renewable energy system is technically feasible. Not only are the study’s conclusions based on strong assumptions and key methodological oversights, but its framing also omits the essential notion of trade-offs. A far more relevant question is how renewable energy technologies relate to the broader set of options for meeting long-term societal goals like managing climate change. Even if the goal were to maximize the deployment of renewable energy (and not decarbonization more generally), Jacobson et al. still fail to provide a satisfactory analysis by glossing over fundamental implications of the technical and economic dimensions of intermittency. We briefly highlight two prominent examples, and then return to the question of framing.

First, the paper’s “no load loss” assertion is predicated on the large-scale availability of energy storage, demand response, and unconstrained transmission to handle periods of supply surpluses and shortfalls. Its assumptions about the cost and reliability of intertemporal demand flexibility within and across sectors, as well as the electrification of end-use demand, are particularly aggressive. The potential scale and scope of these novel technologies remain highly uncertain and speculative, and the narrow confidence intervals presented in Jacobson et al. (1) do not reflect the full range of possible outcomes. Second, the paper does not account for the regional provision of resource adequacy (i.e., market clearing with spatial heterogeneity) in its reliability results—indeed, the analysis is conducted at the national level. Although geographic smoothing can ameliorate some balancing challenges, seasonal and diurnal variability of wind and solar output cannot be managed through offsetting spatial variability alone. Moreover, these effects require data that reflect renewable output simultaneously with load in each hour of a given year, yet Jacobson et al. (1) use different, nonsynchronous datasets. Consequently, their analysis preserves neither joint temporal nor spatial variability between intermittent resources and demand, which are among the main drivers of decreasing returns to scale for renewable energy (2).

There is an emerging literature on integrated modeling of long-term capacity planning with high-temporal resolution operational detail that effectively incorporates such economic drivers (e.g., ref. 3). By contrast, Jacobson et al. (1) use a “grid integration model” in which investment and energy system transformations are not subject to economic considerations. The resulting renewable dominated capacity mix is inconsistent with the wide range of optimal deep decarbonization pathways projected in model inter-comparison exercises (e.g., refs. 4 and 5) in which the contribution of renewable energy is traded off in economic terms against other low-carbon options.

Jacobson et al. (1) underscore how balancing and fleet flexibility will be important elements of power system design, and that electrification of other demand sectors is a promising option. However, the study underestimates many of the technical challenges associated with the world it envisions, and fails to establish an appropriate economic context. Every low-carbon energy technology presents unique technical, economic, and legal challenges. Evaluating these trade-offs within a consistent decision framework is essential. Such analyses consistently demonstrate that a broad research, development, and deployment portfolio across supply- and demand-side technologies is the best way to ensure a safe, reliable, affordable, and environmentally responsible future energy system.

More recently, Miara et al. provided a different analysis of the changes needed in the future US power supply (Nature Climate Change 7, 793 (2017)). Their analysis incorporates the requirements for cooling water in a constantly warming environment.

Climate and water resource change impacts and adaptation potential for US power supply

Ariel Miara, Jordan E. Macknick, Charles J. Vörösmarty, Vincent C. Tidwell, Robin Newmark & Balazs Fekete

Abstract:

Power plants that require cooling currently (2015) provide 85% of electricity generation in the United States [1,2]. These facilities need large volumes of water and sufficiently cool temperatures for optimal operations, and projected climate conditions may lower their potential power output and affect reliability [3–11]. We evaluate the performance of 1,080 thermoelectric plants across the contiguous US under future climates (2035–2064) and their collective performance at 19 North American Electric Reliability Corporation (NERC) sub-regions [12]. Joint consideration of engineering interactions with climate, hydrology and environmental regulations reveals the region-specific performance of energy systems and the need for regional energy security and climate–water adaptation strategies. Despite climate–water constraints on individual plants, the current power supply infrastructure shows potential for adaptation to future climates by capitalizing on the size of regional power systems, grid configuration and improvements in thermal efficiencies. Without placing climate–water impacts on individual plants in a broader power systems context, vulnerability assessments that aim to support adaptation and resilience strategies misgauge the extent to which regional energy systems are vulnerable. Climate–water impacts can lower thermoelectric reserve margins, a measure of systems-level reliability, highlighting the need to integrate climate–water constraints on thermoelectric power supply into energy planning, risk assessments, and system reliability management.

Just as I was about to finish writing this (late Friday afternoon), my electronic device dinged with the news that the Trump administration had officially approved the recent US Climate Science Special Report – part of the congressionally mandated National Climate Assessment. When the draft of this report came out this year, many people, including myself, doubted that the administration would release it, since it puts the responsibility for climate change squarely on humankind and is in complete agreement with almost all of the other credible information about climate change accumulated since the IPCC’s inception more than 20 years ago. I wrote about the draft report earlier (August 15, 2017). I was already using it as the most up-to-date reference in my class, and I will refer to it in my next blog, straying from my intent to focus on future energy needs.


Long-term Adaptations III – Following the Netherlands: Sea Level Rise

One of the most pressing problems jeopardizing our planet’s capacity to sustainably support (human and other) life is sea level rise. It is a direct consequence of the rise in global temperature: the accelerated melting of land-based ice, combined with the thermal expansion of ocean water.

The IPCC’s recent projections for business-as-usual scenarios include a sea level rise of 1 meter by the end of the century (April 25, 2017). But a paper in Nature made the case that the IPCC report significantly underestimated Antarctica’s contribution to the projected global sea level rise, stating that the estimate should be raised by a factor of five.

Here is what NOAA has to say about it:

With continued ocean and atmospheric warming, sea levels will likely rise for many centuries at rates higher than that of the current century. In the United States, almost 40 percent of the population lives in relatively high-population-density coastal areas, where sea level plays a role in flooding, shoreline erosion, and hazards from storms. Globally, eight of the world’s 10 largest cities are near a coast, according to the U.N. Atlas of the Oceans.

Global sea level has been rising over the past century, and the rate has increased in recent decades. In 2014, global sea level was 2.6 inches above the 1993 average—the highest annual average in the satellite record (1993-present). Sea level continues to rise at a rate of about one-eighth of an inch per year.

Higher sea levels mean that deadly and destructive storm surges push farther inland than they once did, which also means more frequent nuisance flooding. Disruptive and expensive, nuisance flooding is estimated to be from 300 percent to 900 percent more frequent within U.S. coastal communities than it was just 50 years ago.
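
As a quick consistency check, NOAA’s two numbers do line up (a one-line sketch):

```python
# Quick consistency check on the NOAA figures quoted above.
rise_inches = 2.6          # global sea level in 2014, relative to 1993
years = 2014 - 1993
print(f"{rise_inches / years:.3f} inches/year")  # ~0.124, about one-eighth of an inch
```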

The real estate database company Zillow produced a report estimating the cost of the damage:

  • Coastal homes are threatened by the prospect of rising sea levels, a new report from Zillow says.
  • More than $900 billion worth of U.S. residential real estate could be lost to a 6-foot rise in sea levels, the report says.
  • Such a rise in water levels, projected to become a reality by 2100, could destroy nearly 2 million homes.

The Netherlands doesn’t need to wait until the end of the century to experience the impact. A lot of that has to do with its geography:

The European part of the country can be split into two areas: the low and flat lands in the west and north, and the higher lands with minor hills in the east and south. The former, including the reclaimed polders and river deltas, make up about half of its surface area and are less than 1 metre (3.3 ft) above sea level, much of it actually below sea level. An extensive range of seawalls and coastal dunes protect the Netherlands from the sea, and levees and dikes along the rivers protect against river flooding. The rest of the country is mostly flat; only in the extreme south of the country does the land rise to any significant extent, in the foothills of the Ardennes mountains. This is where Vaalserberg is located, the highest point on the European part of the Netherlands at 322.7 metres (1,059 ft) above sea level. The highest point of the entire country is Mount Scenery (887 metres or 2,910 ft), which is located outside the European part of the Netherlands, on the island of Saba.

The country has done a lot over the last 60 years to protect itself from flooding:

As an old proverb says: God created the world, but the Dutch created the Netherlands. Over 30% of the Netherlands lies below sea level. Flooding from the sea and the rivers has claimed many victims; in the 1953 flood disaster, for example, 1,850 people drowned.
The struggle against the water has been both defensive, as manifested by many dikes and dams, and offensive, as shown by the many land reclamation works dating from as early as the 14th century.

The Delta Works (the closing of the sea inlets) are a masterpiece of engineering, but the Maeslantkering (1997) is also an example of hydraulic ingenuity. Three great Dutch hydraulic works, among them a steam pumping station, are already on the UNESCO World Heritage list.

The Dutch Delta is too precious not to take the necessary flood prevention measures. Flooding of the large rivers that flow into the North Sea must be prevented. It may even be necessary to re-inundate some polders at high tide and to restore old side gullies of the rivers to create overflow capacity.

At the end of 2011, a new-style Delta Plan, the Delta Programme, was approved by the Dutch Senate. The object of the Delta Programme is to protect our country against high water and keep our freshwater supply up to standard, now and in the future.

Figure 1 shows a map of the various projects.

Figure 1 Dutch Delta waterworks

Figure 2 shows a timeline of the projects:

Figure 2 Construction timeline

The same Wikipedia reference that provided the timeline discusses both the current status and the impact of climate change on future construction in the area:

The original plan was completed by the Europoortkering which required the construction of the Maeslantkering in the Nieuwe Waterweg between Maassluis and Hoek van Holland and the Hartelkering in the Hartel Canal near Spijkenisse. The works were declared finished after almost fifty years in 1997. In reality the works were finished on 24 August 2010 with the official opening of the last strengthened and raised retaining wall near the city of Harlingen, Netherlands.

Due to climate change and relative sea-level rise, the dikes will eventually have to be made higher and wider. This is a long term uphill battle against the sea. The needed level of flood protection and the resulting costs are a recurring subject of debate. Currently, reinforcement of the dike revetments along the Oosterschelde and Westerschelde is underway. The revetments have proven to be insufficient and need to be replaced. This work started in 1996 and should be finished in 2015. In that period the Ministry of Public Works and Water Management in cooperation with the waterboards will have reinforced over 400 km of dikes.[1]

In September 2008, the Delta commission, presided over by Dutch politician Cees Veerman, advised in a report that the Netherlands would need a massive new building program to strengthen the country’s water defenses against the anticipated effects of global warming for the next 190 years. The plans included drawing up worst-case scenarios for evacuations, as well as more than €100 billion ($144 billion) in new spending through the year 2100 for measures such as broadening coastal dunes and strengthening sea and river dikes.

The commission said the country must plan for a rise in the North Sea of 1.3 meters by 2100 and 4 meters by 2200.[2]

Attempts to adapt to such a reality are not confined to the Netherlands, nor are they limited to blocking the rising water. One new option is living in floating structures (some of which have so far experienced more success than others):

Are the Floating Houses of the Netherlands a Solution Against the Rising Seas?

Figure 3 Floating homes in IJburg, Amsterdam.

The technology used to build houses on water is not really new. Whatever can be built on land can also be built on water. The only difference between a house on land and a floating house is that the houses on water have concrete “tubs” on the bottom, which are submerged by half a story and act as counter-weight. To prevent them from floating out to sea, they are anchored to the lakebed by mooring poles.
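
The physics here is just Archimedes’ principle. Below is a minimal sketch with invented masses and dimensions (real floating-home engineering also involves stability, freeboard, and mooring loads):

```python
# How deep does a floating house sit? A minimal Archimedes sketch.
# All masses and dimensions below are assumptions for illustration.

WATER_DENSITY = 1000.0   # kg per cubic meter (fresh water)

house_mass = 80_000.0    # kg: structure plus contents (assumed)
tub_mass   = 60_000.0    # kg: the concrete "tub" counterweight (assumed)
footprint  = 8.0 * 10.0  # square meters of tub footprint (assumed)

# At equilibrium the weight of displaced water equals the total weight,
# so the draft (depth below the waterline) is:
draft = (house_mass + tub_mass) / (WATER_DENSITY * footprint)
print(f"Draft: {draft:.2f} m")  # ~1.75 m -- roughly the half story described above
```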

As sea levels are rising globally, many cities around the world are under threat from water. Some areas are projected to disappear completely in the next few decades. Therefore, designing houses to float may, in some instances, be safer than building on land and risking frequent floods. “In a country that’s threatened by water, I’d rather be in a floating house; when the water comes, [it] moves up with the flood and floats,” Olthuis says. He believes that water shouldn’t be considered an obstacle, but rather a new ingredient in the recipe for the city.

Floating houses are not only safer and cheaper, but more sustainable as well. Because such a house could more readily be adapted to existing needs by changing function, or even moving to a whole new location where it can serve as something else, the durability of the building is much improved. Olthuis compares this to a second-hand car: “By having floating buildings, you’re no longer fixed to one location. You can move within the city, or you can move to another city, and let them be used and used again.”

When waters rise, these flood-proof houses rise right with them

Float House

Figure 4 Morphosis Float House in New Orleans

The Float House is an ambitious, affordable housing project based in New Orleans. The city still hasn’t fully recovered from Hurricane Katrina, and these pre-fabricated, flood-proof homes designed by Morphosis Architects attempt to ensure the region can withstand the next Katrina-scale event.

During a flood, the foundation of the house acts as a raft, rising with the water. The FLOAT House has the ability to rise up to 12 feet on guide posts, which are secured via two concrete pile caps that extend 45 feet into the earth for added stability.

The house is also designed to be as self-sufficient as it is flood-proof. Cisterns within the structure store rainwater collected from the roof, and the water is then filtered and stored until needed. Solar panels on the roof generate all of the house’s power. Electrical systems in the home store and convert this energy as needed.
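
As a rough sense of scale for the rainwater system, here is a sizing sketch with assumed numbers (the actual FLOAT House specifications may differ):

```python
# Rough rainwater-harvest sizing for such a house (assumed numbers).
roof_area_m2 = 90.0    # assumed roof catchment area
annual_rain_m = 1.6    # New Orleans averages roughly 1.6 m of rain per year
capture_eff = 0.8      # assumed losses to first flush, overflow, filtration

liters_per_year = roof_area_m2 * annual_rain_m * capture_eff * 1000
print(f"~{liters_per_year:,.0f} L/year (~{liters_per_year / 365:.0f} L/day)")
# ~115,200 L/year, or roughly 316 L/day for household use
```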

Floating houses could be key to housing in a post-climate-change world:

Is this liveable yacht concept the answer to housing amongst rising sea levels?

Innovative company Arkup thinks so, and will launch the 100 percent solar-powered, electric and self-sustaining floating homes at a Florida boat show next month.

Built to withstand hurricane strength winds, the luxury houseboat has four bedrooms, 4.5 bathrooms, collects and purifies rainwater and is self-sustaining enough to be considered off the grid.

Figure 5 Example of a proposed floating house

Next week I will focus on some of the energy alternatives that we will require to adapt to these situations.


Long-term Adaptations II – Following the Netherlands: Food and Habitability

Figure 1 – Indoor lattice growing setting from National Geographic magazine article

The photograph above resembles the one I included in last week’s blog. Both show crops growing in a glass enclosure – except that Matt Damon’s preparation of Martian ground for farming was fiction, while this is real.

This week I’ll look at long-term adaptation efforts we can take in light of the climate-related doomsday predicted to take place in the foreseeable future.

The two indicators I’ll concentrate on today are heat death and the end of food (see last week’s blog – October 17, 2017). I already gave heat death a quantitative treatment in my October 10th blog, when I talked about wet-bulb temperatures higher than 35°C (95°F). An “obvious” way to adapt to both of these dire problems is to follow Matt Damon’s example, growing our food and living our lives isolated from the outside environment. Up to a point, we can do it; we have the technology and it will not be that expensive to implement. Indeed, we only have to globally mimic the steps already being taken by a small, rich European country: the Netherlands. Table 1 shows a few of the Netherlands’ important indicators.

Table 1 – The Netherlands, important indicators (Wikipedia)

Indicator Value Rank
Population (2017) 17,116,000 –
Population density 413.3/km² 30
GDP/capita (nominal, 2017) $44,654 13
Area 41,543 km² 66

I have good friends in the Netherlands and I like to visit the country as often as I can. Since almost half of the nation’s landmass is located at or below sea level, its residents have made and are continuing to make heroic efforts to protect themselves from flooding – both by the ocean and the large rivers that pass through the country on their way to the ocean. Almost every discussion throughout the world that concerns itself with adaptation to rising sea levels includes Dutch guests being invited to share their experiences. While I was already familiar with some of the strategies they use for flood abatement, I did not know about their efforts in greenhouse agriculture. When I got my September 2017 issue of National Geographic with an article on those efforts, I was stunned with admiration. All the beautiful photographs below come from that great article.

Figure 2 – Bird’s-eye view of greenhouses in the Netherlands

Figure 3 – Chicken growing facility

Figure 4 – A rotary milking machine

Figure 5 – Growing tomatoes

The effort is not only delightful to watch, it is also very efficient. Here is some statistical evidence of this success:

Table 2 – Total water footprint of tomato production (gallons per pound – 2010)

Country Footprint
Netherlands 1.1
US 15.2
Global Average 25.6
China 34.0

Table 2 shows the water footprints of tomato production in the Netherlands, the US, and China, together with the global average. The numbers are stunning; this is one of the best demonstrations of virtual water I have seen. I introduced the concept of virtual water here several years ago (November 18, 2014):

The two main inventories that are included in most analyses are energy and water. Both inventories suffer from the same broad range of values as measured in different facilities. I just recently returned from a conference in Iceland in which I presented our work on water stress (“The Many Faces of Water Use” by Gurasees Chawla and Micha Tomkiewicz; 6th International Conference on Climate Change, Reykjavik, Iceland (2014)). One of the issues that we discussed there was the concept of virtual water:  the “sum of the water footprints of the process steps taken to produce the product” – an idea that constitutes part of the LCA analysis of products. For instance, the typical virtual water of fruits is 1,000 m3/ton. To remind us all, the weight of pure water in these units is 1ton/1m3, so the weight of the virtual water is 1,000 times the weight of the product itself. Where did the extra water go? For the most part, it either became waste water or evaporated.
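
The quoted ratio is easy to see with a one-line calculation, using the typical fruit figure from the quote:

```python
# The virtual water ratio from the quote, as a one-line calculation.
virtual_water_m3_per_ton = 1000.0   # typical for fruits, per the quote
water_tons_per_m3 = 1.0             # pure water weighs 1 ton per cubic meter

shipment_tons = 2.0                 # e.g., a 2-ton shipment of fruit
water_tons = shipment_tons * virtual_water_m3_per_ton * water_tons_per_m3
print(f"{shipment_tons:g} tons of fruit carry ~{water_tons:,.0f} tons of virtual water")
# -> 2,000 tons of water: 1,000 times the weight of the product itself
```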

Here’s one example of the concept of virtual water, as given in an article in a Mexican science magazine: when the US imports strawberries from Mexico, the imported strawberries conceptually “carry” with them all the virtual fresh water that is so greatly needed in both countries (Camps, Salvador Penische and Patricia Ávila García, Revista Mexicana de Ciencias 3:1579 (2012)); this equates to the US “stealing” water from the Mexicans. As a result, some are blaming the US for the depletion of Mexican water, and charging our country with immoral economic capitalism.

Yes, they are blaming the US for the depletion of Mexican water, but in fact they should blame themselves: they are subsidizing Mexican strawberry growers by not charging them anything for the water that they use, so that they can better compete against American growers.

Meanwhile, when proper water management is used (mainly wastewater treatment), virtual water can be reduced by 90%.

Table 3 – The Netherlands’ yield of various agricultural products and its rank in comparison to the rest of the world – 2014

Produce Tons per square mile Rank
Carrots 17,144 5
Chilies and green peppers 80,890 1
Cucumbers 210,065 1
Onions 13,037 6
Pears 11,582 2
Potatoes 13,036 6


Table 4 – Rank in various categories of tomato production

Rank Category Quantity
1 Yield 144,352 tons/square mile
22 Total production 992,080 tons
95 Area harvested 6.9 miles²


Table 5 – Changes in vegetable growing practices from 2003-2014

Vegetable Production +28%
Energy used -6%
Pesticides used -9%
Fertilizers used -29%

The Netherlands’ efforts in this area are being recognized throughout the world, and visitors from many developing countries come to learn from the Dutch so that they can implement the same methods back home. A global map in the article demonstrates the reach of these efforts.

This shift to enclosed agricultural methods and similarly isolated living structures as a response to increasingly uninhabitable lands can only be effective if the structures are cooled to the temperatures necessary for people and crops to continue thriving. This will, in turn, require the use of cooling devices and energy sources that will not amplify the damage. Cooling devices have their own limits, as I will explain in future blogs.


Long-Term Adaptations

Figure 1 – Scene from “The Martian”

I was considering using a more descriptive title for the coming series of blogs, inspired by a recent article in the New York Times (NYT) called “How to Survive the Apocalypse.” The article starts as follows:

President Trump threatens to “totally destroy North Korea.” Another hurricane lashes out. A second monster earthquake jolts Mexico. Terrorists strike in London. And that’s just this past week or so.

Yes, the world is clearly coming to an end. But is there anything you can do to prepare?

Prudence won out and I decided on a much better-defined option. My last 7 blogs (August 29 – October 10, 2017) focused on the doomsday that business-as-usual scenarios are projected to bring about. Based on early signs already visible today, this catastrophe will come from human changes to the physical environment; the world is expected to reach full-force uninhabitability toward the end of the century.

My school’s Faculty Day in May featured a panel presentation, “The Role of Science in the Anthropocene.” The audience’s questions mirrored those posited in the NYT article: is there anything we can do to prepare? Or are we doomed? – in which case we should just try to enjoy our last days as happily as we can.

Almost every day now brings new calamities that originate from our changes to the physical environment. The most recent are the deadly fires throughout California; similarly extensive fires have recently been recorded around the Mediterranean Sea and in Australia. I am not alone in discussing the US government’s ineptitude in dealing with the physical calamities of climate change. I got sick of being depressed and decided instead to try to answer the question: what can we do to prepare?

The NYT piece was half-satirical and focused on how individuals could start to prepare their own disaster tool kits for local threats. My take will be more serious, collective, and global. The transition to doomsday that I am interested in is different from singular apocalyptic doomsday events such as a nuclear holocaust, the eruption of a super volcano, or the collision of a large asteroid. While we can obviously try to adapt to or mitigate most of those terrible occurrences, none of them is foreseen to impact us the way that climate change based on a business-as-usual scenario will. The eventual uninhabitability that will come from continuing our present changes to the chemistry of the atmosphere will occur gradually – within the span of a few human generations. We can still mitigate it considerably and at the same time try to develop technologies that will help us adapt.

In one of my earliest blogs, I defined the physics of sustainability (January 28, 2013):

I define sustainability as the condition that we have to develop here to flourish until we develop the technology for extraterrestrial travel that will allow us to move to another planet once we ruin our own.

In my opinion, the conditions to achieve this are very “straightforward.” They have to be able to answer two “simple” questions:

  • For how long? – Forever! To repeat President Obama’s language: “We must act, knowing that today’s victories will be only partial, and that it will be up to those who stand here in four years, and forty years, and four hundred years hence to advance the timeless spirit once conferred to us in a spare Philadelphia hall.”

  • How to do it? – To achieve the sustainable objectives on this time scale, we will have to establish equilibrium with the physical environment and, at the same time, maximize individual opportunities for everybody on this planet.

This will take time. A better definition would incorporate the options of attempting to establish livability wherever we can, including the soon-to-be-uninhabitable planet Earth.

How can we adapt to an uninhabitable planet? This is basically the same question as, “is there anything you can do to prepare?”

Well, what came to my mind was the movie “The Martian” – with Matt Damon and Jessica Chastain – about an astronaut who gets stuck on Mars and tries to adapt. Figure 1 shows an example of his efforts. The movie was released toward the end of 2015 and received the Golden Globe Award for Best Motion Picture – Musical or Comedy. As Matt Damon said while accepting the award, it certainly was not a musical, and the only funny thing about it was the category in which it was placed. It was a very serious, excellent movie that described in great detail the work the astronaut had to put into trying to survive on the uninhabitable surface of Mars.

David Wallace-Wells’ New York Magazine piece, which I discussed in my September 12, 2017 and subsequent blogs, presented decent qualitative descriptions of the main indicators of uninhabitability brought about by climate change:

  • Heat death
  • The end of food
  • Climate plagues
  • Unbreathable air
  • Perpetual war
  • Permanent economic collapse
  • Poisoned oceans

The end of the world as we know it will happen in phases. Some areas will become uninhabitable before others and that shift will likely be the main trigger for an indicator such as perpetual war. In a business-as-usual scenario, the transition will soon engulf the entire planet. How do we adapt to this kind of an environment? Perhaps we’ll simply have to take a page from Matt Damon and learn how to live happily while isolated from the environment. In the next few blogs I will go into more detail.


Doomsday: Local Timelines

The last few blogs focused on the ultimate consequences of continuing to make “progress” by relentlessly using the physical environment to serve humanity as if it were a limitless resource. I tried to make the case that such efforts (business-as-usual scenarios) push the planet outside the window of habitability, with no place to go. The now fully-anticipated result is global doomsday. The expected timeline is uncertain but can be counted in a few hundred years. For people such as myself, who grew up exposed to thousands of years of history, these prospects are unacceptable – particularly because it is still within our power to prevent or at least considerably postpone this impact. Such an apocalyptic scenario obviously will not take place instantaneously on a global scale. It will start with a slowly expanding set of habitable local environments turning uninhabitable and their residents fleeing to friendlier places. Such a migration creates millions of environmental refugees and affects us all. This is not speculation about an unknown future; it is already happening. This is the main reason that national security organizations are trying to understand the changes in the physical environment and how they affect its ability to support humans (see the “Global Trends 2035” blog, May 23, 2017).

In this blog I will try to quantify the criteria for local doomsdays with some concrete examples, starting with the concept of Wet Bulb temperature:

Wet Bulb Temperature – Twb

The Wet Bulb temperature is the adiabatic saturation temperature.

Wet Bulb temperature can be measured by using a thermometer with the bulb wrapped in wet muslin. The adiabatic evaporation of water from the thermometer bulb and the cooling effect is indicated by a “wet bulb temperature” lower than the “dry bulb temperature” in the air.

The rate of evaporation from the wet bandage on the bulb, and the temperature difference between the dry bulb and wet bulb, depend on the humidity of the air. The evaporation from the wet muslin is reduced when the air contains more water vapor.

The Wet Bulb temperature is always between the Dry Bulb temperature and the Dew Point. For the wet bulb, there is a dynamic equilibrium between heat gained because the wet bulb is cooler than the surrounding air and heat lost because of evaporation. The wet bulb temperature is the temperature of an object that can be achieved through evaporative cooling, assuming good air flow and that the ambient air temperature remains the same.
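
For readers who want to estimate the wet bulb temperature themselves, below is a short sketch using Stull’s (2011) empirical fit, one widely used approximation that maps dry-bulb temperature and relative humidity to wet-bulb temperature:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Wet-bulb temperature (deg C) from dry-bulb temperature (deg C) and
    relative humidity (%), using Stull's (2011) empirical fit.
    Calibrated near sea-level pressure; valid roughly for 5% <= RH <= 99%
    and -20 deg C <= T <= 50 deg C."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# A hot, humid day: 40 deg C (104 deg F) at 60% relative humidity.
print(f"Tw = {wet_bulb_stull(40.0, 60.0):.1f} deg C")  # ~33 deg C, nearing the 35 deg C limit
```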

Steven Sherwood and Matthew Huber wrote a paper, “An adaptability limit to climate change due to heat stress,” about the connections between Wet-Bulb temperatures and local limits of habitability. It was published in the Proceedings of the National Academy of Sciences (PNAS). Here is the abstract and some of their conclusions:

Abstract

Despite the uncertainty in future climate-change impacts, it is often assumed that humans would be able to adapt to any possible warming. Here we argue that heat stress imposes a robust upper limit to such adaptation. Peak heat stress, quantified by the wet-bulb temperature TW, is surprisingly similar across diverse climates today. TW never exceeds 31 °C. Any exceedance of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning. One implication is that recent estimates of the costs of unmitigated climate change are too low unless the range of possible warming can somehow be narrowed. Heat stress also may help explain trends in the mammalian fossil record.

Conclusions:

We conclude that a global-mean warming of roughly 7 °C would create small zones where metabolic heat dissipation would for the first time become impossible, calling into question their suitability for human habitation. A warming of 11–12 °C would expand these zones to encompass most of today’s human population. This likely overestimates what could practically be tolerated: Our limit applies to a person out of the sun, in gale-force winds, doused with water, wearing no clothing, and not working. A global-mean warming of only 3–4 °C would in some locations halve the margin of safety (difference between TW max and 35 °C) that now leaves room for additional burdens or limitations to cooling. Considering the impacts of heat stress that occur already, this would certainly be unpleasant and costly if not debilitating. More detailed heat stress studies incorporating physiological response characteristics and adaptations would be necessary to investigate this.

If warmings of 10 °C were really to occur in the next three centuries, the area of land likely rendered uninhabitable by heat stress would dwarf that affected by rising sea level. Heat stress thus deserves more attention as a climate-change impact.

The onset of TW max > 35 °C represents a well-defined reference point where devastating impacts on society seem assured even with adaptation efforts. This reference point contrasts with assumptions now used in integrated assessment models. Warmings of 10 °C and above already occur in these models for some realizations of the future (33). The damages caused by 10 °C of warming are typically reckoned at 10–30% of world GDP (33, 34), roughly equivalent to a recession to economic conditions of roughly two decades earlier in time. While undesirable, this is hardly on par with a likely near-halving of habitable land, indicating that current assessments are underestimating the seriousness of climate change.

The paper by Eun-Soon Im et al. in Science Advances 3(8) (2017), “Deadly heat waves projected in the densely populated agricultural regions of South Asia,” describes the evolving situation in South Asia. Here are a few key paragraphs that lay out the paper’s main conclusions, including the physiological criteria behind the concept of hyperthermia mentioned in the previous paper:

Abstract

The risk associated with any climate change impact reflects intensity of natural hazard and level of human vulnerability. Previous work has shown that a wet-bulb temperature of 35°C can be considered an upper limit on human survivability. On the basis of an ensemble of high-resolution climate change simulations, we project that extremes of wet-bulb temperature in South Asia are likely to approach and, in a few locations, exceed this critical threshold by the late 21st century under the business-as-usual scenario of future greenhouse gas emissions. The most intense hazard from extreme future heat waves is concentrated around densely populated agricultural regions of the Ganges and Indus river basins. Climate change, without mitigation, presents a serious and unique risk in South Asia, a region inhabited by about one-fifth of the global human population, due to an unprecedented combination of severe natural hazard and acute vulnerability.

INTRODUCTION

The risk of human illness and mortality increases in hot and humid weather associated with heat waves. Sherwood and Huber (1) proposed the concept of a human survivability threshold based on wet-bulb temperature (TW). TW is defined as the temperature that an air parcel would attain if cooled at constant pressure by evaporating water within it until saturation. It is a combined measure of temperature [that is, dry-bulb temperature (T)] and humidity (Q) that is always less than or equal to T. High values of TW imply hot and humid conditions and vice versa. The increase in TW reduces the differential between human body skin temperature and the inner temperature of the human body, which reduces the human body’s ability to cool itself (2). Because normal human body temperature is maintained within a very narrow limit of ±1°C (3), disruption of the body’s ability to regulate temperature can immediately impair physical and cognitive functions (4). If ambient air TW exceeds 35°C (typical human body skin temperature under warm conditions), metabolic heat can no longer be dissipated. Human exposure to TW of around 35°C for even a few hours will result in death even for the fittest of humans under shaded, well-ventilated conditions (1). While TW well below 35°C can pose dangerous conditions for most humans, 35°C can be considered an upper limit on human survivability in a natural (not air-conditioned) environment. Here, we consider maximum daily TW values averaged over a 6-hour window (TWmax), which is considered the maximum duration fit humans can survive at 35°C.

IMPACTS OF CLIMATE CHANGE

To study the potential impacts of climate change on human health due to extreme TW in South Asia, we apply the Massachusetts Institute of Technology Regional Climate Model (MRCM) (24) forced at the lateral and sea surface boundaries by output from three simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) coupled Atmosphere-Ocean Global Climate Model (AOGCM) experiments (25). By conducting high-resolution simulations, we include detailed representations of topography and coastlines as well as detailed physical processes related to the land surface and atmospheric physics, which are lacking in coarser-resolution AOGCM simulations (26). On the basis of our comparison of MRCM simulations driven by three AOGCMs for the historical period 1976–2005 (HIST) against reanalysis and in situ observational data, MRCM shows reasonable performance in capturing the climatological and geographical features of mean and extreme TW over South Asia. Furthermore, the mean biases of MRCM simulations are statistically corrected at the daily time scale to enhance the reliability of future projections (see Materials and Methods). We project the potential impacts of future climate change toward the end of century (2071–2100), assuming two GHG concentration scenarios based on the RCP trajectories (27): RCP4.5 and RCP8.5. RCP8.5 represents a BAU scenario resulting in a global CMIP5 ensemble average surface temperature increase of approximately 4.5°C. RCP4.5 includes moderate mitigation resulting in approximately 2.25°C average warming, slightly higher than what has been pledged by the 2015 United Nations Conference on Climate Change (COP21).

On the basis of the simulation results, TWmax is projected to exceed the survivability threshold at a few locations in the Chota Nagpur Plateau, northeastern India, and Bangladesh and projected to approach the 35°C threshold under the RCP8.5 scenario by the end of the century over most of South Asia, including the Ganges river valley, northeastern India, Bangladesh, the eastern coast of India, Chota Nagpur Plateau, northern Sri Lanka, and the Indus valley of Pakistan (Fig. 2). Under the RCP4.5 scenario, no regions are projected to exceed 35°C; however, vast regions of South Asia are projected to experience episodes exceeding 31°C, which is considered extremely dangerous for most humans (see the Supplementary Materials). Less severe conditions, in general, are projected for the Deccan Plateau in India, the Himalayas, and western mountain ranges in Pakistan.

Many urban population centers in South Asia are projected to experience heat waves characterized by TWmax well beyond 31°C under RCP8.5 (Fig. 2). For example, in Lucknow (Uttar Pradesh) and Patna (Bihar), which have respective current metro populations of 2.9 and 2.2 million, TW reaches and exceeds the survivability threshold. In most locations, the 25-year annual TWmax event in the present climate, for instance, is projected to become approximately an every year occurrence under RCP8.5 and a 2-year event under RCP4.5 (Fig. 2 and fig. S1). In addition to the increase in TWmax under global warming, the urban heat island effect may increase the risk level of extreme heat, measured in terms of temperature, for high-density urban population exposure to poor living conditions. However, Shastri et al. (28) found that urban heat island intensity over many Indian urban centers is lower than in non-urban regions along the urban boundary during daytime in the pre-monsoon summer because of the relatively low vegetation cover in non-urban areas.
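
To get a feel for what these TW numbers mean, here is a minimal sketch in Python using Stull’s (2011) empirical approximation, which estimates wet-bulb temperature from ordinary temperature and relative humidity. The choice of formula is mine – the study itself computes TW directly from model output – and it is valid only near sea-level pressure:

```python
import math

def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from dry-bulb temperature t_c
    (deg C) and relative humidity rh_pct (percent), per Stull (2011)."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# A hot, humid day: 40 deg C (104 deg F) at 60% relative humidity.
print(round(wet_bulb_stull(40.0, 60.0), 1))  # ~33.0 deg C -- close to the 35 deg C limit
```

Note how a day that sounds survivable by thermometer reading alone can sit within a couple of degrees of the survivability threshold once humidity is taken into account.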

We all live “locally,” as do our friends and families. It is not surprising, then, that we are most interested in climate change projections for our local environments. Uncertainties in long-term projections for local environments are much greater than those in global projections. Even so, how well we mitigate and adapt to local environmental changes reflects how well we make decisions about the larger, global risks.

Some localities publish projections based on the highest-resolution global simulations they can find, while others conduct dedicated local simulations. Simulations, local or global, depend on assumptions about how we will run our lives; we usually follow the IPCC’s practice and base forecasts on scenarios similar to those it publishes.

If we live in big cities, websites like “Climate Central” are good sources to start with. Some of its guidelines are given below, along with a global map of the cities that the site covers:

Summers around the world are already warmer than they used to be, and they’re going to get dramatically hotter by century’s end if carbon pollution continues to rise. That problem will be felt most acutely in cities.

The world’s rapidly growing population coupled with the urban heat island effect — which can make cities up to 14°F (7.8°C) warmer than their leafy, rural counterparts —  add up to a recipe for dangerous and potentially deadly heat.

Currently, about 54 percent of the world’s population lives in cities, and by 2050 the urban population is expected to grow by 2.5 billion people. As those cities get hotter, weather patterns may shift and make extreme heat even more common. That will in turn threaten public health and the economy.

Figure 1 – Climate Central’s global map of the cities covered by its analysis

Here’s a short, personal account of the current circumstances in Phoenix, Arizona and its future prospects:

Sorry to put such a fine point on this, but even without climate change, Phoenix, Arizona, is already pretty uninhabitable. Don’t get me wrong, I spend a fair amount of time there, and I love it—particularly in the fall and winter—but without air-conditioning and refrigeration, it would be unlivable as is. Even with those modern conveniences, the hottest months take their toll on my feeble Southern Californian body and brain. The historical average number of days per year in Phoenix that hit 100 degrees is a mind-bending 92. But that number is rapidly rising as climate change bears down on America’s fifth-largest city.

“It’s currently the fastest warming big city in the US,” meteorologist and former Arizonan Eric Holthaus told me in an email. A study from Climate Central last year projects that Phoenix’s summer weather will be on average three to five degrees hotter by 2050. Meanwhile, that average number of 100-degree days will have skyrocketed by almost 40, to 132, according to another 2016 Climate Central study. (For reference, over a comparable period, New York City is expected to go from two to 15 100-degree days.)

I live in New York City. Periodically (usually following new IPCC reports), the city compiles an up-to-date report on how climate change will impact it, based on the best available information. That information is published in a full dedicated issue of the “Annals of the New York Academy of Sciences” for everyone to see.

The press has picked this up:

Climate change will come to New York the same way water boils around one of those mythical frogs. The city will be the same old New York in 2050; people won’t be frying eggs on the manhole covers in the summer, or riding gondolas around Times Square. But by then winter will have fewer than 50 days of freezing cold, instead of the average of 72 that was the norm in the late 20th century. There will be less shivering agony on train platforms, plus less salting and shoveling. But the summers will be harsher—half a dozen heat waves instead of the usual two, and those heat waves will be even longer and more sweltering than usual; there will be twice as many plus-90-degree days as there once were. In 2006, a brutal summer led to 140 people dying of heat-related causes; it’s safe to say that that sort of death toll will be routine by 2050.

All of that is according to the estimates in the 2013 Climate Risk Information report from the New York City Panel on Climate Change (I’m using that report’s low or median estimates). As one of America’s wealthiest and most liberal big cities—where even some prominent Republicans are staunch climate hawks—it’s not surprising that New York would commission a report like that, or take other steps toward fighting the effects of climate change. But even with all the resources of the five boroughs, that’s a tall order.

Next week I will leave aside the depressing topic of imminent doomsdays and try to address our options for mitigating or at least postponing them.


Doomsday: Attributions

The high concentration of natural disasters taking place all over the world includes:

  • Hurricanes and cyclones such as Harvey, with its flooding of Houston and the rest of southern Texas and Louisiana; Maria, with its devastating destruction of Puerto Rico and other Caribbean islands; and Irma, with its impact on Florida; as well as the monsoon flooding in India, Pakistan, and Nepal
  • Mudslides in Sierra Leone
  • Deadly Mexican earthquakes
  • Bushfires in Queensland, Australia – where temperatures reached 42.5°C (109°F) – as well as in the Canary Islands and Southern California

These have resulted in the combined loss of hundreds of lives, the displacement of millions who have lost their homes, and collective fiscal losses that are fast approaching trillions of dollars. The recovery will take months if not years.

For people directly affected, the impacts certainly felt like doomsday, the end of the world, the Sixth Extinction, or whatever else you are inclined to call a global calamity. For the new GOP Senate candidate from the State of Alabama, all of these are just signs of God’s unhappiness with the human race for engaging in certain activities (of which Mr. Moore does not approve).

In the last few blogs, I have dealt with various aspects of the future impacts of our activities on the physical environment and how they could result in a global doomsday. I have framed that doomsday as occurring in the near future, which I defined broadly as “now”: within the lifetimes of my grandchildren. Here we are talking about the literal now, in local communities, countries, and regions.

A valid question to ask is: how do we know (and how sure are we) that human activity has made a significant contribution to these existential disasters? If it has, then some of the dangers are within our power to mitigate.

Here is how Wikipedia defines attributions of climate change:

Attribution of recent climate change is the effort to scientifically ascertain mechanisms responsible for recent climate changes on Earth, commonly known as “global warming.” The effort has focused on changes observed during the period of instrumental temperature record, when records are most reliable; particularly in the last 50 years, when human activity has grown fastest and observations of the troposphere have become available. The dominant mechanisms are anthropogenic, i.e., the result of human activity.[3]

There are also natural mechanisms for variation including climate oscillations, changes in solar activity, and volcanic activity.

According to the Intergovernmental Panel on Climate Change (IPCC), it is “extremely likely” that human influence was the dominant cause of global warming between 1951 and 2010.[4] The IPCC defines “extremely likely” as indicating a probability of 95 to 100%, based on an expert assessment of all the available evidence.[5]

As the scope of the climate events changes, so will the way in which we attribute human contributions. The attribution of global climate change will also be different than that of local weather events. Attributions will likewise differ for past events and predicted future events.

According to the IPCC, its measurements of our historical impacts on the global climate make it 95% certain of humans’ dominant contributions to climate change. A summary of the IPCC report of human contributions can be found here.

Visual summaries of the various observations are given in Figures 1 and 2.

Figure 1 – Attribution of global warming: simulation of 20th century global mean temperatures (with and without human influences) compared to observations (NASA)

Figure 1 shows a computer analysis of observed global temperature change over the 20th century. The simulations with and without human contributions show only short-term fluctuations until mid-century, after which they split sharply. Toward the end of the century, the simulated global temperature without human contributions is a bit lower than at the beginning – a difference attributed mainly to the impact of volcanic eruptions (which result in slight cooling immediately afterward). The observed temperature change is close to 1°C, in agreement with the calculation that includes human contributions. If 100% means fully accounting for the observed temperature increase, then – measured against the slightly lower temperatures predicted without human contributions – the human contribution to warming over this period amounts to more than 100%.

Figure 2 – Atmospheric and anthropogenic CO2 and 14C in tree rings

The main forcing mechanism of human contribution to the climate is the carbon dioxide emitted through the burning of fossil fuels. Figure 2 shows the measured changes in the atmospheric concentration of carbon dioxide compared with human-caused emissions. The curves look similar in shape; both kick sharply upward in the middle of the 20th century. Unfortunately, the two curves are plotted in different units, so comparing them requires a conversion. I teach my General Education students how to make such comparisons, using the tools from my book, Climate Change: The Fork at the End of Now, and I am not going to repeat the calculation here. You will have to take my word that the jump in the atmospheric concentration of carbon dioxide from 1950 to 2000, taken from the figure, amounts to about 0.4 trillion tons of carbon dioxide, while the human emissions amount to 0.5 trillion tons. These numbers are rough approximations but are close enough to convince us that the two are closely related. Section c of the same figure provides probably the most convincing evidence for the origin of this extra carbon dioxide.
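
As a back-of-the-envelope check, here is a sketch using round figures that are my own assumptions rather than values read off Figure 2: atmospheric CO2 rose from roughly 310 ppm in 1950 to roughly 370 ppm in 2000, and each ppm of atmospheric CO2 corresponds to about 7.8 billion tons of the gas.

```python
GT_CO2_PER_PPM = 7.8           # billion tons of CO2 per ppm of concentration (approximate)
ppm_1950, ppm_2000 = 310, 370  # rough atmospheric CO2 concentrations

added_gt = (ppm_2000 - ppm_1950) * GT_CO2_PER_PPM
print(f"~{added_gt:.0f} billion tons of CO2 added to the atmosphere, 1950-2000")
# ~470 billion tons, i.e. roughly 0.4-0.5 trillion tons -- the same order of
# magnitude as the figures quoted above.
```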

Here is the background:

14C is a radioisotope of carbon whose atoms contain 6 protons and 8 neutrons, as compared to the more abundant isotope of carbon (12C), which contains 6 protons and 6 neutrons. The radioisotope is unstable and slowly converts to nitrogen (14N) by converting one neutron into one proton. The conversion rate is measured through a parameter called the half-life: if we start with a certain amount of the material, half of it will convert into 14N in that period of time. The half-life of 14C is 5,730 years. The natural abundance of this isotope in the atmosphere is about one atom in a trillion. Plants that grow by photosynthesis of atmospheric carbon dioxide end up with the same proportion of this carbon isotope.
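
The half-life arithmetic is simple enough to sketch: the fraction of an initial amount of 14C remaining after t years is 0.5 raised to the power t/5,730.

```python
def c14_fraction_remaining(t_years: float) -> float:
    """Fraction of an initial amount of 14C left after t_years of decay."""
    HALF_LIFE_YEARS = 5730.0
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

print(c14_fraction_remaining(5_730))      # 0.5    -- one half-life
print(c14_fraction_remaining(57_300))     # ~0.001 -- ten half-lives
print(c14_fraction_remaining(1_000_000))  # effectively zero -- fossil fuels carry no 14C
```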

Combustible fossil fuels such as coal, natural gas (mainly methane), gasoline, and various oils are deposits of organic materials formed from decayed plants and animals. These organisms were alive many millions of years ago – far longer than 5,730 years – so all of their 14C has had plenty of time to decay completely. In other words, any emissions of this 14C-free carbon dioxide into the atmosphere should reduce the atmospheric concentration of 14C. This reduction is shown in Figure 2c. Unfortunately, this measurement had to stop around mid-century because of the prevalence of atmospheric nuclear testing at that time, which added a great deal of 14C to the atmosphere. We are left with convincing data up until that point: most of the carbon dioxide that we added to the atmosphere until then came from fossil fuel combustion. Carbon dioxide is the main anthropogenic greenhouse gas connected to our energy use (see the June 25, 2012 blog or search for “greenhouse gas emissions” throughout the blog). The increase in temperature shown in Figure 1 came mostly from that same addition of carbon dioxide to the atmosphere from burning those fossil fuels.

The increase in temperature has immediate consequences.

Figure 3 shows one important consequence of an increase in average temperature. It shows two identical temperature distributions over the same time period, one shifted slightly toward higher temperatures. If we define an extreme heat event through the frequency of a fixed high temperature such as 100°F (how many days a year reach that temperature or higher), the figure clearly shows that even a small shift greatly increases the probability of the extreme event.

Figure 3 – Attribution of Extreme Weather Events in the Context of Climate Change
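
To see numerically why a small shift in the mean so strongly inflates the extremes, here is a minimal sketch that assumes, purely for illustration, that daily summer highs are normally distributed; the means and standard deviation below are invented round numbers, not data.

```python
import math

def prob_exceeding(threshold: float, mean: float, sd: float) -> float:
    """P(X > threshold) for a normal distribution with the given mean and sd."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2)))

before = prob_exceeding(100, mean=85, sd=7)  # share of days above 100 deg F today
after = prob_exceeding(100, mean=87, sd=7)   # same climate, mean shifted up 2 deg F
print(f"before: {before:.3f}, after: {after:.3f}, ratio: {after / before:.1f}x")
# A 2-degree shift in the mean roughly doubles the frequency of 100 deg F days.
```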

The anthropogenic increase in global temperature has other direct implications.

An important law of physical chemistry, the Clausius–Clapeyron equation, tells us that the vapor pressure of almost all liquids steadily increases as the temperature increases. That means the amount of humidity the atmosphere can hold increases as we warm the planet. Most of the impacts of the increase in global temperature manifest themselves through the water cycle: rise in sea level, increased desertification through enhanced transpiration and heightened land evaporation, shifts in precipitation patterns, etc.
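
For a rough sense of the size of this effect, here is a sketch using Bolton’s (1980) empirical formula for the saturation vapor pressure of water – my choice of approximation; any standard one gives similar numbers:

```python
import math

def sat_vapor_pressure_hpa(t_c: float) -> float:
    """Saturation vapor pressure (hPa) over water at t_c (deg C), per Bolton (1980)."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

e25 = sat_vapor_pressure_hpa(25.0)
e26 = sat_vapor_pressure_hpa(26.0)
print(f"{100 * (e26 / e25 - 1):.1f}% more water vapor capacity per degree C")
# ~6-7% more moisture-holding capacity for each degree of warming
```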

Next week I will focus on how climate change is impacting some specific areas as well as some steps that we need to take right now to mitigate (minimize) and adapt to these dangers.


Doomsday: Timing?

Figure 1 – IPCC projections of future climate change impacts based on two different scenarios

Figure 2 – Relationship between global mean equilibrium temperature change and stabilization concentration of greenhouse gases

Trying to estimate the timing of doomsday is not a very honorable activity. Doomsday theorists abound and they don’t enjoy a great reputation for accuracy. Here’s a historical list of such apocalyptic predictions.

I feel more comfortable replacing “doomsday” with the phrase “6th Mass Extinction” – as used in the PNAS article that I described last week – or “self-inflicted genocide,” a term I have used repeatedly here, even though that relabeling might be considered a matter of semantics. By using these terms, scientists are less afraid of being labeled false prophets; we consider some of the societal and political activities happening now to be existential threats that greatly heighten the plausibility of doomsday scenarios.

I have used Figures 1 and 2 from the IPCC report on various occasions throughout this blog (the earliest being December 10, 2012). The WG1 scenario’s sharp climb into the dark orange zone demonstrates our rocketing towards doomsday, an existential danger that needs to be halted immediately. Reaching this situation by the end of the century in a business-as-usual scenario seems reasonable. Such a sharp marker doesn’t exist in Figure 2, which expresses the future in terms of carbon accumulation in the atmosphere. Daniel Rothman from MIT just put out a paper that works to close that gap, analyzing changes in the carbon cycle that have triggered all the recorded mass extinctions over the last 500 million years (Daniel H. Rothman. “Thresholds of catastrophe in the Earth system,” Science Advances, 2017):

Thresholds of catastrophe in the Earth system

Daniel H. Rothman

The history of the Earth system is a story of change. Some changes are gradual and benign, but others, especially those associated with catastrophic mass extinction, are relatively abrupt and destructive. What sets one group apart from the other? Here, I hypothesize that perturbations of Earth’s carbon cycle lead to mass extinction if they exceed either a critical rate at long time scales or a critical size at short time scales. By analyzing 31 carbon isotopic events during the past 542 million years, I identify the critical rate with a limit imposed by mass conservation. Identification of the crossover time scale separating fast from slow events then yields the critical size. The modern critical size for the marine carbon cycle is roughly similar to the mass of carbon that human activities will likely have added to the oceans by the year 2100.

The threshold seems to be the same: Rothman’s critical size for the marine carbon cycle is roughly the mass of carbon that human activities will likely have added to the oceans by 2100, matching the end-of-century danger zone in Figure 1. Last week, a paper in Nature Geoscience (September 18, 2017) called into question some of the IPCC’s carbon dioxide estimates (GtC = gigatons of carbon = billion tons of carbon):

Emission budgets and pathways consistent with limiting warming to 1.5°C

Richard J. Millar, Jan S. Fuglestvedt, Pierre Friedlingstein, Joeri Rogelj, Michael J. Grubb, H. Damon Matthews, Ragnhild B. Skeie, Piers M. Forster, David J. Frame & Myles R. Allen

The Paris Agreement has opened debate on whether limiting warming to 1.5 °C is compatible with current emission pledges and warming of about 0.9 °C from the mid-nineteenth century to the present decade. We show that limiting cumulative post-2015 CO2 emissions to about 200 GtC would limit post-2015 warming to less than 0.6 °C in 66% of Earth system model members of the CMIP5 ensemble with no mitigation of other climate drivers, increasing to 240 GtC with ambitious non-CO2 mitigation. We combine a simple climate–carbon-cycle model with estimated ranges for key climate system properties from the IPCC Fifth Assessment Report. Assuming emissions peak and decline to below current levels by 2030, and continue thereafter on a much steeper decline, which would be historically unprecedented but consistent with a standard ambitious mitigation scenario (RCP2.6), results in a likely range of peak warming of 1.2–2.0 °C above the mid-nineteenth century. If CO2 emissions are continuously adjusted over time to limit 2100 warming to 1.5 °C, with ambitious non-CO2 mitigation, net future cumulative CO2 emissions are unlikely to prove less than 250 GtC and unlikely greater than 540 GtC. Hence, limiting warming to 1.5 °C is not yet a geophysical impossibility, but is likely to require delivery on strengthened pledges for 2030 followed by challengingly deep and rapid mitigation. Strengthening near-term emissions reductions would hedge against a high climate response or subsequent reduction rates proving economically, technically or politically unfeasible.

The paper’s outlook on carbon dioxide emissions is a bit more optimistic than that of the IPCC but it still requires humanity to go zero carbon well before the end of the century in order to prevent doomsday.
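
To put the quoted carbon budgets in perspective, here is a quick sketch; the ~10 GtC per year emission rate is my own round figure for current global carbon emissions, not a number from the paper:

```python
BUDGET_GTC = 200             # post-2015 budget consistent with 1.5 C (from the abstract above)
EMISSIONS_GTC_PER_YEAR = 10  # approximate current global carbon emissions

years_left = BUDGET_GTC / EMISSIONS_GTC_PER_YEAR
print(f"Budget exhausted in ~{years_left:.0f} years at current emission rates")
# ~20 years -- which is why the paper calls for emissions to peak by 2030
# and then decline steeply.
```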

Judging by recent developments, the US is not only aiming to double down on business-as-usual scenarios for greenhouse gas emissions but is actively trying to reverse previous attempts to mitigate the coming disaster. Here are some recent examples:

The data-tracking watchdog group Environmental Data & Governance Initiative (EDGI) has reported dozens of instances in which the National Institute of Environmental Health Sciences deleted references to climate change from its site.

This week saw the long-delayed release of the Department of Energy’s evaluation of grid stability. The report was commissioned by Energy Secretary Rick Perry, who suggested that the expansion of renewable energy was undermining the reliability of electricity delivery. Back in June, however, a draft of the expert evaluation leaked, and it stated that the US grid was now more reliable than it had been in recent decades. Those conclusions, however, were watered down in the final report.

But the report is also notable for avoiding the use of the term “climate change” anywhere in its 125 pages. This is despite the fact that increased heat will boost demand and stress grid hardware and that climate change is currently driving state-level energy policies. In fact, the report recommends the anti-solution of increasing the use of coal-fired power plants.

For years, climate change activists have faced a wrenching dilemma: how to persuade people to care about a grave but seemingly far-off problem and win their support for policies that might pinch them immediately in utility bills and at the pump.

But that calculus may be changing at a time when climatic chaos feels like a daily event rather than an airy abstraction, and storms powered by warming ocean waters wreak havoc on the mainland United States. Americans have spent weeks riveted by television footage of wrecked neighborhoods, displaced families, flattened Caribbean islands and submerged cities from Houston to Jacksonville.

“The conversation is shifting,” said Senator Brian Schatz, Democrat of Hawaii. “Because even if you don’t believe liberals, even if you don’t believe scientists, you can believe your own eyes.”

Doomsday timing doesn’t have to be global. We can see different markers of individual cataclysms around the world and how they trigger even bigger problems. The necessary massive movement of people from more vulnerable locations to less vulnerable ones creates its own set of issues that we are just now starting to encounter. In future blogs I will try to quantify some of these prospects within a few local communities.

Next week I will return to how and why we attribute climate change to human activities. I’ll examine how we try to determine the human contributions to each of these early markers of the 6th Mass Extinction. I will try to address this both globally and locally, followed by identifying tools that we already have that can minimize these catastrophic impacts – if we have the will to do so.


Doomsday Early Signs: The Science

The New York Times last week tried to highlight the dangers of climate change. On Friday, Alexander Burns opened his contribution with the following two paragraphs:

For years, climate change activists have faced a wrenching dilemma: how to persuade people to care about a grave but seemingly far-off problem and win their support for policies that might pinch them immediately in utility bills and at the pump.

But that calculus may be changing at a time when climatic chaos feels like a daily event rather than an airy abstraction, and storms powered by warming ocean waters wreak havoc on the mainland United States. Americans have spent weeks riveted by television footage of wrecked neighborhoods, displaced families, flattened Caribbean islands and submerged cities from Houston to Jacksonville.

On Thursday, Tom Friedman wrote an op-ed on the contradictory ways in which President Trump is handling two seemingly low-probability global doomsday events: North Korea’s use of nuclear weapons on the US or its allies, and climate change:

The other low-probability, high-impact threat is climate change fueled by increased human-caused carbon emissions. The truth is, if you simply trace the steady increase in costly extreme weather events — wildfires, floods, droughts and climate-related human migrations — the odds of human-driven global warming having a devastating impact on our planet are not low probability but high probability.

Friedman realized that the major difference between these two possible future events is that the latter has a much higher probability. For a business-as-usual scenario in which we continue with the same policies that we presently hold, the only contentious issue is the timing.

Estimating future events always comes with uncertainty about timing. In terms of climate change, the IPCC has addressed this via different scenarios (October 1, 2013 and October 28, 2014). Doomsday timing depends on our contributions to climate change (August 29, 2017 – dark orange region in Figure 2). A similar approach is to try to predict doomsday through its early signs. Some doomsday scenarios such as North Korea bombing the US with nuclear weapons would likely have few (if any) early warning signs; they would reflect totally irrational, suicidal thinking. Other doomsday scenarios would give us much more to work with. The largest computer file that I have on climate change is dedicated to the early signs that already exist.

A recently published paper in the Proceedings of the National Academy of Sciences (PNAS) deals directly with this issue. All three of its authors are distinguished scientists. One of them, Paul Ehrlich, is a member of the National Academy of Sciences and was one of the two original formulators of the IPAT identity that I use so often in this blog (November 26, 2012). I am including significant parts of the paper because the main differences between science and stories or opinions are details, methodology, and refutability based on observations. Readers should be able to distinguish between the conclusions of this paper and the New York Magazine article that I discussed last week:

Biological annihilation via the ongoing sixth mass extinction signaled by vertebrate population losses and declines

Gerardo Ceballos, Paul R. Ehrlich, and Rodolfo Dirzo

Significance

The strong focus on species extinctions, a critical aspect of the contemporary pulse of biological extinction, leads to a common misimpression that Earth’s biota is not immediately threatened, just slowly entering an episode of major biodiversity loss. This view overlooks the current trends of population declines and extinctions. Using a sample of 27,600 terrestrial vertebrate species, and a more detailed analysis of 177 mammal species, we show the extremely high degree of population decay in vertebrates, even in common “species of low concern.” Dwindling population sizes and range shrinkages amount to a massive anthropogenic erosion of biodiversity and of the ecosystem services essential to civilization. This “biological annihilation” underlines the seriousness for humanity of Earth’s ongoing sixth mass extinction event.

Abstract

The population extinction pulse we describe here shows, from a quantitative viewpoint, that Earth’s sixth mass extinction is more severe than perceived when looking exclusively at species extinctions. Therefore, humanity needs to address anthropogenic population extirpation and decimation immediately. That conclusion is based on analyses of the numbers and degrees of range contraction (indicative of population shrinkage and/or population extinctions according to the International Union for Conservation of Nature) using a sample of 27,600 vertebrate species, and on a more detailed analysis documenting the population extinctions between 1900 and 2015 in 177 mammal species. We find that the rate of population loss in terrestrial vertebrates is extremely high—even in “species of low concern.” In our sample, comprising nearly half of known vertebrate species, 32% (8,851/27,600) are decreasing; that is, they have decreased in population size and range. In the 177 mammals for which we have detailed data, all have lost 30% or more of their geographic ranges and more than 40% of the species have experienced severe population declines (>80% range shrinkage). Our data indicate that beyond global species extinctions Earth is experiencing a huge episode of population declines and extirpations, which will have negative cascading consequences on ecosystem functioning and services vital to sustaining civilization. We describe this as a “biological annihilation” to highlight the current magnitude of Earth’s ongoing sixth major extinction event.

The loss of biological diversity is one of the most severe human-caused global environmental problems. Hundreds of species and myriad populations are being driven to extinction every year (1–8). From the perspective of geological time, Earth’s richest biota ever is already well into a sixth mass extinction episode (9–14). Mass extinction episodes detected in the fossil record have been measured in terms of rates of global extinctions of species or higher taxa (e.g., ref. 9). For example, conservatively almost 200 species of vertebrates have gone extinct in the last 100 y. These represent the loss of about 2 species per year. Few realize, however, that if subjected to the estimated “background” or “normal” extinction rate prevailing in the last 2 million years, the 200 vertebrate species losses would have taken not a century, but up to 10,000 y to disappear, depending on the animal group analyzed (11). Considering the marine realm, specifically, only 15 animal species have been recorded as globally extinct (15), likely an underestimate, given the difficulty of accurately recording marine extinctions. Regarding global extinction of invertebrates, available information is limited and largely focused on threat level. For example, it is estimated that 42% of 3,623 terrestrial invertebrate species, and 25% of 1,306 species of marine invertebrates assessed on the International Union for Conservation of Nature (IUCN) Red List are classified as threatened with extinction (16). However, from the perspective of a human lifetime it is difficult to appreciate the current magnitude of species extinctions. A rate of two vertebrate species extinctions per year does not generate enough public concern, especially because many of those species were obscure and had limited ranges, such as the Catarina pupfish (Megupsilon aporus, extinct in 2014), a tiny fish from Mexico, or the Christmas Island pipistrelle (Pipistrellus murrayi, extinct in 2009), a bat that vanished from its namesake volcanic remnant.

Species extinctions are obviously very important in the long run, because such losses are irreversible and may have profound effects ranging from the depletion of Earth’s inspirational and esthetic resources to deterioration of ecosystem function and services (e.g., refs. 17–20). The strong focus among scientists on species extinctions, however, conveys a common impression that Earth’s biota is not dramatically threatened, or is just slowly entering an episode of major biodiversity loss that need not generate deep concern now (e.g., ref. 21, but see also refs. 9, 11, 22). Thus, there might be sufficient time to address the decay of biodiversity later, or to develop technologies for “deextinction”—the possibility of the latter being an especially dangerous misimpression (see ref. 23). Specifically, this approach has led to the neglect of two critical aspects of the present extinction episode: (i) the disappearance of populations, which essentially always precedes species extinctions, and (ii) the rapid decrease in numbers of individuals within some of the remaining populations. A detailed analysis of the loss of individuals and populations makes the problem much clearer and more worrisome, and highlights a whole set of parameters that are increasingly critical in considering the Anthropocene’s biological extinction crisis.

In the last few decades, habitat loss, overexploitation, invasive organisms, pollution, toxification, and more recently climate disruption, as well as the interactions among these factors, have led to the catastrophic declines in both the numbers and sizes of populations of both common and rare vertebrate species (24–28). For example, several species of mammals that were relatively safe one or two decades ago are now endangered. In 2016, there were only 7,000 cheetahs in existence (29) and less than 5,000 Borneo and Sumatran orangutans (Pongo pygmaeus and P. abelli, respectively) (28). Populations of African lion (Panthera leo) dropped 43% since 1993 (30), pangolin (Manis spp.) populations have been decimated (31), and populations of giraffes dropped from around 115,000 individuals thought to be conspecific in 1985, to around 97,000 representing what is now recognized to be four species (Giraffa giraffa, G. tippelskirchi, G. reticulata, and G. camelopardalis) in 2015 (32).

An important antecedent to our work (25) used the number of genetic populations per unit area and then estimated potential loss on the basis of deforestation estimates and the species–area relationship (SAR). Given the recognized limitations of the use of SAR to estimate extinctions, our work provides an approach based on reduction of species range as a proxy of population extirpation. The most recent Living Planet Index (LPI) has estimated that wildlife abundance on the planet decreased by as much as 58% between 1970 and 2012 (4). The present study is different from LPI and other related publications in several ways, including that here we use all decreasing species of vertebrates according to IUCN, mapping and comparing absolute and relative numbers of species, and focusing on population losses. Previous estimates seem validated by the data we present here on the loss of local populations and the severe decrease in the population size of many others (see also refs. 3, 4, 6–8, 26). Here we examine the magnitude of losses of populations of land vertebrate species on a global system of 10,000-km2 quadrats (Methods). Species vary from common to rare, so that our analysis, which includes all land vertebrate species (amphibians, birds, reptiles, and mammals) deemed as “decreasing” by IUCN, provides a better estimate of population losses than using exclusively IUCN data on species at risk. Obviously, common species decreasing are not ordinarily classified as species at risk. IUCN criteria provide quantitative thresholds for population size, trend, and range size, to determine decreasing species (28, 33). We also evaluate shrinking ranges and population declines for 177 species of mammals for which data are available on geographic range shrinkage from ∼1900 to 2015. We specifically focus on local extinctions by addressing the following questions: (i) What are the numbers and geographic distributions of decreasing terrestrial vertebrate species (i.e., experiencing population losses)? (ii) What are the vertebrate groups and geographic regions that have the highest numbers and proportions of decreasing species? (iii) What is the scale of local population declines in mammals—a proxy for other vertebrates? By addressing these questions, we conclude that anthropogenic population extinctions amount to a massive erosion of the greatest biological diversity in the history of Earth and that population losses and declines are especially important, because it is populations of organisms that primarily supply the ecosystem services so critical to humanity at local and regional levels.

Conclusion

Population extinctions today are orders of magnitude more frequent than species extinctions. Population extinctions, however, are a prelude to species extinctions, so Earth’s sixth mass extinction episode has proceeded further than most assume. The massive loss of populations is already damaging the services ecosystems provide to civilization. When considering this frightening assault on the foundations of human civilization, one must never forget that Earth’s capacity to support life, including human life, has been shaped by life itself (47). When public mention is made of the extinction crisis, it usually focuses on a few animal species (hundreds out of millions) known to have gone extinct, and projecting many more extinctions in the future. But a glance at our maps presents a much more realistic picture: they suggest that as much as 50% of the number of animal individuals that once shared Earth with us are already gone, as are billions of populations. Furthermore, our analysis is conservative, given the increasing trajectories of the drivers of extinction and their synergistic effects. Future losses easily may amount to a further rapid defaunation of the globe and comparable losses in the diversity of plants (36), including the local (and eventually global) defaunation-driven coextinction of plants (3, 20). The likelihood of this rapid defaunation lies in the proximate causes of population extinctions: habitat conversion, climate disruption, overexploitation, toxification, species invasions, disease, and (potentially) large-scale nuclear war—all tied to one another in complex patterns and usually reinforcing each other’s impacts. Much less frequently mentioned are, however, the ultimate drivers of those immediate causes of biotic destruction, namely, human overpopulation and continued population growth, and overconsumption, especially by the rich. These drivers, all of which trace to the fiction that perpetual growth can occur on a finite planet, are themselves increasing rapidly. Thus, we emphasize that the sixth mass extinction is already here and the window for effective action is very short, probably two or three decades at most (11, 48). All signs point to ever more powerful assaults on biodiversity in the next two decades, painting a dismal picture of the future of life, including human life.

The doomsday scenario in this paper is the Sixth Extinction (February 3, 2015). The paper concludes not only that this is not a new prediction but that we are already in the middle of the extinction process. The methodology is a detailed investigation of databases, which shows that we are in a period of extinction both of entire species and of populations within species that survive. The paper also examines the geographic ranges of these populations and shows how their shrinkage serves as a proxy for further extinctions. The determination that this extinction is human-caused (anthropogenic) rests on its speed and global distribution. One common argument about the extinction of whole species is that the process is not the end of all life on Earth but a homogenization – a road to lower biodiversity (although presumably humans and domesticated animals, whether livestock or pets, would continue to survive). The documented disappearance of populations and individuals puts this argument to rest.
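
The scale of the acceleration is easy to check from the numbers quoted above (a sketch of the arithmetic, using the paper’s own round figures):

```python
observed_rate = 200 / 100       # ~200 vertebrate extinctions over the last 100 years
background_rate = 200 / 10_000  # the same losses would take up to 10,000 years at background rates

print(f"current extinction rate: ~{observed_rate / background_rate:.0f}x background")
# ~100x the background rate -- the kind of acceleration that defines a mass extinction
```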

Next week I will deal explicitly with the timing of the projected doomsday.

Be Sociable, Share!
Posted in administration, Anthropocene, Anthropogenic, Climate Change, IPCC, Sustainability, Trump | Tagged , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , | Leave a comment

Doomsday: The Simple (or Simplistic) Version

I am writing this blog on Saturday, one day before Hurricane Irma is scheduled to make an unwelcome visit to Florida. Other unwanted weather events are taking place all around, and their human toll is mounting. The New York Times had a piece today that summarized the feeling of apocalypse these combined events have produced:

CLEWISTON, Fla. — Vicious hurricanes all in a row, one having swamped Houston and another about to buzz through Florida after ripping up the Caribbean.

Wildfires bursting out all over the West after a season of scorching hot temperatures and years of dryness.

And late Thursday night, off the coast of Mexico, a monster of an earthquake.

You could be forgiven for thinking apocalyptic thoughts, like the science fiction writer John Scalzi who, surveying the charred and flooded and shaken landscape, declared that this “sure as hell feels like the End Times are getting in a few dress rehearsals right about now.”

Or the street corner preacher in Harlem overheard earlier this week ranting about Harvey, Irma and Kim Jong Un, in no particular order.

Or the tens of thousands who retweeted this image of golfers playing against a raging inferno of a wildfire in Oregon.

And just last month darkness descended on the land as the moon erased the sun. Everyone thought the eclipse was awesome, but now we’re not so sure — for all the recent ruin seems deeply, darkly not coincidental.

If you thought that, you would be wrong, of course. As any scientist will tell you, nature doesn’t work that way.

Some of our esteemed communicators tried to shift the blame to supposedly sinful behavior:

Meanwhile, some of our key public servants who were appointed to take care of the consequences of these catastrophic events apparently felt insulted when anyone mentioned climate change as a contributing factor that needs addressing:

“Here’s the issue,” Pruitt told CNN in a phone interview. “To have any kind of focus on the cause and effect of the storm; versus helping people, or actually facing the effect of the storm, is misplaced.”

“All I’m saying to you is, to use time and effort to address it at this point is very, very insensitive to this people in Florida.”

But isn’t it our responsibility to try to mitigate the risks not only to ourselves but also to our children and grandchildren?

These catastrophic events are happening right now. They are highly visible and cost numerous lives as well as billions of dollars to repair. All the science that we know points to these events continuing to grow at accelerated rates. Throughout this blog, I have framed this future as “the end of now,” where I define “now” as the lifespan of my grandchildren: toward the end of the century.

As I have repeatedly mentioned, such a possible grand doomsday scenario toward the end of the century still leaves us with enough time now to take steps to mitigate and thus prevent or minimize what could be a much larger catastrophe. The implementation of such options is a legitimate topic of discussion.

David Wallace-Wells’ piece in New York Magazine prompted that sort of dialogue. The response was visceral; some described it as fear mongering while others labeled it “climate disaster porn.”  Here I would like to come to the paper’s defense. What the paper is doing is a legitimate way of describing a possible doomsday, albeit without going into scientific details or offering possible mitigating actions. The paper describes a “simplistic doomsday” and thus many people hate it – including many of my students on the few occasions that I have presented such scenarios.

Wallace-Wells addresses heat death, the end of food, climate plagues, unbreathable air, perpetual war, permanent economic collapse, and poisoned oceans. I will paste some key paragraphs from the first and last sections of his paper, since their titles are not self-explanatory. For the rest, I recommend that readers consult the original publication.

I. ‘Doomsday’
It is, I promise, worse than you think. If your anxiety about global warming is dominated by fears of sea-level rise, you are barely scratching the surface of what terrors are possible, even within the lifetime of a teenager today. And yet the swelling seas — and the cities they will drown — have so dominated the picture of global warming, and so overwhelmed our capacity for climate panic, that they have occluded our perception of other threats, many much closer at hand. Rising oceans are bad, in fact very bad; but fleeing the coastline will not be enough. Indeed, absent a significant adjustment to how billions of humans conduct their lives, parts of the Earth will likely become close to uninhabitable, and other parts horrifically inhospitable, as soon as the end of this century.


IX. The Great Filter
Surely this blindness will not last — the world we are about to inhabit will not permit it. In a six-degree-warmer world, the Earth’s ecosystem will boil with so many natural disasters that we will just start calling them “weather”: a constant swarm of out-of-control typhoons and tornadoes and floods and droughts, the planet assaulted regularly with climate events that not so long ago destroyed whole civilizations. The strongest hurricanes will come more often, and we’ll have to invent new categories with which to describe them; tornadoes will grow longer and wider and strike much more frequently, and hail rocks will quadruple in size. Humans used to watch the weather to prophesy the future; going forward, we will see in its wrath the vengeance of the past. Early naturalists talked often about “deep time” — the perception they had, contemplating the grandeur of this valley or that rock basin, of the profound slowness of nature. What lies in store for us is more like what the Victorian anthropologists identified as “dreamtime,” or “everywhen”: the semi-mythical experience, described by Aboriginal Australians, of encountering, in the present moment, an out-of-time past, when ancestors, heroes, and demigods crowded an epic stage. You can find it already watching footage of an iceberg collapsing into the sea — a feeling of history happening all at once.

The paper obviously incited many objections. The author has added a page at New York Magazine, considerably longer than the original article, to try to answer some of these critical comments. It is not a scientific paper and was not published in a scientific journal. It doesn’t conform to some of the most basic requirements of scientific publications, the most important of which is refutability. It doesn’t even qualify as “fake news” – a popular concept these days – because it doesn’t pretend to offer news. Nor is it an opinion. It is a call for action, designed to shock people into considering the scenario plausible enough to provoke arguments that might lead to political pressure and movement. This was a good enough reason for a reputable publication such as New York Magazine to offer the author a platform. The described methodology – chatting with scientists – is common among popular science “documentary” books that attempt to describe the future. It cannot be refuted because the scientists are not identified by name; people can only argue with the author. The timing and the scenarios on which his predictions are based are, however, well defined: toward the end of the century, under the “business as usual” scenarios that I have discussed many times on this blog.

“The Great Filter” segment summarizes the doomsday scenarios. The effect it describes has officially been named the “Shifting Baseline Syndrome”: future generations might not even be aware that they are in trouble. They might think that this is simply how nature works.

Next week I am going to address the doomsday issue from a much more solid scientific foundation to show more concretely that not only is this coming but we are in the middle of it. I want to emphasize that in spite of the urgency we still have time to act.
