Kosher Water and Desalination

A few years back, I was teaching an environmental course to prospective teachers at our school of education (Brooklyn College, CUNY). One of the topics in the news at the time was the overpopulation of deer. Proposed remedies included increasing the allowed quota of hunting permits and the more widespread use of deer contraceptives. I decided to devote a class to the option of enhanced hunting permits, and titled the class “to kill or not to kill.” Since I didn’t feel that my credentials fully qualified me to teach this matter, I invited a colleague, a professor of philosophy with a background in environmental issues, to join me and advocate against enhanced hunting. The class discussion quickly drifted to boundaries: what killings are permissible for the benefit of the killers? Nobody in class (including the philosopher) advocated against killing cockroaches, so the question became where we draw the line between what is and is not acceptable. The philosopher’s position was to draw the line where we know that the organism can feel pain. Expertise in biology or oceanography was not a prerequisite for the class, so a lively debate about whether fish can feel pain followed. Our guest’s answer was negative, requiring the line to be drawn higher on the evolutionary scale. As it happened, the very next day a science news publication discussed the nervous system of fish.

Several years later I encountered a detailed discussion about kosher water. It concerned the question of which items need a signature from a trusted rabbi to certify that they are fit for consumption by orthodox Jews. Many of my secular Israeli friends have claimed that the existence of such a signature on so many items that, to the layperson, appear to have nothing to do with Jewish law reflects the imperial aspirations of the orthodox community; it is not a matter of religion but rather a means of extracting financial support from society at large. The symbolic special case for this debate focused on kosher water.

Here is a direct quote from the orthodox press:

Is a kosher seal of approval needed for bacteria? Definitely. According to the book, in the United States there is a “bank” with 80,000 germs for food production, used mainly as a culture for different products such as cheese. Most are not kosher as they are stored inside the blood of cows which have not been slaughtered according to Jewish religious laws. The solution: In Indonesia there is a wide production of bacteria preserved in different kosher conditions.

Here is Wikipedia’s description of bacteria:

Bacteria (i/bækˈtɪəriə/; singular: bacterium) constitute a large domain or kingdom of prokaryotic microorganisms. Typically a few micrometres in length, bacteria have a wide range of shapes, ranging from spheres to rods and spirals. Bacteria were among the first life forms to appear on Earth, and are present in most habitats on the planet. Bacteria inhabit soil, water, acidic hot springs, radioactive waste,[2] and the deep portions of Earth’s crust. Bacteria also live in plants, animals (see symbiosis), and have flourished in manned space vehicles.[3]

There are typically 40 million bacterial cells in a gram of soil and a million bacterial cells in a millilitre of fresh water. There are approximately 5×10³⁰ bacteria on Earth,[4] forming a biomass that exceeds that of all plants and animals.[5] Bacteria are vital in recycling nutrients, with many steps in nutrient cycles depending on these organisms, such as the fixation of nitrogen from the atmosphere and putrefaction. In the biological communities surrounding hydrothermal vents and cold seeps, bacteria provide the nutrients needed to sustain life by converting dissolved compounds such as hydrogen sulphide and methane to energy. On 17 March 2013, researchers reported data that suggested bacterial life forms thrive in the Mariana Trench, the deepest spot on the Earth.[6][7] Other researchers reported related studies that microbes thrive inside rocks up to 1900 feet below the sea floor under 8500 feet of ocean off the coast of the northwestern United States.[6][8] According to one of the researchers, “You can find microbes everywhere — they’re extremely adaptable to conditions, and survive wherever they are.”[6] Most bacteria have not been characterised, and only about half of the phyla of bacteria have species that can be grown in the laboratory.[9] The study of bacteria is known as bacteriology, a branch of microbiology.

There are approximately ten times as many bacterial cells in the human flora as there are human cells in the body, with large numbers of bacteria on the skin and as gut flora.[10] The vast majority of the bacteria in the body are rendered harmless by the protective effects of the immune system, and some are beneficial. However, several species of bacteria are pathogenic and cause infectious diseases, including cholera, syphilis, anthrax, leprosy, and bubonic plague. The most common fatal bacterial diseases are respiratory infections, with tuberculosis alone killing about 2 million people a year, mostly in sub-Saharan Africa.[11] In developed countries, antibiotics are used to treat bacterial infections and also in farming, so antibiotic resistance is becoming common. In industry, bacteria are important in sewage treatment and the breakdown of oil spills, the production of cheese and yogurt through fermentation, the recovery of gold, palladium, copper and other metals in the mining sector,[12] as well as in biotechnology, and the manufacture of antibiotics and other chemicals.[13]

Now, if all of the food ingredients that we consume in regular food contain bacteria, and they are everywhere, someone should be able to certify whether they are kosher or not. In the Jewish orthodox press, I have found more than one opinion. In addition to the argument that I quoted above, an alternative argument states that Jewish dietary laws refer only to what can be seen by the naked eye. The reasoning cited was that Jewish dietary laws can be traced to the Persian period (532 – 332 BCE), when magnification tools such as the microscope were not available. The earliest microscope capable of revealing microorganisms is traced to the Dutch researcher Anton van Leeuwenhoek around 1674; he is also credited with being the first person to observe bacteria. According to this interpretation, there is no issue: they couldn’t see the bacteria when the rules were made, so even though modern technology allows us to do so now, bacteria do not fall within the purview of the rules.

Returning to the first argument about the need to certify bacteria as kosher or not: the argument makes sense provided that it is taken to its logical conclusion. Offering a stamp of kosher based on bacterial content requires knowing the full manufacturing conditions of the item, which is equivalent to being able to calculate the item’s LCA (Life Cycle Assessment). How many of the rabbis assigned to provide a stamp of kashrut qualify?

Back to water desalination:

In the United States, due to a 2011 court ruling under the Clean Water Act, ocean water intakes are no longer viable without reducing the mortality of ocean life (plankton, fish eggs and fish larvae) by 90%.[39] The alternatives include beach wells, which eliminate this concern but require more energy and higher costs while limiting output.[40]

Let’s see some more details:

According to Federal law:

Federal law (§ 303(c)(1) of the Clean Water Act) requires that ocean water quality standards be reviewed at least once every three years. State law (Wat. Code, § 13170.2(b)) requires that ocean water quality standards be reviewed periodically. The purpose of the triennial review of the Ocean Plan is to guarantee the continued adequacy of water quality standards.

The court ruling:

United States Court of Appeals for the Second Circuit (January 25, 2007)

SOTOMAYOR, Circuit Judge:

This is a case about fish and other aquatic organisms. Power plants and other industrial operations withdraw billions of gallons of water from the nation’s waterways each day to cool their facilities. The flow of water into these plants traps (or “impinges”) large aquatic organisms against grills or screens, which cover the intake structures, and draws (or “entrains”) small aquatic organisms into the cooling mechanism; the resulting impingement and entrainment from these operations kill or injure billions of aquatic organisms every year. Petitioners here challenge a rule promulgated by the Environmental Protection Agency (“the EPA” or “the Agency”) pursuant to section 316(b) of the Clean Water Act (“CWA” or “the Act”), 33 U.S.C. § 1326(b), that is intended to protect fish, shellfish, and other aquatic organisms from being harmed or killed by regulating “cooling water intake structures” at large, existing power producing facilities. For the reasons that follow, we grant in part and deny in part the petitions for review, concluding that certain aspects of the EPA’s rule are based on a reasonable interpretation of the Act and supported by substantial evidence in the administrative record, but remanding several aspects of the rule because they are inadequately explained or inconsistent with the statute, or because the EPA failed to give adequate notice of its rulemaking. We also dismiss for lack of jurisdiction one aspect of the petitions because there is no final agency action to review.

This ruling was made by Justice Sotomayor, who is now a member of the Supreme Court.

The state of California has interpreted it as relating to direct ocean intake for water desalination:

Draft Final Report of the Expert Review Panel on Intake Impacts and Mitigation

Mitigation and Fees for the Intake of Seawater by Desalination and Power Plants

Report submitted to Dominic Gregorio, Senior Environmental Scientist, Ocean Unit,

State Water Resources Control Board (SWRCB) in fulfillment of SWRCB Contract No. 09-052-270-1, Work Order SJSURF-10-11-003

The SWRCB is currently developing a policy for addressing desalination plant intakes and discharges which will be instituted through amendments to the Ocean Plan and Enclosed Bays and Estuaries Plan (statewide water quality standards). The California Water Code currently requires new or expanded industrial facilities (e.g., desalination plants) to use the “best available site, design, technology, and mitigation measures feasible” to minimize the intake and mortality of marine life.

Let’s put all of this into sharp personal focus and require of every drinking water provider and water consumer that he or she “use the ‘best available site, design, technology, and mitigation measures feasible’ to minimize the intake and mortality of marine life.” Every glass of water that we drink, use for animal consumption, or to irrigate plants around us, results in major “mortality of marine life” – that is the way that the world runs.

Posted in Climate Change | 3 Comments

“My Way or the Highway” Can Be a Problem With the Best of Intentions

I am an old guy. My wife is younger but also past her official retirement age. Like everybody else, we try to prepare for retirement times by investing our savings. She is also an academic so we share the responsibilities. She has a very “simple” investment policy: buy low and sell high. With this philosophy, you cannot lose. The problem with this philosophy is that you don’t know when a low is a low or when a high is a high, because a low can get much lower and a high can get much higher. To know when a low is a real low or a high is a real high you need to be able to predict the future. If you can predict the future you don’t need investment rules, you will always win. I also have my own very simple investment policy – diversify. We have combined our “philosophies,” and are doing fine. Diversification is needed because we cannot predict the future. Unfortunately, when I look around at issues that I truly care about, diversification of advocacies seems to be in short supply. Instead of welcoming multiple strategies for dealing with upcoming problems, there seems to be an attitude that only one approach can possibly work, and all others will lead to phenomenal failure.

One of the best examples of this is the dispute among advocates of mitigating climate change through a shift away from fossil fuels. All of these groups agree that this can best be attained by taxing fossil fuels, thus reducing their cost advantage over more sustainable energy sources. The point of contention is the implementation of such a tax. The two options in question are a simple carbon tax, based on the carbon footprint of the fuels, and a cap-and-trade policy, based on trading emission rights while keeping a cap on overall emissions. There have been bitter fights between advocates over which one is better. Among the leaders of this fight is one of the most admired advocates for changing public policy to enhance mitigation of man-made climate change: Dr. James Hansen (see my blog that honored him after his retirement announcement – April 9 blog). Hansen came out with a scorching Op-Ed in the New York Times titled “Cap and Fade” to criticize cap and trade and advocate a carbon tax. Meanwhile, in light of the continuing disagreement over which to implement, we have ended up with neither.

Nuclear energy has been at the forefront of every dispute about energy use. It still is. The dispute became even more heated after the Fukushima disaster in March 2011. One of the consequences of this disaster was that countries like Germany decided to completely remove nuclear energy as an acceptable source to feed the German electrical grid. It is obvious that nuclear energy has large issues as a major source for power generation, but so do all other power sources, including solar and wind. The concept of not choosing winners among competing technologies but, instead, keeping focus on a central goal, supporting research and development on any technology compatible with that goal, and allowing the market to decide on their adoption, seems a novel one.

Those addressing global water stress seem to suffer from the same issue. It is not very useful to spend five full years on an approval process (November 12 blog), or for promoters of environmentally acceptable responses to water stress to issue a report titled “Desalination with a Grain of Salt” that advocates focusing on allocation and recycling while keeping desalination as a last resort (November 12 and 19 blogs). Neither is it useful to invoke small price differentials to justify a particular technology for a facility whose construction lasts 10 years.

Unfortunately, the use of environmental impact statements as a weapon to block a technology based on the hope that doing so will result in adopting a “favorable” technology often results in no action at all.

I will be specific, using an example that serves as the continuation of my recent series of blogs focused on attempts to alleviate the shortage in fresh water. This example is based on a recent paper in Agricultural Water Management 130, 1 – 13 (2013). The paper is a review article on the global state of wastewater, titled “Global, regional and country level need for data on wastewater generation, treatment and use” by Toshio Sato et al. The figure below summarizes the findings.

Wastewater Information by Country

In this setting, “complete information” in the figure refers to the availability of data on all aspects of wastewater use (production, treatment and use), while “partial” refers to the availability of data on only one or two of these aspects.

The tables in the paper include detailed information about the state of wastewater in most countries within the regions specified by the figure.

The table below provides the information about the US and Canada:

Country   Wastewater generated       Wastewater treated         Treated wastewater used
          Year    Volume (km3/year)  Year    Volume (km3/year)  Year    Volume (km3/year)
Canada    2006    5.395              2006    4.477              NA      NA
US        1995    79.573             1995    56.642             2002    2.345

The data clearly show that water recycling has a long way to go, and to argue now that water recycling should be the clear preference in terms of fresh water supply over desalination is premature. Development work on both technologies is not only needed but is urgent. The same holds true for advancing the cause of finding broad uses for energy alternatives to fossil fuels and the required policies that will get us there. Parallel processing is a great concept that should apply to advocacy and not only computer programming.
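To make the gap concrete, the table’s volumes can be reduced to fractions. A minimal sketch in Python (the volumes are those in the table above; note that the reporting years differ, so the ratios are only indicative):

```python
# Fractions of wastewater treated and reused, from the Sato et al. table.
# Volumes are in km^3/year; "reused" is None where the table reports NA.
data = {
    "Canada": {"generated": 5.395, "treated": 4.477, "reused": None},
    "US": {"generated": 79.573, "treated": 56.642, "reused": 2.345},
}

for country, v in data.items():
    treated_frac = v["treated"] / v["generated"]
    print(f"{country}: {treated_frac:.0%} of generated wastewater is treated")
    if v["reused"] is not None:
        reused_frac = v["reused"] / v["treated"]
        print(f"{country}: {reused_frac:.1%} of treated wastewater is reused")
```

By this count the US treats roughly 70% of its generated wastewater but reuses only about 4% of what it treats, which is consistent with the point that recycling has a long way to go.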

The next blog will try to expand upon this problem – the use of environmental impact statements as a weapon to block technologies – with some examples.

Posted in Climate Change | 2 Comments

Feedback on Desalination

Last week, after posting the newest blog here on CCF, I immediately received a response from Peter Gleick, one of the authors of the report about water desalination that I had discussed. Through my Twitter account, he led me to the Pacific Institute’s more recent publications.

Peter Gleick @PeterGleick 12 Nov

@MichaTomkiewicz Thanks, but you might also look at our updated analyses! http://www.pacinst.org/publication/energy-and-greenhouse-gas-emissions-of-seawater-desalination-in-california/ …. And this one: http://www.pacinst.org/publication/costs-and-financing-of-seawater-desalination-in-california/ …

Well – when Peter Gleick tweets, I listen. Here are some excerpts of both links, taken directly from the documents:

Key Issues in Seawater Desalination in California: Energy and Greenhouse Gas Emissions

Published: May 1, 2013
Authors: Heather Cooley, Matthew Heberger
Pages: 37

Interest in seawater desalination in California is high, with 17 plants proposed along the California coast and two in Mexico. But removing the salt from seawater is an energy-intensive process that consumes more energy per gallon than most other water supply and treatment options. A new report from the Pacific Institute series “Key Issues for Seawater Desalination in California” describes the energy requirements and associated greenhouse gas emissions for desalinated water and evaluates the impact of short- and long-term energy price variability on the cost of desalinated water.

Energy requirements are key factors that will impact the extent and success of desalination in California. “Key Issues for Seawater Desalination in California: Energy and Greenhouse Gas Emissions” shows energy requirements for seawater desalination average about 15,000 kWh per million gallons of water produced. By comparison, the least energy-intensive options of local sources of groundwater and surface water require 0 – 3,400 kWh per million gallons; wastewater reuse, depending on treatment levels, may require from 1,000 – 8,300 kWh per million gallons; and energy requirements for importing water through the State Water Project to Southern California range from 7,900 – 14,000 kWh per million gallons.

“Beyond the electricity required for the desalination facility itself, producing any new source of water, including through desalination, increases the amount of energy required to deliver and use the water produced as well as collect, treat, and dispose of the wastewater generated,” said Heather Cooley, co-director of the Pacific Institute Water Program and report author. “Conservation and efficiency, by contrast, can help meet the anticipated needs associated with growth while maintaining or even reducing total energy use and greenhouse gas emissions.”

Desalination is a reliable source of water, which can be especially valuable during a drought. However, building a desalination plant may reduce a water utility’s exposure to water reliability risks at the added expense of an increase in exposure to energy price risk. Energy is the largest single variable cost for a desalination plant, varying from one-third to more than one-half the cost of produced water. Because of its high energy use, desalination creates or increases the water supplier’s exposure to energy price variability.

In California, and in other regions dependent on hydropower, electricity prices tend to rise during droughts, when runoff, and thus power production, is constrained and electricity demands are high. Additionally, electricity prices in California are projected to rise by nearly 27% between 2008 and 2020 (in inflation-adjusted dollars) to maintain and replace aging transmission and distribution infrastructure, install advanced metering infrastructure, comply with once-through cooling regulations, meet new demand growth, and increase renewable energy production. While rising electricity prices will affect the price of all water sources, they will have a greater impact on those that are the most energy intensive, like desalination.

The high energy requirements of seawater desalination also raise concerns about greenhouse gas emissions. In 2006, California lawmakers passed the Global Warming Solutions Act, or Assembly Bill 32, which requires the state to reduce greenhouse gas emissions to 1990 levels by 2020. Thus, the state has committed itself to a program of steadily reducing its greenhouse gas emissions in both the short- and long-term, which includes cutting current emissions and preventing future emissions associated with growth. Desalination – through increased energy use – can cause an increase in greenhouse gas emissions, further contributing to the root cause of climate change and running counter to the state’s greenhouse gas reduction goals.

There are several ways to reduce the greenhouse gas emissions associated with desalination plants, including (1) reducing the total energy requirements of the plant; (2) powering the desalination plant with renewable energy; and (3) purchasing carbon offsets.

“Even renewables have a social, economic, and environmental cost, albeit much less than conventional fossil fuels. Furthermore, these renewables could be used to reduce existing emissions, rather than offset new emissions and maintain current greenhouse gas levels. Offsets also raise concerns; caution is required when purchasing offsets, particularly on the voluntary market, to ensure that they are effective, meaningful, and do no harm,” said Cooley. “Energy use is not the only factor that should be used to guide decision making. However, given the increased understanding of the risks of climate change for our water resources, the importance of evaluating and mitigating energy use and greenhouse gas emissions are likely to grow.”

The “Key Issues for Seawater Desalination” series is an update to the 2006 Pacific Institute report, “Desalination with a Grain of Salt,” which has been downloaded nearly 700,000 times and has proven to be an important tool used by policy makers, regulatory agencies, local communities, and environmental groups to raise and address problems with specific proposals. Researchers conducted some 25 one-on-one interviews with industry experts, environmental and community groups, and staff of water agencies and regulatory agencies to identify some of the key outstanding issues for seawater desalination projects in California. The resulting reports address proposed desalination plants in California, costs and financing, and energy and greenhouse gas emissions, with a forthcoming report on marine life and coastal ecosystem impacts.

Key Issues in Seawater Desalination in California: Costs and Financing

Published: November 27, 2012
Authors: Heather Cooley, Newsha Ajami
Pages: 48

Economics – including both the cost of the water produced and the complex financial arrangements needed to develop a project – are key factors that will determine the ultimate success and extent of desalination in California. New research from the Pacific Institute, “Key Issues for Seawater Desalination in California: Cost and Financing,” assesses desalination costs, financing, and risks associated with desalination projects. The Pacific Institute analysis finds that the cost to produce water from a desalination plant is high but subject to significant variability, with recent estimates for plants proposed in the state ranging from $1,900 to more than $3,000 per acre-foot.

“Seawater desalination remains among the most expensive water-supply options available, although the public and decision-makers must exercise caution when comparing costs among different projects,” said Heather Cooley, co-director of the Pacific Institute Water Program and lead author of the report. “In some cases, costs are reported in ways that are not directly comparable. For example, some report the cost of the desalination plant alone, while others include the cost for additional infrastructure needed to integrate the desalination plant into the rest of the water system. Some estimates include costs to finance the project, while others don’t. Even when there is an apples-to-apples comparison, there are a number of site- and project-specific factors that make cost comparisons difficult, such as energy, land, and labor costs and the availability of visible and hidden subsidies.”

“Key Issues for Seawater Desalination in California: Cost and Financing” is part of a series of research reports in progress that identify key outstanding issues that must be addressed before additional proposals for new seawater desalination in California are approved. Other issues that will be addressed include the environmental impacts of seawater desalination, the cost and financing of proposed projects, and energy requirements and their greenhouse gas implications.

Our new analyses, which will be released by early 2013, seek to provide communities and decision makers with information needed to make decisions about building desalination plants and to create a more rational and sustainable policy around seawater desalination along the California coast, and elsewhere.

The debate in California is ongoing, and has recently been turning in what I consider to be a destructive direction; as a result, one major project has withdrawn its proposal.

Let’s take a look at Peter’s response. The two updates he mentions focus on energy requirements and comparative costs. By the authors’ own admission, both in the original report and in the more recent updates, costs are very soft numbers, and cost comparison is dangerous ground to walk on when analyzing future sustainability. In the November 12 blog, I presented a graph taken from the 2006 report comparing the costs of gravity surface water, waste water recycling and ocean desalination in Los Angeles. I also mentioned there that California has a special global responsibility when treating global issues to set examples that can be expanded to much broader use. The present difference in cost in LA between recycling waste water and water desalination is about 50%. In order to recycle water, however, you need waste water. I will expand upon the present status of global waste water in the next blog. Meanwhile, suffice it to say: more than two billion people in the world lack basic sanitation facilities. They don’t have “waste water.” Most of the water around them is “waste water.” In developed countries, where sanitation facilities are almost universal, one can recycle the water, but people have other issues with recycled water (the “yuck factor” in the September 10 blog). Almost all countries have stringent requirements about dumping waste water back into the ocean. The water needs to be treated extensively – whether it is recycled or not – so the pricing baselines for the alternative technologies are different than noted above.

For the reasons that I outlined in previous blogs, water desalination is an energy-intensive business. The sources of energy used are important. Therefore, naturally, attempts to alleviate global fresh water stress should be closely connected to global attempts to reduce the reliance on fossil fuels. This can be done either directly or indirectly – through the exchange of sustainable energy for desalinated water. Such exchanges are being practiced, including one instance in California. Advocacy for the use of renewable energy, however, shouldn’t be used as an argument against water desalination. As a long term response to water stress, there is probably no alternative to ocean desalination. Rich countries are in a position to experiment with the technology on a large scale; they have a responsibility to do so in a manner that explores sustainability and efficiency.
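As a check on the scale of the energy numbers quoted earlier from the Pacific Institute update, the kWh-per-million-gallons figures can be converted to the more common kWh per cubic meter. A rough sketch (the input values are the report’s; the conversion factor is the standard one):

```python
# Convert energy intensities from kWh per million US gallons to kWh per cubic meter.
GALLONS_PER_M3 = 264.172  # US gallons in one cubic meter

def kwh_per_m3(kwh_per_million_gallons):
    # A million gallons is 1e6 / 264.172 cubic meters.
    return kwh_per_million_gallons * GALLONS_PER_M3 / 1e6

# Figures quoted from the Pacific Institute report (kWh per million gallons).
sources = {
    "seawater desalination (average)": 15000,
    "local groundwater/surface water (high end)": 3400,
    "wastewater reuse (high end)": 8300,
    "State Water Project import (high end)": 14000,
}

for name, intensity in sources.items():
    print(f"{name}: {kwh_per_m3(intensity):.1f} kWh/m3")
```

The desalination average works out to about 4 kWh per cubic meter, in the range usually cited for reverse-osmosis seawater plants.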

Posted in Climate Change | Comments Off on Feedback on Desalination

Obstacles to Adaptation of Water Desalination

This blog will focus entirely on a report that was issued by the Pacific Institute in 2006 under the title “Desalination, with a Grain of Salt.” The report was authored by Heather Cooley, Peter H. Gleick and Gary Wolff. The report was funded by the California Coastal and Marine Initiative (CCMI) and private foundations (listed in the report). The three authors are known environmentalists who have the complementary set of skills necessary to evaluate the technology. Peter Gleick is a recipient of the prestigious MacArthur Foundation Fellowship, among other honors, for his work on water systems.

In late 2012, construction started on the largest water desalination plant in California, near Carlsbad. The facility, designed to produce more than 50 million gallons/day of desalinated water, is projected to start operation in 2016. The planning for the facility lasted 12 years, with an additional 6 years spent in the permitting process. The Pacific Institute report was issued well within this 18-year time period. In spite of the authors’ environmental credentials, the bias in the report is clear from its title. According to an idiomatic dictionary, to “take something with a grain of salt” is to consider something to be not completely true or right, as in: I’ve read the article, which I take with a grain of salt. Related vocabulary: “hard to swallow.”

Selected recommendations from the report include:

1.      The cost of desalination has fallen in recent years, but it remains an expensive water-supply option. Desalination facilities are being proposed in locations where considerable cost-effective conservation and efficiency improvements are still possible.

The authors provide a cost comparison with the other alternatives, shown below:

Figure 1 – Relative cost of potable water from a Typical Ocean Desalination, Waste Water Recycling and Gravity Surface Water Source in the Los Angeles Metropolitan Area.

Cost Comparisons

Experience to date suggests that desalinated water cannot be delivered to users in California for anything less than the cost of production, which our research indicates is unlikely to fall below the range of $3.00 to $3.50 per thousand gallons ($/kgal) (roughly $0.79 to $0.92 per cubic meter ($/m3)) for even large, efficient plants. Because the cost of production can be as high as $8.35/kgal ($2.21/m3) (MPWMD 2005b), the cost of delivered water could be in the range of $9 to $10/kgal ($2.37 to $2.64/m3). This wide range is caused by the factors discussed below and the large variation in the cost of water distribution among service areas. Even the low end of this range remains above the price of water typically paid by urban water users, and far above the price paid by farmers. For example, growers in the western United States may pay as little as $0.20 to $0.40/kgal ($0.05 to $0.10/m3) for water. Even urban users rarely pay more than $1.00 to $3.00/kgal ($0.26 to $0.79/m3).

2.      More energy is required to produce water from desalination than from any other water-supply or demand-management option in California. The future cost of desalinated water will be more sensitive to changes in energy prices than will other sources of water.

3.      Public subsidies for desalination plants are inappropriate unless explicit public benefits are guaranteed.

4.     More research is needed to fill gaps in our understanding, but the technological state of desalination is sufficiently mature and commercial to require the private sector to bear most additional research costs.

5.      Reliability and Water-Quality Considerations:
Desalination plants offer both system-reliability and water-quality advantages, but other options may provide these advantages at lower cost.

6.       Desalination can produce high-quality water but may also introduce biological or chemical contaminants into our water supply.

7.      Desalination can produce water that is corrosive and damaging to water distribution systems.

Environmental Considerations:

1.      Desalination produces highly concentrated salt brines that may also contain other chemical pollutants. Safe disposal of this effluent is a challenge.

2.      Disposal of brine in underground aquifers should be prohibited unless comprehensive and competent groundwater surveys are done and there is no reasonable risk of brine plumes appearing in freshwater wells.

3.      Impingement and entrainment of marine organisms are among the most significant environmental threats associated with seawater desalination.

4.      Subsurface and beach intake wells may mitigate some of the environmental impacts of open ocean intakes. The advantages and disadvantages of subsurface and beach intake wells are site-specific.

5.      Desalination may reduce the need to take additional water from the environment and, in some cases, offers the opportunity to return water to the environment.

Climate Change

1.      Desalination offers both advantages and disadvantages in the face of climatic extremes and human-induced climate changes. Desalination facilities may help reduce the dependence of local water agencies on climate-sensitive sources of supply.

2.      Extensive development of desalination can lead to greater dependence on fossil fuels, an increase in greenhouse gas emissions, and a worsening of climate change.

3.      Coastal desalination facilities will be vulnerable to the effects of climate change, including rising sea levels, storm surges, and extreme weather events.

Siting and Operation of Desalination Plants

1.      Ocean desalination facilities, and the water they produce, will affect coastal development and land use.

2.      There are unresolved controversies over private ownership and operation of desalination facilities.

3.      Co-location of desalination facilities at existing power plants offers both economic and environmental advantages and disadvantages.

4.      Siting, building, and operation of desalination facilities are likely to be delayed or halted if local conditions and sentiments and the public interest are not adequately acknowledged and addressed.

5.      The regulatory and oversight process for desalination is sometimes unclear and contradictory.

It is obvious to me that there is an agenda that consistently threads through these recommendations. According to the report, water desalination, at least in California, should be a last resort. As the figure shows, the alternatives are recycled water and gravity surface water. The reasoning can be considered practical: there is plenty of room for improvement, both in terms of more rational use of fresh water (gravity surface water) and wider usage of recycled water. What the recommendations do not take into account is the broad influence that California has on global attempts at environmental adaptation. I will go into details in the next few blogs, which will focus on water stress. More often than not, California sets trends that the rest of the US later follows (with varying lag times), and any time the US does something, the world is watching.

Posted in Climate Change | Tagged , , , , , , , , , , | Comments Off on Obstacles to Adaptation of Water Desalination

Michael Kirsch Guest Blog

My blogs from the last few months have focused on the global water stress we are suffering, and its relation to climate change. The last two blogs in this series (October 8 and 29) have centered on the efforts to address this crisis by adding desalinated water to supplement the natural fresh water supplies. Last Friday, I received an email from Mr. Michael Kirsch about earlier attempts to implement such technology on a much grander scale (at least in the US), contemplated by the Kennedy administration in the early 1960s. I am delighted that Mr. Kirsch has agreed to let me post his email to me as a guest blog here.

As always, I welcome feedback to my blog – a large point of this endeavor is to communicate with the communities of people who are interested in climate change – some scientists, some not – and perhaps reach a few new ears. I am pleased to be part of this ongoing discussion, and am interested to know your opinions. (I really do pay attention to my email!)

The email is copied in full below:

Dear Professor Tomkiewicz,

I found your recent article on desalination very interesting and necessary for more people to consider.  I’d like to see more of where the discussion on this topic is going.

I am aware of successes in Israel. I also know that we are now finally building a 50 mgd (million gallons/day) capacity plant in the U.S., in Carlsbad, CA, to be completed in 2016, which, sadly, will be the largest in the western hemisphere and the first on the west coast. This 50 mgd plant will have only one third the capacity of the 150 mgd plant that Kennedy’s plan called for by 1970 – 46 years earlier!

I wanted to know if you were aware of the John F. Kennedy administration’s study, “An Assessment of Large Nuclear Powered Sea Water Distillation Plants.” James Ramey testified in Congress about the results of the study on August 18, 1964. The results show that if Kennedy had not been assassinated, we would have been building large nuclear desalination plants by 1970, and just four of these plants, each with a capacity of 3 GW (3 billion watts) of thermal energy, would have provided enough water for a city the size of Los Angeles. This report was the most ambitious to date and, I think, still marks a more logical standpoint from which to consider desalination than the “cost-benefit” approach taken ever since.

Below is my summary of the results of that study, included in a recent article of mine (part of a larger report put out by my magazine). In the full report, I have descriptions and a map of 43 proposed locations for nuclear desalination in the United States, addressing municipal supply, agricultural wastewater treatment, nitrate treatment, saltwater intrusion problems, and saline river water in the west.

“In January 1963, Kennedy formed a task group with the Executive Office of Science and Technology to investigate the use of large nuclear reactors for desalination. Working closely with the Atomic Energy Commission (AEC) and the Department of the Interior, the task group issued its report in March 1964, five months after his assassination. Their report estimated that if an appropriate research and development program were actively pursued, large-scale dual-purpose installations could produce 1,000 to 1,900 megawatts (MW) of electricity and 500 to 800 million gallons of water per day (0.6-0.9 million acre feet per year (MAFY)). The report also suggested a program to develop and demonstrate a plant operating with an 8,300-MWt reactor, producing approximately 1,400 megawatts of electricity and 600 million gallons of water per day (0.7 MAFY).

“This 8,300 MWt reactor was the 1975 goal. The 1970 goal was set for plants of intermediate size.

“The task group proposed producing a half dozen intermediate sized units, two in southern California, one in the greater New York area, several for the Gulf Coast, and one in Florida.

“The Metropolitan Water District (MWD) of southern California was the first site chosen for such nuclear desalination, and it entered into a contract with the Department of the Interior and the AEC in 1964 for a detailed economic and engineering study of dual-purpose nuclear desalination plants with 50 to 150 million gallons per day (mgd) production capacity, to be in operation by 1970. (For scale, a desalination plant producing 150 mgd would provide two times the current water use of San Francisco. Four of these plants, or one 8,300 MWt plant, would provide the current water use of Los Angeles.)

“James Ramey of the AEC remarked, ‘Such a project could convert more water from the sea than all the other sea water conversion units currently operating in the world.’ The 150 mgd plant was to produce enough water for a city of about 750,000, with a power output of 1.8 GW – exceeding that of the Hoover Dam, and enough electricity for a city of about 2 million. Two large conventional light-water nuclear reactors, of about 3,000 thermal megawatts each, were to be the energy source, and the water plant was to consist of three large multistage flash distillation sections, each producing 50 million gallons of water per day. The plant would have been 30 times larger than the largest existing water-desalination plant at that time.

“Other plans were underway for Texas, Arizona, New York, and Florida. For example, in July 1964, Glenn Seaborg, chairman of the AEC, proposed a dual-purpose plant for Key West, FL, of intermediate size, up to 1.5 GWt, producing 150 mgd.

“In a 1966 AEC report, an even larger reactor was illustrated in a drawing, showing a nuclear-powered seawater-conversion plant that would produce 1 billion gallons of fresh water per day and 4.5 GW of power. The report continued, suggesting that “by the 1980s, plants embodying several nuclear reactors in a single installation, with a total capacity as high as 25,000 thermal megawatts, could be in operation. A plant like this would produce 5,950 electrical megawatts at 1.6 mills per kilowatt-hour and 1,300 mgd at 19¢ per thousand gallons.”

“Today, in the short term, the “interim sized” 150 mgd desalination plants of the Kennedy era should immediately be built. Coastal desalination for industrial and municipal use will provide for cities, offset demands on limited water for agriculture, and solve the problem of saltwater intrusion. Agricultural wastewater desalination combined with groundwater desalination will increase crop yields, and reclaim land abandoned due to high salinity levels and lack of water. Saline river water, a major problem in nearly every western river, can be treated.”

Michael Kirsch is an economics and science researcher and writer; he works for the LaRouche Policy Institute and for 21st Century Science and Technology magazine. He is the author of several in-depth reports, including three on the North American Water and Power Alliance XXI, a continental water and power management system that Kirsch and his collaborators updated from its original 1964 design for the twenty-first century.

He is an expert on Alexander Hamilton’s unique American Credit System and its various applications under the administrations of John Quincy Adams, Abraham Lincoln, Franklin Delano Roosevelt, and John F. Kennedy.

Posted in Climate Change | Tagged , , , , , , , , , , , , , , , , , , | Comments Off on Michael Kirsch Guest Blog

Desalination – Where Are We?

After last week’s detour, I would like to refocus on desalination as a possible remedy to the global fresh water stress the world is currently suffering. Here, I will discuss the current prevalence of this technology’s use, while my next blog will describe some of the obstacles to its wider adoption worldwide.

The figure below describes the growth in global cumulative installed desalination capacity in million cubic meters/day (blue line), while the yellow line represents the growth in cumulative online capacity in the same units. Following the definition of water scarcity as 1000 m3 of fresh water/person per year (September 24 blog), the additional cumulative capacity from water desalination rose from supplying fresh water to 6 million people at the threshold level of scarcity to around 20 million people at that level. This rise represents an acceleration in the annual growth rate from around 6% to around 10% in more recent years. When we remember (September 3 blog) that, by UN estimates, 1.8 billion people will live under water stress conditions in 2025, these numbers are not very impressive, but they are a step in the right direction.

desalination growth
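Converting installed capacity into “people supplied at the scarcity threshold” is straightforward arithmetic; here is a sketch (the capacity values below are illustrative round numbers consistent with the figure, not exact data):

```python
# Convert desalination capacity (million m3/day) into the number of people
# that capacity could supply at the water-scarcity threshold of
# 1,000 m3 per person per year. Capacity values here are illustrative.
SCARCITY_M3_PER_PERSON_YEAR = 1000

def people_supplied(capacity_million_m3_per_day: float) -> float:
    """People supplied at the scarcity threshold, in millions."""
    yearly_million_m3 = capacity_million_m3_per_day * 365
    return yearly_million_m3 / SCARCITY_M3_PER_PERSON_YEAR

# A capacity of ~16 million m3/day supports ~6 million people at threshold;
# ~55 million m3/day supports ~20 million.
print(people_supplied(16.4))  # ~6
print(people_supplied(54.8))  # ~20
```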

The main barrier that stands in the way of desalination becoming more prevalent is that supplementing the water cycle requires a lot of energy, as I explained in a previous blog (October 8). The most cost-effective energy sources today are fossil fuels. However, cost-effective does not mean cheap, and fossil fuels are also the foremost contributors to the man-made warming of the planet. To succeed, desalinated water needs to be able to compete with naturally available fresh water. Progress is being made on this front as well. The figure below (Menachem Elimelech and William A. Phillip, “The Future of Seawater Desalination: Energy, Technology and the Environment,” Science 333, 712 (2011)) shows the evolution of the power consumption required for desalination since 1970. The horizontal dashed line corresponds to the theoretical minimum energy required for desalination of 35 g/liter sea water at 50% recovery. Technology is making great progress in approaching this line.

Desalination Power Consumption
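As a rough check on that dashed line, the thermodynamic minimum can be estimated from the van’t Hoff osmotic pressure, treating seawater as a simple 35 g/L NaCl solution (an approximation of mine; real seawater chemistry lowers the result somewhat):

```python
import math

# Estimate the theoretical minimum energy to desalinate seawater
# (modeled crudely as 35 g/L NaCl) at 50% recovery, using the
# van't Hoff osmotic pressure of an ideal, fully dissociated solution.
R = 8.314                # gas constant, J/(mol K)
T = 298.0                # temperature, K
MOLAR_MASS_NACL = 58.44  # g/mol

conc = 35.0 / MOLAR_MASS_NACL * 1000   # mol/m3 (~599)
pi = 2 * conc * R * T                  # osmotic pressure, Pa; i=2 for NaCl

recovery = 0.5
# Minimum work per m3 of permeate for an ideal solution:
#   w = (pi / r) * ln(1 / (1 - r))
w_joules = pi / recovery * math.log(1 / (1 - recovery))
w_kwh = w_joules / 3.6e6
print(f"~{w_kwh:.2f} kWh/m3")  # prints ~1.14 kWh/m3
```

The ideal-solution estimate lands close to the ~1 kWh/m3 minimum indicated in the figure; actual plants still use several times this amount.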

One of the most beneficial results of this reduced energy requirement is the sharp reduction in price, shown in the figure below for facilities in various countries, with some further details for facilities in the US. However, prices are soft numbers that one should approach with some care, because they vary with the size of the plants and the price of the available energy. Even so, the trend toward lower prices is unmistakable.

Desalination Prices

The map below shows the “hot spots” for the use of this technology. Unfortunately, the height of the bars is not normalized to any parameter that scales with the size of the country (population, GDP or water consumption), so the visual might be a bit misleading. For obvious reasons (abundance of oil money, great shortage of water), Saudi Arabia and the Gulf States are leading the effort. The effort is visible on almost every continent but, as is so often the case with activities in which available money plays an important role, it is dominated not necessarily by need, but rather, by the ability to allocate the necessary resources.

water desalination countries

In future blogs, I intend to elaborate upon the other impediments that stand in the way of desalination becoming a more prominently used solution to water stress; some of these obstacles come from unexpected sources.

Posted in Climate Change | Tagged , , , , , , , , , , , , , , | Comments Off on Desalination – Where Are We?

Out Of Date?

In the last few blogs (starting with the September 24 blog) I alternated between two topics – the continuing discussion of fresh water stress and my reactions to the beginning of the publication of the 5th IPCC report (AR5). Here, I will pause from those themes, and instead try to expand on the unique situation in which we currently find ourselves: trying to react to the continuous progression of events not as observers but as participants. One of the consequences of being a participant in a constantly changing environment is that the observations on which we base our discussion are out of date the moment that we have finished writing.

To demonstrate the issue of being instantaneously out of date, I will use my June 25, 2012 blog regarding the carbon cycle as an example. The central feature of that blog was a description of the carbon cycle, given below, as obtained from the IPCC’s 3rd Assessment Report (AR3). I compared the figure to an identical one that lacked the numbers, and made the claim that the figure with the numbers constituted scientific reporting, while the one without did not. Skeptical Science liked this blog and published it on their website as a guest blog. One of the comments that I received was that the data was not up to date. The commenter was right. In the 11 years that have passed since AR3 was released, some rather different numbers have emerged.

graph 1 june 25 blog

Now that the first part of the new IPCC report has been published, I can compare the figure to its replacement in the new report, as shown below (the figure is referred to in the report as a “simplified schematic,” Figure 6.1, p. 122):

IPCC Simplified Schematics 1

 

The new figure is much more crowded and complicated than the older one. It is hard to read at normal magnification but, fortunately, it is accessible at higher magnification. The units are the same in the two figures – billion metric tons of carbon (Gt-C). In both figures, the arrows represent carbon exchange fluxes in billion tons/year. The red lines (broken in the 2001 figure) represent anthropogenic (man-made) contributions and fluxes. The red numbers in the 2013 figure represent cumulative changes in reservoir masses since 1750 (the year which serves as the standard reference for pre-Industrial Revolution data).

Let us compare the atmospheric reservoirs in the two figures: the 2001 figure gives the amount of carbon as 730 Gt-C, while the 2013 figure cites it as 589 + 240 ± 10; the second number in this set is the cumulative human contribution. The total in the newer figure is 829 ± 10 Gt-C. If we use the average rate of 4 Gt-C/year to subtract the difference that accumulated during those 10 years, we can compare the resulting 789 Gt-C in the new figure with the 730 Gt-C in the old – a difference of about 7%.
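The comparison above is simple bookkeeping; spelled out (using the numbers quoted from the two IPCC figures):

```python
# Reproduce the atmospheric-reservoir comparison between the 2001 (AR3)
# and 2013 (AR5) carbon-cycle figures, in Gt-C.
ar3_atmosphere = 730        # Gt-C, 2001 figure
ar5_atmosphere = 589 + 240  # Gt-C, 2013 figure: natural + cumulative human
annual_growth = 4           # Gt-C/year, average accumulation rate
years_between = 10

# Roll the newer figure back by a decade of accumulation, then compare.
adjusted_ar5 = ar5_atmosphere - annual_growth * years_between
difference_pct = (adjusted_ar5 - ar3_atmosphere) / adjusted_ar5 * 100
print(adjusted_ar5, f"{difference_pct:.0f}%")  # prints: 789 7%
```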

As an exercise, you are all more than welcome to compare the other entries in both figures.

The best demonstration of the changes that have taken place lies in the growth of the global population. One of the best sites for visualizing this can be found here. It is astonishing to watch how quickly the numbers change. Every value of the data that we are studying depends on the population. The approximate value at the time that I write this blog is 7,186,000,000, which is startling when compared to the population of approximately 2,000,000,000 people when I was born.

The fast-changing reality in which we live is another demonstration of the light-cone description from last week’s blog. Our present is changing so fast that we don’t have time to write about it – only about the past and the trends that we anticipate for the future. That, of course, makes it that much more imperative that we work to understand and shape that future as best we can.

Posted in Climate Change | Tagged , , , , , , , , , , , , , , | Comments Off on Out Of Date?

The IPCC, the Burning House, and the 100% Tipping Point

Following the publication of the IPCC Working Group I’s 5th assessment report (AR5), I posted my own response (October 1). I addressed the issue of raising the confidence level of significant human contributions to climate change to 95% – or “extremely likely.” Deniers have made the claim that 95% confidence is not enough to necessitate that we all engage in a massive effort to minimize the impact by changing behavior. They require 100% certainty before they are willing to take preventative actions. I took a rather dim view of that position, which I have summarized as follows:

The other side of this issue is that if a fire inspector were to find a 95% chance that our house would catch fire, he would tell us to fix the vulnerability immediately, especially given that no insurance company in the world would be willing to insure us. The only time that we can be 100% certain that the house is going to catch fire is after the house is already aflame. At that point, the only thing left to do is to try to get out as fast as possible. Presently, we cannot get off this planet – we have no place else to go.

Here I would like to expand on this issue.

Malcolm Gladwell defines a tipping point as “the moment of critical mass, the threshold, the boiling point.” The term has become a central feature in the climate change debate (often identified as a “large-scale discontinuity”). I have used it repeatedly on this blog (just use the search facility for access). I consider the transition from 95% to 100% certainty to be the mother of all tipping points – a threshold jump from future to past: the jump from having some ability to affect future events to the inevitable eventuality in which the only available option is to try to minimize the effects of events that have already occurred.

When I Google “Physics of the Future,” I mostly turn up entries about the recent book of that title by Michio Kaku (a fellow faculty member at the City University of New York). His book describes the wonders that we might or might not discover in the distant future as we explore the boundaries of physics. But the term has another meaning, shown in a famous diagram given below.

The diagram describes the path that a flash of light, emanating from a single event (localized to a single point in space and a single moment in time) and traveling in all directions, would take through spacetime. It derives from Einstein’s Special Theory of Relativity. The double cone is anchored in the premise at the heart of the theory: although the speed of light is the fastest speed that can be attained, it is still a finite speed. This can be generalized to our ability to gather information by noting that the only way we can gather information about an object is by shining light on it and analyzing the light that comes back. Since light travels at a finite speed, it takes time for the light to reach the object and get reflected back to us. Under this interpretation, the two cones in the figure represent the maximum information that we can attain. The walls of the cones are drawn on a scale of time multiplied by the speed of light. The upper cone represents the future, while the lower cone represents the past. In the middle, we have the present – the here and now. We have no available information about the present because it takes time to gather that information.

Light Cone Theory of Relativity

(From “A Brief History of Time” by Stephen Hawking; Ch. 2, 1988, Bantam Dell Publishing.)

The transition from a 95% probability of our house catching fire to the 100% certainty that our house is actually on fire fits Gladwell’s definition of a tipping point probably better than any other phenomenon – we move from predicting an event to either observing or (if the event itself has a finite duration) having just observed said event.

As I have mentioned before (June 18, 2012 blog)

I subscribe to the Popperian definition of the scientific method that is based on refutability and denies the existence of “general truths.”  A theory is “true” until it is refuted by observations; if it cannot be refuted – it is not science.

Refutability is a statement about our ability to make predictions of the future and then observe whether those predictions are coming true or not. 100% certainty doesn’t exist in science.  As a rule, if something is 100% certain, it is not science.

I will finish this blog with a quote that I used in a previous blog (September 3, 2012) from an address by Luiz Inácio Lula da Silva, the former (2003-2010) president of Brazil, in a reported comment on the European fiscal crisis: “Let’s be frank: if Germany had resolved the Greek problem years ago, it wouldn’t have worsened like this. I’ve seen people die of gangrene because they didn’t care for a problematic toenail.”

The problematic toenail was a predictor of what might happen in the future. Once serious gangrene takes over, there is very little that we can do. Both gangrene and a burning home have indicators of an impending disaster (or series thereof) – why should we wait for an eventuality to become an actuality?

Posted in Climate Change | Tagged , , , , , , , , , , , , , , , , , , | 1 Comment

Desalination: The Science

I discussed the effects of climate change on the water cycle in a previous blog (September 3). I focused on the fact that while the water cycle is not a perfect cycle, our planet, whose surface is 70% water, cannot experience a shortage of water. Instead, I posited (August 27, September 3, 10 and 24 blogs), the planet experiences severe shortages of fresh water; a problem that is bound to become worse. Fresh water is essential to the survival of almost all of the land species on the planet – including humans. This shortage needs to be addressed. Previous blogs have dealt with addressing the shortage through recycling (September 10) and better management of use (September 24). This blog and a subsequent one will try to address the issue through increasing supply.

My previous blog (September 3) included the USGS (United States Geological Survey) version of the water cycle. It looks very scientific and somewhat confusing, so this week I’m giving you a much simpler version that was drawn to teach the cycle to young students. It still includes the most important elements, without providing so much information that the viewer is overwhelmed:

Water Cycle Simple

It emphasizes the way in which sun-induced evaporation from the ocean separates fresh water from saltwater. Applying this principle on a larger scale, it becomes apparent that if we want to increase the supply of fresh water, we will need to increase this separation in a controlled way – that is, engage in water desalination. To accomplish that, we need to use energy, which means that addressing water stress is also inherently tied to the need to transition the energy sources that we use.

To start the discussion, we need to have a short look at the science of desalination (Sorry! I know you think Science is boring…).

In another previous blog (June 4, 2012), I mentioned thermodynamics without going into details. Thermodynamics is the area of science that describes the relationships between heat, energy and work. Its two basic laws are arguably the most essential tenets upon which the rest of science is built. If someone were able to refute one of these laws in a credible way, much of science would have to be reformulated. (Obviously, scientists take the definition of “credible way” very seriously.) The first of these laws is equivalent to the law of conservation of energy: it tells us that energy cannot be created or destroyed, only converted from one form to another (the chemical energy of burning fuel into the mechanical energy of a moving car, for example). The second law tells us that a system left on its own will tend to maximize its disorder.

When I discuss this topic in a class with students who have no prerequisites in science, I always make the analogy of a small child lucky enough to have a room of his own. Leave him (or her – gender makes no difference here) alone and the room will quickly get messy. His clothes will be everywhere, his toys will be everywhere, and so will everything else that he gets his hands on. The state of the room can obviously be fixed – but not without some effort (i.e. work and the expenditure of some energy), either by the parents or, after appropriate bribing, by a kid who is old enough. The physical reason for the “natural” state of the messy room is that there are simply many more ways for each of his items to find their way into a messy configuration than into an orderly one. This law is also responsible for, among other things, heat’s tendency to flow spontaneously, upon contact, from a hot object to a cold object instead of the other way around, and for energy costing approximately three times more to deliver as electricity obtained by burning fuel than by directly burning the fuel.
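The “approximately three times more” figure at the end follows directly from typical power-plant conversion efficiency; a one-line sketch (the ~33% efficiency is a typical textbook value, my assumption rather than a number from this blog):

```python
# The "three times more" figure follows from typical fossil power plant
# efficiency: only about a third of the fuel's heat becomes electricity,
# so delivering a unit of energy as electricity burns ~3x the fuel.
plant_efficiency = 0.33  # assumed typical thermal-to-electric efficiency
fuel_heat_per_unit_electricity = 1 / plant_efficiency
print(round(fuel_heat_per_unit_electricity, 1))  # prints: 3.0
```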

Back to desalination: if I take a cup of concentrated salt water and dump it into a container of fresh water, the salt will spread very quickly through the entire sample, making a larger amount of diluted salt water (you can safely try it). The reverse process will never take place on its own, because there are many more ways to distribute the salt particles in a homogeneous solution than in any segregated arrangement. If we want to mimic the water cycle by separating the fresh water from the salt water, we have to put energy into the system. Nature does this by using solar energy to evaporate fresh water vapor from the salty ocean. We can mimic the natural process by simply boiling water and collecting the resulting steam. This process is used commercially in some places, but it is gradually being replaced by the much more efficient process of reverse osmosis.
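To see why reverse osmosis has displaced simple boiling, compare the heat needed to vaporize a cubic meter of water with the electrical energy a modern RO plant uses (a back-of-the-envelope sketch; real thermal plants recover most of this heat, so they do far better than the naive number below):

```python
# Why reverse osmosis displaced simple distillation: compare the heat needed
# to boil away a cubic meter of water with typical RO electrical energy use.
LATENT_HEAT = 2.26e6   # J/kg, heat of vaporization of water at 100 C
SPECIFIC_HEAT = 4186   # J/(kg K), specific heat of water
DENSITY = 1000         # kg/m3

# Heat 1 m3 of water from 25 C to 100 C, then vaporize all of it
# (no heat recovery -- real thermal plants recycle most of this heat).
heat_joules = DENSITY * (SPECIFIC_HEAT * 75 + LATENT_HEAT)
heat_kwh = heat_joules / 3.6e6
print(f"naive distillation: ~{heat_kwh:.0f} kWh/m3")  # prints ~715 kWh/m3
print("modern seawater RO: ~2-4 kWh/m3 of electricity")
```

Even with substantial heat recovery, thermal distillation cannot approach RO’s few kilowatt-hours of electricity per cubic meter.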

A schematic representation of the Reverse Osmosis process is shown below.

The salt water is “simply” being pushed against a semipermeable membrane that allows the pure water to pass through while retaining the salt particles.

Reverse Osmosis

The recent improvements in the power consumption needed for this separation are shown below (from “The Future of Seawater Desalination: Energy, Technology and the Environment” by Menachem Elimelech and William A. Phillip; Science 333, 712 (2011)):

Power Consumption

Such an improvement in power consumption gives almost everybody hope for a practical way to address the global water stress. The next few blogs will explore the ways in which this technology has already penetrated our daily lives, how it will continue to do so in the future, and some of the issues that need to be resolved as we progress further.

Posted in Climate Change | Tagged , , , , , , , , , , , , , | 7 Comments

The IPCC, the NIPCC and the Meaning of 95% Certainty

The first part of the IPCC’s (Intergovernmental Panel on Climate Change) 5th report (AR5) came out on Friday. This part consists of the Summary for Policymakers of Working Group I (WGI) that focuses on the physical science basis. The full WGI report is available as of yesterday.

Meanwhile, however, the Heartland Institute, a conservative think tank and well-known denier group, didn’t wait for the full report, and has already published its own rebuttal to the WGI report. This rebuttal, published by the Heartland-founded NIPCC (Nongovernmental International Panel on Climate Change), contains 1023 pages. Since the full AR5 WGI report hadn’t yet been published at the time I wrote this, I was unable to read it, but I promise to read both documents and to discuss them with my students and readers as soon as I can.

In order to forestall any arguments that I am picking and choosing sections of the report, I am including all of the highlights that were collected in the WGI Summary for Policymakers:

Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased (see Figures SPM.1, SPM.2, SPM.3 and SPM.4). {2.2, 2.4, 3.2, 3.7, 4.2–4.7, 5.2, 5.3, 5.5–5.6, 6.2, 13.2}.

Each of the last three decades has been successively warmer at the Earth’s surface than any preceding decade since 1850 (see Figure SPM.1). In the Northern Hemisphere, 1983–2012 was likely the warmest 30-year period of the last 1400 years (medium confidence). {2.4, 5.3}.

Ocean warming dominates the increase in energy stored in the climate system, accounting for more than 90% of the energy accumulated between 1971 and 2010 (high confidence). It is virtually certain that the upper ocean (0−700 m) warmed from 1971 to 2010 (see Figure SPM.3), and it likely warmed between the 1870s and 1971. {3.2, Box 3.1}.

Over the last two decades, the Greenland and Antarctic ice sheets have been losing mass, glaciers have continued to shrink almost worldwide, and Arctic sea ice and Northern Hemisphere spring snow cover have continued to decrease in extent (high confidence) (see Figure SPM.3). {4.2–4.7}.

The rate of sea level rise since the mid-19th century has been larger than the mean rate during the previous two millennia (high confidence). Over the period 1901–2010, global mean sea level rose by 0.19 [0.17 to 0.21] m (see Figure SPM.3). {3.7, 5.6, 13.2}.

The atmospheric concentrations of carbon dioxide (CO2), methane, and nitrous oxide have increased to levels unprecedented in at least the last 800,000 years. CO2 concentrations have increased by 40% since pre-industrial times, primarily from fossil fuel emissions and secondarily from net land use change emissions. The ocean has absorbed about 30% of the emitted anthropogenic carbon dioxide, causing ocean acidification (see Figure SPM.4). {2.2, 3.8, 5.2, 6.2, 6.3}.

Total radiative forcing is positive, and has led to an uptake of energy by the climate system. The largest contribution to total radiative forcing is caused by the increase in the atmospheric concentration of CO2 since 1750 (see Figure SPM.5). {3.2, Box 3.1, 8.3, 8.5}.

Human influence on the climate system is clear. This is evident from the increasing greenhouse gas concentrations in the atmosphere, positive radiative forcing, observed warming, and understanding of the climate system. {2–14}.

Observational and model studies of temperature change, climate feedbacks and changes in the Earth’s energy budget together provide confidence in the magnitude of global warming in response to past and future forcing. {Box 12.2, Box 13.1}.

Human influence has been detected in warming of the atmosphere and the ocean, in changes in the global water cycle, in reductions in snow and ice, in global mean sea level rise, and in changes in some climate extremes (Figure SPM.6 and Table SPM.1). This evidence for human influence has grown since AR4. It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century. {10.3–10.6, 10.9}.

Continued emissions of greenhouse gases will cause further warming and changes in all components of the climate system. Limiting climate change will require substantial and sustained reductions of greenhouse gas emissions. {Chapters 6, 11, 12, 13, 14}.

Global surface temperature change for the end of the 21st century is likely to exceed 1.5°C relative to 1850 to 1900 for all RCP scenarios except RCP2.6. It is likely to exceed 2°C for RCP6.0 and RCP8.5, and more likely than not to exceed 2°C for RCP4.5. Warming will continue beyond 2100 under all RCP scenarios except RCP2.6. Warming will continue to exhibit interannual-to-decadal variability and will not be regionally uniform (see Figures SPM.7 and SPM.8). {11.3, 12.3, 12.4, 14.8}.

Changes in the global water cycle in response to the warming over the 21st century will not be uniform. The contrast in precipitation between wet and dry regions and between wet and dry seasons will increase, although there may be regional exceptions (see Figure SPM.8). {12.4, 14.3}.

The global ocean will continue to warm during the 21st century. Heat will penetrate from the surface to the deep ocean and affect ocean circulation. {11.3, 12.4}.

It is very likely that the Arctic sea ice cover will continue to shrink and thin and that Northern Hemisphere spring snow cover will decrease during the 21st century as global mean surface temperature rises. Global glacier volume will further decrease. {12.4, 13.4}.

Global mean sea level will continue to rise during the 21st century (see Figure SPM.9). Under all RCP scenarios the rate of sea level rise will very likely exceed that observed during 1971–2010 due to increased ocean warming and increased loss of mass from glaciers and ice sheets. {13.3–13.5}

Climate change will affect carbon cycle processes in a way that will exacerbate the increase of CO2 in the atmosphere (high confidence). Further uptake of carbon by the ocean will increase ocean acidification. {6.4}.

Cumulative emissions of CO2 largely determine global mean surface warming by the late 21st century and beyond (see Figure SPM.10). Most aspects of climate change will persist for many centuries even if emissions of CO2 are stopped. This represents a substantial multi-century climate change commitment created by past, present and future emissions of CO2. {12.5}.

Prior to the release of this latest report, most of the media focused on the two main points that were released early:

Human influence has been detected in warming of the atmosphere and the ocean, in changes in the global water cycle, in reductions in snow and ice, in global mean sea level rise, and in changes in some climate extremes (Figure SPM.6 and Table SPM.1). This evidence for human influence has grown since AR4. It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century. {10.3–10.6, 10.9}.

Changes in the global water cycle in response to the warming over the 21st century will not be uniform. The contrast in precipitation between wet and dry regions and between wet and dry seasons will increase, although there may be regional exceptions (see Figure SPM.8). {12.4, 14.3}.

Both of these statements are direct responses to deniers. The first addresses the degree of certainty that humans bear major responsibility for climate change. In IPCC terminology, it is "extremely likely" that humans are responsible; the IPCC translates "extremely likely" to at least 95% certainty.
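The IPCC's likelihood terms are calibrated language, each mapped to a lower bound on the assessed probability (per the AR5 uncertainty guidance). A short sketch (the lookup function is my own illustration, not part of any IPCC tool) makes the mapping concrete:

```python
# IPCC AR5 calibrated likelihood language: each term corresponds to a
# lower bound on the assessed probability of the outcome.
IPCC_LIKELIHOOD = {
    "virtually certain": 0.99,
    "extremely likely": 0.95,
    "very likely": 0.90,
    "likely": 0.66,
    "more likely than not": 0.50,
}

def minimum_probability(term: str) -> float:
    """Return the minimum probability implied by an IPCC likelihood term."""
    return IPCC_LIKELIHOOD[term.lower()]

# AR5's headline attribution statement uses "extremely likely":
print(minimum_probability("extremely likely"))  # 0.95
```

This is why moving from AR4's "very likely" to AR5's "extremely likely" was newsworthy: the assessed probability floor rose from 90% to 95%.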

The second point addresses the often-heard claim that over the last 15 years carbon dioxide emissions have continued unabated while the temperature has hardly budged; the argument is that this behavior shows a disconnect between carbon dioxide and the temperature rise. A quantitative comparison of the model predictions made by the IPCC and by various deniers is available in a recent Skeptical Science blog post.

With regard to the meaning of 95%, here is what Seth Borenstein wrote on the topic under the title "What 95% Certainty of Warming Means to Scientists":

There’s a mismatch between what scientists say about how certain they are and what the general public thinks the experts mean, specialists say.

That is an issue because this week, scientists from around the world have gathered in Stockholm for a meeting of a U.N. panel on climate change, and they will probably release a report saying it is “extremely likely” — which they define in footnotes as 95 percent certain — that humans are mostly to blame for temperatures that have climbed since 1951.

One climate scientist involved says the panel may even boost it in some places to “virtually certain” and 99 percent.

Some climate-change deniers have looked at 95 percent and scoffed. After all, most people wouldn’t get on a plane that had only a 95 percent certainty of landing safely, risk experts say.

But in science, 95 percent certainty is often considered the gold standard for certainty.

Yes – most of us would avoid boarding a plane knowing that it has only a 95% chance of landing safely (that is, a 5% chance of crashing on landing). We have the obvious alternatives of staying home or choosing an airline with a better safety record. An airline with that record would probably be out of business well before we had to decide whether to fly.
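The plane analogy understates how bad a 5% per-event risk really is, because risks compound over repeated events. A minimal sketch (my own illustration, assuming independent flights) shows how quickly a 95% per-flight safety probability collapses:

```python
# Illustrative sketch: with an assumed 95% chance that any single flight
# lands safely, the chance that ALL of a series of independent flights
# land safely shrinks geometrically.
def survival_probability(p_safe: float, n_flights: int) -> float:
    """Probability that all n independent flights land safely."""
    return p_safe ** n_flights

for n in (1, 10, 50):
    p = survival_probability(0.95, n)
    print(f"{n:>2} flights: {p:.1%} chance all land safely")
# After 10 flights the odds are already down to about 60%,
# and after 50 flights to under 8%.
```

A one-time 95% bet is already unacceptable for a plane; for a planet we only board once, the stakes are even less forgiving.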

The other side of this issue is that if a fire inspector found a 95% chance that our house would catch fire, he would tell us to fix the vulnerability immediately, especially given that no insurance company in the world would be willing to insure us. The only time we can be 100% certain that the house is going to catch fire is once it is already aflame; at that point, the only thing left to do is get out as fast as possible. Presently, we cannot get off this planet; we have no place else to go.

Posted in Climate Change | 1 Comment