Tuesday, April 30, 2013

King's Day

Sunday, April 28, 2013

The better question is what should the global average temperature revert to after stabilizing

An argument I've seen more than once from climate inactivists sometimes comes in the form of a question, "what is the ideal average global temperature," as if the question has a deep implication. In mid-gallop from "there's no warming; the warming is all natural; humans have little contribution," this is the step, "the warming gets us to a better temperature anyway," before they move on to "the overall negative effect isn't that bad; it's too soon to take action; it's too late to take action."

The first naive thought would be that places like Alaska should welcome some warmth, and a lot of the world's land mass is polar. What they miss is that melting permafrost sinks roads and buildings, forests die because insect pests survive mild winters more easily, and coastlines disappear with the loss of sea-ice protection from waves. If you put Hawaii's climate in Alaska, then Alaska would suffer. Both the biological and human environments are adapted to the climates they have.

So here's my hypothetical alternative:  assume, very optimistically, that in the year 2050, gross CO2 and equivalent emissions have been reduced 95% from present through a variety of technological and behavioral changes, and that carbon-negative technologies like biochar and biomass-plus-sequestration balance out the remaining 5%. What do you do next year and the following years?

The simplest answer is to do even better, if you can. The rule that when you find yourself in a hole, the first thing to do is stop digging, hasn't yet been satisfied. The oceans will be transferring back latent heat for decades after 2050, so even zeroing out emissions won't be enough to stop further warming. If you can increase carbon-negative activities enough that their effect, plus annual ocean absorption of CO2, means the reduced atmospheric CO2 warming balances out the latent heat release from the oceans, then at that point we'll have stopped digging deeper. And then, what next?

A further increase in carbon-negative actions will mean anthropogenic forcing is slightly net negative compared to the previous year. Continuing that year after year would start to raise the question, when do we stop? What average temperature are we aiming for? I don't think it's the 1850 average - neither we humans nor many ecosystems will function most naturally at that level.

I don't really have the answer; I just think it's an interesting question. Maybe more of a science fiction question, but our children will (hopefully) have to deal with it someday. As a policy question, the most recent, highest temperature will not be the one that people or ecologies are most adapted to, and neither will a temperature from a century or two earlier. People probably adapt faster than ecosystems, so if we choose a human-biased priority then the aimed-for cooling will be less pronounced than one prioritizing ecosystem recovery. Different societies and different ecosystems will have different ideal stabilizing temperatures, but unless they're really good with geo-engineering, then we only get one level of net forcing.

Maybe there won't still be millions of subsistence farmers on the edge of malnutrition 50 years from now, but I wouldn't count on that. Stabilizing their precipitation patterns probably should rank in the highest priority, but we'll have to see how much political pull they'll have to make that happen.

UPDATE:  Good comments, esp from Tom Curtis who says it's not correct to call the heat transfer back from oceans "latent heat". I'm not sure I agree though that the optimal temperature for humans or nature would be a pre-industrial temperature, either the 1850 temp I discuss above or Tom's reference to typical (warmer) Holocene temps. Natural ecosystems will have spent the previous 150 years moving in response to climate change - trying to get them to move again when 9 billion people are in the way could be a recipe for even further losses. Humans will be even more adapted to the existing climate.

A chosen temperature would eventually have to be low enough to stabilize the Greenland and West Antarctic ice sheets, although I assume we've got many additional decades or longer to do that.

I think in the very long term we would want to return to something like Tom's preferred temperature level.

Thursday, April 25, 2013

Barack Got Video

President Obama got tweets

Climate deniers in Congress refuse to even debate the issue. Make sure they don't get away with it:
The science on climate change is clear.
But many members of Congress are in complete denial, and they're standing in the way of progress.
We need to call them out.
Watch this embarrassing video—and join the fight to get serious on climate change:
Of course, the President has a significant decision on the Keystone Pipeline coming up soon.  Money/Mouth

Oh, yes, in addition, Willard wants Eli to call attention to a notice from the Association of Honest Brokers, two of whom trampled on reality today in the US Congress.

Just in: The Association of Honest Brokers Cries Foul
Today’s statement of Environment Subcommittee Chairman Chris Stewart (R-Utah) contains the presence of the LINEAR MODEL.
It is also important to recognize that the direction we choose to take on climate change is not resolvable by science alone. Once the scientific analysis is complete, we must then make value judgments and economic decisions based on a real understanding of the costs and benefits of any proposed actions. It is through this lens that we should review the President’s forthcoming executive actions and proposed regulations

By consensual assessment, The Association recommends (but does not impose) a meeting with the nearest honest broker to correct what could lead to the conceptual corruption of the policy making process.

And finally, Scrotum reports from New Zealand where the forces of the James Hansen Kombat Brigade Antipodean Division have turkey trussed Chris Monckton and made off with the package.

Wednesday, April 24, 2013

Bag Job

Cherry picking is an econometric sport, and figuring out the cost of things is often not as simple as a bunny might wish.  Brian had a few posts on fluoridation discussing the fact that there are not only pros, but also cons, and coming to the conclusion that on balance, and with care in application, the pros win.  Eli's attention was drawn to the question of whether non-reusable or reusable bags are better, and if so which sort and on what basis.  This, as Eli said, is not as straightforward as a bunny might wish.

There is an obvious part of the answer: the more times a bag is used, the cheaper it is. But how many more times is a good question, and what the problems with reusing a bag are is another.

The last has an interesting answer: if you put meat, fish, and milk products into a bag, you have to worry about bacteria, and some of the bacteria, E. coli for example, can be nasty.  Well, you can wash the bags (something Eli has never done, but might start), and doing so reduces the risk of contamination, but increases the amount of water and electricity used.

So what is a Rabett to do?  Well Google is a friend and a quick search turned up Life Cycle Assessment of Reusable and Single-use Plastic Bags in California by Joseph Greene from Cal State Chico.  Besides his own life cycle analysis, Greene presents the results of several others, which pretty much agree with his, each within their own limits.

The first step is to figure out what bags one is talking about and how to compare them.  On the non-reusable side the running is pretty much made by High Density Polyethylene (HDPE) and paper; on the reusable side we have woven Linear Low Density Polyethylene (LLDPE), non-woven polypropylene (PP), and cotton.  To compare all of these, the general agreement is to look at carrying capacity, in which case you need about 1500 HDPE bags to carry about the same amount as 1000 of the others.  Of course, HDPE bags, being very thin, often spill the carrots.

[Table from Greene's report: the impact indicators compared (non-renewable energy in GJ, GHG emissions in CO2 eq, solid waste in kg, fresh water consumption in gal, and bag mass in g) for single-use HDPE bags, reusable non-woven PP bags used once, 8 times, and 52 times, reusable LLDPE bags with 40% post-consumer recycled (PCR) content used once, 8 times, and 52 times, and paper bags.]

The amount of water includes that used in washing and manufacture.

While the number of times a bag is used is the major issue, such things as the amount of recycled materials, the cost of disposal, the cost of transportation, biodegradability, litter, and the effects on fauna and flora are also significant in any decision to tax or eliminate different types of bags. It turns out that cotton or canvas bags are not a good choice because of their weight and the energy needed to grow the cotton and make the bags.  HDPE bags can be recycled and, with a bit of luck, used one or two times, in Eli's experience maybe 1.5 (breaks the other .5).

So, on balance, multiple use woven bags made from LLDPE with 40% recycled polyethylene are the best choice.
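For bunnies who want to play with the arithmetic, here is a minimal sketch of the break-even calculation. The numbers and function names are made up for illustration; the real figures live in the tables of Greene's report.

```python
# Per-use impact of a reusable bag vs the single-use alternative.
# The numbers below are hypothetical, for illustration only.

def per_use_impact(manufacture, uses, per_wash=0.0):
    """Manufacturing impact amortized over uses, plus washing impact per trip."""
    return manufacture / uses + per_wash

def breakeven_uses(reusable_manufacture, single_use_impact, per_wash=0.0):
    """Trips needed before the reusable bag beats the single-use bag."""
    return reusable_manufacture / (single_use_impact - per_wash)

# Hypothetical: a reusable PP bag costs 15x one HDPE bag to make, and
# each trip needs 1.5 HDPE bags to match one reusable bag's capacity.
hdpe_per_trip = 1.5    # impact units per trip
pp_manufacture = 15.0  # impact units to make the reusable bag

print(breakeven_uses(pp_manufacture, hdpe_per_trip))  # 10.0 trips
print(per_use_impact(pp_manufacture, 52))             # ~0.29 after a year of weekly use
```

Washing shifts the break-even point: a nonzero `per_wash` raises the per-trip impact and lengthens the payback, which is why the water numbers matter.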

Tuesday, April 23, 2013

The Worst Thing In The World

So Eli was in the supermarket with Ms. Rabett, and the checkout clerk went, "Would you like to donate to our owner's favorite charity?"  Ms. Rabett knows that this is like asking Eli if he would like to say a nice word about Roger Pielke Jr., and quickly dragged the Bunny away before he could sweetly ask if the supermarket would care to match any donation he made.

In short, this is a cheap way for businesses to pretend they care while doing nothing and annoy Eli at the same time.

I now get why Europeans are disgusted with the European Parliament

The European Parliament last week rejected a fix to their cap-and-trade system that would have set a floor under the price of carbon, the kind of floor that has likely helped keep California's system functioning through a tentative start and into better shape (so far).

Among other things that are annoying is that European fossil fuel-dependent industries say that a floor will put them at a competitive disadvantage to Americans, an ironic repetition of what the same American industries say about Indian and Chinese competitors. In Europe's case it also happens to be a lie as far as California is concerned, and dubious in the case of New England (which has an existing, if low, price for carbon, and plans to restrict allocations further).

So you've got a system that can work if you make it work. Demanding that the sausage making of government work as well as one's ideal proposal (like a carbon tax that would supposedly emerge unscathed from a political process) is unrealistic, but failing to improve the existing solution is just stupid. The only good aspect is that it's not over: the Parliament left open the door to reconsider its action.

Monday, April 22, 2013

On Mathematics and Science

In the preceding post Eli pointed en passant to an article by EO Wilson where he let the cat out of the bag

For many young people who aspire to be scientists, the great bugbear is mathematics. Without advanced math, how can you do serious work in the sciences? Well, I have a professional secret to share: Many of the most successful scientists in the world today are mathematically no more than semi-literate.
and described his own experience
I speak as an authority on this subject because I myself am an extreme case. Having spent my precollege years in relatively poor Southern schools, I didn't take algebra until my freshman year at the University of Alabama. I finally got around to calculus as a 32-year-old tenured professor at Harvard, where I sat uncomfortably in classes with undergraduate students only a bit more than half my age. A couple of them were students in a course on evolutionary biology I was teaching. I swallowed my pride and learned calculus.

Fortunately, exceptional mathematical fluency is required in only a few disciplines, such as particle physics, astrophysics and information theory. Far more important throughout the rest of science is the ability to form concepts, during which the researcher conjures images and processes by intuition. 
This, of course, brought forth a number of bleats. Eli would, if he were being kind, call these the cries of the inexperienced, if not, those of the clueless.

Wilson is making an important point that everyone is missing, that it is more important to be able to formulate the problem and from the formulation conceptualize the broad outline of the answer before doing the math.

At that point you can go out and learn the math, or learn who the mathematician is who can help you, or buy Mathematica, or hire a mathturbator. If you think about this in physics story terms, this is pretty much Einstein. His real strength was the ability (and the guts) to pare the problem down to its basics, not his skill as a mathemagician.  Physicists worship elegance, not algebra.

Eli Grabs Another Envelope

Bunnies are not really into complex calculations where the meaning is hidden in the math.  For one thing there is great opportunity to fool yourself, for another, it is pretty simple to mislead others.  To Eli math is something you grab after you have figured out what is happening, and then you grab it in shells, with the easy parts first.  That way if you mess up, you always can look at the inner shell to figure out where you went wrong.  EO Wilson had some recent thoughts about the role of mathematics

Fortunately, exceptional mathematical fluency is required in only a few disciplines, such as particle physics, astrophysics and information theory. Far more important throughout the rest of science is the ability to form concepts, during which the researcher conjures images and processes by intuition.
and simple models help with forming those concepts.  In a recent previous life the Bunny Collective discussed the relative probabilities of the two fates of a single CO2 molecule that has been vibrationally excited by absorbing a photon.  One fate is emission of an IR photon

[1]  CO2* -->  CO2 + hν          Rate constant = kR

and collisional de-excitation by another molecule, say nitrogen or oxygen

[2]  CO2* + M -->  CO2 + M   Rate constant = kM

Plugging in the numbers, at atmospheric pressure the Rabett showed that only one in a hundred thousand excited molecules would emit an IR photon.  As several pointed out, high up in the stratosphere this decreases in proportion to the pressure, but, for example, at ~17 km it is one in ten thousand, and at ~ 25 km one in a thousand.

On the other hand, thermal excitation of CO2, the reverse of reaction [2], is just

[3]  CO2 + M -->  CO2* + M   Rate constant = k-M

so now, in addition to absorbing an IR photon, CO2 can be excited to CO2* by collisions, converting thermal translational energy into vibrational energy.

How fast a reaction happens is called the reaction rate, and is proportional to the concentration of each reactant and a number called the rate constant, which hides all the dynamics, e.g. the quantum chemistry, the probability of a collision having enough energy to actually cause molecular changes, etc.  Equilibrium requires that the rate of the forward reaction equal the rate of the reverse reaction, so that as many molecules react in the forward direction as in the reverse

[4]  (kM [M] + kR ) [CO2*] = k-M [CO2][M]

If kR << kM[M], it can be neglected, and then [2] and [3] form a simple equilibrium

[5]  CO2* + M = CO2 + M

which can be rearranged to yield

[6]    [CO2*]/[CO2] = k-M /kM = Keq

The next step is to show how thermodynamics allows a simple estimation of Keq and [CO2*].  The emission rate in photons per second per unit volume will then simply be

[7]   kR [CO2*]
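As a preview of that next step, a back-of-the-envelope sketch in Python: treating the bend as a simple two-level system, Keq is just the Boltzmann factor exp(-hcν/kT). The 288 K surface temperature and 400 ppm CO2 used here are illustrative assumptions, and the doubly degenerate bend would multiply the ratio by 2.

```python
import math

# Back-of-the-envelope Keq = [CO2*]/[CO2] for the 667 cm^-1 bend as a
# Boltzmann factor, treating it as a two-level system.  Assumptions:
# T = 288 K (surface), CO2 at 400 ppm; degeneracy of the bend ignored.

HC_OVER_K = 1.4388   # cm K, second radiation constant hc/k
K_R = 1 / 1.1        # s^-1, radiative rate of the bend

def excited_fraction(wavenumber_cm, T):
    """Two-level Boltzmann estimate of [CO2*]/[CO2]."""
    return math.exp(-HC_OVER_K * wavenumber_cm / T)

keq = excited_fraction(667.0, 288.0)
print(keq)   # ~0.036: a few percent of the CO2 is thermally excited

# Emission rate per unit volume, Eq. [7]: kR [CO2*]
co2_density = 400e-6 * 2.5e19     # molecules/cm^3 of CO2 in surface air
print(K_R * keq * co2_density)    # ~3e14 photons/cm^3/s
```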

Sunday, April 21, 2013

A bad rep for solar tax credit and LEED

Some news and rumors still seem to spread more by word of mouth than online. One of them for me is the issue of potential misuse of solar tax credits in the US, as opposed to the feed-in tariffs used elsewhere. Solar tax credits are transferable and cost-based: the higher the cost of the system, the greater the tax credit that can be sold to other businesses. That's an obvious disincentive to lowering solar costs, but less obviously it incentivizes leasing companies to inflate their cost estimates. When the IRS starts getting involved, that can really damage the political momentum we want to keep in place just as solar becomes increasingly competitive.

The solution is to play clean, folks, and maybe tighter IRS supervision. And maybe a feed-in tariff instead. Same word of mouth tells me the feed-in that's been tried at local levels in California is too small to get business support - we need a state or federal solution.

Similar issue for LEED, an environmental rating system for building design. Word of mouth that I hear is that it's way too easy to game the point system, especially because it's based on design standards instead of actual performance. Additional complication in California is that our state-mandated building design standards do a lot that LEED does, raising the question of what value LEED adds.

I've heard less about the Green Building Council standards, other than that they're supposed to be somewhat more lenient.

Yglesias discusses the issue here. As for his "price carbon" solution, good luck with that, at least on a national level (we're getting somewhere with California's cap-and-trade). Short of that solution, we need to have some performance standards incorporated into the rating system.

How much will the Earth's temperature rise?

Climate sensitivity: how has the Earth responded in the past?

In debates about the change of temperature of the Earth that we can expect in the future, it is illuminating to take a backward glance at the past changes of the temperature of the Earth. How sensitive is the Earth to the buildup of greenhouse gases?

The climate sensitivity is the rise in temperature divided by the forcing (in W/m²). In the absence of any feedback, the climate sensitivity can be shown to be 1/(4σT³), where σ is the Stefan-Boltzmann constant (σ = 5.67 × 10⁻⁸ W/m²K⁴), and the temperature is in kelvin.

Numerically this is a sensitivity of 0.3 K/(W/m²), meaning that if the increase in forcing is 1 W/m², then the resulting increase in temperature is 0.3 C, in the absence of feedback, once the Earth has come to a new equilibrium with the higher greenhouse gases.

The doubling of CO2 from pre-industrial levels will produce a forcing of 4 W/m², which implies, in the absence of any feedback, a temperature rise of

4 x 0.3 = 1.2 C (the no-feedback result).
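That arithmetic can be checked on one line; a sketch, assuming the Earth's effective radiating temperature of about 255 K (using the 288 K surface temperature instead gives a somewhat smaller number).

```python
# No-feedback sensitivity 1/(4 sigma T^3).  Assumption: T is the
# Earth's effective radiating temperature, ~255 K.

SIGMA = 5.67e-8   # W/m^2 K^4, Stefan-Boltzmann constant
T_EFF = 255.0     # K

lam = 1.0 / (4.0 * SIGMA * T_EFF**3)  # K per W/m^2
print(lam)            # ~0.27, which rounds to the 0.3 in the text

print(4.0 * lam)      # ~1.1 C for the 4 W/m^2 of doubled CO2
```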

While global climate models are necessary, it is also valuable to have a model-independent estimate of the climate response to increased CO2. What climate sensitivity did the Earth show during the warming that ended the last ice age? This is the large climate change that is closest to us in time (about 10,000 years ago), and therefore the most valuable. Earlier times are less relevant because of continental drift; if you go back 200 million years, the continents were not even close to where they are today.

The change in temperature between the ice age and post-ice age climates was 5 C, and the change in forcing was 7.1 W/m². This implies a climate sensitivity of 5/7.1 = 0.7 K/(W/m²). Multiplying this sensitivity by the forcing expected from a doubling of CO2 from pre-industrial levels, namely 4 W/m², yields a predicted temperature rise of

0.7 x 4 = 2.8 C (from the Earth’s climate record).

This is consistent with the IPCC prediction of the rise in temperature (in response to a doubling of atmospheric CO2), which is an increase in the range of 1.5 to 4.5 C. This is the change between the new higher equilibrium temperature and the past equilibrium temperature.

Notice that the paleoclimate data implies that the feedback is positive, at least on the time scales of centuries to millennia. (The feedback may well be positive on a much larger time scale of millions of years, but that is not the relevant time scale for human civilization).
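A quick sketch of that inference: writing the with-feedback sensitivity in the standard form λ = λ0/(1 - g), the paleoclimate numbers imply a positive gain g. The 0.3 value for λ0 is the rounded no-feedback figure from above.

```python
# Paleoclimate sensitivity and the implied feedback gain, using the
# numbers quoted from Seinfeld and Pandis.

dT_glacial = 5.0   # C, glacial to interglacial temperature change
dF_glacial = 7.1   # W/m^2, corresponding forcing change
F_2xCO2 = 4.0      # W/m^2, forcing for doubled CO2
lam0 = 0.3         # K/(W/m^2), rounded no-feedback sensitivity

lam = dT_glacial / dF_glacial   # ~0.70 K/(W/m^2)
dT_2x = lam * F_2xCO2           # ~2.8 C, inside the IPCC 1.5-4.5 C range
print(lam, dT_2x)

# Standard feedback form lam = lam0 / (1 - g); solve for the gain g:
g = 1.0 - lam0 / lam
print(g)   # ~0.57 > 0, so the net feedback is positive
```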

Reference: Seinfeld and Pandis, Atmospheric Chemistry and Physics (Wiley, New York, 1998), pp. 1102-1103. These authors cite a temperature difference between glacial and interglacial periods of 5 C, a forcing of 7.1 W/m², and a radiative forcing for doubled CO2 of 4 W/m².

This refutes some of the arguments made at WUWT here.

Saturday, April 20, 2013

Friday, April 19, 2013

Games Bunnies Used to Play

Life has changed

When Eli was a bunny we had all sorts of organized disorganized team games, slapball, punchball, stickball, ringolevio, skellzie, all with elaborate local rules, seasons to play them in and not, and more. Then there were two-bunny games, like box baseball, boxball, stoopball, two person slapball, etc.  These were passed down from older to younger kids and brought a real thrill to the proposition that kids should go play in traffic. Parents did not have to drag us to some organized team practice, and we objected to needing to come in at night. After all, there were streetlights. Right?

What went on in your neighborhood?

Wednesday, April 17, 2013

This Is Where Eli Came In

One of the useful things the Rabett used to do was to explain what happens to the energy when a molecule, say CO2 (carbon dioxide), although you could also say H2O (water vapor) or CH4 (methane), absorbs light. For the purpose of this post, the photon is in the infrared region of the spectrum.  This is an evergreen for two classes of bunnies

  1. Bunnies who don't realize that the molecule can also emit light.  This is a popular one amongst organikers and analytical chemists whose experience with IR spectroscopy is with absorption spectra for analysis of samples
  2. Bunnies who think that the only way that an excited molecule can get rid of the energy is to emit a photon.  
For every CO2 molecule there are roughly 2500 other molecules in the same volume of air.  When a CO2 molecule collides with one of the other molecules, almost certainly an oxygen or nitrogen molecule, energy transfer occurs.  Each CO2 molecule can be described as having translational, vibrational and rotational energy, and the same is true of the collision partner.  Any collision can in principle change the amount of any of these forms of energy by any amount, subject to conservation of energy and momentum.  The probability of this happening depends on the relative translational energy of the collision, the relative orientation of the molecules, their distance of closest approach, and the distribution of energy in each of the collision partners prior to the collision.  The detailed study of such effects is called collision dynamics or molecular dynamics.

Fortunately, we can take thermal averages over many of these variables, either theoretically or experimentally which makes life, theory and experiments much simpler and a hell of a lot less expensive and time consuming.  That sort of thing usually goes under the rubric of reaction (when there is one) kinetics or energy transfer studies when there isn't.

The fate of the energy in a vibrationally  excited CO2 molecule can be thought of as a race between emission of a photon taking essentially all of the vibrational energy away and a collision that does the same.  Here Eli is going to tell you who wins the race, something that is generally handwaved, correctly handwaved, but handwaved none the less.

For the lowest vibrational level, the 667 cm⁻¹ doubly degenerate bend, labelled (01¹0), the only lower state is the unexcited ground state (00⁰0)

 CO2(01¹0) + M --> CO2(00⁰0) + M

where M is any other molecule.  The probability of this happening depends on M, but it can be measured by exciting a small amount of CO2 mixed into a large bath of M and then watching how long it takes for the excited CO2 to decay away.   This behavior is captured by the rate equation

d[CO2*]/dt = - kR [CO2*] - kM[CO2*][M]    (1)

where kR is the radiative emission rate in the absence of collisions and kM is the collisional quenching rate, the rate at which collisions with M de-excite the CO2.  The square brackets indicate the concentration of whatever species is inside.  If [M] is very large compared to [CO2] we have what is called pseudo first order conditions, because the effect of the collision on the quantum state of M will be very small and Eq. 1 is a first order differential equation whose solution is

[CO2*] =  [CO2*]t=0 exp (-{kR  + kM[M]}t)

where t is time and we are assuming that the CO2 is being excited during a very short time so we can neglect the competition between excitation and de-excitation which simplifies the kinetic analysis.
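For bunnies who distrust analytic solutions, a quick numerical check: integrating Eq. 1 with crude Euler steps reproduces the exponential. The kM[M] value used here is an illustrative surface-pressure number.

```python
import math

# Check that the exponential solution quoted in the text satisfies the
# pseudo-first-order rate equation (1) by integrating it numerically
# with simple Euler steps and comparing.

K_R = 1 / 1.1      # s^-1, radiative rate of the 667 cm^-1 bend
K_M_M = 1.25e5     # s^-1, kM[M]; illustrative value for 1 atm
K_TOT = K_R + K_M_M

def analytic(c0, t):
    """[CO2*] from the closed-form solution."""
    return c0 * math.exp(-K_TOT * t)

def euler(c0, t, steps=100_000):
    """Integrate d[CO2*]/dt = -K_TOT [CO2*] with simple Euler steps."""
    dt = t / steps
    c = c0
    for _ in range(steps):
        c -= K_TOT * c * dt
    return c

t = 2e-5  # s, about 2.5 collisional lifetimes
print(analytic(1.0, t), euler(1.0, t))  # agree to better than a percent
```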

The best study of this process for CO2 was "The vibrational deactivation of the (00⁰1) and (01¹0) modes of CO2 measured down to 140 K" by Siddles, Wilson and Simpson, Chemical Physics 189 (1994) 779-91.  They measured collisional quenching rate constants for CO2 (01¹0) and the higher lying (00⁰1) level against a number of gases, including carbon dioxide, the rare gases, nitrogen, oxygen, deuterated methane and deuterium.  Oxygen and nitrogen have quenching rate constants of 5.5 and 3.1 × 10⁻¹⁵ cm³/molecule-sec respectively at 295 K, decreasing roughly linearly to about a factor of five less at 140 K.  To make sense of these numbers, we can compare them to the gas kinetic rate, e.g. what the rate would be if every collision were 100% effective, of the order of 10⁻¹⁰ cm³/molecule-sec.  More to the point, at atmospheric pressure, where the concentration is ~2.6 × 10¹⁹ molecules/cm³ (Loschmidt's number), the rate of collisional de-excitation of CO2(01¹0) by N2 will be

kM[M] ~ 5 × 10⁻¹⁵ cm³/molecule-sec × 2.5 × 10¹⁹ molecules/cm³ ~ 10⁵ s⁻¹

Note the ~, meaning oom, or order of magnitude.  It's not that Eli can't afford a calculator, it's that who needs one in this sort of calculation when toes and ears are available.  The lifetime will be the reciprocal of this, 10⁻⁵ s, or 10 μs.

What about the radiative lifetime?  Radiative lifetimes of vibrationally excited states are long.  Insight into why this is so can be found by looking at the relationship between the Einstein A and B coefficients.  Roughly put, the A coefficient is proportional to the rate of spontaneous emission and the B coefficient to the rate at which photons can be absorbed by the molecule.  Assuming that the transitions are between two isolated levels

A/B = 8πhν³/c³

The ν³ factor is what makes emission from low lying vibrational levels so slow.  The isolated radiative lifetime of CO2(01¹0) is ~1.1 s.  The (00⁰1) lifetime is 2.4 × 10⁻³ s, but remember that its transition occurs at 2350 cm⁻¹, where the ν³ factor can do its work.

Comparing the radiative rate kR (the inverse of the lifetime) to the collisional deactivation rate kM[M] provides a quick estimate that only one out of 100,000 CO2 molecules excited into the (01¹0) level, whether by collision or by absorbing a photon, will emit.
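The whole race fits in a few lines of Python; a sketch using the rate constants quoted above (the 78/21 N2/O2 split is the standard composition of air, and everything is order-of-magnitude).

```python
# Race between radiative emission and collisional quenching for
# CO2 in the 667 cm^-1 bending level at surface pressure.

k_R = 1 / 1.1        # s^-1, radiative rate (1.1 s isolated lifetime)
k_N2 = 3.1e-15       # cm^3/molecule/s, quenching by N2 at 295 K (Siddles et al.)
k_O2 = 5.5e-15       # cm^3/molecule/s, quenching by O2 at 295 K (Siddles et al.)
n_air = 2.5e19       # molecules/cm^3 at 1 atm, Loschmidt's number (oom)

# Weight by the composition of air, ~78% N2 and ~21% O2:
quench_rate = (0.78 * k_N2 + 0.21 * k_O2) * n_air
fraction_emitting = k_R / (k_R + quench_rate)

print(quench_rate)       # ~1e5 s^-1, as in the text
print(fraction_emitting) # ~1e-5: one emission per hundred thousand excitations
```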

Monday, April 15, 2013

Bob G Returns to Normalcy

Bob Grumbine has a couple of posts (#1 and #2) asking when the climate was normal.  His conclusion is

  • Climate was 'normal' only between 1936-1977
  • Every year 1987-present has been warmer than any year before that
  • 1976 was warmer than any year before 1926
  • 1978 (next coldest year of the recent run) was warmer than any year before 1940
  • 1937 is much warmer than all the preceding years.
In #2, Bob makes some important points about why this matters.  Further, those who insist that GISS needs to redefine its base period for anomalies (1951-1980) might take heed, and GISS itself might want to shave a couple of years off.

California Democratic state convention and Grover Norquist

My two activities this weekend were to listen to the podcast of Grover Norquist speaking to the Commonwealth Club and attending the annual California Democratic Party convention in Sacramento. Norquist played up the libertarian angle, probably a smart move when a conservative addresses a liberal crowd. He definitely threw the Bushies under the bus on Iraq and claimed to oppose occupying nations (something contrary to his position back when it counted). He also claimed the Democrats drive up the size of government to increase the number of people dependent on government and therefore supportive of Democratic positions, making opposition to government spending a partisan issue on purely partisan grounds. A lot of it was either disingenuous or vague, like supporting tort action as a substitute for environmental regulation, when torts are incredibly inefficient and often limited by the Republican Party.

The best part of the Democratic state convention was a panel on strengthening partnerships with communities of color. The really interesting thing these independent organizations are doing is targeting intermittent, low-frequency voters and getting them to turn out on issues (not for specific candidates). I can attest from my own campaign that those voters are not primary campaign targets: when you have limited money, you put your effort into reaching someone who votes 80-100% of the time, not 20%. While California is majority-minority, the stats they showed had a majority of voters being white and disproportionately wealthy, and until the electorate reflects the population, they argued, governmental priorities won't reflect popular needs, quite the opposite of the problem Norquist sees of a too-big government.

For myself, I'm not sure whether growing inequality is caused by unfair governmental processes biased against the poor, or by the nature of our current economy, but either reason to me justifies countervailing action. I'm not buying Norquist's argument that we just need government to leave us alone. That doesn't mean he's always wrong though - finding the areas where government doesn't work well or should be less intrusive could be an area of agreement. A cap-and-trade or carbon tax is a good example, as opposed to typical regulation. Just waiting for the Republicans to pick that one up.

Sunday, April 14, 2013

The Bees Are Buzzed

Wright et al. report in Science that "Caffeine in Floral Nectar Enhances a Pollinator's Memory of Reward".  Some flowers, including those of, of course, coffee plants, but also citrus and tea, incorporate a bit of caffeine in their nectar.  Curiously, according to the authors, all the citrus varieties they studied did this, but not all the coffee plants.  As any college bunny getting ready for exams knows, caffeine helps keep you alert and enhances memory.  Fortunately, the bees appear to dislike the taste of solutions with dangerous (to bees) levels of caffeine.  They will not be buzzing your energy drinks, but they do have a fondness for sodas.

That's the good news.   The bad news is that bees are disappearing.  One of the causes is the use of neonicotinoid pesticides and organophosphates.  The latter have been used to wipe out Varroa mites, one of the major pests of bees.  It turns out that these pesticides make the bees stupid, wiping out brain cells that the bees use to learn things, like where the good caffeine nectar is.  Industry, of course, doesn't want to know

"Christian Maus, a safety manager at Bayer Crop-Sciences which makes clothianidin, cautions that it's tough to determine what happens to bees in nature from this study, because it was conducted on isolated bee brains in direct contact with insecticides, without any of the normal protective barriers or metabolism."
Various organizations have filed a suit against EPA approval of two neonicotinoids because the effect on bees was not dealt with; other organizations are seeking ways to limit exposure.  For example, corn seeds are often coated with neonicotinoids to protect the seeds from pests.  When the seeds rub against each other in hoppers during planting, the neonicotinoids are released.  The newly formed Corn Dust Research Consortium recently accepted proposals to limit the pesticide dust, either by making the seeds stickier or by limiting the powder seed lubricants needed for flow of the seeds through mechanical planters.

The emphasis here is on commercial apiaries, but anyone walking about, even in the biggest cities, knows that wild pollinators are also important.  Eli, for example, rents to a bunch of carpenter bees in his wooden fence.  Large, one could even say bumbling, insects, they calmly go about their business, interacting with the birds and squirrels that make the miniature carrot patch a delight.  The wild pollinators are not having the best of times, which is bad news.

Garibaldi et al., again in Science, point out how key these guys are:
The diversity and abundance of wild insect pollinators have declined in many agricultural landscapes. Whether such declines reduce crop yields, or are mitigated by managed pollinators such as honey bees, is unclear. We found universally positive associations of fruit set with flower visitation by wild insects in 41 crop systems worldwide. In contrast, fruit set increased significantly with flower visitation by honey bees in only 14% of the systems surveyed. Overall, wild insects pollinated crops more effectively; an increase in wild insect visitation enhanced fruit set by twice as much as an equivalent increase in honey bee visitation. . .
Human persistence depends on many natural processes, termed ecosystem services, which are usually not accounted for in market valuations. The global degradation of such services can undermine the ability of agriculture to meet the demands of the growing, increasingly affluent, human population (1, 2). Pollination of crop flowers by wild insects is one such vulnerable ecosystem service (3), as the abundance and diversity of these insects are declining in many agricultural landscapes (4, 5). 
Garibaldi et al. looked at the relative roles that domesticated and wild pollinators play in 41 crop systems distributed across the globe.  One might call this macroecology.  Burkle, Marlin and Knight replicated a study done in a small area near Carlinville, IL, in the late 1800s by Charles Robertson, who categorized the types of pollinators that visited different plants.  That study had already been replicated once in the 1970s.

The news is not good.  The number of different interactions between plants and pollinators has declined by almost 50% (ok, 46%):
Bee extirpations contributed substantially to the observed shifts in network structure. Of the 407 lost interactions, 45% (183) were lost because bee species were extirpated from the study region; all 26 forbs remained present. It is unlikely that the dramatic loss of bees observed in the contemporary data set resulted from differences in sampling effort between the historic and contemporary studies. Robertson observed the pollinators of each forb species for 1 to 2 years before moving on to other species. In our intensive resurvey over 2 years, we found less than half (54 of 109) of those bee species. Although Robertson’s sampling effort in each season is unknown, we were able to extrapolate our data based on sampling effort and found that our observations were close to the “true” richness (table S1). If Robertson’s sampling was less intense on a per plant species basis than ours, then the bee extirpations are a conservative estimate. Furthermore, the loss of bees was nonrandom, such that bees that were specialists, parasites, cavity-nesters, and/or those that participated in weak historic interactions were more likely to be extirpated (table S2), congruent with other findings. Specialists were lost more than generalists (even after correcting for potential observation bias), despite the fact that their host plants were still present 
With all types of pollinators declining, the hard place is not as far away as one might hope.  Jason Tylianakis puts it bluntly:
Are concerns of a pollinator crisis exaggerated, and can we make do with better management of honeybee colonies? Two articles in this issue provide compelling answers to these questions. On page 1611, Burkle et al. demonstrate that native wild pollinators are declining. On page 1608, Garibaldi et al. show that managed honeybees cannot compensate for this loss.