Tuesday, June 26, 2012

Water - and Specific Heat Capacity


It's not a coincidence that about 80% of the human population lives within 60 miles/100 kilometers of an ocean margin. Spend a winter (or a summer) in an interior region like Kansas, or Kazakhstan, and you will understand why they are not crowded. Temperatures in Fairbanks, Alaska, can range from 86F/30C in summer to below -60F/-51C in winter - and that's just south of the Arctic Circle! I've personally experienced temperatures of 130F/54C in Arizona and 142F/61C while working in the interior of Saudi Arabia.

We would have occasional snows (and freezing rain) when we lived in Virginia. That wasn't so bad... But sometimes we do not even see snow during the winter in Vancouver, and it's notable when the temperatures get much above 75F/24C.

I could easily get used to this. I think I'll stay...

There's a downside to this, of course: populations close to a seashore are much more vulnerable to a tsunami from a seafloor fault rupture - or an asteroid impact. Volcanoes can even figure into the picture: the tsunami that resulted from the explosion of Krakatau in 1883 traveled more than 10 kilometers inland on neighboring Java and Sumatra, then swept everything it had picked up back out to sea. Contemporary accounts mention being able to walk across the Sunda Strait on logs and bodies.

There is a reason for that very human tendency to hug the coast, and it's not for the sandburgers and grit-flavored potato salad. It's because of the moderating effect of nearby oceans. The key to that effect is the specific heat capacity of water, which is more than 4 times that of air. In other words, it takes more than four times as much energy to raise the temperature of a unit mass of water by one degree Celsius as it does to raise the same mass of air by one degree. That means the oceans act as a thermal buffer: they can absorb and release enormous amounts of heat with very little change in temperature.

We notice the effects of water on temperature in a number of different ways, and the next series of questions raises an unusual issue:

Q:
Does an object traveling under water get colder as it increases its speed through the water? Similar to a wind chill factor.
- Gaylord M.

A:

Yes - if the water is colder than the object moving through it.

Water has a specific heat capacity about 4.2 times that of air. This means it can hold - and transfer - far more energy than air for each degree of raised or lowered temperature. The faster you move through a medium (like water) that is at a different temperature, the faster and more effective the thermal exchange, all other variables held constant.
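The arithmetic behind that ratio is simple. Here is a minimal sketch using standard textbook values (water ≈ 4186 J/(kg·K), air at constant pressure ≈ 1005 J/(kg·K)):

```python
# Energy needed to change the temperature of a mass of material:
# Q = m * c * dT, where c is the specific heat capacity.

C_WATER = 4186.0  # J/(kg*K), liquid water (textbook value)
C_AIR = 1005.0    # J/(kg*K), air at constant pressure (textbook value)

def heat_energy(mass_kg, delta_t_c, specific_heat):
    """Joules required to change mass_kg by delta_t_c degrees C."""
    return mass_kg * specific_heat * delta_t_c

# Warming 1 kg of each by 1 degree C:
q_water = heat_energy(1.0, 1.0, C_WATER)   # ~4186 J
q_air = heat_energy(1.0, 1.0, C_AIR)       # ~1005 J
print(round(q_water / q_air, 1))           # ~4.2 - the ratio quoted above
```

(Heat capacity is only part of the story of why cold water chills you so fast - water also conducts heat away from skin far better than air does - but the ratio above is the one used in this post.)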

Most people know that getting into cold water will affect them much faster than walking through air of the same temperature. I noticed when I lived near the Red Sea that if I went diving in temperatures below 82F/28C that I would quickly become hypothermic. This hugely different heat capacity is also why it is so important to wear clothing that keeps moisture away from the skin as much as possible.

Q: 

Thanks for the reply. I asked the question because I was wondering if it could have had an effect on the Titanic's rivets and caused them to fail. I had watched a segment on the History Channel where they ran some tests and determined the rivets had not failed. However, they ran their tests in what appeared to be a normal environment. Only one of the test rivets failed.


A: 

The possible effect of ice-temperatures on the Titanic's rivets is an interesting thought. I'm not a metallurgist, but have watched several back-and-forths in the semi-scientific literature about the possible "failing Titanic rivets" issue with some interest.

In this case I don't think the temperature would have made much difference, because North Atlantic water ranges between 0C and 22C, depending on the month.

That's not much of a temperature difference, considering the temperature at which the rivets were forged, and the fact that seawater cannot get much below its freezing point of about -2C. Because of water's large specific heat capacity, there really is not much of a temperature swing in the North Atlantic.

There were literally thousands of steel-riveted ships plying the North Atlantic during that epoch, and it makes more sense to worry about metal impurities in a given production batch of rivets than about the narrow temperature range they would operate in.
~~~~~



Friday, June 22, 2012

Infrasound


The Earth really is a living thing in many senses of the word. For instance, it is very active – it even makes sounds.

Q:
Hi, Wondering what kind of sounds the inner earth makes? Do you know where I might go to hear this?
Thank you 
-Nathan W.

A:
There are sounds from the "inner Earth", but they are generally at frequencies below what the human ear can detect - this frequency range is called "infrasonic". Occasionally these can be heard, but not normally.

I once heard a recording from a seismometer located on Tungurahua volcano in Ecuador - but it had been electronically sped up about 400 times to bring the signal into the audible range. It sounded like a large animal moaning and roaring. This, of course, would normally be inaudible to the human ear.

Here is one somewhat different, for volcanic gas venting: http://volcano.oregonstate.edu/vwdocs/videos/siocomm.mov 

Here is an example of the sounds of an actual surface eruption at Tungurahua: http://en.rian.ru/video/20101202/161596301.html

Note that if the volcano is not actually erupting at the surface, the sounds made are almost always inaudible (infrasonic).

More volcano sounds can be heard here: http://volcano.oregonstate.edu/book/export/html/385

In some volcanoes there is a seismic signal detected called "harmonic tremor" - it is generally thought to be caused by fluid movement through conduits deep below the volcano, and is sometimes a portent of an impending eruption (Mt Pinatubo in the Philippines in 1991, for instance). Harmonic tremor is typically in the 2 Hz frequency range - well below the lowest frequency that a human ear can detect (which is about 20 Hz).
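The "speeding up" trick used for the Tungurahua recording works because playing a recording faster multiplies every frequency in it by the same factor. A tiny sketch (the 400x factor is from the recording described above; the human hearing band of roughly 20 Hz - 20 kHz is a standard figure):

```python
SPEEDUP = 400  # playback speed multiplier used on the seismometer recording

def audible_after_speedup(freq_hz, factor=SPEEDUP):
    """True if a signal at freq_hz lands in the ~20 Hz - 20 kHz hearing band
    after being played back 'factor' times faster."""
    shifted = freq_hz * factor
    return 20.0 <= shifted <= 20000.0

print(audible_after_speedup(2.0))   # True: 2 Hz harmonic tremor -> 800 Hz
print(audible_after_speedup(0.01))  # False: 0.01 Hz -> 4 Hz, still infrasonic
```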

Earthquakes (shifting, sliding crustal plates) also generate seismic waves, but like those under a volcano they tend to be mostly at frequencies well below what a human ear can readily detect.
~~~~~

Tuesday, June 19, 2012

Geoengineering


Geoengineering is a very broad topic – in fact, no two groups of people can quite agree on what the word encompasses. One thing is for sure, however: the word already carries a lot of emotion with it, not unlike "fracking."

Q: What is geoengineering and why do people say it is bad?
- Byron S.
A:
The term “geoengineering” (or environmental engineering, depending on who you are listening to) can encompass a lot of very different things: 
  • Stratospheric Particle Injection for Climate Engineering (SPICE). This experiment, planned in the UK this spring, envisioned spraying water into the atmosphere from a hose tethered at a 1-kilometer altitude – a test-bed for proposals to inject vast quantities of sulfates into the stratosphere to reduce global warming. The theory underlying these proposals is that Mt Pinatubo already did this in 1991 – and lowered the Earth's average temperature by about half a degree C for the next two years.
  • Injecting large volumes of iron sulfates into the Southern Ocean in 2009. This was done to test a theory that adding iron to the ocean would encourage phytoplankton growth, leading to an increase in zooplankton growth with concomitant oxygen release and carbon dioxide sequestration all at the same time. The fear, of course, was that the exercise would trigger a massive, toxic algal bloom.
  • All the Walmart parking lots in the world contribute to large-scale diversion of water from the Earth and unusual absorption of solar radiation, creating unnatural microclimates (“heat islands”) that will affect local and even regional weather. In fact, one can watch any local regional weather radar, and readily see that clouds will often form donut holes over large, paved metro areas like Portland, OR.
  • Groundwater depletion and other anthropogenic (man made) changes in terrestrial water usage were responsible for about 42% of the 8-cm rise in global sea level observed between 1961 and 2003.
  • Ethinyl Estradiol (EE2) is the active ingredient in birth-control pills. More than 100 million women worldwide use contraceptive pills, and the residues make their way through waste-water treatment systems into rivers and lakes, where they have caused widespread damage to aquatic environments by disrupting endocrine systems in wildlife (for example, irreversible development of eggs in the testes of male fish, a condition called "intersex"). EE2 introduced into a Canadian lake in 2001, at a level of only 5 parts per trillion, caused the population of one fish species to completely collapse.

There are other potential kinds of geoengineering, limited only by the creativity of people who worry about the Earth we live on - and who DON’T worry about where funding for their proposals might possibly come from.

These mega-scale engineering changes all sound like good ideas – they promise potentially great (and highly leveraged) rewards. The problem with geoengineering, according to a lot of people, is that if we play with our ecosystem on broad scales like these, we can never be sure of the consequences.  We may very well, with the best of intentions, create a spiraling-out-of-control disaster. We could just be asking for it.

An extreme example of this fear was the concern that when the Large Hadron Collider in Europe went online, its huge particle-beam energies would create a tiny black hole - one that would burrow to the center of the Earth and destroy our planet from the inside out. The most compelling argument against this, of course, is that far greater particle energies are generated daily in our upper atmosphere by cosmic rays… without any noticeable harm having been done over the past 4.5 billion years or so.

Another example of mega-scale engineering is the massive use of DDT to solve a perceived insect problem – to save crops and mitigate human disease by eliminating dangerous insect vectors. We now know, of course, that the extensive use of DDT did solve, at least temporarily, some crop and human disease problems. However, it had huge unforeseen downrange consequences like plummeting bird populations and possible birth defects.

Some people might call the massive use of antibiotics another example of a well-intended global effort to deal with a human problem – but one that has in fact led to a growing disaster. We now see explosive growth of Methicillin-Resistant Staphylococcus Aureus (MRSA), and of the "flesh-eating" bacterial infections increasingly in the news. Indiscriminate antibiotic use has also led to a world-wide resurgence of resistant tuberculosis, Bubonic Plague, and other once-curable diseases.

Perhaps even more terrifying is the research into genetic engineering: what if something unforeseen gets loose into the world’s environment, with disastrous and irreversible consequences, like Zebra Mussels, lampreys, and Asian Carp getting into the Great Lakes? Or Kudzu being introduced into the southeastern US? Or Africanized bees introduced into the Western Hemisphere? Or cases of incurable cerebral malaria exploding in areas where unregulated hydraulic mining is rampant?

In 2010, a gathering in Oxford, UK, came up with some guiding principles for geoengineering:
- Geoengineering should be regulated as a public good
- There should be public participation in decision-making
- Research should be openly published
- There should be independent assessments of potential impacts
- Decisions to deploy any new technology should be managed within a "robust governance framework."

All of these principles sound great – but they are terminally vague. Furthermore, they will probably never be implemented on an international scale. It takes just one nation ignoring international guidelines on something as far-reaching (and frontier-crossing) as geoengineering to abrogate the whole effort for the rest of the international community.

If there is a lesson here from the pesticides, antibiotics, and biological introductions, it is that nothing is consequence-free. However, many people feel that they are forced to just stand by and helplessly watch things unfold - decisions made by just a few people. That may be why there are such vociferous demonstrations against something as innocuous-sounding as SPICE.
~~~~~

Saturday, June 16, 2012

Black Holes & Supernovas & Geology


Here is a continuing question from 3-yr-old Samantha. It actually goes to the heart of why we have geology in the first place: the supernovas of earlier suns created, and recycled into space, fusion-built heavy elements like oxygen, carbon, iron, and silicon - the major constituents of our rocky, water-covered Blue Marble of a planet. A world like ours could not have existed in the early life of the universe.


Q:
Thank you so much for your reply. She (Samantha) still talks about you from time to time. Then out of the blue she asks "Mommy, what are black holes made of?" I don't know! :)
--Jo L.

A:
Well, the short answer is a LOT of mass. There are actually at least two different kinds of Black Holes.

A stellar black hole starts with the collapse of a very large star - a star much bigger than our Sun. As the star uses up its hydrogen by fusing it into helium, it starts converting helium to carbon - these stars glow a deep red, almost garnet color in a visible-light telescope. Rather quickly on a cosmic time-scale, it will start converting carbon and helium into a number of other life-critical elements, all the way up to iron. (The fact that the Earth's crust contains elements up into the uranium range points to other processes, too.) All the material we find on our own Earth has come from this thermonuclear process - probably from many ancient stars that reached old age and blew up long ago.

In two words, we are “Star Stuff.”

Somewhere in this winding-down process there is an initial collapse of the outer blanket of hot gas onto the star's core, and a "bounce" that blows out the outer envelope. This is called a nova, or in some cases a supernova. It produces prodigious, short-lived amounts of radiation, from visible light all the way to X-ray energies and beyond. A supernova in a distant galaxy can temporarily look like a nearby star in our own.

This outer-shell ejection can create something called a planetary nebula - a glowing shell of gas that almost looks like a planet in a cheap telescope. Finally there is a terminal collapse: all the remaining matter, without thermonuclear heat to hold it up, collapses into what becomes a black hole. It's called a black hole because there is so much mass in such a tiny volume that it bends light - so strongly (this is an essential part of Einstein's General Theory of Relativity) that light cannot escape from a certain volume around the central concentrated mass. The "edge" from which light can't escape is called the Schwarzschild radius. You can guess who suggested this idea first. If the original star isn't big enough, the mass will collapse back into a white dwarf - or, with somewhat more mass, into a neutron star, a teaspoon of which would weigh tons on Earth (if you could get it here, or even weigh it).
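The size of that light-trapping "edge" follows from a one-line formula, r_s = 2GM/c². A quick sketch with standard physical constants (the 10-solar-mass example is illustrative):

```python
G = 6.674e-11      # gravitational constant, m^3/(kg*s^2)
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius_m(mass_kg):
    """Radius inside which light cannot escape: r_s = 2*G*M/c^2, in meters."""
    return 2.0 * G * mass_kg / C**2

# Our Sun, if it could somehow collapse this far (it can't - too little mass):
print(schwarzschild_radius_m(M_SUN) / 1000.0)        # ~2.95 km
# A 10-solar-mass stellar black hole:
print(schwarzschild_radius_m(10 * M_SUN) / 1000.0)   # ~29.5 km
```

Note how small these radii are: the entire mass of a star ten times heavier than the Sun, hidden inside a sphere the size of a city.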

This is a description of a multi-stellar-mass Black Hole. 

There are other, far larger black holes. Galactic-core black holes are found in the centers of most galaxies, including our own – and they form for different reasons and are HUGE. These black holes result from too many large stars being crowded into too small a space at the center of a galaxy: they coalesce into a black hole that grows ever larger with time as it gobbles up other nearby stars spiraling in through tidal orbital decay. In some science fiction books this is called "The Eater" or the Black Monster.

We know there is a galactic-core black hole in the direction of the constellation Sagittarius - the center of the Milky Way - because astrophysicists can see huge Doppler shifts in radiated light over a very small angular separation. On one side of that tiny zone, the Doppler shift indicates material rotating TOWARDS us (the absorption bands are blue-shifted); close by on the other side there is a red shift, telling us that material is rotating AWAY from us. The zone was originally named "Sagittarius A" - after the apparent brightest star classified in that constellation by early astronomers - and sensitive satellite detectors indicate that it radiates light all the way up into the X-ray range of energies. The latest indirect calculations suggest this area, called Sagittarius A* ("Sagittarius A-star", or "Sgr A*" for short), is no bigger than the diameter of Mercury's orbit around our Sun - but holds a mass equivalent to about 4 million Suns in that relatively tiny volume. It's hard to see any of this directly, as the whole mess is about 26,000 light-years away from us, so it has taken some very clever astrometrics by some very smart astrophysicists to get these numbers.
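One way those astrophysicists "weigh" Sgr A* is by tracking individual stars orbiting it and applying Kepler's third law, M = 4π²a³/(GT²). A sketch using round numbers close to the published orbit of the star S2 (semi-major axis ≈ 970 AU, period ≈ 16 years - approximate values assumed here for illustration):

```python
import math

G = 6.674e-11     # gravitational constant, m^3/(kg*s^2)
AU = 1.496e11     # meters per astronomical unit
YEAR = 3.156e7    # seconds per year
M_SUN = 1.989e30  # kg

def kepler_central_mass(a_m, period_s):
    """Mass of the central body from a small satellite's orbit:
    M = 4*pi^2 * a^3 / (G * T^2)."""
    return 4.0 * math.pi**2 * a_m**3 / (G * period_s**2)

# Approximate orbit of the star S2 around Sgr A*:
mass_kg = kepler_central_mass(970 * AU, 16 * YEAR)
print(mass_kg / M_SUN)  # on the order of a few million solar masses
```

Even with rounded inputs, the answer comes out in the millions of solar masses - far too much mass in far too small a volume to be anything but a black hole.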

~~~~~

This seems like more than a normal 3-yr-old might be able to absorb. I am struck, however, that this 3-yr-old of yours has such a wide-ranging interest in scientific things. She could not get there without a highly supportive parent who will spend the time at least trying to answer her questions. You must have some rather eclectic conversations with your daughter.
~~~~~

Wednesday, June 13, 2012

Age Dating - and Why it is VERY Important


We have received many queries that implicitly ask a question about how old something is – typically the odd rock that they found in their backyard. A review of age dating, and why it can be critically important, seems appropriate here. 

Q:  
How old is this rock?
- various 

In 1979, a young USGS geologist named Rick H. was mapping flows on the north flank of a beautiful, glacier-covered, symmetric volcano in southwest Washington State. He had some idea of how old the Goat Rocks dome he stood on was – textures and some sparse historical information suggested that it was very young. He had no idea that within a year the huge outcrop he stood on would be moved many miles to the north – that the spot where he then stood would be more than a thousand feet up in the air. The 1980 eruption of Mount St Helens killed 57 people – that is, the number authorities are certain of; they speculate that more were caught in the blast than anyone knew about. The death-toll could have been greater by more than 800 people. By a miracle of governance, those people were being held back at a roadblock until 9am on May 18th, when they were to be let in to visit their property around the volcano. Less than an hour before the gate would have been opened, the monster blew up catastrophically in a lateral blast. Gray dacite dust fell on cars in Atlanta a day and a half later.

As the new chief scientist for volcano hazards, I made a point of visiting - and spending time listening to - every single staff member scattered over six different centers. This was not a trivial exercise; it meant lots of airport time and lots of listening. I made a point of spending the same amount of time with the technicians as I did with the senior scientists, and this helped me get a better view of how things were really working within the organization that I had just inherited. In one case, techs clued me in that one of our observatories was no longer functioning, due to a perfect storm of very human conflict starting with a management failure.

The experience was not all grief, however. One of the people I spent time listening to was Andy C., a brilliant young PhD geologist/geochemist who had decided to specialize in age-dating. I also talked with Jim S., a smart, furiously hard-working tech who worked with him. After listening to them, I travelled to the USGS national headquarters in Reston, VA, and tin-cupped around the building. I mean this literally – I carried around a tin cup with me to help break down resistance by disarming people with humor. I was looking for “spare change” in people’s budgets that I could divert to Andy’s laboratory. Spare change in my little 120-person volcano science team usually meant a few thousand dollars from my cost center budget. Two thousand dollars would pay for a young scientist to attend a science meeting, where he would not only learn what was going on in his field, but connect with people he could cooperate with. This meeting attendance had the effect of leveraging our meager funds to accomplish quite a bit more with them by getting others to help us accomplish our objectives.

To put this in perspective, a Stryker combat vehicle costs about $1,500,000. For the Department of Defense, however, I was looking for funding down in their noise-level. For them, our needs were what Sherrie G., a DOD executive, dismissed as “decimal dust.”

But Andy had both energy (he frequently worked 60-hour weeks) and a vision. His vision was to create a center of excellence: a laboratory for high-precision dating. I found and funded him with $250,000 to develop the world’s best state-of-the-art 40Ar/39Ar age-dating laboratory. This has allowed him to steadily refine the ages of very young volcanic rocks, which in turn allows us to put better and better parameters on the eruptive frequency of volcanoes – so we know what we may or may not have to worry about.

In the 10 years that followed, Andy accumulated enough very good age-dates that he could start looking, in broad brush, at the eruptive history of the entire Cascades range - and he can now see episodic pulses of eruptive activity over the past 500,000 years. Importantly, this includes the first hints that we are entering an unusual period of volcanic activity right now.

In the past 5 years, Andy refined his +/- errors on an age-date from several thousand years to just 400-500 years. He did this by gaining precise control on the atmospheric argon component in rocks, but also by refining his sample-collecting techniques. He sought the centers of lava flows - parts that were "platy" because they were shearing as they cooled while still flowing. He also avoided porphyries, because their large internal crystals formed under different circumstances than the fine-grained groundmass that had extruded out onto the Earth's surface. Other things he avoided included air-vesicles and glass - that is, water-quenched lava, which inevitably carries an argon contribution from the water. How did he sort these out? By licking the rock with his tongue. If his tongue stuck slightly to the rock, experience showed that he would get a low-precision date from it. Since he was working long hours to deal with a huge back-log of samples, this kind of "pre-sorting" went a long way toward narrowing the error-bars on his dates.

So..... How Does Age-Dating Work?

Wait, you ask, how does age-dating work?  In the pre-radioactive isotope days, crude dating could be done by measuring how fast sediments accumulated in a lake-bottom, measuring the thickness of a stack of those sediments, and noting which sedimentary units lay above (were younger than) another layer. But while you could easily get the relative ages with stratigraphy, you couldn’t get good absolute ages – there were too many variables, like water levels, wind influences, and changing sedimentation rates.

It's not hard to get the radioactive decay rate of just about anything if you have something like a Geiger counter: a certain number of atoms are "popping" every minute, and you can measure how many atoms were in the sample to begin with using some relatively straightforward chemistry.

Once you have decay rates, age dating - in principle at least - is pretty straightforward. A mineral solidifies out of a magma mush somewhere with uranium in it. By early in the 20th Century, the decay process of uranium was well known… there were intermediate "daughter products" with different half-lives, but they all ended up at stable lead, where the decay process stops. All you really needed to do was measure the lead-to-uranium ratio precisely, and with the rate of decay you could get a handle on how long the mineral had been sitting there since the melt solidified. This wouldn't work if the rock had been metamorphosed (or "stewed and cooked," as old miners would say) since the initial solidification; in that case, the age you got was that of the last re-melt.
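In equation form: if a measured fraction of the parent has decayed into daughter, the age is t = (1/λ)·ln(1 + D/P), where λ = ln2 / half-life. A sketch for the uranium-235 → lead-207 chain (the half-life is the standard 704-million-year figure used later in this post; the measured ratio is invented for illustration):

```python
import math

HALF_LIFE_U235 = 704e6  # years
LAMBDA_U235 = math.log(2) / HALF_LIFE_U235  # decay constant, 1/years

def decay_age_years(daughter_to_parent_ratio, decay_constant):
    """Age since the mineral closed: t = (1/lambda) * ln(1 + D/P)."""
    return math.log(1.0 + daughter_to_parent_ratio) / decay_constant

# A hypothetical mineral with a measured 207Pb/235U ratio of 0.5:
print(decay_age_years(0.5, LAMBDA_U235) / 1e6)  # ~412 million years

# Sanity check: when D/P == 1, exactly one half-life has passed.
print(decay_age_years(1.0, LAMBDA_U235) / 1e6)  # ~704 million years
```

This is the idealized picture - it assumes the mineral started with no daughter lead and stayed a closed system, which is exactly what metamorphism ruins.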

All is not lost, however. Some really smart people eventually worked out how to get something of a handle on even this. It required some good geology, some good chemistry, and some clever statistics… but you could at least get an idea if something had been messed with since original formation.

There were other problems, however. The half-life of uranium-235 is 704 million years, and the precision is not that hot when you measure micrograms of uranium and lead in a mass spectrometer and try to divide that into 704 million years. In the best of circumstances you get a rather large plus-or-minus – many thousands, even hundreds of thousands of years. In many situations that doesn't matter all that much. When we were trying to figure out which rocks arrived before which in truly ancient southern Venezuela, 500,000 years or 5,000,000 years one way or the other didn't matter all that much.

But if you have two volcanoes, and one erupts every 20,000 years and the other erupts every 200 years, that precision doesn't help you at all. Unless you know the eruption frequency, you have no easy way to know how dangerous a volcano is.

THIS is why age-dating volcanic rocks is so important. Do we pour our meager annual instrumentation budget into Mount St Helens, or into (literally) Crater Lake?

There are other radioactive decay series, of course: rubidium-strontium, carbon-14 (which we use with volcanoes if we can find some fried vegetation under a flow), and the potassium-argon series. You can't always find uranium-hosting minerals, and you usually can't find rubidium-hosting minerals. If you find something burned under a lava flow, it can't be more than a few tens of thousands of years old, or the 14C is already all gone. But potassium-40, which decays to argon-40 with a half-life of about 1.25 billion years, is far more useful: potassium is found in just about all volcanic rocks. (Argon, though, is also a significant constituent of the atmosphere - about 1% - and that turns out to be a problem.)

Rats. There is yet ANOTHER problem: all that argon in the atmosphere will pollute any measurement you make. Unlike carbon-14, which is "made" in the upper atmosphere by cosmic rays transmuting nitrogen atoms, atmospheric argon-40 is simply everywhere - and it seeps into just about everything. If you want real numbers – true age dates – you must find a way to get clean, unsullied samples.
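In practice, the atmospheric contamination is also corrected for numerically, using argon's isotopes: air argon has a known 40Ar/36Ar ratio (about 295.5 in the classic convention), so measuring a sample's 36Ar tells you how much of its 40Ar is just air rather than radiogenic. A sketch - the sample numbers here are invented for illustration:

```python
ATM_40_36 = 295.5  # classical atmospheric 40Ar/36Ar ratio

def radiogenic_ar40(total_ar40, ar36):
    """Strip out the atmospheric component:
    40Ar* (radiogenic) = 40Ar_total - 295.5 * 36Ar."""
    return total_ar40 - ATM_40_36 * ar36

# Invented mass-spectrometer readings (arbitrary units):
total_40 = 1000.0
measured_36 = 2.0  # implies 591 units of the 40Ar are just air
print(radiogenic_ar40(total_40, measured_36))  # 409.0 units are radiogenic
```

The cleaner the sample (less trapped air and water), the smaller this correction - which is exactly why Andy's field pre-sorting mattered so much.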

Where there’s a smart person, there’s a way. Believe it or not, it comes down to something as low-tech as putting your tongue on a rock. Andy - by trial and error - found that the best rocks to date were the ones in the "platy" middle section of a solidified lava flow. The final test was to see if your tongue sticks to the sample that you hammered out. Does it stick? Chuck it and look for another sample, because it won't give you a reliable age-date.

Bottom line:
It comes down to this: if we are being shot at, it's important to know how OFTEN we are being shot at. Then you can plan. You can set up many forms of disaster mitigation to keep a crisis from becoming a catastrophe. Rock-dating information is crucial if we are to have even half a chance of predicting a volcano's future behavior – and roughly calculating the risk it carries. High-risk volcanoes then claim the larger share of our very limited instrumentation budget. Crater Lake (the former Mount Mazama) last erupted catastrophically about 7,700 years ago. Mount St Helens has erupted more frequently than almost all the other Cascades volcanoes combined… its last period of repose was just 24 years. So with good age-dates, we invested the lion's share of instrumentation on this critical, very-high-risk volcano.

We were relieved that we had done so when the 2004 eruption started with just over a week's seismic warning. We had seismic and GPS "eyes" already in place, and with them we could "see" what was going on under the volcano. With a huge experience base acquired by studying hundreds of volcanoes in the US, Kamchatka, Japan, Indonesia, and Latin America, the scientists at the Cascades Volcano Observatory could predict (sometimes to within hours) when the next eruptive pulse was coming – and on 1 October 2004 they called for the evacuation of hundreds of people from the Johnston Ridge observatory.

They could not have made that emergency call if the geologists had not already carefully mapped its past eruptive products. And the eruptive history would not have been deciphered without precision age-dating. The 2004 eruption was not nearly as violent as the one in 1980. But even if it had been, the disaster of 1980 would likely not have been repeated. By 2004 we had dates and knew what this volcano was likely to do.
~~~~~


Sunday, June 10, 2012

Snowball Earth - the Faint Young Sun Paradox


Here’s a detective story for you, and it doesn’t involve a murder.

Astronomers studying young stars like ours have realized that as a Main Sequence star evolves over time, the inner core becomes denser and the fusion rate of hydrogen to helium increases. In other words, our own Sun must have grown brighter and brighter during its first 5 billion years of existence.

Careful studies of stellar evolution have even placed some numbers on this: the energy output of the Sun 2 billion years ago is inferred to have been about 70% - 85% of what it is today. That would not have been enough to warm the Earth above the freezing point of water. The Earth 2 billion years ago should have been a frozen ice-ball, like Mars today (Mars is more distant from the Sun than Earth is, and correspondingly colder).
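You can see the problem with a simple radiative-balance estimate. A planet's equilibrium temperature is T = [S(1−A)/4σ]^¼, where S is the solar flux at the planet, A its albedo, and σ the Stefan-Boltzmann constant. Assuming today's albedo (about 0.3) and ignoring the greenhouse effect entirely, for illustration:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2*K^4)
S_TODAY = 1361.0  # present solar flux at Earth, W/m^2
ALBEDO = 0.3      # Earth's present albedo (assumed unchanged, for simplicity)

def equilibrium_temp_k(solar_flux):
    """Blackbody equilibrium temperature with no greenhouse effect:
    T = [S * (1 - A) / (4 * sigma)]^(1/4)."""
    return (solar_flux * (1.0 - ALBEDO) / (4.0 * SIGMA)) ** 0.25

print(equilibrium_temp_k(S_TODAY))         # ~255 K (-18 C) with today's Sun
print(equilibrium_temp_k(0.7 * S_TODAY))   # ~233 K (-40 C) with a 70% Sun
```

Today's greenhouse effect adds roughly 33 degrees to that first number, lifting us comfortably above freezing. Add the same 33 degrees to the faint-Sun case and you are still below 0 C - which is the paradox in a nutshell.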

There is a problem with this conclusion, however: it doesn't agree with ancient evidence gleaned from geology. There are sedimentary rocks in South Africa with ripple-marks and mud-cracks. These rocks are derived from volcanic ash - and are therefore easily dated - at about 2 billion years old. Other rocks dated at 2.7 billion years show fossilized rain-drop imprints. I have personally handled ripple marks and pillow-lavas (lava that is fast-quenched in water) dated at about 1.7 billion years in southern Venezuela. Ancient stromatolites - clumps and mats of blue-green algae - have been found in rocks over 3 billion years old in Australia.

The evidence is everywhere: the atmosphere may have been different, but there was liquid water on the Earth’s surface as far back as we can test.

What gives?  The arguments used to explain this so-called “Faint Young Sun Paradox” fall into three main groups:

- The young Earth may still have had a lot of residual heat left over from potential energy accumulated during the accretion process. However, the surface of the Earth would have equilibrated quickly with energy received from the Sun, and the existence of solid cratons back at least 3.4 billion years ago argues for a solid crust. Energy released from the Earth’s interior has actually ramped up with the onset of mantle convection and plate tectonics, now thought to have started about 2.5 billion years ago.

- The Earth’s atmosphere retained heat more efficiently than it does now - for instance, by containing more greenhouse gases like carbon dioxide and methane. Sufficient nitrogen can also act as a greenhouse gas through a phenomenon called nitrogen broadening. There are a few questionable gas inclusions in ancient rocks, but scientists argue over how pristine the gases in these inclusions actually are - or whether they have diffused (either into the rock or out of it) over time.

- The Earth’s albedo, or surface reflectance, was lower in the past. Lower surface albedo could have been due to less continental area (more dark, absorbing ocean), or perhaps to the lack of biologically induced cloud condensation nuclei. How would you ever obtain evidence for something like cloud cover 2.5 billion years ago, however?

There are other suggested explanations out there. One is the modulating effect of a stronger Solar Wind in Archean times (i.e., greater than 2.5 billion years ago). Another is that due to orbital mechanics and tidal effects, the Earth’s orbit was once closer to the Sun.

This last explanation is treated skeptically by most astronomers because of some bad science propagated in several books by Immanuel Velikovsky a generation ago. The Earth-Moon distance varies depending on where the Moon is in its orbit, but lunar laser ranging experiments show that, on average, the Moon is receding from the Earth at a rate just under 4 centimeters per year. This is due to tidal energy transfer: the Earth’s rotation slows as angular momentum is passed to the Moon’s orbit via the oceans and the tidal deformation of the Earth’s crust, with some of the energy converted to heat along the way. It is a logical step to infer that the Earth’s orbit around the Sun could expand for the same reason over time.
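To get a feel for the scale involved, here is a deliberately naive extrapolation - assuming (wrongly, in detail, since tidal dissipation changes over geologic time) that today's recession rate held steady all the way back to the Archean:

```python
# Back-of-envelope: linear extrapolation of the Moon's present recession rate.
# Both numbers are rounded, illustrative values.
RECESSION_M_PER_YR = 0.038    # ~3.8 cm/yr from lunar laser ranging
MEAN_DISTANCE_KM = 384_400    # present mean Earth-Moon distance

def recession_km(years):
    """Total recession (km) if today's rate were constant over the given span."""
    return RECESSION_M_PER_YR * years / 1000.0

moved = recession_km(2.5e9)   # over 2.5 billion years
print(f"{moved:,.0f} km, or {moved / MEAN_DISTANCE_KM:.0%} of today's distance")
```

Even this crude estimate gives a change of roughly a quarter of the present Earth-Moon distance - enough to show why orbital-evolution arguments cannot simply be dismissed, even if the specific mechanisms Velikovsky proposed were nonsense.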

There is a major problem with all these theories: with time, the evidence for anything becomes increasingly fragmentary, increasingly suspect. It’s like a Cold Case murder - only 2.5 billion years cold.

Scientists are clever folk, however - and they keep thinking, keep looking for other ideas. Recently some of them have gone back to the fossil imprints of ancient rain-drops in volcanic ash, and have conducted comparison experiments to estimate the density of the Earth’s ancient atmosphere. There are many variables to deal with, however, including how big the raindrops were, and how much moisture was in the volcanic ash. Careful calibration has at least allowed scientists to put a range on the density of the ancient Earth’s atmosphere: between 50% and 105% of what it is today. This immediately calls into question the greenhouse gas argument.

We also know from other geological evidence that the Earth’s atmosphere began to fill with freed-up oxygen around 2.5 billion years ago. Rounded pyrite grains found in ancient South African sandstones - grains that could not have survived in the presence of oxygen - are one proof of this. The Great Oxygenation Event came at the expense of methane and carbon dioxide, which biological processes were already starting to sequester in the form of carbon accumulating in the bottoms of ancient swamps.

You recently drove your car to the grocery store using gasoline - some of that sequestered carbon. That same trip thus released more of a greenhouse gas to the Earth’s atmosphere.

And so the Earth grows hotter and hotter...

~~~~~

Thursday, June 7, 2012

Rocks in Campfires


Sometimes we get extremely practical questions - so mundane that no one has ever spent time scientifically researching or studying them. Another way to put this: a million people have conducted a million unreported independent experiments. However, equally many people have opinions!

Case in point: putting wet rocks from a river in a campfire:


Q: 
Hi there!
     First let me say I was so happy to find a result when I googled "ask a geologist".  The internet continues to impress.
     A friend and I recently were talking about rocks in campfires, and the safety of it.  She was convinced that solid rocks can explode with shrapnel-like effect if overheated.  I ceded that I believe rocks may be able to explode, but rocks that were solid and found in a dry area would probably be safe, and that it was more likely to simply crack than to actually provide enough force to send a fragment out at high velocity.
     So obviously when we returned I did a bit of research, and found that a lot of people talk about this, but no-one seems to have any concrete evidence.  It's all either anecdotal or stated as theory.  Most of these involve "river rocks", rocks which have been exposed to water over long periods, "soft rocks" such as sandstone or pumice, the combination of the two, or simply "rocks which have air or liquid in them".
     I don't doubt for a minute that there are circumstances where gas or liquid inside of a rock can expand and cause the rock to break.  What I question is whether or not the explosion can produce a shrapnel-like effect - whether the force can be great enough to send a piece of the rock out at great velocity, or if it would be more likely for it to simply crack with little effect.  I did a tiny bit of research only to realize that the mechanics regarding density of rock and vapor pressure were pretty deep.  The engineer in me realizes that it relies on many variables: the distance of the pressurized water to the surface, the shape of the rock, and of course its density, the amount of vaporizable water, and that water's coordination within the rock.
     So whaddya think?  Can rocks explode like grenades?
- Andy B.

A:

That's a classic question with thousands of anecdotal answers. I have personally seen a river rock, deposited in the middle of a roaring campfire, explode. There was a distinct bang sound (several, actually), but I don't remember any pieces flying off. Others around the campfire told me that yes, they had seen fragments fly out of other wet-rocks-in-a-fire experiments, and considered anyone putting a water-soaked rock in a hot campfire as being unusually foolish. I've seen worse: in a field camp in the deep Venezuelan jungle, I watched obreros throw half-used cans of insecticide spray into a campfire with predictable consequences.

The engineer in you has homed in on the best answer (I hesitate to use "correct" here, meaning that it has been experimentally verified, or verified from personal experience - take your pick). The problem is that there are too many variables. These include how "tight" the rock is, how fractured it is, how much porosity it has, how much transmissivity it has (how interconnected the pore spaces are), how fast it heats up, and how large a volume it is. There are probably others.

It comes down to this: to constrain the variables, one must do thousands of experiments to get anything statistically meaningful. I suspect this experiment has been "conducted" millions of times by millions of kids around campfires, but no one ever collected and compiled the results. I can't imagine anyone other than MythBusters having the time and resources to do an appropriate set of experiments. I would be surprised, however, if they have NOT done experiments like this.

~~~~~


Monday, June 4, 2012

The Largest Possible Bay Area Earthquake


Some questioners have serious worries about the risks in where they live – polluted groundwater from industrial plants or fracking, exposure to volcanic eruptions, and risk from earthquakes, hurricanes, and tornadoes. Sometimes people are just trying to get some reference they can use for legal action or engineering decisions they are required to make… or justify. In this example it’s not clear what the objective is, but I hope the answer is educational.

Q:
Please provide me with Web links or literature citations to the current estimates of maximum credible earthquakes in the San Francisco Bay Region.

Thank you. – Robert Z.


A:
I'll lay out selected references first, then explain the principles underlying them.

The largest historic earthquake to strike northern California remains the M = 7.9 event of 1906. These aren't that unusual; I remember vividly being thrown out of my bed as a small child by a magnitude 7.7 earthquake in the southern San Joaquin Valley.

... we find that "Determining whether the intraslab events occur within the crust or mantle portions of the slab is not only important for understanding the rupture process of these events, but also for estimating the maximum possible magnitude. Normal-faulting earthquakes confined to the 7 km thick subducted oceanic crust are not likely to exceed the magnitudes of the three large (M6.5-7.1) Cascadia intraslab earthquakes, while allowing a thicker seismogenic zone suggests that much larger earthquakes could occur."

...we read "The Calaveras Fault plays a major role in accommodating plate-motion slip in the San Francisco Bay region.  Geodetic modeling, historical creep data and paleoseismic trenching suggest a fault slip rate of about 15 mm/yr on the Central Calaveras Fault, which extends from San Felipe Lake on the southeast to Calaveras Reservoir on the northwest.  Within the uncertainty of limited geologic data, the long-term slip rate on the Central Calaveras Fault is consistent with the short term rate estimated from aseismic creep and geodetic modeling.  However, a critical question is whether or not the Central Calaveras Fault produces large-magnitude earthquakes, or whether the fault relieves strain only by aseismic creep and small to moderate earthquakes.  Existing seismic source characterization models generally assume or strongly weight scenarios in which the fault may rupture in earthquakes up to magnitudes of about M6.2.  Understanding the maximum size of earthquakes possible along the Central Calaveras Fault is critical to estimating probabilities of future earthquakes in the San Francisco Bay region."

In general, the maximum possible moment magnitude correlates closely with the amount of fault surface that actually breaks - and by how much it slips (rupture area * average slip).

A subduction earthquake such as Cascadia in January 1700 (or Tohoku, 11 March 2011) presents a much larger potential slip-surface because the fault dips relatively shallowly, and can reach much farther down-dip (~200 km for the Tohoku event) before it hits the plastic zone of the Mantle, where pressures and temperatures are too high for brittle rupture. The San Andreas is a near-vertical, right-lateral transform fault, with multiple bifurcations and bends, all of which tend to limit the surface area where slip can actually take place. This suggests that M ~ 8 is the maximum that could be expected for the San Francisco Bay area.
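The area-times-slip scaling can be sketched numerically. Seismic moment is M0 = mu * A * D, and moment magnitude is Mw = (2/3)(log10 M0 - 9.1) with M0 in newton-meters; the shear modulus and the 1906-style rupture dimensions below are rough, illustrative values, not measured parameters of that event.

```python
import math

# Moment magnitude from rupture geometry: M0 = mu * area * slip,
# Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m.
MU = 3.0e10  # typical assumed crustal shear modulus, Pa

def moment_magnitude(length_km, depth_km, slip_m, mu=MU):
    """Mw for a rectangular rupture of the given length, seismogenic depth, and average slip."""
    area_m2 = (length_km * 1.0e3) * (depth_km * 1.0e3)
    m0 = mu * area_m2 * slip_m          # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# ~480 km rupture length, ~12 km seismogenic depth, ~4 m average slip
print(round(moment_magnitude(480, 12, 4), 1))  # roughly Mw 7.8 - close to the 1906 estimate
```

Because Mw depends on the logarithm of area times slip, a strike-slip fault capped at ~12-15 km of seismogenic depth simply cannot accumulate the moment of a gently dipping subduction interface hundreds of kilometers wide - which is the quantitative heart of the M ~ 8 ceiling argued above.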

~~~~~

Friday, June 1, 2012

Acid and Pyrite


Many people are interested in minerals – but some are also interested in how minerals interact with other things, for very practical reasons. A mining engineer will want to know something about all the minerals related to – found in – an ore deposit, and there could be dozens. A clear understanding is required of both the minerals and their interactions with heat and acids; without this, it is impossible to sort each mineral out of the raw ore, since each will react differently to different processes and solutions. The mine infrastructure designers – the people who build 50-million-dollar mills – can then set up a mill and plant to extract what they want from the ore… Lacking this understanding, the ore will just remain strange-looking dirt.

Keep in mind that the minerals were concentrated by a complex chemical-physical process in the first place. Many people think of fluids in the Earth as being the same thing as potable groundwater – but the fluids forming and interacting with ore deposits can often be hot and very acidic (that’s why the concentration happens). Experiments with different solutions on different mineral species help mining engineers and geochemists to work out the extraction process. Pyrite is commonly found in almost all sulfide deposits, and must be removed first to extract the gold, copper, molybdenum, silver, lead, tin, etc., being sought after. Our early ancestors – creators of the Bronze Age of ancient Greece – had already worked out much of this process millennia ago.

Q: 
Will nitric and/or muriatic acid affect pyrite?
Thanks for your time. – Aaron C.

A:
Muriatic acid is just a tech grade of hydrochloric acid.

Pyrite fuses easily under heat, becoming magnetic and giving off sulfur dioxide fumes (SO2 – that burnt-match smell). Pyrite is insoluble in hydrochloric acid (not an oxidizer). However, a fine powder (which exposes much more of the pyrite surface area) will dissolve in concentrated nitric acid (HNO3), which IS an oxidizer.
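For reference, one commonly cited overall reaction for pyrite in concentrated nitric acid is sketched below. Treat the stoichiometry as illustrative: the actual product mix depends on acid concentration and temperature (more dilute acid can leave elemental sulfur instead of sulfuric acid).

```latex
\mathrm{FeS_2} + 8\,\mathrm{HNO_3} \;\longrightarrow\; \mathrm{Fe(NO_3)_3} + 2\,\mathrm{H_2SO_4} + 5\,\mathrm{NO}\uparrow + 2\,\mathrm{H_2O}
```

The nitrogen oxide gas given off (which oxidizes to brown NO2 fumes in air) is the visible sign that oxidation is doing the work that plain hydrochloric acid cannot.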