Category Archives: Geophysics

Beneath the ice; Greenland’s bedrock


Greenland is giving up secrets, one at a time

Greenland’s bedrock revealed by radar

Greenland’s ice-sheet is a significant component of the global ice-ocean volume budget. Ice-sheet waxing and waning has figured prominently in sea level change over the last 3 million years – not quite as long as its Antipodean counterpart. It is front and center of current estimates of sea level change and of cold, freshwater through-flow to the North Atlantic Ocean. In focusing on ice budgets and sea level we tend to overlook the fact that the ice-sheet is underlain by bedrock. Melting can take place at the base of an ice-sheet as well as at the top, and ingress of warmer seawater at the base (where the ice meets the coast) exacerbates it. Coastal ice acts as a kind of buttress to the interior regions of ice sheets, which means that changes in melting and calving at the coast will affect ice-sheet dynamics in the interior. Thus, knowing the depth and topography of the buried bedrock surface permits more accurate estimates of ice volumes, and provides the kind of data needed to rank ice sheet-coast intersections at risk of increased melting and calving.

We catch glimpses of Greenland’s bedrock along its coast. The foundations are metamorphosed Precambrian sedimentary and igneous rocks as old as 3.9 billion years. Zircon crystals in some of the rocks are even older, indicating the presence of solid crust more than 4 billion years ago.

The most widely used method of measuring ice thickness relies on radar signals beamed from planes and satellites. Radar signals travel through the ice until they are reflected from boundaries like the top of the bedrock. Reflection occurs because of the different electrical properties of ice and bedrock (the process is analogous to reflection of seismic signals from different rock layers in the earth, except that radar signals respond to changes in electrical properties rather than changes in density). It’s a bit like lifting a veil; you never know what you might find.
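The thickness calculation itself is straightforward once the radar velocity in ice is known. Here is a minimal sketch, assuming a typical relative permittivity for glacier ice; the constants are illustrative round numbers, not the values used in any particular survey’s processing.

```python
# Sketch: converting a radar echo's two-way travel time to ice thickness.
# Glacier ice has a relative permittivity of roughly 3.17, so radar travels
# through it at about 168 m per microsecond (illustrative values).

C_VACUUM = 299_792_458.0   # speed of light in a vacuum, m/s
EPS_ICE = 3.17             # approximate relative permittivity of glacier ice

def ice_thickness_m(two_way_time_us: float) -> float:
    """Depth to the bedrock reflector from a two-way travel time (microseconds)."""
    v_ice = C_VACUUM / EPS_ICE ** 0.5               # radar wave speed in ice, m/s
    return v_ice * (two_way_time_us * 1e-6) / 2.0   # halve it: down and back again

# An echo returning after 35.6 microseconds implies roughly 3 km of ice:
print(round(ice_thickness_m(35.6)))
```

The halving is the key step: the recorded time covers the trip down to bedrock and back up to the aircraft.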

The radar data is collected along intersecting flight paths, forming a grid that extends over most of Greenland. Several programs have gathered data over large swaths of the island, the most extensive being NASA’s Operation IceBridge, which has produced ice thickness measurements over flight paths totaling more than 580,000 km. A comprehensive analysis of all this data (involving an international consortium of 32 institutions) was published in a 2018 issue of Geophysical Research Letters (lead author M. Morlighem).

Ice thickness, and therefore depth-to-bedrock data, is used to refine digital reconstructions of bedrock topography, to calculate ice volumes more accurately and, in the event that the ice melts, to estimate the potential rise in sea level from the increase in ocean volume (currently put at 7.42m, notwithstanding other effects such as rebound of the landmass and surrounding ocean floor, i.e. isostatic rebound).
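A figure of that order can be reproduced with back-of-envelope arithmetic. The sketch below uses assumed round numbers for ice volume, densities, and ocean area – not the values behind the published 7.42m estimate, which accounts for ice already below sea level and other subtleties.

```python
# Sketch: rough sea-level equivalent of the Greenland ice sheet.
# All inputs are illustrative assumptions, not the published values:
# ice volume ~2.93 million km^3, ice density 917 kg/m^3, meltwater
# density ~1000 kg/m^3, global ocean area ~3.62e8 km^2.

ICE_VOLUME_KM3 = 2.93e6    # assumed total ice volume, km^3
RHO_ICE = 917.0            # density of glacier ice, kg/m^3
RHO_WATER = 1000.0         # density of the resulting meltwater, kg/m^3
OCEAN_AREA_KM2 = 3.62e8    # global ocean surface area, km^2

def sea_level_rise_m() -> float:
    """Rise in metres if the whole ice sheet melted (ignoring isostatic effects)."""
    water_volume_km3 = ICE_VOLUME_KM3 * RHO_ICE / RHO_WATER   # ice shrinks as it melts
    return water_volume_km3 / OCEAN_AREA_KM2 * 1000.0         # km -> m

print(round(sea_level_rise_m(), 1))   # in the vicinity of 7.4 m
```

Spreading the meltwater volume evenly over the ocean surface is the whole trick; the density ratio accounts for ice being less dense than the water it becomes.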

New coastal bathymetry data was also used to map the depth below sea level at the points where glaciers enter the sea. These measurements matter most at depths greater than 200-300m, because exposure of ice to warmer ocean waters exacerbates subglacial ice melting, calving, and ice-front retreat at the contact between bedrock and ice. Identifying glaciers that are susceptible to ocean-forced melting is an important part of ice-budget monitoring (see maps at top of page).

The subglacial bedrock topography presented in these maps was reconstructed using present sea level as the datum. The central region (blue colours) is below sea level and corresponds to a region of thick ice. Overall, about 22% of Greenland’s ice is below sea level. Reconstructions like these represent Greenland with all ice instantly removed. This kind of portrayal is useful for seeing what presently lies beneath the ice, but in reality, if substantial melting did occur, the landmass and adjacent continental shelf would begin to rise in response to the reduced ice load. Isostatic rebound begins almost immediately and would continue long after the ice has disappeared. The central depression, in large part caused by the present weight of ice, could eventually reach elevations as high as, or higher than, the adjacent margins if it fully rebounded to its position before ice began to accumulate.
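A first-order sense of that rebound comes from Airy-type isostasy: removing an ice column lets the crust rise by the ice thickness scaled by the ratio of ice to mantle density. A sketch, with assumed densities:

```python
# Sketch: first-order (Airy-type) estimate of bedrock rebound when an ice
# load is removed. Densities are illustrative textbook values.

RHO_ICE = 917.0       # kg/m^3
RHO_MANTLE = 3300.0   # kg/m^3, the mantle material displaced by the load

def rebound_m(ice_thickness_m: float) -> float:
    """Eventual isostatic uplift after an ice column of this thickness melts."""
    return ice_thickness_m * RHO_ICE / RHO_MANTLE

# Removing 3 km of ice implies roughly 800 m of eventual uplift:
print(round(rebound_m(3000.0)))
```

Real rebound is complicated by mantle viscosity and flexural effects, which set the slow timescale of the response; this ratio only gives the eventual magnitude.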


A Grand Canyon

The new data has allowed investigators to probe other features of the subglacial topography. Greenland’s own version of a ‘Grand Canyon’ wends in sinuous fashion almost 750km from the central part of the island, north to the fiord into which Petermann Glacier empties; in places it is 800m deep (published in Science, 2013 – unfortunately it is not Open Access).

Have a look at this NASA video.

It lies beneath about 3km of ice; it is also below sea level. For size it rivals Arizona’s Grand Canyon. The irregular outline, depth, and general rugged appearance suggest that the channel and its incision into bedrock formed before the ice sheet, at least 2-3 million years ago. Subglacial channels can provide drainage for meltwaters at the base of the ice sheet and as such will influence the dynamics of ice flow. This is a massive structure, but how and why it formed, and its effect on ice sheet behaviour, are still to be guessed at.


The Hiawatha impact crater

West of the canyon, on the northwest fringe of the ice sheet, an intriguing circular shape in the ice led researchers at the Natural History Museum of Denmark to probe deeper into NASA’s IceBridge data. Initial investigations indicated a corresponding structure in the underlying bedrock. Additional airborne radar data was acquired that subsequently confirmed their suspicions. Buried beneath a kilometre of ice was a crater, 31 km in diameter, about 300m deep, with a raised outer rim and raised center.

Greenland's meteorite crater

The morphology revealed by the radar data would probably have been enough to convince most people of its extraterrestrial origin. Fortuitously, the ice sheet here is drained by a couple of subglacial streams that appear to pass through the structure. Sediment samples were collected from these streams where they emerge from the ice. The discovery of quartz crystals exhibiting evidence of high-pressure shock (shock lamellae visible in microscope views) provided the necessary confirmation. This is indeed a meteorite impact crater, the result of a bolide about a kilometre across; it ranks 25th in size among impact structures known on Earth. The radar data also suggests that fragmented bedrock is incorporated into ice near the base of the ice sheet, in which case the impact was probably quite recent – perhaps no more than two million years ago. This was a sizeable impact and it is intriguing to ponder its effects on the course of Pleistocene evolution and climate. It is the first impact crater to be discovered beneath an ice sheet.

Like the deep oceans, ice sheets contain hidden worlds. Remote sensing, like ice-penetrating radar, allows us to ‘see’ what lies beneath. There seems little doubt that Greenland and Antarctica will reveal more geological treasures.



Marie Tharp and the mid-Atlantic rift; a prelude to plate tectonics


Map of North Atlantic mid-ocean ridge

The history of science is littered with the sidelined contributions of women, contributions that at best were pushed aside or ignored, and at worst dismissed as shrill outbursts. Witness Rosalind Franklin’s fraught journey to DNA’s double helix, the recent unveiling of Eunice Foote’s experimental discovery of the greenhouse effect of CO2, and Jocelyn Bell Burnell’s discovery of pulsars, as corrections to a history where women found it difficult to escape the status of ‘footnote’. We can add Marie Tharp (1920-2006) to the growing list of corrections. In 1952 Tharp discovered the central rift system in the mid-Atlantic Ocean ridge (a feature that later would become a critical component of sea floor spreading and plate tectonics), but for many years she was regarded as a minor player in the burgeoning, post-war field of oceanography.

During the War, Tharp, then in her early twenties, took advantage of opportunities for university study, filling openings in science and engineering left by men who had gone to battle. She completed a Master’s degree in geology but, given that geology is a field-based discipline and that women weren’t supposed to go into the field, she extended her studies to a Master’s in mathematics. In 1948 Lamont Geological Laboratory (now Lamont Doherty Earth Observatory) hired 28-year-old Tharp to draft maps of the Atlantic ocean floor, based on the growing database from SONAR and historical soundings. She worked with well-known geologist-oceanographer Bruce Heezen, who spent much of his time at sea. It must have been tedious work, but she counted herself lucky to have a position at all. This was a time when very few universities in America (or anywhere else for that matter) offered science and engineering positions to women; a time of patriarchal condescension – “Mad Men” versus “Hidden Figures”.


description of SONAR
Tharp pored over depth and positional data for years, constructing 2-dimensional profiles of the Atlantic Ocean floor. She was aware, as other oceanographers were, that an elevated region of sea floor apparently separated the east and west Atlantic. This was initially mapped in 1854 by US Navy oceanographer, geologist and cartographer Matthew Maury, and later confirmed with depth soundings taken during the HMS Challenger expedition (1872-1876 – Challenger had 291 km of hemp on board to do this kind of thing; the ridge is generally deeper than 2000m). Tharp wasn’t surprised to find the Atlantic ridge on her profiles. What did catch her attention was the rift-like valley in the central part of the ridge; a geomorphic structure that was consistent through all her profiles. She immediately recognized the importance of this, because it implied significant extension, a pulling apart of Earth’s crust in the middle of the ocean. At the time, the general consensus was that ocean floors were relatively benign, featureless expanses. Her discovery indicated otherwise.
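The soundings behind those profiles rest on a simple echo calculation, whether made with a hemp line’s modern replacement or a SONAR ping. A minimal sketch, assuming an average sound speed in seawater:

```python
# Sketch: the depth calculation behind single-beam SONAR soundings of the
# kind Tharp plotted. The sound speed is an illustrative seawater average;
# real surveys correct for temperature, salinity, and pressure.

SOUND_SPEED_MS = 1500.0   # approximate speed of sound in seawater, m/s

def sounding_depth_m(echo_time_s: float) -> float:
    """Water depth from the round-trip time of a sonar ping."""
    return SOUND_SPEED_MS * echo_time_s / 2.0   # halve: down and back

# A ping returning after 4 seconds implies 3000 m of water, typical of
# the flanks of the mid-Atlantic ridge:
print(sounding_depth_m(4.0))
```

Thousands of such depths, plotted ship-track by ship-track, are what Tharp turned into her cross-sections.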


bathymetry profiles mid Atlantic

According to Tharp’s bio the response by Heezen and his colleagues was that she was being a typical woman – you know, “girl talk”. One can imagine the coffee room banter; ‘she’s great at drafting cross-sections but should leave the interpretation to the more learned’.

However, after some months and more profiles all showing the same rift-like structure, Heezen gradually accepted that it was real. A turning point for Heezen was the coincidence of several mid-ocean earthquake epicenters along the ridge. This was mid-1953. He understood its potential significance, particularly for those who thought that the hypothesis of continental drift had some credence (Heezen was not initially one of those people).

Ocean bathymetry studies in other basins in the early 1950s (Indian Ocean, Red Sea) revealed similar mid-ocean rifts. Tharp had by this time surmised that a rift valley coursed its way almost continuously along the entire length of the North and South Atlantic, a distance of 16,000 km; it was the largest continuous structure on Earth. The Lamont Doherty group gradually realized that the Atlantic structure, together with those discovered in other ocean basins, represented a gigantic Earth-encircling system of mid-ocean rifts, more than 64,000 km long.

Heezen presented their research to a 1956 American Geophysical Union conference in Toronto. Marie Tharp barely received a mention. She did co-author a few subsequent publications as an ‘et al.’, but it was a kind of ‘also ran’; the accolades and approbation went Heezen’s way.

Tharp was fired by the Laboratory, the victim of a spat between Heezen and Lamont boss Maurice Ewing, but she continued to develop the maps at home. She remained in the background, the humble and grateful recipient of a job she considered fascinating: “I worked in the background for most of my career as a scientist, but I have absolutely no resentments. I thought I was lucky to have a job that was so interesting”.

Marie Tharp and Bruce Heezen

Marie Tharp was named one of the four great 20th-century cartographers by the Library of Congress in 1997, was presented with the Woods Hole Oceanographic Institution Women Pioneers in Oceanography Award in 1999, and received the Lamont-Doherty Heritage Award in 2001.

There is no question that Tharp’s discovery influenced the promotion of Continental Drift in the geoscience community. Alfred Wegener’s bold hypothesis (1915) had one major problem – there was no known mechanism that could move oceanic crust and continents around, like some precursor shuffle to a jigsaw puzzle. In 1929 Arthur Holmes posited a mechanism that involved large convection cells in the mantle, but this too lacked an important degree of empirical verification. Discovery of the mid-Atlantic rift provided a solution to this vexing problem, and in 1962 Harry Hess proposed that new magma, via mantle convection cells, was erupted from mid-ocean rifts allowing oceanic crust to spread outwards. This was Sea Floor Spreading, a precursor to the grand theory of Plate Tectonics – the tectonic shift in geological thinking wherein oceanic crust is created at mid-ocean rifts and consumed down subduction zones, with the continents playing tag.

Marie Tharp’s doggedness in her belief and understanding of mid-ocean rifting is now celebrated. It’s taken a few decades, but she is no longer a footnote.


“There are more Exoplanets than stars in our galaxy”


How many stars are there in our own galaxy, the Milky Way? A number frequently bandied about is 100 billion. This is a nice round number. The number could be as high as 400 billion; also a nice round number. In an interesting coincidence, the number of galaxies is also estimated at 100 billion, a number that will no doubt increase as we peer into the farthest recesses of the universe. So, 100 billion galaxies, each with 100-400 billion stars – the numbers are getting out of hand (even for a geologist who works in millions of years).

So when astronomers announce that there are probably more exoplanets than stars in our own galaxy (let alone all the other galaxies), our ego-centric view of the universe seems just plain silly. As of September 25, 2018, there were 3779 confirmed planets, 2737 NASA candidates, and 2819 solar systems. The graph below shows the astonishing rate of discovery over the last two decades; 2016 was a banner year with almost 1500 identifications. The NASA plot shows the estimated planet sizes and orbital periods, relative to Earth; the majority of exoplanets apparently whiz round their stars in less than 100 days, some in a few hours.

So, how does one discover a new exoplanet?

Having a decent telescope at your disposal is a good start. Land-based telescopes are okay, but the real successes have been with orbiting, satellite-based telescopes. Kepler was launched by NASA in March 2009 and was tasked with watching a swath of sky containing about 150,000 sun-like stars. About 70% of the discoveries so far have been made by Kepler. The second orbiting observatory, TESS (Transiting Exoplanet Survey Satellite), launched April 18, 2018, is also dedicated to finding exoplanets, and in its first few days of operation has made some exciting discoveries.

Kepler and TESS use a method of detection called transiting – several other methods have been used, but the transit method has been the most successful (e.g. Radial Velocity measures apparent changes in the velocity of a star’s own motion, or wobble, caused by the orbiting planet; an observer will see the velocity changing as the star moves towards and away from Earth. A nice summary of detection methods has been compiled by The Planetary Society). A transit occurs when a planet passes between its star and an observer on (or orbiting) Earth. The planetary disc will block some of the star’s light, and if the telescope is pointing in the right direction, the reduction in luminosity can be measured. Planetary orbits are periodic, so an important part of the analysis is observing dimming at regular intervals.
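The amount of dimming follows from simple geometry: the fraction of starlight blocked is the ratio of the planet’s disc area to the star’s. A sketch, using rough reference radii for Earth and the Sun:

```python
# Sketch: transit depth is the ratio of disc areas, i.e. the squared ratio
# of radii. Radii below are rough reference values in kilometres.

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fraction of starlight blocked while the planet is in transit."""
    return (planet_radius_km / star_radius_km) ** 2

R_EARTH = 6_371.0      # km, mean radius of Earth
R_SUN = 696_340.0      # km, radius of the Sun

# An Earth-sized planet crossing a Sun-sized star dims it by ~0.008%:
print(f"{transit_depth(R_EARTH, R_SUN):.6f}")
```

A dip that tiny is why space-based photometry, free of atmospheric flicker, has been so much more productive than ground-based surveys.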

Astronomer Edmund Halley (of Halley’s comet fame) and Captain James Cook provide us with a useful historical analogy. Halley surmised that observations of Venus during its transit of the Sun, from different geographical locations, would permit calculations of astronomical distances and hence the size of the solar system (the calculations involved simple trigonometry). Cook was dispatched to Tahiti in time for the June 3, 1769 transit. The disc presented by Venus is small compared with the Sun, but there is a measurable decrease in light during its transit.

A more dramatic example of a transit occurs in our own backyard, when the Moon passes between the Sun and Earth during daylight hours. Partial eclipses produce observable dimming of sunlight, but a full eclipse delivers brief twilight. This principle also applies to exoplanets.

In its first few days of operation, TESS discovered a planet orbiting the star Pi Mensae, a bright dwarf star about 60 light years from Earth. The observed period of starlight dimming indicates that Pi Mensae c has an orbit of only 6.27 days. But what about its size – its radius and mass compared to Earth?

Astronomers start by measuring the size of the star; this they can do quite accurately because stars are fairly predictable. The hotter a star burns, the bluer its light; thus, the colour spectrum emitted provides a good indication of its temperature. Knowing the brightness and temperature it is then possible to calculate the surface area of the star (the larger the surface area, the more light it will emit), and if the surface area is known, simple arithmetic will give you the star’s diameter.
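The ‘simple arithmetic’ here is the Stefan-Boltzmann law, L = 4πR²σT⁴, inverted for the radius. A sketch using standard solar reference values as a sanity check:

```python
import math

# Sketch: recovering a star's radius from its luminosity and surface
# temperature via the Stefan-Boltzmann law, L = 4*pi*R^2*sigma*T^4.
# The solar inputs below are standard reference figures.

SIGMA = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def star_radius_m(luminosity_w: float, temperature_k: float) -> float:
    """Radius implied by a star's total light output and surface temperature."""
    return math.sqrt(luminosity_w / (4.0 * math.pi * SIGMA * temperature_k ** 4))

L_SUN = 3.828e26      # W, solar luminosity
T_SUN = 5772.0        # K, solar effective temperature

# Plugging in solar values recovers the Sun's radius, about 6.96e8 m:
print(f"{star_radius_m(L_SUN, T_SUN):.3e}")
```

For a distant star, luminosity comes from apparent brightness plus distance, and temperature from the spectrum, exactly as the paragraph above describes.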

During a planet’s transit, a measurable proportion of star light will be dimmed – in other words, the planet will dim the light in proportion to its size. Bingo! We now know the size (radius) of our exoplanet. But we still need to know how ‘heavy’ it is.

This is determined by observing the gravitational tug of war between the planet and its sun. The orbiting exoplanet will cause its sun to wobble about the system’s common centre of mass; the degree of wobbling will be proportional to the exoplanet’s mass (we already know the star’s mass). These very small gravitational perturbations can be measured by Kepler, TESS, and earth-based telescopes.

Knowing the exoplanet’s size and mass gives us all the information we need to calculate its density. Newly discovered Pi Mensae c has a radius 2.14 times, and a mass 4.82 times, that of Earth. So it is roughly twice Earth’s size, but too close to its sun to be habitable.
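Those two ratios are enough to work out the density relative to Earth, since density scales as mass divided by radius cubed. A quick worked sketch, taking Earth’s mean density as 5.51 g/cm³:

```python
# Sketch: bulk density of Pi Mensae c from the radius and mass ratios
# quoted above. Density scales as mass over radius cubed.

EARTH_DENSITY = 5.51   # g/cm^3, mean density of Earth

def bulk_density(mass_rel_earth: float, radius_rel_earth: float) -> float:
    """Mean density in g/cm^3 from mass and radius given in Earth units."""
    return EARTH_DENSITY * mass_rel_earth / radius_rel_earth ** 3

# 4.82 Earth masses packed into 2.14 Earth radii gives about half Earth's
# mean density - consistent with a substantial water or gas component:
print(round(bulk_density(4.82, 2.14), 2))
```

The cube in the denominator is why a planet only twice Earth’s radius can hold nearly five Earth masses and still come out much less dense.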

The Transit method is not without its drawbacks. Importantly, the exoplanet orbit must be aligned with the observer. There must be as many orbit geometries as there are planetary systems in our galaxy, which means our telescopes can detect only a fraction of all possible exoplanets. Binary stars (i.e. two stars in very close proximity having mutual orbits) can also complicate observations of transit and gravitational wobbles. Celestial bodies the size of Jupiter can also be problematic because this size range can include some dwarf stars.

I don’t know about you, but it seems that new discoveries or exploration events are announced almost on a daily basis. Some of the latest excitement centers on two small robots that are playing leap frog on asteroid Ryugu. The number of questions seems to expand exponentially, the more we delve into our universe. Exoplanet science will go a long way to answering some of the questions.


Crème brûlée, jelly sandwich, and banana split; the ménage à trois of layered earth models


Some things in science are just too difficult to comprehend: the temperature at the center of the sun (15,000,000°C), the age of the earth (4.6 billion years), the size of a nano-particle (1-100 nanometres, or billionths of a metre). We can include in this list of imponderables the skinny outer layers of the earth: the one we are in daily contact with (the crust), and the layers beneath it. Our familiarity with the crust is usually in terms of the dirt, rock, and water we work with. But what is it like 30km down? And, beneath the crust, the upper mantle is beyond reach of our senses. What does this layer look like? How does it respond to being pushed around?

Some scientists (geologists, geophysicists) spend a great deal of time pondering questions like these. The crust and upper mantle layers are collectively referred to as the lithosphere. Beneath the continents it averages 150 km thick; beneath the oceans it thins to as little as 10 km at the mid-ocean ridges. Given that we spend our entire lives on the uppermost veneer, a reasonable person might ask ‘why is it important?’.

A few common answers: most earthquakes are generated in the lithosphere; magmas erupted at volcanoes melt at these depths. But the overarching reason is that all tectonic plates are born and destroyed as lithosphere. Plate tectonics governs pretty well everything that happens on earth over geologically short and long time-scales. So, what appears arcane at first sight does have practical applications.

Enter the dessert trolley. There are three choices: a crème brûlée, a jelly sandwich, and a banana split. Proposed as models of the layered earth, they serve a dual purpose: they provide visual descriptions of how the lithosphere might be structured and, after evaluating the merits of each, they can be consumed.

The crème brûlée is a two-layered model.  A viscous fluid base (custard) is capped by a thin crust of caramelized sugar. The crust behaves in two ways. Poke it gently in the centre, and it will bend slightly – release the pressure and it will return to its original shape.  This represents elastic behaviour (think also of wire springs, or rubber bands). Press it too hard and it will break into several ragged pieces; in this instance, you have exceeded the elastic limit, or strength, and induced brittle failure. Earthquakes represent brittle failure where earth’s crust fractures, is displaced, and in the process causes mayhem. The crème brûlée model is probably the simplest of the dessert trio in terms of its relevance to the lithosphere.
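The two behaviours of the caramel crust can be caricatured as a toy stress response: linear (Hookean) deflection up to an elastic limit, and failure beyond it. A purely illustrative sketch, with made-up numbers:

```python
# Toy model of the creme brulee crust: elastic (recoverable) bending below
# a strength threshold, brittle failure above it. All numbers are invented
# for illustration; this is not a calibrated rheological model.

def poke_crust(load: float, stiffness: float = 2.0, strength: float = 10.0):
    """Return the elastic deflection, or None if the crust breaks."""
    if load > strength:
        return None            # elastic limit exceeded: brittle failure
    return load / stiffness    # Hooke's law: deflection proportional to load

print(poke_crust(6.0))    # a gentle poke: bends, then springs back
print(poke_crust(14.0))   # too hard: the crust shatters
```

Earthquake-generating crust behaves in essentially this way, just with stresses built up over decades rather than a spoon.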

The jelly sandwich is potentially the most variable of the three analogues. It is a three-layered model in which two pieces of bread are separated by a layer of jelly. Here, the upper bread layer represents a strong upper crust, and the jelly a weak lower crust. The bottom bread layer is compared with a strong upper mantle – in contrast to the weak custard (mantle) layer of the crème brûlée. The upper and lower bread layers are both quite bendy (unless you have toasted the bread). If you use plain white bread, bending will be uniform. But if you prefer whole-grain slices there will be lots of lumps and greater heterogeneity, and hence a less predictable response to the application of pressure, or stress. The jelly is much less fluid than custard. It can behave elastically – witness the wobbling, deformation from which it recovers – but at a certain point it too will fail. Bread is less rigid than a crème brûlée crust; any kind of twisting or bending will probably result in some permanent deformation (i.e. it doesn’t bounce back to its original shape). Unlike the crème brûlée crust, bread is less prone to brittle failure.

The banana split adds another level of complication to models of the lithosphere. The rationale for this model is that the lithosphere contains zones of weakness, particularly near the boundaries of tectonic plates – imagine these plates colliding or sliding past one another, where the forces are large enough to create mountain belts and consume oceans. Here, scoops of ice-cream represent blocks of crust and mantle that are separated by large, very deep faults. This is a very temperature-dependent model. As the ice-cream melts, a zone of weakness develops between it and the adjacent scoop (block). The presence of fluid, particularly water, exacerbates this weakness. In this dessert, we need to translate the fluid boundary between scoops of ice-cream to structures tens of kilometres deep. Modern examples include the Alpine Fault in New Zealand, and the San Andreas Fault in California. Some of these large structures can last for very long periods of geological time (100s of millions of years), and potentially influence events in the crust-upper mantle long after they first formed.

All models in science are simplifications of the things we try to explain. Some may consider the dessert trio trivial, even silly, providing little useful scientific information about the crust and mantle. But the utility of models and analogues lies not only in scientific explanation, but in presenting a complex world in visually interesting, and yes, even amusing ways. Models and analogues need to stir the imagination of folk who are not directly involved in this kind of research but have a vested interest in it. In this regard, the dessert trio works, even if folk can relate to them only via their taste buds.


Submarine mud flows and landslides modified Kaikoura canyon during the 2016 M7.8 earthquake


A slashing blow by some mythical behemoth, knifing effortlessly through earth’s rocky foundations; a (seemingly) bottomless chasm, a canyon, with nothing but the wind between you and whatever lies below. A bit over the top perhaps, but canyons often spark the imagination – standing on the lip can feel like being perched on the edge of the world, vertiginous for some.  Most canyons have been carved by the relentless churning of stream and river, incising the layers of rock and removing the sediment to distant shores.

Terrestrial canyons have their submarine counterparts that transect the submerged outer margins of continents and volcanic islands. Submarine canyons commonly mark the transition from shallow continental shelves and platforms to ocean basins, acting as conduits for sediment delivery from rivers, deltas and shallow seas to the deep oceans. And like their land-based cousins, they are deep (1 to 2 km, or in the case of Grand Bahama Canyon, 5 km), steep-sided incisions in the ocean floor. More than 600 submarine canyons have been identified world-wide from bathymetry maps.

The 3-dimensional bathymetric reconstruction of Monterey Canyon (top image), about 100km south of San Francisco, illustrates common attributes of these structures. The canyon cuts deeply into the break between the shelf and the steeper marine slope – in this image the break is a definite line separating light blue from darker shades. The Monterey Canyon head encroaches onto the shallow shelf to within a few hundred metres of the shoreline; this is actually atypical of most other canyons, where incision of the sea floor usually begins closer to the shelf edge. The canyon channel snakes down slope, eventually flattening out on the deep ocean floor; the main channel is joined by several smaller tributaries. Several smaller gullies are also incised into the shelf edge and slope.

The general opinion among earth scientists is that submarine canyons are formed by two main processes: Erosion by sea-floor hugging flows of mud and sand (given the general name sediment gravity flows), and by collapse of the steep margins, producing submarine landslides (and potentially, tsunamis).  Common triggers are thought to include storm surges and earthquakes. The primary basis for this interpretation is abundant geological evidence of past events, combined with some experimental work, but it remains a largely theoretical interpretation because there have been very few direct observations of either process in action.  The reasons for this disparity are that submarine flows of mud and sand are relatively rare events (at least on a human time scale), and because of the difficulties inherent in witnessing such processes in deep water. For this reason, recent events in Kaikoura Canyon (southeast New Zealand) have sparked significant international interest.

Kaikoura Canyon (New Zealand), 60km long and up to 1200m deep, is located along the tectonically active Hikurangi margin, close to the Alpine Fault system that transects northern South Island and the adjacent submarine shelf. At its deepest extent (about 2100m) the main canyon channel merges with Hikurangi Channel which, at more than 1500km, is one of the longest deep-sea channels in the world; Hikurangi Channel wends its way across the more subdued ocean floor towards the abyssal Pacific Ocean. The submarine canyon head is an uncomfortable 1000m from the coast, a spitting distance that elevates the risk of destructive tsunamis evolving from submarine landslides along the canyon walls. November 2016, and the magnitude 7.8 Kaikoura earthquake, provided a rude reminder of the potential for disaster. The seismic jolt activated slope collapse and sediment movement down the canyon slopes and main channel; fortunately, the ensuing tsunami was small, but the bonus for science was huge. Mapping of the canyon head and main canyon channel, fortuitously three years before the earthquake and again three months after the event, has enabled scientists to track the changes to channel morphology and sediment distribution that can be attributed solely to the earthquake (the project was coordinated by NIWA – New Zealand’s National Institute of Water and Atmospheric Research).

The first two images show before (2013) and after conditions at a location near the canyon head (closest to shore). Large swaths of muddy sediment were dislodged from the ridges and slopes, cascading into the main channel; most of the canyon head is now devoid of its sediment mantle.  Parts of the canyon floor are 50m deeper than before the earthquake, because of erosion by the moving sediment.

The second set of images shows before-and-after scenes of the canyon floor at 1800 to 2100m water depth. The striped pattern is formed by large, ripple-like gravel waves, or dunes, that under normal conditions would migrate slowly downslope. However, most of the gravel dunes were moved at least 500m downslope by the rapidly transiting muddy flow.

Much of the dislodged sediment continued as a turbulent muddy flow down the main canyon channel and thence to the deeper Hikurangi Channel; the flow had sufficient momentum to carry it more than 680km from its source. Evidence for this comes from deep-sea cores taken 4 days, 10 weeks, and 8 months after the earthquake.  Cores were taken from the floors of both canyon channel and the more distant Hikurangi Channel, plus the flatter area, or overbank, beyond the channel banks (analogous to a river floodplain).  The reasoning here is that, if sediment gravity flow deposits can be identified in the overbank region, it means that the flow itself was deeper than the channel and, given that we know how deep the channel is, an estimate can be made of the minimum depth of the actual flow.  Overbank deposits were detected in cores, indicating that the moving flow was at least 180m thick, 680km from Kaikoura Canyon. As Joshu Mountjoy (one of the project leaders for NIWA) has pointed out, this has proved to be one of the few occasions in which actual flow dimensions in a deep-sea channel could be measured.

From the Kaikoura event we have confirmed that seismicity can trigger physical modifications to submarine canyons and submarine slopes, and that sediment is flushed from canyons to the deep ocean by far-travelled, turbulent muddy flows (i.e. sediment gravity flows). We have learned something of the stability of the canyon itself and the sediment that gradually mantles the sea floor. The legacy of the Kaikoura earthquake (or any major earthquake for that matter) is often voiced in terms of broken lives, disrupted highways, and the costs of rebuilding. There should be no attempt to minimise these outcomes, but we should also remind ourselves of the advances in scientific understanding of earthquakes, and the geological consequences that accrue from an event like this. We should applaud these gains in knowledge because ultimately such knowledge will help save lives and property.

Most of the information for this post is gleaned from NIWA news articles and publications, linked in the text above.


Archeomagnetic Jerks; our decaying magnetic field


“Archeomagnetic Jerks”. This interesting phrase refers, not to people, but to our global magnetic field; the one that protects us from incoming solar radiation and shields all those electrical devices we’ve come to rely on, including satellites. The magnetic field is generated deep within the earth’s core; it envelops our entire planet. The magnetic poles (not the same as the geographic poles) move around a bit. Measurements of the field over several decades indicate that the north magnetic pole is migrating across the Arctic towards Siberia and has moved about 1000 km since it was first pin-pointed in 1831. Geological investigations of ‘fossil’ magnetic fields also demonstrate that the magnetic field has flipped hundreds of times over geological time, with north becoming south (see an earlier post for details). Disconcerting as this sounds, we can take some comfort in the fact that these polarity reversals do not coincide with any extinctions.  Our ancestors were around during the last full reversal (780,000 years ago) and, I’m happy to report, survived intact. We will survive the next reversal, although some of the electrical accoutrements we have amassed might not.
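The figures quoted above allow a back-of-envelope estimate of the pole's average drift rate. This is illustrative only; the pole's speed has varied considerably, and has been much faster in recent decades.

```python
# Back-of-envelope average drift rate of the north magnetic pole,
# using the figures quoted above (about 1000 km since 1831).
distance_km = 1000.0        # approximate migration since first located
years = 2018 - 1831         # years since the pole was first pin-pointed
rate_km_per_yr = distance_km / years
print(f"average drift rate: about {rate_km_per_yr:.1f} km per year")
```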

Earth’s magnetic field is generated by convection of a liquid nickel-iron layer that surrounds the solid iron core; it is referred to as the liquid outer core. The heat necessary to drive convection comes from the solid inner core; temperatures in the outer core range from about 2700°C to 7700°C. Movement of the liquid iron is also influenced by forces generated by earth’s rotation, called Coriolis forces. Convection in the outer core is not uniform, and variations in flow, perhaps analogous to eddies, produce variations in the magnetic field.  One region of significant variation in the magnetic field is the South Atlantic Anomaly (SAA), a relatively narrow band where magnetic field strength is much lower than expected; this region extends from central South America to central Africa.

The SAA is thought to arise from complex interactions in earth’s liquid outer core beneath Africa and central South America.  And although the SAA is considered by some as a possible harbinger of wholesale magnetic pole reversal, the extent of the anomaly has a more immediate impact because of the interaction between the magnetic field and the Van Allen radiation belts (these radiation belts were one of the first discoveries made by an orbiting satellite).  The radiation belts (there are usually two concentric belts) are doughnut-shaped regions in space where charged particles from the sun are trapped as they interact with the magnetic field. In doing so, they protect us from incoming solar radiation. However, the radiation ‘doughnut’ is not oriented symmetrically with earth’s axis of rotation but is slightly off-kilter. This means that one part of the radiation belt comes very close to earth – in fact to within about 200-300 km of the surface – and this low region is what defines the shape of the SAA.  An important consequence of the SAA is that solar radiation is significantly more intense over the extent of the anomaly; orbiting satellites that transit the region are fitted with protective shields to prevent failure of electrical systems.  For example, the Hubble Space Telescope, which completes about 15 orbits a day, passes through the anomaly on many of them.
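The "about 15 orbits a day" figure can be checked with Kepler's third law. The ~540 km altitude assumed here is a round number for illustration, not a value from the post.

```python
import math

# Sanity check on the orbits-per-day figure using Kepler's third law,
# T = 2*pi*sqrt(a**3 / GM), for a satellite at an assumed ~540 km altitude.
GM_EARTH = 3.986e14   # gravitational parameter of Earth, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m
ALTITUDE = 540e3      # assumed orbital altitude, m

a = R_EARTH + ALTITUDE                      # semi-major axis of a circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / GM_EARTH)
orbits_per_day = 86400 / period_s
print(f"orbital period: {period_s / 60:.0f} min; "
      f"{orbits_per_day:.1f} orbits per day")
```

The result, roughly a 95-minute orbit, works out to about 15 circuits of the earth per day.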

Globally, the strength of the magnetic field has decreased about 15% in the last 200 years. The current scientific dilemma with the SAA is that it seems to be expanding as the magnetic field weakens. This observation, given voice by several media outlets, has led some to predict dire consequences during an imminent magnetic field reversal. The problem here is that scientists do not know whether this weakening is an unusual event, or one that anomalies like the SAA cycle through from time to time.  It is also not well understood whether the SAA is a relatively recent phenomenon that has been around for only a few hundred years, or has persisted over much longer periods of time, perhaps waxing and waning in its extent.
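To put the quoted decline in perspective, here is a deliberately naive linear extrapolation. The real field is not expected to decay linearly, so this merely illustrates the quoted rate.

```python
# Naive linear extrapolation of the quoted global decline
# (about 15% over the last 200 years). Illustration only.
fraction_lost = 0.15
years = 200.0
rate_per_year = fraction_lost / years    # fraction of field lost per year
years_to_zero = 1.0 / rate_per_year      # if the linear trend continued
print(f"decline: {rate_per_year * 100:.3f}% per year; "
      f"naive time to zero field: {years_to_zero:.0f} years")
```

Even on this crude arithmetic the field would take over a millennium to vanish, which is one reason the tabloid alarm is premature.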

In a recent study, Jay Shah and other geophysicists looked at the magnetic signatures in 46,000- to 90,000-year-old volcanic rocks from Tristan da Cunha.  These isolated volcanic islands in the South Atlantic lie within the SAA and may preserve records of older magnetic anomalies. They discovered at least four periods of significantly reduced magnetic intensity, and concluded that the SAA could be a persistent anomaly, or at least one that recurs from time to time.  Although the results are preliminary, they suggest that decreasing field strength in the SAA may have happened before, but without wholesale field reversal (there have been no reversals in the last 90,000 years).

The idea that the SAA is a long-lived phenomenon has received an additional boost from a study of archeological materials by Vincent Hare and colleagues, who measured the preserved magnetic signatures in Iron Age mud from southern Africa.  The archeomagnetic materials used in this study were burnt, or baked, mud from various Iron Age facilities such as grain stores and hut floors (perhaps baked by cooking and heating fires).  Mud baked above a certain temperature (known as the Curie point) will retain the magnetic signatures present at the time, in much the same way as solidified volcanic rocks.  Measurements on these materials show significant changes in magnetic field intensity between 1225 AD and 1550 AD, and during an earlier period around 500 to 600 AD.  Abrupt changes in field intensity like these are commonly referred to as archeomagnetic jerks.

Despite the ‘End is nigh’ approach taken by tabloids and other popular media to this scientific phenomenon, the actual science is equivocal.  It appears that the South Atlantic (magnetic field) Anomaly is long lived – at least many tens of thousands of years – and that the magnetic field intensity of the anomaly has waxed and waned several times.  In this context, the current state of decay of the magnetic field, both globally and in the SAA, may be nothing more than a repeat of other historical and prehistorical events.  However, on a more sobering note, we are overdue for a complete magnetic pole reversal.  No doubt the geophysicists will keep us posted. In the meantime, if a pole reversal takes place tomorrow, you may have to get used to subtracting (or adding) 180° from your compass bearing to ensure you end up where you want to go.
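The closing quip about compass bearings amounts to simple modular arithmetic; a minimal sketch:

```python
def reversed_bearing(bearing_deg):
    """Compass bearing after a hypothetical full pole reversal: add
    180 degrees and wrap into 0-360 (equivalent to subtracting 180)."""
    return (bearing_deg + 180) % 360

print(reversed_bearing(45))    # 225
print(reversed_bearing(300))   # 120
```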


Subcutaneous oceans on distant moons; Enceladus and Europa


Our blue Earth, rising above the lunar horizon, is an abiding image of our watery state that must evoke an emotional response in any sensible person. Cloud-swirled, Van Gogh-like. Such a blue – there’s nothing like it, at least in our own solar system.  A visitor to Mars three billion years ago might have also seen a red planet daubed blue, but all those expanses of water have since vanished, replaced by seas of sand.

Earth’s oceans are unique in our corner of the universe. Except for a thin carapace of ice at the poles, they are in a liquid state, and are in direct contact with the atmosphere to the extent that feedback processes control weather patterns and climates.  Sufficient gravitational pull, plus the damping effect of our atmosphere, prevents H2O from being stripped from our planet by solar radiation (again, unlike Mars). Our oceans exist because of this finely tuned balancing act. …


An Italian job; seismic risk-assessment at risk


Life is a risky business. Not a day goes by when some aspect of our lives comes under the gaze of risk assessment, an analysis of potential adversity, the probability that some event will impact our well-being. No black and white determinism here; we have become probabilistic entities. The seemingly simple act of driving your car is translated into an actuarial assessment that determines the cost of insurance, a government’s health budget, a funeral director’s business plan, or a vehicle manufacturer’s liability. All are predicated on the probability of some event taking place – one chance in x occurrences. No luck, or absence of luck? Luck is when you win the lottery without buying a ticket.

Predicting natural phenomena, like volcanic eruptions, tornadoes, or earthquakes, is the stuff of science. The problem with these kinds of events is that they can have global impacts. How does …


Which satellite is that? What does it measure?


Space may well be the final frontier (there are one or two on earth that still require some work), but the space around our own planet is decidedly crowded. Folk at NASA’s Goddard Space Flight Center (Maryland) estimate that about 2300 satellites now orbit Earth; vehicles in various states of repair, use, or disuse, of which a little more than 1400 are operational …


The earth moved; GPS, earthquakes, and slow-slip


It is often useful to know where you are, in a spatial sense. In the old days (LOL), field geologists (the kind that make maps of rocks and earth structures), armed with topographic maps and a compass, would determine their location from some vantage point using line-of-sight bearings and triangulation.  I don’t hanker for a return to those days. I’m grateful for the kind of location data instantly available on my smart phone – the little blue dot that seems to follow my course across some digital representation of the universe. But I acknowledge a kind of smugness, in the event the digital world nosedives, knowing that I can still find my way; no General Panic Stations (GPS) if the satellite-based Global Positioning System (the other GPS) fails.
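The triangulation described above can be sketched in a few lines. This is an illustrative plane-geometry version (the function name is hypothetical; real map work uses projections, and a third bearing as a check): the observer lies somewhere along the back-bearing line through each landmark, so two bearings fix the position at the intersection of those lines.

```python
import math

def triangulate(l1, b1, l2, b2):
    """Locate an observer from compass bearings b1, b2 (degrees,
    clockwise from north) taken to two landmarks l1, l2 with known
    map coordinates (x east, y north). Returns the intersection of
    the two back-bearing lines."""
    u1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    u2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    dx, dy = l1[0] - l2[0], l1[1] - l2[1]
    # Solve t1*u1 - t2*u2 = l1 - l2 for t1, the distance to landmark 1.
    det = -u1[0] * u2[1] + u1[1] * u2[0]
    t1 = (-dx * u2[1] + dy * u2[0]) / det
    return (l1[0] - t1 * u1[0], l1[1] - t1 * u1[1])

# A peak due north at (0, 10) and another due east at (10, 0) place
# the observer at the origin (within floating-point rounding).
print(triangulate((0.0, 10.0), 0.0, (10.0, 0.0), 90.0))
```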

GPS devices can also be attached to bits of the earth’s crust.  This is useful because the crust, whether continent, sea floor, or volcanic island, is always on the move. …