Lucid thoughts » making complex science accessible » Page 2

Interstellar travel: how to spot a ‘starman’ going by

Posted February 23, 2013 By Kevin Orrman-Rossiter


Massive objects moving at near light speeds do not occur naturally in the universe as we know it. If we detect such objects, it is reasonable to assume they are artificial artifacts of advanced intelligent life. This, according to Garcia-Escartin and Chamorro-Posada, the authors of a recent paper, is a low-cost, sure-fire way of searching for intelligent life beyond Earth.


The habitable zone of Gliese 581 compared with our Solar System’s habitable zone. Image credit NASA.

Searching for life beyond earth is a grand and varied enterprise.

For a start we can look for exoplanets that fall inside the habitable zone of a star. A planet found in this zone may fulfill the requirements for life: liquid water, energy, elements and other nutrients, and appropriate physical conditions. Though we have located many exoplanets in recent times, they are far from Earth – many light years distant. One star system, Gliese 581, is 20.3 light years away (192,048,720,000,000 kilometres) and has three planets in its habitable zone, yet we know nothing about conditions on them. The techniques used to find them can tell us nothing about their ecology – if any exists. Being in a habitable zone does not guarantee life. It is only in recent years that we have realised how inhospitable Venus and Mars are to life – despite being in our habitable zone.

By looking for alien signals or transmissions, as in the SETI programme, we extend our search from ‘possible life’ to intelligent life. For advanced civilisations we look for artificial illumination or interstellar probes.

Let’s face it though, to know we are not alone will require quite good proof for most of us (apart from the misguided minority of UFO believers), and especially for the skeptical scientists.


Bussard ram-jet interstellar drive. Image credit NASA.

The intriguing proposition of Garcia-Escartin and Chamorro-Posada is based on three ideas. The first is that anything travelling faster than 3.3% of light speed (around 35.6 million kilometres per hour) is artificial. All known natural objects travel slower than this, as do our current space probes. This threshold was chosen because it is the estimated speed of the nuclear propulsion ship proposed by Freeman Dyson in the Orion project. Although the propulsion technology is feasible today, the engineering and economic hurdles of building such a craft are far beyond our current means – though achieving such interstellar travel in the next 100 years is certainly not inconceivable.
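The conversions behind these figures are simple enough to check yourself. A minimal Python sketch (the light-year length and speed of light are standard constants; the 3.3% threshold is the paper's):

```python
# Sanity-check the distance and speed figures quoted in the text.
C_KM_S = 299_792.458        # speed of light in vacuum, km/s (defined value)
LIGHT_YEAR_KM = 9.4607e12   # one light year in km (approximate)

# Distance to Gliese 581: 20.3 light years, in kilometres.
gliese_km = 20.3 * LIGHT_YEAR_KM
print(f"Gliese 581: {gliese_km:.3e} km")        # ~1.92e14 km

# The 'region of extraordinary propulsion' threshold: 3.3% of c, in km/h.
rep_km_h = 0.033 * C_KM_S * 3600
print(f"REP threshold: {rep_km_h:,.0f} km/h")   # ~35.6 million km/h
```

Nothing here is exotic: just unit conversions, but they make it easy to see where a figure sits on the paper's logarithmic speed scale.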


Scales of speed with respect to the speed of light in vacuum (logarithmic scale). The fastest man-made objects are in the range of velocities from 1/100,000 c to 1/1,000 c. Examples are the fastest manned ship, Apollo 10 on entry, the Galileo probe during its descent into Jupiter and the solar probe Helios 2. For comparison, we have included the average speed of Earth during its orbit around the Sun and the motion of the Solar System with respect to the cosmic microwave background frame. The fastest natural objects, like hypervelocity star HE 0437-5439 and neutron star RX J0822-4300, move in the scale of 1/1,000 c-1/100 c. We define a region of extraordinary propulsion (REP) for speeds which would point to an artificial object. The REP starts at the estimated speed for the nuclear propulsion Orion ship, which could be built with present human technology. Source original paper, Cornell University.

You are possibly thinking about now: “Doesn’t the mass of an object increase massively as its speed approaches light speed?” You would be correct; this consequence of Einstein’s theory of special relativity is demonstrated quite satisfactorily in particle accelerators around the world. To cover this, the authors identify another consequence of relativity theory: in some key situations, relativistic effects amplify the light reflected from a body travelling at near light speed, allowing the detection of ‘small’ objects.

This brings in the authors’ third criterion. Interstellar travel will be from one star system to another, and the reflected-light magnifying effect would be greatest where Earth is almost in line with the departure and destination star systems.


Earth’s position with respect to the ship’s trajectory. (a) Earth receives the light from the destination star reflected from an approaching ship. (b) Earth receives the light from the origin star reflected from an outbound ship. (c) Earth receives the light from a third star, which is reflected from the ship at an angle. Source original paper, Cornell University.

The authors propose to limit the first search to star systems that are reasonably close to each other (no further than 10 light years apart) to maximise the probability of stellar travel opportunities. Considering that Gliese 581, for example, is more than 20 light years distant from us, I suggest that this criterion is too limiting.

The paper is an interesting, if not compelling, proposition. The authors do calculate what size an artifact would need to be, travelling at their minimum speed (3.3% light speed), to be detected at the distance of one of our closer stellar neighbours. Could such an artifact be detected by the Hubble or James Webb space telescopes, for example? What is the probability of success of such an experiment, compared to say the SETI experiments?

One idea I did find interesting: by focusing on detecting light reflected from ships, we do not need to assume any intention by the interstellar travellers to communicate with us. The ‘signal’ is independent of alien psychology. It is also independent of propulsion technology – we aren’t looking for the ‘signature’ of any particular technology, known or unknown.

It is an interesting paper. I’m not sure they have presented a compelling enough case to convince a funding body – yet.

This article was first posted on Australian Science; you can read it here.


Ahead of his time: the genius of Nikola Tesla

Posted February 2, 2013 By Kevin Orrman-Rossiter

Nikola Tesla – posed in his laboratory.

There is a dominant theme in the life of Nikola Tesla: his undoubted genius. Tesla pioneered, if not invented: AC motors, AC power generation and transmission, high-voltage generation (the Tesla coil), wireless transmission of power and information, radio-controlled boats, cold-discharge fluorescent lighting, and the ‘death-ray’.

It also meant that he was ahead of his time, in many cases unable, or too disdainful, to translate what was by then obvious to him for those of lesser vision or ability. This resulted in tempestuous clashes with entrepreneurial inventors in three major technologies – technologies that defined this as the ‘Age of Electricity’. Tesla’s was no ordinary progression through life, and its colorful and quirky story continues to determine his eccentric place in history – from near invisibility to cult figure.


Tesla, the showman, at his Houston street laboratory in 1898, sending 500,000 volts through his body to light a wireless fluorescent light. Image source Wiki commons.

Two books: many stories

My prompt for writing this essay was my recent reading of two books on Tesla’s life. His autobiography, My Inventions and other writings, first published serially in 1919 when he was 63, is a technicolour, frenetic meditation on his major discoveries and innovations. It mixes his life stories with his inventions, the narrative leaping around in time and place as Tesla seemed to in real life. It is worth reading to gain some sense of the character of Nikola Tesla – even if coloured by his own deliberate self-mythologizing.

The second book, Wizard: the life and times of Nikola Tesla (by Marc Seifer), captures much of the excitement of this early age of electricity. It is a chronology of Tesla’s life, informative in its research and illuminating in its vignettes drawn from contemporary memoirs. At the same time, its chronological presentation gives a misleadingly sequential impression of his life.

Seifer also lacks the engineering or science competence to describe in simple terms the genius of Tesla’s inventions – an essential for a biography of someone whose whole life revolved around his work. In the concluding chapters Seifer’s writing starts to take on the ludicrous credulity of the conspiracy theorist, which is a pity, as the rest of the book is clear of this nonsense.

In defense of Seifer, I think it would be a challenge for any biographer to tell the whole Tesla story. Tesla was completely consumed by his ideas and inventions, eschewing most intimate contact – to the extreme of apparently being celibate his whole life. To make credible his fantastic life is a challenge. Furthermore, a modern reader will, in most cases, struggle to comprehend the archaic technical descriptions and ideas.

The dawn of the Electric Age

This was an age when electricity and magnetism had only recently been linked by the arcane mathematics of James Clerk Maxwell and electricity was still thought to propagate by vibrations of an aether. Tesla was one of the few people alive who understood the physics of what we now call electromagnetism, and could also translate this into tangible inventions.


Wardenclyffe, circa 1903. Source Wiki commons.

Tesla’s name is associated with the invention of the rotating magnetic field and the ability of such a field to produce an electric current. By 1882 Tesla had conceived the AC polyphase motor (patenting it in 1888) – giving the ability to transform electrical energy into mechanical energy. Run in reverse, the same principle gives a generator that converts mechanical energy, from say a waterfall, into electrical energy.

Tesla’s move, in 1884, from Europe to the USA was to develop his own inventions and contribute to Edison’s commercial interests. The collaboration parted ways over what became the AC-DC power war. Edison’s commercial interests were firmly focused on his incandescent lamps and the use of DC power (direct current, such as we get from a battery). Tesla had correctly intuited from first principles that alternating current (the AC power our homes and industries now run on), unlike DC power, could be transported by wires over great distances with minimal power loss.

Ultimately Tesla was proved both scientifically and commercially correct. It was his generator designs that Westinghouse used in the first major hydroelectric power station in the world – the Niagara Falls plant that began powering Buffalo in 1896.

This was a tumultuous period of commercial expansion. The ability to power industry by electricity rather than steam was arguably a bigger leap than that from manual to steam power – certainly in commercial terms. The ensuing lawsuits and counter-suits over patent precedence in motors, generation and transmission roiled across the US and Europe, making and breaking reputations and fortunes. These actions brought Edison General Electric to its knees, forcing it to merge with others to become General Electric.

Westinghouse prevailed, while neglecting to pay Tesla the royalties he deserved – though Tesla had not bothered to ensure he had written agreements. This disdain for the corporate conventions of the time cost Tesla both wealth and reputation. He moved on to other new ideas whilst others claimed his inventions in the courts and the popular press.

Father of the wireless

This was repeated in the next huge modernisation trend – the invention of the wireless transmission of information. By 1893 Tesla was demonstrating the transmission of electric power by wireless means, most notably at the Chicago World’s Fair. He delighted in amazing audiences with fantastic high-voltage discharge displays, passing millions of volts through his body and remotely lighting fluorescent tubes by radio frequency.

Already in 1891 he had discussed his “wireless telegraphy” and demonstrated the technology required in 1892. It was 1894 before Guglielmo Marconi would begin his teenage tinkering in the wireless field.  So why do we remember the name of Marconi as synonymous with radio? Why did he share the 1909 Nobel Prize with Karl Braun rather than with Tesla?

It would appear from historical evidence that Tesla, in his own mind, had already proved it – and moved on – whereas the entrepreneur in Marconi, much like Edison, was tenacious in developing his inventions. Tesla at this time had formed a company with the financier Pierpont Morgan to commercialise his wireless technologies. Morgan knew there was a fortune in wireless telegraphy and fluorescent lighting, provided they were developed sufficiently to present to investors as near-commercial realities.


Nikola Tesla illuminated by one of his wireless powered cold arc lamps. Source Wiki commons.

To this end Morgan had tasked him with demonstrating the fluorescent light technologies and maturing their manufacture, and with demonstrating his wireless by covering off-shore yacht races. The latter would have been a tangible demonstration for both the rich and the Navy. Tesla did neither. He scorned the triviality of the public demonstration – despite his very public earlier electrical displays. This left the field of wireless telegraphy (radio) for Marconi and others to develop. Instead Tesla squandered the Morgan money on his other big dream – the wireless transmission of electric power by radio.

Radio power, transmission and weapons

Tesla’s greatest dream was one that the likes of Morgan would surely never fund. He envisaged a world where power and information were transmitted world-wide – for free. To this end he used the money from Morgan to plan and start building a gigantic transmission tower, Wardenclyffe, in 1902. His philanthropic ideals and profligate spending meant that by 1906 his funding from Morgan had dried up, and his dream was never realised. The tower was destroyed in 1917 on US Government orders, to ensure it could not be used by enemies of the state.

In developing this idea he correctly understood the physics of wireless transmission, both through the atmosphere and through the ground, laying down principles that would guide the subsequent invention of both AM and FM radio.

A combination of creditors, stock market upheavals, World War I and the 1929 stock market crash ensured that Tesla could never raise the money required to bring about this revolutionary idea – an idea revolutionary even by the social standards and upheavals of the time.


Tesla’s radio controlled boat. Source Wiki Commons

At the same time Tesla was a continuing fountain of new ideas. Perhaps reflecting the turbulent times, these included the world’s first radio-controlled boat in 1898 (which he continually, and unsuccessfully, tried to interest the US Navy in), improvements on dirigibles, a helicopter-plane he called a ‘flivver’ and, at the age of 78, a ‘death-ray’.

This latter ‘invention’ was never built, nor even prototyped, but harked back to experiments of Tesla’s in the 1890s that were only a small step away from the invention of the laser. The ideas were sufficiently developed, though, to serve as mental prototypes for the particle-beam weapons and strategic defense shields beloved of science fiction writers and some politicians.

Modern nonsense

Apart from the tangible technological legacies left by Tesla’s prodigious genius, there are also quixotically hare-brained modern legacies. Tesla, if he were alive today, would scoff at these – none more so than the Tesla “free-energy generator”.

This modern scam is based on a misrepresentation of Tesla’s laudable Wardenclyffe dream and of his idea that his generator could be used as a receiver of the, at the time, newly discovered cosmic rays. The sophistication of radio, and the development of radar during and after WWII, demonstrated the impracticality of large transmitters and receivers of radio power at the levels Tesla envisaged. We now use networks of lower-powered repeaters (many of them satellites) to ensure uninterrupted radio, telephone and television coverage on a world-wide basis. As for cosmic rays: they are energetic, but of such low density (thankfully for life) that collecting sufficient power from them is impracticable.

That scams based on Tesla exist in this modern age is testament not to conspiracy theories as maintained by these swindlers. Rather it is testimony to Tesla being truly ahead of his time – a time of tumultuous technological growth, which he partially created without ever seeming to inhabit.

A complete biography of Nikola Tesla is still to be written. I believe it will require a writer who understands the science and engineering of Tesla’s age, and who has the artistry to weave the many threads of his life into the dynamic, parallel genius – teetering on the precipice of chaos – that was Nikola Tesla.

This article was first published on Australian Science; you can read the original here.


The perils of space exploration: last flight of space shuttle Columbia

Posted January 25, 2013 By Kevin Orrman-Rossiter


The 28th and last flight (STS-107) of the space shuttle Columbia was ten years ago. Launched on January 16, 2003, Columbia was destroyed at about 0900 EST on February 1, 2003, while re-entering the atmosphere after its 16-day scientific mission. The destruction of the shuttle killed all seven astronauts on board.

An illustrious career

Columbia was the first of the space shuttles to fly. It was successfully launched on April 12, 1981 – the 20th anniversary of the first human spaceflight, by Yuri Gagarin in Vostok 1 – and returned on April 14, 1981, after orbiting the Earth 36 times. The first flight of Columbia (STS-1) was commanded by John Young, a Gemini and Apollo veteran who in 1972 had become the ninth person to walk on the Moon, and piloted by Robert Crippen, a rookie astronaut who had served as a support crew member for the Skylab and Apollo-Soyuz missions.

Columbia had an illustrious career as part of the US space program, featuring many ‘firsts’. It was the first true manned spaceship. It was also the first manned vehicle flown into orbit without the benefit of previous unmanned “orbital” testing, and the first to launch with wings, using solid rocket boosters. It was also the first winged re-entry vehicle to return to a conventional runway landing, weighing more than 99 tons as it braked to a stop on the dry lakebed at Edwards Air Force Base, California.


Its second flight, STS-2 on November 12, 1981, marked the first re-use of a manned space vehicle. A year later it became the first four-person space vehicle – bumping this to six on its sixth flight (STS-9) on November 28, 1983. This flight also featured both the first flight of the reusable laboratory ‘Spacelab’ and the first non-American astronaut on a space shuttle, Ulf Merbold. STS-93, launched on July 23, 1999, was commanded by Eileen Collins, the first female Commander of a US spacecraft.

Space Shuttle Columbia flew 28 flights, spent 300.74 days in space, completed 4,808 orbits, launched 8 satellites and flew 201,497,772 km in total, including its final mission. Its penultimate flight (STS-109) was the third of the highly publicised servicing and upgrade flights to the Hubble Space Telescope.

The fatal flight

The rockets fire. Amidst the thundering, fiery roar the shuttle lifts majestically from the launch pad. Unnoticed at the time, at 81.9 seconds after launch a block of foam insulation strikes and disintegrates against the leading edge of the shuttle’s left wing. The launch continues as scheduled. One hour after launch Columbia is in orbit and the crew begin to configure it for their 16-day mission in space.

The next day, routine analysis of high-resolution video from the tracking cameras reveals the debris strike. Multiple groups within the mission team review the tapes. They assess the possibility of damage and decide that an image is required of the wing. They make a request to the NASA ground management for imaging of the wing in-orbit.

However, management considers it “of low concern” that the carbon matrix could be damaged by the foam block; the engineers, in their view, are over-reacting. The Space Shuttle Program managers decline to have Columbia imaged – or to alert the shuttle crew. In fact the crew are told that the impact is a “turn-around issue”, something seen before that will need only a maintenance check. Titanic-like, the mission continues.

Scientifically the mission was a great success. The shuttle crew worked around the clock to ensure that maximum scientific value was achieved, including an investigation of the web-spinning abilities of the golden orb spider under low gravity – an experiment designed by students from Glen Waverley Secondary College in Melbourne, Australia.

The morning of re-entry all appears calm and normal in the mission control room. As re-entry started the crew are seen to be in good spirits and looking forward to coming home.

Then, while travelling at Mach 24.1 during the 10-minute fiery re-entry – when the leading edge reaches temperatures in excess of 1,550 degrees Celsius – the damaged thermal protection panels on the wing overheated, then failed catastrophically. The wing, and with it the shuttle, disintegrated.

The nearly 84,000 pieces of debris from the shuttle are stored in a 16th floor office suite in the Vehicle Assembly Building at the Kennedy Space Center.

The seven crew members who died aboard this final mission were: Rick Husband, Commander; Willie McCool, Pilot; Michael Anderson, Payload Commander; David Brown, Mission Specialist 1; Kalpana Chawla, Mission Specialist 2; Laurel Clark, Mission Specialist 4; and Ilan Ramon, Payload Specialist 1.

Two others died in the search for the debris: Jules Mier (Debris Search Pilot) and Charles Krenek (Debris Search Aviation Specialist).


Is spaceflight perilous? Or an unforgiving adventure?

It is rather remarkable that NASA had launched men into space sixteen times during the Mercury and Gemini programs without a casualty – although there had been some scary moments.

Compared to the cramped and tiny Mercury capsule, the Apollo command module was, in spaceflight terms, a luxury liner. So when a spark ignited the pure-oxygen atmosphere of the Apollo 1 capsule on January 27, 1967, killing all three astronauts aboard, it was shocking for both NASA and the public. The last communication from the Apollo 1 capsule was, for a long time, not revealed to the public:

Fire! We’ve got a fire in the cockpit! We’ve got a bad fire…..get us out. We’re burning up…..

The last sound was a scream, shrill and brief. After this nothing at NASA would be quite the same again.

The fatal Apollo 1 fire was also unexpected. At the time of the fire the crew – Gus Grissom, Ed White and Roger Chaffee – were perched atop an unfuelled Saturn IB rocket, involved in routine testing of the capsule control systems.

The 1986 Challenger disaster was equally shocking – and far more public. The explosion 73 seconds after lift-off claimed both the shuttle’s crew and the vehicle. The cause was determined to be an O-ring failure in the right solid rocket booster, with cold weather a contributing factor. The subsequent investigation and changes delayed the next shuttle launch until late 1988.


You could say that space exploration in itself is not inherently dangerous. But to an even greater degree than aviation, it is terribly unforgiving of any carelessness, incapacity or neglect. Gus Grissom has been quoted as saying during the pioneering Mercury missions:

If we die we want people to accept it. We hope that if anything happens to us it will not delay the program. The conquest of space is worth the risk of life.

I’m not sure that Gus Grissom would have accepted these deaths as an acceptable risk of human spaceflight.

The article was originally published on Australian Science; read it here.


Joy to the world: an ode to outer space at Christmas

Posted December 24, 2012 By Kevin Orrman-Rossiter


The six Expedition 30 crew members assemble in the U.S. Lab (Destiny) aboard the International Space Station for a brief celebration of the Christmas holiday on Dec. 25.

By Alice Gorman, Flinders University and Kevin Orrman-Rossiter, University of Melbourne

Christmas – whether you’re religious or not – is a time when people gather their families together to reinforce the bonds that make us human.

In the era of modern telecommunications, distance no longer separates people the way it once did. Whether you’re on another continent, another planet, or floating out in space, satellites enable us to talk to and see each other, to feel connected.

And speaking of Christmas and space, it turns out the two have a bit of a history.

Space travel would explain how Santa can get around the world in one night. Kennedy Space Centre

An Apollo Christmas

Apollo 8 was a Christmas mission, the only one of all the Apollo missions. On December 21, 1968, astronauts Frank Borman, Jim Lovell and Bill Anders blasted off from Cape Kennedy on a Saturn V rocket.

Their Christmas gift to the world was an extraordinary photograph that became one of the icons of the 20th century.

As they orbited the moon a few days after launch, an unscheduled change in orientation suddenly brought the earth into their view. The astronauts scrambled to get their cameras working, and Bill Anders took the famous shot of the Earth rising over the lunar horizon.


For the first time we saw our whole world from the outside. The fragility and beauty of the blue-and-white globe floating in the sea of darkness ignited an awareness of how interconnected the people of Earth are.

The nascent environmental movement drew inspiration from this vision and people really began to appreciate that we are only a small part of a rather large universe.

The Apollo program provided more concrete presents as well. The crew of Apollo 17, the last men on the moon, made a December 19 splashdown loaded with a 100 kg Santa’s-sack-worth of lunar rocks – our biggest collection so far. Many of these moon rocks were given as goodwill gestures to other nations.

They’re now the most valuable rocks in the world; each lump may be worth millions, as we have no idea when we’ll have the opportunity to get some more. Unbelievably, quite a few of these precious rocks have gone missing!

Home and away

The Apollo missions demonstrated that humans could survive in space; what they couldn’t tell us was whether it was possible to actually live for an extended amount of time in space. This was the purpose of Skylab – the first US spacecraft to be designed as a living space, a home away from home.

Skylab was launched in 1973 and hosted three crews (Skylab 1 was unmanned) during its short working life. While in the space station, the astronauts enjoyed showers, a special dining area, and a sadly punishing toilet routine – everything that left their bodies had to be kept for future analysis.

The crew of the Skylab 4 mission celebrated Christmas in 1973 with a crafty piece of improvisation. Astronauts Gerald Carr, William Pogue and Edward Gibson made this charming Christmas tree out of empty food cans.

Skylab 4 tin can Christmas tree. NASA

Wasting valuable mission time to make the tree may have been a passive act of resistance to having every minute of their waking days overplanned. Later in the three-month mission, the exhausted crew allegedly “mutinied” and chucked the first sickie in space.

Christmas merchandise

On Earth and on the moon, space was quickly incorporated into Christmas traditions.

In 1947, Woomera in South Australia became the location of one of the earliest rocket launch sites in the world. The card shown below, with a Christmas greeting inside, depicts a V2-like rocket being launched over the desert.

Germany developed the V2 in WWII and it became the basis of Cold War space programs in the US, UK, France and Russia. Two ended up in Australia and are now at the Australian War Memorial in Canberra. The card seems to send a rather mixed message about war and peace …

Woomera Christmas card, likely from the late 1940s or early 1950s. Martin Wimmer

Soviet Russia also got into the Christmas card action though not officially – the celebration of Christmas was not encouraged during the Soviet era.

That said, the card below, which depicts St Nicholas and three USSR spacecraft, leaves no doubt that the spirit of Christmas nonetheless endured. (Bonus points if you can identify the spacecraft!)

Soviet rocket Christmas card. Mazaika

Not to be outdone on either the space or Christmas card race, NASA responded in style. In the shot below, the Apollo 14 crew of Alan Shepard, Ed Mitchell and Stuart Roosa, receive a Christmas card from James Loy, Chief, Protocol Branch for the KSC Public Affairs Office.

Note the crew peeping out from behind the Christmas tree on the card.


Every NASA mission generates merchandise and memorabilia – patches, t-shirts, mugs, etc. But did you know you could give your own Christmas tree a NASA makeover?

The image below, and the one above of Santa and an Apollo capsule, show souvenir Christmas tree ornaments from the Kennedy Space Centre.

2012 Christmas in space

This Christmas will be a quiet one in space. That said, on December 19 a crew of three flight engineers did launch from Kazakhstan to complete Expedition 34 on the International Space Station.

Santa on the Moon. Kennedy Space Centre

NASA astronaut Tom Marshburn, Russian cosmonaut Roman Romanenko, and Canadian astronaut Chris Hadfield will get to celebrate Christmas twice – once on December 25, and again for the Russian Orthodox feast on January 7.

Like many modern families, the Mars Rover family – Curiosity, Opportunity and Spirit – will spend the Christmas period far from each other, albeit on the red planet. (For Santa to include them in his rounds, he may need to battle the Martians – or so they thought in this classic 1964 film).

Similarly, the twin Voyager spacecraft are moving ever further apart from each other on their missions to interstellar space.

But it’s not all bad. The same technologies which created the Mars Rover family and the Voyager twins led to our modern telecommunications network.

Human and robot alike are linked in a web of electromagnetic waves that keep us communicating and connected. In space, no-one need feel alone, particularly at Christmas.

The authors do not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article. They also have no relevant affiliations.

The Conversation

This article was originally published at The Conversation.
Read the original article.


NASA’s Curiosity shows there’s more to life than life

Posted December 13, 2012 By Kevin Orrman-Rossiter

By Kevin Orrman-Rossiter, University of Melbourne and Helen Maynard-Casely, Australian Synchrotron

The Curiosity rover has landed on Mars, driven around, started its scientific mission and, as of 4am today (AEDT), started reporting integrated science results.

In a news conference at the American Geophysical Union NASA’s Curiosity mission team presented a measured, low-key and hype-free discussion about the first use of Curiosity’s full array of analytical instruments.

What they have found are chlorinated hydrocarbons (simple organic molecules made up of carbon, chlorine and hydrogen), sulphur-containing compounds, and calcium perchlorate.

Perchlorates are salts that, when dissolved in water, lower the freezing temperature of that water. The presence of those salts could enable water to stay liquid in the near-surface layers of martian soil. This could provide a possible habitat for Martian microbes.

The discovery of perchlorates supports the 2008 finding made by NASA’s Phoenix lander, which detected perchlorate salts in soil samples from Mars’s north polar region. Being a stationary craft, Phoenix could only take limited samples and used a simpler “wet-chemistry” analytical instrument.


NASA’s Curiosity rover documented itself in the context of its work site, an area called “Rocknest Wind Drift,” on the 84th Martian day, or sol, of its mission (Oct. 31, 2012). The rover worked at this location from Sol 56 (Oct. 2, 2012) to Sol 100 (Nov. 16, 2012). NASA/JPL-Caltech/MSSS


Worth the wait?

The announcement of results from Curiosity comes after weeks of excited anticipation from scientists and the general public alike, following suggestions NASA was getting ready to announce the discovery of life on Mars.

Why? Well, on November 20, the Curiosity mission’s principal investigator John Grotzinger commented to NPR reporter Joe Palca:

The data is one for the history books. It’s looking really good.

This exuberant exclamation about the analysis of Martian soil samples set off a frenzy of speculation on the internet, and since then NASA has been working hard to lower expectations.

In the lead-up to this morning’s press conference NASA even revealed that no, they weren’t about to announce the discovery of life.

The search for life

It’s important to keep in mind that Curiosity never set out to find life. In fact it’s been suggested that, barring an alien waving at Curiosity’s many cameras, life would be more or less undetectable by the rover.

What Curiosity is trying to do is assess the habitability of Mars, both in the past and in the present. Has Mars got the required minerals and energy sources for primitive life to exist? Was there ever a water source that could have aided transport and delivery of these nutrients?


Scoop marks in the sand at 'Rocknest'. This is a view of the third (left) and fourth (right) trenches made by the 4cm-wide scoop on NASA's Mars rover Curiosity in October 2012. NASA/JPL-Caltech/MSSS


To this end, Curiosity is checking the Martian soil for organic molecules – carbon-containing chemicals and salts that could be ingredients for life. Just like the ones it has found.

Where and how?

The newly announced results follow Curiosity’s investigation of sandy soil at a site called “Rocknest”. This site was chosen to provide the first samples of “normal” soil (as if interplanetary soil could ever be normal).

Using a mechanism on its robotic arm, Curiosity dug up five scoopfuls of Martian soil, each from a pit roughly 4cm wide (see image above).

The first Rocknest scoop was collected on the mission’s 61st Martian day (also known as Sol 61) on October 7.

Fine sand and dust from that first scoopful and two subsequent scoops were used to scrub the inside of Curiosity’s sample-handling mechanism and to ensure they were analysing the right soil.

Samples from scoops three, four and five were then analysed by the chemistry and mineralogy instruments inside the rover.

Cause for excitement

These findings are exciting for scientists – they are repeatable and clear enough for the science history books. After all, they are the first well-characterised Martian soil samples. Scientists now have integrated chemical, mineralogical and visual data, which they couldn't get from earlier landers and rovers.


The robotic arm on NASA’s Mars rover Curiosity delivered a sample of Martian soil to the rover’s observation tray for the first time during the mission’s 70th Martian day, or sol (Oct. 16, 2012). NASA/JPL-Caltech/MSSS


That said, it might take something a little bit “sexier” than soil analysis before the mainstream media reports Curiosity’s findings with as much gusto as it did with the rover’s landing.

Many voiced frustration over the wait before today’s Curiosity announcement, but given past experience you can understand why NASA needed to scrutinise the results.

We only need to cast our mind back to the "worms" of ALH84001 – supposed evidence for past life on Mars – and the report of arsenic-based life, both of which were subsequently discredited.

Of course, it’s never that simple.

Imagine you had worked for ten years on this rover, no doubt putting in very long work days (and nights) and making the necessary life sacrifices.

You’ve seen the ups and downs of the project and lived through the success of the terrifying landing earlier this year.

Now it’s really happening and the data you have anticipated for years is finally pouring in, with a level of detail you could only have dreamt about. Who wouldn’t be keen to share the results with the world?



What’s next?

It’s been a productive and trouble-free first three months for Curiosity and the mission team has completed most of its baseline data, instrument and rover testing.

A drilling test experiment is yet to be completed before Curiosity moves on. Curiosity can sample rocks and soil by both scooping and drilling. Now that NASA is sure the analytical instruments are working, they can check out the drill attachment.

Following the drilling test, Curiosity will rove slowly over to the 5.5km-high Mount Sharp via a site called Yellowknife Bay. Mount Sharp is the main investigative target of Curiosity’s primary mission and the point at which the science program gets into full gear.

Mount Sharp’s slopes are gentle enough for Curiosity to climb, analysing and sampling as it goes. As it climbs it will be sampling younger and younger strata of rocks – it really is a Martian areological timeline.

So as we wait for Curiosity to take the next steps (figuratively) on its mission to better understand the red planet, it’s worth remembering that we don’t need to find life for the mission to be both exciting and scientifically worthwhile.


The authors do not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article. They also have no relevant affiliations.


This article was originally published at The Conversation.
Read the original article.


Does my science look big in this? The astrobiology edition

Posted November 18, 2012 By Kevin Orrman-Rossiter

During the 20th century a powerful new idea gradually entered our consciousness and culture: cosmic evolution. We are all part of a huge narrative: a cosmos billions of years old and billions of light years in extent. It is this idea that caught my attention this month via the proceedings of the Sao Paulo Advanced School of Astrobiology SPASA 2011, published in the October International Journal of Astrobiology.

Although the question of extraterrestrial life is very old, the concept of full-blown cosmic evolution – the connected evolution of planets, stars, galaxies and life on Earth and beyond – is much younger. In a rather breathtaking paper, Steven Dick, former holder of the Chair in Aerospace History at the National Air & Space Museum, lays out his arguments for cosmic evolution. Dick traces the idea from its roots in the 19th-century theories of Pierre-Simon Laplace and Robert Chambers, through its philosophical, astronomical and biological upbringing, to the present day. He examines the worldview that cosmic evolution had become by the 1950s and 1960s, and how it has permeated different cultures in diverse ways. Dick cautions us, though, noting "we need to remember that 'culture' is not monolithic and that 'impact' is a notoriously vague term."

Cosmic evolution. Image credit: Harvard University.

In addition to the impact of our new understanding on culture, cosmic evolution also provides a window on long-term human destiny, asserts Dick. He presents this idea via three scenarios: the physical, biological, and postbiological universe. Life is unique to Earth in the physical universe scenario, and the options flow from this situation – think of Isaac Asimov's Foundation series. In the biological universe we will certainly interact with extraterrestrials – here cosmic evolution commonly ends in life, mind and intelligence. Cultural evolution in a biological universe may replace biologicals with artificial intelligence, creating what Dick calls a postbiological universe. We do not yet know which of these is our reality; that, maintains Dick, is one of the challenges of astrobiology.

In a second 'big-picture' paper Marcelo Gleiser presents his four ages of astrobiology. For Gleiser the influx of astrophysical data, particularly on the prevalence of exoplanets, "indicates that there are plenty of potentially life-bearing platforms within our galaxy." He then presents the 'history' of life in the universe in terms of the steps needed for matter to have sequentially self-organised into more and more complex structures. Gleiser's first three ages are: the physical, the creation of stars and planets from atomic nuclei; the chemical, in which elements organise into biomolecules; and the biological, in which living creatures of growing complexity form from biomolecules. His fourth age, the Cognitive Age (the age of thinking biomolecules), addresses whether or not we are unique – that is, which of Dick's two scenarios, the physical or the biological, is reality. Gleiser's sequence is best viewed as a prelude to the physical or biological universe scenarios of Dick. The papers by Dick and Gleiser are both heady and exhilarating conceptual reads.

Jorge Horvath and Douglas Galante accept the premise that life exists, and then argue that we need to take high-energy astrophysical events seriously. Scientists and the public account for meteor impacts in academic studies, science-fiction writing and film – not so for events such as supernovae, gamma-ray bursts and flares. They show that these events are more frequent than asteroid strikes and that their effects are non-negligible (academic speak for potentially fatal to planet-based species). They conclude that the fact we have not yet been wiped out by such events can be read either as a measure of earthlife's resilience or as a sign of a threat we are statistically yet to encounter.

The Dry valleys, Antarctica. Photo credit NASA.

My attention was captured by two other papers from the proceedings. Martin Brasier and David Wacey address the problem of studying life in deep space – comparing it to the study of life remote in time. This view is pertinent, as it is non-trivial for scientists to determine what is a viable signal of extinct life. The authors develop a set of protocols and then apply them to earth samples of varying ages. They do this to show how we could interpret similar samples where much of the desirable information (the context) has been filtered out during transmission (whether physical or of data) across vast distances of space, or time, or both (as is likely on Mars). Even 10 years ago these questions were barely tractable, but we have since learned much about metabolic pathways and living microbial systems. Brasier and Wacey conclude that work is still required on pseudo-fossils – structures that arise naturally within complex physico-chemical systems – before we can confidently agree on signs of life that are remote in space and time.

My final pick is an experimental paper that looks at the ExoMars mission. The European Space Agency (and initially NASA) ExoMars mission is scheduled for launch in 2018 – specifically to detect life signatures on the surface and subsurface of Mars. This probe will carry, for the first time, a Raman spectrometer, an instrument with proven ability to determine the spectral signals of key biochemicals. The authors support these assertions by assessing samples acquired from Arctic and Antarctic cold deserts and a meteorite crater – terrestrial environments similar to those found on Mars. The experimental results presented in this paper demonstrate that it will be possible, using this technique, to detect spectral signals of extraterrestrial (in this case Martian) extremophilic life signatures.

This article was first published on the Australian Sciencesite as: Does my science look big in this? The astrobiology edition


The (nuclear) alchemists of Darmstadt and the doubly magic tin-100 nucleus

Posted September 19, 2012 By Kevin Orrman-Rossiter

An international group of researchers announced in the journal Nature that they had succeeded in creating tin-100. This experiment helps us understand how heavy elements have formed. A few minutes after the Big Bang the universe contained no elements other than the lightest: hydrogen and helium.

We, the objects around us, the Earth and the other planets all contain heavier elements: carbon, oxygen, silicon, tin, iron and so on. These elements came into existence later than hydrogen and helium. They formed through the fusion of atomic nuclei inside stars. Elements heavier than iron owe their existence to gigantic stellar explosions called supernovas. Tin-100 is a very unstable, yet important, isotope for understanding the formation of these heavier elements.

A multinational team headed by nuclear physicists from the Technische Universität München, the Cluster of Excellence Origin and Structure of the Universe and the GSI in Darmstadt carried out these precision experiments. They shot xenon-124 ions at a sheet of beryllium to create the tin-100 atoms. They subsequently measured the half-life and decay energy of tin-100 and its decay products using specially developed particle detectors.

What is our world made from?

The inspiration for creating new elements can be traced to alchemical traditions. Alchemy is an arcane tradition that can be viewed as a proto-science, a precursor to chemistry and nuclear physics. Its prime objective was to produce the mythical philosopher's stone, which was said to be capable of turning base metals into gold or silver, and also to act as an elixir of life that would confer youth and immortality upon its user.

Alchemy did bring to chemistry many ideas, and provided procedures, equipment and terminology that are still in use. It also provided the inspiration for the creation of new elements. Now we understand that creating new elements requires a combination of precision equipment and experimental procedures, coupled with a sound understanding of quantum theory.

So what is tin-100 and why is it useful to understand the astrophysics of heavy element formation?

Most people will recognise that the matter around us is composed of atoms. Atoms of carbon, hydrogen and oxygen, for example, form the building blocks of organic molecules, while silicon and oxygen bond together to make common beach sand and are fused together to make glass. The familiar metals are solids made of one type of atom, for example gold and aluminium, or of combinations – bronze being made of copper and tin atoms.

Atoms in turn are a central nucleus of protons and neutrons surrounded by a swarm of electrons. The number of protons distinguishes one element from another. This atomic number is used to designate an element: 1 for hydrogen, 8 for oxygen and 50 for tin, for example. The lightest stable isotope of tin comprises 112 nuclear particles – 50 protons and 62 neutrons. The neutrons act as a kind of buffer between the electrically repelling protons and prevent normal tin from decaying. Each atom contains a number of electrons equal to its number of protons. Remove or add an electron and the atom becomes an ion, a charged particle.

The strange quantum world of the nuclei

Quantum mechanics, amongst other things, explains how the electrons form into shells around the nucleus. Elements with filled outer shells – helium, neon, argon, xenon – are 'noble' gases, chemically inert. Nuclei are also complex quantum objects.

As far as we know, nuclei are the smallest objects that can be split up into their constituents. They are therefore the smallest entities in which emergent properties – patterns that arise from complexity – can be studied. Nuclear scientists study these emergent phenomena and are using them to decipher the nature of the nuclear force. In contrast to the structure of atoms, for which the fundamental interaction between the electrons and the nucleus – the electromagnetic force – is known with great precision, the interaction between the nucleons – the strong nuclear force – is not so well known.

In nature not all combinations of nucleons are stable. As a general rule, the more protons present, the more neutrons are required to stabilise the nucleus. A useful graphical presentation of this is the Segrè chart of nuclides.

If the shell structure of electrons was at first difficult for scientists to come to terms with, the shell structure exhibited by nucleons is not only unexpected, it is complex enough that many quantum physics texts do not discuss it. It was first thought that such densely packed and strongly interacting objects as the nucleons would exhibit liquid-like behaviour, much like the flow of electrons in a good conductor such as a metal.

That is what makes these experiments so exciting.

Stability and magic numbers

Magic numbers are the numbers of protons or neutrons that form full shells in an atomic nucleus. The term is thought to have been coined by the physicist Eugene Wigner. The nuclear shell model explains – at least for stable nuclei – the observed sequence of magic numbers: 2, 8, 20, 28, 50, 82 and 126.

Nuclei that have a magic number of neutrons or protons are more tightly bound than their non-magic counterparts. This intrinsic simplicity makes them prime candidates for testing proposed models of nuclear structure. Even more attractive are the doubly magic nuclei, with full shells of both protons and neutrons. The lighter doubly magic nuclei – helium-4, oxygen-16 and calcium-40 – have equal, magic, numbers of protons and neutrons.

However, because of the repulsion between protons, the line of stable nuclei veers away from this symmetry line. As a result tin-100 is the heaviest nucleus with equal magic numbers of protons and neutrons. It is bound but unstable, very close to the edge of nuclear stability, where the nuclear force between the protons and neutrons can no longer bind them into a nucleus. Unfortunately, what makes this nucleus so attractive to study is also what makes it so difficult.

How to make a new element

In nature, elements heavier than iron come into being only in powerful stellar explosions – supernovas. These include, for example, the precious metals gold and silver and the radioactive uranium. The cauldron of a supernova gives rise to a whole array of high-mass atomic nuclei. These decay to stable elements via different short-lived intermediate stages.

There are two ways to create new elements in the laboratory. The first is to fuse two nuclei in a manner that minimises the loss of protons or α-particles (helium-4 nuclei). The second is more brutal: fragmenting a small part off a heavier nucleus in a collision.

In these experiments energetic xenon-124 is sheared by colliding it with a beryllium target foil, leaving a residue composed of 50 protons and 50 neutrons. Of the 120,000,000,000,000 xenon-124 ions accelerated in the experiment, only 259 tin-100 nuclei were identified. These results were nonetheless sufficient for the decay of tin-100 to be studied with great precision.
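Those two numbers give a feel for how rare the production events were – a quick Python check using only the figures quoted above:

```python
# Yield of tin-100 production quoted in the article: 259 nuclei
# identified out of 1.2e14 xenon-124 ions fired at the beryllium target.
ions_fired = 120_000_000_000_000  # 1.2e14 xenon-124 ions
tin100_found = 259

yield_per_ion = tin100_found / ions_fired
print(f"Roughly 1 tin-100 nucleus per {ions_fired // tin100_found:,} ions")
print(f"Production probability per ion: {yield_per_ion:.2e}")
```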

The results, excitingly for the researchers, demonstrated a 'superallowed Gamow-Teller decay'. A full explanation of this type of β-decay is beyond the scope of this essay; suffice to say it provides new experimental depth to the models of nuclear structure. It is an important decay transition that occurs in the collapse of supernovae, and it also helps put boundaries on the possible mass of the neutrino. Both are important validations of the current nuclear theories, as well as providing real experimental data to fine-tune the theoretical models.

This allows more realistic models of nuclear synthesis to be constructed, giving a deeper understanding of how the atoms that make up our universe were created.

Now, building on these experiments, other laboratories around the world will work on improving the production rates of tin-100 and other exotic nuclei, allowing the emergent properties of these nuclei to be studied in more detail – and giving us greater understanding of the forces that bind these particles together, to make us!


It's a wheel! It's a wheel – a wheel on Mars!

Posted August 25, 2012 By Kevin Orrman-Rossiter

NASA's rover Curiosity was safely on Mars. It was a perfect landing. The novel sky-crane method had proved its detractors wrong and its designers right. What was needed then were signs that Curiosity was working as designed. NASA had said that the first pictures might take anything up to two hours after landing – a long time for the audiences waiting, live, all over Earth.

“Got thumbnails.” Pause in the control centre, then someone else yells “It’s a wheel, it’s a wheel!” “A wheel on Mars!”  For the second time that momentous afternoon the NASA/Jet propulsion Lab crowd erupted into spontaneous and joyful applause.  Not only had they landed the rover, Curiosity, safely on Mars, they had received the first images back from its cameras.  Sometimes the unscripted, unexpurgated exclamations make for the best history.

The first two pictures were from the front and back hazard cameras. They were low-resolution black-and-white thumbnails taken through the dust caps that protected the cameras during landing. As the minutes ticked by, higher-resolution images came through from the rover. Business-as-usual, familiar image enhancement brought the 'first' two images from the robot explorer into sharp clarity.

Image from Curiosity’s front Hazard camera, Sol0. Photo credit NASA/JPL.

The first week on Mars

After the exuberance and press conference came the trademark NASA precision and methodical approach – an approach that gets missions safely to Mars, yet at the same time can make the audacious appear mundane.

Mission controllers at NASA's Jet Propulsion Laboratory in Pasadena are now checking out Curiosity's subsystems and 10 instruments. Curiosity is in the opening days of a two-year mission to investigate whether conditions have been favourable for microbial life, and for preserving clues in the rocks about possible past life.

Mission team members are “living” on Mars time.  A Martian day is approximately 40 minutes longer than an Earth day, meaning team members start their shift 40 minutes later each day.
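The drifting timetable is simple to compute. A minimal sketch in Python, assuming a hypothetical 9:00 am first shift (a Martian sol is approximately 24 hours 39 minutes 35 seconds):

```python
# Sketch of how shift-start times drift when "living" on Mars time.
# The 9:00 am first shift below is hypothetical; the sol length used
# is approximately 24 h 39 min 35 s.
from datetime import datetime, timedelta

SOL = timedelta(hours=24, minutes=39, seconds=35)  # one Martian sol

def shift_start(first_shift, sol_number):
    """Earth clock time at which a shift begins on the given sol."""
    return first_shift + sol_number * SOL

first = datetime(2012, 8, 6, 9, 0)  # hypothetical first shift, 9:00 am
for sol in range(4):
    print(f"Sol {sol}: shift starts {shift_start(first, sol):%H:%M}")
```

Each sol the start time slips roughly 40 minutes later on the Earth clock, cycling the team right around the day over about five weeks.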

View of Mount Sharp, Curiosity’s roving destination. Image credit NASA/JPL

Amongst the important system events in this first week was a software upgrade.  It took four days to successfully upgrade Curiosity’s software in its main and back-up computer.  The software had been uploaded during its trek to Mars, but not activated until now.  The software to date was focused on getting Curiosity through the Martian atmosphere and safely to its destination in Gale Crater.  The software upgrade is to cover its surface exploration activity, roving and controlling the various scientific instruments.

Curiosity Ready to Roll

“There will be a lot of important firsts that will be taking place for Curiosity over the next few weeks, but the first motion of its wheels, the first time our roving laboratory on Mars does some actual roving, that will be something special,” said Michael Watkins, mission manager for Curiosity from the Jet Propulsion Laboratory in Pasadena.

Mission engineers are devoting more time to planning Curiosity's first rove. In the coming days, the rover will exercise each of its four steerable (front and back) wheels, turning each of them side-to-side before ending up with each wheel pointing straight ahead. On a later day, the rover will drive forward about one rover-length (3 metres), turn 90 degrees, and then kick into reverse for about 2 metres. Exciting times for the rover driver team!


The scientists and engineers of NASA’s Curiosity rover mission have selected the first driving destination for Curiosity.  The target area, named Glenelg, is a natural intersection of three kinds of terrain.  The trek to Glenelg will send the rover 400 metres east-southeast of its landing site.  One of the three types of terrain intersecting at Glenelg is layered bedrock, which is attractive as the first drilling target.

The choice was described by Curiosity Principal Investigator John Grotzinger of the California Institute of Technology: "With such a great landing spot in Gale Crater, we literally had every degree of the compass to choose from for our first drive. We had a bunch of strong contenders. It is the kind of dilemma planetary scientists dream of, but you can only go one place for the first drilling for a rock sample on Mars. That first drilling will be a huge moment in the history of Mars exploration."

Grotzinger estimated the rover’s journey would take between three weeks and two months to arrive at Glenelg, where it will stay for roughly a month before heading to the base of Mount Sharp.

It may be a full year before the remote-controlled rover gets to the base of the peak, which is within 20 kilometres of the rover’s landing site.

Zapping rocks and doing science

Before Curiosity heads off to Glenelg, another first will occur. The team in charge of Curiosity's Chemistry and Camera (ChemCam) instrument is planning to give their mast-mounted, rock-zapping laser and telescope combination a thorough checkout. ChemCam has already "zapped" its first rock in the name of planetary science – the first time such a powerful laser has been used on the surface of another world.

The Chemistry Camera calibration target, as seen by the camera. Photo credit NASA/JPL.

The technique is called 'laser-induced breakdown spectroscopy'. The high-powered, narrowly focused laser beam vaporises the rock from a distance, generating a plasma plume with temperatures in excess of 100,000°C. At the high temperatures of the early plasma, the vaporised material breaks down into excited ionic and atomic species. As the plume cools to 5,000–20,000°C, the characteristic atomic emission lines of the elements can be recorded by the camera. This data is compared to the 'standards' the rover carries, to identify the rock components.
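The underlying idea can be sketched as a toy nearest-line matcher in Python. This is purely illustrative – the wavelength table below is a tiny subset of well-known emission lines, and the real ChemCam analysis compares full spectra against its onboard calibration standards rather than matching single peaks:

```python
# Toy sketch of the LIBS idea: match measured plasma emission lines
# against reference wavelengths for candidate elements. Illustrative
# only; not the actual ChemCam pipeline.

REFERENCE_LINES_NM = {  # a tiny subset of well-known emission lines
    "Na": [589.0, 589.6],  # sodium D lines
    "Ca": [393.4, 396.8],  # ionised calcium H & K lines
    "H":  [656.3],         # hydrogen alpha
}

def identify(measured_nm, tolerance=0.5):
    """Return elements with a reference line within tolerance (nm)."""
    hits = set()
    for element, lines in REFERENCE_LINES_NM.items():
        if any(abs(measured_nm - line) <= tolerance for line in lines):
            hits.add(element)
    return hits

# Two hypothetical peaks picked out of a cooled-plasma spectrum
print(identify(589.1))
print(identify(396.9))
```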

The soon to be famous rock N165, target for testing the Chemistry Camera laser and analysis. Photo credit NASA/JPL.

As Roger Wiens, principal investigator of the ChemCam instrument from the Los Alamos National Laboratory explained earlier, “Rock N165 looks like your typical Mars rock, about three inches wide. It’s about 10 feet away.” “We are going to hit it with 14 millijoules of energy 30 times in 10 seconds.  It is not only going to be an excellent test of our system, it should be pretty cool too.”

Pretty cool indeed.

First weather report in 30 years

It is currently just above freezing point in Gale Crater, where Curiosity is.

Grotzinger noted the team’s report on the Martian crater’s temperature was “really an important benchmark for Mars science”.

"It's been exactly 30 years since the last long-duration monitoring weather station was present on Mars," he said, referring to Viking 1, which stopped communicating with Earth in 1982. The Viking 1 lander recorded temperatures that varied from −17.2°C to −107°C.

Sensors on two finger-like mini-booms extending horizontally from the mast of NASA’s Mars rover Curiosity will monitor wind speed, wind direction and air temperature. One also will monitor humidity; the other also will monitor ground temperature. The sensors are part of the Rover Environmental Monitoring Station, provided by Spain for the Mars Science Laboratory mission.

The weather station devices on Curiosity being tested prior to launch. Photo credit NASA/JPL.

In this image, the spacecraft specialist’s hands are just below one of the Rover Environmental Monitoring Station mini-booms. The other mini-boom extends to the left a little farther up the mast.

As Curiosity's primary mission lasts a full Martian year, it will be able to record the seasonal variations that occur on Mars.

On-the-ground monitoring of radiation and weather conditions will be crucial for any future exploration or habitation by humans. This mission by Curiosity represents an important step towards these aspirations.

This article was originally published on Australian Science on August 20, 2012.


NASA landing Curiosity, science, and technology on Mars

Posted August 11, 2012 By Kevin Orrman-Rossiter

Curiosity approaches Mars, an artist's concept. Image credit: NASA

Go to NASA TV or Ustream, now. Otherwise you may miss your 'Apollo' moment. In about an hour's time the NASA control room in Pasadena will be strained, hushed, waiting to hear these joyful words: "touchdown signal detected" – the signal that the rover, Curiosity, has landed safely on Mars.

After a picture-perfect launch and a 254-day voyage, the Mars Science Laboratory, Curiosity, is primed to descend to Gale Crater on the Martian equator at 3:31pm (AEST).

A safe landing on Mars

Landing an 899kg specialised roving science laboratory on Mars is an audacious mission by NASA. The mass of the rover has presented new technological challenges to NASA engineers: the airbag landing method of the previous three successful rovers was not a viable option for Curiosity.

This has given the NASA engineers the opportunity to trial technology that could be used for later human exploration missions.

As Curiosity enters the top of the Martian atmosphere, 125 kilometres above the Martian surface, she will be travelling at about 21,960 kilometres per hour. Then begins her "7 minutes of terror", her self-guided descent to the surface. Although NASA first used this description for the May 25, 2008 landing of the Martian polar lander, Phoenix, it is still apt for the current challenge.

Curiosity's sky-crane landing, an artist's concept. Image credit: NASA

Of the 38 Mars space missions (fly-bys, landers and rovers) since 1960, only seven have been successful. Curiosity's guided descent is still considered less risky than that experienced by Spirit, Opportunity, and Viking 1 and 2.

Why will Curiosity go through her landing sequence unaided? Simply put, radio signals take some 14 minutes to be relayed from Mars to Earth. By the time the mission controllers receive the first entry signals, Curiosity will already have been on the Martian surface for some seven minutes.
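The delay is just the one-way light travel time. A quick back-of-envelope check in Python (the 250 million km figure is an assumed approximate Earth-Mars distance around landing, not a mission-supplied value; the actual distance varies from roughly 56 to 400 million km over the orbital cycle):

```python
# One-way light travel time from Mars to Earth at an assumed distance
# of ~250 million km, roughly consistent with the article's 14-minute
# figure.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def signal_delay_minutes(distance_km):
    """Return the one-way radio signal delay in minutes."""
    return distance_km / C_KM_PER_S / 60

print(f"{signal_delay_minutes(250e6):.1f} minutes")
```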

If previous landings are anything to go by, you can expect the control room to be a sea of crisply ironed blue NASA/JPL shirts, and the landing to be accompanied by gleeful shouts, smiles, fist-pumping and manly hugs.

Experiencing this Curiosity moment

I am old enough to have experienced the first Apollo moment. I was one of the geeks at the CSIRO Tweetup for the launch last November. I will be witnessing this hopefully historic event by again joining others at the Canberra Deep Space Communication Complex. If you are in Canberra, join us at the landing party.

In Melbourne, the Space Association and the Victorian Space Science Education Centre have partnered for a Melbourne landing party. If you are not near either of these, grab your nearest iDevice or conference room and stream NASA live.

Snapshots for the folks back home

Within minutes of landing, the first pictures will be taken by Curiosity. These will be low-resolution black-and-white images. They are likely to arrive more than two hours after landing, due to the timing of NASA's signal-relaying Odyssey orbiter.

Curiosity’s many cameras. Image credit NASA/JPL

These first views will give engineers a good idea of what surrounds Curiosity, as well as its location and tilt. Once engineers have determined that it is safe they will deploy the rover’s Remote Sensing Mast and its high-tech cameras, a process that may take several days. Curiosity will start surveying its exotic surroundings.

As the rover descends it will acquire low-resolution colour pictures from its Mars Descent Imager (MARDI). These initial colour images will also help pinpoint the rover’s location. They, as well as one full-resolution image, are expected to be released the day after landing.

Additional colour images of Mars’ surface are expected some 12 hours later from the Mars Hand Lens Imager (MAHLI). This camera, located on the rover’s arm, is designed to take close-up pictures of rocks and soil. When Curiosity lands its arm will still be stowed, leaving the instrument pointed to the side and allowing it to capture an initial colour view of the Gale Crater area.

Once Curiosity’s mast is standing tall, the Navigation cameras will begin taking stereo pictures 360 degrees around the rover. These cameras can resolve the equivalent of a golf ball lying 25 metres away. They are designed to survey the landscape fairly quickly. If the mast is deployed on schedule, expect to see these pictures about three days after landing.
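
The “golf ball at 25 metres” figure corresponds to a simple angular resolution. A rough check, assuming a standard golf-ball diameter of about 4.3 cm (the diameter is my assumption, not stated in the text):

```python
import math

GOLF_BALL_DIAMETER_M = 0.043   # assumed standard golf-ball diameter
DISTANCE_M = 25.0

# Small-angle approximation: angle ≈ size / distance (in radians).
angle_rad = GOLF_BALL_DIAMETER_M / DISTANCE_M
angle_mrad = angle_rad * 1000
angle_deg = math.degrees(angle_rad)
print(f"Resolves about {angle_mrad:.2f} mrad, or {angle_deg:.2f} degrees")
```

That is roughly a tenth of a degree, respectable for engineering survey cameras.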

Let the science begin

The landing site is Gale Crater, an ancient impact crater 154 kilometres in diameter.  It holds a mountain rising five kilometres above the crater floor.

The impact crater is deep. The Gale mountain offers one of the deepest continuous sequences of rock layers in the solar system: deep enough to provide access to an unprecedented cross-section of global Martian geological history.

The slope of the mountain is gentle enough for Curiosity to climb. During its primary mission Curiosity will travel some 20 kilometres in total, probably exploring not much beyond some intriguing areas near its landing site.

The pace at which Curiosity gets to the features of high scientific interest will depend on a number of things: the findings and decisions made after landing, including the possibility of finding the unexpected!

Earth and Moon from the Mars Reconnaissance Orbiter, in orbit around Mars, 2007. Photo credit: NASA/JPL-Caltech/University of Arizona

Grab this opportunity and share it.  It’s fascinating to see what journeys begin from such real Martian moments of inspiration and challenge.

This article had an interesting history.  This is the version originally submitted to The Conversation.  Unfortunately they could not contact me on the day of the landing, as I was in Canberra in a radio-quiet zone.  After contact was eventually made after the landing, the article was re-edited and became: NASA’s Curiosity is on Mars safely – so now what?


The NASA rover Curiosity is expected to land on Mars at 3:31 am, August 6, 2012 (AEST).  Its mission, lasting one Martian year (98 Earth weeks), is of scientific significance and perhaps even of human significance.  Curiosity will be fulfilling the prospecting stage of a step-by-step program of exploration, reconnaissance, prospecting and mining evidence for a definitive answer to the question “Has life existed on Mars?”  In answering this question scientists will also be determining the habitability of Mars for humans.

Is there life on Mars?

Life as we know it depends on liquid water.

Life began appearing on Earth within a billion years of its formation, with the oldest probable traces of life dating back to within 700 million years of the planet’s formation.  This is not long after the end of a period in which the crust of the newly formed Earth was kept molten by heavy bombardment from asteroids and comets.  Mars enjoyed similar conditions for a few hundred million years during its Noachian period.

Ancient Mars. NASA

Water on Mars is now much less abundant than it is on Earth, at least as a liquid or gas.  Most of the known water is locked into the permafrost and polar caps; vast quantities of water have been discovered frozen beneath much of the Martian surface.  There are, however, no bodies of liquid water creating surface lakes or oceans, and certainly no canals!  Only a small amount of water vapor is present in the atmosphere.

We have come to understand that Mars had enough water to form lakes and to carve huge river valleys.  There was still abundant water on Mars, in the form of catastrophic flooding, through its Hesperian period.  Nevertheless, many significant questions remain.  When exactly was water flowing on Mars?  Did it persist long enough for life to develop and evolve?  Are there special environments that could support life?  Could life have adapted to high-salt, high-acid conditions?  Could life on Mars have stayed viable for millions of years when conditions turned hostile for life as we know it?

Getting to Mars can be a tough challenge

Curiosity will be the fourth rover, and the most advanced scientific laboratory, to go exploring our neighbor the red planet, Mars. The rovers, representing a formidable pedigree of success by the US space agency NASA, range from the microwave-oven-sized Sojourner of 1997, through the magnificently resilient twins Spirit and Opportunity of 2003, to the 2012 sophistication of Curiosity.

Recent proposals for human exploratory missions to Mars seemingly gloss over the difficulty of landing safely on Mars.  Mars has proved a tough target to explore directly.  Failures to date are numerous, coming from the USSR/Russian, US and British space agencies, while the USSR/Russians, US, Chinese and Japanese have all suffered failures at the seemingly simpler objective of safely orbiting Mars.

Mars 3 lander (model) at the Memorial Museum of Cosmonautics in Russia. Photo credit: NASA

The first probe to make a ‘soft landing’ on Mars was the USSR’s Mars 3, on December 3, 1971.  Unfortunately it stopped operating 20 seconds after landing.  This was a second blow to the Martian aspirations of the USSR, which on November 2 of the same year had lost the identical probe Mars 2, that one entering at too steep an angle and burning up in the thin Martian atmosphere.

The early explorers

The NASA Viking probes were the first to successfully land on Mars: Viking 1 on July 20, 1976 and Viking 2 on September 3, 1976.  These were both stationary probes, so exploration was limited to their landing sites.  The first rover was Sojourner, which powered its way into history late on July 4, 1997.  It landed in the Martian night; daylight brought power to its solar cells, and within hours of landing the Pathfinder lander had unfolded and begun transmitting data and pictures.  The next day its 10.6kg robotic Sojourner began roving.

Sojourner rover on Mars. Photo credit: NASA

Roving conjures an image of long voyages and far-flung lands.  Well, Mars is far away.  Sojourner, though, was a cautious and leisurely rover, with a maximum speed of one centimetre per second (0.036km/hour); it travelled approximately 100 metres in total and never wandered more than 12 metres from Pathfinder. During its 83 sols of roving (a mean Martian solar day, or “sol“, is 24 hours, 39 minutes, and 35.244 seconds long), it sent 550 photographs to Earth and analyzed the chemical properties of 16 locations near Pathfinder.  It didn’t find evidence of life.  However, its data was sufficiently sensitive to enable scientists back on Earth to identify the types of rocks it encountered – mission accomplished.
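
The rover arithmetic above is easy to verify from the figures quoted:

```python
# Length of a mean Martian solar day ("sol"), from the figures quoted above.
sol_seconds = 24 * 3600 + 39 * 60 + 35.244
print(f"One sol = {sol_seconds:.3f} s = {sol_seconds / 86400:.4f} Earth days")

# Sojourner's top speed of one centimetre per second, expressed in km/h.
speed_m_s = 0.01
speed_km_h = speed_m_s * 3.6
print(f"Top speed = {speed_km_h:.3f} km/h")

# Even at top speed, Sojourner's ~100 m of total driving took hours of motion.
driving_hours = 100 / speed_m_s / 3600
print(f"100 m at top speed: {driving_hours:.1f} hours")
```

A sol is about 2.7% longer than an Earth day, which is why mission teams drift their working hours to “Mars time”.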

The Pathfinder lander was also actively photographing and measuring, sending 16,500 pictures and making 8.5 million measurements of atmospheric pressure, temperature and wind speed.  By taking multiple images of the sky at different distances from the sun, Pathfinder enabled scientists to determine that the particles in the pink Martian haze were about one micrometre in radius. Furthermore, the color of some soils was similar to that of an iron oxyhydroxide phase, which would support the theory of a warmer and wetter climate in the past.

In addition to its scientific objectives, Mars Pathfinder was designed as a “proof of concept” for the technology necessary to deliver a lander and a free-ranging robotic rover to the surface of Mars in a cost-effective and efficient manner.  Successfully trialling various technologies, such as airbag-mediated touchdown and automated obstacle avoidance, this mission paved the way for both Spirit and Opportunity.

Testing the airbags for the Pathfinder lander. Photo credit: NASA

Mars Pathfinder also fulfilled its political goals.  It was remarkable for its extremely low cost relative to other unmanned space missions to Mars: a total mission cost of US$280 million, including the launch vehicle and mission costs and a blisteringly short three-year, US$150 million development cycle.  For comparison, the Viking mission cost US$935 million in 1974 dollars – or US$3.5 billion in 1997 money.

Remote observation

Mars is currently host to three functional orbiting spacecraft: NASA’s Mars Odyssey and Mars Reconnaissance Orbiter, and the ESA’s Mars Express.  The 2001 Mars Odyssey, to give it its full name, arrived in orbit around Mars on October 24, 2001.  The name was selected as a tribute to the “vision and spirit of space exploration as embodied in the works of renowned science fiction author Arthur C. Clarke.”  It spent three years mapping the amount and distribution of many chemical elements and minerals that make up the Martian surface, including the distribution of hydrogen, which led scientists to discover vast amounts of water ice buried just beneath the surface in the polar regions.  Mars Odyssey also recorded the radiation environment in low Mars orbit to determine the radiation-related risk to any future human explorers who may one day go to Mars.  It has since acted as an orbiting communication relay for the NASA surface lander and rovers.

The over-achieving twins

Remote sensing from orbit is one thing, but there is nothing more convincing than a hands-on experiment or, the next best thing, a robotic experimenter.  The ability to use rovers to qualify and quantify the remote observations has proved invaluable for our understanding of the planet Mars.  Scientists can also use these experiences to extrapolate from the remote observations of other planets in our solar system, and perhaps even the growing catalogue of exoplanets.

The launch patch for Spirit, featuring Marvin the Martian. Photo credit: NASA

NASA’s twin robot geologists, the Mars Exploration Rovers, Spirit and Opportunity, launched toward Mars on June 10 and July 7, 2003.  The mission’s scientific objective was to search for and characterize a wide range of rocks and regoliths that hold clues to past water activity on Mars.

On March 23, 2004, a news conference was held announcing “major discoveries” of evidence of past liquid water on the Martian surface.  A delegation of scientists showed pictures and data revealing a stratified pattern and cross bedding in the rocks of the outcrop inside a crater in Meridiani Planum, landing site of Opportunity. This suggested that water once flowed in the region. The irregular distribution of chlorine and bromine also suggests that the place was once the shoreline of a salty sea, now evaporated.

A further graphic example is shown in images of a thin fin on the edge of a rock in “Victoria Crater”, taken by Opportunity.  The fin was rich in hematite, a mineral that often forms in the presence of water.  Long ago, water circulating through a crack in the sandstone may have dissolved some of the surrounding material and filled the crack with mineral deposits. The filling resisted weathering while the surrounding rock eroded away.  Today the fin marks a place that used to be empty, and the space around it used to be rock!

In recognition of the vast amount of scientific information amassed by both rovers, two asteroids have been named in their honor: 39381 Spirit and 39382 Opportunity.

A winter holiday postcard

Does life exist on Mars now?  Has life existed at any stage in the past? These questions are unanswered at present.  A successful mission by Curiosity will certainly bring an answer closer.  At the same time, this mission is a small incremental step in what is surely the great human endeavor of extra-planetary exploration.  Curiosity’s mission is just beginning, so let’s conclude with a message from the perennial Opportunity, still photographing while wintering in Greeley Haven – a panorama postcard – “wishing you were here, drop by when you are in the neighborhood.”

Greeley Haven, Mars. Photo montage credit: NASA

This was originally posted on the Australian Science blog on July 26, 2012 as: “Is there life on Mars? Sojourner, Spirit, Opportunity and Curiosity go roving.”


It is the year 2023 and humans have settled on Mars

Posted June 24, 2012 By Kevin Orrman-Rossiter

Do you wish to become a Martizen, a citizen of Mars, anytime in the near future?  If you are serious about this, then Dutchman Bas Lansdorp, CEO of Mars One, is your man.

Bas Lansdorp is a person with an audacious ambition.  Through his company, Mars One, he plans to establish the first human settlement on Mars by April 2023.  In addition to this he intends that a new team of four settlers will join the Martian settlement every two years.  By 2033 there will be over twenty people living, working, and they believe, flourishing on Mars, their new home.

If the Mars One publicity is believable, and on this point there is no real reason to doubt it, organizing a manned mission to Mars has been Bas Lansdorp’s dream for many years.  He has been working on Mars One with partner Arno Wielders since January 2011.  During 2011 they held confidential discussions with possible equipment suppliers to ensure that there was reality in their idea.  In May 2012 they announced their vision to the world.

Like any large entrepreneurial venture, their success will be predicated on the skill, experience and credibility of the venture and the people involved.  To be credible they will need to be convincing in at least four aspects of the venture: technological, financial, psychological and, finally, ethical.  They will need to be convincing in a way that engages and excites both investors and participants.

It is rocket science

Getting to Mars is not trivial; if it were, well, I expect there would be more than the spectacular array of super and superannuated NASA rovers there currently is on Mars.  Mars One have developed, and made integral to their model, a simple theme for getting to and living on Mars: buy already-developed technology from existing component manufacturers.

Take the Falcon Heavy lifter from SpaceX to boost the components into low Earth orbit.  Combine a SpaceX Dragon capsule as the landing stage, add a transit living module from Thales Alenia Space, and attach two propellant stages, a variant of the SpaceX Falcon 9 upper-stage rockets, and you have the vehicle to get from low Earth orbit to orbit around Mars via a Hohmann transfer trajectory.
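
The travel time for a Hohmann transfer can be estimated from Kepler’s third law. A sketch, assuming idealised circular, coplanar orbits at 1.000 AU (Earth) and 1.524 AU (Mars); the ideal result comes out near eight and a half months, and actual mission designs trade travel time against launch energy, hence figures like the seven months quoted below:

```python
# Ideal Hohmann transfer from Earth's orbit to Mars' orbit.
# Assumed circular, coplanar orbits; radii in astronomical units.
R_EARTH_AU = 1.000
R_MARS_AU = 1.524

# The transfer ellipse touches both orbits, so its semi-major axis
# is the average of the two orbital radii.
a_transfer_au = (R_EARTH_AU + R_MARS_AU) / 2

# Kepler's third law in AU and years: orbital period T = a**1.5.
# The transfer covers half the ellipse, hence half a period.
transfer_years = 0.5 * a_transfer_au ** 1.5
transfer_days = transfer_years * 365.25
print(f"Ideal Hohmann transfer time: {transfer_days:.0f} days")
```

The attraction of the Hohmann trajectory is that it is (near enough) the minimum-energy route, which is exactly what a budget-constrained mission wants.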

The seven-month trip to Mars will be Spartan: similar to, but more cramped than, current conditions experienced on the International Space Station.  This is where rigorous training will first pay off:

“Showering won’t be an option; instead they will have to make do with wet wipes like the International Space Station astronauts.  Tinned food only, constant noise from the ventilators and equipment and a regimented routine of three hours of exercise a day to keep up muscle mass all add to their trials.  If they are hit by a solar storm they will have to take refuge in the shelter area of the rocket, which provides the best protection, for as long as several days.”

When the first four settlers land on Mars in April 2023 they will arrive at an established site.  They will be picked up from their SpaceX Dragon capsule and taxied to the settlement by two robotic Mars rovers designed and built by MDA Space Missions.  Getting to this point requires an ambitious and tight timeline.

2013 Settler selection begins.  A replica of the Mars settlement is built in a desert on Earth to help the settlers prepare and train, and to provide a realistic environment in which to test the equipment.  The settler selection and the preparations in the simulated Mars base will be broadcast on television and online for the public to view.
2014 Preparation for the supplies mission.  Production of the first Mars communication satellites.
2016 January launch of the supplies mission, landing in October, includes the first habitat module (modified Dragon capsule) and 2500kg of supplies.
2018 First robotic rover lands (again in a modified Dragon capsule) to enable the choice of the specific settlement site.
2021 A total of 2 robot rovers, 2 living units, 2 life support units and 2 supply units are now all present at the Mars settlement site.
2022 All H2O, O2, and atmosphere production will be ready before a go-ahead to launch the settlers.
2023 First 4 settlers arrive at the Mars settlement.
2025 Second group of 4 settlers arrive, to be no doubt enthusiastically greeted by the pioneering first four.

Once they arrive, there will be work for the settlers connecting up the various habitats.  Once complete, however, they will have substantial living space, more than 50m² each, equipped with showers, flushing toilets and kitchens.  The living units are a Dragon capsule with an inflatable living section supplied by ILC Dover, who have supplied NASA with space suits and landing bags for the previous Mars rovers Opportunity and Spirit.  The inflatable living sections are to be covered in Martian regolith to provide adequate radiation shielding.

Mars One

When moving around on the Martian surface the settlers will wear Mars suits, similar to the suits worn by the Apollo astronauts on the Moon.  These suits will be made by Paragon Space Developments, the same company who provide NASA with ‘extra-vehicular’ suits for when astronauts work in space outside the International Space Station.  By focussing on proven existing technologies, Mars One are certainly presenting a reliable, low-cost technology solution.  It is also deceptively simple.  Let us remind ourselves that this is a first; these conditions will be new.

For example, the first step to settlement, safely landing the settlers on Mars, is unproven at present.  NASA has described the process of entering the Red Planet’s atmosphere and slowing down to land as “six minutes of terror.”  Computer graphics of Mars landings, in full colour and exquisite detail, do not convey the simple fact that landing payloads large enough to carry humans and sustain their survival on the Red Planet is still beyond our capability.  Currently NASA expects to have testable solutions to this some time in 2014.

Similarly, we could look at the Mars suits and ask: repairs? Replacements?  These will be an absolute necessity for survival, yet you won’t be able to buy a replacement online or wander down to the high street shops to get an upgraded model, or a new one for a growing Martizen child.

Competent and knowledgeable engineers and specialists, as well as countless armchair experts, will no doubt be picking apart the technology of the Mars One mission, as I have just briefly done.  There is no doubt that each step of the timetable above holds a myriad of ‘first-time’ problems that will require solutions, some of which can be inferred, some of which will only become apparent as the experience proceeds.  I hope that all involved have read Gregory Benford’s 1999 novel, The Martian Race, a gripping primer on life on Mars.

Show me the money

Mars manned mission. Image credit: NASA

Getting to Mars is not cheap.  Since the late 1940s there have been many proposals for manned exploration of, and settlements on, Mars.  A commonality is that they are all pitched 10-20 years in the future, and large sums of money are mentioned.  To put this into today’s context: on August 6, 2012 (EDT), NASA’s Mars rover Curiosity will land on Mars.  This mission will place an 899kg six-wheeled, un-manned science laboratory on Mars, for an approximate mission cost of US$2.5B.  It is expected that a 2030s NASA mission to Mars will be of the order of US$20B.  Mars One says it will cost them US$6B to put the first four settlers on Mars.

In many ways focusing on the mission cost is a furphy.  NASA mission budgets come from the US public purse, and there is always great argument in the US Senate about the value of such publicly funded scientific enterprise.  In the US this argument is always balanced by the technology and enterprise such spending brings to US companies and the economy.  Mars One have no such public funding in mind.  They intend to buy the above technologies based on price and quality, not through political or national preferences.

Colonisation of Mars 2023, Mars One. Image Credit: Ariukux

The ability to fund such a mission will depend on what value it returns for investors.  Here is the Mars One point of difference: funding will be via sponsorship, and as the world’s largest media event.  If I were a settler, having ILC Dover and Paragon Space Development on my living and life-support modules would be more reassuring than IKEA.  As for the thought of a seven-month trip to Mars eating McDonald’s pre-prepared ‘meals’: that would be unpalatable.  Choose your sponsors wisely, Mars One.

There are no stated scientific or economic goals.  Instead they see it this way:

“A manned mission to Mars is one of the most exciting, inspiring and ambitious adventures that mankind can take on.  We see this as a journey that belongs to us all, and it is for this reason that we will make every step one that we take together.  This will also be our way to finance the mission: the mission to Mars will be the biggest media event ever!  The entire world will be able to watch and help with decisions as the teams of settlers are selected, follow their extensive training and preparation for the mission and of course observe their settling on Mars once arrived.  The emigrated astronauts will share their experiences with us as they build their new home, conduct experiments, and explore Mars.  The mission itself will provide us with invaluable scientific and social knowledge that will be accessible to everyone, not just an elite select few.”

To assist in creating this worldwide media frenzy, Mars One has enlisted Paul Römer as an ambassador.  An established expert at grasping the attention of a global public, he was co-creator of the worldwide phenomenon “Big Brother”, the television program that revolutionized reality television.

The 24/7 Martizen lab-rat

More than the tangibles of this venture, I believe it will be the intangible elements that make this a standout human endeavour, especially the ethics and psychology of the Martizens being media fodder 24/7.  A previous article has already questioned the ethics of such, admittedly voluntary, surveillance.

The psychology of such surveillance is fascinating and worrying.  Even the most extroverted of people have private lives.  Only the totally naive display ‘real’ faces through the public media.  Media such as Facebook display a mixture of unconscious representations, as well as carefully and foolishly contrived facets of our lives.  In many cases events are morphed and selectively recorded on media such as Facebook and Twitter.  It is one thing to post to your Facebook friends; it is quite a different thing to know that all you do will be on display for a public you do not know.

It is hopefully obvious that the narcissist, wastrel, celebrity personalities that populated the many versions of Big Brother are not what will make a great four-person team on Mars.  I am also happy to be labelled an ‘elitist’ and state that public participation in the selection process, such as voting off someone you don’t like, will be a disaster for a serious mission.

I am unsure how history’s first off-world conception, birth and death will go as media events.  While I can appreciate the lure for marketers of such landmark voyeuristic events, I am at the same time unsure how the participants in such private events will feel.

Mars500 crew 1 year into experiment. Photo credit ESA

There is, psychologically, a world of difference between the isolation experienced in genuine remote exploration (think Antarctica) and the pseudo-isolation of a contrived event with a definite endpoint (think Big Brother and Survivor).  The Marsonauts of Mars500 ended their 17-month-long isolation experiment with smiling faces.  The European Space Agency’s Directorate of Human Spaceflight has a long tradition of conducting research on the physiological and psychological aspects of spaceflight.  In light of this, ESA undertook the Mars500 cooperative project with the Russian Institute for Biomedical Problems (IBMP) in Moscow in 2010-11.  This all-male crew experiment is instructive, and illuminating for Mars One; however, no matter how ‘isolated’ Moscow may feel, like the people in the Big Brother household, the crew could, had they chosen, have left at any stage.

Despite this, a key science project during Mars500 was to determine the implications of the personal values held by individual crew members for compatibility, or otherwise, within the group as a whole, and for individual coping strategies and adaptation during long-lasting confinement.  On a human exploration mission to Mars, the psychological resilience of the crew will play a critical role in the maintenance of health and performance, and hence the success of the mission.  One factor impacting psychological resilience is the personal values of crew members, which define their motivational goals and attitudes.  Crew member selection is for real, not a game where, if a poor choice is made, they leave the set or you re-boot the computer.

It’s a one-way trip

That is one clear distinction: this is a one-way journey.  Since returning astronauts from the surface of Mars is one of the most difficult, and expensive, parts of a Mars mission, the idea of a one-way trip has been proposed several times.  The notion of settlers, rather than expedition astronauts, changes the technology and psychology of the mission.

A one-way trip scenario has been seriously proposed a number of times since 1998, including a 2004 proposal by Paul Davies.  Another organisation, Mars to Stay, proposed that the astronauts first sent to Mars should stay there indefinitely, both to reduce mission cost and to ensure permanent settlement of Mars.  Among many notable Mars to Stay advocates, former Apollo astronaut Buzz Aldrin is a particularly outspoken promoter who has suggested in numerous forums “Forget the Moon, Let’s Head to Mars!”

During a 2009 public hearing of the U.S. Human Space Flight Plans Committee, at which Robert Zubrin presented a summary of the arguments in his book The Case for Mars, dozens of placards reading “Mars Direct Cowards Return to the Moon” were placed throughout the Carnegie Institute.  The passionate uproar among space exploration advocates – both favourable and critical – is an indication of the interest in Mars exploration.

I find the Mars to Stay idea appealing and compelling for both economic and safety reasons.  More emphatically, I find it a representation of the spirit of human exploration and discovery.  Personally, it is also a fulfilment of the ultimate mandate by which manned space programs (US, European, Russian, Indian, Japanese and so on) are sold, at least philosophically and long-term: as a step to colonizing other worlds.  I hope that Mars One either credibly fulfils this trust or propels alternative programs that deliver human settlement of Mars via a well-defined (i.e. non-suicidal) exploration program.

This article was first published on the Australian Science blog on June 19, 2012 as “In the year 2023, and humans are on Mars for all to see.”


Here be Dragons

Posted May 3, 2012 By Kevin Orrman-Rossiter

On May 11, a Dragon will mate with the International Space Station.  Rather than some mythical creature, this Dragon is of human artifice.  The Dragon’s rendezvous and berthing with the International Space Station presages a new chapter in the human exploration of space.

The significance of this event is that Dragon is a reusable spacecraft, developed and built by the American company Space Exploration Technologies, or SpaceX as it is more commonly known.  Established in 2002, SpaceX has developed a new family of launch vehicles and cargo and crew capsules from the ground up.

The commercial race to space

NASA has now “set its sights on exploring once again beyond low earth orbit.”  This gives private industry the opportunity to take on routine access to space and resupply of the International Space Station.  From the U.S. perspective, the first phase of this strategy is known as Commercial Orbital Transportation Services.  This program, announced on January 18, 2006, funds and co-ordinates the delivery of cargo and crew to the International Space Station.

This photograph, taken by one of the Expedition 30 crew members aboard the International Space Station from approximately 384km above the southeastern Tasman Sea, is believed to be the one-millionth still image recorded by space station crews. The view focuses on an area just west of the south end of South Island, New Zealand, and was taken about 3:19 a.m. New Zealand time, March 7, 2012. A Russian Soyuz and a Russian Progress vehicle are seen center and right in the foreground, respectively. Photo credit NASA.

After a series of competitions and capability demonstrations in 2006 and 2008, two companies were chosen by NASA to receive funding via this program.  Initially, in 2006, SpaceX and Rocketplane Kistler were awarded agreements through to 2010.  However, the agreement with Rocketplane Kistler was terminated in 2007 when it failed to raise sufficient private equity funds.  In a second round of competition, in 2008, Orbital Sciences Corporation was awarded the second group of agreements.

Orbital are no newcomers to the space arena.  An American company specializing in the manufacture and launch of satellites, its Launch Systems Group is heavily involved with missile defence launch systems.  Since its inception in 1982, Orbital Sciences has built 569 launch vehicles, with 82 more to be delivered by 2015; the company has also built 174 satellites since 1982, with 24 more to be delivered by 2015.  With the tag “Innovation you can count on”, Orbital present a solid commercial face for routine access to, and resupply of, the International Space Station.

SpaceX, on the other hand, present a more entrepreneurial face to the world.  This is due, no doubt, to the strong influence of its founder, Elon Musk.  Musk notably made his mark with the creation and sale of the companies Zip2 and PayPal.  His current ventures include the listed Tesla Motors and the unlisted SolarCity.  His ambition for SpaceX is obvious in the type of development program they are undertaking.

SpaceX has the Falcon family of launch vehicles and their Dragon cargo and crew capsule.  SpaceX is “based on the philosophy that simplicity, low cost and reliability go hand in hand.”  The sales blurb on their website emphasizes that “we recognize that nothing is more important than getting our customer’s satellite or other spacecraft safely to its intended destination.”  Make no mistake: this is a commercial enterprise in that “can do” American entrepreneurial model.

In case you were wondering, for a mere US$10.9M a Falcon 1 could deliver your 1010kg payload into a 185km circular low Earth orbit.  Alternatively, US$54.0M gets you 10,450kg into low Earth orbit, or 4,540kg into a higher geosynchronous Earth orbit, via the larger Falcon 9.  The still-under-development Falcon Heavy is intended to deliver a massive 53,000kg into low Earth orbit – more than twice the payload of a Space Shuttle – all for US$83-128M per launch.  The key point is that this will save the U.S. many billions of dollars over the agreement periods.
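
Those figures reduce to a revealing cost per kilogram to low Earth orbit. A back-of-envelope comparison using only the numbers quoted above (taking the midpoint of the quoted Falcon Heavy price range; these are the article's 2012 figures, not current pricing):

```python
# Price and payload-to-LEO figures as quoted in the text (2012 US dollars).
vehicles = {
    "Falcon 1": (10.9e6, 1_010),
    "Falcon 9": (54.0e6, 10_450),
    "Falcon Heavy": ((83e6 + 128e6) / 2, 53_000),  # midpoint of quoted range
}

for name, (price_usd, payload_kg) in vehicles.items():
    print(f"{name}: ~${price_usd / payload_kg:,.0f} per kg to LEO")
```

The economy of scale is striking: on these numbers the Falcon Heavy undercuts the Falcon 1 by roughly a factor of five per kilogram.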

These flights are not to be confused with the sub-orbital aspirations of companies such as Virgin Galactic and Copenhagen Suborbitals, whose goal is to provide space-tourism experiences at lower altitudes.  Virgin Galactic, for example, offer a US$200,000 sub-orbital experience, sometime in 2014.  To keep this in perspective: low Earth orbit begins at a minimum of 160km, the International Space Station orbits at 378km, the Space Shuttle could achieve 378km, and Virgin Galactic’s SpaceShipTwo will get you to 109km above sea level.  This experience includes a three-day training period with your fellow astronauts, then a parabolic trajectory flight that will take you to the edge of the atmosphere, where the sky changes from blue to cobalt to finally black and you experience a period of weightlessness, then an assisted glide back to Earth, much like a Shuttle re-entry.

Enter, the Dragon

It is the Dragon capsule that most clearly distinguishes the vision of SpaceX from that of Orbital.  Dragon is a free-flying, reusable spacecraft made up of a pressurized capsule and an unpressurized trunk, used to transport pressurized cargo, unpressurized cargo, and/or crew members from Earth to low Earth orbit.

Dragon Spacecraft with Solar Panels deployed. Image credit NASA/SpaceX

The Dragon can be rapidly transitioned from cargo to crew capability, the cargo and crew configurations being almost identical.  The exceptions are the crew escape system, the life support system, and the onboard controls that allow the crew to take over from the flight computer when needed.  This focus on commonality and modular construction has minimised the design effort and simplified the human-rating process.  It allows systems critical to the space station, as well as to future Dragon crew safety, to be fully tested on uncrewed demonstration flights.

Dragon is designed for the cargo and crew requirements of the International Space Station.  As a free-flying spacecraft Dragon also provides a platform for in-space technology demonstrations, scientific instrument testing, and the extension to lunar and planetary landings.

Expedition 30/31 of the International Space Station

Following the completion of NASA’s flight readiness review, on April 16, 2012, SpaceX was ready to launch on Monday, April 30.  On April 23 a delay was called until the next available launch slot on May 7.  The delay was caused by the need for more software and hardware testing.  The testing is designed to validate the Dragon’s ability to safely fly in close proximity to the space station, a tightly-controlled operating sphere requiring redundant hardware systems, fault-tolerant computers and robust software.

The Falcon 9 rocket, carrying the Dragon capsule, will lift off from Space Launch Complex 40 at the Cape Canaveral Air Force Station in Florida.

This launch will certainly be a noted event.  In recent times NASA has created NASA Tweetups around launches; the launch of the Mars Science Laboratory in November last year, for example, had simultaneous Twitter gatherings in both the US and Australia.  For this SpaceX launch over 1,600 people applied for the 50 spaces available to be present at the launch.  Space exploration launches are getting cult followings.

During the flight, SpaceX’s Dragon capsule will conduct a series of checkout procedures to test and prove its systems, and then on May 9 it will perform a “fly-under” of the International Space Station.  This fly-under will come within 2.4km of the International Space Station to validate the operation of sensors and flight systems necessary for a safe approach and rendezvous.  The spacecraft also will demonstrate the ability to abort the rendezvous.  After these capabilities are successfully proven, the Dragon will be cleared to berth with the International Space Station on May 10.

Meanwhile, onboard the International Space Station, flight engineers Don Pettit and Andre Kuipers are training for the arrival of the Dragon capsule.  They will use the Canadarm2 to retrieve Dragon and berth it to the station’s Harmony node.  The next day, May 11, the hatch will be opened and one of the crew, perhaps Expedition 30 commander, NASA astronaut Dan Burbank, will be the first to enter the commercial spacecraft.  It will then be unloaded and eventually filled with trash.  After 18 days of docked operations the duo of Pettit and Kuipers will detach and release Dragon for its splashdown in the Pacific Ocean, 400km off the U.S. west coast.

That is one of the first key differences between SpaceX and both Orbital and the non-commercial operators.  A trash-filled Russian Progress 46 spacecraft departed from the International Space Station on April 19, 2012.  Its Russian flight controllers will command the Progress 46 through several days of tests, and then send it to burn up in Earth’s atmosphere over the Pacific Ocean.  In addition, the International Space Station has had fuel, water, and food deliveries from Japanese and European Space Agency craft.  Before the completion of Expedition 30/31 the first delivery from an Orbital Cygnus craft is also expected.  These craft, including Cygnus, are single-use, deliberately burned up on re-entry to the Earth’s atmosphere.

What next after the grocery deliveries?

This May Dragon flight is the first of 12 NASA-scheduled resupply flights by SpaceX to the International Space Station using the Falcon 9/Dragon combination, a US$1.6B contract.  SpaceX also has a solid book of satellite launches through to the end of 2015.  If SpaceX is “cash flow positive”, then Elon Musk expects to make a listing, an initial public offering, of SpaceX sometime in 2013.

In addition, SpaceX is looking at a number of sites, including Texas, Alaska, California, Virginia, and Florida, on which to build a commercial spaceport.  One of the recent sites discussed in a Federal Aviation Administration environmental review document was near Brownsville in Texas.  This private site is located in Cameron County, southern Texas.  If a launch facility were built here, all rockets departing the installation would head east, over the Gulf of Mexico.  This path would enable the Dragon spacecraft to reach the International Space Station.

During the day-long test of the engineering prototype, SpaceX and NASA evaluators participated in human factors assessments which covered entering and exiting Dragon under both normal and contingency cases, as well as reach and visibility evaluations. Test crew included (from top left): NASA Crew Survival Engineering Team Lead Dustin Gohmert, NASA Astronaut Tony Antonelli, NASA Astronaut Lee Archambault, SpaceX Mission Operations Engineer Laura Crabtree, SpaceX Thermal Engineer Brenda Hernandez, NASA Astronaut Rex Walheim, and NASA Astronaut Tim Kopra. Photo: Roger Gilbertson / SpaceX

All the currently planned flights are unmanned, but SpaceX is already developing a manned version of the Dragon capsule.  It recently completed another important milestone: the first NASA Crew Trial, one of two crew tests in SpaceX’s work to build a prototype Dragon crew cabin.  For this milestone SpaceX demonstrated that the new crew cabin design would work well for up to seven astronauts in both expected and unusual scenarios.  It also gave SpaceX engineers the opportunity to gain valuable feedback from both NASA astronauts and industry experts.

Excitingly, and true to its entrepreneurial spirit, SpaceX has already sold its first launch to the Moon: a lunar mission that gives Pittsburgh-based Astrobotic Technology, a Carnegie Mellon University spin-off, an early lead in a US$32M race to land a privately owned rover on the lunar surface.  The contract, announced on May 6, 2011, reserves a SpaceX Falcon 9 rocket to fly Astrobotic Technology’s lander and rover to the Moon as early as December 2013.

Mars is also firmly in the sights of SpaceX, even if only at the conjecture stage at present.  It was reported in 2011 that NASA science hardware would fly to Mars aboard SpaceX’s Dragon capsule.  This so-called “Red Dragon” mission could be ready to launch by 2018, and would carry a cost of about US$400M or less.  Astrobiologist Chris McKay, of NASA’s Ames Research Centre, and his colleagues are developing the Red Dragon concept as a potential NASA Discovery mission, a category that stresses exploration on the cheap.  NASA is currently vetting three Discovery candidates, one of which it will choose for a 2016 launch.  That mission will be cost-capped at US$425 million, not including the launch vehicle.

This still from a SpaceX mission concept video shows a Dragon space capsule landing on the surface of Mars. Image credit SpaceX.

Red Dragon is not in that group of three finalists, but NASA will make another call for Discovery proposals, and McKay and his team plan to be ready for it.  If Red Dragon is selected in that round, it could launch toward Mars in 2018.  Assuming the US$425 million cap is still in place, Red Dragon could come in significantly under the bar.

In comparison to the proposed costs of this real expedition Disney recently spent over US$350M on a Mars sci-fi flop.  Disney spent US$250M to make “John Carter” and a further US$100M to market it, making an estimated US$200M loss, Hollywood’s largest loss ever on a film.

It is not clear at present what will happen with NASA’s Mars aspirations after recent Congressional budget decisions.  Nonetheless, rocket entrepreneur Elon Musk believes he can get the cost of a round trip to Mars down to about US$500,000.  The SpaceX CEO says he has finally worked out how to do it, and would reveal further details later this year or early in 2013.

Letting a Dragon loose into space has certainly released a fiery breath of fresh air into space exploration.

Originally published on Australian Science on April 25, 2012.


The Earth just aged a little bit more

Posted April 12, 2012 By Kevin Orrman-Rossiter
This composite image of the Tycho supernova remnant combines X-ray and infrared observations obtained with NASA's Chandra X-ray Observatory and Spitzer Space Telescope, respectively, and the Calar Alto observatory, Spain. It shows the scene more than four centuries after the brilliant star explosion witnessed by Tycho Brahe and other astronomers of that era.  Credit: X-ray: NASA/CXC/SAO, Infrared: NASA/JPL-Caltech; Optical: MPIA, Calar Alto, O.Krause et al.

Have you ever had a moment when a person responds to you in a way that makes you feel a little older than you did before?  You comment, for example, about a music group to someone, only to be met with an incredulous stare conveying that their parents liked that music, and that you must be a little older than you at first appeared.

An international research team just gave the Earth such a moment.  The researchers did this not by experimenting on musicians, but by measuring the radioactive decay of samarium-146, one of the isotopes used to chart the evolution of the Solar System.

By using a more precise technique to remeasure the half-life of samarium-146, they shrank the chronology of early events in the Solar System, like the formation of planets, into a shorter time span.  It also means some of the oldest rocks on Earth would have formed even earlier, with some Australian rocks forming as early as 120 million years after the Solar System itself.

Understanding how a seemingly simple measurement, such as the half-life of samarium-146, can have such far-reaching results will take us on an exhilarating journey through many areas of science.

How did our Solar System form?

According to current theory, everything in our Solar System formed from stardust several billion years ago.  Some of this dust was formed in giant supernova explosions, which supplied most of the heavy elements for the objects that make up our Solar System.  The synthesis of the elements we see on Earth, in rock samples from the Moon and Mars, as well as in meteorites and asteroids, is a subject of great interest.  By understanding the physics of the nucleosynthesis of the isotopes of these elements, it has become obvious that the dust and molecules that coalesced to form our Solar System came from a number of different processes.

Proto-planets. Image credit: NASABlueshift

The formation of the terrestrial planets (the rocky planets Mercury, Venus, Earth, Mars and their respective moons) is generally divided into three major stages based on the different physical processes involved and their respective time scales: (1) the stardust aggregates into planetesimals, like individuals forming into swarms of nomadic tribes; (2) runaway and oligarchic growth of embryos from planetesimals results in several tens to 100 Lunar- to Mars-mass embryos embedded, like mediaeval barons, in a swarm of remnant planetesimals; and (3) the final stage of terrestrial planet formation proceeds by high-velocity impacts between embryos over a span of ~10-100 million years, forming the planets as we know them.

The Allende meteorite and the age of the Solar System

The age of the Solar System can be defined as the time of formation of the first solid grains in the nebular disk surrounding the proto-Sun.  This age is estimated by dating calcium-aluminium-rich inclusions in meteorites.  All chronology, by convention, is referenced to T0, which is the abbreviation for the age of the oldest known solid material in the solar nebula.

Scientists have found that calcium-aluminium-rich inclusions are some of the oldest objects in the solar system.  These inclusions, roughly millimetres to centimetres in size, are believed to have formed very early in the evolution of the solar system and to have had contact with nebular gas, either as solid condensates or as molten droplets.

Relative to planetary materials, calcium-aluminium-rich inclusions are enriched with the lightest oxygen isotope and are believed to record the oxygen composition of solar nebular gas where they grew.  Calcium-aluminium-rich inclusions, at 4.57 billion years old, are millions of years older than more modern objects in the solar system, such as planets, which formed about 10-50 million years after them.

In recent research, a US team led by Justin Simon from NASA Johnson Space Centre and University of California Berkeley, studied a specific calcium-aluminium-rich inclusion found in a piece of the Allende meteorite.  Allende is the largest carbonaceous chondrite meteorite ever found on Earth.  It fell to the ground in 1969 over the Mexican state of Chihuahua and is notable for possessing abundant calcium-aluminium-rich inclusions.

Carbonaceous chondritic meteorites are stony meteorites that have not been modified due to melting or differentiation of the parent body.  They formed in oxygen-rich regions of the early, first stage, Solar System so that most of the metal is not found in its free form but as silicates, oxides, or sulfides.  Most of them contain water or minerals that have been altered in the presence of water, and some of them contain larger amounts of carbon as well as organic compounds.  The Allende meteorite is a ‘pristine’ meteorite, so called because its provenance is known.  It was found and sampled under conditions that precluded contamination from terrestrial chemicals and minerals.

Their findings imply that calcium-aluminium-rich inclusions formed from several oxygen reservoirs, likely located in distinct regions of the solar nebula.  Calcium-aluminium-rich inclusions travelled within the nebula by lofting outward away from the sun and then later falling back into the mid-plane of the Solar System or by spiralling through shock waves around the Sun.

Through oxygen isotopic analysis, the team found that the meteorite material surrounding the calcium-aluminium-rich inclusion shows that, late in its evolution, the inclusion was in a nebular environment distinct from where it originated.  This latter region was closer in composition to the protoplanetary disk, the environment in which the building materials of the terrestrial planets formed.  A protoplanetary disk is an area of dense gas surrounding a newly formed star; in this case, the calcium-aluminium-rich inclusion formed when our Sun was quite young.


The formation of the Solar System as we know it today was a complex and dynamic process.  The protoplanetary disk evolves through accretion onto the star, with particles and molecules gravitationally attracted to the proto-Sun.  Each particle’s infall was mediated or damped by collisions and the viscous drag of the gaseous nebula, coupled with an outward ‘fling’ due to its angular momentum.

Radioactive dating the age of the Solar System

Timescales of early Solar System processes rely on precise, accurate and consistent ages obtained with radiometric dating.  The relative abundance of different nuclei and their correlation or non-correlation with models of their formation and their radioactive decay provide a series of clocks to determine when and how material was formed.

Recent advances in instrumentation now allow scientists to make more precise measurements.  Some of these measurements are revealing inconsistencies in the ages of samples as well as clearing up existing inconsistencies.

For example, recent analysis, by Audrey Bouvier and Meenakshi Wadhwa from Arizona State University, of the meteorite, Northwest Africa 2364, found that the age of the Solar System predates previous estimates by up to 1.9 million years.  They used a radioactive chronometer based on the decay of isotopes of uranium to lead.

By using this lead-lead dating technique these researchers were able to calculate the age of a calcium-aluminium-rich inclusion contained within the Northwest Africa 2364 chondritic meteorite.  In lead-lead dating the 207Pb/206Pb isotope ratio is measured; lead-207 and lead-206 are the decay products of the uranium isotopes 235U and 238U respectively.
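For the curious, the lead-lead age follows from the fact that 235U decays much faster than 238U, so the radiogenic 207Pb/206Pb ratio grows with age.  A minimal sketch, using standard decay constants and the conventional present-day 238U/235U ratio (textbook values, not figures from this study), inverts the ratio for an age:

```python
import math

# Standard decay constants (per year) and the conventional present-day
# 238U/235U ratio; these are general-literature values, not the article's.
L235 = 9.8485e-10   # uranium-235 -> lead-207
L238 = 1.55125e-10  # uranium-238 -> lead-206
U238_U235 = 137.88

def radiogenic_pb_ratio(t):
    """Radiogenic 207Pb*/206Pb* ratio for a closed system of age t years."""
    return (math.exp(L235 * t) - 1) / (math.exp(L238 * t) - 1) / U238_U235

def pb_pb_age(ratio, lo=1e9, hi=6e9):
    """Invert the ratio for an age by bisection (the ratio grows with age)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if radiogenic_pb_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A measured 207Pb/206Pb ratio near 0.625 corresponds to an age of
# roughly 4.57 billion years, consistent with the ages quoted here.
print(pb_pb_age(0.625) / 1e9)
```

The sensitivity of the exponentials is what makes this chronometer so precise: a small change in the measured ratio shifts the computed age by only a few million years out of four and a half billion.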

The study’s findings fix the age of the Solar System at 4.5682 billion years old, between 0.3 and 1.9 million years older than previous estimates.  This relatively small revision to the currently accepted age of about 4.56 billion years is significant since some of the most important events that shaped the Solar System occurred within the first ~10 million years of its formation.

This relatively small age adjustment means that there was as much as twice the amount of iron-60, a short-lived isotope of iron, in the early Solar System as previously determined.  This higher initial abundance of the isotope in the Solar System can only be explained by supernova injection.  The researchers believe this supernova event, and possibly others, could have triggered the formation of the Solar System.  By studying meteorites and their isotopic characteristics, they bring new clues about the stellar environment of our Sun at birth.

Planetary formation from a solar nebula. Image credit NASA

This work also helps to resolve some long-standing inconsistencies in early Solar System time scales as obtained by different high-resolution chronometers.  The story is not yet complete: it will be important to conduct high-precision chronologic measurements of calcium-aluminium-rich inclusions from other pristine meteorites.  We also need to understand why the calcium-aluminium-rich inclusions measured previously from two other chondritic meteorites, Allende and Efremovka, have yielded younger ages.

One significant aspect of this study is that it is the first published lead-lead isotopic investigation that takes into account the possible variation of the uranium isotope composition.  Earlier work conducted in Wadhwa’s laboratory by a graduate student Gregory Brennecka, in collaboration with Ariel Anbar, has shown that the uranium isotope composition of calcium-aluminium-rich inclusions, long assumed to be constant, can in fact be highly variable and this has important implications for the calculation of the precise lead-lead ages of these objects.

Using the relationship demonstrated by Brennecka and colleagues between the uranium isotope composition and other geochemical indicators in calcium-aluminium-rich inclusions, Bouvier and Wadhwa inferred a uranium isotope composition for the calcium-aluminium-rich inclusion for which they reported the lead-lead age.

This work can help researchers better understand the sequence of events that took place within the first few million years of the Solar System formation, such as the accretion and melting of proto-planetary bodies.  All these processes happened extremely rapidly, and only by reaching such a precision on isotopic measurements and chronology can we find out about these processes of planetary formation.

The importance of the half-life of the isotope samarium-146

As well as the lead-lead dating technique, the radioactive chronometer based on the isotope samarium-146 is of interest for this story.  Samarium-146, or 146Sm, is unstable and occasionally emits an alpha particle (a helium-4 nucleus), which changes the atom into an isotope of a different element, neodymium-142.

Because samarium-146 decays slowly, on a timescale of millions of years, many models use it to help determine the age of the Solar System.  It is particularly useful in models of terrestrial planet formation, rather than in the dating of calcium-aluminium-rich inclusions used to study early Solar System formation.

Although samarium-146 decays slowly, its half-life is still short compared to the timescale of Solar System evolution.  For a known number of atoms of any isotope, the number of years it takes for half of them to radioactively decay is called the half-life.  Since samarium-146 emits particles so rarely, it takes a sophisticated instrument to measure this half-life.  The half-life of samarium-146 allows it to be used to determine the time between the end of its synthesis in the early Solar System and its inclusion in a solid body.

What scientists look for are disparities in the relative abundances of samarium isotopes in terrestrial rocks, and in the relative abundances of samarium, neodymium, and the neodymium isotopes.  The reason for interest in the samarium-146 to neodymium-142 pair is that the half-life means any samarium-146 present at the time of solidification would no longer be available for observation at the present time; it will all have decayed to neodymium-142.  The isotopic composition of neodymium therefore varies with the amount of samarium present at solidification.
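The decay law behind all of this is simple.  As a minimal sketch, the fraction of an initial stock of samarium-146 remaining after a time t follows N(t)/N0 = 2^(-t/T½); with the newly measured 68-million-year half-life, essentially none survives to the present day:

```python
# Fraction of an initial stock of samarium-146 remaining after t years,
# using the newly measured 68-million-year half-life.
HALF_LIFE = 68e6  # years

def fraction_remaining(t_years):
    return 0.5 ** (t_years / HALF_LIFE)

# After ten half-lives (~680 million years) less than 0.1% remains;
# over the 4.57-billion-year age of the Solar System essentially none does.
print(fraction_remaining(680e6))   # ~0.001
print(fraction_remaining(4.57e9))  # effectively zero
```

This is why samarium-146 is called an extinct isotope: only its daughter, neodymium-142, is left to record how much of it a rock once contained.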

The researchers remeasured the half-life of samarium-146 using the Argonne Tandem Linac Accelerator System, in a collaboration that included Kanazawa University and the University of Tsukuba in Japan.  What they did was very clever and very precise.

First, they synthesised samples of samarium-146, in three independent nuclear reactions, from samples of isotopically enriched samarium-147.  The different techniques gave analysis samples with different contaminants and samarium-146 levels.  Second, they measured the decay of these samples over a period of months using highly accurate detectors.

The Argonne Tandem Linac Accelerator System was then used as a mass spectrometer, in two different experimental set-ups, to pick out the small number of samarium-146 atoms in the samples: one in tens of billions of atoms.  These measurements took into account contributions from contaminants such as neodymium-146, which caused problems in earlier experiments.  Neodymium-146 has the same atomic mass as samarium-146, and in mass spectrometric measurements the two cannot easily be separated.

By accurately counting the number of samarium-146 atoms and tracking the particles that the sample emits, the team came up with a new calculation for its half-life: just 68 million years.
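The principle of the measurement can be sketched in a few lines: if you can count the number of atoms N in a sample and measure its activity A (decays per unit time), then A = λN and T½ = ln(2)/λ.  The numbers below are purely illustrative, not the team’s actual data:

```python
import math

def half_life_years(n_atoms, decays_per_year):
    """Half-life from a counted atom population and its measured activity:
    A = lambda * N, so lambda = A / N and T1/2 = ln(2) / lambda."""
    decay_constant = decays_per_year / n_atoms
    return math.log(2) / decay_constant

# Illustrative numbers only: a counted population of 1e10 samarium-146
# atoms decaying at ~102 atoms per year implies a half-life near
# 68 million years.
print(half_life_years(1e10, 102) / 1e6)  # ~68 (million years)
```

The difficulty is entirely practical: with so few decays per unit of material, both the atom count (hence the accelerator mass spectrometry) and the activity must be measured with exquisite care.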

This is significantly shorter than the previously used values of 102.6 and 103.1 million years, from measurements in 1966 and 1987 respectively.  At the same time the result is closer to earlier measurements of ~50 million years and 74 million years, from 1953 and 1964 respectively.

A new samarium-146 half-life measurement; now what?

The new value patches some holes in current understanding.  The new time scale now matches up with a recent, precise dating taken from a lunar rock, and is in better agreement with dates obtained with other chronometers.

Applying this new half-life to rocks from Greenland and Australia gives them revised ages.  These rocks are now dated to be 50 million years older than previously thought; that is, they formed only 120 million years after T0, the time of Solar System formation, rather than the 170 million years of previous results.  Similarly, rocks from Quebec were found to be over 80 million years older than previous measurements: they are now found to have formed 205 million years, rather than 287 million years, after Solar System formation.  These results illustrate that the events that formed terrestrial rocks occurred much earlier than even recently thought.

Analyses of moon rock samples have also shown an increase in their ages, in this case by over 70 million years.  These are now found to have formed 170 and 175 million years, rather than 242 and 250 million years respectively, after Solar System formation.  These new lunar results now bring ages of these rocks, using two different chronometers, the samarium-146 and lead-lead techniques, into the same ranges.
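Why does a shorter half-life shrink these intervals?  For an extinct-nuclide chronometer, the inferred interval is Δt = (T½/ln 2)·ln(R0/R), which scales linearly with the adopted half-life.  A rough sketch of that rescaling (first-order only; the published revisions quoted above differ slightly, since the original studies were not all referenced to identical inputs):

```python
# First-order rescaling of 146Sm-based formation intervals when the
# half-life is revised from 103.1 to 68 million years.  An extinct-nuclide
# interval is proportional to the adopted half-life, so each old interval
# shrinks by the ratio of the two values.
OLD_HALF_LIFE = 103.1  # million years
NEW_HALF_LIFE = 68.0   # million years

def rescale(old_interval_my):
    return old_interval_my * NEW_HALF_LIFE / OLD_HALF_LIFE

# Old vs reported revised intervals (in million years after T0) for the
# terrestrial and lunar samples discussed above.
for old, reported in [(170, 120), (287, 205), (242, 170), (250, 175)]:
    print(f"{old} My -> ~{rescale(old):.0f} My (reported: {reported} My)")
```

The simple rescaling lands within 10-20 million years of each published revision, which is the right order: the remaining differences come from the reanalysis of the underlying isotope data, not just the half-life swap.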

The early days of Earth and the other terrestrial planets are looking quite different than previously thought.  All this is thanks to some precision measurements of the half-life of an extinct isotope of an exotic rare-earth element, samarium.


First published as: “These rocks just got a little bit older” on Australian Science.


There is no doubt in the mind of Australia’s Chief Scientist, Professor Ian Chubb, that the future will be shaped by science, technology, engineering and mathematics.  Unfortunately, he finds that at present the standing of science as an expert authority is being challenged.  Furthermore, Ian Chubb finds that the science message is getting lost in the white noise of the mainstream media.  I was heartened to hear his positive words about science communication, social media, science and technology education, and innovative Australian workplaces.

These were the messages from Ian Chubb at an address he gave as part of NICTA’s Big Picture Seminar series on Wednesday March 28, 2012 at the University of Melbourne.

It was refreshing to see Australia’s Chief Scientist out and about, addressing public forums such as this one.  Although, judging by the faces, the suits and the overheard conversations at the drinks and nibbles prior to the address, I think this was definitely a speech to the science and technology faithful.  That is a pity; his words were worth exposure and considered comment in the mainstream Australian media.

Prof. Ian Chubb at the Climate congress, Copenhagen 2009, March 10-12. Opening session.

Professor Ian Chubb emphasises that mathematics, engineering and science provide the enabling skills and knowledge that underpin every aspect of modern life.  They help us understand the natural world and enable us, as humans, to respond to this world with a constructed view aimed at improving the lot of humankind.

In Australia, as in many economies, we have observed a decline in the number of people choosing a career in these disciplines.  Not only that, the STEM subjects (Science, Technology, Engineering and Mathematics), as he called them, are taken for granted or simply ignored.  Yet it is obvious that, without at least an appreciation of these subjects, a modern citizen is hampered in their ability to critically evaluate and make informed decisions about the issues shaping their future.  Among his many roles as Australia’s Chief Scientist, Professor Ian Chubb has been charged with examining this decline and offering strategies to address it.

Professor Ian Chubb is eminently suited to this task.  He was appointed to the position of Chief Scientist on 19 April 2011 and commenced the role on 23 May 2011.  Prior to his appointment as Chief Scientist, Professor Ian Chubb was Vice-Chancellor of the Australian National University.  Professor Chubb’s research focused on the neurosciences, although he jokingly said on the night that he would prefer not to be quizzed on science specifics by such an informed audience.  He has co-authored some 70 full papers and co-edited one book, all related to his research.  In 1999 Professor Chubb was made an Officer of the Order of Australia (AO) for “service to the development of higher education policy and its implementation at state, national and international levels, as an administrator in the tertiary education sector, and to research particularly in the field of neuroscience”.  In 2006 he was made a Companion (AC) in the order for “service to higher education, including research and development policy in the pursuit of advancing the national interest socially, economically, culturally and environmentally, and to the facilitation of a knowledge-based global economy”.  I certainly expect to see some informed, well-researched and erudite outputs from his office over the next few years.

On the evening he emphasized that we in the audience have an important role to play.  He seized upon two, in my mind important aspects: the first was the quality and engagement of science teaching in schools and the second was science communication into the mainstream consciousness.

Ian Chubb sees a key way to develop a more scientifically and technologically literate Australia: enabling the sciences at the primary and secondary school level.  He sees this as the way to demonstrate how useful science, scientific concepts and processes are to everyday lives.  To do this we need inspirational teachers: teachers who are creative and imaginative and can make the subject interesting without being simplistic.  It still needs to be challenging; that is part of science.  He sees CSIRO outreach programmes as an integral part of this.  By tackling the problem at this level he believes we can make science relevant and part of community values.

The second part focused on scientists doing media better, and media doing science better.  This, he trusts, will change the current state of ill-informed debate about many subjects in the public media.  He, thankfully, pressed home that communicating science better needs to be a goal of practising scientists and of the learned science, technology, engineering and mathematics professions.  The existing discipline silos also hamper us from taking advantage of what we do have: the pieces of the jigsaw are still separated on the table rather than being used to advantage.

Ian Chubb is also a fan of social media.  I see eye-to-eye with him that social media can get science into the mainstream of people’s consciousness.  It can bring immediacy to the scientific process: the rigour, analysis and observation that are part of science in practice.  He also emphasized that we should take a PhD to mean “educated intelligent person” rather than the narrow view of “researcher”, as is commonly held.  The universities and commercial workplaces need to see these people as key to creating innovative workplaces, workplaces that will transform traditional Australian economies.

In conclusion, he saw that for Australia to become smarter, more competitive and more productive we need a cultural change: a change that enables people to understand that science and technology are good and a common cause, a change that needs to start right now.  I am enthused and ready; how about you?

Originally published on Australian Science March 30, 2012.


Who found the water on the Moon?

Posted March 28, 2012 By Kevin Orrman-Rossiter

At just over two tonnes, the second stage of an Atlas V rocket makes for an unusual ‘kinetic probe’.  Nonetheless on October 9, 2009 NASA deliberately impacted a spent Centaur rocket into the lunar south polar crater Cabeus.  The target area was a permanently shadowed region within this crater.  The impact, not surprisingly, ejected a spectacular plume of debris, dust, and vapour.

Science experiment: observe the system, perturb it, and measure what happens

The US scientists had thrown a heavy object at the Moon.  They then trained every instrument they could muster on the impact.  The prize was the direct detection of water on the Moon, the culmination of a decades-long search.

The impact would have been majestic to watch.  Picture those slow-motion images of Apollo astronauts on the Moon.  Hold that thought and then imagine the impact.  An observer could marvel at the slow-motion, low-gravity return of the dust and debris cloud to the Moon’s surface.  If you could see in the infrared, the impact flash lasts for 10 seconds.  A cloud of debris, dust, and vapour rises.  At eight seconds the ejecta cloud is 4.5km in diameter; in the ultra-violet spectrum, the plume is 10km in diameter.  At 20 seconds after impact the ejecta cloud is at its maximum diameter of 8.5km and the plume has shrunk to a little less than 10km.

The observer would be watching a science experiment on a grand scale.

The observer in this experiment was neither you nor I; it was a trailing “shepherding spacecraft”.  The Centaur had propelled NASA’s Lunar Reconnaissance Orbiter and Lunar Crater Observation and Sensing Satellite to the Moon.  Shortly after launch the Lunar Reconnaissance Orbiter separated to go on its own mission.  Once in lunar orbit the Centaur vented its remaining fuel.  Control was then assumed, for the next four months, by the Lunar Crater Observation and Sensing Satellite as the shepherding spacecraft.  During this period the shepherding satellite manoeuvred the Centaur to allow the Sun to bake out residual water and volatiles.  This was to ensure that no contaminant chemicals were passengers to the lunar impact site.  The Centaur’s fuel was a volatile combination of liquid hydrogen and liquid oxygen, both chemicals that were to be scanned for in the impact cloud.  The Lunar Crater Observation and Sensing Satellite also calibrated its instruments and then targeted the Centaur to impact the Moon.  Four months of meticulous preparation.

LCROSS spacecraft with Centaur stage, image credit NASA

The Lunar Crater Observation and Sensing Satellite carried nine instruments, including cameras, spectrometers, and a radiometer.  The spectrometers measured the reflected light at different wavelengths.  These enabled the identification of the chemicals present in the ejected cloud.

Near-infrared absorbance attributable to water vapour and ice, and ultraviolet emissions attributable to hydroxyl radicals (OH), supported the presence of water in the debris.  From these observations the researchers determined that the lunar regolith at the impact site was over 5% water ice by mass.  Certainly this is small by terrestrial soil standards, but more substantial than most earlier estimates.
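To put that figure in everyday terms, here is a quick back-of-envelope sketch, assuming exactly 5% ice by mass and liquid-water density (both simplifications; the paper quotes "over 5%"):

```python
# Back-of-envelope: water yield from regolith that is 5% water ice by mass.
ICE_FRACTION = 0.05        # LCROSS estimate quoted above: >5% by mass
regolith_kg = 1000.0       # one metric tonne of regolith

water_kg = regolith_kg * ICE_FRACTION
water_litres = water_kg    # ~1 kg per litre once melted

print(f"{water_kg:.0f} kg (~{water_litres:.0f} L) of water per tonne of regolith")
```

Fifty litres per tonne: meagre by terrestrial standards, but far from nothing for a would-be lunar base.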

Over a year after the impact, in the October 22, 2010 issue of the journal Science, the results of this experiment were delivered to the world’s attention.  This certainly marked a defining moment for lunar scientists, directly confirming the availability of water on the moon.  It was, however, neither the first nor the last word on the subject.

Cabeus crater LCROSS impact site, photo credit NASA

Early attempts

Since the first lunar samples were carried back to Earth by Apollo astronauts in the late 1960s, scientists had operated under the presumption that the moon was entirely dry.  In total 382kg of lunar material was brought to Earth by the Apollo mission astronauts, and a further 0.32kg by the unmanned USSR Luna missions.  New analyses of these rocks with improved analytical techniques have made it possible to perform highly sensitive isotopic measurements on very small lunar grains.  These analyses are revealing water in Apollo samples that were once thought to be dry.

Well before these new studies, scientists had been puzzling about why more water was not seen on the moon.  It was thought that volatile materials, such as water, could be accumulating at the moon’s permanently shaded polar regions.  Here they could be trapped for geological periods of time without significant loss.  Then in 1998, the orbiting Lunar Prospector spacecraft measured the abundance of elements on the moon’s surface using neutron spectroscopy.  This provided compelling evidence for enhanced hydrogen concentrations, and by inference water, at both of the lunar poles.

In 1999 the Cassini spacecraft flew by the moon on its way to Saturn.  It turned its Visual and Infrared Mapping Spectrometer to the moon.  By measuring the surface reflectance of light from the moon, scientists found absorption attributable to hydroxyl and water on the sunlit surface of the moon.  These results were not published until 10 years later, in October 2009, prompted by the renewed interest in water on the moon.

On October 22, 2008 the Indian Space Research Organisation launched Chandrayaan-1, on its lunar mission.  One of its major scientific missions was to look for water on the moon.  It had three different instruments ready to make 2008-10 an interesting period for lunar water exploration.

Chandrayaan-1, India’s lunar water finder

The Chandrayaan-1 story is told in detail elsewhere.  Here I intend to showcase the marvellous outcome of Chandrayaan-1’s water finding experiments.  Perhaps the most exciting of all these was one of the simplest.  This was the CHandra’s Altitudinal Composition Explorer (CHACE) on board the Moon Impact Probe.

On November 14, 2008 (the birthday of the late Pandit Jawaharlal Nehru, India’s 1st Prime Minister) the Moon Impact Probe became the first Indian-built object to reach the surface of the Moon.  The probe was a 34kg box-shaped object containing a video imaging system, a radar altimeter, and the CHACE mass spectrometer.

Symbolically, the Indian tricolour was painted on three sides of the Moon Impact Probe.  This enabled India to also lay claim to having “placed the Indian tricolour on the Moon”.  Needless to say, the “placing” in this case was a hard landing in the Moon’s south polar region near the Shackleton crater, flying over the Malapert mountain en route.

The CHACE mass spectrometer took 650 spectra of the tenuous lunar atmosphere during its 1487-second, 98km plunge to the lunar surface.  Tenuous is right: even on the sunlit side, the atmosphere is only 7/10,000,000,000th of the Earth’s.
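Treating that quoted ratio, illustratively, as a surface-pressure ratio (an assumption; the text does not specify pressure versus density), the arithmetic makes the word "tenuous" vivid:

```python
# Illustrative only: scale Earth's standard sea-level pressure by the
# 7-parts-in-10-billion ratio quoted above for the sunlit lunar atmosphere.
EARTH_SEA_LEVEL_PA = 101_325     # standard atmosphere, pascals
LUNAR_FRACTION = 7e-10           # the article's figure

lunar_pressure_pa = EARTH_SEA_LEVEL_PA * LUNAR_FRACTION
print(f"~{lunar_pressure_pa:.1e} Pa at the sunlit lunar surface")
```

Tens of micropascals: a harder vacuum than most laboratory vacuum chambers achieve, yet CHACE still resolved its chemical make-up.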

The mass spectrometer was tuned to look for water, and direct evidence of water it did find.  The team leader of the experiment, Dr S M Ahmed, remembers, “We all were jumping when we saw water was literally pouring out of our instrument” on November 14, 2008.  The Indian scientists had established that the dominant species of the tenuous sunlit lunar atmosphere were H2O, N2, and CO2.

These results were not published until August 6, 2010, after being confirmed (on August 22, 2009) and complemented by results from two of the other 11 instruments that formed the scientific payload of Chandrayaan-1.  Amongst the instruments on Chandrayaan-1 were the Moon Mineralogy Mapper (M3) and Miniature Synthetic Aperture Radar (Mini-SAR) from NASA.  The Moon Mineralogy Mapper covered nearly 97% of the lunar surface; some of the other instruments covered more than 90%.

Water detected at high latitudes on the Moon, image credit NASA

A detailed analysis of the data obtained from Moon Mineralogy Mapper, has clearly indicated the presence of water molecules on the lunar surface extending from the lunar poles to about 60 degrees latitude. Hydroxyl, a molecule consisting of one oxygen atom and one hydrogen atom, was also found in the lunar soil.

The Moon Mineralogy Mapper measured the intensity of sunlight reflected from the lunar surface at infrared wavelengths, splitting the spectrum into bins small enough to reveal the finer details of the surface composition.  This enabled identification of various minerals on the lunar surface, each with a characteristic spectral signature at specific wavelengths.  Since the reflection of sunlight occurs near the moon’s surface, such studies provide information on the mineral composition of only the top few millimetres of the lunar surface.

The findings from the Moon Mineralogy Mapper clearly showed a marked absorption signature in the infrared region of 2.7 to 3.2 microns, a clear indication of the presence of hydroxyl (OH) and water (H2O) molecules on the surface of the Moon closer to the polar regions.  It was also concluded that they occur as a thin layer embedded in rocks and chemical compounds on the surface of the moon, and that the quantity is extremely small, of the order of 700ppm.
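The logic of such a detection can be sketched in a few lines.  The reflectance values below are invented for illustration, not M3 data; the method shown (a straight-line continuum across the band shoulders, then measuring the fractional dip at band centre) is a standard way such absorption bands are scored:

```python
# Hypothetical sketch: scoring a 3-micron absorption band in a reflectance
# spectrum.  Wavelengths in microns; reflectances are made-up values.
wavelengths = [2.6, 2.7, 2.8, 2.95, 3.1, 3.2, 3.3]
reflectance = [0.30, 0.27, 0.24, 0.22, 0.24, 0.27, 0.30]

# Continuum: straight line drawn between the 2.6 and 3.3 micron shoulders
w0, w1 = wavelengths[0], wavelengths[-1]
r0, r1 = reflectance[0], reflectance[-1]
continuum = [r0 + (r1 - r0) * (w - w0) / (w1 - w0) for w in wavelengths]

# Band depth: fractional dip of the measured reflectance below the continuum
band_depth = [1.0 - r / c for r, c in zip(reflectance, continuum)]
deepest = wavelengths[band_depth.index(max(band_depth))]
print(f"deepest absorption near {deepest} microns, depth {max(band_depth):.2f}")
```

A non-zero band depth centred near 3 microns is the spectral fingerprint attributed to OH and H2O in the surface layer.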

These molecules could have come from the impact of comets or radiation from the sun.  The most probable source, however, is low-energy hydrogen carried by the solar wind impacting the minerals on the lunar surface.  This in turn could form OH or H2O molecules by deriving the oxygen from metal oxides.

Following these findings, the scientific team revisited the data from NASA’s Deep Impact Mission launched in 2005 which carried an instrument similar to Moon Mineralogy Mapper. Deep Impact Probe observed the moon during the period June 2 to 9, 2009.  As previously mentioned the Moon Mineralogy Mapper observations are further strengthened by results obtained from the analysis of archived data of the Cassini probe.

Further to these findings, ice was detected in small polar craters (2-15km in diameter) that are not visible from the Earth.  These north polar craters have sub-surface water ice located at their base.  The interior of these craters is in permanent shadow from the Sun.  Although the total amount of ice depends on its thickness in each crater, it’s estimated there could be at least 600 million metric tons of water ice.

This water was detected using the Mini-SAR instrument.  Mini-SAR is a lightweight (less than 10kg) synthetic aperture imaging radar.  It uses the polarization properties of reflected radio waves to characterize surface properties.  By looking at their internal roughness, the instrument can detect whether craters are newly formed.  It found the water by looking for craters that gave anomalous signals, signals consistent with water ice at their base.
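The Mini-SAR team's published criterion compares same-sense and opposite-sense circular radar returns as a circular polarization ratio (CPR).  The sketch below, with invented crater names and numbers, shows the shape of that logic: a crater whose high CPR is confined to its interior is "anomalous", whereas high CPR both inside and out simply indicates a young, rough crater:

```python
# Hedged sketch of the anomalous-crater test: CPR = same-sense / opposite-sense
# radar return.  Crater names and CPR values below are invented illustrations.

def is_anomalous(cpr_interior: float, cpr_exterior: float,
                 threshold: float = 1.0) -> bool:
    """Flag craters whose high CPR is confined to the interior."""
    return cpr_interior > threshold and cpr_exterior <= threshold

craters = {
    "fresh impact": (1.4, 1.3),  # rough inside *and* outside: just young
    "candidate A":  (1.5, 0.5),  # rough inside only: consistent with ice
    "old crater":   (0.4, 0.4),  # smooth everywhere
}

flagged = [name for name, (ci, ce) in craters.items() if is_anomalous(ci, ce)]
print(flagged)  # ['candidate A']
```

Applied across the small, permanently shadowed north polar craters, this filter is what yielded the 600-million-tonne ice estimate quoted above.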

How did the water come to be there?

Using methods both direct and indirect, we now know that water is present on the moon.  We even have a good estimate of how much is present and where it is located.  How were these reservoirs of lunar water formed?  Studies published in 2011 suggest that water was acquired by both the Earth and Moon after the Moon’s formation.  Geochemical analysis of the rocks retrieved by the Apollo missions shows the lunar and terrestrial water are isotopically distinct.  It remains a conundrum how the Earth and the Moon could have sampled water from different origins.

The conventional explanation for ice in polar craters like Cabeus – whose floor is in permanent shadow and thus hovers near 40 degrees above absolute zero – is that icy asteroids or comets strike somewhere on the Moon, and some of the resulting water vapour reaches a permanently shadowed crater’s deep chill and freezes out there.  This cold trapping would only fill the empty space in the regolith, not form nearly pure ice.  Regolith is the name for the lunar ‘soil’, more like dust as it contains no organic matter that would make it truly soil.

No one knows how the subsurface ice would form.  Now that it has been blasted into view, its presence confirmed, scientists can turn their thoughts to determining how it came to be there, in the form that it is.

Still that is the beauty and fun in science, understanding how reality comes to be the way it is.

Water on the Moon, so what?

So why the interest in whether there is water on the moon?  I would suggest there is a succinct, one-word answer to this question: curiosity.  Science by its nature is a curiosity-driven enterprise.  How did the universe come to be the way it is?  This, I suspect, is ultimately the question behind most if not all scientific enterprise.  The quest for water on the moon was also driven by the need to sustain astronauts returning to the Moon.  Readily accessible water is a necessity to sustain a moon-base.  Human exploration, although always framed in economic, national, or, in this case, human-kind language is, I suspect, deeply motivated by human curiosity.

Since then, the Obama administration and now Congress have turned their backs on sending humans to the Moon.  This decades-long push by NASA, with help from its friends, to find water will benefit others with lunar aspirations.  The Indians have a Space 2025 vision to follow up Chandrayaan-1 with further unmanned Moon voyages.  In addition they have recently announced that the Indian Air Force has responsibility for developing a viable astronaut training programme for undertaking future manned spaceflights.

ISRU/NASA Water processing demonstration concept

The Russian newspapers have recently trumpeted that Russia will put men on the Moon by 2030 – although, officially Russia’s most up to date planning covers to the year 2015.  Japan has a proposed unmanned lunar probe for launch ‘some time in the 2010 decade’.  This follows up on a successful September 14, 2007 launch of its lunar orbiter, Kaguya. Meanwhile the first spacecraft of the Chinese lunar exploration programme, the unmanned lunar orbiter Chang’e 1, was successfully launched from Xichang Satellite Launch Center on October 24, 2007.   A second unmanned orbiter, Chang’e 2, was launched successfully on October 1, 2010.  Chang’e 3, China’s first lunar rover, is expected to launch in 2013.  A manned expedition may occur in 2025-2030.

The 2020-30 decades could herald an intense, multinational focus on water on the Moon.

The essay was first published on March 26, 2012 on the Australian Science website.  It is also an entry in the Bragg UNSW Press Prize for Science Writing 2012.

1 Comment. Join the Conversation

Two Genes Do Not a Voter Make

Posted March 24, 2012 By Kevin Orrman-Rossiter

Voting behavior cannot be predicted by one or two genes as previous researchers have claimed, according to Evan Charney, a Duke University professor of public policy and political science.

In “Candidate Genes and Political Behavior,” a paper published in the February 2012 American Political Science Review, Charney and co-author William English of Harvard University call into question the validity of all studies that claim that a common gene variant can predict complex behaviors such as voting.

They use as an example a 2008 study by James H. Fowler and Christopher T. Dawes of the University of California, San Diego which claimed that two genes predict voter turnout. Charney and English demonstrate that when certain errors in the original study are corrected — errors common to many gene association studies — there is no longer any association between these genes and voter turnout.

“The study of Fowler and Dawes is wrong,” Charney said. “Two genes do not predict turnout. We re-ran the study using all of their assumptions, equations, and data and found that their results were based upon errors they made. When we corrected the errors, there was no longer any association between these two genes and voter turnout.”

Charney and English also document how the same two genes that Fowler and Dawes claimed would predict voter turnout are also said to predict, according to other recently published studies, alcoholism, Alzheimer’s disease, anorexia nervosa, attention deficit hyperactivity disorder, autism, depression, epilepsy, extraversion, insomnia, migraines, narcolepsy, obesity, obsessive compulsive disorder, panic disorder, Parkinson’s disease, postpartum depression, restless legs syndrome, premature ejaculation, schizophrenia, smoking, success by professional Wall Street traders, sudden infant death syndrome, suicide, Tourette syndrome, and several hundred other behaviors. They point to a number of studies that attempted to confirm these findings and could not.

“Researchers the world over are using data sets that contain behavioral information about study participants along with limited genetic data for a handful of their genes,” Charney said. “Often, the genetic data contained in these various data sets is limited to the very same four or five genes. The result is that the same genes are now said to predict an astonishing array of human behavior.”

“How could one common gene variant possibly predict so many diverse behaviors?” Charney asked. “And what are the odds that the very same handful of genes — out of an estimated 25,000 to 30,000 genes — will miraculously turn out to be the genetic key to all of human behavior?”

Charney and English also note that the underlying assumption of gene association studies is at odds with our current understanding of the relationship between genes and complex human behaviors, such as political behavior.

“There is a growing consensus that complex traits that are heritable are influenced by differences in thousands of genes interacting with each other, with the epigenome (which regulates gene expressivity), and with the environment in complex ways,” Charney said. “The idea that one or two genes could predict something like voting behavior or partisanship violates all that we now know about the complex relationship between genes and traits.”

Be the first to comment

How much does antimatter weigh?

Posted March 19, 2012 By Kevin Orrman-Rossiter

A pulse of particles speeds into the vacuum chamber.  Positrons, 20,000,000 antimatter particles, clumped in a pulse one nanosecond deep.  Like a silent, angry swarm they are fired into a porous silica target.  The positrons are confined by a magnetic field, increasing their interaction with the silica.  Some capture electrons and form positronium, an unstable hybrid ‘molecule’.  Before it can decay the positronium is excited, first by a burst of ultra-violet laser light, then by a second burst in the infrared.  With each excitation the positronium puffs up.  The still-bound positron and electron orbit further and further from each other.  Decreasing their opportunity to meet.  Increasing the lifetime of the positronium.  Eventually they meet.  They annihilate.  Mutually destructing in a soundless flash of energy.  A pair of gamma rays to remind us of a nanosecond-long courtship.

Producing positronium, binding the negatively charged electron with its antimatter equivalent, the positively charged positron, is not a normal occurrence in our world.  If this experiment sounds a little like science fiction, the reason for doing it is part of an even more exotic tale.  The point of this laboratory experiment is to produce ‘long-lived’ positronium so that the researchers can measure the effect of gravity on it.  Published in January 2012, this success represents 10 years of hard work for Allen Mills and his team from the University of California, Riverside.

The ultra-high vacuum chamber at the Positron Lab, UC Riverside

The positron is the antimatter version of the electron.  It has a mass identical to the electron’s, but a positive charge.  The researchers are trying to find out whether matter and antimatter behave the same way under gravity.  That is, do they weigh differently?  If they found such behaviour it would truly rock the physics world.

Antimatter.  The idea of it is weird.  The idea of weighing it is stranger still.

The weirdness of antimatter

Antimatter has been one of the most fascinating fields of research ever since Paul Dirac predicted its existence.  In 1928 Dirac solved the relativistic quantum equation for the electron, and found that its solutions implied the existence of antimatter particles.  At the time most physicists, Dirac included, thought the idea was preposterous.

Within a short time, experimenters looking at cosmic rays bombarding the Earth found particles that behaved like electrons, but with a positive charge.  In 1932 Carl Anderson carried out the definitive experiments demonstrating the existence of the anti-electron, naming it the positron, and being awarded the Nobel Prize in 1936 for his discovery.

3D image of Antihydrogen

Dirac had postulated in 1931 that each of the fundamental particles has an equivalent antimatter partner.  Furthermore, when matter and its antimatter opposite come into contact they annihilate each other, releasing their energy as two photons of equal energy.  That energy is given by Einstein’s famous equation E=mc².  This latter relationship has been much exploited by science fiction writers.  Matter-antimatter reactions powered the star-ship USS Enterprise in the 60s hit TV show Star Trek.
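A quick back-of-envelope with E=mc² shows what each annihilation photon carries when an electron and positron annihilate at rest: the rest-mass energy of one electron, about 511 keV.

```python
# E = mc^2 for a single electron (or positron): the energy each of the
# two photons carries when an e+/e- pair annihilates at rest.
M_ELECTRON = 9.109e-31       # electron rest mass, kg
C = 2.998e8                  # speed of light, m/s
JOULES_PER_EV = 1.602e-19

energy_joules = M_ELECTRON * C**2
energy_mev = energy_joules / JOULES_PER_EV / 1e6
print(f"{energy_mev:.3f} MeV per photon")  # ~0.511 MeV
```

These 511 keV gamma rays are the same signature exploited by PET scanners, and the "soundless flash" that ends each positronium atom's life in Mills's chamber.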

Work with high-energy antiparticles is now commonplace in physics and materials science.  Anti-electrons are used regularly in the medical imaging technique of positron emission tomography (PET).  Perhaps more importantly, the equivalence model of matter and antimatter has been fully incorporated into the Standard Model of particle physics.  Matter particles and their antimatter pairs have the same mass and equal, but opposite, charge.

The most important point in all of this focusses on when our universe came into being, the Big Bang.  Models of the Big Bang predict that equal amounts of matter and antimatter were formed initially.  Ordinary matter is clearly what our observable Universe is made of today.  So where is the antimatter?  A glaring, niggling imbalance that begs for an explanation.

This imbalance could be explained by a slight difference in one of the fundamental properties of particle-antiparticle pairs (such as charge or mass).  There isn’t yet any experimental evidence for such a difference.  Another alternative might be a difference in their gravitational attractiveness.  It is widely expected that the gravitational interaction of matter and antimatter should be identical.  If they were not, then this could explain the preponderance of matter in our universe.

Elementary antimatter particles occur naturally in radioactive decays and in cosmic radiation.  Some of them, such as the positron and the antiproton, have been studied extensively and even compared to their matter equivalents.  Measurements with charged antiparticles are difficult because gravity is a far weaker force than the electromagnetic force.  The first experiments to measure the gravitational attraction of antimatter were at Stanford University and CERN, in 1974 and 1993 respectively.  Both used charged antimatter particles.  The experiments were marred by stray electric fields and did not produce satisfactory results.

Experiments like this demand precision and reproducible backgrounds, and the world is a rather messy place in which to measure these relationships.  Neutral positronium, as in Mills’s experiments, or a neutral anti-atom could be used to test the effect of gravity on antimatter for the first time, because it is immune to the stray electromagnetic fields that hampered the previous studies with charged antimatter particles.

Does antimatter fall down?

Understanding gravity has proven to be a little more complicated than falling apples.  To the Greek philosopher and polymath Aristotle, the concept that heavy objects fell faster than light objects was obvious.  His elegant, but fanciful, notion persisted well into the Middle Ages.  The Italian physicist, mathematician, and astronomer Galileo Galilei successfully challenged Aristotle’s impractical theories of motion.  Experiments using the swing of pendulums proved him correct.  Observing their swing rates was more practical than the dropping of objects from the Leaning Tower of Pisa, as Galileo had originally proposed.

Galileo also put forward the basic principle of relativity, that the laws of physics are the same in any system that is moving at a constant speed in a straight line, regardless of its particular speed or direction.  Hence, there is no absolute motion or absolute rest.  This principle is central to Einstein’s special theory of relativity.

The key point under consideration here is the correlation between inertial mass and gravitational mass.  This is the correlation between the forces measured on a mass when it is falling under gravity or being accelerated.  The earliest experiments were done by English natural philosopher Isaac Newton and improved upon by the German mathematician and astronomer Friedrich Wilhelm Bessel in the 1820s.

The problem is that Newton’s theories and his mathematical formulae did not, and do not, explain why various masses behave equivalently under the influence of gravity, independent of the quantities of matter involved.  The observation that the gravitational mass and the inertial mass are the same for all objects is unexplained within Newton’s theories.  The experiments of Galileo Galilei, decades before Newton, established that objects with the same air or fluid resistance are accelerated by the force of the Earth’s gravity equally, regardless of their different inertial masses.  Yet the forces and energies required to accelerate various masses depend entirely upon those inertial masses, as can be seen from Newton’s Second Law of Motion, F = ma.
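That equivalence falls straight out of combining Newton's two laws, even though Newton's framework offers no reason why it should hold.  A minimal sketch:

```python
# Why free fall is mass-independent in Newtonian gravity:
# F = G*M*m / r^2  and  F = m*a,  so  a = G*M / r^2 -- the test mass cancels.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # mean radius, m

def fall_acceleration(test_mass_kg: float) -> float:
    force = G * M_EARTH * test_mass_kg / R_EARTH**2  # Newton's gravitation
    return force / test_mass_kg                      # Newton's second law

# A gram and a tonne fall with identical acceleration, ~9.8 m/s^2
print(fall_acceleration(0.001), fall_acceleration(1000.0))
```

The cancellation is exact in the algebra but, within Newton's theory, an unexplained coincidence; it is precisely this coincidence that Eötvös tested and that general relativity later elevated to a principle.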

Precision measurements by the Hungarian physicist Loránd Eötvös originally in 1885, and then again with improved instruments and precision between 1906 and 1909, established the universality of Newton’s law of gravitation.  These were followed with a series of similar but more accurate experiments, these included experiments with different types of materials, on moving ships and in different locations around the Earth.  These experiments demonstrated the equivalence of gravitational and inertial mass for ordinary matter.  In turn, these experiments led to the modern understanding of the equivalence principle encoded in general relativity, which states that the gravitational and inertial masses are the same.

So far, so good, for understanding the behaviour of ordinary matter.  Rather than disembodied logic alone, the combination of experiment, measurement, and sound reasoning has proved to be the correct way to discern the laws that represent reality.

Gravity is now best described by general relativity.  General relativity is a classical theory that does not imply the existence of antimatter.  In the 1980s a quantum-mechanical formulation of gravity allowed for non-Newtonian contributions to the force which might lead to a difference in the gravitational force on matter and antimatter.

A number of theories propose how differential interactions between matter and antimatter may be explained.  It must also be pointed out that numerous models and experiments with matter have been used to derive upper limits on the possible differences in the nature of such gravitational attractions.

Direct investigation of antimatter, experimentally, is characterised by an almost complete lack of data.

The experimenters

Allen Mills is not alone in his quest to measure the weight of antimatter.

In 2011 the AEgIS collaboration at CERN had funding approved to use antihydrogen to measure any difference in the gravitational force on matter and antimatter.  CERN is Europe’s particle-physics research lab located near Geneva in Switzerland, perhaps currently best known for its search for the Higgs boson, the so-called God particle.  The funding scale of the AEgIS experiment (Antimatter Experiment: Gravity, Interferometry, Spectroscopy) is more modest.  Its goal, to create a horizontal beam of antihydrogen and to study its free fall in the Earth’s gravitational field with a matter-wave interferometry apparatus, is scientifically just as far-reaching.

The quest to create, trap and study antihydrogen is now entering its third decade at CERN.

In 2002, the ATHENA experiment at CERN’s Antiproton Decelerator was the first to produce copious amounts of cold antihydrogen, the simplest atomic antimatter system.  The ATHENA (AnTiHydrogEN Apparatus) experiment had the objective of producing, storing and studying antihydrogen at extremely low temperatures, below 1 kelvin.  The goal was to compare the energy levels of antihydrogen and hydrogen with extreme accuracy.  The ATHENA set-up also served as a proof of concept for the successor experiment AEgIS.

The antiprotons supplied by the Antiproton Decelerator were trapped and cooled, and brought into overlap with positrons from a radioactive sodium source in a cylindrical Penning trap.  The produced anti-atoms, no longer confined in the charged-particle trap, drifted radially outward and annihilated on the electrodes.  ATHENA’s sophisticated detector allowed the temporally and spatially resolved reconstruction of these annihilation events.

During the data taking periods in 2003 and 2004, the experimental parameters were optimized in order to maximize the antihydrogen production rate, and the temperature and internal quantum states of the anti-atoms were determined.  ATHENA was not configured to measure the gravitational attraction of antihydrogen.  Data taking with ATHENA has now ended.

It was collaborators from the ATHENA experiment, along with new groups from other institutes, that have designed the successor experiment, AEgIS, with the aim of performing gravitational studies with antimatter.  The AEgIS proposal was submitted in January 2008 and approved by the CERN Research Board in December 2008.  Construction began in early 2010.

Experimental area at CERNs Antiproton Decelerator Hall

Meanwhile the CERN group has been building on its expertise in the production and trapping of antihydrogen.  A new experimental collaboration called ALPHA (Antihydrogen Laser PHysics Apparatus) is another successor to ATHENA.  In late 2010 the ALPHA group managed, 38 times, to confine single antihydrogen atoms for 172 milliseconds.  At the time the spokesperson Jeffrey Hangst said, “We’re ecstatic.  This is five years of hard work.”

By July 2011 they had confined seven antihydrogen anti-atoms for 1,000 seconds, extending their earlier results by nearly four orders of magnitude.  To compare with these CERN successes, Mills, in his late 2011 experiment, produced 12 positronium atoms that did not annihilate until they hit the chamber wall.  This journey of a few centimetres took about a microsecond.

Based on these results Mills believes he can produce a collimated, long-lived beam for the direct measurement of the gravitational free fall of positronium atoms.

Kudos and plaudits

We have in 2012, then, two experiments, each different in its experimental make-up.  Both are trying to measure the gravitational free fall of antimatter: one using antihydrogen, one using positronium.  Assuming both are successful, one will serve as confirmation of the results of the other.

This is how great science is done.  Great scientists are nonetheless people.  People are competitive.  In years to come, the science textbooks will record, and perhaps laud, who was first to measure the weight of antimatter.

Originally published as “Weighty thoughts on antimatter” on March 16 2012 in Australian Science.

1 Comment. Join the Conversation

Hubble finds a dark matter puzzle

Posted March 9, 2012 By Kevin Orrman-Rossiter

Astronomers using data from NASA’s Hubble Telescope have observed what appears to be a clump of dark matter left behind from a wreck between massive clusters of galaxies. The result could challenge current theories about dark matter that predict galaxies should be anchored to the invisible substance even during the shock of a collision. Abell 520 is a gigantic merger of galaxy clusters located 2.4 billion light-years away. Dark matter is not visible, although its presence and distribution is found indirectly through its effects. Dark matter can act like a magnifying glass, bending and distorting light from galaxies and clusters behind it. Astronomers can use this effect, called gravitational lensing, to infer the presence of dark matter in massive galaxy clusters.

This technique revealed the dark matter in Abell 520 had collected into a “dark core,” containing far fewer galaxies than would be expected if the dark matter and galaxies were anchored together. Most of the galaxies apparently have sailed far away from the collision.

Merging Galaxy Cluster Abell 520

“This result is a puzzle,” said astronomer James Jee of the University of California, Davis, lead author of a paper about the results available online in The Astrophysical Journal. “Dark matter is not behaving as predicted, and it’s not obviously clear what is going on. It is difficult to explain this Hubble observation with the current theories of galaxy formation and dark matter.”

Initial detections of dark matter in the cluster, made in 2007, were so unusual that astronomers shrugged them off as unreal, because of poor data. New results from NASA’s Hubble Space Telescope confirm that dark matter and galaxies separated in Abell 520.

One way to study the overall properties of dark matter is by analyzing collisions between galaxy clusters, the largest structures in the universe. When galaxy clusters crash, astronomers expect galaxies to tag along with the dark matter, like a dog on a leash. Clouds of hot, X-ray emitting intergalactic gas, however, plow into one another, slow down, and lag behind the impact.

That theory was supported by visible-light and X-ray observations of a colossal collision between two galaxy clusters called the Bullet Cluster. The galactic grouping has become an example of how dark matter should behave.

Studies of Abell 520 showed that dark matter’s behavior may not be so simple. Using the original observations, astronomers found the system’s core was rich in dark matter and hot gas, but contained no luminous galaxies, which normally would be seen in the same location as the dark matter. NASA’s Chandra X-ray Observatory was used to detect the hot gas. Astronomers used the Canada-France-Hawaii Telescope and Subaru Telescope atop Mauna Kea to infer the location of dark matter by measuring the gravitationally lensed light from more distant background galaxies.

The astronomers then turned to Hubble’s Wide Field Planetary Camera 2, which can detect subtle distortions in the images of background galaxies and use this information to map dark matter. To astronomers’ surprise, the Hubble observations helped confirm the 2007 findings.

“We know of maybe six examples of high-speed galaxy cluster collisions where the dark matter has been mapped,” Jee said. “But the Bullet Cluster and Abell 520 are the two that show the clearest evidence of recent mergers, and they are inconsistent with each other. No single theory explains the different behavior of dark matter in those two collisions. We need more examples.”

The team proposed numerous explanations for the findings, but each is unsettling for astronomers. In one scenario, which would have staggering implications, some dark matter may be what astronomers call “sticky.” Like two snowballs smashing together, normal matter slams together during a collision and slows down. However, dark matter blobs are thought to pass through each other during an encounter without slowing down. This scenario proposes that some dark matter interacts with itself and stays behind during an encounter.

Another possible explanation for the discrepancy is that Abell 520 has resulted from more complicated interaction than the Bullet Cluster encounter. Abell 520 may have formed from a collision between three galaxy clusters, instead of just two colliding systems in the case of the Bullet Cluster.

A third possibility is that the core contained many galaxies, but they were too dim to be seen, even by Hubble. Those galaxies would have to have formed dramatically fewer stars than other normal galaxies. Armed with the Hubble data, the group will try to create a computer simulation to reconstruct the collision and see if it yields some answers to dark matter’s weird behavior.

The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center in Greenbelt, Md., manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Md., conducts Hubble science operations. STScI is operated by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C.

Source: NASA


Poor sleep increases risk for health problems

Posted March 6, 2012 By Kevin Orrman-Rossiter

Researchers have shown that older adults who sleep poorly have an altered immune system response to stress.  This may increase their risk for mental and physical health problems.

In the study, stress led to significantly larger increases in a protein marker of inflammation in poor sleepers compared to good sleepers.   The protein, interleukin-6, is primarily produced at sites of inflammation.  It is a marker associated with poor health outcomes and even death.

“This study offers more evidence that better sleep not only can improve overall well-being but also may help prevent poor physiological and psychological outcomes associated with inflammation,” said Kathi L. Heffner, Ph.D., assistant professor of Psychiatry at the Rochester Medical Center and study leader.

The association between poor sleep and a heightened inflammatory response to acute stress could not be explained by other factors linked to immune impairment, including depression, loneliness and perceived stress, the researchers said in the study published by the American Journal of Geriatric Psychiatry.

“Our study suggests that, for healthy people, it all comes down to sleep and what poor sleep may be doing to our physiological stress response, our fight or flight response,” Heffner said.

The study, advertised as an investigation of stress and memory, involved 45 women and 38 men with an average age of 61 years. The participants were evaluated for cognitive status using a standard assessment. Each participant completed a self-report of sleep quality, perceived stress, loneliness and medication use. The participants had to be in good physical health to be in the study, but even so, about 27 percent of the participants were categorized as poor sleepers.

On the day of the study, the participants were given a series of tests of verbal and working memory, a battery of questions that served as the stressor. Blood was drawn before any testing began and then immediately following the testing and at three intervals spaced out over 60 minutes. The blood was studied for levels of interleukin-6, a protein primarily produced at sites of inflammation.

Poor sleepers reported more depressive symptoms, more loneliness and more global perceived stress relative to good sleepers. Poor sleepers did not differ from good sleepers when interleukin-6 was measured before the tests began. Across the group, the participants showed increases in interleukin-6. However, poor sleepers had a significantly larger increase in interleukin-6 in response to the stressful tests compared to good sleepers, as much as four times larger and at a level found to increase risk for illness and death in older adults.

A further analysis of the results for the impact of loneliness, depression or perceived stress on interleukin-6 levels found no association. Poor sleep stood as the predictor of elevated inflammation levels.

“We found no evidence that poor sleep made them deal poorly with a stressful situation. They did just as well on the tests as the good sleepers. We did not expect that,” Heffner said. “We did find that they were in a worse mood after the stressor than a good sleeper, but that change in mood did not predict the heightened inflammatory response.”

As people age, a gradual decline in the immune system occurs along with an increase in inflammation. Heightened inflammation increases the risk for cardiovascular disease, diabetes and other illnesses, as well as psychiatric problems.

While relatively little is known about the pathways through which poor sleep impacts circulating levels of inflammatory proteins, the study led by Heffner provides a clinical target for preventing poor outcomes for older adults.

“There are a lot of sleep problems among older adults,” Heffner said. “Older adults do not have to sleep poorly. We can intervene on sleep problems in older adulthood. Helping an elderly person become a better sleeper may reduce the risk of poor outcomes associated with inflammation.”



Earth’s exoplanet ‘siblings’ can be different

Posted February 25, 2012 By Kevin Orrman-Rossiter

The discovery of numerous exoplanets, planets outside our own solar system, has made astrophysics once again a hot topic.  A week does not seem to go by without the discovery of a new exoplanet.  At the same time intense activity is taking place to understand the nature of these exoplanets.

The interest from professionals and the public is understandable.  This forms part of one of the ‘big’ questions: “Are we alone in the universe?”

Finding a variety of planetary systems helps scientists test theories and models of planetary formation.  One such model looks at the stars of these planetary systems.

The study of the abundance of elements in the photosphere of stars that host planets is key to understanding how protoplanets form.  The photosphere is the visible surface of a star.  Since a star is a ball of hot gas, this is not a solid surface but a layer about 100 km thick (very thin compared to the Sun's 700,000 km radius).  The ratios of elemental abundances in the photosphere are a good indication of the chemistry of any planets that form around the star.

It also helps to model which protoplanetary clouds evolve planets and which do not. These studies have important implications for models of giant planet formation and evolution.  They also help us to investigate the internal and atmospheric structure and composition of extrasolar planets.

An international team of researchers has discovered that the chemical structure of Earth-like planets can be very different from the bulk composition of Earth.

They have presented results of simulations of terrestrial planet formation in three extrasolar planetary systems whose host stars have photospheric Mg/Si values less than 1.0.

Theoretical studies suggest that carbon/oxygen (C/O) and magnesium/silicon (Mg/Si) are the most important elemental ratios in determining the mineralogy of terrestrial planets.  The ratios can give us information about the composition of these planets.  The C/O ratio controls the distribution of Si among carbide and oxide species, while Mg/Si gives information on the silicate mineralogy.

This resulting bulk chemistry is expected to have a dramatic effect on the formation and existence of biospheres and life on Earth-like planets.

The Earth’s upper mantle has an Mg/Si atomic ratio of 1.27, similar to that of Venus. These characteristics seem to predominate throughout the inner solar system.

During formation of the terrestrial planets, elements more volatile than silicon were depleted and may have been transported outwards to recondense in the lower-temperature environment of the outer asteroid belt. Some Si may also have been lost in this manner, although not enough to alter planetary Mg/Si ratios. However, recondensation of this Si on the relatively small mass of dust particles in the asteroid belt would have caused a substantial enrichment of Si relative to Mg.  It thus seems likely that the Mg/Si ratio of the inner planets (∼1.27) is more representative of the solar nebula value.

An analogous process of radial chemical fractionation may also have occurred in the outer solar nebula, with volatile elements and silicon lost from the growing giant planets being recondensed onto cosmic interplanetary dust particles and cometary bodies further out from the Sun.

In 2010 the first numerical simulations of planet formation were carried out in which the chemical composition of the proto-planetary cloud was taken as an input parameter.  Terrestrial planets, rocky siblings of Earth, formed in all the simulations, with a wide variety of chemical compositions.  Some of these planets might be very different from Earth.

A first detailed and uniform study of C, O, Mg and Si abundances was also carried out in 2010.  This was the first to determine the abundance of all of the required elements in a completely internally consistent manner, using high-quality spectra and an identical approach for all stars and elements, for a large sample of both host and non-host stars.  This 2010 study looked at 100 stars with detected planets and 270 stars without detected planets.  The majority of the data came from the homogeneous, high-quality European Southern Observatory HARPS studies.

In 2009 the HARPS team announced the discovery of the lightest exoplanet found so far, Gliese 581e, as well as the first exoplanet, Gliese 581d, found in the habitable zone: the region around a host star where surface water could exist.

Mineralogical ratios quite different from those in the Sun were found, showing that there is a wide variety of planetary systems unlike the Solar System.  Many planet-hosting stars had an Mg/Si value lower than 1, suggesting that their planets will have a high Si content and form species such as MgSiO3.  The amounts of radioactive and some refractory elements (especially Si) can have important implications for planetary processes such as plate tectonics, atmospheric composition and volcanism.

The latest numerical simulations have shown that a wide range of extrasolar terrestrial planet bulk compositions are likely to exist. Planets simulated as forming around stars with Mg/Si ratios less than 1 are found to be Mg-depleted (compared to Earth), consisting of silicate species such as pyroxene and various types of feldspars.

Planetary carbon abundances also vary in accordance with the host stars’ C/O ratio. The predicted abundances are in keeping with observations of polluted white dwarfs (expected to have accreted their inner planets during their previous red giant stage).

From these earlier studies the present authors believe there could be billions of Earth-like planets in the Universe but a great majority of them may have a totally different internal and atmospheric structure.

The observed variations in the key C/O and Mg/Si ratios for known planetary host stars imply that a wide variety of extrasolar terrestrial planet compositions are likely to exist, ranging from relatively “Earth-like” planets to those dominated by carbon-bearing phases such as graphite and carbides (e.g. SiC, TiC).

The chemical and dynamical simulations were combined by assuming that each embryo retains the composition of its formation location and contributes the same composition to the simulated terrestrial planet. The innermost terrestrial planets located within approximately half the distance of Earth to the Sun (~0.5 AU from the host star) contain a significant amount of the refractory elements Al and Ca (~47% of the planetary mass).

Planets forming beyond ~0.5 AU from the host star contain steadily less Al and Ca with increasing distance. One planetary system, 55 Cnc, has a C/O ratio above 1 (C/O = 1.12); this system produced carbon-enriched “Earth-like” planets.  All of the terrestrial planets considered in this work have compositions dominated by O, Fe, Mg and Si, with most of these elements delivered in the form of silicates or metals (in the case of iron). However, important differences have been found between planets forming in systems with C/O < 0.8 (Iota Horologii, HD 19994) and those with C/O > 0.8 (55 Cnc).
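The ratio thresholds quoted in this post can be collected into a toy classifier.  This is an illustrative sketch, not the researchers' simulation method: the function name and boundary handling are our own, the thresholds (C/O = 0.8, Mg/Si = 1.0) come from the text, and the Mg/Si values in the examples are invented for illustration.

```python
# Toy classifier based on the ratio thresholds described in the text.
# Hypothetical sketch only; not the authors' actual model.

def classify_terrestrial_planet(c_o: float, mg_si: float) -> str:
    """Guess a planet's dominant mineralogy from its host star's ratios."""
    if c_o > 0.8:
        # Carbon locks up the oxygen, so Si forms carbides (e.g. SiC)
        # rather than silicates: graphite- and carbide-dominated worlds.
        return "carbon-enriched (graphite and carbide phases)"
    if mg_si < 1.0:
        # Si-rich relative to Mg: pyroxene (MgSiO3) and feldspars dominate.
        return "Mg-depleted silicates (pyroxene, feldspars)"
    # Inner-solar-system values (Mg/Si ~ 1.27) give Earth-like silicates.
    return "Earth-like silicate composition"

print(classify_terrestrial_planet(1.12, 0.9))   # 55 Cnc: carbon-enriched
print(classify_terrestrial_planet(0.55, 0.9))   # Mg-depleted silicates
print(classify_terrestrial_planet(0.55, 1.27))  # Earth-like
```

The point of the sketch is simply that two stellar ratios, measured before any planet is seen, already constrain what a terrestrial planet there could be made of.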

These results highlight that planets built in chemically non-solar environments (which are very common in the Universe) may be strange worlds, very different from the Earth!


Anticipation of stressful events may cause cellular ageing

Posted February 24, 2012 By Kevin Orrman-Rossiter

Psychologists have found people most threatened by the anticipation of stressful tasks looked older at the cellular level.  The ability to anticipate future events allows us to plan and exert control over our lives.  Anticipation may also contribute to stress-related increased risk for the diseases of aging, according to this study.

The researchers studied 50 women, about half of them caring for relatives with dementia.  They were trying to examine the psychological process of how people respond to a stressful event and how that response affects their neurobiology and cellular health.

The researchers assessed cellular age by measuring telomeres, which are the protective caps on the ends of chromosomes.  Short telomeres index older cellular age and are associated with increased risk for a host of chronic diseases of aging, including cancer, heart disease and stroke.

Research on telomeres, and the enzyme that makes them, was pioneered in 1985 by three scientists who received the 2009 Nobel Prize in Physiology or Medicine for their work.  Molecular biologist Elizabeth Blackburn, one of the three, is a co-author on this study.

The researchers also found evidence that caregivers anticipated more threat than non-caregivers when told that they would be asked to perform public speaking and math tasks. This tendency to anticipate more threat put them at increased risk for short telomeres.  Based on that, the researchers propose that higher levels of anticipated threat in daily life may promote cellular aging in chronically stressed individuals.

How you respond to a brief stressful experience in the laboratory may reveal a lot about how you respond to stressful experiences in your daily life.  These findings are preliminary for now, but they suggest that the major forms of stress in your life may influence how you respond to more minor forms of stress, such as losing your keys, getting stuck in traffic or leading a meeting at work.

The long-term goal of this research is to gain a better understanding of how psychological stress promotes biological aging.  Targeted interventions could then be designed to reduce the risk of disease in stressed individuals.

The researchers do feel they are making strides toward understanding how chronic stress translates into the present moment, and they now have preliminary evidence that higher anticipatory threat perception may be one such mechanism.


Microbial ‘slime’ powered fuel cell

Posted February 23, 2012 By Kevin Orrman-Rossiter

Biotechnology is not all ‘genetically modified foods’ or ‘silver bullet’ medical treatments.  At the more mucky end of biotechnology, scientists have nearly doubled the power output of a microbial fuel cell that employs an artificial biofilm.

A biofilm — or ‘slime’ — coats the carbon electrodes of the microbial fuel cell and as the bacteria feed, they produce electrons which pass into the electrodes and generate electricity.

Microbial fuel cells, which work in a similar way to a battery, use bacteria to convert organic compounds directly into electricity by a process known as bio-catalytic oxidation.

Bacillus stratosphericus — a microbe commonly found in high concentrations in the stratosphere (10-50 kilometres above the earth’s surface) — is a key component of this new ‘super’ biofilm.

Isolating 75 different species of bacteria from the Wear Estuary, County Durham, UK, the team tested the power generation of each one using a microbial fuel cell.

By selecting the best species of bacteria, a kind of microbial “pick and mix,” they were able to create an artificial biofilm, nearly doubling the electrical output of the microbial fuel cell from 105 watts per cubic metre to 200 watts per cubic metre.

While still relatively low, this would be enough power to run an electric light and could provide a much needed power source in parts of the world without electricity.

Among the ‘super’ bugs was B. stratosphericus, a microbe normally found in the atmosphere but brought down to Earth as a result of atmospheric cycling processes and isolated by the team from the bed of the River Wear.

As well as B. stratosphericus, other electricity-generating bugs in the mix were Bacillus altitudinis — another bug from the upper atmosphere — and a new member of the phylum Bacteroidetes.

This is the first time individual microbes have been studied and selected in this way. Finding B. stratosphericus was quite a surprise but what it demonstrates is the potential of this technique for the future — there are billions of microbes out there with the potential to generate power.

The use of microbes to generate electricity is not a new concept and has been used in the treatment of waste water and sewage plants.

Until now, the biofilm has been allowed to grow un-checked but this new study shows for the first time that by manipulating the biofilm you can significantly increase the electrical output of the fuel cell.


DNA nanorobot destroys cancer cells

Posted February 20, 2012 By Kevin Orrman-Rossiter

Researchers  have manufactured an autonomous nanorobotic device made from DNA which delivered instructions, encoded in antibody fragments, to two different types of cancer cells — leukemia and lymphoma. This was a proof of principle experiment.  In each case, the message to the cell was to activate its “suicide switch” — a standard feature that allows aging or abnormal cells to be eliminated.

Since leukemia and lymphoma cells speak different chemical languages, the messages were written in different antibody combinations.  This demonstrated that the nanorobot could potentially seek out specific cell targets within a complex mixture of cell types and deliver important molecular instructions, such as telling cancer cells to self-destruct.

Inspired by the mechanics of the body’s own immune system, the technology might one day be used to program immune responses to treat various diseases.  By combining several novel elements for the first time, the new system represents a significant advance in overcoming previous obstacles. The research findings appeared recently in the journal Science.

The researchers used a DNA origami method, in which complex three-dimensional shapes and objects are constructed by folding strands of DNA.  From these a nanosized robot was created in the form of an open barrel whose two halves are connected by a hinge.  The DNA barrel, which acts as a container, is held shut by special DNA latches that can recognize and seek out combinations of cell-surface proteins, including disease markers.

When the latches find their targets, they reconfigure, causing the two halves of the barrel to swing open and expose its contents, or payload.  The container can hold various types of payloads, including specific molecules with encoded instructions that can interact with specific cell surface signaling receptors.

This programmable nanotherapeutic approach was modeled on the body’s own immune system in which white blood cells patrol the bloodstream for any signs of trouble.  These infection fighters are able to home in on specific cells in distress, bind to them, and transmit comprehensible signals to them to self-destruct.

The DNA nanorobot emulates this level of specificity through the use of modular components in which different hinges and molecular messages can be switched in and out of the underlying delivery system, much as different engines and tires can be placed on the same car chassis. The programmable power of this modular method means the system has the potential to one day be used to treat a variety of diseases.

Finally, sensing and logical computing functions have been integrated via complex yet predictable nanostructures.  This represents some of the first hybrids of structural DNA, antibodies, aptamers and metal atomic clusters to be aimed at useful, very specific targeting of human cancers and T-cells.

Because DNA is a natural biocompatible and biodegradable material, DNA nanotechnology is widely recognized for its potential as a delivery mechanism for drugs and molecular signals.  But there have been significant challenges to its implementation, such as what type of structure to create; how to open, close, and reopen that structure to insert, transport, and deliver a payload; and how to program this type of nanoscale robot.

By combining several novel elements for the first time, the new system represents a significant advance in overcoming these implementation obstacles. For instance, because the barrel-shaped structure has no top or bottom lids, the payloads can be loaded from the side in a single step–without having to open the structure first and then reclose it.

Also, while other systems use release mechanisms that respond to DNA or RNA, the novel mechanism used here responds to proteins, which are more commonly found on cell surfaces and are largely responsible for transmembrane signaling in cells. Finally, this is the first DNA-origami-based system that uses antibody fragments to convey molecular messages.  This feature offers a controlled and programmable way to replicate an immune response or develop new types of targeted therapies.


A computer program of the genius category

Posted February 20, 2012 By Kevin Orrman-Rossiter

Researchers have developed a computer programme more intelligent than 96% of the human population.  Intelligence is often measured through IQ tests where the average score for humans is 100.  The computer programme can score 150, putting it in the ‘genius’ category.

The programme is smarter than George W Bush, but not as smart as Stephen Hawking.  Nor could it qualify for Mensa.

IQ tests are based on two types of problems: progressive matrices, which test the ability to see patterns in pictures, and number sequences, which test the ability to see patterns in numbers.  According to these researchers, the most common mathematical computer programmes score below 100 on IQ tests with number sequences.

This is one reason Claes Strannegård is trying to design ‘smarter’ computer programmes: ones that can discover the same types of patterns that humans can see.

Pattern recognition is a notoriously difficult artificial intelligence problem.  Human babies, for example, can recognise faces and facial expressions from a very young age; computer programmes are only now approaching this competence.

The researchers believe that number sequence problems are only partly a matter of mathematics — psychology is important too.  For example: ‘1, 2, …, what comes next?’  Most people would say 3, but it could also be a repeating sequence like 1, 2, 1 or a doubling sequence like 1, 2, 4.  Neither of these alternatives is more mathematically correct than the others.  What it comes down to is that most people have learned the 1-2-3 pattern.
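The ambiguity can be made concrete in a few lines of code.  This is a hypothetical sketch of our own (the rule names are ours, not the researchers'): three rules that all fit the seed ‘1, 2’ yet diverge immediately.

```python
# Three equally valid rules for continuing the sequence "1, 2, ...".
# Illustrative only; not from the research itself.
rules = {
    "count up": lambda s: s[-1] + 1,      # 1, 2, 3, 4, 5
    "repeat":   lambda s: s[len(s) % 2],  # 1, 2, 1, 2, 1
    "double":   lambda s: s[-1] * 2,      # 1, 2, 4, 8, 16
}

for name, rule in rules.items():
    seq = [1, 2]
    for _ in range(3):
        seq.append(rule(seq))
    print(f"{name:8} -> {seq}")
```

No continuation here is more mathematically correct than another; picking ‘3’ requires exactly the kind of psychological bias toward humanly familiar patterns that the group builds into its programmes.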

The group is therefore using a psychological model of human pattern recognition in their computer programmes, integrating a mathematical model of human-like problem solving. The programme that solves progressive matrices scores an IQ of 100 and has the unique ability to solve the problems without access to any response alternatives. The group has improved the programme that specialises in number sequences to the point where it can now ace the tests, implying an IQ of at least 150.

The programmes are beating the conventional mathematical programmes because they combine mathematics and psychology. The method could potentially be used to identify patterns in any data with a psychological component, such as financial data, but it is less suited to finding patterns in purely physical data, such as weather data, where the human psyche is not involved.

They believe they have developed a pretty good understanding of how the IQ tests work.  The next goal is to develop new IQ tests with different levels of difficulty, and to design new types of tests that can then be used to build computer programmes for people who want to practise their problem-solving ability.

At present these research findings are unpublished.


Me, a bird-brain? Thank you for the compliment

Posted February 17, 2012 By Kevin Orrman-Rossiter

Humans move between ‘patches’ in their memory using the same strategy as bees flitting between flowers for pollen or birds searching among bushes for berries.

When faced with a memory task, we focus on specific clusters of information and jump between them like a bird between bushes. For example, when hunting for animals in memory, most people start with a patch of household pets—like dog, cat and hamster.

Then as this patch becomes depleted, they look elsewhere. They might then alight on another semantically distinct ‘patch’, for example predatory animals such as lion, tiger and jaguar.

The study, Optimal Foraging in Semantic Memory, published in Psychological Review, shows that people who stayed either too long or not long enough in one ‘patch’ did not recall as many animals as those who better judged the best time to switch between patches.

In this study, scientists from the University of Warwick and Indiana University asked 141 undergraduates (46 men and 95 women) at Indiana University to name as many animals as they could in three minutes.

The responses were then analysed using a categorisation scheme and also a semantic space model, called BEAGLE, which identifies clusters in the memory landscape based on the way words are related to one another in natural language.

They then compared the results with a classic model of optimal foraging in the real world, the marginal value theorem, which predicts how long animals will stay in one patch before jumping to another.

The similarity to animal behaviour is evident from studies on foraging behaviour.  A bird’s food tends to be clumped together in a specific patch – for example on a bush laden with berries.

But when the berries on a bush are depleted to the point where the bird’s energy is best focused on another more fruitful bush, it will move on.

This kind of behaviour is predicted by the marginal value theorem, for a wide variety of animals.
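The theorem's leaving rule can be sketched in a few lines.  This is a hypothetical toy example with invented numbers, not the model fitted in the paper: the forager stays while the patch's marginal yield exceeds the environment's long-run average rate, and leaves as soon as it drops below.

```python
# Hypothetical sketch of the marginal-value-theorem leaving rule.
# The yields and average rate are invented; the paper fits a richer
# model using BEAGLE semantic clusters and recall timings.

def leave_time(gain_per_step, avg_rate):
    """Return the first step whose marginal gain falls below the
    environment's long-run average rate: the point at which an
    optimal forager should move to another patch."""
    for t, gain in enumerate(gain_per_step):
        if gain < avg_rate:
            return t
    return len(gain_per_step)

# A depleting patch: a bush whose berries (or a memory category whose
# items) get harder to find the longer you stay.
patch = [10, 7, 5, 3, 2, 1]
print(leave_time(patch, avg_rate=4))  # leave after 3 steps (3 < 4)
```

In the memory version of the task, the ‘gain’ would be items recalled per unit time from the current semantic cluster; the study found that people whose switching behaviour matched this rule recalled the most animals.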

The conclusion therefore is that people who most closely adhered to the marginal value theorem produced more items from memory.

I hypothesise that this result also sheds light on how human attention has evolved: humans use the same strategies to forage in memory as animals do for food in the wild.

This study raises interesting ways to test the mechanisms used by those with ‘good’ memories, as opposed to those with ‘poor’ memories.  It also enticingly suggests a mechanism that could be explored for improving your own memory.


Nano-scale materials make cooler, more efficient infrared detectors

Posted February 16, 2012 By Kevin Orrman-Rossiter

Researchers have shown how infrared photodetection can be done more effectively by using certain materials arranged in specific patterns in atomic-scale structures.

This significant advance was accomplished by using multiple ultrathin layers of the materials that are only several nanometers thick. Each layer is deposited by a technique called molecular beam epitaxy.  The successive layers all form a crystal structure. These layered structures are then combined to form what are termed “superlattices.”

Photodetectors made of different crystals absorb different wavelengths of light and convert them into an electrical signal. The conversion efficiency achieved by these crystals determines a photodetector’s sensitivity and the quality of detection it provides. We read the magnitude of this current to measure infrared light intensity.

In this chain, we want all of the electrons to be collected from the detector as efficiently as possible. But sometimes these electrons get lost inside the device and are never collected.

The team’s use of the new materials reduces this loss of optically excited electrons, increasing the electrons’ carrier lifetime to more than 10 times what has been achieved by combinations of materials traditionally used in the technology, such as HgCdTe. Carrier lifetime is a key parameter that has limited detector efficiency in the past.

The unique property of the superlattices is that their detection wavelengths can be broadly tuned by changing the design and composition of the layered structures. The precise arrangement of the nanoscale materials in superlattice structures helps to enhance the sensitivity of infrared detectors.

The multidisciplinary team at Arizona State University is using a combination of indium arsenide and indium arsenide antimonide to build the superlattice structures. The combination allows devices to generate the photoelectrons necessary to provide infrared signal detection and imaging.

Another advantage is that infrared photodetectors made from these superlattice materials don’t need as much cooling. Such devices are cooled as a way of reducing the amount of unwanted current inside the devices that can “bury” electrical signals.

The need for less cooling reduces the amount of power needed to operate the photodetectors, which will make the devices more reliable and the systems more cost effective.

Researchers say improvements can still be made in the layering designs of the intricate superlattice structures and in developing device designs that will allow the new combinations of materials to work most effectively.

The advances promise to improve everything from guided weaponry and sophisticated surveillance systems to industrial and home security systems, the use of infrared detection for medical imaging and as a road-safety tool for driving at night or during sand storms or heavy fog.



Chandrayaan-1, India’s lunar water finder, close to a Moon ending

Posted February 5, 2012 By Kevin Orrman-Rossiter

India an emerging force in space exploration

Sometime later this year India’s first lunar spacecraft is expected to crash into the Moon.  Currently the 675 kg spacecraft is silently orbiting the Moon – silently, because radio contact with the craft has been lost since August 29, 2009.

Now, every two hours or so, it completes a lunar orbit 200 km or less above the lunar surface.  It has been in this orbit since May 20, 2009.  At 200 km above the lunar surface this orbit is expected to last about 1,000 days.
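That two-hour figure follows directly from Kepler's third law.  A quick sketch, using standard lunar constants and a circular-orbit approximation:

```python
import math

GM_MOON = 4902.8          # km^3/s^2, lunar gravitational parameter
R_MOON = 1737.4           # km, mean lunar radius

def orbital_period_hours(altitude_km):
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / GM)."""
    a = R_MOON + altitude_km          # semi-major axis = radius + altitude
    return 2 * math.pi * math.sqrt(a**3 / GM_MOON) / 3600.0

t = orbital_period_hours(200.0)       # roughly 2.1 hours
```

A 200 km lunar orbit comes out at a little over two hours, matching the quoted orbit time.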

In reading this you may at first be surprised to know that India has a space exploration program.  Perhaps even more surprised to realise that India had carried out a successful scientific probe to the Moon.

India today has one of the stronger emerging space programs, with a strong remote sensing satellite capability and developing lunar and human spaceflight efforts.  They see no ambiguity in the relevance of space activities to a developing nation.  The Indian authorities are convinced that “if they are to play a meaningful role nationally, and in the comity of nations, they must be second to none in the application of advanced technologies to the real problems of man and society.”

A triumph of Newtonian mechanics and Kepler’s insight

The Indian Space Research Organisation (ISRO) successfully launched Chandrayaan-1 on 22 October 2008 from the Satish Dhawan Space Centre in Sriharikota, southern India.  After a series of orbital manoeuvres the 1380 kg spacecraft entered lunar orbit on November 8, 2008.

A series of four orbit-raising manoeuvres between October 23 and October 29 had brought Chandrayaan-1 half way to lunar orbit.  These orbit changes were carried out via India’s Telemetry, Tracking and Command network at Bangalore and the Indian Deep Space Network antennas at Byalalu.

There were a number of firsts for India in this voyage.  The manoeuvre on October 25 took Chandrayaan-1 into an elliptic orbit with an apogee 74,715 km from Earth.  This was the first time an Indian spacecraft had gone beyond the 36,000 km geostationary orbit.  On October 26 the next manoeuvre took Chandrayaan-1 past the iconic 150,000 km point, entering ‘deep space’.

The two accompanying diagrams illustrate the elegance of the manoeuvres: beautiful examples of the utility of Newtonian mechanics, and superb illustrations of Kepler’s elliptical orbits in practice.
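Those manoeuvres can be sketched with the vis-viva equation, v² = GM(2/r − 1/a).  The orbit figures below are illustrative approximations, not ISRO's exact numbers, but they show the modest perigee burn each apogee-raising step requires:

```python
import math

GM_EARTH = 398600.4       # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.0          # km, equatorial radius

def visviva(r_km, a_km):
    """Orbital speed (km/s) at radius r for an orbit of semi-major axis a."""
    return math.sqrt(GM_EARTH * (2.0 / r_km - 1.0 / a_km))

# Illustrative orbit-raising step: perigee held at ~300 km altitude while the
# apogee is raised from ~37,900 km to ~74,715 km altitude (numbers approximate).
rp = R_EARTH + 300.0
ra1, ra2 = R_EARTH + 37900.0, R_EARTH + 74715.0

dv = visviva(rp, (rp + ra2) / 2) - visviva(rp, (rp + ra1) / 2)  # km/s, burned at perigee
```

A short burn of a few hundred metres per second at perigee, where the spacecraft is moving fastest, is enough to fling the apogee tens of thousands of kilometres higher – which is why the mission proceeded by repeated perigee burns rather than one long firing.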

After a fifth manoeuvre on November 4, Chandrayaan-1 entered a lunar transfer trajectory.  On November 8 it entered lunar orbit – the first time an Indian spacecraft had orbited the Moon.  Its elliptical polar orbit enabled India to lay claim to being just the fifth country, after the USSR (Luna 2, 1959), USA (Ranger 7, 1964; Apollo 8, 1968), Japan (Hiten, 1990) and China (Chang’e 1, 2007), to send a spacecraft to the Moon.  In a somewhat perverse piece of country-ism the European Space Agency, which has also reached the Moon (with a small, low-cost lunar orbital probe, SMART 1, 2003), is not counted in this tally, being a non-sovereign 17-country consortium.

On November 10 and 12, two further planned manoeuvres moved Chandrayaan-1 into its operational orbit 100 km above the surface of the Moon.  India’s inaugural lunar mission had taken the country, via a series of elegant orbital manoeuvres, from being an emerging technology force into a small club of countries with viable space aspirations.

Nehru’s lunar legacy

The cultural and political significance of Chandrayaan-1 was realised on November 14, the birthday of the late Pandit Jawaharlal Nehru, India’s first Prime Minister.  Nehru, as Prime Minister, supported and initiated the Indian space program in 1962, which grew into the Indian Space Research Organisation.

On this day (IST) the Moon Impact Probe became the first Indian-built object to reach the surface of the Moon.  The probe was a 34 kg box-shaped object containing a video imaging system, radar altimeter, and mass spectrometer.  This can be contrasted with the 899 kg Mars Science Laboratory rover Curiosity, currently on its way to Mars.

Symbolically the Indian tricolour was painted on three sides of the Moon Impact Probe.  This enables India to also lay claim to having the “Indian tricolour placed on the Moon”.  Needless to say that “placing” in this case was a hard landing in the Moon’s south polar region.

There is water on the Moon!

The scientific payload of Chandrayaan-1 comprised 11 instruments.  Five of these, including the Moon Impact Probe, were from India; the remainder were from various countries including the UK, USA and Bulgaria.  One of the great successes of this mission, from a scientific perspective, was the detection of water ice on the Moon for the first time, on August 21, 2009.

On May 20 the orbit was raised to 200 km, with the Indian Space Research Organisation declaring in a press release that “all major objectives achieved”.

This aside, the detection of hydroxyl and water was a tremendous find and also demonstrated the value of joint missions. Amongst the instruments on Chandrayaan-1 were the Moon Mineralogy Mapper (M3) and the Miniature Synthetic Aperture Radar (Mini-SAR) from NASA.  The Moon Mineralogy Mapper covered nearly 97% of the lunar surface; some of the other instruments covered more than 90%.

A detailed analysis of the data obtained from Moon Mineralogy Mapper, has clearly indicated the presence of water molecules on the lunar surface extending from the lunar poles to about 60 deg. latitude. Hydroxyl, a molecule consisting of one oxygen atom and one hydrogen atom, was also found in the lunar soil. The confirmation of water molecules and hydroxyl molecule in the moon’s polar regions raises new questions about its origin and its effect on the mineralogy of the moon.

The Moon Mineralogy Mapper measures the intensity of sunlight reflected from the lunar surface at infrared wavelengths, splitting the light into spectral bands narrow enough to reveal fine details of the surface composition. This enabled identification of various minerals on the lunar surface that have characteristic spectral signatures at specific wavelengths. Since reflection of sunlight occurs near the moon’s surface, such studies provide information on the mineral composition of only the top few millimetres of the lunar surface.

The findings from the Moon Mineralogy Mapper showed a marked signature in the 2.7 to 3.2 micron region of the absorption spectrum, a clear indication of the presence of hydroxyl (OH) and water (H2O) molecules on the surface of the moon closer to the polar regions. It was also concluded that they occur as a thin layer embedded in rocks and chemical compounds on the surface of the moon, and that the quantity is extremely small – of the order of 700 parts per million.

These molecules could have come from the impact of comets or from radiation from the sun. But the most probable source is low-energy hydrogen, carried by the solar wind, impacting minerals on the lunar surface. This forms OH or H2O molecules by deriving the oxygen from metal oxides.

Following these findings, the scientific team revisited the data from NASA’s Deep Impact mission, launched in 2005, which carried an instrument similar to the Moon Mineralogy Mapper. The Deep Impact probe observed the moon between June 2 and 9, 2009. This, along with laboratory tests on samples brought back by the Apollo missions, confirmed that the signature is genuine and that there is a thin surface mineral layer containing traces of hydroxyl and water molecules.

The Moon Mineralogy Mapper observations are further strengthened by results obtained from the analysis of archived data of lunar observation in 1999 by another NASA Mission, Cassini, on its way to Saturn. This data set also revealed clear signatures of both OH and H2O absorption features on the lunar surface.

Further to these findings, ice was detected in small polar craters (2-15km in diameter) that are not visible from the Earth.  These North polar craters have sub-surface water ice located at their base.  The interior of these craters is in permanent shadow from the Sun.  Although the total amount of ice depends on its thickness in each crater, it’s estimated there could be at least 600 million metric tons of water ice.

This water was detected using the mini-SAR instrument.  Mini-SAR is a lightweight (less than 10 kg) synthetic aperture imaging radar.  It uses the polarization properties of reflected radio waves to characterize surface properties.  This instrument could detect new craters by their internal roughness.  As well it could find craters that gave anomalous signals, that were consistent with them having water in their base.
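The key observable for Mini-SAR is the circular polarization ratio (CPR): the echo power returned in the same circular sense as transmitted, divided by the power returned in the opposite sense. Most surfaces flip the sense on reflection, so a high same-sense return is anomalous – consistent with water ice (or very rough terrain).  A toy sketch, with made-up echo powers:

```python
def circular_polarization_ratio(same_sense_power, opposite_sense_power):
    """CPR = P_same / P_opposite.  Smooth, dry regolith gives CPR well
    below 1; water ice (or very rough, blocky terrain) can push CPR above 1."""
    return same_sense_power / opposite_sense_power

# Made-up echo powers for two craters (arbitrary units, for illustration only).
typical_crater = circular_polarization_ratio(0.4, 1.0)    # dry regolith
anomalous_crater = circular_polarization_ratio(1.3, 1.0)  # ice candidate
```

The science then lies in separating the two explanations for a high CPR: craters whose interiors are anomalous while their rough ejecta are not are the ice candidates.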

Manned lunar exploration

Chandrayaan-1 showcased India’s ability to plan and deploy a lunar exploration mission.  It will be interesting to see whether they can realise their Space Vision 2025.  The major objective of the 2016 manned mission program is to develop the fully autonomous three-tonne ISRO Orbital Vehicle to carry a two-member crew to low Earth orbit and return them safely to Earth after a mission lasting from a few orbits up to two days. An extended version of the spaceship will allow flights of up to seven days, with rendezvous and docking capability with space stations or orbital platforms.

Milestones along the way include a second, unmanned Chandrayaan-2 mission to the Moon, for launch in 2012; the creation of an astronaut training programme in 2012; the launch of a manned lunar orbital mission sometime after 2020; and a manned expedition to the Moon by 2025.  Although these are all near impossible to verify at present, India’s space aspirations are without doubt real – just like the existence of Chandrayaan-1.  Their immediate future is not quite predictable – just as is Chandrayaan-1’s.



The art and science of being ‘unseen’

The art or ability to become invisible is a staple of myth, folklore and modern story.  My personal modern favourites are ‘the Ring’ in Tolkien’s Lord of the Rings, the invisibility cloak in Rowling’s Harry Potter series, the ‘experiment gone wrong’ in H. G. Wells’s The Invisible Man, and finally the Klingon ‘Bird-of-Prey’, with its Romulan-developed cloaking, in the Star Trek series.

Science fiction writer Arthur C. Clarke said that any sufficiently advanced technology is indistinguishable from magic.  Recent articles on ‘cloaking’ a 3-D object have led to the hyperbole that an invisibility cloak is here.  Closer examination of the physics shows that this technology has many applications.  However, unless you ‘see’ with microwaves, an invisibility cloak is not one of them.  The invisibility cloak is still more fantasy and science fiction than science.

Any science though, like this, that brings the world of science fiction into the realms of science-almost-fact presents an interesting cross-over of popular culture and cutting edge science.

Now you see me, now you don’t: cloaking using plasmonic meta-materials

This most recent study is good science.  The authors from the University of Texas have managed to make a 3-D object, a 30cm long cylinder, disappear from all angles, for the first time.

The most important things to note in this study are, first, that the object ‘disappeared’ at microwave wavelengths and not visible light wavelengths, and second, that it employed plasmonic materials.  The object was still visible to the observers’ eyes.

The usual approach to designing an invisibility cloak works on the basis of bending light ― using highly specific materials ― around an object that you wish to conceal, thereby preventing the light from hitting the object and revealing its presence to the eye of the observer.

When the light is bent, it engulfs the object, much like water covering a rock sitting in a river bed, and carries on its path making it seem as if nothing is there.

Metamaterials are artificially made structures with electromagnetic properties observed neither in their constituent materials nor in naturally occurring materials.

Plasmonic meta-materials are composites of metal and non-conductive synthetics made of nanometre-sized structures that are far smaller than the wavelength of the light that strikes them.  As a result, when incoming photons hit the material, they excite currents that make the light waves scatter.

The new experiment entailed making a shell of plasmonic meta-materials and placing the cylinder inside, and exposing the combination to microwaves.

Microwaves scattered by the shell ran into microwaves bounced from the object, preventing them from sending a return signal to the viewer.

When the scattered fields from the shell and the object interfere, they cancel each other out, and the overall effect is transparency and invisibility at all angles of observation.  As a result, the cloak has to be tailored to work for a given object. If one were to swap different objects within the same cloak, they would not be as effectively hidden.  On the other hand, the researchers said the shape of the object is irrelevant, saying oddly shaped and asymmetric objects can both be cloaked using this technique.

However, the idea is unlikely to work at the visible light part of the spectrum.  In principle, this technique could be used to cloak light; in fact, some plasmonic materials are naturally available at optical frequencies. However, the size of the objects that can be efficiently cloaked with this method scales with the wavelength of operation.  When applied to optical frequencies, this reduces the size of objects to micrometre (1/1000th of a millimetre) range.
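The scaling argument is simple arithmetic: the cloakable object size tracks the operating wavelength, so moving from microwaves to visible light shrinks it by roughly five orders of magnitude.  A sketch (the 3 GHz microwave frequency is an assumption for illustration):

```python
# Cloakable size scales with the operating wavelength: moving from microwaves
# to visible light shrinks the object by the ratio of the two wavelengths.
microwave_wavelength = 0.10      # m, ~3 GHz microwaves (assumed regime)
visible_wavelength = 500e-9      # m, green light

cylinder_length = 0.30           # m, the 30 cm cylinder in the experiment
scaled_size = cylinder_length * visible_wavelength / microwave_wavelength
# scaled_size lands in the micrometre range - far too small for everyday objects
```

A 30 cm cylinder at microwave frequencies corresponds to an object of only a micrometre or so at optical frequencies, which is why the technique cannot hide you from the eye.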

Materials for making the visible invisible

Meta-materials have been used to cloak in the visible region.  In 2011 researchers at the University of California, Berkeley, succeeded in a proof of concept that invisibility and other optical illusion phenomena can be achieved with visible light.  They achieved this by hiding a sub-micron sized bump by coating it with a special meta-material layer.

The carpet cloak works by concealing an object under the layers, and bending light waves away from the bump that the object makes, so that the cloak appears flat and smooth like a normal mirror.

There are still some problems that need to be solved. To achieve the cloaking effect for the full visible range, they need to employ nanofabrication techniques to make the required metamaterials. Today, these are still difficult and time consuming processes, and therefore, the cloak the team demonstrated is very small, 6 micrometers wide and 300nm tall – barely enough to cloak a red blood cell.

Besides rendering objects invisible and creating other optical illusions, the use of meta-materials to control the flow of light can have many applications in the fields of energy, medical imaging, information technology, etc. Examples are using transformation optics and meta-materials for better use of solar light in energy devices; for elimination of sources of noise in imaging and microscopy; and for controlling on-chip propagation of optical signals.

Some months ago it was reported that researchers at the University of Birmingham had made a paper clip invisible.  They achieved this by bouncing the light around the clip using a naturally formed calcite crystal.  The Telegraph, London, reported:

Dr Shuang Zhang, a physicist and lead investigator on the University of Birmingham team, said: “This is a huge step as, for the first time, the cloaking area is rendered at a size that is big enough for the observer to ‘see’ the invisible object with the naked eye.

Invisible paper-clip

“By using natural crystals for the first time, rather than artificial meta-materials, we have been able to scale up the size of the cloak and can hide larger objects, thousands of times bigger than the wavelength of the light.”

The new technique is limited only by the size of the naturally formed crystals.

“We believe that by using calcite, we can start to develop a cloak of significant size that will open avenues for future applications of cloaking devices,” Dr Zhang said.

Planes, tanks and Bird-of-Prey

Why isn’t this all a little easier?  The answer lies in what it means to make something invisible.  If you are trying to make a plane or ship invisible (or even to hide your car from a speed camera) then you are talking about making it invisible at long wavelengths – radar and microwave.

Radar observation works by transmitting waves and then receiving the waves scattered by the object.  You can see the relevance of the experiments we first discussed.

If we are trying to detect a large mechanical object, such as a tank, we would look for its heat signature.  This means looking in the infra-red region of the spectrum.  In this case making something invisible means hiding the heat emitted.  One such way is the water-jacket “Adaptiv” method developed for tanks by BAE Systems.

In the case of ships you may also want to cloak them from magnetic signals.  Spanish researchers have designed what they believe to be a new type of magnetic cloak, which shields objects from external magnetic fields, while at the same time preventing any magnetic internal fields from leaking outside, making the cloak undetectable.

If we are trying to cloak at visible wavelengths the hurdles are quite large.  We see objects via scattered light that can come from many diffuse sources and across a spectrum.  To not see an object, the background behind it must blend into our field of view on either side – a formidable task.  In physical terms we must bend the light around the object from all directions.

The tricky bit is manipulating the speed of the light differently for the different wavelengths to ensure that the image is seen completely by the observer.  This tricky manipulation has only just been solved – in theory.  This is also for a sphere.  Still it is a conceptual step forward in understanding how an object may be cloaked, from visible sight, in a sphere.  Not sure how we move the object or the sphere, one for the science-fiction story to pose and scientists to explore.

Technology rather than magic.  In fiction a hobbit slips on a ring to exit his birthday party.  We are a while away from tapping an app on our smartphones to make our smart clothing invisible to unwanted visitors.  We are, however, quite close to employing cloaking technologies in a variety of other applications.


The curious science of life in space

Posted December 31, 2011 By Kevin Orrman-Rossiter

“Packing for Mars: the curious science of life in space” by Mary Roach, 2010, Oneworld Books, Oxford UK, ISBN 978-1-85168-823-4.

Reading this book was a little like watching street opera designed by a fifteen-year-old male.  In street opera you get just the arias – opera with all the boring bits removed.  Or so the publicity will proclaim.  It’s great fun.  Opera by sound bite.  This book is a little like that – the good songs and plenty of added comic asides featuring puking, excreta and sex.

The title “Packing for Mars” promised much.  It delivered little on Mars.  It presented a wealth of humorous scatological titbits on travelling in space, telling you more than you need to know on many topics.  At the same time the book falls far short of, or ignores, topics that I expected – given the title.

I did find the humour in the book a refreshing change.  Science is a serious subject – ask any scientist.  Finding a well-researched, book-length piece of science communication is worth noting and appraising.  If you are a science communicator – whether a non-fiction or fiction author, or a corporate or government communicator – the humour is worth noting.  Ask yourself: “How can I put some of that into my writing?”

Mary Roach has developed a readable style.  It follows the breathy-journalist-writing-a-non-fiction-book formula.  Each chapter is well researched, has the requisite entertaining interview pieces, and is a self-contained essay.  Each chapter covers its subject well and ends with a one-sentence link that catapults you into the next.  I romped through this book over a few days, admittedly skimming some pages as I went.

Roach covers many subjects that are not normally discussed in works on space travel.  Bathing (or not), eating and defecating in space are all explored in great historical detail.  If you were ever puzzled by these aspects of space travel – from the early animal flights of Laika, Belka and Strelka, through the Gemini and Apollo missions, to the recent orbiting space station and space shuttle missions – then read this book.

There are chapters on many aspects of space travel, some obvious and some novel.  Not sure how astronauts are picked now, or how they were picked in the past?  What sort of food is edible on a space mission?  How comfortable is a space suit – for two days, or the 14 days of a Gemini mission – while setting up intricate scientific experiments?  What interesting personality and psychological manifestations does the isolation of space travel bring out?  From the short Mercury missions through to the longer space station stays, Roach puts it all out for observation and comment.

How do eating, sleeping, working, puking and defecating (yes, there are specific chapters on this) work in zero and low gravity?  How effective and efficient is the human operating under these conditions?  What were the early medical concerns about humans operating without gravity, travelling at high speeds and high accelerations, or even being separated from the earth?

There is one important element that I appreciate greatly in this book.  Without belittling the personalities involved, Roach makes them all so much more human.  There are plenty of interview and mission transcript excerpts to illustrate many of the points.  Whether astronaut, scientist, engineer or other participant in space exploration they all become part of a more human enterprise.  This achievement alone is worth acknowledgement and appreciation.

I was disappointed to not find more on Mars.  There was little if anything on the particulars of a Mars mission.  What are the attributes that would make this really different to the Apollo moon missions?  What could the purpose of such missions be?  What are the human fascinations with Mars, as compared to Venus?  What is the physiological impact of interplanetary radiation?  How may the space agencies mitigate this?  What would the best physical and psychological attributes be for a Mars mission?  What have we learned from the many robotic missions to Mars?  In my mind all of these need at least be acknowledged, if not discussed, in a book entitled “Packing for Mars”.

I found the book both engrossing and annoying.  It is worth reading, even if it does not live up to its title.



Curiosity about life on Mars

Posted December 11, 2011 By Kevin Orrman-Rossiter

On July 20, 1969 Neil Armstrong uttered one of the most remembered quotes of the 20th century, “That’s one small step for man, one giant leap for mankind….”

Millions of people heard these words as they watched, via grainy black and white television images, Neil Armstrong step from the landing pad of the Lunar Module Eagle onto the surface of the Moon.

With that single step Neil Armstrong became the first human to set foot on a celestial body other than the Earth. He was joined 14 minutes later by Buzz Aldrin.

To date, 12 people have set foot on the Moon. The last Apollo mission was Apollo 17 in December 1972; Apollo 18, 19 and 20 were scrapped due to rising costs.

Curiosity about life on Mars

At 2:21am (AEDT) on November 26, 2011, the Mars Science Laboratory launched from Cape Canaveral in Florida. The payload of this spaceship is not humans. Rather, it’s a remarkable 899 kilogram, six-wheeled Martian rover named Curiosity.

The purpose of Curiosity is to determine the habitability of Mars. The mission, lasting one Martian year (98 Earth weeks), will begin on August 5, 2012. It is of scientific significance and perhaps even of human significance.

Curiosity will carry out the prospecting stage in a step-by-step program of exploration, reconnaissance, prospecting and mining evidence for a definitive answer to the question: “Has life existed on Mars?”

There are three conditions that are considered crucial for habitability.  They are: liquid water, the other chemical ingredients utilized by life (such as nitrogen, phosphorus, sulfur and oxygen), and finally a source of energy.

The landing site for Curiosity, the Gale Crater, was chosen to maximize the chances of answering this question. It was identified as having mineral evidence of a wet history by both NASA’s Mars Reconnaissance Orbiter and the European Space Agency’s Mars Express.

The Gale Crater provides a variety of accessible features for study.  Included are clays and sulfate-rich deposits which are good at latching onto organic chemicals and protecting them from oxidation.  There are also features that will shield any organic chemicals from the natural radiation.

Natural radiation levels on Mars are higher than on Earth, due to its lack of a screening atmosphere.  The site offers rocks that have been exposed by recent small-crater impacts.  One capability of Curiosity’s science payload is to look for these organic chemicals: the carbon-based building blocks of life.

A Tweetup comes to Canberra

Many people will remember what they were doing on “the day that man landed on the moon.”  I raced home from primary school to watch the moon-walk live.  As an adult I pursued science, physics in my case, as a profession.  Many others fired by childhood dreams of being an astronaut or inspired by the astronaut program became scientists and engineers.

I also will remember where I was when the Mars Science Laboratory launched.  I was at the Canberra Space Centre in Tidbinbilla attending the #CSIROTweetup.  This was a group of 50 science ‘geeks’ who headed to the Australian capital city, Canberra, to tweet about the launch.

NASA has made a tradition of Tweetups to cover its most recent launches.  The idea was transplanted and organized in Australia by Vanessa (@NessyHill) Hill from CSIRO in Townsville, who had attended the ISS and GRAIL Tweetups in the US.  The event in Canberra was hosted by Glenn (@DSNCanberra) Nagle from the CSIRO-run Canberra Deep Space Communication Complex.  This is a NASA tracking station that is managed on NASA’s behalf by CSIRO.

I found the weekend a most enjoyable experience, mixing with what can only be described as an eclectic group of people, whose joint characteristic was an interest in space exploration.  A number had also attended previous #NASATweetups, and at least one will attend the next launch as well!  The idea of a Mars-landing Tweetup was discussed with great enthusiasm.

I found three moments particularly memorable.  The first was the silence at the precise moment of the launch.  The clatter of tweeting keyboards had ceased and I’m sure the room full of observers had collectively stopped breathing.  The launch was greeted with an intake of breath and elated cheering.

The second was sheltering from drizzling rain under a tin car-port.  The group had walked here to near the DSS34 antenna to watch for the spacecraft to come over our horizon.  It was hoped that we might see the final burn of the Centaur main engine and the separation of the spacecraft.  This final maneuver lofts the spacecraft out of Earth orbit and on its way to Mars.  The antennas, DSS34 and DSS45, at Tidbinbilla are the first of NASA’s Deep Space Network to receive communication from the spacecraft.

Cloud obscured any possibility of seeing the burn phase.  It was rather profound, though, watching the dish antennas: both tracking dishes ‘staring’ at the horizon, then synchronously, slowly tracking the spacecraft as it rose above our horizon and crossed the night sky on its 210-day cruise phase to Mars.

The Canberra site’s three antennas have a constant 24-hour schedule of communicating with all manner of deep space probes.  My third moment was the realization that Voyager 1 was part of that schedule whilst I was there.  Imagine this: Voyager 1 was launched in 1977 and is currently at the outer reaches of our solar system, and there we were picking up its data signals – an engineering and science wonder!

Finding a place for humans on Mars

There is much to get excited about this Mars mission.  Not only from the ‘normal science’ arising from the explorations.  It is a large incremental step into uncovering profound insights into Mars’ past and present environments.  It is a crucial next step in the program strategy towards missions that ultimately return soil and rock samples to Earth.

For those intrigued by engineering there are two things of note.  Firstly the virtuosity of the “sky crane” descent of the lander and secondly the Mars Science Laboratory rover Curiosity.

The lander enters the Mars atmosphere initially much like the Apollo missions re-entered Earth’s atmosphere.  It uses atmospheric braking followed by a parachute descent.  Then, at about 1.6 km above touchdown, the parachute shell separates.

A rocket-powered descent stage lowers the rover to within 20 meters of the surface.  The descent stage then deploys the rover to touchdown on a ‘sky-hook’ of nylon cords.  When touchdown is detected the descent stage continues, under power, past the rover touchdown area.  There was much discussion at the #CSIROTweetup about this novel landing.  I think there will be many, the NASA engineers included, who will be holding their breath at this crucial point.

The rover, Curiosity, for the first time lands on its own wheels.  Curiosity is then ready to begin characterizing the landing site, conduct health checks of various systems and start taking weather measurements.  The last two Mars rovers, Spirit and Opportunity, landed cushioned by bouncing 'airbags'; I find the 'sky-hook' a far more elegant and, hopefully, gentler landing maneuver.

During the descent I am looking forward to the view from the Mars Descent Imager.  For the final few minutes of Curiosity’s flight to the surface there will be full-color imaging of the ground.  This will provide all wannabe astronauts a real-time experience of riding a spacecraft to a landing on Mars.

For me there is special interest in one group of seemingly trivial experiments.  Curiosity will record information about daily and seasonal changes in Martian weather.  These instruments make up the Rover Environmental Monitoring Station.  The team plans on posting daily weather reports from Curiosity.

Information about Martian wind, temperatures, and humidity will provide a way to improve and verify atmosphere modeling of Mars.  For the first time the full ultra-violet spectrum of radiation will also be measured.  This will strengthen the understanding about the global atmosphere of Mars.

This all contributes to the mission’s evaluation of habitability.  These are all crucial steps towards the ultimate missions that land humans on Mars.

This station also showcases some of the international involvement in the science payload of Curiosity.  The principal investigators are from the Spanish Centre for Astrobiology and the Spanish National Research Council, whilst the Finnish Meteorological Institute developed the station's pressure sensor.  Other nations included in the science payload collaborations include Russia, Canada, and France.

Space exploration post the Apollo era

The space programs that followed did little to fire the public imagination.  The International Space Station, Skylab and Space Shuttle programs were great feats of engineering and diplomacy.

These missions were intended to maximize the scientific aspects of each journey.  Ground-breaking research has been carried out on zero-gravity science and engineering, how to build and maintain non-terrestrial habitats, and the intricacies of space travel.  Much has been learned about the psychology and physiology of the extended periods of isolation and gravity-free existence that will be necessary for interplanetary exploration.

Despite all these achievements these missions did not seem to generate the same levels of public inspiration as the Apollo moon-shots.  They have generated significant professional science and engineering interest, and there is a core following of 'space geeks'.  This is evidenced, for example, by the crowd that watched the final shuttle launch and those participating in #NASATweetups and #CSIROTweetups.

There is also a small but growing commercial interest in spaceflight.  Virgin Galactic and ShareSpace provide some of the alternative ideas for commercial human spaceflight.  For US$200,000 you can book a space flight aboard the Virgin Galactic SpaceShipTwo.  Meanwhile Space Exploration Technologies, or SpaceX, will launch its Dragon spacecraft on its second Commercial Orbital Transportation Services demonstration flight on Feb. 7, 2012. Pending completion of final safety reviews, testing and verification, SpaceX might also send Dragon to rendezvous with the International Space Station.

At the same time there are the established space aspirations of the Europeans, Russia and Japan.  Then there is the emerging push by the Chinese, and it would be foolish to discount others, by India for example.

Human exploration of Mars may be just what is required to provide that next inspirational step for mankind.  With the current pace and breadth of space exploration, there is somewhere on Earth a 9- to 15-year-old who will in all probability become the first human to step onto Mars.  I expect to remember where I was on that memorable day.

I like to think that Mars exploration will fire a new generation to great dreams and achievements.
